Machine learning articles

In this article we will see how a neural network is defined, with a focus on feed-forward neural networks. After defining them, we will outline their main properties as universal approximators.
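As a minimal sketch of what such a network computes, here is a single-hidden-layer feed-forward pass in NumPy. The layer sizes, the ReLU activation, and all variable names are illustrative assumptions, not anything fixed by the article.

```python
import numpy as np

def relu(x):
    # Elementwise nonlinearity: max(0, x)
    return np.maximum(0.0, x)

def forward(x, W1, b1, W2, b2):
    # A feed-forward network: affine map, nonlinearity, affine map.
    h = relu(W1 @ x + b1)  # hidden representation
    return W2 @ h + b2     # network output

rng = np.random.default_rng(0)
x = rng.standard_normal(3)                          # 3 input features
W1, b1 = rng.standard_normal((4, 3)), np.zeros(4)   # hidden layer of width 4
W2, b2 = rng.standard_normal((2, 4)), np.zeros(2)   # 2 outputs
y = forward(x, W1, b1, W2, b2)
print(y.shape)  # (2,)
```

Stacking more affine-plus-nonlinearity layers in the same way is what gives these networks their approximation power.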

In this article we will see what it means to train a neural network: namely, modifying its parameters so as to diminish the value of the loss function. This process is made possible by the backpropagation algorithm, which tells us, from the data, exactly how the network’s parameters need to be changed.
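The idea of "modifying parameters to diminish the loss" can be sketched with gradient descent on a toy one-parameter model. The data, learning rate, and iteration count below are illustrative assumptions; the gradient is computed analytically rather than by backpropagation since the model has a single layer.

```python
import numpy as np

# Toy data: y = 2x plus small noise; model y_hat = w * x, squared-error loss.
rng = np.random.default_rng(1)
x = rng.standard_normal(100)
y = 2.0 * x + 0.01 * rng.standard_normal(100)

w, lr = 0.0, 0.1
for _ in range(50):
    y_hat = w * x
    grad = np.mean(2 * (y_hat - y) * x)  # dL/dw, computed from the data
    w -= lr * grad                       # step against the gradient
print(w)  # close to the true slope 2.0
```

Training a real network follows the same loop; backpropagation is what supplies the gradient for every parameter at once.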

The backpropagation algorithm is the engine of neural network training. In this article we will look into the equations that make up backpropagation and prove them, showing that the algorithm is essentially a careful application of the chain rule for differentiating a composite function.
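The chain-rule view can be made concrete on a tiny composite function, differentiated factor by factor from the output inward and checked against finite differences. The specific function (a sigmoid unit with squared error) and all names are illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss_and_grad(w, x, t):
    # Forward pass: z = w*x, a = sigmoid(z), loss = (a - t)^2
    z = w * x
    a = sigmoid(z)
    loss = (a - t) ** 2
    # Backward pass: chain rule, one factor per layer of the composition
    dL_da = 2 * (a - t)   # derivative of the loss w.r.t. the activation
    da_dz = a * (1 - a)   # derivative of the sigmoid
    dz_dw = x             # derivative of the linear map
    return loss, dL_da * da_dz * dz_dw

w, x, t = 0.5, 1.5, 1.0
loss, grad = loss_and_grad(w, x, t)

# Sanity check against a central finite difference
eps = 1e-6
num = (loss_and_grad(w + eps, x, t)[0] - loss_and_grad(w - eps, x, t)[0]) / (2 * eps)
print(abs(grad - num))  # tiny: the analytic gradient matches
```

Backpropagation organizes exactly this multiplication of local derivatives, reusing intermediate factors so that every parameter's gradient is obtained in one backward sweep.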

In this article we will see that a kernel method is simply a method by which we can “transport” a non-linear problem into a higher-dimensional space and compute scalar products in that space. The kernel trick allows us to perform all the calculations without ever representing points in that space explicitly.
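A small sketch of the trick: for the degree-2 polynomial kernel on 2-D inputs, the kernel value equals a scalar product in an explicit 3-D feature space, yet computing it never requires building that space. The feature map `phi` below is written out only to verify the identity; the kernel choice is an illustrative assumption.

```python
import numpy as np

def kernel(x, y):
    # Homogeneous polynomial kernel of degree 2: k(x, y) = (x . y)^2
    return np.dot(x, y) ** 2

def phi(x):
    # Explicit feature map for 2-D inputs; the kernel makes this unnecessary.
    x1, x2 = x
    return np.array([x1 ** 2, np.sqrt(2) * x1 * x2, x2 ** 2])

x = np.array([1.0, 2.0])
y = np.array([3.0, -1.0])
print(kernel(x, y), np.dot(phi(x), phi(y)))  # the two scalar products agree
```

For higher degrees or the RBF kernel, the implicit feature space grows huge or infinite-dimensional, which is precisely why evaluating the kernel directly is so useful.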