Newton's Method
Finding the root of a non-linear function
Newton's method is used to find the roots of a function $f(x)$. It uses a Taylor series expansion for that. It's an iterative process. Let $x_0$ be the initial starting point. Let's say the next point is $x_1 = x_0 + \Delta x$, which is a root of $f$. Then $f(x_1) = f(x_0 + \Delta x) \approx f(x_0) + \Delta x\, f'(x_0) = 0$. Which means $\Delta x = -\frac{f(x_0)}{f'(x_0)}$, giving the update $x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}$.
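A minimal sketch of this iteration in Python (the example function, derivative, starting point, and tolerance are arbitrary choices for illustration):

```python
def newton_root(f, f_prime, x0, tol=1e-10, max_iter=50):
    """Find a root of f by iterating x_{n+1} = x_n - f(x_n) / f'(x_n)."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / f_prime(x)
        x = x - step
        if abs(step) < tol:  # stop once the update becomes negligible
            break
    return x

# Example: the positive root of f(x) = x^2 - 2 is sqrt(2)
print(newton_root(lambda x: x**2 - 2, lambda x: 2 * x, x0=1.0))  # ~1.4142135...
```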
Finding the minimum of a function
Let's say you have a function $f(x)$ whose minimum you want to find; this means $f'(x) = 0$ at the minimum. Hence you can just use Newton's method to find a root of $f'(x)$. This will find the minimum of $f$ if $f''(x) > 0$ (otherwise the root could be a maximum or a saddle point).
So $f(x + \Delta x) \approx f(x) + \Delta x\, f'(x) + \frac{1}{2}\, \Delta x^2\, f''(x)$.
Now we want $\frac{d}{d\,\Delta x} f(x + \Delta x) = f'(x) + \Delta x\, f''(x) = 0$.
Therefore $\Delta x = -\frac{f'(x)}{f''(x)}$, i.e. $x_{n+1} = x_n - \frac{f'(x_n)}{f''(x_n)}$. In the multivariate case this becomes $x_{n+1} = x_n - H^{-1}(x_n)\, \nabla f(x_n)$, where $H$ is the Hessian.
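A minimal sketch of the multivariate update in Python (the test function, starting point, and tolerance are arbitrary choices for illustration; `np.linalg.solve` is used instead of explicitly forming the inverse Hessian):

```python
import numpy as np

def newton_minimize(grad, hess, x0, tol=1e-10, max_iter=50):
    """Minimize f by iterating x_{n+1} = x_n - H(x_n)^{-1} grad f(x_n)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        step = np.linalg.solve(hess(x), grad(x))  # same as H^{-1} grad, without forming the inverse
        x = x - step
        if np.linalg.norm(step) < tol:  # stop once the update becomes negligible
            break
    return x

# Example: f(x, y) = x - log(x) + (y - 2)^2, minimum at (1, 2)
grad = lambda v: np.array([1 - 1 / v[0], 2 * (v[1] - 2)])
hess = lambda v: np.array([[1 / v[0] ** 2, 0.0], [0.0, 2.0]])
print(newton_minimize(grad, hess, x0=[0.5, 0.0]))  # -> approximately [1. 2.]
```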
If the function is actually quadratic then the above method will give you the minimum in exactly one iteration.
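For example, for a one-dimensional quadratic $f(x) = \frac{a}{2}x^2 + bx + c$ with $a > 0$, a single step from any starting point $x_0$ gives

$$x_1 = x_0 - \frac{f'(x_0)}{f''(x_0)} = x_0 - \frac{a x_0 + b}{a} = -\frac{b}{a},$$

which is exactly the minimizer.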
It has quadratic convergence near the solution.
Calculating the inverse of the Hessian is a very computationally expensive operation, which makes Newton's method slow for high-dimensional problems.
It can be very sensitive to the initialization of the algorithm.
Connection with Gradient Descent
Gradient descent works by replacing the function at a point with its linear approximation at that point (using the gradient of the function) and then moving along the negative gradient. But the gradient only gives the direction to move, not the magnitude of the movement; for that we use a step size, which determines how far to move. If the step size is too big we might overshoot the actual minimum and oscillate; if it is too small it might take too many iterations to reach the minimum.
Now we can see $H^{-1}$ (the inverse Hessian, i.e. the inverse of the curvature) as the step size. If the curvature is high, the gradient is changing quickly, so we need to take small steps; conversely, if the curvature is low we can take big steps. Hence $H^{-1}$ is basically a smart/adaptive step size based on the curvature of the function.
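A small sketch of this contrast (the quadratic and the step size below are arbitrary choices for illustration): on a badly scaled quadratic, a fixed step size that is safe for the high-curvature direction is very slow along the low-curvature direction, while the Newton step $H^{-1} \nabla f$ rescales each direction by its curvature.

```python
import numpy as np

# f(x) = 0.5 * x^T A x, with curvature 100 along x1 and 1 along x2
A = np.diag([100.0, 1.0])
grad = lambda x: A @ x

x_gd = np.array([1.0, 1.0])
x_newton = np.array([1.0, 1.0])

eta = 1.0 / 100.0  # fixed step size, limited by the highest curvature
for _ in range(100):
    x_gd = x_gd - eta * grad(x_gd)  # gradient descent: same step size in every direction

x_newton = x_newton - np.linalg.solve(A, grad(x_newton))  # one Newton step: H^{-1} grad

print(x_gd)      # [0.    0.366...] -- the low-curvature direction has barely converged
print(x_newton)  # [0. 0.] -- exact minimum in one step (f is quadratic)
```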
Resources
Thoughts