# Univariate Newton-Raphson method

This notebook introduces Newton's method on a univariate, i.e. single-variable, optimization example where the objective is to compute $f(x)=\sqrt[b]{x}$ for a constant exponent $b\in\mathbb{N}$. This scenario suits numerical optimization because the inverse $f^{-1}(x)=x^b$ is a simple power function, so a solution can be approached incrementally.
Note: The analytical solution $f(x)=\sqrt[b]{x}=x^{1/b}$ is straightforward and more elegant when implemented as x**(1/b), since it requires no additional libraries; it should always be preferred over the jaunt below.

last update: 30/09/2020

## Cost function

Optimization generally involves the definition of a cost (or loss) function prior to the actual minimization, which assesses each intermediate candidate $x_k, \, k \in \mathbb{N}$ by quantifying how much it deviates from the point of convergence. We employ the squared loss here as it is differentiable; it is given by

$$L(x_k)=\left(f^{-1}(x_k)-x\right)^2=\left(x_k^b-x\right)^2$$

where $x$ represents the requested input which is constant.
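Since the original code cell is not shown here, a minimal sketch of the squared loss could read as follows (the function and parameter names `loss`, `xk`, `x`, `b` are illustrative):

```python
def loss(xk, x, b):
    """Squared loss: how far the candidate xk is from the b-th root of x."""
    return (xk ** b - x) ** 2
```

For example, `loss(2, 8, 3)` evaluates to `0`, since $2$ is exactly the cube root of $8$.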

Twofold differentiability of the loss function is an important requirement for the Newton method. The first-order derivative of our cost function is obtained from the chain rule and written as

$$\frac{\partial}{\partial x_k} L(x_k)=2b\,x_k^{b-1}\left(x_k^b-x\right)=2b\left(x_k^{2b-1}-x\,x_k^{b-1}\right)$$

Similarly, differentiating once more, we obtain the second-order derivative given by

$$\frac{\partial^2}{\partial x_k^2} L(x_k)=2b\left((2b-1)\,x_k^{2b-2}-(b-1)\,x\,x_k^{b-2}\right)$$
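A direct translation of the two derivatives into code could look as follows (again a sketch with illustrative names):

```python
def d_loss(xk, x, b):
    """First-order derivative of the squared loss, via the chain rule."""
    return 2 * b * (xk ** (2 * b - 1) - x * xk ** (b - 1))

def dd_loss(xk, x, b):
    """Second-order derivative of the squared loss."""
    return 2 * b * ((2 * b - 1) * xk ** (2 * b - 2)
                    - (b - 1) * x * xk ** (b - 2))
```

A quick sanity check against central finite differences of the loss confirms both expressions.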

## Newton-Raphson method

Using Newton's method, we aim to converge to a stationary point of the loss, i.e. $L'(x_k)=0$, which may be achieved via a second-order Taylor expansion yielding the update

$$x_{k+1} = x_k - \frac{L'(x_k)}{L''(x_k)}=x_k-\left(\frac{\partial}{\partial x_k} L(x_k)\right)\left(\frac{\partial^2}{\partial x_k^2} L(x_k)\right)^{-1}$$

which is implemented hereafter, using a tolerance value and a maximum number of iterations as break conditions.
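One possible self-contained implementation of this iteration is sketched below. The starting point `max(x, 1.0)` is an assumption of this sketch: it places the first iterate above the root, where the loss is convex, so the iteration cannot drift toward the spurious stationary point at $x_k=0$.

```python
def newton_root(x, b, tol=1e-10, max_iter=100):
    """Approximate x**(1/b) by Newton's method on L(xk) = (xk**b - x)**2.

    Break conditions: the update step falls below `tol`, or `max_iter`
    iterations are reached.
    """
    xk = max(x, 1.0)  # start above the root, in the convex region of L
    for _ in range(max_iter):
        d1 = 2 * b * (xk ** (2 * b - 1) - x * xk ** (b - 1))        # L'(xk)
        d2 = 2 * b * ((2 * b - 1) * xk ** (2 * b - 2)
                      - (b - 1) * x * xk ** (b - 2))                 # L''(xk)
        step = d1 / d2
        xk -= step
        if abs(step) < tol:
            break
    return xk
```

Calling `newton_root(2.0, 2)`, for instance, approximates $\sqrt{2}\approx 1.4142$.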

For the sake of benchmarking, the binary search (bisection) algorithm is used in the following.
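A bisection baseline can be sketched as follows, searching for the sign change of $g(x_k)=x_k^b-x$ on the interval $[0, \max(x, 1)]$ (the interval choice is an assumption of this sketch, guaranteeing the root is bracketed):

```python
def bisect_root(x, b, tol=1e-10, max_iter=100):
    """Approximate x**(1/b) by bisection on g(xk) = xk**b - x."""
    lo, hi = 0.0, max(x, 1.0)  # the b-th root of x lies in this interval
    for _ in range(max_iter):
        mid = 0.5 * (lo + hi)
        if mid ** b < x:
            lo = mid       # root lies in the upper half
        else:
            hi = mid       # root lies in the lower half
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)
```

Each iteration halves the bracketing interval, so the error shrinks linearly (one bit of accuracy per step), in contrast to Newton's locally quadratic convergence.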

## Validation

The requested input $x\in\mathbb{R}_+$ for root computation and the respective exponent $b\in\mathbb{N}$ are set hereafter and can be varied to see their effects on the computational process. From the plot below, it can be observed that the Newton method generally converges to a more accurate result within fewer iterations, whereas the binary search yields good first approximations that oscillate around the convergence point $\sqrt[b]{x}$.
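The qualitative comparison can be reproduced without plotting by tracking the per-iteration error of both methods; the following self-contained sketch (function names are illustrative) prints the absolute error of each method per iteration:

```python
def newton_errors(x, b, n_iter=20):
    """Absolute error |xk - x**(1/b)| after each Newton update."""
    target = x ** (1.0 / b)
    xk, errs = max(x, 1.0), []
    for _ in range(n_iter):
        d1 = 2 * b * (xk ** (2 * b - 1) - x * xk ** (b - 1))
        d2 = 2 * b * ((2 * b - 1) * xk ** (2 * b - 2)
                      - (b - 1) * x * xk ** (b - 2))
        xk -= d1 / d2
        errs.append(abs(xk - target))
    return errs

def bisect_errors(x, b, n_iter=20):
    """Absolute error of the interval midpoint after each bisection step."""
    target = x ** (1.0 / b)
    lo, hi, errs = 0.0, max(x, 1.0), []
    for _ in range(n_iter):
        mid = 0.5 * (lo + hi)
        if mid ** b < x:
            lo = mid
        else:
            hi = mid
        errs.append(abs(mid - target))
    return errs

x, b = 2.0, 2  # vary these to explore the behaviour
for k, (en, eb) in enumerate(zip(newton_errors(x, b), bisect_errors(x, b))):
    print(f"iter {k:2d}  newton {en:.2e}  bisection {eb:.2e}")
```

Newton's error collapses to machine precision within a handful of iterations, while the bisection error decreases by roughly one binary digit per step.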