Univariate optimisation deals with finding the minimum or maximum of a function of a single variable, L(θ) with θ ∈ ℝ. While simpler than multivariate optimisation, it forms the foundation for many advanced optimisation techniques.
Mathematical Formulation
For a univariate function L(θ), where θ ∈ ℝ, we want to find:
- θ* = argmin_θ L(θ) for minimisation, or
- θ* = argmax_θ L(θ) for maximisation
Optimality Conditions
Necessary Condition
For a smooth function L(θ), a necessary condition for an extremum (minimum or maximum) is:
L′(θ*) = 0
This means the derivative of the function at the optimal point equals zero.
Sufficient Conditions
To determine whether a stationary point θ* (where L′(θ*) = 0) is a minimum, maximum, or neither:
Suppose L′(θ*) = L″(θ*) = … = L⁽ⁿ⁻¹⁾(θ*) = 0, but L⁽ⁿ⁾(θ*) ≠ 0. Then:
- If n is even and L⁽ⁿ⁾(θ*) > 0, then θ* is a minimum
- If n is even and L⁽ⁿ⁾(θ*) < 0, then θ* is a maximum
- If n is odd, then θ* is neither a minimum nor a maximum (inflection point)
For most practical cases, checking the second derivative is sufficient:
- If L′(θ*) = 0 and L″(θ*) > 0, then θ* is a minimum
- If L′(θ*) = 0 and L″(θ*) < 0, then θ* is a maximum
- If L′(θ*) = 0 and L″(θ*) = 0, higher-order derivatives must be checked
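The second-derivative test above can be sketched numerically with central finite differences; the function L(θ) = θ³ − 3θ below is an illustrative example (not from the text), with stationary points at θ = ±1:

```python
def d1(f, x, h=1e-5):
    # Central-difference approximation of the first derivative
    return (f(x + h) - f(x - h)) / (2 * h)

def d2(f, x, h=1e-4):
    # Central-difference approximation of the second derivative
    return (f(x + h) - 2 * f(x) + f(x - h)) / h**2

def classify(f, theta, tol=1e-6):
    """Classify a stationary point using the second-derivative test."""
    assert abs(d1(f, theta)) < 1e-3, "not a stationary point"
    s = d2(f, theta)
    if s > tol:
        return "minimum"
    if s < -tol:
        return "maximum"
    return "inconclusive (check higher-order derivatives)"

L = lambda t: t**3 - 3*t
print(classify(L, 1.0))   # minimum
print(classify(L, -1.0))  # maximum
```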
Analytical Methods
In some cases, we can solve for the optimum analytically:
- Find all points where L′(θ) = 0
- Check the second derivative to determine whether each point is a minimum or maximum
- Compare function values at these points and at the boundaries to find the global optimum
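The three steps above can be sketched on a hypothetical example, L(θ) = θ⁴ − 2θ² on the domain [−2, 2]; the stationary points are solved by hand in the comments:

```python
# Step 1: L′(θ) = 4θ³ − 4θ = 0  →  θ ∈ {−1, 0, 1} (solved by hand here)
# Step 2: L″(θ) = 12θ² − 4  →  L″(±1) = 8 > 0 (minima), L″(0) = −4 < 0 (maximum)
# Step 3: compare values at stationary points and at the domain boundaries
L = lambda t: t**4 - 2 * t**2
candidates = [-1.0, 0.0, 1.0, -2.0, 2.0]  # stationary points + boundaries
best = min(candidates, key=L)
print(best, L(best))  # → -1.0 -1.0  (global minimum L = −1 at θ = ±1)
```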
Example: Projectile Motion
Consider the problem of finding the launch angle θ that maximizes the range of a projectile:
- Range function (launch speed v₀, launch height h, gravity g): R(θ) = (v₀ cos θ / g)(v₀ sin θ + √(v₀² sin²θ + 2gh))
- The analytical solution requires solving dR/dθ = 0
- When h = 0 (flat ground), the optimal angle is θ* = 45° (π/4 rad)
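A quick numerical cross-check of the flat-ground result, assuming the standard range-from-height formula (the parameter names v0, h, g are illustrative):

```python
import math

def projectile_range(theta, v0=10.0, h=0.0, g=9.81):
    # R(θ) = (v0·cosθ/g)·(v0·sinθ + sqrt(v0²·sin²θ + 2gh)), launch height h
    s = v0 * math.sin(theta)
    return (v0 * math.cos(theta) / g) * (s + math.sqrt(s * s + 2 * g * h))

# Crude grid search over angles in (0, π/2) for flat ground (h = 0)
angles = [i * math.pi / 2000 for i in range(1, 1000)]
best = max(angles, key=projectile_range)
print(math.degrees(best))  # ≈ 45° for h = 0
```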
Numerical Methods
Many univariate optimisation problems can’t be solved analytically. Numerical methods include:
Bracketing Methods
These methods narrow down the interval containing the optimum:
- Bisection Method: Repeatedly halves the interval
- Golden Section Search: Uses the golden ratio to reduce the interval
- Fibonacci Search: Uses Fibonacci numbers for interval reduction
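As an illustration, a minimal golden section search for minimisation might look like this; it assumes the interval [a, b] brackets a single minimum (unimodality):

```python
import math

def golden_section(f, a, b, tol=1e-8):
    """Shrink [a, b] by the golden ratio until it is narrower than tol."""
    invphi = (math.sqrt(5) - 1) / 2  # 1/φ ≈ 0.618
    c = b - invphi * (b - a)
    d = a + invphi * (b - a)
    while b - a > tol:
        if f(c) < f(d):
            b, d = d, c               # minimum lies in [a, d]
            c = b - invphi * (b - a)
        else:
            a, c = c, d               # minimum lies in [c, b]
            d = a + invphi * (b - a)
    return (a + b) / 2

# Example: minimum of L(θ) = (θ − 2)² on [0, 5]
print(golden_section(lambda t: (t - 2)**2, 0, 5))  # ≈ 2.0
```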
Point-Based Methods
These methods generate a sequence of points converging to the optimum:
- Newton-Raphson Method: θₖ₊₁ = θₖ − L′(θₖ) / L″(θₖ)
  - Requires first and second derivatives
  - Quadratic convergence near the solution
- Secant Method: approximates the second derivative using finite differences, giving θₖ₊₁ = θₖ − L′(θₖ)(θₖ − θₖ₋₁) / (L′(θₖ) − L′(θₖ₋₁))
- Fixed-Point Iteration: θₖ₊₁ = g(θₖ), where g is constructed so that the iteration converges to the optimum
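A minimal Newton-Raphson sketch with user-supplied first and second derivatives (the example function is illustrative, not from the text):

```python
# Newton-Raphson update for univariate optimisation:
# θₖ₊₁ = θₖ − L′(θₖ) / L″(θₖ)
def newton_optimize(dL, d2L, theta0, tol=1e-12, max_iter=100):
    """Find a stationary point of L given its first and second derivatives."""
    theta = theta0
    for _ in range(max_iter):
        step = dL(theta) / d2L(theta)
        theta -= step
        if abs(step) < tol:
            break
    return theta

# Example: L(θ) = θ⁴ − 3θ², so L′(θ) = 4θ³ − 6θ and L″(θ) = 12θ² − 6
dL = lambda t: 4 * t**3 - 6 * t
d2L = lambda t: 12 * t**2 - 6
print(newton_optimize(dL, d2L, 1.0))  # converges to sqrt(3/2) ≈ 1.2247
```

Starting close enough to the solution gives the quadratic convergence noted above; a poor starting point (e.g. where L″ ≈ 0) can diverge.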
Special Cases
Linear Functions
For L(θ) = aθ + b:
- If a > 0, the minimum is at the lower bound of the domain
- If a < 0, the minimum is at the upper bound of the domain
- If a = 0, the function is constant (no unique optimum)
Quadratic Functions
For L(θ) = aθ² + bθ + c:
- If a > 0, there is a unique minimum at θ* = −b / (2a)
- If a < 0, there is a unique maximum at θ* = −b / (2a)
- If a = 0, the problem reduces to the linear case
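The closed-form quadratic optimum θ* = −b / (2a) is easy to encode directly (a small illustrative helper):

```python
def quadratic_optimum(a, b, c):
    """Return the interior optimum of L(θ) = aθ² + bθ + c and its type."""
    if a == 0:
        raise ValueError("a = 0: linear case, no interior optimum")
    theta = -b / (2 * a)
    kind = "minimum" if a > 0 else "maximum"
    return theta, kind

print(quadratic_optimum(2, -8, 1))  # → (2.0, 'minimum')
```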
Applications
Univariate optimisation appears in many engineering problems:
- Finding optimal timing in control systems
- Determining optimal thickness of a material
- Setting optimal temperature for a chemical reaction
- Finding optimal angle or position in mechanical systems