We welcome your feedback! If you encounter any bugs or issues, notice mistakes, have feature suggestions, or see areas for improvement, please feel free to describe the problem by creating an issue on GitHub, or contact me via email: mbhagath555@gmail.com.
Optimization is the process of finding the best solution from the set of all possible solutions. For example, it might mean finding the delivery route that minimizes total travel time and/or cost, or scheduling machines in a factory to maximize production quantity. In engineering, it can involve finding the optimal shape of a dam to maximize its strength against water pressure. More recently, it includes adjusting the parameters of a machine learning model to achieve better accuracy.
To achieve the goals mentioned above, it is essential to describe the problem as a mathematical function (also called the objective or cost function), with all the influencing factors as inputs to the function. The mathematical modelling of the problem requires domain knowledge. Systematic methods are then used to find the input values that minimize or maximize the function value.
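As a toy illustration (the function and its inputs here are hypothetical, not tied to any particular application), an objective function in Python might look like this:

```python
# Hypothetical objective function: the inputs x are the decision variables,
# and the returned value is the quantity we want to minimize.
def objective(x):
    # Toy model: the cost grows quadratically as each input drifts from 3.0
    return sum((xi - 3.0) ** 2 for xi in x)

print(objective([1.0, 4.0]))  # (1-3)^2 + (4-3)^2 = 5.0
```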
Let's assume we have a function \( f(x) \) that we want to minimize. The optimization process starts with an initial guess \( x = x_0 \) and iteratively updates \( x \) to \( x_i \) until a certain number of iterations is reached or the solution meets the convergence criteria (more on this in the optimization methods section). Optimization methods differ in how they update \( x \) in each iteration.
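A minimal sketch of this iterative loop, assuming a simple gradient-descent update with a finite-difference derivative (the function name, step size, and tolerance below are illustrative, not part of any specific method discussed later):

```python
def minimize(f, x0, step=0.1, tol=1e-6, max_iter=1000):
    """Iteratively update x until the change falls below tol (convergence)
    or max_iter iterations are reached."""
    x = x0
    h = 1e-6  # small offset for the finite-difference gradient estimate
    for _ in range(max_iter):
        grad = (f(x + h) - f(x - h)) / (2 * h)  # numerical derivative of f at x
        x_new = x - step * grad                 # update rule (gradient descent)
        if abs(x_new - x) < tol:                # convergence criterion
            return x_new
        x = x_new
    return x

# Example: minimize f(x) = (x - 3)^2 starting from the initial guess x0 = 0
x_min = minimize(lambda x: (x - 3.0) ** 2, x0=0.0)
print(round(x_min, 4))  # ~3.0
```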
Optimization algorithms can be broadly categorized into three main types based on their approach to finding the minimum or maximum of a function: