Alfisol, Item 003:

LMMpro, version 2.0
The Langmuir Optimization Program
plus
The Michaelis-Menten Optimization Program



nNLLS Optimization
The nNLLS regression method used by LMMpro was presented by Schulthess & Dey in 1996
(Soil Sci. Soc. Am. J. 60:433-442).
nNLLS stands for normal nonlinear least squares. This regression method optimizes the parameters of the equation
without converting the equation into another form or shape. The best fit is the equation that yields the smallest error.
The error of each datum point is defined as the distance between the datum point and its nearest
point on the predicted curve. The nearest trajectory from a datum point to the curve is a line that is normal (that is, perpendicular)
to the tangent of the curve at that nearest point.
For the Langmuir Equation, the optimization involves two nested loops, as follows:
1. Make an initial guess of the K and Γ_{max} values.
2. Let Q_{i} = the normal distance of each datum point to the predicted Langmuir isotherm.
The sign of Q_{i} is negative if the point is below or to the right of the curve.
3. Let SUM1 = |Σ Q_{i}| = the absolute value of the sum of the errors, and let SUM2 = Σ Q_{i}^{2}.
4. Repeat steps 2 and 3 with a different Γ_{max} value
until a minimum SUM1 value is achieved.
5. Repeat steps 2, 3, and 4 with a different K value
until a minimum SUM2 value is achieved.
When you exit both of these loops, you will have the optimized K and Γ_{max} values.
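These nested loops can be sketched in Python. Everything below is hypothetical scaffolding, not LMMpro's actual code: the normal distance is found here by a numeric ternary search (rather than the quartic solution developed later in this section), and a fixed grid search stands in for whatever step-refinement the program actually uses.

```python
import math

def langmuir(r, K, gmax):
    # Langmuir isotherm: Γ = Γmax·K·r / (1 + K·r)
    return gmax * K * r / (1.0 + K * r)

def normal_distance(m, n, K, gmax, r_hi=100.0):
    # Signed normal distance from datum (m, n) to the curve, found by a
    # numeric ternary search over r (assumes the squared distance is
    # unimodal on [0, r_hi], which holds for well-behaved data).
    lo, hi = 0.0, r_hi
    for _ in range(200):
        r1 = lo + (hi - lo) / 3.0
        r2 = hi - (hi - lo) / 3.0
        if ((m - r1) ** 2 + (n - langmuir(r1, K, gmax)) ** 2 <
                (m - r2) ** 2 + (n - langmuir(r2, K, gmax)) ** 2):
            hi = r2
        else:
            lo = r1
    r = (lo + hi) / 2.0
    q = math.hypot(m - r, n - langmuir(r, K, gmax))
    # Simplified sign convention: negative when the datum sits below the curve.
    return q if n >= langmuir(m, K, gmax) else -q

def fit_langmuir(data, K_grid, gmax_grid):
    # Outer loop: vary K to minimize SUM2 = Σ Qi².
    # Inner loop: for each K, vary Γmax to minimize SUM1 = |Σ Qi|.
    best = None
    for K in K_grid:
        g = min(gmax_grid, key=lambda g: abs(
            sum(normal_distance(m, n, K, g) for m, n in data)))
        sum2 = sum(normal_distance(m, n, K, g) ** 2 for m, n in data)
        if best is None or sum2 < best[0]:
            best = (sum2, K, g)
    return best[1], best[2]
```

With data generated exactly on a Langmuir curve (K = 2, Γ_{max} = 5), this search recovers those values from grids that contain them; a real implementation would refine the search step rather than scan a fixed grid.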
For the Michaelis-Menten Equation, the optimization involves two nested loops, as follows:
1. Make an initial guess of the K_{M} and V_{max} values.
2. Let Q_{i} = the normal distance of each datum point to the predicted Michaelis-Menten reaction rate.
The sign of Q_{i} is negative if the point is below or to the right of the curve.
3. Let SUM1 = |Σ Q_{i}| = the absolute value of the sum of the errors, and let SUM2 = Σ Q_{i}^{2}.
4. Repeat steps 2 and 3 with a different V_{max} value
until a minimum SUM1 value is achieved.
5. Repeat steps 2, 3, and 4 with a different K_{M} value
until a minimum SUM2 value is achieved.
When you exit both of these loops, you will have the optimized K_{M} and V_{max} values.
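The only change from the Langmuir loops is the curve being fitted. A minimal sketch of the Michaelis-Menten rate equation that those loops would call (values hypothetical), illustrating the familiar half-saturation property:

```python
def michaelis_menten(r, Km, vmax):
    # v = Vmax·r / (Km + r); when r = Km the rate is exactly Vmax / 2
    return vmax * r / (Km + r)

print(michaelis_menten(3.0, 3.0, 10.0))  # prints 5.0 (half of Vmax)
```

The nested SUM1/SUM2 loops above then apply unchanged, with V_{max} in place of Γ_{max} and K_{M} in place of K.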
Calculating the coordinates of the closest point on the curve.
An exact value of Q_{i} defined above can be calculated as follows.
1. Let (m,n) = the coordinates of the datum point, and let (r,s) = the coordinates of the closest point on the predicted curve.
2. The distance between the two points is defined by:
Q = [ (m - r)^{2} + (n - s)^{2} ]^{0.5}     [1]

3. Substitute the theoretical equation for s into the equation above.
For the Langmuir Equation:
s = Γ = Γ_{max}Kr / (1 + Kr)     [2]

For the Michaelis-Menten Equation:
s = v = V_{max}r / (K_{M} + r)     [3]
4. Using differential calculus, minimize Q as a function of r; that is, set dQ/dr = 0.
Simplify the result into the following equation of the fourth degree:
A r^{4} + B r^{3} + C r^{2} + D r + E = 0     [4]

5. For the Langmuir Equation:
Let A = K^{3} .
Let B = 3K^{2} - mK^{3} .
Let C = 3K - 3mK^{2} .
Let D = 1 - 3mK + K^{2}Γ_{max}^{2} - nK^{2}Γ_{max} .
Let E = -m - nKΓ_{max} .
Note: the definition of D above is correct. It is not correct in the original publication (Schulthess & Dey, 1996).
Also note that n and Γ_{max} are multiplied by the axes conversion factor (ACF) in order to maintain
the parity of the units in each component of this equation of the fourth degree.
6. For the Michaelis-Menten Equation:
Let A = K_{M}^{-3} .
Let B = 3K_{M}^{-2} - mK_{M}^{-3} .
Let C = 3/K_{M} - 3mK_{M}^{-2} .
Let D = 1 - 3m/K_{M} + K_{M}^{-2}V_{max}^{2} - nK_{M}^{-2}V_{max} .
Let E = -m - (n/K_{M})V_{max} .
Note that n and V_{max} are multiplied by the axes conversion factor (ACF) in order to maintain
the parity of the units in each component of this equation of the fourth degree.
7. Choose the feasible root for the value of r.
The solution to the equation of the fourth degree yields four roots. Only one of these is the feasible answer.
8. Once r is known, use Equation [2] or [3] to evaluate s, and then Equation [1] to evaluate Q.
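The closest-point procedure above can be exercised end to end. The sketch below is an illustration, not LMMpro's code: it uses a hypothetical datum, takes the ACF as 1, builds the Langmuir coefficients (with the corrected definition of D), brackets the real roots of the quartic by a sign-change scan plus bisection, keeps the nonnegative root nearest the datum as the feasible one (that selection rule is an assumption; the text says only that one root is feasible), and then evaluates s and Q.

```python
import math

def langmuir_coeffs(m, n, K, gmax):
    # Coefficients of the fourth-degree equation, ACF taken as 1,
    # using the corrected definition of D.
    A = K ** 3
    B = 3 * K ** 2 - m * K ** 3
    C = 3 * K - 3 * m * K ** 2
    D = 1 - 3 * m * K + K ** 2 * gmax ** 2 - n * K ** 2 * gmax
    E = -m - n * K * gmax
    return A, B, C, D, E

def real_roots(coeffs, lo=-100.0, hi=100.0, steps=4000):
    # Bracket real roots of the quartic by scanning for sign changes,
    # then refine each bracket by bisection (a sketch; a fixed scan can
    # miss closely spaced or tangent roots).
    f = lambda r: ((((coeffs[0] * r + coeffs[1]) * r + coeffs[2]) * r
                    + coeffs[3]) * r + coeffs[4])
    xs = [lo + (hi - lo) * i / steps for i in range(steps + 1)]
    roots = []
    for a, b in zip(xs, xs[1:]):
        if f(a) * f(b) < 0:
            for _ in range(80):
                mid = (a + b) / 2.0
                a, b = (a, mid) if f(a) * f(mid) <= 0 else (mid, b)
            roots.append((a + b) / 2.0)
    return roots

# Hypothetical datum (m, n) and hypothetical parameter values:
m, n, K, gmax = 1.0, 4.0, 2.0, 5.0
coeffs = langmuir_coeffs(m, n, K, gmax)
r = min((x for x in real_roots(coeffs) if x >= 0), key=lambda x: abs(x - m))
s = gmax * K * r / (1.0 + K * r)   # s from the Langmuir expression
Q = math.hypot(m - r, n - s)       # Q from the distance formula
```

For this datum the feasible root is r ≈ 1.30 (the other real root is negative and physically meaningless), giving Q ≈ 0.49. A useful self-check is that the normality condition (m - r) + (n - s)·ds/dr = 0 holds at the solution.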
Note that the nNLLS regression optimizes the parameters on the assumption that minimizing the normal error yields the best results.
This method has no known bias in favor of any particular region of the curve. Furthermore, the ACF gives
much flexibility in how one wishes to weight the axes units.
Also note that the nNLLS regression will result in an optimized curve with the data evenly distributed above and
below the curve. That is, the sum of the errors above the curve will be the same value as the sum of the errors below the curve.
This balance is a result of the SUM1 definition outlined above. While the SUM1 definition balances the data around the curve,
the SUM2 definition tightens the curve to get as close to the data as possible.
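Because the Michaelis-Menten equation is the Langmuir equation with K replaced by 1/K_{M} and Γ_{max} by V_{max}, the two coefficient sets listed earlier can be checked against each other term by term. A sketch of that consistency check (hypothetical numbers; ACF taken as 1):

```python
def langmuir_coeffs(m, n, K, gmax):
    # Langmuir coefficients, with the corrected definition of D.
    return (K ** 3,
            3 * K ** 2 - m * K ** 3,
            3 * K - 3 * m * K ** 2,
            1 - 3 * m * K + K ** 2 * gmax ** 2 - n * K ** 2 * gmax,
            -m - n * K * gmax)

def mm_coeffs(m, n, Km, vmax):
    # Michaelis-Menten coefficients, negative exponents written explicitly.
    return (Km ** -3,
            3 * Km ** -2 - m * Km ** -3,
            3 / Km - 3 * m * Km ** -2,
            1 - 3 * m / Km + Km ** -2 * vmax ** 2 - n * Km ** -2 * vmax,
            -m - (n / Km) * vmax)

# With Km = 1/K, the two coefficient sets agree term by term:
m, n, Km, vmax = 1.0, 4.0, 0.5, 5.0
print(mm_coeffs(m, n, Km, vmax))
print(langmuir_coeffs(m, n, 1.0 / Km, vmax))  # both print (8.0, 4.0, -6.0, 15.0, -41.0)
```

This equivalence is why the two quartics share one structure; only the parameterization of the curve differs.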