Any of the choices above is correct, but only one turns out to be practical. Choice (A) is certainly correct. The sum of all the errors for all of the data points will be equal to zero if the line chosen is indeed the best line through the data. This was the situation prior to 1795, which is when Gauss derived the method of least-squares linear regression, or perhaps prior to 1805, which is when Legendre first published the method. We will not use choice (A) because it is not easy to work with. Using this definition, a plot of the error as a function of the various values for m and b runs from -∞ to +∞, and locating where it crosses zero can only be done by trial-and-error (no pun intended).
Choice (B) is also correct. This time, however, the absolute values of the errors cannot cancel one another, so their sum is not generally zero; instead, a plot of this sum as a function of the various values for m and b has a minimum, and that minimum identifies the best line (the minimum equals zero only if the data fall perfectly on a line). Notice that you have to manually evaluate the absolute value of the error for each data point, and the absolute-value function offers no convenient derivative, so locating where the minimum occurs can only be done by trial-and-error (once again, no pun intended). Consequently, this is not a practical answer.
Choice (C) is also correct. The sum of all the errors for all of the data points will be equal to zero if the line chosen is indeed the best line through the data. Note, however, that the sum of the squared errors for all of the data points will NOT be equal to zero. Instead, it will reach a minimum value. Choice (C) turns out to be the most practical answer simply because we can locate this minimum without any trial-and-error guesses and without any concern about the error being positive or negative. Notice that the square of the error is always positive regardless of the sign of the error before squaring. The criterion for linear regression is therefore to minimize the sum of the squared error terms.
Choice (D) is also correct. It is as effective and almost as practical as choice (C). It has only one minor inconvenience, which should be obvious to you. Namely, calculating the fourth power of a number is much more troublesome than calculating the square of a number. Why do more work than you have to? Other than that, it's perfect.
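To make the contrast concrete, here is a small numerical sketch (the data set and the candidate slopes are invented for illustration). Every line through the centroid of the data drives the signed-error sum of choice (A) to zero, so that criterion alone cannot single out a best line, while the squared-error sum of choice (C) clearly distinguishes among them:

```python
# Illustrative data set (invented) for comparing the three error criteria.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.2, 7.8]

def criteria(m, b):
    """Return the choice (A), (B), and (C) criteria for the line y = m*x + b."""
    errors = [y - (m * x + b) for x, y in zip(xs, ys)]
    signed = sum(errors)                      # choice (A): signed sum
    absolute = sum(abs(e) for e in errors)    # choice (B): sum of |error|
    squared = sum(e * e for e in errors)      # choice (C): sum of error^2
    return signed, absolute, squared

# Any line through the centroid (x-bar, y-bar) makes the SIGNED sum zero,
# regardless of slope -- so criterion (A) cannot pick a unique line.
xbar = sum(xs) / len(xs)
ybar = sum(ys) / len(ys)
for m in (0.5, 1.0, 2.0, 3.0):
    b = ybar - m * xbar  # force the line through the centroid
    s, a, q = criteria(m, b)
    print(f"m={m}: signed={s:+.2e}  abs={a:.3f}  squared={q:.3f}")
```

The printout shows the signed sum pinned at (essentially) zero for every slope, while the squared sum varies and clearly favors one slope over the others.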
Substitute the equation of the line for y_{p} above. Namely, let y_{p} = mx_{i} + b. Let ε^{2} be the sum of all the squared errors from i=1 to i=n; that is, ε^{2} = Σ_{i=1}^{n} (y_{i} - (mx_{i} + b))^{2}.
Accordingly, we seek the first derivative of the expression in step 2 above with respect to each of m and b. Namely, complete the following two first derivatives:
Note, with d(ε^{2})/dm you seek to find the equation that gives the slope of the error-square function as a function of the value of m chosen. You seek the value of m where this slope is equal to zero; that is, where it is horizontal. Similarly, with d(ε^{2})/db you seek to find the equation that gives the slope of the error-square function as a function of the value of b chosen. You seek the value of b where this slope is equal to zero; that is, where it is horizontal.
Also note that it does not matter that you do not yet know the correct value of m or b. There is no guessing involved. You only need to know that the first derivative will be zero when these values are correct.
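Written out explicitly (a sketch, using the definition ε^{2} = Σ(y_{i} - mx_{i} - b)^{2} from step 2), the two conditions are:

```latex
\frac{d(\varepsilon^2)}{dm} = \sum_{i=1}^{n} 2\,(y_i - m x_i - b)(-x_i) = 0
\qquad
\frac{d(\varepsilon^2)}{db} = \sum_{i=1}^{n} 2\,(y_i - m x_i - b)(-1) = 0
```

Dividing out the common factor of -2 and rearranging turns these into two simultaneous linear equations in the two unknowns m and b.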
Believe it or not, in math we love to keep things simple. Accordingly, it is sometimes customary to simplify the notation as follows:
When you complete this step you will have the best values of m and b for the line y = mx + b that predicts the data. You know the data values because they are your data. You also know the number of points involved (n), as well as the values for Sx, Sy, Sxy, and Sxx. With all of these known values, you use your final two equations to get the best values of m and b. No guessing was involved, no trial-and-error was involved, and that was the whole point of doing it this way.
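A minimal sketch of this final computation (the data set is invented; the variable names follow the Sx, Sy, Sxy, Sxx shorthand above, and the closed-form expressions for m and b are the standard least-squares results, which you should check against the final two equations you derived):

```python
# Least-squares slope and intercept from the summary sums Sx, Sy, Sxy, Sxx.
# The data set is invented for illustration.
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.0, 4.1, 5.9, 8.2, 9.8]

n = len(xs)
Sx = sum(xs)
Sy = sum(ys)
Sxy = sum(x * y for x, y in zip(xs, ys))
Sxx = sum(x * x for x in xs)

# Solving the two simultaneous equations for m and b:
m = (n * Sxy - Sx * Sy) / (n * Sxx - Sx * Sx)
b = (Sy - m * Sx) / n

print(f"best m = {m:.4f}, best b = {b:.4f}")
```

No guessing and no trial-and-error: the sums are computed once from the data, and m and b follow directly.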
Γ = Γ_{max} K c / (1 + K c)
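As a quick numerical sanity check of the Langmuir form (the function name and parameter values below are mine, chosen for illustration), Γ grows roughly linearly at small c and saturates toward Γ_{max} at large c:

```python
def langmuir(c, gamma_max, K):
    """Langmuir isotherm: Γ = Γ_max * K * c / (1 + K * c)."""
    return gamma_max * K * c / (1.0 + K * c)

gamma_max, K = 2.5, 0.8            # illustrative parameter values
print(langmuir(0.01, gamma_max, K))  # small c: roughly Γ_max * K * c (linear regime)
print(langmuir(1e6, gamma_max, K))   # large c: approaches the plateau Γ_max
```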
The practical answer is to transform the Langmuir equation into a linear form. If we accomplish this, then we can use all our knowledge of how to get the best m and b values. The best m and b values would, in turn, tell us what the best K and Γ_{max} values happen to be.
(A) Show how to manipulate the Langmuir equation into the following linear equation:
1/Γ = 1/Γ_{max} + 1/(Γ_{max} K c)
(B) In y = mx + b, what are y, m, x, and b in the Lineweaver-Burk linear equation shown above?
(C) If you know the best m and b values for the Lineweaver-Burk equation, what are the best K and Γ_{max} values for the Langmuir equation?
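One way to check your answers to (B) and (C) numerically is to generate synthetic, noise-free data from known K and Γ_{max} values, fit the transformed data with the least-squares formulas from earlier, and see whether the original parameters come back. This sketch uses my own reading of the Lineweaver-Burk identifications, so verify it against your own answers:

```python
# Hedged sketch: recover K and Γ_max from a Lineweaver-Burk fit of
# synthetic (noise-free) Langmuir data. Parameter values are invented.
K_true, gmax_true = 0.8, 2.5

cs = [0.5, 1.0, 2.0, 4.0, 8.0]
gammas = [gmax_true * K_true * c / (1 + K_true * c) for c in cs]

# Transform: y = 1/Γ, x = 1/c, so that y = m*x + b with
#   m = 1/(Γ_max*K) and b = 1/Γ_max   (assumed identification -- check it!)
xs = [1.0 / c for c in cs]
ys = [1.0 / g for g in gammas]

n = len(xs)
Sx, Sy = sum(xs), sum(ys)
Sxy = sum(x * y for x, y in zip(xs, ys))
Sxx = sum(x * x for x in xs)
m = (n * Sxy - Sx * Sy) / (n * Sxx - Sx * Sx)
b = (Sy - m * Sx) / n

gmax_fit = 1.0 / b   # from b = 1/Γ_max
K_fit = b / m        # from m = 1/(Γ_max*K) = b/K
print(f"recovered Γ_max ≈ {gmax_fit:.3f}, K ≈ {K_fit:.3f}")
```

Because the synthetic data are noise-free, the fit should reproduce K_true and gmax_true essentially exactly; with real (noisy) data the recovered values would only approximate them.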
(A) Show how to manipulate the Langmuir equation into the following linear equation:

Γ = Γ_{max} - Γ/(K c)
(B) In y = mx + b, what are y, m, x, and b in the Eadie-Hofstee linear equation shown above?
(C) If you know the best m and b values for the Eadie-Hofstee equation, what are the best K and Γ_{max} values for the Langmuir equation?
(A) Show how to manipulate the Langmuir equation into the following linear equation:

Γ/c = K Γ_{max} - K Γ
(B) In y = mx + b, what are y, m, x, and b in the Scatchard linear equation shown above?
(C) If you know the best m and b values for the Scatchard equation, what are the best K and Γ_{max} values for the Langmuir equation?
(A) Show how to manipulate the Langmuir equation into the following linear equation:

c/Γ = c/Γ_{max} + 1/(K Γ_{max})
(B) In y = mx + b, what are y, m, x, and b in the Langmuir linear equation shown above?
(C) If you know the best m and b values for the Langmuir linear equation, what are the best K and Γ_{max} values for the original Langmuir equation?
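As a closing consistency check, all four linearizations should recover the same K and Γ_{max} from noise-free synthetic data. The slope/intercept-to-(K, Γ_{max}) conversions below are my own readings of the four linear forms above, so compare them against your own answers to the (C) parts:

```python
# Hedged sketch: the four linearizations of the Langmuir equation, each fit
# by least squares, should all recover the same (invented) K and Γ_max.
K_true, gmax_true = 0.8, 2.5
cs = [0.5, 1.0, 2.0, 4.0, 8.0]
gs = [gmax_true * K_true * c / (1 + K_true * c) for c in cs]

def fit(xs, ys):
    """Least-squares m and b via the Sx, Sy, Sxy, Sxx sums."""
    n = len(xs)
    Sx, Sy = sum(xs), sum(ys)
    Sxy = sum(x * y for x, y in zip(xs, ys))
    Sxx = sum(x * x for x in xs)
    m = (n * Sxy - Sx * Sy) / (n * Sxx - Sx * Sx)
    return m, (Sy - m * Sx) / n

results = {}

# Lineweaver-Burk: 1/Γ vs 1/c
m, b = fit([1 / c for c in cs], [1 / g for g in gs])
results["Lineweaver-Burk"] = (b / m, 1 / b)            # (K, Γ_max)

# Eadie-Hofstee: Γ vs Γ/c
m, b = fit([g / c for g, c in zip(gs, cs)], gs)
results["Eadie-Hofstee"] = (-1 / m, b)

# Scatchard: Γ/c vs Γ
m, b = fit(gs, [g / c for g, c in zip(gs, cs)])
results["Scatchard"] = (-m, -b / m)

# Linear Langmuir: c/Γ vs c
m, b = fit(cs, [c / g for c, g in zip(cs, gs)])
results["Langmuir linear"] = (m / b, 1 / m)

for name, (K, gmax) in results.items():
    print(f"{name:16s}: K ≈ {K:.4f}, Γ_max ≈ {gmax:.4f}")
```

With noisy real data the four forms would no longer agree exactly, because each transformation weights the measurement errors differently; that disagreement is the usual argument for preferring one linearization over another.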