Abstract

Fitting nonlinear models to data is important for many science and engineering fields: a meaningful nonlinear model results from estimating the value of each parameter in the model. Since parameters in nonlinear models often characterize a substance or a system (e.g., mass diffusivity), it is critical to find the optimal parameter estimators that minimize or maximize a chosen objective function. In practice, iterative local methods (e.g., the Levenberg–Marquardt method) and heuristic methods (e.g., genetic algorithms) are commonly employed for least squares parameter estimation in nonlinear models. However, practitioners cannot know whether the parameter estimators derived through these methods are the optimal parameter estimators that correspond to the global minimum of the squared error of the fit. In this paper, a focused regions identification method is introduced for least squares parameter estimation in nonlinear models. Using expected fitting accuracy and derivatives of the squared error of the fit, this method rules out the regions in parameter space where the optimal parameter estimators cannot exist. Practitioners are guaranteed to find the optimal parameter estimators through an exhaustive search in the remaining regions (i.e., focused regions). The focused regions identification method is validated through two case studies in which a model based on Newton’s law of cooling and the Michaelis–Menten model are fitted to two experimental data sets, respectively. These case studies show that the focused regions identification method finds the optimal parameter estimators and the corresponding global minimum effectively and efficiently.

1 Introduction

Fitting nonlinear models to data, also called nonlinear curve fitting, is an important inverse problem for many science and engineering fields [1–3]. A major goal of nonlinear curve fitting is to estimate the value of each parameter in a specified nonlinear model. For example, Fick’s second law is fitted to data collected from standard moisture diffusion tests to estimate the mass diffusivity of various composite materials [4,5]. The Michaelis–Menten model is fitted to data measured in enzymatic reactions to estimate the maximum reaction rate and the Michaelis constant of these reactions [6,7].

In nonlinear curve fitting problems, parameters in the nonlinear model usually characterize a substance or a system and have clear interpretations, such as mass diffusivity between two species, viscosity of a fluid, and maximum velocity of a chemical reaction [8]. It is therefore critical to find the optimal parameter estimators that minimize or maximize a chosen objective function, such as the squared error of the fit. In practice, iterative local methods (e.g., the Levenberg–Marquardt method [9,10] and the Nelder–Mead method [11]) and heuristic methods (e.g., genetic algorithms [12] and simulated annealing [13]) are commonly employed for parameter estimation in nonlinear models to minimize the squared error of the fit. However, practitioners cannot know whether the parameter estimators derived through these extant methods are the optimal parameter estimators that correspond to the global minimum of the squared error of the fit. The iteration process of these methods also does not shed light on where the global minimum may be located. Recent research introduces a solution interval method for least squares parameter estimation in nonlinear models [14]. Practitioners do not need to guess the initial value of each parameter in a nonlinear model to initialize the method, but the solution interval method is still not guaranteed to find the optimal parameter estimators that correspond to the global minimum of the squared error of the fit.

In this paper, a generally applicable method is introduced to reduce the search space for the optimal least squares parameter estimators in nonlinear curve fitting problems. The method applies to nonlinear models that have a closed-form expression. Using expected fitting accuracy and derivatives of the squared error of the fit, this method rules out the suboptimal regions in parameter space where the optimal parameter estimators cannot exist. Notably, the ruled-out regions may include one or more local minima in which some iterative local methods can be trapped. The remaining regions in the parameter space are defined as the focused regions. If adequate computational power is available, practitioners are guaranteed to find the optimal parameter estimators that correspond to the global minimum of the squared error of the fit through an exhaustive search in the focused regions. In addition, a new approach to detect potential outliers using inequalities derived from expected fitting accuracy and derivatives of the squared error of the fit is also proposed for nonlinear curve fitting problems.

This paper begins with a brief review of extant least squares methods that are commonly employed to solve nonlinear curve fitting problems in Sec. 2. Section 3 presents the new method to identify the focused regions and search for the optimal least squares parameter estimators in nonlinear models. Four steps to implement the focused regions identification method in practice are provided in Sec. 4. The applications of the method are demonstrated through case studies of the Rumford cooling experiment and the puromycin experiment in Sec. 5. Section 6 presents an approach to detect potential outliers in data sets of nonlinear curve fitting problems using inequalities derived in Sec. 3. The paper concludes with a discussion of the contribution of the work presented here and future research directions.

2 Background and Related Work

Nonlinear models have been applied in many science and engineering fields. A review paper written by Archontoulis and Miguez provides 72 commonly employed nonlinear models that have a closed-form expression, such as the exponential decay function, logistic function, Gompertz function, Gaussian function, and Michaelis–Menten model [3]. A major goal of fitting a specified nonlinear model to data, also called nonlinear curve fitting, is to estimate the value of each parameter in the nonlinear model. These parameter estimators should minimize or maximize a chosen objective function, such as the squared error of the fit or the likelihood [15]. In nonlinear curve fitting problems, the nonlinear model usually has fewer than ten parameters. Each parameter in the nonlinear model has a clear interpretation (e.g., mass diffusivity and viscosity). Sometimes, the values of one or more parameters in the nonlinear model are bounded. For example, the viscosity of a fluid, μ, needs to be a positive constant (μ > 0). However, inequality or equality constraints that establish relationships between several parameters in the nonlinear model are usually not applied to nonlinear curve fitting problems. In this section, several methods that are frequently used to solve nonlinear curve fitting problems are briefly reviewed. These methods estimate parameter values in nonlinear models to minimize the squared error of the fit. Detailed reviews of extant nonlinear parameter estimation methods that cover various objective functions can be found in books written by Bates and Watts [16], Björck [17], Gelman et al. [18], and Tarantola [8], among others.

Many nonlinear curve fitting problems are solved through linear transformation in practice. Specifically, the nonlinear model is transformed into a linear model, and the value of each parameter in the model is then derived through linear regression. However, the parameter estimators derived through linear transformation are usually not the optimal estimators that minimize the squared error of the fit for the original problem. For example, the Michaelis–Menten model, also called the Michaelis–Menten equation, is a well-known nonlinear model in biochemistry [6]. The nonlinear model has the mathematical form
$y = \dfrac{\theta_1 x}{\theta_2 + x}$  (1)
where x and y represent substrate concentration and reaction velocity in a biochemical reaction, respectively; parameter θ1 denotes the maximum reaction velocity, and parameter θ2 is defined as the Michaelis constant. Parameters θ1 and θ2 are, by definition, positive constants. To fit the Michaelis–Menten model to experimental data and estimate the values of parameters θ1 and θ2 in the model, Lineweaver and Burk transform the Michaelis–Menten model into a linear model as [19]
$\dfrac{1}{y} = \dfrac{1}{\theta_1} + \dfrac{\theta_2}{\theta_1}\,\dfrac{1}{x}$  (2)

Equation (2) is often called the Lineweaver–Burk equation, and the graphical representation of Eq. (2) (1/x versus 1/y graph) is known as the Lineweaver–Burk plot or the double reciprocal plot in biochemistry. The values of 1/θ1 and θ2/θ1 in Eq. (2) are derived by regressing 1/y against 1/x using all data points collected from the biochemical experiment, and then the values of parameters θ1 and θ2 can be calculated accordingly. However, parameter estimators derived through the Lineweaver–Burk equation minimize the squared error of the fit for the reciprocal of the reaction velocity, 1/y, rather than for the reaction velocity, y. In other words, the experimental data points with low reaction velocity (a low value of y), which often have greater percentage error, have a higher impact on the linear regression results compared to the data points with high reaction velocity (a high value of y). The parameter estimators derived through the Lineweaver–Burk equation are therefore not the optimal least squares estimators for the original problem of fitting the Michaelis–Menten model represented by Eq. (1) to experimental data. An example using the Lineweaver–Burk equation to estimate parameter values in the Michaelis–Menten model is provided in Sec. 5.2.
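As an illustration of this weighting effect, the following minimal sketch performs the double reciprocal regression and then evaluates the resulting estimators against the squared error of the original model. The data set and variable names here are made-up assumptions for demonstration, not taken from the experiments discussed later:

```python
import numpy as np

# Hypothetical Michaelis-Menten-type data (substrate concentration
# and reaction velocity); any similar data set would do.
x = np.array([0.02, 0.06, 0.11, 0.22, 0.56, 1.10])
y = np.array([60.0, 100.0, 130.0, 155.0, 195.0, 205.0])

# Lineweaver-Burk: regress 1/y on 1/x, since Eq. (2) is linear in 1/x.
slope, intercept = np.polyfit(1.0 / x, 1.0 / y, deg=1)
theta1 = 1.0 / intercept      # maximum reaction velocity
theta2 = slope * theta1       # Michaelis constant

# The regression minimized the squared error in 1/y, so points with
# low reaction velocity dominate; the RSS of the original model shows
# that the estimators are generally not optimal for Eq. (1).
rss = np.sum((y - theta1 * x / (theta2 + x)) ** 2)
print(theta1, theta2, rss)
```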

Since optimal least squares parameter estimators usually cannot be derived through linear transformation, and many nonlinear models cannot even be transformed into a linear model, iterative local methods, such as the Gauss–Newton method [20], the Levenberg–Marquardt method [9,10], the Nelder–Mead method [11], and the trust region reflective method [21,22], are commonly employed to estimate parameter values in nonlinear models to minimize the squared error of the fit. Using these methods, the estimator for each parameter begins with an initial guess, and the estimator is then updated iteratively. The iterative equation used in these methods is
$\boldsymbol{\theta}^{(s+1)} = \boldsymbol{\theta}^{(s)} + \boldsymbol{\Delta}^{(s)}$  (3)
where θ = [θ1, θ2, …, θn] is a column vector of n parameters in a specified nonlinear model y = f (x1, x2, …, xq, θ) that is fit to m data points [x1(i), x2(i), …, xq(i), y(i)], and i ∈ {1, 2, …, m}; θ(s + 1) and θ(s) represent parameter values at iteration step s + 1 and s, respectively; Δ(s) represents the increment vector at the iteration step s, which is specified differently in these iterative local methods. For example, when the Levenberg–Marquardt method is employed, the increment vector Δ in the iteration step s is computed through [10,23]
$\left(\mathbf{J}^{\mathsf{T}}\mathbf{J} + \lambda\mathbf{D}\right)\boldsymbol{\Delta} = -\,\mathbf{J}^{\mathsf{T}}\mathbf{r}$  (4)
In Eq. (4), λ is a non-negative damping factor, D is a diagonal scaling matrix, r = [r1, r2, …, rm] is a column vector of m residuals [14], and J is a m × n Jacobian matrix of residuals defined by
$J_{ij} = \dfrac{\partial r_i}{\partial \hat{\theta}_j}$  (5)
where i ∈ {1, 2, …, m}, j ∈ {1, 2, …, n}.
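A single iteration of Eqs. (3)–(5) can be sketched in a few lines of code. The Michaelis–Menten model, the forward-difference Jacobian, and the common choice D = diag(JᵀJ) are assumptions made here for illustration, not prescriptions of the methods reviewed above:

```python
import numpy as np

def residuals(theta, x, y):
    # r_i = y(i) - f(x(i), theta) for the Michaelis-Menten model
    return y - theta[0] * x / (theta[1] + x)

def jacobian(theta, x, y, eps=1e-7):
    # Forward-difference approximation of Eq. (5): J_ij = dr_i / dtheta_j
    r0 = residuals(theta, x, y)
    J = np.empty((x.size, theta.size))
    for j in range(theta.size):
        step = np.zeros_like(theta)
        step[j] = eps
        J[:, j] = (residuals(theta + step, x, y) - r0) / eps
    return J

def levenberg_marquardt_step(theta, x, y, lam=1e-2):
    # Eq. (4): solve (J^T J + lam * D) delta = -J^T r, then apply Eq. (3)
    r = residuals(theta, x, y)
    J = jacobian(theta, x, y)
    A = J.T @ J
    D = np.diag(np.diag(A))   # Marquardt's diagonal scaling (assumed choice)
    delta = np.linalg.solve(A + lam * D, -J.T @ r)
    return theta + delta
```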

Besides these iterative local methods, several heuristic methods, also called guided random search techniques [24], such as genetic algorithms [12], simulated annealing [13], particle swarm optimization [25], and tabu search [26,27], have also been applied for parameter estimation in nonlinear models to minimize the squared error of the fit. These heuristic methods are usually initialized by multiple initial guesses, known as the initial population, which serve as the initial estimators for each parameter in the nonlinear model. These parameter estimators are then updated in an iterative process involving random variation [24].

Importantly, in nonlinear curve fitting problems, it is critical to find the optimal estimator for each parameter that minimizes the squared error of the fit (i.e., the global minimum) since each parameter in the nonlinear model characterizes a substance or a system (e.g., mass diffusivity between two species and maximum velocity of a chemical reaction). However, these iterative local methods and heuristic methods are not guaranteed to find the optimal parameter estimators since they are based on sampling over the parameter space. Specifically, using these methods, the iterative process is usually stopped when the result reaches a prespecified step tolerance, function tolerance, or maximum number of iterations. If the initial guess(es) and the stopping criteria are properly specified, these algorithms may converge to a local minimum where the first-order derivatives of the squared error of the fit equal (or approximate) zero [14,24]. However, since multiple or even an infinite number of local minima may exist for nonlinear curve fitting problems, practitioners cannot know whether the local minimum derived from an iterative local method or a heuristic method is the global minimum that corresponds to the optimal parameter estimators, and thus it is difficult to determine when to stop the search process using these methods in practice. In other words, since the goal is to find the optimal parameter estimators that minimize the squared error of the fit (i.e., the global minimum), when practitioners find a local minimum using these extant methods, they do not know whether they should stop the search process or keep searching the parameter space until they exhaust all available computational power (e.g., by using other initial guesses or increasing the population size). In addition, the iteration process of these extant methods does not inform practitioners about where the global minimum may be located. Notably, in practice, it is also usually not feasible to perform an exhaustive search over the parameter space of nonlinear curve fitting problems since the parameter space is infinite in many cases. Recent research introduces a solution interval method for least squares parameter estimation in nonlinear models [14]. To initialize the solution interval method, practitioners do not need to guess the initial value of each parameter in a nonlinear model. It is also proved that the method can find the optimal estimator for each parameter that minimizes the squared error of the fit when the nonlinear curve fitting problem satisfies the specified monotonic and symmetric conditions [14]. However, many nonlinear curve fitting problems do not fulfill these conditions, such as the curve fitting problem defined in Sec. 5.2; the solution interval method is therefore not guaranteed to find the optimal parameter estimators and the corresponding global minimum for nonlinear curve fitting problems.

3 Focused Regions Identification for Least Squares Parameter Estimation in Nonlinear Models

Based on the research gap discussed in Sec. 2, a generally applicable method is introduced in this section to identify the focused regions for least squares parameter estimation in nonlinear models. Using expected fitting accuracy and derivatives of the squared error of the fit, this method rules out from the parameter space the suboptimal regions where the optimal parameter estimators that correspond to the global minimum of the squared error of the fit cannot exist. The remaining regions in the parameter space are defined as the focused regions to search for the optimal parameter estimators. If adequate computational power is available, practitioners are guaranteed to find the optimal parameter estimators that correspond to the global minimum of the squared error of the fit through an exhaustive search in the focused regions. Notably, the method introduced in this section applies to nonlinear models that have a closed-form expression. The method becomes less efficient as the number of parameters in the nonlinear model increases, and roughly ten parameters is a practical limit for its application given typical computational resources. As stated in Sec. 2, many nonlinear models, such as the exponential decay function, logistic function, Gaussian function, and Michaelis–Menten model, have fewer than ten parameters.

For a generic nonlinear curve fitting problem, there are m data points [x1(i), x2(i), …, xq(i), y(i)], where i ∈ {1, 2, …, m}. The objective is to fit a specified nonlinear model y = f (x, θ) to the m data points by finding the optimal values for parameters θ that minimize the squared error of the fit, where θ = [θ1, θ2, …, θn] represents n constant parameters in the nonlinear model, and x = [x1, x2, …, xq]. Here, y = f (x, θ) represents the closed-form expression of the nonlinear model using a finite number of standard operations that do not include limit, differentiation, and integration. Fitting a nonlinear model that does not have a closed-form expression (e.g., a differential equation) to data is beyond the scope of this paper, and model selection is also not discussed in this paper. The squared error of the fit for the problem, also called the residual sum of squares (RSS), is
$S(\hat{\boldsymbol{\theta}}) = \sum_{i=1}^{m}\left[y^{(i)} - f\big(\mathbf{x}^{(i)}, \hat{\boldsymbol{\theta}}\big)\right]^{2}$  (6)
where $\hat{\boldsymbol{\theta}} = [\hat{\theta}_1, \hat{\theta}_2, \ldots, \hat{\theta}_n]$ denotes the estimated values of parameters θ. Sometimes bounds are applied to parameter θj (e.g., $\theta_j^{(lb)} < \theta_j < \theta_j^{(ub)}$, where $\theta_j^{(lb)}$ and $\theta_j^{(ub)}$ are the lower and upper bounds of parameter θj, respectively), and the n-dimensional parameter space is defined by the allowed value range of each parameter θj, where j ∈ {1, 2, …, n}.
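In code, Eq. (6) is a one-line reduction. The sketch below is a direct transcription for a single-predictor model (q = 1); the function and variable names are illustrative:

```python
import numpy as np

def rss(theta_hat, model, x, y):
    """Squared error of the fit, Eq. (6): the sum over the m data
    points of [y(i) - f(x(i), theta_hat)]^2."""
    return np.sum((y - model(x, theta_hat)) ** 2)

# Example with the Michaelis-Menten model of Eq. (1):
michaelis_menten = lambda x, t: t[0] * x / (t[1] + x)
```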

The regions that cannot satisfy expected fitting accuracy are ruled out from parameter space first. When an expected fitting accuracy is specified, an upper limit is applied to the squared error of the fit or the squared error of each data point. The regions in which the squared error exceeds the upper limit are ruled out from parameter space. Notably, the ruled-out regions may include several or even an infinite number of local minima in which some iterative local methods, such as the Gauss–Newton method and the Levenberg–Marquardt method, can be trapped. The ruled-out regions identification through expected fitting accuracy is demonstrated in Fig. 1(a), where n = 1.

Fig. 1: A representative focused regions identification process: (a) ruled-out regions identification through expected fitting accuracy and (b) focused regions identification using derivatives of the squared error of the fit
In curve fitting problems, the fitting accuracy is usually measured by the coefficient of determination, denoted R2, or mean squared error, denoted MSE. The coefficient of determination is defined as
$R^{2} = 1 - \dfrac{\sum_{i=1}^{m}\left[y^{(i)} - f\big(\mathbf{x}^{(i)}, \hat{\boldsymbol{\theta}}\big)\right]^{2}}{\sum_{i=1}^{m}\left[y^{(i)} - \bar{y}\right]^{2}}$  (7)
where $\bar{y}$ is the mean of the observed data $y^{(i)}$, given by
$\bar{y} = \dfrac{1}{m}\sum_{i=1}^{m} y^{(i)}$  (8)
If the coefficient of determination is expected to be greater than or equal to a specified value, denoted R2(e), the regions defined by Eq. (9) should be ruled out from the space of parameters θ:
$1 - \dfrac{\sum_{i=1}^{m}\left[y^{(i)} - f\big(\mathbf{x}^{(i)}, \hat{\boldsymbol{\theta}}\big)\right]^{2}}{\sum_{i=1}^{m}\left[y^{(i)} - \bar{y}\right]^{2}} < R^{2(e)}$  (9)
Equation (9) can be rewritten as
$\sum_{i=1}^{m}\left[y^{(i)} - f\big(\mathbf{x}^{(i)}, \hat{\boldsymbol{\theta}}\big)\right]^{2} > \left(1 - R^{2(e)}\right)\sum_{i=1}^{m}\left[y^{(i)} - \bar{y}\right]^{2}$  (10)
Equation (10) includes the summation of m terms on each side of the inequality. In cases when the ranges of parameters θ cannot be easily derived from Eq. (10), a more conservative criterion, based on the condition that each of the m terms on the left side of Eq. (10) is larger than its corresponding term on the right side of Eq. (10), can be used. In this case, the regions that should be ruled out from the parameter space can be defined by
$\left[y^{(i)} - f\big(\mathbf{x}^{(i)}, \hat{\boldsymbol{\theta}}\big)\right]^{2} > \left(1 - R^{2(e)}\right)\left[y^{(i)} - \bar{y}\right]^{2}, \quad i \in \{1, 2, \ldots, m\}$  (11)

Equation (11) represents the intersection of m inequalities related to parameters θ after m data points are plugged into the equation. The regions defined by Eq. (11) are the subregions of the regions defined by Eq. (10). In other words, smaller suboptimal regions are ruled out if Eq. (11) is employed instead of Eq. (10). Importantly, as demonstrated in the two case studies in Sec. 5, when Eq. (10) or Eq. (11) is employed to rule out suboptimal regions from the parameter space, sampling over the parameter space is not necessary given the closed-form nature of Eqs. (10) and (11). It is conservative to assign R2(e) = 0 in Eqs. (10) and (11) since a successful curve fitting result must have its coefficient of determination, R2, greater than or equal to zero. In practice, a larger expected coefficient of determination, R2(e), could be employed (e.g., 0.5 or 0.7) at the discretion of the practitioner. Cases when the whole parameter space is ruled out by Eq. (10) or Eq. (11) indicate that the curve fitting result cannot reach the expected coefficient of determination, and practitioners need to check data reliability and the fidelity of the nonlinear model employed to fit the data.
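The membership test implied by Eq. (11) is straightforward to express in code. The sketch below is a numerical illustration only; the paper manipulates Eqs. (10) and (11) in closed form rather than testing candidate points, and the function names are assumptions:

```python
import numpy as np

def ruled_out_by_r2(theta_hat, model, x, y, r2_expected=0.0):
    """Conservative per-point criterion of Eq. (11): theta_hat is
    ruled out when every one of the m squared residuals exceeds
    (1 - R2e) times the squared deviation of y(i) from the mean."""
    res_sq = (y - model(x, theta_hat)) ** 2
    dev_sq = (y - np.mean(y)) ** 2
    return np.all(res_sq > (1.0 - r2_expected) * dev_sq)
```

The mean-squared-error criterion introduced next has the same per-point structure, with each squared residual compared against the expected value instead.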

Similarly, the mean squared error of the fit is defined as
$\mathrm{MSE} = \dfrac{1}{m}\sum_{i=1}^{m}\left[y^{(i)} - f\big(\mathbf{x}^{(i)}, \hat{\boldsymbol{\theta}}\big)\right]^{2}$  (12)
If the mean squared error is expected to be less than or equal to a specified value, denoted MSE(e), the regions defined by Eq. (13) should be ruled out from the space of parameters θ:
$\dfrac{1}{m}\sum_{i=1}^{m}\left[y^{(i)} - f\big(\mathbf{x}^{(i)}, \hat{\boldsymbol{\theta}}\big)\right]^{2} > \mathrm{MSE}^{(e)}$  (13)
Equation (13) can be rewritten as
$\sum_{i=1}^{m}\left[y^{(i)} - f\big(\mathbf{x}^{(i)}, \hat{\boldsymbol{\theta}}\big)\right]^{2} > m\,\mathrm{MSE}^{(e)}$  (14)
The left side of the inequality represented by Eq. (14) includes the summation of m terms. In cases when the ranges of parameters θ cannot be easily derived from Eq. (14), a more conservative set of regions that should be ruled out from the parameter space can be defined by
$\left[y^{(i)} - f\big(\mathbf{x}^{(i)}, \hat{\boldsymbol{\theta}}\big)\right]^{2} > \mathrm{MSE}^{(e)}, \quad i \in \{1, 2, \ldots, m\}$  (15)

Equation (15) is derived based on the condition that each term on the left side of Eq. (14) is greater than the expected MSE(e). Equation (15) represents the intersection of m inequalities related to parameters θ after the m data points are plugged into the equation. The regions defined by Eq. (15) are subregions of the regions defined by Eq. (14). In practice, when the expected fitting accuracy instead requires the squared error of each data point to be less than or equal to a specified value, denoted MSE(e) in Eqs. (14) and (15), the ruled-out regions are defined by the union of the m inequalities obtained by plugging the m data points into Eq. (15). Cases in which the whole parameter space is ruled out by Eq. (14) or Eq. (15) indicate that the curve fitting result cannot satisfy the expected mean squared error or the expected squared error for each data point.

The remaining regions in the parameter space derived through the expected fitting accuracy are then further reduced using derivatives of the squared error of the fit. The regions that are finally left over in the parameter space are defined as the focused regions, as demonstrated in Fig. 1(b). Specifically, if θ* = [θ1*, θ2*, …, θn*] are the optimal values of parameters θ, S(θ*) is the global minimum of the squared error of the fit. The squared error of the fit S(θ* + h) is then written as the Taylor series at θ* as
$S(\boldsymbol{\theta}^{*} + \mathbf{h}) = S(\boldsymbol{\theta}^{*}) + \sum_{k=1}^{\infty}\,\sum_{|\alpha| = k} \dfrac{D^{\alpha} S(\boldsymbol{\theta}^{*})}{\alpha!}\,\mathbf{h}^{\alpha}$  (16)
where h = [h1, h2, …, hn] are arbitrary small positive or negative values, and
$D^{\alpha} S = \dfrac{\partial^{|\alpha|} S}{\partial \hat{\theta}_1^{\alpha_1}\,\partial \hat{\theta}_2^{\alpha_2} \cdots \partial \hat{\theta}_n^{\alpha_n}}$  (17)
In Eq. (17), α is a multi-index defined by [28,29]
$\alpha = (\alpha_1, \alpha_2, \ldots, \alpha_n)$  (18)
$\alpha_j \in \{0, 1, 2, \ldots\}, \quad j \in \{1, 2, \ldots, n\}$  (19)
$|\alpha| = \alpha_1 + \alpha_2 + \cdots + \alpha_n$  (20)
$\alpha! = \alpha_1!\,\alpha_2! \cdots \alpha_n!$  (21)
$\mathbf{h}^{\alpha} = h_1^{\alpha_1}\,h_2^{\alpha_2} \cdots h_n^{\alpha_n}$  (22)
Since S(θ*) is the global minimum of the squared error of the fit, S(θ* + h) must be greater than S(θ*) for any small positive or negative values of h, and thus
$\sum_{|\alpha| = k} \dfrac{D^{\alpha} S(\boldsymbol{\theta}^{*})}{\alpha!}\,\mathbf{h}^{\alpha} \geq 0$  (23)
for any k ∈ {1, 2, …} based on Eq. (16), where k represents the order of derivatives for the squared error of the fit. The regions in which Eq. (23) is not satisfied should be ruled out from parameter space. In practice, to further reduce the remaining regions in the parameter space derived through expected fitting accuracy, the order of derivatives for the squared error of the fit in Eq. (23) could be iteratively increased (i.e., assign k = 1 in Eq. (23) first, and then k = 2, 3, …) until the new inequalities including the kth-order derivatives of S(θ) derived from Eq. (23) cannot rule out any new area from the remaining regions for parameters θ. The iteration also could be halted when the remaining regions are small enough for an exhaustive search. For example, when n = 1, Eq. (16) is simplified as
$S(\theta_1^{*} + h_1) = S(\theta_1^{*}) + \dfrac{dS(\theta_1^{*})}{d\hat{\theta}_1}h_1 + \dfrac{1}{2!}\,\dfrac{d^{2}S(\theta_1^{*})}{d\hat{\theta}_1^{2}}h_1^{2} + \dfrac{1}{3!}\,\dfrac{d^{3}S(\theta_1^{*})}{d\hat{\theta}_1^{3}}h_1^{3} + \cdots$  (24)
Given that h1 could be any small positive or negative value and each derivative term in Eq. (24) must have a positive value, the regions defined by Eqs. (25)–(28) should be ruled out from parameter space. For instance, as shown in Eq. (25), when k = 1, since h1 could have any small positive or negative value, the first-order derivative of the squared error of the fit must be zero at the global optimum θ1*; the regions where the first-order derivative is greater or less than zero should therefore be ruled out from the parameter space. In practice, the regions defined by Eqs. (25)–(28) can be ruled out from parameter space one by one until the new inequalities including the kth-order derivative of S(θ1) cannot rule out any new area from the remaining regions for parameter θ1, or until the remaining regions derived through the kth-order derivative of S(θ1) are small enough for practitioners to perform an exhaustive search using the available computational power:
$\dfrac{dS(\hat{\theta}_1)}{d\hat{\theta}_1} > 0 \;\cup\; \dfrac{dS(\hat{\theta}_1)}{d\hat{\theta}_1} < 0$  (25)
$\dfrac{d^{2}S(\hat{\theta}_1)}{d\hat{\theta}_1^{2}} < 0$  (26)
$\dfrac{d^{2k-1}S(\hat{\theta}_1)}{d\hat{\theta}_1^{2k-1}} > 0 \;\cup\; \dfrac{d^{2k-1}S(\hat{\theta}_1)}{d\hat{\theta}_1^{2k-1}} < 0$  (27)
$\dfrac{d^{2k}S(\hat{\theta}_1)}{d\hat{\theta}_1^{2k}} < 0$  (28)
where k represents the order of derivative for the squared error of the fit. Notably, the squared error of the fit is the summation of m terms. In cases when the ranges of parameters θ cannot be easily derived from an inequality related to the derivative of the squared error of the fit, the subregions of the regions defined by the inequality could be ruled out instead. For example, the first inequality in Eq. (25) is expressed as
$\dfrac{dS(\hat{\theta}_1)}{d\hat{\theta}_1} = \sum_{i=1}^{m} \dfrac{\partial \left[y^{(i)} - f\big(\mathbf{x}^{(i)}, \hat{\theta}_1\big)\right]^{2}}{\partial \hat{\theta}_1} > 0$  (29)
If the range of parameter θ1 cannot be easily derived from Eq. (29), the regions that should be ruled out from the parameter space can be defined conservatively by
$\dfrac{\partial \left[y^{(i)} - f\big(\mathbf{x}^{(i)}, \hat{\theta}_1\big)\right]^{2}}{\partial \hat{\theta}_1} > 0, \quad i \in \{1, 2, \ldots, m\}$  (30)
which is the intersection of the regions defined by m inequalities represented by the equation. Equation (30) is derived based on the condition that each of the m terms on the left side of Eq. (29) has a positive value, and the regions defined by Eq. (30) are therefore the subregions of the regions defined by Eq. (29).
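A pointwise version of the per-term condition behind Eqs. (29) and (30) can be sketched as follows. The derivative of the model, dmodel, is an assumed user-supplied callable, and the test is a numerical illustration of regions that the method characterizes in closed form:

```python
import numpy as np

def ruled_out_by_first_derivative(theta1, model, dmodel, x, y):
    """Conservative test based on Eqs. (29) and (30): if every term
    of dS/dtheta1 has the same nonzero sign at theta1, the first
    derivative cannot vanish there, so theta1 is ruled out."""
    # Each term of Eq. (29): d/dtheta1 of [y(i) - f(x(i), theta1)]^2
    terms = -2.0 * dmodel(x, theta1) * (y - model(x, theta1))
    return np.all(terms > 0.0) or np.all(terms < 0.0)
```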

Ultimately, the optimal values for parameters θ that minimize the squared error of the fit can be sought in the focused regions identified through the expected fitting accuracy and the derivatives of the squared error of the fit. As shown in the two case studies in Sec. 5, the search space for the optimal parameter estimators that correspond to the global minimum of the squared error of the fit is significantly reduced using the expected fitting accuracy and the derivatives of the squared error of the fit. The optimal parameter estimators can be found through an exhaustive search, also called grid search, with specified significant digits (e.g., four significant digits) in the focused regions, if adequate computational power is available. If an exhaustive search is not feasible, an iterative method that can solve nonlinear least squares problems subject to bounds, such as the trust region reflective method [21,22], could be employed to find the parameter estimators. The initial values of parameters to initialize the chosen iterative method could be specified based on previous experience, expert advice, linear least squares estimators (if linear transformation is feasible for the specified nonlinear model y = f (x, θ)) [3,30], graphical exploration [31], or the roots of parameter solution equations [14]. Notably, the parameter estimators derived through an iterative least squares method may not be the optimal parameter estimators since one or more local minima may exist in the focused regions.
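For a one-dimensional focused region, the exhaustive (grid) search mentioned above can be sketched minimally as below, where the grid spacing stands in for the specified significant digits:

```python
import numpy as np

def grid_search(rss_fn, lower, upper, step=1e-6):
    """Exhaustive search for the least squares estimator inside a
    focused interval [lower, upper]."""
    grid = np.arange(lower, upper, step)
    errors = np.array([rss_fn(t) for t in grid])
    best = np.argmin(errors)
    return grid[best], errors[best]
```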

4 Four Steps to Estimate Parameter Values in Nonlinear Models

In Sec. 3, a generic method is introduced to identify the focused regions in the parameter space for least squares parameter estimation in nonlinear models. The method rules out the suboptimal regions from the parameter space, and the remaining regions in the parameter space are defined as the focused regions to search for the optimal parameter estimators that correspond to the global minimum of the squared error of the fit. In this section, the procedure to implement the focused regions identification method is summarized in four steps. These four steps are illustrated in Fig. 2. Practitioners can follow these steps to identify the focused regions of their nonlinear curve fitting problems and search for the optimal parameter estimators in the focused regions.

Fig. 2: Four steps to apply the focused regions identification method

Step 1—Data collection and model selection: Research data (m data points) are first collected from scientific experiments or computer simulations. An appropriate mathematical model with a closed-form expression including n parameters is selected to fit the research data. Practitioners need to check data reliability (e.g., potential outliers) and the fidelity of the mathematical model before fitting the model to data. If the selected mathematical model is linear, linear regression is employed to estimate the values of the n parameters in the model that minimize the squared error of the fit.

Step 2—The ruled-out regions delineation using the expected fitting accuracy: If the selected mathematical model is nonlinear, the regions that cannot satisfy the expected fitting accuracy are ruled out from the parameter space. The n-dimensional parameter space is defined by the allowed value range of each parameter θj in the nonlinear model (e.g., $\theta_j^{(lb)} < \theta_j < \theta_j^{(ub)}$, where $\theta_j^{(lb)}$ and $\theta_j^{(ub)}$ are the lower and upper bounds of parameter θj, respectively, and j ∈ {1, 2, …, n}). When the expected fitting accuracy is defined by the coefficient of determination (e.g., R2 ≥ 0), the regions defined by Eq. (10) are ruled out from the parameter space. In cases when the ranges of parameters θ cannot be easily derived from Eq. (10), the regions defined by Eq. (11) are ruled out instead. When the expected fitting accuracy is defined by the mean squared error (e.g., MSE ≤ 10−2) or the squared error for each data point, Eq. (14) or Eq. (15) is employed in a similar manner. If the whole parameter space is ruled out in Step 2, it indicates that the fitting result cannot reach the expected fitting accuracy, and practitioners need to double check data reliability (e.g., potential outliers) and the fidelity of the nonlinear model employed to fit the data.

Step 3—The ruled-out regions delineation using the derivatives of the squared error of the fit: The remaining regions in the parameter space are further reduced using the derivatives of the squared error of the fit if the whole parameter space is not ruled out in Step 2. For the first-order derivatives of the squared error of the fit (k = 1), Eq. (23) leads to 2n inequalities. The regions defined by these 2n inequalities are ruled out from the parameter space first. In cases when the ranges of parameters θ cannot be easily derived from an inequality, the subregions of the regions defined by the inequality could be ruled out instead, as exemplified by Eq. (30). The order of the derivatives of the squared error of the fit is then iteratively increased (k = 2, 3, 4, …) until the new inequalities derived from Eq. (23) for the kth-order derivatives of the squared error of the fit cannot rule any new area out from the remaining parameter space. Practitioners also can stop the iteration when the remaining parameter space is small enough for an exhaustive search using the available computational power. If the whole parameter space is ruled out in Step 3, it indicates that a global minimum does not exist in the remaining regions derived in Step 2, and practitioners need to consider whether the expected fitting accuracy is overestimated in Step 2.

Step 4—The optimal parameter estimators search in the focused regions: If the whole parameter space is not ruled out in Step 3, the regions that are finally left over in the parameter space are defined as the focused regions. An exhaustive search with specified significant digits (e.g., four significant digits) is employed to find the optimal parameter estimators that correspond to the global minimum of the squared error of the fit in the focused regions if adequate computational power is available for the task. If the exhaustive search is not feasible, an iterative least squares method, such as the trust region reflective method [21,22], could be employed to find the parameter estimators in the focused regions. However, an iterative least squares method is not guaranteed to find the optimal parameter estimators (global minimum) in the focused regions.
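For a one-parameter model, the four steps compose as in the following sketch. Applying the rule-out tests pointwise over a candidate grid is a simplification for illustration; the method itself derives the ruled-out regions in closed form before any search:

```python
import numpy as np

def focused_region_fit(model, dmodel, x, y, theta_grid, r2e=0.0):
    """Steps 2-4 for a one-parameter model y = f(x, theta)."""
    dev_sq = (y - np.mean(y)) ** 2
    kept = []
    for t in theta_grid:
        res = y - model(x, t)
        terms = -2.0 * dmodel(x, t) * res     # terms of dS/dtheta
        if np.all(res ** 2 > (1.0 - r2e) * dev_sq):
            continue                           # Step 2: accuracy rule-out
        if np.all(terms > 0.0) or np.all(terms < 0.0):
            continue                           # Step 3: derivative rule-out
        kept.append(t)
    if not kept:
        raise ValueError("no candidates remain; revisit the expected accuracy")
    # Step 4: exhaustive search among the remaining (focused) candidates
    return min(kept, key=lambda t: np.sum((y - model(x, t)) ** 2))
```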

5 Case Studies of the Rumford Cooling Experiment and the Puromycin Experiment

To demonstrate the four-step procedure outlined in Sec. 4, a model based on Newton’s law of cooling and the Michaelis–Menten model are fitted to two data sets collected from the Rumford cooling experiment and the puromycin experiment, respectively, as two case studies. The results of these two case studies validate the effectiveness of the focused regions identification method. Notably, the original parameter space is infinite in both case studies. Using the method, the search space for the optimal least squares parameter estimators is significantly reduced from an infinite space to one or several finite focused regions. In addition, it is proved that the optimal parameter estimators corresponding to the global minimum of the squared error of the fit are located in the focused regions identified through the method in both case studies.

5.1 The Rumford Cooling Experiment.

The Rumford cooling experiment is an influential experiment in the history of cooling law [32]. In the experiment, a blunt steel borer is shoved against the bottom of the bore of a cylinder by a strong screw. The boring is performed for 30 min, and the temperature of the cylinder is recorded after the boring is stopped. A book written by Roller [33] includes more details of the Rumford cooling experiment, such as Rumford’s diagrams of his experimental apparatus. The cylinder cooling data set appears in Table 1. There are 13 data points in the experimental data set [16,33]. A model based on Newton’s law of cooling [16], as shown in Eq. (31), is fitted to the experimental data set by finding the optimal least squares estimator for parameter θ in the equation:
$y = 70e^{-\theta x} + 60$  (31)
where x denotes time, y represents cylinder temperature, and parameter θ is a constant cooling coefficient in the model.
Table 1: Temperature versus time data set for the Rumford cooling experiment in Sec. 5.1

No. | Time x (min) | Temperature y (°F)
1 | 4 | 126
2 | 5 | 125
3 | 7 | 123
4 | 12 | 120
5 | 14 | 119
6 | 16 | 118
7 | 20 | 116
8 | 24 | 115
9 | 28 | 114
10 | 31 | 113
11 | 34 | 112
12 | 37.5 | 111
13 | 41 | 110
To find the optimal least squares estimator for the cooling coefficient θ in the model, the focused regions identification method presented in Secs. 3 and 4 is employed. The squared error of the fit for this problem, also called the RSS, is
$S(\hat{\theta}) = \sum_{i=1}^{13}\left[y^{(i)} - 70e^{-\hat{\theta} x^{(i)}} - 60\right]^{2}$  (32)
where x(i) and y(i) represent the 13 data points in Table 1. The regions for the value of parameter θ that cannot satisfy the expected fitting accuracy are ruled out from the parameter space θ ∈ (−∞, ∞) first. Here, the expected fitting accuracy is specified such that the coefficient of determination, R2, is greater than or equal to zero (R2(e) = 0) for demonstration purposes. Equation (11) reduces to
$\left[y^{(i)} - 70e^{-\hat{\theta} x^{(i)}} - 60\right]^{2} > \left[y^{(i)} - \bar{y}\right]^{2}, \quad i \in \{1, 2, \ldots, 13\}$  (33)
Plugging the 13 data points into Eq. (33), the two regions that should be ruled out from parameter space are given by
$\hat{\theta} < -0.01699 \;\cup\; \hat{\theta} > 0.05102$  (34)

Notably, the regions shown in Eq. (34) are the intersection of the regions defined by the 13 inequalities represented by Eq. (33). The remaining region θ ∈ [−0.01699, 0.05102] is shown as the gray area in Fig. 3(a).
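The interval in Eq. (34) can also be checked numerically. The sketch below scans a θ grid and keeps the values not ruled out by Eq. (33); it should recover endpoints near −0.01699 and 0.05102. This is a verification aid under the model form of Eq. (31), not the closed-form derivation used above:

```python
import numpy as np

x = np.array([4, 5, 7, 12, 14, 16, 20, 24, 28, 31, 34, 37.5, 41])
y = np.array([126, 125, 123, 120, 119, 118, 116, 115, 114,
              113, 112, 111, 110.0])

theta = np.linspace(-0.1, 0.1, 200_001)
f = 70.0 * np.exp(-np.outer(theta, x)) + 60.0   # model of Eq. (31)
# Eq. (33): theta is ruled out only if all 13 squared residuals
# exceed the squared deviations from the mean (R2e = 0).
ruled_out = np.all((y - f) ** 2 > (y - y.mean()) ** 2, axis=1)
kept = theta[~ruled_out]
print(kept.min(), kept.max())   # approx. -0.01699 and 0.05102
```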

Fig. 3: Focused region identification process for the case study of the Rumford cooling experiment in Sec. 5.1: (a) ruled-out regions identification through expected fitting accuracy and (b) focused regions identification using derivatives of the squared error of the fit
The remaining region is further reduced using the derivatives of the squared error of the fit defined by Eq. (32). The first-order derivative of the squared error of the fit is
$\dfrac{dS(\hat{\theta})}{d\hat{\theta}} = \sum_{i=1}^{13} 140\,x^{(i)} e^{-\hat{\theta} x^{(i)}}\left[y^{(i)} - 70e^{-\hat{\theta} x^{(i)}} - 60\right]$  (35)
Since $x^{(i)} > 0$ and $e^{-\hat{\theta} x^{(i)}} > 0$, Eq. (25) leads to
$y^{(i)} - 70e^{-\hat{\theta} x^{(i)}} - 60 > 0, \quad i \in \{1, 2, \ldots, 13\}$  (36)
$y^{(i)} - 70e^{-\hat{\theta} x^{(i)}} - 60 < 0, \quad i \in \{1, 2, \ldots, 13\}$  (37)
The ruled-out regions are derived by plugging the 13 data points into Eqs. (36) and (37) as
$\hat{\theta} < 0.008207 \;\cup\; \hat{\theta} > 0.01505$  (38)
The remaining region to search for the optimal parameter estimator is therefore reduced to θ ∈ [0.008207, 0.01505] using the first-order derivative of the squared error of the fit. The second-order derivative of the squared error of the fit is
$\dfrac{d^{2}S(\hat{\theta})}{d\hat{\theta}^{2}} = \sum_{i=1}^{13} 140\,\big(x^{(i)}\big)^{2} e^{-\hat{\theta} x^{(i)}}\left[140e^{-\hat{\theta} x^{(i)}} + 60 - y^{(i)}\right]$  (39)
Since $\big(x^{(i)}\big)^{2} > 0$ and $e^{-\hat{\theta} x^{(i)}} > 0$, Eq. (26) leads to
$140e^{-\hat{\theta} x^{(i)}} + 60 - y^{(i)} < 0, \quad i \in \{1, 2, \ldots, 13\}$  (40)
Plugging the 13 data points into Eq. (40), the ruled-out region derived by the second-order derivative of the squared error of the fit is
$\hat{\theta} > 0.1880$  (41)

Since no new area can be ruled out from the remaining region by Eq. (40), θ ∈ [0.008207, 0.01505] is defined as the focused region to search for the optimal parameter estimator that corresponds to the global minimum of the squared error of the fit. The focused region is shown as the gray area in Fig. 3(b). Figure 3(b) indicates that the focused regions identification method presented in this paper significantly reduces the search space for the optimal least squares parameter estimator θ* in this case study from an infinite space θ ∈ (−∞, ∞) to a small region of length 0.006843.

Exhaustive search with four significant digits in the focused region θ ∈ [0.008207, 0.01505] yields the optimal least squares parameter estimator for the cooling coefficient in the model as θ* = 0.009415, which is shown as the dot in Fig. 3(b). The coefficient of determination, R2, for this result is 0.8682. The RSS is 44.16, and it is the global minimum of the nonlinear least squares parameter estimation problem. Notably, this result is the same as the parameter estimator derived in previous studies [16]. However, previous studies using graphical exploration and iterative methods to search for the least squares parameter estimator cannot guarantee that the result is the optimal estimator that corresponds to the global minimum of the squared error of the fit, while the focused regions identification method does.
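The following sketch reproduces this search under the model form of Eq. (31) and the focused region of Eq. (38); it should return values near θ* = 0.009415 with an RSS near 44.16:

```python
import numpy as np

x = np.array([4, 5, 7, 12, 14, 16, 20, 24, 28, 31, 34, 37.5, 41])
y = np.array([126, 125, 123, 120, 119, 118, 116, 115, 114,
              113, 112, 111, 110.0])

rss = lambda t: np.sum((y - 70.0 * np.exp(-t * x) - 60.0) ** 2)

# Exhaustive search inside the focused region of Eq. (38)
grid = np.arange(0.008207, 0.01505, 1e-6)
best = min(grid, key=rss)
print(best, rss(best))   # approx. 0.009415 and 44.16
```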

The effectiveness of the focused regions identification method could be compared with extant iterative least squares methods. The parameter estimators derived through the Levenberg–Marquardt method using eight different initial guesses are shown in Table 2. Notably, Fig. 3 only shows a small portion of the parameter space θ ∈ (−∞, ∞), and several local minima exist in the parameter space. The results in Table 2 show that the parameter estimator derived through the Levenberg–Marquardt method is sensitive to the initial guess. The Levenberg–Marquardt iterative algorithm converges to the optimal parameter estimator θ* = 0.009415 only when the initial guess employed to initialize the iterative algorithm is close to the optimal estimator. Importantly, the Levenberg–Marquardt iterative algorithm stops when it finds a local minimum in this case study. Practitioners are not able to know whether the parameter estimator derived through the Levenberg–Marquardt method is the optimal estimator that corresponds to the global minimum of the squared error of the fit, so it is difficult for practitioners to determine when the search process should be halted. It is also not feasible to cover the infinite parameter space θ ∈ (−∞, ∞) with an unlimited number of initial guesses in this case study. In contrast, using the focused regions identification method in this case study, the regions in which the optimal parameter estimator cannot be located are ruled out from the parameter space, and the remaining region is defined as the focused region. Practitioners are therefore guaranteed to find the optimal parameter estimator by performing an exhaustive search in the focused region, and practitioners can stop the search process once the optimal parameter estimator is derived.

Table 2: The parameter estimators derived through the Levenberg–Marquardt method for the Rumford cooling experiment case study in Sec. 5.1

No. | Initial guess θ0 | Solution θ | RSS | R²
1 | −100 | Nonconvergence | N/A | N/A
2 | −10 | −10 | Infinite | −Infinite
3 | −1 | −0.7073 | 7.588 × 10^28 | −2.266 × 10^26
4 | 0.01 | 0.009415 | 44.16 | 0.8682
5 | 1 | 0.009415 | 44.16 | 0.8682
6 | 5 | 4.996 | 4.269 × 10^4 | −126.5
7 | 10 | 10 | 4.269 × 10^4 | −126.5
8 | 100 | 100 | 4.269 × 10^4 | −126.5

The effectiveness of the focused regions identification method also could be compared with extant heuristic methods. A genetic algorithm is employed to search for the optimal least squares parameter estimator in this case study. The population size of the genetic algorithm is specified as 1000 for demonstration purposes. Since random variation is involved in the iterative search process of the algorithm, the genetic algorithm initialized by 1000 random seeds is run 100 times, and the parameter estimator that has the minimum RSS is chosen as the result. The parameter estimator derived by the genetic algorithm is θ = 0.009771, which has a 3.781% deviation from the optimal parameter estimator. The coefficient of determination, R2, is 0.8605, and the RSS is 46.72. This result indicates that the fitting accuracy of the genetic algorithm with a population size of 1000 is lower than that of the focused regions identification method in this case study. Although the genetic algorithm with a larger population size may be able to find the optimal parameter estimator in this case study, practitioners never know whether they should run the algorithm again because they do not know whether the result is the global minimum. In contrast, using the focused regions identification method, practitioners are guaranteed that the optimal parameter estimator is located in the focused regions, and they can stop the search process once the optimal parameter estimator is derived.

5.2 The Puromycin Experiment.

In the puromycin experiment, the substrate concentration and reaction velocity are measured once an enzyme is treated with puromycin. The 12 data points collected from the puromycin experiment appear in Table 3 [16]. The Michaelis–Menten model [7], also called the Michaelis–Menten equation, is fitted to the experimental data set by finding the optimal least squares estimators for the two parameters in the model. The Michaelis–Menten model is one of the best-known models of enzyme kinetics [6]. Fitting the Michaelis–Menten model to the data set collected from the puromycin experiment has been used as a benchmark problem in nonlinear regression analysis [16,34]. The mathematical form of the Michaelis–Menten model is
$y = \dfrac{\theta_1 x}{\theta_2 + x}$  (42)
where x represents substrate concentration, y denotes reaction velocity, parameter θ1 represents the maximum reaction velocity achieved in a biochemical experiment, and parameter θ2 is the Michaelis constant. Both parameters θ1 and θ2 in Eq. (42) are positive constants. These two parameters are often represented by Vmax and KM, respectively, in biochemistry.
Table 3: Reaction velocity versus substrate concentration data set for the puromycin experiment in Sec. 5.2

No. | Substrate concentration x (parts per million) | Reaction velocity y (counts/min²)
1 | 0.02 | 47
2 | 0.02 | 76
3 | 0.06 | 97
4 | 0.06 | 107
5 | 0.11 | 123
6 | 0.11 | 139
7 | 0.22 | 152
8 | 0.22 | 159
9 | 0.56 | 191
10 | 0.56 | 201
11 | 1.10 | 200
12 | 1.10 | 207
The focused regions identification method presented in Secs. 3 and 4 is employed to find the optimal least squares estimators for parameters θ1 and θ2 in Eq. (42). The squared error of the fit for the problem, also called the RSS, is expressed as
$S(\hat{\boldsymbol{\theta}}) = \sum_{i=1}^{12}\left[y^{(i)} - \dfrac{\hat{\theta}_1 x^{(i)}}{\hat{\theta}_2 + x^{(i)}}\right]^{2}$  (43)
where θ = [θ1, θ2], and x(i) and y(i) represent the 12 data points in Table 3. The regions in the space of parameters θ1 and θ2 that cannot satisfy the expected fitting accuracy are ruled out from the parameter space first. In this case study, the expected fitting accuracy is specified such that the squared error for each data point is less than or equal to 1600 (MSE(e) = 1600) for demonstration purposes. In other words, the absolute error of the predicted reaction velocity y at x(i) should be less than or equal to 40 for each data point in Table 3. Equation (15) is expressed as
$\left[y^{(i)} - \dfrac{\hat{\theta}_1 x^{(i)}}{\hat{\theta}_2 + x^{(i)}}\right]^{2} > 1600, \quad i \in \{1, 2, \ldots, 12\}$  (44)

Plugging the 12 data points into Eq. (44), the 16 regions that should be ruled out from the parameter space θ1 ∈ (0, ∞) and θ2 ∈ (0, ∞) are provided in Appendix A. Notably, the 16 regions provided in Appendix A are the union of the regions defined by the 12 inequalities represented by Eq. (44) since the absolute error for each data point cannot exceed 40. The remaining regions in the space of parameters θ1 and θ2 are shown in Fig. 4.
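The union structure of these ruled-out regions is easy to express pointwise, as in the sketch below; this is a numerical membership test for illustration, while the closed-form regions themselves are listed in Appendix A:

```python
import numpy as np

x = np.array([0.02, 0.02, 0.06, 0.06, 0.11, 0.11,
              0.22, 0.22, 0.56, 0.56, 1.10, 1.10])
y = np.array([47, 76, 97, 107, 123, 139, 152, 159, 191, 201, 200, 207.0])

def ruled_out(theta1, theta2, bound=1600.0):
    # Eq. (44): (theta1, theta2) is ruled out as soon as ANY data
    # point's squared error exceeds the bound (absolute error > 40),
    # so the ruled-out set is the union over the 12 inequalities.
    return np.any((y - theta1 * x / (theta2 + x)) ** 2 > bound)
```

Evaluating this test on a dense (θ1, θ2) grid reproduces the remaining regions shown in Fig. 4.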

Fig. 4: Ruled-out regions derived by expected fitting accuracy in the case study of the puromycin experiment in Sec. 5.2
The remaining regions are further reduced using the derivatives of the squared error of the fit defined by Eq. (43). The first-order partial derivative for parameter θ1 is
$\dfrac{\partial S(\hat{\boldsymbol{\theta}})}{\partial \hat{\theta}_1} = -\sum_{i=1}^{12} \dfrac{2x^{(i)}}{\hat{\theta}_2 + x^{(i)}}\left[y^{(i)} - \dfrac{\hat{\theta}_1 x^{(i)}}{\hat{\theta}_2 + x^{(i)}}\right]$  (45)
Since $x^{(i)} > 0$ and $\big(\hat{\theta}_2 + x^{(i)}\big)^{2} > 0$, the ruled-out regions are defined by
$y^{(i)} - \dfrac{\hat{\theta}_1 x^{(i)}}{\hat{\theta}_2 + x^{(i)}} > 0, \quad i \in \{1, 2, \ldots, 12\}$  (46)
$y^{(i)} - \dfrac{\hat{\theta}_1 x^{(i)}}{\hat{\theta}_2 + x^{(i)}} < 0, \quad i \in \{1, 2, \ldots, 12\}$  (47)
The 12 data points in Table 3 are plugged into Eqs. (46) and (47), and the 12 regions that should be ruled out from the parameter space are provided in Appendix B. Similarly, the first-order partial derivative for parameter θ2 is
$\dfrac{\partial S(\hat{\boldsymbol{\theta}})}{\partial \hat{\theta}_2} = \sum_{i=1}^{12} \dfrac{2\hat{\theta}_1 x^{(i)}}{\big(\hat{\theta}_2 + x^{(i)}\big)^{2}}\left[y^{(i)} - \dfrac{\hat{\theta}_1 x^{(i)}}{\hat{\theta}_2 + x^{(i)}}\right]$  (48)

Since x(i) > 0, θ1 > 0, and θ2 > 0, the ruled-out regions are also defined by Eqs. (46) and (47), and no new region can be ruled out from the space of parameters θ1 and θ2 using the first-order partial derivative for parameter θ2.

The overall ruled-out regions derived by the expected fitting accuracy and the first-order derivatives are provided in Appendix C, which are the union of the 16 regions defined in Appendix A and the 12 regions defined in Appendix B. The regions that are finally left over in the space of parameters θ1 and θ2 are shown in Fig. 5. Since the search space for the optimal least squares parameter estimators is significantly reduced and it is feasible to perform an exhaustive search in the leftover regions shown in Fig. 5, these leftover regions are defined as the focused regions in this case study. It is also proved that no new region can be ruled out using the second-order derivatives of the squared error of the fit. Detailed mathematical derivations for the second-order derivatives are not included to conserve space.

Fig. 5: Focused regions identified in the case study of the puromycin experiment in Sec. 5.2

The optimal least squares estimators for parameters θ1 and θ2 are derived through an exhaustive search with four significant digits in the focused regions as θ1* = 212.7 and θ2* = 0.06412. The optimal estimators are shown as the dot in Fig. 5. The coefficient of determination, R2, for this result is 0.9613. The RSS is 1195. This result is the same as that reported in previous studies [14,16,34]. However, to our knowledge, this is the first time the result has been proved to be the optimal parameter estimators that correspond to the global minimum of the squared error of the fit for this benchmark problem. In contrast, previous studies using other approaches cannot guarantee that the final parameter estimators derived through those approaches correspond to the global minimum of the problem. Specifically, using the focused regions identification method, for all parameter estimators located in the ruled-out regions defined in Appendix C, either the corresponding squared error of the fit, RSS, is larger than 1600 or the first-order derivatives of the squared error of the fit are nonzero; the parameter estimators θ1* = 212.7 and θ2* = 0.06412 derived through the exhaustive search in the focused regions are therefore the optimal parameter estimators that correspond to the global minimum of the squared error of the fit in this case study.
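A two-dimensional grid search over the focused regions can be sketched as follows. The grid bounds below are illustrative assumptions rather than the exact focused regions of Fig. 5, and a refinement pass around the coarse minimum recovers the four significant digits reported above:

```python
import numpy as np

x = np.array([0.02, 0.02, 0.06, 0.06, 0.11, 0.11,
              0.22, 0.22, 0.56, 0.56, 1.10, 1.10])
y = np.array([47, 76, 97, 107, 123, 139, 152, 159, 191, 201, 200, 207.0])

# Coarse grid assumed to cover the focused regions; refine the
# spacing around the minimum for more significant digits.
t1 = np.linspace(150.0, 300.0, 751)
t2 = np.linspace(0.01, 0.15, 701)
T1, T2 = np.meshgrid(t1, t2, indexing="ij")
rss = ((y - T1[..., None] * x / (T2[..., None] + x)) ** 2).sum(axis=-1)
i, j = np.unravel_index(np.argmin(rss), rss.shape)
print(T1[i, j], T2[i, j], rss[i, j])   # near 212.7, 0.0641, and 1195
```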

The fitting accuracy of the focused regions identification method is compared with the popular linear transformation method introduced by Lineweaver and Burk [19]. The Michaelis–Menten model is transformed into the Lineweaver–Burk equation, as shown in Eq. (2) in Sec. 2, and the values for parameters θ1 and θ2 are estimated through linear regression. The parameter estimators derived by the linear transformation method are θ1 = 195.8 and θ2 = 0.04841. The coefficient of determination, R2, is 0.9378, and the RSS is 1920. These results show that the fitting accuracy of the linear transformation method is lower than that of the focused regions identification method in this case study. Importantly, the parameter estimators derived by the linear transformation method have noticeable deviations from the optimal parameter estimators (7.945% for the maximum reaction velocity, θ1, and 24.50% for the Michaelis constant, θ2). Such deviations can lead to a significant error when these estimated parameter values are employed to design other biochemical reactions.

The effectiveness of the focused regions identification method could be compared with extant iterative methods. The parameter estimators derived through the Levenberg–Marquardt method using six different initial guesses are found in our previous study [14]. Similar to the results in Sec. 5.1, the parameter estimators derived by the Levenberg–Marquardt method are sensitive to the initial guesses. The Levenberg–Marquardt iterative algorithm converges to the optimal parameter estimators only when the initial guesses for parameter values are close to the optimal estimators. Although practitioners can find reasonable initial guesses using the solution interval method presented in our previous study [14], the solution interval method does not guarantee that it outputs the optimal parameter estimators that minimize the squared error of the fit (global minimum) in this case study, and practitioners are therefore unable to know when to stop the search process using an iterative local method with multiple initial guesses. In contrast, practitioners are guaranteed that the parameter estimators derived by the focused regions identification method in this paper are the optimal estimators that correspond to the global minimum of the squared error of the fit in this case study, and practitioners can stop the search process once the optimal parameter estimators are derived.

The effectiveness of the focused regions identification method also could be compared with extant heuristic methods. The genetic algorithm with a population size of 1000 is run 100 times, and the parameter estimators that have the minimum RSS are selected from the 100 results. The parameter estimators derived by the genetic algorithm are θ1 = 184.0 and θ2 = 0.03514. The coefficient of determination, R2, is 0.8825, and the RSS is 3626. Notably, the parameter estimators derived by the genetic algorithm with the population size of 1000 have noticeable deviations from the optimal parameter estimators (13.49% for the maximum reaction velocity, θ1, and 45.20% for the Michaelis constant, θ2), which can lead to significant error in biochemical reaction design. As stated in Sec. 2, the genetic algorithm with a larger population size still cannot guarantee that the results are the optimal parameter estimators that minimize the squared error of the fit, while the focused regions identification method does in this case study.

6 Outlier Detection for Nonlinear Curve Fitting Problems

The inequalities derived through the expected fitting accuracy and the derivatives of the squared error of the fit in Sec. 3, such as Eqs. (11), (15), and (23), also could be employed to detect potential outliers in nonlinear curve fitting problems. The principle of deleting one data point at a time is commonly used to detect potential outliers in linear curve fitting problems [35]. Specifically, θ represents the estimated values of parameters in a linear model computed from all the m data points, and θ(i) represents the estimated values of parameters θ in the linear model computed from m−1 data points (without the ith data point). The difference between θ and θ(i) indicates the impact of the ith data point on the estimated values of parameters θ in the linear model. However, parameter estimation in nonlinear models is usually more computationally expensive compared to linear models, and therefore it may not be feasible to compare θ and θ(i) for each of these m data points in practice when fitting a nonlinear model to data.

As an alternative, the inequalities derived through the expected fitting accuracy and the derivatives of the squared error of the fit in Sec. 3 could be employed to detect potential outliers in nonlinear curve fitting problems. For example, using Eq. (11), the regions that should be ruled out from the parameter space computed from all the m data points, denoted Cn, could be compared with the ruled-out regions computed from m−1 data points (without the ith data point), denoted Cn(i), where i ∈ {1, 2, …, m}. When there is a significant difference between Cn and Cn(i), practitioners are recommended to check whether the ith data point is an outlier.

The approach to detect potential outliers in nonlinear curve fitting problems is demonstrated and validated using the case study of the Rumford cooling experiment in Sec. 5.1. An outlier (time x = 2 min, temperature y = 128 °F) is inserted into the original data set shown in Table 1 as the first data point in the modified data set for demonstration purposes. Equation (33) is employed to detect the potential outliers for the problem. Plugging the 14 data points (the inserted outlier and the 13 original data points in Table 1) into Eq. (33), the two regions defined by Eq. (49) should be ruled out from the parameter space θ ∈ (−∞, ∞):
$\hat{\theta} < -0.05502 \;\cup\; \hat{\theta} > 0.09526$  (49)

The remaining region is θ ∈ [−0.05502, 0.09526] with a length of 0.1503. The length derived by Eq. (33) using all 14 data points is set as a benchmark. The impact of the ith data point on the length of the remaining region derived by Eq. (33) is then evaluated by deleting the ith data point from the modified data set. Figure 6 shows the absolute percentage deviation from the benchmark length after deleting each data point in the modified data set. The first data point (i.e., the inserted outlier) is identified as a potential outlier in Fig. 6 since its deviation is significantly greater than that of any other data point. This result validates the effectiveness of the potential outlier detection approach presented in this section, especially given that the outlier is difficult to identify by visual inspection in the Rumford cooling experiment case study since it follows the decreasing temperature trend well.
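A sketch of this leave-one-out computation follows, using the model form of Eq. (31) and the modified 14-point data set; the benchmark interval length should come out near 0.1503, and the first data point should produce by far the largest deviation (absolute deviations are computed here, and the ranking matches the percentage scale of Fig. 6):

```python
import numpy as np

x_all = np.array([2, 4, 5, 7, 12, 14, 16, 20, 24, 28, 31, 34, 37.5, 41])
y_all = np.array([128, 126, 125, 123, 120, 119, 118, 116, 115, 114,
                  113, 112, 111, 110.0])
theta = np.linspace(-0.2, 0.2, 400_001)

def remaining_length(x, y):
    # Length of the interval kept by Eq. (33) with R2e = 0
    f = 70.0 * np.exp(-np.outer(theta, x)) + 60.0
    ruled = np.all((y - f) ** 2 > (y - y.mean()) ** 2, axis=1)
    return np.count_nonzero(~ruled) * (theta[1] - theta[0])

base = remaining_length(x_all, y_all)      # benchmark, approx. 0.1503
deviations = [abs(remaining_length(np.delete(x_all, i),
                                   np.delete(y_all, i)) - base)
              for i in range(x_all.size)]
print(np.argmax(deviations))               # index 0: the inserted outlier
```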

Fig. 6: Absolute percentage deviation from the benchmark length for each data point in Sec. 6

7 Conclusions and Future Work

A focused regions identification method is introduced for least squares parameter estimation in nonlinear models. The novelty of this method is that it uses expected fitting accuracy (e.g., the coefficient of determination, R2, or the mean squared error, MSE) and derivatives of the squared error of the fit to rule out the suboptimal regions of the parameter space where the optimal parameter estimators cannot exist. A new approach to detect potential outliers based on the principle of deleting one data point at a time is also proposed for nonlinear curve fitting problems. The new approach employs the inequalities derived through expected fitting accuracy and derivatives of the squared error of the fit to measure the deviation that results from deleting each data point. The application of the focused regions identification method is demonstrated through two case studies in which a model based on Newton’s law of cooling and the Michaelis–Menten model are fitted to two data sets collected from the Rumford cooling experiment and the puromycin experiment, respectively. The application of the potential outlier detection approach is demonstrated through the case study of the Rumford cooling experiment.

Using the focused regions identification method, practitioners can significantly reduce the search space for the optimal parameter estimators that correspond to the global minimum of the squared error of the fit for nonlinear curve fitting problems. The ruled-out regions may include one or more local minima in which some iterative local methods, such as the Gauss–Newton method and the Levenberg–Marquardt method, can be trapped. Practitioners can then find the guaranteed optimal parameter estimators and their corresponding global minimum for the nonlinear curve fitting problem through an exhaustive search in the remaining regions in the parameter space, defined as the focused regions, if adequate computational power is available for the exhaustive search. Potential outliers in the data sets of nonlinear curve fitting problems also can be detected efficiently using the new approach proposed in this paper.

This work has limitations that offer opportunities for future research. As stated in Sec. 3, the focused regions identification method presented in this paper applies only to nonlinear models that have a closed-form expression, and the method becomes inefficient when the nonlinear model has more than ten parameters. Future research may extend the method to nonlinear models without a closed-form expression, such as differential equation models, or to nonlinear models with a greater number of parameters. In addition, in both case studies in this paper, the alternative inequalities that do not include a summation, such as Eqs. (11), (15), and (30), are employed to rule out suboptimal regions from the parameter space. Notably, the suboptimal regions ruled out by these alternative inequalities are subregions of the regions defined by the original inequalities derived through expected fitting accuracy and derivatives of the squared error of the fit, e.g., Eqs. (10), (14), and (29). These alternative inequalities may be modified in future research to rule out larger suboptimal regions, yielding smaller focused regions and a less computationally expensive exhaustive search for the optimal parameter estimators.

Acknowledgment

This material is partially supported by the Air Force Office of Scientific Research (Grant No. FA9550-18-0088). Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the sponsor.

Conflict of Interest

There are no conflicts of interest.

Data Availability Statement

The authors attest that all data for this study are included in the paper.

Appendix A: The Regions That Cannot Satisfy the Expected Squared Error for Each Data Point in Sec. 5.2

The following 16 regions are ruled out from the parameter space by Eq. (44) using the expected squared error for each data point in Sec. 5.2. The remaining regions are shown in Fig. 4.
(A1)
(A2)
(A3)
(A4)
(A5)
(A6)
(A7)
(A8)
(A9)
(A10)
(A11)
(A12)
(A13)
(A14)
(A15)
(A16)

Appendix B: The Regions That Are Ruled Out Using the First-Order Derivatives in Sec. 5.2

The following 12 regions are ruled out from the parameter space by Eqs. (46) and (47) using the first-order derivatives in Sec. 5.2.
(B1)
(B2)
(B3)
(B4)
(B5)
(B6)
(B7)
(B8)
(B9)
(B10)
(B11)
(B12)

Appendix C: Focused Regions Identified in Sec. 5.2

Taking the union of the regions defined by Eqs. (A1)–(A16) and the regions defined by Eqs. (B1)–(B12), the following 32 regions are ruled out from the parameter space. The remaining regions in the parameter space are defined as the focused regions for the nonlinear curve fitting problem in Sec. 5.2 and are shown in Fig. 5. A sketch of the interval-union step follows the list below.
(C1)
(C2)
(C3)
(C4)
(C5)
(C6)
(C7)
(C8)
(C9)
(C10)
(C11)
(C12)
(C13)
(C14)
(C15)
(C16)
(C17)
(C18)
(C19)
(C20)
(C21)
(C22)
(C23)
(C24)
(C25)
(C26)
(C27)
(C28)
(C29)
(C30)
(C31)
(C32)
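In the one-parameter case of Sec. 5.1, the ruled-out regions are intervals, and their union can be formed with the standard sort-and-merge procedure sketched below; the two-parameter regions of Sec. 5.2 require the analogous union in the (θ1, θ2) plane. The interval endpoints in the usage comment are hypothetical.

```python
def union(intervals):
    """Sort-and-merge union of 1-D intervals given as (lo, hi) pairs."""
    merged = []
    for lo, hi in sorted(intervals):
        if merged and lo <= merged[-1][1]:
            merged[-1][1] = max(merged[-1][1], hi)   # overlap: extend
        else:
            merged.append([lo, hi])
    return [tuple(iv) for iv in merged]

# Hypothetical endpoints for illustration:
# union([(-1.0, -0.055), (-0.1, 0.0), (0.095, 2.0)])
# -> [(-1.0, 0.0), (0.095, 2.0)]
```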

References

1. Rhinehart, R. R., 2016, Nonlinear Regression Modeling for Engineering Applications: Modeling, Model Validation, and Enabling Design of Experiments, John Wiley & Sons, Chichester, UK.
2. Hauser, J. R., 2009, Numerical Methods for Nonlinear Engineering Models, Springer Science & Business Media, Berlin/Heidelberg, Germany.
3. Archontoulis, S. V., and Miguez, F. E., 2015, "Nonlinear Regression Models and Applications in Agricultural Research," Agron. J., 107(2), pp. 786–798.
4. ASTM, 2014, D5229/D5229M-14 Standard Test Method for Moisture Absorption Properties and Equilibrium Conditioning of Polymer Matrix Composite Materials, ASTM International, West Conshohocken, PA, https://www.astm.org/d5229_d5229m-12.html
5. Crank, J., 1975, The Mathematics of Diffusion, Oxford University Press, Oxford, UK.
6. Voet, D., and Voet, J. G., 2010, Biochemistry, John Wiley & Sons, Hoboken, NJ.
7. Johnson, K. A., and Goody, R. S., 2011, "The Original Michaelis Constant: Translation of the 1913 Michaelis–Menten Paper," Biochemistry, 50(39), pp. 8264–8269.
8. Tarantola, A., 2005, Inverse Problem Theory and Methods for Model Parameter Estimation, Society for Industrial and Applied Mathematics and Mathematical Programming Society, Philadelphia, PA.
9. Levenberg, K., 1944, "A Method for the Solution of Certain Non-Linear Problems in Least Squares," Quart. Appl. Math., 2(2), pp. 164–168.
10. Marquardt, D. W., 1963, "An Algorithm for Least-Squares Estimation of Nonlinear Parameters," J. Soc. Ind. Appl. Math., 11(2), pp. 431–441.
11. Nelder, J. A., and Mead, R., 1965, "A Simplex Method for Function Minimization," Comput. J., 7(4), pp. 308–313.
12. Mitchell, M., 1998, An Introduction to Genetic Algorithms, MIT Press, Cambridge, MA.
13. Kirkpatrick, S., Gelatt, C. D., and Vecchi, M. P., 1983, "Optimization by Simulated Annealing," Science, 220(4598), pp. 671–680.
14. Zhang, G., Allaire, D., and Cagan, J., 2021, "Taking the Guess Work Out of the Initial Guess: A Solution Interval Method for Least Squares Parameter Estimation in Nonlinear Models," ASME J. Comput. Inf. Sci. Eng., 21(2), p. 021011.
15. Jennrich, R. I., and Ralston, M. L., 1979, "Fitting Nonlinear Models to Data," Ann. Rev. Biophys. Bioeng., 8(1), pp. 195–238.
16. Bates, D. M., and Watts, D. G., 1988, Nonlinear Regression Analysis and Its Applications, John Wiley & Sons, New York.
17. Björck, Å., 1996, Numerical Methods for Least Squares Problems, Society for Industrial and Applied Mathematics, Philadelphia, PA.
18. Gelman, A., Carlin, J. B., Stern, H. S., Dunson, D. B., Vehtari, A., and Rubin, D. B., 2013, Bayesian Data Analysis, CRC Press, Boca Raton, FL.
19. Lineweaver, H., and Burk, D., 1934, "The Determination of Enzyme Dissociation Constants," J. Am. Chem. Soc., 56(3), pp. 658–666.
20. Gauss, C. F., 1857, Theory of the Motion of the Heavenly Bodies Moving About the Sun in Conic Sections: A Translation of Gauss's "Theoria Motus." With an Appendix, Little, Brown and Company, Boston, MA. https://archive.org/details/motionofheavenly00gausrich/page/n15/mode/2up
21. Coleman, T. F., and Li, Y., 1994, "On the Convergence of Interior-Reflective Newton Methods for Nonlinear Minimization Subject to Bounds," Math. Program., 67(1–3), pp. 189–224.
22. Coleman, T. F., and Li, Y., 1996, "An Interior Trust Region Approach for Nonlinear Minimization Subject to Bounds," SIAM J. Optim., 6(2), pp. 418–445.
23. Moré, J. J., 1983, "Recent Developments in Algorithms and Software for Trust Region Methods," Mathematical Programming: The State of the Art, A. Bachem, B. Korte, and M. Grötschel, eds., Springer-Verlag, Berlin, Germany, pp. 258–287.
24. Sobieszczanski-Sobieski, J., Morris, A., and VanTooren, M., 2015, Multidisciplinary Design Optimization Supported by Knowledge Based Engineering, John Wiley & Sons, Chichester, UK.
25. Kennedy, J., and Eberhart, R., 1995, "Particle Swarm Optimization," Proceedings of the International Conference on Neural Networks, Perth, WA, Australia, Nov. 27–Dec. 1, pp. 1942–1948.
26. Glover, F., 1989, "Tabu Search—Part I," ORSA J. Comput., 1(3), pp. 190–206.
27. Glover, F., 1990, "Tabu Search—Part II," ORSA J. Comput., 2(1), pp. 4–32.
28. Folland, G., 2005, "Higher-Order Derivatives and Taylor's Formula in Several Variables," Preprint, pp. 1–4. http://sites.math.washington.edu/~folland/Math425/taylor2.pdf
29. Reed, M., and Simon, B., 1980, Methods of Modern Mathematical Physics I: Functional Analysis, Academic Press, Inc., San Diego, CA.
30. Motulsky, H. J., and Ransnas, L. A., 1987, "Fitting Curves to Data Using Nonlinear Regression: A Practical and Nonmathematical Review," FASEB J., 1(5), pp. 365–374.
31. Motulsky, H., and Christopoulos, A., 2004, Fitting Models to Biological Data Using Linear and Nonlinear Regression: A Practical Guide to Curve Fitting, Oxford University Press, Oxford, UK.
32. Besson, U., 2012, "The History of the Cooling Law: When the Search for Simplicity Can Be an Obstacle," Sci. Educ., 21(8), pp. 1085–1110.
33. Roller, D., 1950, CASE 3 The Early Development of the Concepts of Temperature and Heat—The Rise and Decline of the Caloric Theory, Harvard University Press, Cambridge, MA, https://www.degruyter.com/document/doi/10.4159/harvard.9780674599161.c3/html
34. Montgomery, D. C., Peck, E. A., and Vining, G. G., 2006, Introduction to Linear Regression Analysis, Wiley-Interscience, Hoboken, NJ.
35. Rousseeuw, P. J., and Leroy, A. M., 1987, Robust Regression and Outlier Detection, John Wiley & Sons, New York.