Modeling Data and Curve Fitting¶
A common use of least-squares minimization is curve fitting, where one has a parametrized model function meant to explain some phenomenon and wants to adjust the numerical values for the model so that it most closely matches some data. With scipy, such problems are typically solved with scipy.optimize.curve_fit, which is a wrapper around scipy.optimize.leastsq. Since lmfit's minimize() is also a high-level wrapper around scipy.optimize.leastsq it can be used for curve-fitting problems. While it offers many benefits over scipy.optimize.leastsq, using minimize() for many curve-fitting problems still requires more effort than using scipy.optimize.curve_fit.
The Model class in lmfit provides a simple and flexible approach to curve-fitting problems. Like scipy.optimize.curve_fit, a Model uses a model function – a function that is meant to calculate a model for some phenomenon – and then uses that to best match an array of supplied data. Beyond that similarity, its interface is rather different from scipy.optimize.curve_fit, for example in that it uses Parameters, but also offers several other important advantages.
In addition to allowing you to turn any model function into a curve-fitting method, lmfit also provides canonical definitions for many known lineshapes such as Gaussian or Lorentzian peaks and Exponential decays that are widely used in many scientific domains. These are available in the models module that will be discussed in more detail in the next chapter (Built-in Fitting Models in the models module). We mention it here as you may want to consult that list before writing your own model. For now, we focus on turning Python functions into high-level fitting models with the Model class, and using these to fit data.
Motivation and simple example: Fit data to Gaussian profile¶
Let's start with a simple and common example of fitting data to a Gaussian peak. As we will see, there is a built-in GaussianModel class that can help do this, but here we'll build our own. We start with a simple definition of the model function:
from numpy import exp, linspace, random


def gaussian(x, amp, cen, wid):
    return amp * exp(-(x-cen)**2 / wid)
We want to use this function to fit to data \(y(x)\) represented by the arrays y and x. With scipy.optimize.curve_fit, this would be:
from scipy.optimize import curve_fit

x = linspace(-10, 10, 101)
y = gaussian(x, 2.33, 0.21, 1.51) + random.normal(0, 0.2, x.size)

init_vals = [1, 0, 1]  # for [amp, cen, wid]
best_vals, covar = curve_fit(gaussian, x, y, p0=init_vals)
That is, we create data, make an initial guess of the model values, and run scipy.optimize.curve_fit with the model function, data arrays, and initial guesses. The results returned are the optimal values for the parameters and the covariance matrix. It's simple and useful, but it misses the benefits of lmfit.
With lmfit, we create a Model that wraps the gaussian model function, which automatically generates the appropriate residual function, and determines the corresponding parameter names from the function signature itself:
from lmfit import Model

gmodel = Model(gaussian)
print(f'parameter names: {gmodel.param_names}')
print(f'independent variables: {gmodel.independent_vars}')
parameter names: ['amp', 'cen', 'wid']
independent variables: ['x']
As you can see, the Model gmodel determined the names of the parameters and the independent variables. By default, the first argument of the function is taken as the independent variable, held in independent_vars, and the rest of the function's positional arguments (and, in certain cases, keyword arguments – see below) are used for Parameter names. Thus, for the gaussian function above, the independent variable is x, and the parameters are named amp, cen, and wid – all taken directly from the signature of the model function. As we will see below, you can modify the default assignment of independent variable / arguments and specify yourself what the independent variable is and which function arguments should be identified as parameter names.
The Parameters are not created when the model is created. The model knows what the parameters should be named, but nothing about the scale and range of your data. You will normally have to make these parameters and assign initial values and other attributes. To help you do this, each model has a make_params() method that will generate parameters with the expected names:
params = gmodel.make_params()
This creates the Parameters but does not automatically give them initial values since it has no idea what the scale should be. You can set initial values for parameters with keyword arguments to make_params():
params = gmodel.make_params(cen=0.3, amp=3, wid=1.25)
or assign them (and other parameter properties) after the Parameters class has been created.
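For example, a brief sketch of setting values and other attributes on the already-created parameters (the lower bound on wid here is an illustrative addition):

params['amp'].value = 3                # set an initial value directly
params['cen'].set(value=0.3)           # equivalent, using Parameter.set()
params['wid'].set(value=1.25, min=0)   # initial value plus a lower bound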
A Model has several methods associated with it. For example, one can use the eval() method to evaluate the model or the fit() method to fit data to this model with a Parameter object. Both of these methods can take explicit keyword arguments for the parameter values. For example, one could use eval() to calculate the predicted function:
x_eval = linspace(0, 10, 201)
y_eval = gmodel.eval(params, x=x_eval)
or with:
y_eval = gmodel.eval(x=x_eval, cen=6.5, amp=100, wid=2.0)
Admittedly, this is a slightly long-winded way to calculate a Gaussian function, given that you could have called your gaussian function directly. But now that the model is set up, we can use its fit() method to fit this model to data, as with:
result = gmodel.fit(y, params, x=x)
or with:
result = gmodel.fit(y, x=x, cen=0.5, amp=10, wid=2.0)
Putting everything together, included in the examples folder with the source code, is:
# <examples/doc_model_gaussian.py>
import matplotlib.pyplot as plt
from numpy import exp, loadtxt, pi, sqrt

from lmfit import Model

data = loadtxt('model1d_gauss.dat')
x = data[:, 0]
y = data[:, 1]


def gaussian(x, amp, cen, wid):
    """1-d gaussian: gaussian(x, amp, cen, wid)"""
    return (amp / (sqrt(2*pi) * wid)) * exp(-(x-cen)**2 / (2*wid**2))


gmodel = Model(gaussian)
result = gmodel.fit(y, x=x, amp=5, cen=5, wid=1)

print(result.fit_report())

plt.plot(x, y, 'o')
plt.plot(x, result.init_fit, '--', label='initial fit')
plt.plot(x, result.best_fit, '-', label='best fit')
plt.legend()
plt.show()
# <end examples/doc_model_gaussian.py>
which is pretty compact and to the point. The returned result will be a ModelResult object. As we will see below, this has many components, including a fit_report() method, which will show:
[[Model]]
    Model(gaussian)
[[Fit Statistics]]
    # fitting method   = leastsq
    # function evals   = 33
    # data points      = 101
    # variables        = 3
    chi-square         = 3.40883599
    reduced chi-square = 0.03478404
    Akaike info crit   = -336.263713
    Bayesian info crit = -328.418352
[[Variables]]
    amp:  8.88021830 +/- 0.11359493 (1.28%) (init = 5)
    cen:  5.65866102 +/- 0.01030495 (0.18%) (init = 5)
    wid:  0.69765468 +/- 0.01030495 (1.48%) (init = 1)
[[Correlations]] (unreported correlations are < 0.100)
    C(amp, wid) = 0.577
As the script shows, the result will also have init_fit for the fit with the initial parameter values and a best_fit for the fit with the best fit parameter values. These can be used to generate the following plot:
which shows the data in blue dots, the best fit as a solid green line, and the initial fit as a dashed orange line.
Note that the model fitting was really performed with:
gmodel = Model(gaussian)
result = gmodel.fit(y, params, x=x, amp=5, cen=5, wid=1)
These lines clearly express that we want to turn the gaussian function into a fitting model, and then fit the \(y(x)\) data to this model, starting with values of 5 for amp, 5 for cen and 1 for wid. In addition, all the other features of lmfit are included: Parameters can have bounds and constraints and the result is a rich object that can be reused to explore the model fit in detail.
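As a brief sketch of exploring that result object (the attribute names are documented below; the printed values will depend on your data):

print(result.params['amp'].value, result.params['amp'].stderr)  # best-fit value and uncertainty
print(result.chisqr, result.redchi)                             # fit statistics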
The Model class¶
The Model class provides a general way to wrap a pre-defined function as a fitting model.
- class Model(func, independent_vars=None, param_names=None, nan_policy='raise', prefix='', name=None, **kws)¶
Create a model from a user-supplied model function.
The model function will normally take an independent variable (generally, the first argument) and a series of arguments that are meant to be parameters for the model. It will return an array of data to model some data as for a curve-fitting problem.
- Parameters
- func (callable) – Function to be wrapped.
- independent_vars (list of str, optional) – Arguments to func that are independent variables (default is None).
- param_names (list of str, optional) – Names of arguments to func that are to be made into parameters (default is None).
- nan_policy ({'raise', 'propagate', 'omit'}, optional) – How to handle NaN and missing values in data. See Notes below.
- prefix (str, optional) – Prefix used for the model.
- name (str, optional) – Name for the model. When None (default) the name is the same as the model function (func).
- **kws (dict, optional) – Additional keyword arguments to pass to model function.
Notes
1. Parameter names are inferred from the function arguments, and a residual function is automatically constructed.
2. The model function must return an array that will be the same size as the data being modeled.
3. nan_policy sets what to do when a NaN or missing value is seen in the data. Should be one of:
- 'raise' : raise a ValueError (default)
- 'propagate' : do nothing
- 'omit' : drop missing data
Examples
The model function will normally take an independent variable (generally, the first argument) and a series of arguments that are meant to be parameters for the model. Thus, a simple peak using a Gaussian defined as:
>>> import numpy as np
>>> def gaussian(x, amp, cen, wid):
...     return amp * np.exp(-(x-cen)**2 / wid)
can be turned into a Model with:
>>> gmodel = Model(gaussian)
this will automatically discover the names of the independent variables and parameters:
>>> print(gmodel.param_names, gmodel.independent_vars)
['amp', 'cen', 'wid'], ['x']
Model class Methods¶
- Model.eval(params=None, **kwargs)¶
Evaluate the model with supplied parameters and keyword arguments.
- Parameters
- params (Parameters, optional) – Parameters to use in Model.
- **kwargs (optional) – Additional keyword arguments to pass to model function.
- Returns
Value of model given the parameters and other arguments.
- Return type
numpy.ndarray, float, int or complex
Notes
1. if params is None, the values for all parameters are expected to be provided as keyword arguments. If params is given, and a keyword argument for a parameter value is also given, the keyword argument will be used.
2. all non-parameter arguments for the model function, including all the independent variables will need to be passed in using keyword arguments.
3. The return type depends on the model function. For many of the built-in models it is a numpy.ndarray, with the exception of ConstantModel and ComplexConstantModel, which return a float/int or complex value.
- Model.fit(data, params=None, weights=None, method='leastsq', iter_cb=None, scale_covar=True, verbose=False, fit_kws=None, nan_policy=None, calc_covar=True, max_nfev=None, **kwargs)¶
Fit the model to the data using the supplied Parameters.
- Parameters
- data (array_like) – Array of data to be fit.
- params (Parameters, optional) – Parameters to use in fit (default is None).
- weights (array_like, optional) – Weights to use for the calculation of the fit residual (default is None). Must have the same size as data.
- method (str, optional) – Name of fitting method to use (default is 'leastsq').
- iter_cb (callable, optional) – Callback function to call at each iteration (default is None).
- scale_covar (bool, optional) – Whether to automatically scale the covariance matrix when calculating uncertainties (default is True).
- verbose (bool, optional) – Whether to print a message when a new parameter is added because of a hint (default is True).
- fit_kws (dict, optional) – Options to pass to the minimizer being used.
- nan_policy ({'raise', 'propagate', 'omit'}, optional) – What to do when encountering NaNs when fitting Model.
- calc_covar (bool, optional) – Whether to calculate the covariance matrix (default is True) for solvers other than 'leastsq' and 'least_squares'. Requires the numdifftools package to be installed.
- max_nfev (int or None, optional) – Maximum number of function evaluations (default is None). The default value depends on the fitting method.
- **kwargs (optional) – Arguments to pass to the model function, possibly overriding parameters.
- Returns
- Return type
ModelResult
Notes
1. if params is None, the values for all parameters are expected to be provided as keyword arguments. If params is given, and a keyword argument for a parameter value is also given, the keyword argument will be used.
2. all non-parameter arguments for the model function, including all the independent variables will need to be passed in using keyword arguments.
3. Parameters (however passed in), are copied on input, so the original Parameter objects are unchanged, and the updated values are in the returned ModelResult.
Examples
Take t to be the independent variable and data to be the curve we will fit. Use keyword arguments to set initial guesses:
>>> result = my_model.fit(data, tau=5, N=3, t=t)
Or, for more control, pass a Parameters object.
>>> result = my_model.fit(data, params, t=t)
Keyword arguments override Parameters.
>>> result = my_model.fit(data, params, tau=5, t=t)
- Model.guess(data, x, **kws)¶
Guess starting values for the parameters of a Model.
This is not implemented for all models, but is available for many of the built-in models.
- Parameters
- data (array_like) – Array of data (i.e., y-values) to use to guess parameter values.
- x (array_like) – Array of values for the independent variable (i.e., x-values).
- **kws (optional) – Additional keyword arguments, passed to model function.
- Returns
Initial, guessed values for the parameters of a Model.
- Return type
Parameters
- Raises
NotImplementedError – If the guess method is not implemented for a Model.
Notes
Should be implemented for each model subclass to run self.make_params(), update starting values and return a Parameters object.
Changed in version 1.0.3: Argument x is now explicitly required to estimate starting values.
- Model.make_params(verbose=False, **kwargs)¶
Create a Parameters object for a Model.
- Parameters
- verbose (bool, optional) – Whether to print out messages (default is False).
- **kwargs (optional) – Parameter names and initial values.
- Returns
params – Parameters object for the Model.
- Return type
Parameters
Notes
1. The parameters may or may not have decent initial values for each parameter.
2. This applies any default values or parameter hints that may have been set.
- Model.set_param_hint(name, **kwargs)¶
Set hints to use when creating parameters with make_params().
This is especially convenient for setting initial values. The name can include the models' prefix or not. The hint given can also include optional bounds and constraints (value, vary, min, max, expr), which will be used by make_params() when building default parameters.
- Parameters
- name (str) – Parameter name.
- **kwargs (optional) – Arbitrary keyword arguments, needs to be a Parameter attribute. Can be any of the following:
- value : float, optional – Numerical Parameter value.
- vary : bool, optional – Whether the Parameter is varied during a fit (default is True).
- min : float, optional – Lower bound for value (default is -numpy.inf, no lower bound).
- max : float, optional – Upper bound for value (default is numpy.inf, no upper bound).
- expr : str, optional – Mathematical expression used to constrain the value during the fit.
Example
>>> model = GaussianModel()
>>> model.set_param_hint('sigma', min=0)
See Using parameter hints.
- Model.print_param_hints(colwidth=8)¶
Print a nicely aligned text-table of parameter hints.
- Parameters
- colwidth (int, optional) – Width of each column, except for first and last columns.
Model class Attributes¶
- func¶
The model function used to calculate the model.
- independent_vars¶
List of strings for names of the independent variables.
- nan_policy¶
Describes what to do for NaNs that indicate missing values in the data. The choices are:
- 'raise' : Raise a ValueError (default)
- 'propagate' : Do not check for NaNs or missing values. The fit will try to ignore them.
- 'omit' : Remove NaNs or missing observations in data. If pandas is installed, pandas.isnull() is used, otherwise numpy.isnan() is used.
- name¶
Name of the model, used only in the string representation of the model. By default this will be taken from the model function.
- opts¶
Extra keyword arguments to pass to model function. Normally this will be determined internally and should not be changed.
- param_hints¶
Dictionary of parameter hints. See Using parameter hints.
- param_names¶
List of strings of parameter names.
- prefix¶
Prefix used for name-mangling of parameter names. The default is ''. If a particular Model has arguments amplitude, center, and sigma, these would become the parameter names. Using a prefix of 'g1_' would convert these parameter names to g1_amplitude, g1_center, and g1_sigma. This can be essential to avoid name collision in composite models.
Determining parameter names and independent variables for a function¶
The Model created from the supplied function func will create a Parameters object, and names are inferred from the function arguments, and a residual function is automatically constructed.
By default, the independent variable is taken as the first argument to the function. You can, of course, explicitly set this, and will need to do so if the independent variable is not first in the list, or if there is actually more than one independent variable.
If not specified, Parameters are constructed from all positional arguments and all keyword arguments that have a default value that is numerical, except the independent variable, of course. Importantly, the Parameters can be modified after creation. In fact, you will have to do this because none of the parameters have valid initial values. In addition, one can place bounds and constraints on Parameters, or fix their values.
Explicitly specifying independent_vars¶
As we saw for the Gaussian example above, creating a Model from a function is fairly easy. Let's try another one:
import numpy as np

from lmfit import Model


def decay(t, tau, N):
    return N*np.exp(-t/tau)


decay_model = Model(decay)
print(f'independent variables: {decay_model.independent_vars}')

params = decay_model.make_params()
print('\nParameters:')
for pname, par in params.items():
    print(pname, par)
independent variables: ['t']

Parameters:
tau <Parameter 'tau', value=-inf, bounds=[-inf:inf]>
N <Parameter 'N', value=-inf, bounds=[-inf:inf]>
Here, t is assumed to be the independent variable because it is the first argument to the function. The other function arguments are used to create parameters for the model.
If you want tau to be the independent variable in the above example, you can say so:
decay_model = Model(decay, independent_vars=['tau'])
print(f'independent variables: {decay_model.independent_vars}')

params = decay_model.make_params()
print('\nParameters:')
for pname, par in params.items():
    print(pname, par)
independent variables: ['tau']

Parameters:
t <Parameter 't', value=-inf, bounds=[-inf:inf]>
N <Parameter 'N', value=-inf, bounds=[-inf:inf]>
You can also supply multiple values for multi-dimensional functions with multiple independent variables. In fact, the meaning of independent variable here is simple, and based on how it treats arguments of the function you are modeling:
- independent variable
A function argument that is not a parameter or otherwise part of the model, and that will be required to be explicitly provided as a keyword argument for each fit with Model.fit() or evaluation with Model.eval().
Note that independent variables are not required to be arrays, or even floating point numbers.
Functions with keyword arguments¶
If the model function had keyword parameters, these would be turned into Parameters if the supplied default value was a valid number (but not None, True, or False).
def decay2(t, tau, N=10, check_positive=False):
    if check_positive:
        arg = abs(t)/max(1.e-9, abs(tau))
    else:
        arg = t/tau
    return N*np.exp(arg)


mod = Model(decay2)
params = mod.make_params()
print('Parameters:')
for pname, par in params.items():
    print(pname, par)
Parameters:
tau <Parameter 'tau', value=-inf, bounds=[-inf:inf]>
N <Parameter 'N', value=10, bounds=[-inf:inf]>
Here, even though N is a keyword argument to the function, it is turned into a parameter, with the default numerical value as its initial value. By default, it is permitted to be varied in the fit – the 10 is taken as an initial value, not a fixed value. On the other hand, the check_positive keyword argument was not converted to a parameter because it has a boolean default value. In some sense, check_positive becomes like an independent variable to the model. However, because it has a default value it is not required to be given for each model evaluation or fit, as independent variables are.
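A brief sketch of supplying such a keyword argument per evaluation (the initial values below are illustrative only):

params = mod.make_params(tau=2.5, N=10)
t = np.linspace(0, 5, 51)
y1 = mod.eval(params, t=t)                       # uses the default check_positive=False
y2 = mod.eval(params, t=t, check_positive=True)  # override it for this evaluation only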
Defining a prefix for the Parameters¶
As we will see in the next chapter when combining models, it is sometimes necessary to decorate the parameter names in the model, but still have them be correctly used in the underlying model function. This would be necessary, for example, if two parameters in a composite model (see Composite Models : adding (or multiplying) Models or examples in the next chapter) would have the same name. To avoid this, we can add a prefix to the Model which will automatically do this mapping for us.
def myfunc(x, amplitude=1, center=0, sigma=1):
    # function definition, for now just ``pass``
    pass


mod = Model(myfunc, prefix='f1_')
params = mod.make_params()
print('Parameters:')
for pname, par in params.items():
    print(pname, par)
Parameters:
f1_amplitude <Parameter 'f1_amplitude', value=1, bounds=[-inf:inf]>
f1_center <Parameter 'f1_center', value=0, bounds=[-inf:inf]>
f1_sigma <Parameter 'f1_sigma', value=1, bounds=[-inf:inf]>
You would refer to these parameters as f1_amplitude and so forth, and the model will know to map these to the amplitude argument of myfunc.
Initializing model parameters¶
As mentioned above, the parameters created by Model.make_params() are generally created with invalid initial values of None. These values must be initialized in order for the model to be evaluated or used in a fit. There are four different ways to do this initialization that can be used in any combination:
You can supply initial values in the definition of the model function.
You can initialize the parameters when creating parameters with Model.make_params().
You can give parameter hints with Model.set_param_hint().
You can supply initial values for the parameters when you use the Model.eval() or Model.fit() methods.
Of course these methods can be mixed, allowing you to overwrite initial values at any point in the process of defining and using the model.
Initializing values in the function definition¶
To supply initial values for parameters in the definition of the model function, you can simply supply a default value:

def myfunc(x, a=1, b=0):
    return a*x + 10*a - b

instead of using:

def myfunc(x, a, b):
    return a*x + 10*a - b

This has the advantage of working at the function level – all parameters with keywords can be treated as options. It also means that some default initial value will always be available for the parameter.
Initializing values with Model.make_params()¶
When creating parameters with Model.make_params() you can specify initial values. To do this, use keyword arguments for the parameter names and initial values:

mod = Model(myfunc)
pars = mod.make_params(a=3, b=0.5)
Initializing values by setting parameter hints¶
After a model has been created, but prior to creating parameters with Model.make_params(), you can set parameter hints. These allow you to set not only a default initial value but also to set other parameter attributes controlling bounds, whether it is varied in the fit, or a constraint expression. To set a parameter hint, you can use Model.set_param_hint(), as with:

mod = Model(myfunc)
mod.set_param_hint('a', value=1.0)
mod.set_param_hint('b', value=0.3, min=0, max=1.0)
pars = mod.make_params()
Parameter hints are discussed in more detail in section Using parameter hints.
Initializing values when using a model¶
Finally, you can explicitly supply initial values when using a model. That is, as with Model.make_params(), you can include values as keyword arguments to either the Model.eval() or Model.fit() methods:

x = linspace(0, 10, 100)
y_eval = mod.eval(x=x, a=7.0, b=-2.0)
y_sim = y_eval + random.normal(0, 0.2, x.size)
out = mod.fit(y_sim, pars, x=x, a=3.0, b=0.0)
These approaches to initialization provide many opportunities for setting initial values for parameters. The methods can be combined, so that you can set parameter hints but then change the initial value explicitly with Model.fit().
Using parameter hints¶
After a model has been created, you can give it hints for how to create parameters with Model.make_params(). This allows you to set not only a default initial value but also to set other parameter attributes controlling bounds, whether it is varied in the fit, or a constraint expression. To set a parameter hint, you can use Model.set_param_hint(), as with:

mod = Model(myfunc)
mod.set_param_hint('a', value=1.0)
mod.set_param_hint('b', value=0.3, min=0, max=1.0)
Parameter hints are stored in a model's param_hints attribute, which is simply a nested dictionary:

print('Parameter hints:')
for pname, par in mod.param_hints.items():
    print(pname, par)

Parameter hints:
a {'value': 1.0}
b {'value': 0.3, 'min': 0, 'max': 1.0}
You can change this dictionary directly, or with the Model.set_param_hint() method. Either way, these parameter hints are used by Model.make_params() when making parameters.
An important feature of parameter hints is that you can force the creation of new parameters with parameter hints. This can be useful to make derived parameters with constraint expressions. For example, to get the full-width at half maximum of a Gaussian model, one could use a parameter hint of:
mod = Model(gaussian)
mod.set_param_hint('fwhm', expr='2.3548*wid')
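After this hint is set, make_params() will include the derived parameter, tied to the constraint expression (a minimal sketch; the initial values here are illustrative):

pars = mod.make_params(amp=5, cen=5, wid=1)
print(pars['fwhm'])   # derived parameter, constrained by the expr above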
Saving and Loading Models¶
New in version 0.9.8.
It is sometimes desirable to save a Model for later use outside of the code used to define the model. Lmfit provides a save_model() function that will save a Model to a file. There is also a companion load_model() function that can read this file and reconstruct a Model from it.
Saving a model turns out to be somewhat challenging. The main issue is that Python is not normally able to serialize a function (such as the model function making up the heart of the Model) in a way that can be reconstructed into a callable Python object. The dill package can sometimes serialize functions, but with the limitation that it can be used only in the same version of Python. In addition, class methods used as model functions will not retain the rest of the class attributes and methods, and so may not be usable. With all those warnings, it should be emphasized that if you are willing to save or reuse the definition of the model function as Python code, then saving the Parameters and rest of the components that make up a model presents no problem.
If the dill package is installed, the model function will be saved using it. But because saving the model function is not always reliable, saving a model will always save the name of the model function. The load_model() takes an optional funcdefs argument that can contain a dictionary of function definitions with the function names as keys and function objects as values. If one of the dictionary keys matches the saved name, the corresponding function object will be used as the model function. With this approach, if you save a model and can provide the code used for the model function, the model can be saved and reliably reloaded and used.
- save_model(model, fname)¶
Save a Model to a file.
- Parameters
- model (Model) – Model to be saved.
- fname (str) – Name of file for saved Model.
- load_model(fname, funcdefs=None)¶
Load a saved Model from a file.
- Parameters
- fname (str) – Name of file containing saved Model.
- funcdefs (dict, optional) – Dictionary of custom function names and definitions.
- Returns
Model object loaded from file.
- Return type
Model
As a simple example, one can save a model as:
# <examples/doc_model_savemodel.py>
import numpy as np

from lmfit.model import Model, save_model


def mysine(x, amp, freq, shift):
    return amp * np.sin(x*freq + shift)


sinemodel = Model(mysine)
pars = sinemodel.make_params(amp=1, freq=0.25, shift=0)

save_model(sinemodel, 'sinemodel.sav')
# <end examples/doc_model_savemodel.py>
To load that later, one might do:
# <examples/doc_model_loadmodel.py>
import matplotlib.pyplot as plt
import numpy as np

from lmfit.model import load_model


def mysine(x, amp, freq, shift):
    return amp * np.sin(x*freq + shift)


data = np.loadtxt('sinedata.dat')
x = data[:, 0]
y = data[:, 1]

model = load_model('sinemodel.sav', funcdefs={'mysine': mysine})
params = model.make_params(amp=3, freq=0.52, shift=0)
params['shift'].max = 1
params['shift'].min = -1
params['amp'].min = 0.0

result = model.fit(y, params, x=x)
print(result.fit_report())

plt.plot(x, y, 'o')
plt.plot(x, result.best_fit, '-')
plt.show()
# <end examples/doc_model_loadmodel.py>
See also Saving and Loading ModelResults.
The ModelResult class¶
A ModelResult (which had been called ModelFit prior to version 0.9) is the object returned by Model.fit(). It is a subclass of Minimizer, and so contains many of the fit results. Of course, it knows the Model and the set of Parameters used in the fit, and it has methods to evaluate the model, to fit the data (or re-fit the data with changes to the parameters, or fit with different or modified data) and to print out a report for that fit.
While a Model encapsulates your model function, it is fairly abstract and does not contain the parameters or data used in a particular fit. A ModelResult does contain parameters and data as well as methods to alter and re-do fits. Thus the Model is the idealized model while the ModelResult is the messier, more complex (but perhaps more useful) object that represents a fit with a set of parameters to data with a model.
A ModelResult has several attributes holding values for fit results, and several methods for working with fits. These include statistics inherited from Minimizer useful for comparing different models, including chisqr, redchi, aic, and bic.
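For instance, a short sketch of reading those statistics off a fit result (continuing the Gaussian example above; the exact numbers depend on the data):

result = gmodel.fit(y, params, x=x)
print(f'chi-square: {result.chisqr:.5f}  reduced chi-square: {result.redchi:.5f}')
print(f'AIC: {result.aic:.3f}  BIC: {result.bic:.3f}')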
- class ModelResult(model, params, data=None, weights=None, method='leastsq', fcn_args=None, fcn_kws=None, iter_cb=None, scale_covar=True, nan_policy='raise', calc_covar=True, max_nfev=None, **fit_kws)¶
Result from the Model fit.
This has many attributes and methods for viewing and working with the results of a fit using Model. It inherits from Minimizer, so that it can be used to modify and re-run the fit for the Model.
- Parameters
- model (Model) – Model to use.
- params (Parameters) – Parameters with initial values for model.
- data (array_like, optional) – Data to be modeled.
- weights (array_like, optional) – Weights to multiply (data-model) for fit residual.
- method (str, optional) – Name of minimization method to use (default is 'leastsq').
- fcn_args (sequence, optional) – Positional arguments to send to model function.
- fcn_dict (dict, optional) – Keyword arguments to send to model function.
- iter_cb (callable, optional) – Function to call on each iteration of fit.
- scale_covar (bool, optional) – Whether to scale covariance matrix for uncertainty evaluation.
- nan_policy ({'raise', 'propagate', 'omit'}, optional) – What to do when encountering NaNs when fitting Model.
- calc_covar (bool, optional) – Whether to calculate the covariance matrix (default is True) for solvers other than 'leastsq' and 'least_squares'. Requires the numdifftools package to be installed.
- max_nfev (int or None, optional) – Maximum number of function evaluations (default is None). The default value depends on the fitting method.
- **fit_kws (optional) – Keyword arguments to send to minimization routine.
ModelResult methods¶
- ModelResult.eval(params=None, **kwargs)¶
Evaluate model function.
- Parameters
- params (Parameters, optional) – Parameters to use.
- **kwargs (optional) – Options to send to Model.eval().
- Returns
Array or value for the evaluated model.
- Return type
numpy.ndarray, float, int, or complex
- ModelResult.eval_components(params=None, **kwargs)¶
Evaluate each component of a composite model function.
- Parameters
- params (Parameters, optional) – Parameters, defaults to ModelResult.params.
- **kwargs (optional) – Keyword arguments to pass to model function.
- Returns
Keys are prefixes of component models, and values are the estimated model value for each component of the model.
- Return type
dict
- ModelResult.fit(data=None, params=None, weights=None, method=None, nan_policy=None, **kwargs)¶
Re-perform fit for a Model, given data and params.
- Parameters
- data (array_like, optional) – Data to be modeled.
- params (Parameters, optional) – Parameters with initial values for model.
- weights (array_like, optional) – Weights to multiply (data-model) for fit residual.
- method (str, optional) – Name of minimization method to use (default is 'leastsq').
- nan_policy ({'raise', 'propagate', 'omit'}, optional) – What to do when encountering NaNs when fitting Model.
- **kwargs (optional) – Keyword arguments to send to minimization routine.
- ModelResult.fit_report(modelpars=None, show_correl=True, min_correl=0.1, sort_pars=False)¶
Return a printable fit report.
The report contains fit statistics and best-fit values with uncertainties and correlations.
- Parameters
- modelpars (Parameters, optional) – Known Model Parameters.
- show_correl (bool, optional) – Whether to show list of sorted correlations (default is True).
- min_correl (float, optional) – Smallest correlation in absolute value to show (default is 0.1).
- sort_pars (callable, optional) – Whether to show parameter names sorted in alphanumerical order (default is False). If False, then the parameters will be listed in the order as they were added to the Parameters dictionary. If callable, then this (one argument) function is used to extract a comparison key from each list element.
- Returns
Multi-line text of fit report.
- Return type
str
- ModelResult.conf_interval(**kwargs)¶
Calculate the confidence intervals for the variable parameters.
Confidence intervals are calculated using the confidence.conf_interval() function and keyword arguments (**kwargs) are passed to that function. The result is stored in the ci_out attribute so that it can be accessed without recalculating them.
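A minimal sketch of the typical usage, assuming a completed fit result as above:

result.conf_interval()       # results are cached in result.ci_out
print(result.ci_report())    # formatted text report of the intervals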
- ModelResult.ci_report(with_offset=True, ndigits=5, **kwargs)¶
Return a formatted text report of the confidence intervals.
- Parameters
- with_offset (bool, optional) – Whether to subtract best value from all other values (default is True).
- ndigits (int, optional) – Number of significant digits to show (default is 5).
- **kwargs (optional) – Keyword arguments that are passed to the conf_interval function.
- Returns
Text of formatted report on confidence intervals.
- Return type
str
- ModelResult.eval_uncertainty(params=None, sigma=1, **kwargs)¶
Evaluate the uncertainty of the model function.
This can be used to give confidence bands for the model from the uncertainties in the best-fit parameters.
- Parameters
- params (Parameters, optional) – Parameters, defaults to ModelResult.params.
- sigma (float, optional) – Confidence level, i.e. how many sigma (default is 1).
- **kwargs (optional) – Values of options, independent variables, etcetera.
- Returns
Uncertainty at each value of the model.
- Return type
numpy.ndarray
Notes
- This is based on the excellent and clear example from https://www.astro.rug.nl/software/kapteyn/kmpfittutorial.html#confidence-and-prediction-intervals, which references the original work of: J. Wolberg, Data Analysis Using the Method of Least Squares, 2006, Springer
- The value of sigma is number of sigma values, and is converted to a probability. Values of 1, 2, or 3 give probabilities of 0.6827, 0.9545, and 0.9973, respectively. If the sigma value is < 1, it is interpreted as the probability itself. That is, sigma=1 and sigma=0.6827 will give the same results, within precision errors.
Examples
>>> out = model.fit(data, params, x=x)
>>> dely = out.eval_uncertainty(x=x)
>>> plt.plot(x, data)
>>> plt.plot(x, out.best_fit)
>>> plt.fill_between(x, out.best_fit-dely,
...                  out.best_fit+dely, color='#888888')
- ModelResult.plot(datafmt='o', fitfmt='-', initfmt='--', xlabel=None, ylabel=None, yerr=None, numpoints=None, fig=None, data_kws=None, fit_kws=None, init_kws=None, ax_res_kws=None, ax_fit_kws=None, fig_kws=None, show_init=False, parse_complex='abs', title=None)¶
Plot the fit results and residuals using matplotlib.
The method will produce a matplotlib figure (if package available) with both results of the fit and the residuals plotted. If the fit model included weights, errorbars will also be plotted. To show the initial conditions for the fit, pass the argument show_init=True.
- Parameters
- datafmt (str, optional) – Matplotlib format string for data points.
- fitfmt (str, optional) – Matplotlib format string for fitted curve.
- initfmt (str, optional) – Matplotlib format string for initial conditions for the fit.
- xlabel (str, optional) – Matplotlib format string for labeling the x-axis.
- ylabel (str, optional) – Matplotlib format string for labeling the y-axis.
- yerr (numpy.ndarray, optional) – Array of uncertainties for data array.
- numpoints (int, optional) – If provided, the final and initial fit curves are evaluated not only at data points, but refined to contain numpoints points in total.
- fig (matplotlib.figure.Figure, optional) – The figure to plot on. The default is None, which means use the current pyplot figure or create one if there is none.
- data_kws (dict, optional) – Keyword arguments passed to the plot function for data points.
- fit_kws (dict, optional) – Keyword arguments passed to the plot function for fitted curve.
- init_kws (dict, optional) – Keyword arguments passed to the plot function for the initial conditions of the fit.
- ax_res_kws (dict, optional) – Keyword arguments for the axes for the residuals plot.
- ax_fit_kws (dict, optional) – Keyword arguments for the axes for the fit plot.
- fig_kws (dict, optional) – Keyword arguments for a new figure, if a new one is created.
- show_init (bool, optional) – Whether to show the initial conditions for the fit (default is False).
- parse_complex ({'abs', 'real', 'imag', 'angle'}, optional) – How to reduce complex data for plotting. Options are one of: 'abs' (default), 'real', 'imag', or 'angle', which correspond to the NumPy functions with the same name.
- title (str, optional) – Matplotlib format string for figure title.
- Returns
- Return type
matplotlib.figure.Figure
See Also
- ModelResult.plot_fit
Plot the fit results using matplotlib.
- ModelResult.plot_residuals
Plot the fit residuals using matplotlib.
Notes
The method combines ModelResult.plot_fit and ModelResult.plot_residuals.
If yerr is specified or if the fit model included weights, then matplotlib.axes.Axes.errorbar is used to plot the data. If yerr is not specified and the fit includes weights, yerr set to 1/self.weights.
If model returns complex data, yerr is treated the same way that weights are in this case.
If fig is None then matplotlib.pyplot.figure(**fig_kws) is called, otherwise fig_kws is ignored.
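A short sketch of the usual call, assuming a completed fit result and matplotlib imported as plt:

fig = result.plot(show_init=True)   # data, best fit, initial fit, and residuals
plt.show()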
- ModelResult.plot_fit(ax=None, datafmt='o', fitfmt='-', initfmt='--', xlabel=None, ylabel=None, yerr=None, numpoints=None, data_kws=None, fit_kws=None, init_kws=None, ax_kws=None, show_init=False, parse_complex='abs', title=None)¶
Plot the fit results using matplotlib, if available.
The plot will include the data points, the initial fit curve (optional, with show_init=True), and the best-fit curve. If the fit model included weights or if yerr is specified, errorbars will also be plotted.
- Parameters
- ax (matplotlib.axes.Axes, optional) – The axes to plot on. The default is None, which means use the current pyplot axis or create one if there is none.
- datafmt (str, optional) – Matplotlib format string for data points.
- fitfmt (str, optional) – Matplotlib format string for fitted curve.
- initfmt (str, optional) – Matplotlib format string for initial conditions for the fit.
- xlabel (str, optional) – Matplotlib format string for labeling the x-axis.
- ylabel (str, optional) – Matplotlib format string for labeling the y-axis.
- yerr (numpy.ndarray, optional) – Array of uncertainties for data array.
- numpoints (int, optional) – If provided, the final and initial fit curves are evaluated not only at data points, but refined to contain numpoints points in total.
- data_kws (dict, optional) – Keyword arguments passed to the plot function for data points.
- fit_kws (dict, optional) – Keyword arguments passed to the plot function for fitted curve.
- init_kws (dict, optional) – Keyword arguments passed to the plot function for the initial conditions of the fit.
- ax_kws (dict, optional) – Keyword arguments for a new axis, if a new one is created.
- show_init (bool, optional) – Whether to show the initial conditions for the fit (default is False).
- parse_complex ({'abs', 'real', 'imag', 'angle'}, optional) – How to reduce complex data for plotting. Options are one of: 'abs' (default), 'real', 'imag', or 'angle', which correspond to the NumPy functions with the same name.
- title (str, optional) – Matplotlib format string for figure title.
- Returns
- Return type
matplotlib.axes.Axes
See Also
- ModelResult.plot_residuals
Plot the fit residuals using matplotlib.
- ModelResult.plot
Plot the fit results and residuals using matplotlib.
Notes
For details about plot format strings and keyword arguments see documentation of matplotlib.axes.Axes.plot.
If yerr is specified or if the fit model included weights, then matplotlib.axes.Axes.errorbar is used to plot the data. If yerr is not specified and the fit includes weights, yerr set to 1/self.weights.
If model returns complex data, yerr is treated the same way that weights are in this case.
If ax is None then matplotlib.pyplot.gca(**ax_kws) is called.
- ModelResult.plot_residuals(ax=None, datafmt='o', yerr=None, data_kws=None, fit_kws=None, ax_kws=None, parse_complex='abs', title=None)¶
Plot the fit residuals using matplotlib, if available.
If yerr is supplied or if the model included weights, errorbars will also be plotted.
- Parameters
- ax (matplotlib.axes.Axes, optional) – The axes to plot on. The default is None, which means use the current pyplot axis or create one if there is none.
- datafmt (str, optional) – Matplotlib format string for data points.
- yerr (numpy.ndarray, optional) – Array of uncertainties for data array.
- data_kws (dict, optional) – Keyword arguments passed to the plot function for data points.
- fit_kws (dict, optional) – Keyword arguments passed to the plot function for fitted curve.
- ax_kws (dict, optional) – Keyword arguments for a new axis, if a new one is created.
- parse_complex ({'abs', 'real', 'imag', 'angle'}, optional) – How to reduce complex data for plotting. Options are one of: 'abs' (default), 'real', 'imag', or 'angle', which correspond to the NumPy functions with the same name.
- title (str, optional) – Matplotlib format string for figure title.
- Returns
- Return type
matplotlib.axes.Axes
See Also
- ModelResult.plot_fit
Plot the fit results using matplotlib.
- ModelResult.plot
Plot the fit results and residuals using matplotlib.
Notes
For details about plot format strings and keyword arguments see documentation of matplotlib.axes.Axes.plot.
If yerr is specified or if the fit model included weights, then matplotlib.axes.Axes.errorbar is used to plot the data. If yerr is not specified and the fit includes weights, yerr set to 1/self.weights.
If model returns complex data, yerr is treated the same way that weights are in this case.
If ax is None then matplotlib.pyplot.gca(**ax_kws) is called.
ModelResult attributes¶
- aic¶
Floating point best-fit Akaike Information Criterion statistic (see MinimizerResult – the optimization result).
- best_fit¶
numpy.ndarray result of model function, evaluated at provided independent variables and with best-fit parameters.
- best_values¶
Dictionary with parameter names as keys, and best-fit values as values.
- bic¶
Floating point best-fit Bayesian Information Criterion statistic (see MinimizerResult – the optimization result).
- chisqr¶
Floating point best-fit chi-square statistic (see MinimizerResult – the optimization result).
- ci_out¶
Confidence interval data (see Calculation of confidence intervals) or None if the confidence intervals have not been calculated.
- covar¶
numpy.ndarray (square) covariance matrix returned from fit.
- data¶
numpy.ndarray of data to compare to model.
- errorbars¶
Boolean for whether error bars were estimated by fit.
- ier¶
Integer returned code from scipy.optimize.leastsq.
- init_fit¶
numpy.ndarray result of model function, evaluated at provided independent variables and with initial parameters.
- init_params¶
Initial parameters.
- init_values¶
Dictionary with parameter names as keys, and initial values as values.
- iter_cb¶
Optional callable function, to be called at each fit iteration. This must take arguments of (params, iter, resid, *args, **kws), where params will have the current parameter values, iter the iteration number, resid the current residual array, and *args and **kws as passed to the objective function. See Using an Iteration Callback Function.
- jacfcn¶
Optional callable function, to be called to calculate Jacobian array.
- lmdif_message¶
String message returned from scipy.optimize.leastsq.
- message¶
String message returned from minimize().
- method¶
String naming fitting method for minimize().
- call_kws¶
Dict of keyword arguments actually sent to underlying solver with minimize().
- model¶
Instance of Model used for model.
- ndata¶
Integer number of data points.
- nfev¶
Integer number of function evaluations used for fit.
- nfree¶
Integer number of free parameters in fit.
- nvarys¶
Integer number of independent, freely varying variables in fit.
- params¶
Parameters used in fit; will contain the best-fit values.
- redchi¶
Floating point reduced chi-square statistic (see MinimizerResult – the optimization result).
- residual¶
numpy.ndarray for residual.
- scale_covar¶
Boolean flag for whether to automatically scale covariance matrix.
- success¶
Boolean value of whether fit succeeded.
- weights¶
numpy.ndarray (or None) of weighting values to be used in fit. If not None, it will be used as a multiplicative factor of the residual array, so that weights*(data - fit) is minimized in the least-squares sense.
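A brief sketch of supplying such weights through Model.fit(); here yerr is assumed to be an array of per-point uncertainties with the same size as the data:

result = gmodel.fit(y, params, x=x, weights=1.0/yerr)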
Computing uncertainties in the model function¶
We return to the first example above and ask not only for the uncertainties in the fitted parameters but for the range of values that those uncertainties mean for the model function itself. We can use the ModelResult.eval_uncertainty() method of the model result object to evaluate the uncertainty in the model with a specified level for \(\sigma\).
That is, adding:

dely = result.eval_uncertainty(sigma=3)
plt.fill_between(x, result.best_fit-dely, result.best_fit+dely,
                 color="#ABABAB", label='3-$\sigma$ uncertainty band')

to the example fit to the Gaussian at the beginning of this chapter will give 3-\(\sigma\) bands for the best-fit Gaussian, and produce the figure below.
Saving and Loading ModelResults¶
New in version 0.9.8.
As with saving models (see section Saving and Loading Models), it is sometimes desirable to save a ModelResult, either for later use or to organize and compare different fit results. Lmfit provides a save_modelresult() function that will save a ModelResult to a file. There is also a companion load_modelresult() function that can read this file and reconstruct a ModelResult from it.
As discussed in section Saving and Loading Models, there are challenges to saving model functions that may make it difficult to restore a saved ModelResult in a way that can be used to perform a fit. Use of the optional funcdefs argument is generally the most reliable way to ensure that a loaded ModelResult can be used to evaluate the model function or redo the fit.
- save_modelresult(modelresult, fname)¶
Save a ModelResult to a file.
- Parameters
- modelresult (ModelResult) – ModelResult to be saved.
- fname (str) – Name of file for saved ModelResult.
- load_modelresult(fname, funcdefs=None)¶
Load a saved ModelResult from a file.
- Parameters
- fname (str) – Name of file containing saved ModelResult.
- funcdefs (dict, optional) – Dictionary of custom function names and definitions.
- Returns
ModelResult object loaded from file.
- Return type
ModelResult
An example of saving a ModelResult is:

# <examples/doc_model_savemodelresult.py>
import numpy as np

from lmfit.model import save_modelresult
from lmfit.models import GaussianModel

data = np.loadtxt('model1d_gauss.dat')
x = data[:, 0]
y = data[:, 1]

gmodel = GaussianModel()
result = gmodel.fit(y, x=x, amplitude=5, center=5, sigma=1)

save_modelresult(result, 'gauss_modelresult.sav')

print(result.fit_report())
# <end examples/doc_model_savemodelresult.py>
To load that later, one might do:

# <examples/doc_model_loadmodelresult.py>
import matplotlib.pyplot as plt
import numpy as np

from lmfit.model import load_modelresult

data = np.loadtxt('model1d_gauss.dat')
x = data[:, 0]
y = data[:, 1]

result = load_modelresult('gauss_modelresult.sav')
print(result.fit_report())

plt.plot(x, y, 'o')
plt.plot(x, result.best_fit, '-')
plt.show()
# <end examples/doc_model_loadmodelresult.py>
Composite Models : adding (or multiplying) Models¶
One of the more interesting features of the Model class is that Models can be added together or combined with basic algebraic operations (add, subtract, multiply, and divide) to give a composite model. The composite model will have parameters from each of the component models, with all parameters being available to influence the whole model. This ability to combine models will become even more useful in the next chapter, when pre-built subclasses of Model are discussed. For now, we'll consider a simple example, and build a model of a Gaussian plus a line, as to model a peak with a background. For such a simple problem, we could just build a model that included both components:

def gaussian_plus_line(x, amp, cen, wid, slope, intercept):
    """line + 1-d gaussian"""
    gauss = (amp / (sqrt(2*pi) * wid)) * exp(-(x-cen)**2 / (2*wid**2))
    line = slope*x + intercept
    return gauss + line

and use that with:

mod = Model(gaussian_plus_line)

But we already had a function for a gaussian function, and maybe we'll discover that a linear background isn't sufficient, which would mean the model function would have to be changed.
Instead, lmfit allows models to be combined into a CompositeModel. As an alternative to including a linear background in our model function, we could define a linear function:

def line(x, slope, intercept):
    """a line"""
    return slope*x + intercept

and build a composite model with just:

mod = Model(gaussian) + Model(line)

This model has parameters for both component models, and can be used as:

# <examples/doc_model_two_components.py>
import matplotlib.pyplot as plt
from numpy import exp, loadtxt, pi, sqrt

from lmfit import Model

data = loadtxt('model1d_gauss.dat')
x = data[:, 0]
y = data[:, 1] + 0.25*x - 1.0


def gaussian(x, amp, cen, wid):
    """1-d gaussian: gaussian(x, amp, cen, wid)"""
    return (amp / (sqrt(2*pi) * wid)) * exp(-(x-cen)**2 / (2*wid**2))


def line(x, slope, intercept):
    """a line"""
    return slope*x + intercept


mod = Model(gaussian) + Model(line)
pars = mod.make_params(amp=5, cen=5, wid=1, slope=0, intercept=1)

result = mod.fit(y, pars, x=x)
print(result.fit_report())

plt.plot(x, y, 'o')
plt.plot(x, result.init_fit, '--', label='initial fit')
plt.plot(x, result.best_fit, '-', label='best fit')
plt.legend()
plt.show()
# <end examples/doc_model_two_components.py>
which prints out the results:
[[Model]]
    (Model(gaussian) + Model(line))
[[Fit Statistics]]
    # fitting method   = leastsq
    # function evals   = 44
    # data points      = 101
    # variables        = 5
    chi-square         = 2.57855517
    reduced chi-square = 0.02685995
    Akaike info crit   = -360.457020
    Bayesian info crit = -347.381417
[[Variables]]
    amp:        8.45931062 +/- 0.12414515 (1.47%) (init = 5)
    cen:        5.65547873 +/- 0.00917678 (0.16%) (init = 5)
    wid:        0.67545524 +/- 0.00991686 (1.47%) (init = 1)
    slope:      0.26484404 +/- 0.00574892 (2.17%) (init = 0)
    intercept: -0.96860202 +/- 0.03352202 (3.46%) (init = 1)
[[Correlations]] (unreported correlations are < 0.100)
    C(slope, intercept) = -0.795
    C(amp, wid)         =  0.666
    C(amp, intercept)   = -0.222
    C(amp, slope)       = -0.169
    C(cen, slope)       = -0.162
    C(wid, intercept)   = -0.148
    C(cen, intercept)   =  0.129
    C(wid, slope)       = -0.113
and shows the plot on the left.
On the left, the data is shown as blue dots, the total fit is shown as a solid green line, and the initial fit is shown as an orange dashed line. The figure on the right shows the data again as blue dots, the Gaussian component as an orange dashed line, and the linear component as a green dashed line. It is created using the following code:
comps = result.eval_components()

plt.plot(x, y, 'o')
plt.plot(x, comps['gaussian'], '--', label='Gaussian component')
plt.plot(x, comps['line'], '--', label='Line component')
The components were generated after the fit using the ModelResult.eval_components()
method of the result
, which returns a dictionary of the components, using keys of the model name (or prefix
if that is set). This will use the parameter values in result.params
and the independent variables (x
) used during the fit. Note that while the ModelResult
held in result
does store the best parameters and the best estimate of the model in result.best_fit
, the original model and parameters in pars
are left unaltered.
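As a quick check of that last point, here is a minimal sketch (reusing the pars and result names from the two-component example above; the printed numbers are approximate) comparing the untouched input parameters with the fitted ones:
# the input Parameters keep their initial values from make_params,
# while the best-fit values live in result.params
print(pars['amp'].value)           # still the initial guess, 5
print(result.params['amp'].value)  # best-fit value, about 8.46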
You can apply this composite model to other data sets, or evaluate the model at other values of x
. You may want to do this to give a finer or coarser spacing of data points, or to extrapolate the model outside the fitting range. This can be done with:
xwide = linspace(-5, 25, 3001)
predicted = mod.eval(result.params, x=xwide)
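As a small follow-up sketch (not part of the original example), the extrapolated curve can then be plotted alongside the data, assuming matplotlib.pyplot has been imported as plt as in the scripts above:
plt.plot(x, y, 'o', label='data')
plt.plot(xwide, predicted, '-', label='model over wider x range')
plt.legend()
plt.show()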
In this example, the argument names for the model functions do not overlap. If they had, the prefix
argument to Model
would have allowed us to identify which parameter went with which component model. As we will see in the next chapter, using composite models with the built-in models provides a simple way to build up complex models.
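As a minimal sketch of the prefix idea (using the gaussian function defined above; an illustration, not part of the original example), prefixes keep parameter names distinct when two components would otherwise share argument names:
# both components take 'amp', 'cen', and 'wid'; prefixes disambiguate them
mod = Model(gaussian, prefix='g1_') + Model(gaussian, prefix='g2_')
print(mod.param_names)
# ['g1_amp', 'g1_cen', 'g1_wid', 'g2_amp', 'g2_cen', 'g2_wid']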
- class CompositeModel(left, right, op[, **kws])¶
Combine two models (left and right) with a binary operator (op).
Normally, one does not have to explicitly create a CompositeModel, but can use the normal Python operators +, -, *, and / to combine components, as in:
>>> mod = Model(fcn1) + Model(fcn2) * Model(fcn3)
- Parameters
  - left (Model) – Left-hand model.
  - right (Model) – Right-hand model.
  - op (callable binary operator) – Operator to combine left and right models.
  - **kws (optional) – Additional keywords are passed to Model when creating this new model.
Notes
The two models must use the same independent variable.
Note that when using built-in Python binary operators, a CompositeModel
will automatically be constructed for you. That is, doing:
mod = Model(fcn1) + Model(fcn2) * Model(fcn3)
will create a CompositeModel
. Here, left
will be Model(fcn1)
, op
will be operator.add()
, and right
will be another CompositeModel that has a left
attribute of Model(fcn2)
, an op
of operator.mul()
, and a right
of Model(fcn3)
.
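As a minimal sketch of this nesting (assuming fcn1, fcn2, and fcn3 are simple model functions like the gaussian and line defined earlier; not part of the original text), the left, right, and op attributes can be inspected directly:
import operator

mod = Model(fcn1) + Model(fcn2) * Model(fcn3)
print(mod.op is operator.add)        # True: the top-level operator is addition
print(mod.left)                      # Model(fcn1)
print(mod.right.op is operator.mul)  # True: the right side is a nested CompositeModel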
To use a binary operator other than +
, -
, *
, or /
you can explicitly create a CompositeModel
with the appropriate binary operator. For example, to convolve two models, you could define a simple convolution function, perhaps as:
import numpy as np


def convolve(dat, kernel):
    """simple convolution of two arrays"""
    npts = min(len(dat), len(kernel))
    pad = np.ones(npts)
    tmp = np.concatenate((pad*dat[0], dat, pad*dat[-1]))
    out = np.convolve(tmp, kernel, mode='valid')
    noff = int((len(out) - npts) / 2)
    return (out[noff:])[:npts]
which extends the data in both directions so that the convolving kernel function gives a valid result over the data range. Because this function takes two array arguments and returns an array, it can be used as the binary operator. A full script using this technique is here:
# <examples/doc_model_composite.py>
import matplotlib.pyplot as plt
import numpy as np

from lmfit import CompositeModel, Model
from lmfit.lineshapes import gaussian, step

# create data from broadened step
x = np.linspace(0, 10, 201)
y = step(x, amplitude=12.5, center=4.5, sigma=0.88, form='erf')
np.random.seed(0)
y = y + np.random.normal(scale=0.35, size=x.size)


def jump(x, mid):
    """Heaviside step function."""
    o = np.zeros(x.size)
    imid = max(np.where(x <= mid)[0])
    o[imid:] = 1.0
    return o


def convolve(arr, kernel):
    """Simple convolution of two arrays."""
    npts = min(arr.size, kernel.size)
    pad = np.ones(npts)
    tmp = np.concatenate((pad*arr[0], arr, pad*arr[-1]))
    out = np.convolve(tmp, kernel, mode='valid')
    noff = int((len(out) - npts) / 2)
    return out[noff:noff+npts]


# create Composite Model using the custom convolution operator
mod = CompositeModel(Model(jump), Model(gaussian), convolve)

pars = mod.make_params(amplitude=1, center=3.5, sigma=1.5, mid=5.0)

# 'mid' and 'center' should be completely correlated, and 'mid' is
# used as an integer index, so a very poor fit variable:
pars['mid'].vary = False

# fit this model to data array y
result = mod.fit(y, params=pars, x=x)

print(result.fit_report())

# generate components
comps = result.eval_components(x=x)

# plot results
fig, axes = plt.subplots(1, 2, figsize=(12.8, 4.8))
axes[0].plot(x, y, 'bo')
axes[0].plot(x, result.init_fit, 'k--', label='initial fit')
axes[0].plot(x, result.best_fit, 'r-', label='best fit')
axes[0].legend()
axes[1].plot(x, y, 'bo')
axes[1].plot(x, 10*comps['jump'], 'k--', label='Jump component')
axes[1].plot(x, 10*comps['gaussian'], 'r-', label='Gaussian component')
axes[1].legend()
plt.show()
# <end examples/doc_model_composite.py>
which prints out the results:
[[Model]]
    (Model(jump) <function convolve at 0x7f76614caf70> Model(gaussian))
[[Fit Statistics]]
    # fitting method   = leastsq
    # function evals   = 25
    # data points      = 201
    # variables        = 3
    chi-square         = 24.7562335
    reduced chi-square = 0.12503148
    Akaike info crit   = -414.939746
    Bayesian info crit = -405.029832
[[Variables]]
    mid:        5 (fixed)
    amplitude:  0.62508459 +/- 0.00189732 (0.30%) (init = 1)
    center:     4.50853671 +/- 0.00973231 (0.22%) (init = 3.5)
    sigma:      0.59576118 +/- 0.01348582 (2.26%) (init = 1.5)
[[Correlations]] (unreported correlations are < 0.100)
    C(amplitude, center) = 0.329
    C(amplitude, sigma)  = 0.268
and shows the plots:
Using composite models with built-in or custom operators allows you to build complex models from testable sub-components.
Source: https://lmfit.github.io/lmfit-py/model.html