PySens Tutorial / Basic Test
============================
import json
import pysens
%pylab inline
First, let's define our parameter space in JSON format. Each parameter's variation should be described with a probability distribution.
As a test case we will use the Ishigami function, which is implemented as a built-in test model in the library.
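For reference, the Ishigami function (with the customary constants a = 7, b = 0.1) can be written in a few lines of NumPy. This is a plain re-implementation for illustration, not the library's own code:

```python
import numpy as np

def ishigami(x1, x2, x3, a=7.0, b=0.1):
    """Ishigami test function, a standard benchmark for sensitivity
    analysis; a = 7, b = 0.1 are the values used most often."""
    return np.sin(x1) + a * np.sin(x2) ** 2 + b * x3 ** 4 * np.sin(x1)

# All three terms vanish at the origin:
print(ishigami(0.0, 0.0, 0.0))  # -> 0.0
```

Its strong nonlinearity in X1 and X3, and the X2 term that is uncorrelated with X2 itself, are what make it a good stress test for sensitivity methods.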
%%writefile ishigami.json
{
    "model": "test",
    "parameters": [
        {
            "name": "X1",
            "distrib": {
                "type": "uniform",
                "min": -3.14,
                "max": 3.14
            },
            "unit": "mm"
        },
        {
            "name": "X2",
            "distrib": {
                "type": "uniform",
                "min": -3.14,
                "max": 3.14
            },
            "unit": "V"
        },
        {
            "name": "X3",
            "distrib": {
                "type": "uniform",
                "min": -3.14,
                "max": 3.14
            },
            "unit": "V"
        }
    ]
}
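This file is ordinary JSON, so it can be read back with the `json` module imported at the top of the notebook. A minimal sketch (using an inline copy of the same structure, so it runs on its own) that lists the declared bounds:

```python
import json

# Parse a parameter-space description with the same structure as
# ishigami.json and print the bounds each parameter is sampled between.
spec = json.loads("""{
  "model": "test",
  "parameters": [
    {"name": "X1", "unit": "mm",
     "distrib": {"type": "uniform", "min": -3.14, "max": 3.14}}
  ]
}""")
for p in spec["parameters"]:
    d = p["distrib"]
    print(f'{p["name"]} ({p["unit"]}): {d["type"]} on [{d["min"]}, {d["max"]}]')
```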
Now we can sample this space using one of the available methods in pysens.sample.
smpl = pysens.sample.Sobol('ishigami.json')
smpl.build()
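The internals of pysens.sample.Sobol are not shown here, but a comparable Sobol' low-discrepancy design over the bounds declared above can be sketched with SciPy's quasi-Monte Carlo module (the sample size and scrambling seed below are arbitrary choices, not pysens defaults):

```python
import numpy as np
from scipy.stats import qmc

# Draw a Sobol' sample in the unit cube and rescale it to the
# uniform bounds from ishigami.json ([-3.14, 3.14] for X1, X2, X3).
sampler = qmc.Sobol(d=3, scramble=True, seed=0)
sample01 = sampler.random_base2(m=7)          # 2**7 = 128 points in [0, 1)^3
plan = qmc.scale(sample01, [-3.14] * 3, [3.14] * 3)
print(plan.shape)  # (128, 3)
```

Sobol' sequences fill the space far more evenly than pseudo-random sampling, which is why they are a common default for sensitivity studies.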
Let's visualise our DOE and print its statistical properties:
smpl.plot_plan()
pysens.tools.print_stat(smpl.mat)
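The exact output of pysens.tools.print_stat is not documented here; as a rough idea of the kind of summary such a function prints, here is a hypothetical equivalent giving per-column statistics of the DOE matrix (rows are sample points, columns are parameters):

```python
import numpy as np

def print_stat(mat):
    """Hypothetical stand-in for pysens.tools.print_stat: per-column
    summary statistics of a DOE matrix."""
    stats = {
        "min": mat.min(axis=0),
        "max": mat.max(axis=0),
        "mean": mat.mean(axis=0),
        "std": mat.std(axis=0),
    }
    for name, vals in stats.items():
        print(f"{name:>4}: " + "  ".join(f"{v: .3f}" for v in vals))
    return stats

# Example on a random 100 x 3 design (stand-in for smpl.mat):
mat = np.random.default_rng(0).uniform(-3.14, 3.14, size=(100, 3))
stats = print_stat(mat)
```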
Now we can evaluate the model at each of the points described by the DOE plan:
ev = pysens.evaluate.TestIshigami('ishigami-Sobol.csv')
ev.simulate()
This generated one hundred .npy files, each containing the results of one evaluation of the model. They have been stored in a new subdirectory named with the date and time of the analysis.
(In the following lines, we use the name of the subdirectory stored in ev.subdir. You can run the post-processing and analysis without the evaluator object, just by giving the directory name instead.)
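As a rough idea of what the post-processing step has to do, here is a hypothetical helper (`collect_results` and the `*.npy` file pattern are assumptions for illustration, not pysens API) that stacks the per-run result files back into one array, sorted by filename so that row i lines up with row i of the DOE plan:

```python
import glob
import os
import tempfile
import numpy as np

def collect_results(subdir):
    """Hypothetical helper: load every .npy result file in an
    evaluation subdirectory, in sorted filename order, and stack
    them into a single array."""
    files = sorted(glob.glob(os.path.join(subdir, "*.npy")))
    return np.array([np.load(f) for f in files])

# Tiny demonstration with three fake result files:
tmp = tempfile.mkdtemp()
for i, value in enumerate([0.5, -1.2, 3.0]):
    np.save(os.path.join(tmp, f"run_{i:03d}.npy"), value)
print(collect_results(tmp))
```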
prcss = pysens.process.TestModels(ev.subdir)
Now we have both the DOE plan in ishigami-Sobol.csv and the features in Out.csv. Both files are in the subdirectory. For the test models, there is only one feature: the scalar output of the model.
Let's compute the linear correlations between the inputs and the output:
anls = pysens.analyse.CorrelationsSA(ev.subdir + 'ishigami-Sobol.csv', ev.subdir + 'Out.csv')
anls.analyse()
anls.save_results()
anls.plot_results()
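At its core, this analysis amounts to computing the Pearson correlation coefficient between each input column and the scalar output. A minimal stand-in for CorrelationsSA (`linear_correlations` is a hypothetical name, not the library's) could be:

```python
import numpy as np

def linear_correlations(X, y):
    """Pearson correlation of each input column of X with the output y."""
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    return Xc.T @ yc / (np.linalg.norm(Xc, axis=0) * np.linalg.norm(yc))

# Demonstration on the Ishigami function itself:
rng = np.random.default_rng(1)
X = rng.uniform(-3.14, 3.14, size=(500, 3))
y = np.sin(X[:, 0]) + 7.0 * np.sin(X[:, 1]) ** 2 + 0.1 * X[:, 2] ** 4 * np.sin(X[:, 0])
r = linear_correlations(X, y)
print(r)
```

Note the limitation of this measure: the X2 term is symmetric in X2 and the X3 effect is an interaction with X1, so both show near-zero linear correlation even though they influence the output. Variance-based methods (e.g. Sobol' indices) are needed to capture such effects.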
Comments
--------