To be able to visualize decision trees and DNN weight maps, you must enable ipywidgets. To do so, run the following cell and refresh the page!
!jupyter nbextension enable --py widgetsnbextension
import ROOT
from ROOT import TFile, TMVA, TCut
Welcome to JupyROOT 6.09/01
To use the new interactive features in the notebook we have to enable a module called JsMVA. This can be done with the IPython magic %jsmva.
%jsmva on
First let's start with the classical version of the declaration. If you know how to use TMVA in C++ then you can use the same form here in Python: first we pass a string called the job name; as second argument we pass an opened output TFile (this is optional; if present it will be used to store the output histograms); and as third (or second) argument we pass a string containing all the settings related to the Factory, separated by the ':' character.
outputFile = TFile( "TMVA.root", 'RECREATE' )
TMVA.Tools.Instance();
factory = TMVA.Factory( "TMVAClassification", outputFile #this is optional
,"!V:Color:DrawProgressBar:Transformations=I;D;P;G,D:AnalysisType=Classification" )
The options string can contain the following options:
Option | Default | Predefined values | Description |
---|---|---|---|
V | False | - | Verbose flag |
Color | True | - | Flag for colored output |
Transformations | "" | - | List of transformations to test. For example with "I;D;P;U;G" string identity, decorrelation, PCA, uniform and Gaussian transformations will be applied |
Silent | False | - | Batch mode: boolean silent flag inhibiting any output from TMVA after the creation of the factory class object |
DrawProgressBar | True | - | Draw progress bar to display training, testing and evaluation schedule (default: True) |
AnalysisType | Auto | Classification, Regression, Multiclass, Auto | Set the analysis type |
By enabling JsMVA we get a new, more readable way to write the declaration (this applies to all functions, not just the constructor).
factory = TMVA.Factory("TMVAClassification", TargetFile=outputFile,
V=False, Color=True, DrawProgressBar=True, Transformations=["I", "D", "P", "G", "D"],
AnalysisType="Classification")
Arguments of the constructor:
Keyword | Can be used as positional argument | Default | Predefined values | Description |
---|---|---|---|---|
JobName | yes, 1. | not optional | - | Name of job |
TargetFile | yes, 2. | if not passed histograms won't be saved | - | File to write control and performance histograms |
V | no | False | - | Verbose flag |
Color | no | True | - | Flag for colored output |
Transformations | no | "" | - | List of transformations to test. For example with "I;D;P;U;G" string identity, decorrelation, PCA, uniform and Gaussian transformations will be applied |
Silent | no | False | - | Batch mode: boolean silent flag inhibiting any output from TMVA after the creation of the factory class object |
DrawProgressBar | no | True | - | Draw progress bar to display training, testing and evaluation schedule (default: True) |
AnalysisType | no | Auto | Classification, Regression, Multiclass, Auto | Set the analysis type |
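To make the keyword-argument form concrete, here is a hedged, pure-Python sketch (not part of JsMVA, and the exact conversion rules JsMVA applies may differ) of how such keyword arguments could be flattened into the classic colon-separated options string:

```python
def to_options_string(**kwargs):
    """Illustrative only: flatten keyword arguments into a TMVA-style
    colon-separated options string. Booleans become Flag / !Flag,
    lists are joined with ';', everything else becomes key=value."""
    parts = []
    for key, value in kwargs.items():
        if isinstance(value, bool):
            parts.append(key if value else "!" + key)
        elif isinstance(value, list):
            parts.append("%s=%s" % (key, ";".join(value)))
        else:
            parts.append("%s=%s" % (key, value))
    return ":".join(parts)

# A chained transformation such as "G,D" stays one list element:
print(to_options_string(V=False, Color=True, DrawProgressBar=True,
                        Transformations=["I", "D", "P", "G,D"],
                        AnalysisType="Classification"))
# -> !V:Color:DrawProgressBar:Transformations=I;D;P;G,D:AnalysisType=Classification
```

The printed string matches the options string passed to the classic constructor above.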
First we need to declare a DataLoader and add the variables (passing the variable names used in the test and train trees of the input dataset). To add variable names to the DataLoader we use the AddVariable function. Arguments of this function:

- A string containing the variable name; using ":=" we can also add a definition.
- A string (a label for the variable; if not present the variable name will be used) or a character (defining the type of the data points).
- If we passed a label for the variable, the data point type can still be passed as third argument.
dataset = "tmva_class_example" #the dataset name
loader = TMVA.DataLoader(dataset)
loader.AddVariable( "myvar1 := var1+var2", 'F' )
loader.AddVariable( "myvar2 := var1-var2", "Expression 2", 'F' )
loader.AddVariable( "var3", "Variable 3", 'F' )
loader.AddVariable( "var4", "Variable 4", 'F' )
It is possible to define spectator variables, which are part of the input data set but are not used in the MVA training, testing, or evaluation; they can be used, for example, for correlation tests. The parameters are the same as for AddVariable:
loader.AddSpectator( "spec1:=var1*2", "Spectator 1", 'F' )
loader.AddSpectator( "spec2:=var1*3", "Spectator 2", 'F' )
After adding the variables we have to add the data to the DataLoader. To do this we check whether the dataset file already exists in the files directory; if not, we download it from CERN's server. Once we have the ROOT file, we open it and get the signal and background trees.
if ROOT.gSystem.AccessPathName( "tmva_class_example.root" ) != 0:
    ROOT.gSystem.Exec( "wget https://root.cern.ch/files/tmva_class_example.root")
inputFile = TFile.Open( "tmva_class_example.root" )  # renamed from 'input' to avoid shadowing the builtin

# Get the signal and background trees for training
signal = inputFile.Get( "TreeS" )
background = inputFile.Get( "TreeB" )
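The wget call above assumes the wget binary is available. On systems without it, the file can be fetched with Python's standard library instead; this helper is an assumption of ours, not part of the tutorial:

```python
import os
import urllib.request

def fetch_if_missing(url, filename):
    """Download url to filename unless the file already exists locally."""
    if not os.path.exists(filename):
        urllib.request.urlretrieve(url, filename)
    return filename

# Usage (uncomment in a real session):
# fetch_if_missing("https://root.cern.ch/files/tmva_class_example.root",
#                  "tmva_class_example.root")
```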
To pass the signal and background trees to the DataLoader we use the AddSignalTree and AddBackgroundTree functions, and we also set the corresponding DataLoader attributes. Arguments of these functions:
# Global event weights (see below for setting event-wise weights)
signalWeight = 1.0
backgroundWeight = 1.0
loader.AddSignalTree(signal, signalWeight)
loader.AddBackgroundTree(background, backgroundWeight)
loader.fSignalWeight = signalWeight
loader.fBackgroundWeight = backgroundWeight
loader.fTreeS = signal
loader.fTreeB = background
DataSetInfo : Add Tree TreeS of type Signal with 6000 events
DataSetInfo : Add Tree TreeB of type Background with 6000 events
Using the DataLoader.PrepareTrainingAndTestTree function we apply cuts on the input events. In C++ this function also takes its options as a string (as we saw for the Factory constructor), which with JsMVA can be passed as keyword arguments, just as in the Factory constructor case.
Arguments of PrepareTrainingAndTestTree:
Keyword | Can be used as positional argument | Default | Predefined values | Description |
---|---|---|---|---|
SigCut | yes, 1. | - | - | TCut object for signal cut |
BkgCut | yes, 2. | - | - | TCut object for background cut |
SplitMode | no | Random | Random, Alternate, Block | Method of picking training and testing events |
MixMode | no | SameAsSplitMode | SameAsSplitMode, Random, Alternate, Block | Method of mixing events of different classes into one dataset |
SplitSeed | no | 100 | - | Seed for random event shuffling |
NormMode | no | EqualNumEvents | None, NumEvents, EqualNumEvents | Overall renormalisation of event-by-event weights used in the training (NumEvents: average weight of 1 per event, independently for signal and background; EqualNumEvents: average weight of 1 per event for signal, and sum of weights for background equal to sum of weights for signal) |
nTrain_Signal | no | 0 (all) | - | Number of training events of class Signal |
nTest_Signal | no | 0 (all) | - | Number of test events of class Signal |
nTrain_Background | no | 0 (all) | - | Number of training events of class Background |
nTest_Background | no | 0 (all) | - | Number of test events of class Background |
V | no | False | - | Verbosity |
VerboseLevel | no | Info | Debug, Verbose, Info | Verbosity level |
mycuts = TCut("")
mycutb = TCut("")
loader.PrepareTrainingAndTestTree(SigCut=mycuts, BkgCut=mycutb,
nTrain_Signal=0, nTrain_Background=0, SplitMode="Random", NormMode="NumEvents", V=False)
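The empty TCut objects above select every event. A cut is just a C++-style boolean expression over the declared variables, so selections can be composed as plain strings before wrapping them in ROOT.TCut. A hedged sketch; the thresholds here are invented for illustration only:

```python
# Hypothetical selection thresholds, invented for illustration.
base_cut = "var3 > -1.5"
extra_signal_cut = "myvar1 < 2.0"

# Cut expressions combine with the usual C++ boolean operators.
mycuts_expr = "(%s) && (%s)" % (base_cut, extra_signal_cut)  # signal selection
mycutb_expr = base_cut                                       # background selection

print(mycuts_expr)  # -> (var3 > -1.5) && (myvar1 < 2.0)
```

In a real session you would pass TCut(mycuts_expr) and TCut(mycutb_expr) to PrepareTrainingAndTestTree.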
loader.DrawInputVariable("myvar1")
DataSetInfo : Correlation matrix (Signal)
DataSetInfo : Correlation matrix (Background)
loader.DrawInputVariable("myvar1", processTrfs=["D", "N"]) #Transformations: I;N;D;P;U;G,D
DataLoader : Transformation, Variable selection :
             Input : variable 'myvar1' <---> Output : variable 'myvar1'
             Input : variable 'myvar2' <---> Output : variable 'myvar2'
             Input : variable 'var3'   <---> Output : variable 'var3'
             Input : variable 'var4'   <---> Output : variable 'var4'
Preparing the Decorrelation transformation...
loader.DrawCorrelationMatrix("Signal")
To add a method which we want to train on the dataset we have to use the Factory.BookMethod function. This function books a method and its options with the Factory.
Arguments:
Keyword | Can be used as positional argument | Default | Predefined values | Description |
---|---|---|---|---|
DataLoader | yes, 1. | - | - | Pointer to DataLoader object |
Method | yes, 2. | - | kVariable, kCuts, kLikelihood, kPDERS, kHMatrix, kFisher, kKNN, kCFMlpANN, kTMlpANN, kBDT, kDT, kRuleFit, kSVM, kMLP, kBayesClassifier, kFDA, kBoost, kPDEFoam, kLD, kPlugins, kCategory, kDNN, kPyRandomForest, kPyAdaBoost, kPyGTB, kC50, kRSNNS, kRSVM, kRXGB, kMaxMethod | Selected method number; the method numbers are defined in TMVA.Types |
MethodTitle | yes, 3. | - | - | Label for method |
* | no | - | - | Other named arguments which are the options for selected method. |
factory.BookMethod( DataLoader=loader, Method=TMVA.Types.kSVM, MethodTitle="SVM",
Gamma=0.25, Tol=0.001, VarTransform="Norm" )
factory.BookMethod( loader,TMVA.Types.kMLP, "MLP",
H=False, V=False, NeuronType="tanh", VarTransform="N", NCycles=600, HiddenLayers="N+5",
TestRate=5, UseRegulator=False )
factory.BookMethod( loader,TMVA.Types.kLD, "LD",
H=False, V=False, VarTransform="None", CreateMVAPdfs=True, PDFInterpolMVAPdf="Spline2",
NbinsMVAPdf=50, NsmoothMVAPdf=10 )
factory.BookMethod( loader,TMVA.Types.kLikelihood,"Likelihood","NSmoothSig[0]=20:NSmoothBkg[0]=20:NSmoothBkg[1]=10",
NSmooth=1, NAvEvtPerBin=50, H=True, V=False,TransformOutput=True,PDFInterpol="Spline2")
factory.BookMethod( loader, TMVA.Types.kBDT, "BDT",
H=False, V=False, NTrees=850, MinNodeSize="2.5%", MaxDepth=3, BoostType="AdaBoost", AdaBoostBeta=0.5,
UseBaggedBoost=True, BaggedSampleFraction=0.5, SeparationType="GiniIndex", nCuts=20 )
<ROOT.TMVA::MethodBDT object ("BDT") at 0x6811880>
Factory : Booking method: SVM
SVM : Transformation, Variable selection :
      Input : variable 'myvar1' <---> Output : variable 'myvar1'
      Input : variable 'myvar2' <---> Output : variable 'myvar2'
      Input : variable 'var3'   <---> Output : variable 'var3'
      Input : variable 'var4'   <---> Output : variable 'var4'
Factory : Booking method: MLP
MLP : Building Network.
      Initializing weights
Factory : Booking method: LD
Factory : Booking method: Likelihood
Factory : Booking method: BDT
There are two ways to book a DNN:
factory.BookDNN(loader)
trainingStrategy = [{
"LearningRate": 1e-1,
"Momentum": 0.0,
"Repetitions": 1,
"ConvergenceSteps": 300,
"BatchSize": 20,
"TestRepetitions": 15,
"WeightDecay": 0.001,
"Regularization": "NONE",
"DropConfig": "0.0+0.5+0.5+0.5",
"DropRepetitions": 1,
"Multithreading": True
}, {
"LearningRate": 1e-2,
"Momentum": 0.5,
"Repetitions": 1,
"ConvergenceSteps": 300,
"BatchSize": 30,
"TestRepetitions": 7,
"WeightDecay": 0.001,
"Regularization": "L2",
"DropConfig": "0.0+0.1+0.1+0.1",
"DropRepetitions": 1,
"Multithreading": True
}, {
"LearningRate": 1e-2,
"Momentum": 0.3,
"Repetitions": 1,
"ConvergenceSteps": 300,
"BatchSize": 40,
"TestRepetitions": 7,
"WeightDecay": 0.001,
"Regularization": "L2",
"Multithreading": True
},{
"LearningRate": 1e-3,
"Momentum": 0.1,
"Repetitions": 1,
"ConvergenceSteps": 200,
"BatchSize": 70,
"TestRepetitions": 7,
"WeightDecay": 0.001,
"Regularization": "NONE",
"Multithreading": True
}, {
"LearningRate": 1e-3,
"Momentum": 0.1,
"Repetitions": 1,
"ConvergenceSteps": 200,
"BatchSize": 70,
"TestRepetitions": 7,
"WeightDecay": 0.001,
"Regularization": "NONE",
"Multithreading": True
}]
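In the classic string interface, TrainingStrategy is a single string in which the key=value pairs of one training phase are joined with ',' and successive phases with '|'. A hedged pure-Python sketch of that flattening (the exact conversion JsMVA performs internally is an assumption here):

```python
def strategy_to_string(phases):
    """Illustrative only: join each phase's key=value pairs with ','
    and the successive phases with '|'."""
    return "|".join(
        ",".join("%s=%s" % (key, value) for key, value in phase.items())
        for phase in phases
    )

print(strategy_to_string([
    {"LearningRate": 1e-1, "Momentum": 0.0, "Repetitions": 1},
    {"LearningRate": 1e-2, "Momentum": 0.5, "Repetitions": 1},
]))
# -> LearningRate=0.1,Momentum=0.0,Repetitions=1|LearningRate=0.01,Momentum=0.5,Repetitions=1
```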
factory.BookMethod(DataLoader=loader, Method=TMVA.Types.kDNN, MethodTitle="DNN",
H = False, V=False, VarTransform="Normalize", ErrorStrategy="CROSSENTROPY",
Layout=["TANH|100", "TANH|50", "TANH|10", "LINEAR"],
TrainingStrategy=trainingStrategy,Architecture="STANDARD")
<ROOT.TMVA::MethodDNN object ("DNN") at 0x6634270>
Factory : Booking method: DNN
DNN : Transformation, Variable selection :
      Input : variable 'myvar1' <---> Output : variable 'myvar1'
      Input : variable 'myvar2' <---> Output : variable 'myvar2'
      Input : variable 'var3'   <---> Output : variable 'var3'
      Input : variable 'var4'   <---> Output : variable 'var4'
When you use the jsmva magic, the original C++ version of Factory.TrainAllMethods is replaced by a new training method which produces notebook-compatible output during training, so we can follow the process (progress bar, error plot). For some methods (MLP, DNN, BDT) a tracer plot is created (for MLP and DNN: test and training error versus epoch; for BDT: error fraction and boost weight versus tree number). Some methods do not support interactive tracing; for these a simple text message is printed, so we at least know which method TrainAllMethods is currently training.
For methods that can be traced interactively there is a stop button which can stop the training process. This button only stops the training of the current method; it does not stop TrainAllMethods completely.
factory.TrainAllMethods()
Building SVM Working Set...with 6000 event instances
Elapsed time for Working Set build : 1.24 sec
Sorry, no computing time forecast available for SVM, please wait ...
Elapsed time : 1.68 sec
Elapsed time for training with 6000 events : 2.94 sec
Elapsed time for evaluation of 6000 events : 1.03 sec
Creating xml weight file: tmva_class_example/weights/TMVAClassification_SVM.weights.xml
Creating standalone class: tmva_class_example/weights/TMVAClassification_SVM.class.C

MLP : Training Network
Elapsed time for training with 6000 events : 1.43 sec
Elapsed time for evaluation of 6000 events : 0.00932 sec
Creating xml weight file: tmva_class_example/weights/TMVAClassification_MLP.weights.xml
Creating standalone class: tmva_class_example/weights/TMVAClassification_MLP.class.C
Write special histos to file: TMVA.root:/tmva_class_example/Method_MLP/MLP

LD : Results for LD coefficients:
     Variable:  Coefficient:
     myvar1:    -0.359
     myvar2:    -0.109
     var3:      -0.211
     var4:      +0.722
     (offset):  -0.054
Elapsed time for training with 6000 events : 0.00231 sec
Elapsed time for evaluation of 6000 events : 0.000759 sec
Creating xml weight file: tmva_class_example/weights/TMVAClassification_LD.weights.xml
Creating standalone class: tmva_class_example/weights/TMVAClassification_LD.class.C

================================================================
--- Short description:
The maximum-likelihood classifier models the data with probability
density functions (PDF) reproducing the signal and background
distributions of the input variables. Correlations among the
variables are ignored.
--- Performance optimisation:
Required for good performance are decorrelated input variables
(PCA transformation via the option "VarTransform=Decorrelate"
may be tried). Irreducible non-linear correlations may be reduced
by precombining strongly correlated input variables, or by simply
removing one of the variables.
--- Performance tuning via configuration options:
High fidelity PDF estimates are mandatory, i.e., sufficient training
statistics is required to populate the tails of the distributions.
It would be a surprise if the default Spline or KDE kernel parameters
provide a satisfying fit to the data. The user is advised to properly
tune the events per bin and smooth options in the spline cases
individually per variable. If the KDE kernel is used, the adaptive
Gaussian kernel may lead to artefacts, so please always also try
the non-adaptive one.
All tuning parameters must be adjusted individually for each input
variable!
================================================================
Filling reference histograms
Building PDF out of reference histograms
Elapsed time for training with 6000 events : 0.0304 sec
Elapsed time for evaluation of 6000 events : 0.00743 sec
Creating xml weight file: tmva_class_example/weights/TMVAClassification_Likelihood.weights.xml
Creating standalone class: tmva_class_example/weights/TMVAClassification_Likelihood.class.C
Write monitoring histograms to file: TMVA.root:/tmva_class_example/Method_Likelihood/Likelihood

BDT : #events: (reweighted) sig: 3000 bkg: 3000
      #events: (unweighted) sig: 3000 bkg: 3000
Training 850 Decision Trees ... patience please
Elapsed time for training with 6000 events : 1.54 sec
Elapsed time for evaluation of 6000 events : 0.443 sec
Creating xml weight file: tmva_class_example/weights/TMVAClassification_BDT.weights.xml
Creating standalone class: tmva_class_example/weights/TMVAClassification_BDT.class.C

Using Standard Implementation.
Training with learning rate = 0.1, momentum = 0, repetitions = 1
Training with learning rate = 0.01, momentum = 0.5, repetitions = 1
Training with learning rate = 0.01, momentum = 0.3, repetitions = 1
Training with learning rate = 0.001, momentum = 0.1, repetitions = 1
Training with learning rate = 0.001, momentum = 0.1, repetitions = 1
Elapsed time for training with 6000 events : 4.53 sec
Elapsed time for evaluation of 6000 events : 0.212 sec
Creating xml weight file: tmva_class_example/weights/TMVAClassification_DNN.weights.xml
Creating standalone class: tmva_class_example/weights/TMVAClassification_DNN.class.C
To test the methods and evaluate their performance we need to run the Factory.TestAllMethods and Factory.EvaluateAllMethods functions.
factory.TestAllMethods()
factory.EvaluateAllMethods()
Factory : Test all methods
Factory : Test method: SVM for Classification performance
Elapsed time for evaluation of 6000 events : 0.983 sec
Factory : Test method: MLP for Classification performance
Elapsed time for evaluation of 6000 events : 0.00927 sec
Factory : Test method: LD for Classification performance
Elapsed time for evaluation of 6000 events : 0.00108 sec
Factory : Test method: Likelihood for Classification performance
Elapsed time for evaluation of 6000 events : 0.00623 sec
Factory : Test method: BDT for Classification performance
Elapsed time for evaluation of 6000 events : 0.367 sec
Factory : Test method: DNN for Classification performance
Elapsed time for evaluation of 6000 events : 0.193 sec
Factory : Evaluate all methods
Factory : Evaluate classifier: SVM
Factory : Evaluate classifier: MLP
Factory : Evaluate classifier: LD
Also filling probability and rarity histograms (on request)...
Factory : Evaluate classifier: Likelihood
Factory : Evaluate classifier: BDT
Factory : Evaluate classifier: DNN

Evaluation results ranked by best signal efficiency and purity (area)
DataSet              MVA          ROC-integ
Name:                Method:
tmva_class_example   DNN        : 0.940
tmva_class_example   MLP        : 0.939
tmva_class_example   SVM        : 0.937
tmva_class_example   BDT        : 0.931
tmva_class_example   LD         : 0.895
tmva_class_example   Likelihood : 0.827

Testing efficiency compared to training efficiency (overtraining check)
DataSet              MVA          Signal efficiency: from test sample (from training sample)
Name:                Method:      @B=0.01          @B=0.10          @B=0.30
tmva_class_example   DNN        : 0.390 (0.345)    0.804 (0.798)    0.962 (0.963)
tmva_class_example   MLP        : 0.365 (0.345)    0.806 (0.797)    0.962 (0.964)
tmva_class_example   SVM        : 0.400 (0.322)    0.802 (0.791)    0.961 (0.961)
tmva_class_example   BDT        : 0.350 (0.380)    0.778 (0.805)    0.955 (0.959)
tmva_class_example   LD         : 0.261 (0.242)    0.679 (0.662)    0.901 (0.903)
tmva_class_example   Likelihood : 0.106 (0.101)    0.400 (0.371)    0.812 (0.813)

Dataset:tmva_class_exa...: Created tree 'TestTree' with 6000 events
Dataset:tmva_class_exa...: Created tree 'TrainTree' with 6000 events
Factory : Thank you for using TMVA!
For citation information, please visit: http://tmva.sf.net/citeTMVA.html
To draw the classifier output distribution we have to use the Factory.DrawOutputDistribution function, which is inserted by invoking the jsmva magic. Its arguments:
Keyword | Can be used as positional argument | Default | Predefined values | Description |
---|---|---|---|---|
datasetName | yes, 1. | - | - | The name of dataset |
methodName | yes, 2. | - | - | The name of method |
factory.DrawOutputDistribution(dataset, "MLP")
To draw the classifier probability distribution we have to use the Factory.DrawProbabilityDistribution function, which is inserted by invoking the jsmva magic. Its arguments:
Keyword | Can be used as positional argument | Default | Predefined values | Description |
---|---|---|---|---|
datasetName | yes, 1. | - | - | The name of dataset |
methodName | yes, 2. | - | - | The name of method |
factory.DrawProbabilityDistribution(dataset, "LD")
To draw the ROC (receiver operating characteristic) curve we have to use the Factory.DrawROCCurve function, which is inserted by invoking the jsmva magic. Its arguments:
Keyword | Can be used as positional argument | Default | Predefined values | Description |
---|---|---|---|---|
datasetName | yes, 1. | - | - | The name of dataset |
factory.DrawROCCurve(dataset)
To draw the classifier cut efficiencies we have to use the Factory.DrawCutEfficiencies function, which is inserted by invoking the jsmva magic. Its arguments:
Keyword | Can be used as positional argument | Default | Predefined values | Description |
---|---|---|---|---|
datasetName | yes, 1. | - | - | The name of dataset |
methodName | yes, 2. | - | - | The name of method |
factory.DrawCutEfficiencies(dataset, "MLP")
If we trained a neural network, the weights of the network are saved to an XML file and a C file. We can read back the XML file and visualize the network using the Factory.DrawNeuralNetwork function.
The arguments of this function:
Keyword | Can be used as positional argument | Default | Predefined values | Description |
---|---|---|---|---|
datasetName | yes, 1. | - | - | The name of dataset |
methodName | yes, 2. | - | - | The name of method |
This visualization is interactive. The synapses are drawn in two colors, one for positive and one for negative weights; the absolute value of each weight is scaled and mapped to the thickness of the line between the two nodes.
factory.DrawNeuralNetwork(dataset, "MLP")
The DrawNeuralNetwork function can also visualize deep neural networks; we just have to pass "DNN" as the method name. If you have a very big network with many thousands of neurons, drawing it will be somewhat slow and will need a lot of RAM, so be careful with this function.
This visualization is also interactive.
factory.DrawNeuralNetwork(dataset, "DNN")
The trained decision trees are also saved to an XML file, so we can read it back and visualize the trees. This is the purpose of the Factory.DrawDecisionTree function.
The arguments of this function:
Keyword | Can be used as positional argument | Default | Predefined values | Description |
---|---|---|---|---|
datasetName | yes, 1. | - | - | The name of dataset |
methodName | yes, 2. | - | - | The name of method |
This function produces a small input box where you can enter the index of the tree you want to see (the total number of trees also appears before this input box). After choosing the index you have to press the Draw button. The nodes of the tree are colored; the color encodes the signal efficiency.
The visualization of the tree is interactive, and you can do the following with it:

- Mouseover (node, weight): shows the decision path
- Zooming and grab-and-move are supported
- Reset the zoomed tree: double click
- Expand all closed subtrees, turn off zoom: button at the bottom of the picture
- Click on node:
factory.DrawDecisionTree(dataset, "BDT") #11
factory.DrawDNNWeights(dataset, "DNN")
outputFile.Close()