nonconformist.evaluation.run_experiment
nonconformist.evaluation.run_experiment(models, csv_files, iterations=10, folds=10, fit_params=None, scoring_funcs=None, significance_levels=None, normalize=False, verbose=False, header=0)

Performs a cross-validation evaluation of one or several conformal predictors on a collection of data sets in CSV format.
Parameters:

models : object or iterable
Conformal predictor(s) to evaluate.
csv_files : iterable
List of file names (with absolute paths) containing CSV data, used to evaluate the conformal predictor.
iterations : int
Number of iterations to use for evaluation. The data set is randomly shuffled before each iteration.
folds : int
Number of folds to use for evaluation.
fit_params : dictionary
Parameters to supply to the conformal predictor during training.
scoring_funcs : iterable
List of evaluation functions to apply to the conformal predictor in each fold. Each evaluation function should have the signature scorer(prediction, y, significance).
significance_levels : iterable
List of significance levels at which to evaluate the conformal predictor.
verbose : boolean
Indicates whether to output progress information during evaluation.
Returns:

scores : pandas DataFrame
Tabulated results for each data set, iteration, fold and evaluation function.
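To illustrate the scorer(prediction, y, significance) signature described above, here is a minimal sketch of a custom scoring function. It assumes the common conformal-classification convention in which prediction is an (n_samples, n_classes) array of p-values and a class is included in the prediction set when its p-value exceeds the significance level; the function name mean_error_rate and these exact semantics are illustrative assumptions, not part of the documented API.

```python
import numpy as np

def mean_error_rate(prediction, y, significance):
    """Hypothetical scorer with the signature scorer(prediction, y, significance).

    Assumes `prediction` is an (n_samples, n_classes) array of p-values,
    `y` holds integer true-class indices, and a class belongs to the
    prediction set when its p-value exceeds `significance`.
    Returns the fraction of samples whose true class was excluded.
    """
    included = prediction > significance            # boolean prediction sets
    covered = included[np.arange(len(y)), y]        # is the true class in the set?
    return float(np.mean(~covered))

# Hypothetical usage (requires nonconformist, fitted models, and CSV files):
# scores = run_experiment(models=[my_model],
#                         csv_files=['/path/to/data.csv'],
#                         scoring_funcs=[mean_error_rate],
#                         significance_levels=[0.05, 0.1])
```

A function of this shape can then be passed in the scoring_funcs list, and its return value is tabulated per data set, iteration, fold, and significance level in the resulting DataFrame.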