cca_zoo.model_selection.learning_curve#

cca_zoo.model_selection.learning_curve(estimator, representations, y=None, groups=None, train_sizes=array([0.1, 0.325, 0.55, 0.775, 1.]), cv=None, scoring=None, exploit_incremental_learning=False, n_jobs=None, pre_dispatch='all', verbose=0, shuffle=False, random_state=None, error_score=nan, return_times=False, fit_params=None)[source]#

Learning Curve.

Determines cross-validated training and test scores for different training set sizes. A cross-validation generator splits the whole dataset k times into training and test data. Subsets of the training set with varying sizes are used to train the estimator, and a score is computed for each training subset size and for the test set. Afterwards, the scores are averaged over all k runs for each training subset size.

Read more in the User Guide.

Parameters:
  • estimator (object) – An object type that implements the “fit” and “predict” methods. An object of this type is cloned for each validation.

  • representations (list or tuple of numpy arrays or array-likes) – Input data as a list or tuple of numpy arrays or array-likes with the same number of rows (samples).

  • y (array-like of shape (n_samples,) or (n_samples, n_outputs), optional) – Target relative to representations for classification or regression; None for unsupervised learning.

  • groups (array-like of shape (n_samples,), default=None) – Group labels for the samples used while splitting the dataset into train/test set. Only used in conjunction with a “Group” cv instance (e.g., GroupKFold).

  • train_sizes (array-like of shape (n_ticks,), default=np.linspace(0.1, 1.0, 5)) – Relative or absolute numbers of training examples that will be used to generate the learning curve. If the dtype is float, it is regarded as a fraction of the maximum size of the training set (that is determined by the selected validation method), i.e., it has to be within (0, 1]. Otherwise, it is interpreted as absolute sizes of the training sets. Note that for classification, the number of samples usually has to be big enough to contain at least one sample from each class.

  • cv (int, cross-validation generator, or an iterable, default=None) – Determines the cross-validation splitting strategy. Possible inputs for cv are:
    – None, to use the default 5-fold cross-validation;
    – int, to specify the number of folds in a (Stratified)KFold;
    – a CV splitter;
    – an iterable yielding (train, test) splits as arrays of indices.
    For int/None inputs, if the estimator is a classifier and “y” is either binary or multiclass, StratifiedKFold is used. In all other cases, KFold is used. These splitters are instantiated with shuffle=False, so the splits will be the same across calls. Refer to the User Guide for the various cross-validation strategies that can be used here.

  • scoring (str or callable, default=None) – A str (see model evaluation documentation) or a scorer callable object / function with signature “scorer(estimator, representations, y)”.

  • exploit_incremental_learning (bool, default=False) – If the estimator supports incremental learning, this will be used to speed up fitting for different training set sizes.

  • n_jobs (int, default=None) – Number of jobs to run in parallel. Training the estimator and computing the score are parallelized over the different training and test sets. None means 1 unless in a joblib.parallel_backend context. -1 means using all processors. See the Glossary for more details.

  • pre_dispatch (int or str, default='all') – Number of predispatched jobs for parallel execution (default is all). The option can reduce the allocated memory. The str can be an expression like ‘2*n_jobs’.

  • verbose (int, default=0) – Controls the verbosity: the higher, the more messages.

  • shuffle (bool, default=False) – Whether to shuffle training data before taking prefixes of it based on “train_sizes”.

  • random_state (int, RandomState instance, or None, default=None) – Used when “shuffle” is True. Pass an int for reproducible output across multiple function calls. See the Glossary for more details.

  • error_score ('raise' or numeric, default=np.nan) – Value to assign to the score if an error occurs in estimator fitting. If set to ‘raise’, the error is raised. If a numeric value is given, FitFailedWarning is raised.

  • return_times (bool, default=False) – Whether to return the fit and score times.

  • fit_params (dict, default=None) – Parameters to pass to the fit method of the estimator.
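The float-to-absolute conversion described for train_sizes above can be sketched as follows. This is a simplified illustration of the documented behaviour, not the library's implementation, and the helper name is hypothetical:

```python
def train_sizes_abs(train_sizes, n_max_training_samples):
    """Map relative fractions in (0, 1] to absolute training-set sizes.

    Simplified sketch of the behaviour documented for ``train_sizes``;
    the real conversion is performed inside scikit-learn.
    """
    out = []
    for size in train_sizes:
        if isinstance(size, float):
            # Floats are fractions of the maximum training-set size.
            if not 0.0 < size <= 1.0:
                raise ValueError(f"fraction {size} is outside (0, 1]")
            out.append(int(size * n_max_training_samples))
        else:
            out.append(int(size))  # integers are absolute sizes already
    return out

# With 5-fold CV on 100 samples, the maximum training-set size is 80:
print(train_sizes_abs([0.25, 0.5, 1.0], 80))  # [20, 40, 80]
```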

Returns:

  • train_sizes_abs (array, shape (n_unique_ticks,)) – Numbers of training examples that have been used to generate the learning curve.

  • train_scores (array, shape (n_ticks, n_cv_folds)) – Scores on training sets.

  • test_scores (array, shape (n_ticks, n_cv_folds)) – Scores on the test sets.

  • fit_times (array, shape (n_ticks, n_cv_folds)) – Times spent for fitting in seconds. Only present if return_times is True.

  • score_times (array, shape (n_ticks, n_cv_folds)) – Times spent for scoring in seconds. Only present if return_times is True.
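A minimal usage sketch of the call/return contract above. cca_zoo's version takes a list of representations in place of a single data matrix; for brevity, the underlying scikit-learn function referenced in the See also is shown here on a single-view regression problem (the estimator and data are illustrative):

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import learning_curve

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 5))
y = X @ rng.standard_normal(5) + 0.1 * rng.standard_normal(100)

sizes, train_scores, test_scores = learning_curve(
    Ridge(),
    X,
    y,
    train_sizes=np.linspace(0.1, 1.0, 5),  # 5 relative ticks in (0, 1]
    cv=5,                                  # 5-fold cross-validation
)

# One row per training-set size, one column per CV fold.
print(sizes)               # absolute training-set sizes used
print(train_scores.shape)  # (5, 5)
print(test_scores.shape)   # (5, 5)
print(test_scores.mean(axis=1))  # fold-averaged test score per size
```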

See also

sklearn.model_selection.learning_curve

The function to create the learning curve.