skfp.metrics.multioutput_auroc_score#

skfp.metrics.multioutput_auroc_score(y_true: ndarray, y_score: ndarray, *args, **kwargs) float#

Area Under Receiver Operating Characteristic curve (AUROC / ROC AUC) score for multioutput problems.

Returns the average value over all tasks. Missing values in the target labels are ignored. Columns with a constant true value are skipped by default, but can instead contribute a default value - see the auroc_score function. This makes the metric safe to use e.g. in cross-validation, where a fold may lack positive (or negative) samples for some task. Single-task evaluation is also supported.

Any additional arguments are passed to the underlying auroc_score and roc_auc_score functions; see the scikit-learn documentation for more information.
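The averaging semantics can be sketched with plain scikit-learn, as a rough illustration only (not the actual skfp implementation): compute ROC AUC per output column, masking out NaN labels and skipping columns whose remaining labels are constant, then average.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def naive_multioutput_auroc(y_true, y_score) -> float:
    """Illustrative sketch of per-task AUROC averaging.

    NaN entries in y_true are masked out per column; columns whose
    remaining labels are constant are skipped entirely.
    """
    y_true = np.asarray(y_true, dtype=float)
    y_score = np.asarray(y_score, dtype=float)
    if y_true.ndim == 1:  # single-task input: treat as one column
        y_true = y_true[:, None]
        y_score = y_score[:, None]
    scores = []
    for col in range(y_true.shape[1]):
        mask = ~np.isnan(y_true[:, col])  # ignore missing labels
        labels = y_true[mask, col]
        if len(np.unique(labels)) < 2:  # constant column: skip
            continue
        scores.append(roc_auc_score(labels, y_score[mask, col]))
    return float(np.mean(scores))
```

This reproduces the behavior shown in the examples below, e.g. averaging a perfect task (AUROC 1.0) with a tied-score task (AUROC 0.5) yields 0.75.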

Parameters:
  • y_true (array-like of shape (n_samples,) or (n_samples, n_outputs)) – Ground truth (correct) target values.

  • y_score (array-like of shape (n_samples,) or (n_samples, n_outputs)) – Target scores, i.e. the probability of the class with the greater label for each output of the classifier.

  • *args – Any additional parameters for the underlying scikit-learn metric function.

  • **kwargs – Any additional parameters for the underlying scikit-learn metric function.

Returns:

score – Average AUROC value over all tasks.

Return type:

float

Examples

>>> import numpy as np
>>> from skfp.metrics import multioutput_auroc_score
>>> y_true = [[0, 0], [1, 1]]
>>> y_score = [[0.75, 0.0], [0.9, 0.0]]
>>> multioutput_auroc_score(y_true, y_score)
0.75
>>> y_true = [[0, 0], [1, np.nan], [np.nan, 1]]
>>> y_score = [[0.75, 0.0], [0.25, 0.0], [0.0, 0.25]]
>>> multioutput_auroc_score(y_true, y_score)
0.5