skfp.metrics.multioutput_auprc_score

skfp.metrics.multioutput_auprc_score(y_true: ndarray, y_score: ndarray, *args, **kwargs) → float

Area Under Precision-Recall Curve (AUPRC / AUC PRC / average precision) score for multioutput problems.

Returns the average value over all tasks. Missing values in target labels are ignored. Also supports single-task evaluation.

Any additional arguments are passed to the underlying average_precision_score function; see the scikit-learn documentation for more information.
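
For instance, scikit-learn keyword arguments such as sample_weight should pass through unchanged (an illustrative call; uniform weights are used here, so the score matches the unweighted first example below):

>>> from skfp.metrics import multioutput_auprc_score
>>> multioutput_auprc_score([[0, 0], [1, 1]], [[0.75, 0.0], [0.9, 0.0]], sample_weight=[1.0, 1.0])
0.75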

Parameters:
  • y_true (array-like of shape (n_samples,) or (n_samples, n_outputs)) – Ground truth (correct) target values.

  • y_score (array-like of shape (n_samples,) or (n_samples, n_outputs)) – Target scores, i.e. the probability of the class with the greater label for each output of the classifier.

  • *args – Any additional parameters for the underlying scikit-learn metric function.

  • **kwargs – Any additional parameters for the underlying scikit-learn metric function.

Returns:
  score – Average AUPRC value over all tasks.

Return type:
  float

Examples

>>> import numpy as np
>>> from skfp.metrics import multioutput_auprc_score
>>> y_true = [[0, 0], [1, 1]]
>>> y_score = [[0.75, 0.0], [0.9, 0.0]]
>>> multioutput_auprc_score(y_true, y_score)
0.75
>>> y_true = [[0, 0], [1, np.nan], [np.nan, 1]]
>>> y_score = [[0.75, 0.0], [0.25, 0.0], [0.0, 0.25]]
>>> multioutput_auprc_score(y_true, y_score)
0.75
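
For intuition, the behavior with missing labels can be reproduced by scoring each task separately with scikit-learn's average_precision_score, masking out NaN entries first, and averaging the per-task results. A minimal sketch of such equivalent logic (an illustration, not the library's actual implementation), continuing the doctest session above:

>>> from sklearn.metrics import average_precision_score
>>> def masked_mean_auprc(y_true, y_score):
...     # sketch: score each task column separately, skipping rows with NaN labels
...     y_true = np.asarray(y_true, dtype=float)
...     y_score = np.asarray(y_score)
...     scores = []
...     for task in range(y_true.shape[1]):
...         mask = ~np.isnan(y_true[:, task])
...         scores.append(average_precision_score(y_true[mask, task], y_score[mask, task]))
...     return float(np.mean(scores))
>>> masked_mean_auprc(y_true, y_score)
0.75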