skfp.metrics.multioutput_f1_score

skfp.metrics.multioutput_f1_score(y_true: ndarray, y_pred: ndarray, *args, **kwargs) → float

F1 score for multioutput problems.

Returns the average value over all tasks. Missing values in the target labels are ignored. Columns with a constant true value are also ignored, which differs from the default scikit-learn behavior of returning 0 for such columns. Single-task (one-dimensional) inputs are also supported.

Any additional arguments are passed to the underlying f1_score function; see the scikit-learn documentation for more information.
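Conceptually, the metric behaves like the sketch below: per task, missing labels are masked out, constant-target columns are skipped, and the remaining per-column f1_score values are averaged, with extra keyword arguments forwarded to f1_score. This is an illustrative, hypothetical reimplementation based on the description above, not the library's actual code; the helper name multioutput_f1_sketch is made up.

import numpy as np
from sklearn.metrics import f1_score

def multioutput_f1_sketch(y_true, y_pred, **kwargs):
    # Hypothetical reimplementation for illustration only.
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    if y_true.ndim == 1:
        # single-task input: treat it as one column
        y_true = y_true[:, np.newaxis]
        y_pred = y_pred[:, np.newaxis]

    scores = []
    for col in range(y_true.shape[1]):
        mask = ~np.isnan(y_true[:, col])       # ignore missing labels
        col_true = y_true[mask, col]
        col_pred = y_pred[mask, col]
        if np.unique(col_true).size < 2:       # skip constant (or empty) columns
            continue
        scores.append(f1_score(col_true, col_pred, **kwargs))
    return float(np.mean(scores))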

Parameters:
  • y_true (array-like of shape (n_samples,) or (n_samples, n_outputs)) – Ground truth (correct) target values.

  • y_pred (array-like of shape (n_samples,) or (n_samples, n_outputs)) – Estimated target values.

  • *args – Any additional parameters for the underlying scikit-learn metric function.

  • **kwargs – Any additional parameters for the underlying scikit-learn metric function.

Returns:

score – Average F1 score value over all tasks.

Return type:

float

Examples

>>> import numpy as np
>>> from skfp.metrics import multioutput_f1_score
>>> y_true = [[0, 0], [1, 1]]
>>> y_pred = [[0, 0], [0, 1]]
>>> multioutput_f1_score(y_true, y_pred)
0.5
>>> y_true = [[0, np.nan], [1, np.nan], [np.nan, np.nan]]
>>> y_pred = [[0, 0], [0, 0], [1, 0]]
>>> multioutput_f1_score(y_true, y_pred)
0.0
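Because keyword arguments are forwarded to f1_score, parameters such as average can be passed through. The call below is only illustrative (the data is arbitrary and the exact result depends on it, so the output is skipped):

>>> y_true = [[0, 1], [1, 1], [1, 0]]
>>> y_pred = [[0, 1], [1, 0], [1, 0]]
>>> multioutput_f1_score(y_true, y_pred, average="micro")  # doctest: +SKIP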