Evaluator

class evaluator.Accuracy(reduce=None)

Bases: Metric

A class representing accuracy metrics for classification models.

- compute(y_true, y_pred)

Compute the accuracy metric based on the specified reduce method.

- accuracy(y_true, y_pred)

Compute the accuracy for each class.

- micro_accuracy(y_true, y_pred)

Compute the micro-averaged accuracy.

- macro_accuracy(y_true, y_pred)

Compute the macro-averaged accuracy.

- weighted_accuracy(y_true, y_pred)

Compute the weighted accuracy.

accuracy(y_true, y_pred)
compute(y_true, y_pred)

Compute the accuracy metric based on the specified reduce method.

Parameters:
  • y_true (array-like) – The true labels.

  • y_pred (array-like) – The predicted labels.

Returns:

The computed accuracy metric.

Return type:

float or array

Raises:

ValueError – If an unknown reduce method is specified.

macro_accuracy(y_true, y_pred)
micro_accuracy(y_true, y_pred)
weighted_accuracy(y_true, y_pred)
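
A minimal usage sketch for Accuracy. It assumes integer class labels and that reduce accepts the strings "micro", "macro" and "weighted"; the exact accepted values are not spelled out above, so treat them as assumptions.

    import numpy as np
    from evaluator import Accuracy

    y_true = np.array([0, 1, 2, 2, 1, 0])
    y_pred = np.array([0, 1, 1, 2, 1, 0])

    # No reduce method: expected to return one accuracy value per class.
    per_class = Accuracy().compute(y_true, y_pred)

    # Reduced variants; the string values below are assumptions.
    macro = Accuracy(reduce="macro").compute(y_true, y_pred)
    micro = Accuracy(reduce="micro").compute(y_true, y_pred)

    print(per_class, macro, micro)
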
class evaluator.Evaluator(metrics: list[Metric])

Bases: object

Class to evaluate model performance using a list of metrics.

metrics

A list of Metric objects representing the evaluation metrics.

Type:

list[Metric]

evaluate(y_true, y_pred)

Evaluate the predictions against every metric in metrics.
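
A sketch of composing several metrics with Evaluator. The structure of the value returned by evaluate (list, dict, or otherwise) is not documented above, so the example simply prints whatever comes back; the "macro" reduce string is likewise an assumption.

    from evaluator import Accuracy, Evaluator, F1Score, Precision, Recall

    metrics = [
        Accuracy(reduce="macro"),
        Precision(reduce="macro"),
        Recall(reduce="macro"),
        F1Score(reduce="macro"),
    ]

    ev = Evaluator(metrics)
    results = ev.evaluate([0, 1, 2, 2, 1, 0], [0, 1, 1, 2, 1, 0])
    print(results)  # shape of the result is defined by Evaluator.evaluate
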
class evaluator.F1Score(reduce=None)

Bases: Metric

Class representing the F1 Score metric.

- compute(y_true, y_pred)

Computes the F1 Score based on the specified reduction method.

- f1score(y_true, y_pred)

Computes the F1 Score for each class.

- micro_f1score(y_true, y_pred)

Computes the micro-averaged F1 Score.

- macro_f1score(y_true, y_pred)

Computes the macro-averaged F1 Score.

- weighted_f1score(y_true, y_pred)

Computes the weighted F1 Score.

compute(y_true, y_pred)

Computes the F1 Score based on the specified reduction method.

Parameters:
  • y_true (array-like) – The true labels.

  • y_pred (array-like) – The predicted labels.

Returns:

The computed F1 Score(s).

Return type:

float or array-like

Raises:

ValueError – If an unknown reduce method is specified.

f1score(y_true, y_pred)
macro_f1score(y_true, y_pred)
micro_f1score(y_true, y_pred)
weighted_f1score(y_true, y_pred)
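
A short sketch comparing the F1 Score reduce modes. The reduce strings are assumptions, and an unknown value is expected to raise ValueError as documented.

    from evaluator import F1Score

    y_true = [0, 0, 1, 1, 2, 2]
    y_pred = [0, 1, 1, 1, 2, 0]

    for mode in (None, "micro", "macro", "weighted"):  # assumed reduce values
        print(mode, F1Score(reduce=mode).compute(y_true, y_pred))

    try:
        F1Score(reduce="median").compute(y_true, y_pred)
    except ValueError as err:
        print("unknown reduce method:", err)
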
class evaluator.Metric

Bases: object

A base class for defining evaluation metrics.

compute(y_true, y_pred)

Abstract method to compute the metric value.

Parameters:
  • y_true – The true labels.

  • y_pred – The predicted labels.

Returns:

The computed metric value.
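
Because Metric.compute is abstract, a custom metric can plug into Evaluator by subclassing Metric and overriding compute. A minimal sketch; the mean-absolute-error metric below is purely illustrative and not part of the module.

    import numpy as np

    from evaluator import Metric

    class MeanAbsoluteError(Metric):
        """Hypothetical custom metric built on the Metric base class."""

        def compute(self, y_true, y_pred):
            y_true = np.asarray(y_true, dtype=float)
            y_pred = np.asarray(y_pred, dtype=float)
            return float(np.mean(np.abs(y_true - y_pred)))

    print(MeanAbsoluteError().compute([1, 2, 3], [1, 2, 5]))  # 0.666...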

class evaluator.Precision(reduce=None)

Bases: Metric

A class representing the precision metric.

This class provides methods to compute precision for different reduce methods, such as micro, macro, and weighted.

Parameters:

reduce (str, optional) – The reduce method to use. Defaults to None.

compute(y_true, y_pred)

Computes the precision metric based on the specified reduce method.

Parameters:
  • y_true (array-like) – The true labels.

  • y_pred (array-like) – The predicted labels.

Returns:

The computed precision metric.

Return type:

float or array-like

Raises:

ValueError – If an unknown reduce method is specified.

macro_precision(y_true, y_pred)

Computes the macro-averaged precision metric.

Parameters:
  • y_true (array-like) – The true labels.

  • y_pred (array-like) – The predicted labels.

Returns:

The macro-averaged precision.

Return type:

float

micro_precision(y_true, y_pred)

Computes the micro-averaged precision metric.

Parameters:
  • y_true (array-like) – The true labels.

  • y_pred (array-like) – The predicted labels.

Returns:

The micro-averaged precision.

Return type:

float

precision(y_true, y_pred)

Computes the precision metric for each class.

Parameters:
  • y_true (array-like) – The true labels.

  • y_pred (array-like) – The predicted labels.

Returns:

The precision for each class.

Return type:

array-like

weighted_precision(y_true, y_pred)

Computes the weighted precision metric.

Parameters:
  • y_true (array-like) – The true labels.

  • y_pred (array-like) – The predicted labels.

Returns:

The weighted precision.

Return type:

float
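
A sketch of the per-class versus reduced precision calls; that precision() returns one value per class is documented above, while the "weighted" reduce string is an assumption.

    from evaluator import Precision

    y_true = [0, 0, 1, 1, 2, 2]
    y_pred = [0, 2, 1, 1, 2, 2]

    # Per-class precision, one value per class.
    print(Precision().precision(y_true, y_pred))

    # Weighted reduction via compute; "weighted" is an assumed string value.
    print(Precision(reduce="weighted").compute(y_true, y_pred))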

class evaluator.Recall(reduce=None)

Bases: Metric

A class to compute the recall metric for multi-class classification.

reduce

The reduce method to use. Defaults to None.

Type:

str, optional

compute(y_true, y_pred)

Compute the recall metric.

Parameters:
  • y_true (array-like) – True labels.

  • y_pred (array-like) – Predicted labels.

Returns:

Computed recall metric.

Return type:

float or array

Raises:

ValueError – If an unknown reduce method is provided.

macro_recall(y_true, y_pred)

Compute the macro-averaged recall metric.

Parameters:
  • y_true (array-like) – True labels.

  • y_pred (array-like) – Predicted labels.

Returns:

Computed macro-averaged recall metric.

Return type:

float

micro_recall(y_true, y_pred)

Compute the micro-averaged recall metric.

Parameters:
  • y_true (array-like) – True labels.

  • y_pred (array-like) – Predicted labels.

Returns:

Computed micro-averaged recall metric.

Return type:

float

recall(y_true, y_pred)

Compute the recall metric for each class.

Parameters:
  • y_true (array-like) – True labels.

  • y_pred (array-like) – Predicted labels.

Returns:

Computed recall metric for each class.

Return type:

array

weighted_recall(y_true, y_pred)

Compute the weighted recall metric.

Parameters:
  • y_true (array-like) – True labels.

  • y_pred (array-like) – Predicted labels.

Returns:

Computed weighted recall metric.

Return type:

float
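
Finally, a sketch of Recall that calls the named averaging methods directly rather than going through compute; the last line assumes "macro" is the accepted reduce string for the equivalent compute call.

    from evaluator import Recall

    y_true = [0, 1, 1, 2, 2, 2]
    y_pred = [0, 1, 0, 2, 2, 1]

    r = Recall()
    print(r.recall(y_true, y_pred))           # per-class recall values
    print(r.micro_recall(y_true, y_pred))     # micro-averaged recall
    print(r.macro_recall(y_true, y_pred))     # macro-averaged recall
    print(r.weighted_recall(y_true, y_pred))  # weighted recall

    # Assumed equivalent of the macro call above.
    print(Recall(reduce="macro").compute(y_true, y_pred))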