CategoricalAccuracy
minnt.metrics.CategoricalAccuracy
Bases: Mean
Categorical classification accuracy metric.
The predictions are assumed to be logits or probabilities predicted by a model, while the ground-truth targets are either class indices (sparse format) or whole distributions (dense format). In both cases, the predicted class is considered to be the one with the highest probability.
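The two target formats can be illustrated with a small pure-Python sketch (independent of minnt and PyTorch; the `argmax` helper and the variable names are illustrative, not part of the minnt API):

```python
# Hypothetical sketch of the sparse vs. dense target formats; the
# helper below is illustrative, not part of the minnt API.

def argmax(values):
    """Index of the largest element (the predicted/gold class)."""
    return max(range(len(values)), key=lambda i: values[i])

# Predicted logits for two samples over three classes.
y = [[0.1, 2.5, -1.0],
     [3.0, 0.2, 0.4]]
predictions = [argmax(row) for row in y]  # classes 1 and 0

# Sparse format: gold targets are plain class indices.
sparse_targets = [1, 2]

# Dense format: gold targets are whole distributions; the gold class
# is the one with the highest probability.
dense_targets = [[0.1, 0.8, 0.1],   # gold class 1
                 [0.2, 0.2, 0.6]]   # gold class 2

gold_dense = [argmax(row) for row in dense_targets]
assert sparse_targets == gold_dense  # both encode the same gold classes

correct = [p == g for p, g in zip(predictions, sparse_targets)]
accuracy = sum(correct) / len(correct)
print(accuracy)  # 0.5: only the first sample is predicted correctly
```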
Source code in minnt/metrics/categorical_accuracy.py
__init__
__init__(
dim: int = 1,
*,
ignore_index: int = -100,
probs: bool = False,
device: device | None = None
) -> None
Create the CategoricalAccuracy metric object.
Parameters:
-
dim (int, default: 1) – If the input has 2 or more dimensions, this value specifies the dimension along which the classes are defined. The default is the same behavior as torch.nn.CrossEntropyLoss.
-
ignore_index (int, default: -100) – An optional target class value that is ignored during metric computation (equivalent to zeroing out sample weights for the corresponding samples). The default of -100 is the same as in torch.nn.CrossEntropyLoss.
-
probs (bool, default: False) – For consistency with other categorical losses and metrics, we include this parameter describing whether the predictions are logits or probabilities. However, to predict the most probable class, it does not matter whether logits or probabilities are used.
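The reason the probs flag cannot change the predicted class is that softmax is strictly monotone: it preserves the ordering of its inputs, so logits and the corresponding probabilities always share the same argmax. A small self-contained check (plain Python, no minnt or PyTorch):

```python
import math

def softmax(logits):
    # Subtract the max for numerical stability; this shift does not
    # change the resulting probabilities.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def argmax(values):
    return max(range(len(values)), key=lambda i: values[i])

logits = [1.2, -0.3, 4.0, 0.5]
probs = softmax(logits)

# softmax is strictly increasing, so the ordering (and hence the
# argmax) of logits and probabilities is identical.
assert argmax(logits) == argmax(probs) == 2
```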
Source code in minnt/metrics/categorical_accuracy.py
update
update(
y: Tensor,
y_true: Tensor,
sample_weights: Tensor | None = None
) -> Self
Update the accumulated categorical accuracy using new predictions and gold labels.
Optional sample weights might be provided; if not, all samples are weighted with 1.
Parameters:
-
y (Tensor) – The predicted outputs, either logits or probabilities (depending on the probs parameter). If they have 2 or more dimensions, the class dimension is specified by the dim parameter. We consider the class with the highest probability to be predicted.
-
y_true (Tensor) – The ground-truth targets in two possible formats:
- The gold targets might be "sparse" class indices. In this case, their shape has to be exactly the same as y with the class dimension removed.
- The gold targets might be full "dense" probability distributions. In this case, their shape has to be exactly the same as y, and we consider the class with the highest probability to be the gold class.
-
sample_weights (Tensor | None, default: None) – Optional sample weights. If provided, their shape must be broadcastable to a prefix of the shape of y with the class dimension removed, and the accuracy of each sample is weighted accordingly.
Returns:
-
Self – self
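Because CategoricalAccuracy derives from Mean, update plausibly accumulates a weighted running mean of per-sample correctness, with ignore_index dropping samples (equivalently, zeroing their weights). The following pure-Python sketch illustrates that documented behavior; the class, its `compute` method, and all names here are hypothetical stand-ins, not the actual minnt implementation:

```python
def argmax(values):
    return max(range(len(values)), key=lambda i: values[i])

class CategoricalAccuracySketch:
    """Hypothetical sketch: weighted running mean of correctness."""

    def __init__(self, ignore_index=-100):
        self.ignore_index = ignore_index
        self.correct_sum = 0.0   # weighted number of correct predictions
        self.weight_sum = 0.0    # total weight of non-ignored samples

    def update(self, y, y_true, sample_weights=None):
        if sample_weights is None:
            sample_weights = [1.0] * len(y_true)  # default weight of 1
        for logits, gold, weight in zip(y, y_true, sample_weights):
            if gold == self.ignore_index:
                continue  # equivalent to a zero sample weight
            self.correct_sum += weight * (argmax(logits) == gold)
            self.weight_sum += weight
        return self  # returning self allows chained calls

    def compute(self):
        return self.correct_sum / self.weight_sum

metric = CategoricalAccuracySketch()
y = [[0.1, 2.5], [3.0, 0.2], [0.0, 1.0]]
y_true = [1, 1, -100]  # the last sample is ignored via ignore_index
metric.update(y, y_true)
print(metric.compute())  # 0.5: one of the two counted samples is correct
```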
Source code in minnt/metrics/categorical_accuracy.py