CategoricalCrossEntropy
minnt.losses.CategoricalCrossEntropy
Bases: Loss
Categorical cross-entropy loss implementation.
Source code in minnt/losses/categorical_cross_entropy.py
__init__
__init__(
dim: int = 1,
*,
ignore_index: int = -100,
label_smoothing: float = 0.0,
probs: bool = False,
reduction: Reduction = "mean"
) -> None
Create the CategoricalCrossEntropy loss object with the specified reduction method.
Parameters:

- dim (int, default: 1) – If the input has 2 or more dimensions, this value specifies the dimension along which the classes are defined. The default is the same behavior as torch.nn.CrossEntropyLoss.
- ignore_index (int, default: -100) – An optional target class value that is ignored during loss computation (equivalent to zeroing out sample weights for the corresponding samples). Only applicable for sparse targets; when dense targets are used, the default of -100 cannot be overwritten and this parameter is ignored. This is the same behavior as torch.nn.CrossEntropyLoss.
- label_smoothing (float, default: 0.0) – A float in [0.0, 1.0] specifying the label smoothing factor. If greater than 0.0, the ground-truth targets used are computed as a mixture of the original targets (with weight 1 - label_smoothing) and the uniform distribution (with weight label_smoothing); see the formula after this list.
- probs (bool, default: False) – If False, the predictions are assumed to be logits; if True, the predictions are assumed to be probabilities. Note that gold targets are always expected to be probabilities.
- reduction (Reduction, default: "mean") – The reduction method to apply to the computed loss.
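For concreteness, the following formula is implied by the label_smoothing description above rather than quoted from the source: with smoothing factor $\alpha$ = label_smoothing and $C$ classes, the effective target distribution for a sample with original target distribution $y$ is

$$t_c = (1 - \alpha)\, y_c + \frac{\alpha}{C}, \qquad c = 1, \dots, C.$$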
Source code in minnt/losses/categorical_cross_entropy.py
__call__
Compute the categorical cross-entropy loss, optionally with sample weights.
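As a reminder of the underlying definition (standard categorical cross-entropy, not specific to this implementation): given predicted class probabilities $p$ (the softmax of the logits when probs is False) and a target distribution $t$ (possibly smoothed as above), the per-sample loss is

$$\ell(p, t) = -\sum_{c=1}^{C} t_c \log p_c,$$

after which the configured reduction is applied across samples.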
Parameters:

- y (Tensor) – The predicted outputs, either logits or probabilities (depending on the probs parameter). If they have 2 or more dimensions, the class dimension is specified by the dim parameter.
- y_true (Tensor) – The ground-truth targets, in two possible formats:
  - The gold targets might be "sparse" class indices. In this case, their shape has to be exactly the same as y with the class dimension removed.
  - The gold targets might be full "dense" probability distributions. In this case, their shape has to be exactly the same as y.
- sample_weights (Tensor | None, default: None) – Optional sample weights. If provided, their shape must be broadcastable to a prefix of the shape of y with the class dimension removed, and the loss for each sample is weighted accordingly.
Returns:

- Tensor – A tensor representing the computed loss: a scalar tensor if reduction is "mean" or "sum"; otherwise (if reduction is "none"), a tensor of the same shape as y without the class dimension.
Source code in minnt/losses/categorical_cross_entropy.py