CrossEntropyLoss#
- class vulkpy.nn.CrossEntropyLoss#
Bases: ReduceLoss
Cross Entropy Loss
Methods Summary
- __call__(x, y) – Compute Loss
- backward() – Backward
- forward(x, y) – Forward
- grad() – Compute Gradients
Methods Documentation
- __call__(x: Array, y: Array) → Array #
Compute Loss
- Parameters:
x (vulkpy.Array) – Batch input features
y (vulkpy.Array) – Batch labels/targets
- Returns:
loss – Cross Entropy Loss
- Return type:
vulkpy.Array
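Examples
A minimal usage sketch. The GPU handle and Array construction (vk.GPU(), vk.Array(gpu, data=...)) follow vulkpy's general array API and are assumptions here, as are the shapes; only the CrossEntropyLoss call itself is taken from this page.
>>> import vulkpy as vk
>>> from vulkpy import nn
>>> gpu = vk.GPU()                              # assumed GPU handle
>>> x = vk.Array(gpu, data=[[0.7, 0.2, 0.1]])   # assumed: probabilities, shape (batch, classes)
>>> y = vk.Array(gpu, data=[[1.0, 0.0, 0.0]])   # assumed: one-hot targets
>>> loss_fn = nn.CrossEntropyLoss()
>>> loss = loss_fn(x, y)                        # loss reduced over the batch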
- backward() → Array #
Backward
- Returns:
dx – Batch gradients
- Return type:
vulkpy.Array
Notes
\[dx = \frac{-y}{x + \epsilon}\]
Warning
Generally, users should not call this method directly. Use grad() instead, where the reduction scale is corrected.
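As a plain NumPy sketch of this gradient, assuming a small epsilon stabilizer (the exact value vulkpy uses is not documented here):
>>> import numpy as np
>>> eps = 1e-8                         # assumed stabilizer
>>> x = np.array([[0.7, 0.2, 0.1]])    # predicted probabilities
>>> y = np.array([[1.0, 0.0, 0.0]])    # one-hot targets
>>> dx = -y / (x + eps)                # elementwise dx = -y / (x + eps)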
- forward(x: Array, y: Array) → Array #
Forward
- Parameters:
x (vulkpy.Array) – Batch input features
y (vulkpy.Array) – Batch labels as one-hot vectors
- Returns:
loss – Cross Entropy Loss
- Return type:
vulkpy.Array
Notes
\[L = - f_{\text{reduce}} ( y_i \log (x_i) )\]
Warning
Generally, users should not call this method directly. Use __call__ instead, where the input and output are stored for training.
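As a plain NumPy sketch of this loss, assuming the reduction runs over samples after summing each sample's one-hot-weighted log-probabilities:
>>> import numpy as np
>>> x = np.array([[0.7, 0.2, 0.1],
...               [0.1, 0.8, 0.1]])    # predicted probabilities
>>> y = np.array([[1.0, 0.0, 0.0],
...               [0.0, 1.0, 0.0]])    # one-hot targets
>>> per_sample = -(y * np.log(x)).sum(axis=1)
>>> loss_mean = per_sample.mean()      # reduce="mean"
>>> loss_sum = per_sample.sum()        # reduce="sum"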
- grad() → Array #
Compute Gradients
- Returns:
dx – Batch gradients of dL/dx
- Return type:
vulkpy.Array
Notes
This method calculates gradients for the last __call__(x, y).
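Examples
A training-style sketch, reusing the hypothetical x and y arrays from the __call__ example above:
>>> loss_fn = nn.CrossEntropyLoss()
>>> loss = loss_fn(x, y)     # forward pass; x and y are stored internally
>>> dx = loss_fn.grad()      # dL/dx for that batch, with the reduction scale applied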
- __init__(*args, **kwargs)#
Initialize Cross Entropy Loss
- Parameters:
reduce ({"mean", "sum"}, optional) – Reduction method over batch. The default is
"mean"
.
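Examples
A constructor sketch, assuming the class is imported from vulkpy.nn:
>>> from vulkpy import nn
>>> loss_mean = nn.CrossEntropyLoss()               # default: reduce="mean"
>>> loss_sum = nn.CrossEntropyLoss(reduce="sum")    # sum over the batch instead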