SoftmaxCrossEntropyLoss#

class vulkpy.nn.SoftmaxCrossEntropyLoss#

Bases: CrossEntropyLoss

Softmax Cross Entropy Loss

See also

vulkpy.nn.Softmax

Softmax layer

vulkpy.nn.CrossEntropyLoss

Cross Entropy loss without Softmax
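
Examples

A minimal usage sketch follows. The SoftmaxCrossEntropyLoss API (__call__, grad, reduce) is as documented on this page; the GPU and Array construction is an illustrative assumption based on vulkpy's general API, not a verbatim excerpt.

    import vulkpy as vk
    from vulkpy.nn import SoftmaxCrossEntropyLoss

    gpu = vk.GPU()  # assumed device handle

    # Logits for a batch of 2 samples over 3 classes (illustrative values)
    x = vk.Array(gpu, data=[[2.0, 1.0, 0.1],
                            [0.5, 2.5, 0.3]])
    # One-hot labels
    y = vk.Array(gpu, data=[[1.0, 0.0, 0.0],
                            [0.0, 1.0, 0.0]])

    loss_fn = SoftmaxCrossEntropyLoss()  # reduce="mean" by default
    loss = loss_fn(x, y)                 # forward pass; batch is stored
    dx = loss_fn.grad()                  # dL/dx for the last __call__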

Methods Summary

__call__(x, y)

Compute Loss

backward()

Backward

forward(x, y)

Forward

grad()

Compute Gradients

Methods Documentation

__call__(x: Array, y: Array) → Array#

Compute Loss

Parameters:

x (vulkpy.Array) – Batch input

y (vulkpy.Array) – Batch label

Returns:

loss – Loss

Return type:

vulkpy.Array

backward() → Array#

Backward

Returns:

dx – Batch gradients

Return type:

vulkpy.Array

Notes

\[dx = \mathrm{softmax}(x) - y\]

Warning

Generally, users should not call this method directly. Use grad() instead, which corrects for the reduction scale.
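
For reference, the same gradient can be written as a NumPy sketch (NumPy used purely for illustration; this is not the vulkpy implementation), assuming x holds a batch of logits and y the one-hot labels:

    import numpy as np

    def softmax_ce_backward(x, y):
        # Row-wise softmax with the usual max-shift for numerical stability
        e = np.exp(x - x.max(axis=1, keepdims=True))
        s = e / e.sum(axis=1, keepdims=True)
        # dx = softmax(x) - y per sample; no reduction scaling (see grad())
        return s - y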

forward(x: Array, y: Array) → Array#

Forward

Parameters:

x (vulkpy.Array) – Batch input

y (vulkpy.Array) – Batch label

Returns:

loss – Loss

Return type:

vulkpy.Array

Notes

\[L = - f_{\text{reduce}} \left( \sum_i y_i \log ( \mathrm{softmax}(x)_i ) \right)\]

Warning

Generally, users should not call this method directly. Use __call__ instead, which stores the input and output for training.
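
A NumPy sketch of this loss (illustrative only, not the vulkpy implementation), with f_reduce taken to be the mean or sum over the batch as selected by the reduce option:

    import numpy as np

    def softmax_ce_forward(x, y, reduce="mean"):
        # Row-wise softmax with max-shift for numerical stability
        e = np.exp(x - x.max(axis=1, keepdims=True))
        s = e / e.sum(axis=1, keepdims=True)
        # Per-sample cross entropy: -sum_i y_i * log(softmax(x)_i)
        per_sample = -(y * np.log(s)).sum(axis=1)
        return per_sample.mean() if reduce == "mean" else per_sample.sum()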

grad() → Array#

Compute Gradients

Returns:

dx – Batch gradients dL/dx

Return type:

vulkpy.Array

Notes

This method calculates gradients for the last __call__(x, y).
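
In other words (names as in the Examples section above):

    loss = loss_fn(x, y)   # stores the batch internally
    dx = loss_fn.grad()    # gradients w.r.t. that stored x, corrected for the reduction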

__init__(*args, **kwargs)#

Initialize Softmax Cross Entropy Loss

Parameters:

reduce ({"mean", "sum"}) – Reduction method over batch. The default is "mean".