ReLU#

class vulkpy.nn.ReLU#

Bases: Module

Rectified Linear Unit (ReLU)

Methods Summary

__call__(x)

Call Module

backward(dy)

Backward

forward(x)

Forward

update()

Update parameters based on accumulated gradients

zero_grad()

Reset accumulated gradients to 0.

Methods Documentation

__call__(x: Array) → Array#

Call Module

Parameters:

x (vulkpy.Array) – Input

Returns:

y – Output

Return type:

vulkpy.Array

Raises:

ValueError – If the input (x) does not have at least 2 dimensions.

Notes

This method stores the input (x) and output (y) for use during training.
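
A minimal usage sketch. The vk.GPU() and vk.Array(gpu, data=...) constructors are assumed from typical vulkpy usage (check the project README for exact signatures); the values are illustrative:

    import vulkpy as vk
    from vulkpy import nn

    gpu = vk.GPU()          # assumed device handle constructor
    relu = nn.ReLU()

    # Input must be at least 2-dimensional (batch, features);
    # a 1-D input would raise ValueError.
    x = vk.Array(gpu, data=[[-1.0, 0.0, 2.0],
                            [ 3.0, -0.5, 1.0]])

    y = relu(x)             # stores x and y internally for backward()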

backward(dy: Array) → Array#

Backward

Parameters:

dy (vulkpy.Array) – Batch gradient

Returns:

dx – Batch gradient

Return type:

vulkpy.Array

Notes

\[dx = dy \cdot \max(\operatorname{sign}(y), 0)\]

When x == 0, the gradient dy/dx is defined as 0.
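
For intuition, a NumPy sketch of the same rule (NumPy stands in for vulkpy here; the arrays are illustrative):

    import numpy as np

    x = np.array([[-1.0, 0.0, 2.0],
                  [ 3.0, -0.5, 0.0]])
    y = np.maximum(x, 0)                  # forward: y = max(x, 0)

    dy = np.ones_like(y)                  # upstream batch gradient
    dx = dy * np.maximum(np.sign(y), 0)   # dx = dy * max(sign(y), 0)

    # Where x <= 0 (including x == 0), y == 0, so sign(y) == 0 and dx == 0.
    assert (dx[x <= 0] == 0).all()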

forward(x: Array) → Array#

Forward

Parameters:

x (vulkpy.Array) – Batch input

Returns:

Batch output

Return type:

vulkpy.Array

Notes

\[y = \max(x, 0)\]

Warning

Generally, users should not call this method directly. Use __call__ instead, which stores the input and output for training.

update()#

Update parameters based on accumulated gradients

Notes

The base class implementation is a no-op. Subclasses can override this method.

zero_grad()#

Reset accumulated gradients to 0.

Notes

The base class implementation is a no-op. Subclasses can override this method.

__init__()#
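
Putting it together, a sketch of one training step showing the Module call order (zero_grad / __call__ / backward / update). For the parameter-free ReLU, update() and zero_grad() are no-ops; the vk.GPU() and vk.Array(gpu, data=...) constructors are assumed as above:

    import vulkpy as vk
    from vulkpy import nn

    gpu = vk.GPU()
    relu = nn.ReLU()

    x = vk.Array(gpu, data=[[-1.0, 2.0]])    # (batch, features)
    dy = vk.Array(gpu, data=[[1.0, 1.0]])    # hypothetical upstream gradient

    relu.zero_grad()          # reset accumulated gradients (no-op here)
    y = relu(x)               # forward pass; stores x and y
    dx = relu.backward(dy)    # propagate the batch gradient
    relu.update()             # apply accumulated gradients (no-op here)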