SGD#
- class vulkpy.nn.SGD#
Bases:
Optimizer
SGD Optimizer
Uses a constant learning rate.
See also
vulkpy.nn.Adam
Adam optimizer
Methods Summary
init_state(shape)
Initialize Optimizer state
Methods Documentation
- init_state(shape: Iterable[int]) → SGDState#
Initialize Optimizer state
- Parameters:
shape (iterable of ints) – Shape of parameter
- Returns:
Optimizer state
- Return type:
SGDState
Notes
Currently, SGDState is empty; however, fields such as momentum might be added in the future.
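The following is a minimal usage sketch based only on the signatures documented on this page: the optimizer is constructed with a constant learning rate, then per-parameter state is allocated from a parameter shape. The learning rate and shape values are illustrative.

```python
from vulkpy.nn import SGD

# Construct the optimizer with a constant learning rate (value is illustrative).
opt = SGD(lr=0.01)

# Allocate optimizer state for a hypothetical (64, 32) weight matrix.
# SGDState is currently empty, but this call keeps the Optimizer interface uniform.
state = opt.init_state((64, 32))
```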
- __init__(lr: float)#
Initialize Stochastic Gradient Descent (SGD) Optimizer
- Parameters:
lr (float) – Learning rate
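Conceptually, constant-learning-rate SGD applies the update w ← w − lr · ∇w on every step. The NumPy snippet below only illustrates that rule; it is not vulkpy's GPU implementation.

```python
import numpy as np

def sgd_step(param: np.ndarray, grad: np.ndarray, lr: float) -> np.ndarray:
    """Constant-learning-rate SGD update: param - lr * grad."""
    return param - lr * grad

w = np.ones((64, 32))          # illustrative parameter
g = np.full((64, 32), 0.1)     # illustrative gradient
w = sgd_step(w, g, lr=0.01)    # each step moves w by -lr * grad
```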