SGD Optimizer

neuralpy.optimizer.SGD(learning_rate=0.001, momentum=0.0, dampening=0.0, weight_decay=0.0, nesterov=False)

The SGD optimizer is mostly stable and can be used in any project. The chance of breaking changes in the future is very low.

Applies the stochastic gradient descent algorithm (optionally with momentum).

For more information, check the PyTorch documentation for torch.optim.SGD, which this optimizer wraps.
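
NeuralPy runs on top of PyTorch, so the update rule below follows the standard torch.optim.SGD formulation. It is a sketch for exposition, with learning rate \eta, momentum \mu, dampening \tau, and weight decay \lambda (these symbols are not part of the NeuralPy API):

\begin{aligned}
g_t &\leftarrow \nabla_{\theta} f_t(\theta_{t-1}) + \lambda\,\theta_{t-1} \\
b_t &\leftarrow \mu\, b_{t-1} + (1 - \tau)\, g_t \\
\theta_t &\leftarrow \theta_{t-1} - \eta\,(g_t + \mu\, b_t) \quad \text{if nesterov} \\
\theta_t &\leftarrow \theta_{t-1} - \eta\, b_t \quad \text{otherwise}
\end{aligned}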

Supported Arguments

  • learning_rate=0.001 : (Float) Learning rate for the optimizer
  • momentum=0.0 : (Float) Momentum factor for the optimizer
  • dampening=0.0 : (Float) Dampening for momentum
  • weight_decay=0.0 : (Float) Weight decay (L2 penalty) for the optimizer
  • nesterov=False : (Boolean) Enables Nesterov momentum
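
As a quick illustration of these arguments, the snippet below constructs SGD instances with non-default values; the numbers are arbitrary examples, not tuned recommendations:

from neuralpy.optimizer import SGD

# Plain SGD with a custom learning rate
optimizer = SGD(learning_rate=0.01)

# SGD with Nesterov momentum and a small weight decay
# (in the underlying PyTorch implementation, Nesterov momentum
# requires momentum > 0 and dampening = 0)
optimizer = SGD(learning_rate=0.01, momentum=0.9, weight_decay=1e-4, nesterov=True)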

Code Example

from neuralpy.models import Sequential
from neuralpy.optimizer import SGD
...
# Rest of the imports
...
model = Sequential()
...
# Rest of the architecture
...
model.compile(optimizer=SGD(), ...)
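
For a fuller picture, compile usually also takes a loss function. A minimal sketch, assuming the MSELoss class from neuralpy.loss_functions (verify the import path against your installed NeuralPy version):

from neuralpy.models import Sequential
from neuralpy.optimizer import SGD
from neuralpy.loss_functions import MSELoss  # assumed import path

model = Sequential()
# ... add layers to the model here ...

# Compile with SGD using momentum and a mean-squared-error loss
model.compile(optimizer=SGD(learning_rate=0.001, momentum=0.9), loss_function=MSELoss())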