RMSProp Optimizer

neuralpy.optimizer.RMSProp(learning_rate=0.001, alpha=0.99, eps=1e-08, weight_decay=0.0, momentum=0.0, centered=False)

The RMSProp optimizer is mostly stable and can be used in any project. The chance of breaking changes in the future is very low.

Applies the RMSProp algorithm.

For more information, check this page.
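
As a rough sketch of what the algorithm does (the standard, uncentered RMSProp update without momentum; the symbols below are for explanation only and are not part of the NeuralPy API), the optimizer keeps an exponential moving average of the squared gradients and divides each update step by its square root:

$$v_t = \alpha \, v_{t-1} + (1 - \alpha) \, g_t^2$$

$$\theta_t = \theta_{t-1} - \frac{\eta}{\sqrt{v_t} + \epsilon} \, g_t$$

Here g_t is the gradient at step t, and \eta, \alpha, and \epsilon correspond to the learning_rate, alpha, and eps arguments listed below. With centered=True, v_t is additionally recentered by subtracting the square of a running average of the gradients before the square root is taken.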

Supported Arguments

  • learning_rate=0.001 : (Float) Learning rate for the optimizer
  • alpha=0.99 : (Float) Smoothing constant (decay rate of the running average of squared gradients)
  • eps=1e-08 : (Float) Term added to the denominator to improve numerical stability
  • weight_decay=0.0 : (Float) Weight decay for the optimizer
  • momentum=0.0 : (Float) Momentum factor for the optimizer
  • centered=False : (Bool) If True, compute the centered RMSProp, in which the gradient is normalized by an estimation of its variance

Code Example

from neuralpy.models import Sequential
from neuralpy.optimizer import RMSProp
...
# Rest of the imports
...
model = Sequential()
...
# Rest of the architecture
...
model.compile(optimizer=RMSProp(), ...)
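
The optimizer can also be configured explicitly through the arguments listed above. A minimal sketch (the hyperparameter values below are purely illustrative, not recommendations, and the rest of the compile call is omitted as in the example above):

# Centered RMSProp with momentum and a small weight decay
# (illustrative values only)
optimizer = RMSProp(
    learning_rate=0.001,
    alpha=0.99,
    eps=1e-08,
    weight_decay=0.001,
    momentum=0.9,
    centered=True
)
model.compile(optimizer=optimizer, ...)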