Rprop Optimizer
neuralpy.optimizer.Rprop(learning_rate=0.01, etas=(0.5, 1.2), step_sizes=(1e-06, 50.0))
info
Rprop Optimizer is mostly stable and can be used in any project. The chance of breaking changes in the future is very low.
Applies the Rprop (Resilient backpropagation) algorithm.
For more information, check this page.
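As a quick sketch of the underlying idea (standard Rprop notation, not part of the NeuralPy API): each parameter keeps its own step size, which grows by a factor $\eta^{+}$ while the gradient sign stays the same and shrinks by $\eta^{-}$ when it flips, clipped to the allowed bounds:

$$
\Delta_t =
\begin{cases}
\min\!\left(\eta^{+}\,\Delta_{t-1},\ \Delta_{\max}\right) & \text{if } g_t\, g_{t-1} > 0 \\
\max\!\left(\eta^{-}\,\Delta_{t-1},\ \Delta_{\min}\right) & \text{if } g_t\, g_{t-1} < 0 \\
\Delta_{t-1} & \text{otherwise}
\end{cases}
\qquad
w_{t+1} = w_t - \operatorname{sign}(g_t)\,\Delta_t
$$

Here $(\eta^{-}, \eta^{+})$ corresponds to the `etas` argument and $(\Delta_{\min}, \Delta_{\max})$ to `step_sizes`.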
Supported Arguments
learning_rate=0.01
: (Float) Learning rate for the optimizer

etas=(0.5, 1.2)
: (Tuple) Pair of (etaminus, etaplus), the multiplicative decrease and increase factors for the step size

step_sizes=(1e-06, 50.0)
: (Tuple) Pair of minimal and maximal allowed step sizes for the optimizer
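For illustration, the arguments above can also be passed explicitly; the values shown here are simply the defaults from the signature:

```python
from neuralpy.optimizer import Rprop

# Rprop with explicit (default) hyperparameters:
# etas=(etaminus, etaplus) controls how the per-parameter step size
# shrinks/grows, and step_sizes bounds that step size
optimizer = Rprop(
    learning_rate=0.01,
    etas=(0.5, 1.2),
    step_sizes=(1e-06, 50.0)
)
```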
Code Example
from neuralpy.models import Sequential
from neuralpy.optimizer import Rprop
...
# Rest of the imports
...
model = Sequential()
...
# Rest of the architecture
...
model.compile(optimizer=Rprop(), ...)
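For a more concrete picture, here is a hypothetical end-to-end sketch. The `Dense` and `MSELoss` import paths and the `loss_function` argument to `compile` are assumptions based on NeuralPy's Keras-like API and may differ between versions:

```python
from neuralpy.models import Sequential
from neuralpy.layers import Dense            # assumed import path; may vary by version
from neuralpy.loss_functions import MSELoss  # assumed import path; may vary by version
from neuralpy.optimizer import Rprop

# Minimal single-layer regression model
model = Sequential()
model.add(Dense(n_nodes=1, n_inputs=1))

# Compile with Rprop, capping the maximum step size below the default 50.0
model.compile(
    optimizer=Rprop(learning_rate=0.01, etas=(0.5, 1.2), step_sizes=(1e-06, 10.0)),
    loss_function=MSELoss()
)
```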