RNN

neuralpy.layers.recurrent.RNN(hidden_size, num_layers=1, input_size=None, non_linearity='tanh', bias=True, batch_first=False, dropout=0, bidirectional=False, name=None)
danger

The RNN layer is unstable and buggy; it is not ready for any real use.

Applies a multi-layer Elman RNN with tanh or ReLU non-linearity to an input sequence.

To learn more about RNN, please check the PyTorch documentation.
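
NeuralPy layers run on top of PyTorch, so a layer such as RNN(hidden_size=256, num_layers=4, input_size=28) roughly corresponds to the torch.nn.RNN call sketched below. This is an illustrative assumption about the mapping of arguments, not NeuralPy's verbatim internals:

import torch.nn as nn

# Rough PyTorch equivalent of RNN(hidden_size=256, num_layers=4, input_size=28)
# (illustrative assumption, not NeuralPy's actual internal code)
rnn = nn.RNN(
    input_size=28,        # number of expected features in the input
    hidden_size=256,      # number of features in the hidden state
    num_layers=4,         # number of stacked recurrent layers
    nonlinearity='tanh',  # 'tanh' or 'relu'
    bias=True,
    batch_first=False,
    dropout=0,
    bidirectional=False,
)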

Supported Arguments:

  • hidden_size: (Integer) The number of features in the hidden state
  • num_layers=1: (Integer) Number of recurrent layers
  • input_size=None: (Integer) The number of expected features in the input; if not provided, NeuralPy calculates it automatically from the previous layer
  • non_linearity='tanh': (String) The non-linearity to use, either 'tanh' or 'relu'. Default: 'tanh'
  • bias=True: (Boolean) If true, then the layer uses bias weights. Default: true
  • batch_first=False: (Boolean) If true, then the input and output tensors are provided as (batch, seq, feature). Default: false
  • dropout=0: (Float) If non-zero, introduces a Dropout layer on the outputs of each RNN layer except the last layer, with dropout probability equal to dropout. Default: 0
  • bidirectional=False: (Boolean) If true, becomes a bidirectional RNN (see the shape sketch after this list). Default: false
  • name=None: (String) Name of the layer; if not provided, a unique name is generated automatically for the layer
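
To make batch_first and bidirectional concrete, the short sketch below uses the underlying torch.nn.RNN directly to show how these flags change the tensor shapes (the shapes follow PyTorch's documented behavior; the specific sizes are made up for illustration):

import torch
import torch.nn as nn

rnn = nn.RNN(input_size=28, hidden_size=64, num_layers=2,
             batch_first=True, bidirectional=True)

x = torch.randn(8, 10, 28)  # (batch, seq, feature) because batch_first=True
output, h_n = rnn(x)

print(output.shape)  # torch.Size([8, 10, 128]) -> 2 * hidden_size (both directions)
print(h_n.shape)     # torch.Size([4, 8, 64])   -> num_layers * 2 directions, batch, hidden_size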

Example Code

from neuralpy.models import Sequential
from neuralpy.layers.recurrent import RNN

# Making the model
model = Sequential()

# The first RNN layer needs input_size, as there is no previous layer to infer it from
model.add(RNN(hidden_size=256, num_layers=4, input_size=28))

# input_size is omitted here; it is calculated automatically from the previous layer
model.add(RNN(hidden_size=128, num_layers=2))