Embedding
neuralpy.layers.sparse.Embedding(num_embeddings, embedding_dim, padding_idx=None, max_norm=None, norm_type=2.0, scale_grad_by_freq=False, sparse=False, name=None)
info
Embedding is mostly stable and can be used in any project. The chance of breaking changes in the future is very low.
A simple lookup table that stores embeddings of a fixed dictionary and size.
For more information, check the underlying PyTorch documentation for torch.nn.Embedding.
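NeuralPy runs on top of PyTorch, so the lookup this layer performs behaves like PyTorch's torch.nn.Embedding: each integer index selects one row of a learnable weight matrix. The following is a minimal sketch of that underlying behavior, using torch.nn.Embedding directly rather than the NeuralPy wrapper.

import torch
import torch.nn as nn

# A lookup table with 10 entries (indices 0-9), each mapped to a 3-dimensional vector
embedding = nn.Embedding(num_embeddings=10, embedding_dim=3)

# A batch of 2 sequences, each containing 4 integer indices
indices = torch.tensor([[1, 2, 4, 5], [4, 3, 2, 9]])

# Every index is replaced by its embedding vector, so the output shape is (2, 4, 3)
vectors = embedding(indices)
print(vectors.shape)  # torch.Size([2, 4, 3])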
Supported Arguments
num_embeddings
: (Integer) size of the dictionary of embeddings
embedding_dim
: (Integer) the size of each embedding vector
padding_idx=None
: (Integer) If given, pads the output with the embedding vector at padding_idx (initialized to zeros) whenever it encounters the index
max_norm=None
: (Float) If given, each embedding vector with norm larger than max_norm is renormalized to have norm max_norm
norm_type=2.0
: (Float) The p of the p-norm to compute for the max_norm option. Default 2.0
scale_grad_by_freq=False
: (Boolean) If given, this will scale gradients by the inverse of frequency of the words in the mini-batch. Default False
sparse=False
: (Boolean) If True, the gradient w.r.t. the weight matrix will be a sparse tensor
name=None
: (String) Name of the layer; if not provided, a unique name is generated automatically for the layer
Example Code
from neuralpy.models import Sequential
from neuralpy.layers.sparse import Embedding
# Initializing the Sequential model
model = Sequential()
...
...
...
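# Adding an Embedding layer with a dictionary of 10 entries and 3-dimensional vectors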
model.add(Embedding(10, 3, name="Embedding layer"))
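As a slightly fuller sketch, assuming the keyword arguments listed above are accepted by name exactly as in the signature, the optional arguments can be passed when adding the layer:

from neuralpy.models import Sequential
from neuralpy.layers.sparse import Embedding

# Initializing the Sequential model
model = Sequential()

# Embedding layer for a dictionary of 10 entries with 3-dimensional vectors;
# padding_idx=0 keeps index 0 mapped to a zero vector, and max_norm=1.0
# renormalizes any embedding vector whose 2-norm exceeds 1.0
model.add(
    Embedding(
        num_embeddings=10,
        embedding_dim=3,
        padding_idx=0,
        max_norm=1.0,
        norm_type=2.0,
        scale_grad_by_freq=False,
        sparse=False,
        name="Embedding layer"
    )
)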