Core Layers
class npdl.layers.Linear(n_out, n_in=None, init='glorot_uniform')

A fully connected layer implemented as the dot product of inputs and weights.

Parameters:
    n_out : int or tuple
        Desired size or shape of the layer output.
    n_in : int, tuple, or None
        The size of the input feeding into this layer.
    init : str, or npdl.initializations.Initializer
        Initializer to use for the layer weights.
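A minimal NumPy sketch of what such a layer computes — Glorot-uniform weight initialization followed by the input–weight dot product. This is an illustration of the idea, not npdl's actual implementation; the variable names are ours:

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out = 4, 3

# glorot_uniform: W ~ U(-limit, limit), limit = sqrt(6 / (n_in + n_out))
limit = np.sqrt(6.0 / (n_in + n_out))
W = rng.uniform(-limit, limit, size=(n_in, n_out))
b = np.zeros(n_out)

x = rng.normal(size=(2, n_in))  # a batch of 2 input vectors
y = x @ W + b                   # forward pass: dot product of inputs and weights
```

With `n_in=4` and `n_out=3`, a batch of shape `(2, 4)` maps to an output of shape `(2, 3)`.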
class npdl.layers.Dense(n_out, n_in=None, init='glorot_uniform', activation='tanh')

A fully connected layer implemented as the dot product of inputs and weights. Generally used to implement nonlinearities as layer post-activations.

Parameters:
    n_out : int
        Desired size or shape of the layer output.
    n_in : int, or None
        The size of the input feeding into this layer.
    activation : str, or npdl.activations.Activation
        Defaults to Tanh.
    init : str, or npdl.initializations.Initializer
        Initializer to use for the layer weights.
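A Dense layer is the same dot product with a nonlinearity applied afterward. A hedged NumPy sketch with the default tanh activation (illustrative only, not npdl's source):

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out = 4, 3

limit = np.sqrt(6.0 / (n_in + n_out))              # glorot_uniform limit
W = rng.uniform(-limit, limit, size=(n_in, n_out))
b = np.zeros(n_out)

x = rng.normal(size=(2, n_in))
out = np.tanh(x @ W + b)  # dense layer: linear map, then tanh post-activation
```

Because of the tanh, every output value lies strictly between -1 and 1.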
class npdl.layers.Softmax(n_out, n_in=None, init='glorot_uniform')

A fully connected layer implemented as the dot product of inputs and weights, with a softmax applied to the output.

Parameters:
    n_out : int
        Desired size or shape of the layer output.
    n_in : int, or None
        The size of the input feeding into this layer.
    init : str, or npdl.initializations.Initializer
        Initializer to use for the layer weights.
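The softmax itself normalizes each output row into a probability distribution. A short NumPy sketch of the standard numerically stable formulation (an illustration, not npdl's implementation):

```python
import numpy as np

def softmax(z):
    # Subtract the row-wise max before exponentiating for numerical stability;
    # this leaves the result unchanged mathematically.
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

z = np.array([[1.0, 2.0, 3.0],
              [0.0, 0.0, 0.0]])
probs = softmax(z)  # each row sums to 1
```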
class npdl.layers.Dropout(p=0.0)

A dropout layer.

Applies an element-wise multiplication of the inputs with a keep mask. A keep mask is a tensor of ones and zeros of the same shape as the input. Each forward() call stochastically generates a new keep mask, where the distribution of ones in the mask is controlled by p.

Parameters:
    p : float
        Fraction of the inputs that should be stochastically kept.
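A minimal NumPy sketch of the keep-mask mechanism described above, assuming (per the docstring) that `p` is the probability an input element is kept. This is illustrative only, not npdl's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
p = 0.8                  # assumed keep probability, as the docstring describes
x = np.ones((2, 5))

# Keep mask: ones and zeros of the same shape as the input;
# a fresh mask is drawn on every forward pass.
mask = rng.binomial(1, p, size=x.shape)
out = x * mask           # element-wise multiplication with the keep mask
```

Kept entries pass through unchanged; dropped entries become zero.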