bgd.layers.fc module
- class bgd.layers.fc.FullyConnected(n_in, n_out, copy=False, with_bias=True, dtype=numpy.float32, initializer=GlorotUniformInitializer(), bias_initializer=ZeroInitializer())[source]
Bases: bgd.layers.layer.Layer
Fully connected (dense) neural layer. Each output neuron computes a weighted sum of its inputs, optionally plus a bias (a minimal NumPy sketch is given below the attribute list).
- Parameters
n_in (int) – Number of input neurons.
n_out (int) – Number of output neurons.
copy (bool) – Whether to copy layer output.
with_bias (bool) – Whether to add a bias to the output neurons.
dtype (type) – Type of weights and biases.
initializer (bgd.initializers.Initializer) – Initializer of the weights.
bias_initializer (bgd.initializers.Initializer) – Initializer of the biases.
- weights (np.ndarray) – Matrix of weights.
- biases (np.ndarray) – Vector of biases.
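A minimal sketch of what the layer computes, assuming weights is stored as an (n_in, n_out) matrix and biases as a length-n_out vector; the input batch and shapes below are made up for illustration and are not part of the library's API:

```python
import numpy as np
from bgd.layers.fc import FullyConnected

# Construct a dense layer mapping 4 input neurons to 3 output neurons.
layer = FullyConnected(n_in=4, n_out=3, with_bias=True, dtype=np.float32)

# The computation written out with plain NumPy: each output neuron is a
# weighted sum of the inputs plus a bias. The standalone arrays below stand
# in for the layer's `weights` and `biases` attributes (the (n_in, n_out)
# orientation is an assumption made for this sketch).
x = np.random.rand(8, 4).astype(np.float32)        # batch of 8 input vectors
weights = np.random.rand(4, 3).astype(np.float32)  # stands in for layer.weights
biases = np.zeros(3, dtype=np.float32)             # stands in for layer.biases

y = x @ weights + biases                           # output batch, shape (8, 3)
```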
- get_parameters()[source]
Returns a tuple containing the parameters of the layer. If the layer is non-learnable, None is returned instead. The tuple can have a size > 1 for convenience. For example, a convolutional neural layer has one array for the filter weights and one array for the biases.
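A short usage sketch; for a FullyConnected layer the returned tuple presumably holds the weight matrix and the bias vector, but that ordering is an assumption made here, not something stated by the documentation:

```python
from bgd.layers.fc import FullyConnected

layer = FullyConnected(n_in=4, n_out=3)

# None would be returned for a non-learnable layer; FullyConnected is
# learnable, so a tuple of NumPy arrays is expected here
# (assumed order: weights, then biases).
params = layer.get_parameters()
if params is not None:
    for p in params:
        print(p.shape)
```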
- update_parameters(delta_fragments)[source]
Update the parameters of the layer. The base class has no learnable parameters; learnable subclasses must override this method.
- Parameters
delta_fragments (tuple) – Tuple of NumPy arrays. Each array is a parameter update vector used to update the corresponding parameter with the formula params = params - delta_fragment. The tuple can have a size > 1 for convenience. For example, a convolutional neural layer has one array to update the filter weights and one array to update the biases: weights = weights - delta_fragments[0] and biases = biases - delta_fragments[1].
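A sketch of a plain gradient-descent step expressed through this method; the gradient arrays, the learning rate, and the (n_in, n_out) shape orientation are placeholders invented for illustration:

```python
import numpy as np
from bgd.layers.fc import FullyConnected

layer = FullyConnected(n_in=4, n_out=3)

# Placeholder gradients shaped like the layer's parameters (shapes assumed).
grad_weights = np.ones((4, 3), dtype=np.float32)
grad_biases = np.ones(3, dtype=np.float32)
learning_rate = 0.01

# Each delta fragment is subtracted from the corresponding parameter:
#   weights = weights - delta_fragments[0]
#   biases  = biases  - delta_fragments[1]
layer.update_parameters((learning_rate * grad_weights,
                         learning_rate * grad_biases))
```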