bgd.layers.layer module

class bgd.layers.layer.Layer(copy=False, save_input=True, save_output=True)[source]

Bases: object

Base class for neural and non-neural layers. Each subclass must implement the methods ‘_forward’ and ‘_backward’: the output shape of the forward pass must be equal to the input shape of the backward pass, and vice versa. A minimal subclass is sketched after the attribute list below.

Parameters
  • copy (bool) – Whether to copy layer output.

  • save_input (bool) – Whether to keep a reference to the input array.

  • save_output (bool) – Whether to keep a reference to the output array.

input_shape

Shape of the input array.

Type

tuple

current_input

Reference to the current input array.

Type

np.ndarray

current_output

Reference to the current output array.

Type

np.ndarray

propagate

Whether to propagate the signal during backpropagation. This is usually set to False for the first layer of a sequential model, since that layer has no preceding layer to transfer the signal to.

Type

bool
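
As an illustration of the contract above, here is a minimal sketch of a non-learnable subclass. The DoubleLayer name and its internals are hypothetical, and the sketch assumes that, for a non-learnable layer, ‘_backward’ only needs to return the signal to propagate:

    from bgd.layers.layer import Layer


    class DoubleLayer(Layer):
        """Hypothetical non-learnable layer that multiplies its input by two."""

        def __init__(self):
            # The constructor arguments are the documented ones; the defaults
            # are made explicit here for readability.
            super().__init__(copy=False, save_input=True, save_output=True)

        def _forward(self, X):
            # Forward pass: the output has the same shape as the input.
            return 2 * X

        def _backward(self, error):
            # Backward pass: the incoming error has the shape of the forward
            # output; the returned signal has the shape of the forward input.
            # Assumed contract for a non-learnable layer: return the signal only.
            return 2 * error

        def get_parameters(self):
            # Non-learnable layer: expose no parameters.
            return None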

abstract _backward(error)[source]

Wrapped method for applying a backward pass on the incoming error signal.

abstract _forward(X)[source]

Wrapped method for applying a forward pass on input X.

activate_propagation()[source]

Activate signal propagation during backpropagation.

backward(*args, **kwargs)[source]

Wrapper method for method ‘_backward’. Given an error array, checks its shape, updates the parameters and propagates the signal if needed (and if the layer is learnable).

Returns a tuple of size 2 where the first element is the signal to propagate and the second element is the gradient of the error with respect to the layer’s parameters. If propagation is deactivated for the current layer, the signal is replaced by None. If the layer is non-learnable, the gradient is replaced by None.
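
A small caller-side sketch of this contract; the helper name is hypothetical and it assumes that, for a learnable layer, the gradient mirrors the parameter tuple returned by get_parameters():

    def apply_backward(layer, error, learning_rate=0.01):
        """Hypothetical helper illustrating the (signal, gradient) return pair."""
        signal, gradient = layer.backward(error)

        if gradient is not None:
            # Learnable layer: scale each gradient fragment and apply it
            # through the documented update_parameters() method.
            layer.update_parameters(tuple(learning_rate * g for g in gradient))

        # signal is None when propagation is deactivated (e.g. the first layer).
        return signal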

deactivate_propagation()[source]

Deactivate signal propagation during backpropagation.
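
One possible way to use the two toggles when wiring a sequential stack of layers; the helper below is illustrative, not part of bgd:

    def configure_propagation(layers):
        """Hypothetical helper: switch propagation off for the first layer only,
        since it has no preceding layer to send a signal to."""
        layers[0].deactivate_propagation()
        for layer in layers[1:]:
            layer.activate_propagation()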

forward(X)[source]

Wrapper method for method ‘_forward’. Given an input array, checks the input shape, computes the output, and saves the output if needed.

Parameters

X (np.ndarray) – Input array.
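
Assuming that forward() returns the computed output, a forward pass through the hypothetical DoubleLayer sketched earlier could look like this:

    import numpy as np

    X = np.random.rand(32, 10)  # hypothetical batch: 32 samples, 10 features
    layer = DoubleLayer()       # hypothetical subclass sketched earlier
    Y = layer.forward(X)        # shape check + _forward; assumed to return the output

    assert Y.shape == X.shape   # this particular layer preserves the input shape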

abstract get_parameters()[source]

Returns a tuple containing the parameters of the layer. If the layer is non-learnable, None is returned instead. The tuple can have a size > 1 for convenience: for example, a convolutional layer has one array for the filter weights and one array for the biases.
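
A hedged caller-side sketch of this return convention; the helper name is illustrative:

    def describe_parameters(layer):
        """Hypothetical helper: summarise the parameter groups exposed by a layer."""
        params = layer.get_parameters()
        if params is None:
            return "non-learnable layer"
        # A convolutional layer, for instance, would report two groups:
        # filter weights and biases.
        return [p.shape for p in params]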

learnable()[source]

Tells whether the layer has learnable parameters.

update_parameters(delta_fragments)[source]

Update the layer parameters. The base class has no learnable parameters; learnable subclasses must override this method.

Parameters

delta_fragments (tuple) – Tuple of NumPy arrays. Each array is a parameter update vector, applied with the rule params = params - delta_fragment. The tuple can have a size > 1 for convenience; for example, a convolutional layer has one array to update the filter weights and one to update the biases:

    weights = weights - delta_fragments[0]
    biases = biases - delta_fragments[1]
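
The sketch below ties get_parameters() and update_parameters() together in a hypothetical dense layer; the class name, the initialisation and the assumed ‘_backward’ contract are illustrative, not part of bgd:

    import numpy as np

    from bgd.layers.layer import Layer


    class TinyDense(Layer):
        """Hypothetical learnable layer with one weight matrix and one bias vector."""

        def __init__(self, n_in, n_out):
            super().__init__()
            self.weights = 0.01 * np.random.randn(n_in, n_out)
            self.biases = np.zeros(n_out)

        def _forward(self, X):
            return X @ self.weights + self.biases

        def _backward(self, error):
            # Assumed contract: return the signal to propagate together with
            # the gradients with respect to (weights, biases). current_input
            # is the documented reference saved by the forward wrapper.
            signal = error @ self.weights.T
            grad_w = self.current_input.T @ error
            grad_b = error.sum(axis=0)
            return signal, (grad_w, grad_b)

        def get_parameters(self):
            # Two parameter groups, so the tuple has size 2.
            return self.weights, self.biases

        def update_parameters(self, delta_fragments):
            # Documented rule, one fragment per parameter group:
            #   weights = weights - delta_fragments[0]
            #   biases  = biases  - delta_fragments[1]
            self.weights -= delta_fragments[0]
            self.biases -= delta_fragments[1]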