das.tcn.tcn

class das.tcn.tcn.TCN(nb_filters=64, kernel_size=2, nb_stacks=1, dilations=None, activation='norm_relu', use_skip_connections=True, use_separable=False, padding='causal', dropout_rate=0.0, return_sequences=True, name='tcn')

Creates a temporal convolutional network (TCN) layer.

Parameters
  • input_layer – A tensor of shape (batch_size, timesteps, input_dim). Not a constructor argument; this is the tensor the layer is called on.

  • nb_filters – The number of filters to use in the convolutional layers.

  • kernel_size – The size of the kernel to use in each convolutional layer.

  • dilations – The list of dilation rates, e.g. [1, 2, 4, 8, 16, 32, 64].

  • nb_stacks – The number of stacks of residual blocks to use.

  • activation – The activation to use ('norm_relu', 'wavenet', 'relu', …).

  • use_skip_connections – Boolean. Whether to add skip connections from the input to each residual block.

  • use_separable – Boolean. Whether to use separable convolutions in each residual block.

  • return_sequences – Boolean. Whether to return the last output in the output sequence, or the full sequence.

  • padding – The padding to use in the convolutional layers, ‘causal’ or ‘same’.

  • dropout_rate – Float between 0 and 1. Fraction of the input units to drop.

  • name – Name of the model. Useful when using multiple TCNs in one model.

Returns

A TCN layer.
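A minimal usage sketch (assuming a standalone keras import; the input shapes and surrounding model code are illustrative, not part of the library):

    import keras
    from das.tcn.tcn import TCN

    # Illustrative shapes: sequences with 1024 timesteps and 16 channels.
    inputs = keras.layers.Input(shape=(1024, 16))

    # Construct the layer with the documented arguments, then call it
    # on the input tensor.
    tcn = TCN(nb_filters=64, kernel_size=2, nb_stacks=1,
              dilations=[1, 2, 4, 8, 16], activation='norm_relu',
              use_skip_connections=True, return_sequences=True)
    outputs = tcn(inputs)  # shape: (batch_size, 1024, 64)

    model = keras.Model(inputs, outputs)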

das.tcn.tcn.channel_normalization(x: keras.src.engine.base_layer.Layer) → keras.src.engine.base_layer.Layer

Normalize a layer by its maximum activation.

This keeps a layer's values between zero and one, which helps with ReLU's unbounded activations.

Parameters

x – The layer to normalize

Returns

The max-normalized layer.
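A sketch of what max-normalization computes, assuming division by the per-timestep maximum activation (the function name and the epsilon are illustrative, not the library's exact code):

    import tensorflow as tf

    def channel_normalization_sketch(x: tf.Tensor) -> tf.Tensor:
        # Maximum activation over the channel axis, kept for broadcasting;
        # the small epsilon guards against division by zero.
        max_values = tf.reduce_max(x, axis=-1, keepdims=True) + 1e-5
        # For non-negative inputs (e.g. after a ReLU), values land in [0, 1].
        return x / max_values

In a functional Keras model, such a function would typically be wrapped in a keras.layers.Lambda.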

das.tcn.tcn.residual_block(x: keras.src.engine.base_layer.Layer, s: int, i: int, activation: str, nb_filters: int, kernel_size: int, padding: str = 'causal', use_separable: bool = False, dropout_rate: float = 0, name: str = '') → Tuple[keras.src.engine.base_layer.Layer, keras.src.engine.base_layer.Layer]

Defines the residual block for the WaveNet TCN

Parameters
  • x – The previous layer in the model

  • s – The stack index, i.e. which stack in the overall TCN.

  • i – The dilation rate (a power of 2) used for this residual block.

  • activation – The name of the activation to use.

  • nb_filters – The number of convolutional filters to use in this block

  • kernel_size – The size of the convolutional kernel

  • padding – The padding used in the convolutional layers, ‘same’ or ‘causal’.

  • use_separable – Whether to use separable convolutions in this block.

  • dropout_rate – Float between 0 and 1. Fraction of the input units to drop.

  • name – Name of the model. Useful when using multiple TCNs in one model.

Returns

A tuple where the first element is the residual model layer, and the second is the skip connection.
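A sketch of how residual_block is typically chained inside a TCN, collecting and summing the skip outputs (the loop structure, shapes, and the initial convolution are illustrative):

    import keras
    from das.tcn.tcn import residual_block

    inputs = keras.layers.Input(shape=(1024, 16))    # illustrative shape
    x = keras.layers.Conv1D(64, 1, padding='causal')(inputs)

    skip_connections = []
    for s in range(1):                               # nb_stacks = 1
        for i in [1, 2, 4, 8, 16]:                   # dilation rates
            x, skip = residual_block(x, s, i, activation='norm_relu',
                                     nb_filters=64, kernel_size=2)
            skip_connections.append(skip)

    # With skip connections enabled, the per-block skips are summed
    # before the output head.
    x = keras.layers.add(skip_connections)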

das.tcn.tcn.wave_net_activation(x: keras.src.engine.base_layer.Layer) → keras.src.engine.base_layer.Layer

Defines the gated activation used in WaveNet, described in https://deepmind.com/blog/wavenet-generative-model-raw-audio/.

Parameters

x – The layer to apply the activation to

Returns

A new layer with the WaveNet activation applied
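WaveNet's gated activation multiplies a tanh branch by a sigmoid gate; a minimal sketch (illustrative, not necessarily the library's exact code):

    import keras

    def wave_net_activation_sketch(x):
        # Filter branch (tanh), elementwise-gated by the sigmoid branch.
        tanh_out = keras.layers.Activation('tanh')(x)
        sigm_out = keras.layers.Activation('sigmoid')(x)
        return keras.layers.multiply([tanh_out, sigm_out])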