fastNLP.modules

fastNLP.modules.dropout

class fastNLP.modules.dropout.TimestepDropout(p=0.5, inplace=False)[source]

This module accepts input of shape (batch_size, num_timesteps, embedding_dim) and uses a single dropout mask of shape (batch_size, embedding_dim), applied at every time step.

forward(x)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for the forward pass must be defined within this function, one should call the Module instance itself rather than forward() directly, since calling the instance runs the registered hooks while calling forward() silently ignores them.
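
To make the masking behavior concrete, here is a minimal re-implementation sketch (hypothetical code, not the library's actual source), assuming inverted-dropout rescaling: a single Bernoulli mask is drawn per sequence and broadcast across the time dimension.

    import torch
    import torch.nn as nn

    class TimestepDropoutSketch(nn.Module):
        def __init__(self, p=0.5):
            super().__init__()
            self.p = p

        def forward(self, x):
            # x: (batch_size, num_timesteps, embedding_dim)
            if not self.training or self.p == 0:
                return x
            batch_size, _, embedding_dim = x.size()
            # One mask per sequence, shape (batch_size, 1, embedding_dim),
            # broadcast over num_timesteps so every step drops the same units.
            mask = x.new_empty(batch_size, 1, embedding_dim).bernoulli_(1 - self.p)
            return x * mask / (1 - self.p)  # inverted-dropout rescaling

Reusing one mask across all time steps is what distinguishes this from applying nn.Dropout independently at each step.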

fastNLP.modules.other_modules

class fastNLP.modules.other_modules.BiAffine(n_enc, n_dec, n_labels, biaffine=True, **kwargs)[source]
forward(input_d, input_e, mask_d=None, mask_e=None)[source]
Parameters:
  • input_d (Tensor) – the decoder input tensor with shape = [batch, length_decoder, input_size]
  • input_e (Tensor) – the encoder input tensor with shape = [batch, length_encoder, input_size]
  • mask_d – Tensor or None, the mask tensor for decoder with shape = [batch, length_decoder]
  • mask_e – Tensor or None, the mask tensor for encoder with shape = [batch, length_encoder]
Returns:

Tensor, the energy tensor with shape = [batch, num_label, length_decoder, length_encoder]
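
The following standalone sketch shows one common way such a biaffine scorer is computed (the parameter names U, W_d, W_e, and b are illustrative, not the class's actual attributes): a per-label bilinear term over every decoder/encoder position pair, plus linear terms for each side.

    import torch
    import torch.nn as nn

    class BiAffineSketch(nn.Module):
        def __init__(self, n_enc, n_dec, n_labels):
            super().__init__()
            self.U = nn.Parameter(torch.randn(n_labels, n_dec, n_enc) * 0.01)
            self.W_d = nn.Parameter(torch.randn(n_labels, n_dec) * 0.01)
            self.W_e = nn.Parameter(torch.randn(n_labels, n_enc) * 0.01)
            self.b = nn.Parameter(torch.zeros(n_labels, 1, 1))

        def forward(self, input_d, input_e, mask_d=None, mask_e=None):
            # Bilinear term: [batch, num_label, length_decoder, length_encoder]
            out = torch.einsum('bld,nde,bme->bnlm', input_d, self.U, input_e)
            # Linear terms for the decoder and encoder sides, plus a bias
            out = out + torch.einsum('bld,nd->bnl', input_d, self.W_d).unsqueeze(3)
            out = out + torch.einsum('bme,ne->bnm', input_e, self.W_e).unsqueeze(2)
            out = out + self.b
            # Zero out positions beyond each sequence's length
            if mask_d is not None:
                out = out * mask_d.unsqueeze(1).unsqueeze(3)
            if mask_e is not None:
                out = out * mask_e.unsqueeze(1).unsqueeze(2)
            return out

    scorer = BiAffineSketch(n_enc=100, n_dec=100, n_labels=5)
    energy = scorer(torch.randn(2, 7, 100), torch.randn(2, 9, 100))
    print(energy.shape)  # torch.Size([2, 5, 7, 9])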

class fastNLP.modules.other_modules.BiLinear(n_left, n_right, n_out, bias=True)[source]
forward(input_left, input_right)[source]
Parameters:
  • input_left (Tensor) – the left input tensor with shape = [batch1, batch2, …, left_features]
  • input_right (Tensor) – the right input tensor with shape = [batch1, batch2, …, right_features]
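
PyTorch's built-in torch.nn.Bilinear computes the same kind of transform, y = x_left^T A x_right + b, and can be used to illustrate the documented shape convention (the concrete sizes below are made up):

    import torch
    import torch.nn as nn

    bilinear = nn.Bilinear(30, 25, 10)   # n_left=30, n_right=25, n_out=10
    left = torch.randn(4, 7, 30)         # [batch1, batch2, left_features]
    right = torch.randn(4, 7, 25)        # [batch1, batch2, right_features]
    out = bilinear(left, right)
    print(out.shape)                     # torch.Size([4, 7, 10])

All leading dimensions are treated as batch dimensions; only the trailing feature dimensions must match the layer's configuration.
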
class fastNLP.modules.other_modules.GroupNorm(num_features, num_groups=20, eps=1e-05)[source]
forward(x)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for the forward pass must be defined within this function, one should call the Module instance itself rather than forward() directly, since calling the instance runs the registered hooks while calling forward() silently ignores them.
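
A minimal sketch of the group-norm computation, assuming a 2-D input of shape [batch, num_features] and omitting any learnable per-feature affine parameters the class may carry:

    import torch

    def group_norm_sketch(x, num_groups=20, eps=1e-5):
        batch, num_features = x.shape  # num_features must divide evenly by num_groups
        g = x.view(batch, num_groups, num_features // num_groups)
        mean = g.mean(dim=2, keepdim=True)
        var = g.var(dim=2, unbiased=False, keepdim=True)
        g = (g - mean) / torch.sqrt(var + eps)  # normalize within each group
        return g.view(batch, num_features)

For inputs with channel and spatial dimensions, PyTorch's torch.nn.GroupNorm(num_groups, num_channels, eps) implements the same idea.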

class fastNLP.modules.other_modules.LayerNormalization(layer_size, eps=0.001)[source]
Parameters:
  • layer_size (int) – the size of the dimension to be normalized
  • eps (float) – default=1e-3
forward(z)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for the forward pass must be defined within this function, one should call the Module instance itself rather than forward() directly, since calling the instance runs the registered hooks while calling forward() silently ignores them.
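
The underlying computation is standard layer normalization over the last dimension; a minimal sketch, with the class's learnable gain a and bias b (each of size layer_size) shown as plain tensors for illustration:

    import torch

    def layer_norm_sketch(z, a, b, eps=1e-3):
        mean = z.mean(dim=-1, keepdim=True)
        std = z.std(dim=-1, keepdim=True)
        return a * (z - mean) / (std + eps) + b

    z = torch.randn(2, 5, 16)
    out = layer_norm_sketch(z, torch.ones(16), torch.zeros(16))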

fastNLP.modules.utils

fastNLP.modules.utils.initial_parameter(net, initial_method=None)[source]

A function used to initialize the weights of a PyTorch model.

Parameters:
  • net – a PyTorch model
  • initial_method (str) –

    one of the following initializations.

    • xavier_uniform
    • xavier_normal (default)
    • kaiming_normal (alias: msra)
    • kaiming_uniform
    • orthogonal
    • sparse
    • normal
    • uniform
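
For example, to re-initialize all weights in a model (the architecture below is arbitrary):

    import torch.nn as nn
    from fastNLP.modules.utils import initial_parameter

    net = nn.Sequential(nn.Linear(100, 50), nn.ReLU(), nn.Linear(50, 10))
    initial_parameter(net, initial_method='kaiming_uniform')
    # Omitting initial_method falls back to the default, xavier_normal:
    initial_parameter(net)
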
fastNLP.modules.utils.seq_mask(seq_len, max_len)[source]

Create a mask over a batch of padded sequences, marking the positions that fall within each sequence's length.

Parameters:
  • seq_len – list or torch.Tensor, the lengths of sequences in a batch.
  • max_len – int, the maximum sequence length in a batch.
Returns:

mask, torch.LongTensor with shape = [batch_size, max_len]
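
An equivalent computation can be sketched in a few lines, assuming the conventional semantics that 1 marks a position inside the sequence and 0 marks padding:

    import torch

    def seq_mask_sketch(seq_len, max_len):
        seq_len = torch.as_tensor(seq_len)              # [batch_size]
        positions = torch.arange(max_len).unsqueeze(0)  # [1, max_len]
        return (positions < seq_len.unsqueeze(1)).long()

    print(seq_mask_sketch([3, 1, 2], max_len=4))
    # tensor([[1, 1, 1, 0],
    #         [1, 0, 0, 0],
    #         [1, 1, 0, 0]])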