fastNLP.modules¶
- fastNLP.modules.aggregator
- fastNLP.modules.decoder
- fastNLP.modules.encoder
- fastNLP.modules.encoder.char_embedding
- fastNLP.modules.encoder.conv
- fastNLP.modules.encoder.conv_maxpool
- fastNLP.modules.encoder.embedding
- fastNLP.modules.encoder.linear
- fastNLP.modules.encoder.lstm
- fastNLP.modules.encoder.masked_rnn
- fastNLP.modules.encoder.transformer
- fastNLP.modules.encoder.variational_rnn
fastNLP.modules.dropout¶
class fastNLP.modules.dropout.TimestepDropout(p=0.5, inplace=False)[source]¶
Accepts a tensor of shape [batch_size, num_timesteps, embedding_dim] and applies a single dropout mask of shape (batch_size, embedding_dim) to every time step, so the same units are dropped across the whole sequence.

forward(x)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note: Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
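The behavior described above can be sketched in a few lines: sample one dropout mask per (batch, embedding) position and broadcast it across the time dimension. This is a minimal illustration of the idea, not necessarily fastNLP's exact implementation.

```python
import torch
import torch.nn as nn


class TimestepDropout(nn.Dropout):
    """Dropout that samples one mask of shape (batch, embedding_dim)
    and reuses it at every time step, instead of resampling per step."""

    def forward(self, x):
        # x: [batch_size, num_timesteps, embedding_dim]
        dropout_mask = x.new_ones(x.shape[0], x.shape[2])
        nn.functional.dropout(dropout_mask, self.p, self.training, inplace=True)
        # Broadcast the single mask over the time dimension.
        return dropout_mask.unsqueeze(1) * x


drop = TimestepDropout(p=0.5)
drop.train()
x = torch.ones(2, 4, 8)
y = drop(x)
```

Because the mask is shared, every time step of `y` has zeros in the same embedding positions, which is the point of timestep (a.k.a. variational) dropout.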
fastNLP.modules.other_modules¶

class fastNLP.modules.other_modules.BiAffine(n_enc, n_dec, n_labels, biaffine=True, **kwargs)[source]¶

forward(input_d, input_e, mask_d=None, mask_e=None)[source]¶
Parameters:
- input_d (Tensor) – the decoder input tensor with shape = [batch, length_decoder, input_size]
- input_e (Tensor) – the encoder input tensor with shape = [batch, length_encoder, input_size]
- mask_d (Tensor or None) – the mask tensor for the decoder with shape = [batch, length_decoder]
- mask_e (Tensor or None) – the mask tensor for the encoder with shape = [batch, length_encoder]
Returns: Tensor, the energy tensor with shape = [batch, num_label, length_decoder, length_encoder]
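A biaffine scorer of this shape can be sketched with `torch.einsum`: a bilinear term between decoder and encoder states per label, plus per-side linear terms and a bias. The class name `BiAffineSketch` and the parameter layout below are illustrative assumptions, not fastNLP's actual code.

```python
import torch
import torch.nn as nn


class BiAffineSketch(nn.Module):
    """Minimal biaffine scorer: energy[b, l, i, j] scores decoder
    position i against encoder position j under label l."""

    def __init__(self, n_dec, n_enc, n_labels):
        super().__init__()
        self.U = nn.Parameter(torch.randn(n_labels, n_dec, n_enc) * 0.01)
        self.W_d = nn.Parameter(torch.randn(n_labels, n_dec) * 0.01)
        self.W_e = nn.Parameter(torch.randn(n_labels, n_enc) * 0.01)
        self.b = nn.Parameter(torch.zeros(n_labels))

    def forward(self, input_d, input_e, mask_d=None, mask_e=None):
        # Bilinear term -> [batch, n_labels, length_decoder, length_encoder]
        out = torch.einsum('bxi,lij,byj->blxy', input_d, self.U, input_e)
        # Linear terms, broadcast over the opposite side's length.
        out = out + torch.einsum('bxi,li->blx', input_d, self.W_d).unsqueeze(3)
        out = out + torch.einsum('byj,lj->bly', input_e, self.W_e).unsqueeze(2)
        out = out + self.b.view(1, -1, 1, 1)
        if mask_d is not None and mask_e is not None:
            # Zero out energies at padded positions on either side.
            out = (out * mask_d.view(mask_d.size(0), 1, -1, 1)
                       * mask_e.view(mask_e.size(0), 1, 1, -1))
        return out


scorer = BiAffineSketch(n_dec=16, n_enc=16, n_labels=5)
energy = scorer(torch.randn(2, 7, 16), torch.randn(2, 9, 16))
```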
class fastNLP.modules.other_modules.GroupNorm(num_features, num_groups=20, eps=1e-05)[source]¶
forward(x)[source]¶
Defines the computation performed at every call. Should be overridden by all subclasses; see the note under TimestepDropout.forward above.
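Group normalization splits the feature (channel) dimension into groups and normalizes each group to zero mean and unit variance. A functional sketch of that computation, under the assumption of a [batch, num_features, ...] layout and without the learned affine parameters:

```python
import torch


def group_norm(x, num_groups, eps=1e-5):
    """Normalize each group of channels to zero mean / unit variance.
    x: [batch, num_features, ...]; num_features must be divisible
    by num_groups. Learned gain/bias are omitted for brevity."""
    b = x.shape[0]
    g = x.reshape(b, num_groups, -1)
    mean = g.mean(dim=2, keepdim=True)
    var = g.var(dim=2, unbiased=False, keepdim=True)
    g = (g - mean) / torch.sqrt(var + eps)
    return g.reshape(x.shape)


x = torch.randn(4, 20, 10)
y = group_norm(x, num_groups=4)
```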
class fastNLP.modules.other_modules.LayerNormalization(layer_size, eps=0.001)[source]¶
Parameters:
- layer_size (int) – size of the last (normalized) dimension
- eps (float) – default = 1e-3
forward(z)[source]¶
Defines the computation performed at every call. Should be overridden by all subclasses; see the note under TimestepDropout.forward above.
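Layer normalization standardizes the last dimension of the input. A minimal functional sketch, assuming the std-based formulation and omitting the learned gain and bias that the module would normally apply:

```python
import torch


def layer_norm(z, eps=1e-3):
    """Normalize the last dimension to zero mean and (near) unit std.
    A learned gain and bias, omitted here, would usually follow."""
    mean = z.mean(dim=-1, keepdim=True)
    std = z.std(dim=-1, keepdim=True)
    return (z - mean) / (std + eps)


z = torch.randn(2, 5, 8)
out = layer_norm(z)
```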
fastNLP.modules.utils¶
fastNLP.modules.utils.initial_parameter(net, initial_method=None)[source]¶
Initializes the weights of a PyTorch model.
Parameters:
- net – a PyTorch model
- initial_method (str) – one of the following initialization schemes:
- xavier_uniform
- xavier_normal (default)
- kaiming_normal, or msra
- kaiming_uniform
- orthogonal
- sparse
- normal
- uniform
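A dispatcher like the one described above can be sketched by mapping each method name to the corresponding `torch.nn.init` function and applying it to every weight matrix. This is an illustrative reimplementation under the listed names, not fastNLP's exact code; in particular, the bias handling and the `sparse` sparsity value are assumptions.

```python
import torch.nn as nn
from torch.nn import init


def initial_parameter(net, initial_method=None):
    """Pick an init function by name (xavier_normal by default) and
    apply it to every >=2-D parameter; init biases with a normal."""
    methods = {
        'xavier_uniform': init.xavier_uniform_,
        'xavier_normal': init.xavier_normal_,
        'kaiming_normal': init.kaiming_normal_,
        'msra': init.kaiming_normal_,
        'kaiming_uniform': init.kaiming_uniform_,
        'orthogonal': init.orthogonal_,
        'sparse': lambda w: init.sparse_(w, sparsity=0.1),  # assumed sparsity
        'normal': init.normal_,
        'uniform': init.uniform_,
    }
    init_fn = methods.get(initial_method, init.xavier_normal_)
    for p in net.parameters():
        if p.dim() >= 2:      # weight matrices
            init_fn(p)
        else:                 # biases and other 1-D parameters
            init.normal_(p)


model = nn.Linear(8, 4)
initial_parameter(model, 'kaiming_uniform')
```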