fastNLP.modules.aggregator

fastNLP.modules.aggregator.attention

class fastNLP.modules.aggregator.attention.Attention(normalize=False)[source]
forward(query, memory, mask)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
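
The same convention applies to every concrete attention class below. A minimal sketch, using a hypothetical subclass (not part of fastNLP), of defining forward and then calling the module instance so that registered hooks run:

    import torch
    from fastNLP.modules.aggregator.attention import Attention

    class UniformAttention(Attention):
        """Hypothetical subclass for illustration only."""
        def forward(self, query, memory, mask):
            # toy scoring: uniform weights over the unmasked memory positions
            weights = mask.float()
            weights = weights / weights.sum(dim=-1, keepdim=True)
            return torch.bmm(weights.unsqueeze(1), memory).squeeze(1)

    att = UniformAttention()
    query = torch.randn(4, 16)        # [batch, hidden]
    memory = torch.randn(4, 10, 16)   # [batch, seq_len, hidden]
    mask = torch.ones(4, 10)          # assumed convention: 1 = keep, 0 = ignore
    out = att(query, memory, mask)    # call the instance, not att.forward(...)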

class fastNLP.modules.aggregator.attention.DotAtte(key_size, value_size)[source]
forward(Q, K, V, seq_mask=None)[source]
Parameters:
  • Q – [batch, seq_len, key_size]
  • K – [batch, seq_len, key_size]
  • V – [batch, seq_len, value_size]
  • seq_mask – [batch, seq_len]
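
The shapes above describe scaled dot-product attention. A minimal sketch of the computation they imply, written in plain PyTorch rather than taken from fastNLP's source, with the scaling by sqrt(key_size) and the masking convention (0 = padding) assumed:

    import math
    import torch
    import torch.nn.functional as F

    def dot_attention(Q, K, V, seq_mask=None):
        # Q, K: [batch, seq_len, key_size]; V: [batch, seq_len, value_size]
        scores = torch.bmm(Q, K.transpose(1, 2)) / math.sqrt(Q.size(-1))
        if seq_mask is not None:
            # seq_mask: [batch, seq_len]; assumed: 0 marks padding positions
            scores = scores.masked_fill(seq_mask.unsqueeze(1) == 0, float('-inf'))
        weights = F.softmax(scores, dim=-1)   # [batch, seq_len, seq_len]
        return torch.bmm(weights, V)          # [batch, seq_len, value_size]
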
class fastNLP.modules.aggregator.attention.MultiHeadAtte(input_size, output_size, key_size, value_size, num_atte)[source]
forward(Q, K, V, seq_mask=None)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
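
A hedged usage sketch based only on the constructor and forward signatures above; the input layout [batch, seq_len, input_size] and the projection of the concatenated heads back to output_size are assumptions, not documented behavior:

    import torch
    from fastNLP.modules.aggregator.attention import MultiHeadAtte

    atte = MultiHeadAtte(input_size=256, output_size=256,
                         key_size=64, value_size=64, num_atte=4)

    Q = torch.randn(8, 20, 256)   # [batch, seq_len, input_size] (assumed)
    K = torch.randn(8, 20, 256)
    V = torch.randn(8, 20, 256)
    seq_mask = torch.ones(8, 20)  # assumed: 1 = real token, 0 = padding

    out = atte(Q, K, V, seq_mask=seq_mask)
    print(out.size())             # check the projected output shape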

fastNLP.modules.aggregator.avg_pool

class fastNLP.modules.aggregator.avg_pool.AvgPool(stride=None, padding=0)[source]

1-d average pooling module.

forward(x)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
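
A usage sketch, assuming the module wraps torch's 1-d average pooling and therefore expects the usual [batch, channels, length] layout:

    import torch
    from fastNLP.modules.aggregator.avg_pool import AvgPool

    pool = AvgPool()              # stride=None, padding=0
    x = torch.randn(8, 128, 30)   # [batch, hidden_size, seq_len] (assumed layout)
    out = pool(x)
    print(out.size())             # the pooled shape depends on the kernel the module picks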

fastNLP.modules.aggregator.kmax_pool

class fastNLP.modules.aggregator.kmax_pool.KMaxPool(k=1)[source]

K max-pooling module.

forward(x)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
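
K-max pooling keeps the k largest values along the time dimension instead of only the maximum. A minimal sketch of the technique in plain PyTorch (not fastNLP's source), assuming a [batch, channels, seq_len] input; some formulations additionally preserve the original order of the selected values, which is omitted here:

    import torch

    def kmax_pool(x, k=1):
        # x: [batch, channels, seq_len]; keep the k largest values per channel
        topk, _ = torch.topk(x, k=k, dim=-1)   # [batch, channels, k]
        return topk.reshape(x.size(0), -1)     # flatten to [batch, channels * k]

    x = torch.randn(8, 128, 30)
    out = kmax_pool(x, k=2)                    # [8, 256]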

fastNLP.modules.aggregator.max_pool

class fastNLP.modules.aggregator.max_pool.MaxPool(stride=None, padding=0, dilation=1)[source]

1-d max-pooling module.

forward(x)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
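
Usage mirrors AvgPool above; a sketch assuming the same [batch, channels, length] convention as torch's 1-d max pooling:

    import torch
    from fastNLP.modules.aggregator.max_pool import MaxPool

    pool = MaxPool()              # stride=None, padding=0, dilation=1
    x = torch.randn(8, 128, 30)   # [batch, hidden_size, seq_len] (assumed layout)
    out = pool(x)
    print(out.size())             # check the pooled shape produced by the module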

fastNLP.modules.aggregator.self_attention

class fastNLP.modules.aggregator.self_attention.SelfAttention(input_size, attention_unit=350, attention_hops=10, drop=0.5, initial_method=None, use_cuda=False)[source]

Self Attention Module.

Parameters:
  • input_size – int, the size of the input vector
  • attention_unit – int, the width of the weight matrix
  • attention_hops – int, the number of encoded vectors (attention hops)

forward(input, input_origin)[source]
Parameters:
  • input – the matrix to apply attention over. [batch_size, seq_len, h_dim]
  • input_origin – the token indices, including the pad token (0). [batch_size, seq_len]
Return output1:

the input matrix after the attention operation. [batch_size, attention_hops, h_dim]

Return output2:

the attention penalty term, a scalar of shape [1]
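
A hedged usage sketch based on the signatures above; here input is an encoder output and input_origin the corresponding token ids used to mask padding, and the shapes follow the documentation of forward:

    import torch
    from fastNLP.modules.aggregator.self_attention import SelfAttention

    att = SelfAttention(input_size=256, attention_unit=350, attention_hops=10)

    tokens = torch.randint(1, 1000, (8, 40))   # [batch_size, seq_len], 0 = pad
    encoded = torch.randn(8, 40, 256)          # [batch_size, seq_len, input_size]

    output, penalty = att(encoded, tokens)
    # output:  [batch_size, attention_hops, input_size]
    # penalty: scalar regularization term to add to the training loss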

penalization(attention)[source]

Compute the penalization term for the attention module.
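
The penalty presumably corresponds to the structured self-attention regularizer of Lin et al. (2017), which discourages the attention hops from focusing on the same positions. A sketch of that standard term (not fastNLP's exact implementation), assuming A is the [batch, hops, seq_len] attention matrix:

    import torch

    def penalization(A):
        # A: [batch, hops, seq_len] attention weights
        # Frobenius-norm penalty ||A A^T - I||_F^2, averaged over the batch
        hops = A.size(1)
        identity = torch.eye(hops, device=A.device).unsqueeze(0)
        gram = torch.bmm(A, A.transpose(1, 2)) - identity   # [batch, hops, hops]
        return (gram ** 2).sum(dim=(1, 2)).mean()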