Complex Networks
Building a complex neural network usually requires writing a lot of code for complex functions. To make it easier for users to build complex network models, we provide some common basic modules that simplify user code and reduce development cost. These modules are usually composed of fine-grained functions combined according to certain logic. For the implementation, please refer to nets.
1. simple_img_conv_pool
simple_img_conv_pool is obtained by concatenating api_fluid_layers_conv2d with api_fluid_layers_pool2d. This module is widely used in image classification models, such as MNIST digit classification.
For API Reference, please refer to api_fluid_nets_simple_img_conv_pool
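Below is a minimal sketch of calling this module. The input name, shapes and hyper-parameter values are illustrative assumptions; check the exact argument names against the API reference above.

```python
import paddle.fluid as fluid

# Illustrative MNIST-style input: NCHW image, batch dimension is implicit.
img = fluid.layers.data(name='img', shape=[1, 28, 28], dtype='float32')

# One conv2d followed by one pool2d (assumed hyper-parameter values).
conv_pool = fluid.nets.simple_img_conv_pool(
    input=img,
    num_filters=20,   # number of convolution kernels
    filter_size=5,    # conv2d kernel size
    pool_size=2,      # pool2d window size
    pool_stride=2,    # pool2d stride
    act='relu')       # activation applied after conv2d
```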
2. img_conv_group
img_conv_group is composed of api_fluid_layers_conv2d, api_fluid_layers_batch_norm, api_fluid_layers_dropout and api_fluid_layers_pool2d. This module combines multiple api_fluid_layers_conv2d, api_fluid_layers_batch_norm and api_fluid_layers_dropout layers with a single api_fluid_layers_pool2d. The numbers of api_fluid_layers_conv2d, api_fluid_layers_batch_norm and api_fluid_layers_dropout can be controlled separately, resulting in various combinations. This module is widely used in more complex image classification tasks, such as VGG.
For API Reference, please refer to api_fluid_nets_img_conv_group
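The sketch below shows a VGG-style block with two conv2d layers and one pool2d. The shapes, filter counts and dropout rates are illustrative assumptions; verify the exact argument names in the API reference above.

```python
import paddle.fluid as fluid

# Illustrative image input (e.g. 3 x 224 x 224).
img = fluid.layers.data(name='img', shape=[3, 224, 224], dtype='float32')

conv_group = fluid.nets.img_conv_group(
    input=img,
    conv_num_filter=[64, 64],           # two conv2d layers, 64 filters each
    conv_filter_size=3,
    conv_act='relu',
    conv_with_batchnorm=True,           # add batch_norm after every conv2d
    conv_batchnorm_drop_rate=[0.3, 0],  # dropout rate after each batch_norm
    pool_size=2,                        # a single pool2d closes the group
    pool_stride=2)
```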
3. sequence_conv_pool
sequence_conv_pool is obtained by concatenating api_fluid_layers_sequence_conv with api_fluid_layers_sequence_pool. This module is widely used in natural language processing and speech recognition, in models such as text classification models, TagSpace and Multi-view Simnet.
For API Reference, please refer to api_fluid_nets_sequence_conv_pool
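A minimal text-classification-style sketch follows. The vocabulary size, embedding size and other values are illustrative assumptions; the exact argument names should be checked against the API reference above.

```python
import paddle.fluid as fluid

# Illustrative sequence input of word ids (LoD level 1), then an embedding.
words = fluid.layers.data(name='words', shape=[1], dtype='int64', lod_level=1)
emb = fluid.layers.embedding(input=words, size=[10000, 128])  # [vocab, emb_dim]

# sequence_conv over a context window, then pooling over the whole sequence.
conv_pool = fluid.nets.sequence_conv_pool(
    input=emb,
    num_filters=64,   # number of sequence_conv filters
    filter_size=3,    # context window size of sequence_conv
    act='tanh',       # activation of sequence_conv
    pool_type='max')
```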
4. glu
glu stands for Gated Linear Units and originates from the paper Language Modeling with Gated Convolutional Networks. It consists of api_fluid_layers_split, api_fluid_layers_sigmoid and api_fluid_layers_elementwise_mul. It splits the input data into two equal parts, applies Sigmoid to the second part, and then multiplies the Sigmoid result element-wise with the first part to get the output.
For API Reference, please refer to api_fluid_nets_glu
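A minimal sketch, with an assumed input shape. The size of the split dimension must be even so the input can be divided into two equal halves.

```python
import paddle.fluid as fluid

# Illustrative input: shape [-1, 6, 9] once the implicit batch dim is added.
x = fluid.layers.data(name='x', shape=[6, 9], dtype='float32')

# Splits x into two halves along dim=1, applies sigmoid to the second half,
# and multiplies it element-wise with the first half.
out = fluid.nets.glu(input=x, dim=1)  # output shape: [-1, 3, 9]
```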
5. scaled_dot_product_attention
scaled_dot_product_attention originates from the paper Attention Is All You Need and is mainly composed of api_fluid_layers_fc and api_fluid_layers_softmax. For the input Queries (Q), Keys (K) and Values (V), it calculates the attention according to the following formula:

$$\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\left(\frac{QK^{T}}{\sqrt{d_k}}\right)V$$

where d_k is the dimension of the keys. This module is widely used in machine translation models, such as the Transformer.
For API Reference, please refer to api_fluid_nets_scaled_dot_product_attention
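The sketch below shows one possible call. The sequence lengths, hidden sizes and head count are illustrative assumptions (the hidden size of queries and keys should match, and should be divisible by the number of heads); confirm the exact argument names in the API reference above.

```python
import paddle.fluid as fluid

# Illustrative 3-D inputs: [batch, seq_len, hidden] once the batch dim is added.
queries = fluid.layers.data(name='queries', shape=[10, 64], dtype='float32')
keys    = fluid.layers.data(name='keys',    shape=[12, 64], dtype='float32')
values  = fluid.layers.data(name='values',  shape=[12, 64], dtype='float32')

# Projects Q/K/V, computes softmax(QK^T / sqrt(d_k))V per head,
# and concatenates the per-head results.
context = fluid.nets.scaled_dot_product_attention(
    queries=queries, keys=keys, values=values,
    num_heads=8, dropout_rate=0.0)
```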