TracedLayer

class paddle.jit.TracedLayer(program, parameters, feed_names, fetch_names) [source]
Api_attr: imperative

TracedLayer converts a forward dygraph model into a static graph model, mainly so that the dygraph model can be saved for online inference via C++. Users can also run inference in Python with the converted static graph model, which usually performs better than the original dygraph model.

TracedLayer would run the static graph model using Executor and CompiledProgram. The static graph model would share parameters with the dygraph model.

TracedLayer objects should not be created via the constructor; use the static method TracedLayer.trace(layer, inputs) instead.

TracedLayer can only convert a data-independent dygraph model into a static graph model, which means the dygraph model's control flow must not depend on the tensor data or shapes.
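The data-independence constraint above can be illustrated with a minimal, Paddle-free sketch (plain Python, with a hypothetical `trace` helper): a tracer replays the model once on a concrete input and freezes the operations it saw, so a branch that depends on the input values is captured only for the branch taken during tracing.

```python
# Conceptual sketch: a tracer records the ops executed on one concrete input.
# If control flow depends on the data, only the branch taken while tracing
# is captured, and the "static" model replays that branch for every input.

def model(x):
    # data-dependent branch: which ops run depends on the input VALUES
    if sum(x) > 0:
        return [v * 2 for v in x]
    return [v - 1 for v in x]

def trace(fn, example_input):
    """Hypothetical tracer: runs fn once and freezes the branch it observed."""
    result = fn(example_input)
    branch = "double" if sum(example_input) > 0 else "decrement"

    # the frozen "static graph" always replays the recorded branch
    def static_fn(x):
        if branch == "double":
            return [v * 2 for v in x]
        return [v - 1 for v in x]

    return result, static_fn

_, static_model = trace(model, [1, 2, 3])  # traced with sum(x) > 0

print(static_model([-5, -5, -5]))  # replays the doubled branch: [-10, -10, -10]
print(model([-5, -5, -5]))         # eager model takes the other branch: [-6, -6, -6]
```

This is why a model whose behavior depends on tensor values or shapes cannot be faithfully converted: the traced static graph silently disagrees with the dygraph model on inputs that would take the other branch.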

static trace(layer, inputs)

This method is the only allowed way to create a TracedLayer object. It calls layer(*inputs) to run the dygraph model and converts it into a static graph model.

Parameters
  • layer (paddle.nn.Layer) – the layer object to be traced.

  • inputs (list(Tensor)|tuple(Tensor)|Tensor) – the input tensors of the layer object.

Returns

A tuple of two items: the first is the output of layer(*inputs), and the second is the created TracedLayer object.

Return type

tuple

Examples

import paddle

class ExampleLayer(paddle.nn.Layer):
    def __init__(self):
        super(ExampleLayer, self).__init__()
        self._fc = paddle.nn.Linear(3, 10)

    def forward(self, input):
        return self._fc(input)


layer = ExampleLayer()
in_var = paddle.uniform(shape=[2, 3], dtype='float32')
out_dygraph, static_layer = paddle.jit.TracedLayer.trace(layer, inputs=[in_var])

# run the static graph model using Executor inside
out_static_graph = static_layer([in_var])

print(len(out_static_graph)) # 1
print(out_static_graph[0].shape) # (2, 10)

# save the static graph model for inference
static_layer.save_inference_model('./saved_infer_model')

set_strategy(build_strategy=None, exec_strategy=None)

Set the strategies used when running the static graph model.

Parameters
  • build_strategy (BuildStrategy, optional) – build strategy of CompiledProgram inside TracedLayer. Default None.

  • exec_strategy (ExecutionStrategy, optional) – execution strategy of CompiledProgram inside TracedLayer. Default None.

Returns

None

Examples

import paddle

class ExampleLayer(paddle.nn.Layer):
    def __init__(self):
        super(ExampleLayer, self).__init__()
        self._fc = paddle.nn.Linear(3, 10)

    def forward(self, input):
        return self._fc(input)

layer = ExampleLayer()
in_var = paddle.uniform(shape=[2, 3], dtype='float32')

out_dygraph, static_layer = paddle.jit.TracedLayer.trace(layer, inputs=[in_var])

build_strategy = paddle.static.BuildStrategy()
build_strategy.enable_inplace = True

exec_strategy = paddle.static.ExecutionStrategy()
exec_strategy.num_threads = 2

static_layer.set_strategy(build_strategy=build_strategy, exec_strategy=exec_strategy)
out_static_graph = static_layer([in_var])

save_inference_model(path, feed=None, fetch=None, **kwargs)

Save the TracedLayer to a model for inference. The saved inference model can be loaded by C++ inference APIs.

path is the prefix of the saved artifacts: the saved program file has the suffix .pdmodel, and the saved persistable variables file has the suffix .pdiparams.
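The naming convention above can be sketched with a small helper (the function name is hypothetical, introduced only for illustration): the two artifact paths are derived from the prefix by appending the documented suffixes.

```python
# Illustrative sketch of the documented naming convention: `path` is a prefix,
# and the artifact file names are formed by appending the two suffixes.
def inference_artifacts(path):
    return path + ".pdmodel", path + ".pdiparams"

model_file, params_file = inference_artifacts("./saved_infer_model/model")
print(model_file)   # ./saved_infer_model/model.pdmodel
print(params_file)  # ./saved_infer_model/model.pdiparams
```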

Parameters
  • path (str) – The path prefix to save model. The format is dirname/file_prefix or file_prefix.

  • feed (list[int], optional) – the input variable indices of the saved inference model. If None, all input variables of the TracedLayer object would be the inputs of the saved inference model. Default None.

  • fetch (list[int], optional) – the output variable indices of the saved inference model. If None, all output variables of the TracedLayer object would be the outputs of the saved inference model. Default None.

  • kwargs – Supported keys include 'clip_extra': set it to True to clip extra information from every operator.

Returns

None

Examples

import numpy as np
import paddle

class ExampleLayer(paddle.nn.Layer):
    def __init__(self):
        super(ExampleLayer, self).__init__()
        self._fc = paddle.nn.Linear(3, 10)

    def forward(self, input):
        return self._fc(input)

save_dirname = './saved_infer_model'
in_np = np.random.random([2, 3]).astype('float32')
in_var = paddle.to_tensor(in_np)
layer = ExampleLayer()

out_dygraph, static_layer = paddle.jit.TracedLayer.trace(layer, inputs=[in_var])
static_layer.save_inference_model(save_dirname, feed=[0], fetch=[0])

paddle.enable_static()
place = paddle.CPUPlace()
exe = paddle.static.Executor(place)
program, feed_vars, fetch_vars = paddle.static.load_inference_model(save_dirname, exe)

fetch, = exe.run(program, feed={feed_vars[0]: in_np}, fetch_list=fetch_vars)
print(fetch.shape) # (2, 10)