empty_cache

paddle.device.cuda.empty_cache() [source]

Releases idle cached memory held by the allocator so that it can be used by other GPU applications and becomes visible in nvidia-smi. In most cases you do not need to call this function: Paddle does not release memory back to the OS when you delete Tensors on the GPU, because it keeps GPU memory in a pool so that subsequent allocations can be served much faster.

Examples

import paddle

# required: gpu
paddle.set_device("gpu")
tensor = paddle.randn([512, 512, 512], "float32")
# Deleting the tensor returns its memory to Paddle's pool, not to the OS.
del tensor
# Release the idle cached memory back to the OS.
paddle.device.cuda.empty_cache()