
Solving the TensorFlow GPU video memory hogging issue

When training a model with PyTorch, I found that the model itself does not take up significant GPU memory. After converting it to TensorFlow (and converting the weights), however, I found that Python TensorFlow grabs all of the video memory by default and does not release it unless I manually terminate the program. I run two models sequentially, and the first one does not free its memory after it finishes (see GitHub issue tensorflow/tensorflow#1727), which has a significant impact on the second.

Later, I found out that TensorFlow offers two ways to limit its GPU video memory usage:

1. Setting a fixed fraction of GPU memory

This approach is useful both for study and for work: while studying, it lets you share a graphics card more efficiently; at work, it makes it easy to establish a model's GPU memory ceiling, which is a handy input when deciding what card to buy. The code is shown below:
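The snippet itself was lost from the page, so here is a minimal sketch of the classic TF1-style setup it describes, assuming the per_process_gpu_memory_fraction option:

import tensorflow as tf

# Cap this process at 10% of the GPU's total video memory
gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=0.1)
sess = tf.Session(config=tf.ConfigProto(gpu_options=gpu_options))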

Here, 0.1 means that at most 10% of the total video memory will be used.

2. Setting GPU memory to allocate on demand (I didn't specifically test this one; I got it from the TensorFlow forums)

import tensorflow as tf

# Allocate GPU memory incrementally instead of grabbing it all up front
gpu_options = tf.GPUOptions(allow_growth=True)
sess = tf.Session(config=tf.ConfigProto(gpu_options=gpu_options))
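For TensorFlow 2.x, where sessions are gone, the equivalent on-demand setting is exposed through the tf.config API; a minimal sketch:

import tensorflow as tf

# Enable memory growth on every visible GPU before any tensors are allocated
for gpu in tf.config.list_physical_devices('GPU'):
    tf.config.experimental.set_memory_growth(gpu, True)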

That's all I have to share about solving the problem of TensorFlow taking up GPU video memory. I hope it gives you a useful reference, and I hope you'll continue to support me.