By default, TensorFlow pre-allocates the whole memory of the GPU card (which can cause a CUDA_OUT_OF_MEMORY warning).
To change this, it is possible to either:

- change the fraction of memory pre-allocated, using the per_process_gpu_memory_fraction config option: a value between 0 and 1 that indicates what fraction of the available GPU memory to pre-allocate for each process. 1 means pre-allocate all of the GPU memory; 0.5 means the process allocates ~50% of the available GPU memory.
- disable the pre-allocation, using the allow_growth config option: if true, the allocator does not pre-allocate the entire specified GPU memory region, but instead starts small and grows the allocation as usage grows.
For example:
import tensorflow as tf

config = tf.ConfigProto()
# Pre-allocate only ~40% of the available GPU memory for this process.
config.gpu_options.per_process_gpu_memory_fraction = 0.4
sess = tf.Session(config=config)
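Note that the fraction applies per visible GPU: with 0.4, the process is capped at roughly 40% of the memory on each GPU it can see, not 40% of the aggregate memory across devices.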
or
config = tf.ConfigProto()
# Start with a small allocation and grow it as needed.
config.gpu_options.allow_growth = True
sess = tf.Session(config=config)
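Note that tf.ConfigProto and tf.Session are TensorFlow 1.x APIs. On TensorFlow 2.x the same behavior is controlled through tf.config; the following is a minimal sketch, assuming TF 2.4+ and that it runs before any GPU has been initialized:

import tensorflow as tf

# Rough equivalent of allow_growth: allocate GPU memory on demand.
for gpu in tf.config.list_physical_devices('GPU'):
    tf.config.experimental.set_memory_growth(gpu, True)

# Alternatively (instead of memory growth), a rough analog of
# per_process_gpu_memory_fraction: cap the process at a fixed amount of
# memory on the first GPU (4096 MB here is an arbitrary example value).
gpus = tf.config.list_physical_devices('GPU')
if gpus:
    tf.config.set_logical_device_configuration(
        gpus[0],
        [tf.config.LogicalDeviceConfiguration(memory_limit=4096)])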
More information on these config options can be found in the GPUOptions message of TensorFlow's config.proto (tensorflow/core/protobuf/config.proto).