
Can Keras with Tensorflow backend be forced to use CPU or GPU at will?

Posted by: admin April 4, 2018

Questions:

I have Keras installed with the TensorFlow backend and CUDA. I’d like to sometimes force Keras to use the CPU, on demand. Can this be done without, say, installing a separate CPU-only TensorFlow in a virtual environment? If so, how? If the backend were Theano, the flags could be set, but I have not heard of TensorFlow flags accessible via Keras.

Answers:

If you want to force Keras to use the CPU:

Way 1

import os
os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID"   # see issue #152
os.environ["CUDA_VISIBLE_DEVICES"] = ""

before Keras/TensorFlow is imported.

Way 2

Run your script as

$ CUDA_VISIBLE_DEVICES="" ./your_keras_code.py

See also

  1. https://github.com/keras-team/keras/issues/152
  2. https://github.com/fchollet/keras/issues/4613
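As a minimal sketch of Way 1 (the helper name `hide_gpus` is mine, not part of Keras or TensorFlow), you can wrap the environment setup in a function so it always runs before any TensorFlow import:

```python
import os

def hide_gpus():
    """Make all CUDA devices invisible to TensorFlow.
    Must be called before Keras/TensorFlow is imported."""
    os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID"   # see issue #152
    os.environ["CUDA_VISIBLE_DEVICES"] = ""          # empty string = no GPUs

hide_gpus()
print(os.environ["CUDA_VISIBLE_DEVICES"])  # -> "" (no GPUs visible)
```

Setting the variable from inside the process only works if TensorFlow has not yet initialized CUDA, which is why the call has to come before the import.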
Answers:

A rather graceful and separable way of doing this is to use

import tensorflow as tf
from keras import backend as K

num_cores = 4

GPU = True   # set to True to run on the GPU
CPU = False  # set to True to run on the CPU only

if GPU:
    num_GPU = 1
    num_CPU = 1
if CPU:
    num_CPU = 1
    num_GPU = 0

config = tf.ConfigProto(intra_op_parallelism_threads=num_cores,
                        inter_op_parallelism_threads=num_cores,
                        allow_soft_placement=True,
                        device_count={'CPU': num_CPU, 'GPU': num_GPU})
session = tf.Session(config=config)
K.set_session(session)

Here, with the booleans GPU and CPU, you specify whether to run your code on the GPU or the CPU. Notice that the CPU-only case works by declaring that there are 0 GPU devices. As an added bonus, this method also lets you specify how many GPUs and CPUs to use, and via num_cores you can set the number of CPU cores.

All of this is executed in the constructor of my class, before any other operations, and is completely separable from any model, or other code I use.

The only thing to note is that you’ll need tensorflow-gpu and CUDA/cuDNN installed, because you’re always leaving open the option of using a GPU.
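The GPU/CPU branching above can be factored into a small helper. This is a sketch under my own naming (`device_counts` is not a Keras or TensorFlow function); it only builds the dict that gets passed as `device_count` to tf.ConfigProto:

```python
def device_counts(use_gpu, n_gpu=1):
    """Build the device_count mapping for tf.ConfigProto.
    use_gpu=False hides all GPUs by declaring 0 GPU devices."""
    return {'CPU': 1, 'GPU': n_gpu if use_gpu else 0}

print(device_counts(True))   # {'CPU': 1, 'GPU': 1}
print(device_counts(False))  # {'CPU': 1, 'GPU': 0}
```

You would then pass the result when building the config, e.g. `device_count=device_counts(GPU)`, which keeps the CPU/GPU decision in one place.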

Answers:

As per the Keras tutorial, you can simply use the same tf.device scope as in regular TensorFlow:

with tf.device('/gpu:0'):
    x = tf.placeholder(tf.float32, shape=(None, 20, 64))
    y = LSTM(32)(x)  # all ops in the LSTM layer will live on GPU:0

with tf.device('/cpu:0'):
    x = tf.placeholder(tf.float32, shape=(None, 20, 64))
    y = LSTM(32)(x)  # all ops in the LSTM layer will live on CPU:0

Answers:

This worked for me (Windows 10); place it before you import Keras:

import os
os.environ['CUDA_VISIBLE_DEVICES'] = '-1'

Answers:

Just import tensorflow and use Keras; it’s that easy.

import tensorflow as tf

# build your model and prepare X, y and callbacks_list here
with tf.device('/gpu:0'):
    model.fit(X, y, epochs=20, batch_size=128, callbacks=callbacks_list)

Answers:

I just spent some time figuring this out. Thoma’s answer is not complete. Say your program is test.py and you want to use gpu0 to run it while keeping the other GPUs free. You should write:

CUDA_VISIBLE_DEVICES=0 python test.py

Notice it’s DEVICES, not DEVICE.
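To illustrate the semantics (the helper `visible_gpus` is my own, just for demonstration): CUDA_VISIBLE_DEVICES is a comma-separated list of the physical GPU ids to expose, and inside the process those devices are renumbered starting from 0:

```python
def visible_gpus(env_value):
    """Physical GPU ids CUDA will expose, in the order given.
    Inside the process they are renumbered from 0, so the first
    entry always becomes gpu:0. An empty string hides all GPUs."""
    return [int(tok) for tok in env_value.split(",") if tok.strip() != ""]

print(visible_gpus("0"))    # [0]     -> exposed as gpu:0
print(visible_gpus("1,3"))  # [1, 3]  -> exposed as gpu:0 and gpu:1
print(visible_gpus(""))     # []      -> no GPUs visible
```

So with CUDA_VISIBLE_DEVICES=1, physical GPU 1 appears to TensorFlow as /gpu:0.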