# Printing the loss during TensorFlow training

Posted by: admin on April 4, 2018

Questions:

I am looking at the TensorFlow “MNIST For ML Beginners” tutorial, and I want to print out the training loss after every training step.

My training loop currently looks like this:

``````
for i in range(100):
    batch_xs, batch_ys = mnist.train.next_batch(100)
    sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})
``````

Now, `train_step` is defined as:

``````
train_step = tf.train.GradientDescentOptimizer(0.01).minimize(cross_entropy)
``````

Where `cross_entropy` is the loss which I want to print out:

``````
cross_entropy = -tf.reduce_sum(y_ * tf.log(y))
``````

One way to print this would be to explicitly compute `cross_entropy` in the training loop:

``````
for i in range(100):
    batch_xs, batch_ys = mnist.train.next_batch(100)
    cross_entropy = -tf.reduce_sum(y_ * tf.log(y))
    print('loss = ' + str(cross_entropy))
    sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})
``````

I now have two questions regarding this:

1. Given that `cross_entropy` is already computed during `sess.run(train_step, ...)`, it seems inefficient to compute it again, since that requires twice the number of forward passes over the training data. Is there a way to access the value of `cross_entropy` that was computed during `sess.run(train_step, ...)`?

2. How do I even print a `tf.Variable`? Using `str(cross_entropy)` gives me an error…

Thank you!

Answers:

You can fetch the value of `cross_entropy` by adding it to the list of arguments to `sess.run(...)`. For example, your `for`-loop could be rewritten as follows:

``````
for i in range(100):
    batch_xs, batch_ys = mnist.train.next_batch(100)
    # Fetch the loss in the same run call that applies the training step.
    _, loss_val = sess.run([train_step, cross_entropy],
                           feed_dict={x: batch_xs, y_: batch_ys})
    print('loss = %s' % loss_val)
``````
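If printing on every step is too noisy, a common variation is to fetch the loss each iteration but only log it periodically. A minimal sketch of that pattern (the interval of 10 is an arbitrary choice, not part of the original answer):

``````
for i in range(100):
    batch_xs, batch_ys = mnist.train.next_batch(100)
    _, loss_val = sess.run([train_step, cross_entropy],
                           feed_dict={x: batch_xs, y_: batch_ys})
    if i % 10 == 0:  # log every 10th step to keep the output readable
        print('step %d, loss = %s' % (i, loss_val))
``````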

The same approach can be used to print the current value of a variable. Say that, in addition to the value of `cross_entropy`, you want to print the value of a `tf.Variable` called `W`; you could do the following:

``````
for i in range(100):
    batch_xs, batch_ys = mnist.train.next_batch(100)
    # Fetch the loss and the current value of W alongside the training step.
    _, loss_val, W_val = sess.run([train_step, cross_entropy, W],
                                  feed_dict={x: batch_xs, y_: batch_ys})
    print('loss = %s' % loss_val)
    print('W = %s' % W_val)
``````
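Note that the values `sess.run` returns are ordinary NumPy objects: `loss_val` comes back as a NumPy scalar and `W_val` as a NumPy array, so standard Python string formatting, logging, or plotting all work on them directly.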

Answers:

Instead of just running `train_step`, also run the `cross_entropy` node so that its value is returned to you. Remember that:

``````
var_as_a_python_value = sess.run(tensorflow_variable)
``````

will give you what you want, so you can do this:

``````
[_, cross_entropy_py] = sess.run([train_step, cross_entropy],
                                 feed_dict={x: batch_xs, y_: batch_ys})
``````

to both run the training step and pull out the value of the cross entropy as it was computed during that iteration. Note that I turned both the argument to `sess.run` and the return value into lists so that both things happen.
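Putting the pieces together, here is a rough end-to-end sketch of the tutorial's training loop with the loss printed each step. This assumes the TF 1.x API and the model from the "MNIST For ML Beginners" tutorial; treat it as a sketch rather than the tutorial's exact code:

``````
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data

mnist = input_data.read_data_sets('MNIST_data/', one_hot=True)

# The tutorial's model: a single softmax layer.
x = tf.placeholder(tf.float32, [None, 784])
W = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))
y = tf.nn.softmax(tf.matmul(x, W) + b)

y_ = tf.placeholder(tf.float32, [None, 10])
cross_entropy = -tf.reduce_sum(y_ * tf.log(y))
train_step = tf.train.GradientDescentOptimizer(0.01).minimize(cross_entropy)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for i in range(100):
        batch_xs, batch_ys = mnist.train.next_batch(100)
        # One run call both applies the training step and fetches the loss.
        _, loss_val = sess.run([train_step, cross_entropy],
                               feed_dict={x: batch_xs, y_: batch_ys})
        print('step %d, loss = %s' % (i, loss_val))
``````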