python – Error when checking input: expected conv2d_17_input to have 4 dimensions, but got array with shape (28, 28, 1)

Posted by: admin, February 24, 2020

Questions:

I have trained a model for handwritten digit recognition on MNIST.
The input shape of the first Conv2D layer is (28, 28, 1).
After training, I wanted to predict a downloaded image of a digit.
The shape of the image was (1024, 791, 3). Using the following code, I resized the image to (28, 28, 1) and called model.predict():

import cv2
import numpy as np
import tensorflow as tf

resized_image = cv2.resize(image, (28, 28))            # resize to 28x28
#resized_image = tf.image.resize(image, size=(28, 28))
resized_image = resized_image / 225.0                  # scale pixel values
resized_image = resized_image[:, :, :1]                # keep only the first channel -> shape (28, 28, 1)
prediction = model.predict(resized_image)
#prediction = model.predict(resized_image, batch_size=1, verbose=1, steps=1)
print(labels[np.argmax(prediction)])

But I am getting the following error:

Error when checking input: expected conv2d_17_input to have 4 dimensions, but got array with shape (28, 28, 1)

The model is:

model = tf.keras.models.Sequential([
    tf.keras.layers.Conv2D(filters = 32, kernel_size = (3,3), padding = 'Same', activation = 'relu', input_shape = (28,28,1)),
    tf.keras.layers.MaxPool2D(pool_size = (2,2)),
    tf.keras.layers.Conv2D(filters = 64, kernel_size = (3,3), padding = 'Same', activation = 'relu'),
    tf.keras.layers.MaxPool2D(pool_size = (2,2)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation = 'relu'),
    tf.keras.layers.Dense(10, activation = "softmax")
])

I have also tried uncommenting the alternative lines:

resized_image = tf.image.resize(image, size = (28,28))
prediction = model.predict(resized_image, batch_size = 1, verbose = 1, steps = 1)

Yet I received the same error.

Answers:

Your model’s input shape is (28, 28, 1), but Keras also expects a batch dimension, so the input you pass to model.predict() must have shape [batch_size, width, height, channels]. If you have just one image, use batch_size = 1, which in your case means [1, 28, 28, 1].
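
As a minimal sketch, assuming resized_image is the (28, 28, 1) array and model, labels are the objects from the question, you can add the batch axis with np.expand_dims before predicting:

import numpy as np

# add a leading batch axis: (28, 28, 1) -> (1, 28, 28, 1)
batched_image = np.expand_dims(resized_image, axis=0)

prediction = model.predict(batched_image)
print(labels[np.argmax(prediction)])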

Also, make sure your input is a tf.Tensor or a NumPy array (not, say, a plain Python list).
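
If you prefer to pass a tensor explicitly, a short sketch (again assuming the same resized_image and model as above):

import tensorflow as tf

# convert the NumPy array to a float32 tensor, then add the batch axis
input_tensor = tf.convert_to_tensor(resized_image, dtype=tf.float32)
input_tensor = tf.expand_dims(input_tensor, axis=0)   # shape: (1, 28, 28, 1)

prediction = model.predict(input_tensor)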

Hope this helps.