unhashable type: 'numpy.ndarray' error in tensorflow
Solution 1
In my case, the problem was naming the input parameter the same as the placeholder variable. This, of course, replaces your TensorFlow placeholder with the input variable, resulting in a different key for the feed_dict.
A TensorFlow placeholder is hashable, but your input parameter (np.ndarray) isn't. The unhashable error is therefore the result of passing your parameter as the key instead of the placeholder. Some code to visualize what I'm trying to say:
a = tf.placeholder(dtype=tf.float32, shape=[1,2,3])
b = tf.identity(a)
with tf.Session() as sess:
    your_var = np.ones((1, 2, 3))
    a = your_var                    # `a` now names the ndarray, not the placeholder
    sess.run(b, feed_dict={a: a})   # TypeError: unhashable type: 'numpy.ndarray'
Hope this helps anyone stumbling upon this problem in the future!
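The same failure mode can be reproduced without TensorFlow at all. A minimal sketch (the names here are made up for illustration) showing that an ndarray cannot serve as a dictionary key once it shadows the original name:

```python
import numpy as np

placeholder_key = "a"  # stands in for the (hashable) tf placeholder

# Fine: the key is still the hashable object.
feed = {placeholder_key: np.ones((1, 2, 3))}

# Rebinding the same name to the ndarray reproduces the error:
a = np.ones((1, 2, 3))
try:
    bad_feed = {a: a}  # key is now an ndarray
except TypeError as err:
    message = str(err)  # unhashable type: 'numpy.ndarray'
```

Keeping the placeholder name and the input name distinct (e.g. `a` vs. `your_var`) avoids the rebinding entirely.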
Solution 2
Please carefully check the data type you feed "x_train/y_train" against the tensors "x/y_label" you defined with 'tf.placeholder(...)'.
I ran into the same problem. The reason was that x_train in my code was "np.float64", but what I defined with tf.placeholder() was tf.float32. The data types float64 and float32 don't match.
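As a concrete illustration (the variable names are made up), NumPy and pandas produce float64 by default, so an explicit cast keeps the feed aligned with a tf.float32 placeholder:

```python
import numpy as np

x_train = np.random.rand(78, 3)        # NumPy defaults to float64
assert x_train.dtype == np.float64

x_train = x_train.astype(np.float32)   # now matches a tf.float32 placeholder
```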
Solution 3
I think the problem is in defining the dictionary. A dictionary key has to be a hashable type: a number, a string, or a tuple are common. A list or an array doesn't work:
In [256]: {'x':np.array([1,2,3])}
Out[256]: {'x': array([1, 2, 3])}
In [257]: x=np.array([1,2,3])
In [258]: {x:np.array([1,2,3])}
...
TypeError: unhashable type: 'numpy.ndarray'
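If you really do need array contents as a key, convert them to a hashable type first, for example a tuple (a general Python sketch, not specific to TensorFlow):

```python
import numpy as np

x = np.array([1, 2, 3])
d = {tuple(x): "value"}    # tuples of scalars are hashable
looked_up = d[(1, 2, 3)]   # lookup by an equal tuple succeeds
```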
I don't know enough about tensorflow to know what these are:
y_label = tf.placeholder(shape=[None,1], dtype=tf.float32, name='y_label')
x = tf.placeholder(shape=[None,3], dtype=tf.float32, name='x')
The error indicates that they are numpy arrays, not strings. Does x have a name attribute?
Or maybe the dictionary should be specified as:
{'x': x_train, 'y_label': y_train}
madsthaks
Updated on July 05, 2022

Comments
- madsthaks, almost 2 years ago:
data = pd.read_excel("/Users/madhavthaker/Downloads/Reduced_Car_Data.xlsx")

train = np.random.rand(len(data)) < 0.8
data_train = data[train]
data_test = data[~train]

x_train = data_train.ix[:,0:3].values
y_train = data_train.ix[:,-1].values
x_test = data_test.ix[:,0:3].values
y_test = data_test.ix[:,-1].values

y_label = tf.placeholder(shape=[None,1], dtype=tf.float32, name='y_label')
x = tf.placeholder(shape=[None,3], dtype=tf.float32, name='x')

W = tf.Variable(tf.random_normal([3,1]), name='weights')
b = tf.Variable(tf.random_normal([1]), name='bias')

y = tf.matmul(x,W) + b

init = tf.global_variables_initializer()

with tf.Session() as sess:
    sess.run(init)
    summary_op = tf.summary.merge_all()

    # Fit all training data
    for epoch in range(1000):
        sess.run(train, feed_dict={x: x_train, y_label: y_train})

        # Display logs per epoch step
        if (epoch+1) % display_step == 0:
            c = sess.run(loss, feed_dict={x: x_train, y_label: y_train})
            print("Epoch:", '%04d' % (epoch+1), "cost=", "{:.9f}".format(c), \
                  "W=", sess.run(W), "b=", sess.run(b))

    print("Optimization Finished!")
    training_cost = sess.run(loss, feed_dict={x: x_train, y_label: y_train})
    print("Training cost=", training_cost, "W=", sess.run(W), "b=", sess.run(b), '\n')
Here is the error:
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-37-50102cbac823> in <module>()
      6 #Fit all training data
      7 for epoch in range(1000):
----> 8     sess.run(train, feed_dict={x: x_train, y_label: y_train})
      9
     10 # Display logs per epoch step

TypeError: unhashable type: 'numpy.ndarray'
Here are the shapes of both of the numpy arrays that I am inputting:
y_train.shape = (78,)
x_train.shape = (78, 3)
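One shape detail worth noting as an aside (not part of the original question): a placeholder declared as [None, 1] expects a 2-D feed, while y_train above is 1-D. A reshape closes that gap:

```python
import numpy as np

y_train = np.zeros(78)             # shape (78,), 1-D
y_train = y_train.reshape(-1, 1)   # shape (78, 1), matches a [None, 1] placeholder
```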
I have no idea what is causing this. All of my shapes match up and I shouldn't have any issues. Let me know if you need any more information.
Edit: From my comment on one of the answers below, it seems as though I had to specify a specific size for my placeholders; None was not satisfactory. When I changed that and re-ran my code, everything worked fine. Still not quite sure why that is.