How to convert a PyTorch autograd.Variable to NumPy?
Solution 1
There are two possible cases:

Using GPU: If you try to convert a CUDA float tensor directly to NumPy as shown below, it will throw an error.
x.data.numpy()
RuntimeError: numpy conversion for FloatTensor is not supported
So you can't convert a CUDA float tensor directly to NumPy; instead, you have to convert it to a CPU float tensor first, and then convert that to NumPy, as shown below.
x.data.cpu().numpy()
Using CPU: Converting a CPU tensor is straightforward.
x.data.numpy()
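Both cases can be sketched as follows (assuming PyTorch and NumPy are installed; on modern PyTorch, `Variable` simply returns a plain tensor):

```python
import numpy as np
import torch
from torch.autograd import Variable

x = Variable(torch.rand(2, 3))   # CPU tensor wrapped in a Variable
arr = x.data.numpy()             # CPU case: direct conversion works
assert isinstance(arr, np.ndarray)

if torch.cuda.is_available():
    y = x.cuda()
    # y.data.numpy() would raise an error here;
    # move the tensor to the CPU first, then convert:
    arr_gpu = y.data.cpu().numpy()
```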
Solution 2
I have found the way. Actually, I can first extract the Tensor data from the autograd.Variable by using a.data. Then the rest is really simple: I just use a.data.numpy() to get the equivalent NumPy array. Here are the steps:
a = a.data # a is now torch.Tensor
a = a.numpy() # a is now numpy array
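As a side note, on PyTorch 0.4 and later, Variable has been merged into Tensor, and the usual way to drop out of the autograd graph before converting is detach() rather than .data — a minimal sketch:

```python
import torch

# A tensor that tracks gradients cannot be converted directly;
# detach() returns a view that is cut off from the autograd graph.
a = torch.ones(3, requires_grad=True)
b = (a * 2).detach().numpy()
```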
Bishwajit Purkaystha
Updated on July 09, 2022

Comments
-
Bishwajit Purkaystha almost 2 years
The title says it all. I want to convert a PyTorch autograd.Variable to its equivalent numpy array. In their official documentation they advocated using a.numpy() to get the equivalent numpy array (for a PyTorch tensor). But this gives me the following error:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/bishwajit/anaconda3/lib/python3.6/site-packages/torch/autograd/variable.py", line 63, in __getattr__
    raise AttributeError(name)
AttributeError: numpy

Is there any way I can circumvent this?
-
Bishwajit Purkaystha almost 7 years
Yes, I did that. I answered my own question, but it has disappeared somehow.
-
blitu12345 almost 7 years
I thought to point this out in particular, as many do face this issue. Cheers!
-
Alex Walczak about 6 years
How does this compare to x.cpu().data.numpy() and np.asarray(torch_tensor)? Can't find any documentation that mentions this.
Vaishnavi about 5 years
Thanks a lot! I was getting this error when I tried to plot a tensor using plt.imshow(). This worked... Thank you
-
drevicko over 4 years
This only works if a is a CPU tensor; if it's on the GPU, it'll fail (see @blitu12345's answer).
-
drevicko over 4 years
If you want to keep training a tensor on the GPU, you'd have to duplicate the numpy array and then run x.data.cuda() after this, no? (Or would the numpy array remain, perhaps?)
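Regarding that last question: no manual duplication should be needed, since .cpu() on a CUDA tensor already produces a separate host copy, leaving the original GPU tensor intact. A small sketch (the GPU branch assumes a CUDA build of PyTorch):

```python
import torch

x = torch.zeros(3)
arr = x.data.numpy()   # on CPU, the NumPy array shares memory with the tensor
x += 1                 # in-place change is visible through arr as well

if torch.cuda.is_available():
    g = torch.zeros(3, device="cuda")
    host = g.cpu().numpy()  # .cpu() makes a host copy; g stays on the GPU
```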