Convert Pytorch Tensor to Numpy Array using Cuda


Solution 1

I believe you also have to use .detach(). I had to convert my tensor to a NumPy array on Colab, which uses CUDA and a GPU. I did it like the following:

embedding = learn.model.u_weight

embedding_list = list(range(0, 64382))

input = torch.cuda.LongTensor(embedding_list)
tensor_array = embedding(input)
# the output of the line below is a numpy array
tensor_array.cpu().detach().numpy()

Solution 2

If the tensor is on the GPU (a CUDA device), as you say:

You can use self.tensor.weight.data.cpu().numpy(). It will copy the tensor to the CPU and convert it to a NumPy array.

If the tensor is already on the CPU, you can do self.tensor.weight.data.numpy(), as you correctly figured out. You can also do self.tensor.weight.data.cpu().numpy() in this case: since the tensor is already on the CPU, the .cpu() operation has no effect, so this can be used as a device-agnostic way to convert the tensor to a NumPy array.
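A minimal sketch of that device-agnostic pattern, using a plain tensor rather than the self.tensor.weight attribute from the question (the helper name to_numpy is my own, not from the original post):

```python
import torch

def to_numpy(t: torch.Tensor):
    # .detach() is only required when the tensor tracks gradients,
    # but it is harmless otherwise; .cpu() is a no-op if the tensor
    # is already on the CPU, so this works on any device.
    return t.detach().cpu().numpy()

cpu_tensor = torch.arange(4, dtype=torch.float32)
arr = to_numpy(cpu_tensor)  # same call would work for a CUDA tensor
```

The same helper can be called on a CUDA tensor without any changes, which is the point of routing everything through .cpu().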

Author: Noa Yehezkel

Updated on February 08, 2020

Comments

  • Noa Yehezkel, about 4 years

    I would like to convert a PyTorch tensor to a NumPy array using CUDA.

    This is the line I use when not using CUDA:

    A = self.tensor.weight.data.numpy()

    How can I do the same operation using CUDA? According to https://discuss.pytorch.org/t/how-to-transform-variable-into-numpy/104/3, it seems to be:

    A = self.tensor.weight.data.cpu().numpy()

  • ZaydH, over 4 years
    You only need to call detach if the tensor has associated gradients. When detach is needed, you want to call detach before cpu. Otherwise, PyTorch will create the gradients associated with the tensor on the CPU and then immediately destroy them when numpy is called; calling detach first eliminates that superfluous step. For more information see: discuss.pytorch.org/t/…
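A short sketch of the ordering the comment above describes: both call orders produce the same array, but detaching first avoids briefly carrying autograd state through the CPU copy (the tensor here is a stand-in, not the weight from the question):

```python
import torch

# A gradient-tracking tensor, standing in for a model weight.
w = torch.randn(2, 2, requires_grad=True)

# Preferred order: drop the autograd graph first, then move to CPU.
a = w.detach().cpu().numpy()

# Also works, but the intermediate CPU copy still participates in
# autograd until detach is called, which is wasted bookkeeping.
b = w.cpu().detach().numpy()
```

Either way the resulting NumPy arrays hold identical values; the difference is only the transient autograd overhead.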
    You only need to call detach if the Tensor has associated gradients. When detach is needed, you want to call detach before cpu. Otherwise, PyTorch will create the gradients associated with the Tensor on the CPU then immediately destroy them when numpy is called. Calling detach first eliminates that superfluous step. For more information see: discuss.pytorch.org/t/…