Get total amount of free and available GPU memory using PyTorch


Solution 1

PyTorch can provide you with the total, reserved, and allocated memory info:

import torch

t = torch.cuda.get_device_properties(0).total_memory  # total memory on device 0
r = torch.cuda.memory_reserved(0)   # memory reserved by PyTorch's caching allocator
a = torch.cuda.memory_allocated(0)  # memory currently occupied by tensors
f = r - a  # free inside reserved
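Newer PyTorch builds (roughly 1.11 and up; check your version) also expose the driver's own free/total counters via torch.cuda.mem_get_info. A minimal sketch, guarded so it degrades gracefully on older versions:

import torch

# Device-wide free/total memory in bytes, straight from cudaMemGetInfo
# (hasattr check because older PyTorch versions lack this function)
if hasattr(torch.cuda, 'mem_get_info'):
    free_b, total_b = torch.cuda.mem_get_info(0)
    print(f'free: {free_b / 1024**2:.0f} MiB, total: {total_b / 1024**2:.0f} MiB')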

Python bindings to NVML (NVIDIA's management library) can give you the info for the whole GPU (0 here means the first GPU device):

from pynvml import *
nvmlInit()
h = nvmlDeviceGetHandleByIndex(0)
info = nvmlDeviceGetMemoryInfo(h)
print(f'total    : {info.total}')
print(f'free     : {info.free}')
print(f'used     : {info.used}')

pip install pynvml
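If you have several GPUs, here is a sketch of looping over all of them with the same bindings (nvmlDeviceGetCount and nvmlShutdown are part of the pynvml API; the MiB conversion is just for readability):

from pynvml import *

nvmlInit()
# Report free/total memory for every GPU on the machine, in MiB
for i in range(nvmlDeviceGetCount()):
    h = nvmlDeviceGetHandleByIndex(i)
    info = nvmlDeviceGetMemoryInfo(h)
    print(f'GPU {i}: {info.free / 1024**2:.0f} MiB free of {info.total / 1024**2:.0f} MiB')
nvmlShutdown()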

You can also check memory info with nvidia-smi. nvtop is another option, but it needs to be built from source (at the time of writing). Yet another tool that shows memory is gpustat (pip3 install gpustat).
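If you'd rather shell out to nvidia-smi from Python, its query flags emit plain CSV that is easy to parse. A minimal sketch, assuming nvidia-smi is on the PATH:

import subprocess

# Ask nvidia-smi for per-GPU free/total memory in MiB as bare CSV values
out = subprocess.check_output(
    ['nvidia-smi', '--query-gpu=memory.free,memory.total',
     '--format=csv,noheader,nounits'],
    encoding='utf-8')
for i, line in enumerate(out.strip().splitlines()):
    free_mib, total_mib = (int(x) for x in line.split(','))
    print(f'GPU {i}: {free_mib} MiB free of {total_mib} MiB')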

If you would like to use C++ CUDA:

#include <iostream>
#include "cuda.h"
#include "cuda_runtime_api.h"

using namespace std;

int main( void ) {
    int num_gpus;
    size_t free, total;
    cudaGetDeviceCount( &num_gpus );
    for ( int gpu_id = 0; gpu_id < num_gpus; gpu_id++ ) {
        cudaSetDevice( gpu_id );          // switch context to this GPU
        int id;
        cudaGetDevice( &id );
        cudaMemGetInfo( &free, &total );  // free/total device memory in bytes
        cout << "GPU " << id << " memory: free=" << free << ", total=" << total << endl;
    }
    return 0;
}
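To try it, save the code as, say, gpu_mem.cu (the filename is arbitrary) and build it with nvcc:

nvcc gpu_mem.cu -o gpu_mem
./gpu_mem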

Solution 2

This is useful for me!

import pynvml

def get_memory_free_MiB(gpu_index):
    pynvml.nvmlInit()
    handle = pynvml.nvmlDeviceGetHandleByIndex(int(gpu_index))
    mem_info = pynvml.nvmlDeviceGetMemoryInfo(handle)
    return mem_info.free // 1024 ** 2  # bytes -> MiB
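For example, to print the free memory on the first GPU:

print(get_memory_free_MiB(0))  # free MiB on device 0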

Comments

  • Hari Prasad (over 2 years ago)

    I'm using Google Colab's free GPUs for experimentation and wanted to know how much GPU memory is available to play around with. torch.cuda.memory_allocated() returns the GPU memory currently occupied, but how do we determine the total available memory using PyTorch?