Can OpenMP be used for GPUs?


Solution 1

Yes. The OpenMP 4 target constructs were designed to support a wide range of accelerators. Compiler support for NVIDIA GPUs is available from GCC 7+ (see 1 and 2, although the latter has not been updated to reflect OpenMP 4 GPU support), Clang (see 3,4,5), and Cray. Compiler support for Intel GPUs is available in the Intel C/C++ compiler (see e.g. 6).

The IBM-developed Clang/LLVM implementation of OpenMP 4+ for NVIDIA GPUs is available from https://github.com/clang-ykt. The build recipe is provided in "OpenMP compiler for CORAL/OpenPower Heterogeneous Systems".

The Cray compiler supports OpenMP target for NVIDIA GPUs. From the Cray Fortran Reference Manual (8.5):

The OpenMP 4.5 target directives are supported for targeting NVIDIA GPUs or the current CPU target. An appropriate accelerator target module must be loaded to use target directives.

The Intel compiler supports OpenMP target for Intel Gen graphics for C/C++ but not Fortran. Furthermore, the teams and distribute constructs are not supported, because they are not necessary/appropriate for that target. Below is a simple example showing how the OpenMP target features work in different environments.

void vadd2(int n, float * a, float * b, float * c)
{
    #pragma omp target map(to:n,a[0:n],b[0:n]) map(from:c[0:n])
#if defined(__INTEL_COMPILER) && defined(__INTEL_OFFLOAD)
    #pragma omp parallel for simd
#else
    #pragma omp teams distribute parallel for simd
#endif
    for(int i = 0; i < n; i++)
        c[i] = a[i] + b[i];
}

The compiler options for Intel and GCC are as follows. I don't have GCC set up for NVIDIA GPUs, but you can see the documentation for the appropriate -foffload options.

$ icc -std=c99 -qopenmp -qopenmp-offload=gfx -c vadd2.c && echo "SUCCESS" || echo "FAIL"
SUCCESS
$ gcc-7 -fopenmp -c vadd2.c && echo "SUCCESS" || echo "FAIL"
SUCCESS
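
For GCC targeting NVIDIA GPUs, the invocation is usually along these lines (untested here, since I don't have an offloading-enabled GCC build; nvptx-none is the usual offload target name, but check the -foffload documentation for your installation):

$ gcc-7 -fopenmp -foffload=nvptx-none -c vadd2.c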

Solution 2

  1. The OpenMP 4.0 standard includes support for accelerators (GPU, DSP, Xeon Phi, and so on), but I am not aware of any existing implementation of the OpenMP 4.0 standard for GPUs, only early experimental work.

  2. OpenACC is indeed similar to OpenMP and easy to use; see the sketch just below this list. Good OpenACC tutorials: part 1 and part 2.
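
To give a feel for the OpenACC style, here is a minimal, untested C sketch of the same vector add used in Solution 1 (the function name vadd_acc is just illustrative):

void vadd_acc(int n, float * a, float * b, float * c)
{
    /* Copy a and b to the device, copy c back, and run the loop as a parallel kernel. */
    #pragma acc parallel loop copyin(a[0:n], b[0:n]) copyout(c[0:n])
    for(int i = 0; i < n; i++)
        c[i] = a[i] + b[i];
}

With the PGI compiler this would be built with something like pgcc -acc vadd_acc.c.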

Unfortunately, I think there is no portable solution for both CPU and GPU, at least for now (except for OpenCL, but it is too low-level compared to OpenMP and OpenACC).

If you need a portable solution, you could consider using an Intel Xeon Phi accelerator instead of a GPU. The Intel Fortran (and C/C++) compiler includes OpenMP support for both the CPU and the Xeon Phi.

In addition, to create a really portable solution, it is not enough to pick a suitable parallel technology. You have to modify your program so that it exposes a sufficient level of parallelism. See "Structured Parallel Programming" or similar books for examples of possible approaches.

Solution 3

To add to what was said about support on other platforms above: IBM is contributing to two OpenMP 4.5 compilers. One is the open-source Clang/LLVM compiler; the other is IBM's XL compiler. Both compilers share the same helper OpenMP offloading library, but differ in code generation and optimization for the GPU. For Fortran, the XL Fortran compiler supports a large subset of OpenMP 4.5 offloading to NVIDIA GPUs, starting in version 15.1.5 (and version 13.1.5 for XL C/C++). More features are being added this year and next year, with the aim of complete support in 2018. If you're on POWER, you can join the XL compiler beta program to get access to our latest OpenMP offloading features in Fortran and C/C++.
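
As a rough sketch (the exact invocations depend on the XL version, so treat these as assumptions and check the documentation; vadd2.f90 is a hypothetical Fortran version of the example above), GPU offloading in the XL compilers is enabled with -qoffload on top of -qsmp=omp:

$ xlc_r -qsmp=omp -qoffload -c vadd2.c
$ xlf_r -qsmp=omp -qoffload -c vadd2.f90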

Solution 4

The previous answer covers most of it, but since you spoke about giving the GPU some work as well, you might want to take a look at frameworks for heterogeneous computing (CPU + GPU simultaneously), such as StarPU.

StarPU is only for C/C++; for Fortran, there is ForOpenCL.

In any case, you'll have to weigh the trade-off between performance and convenience.

Comments

  • André Almeida almost 2 years

    I've been searching the web but I'm still very confused about this topic. Can anyone explain this more clearly? I come from an Aerospace Engineering background (not from a Computer Science one), so when I read online about OpenMP/CUDA/etc. and multithreading I don't really understand a great deal of what is being said.

    I'm currently trying to parallelize in-house CFD software written in FORTRAN. These are my questions:

    1. OpenMP shares the workload using multiple threads from the CPU. Can it be used to allow the GPU to get some of the work too?

    2. I've read about OpenACC. Is it similar to OpenMP (easy to use)?

    I've also read about CUDA and kernels, but I don't have much experience in parallel programming and I don't have the faintest idea of what a kernel is.

    3. Is there an easy and portable way to share my workload with the GPU from FORTRAN (if OpenMP doesn't do that and OpenACC is not portable)?

    Can you give me a "for dummies" type of answer?