Remote offscreen rendering


It's been a while since I asked this question, so I thought I'd mention the solution we ultimately used.

Hijacking the local X screen

In the end, I just ran the remote OpenGL programs on the server's local X screen. The machine was running Ubuntu server edition and wasn't running an X server by default, so I had to set one up to run at startup (I just installed Ubuntu's ubuntu-desktop package, killing a mosquito with a sledgehammer), and then gave myself access to the X screen by running these commands as root: "export DISPLAY=:0.0; xhost + local:". Then I could ssh into the machine, call "export DISPLAY=:0.0", and run my OpenGL programs as normal. Anyone sitting at the remote machine would see a window pop up and watch my program running, but we don't have a monitor connected, so this wasn't a problem.
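For reference, here's the whole sequence as a minimal sketch; the program name is just a placeholder:

    # On the server, as root, once the X server is up (once per boot):
    # grant local clients access to the X screen.
    export DISPLAY=:0.0
    xhost + local:

    # Later, from any ssh session on the same machine: point OpenGL at the
    # local screen and run the program as usual.
    export DISPLAY=:0.0
    ./my_opengl_program    # placeholder for your application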

It's important to use some form of offscreen rendering, because reading pixels directly from the onscreen color buffer can return garbage data if the window is obscured by another window. Since you can't see the X screen, it's difficult to know whether this has happened. Offscreen rendering (e.g. framebuffer objects (FBOs) or pbuffers) doesn't have this problem.
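If you go the FBO route, it's worth confirming that the driver on whatever display you end up using actually advertises framebuffer objects. A quick check, assuming glxinfo (from Ubuntu's mesa-utils package) is installed:

    # List any framebuffer-object extensions exposed on the current $DISPLAY;
    # no output means FBO-based offscreen rendering isn't available there.
    glxinfo | grep framebuffer_object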

Hijacking the server's local X screen isn't an ideal solution, so here are a few alternatives I found along the way:

Virtual Framebuffers

Xvfb is an option, but it didn't work for me, because OpenGL wasn't benefiting from hardware acceleration, and framebuffer objects, which are necessary for CUDA interoperability with OpenGL, weren't supported. Nevertheless, this might be a workable option where hijacking the local screen isn't acceptable, or where users can't get xhost privileges.
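For anyone who wants to try it anyway, roughly the kind of invocation involved (the display number and geometry here are arbitrary):

    # Start a virtual framebuffer on display :1 and see what it exposes.
    Xvfb :1 -screen 0 1280x1024x24 &
    export DISPLAY=:1
    glxinfo | grep "OpenGL renderer"    # a software renderer in my case
    glxinfo | grep framebuffer_object   # nothing, so no FBOs for CUDA interop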

VirtualGL

From the VirtualGL website:

VirtualGL is an open source package which gives any Unix or Linux remote display software the ability to run OpenGL applications with full 3D hardware acceleration.

This is exactly what I want, and it looks very promising, but I didn't have the time to deal with a new library dependency, so I haven't tested it. My guess is that this is the ideal solution once I can get it compiled, installed, and configured. This is what VirtualBox and some VNC servers use to support hardware-accelerated 3D.
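For completeness, my understanding of how it would be used once installed (untested; vglserver_config and vglrun ship with VirtualGL):

    # One-time setup, as root, to let VirtualGL access the server's 3D X server:
    vglserver_config

    # OpenGL applications are then launched through the vglrun wrapper, which
    # redirects their rendering to the server's GPU:
    vglrun ./my_opengl_program    # placeholder for your application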

Comments

  • redmoskito (almost 2 years ago)

    My research lab recently added a server that has a beefy NVIDIA graphics card, which we would like to use to do scientific computations. Since it isn't a workstation, we'll have to run our jobs remotely, over an ssh connection. Most of our applications require doing OpenGL rendering to an offscreen buffer, then doing image analysis on the result in CUDA.

    My initial investigation suggests that X11 forwarding is a bad idea, because OpenGL rendering will occur on the client machine (or rather the X11 server--what a confusing naming convention!) and will suffer network bottlenecks when sending our massive textures. We will never need to display the output, so it seems like X11 forwarding shouldn't be necessary, but OpenGL needs $DISPLAY to be set to something valid or our applications won't run. I'm sure render farms exist that do this, but how is it accomplished? I think this is probably a simple X11 configuration issue, but I'm too unfamiliar with it to know where to start.

    We're running Ubuntu server 10.04, with no gdm, gnome, etc. installed. However, the xserver-xorg package is installed.

    • Hubert Kario (over 13 years ago)
      I would try x11vnc, but it's rather unlikely to work.
  • redmoskito (over 13 years ago)
    Thanks for the tip! I've started looking into Xvfb, and it looks like rendering doesn't use the graphics hardware, but instead renders into virtual memory. Can anyone confirm/deny this? If that's the case, I think this solution won't be good, because we're looking to take advantage of the power of our graphics card.