How to render offscreen with OpenGL?

Solution 1

It all starts with glReadPixels, which you will use to transfer the pixels stored in a specific buffer on the GPU to main memory (RAM). As you will notice in the documentation, there is no argument to choose which buffer to read from. As is usual with OpenGL, the current buffer to read from is a piece of state, which you can set with glReadBuffer.

So a very basic offscreen rendering method would be something like the following. I use C++-style pseudo-code, so it may contain errors, but it should make the general flow clear:

//Before swapping buffers (needs <vector> and <cstdint>)
std::vector<std::uint8_t> data(width*height*4);
glReadBuffer(GL_BACK);
glReadPixels(0,0,width,height,GL_BGRA,GL_UNSIGNED_BYTE,data.data());

This reads the current back buffer (usually the buffer you're drawing to), so call it before swapping the buffers. Note that you can also read the back buffer, clear it, and draw something totally different before swapping. Technically you can read the front buffer as well, but this is often discouraged because implementations are theoretically allowed to make optimizations that leave your front buffer containing rubbish.

There are a few drawbacks with this. First of all, we don't really do offscreen rendering, do we? We render to the screen buffers and read from those. We can emulate offscreen rendering by never swapping the back buffer, but it doesn't feel right. On top of that, the front and back buffers are optimized for displaying pixels, not for reading them back. That's where framebuffer objects come into play.

Essentially, an FBO lets you create a non-default framebuffer (alongside the default FRONT and BACK buffers) that allows you to draw to a memory buffer instead of the screen buffers. In practice, you can draw either to a texture or to a renderbuffer. The former is optimal when you want to re-use the pixels in OpenGL itself as a texture (e.g. a naive "security camera" in a game), the latter when you just want to render and read back. With this, the code above becomes something like the following. Again pseudo-code, so don't kill me if I mistyped something or forgot some statements.

//Somewhere at initialization
GLuint fbo, render_buf;
glGenFramebuffers(1,&fbo);
glGenRenderbuffers(1,&render_buf);
glBindRenderbuffer(GL_RENDERBUFFER, render_buf);
glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA8, width, height);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fbo);
glFramebufferRenderbuffer(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, render_buf);

//At deinit:
glDeleteFramebuffers(1,&fbo);
glDeleteRenderbuffers(1,&render_buf);

//Before drawing
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fbo);
//After drawing
glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo); //bind the FBO for reading as well
std::vector<std::uint8_t> data(width*height*4);
glReadBuffer(GL_COLOR_ATTACHMENT0);
glReadPixels(0,0,width,height,GL_BGRA,GL_UNSIGNED_BYTE,data.data());
// Return to onscreen rendering (resets both read and draw bindings):
glBindFramebuffer(GL_FRAMEBUFFER, 0);

This is a simple example; in reality you likely also want storage for the depth (and stencil) buffer. You also might want to render to a texture, and a sketch of that follows below. In any case, you will now perform real offscreen rendering, and it might work faster than reading the back buffer.
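Here is what the render-to-texture variant could look like, with a depth renderbuffer added as well. This is again from-memory pseudo-code (the names tex and depth_buf are mine), so verify the calls against the docs:

//Somewhere at initialization
GLuint fbo, tex, depth_buf;
glGenFramebuffers(1,&fbo);
//Color attachment as a texture, so the result can be sampled later
glGenTextures(1,&tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0, GL_BGRA, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
//Depth attachment as a renderbuffer, since we never sample the depth values
glGenRenderbuffers(1,&depth_buf);
glBindRenderbuffer(GL_RENDERBUFFER, depth_buf);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, width, height);
//Attach both to the FBO
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, tex, 0);
glFramebufferRenderbuffer(GL_DRAW_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, depth_buf);

Drawing with this FBO bound then renders into tex, which you can afterwards bind like any other texture.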

Finally, you can use pixel buffer objects (PBOs) to make the read-back asynchronous. The problem is that glReadPixels blocks until the pixel data is completely transferred, which may stall your CPU. With PBOs the implementation may return immediately, since it controls the buffer anyway. The pipeline only blocks when you map the buffer. However, PBOs may be optimized to buffer the data solely in RAM, so this block could take a lot less time. The read-back code would become something like this:

//Init:
GLuint pbo;
glGenBuffers(1,&pbo);
glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
glBufferData(GL_PIXEL_PACK_BUFFER, width*height*4, NULL, GL_DYNAMIC_READ);

//Deinit:
glDeleteBuffers(1,&pbo);

//Reading:
glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
glReadPixels(0,0,width,height,GL_BGRA,GL_UNSIGNED_BYTE,0); // 0 instead of a pointer, it is now an offset in the buffer.
//DO SOME OTHER STUFF (otherwise this is a waste of your time)
glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo); //Might not be necessary if it stayed bound
void* pixel_data = glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_ONLY);
//... use pixel_data, then release the mapping:
glUnmapBuffer(GL_PIXEL_PACK_BUFFER);
glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);

The part in caps is essential. If you just issue a glReadPixels into a PBO, followed by a glMapBuffer of that PBO, you gain nothing but a lot of code. Sure, the glReadPixels might return immediately, but now the glMapBuffer will stall because it has to safely map the data from the read buffer into the PBO and into a block of main memory.
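One common way to actually get useful work in there is to alternate between two PBOs: each frame you start a read into one PBO and map the other one, which was filled during the previous frame. A rough sketch of that idea (the two-PBO scheme and the one frame of latency are my own choices here):

//Init: two PBOs instead of one
GLuint pbos[2];
glGenBuffers(2,pbos);
for(int i = 0; i < 2; ++i) {
    glBindBuffer(GL_PIXEL_PACK_BUFFER, pbos[i]);
    glBufferData(GL_PIXEL_PACK_BUFFER, width*height*4, NULL, GL_DYNAMIC_READ);
}
int frame = 0;

//Each frame, after drawing: start an async read into one PBO
glBindBuffer(GL_PIXEL_PACK_BUFFER, pbos[frame % 2]);
glReadPixels(0,0,width,height,GL_BGRA,GL_UNSIGNED_BYTE,0);
//Map the other PBO, filled one frame ago, so its copy has had time to finish
glBindBuffer(GL_PIXEL_PACK_BUFFER, pbos[(frame + 1) % 2]);
void* pixel_data = glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_ONLY);
if(pixel_data) {
    //... use the previous frame's pixels ...
    glUnmapBuffer(GL_PIXEL_PACK_BUFFER);
}
glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
++frame;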

Please also note that I use GL_BGRA everywhere; this is because many graphics cards internally use it as the optimal rendering format (or the GL_BGR version without alpha), so it should be the fastest format for pixel transfers like this. I'll try to find the NVIDIA article I read about this a few months back.

When using OpenGL ES 2.0, GL_DRAW_FRAMEBUFFER might not be available; just use GL_FRAMEBUFFER in that case.
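For instance, a rough ES 2.0 version of the read-back would be (note that glReadBuffer is also absent there, and GL_RGBA with GL_UNSIGNED_BYTE is the one read-back combination the spec guarantees):

//OpenGL ES 2.0 variant: a single bind target and no glReadBuffer
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
//... draw ...
glReadPixels(0,0,width,height,GL_RGBA,GL_UNSIGNED_BYTE,data.data());
glBindFramebuffer(GL_FRAMEBUFFER, 0); //back to the default framebuffer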

Solution 2

I'll assume that creating a dummy window (you don't render to it; it's just there because the API requires you to make one) in which you create your main context is an acceptable implementation strategy.

Here are your options:

Pixel buffers

A pixel buffer, or pbuffer (which is not a pixel buffer object), is first and foremost an OpenGL context. Basically, you create a window as normal, then pick a pixel format from wglChoosePixelFormatARB (pbuffer formats must be obtained from there). Then you call wglCreatePbufferARB, giving it your window's HDC and the pixel buffer format you want to use. Oh, and a width/height; you can query the implementation's maximum width/height.

The default framebuffer of a pbuffer is not visible on the screen, and the maximum width/height is whatever the hardware wants to let you use. So you can render to it and use glReadPixels to read back from it.

You'll need to share your context with the pbuffer context if you have created objects in the window context. Otherwise, you can use the pbuffer context entirely separately. Just don't destroy the window context.
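To make that concrete, here is a rough sketch of the WGL side. I'm writing the extension entry points from memory, leaving out all error handling, and assuming they were already loaded via wglGetProcAddress from the dummy window's context; names like window_dc are placeholders. Treat it as an outline of the WGL_ARB_pbuffer / WGL_ARB_pixel_format flow, not copy-paste code:

//Pick a pbuffer-capable pixel format via the ARB extension
int attribs[] = {
    WGL_DRAW_TO_PBUFFER_ARB, GL_TRUE,
    WGL_SUPPORT_OPENGL_ARB,  GL_TRUE,
    WGL_PIXEL_TYPE_ARB,      WGL_TYPE_RGBA_ARB,
    WGL_COLOR_BITS_ARB,      32,
    WGL_DEPTH_BITS_ARB,      24,
    0 //terminator
};
int format; UINT num_formats;
wglChoosePixelFormatARB(window_dc, attribs, NULL, 1, &format, &num_formats);

//Create the pbuffer and a context for it
HPBUFFERARB pbuffer = wglCreatePbufferARB(window_dc, format, width, height, NULL);
HDC pbuffer_dc = wglGetPbufferDCARB(pbuffer);
HGLRC pbuffer_ctx = wglCreateContext(pbuffer_dc);
wglMakeCurrent(pbuffer_dc, pbuffer_ctx); //now render and glReadPixels as usual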

The advantage here is greater implementation support (though most drivers that don't support the alternatives are also old drivers for hardware that's no longer being supported. Or is Intel hardware).

The downsides are these. Pbuffers don't work with core OpenGL contexts. They may work with compatibility contexts, but there is no way to give wglCreatePbufferARB information about OpenGL versions and profiles.

Framebuffer Objects

Framebuffer Objects are more "proper" offscreen render targets than pbuffers. FBOs live within a context, while pbuffers are about creating new contexts.

FBOs are just a container for images that you render to. The maximum dimensions that the implementation allows can be queried; you can assume them to be GL_MAX_VIEWPORT_DIMS (make sure an FBO is bound before checking this, as the value changes based on whether an FBO is bound).

Since you're not sampling these as textures (you're just reading values back), you should use renderbuffers instead of textures. Their maximum size may be larger than that of textures.

The upside is ease of use. Rather than having to deal with pixel formats and such, you just pick an appropriate image format for your glRenderbufferStorage call.

The only real downside is the narrower band of hardware that supports them. In general, anything that AMD or NVIDIA makes that they still support (right now, GeForce 6xxx or better [note the number of x's], and any Radeon HD card) will have access to ARB_framebuffer_object or OpenGL 3.0+ (where it's a core feature). Older drivers may only have EXT_framebuffer_object support (which has a few differences). Intel hardware is potluck; even if they claim 3.x or 4.x support, it may still fail due to driver bugs.
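Whichever you pick, it's worth querying the limits and checking framebuffer completeness before trusting a large offscreen target, since a size within the limits can still fail for other reasons (out of memory, for instance). A quick sketch of those checks:

//Query the relevant limits first
GLint max_rb_size = 0, max_viewport[2] = {0, 0};
glGetIntegerv(GL_MAX_RENDERBUFFER_SIZE, &max_rb_size);
glGetIntegerv(GL_MAX_VIEWPORT_DIMS, max_viewport);

//... create and attach the renderbuffer as in Solution 1 ...

//Then verify the FBO is actually usable at this size
GLenum status = glCheckFramebufferStatus(GL_DRAW_FRAMEBUFFER);
if(status != GL_FRAMEBUFFER_COMPLETE) {
    //fall back to a smaller size or a cheaper format (e.g. GL_R8 instead of GL_RGBA8)
}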

Solution 3

If you need to render something that exceeds the maximum FBO size of your GL implementation, libtr works pretty well:

The TR (Tile Rendering) library is an OpenGL utility library for doing tiled rendering. Tiled rendering is a technique for generating large images in pieces (tiles).

TR is memory efficient; arbitrarily large image files may be generated without allocating a full-sized image buffer in main memory.
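From memory, using it looks roughly like the following; draw_scene is a placeholder for your own rendering code, so check the library's header for the exact API:

//A full-size buffer is the simple route; for truly huge images TR can
//also hand back one tile at a time (trTileBuffer) if I remember right.
std::vector<GLubyte> final_image(10000 * 10000 * 3);

TRcontext *tr = trNew();
trTileSize(tr, 256, 256, 0);               //tile width, height, border
trImageSize(tr, 10000, 10000);             //final image dimensions
trImageBuffer(tr, GL_RGB, GL_UNSIGNED_BYTE, final_image.data());
trPerspective(tr, 45.0, 1.0, 0.1, 100.0);  //takes the place of gluPerspective

do {
    trBeginTile(tr);      //sets the viewport and projection for this tile
    draw_scene();         //your normal rendering code, unchanged
} while (trEndTile(tr));  //reads the tile back; returns 0 when all tiles are done
trDelete(tr);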

Solution 4

The easiest way is to use something called framebuffer objects (FBOs). You will still have to create a window to get an OpenGL context, though (but this window can be hidden).

Solution 5

The easiest way to fulfill your goal is to use an FBO for off-screen rendering. You don't need to render to a texture and then grab the teximage; just render to a renderbuffer and use glReadPixels. This link will be useful: see Framebuffer Object Examples.


Comments

  • Rookie
    Rookie almost 2 years

    My aim is to render an OpenGL scene without a window, directly into a file. The scene may be larger than my screen resolution.

    How can I do this?

    I want to be able to set the render area to any size, for example 10000x10000, if possible?

  • keltar
    keltar almost 12 years
    Either that or pbuffers. Still, the render target size is limited by MAX_TEXTURE_SIZE (in the FBO case), so splitting into several regions may be required for some very large outputs.
  • Rookie
    Rookie almost 12 years
    What's the difference to a PBO? Which one is faster?
  • Rookie
    Rookie almost 12 years
    So, if I use an FBO, I render into a texture and then use glGetTexImage() to get the texture pixel data and save it to a file?
  • keltar
    keltar almost 12 years
    Something like that. However, you still need a valid, initialized GL context to perform any rendering - it doesn't matter whether it's a window or offscreen.
  • KillianDS
    KillianDS almost 12 years
    @Rookie: essentially you use a PBO to speed up the transfer from your GPU to your main memory (RAM); it is not an offscreen rendering technique and can be used alongside FBOs.
  • Rookie
    Rookie almost 12 years
    @keltar, I tried with the max texture size (8192x8192), but I got a GL_FRAMEBUFFER_INCOMPLETE_DRAW_BUFFER error from glCheckFramebufferStatus(). I think it's because my gfx card only has 256 MB of memory? An 8192x8192 image takes 256 MB, I think... so the lesson here is: it is not guaranteed to work with MAX_TEXTURE_SIZE.
  • Rookie
    Rookie almost 12 years
    Also, is it possible to use non-power-of-two size textures with an FBO? If my card doesn't support non-power-of-two size textures, does it mean I can't do that with an FBO either? edit: looks like it crashes (at glGetTexImage()) if I don't use power-of-two size textures. edit: actually it's not about power-of-two, but I think it's something to do with the dimensions being divisible by 4.
  • Nicol Bolas
    Nicol Bolas almost 12 years
    @Rookie: "What's the difference to a PBO?" He didn't say PBOs; he said "pbuffers", which are a very different thing.
  • Rookie
    Rookie almost 12 years
    @NicolBolas, I know, but that other guy said something about PBOs (looks like he has removed his answer now).
  • Rookie
    Rookie almost 12 years
    Looks like GL_MAX_VIEWPORT_DIMS isn't guaranteed to return valid values. It gives me 8192x8192, but when I create such an FBO, it gives the error GL_FRAMEBUFFER_INCOMPLETE_DRAW_BUFFER. A smaller size such as 8192x4096 works. I believe this is because my gfx card has only 256 MB of memory?
  • Nicol Bolas
    Nicol Bolas almost 12 years
    @Rookie: The maximum viewport size is the size of the viewport; whether you can successfully create a renderbuffer that size is not answered by that query. Your problem is likely that glRenderbufferStorage failed with a GL_OUT_OF_MEMORY error, since INCOMPLETE_DRAW_BUFFER would suggest an attached image doesn't have storage. Of course, if you had used an 8-bit format like GL_R8, it probably would have succeeded. So the size is still accurate; it's failing for other reasons.
  • Rookie
    Rookie almost 12 years
    Out of interest: does libtr work in 3D perspective mode too? (I can't think of how that's possible without having any kind of seams.)
  • Rookie
    Rookie almost 12 years
    Doesn't glReadPixels do exactly what reading the teximage does? Or is it faster? It might speed things up at the edges though, since there are some unused areas of pixels which I will ignore...
  • rtrobin
    rtrobin almost 12 years
    It should be faster. glReadPixels directly reads the data from the framebuffer into memory, but glGetTexImage needs the framebuffer rendered to a texture first and then reads from the texture into memory. With proper use of the framebuffer, you can read back just the areas of pixels which you need.
  • genpfault
    genpfault almost 12 years
    It does! "OpenGL programs typically call glFrustum, glOrtho or gluPerspective to setup the projection matrix. There are three corresponding functions in the TR library."
  • keltar
    keltar almost 12 years
    It's not very easy to believe that a 256 MB card has a MAX_TEXTURE_SIZE of 8192. What card is it? I mean, the framebuffer/backbuffer is always in video memory, so there is already not enough memory to place a 256 MB texture. Then there are textures, shaders and vertex buffers - all may be freed and re-loaded from RAM again when needed, but they still must be in video memory when you draw them. So I'm guessing either you haven't checked MAX_TEXTURE_SIZE correctly, or something is quite wrong with your hw/driver.
  • Said algharibi
    Said algharibi almost 12 years
    libtr is pretty good as far as I remember. You will definitely need it for such huge resolutions.
  • Ciro Santilli OurBigBook.com
    Ciro Santilli OurBigBook.com over 11 years
    How do I use the FBO without opening a window? If I don't call glutCreateWindow(argv[0]); my program just doesn't run.
  • KillianDS
    KillianDS over 11 years
    Sadly enough you can't (in general). OpenGL does not provide a way to create an offscreen context. There are some tricks using a window/context from another application but most people just create a 1x1 window and immediately hide it.
  • KillianDS
    KillianDS over 11 years
    Please don't edit the post with side issues. The answer is about how to render offscreen, not how to set up an OpenGL context with no window (related, but technically a totally different topic).
  • over_optimistic
    over_optimistic about 11 years
    On Android, OpenGL ES 2.x, there is no GL_DRAW_ prefix; just remove "_DRAW" and it's fine. Also, glViewport(0, 0, width, height) should be called before rendering. You will want to set it back to the previous values after it's done rendering.
  • KillianDS
    KillianDS about 11 years
    @over_optimistic: as far as I know both the OpenGL ES spec and the OpenGL spec define the specific draw buffer constant, are you sure android does not support it or are you just using immediate mode? A viewport should indeed be set, just like any other element required to render your scene, but that's not the essential part of offscreen rendering. A viewport is an element of how you render the scene, not to where you render it.
  • KillianDS
    KillianDS about 11 years
    Apparently it's not supported in OpenGL ES 2.0, will add that.
  • ılǝ
    ılǝ almost 10 years
    Well Android's GLES20 doesn't provide glReadBuffer
  • Ryan Kennedy
    Ryan Kennedy over 9 years
    Just out of curiosity... @KillianDS you mentioned "naïve 'security camera'"... how would you implement a "non-naïve" security camera?
  • KillianDS
    KillianDS over 9 years
    @RyanMuller: naive might be a bad choice of words ;), 'possible component of a security camera' might be a better phrase. You might want to use different culling faces for your security cameras, and you might also want to disable some effects for your camera (e.g. shadows) just for performance's sake. You might want to post-process the image to insert some static, and so on.
  • Ryan Kennedy
    Ryan Kennedy over 9 years
    @KillianDS Ah, makes sense. I thought perhaps you were hinting at some darker wizardry ;-)
  • bluenote10
    bluenote10 almost 9 years
    The question clearly states without a window, so I was really expecting an answer to this "side topic" here. To me, this is the more interesting aspect of the question, since by default any rendering in OpenGL is literally offscreen.
  • manatttta
    manatttta almost 9 years
    Hey. I was trying this code (the first example). I am rendering a 3D world to a window and I want to save a higher-definition snapshot to disk, so I am trying to create a renderbuffer. I tried your code but what I get is a blank output snapshot, and my 3D world gets frozen, even after I called glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0). Is there a more extensive explanation of this?
  • buburs
    buburs almost 9 years
    glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo); is missing before the call to glReadPixels
  • parker.sikand
    parker.sikand over 7 years
    This is a good answer, but make sure to read the docs for each of the functions mentioned. The functions are very picky about format specifiers.
  • parker.sikand
    parker.sikand over 7 years
    Framebuffer is cited as the preferred method, along with examples here. Specifically, refer to the "Render to buffer" example.