Rendering to multiple textures with one pass in DirectX 11


You need to use MRT (Multiple Render Targets) to render this in one pass.

You can bind both targets as output using OMSetRenderTargets

http://msdn.microsoft.com/en-us/library/windows/desktop/ff476464(v=vs.85).aspx

There's an example in http://hieroglyph3.codeplex.com/ (DeferredRendering) which shows how to write to both textures at once.

Here is a small sample:

ID3D11DeviceContext* deviceContext; // your immediate context

ID3D11RenderTargetView* m_pRenderViews[2]; // no more than D3D11_SIMULTANEOUS_RENDER_TARGET_COUNT (8)
m_pRenderViews[0] = pRTV1; // first target
m_pRenderViews[1] = pRTV2; // second target

deviceContext->OMSetRenderTargets(2, m_pRenderViews, NULL); // NULL means no depth stencil attached

Then your pixel shader will need to output a structure instead of a single color:

struct PS_OUTPUT
{
    float4 Color  : SV_Target0;
    float4 Normal : SV_Target1;
};

PS_OUTPUT PS(float4 p : SV_Position, float2 uv : TEXCOORD0)
{
    PS_OUTPUT output;
    output.Color  = float4(1.0f, 0.0f, 0.0f, 1.0f); // replace with your computed color
    output.Normal = float4(0.0f, 0.0f, 1.0f, 0.0f); // replace with your computed normal (+ depth in .w if desired)
    return output;
}

Also, in DirectX 11 you shouldn't need to write depth into your normal buffer; you can simply reuse the depth buffer itself.
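To read the depth buffer back in a later shader, the underlying texture has to be created with a typeless format (e.g. DXGI_FORMAT_R24G8_TYPELESS with D3D11_BIND_DEPTH_STENCIL | D3D11_BIND_SHADER_RESOURCE) so it can have both a depth-stencil view and a shader resource view. A minimal HLSL sketch, assuming the depth SRV is bound to slot t2 (the register slot and names are just for illustration):

```
// Depth buffer bound as a shader resource view (assumed slot t2).
// The SRV format would be e.g. DXGI_FORMAT_R24_UNORM_X8_TYPELESS
// over an R24G8_TYPELESS texture.
Texture2D<float> DepthTexture : register(t2);

float LoadDepth(uint2 pixel)
{
    // Load returns the raw (non-linear) depth value the rasterizer wrote
    return DepthTexture.Load(int3(pixel, 0));
}
```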

As for pixel/compute shader synchronization: a pixel shader and a compute shader can't run at the same time on the same device context, so once your draw calls have finished, the textures are ready to be consumed by your compute shader's Dispatch.
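To illustrate the compute side, a shader consuming the two MRT outputs might look like this (a sketch; the register slots, names, and thread-group size are assumptions). On the C++ side you would first unbind the render targets with another OMSetRenderTargets call (a resource can't be bound as both render target and shader resource at once), then bind the two textures with CSSetShaderResources and call Dispatch:

```
Texture2D<float4> ColorTexture    : register(t0); // what the PS wrote to SV_Target0
Texture2D<float4> NormalTexture   : register(t1); // what the PS wrote to SV_Target1
RWTexture2D<float4> Result        : register(u0); // post-processed output

[numthreads(8, 8, 1)]
void CSMain(uint3 id : SV_DispatchThreadID)
{
    float4 color  = ColorTexture.Load(int3(id.xy, 0));
    float4 normal = NormalTexture.Load(int3(id.xy, 0));
    // ... your image post-processing using color and normal ...
    Result[id.xy] = color;
}
```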

Author: l3utterfly
Updated on June 05, 2022

Comments

  • l3utterfly, about 2 years ago

    I'm trying to render to two textures with one pass using C++ directx 11 SDK. I want one texture to contain the color of each pixel of the result image (what I normally see on the screen when rendering a 3D scene), and another texture to contain the normal of each pixel and depth (3 float for normal and 1 float for depth). Right now, what I can think of is to create two rendering targets and render the first pass as the colors and the second pass the normals and depth to each rendering target respectively. However, this seems a waste of time because I can get the information of each pixel's color, normal, and depth in the first pass. So is there a way to somehow output two textures with the pixel shader?

    Any help would be appreciated.

    P.S. I'm thinking something along the lines of RWTexture2D or RWStructuredBuffer in the pixel shader. A little background: I will need the two images for further processing in the compute shader. Which brings up a side question of synchronization: since the pixel shader (unlike the compute shader) writes each pixel one at a time, how would I know when the pixel shader is finished and tell the compute shader to start image post-processing?

  • l3utterfly, over 11 years ago
    Thanks. Especially for the links. You can't begin to imagine how helpful those are. Thanks again. By the way, by "depth buffer", you mean a SV_Depth value for the pixel shader? (I think I read this somewhere, but I'm not too sure.) +1 and accepted answer.
  • mrvux, over 11 years ago
    No probs. SV_Depth is for when you want to write your own depth value; it's useful in some cases (depth from pixel-shader raytracers, or some texture-array fiddling). What I meant is that a DepthStencil resource can be bound as a Shader Resource View as well, so since you will already bind a depth buffer to render your scene, you might as well reuse it; it's free.