2D Image Processing With WebGL

Solution 1

You can write a custom pixel (fragment) shader for each operation you intend to support. Just learn some GLSL and follow the "Learning WebGL" tutorials to get a grasp of basic WebGL.

Render your image with the shader, exposing uniform parameters to control the different visual styles; then, when the user clicks "OK", read back the pixels and store the result as your current image.
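
For example, a brightness/saturation filter could be a fragment shader along these lines (just a sketch; the uniform names u_brightness, u_saturation, u_image and the varying v_texCoord are illustrative and would be whatever you set up in your own program):

precision mediump float;

uniform sampler2D u_image;      // the source photo
uniform float u_brightness;     // e.g. -1.0 to +1.0, 0.0 = unchanged
uniform float u_saturation;     // 0.0 = grayscale, 1.0 = unchanged

varying vec2 v_texCoord;

void main() {
   vec4 color = texture2D(u_image, v_texCoord);
   // desaturate by mixing with the pixel's luminance, then offset brightness
   float gray = dot(color.rgb, vec3(0.299, 0.587, 0.114));
   vec3 rgb = mix(vec3(gray), color.rgb, u_saturation) + u_brightness;
   gl_FragColor = vec4(rgb, color.a);
}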

Just remember to avoid cross-domain images, because using one taints the canvas and disables reading back the pixels.
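
The read-back itself is just a gl.readPixels call after you've drawn (a minimal sketch, assuming the canvas is the same size as the image):

// after drawing the filtered image to the canvas
var pixels = new Uint8Array(gl.drawingBufferWidth * gl.drawingBufferHeight * 4);
gl.readPixels(0, 0, gl.drawingBufferWidth, gl.drawingBufferHeight,
              gl.RGBA, gl.UNSIGNED_BYTE, pixels);
// this call is what fails if the canvas was tainted by a cross-domain image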

Also, check the quick reference card (PDF) for quick info on shader operations.

Solution 2

I was going to write a tutorial and post it on my blog, but I don't know when I'll have time to finish, so here's what I have so far. There's also a more detailed set of posts on my blog.

WebGL is actually a rasterization library. It takes in attributes (streams of data) and uniforms (variables), and expects you to provide "clip space" coordinates in 2D and color data for pixels.

Here's a simple example of 2D in WebGL (some details left out):

// Get A WebGL context
var gl = canvas.getContext("experimental-webgl");

// setup GLSL program
var vertexShader = createShaderFromScriptElement(gl, "2d-vertex-shader");
var fragmentShader = createShaderFromScriptElement(gl, "2d-fragment-shader");
var program = createProgram(gl, vertexShader, fragmentShader);
gl.useProgram(program);

// look up where the vertex data needs to go.
var positionLocation = gl.getAttribLocation(program, "a_position");

// Create a buffer and put a single clipspace rectangle in
// it (2 triangles)
var buffer = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, buffer);
gl.bufferData(gl.ARRAY_BUFFER, new Float32Array([
   -1.0, -1.0,
    1.0, -1.0,
   -1.0,  1.0,
   -1.0,  1.0,
    1.0, -1.0,
    1.0,  1.0]), gl.STATIC_DRAW);
gl.enableVertexAttribArray(positionLocation);
gl.vertexAttribPointer(positionLocation, 2, gl.FLOAT, false, 0, 0);

// draw
gl.drawArrays(gl.TRIANGLES, 0, 6);

Here are the 2 shaders:

<script id="2d-vertex-shader" type="x-shader/x-vertex">
attribute vec2 a_position;
void main() {
   gl_Position = vec4(a_position, 0, 1);
}
</script>

<script id="2d-fragment-shader" type="x-shader/x-fragment">
void main() {
   gl_FragColor = vec4(0,1,0,1);  // green
}
</script>

This will draw a green rectangle the entire size of the canvas.

In WebGL it's your responsibility to provide a vertex shader that outputs clip space coordinates. Clip space coordinates always go from -1 to +1 regardless of the size of the canvas. If you want 3D, it's up to you to supply shaders that convert from 3D to 2D, because WebGL is only a rasterization API.

As a simple example, if you want to work in pixels you can pass in a rectangle that uses pixel coordinates instead of clip space coordinates and convert to clip space in the shader.

For example:

<script id="2d-vertex-shader" type="x-shader/x-vertex">
attribute vec2 a_position;

uniform vec2 u_resolution;

void main() {
   // convert the rectangle from pixels to 0.0 to 1.0
   vec2 zeroToOne = a_position / u_resolution;

   // convert from 0->1 to 0->2
   vec2 zeroToTwo = zeroToOne * 2.0;

   // convert from 0->2 to -1->+1 (clipspace)
   vec2 clipSpace = zeroToTwo - 1.0;

   gl_Position = vec4(clipSpace, 0, 1);
}
</script>

Now we can draw rectangles by changing the data we supply:

// set the resolution
var resolutionLocation = gl.getUniformLocation(program, "u_resolution");
gl.uniform2f(resolutionLocation, canvas.width, canvas.height);

// setup a rectangle from 10,20 to 80,30 in pixels
gl.bufferData(gl.ARRAY_BUFFER, new Float32Array([
    10, 20,
    80, 20,
    10, 30,
    10, 30,
    80, 20,
    80, 30]), gl.STATIC_DRAW);

You'll notice WebGL considers the bottom left corner to be 0,0. To get the more traditional top left origin used for 2D graphics, we just flip the y coordinate.

   gl_Position = vec4(clipSpace * vec2(1, -1), 0, 1);

If you want to manipulate images you need to pass in textures. In the same way the size of the canvas is represented by clip space coordinates, textures are referenced by texture coordinates that go from 0 to 1.

<script id="2d-vertex-shader" type="x-shader/x-vertex">
attribute vec2 a_position;
attribute vec2 a_texCoord;

uniform vec2 u_resolution;

varying vec2 v_texCoord;

void main() {
   // convert the rectangle from pixels to 0.0 to 1.0
   vec2 zeroToOne = a_position / u_resolution;

   // convert from 0->1 to 0->2
   vec2 zeroToTwo = zeroToOne * 2.0;

   // convert from 0->2 to -1->+1 (clipspace)
   vec2 clipSpace = zeroToTwo - 1.0;

   gl_Position = vec4(clipSpace, 0, 1);

   // pass the texCoord to the fragment shader
   // The GPU will interpolate this value between points.
   v_texCoord = a_texCoord;
}
</script>

<script id="2d-fragment-shader" type="x-shader/x-fragment">
precision mediump float;

// our texture
uniform sampler2D u_image;

// the texCoords passed in from the vertex shader.
varying vec2 v_texCoord;

void main() {
   gl_FragColor = texture2D(u_image, v_texCoord);
}
</script>

Drawing an image requires loading the image, and since that happens asynchronously we need to change our code a little. Take all the code we had and put it in a function called "render":

var image = new Image();
image.src = "http://someimage/on/our/server";  // MUST BE SAME DOMAIN!!!
image.onload = function() {
  render();
}

function render() {
   ...
   // all the code we had before except the draw call


   // look up where the texture coordinates need to go.
   var texCoordLocation = gl.getAttribLocation(program, "a_texCoord");

   // provide texture coordinates for the rectangle.
   var texCoordBuffer = gl.createBuffer();
   gl.bindBuffer(gl.ARRAY_BUFFER, texCoordBuffer);
   gl.bufferData(gl.ARRAY_BUFFER, new Float32Array([
       1.0,  1.0, 
       0.0,  1.0, 
       0.0,  0.0, 
       1.0,  1.0, 
       0.0,  0.0, 
       1.0,  0.0]), gl.STATIC_DRAW);
   gl.enableVertexAttribArray(texCoordLocation);
   gl.vertexAttribPointer(texCoordLocation, 2, gl.FLOAT, false, 0, 0);

   var texture = gl.createTexture();
   gl.bindTexture(gl.TEXTURE_2D, texture);
   gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, image);
   gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
   gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
   gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST);
   gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST);

   gl.drawArrays(gl.TRIANGLES, 0, 6);

If you want to do image processing you just change your shader. For example, to swap red and blue:

void main() {
   gl_FragColor = texture2D(u_image, v_texCoord).bgra;
}

Or blend each pixel with the pixels next to it:

uniform vec2 u_textureSize;

void main() {
   vec2 onePixel = vec2(1.0, 1.0) / u_textureSize;
   gl_FragColor = (texture2D(u_image, v_texCoord) +
                   texture2D(u_image, v_texCoord + vec2(onePixel.x, 0.0)) +
                   texture2D(u_image, v_texCoord + vec2(-onePixel.x, 0.0))) / 3.0;
}

And we have to pass in the size of the texture:

var textureSizeLocation = gl.getUniformLocation(program, "u_textureSize");
...
gl.uniform2f(textureSizeLocation, image.width, image.height);

Etc... Click the last link below for a convolution sample.
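
For reference, the convolution sample linked below boils down to a fragment shader roughly like this (a sketch following the same naming pattern as above; the 3x3 kernel and its weight are assumed to be passed in as uniforms):

precision mediump float;

uniform sampler2D u_image;
uniform vec2 u_textureSize;
uniform float u_kernel[9];      // 3x3 convolution kernel, row major
uniform float u_kernelWeight;   // sum of the kernel (use 1.0 if the sum is 0)

varying vec2 v_texCoord;

void main() {
   vec2 onePixel = vec2(1.0, 1.0) / u_textureSize;
   vec4 colorSum =
       texture2D(u_image, v_texCoord + onePixel * vec2(-1.0, -1.0)) * u_kernel[0] +
       texture2D(u_image, v_texCoord + onePixel * vec2( 0.0, -1.0)) * u_kernel[1] +
       texture2D(u_image, v_texCoord + onePixel * vec2( 1.0, -1.0)) * u_kernel[2] +
       texture2D(u_image, v_texCoord + onePixel * vec2(-1.0,  0.0)) * u_kernel[3] +
       texture2D(u_image, v_texCoord + onePixel * vec2( 0.0,  0.0)) * u_kernel[4] +
       texture2D(u_image, v_texCoord + onePixel * vec2( 1.0,  0.0)) * u_kernel[5] +
       texture2D(u_image, v_texCoord + onePixel * vec2(-1.0,  1.0)) * u_kernel[6] +
       texture2D(u_image, v_texCoord + onePixel * vec2( 0.0,  1.0)) * u_kernel[7] +
       texture2D(u_image, v_texCoord + onePixel * vec2( 1.0,  1.0)) * u_kernel[8];
   gl_FragColor = vec4((colorSum / u_kernelWeight).rgb, 1.0);
}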

Here are working versions with a slightly different progression:

Draw Rect in Clip Space

Draw Rect in Pixels

Draw Rect with origin at top left

Draw a bunch of rects in different colors

Draw an image

Draw an image red and blue swapped

Draw an image with left and right pixels averaged

Draw an image with a 3x3 convolution

Draw an image with multiple effects

Solution 3

Just try glfx (http://evanw.github.com/glfx.js/); I think it is exactly what you need. You can use the set of predefined shaders or easily add your own ;) Enjoy! It is very easy with glfx!

<script src="glfx.js"></script>
<script>

window.onload = function() {
    // try to create a WebGL canvas (will fail if WebGL isn't supported)
    try {
        var canvas = fx.canvas();
    } catch (e) {
        alert(e);
        return;
    }

    // convert the image to a texture
    var image = document.getElementById('image');
    var texture = canvas.texture(image);

    // apply the ink filter
    canvas.draw(texture).ink(0.25).update();

    // replace the image with the canvas
    image.parentNode.insertBefore(canvas, image);
    image.parentNode.removeChild(image);
};

</script>
<img id="image" src="image.jpg">

Comments

  • Dalton Tan
    Dalton Tan almost 2 years

    I intend to create a simple photo editor in JS. My main question is, is it possible to create filters that render in real-time? For example, adjusting brightness and saturation. All I need is a 2D image where I can apply filters using the GPU.

    All the tutorials I've read are very complex and don't really explain what the APIs mean. Please point me in the right direction. Thanks.

  • Dalton Tan
    Dalton Tan over 12 years
    Now I can render a flat image on the canvas. Is it possible to write it without the 3D part? As for the learning materials, what should I look for exactly? There seems to be OpenGL ES, the normal OpenGL, GLSL, WebGL along with all the different versions, too.
  • Chiguireitor
    Chiguireitor over 12 years
    You could just draw your texture with an orthographic projection and a screen-aligned quad. For learning materials you can focus on GLSL, which is the language you're going to use for your effects.
  • Dalton Tan
    Dalton Tan over 12 years
    Thanks for the detailed explanation. I'm having a little problem here, hope you know about it. stackoverflow.com/questions/8679596/…
  • Dalton Tan
    Dalton Tan over 12 years
    One more question. How do I dynamically call a combination of functions based on user input? For example, blur + saturation or just sharpen alone. In your example I saw you using a kernel, something I do not understand. Thanks.
  • gman
    gman over 12 years
    I got the kernel thing from here (docs.gimp.org/en/plug-in-convmatrix.html) and here (codeproject.com/KB/graphics/ImageConvolution.aspx). As for combining them, there are 2 things off the top of my head: #1) you can attempt to generate shaders on the fly; #2) you can render to framebuffers and repeat for each step (see the sketch after these comments).
  • B''H Bi'ezras -- Boruch Hashem
    B''H Bi'ezras -- Boruch Hashem almost 4 years
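
As a rough illustration of the second approach gman mentions above (render each effect into a framebuffer and feed the result into the next one), a sketch could look like this; drawWithProgram, effects, originalImageTexture and copyProgram are hypothetical helpers and variables, not part of the code above:

function createFramebufferTexture(gl, width, height) {
   // a texture we can both render into and sample from later
   var texture = gl.createTexture();
   gl.bindTexture(gl.TEXTURE_2D, texture);
   gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
   gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
   gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST);
   gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST);
   gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, width, height, 0,
                 gl.RGBA, gl.UNSIGNED_BYTE, null);

   // a framebuffer with that texture as its color attachment
   var framebuffer = gl.createFramebuffer();
   gl.bindFramebuffer(gl.FRAMEBUFFER, framebuffer);
   gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0,
                           gl.TEXTURE_2D, texture, 0);
   return { texture: texture, framebuffer: framebuffer };
}

var ping = createFramebufferTexture(gl, image.width, image.height);
var pong = createFramebufferTexture(gl, image.width, image.height);

// start from the original image, apply each effect into alternating
// framebuffers, then draw the final result to the canvas
var source = originalImageTexture;
effects.forEach(function(effect, i) {
   var target = (i % 2 === 0) ? ping : pong;
   gl.bindFramebuffer(gl.FRAMEBUFFER, target.framebuffer);
   drawWithProgram(effect.program, source);
   source = target.texture;
});
gl.bindFramebuffer(gl.FRAMEBUFFER, null);   // back to the canvas
drawWithProgram(copyProgram, source);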
    hey I've seen your other article also webglfundamentals.org/webgl/lessons/webgl-image-processing.h‌​tml and I've seen it copied other places on the web, but I couldn't find any information about "u_image" in the fragment shader? Where is that being set from the JavaScript program? Is that particular name, "u_image", automatically set when you bind it to gl.TEXTURE_2D? I couldn't find any docs on this, do you know where they are?