Android: fast bitmap blur?


Solution 1

I finally found a suitable solution:

  • RenderScript lets you implement heavy computations that are scaled transparently across all cores available on the executing device. I've come to the conclusion that, for a reasonable balance of performance and implementation complexity, this is a better approach than JNI or shaders.
  • Since API level 17, the ScriptIntrinsicBlur class is available in the framework. This is exactly what I've been looking for: a high-level, hardware-accelerated Gaussian blur implementation.
  • ScriptIntrinsicBlur is now part of the Android Support Library (v8), which supports Froyo and above (API 8+). The Android Developers blog post on the support RenderScript library has some basic tips on how to use it.

However, the documentation on the ScriptIntrinsicBlur class is sparse, and I spent some more time figuring out the correct invocation arguments. For blurring an ordinary ARGB_8888 bitmap named photo, here they are:

final RenderScript rs = RenderScript.create( myAndroidContext );
// Copy the bitmap into an input Allocation and create a matching output Allocation.
final Allocation input = Allocation.createFromBitmap( rs, photo, Allocation.MipmapControl.MIPMAP_NONE, Allocation.USAGE_SCRIPT );
final Allocation output = Allocation.createTyped( rs, input.getType() );
// Element.U8_4 means four unsigned 8-bit channels, which matches ARGB_8888.
final ScriptIntrinsicBlur script = ScriptIntrinsicBlur.create( rs, Element.U8_4( rs ) );
script.setRadius( myBlurRadius /* e.g. 3.f; must be greater than 0 and at most 25 */ );
script.setInput( input );
script.forEach( output );
// Copy the blurred result back into the original bitmap.
output.copyTo( photo );
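
For convenience, here is a minimal sketch that wraps the snippet above into a reusable helper. It assumes the support-library classes from android.support.v8.renderscript (with the support library, build.gradle typically also needs renderscriptSupportModeEnabled true); the class and method names are just illustrative, not part of the framework.

import android.content.Context;
import android.graphics.Bitmap;
import android.support.v8.renderscript.Allocation;
import android.support.v8.renderscript.Element;
import android.support.v8.renderscript.RenderScript;
import android.support.v8.renderscript.ScriptIntrinsicBlur;

public final class BlurUtil {

    private BlurUtil() {}

    // Blurs 'photo' in place; the radius must be greater than 0 and at most 25.
    public static void blur( Context context, Bitmap photo, float radius ) {
        final RenderScript rs = RenderScript.create( context );
        try {
            final Allocation input = Allocation.createFromBitmap( rs, photo, Allocation.MipmapControl.MIPMAP_NONE, Allocation.USAGE_SCRIPT );
            final Allocation output = Allocation.createTyped( rs, input.getType() );
            final ScriptIntrinsicBlur script = ScriptIntrinsicBlur.create( rs, Element.U8_4( rs ) );
            script.setRadius( radius );
            script.setInput( input );
            script.forEach( output );
            output.copyTo( photo );
        } finally {
            rs.destroy(); // fine for a one-off blur; for repeated blurring, keep the context around instead
        }
    }
}

Usage would then be BlurUtil.blur( myAndroidContext, photo, 3.f );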

Solution 2

Probably the most demanding requirement is live blur, meaning you blur live as the view changes. In this situation a blur should not take longer than 10 or so ms (to leave some headroom within the 16 ms/60 fps frame budget) to look smooth. It is possible to achieve this effect with the right settings, even on not-so-high-end devices (a Galaxy S3 and even slower).

Here is how to improve performance, in descending order of importance:

  1. Use downscaled images: this reduces the number of pixels to blur enormously, and since you want a blurred result anyway, the lost detail doesn't matter. It also drastically lowers image loading time and memory consumption. (A minimal sketch combining tips 1-3 follows this list.)

  2. Use RenderScript's ScriptIntrinsicBlur - there is probably no better/faster solution on Android as of 2014. One mistake I often see is that the RenderScript context is not reused but created every time the blur is run. Keep in mind that RenderScript.create(this); takes around 20 ms on a Nexus 5, so you want to avoid that.

  3. Reuse bitmaps: don't create unnecessary instances; always use the same one. When you need a really fast blur, garbage collection plays a major role (taking a good 10-20 ms to collect some bitmaps). Also, crop and blur only what you need.

  4. For a live blur, probably because of context switching, it's not possible to blur in another thread (even with thread pools): only the main thread was fast enough to keep the view updated in time; with background threads I saw lags of 100-300 ms.
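
For illustration, here is a minimal sketch combining tips 1-3: downscale first, blur the small bitmap with a RenderScript context that is created once, and reuse the bitmap and Allocations between frames. The class, field names, and scale factor are my own assumptions, not from this post; it uses the support-library classes from android.support.v8.renderscript.

import android.content.Context;
import android.graphics.Bitmap;
import android.graphics.Canvas;
import android.graphics.Paint;
import android.support.v8.renderscript.Allocation;
import android.support.v8.renderscript.Element;
import android.support.v8.renderscript.RenderScript;
import android.support.v8.renderscript.ScriptIntrinsicBlur;

public final class LiveBlur {

    private static final int SCALE_FACTOR = 8; // tip 1: blur at 1/8 of the source resolution

    private final RenderScript rs;             // tip 2: create the context once and reuse it
    private final ScriptIntrinsicBlur script;
    private final Paint paint = new Paint(Paint.FILTER_BITMAP_FLAG);

    private Bitmap scaled;                     // tip 3: reuse the same bitmap between frames
    private Allocation input, output;

    public LiveBlur(Context context) {
        rs = RenderScript.create(context.getApplicationContext());
        script = ScriptIntrinsicBlur.create(rs, Element.U8_4(rs));
    }

    // Blurs 'source' at reduced resolution and returns the small, blurred bitmap.
    public Bitmap blur(Bitmap source, float radius) {
        final int w = Math.max(1, source.getWidth() / SCALE_FACTOR);
        final int h = Math.max(1, source.getHeight() / SCALE_FACTOR);
        if (scaled == null || scaled.getWidth() != w || scaled.getHeight() != h) {
            scaled = Bitmap.createBitmap(w, h, Bitmap.Config.ARGB_8888);
            input = Allocation.createFromBitmap(rs, scaled,
                    Allocation.MipmapControl.MIPMAP_NONE, Allocation.USAGE_SCRIPT);
            output = Allocation.createTyped(rs, input.getType());
        }
        // Draw the source downscaled into the reused bitmap instead of allocating a new one.
        final Canvas canvas = new Canvas(scaled);
        canvas.scale((float) w / source.getWidth(), (float) h / source.getHeight());
        canvas.drawBitmap(source, 0f, 0f, paint);

        input.copyFrom(scaled);      // refresh the allocation with the new pixels
        script.setRadius(radius);    // must be greater than 0 and at most 25
        script.setInput(input);
        script.forEach(output);
        output.copyTo(scaled);
        return scaled;
    }
}

Per tip 4, blur(...) would be called directly on the main thread while the view updates.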

For more tips see my other post here: https://stackoverflow.com/a/23119957/774398

Btw, I did a simple live blur in this app: GitHub, Play Store


Updated on October 09, 2020

Comments

  • theV0ID over 3 years

    I've been searching for the past three days for a built-in, hardware-accelerated way of blurring a bitmap on Android. I stumbled upon certain workarounds like shrinking the bitmap and scaling it up again, but this method produced low-quality results which were not suitable for my image recognition requirements. I also read that implementing convolution with shaders or JNI is a good way to go, but I cannot believe that there is no built-in solution in the Android framework for this very common purpose. Currently I've ended up with a self-written convolution implementation in Java, but it is awkwardly slow. My question is:

    • Is there really no built-in solution in the Android framework?
    • In case there is none: what is the most efficient way to accelerate the convolution while keeping implementation and maintenance complexity reasonable? Should we use JNI, shaders, or something completely different?