How do I get the raw Android camera buffer in C using JNI?
Solution 1
I did a little investigation on the topic. This presentation (from p. 277, in Chinese) helped a lot.
Camera preview call chain
As others mentioned, you can get a buffer using the `Camera.setPreviewCallback` method. Here's how it happens, step by step (a verbose version):
1. The user calls `Camera.startPreview()`, which is a native function.
2. `android_hardware_Camera_startPreview` calls the `startPreview` method of the C++ `Camera` class.
3. `Camera` calls the `startPreview` method of the `ICamera` interface.
4. `ICamera` makes an IPC call to the remote client.
5. It calls the `setCameraMode` method of the `CameraService` class.
6. `CameraService` sets a window to display the preview and calls the `startPreview` method of the `CameraHardwareInterface` class.
7. The latter tries to call the `start_preview` method on the particular `camera_device_t` device. I didn't look further, but it should perform a call into the driver.
8. When an image arrives, the `dataCallback` of `CameraService` is invoked.
9. It passes the data to the client's `handlePreviewData` method.
10. The client either copies the buffer or sends it directly to the `ICameraClient`.
11. `ICameraClient` sends it over IPC to the `Camera`.
12. `Camera` calls a registered listener and passes the buffer to JNI.
13. JNI invokes a callback in the Java class. If the user provided a buffer with `Camera.addCallbackBuffer`, it copies into that buffer first.
14. Finally, the Java `Camera` class handles the message and invokes the `onPreviewFrame` method of `Camera.PreviewCallback`.
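The user-facing ends of this chain (starting the preview and receiving the final callback) can be sketched in Java. This is a minimal, untested sketch assuming the deprecated `android.hardware.Camera` API and an already-configured preview surface:

```java
import android.hardware.Camera;

// Sketch of the Java-level entry and exit points of the chain above:
// startPreview() kicks it off, onPreviewFrame() is where the buffer
// finally arrives after the IPC hops and copies described in the steps.
void startWithCallback(Camera camera) {
    camera.setPreviewCallback(new Camera.PreviewCallback() {
        @Override
        public void onPreviewFrame(byte[] data, Camera cam) {
            // 'data' holds one preview frame (NV21 by default),
            // already copied into the Java heap by this point.
        }
    });
    camera.startPreview();
}
```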
As you can see, two IPC calls are made, and the buffer is copied at least twice (steps 10 and 11). The first instance of the raw buffer, returned by `camera_device_t`, is hosted in another process, and you cannot access it due to security checks in `CameraService`.
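You cannot remove these copies, but you can at least avoid a per-frame allocation on the Java side by pre-registering buffers with `Camera.addCallbackBuffer`. For the default NV21 preview format, one frame needs width × height × 3/2 bytes; here is a small helper to compute that (a hypothetical class of mine, not part of the Android API):

```java
// Hypothetical helper for sizing buffers passed to Camera.addCallbackBuffer.
// NV21 stores a full-resolution 8-bit Y plane followed by a half-resolution
// interleaved VU plane, so one frame takes width * height * 3/2 bytes.
public final class PreviewBuffers {
    private PreviewBuffers() {}

    public static int nv21BufferSize(int width, int height) {
        return width * height * 3 / 2;
    }
}
```

Used together with `Camera.setPreviewCallbackWithBuffer`, e.g. `camera.addCallbackBuffer(new byte[PreviewBuffers.nv21BufferSize(w, h)])`, so the same buffers are recycled frame after frame.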
Preview surface
However, when you set a preview surface using either `Camera.setPreviewTexture` or `Camera.setPreviewDisplay`, it is passed directly to the camera device and refreshed in real time without the participation of the whole chain above. As its documentation says:
> Handle onto a raw buffer that is being managed by the screen compositor.
The Java class `Surface` has a method to retrieve its contents:

```java
public static native Bitmap screenshot(int width, int height, int minLayer, int maxLayer);
```

But this API is hidden. See e.g. this question for a way to use it.
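Routing the preview to a texture instead of the callback chain looks roughly like this (a sketch, API 11+, assuming a GLES texture name `texId` of target `GL_TEXTURE_EXTERNAL_OES` already exists):

```java
import android.graphics.SurfaceTexture;
import android.hardware.Camera;
import java.io.IOException;

// Sketch: send preview frames straight to a SurfaceTexture, bypassing
// the Java callback chain described above. Assumes 'texId' is an
// existing GL_TEXTURE_EXTERNAL_OES texture created on the GL thread.
void startPreviewOnTexture(Camera camera, int texId) throws IOException {
    SurfaceTexture surfaceTexture = new SurfaceTexture(texId);
    surfaceTexture.setOnFrameAvailableListener(st -> {
        // A new frame is ready on the GPU side; call st.updateTexImage()
        // from the GL thread to latch it into the external texture.
    });
    camera.setPreviewTexture(surfaceTexture);
    camera.startPreview();
}
```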
Solution 2
There is no public API to do what you want; the only official (that is, guaranteed to work) method is the Java-level preview callbacks set up through calling Camera.setPreviewCallback(). In Android > 3.0, you can also use Camera.setPreviewTexture() to route preview data to the GPU, and process it there using GLES (or read it back to the CPU). The GPU path is what the ICS AOSP camera application uses for its video effects.
Presumably, OpenCV and others have looked through the Android framework native code, and have bypassed the Java Camera API, talking to the services below directly.
This is fairly dangerous, because there is absolutely no guarantee that those interfaces won't change between Android versions, since they are not part of the public API. Using them may be fine now, and then when a user upgrades their device, your app will stop working.
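The GPU path amounts to: latch the camera frame into an external texture (as above), render it into an ordinary framebuffer, then optionally read it back. A rough sketch of the readback step only, assuming a current EGL context and that the FBO containing the rendered frame is already bound:

```java
import android.opengl.GLES20;
import java.nio.ByteBuffer;

// Sketch of the CPU-readback end of the GPU path: after drawing the
// camera's external texture into a bound FBO, pull the pixels back
// as RGBA. This is a blocking call and stalls the GL pipeline.
ByteBuffer readBackFrame(int width, int height) {
    ByteBuffer pixels = ByteBuffer.allocateDirect(width * height * 4);
    GLES20.glReadPixels(0, 0, width, height,
            GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, pixels);
    return pixels;
}
```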
Solution 3
Have you taken a look at OpenCV for Android? Their advanced tutorials show how to use JNI, and there is a NativeProcessor object in their camera package.
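A supported middle ground, essentially what OpenCV's JNI samples do, is to keep the Java preview callback but hand the bytes to native code over JNI. A sketch of the Java side, where `processFrame` and the library name are hypothetical (the C implementation would receive the frame as a `jbyteArray`):

```java
import android.hardware.Camera;

// Sketch: forward Java preview frames to C via JNI. The buffer still
// originates from the Java chain, but the per-frame processing runs
// in native code. 'processFrame' and "nativepreview" are hypothetical.
public class NativePreview implements Camera.PreviewCallback {
    static { System.loadLibrary("nativepreview"); }

    private native void processFrame(byte[] nv21, int width, int height);

    private final int width;
    private final int height;

    public NativePreview(int width, int height) {
        this.width = width;
        this.height = height;
    }

    @Override
    public void onPreviewFrame(byte[] data, Camera camera) {
        processFrame(data, width, height); // crosses into C here
        camera.addCallbackBuffer(data);    // recycle the buffer
    }
}
```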
Nick

Updated on September 15, 2022

Comments
- Nick (about 1 year): I've been searching Google and StackOverflow exhaustively and cannot find this. Maybe I'm missing something obvious. Thanks! (This is because the Java implementation of the preview callback [even with a buffer] is too inefficient.)
- Andrey Ermakov (over 11 years): Do you have only ordinary privileges, or some extra (maybe root) privileges?
- timemanx (about 10 years): Did you ever find a way to do this? Because I'm looking for the same thing.
- Alex Cohn (over 9 years): Two comments on this excellent summary. 1. Steps 10-11 may involve `memcpy`, but most likely they just pass the buffer without copying, using inter-process memory sharing. 2. The `screenshot` method of `Surface` copies pixels; it also involves IPC, and possibly a color conversion, so it would be a mistake to use it for better performance. 2a. The `screenshot` method produces an RGB bitmap, while most video encoding needs YUV.