Green images when doing a JPEG encoding from YUV_420_888 using the new Android camera2 api

Posted by: admin May 14, 2020


I am trying to use the new camera API. The burst capture was going too slow, so I use the YUV_420_888 format in the ImageReader and do the JPEG encoding later, as was suggested in the following post:

Android camera2 capture burst is too slow

The problem is that I am getting green images when I try to encode JPEG from YUV_420_888 using RenderScript as follows:

RenderScript rs = RenderScript.create(mContext);
ScriptIntrinsicYuvToRGB yuvToRgbIntrinsic = ScriptIntrinsicYuvToRGB.create(rs, Element.RGBA_8888(rs));
Type.Builder yuvType = new Type.Builder(rs, Element.YUV(rs)).setX(width).setY(height).setYuvFormat(ImageFormat.YUV_420_888);
Allocation in = Allocation.createTyped(rs, yuvType.create(), Allocation.USAGE_SCRIPT);

Type.Builder rgbaType = new Type.Builder(rs, Element.RGBA_8888(rs)).setX(width).setY(height);
Allocation out = Allocation.createTyped(rs, rgbaType.create(), Allocation.USAGE_SCRIPT);



in.copyFrom(data);
yuvToRgbIntrinsic.setInput(in);
yuvToRgbIntrinsic.forEach(out);

Bitmap bmpout = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
out.copyTo(bmpout);

ByteArrayOutputStream baos = new ByteArrayOutputStream();
bmpout.compress(Bitmap.CompressFormat.JPEG, 100, baos);
byte[] jpegBytes = baos.toByteArray();

The data variable (the YUV_420_888 data) is obtained from:

ByteBuffer buffer = mImage.getPlanes()[0].getBuffer();
byte[] data = new byte[buffer.remaining()];
buffer.get(data);

What am I doing wrong in the JPEG encoding that makes the images come out green?

Thanks in advance

Edited: This is an example of the images in green that I obtain:


Answers:

So there are several layers of answers to this question.

First, I don’t believe there’s a direct way to copy an Image of YUV_420_888 data into an RS Allocation, even if the allocation is of format YUV_420_888.

So if you’re not using the Image for anything other than this JPEG encode step, then you can just use an Allocation as the output for the camera directly, by using Allocation#getSurface and Allocation#ioReceive. Then you can perform your YUV->RGB conversion and read out the bitmap.

However, note that JPEG files, under the hood, actually store YUV data, so when you go to compress the JPEG, Bitmap is going to do another RGB->YUV conversion as it saves the file. For maximum efficiency, then, you’d want to feed the YUV data directly to a JPEG encoder that can accept it, and avoid the extra conversion steps entirely. Unfortunately, this isn’t possible through the public APIs, so you’d have to drop down to JNI code and include your own copy of libjpeg or an equivalent JPEG encoding library.

If you don’t need to save JPEG files terribly quickly, you can swizzle the YUV_420_888 data into an NV21 byte[] and then use YuvImage. You do need to pay attention to the pixel and row strides of your YUV_420_888 data and map them correctly to NV21: YUV_420_888 is flexible and can represent several different kinds of memory layouts (including NV21), and the layout may differ between devices. So when converting the layout to NV21, it’s critical to make sure you are doing the mapping correctly.
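As a rough illustration of the stride handling, here is a plain-Java sketch of such a swizzle. The class, method, and parameter names are mine, and it assumes the three planes have already been copied into byte arrays; on Android the data would come from Image.getPlanes(), with the strides from getRowStride() and getPixelStride().

```java
public class Nv21Packer {
    // Pack separate Y, U and V planes, each with its own row stride and (for
    // chroma) pixel stride, into a single NV21 array: full-resolution Y first,
    // then V and U interleaved at quarter resolution.
    public static byte[] packNv21(byte[] y, int yRowStride,
                                  byte[] u, byte[] v,
                                  int chromaRowStride, int chromaPixelStride,
                                  int width, int height) {
        byte[] nv21 = new byte[width * height * 3 / 2];
        int pos = 0;
        // Luma: copy row by row so any padding at the end of each row is skipped.
        for (int row = 0; row < height; row++) {
            System.arraycopy(y, row * yRowStride, nv21, pos, width);
            pos += width;
        }
        // Chroma: NV21 expects V first, then U, interleaved. The pixel stride
        // steps over interleaved layouts (stride 2) as well as planar ones (stride 1).
        for (int row = 0; row < height / 2; row++) {
            for (int col = 0; col < width / 2; col++) {
                int src = row * chromaRowStride + col * chromaPixelStride;
                nv21[pos++] = v[src];
                nv21[pos++] = u[src];
            }
        }
        return nv21;
    }

    public static void main(String[] args) {
        // 2x2 frame: luma row stride 3 (one padding byte per row), planar chroma.
        byte[] y = {10, 20, 99, 30, 40, 99};
        byte[] u = {100};
        byte[] v = {(byte) 200};
        byte[] nv21 = packNv21(y, 3, u, v, 1, 1, 2, 2);
        System.out.println(java.util.Arrays.toString(nv21)); // [10, 20, 30, 40, -56, 100]
    }
}
```

On a device whose chroma is interleaved (pixel stride 2), the U and V buffers often alias the same memory one byte apart, which is why bulk-copy shortcuts work only on some layouts.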


I managed to get this to work, but the answer provided by panonski is not quite right. A big issue is that YUV_420_888 covers many different memory layouts, whereas NV21 is one very specific layout (I don’t know why the default format was changed in this way; it makes no sense to me).

Note that this method can be pretty slow for a few reasons.

  1. Because NV21 interleaves the chroma channels, and YUV_420_888 includes formats whose chroma channels are not interleaved, the only reliable option (that I know of) is a byte-by-byte copy. I would be interested to know if there is a trick to speed this process up; I suspect there is one. I provide a grayscale-only option because that part is a very fast row-by-row copy.

  2. When grabbing frames from the camera, the plane buffers are direct ByteBuffers with no accessible backing array, which means direct access is impossible and the bytes must be copied out before they can be manipulated.

  3. The image appears to be stored in reverse byte order, so after conversion the final array needs to be reversed. This might just be my camera, and I suspect there is another trick to be found here that could speed this up a lot.
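On point 2, the copy itself only needs plain java.nio calls; a minimal sketch (the class and method names are mine), which also avoids disturbing the caller's buffer position:

```java
import java.nio.ByteBuffer;

public class PlaneCopy {
    // Copy a (possibly direct) ByteBuffer into a heap byte[] without
    // changing the source buffer's position or limit.
    public static byte[] toByteArray(ByteBuffer in) {
        ByteBuffer ro = in.asReadOnlyBuffer(); // independent position/limit
        ro.rewind();
        byte[] out = new byte[ro.remaining()];
        ro.get(out);
        return out;
    }

    public static void main(String[] args) {
        ByteBuffer direct = ByteBuffer.allocateDirect(4);
        direct.put(new byte[] {1, 2, 3, 4});
        byte[] copy = toByteArray(direct);
        System.out.println(copy.length + " " + copy[0] + " " + copy[3]); // 4 1 4
    }
}
```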

Anyway, here is the code:

private byte[] getRawCopy(ByteBuffer in) {
    ByteBuffer rawCopy = ByteBuffer.allocate(in.capacity());
    rawCopy.put(in); // actually copy the bytes, not just allocate the space
    return rawCopy.array();
}

private void fastReverse(byte[] array, int offset, int length) {
    int end = offset + length;
    for (int i = offset; i < offset + (length / 2); i++) {
        array[i] = (byte)(array[i] ^ array[end - i - 1]);
        array[end - i - 1] = (byte)(array[i] ^ array[end - i - 1]);
        array[i] = (byte)(array[i] ^ array[end - i - 1]);
    }
}

private ByteBuffer convertYUV420ToN21(Image imgYUV420, boolean grayscale) {

    Image.Plane yPlane = imgYUV420.getPlanes()[0];
    byte[] yData = getRawCopy(yPlane.getBuffer());

    Image.Plane uPlane = imgYUV420.getPlanes()[1];
    byte[] uData = getRawCopy(uPlane.getBuffer());

    Image.Plane vPlane = imgYUV420.getPlanes()[2];
    byte[] vData = getRawCopy(vPlane.getBuffer());

    // NV21 stores a full frame of luma (Y) and half a frame of chroma (U, V);
    // npix * 2 leaves slack after the NV21 payload for the reversal below.
    int npix = imgYUV420.getWidth() * imgYUV420.getHeight();
    byte[] nv21Image = new byte[npix * 2];
    Arrays.fill(nv21Image, (byte)127); // 127 -> zero chroma (luma will be overwritten in either case)

    // Copy the Y plane row by row, skipping any row padding
    ByteBuffer nv21Buffer = ByteBuffer.wrap(nv21Image);
    for (int i = 0; i < imgYUV420.getHeight(); i++) {
        nv21Buffer.put(yData, i * yPlane.getRowStride(), imgYUV420.getWidth());
    }

    // Copy the U and V planes interleaved
    if (!grayscale) {
        for (int row = 0; row < imgYUV420.getHeight() / 2; row++) {
            for (int cnt = 0, upix = 0, vpix = 0; cnt < imgYUV420.getWidth() / 2; upix += uPlane.getPixelStride(), vpix += vPlane.getPixelStride(), cnt++) {
                nv21Buffer.put(uData[row * uPlane.getRowStride() + upix]);
                nv21Buffer.put(vData[row * vPlane.getRowStride() + vpix]);
            }
        }
        fastReverse(nv21Image, npix, npix);
    }

    fastReverse(nv21Image, 0, npix);

    return nv21Buffer;
}


If I understood your description correctly, I can see at least two problems in your code:

  1. It seems you are only passing the Y part of your image to the YUV->RGB conversion code, because it looks like you are only using the first plane in ByteBuffer buffer = mImage.getPlanes()[0].getBuffer();, ignoring the U and V planes.

  2. I’m not familiar with these RenderScript types yet, but it looks like Element.RGBA_8888 and Bitmap.Config.ARGB_8888 refer to slightly different byte orderings, so you might need to do some reordering work.

Either problem could be the cause of the green color of the resulting picture.
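The green cast is in fact the classic symptom of missing chroma: when only the Y plane is copied, the U and V bytes are read as 0 instead of the neutral 128, which drives red and blue down and green up during conversion. A one-pixel plain-Java sketch using the standard JFIF YUV-to-RGB coefficients (the class and method names are mine) shows the effect:

```java
public class GreenTintDemo {
    // Standard JFIF full-range YUV -> RGB conversion for a single pixel.
    // Chroma is centered at 128; values below 128 shift the hue toward green.
    static int[] yuvToRgb(int y, int u, int v) {
        int r = clamp(Math.round(y + 1.402f * (v - 128)));
        int g = clamp(Math.round(y - 0.344f * (u - 128) - 0.714f * (v - 128)));
        int b = clamp(Math.round(y + 1.772f * (u - 128)));
        return new int[] {r, g, b};
    }

    static int clamp(int x) { return Math.max(0, Math.min(255, x)); }

    public static void main(String[] args) {
        // Mid-gray pixel with correct neutral chroma stays gray...
        int[] gray = yuvToRgb(128, 128, 128);
        // ...but with the chroma bytes missing (read as 0), the same luma
        // comes out pure green: red and blue clamp to 0, green saturates.
        int[] broken = yuvToRgb(128, 0, 0);
        System.out.println(java.util.Arrays.toString(gray));   // [128, 128, 128]
        System.out.println(java.util.Arrays.toString(broken)); // [0, 255, 0]
    }
}
```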


Do you have to use RenderScript?
If not, you could transform the image from YUV to NV21 and then from NV21 to JPEG without any fancy structures.
First you take planes 0 and 2 to get NV21:

private byte[] convertYUV420ToN21(Image imgYUV420) {
    byte[] rez = new byte[0];

    ByteBuffer buffer0 = imgYUV420.getPlanes()[0].getBuffer();
    ByteBuffer buffer2 = imgYUV420.getPlanes()[2].getBuffer();
    int buffer0_size = buffer0.remaining();
    int buffer2_size = buffer2.remaining();
    rez = new byte[buffer0_size + buffer2_size];

    buffer0.get(rez, 0, buffer0_size);
    buffer2.get(rez, buffer0_size, buffer2_size);

    return rez;
}

Then you can use YuvImage’s built-in method to compress to JPEG. The w and h arguments are the width and the height of your image.

private byte[] convertN21ToJpeg(byte[] bytesN21, int w, int h) {
    byte[] rez = new byte[0];

    YuvImage yuv_image = new YuvImage(bytesN21, ImageFormat.NV21, w, h, null);
    Rect rect = new Rect(0, 0, w, h);
    ByteArrayOutputStream output_stream = new ByteArrayOutputStream();
    yuv_image.compressToJpeg(rect, 100, output_stream);
    rez = output_stream.toByteArray();

    return rez;
}


This is an answer/question.
On several similar posts, it’s recommended to use this script:

But I don’t know how to use it. Advice is welcome.


Did the above conversion work? I tried it using RenderScript, copying the first and last planes, and I still got a green-filtered image like the one above.