
android – openCV4android detect Shape and Color (HSV)

Posted by: admin June 15, 2020


I’m a beginner in OpenCV4Android and I would like some help if possible.
I’m trying to detect colored triangles, squares, or circles using my Android phone camera, but I don’t know where to start.
I have been reading the O’Reilly Learning OpenCV book and have picked up some knowledge of OpenCV.

Here is what I want to make:

1- Get the tracking color (just the HSV color) of the object by touching the screen
– I have already done this using the color blob example from the OpenCV4Android samples

2- Find shapes like triangles, squares, or circles on the camera, based on the color chosen before.

I have only found examples of finding shapes within a static image. What I would like to do is find them in real time using the camera.

Any help would be appreciated.

Best regards and have a nice day.

Answers:

If you plan to use the NDK for your OpenCV work, you can follow the same approach as the OpenCV tutorial 2 – Mixed Processing sample.

// on each camera frame, call your native method
public Mat onCameraFrame(CvCameraViewFrame inputFrame) {
    mRgba = inputFrame.rgba();
    // native call performing the color and shape detection;
    // getNativeObjAddr() passes the address of the Mat object (the camera
    // frame) to the native side as a long, so you don't have to create and
    // destroy a Mat object on each frame
    Nativecleshpdetect(mRgba.getNativeObjAddr());
    return mRgba;
}

public native void Nativecleshpdetect(long matAddrRgba);

On the native side:

JNIEXPORT void JNICALL Java_org_opencv_samples_tutorial2_Tutorial2Activity_Nativecleshpdetect(JNIEnv*, jobject, jlong addrRgba1)
{
    // mRgb1 wraps the input camera frame in place, so any manipulation you
    // do here is reflected on the live camera frame
    Mat& mRgb1 = *(Mat*)addrRgba1;

    // once you have the Mat object (i.e. mRgb1) you can implement all the
    // colour and shape detection algorithms you have learnt from the OpenCV book
}


Since all manipulations are done through pointers, you have to be a bit careful handling them. Hope this helps.
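As a rough illustration of the per-pixel colour test the native code would run, here is a plain-Java sketch of the idea behind OpenCV's Core.inRange with a lower/upper HSV bound around the picked colour. The radius values are hypothetical (chosen in the spirit of the color-blob sample's mColorRadius), not taken from any real sample:

```java
public class HsvRangeCheck {
    // Hypothetical tolerance around the picked colour, similar in spirit
    // to the color-blob sample's mColorRadius.
    static final int H_RADIUS = 25, S_RADIUS = 50, V_RADIUS = 50;

    // Returns true if (h, s, v) lies inside the box around the picked colour.
    // Note: real code should also handle hue wrap-around near 0/180.
    static boolean matches(int h, int s, int v, int pickedH, int pickedS, int pickedV) {
        return Math.abs(h - pickedH) <= H_RADIUS
            && Math.abs(s - pickedS) <= S_RADIUS
            && Math.abs(v - pickedV) <= V_RADIUS;
    }

    public static void main(String[] args) {
        // Picked colour: a saturated red-ish hue (H=5, S=220, V=210).
        System.out.println(matches(10, 200, 200, 5, 220, 210)); // true: close to picked colour
        System.out.println(matches(90, 200, 200, 5, 220, 210)); // false: hue too far away
    }
}
```

In real code you would let OpenCV do this over the whole frame with Core.inRange(hsvMat, lowerBound, upperBound, mask) rather than testing pixels one by one.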


Why don't you make use of JavaCV? I think it's a better alternative; you don't have to use the NDK at all for this.



If you check OpenCV’s Back Projection tutorial, you'll find it does what you are looking for (and a bit more).

Back Projection:

“In terms of statistics, the values stored in the BackProjection
matrix represent the probability that a pixel in an image belongs to
the region with the selected color.”

I have converted that tutorial to OpenCV4Android (2.4.8), as you were looking for; it does not use the Android NDK. You can see all the code on GitHub.

You can also check this answer for more details.
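To make the quoted definition concrete, here is a minimal plain-Java sketch of the idea behind back projection: build a hue histogram from the sampled (touched) region, then assign each image pixel the histogram value of its hue bin. The tiny hue arrays are made-up illustration data; in real code OpenCV's Imgproc.calcHist and Imgproc.calcBackProject do this for you:

```java
import java.util.Arrays;

public class BackProjectionSketch {
    // Build a normalized hue histogram from the sample (touched) region.
    static double[] hueHistogram(int[] sampleHues, int bins, int hueMax) {
        double[] hist = new double[bins];
        for (int h : sampleHues) hist[h * bins / hueMax]++;
        for (int i = 0; i < bins; i++) hist[i] /= sampleHues.length;
        return hist;
    }

    // Back projection: each pixel gets the histogram value of its hue bin,
    // i.e. the probability that the pixel belongs to the sampled colour region.
    static double[] backProject(int[] imageHues, double[] hist, int bins, int hueMax) {
        double[] prob = new double[imageHues.length];
        for (int i = 0; i < imageHues.length; i++)
            prob[i] = hist[imageHues[i] * bins / hueMax];
        return prob;
    }

    public static void main(String[] args) {
        int bins = 18, hueMax = 180;  // OpenCV hue range is 0..179
        int[] sample = {5, 7, 6, 8};  // hues from the touched region (all "red")
        double[] hist = hueHistogram(sample, bins, hueMax);
        int[] image = {6, 90, 7};     // red, green, red
        System.out.println(Arrays.toString(backProject(image, hist, bins, hueMax)));
        // red pixels score 1.0, the green pixel scores 0.0
    }
}
```

The bright regions of a real back-projected frame are then the candidates to pass on to contour/shape detection.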


Though it's a bit late, I would like to make a contribution to the question.

1- Get the tracking color (just the HSV color) of the object by
touching the screen – I have already done this using the color blob
example from the OpenCV4Android samples

Implement OnTouchListener in your activity.

In the onTouch function:

int cols = mRgba.cols();
int rows = mRgba.rows();

int xOffset = (mOpenCvCameraView.getWidth() - cols) / 2;
int yOffset = (mOpenCvCameraView.getHeight() - rows) / 2;

int x = (int) event.getX() - xOffset;
int y = (int) event.getY() - yOffset;

Log.i(TAG, "Touch image coordinates: (" + x + ", " + y + ")");

if ((x < 0) || (y < 0) || (x > cols) || (y > rows)) return false;

Rect touchedRect = new Rect();

touchedRect.x = (x > 4) ? x - 4 : 0;
touchedRect.y = (y > 4) ? y - 4 : 0;

touchedRect.width = (x + 4 < cols) ? x + 4 - touchedRect.x : cols - touchedRect.x;
touchedRect.height = (y + 4 < rows) ? y + 4 - touchedRect.y : rows - touchedRect.y;

Mat touchedRegionRgba = mRgba.submat(touchedRect);

Mat touchedRegionHsv = new Mat();
Imgproc.cvtColor(touchedRegionRgba, touchedRegionHsv, Imgproc.COLOR_RGB2HSV_FULL);

// Calculate average color of touched region
mBlobColorHsv = Core.sumElems(touchedRegionHsv);

int pointCount = touchedRect.width * touchedRect.height;
for (int i = 0; i < mBlobColorHsv.val.length; i++)
    mBlobColorHsv.val[i] /= pointCount;

mBlobColorRgba = converScalarHsv2Rgba(mBlobColorHsv);
mColor = mBlobColorRgba.val[0] + ", " + mBlobColorRgba.val[1] + ", " + mBlobColorRgba.val[2] + ", " + mBlobColorRgba.val[3];
Log.i(TAG, "Touched rgba color: (" + mBlobColorRgba.val[0] + ", " + mBlobColorRgba.val[1] +
        ", " + mBlobColorRgba.val[2] + ", " + mBlobColorRgba.val[3] + ")");

mRgba is a Mat object which was initialized in onCameraViewStarted as

mRgba = new Mat(height, width, CvType.CV_8UC4);
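The averaging step above (Core.sumElems over the touched region, divided by the pixel count) amounts to the following arithmetic, sketched in plain Java on a made-up 2x2 region of HSV pixels:

```java
public class AverageHsv {
    // Average each channel over the touched region -- the same arithmetic
    // as Core.sumElems(touchedRegionHsv) divided by width * height.
    static double[] averageColor(int[][] hsvPixels) {
        double[] avg = new double[3];
        for (int[] px : hsvPixels)
            for (int c = 0; c < 3; c++) avg[c] += px[c];
        for (int c = 0; c < 3; c++) avg[c] /= hsvPixels.length;
        return avg;
    }

    public static void main(String[] args) {
        // Four hypothetical HSV pixels from the touched region.
        int[][] region = {{10, 200, 200}, {12, 210, 190}, {8, 190, 210}, {10, 200, 200}};
        double[] avg = averageColor(region);
        System.out.printf("avg H=%.1f S=%.1f V=%.1f%n", avg[0], avg[1], avg[2]);
        // avg H=10.0 S=200.0 V=200.0
    }
}
```

Averaging over a small region rather than reading a single pixel makes the picked colour less sensitive to sensor noise.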

And for the 2nd part:

2- Find shapes like triangles, squares or circles on the camera, based
on the color chosen before.

I have tried to determine the shape of the selected contour using approxPolyDP:

MatOfPoint2f contour2f = new MatOfPoint2f(contours.get(0).toArray());
MatOfPoint2f approxCurve = new MatOfPoint2f();

// approximate the contour with a tolerance of 2% of its arc length
double approxDistance = Imgproc.arcLength(contour2f, true) * 0.02;
Imgproc.approxPolyDP(contour2f, approxCurve, approxDistance, true);

// convert back to MatOfPoint
MatOfPoint points = new MatOfPoint(approxCurve.toArray());
int vertexCount = points.toArray().length;
System.out.println("points length " + vertexCount);

if (vertexCount == 3) {
    mShape = "Triangle";
} else if (vertexCount == 4) {
    mShape = "Square";
} else if (vertexCount == 5) {
    mShape = "Pentagon";
} else if (vertexCount > 5) {
    Imgproc.drawContours(mRgba, contours, 0, new Scalar(255, 255, 0, 255));
    mShape = "Circle";
}

This was done in the onCameraFrame function, after I obtained the contour list.

For me, if the length of the point array was more than 5, it was usually a circle. But there are other algorithms for detecting circles and their attributes, such as the Hough circle transform (Imgproc.HoughCircles).
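The vertex-count logic above can be isolated into a small plain-Java helper with the same thresholds as the snippet (3 vertices for a triangle, anything above 5 treated as a circle):

```java
public class ShapeFromVertices {
    // Classify an approximated contour by its vertex count, mirroring the
    // approxPolyDP snippet above (> 5 vertices is treated as a circle).
    static String classify(int vertexCount) {
        switch (vertexCount) {
            case 3:  return "Triangle";
            case 4:  return "Square";
            case 5:  return "Pentagon";
            default: return vertexCount > 5 ? "Circle" : "Unknown";
        }
    }

    public static void main(String[] args) {
        System.out.println(classify(3));   // Triangle
        System.out.println(classify(4));   // Square
        System.out.println(classify(12));  // Circle
    }
}
```

Keeping this mapping in one method makes it easy to tune the thresholds, e.g. to distinguish a square from a general quadrilateral by also checking the aspect ratio of the bounding rect.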