
How is location accuracy measured in Android?

Posted by: admin March 11, 2020


Does anyone know the proper interpretation of the accuracy measurements returned by getAccuracy()? For instance, are they calculated as:

  • Circular Error Probable (meaning, if I understand correctly, the radius of a 50% confidence circle)?

  • Radius of 95% confidence circle?

  • something else?

In addition, what are the actual calculations that are used and how much can we rely on them? Does it depend on the source of the location estimate (GPS vs. network)?

Many thanks for any advice you can give me.

Answers:

To answer one part of the question: the number is the radius of a 68% confidence circle, meaning there is a 68% chance that the true location is within that many meters of the reported point. Assuming the errors are normally distributed (which, as the docs say, is not necessarily true), that corresponds to one standard deviation. For example, if Location.getAccuracy returns 10, there is a 68% chance the true location of the device is within 10 meters of the reported coordinates.
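If you take the 68% interpretation at face value, you can rescale the reported radius to other confidence levels. The sketch below assumes the horizontal error is circular bivariate normal, so the radial error is Rayleigh-distributed; that distributional assumption is mine, not something the docs guarantee:

```python
import math

def radius_at_confidence(accuracy_68: float, p: float) -> float:
    """Rescale a 68%-confidence radius to confidence p, assuming the
    horizontal error is circular bivariate normal, so the radial error
    is Rayleigh-distributed: P(r <= R) = 1 - exp(-R^2 / (2*sigma^2))."""
    # Recover sigma from the 68% radius: R68 = sigma * sqrt(-2 * ln(1 - 0.68))
    sigma = accuracy_68 / math.sqrt(-2.0 * math.log(1.0 - 0.68))
    # Invert the Rayleigh CDF at the requested confidence level.
    return sigma * math.sqrt(-2.0 * math.log(1.0 - p))

# A getAccuracy() value of 10 m (68%) corresponds to roughly 16 m at 95%.
print(round(radius_at_confidence(10.0, 0.95), 1))
```

Under this model a 10 m accuracy at 68% confidence stretches to about 16 m at 95% confidence, which is worth keeping in mind before treating the raw number as a hard bound.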



Determining location is a tricky task when battery life is limited and there is no GPS signal indoors or in areas with many tall buildings. But Android makes it much easier: when you request a location, you just specify what accuracy you need.

If you specify that you want an accuracy of, for example, 100 meters, Android will try to get a location. If it can get a fix accurate to 70 meters, it will return it to you; but if Android can only get a location with an error larger than 100 meters, your application will wait and receive nothing until a fix of the requested accuracy is available.
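That filtering behavior can be sketched as a toy model (this is illustrative only, not the actual framework code; the fix values and the 100 m threshold are made up):

```python
def first_fix_within(fixes, required_accuracy_m):
    """Return the first fix whose reported accuracy meets the requirement,
    or None if no fix is good enough (the app keeps waiting)."""
    for lat, lon, accuracy in fixes:
        if accuracy <= required_accuracy_m:
            return (lat, lon, accuracy)
    return None

# Fixes arriving over time: cell (1000 m), WiFi (800 m), GPS (70 m).
fixes = [(42.0, 23.3, 1000.0), (42.0, 23.3, 800.0), (42.0001, 23.3001, 70.0)]
print(first_fix_within(fixes, 100.0))  # only the 70 m GPS fix qualifies
```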

Typically Android will first read the Cell ID and send it to a Google server, which maps Cell IDs to locations and returns a latitude and longitude with a coarse accuracy, for example 1000 meters. Meanwhile Android will also scan for the WiFi networks in the area and send information about them to the Google server as well; if possible, the server will return a new location with better accuracy, for example 800 meters.

By this time the GPS will be on. A GPS receiver needs at least 30 seconds from a cold start to get a fix; if it gets one, it will return a latitude and longitude, again with an accuracy value, the best available, for example 100 meters. The longer the GPS runs, the better the accuracy you will get.

Important notice: the first two methods require an internet connection. If there is no data connection you will have to wait for GPS, and if the device is inside a building you will probably get no location at all.
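The coarse-to-fine progression described above (cell, then WiFi, then GPS) can be modeled as keeping the most accurate estimate seen so far; the source names and accuracy numbers here are invented for illustration:

```python
def best_estimate(updates):
    """Track the most accurate location estimate as updates arrive from
    progressively better sources (cell -> WiFi -> GPS)."""
    best = None
    for source, accuracy_m in updates:
        # A smaller accuracy radius means a better estimate.
        if best is None or accuracy_m < best[1]:
            best = (source, accuracy_m)
    return best

updates = [("cell", 1000.0), ("wifi", 800.0), ("gps", 100.0), ("gps", 30.0)]
print(best_estimate(updates))  # the GPS fix improves the longer it runs
```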


The documentation on getAccuracy says that it returns the accuracy in meters.
I would guess this means that if you get a return value of 60, the true position lies somewhere within a circle of 60-meter radius around the reported position.


As far as I can see from a quick glance at the Android source code, this is dependent on the hardware of the device and what value it chooses to return.

The GpsLocationProvider.java file has a reportLocation method which gets called by the native code and gets passed the accuracy as a value. Thus, no calculation seems to be happening in the framework at least.

The qcom (which I believe is Qualcomm) GPS git repo passes the hor_unc_circular parameter ("horizontal uncertainty, circular") for accuracy, which seems to imply that at least that implementation is using CEP.
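If a chipset really reports CEP (a 50% radius) while the API is read as a 68% radius, the two differ by a constant factor under a circular-normal error model. This is a back-of-the-envelope conversion of my own, not something the framework is known to perform:

```python
import math

def cep_to_r68(cep_m: float) -> float:
    """Convert a 50%-confidence (CEP) radius to a 68%-confidence radius,
    assuming circular bivariate normal errors (Rayleigh radial CDF)."""
    # Both radii come from the same Rayleigh CDF, so sigma cancels out.
    scale = math.sqrt(math.log(1.0 - 0.68) / math.log(1.0 - 0.50))
    return cep_m * scale

# A CEP of 10 m corresponds to roughly 12.8 m at 68% confidence.
print(round(cep_to_r68(10.0), 1))
```

So the discrepancy between the two conventions is around 28%, enough to matter if you compare accuracy values across devices.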


If, as the docs state, it is an accuracy, then the user's actual position is somewhere within QUOTED_LOCATION +/- ACCURACY. So the accuracy defines a radius within which you could expect the user to be. What the docs don't say is how sure you can be that the user is within that radius – a common standard is 95%, so I guess that is it.


I get that you’re asking for a definite answer in terms of probability, but I think there are two things to consider here. First, it’s up to the provider to decide what to put in this value, so depending on the provider it may just be a bad guess. Second, it may help to think of this as a rounding problem: if I’m trying to calculate your location from a number of inputs, and some of those inputs are only available to a certain number of significant digits, then I can only calculate a location to a certain number of significant digits.

Think of it this way: what is “about” one plus “about” one hundred? Probably about one hundred, because the uncertainty in the one hundred is likely larger than the magnitude of the one. If I suddenly say the answer is about 101, I may be implying a level of accuracy that wasn’t there. However, if I actually specify the accuracy, then I can say that 100 ± 10 plus 1 ± 0.1 is 101 ± 10. I get that this generally refers to something like a 95% confidence level (standard error), but again, that all assumes the provider understands statistics and isn’t just guessing.
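The rounding argument can be made concrete with the textbook root-sum-of-squares rule for combining independent uncertainties (a standard propagation formula, not anything the location providers are documented to use):

```python
import math

def add_with_uncertainty(a, da, b, db):
    """Add two independent measurements a +/- da and b +/- db.
    Uncertainties of independent quantities combine in quadrature."""
    return a + b, math.sqrt(da**2 + db**2)

# "About 100" plus "about 1": the small term barely moves the uncertainty.
value, err = add_with_uncertainty(100.0, 10.0, 1.0, 0.1)
print(f"{value} +/- {round(err, 2)}")  # 101.0 +/- 10.0
```

The 0.1 contribution is swamped by the 10, which is exactly why quoting "101" without an accuracy would overstate what is actually known.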