
comparison – Compute statistical significance with Excel

Posted by: admin May 14, 2020

Questions:

I have two columns and multiple rows of data in Excel. Each column represents an algorithm, and the values in the rows are the results of these algorithms under different parameters. I want to run a statistical significance test on these two algorithms in Excel. Can anyone suggest a function?

As a result, it would be nice to state something like "Algorithm A performs 8% better than Algorithm B with 0.9 probability (or at a 95% confidence level)".

The wikipedia article explains accurately what I need:
http://en.wikipedia.org/wiki/Statistical_significance

It seems like a very easy task, but I failed to find a suitable statistical function.

Any advice on a built-in Excel function, or function snippets, is appreciated.

Thanks.

Edit:

After tharkun’s comments, I realized I should clarify some points:
The results are simply real numbers between 1 and 100 (they are percentage values). Each row represents a different parameter, so the values in a row are the two algorithms' results for that parameter. The results do not depend on each other.
When I take the average of all values for Algorithm A and Algorithm B, I see that the mean of Algorithm A's results is 10% higher than Algorithm B's. But I don't know whether this is statistically significant. In other words, maybe for one parameter Algorithm A scored 100% higher than Algorithm B while Algorithm B scored higher for the rest, and the 10% difference in the averages comes only from that one result.
And I want to do this calculation using just Excel.

How to&Answers:

Thanks for the clarification. In that case you want an independent-samples t-test, meaning you want to compare the means of two independent data sets.

Excel has a function for this: TTEST (called T.TEST in Excel 2010 and later); that's what you need.

For your example you should probably use two tails and type 2.
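With the data laid out as described (one column per algorithm), the call might look like the following; the ranges A2:A31 and B2:B31 are assumptions about where your data sit:

```
=TTEST(A2:A31, B2:B31, 2, 2)
```

The third argument is the number of tails (2 = two-tailed) and the fourth is the test type (2 = two-sample, equal variance).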

The formula outputs a probability value, the p-value, also known as the probability of an alpha error: the error you would make if you concluded the two data sets are different when in fact they are not. The lower this probability, the more confident you can be that your sets really differ.
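For intuition, here is a rough Python sketch of the quantity behind the type 2 (equal-variance) test. The t statistic is the same one TTEST is based on; the p-value below uses a large-sample normal approximation (reasonable with roughly 30+ values per set), whereas Excel uses the exact t distribution. The sample scores are invented for illustration:

```python
from statistics import NormalDist, mean, variance

def pooled_t_statistic(a, b):
    """t statistic for an independent two-sample t-test with
    pooled (equal) variances, as in the type 2 test."""
    na, nb = len(a), len(b)
    # pooled estimate of the common variance
    sp2 = ((na - 1) * variance(a) + (nb - 1) * variance(b)) / (na + nb - 2)
    # standard error of the difference of the means
    se = (sp2 * (1 / na + 1 / nb)) ** 0.5
    return (mean(a) - mean(b)) / se

# made-up percentage scores for two algorithms
a = [80, 85, 90, 95]
b = [70, 75, 80, 85]
t = pooled_t_statistic(a, b)
# two-tailed p-value, large-sample normal approximation
p_approx = 2 * (1 - NormalDist().cdf(abs(t)))
print(round(t, 3))  # 2.191
```

With these toy numbers the approximate p-value is about 0.03; with samples this small, Excel's exact t-distribution p-value would be noticeably larger.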

You should only accept that the two data sets differ if the value is lower than 0.01 (1%), or for critical outcomes even 0.001 or lower. You should also know that the t-test needs at least around 30 values per data set to be reliable, and that the type 2 test assumes the two data sets have equal variances. If equal variances cannot be assumed, use the type 3 test instead.
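When the two result sets have clearly different spreads, type 3 performs Welch's test instead of pooling the variances. A minimal sketch of the statistic and its degrees of freedom, again with invented sample data:

```python
from statistics import mean, variance

def welch_t(a, b):
    """Welch's t statistic and its Welch-Satterthwaite degrees of
    freedom -- the unequal-variance test behind TTEST type 3."""
    va, vb = variance(a) / len(a), variance(b) / len(b)
    t = (mean(a) - mean(b)) / (va + vb) ** 0.5
    # effective degrees of freedom; smaller when variances are unequal
    df = (va + vb) ** 2 / (va ** 2 / (len(a) - 1) + vb ** 2 / (len(b) - 1))
    return t, df

# the second sample is deliberately more spread out
t, df = welch_t([80, 85, 90, 95], [60, 70, 90, 100])
print(round(t, 3), round(df, 2))  # 0.775 3.74
```

Note how the degrees of freedom drop well below n1 + n2 - 2 = 6, which makes the test appropriately more conservative.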

http://depts.alverno.edu/nsmt/stats.htm