
I am looking for the best statistical test to determine whether a bimodal/multimodal sample is significantly different from a given value.

I am making a device that deviates a certain result away from zero. I can collect a sample of, say, 1000 results, and I want to evaluate the performance of my device by seeing how much the collected sample differs from zero, knowing that the sample distribution is expected to be at least bimodal, with modes on either side of zero.

Thank you!

Adding more context: Thank you for the quick reply. Here is my situation.

There is an instrument that measures subtle movement of the object it is in contact with. In a controlled experimental room, it will always read 0. But we think that a sound wave with certain characteristics can shift that reading, although the shift could be to either side. So I want to test whether the sound can truly do that reliably by emitting such a sound wave n times.

Essentially I want to perform a test similar to a one-sample t-test (whether the collected sample differs significantly from zero), except that the sample could deviate to either side rather than to only one side.

  • Are you sure that's what you want? And what exactly do you mean by "differ"? If you are talking about the mean, median, or any other measure of central tendency, I think you are making a mistake. If you edit your question to give some context, someone might be able to give a better answer. – Commented yesterday
  • Are you looking for some kind of unimodality testing? For instance this one: asiffer.github.io/libfolding/unimodality-testing – Commented yesterday

2 Answers


Let's simulate some simple examples, to clarify what you may want to do.

Let's start with a bimodal sample, which might look like this: [Histogram 1]

It is definitely bimodal, but would you say your device worked well? Maybe not, because the densities around 0 are still quite high (in many instances, it either did not "push" the result at all, or not by much).

Now, let's look at another bimodal distribution:

[Histogram 2]

I think we could agree that your device worked much better this time (far fewer observations at or around 0).

And let's look at a third example:

[Histogram 3]

I think we can agree that your device now works "perfectly".

But...

  1. All 3 are bimodal (and I do not think that declaring one "more bimodal" than another is sensible).
  2. In all 3 cases the mean is 0 and the median is 0 (the variances are different, but that will not lead us anywhere useful), so no t-test, median test, etc. will let you discern a difference; the sketch below demonstrates this.
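
The original histograms are not reproduced here, but a rough sketch of how such samples might be simulated (the mixture parameters below are made up for illustration) shows the point numerically:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1000

# Three hypothetical bimodal samples, symmetric around 0, with the modes
# progressively farther from 0 (parameters are made up for illustration).
samples = {
    "Histogram 1": rng.normal(loc=rng.choice([-1.0, 1.0], n), scale=1.0),
    "Histogram 2": rng.normal(loc=rng.choice([-3.0, 3.0], n), scale=1.0),
    "Histogram 3": rng.normal(loc=rng.choice([-6.0, 6.0], n), scale=1.0),
}

for name, x in samples.items():
    print(f"{name}: mean = {x.mean():+.3f}, median = {np.median(x):+.3f}, "
          f"sd = {x.std():.3f}")
# All three means and medians come out near 0, so a t-test or median test
# against 0 cannot tell a "good" device from a poor one.
```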

So it is not a t-test (for means), a median test, or a test of multimodality that you need. Instead, it is a test of what proportion of your observations falls within a range $[-\epsilon, +\epsilon]$, with $\epsilon$ of your choosing (in your context: how far from 0 must the object be for you to say it is meaningfully far from 0?).
You also need to define another parameter $p$: the largest proportion of observations in the $[-\epsilon, +\epsilon]$ interval (the "0 interval") that you would still consider acceptable (in other words, you would be satisfied with the performance of your device even if that proportion of results fell in the "0 interval").

The test you can then use is a binomial test. You said that you can "easily" collect 1000 data points, so the (relatively) low power of a binomial test is not an issue.
Dichotomize your observations: inside the "0 interval", or outside. Then compute the upper bound of the confidence interval for the proportion observed in the "0 interval". If it is below $p$, you have demonstrated, at the selected significance level, that your device leaves at most $100p\%$ of the results in the "0 interval", and so 'gets the job done'.

Yes, dichotomizing a continuous variable is (usually appropriately) frowned upon, but you can collect samples large enough that the objections do not matter in practice. And all the continuous proportion tests I know of rely on normal approximations or normal assumptions, whereas a bimodal distribution is anything but normal...
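
For reference, here is a minimal sketch of that procedure in Python, using scipy.stats.binomtest; the values of epsilon and p and the placeholder readings are all assumptions to be replaced with your own:

```python
import numpy as np
from scipy.stats import binomtest

# Assumed values -- pick epsilon and p from your own context.
epsilon = 0.5   # half-width of the "0 interval"
p = 0.10        # largest acceptable proportion inside the "0 interval"

rng = np.random.default_rng(0)
# Placeholder data: replace with your ~1000 instrument readings.
readings = np.concatenate([rng.normal(-3, 1, 500), rng.normal(3, 1, 500)])

# Dichotomize: count observations inside the "0 interval".
inside = int(np.sum(np.abs(readings) <= epsilon))
n = len(readings)

# One-sided binomial test of H0: true proportion inside >= p
# against H1: true proportion inside < p.
result = binomtest(inside, n, p, alternative="less")
ci_upper = result.proportion_ci(confidence_level=0.95).high

print(f"{inside}/{n} readings inside the 0 interval")
print(f"p-value = {result.pvalue:.4g}, 95% CI upper bound = {ci_upper:.4f}")
# If the CI upper bound is below p (equivalently, the test rejects H0),
# the device leaves at most 100*p% of results in the "0 interval".
```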

  • This is super helpful and thank you so much for taking the time to produce examples. – Commented 6 hours ago

An alternative to what jginestet suggested is to take the absolute value of the measurements and then see how far they are from 0. You could run a statistical test on these absolute values, checking whether they differ from 0. This would work well in the three examples in jginestet's answer, and it doesn't require dichotomizing the data.
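
A minimal sketch of this idea in Python (the readings below are placeholders; a one-sample t-test on the absolute values is one reasonable choice, and a nonparametric alternative such as the Wilcoxon signed-rank test could also be considered):

```python
import numpy as np
from scipy.stats import ttest_1samp

rng = np.random.default_rng(1)
# Placeholder data: replace with your instrument readings.
readings = np.concatenate([rng.normal(-3, 1, 500), rng.normal(3, 1, 500)])

abs_readings = np.abs(readings)

# One-sided one-sample t-test: is the mean absolute deviation greater than 0?
# (A practically meaningful threshold could be used in place of 0.)
res = ttest_1samp(abs_readings, popmean=0, alternative="greater")
print(f"mean |reading| = {abs_readings.mean():.3f}, p-value = {res.pvalue:.3g}")
```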

  • That's true, I should try that! Thank you so much! – Commented 6 hours ago
