Questions tagged [accuracy]
Accuracy of an estimator is the degree of closeness of the estimates to the true value. For a classifier, accuracy is the proportion of correct classifications. (This second usage is not good practice. See the tag wiki for a link to further information.)
846 questions
4 votes · 1 answer · 123 views
Accuracy in Machine Learning vs. Accuracy in Statistics vs. pass@1,1 in Generative Modeling: What's the Difference?
I've encountered the term "accuracy" used differently across several evaluation contexts, and I want to clearly understand their mathematical and conceptual distinctions using consistent ...
0 votes · 2 answers · 53 views
How to investigate if my poor classification is because of bad data or some other reason [duplicate]
I currently have a RandomForestClassifier that is classifying workload based on fNIRS data. Our classification accuracy is about 49%. I want to investigate why our classification accuracy is so bad and ...
1 vote · 1 answer · 43 views
Two approaches to go from 2AFC accuracy to d′ - how do they differ and which should I use?
I’ve recently encountered two approaches used to express performance on perceptual tasks as d' when trying to convert (non-linear) accuracy on a 2AFC (2-alternative forced choice) task to a linear ...
7 votes · 1 answer · 197 views
Doubling your accuracy - extension
Frederick Mosteller's 50 Challenging Problems in Probability has a nice question I have not seen before, and I was wondering whether it could be extended.
49. Doubling your accuracy
An unbiased ...
1 vote · 0 answers · 69 views
Order sensitivity of scoring rules
This is from another question here.
The theorem below is from Lambert's paper on forecasting (Elicitation and Evaluation of Statistical Forecasts):
$\textbf{Proposition}\quad 1:$ Let $(\Theta = \{\...
3 votes · 1 answer · 127 views
Calculation of geometric mean for classification
In binary classification, the geometric mean is defined as $\sqrt{\text{Precision} \times \text{Recall}} = \sqrt{ \frac{TP}{TP+FP} \times \frac{TP}{TP+FN} }$. But there can be different TP/FP/FN ...
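The formula quoted in this question can be computed directly from the confusion-matrix counts; a minimal sketch (the helper name and example counts are illustrative, not from the question):

```python
from math import sqrt

def g_mean_pr(tp, fp, fn):
    """Geometric mean of precision and recall for the positive class,
    computed from binary confusion-matrix counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return sqrt(precision * recall)

# tp=40, fp=10, fn=20 -> precision 0.8, recall 2/3
print(round(g_mean_pr(40, 10, 20), 4))
```

Note that, as the excerpt hints, swapping which class is treated as "positive" changes TP/FP/FN and hence the result.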
1 vote · 0 answers · 76 views
How to measure accuracy between multiple raters and a reference value?
I am interested in assessing the accuracy of raters to a reference standard for subjective ratings on a Likert scale from 1-10 as in:
...
3 votes · 1 answer · 229 views
How do I calculate Harrell's c statistic for a Royston Parmar model?
I am trying to calculate the concordance (c) statistic for a Royston-Parmar model. My model stratifies the baseline hazard and uses splines to model log(t).
I am not sure if I am calculating the c-...
0 votes · 0 answers · 80 views
Is it wrong to use the test set to calculate the optimal threshold for binary classification and then calculate accuracy on the same test set?
I have a dataset that has been split into 2 parts, train and test set. After training a model with the training set to classify between class 0 and 1, I used the sklearn roc_curve to calculate the ...
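The leakage this question describes can be avoided by choosing the cutoff on a separate validation split and only then scoring the untouched test set. A minimal pure-Python sketch (the data and the Youden's-J selection criterion are illustrative assumptions, not from the question):

```python
# Hypothetical scores/labels; in practice these come from a held-out
# validation split, NOT the final test set.
val_scores = [0.1, 0.4, 0.35, 0.8, 0.65, 0.2, 0.9, 0.55]
val_labels = [0, 0, 1, 1, 1, 0, 1, 0]

def best_threshold(scores, labels):
    """Pick the cutoff maximizing Youden's J = TPR - FPR on validation data."""
    pos = sum(labels)
    neg = len(labels) - pos
    best_t, best_j = 0.5, -1.0
    for t in sorted(set(scores)):
        tp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 1)
        fp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 0)
        j = tp / pos - fp / neg
        if j > best_j:
            best_t, best_j = t, j
    return best_t

print(best_threshold(val_scores, val_labels))
```

The threshold chosen this way is then applied, once, to the test set; tuning it on the test set itself optimistically biases the reported accuracy.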
2 votes · 1 answer · 86 views
Metric choice for Machine Learning algorithm
I am currently building an ML model for a binary classification problem.
I am using a curated dataset provided in a research paper, which is perfectly balanced. However, it is ...
0 votes · 0 answers · 33 views
Evaluating Accuracy of mixture model clustering and categorisation
I am running a mixture model with no free parameters; for a given data point, it just evaluates the likelihood of belonging to one cluster. Separately, I have a ground truth about these ...
2 votes · 3 answers · 153 views
Testing forecasting accuracy - outliers [with example]
I have a simple model that produces forecast values. The model works on hourly data. Now, I am only interested in observations with flags. I would like to identify where the forecasts are ...
1 vote · 2 answers · 151 views
Is it possible that false-positive rate decreases with increasing prevalence?
I am interested in the effect of prevalence on prediction performance. Chouldechova (2016) states that:
[w]hen using a test-fair [recidivism prediction instrument] in populations where recidivism ...
1 vote · 1 answer · 134 views
How to evaluate performance of classification model for different subsets of classes?
Consider a classification problem with N classes. While this may seem strange, I have a model that processes features and essentially evaluates which classes are impossible (or near ...
0 votes · 0 answers · 34 views
assessing classifier accuracy when class presence is scarce
What can I do to assess a classifier's accuracy when class presence is scarce?
Setup 1: I have 1000 boxes, 500 contain gold. I build an automated tool to find the gold.
The recommended approach would ...