Questions tagged [average-precision]
For questions related to the Average Precision metric.
57 questions
1 vote · 1 answer · 73 views
Average precision in integral form
Recently I came across this wonderful guide explaining the nature of average precision:
https://datascience-intro.github.io/1MS041-2022/Files/AveragePrecision.pdf
My question is about Lemma 4, ...
0 votes · 0 answers · 153 views
Average precision vs Average recall in object detection
There are two popular metrics for object detection: average precision (AP) and average recall (AR). Can you explain, with examples, in which cases to use AP and in which cases to use AR?
I agree that ...
1 vote · 1 answer · 153 views
How does the average_precision_score metric in scikit-learn work for non-probability prediction scores?
Scikit-learn has an AP metric function here.
The description of y_score (predictions) says: ...
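A minimal sketch of the point behind this question: average_precision_score only uses the ranking induced by y_score, so non-probability scores (e.g. raw decision-function values) give the same AP as any monotone rescaling of them. The toy labels and scores below are made up purely for illustration.

```python
import numpy as np
from sklearn.metrics import average_precision_score

y_true = np.array([0, 1, 1, 0, 1, 0])
probs  = np.array([0.1, 0.8, 0.6, 0.3, 0.9, 0.2])  # probability-like scores
raw    = 10 * probs - 3                             # arbitrary monotone transform, not probabilities

print(average_precision_score(y_true, probs))  # same value...
print(average_precision_score(y_true, raw))    # ...because the ranking is identical
```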
2 votes · 0 answers · 254 views
Choosing a metric for a LightGBM classifier (mean average precision at k)
I have developed a binary classifier using LightGBM, where I've primarily used the AUC metric due to its simplicity, ease of use, and interpretability.
Recently, I've taken an interest in utilizing ...
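A hedged sketch related to this question: evaluating a LightGBM binary classifier with average precision (area under the PR curve) on a held-out set, alongside the familiar AUC. The dataset, class imbalance, and hyperparameters below are placeholders, not a recommendation.

```python
import lightgbm as lgb
from sklearn.datasets import make_classification
from sklearn.metrics import average_precision_score, roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic, imbalanced binary problem just for illustration
X, y = make_classification(n_samples=5000, weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

model = lgb.LGBMClassifier(n_estimators=200, random_state=0)
model.fit(X_tr, y_tr)

scores = model.predict_proba(X_te)[:, 1]  # positive-class scores
print("ROC AUC:          ", roc_auc_score(y_te, scores))
print("Average precision:", average_precision_score(y_te, scores))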
1 vote · 1 answer · 767 views
Why use average_precision_score from sklearn? [duplicate]
I have precision and recall values and want to measure an estimator's performance:
...
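A small sketch of the distinction this duplicate usually comes down to: average_precision_score computes a step-wise sum over the PR points, whereas taking auc() over the precision-recall curve uses trapezoidal interpolation, which can be overly optimistic. The toy labels and scores are arbitrary.

```python
import numpy as np
from sklearn.metrics import auc, average_precision_score, precision_recall_curve

y_true  = np.array([0, 0, 1, 1, 1, 0, 1, 0])
y_score = np.array([0.1, 0.4, 0.35, 0.8, 0.65, 0.2, 0.9, 0.5])

prec, rec, _ = precision_recall_curve(y_true, y_score)
print("AP (step-wise sum): ", average_precision_score(y_true, y_score))
print("Trapezoidal PR AUC: ", auc(rec, prec))  # generally not identical
```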
2 votes · 0 answers · 70 views
Is there a way to affect the shape of the precision-recall curve?
As far as I know, for both ROC and PR curves, classifier performance is usually measured by the AUC. This suggests that classifiers with equivalent performance might have different ROC/PR ...
1 vote · 0 answers · 72 views
Regarding area ABOVE the curve - complement of AUROC
When handling probabilities close to 1, it is often more helpful to use the complement (i.e. 1-P).
For instance, we say "there is a 1 in 1,000,000 chance of an event occurring", instead of ...
0 votes · 0 answers · 603 views
Comparing AUC-PR between groups with different baselines
So I know that the area under the precision-recall curve is often a more useful metric than AUROC when dealing with highly imbalanced datasets. However, while AUROC can easily be used to compare ...
0 votes · 1 answer · 86 views
Better in AUC and AUC PR, but lower in the optimal threshold
Suppose we have two models; model A and model B.
Model A outperforms model B in both AUC ROC and AUC PR.
However, when we compare the two models with their optimal threshold values, model B ...
1 vote · 0 answers · 55 views
Does a model with 0.5 AUROC imply an average precision equal to the proportion of positive examples?
A random model has an area under the ROC curve equal to 0.5.
We also know that a random model has an area under the Precision-Recall curve equal to the proportion (p) of positive examples.
Then, here'...
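A quick empirical check of the premise in this question: with uninformative (random) scores, average precision tends toward the positive-class prevalence, while ROC AUC hovers around 0.5. The simulation parameters below are arbitrary.

```python
import numpy as np
from sklearn.metrics import average_precision_score, roc_auc_score

rng = np.random.default_rng(0)
prevalence = 0.1
y_true  = (rng.random(100_000) < prevalence).astype(int)
y_score = rng.random(100_000)  # scores carry no information about y_true

print("Prevalence:        ", y_true.mean())
print("ROC AUC:           ", roc_auc_score(y_true, y_score))            # ~0.5
print("Average precision: ", average_precision_score(y_true, y_score))  # ~prevalence
```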
0 votes · 0 answers · 60 views
Average Margin of Error
Sampling 100 random respondents for a binary true/false response from a total population of 220,000,000 yields a margin of error of 9.8%.
If a new random sample of 100 respondents from the same ...
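A back-of-the-envelope check of the quoted 9.8% figure, assuming it comes from the usual 95% margin of error for a proportion at the worst case p = 0.5. The finite population correction for N = 220,000,000 is negligible at n = 100, so the population size barely matters.

```python
import math

n, N, p, z = 100, 220_000_000, 0.5, 1.96
moe = z * math.sqrt(p * (1 - p) / n)        # classic large-population formula
fpc = math.sqrt((N - n) / (N - 1))          # finite population correction, ~1 here

print(f"MOE without FPC: {moe:.3%}")        # ~9.8%
print(f"MOE with FPC:    {moe * fpc:.3%}")  # essentially identical
```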
1 vote · 0 answers · 120 views
roc-auc≈0.5, accuracy≈precision≈average_precision≈65%, recall≈1 [closed]
After reading this and this, I tried it on my own data by fitting a 2-input model, i.e. text and numerical. The result remains similar even after several attempts at tuning the hyperparameters, e.g. embedding ...
1 vote · 0 answers · 102 views
Questions about mAP results from YOLOv2 paper
In the YOLOv2 paper, is the mAP metric displayed in Table 3 calculated in the same way as the metric in the column '0.5' from Table 5?
1 vote · 0 answers · 226 views
Can you estimate average precision from log loss?
I am doing my final thesis in the field of Deepfakes and their detection. The final outcome is to have a binary classifier which could predict which video was updated and which was not. In other words,...
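A sketch of why AP cannot be read off from log loss directly: AP depends only on the ranking of the scores, while log loss also depends on calibration, so a monotone rescaling changes the log loss but leaves AP untouched. Toy data only.

```python
import numpy as np
from sklearn.metrics import average_precision_score, log_loss

y_true = np.array([0, 1, 1, 0, 1, 0, 0, 1])
p      = np.array([0.2, 0.7, 0.6, 0.4, 0.9, 0.1, 0.3, 0.8])
p_sq   = p ** 2  # monotone transform: same ranking, different calibration

print("AP:      ", average_precision_score(y_true, p), average_precision_score(y_true, p_sq))
print("LogLoss: ", log_loss(y_true, p), log_loss(y_true, p_sq))
```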
0 votes · 1 answer · 1k views
What is the average precision in the case of no positives for a given category in the context of object detection
While calculating the average precision of an object detection model, I am wondering about an edge case. Suppose that at evaluation time, for a given category, no detections of that ...
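A hedged sketch of one common convention for this edge case: when a category has no ground-truth positives, its AP is undefined, so it is simply excluded from the mean over classes (COCO-style tooling does something similar by flagging such categories and skipping them). The per-class AP values below are placeholders.

```python
import math

# Hypothetical per-class APs; "unicorn" has no ground-truth positives -> NaN
per_class_ap = {"cat": 0.62, "dog": 0.55, "unicorn": math.nan}

valid = [ap for ap in per_class_ap.values() if not math.isnan(ap)]
mAP = sum(valid) / len(valid) if valid else float("nan")
print(f"mAP over {len(valid)} evaluable classes: {mAP:.3f}")
```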