
Questions tagged [average-precision]

For questions related to the Average Precision metric.

1 vote
1 answer
73 views

Recently I came across this wonderful guide explaining the nature of average precision: https://datascience-intro.github.io/1MS041-2022/Files/AveragePrecision.pdf. My question is about Lemma 4, ...
arstep • 11
0 votes
0 answers
153 views

There are two popular metrics for object detection: average precision and average recall. Can you explain, with examples, in which cases to use AP and in which cases to use AR? I agree that ...
Ars ML • 41
1 vote
1 answer
153 views

Scikit-learn has an AP metric function here. The description of y_score (predictions) says: ...
Anmol • 113
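One point worth illustrating here: scikit-learn's average_precision_score expects y_score to be continuous scores (probability estimates or decision-function values), not thresholded 0/1 predictions. A minimal sketch with made-up toy data:

```python
# Sketch: y_score must be continuous scores, e.g. predict_proba output,
# not hard class predictions. The toy arrays are illustrative only.
from sklearn.metrics import average_precision_score

y_true = [0, 0, 1, 1]
y_score = [0.1, 0.4, 0.35, 0.8]  # e.g. clf.predict_proba(X)[:, 1]

print(average_precision_score(y_true, y_score))  # ~0.83
```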
2 votes
0 answers
254 views

I have developed a binary classifier using LightGBM, where I've primarily used the AUC metric due to its simplicity, ease of use, and interpretability. Recently, I've taken an interest in utilizing ...
Programming Noob
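For a setup like this, both metrics can be read off the same predicted probabilities. A self-contained sketch on synthetic imbalanced data (nothing here reflects the asker's actual model or data):

```python
# Sketch: report ROC AUC and average precision side by side for a
# LightGBM classifier; data is synthetic and imbalanced (~5% positives).
from lightgbm import LGBMClassifier
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score, average_precision_score

X, y = make_classification(n_samples=2000, weights=[0.95], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

proba = LGBMClassifier(random_state=0).fit(X_tr, y_tr).predict_proba(X_te)[:, 1]
print("ROC AUC:", roc_auc_score(y_te, proba))
print("AP     :", average_precision_score(y_te, proba))
```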
1 vote
1 answer
767 views

I have precision and recall values and want to measure an estimator performance: ...
Ars ML • 41
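If the precision/recall pairs are already ordered by decreasing decision threshold, average precision is the step-wise sum AP = Σₙ (Rₙ − Rₙ₋₁) · Pₙ, which is scikit-learn's definition. A minimal sketch with made-up values:

```python
# Sketch: AP as the recall-weighted sum of precisions,
# AP = sum_n (R_n - R_{n-1}) * P_n. The arrays below are illustrative.
import numpy as np

precision = np.array([1.0, 0.5, 2 / 3])
recall = np.array([0.5, 0.5, 1.0])

ap = np.sum(np.diff(recall, prepend=0.0) * precision)
print(ap)  # 0.833...
```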
2 votes
0 answers
70 views

As far as I know, for both ROC and PR curves, classifier performance is usually measured by the AUC. This might indicate that classifiers with equivalent performance can have different ROC/PR ...
Gideon Kogan
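The premise can be demonstrated directly: two score rankings with identical ROC AUC can trace different curves and give different average precision. A small constructed example:

```python
# Sketch: two label orderings (read by descending score) with the same
# ROC AUC but different curve shapes, hence different average precision.
import numpy as np
from sklearn.metrics import roc_auc_score, average_precision_score

y_a = [1, 0, 1, 0, 1, 0]           # alternating hits and misses
y_b = [0, 1, 1, 1, 0, 0]           # one early miss, then all hits
scores = np.linspace(1.0, 0.5, 6)  # shared descending scores

print(roc_auc_score(y_a, scores), roc_auc_score(y_b, scores))  # both ~0.667
print(average_precision_score(y_a, scores),
      average_precision_score(y_b, scores))                    # ~0.76 vs ~0.64
```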
1 vote
0 answers
72 views

When handling probabilities close to 1, it is often more helpful to use the complement (i.e. 1 - P). For instance, we say "there is a 1 in 1,000,000 chance of an event occurring" instead of ...
Cyruno • 13
0 votes
0 answers
603 views

So I know that the area under the precision-recall curve is often a more useful metric than AUROC when dealing with highly imbalanced datasets. However, while AUROC can easily be used to compare ...
Eike P. • 3,382
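Since the chance-level AP on a dataset equals its positive proportion p, one hedged way to make AP more comparable across datasets with different imbalance is to rescale it against that baseline. ap_lift below is a hypothetical helper for illustration, not a standard scikit-learn function:

```python
# Hypothetical rescaling: 0 = chance level (AP = p), 1 = perfect.
import numpy as np

def ap_lift(ap: float, y_true) -> float:
    p = float(np.mean(y_true))  # chance-level AP on this dataset
    return (ap - p) / (1 - p)
```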
0 votes
1 answer
86 views

Suppose we have two models, model A and model B. Model A outperforms model B on both AUC ROC and AUC PR. However, when we compare the two models at their optimal threshold values, model B ...
R and C.F
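One common (though not unique) reading of "optimal threshold" is the one maximizing F1 along the PR curve; a sketch of that choice, with best_f1_threshold as an illustrative helper:

```python
# Sketch: pick the threshold that maximizes F1 on the PR curve.
import numpy as np
from sklearn.metrics import precision_recall_curve

def best_f1_threshold(y_true, scores) -> float:
    p, r, thr = precision_recall_curve(y_true, scores)
    f1 = 2 * p * r / np.clip(p + r, 1e-12, None)
    return float(thr[np.argmax(f1[:-1])])  # last P/R point has no threshold
```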
1 vote
0 answers
55 views

A random model has an area under the ROC curve equal to 0.5. We also know that a random model has an area under the Precision-Recall curve equal to the proportion (p) of positive examples. Then, here'...
killezio • 111
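The stated fact is easy to check by simulation: assigning uninformative random scores yields an AP close to the positive proportion p.

```python
# Simulation: a random scorer's average precision is ~p (here p = 0.1).
import numpy as np
from sklearn.metrics import average_precision_score

rng = np.random.default_rng(0)
y = rng.random(100_000) < 0.1  # ~10% positives
scores = rng.random(100_000)   # uninformative random scores

print(y.mean(), average_precision_score(y, scores))  # both ~0.1
```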
0 votes
0 answers
60 views

Sampling 100 random respondents for a binary true/false response from a total population of 220,000,000 yields a margin of error of 9.8%. If a new random sample of 100 respondents from the same ...
B Chase
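The quoted 9.8% follows from the usual worst-case formula MoE = z·sqrt(p(1 − p)/n) at p = 0.5 and 95% confidence; the population size of 220,000,000 is effectively irrelevant once n ≪ N:

```python
# Worked check: worst-case margin of error for n = 100 at 95% confidence.
import math

n, p, z = 100, 0.5, 1.96
print(z * math.sqrt(p * (1 - p) / n))  # 0.098 -> 9.8%
```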
1 vote
0 answers
120 views

After reading this and this, I tried fitting a two-input model (text and numerical) on my own data. The result remains similar even after several attempts at tuning the hyperparameters, e.g. embedding ...
user357565
1 vote
0 answers
102 views

In the YOLOv2 paper, is the mAP metric displayed in Table 3 calculated in the same way as the metric in the column '0.5' from Table 5?
Yandle • 1,229
1 vote
0 answers
226 views

I am doing my final thesis in the field of deepfakes and their detection. The final outcome is a binary classifier that can predict which video was manipulated and which was not. In other words, ...
MichiganMagician
0 votes
1 answer
1k views

In attempting to calculate the average precision of an object detection model, I am wondering about an edge case. Suppose that at evaluation time, for a given category, no detections of that ...
IntegrateThis
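On the edge case itself: under the plain (non-interpolated) AP definition, a category with ground-truth objects but zero detections achieves no recall, so its AP is 0, while categories with no ground truth at all are often excluded from the mAP average instead. A hypothetical helper sketching both conventions:

```python
# Sketch of one convention (COCO/VOC use interpolated variants of AP).
import numpy as np

def category_ap(tp_flags, num_ground_truth: int) -> float:
    """tp_flags: 1/0 per detection, sorted by descending confidence."""
    if num_ground_truth == 0:
        return float("nan")  # category often skipped in the mAP average
    if len(tp_flags) == 0:
        return 0.0           # no detections -> recall stays 0 -> AP = 0
    tp = np.cumsum(tp_flags)
    precision = tp / np.arange(1, len(tp_flags) + 1)
    recall = tp / num_ground_truth
    return float(np.sum(np.diff(recall, prepend=0.0) * precision))
```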
