nicoco | 9 months ago

I agree with that. The classically used "AI benchmarks" need to be questioned. In my field, these guys have dropped a bomb, and no one seems to care: https://hal.science/hal-04715638/document

baxtr | 9 months ago

Can you give a brief summary, for an outsider to the field, of why this paper is a breakthrough?

mzl | 9 months ago

From a quick look (I hadn't seen the paper before), this seems to be a very good analysis of how results are reported, specifically for medical imaging benchmarks.

As is often the case with statistics, selecting just a single number to report (whatever that number is) hides a lot of different behaviours. Here, they show that reporting only the mean is a bad way to present the data: the confidence intervals (reconstructed by the paper's methods in most cases) show that the models can't really be distinguished by their means.
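
To make that concrete, here's a rough sketch (in Python, with made-up per-case Dice scores, not numbers from the paper) of the kind of check this motivates: a percentile-bootstrap CI on each model's mean and on the paired per-case difference. The bootstrap is a generic stand-in here, not the paper's exact reconstruction method.

    import numpy as np

    rng = np.random.default_rng(42)

    # Hypothetical per-case Dice scores for two segmentation models
    # evaluated on the same 100 test cases (illustrative only).
    model_a = np.clip(rng.normal(0.85, 0.08, size=100), 0, 1)
    model_b = np.clip(model_a + rng.normal(0.005, 0.05, size=100), 0, 1)

    def bootstrap_ci(scores, n_boot=10_000, alpha=0.05):
        """Percentile-bootstrap confidence interval for the mean score."""
        idx = rng.integers(0, len(scores), size=(n_boot, len(scores)))
        means = scores[idx].mean(axis=1)
        return np.quantile(means, [alpha / 2, 1 - alpha / 2])

    print("model A: mean=%.3f 95%% CI=%s" % (model_a.mean(), np.round(bootstrap_ci(model_a), 3)))
    print("model B: mean=%.3f 95%% CI=%s" % (model_b.mean(), np.round(bootstrap_ci(model_b), 3)))

    # The paired difference is what matters for "did B beat A?": if this
    # interval contains 0, the improvement is within the margin of error.
    print("B - A: 95%% CI=%s" % np.round(bootstrap_ci(model_b - model_a), 3))

With scores like these, B's mean is a few thousandths higher than A's, but the CI on the paired difference straddles zero, which is exactly the "new SOTA" failure mode the paper describes.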

nicoco | 9 months ago

I don't think it qualifies as a breakthrough. In short:

1. Segmentation is a very classical task in medical image processing.

2. Every day, papers come out claiming to beat the state of the art.

3. This paper shows that, most of the time, the state of the art has not actually been beaten, because the claimed improvements fall within the margin of error.