item 20351237

jimfleming | 6 years ago

The article's demonstration of a counting model is horribly inaccurate, to the point where I'm not sure why it was included. Most people see "AI" as being either good at something or not; there's little room for nuance, such as some models being better than others. This kind of demonstration just weakens the reader's confidence in more fully developed results or different approaches. I'm not sure this brief statement offsets the prominent visuals:

> On the day of the protest, Mr. Yip and the A.I. team used technology that is much more advanced. They spent weeks training their program to improve its accuracy in analyzing crowd imagery.

Setting aside the presentation, from the photos the researchers appear to be using object detection rather than density estimation. This choice is problematic given the quantities involved and the need for temporal consistency.
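
For large, heavily occluded crowds, density estimation sidesteps per-person detection entirely: the model outputs a per-pixel density map, and the count is just the integral of that map. A minimal sketch of the counting step, assuming a hypothetical model whose output is a density map in units of people per pixel (the model itself is omitted):

```python
import numpy as np

def count_from_density_map(density_map: np.ndarray) -> float:
    """The crowd count is the integral (sum) of the density map."""
    return float(density_map.sum())

# Toy stand-in for a model's output: a 4x4 density map whose
# values sum to 3.0, i.e. roughly three people in the frame.
toy_map = np.array([
    [0.0, 0.5, 0.5, 0.0],
    [0.0, 1.0, 0.0, 0.0],
    [0.0, 0.0, 0.5, 0.5],
    [0.0, 0.0, 0.0, 0.0],
])
print(count_from_density_map(toy_map))  # 3.0
```

Because the count is a continuous sum rather than a tally of discrete detections, small frame-to-frame changes perturb it smoothly, which is part of why it handles the temporal-consistency concern better than object detection.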

I'm also skeptical of using human volunteers and surveys to calibrate the model. Humans are terrible at counting large numbers of people in real time; that's a central point of the article, with different groups of people providing wildly different counts.

fspeech | 6 years ago

You can count from a still picture or a video segment and use that to test or calibrate. So it may just be inaccurate reporting that gives the impression the calibration depends on humans surveying live action, which is exactly what they are working to replace. If the researchers publish their results, the exact methodology will tell; until then I assume they are competent.