howlin | 3 years ago
The biggest problem defeating pure AI teams is that they don't understand the domain well enough to know whether their datasets are representative. Humans are great at salience assessments: drawing on experience, they can ignore most of the examples and features they witness. This shapes dataset curation. When a naive ML system trains on such data, it won't appreciate the often implicit curation decisions that were made, and will thus be miscalibrated for the real world.
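The curation blindness described here can be shown with a toy sketch. All numbers and the "balanced dataset" setup are hypothetical: a curator balances a training set 50/50 for a condition whose real-world prevalence is 5%, and a naive model that takes the training frequency as its prior ends up badly miscalibrated.

```python
import random

random.seed(0)

# Hypothetical numbers throughout: the real-world prevalence of the
# positive class (say, a rare finding) is 5%.
REAL_PREVALENCE = 0.05

# A curator, applying expert salience judgements, balances the
# training set 50/50 -- an implicit decision the model never sees.
curated_labels = [1] * 500 + [0] * 500

# A naive model's prior for the positive class is just the
# training-set frequency.
naive_prior = sum(curated_labels) / len(curated_labels)

# A fresh sample drawn at the true prevalence.
world = [1 if random.random() < REAL_PREVALENCE else 0 for _ in range(100_000)]
world_rate = sum(world) / len(world)

print(f"model prior under curation: {naive_prior:.2f}")  # 0.50
print(f"real-world base rate:       {world_rate:.2f}")   # ~0.05
```

The gap between the two numbers is exactly the implicit curation decision the model never got to see.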
A domain expert can offer a lot of benefits. They may know how to engineer features in a way that is resilient to these salience issues. They can immediately recognize when a system is making stupid decisions on out-of-sample data. And if the ML model allows for introspection, the domain expert can assess whether its representations look sensible.
In scenarios where datasets actually do accurately resemble the "real world", it is possible for ML to transcend human experts. Linguistics is a pretty good example of this.
kspacewalk2 | 3 years ago
1) The AI expert is auxiliary here, and the domain expert is in the driver's seat. How can it be otherwise? You no more put the AI expert in charge than you'd put an electronic health record IT specialist in charge of the hospital's processes. The relationship needs to be outcome-focused, not technology-focused.
2) The end result is most likely to be a productivity tool that augments the abilities/accuracy/speed of human experts rather than replacing them. With AGI still more fiction than science, we aren't likely to actually be diagnosed by an AI radiologist in our lifetimes, nor will an AI scientist make an important scientific discovery. Ditch the hype and get to work on those productivity tools, because that's all you can do for the foreseeable future. That might seem like a disappointing reduction in ambition, but at least it's reality-based.
raincom | 3 years ago
This is called the frame problem in AI.