(no title)
ml_more|1 year ago
What can be done to mitigate it (while not perfect) is pretty powerful: you can force-feed them the facts (RAG) and then verify the result. That's way better than trusting LLMs while doing neither of those things (which is what a lot of people do today anyway). See the five recent cases of lawyers getting in trouble for ChatGPT hallucinating citations of case law.
LLMs write better than most college students, so if you do those two things (RAG + check) you can get college-graduate-level writing with accurate facts... and that unlocks a fair bit of value out in the world.
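The RAG-plus-check loop described above can be sketched in a few lines. Everything here is illustrative, not a real legal pipeline: the toy corpus, the naive word-overlap retriever, and the citation regex are all stand-ins for a proper vector store and citation parser.

```python
import re

# Toy "knowledge base" of trusted documents. In a real RAG setup this would
# be a vector store over an actual corpus (e.g. a case-law database).
CORPUS = {
    "Smith v. Jones, 410 U.S. 113": "Held that the statute applies to agents.",
    "Doe v. Roe, 539 U.S. 558": "Held that the claim was time-barred.",
}

def retrieve(query: str, corpus: dict, k: int = 2) -> list[str]:
    """Naive retrieval: rank documents by word overlap with the query.
    These retrieved snippets are what gets stuffed into the LLM prompt."""
    q = set(query.lower().split())
    scored = sorted(corpus, key=lambda c: -len(q & set(corpus[c].lower().split())))
    return scored[:k]

def verify_citations(draft: str, corpus: dict) -> list[str]:
    """The 'check' half: return any cited case NOT in the trusted corpus."""
    cited = re.findall(r"[A-Z]\w+ v\. \w+, \d+ U\.S\. \d+", draft)
    return [c for c in cited if c not in corpus]

# Suppose the LLM drafted this, with one hallucinated citation:
draft = "As in Smith v. Jones, 410 U.S. 113, and Fake v. Case, 999 U.S. 1, ..."
print(verify_citations(draft, CORPUS))  # -> ['Fake v. Case, 999 U.S. 1']
```

The point is that the verification step is cheap and mechanical: any citation the model emits that doesn't ground out in the retrieved corpus gets flagged for a human, which is exactly the failure mode in the lawyer cases.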
Don't take my word for it: look at the proposed valuations of AI companies. Clearly investors think there's something there. The good news is that it hasn't been fully solved yet, so if someone wants to solve it there might be money on the table.
latexr|1 year ago
> Don't take my word for it look at the proposed valuations of AI companies. Clearly investors think there's something there.
Investors back whatever they think will make them money. They couldn’t give less of a crap whether something is valuable to the world, works well, or is in any way positive to others. All they care about is whether they can profit from it, and they’ll chase every idea in that pursuit.
Source: all of modern history.
https://www.sydney.edu.au/news-opinion/news/2024/05/02/how-c...
https://www.decof.com/documents/dangerous-products.pdf
Terr_|1 year ago
A not-flagrantly-illegal example of this might be casinos, where IMO it is basically impossible to argue the fleeting entertainment they offer offsets the financial ruin inflicted on certain vulnerable types of patron.
> All they care is if they can profit from it
Notably that isn't the same as the business itself being profitable: Some investors may be hoping they can dump their stake at a higher price onto a Greater Fool [0] and exit before the collapse.
[0] https://en.wikipedia.org/wiki/Greater_fool_theory
Gormo|1 year ago
"The world" is an abstraction: concretely, every bit of value that is generated within that abstraction accrues to someone in particular -- investors in AI projects, for example.
wk_end|1 year ago
Take the example of case law. Would you need to formalize the entirety of case law? Would the AI then need to produce a formal proof of its argument, so that you can ascertain that its citations are valid? How do you know that the formal proof corresponds to whatever longform writing you ask the AI to generate? Is this really something that LLMs are suited for? That the law is suited for?
threeseed|1 year ago
Of course. Enterprise companies take a long time to evaluate new technologies, so there is plenty of money to be made selling them tools over the next few years, as well as selling tools to those who are making tools.
But from my experience rolling out these technologies, only a handful of these companies will exist in 5-10 years. Because LLMs are "garbage in, garbage out", and no one has figured out how to keep the "garbage in" to a minimum.