So they want to align the AI with corporate goals. At least they're being honest here. I want a personal assistant to summarize a page, remove all advertising and do a fact check. Can we have that?
Does an LLM need to “know” what a fact is? I think what you’re asserting is mostly philosophical, and it doesn’t affect the end result which is that you demonstrably can provide certain LLMs with incorrect information and have that information corrected.
The open web was a nice idea but the economics for it were never sustainable. Ads lead to SEO spam and AI can be easily hacked because no one has figured out how to make statistical correlations "unhackable" so you'll eventually get sophisticated attacks like AI SEO spam that game whatever neural network is doing the summarizing to inject ads into the summaries.
There is a way to fix all these problems by removing profit motives but that's obviously practically unworkable so the quality of the content is just going to keep getting worse and worse until everyone starts using services like arxiv and semanticscholar to get any useful information because those will be the only places where neither the hosting nor the content is motivated by profit incentives.
Our open web is as sustainable as anything open: that is, it’s perfectly sustainable as long as people are not being jerks[0].
However, unlike most other open things that are subject to physical constraints, the Web is global—and globally there is a huge discrepancy in quality of life, level of education, mental health, freedom of thought, etc.
Which might mean that the ratio of figurative jerks relative to non-jerks stands to increase for as long as we have countries that are oppressed, poor, or otherwise score abysmally in relevant regards—or as long as we have global, open Web.
Either of those things will change, I personally hope that the former does.
NoZebra120vClip|2 years ago
Seriously, though, I hope that you realize that LLMs are incapable of "fact-checking", because they don't know what "facts" are.
sillysaurusx|2 years ago
rmilejczz|2 years ago
A-Train|2 years ago
ZoomerCretin|2 years ago
fastball|2 years ago
climatologist|2 years ago
There is a way to fix all these problems by removing profit motives but that's obviously practically unworkable so the quality of the content is just going to keep getting worse and worse until everyone starts using services like arxiv and semanticscholar to get any useful information because those will be the only places where neither the hosting nor the content is motivated by profit incentives.
strogonoff|2 years ago
However, unlike most other open things that are subject to physical constraints, the Web is global—and globally there is a huge discrepancy in quality of life, level of education, mental health, freedom of thought, etc.
Which might mean that the ratio of figurative jerks relative to non-jerks stands to increase for as long as we have countries that are oppressed, poor, or otherwise score abysmally in relevant regards—or as long as we have global, open Web.
Either of those things will change, I personally hope that the former does.
[0] https://en.wikipedia.org/wiki/Tragedy_of_the_commons
unknown|2 years ago
[deleted]
moonchrome|2 years ago