Calling cutting-edge, consumer-facing models like ChatGPT-4 garbage-generating machines is very intellectually dishonest. These models are fully capable of drafting these kinds of texts, especially when qualified staff are guiding the model.
Well, I just popped in "Write a new Federal law banning the collection of melted snow by individuals or small-business proprietorships for the purpose of protecting endangered plant species. Include a loophole that excludes minority-owned businesses or people who contribute a sufficient amount of money to carbon sequestration technologies or senators or representatives who voted in favor of strongly pro-union causes." and I won't burden HN with the results but it definitely has the shape of a fully-fledged bill for Congress to pass.
One problem ChatGPT would have in its current form is that it would need auxiliary assistance to craft a larger bill, as bills easily exceed its current context window. But that's a solvable problem too.
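One common workaround for a limited context window is to draft the bill section by section, carrying only a compressed running summary between calls. A minimal sketch of that idea, where `generate` is a hypothetical stand-in for any LLM completion call and the outline and summary limit are illustrative assumptions:

```python
def generate(prompt: str) -> str:
    # Placeholder: a real implementation would call an LLM API here.
    return f"[drafted text for: {prompt[:40]}...]"

def draft_bill(outline: list[str], summary_limit: int = 500) -> str:
    """Draft each section separately, passing along a short running summary
    so no single prompt has to hold the entire bill."""
    sections = []
    summary = ""
    for heading in outline:
        prompt = (
            f"Context so far: {summary[:summary_limit]}\n"
            f"Draft the next section of the bill: {heading}"
        )
        text = generate(prompt)
        sections.append(text)
        # Keep only a compressed trace of each section as carried context.
        summary += f" {heading}: {text[:80]}"
    return "\n\n".join(sections)

bill = draft_bill(["Definitions", "Prohibited conduct", "Exemptions", "Penalties"])
print(len(bill.split("\n\n")))  # one drafted chunk per outline section
```

The same shape underlies real long-document pipelines; the hard part in practice is deciding what the compressed summary must preserve.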
They may not generate garbage per se, but they do generate bullshit. Or, to put it more positively, they are consummate improvisers. The amount of guidance they require cannot be overstated: while they demonstrate a phenomenal capacity for producing and understanding language, they do not yet demonstrate much capacity for control or alignment.
A technology can be wildly powerful, mind-blowingly cool, and deeply imperfect all at once. I don't believe it's intellectually dishonest to emphasize the latter when it comes to the impact on the human beings on the other end of the barrel. Especially when the technology starts to break out of the communities that already understand it (and its limitations).
> They may not generate garbage per se, but they do generate bullshit. Or, to put it more positively, they are consummate improvisers. The amount of guidance they require cannot be overstated: while they demonstrate a phenomenal capacity for producing and understanding language, they do not yet demonstrate much capacity for control or alignment.
I truly can’t tell whether you are describing the US Congress or LLMs.
How is it intellectually dishonest? It generates garbage; it's entirely up to you to dig through that garbage and find something worthwhile in it. It has no idea it's even generating garbage!
You admit this yourself: it requires qualified staff to guide the model, i.e. some people to dig through the garbage to find the good bits it produced.
Of note: I use ChatGPT a lot, to generate a lot of garbage. Or, for those of you offended by the word, mentally replace it with something more "neutral" sounding like "debris" or "fragments".
> You admit this yourself: it requires qualified staff to guide the model, i.e. some people to dig through the garbage to find the good bits it produced.
Exactly.
It is AI snake oil: humans still have to check whether it will hallucinate (which it certainly will), so it cannot be fully autonomous and needs qualified people monitoring, reading, and checking the output.
Not only can it generate garbage; it is too untrustworthy to be left by itself, fully autonomous at the click of a button.
Intellectual honesty is very much in the garbage-generating-machine camp. Building an embedding space of plausible language and then randomly sampling from it is not a way to draft a law.
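To make the sampling point concrete: a language model scores plausible next tokens and samples one, and nothing in that procedure enforces legal soundness. A toy sketch with a made-up three-token vocabulary and made-up logits (not a real model):

```python
import math
import random

def sample_next(logits: dict[str, float], temperature: float = 1.0) -> str:
    """Temperature-scaled softmax sampling over a next-token distribution."""
    scaled = {tok: l / temperature for tok, l in logits.items()}
    z = sum(math.exp(v) for v in scaled.values())
    probs = {tok: math.exp(v) / z for tok, v in scaled.items()}
    r, acc = random.random(), 0.0
    for tok, p in probs.items():
        acc += p
        if r < acc:
            return tok
    return tok  # fallback for floating-point rounding

random.seed(0)
logits = {"shall": 2.0, "may": 1.0, "banana": -3.0}
# Fluent-sounding tokens dominate the distribution, but the procedure has
# no notion of whether the resulting sentence is legally coherent.
print(sample_next(logits))
```

Lowering the temperature concentrates probability on the most likely token, which makes output more predictable but no more grounded.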
As someone who doesn't know how the human brain works and has never drafted any laws, let alone empirically seen what value an LLM can bring in this scenario, you should certainly qualify this with a massive "in my layperson's opinion".
I beg to disagree. There are already hundreds of real-world examples where these models do a terrible job with anything related to jurisprudence.
jerf|2 years ago
dmreedy|2 years ago
spenczar5|2 years ago
MaxMatti|2 years ago
waboremo|2 years ago
rvz|2 years ago
smeagull|2 years ago
KyeRussell|2 years ago
ihatepython|2 years ago
This is true. It is not really _generating_ garbage so much as regurgitating garbage from the input data.
jruohonen|2 years ago
https://arxiv.org/abs/2303.12712
All things mentioned in Section 10.2 are extremely worrying in the context of lawmaking and jurisprudence in general.
PartiallyTyped|2 years ago