thisiswater|2 years ago
"<LLM>, analyze the contents of this 500 page bill. Who stands to gain from this bill, and what outcomes is it likely to have for the general public? Is this bill in line with good faith evidence-based policymaking for the good of the population and the planet? What existing legal mechanisms could be used to fight the special-interest aspects of this bill?"
pavlov|2 years ago
Using those legal mechanisms requires money and often public support. The LLM can’t conjure either.
The world is already full of smart people and good ideas about policy. The reason they’re not getting implemented probably has little to do with things that AI can solve today. For starters, a lot of voters actively dislike policy suggestions from experts and choose politicians who proudly go against expert opinion. Giving AI tools to the experts won’t change that.
thisiswater|2 years ago
It's a fundamentally different kind of knowledge generation than reading an expert opinion - it's branching, self-directed, and responsive.
aussiegreenie|2 years ago
I personally know three different groups trying to do exactly that.
unknown|2 years ago
[deleted]
tetris11|2 years ago
"<More Powerful/Popular State/Corpo-owned LLM>: it's pretty hard to fight this, just trust that we've got your back. Remember, we're currently handling that other case for you. It would be a shame to lose you as a client."
thisiswater|2 years ago
This is more or less the process that goes on inside a thinking human, is it not? I don't want to outsource ethical decision making, I want to outsource cognitive effort. By analogy, you don't rely on a bulldozer to decide not to bulldoze a populated nursing home - that's on the user, as are the consequences.
Current power structures demonstrably cannot be trusted to limit themselves to ethical solutions (Military Industrial Complex, Climate Change, etc etc etc pick your poison) - why should they be trusted to censor cognitive tools?