thisiswater | 2 years ago

I agree - LLMs are a massive threat to the current state of government power.

"<LLM>, analyze the contents of this 500 page bill. Who stands to gain from this bill, and what outcomes is it likely to have for the general public? Is this bill in line with good faith evidence-based policymaking for the good of the population and the planet? What existing legal mechanisms could be used to fight the special-interest aspects of this bill?"


pavlov|2 years ago

Is there really a shortage of this kind of analysis today?

Using those legal mechanisms requires money and often public support. The LLM can’t conjure either.

The world is already full of smart people and good ideas about policy. The reason they’re not getting implemented probably has little to do with things that AI can solve today. For starters, a lot of voters actively dislike policy suggestions from experts and choose politicians who proudly go against expert opinion. Giving AI tools to the experts won’t change that.

thisiswater|2 years ago

I think the difference is that instead of deferring to an expert's opinion, you can interact with a knowledge machine which can explore the topic on your own terms, answer your questions about it, respond to your points and concerns about those responses, etc.

It's a fundamentally different kind of knowledge generation than reading an expert opinion - it's branching, self-directed, and responsive.

aussiegreenie|2 years ago

This is exactly the kind of work universities love.

I personally know three different groups trying to do exactly that.

tmpX7dMeXU|2 years ago

This is incredibly naive. “If only the people were EDUCATED!” No, we are far beyond that.

tetris11|2 years ago

Are they?

"<More Powerful/Popular State/Corpo-owned LLM>: it's pretty hard to fight this, just trust that we've got your back. Remember, we're currently handling that other case for you. Would be a shame to lose you as a client."

thisiswater|2 years ago

Exactly. I want an unaligned LLM to give me X potential solutions ranging in ethicality from "don't worry about it, they might be nice people" to "steal a nuke and ransom the world", and let me as an aligned human craft my prompt or chain of reasoning to weed out the useless or unethical responses, and then I can decide what is useful and suitable.

This is more or less the process that goes on inside a thinking human, is it not? I don't want to outsource ethical decision making, I want to outsource cognitive effort. By analogy, you don't rely on a bulldozer to decide not to bulldoze a populated nursing home - that's on the user, as are the consequences.

Current power structures demonstrably cannot be trusted to limit themselves to ethical solutions (Military Industrial Complex, Climate Change, etc etc etc pick your poison) - why should they be trusted to censor cognitive tools?