egnehots | 1 year ago

well, it's human nature to struggle with collective action when the risks are vague and not shared. stakeholders are juggling immediate, tangible concerns, like climate change, economic stability, and political issues, making it tough to justify moving AI up the priority list.

ben_w | 1 year ago

Extra fun: AI can make a big (+ and -) difference to climate change[0], mess with the economy[1], and get used as a tool to sow political chaos[2].

But sure, humans are necessarily very myopic; we have to ignore 98% of the issues in the world or we wouldn't be able to function at all.

[0] High power use, can help roll-out renewables and storage

[1] What happens when those humanoid robots we see demos of get good enough to replace all the staff in the factories where they're made? And the rest of their supply chain?

[2] Imagine if the pizzagate conspiracy theorists had had access to an un-censored sound-and-video GenAI tool

janalsncm | 1 year ago

I would like to learn more about what AI can do specifically to solve the climate crisis.

My guess is that a lot of the actual lift would come from industrial automation to create cheaper green products. I guess that is “AI” in some sense.

But if we are building solar panels, the R&D budget should go toward streamlining the build process. Figure out how to commoditize solar panels so that oil becomes too expensive by comparison.

Building huge “foundation” models like I see huge AI labs doing is a bit like building better visualizations of an impending asteroid impact. It’s not really what we need right now.

southernplaces7 | 1 year ago

>imagine if the pizzagate conspiracy theorists had had access to an un-censored sound-and-video GenAI tool

They didn't, but a huge number of other conspiracy theorists pushing their own ideas do have access to all that with today's AI, and we don't see billions of people being brainwashed into believing complete nonsense to any greater degree than was already the case for a long, long time before AI came along.

People do have a certain level of discernment, even when absolutely bombarded with propaganda and fakery. Usually it ultimately takes coercion to make them swallow something obviously absurd. This was the case before AI, and it remains so now, alongside widespread access to information sources that let you verify the veracity of nearly anything you like in minutes, as long as it isn't grossly complex to untangle.

Even the Nazis of the 1930s and the Bolsheviks before them, despite all their mass efforts at persuasion through propaganda and misinformation (applied to people with less ability than today to find contrary sources of information), ultimately didn't voluntarily convince as many as they'd have liked. They had to coerce people into simply never openly disagreeing.

I don't think we're in danger of AI by itself doing anything major to suddenly make billions of people behave much differently in their beliefs from how they already have for centuries at least.