top | item 42876584


pololeono | 1 year ago

I don't know; I could imagine that this changes once AI becomes more sensible than the average internet advice.

Also, I would like to see some evidence of how dangerous the experiment with the AI-inspired fusor actually was. I recently read here that "hiking in jeans" is dangerous.


RansomStark | 1 year ago

I'd be interested in how an LLM could become more sensible than the average of the internet; they are, by definition, the average of the internet. I'm waiting for the next major innovation, and given AI's history, I might be waiting a long time.

Fusors are somewhat dangerous: they use extremely high voltage, in the thousands to hundreds of thousands of volts. X-rays become an issue above around 30,000 volts. Still, they are frequently built by high school students, and I'm not aware of any deaths.

Lots of details are available here: https://fusor.net/board/viewtopic.php?t=4843
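As an aside on the 30,000-volt figure (my own rough sketch, using the standard Duane–Hunt relation, not something from the thread): the maximum X-ray photon energy from electrons accelerated through a potential $V$ is simply the electron's kinetic energy,

$$
E_{\max} = eV \quad\Rightarrow\quad E_{\max} \approx 30\,\mathrm{keV} \ \text{at}\ V = 30\,\mathrm{kV},
$$

and photons in the tens-of-keV range start to pass through a glass or thin-walled vacuum chamber in meaningful quantities, which is roughly why shielding becomes a concern above that voltage.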

lupusreal | 1 year ago

> they are frequently made by high school students, and I'm not aware of any deaths.

That's been done no more than a few dozen times, I think? Maybe fewer than that. I think it's a rare enough activity that the accident rate simply hasn't been probed enough.

Wood burning with microwave transformers is notorious for getting people killed, but how many people does it kill relative to how many have tried it? Maybe a few out of a hundred? On the other hand, kids building fusors are probably smarter than the general public, to whom wood burning with transformers is frighteningly accessible. I don't think teenagers building fusors is quite that dangerous, but I don't think we have enough data to call it a statistically safe activity.

exe34 | 1 year ago

> they are by definition the average of the internet.

Are you referring to base models?

Nowadays they also train on stolen books and are further "aligned" based on feedback. I imagine they are already learning from user feedback how to teach.