top | item 44372002


old_man_cato | 8 months ago

Sometimes I feel like I'm losing my mind with this shit.

Am I to understand that a bunch of "experts" created a model, surrounded its findings with a fancy website replete with charts and diagrams, and that website suggests the possibility of some doomsday scenario? The headline of the website says "We predict that the impact of superhuman AI over the next decade will be enormous, exceeding that of the Industrial Revolution." WILL be enormous. Not MIGHT be. They went on some of the biggest podcasts in the world talking about it, a physicist comes along and says yeah, this is shoddy work, and the clapback is "Well yeah, it's an informed guess, not physics or anything"?

What was the point of the website if this is just some guess? What was the point of the press tour? I mean are these people literally fucking insane?


shaldengeki | 8 months ago

No, you're wrong. They wrote the story before coming up with the model!

In fact the model and technical work have basically nothing to do with the short story, aka the part that everyone read. This is pointed out in the critique, where titotal notes that a graph widely disseminated by the authors appears to have been generated by a completely different and unpublished model.

old_man_cato | 8 months ago

https://ai-2027.com/research says that:

AI 2027 relies on several key forecasts that couldn't be fully justified in the main text. Below we present the detailed research supporting these predictions.

You're saying the story was written, then the models were created and the two have nothing to do with one another? Then why does the research section say "Below we present the detailed research supporting these predictions"?

refulgentis | 8 months ago

Correct. Entirely.

And I'm yuge on LLMs.

It is very much one of those things that makes me feel old and/or scared, because I don't believe this would have been swallowed as easily, say, 10 years ago.

As neutrally as possible, I think everyone can agree:

- There was a good but very long overview of LLMs from an ex-OpenAI employee. Good stuff, really well-written.

- It concludes hastily by drawing a graph of "relative education level of AI" versus "year": a line drawn from high school 2023 => college grad 2024 => PhD 2025 => post-PhD 2026 => AGI 2027.

- Later, this gets published by the same OpenAI guy, the SlateStarCodex guy, and some other guy.

- You could describe it as taking the original, cutting out all the boring leadup, jumping straight to "AGI 2027", then writing a too-cute-by-half, way-too-long geopolitics ramble about China vs. the US.

It's mildly funny to me, in that yesteryear's contrarians are today's MSM, and yet, they face ~0 concerted criticism.

In the last comment thread on this article, someone jumped in to stress the importance of more "experts in the field" contributing, meaning psychiatrist Scott Siskind. The idea is that writing about something makes you an expert, which leads to tedious self-fellating like Scott's recent article letting us know LLMs don't have to have an assistant character, and how he predicted this years ago.

It's not so funny, in that the next time a science research article is posted here, as is tradition, 30% of comments will claim that science writers never understand anything and can't write, etc.

radioactivist | 8 months ago

Thank you for this comment; it matches my impression of all of this exactly.

heavyset_go | 8 months ago

The point? MIRI and friends want more donations.