theevilsharpie | 4 months ago
If you want to hand-wave that away by stating that any company with technology capable of achieving AGI would guard it as the most valuable trade secret in history, then fine. Even if we assume that AGI-capable technology exists in secret somewhere, I've seen no credible explanation from any organization on how they plan to control an AGI and reliably convince it to produce useful work (rather than the AGI just turning into a real-life SHODAN). An uncontrollable AGI would be, at best, functionally useless.
AGI is --- and for the foreseeable future, will continue to be --- science fiction.
atleastoptimal | 4 months ago
The second point is a significant open problem (the alignment problem), and I'd wager it is a very real risk that companies need to take more seriously. However, whether it would be feasible to control or direct an AGI toward reliably safe, useful outputs has no bearing on whether reaching AGI is possible via current methods. Current scaling gains and the rate of improvement (see METR's time-horizon measurements of tasks an AI model can complete reliably on its own) make it fairly plausible — at least more plausible than the blanket denial of AGI's possibility I see around here, which comes with very little supporting evidence.