tjopies | 2 years ago
We must be careful as we chart this scary new world of large language models and artificial intelligence and their impacts on humanity, but we also need to ease off the scare tactics.
Please note I do not fault the author or anyone else for this representation of these new technologies. Nonetheless, I find it counterproductive to our discussions about setting guidelines and ensuring accountability in developing these models and their use.
Right now, it sounds more like the CRISPR discussion all over again.
My 2 cents for what it is worth.
eternalban | 2 years ago
The experiments to determine the answers must not be the sole purview of corporations. Corporate executives have a fiduciary duty only to their shareholders.
So a completely laissez-faire approach to traversing the space of pervasive AI in society, with a stated 10% probability of catastrophic results (the number is per Sam Altman), cannot be left to a decision-making process that only seeks to maximize profits.
To "be careful as we chart" decisively means it cannot be treated as a mere innovation to be subjected to market forces. That's really the only fundamental issue. This isn't a 'product', and the 'market' may happily seek a local maximum that then leads to the "10%" failed state. That's it. Address that and we can safely explore away.
So not fear mongering. Correctly categorizing.
Herval_freire | 2 years ago
Here's the thing. Before ChatGPT, it was pretty much a given that society faced more or less zero risk of losing jobs to AI.
Now, with GPT-4, that zero risk has become an unknown risk.
That is a huge change, and one it would be highly unwise not to address.
I agree that only time will tell. But as humans we act on predictions of the future. We all have to make a bet on what that future will be.
Right now this blog post describes a scenario that, although speculative, is also very realistic. It is, again, unwise to dismiss the possibility of a realistic scenario.