dpcan | 2 years ago
Everyone from numbskulls to professors now has the ability to generate information at a scholarly level and solve problems that would take a human years of training and experience.
I'm really trying hard not to get all doom and gloom about this stuff, but I'm honestly worried. It keeps me up at night.
3rd3 | 2 years ago
Mass unemployment is a likely outcome, but modern states already have amazing safety nets in place and they're not far away from a universal basic income.
As technology has improved, so has life on almost all relevant metrics (education, longevity, poverty, crime, etc.), and even though technology amplifies greed, criminals do not prevail, because they are heavily selected against, seemingly regardless of the level of technological sophistication.
As for paperclippers, there is actually no indication that larger neural nets misinterpret human intention. Rather, larger NNs mostly become better at understanding. Current AIs, when prompted to make paperclips, do not interpret this as us wanting to turn the entire world into paperclips, but in terms of the more sensible interpretation of meeting the economic demand for paperclips, hence giving advice on how to optimize the machinery for producing paperclips, etc. Small models do tend to find weird, unintended shortcuts, but my hunch is we are seeing this less with very large NNs.
Lastly, we can always employ AIs to control other AIs. AI scammers will be counteracted by scam detection AIs. Same for fake news, mind control, drone warfare, propaganda, etc. AI will allow us to make better political arguments against mass surveillance and for freedom, and it will make journalism more efficient at exposing crime and injustice. Demonopolizing AI is hence extremely important, so that every misuse of AI or misaligned AI can be counteracted by other AIs.
neel8986 | 2 years ago
Very few Western and East Asian nations have that. 80% of the population doesn't have a safety net.
bumby | 2 years ago
I'm hopeful about the future use of AI, but this strikes me as potentially overly optimistic. Or possibly misinterprets what academics do.
ChatGPT is very good at collating existing information. I don't know that it is anywhere near the level of creativity necessary to generate novel ideas (especially those that have to be rooted in reality to solve engineering problems).
For example, if you ask "How can we redesign a Wankel engine to achieve 2% better efficiency?" it does give a good summary of the main mechanisms that impact efficiency (thermal losses, friction, combustion efficiency) but it doesn't give any actionable, novel ways of implementing them. It's basically a 101 course summary of combustion engines. "Generate better materials" is not an actionable solution. So unless you think a freshman/sophomore understanding is all that's needed to solve our big problems, we've still got a ways to go before we can turn the reins over to AI.
SantalBlush | 2 years ago
I think a lot of people misunderstand what a scholarly level is. ChatGPT is not even close.