This isn't a threat to civilization. Civilization already has the tools to solve this problem. They're ancient, and (for better or worse) they're re-emerging all around us: tribalism, aristocracy, credentialism, reputation. A hundred variations of "Trust is earned; suspect strangers."
The threat here is to the philosophical conceit that you can trust strangers -- not via a network where people vouch for each other, but just by averaging large numbers of them. Democracy makes good things happen. It's a beautiful idea, the philosophical gem of the last two or three centuries, and it works well in some contexts, and certainly is better than what it replaced... but its success has resulted in overapplication. It certainly can't withstand an army of malicious bots.
I have enjoyed the fact that the absolute political freedom of internet communities has allowed for myriad experiments in government on the sort of timescale that allows for lessons and improvement. The conceit that people, in large numbers, are basically good and wise was the core philosophy that ruled the internet of ten or twenty or thirty years ago, with its open networks and upvotes. But it seems to me that we have all collectively been coming around to the fact that moderation, and reputation, and credentials, and curation, all have some serious upsides. Tyranny is certainly a problem, but maybe kings and aristocrats aren't all bad.
The bots are really only a threat to communities that haven't figured out yet that there is a balance here, a give and take between the individual and society. That zero contribution should equal zero power, that a high degree of influence ought only be achievable by long and faithful service, that trust is something real people have in each other and that proxies for it can always be gamed, that social status serves a very real and useful function, and that destructive behavior should be met with extreme prejudice... but at the same time, that outsiders can sometimes say very important things.
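The reputation mechanics sketched above can be made concrete with a toy influence formula. Everything here is invented for illustration (the function name, the weights, the inputs); no real platform works exactly this way, and as the comment notes, any such proxy can still be gamed:

```python
def vote_weight(days_active: int, accepted_contributions: int, flags_against: int) -> float:
    """Toy influence score: power accrues slowly through tenure and
    vetted contributions, and destructive behavior is punished hard."""
    if accepted_contributions == 0:
        return 0.0                              # zero contribution, zero power
    tenure = min(days_active / 365, 5.0)        # tenure credit caps at five years
    service = accepted_contributions ** 0.5     # diminishing returns on volume
    penalty = 4.0 ** flags_against              # destructive behavior is costly
    return tenure * service / penalty
```

The point is only that the knobs (zero floor, tenure caps, diminishing returns, steep penalties) encode the give and take between the individual and the community; the specific numbers are arbitrary.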
Lord Vetinari in Terry Pratchett's Discworld is an interesting study in tyranny. He is referred to in-world as a tyrant, but Pratchett frequently lampshades that that isn't a great word to describe what he actually does. Vetinari has no life outside running the city, no personal goals, no individual ego. In his words, "it is always about the city"[0]. He views the city as an organism and himself as its caretaker, and he does a remarkably good job. He can be ruthless and brutal when necessary to protect the city, but the city thrives under him.
Vetinari has a direct parallel in the Benevolent Dictator for Life in FOSS communities, or dang on HN. Individuals often hate these "tyrants" and wish them gone, but the community as a whole only thrives because of their careful, patient, and deliberate care. You can't easily replace a BDFL with a democratic process without losing the community's soul.
The problem with the BDFL model is that if your dictator isn't benevolent, what you have is a normal, flawed monarchy. And very few people are capable of being as ego-free as a benevolent dictator needs to be.
Unfortunately, if there's one thing the world doesn't need at this point, with the amount of problems that can only be solved by cooperation, it's even more tribalism.
Nice to see a comment that's uncommonly thoughtful and nuanced. Thank you.
"Trust is earned; suspect strangers" and “a give and take between the individual and society” -- these seem to go beyond humans and even our scale of being. From a neuroscience perspective, all minds need governance, but we just know it as decentralized autonomy. There are no “decider” neurons or cells or decision-making councils of neurons or cells. And yet, we have had over three billion years of minds figuring out how to help their body/community/society autonomously navigate a complex world. This decentralized autonomy requires trust. It’s a more expansive definition of trust, but probably worth exploring.
One could argue that at every scale minds exist at, from the microscopic to the globe-spanning, they eventually face daunting complexity and information load, and the way forward is to figure out how to stably cohere into societies and divide up tasks. The early ones are often centralized, but the ones that eventually win out are ones that figure out real decentralization and also solve the free rider problem. We humans are still very early in our experiments with decentralization. Democracy is our first real attempt at decentralized governance, but a very early one.
Decentralization almost always requires a new form of communication to scale up the network size. Synaptic transmission got minds to one level; language got us to a whole new level, but it can't stitch together billions of very diverse individuals. The internet changed connectivity nearly overnight, but it did not change communication, and we are seeing the evolving impact of this imbalance in the form of conspiracy theories, fake news, echo chambers and what-not. One way AI could actually help is by providing a "selective myelination" that preferentially distributes and accelerates trustworthy communication and slows down the rest.
The crowd isn't wise; it's dumb as rocks, and social media was a generally terrible idea, except for the heavily weeded gardens. AI is more likely to save it than harm it.
A lot of replies to this post (and to fears about AI in general) are of the form "But humans are just as untrustworthy, if not more so, so the AI can't be any worse." I have to say, many of you have a very dim view of humans.
When I read an answer off of stackoverflow that is not highly voted, I know that the answer could be incorrect. But there is a trust that the actor on the other side was also facing the same problem, and is not malicious. After all, why would a malicious actor care to give a wrong answer to a "Rust borrow checker question"?
If the answer was AI generated and posted by a bot operated by a karma farmer, out goes that trust. This is obviously not a threat to society, and chances are the asker will eventually figure out the right answer, but you can see how this sort of thing reduces the signal-to-noise ratio of almost anything on the internet by orders of magnitude. Trust underpins everything.
Hacker news IS a social network in my opinion. I come here to read comments knowing that there is a like-minded person on the other side who is sharing their thoughts. If I was looking for information or something to learn, I would read a newspaper, a textbook or a research paper. The discussion with a real human from some corner of the world is what makes this forum tick.
This tech is not going away and I presume some steady state will be reached eventually, but in the meantime, there is no doubt in my mind that the internet as we know it is in jeopardy.
HN usually disappoints me any time social issues pop up. It seems a majority of HN'ers fancy themselves liberals, yet in reality they are authoritarian conservatives who happen to hold some liberal beliefs. Calls for oppression when someone has a different view are common.
For me the unreliability is not a problem, because I'm not using ChatGPT to search for information but to help me think and solve problems. ChatGPT can ask incredibly good questions and can draw relationships between concepts that would take me hours to make on my own. So ChatGPT has just become my private coach and a really smart assistant.
Indeed, this is an assisted brainstorming tool, not an automated problem solver whose results you're supposed to trust wholesale.
For work I used ChatGPT on a very specific programming problem, and what it wrote wasn't a solution you'd ever copy/paste... the lack of context about the surrounding system makes that impractical 99% of the time outside of toy problems.
But it was still super useful for getting my brain working and suggesting a really good basic structure it would have taken me 3-4 failure cycles to get to.
The same exists in DALL-E/GPT for corporate art generation, writing 'essays', or stories or w/e. It's almost never producing the end product (besides toy problems).
So, if it can't do that in the first place, why evaluate it as if that's what it's supposed to be?
This is how I am using it too. It completely blows away the procrastination aspect of starting a task. I just start chatting about it and get the ball rolling.
In line with this, something I feel like I’ve observed is that you can split people into those that like/need to hash things out with others and those that either don’t or aren’t good at it. It’s more than rubber-ducking. It’s collaborating on exploring a cognitive space (ideas, understanding, etc).
Personally, I'm one of these people, and I've struggled with how much others either don't work this way, don't want to, or don't know how. I think it's powerful, and that finding good collaborators in this way of thinking is a factor in general success.
Interesting to think about how AI can provide a general base level for this kind of thinking.
I've always put chat bots through the wringer since the SmarterChild days, only to be disappointed.
ChatGPT gave me answers I would never have gotten about extremely technical concepts: things the Wikipedia articles would never surface, and that would otherwise take hours of reading research papers or formal academic work, or maybe just working in the field and being part of those communities.
I think it is ironic that Google is sitting on a better conversational AI that is just too busy finessing its own engineers in its attempts to escape.
This just reminded me ChatGPT is around and I could just ask it how to do something in Hashicorp Vault...and it gave me exactly the prompt I needed to do the rest.
I think there are some problems with assuming that ChatGPT-like AI will improve in quality, _especially_ if it gets good enough to be popular and generally trusted.
Where will the next generation of AI training data come from, if the vast majority of information on the internet is generated by AI? Isn’t new data required? It feels like an internet filled with ‘incorrect’ spam generated by AIs is as much, maybe more, of a danger to future AI as it is to humans.
This is almost the same thing, but: it feels like ChatGPT is so good at e.g. programming questions because it’s been trained on millions of human discussions about programming. Even if you can weed out AI spam from future training sets, if people start relying on AI for e.g Stack Overflow like answers, and perhaps even reference information, and therefore stop writing or conversing online about new technologies, where will training data about new technologies come from? Primary reference material probably isn’t enough (just like it’s not really enough for most humans).
No. For these systems to improve on their defects they should rely less on having seen something very specific in the training data, and more on reasoning and 'understanding' (forming conceptual models). The amount of training data available is already vastly more than strictly necessary given more data-efficient algorithms. I didn't have to read the whole internet to learn anything I know.
This is an excellent point. One of the main things I see people say about ChatGPT is "when it gets better in the future". But as you point out, it's already trained on the entire internet. There are many features they could add and there are infinite special cases for handling various prompts. But the core of the product, the LLM generated answers, can't get much better without an order of magnitude increase in training data.
In terms of petabytes of training data, it will be a long time before ChatGPT's own responses are a significant portion of the training set. And even then, at least for a while, that should just shift responses closer to a sort of average human response
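That drift toward an average response can be illustrated with a toy simulation. Nothing here models a real training pipeline (numbers stand in for "responses", and the 0.9 diversity factor is an arbitrary assumption); it only shows the mechanism by which repeated self-training collapses the spread of outputs toward the mean:

```python
import random
import statistics

def retrain(responses, sample_size=100):
    """Toy retraining step: the next 'model' reproduces the average of a
    sample of the previous generation's outputs, with slightly less
    diversity than it saw (the 0.9 factor is an assumed imitation loss)."""
    sample = random.sample(responses, sample_size)
    center = statistics.mean(sample)
    spread = statistics.stdev(sample) * 0.9
    return [random.gauss(center, spread) for _ in range(len(responses))]

random.seed(0)
population = [random.gauss(0.0, 1.0) for _ in range(1000)]  # diverse human answers
for generation in range(10):
    population = retrain(population)

# The population is now clustered far more tightly around an average answer.
print(statistics.stdev(population))
```

Whether real LLM training behaves this way depends on how much genuinely new human text keeps entering the mix, which is exactly the parent comment's point about petabytes.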
My unscientific claim: the way models are trained right now uses at most 1/100th of the capacity possible. So there's room to go 2-3 orders of magnitude with the data we have.
While I don't agree with the doomsaying of the article, I do think the current round of thinking falls into the "late underestimation" category, in the grand scheme of things.
Science fiction from the 1960's through about the 1990's was of the opinion that AI would produce intelligence indistinguishable from humans, and would do so extremely quickly (we should have had it by now), and that the most pressing questions would be things like whether those intelligences deserved voting rights and whether they made humanity obsolete.
That was a wild overestimation of what the technology was capable of.
But the widespread application of AI, whether in search results, social media moderation, propaganda and disinformation, bots, fake reviews, or just straight-up spam, forms an absolute assault on the nature of trust and information in society, overturning heuristics that have been in effect for all of history (that people are usually trustworthy, that information is usually representative, that lying is hard). While I don't think the threat is apocalyptic, I do think no one even remotely saw this coming.
I don’t think the solutions proposed will be that impactful. How, exactly, is a platform supposed to ban content that may actually end up being more valid-seeming than the average human? How exactly will government policies prevent this on a global internet?
I fear we are about to enter a period where very little can be trusted. The biggest skill our kids can learn is reasoning and logic.
Maybe AI platforms can create a bot crawler or API that scrapes the text across a website of choice and returns a score for how confident it is that the text on each page was written by, or plagiarized from, an AI, by running it through models trained to recognize AI writing. A score above a certain threshold would get the page flagged by some central fact-checking authority and an AI-content label applied (among many other possible solutions).
AI platforms generating text should, at a minimum, keep a "memory" of what they have written/generated, and give the public the ability to check text for plagiarism against their generation models. Sounds like a good startup idea?
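A minimal sketch of such a "memory", assuming the simplest possible design (hashes of overlapping word shingles; the class and method names are invented here). It only catches near-verbatim reuse, not paraphrase, which is why it would be at best a starting point:

```python
import hashlib

def shingles(text: str, n: int = 8) -> set[str]:
    """Overlapping n-word windows, normalized to lowercase."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

class GenerationLedger:
    """Hypothetical provenance store: the platform records a hash of every
    shingle it generates, so anyone can later ask what fraction of a
    document matches previously generated text."""

    def __init__(self) -> None:
        self.seen: set[str] = set()

    def record(self, generated_text: str) -> None:
        self.seen |= {hashlib.sha256(s.encode()).hexdigest()
                      for s in shingles(generated_text)}

    def overlap(self, suspect_text: str) -> float:
        hashes = {hashlib.sha256(s.encode()).hexdigest()
                  for s in shingles(suspect_text)}
        return len(hashes & self.seen) / len(hashes) if hashes else 0.0

ledger = GenerationLedger()
ledger.record("the quick brown fox jumps over the lazy dog every single morning")
print(ledger.overlap("the quick brown fox jumps over the lazy dog every single morning"))  # 1.0
print(ledger.overlap("an entirely unrelated sentence that shares no eight word windows at all"))  # 0.0
```

A real deployment would need fuzzy matching to survive light rewording, and some way to avoid flagging common stock phrases, so exact-hash overlap is only the easiest first step.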
Production of the chips needed to train and run these AI models is intrinsically centralized and requires enormous capital investment to develop. If politicians get sufficiently spooked, I expect they'll try to regulate the sale of GPUs/etc., and will probably be mostly successful. Popularly cited instances of prohibition failing don't map onto this problem; you can't grow a GPU with hydroponics in your closet, or brew GPUs in your bathtub.
Counter-argument: There will be at least one country that does not regulate GPU sales or access in this way, and that's where all of the AI innovation will happen. That country will get a huge lead in the next tech race, with commensurate economic advantages. Many countries will realize this and will compete to be thought leaders rather than aggressively regulate the nascent technology out of existence.
In my experience chatGPT is more useful than StackOverflow in 9 out of 10 cases because it can generate custom tailor-made code for your exact use case. Sure, it might not be "correct" in the ultra-pedantic StackOverflow definition of "100% generalizably correct", but usually it's 90+% correct for your use-case and only needs slight alteration to be 100% correct for your use-case.
Such generated code is not a "threat" or "misinformation" or whatever author's point is. This is going to be a productivity multiplier for programmers so that programmers can do more faster than ever before!
It's also more willing than StackOverflow to confidently invent wrong answers. Try asking it to generate a Rust function that uses the `ripgrep` crate to search text with regular expressions. The ripgrep crate doesn't expose an interface to do that (as far as I can tell), but ChatGPT is happy to generate some plausible-sounding but totally incorrect code.
The interesting thing is that on SO, the good stuff gets upvoted, while the bad/incorrect stuff is being pushed down or out completely. And it’s not rare that you see poor/wrong code snippets. Thanks to this mechanism, it actually shouldn’t be harmful to have auto-generated responses there, because they get reviewed and curated by humans. Isn’t the combination of AI legwork with Human review the best thing we have so far?
It works really well if slapping together something vaguely close to correct is good enough. This covers a significant fraction of all programming work.
We're going to enter a curation economy. Not quite 90s Yahoo! style, but more "must be trusted and attributable" to be valid. It doesn't scale, but there may be more value in the brand behind your information than the current state of the world provides.
Yes, I will end up depending more heavily on sources that I know are thoughtfully edited by humans. Unfortunately, that set will diminish if bot herders overwhelm crowdsourced sites.
I remember people saying that AI was going to displace artists because they overheard business managers saying how great it was that they no longer had to hire a human artist for a design job. In other words, had there been an option to forgo the human in the first place, the human artist would never have stood a chance at such a job. I think this reveals a depressing fact: the desire to automate humans away has always existed, and now people can finally realize those dreams. Maybe prompt engineering itself is, in that sense, a bottleneck to be overcome in the future, so long as a human still has to construct the prompt.
I don't think "artificial intelligence" is a good term for describing what's happening right now. It makes it sound like a robot overlord is calling the shots and displacing human workers, but in reality, it's just humans pitted against humans with different beliefs. AI tools still require humans that believe in the proliferation of AI to use them and spread the results everywhere.
I agree with the article's suggestion that maybe some reflection is needed when proliferating new discoveries. The "just a tool" argument doesn't consider that the affordances a new technology offers will dictate what most people use it for in practice. I don't think many people would argue that nuclear weapons are "just a tool" once their uncontrolled spread makes them impossible for any side to ignore. The dangers aren't yet at the "destruction of humanity" level that many who worry about AGI fear, but perhaps "the destruction of humanity in the arts" is plausible given the developments so far. We aren't even close to anything resembling AGI yet, but specific domains are already vulnerable to being overrun by AI generation this early on (StackOverflow, ArtStation, etc.).
But this is a matter of tradeoffs, and maybe researchers will continue to be overeager in sharing their exciting findings, until an even more severe line is crossed than the one that set off the current war on AI art. I don't look forward to a time where demonstrators will scrape GitHub listings and arXiv papers to seek out contributors to anything that touches OpenAI or Stability and implore them to have a change of heart, or scream louder and louder to have their voices heard by the programmers writing their torch denoising code from inside the safety of their office buildings.
"Threat to the fabric of society"? The author is an influential figure, and should know better. This sensationalizing, fear-mongering and FUD is the real "threat to society".
> Someone else coaxed chatGPT into extolling the virtues of nuclear war (alleging it would "give us a fresh start, free from the mistakes of the past").
An interesting thought experiment: if we go back 100 years, in many towns throughout the US I could have started a newspaper and published anything I wanted about the world or world politics. I could have published claims that credible sources said the Kaiser was going to invade Japan, or that a new miracle cure from an imaginary plant in India would fix bunions, or pretty much whatever I wanted, as long as it wasn't easy to verify. This worked because there was no easy way for people in more isolated farming communities to check or corroborate what was going on, in large part because it had no effect on their world at all.
Now, 100 years later, I can do the same thing, and realistically it is just as hard to verify or validate anything, given the amount of misleading, partial, censored, or manipulated information out there. How much difference will this make to my life personally, though?
Here's the thing: society isn't static or unchanging. We've recently seen several events that destroyed many people's trust in various sources of information. My guess is that the next generation, growing up in a world where it is widely acknowledged that most sources of information are biased or fabricated, will develop an "immunity" of sorts to this misinformation and learn to disregard much of what they find and read online, just as people did before.
It's not a thought experiment.
The Thirty Years' War was, at its core, fought over the printing press. Newspapers are nothing compared to the Bible in terms of misinformation risk; harmful cults spring up to this day from wayward pastors writing their own versions of the Bible to proclaim themselves prophets.
So naturally, the Catholic Church at the time was extremely concerned about people being able to 'translate' the Bible (any translation will inevitably deviate from the official Latin text to some extent). That's how the Protestant-Catholic conflict began: as a war to control the source of truth.
And a third of Germany's population died because of that war.
That said, it was absolutely worth it in the end. The Muslim world chose not to have this conflict and banned the printing press (so the Quran couldn't be printed or translated). What followed was 500 years of utter stagnation, with no new technological or social leaps.
Generative AI will definitely cause conflicts that indirectly kill millions, but that's just the price of adaptation. I was shocked at how little people seemed to care about COVID deaths after a while; I guess that's a good thing in the end.
That website makes Firefox use too much CPU and the fan of my laptop starts to scream - very annoying.
It is, of course, kind of a great joke: never-heard-before wisdom with deep, impactful visions, published on a website with "trusted insights for computing's leading professionals", and they simply cannot deliver decent HTML/JavaScript.
Looks like we need even more leading professionals!
AI could be civilization-ending. Pretty much everything we do involves trust in others. Things like vaccine hesitancy can become worse to the point that people make increasingly life-threatening decisions. With AI-generated disinformation at scale fully able to drown out actual vetted information, it's possible we'll end up in an environment where even reasonable people simply don't know what decisions to make. Voting becomes problematic. Where to live, what products to buy, what to study, what job to take, all become difficult or impossible to decide intelligently.
> all become difficult or impossible to decide intelligently
The solution is for people to learn to reason logically for themselves. Maybe when we actually get to the point where people can't survive until they do so, they'll start taking logical reasoning seriously.
It's not a threat until AI can start creating new ideas in a vacuum. As it stands, they require our creative output and all of their outputs can be considered derived works. It's very effective, but it's still dependent on INPUT (human) -> OUTPUT (AI).
It's not that A.I. software is smart, but that, for the most part, humans are mediocre and uncreative. Humans are stuck in their mental ruts. Given a large enough A.I. and a large enough sampling of total human knowledge, an A.I. can predict much human activity.
OpenAI needs to feed all of Wikipedia, Reddit comments, Stack Overflow, GitHub, textbooks, IMDb, Hacker News, books, news, Yelp, etc. into ChatGPT. That's pretty much the next step to out-Googling Google, unless Google does it first.
[0] Making Money, page 98
Nothing that happens on the...internet? Changes that.
It's the internet. People gonna lie on the internet. I think people overestimate its importance at this point.
If it becomes unreliable or shitty, people will stop using it, go outside, and touch grass.
[+] [-] dougmwne|3 years ago|reply
[+] [-] rakejake|3 years ago|reply
When I read an answer off of stackoverflow that is not highly voted, I know that the answer could be incorrect. But there is a trust that the actor on the other side was also facing the same problem, and is not malicious. After all, why would a malicious actor care to give a wrong answer to a "Rust borrow checker question"?
If the answer was AI generated and posted by a bot operated by a karma farmer, out goes that trust. This is obviously not a threat to society, and chances are the asker will eventually figure out the right answer, but you can see how this sort of thing reduces the signal-to-noise ratio of almost anything on the internet by orders of magnitude. Trust underpins everything.
Hacker news IS a social network in my opinion. I come here to read comments knowing that there is a like-minded person on the other side who is sharing their thoughts. If I was looking for information or something to learn, I would read a newspaper, a textbook or a research paper. The discussion with a real human from some corner of the world is what makes this forum tick.
This tech is not going away and I presume some steady state will be reached eventually, but in the meantime, there is no doubt in my mind that the internet as we know it is in jeopardy.
[+] [-] citizenpaul|3 years ago|reply
[+] [-] stanete|3 years ago|reply
[+] [-] dmix|3 years ago|reply
For work I used ChatGPT to write a very specific programming problem and it wasn't a solution you'd ever copy/paste...the lack of context to the surrounding system means that's impractical 99% of the time outside of toy problems.
But it was still super useful for getting my brain working and suggesting a really good basic structure it would have taken me 3-4 failure cycles to get to.
The same exists in DALL-E/GPT for corporate art generation, writing 'essays', or stories or w/e. It's almost never producing the end product (besides toy problems).
So, if it can't do that in the first place, why evaluate it as if that's what it's supposed to be?
[+] [-] gardenhedge|3 years ago|reply
[+] [-] maegul|3 years ago|reply
Personally, I’m one of these people, and I’ve struggled with how much others either don’t work this way, don’t want to or don’t know how. I think it’s powerful and that finding good collaborators at this way of thinking is a factor for general success.
Interesting to think about how AI can provide a general base level for this kind of thinking.
[+] [-] yieldcrv|3 years ago|reply
ChatGPT gave me answers I would have never gotten regarding extremely technical concepts that the wikipedia articles would never surface, and would need hours of research papers or formal academic work or maybe just working in the field and being in those communities.
I think it is ironic that Google is sitting on a better conversation AI that is just too busy finessing its own engineers trying to escape.
[+] [-] XorNot|3 years ago|reply
[+] [-] jrmg|3 years ago|reply
Where will the next generation of AI training data come from, if the vast majority of information on the internet is generated by AI? Isn’t new data required? It feels like an internet filled with ‘incorrect’ spam generated by AIs is as much, maybe more, of a danger to future AI as it is to humans.
This is almost the same thing, but: it feels like ChatGPT is so good at e.g. programming questions because it’s been trained on millions of human discussions about programming. Even if you can weed out AI spam from future training sets, if people start relying on AI for e.g. Stack Overflow-style answers, and perhaps even reference information, and therefore stop writing or conversing online about new technologies, where will training data about new technologies come from? Primary reference material probably isn’t enough (just as it’s not really enough for most humans).
[+] [-] versteegen|3 years ago|reply
No. For these systems to improve on their defects they should rely less on having seen something very specific in the training data, and more on reasoning and 'understanding' (forming conceptual models). The amount of training data available is already vastly more than strictly necessary given more data-efficient algorithms. I didn't have to read the whole internet to learn anything I know.
[+] [-] hooande|3 years ago|reply
In terms of petabytes of training data, it will be a long time before ChatGPT's own responses are a significant portion of the training set. And even then, at least for a while, that should just shift responses closer to a sort of average human response.
[+] [-] andrewstuart|3 years ago|reply
This is the early overestimation.
[+] [-] Dove|3 years ago|reply
Science fiction from the 1960s through about the 1990s was of the opinion that AI would produce intelligence indistinguishable from humans, and would do so extremely quickly (we should have had it by now), and that the most pressing questions would be things like whether those intelligences deserved voting rights and whether they made humanity obsolete.
That was a wild overestimation of what the technology was capable of.
But the widespread application of AI, whether in search results, social media moderation, propaganda and disinformation, bots, fake reviews, or just straight-up spam, forms an absolute assault on the nature of trust and information in society, overturning heuristics that have held for all of history (that people are usually trustworthy, that information is usually representative, that lying is hard). While I don't think the threat is apocalyptic, I do think that no one even remotely saw this coming.
[+] [-] cs702|3 years ago|reply
He's been on HN many times before, always criticizing the same things:
https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...
As far as I know, all he's ever done is criticize, without ever delving into the mathematical details.
To understand those who disagree with him, read "The Bitter Lesson" by Rich Sutton:
http://incompleteideas.net/IncIdeas/BitterLesson.html
--
EDITS: Modified and rearranged sentences to reflect more accurately what I meant to write the first time around.
[+] [-] wepple|3 years ago|reply
I fear we are about to enter a period where very little can be trusted. The biggest skill our kids can learn is reasoning and logic.
[+] [-] ChrisMarshallNY|3 years ago|reply
And we are doing everything we can, to prevent exactly that.
No wonder the youngsters don't like us.
[+] [-] sizzle|3 years ago|reply
AI platforms that generate text should, at a minimum, keep a "memory" of what they have written/generated, and give the public the ability to check text for plagiarism against those generation models. Sounds like a good startup idea?
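A minimal sketch of that idea, with all names hypothetical (this is not any real provider's API): the platform fingerprints every generated output as hashed word n-grams, and a public check reports what fraction of a candidate text's n-grams were previously generated.

```python
import hashlib

def shingles(text, n=8):
    """Split text into overlapping n-word shingles."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(max(1, len(words) - n + 1))}

def fingerprint(text, n=8):
    """Hash each shingle so the provider stores fingerprints, not raw outputs."""
    return {hashlib.sha256(s.encode()).hexdigest() for s in shingles(text, n)}

class GenerationMemory:
    """Hypothetical per-provider memory of everything the model has emitted."""

    def __init__(self, n=8):
        self.n = n
        self.seen = set()

    def record(self, generated_text):
        # Called once per generation; keeps only hashes.
        self.seen |= fingerprint(generated_text, self.n)

    def overlap(self, candidate_text):
        """Fraction of the candidate's shingles previously generated (0.0-1.0)."""
        fp = fingerprint(candidate_text, self.n)
        return len(fp & self.seen) / len(fp) if fp else 0.0
```

A real service would need a far more robust scheme (normalization, paraphrase resistance, scale), but the shingle-overlap score is the standard cheap first pass for "did a model write this?"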
[+] [-] umvi|3 years ago|reply
Such generated code is not a "threat" or "misinformation" or whatever the author's point is. This is going to be a productivity multiplier for programmers, so that programmers can do more, faster, than ever before!
[+] [-] nonbirithm|3 years ago|reply
I don't think "artificial intelligence" is a good term for describing what's happening right now. It makes it sound like a robot overlord is calling the shots and displacing human workers, but in reality, it's just humans pitted against humans with different beliefs. AI tools still require humans that believe in the proliferation of AI to use them and spread the results everywhere.
I agree with the article's suggestion that maybe some reflection is needed before proliferating new discoveries. The "just a tool" argument doesn't consider that the affordances a new technology offers will dictate what most people use it for in practice. I don't think many people would argue that nuclear weapons are "just a tool" once their uncontrolled spread makes them impossible for any side to ignore. And the dangers aren't yet on the "destruction of humanity" level that many who worry about AGI fear, but perhaps "the destruction of humanity in the arts" is more plausible given recent developments. We aren't even close to anything resembling AGI yet, but specific domains are still vulnerable to being overrun by AI generation this early on (Stack Overflow, ArtStation, etc.).
But this is a matter of tradeoffs, and maybe researchers will continue to be overeager in sharing their exciting findings until an even more severe line is crossed than the one that set off the current war on AI art. I don't look forward to a time when demonstrators scrape GitHub listings and arXiv papers to seek out contributors to anything that touches OpenAI or Stability and implore them to have a change of heart, or scream louder and louder to be heard by the programmers writing their torch denoising code from inside the safety of their office buildings.
[+] [-] swamp40|3 years ago|reply
I'm convinced.
[+] [-] kneebonian|3 years ago|reply
Now, 100 years later, I can do the same thing, and realistically it is just as hard to actually verify or validate anything, given the amount of misleading, partial, censored, or manipulated information out there. How much difference will this make in my life personally, though?
Here's the thing: society isn't static or unchanging. We've recently seen several events that have destroyed many people's trust in various sources of information. My guess is that a generation coming up in a world where it is widely acknowledged that most sources of information are biased or fabricated will grow up with an "immunity," so to speak, to this misinformation, and will learn to disregard everything they find and read online, just like they did before.
[+] [-] aiappreciator|3 years ago|reply
That being said, it's absolutely worth it in the end. The Muslim world chose not to have this conflict and banned the printing press (so the Quran couldn't be printed or translated). What ended up happening there was 500 years of utter stagnation, with no new technological or social leaps.
Generative AI will definitely cause conflicts that indirectly kill millions, but that's just the price of adaptation. I was shocked at how little people seemed to care about the COVID deaths after a while; I guess that's a good thing in the end.
[+] [-] POPOSYS|3 years ago|reply
It is, of course, kind of a great joke: never-before-heard wisdom with deep, impactful visions, published on a website with "trusted insights for computing's leading professionals", and they simply cannot deliver decent HTML/JavaScript.
Looks like we need even more leading professionals!
[+] [-] gcanyon|3 years ago|reply
I hope I'm wrong.
[+] [-] commandlinefan|3 years ago|reply
The solution is for people to learn to reason logically for themselves. Maybe when we actually reach the point where people can't survive unless they do, they'll start taking logical reasoning seriously.