I'm sure this has been talked about somewhere, so feel free to just leave the link. But in an internet where people go to one of these chatbots for all their answers and searches, why would anyone continue to post content? It seems at some point you would just be working for free as a content creator for all of these companies. Nothing you post would be linked to, you would never be cited as the source of the information, and you wouldn't even know whether anyone saw anything you wrote.
I'm still tempted to post things online for my own reference. And I believe we benefit from increasing the common wealth of information online.
But the fact that my own posts are feeding the thing that will cause massive redundancies in my industry, ultimately to the detriment of my financial worth, gives me pause for thought.
Because ChatGPT isn't just a way to share the information I put online. Eventually, within a year I'd guess, it'll replace me and my colleagues; it'll be regurgitating fully formed projects, and it'll probably learn to do the ancillary activities too.
It seems silly to claim that we should all just stop publishing our thoughts simply because some process could come along and combine them with others in order to produce something unique. That is, of course, the story of all human history.
Soon, the only "content" will be the SEO-optimized junk floating around, and it will create more content using ChatGPT-4. It will be turtles all the way down.
ChatGPT can solve a lot of problems, but it cannot solve problems that you as a user don't know the questions to, which makes articles and blogs superior to it, and they will continue to be for a while yet.
I have a project where I shared a lot of JavaScript info over the years and those articles are growing 10% month over month. Some get 500 daily views despite there being ChatGPT.
You just can’t trust it, neither can you get it to give you real context or the required visuals.
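For a sense of scale, growth like that compounds quickly. A quick back-of-the-envelope check (the 10% figure is from the comment above; the rest is just arithmetic):

```python
# 10% month-over-month growth sustained for a year roughly triples traffic
monthly_growth = 1.10
yearly_multiplier = monthly_growth ** 12
print(round(yearly_multiplier, 2))  # 3.14
```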
Some may do it as a form of immortality: if these truly are 'foundational' models, someone may want to feed as much of their identity and ways of thinking into them as they can by flooding online discourse.
So all future models contain some morsel of their being. And the more original the person's ways of thinking, and expressing, the more influence they have on the model, as their thoughts are not as easily compressed or aligned with common embeddings.
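The compressibility intuition can be made loosely concrete with a plain compressor standing in (very crudely) for a model's embeddings: formulaic text is highly redundant, while more original phrasing is harder to squeeze. The sample strings below are invented for the demo.

```python
import zlib

def compression_ratio(text: str) -> float:
    """Compressed size over original size; lower means more redundant/predictable."""
    data = text.encode("utf-8")
    return len(zlib.compress(data)) / len(data)

# Formulaic, oft-repeated phrasing squeezes down to almost nothing...
cliche = "content is king and you should post consistently to grow your audience. " * 40

# ...while varied, idiosyncratic phrasing resists compression.
novel = (
    "Moss-drunk satellites hum lullabies to rusted harbours; "
    "a cartographer of dreams barters vowels for thunder; "
    "seven indigo accordions argue over the taste of Tuesday."
)

print(compression_ratio(cliche) < compression_ratio(novel))  # True
```

A real model's training dynamics are far subtler than zlib, of course; this only illustrates the "unusual expression carries more irreducible information" idea.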
I would guess many more people will move to YouTube to create content, and much of the web will hollow out because there's no longer an incentive to write articles on many topics: they no longer get any traffic.
Given that Google owns YouTube, I imagine they are in an ideal position to extract the information in YouTube's videos and use it to power their chatbot in a way nobody else can.
I would like to be paid for my content, but Google broke that early "social contract" of the internet when it started taking a bigger and bigger cut of the advertising pie. Social networks don't pay anything at all. Models will simply have to pay us if they want our data, and I think this is a more honest proposition.
I wrote a blog post yesterday about clean coding. Well, actually I asked ChatGPT to write it for me because I was too lazy to start it. Then I wrote the following:
"Well, why would anyone write about anything anymore unless it’s something very specific and unknown. From ChatGPT I see that clean coding is a well established concept. I feel so because I agree with most of the things it says."
So I feel what you are saying. At the same time I think blog posts might evolve to hybrid things where you just talk to an AI and share your thoughts on its output.
I'm not so sure of a concrete answer to that. However, what I think we may see is that people will publish their non-AI content only when it is so novel (or contradictory to the mainstream) that AI information sources can't yet act as a substitute. The era where tons of people publish tons of crap online is already in decline, but particularly valuable content may still find a place online.
In an abstract sense, I can imagine that someone wanting self-help advice may actually not want it from an AI but from a human instead, because the domain of that advice is highly dependent on individual experience or opinion that an AI couldn't reliably provide. There would be too much risk of an AI giving purely specious advice that doesn't apply to reality.
For instance, a generic AI available to the public probably will not provide you non-mainstream dietary advice. An AI giving dating advice may ultimately default to reductive "boomer" advice and be unwilling to give controversial advice based on real world experience that may be superior.
Though it may not be forever, humans still have the advantage of individual initiative and experience in the physical world. If anything about your life is extraordinary, or if you're radical in any way, which describes a minority of the public, there may still be a place for your sort of content.
I won't claim that it won't happen, but to paint such prophecies you have to have quite a specific idea of the "internet", and I'm not even sure I can clearly imagine what it is for you.
The internet is a communication system. WhatsApp is the internet. Will messaging your mom become irrelevant because you can just ask ChatGPT? Well, maybe, but I don't see it happening in the near future. Okay, I see, you meant to say "world wide web", stuff you access in your browser, yeah… This doesn't really help either, because you access all sorts of stuff using your browser; it's just a lousy set of wrappers to render whatever there is, including WhatsApp.
So, okay, what do people do on the internet besides WhatsApp? They watch Twitch, for instance. Why? Are they looking for answers there? Surely not, generally it's such a mindbogglingly useless stupid waste of time it's hard to believe people actually watch this shit, yet they do, a lot, and even donate money like they are grateful their useless time is uselessly wasted. Also, it's a well-known fact they aren't even looking for a specific kind of content: if you are a streamer with 10K online you can do basically whatever you want, these people are following you, not whatever it is you did when they joined your channel. So, will people stop watching real people, because there are, well, rendered people? I'd say it's unlikely in the foreseeable future.
For the same reasons it's unlikely that people will stop visiting 4chan and HN, and all sorts of thematic forums and such. Obviously, they won't stop accessing online libraries, because when you want to read Kafka, you want to read Kafka and not a ChatGPT-generated summary of Kafka. Same with watching LoTR (even though it could be completely generated by a NN, the movie has to have a name, and you want to know that it's the same stuff your friend, "friend", or favourite Twitch streamer recommends, not some custom-generated movie tailored specially for you). Same with every blogger, podcaster, and YouTuber. You may like recommendation systems, but that doesn't really diminish the role of trusted opinions for the majority of people so far.
So, what else is there on the internet? Shitty information portals with copywriter-generated articles? Well, ok, now they will be ChatGPT-generated articles. So what? I suppose it may turn out to be actually better than human copywriters. Maybe Wikipedia will be less relevant (though it wouldn't be, if it were better structured, and the main claim of Wikipedia, even though it's false, is that it doesn't generate original content anyway).
Surely a lot of things will look quite different 30 years forward. But it's hard to predict how exactly they will look, and I'm pretty sure it won't be whatever you imagine right now.
I think Chrome has Shift+right-click, or double right-click, to open the browser's context menu anyway.
I currently don't have access to a PC to double-check.
Wow that is really pathetic. There are easy ways around it but the implication is quite ironic: talking about chat bots taking all the credit for everything and then not letting anyone quote them.
I'm finding that whichever search is used, it's much the same results with the same political leanings, so AI search doesn't actually offer anything new there. I'm a sceptic.
Apologies if this is a little off topic, but I've been really excited to try OpenAI's GPT API and have been locked out for months with no response from their support chat. Any insight into how to get access would be greatly appreciated!
Here's what happened: I made an account to play around with ChatGPT, then wanted to switch to my company email address to use their API on that account. They wouldn't let me use my phone number to sign up for a second account, so I deleted the first one. Unfortunately, deleting that account didn't free up my number so that I could sign up again. Out of desperation, since their support never replied, I bought a new phone number from Google Voice, but they don't allow VoIP numbers, so that was in vain. My initial support request was in early January, and both it and my follow-up a couple of weeks ago have gone unseen. So it feels as though I'm hard locked out of an API that looks like a lot of fun to use for both personal and professional projects. What is one to do?
Talkatone provides a VOIP service that passes most 2FA checks (including ones that try to filter for VOIP)
I hesitate to talk about it too much lest they get abused to high hell and eventually filtered like everyone else, but at least a few months ago I was able to register an OpenAI account using one of their numbers
I'm super excited for ChatGPT-4.
I remember playing around with GPT-2 and being distinctly underwhelmed.
GPT-3 was the first time I felt truly excited about generative text AI.
How do they keep churning these out this fast? Feels like this kind of technology should take longer to develop, if only through the baby-with-nine-mums-in-one-month adage.
LLMs have been around for a while and they aren't really that different than they were a few years ago tech-wise. The question was always about being able to get good data and compute power for training/running them.
Now that people understand the capabilities of the tech, it's got potential for profit and there's incentive to throw money at it.
OpenAI is treating GPT as a "foundation model". They spend time training the foundation model, then build on top of it. GPT-3 was published in May 2020. GPT-3.5 ("text-davinci-003" and "code-davinci-002") shipped about a year ago, and ChatGPT was just a fine-tune on top of those.
So they've had plenty of time to increase the training set, improve the architecture, and run GPUs at full power to get to GPT-4.
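The two-phase idea (expensive pretraining once, then cheap task-specific tuning on top of the same model) can be sketched with a toy bigram model. This is purely illustrative: the corpora and function names are invented for the example, and a bigram counter bears no resemblance to GPT's actual transformer architecture or training.

```python
from collections import Counter

def train(corpus, counts=None):
    """Count word bigrams; pass in existing counts to continue training them."""
    counts = counts or Counter()
    words = corpus.split()
    counts.update(zip(words, words[1:]))
    return counts

def next_word(counts, word):
    """Most likely word to follow `word` under the counts, or None if unseen."""
    candidates = {b: c for (a, b), c in counts.items() if a == word}
    return max(candidates, key=candidates.get) if candidates else None

# Phase 1: pretraining on a broad corpus builds the foundation model.
foundation = train("the cat sat on the mat the dog sat on the rug")

# Phase 2: fine-tuning layers task-specific data on top of the same "weights".
tuned = train("the user asked a question the assistant answered the question",
              counts=Counter(foundation))

print(next_word(foundation, "sat"))    # on
print(next_word(tuned, "assistant"))   # answered
```

The fine-tuned model keeps everything the foundation learned while adding the new behaviour, which is why phase 2 is so much cheaper than phase 1.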
hackerbrother | 3 years ago:
This has been true ever since the FB/Twitter era took off.
bick_nyers | 3 years ago:
Just because someone could be financially motivated to post content doesn't mean that they have to.
I like to comment on Reddit and HN, and I don't expect to be paid for it (but if you would like to pay me then by all means).
thequadehunter | 3 years ago:
Also the same reason people still hang out in real life even though online games, chat services, and social media exist.
These are tools, and some people get too into the tools...but at the end of the day there's a time and a place for them.
gurelkaynak | 3 years ago:
You can read the post if you want: https://gurel.kaynak.link/2023/03/09/clean-coding/
koch | 3 years ago:
// Blocks clipboard events on the page (.bind is deprecated; .on is the modern form)
jQuery('body').on('cut copy paste', function (e) { e.preventDefault(); });
// Suppresses the right-click context menu
jQuery('body').on('contextmenu', function (e) { return false; });
AussieWog93 | 3 years ago:
Buy a burner SIM then change the 2FA to Google Auth?
stuckinhell | 3 years ago:
I can't wait to see what GPT-4 is like!