ChatGPT helped me solve a refactoring bug today. I had spent hours messing around trying to figure out what the issue was until I realized, via asking ChatGPT, that I had misunderstood a piece of the code and the docs. It was able to answer and provide examples (until it hit an error and crashed) in a way a senior engineer might have been able to.
The funny thing is I had tried just pasting in code and saying "find the bug" and it wasn't helpful at all, but when I pasted in a portion and asked it to explain what the code was doing, I was able to work backwards and solve the issue.
It's a nice anecdote where the AI felt additive instead of existentially destructive, which has been an overbearing anxiety for me this last month.
From Star Trek: First Contact: "When you build a machine to do a man's job, you take something away from the man."
You surrendered the need to think to the machine. You are lesser for it. I don't think these AIs are just removing drudgery, like, say, a calculator. They actually do the work. Or more correctly, they produce something that will pass for the work.
Wholesale embracing of this sort of technology is bad for us.
GPT3 has shown how ML can be trained on multiple unstructured data sources to produce structured information on demand.
Iterate a few more versions from here, so that the models are stronger at producing the correct structured data, and the impact on every office job will be profound.
I.e. instead of training a generative model on text from the internet, train it on every single excel file, sql database, word document and email your company stores. Then query this model asking it to generate Report X showing Y and Z.
When you step back and consider it, 99% of office jobs are about producing structured data from unstructured data sources. The implications of this are being hugely underestimated.
Nah. When AI is able to do everything you've said, requirements will just get harder and humans will still have to put in hours to get something done. Just like 30 years ago it wasn't feasible to implement streaming music over the internet in a weekend, and now any teenager can do so just by 'npm install'ing... AI will only open the door to even more complex problems to solve.
Consider "report generator" as one category, throw in "Buzzfeed writer" and "Stack Overflow copy/paster", and a clear bimodal distribution emerges. The human touch is still necessary to add context and distinguish fact from plausible hallucination, but experts can now scale their contributions 10x as a result of immediate access, minimal latency, and reduced communication costs.
We're moving towards a world of chair-fillers at one end, and maestros at the other. The clearest difference between labor in 2022 and 2026 will be the hollowing-out of the middle.
The value of a human is in reacting to changing requirements, considering context and in understanding other humans. AI cannot do any of that reliably.
Some office tasks can be automated and those that can don't need AI anyway - they need properly labelled data, databases and some coding.
AI will be very good at creating the illusion of competence. AI cannot actually ensure competence or verify it. That will remain the domain of humans.
>I.e. instead of training a generative model on text from the internet, train it on every single excel file, sql database, word document and email your company stores. Then query this model asking it to generate Report X showing Y and Z.
This has already been possible for decades using old-fashioned automation (Python scripts etc.), assuming the data entry is designed for this.
Honestly, I think the reason managers have teams of people reporting to them is not just to give them unbiased information.
Part of it is probably ego stroking, but I suspect the humans in the loop are doing some sort of analysis too, and reporting qualitative patterns that an AI might not pick up on.
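For illustration, the "old-fashioned automation (Python scripts etc.)" route mentioned above might look like this toy sketch, which turns a structured-enough data dump into a summary report. The data, filenames, and field names here are all hypothetical.

```python
# Toy sketch: "Report X showing Y and Z" via a plain Python script.
# Stand-in for a company data dump -- in practice this would be read
# from an exported spreadsheet or database query.
import csv
import io

RAW = """region,product,revenue
EMEA,Widgets,1200
EMEA,Gadgets,800
APAC,Widgets,1500
"""

def report_revenue_by_region(raw_csv: str) -> dict:
    """Aggregate revenue per region from a CSV dump."""
    totals: dict = {}
    for row in csv.DictReader(io.StringIO(raw_csv)):
        totals[row["region"]] = totals.get(row["region"], 0) + int(row["revenue"])
    return totals

print(report_revenue_by_region(RAW))  # {'EMEA': 2000, 'APAC': 1500}
```

As the comment notes, this only works when the data entry is designed for it; the pitch for generative models is handling sources that were never structured in the first place.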
Deep-thinking work will not go away, from trades to tech. It'll change work for sure, but it's not going to obliterate workers' jobs.
I'm no luddite, but I've seen enough rocky digital transformations to know that human beings don't operate like manufacturing pipelines. Automation and AI assisted automation will be harder to generally implement.
But what I do feel confident about is that there will be a large mass of consultants who'll sell an expensive dream to a lot of mid-tier businesses. The next big flex for business IT will be to have a notch on your belt for a failed AI automation project.
> Widespread adoption of generative AI will act as a lubricant between systems,
I largely agree with this article, but I feel like you have to be careful with these general predictions. Many technologies have purported themselves to be this "business lubricant" tech (ever since the spreadsheet), but the actual number of novel spreadsheet applications remains small. It feels like the same can be said for generative AI, too. Almost every day I feel the need to explain that "generation" and "abstract thought" are distinct concepts, because conflating the two leads to so much misconception around AI. Stable Diffusion has no concept of artistic significance, just art. Similarly, ChatGPT can only predict what happens next, which doesn't bestow it heuristic thought. Our collective awe-struck-ness has left us vulnerable to the fact that AI generation is, generally speaking, hollow and indirect.
AI will certainly change the future, and along with it the future of work, but we've all heard idyllic interpretations of benign tech before. Framing the topic around content rather than capability is a good start, but you easily get lost in the weeds again when you start claiming it will change everything.
> Our collective awe-struck-ness has left us vulnerable to the fact that AI generation is, generally speaking, hollow and indirect.
This totally resonates with me. This is absolutely correct. Thinking about the future of work, there's much of what I do every day in my job that is hollow and indirect. And I would be totally okay if I could have something like ChatGPT do it for me.
I don't have a problem with the main point of the article, but there is a huge terminology confusion that is rapidly gathering force to confuse people. The key breakthroughs of GPT3 et al are not primarily about generative AI. People had been building generative models long before GPT3, and it was generally found that discriminative models had better performance.
The key to the power of GPT3 is that it has billions of parameters, AND those parameters are well-justified because it was trained on billions of documents. So the term should be something like "gigaparam AI" or something like that. Maybe GIGAI as a parallel to GOFAI. If you could somehow build a gigaparam discriminative model, you would get better performance on the task it was trained on than GPT3.
Good point on the terminology. What do you think the right terminology should be? LLMs is too much of a mouthful and is not as informative for the general public, imo. People are also using Foundation Models, which I rather like.
ChatGPT and generative AI will not
who write these things for a living will definitely have fewer clients. But this is a small percentage of paid writing, and not the most lucrative or desirable.
I do not think that the world is changing because of large language models. That seems to be a controversial opinion so I won't get into it here. But these are powerful new tools, no question. The way I work has changed and I'm very glad to have ChatGPT.
I do believe that in the coming years knowing how to use ChatGPT or similar products will be as important as knowing how to use Google is now. People that know how to leverage LLMs going forward will simply have an advantage over those who don't. It won't be long before it isn't optional for executives and knowledge workers. This will be a big change for many people. But we adapted to Google in the early 2000s and people will adapt to this as well.
Or worse, subtly inaccurate. The problem I have with generative AI right now is that its product looks like it makes sense, and sometimes it does, but there is always the risk of total nonsense hidden somewhere in the middle. So you still need someone capable to check and correct for most professional work, and sometimes that is harder or more time consuming than making the product itself.
The same sort of problem exists with self-driving cars: they are often correct, but not often enough, and staying alert to correct the AI is worse than driving yourself, even though that is more work, paradoxically enough.
AI might manage to push through these barriers, but I remain skeptical with the technology in the current state: statistical machines that are good in the common cases but sketchy at the edges.
I've been asking friends in non-programming engineering fields how ChatGPT does in their area of expertise, and I believe programming is the area where ChatGPT is the most accurate. Its solutions to general engineering problems seem blatantly wrong in almost all cases, whereas in programming, it seems to be able to generate mostly correct code for simple, boilerplate-like tasks.
Yup. What seems to be largely missed is that these models have zero understanding, and are actually destroyers of information, not creators. In classic Information Theory, information is basically surprise value — how much unexpected info is in the message? — yet these "AI" systems put out the most expected subset in each instance. This highly averaged output is very recognizable and so very striking, but it is not actually very informative (perhaps except in cases where it is specifically used as a verbose search engine, where the query takes advantage of the breadth of the AI's training).
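The "surprise value" the comment refers to is Shannon's self-information: a message with probability p carries −log₂(p) bits. A minimal sketch of why highly expected output is uninformative:

```python
# Shannon self-information ("surprisal") of an outcome with probability p.
# Expected outcomes (p near 1) carry almost no information.
import math

def surprisal_bits(p: float) -> float:
    """Bits of information in observing an outcome of probability p."""
    return -math.log2(p)

print(surprisal_bits(0.5))    # 1.0 bit -- a fair coin flip
print(surprisal_bits(0.999))  # ~0.0014 bits -- a near-certain outcome
```

By this measure, a model that always emits the most probable continuation is, per token, emitting the lowest-surprisal and thus least informative choice available to it.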
No, but I do think that both of these things are revealing to us just how bad highly technical people and highly intuitive and empathetic people are at properly communicating with one another.
(It is a broad generalization to assume that these traits are mutually exclusive. They do, of course, co-exist in many people. However, it seems to me that the number of people in whom they coexist robustly is few. And it is from these few that true once-in-a-generation geniuses come.)
What a weird moment to take a stab at AI, right now when all the projects are starting to bear fruit and we can see advances that remind me of Moore's law in the 90s.
these projects have direct commercial applications right now.
I paid for Tabnine pro since it was 50% off for a year but I won't renew it unless it massively improves.
I mean, it does give good completions sometimes, but the time saved isn't that great imho. Maybe ChatGPT is better, but it feels like AI still has some way to go to actually be so useful that you'd be less successful without it.
Their product MaestroAI is marketed as “for teams” (and of course with the obligatory fading-color call-to-action buttons) presumably to attract VC $$$ but I would love something like this (powered by LLMs) to extract info from all my documents.
Maybe something like this exists? Please no DEVONThink suggestions :)
On the topic of Content is King, I have a different view than the author. I think in the case of these trained AIs, 'content' refers to the training datasets and not the generated outputs.
Trained AIs are in something like the early digital streaming days where there was only one provider in town, so that provider aggregated All The Content. Over the next decade we would see the content owners claw their content back from Netflix, and onto competitor platforms -- which takes us to where we are today. Netflix's third party content has dwindled and forced them to focus on creating their own first party content which can not be clawed away.
When these generative AIs start to produce income, it will be at the expense of the artists whose art was in the training dataset nonconsensually. This triggers the same content clawback we saw in digital streaming. Training datasets will be heavily scrutinized and monetized because the algorithms powering generative AIs aren't actually carrying much water. What is DALL-E without its dataset? Content is King.
teknopaul | 3 years ago
That's not my experience, I am continuously amazed by the amount of tasks worker bees manage to do in excel.
I kind of wish MS Access was more of a thing, because when it eventually doesn't scale and you need a "proper" system, it takes a rewrite.
ChrisMarshallNY | 3 years ago
I can't wait for Wall-E!
https://www.thelist.com/img/gallery/things-only-adults-notic...
zabzonk | 3 years ago
or wildly inaccurate, particularly in fields such as programming
k__ | 3 years ago
I wouldn't let it write a whole article, but it can really save time at research. Just needs a bit of fact checking in the end.