Anyone else witnessing a panic inside NLP orgs of big tech companies?

398 points | georgehill | 3 years ago | old.reddit.com

509 comments

[+] hn_throwaway_99|3 years ago|reply
Wow - this is just wild. I've seen lots of arguments around "AI won't take everyone's job, it will just open up new areas for new jobs." Even if you give that the benefit of the doubt (which I don't really think is warranted):

1. You don't need to take everyone's job. You just need to take a shitload of people's jobs. I think a lot of our current sociological problems, problems associated with wealth inequality, etc., are due to the fact that lots of people no longer have competitive enough skills because technology made them obsolete.

2. The state of AI progress makes it impossible for humans in many fields to keep up. Imagine if you spent your entire career working on NLP, and now find GPT-4 will run rings around whatever you've done. What do you do now?

I mean, does anyone think that things like human translators, medical transcriptionists, court reporters, etc. will exist as jobs at all in 10-20 years? Maybe 1-2 years? It's fine to say "great, that can free up people for other things", but given our current economic systems, how are these people supposed to eat?

EDIT: I see a lot of responses along the lines of "Have you seen the bugs Google/Bing Translate has?" or "Imagine how frustrated you get with automated chat bots now!" Gang, the whole point is that GPT-4 blows these existing models out of the water. People who work in these fields are blown away by the huge advances in quality of output in just a short time. So I'm a bit baffled why folks are comparing the annoyances of ordering at a McDonald's automated kiosk to what state-of-the-art LLMs can do. And a reminder that the first LLM was only created in 2018.

[+] csa|3 years ago|reply
> I think a lot of our current sociological problems, problems associated with wealth inequality, etc.,

I see where you’re coming from, but is this really the main source of the inequality?

Based on numbers relating to workers’ diminishing share of profits, it seems that the capital class has been able to take a bigger piece of the profit pie without sharing. In the past, companies have shared profits more widely due to benevolence (it happens), government edict (e.g., the WW2 era), or social/political pressure (e.g., the post-war boom).

Fwiw, I think that the mid-20th century build up of the middle class was an anomaly (sadly), and perhaps we are just reverting to the norm in terms of capital class and worker class extremes.

I see tons of super skilled folks still getting financially fucked by the capital class simply because there is no real option other than to try to become part of the capital class.

[+] pyuser583|3 years ago|reply
The problem starts long before AI takes the jobs.

I used to do a job that was eventually automated. We did the one and only thing the computer couldn’t do - again and again in a very mechanical fashion.

It was a shit job. You might get promoted to supervisor - but that was like being a supervisor at McDonald's.

Why not treat the job seriously? Why didn’t the company use it as a way to recruit talent? Why didn’t the workers unionize?

Because we all knew it would be automated anyway.

We were treated like robots, and we treated the org like it was run by robots.

There’s a huge shadow over the economy that treats most new jobs like shit jobs.

[+] pxc|3 years ago|reply
> I mean, does anyone think that things like human translators, medical transcriptionists, court reporters, etc. will exist as jobs at all in 10-20 years? Maybe 1-2 years? It's fine to say "great, that can free up people for other thing", but given our current economic systems, how are these people supposed to eat?

And it doesn't mean that the replacements will be much better, or even as good as the humans they replace. They will probably suck in ways that will become familiar and predictable, and at the same time irritating and inescapable. Think of the outsourced, automated voice systems at your doctor's office, self-checkout at the grocery store, those touchscreen kiosks at McDonald's, etc.

I already find myself wanting to scream

> GIVE ME A FUCKING HUMAN BEING

every now and then. That's only going to get worse.

[+] bgroat|3 years ago|reply
Something curious I've noticed is that the people who seem MOST excited are in tech.

When I show ChatGPT to a lay person they don't really care.

When I show it to a professional copywriter they say that if they submitted this content to a client they would lose the client.

I'm reminded of when my son was learning to talk and everything he said seemed brilliant and coherent to me.

To any stranger it sounded like gibberish.

I think GPT is like my son, and all tech people are like excited parents.

Maybe the kid will learn to speak like an adult, but it can't yet

[+] dogcomplex|3 years ago|reply
It is very obvious there is a mass unemployment wave coming - or at least a mass "retraining" wave, though the new jobs "teaching AIs" or whatever remain to be seen. I hope everyone who is currently just questioning whether this will happen is prepared to state it with conviction in the coming months and fight for some sort of social protection program for all these displaced people, because the profits from this new world aren't getting distributed without a fight.
[+] timoth3y|3 years ago|reply
The problem is not that automation will eliminate our jobs.

The problem is that we have created an economy where that is a bad thing.

[+] WalterBright|3 years ago|reply
Think of people who have jobs like archaeology, digging up bones. The only way these jobs can exist is if technology has taken over much of the grunt work of production.

As for human translators, the need for them far, far exceeds the number of them. Have you ever needed translation help? I sure have, but either no human translator was available or they were too expensive.

[+] mostlysimilar|3 years ago|reply
This is possibly a death spiral. GPT is only possible because it's been trained on the work humans have learned to do and then put out in the world. Now GPT is as good as them and will put them all out of work. How can it improve if the people who fed it are now jobless?
[+] hnbad|3 years ago|reply
When I was studying Computational Linguistics I kept running into the unspoken question: given that Google Translate already exists, what is even the point of all of this? We were learning all these ideas about how to model natural language and tag parts of speech using linguistic theory so we could eventually discover that utopian solution that would let us feed two language models into a machine to make it perfectly translate a sentence from one language into another. And here was Google Translate being "good enough" for 80% of all use cases using a "dumb" statistical model that didn't even have a coherent concept of what a language is.

It's been close to two decades and I still wonder if that "pure" approach has any chance of ever turning into something useful. Except now it's not just language but "AI" in general: ChatGPT is not an AGI, it's a model fed with prose that can generate coherent responses for a given input. It doesn't always work out right and it "hallucinates" (i.e. bullshits) more than we'd like but it feels like this is a more economically viable shot at most use cases for AGI than doing it "right" and attempting to create an actual AGI.

We didn't need to teach computers how language works in order to get them to provide adequate translations. Maybe we also don't need to teach them how the world works in order to get them to provide answers about it. But it will always be an 80% solution because it's an evolutionary dead end: it can't know things, we have only figured out how to trick it into pretending that it does.

[+] leroy-is-here|3 years ago|reply
I personally think that humans easily apply structure to language that doesn’t really exist. In fact, we restructure our languages daily, as individuals, when communicating verbally and through text. We make up words and shorthands and abbreviations and portmanteaus. But I think the brain simply makes connections between words and things and the structure of speaking those words is interpreted like audio or visuals in our brains — just patterns to be placed.

Really, words, utterances by themselves, carry meaning. Language is just a structure for _us_, so to speak, that we agree on for ease of communication. I think this is why probabilistic models do so well: the ideas we all have are mostly similar, it really is about just mapping from one kind of word to another, or kind of phrase to another.

Feel free to respond, I’m most certainly out of my depth here.

[+] nl|3 years ago|reply
> Computational Linguistics I kept running into the unspoken question

I've done a lot of work in NLP, and the times when computational linguistics has been useful have been very rare. The only time I shipped something to production that used it was a classifier for documents that needed to evaluate them on a sentence-by-sentence basis for possible compliance issues. Computational linguistics was useful then because I could rewrite multi-clause sentences into simpler single-clause sentences, on which the classifier could achieve better accuracy.
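
To make that concrete, here's roughly the kind of clause splitting I mean - a toy sketch using spaCy's dependency parse, not what we actually shipped (that was more involved). It only splits coordinated verbs and doesn't copy shared subjects into each clause:

  import spacy

  nlp = spacy.load("en_core_web_sm")

  def split_clauses(sentence):
      # Split coordinated clauses ("X did A and Y did B") into separate
      # strings so a sentence-level classifier sees one clause at a time.
      doc = nlp(sentence)
      clauses = []
      for sent in doc.sents:
          heads = [sent.root, *sent.root.conjuncts]
          for head in heads:
              # Drop tokens that belong to a sibling clause nested under
              # this head, plus conjunctions and punctuation.
              exclude = set()
              for other in heads:
                  if other is not head and other in head.subtree:
                      exclude.update(other.subtree)
              tokens = [t for t in head.subtree
                        if t not in exclude and t.dep_ != "cc"
                        and not t.is_punct]
              clauses.append(" ".join(t.text for t in tokens))
      return clauses

  # ["The vendor stores data offshore", "retention is unlimited"]
  print(split_clauses("The vendor stores data offshore and retention is unlimited."))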

> And here was Google Translate being "good enough" for 80% of all use cases using a "dumb" statistic model that didn't even have a coherent concept of what a language is.

I assume you are aware of the Frederick Jelinek quote "Every time I fire a linguist, the performance of the speech recognizer goes up"?[1]

That was in 1998. It's been pretty clear for a long time that computational linguistics can provide some tools to help us understand language but it is insufficiently reliable to use for unconstrained tasks.

[1] https://en.wikipedia.org/wiki/Frederick_Jelinek

[+] satvikpendem|3 years ago|reply
> But it will always be an 80% solution because it's an evolutionary dead end: it can't know things, we have only figured out how to trick it into pretending that it does.

At the margin, these are equivalent (Chinese room, and all). I wonder if humans also learn similarly then retroactively tell themselves they actually do know things instead of just containing experiences encoded in their neurons (and whether that is any different than a neural network encoding trained "knowledge" in its neurons, too). This is the semantics of epistemology, at the end of the day.

[+] dogcomplex|3 years ago|reply
Ask a toddler how the world works and you'll get a very similar response. It is entirely likely the 80%-of-human-intelligence barrier is not a "dead end" but merely a temporary limitation until these models are made to hone their understanding and update over time (i.e. get feedback) instead of going for zero-shot perfection. The GPT models incorporating video should start developing this "memory" naturally as they incorporate temporal coherence (time) into the model.

The fact we got this far through brute force is just insanely telling. This is a natural phenomenon we're stumbling upon, not something crafted by humans.

Also - fun fact, the Facebook Llama model that fits on a Raspberry Pi and is almost as good as GPT-3? Also basically brute force. They just trained a smaller model for a lot longer on more data. Food for thought.

[+] esperent|3 years ago|reply
Google Translate works amazingly well on languages with a similar grammar (or at least it does on European languages, which I have the experience to judge).

However, translation between more distant languages is pretty terrible. Vietnamese to English is something I use Google Translate for every day, and it's a mess. I can usually guess what the intended meaning was, but if you're translating a paragraph or more it won't even translate the same important subject words consistently throughout. Throw in any kind of slang or abbreviations (which Vietnamese people use a lot when messaging each other) and it's completely lost.

[+] lisasays|3 years ago|reply
> Given that Google Translate already exists, what is even the point of all of this?

Because for the other 20 percent it's plainly -not- good enough. It can't even produce an acceptable business letter in a resource-rich target language, for example. It just gets you "a good chunk of the way there."

And there's no evidence that either (1) throwing exponentially more data at the problem will see matching gains in accuracy or (2) this additional data will even be available.

[+] hnfong|3 years ago|reply
I learnt the very basics of computational linguistics since it was related to a side project. I kept wondering why people were spending huge amounts of resources on tagging and labelling corpora of thousands of words, when to me it seemed that in theory it should be possible to feed Wikipedia (in a given language) into a program and have it spit out some statistically sound rules about words and grammar.
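
FWIW, the crudest version of that intuition fits in a few lines of Python. A toy sketch (the corpus is a stand-in for a tokenized Wikipedia dump, and a real attempt would need smoothing and much more):

  import math
  from collections import Counter
  from itertools import pairwise  # Python 3.10+

  # Stand-in corpus; imagine a tokenized Wikipedia dump here.
  corpus = "the cat sat on the mat . the dog sat on the rug .".split()

  unigrams = Counter(corpus)
  bigrams = Counter(pairwise(corpus))
  total = len(corpus)

  def pmi(w1, w2):
      # Pointwise mutual information: how much more often the pair occurs
      # than independence predicts. High-PMI pairs behave like fixed
      # units - crude, automatically derived facts about the grammar.
      p_pair = bigrams[(w1, w2)] / (total - 1)
      p_indep = (unigrams[w1] / total) * (unigrams[w2] / total)
      return math.log2(p_pair / p_indep)

  print(sorted(bigrams, key=lambda b: pmi(*b), reverse=True)[:5])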

I guess the same intuition led to these new AI technologies...

[+] tootie|3 years ago|reply
I know only the bare basics of NLP and AI, but isn't Google basically just a specialized case of an LLM? Translate and Search work on the same principle: all you need to do is regression analysis on a huge pile of human behavior. Early search engines worked so hard to try to understand content and user intent and got thumped by a comparatively naive heuristic and a giant pile of data.
[+] WA|3 years ago|reply
To take it further: if 80% is good enough and you gotta do some work anyways on the output of LLMs, maybe all the extrapolations like "just wait 10 years and most jobs are doomed" are exaggerated. It’s not unlikely that LLMs hit a wall, because they inherently lack any sort of logic and reasoning.

Which of course is a good thing to make sure many people get to keep their jobs.

[+] dserban|3 years ago|reply
The PR folks at my current company are in full panic mode on LinkedIn, judging from the passive-aggressive tone of their posts (sometimes very nearly begging customers not to use ChatGPT and friends).

They fully understand that LLMs are stealing lunch money from established information retrieval industry players selling overpriced search algorithms. For a long time, my company was deluded about being protected by insurmountable moats. I'm watching our PR folks going through the five stages of grief very loudly and very publicly on social media (particularly noticeable on LinkedIn).

Here's a new trend happening these days. Upon releasing new non-fiction books to the general public, authors are simultaneously offering an LLM-based chatbot box where you can ask the book any question.

There is no good reason this should not work everywhere else, in exactly the same way. Take for example a large retailer who has a large internal knowledge base. Train an LLM on that corpus, ask the knowledge base any question. And retail is a key target market of my company.
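
In practice, "train an LLM on that corpus" mostly means retrieval plus prompting rather than literal training. A minimal sketch of the pattern, assuming the current (0.x) openai Python client; the documents are placeholders:

  import numpy as np
  import openai

  kb_chunks = [  # placeholder internal knowledge base
      "Returns are accepted within 30 days with the original receipt.",
      "Store-brand electronics carry a one-year limited warranty.",
  ]

  def embed(text):
      resp = openai.Embedding.create(model="text-embedding-ada-002", input=text)
      return np.array(resp["data"][0]["embedding"])

  chunk_vecs = [embed(c) for c in kb_chunks]

  def ask(question):
      q = embed(question)
      # Retrieve the most similar chunk by cosine similarity...
      sims = [q @ v / (np.linalg.norm(q) * np.linalg.norm(v)) for v in chunk_vecs]
      context = kb_chunks[int(np.argmax(sims))]
      # ...then have the LLM answer grounded in that chunk only.
      resp = openai.ChatCompletion.create(
          model="gpt-4",
          messages=[
              {"role": "system",
               "content": "Answer using only this context: " + context},
              {"role": "user", "content": question},
          ],
      )
      return resp["choices"][0]["message"]["content"]

  print(ask("What is the return window?"))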

Needless to say I'm looking for employment elsewhere.

[+] swatcoder|3 years ago|reply
> There is no good reason this should not work everywhere else, in exactly the same way. Take for example a large retailer who has a large internal knowledge base. Train an LLM on that corpus, ask the knowledge base any question.

Since LLMs can’t scope themselves to be strictly true or accurate, there are indeed good reasons, like liability for false claims and added traditional support burden from incorrect guidance.

Everybody is getting so far ahead of themselves with this stuff, but we’re just not there yet and don’t know for sure how far we’re going to get.

[+] mashygpig|3 years ago|reply
> Here's a new trend happening these days. Upon releasing new non-fiction books to the general public, authors are simultaneously offering an LLM-based chatbot box where you can ask the book any question.

Can you link to an example?

[+] smrtinsert|3 years ago|reply
How did GPS tracking companies survive Google and Google Maps? I think there will probably be many niches to explore even as the big names work hard to compete and eventually commoditize LLMs.
[+] dongobread|3 years ago|reply
I worked in a research capacity in the voice assistant org of a big tech company until very recently. There was a lot of panic when ChatGPT came out, as it became clear that the vast bulk of the org's modeling work and research essentially had no future. I feel bad for some of my colleagues who were really specialized in specific NLP technology niches (e.g. building NLU ontologies) which have been made totally obsolete by these generalized LLMs.

Personally - I'm moving to more of a focus on analytical modeling. There is really nothing interesting about deep learning to me anymore. The reality is that any new useful DL models will be coming out of mega-teams at a few companies, where improving output through detailed understanding of modeling is less cost-effective than simply increasing data quality and scale. It's all very boring to me.

[+] djous|3 years ago|reply
During my master's degree in data science, we had several companies visit our faculty to recruit students. Not a single one was a specialized NLP company, but many of them had NLP projects going on.

Most of those projects were the usual "solution looking for a problem to solve". Even for those projects that might have had _some_ utility, it would have been way more effective to buy/license a product than to develop an in-house solution. Because really, what's the use of throwing a dozen 25-30 year olds with non-specialized knowledge at the problem, when there are companies full of guys with PhDs in NLP that devote all their resources to NLP? Yeah, you can pipe together some Python, but these kinds of products will always be subpar and more expensive long-term than just buying a proper solution from a specialized company.

To me it was pretty clear that those projects were just PR so that C-levels could sell how they were preparing their company for a digital world. Can't say I'm sorry for all the people working on those non-issues though. From the attitude of recruiters and employees, you'd think they were about to find a cure for cancer. Honestly, I can't wait for GPT and other productivity tools to wreak havoc upon the tech labour market. Some people in tech really need to be taken down a notch or two.

[+] bippingchip|3 years ago|reply
As one of the comments on the Reddit post says - it's not just big tech companies, but also entire university teams that feel the goalposts moving miles ahead all of a sudden. Imagine working on your PhD on chatbots since the start of 2022. Your entire PhD topic might be irrelevant already...
[+] ChuckNorris89|3 years ago|reply
>Imagine working on your PhD on chatbots since the start of 2022. Your entire PhD topic might be irrelevant already...

In fairness most PhD topics people work on these days, outside of the select few top research universities in the world, are obsolete before they begin. At least from what my friends in the field tell me.

[+] sgt101|3 years ago|reply
Perhaps - but normally you'll have a narrowly defined and very specific technical topic/hypothesis that you're working on, and many/most of these aren't going to be closed off by ChatGPT4.

Will this affect the job market (both academic and commercial) for these folks? It's very hard to say. Clearly lots of value will be generated by the new generation of models. There will be a lot of catch-up and utilisation work, where people will want to have models in-house and with specific features that the hyperscale models don't have (for example, constrained training sets). I'm wondering how many commercial illustrators have had their practices disrupted by Stable Diffusion. Will the same dynamics (whatever they are) apply for the use of LLMs?

[+] simonh|3 years ago|reply
I'm not too worried about that. We don't actually understand fully how LLMs function internally, so research on how language works and how to process it is still useful in advancing our understanding. It may not lead to products that can compete with GPT, but PhDs aren't about commercialisation, they're about advancing human knowledge.
[+] echelon|3 years ago|reply
All these people don't understand how hireable and desirable they are now. They need to get out of academia and plugged into AI positions at tech companies and startups.

Their value just went up tremendously, even if their PhD thesis got cancelled.

Easily millionaires waiting to happen.

---

edit: Can't respond to child comment due to rate limit, so editing instead.

> That is not how it works at all.

Speak for yourself. I'm hiring folks off 4chan, and they're kicking ass with pytorch and can digest and author papers just fine.

People stopped caring about software engineering and data science degrees in the late 2010s.

People will stop caring about AI/ML PhDs as soon as the challenge to hire talent hits - and it will hit this year.

[+] jll29|3 years ago|reply
We are just building a chatbot to assist students in their studies (students of AI itself, to be precise).

We could not "just use ChatGPT" because the answers have to be correct, or students could take legal action against the university in situations where they'd give an answer in an exam that the chatbot gave them and that might be wrong. It's exciting for sure to compare e.g. ChaptGPT with other approaches, for a set of controlled benchmark questions such as:

  -- snip --
  Q: What is the A* algorithm?

  A: The A* algorithm is a popular heuristic search algorithm used in pathfinding and graph traversal. It is an extension of Dijkstra's algorithm, but uses a heuristic function to guide the search towards the goal node, which can make it more efficient than Dijkstra's algorithm in certain situations.
  The algorithm works by maintaining a priority queue of nodes to be explored, where the priority of each node is the sum of its actual cost from the starting node (g-score) and an estimate of the cost to reach the goal node (h-score), based on a heuristic function. The heuristic function must be admissible, meaning it never overestimates the actual cost to the goal node.
  At each step, the algorithm selects the node with the lowest priority from the queue and expands it, generating its neighboring nodes and computing their priority scores. The algorithm then adds these nodes to the queue if they have not been explored before or if a better path to them has been found.
  If the goal node is reached, the algorithm terminates and returns the path from the starting node to the goal node. If the queue becomes empty before the goal node is reached, then there is no path from the starting node to the goal node.
  Overall, the A* algorithm is a powerful and widely used algorithm that has many applications in fields such as robotics, video games, and logistics.
  -- snip --
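
The answer above maps almost line for line onto code, which is part of what makes it a nice benchmark question. A minimal reference implementation of my own (not chatbot output), on a toy graph:

  import heapq

  def a_star(graph, start, goal, h):
      # graph: {node: [(neighbor, edge_cost), ...]}; h(node) -> estimated
      # cost to goal. h must be admissible (never overestimate).
      queue = [(h(start), 0, start, [start])]  # priority = g + h
      best_g = {start: 0}
      while queue:
          f, g, node, path = heapq.heappop(queue)
          if node == goal:
              return path, g
          for neighbor, cost in graph.get(node, []):
              new_g = g + cost
              # Only re-queue if we found a strictly better path.
              if new_g < best_g.get(neighbor, float("inf")):
                  best_g[neighbor] = new_g
                  heapq.heappush(queue,
                                 (new_g + h(neighbor), new_g, neighbor,
                                  path + [neighbor]))
      return None, float("inf")  # queue exhausted: goal unreachable

  graph = {"A": [("B", 1), ("C", 4)], "B": [("C", 1)], "C": []}
  print(a_star(graph, "A", "C", h=lambda n: 0))  # (['A', 'B', 'C'], 2)
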
[+] goethes_kind|3 years ago|reply
This is where it pays off to be researching something completely esoteric rather than something immediately applicable. I mostly scoffed at such research in the past, but now I see the value of it. The guy researching QML algorithms for NLP is not panicking yet, I think.
[+] pyuser583|3 years ago|reply
Isn’t that the sort of thing advisors are supposed to caution against?

And aren’t PhDs supposed to have a theoretical underpinning?

[+] paganel|3 years ago|reply
> Imagine working on your PhD on chatbots

To be honest, that's a sh*tty PhD to begin with; it doesn't bring anything good and really worthwhile to the world, quite the contrary.

[+] jurassic|3 years ago|reply
Maybe this is alarmist, but I don't see how LLMs don't collapse our entire economic system over the next decade or so. This is coming for all of us, not just the NLP experts in big company research groups. Being able to cheaply/instantly perform virtually any task is great until you realize there is now nobody left to buy your product or service because the entire middle class has been put out of work by LLMs. And the service industries that depend on those middle class knowledge workers will be out of work because nobody can afford to purchase their services. I don't see how this doesn't end with guillotines coming out for the owner class and/or terrorism against the companies powering this revolution. I hope I'm wrong.
[+] davidkuennen|3 years ago|reply
I tried translating something from English to German (my native language) yesterday with ChatGPT4 and compared it to Microsoft Translate, Google Translate and DeepL.

My ranking:

1. ChatGPT4 - flawless translation. I was blown away

2. DeepL - very close, but one mistake

3. Google Translate - good translation, some mistakes

4. Microsoft Translate - bad translation, many mistakes

I can understand the panic.

[+] credit_guy|3 years ago|reply
They may panic, but they shouldn't. They can quickly pivot. GPT models can be used off the shelf, but they can also take custom training. Every large org has a huge internal set of documents, plus a large external set of documents relevant to its work (research articles, media articles, domain-relevant rules and regulations). They can train a GPT bot on their particular corpus. And that is now. Soon (I'd give it at most one year), we'll be able to train GPT bots on videos.

All this training does not happen by itself.
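
For the curious, that custom training work (as things stand today) is mostly data preparation: turn internal documents into prompt/completion pairs in JSONL and hand them to the fine-tuning endpoint. A hedged sketch with made-up documents:

  import json

  internal_docs = [  # placeholder; the org's real knowledge base goes here
      {"q": "What is our refund policy?",
       "a": "Refunds are issued within 30 days of purchase."},
      {"q": "Which regions do we ship to?",
       "a": "We ship to the US, Canada, and the EU."},
  ]

  with open("train.jsonl", "w") as f:
      for doc in internal_docs:
          # Separator and stop-sequence conventions follow OpenAI's
          # fine-tuning guidance of the moment.
          f.write(json.dumps({"prompt": doc["q"] + "\n\n###\n\n",
                              "completion": " " + doc["a"] + " END"}) + "\n")

  # Then, with the current CLI:
  #   openai api fine_tunes.create -t train.jsonl -m davinci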

[+] martindbp|3 years ago|reply
Not big tech (or PhD-level research), but half the work I did on my side project (subtitles for Chinese learning/OCR) is sort of obsolete now, and most of the rest will be within a year or two. I put months into an NLP pipeline to segment Chinese sentences, classify pinyin and translate words in context, something ChatGPT is great at out of the box. My painstaking heuristic for determining show difficulty, using word frequencies and comparing distributions to children's shows, is now the simple task of giving ChatGPT part of the transcript and asking how difficult it is. Next up, the OCR I did will probably be solved by ChatGPT4. It seems the writing is on the wall: most tasks on standard media (text/images/video) will be "good enough" for non-critical use. The only remaining advantages of bespoke solutions are speed and cost, and those will also be fleeting.
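
For flavor, here's roughly the shape such a frequency heuristic takes - a sketch, not my actual pipeline; it assumes the transcripts are already segmented into words (e.g. with jieba):

  import math
  from collections import Counter

  def difficulty(show_words, easy_words, floor=1e-6):
      # KL-divergence-style score: how surprising this show's word
      # distribution is relative to a reference corpus of easy
      # (children's) shows. Higher = harder.
      show, easy = Counter(show_words), Counter(easy_words)
      n_show, n_easy = len(show_words), len(easy_words)
      score = 0.0
      for word, count in show.items():
          p = count / n_show                        # freq in this show
          q = easy.get(word, 0) / n_easy or floor   # freq in easy corpus
          score += p * math.log(p / q)
      return score

  easy = "我 喜欢 猫 我 喜欢 狗".split()
  hard = "量子 力学 的 基本 原理".split()
  print(difficulty(hard, easy) > difficulty(easy, easy))  # True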

But it's also extremely exciting: we'll be able to build really great things very easily and focus our efforts elsewhere. Today anyone can throw together a language learning tutor to rival Duolingo. As long as you're in it for solving problems, you shouldn't feel too threatened when whatever tool set you're currently using becomes obsolete.

[+] epups|3 years ago|reply
Everyone here is saying that people can easily transition into startups and other big companies. To a certain extent that's true, but what exactly are they going to do? As technology consolidates into one or two major LLMs, likely only accessible by API, I feel most orgs would be better served by relying heavily on fine-tuning or optimizing those for their purpose. Previous experience with NLP certainly helps with that, although this type of work is not necessarily as exciting as trying to build the next big thing, which everyone was scrambling for before.

OpenAI could build a state-of-the-art tool with a few hundred developers - to me, that means that money will converge to them and other big orgs rather than the opposite.

[+] bsder|3 years ago|reply
I guess I'm not panicked about my job in the face of AI because objective correctness is required. I dream about the day that OpenAI can write the 100 lines of code that connect the BLE stack, the ADC sensor and the power management code so that my IoT sensor doesn't crash once every 8 days.

I see the AI stuff as very different from, say, the microcomputer revolution. People had LOTS of things they wanted to use computers for, but the computers were simply too expensive.

As soon as microprocessors arrived, people had LOTS of things they were already waiting to apply them to. Factory automation was screaming for computers. Payroll management was screaming for computers.

I don't see that with the current AI stuff. What thing was waiting for NLP/OpenAI to get good enough?

Yes, things like computer games opened up whole new vistas, and maybe AI will do that, but that's a 20 year later thing. What stuff was screaming for AI right now? Maybe transcription?

When I see the search bar on any of my favorite forums suddenly become useful, I'll believe that OpenAI stuff actually works.

Finally, the real problem is that OpenAI needs to cough up not just what I want but also the original references for it. I normally don't make other humans do that: if I'm asking someone for advice, I've already ascertained that I can trust them, and I'm probably going to accept their answers. If it's random conversation and interesting or unusual, I'll mark it, but I'm not going to incorporate it until I verify.

Although, given the current political environment, perhaps I should ask other humans to give me more references.

[+] MonkeyMalarky|3 years ago|reply
I'm not at a big tech company, and we don't sell algorithms, but my team does use a lot of NLP stuff in internal algorithms. The only panic I have is trying to keep up and take the time to learn the new stuff. If anything, things like GPT-4 are going to make my team 10x more successful without having to hire an army of PhDs.
[+] deepsquirrelnet|3 years ago|reply
I work at a small company, but it’s hard for me to imagine that generative AI will replace predictive AI/ML any time soon.

Smaller models trained supervised/in-domain are simply more efficient and more accurate than unsupervised/out-of-domain. Plus we own and operate the technology much more cheaply.

I don’t doubt that if you were trying to build a competing product to what OpenAI is doing you’d feel affected, but there are also a lot of other problems that are not being solved by generative models.

[+] twawaaay|3 years ago|reply
I think the education goal for people has shifted. I teach my kids to be flexible and embrace change. Invest in abilities that transfer well to the various things you could be doing during your life. Be a problem solver.

In the future -- forget about a cosy job you can keep doing for the rest of your life. You no longer have any guarantees, even if you own the business and even if you are a farmer.

What you absolutely don't want is to spend X years at uni learning something, and then 5-10 years into your "career" find out it was obsoleted overnight and you have no plan B.

[+] tippytippytango|3 years ago|reply
Not even experts in the domain could see themselves being replaced and pivot in time. What hope does an ordinary person have in preparing for what’s coming? Telling people to retrain will not be an acceptable answer because no one can predict which skills will be safe from AI in 5 years.
[+] mdip|3 years ago|reply
Fascinating -- I think the comments on the HN post are almost as good.

I think everyone mostly agrees that AI is coming for a lot of jobs. There's disagreement about how many, how it will impact society and the like.

The pace of technology is not linear, it accelerates. I've never seen something that has so rapidly crossed into the "magical" territory as "nearly every single big LLM/Generative AI thing" seems to. It redefines what was previously laughably impossible ... a decade ago.

We're riding a curve upward that is making it extremely hard to see what's coming next. All of the pontificating, all of the attempts at finding solutions to imagined problems ... I can't see one that doesn't feel like a blindfolded person aiming at what they were told was a dart board with what they were told was a dart. There's really nothing to do but hang on and hope you land where any new opportunities creep up.

Expect bubbles, black swans, and purple unicorns.

[+] anshumankmr|3 years ago|reply
Here's my two cents. I work with NLP at a tech company, mostly with Dialogflow and Rasa. In my current project we are using ChatGPT (and previously GPT-3) to summarize articles, and I see that it can handle FAQ questions really well. One of our most common requirements was to train our bot to handle FAQ-type questions apart from complex conversation stories/flows, but this thing can straight up take the content from an article, summarize it neatly, and send a response back.

We have had some issues and complaints with the API (mostly with GPT-3, as fine-tuning was only open for the base model and that had some trouble with some questions). The response time is also finicky despite our having paid access: it varies from 10 seconds to as much as a minute (during some downtime that occurred a few days ago; a few days before that there was a complete outage).
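
For reference, the summarize-and-respond step is a single API call. A minimal sketch assuming the current (0.x) openai Python client; request_timeout is there to bound the slow responses mentioned above:

  import openai

  def summarize_for_faq(article_text):
      # Turn a full article into a short FAQ-style answer the bot can send.
      resp = openai.ChatCompletion.create(
          model="gpt-3.5-turbo",
          messages=[
              {"role": "system",
               "content": "Summarize the article as an FAQ answer "
                          "in three sentences or fewer."},
              {"role": "user", "content": article_text},
          ],
          request_timeout=30,  # bound the 10s-to-1min latency we see
      )
      return resp["choices"][0]["message"]["content"]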

[+] belter|3 years ago|reply
A whole thread on AI experts discussing how AI is making them obsolete...back to gardening...
[+] oars|3 years ago|reply
If you were an NLP researcher at a university whose years of experience are facing an existential threat because this rapid innovation is making your area obsolete, what would be some good areas to pivot to or refocus on?