top | item 40024327


thisgoesnowhere | 1 year ago

The team I work on processes 5B+ tokens a month (and growing) and I'm the EM overseeing that.

Here are my takeaways:

1. There are way too many premature abstractions. LangChain, as one of many examples, might be useful in the future, but at the end of the day a prompt is just an API call, and it's easier to write standard code that treats an LLM call as a flaky API call rather than as a special thing.
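A minimal sketch of that "flaky API call" framing: plain retry-with-backoff, no framework. The `flaky_llm_call` below is a hypothetical stand-in for a real client call.

```python
import time

def call_with_retries(fn, max_attempts=3, base_delay=0.01):
    """Retry a flaky call with exponential backoff, LLM or otherwise."""
    for attempt in range(max_attempts):
        try:
            return fn()
        except Exception:
            if attempt == max_attempts - 1:
                raise
            time.sleep(base_delay * 2 ** attempt)

# Hypothetical stand-in for a real LLM client call; fails twice, then succeeds.
calls = {"n": 0}
def flaky_llm_call():
    calls["n"] += 1
    if calls["n"] < 3:
        raise TimeoutError("model endpoint timed out")
    return "summary: ..."

print(call_with_retries(flaky_llm_call))  # prints "summary: ..."
```

The point is that nothing here is LLM-specific; it's the same code you'd wrap around any unreliable upstream service.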

2. Hallucinations are definitely a big problem. Summarizing is pretty rock solid in my testing, but reasoning is really hard. Action models, where you take a user input and ask the LLM to decide what to do next, are just really hard; specifically, it's hard to get the LLM to understand the context and to say when it's not sure.

That said, it's still a game changer that I can do it at all.
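One common way to nudge an action model toward admitting uncertainty is to give it an explicit "unsure" option and validate its output against the allowed set, rather than trusting free text. This is a sketch under assumptions; the action names are hypothetical.

```python
ALLOWED_ACTIONS = {"search", "create_ticket", "escalate", "unsure"}

def parse_action(raw: str) -> str:
    """Map raw model text to a known action, falling back to 'unsure'.

    Validating against a fixed set means anything off-script is treated
    as low confidence instead of silently acted upon.
    """
    token = raw.strip().lower().split()[0] if raw.strip() else ""
    return token if token in ALLOWED_ACTIONS else "unsure"

print(parse_action("search for the user's last invoice"))  # prints "search"
print(parse_action("I'm not really certain what to do"))   # prints "unsure"
```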

3. I am a bit more hyped than the author that this is a game changer, but like them, I don't think it's going to be the end of the world. There are some jobs that are going to be heavily impacted and I think we are going to have a rough few years of bots astroturfing platforms. But all in all I think it's more of a force multiplier rather than a breakthrough like the internet.

IMHO it's similar to what happened to DevOps in the 2000s, you just don't need a big special team to help you deploy anymore, you hire a few specialists and mostly buy off the shelf solutions. Similarly, certain ML tasks are now easy to implement even for dumb dumb web devs like me.


tmpz22|1 year ago

> IMHO it's similar to what happened to DevOps in the 2000s, you just don't need a big special team to help you deploy anymore, you hire a few specialists and mostly buy off the shelf solutions.

I advocate for these metaphors because they help people set reasonable expectations for LLMs in modern development workflows, mostly because they frame it as a trade-off rather than a silver bullet. There were trade-offs to the evolution of DevOps too: consider the loss of key skill sets like database administration as a direct result of "just use AWS RDS", the explosion in cloud billing costs (especially the OpEx of startups that weren't even dealing with that much data or regional complexity!), and how that indirectly led to GitLab's big outage and many like it.

checkyoursudo|1 year ago

> get it to say when it's not sure

This is a function of the language model itself. By the time you get to the output, the uncertainty that is inherent in the computation is lost to the prediction. It's like if you ask me to guess heads or tails and I guess heads: I could have stated my uncertainty (e.g. Pr[H] = .5) beforehand, but in my actual prediction of heads, and then the coin flip, that uncertainty is lost. It's the same with LLMs. The uncertainty in the computation is lost in the final prediction of the tokens, so unless the prediction itself is uncertainty (which it should rarely be based on the training corpus, I think), you should not expect an LLM output to ever say it does not understand. But that is because it never understands; it just predicts.
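The coin-flip point can be made concrete with a toy next-token distribution: the probabilities exist before sampling, but the emitted token discards them. The logit values below are made up for illustration.

```python
import math

def softmax(logits):
    """Convert raw logits into a probability distribution."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

# Toy next-token logits: the model is genuinely torn between tokens 0 and 1.
logits = [2.0, 1.9, -1.0]
probs = softmax(logits)
choice = max(range(len(probs)), key=probs.__getitem__)  # greedy decode

print(probs)   # roughly [0.51, 0.46, 0.03], nearly a coin flip
print(choice)  # prints 0; the emitted token carries none of that uncertainty
```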

taneq|1 year ago

It's not just loss of the uncertainty in prediction; it's also that an LLM has zero insight into its own mental processes as a separate entity from its training data and the text it's ingested. If you ask it how sure it is, the response isn't based on its perception of its own confidence in the answer it just gave; it's based on how likely an answer like that is to be followed by a confident affirmation in its training data.

nequo|1 year ago

But the LLM predicts the output based on some notion of a likelihood so it could in principle signal if the likelihood of the returned token sequence is low, couldn’t it?

Or do you mean that fine-tuning distorts these likelihoods so models can no longer accurately signal uncertainty?
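In principle, yes: some APIs can return per-token log probabilities (e.g. a `logprobs` option), which can be aggregated into a rough confidence signal for the whole sequence. A sketch, with made-up logprob values; whether those numbers are well calibrated after fine-tuning is exactly the open question.

```python
import math

def sequence_confidence(token_logprobs):
    """Aggregate per-token log probabilities into rough confidence signals."""
    total = sum(token_logprobs)
    avg = total / len(token_logprobs)
    return {
        "sequence_prob": math.exp(total),  # joint probability of the output
        "perplexity": math.exp(-avg),      # lower means the model was more confident
    }

confident = sequence_confidence([-0.05, -0.10, -0.02])  # illustrative values
hedgy = sequence_confidence([-1.2, -0.9, -1.5])

print(round(confident["perplexity"], 2))  # about 1.06
print(round(hedgy["perplexity"], 2))      # about 3.32
```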

brookst|1 year ago

I get the reasoning but I’m not sure you’ve successfully contradicted the point.

Most prompts are written in the form "you are a helpful assistant, you will do X, you will not do Y".

I believe that inclusion of instructions like “if there are possible answers that differ and contradict, state that and estimate the probability of each” would help knowledgeable users.

But for typical users and PR purposes, it would be disaster. It is better to tell 999 people that the US constitution was signed in 1787 and 1 person that it was signed in 349 B.C. than it is to tell 1000 people that it was probably signed in 1787 but it might have been 349 B.C.

xiphias2|1 year ago

> so unless the prediction itself is uncertainty (which it should rarely be based on the training corpus, I think)

Why shouldn't you ask for uncertainty?

I love asking for scores/probabilities (I usually give a range, like 0.0 to 1.0) whenever I ask for a list, and it makes the output much more usable.
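A sketch of what that buys you downstream: if the prompt asks for a JSON list with a 0.0 to 1.0 score per item, the caller can filter on the score instead of treating every item equally. The reply text below is hypothetical.

```python
import json

# Hypothetical model reply when the prompt asks for scored JSON output.
reply = '''[
  {"item": "rename the config flag", "score": 0.9},
  {"item": "delete the legacy table", "score": 0.3}
]'''

items = json.loads(reply)
# Only act on items the model scored as high-confidence; threshold is a choice.
actionable = [x["item"] for x in items if x["score"] >= 0.7]
print(actionable)  # prints ['rename the config flag']
```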

lordofmoria|1 year ago

OP here - I had never thought of the analogy to DevOps before, that made something click for me, and I wrote a post just now riffing off this notion: https://kenkantzer.com/gpt-is-the-heroku-of-ai

Basically, I think we’re using GPT as the PaaS/heroku/render equivalent of AI ops.

Thank you for the insight!!

harryp_peng|1 year ago

You only processed 500M tokens, which is shockingly little. Perhaps only $2k in incurred costs?
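For scale, the back-of-envelope math behind a figure like that; the $4 per 1M tokens blended rate is illustrative, not a quoted price.

```python
def token_cost(tokens, price_per_million):
    """Back-of-envelope LLM spend; prices are assumptions, not quoted rates."""
    return tokens / 1_000_000 * price_per_million

# 500M tokens at an assumed blended $4 per 1M tokens lands near $2,000.
print(token_cost(500_000_000, 4.0))  # prints 2000.0
```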

ryoshu|1 year ago

> But all in all I think it's more of a force multiplier rather than a breakthrough like the internet.

Thank you. Seeing similar things. Clients are also seeing sticker shock on how much the big models cost vs. the output. That will all come down over time.

nineteen999|1 year ago

> That will all come down over time.

So will interest, as more and more people realise there's nothing "intelligent" about the technology; it's merely a Markov-chain word-salad generator with some weights to improve the accuracy somewhat.

I'm sure some people (other than AI investors) are getting some value out of it, but I've found it to be most unsuited to most of the tasks I've applied it to.

gopher_space|1 year ago

> Summarizing is pretty rock solid in my testing, but reasoning is really hard.

Asking for analogies has been interesting and surprisingly useful.

eru|1 year ago

Could you elaborate, please?

mirekrusin|1 year ago

Regarding the null hypothesis and negation problems: I find it personally interesting because a similar phenomenon happens in our brains. Dreams, emotions, affirmations, etc. process inner dialogue more or less by ignoring negations and amplifying emotionally rich parts.

airstrike|1 year ago

> Summarizing is pretty rock solid in my testing

Yet, for some reason, ChatGPT is still pretty bad at generating titles for chats, and I didn't have better luck with the API even after trying to engineer the right prompt for quite a while...

For some odd reason, once in a while I get titles in different languages. It's funny when it's a language I can speak, but I recently got "Relm4 App Yenileştirme Titizliği", which ChatGPT tells me means "Relm4 App Renewal Thoroughness", when I was actually asking it to adapt a snippet of gtk-rs code to Relm4, so not particularly helpful.

devdiary|1 year ago

> at the end of the day prompts are just a API call and it's easier to write standard code that treats LLM calls as a flaky API call

They are also slow (higher latency for the same resources) APIs if you're self-hosting the LLM. Special attention is needed to plan capacity.
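A rough sketch of that capacity planning, assuming you know your request rate, typical output length, and per-replica throughput; all the numbers below are made-up examples.

```python
import math

def required_replicas(requests_per_s, avg_output_tokens, tokens_per_s_per_replica):
    """Rough sizing for a self-hosted model: demand in tokens/s over supply per replica."""
    demand = requests_per_s * avg_output_tokens  # tokens/s the fleet must generate
    return math.ceil(demand / tokens_per_s_per_replica)

# e.g. 10 req/s times 300 output tokens = 3000 tok/s needed;
# at an assumed 800 tok/s per GPU replica, that's 4 replicas.
print(required_replicas(10, 300, 800))  # prints 4
```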

weatherlite|1 year ago

> Similarly, certain ML tasks are now easy to implement even for dumb dumb web devs like me

For example?

spunker540|1 year ago

Lots of applied NLP tasks used to require paying annotators to compile a golden dataset and then training an efficient model on it.

Now, if cost is of little concern, you can use zero-shot prompting on an inefficient model. If cost is a concern, you can use GPT-4 to create your golden dataset far faster and cheaper than human annotation, and then train your more efficient model on it.

Some example NLP tasks: classification, sentiment analysis, extracting data from documents. But I'd be curious which areas of NLP __weren't__ disrupted by LLMs.
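A sketch of the "LLM as annotator" step: label each example a few times with a cheap prompt, keep the majority vote as the gold label, and flag ties for human review. The labels and vote counts below are hypothetical.

```python
from collections import Counter

def majority_label(votes):
    """Aggregate several LLM labeling passes into one gold label.

    A strict majority wins; anything else gets routed to a human,
    which is one cheap guard against single-pass hallucinated labels.
    """
    counts = Counter(votes)
    label, n = counts.most_common(1)[0]
    return label if n > len(votes) / 2 else "needs_human_review"

# Hypothetical three labeling passes over two examples:
print(majority_label(["positive", "positive", "neutral"]))  # prints "positive"
print(majority_label(["spam", "ham", "other"]))             # prints "needs_human_review"
```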

aerhardt|1 year ago

Anything involving classification, extraction, or synthesis.

motoxpro|1 year ago

DevOps is such an amazing analogy.