item 40957215

q7xvh97o2pDhNrh | 1 year ago

As a person replying to your comment in the era of generative AI, I'm inclined to agree the hype is a bit much, even considering how impressive the technology can (sometimes) be.

Another big area of hype is "prompt engineering." That one seems to have calmed down slightly, but for a while, there were large swaths of the Internet who were amazed that the set intersection of "talk like a decent human being" and "be precise in your communication" could generally lead to good results.

In many ways, "AI" right now is magic marketing sprinkles that you can put on anything to make it more delicious. (Or, if you're inside a big company, it's magic prioritization sprinkles.)

bongodongobob | 1 year ago

Maybe prompt engineering should have caught on more. I'm convinced that the large swaths of people commenting here and elsewhere that "I don't get AI, it's just a parrot, it's always wrong and hallucinates, it's not useful" just don't understand that the prompt matters and that the idea isn't to one-shot everything. It writes good code for me every day, so I can only assume they're asking "Write me an OS from scratch" and then throwing their hands up when it obviously fails.

lolinder | 1 year ago

I think that calling it "prompt engineering" is what made it fail to catch on. We didn't call it "Google engineering" back in the day when you could actually craft a Google search to turn up useful results, we called it "Google-fu" [0].

"Google-fu" sounds like a fun skill to learn and acquire, where "prompt engineering" sounds either like something well out of reach or like pretentious nonsense depending on the audience.

[0] https://blog.codinghorror.com/google-fu/

skywhopper | 1 year ago

To me it’s more that if I have to carefully craft English-language prompts in a conversational back-and-forth to get things done, then I'm not really interested in doing that job. It sounds like being a manager or a teacher, and in practice it just makes me feel totally dead, sad, and quite frankly bored.

That’s just not an interesting or rewarding way to interact with a computer, and the last thing I want to do is add long wait times and nickel-and-dime cost to the process. Layer on using different LLMs for different tasks or trying them out against each other and cross-checking output and it’s a mind-numbingly indirect way to get anything accomplished that in the end teaches me nothing and develops no useful skill that I enjoy practicing.

If it works for you, great, but even the most honest and genuine fans make it sound like a nightmare to me.

BlackFly | 1 year ago

If it only solves the problems I already find trivial, then it is a parrot. Nowadays we all carry a calculator, but if you lean on that fact and choose not to practice and excel at basic arithmetic, you will be unable to perform higher mathematics that requires it at every step. Of course, if your problem is always an already-solved problem, then sure, a parrot can be convinced to spit out the answer.

So yes, the actual question for software engineering would be how to get AI to produce and iterate on an OS. The hallucinations aren't the only problem then; the lack of predictability in the answers is the biggest issue.

refulgentis | 1 year ago

I've been quietly wondering something similar for a year, and I've ended up 95% confident that phenomenon is due to people evaluating it in terms of "does it replace me?"

Co-sign on prompt engineering. My startup is, tl;dr: "what if I made an on-every-platform app that can sync, lets you choose whatever AI provider, and you pay at cost, and then gives you a simple UI for piecing together steps like dialogue / AI chat / search / retrieve / use files."

It seems to me the bigs are completely off the mark. Let's concede the idea that there's an omniscient AI available, literally right now.

Cool.

It still has no idea how you work.

You could see 42, in The Hitchhiker's Guide to the Galaxy, as a deep parody of this category error.

curiouscavalier | 1 year ago

I appreciate this perspective on prompt engineering. I’d love to think that one of the great outcomes of LLMs is people returning to more decent and precise forms of communication. Imagine the progress if we could get that to transfer to human-human communication as well.

chipdart | 1 year ago

> Another big area of hype is "prompt engineering." That one seems to have calmed down slightly, but for a while, there were large swaths of the Internet who were amazed that the set intersection of "talk like a decent human being" and "be precise in your communication" could generally lead to good results.

I think your comment conveys your obliviousness to the problem domain.

The main driving need for prompt engineering is not an inability to "talk like a decent human being". That's just your personal need to insult and demean people who are interested in a problem domain you know nothing about.

The main driving need for prompt engineering lies in aspects like not being able to control how context is formed and persisted in a particular model, and how to form the necessary and sufficient context to get a model to output anything interesting. Some applications require computationally expensive and time-consuming runs, and knowing what inputs to provide to a system that is by its very nature open-ended is a critical skill for using the system adequately in professional settings.
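One way to picture "forming the necessary and sufficient context" is that most chat-style model APIs accept an ordered list of messages, and that list is the only state the model sees. A minimal sketch, assuming a generic role/content message format (no particular vendor's API; `build_context` and its character-budget heuristic are illustrative inventions):

```python
def build_context(system_rule, examples, question, max_chars=4000):
    """Assemble a message list for a chat-style model, trimming the
    oldest few-shot examples so the whole context fits a size budget."""
    messages = [{"role": "system", "content": system_rule}]
    for user_text, assistant_text in examples:
        messages.append({"role": "user", "content": user_text})
        messages.append({"role": "assistant", "content": assistant_text})
    messages.append({"role": "user", "content": question})
    # Drop the oldest example pair while over budget, but always keep
    # the system rule and the actual question.
    while sum(len(m["content"]) for m in messages) > max_chars and len(messages) > 2:
        del messages[1:3]
    return messages
```

The point of the sketch is that everything interesting (the rule, the examples, what gets dropped under pressure) is a decision the caller makes, not the model.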

Let's put it like this: GitHub Copilot is an LLM service that is extremely narrow in its applications and use cases. Yet you can't even get it to add unit tests to a function following a specific style without putting in the effort to build up the context it needs to output what you expect.