Ask HN: I learned useless skill of prompt engineering, how relevant will it be?
77 points | nullptr_deref | 2 years ago | reply
So far it has saved me some time at work, but I don't think prompting will be relevant in the near future. People can and will build models that follow the same mode of thought.
[+] [-] scantis|2 years ago|reply
You start with a short statement describing your problem, and the answer is a long text.
Sometimes we prefer verbose, sometimes concise. Sometimes a word already has all the meaning we need, another time we need a long description and examples. Depends on our level of knowledge.
So from my limited point of view, you excel at turning any statement into something you can comprehend easily or that is helpful to you.
That is a nice skill and it should vastly improve your ability to communicate and express yourself.
Like being able to use a search engine before, it is very beneficial. Not a skill someone would hire you for, but one that aids many tedious tasks.
Again, my limited opinion. Maybe it is more magical and has deep practical applications that I am oblivious to.
[+] [-] tudorw|2 years ago|reply
It is a patient listener, and its responses can help one reflect on the inherent weight and biases of words within a language.
I would also like to add that in 1996 being able to use a search engine was very much a skill someone would hire you for!
[+] [-] sam0x17|2 years ago|reply
[+] [-] dheera|2 years ago|reply
[+] [-] cpursley|2 years ago|reply
So I’m not sure if AI tools will help these kinds of people, who lack basic skills of logic and inquiry. And I don’t mean that in an insulting way; I’m not even close to being the sharpest tool in the shed. But you really do have to have a baseline of IQ and knowledge to be able to make use of these tools.
[+] [-] WesolyKubeczek|2 years ago|reply
[+] [-] idopmstuff|2 years ago|reply
Prompting is basically the same thing as writing requirements as a PM - you need to describe what you want with precision and the appropriate level of detail while giving relevant, useful context. Doing it with an LLM isn't that different than doing it with a human.
A few examples:
- If you need some marketing copy written, you need to give the necessary information on the subject of the copy, information about the structure/length/etc. and probably some examples of the writing style you're going for. This is exactly the same with a human copywriter as with an LLM.
- If you're looking to have someone do data analysis on a large spreadsheet, you should give context on what the data mean and be as precise as you can about what analysis you want performed. Same with a human analyst or an LLM.
- And of course, if you want an app developed, you need to give specific requirements for the app - I won't go into detail here, because I'm sure most people on here get the idea, but again, same with a human developer or an LLM.
Ultimately the skill you're describing is just good, clear communication. Until we all have chips in our brain, that's going to be useful.
I will caveat that by saying that one area where I expect to see LLMs improve is in knowing when to solicit feedback. In the marketing copy case, for example, if you give it relevant product info and a particular length, it ought to ask you for examples of writing style or give you examples and ask for feedback before continuing. That'll certainly help, but it's not going to remove the need to clearly describe what you want.
[+] [-] chriskanan|2 years ago|reply
[+] [-] feoren|2 years ago|reply
No, this bullshit will be useless in 2 years. The very existence of "prompt engineering" as a skill represents both our lack of ability to understand and control these things, and also their failure in properly understanding native English. Both will be optimized away.
As databases get more powerful, SQL skills become more important. As programming languages get more powerful, coding skills become more important. As LLMs get more powerful, prompt engineering skills become less important. Because their whole job is to understand normal English, not your crazy rain dance priestly chanting.
[+] [-] huijzer|2 years ago|reply
[+] [-] H8crilA|2 years ago|reply
[+] [-] anonyfox|2 years ago|reply
So, it's actually a lot of language tweaking to get just the right context/task description/data embeddings so that the LLM (GPT-3/4) gets it right >=90% of the time, which is surprisingly often better than actual humans. In many cases there are also ways to detect imperfection and simply retry automatically, which increases the success chances even further.
The fetching/formatting/submitting data part (the manual coding) is getting easier over time, but the prompting remains. So far I've had no luck with any kind of recursion that lets the LLM design its own prompt, since ultimately all the specifics needed in the context have to get into the context somehow, which means me engineering them into big string structures.
It probably doesn't sound shiny, but it's step by step making jobs irrelevant in businesses without sacrificing customization. I think of it as a silent revolution that's happening in many places now, ultimately making myself redundant, but hey, the ride is fun!
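The detect-imperfection-and-retry idea mentioned above can be sketched roughly like this. This is a minimal illustration, not the commenter's actual pipeline: `call_llm` is a hypothetical stand-in (stubbed here with a canned response so the sketch runs), and the validation rules are made up for the example.

```python
import json

def call_llm(prompt):
    # Hypothetical stand-in for a real LLM API call; stubbed with a
    # canned, well-formed response so this sketch is runnable.
    return '{"sentiment": "positive", "confidence": 0.92}'

def extract_sentiment(text, max_retries=3):
    """Ask for structured output, validate it, and retry on malformed replies."""
    prompt = (
        "Classify the sentiment of the text below. Reply with ONLY a JSON "
        'object like {"sentiment": "positive", "confidence": 0.9}.\n\n'
        f"Text: {text}"
    )
    for attempt in range(max_retries):
        raw = call_llm(prompt)
        try:
            data = json.loads(raw)
            if data.get("sentiment") in {"positive", "negative", "neutral"}:
                return data  # output passed validation
        except json.JSONDecodeError:
            pass  # imperfect output detected: fall through and retry
    raise RuntimeError("LLM output failed validation after retries")

result = extract_sentiment("The onboarding flow was delightful.")
```

If each attempt succeeds with probability p, the retry loop lifts the overall success rate to 1 - (1 - p)^n, which is why even a 90%-accurate prompt can become very reliable in practice.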
[+] [-] olalonde|2 years ago|reply
[+] [-] spupy|2 years ago|reply
[+] [-] Kiro|2 years ago|reply
[+] [-] roel_v|2 years ago|reply
It's 2023 and there are lots of people who don't know how to efficiently and effectively use Google. To be able to do that, you need some sort of mental model of crawlers and websites and what gets indexed and what not and at what frequency, and the results of SEO and how a somewhat savvy marketeer at some company might influence things etc. The same with LLM models - if you don't know what a 'token' is, your only chance of getting good results is to use these models a lot and then hope that you start building useful intuitions. It really doesn't come natural to most people like it does to most of us here.
[+] [-] bbor|2 years ago|reply
[+] [-] politelemon|2 years ago|reply
[+] [-] mattlondon|2 years ago|reply
I don't think "the future" will include much direct prompting of LLMs. It will all be integrated into some other tool as a means to an end - what we have today with a raw prompt-and-answer mode are just proof of concept toys.
I fully expect that LLMs will end up deeply integrated into other things - obviously the code IDE use case, but also less obvious things like travel websites where you explain what sort of vacation you want to go on and it returns some options, or where you tell Netflix what sort of movie/show you are in the mood for. Basically search/recommendation engines, with a bit of summarisation added in. I don't think direct prompting will be a thing for 99% of future uses, especially for the general public.
[+] [-] unforeseen9991|2 years ago|reply
[+] [-] VladimirGolovin|2 years ago|reply
Could you summarize the essence of the prompting skill in a couple of sentences? Are there concepts that are critical to learn and master (e.g. 'chain of thought', etc.)?
[+] [-] inconfident2021|2 years ago|reply
You have to make sure to couple chain of thought with branching, analysis, and evaluation; then you can get pretty good results.
[+] [-] wesapien|2 years ago|reply
[+] [-] inconfident2021|2 years ago|reply
https://chat.openai.com/share/cb3a477b-57bd-46fd-92c9-4a3016...
I have attached the example in the above chat.
[+] [-] TheAceOfHearts|2 years ago|reply
There's tons of small tricks and techniques to tease out vastly superior responses. When you're prompting for fairly generic or high level things it doesn't feel like there's that much difference in prompt style, but once you're trying to tease out highly specialized behavior there's tons of room for magic.
One of the tricks I've picked up on is that too many instructions and details often become a hindrance, so you need to figure out which parts to cut out and re-organize while still managing to get a high quality output.
Sometimes it's all about finding just the perfect words to describe exactly what you want. You can play around with variants and synonyms and get a feel for how the output is shaped.
Every model has quirks and preferences as well, so it takes a bit of playing around until you get a feel for how it interacts with your inputs. Admittedly a lot of this feels more like a vibe check than a science.
[+] [-] samuell|2 years ago|reply
I noticed that a lot of people are terrible with search engines. They would carefully try to craft a combination of keywords that they hope will answer their problem.
I have pretty much always been able to find the answers I need quickly by using a few ideas I see not that many around me use, such as trying to imagine in what context the answer might appear (what would be the title of a blog or forum post about it, etc.), as well as searching for the exact error message if I got one.
Now, search engines have gotten a lot better over the last say 5-10 years, so this skill isn't as important anymore, but I remember how the ability to find things quickly was a real productivity booster.
I think something similar might happen with LLMs.
You will have a (probably much bigger) productivity boost by being great at leveraging them.
With time, the user interfacing tooling and general knowledge of them will get much better, so the relative benefit you have will grow smaller, but it will for sure always be useful to know how to use them well.
My 5c.
[+] [-] soco|2 years ago|reply
[+] [-] unknown|2 years ago|reply
[deleted]
[+] [-] baby|2 years ago|reply
The other thing I’ve been struggling with is to have the AI keep track of what’s important. For example, when the AI learns something from you, it should add it to a list (if producing JSON output, the object can contain a list of things it knows about you). But it doesn’t always seem to understand that it learned something personal from you, and it has trouble carrying a list forward without losing items.
The last one is about correcting the user. I want to speak Chinese to the AI and I want it to correct me. And if I use English words within my Chinese, I want it to help me translate them as well. It can’t do either of these things. It’s like it doesn’t realize that Chinese and English are two different languages.
[+] [-] Aerroon|2 years ago|reply
I wonder if the online chat models have a similar value somewhere.
---
If you want the AI to remember something, you will unfortunately have to keep reminding it of it in the prompt - either explicitly, or by referring to the previously generated text if it fits into the context. However, in local models the context can be limited (e.g. 2000 tokens). If the conversation goes above those 2000 tokens, the model will discard stuff from before. There are models with larger context sizes, though. Lengthy prompts will cause the same issue too.
The way things like SillyTavern role-playing work is that the model will constantly be reminded of some important attributes of the character that it's role-playing in the prompt (but it's done for you).
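The "constantly remind the model, but automatically" idea amounts to pinning important facts into every prompt and dropping the oldest turns once the token budget is exceeded. A minimal sketch, with a crude word-count tokenizer standing in for a real one (a real implementation would use the model's actual tokenizer):

```python
def count_tokens(text):
    # Crude stand-in for a real tokenizer: ~1 token per whitespace word.
    return len(text.split())

def build_prompt(pinned_facts, history, budget=2000):
    """Keep pinned facts in every prompt; drop oldest turns once over budget."""
    pinned = "Remember: " + "; ".join(pinned_facts)
    kept = []
    used = count_tokens(pinned)
    for turn in reversed(history):  # walk newest turns first
        cost = count_tokens(turn)
        if used + cost > budget:
            break                   # older turns silently fall out of context
        kept.append(turn)
        used += cost
    return "\n".join([pinned] + list(reversed(kept)))

history = [f"turn {i}: some earlier exchange" for i in range(1000)]
prompt = build_prompt(["the user's name is Ada", "the user is vegetarian"], history)
```

The pinned facts survive no matter how long the conversation gets, which is essentially what the role-playing front ends do with character attributes.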
[+] [-] inconfident2021|2 years ago|reply
LLMs do not have the ability to reason with numbers. Most of the time they are hallucinating. One good strategy is to make the model output a list and define the structure for each item of the list. If you give an example of what your list should look like, it will give you something close to it.
> has trouble carrying a list forward without losing items.
This is the fundamental problem with these models, because of the context limit. When you are prompting, always remember that the model is processing a huge paragraph and emitting the next sentences of that paragraph. If you want information to be carried onwards, you have to make the model output it on every prompt, or you can also try to use specific identifiers. LLMs are good at in-context learning. It will not work 100% of the time, but it is usually better than having nothing at all.
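The "make it output the list on every prompt" trick can be sketched as a loop where the model must echo the known-facts list back in its structured reply, and the application feeds it into the next turn. Everything here is illustrative: `call_llm` is a hypothetical stand-in, stubbed with a fixed response so the sketch runs.

```python
import json

def call_llm(prompt):
    # Hypothetical stand-in for a chat-completion call; a real model would
    # merge newly learned facts, the stub just returns a fixed result.
    return json.dumps({
        "reply": "Nice, noted!",
        "known_facts": ["user is learning Chinese", "user lives in Paris"],
    })

def chat_turn(user_message, known_facts):
    """Carry the fact list forward by injecting it into every prompt and
    requiring the model to repeat it in its structured output."""
    prompt = (
        "Facts you already know about the user: " + json.dumps(known_facts)
        + '\nReply with JSON: {"reply": "...", "known_facts": [...]}, where '
        "known_facts repeats the old facts plus anything new you learned.\n"
        f"User: {user_message}"
    )
    data = json.loads(call_llm(prompt))
    return data["reply"], data["known_facts"]

reply, facts = chat_turn("I just moved to Paris.", ["user is learning Chinese"])
```

Because the list is re-emitted each turn, it never depends on the model remembering distant context - it only has to copy and extend what is right in front of it.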
> I want to speak chinese to the AI and I want it to correct me.
Give it the role of a tutor and describe what the tutor should do.
[+] [-] gabrielsroka|2 years ago|reply
https://news.ycombinator.com/item?id=36971327
[+] [-] TheRealSteel|2 years ago|reply
[+] [-] azubinski|2 years ago|reply
Moreover, according to the ECPD's definition of engineering (or any other definition commonly accepted by the engineering community), this fancy "prompt engineering" is pure anti-engineering.
This disdain for engineering is something of a tragedy. And it is also the result of the "washout" of engineering from post-industrial societies.
[+] [-] courseofaction|2 years ago|reply
[+] [-] intellectronica|2 years ago|reply
1. You have a lot of mileage with LLMs and AI systems in general (people who are exceptionally good at this have been reporting spending several hours daily working with AIs).
2. You already mastered a large number of useful tasks you can consistently and reliably complete using AI.
3. You continuously invent and discover novel ways to use AI and accomplish useful tasks.
4. You can use LLMs and other forms of AI _programmatically_, by combining LLM calls as part of a larger and more complex process (ideally by writing code, though some people do this well using no-code tools or even just careful manual execution).
5. You can methodically examine and evaluate AI tasks, for example by developing evals and running them and analysing their results programmatically.
6. You keep up-to-date and consistently adapt to new developments, like new capabilities, models, libraries, etc ...
7. You can often come up with new ideas or translate existing requirements for tasks that can be achieved better or more efficiently (or achieved at all) using AI.
If the above is your definition of "prompt engineering", then yes, it's incredibly valuable, and will even increase in value over time.
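Points 4 and 5 above - programmatic use plus evals - can be sketched as a tiny harness that runs a case list against the model and reports aggregate accuracy. The cases and the stubbed `call_llm` are invented for illustration only:

```python
def call_llm(prompt):
    # Hypothetical stand-in for a real model call, stubbed so the
    # sketch runs; it only knows two of the three eval cases.
    answers = {"capital of France?": "Paris", "2+2?": "4"}
    return answers.get(prompt, "I don't know")

# A toy eval set: (prompt, substring expected in the answer).
EVAL_CASES = [
    ("capital of France?", "Paris"),
    ("2+2?", "4"),
    ("capital of Australia?", "Canberra"),
]

def run_evals(cases):
    """Run each case, score pass/fail, and compute aggregate accuracy."""
    results = [(p, expected.lower() in call_llm(p).lower())
               for p, expected in cases]
    accuracy = sum(ok for _, ok in results) / len(results)
    return results, accuracy

results, accuracy = run_evals(EVAL_CASES)
```

Even a harness this small turns "the prompt feels better" into a number you can track across prompt revisions and model upgrades.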
( x-posted on: https://everything.intellectronica.net/p/ad-hoc-definition-o... )
[+] [-] intellectronica|2 years ago|reply
[+] [-] inconfident2021|2 years ago|reply