top | item 35411037

Show HN: Prompt Engineering Jobs

81 points| Oras | 3 years ago |prompt-engineering-jobs.com | reply

102 comments

order
[+] devmunchies|3 years ago|reply
Many think prompt engineering is just like being good at writing Google searches, where my job is to be good at knowing "how Google/GPT4 thinks"—good at writing a single query.

However, I think prompt engineering will evolve to be an actual technical role, akin to Data Engineering (the people who make the systems, pipelines, ETL jobs, etc for the data).

Prompt engineers will build systems that facilitate prompt generation. Meaning that prompts will be dynamically generated or at least partially generated with modifications or additions to the raw user prompt.

It's the difference to being able to write HTML vs being able to do all the backend work to dynamically generate the HTML for my Amazon homepage (including the performance benchmarks and other strategic requirements), for example.

[+] seydor|3 years ago|reply
I don't think it will evolve at all, because models will just become better at understanding what people mean. There's no point in trying to be a better prompter as a competitive advantage.

Already, chatGPT and bing can both give great on-topic answers to 3 word queries. and the fact that you can infinitely refine it is great

OTOH i think there is space for developing GUIs for prompts. Makes them more engaging

[+] sgrove|3 years ago|reply
I’ve been doing the same thing with a number of projects, building chains of prompts from one api call to another e.g. for ConjureUI (self-creating, iterable UIs that come into existence, get used, then disappear) https://youtu.be/xgi1YX6HQBw how it works to generate a full self-contained react component:

1. Take user task

2. Pass it to a prompt that requests a Product UI description of a component

3. Pass 1+2 to another that asks for which npm packages to use

4. Pass 1+2+3 to a templated prompt to write the code in a constrained manner

5. Run 4 in a sandbox to see if there are errors, if so pass it back to #4, looping

It’s currently quite slow, but that’s an implementation detail I think.

[+] throwaway675309|3 years ago|reply
It's macro substitution on crafted templatized prompts at best. Calling it engineering dilutes the term either further than it already is. (Looking at you Salesforce "Engineer").
[+] dror|3 years ago|reply
Here are a couple of good resources around prompt engineering

https://www.promptingguide.ai/

https://lilianweng.github.io/posts/2023-03-15-prompt-enginee...

These clearly demonstrate that there's engineering skills beyond "I want you to act as a Linux terminal."

Also note that there's a difference between the prompt that you use on ChatGPT and the one that you use in the GPT API or other LLMs. In the first, you're already dealing with the prompt that openai supplied to the LLM in the first place.

[+] anonzzzies|3 years ago|reply
But why call it prompt engineer and not just data or, you know, software engineering? Prompt engineer means the prompting, and that’s already quite dead as it is: you can just ask gpt4 to fix the user input for a specific goal. We do this in our pipeline and it’s obviously (much) faster and cheaper than ‘prompt engineers’, but it is usually also simply better for what we need it for.

The stuff you say it will evolve in already exists and has names everyone uses.

[+] sorokod|3 years ago|reply
I have an idea what skills are required to dynamicly generate HTML and how to measure quality.

Can you share something similar for "prompt engineering" ?

[+] maxbondabe|3 years ago|reply
I'm working on a prompt engineering product at a stealth mode startup. Would it be possible for me to pick your brain sometime? I'd like to understand more about your workflow, and how our product could fit into that.

No pressure. My email is in my profile.

[+] simonw|3 years ago|reply
Crucially important that engineers who are building systems like that that work by concatenating prompts together have a very solid understanding of prompt injection attacks.
[+] arroz|3 years ago|reply
Lol
[+] TobyTheDog123|3 years ago|reply
I have one of these jobs currently.

It is a nightmare for me, and I do somewhat regret taking the contract even with the sizable hourly rate.

In my eyes, programming makes sense. Even if I introduce bugs, I can sooner or later track down the issue, facepalm, resolve it, and move on with my day, with some semblance of accomplishment and lessons learned.

My prompt engineering work offers no such rewards, and is a total time-suck. This is because, as another commenter wrote here, it is "throwing shit at the wall and hoping it sticks."

While you can treat it as a scientific endeavor, testing hypotheses against a black box, you will never find a prompt that works consistently, even with a low temperature, solely because these models were not built to give consistent results. There is no end-all solution, there is no "correct prompt."

Companies employing prompt engineers are looking for such consistency. Prompt engineers are therefore stuck in a system where they simply cannot succeed. They can hope and pray that the testing done by managers produces fruitful results upon every test, but the results are, for all intents and purposes, random.

[+] ukuina|3 years ago|reply
Thank you for the honest peek behind the curtains.

Are the companies enamored with the productivity gains of prompt-based systems sufficiently that they can ease up on the requirement for consistency?

[+] submeta|3 years ago|reply
When it comes to making ChatGPT solve technical problems, my observation is that you need to be good at writing (technical) requirements to be good at writing prompts.

When I assign a task to a (human) developer, the results depend on two things: First, how good the developer is, second, and more importantly, how well and clearly I am expressing the requirements. And this is also true for ChatGPT. With very precise requirements I get very good results.

So prompt engineering is like writing good requirements, and that also requires understanding the problem domain.

[+] ChatPGT|3 years ago|reply
writing/reading/understanding good requirements is a really nice skill to have in this decade. I have a peer that just CAN NOT interpret what is going on so he always need to schedule a meeting and it pisses me off hard. "are you able to talk right now?" sigh
[+] amrocha|3 years ago|reply
"Prompt engineering" isn't real. What are you engineering? You're throwing shit at the wall and hoping it sticks.

Software Engineering is kinda fake, especially in industry, but at least that's an actual discipline.

"AI monkey" is a better description

[+] iguana|3 years ago|reply
This is an unhelpfully cynical take. The job title has "engineer" in it, so a more charitable interpretation is more serious than "AI monkey".

Using LLMs to solve real problems is not easy. Making sure that you don't introduce regressions while making improvements is difficult, and requires building and evaluating a dataset, and the necessary pipelines. It may also include diversification of LLM providers, and creating the necessary abstractions. A fundamental understanding of how LLMs work, ability to compare different architectural approaches, along with typical data engineering and software development skills would be required.

What if you want to use the LLM for Question/Answer systems that requires working with embeddings? What if you want to find a way to process data locally without sending sensitive data to the LLM provider?

This requires real engineering skills.

[+] williadc|3 years ago|reply
> You're throwing shit at the wall and hoping it sticks.

A more charitable description might be "You're employing the scientific method to extract value from GPT-like systems." Just like in science, with time you're developing intuition for how the underlying system works, but you still have to run the experiments.

[+] haxton|3 years ago|reply
Would you prefer "LLM Reverse Engineer" then?
[+] blibble|3 years ago|reply
it's like those books from the late 90s

"Google for Dummies"

[+] Oras|3 years ago|reply
OP here, I see lots of comments about prompt engineering thinking in the context of asking one question to get the answer.

In that perspective, I understand why many people think it is useless. However, if you tried to make a chain of functions/calls, or worked with a tool like LangChain [0] you will see its importance.

Ex: "Which stock had better performance in the last 6 months, Tesla or Microsoft?"

A question like this would check:

- Understanding this is a financial question.

- Get the stock ticker (symbol) for each one.

- Use an API to get their performance history in the last 6 months.

- Compare.

- Return the answer.

[0] https://github.com/hwchase17/langchain

[+] throwaway1851|3 years ago|reply
And, with that, we’re back to pre-LLM chatbot design: intent classification, entity extraction, business logic, return a result. Only the whole process rests on a more rickety foundation. It’s also bloated and slow, querying an LLM over and over for these things. I’m starting to see some parallels to modern JavaScript and SPAs. ;-)
[+] swader999|3 years ago|reply
Feels for the people that made a living by googling for stuff. They are going to have to upgrade.
[+] game_the0ry|3 years ago|reply
Nice.

Given the wild popularity of posts about prompt engineering "jobs" paying +$300k, it was only a matter of time for an indie hacker to create a job board specifically for this type of job.

[+] angarg12|3 years ago|reply
I've also read the articles about prompt engineers with no experience or technical skills commanding six figures, or the one about these jobs paying +300k.

I have a BSc and Master's in Computer Science, 14 years of experience, I have slowly climbed the ladder, got a FAANG job 5 years ago, and only last year I managed to break 300k salary for the first time (working in ML of all things). Am I losing my sanity or these reports are greatly exaggerated?

[+] monero-xmr|3 years ago|reply
“Poke this box different ways until the right stuff leaks out”
[+] sokoloff|3 years ago|reply
"Jiggle this bag of parts until the device assembles itself."
[+] capableweb|3 years ago|reply
Essentially what many software "engineers" do day-to-day.
[+] hbarka|3 years ago|reply
This exchange: “I have a strong suspicion that “prompt engineering” is not going to be a big deal in the long-term & prompt engineer is not the job of the future. AI gets easier. You can already see in Midjourney how basic prompts went from complex in v3 to easy in v4. Same with ChatGPT to Bing. To the extent that prompt engineering remains a thing, it might be the Era of the Humanities Major.“

https://twitter.com/emollick/status/1627804798224580608

[+] WolfOliver|3 years ago|reply
The goal should be to be able to just talk your model without any engineering. If you as a normal user can not interact with an AI then the AI is just not smart.
[+] simonw|3 years ago|reply
"AI" isn't smart. Doesn't mean it's not useful if you know how to use it.
[+] capableweb|3 years ago|reply
The goal is for the end-user to just be able to talk to your model. But to get there, there is some additional "engineer" needed to be done.

What does "smart" mean to you in this context?

[+] scottiebarnes|3 years ago|reply
All the job listings seem to be actual software/ML engineering roles and not really "prompt engineering" roles.
[+] xkcd-sucks|3 years ago|reply
My prediction that we'll have psychologists for computers before having a mechanistic understanding of cognition seems to be coming true :)
[+] atum47|3 years ago|reply
> Prompt Engineer 20 ans d'expérience

I don't speak french but it clearly says 20 years of experience. Talk about trolling.

[+] ukuina|3 years ago|reply
This would be a great resource for GPT to find humans who can prompt GPT better.
[+] zerop|3 years ago|reply
Will knowing deep learning help on writing better prompts for LLMs?
[+] asasidh|3 years ago|reply
Prompts are kind of like SQL in the structured database world
[+] rrmdp|3 years ago|reply
Nice one!

I think Prompt Engineering Jobs could become popular