terminalcommand | 1 year ago
Here is my experiment: https://chat.openai.com/share/98cae2bf-a7a6-42e7-b536-f3671c...
I gave minimum context like this: "I have a history exam. You are an expert in British royal history. List me the names of 20 kings and queens in England."
The answer was: "Certainly! Here's a list of 20 kings and queens of England:

1. William the Conqueror
2. William II (Rufus)
3. Henry I
4. Stephen
5. Henry II
6. Richard I (the Lionheart)
7. John
8. Henry III
9. Edward I (Longshanks)
10. Edward II
11. Edward III
12. Richard II
13. Henry IV
14. Henry V
15. Henry VI
16. Edward IV
17. Edward V
18. Richard III
19. Henry VII
20. Henry VIII"
latexr | 1 year ago
And like I said at the start of the conversation:
> Consistency, for one. I have asked LLMs the exact same question twice in a row and got wildly different answers.
You’ve proven my point.
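To make that concrete, here is a minimal sketch (assuming the OpenAI Python client, with `gpt-4o` as a stand-in model name) that sends the identical prompt twice with default sampling. With a nonzero temperature the two answers routinely differ; setting temperature to 0 makes runs far more repeatable, though still not strictly guaranteed:

```python
# Minimal sketch: send the same prompt twice and compare the answers.
# Assumes the `openai` package and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()
prompt = "List the names of 20 kings and queens of England."

answers = []
for _ in range(2):
    response = client.chat.completions.create(
        model="gpt-4o",  # example model, chosen here only for illustration
        messages=[{"role": "user", "content": prompt}],
        # Default sampling (temperature ~1) picks tokens probabilistically,
        # so repeated runs can produce noticeably different lists.
    )
    answers.append(response.choices[0].message.content)

print("Identical answers:", answers[0] == answers[1])
```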
> I gave minimum context like this: "I have a history exam. You are an expert in British royal history.
Your excuses are getting embarrassingly hilarious. As if you need a history exam and to be an expert to understand the context of the question.
By the way, that answer is wrong from the very first entry. So much for giving context and calling it an expert.
terminalcommand | 1 year ago
It's like when we first learned to code. Did syntax errors scare us? Did null pointer exceptions or runtime panics scare us? No, we learned to write code nevertheless.
I use LLMs daily to enhance my productivity, and I try to understand them.
Providing context and assigning roles was a tactic I was taught in a prompt-writing seminar. It may be a totally wrong way to approach it, but it works for me.
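For what it's worth, that role-assignment tactic maps directly onto the system message in chat-style APIs. A minimal sketch, assuming the OpenAI Python client (the model name is just an example):

```python
# Minimal sketch of role assignment via a system message.
# Assumes the `openai` package and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o",  # example model; any chat model accepts these roles
    messages=[
        # The system message carries the assigned role and context.
        {"role": "system",
         "content": "You are an expert in British royal history."},
        # The user message carries the actual task.
        {"role": "user",
         "content": "I have a history exam. List the names of "
                    "20 kings and queens of England."},
    ],
)
print(response.choices[0].message.content)
```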
With each iteration the LLMs get smarter.
Let me propose another example. Think of the early days of computing. If you were an old-school engineer who relied only on calculations with your trusted slide rule, you would have criticised computers because they made errors and crashed. Computing hardware was not stable back then, and the user interfaces were barely usable. Calculations had to be double-checked.
Was investing in learning computing a bad investment then? Likewise, investing in learning to use LLMs is not a bad investment now.
They won't replace us or take our jobs. Let's embrace LLMs and try to be constructive. We are the technically inclined, after all; speaking of faults and doom is easy.
I may be too dumb to use LLMs properly, but I advocate for AI because I believe it is the revolutionary next step in computing tools.