top | item 36339123


psytrx | 2 years ago

I'm in a similar position. Top x% in my field, more than fair compensation, somewhat financial security. I worked my butt off towards burnout to get there (currently slowly recovering).

I recently listened to a conversation between Lex Fridman and Stephen Wolfram [1]. Stephen said something that stuck with me for a while. Something along the lines of:

GPTs are statistical word-guessing machines. They regurgitate the average of the internet. You can get around that by playing with temperature and top-k, but that'll only make them less accurate and increase the chances that they hallucinate.
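To make the temperature/top-k point concrete, here's a rough sketch of how those two knobs work at sampling time. The function name and logits are made up for illustration; real implementations operate on tensors, but the mechanics are the same: temperature rescales the logits before the softmax, and top-k throws away everything but the k most likely tokens.

```python
import math
import random

def sample_next_token(logits, temperature=1.0, top_k=None):
    """Pick the next token id from raw logits (illustrative sketch).

    temperature < 1 sharpens the distribution (closer to the 'average'
    answer), temperature > 1 flattens it (more surprising, more errors).
    top_k keeps only the k highest-scoring tokens before sampling.
    """
    # Scale logits by temperature.
    scaled = [(i, l / temperature) for i, l in enumerate(logits)]
    # Optionally restrict to the k highest-scoring tokens.
    if top_k is not None:
        scaled = sorted(scaled, key=lambda p: p[1], reverse=True)[:top_k]
    # Softmax over the surviving tokens (shifted by the max for stability).
    m = max(l for _, l in scaled)
    weights = [math.exp(l - m) for _, l in scaled]
    ids = [i for i, _ in scaled]
    return random.choices(ids, weights=weights, k=1)[0]
```

With a very low temperature or top_k=1 this collapses to greedy decoding (always the most likely token); cranking the temperature up spreads probability onto tokens the model itself considers unlikely, which is exactly why it raises the hallucination risk.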

You, however, are an expert. Better than the average of the internet. Years of experience, and likely with higher standards. You understand the requirements, architect maintainable systems, and align with the company goals. You learned how to read requirements from context and between the lines.

Yes, GPTs can do highly complex things, and sometimes it feels magical or even scary. But you'll have to prime them to do so, and you'll have to guide them. They are your junior programmer, and you are their lead. You understand context, and they don't (simply because they don't have any).

Start accepting that from now on you have your own personal junior dev. Learn to guide them in the right direction, and learn to write precise tasks. I'd also suggest looking into GitHub Copilot Chat to see where this is going.

If we're lucky, they'll get better with time and lots of research, but that's (likely) none of your concern.

Uneducated opinion:

From this point on, I think it's likely the corpus they consume gets worse, because they're starting to consume their own generated, average-of-the-internet content. You've built your own Reddit bot, so you know how easy it is to get generated content out there. It's an unsolved problem, and it's only getting harder: the number of different public models (which all suffer from the same challenge) is steadily increasing, so detecting such content becomes a lot more difficult. The technology might get exponentially better, but in the end it boils down to the corpus these models consume and learn from. This will slow things down and give you some time to learn to use these tools to your advantage rather than see them as competition.

[1] https://lexfridman.com/stephen-wolfram-4/


remich | 2 years ago

The coming LLM content ouroboros is a real problem. Garbage in, garbage out.