top | item 34523844


kieselguhr_kid | 3 years ago

When I was new, I saw one of my more experienced colleagues ask a few questions that together saved the company more than $1 million each year. ChatGPT might automate some low-level tasks or help eliminate bugs, but it is nowhere near ready to evaluate the context of a system, understand its history, or think* through the consequences of a major business decision.

* or think at all, in any meaningful way.


csomar|3 years ago

Though if an AI comes along with the capacity to include more context (i.e., all company financials, communications, market analysis, etc.), it might be even more effective than a human with precise context.

Communication might be strictly email in the future, or something else that can be piped into the "AI" for context. Video calls might make it in too at some point. Face-to-face meetings strictly prohibited.

RhodesianHunter|3 years ago

I agree with you to a point, but I think the only reason it can't understand the context of a system is that it hasn't been trained on that system's code and documentation, which is obviously coming soon.

kieselguhr_kid|3 years ago

I'm not sure training these models on code and documentation will make that much of a difference. These models struggle significantly with subtlety, relevance, and correctness. They also don't have a theory of their own knowledge or confidence, and so tend to "hallucinate" and put out confidently worded nonsense, especially on complex and nuanced topics.

A big part of my job in software is having a sharp grasp of my own ignorance, the ability to weigh a variety of tradeoffs, and the ability to convey my confidence in my own abilities and my team's. I'm not sure this is possible for this generation of AI.

quonn|3 years ago

The problem is not the system, but the context of the system.