ANarrativeApe | 1 year ago
I have a 6 million+ word archive with ChatGPT.
It truly is like having an army of interns, each a confident undergrad in a different subject, who paid attention to every lecture they ever went to, even the ones they'd popped acid just before attending.
It's right more often than it is wrong, but some of its clangers are almost unbelievable.
Yet, having never written a line of code, I used it to build a Python application that analysed election data and plotted the results on an interactive map, giving constituency-specific data on hover.
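The post doesn't describe how that app worked, so this is only a hypothetical sketch of the analysis half: tallying a CSV of constituency votes and building the text a hover tooltip would show. The map layer itself would come from a library like folium and is omitted here; the column names and sample figures are invented, not taken from the original.

```python
# Hypothetical sketch: tally constituency-level election results so each
# seat can be summarised in a hover tooltip. Sample data is invented.
import csv
import io
from collections import defaultdict

SAMPLE_CSV = """constituency,party,votes
Hove,Labour,28000
Hove,Conservative,14000
Arundel,Conservative,31000
Arundel,Labour,12000
"""

def tally(csv_text):
    """Return {constituency: {'votes': {party: n}, 'winner': party}}."""
    totals = defaultdict(dict)
    for row in csv.DictReader(io.StringIO(csv_text)):
        totals[row["constituency"]][row["party"]] = int(row["votes"])
    return {
        seat: {"votes": parties, "winner": max(parties, key=parties.get)}
        for seat, parties in totals.items()
    }

def hover_text(seat, result):
    """Build the string a map layer would display on hover for one seat."""
    lines = [f"{p}: {v:,}"
             for p, v in sorted(result["votes"].items(), key=lambda kv: -kv[1])]
    return f"{seat} (won by {result['winner']})\n" + "\n".join(lines)

results = tally(SAMPLE_CSV)
print(hover_text("Hove", results["Hove"]))
```

In a real version, `hover_text` output would be attached to each constituency polygon as a tooltip rather than printed.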
It invariably uses the word "clarify" instead of "correct" when challenged. Yet it knows that a clarification refines an answer within the set of previously proffered answers, and that a correction is a revision to an answer outside that set.
It believes this is so consistent that, on the balance of probabilities, it is coded behaviour and not purely a result of training data.
When asked to write an article on this, and to include the instances from that conversation where it had incorrectly used the word "clarify", it edited the quotes to remove the evidence (probably the most egregious act I've witnessed it perform).
I still use ChatGPT, even more so now since DeepSeek got slow, but I watch it like a hawk.
I still call it out every time it prevaricates or flat-out lies; it still promises to do better; it still, on being challenged, acknowledges that these assurances are dangerous lies to anyone who doesn't know it's lying.
But, for me, it is still a highly useful tool.
It frequently makes the assumptions that practitioners in a field I'm unfamiliar with would make, in a way that allows me to refine my arguments.
Sharing ChatGPT chats can be a very helpful means of sharing one's thought process.
I have it under strict instructions not to create unless specifically told to, not to regurgitate what it has already produced, and to focus on critiquing instead of echoing or praising.
Yet it still reckons 70% of its output violates these instructions.
But the remaining 30% justifies the time I spend using this remarkable, next-generation automation machine.
Because that is what it is.
*To say it is intelligent is like saying a scanner has an eye for detail. Yes, a scanner captures every pixel, but an LLM is no more a brain than a scanner is an eye. (And, yes, I know, but this is a line for people who don't know the neurological processing behind sight, which, to be fair, is frequently not very logical.)
So is it a threat to people who earn money on Fiverr writing bits of code or designing logos? Hell yes.
Is it a threat to those who code complex systems, or whose designs can add actual digits to market share? Hell no. Or at least not for the foreseeable future.
Just as the dotcom bubble funded the internet infrastructure that we still use today (just very inefficiently), it is unlikely these trillions will be completely wasted.