hickelpickle | 2 years ago
I would say for someone new to coding it can be both good and bad. A lot of the time it glosses over things, or is slightly incorrect because it makes assumptions (or, more accurately, answers in a more general context). When asked to elaborate, or challenged on specifics, it will reformulate and improve its answer, but if you don't know you need to push back, I could easily see it providing half-baked foundational knowledge. You can ask it "x" and it will give an answer; but if you then say you are trying to do "y" with "x" and ask whether "z" is an issue or area of concern with its answer, it will reformulate the information because its original response was flawed. If you don't know exactly what the "y" you want to do is, or lack the foundational knowledge "z" to challenge it on, you can easily get a whole wall of text that is out of context with what you are actually trying to learn.
worrycue | 2 years ago
This is my hang-up with LLMs … I don't trust them.
Frankly it seems just as easy if not easier to just google keywords and read sites.
Google vs. an LLM is like asking random people on the internet (some are brilliant, some don't know anything, some are nuts, etc.) vs. asking random people on the internet who all have a history of suffering from hallucinations and are habitual liars with a compulsive need to answer confidently even when they know nothing.
ModernMech | 2 years ago
Sounds like a description of narcissistic personality disorder, or schizoaffective disorder. Of course a proto-AI would have a personality disorder; go figure.
In the future, the job of an AI psychologist will be to certify the personality of AI products. Gotta make sure you're not shipping a shrink-wrapped psychopath.