top | item 39213190

zerbinxx | 2 years ago

I’d like to believe the common line that ChatGPT is “just a tool” and that it can be used either to actually learn or to merely comply, just as a university degree can be obtained by genuine demonstration of learning or by mere compliance (or by merely giving the appearance of learning).

My experience with ChatGPT ranges from “it’s really good for rapidly getting a bearing on a certain topic” to “it’s a woeful substitute for independently developing a nuanced understanding of a given topic.” It tends to do an OK job with programming and a very poor job with critical theory.

anileated | 2 years ago

> a university degree can be obtained by mere compliance or demonstration of learning

Exactly. It “only” shows you are able and willing to at least understand the requirements, internalize them well enough, and comply with them. It shows your capability to understand and work together with other humans.

Which is key.

In my impression, the knowledge you receive at the uni is almost never really pertinent to any actual job, and anyone can have a PhD-level understanding of a subject without having finished high school.

It is the capability of understanding and working in a system that matters.

Similarly with a chatbot. Using it to game interviews in the ways described does not mean the candidate is stupid, or anything like that. It is, though, a negative signal of one’s willingness and intrinsic motivation to do things like internalize job responsibilities and procedures, or simply to behave in good faith.

Mental capacity to do mundane things is often important when it comes to, say, maintaining a nuclear reactor.

> just a tool

> it’s really good for rapidly getting a bearing with a certain topic

Perhaps. Personally I prefer using Google, so that I at least know who wrote what and why, rather than completely outsourcing this to an anonymous team of data engineers at ClosedAI or whatnot. But if it is efficient for getting some knowledge, then why not?

It’s using it to blatantly cheat and do the key part for you where it becomes questionable.

hackit2 | 2 years ago

ChatGPT, like all transformer language models, depends on how well you prime it: the model can only predict the next series of tokens over a finite probability space (the dimensions it was trained on), so it is up to you, as the prompt author, to prime the model so it can be used as a foundation for further reasoning.

Normally, people who get bad results from it would get similar results if they asked a domain expert. Different knowledge domains use a different corpus of text for their core axioms/premises, so if you don’t know the domain area or its keywords, you’re not going to be able to prime the model to get anything meaningful from it.
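The priming point above can be sketched with a toy n-gram model (a minimal illustration, not how a real transformer works): the same word predicts different continuations depending on which domain keywords were supplied earlier in the context.

```python
from collections import Counter, defaultdict

# Toy corpus: the word "core" continues differently depending on domain.
corpus = (
    "the reactor core temperature rose . "
    "the reactor core temperature dropped . "
    "the apple core was brown . "
    "the apple core was sweet ."
).split()

# Trigram counts: estimate P(next word | two previous words).
trigrams = defaultdict(Counter)
for a, b, c in zip(corpus, corpus[1:], corpus[2:]):
    trigrams[(a, b)][c] += 1

def predict(context: str):
    """Return the most likely next word given the last two words of the prompt."""
    key = tuple(context.split()[-2:])
    counts = trigrams[key]
    return counts.most_common(1)[0][0] if counts else None

# The final word is "core" in both prompts, but the domain keyword that
# primed the context changes the prediction:
assert predict("the reactor core") == "temperature"
assert predict("the apple core") == "was"
```

Without the domain keyword in the context, the model has nothing to condition on, which is the commenter’s point about needing the right vocabulary to prime a model at all.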