top | item 42132365


mav3ri3k | 1 year ago

A current 3rd-year college student here. I really want LLMs to help me learn, but so far the success rate is 0.

They often cannot generate relatively trivial code, and when they do, they cannot explain it. For example, I was trying to learn socket programming in C. Claude generated the code, but when I started asking about it, it regressed hard. Also, the code is often more complex than it needs to be. When learning a topic, I want that topic, not the most common vaguely relevant code with all the spaghetti scraped from GitHub.

For other subjects, like DBMS and computer networks, when asking about concepts you'd better double-check, because they still make stuff up. I asked ChatGPT to solve a previous year's DBMS question, and it gave a long answer which looked good on the surface. But when I actually read through it, because I need to understand what it is doing, there were glaring flaws. When I pointed them out, it made other mistakes.

So, LLMs struggle to generate concise, to-the-point code. They cannot explain that code. They regularly make stuff up. This is after trying Claude, ChatGPT, and Gemini, with their paid versions, in various capacities.

My bottom line is that I should NEVER use an LLM to learn. There is no fine line here. I have tried again and again, because tech bros keep preaching about sparks of AGI and making startups with zero coding skills. They are either fools or geniuses.

LLMs are useful strictly if you already know what you are doing. That's when your productivity gains are achieved.


guappa | 1 year ago

Brace yourself: the people who are going to tell you it was all your fault are here!

I got bullied at a conference (I was in the audience) because when the speaker asked me, I said AI is useless for my job.

My suspicion is that these kinds of people basically just write very simple things over and over, and they have zero knowledge of theory or how computers work. Also, their code is probably garbage, but it sort of works for the most common cases, and they think that's completely normal for code.

owenpalmer | 1 year ago

I'm starting to suspect that people generally have poor experiences with LLMs due to bad prompting skills. I would need to see your chats with it in order to know if you're telling the truth.

mav3ri3k | 1 year ago

There is no easy way to share. I copied them into a Google Doc: https://docs.google.com/document/d/1GidKFVgySgLUGlcDSnNMfMIu...

One with ChatGPT about DBMS questions and one with Claude about socket programming.

Looking back, are some of the questions a little stupid? Yes. But of course they are! I am coming in with zero knowledge, trying to learn how the socket programming is happening here: which functions are being pulled from which header files, etc.

In the end I just followed along with a random YouTube video. When you say you can get an LLM to do anything, I agree. Now that I know how socket programming works, for the next assignment question, writing code for CRC over sockets, I asked it to generate the socket code, made the necessary changes, asked it to generate a separate function for CRC, integrated it manually, and voila, assignment done.

But this is the execution phase, when I have the domain knowledge. During learning, when the user asks stupid questions and the LLM's answers keep getting stupider, using them is not practical.

WhyOhWhyQ | 1 year ago

The simpler explanation is that LLMs are not very good.

blharr | 1 year ago

I mean, likely yes, but if you have to spend the time to prompt correctly, I'd rather just spend that time learning the material I actually want to learn.

WA | 1 year ago

I've been programming for 20 years, mostly JS for the last 10. Right now, I'm learning Go. I wrote a simple CLI tool to get data from several servers. I asked GPT-4o to generate some code, which worked fine at first. Then I asked it to rewrite the code with channels to make it concurrent, and the result contained at least one major bug.

I don't dismiss it as completely useless, because it pointed me in the right direction a couple of times, but you have to double-check everything. In a way, it might help me learn, because I have to read its output critically. From my perspective, the success rate is a bit above 0, but it's nowhere close to "magical".

fragmede | 1 year ago

Care to share any of these chats?