top | item 36204007

lakySK | 2 years ago

Agreed that ChatGPT can be a bit hit-or-miss when you ask it to produce code. But I'd say a good 50% of the code I've written recently was actually created by AI, and I'm a lot more productive thanks to it.

3 big use cases for me so far:

- GitHub Copilot is just great. It often fills in the code I was just about to write (or Google for). It saves time, period. Plus, there have been at least a few times when I was ready to throw in the towel at the end of the day, but carried through because Copilot suggested the next line or the implementation of a method I was about to stub out with a placeholder.

- ChatGPT for a project in an unknown domain. A few weeks ago I wanted to create a Chrome extension. I've never done it before and didn't know where to start. I asked ChatGPT and it delivered a great interactive tutorial with just the right code to get me started (I had to fix a few bugs that ChatGPT helped debug). Am I an expert in Chrome extensions thanks to it? Hell no! Have I created a working Chrome extension in a very short amount of time? 100% yes!

- ChatGPT for debugging. When I search for an error and get not-so-relevant Google results, ChatGPT can often suggest genuinely relevant things to look into given the error message.

Of course, your mileage may vary, but saying AI does not help programmers based on a quick test where it failed to implement something perfectly seems surprisingly shortsighted to me.

marginalia_nu|2 years ago

> Github Copilot is just great.

I find it's great as long as what you're doing is very straightforward and boilerplatey. I find I have to go back and rewrite a lot of what it outputs, though, since it tends to be, for lack of a better word, noodly. You often have to invert conditions and move things around in Copilot's suggestions, otherwise everything ends up with 7 levels of indentation and redundant condition checks.

Once you factor in this rewrite, the Copilot solution isn't really saving any significant time, since you could have just written it correctly to begin with.

> ChatGPT for a project in an unknown domain.

I'd say this is true for problems that are well explored, with plenty of tutorials. Ask for help doing something that's even slightly off the beaten path and you'll get entirely hallucinatory APIs.

Let's say you wanted to write a Parquet file in Java, for example. It's not a particularly strange thing to want to do, yet I've never managed to get ChatGPT or Phind to produce a meaningful answer to that query. You get correct-looking answers, except they use code that doesn't exist.

> ChatGPT for debugging

This I do agree with. You can just give it a function and ask "where is the bug in this code?". If there is a bug, it will say so. If there isn't, it may sometimes still claim there is one, but at that point it's pretty easy to verify and dismiss the answer.

comfypotato|2 years ago

Regarding your criticism of Copilot: I think the productivity impact of those results varies from person to person. The exact issues you mention still help me, in a way I acknowledge is somewhat irrational: the mental load of verifying and manipulating conditions (that are already correct) is much lower than the load of writing them.

com2kid|2 years ago

How about what I did last night:

"In typescript, Write me an express POST endpoint that takes in a JSON payload, assigns it a unique id of some sort, and uploads it to an s3 bucket"

"To continue on with the last request, in typescript, write me an express GET route that takes in an id of the JSON file that the POST request above uploaded, downloads it from s3 and sends it to the user."

For problems with well-defined constraints, ChatGPT-4 is amazing. Of course, I had to rewrite all the error-handling logic and fill in some details, but it saved me a lot of time looking up APIs.

The flip side is that it originally wrote against AWS SDK v2, and I had to ask it to use SDK v3. If I hadn't known about SDK v3, I would've ended up with less-than-optimal code.
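The two prompts above map to a fairly small amount of code. Here's a minimal sketch, assuming Express and AWS SDK v3 (`@aws-sdk/client-s3`); the bucket name, route paths, and key layout are made up for illustration, and the SDK wiring is shown in comments since it needs those external packages:

```typescript
// Pure id/key logic for the POST/GET pair described above.
import { randomUUID } from "node:crypto";

// Assign each uploaded payload a unique id.
function newPayloadId(): string {
  return randomUUID();
}

// Derive the S3 object key from that id (illustrative layout).
function payloadKey(id: string): string {
  return `payloads/${id}.json`;
}

// With Express and the v3 S3 client, the handlers wire up roughly like this:
//
//   import express from "express";
//   import { S3Client, PutObjectCommand, GetObjectCommand } from "@aws-sdk/client-s3";
//
//   const app = express();
//   const s3 = new S3Client({});
//   app.use(express.json());
//
//   app.post("/payloads", async (req, res) => {
//     const id = newPayloadId();
//     await s3.send(new PutObjectCommand({
//       Bucket: "my-payload-bucket",      // illustrative name
//       Key: payloadKey(id),
//       Body: JSON.stringify(req.body),
//       ContentType: "application/json",
//     }));
//     res.status(201).json({ id });
//   });
//
//   app.get("/payloads/:id", async (req, res) => {
//     const obj = await s3.send(new GetObjectCommand({
//       Bucket: "my-payload-bucket",
//       Key: payloadKey(req.params.id),
//     }));
//     res.type("application/json");
//     (obj.Body as NodeJS.ReadableStream).pipe(res);
//   });
```

In the v3 SDK, each operation is a command object passed to `client.send()`, which is the main structural difference from the v2 `s3.putObject(...).promise()` style the model defaulted to.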

Similar thing when I asked it to add ajv validation to an endpoint. I hadn't done that in a couple of years and knew it'd take me a while to remember exactly how, while ChatGPT pushed it out in a few seconds, though with non-optimal code (it didn't use middleware). Because I already knew what the code should do, I was able to ask for a correction.
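The middleware point is worth spelling out. Here's a minimal sketch of the "validation as middleware" shape, assuming an Express-style `(req, res, next)` signature and a compiled validator of the kind `ajv.compile()` returns (a `data => boolean` function); the types are deliberately minimal stand-ins:

```typescript
// Minimal stand-ins for the Express request/response/next types.
type Req = { body: unknown };
type Res = { status: (code: number) => Res; json: (body: unknown) => void };
type Next = () => void;

// Wrap any compiled validator into reusable middleware, instead of
// repeating the validation call inside each route handler.
function validateBody(validate: (data: unknown) => boolean) {
  return (req: Req, res: Res, next: Next): void => {
    if (validate(req.body)) {
      next(); // payload is valid: hand off to the actual handler
    } else {
      res.status(400).json({ error: "invalid request body" });
    }
  };
}

// Usage with Express + ajv would look roughly like:
//   app.post("/items", validateBody(ajv.compile(itemSchema)), handler);
```

The inline version ChatGPT produced works, but the middleware factory keeps the schema check out of the handler and makes it trivially reusable across routes.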

I have a genuine fear of junior developers using ChatGPT and never going through the struggles of learning the tools and technologies that make a good senior engineer.

PeterStuer|2 years ago

"I have a genuine fear for Junior developers using ..."

I have been around long enough to have heard the same about

- writing code at the terminal as opposed to on paper (their coding will become just trial and error)

- using debuggers (repeat until it works, they will never understand why it failed to begin with)

- using IDEs (real programmers don't need crutches)

- using languages with extensive standard libraries (how can they ever understand their code if they didn't write their own dictionary)

- using domain frameworks (TF is for people incapable of grokking NNs)

- ...

It's most often not the tool that is producing bad programmers, it's bad programmers holding it wrong ;)

lakySK|2 years ago

Another great example!

With the out-of-date SDK, I had a similar experience. ChatGPT got me started with Manifest V2 for the Chrome extension, and I then found out it was being deprecated and I should really use Manifest V3. But you know what, I asked ChatGPT how to update from Manifest V2 to Manifest V3 and it gave me the steps and the things to fix. I had to do a few iterations as new errors came up and some things needed a bit of a refactor, but it was all done quite fast.
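For reference, most of those V2 → V3 changes show up in `manifest.json`. A minimal illustrative V3 manifest (names and file paths made up): `browser_action` becomes `action`, the background page becomes a service worker, and host match patterns move out of `permissions` into `host_permissions`:

```json
{
  "manifest_version": 3,
  "name": "My Extension",
  "version": "1.0",
  "action": { "default_popup": "popup.html" },
  "background": { "service_worker": "background.js" },
  "permissions": ["storage"],
  "host_permissions": ["https://example.com/*"]
}
```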

The fear of juniors skipping key learnings, or perhaps even having a hard time finding jobs in a few years, is definitely interesting; I'm super curious to see how it will play out...

moffkalast|2 years ago

Agreed. For me it has, on multiple occasions, fixed errors that are syntactically correct but invisible to the IDE or the interpreter/compiler, like forgetting to change a variable when copy-pasting a section, or calling the wrong function of the correct type. Stuff that can only be spotted by thinking about the code semantically.

GPT-4 is also a serious CSS master, which cuts so much time from trial and error there.

Current drawbacks though:

- the 2021 cutoff is very apparent; it's terrible at newer stuff since it can't pull from many examples (browsing mode helps, but it usually fails at finding the info it needs)

- it really can't help with the typical workflow of editing some small thing in a huge codebase because there's no way you can give it enough context for an answer that isn't based on heavy speculation

- when doing too much back and forth, it eventually starts to cut tokens and no longer knows what the original question was; sometimes that's not an issue, but other times it just goes off topic

lakySK|2 years ago

Definitely experienced all 3, but still getting more boost than drag overall.

The large-codebase context problem might be somewhat solvable; I've seen projects that use embeddings to find the relevant bits of code to feed GPT for context. No clue how well any of them work though, I haven't tested them yet.
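The embedding idea can be sketched in a few lines. A toy illustration, assuming chunk embeddings have already been computed by some embedding model (the `Chunk` type and vectors are made-up stand-ins): rank code chunks by cosine similarity to the query embedding and keep the top k to paste into the prompt as context:

```typescript
// A chunk of source code plus its precomputed embedding vector.
type Chunk = { path: string; text: string; embedding: number[] };

// Cosine similarity: dot product of the vectors over the product of
// their magnitudes. Assumes equal-length, non-zero vectors.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Rank chunks by similarity to the query embedding and keep the top k;
// these become the context passed to the model alongside the question.
function topKChunks(query: number[], chunks: Chunk[], k: number): Chunk[] {
  return [...chunks]
    .sort((x, y) =>
      cosineSimilarity(query, y.embedding) - cosineSimilarity(query, x.embedding))
    .slice(0, k);
}
```

Real systems layer chunking strategies and vector indexes on top, but the core retrieval step is just this nearest-neighbor lookup.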

I've definitely noticed times when the conversation gets cut off and it can't "remember" the previous messages. Often it results in a loop: ChatGPT gives me a solution, I get an error and send it back, then ChatGPT is terribly sorry and suggests a new solution. Repeat three times and we often come full circle back to the first solution...

adoxyz|2 years ago

I love your 2nd point, and in my experience this is where ChatGPT excels for me. I wanted to start learning Swift, and being able to ask ChatGPT for specific examples of how to do XYZ, copy and paste the code, see the result, tweak, and ask follow-up questions... for the most part it just worked. Is it perfect? Not by a long shot, but it's a great start.

marssaxman|2 years ago

Thanks! That is the first description I've heard of a use for ChatGPT which actually sounds like it would make my work easier. Perhaps I'll give it a try.

lakySK|2 years ago

For sure! I think I learn best by trying to build something rather than following a tutorial. ChatGPT just makes it so much faster to jump straight into that.

thatswrong0|2 years ago

Agree entirely with your assessment. I find Copilot / ChatGPT to have varying degrees of utility, but neither tool ever makes me less productive. To your point, they do generally make me more productive, as they can help me debug / explain errors and code I'm unfamiliar with way better than Google can, assist in a new problem space with some basic introductory scaffolding, and help me write boring-ass code more quickly (e.g. lots of unit tests with different conditions). Copilot especially does a great job of understanding the context of what I'm working on, suggesting imports or autocompleting entire test cases based on my test description string.

It isn't quite "read-my-mind" level _yet_, but it did feel magical for a while, until I got used to it as part of my workflow.