lakySK | 2 years ago
3 big use cases for me so far:
- GitHub Copilot is just great. Often it fills in the code I was just about to write (or Google for). It just saves time, period. Plus, there have been at least a few times when I was ready to throw in the towel at the end of the day, but carried through because Copilot suggested the next line or the implementation of a method I wanted to just create a placeholder for.
- ChatGPT for a project in an unknown domain. A few weeks ago I wanted to create a Chrome extension. I've never done it before and didn't know where to start. I asked ChatGPT and it delivered a great interactive tutorial with just the right code to get me started (I had to fix a few bugs that ChatGPT helped debug). Am I an expert in Chrome extensions thanks to it? Hell no! Have I created a working Chrome extension in a very short amount of time? 100% yes!
- ChatGPT for debugging. When I search for an error and get not-so-relevant Google search results, often ChatGPT can suggest rather relevant things to look into given an error message.
Of course, your mileage may vary, but saying AI does not help programmers based on a quick test of it not implementing things perfectly seems surprisingly shortsighted, I'd say.
marginalia_nu|2 years ago
I find it's great as long as what you're doing is very straightforward and boilerplate-y. I find I have to go and rewrite a lot of what it outputs, though, since it tends to be, for lack of a better word, noodly. You often have to invert conditions and move stuff around in Copilot's suggestions, otherwise everything has 7 levels of indentation and redundant condition checks.
Often, with this rewrite, the Copilot solution isn't really saving any significant time, as you could have just written it correctly to begin with.
> ChatGPT for a project in an unknown domain.
I'd say this is true for problems that are well explored, with plenty of tutorials. Ask for help doing something that's even slightly off the beaten path and you'll get entirely hallucinatory APIs.
Let's say you wanted to write a Parquet file in Java, for example. It's not a particularly strange thing to want to do, except I've never managed to get ChatGPT or Phind to produce a meaningful answer to that inquiry. You get correct-looking answers, except they use code that doesn't exist.
> ChatGPT for debugging
This I do agree with. You can just give it a function and ask "where is the bug in this code?". If there is a bug it will say so. If there isn't a bug, it may sometimes also say there is a bug, but it's pretty easy to verify and dismiss the answer at that stage.
com2kid|2 years ago
"In typescript, Write me an express POST endpoint that takes in a JSON payload, assigns it a unique id of some sort, and uploads it to an s3 bucket"
"To continue on with the last request, in typescript, write me an express GET route that takes in an id of the JSON file that the POST request above uploaded, downloads it from s3 and sends it to the user."
For problems with well defined constraints, ChatGPT4 is amazing. Of course I had to rewrite all the error handling logic, and fill in some details, but it saved me a lot of time looking up APIs.
The flip side is, it originally tried writing against AWS SDK2, and I had to ask it to use SDK3. If I hadn't known about SDK3, I would've had less than optimal code.
Similar thing when I asked it to add ajv validation to an endpoint. I hadn't done that in a couple of years, and I knew it'd take me a while to remember exactly how, while ChatGPT pushed it out in a few seconds, but with non-optimal code (it didn't use middleware). Because I already knew what it should do, I was able to ask for a correction.
I have a genuine fear of Junior developers using ChatGPT and never going through the struggles of learning the tools and technologies that make a good Senior engineer.
PeterStuer|2 years ago
I have been around long enough to have heard the same about
- writing code on the terminal as opposed to on paper (their coding will become just trial and error)
- using debuggers (repeat until it works, they will never understand why it failed to begin with)
- using IDEs (real programmers don't need crutches)
- using languages with extensive standard libraries (how can they ever understand their code if they didn't write their own dictionary)
- using domain frameworks (TF is for people incapable of grokking NNs)
- ...
It's most often not the tool that is producing bad programmers, it's bad programmers holding it wrong ;)
lakySK|2 years ago
With the out-of-date SDK, I had a similar experience. ChatGPT got me started with ManifestV2 for the Chrome extension and I found out it's getting deprecated and I should really use ManifestV3. But you know what, I asked ChatGPT how to update from ManifestV2 to ManifestV3 and it gave me the steps and things to fix. I had to do a few iterations as new errors were coming up and some things needed a bit of a refactor, but it was all done quite fast.
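For context, the headline changes in that migration are well defined: `manifest_version` becomes 3, background pages become a service worker, `browser_action` becomes `action`, and host patterns move out of `permissions` into `host_permissions`. A minimal Manifest V3 file (names and URLs are illustrative) looks roughly like:

```json
{
  "manifest_version": 3,
  "name": "My Extension",
  "version": "1.0",
  "background": { "service_worker": "background.js" },
  "action": { "default_popup": "popup.html" },
  "permissions": ["storage"],
  "host_permissions": ["https://example.com/*"]
}
```

The refactoring the comment mentions usually comes from the service-worker change: background code can no longer assume a persistent page, so long-lived state has to move into storage.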
The fear of juniors skipping out on key learnings, or perhaps even having a hard time finding jobs in a few years, is definitely interesting, and I'm super curious to see how it will go...
moffkalast|2 years ago
GPT 4 is also a serious CSS master, so that cuts so much time from trial and error there.
Current drawbacks though:
- the 2021 cutoff is very apparent; it's terrible at newer stuff, since it can't pull from many examples (browsing mode helps, but it usually fails at finding the info it needs)
- it really can't help with the typical workflow of editing some small thing in a huge codebase because there's no way you can give it enough context for an answer that isn't based on heavy speculation
- when doing too much back and forth, it eventually starts to cut tokens and no longer knows what the original question was; sometimes that's not an issue, but other times it just goes off topic
lakySK|2 years ago
The large codebase context might be somewhat solvable and I've seen projects that use embeddings to find the relevant bits of code to feed GPT to help it with context. No clue how well any of them work though, haven't tested them yet.
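The retrieval step those projects rely on can be sketched in a few lines: embed each code chunk once, embed the query, and keep the top-k chunks by cosine similarity to paste into the prompt as context. The vectors below are illustrative stand-ins for real model embeddings:

```typescript
// Cosine similarity between two equal-length vectors.
export function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Rank code chunks by similarity to the query embedding and keep the top k.
export function topK(
  query: number[],
  chunks: { text: string; embedding: number[] }[],
  k: number,
): string[] {
  return chunks
    .map(c => ({ text: c.text, score: cosine(query, c.embedding) }))
    .sort((x, y) => y.score - x.score)
    .slice(0, k)
    .map(c => c.text);
}
```

How well this works in practice then comes down to chunking strategy and embedding quality, which is presumably where those projects differ.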
I've definitely noticed times when the conversation gets cut off and it can't "remember" the previous messages. Often it results in a loop: ChatGPT gives me a solution, I get an error and send it back, then ChatGPT is terribly sorry and suggests a new solution. Repeat 3 times and we often come full circle back to the first solution...
thatswrong0|2 years ago
It isn't quite "read-my-mind" level _yet_.. but it did feel magical for a while until I got used to it as part of my workflow.