item 40428715

hugocbp | 1 year ago

As others have said, I find it very useful for smaller and simpler cases: focused, small functions. A lot of the time, Copilot and ChatGPT (and also Llama 3 via Ollama) are great at writing tests for edge cases that I might have forgotten.
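The kind of target this works well for might look like the following: a small, pure function (Clamp is hypothetical here) plus the table-driven test an assistant tends to produce, covering boundary and out-of-range cases.

```go
// A minimal sketch: a small, pure function and a table-driven test over
// its edge cases (boundaries, out-of-range values). Clamp is a made-up
// example, not from the thread.
package main

import "fmt"

// Clamp restricts v to the inclusive range [lo, hi].
func Clamp(v, lo, hi int) int {
	if v < lo {
		return lo
	}
	if v > hi {
		return hi
	}
	return v
}

func main() {
	cases := []struct {
		name            string
		v, lo, hi, want int
	}{
		{"inside range", 5, 0, 10, 5},
		{"below lower bound", -3, 0, 10, 0},
		{"above upper bound", 42, 0, 10, 10},
		{"exactly at upper bound", 10, 0, 10, 10},
	}
	for _, c := range cases {
		if got := Clamp(c.v, c.lo, c.hi); got != c.want {
			fmt.Printf("FAIL %s: got %d, want %d\n", c.name, got, c.want)
			return
		}
	}
	fmt.Println("all cases pass") // → all cases pass
}
```

Functions like this are easy for a model to get right because the whole contract fits in the prompt and there is no external state to set up.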

But for anything more complex it is very hit or miss. I'm now trying to use GPT-4 Turbo to write integration tests for some Go code that talks to the database, and it is mostly a disaster.

It will constantly mock the very things I want tested, and write useless tests that do basically nothing, because either everything is mocked or the setup is incomplete.

I'm settling on using it for tests of those small, pure functions, and otherwise using it as a guide to find possible bugs / edge cases in more complex code, then writing the tests myself and asking it in another prompt whether they would cover those cases.

Like most people who actually use AI heavily these days, I think the usefulness of AI for coding increases a lot if you already have a pretty good grasp of the subject and the problem space you are working in. If you already know roughly what you want and how to ask for it, these tools can be a huge time saver on the smaller and simpler things.


acedTrex | 1 year ago

The most value I have ever gotten out of AI for coding was when I refactored about 20,000 lines of gomega assertions into the more robust complex-object matcher pattern. It did a good chunk of the grunt work quickly, and was probably 85% accurate.
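For readers unfamiliar with the pattern: gomega's gstruct-style matchers assert on selected fields of a complex object by name, rather than deep-comparing the whole thing. A stdlib-only sketch of the idea, not gomega itself (Pod, FieldMatchers, and Match are all made up here):

```go
// Sketch of the object-matcher idea using reflection: check only the
// fields you care about, with a predicate per field, instead of a brittle
// whole-struct equality assertion.
package main

import (
	"fmt"
	"reflect"
)

// FieldMatchers maps struct field names to predicates on the field value.
type FieldMatchers map[string]func(v any) bool

// Match checks each named field of obj (a struct) against its predicate.
func Match(obj any, ms FieldMatchers) error {
	rv := reflect.ValueOf(obj)
	if rv.Kind() != reflect.Struct {
		return fmt.Errorf("not a struct: %T", obj)
	}
	for name, pred := range ms {
		f := rv.FieldByName(name)
		if !f.IsValid() {
			return fmt.Errorf("no field %q", name)
		}
		if !pred(f.Interface()) {
			return fmt.Errorf("field %q: unexpected value %v", name, f.Interface())
		}
	}
	return nil
}

type Pod struct {
	Name     string
	Replicas int
	Labels   map[string]string
}

func main() {
	p := Pod{Name: "web-1", Replicas: 3, Labels: map[string]string{"app": "web"}}
	err := Match(p, FieldMatchers{
		"Name":     func(v any) bool { return v.(string) == "web-1" },
		"Replicas": func(v any) bool { return v.(int) > 0 },
	})
	fmt.Println(err) // → <nil>
}
```

The real gomega/gstruct version adds nesting, ignore-extras semantics, and good failure messages, which is what makes large assertion refactors like the one described worthwhile.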

dartos | 1 year ago

It’s nice for doing refactors.

I like having it translate config formats too: a series of env vars into YAML or TOML, for example.

afro88 | 1 year ago

It can work for more complex tests, but you have to give it an initial test that already sets everything up and uses mocks correctly. From there, it will generate mostly correct tests.