svaha1728 | 8 months ago
Let's be honest, many of those can't be found just by 'reading' the code; you have to get your hands dirty and manually debug and/or test the assumptions.
rco8786 | 8 months ago
People don’t like to do code reviews because it sucks. It’s tedious and boring.
I genuinely hope that we’re not giving up the fun parts of software, writing code, and in exchange getting a mountain of code to read and review instead.
thunspa | 8 months ago
My fear is that we will end up just trying to review code and to write tests and some kind of specification in natural language (which is very imprecise).
However, I can't see how this approach would ever scale to a larger project.
barrenko | 8 months ago
Or even to make sure that the humans left in the project actually read the code instead of just swiping next.
Joof | 8 months ago
Assume we have excellent test coverage -- the AI can write the code and get feedback on whether it is secure / fast / etc.
And the AI can help us write the damn tests!
ofjcihen | 8 months ago
Anecdata, but since we started having our devs heavily use agents we've had a resurgence of mostly dead vulnerability classes, such as RCEs (one with a CVE from 2019, for example), as well as a plethora of injection issues.
When asked how these made it in, devs respond with "I asked the LLM and it said it was secure. I even typed MAKE IT SECURE!"
If you don't understand something well enough, you don't know enough to call BS. In cases like this it doesn't matter how many times the agent iterates.
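To make the injection class concrete, here is a minimal hypothetical sketch (not from the actual incidents, all names illustrative): a query built by string formatting, next to the parameterized version a reviewer who understands the issue would insist on.

```python
import sqlite3

def find_user_unsafe(conn, username):
    # Vulnerable: username is interpolated directly into the SQL text,
    # so an input like "x' OR '1'='1" changes the query's meaning.
    cur = conn.execute(f"SELECT id FROM users WHERE name = '{username}'")
    return cur.fetchall()

def find_user_safe(conn, username):
    # Parameterized: the driver treats username strictly as data.
    cur = conn.execute("SELECT id FROM users WHERE name = ?", (username,))
    return cur.fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice'), (2, 'bob')")

payload = "x' OR '1'='1"
# The unsafe version returns every row for this payload;
# the safe version returns none.
```

An agent can happily produce either version and assert both are "secure"; only a reader who knows the difference can call BS.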
thunspa | 8 months ago
On a more serious note: how could anyone possibly ever write meaningful tests without a deep understanding of the code that is being written?