sh3rl0ck | 1 month ago
Most of my feedback that can be automated is done either by this or by fuzzing. Would love to hear about other optimisations y'all have found.
__MatrixMan__|1 month ago
There are also openapi spec validators to catch spec problems up front.
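A minimal sketch of the kind of up-front check a spec validator performs. Real tools (e.g. openapi-spec-validator) validate the full OpenAPI schema; this illustrative snippet only checks a few required top-level fields, and the `check_spec` helper is hypothetical:

```python
# Illustration of up-front OpenAPI spec checks (hypothetical helper).
# Real validators check the whole schema; this only looks at a few
# required top-level fields to show the idea.

def check_spec(spec: dict) -> list[str]:
    """Return a list of problems found in an OpenAPI spec dict."""
    problems = []
    if "openapi" not in spec:
        problems.append("missing 'openapi' version field")
    if "title" not in spec.get("info", {}):
        problems.append("missing 'info.title'")
    if not spec.get("paths"):
        problems.append("no 'paths' defined")
    return problems

# Example: a spec whose paths section is empty.
spec = {"openapi": "3.0.0", "info": {"title": "Demo API"}, "paths": {}}
print(check_spec(spec))  # -> ["no 'paths' defined"]
```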
And you can use contract testing (e.g. https://docs.pact.io/) to replay your client tests (with a mocked server) against the server (with mocked clients)--never having to actually spin up both at the same time.
Together this creates a pretty widespread set of correctness checks that generate feedback at multiple points.
It's maybe overkill for the project I'm using it on, but as a set of AI handcuffs I like it quite a bit.
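The replay idea can be sketched in plain Python: record one interaction as a "contract", use it to mock the server when testing the client, then verify the real server handler against the same contract. This is the shape of what Pact does, not its API; all names here (`fetch_user`, `handle_get_user`) are hypothetical:

```python
# Contract-testing sketch: one recorded interaction (the "contract") is
# used twice -- once as a mocked server for the client test, once to
# check the real server handler -- so client and server never run at
# the same time. Names are hypothetical, not the Pact API.

contract = {
    "request": {"method": "GET", "path": "/users/1"},
    "response": {"status": 200, "body": {"id": 1, "name": "Ada"}},
}

def fetch_user(transport, user_id):
    """Client code under test; 'transport' performs the HTTP call."""
    status, body = transport("GET", f"/users/{user_id}")
    assert status == 200
    return body["name"]

def handle_get_user(method, path):
    """Server handler under test."""
    if method == "GET" and path.startswith("/users/"):
        return 200, {"id": int(path.rsplit("/", 1)[1]), "name": "Ada"}
    return 404, {}

# 1) Client side: replay the contract as a mocked server.
def mock_server(method, path):
    req = contract["request"]
    assert (method, path) == (req["method"], req["path"])
    return contract["response"]["status"], contract["response"]["body"]

assert fetch_user(mock_server, 1) == "Ada"

# 2) Server side: replay the same contract against the real handler.
status, body = handle_get_user(**contract["request"])
assert status == contract["response"]["status"]
assert body == contract["response"]["body"]
print("contract satisfied on both sides")
```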