> We write unit tests for the happy path, maybe a few edge cases we can imagine, but what about the inputs we'd never consider? Many times we assume that LLMs are handling these scenarios by default.
I've seen companies advertise with LLM-generated claims (along the lines of "Best company for X according to ChatGPT"), and I've seen (political) discussions held with LLM opinions as "evidence".
So it's pretty safe to say some (many?) people give inappropriate credence to LLM outputs. It's eating our minds.
The original claim for TDD is that you write tests for all your edge cases. Inputs you didn't consider don't matter because they're covered at the edges. If you can only accept inputs from 2-7 (inclusive), you check 1, 2, 7, and 8 - if those pass, you assume the rest work.
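To make the boundary-value idea concrete, a minimal sketch (the validator and assertions here are hypothetical, not from anyone's codebase):

```typescript
// Hypothetical validator for the 2-7 (inclusive) example above.
function acceptsValue(n: number): boolean {
  return n >= 2 && n <= 7;
}

// Boundary-value checks: just outside and just inside each edge,
// trusting the interior of the range.
console.assert(acceptsValue(1) === false, "1 is below the lower edge");
console.assert(acceptsValue(2) === true, "2 is the lower edge");
console.assert(acceptsValue(7) === true, "7 is the upper edge");
console.assert(acceptsValue(8) === false, "8 is above the upper edge");
```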
What’s interesting to me about this, reckless as it is, is that the conversation has begun to shift toward balancing LLMs with rigorous methods. These people seem to be selling some kind of AI hype product backed by shoddy engineering, and even they are picking up on the vibe. I think this is a really promising sign for the future.
Technically, a property-based test caught the issue.
What I've found surprising is that the __proto__ string comes from a fixed sampling set of strings, whereas I'd have expected the function to return random strings in the given range.
But maybe that's my bias from being introduced to property-based testing with random values. It also feels like a stretch to call this a property-based test, because what is the property - "setters and getters that work"? Because I expect that from all my classes.
Good PBT code doesn't simply generate values at random; it skews the distributions so that known problematic values are more likely to appear. In JS, "__proto__" is a good candidate for strings, as shown here; for floating-point numbers you'll probably want to skew towards generating things like infinities, NaNs, denormals, negative zero, and so on. It'll depend on your exact domain.
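A rough sketch of what that skewing can look like with fast-check (the Store class and the weights are illustrative, not the article's code, and this assumes a fast-check version whose fc.oneof accepts weighted arbitraries):

```typescript
import fc from "fast-check";

// Illustrative key/value store whose get/set round trip breaks for "__proto__".
class Store {
  private data: Record<string, string> = {};
  set(key: string, value: string) { this.data[key] = value; }
  get(key: string): string | undefined { return this.data[key]; }
}

// Skew the key distribution: mostly arbitrary strings, plus a deliberate
// weight on known-problematic keys so they show up in nearly every run.
const keyArb = fc.oneof(
  { weight: 4, arbitrary: fc.string() },
  { weight: 1, arbitrary: fc.constantFrom("__proto__", "constructor", "toString") },
);

fc.assert(
  fc.property(keyArb, fc.string(), (key, value) => {
    const store = new Store();
    store.set(key, value);
    // The property: whatever we set, we should get back.
    return store.get(key) === value;
  }),
);
```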
> Is this exploitable? No. ... JSON.stringify knows to skip the __proto__ field. ... However, refactors to the code could ... [cause] subtle incorrectness and sharp edge cases in your code base.
So what? This line of what-if reasoning is so annoying, especially when it's analysis of a language like JavaScript. There's no vulnerability found here, and most web developers are well aware of the risky parts of the language. This is almost as bad as all the insane false positives SAST scans dump on you.
Oh I'm just waiting to get dogpiled by people who want to tell me web devs are dumber than them and couldn't possibly be competent at anything.
> most web developers are well aware of the risky parts of the language
In my experience this really isn’t true. Most web developers I know are not familiar (enough) with prototype pollution.
By the way, this isn’t because they are “dumb”. It’s the tool’s fault, not the craftsman’s, in this case. Prototype pollution is complicated and surprising.
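A small demo of the surprise (mine, not from the article): with a plain object, a "__proto__" write never creates an own property, and a nested write pollutes every object in the program.

```typescript
// Writing a string to "__proto__" hits the inherited accessor and is a silent no-op:
const store: Record<string, string> = {};
store["__proto__"] = "payload";
console.log(JSON.stringify(store)); // "{}" - the value vanished, no error thrown
console.log(store["__proto__"]);    // Object.prototype, not "payload"

// Reading "__proto__" hands back Object.prototype, so a nested write (the kind
// naive deep-merge helpers do) mutates every object in the program:
const other: Record<string, any> = {};
other["__proto__"]["polluted"] = true;
console.log(({} as any).polluted);  // true - an unrelated empty object now has it
```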
This just can't be your answer to everything... the article clearly stated that they're developing a client application for browsers. Rust advocates like yourself are really doing more harm than good by ignoring real world constraints.
TL;DR: obj[key] with user-controlled key == "__proto__" is a gift that keeps on giving; buy our AI tool that will write subtle vulnerabilities like that which you yourself won’t catch in review but then it will also write some property-based tests that maybe will
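For what it's worth (not stated in the thread), the usual ways to take that gift away are a Map or a null-prototype object; a quick sketch:

```typescript
// Option 1: Map - keys can never collide with anything inherited.
const safeMap = new Map<string, string>();
safeMap.set("__proto__", "payload");
console.log(safeMap.get("__proto__")); // "payload"

// Option 2: an object without a prototype, so there is no inherited
// "__proto__" accessor to hit and the key behaves like any other string.
const safeObj: Record<string, string> = Object.create(null);
safeObj["__proto__"] = "payload";
console.log(safeObj["__proto__"]);     // "payload"
console.log(JSON.stringify(safeObj));  // {"__proto__":"payload"}
```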
philipwhiuk|2 months ago
Do we?
yakshaving_jgt|2 months ago
I don't think this is true, and I think that's supported by the success of JavaScript: The Good Parts.
It would be unfair to characterise a lack of comprehensive knowledge of JavaScript foot-guns as general incompetence.
jgalt212|2 months ago
Great LLM use case: please explain to the box-ticking person why these "insane false positive" SAST findings are false and/or of no consequence.
toobulkeh|2 months ago
My takeaway is “don’t write your own input tests, use a library”. The rest is AI slop.