Absolutely. In my experience every AI startup is full of AI maximalists. They use AI for everything they can - in part because they believe the hype, in part to keep up with model capabilities. They would certainly go so far as to write such an important piece of text using an LLM.
They first disabled rubocop to prevent further exploitation, then rotated keys. If they had waited for the fix to deploy, the compromised keys would have remained valid for 9 more hours. According to their response, all other tools were already sandboxed.
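For anyone wondering what "sandboxed" means in practice here, a minimal sketch, assuming each tool is launched as a child process. The container image, paths, and setup below are illustrative, not CodeRabbit's actual infrastructure - one common approach is a throwaway container with no network and no inherited environment:

    import subprocess

    # Illustrative sketch: run an untrusted tool in a disposable container
    # with no network and a read-only filesystem, so even arbitrary code
    # execution inside the tool cannot phone home or touch host secrets.
    # "sandbox/rubocop:latest" and the paths are hypothetical.
    result = subprocess.run(
        [
            "docker", "run",
            "--rm",                          # discard the container afterwards
            "--network", "none",             # no route to exfiltrate anything
            "--read-only",                   # container filesystem is immutable
            "-v", "/tmp/checkout:/code:ro",  # repo under review, mounted read-only
            "sandbox/rubocop:latest",
            "rubocop", "/code",
        ],
        capture_output=True,
        text=True,
    )
    print(result.stdout)

Crucially, the container starts with its own empty environment, so none of the host's keys are visible inside it even if the tool is fully compromised.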
However, their response doesn't remediate putting secrets into environment variables in the first place - that is apparently acceptable to them, and it raises a red flag for me.
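To make the env-var concern concrete, a minimal sketch (the variable name is fake): by default a child process inherits the parent's entire environment, so a linter tricked into executing attacker-controlled code can simply dump every credential the service holds. Passing an explicit allowlist avoids that:

    import os
    import subprocess

    os.environ["FAKE_API_KEY"] = "hunter2"  # stand-in for a real credential

    # Dangerous default: the child inherits the whole environment,
    # including the secret. Prints "hunter2".
    leak = "import os; print(os.environ.get('FAKE_API_KEY'))"
    subprocess.run(["python3", "-c", leak])

    # Safer: pass an explicit allowlist so the secret never reaches the
    # child. Prints "None".
    clean_env = {"PATH": os.environ["PATH"]}
    subprocess.run(["python3", "-c", leak], env=clean_env)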
They weren't published together. CodeRabbit managed to get the researchers to add its talking points in after the fact - check out the blue text on the right-hand side.
mkeeter|6 months ago
"No manual overrides, no exceptions."
"Our VDP isn't just a bug bounty—it's a security partnership"
oasisbob|6 months ago
Another:
> Security isn't just a checkbox for us; it's fundamental to our mission.
acaloiar|6 months ago
The usual "we take full responsibility" platitudes.
cube00|6 months ago
- within 8 months: published the details, but only after the researchers published them first.
cube00|6 months ago
https://web.archive.org/web/diff/20250819165333/202508192240...