top | item 44956073


curuinor|6 months ago

https://www.coderabbit.ai/blog/our-response-to-the-january-2...


mkeeter|6 months ago

The LLM tics are strong in this writeup:

"No manual overrides, no exceptions."

"Our VDP isn't just a bug bounty—it's a security partnership"

oasisbob|6 months ago

Wow, you hit a nerve with that one. There have been some quick edits on the page.

Another:

> Security isn't just a checkbox for us; it's fundamental to our mission.

teaearlgraycold|6 months ago

Absolutely. In my experience every AI startup is full of AI maximalists. They use AI for everything they can - in part because they believe in the hype, in part to keep up to date with model capabilities. They would absolutely go so far as to write such an important piece of text using an LLM.

coldpie|6 months ago

The NFT smell completely permeates the AI "industry." Can't wait for this bubble to pop.

acaloiar|6 months ago

For anyone following along in the comments here: CodeRabbit's CEO posted some of the details today, after this post hit HN.

The usual "we take full responsibility" platitudes.

noisy_boy|6 months ago

I would like to see a diff of the consequences of taking full vs half-hearted responsibility.

therealpygon|6 months ago

I’m sure an “intern” did it.

paulddraper|6 months ago

I would love to know the acceptable version.

frankfrank13|6 months ago

Not a single mention of env vars. Just shifting the blame to rubocop.

cube00|6 months ago

They seem to have left out a point in their "Our immediate response" section:

- within 8 months: published the details, after researchers published them first.

Jap2-0|6 months ago

Hmm, is it normal practice to rotate secrets before fixing the vulnerability?

neandrake|6 months ago

They first disabled Rubocop to prevent further exploitation, then rotated keys. If they had waited until the fix was deployed to rotate, the compromised keys would have remained valid for 9 more hours. According to their response, all other tools were already sandboxed.

However, their response doesn't remediate putting secrets into environment variables in the first place. That apparently remains acceptable to them, and it's a red flag for me.
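The env-var concern above is concrete: by default a child process inherits its parent's entire environment, so any unsandboxed tool spawned by a pipeline can simply read whatever keys live there. A minimal Python sketch of both the leak and the mitigation (the secret name and value are hypothetical, not from CodeRabbit's writeup):

```python
import os
import subprocess
import sys

# Hypothetical secret, stored the way the thread describes: in the
# parent process's environment.
os.environ["API_KEY"] = "hypothetical-secret"

child_code = "import os; print(os.environ.get('API_KEY', ''))"

# By default the child inherits the full environment, so a spawned
# linter (or any tool it executes) can read the key.
leaked = subprocess.run(
    [sys.executable, "-c", child_code],
    capture_output=True, text=True,
).stdout.strip()

# Passing an explicit, minimal environment withholds the secret
# from the child entirely.
scrubbed = subprocess.run(
    [sys.executable, "-c", child_code],
    capture_output=True, text=True,
    env={"PATH": os.environ.get("PATH", "")},
).stdout.strip()
```

Here `leaked` contains the secret while `scrubbed` is empty, which is why "sandboxed" tooling usually means, at minimum, an allow-listed environment rather than the parent's.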