> We wanted to make you aware that we are currently investigating a security incident, and that our investigation is ongoing. We will provide you updates about this incident, and our response, as they become available. At this point, we are confident that there are no unauthorized actors active in our systems; however, out of an abundance of caution, we want to ensure that all customers take certain preventative measures to protect your data as well.
Is anyone else a little annoyed by the messaging here? I read it as: "We think something bad happened to your ultra-secret data, but we don't know, so we're asking teams to spend potentially hours or days fixing things while we aren't really able to tell you if your stuff was actually compromised."
What I find more troubling is: if they don't quite know what happened, or aren't telling us, and we do the work to change everything, how do they know it won't just happen again in the next day or so, with people still accessing our systems? Where are the details?
> At this point, we are confident that there are no unauthorized actors active in our systems.
"Confident" isn't really a good enough word to use here, in my opinion. We've just blocked Circle CI from all our systems for now until we hear more, and will likely start moving to another build system.
I know accidents happen, but this is likely the beginning of the end for our team's relationship with Circle CI. Trust has been broken.
> so we're asking teams to spend potentially hours or days fixing things
At the risk of sounding pedantic: this is why you have everything as IaC. These kinds of changes should not cost days. It should take minutes, or an hour tops, to change all your keys. It should be trivial, for cases just like this.
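To make that concrete, rotation can be one short script. A rough sketch with boto3, where the IAM user name is hypothetical and the "push the new key into CI" step is whatever your IaC already manages:

    import boto3

    iam = boto3.client("iam")
    USER = "ci-deploy-user"  # hypothetical IAM user dedicated to CI

    # Create the replacement key first so there's no deploy downtime.
    old_keys = iam.list_access_keys(UserName=USER)["AccessKeyMetadata"]
    new_key = iam.create_access_key(UserName=USER)["AccessKey"]

    # ...push new_key["AccessKeyId"] / new_key["SecretAccessKey"] into
    # your CI secret store here (this part depends on your setup)...

    # Then revoke everything that existed before.
    for key in old_keys:
        iam.delete_access_key(UserName=USER, AccessKeyId=key["AccessKeyId"])

Run something like that across every CI user and "rotate everything" stops being a multi-day project.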
Great reminder for folks to switch any AWS actions you perform from CI/CD to use OIDC role assumption instead of static IAM user credentials. Then even if an attacker steals all your secrets, they can't do anything in your AWS account.
I recently did this for one of my GitHub repos which runs several test suites (cumulatively taking >1h). If your actions are slow, pay attention to the IAM role session duration. The maximum duration with role chaining is 1 hour.
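For anyone curious what the OIDC flow actually buys you: the runner trades a short-lived, per-job identity token for temporary AWS credentials via STS, so there's no static secret to steal in the first place. A rough sketch of the exchange (the role ARN and token variable below are made up; in GitHub Actions the aws-actions/configure-aws-credentials action does this for you):

    import os
    import boto3

    # The CI provider mints a short-lived OIDC token for each job.
    oidc_token = os.environ["CI_OIDC_TOKEN"]  # hypothetical variable name

    # No AWS credentials are needed to make this call; the token is the proof.
    creds = boto3.client("sts").assume_role_with_web_identity(
        RoleArn="arn:aws:iam::123456789012:role/ci-deploy",  # hypothetical
        RoleSessionName="ci-build",
        WebIdentityToken=oidc_token,
        DurationSeconds=3600,  # capped at 1h if you chain roles, as noted above
    )["Credentials"]
    # creds expire on their own; there's nothing here worth stealing for long.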
In the end your credentials need to outlive your CI/CD actions.
While OIDC is a good option, at StepSecurity we are building an open-source project that allows using your MFA tokens for deployments in CI/CD. So far, it is implemented for GitHub Actions - https://github.com/step-security/wait-for-secrets. In this method, you get a link in the build log, click the link, and can enter credentials at run time, which then get used in the next step in the pipeline for deployment. So there are no persistent secrets stored in the CI/CD pipeline and no need for managing/rotating separate deployment credentials.
> I've been investigating the use of a @ThinkstCanary AWS token that was improperly accessed on December 27th and suspected as much.
Not even a month, only 14 days between your linked post (Dec 7) and when secrets might have been leaked ("starting from December 21, 2022 through today, January 4, 2023").
What's tricky is this is not the first interesting recent post from Rob; he previously posted "An Update on CircleCI Reliability" (Dec '22) [1] and "CircleCI remains secure; be vigilant and aware of phishing attempts for your credentials" (Nov '22) [2]. Overall, CircleCI has had a rough run of it lately.
[1] https://circleci.com/blog/an-update-on-circleci-reliability/
[2] https://circleci.com/blog/circleci-security-update/
Someone please correct me if I'm wrong... but there was a kerfuffle in 2017 about Circle using third-party JS which could be an attack vector: https://news.ycombinator.com/item?id=15442636
To give credence to this, a gitlabber spoke up in that thread and said it was a serious thing, and that they deliberately had no third-party stuff on their site for that reason.
And I just logged into Circle today and used the Safari network inspector to see what JS it loads... and it's still plenty of third-party stuff that I can see:
* Amplitude
* Segment
* cci-growth-utils
* Statuspage
* DataDog
* HotJar
* Pusher
Not sure if this is an issue, but it doesn't make me comfortable.
@dang this is currently #198, off the front page, yet this is basically an emergency (literally every customer's secrets are exposed?)... either circleci has no more customers, or people are very calm about this...
we need to rotate:
- secrets in context environment variables
- secrets in project environment variables
- project deploy keys
- circleci api tokens
then we have to go back and look at all audit logs for... basically everything... and try to find something that looks weird. :/
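the enumeration half of this is at least scriptable against the v2 API. a rough sketch (the project slug is a placeholder, and this ignores pagination):

    import os
    import requests

    API = "https://circleci.com/api/v2"
    HEADERS = {"Circle-Token": os.environ["CIRCLE_TOKEN"]}
    SLUG = "gh/your-org/your-repo"  # hypothetical project slug

    # Project env var names (values are write-only, so names are all you get).
    for item in requests.get(f"{API}/project/{SLUG}/envvar",
                             headers=HEADERS).json()["items"]:
        print("env var:", item["name"])

    # Deploy/checkout keys attached to the project.
    for item in requests.get(f"{API}/project/{SLUG}/checkout-key",
                             headers=HEADERS).json()["items"]:
        print("deploy key:", item["type"], item["fingerprint"])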
this is such a clusterfuck... and the circleci api doesn't even let you automate most of the steps. and the ones that should work error with "internal server error".
of course, support is completely unresponsive
Fun night when you need to reroll your credentials... at least it's nice to have a list in the CircleCI UI, but it sucks when you need to make sure that you have all of the scopes available to you.
Had one legacy app still on CircleCI and figured we may as well move it over to GH Actions if we're already rotating tokens anyway. Really hard to recommend anything else these days.
People on my team are talking about it; I'd say this incident is the end of our trust in Circle CI going forward.
On the other hand, I'm becoming increasingly wary of putting all my eggs in the Microsoft basket if we move our source code, build system, and dev environments (Codespaces) to GitHub. Is it just me?
I legitimately don't understand how the ranking on HN works sometimes. How is it that there are older, less-commented posts ranking higher than this story? @dang?
edit: I sincerely think this should be bumped, given how many folks don't seem to be getting the news here in a timely fashion.
Our hodgepodge of microservices, developed over more than a decade, never got coordinated env variables, so now we've got to go through ~50 services & libraries, one by one, updating secrets. Yuck.
If you do your shit right, you can just dump most of your secrets into some Contexts (containers of env variables) and apply them. Then when this stuff rolls around, it's easy to update everything centrally; change the context & everyone sees it. We, alas, can't easily do that, since we have so many differing env var names. New Year, new fun!
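For what it's worth, the central update is scriptable too: the v2 API can overwrite a context variable in place (the context ID and variable name below are placeholders):

    import os
    import requests

    API = "https://circleci.com/api/v2"
    HEADERS = {"Circle-Token": os.environ["CIRCLE_TOKEN"]}
    CTX = "00000000-0000-0000-0000-000000000000"  # hypothetical context ID

    # Every project that uses this context sees the new value on its next run.
    resp = requests.put(
        f"{API}/context/{CTX}/environment-variable/AWS_SECRET_ACCESS_KEY",
        headers=HEADERS,
        json={"value": "the-rotated-secret"},
    )
    resp.raise_for_status()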
> Then when this stuff rolls around, it's easy to update everything centrally; change the context & everyone sees it.
But one still has to update their credentials on any downstream service, e.g. third-party API keys. In general, this is highly individual for each service, and can mostly only be done manually.
Such is the tragedy of for-profit software engineering. The trade-offs we see today lead to choices that tie our hands when facing trade-offs we didn't foresee. It's also why experience comes at such a premium: seeing further down the line, and knowing how to argue for it, prevents whole classes of problems.
I guess the answer is: why on earth am I still using Circle CI....
Thankfully all of my secrets/env variables are just dummy data for tests, and I'm already using OIDC.
https://github.com/rupert-madden-abbott/circleci-audit
It can:
* List env vars attached to your repos and contexts
* List SSH keys attached to your repos
* List which repos are configured with Jira (a secret that might need rotating)
Circle CI have also released something similar [0], linked near the bottom of their blog post [1].
[0]: https://github.com/CircleCI-Public/CircleCI-Env-Inspector
[1]: https://circleci.com/blog/january-4-2023-security-alert/
"Thank you for contacting CircleCI Support.
This does also apply to SSH Keys, as such we do recommend to rotate SSH Keys as well as to take extra caution.
If you have any other concerns please reach out."
I really don't understand why you use someone else's computer to compile and test your stuff.
When their computers are compromised, by internal or external crooks, the crooks have full access to your code, and - in some cases - your data. If they wanted, they could inject their own shit into your binaries, totally ruining your reputation.
As a bonus, you get to pay a premium!
I still compile and test my code on my own machines, in my own network. It's much faster than CircleCI, cheaper, and it's ∞ safer.
You do need to trust someone else’s computer if they’re going to build and run your code. I think Google is doing some good work here in helping champion things like Supply-chain Levels for Software Artifacts (SLSA) [0][1].
I’d argue your build/CI/CD system should never have access to production data, though it effectively does, indirectly, by being able to mutate your production environment (to deploy things).
Compiling and testing on your own machine isn't necessarily safer, though. Compare a typical CI/CD build instance, which is usually a VM or container that has been freshly booted (or is being reused from a recent build), with your own machine, which you likely also use to browse the internet and run many other apps.
The (ideal of the) former is a reproducible on-demand environment with a specific toolchain, while the latter is a bespoke assortment of different toolchains, software, and unfinished projects. Not to mention your machine will not be the same as someone else on your team.
I think as an industry we still have a lot of work to do around establishing trusted computing environments for CI/CD and enabling the level of auditability and observability to verify that.
[0] https://cloud.google.com/blog/products/application-developme...
[1] https://slsa.dev (edit: fixed this link)
There are also CI/CD providers that you can run on your own infrastructure.
I don't like to run my code on someone else's machine either, but having a separate build system allows you to run full, long-running tests while you continue with your work.
I can see why you would use GitHub Actions if you already host your code there, but I don't feel comfortable sharing my signing keys.
I want the following option in my account settings for all critical services:
[X] In case of a "security incident", lock down my account until I take action.
I understand why they can't do that by default, but it's crazy that every time this happens, I have to scramble to secure my assets, when in many cases I'd be perfectly fine with things just shutting down until I have time to take care of them.
Better yet, also give me a button that does this even when there's no official incident reported. That means disabling all access tokens, resetting the password, halting any scheduled jobs, and revoking access for any connected OAuth services until I manually re-enable them.
I don't think locking down the account will do anything. It sounds like secrets were already stolen. GitHub access tokens, etc. Locking the account won't unsteal that stuff.