wdella's comments

wdella | 2 years ago | on: Parsing the Postgres protocol – logging executed statements

> I was surprised while digging up the link that Gravitational is still releasing v13 and v14 updates under Apache 2, so maybe even Teleport will continue to have legs for those who cannot deploy AGPL stuff

Teleport puts out 3 major releases a year (every 4 months) and supports versions back to N-2. So v13 will be updated until May (v16's release) and v14 until September-ish (v17's release). Sticking with v14 and prior is not a viable long-run strategy for AGPL-averse companies... unless they want to fork.

After September 2024, the Teleport options that will get updates are:

1. Compile Teleport yourself under the terms of the AGPL

2. Use the pre-compiled Community Edition under its new commercial license (<100 employees and <$10MM in revenue)

3. Purchase a license (or Teleport Cloud tenant) under enterprise terms

The recent Teleport licensing changes are designed to:

1. Push business users in categories 1 and 2 into category 3, and

2. Preempt Teleport's value being resold by a big cloud player, as in the AWS Elasticsearch/OpenSearch kerfuffle a while back.

Source: I work at Teleport, and while I had no say in the license change, I did keep an ear out as I care about our open source stance. It is part of what brought me to the company.

wdella | 4 years ago | on: If you’re not using SSH certificates you’re doing SSH wrong (2019)

Disclaimer: I'm a Teleport employee, and participate in hiring for our SRE and tools folks.

> A rubric written in advance that would allow a single person to vet a work sample response mostly cures the problem you have right now. The red flag is the vote.

I argue the opposite: Not having multiple human opinions and a hiring discussion/vote/consensus is a red flag.

The one engineer vetting the submission may be reviewing it before lunch, or may have had a bad week, turning a hire into a no-hire. [1] Not a deal breaker in an iterated PR-review game, but rough in a single-round hiring game. Beyond that, multiple samples from a population give data closer to the truth than any single sample.

There is also a humanist element related to current employees: Giving peers a role and voice in hiring builds trust, camaraderie, and empathy for candidates. When a new hire lands, I want peers to be invested and excited to see them.

If you treat hiring as a mechanical process, you'll hire machines. Great software isn't built by machines... (yet)

[1] https://en.wikipedia.org/wiki/Hungry_judge_effect

wdella | 4 years ago | on: Ask HN: Favorite Podcast Episode of 2021?

For excellent reporting on a dystopian intersection of tech and politics:

Darknet Diaries Episode 100: NSO

On a different note: a whimsical, impossible, and hilarious improvised musical:

Mission to Zyxx Episode 507: A Little ‘Ditty about Jack and Shai’an

wdella | 4 years ago | on: Simple SSH Security

Tailscale and Teleport are similar, but operate at different levels of the network stack. Tailscale governs access and routing at L3 in the OSI model; see HashiCorp's Boundary or VPNs for alternatives. As a generalization, Teleport works at L7 -- doing auth and routing at the application-protocol (ssh, psql, k8s) level.

There are ups and downs to both: L3 is relatively technology agnostic (e.g. you don't need different support for connecting to a database vs ssh). L7 auth & routing gives greater protocol introspection, but means more work to support different use cases.

Depending on your scale and use case, the right answer may be both: do 2FA for both network access (are you allowed to send packets to the ip:port?) and application access (are the packets you send allowed to sign in to the database as an intern or an admin?). The most important part is to get a hardware token and SSO on the path to access.

Disclosure: I work for Teleport. I also think Tailscale is awesome and run it for my home lab.

wdella | 4 years ago | on: HashiCorp – S1

Everything HashiCorp has developed since Vagrant has been outstanding! Their open core model and this S1 are an inspiration. I wish all the best to Mitchell, Armon, and the team!

I have a couple of emails from Mitchell H circa 2014. He was doing front-line customer support for the Vagrant VMware Workstation provider -- I think it was just about their first paid offering. I was impressed that the head of the company would take time to help me troubleshoot my busted setup. Incredibly technical and incredibly hard working.

wdella | 4 years ago | on: Anatomy of a Cloud Infrastructure Attack via a Pull Request

> Did you also consider some form of out-of-band approval mechanism for production environment access?

No, not before your comment at least. Vendor CI tools (GitLab, Drone, etc.) often make this workflow difficult. Their typical model is long-lived static creds, with authn/authz gated around job kickoff. I'm not aware of any that would work with delegated/approved credentials, at least without writing a custom secrets plugin. If anyone knows of such capabilities, give me a holler.

Furthermore, there is still the risk of any service available to external contributors being compromised (as we saw in this vulnerability). I'd just as soon have "no prod secrets touch a system that does external CI" as a security invariant -- no matter how trustworthy that external CI system is.

In a bittersweet irony, out-of-band approvals are in our product:

https://goteleport.com/blog/workflow-api/

but we're not there with CI yet. :/ It would be fantastic if we could have short lived credentials issued only for the duration of the job, after approval (or better: after delegation) from a trusted party. Something like AWS's `CalledVia`.

wdella | 4 years ago | on: Anatomy of a Cloud Infrastructure Attack via a Pull Request

> How can they be avoided without stopping the use of CI/CD?

Use separate systems for CI and CD, and don't put sensitive "keys to the kingdom" credentials in CI. For example:

Put CI in GitHub Actions or GitLab CI without any credentials to write artifacts or knowledge of stage/prod deployments. Let the "interns" in the threat model use this.

Put production CD/release in Jenkins or a similar self-hosted system that isn't publicly accessible. Limit the folks who can trigger jobs in this system to a small group of trusted employees, and don't trigger runs on actions that lack U2F-backed auth (e.g. require a manual click through the web UI behind SSO, or deploy only from protected branches that accept approved PRs only -- no direct git pushes).
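The trusted-trigger gating above could be sketched as a guard that runs first in the deploy job. The branch names, actor allowlist, and BUILD_* environment variables below are all hypothetical, not any specific CI vendor's interface.

```python
import os

# Assumptions: only these protected branches may deploy, and only this
# small trusted release group may trigger a deploy.
ALLOWED_BRANCHES = {"main", "release"}
ALLOWED_ACTORS = {"alice", "bob"}

def may_deploy(branch: str, actor: str) -> bool:
    """Return True only for a protected branch and a trusted actor."""
    return branch in ALLOWED_BRANCHES and actor in ALLOWED_ACTORS

def guard_or_abort() -> None:
    """Call first thing in the deploy job; raise if the trigger is untrusted."""
    branch = os.environ.get("BUILD_BRANCH", "")  # hypothetical CI variables
    actor = os.environ.get("BUILD_ACTOR", "")
    if not may_deploy(branch, actor):
        raise PermissionError(
            f"refusing to deploy: branch={branch!r} actor={actor!r}"
        )
```

The point is that the deploy path fails closed: an unknown branch or an untrusted actor (the "intern" in the threat model) never reaches the release steps.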

> I assume it is the same with GitLab?

Yes. While GitLab does offer some secret and variable masking controls, the Travis disclosure earlier this week, where all secrets were exposed to pull request CI, shows you probably don't want to bet your business on those controls. (Acknowledging GitLab != Travis.)

See https://travis-ci.community/t/security-bulletin/12081

wdella | 4 years ago | on: Anatomy of a Cloud Infrastructure Attack via a Pull Request

> the CI system basically had admin access over our infrastructure. It has to in order to do infrastructure as code.

> Public CIs are fine though. Ones that literally only do code builds, tests etc

I couldn't agree more.

Even internally, the security and authorization needs of deployment/release are wildly higher than those of running an ephemeral build and test. "CI/CD" needs to be unbundled for the sake of security, so that CI doesn't have admin access over infrastructure. Only a much more limited CD has that access.

In the case of open core products that use public-facing CI, I'm inclined to put the average employee's CI on the public system -- for transparency, but also to make sure external contributors don't become second-class citizens stuck with an irregular workflow/toolset. Maintain a separate internal release system limited to trusted employees. Principle of least privilege, and all that. :)
