TomFrost's comments

TomFrost | 6 years ago | on: Visual Studio online available for public preview

I love the Remote SSH extension for VSCode (save for the frustrating workarounds required for ssh-agent forwarding -- an absolutely necessary feature) and expected VSO to be much more streamlined. But I find myself hitting early walls:

- No documentation on how to clone a private github repo, gitlab repo, etc.

- Cloning a private repo on the command line forfeits the ability to bootstrap your VSO instance using the in-repo config, which kills a huge benefit of this product

- No documentation on forwarding ssh-agent or injecting RSA keys of any kind

There are some other needs addressed in other comment threads (particularly registering a remote headless box as a VSO machine) but the above are instant showstoppers. Perhaps this works with private Azure DevOps repos because of the login integration? I'd be willing to wager that the majority of folks interested in this are on other repo hosts, though.

TomFrost | 9 years ago | on: Semantic UI

Semantic recently adopted my team's React adaptation as their official React port. It's lighter weight, eliminates jQuery, and all components are standard React components that can be extended or dropped in as-is.

https://react.semantic-ui.com/

TomFrost | 9 years ago | on: Blox – Open Source Tools for Amazon ECS

ECS is free in that you pay only for the EC2 nodes running your containers -- there's no need to host ECS or do scheduling on your own hardware to use it. It's also Availability Zone-aware right out of the box, making sure the distribution of container instances is optimized for durability. Finally, it's fully managed. No one needs to maintain or upgrade your ECS implementation.

Granted, there's a lot of advantage to building on top of an infrastructure that can be installed on any hardware from any provider. However, we're not talking about rewriting your applications if you need to move away from ECS; it's all still the same containers. Going from ECS to Mesos or Kubernetes when needed is a matter of writing new config files.

It's a very attractive proposition for small teams on AWS who are trying to spend minimal time on ops.

TomFrost | 10 years ago | on: What If the Future of Technology Is in Your Ear?

The problem I've always had with audio interfaces is that input is not private. Requests on public transportation are heard by many. Requests at home turn into a conversation with the roommate, spouse, or children. Requests walking down the street make others question the mental wellbeing of the person talking to themselves.

I'm reminded of the Ender's Game sequels in which the protagonist wears a small earpiece with an AI named Jane. He communicates with Jane by "subvocalizing" -- mentally saying the words, physically barely uttering a sound. The AI understands.

A few years ago there was a TED talk (forgive me; I'm unable to find the link) in which a technology was demoed to do something similar. Sensors placed around the throat, combined with EEG sensors around the temple, allowed a man to transmit text to a computer by following all the mental and muscle processes of speaking, stopping short of moving his lips in an obvious fashion or making sounds. The sensors allowed the computer to translate their input into actual words.

Perfecting and miniaturizing that technology, then combining it with an in-ear AI, would be a game changer.

TomFrost | 10 years ago | on: Cheap Docker images with Nix

If the goal is solely Docker images in the 20-40MB range, this can be achieved without additional tooling. After switching our development and deployment flow to Docker, my team quickly tired of the 200-400MB images that seemed to be accepted as commonplace. We started basing our containers on alpine (essentially, busybox with a package manager) or alpine derivatives, and dropped into that target size immediately. Spinning up 8-10 microservices locally for a full frontend development stack is a shockingly better experience when that involves a 200MB download rather than a 2GB one.

This is in no way a negative commentary on Nix; it looks like an interesting solution to a well-known problem.

TomFrost | 10 years ago | on: Lock Up Your Customer Accounts, Give Away the Key

Is there a basis for such an assumption?

For an organization requiring the highest available security, the ideal solution would be a privately operated hardware security module kept off the DMZ. However, that -- as well as the idea of self-hosting (and maintaining) the entire dev, test, deploy, and prod stack suggested by another commenter -- isn't always within reach of a small, agile team looking to focus on their core competencies.

One could argue that it's possible for Amazon to have falsified the description of KMS as an HSM, or the certifications[0] they were granted for it, but I'd retort that an organization in a position to seriously question those claims shouldn't be using a remote solution anyway.

So, making the more rational assumption that such claims by Amazon can be trusted, their offering is quite secure: the HSM does not allow the export of any key, and exposes only the ability to load encrypted data into the device and have it produce the decrypted result over a secure channel, and vice versa.

[0]: https://aws.amazon.com/kms/details/#compliance

TomFrost | 10 years ago | on: Lock Up Your Customer Accounts, Give Away the Key

This is certainly the case; however, for an organization implementing best practices for code deployment, such a change would have to be peer-reviewed in the best case, or pushed directly to master with an obvious paper trail in the worst. It wasn't my intention to imply that employing well-designed envelope encryption would shut the door on any possibility of an engineer gaining access to secrets; clearly there's a lot more involved in making that happen. However, this goes a long way to allowing the source of any leaks to be traced should they occur.

TomFrost | 10 years ago | on: Lock Up Your Customer Accounts, Give Away the Key

KMS is not a centralized secret database -- it's a hosted Hardware Security Module. There is no way to store your service's secrets in it for later retrieval, unlike the solutions listed in the article. I suppose an argument could be made that it still provides a single point of failure; however, the risk with KMS, backed by the SLA it provides, is far lower than what one might encounter by maintaining their own server cluster.

TomFrost | 10 years ago | on: Ask HN: Who is hiring? (November 2015)

TechnologyAdvice | Nashville, TN | Full time | REMOTE / ONSITE

About Us: We have a full JavaScript stack. React / Flux on the front-end, NodeJS-centric SOA on the back. We're independently profitable, solving awesome problems, and open-sourcing as many of those solutions as we can. Check those out on our GitHub (https://github.com/TechnologyAdvice) -- DevLab just hit the HN front page yesterday. We're growing by leaps and bounds and are passionate about cutting hours of research out of our friends' business technology decisions. We have a mature and productive dev culture, and get to spend most of our time working on entirely new, modern codebases, with super smart people on a team that's 50% coast-to-coast remote.

Senior Software Engineer (Front-end):

We're looking for engineers who are experienced with React and have a deep understanding of JavaScript. Perhaps you're doing React in your off hours, or are building React on top of an existing tech stack. At TA, you'll be able to work in React / Flux / Webpack full time. Our code base is only a few months old, so there's no need to worry about drowning in "mission-critical" legacy projects.

Apply Online: https://technologyadvice.applytojob.com/apply/DAZ3IB/Senior-...

Mid-to-Senior Software Engineers (Back-end):

We love passionate engineers. Applicants here should have a firm grasp of Node and JavaScript in general, but we write microservices and use the best tool for the job, so knowing more outside of that ecosystem is heavily valued. We develop, test, and deploy in Docker, utilizing AWS for everything from hosting to message queues to our data warehouse. Our new microservices-based platform is still in its infancy, so we'd love to hear from architecturally-minded folks who love fresh design challenges.

Apply Online: https://technologyadvice.applytojob.com/apply/kPsEfJ/Senior-...

Product Owner/Project Manager:

We're growing like crazy and have hit the point where we need someone to help us manage a sane agile workflow. At the same time, we're launching a whole new industry-first project, and we're looking for a project manager who also has experience receiving feedback from customers and users, both internal and external, and creating and prioritizing user stories for the team. Technical experience is appreciated -- we'd love a PM who understands microservices, messaging queues, and enough technobabble to glean statuses from engineer chatter without a lot of pausing for explanation.

Apply Online: https://technologyadvice.applytojob.com/apply/QEmxsl/Product...

TomFrost | 10 years ago | on: Show HN: DevLab – Docker Containerization for Local Development

Fair point; I was speaking more toward the case of needing a rebuild any time your container gets changed, due to the change-on-run ethos of DevLab versus the change-on-build ethos of Compose.

But where cleanup really comes into play is when you have multiple tasks. You can specify multiple command paths in Compose, but each one will be a newly built image. With DevLab, you could `lab install` a node project, followed by a `lab lint`, or a `lab test`. You could `lab build` to compile your app at any time. And when you do this, whether it's the first time or the twenty-first time, overhead versus what you'd experience locally is just a second -- and the only image you have is your environment container. No build process, ever.

TomFrost | 10 years ago | on: Show HN: DevLab – Docker Containerization for Local Development

Member of the DevLab team here :)

Compose is actually part of the reason we wrote this tool. We tried so hard to make it make sense for a TDD workflow, but it was always cumbersome.

Compose wants you to build your application into a container, and build and run that container every time you have a task to run. This takes time and a fair amount of cleanup, especially when you want a clean environment to run your tests that doesn't persist to the next run. DevLab just wants you to specify the environment to plug your application into, and doesn't build a container at all.

The result is:

- No manual cleanup

- No pile of images or processes that stack up

- Your project doesn't have to be a Docker project. It doesn't need a Dockerfile. You can use this for something you plan to clone on an EC2 node and run from an upstart script.

- No Vagrant/Ansible/Chef/Puppet, server config, anything. You pick out an image that matches your environment's needs (node:4.2, wordpress, go, etc., hub.docker.com is a great place to start) and DevLab plugs your app into it.

- More development-oriented features outside of the pure Docker ecosystem that Compose stays confined to. For example, soon, Mac users running docker-machine will get to enjoy ports bound to localhost much like Linux docker users do [1].

To get this functionality outside of Compose, we had a monolithic makefile to maintain all of this for us. It wasn't DRY and it wasn't smooth. I hope you enjoy DevLab!

[1] https://github.com/TechnologyAdvice/DevLab/pull/10

TomFrost | 10 years ago | on: Show HN: Sempl – Stupid Simple Bash Templating

Just today I made a suggestion in a Github issue[0] to use sed for a similar use case. There, it was to generate an Amazon ECS task definition file from a CI tool's environment variables. Using this, populating the list of environment variables -- looping through the output of `env` and including anything that matches a given prefix -- would be a breeze.
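As a rough sketch of that loop -- assuming a `${VAR}` placeholder convention, an illustrative `APP_` prefix, made-up file names, and GNU sed's in-place flag:

```shell
#!/bin/sh
# Rough sketch of the approach described above. Assumptions (all illustrative):
# placeholders look like ${APP_FOO}, the variables share an APP_ prefix, and
# GNU sed's -i flag is available. Values containing sed metacharacters
# (| & \) would need escaping in a real script.

# A stand-in template -- in the real use case this would be the ECS task
# definition file checked into the repo.
printf '{"image": "${APP_IMAGE}", "memory": "${APP_MEMORY}"}\n' > task-def.tpl.json

APP_IMAGE=myapp:1.2.3
APP_MEMORY=256
export APP_IMAGE APP_MEMORY

cp task-def.tpl.json task-def.json

# Loop over every exported variable matching the prefix and substitute its
# placeholder in the rendered file.
env | grep '^APP_' | while IFS='=' read -r key value; do
  sed -i "s|\${$key}|$value|g" task-def.json
done
```

Anything matching the prefix gets picked up automatically, so adding a new variable to the CI config requires no change to the render step.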

Thanks for the excellent tool!

[0]: https://github.com/1science/wercker-aws-ecs/issues/4#issueco...

TomFrost | 10 years ago | on: Ask HN: How do you version control your microservices?

I'd love to hear back if you find something like this. Up until Vers was released, I lazily boilerplated it into whichever of my services needed the functionality. Even so, the pattern has been such a boon to our development cycles that it seems strange others haven't independently come to the same approach.

TomFrost | 10 years ago | on: Ask HN: How do you version control your microservices?

You answered that one yourself :) Deprecation means to publicly notify that some API endpoint has hit end-of-life, and that a better alternative is available. If you completely rewrite a service, it's your responsibility to implement the same interface that you had before on top of it. Then you deprecate it and also publish your new API or data schema. Once you get around to migrating the rest of your application's services away from the deprecated endpoints, the next version of the microservice in question can remove that old code entirely.

Imagine Twitter or AWS completely rewriting their backend -- if they were to announce to the public that at a specific time, their old API URLs would 404 and the new ones would go live, it would be a wreck. They'd support the old API through deprecated methods and tell users they have X months to migrate away, if they remove that layer at all. Stress-free SOA must employ that same level of discipline in order to stay stress-free.

And, functionally, the much easier alternative here isn't to re-implement your old API on top of your ground-up rewritten shiny new service; it would be to reduce your old service to an API shell and proxy any requests to the new service in the new format. Far less work that way. Use more traditional API versioning for the much more common updates. Unless you're rewriting your services every other week, in which case you have an entirely different problem ;-)

TomFrost | 10 years ago | on: Ask HN: How do you version control your microservices?

You're 100% correct -- interoperability is the key, but you ensure it by making sure everything at the interface layer (whether that's a REST API, a polled queue, a notification stream, etc.) is versioned, and the other microservices using that interface include the version they're targeting.

If you include the version at the data level, any time it gets passed into a queue or message bus or REST endpoint, your microservice can seamlessly upgrade it to the latest version, which all of its own code has already been updated to use. If a response is required back to the service that originated the request, use your same versioning package (Vers if you go with that) to downgrade it back down to the version the external service is expecting.

If your interface layer is more complex -- with responses that change independently of the data -- that calls for a versioned API: either throwing /v1/* or /v2/* into your URLs, or accepting some header that declares the version. But even in this case, you can drastically simplify changes to the model itself by implementing model versioning behind the scenes.

TomFrost | 10 years ago | on: Ask HN: How do you version control your microservices?

From your description, it sounds like your pain points don't come from versioning your microservice code; they come from versioning the data models that those microservices either 'own' or pass around to each other. While your approach of organizing your microservices as a collection of submodules is novel, it also defeats the purpose of microservices -- you should be able to maintain and deploy them independently without having to be concerned with interoperability.

While it's possible to alleviate some of your pains with versioned APIs that track changes to your data models, you still conflict with data already stored in schemaless DBs when those models update.

In a Node or frontend JS stack, I solve that problem with Vers [1]. In any other stack, the idea is fairly simple to replicate: version your models _inside_ of your code by writing a short diff between it and the previous version every time it changes. Any time you pull data from a DB or accept it via an API, just slip in a call to update it to the latest version. Now your microservice only has to be concerned with the most up-to-date version of this data, and your API endpoints can use the same methods to downgrade any results back to whatever version that endpoint is using. Frankly, that makes versioning your APIs far simpler, as you move the versioning to the model layer (where all data manipulation really should be) and only need to version the actual API when you change how external services need to interact with it.

And now your other microservices can update to the new schema at their leisure. No more dependency chain driven by your models.
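A hypothetical sketch of that pattern (not Vers's actual API -- the transforms and field names here are made up for illustration): each version bump registers a small up/down pair, and everything entering or leaving the service passes through them.

```javascript
// Hypothetical sketch of the pattern described above -- NOT Vers's actual API.
// Each version bump registers a small up/down transform pair; data is upgraded
// to the latest version on the way in and downgraded on the way out.

const transforms = {
  // v1 -> v2: split a single "name" field into firstName/lastName
  2: {
    up: ({ name, ...rest }) => {
      const [firstName, ...parts] = name.split(' ');
      return { ...rest, firstName, lastName: parts.join(' ') };
    },
    down: ({ firstName, lastName, ...rest }) => ({
      ...rest,
      name: `${firstName} ${lastName}`.trim(),
    }),
  },
};

const LATEST = 2;

// Upgrade any record pulled from a DB or accepted via an API.
function toLatest(record) {
  let { version = 1, ...data } = record;
  while (version < LATEST) {
    version += 1;
    data = transforms[version].up(data);
  }
  return { version, ...data };
}

// Downgrade a result for a caller still pinned to an older version.
function toVersion(record, target) {
  let { version = 1, ...data } = record;
  while (version > target) {
    data = transforms[version].down(data);
    version -= 1;
  }
  return { version, ...data };
}
```

The service logic then only ever touches the LATEST shape, and each API endpoint calls the downgrade step with whatever version it advertises.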

[1] https://github.com/TechnologyAdvice/Vers
