
You might not need Kubernetes

307 points | tannhaeuser | 7 years ago | blog.jessfraz.com | reply

308 comments

[+] combatentropy|7 years ago|reply
Some day I would like a powwow with all you hackers about whether 99% of apps need more than a $5 droplet from Digital Ocean, set up the old-fashioned way, LAMP --- though feel free to switch out the letters: BSD instead of Linux, Nginx instead of Apache, PostgreSQL instead of MySQL, Ruby or Python instead of PHP.

I manage dozens of apps for thousands of users. The apps are all on one server, its load average around 0.1. I know, it isn't web-scale. Okay, how about Hacker News? It runs on one server. Moore's Law rendered most of our impressive workloads to a golf ball in a football field, years ago.

I understand these companies needing many, many servers: Google, Facebook, Uber, and medium companies like Basecamp. But to the rest I want to ask, what's the load average on the Kubernetes cluster for your Web 2.0 app? If it's high, is it because you are getting 100,000 requests per second, or is it the frameworks you cargo-culted in? What would the load average be if you just wrote a LAMP app?

EDIT: Okay, a floating IP and two servers.

[+] wpietri|7 years ago|reply
As somebody who has his own colocated server (and has since Bubble 1.0), I definitely agree that the old-fashioned way still works just fine.

On the other hand, I've been building a home Kubernetes cluster to check out the new hotness. And although I don't think Kubernetes provides huge benefits to small-scale operators, I would still probably recommend that newbs look at some container orchestration approach instead of investing in learning old-school techniques.

The problem for me with the old big-server-many-apps approach is the way it becomes hard to manage. 5 years on, I know that I did a bunch of things for a bunch of reasons, but I don't really remember what or why. It mixes intention with execution in a way that gets muddled over time. Moving to a new server or OS is more archaeology than engineering.

The rise of virtual servers and tools like Chef and Puppet provided some ways to manage that complexity. But "virtual server" is like "horseless carriage". The term itself indicates that some transition is happening, but that we don't really understand it yet.

I believe containers are at least the next step in that direction. Done well, I think containers are a much cleaner way of separating intent from implementation than older approaches. Something like Kubernetes strongly encourages patterns that make scaling easier, sure. But even if the scaling never happens, it makes people better prepared for operational issues that certainly will happen. Migrations, upgrades, hardware failures, transfers of control.

[+] meritt|7 years ago|reply
As someone who runs a very successful data business on a simple stack (php, cron, redis, mariadb), I definitely agree. We've avoided the latest trends/tools and just keep humming along while outperforming and outdelivering our competitors.

We're also revenue-funded so no outside VC pushing us to be flashy, but I will definitely admit it makes hiring difficult. Candidates see our stack as boring and bland. We make up for that in comp, but for a lot of people that's not enough.

If you want to run a reliable, simple, and profitable business, keep your tech stack equally simple. If you want to appeal to VCs and have an easy time recruiting, you need to be cargo cult, even if it's not technically necessary.

[+] dcosson|7 years ago|reply
I really think that running a LAMP server for the average beginning developer these days would be just as complicated, maybe more complicated, than running a single deployment on Google Kubernetes Engine. You have to know about package managers and init systems and apache/nginx config files and keep track of security updates for your stack and rotate the logs so the hard drive doesn't fill up. If you already know how to do this stuff in your sleep because you've done it for years, then yeah, don't fix what isn't broken. But if you're starting with no background, there's nothing inherently wrong with using a more advanced tool if that tool has good resources to get you started easily.

Just because there's more complexity in the entirety of the stack when running an orchestration system doesn't necessarily mean more complexity for the end user.
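For what it's worth, the "single deployment" case really is small. A hedged sketch of the one manifest a beginner would need on a managed cluster; the app name "myapp" and the image path are hypothetical placeholders:

```shell
# Minimal Kubernetes Deployment manifest for one app on a managed
# cluster (e.g. GKE). Names and image are made up for illustration.
cat > deployment.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: registry.example.com/myapp:1.0  # placeholder image
        ports:
        - containerPort: 8080
EOF
# With a cluster configured: kubectl apply -f deployment.yaml
grep -q 'kind: Deployment' deployment.yaml && echo "manifest written"
```

No logrotate, no init scripts; the platform handles restarts and log collection for you.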

Side note - couldn't you make a similar argument about any kind of further abstraction? "Question for all you hackers out there - do you really need HTTP requests with their complicated headers and status codes and keepalive timeouts? I run several apps just sending plain text over TCP sockets and it works fine."

[+] gambler|7 years ago|reply
I realize there is a need for multi-server applications with automated deployment and scaling. However, the accidental complexity of serverless setups and container orchestration tools is just off the charts. When reading these articles I get roughly the same feeling I got when reading J2EE articles back when J2EE was "the future" and "the only way to build scalable infrastructure".
[+] manigandham|7 years ago|reply
It's not just about scaling. That seems to be the only thing people talk about because it sounds sexy but the reality is about operations.

Kubernetes makes deployments, rolling upgrades, monitoring, load balancing, logging, restarts, and other ops very easy. It can be as simple as a 1-line command to run a container or several YAML files to run complex applications and even databases. Once you become familiar with the options, you tend to think of running all software that way and it becomes really easy to deploy and test anything.

So yes, for personal projects a single server with SSH/Docker is fine, but any business can save time and IT overhead with Kubernetes. Considering how easy the clouds have made it to spin up a cluster, it's a great trade-off for most companies.
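The "1-line command" ops story above can be sketched roughly like this; the deployment name "hello" and the image tags are hypothetical, and a configured cluster is assumed (so these are not runnable standalone):

```shell
# Hedged sketch of day-to-day ops as one-liners (cluster assumed).
kubectl create deployment hello --image=nginx:1.25   # run a container
kubectl scale deployment/hello --replicas=3          # resource management
kubectl set image deployment/hello nginx=nginx:1.26  # rolling upgrade
kubectl rollout status deployment/hello              # watch it roll out
kubectl rollout undo deployment/hello                # one-line rollback
kubectl logs deployment/hello                        # centralized logs
```

Each of these would otherwise be a hand-rolled script or runbook step on a plain server.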

[+] SEJeff|7 years ago|reply
FreeBSD, Apache, Python - the FAP stack

Ok. I'll see myself out.

[+] solatic|7 years ago|reply
Stop thinking of Kubernetes as an easy way to scale ops for a single app, and start thinking of Kubernetes as an easy way to scale ops across a non-trivial number of apps.

If you're a startup with a monolith then sure, you probably don't need Kubernetes. If you're not using Heroku/GAE/etc. then you generate a machine image from your app, deploy it behind a load balancer (start with two servers), and use some managed database for the backend. That's pretty simple. You can scale development without scaling the size of your ops team (1-2 people, only need two if you're trying to avoid bus factor 1), at least until you need to outscale a monolith.

If you need to run a bunch of applications, made by a bunch of different teams (let alone when they don't work for you - i.e. an off-the-shelf product from a vendor), then using a managed Kubernetes provider makes this relatively simple without needing more people. If you try to do that without containers and orchestration, and want to keep a rapid pace of deployment, and not hire tons more people, you will go crazy.

[+] tptacek|7 years ago|reply
The reliability and performance story for Hacker News is not great, and that's despite the fact that its design has lots of simplifying assumptions. I wouldn't call HN a success story for the "just drop it on a server" approach.

Of course, HN is a kind of art project, and its scaling and performance goals are not typical of most applications.

[+] grey-area|7 years ago|reply
I think you're right - at least 90% of servers on the web would be fine with a couple of instances at most, backed by a decent db. It can get more complex depending on your resilience requirements, but it really doesn't have to.

I guess I run a CPG stack - CoreOS, PostgreSQL, Go. I don't bother with containers as Go produces one binary which can be run under systemd. It is far simpler than Kubernetes, and the only real reason for other servers is redundancy. The only bit of complexity is that I usually run the db servers as a separate instance or use a managed service. You can go a long way with very boring tech. I've run a little HN clone written in Go on one $5 DigitalOcean droplet for years - it handles moderate traffic spikes with little effort.
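The "one binary under systemd" setup really is just a unit file. A sketch, with a hypothetical service name and binary path:

```shell
# Minimal systemd unit for a single Go binary; names/paths are made up.
cat > hnclone.service <<'EOF'
[Unit]
Description=Small Go web app
After=network.target

[Service]
ExecStart=/usr/local/bin/hnclone
Restart=on-failure
User=www-data
Environment=PORT=8080

[Install]
WantedBy=multi-user.target
EOF
# Then, on the server:
#   sudo cp hnclone.service /etc/systemd/system/
#   sudo systemctl enable --now hnclone
grep -q 'Restart=on-failure' hnclone.service && echo "unit written"
```

`Restart=on-failure` gives you the main thing people actually want from an orchestrator at this scale: the process comes back up on its own.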

[+] gizmodo59|7 years ago|reply
I think of it this way: 99% of apps are developed by developers who are not in the top 1%. Cheap access to computing power has led to a growth of developers beyond the highly skilled ones who can milk everything available out of a less powerful computer. I'd like to believe we are at an Electron phase of development, where we just want to ship as much as possible easily without worrying about hiring great talent (and yeah, I hate that it's inefficient in terms of memory usage). This has led to the explosion of so many frameworks that do a lot of things easily but require such complex devops pipelines.
[+] pythonaut_16|7 years ago|reply
I personally use Docker combined with a $5 droplet on Digital Ocean. This makes it easy to spin up multiple applications and sites without worrying about conflicting dependencies, and docker-compose gives me most of the benefits of orchestration tools (e.g. Kubernetes) that actually matter for my small scale usage.

Also, Traefik makes a nice load balancer for this usage.
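A sketch of that compose-plus-Traefik setup; the app image and hostname are placeholders, and this assumes Traefik v2's Docker provider:

```shell
# Hedged sketch: Traefik in front of one app via docker-compose.
# "myapp" and "app.example.com" are hypothetical.
cat > docker-compose.yml <<'EOF'
version: "3"
services:
  traefik:
    image: traefik:v2.10
    command: --providers.docker
    ports:
      - "80:80"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
  app:
    image: myapp:latest
    labels:
      - "traefik.http.routers.app.rule=Host(`app.example.com`)"
EOF
# With Docker installed: docker compose up -d
grep -q 'traefik' docker-compose.yml && echo "compose file written"
```

Adding a second app is just another service block with its own `Host(...)` label; Traefik picks it up from the Docker socket without a config reload.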

[+] citilife|7 years ago|reply
For reference, this runs on a $5 AWS instance:

https://hnprofile.com/

The database is $600 per month, but that data runs five different websites (and it's a few hundred GB of data).

EDIT: for those mentioning the 502 gateway error, it does auto-scale - now it's costing more per month, at least temporarily.

[+] tshannon|7 years ago|reply
Does Hacker News really run on one server? What if the server goes down?

I've always thought high availability was the more important reason for multiple servers, rather than performance.

Even if you have only two paying customers, they are probably paying for the right to hit your website / service 24/7.

[+] ownagefool|7 years ago|reply
You probably don't need kubernetes.

Let's be fair, it offers:

> Orchestration of block storage

> Resource Management

> Rolling Deploys

> Cloud provider agnostic APIs*

If you don't need any of these things, your stack fits on a single server or two, and you aren't already familiar with it, I'm not sure why you'd bother other than out of interest.

That said, there's a world of companies that aren't FAANG, ub3r and Basecamp, and many of those paying reasonable sums of money have more complicated and resource-intensive requirements that don't fit on a single server.

Government departments, retail companies and banks all likely have a number of different software development projects where giving a number of developers API access to a platform that offers the above advantages is, in my opinion, a good thing. Once you get to FAANG level, who knows whether kube itself will actually help or hinder at that scale.

* Personally I'd rather use the kube APIs than talking to any of the cloud providers directly. I imagine that's somewhat personal preference and somewhat because I've been able to easily run it in my basement.

*2 Namespaces also make creating more environments for CI/CD easier, so as soon as you have a team of developers and you want to do that sort of thing, it also makes sense. Not so much for a lone developer and his server.
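The namespace-per-environment idea is just a few lines of manifest per environment. A sketch with made-up app and branch names:

```shell
# Hedged sketch: one namespace per CI/CD environment.
# "myapp", "dev", "staging", "pr-123" are hypothetical names.
for env in dev staging pr-123; do
  cat > "ns-$env.yaml" <<EOF
apiVersion: v1
kind: Namespace
metadata:
  name: myapp-$env
EOF
done
# With a cluster: kubectl apply -f ns-pr-123.yaml, then deploy the same
# app manifests into it with: kubectl -n myapp-pr-123 apply -f app/
ls ns-*.yaml
```

The same deployment manifests work in every namespace, which is what makes spinning up a throwaway environment per branch cheap.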

[+] movedx|7 years ago|reply
Spot on, friend.

So recently I started writing a simple web application for my family. They send emails to each other with gift wish lists in them and we all have to juggle those emails around. I figured some products would exist already to solve this problem, but I wanted to make my own.

When it came time to make it I thought: "This has to be a REST API with a JS front end" and then further down the line, "Man I should use Flutter and only make it a mobile app!" I had other thoughts about making it Serverless and doing $thisCoolThing and using $thatNewTech. In the end nothing got done at all.

Fast forward to today and it's a monolith Go application that renders Bootstrap 4 templates server-side, serves some static CSS directly, sits on a single server (DigitalOcean) and uses a single PostgreSQL instance (on the same server). The Bootstrap 4 CSS and JS come from their CDN.

I made the technology simpler and the job got done. It's an MVP with basic database backups in place, using Docker to deploy the app. It just works.

Lessons for me from this:

* Server-side template rendering is perfectly fine and actually easier, frankly

* JS can still be used client-side to improve the experience without replacing the above or making the entire rendering process client-side

* Although Go compiles to a single static binary I still need other assets, so it went into a Docker container for the added security benefits, not to mention portability

* Serverless is nice, but unless it's replaced the above day-to-day, there's always a steep learning curve around something you haven't done with it yet, but need

* Picking the latest and greatest tech tends to stagnate progress or halt it entirely, in most cases

* A software MVP needs an MVP infrastructure to go with it

Just my thoughts.

[+] wilsonnb3|7 years ago|reply
I did a programming project for a job interview recently at a company called Willowtree that makes iOS and Android apps for other companies.

It was a pretty simple project, basically wrap a rest API around some JSON data provided to you.

I ended up deploying mine to Google Cloud Platform onto a VM running Ubuntu and Apache, and they seemed rather concerned that I took that approach instead of leveraging some kind of containerization or PaaS approach.

My API definitely had problems, as I don’t have much back end experience, but I found it strange that they would look down upon deploying to a cloud VM. It doesn’t seem like it was that long ago that a VM hosted on AWS or Digital Ocean was the latest and greatest and it seemed like a logical choice for something that would only ever be used by about five people.

[+] StavrosK|7 years ago|reply
They do not. I run tens of low-traffic projects very successfully on a $10/mo Hetzner server with Dokku. Dokku is amazing and so is Hetzner; I don't know why people always go for the high-scalability, expensive options just to end up with 0 utilization.
[+] z3t4|7 years ago|reply
On a service that has been up for almost 20 years, same code base, thousands of daily users: the first server was constantly at 100% CPU. The second server averaged around 10% CPU with lots of spikes. The third server now averages below 1% CPU usage. Next time I need to upgrade I will probably get a "NUC", or a smartphone, or something even smaller. But it's not only CPUs that have gotten better. The first server also maxed out the bandwidth! And now, although with fewer users, the bandwidth usage is less than 1%. Started out on 0.5Mbit DSL, and it's now on Gbit fiber.
[+] derefr|7 years ago|reply
> If it's high, is it because you are getting 100,000 requests per second, or is it the frameworks you cargo-culted in?

Mine's high because our business model involves blockchain stuff, and

1. blockchain nodes are CPU+memory+disk hogs;

2. ETL pipelines that feed historical data in from blockchains produce billions of events in their catch-up phase. (And we're constantly re-running the catch-up phase as we change parameters.)

Sadly, we need several fast servers even without any traffic :/

[+] johngalt|7 years ago|reply
Initially I was skeptical as well. One server in a colocation will handle enough traffic until you can afford to hire all the people to make you web scale. But then I started playing with the various tools and seeing how people used them, and it totally changed my view.

The key point is that many of the new technologies in operations are about simplicity rather than speed. Standing up a stack in AWS can be flipped on and off like a light switch, and all the configuration steps can be much more easily automated/shared/updated/documented etc...

It's not about any of these technologies being more efficient; it is about spending more in order to abstract away many of the headaches that slow down development.

Certainly there are some people who are prematurely planning for a deluge of traffic and spending waayyy too many engineering resources on a 'web scale' stack, but that's not the majority.

[+] devman0|7 years ago|reply
I think for smaller use cases it's more about high availability than load balancing.
[+] Aeolun|7 years ago|reply
The load average on my kubernetes cluster is actually around 3-4 without it even doing anything.

There’s a bunch of apps running in there, but nothing that would justify the load.

It’s also generating roughly 20 log lines per second.

I’m really not sure what it’s doing...

[+] aeriklawson|7 years ago|reply
> I manage dozens of apps for thousands of users. The apps are all on one server, its load average around 0.1.

If you're at this scale you can do whatever you want. Most of the stuff I've made has been with simple building blocks like you've described, maybe thrown in with some caches and a load balancer.

Although I've worked with other teams who really did have the high-scale request flows that require you to think about a different architecture. Even so, K8s is not the end game, and you can make something work even by just extending the LAMP stack.

[+] staticassertion|7 years ago|reply
I think that kub and, generally, cloud providers have allowed for more ambitious projects to be generally accessible.

My side project is intended to handle > 1 billion events per day, with fairly low latency. That's well over 10k events per second.

I doubt I could do this easily on a single box, and I wouldn't really want to try. Why constrain myself that way? Is it worth just doing this the standard LAMP way?

More and more problems are available to be solved using commodity systems, so we have more and more people solving those problems with these new systems.

[+] navinsylvester|7 years ago|reply
Hacker News running on a single server sets a very bad precedent. I wish the people running the show would address it quickly, since it's being used as a bad example.

When building a business you should take care to have an environment that is resilient. I agree it's not for everyone, but it's quite essential when you have a huge customer base and care about unpleasant experiences. If someone is running an important business and leaving it to chance - it's just pure arrogance or gross incompetence.

[+] mdekkers|7 years ago|reply
Not everyone uses K8s for webapps. You would be surprised at the level of enterprise penetration of K8s. Those enterprises do boring stuff like data warehousing etc.
[+] atleta|7 years ago|reply
Yeah, you probably don't. And not only that, it probably makes your life harder. I interviewed for a tech lead position at a company working with freelancers, and I'm pretty sure the reason they ended up rejecting me was that I mentioned to the technical interviewer that I think containers, container infrastructures (like Kubernetes) and even cloud infrastructure are being overused, without much thought given to them, as if they came free (in the sense of setup and operating complexity). Too bad the interviewer started rambling about how he was into Kubernetes those days :). (Actually, this was the most technical part of the interview.)

I'm mostly working with startups and small companies creating MVPs, and that was their client base too. Most of the time these are just CRUD apps, and most of the time these apps don't see heavy usage for years (maybe never). Developers love technology, love to play with new(ish) things, so quite a few of us will prefer using whatever is new and hip for the next project. Right now it's containers and microservices. And it feels safe, because done right, these will give you scalability. And once you convince the client/boss that you need it, it's unlikely that anyone will come back in a year and say: hey, it seems we'll never need this thing that made development $X more expensive. (Partly because they won't know the amount.) So politically it is actually the safe choice. But professionally and cost-wise it's usually worse. It's a lot better to transition after seeing the need (preferably from projected growth numbers). At least you minimize the expected value of the costs (because YAGNI).

[+] djsumdog|7 years ago|reply
I once got an interview with a company in the container space because one of their execs read an article I published about the trouble with container systems[1]. (Really good talk/interview, but I ended up not moving forward because I didn't want to move back to the west coast.)

I've been in smaller shops that wasted a lot of time on K8s stuff and fell behind on their timelines. If you want to run k8s, DC/OS, etc. you need a lot of ramp-up time and at least 4 to 8 dedicated staff members. I've talked to other startups that preferred running Nomad instead due to the setup complexity.

I doubt k8s will go the way of OpenStack, since it does actually work, but I do think we'll see it limited to big-end enterprise systems while smaller startups push forward with other, easier-to-build-up clustering technologies.

[1]: https://penguindreams.org/blog/my-love-hate-relationship-wit...

[+] loftyal|7 years ago|reply
Same with me; no interviewer has ever given me an actual real rebuttal, other than "but it scales!!".

I love software, but I really get tired of the blind cargo-cult culture of most of the industry.

[+] movedx|7 years ago|reply
> I'm pretty sure the reason they ended up rejecting me was that I mentioned to the technical interviewer that I think containers, container infrastructures (like Kubernetes) and even cloud infrastructure are being overused, without much thought given to them, as if they came free (in the sense of setup and operating complexity).

I'd like to understand your thoughts more on why you believe cloud infrastructure is being "overused/used without giving too much thought ..." and more specifically, what the other options are.

I've come from a background of racking physical servers, plugging them into a network and having a PXE process install the OS for the client. It took a day to provision a single server in an enterprise hosting environment. It was mostly automated.

Speaking strictly "in the sense of setup and operating complexity", I'd love to know your thoughts on how dedicated physical servers in a local DC can outperform cloud-based infrastructure in terms of (vast) availability and per-second billing. I don't think they can, and I'd even be willing to pay for us to do an experiment: you call your local DC and have 30 high-end servers provisioned faster than I can using the command line.

You also put "... containers, container infrastructures (like Kubernetes) ..." under the same banner. I'd like to address this also, but to be fair and honest, I agree that Kubernetes is heavily overused, so I won't address it here. I'm mainly interested in how you consider containers to be overused, given they're simple as a concept and equally easy to get in place.

Put another way: in what way have you seen containers being abused? I want to avoid doing that myself and would love your thoughts on the matter.

To continue, if we take a rack full of high-end physical, dedicated servers and we want to deploy a Ruby on Rails application (a very powerful, common software stack), how would you sell me a bare-metal, direct-to-OS deployment of that Rails application versus using containers to deploy the same application?

Two of the biggest benefits of containers that make me put the effort into deploying them are portability and security. It's one "box" you have to logistically ship to a system, and one command to open it and have its services supplied to the network. If the box is hacked due to an exploit, the hacker is trapped in the box and isn't roaming around the host server's file system looking for credit card details.

There's a good reason Discourse, for example, only supports Docker as a means of deploying their application: it makes it easier.

> Now it's containers and microservices.

Yeah, microservices have been blown way out of proportion in our industry. They're amazing and great when you're Netflix, Facebook, Google, or Amazon, but there are only four companies that are that big and I just named them.

[+] maxxxxx|7 years ago|reply
"Anyways, the point I am trying to make is you should use whatever is the easiest thing for your use case and not just what is popular on the internet. "

This is good advice in theory but in the real employment world you are killing your own career that way. At some point you get marked as "dinosaur" that hasn't "kept up". Much better to jump on the latest tech trend.

[+] elsonrodriguez|7 years ago|reply
Most organizations don't need to manage servers or Ansible playbooks either.

The reason Kubernetes became so popular is because the API was largely application-centric, as opposed to server-centric. Instead of conflating the patching and configuration of ssh and kernels with the configuration of an application, you had clearly separate objects meant to solve different application needs.

The problem with Kubernetes is that to gain that API you need to deploy and manage etcd. To bring your API objects to life you need the rest of the control plane, and to let your objects grow into your application you need worker nodes and a good grasp of networking.

This is a huge burden in order to gain access to K8s's simple semantics.

GKE helps greatly, but the cluster semantics still come to the forefront whenever there's a problem, or upgrade, or deprecation, or credential rotation.

Of course there's always a time for worrying about those semantics. Specialized workloads might have some crazy requirements that nothing off the shelf will run. However, I think the mass market is ready for a K8s implementation that just takes Deployments and Services and hides the rest from you.

In lieu of that, people will just continue adoption of App Engine and other highly-managed platforms, because while you might not need Kubernetes, you almost certainly don't need to go back to Ansible.

[+] Sahbak|7 years ago|reply
I honestly don't understand the amount of negativity towards Docker and Kubernetes sometimes.

All major cloud providers have a managed k8s service, so you don't have/need to learn much about the underlying system. You can spend a few days, at most, learning about Docker, k8s configuration files and Helm, and you're pretty much set for simple workloads (and even Helm might be overkill).

Afterwards, deploying, testing, and reproducing things is, in my opinion, much better than managing your applications on random servers.

Might I be wasting some money on a k8 cluster? Maybe. Do I believe the benefits outweigh the money? Absolutely.

[+] chess44|7 years ago|reply
I am interested in people's opinion on the "break even point" between using Kubernetes and not using Kubernetes. Let's pretend that the only options are Kubernetes and something substantially less powerful.

What is the simplest/easiest personal project where using Kubernetes might be justified?

I am a junior software engineer trying to figure out how to contextualize all of these container/container management systems.

[+] freehunter|7 years ago|reply
Maybe someone here can help me figure out what I need, since the world of containers is growing faster than I can understand.

I have one code base that I run on multiple servers/containers independently of each other. Think Wordpress style. I used to run it on Heroku but I switched to Dokku because it's substantially cheaper and I don't mind taking care of the infrastructure. I like Dokku but I do worry about being tied to just one server and not being able to horizontally scale or easily load balance between multiple servers/regions. Ideally what I'd like is Dokku with horizontal scaling built in. I've seen Deis and Flynn but they seem less active/mature even than Dokku, which is saying something.

Is Kubernetes the right answer here or should I stick with Dokku and forget about horizontal scaling?

[+] bg4|7 years ago|reply
You probably don't need microservices either - it's insane how much money and time is being thrown away to these industrial strength hammers by companies that simply don't need it.
[+] garysahota93|7 years ago|reply
So true! I think the Rick & Morty reference alone speaks volumes for everything. haha
[+] frostyj|7 years ago|reply
Depends on the scale. If I only have 10 containers to manage I'd throw them on an m4 and let it be. The benefit of using k8s kicks in when your use case gets complicated.
[+] jammygit|7 years ago|reply
It's a bit funny to ask this question in this thread, but here we go:

What are the important topics & technologies to learn in this area? My uni experience didn't really include things like distributed systems or containerization.

Ideally fundamentals that won't be invalidated in 5 years when 'the new thing' becomes something else.

(Love good book recommendations on any subject a new grad should learn, not just this topic)

[+] geo_mer|7 years ago|reply
Kubernetes may be overkill for small projects, and it's actually hard to set up for a single-machine cluster, but the idea of container orchestrators (k8s, Docker Swarm, Nomad, etc.) is extremely useful. I understand that some abuse the word "scale", but for me container orchestration is far bigger than just scaling. These features include:

1. rolling updates

2. decoupling configs and secrets from code and mounting/changing config files easily

3. robust and predictable testing/production environments

4. centralized logging

Also, the goal of microservices isn't really just "scaling", in my opinion; there are other important advantages even if you have no intention to scale. Aspects like modularity, separation of concerns, robustness and lowering technical debt are just as important whether your app serves 1 or 10,000 users at the same time. Of course you can pull your Python app from your repo or even rsync it (just like you can develop any software without using git or any revision control), and just executing it might work very well, but sooner or later you are going to regret it if you're a business.
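Point 2 above (decoupling config and secrets from code) is just a small manifest. A sketch, with made-up names and values:

```shell
# Hedged sketch of config decoupled from code via a ConfigMap;
# "app-config" and the keys are hypothetical.
cat > app-config.yaml <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  APP_MODE: production
  FEATURE_FLAGS: "beta-search=off"
EOF
# A Deployment then picks these up as env vars without a rebuild, e.g.:
#   envFrom:
#   - configMapRef:
#       name: app-config
grep -q 'APP_MODE' app-config.yaml && echo "config written"
```

Changing a value means editing the ConfigMap and rolling the pods, not cutting a new image.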

[+] sebringj|7 years ago|reply
It was interesting to read about workers and WebAssembly together within V8, as this scenario could bypass the need for that complexity and memory overhead while combining different programming languages on the server side. Not that it could replace Kubernetes, as that is an amazing technology, but if you're in a scenario where your tech could fit within workers, it could be interesting. https://blog.cloudflare.com/introducing-cloudflare-workers/. I was amazed to think WebAssembly would be used for that purpose, but I guess it does make sense after reading about how it's put together.
[+] vemv|7 years ago|reply
What bothers me about k8s is that it promises a lot ("15 years of experience of running production workloads at Google" at your fingertips! yay!) but it's in fact still a young, ever-changing solution.

Even developing an app locally with minikube is a PITA for a lot of reasons. From Helm to Telepresence to Skaffold, every tool out there is just unpolished and overambitious.

Don't want to imagine how those problems might amplify in production.

[+] barbecue_sauce|7 years ago|reply
Sometimes choice of technology acts as a signifier. If you're building a startup, and you want to communicate to investors that "hey, we may not have the users yet, but we're built to scale!", Kubernetes and microservice architectures and sophisticated ETL pipelines convey that image better than saying "we've built for the minimal load that we're currently experiencing with a LAMP-based monolith.". The reality may be that your product's consumption patterns will never necessitate having anything more than that, even at a large scale. Your product may be great, you might easily be able to scale manually, but someone who holds the purse strings who knows just enough to be dangerous, might decide that if you're not using the "hot" technologies, you must not know what you're doing.
[+] beiller|7 years ago|reply
My experience is that Kubernetes is too complex for the average functioning product. At our company, everyone is obsessed with it because it promises no cloud vendor lock-in! But at what cost? The complexity. Also, the direction cloud vendors are going, in my opinion, is more hardware-centric (e.g. TPUs). How will you avoid cloud lock-in when only Azure offers image-tagging machine learning as a service? How will Kubernetes solve that? I believe a balance between a small bit of lock-in while retaining environment freedom (free programming languages like Python, JavaScript...) is the sweet spot for cloud, e.g. PaaS like App Engine or Azure App Service or Beanstalk.
[+] rcarmo|7 years ago|reply
If you’re looking for a simple way to manage web apps on Linux, check out https://github.com/rcarmo/piku

I wrote it as a sort of micro-Heroku/Dokku replacement to run on small ARM boards, and ended up deploying a few apps with it on Intel boxes (I also use Docker Compose, but for simple stuff it’s overkill).

It uses uWSGI and is heavily Python-oriented, but I’ve run other stuff on it (it’s basically a supervisor with automatic reverse proxy setup and a Procfile approach to specifying what to run - git push to it and you’re in business).

[+] mfer|7 years ago|reply
If you're going to use Kubernetes it's good to look at your business case or other need. Don't use a hammer if you need to unscrew something.

Kubernetes has its place... I recently wrote a post on that... https://codeengineered.com/blog/2018/kubernetes-biz-case/

But, there are many times you just don't need it. Like, for my personal sites... just isn't a need there.

[+] martinlaz|7 years ago|reply
Yeah, but... Nobody ever got fired for using K8s.
[+] bashmonkey|7 years ago|reply
I do my level best to stay away from containers. I don't think most people even need them; it's a fad of sorts. I tend to stick with the tried and true and not follow trends, cloud or otherwise. Nothing worse than having your data on someone else's HW and losing connectivity through no fault of your own.

Years ago, I worked for UUNET in Reston/Ashburn, VA, and built web servers and the attendant HW/SW that ran them (usually Sun Solaris/Apache/Oracle). We always had a "back net" into every device. Now? One NIC, one way in. I always like having more than one way to get to a device, be it local or remote. With the cloud, you tend to give this up. I recommend VMs over the cloud, where your data sits on someone else's HW in someone else's data centre.

Nothing worse than going to a tech conference with your boss, and him being the "deer in the headlights" as it were with regard to buying into what's being sold by the vendors. Last time we went, it took me the entire 3-hour car ride home to convince him we didn't need half of what was on offer. I tend to be old school and prefer to make do with Linux/FreeBSD VMs and whatever software is needed to make something work. I like being in control of my own architecture.
[+] segmondy|7 years ago|reply
If you don't have a microservices/SOA architecture then you don't need k8s. Most people don't need skyscrapers, and yet there are a lot of them in the world. Just because you don't need one and don't have one doesn't mean that others don't.