
Show HN: HyScale – An abstraction framework over Kubernetes

99 points | hyscale | 5 years ago | github.com | reply

67 comments

[+] kube-system|5 years ago|reply
Wrappers that aim to simplify another technology always make me nervous, especially with a rapidly evolving project.

Yes, this makes it easier to get started -- but when something goes wrong, now you have to hunt down bugs in two layers of software. And, since you've intentionally isolated yourself from the underlying layer, you have less experience with it!

This is why I like Helm. If you write your charts well, you can write your k8s yaml once, and do the things you need to do on a daily basis by adjusting your chart values.
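As a sketch of that pattern (the chart layout and value names here are hypothetical), the templates are written once and day-to-day changes happen only in the values file:

```yaml
# values.yaml -- the knobs you adjust day to day (names are illustrative)
replicaCount: 3
image:
  repository: myapp
  tag: "1.4.2"

# templates/deployment.yaml (excerpt) -- written once, driven by values:
#   spec:
#     replicas: {{ .Values.replicaCount }}
#     ...
#         image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
```

A release upgrade then becomes `helm upgrade myapp ./chart --set image.tag=1.4.3` rather than editing raw manifests.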

[+] hyscale|5 years ago|reply
Your concern is justified. Any abstraction must deal with minimizing leakages. This is why we have started addressing deployment error troubleshooting with some level of diagnosis so that the tool can provide error info in terms of higher-level abstraction.

Helm does require understanding of all the underlying low-level objects defined in Kubernetes. HyScale hopes to provide higher-level entities to deal with, as well as providing ways to do higher-level ops & deployment introspection. We believe it should be possible to satisfy the needs of a reasonably large number (>80%) of apps.

[+] hyscale|5 years ago|reply
Kubernetes complexities are often acknowledged. In our team, we experienced this first-hand while migrating a large PaaS application onto Kubernetes about two years ago. This prompted us to seek out a way to simplify and speed up app deployments to Kubernetes. This pursuit eventually led us to believe that Kubernetes complexities deserve an abstraction, in quite the same way as jQuery over JavaScript, or Spring over Servlets. The HyScale project was born out of this, to create an app-centric abstraction that is closer to the everyday language of developers and devops.

With HyScale, a simple & short service-descriptor allows developers to easily deploy their apps to K8s without having to write or maintain a voluminous amount of K8s manifest YAMLs. This enables self-service deployments as well as development on K8s with minimal IT/DevOps intervention.

We would love to get feedback, ideas and contributions.

[+] soamv|5 years ago|reply
Congrats on this launch, it's great to see serious efforts at Kubernetes simplification and I hope you succeed! K8s bills itself as a "platform for platforms", and such projects are a real test of that idea.

Questions:

1. How do you deal with the mutability of K8s resources? Do you assume people won't change the underlying resources that you generate, or do you keep k8s controllers running to ensure there's no deviation?

2. How understandable is your generated output to humans? Do users have a way to go "backwards" through your abstraction? (Since your tool lives in an ecosystem with many other kubernetes tools, your users will sometimes end up having to deal with the generated output, since other tools operate at the K8s level such as prometheus, log aggregators etc)

3. Do you interoperate with K8s-level resources well? Do _all_ my services need to be in this abstraction for this to work well? e.g. Can my hyscale yaml reference a non-hyscale service for any reason? Or are they essentially two separate worlds?

[+] miccah|5 years ago|reply
This looks like a really nice and polished app, thanks for releasing it!

Is there an option to only generate the YAML instead of generating and deploying altogether? This would allow me to create a lot of boilerplate while still having the option to fine-tune the configurations if needed.

I also didn't see any information on rollbacks.

[+] devit|5 years ago|reply
Is there an easy way to customize the generated Kubernetes manifests if the abstraction turns out to be insufficient?
[+] tombh|5 years ago|reply
Is there any overlap with Hyscale and Terraform's Kubernetes provider or Helm's Charts?
[+] greentimer|5 years ago|reply
Kubernetes was released only 6 years ago, so I'd imagine there is still a lot of legitimate evolution left in the ecosystem. I'd have to compliment you for choosing a project like this rather than something that had no chance of working because the ecosystems are completely set, like a new programming language.

I believe there will be a distribution challenge for you in getting people to use this software. You can't pay for an advertising campaign. Maybe the most you can do is post on HN, but after that, people will forget about it. The fact that once it's used in a GitHub project others will be forced to use it provides some hope.

You say you want to be like jQuery over JavaScript. It may be worth it to you to figure out how jQuery solved its distribution challenge. Just as nobody needs to use jQuery, nobody will need to use your software, and there will be a strong temptation for people to bypass it and just use raw Kubernetes.

The complexity of modern software projects like Kubernetes is amazing, and I'd agree they face challenges in creating a simple interface that everyone will like while still getting the software to work consistently. According to the principle of radical skepticism, it's amazing that anything so complex works at all.

[+] aantix|5 years ago|reply
OP - it can be done.

Reach out to the CTOs and VPs of Engineering that list Kubernetes as one of their core technologies. They're most apt to choose K8s for their own team.

Ask them if they've had any issues with Kubernetes, specifically mis-configuration or slow turn around times for configuration changes.

Explain your framework in one or two lines. Pick out one or two _specific_, common problems with K8s and ask them "Are you experiencing X? How about Y?" Talk to them like you already know and feel their pain. Because you do (you wouldn't have created this framework otherwise).

You'll learn a lot. And maybe get adoption and maybe a consulting gig out of it. :)

Use the advanced search on Linkedin to find these people. Make sure your Linkedin title has something to do with being a Kubernetes expert.

If you're in a big city, find those clients that are local first, as you can visit them in person (that goes a long way).

e.g. Senior DevOps Consultant, Specializing in Kubernetes/HyScale.

Here's the people search you need. Use Hunter.io to find their emails.

https://www.linkedin.com/search/results/people/?facetGeoUrn=...

Client outreach can be successful if it's specific and serving a genuine need.

[+] meowface|5 years ago|reply
I disagree, I think there'll eventually be huge demand for these kinds of frameworks and wrappers compared to "plain old Kubernetes", much like how a high percentage of developers are hungry for something to use on top of plain JavaScript. I could even see the demand eventually surpassing demand for Kubernetes itself. Kubernetes offers a ton of modern advantages - even for pretty small projects - at the cost of a huge amount of complexity and required learning.

If you can get the advantages plus something simpler than Kubernetes or homebrewed solutions with Docker, then I suspect a gigantic market will form. I think it's just a question of if it'll end up being this particular implementation, or an alternative one, or a full-on standalone competitor to Kubernetes designed for simplicity. We're still in the very early days.

[+] freedomben|5 years ago|reply
Disclaimer: I work at Red Hat with OpenShift

This is something I think OpenShift really adds value over "raw Kubernetes." With OpenShift you can treat it a bit like a flexible Heroku with `oc new-app` which can use s2i or your provided Dockerfile and will generate the foundation of what you need. You can then iterate on it if you need something beyond the standard setup.

By the way OKD 4 (the freely available upstream version of OpenShift) is now generally available: https://www.openshift.com/blog/okd4-is-now-generally-availab...

[+] gattacamovie|5 years ago|reply
OKD 4 came ~1 year after OCP 4. Can it be trusted not to have such delays in the future? What about security fixes and features? Will they always lag behind as an incentive to get the paid version? In K8s, even if you maintain it yourself, there is a huge community and you can always get your fixes. How does the OKD community compare? OKD was on K8s 1.11 until a few weeks back, 8 releases behind! Imagine the security issues OKD had for such a long period... (PS: even CentOS seems to be lagging behind badly; CentOS 7.7 took many months after RHEL 7.7.)

As for OCP/OKD tools like s2i, Ansible replacing Helm, routes, deployment configs, etc.: they never took off; the community did not agree. Those who did not take care to stay away from stuff that is not pure K8s suffer from being disconnected from the rest of the industry and have to invest to redo everything... not to mention the impossibility of switching to cloud-provider solutions like EKS, AKS, PKS, etc.

[+] whalesalad|5 years ago|reply
IMHO, if you need a tool like this, you are usually better off building it yourself in-house. You will inevitably end up fighting all of the leaky abstractions when something like this does not support your use cases.
[+] q3k|5 years ago|reply
This, a hundred times. Do yourself a favour and use Dhall/Cue/Jsonnet to develop some abstractions that fit your workload and environment. There is not much value proposition in a tool like this when a slightly lower-level, more generic tool (a configuration-centric but full-fledged programming language) can accomplish the same goal in a more flexible and more powerful fashion, leaving you room for evolution and unforeseen structural changes.

The idea of tools mandating what 'environments' are is absurd, as it's pretty much always different for everyone (and that's good!).

[+] ForHackernews|5 years ago|reply
I strongly disagree. One of the primary* values of Kubernetes is that it commoditizes ops. Most companies are not special. Most applications are not special.

You probably need some APIs, a database or two, domain names and TLS certificates, and maybe a caching layer and object storage. There's zero reason why an abstraction layer can't be flexible enough to handle the overwhelming majority of line-of-business apps out there.

If you're going to be home-rolling your own janky custom deploy solution, you might as well save yourselves the headaches and not bother using Kubernetes either.

* - I might argue the only real value for most non-Google-scale organisations.

[+] sandGorgon|5 years ago|reply
Are you using the Docker Compose standard? Your specification looks very familiar.

It would be a killer app if you are.

https://www.compose-spec.io/
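For comparison, a minimal service in the Compose spec looks like this:

```yaml
# docker-compose.yml -- a minimal Compose-spec service definition
services:
  web:
    image: nginx:1.19
    ports:
      - "8080:80"   # host:container port mapping
    environment:
      - LOG_LEVEL=info
```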

[+] hyscale|5 years ago|reply
The HyScale specification looks familiar to most developers/devops who have been doing development or deployment for some years, as we wanted to create a spec that is intuitively understood and application-centric, while growing to support 80% of the use-cases out there. The spec is also meant to support K8s-native elements such as sidecars, ingress, etc. We are also looking at the compose-spec to see if there is some convergence in the near/late future.

The HyScale spec schema itself is available at the companion repo here: https://github.com/hyscale/hspec

[+] GordonS|5 years ago|reply
If so, I'll be jumping right on this - for me, Docker Compose (and Swarm) configs are easy to write, read and maintain. Compared to the swathes of config required for k8s, it's beautiful.
[+] verdverm|5 years ago|reply
The examples look like they are not. Yet another YAML-based spec.
[+] hajhatten|5 years ago|reply
Just what people running Kubernetes need: more YAML!
[+] hyscale|5 years ago|reply
:-) Actually, less YAML. Typically, without an abstraction like HyScale, for a micro-service you might end up having to write/maintain a couple of hundred lines of K8s YAML, including things like sidecars, ingress, PVCs, config-maps, etc., whereas the same service can be described in a HyScale spec using barely 20-30 lines of YAML consisting of higher-level entities/language that is intuitive to most developers. You also get simpler troubleshooting of deployment errors and worry less about having to deal with backward compatibility with each new K8s version.
[+] alexfromapex|5 years ago|reply
This is a good idea but with simplicity being the value proposition it looks like you have spec files that are roughly the same length as a yaml file I could deploy with k8s. I think it would need to be much much simpler to be more valuable, just my take.
[+] hyscale|5 years ago|reply
Typically without an abstraction like HyScale, for a micro-service you might end up having to write / maintain a couple of hundred lines of K8s yaml including things like sidecars, ingress, PVCs, config-maps, etc. and linking up these yamls with the right selector-labels, etc.

Whereas the same service can be described in hyscale spec using a few dozen lines. But it's not just about the number of lines, the HyScale hspec is defined in terms of higher-level app-centric entities that are intuitive to most developers.

You also get simpler troubleshooting of deployment errors and worry less about having to maintain compatibility of your K8s manifest yamls with each new K8s version.
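To make the comparison concrete, here is a hypothetical sketch of such a short service descriptor. Every field name below is illustrative only (the authoritative schema lives in the companion hspec repo), but it conveys the idea of one app-centric file standing in for the Deployment, Service, PVC, and ConfigMap manifests:

```yaml
# Hypothetical service descriptor -- field names are illustrative,
# not taken from the actual hspec schema
name: checkout
image:
  registry: registry.example.com   # assumed private registry
  name: shop/checkout
  tag: "1.2.0"
replicas: 2
ports:
  - port: 8080/tcp
    healthCheck:
      httpPath: /healthz
volumes:
  - name: session-cache            # would expand to a PVC + mount
    path: /var/cache/sessions
    size: 1Gi
props:                             # would expand to a ConfigMap
  LOG_LEVEL: info
external: true                     # would expose the service (LB/ingress)
```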

[+] koeng|5 years ago|reply
I really like git push workflows (Heroku / Dokku). I would use https://github.com/dokku/dokku-scheduler-kubernetes , but it doesn't support Dockerfiles, and I need Dockerfiles for a few of the applications that I want to run.

It would be great if there were some docs on how to integrate HyScale with, for example, a GitHub Action to enable deployment on a push to master. It wouldn't be too difficult for me to set up, but having a "right way" to do it written by the maintainers would give me much more confidence.
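A sketch of what such a workflow could look like; note the install URL and the hyscale CLI flags here are assumptions for illustration, not taken from the project docs:

```yaml
# .github/workflows/deploy.yml -- hypothetical deploy-on-push workflow;
# the install script URL and hyscale CLI invocation are assumed
name: deploy
on:
  push:
    branches: [master]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Install hyscale CLI (assumed install path)
        run: curl -sSL https://get.hyscale.io | bash
      - name: Deploy service to cluster
        env:
          KUBECONFIG_DATA: ${{ secrets.KUBECONFIG }}   # cluster creds stored as a secret
        run: |
          mkdir -p ~/.kube && echo "$KUBECONFIG_DATA" > ~/.kube/config
          hyscale deploy service -f myservice.hspec -n dev -a myapp
```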

[+] josegonzalez|5 years ago|reply
Dokku doesn't have dockerfile support mostly because it hardcodes port 5000 (what heroku does). It should be possible to add that support and then we'd have full dockerfile support. Just not something that has been worked on yet as no one has complained :)

Disclosure: I am the Dokku maintainer.

[+] hyscale|5 years ago|reply
It's a good suggestion. We'll aim to add this to our documentation soon.
[+] dastx|5 years ago|reply
Maybe just me, but I've never thought of kubernetes as being complex for users. To me the complex bit of K8s isn't deploying to it, but administration of it. Figuring out the architecture, debugging some silly thing not working etc.
[+] lazyant|5 years ago|reply
Absolutely agree. There's a ton of literature on how to get started etc but not a lot on "day 2", the "now what" experience in production. (Friend working on it: https://day2kube.com/ )
[+] afterwalk|5 years ago|reply
A lot of attempts at "simplifying k8s" seem to be writing some type of "coffee-script" that generate the underlying yaml. What I (think I) want is just some good UI built for essential workflows.
[+] meowface|5 years ago|reply
There are a lot of advantages to defining these things in some form of text-based configuration files. You get to track everything in source control, you don't need to worry about upgrades somehow breaking the UI or database, you can quickly and easily check any deployment config anywhere if needed, you can diff changes in a sensible way.

Adding a web UI option in addition isn't a bad idea at all, but I like the canonical form being config files, be it YAML or HCL or some CoffeeScript-type thing.

[+] bassman9000|5 years ago|reply
Spring over servlets

This is not actually true. Servlets were never complicated. The documentation, especially the Javadoc, is terrific. The I/O and APIs are pretty simple. The problem is that there's a lot of boilerplate, and you ended up with a lot of copy-paste. But repetitive doesn't mean complicated. Spring solved many of these issues by providing the boilerplate.

k8s is actually complicated. Both conceptually, and in implementation.

I don't think it's an apt comparison.

[+] k__|5 years ago|reply
Awesome!

I did a deep dive into K8s in the last two weeks (usually doing serverless) and I think it really needs projects like HyScale to step up its game.

Even with EKS+Fargate, which remove master and worker provisioning/maintenance, K8s is still orders of magnitude behind serverless solutions in terms of dev experience.

[+] justsomeuser|5 years ago|reply
So this is two components:

- 1. Compiles high level config into lower level Kubernetes config.

- 2. Sends/runs that config on a Kubernetes cluster.

[+] vii|5 years ago|reply
If it were just that, then overall the project would be a dangerous trap, as there is a big cost of added complexity from the new high-level configuration language with its limitations and own terminology (volumes, etc).

Adding a wrapper, and then eventually forcing you to learn all the abstractions that leak through, creates an attractive nuisance. The Hyscale project is at least trying to overcome this problem. Not sure how well they succeed.

Along with the high-level config, it attempts to help untangle common K8s debugging steps, which normally require using multiple tools to determine what caused an error condition like CrashLoopBackOff - see https://github.com/hyscale/hyscale/wiki/App-centric-Troubles...

Looking at the code they painfully enumerate different Docker and K8s options in Java, so it will be expensive to maintain and keep up to date - the host company Primati may have the resources to do this and that's exciting!

[+] exdsq|5 years ago|reply
Can’t wait for the abstraction over HyScale. It’s abstractions all the way down.
[+] ramon|5 years ago|reply
What about mesh setups? Does HyScale support meshing?
[+] hyscale|5 years ago|reply
You can deploy sidecar agents using HyScale for your mesh. We're looking at further abstracting out things like VirtualService, and we're also watching SMI-related developments. If there are any specific mesh use-cases that you'd like to see abstracted out from a service deployment perspective, do let us know on our GitHub page.