titpetric's comments

titpetric | 9 years ago | on: Running 1000 containers in Docker Swarm

Big respect for your achievements. I guess at some point it just becomes a question of "where do I get 1000 nodes" vs. "how do I run 1000 containers". Or rather, of justifying that amount of hardware - I mean, the one dream job I would probably want is getting paid to cut out all the hardware use while keeping reliability/availability/functionality. Like these guys who cut their AWS bill by $1mil/year in about 3 months - https://segment.com/blog/the-million-dollar-eng-problem/. The thing is that I'm not exactly sure where I'd fit in more - running this thing, or just fixing it for somebody else. I definitely know that I'm mostly dealing with pets and not cattle :)

titpetric | 9 years ago | on: Running 1000 containers in Docker Swarm

Actually, neither should be a problem if you have enough redundancy :) The hardest part of rolling your own infrastructure is testing mission-critical systems (like databases) to be both fault tolerant and reliable. Lots of great projects out there address some of these issues, but it takes a lot of attention to detail (transaction rates, ACID compliance, replication, etc.) to get it right. This is why a lot of developers who aren't in unicorn startups take advantage of technology available from giants like Amazon or Google, or from specific problem-domain companies like CloudFlare.

Netflix serves as a great example of a technology-driven company that is an inspiration to us, but there are so many others that really changed the way we approach problems - Tumblr, Etsy. To stay on the topic of Netflix - I think their idea behind "chaos monkey" is great, and we're increasingly rolling out a (currently simple) Docker Swarm version of it - https://github.com/titpetric/docker-chaos-monkey - the best way to eliminate worry is to test failure scenarios. Since docker-chaos-monkey is designed to unpredictably "kill off" containers, your system reaps the benefit of being designed to handle failures.

It's one of those problems you have to have a passion for, though - it's like testing software. You only test software for the functionality and failures you can predict, and I'm pretty sure none of us can predict all the ways in which software (or distributed systems) can fail. As such, it's a never-ending occupation. :)
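The core of the chaos-monkey idea is small enough to sketch. Here's a minimal Python sketch of it, assuming nothing about the actual repo (which is a shell script); `pick_victim` and `chaos_round` are names made up for illustration:

```python
import random
import subprocess

def pick_victim(container_ids, rng=random):
    """Pick one container at random to kill; None if nothing is running."""
    if not container_ids:
        return None
    return rng.choice(container_ids)

def chaos_round(dry_run=True):
    """One round: list running containers, kill a random one.

    Requires a running Docker daemon; dry_run=True only reports the victim.
    """
    out = subprocess.run(
        ["docker", "ps", "-q"], capture_output=True, text=True, check=True
    ).stdout
    victim = pick_victim(out.split())
    if victim and not dry_run:
        subprocess.run(["docker", "kill", victim], check=True)
    return victim
```

Run `chaos_round()` from cron or a loop with a random sleep, and every service you operate has to survive losing a container at an inconvenient time.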

titpetric | 9 years ago | on: Running 1000 containers in Docker Swarm

Hi, OP author here: I have actually set up VRRP (well, UCARP) on Docker, so it's possible to containerize even this facet of running an HA ops stack with Docker as the infrastructure. It is, however, as you say, only one active node + a number of fail-overs in case that one goes down. In terms of maintenance (hosts do go down, scheduled downtime is common), it's priceless to have this part of the puzzle portable as well. If you want to check it out, there's a GitHub repo available here: https://github.com/titpetric/ucarp-ha - and a future article about it is planned as well. It will also become part of the e-book which I'm currently working on and publishing on Leanpub: https://leanpub.com/12fa-docker-golang :)
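For intuition, the VRRP-style failover rule boils down to: the highest-priority node that is still alive holds the virtual IP. A toy Python sketch of that election rule (illustrative only - this is not UCARP's actual advertisement protocol, and the node names are made up):

```python
def elect_master(nodes):
    """VRRP-style election: among alive nodes, the highest priority wins.

    nodes: list of (name, priority, alive) tuples.
    Returns the name of the master, or None if everything is down.
    """
    alive = [n for n in nodes if n[2]]
    if not alive:
        return None
    return max(alive, key=lambda n: n[1])[0]

cluster = [("node-a", 200, True), ("node-b", 150, True), ("node-c", 100, True)]
elect_master(cluster)            # node-a holds the virtual IP
cluster[0] = ("node-a", 200, False)  # node-a goes down...
elect_master(cluster)            # ...and node-b takes over
```

In real VRRP/UCARP the nodes discover liveness via periodic advertisements on the network, but the "who holds the address" decision is exactly this priority comparison.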

titpetric | 10 years ago | on: How to pass docker.sock to your containers while keeping security

In this case, you can't use a read-only mount. The underlying protocol is HTTP, which means you must write a request to the socket to get a response. You can use read-only mounts for `/proc` however, which just spits out data. I use it for titpetric/netdata, for example.
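To see why a response requires a write, here's a self-contained Python sketch of an HTTP exchange over a unix socket (a stand-in for docker.sock - no Docker daemon involved): the server only answers after the client has written a request.

```python
import os
import socket
import tempfile
import threading

path = os.path.join(tempfile.mkdtemp(), "demo.sock")

# Server side: bound and listening before any client connects.
srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
srv.bind(path)
srv.listen(1)

def serve_once():
    """Accept one connection, read the request, answer with HTTP."""
    conn, _ = srv.accept()
    request = conn.recv(4096)  # the request the client had to WRITE first
    assert request.startswith(b"GET")
    conn.sendall(b"HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nok")
    conn.close()

t = threading.Thread(target=serve_once)
t.start()

cli = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
cli.connect(path)
# A truly read-only channel would be stuck here: without writing the
# request, the server never sends anything back.
cli.sendall(b"GET /containers/json HTTP/1.1\r\nHost: docker\r\n\r\n")
reply = cli.recv(4096)
cli.close()
t.join()
srv.close()
```

The `GET /containers/json` path mirrors what a Docker API client would send; against the real docker.sock the same request/response pattern applies.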

titpetric | 10 years ago | on: Setting up a remote digital workspace

Ok, some guy has a setup which works for him personally. Yet he decides to write a post about it titled "set up your remote office", advocating his view to others. It's like advocating sex, without sex education.

Sure, I might have pointed out that basically every one of his ideas is prone to failure. Aggressively. Or passionately. If you're planning to work remotely, you need to consider things like:

- uptime, SLA
- redundancy (network, electricity, hardware, storage)

I'm not saying let's throw $10k at a home datacenter with running costs of about $500/mo, but if I did, I'd gladly advocate it over an ISP-provided AP and educational low-end hardware. There's a middle ground that's just as feasible and effective in terms of cost and your time.

And I could go on. Years of remote work, and the failures that are inevitable, have made me cautious. If you wanted a good remote work environment, you'd be better off coming to me for suggestions than using Ivan's "bragging rights" setup. I'm still not sure what's there to brag about. Linux today runs just about anywhere, even trashcans.

titpetric | 10 years ago | on: Setting up a remote digital workspace

My way is better because it's more reliable. Sometimes easier is just a side benefit. For OP's article, complexity is reduced, reliability is better and the price point stays the same (seeing how he already pays about the same for a bare metal server). I am sorry, sometimes an opinion is just an opinion, and other times somebody IS just wrong. There's benefit in any approach, but there is always a better way. Coming from someone who had the same shitty remote setup with RPis, I have enough experience to advise a better way.

titpetric | 10 years ago | on: Setting up a remote digital workspace

Home servers are not bad per se. Personally, if I were fine with power outages and non-redundant uplinks, I'd just go with an 8-core CPU board so I could run proper virtualization. If your laptop is not powerful enough, that is - there's a thing to be said for taking your dev environment with you.

titpetric | 10 years ago | on: Setting up a remote digital workspace

I have had good experiences. And I suggested ANY (Google, Amazon, Azure) VM if your requirements are higher. And compared with a bare metal server with 1 disk and no SAN/RAID, yeah, better.

titpetric | 10 years ago | on: Just ship it

Just to add it here: I am interested in your opinions and experience with shipping software and getting clients. I know I have much to learn (or unlearn, as it were) to hack out MVPs to gauge interest & monetize it if there is some. Any self-published, self-made person on here can give valuable advice or share experience. I am listening ;)

titpetric | 12 years ago | on: Don't optimize your software prematurely

I like to think so too - guidelines are basically what best practices are. And behind every best practice there's a programmer who at one point failed and fixed their mistake. If you buy more RAM/CPU/disk, are you allowing them to fail (and learn), or are you just "patching" the problem yourself?

titpetric | 12 years ago | on: Don't optimize your software prematurely

Pretty sure you are correct. The "quote" was paraphrased from my poor memory. Also, from what I've read, the book series is, even today, very informative and relevant. I am, however, disappointed by how much damage this quote is doing outside of its proper context.

"In practice, it is often necessary to keep performance goals in mind when first designing software, but the programmer balances the goals of design and optimization."

^^ This. I am ashamed when the original quote is used to basically advocate "write un-optimized code and we'll buy hardware, which is cheaper than development time". I do realize that wasn't the case back when the books were written (hardware was hella expensive), but today this just encourages bad form in developers, whose go-to reaction is too often "let's add more RAM".
