Every time I spin up a new project, I try to answer the following question honestly: "Am I using this project as an excuse to learn some new technology, or am I trying to solve a problem?"
Trying to learn some new technology? Awesome, I get to use one new thing. Since I already understand every other variable in my stack, I'll have a much easier time pinning down those 'unknown unknown' issues that invariably crop up over time.
Trying to solve a problem? I'm going to use what I already know. For web stuff, this'll be a super-boring, totally standard Rails web app that generates HTML (ugh, right? How last century), or maybe a JSON API if I'm trying to consume its output in a native app. For mobile stuff, this'll be an Objective-C iOS app.
Waffling about it and saying 'well, I am trying to solve a problem, and I think maybe a new whiz-bang technology is the best way to do it' is the simplest path to failing miserably. I've watched incredibly well-funded startups with smart people fail miserably at delivering a solution on-time because an engineer was able to convince the powers that be that a buzzword-laden architecture was the way to go.
You don't know what the 'right' solution is unless you understand the tools and technology you'll use to deliver that solution. Anything else is just cargo-culting.
Comments here are geared against picking a technology just because it is brand new and exciting, but sometimes you need to pick up something that is just different from what you or your team know well.
In a project I worked on once, we went with "what we knew" (a standard normalized SQL schema) to build an analytics engine. The problem with "going with what you know" is that you are likely to badly reinvent well-established patterns. If we had stopped for a minute and learned about star schemas, the project could have ended up in much better shape than it did, and the effective time to release might well have been shorter.
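For anyone unfamiliar with the pattern: a star schema puts the raw measurements in one central fact table and the descriptive attributes in small dimension tables around it, instead of a fully normalized schema. A minimal, purely hypothetical sketch (table and column names are made up for illustration, using Python's built-in sqlite3):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# One wide fact table of measurements, keyed to small dimension tables.
cur.executescript("""
CREATE TABLE dim_date (date_id INTEGER PRIMARY KEY, day TEXT, month TEXT);
CREATE TABLE dim_product (product_id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE fact_sales (
    date_id INTEGER REFERENCES dim_date,
    product_id INTEGER REFERENCES dim_product,
    quantity INTEGER,
    revenue REAL
);
""")

cur.execute("INSERT INTO dim_date VALUES (1, '2015-03-31', '2015-03')")
cur.execute("INSERT INTO dim_product VALUES (1, 'widget')")
cur.execute("INSERT INTO fact_sales VALUES (1, 1, 3, 29.97)")

# The typical warehouse query shape: aggregate the fact table,
# grouped by attributes pulled in from the dimensions.
cur.execute("""
SELECT d.month, p.name, SUM(f.revenue)
FROM fact_sales f
JOIN dim_date d ON f.date_id = d.date_id
JOIN dim_product p ON f.product_id = p.product_id
GROUP BY d.month, p.name
""")
rows = cur.fetchall()
print(rows)  # → [('2015-03', 'widget', 29.97)]
```

The payoff for analytics is that every report becomes the same shape of query: join the fact table to a few dimensions and aggregate, rather than walking a web of normalized tables.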
BTW, learning "new things" is almost always useful but isn't always precisely exciting. Data warehousing is one example :-).
Becoming proficient with a selected set of technologies is still a good idea, but I'm willing to learn and try new things all the time. First thing I ask myself is if a problem was already solved by someone else, and how.
That's a great approach. By restricting yourself to one new thing, you can evaluate it in isolation.
Then, when you make your decision about that new thing, you'll know why you like/dislike it. Your decision won't be clouded by arbitrary things like libraries inter-dependencies.
Using what you already know may not always be the best approach, because of the "when your only tool is a hammer, every problem looks like a nail" phenomenon. I would do my research and use what makes sense and what is best in the long run, regardless of whether I'm already experienced in the technology. When you're a software engineer with years of experience under your belt, picking up the next technology will not be a big challenge.
I generally agree with your post, but I think there is a critical difference between your argument and that of the blog post. Of course teams are more productive with technologies they know, but that isn't necessarily an arbitrarily-defined "boring" technology.
To pick on one specific example in the post: Node.js is popular enough that there are lots of teams and engineers that are most comfortable and productive working with it. For these teams, choosing Node certainly wouldn't cost an innovation token, while deciding to build some service in Python, Ruby or PHP (if we take at face value that this is more "boring") may end up being more costly.
I do the same thing and usually end up trying to solve a problem using WordPress which is probably cringe-worthy to a lot of people. Most of the things I come up with are basically content-publishing so it works really well for hacking around and making something fun.
My latest creation is an Instagram-style feed of the beers a few friends of mine have been enjoying recently.
It works just fine and none of my non-technical friends have cared that they add beers in their mobile browser rather than via an app or something.
How much can you learn if you're not trying to solve a real problem, though? My personal experience is that I can dink around with a new technology all I want, and not learn half as much as when I'm trying to apply it to do something real.
It's not really a "last century" thing. Last century, most apps didn't need multiple front-ends the way they do today. Unless you are 100% sure you won't need multiple front-ends, building a JSON API from the start simplifies things a lot.
Great advice. Also extends to teams, 'What do we all know well enough to execute?' should trump 'What would be fun?' every time. I've worked in situations where someone saying 'Oh I decided to write this in Clojure, even though I'm the only one here who knows it and we're running out of cash' cost significant time and resources to fix (the fix was rewriting the project in plain ol' Python myself). It just isn't a sensible risk to take.
The innovation tokens concept seems to be a stand-in for both good engineering judgement and iterative exploration of the design/implementation space before committing to a path. I've been in several (successful) startups that leveraged both of these principles to great effect.
Both "innovative" and "boring" can shoot you in the foot. TFA focuses on "innovative" as a risk, but that's just daft. This industry is constantly rolling its lessons learned back into its shipped and shared technology. Ever gone back to a pre-Rails era web/backend codebase and screamed in horror? Ever gone to a "new" shop that never assimilated those lessons, used "boring" technology (thus dodging their shared/encapsulated forms), and recreated the old horror? (personally: check and check)
Trite policies are not a replacement for spending dedicated up-front (and occasional ongoing) time cycling between 1) evaluating/understanding your problem, 2) researching the current state of the art {processes, technology, etc.} related to your problem, and 3) using good engineering judgement to choose the best path then-and-there.
I have seen some incredibly good 'legacy' codebases written with very old tech. There is a huge advantage when someone has worked with a technology for 10+ years, knows all the rough edges to avoid, and bakes that into their design.
Java may be the worst example of a 'blub' language I can think of. However, I recently spoke with a team that had an awesome response to all the things I hated about the language. The closest analogy I can think of is mechanics working on popular cars: they get to the point where they can diagnose problems in seconds because they know the kinds of things that break. Cars come with plenty of sensors to help diagnose problems, but in this case familiarity often beats better tools.
> Both "innovative" and "boring" can shoot you in the foot.
This is true, but it misses one of the points of TFA, which is that with boring tech you already know the ways it can shoot you in the foot, because lots of people have had their feet shot by it before you came along. You can learn what not to do just by looking around and seeing which sets of feet have the smoking holes in them. With exciting tech, you don't know; you get to be one of the people who discovers those things.
This seems to be written from the "engineers are monkeys" perspective. As if they spend their time flinging poo and you really need "solid" boring technology that's already well designed so the poo doesn't mess it up.
You shouldn't avoid node.js or MongoDB because they are "innovative"; you should avoid them because they are poorly engineered. (Erlang did what Node does, but much better, and MongoDB is a poorly engineered global-write-lock mess that is probably better now, but whose hype far exceeded its quality for many years.)
The "engineers are monkeys" idea is that engineers can't tell the difference, and it seems to be supported by the popularity of those two technologies.
But if you know what you're doing, you choose good technologies. Elixir is less than a year old, but it's built on 20 boring years of work that has been done in Erlang. Couchbase is very innovative, but it's built on nearly a decade of CouchDB and memcached work.
You choose the right technologies and they become silver bullets that really make your project much more productive.
Boring technologies often have a performance (in time to market terms) cost to them.
Really you can't apply rules of thumb like this and the "innovation tokens" idea is silly.
I say this having done a product in 6 months with 4 people that should have taken 12 people 12 months, using Elixir (not even close to 1.0 at the time) and Couchbase, and trying out some of my "wacky" ideas for how a web platform should be built. Yes, I was using cutting-edge new ideas in this thing that we took to production very quickly.
The difference?
Those four engineers were all good. Not all experienced-- one had been programming less than a year-- but all good.
Seems everyone talks about finding good talent and how important that is but they don't seem to be able to do it. I don't know.
What I do know is: don't use "engineers are monkeys" rules of thumb. Just hire human engineers.
Having come from Etsy and witnessed the success of this type of thinking first hand, I think you missed the point of the article and I think you are using a tiny engineering organization (4 people) in your thinking, instead of a medium to large one (120+ engineers).
The problem isn't "we are starting a new codebase with 4 engineers, are we qualified to choose the right technology?" it's "we are solving a new problem, within a massive org/codebase, that could probably be solved more directly with a different set of technologies than the existing ones the rest of the company is using. Is that worth the overhead?" and the answer is almost always no. Ie: is local optimization worth the overhead?
Local optimization is extremely tempting no matter who you are, where you are. It's always easy to reach a point of frustration and come to the line of reasoning of "I don't get why we are wasting so much time to ship this product using the 'old' stuff when we could just use 'newstuff' and get it out the door in the next week." This happens to engineers of all levels, especially in a continuous deployment, "Just Ship" culture. The point of the article is that local optimization gives you a tiny boost in the beginning for a long-term cost that eventually moves the organization in a direction of shipping less. It's not that innovative technologies are bad.
> But if you know what you're doing, you choose good technologies
No, if you know what you are doing you make good organizational decisions. It matters less what technology you use than that the entire organization uses the same technology. Etsy has a great engineering team and yet the entire site is written in PHP. I don't think there is a single engineer working at Etsy who thinks PHP is the best language out there, but the decision to be made at the time was "there is a site using PHP, some Python, some Ruby etc., how do we make this easier to work on?" Of those three, Python and Ruby are almost universally thought of as better languages than PHP, but in this case the correct decision was picking the worse technology, because more of the site was written in it, the existing infrastructure supported it more completely, and so as an organization and a business we could get back to shipping products more quickly by all agreeing to use PHP. Etsy certainly does not think of its engineers as monkeys, quite the opposite.
My take tends to be not that 'innovation' is bad, but that there are a couple of risks:
- The weaknesses of new tech may not be fully understood. A lot of new tech solves existing problems, while re-surfacing problems that the old tech solved. Everyone thinks it's great until they've used it for a bit longer, and run into those issues.
- New tech runs a higher risk of disappearing/becoming unsupported. If you plan to support your product for a long time, that's a valid risk factor.
For myself, I'm wary of having very new tech as a fundamental underpinning of any piece of work I need to stick around. I'll likely adopt frameworks or database systems cautiously, unless their superiority is overwhelmingly obvious. On the other hand, I'd be a lot more willing to take risks on a simple library.
With a smaller, simpler piece of tech, it's easier to replace if something goes awry, and it's easier to evaluate in its totality prior to taking the risk.
Individual humans are smart. Groups of humans are dumb. When you're hiring people that you will personally work with, you can filter for smart. When you have to work with another group of humans, it's safer to assume that they are stupid.
I'll throw another one down your way. An organization I worked with had about 5 million lines of COBOL in one system (they had several more, and this one system was only about 15% of their total transactional workload). It used a proprietary pre-relational database that let users both run queries (of a sort) and do things like read the value at the query result + 1500 bytes.
They tried re-writing pieces in Java at a cost of tens of millions of dollars. Java was the new hotness. In addition, they built out a Java hosting environment using expensive, proprietary Unix hardware to reach the same production volume as the mainframe. However, it was grossly under-utilized because the Java code couldn't do much more than ask the COBOL code what the answer was to a question by using Message queues. More millions of dollars went to keep up licenses and support contracts on essentially idle hardware.
They tried moving it to Windows, using .NET and MicroFocus COBOL. But the problem was they would still be tied to COBOL, even though they (conceptually) had a path to introduce .NET components or to wrap the green-screen pieces in more modern UIs. That in itself was a problem, because all their people knew the green-screen UI so well it was all muscle memory. Several workers complained because the new GUI actually made them slower at their jobs.
They were stuck because they had no way to reverse engineer the requirements from the COBOL code, some of it going back 25+ years. Of course it wasn't documented, or if it was, the documentation was long gone. For the most part they were tied to that COBOL code because no one understood everything that it did and there were only a handful of COBOL programmers left in their shop (I think 6) and they were busy making emergency fixes on that + several other millions of lines of code in other systems.
They were, however, looking for an argument to retire COBOL and retire the mainframes. The cheapest solution would have been to stick with COBOL. Hire programmers. Teach them COBOL (because it was painfully difficult to find any new COBOL people and for various reasons they could not off-shore the project). Continue to develop and fix in COBOL (especially before the last remaining COBOL programmers died or retired). If you cleaned up or fixed a module, maybe move it to Java when possible.
Long story short: the decision to introduce a new technology, even in the face of an ancient, largely proprietary (since it's really about IBM COBOL on mainframes), and over-priced solution, can actually lead to a worse outcome. Had they stayed with boring technology, and had they in-sourced more of their COBOL workforce, they might not have felt happy, but they would have been in a much stronger position. Instead they were paying for a mainframe, and a proprietary Unix server farm, and software licenses on both Unix and z/OS.
When I last was there they were buying a new solution from Oracle which was supposed to arrive racked up and ready to go. Several weeks in they essentially said it would take months before the first of the new Oracle servers would be ready for an internal cloud deployment on which to try to re-host some software. I'm not even sure what they think they would be re-hosting but they talked about automatic translation of COBOL to Java.
Yes, this "boring = good!" trope is frequently weaponized to shut down people's voices. Happened to me.
One thing I realized is these blogposts are consumerist. They talk about "Python" and "MongoDB". Very little about underlying ideas like "algorithms", "computational paradigms" or "expressive power".
And they have hypersimplified plans about "three innovation tokens". Instead of "risk analysis" or "evaluate tradeoffs".
One company shut me down with such blog posts... while it let devs run amok with an architecture that did n^2 (more?) network calls, where each call transferred one RDBMS row at a time. It dragged down the intelligence of everyone who really knew better; they spent "sprints" trying to find micro-optimizations, knowing full well that the system was fundamentally ridiculous.
So I spent a weekend reimplementing it in the Scary Fun Language. Because it was my weekend dammit, and Embracing Boredom damaged my brain too much. Scary Fun was the only way to start mending it. And it succeeded.
So of course the first order of business was to rewrite it in the Embrace Boredom language.
I recently went back to SQL from noSQL after I realized that a lot of noSQL was just reinventing wheels. I realize there might be cases where noSQL databases shine, but in most use cases SQL is better. It's slightly more work up front (only slightly) but it pays off later in keeping your data organized and making it easy to query. It's a great example of a very old technology with excellent longevity. That's in part because it's built on math and logic (set theory, etc.). There are universal mathematical/logical truths encoded elegantly into the structure of the SQL language, and they describe things you are going to need.
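As a toy illustration of that set-based foundation (hypothetical tables, using Python's standard-library sqlite3): a question like "total spent per user" is one declarative statement over sets of rows, not a hand-rolled loop over documents.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE orders (user_id INTEGER REFERENCES users, total REAL);
INSERT INTO users VALUES (1, 'ada'), (2, 'bob');
INSERT INTO orders VALUES (1, 10.0), (1, 5.0), (2, 7.5);
""")

# A join plus an aggregate is relational algebra in action: we state
# what we want, and the engine figures out how to compute it.
cur.execute("""
SELECT u.name, SUM(o.total)
FROM users u JOIN orders o ON o.user_id = u.id
GROUP BY u.name
ORDER BY u.name
""")
rows = cur.fetchall()
print(rows)  # → [('ada', 15.0), ('bob', 7.5)]
```

The same question against denormalized documents typically means either duplicating the totals into each user record (and keeping them in sync yourself) or writing that aggregation by hand.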
Your tools shouldn't be the exciting thing. The thing you are building with them should be the exciting thing.
By linking Aphyr's "Call me maybe: Redis" article as an example of possible troubles with new technologies, the author of this article shows that he does not actually understand the failure modes of MySQL itself very well; they are identical to those of Redis failover (and of every other master-slave system with asynchronous replication, more or less). In theory this contradicts the whole article, but I actually think the idea happens to be reasonable, just formulated poorly. The point is not what is new and what is old; it is that switching to new technologies without a good reason is a useless risk. If you analyze the failure modes, and the strengths, of what you used in the past, and there is something new that performs much better, then IF you are a good programmer you can analyze and test the new thing for a few days, read the docs, check some code, and understand whether it is a better fit. This is why it's always the best programmers who adopt the new technologies that later turn into the next "obvious" stack. They are brave not because they are crazy, but because they can analyze something regardless of whether it is new or old.
I love the way Maciej Cegłowski describes his setup at Pinboard:
"Pinboard is written in PHP and Perl. The site uses MySQL for data storage, Sphinx for search, Beanstalk as a message queue, and a combination of storage appliances and Amazon S3 to store backups. There is absolutely nothing interesting about the Pinboard architecture or implementation; I consider that a feature!"
You can debate which axes matter - you can debate the weighing and scaling of them - but you can't get away from the conclusion that "pushing all your risk boundaries at the same time equals failure". As a matter of fact, this is structurally identical to the famous "fast, good, cheap" triangle.
n.b., this analysis really starts hitting home in multi-team environments, say, over 50 engineers.
I understand how someone might believe in "innovation tokens," but it's really just a confused way to look at ROI. There's no inherent cost in some innovation, though. If our programmers already know an "innovative" programming method, there's no cost in doing things that way.
The author seems to be conflating the cost of innovation with the cost of doing something you're less familiar with, which are not necessarily the same things. The risk of chasing shiny new objects is real, but sometimes those shiny objects can actually reduce costs and time spent to accomplish a goal (like a MVP or new version).
Sometimes it's worth innovating if you already have experience in the area. Sometimes it's worth innovating even if you have to learn and try new things. Sometimes the time/monetary cost of innovation is 0, and sometimes it's so high that you shouldn't innovate even if it improves your product.
This idea of limited innovation resulting in cumulative costs is overly simplistic. The smart founder will recognize the difference between innovations that will yield net returns and those that won't.
The problem is that "you" is not a person, "you" is every person who will ever work on the code in the future. And "innovation" isn't "the code you are writing now", "innovation" is "the code you are writing now, and next year, and the third-party library you want to integrate in 6 months, and the unit tests you don't have time for now but will become critical in 2-3 years as you become unable to ship working software, and the bug you'll spend a month working on because nobody has ever encountered it before."
Yes, you can look at this in terms of ROI. The author's point is that engineers - particularly ones who have never scaled & maintained a system over years and millions of users - consistently underweight the problems that they've never encountered before. With boring tech, other people have encountered them, and solved them, and you can Google for the answer or pull in a library. With bleeding-edge stuff, when you run into one of these, you have to drop everything you're doing and fix it, because nobody else will.
> "This idea of limited innovation resulting in cumulative costs is overly simplistic. The smart founder will recognize the difference between innovations that will yield net returns and those that won't."
I agree with you completely. Furthermore, I'd add that the view on maintenance is too simplistic also. Effective maintenance requires more than just a tech stack where the limitations are known; you'll also want something testable and refactorable. Ballooning code bases are a real problem, and sometimes the smart move is to clean up the cruft. If you're smart about integrating new tech into your stack, there's no reason you can't end up with a solution that is both more robust and more efficient.
Furthermore, there's the whole scaling issue. Perhaps the mode du jour is just to assume increased server costs (regardless of where they're hosted) are just a necessary part of scaling a website to more users, but rethinking your tech stack can help keep these costs under control. Perhaps this is a decision that can wait until you have a decent userbase, but it's still a good reason to be open-minded about what benefits a new solution could bring.
This article is a good starting point to talk about technology choices. But there are many issues with applying the advice in the real world.
First, he is intertwining two separate issues: limiting tech choices in an organisation, and incorporating new technologies. Keep in mind that you can have separate strategies for both.
Secondly, there is no notion as to how big a change a token is worth. Obviously switching languages is a much bigger change than switching caching libraries.
Thirdly, there is no mention of project size. Should a 3 month project get the same tokens as a two year project? This year we have created ~300 microservices. If each was allowed 3 tokens we would have 900 new tech changes in this year alone. That's unmanageable.
Fourthly, what is your organisational strategy and culture? If an engineer prototypes in a new language is that a problem because it is seen as wasteful? Perhaps it is something that will make the other devs jealous? Or is it considered an investment in the company and a risk mitigation strategy? Do you have the kind of engineers and tech leads who will do a lot of this prototyping and experimentation on their own time?
Unfortunately I think the answer to all of them is, 'it depends'. How much inertia does change get in your organisation? That will help place value on the tokens.
For your third point specifically, I think taking a pragmatic view is best. You mentioned you created ~300 new microservices this year. I imagine they're all based on the same pattern, so perhaps your tokens should apply to that pattern rather than to each individual project (e.g. you get to change the stack for future microservices). On the other hand, at a rate of at least one new service per day, the current pattern is obviously pretty efficient for you, so consider why you'd change it unless necessary.
The "innovation tokens" concept expands even past technology.
Want to innovate in the way your board is structured or remove standard protections from the term sheet? Or even set up your Twitter account in this never-before-seen way? Want to remove the idea of management, or rethink the way offices work? You lose an innovation token.
The site is currently down for me (503). While we're talking about boring technology, please consider hosting your blog on a static file host + CDN. It will be faster, easier to maintain, and virtually impossible to take down.
I see people recommend static sites in general, but I've recently done some research and couldn't find a static site generator that can give me a WYSIWYG editor in my browser. What I need is a blog that lets me edit posts from a PC, a tablet, or a phone, including picture uploads and one-click publishing. Everything I found either left the editing portion up to the user or said "just use WinSCP to upload your HTML/markdown".
I just went with Wordpress. My personal blog is not my job, I just want to write down my thoughts. What specific technology would you recommend to generate a static blog with a WYSIWYG editor and picture uploads, on my own server (not S3 or some proprietary paid hosting)?
Ironically, Rails is now in the category of "boring technology" but each major version introduces enough breaking API changes that many apps never get updated. So all the pain of spending a token and little of the pleasure.
With smaller, more loosely-coupled modules, one can spend a fraction of a token here or there and still revert back to the boring way when necessary.
Sticking with the same set of technologies is a premature death for your career as a programmer.
The whole article builds on the point that people tend to fail more when they are using new tools. That point is false. In reality, when you use a wrong but "accustomed" tool in an inappropriate situation, you end up writing code that you would never write if you had chosen the right tools. You are effectively reinventing the wheel.
You also have this idea of "innovation tokens" that builds on a static notion of how much weight a new technology carries in a project. That is ridiculous.
There is no definition of ’boring’ in this article. I don't understand why you call PHP, Postgres and Cron ’boring’. What is ’interesting’ then?
It seems like you have made a wrong choice while thinking about the problem. The problem is clear: people fuck up projects by using modern, hyped technologies that are inappropriate for the project's domain. They are just as wrong as you are.
On the one hand I agree with you, but having looked at some CVs recently, I see people who list every language and web framework under the sun. If you have learned 10 new frameworks in the last year, then you can't have any in-depth knowledge of them.
3 innovation tokens? The supply is fixed for a long while? People on HackerNews of all places buying that?
It's plain wrong. Innovation is good for any kind of organization, if done properly. What the author should focus on is the lack of agility that prevents companies from experimenting and failing quickly. It's not the innovative technology that gets you in the end, it's your inability to evaluate/adopt/discard fast. Granted, that ability is hard to find in large-ish organizations, but to willingly limit your innovation sounds like a recipe for a slow death. It's like a gentleman boxer from the 19th century limiting himself to just jab, cross, hook entering a modern MMA fight.
So, call them "agility tokens". You've still got a limited supply when it comes to trying out new languages, new databases, new whatever. If you've got ten years Python and MySQL experience, and 95% of your codebase is in Python with data stored in MySQL, what do you gain and what do you lose by introducing Node.js and MongoDB into the mix? Sometimes it's worth the trade...other times it's not. But, Node.js and MongoDB is probably not going to provide enough of a productivity boost to make up for the costs of maintaining two codebases, two build/test/deploy environments, two databases, etc. You're making a trade; sometimes it's beneficial (usually long term), and often it's not (usually short term).
In short, yes, I'm buying this. I think it's a perfectly sensible analogy; a somewhat leaky abstraction, if you will, since none of us actually have any "tokens" that we are trading in for a new database. But, the meaning is clear to me, and I can't find fault in it.
https://consul.io/ is mentioned as an "exciting" technology. What is a "boring" alternative for this... that is, a multi-datacenter, service discovery/health-check/config-distribution software that'll "just work" ?
You can try using DNS with dynamic zones as a simple service discovery mechanism (sharing one master with all your environments), but you'll soon find out that:
- healthchecking really is a good idea in service discovery
- clients are awful about refreshing state from DNS
- single-master systems are a bad idea in a large environment.
- DNS replication is finicky; DNS caching is slow.
Puppet with puppetdb can sorta fill this gap, too, as long as you don't need fast convergence (or fast puppet runs, if your puppetdb is more than a few milliseconds away from any of your nodes).
Consul may be new, but it's built on really solid ideas and technologies. You can read papers[1][2] about the underlying technologies to get a sense for how Consul will fail. I'd like to think that counteracts some of the problems you get with newness.
This also makes me think of the problem of legacy code. Once you've written an app, it's feature complete, and it's profitable, expending effort to rewrite it in a new tech is not only a questionable value proposition, it can be actively dangerous. Replacing ugly but works with beautiful but fails is not good engineering.
Why not let the data make the technology choices for you?
The way I go about making technology choices is by examining the data that I'll be working with, in conjunction with the data access patterns inherent in the features that I'll need to support.
I look at things like projected read/write throughput, latency characteristics, total data volume, concurrency, and whether or not the problem domain actually requires highly relational queries.
I think that a lot of shops don't put enough thinking into figuring out what kind of data access patterns they'll need to support throughout the life-cycle of the business. This is no big deal if the product doesn't experience growth. But in terms of rich web applications which begin to experience growth the team inevitably ends up with a massive scaling problem unless their system architecture was designed to support these access patterns from the ground up.
It seems that this "growing pains" scaling nightmare has become almost a rite of passage for successful tech startups. Founders are generally led to believe that it's a good thing for them to need to sell equity to outside investors in order to "scale out" a much larger team to build out the infrastructure required to perform in-flight rocket surgery on the application before it either explodes or becomes increasingly cost-inefficient.
While this whole process greatly benefits VCs, the high end tech engineering job market, and recruiters, it's absolutely terrible the founding team because it means they inevitably get massively diluted as a consequence of experiencing success. I'm not saying it's a conspiracy, but I am saying there is massive financial incentive to keep this kind of technical knowledge about best practices an open secret within the highly paid IT consultancy world.
TLDR: It's my supposition that small teams can build scalable, composable, systems by thinking about web scale data access patterns from the beginning.
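For what it's worth, the kind of sizing described above can start as a back-of-envelope script. A minimal sketch in Python; every figure below is hypothetical, purely for illustration:

```python
# Back-of-envelope capacity estimates from projected access patterns.
# Every figure below is hypothetical, for illustration only.

def peak_qps(daily_actives, requests_per_user, peak_factor=3):
    """Average requests/sec across the day, scaled by a peak-to-mean factor."""
    return daily_actives * requests_per_user / 86_400 * peak_factor

def storage_gb(rows_per_day, bytes_per_row, retention_days):
    """Raw data volume over the retention window, in GB."""
    return rows_per_day * bytes_per_row * retention_days / 1e9

reads = peak_qps(daily_actives=500_000, requests_per_user=40)
writes = peak_qps(daily_actives=500_000, requests_per_user=5)
volume = storage_gb(rows_per_day=2_500_000, bytes_per_row=400, retention_days=365)

print(f"peak reads/s ~{reads:,.0f}, peak writes/s ~{writes:,.0f}, ~{volume:,.0f} GB/year")
```

Even numbers this rough tell you whether a single boring Postgres box will cope or whether the access patterns genuinely call for something else.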
Agreed. Best expression I've heard to sum up this concept is "This is not an after-school club". Playing with shiny new technology is not the point. The point is to make money for the company and you use the best tools for the job. Most of the time that means solid tools that everyone understands.
My favorite piece of 'boring' technology: Sphinx (the search software).
I've been using it for maybe six years non-stop. I've thrown large data sets at it and it always runs fast; it's trivial to set up and always has more than enough options for my search purposes. It has also become a much better product over the time I've used it, with an active development group behind it. Sphinx works so well as is, I've never had a reason to look elsewhere at the latest hot search tech, it would be a waste of my time to do so.
[+] [-] aaronbrethorst|11 years ago|reply
Trying to solve a problem? I'm going to use what I already know. For web stuff, this'll be a super-boring, totally standard Rails web app that generates HTML (ugh, right? How last century), or maybe a JSON API if I'm trying to consume its output in a native app. For mobile stuff, this'll be an Objective-C iOS app.
Waffling about it and saying 'well, I am trying to solve a problem, and I think maybe a new whiz-bang technology is the best way to do it' is the simplest path to failing miserably. I've watched incredibly well-funded startups with smart people fail miserably at delivering a solution on-time because an engineer was able to convince the powers that be that a buzzword-laden architecture was the way to go.
You don't know what the 'right' solution is unless you understand the tools and technology you'll use to deliver that solution. Anything else is just cargo-culting.
[+] [-] emmanueloga_|11 years ago|reply
In a project I worked on once, we went with "what we knew" (a standard normalized SQL schema) to build an analytics engine. The problem with "going with what you know" is that you are likely to badly reinvent well-established patterns. If we had stopped for a minute and learned about star schemas, the project would have ended in much better shape than it did, and the effective time to release might have been shorter.
BTW, learning "new things" is almost always useful but isn't always precisely exciting. Data warehousing is one example :-).
Becoming proficient with a selected set of technologies is still a good idea, but I'm willing to learn and try new things all the time. First thing I ask myself is if a problem was already solved by someone else, and how.
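For readers who, like the parent, haven't run into the pattern: a star schema keeps one fact table of measurements keyed to small dimension tables, which makes analytics rollups cheap. A minimal sketch using SQLite; all table and column names here are invented for illustration:

```python
import sqlite3

# Minimal star schema: one fact table keyed to two dimension tables.
db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE dim_date (date_id INTEGER PRIMARY KEY, day TEXT, month TEXT);
CREATE TABLE dim_product (product_id INTEGER PRIMARY KEY, name TEXT, category TEXT);
CREATE TABLE fact_sales (
    date_id INTEGER REFERENCES dim_date,
    product_id INTEGER REFERENCES dim_product,
    units INTEGER, revenue REAL
);
""")
db.executemany("INSERT INTO dim_date VALUES (?, ?, ?)",
               [(1, "2015-03-30", "2015-03"), (2, "2015-03-31", "2015-03")])
db.executemany("INSERT INTO dim_product VALUES (?, ?, ?)",
               [(1, "widget", "hardware"), (2, "ebook", "media")])
db.executemany("INSERT INTO fact_sales VALUES (?, ?, ?, ?)",
               [(1, 1, 3, 30.0), (2, 1, 1, 10.0), (2, 2, 5, 25.0)])

# The typical analytics rollup: revenue by category per month.
rows = db.execute("""
    SELECT d.month, p.category, SUM(f.revenue)
    FROM fact_sales f
    JOIN dim_date d USING (date_id)
    JOIN dim_product p USING (product_id)
    GROUP BY d.month, p.category
    ORDER BY p.category
""").fetchall()
print(rows)  # [('2015-03', 'hardware', 40.0), ('2015-03', 'media', 25.0)]
```

Every rollup in the engine becomes a variation of that one join-and-group query, which is exactly the reuse a hand-rolled normalized schema tends to miss.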
[+] [-] pothibo|11 years ago|reply
Then, when you make your decision about that new thing, you'll know why you like/dislike it. Your decision won't be clouded by arbitrary things like libraries inter-dependencies.
[+] [-] neverminder|11 years ago|reply
Using what you already know may not always be the best approach, because of the "when your only tool is a hammer, every problem looks like a nail" phenomenon. I would do my research and use what makes sense and what is best in the long run, regardless of whether I'm experienced in the technology. When you're a software engineer with years of experience under your belt, picking up the next technology will not be a big challenge.
[+] [-] shebson|11 years ago|reply
To pick on one specific example in the post: Node.js is popular enough that there are lots of teams and engineers that are most comfortable and productive working with it. For these teams, choosing Node certainly wouldn't cost an innovation token, while deciding to build some service in Python, Ruby or PHP (if we take at face value that this is more "boring") may end up being more costly.
[+] [-] stevesearer|11 years ago|reply
My latest creation is an Instagram-style feed of the beers a few friends of mine have been enjoying recently.
It works just fine and none of my non-technical friends have cared that they add beers in their mobile browser rather than via an app or something.
[+] [-] mandeepj|11 years ago|reply
How about both?
[+] [-] snowwrestler|11 years ago|reply
[+] [-] Scarbutt|11 years ago|reply
It's not really a "last century" thing. Last century, most apps didn't need multiple front-ends the way they do today. Unless you're 100% sure you won't need multiple front-ends, building a JSON API from the start simplifies things a lot.
[+] [-] ehurrell|11 years ago|reply
[+] [-] saidajigumi|11 years ago|reply
Both "innovative" and "boring" can shoot you in the foot. TFA focuses on "innovative" as a risk, but that's just daft. This industry is constantly rolling its lessons learned back into its shipped and shared technology. Ever gone back to a pre-Rails era web/backend codebase and screamed in horror? Ever gone to a "new" shop that never assimilated those lessons, used "boring" technology (thus dodging their shared/encapsulated forms), and recreated the old horror? (personally: check and check)
Trite policies are not a replacement for spending dedicated up-front (and occasional ongoing) time cycling between 1) evaluating/understanding your problem, 2) researching the current state of the art {processes, technology, etc.} related to your problem, and 3) using good engineering judgement to choose the best path then-and-there.
[+] [-] Retric|11 years ago|reply
Java may be the worst example of a "Blub" language I can think of. However, I recently spoke with a team which had an awesome response to all the things I hated about the language. The closest analogy I can think of is mechanics working on popular cars get to the point where they can diagnose problems in seconds because they know the kinds of things that break. Cars come with plenty of sensors to help diagnose problems, but in this case familiarity often beats better tools.
[+] [-] smacktoward|11 years ago|reply
This is true, but it misses one of the points of TFA, which is that with boring tech you already know the ways it can shoot you in the foot, because lots of people have had their feet shot by it before you came along. You can learn what not to do just by looking around and seeing which sets of feet have the smoking holes in them. With exciting tech, you don't know; you get to be one of the people who discovers those things.
[+] [-] davedx|11 years ago|reply
This is something worth fighting for, but by God, it's a hard fight, both in big corps and smaller "more agile" companies.
Great comment, anyway.
[+] [-] taeric|11 years ago|reply
[+] [-] unknown|11 years ago|reply
[deleted]
[+] [-] MCRed|11 years ago|reply
You shouldn't avoid node.js or MongoDB because they are "innovative" -- you should avoid them because they are poorly engineered. (Erlang did what node does, but much better, and MongoDB was a poorly engineered global-write-lock mess that is probably better now but whose hype far exceeded its quality for many years.)
The "engineers are monkeys" idea is that engineers can't tell the difference -- and it seems to be supported by the popularity of those two technologies.
But if you know what you're doing, you choose good technologies -- Elixir is less than a year old, but it's built on the boring 20 years of work that has been done in Erlang. Couchbase is very innovative, but it's built on nearly a decade of couchdb and memcache work.
You choose the right technologies and they become silver bullets that really make your project much more productive.
Boring technologies often have a performance (in time to market terms) cost to them.
Really you can't apply rules of thumb like this and the "innovation tokens" idea is silly.
I say this having done a product in 6 months with 4 people that should have taken 12 people 12 months to do, using Elixir (not even close to 1.0 of elixir even) and couchbase and trying out some of my "wacky" ideas for how a web platform should be built-- yes, I was using cutting edge new ideas in this thing that we took to production very quickly.
The difference?
Those four engineers were all good. Not all experienced-- one had been programming less than a year-- but all good.
Seems everyone talks about finding good talent and how important that is but they don't seem to be able to do it. I don't know.
What I do know is: don't use "engineers are monkeys" rules of thumb -- just hire human engineers.
[+] [-] wdewind|11 years ago|reply
The problem isn't "we are starting a new codebase with 4 engineers, are we qualified to choose the right technology?" it's "we are solving a new problem, within a massive org/codebase, that could probably be solved more directly with a different set of technologies than the existing ones the rest of the company is using. Is that worth the overhead?" and the answer is almost always no. Ie: is local optimization worth the overhead?
Local optimization is extremely tempting no matter who you are, where you are. It's always easy to reach a point of frustration and come to the line of reasoning of "I don't get why we are wasting so much time to ship this product using the 'old' stuff when we could just use 'newstuff' and get it out the door in the next week." This happens to engineers of all levels, especially in a continuous deployment, "Just Ship" culture. The point of the article is that local optimization gives you this tiny boost in the beginning for a long-term cost that eventually moves the organization in a direction of shipping less. It's not that innovative technologies are bad.
> But if you know what you're doing, you choose good technologies
No, if you know what you are doing you make good organizational decisions. It matters less what technology you use than that the entire organization uses the same technology. Etsy has a great engineering team and yet the entire site is written in PHP. I don't think there is a single engineer working at Etsy who thinks PHP is the best language out there, but the decision to be made at the time was "there is a site using PHP, some Python, some Ruby, etc.; how do we make this easier to work on?" Of those three, Python and Ruby are almost universally thought of as better languages than PHP, but in this case the correct decision was picking a worse technology, because more of the site was written in it and the existing infrastructure supported it more completely, so as an organization and a business we could get back to shipping products more quickly by all agreeing to use PHP. Etsy certainly does not think of its engineers as monkeys, quite the opposite.
[+] [-] AlisdairO|11 years ago|reply
- The weaknesses of new tech may not be fully understood. A lot of new tech solves existing problems while re-surfacing problems that the old tech had solved. Everyone thinks it's great until they've used it for a bit longer and run into those issues.
- New tech runs a higher risk of disappearing or becoming unsupported. If you plan to support your product for a long time, that's a valid risk factor.
For myself, I'm wary of having very new tech as a fundamental underpinning of any piece of work I need to stick around. I'll likely adopt frameworks or database systems cautiously, unless their superiority is overwhelmingly obvious. On the other hand, I'd be a lot more willing to take risks on a simple library.
With a smaller, simpler piece of tech, it's easier to replace if something goes awry, and it's easier to evaluate in its totality prior to taking the risk.
[+] [-] exelius|11 years ago|reply
[+] [-] fullwedgewhale|11 years ago|reply
They tried re-writing pieces in Java at a cost of tens of millions of dollars. Java was the new hotness. In addition, they built out a Java hosting environment using expensive, proprietary Unix hardware to reach the same production volume as the mainframe. However, it was grossly under-utilized because the Java code couldn't do much more than ask the COBOL code what the answer was to a question by using Message queues. More millions of dollars went to keep up licenses and support contracts on essentially idle hardware.
They tried moving it to Windows, using .NET and MicroFocus COBOL. But the problem was they would still be tied to COBOL, even though they (conceptually) had a path to introduce .NET components or to wrap the green-screen pieces in more updated UIs. That in itself was a problem, because all their people knew the green-screen UI so well it was all muscle memory. Several workers complained because the new GUI actually made them slower at their jobs.
They were stuck because they had no way to reverse engineer the requirements from the COBOL code, some of it going back 25+ years. Of course it wasn't documented, or if it was, the documentation was long gone. For the most part they were tied to that COBOL code because no one understood everything that it did and there were only a handful of COBOL programmers left in their shop (I think 6) and they were busy making emergency fixes on that + several other millions of lines of code in other systems.
They were, however, looking for an argument to retire COBOL and retire the mainframes. The cheapest solution would have been to stick with COBOL. Hire programmers. Teach them COBOL (because it was painfully difficult to find any new COBOL people and for various reasons they could not off-shore the project). Continue to develop and fix in COBOL (especially before the last remaining COBOL programmers died or retired). If you cleaned up or fixed a module, maybe move it to Java when possible.
The long story short is that the decision to introduce a new technology, even in the face of an ancient, largely proprietary (since it's really about IBM COBOL on mainframes), and over-priced solution can actually lead to a worse outcome. Had they stayed with boring technology and in-sourced more of their COBOL workforce, they might not have felt happy, but they would have been in a much stronger position. Instead they were paying for a mainframe, and a proprietary Unix server farm, and software licenses on both Unix and z/OS.
When I last was there they were buying a new solution from Oracle which was supposed to arrive racked up and ready to go. Several weeks in they essentially said it would take months before the first of the new Oracle servers would be ready for an internal cloud deployment on which to try to re-host some software. I'm not even sure what they think they would be re-hosting but they talked about automatic translation of COBOL to Java.
[+] [-] hackerboos|11 years ago|reply
Just a small correction. Elixir was started in 2012. I assume you mean less than 1 year since version 1.0?
[+] [-] calibraxis|11 years ago|reply
One thing I realized is that these blog posts are consumerist. They talk about "Python" and "MongoDB", and say very little about underlying ideas like algorithms, computational paradigms, or expressive power.
And they offer hypersimplified plans like "three innovation tokens" instead of risk analysis or evaluating tradeoffs.
One company shut me down with such blog posts... while it let devs run amok with an architecture that made n^2 (or more?) network calls, where each call transferred one RDBMS row at a time. It dragged down the intelligence of everyone who knew better; they spent "sprints" trying to find micro-optimizations, knowing full well that the system was fundamentally ridiculous.
So I spent a weekend reimplementing it in the Scary Fun Language. Because it was my weekend dammit, and Embracing Boredom damaged my brain too much. Scary Fun was the only way to start mending it. And it succeeded.
So of course the first order of business was to rewrite it in the Embrace Boredom language.
[+] [-] api|11 years ago|reply
I recently went back to SQL from noSQL after I realized that a lot of noSQL was just reinventing wheels. I realize there might be cases where noSQL databases shine, but in most use cases SQL is better. It's slightly more work up front (only slightly) but it pays off later in keeping your data organized and making it easy to query. It's a great example of a very old technology with excellent longevity. That's in part because it's built on math and logic (set theory, etc.). There are universal mathematical/logical truths encoded elegantly into the structure of the SQL language, and they describe things you are going to need.
Your tools shouldn't be the exciting thing. The thing you are building with them should be the exciting thing.
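A small illustration of the set-theoretic flavor mentioned above: SQL's compound operators map directly onto set intersection and difference. A sketch with SQLite; the tables and data are made up for illustration:

```python
import sqlite3

# Two invented tables of user emails.
db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE signups (email TEXT);
CREATE TABLE purchasers (email TEXT);
INSERT INTO signups VALUES ('a@x.com'), ('b@x.com'), ('c@x.com');
INSERT INTO purchasers VALUES ('b@x.com'), ('c@x.com'), ('d@x.com');
""")

# Set intersection: users who signed up AND purchased.
both = sorted(db.execute(
    "SELECT email FROM signups INTERSECT SELECT email FROM purchasers"
).fetchall())

# Set difference: users who signed up but never purchased.
never = sorted(db.execute(
    "SELECT email FROM signups EXCEPT SELECT email FROM purchasers"
).fetchall())

print(both)   # [('b@x.com',), ('c@x.com',)]
print(never)  # [('a@x.com',)]
```

Questions like these are one-liners in any SQL database; in most noSQL stores you'd be hand-writing the set logic in application code.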
[+] [-] antirez|11 years ago|reply
[+] [-] threefour|11 years ago|reply
"Pinboard is written in PHP and Perl. The site uses MySQL for data storage, Sphinx for search, Beanstalk as a message queue, and a combination of storage appliances and Amazon S3 to store backups. There is absolutely nothing interesting about the Pinboard architecture or implementation; I consider that a feature!"
https://pinboard.in/about/
[+] [-] pnathan|11 years ago|reply
Another way to think about it is this: You get to change three axes in a product: new underlying technology, new product, or new process.
- Choosing one will allow you to progress with likely success.
- Choosing two opens yourself up to non-trivial risk.
- Choosing three means you will likely fail in this project.
There's a nifty talk by Steve McConnell about Software Engineering Judgement - https://www.youtube.com/watch?v=PFcHX0Menno - that goes into this kind of analysis.
You can debate which axes matter - you can debate the weighing and scaling of them - but you can't get away from the conclusion that "pushing all your risk boundaries at the same time equals failure". As a matter of fact, this is structurally identical to the famous "fast, good, cheap" triangle.
n.b., this analysis really starts hitting home in multi-team environments, say, over 50 engineers.
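The three-axes heuristic above is simple enough to caricature in a few lines; the labels and cutoffs here are my own paraphrase, not from McConnell's talk:

```python
# Toy version of the three-axes heuristic; the risk labels and
# cutoffs are invented for illustration.

def project_risk(new_tech=False, new_product=False, new_process=False):
    changed = sum([new_tech, new_product, new_process])
    return {0: "safe", 1: "likely success",
            2: "non-trivial risk", 3: "likely failure"}[changed]

print(project_risk(new_tech=True))                   # likely success
print(project_risk(new_tech=True, new_product=True)) # non-trivial risk
```

The point isn't the code; it's that risk is a function of how many axes move at once, not of any single choice in isolation.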
[+] [-] ignostic|11 years ago|reply
The author seems to be conflating the cost of innovation with the cost of doing something you're less familiar with, which are not necessarily the same things. The risk of chasing shiny new objects is real, but sometimes those shiny objects can actually reduce costs and time spent to accomplish a goal (like a MVP or new version).
Sometimes it's worth innovating if you already have experience in the area. Sometimes it's worth innovating even if you have to learn and try new things. Sometimes the time/monetary cost of innovation is 0, and sometimes it's so high that you shouldn't innovate even if it improves your product.
This idea of limited innovation resulting in cumulative costs is overly simplistic. The smart founder will recognize the difference between innovations that will yield net returns and those that won't.
[+] [-] nostrademons|11 years ago|reply
Yes, you can look at this in terms of ROI. The author's point is that engineers - particularly ones who have never scaled & maintained a system over years and millions of users - consistently underweight the problems that they've never encountered before. With boring tech, other people have encountered them, and solved them, and you can Google for the answer or pull in a library. With bleeding-edge stuff, when you run into one of these, you have to drop everything you're doing and fix it, because nobody else will.
[+] [-] ZenoArrow|11 years ago|reply
I agree with you completely. Furthermore, I'd add that the view on maintenance is also too simplistic. Effective maintenance requires more than just a tech stack whose limitations are known; you'll also want something testable and refactorable. Ballooning code bases are a real problem, and sometimes the smart move is to clean up the cruft. If you're smart about integrating new tech into your stack, there's no reason you can't end up with a solution that is both more robust and more efficient.
Furthermore, there's the whole scaling issue. Perhaps the mode du jour is just to assume increased server costs (regardless of where they're hosted) are just a necessary part of scaling a website to more users, but rethinking your tech stack can help keep these costs under control. Perhaps this is a decision that can wait until you have a decent userbase, but it's still a good reason to be open-minded about what benefits a new solution could bring.
[+] [-] sheepmullet|11 years ago|reply
First, he is intertwining two separate issues: limiting tech choices in an organisation, and incorporating new technologies. Keep in mind that you can have separate strategies for each.
Secondly, there is no notion as to how big a change a token is worth. Obviously switching languages is a much bigger change than switching caching libraries.
Thirdly, there is no mention of project size. Should a 3 month project get the same tokens as a two year project? This year we have created ~300 microservices. If each was allowed 3 tokens we would have 900 new tech changes in this year alone. That's unmanageable.
Fourthly, what is your organisational strategy and culture? If an engineer prototypes in a new language is that a problem because it is seen as wasteful? Perhaps it is something that will make the other devs jealous? Or is it considered an investment in the company and a risk mitigation strategy? Do you have the kind of engineers and tech leads who will do a lot of this prototyping and experimentation on their own time?
[+] [-] NeutronBoy|11 years ago|reply
For your third point specifically, I think taking a pragmatic view is best. You mentioned you created ~300 new microservices this year. I imagine they're all based on the same pattern, so perhaps your tokens apply to that pattern rather than to each individual project (e.g. you get to change the stack for future microservices). On the other hand, at at least one new service per day, the current approach is obviously pretty efficient for you, so why change it unless necessary?
[+] [-] austenallred|11 years ago|reply
Want to innovate in the way your board is structured or remove standard protections from the term sheet? Or even set up your Twitter account in this never-before-seen way? Want to remove the idea of management, or rethink the way offices work? You lose an innovation token.
[+] [-] _kerbal_|11 years ago|reply
[0] https://eager.io/blog/build-static-websites
[+] [-] freehunter|11 years ago|reply
I just went with Wordpress. My personal blog is not my job, I just want to write down my thoughts. What specific technology would you recommend to generate a static blog with a WYSIWYG editor and picture uploads, on my own server (not S3 or some proprietary paid hosting)?
[+] [-] grandalf|11 years ago|reply
With smaller, more loosely-coupled modules, one can spend a fraction of a token here or there and still revert back to the boring way when necessary.
[+] [-] frandroid|11 years ago|reply
[+] [-] ytimoschenko|11 years ago|reply
The whole article builds on the claim that people tend to fail more when they are using new tools. That claim is false. In reality, when you use wrong but "accustomed" tooling in an inappropriate situation, you end up writing code that you would never write if you had chosen the right tools. You are effectively reinventing the wheel.
You also have this idea of "innovation tokens", which builds on a static estimate of a new technology's weight in a project. That is ridiculous.
There is no definition of "boring" in this article. I don't understand why you call PHP, Postgres, and cron "boring". What is "interesting", then?
It seems like you have made a wrong choice while thinking about the problem. The problem is clear: people fuck up projects by using modern, hyped technologies that are inappropriate for the project's domain. They are just as wrong as you are.
[+] [-] collyw|11 years ago|reply
[+] [-] cubano|11 years ago|reply
My hat is off to its author for having the courage to write it.
[+] [-] tie_|11 years ago|reply
It's plain wrong. Innovation is good for any kind of organization, if done properly. What the author should focus on is the lack of agility that prevents companies from experimenting and failing quickly. It's not the innovative technology that gets you in the end, it's your inability to evaluate/adopt/discard fast. Granted, that ability is hard to find in large-ish organizations, but willingly limiting your innovation sounds like a recipe for a slow death. It's like a gentleman boxer from the 19th century limiting himself to just the jab, cross, and hook while entering a modern MMA fight.
[+] [-] SwellJoe|11 years ago|reply
In short, yes, I'm buying this. I think it's a perfectly sensible analogy; a somewhat leaky abstraction, if you will, since none of us actually have any "tokens" that we are trading in for a new database. But, the meaning is clear to me, and I can't find fault in it.
[+] [-] mcfunley|11 years ago|reply
Wow weird someone should totally think about that, maybe it doesn't even have much to do with the programming language
http://mcfunley.com/data-driven-products-now
http://mcfunley.com/design-for-continuous-experimentation
[+] [-] chetanahuja|11 years ago|reply
[+] [-] meatmanek|11 years ago|reply
[1] https://ramcloud.stanford.edu/raft.pdf [2] https://www.cs.cornell.edu/~asdas/research/dsn02-swim.pdf
[+] [-] beat|11 years ago|reply
[+] [-] code_reuse|11 years ago|reply
[+] [-] reillyse|11 years ago|reply
[+] [-] adventured|11 years ago|reply
[+] [-] brianwawok|11 years ago|reply