My working theory is that the industry is driven by resume-driven development. Recruiters filter resumes with a keyword-based search: 'Do you have React experience?', 'What about Docker?', 'Do you do Ruby on Rails?'. If you cannot show experience in the current hotness, your chances in the job market diminish. So people pick up the new hotness and implement their next solution with that technology whether or not it is the right tool for the job. If we fix the job market, this problem might automatically disappear.
Another solution could be encouraging personal projects. Many companies overwork their employees to the point that they cannot do anything in their spare time. Give employees free time and encourage them to play with new technology in their personal projects. The curious ones will have an outlet to channel their energies, and you will get a rock-solid stack. Also, they will have the knowledge to take you forward when a problem that requires their newfound expertise arises in your organization.
> Another solution could be encouraging personal projects.
Agree. This is necessary to encourage learning, which helps keep people sharp, which in turn benefits the projects that they are working on in the day job. Even if the technology is the "same old".
A couple of decades ago, many software developers were content using the tools they had and knew. Few demonstrated the eagerness and openness to try something new. These few were also the ones who were better at coming up with out-of-the-box solutions, because they were willing to try something new -- both tools and approaches. Now we have reached the point where the bare minimum required of a software developer is familiarity and comfort with the new and shiny, leading to the reverse of the problem we had early on: people spend so much time and energy learning and trying out new tech that they miss the opportunity to stay with a technology long enough to learn from experience.
In other words, today we spend far more time on accidental complexity than on essential complexity.
The root cause of writing unnecessary code is the way the workday and compensation systems of the modern workplace are structured. Employed software and infrastructure engineers are not paid to just drop in the simplest, most appropriate solution, do whatever minor customization or reconfiguration is needed, and get out. They're paid to keep a butt in a chair for at least 8 hours per day, on the presumption that they're engineering that whole time. In reality, workplaces often must make up tasks to keep their employees busy, and using the most efficient solution is disincentivized, because you have to say you're doing something for those 8 hours.
I think the fact that we have the white-collar workforce organized after the assembly line pattern is the primary cause of unnecessary, borderline intentional complication, which includes mixing in unstable/untested components. If we could find a way to compensate engineers fairly without monopolizing their time, we'd be much better off, because the need to continually invent additional work for oneself would go away.
The bliss of a stable, low-maintenance project can be had through side projects, which often work fine for years with minimal modifications. It's really rewarding to go back to the same utility that you've written and know that the comparatively little quantity of time you spent on it is still paying dividends years later. There's an awesome sense of pride attached to that.
> Recruiters filter resumes with a keyword based search
Although true, this is somewhat shooting the messenger. If, as a client of the recruiter, I ask for COBOL developers and all I get are Clojure resumes because the recruiter thinks it's cooler, then the recruiter won't be my recruiter for very long.
Surely where there's an issue, it's that the recruiters are being asked for developers with those attributes. They're presumably being asked because the employer, for good or ill, is seeking developers with those attributes.
Now, I've personally seen four reasons for hiring devs with cutting-edge knowledge: because it's cool, because it's the only way to attract good talent, because we checked and we absolutely need that technology, and because we run a research lab looking into cutting-edge tech we may or may not use.
The latter two are very good reasons to seek hotness. They're also the least likely to be the actual reason, in my experience. Using the shiny new thing to attract talent is often a successful approach. And who doesn't like cool stuff (though it's a terrible justification)?
Almost half of my time in the past few months has gone into interviewing and hiring. I approached the subject with a complete beginner's mindset and tinkered to see what works. Much to my chagrin, it turns out there is a positive selection effect when filtering candidates by resumes listing hot tech. Maybe the smart ones know this increases their employability? Maybe they are naturally more curious? Who knows; it just seems to be a good signal.
I saw this at one large UK telco. Someone was going for a promotion, so they spent a year and a team of 15 people redeveloping an existing Perl system into an Oracle-based one, as Oracle was the company standard.
One of the markers for promotion was managing a team of a certain size and a >1 million budget - so they were gaming the system. Not the best use of the shareholders' money.
The overwork thing has to be cultural. I'm sure they think they're minimizing costs, but I doubt the additional labor costs more than the additional time of the current staff doing it all alone while stressed out. The "stressed out and so on" adds up to negatives that your company will pay for.
I came from learning all about toothpicks and glue, then seeing every problem solved by building things with toothpicks and glue. It was quick and fun at first, but my creations were brittle and sometimes even dangerous. Gradually I became aware of duct tape, saws, plywood, nails, hammers, steel, etc., then pre-fabricated parts. Eventually I even ended up buying parts from vendors that followed compatible standards, so I could build things faster by piecing them together. In the end, trying to come up with things that were already made seemed a stupid waste of my time, but unfortunately, that idea only seems to make sense to people who have gone through the same.
But it ain't all bad, see. I hear some young folks are coming up with brand new types of toothpicks and glue, hell some of em' I hear put themselves together.
I had a new developer decide to make his own charting library. It seemed like the best idea was to let him try it and learn why it was a bad idea. Looking back, I think many other people took the same approach with me.
Honestly, I think programming is at the point where apprenticeships are a good idea. Thirty years ago the tools were changing too fast; eventually we can codify the correct body of knowledge to have useful formal education. But education seems to default to handing people a bunch of tools and getting them to learn on their own.
Not sure what the article's point is. That you shouldn't write an application if you can use an off-the-shelf tool? Who does that?
I'm paid to write code because someone wanted an application. I don't think I could tell the customers who buy our application that they should learn some Bash or Excel and solve their business problems themselves.
I assume that under some rare circumstances a developer isn't asked to write an app (site, CAD program, OS, ...) but instead to solve some business problem internal to a business; then you have the option of not writing code.
I think his point is that engineers tend to favor complicated solutions, and that they shouldn't do that. Every line of code you write or modify is potential technical debt that you'll have to own. The author appears to indicate that this is bad, and also appears to recommend making wider use of pre-existing solutions to prevent it. That's fine as far as it goes, but it's a simplification.
The piece does feel incomplete in that it doesn't discuss the incentives behind the decision to "write code" or not.
The simple fact that we always seem to come back to is that once you learn the basics, good engineering just comes down to good judgment, which is only gained through years of dedicated trial and error. Is it better to write some glue code to bounce data between 8 different extant programs and make final transformations or to write your own program that handles the process soup-to-nuts? The answer is now and will always be "it depends" (and quite frequently, it involves elements of both). We need to make sure that the social incentive structures align with the engineering goals (i.e., don't tell someone that if they make something stable and low-maintenance, they'll unemploy themselves) and then we can trust people to do good, iterative work.
Your job is to solve business problems. You happen to use code as the tool to do so because it lends itself to building better, more reliable, more scalable solutions.
Nobody pays for a lump of code, they pay for a tool that solves a problem they have.
The major problem of bringing in external technologies is that they take over your architecture, not that they might introduce bugs.
There is no catch-all architecture, so there is guaranteed to be some impedance mismatch between the expectations of your project, and the provisions of the 3rd party tool. Heaven help you if you need the facilities of multiple architectures, and try to marshal and connect disparate datatypes and calling/threading assumptions together.
As programmers, we work with general purpose programming languages. Many project-specific problems are not difficult to solve in a custom manner, given somebody with enough experience and hindsight to know how to write such a forward-looking thing robustly. It is a serious consideration whether or not to defer your architecture to generic external sources that were not written with your unique needs in mind. And even if you do, it is by far a best practice to ensure that such things live behind an application-specific abstraction separating out your project-specific code from entangling 3rd party code, allowing you to perform the inevitable migration to a different platform later.
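A minimal sketch of that kind of application-specific abstraction, done here as a shell wrapper since the thread is about shell tooling. The function name, the `aws s3 cp` backend, and the bucket name are all made up for illustration; the point is only that call sites depend on your interface, not on the third party's:

```shell
# store_artifact: an application-specific interface over a third-party tool.
# Callers depend only on this function's contract; the inevitable migration
# to a different platform means changing the two variables below, not every
# call site. The aws invocation and bucket name are hypothetical.
set -eu

ARCHIVE_CMD=${ARCHIVE_CMD:-"aws s3 cp"}
ARCHIVE_DEST=${ARCHIVE_DEST:-"s3://example-artifacts"}

store_artifact() {
    file=$1
    name=$(basename "$file")
    # Today's backend happens to be S3; tomorrow's could be rsync or GCS.
    $ARCHIVE_CMD "$file" "$ARCHIVE_DEST/$name" && echo "archived $name"
}
```

Swapping the backend for plain `cp`, `rsync`, or anything else touches only the two variables, which is the whole point of keeping the entanglement in one place.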
This is an issue we're grappling with at present, as we move all our sites out to AWS: do we build our apps to use the incredibly tempting AWS services, or do it ourselves?
Our present approach is "hell yeah, if it can be outsourced it should be" and that applies to services too. We do keep an awareness of what it would take to do each thing by hand again though.
This is employer-centric in my opinion - sure, you're not paid to write code, and sure, you can reuse an existing code base in new and exciting ways.
But - there's also value in learning new things, in new skills, in new toolsets. Maybe it's not best for the employer, but for you the employee it can help quite a bit.
There's a fine balance - ideally your employer encourages you to learn things on the job, and to try new things that may not necessarily make it to production, but improve your skillset overall.
Agreed. Employees are investors. They invest their time and future technical direction. Making sure those investors get an acceptable return on their investment is just good business. Encouraging side projects, mentoring, and growth is a way to improve the returns on the investment employees make in their employers and their customers.
That's an interesting take. I think it's important to also remember there's a fine line between breadth and depth of learning in this field. So-called 'Taco Bell' programming does encourage you, in a way, to learn the basic tools in greater depth than you perhaps would if you weren't forced to.
> In fact, code is a nasty byproduct of being a software engineer.
This is the core of the matter.
This is something Dijkstra, Edward Cohen, Jayadev Misra, and others in and around the formal methods camp have been saying for decades. It is more worthwhile to solve the problem than to guess your program into existence and patch the errors later. To dismiss them is to say you do not appreciate the true difficulty of programming and designing systems.
In practice I don't write formal proofs for every line of code -- how exhausting! But I do often write high-level specifications in tandem with the software implementing them, and often each informs the other as to the true nature of the problem I'm trying to solve. And I find that as I improve in different logics and the predicate calculus, I can spot the design errors in the structures formed by code that you don't see if all you're doing is trying to "solve the puzzle."
The whole approach of guessing your program into existence and patching the bugs later is far too addictive. It saddens me when otherwise good programmers fall into this trap. It creates work for oneself, but it keeps you from solving the real problem at hand!
Nice article. Not sure about the allure of "Taco Bell," but the spirit is in the right place.
Where can I read more about this? "Formal methods"? Being able to write proofs and tests/frameworking around things that I need to write would help me be more confident in what I do and also help me keep myself on track/not end up going down rabbit holes.
Every now and then we'll see an HN thread that asks something like: what do you know now that you wish you would have known when you started programming?
This. This is the thing that took the longest to sink in and had the biggest impact. There were a lot of cool languages, tools, platforms, and systems along the way, and I was stoked picking up each one and coding -- but after decades of that, I realized I was focusing on the wrong thing.
I think the thing that convinced me was when I got to start watching lots of technology teams in the wild across multiple organizations. So many times I would see conversations and tons of problem-solving effort being spent on the tools to solve the problem instead of the problem itself.
A couple of years ago I was teaching tech practices to a team that was deploying on iOS, Droid, and the web. After we went over TDD, ATDD, and the CI/CD pipeline, I emphasized how crucial it was not to silo. When I finished, a young developer took me aside.
"You don't understand. We have coders here who just want to do Java. They want to be the best Java programmers they possibly can be."
I told him that he didn't understand. Nobody gave a rat's ass about people wanting to program Java. They cared about whether the team had both the people and technical skills to solve a problem they have. It would be as if a carpenter refused to work on a cabinet because it wouldn't involve using his favorite hammer. You're focusing on the wrong thing.
Sadly, once you get this, the industry is all too happy to punish you for it. That's because the resume/interview/recruitment world is interested in buzzwords and coolkids tech, not actually whether or not you can make things people want. This means sadly, in a way, if you continue growing, it's entirely possible to "grow out of" programming.
> I think the thing that convinced me was when I got to start watching lots of technology teams in the wild across multiple organizations. So many times I would see conversations and tons of problem-solving effort being spent on the tools to solve the problem instead of the problem itself
Your insight really hit home, especially since I've been in a new software dev position for about a month now. This is the overwhelming issue that has already reared its ugly head; it is massively frustrating, and I feel (nearly) powerless to stop it.
This is a good article, and the quote in the middle is absolutely amazing -- it belongs up there with some of the most insightful quotes about software ever (and it was not even directly about software).
I sum the quote up as, "Systems are sentient beings like the One True Ring, and they will absorb you. Soon, though you believe you are thinking freely, you will actually be merely a part of the System, thinking what it wants you to think." So true!
But ... Taco Bell programming still creates a new system. That's a flaw in the article's premise.
If you solve a problem by stringing together 11 tools, then yes, you get some benefit from reusing preexisting tools. But now you have a system with some Rube Goldberg characteristics, plus you've written a bunch of "glue code" (which is "new code") in the process.
Those systems can often turn out to be more complex.
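To make that concrete, here's a made-up example of such glue: a "reuse everything" pipeline for summarizing an access log. Every stage is a proven tool, yet the combination is still a new system with its own hidden assumptions (the log format and the field positions are assumptions of this sketch):

```shell
# top_clients: report the three most frequent client IPs in an access log.
# Each stage is battle-tested, but the pipeline as a whole is new code with
# assumptions of its own: whitespace-delimited logs, IP in the first field.
set -eu

top_clients() {
    awk '{ print $1 }' "$1" |  # assumed: field 1 is the client IP
        sort |                 # group identical IPs together...
        uniq -c |              # ...so uniq can count each one
        sort -rn |             # most frequent first
        head -n 3
}
```

Change the log format and every stage downstream of `awk` quietly produces garbage, which is exactly the Rube Goldberg fragility being described.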
That only applies to web dev and to a subset of user-mode PC, server and mobile software. While I understand that represents the majority of the software being developed, that article is IMO an incorrect generalization.
Some of us do CAD/CAM/CAE, where the proven tools either don't exist or are targeted at companies like Boeing and GM and cost a fortune.
Some of us work on console games. The environment is pretty close to the bare metal/hypervisor, and typically, there’re significant costs involved in integrating any third-party stuff.
Some of us program embedded firmware. Only recently has the hardware become fast enough to reuse those proven tools ported from the PC; you can't use them on a PIC16, there just aren't enough resources.
That's just what I've personally done, or am doing, over my 16 years in the industry. I'm sure there are other fields where the proposed approach is inapplicable for various reasons.
It's pretty amusing when you have to keep repeating to a client that the off-the-shelf tool is better, cheaper, and immediately available to them, yet they still argue for the development of a new system. So I don't think this push toward custom builds comes only from the developers' side of things.
And another thing worth pointing out is: most employers hire you to write code. That's the job spec. So don't be surprised that that's what we end up doing.
Wake me up the next time a 5-line shell script in Bash that uses only standard tools and runs off of any default Debian sells with a license cost of $20 (only) -- you know, for the 5 lines, not anything else -- and anyone pays it.
Doesn't matter what it does. It is literally impossible to sell even $20 worth of the solution the person advocates. Go ahead: link me to anywhere in the web selling a 5-line bash file without anything else. I'm waiting.
You can do a lot in 5 lines, too. lalalala. waiting. While I wait I think I'll make some money coding something people actually pay for. /s
Seriously though, while the author's point might stand in many sysadmin and even systems-integration roles, most of the software world actually pays for deliverables, the clearest example being consumers doing so. People would rather spend 20 hours cursing, and then give up, than pay for a systems-integration script that generalizes and solves their problem. It's what the market demands. This article really would make sense if it came from the person paying -- but it doesn't. Nobody who is paying actually says the words the article chose for its title. Yes, you are paid to code.
> Wake me up the next time a 5-line shell script in Bash
Or when somebody can write a 5-line shell script in Bash that:
a) typical non-programmers can and will use
b) produces enough diagnostic information that anybody (programmer or not) can use to troubleshoot when something inevitably goes wrong - whether it's user input, network connectivity, a missing dependency...
I get what he's saying, but he's talking about a pretty small subset of what we really do.
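For what it's worth, a sketch of the hypothetical five-liner being argued about (a disk-usage alert invented for illustration), and point (b) is exactly where it stops being five lines:

```shell
# check_usage: read `df -P`-style output on stdin and report any filesystem
# at or above a usage threshold. A made-up task, but representative: the
# happy path fits in a few lines; user-proofing and diagnostics do not.
set -eu

check_usage() {
    threshold=${1:-90}
    awk -v t="$threshold" '
        NR > 1 {                 # skip the df header line
            use = $5 + 0         # "95%" -> 95
            if (use >= t) print $6, use "%"
        }'
}
```

Run as `df -P | check_usage 90`. Handling bad input, a vanished mount, or producing output a non-programmer can troubleshoot is precisely the part that doesn't fit in five lines.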
I'm not 100% sure why you're asking for a link to a product when I think it's fairly clear that he's talking about business solutions. Or to give a weak example: "we have system X and system Y and need them to talk."
After you write something like that you don't turn around and sell it on the web.
Me, I'm paid to translate someone's idea from one language to another. The value to the business is already understood by the one conveying the original idea. My only job is to ensure that the entity on the other side fully understands what was originally communicated, without hindrance from language barriers.
You know those occasional threads about engineer burnout we see and post on HN? Well, I actually feel that when you are "burned out" and writing complex systems doesn't feel "fun" anymore, you actually become a better engineer - exactly for the reason described in this post.
I can't emphasise how true this is. I've written tonnes of temporary scripts that parse some files or rename some directories, and then later discovered that I could have done it with a single UNIX command.
I'm not employed as a professional software developer, so I still don't actually use the helpful UNIX commands. Takes all the fun out of it. :P
The other day I had problems with grep, so I rewrote it in JavaScript... I could have used awk, but at least my script is more than twice as fast as awk. I think it's some sort of procrastination that comes with being self-managed. Reminds me that I should get off HN and start working.
If you see yourself as a software developer instead of problem solver, you tend to solve all your problems by writing code. In many cases problems can be solved by adjusting processes, improving culture, education or just by learning to use the current tools in more efficient ways.
Heh. It's much easier to write a kludge and patch it 500 times than perform a culture change at my place. I'll get rewarded for "fixing" the application and be seen as valuable in the former, and quite easily despised in the latter.
"Developer" rather than mere "coder" should mean you're aware of the full import of the system from requirement to end user deployment and production. Sometimes it does.
Can you dig a canal with a tried and true shovel? Yes, you can, but you will need a lot of time and many shovels.
Can you cook up a bioinformatics analysis in shell and Awk? Yes, you can, but if you want a large-scale analysis done in a reasonable amount of time, you roll up your sleeves and reach for compiled languages, distributed systems, and so on.
There are downsides to introducing new systems, but they should be weighed against the upsides of suitability and efficiency.
> Nobody pays for a lump of code
Lots of people? Everyone who ever bought, downloaded, or otherwise acquired any software not written by themselves?
[+] [-] white-flame|9 years ago|reply
There is no catch-all architecture, so there is guaranteed to be some impedance mismatch between the expectations of your project, and the provisions of the 3rd party tool. Heaven help you if you need the facilities of multiple architectures, and try to marshal and connect disparate datatypes and calling/threading assumptions together.
As programmers, we work with general purpose programming languages. Many project-specific problems are not difficult to solve in a custom manner, given somebody with enough experience and hindsight to know how to write such a forward-looking thing robustly. It is a serious consideration whether or not to defer your architecture to generic external sources that were not written with your unique needs in mind. And even if you do, it is by far a best practice to ensure that such things live behind an application-specific abstraction separating out your project-specific code from entangling 3rd party code, allowing you to perform the inevitable migration to a different platform later.
[+] [-] davidgerard|9 years ago|reply
Our present approach is "hell yeah, if it can be outsourced it should be" and that applies to services too. We do keep an awareness of what it would take to do each thing by hand again though.
[+] [-] binalpatel|9 years ago|reply
But - there's also value in learning new things, in new skills, in new toolsets. Maybe it's not best for the employer, but for you the employee it can help quite a bit.
There's a fine balance - ideally your employer encourages you to learn things on the job, and to try new things that may not necessarily make it to production, but improve your skillset overall.
[+] [-] humanrebar|9 years ago|reply
[+] [-] drvdevd|9 years ago|reply
[+] [-] agentultra|9 years ago|reply
This is the core of the matter.
This is something Dijkstra, Edward Cohen, Jayadev Misra, and others in and around the formal methods camp have been saying for decades. It is more worthwhile to solve the problem than to guess your program into existence and patch the errors later. To dismiss them is to say you do not appreciate the true difficulty of programming and designing systems.
In practice I don't write formal proofs for every line of code -- how exhausting! But I do often write high-level specifications in tandem with the software I'm using to implement it and often one informs the other to the true nature of the problem I'm trying to solve. And I find that as I improve in different logics and predicate calculus I can find and spot the design errors in the structures formed by code that you don't see if all you're doing is trying to "solve the puzzle."
The whole approach to guessing your program and patching the bugs later is far too addictive. It saddens me when otherwise good programmers fall into this trap. It creates work for oneself but it keeps you from solving the real problem at hand!
Nice article. Not sure about the allure of "Taco Bell," but the spirit is in the right place.
[+] [-] rpazyaquian|9 years ago|reply
[+] [-] DanielBMarkham|9 years ago|reply
This. This is the thing that took the longest to sink in and had the biggest impact. There were a lot of cool languages, tools, platforms, and systems along the way, and I was stoked picking up each one and coding -- but after decades of that, I realized I was focusing on the wrong thing.
I think the thing that convinced me was when I got to start watching lots of technology teams in the wild across multiple organizations. So many times I would see conversations and tons of problem-solving effort being spent on the tools to solve the problem instead of the problem itself.
A couple of years ago I was teaching tech practices to a team that was deploying on iOS, Droid, and the web. After we went over TDD, ATDD, and the CI/CD pipeline, I emphasized how crucial it was not to silo. When I finished, a young developer took me aside.
"You don't understand. We have coders here who just want to do Java. They want to be the best Java programmers they possibly can be."
I told him that he didn't understand. Nobody gave a rat's ass about people wanting to program Java. They cared about whether the team had both the people and technical skills to solve a problem they have. It would be as if a carpenter refused to work on a cabinet because it wouldn't involve using his favorite hammer. You're focusing on the wrong thing.
Sadly, once you get this, the industry is all too happy to punish you for it. That's because the resume/interview/recruitment world is interested in buzzwords and coolkids tech, not in whether you can actually make things people want. This means that, sadly, in a way, if you continue growing, it's entirely possible to "grow out of" programming.
NobleLie|9 years ago
Your insight really hit home, especially since I've been in a new software dev position for about a month now. This is the overwhelming issue that has already reared its ugly head; it is massively frustrating, and I feel (nearly) powerless to stop it.
scotty79|9 years ago
Writing code is simply the only way to do the above, since all attempts at making it less text-file-driven have failed so far.
charlieflowers|9 years ago
I sum the quote up as, "Systems are sentient beings like the One True Ring, and they will absorb you. Soon, though you believe you are thinking freely, you will actually be merely a part of the System, thinking what it wants you to think." So true!
But ... Taco Bell programming still creates a new system. That's a flaw in the article's premise.
If you solve a problem by stringing together 11 tools, then yes, you should get some benefits from reusing preexisting tools. But now you have a system with some Rube Goldberg characteristics, plus you've written a bunch of "glue code" (which is "new code") in the process.
Those systems can often turn out to be more complex.
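To make the glue-code point concrete, here is a minimal sketch (the log format and contents are invented for illustration): the "reuse preexisting tools" part is a single pipeline, while the temp-file handling, cleanup, and input checks around it are all new code.

```shell
# Hypothetical example: count requests per HTTP status code by chaining
# awk | sort | uniq. The pipeline is one line; everything else is glue.
log=$(mktemp)
trap 'rm -f "$log"' EXIT          # glue: clean up the temp file on exit
cat > "$log" <<'EOF'
GET /index 200
GET /missing 404
GET /index 200
EOF
[ -s "$log" ] || { echo "error: log is empty" >&2; exit 1; }   # glue: input check
counts=$(awk '{print $3}' "$log" | sort | uniq -c | sort -rn)  # the actual "work"
echo "$counts"
```

Only one of those lines does the analysis; the rest is glue, and the glue is where the Rube Goldberg complexity accumulates.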
Const-me|9 years ago
Some of us do CAD/CAM/CAE, where the proven tools either don't exist or are targeted at companies like Boeing and GM and cost a fortune.
Some of us work on console games. The environment is pretty close to the bare metal/hypervisor, and typically there are significant costs involved in integrating any third-party stuff.
Some of us program embedded firmware. Only recently has the hardware become fast enough to run those proven tools ported from the PC; you can't use them on a PIC16 -- just not enough resources.
That's just what I personally have done (and am doing) over my 16 years in the industry. I'm sure there are other fields where the proposed approach is inapplicable for various reasons.
aswerty|9 years ago
And another thing worth pointing out is: most employers hire you to write code. That's the job spec. So don't be surprised that that's what we end up doing.
logicallee|9 years ago
Doesn't matter what it does. It is literally impossible to sell even $20 worth of the solution the person advocates. Go ahead: link me to anywhere on the web selling a 5-line bash file without anything else. I'm waiting.
You can do a lot in 5 lines, too. lalalala. waiting. While I wait I think I'll make some money coding something people actually pay for. /s
Seriously though, while the author's point might stand in many sysadmin and even systems integration roles, most of the software world actually pays for deliverables -- the clearest example being consumers doing so. People would rather spend 20 hours cursing and then give up than pay for a systems integration script that generalizes and solves their problem. It's what the market demands. This article would really make sense if it came from the person paying -- but it doesn't. Nobody who is paying actually says the words the article chose for its title. Yes, you are paid to code.
clifanatic|9 years ago
Or when somebody can write a 5-line shell script in Bash that:
a) typical non-programmers can and will use, and
b) produces enough diagnostic information that anybody (programmer or not) can use to troubleshoot when something inevitably goes wrong -- whether it's user input, network connectivity, a missing dependency...
I get what he's saying, but he's talking about a pretty small subset of what we really do.
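As an illustration of that point (a sketch with invented function and file names): even a tiny "scan the log for errors" script, once it has to explain its failures to a non-programmer, spends most of its lines on diagnostics rather than on the one grep that does the work.

```shell
# Hypothetical sketch: the one line of "real work" (the grep) is
# outnumbered by the diagnostics a non-programmer would need.
check_log() {
  [ -n "$1" ] || { echo "usage: check_log <file>" >&2; return 64; }
  [ -r "$1" ] || { echo "error: cannot read '$1' (wrong path? permissions?)" >&2; return 2; }
  n=$(grep -ci 'error' "$1")      # the actual work: count error lines
  echo "$1: $n error line(s)"
}

# demo on a throwaway file
tmp=$(mktemp)
trap 'rm -f "$tmp"' EXIT
printf 'all ok\nERROR: disk full\n' > "$tmp"
check_log "$tmp"
```

Two of the function's four lines exist purely so that someone who is not a programmer gets a usable error message instead of a silent failure.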
orly_bookz|9 years ago
After you write something like that you don't turn around and sell it on the web.
libeclipse|9 years ago
I'm not employed as a professional software developer, so I still don't actually use the helpful UNIX commands. Takes all the fun out of it. :P
douche|9 years ago
Computers do what you tell them to, most of the time.
lolc|9 years ago
http://benjiweber.co.uk/blog/2016/01/25/why-i-strive-to-be-a...
mynegation|9 years ago
Can you dig a canal with a tried and true shovel? Yes, you can, but you will need a lot of time and many shovels.
Can you cook up a bioinformatics analysis in Shell and Awk? Yes, you can, but if you want a large-scale analysis done in a reasonable amount of time, you roll up your sleeves and reach for compiled languages, distributed systems, and so on.
There are downsides to introducing new systems, but they should be weighed against the upsides of suitability and efficiency.
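A toy version of the kind of analysis meant above (the sequence and file are invented): computing the GC content of a FASTA file is a one-liner with the tried-and-true tools, and it is exactly the kind of job that stops scaling once the input grows from one small file to thousands of genomes.

```shell
# Toy sketch: GC content of a (made-up) FASTA file with plain awk.
# Fine for one small file; for terabytes you'd reach for compiled,
# distributed tooling, as argued above.
fa=$(mktemp)
trap 'rm -f "$fa"' EXIT
printf '>seq1\nACGTGGCC\n' > "$fa"
gc=$(awk '!/^>/ { n += length($0); g += gsub(/[GCgc]/, "") }
          END   { printf "%d", 100 * g / n }' "$fa")
echo "GC content: ${gc}%"
```

Note that `length($0)` is taken before `gsub` rewrites the line, so the base count stays correct.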