I remember when I first added ESLint to our large JavaScript codebase, and then again when I added Flow. I'm a perfectionist when I code, but it was shocking to see some of the things I'd written. Mistakes that I thought were so unlike me, but had been sitting there for months anyway, waiting to blow up.
The human brain wasn't designed to handle the complexity of large codebases. There are just too many compounding implications of everything for us to hold them all in mind at once. The very best coders miss things all the time.
Instead of seatbelts, I would compare tool assistance to aviation instruments. It's not primarily something that just catches you before a disaster; it's something that helps you make the most effective use of your mental resources. We don't ask pilots to fly commercial airliners without advanced software and dozens of readouts and displays to guide their attention and decision-making in an overwhelming sea of information. We shouldn't expect programmers to work without the same kind of support.
The comparison with aviation instruments is apt: I remember seeing a TV special about how, in the early days of aviation (circa WWI), the pilots' culture of seat-of-the-pants flying and bravado was resistant to suggestions that human senses are just not equipped to differentiate between certain inertial reference frames, one of which leads to a death spiral.
It was not until an early aviation safety pioneer came up with a demonstration on the ground (something like a spinning chair and blindfold - I forget the details) that pilots who went through the demonstration were shocked to discover they "fell for" the physical illusion they believed they were immune to.
I feel like programming is in the same state today. Especially the more C/C++ code I see in my career: I'm absolutely convinced humans should just not be expected to manually manage some of the things they try to manage now.
The first time I ran a static analyser over an old codebase of mine it was deeply humbling.
I'm a competent programmer who works deliberately and carefully and the tool still went "did you really mean to do this stupid thing?".
Of course sometimes you do mean to do the stupid thing because there is a good reason for it, and as long as the tool gives you the ability to say // @allow-stupid-thing: punch-myself-in-face, it's all good.
Since then I throw as many linters, formatters and static analysis tools as I can reasonably get my hands on at every codebase.
It's amazing how little we talk about tooling, both in work and in education.
I had an internship where I kept trying to work on better tooling for the developers, only to be told that I should be working on the actual product because that's more important. Except, if I make the other developers' lives even 5-10 percent easier, that will have a much longer-lasting effect than whatever work I can get done in 3 months. The most useful contribution I made in that internship was a document explaining all the features of the product with images, short videos, and both the internal names and the client-facing names.
And in school? Forget about it. Nobody shows kids how to use IDEs or editors properly. Nobody teaches git or command-line tools properly. Nobody learns how to set up linters or, heck, how to turn on -Wall -pedantic. IMO, every systems class should have a first lecture that explains Valgrind, proper compiler flags and Makefiles, and a last lecture that explains Rust. Realistically most students are not going to use Rust, but just understanding that it exists as a possibility can be useful.
Humans struggle with large codebases, but by restricting what's allowed to certain well-understood, prearranged patterns, and by requiring them to write specs and docs, those codebases become possible to handle.
"The human brain wasn't designed to handle the complexity of large codebases."
Because I'm a very simple bear, I always try to find the simplest thing that works. Because I'm certain that the next programmer who comes along, most likely future me, will see my efforts and think "WTF?!"
Another facet is our human mind's propensity to make errors. Instead of blaming stupid users, just deal with it. Design artifacts and systems which prohibit errors.
FWIW, Donald Norman's book Design of Everyday Things completed my transformation from technophile to humanist. (I don't know how well it's held up since. Or what knowledge now supersedes it.)
"I would compare tool assistance to aviation instruments."
Great analogy. Many, many designers (of APIs, frameworks, UIs, simulations) conflate abstractions with mental models. Our tools should make the underlying systems obvious, vs trying to hide the complexity. And if something remains too complex to explain, try again, until a mental model that is simple and obvious is found.
The contrarian argumentation that I see on HN is always something to behold (I view it as a positive!).
An article comes out yesterday about programmers being a problem: the comment section points in another direction.
An article comes out today saying that programmers aren't the main problem: the comment section points out how awful all of us were, if we just go back far enough in time.
My main takeaway: Programming is hard.
Not only with dynamic types, where more bugs are of the "very stupid" kind.
I ran the linter against my Rust code, and damn.
However, this also proves how much better static typing is: the Rust linter provides hints about more serious stuff, not things the type system already prevents!
In race cars (though maybe not F1), I’m told that the inertia of the driver would throw them out of the car on those turns if not for the safety equipment.
I think you can make the analogy work for professional drivers, but not for amateurs.
In general, in favor of as many correctness and other checks at compile time as possible. Make tools as powerful as possible. I really liked this tweet:
"What if...
- your programming language required you to write useful docs,
- using those docs, it checked your program for mistakes,
- it even used the docs to speed up your program,
- this feature already exists!
And what if it was called static typing."
- https://twitter.com/DWvanGeest/status/1092095822559358976
Although I mostly agree with you, it's worth noting that static typing only goes so far. When I worked in a large-scale Java codebase, it always seemed like half the code only existed in order to "work around" the type system. (Don't even get me started on Spring, which might as well be a whole additional language on TOP of the actual application code.) I'm perfectly willing to grant that might just be a Java thing, but still, it was a huge and constant frustration.
Because of the above, types really don't seem like "useful docs" to me. I get extremely irritated when a library links to its API documentation and it's all just autogenerated lists of methods which proudly state their parameter and return types but say absolutely nothing about what they do.
That's one major thing I've always appreciated about the Ruby and Javascript ecosystems: because they can't rely on type information, most libraries go out of their way to describe how they work in depth.
I'm willing to grant that I didn't have a ton of experience with Java before moving on to other languages, and so I could very well be wrong... but one thing that definitely worries me about Typescript is the increasing Java-fication of browser languages.
Now, all that being said, I gave Crystal [0] a spin the other day and it's a freaking dream. Ruby, but with types, and compiled to LLVM? YES PLEASE.
0: https://crystal-lang.org/
> “The problem isn’t the use of a memory unsafe language, but that the programmers who wrote this code are bad.”
This really goes to show how anti-worker the media is even among high-skill jobs.
In my mid 20s I now mentor a bit in coding. I’ve worked with young devs that inherently know many of the obvious security pitfalls that have caused massive security breaches a la Equifax.
Are devs at the front of these breaches bad? I don’t believe so. Many went through grueling multi-part interviews to land the position.
No, the issue is a management one. Anti-worker is a meme in America, and this is just another manifestation of the disingenuous “lack of tech talent” whining used to recruit more H-1B workers.
It’s also why unrealistic growth targets cause tech workers in gaming to face layoffs despite record-breaking sales.
Well, although it could be anti-worker, it could also be one-upmanship among programmers with fragile egos: I'm not a bad programmer; they are.
And I think ego is a barrier to accepting tools that prevent errors. To accept the tool is to admit to yourself and others that you are going to make the error (sometimes) without it.
> This really goes to show how anti-worker the media is even among high-skill jobs.
The quote was regarding "social media" as in mostly the workers themselves, not "the media" as in the independent journalists reporting on the workers.
I think you're agreeing with the article in the end, and I also agree with it, but I'd like to note two things:
- Grueling interviews are not necessarily correlated with skill. In particular, I've done a lot of whiteboard coding and a handful of take-home exercises, and in not one has anyone cared either whether I was writing buffer overflows or whether I was reaching for a buffer-overflow-prone style. On the whiteboard, on either side of an interview, it's generally loose pseudocode with the understanding that errors are not to be checked, that syntax doesn't need to be exact, and that the point of the exercise is not whether you had an off-by-one in a calculation. So the skills of either being awesome enough to write bug-free code in security-sensitive environments or humble enough to use tools (languages, analyzers, sandboxes, whatever) to protect yourself from inevitable mistakes are not tested at all.
- "Lack of tech talent" generally refers to the number of people being insufficient for the job, not the quality of the existing people. It is entirely possible to believe that the problems of insecure code are not a level-of-talent issue and that we still face a lack of tech talent. (I have certainly never heard it claimed that the problems we face with insecure code are that too many Americans are bad at coding and we need to hire smarter foreigners who don't write bugs.)
Even if the problem was "bad coders", the answer is still better tooling.
The tooling we have now is pretty poor, in general. Sure, we can catch certain classes of bugs. Many bugs still occur in areas that aren't covered by those classes, even in the most robust languages.
Obviously memory-safe languages would help enormously. I also think that compilers are still in their infancy.
>> I wanted to avoid spawning a thread and immediately just having it block waiting for a database connection...
>> The problem is that the database connection would sometimes use a re-entrant mutex when it was acquired from the pool...
>> with a normal mutex we would be fine, since only one lock can exist and it doesn’t matter if we unlock it on a thread other than the one we locked it from...
>> Fundamentally, we just can’t have a re-entrant mutex be involved and also be able to pull the connection from the pool on a different thread than it is being used...
Truly good coders are very rare because it's not just about mastering all the available tools and abstractions, it's also about the ability to come up with abstractions that make it simple for anyone looking at the code to understand what is happening.
Good coders can write simple code to do complex things.
I can see how this article might be comforting for some, but "bad coders" is still a problem. There are a lot of amateurs in this industry (even ones with years of "experience").
You can create an extremely opinionated language which doesn't give any freedom for creativity, solutions and expression, and devs will still find a way to duck everything up.
IMHO, OP saw a problem but didn't arrive at the right solution.
I don't see how the article is trying to be "comforting," or dismissing the value of skill and education. I also don't see how the `Send` trait featured in the article "doesn't give any freedom for creativity."
I read it as claiming that skill and education are insufficient to prevent these bugs, and that automation is still valuable no matter how skilled the programmer is.
It's good to have bad coders. Not everybody can have an exceptional IQ.
Not everybody can be good, but everybody deserves a chance to be a coder. There are real problems that can be solved by a bad coder as well, that may help your life some day. Great coders are needed for large, scalable, hard problems, but there are little things they don't have time to work on.
Another take is that DESPITE there being a lot of 'amateurs' in the industry, the world generally seems to be humming along as normal, if not doing very well.
The two factors are equally important; it's just that in certain circles, tooling and language assistance are undervalued and already-good programmers are told to "just get good".
One thing that clouds the issue here is that it really is true that better coders do write fewer of these kinds of bugs. That's a real correlation and not a fictional thing.
So I think what we need to do is acknowledge that but keep it in perspective. While better coders write fewer security bugs, even the best programmers still write some. So it can't be our only line of defense.
Also, suppose for the sake of discussion it were true that top programmers did write zero security bugs. Realistically, as an organization, how would you ensure you employed nothing but exclusively these programmers? Even if you make it a top priority, you can't guarantee it.
Especially since the way you become a great programmer is by starting off as a less-great programmer and getting experience. And while you're doing that, you're churning out software which is by definition written by a less-great or not-yet-great programmer.
And to take it further, let's just consider that we as programmers have a lot of work to do. The current scope of "things people are writing code for" is nowhere near the total scope of possible useful things we could be working on. We want to enable more people, even if they are not "the best" programmers, to build software, safely.
People who suggest "well, just hire better programmers" are incredibly naive and probably have never actually had to deal with the challenges around hiring programmers.
In fact, I think it's the case that the most consequential security issues tend to come from the best programmers. That's because the best programmers tend to be the ones working on high-impact projects such as the Linux kernel. The more widely used the software, the more impact the security issues in that software have.
Way too many developers are working on proprietary implementations of what is fundamentally a content management system. We should be creating half a dozen of these things and doing a little light customization and a few add-ons. Instead we have a couple, and devs look down on people who use them.
Either we are lousy at picking projects, we have a broken culture, or the problem is that we have too many developers and so people can ramp up a bunch of projects that have already been done hundreds of times elsewhere.
We’ve added lanes to the proverbial highway and traffic just gets worse.
Somewhat hilariously, the problem isn't lack of talent in programmers, but lack of talent in companies. Someone on this thread claims everyone is being stolen by top tech companies, that means it's you who needs to make your business more attractive, not that there's a shortage of coders.
Experienced programmers know they don't have to settle for less.
>> With a normal mutex we would be fine, since only one lock can exist and it doesn’t matter if we unlock it on a thread other than the one we locked it from.
Sorry if this is a dumb question, but I'm confused. Aren't mutexes always supposed to have ownership which implies that only the locking thread can unlock them?
> Sorry if this is a dumb question, but I'm confused. Aren't mutexes always supposed to have ownership which implies that only the locking thread can unlock them?
Mutexes are supposed to ensure that exactly one thread can access a resource ("have the lock") at a time. There's no fundamental reason you can't pass the lock from one thread to another, as long as they don't both have it at once. But it may not be supported by the particular mutex implementation. It's not supported by the recursive mutexes the author was using, and I'd bet there are also non-recursive mutex implementations which don't support it. I agree with the author that it's great Rust can catch this sort of mistake.
btw, I think recursive mutexes and handing off locks are bad ideas. Both for the same reason: I want short critical sections to improve contention.
* Code that uses recursive mutexes tends to be sloppy about this; it's unclear from reading a section of code whether it even has the lock or not. (This also sounds like a recipe for deadlock when you need multiple locks.) I'd much rather structure it so a given piece of code is run only with or without the lock. In C++, I use lock annotations [1] for this. If I need something to be callable with or without the lock, I might have a private "DoThingLocked()" bit, and a public "DoThing()" bit that delegates while holding the lock. This should also be more efficient (though maybe it's insignificant) because there's no run-time bookkeeping for the recursive mutex.
* Handing off the mutex to another thread also feels like a smell that you're holding the lock longer than you need. I don't recall a time I've ever needed to do it. From the description here, it seems totally reasonable to hold a mutex while getting a connection from the pool and while returning one to it, but not between. I'd think you could get the connection, then create the new thread (passing the connection to it).
A reentrant mutex will allow you to lock it multiple times so long as you're on the same thread. That means if Thread A takes the lock, and then later some code on Thread A takes the lock again to get a connection to pass to Thread B, you'll have the connection the mutex was protecting on two threads at the same time. The Rust version of the mutex prevents this by making the data the mutex is protecting unable to be sent to other threads. That means you can only share the mutex, which will block on Thread B when you try to take the lock, as expected.
In almost any implementation you don't want to track the owner, because it is only unnecessary overhead and nothing else.
On a similar note, the article strikes me as a somewhat contrived example, because the most obvious implementation of a reentrant mutex also does not care about the owner thread.
Edit: Another reason it feels contrived is that a reentrant mutex represented as a first-class data structure (in contrast to a monitor as a language-level construct) is to some extent only a hack to solve issues stemming from improper design.
The main problem is that being secure doesn't increase the revenue of companies substantially, otherwise C/C++ programs would be rushing to Rust. Still, the speed of improvement is great.
I'd love to see Firefox overtake Chrome in speed and show that C++ is getting closer to being an outdated language.
Contrasting C and Rust completely misses the issue. Yes, obviously Rust will do away with some important classes of bugs and security vulnerabilities, but it's got nothing to do with addressing the problem. The problem is that there are billions of lines of C out there, and it will take many decades to replace them with software that's written in a memory-safe language. The issue is what do we do with all that existing software, as rewriting it at any reasonable cost or time frame is simply not an available option. A possible solution could be something like sound C static analysis tools that can guarantee no overflows etc. without a significant rewrite. The question we should be asking is how easy it is to use those tools, how scalable they are etc.
No, it is the issue. Before one can even consider the proposition that it might not be wise to write X in C, you first need to convince people that the tool (that is, C) is actually a problem. That's what the OP is targeting: people think the problem isn't the tooling, but the programmer.
The task of figuring out how to actually use the language is an entirely separate, though valid, problem. But it's not some giant mystery. The folks working on Firefox haven't set out to rewrite the entire thing in Rust. They're choosing targeted places where it works. Insomuch as I know, this has been a success.
There's a whole segment of CompSci dedicated to doing that. SoftBound+CETS and SAFEbound are two of the better examples giving C memory safety. Data Flow Integrity is a newer one that might be combined with them to bring security against data-oriented attacks.
The big issue with such tools is that C's lack of design and low-level nature make the tools have to be extra careful in ways that add extra overhead vs languages designed for verification (eg Gypsy, SPARK Ada). So, the slowdowns can be huge.
I still like them, though, given any approach that works turns the problem from "recode critical, legacy apps securely" to "buy a load balancer and some more servers." Huge improvement in feasibility.
Far as proving absence, RVI's RV-Match can do that against a full (or nearly so) semantics of C in K Framework. Their semantics, KCC, is mocked up like a GCC compiler to make it easy to try with code. It also gets stuck (fail-safe) on undefined behavior.
TL;DR: "[Coding perfectly and anticipating any possible change to how the tools work] are not reasonable expectations of a human being. We need languages with guard rails to protect against these kinds of errors."
I definitely agree with this. And it also applies to a lot more than what the article is focused on (low-level security). It seems that right now the entire programming ecosystem seems to jump to "you just don't understand it" rather than "this should be more intuitive, or at least safer by design."
There are several quasi-related variations. Arguments against higher level languages and abstractions.
Examples:
If you have to use a garbage collected language (eg, just about any modern language) it's because you're too stupid to know how to manage memory properly.
[ various arguments against type safe languages, turning runtime errors into compile time errors ]
Higher level languages and abstractions are just bloat. [or are too bloated, etc]
Managed language runtime systems are too [ big | expensive | bloated | slow ] etc. (eg, the Java runtime, or .NET runtime, to a degree also Python, JavaScript, Lisp(s), etc)
Counter arguments:
Any sufficiently complex program will need the managed runtime, garbage collection, abstractions, type safety, etc.
Any sufficiently complicated C or Fortran program contains an ad-hoc, informally-specified, bug-ridden, slow implementation of half of Common Lisp.
We could just write everything in assembly language, er, . . . um . . no, in hex code. And key it into the front of the machine on toggle switches with blinking lights. We could! Yes, really! So why aren't we? Why isn't, say, LibreOffice written in assembly language?
I use high level languages, runtime systems, GC, etc because I'm not optimizing for cpu cycles and bytes. I'm optimizing for DOLLARS. A long time ago, in a galaxy far, far away, computer hardware was the most expensive resource. Today developer time is the most expensive resource. Everybody is happy to have their word processor upgrade a year sooner even if it means it uses a mere extra 500 MB of memory.
I'm torn on this, because while we can always do better, sometimes there are things you just gotta know; this is the best we can do, so far. There are also lots of lazy programmers who think everything should be easy and intuitive, when in reality they need to do more to understand the tools they have and hone their craft. I refuse to call them bad, because I think most people can reach expert level. Maybe not master or grandmaster, but expert would do.
> It seems that right now the entire programming ecosystem seems to jump to "you just don't understand it" rather than "this should be more intuitive, or at least safer by design."
Stockholm syndrome? Hazing? Fear that skills that moat off expertise will be devalued?
Saying that I should do something that the computer can do for me is telling me that my time is not worth anything. It's pretty insulting.
Possibly sounds like a bit of a design issue to me: the author /assumed/ that reentrant mutexes wouldn't be added to the code, and this wasn't documented or tested for properly...
I think this argument grants too much to the C-believer crowd by presenting a case where even an infallible programmer would have ended up making the mistake. The author is arguing with people divorced from reality on their terms.
Next, you'll get counterarguments from the peanut gallery proposing that merging the changes was just the fault of a bad process, and as long as people don't make the mistake of applying a bad process, everything would be fine.
> This wasn’t caught when I finished writing the code. It was caught weeks later, when rebasing against the other changes of the codebase. The invariants of the code I was working with had fundamentally changed out from underneath me between when the code was written and when I was planning to merge it.
Aside from the wider question about bad coders, I don't understand why he didn't catch this when he wrote the code. Didn't it fail to compile?
I believe the implication is that when he wrote the code it compiled just fine. There were no reentrant mutexes, so one can assume the database connection he was working with conformed to Send at that point, and it was only with the addition of the reentrant mutexes that the connection lost the conformance to Send.
> ... use a re-entrant mutex when it was acquired from the pool ... normal mutex we would be fine, we unlock it on a thread other than the one we locked it from ... re-entrant mutex remembers which thread it was locked from, we need to keep the resource on the same thread
Sounds like a "bad coders" problem to me! This design is so screwed up that no tool or smart compiler can save you from problems here.
This article hits the nail on the head. When speaking with someone who argues that we don't need memory-safe languages, just better programmers, I always like to ask if they've ever had to use a debugger, or ever made a single mistake programming. What you're asking of programmers is that they never make a mistake, whether it be a memory-safety bug or just a regular bug. It's simply not possible.
I have argued many times it isn't the language that is the problem, necessarily. Never have I argued against tooling, though. Rather, toolchains can be added to without being replaced.
That is my problem with this blog and most posts like it: your tooling should not begin and end with your compiler. It is a vital part of your tooling, no doubt. But if all you did was compile, you are playing a risky game.
[+] [-] _bxg1|7 years ago|reply
The human brain wasn't designed to handle the complexity of large codebases. There are just too many compounding implications of everything for us to hold them all in mind at once. The very best coders miss things all the time.
Instead of seatbelts, I would compare tool assistance to aviation instruments. It's not primarily something that just catches you before a disaster; it's something that helps you make the most effective use of your mental resources. We don't ask pilots to fly commercial airliners without advanced software and dozens of readouts and displays to guide their attention and decision-making in an overwhelming sea of information. We shouldn't expect the same of programmers.
[+] [-] phaedrus|7 years ago|reply
It was not until an early aviation safety pioneer came up with a demonstration on the ground (something like a spinning chair and blindfold - I forget the details) that pilots who went through the demonstration were shocked to discover they "fell for" the physical illusion they believed they were immune to.
I feel like programming is in the same state today. Especially the more C/C++ code I see in my career: I'm absolutely convinced humans should just not expected to manually manage some of the things they try to do now.
[+] [-] noir_lord|7 years ago|reply
The first time I ran a static analyser over an old codebase of mine it was deeply humbling.
I'm a competent programmer who works deliberately and carefully and the tool still went "did you really mean to do this stupid thing?".
Of course sometimes you do mean to do stupid thing because there is a good reason for it and as long as the tool gives you the ability to say // @allow-stupid-thing: punch-myself-in-face it's all good.
Since then I throw as many linters, formatters and static analyses tools as I can reasonably get my hands on at every codebase.
[+] [-] _hardwaregeek|7 years ago|reply
I had an internship where I kept on trying to work on better tooling for the developers, only to be told that I should be working on the actual product because that's more important. Except, if I make the other developer's lives even 5-10 percent easier, then that will have a much longer effect than whatever work I can get done in 3 months. The most useful contribution I made in that internship was a document explaining all the features of the product with images, short videos and both the internal names and the client facing names.
And in school? Forget about it. Nobody shows kids how to use IDEs or editors properly. Nobody teaches git properly or command line tools. Nobody learns how to set up linters or heck, how to turn on -Wall -pedantic. Imo, every systems class should have a first lecture that explains Valgrind, proper compiler flags and Makefiles, and a last lecture that explains Rust. Realistically most students are not going to use Rust, but just understanding that it exists as a possibility can be useful.
[+] [-] swiley|7 years ago|reply
Humans struggle with large codebases but by restricting what's allowed to certain well understood and prearranged patterns and by forcing them to write specs and docs they are possible to handle.
[+] [-] specialist|7 years ago|reply
Because I'm a very simple bear, I always try to find the most simple thing that works. Because I'm certain that the next programmer that comes along, which is most likely future me, will see my efforts and think "WTF?!"
Another facet is our human mind's propensity to make errors. Instead of blaming stupid users, just deal with it. Design artifacts and systems which prohibit errors.
FWIW, Donald Norman's book Design of Everyday Things completed my transformation from technophile to humanist. (I don't know how well it's held up since. Or what knowledge now supersedes it.)
"I would compare tool assistance to aviation instruments."
Great analogy. Many, many designers (of APIs, frameworks, UIs, simulations) conflate abstractions with mental models. Our tools should make the underlying systems obvious, vs trying to hide the complexity. And if something remains too complex to explain, try again, until a mental model that is simple and obvious is found.
[+] [-] moosey|7 years ago|reply
An article comes out yesterday about programmers being a problem: comment section points in another direction.
An article comes out today that says that programmers aren't the main problem: comment section points out how awful all of us were, if we just go back far enough in time.
My main takeaway: Programming is hard.
[+] [-] mamcx|7 years ago|reply
I run against my rust code, and damm.
However, this also prove how MUCH better is static typing. The rust linter provide hints about more serious stuff, not things that the type system already avoid!
[+] [-] hinkley|7 years ago|reply
I think you can make the analogy work for professional drivers, but not for amateurs.
[+] [-] adrianhel|7 years ago|reply
[+] [-] wheelie_boy|7 years ago|reply
"What if...
- your programming language required you to write useful docs,
- using those docs, it checked your program for mistakes,
- it even used the docs to speed up your program,
- this feature already exists!
And what if it was called static typing."
- https://twitter.com/DWvanGeest/status/1092095822559358976
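The tweet's point can be sketched in a few lines of Rust (a hedged illustration of mine, not from the tweet; `parse_port` is a made-up example name): the signature is documentation that the compiler actually checks against every caller.

```rust
/// Parses a TCP port from a string; `None` if it isn't a valid port number.
/// The `Option<u16>` return type documents -- and enforces -- that callers
/// must handle the failure case; forgetting to is a compile error, not a
/// runtime surprise.
fn parse_port(s: &str) -> Option<u16> {
    s.trim().parse::<u16>().ok()
}

fn main() {
    assert_eq!(parse_port("8080"), Some(8080));
    assert_eq!(parse_port("99999"), None); // out of u16 range
    assert_eq!(parse_port("http"), None);
    println!("ok");
}
```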
[+] [-] mhink|7 years ago|reply
Because of the above, types really don't seem like "useful docs" to me. I get extremely irritated when a library links to its API documentation and it's all just autogenerated lists of methods which proudly state their parameter and return types but say absolutely nothing about what they do.
That's one major thing I've always appreciated about the Ruby and Javascript ecosystems: because they can't rely on type information, most libraries go out of their way to describe how they work in depth.
I'm willing to grant that I didn't have a ton of experience with Java before moving on to other languages, and so I could very well be wrong... but one thing that definitely worries me about Typescript is the increasing Java-fication of browser languages.
Now, all that being said, I gave Crystal [0] a spin the other day and it's a freaking dream. Ruby, but with types, and compiled to LLVM? YES PLEASE.
0: https://crystal-lang.org/
[+] [-] FilterSweep|7 years ago|reply
This really goes to show how anti-worker the media is even among high-skill jobs.
In my mid 20s I now mentor a bit in coding. I’ve worked with young devs that inherently know many of the obvious security pitfalls that have caused massive security breaches a la Equifax.
Are the devs behind these breaches bad? I don't believe so. Many went through grueling multi-part interviews to land the position.
No, the issue is a management one. Anti-worker is a meme in America, and this is just another manifestation of the disingenuous "lack of tech talent" whining used to recruit more H1Bs.
It's also why unrealistic growth targets cause gaming tech workers to face layoffs despite record-breaking sales.
[+] [-] adrianmonk|7 years ago|reply
And I think ego is a barrier to accepting tools that prevent errors. To accept the tool is to admit to yourself and others that you are going to make the error (sometimes) without it.
[+] [-] geofft|7 years ago|reply
The quote was regarding "social media" as in mostly the workers themselves, not "the media" as in the independent journalists reporting on the workers.
I think you're agreeing with the article in the end, and I also agree with it, but I'd like to note two things:
- Grueling interviews are not necessarily correlated with skill. In particular, I've done a lot of whiteboard coding and a handful of take-home exercises, and in not one has anyone cared whether I was writing buffer overflows or reaching for a buffer-overflow-prone style. On the whiteboard, on either side of an interview, it's generally loose pseudocode, with the understanding that errors are not to be checked, that syntax doesn't need to be exact, and that the point of the exercise is not whether you had an off-by-one in a calculation. So the skills of either being awesome enough to write bug-free code in security-sensitive environments or humble enough to use tools (languages, analyzers, sandboxes, whatever) to protect yourself from inevitable mistakes are not tested at all.
- "Lack of tech talent" generally refers to the number of people being insufficient for the job, not the quality of the existing people. It is entirely possible to believe that the problems of insecure code are not a level-of-talent issue and that we still face a lack of tech talent. (I have certainly never heard it claimed that the problems we face with insecure code are that too many Americans are bad at coding and we need to hire smarter foreigners who don't write bugs.)
[+] [-] jaggederest|7 years ago|reply
The tooling we have now is pretty poor, in general. Sure, we can catch certain classes of bugs. Many bugs still occur in areas that aren't covered by those classes, even in the most robust languages.
Obviously memory-safe languages would help enormously. I also think that compilers are still in their infancy.
[+] [-] jondubois|7 years ago|reply
>> The problem is that the database connection would sometimes use a re-entrant mutex when it was acquired from the pool...
>> with a normal mutex we would be fine, since only one lock can exist and it doesn’t matter if we unlock it on a thread other than the one we locked it from...
>> Fundamentally, we just can’t have a re-entrant mutex be involved and also be able to pull the connection from the pool on a different thread than it is being used...
Truly good coders are very rare because it's not just about mastering all the available tools and abstractions, it's also about the ability to come up with abstractions that make it simple for anyone looking at the code to understand what is happening.
Good coders can write simple code to do complex things.
[+] [-] rinchik|7 years ago|reply
You can create an extremely opinionated language which doesn't give any freedom for creativity, solutions and expression, and devs will still find a way to duck everything up.
IMHO, OP saw a problem but didn't arrive at the right solution.
[+] [-] Rusky|7 years ago|reply
I read it as claiming that skill and education are insufficient to prevent these bugs, and that automation is still valuable no matter how skilled the programmer is.
[+] [-] xiphias2|7 years ago|reply
Not everybody can be good, but everybody deserves a chance to be a coder. There are real problems that can be solved even by a bad coder, and that work may help your life some day. Great coders are needed for large, scalable, hard problems, but there are little things they don't have time to work on.
[+] [-] leesec|7 years ago|reply
[+] [-] _bxg1|7 years ago|reply
[+] [-] adrianmonk|7 years ago|reply
So I think what we need to do is acknowledge that but keep it in perspective. While better coders write fewer security bugs, even the best programmers still write some. So it can't be our only line of defense.
Also, suppose for the sake of discussion it were true that top programmers did write zero security bugs. Realistically, as an organization, how would you ensure you employed nothing but exclusively these programmers? Even if you make it a top priority, you can't guarantee it.
Especially since the way you become a great programmer is by starting off as a less-great programmer and getting experience. And while you're doing that, you're churning out software which is by definition written by a less-great or not-yet-great programmer.
[+] [-] kelnos|7 years ago|reply
People who suggest "well, just hire better programmers" are incredibly naive and probably have never actually had to deal with the challenges around hiring programmers.
[+] [-] pcwalton|7 years ago|reply
[+] [-] weliketocode|7 years ago|reply
- Competition/Compensation for experienced coders has risen sharply
- There are many more inexperienced SDE's coming from colleges/bootcamps/etc
- Senior titles are often given to individuals who are still very early in their careers
Now, these factors might be necessary/good in the short term as software continues to eat the world.
But, let's not pretend that the talent shortage isn't a problem.
[+] [-] hinkley|7 years ago|reply
Either we are lousy at picking projects, we have a broken culture, or the problem is that we have too many developers and so people can ramp up a bunch of projects that have already been done hundreds of times elsewhere.
We’ve added lanes to the proverbial highway and traffic just gets worse.
[+] [-] WilliamEdward|7 years ago|reply
Experienced programmers know they don't have to settle for less.
[+] [-] FilterSweep|7 years ago|reply
Your issues in recruiting, interviewing, and onboarding are not the fault of tech workers.
[+] [-] srean|7 years ago|reply
[deleted]
[+] [-] applesvsoranges|7 years ago|reply
Sorry if this is a dumb question, but I'm confused. Aren't mutexes always supposed to have ownership which implies that only the locking thread can unlock them?
[+] [-] scottlamb|7 years ago|reply
Mutexes are supposed to ensure that exactly one thread can access a resource ("have the lock") at a time. There's no fundamental reason you can't pass the lock from one thread to another, as long as they don't both have it at once. But it may not be supported by the particular mutex implementation. It's not supported by the recursive mutexes the author was using, and I'd bet there are also non-recursive mutex implementations which don't support it. I agree with the author that it's great Rust can catch this sort of mistake.
btw, I think recursive mutexes and handing off locks are bad ideas. Both for the same reason: I want short critical sections, to reduce contention.
* Code that uses recursive mutexes tends to be sloppy about this; it's unclear from reading a section of code whether it even has the lock or not. (This also sounds like a recipe for deadlock when you need multiple locks.) I'd much rather structure it so a given piece of code is run only with or without the lock. In C++, I use lock annotations [1] for this. If I need something to be callable with or without the lock, I might have a private "DoThingLocked()" bit, and a public "DoThing()" bit that delegates while holding the lock. This should also be more efficient (though maybe it's insignificant) because there's no run-time bookkeeping for the recursive mutex.
* Handing off the mutex to another thread also feels like a smell that you're holding the lock longer than you need. I don't recall a time I've ever needed to do it. From the description here, it seems totally reasonable to hold a mutex while getting a connection from the pool and while returning one to it, but not between. I'd think you could get the connection, then create the new thread (passing the connection to it).
[1] https://clang.llvm.org/docs/ThreadSafetyAnalysis.html
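The "get the connection, then create the new thread" design can be sketched like this (a hedged illustration with a made-up `Conn` type, not the article's code): the pool's mutex is held only while checking a connection out or in, and the worker thread receives the connection itself, never the lock.

```rust
use std::sync::{Arc, Mutex};
use std::thread;

// Stand-in for a real database connection.
struct Conn(u32);

// Check a connection out under the pool's lock, use it on a worker thread
// with no lock held, then check it back in under the lock again.
fn run_on_worker(pool: Arc<Mutex<Vec<Conn>>>) -> u32 {
    // Brief critical section: just long enough to take a connection out.
    let conn = pool.lock().unwrap().pop().expect("pool empty");

    let handle = thread::spawn(move || {
        let id = conn.0; // "use" the connection, lock-free
        // Brief critical section again to return it.
        pool.lock().unwrap().push(conn);
        id
    });
    handle.join().unwrap()
}

fn main() {
    let pool = Arc::new(Mutex::new(vec![Conn(1), Conn(2)]));
    assert_eq!(run_on_worker(Arc::clone(&pool)), 2);
    assert_eq!(pool.lock().unwrap().len(), 2);
    println!("ok");
}
```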
[+] [-] amaranth|7 years ago|reply
[+] [-] dfox|7 years ago|reply
On a similar note, the article strikes me as a somewhat contrived example, because the most obvious implementation of a reentrant mutex also does not care about the owner thread.
Edit: Another reason why it feels contrived is that a reentrant mutex represented as a first-class data structure (in contrast to a monitor as a language-level construct) is to some extent only a hack to solve issues stemming from improper design.
[+] [-] xiphias2|7 years ago|reply
I'd love to see Firefox take over Chrome in speed and show that C++ is getting closer to being an outdated language.
[+] [-] pron|7 years ago|reply
[+] [-] burntsushi|7 years ago|reply
The task of figuring out how to actually use the language is an entirely separate, though valid, problem. But it's not some giant mystery. The folks working on Firefox haven't set out to rewrite the entire thing in Rust. They're choosing targeted places where it works. As far as I know, this has been a success.
[+] [-] nickpsecurity|7 years ago|reply
The big issue with such tools is that C's lack of design and low-level nature make the tools have to be extra careful in ways that add extra overhead vs languages designed for verification (eg Gypsy, SPARK Ada). So, the slowdowns can be huge.
I still like them, though, given any approach that works turns the problem from "recode critical, legacy apps securely" to "buy a load balancer and some more servers." Huge improvement in feasibility.
Far as proving absence, RVI's RV-Match can do that against a full (or nearly so) semantics of C in K Framework. Their semantics, KCC, is mocked up like a GCC compiler to make it easy to try with code. It also gets stuck (fail-safe) on undefined behavior.
[+] [-] Derek_MK|7 years ago|reply
I definitely agree with this. And it also applies to a lot more than what the article is focused on (low-level security). Right now the entire programming ecosystem seems to jump to "you just don't understand it" rather than "this should be more intuitive, or at least safer by design."
[+] [-] DannyB2|7 years ago|reply
Examples:
If you have to use a garbage collected language (eg, just about any modern language) it's because you're too stupid to know how to manage memory properly.
[ various arguments against type safe languages, turning runtime errors into compile time errors ]
Higher level languages and abstractions are just bloat. [or are too bloated, etc]
Managed language runtime systems are too [ big | expensive | bloated | slow ] etc. (eg, the Java runtime, or .NET runtime, to a degree also Python, JavaScript, Lisp(s), etc)
Counter arguments:
Any sufficiently complex program will need the managed runtime, garbage collection, abstractions, type safety, etc.
Greenspun's tenth rule: (https://en.wikipedia.org/wiki/Greenspun%27s_tenth_rule)
Any sufficiently complicated C or Fortran program contains an ad-hoc, informally-specified, bug-ridden, slow implementation of half of Common Lisp.
We could just write everything in assembly language, er, . . . um . . no, in hex code. And key it into the front of the machine on toggle switches with blinking lights. We could! Yes, really! So why aren't we? Why isn't, say, LibreOffice written in assembly language?
I use high level languages, runtime systems, GC, etc because I'm not optimizing for cpu cycles and bytes. I'm optimizing for DOLLARS. A long time ago, in a galaxy far, far away, computer hardware was the most expensive resource. Today developer time is the most expensive resource. Everybody is happy to have their word processor upgrade a year sooner even if it means it uses a mere extra 500 MB of memory.
[+] [-] P_I_Staker|7 years ago|reply
[+] [-] madhadron|7 years ago|reply
Stockholm syndrome? Hazing? Fear that skills that moat off expertise will be devalued?
Saying that I should do something that the computer can do for me is telling me that my time is not worth anything. It's pretty insulting.
[+] [-] smileypete|7 years ago|reply
[+] [-] unknown|7 years ago|reply
[deleted]
[+] [-] fulafel|7 years ago|reply
Next, you'll get counterarguments from the peanut gallery proposing that merging the changes was just the fault of a bad process, and that as long as people don't make the mistake of applying a bad process, everything would be fine.
[+] [-] brianpgordon|7 years ago|reply
Aside from the wider question about bad coders, I don't understand why he didn't catch this when he wrote the code. Didn't it fail to compile?
[+] [-] eridius|7 years ago|reply
[+] [-] alexeiz|7 years ago|reply
Sounds like a "bad coders" problem to me! This design is so screwed up that no tool or smart compiler can save you from problems here.
[+] [-] amaccuish|7 years ago|reply
[+] [-] taeric|7 years ago|reply
That is my problem with this blog and most posts like it: your tooling should not begin and end with your compiler. The compiler is a vital part of your tooling, no doubt. But if all you do is compile, you are playing a risky game.