
Edsger Dijkstra – The Man Who Carried Computer Science on His Shoulders

444 points | furcyd | 5 years ago | inference-review.com

196 comments

[+] todd8|5 years ago|reply
I read Dijkstra's "Go To Statement Considered Harmful" in college, circa 1970. I was a young programmer at the time, and it made me think and reconsider how and why big programs were often hard to understand.

Years later, in grad school, my research was on program verification. Dijkstra was a big influence on the field. I attended a lecture at UT Austin that he gave before moving to the US.

Dijkstra's approach to proving the correctness of programs is hard to apply in practice. Like a geometry proof, a proof of correctness can contain mistakes that aren't obvious. Here's an example: prove that a sorting algorithm works. If the specification only says that the output must be in sorted order, a proven program can still fail to sort correctly, because the specification has an error: the output must not only be in sorted order, it must also contain every element of the input. But even this isn't good enough. A sort proven to meet this specification might leave out some duplicate values. Requiring that the output be the same length as the input still isn't good enough, because multiple copies of one element might hide missing duplicates of another element. The specification must require that the output be a sorted permutation of the input.
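To make this concrete, here's a minimal Python sketch (all the helper names are mine) showing how a trivially broken "sort" satisfies the weak specification while the permutation check catches it:

```python
# Why "output is sorted" alone is an incomplete spec.
# The full spec: the output is a sorted *permutation* of the input.
from collections import Counter

def is_sorted(xs):
    return all(xs[i] <= xs[i + 1] for i in range(len(xs) - 1))

def meets_full_spec(inp, out):
    # sorted AND same multiset of elements
    # (catches dropped and duplicated items)
    return is_sorted(out) and Counter(inp) == Counter(out)

def bogus_sort(xs):
    # "Provably correct" against the weak spec:
    # the empty list is always in sorted order.
    return []

inp = [3, 1, 2, 1]
print(is_sorted(bogus_sort(inp)))             # True  -- weak spec satisfied
print(meets_full_spec(inp, bogus_sort(inp)))  # False -- full spec catches it
print(meets_full_spec(inp, sorted(inp)))      # True
```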

Sorting is a well-understood problem, but real verification must deal with the strange mathematics implemented by computers. The mathematical theories we are accustomed to from school assume infinite precision and no overflow; real-life hardware doesn't behave this way.
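For instance, the classic binary-search midpoint computation overflows on large but perfectly valid inputs. A Python sketch that simulates 32-bit two's-complement wraparound (Python's own integers are unbounded, so the wrap has to be modeled explicitly):

```python
# Simulate 32-bit two's-complement arithmetic to show the classic
# "(lo + hi) / 2" midpoint bug that a naive proof would miss.
def int32(x):
    x &= 0xFFFFFFFF
    return x - 0x100000000 if x >= 0x80000000 else x

lo, hi = 1_000_000_000, 2_000_000_000  # both valid 32-bit values
mid_naive = int32(lo + hi) // 2        # lo + hi wraps past INT_MAX
mid_safe  = lo + (hi - lo) // 2        # overflow-free formulation

print(mid_naive)  # -647483648 -- a negative "index"
print(mid_safe)   # 1500000000
```

Over the unbounded integers both expressions are provably equal; over 32-bit machine integers only the second one is correct, which is exactly the gap between school mathematics and hardware arithmetic.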

All of this makes program verification hard, and my hope is that AI programming assistants will someday aid in the construction of meaningful specifications and work with the programmer to verify the correctness of programs or determine their safe operating boundaries. Progress has been slow, and the tools are still impractical for use by most programmers on most programs.

[+] User23|5 years ago|reply
Program verification is a fool's errand, and Dijkstra would have been the first to tell you this. The correct approach is program construction following derivation rules that do not admit error. You don't need to verify a program produced from a specification this way any more than you need to prove a proof by construction. It baffles me that to this day the distinction defies comprehension for so many in our field.
[+] mcguire|5 years ago|reply
Take a look at Frama-C or Ada Spark sometime; they apply the predicate transformer semantics style to code and do pretty well. But yeah, incomplete specifications are always fun.
[+] amw-zero|5 years ago|reply
This is such a great and actually important comment. I’ve been obsessed with program correctness / verification for a while now. It just seems like the holy grail that we can’t fully attain yet. But you keep bumping into certain walls, and the fact that correctness can only be relative to a given specification is a profoundly huge wall.

There’s no checking to see if your specification is under-specified.

[+] userbinator|5 years ago|reply
> AI programming assistants

Given what I've seen of AI today (it works reasonably well at the best of times, but is sensitive to unusual cases and can produce totally nonsensical results otherwise; and it is nearly impossible to actually debug), I remain highly skeptical.

[+] xenocyon|5 years ago|reply
It is remarkable how much he contravened modern academic norms: no work to bring in external funding, relatively few papers, very little collaboration, almost no PhD students. It's not an exaggeration to say that a young researcher with these characteristics today would likely get fired from any high-ranked American research university before even coming up for tenure review. Perhaps our modern academic metrics of merit are missing something?
[+] rramadass|5 years ago|reply
They sure do.

Here is Freeman Dyson, the physicist without a PhD ( https://www.quantamagazine.org/a-math-puzzle-worthy-of-freem... ):

Oh, yes. I’m very proud of not having a Ph.D. I think the Ph.D. system is an abomination. It was invented as a system for educating German professors in the 19th century, and it works well under those conditions. It’s good for a very small number of people who are going to spend their lives being professors. But it has become now a kind of union card that you have to have in order to have a job, whether it’s being a professor or other things, and it’s quite inappropriate for that. It forces people to waste years and years of their lives sort of pretending to do research for which they’re not at all well-suited. In the end, they have this piece of paper which says they’re qualified, but it really doesn’t mean anything.

[+] mcguire|5 years ago|reply
Perhaps. I can't really speak to his career before he came to UT Austin in 1984; by then he was already very famous (Bjarne Stroustrup at Texas A&M/Columbia or Brian Kernighan at Princeton famous). If I remember correctly, his chair was created for him as a full professor and without many of the usual duties of a professor. (One of his stipulations was that he not be made chairman of the department.)

According to the article, as a young researcher, he was

* A "programmer" (something like a research assistant, perhaps?) at the Mathematisch Centrum (Mathematical Centre) in Amsterdam.

* Professor in the mathematics department of the Eindhoven University of Technology from 1962-1973. (As I understand, the "external funding" thing is less of a problem in mathematics. Also, I have no idea how tenure works there.)

* Finally, research fellow at the Burroughs Corporation.

It's true, he didn't have many direct PhD students (two?!), but his collaborations were many. (I don't know if the Tuesday Morning Group at UT is still active,[1] but it was rather well known and its members produced a great deal of what is now "theoretical computer science".)

If you're trying to compare Dijkstra to J. Random Young Researcher, J's gonna have a hard time any way you look at it.

[1] UT no longer requires automata theory or formal logic for CS undergraduates. They're dead to me.

[+] marcosdumay|5 years ago|reply
> It's not an exaggeration to say that a young researcher with these characteristics today would likely get fired from any high-ranked American research university before even coming up for tenure review.

He wouldn't even get into a university to have the chance to be fired. And that's not specific to the US; it applies to nearly all developed or developing countries.

That person wouldn't be able to get much past a PhD, even as a student.

[+] hinkley|5 years ago|reply
People used to say of my alma mater that the teaching was decent but the facilities were amazing.

There was a feel on campus that the people who were really going to 'make it' were only putting in B- or C+ work and spending the rest of their time working on side projects. There were very few days in a given year when more than half of the computer labs were full at the same time, and with Unix boxes you can always remote in to the flavor of machine you need.

You don't need much research budget if you already have access to time on equipment. I'd be curious to know if Dijkstra had access to a similar embarrassment of riches.

[+] enumjorge|5 years ago|reply
That reminds me of the anecdote where one of Google’s hiring committees realized that they wouldn’t have gotten hired had they been judged by their own standards. It makes you wonder how much talent gets filtered out when we apply very stringent metrics to grade candidates.
[+] random3|5 years ago|reply
One of my past career pet peeves was not being able to work on the same team as some people I was really excited to work with daily. I almost joined the research group, passed the interviews, got accepted, but eventually everything stalled because some "credentials" weren't there.
[+] nurettin|5 years ago|reply
It would be very funny to watch foremost experts in relatively new fields being denied because they don't have phd students. I somehow doubt this is the case, though.
[+] jes5199|5 years ago|reply
his papers are quite good though, and he did a lot of writing, just not for journals
[+] mysterydip|5 years ago|reply
I wish seeing the name Dijkstra didn't immediately remind me of his oft-quoted BASIC comment:

"It is practically impossible to teach good programming to students that have had a prior exposure to BASIC: as potential programmers they are mentally mutilated beyond hope of regeneration."

The problem is it's quoted regardless of how much the BASIC dialect being discussed resembles the one he was talking about [1]. There are lots of good BASIC languages out there running all kinds of useful software that are dismissed out of hand by a subset of developers simply from that comment.

[1] https://programmingisterrible.com/post/40132515169/dijkstra-...

[+] AnimalMuppet|5 years ago|reply
Or at least who thought he carried computer science on his shoulders.

He made significant, important contributions. He wasn't the only one. Hoare, Backus, McCarthy... there were a lot of people who made important, fundamental contributions.

Structured programming caught on. Proofs did not, for reasons that would have offended Dijkstra - they took time, and it was actually more valuable (both socially and financially) to produce mostly-working software quickly than to produce proven-correct software slowly.

But he never would have accepted that the real world was that way. He insisted that his approach was correct. As Backus said of him, "This guy’s arrogance takes your breath away."

So: Important contributions? Absolutely. More than anyone else's? Perhaps. Carried computer science on his shoulders? No, but he probably thought he did.

[+] marcosdumay|5 years ago|reply
Well, static types and analysis are in fashion now, and always moving in the direction of more powerful proof systems.

He was very likely too far ahead of his time on that. Decades after him we still don't have good enough tech to do the kinds of proof he expected... And yes, maybe he was too arrogant to realize that it's not that the entire world is dumb, but that a core piece of the system is missing.

On the other hand, he did work to improve proof systems, so who knows what he really thought?

[+] nickdrozd|5 years ago|reply
> ALGOL 60 included a series of novel features, such as recursion, which was supported in a much more complex manner than logicians had ever envisaged...Recursive procedures in ALGOL 60 are much more complex than in Lisp.

Can anyone explain how this "much more complex" recursion works?

[+] todd8|5 years ago|reply
Early LISP implementations used dynamic scope. This means that non-local references in a function search for the variable's binding in the call chain: if A calls B then when B uses nonlocal variable x it refers to the x in A, not necessarily the x that belongs in the lexical parent of B as seen in the source code. (Unlike C, Algol/Pascal/LISP can all define functions inside of other functions. These nested function definitions introduce lexical non-local scopes.)

Dynamic scoping is easier to implement than lexical scoping. With lexical scoping, the run-time must incorporate some mechanism to find the stack frame of a lexical parent (as seen in the program's source code) to access a nonlocal variable's value. In dynamic scoping the run-time can just follow the stack frame order (starting at the top) to find a nonlocal variable's binding.

Early LISP used dynamic scope; Scheme had lexical scope from the beginning. Common Lisp has both mechanisms, and programs can use either.
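A small Python illustration of the difference (Python is lexically scoped, so the dynamic-scope outcome appears only in the comments):

```python
# Lexical scoping: a nonlocal name resolves to where the function
# was *defined*, not to whoever happens to be calling it.
def outer():
    x = "outer"
    def inner():
        return x          # lexical scope: this is outer's x
    return inner

def caller():
    x = "caller"          # under dynamic scope, inner() would find
                          # this x by searching up the call chain
    f = outer()
    return f()

print(caller())  # "outer" -- dynamic scoping would yield "caller"
```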

The implementation of recursion also has to address proper access to nonlocal variables in functions that are values themselves and are passed as arguments to other functions. Do these functions access the nonlocal variables based on their call location or their lexical location in the source code? These problems with functions as arguments are known as the upward and downward funarg problems. Knuth wrote the "man or boy" test program to stress Algol 60 implementations in these areas.

Another complication for implementors of Algol 60 was parameter passing by name, which is neither by-value nor by-reference. I don't recall any other well-known language that has subsequently made that non-intuitive (to me) choice. I believe Algol 68 abandoned call-by-name, but it's been 25 years since I looked at the Algol 68 standard.
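Call-by-name re-evaluates the argument expression at every use, which enables tricks like Jensen's device. A rough Python simulation using zero-argument lambdas as thunks (the function and variable names here are mine):

```python
# Jensen's device: an Algol-style SIGMA that sums a by-name
# expression `term` as the by-name variable i runs from lo to hi.
# i_ref is a one-element list standing in for the shared variable i.
def sum_by_name(i_ref, lo, hi, term):
    total = 0
    for v in range(lo, hi + 1):
        i_ref[0] = v
        total += term()   # re-evaluated at each use: the by-name effect
    return total

i = [0]
print(sum_by_name(i, 1, 5, lambda: i[0] * i[0]))  # 1+4+9+16+25 = 55
```

As the thread's other reply notes, this is essentially passing a lambda as an argument, which is why implementing call-by-name meant compiling each by-name actual parameter into a hidden procedure (a "thunk").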

[+] Someone|5 years ago|reply
I think this refers to making the implementation efficient.

They wanted performance on par with existing languages that didn’t allow recursion, and could simply assign fixed addresses to all function arguments and local variables.

Lisp didn’t have that. It allocated environments left and right. That meant that a function call allocated memory and potentially could trigger garbage collection.

The major improvement was to use a runtime stack. That was fairly novel. Earlier compilers used a stack at compile time to make it possible that those fixed addresses for arguments and locals could be shared between functions that didn’t call each other, directly or indirectly, but that stack didn’t exist at runtime.

And yes, the idea of a stack seems trivial nowadays, but it wasn’t at the time. Quite a few CPUs didn’t even have “jump to subroutine” or “return from subroutine” instructions.

Algol’s “call by name” feature also may have complicated this, but I don’t see how that’s more difficult than passing lambdas as arguments.

[+] niea_11|5 years ago|reply
I did a quick search, and according to this article [1], Dijkstra's implementation was not restricted compared to other implementations at the time:

> It is Dijkstra's generalizing style which stands out when a comparison is made with the work of his contemporaries. Rutishauser, for example, had limited the order of his procedure activations in his run-time system [59], while Dijkstra had no such restriction. Floyd's work [60, p.42-43] relied on three specialized "yo-yo" lists (i.e., stacks) instead of one general stack. Likewise, the MAD translator [61, p.28] used several specialized tables. Also, and most importantly, the ALCOR compilers were severely restricted in that they could not handle several ALGOL60 language constructs, including the recursive procedure (cf. Section ). Finally, though the Irons-Feurzeig system [54] did implement recursive-procedure activations and by means of one general run-time stack, it was, in the interest of efficiency, sophisticated in its run-time capabilities and, hence, unlike the run-time system of Dijkstra and Zonneveld.

[1] : https://www.dijkstrascry.com/node/4

[+] throwaway_pdp09|5 years ago|reply
The answers to this question are extremely interesting and show how much work has been done to get where we are today, a position I took for granted and never really understood until now (even in name: 'yo-yo list', for heaven's sake!)

It's a bit like parsing: these days we either reach for a technique (e.g. recursive descent) or an implementation of a tool (such as flex/bison) and never think there was a time before these when people had to struggle conceptually with these things.

[+] qazpot|5 years ago|reply
Arrogance in computer science is measured in nano-dijkstra. - Alan Kay
[+] thelazydogsback|5 years ago|reply
One fun item I'm lucky enough to own is a core memory stack from Dijkstra's Electrologica computer, ca. 1958 or so -- very low density, obviously. This machine is where the first working Algol compiler was written.
[+] enchiridion|5 years ago|reply
How do you come across something like that?
[+] Jtsummers|5 years ago|reply
If your opinion of Dijkstra is mostly negative and the result of reading others' opinions or selective readings of his EWDs (some of which are incredibly negative in tone), it would behoove you to read his EWDs. Just go through the archive and grab random ones based on title, then later do a more systematic reading of them.
[+] User23|5 years ago|reply
I spent a year reading all of the English language ones and it has made me a MUCH better programmer. I can't recommend them highly enough. Also his book A Discipline of Programming was seminal for me.
[+] dryrunes|5 years ago|reply
What is an EWD?
[+] rramadass|5 years ago|reply
A well written portrait of a truly great and original "Scientist" in the true sense of the term.

I have always wondered why his approach of constructing a correctness proof along with the implementation never really caught on. I think it is what allowed him to come up with his novel algorithms. Have we lost something of the real essence of logical thinking here? Perhaps the paper mentioned in the article, "Social Processes and Proofs of Theorems and Programs," which he was vehemently against, will give me the other side of the argument.

[+] ativzzz|5 years ago|reply
Probably because the cost associated with constructing such proofs for the vast majority of modern software use cases (like showing ads on 8 different browsers) is too high (cheaper to just hire QA teams), and most programmers are not simultaneously software engineers, computer scientists, and mathematicians.
[+] w0mbat|5 years ago|reply
My only beef against Dijkstra is that I applied for a fairly crappy job I was eminently qualified for in terms of experience, and the interviewer asked me to code Dijkstra's shunting-yard algorithm on a whiteboard (well, not in so many words, but that was the solution). It's a good algorithm, but way beyond the scope of what you should ask in a whiteboard interview. I didn't get the job.
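For anyone curious, the algorithm itself is short but fiddly to get right on a whiteboard. A minimal Python sketch (single-digit operands, four left-associative operators; the function names are mine):

```python
# Shunting yard: convert infix tokens to postfix (RPN) using an
# output list and an operator stack.
PREC = {'+': 1, '-': 1, '*': 2, '/': 2}

def to_rpn(tokens):
    out, ops = [], []
    for t in tokens:
        if t.isdigit():
            out.append(t)                 # operands go straight to output
        elif t in PREC:
            # pop operators of >= precedence (left associativity)
            while ops and ops[-1] in PREC and PREC[ops[-1]] >= PREC[t]:
                out.append(ops.pop())
            ops.append(t)
        elif t == '(':
            ops.append(t)
        elif t == ')':
            while ops[-1] != '(':         # pop back to the matching '('
                out.append(ops.pop())
            ops.pop()                     # discard the '(' itself
    while ops:                            # flush remaining operators
        out.append(ops.pop())
    return out

print(to_rpn(list("3+4*2")))    # ['3', '4', '2', '*', '+']
print(to_rpn(list("(3+4)*2")))  # ['3', '4', '+', '2', '*']
```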
[+] _Nat_|5 years ago|reply
I get the sense that Dijkstra would've agreed with you and perhaps had harsh words for that interview process.

Sometimes these interview stories sound like selecting a professor of mathematics by picking the one who can calculate the square-root-of-2 to 12 digits in their head the fastest.

[+] jamez|5 years ago|reply
Good read. For anyone interested in seeing the human side of Dijkstra, I recommend this documentary: https://www.youtube.com/watch?v=RCCigccBzIU

When I watched it I was both studying Computer Science and learning Dutch, so it was twice as interesting. The version I linked provides English subtitles.

[+] rramadass|5 years ago|reply
Neat video! Thanks for posting it.
[+] microtherion|5 years ago|reply
This is a really nice article on Dijkstra, both more detailed about his contributions and more balanced about his limitations — many articles about him are too hagiographic to my taste.
[+] amelius|5 years ago|reply
What are the greatest advances in CS of the last decade?
[+] mcguire|5 years ago|reply
As an aside, I took a History of Logic course from Krzysztof Apt (https://inference-review.com/author/krzysztof-apt/) while I was an undergrad and he was briefly at UT Austin. He is an excellent teacher and a very nice man.

(He loaned me a copy of a book on Frege's logic during the course---it had been signed and given to him by the person who let him stay on their floor when he first came to the US.)

[+] znpy|5 years ago|reply
«Edsger W. Dijkstra, “Recursive Programming,” Numerische Mathematik 2 (1960): 312–18, doi:10.1007/bf01386232. »

The link leads to a paywall. FFS, do they really have the nerve to ask 42,64€ for a paper from 1960?

[+] olodus|5 years ago|reply
I have wondered what Dijkstra would have said about the fact that most verification advances nowadays (from my limited view) focus on functional programming (dependent types and so forth). I get the feeling he wasn't too fond of functional programming.

Or maybe he was even able to comment on it; I'm not sure how far the field had moved while he was still with us. I have searched but couldn't find any comment from him on the subject.

[+] a_136_chiffa|5 years ago|reply
Alan Kay on Dijkstra, (1997 OOPSLA keynote): 'I don't know how many of you have ever met Dijkstra, but you probably know that arrogance in computer science is measured in nano-Dijkstras.'

Was this article written by Dijkstra himself?

[+] jakespoon|5 years ago|reply
Backus literally inspired functional programming with his Turing Award lecture, and Dijkstra attacked him for it. Dijkstra attacked the paper that inspired Haskell and everything else that currently exists in FP. He tried to destroy it. He should be a footnote in history.
[+] yters|5 years ago|reply
I have multiple CS and CE degrees, and learned his proofs, but industry is pretty antithetical to his approach, and most of CS is pretty irrelevant to it. I am not sure he has that much real influence.

Not that I don't wish it were otherwise. I love the beauty of mathematics and do wish software were more elegant.