
Vigorous Public Debates in Academic Computer Science

81 points | mad | 9 years ago | blog.regehr.org

35 comments

[+] Animats|9 years ago|reply
Some of those are classics. The Lipton-Perlis-DeMillo argument against program verification is a good one.[1] They demonstrated that manual program verification via hand theorem proving is buggy. Automated theorem proving was new back then.

This was an issue back in the 80s for a philosophical reason in mathematics. Mathematics classically approves of short, clever, elegant proofs. The paper-and-pencil crowd was trying to do that for code. In practice, program verification is about mechanized grinding through massive numbers of dumb assertions to catch off-by-one errors.

This used to bother mathematicians. When the four-color theorem was originally proved, it was done partly by hand and partly by computer case analysis. The computer part bothered many mathematicians. The culture has changed. Recently, the proof was redone with the old hand parts also carried out by computer. Since that eliminated some possible human errors, it was a step forward in solidifying the proof.

We also know now how to check a theorem prover. Some theorem provers emit a proof trace, huge files of "Step 2437: apply rule 42 to change 2*X to X+X". Proof traces can be fed into a dumb proof checker which does the indicated rewrite and checks that the expected output is obtained.
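A toy sketch of what such a dumb checker does (the trace format and rule names here are made up for illustration, not any real prover's): it replays each recorded rewrite against its own tiny trusted rule table and verifies the expected output.

```python
# A toy proof-trace checker. Each trace step names a rewrite rule,
# the term before rewriting, and the expected result. The checker
# trusts nothing except its own small rule table.

# Rule table: rule name -> rewrite function over terms (strings here).
RULES = {
    "double_to_sum": lambda t: t.replace("2*X", "X+X"),
    "drop_add_zero": lambda t: t.replace("X+0", "X"),
}

def check_trace(trace):
    """Replay every step; fail loudly if any rewrite disagrees."""
    for step, (rule, before, expected) in enumerate(trace, 1):
        got = RULES[rule](before)
        if got != expected:
            raise ValueError(f"step {step}: expected {expected!r}, got {got!r}")
    return True

# A trace in the spirit of "Step 2437: apply rule 42 to change 2*X to X+X".
trace = [
    ("double_to_sum", "2*X", "X+X"),
    ("drop_add_zero", "X+X+0", "X+X"),
]
print(check_trace(trace))  # True: every step replays correctly
```

The point is that the checker is far simpler than the prover that produced the trace, so it is much easier to trust.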

So progress has resolved that issue.

[1] http://www.yodaiken.com/papers/p271-de_millo.pdf

[+] nickpsecurity|9 years ago|reply
As far as prover verification goes, these people mostly solved it:

https://www.cs.utexas.edu/users/jared/milawa/Web/

They start with a logic small enough to verify by hand, then build on it one layer at a time, adding increasingly sophisticated logics until they have a useful first-order theorem prover. I also found a HOL-to-FOL converter. Since most of their stuff is in HOL, that means the verification chain is almost complete at that point.

[+] ChuckMcM|9 years ago|reply
This was the one I was going to bring up as well. USC-ISI had several students working in this space in the early '80s. The other one that comes to mind is the argument over AI: whether it was a real thing or just an artifact of better algorithms.
[+] pron|9 years ago|reply
While not exactly a debate -- more of a fundamental difference in outlook -- these are interesting, completely opposite claims:

Bob Harper[1]:

> There is an alternative… without… reference to an underlying machine… [W]e adopt a linguistic model of computation, rather than a machine model, and life gets better! There is a wider range of options for expressing algorithms, and we simplify the story of how algorithms are to be analyzed.

Leslie Lamport[2]:

> Thinking is not the ability to manipulate language; it’s the ability to manipulate concepts. Computer science should be about concepts, not languages. … State machines… provide a uniform way to describe computation with simple mathematics. The obsession with language is a strong obstacle to any attempt at unifying different parts of computer science.

[1]: https://existentialtype.wordpress.com/2011/03/16/languages-a...

[2]: http://research.microsoft.com/en-us/um/people/lamport/pubs/s...
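As a toy illustration of Lamport's state-machine view (my example, not his): Euclid's algorithm described as nothing but a set of states and a next-state relation, with essentially no programming-language machinery involved.

```python
# Euclid's gcd as a state machine: a state is a pair (a, b),
# the next-state relation maps (a, b) to (b, a mod b),
# and the machine halts when b == 0.

def next_state(state):
    a, b = state
    return (b, a % b)

def run(state):
    while state[1] != 0:   # halting condition
        state = next_state(state)
    return state[0]        # the gcd is the first component

print(run((48, 18)))  # 6
```

The same description works whether you later implement it in C, ML, or hardware, which is roughly Lamport's point about uniformity.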

[+] nickpsecurity|9 years ago|reply
So far, the evidence leans toward the latter view: physics imposes hard constraints on how we build our CPUs, etc., for maximum performance. The same holds for software, where certain things keep showing up (e.g., caching) due to mechanical constraints. The best result would seem to be a combination of abstract models for mathematical analysis with high-level languages that map easily to both those models and the machines. Of course, many people have produced such tools. We're heading in what appears to be the objectively correct direction. :)
[+] agumonkey|9 years ago|reply
Pardon my anecdote, but after years of imperative and OOP, ad nauseam, I went into Lisp/ML to be stateless. But then structural parametric types and "state" got closer and closer at the conceptual level, just not mutable.
[+] tlb|9 years ago|reply
I was a co-author on one of the contentious papers [0] in log-structured file systems, addressing the question of whether log cleaning could be done during idle times (yes, for any reasonable workload).

There was a surprising amount of drama and personal rivalry. While distracting, knowing that our results would be picked apart motivated us to be really rigorous.

In the end, modern file systems take the best aspects of both. Like the RISC-CISC debate, it's hard to draw a firm conclusion about the two ideas because they're tied up with the quality of implementations, and the optimal answer involves a synthesis of the ideas.

[0] http://www.eecs.harvard.edu/margo/papers/usenix95-clean/pape...
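The idea under debate can be sketched roughly like this (a hypothetical toy simulation for illustration, not the paper's implementation): a cleaner that compacts log segments only when no foreground requests are pending.

```python
# Toy log-structured store: writes append to the current segment;
# a cleaner compacts segments (dropping overwritten records) only
# when the foreground request queue is idle.

from collections import deque

class ToyLog:
    def __init__(self):
        self.segments = [[]]    # each segment is a list of (key, value)
        self.pending = deque()  # queued foreground write requests

    def submit(self, key, value):
        self.pending.append((key, value))

    def service_one(self):
        # Serve one foreground request by appending to the live segment.
        if self.pending:
            self.segments[-1].append(self.pending.popleft())

    def clean_if_idle(self):
        # Only clean when no foreground work is waiting.
        if self.pending:
            return False
        live = {}
        for seg in self.segments:
            for k, v in seg:
                live[k] = v     # later writes supersede earlier ones
        self.segments = [list(live.items())]
        return True

log = ToyLog()
for kv in [("a", 1), ("a", 2), ("b", 3)]:
    log.submit(*kv)
while log.pending:
    log.service_one()
print(log.clean_if_idle())                # True: queue drained, cleaning ran
print(sum(len(s) for s in log.segments))  # 2 live records after compaction
```

The empirical question in the debate was whether real workloads actually leave enough idle time for this deferral to work, which is not something a sketch like this can settle.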

[+] userbinator|9 years ago|reply
One debate that I thought would be mentioned is "Go To Statement Considered Harmful", "'GOTO Considered Harmful' Considered Harmful", and "'"GOTO Considered Harmful" Considered Harmful' Considered Harmful?". I find it somewhat disappointing that CS education almost always mentions Dijkstra's original argument but not the other side. The latter two can be found here:

http://web.archive.org/web/20090320002214/http://www.ecn.pur...

https://dx.doi.org/10.1145%2F22899.315729

[+] marcosdumay|9 years ago|reply
That's because Dijkstra's original argument was part of the research line that created structured programming. The essay (and the other papers) really has nothing against the usage in your first link. The real problem was goto into loops or subroutines, not out of them, and not goto used as a switch statement.

Current languages mostly don't even allow the bad kinds anymore, so people can ignore history, and complain that "goto isn't so bad, I've never seen anybody abusing it like he claims in the essay."

[+] RandomOpinion|9 years ago|reply
The best modern analysis of Dijkstra's original letter I've seen is David Tribble's "Go To Statement Considered Harmful: A Retrospective", which dissects it line by line and explains it in the historical context in which Dijkstra wrote it.

http://david.tribble.com/text/goto.html

[+] waterhouse|9 years ago|reply
The "GOTO considered harmful" papers seem to have been appended to Regehr's post.
[+] westoncb|9 years ago|reply
I had heard vaguely of the 'microkernel debate' in the past, but never checked it out. I didn't realize the other main participant (aside from Linus Torvalds) was Andrew Tanenbaum. Very entertaining/interesting read. (The linked article refers us here: http://www.oreilly.com/openbook/opensources/book/appa.html)
[+] nickpsecurity|9 years ago|reply
The Shapiro article in Tanenbaum's link is good too. He's a bright guy who did the EROS project, for anyone who doesn't know. Here it is on archive.org:

https://web.archive.org/web/20050514121653/http://www.eros-o...

https://web.archive.org/web/20050211090602/http://www.eros-o...

Two other good articles are on why the EAL4 evaluations that Windows et al. were getting are worthless, and on what a secure build system might look like.

https://web.archive.org/web/20040214043848/http://eros.cs.jh...

https://www.usenix.org/legacy/publications/library/proceedin...

[+] tr352|9 years ago|reply
To be fair, these are debates about issues that are rather insignificant from a broader CS perspective: engineering problems (methodology, network protocols, file systems, kernel implementation details), or debates that are infamous mostly for their polemic nature (Dijkstra-Backus, or Dijkstra-anything, for that matter).

I think that's true. In CS there are no deep, fundamental, debate-evoking philosophical schisms of the kind you sometimes find in other fields. Consider economics, philosophy, physics (the theory of everything), or the soft sciences.

Even the hottest problem in CS (P vs NP, of course) doesn't seem to evoke much debate among computer scientists.

[+] kctess5|9 years ago|reply
Let's not forget about tabs v spaces though
[+] jackmott|9 years ago|reply
Need a lot less rhetoric and a lot more controlled experiments.
[+] adrianratnapala|9 years ago|reply
We can't "control".

For example: someone in the nineties or noughties once experimentally "proved" that Java was faster than C for matrix math, because of the overhead of `free()`! If that sounds ridiculous to you, then consider that this really does matter for certain implementations of certain matrix problems.

So the take-away hangs not on the nature of C vs. Java, but on whether the study matters in the real world. Empirics in CS are good for engineers quietly doing their own jobs, but for debates they are as fraught as in the soft sciences.

Indeed, CS is a soft science. Or rather, it is not a science at all, but a mathematical art, with bonus people issues.

[+] pron|9 years ago|reply
Computer scientists really hate those. Seems like we're stuck with the rhetoric.
[+] GFK_of_xmaspast|9 years ago|reply
Also of interest: one of Regehr's commenters posted a link to Kahan's rebuttal to those "unum" numbers.