To be clear about the "(2014)", although Knuth gave this talk in 2014, this transcript of the talk is from the upcoming (February 2021) issue of Communications of the ACM.
The whole sequence of articles/talks is interesting:
- (2007, Martin Campbell-Kelly): "The History of the History of Software" (DOI: 10.1109/MAHC.2007.4407444) — the trigger for what follows.
- (2014, Donald Knuth): "Let's Not Dumb Down the History of Computer Science". Video: https://www.youtube.com/watch?v=gAXdDEQveKw Transcript: this submission. (As mentioned, there was also a 2009 talk at Greenwich, of which I can only find a 6-minute video: https://www.youtube.com/watch?v=sKUg0V7pt8o)
- (2014, Martin Campbell-Kelly): "Knuth and the Spectrum of History": https://ieeexplore.ieee.org/document/6880249 (click on PDF)
- (2015, Thomas Haigh): "The Tears of Donald Knuth": https://cacm.acm.org/magazines/2015/1/181633-the-tears-of-do...
The short version is that over the years, in all "history of X" fields other than history of mathematics, the proportion of papers with technical content—exactly what ideas did people come up with, and how, etc—has decreased, while historians have taken a turn towards broader social commentary. In this talk, Knuth explains why he finds this unfortunate and what value practitioners can get from history. (He also ends with examples of this kind of history waiting to be written.) In his reply, Haigh points out that if computer scientists want such history they'll have to write and fund such writing; historians as a field won't do it.
(Someone in the YouTube comments points out that military history is like this: there exist military historians writing technical history about things like the "terrain, weapon systems, tactics, strategy, etc", funded by the military, because members of the profession do care about this. Unfortunately, this doesn't seem to be much the case in computer science.)
BTW, here are a couple of papers that Knuth wrote himself, which I would guess is the kind of historical writing he'd like to read (rich in technical detail):
- Von Neumann's First Computer Program (1970): https://fermatslibrary.com/s/von-neumanns-first-computer-pro...
- Ancient Babylonian Algorithms (1972): http://www.realtechsupport.org/UB/NP/Numeracy_BabylonianAlgo...
- The Early Development of Programming Languages (1976): https://news.ycombinator.com/item?id=25717306
Unlike in medicine, many of the ideas we had in the past were better than the commonly accepted way things are done now.
Capability-based security, for example, allowed you to run any program with no danger to your system. It's not part of any common OS. They had it at Xerox PARC, but Steve Jobs chose not to take that part.
On the other hand, the PARC focus on replicating paper was a step backwards from work by Engelbart and others.
The limitation of a single desktop was put in place to allow children to ease into the desktop metaphor... it wasn't meant for adults to be stuck with the training wheels on.
I've been digging back, looking for the ideas we missed... and boy, there are some really powerful tools waiting to be reified in a modern context.
> Capability-based security, for example, allowed you to run any program with no danger to your system. It's not part of any common OS.
I know, I know. Norm Hardy was really good, his system KeyKOS worked, and few could understand him. I used to know his "explainer", Susan Rajunas. We don't even have proper "rings of protection", like Multics had, any more.
The real problem today, though, is that we need to run programs with less authority than the user running them, and we still lack a good conceptual model for doing that. "Allow write to SD card" is far, far too powerful a privilege to grant.
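That "less authority than the user" idea is the object-capability model in miniature: a program receives no ambient authority (no open(), no filenames), only unforgeable references to the specific resources it may use. Here is a minimal Python sketch of the pattern; the names `AppendOnlyCap` and `untrusted_plugin` are hypothetical illustrations, not the API of any real capability OS:

```python
import io

class AppendOnlyCap:
    """Capability granting append-only access to one underlying file object.

    The holder can add data but cannot read, seek, truncate, or name
    any other file: its authority is exactly the reference it was handed.
    """
    def __init__(self, fileobj):
        self._write = fileobj.write  # capture only the power we grant

    def append(self, text: str) -> None:
        self._write(text)

def untrusted_plugin(log: AppendOnlyCap) -> None:
    # The plugin sees no filenames and no open(); it can only use
    # the capability it was explicitly given.
    log.append("plugin ran\n")

backing = io.StringIO()
untrusted_plugin(AppendOnlyCap(backing))
print(backing.getvalue())  # prints "plugin ran"
```

Roughly speaking, KeyKOS-style systems apply this shape to every kernel object, so "allow write to SD card" can shrink to "allow append to this one file".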
> Capability-based security, for example, allowed you to run any program with no danger to your system. It's not part of any common OS. They had it at Xerox PARC, but Steve Jobs chose not to take that part.
For FreeBSD there's Capsicum (1), and for Linux there's SELinux, which, although the implementation is not strictly capability-based, can resemble capability-based restrictions depending on the use case.
Also, although not built around capabilities, Linux has supported its own "capabilities" for a while: https://linux.die.net/man/7/capabilities
(1) https://www.cl.cam.ac.uk/research/security/capsicum/
The Burroughs mentioned by Knuth already had tagged memory, a systems programming language with unsafe code blocks, co-routines, bounds checking, Go-like compile speed (on 1961 hardware), and compiler intrinsics instead of assembly: the whole package.
It is still available nowadays as Unisys ClearPath MCP, with high-level security as a selling point, while it will take generations to fix the large-scale adoption of UNIX and C, if ever.
When someone performs surgery on one patient, the legacy medical impact of that surgery is contained to at most 2 people and their immediate families. There will probably be fewer than 10 later surgeries where someone might have to think about it -- and we don't begrudge a surgeon who doesn't.
When you add a commit to an OS, the legacy impact of that stays with the OS and must be un-done by anyone who changes it.
https://canvas.harvard.edu/courses/34992/assignments/syllabu... - Classics of Computer Science
and that page contains this link to a spreadsheet with links to over 150 historic papers and other sources:
https://docs.google.com/spreadsheets/d/1wS6O7-ZoFL7Cfjgt-kdh...
I find the comparison between medicine and computer science very interesting! I have never thought about it this way.
Could you elaborate on how you "dig back"? Do you read old papers / books? I imagine that one would have to dig through a lot of old stuff that just isn't interesting until you find something that could be really useful (like capability-based security).
Well, better in some sense: they might be cleaner, or more robust on some axis, but perhaps not practical or efficient. But, with computing the landscape can change rapidly so it is always good to see if older ideas can now be made a reality.
"The limitation of a single desktop was put in place to allow children to ease into the desktop metaphor." Any source for this? I dont thinkchildren were the target customers at that time.
> Capability-based security … allowed you to run any program, with no danger to your system.
‘No danger to your system’ shouldn't be conflated with ‘no danger to you.’ The problem with capabilities is that they reinforce and solidify the ‘smartphone model’: your data isn't yours, it's an app's, so what you're permitted to do with it is entirely controlled by someone else.
Related: Wirth's "Good Ideas, Through the Looking Glass": http://pascal.hansotten.com/uploads/wirth/Good%20Ideas%20Wir...
See also Bret Victor's talk, "The Future of Programming": http://worrydream.com/dbx/
The conceit: it's 1973, and he describes (with an overhead projector and transparencies) some then-innovative ideas that he hopes will be adopted in the future.
> Capability-based security, for example, allowed you to run any program with no danger to your system. It's not part of any common OS.
Basically, haven't we "reinvented" this via "The Eternal Mainframe"? Only now it's called Amazon Web Services?
> They had it at Xerox PARC, but Steve Jobs chose not to take that part.
At least they created Object Pascal and built two OSes with it, and later contributed to industry adoption of C++, instead of being yet another C powerhouse cloning UNIX.
> Capability-based security, ... They had it at Xerox PARC, but Steve Jobs chose not to take that part.
To be fair:
1) Apple II and Macintoshes were slow
2) and single user
3) and pre-Internet (no networking, even.)
Microsoft also had to make an early choice about what features to add to DOS, particularly networking or multiuser, and networking was chosen to implement first. (Probably because of Novell and other early networking competitors.)
> the PARC focus on replicating paper was a step backwards
"Replicating paper" was resurrected about 10 years ago when everybody was making apps with a UI like books turning pages. There was actually a programmer (Russian name if I receall) who specialized in consulting on that, advising companies on what algorithms to use and how to get the page flipping appearance they wanted. I thought it was pointless but hilarious, as it was totally ornamental.
In 2021, we're having difficulty surfacing even the history of computing and canonical documentation from the last two decades or so. Go search for anything about early HTML, early JavaScript, research results in CompSci not even ten years old, POSIX reference docs, or even up-to-date JVM javadoc pages using your favorite search engine. It'll bring up all kinds of content marketing, naive march-of-progress advertising, fringe tech and PLs aimed at juniors, hit-and-miss StackOverflow articles, and masses of academic pseudo-science publications, with the original content/sites being decommissioned in favor of shiny content-less "web 2.0" crap.
that’s probably where the methodology used by trained historians is useful.
I created a History of Tech Design class [0] of 6 different tracks for design students. I'm not a historian, but I have some training in finding original sources, sorting them, and accessing archives… it helped a lot, and I could use a lot of primary sources full of not-well-known details.
Back in the 90s it would have taken me weeks or months just to access this amount of historical material!
I had to turn this class into something accessible and interesting for design students / pros, but I certainly do not consider that dumbing down, just: Know your audience!
[0] https://workflowy.com/s/strate-history-of-te/a4ID6kKtznLwQC7...
What do you think is the difference between 'research results in CompSci not even ten years old' and 'academic pseudo science publications'?
Do you just disagree with the research of the last ten years for some reason, and so dismiss it as not the real research and think there's some kind of under-appreciated research out there?
As a computer science graduate student, I am always surprised by how rarely my peers seem to know or care about the history of our field. I doubt many of them would write papers about computer science history even if the incentives were better.
I think it is somehow related to the power of computer science to change the human condition. Everyone is thinking about the future. Mathematicians also crave novelty, but I don't think they feel "my work could change the world" in the same way as CS researchers.
Learning about CS history would make us better researchers, and thus more likely to change the world, but that line of motivation is not direct enough to excite people. There is still so much low-hanging fruit that can be plucked with only shallow knowledge of the field.
A favourite quote, which appears on my GitHub profile:
> Computing is pop culture. [...] Pop culture holds a disdain for history. Pop culture is all about identity and feeling like you're participating. It has nothing to do with cooperation, the past or the future—it's living in the present. I think the same is true of most people who write code for money. They have no idea where [their culture came from]. - Alan Kay
Changed how I think about my career.
A few classics worth reading:
- "As We May Think"
- Von Neumann's report on the EDVAC.
- The invention of index registers, originally called the "B Box". (The "A Box" being the main arithmetic unit.) Von Neumann missed that one.
- The original 19-page definition of ALGOL-60.
- HAKMEM, from MIT.
- A description of the SAGE air defense system.
- Something that describes how the Burroughs 5500, a very early stack machine, works.
- Something that describes how the IBM 1401 works. At one time, there were "business computers", all decimal, and they were very strange machines.
- Dijkstra's original P and V paper.
- Wirth's Pascal manual, the one with the compiler listing.
- The Bell System Technical Journal issue that describes UNIX.
- Jim Blinn's "A Trip Down the Graphics Pipeline", for the basics of classical computer graphics.
Compared to those employed in an industry, few write about the history of trains, or wifi, or road building, and on and on. It's normal for only a few academics to be interested.
There are only a couple of fields that are different - media (movies/broadcast), military, medicine. Probably because of the built-in human drama that's easily accessible?
As others have mentioned, Computer Science has a little human drama but few have sacrificed their lives or appeared heroic. It's a pretty dry field to document - more similar to road building.
Wow, he used my wife's portrait of himself. I was there the day she took it; it was the opportunity of a lifetime to meet one of my heroes, and he didn't disappoint. Knuth is incredibly sharp, lightning sharp for his age. He plays Emacs as masterfully as he plays his pipe organ in his home. I watched as he whipped around different buffers of literate programming, as he demoed some new angle on the properties of Sudoku solving algorithms. I asked him if he had been keeping up with machine learning, and he instantly name-dropped a dozen papers/authors he had read recently.
I have to say, I'm worried. Just in 2020 we lost a lot of greats, including John Conway, Larry Tesler, Chuck Peddle, Bert Sutherland, Frances Allen, and most recently Brad Cox. We're losing not just a lot of history, but a lot of the historians themselves, the people who can tell the stories that may not have been written down yet.
One problem in understanding computer science history is the essentially perpetual copyright laws. It's illegal to view or share many important past programs' source code, a problem not shared by other fields. Imagine discussing literature without being allowed to read it! There are exceptions, but they are exceptions.
The rise of open source software is finally letting us view some of that software. But we may never publicly know what some past giants did.
Programs' source code is just as copyrighted as late-20th-century literature.
However, it's not generally available (unlike late-20th-century literature). If Windows shipped with its source code (even if you were forbidden from doing anything with it), then we would be reading Windows source code in our CS classes.
Is the actual source code that important? I'm trying to think of what I'm missing. I'm personally more interested in the comments left in the code of some of those old projects than in the actual code, which was probably full of bugs and a bit crufty, just like the stuff we write today.
In most active and growing fields (medicine is one example), the history of the field is generally ignored by students and practitioners.
There are a few pleasant exceptions. For instance, Neurology Minute has had occasional bits on the history of neurology. See https://neurologyminute.libsyn.com/
However, when reading something historical (for instance, this interesting podcast on the history of the Inverted Brachioradialis Reflex: https://neurologyminute.libsyn.com/history-of-neurology-3-hx...), there is no expectation that it will actually contribute to practice.
Knuth writes eloquently what I have thought for a while. Some of these early works and ideas, even and perhaps especially those that did not evolve into something successful today, are worthy of study. I strongly suspect some of them could take us in new directions. Some of those that were "blocked" may now be unblocked by related developments in our field.
Maybe we have to go back to go forwards: https://blog.eutopian.io/the-next-big-thing-go-back-to-the-f...
I often find that when I can come across documentation that was written for a practitioner, say, 50 years ago, it helps clarify why things work the way they do today - the new, "streamlined" way of doing something only ever makes sense to me after I understand what the old way actually was.
The main problem seems to be that computer scientists don't care about history. That seems a bit strange to me, since there is no lack of people to analyse the historical parts of mathematics or physics. Maybe the problem is that computer science history doesn't seem like history yet, since it is relatively recent?
Because a lot of CS history is only relevant in a larger/humanities context. In terms of absolute technical value, the historical contributions are not necessarily as valuable. Russell and Whitehead's Principia Mathematica has next to no practical use in modern-day software engineering. Its existence helped induce Church and Turing to create modern computer science theory, but the discrete mathematics in that book itself holds little value for your average computer scientist outside of being an intellectual exercise.
Another issue is that CS is a branch of applied math that also happens to be extremely profitable. Math operates on much larger timescales than other domains, but business and economics demand immediate attribution. McCulloch and Pitts had their Hebbian neural network almost a century ago. Kleene, of regex fame, designed his neural network in 1951 [0]. These vast timescales are quite normal for advanced mathematics. But in 2010 GPUs became cheap, and the obscure theories of the pre-GOFAI age were suddenly immensely profitable. Of course the names and associations would go to the most recent implementors, the ones who actually applied the ideas and made them work, rather than those who dreamt them up a century ago.
[0] https://news.ycombinator.com/item?id=25882079
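For readers unfamiliar with those early nets: a McCulloch-Pitts unit is just a fixed threshold gate over binary inputs, with no learning involved, and that boolean-logic character is what let Kleene connect such nets to regular events. A rough Python sketch, with weights and thresholds hand-picked purely for illustration:

```python
def mp_neuron(inputs, weights, threshold):
    """McCulloch-Pitts unit: fires (1) iff the weighted sum of binary
    inputs reaches the threshold. Inhibitory inputs get negative weights."""
    return 1 if sum(i * w for i, w in zip(inputs, weights)) >= threshold else 0

# Hand-wired logic gates, in the style of the 1943 paper:
AND = lambda a, b: mp_neuron([a, b], [1, 1], 2)
OR  = lambda a, b: mp_neuron([a, b], [1, 1], 1)
NOT = lambda a:    mp_neuron([a],    [-1],   0)

print(AND(1, 1), OR(0, 1), NOT(1))  # prints "1 1 0"
```

Nothing here resembles modern deep learning; the point is that the pre-GOFAI "neural network" was a piece of discrete mathematics, which is why it sat in math's long timescales until hardware caught up.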
- how far we came in each decade
- how brilliant people had to be to invent ideas that feel obvious to us
- how much people got done with previous decades' technology
- how many times the consensus of expert opinion mis-predicted the next development
because we want to believe
- the current state of the art is closer to perfection than it is to the last generation
- we only have to embrace the obvious
- people who are one step behind the times are at an insurmountable disadvantage to us who are up-to-date
- the future will bring nothing more than a higher perfection of the ideas we embrace now
It's intellectual millennialism. We want to think that the new ideas we receive at the beginning of our career are the final piece of the puzzle, that there's no point in generations beyond us. Our ideas give us special powers to accomplish things that no previous generation could accomplish, and no subsequent generation will accomplish much more, or the same things much easier, than we did. We are standing at the end of history. We are the generation that left the desert and entered the promised land.
History upsets all that. History teaches us that the future will stand in relation to us as we stand in relation to the past, whereas we want to believe that we are equals with the future and superior to the past.
The main problem is that computer researchers don't always write up their ideas in papers. And source code is hard to read, system dependent - you can't understand it without being familiar with hardware which may not even exist any more - and may be impossible to access.
This isn't really about history. This is about ideas which could still be useful today but which have been forgotten, for a variety of reasons.
Most CS departments don't have anyone who specialises in this. And academic historians have other priorities.
Calling this dumbing down doesn't help. It's not about historians "dumbing down" CS, it's about not understanding the difference between what historians do - create narratives - and what archivists and researchers do, which is more closely related to continuing research and development.
Physics actually has the same problem. Anything outside a limited mainstream is forgotten very quickly. Math seems less prone to it, perhaps because math is made of academic papers and nothing else, and there's more of a tradition of open exploration and communication.
I would submit that the “Graphics Gems” series of books does document a lot of rendering techniques in their historical context. They were written to be current for practitioners at the time, but the connections are drawn.
Similarly Michael Abrash’s articles on the development of Doom et al.
I wonder how much of this is born from the fact that computing outwardly appears very transient.
Our interfaces with computing have changed so radically, and many things are legitimately obsolete for the majority of us; no one is likely to reach for a punch card or a 3.5" floppy (there might be a few edge cases on this one :) ).
Most of this is the facade of computing, though, and many of the underlying fundamentals are the same, but I feel that this outward appearance of transience leaves us flippant about retrospection.
It's not a great comparison, but let's look at military history: many of the tactics and much of the hardware remain the same for decades, with design and development taking a significant portion of that time. The F-35 has been in development for nearly two decades, and it's only now at the beginning of its service life.
This article returns error 500 for me. Seriously, STOP rendering your static web content with run-time server-side rendering! There is absolutely no reason this article couldn't be delivered as static assets by a CDN.
I was going to say the same thing, but it's probably a cardboard box filled with paper. Somebody who understands the stuff needs to spend the time to go through it, scan it, and catalog it. And given the time crunch for the rest of Vol. 4, not to mention everything after it, I don't want Knuth doing it.
Seems to me that there are multiple histories of computing. One is the history of theoretical computer science, which I think is what Knuth is referring to. This is actually fairly well preserved in academia, and anyone with a CS background should have gone through it. Another is the history of programming, of which we know the origins but which of late has become nearly impossible to track. The last is the social history of the internet, which doesn't require a technical background.
> There was a brilliant programmer at Digitek who had completely novel and now unknown ideas for software development; he never published anything, but you could read and analyze his source code.
Does anyone know to whom Knuth was referring? Did this have anything to do with the work on the PL/I compiler?
I feel the same way about the history of the Web. There are hundreds of browsers we can be using, each one with its own unique features, advantages, and, dare I say, beauty. Really, writing for ANY browser shows you why things are the way they are today, and helps you not reinvent Unix, poorly :D
This type of history is also crucial in the fight against overreaching patent and copyright.
Talking in broad strokes about a novel solution that is later patented probably won’t get the patent invalidated. Showing the code for the implementation would.
[+] [-] svat|5 years ago|reply
The whole sequence of articles/talks is interesting:
- (2007, Martin Campbell-Kelly), "The History of the History of Software" (DOI: 10.1109/MAHC.2007.4407444 ) — the trigger for what follows.
- (2014, Donald Knuth): "Let's Not Dumb Down the History of Computer Science". Video: https://www.youtube.com/watch?v=gAXdDEQveKw Transcript: this submission (As mentioned, there was also a 2009 talk at Greenwich of which I can only find a 6-minute video: https://www.youtube.com/watch?v=sKUg0V7pt8o)
- (2014, Martin Campbell-Kelly): "Knuth and the Spectrum of History": https://ieeexplore.ieee.org/document/6880249 (click on PDF)
- (2015, Thomas Haigh): "The Tears of Donald Knuth": https://cacm.acm.org/magazines/2015/1/181633-the-tears-of-do...
The short version is that over the years, in all "history of X" fields other than history of mathematics, the proportion of papers with technical content—exactly what ideas did people come up with, and how, etc—has decreased, while historians have taken a turn towards broader social commentary. In this talk, Knuth explains why he finds this unfortunate and what value practitioners can get from history. (He also ends with examples of this kind of history waiting to be written.) In his reply, Haigh points out that if computer scientists want such history they'll have to write and fund such writing; historians as a field won't do it.
(Someone in the YouTube comments points out that military history is like this: there exist military historians writing technical history about things like the "terrain, weapon systems, tactics, strategy, etc", funded by the military, because members of the profession do care about this. Unfortunately, this doesn't seem to be much the case in computer science.)
BTW, here are a couple of papers that Knuth wrote himself, which I would guess is the kind of historical writing he'd like to read (rich in technical detail):
- Von Neumann's First Computer Program (1970): https://fermatslibrary.com/s/von-neumanns-first-computer-pro...
- Ancient Babylonian algorithms (1972): http://www.realtechsupport.org/UB/NP/Numeracy_BabylonianAlgo...
- The Early Development of Programming Languages (1976): https://news.ycombinator.com/item?id=25717306
[+] [-] mikewarot|5 years ago|reply
Capability based security, for example was something that allowed you to run any program, with no danger to your system. It's not part of any common OS. They had it at Xerox PARC, but Steve Jobs chose not to take that part.
On the other hand, the PARC focus on replicating paper was a step backwards from work by Engelbart and others.
The limitation of a single desktop was put in place to allow children to ease into the desktop metaphor... it wasn't meant for adults to be stuck with the training wheels on.
I've been digging back, looking for the ideas we missed... and boy, there are some really powerful tools waiting to be reified in a modern context.
[+] [-] Animats|5 years ago|reply
I know, I know. Norm Hardy was really good, his system, KeyCos, worked, and few could understand him. I used to know his "explainer", Susan Rajunas. We don't even have proper "rings of protection", like Multics, any more.
Although the real problem today is that we need to run programs with less authority than the user running them, and we still lack a good conceptual model for doing that. "Allow write to SD card" is far, far too powerful a privilege to grant.
[+] [-] jdsalaro|5 years ago|reply
For FreeBSD there's Capsicum(1) and for Linux, although the implementation is not strictly capabily-based, there's SE-Linux which depending on the usecase resembles capability-based restrictions.
Also, although not based around capabilities, Linux has supported them for awhile https://linux.die.net/man/7/capabilities
(1) https://www.cl.cam.ac.uk/research/security/capsicum/
[+] [-] pjmlp|5 years ago|reply
Nowadays still available as Unisys ClearPath MCP, with high level security as selling point, while it will take generations to fix UNIX and C's adoption at large, if ever.
[+] [-] afarrell|5 years ago|reply
When you add a commit to an OS, the legacy impact of that stays with the OS and must be un-done by anyone who changes it.
[+] [-] jonjacky|5 years ago|reply
https://canvas.harvard.edu/courses/34992/assignments/syllabu... - Classics of Computer Science
and that page contains this link to a spreadsheet with links to over 150 historic papers and other sources:
https://docs.google.com/spreadsheets/d/1wS6O7-ZoFL7Cfjgt-kdh...
[+] [-] C4ne|5 years ago|reply
[+] [-] saagarjha|5 years ago|reply
[+] [-] andi999|5 years ago|reply
[+] [-] kps|5 years ago|reply
‘No danger to your system’ shouldn't be conflated with ‘no danger to you.’ The problem with capabilities is that they reinforce and solidify the ‘smartphone model’: your data isn't yours, it's an app's, so what you're permitted to do with it is entirely controlled by someone else.
[+] [-] jonjacky|5 years ago|reply
http://pascal.hansotten.com/uploads/wirth/Good%20Ideas%20Wir...
[+] [-] jonjacky|5 years ago|reply
http://worrydream.com/dbx/
[+] [-] bsder|5 years ago|reply
Basically, haven't we "reinvented" this via "The Eternal Mainframe"? Only now it's called Amazon Web Services?
[+] [-] forgotmypw17|5 years ago|reply
It's talked about in just about every tribal knowledge compendium. It's assisted by doctors in some countries, almost completely ignored in others.
[+] [-] pjmlp|5 years ago|reply
At least they created Object Pascal and did two OSes with it, and contributed later on to the C++ industry adoption, instead of being yet another C powerhouse cloning UNIX.
[+] [-] mr_t|5 years ago|reply
[+] [-] doctorbaum|5 years ago|reply
[+] [-] redis_mlc|5 years ago|reply
To be fair:
1) Apple II and Macintoshes were slow
2) and single user
3) and pre-Internet (no networking, even.)
Microsoft also had to make an early choice about what features to add to DOS, particularly networking or multiuser, and networking was chosen to implement first. (Probably because of Novell and other early networking competitors.)
> the PARC focus on replicating paper was a step backwards
"Replicating paper" was resurrected about 10 years ago when everybody was making apps with a UI like books turning pages. There was actually a programmer (Russian name if I receall) who specialized in consulting on that, advising companies on what algorithms to use and how to get the page flipping appearance they wanted. I thought it was pointless but hilarious, as it was totally ornamental.
https://en.wikipedia.org/wiki/Skeuomorph
[+] [-] tannhaeuser|5 years ago|reply
Fuck the algorithm!
[+] [-] juliendorra|5 years ago|reply
I created a History of Tech Design class [0] of 6 different tracks for design students. I’m not an historian, but I have some training in finding original sources, sorting them, accessing archives… it helped a lot, and I could use a lot of primary sources full of not-well-known details.
Back in the 90s it would have taken me weeks or months to just access this amount of historical material!
I had to turn this class into something accessible and interesting for design students / pros, but I certainly do not consider that dumbing down, just: Know your audience!
[0] https://workflowy.com/s/strate-history-of-te/a4ID6kKtznLwQC7...
[+] [-] chrisseaton|5 years ago|reply
Do you just disagree with the research of the last ten years for some reason so dismiss it as not the real research and think there's some kind of under-appreciated research out there?
[+] [-] blt|5 years ago|reply
I think it is somehow related to the power of computer science to change the human condition. Everyone is thinking about the future. Mathematicians also crave novelty, but I don't think they feel "my work could change the world" in the same way as CS researchers.
Learning about CS history would make us better researchers, and thus more likely to change the world, but that line of motivation is not direct enough to excite people. There is still so much low-hanging fruit that can be plucked with only shallow knowledge of the field.
[+] [-] thundergolfer|5 years ago|reply
> Computing is pop culture. [...] Pop culture holds a disdain for history. Pop culture is all about identity and feeling like you're participating, It has nothing to do with cooperation, the past or the future—it's living in the present. I think the same is true of most people who write code for money. They have no idea where [their culture came from]. - Alan Kay
Changed how I think about my career.
[+] [-] Animats|5 years ago|reply
- "As We May Think"
- Von Neumann's report on the EDVAC.
- The invention of index registers, originally called the "B Box". (The "A Box" being the main arithmetic unit.) Von Neumann missed that one.
- The original 19 page definition of ALGOL-60.
- HAKMEM, from MIT.
- A description of the SAGE air defense system.
- Something that describes how the Burroughs 5500, a very early stack machine, works.
- Something that describes how the IBM 1401 works. At one time, there were "business computers", all decimal, and they were very strange machines.
- Djykstra's original P and V paper.
- Wirth's Pascal manual, the one with the compiler listing
- The Bell System Technical Journal issue that describes UNIX.
- Jim Blinn's "A trip down the graphics pipeline", for the basics of classical computer graphics.
[+] [-] JoeAltmaier|5 years ago|reply
There are only a couple of fields that are different - media (movies/broadcast), military, medicine. Probably because of the built-in human drama that's easily accessible?
As others have mentioned, Computer Science has a little human drama but few have sacrificed their lives or appeared heroic. It's a pretty dry field to document - more similar to road building.
[+] [-] cromwellian|5 years ago|reply
I have to say, I'm worried. Just in 2020, we lost of a lot of greats, including Jon Conway, Larry Tessler, Chuck Peddle, Bert Sutherland, Frances Allen, most recently Brad Cox. We're losing not just a lot of history, but a lot of the historians themselves, the people to whom can tell the stories that may not have been written down yet.
[+] [-] dwheeler|5 years ago|reply
The rise of open source software is finally letting us view some of that software. But we may never publicly know what some past giants did.
[+] [-] aidenn0|5 years ago|reply
However, it's not generally available (unlike late 20th century literature). If windows sold with the source code (even if you were forbidden from doing anything with it), then we would be reading windows source code in our CS classes.
[+] [-] ed25519FUUU|5 years ago|reply
[+] [-] guidoism|5 years ago|reply
[+] [-] Ice_cream_suit|5 years ago|reply
There are a few pleasant exceptions. For instance, Neurology Minute have had occassional bits on the history of neurology. See https://neurologyminute.libsyn.com/
However, when reading something historical ( for instance this interesting podcast on the history of the Inverted Brachioradialis Reflex) https://neurologyminute.libsyn.com/history-of-neurology-3-hx..., there is no expectation that it will actually contribute to practice.
[+] [-] psyklic|5 years ago|reply
[+] [-] nickdothutton|5 years ago|reply
https://blog.eutopian.io/the-next-big-thing-go-back-to-the-f...
[+] [-] commandlinefan|5 years ago|reply
[+] [-] username90|5 years ago|reply
[+] [-] ampdepolymerase|5 years ago|reply
Another issue is that CS is a branch of applied math that also happens to be extremely profitable. Math operates on much larger timescales than other domains. But business and economics demands immediate attribution. McCulloch and Pitts had their Hebbian neural network almost a century ago. Kleene of the Regex fame designed his neural network in 1951 [0]. These vast timescales are quite normal for advanced mathematics. But in 2010 GPUs became cheap and the obscure theories of the pre-GOFAI age are suddenly immensely profitable. Of course the names and associations would be with their most recent implementors. The ones who actually applied and made it possible, rather than those whom dreamt it up a century ago.
[0] https://news.ycombinator.com/item?id=25882079
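As an aside, the McCulloch and Pitts model mentioned above is simple enough to sketch in a few lines of Python. This is a minimal illustration of the idea (a binary threshold unit), not anyone's historical code; the names are mine:

```python
# A McCulloch-Pitts threshold unit: outputs 1 ("fires") if and only if
# the weighted sum of its binary inputs reaches a fixed threshold.
def mp_neuron(inputs, weights, threshold):
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total >= threshold else 0

# With unit weights, threshold 2 realizes AND and threshold 1 realizes OR,
# which is the sense in which these units compute logic.
def AND(a, b):
    return mp_neuron([a, b], [1, 1], 2)

def OR(a, b):
    return mp_neuron([a, b], [1, 1], 1)
```

The 1943 paper's point, and what Kleene later built on, was that networks of such units can compute exactly the events describable by what we now call regular expressions.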
dkarl | 5 years ago
- how far we came in each decade
- how brilliant people had to be to invent ideas that feel obvious to us
- how much people got done with previous decades' technology
- how many times the consensus of expert opinion mispredicted the next development
because we want to believe
- the current state of the art is closer to perfection than it is to the last generation
- we only have to embrace the obvious
- people who are one step behind the times are at an insurmountable disadvantage to us who are up-to-date
- the future will bring nothing more than a higher perfection of the ideas we embrace now
It's intellectual millennialism. We want to think that the new ideas we receive at the beginning of our careers are the final piece of the puzzle, and that there's no point to the generations beyond us. Our ideas give us special powers to accomplish things that no previous generation could, and no subsequent generation will accomplish much more, or do the same things much more easily, than we did. We are standing at the end of history. We are the generation that left the desert and entered the promised land.
History upsets all that. History teaches us that the future will stand in relation to us as we stand in relation to the past, whereas we want to believe that we are equals with the future and superior to the past.
TheOtherHobbes | 5 years ago
This isn't really about history. This is about ideas which could still be useful today but which have been forgotten, for a variety of reasons.
Most CS departments don't have anyone who specialises in this. And academic historians have other priorities.
Calling this dumbing down doesn't help. It's not about historians "dumbing down" CS, it's about not understanding the difference between what historians do - create narratives - and what archivists and researchers do, which is more closely related to continuing research and development.
Physics actually has the same problem. Anything outside a limited mainstream is forgotten very quickly. Math seems less prone to it, perhaps because math is made of academic papers and nothing else, and there's more of a tradition of open exploration and communication.
bboreham | 5 years ago
Similarly, Michael Abrash's articles on the development of Doom et al.
Guthur | 5 years ago
Our interfaces with computing have changed so radically, and many things are legitimately obsolete for the majority of us; no one is likely to reach for a punch card or a 3.5" floppy (there might be a few edge cases on this one :) ).
Most of this is the facade of computing, though; many of the underlying fundamentals are the same. But I feel that this outward appearance of transience leaves us flippant about the retrospective.
It's not a great comparison, but look at military history: many of the tactics and much of the hardware remain the same for decades, with design and development taking a significant portion of that time. The F-35 has been in development for nearly two decades, and it's only now at the beginning of its service life.
baobabKoodaa | 5 years ago
User23 | 5 years ago
I'd love to see this. Please put it on your webpage Dr. Knuth!
guidoism | 5 years ago
wittycardio | 5 years ago
kevinwang | 5 years ago
I don't think I've been exposed to it, but I would love to be. Where did you learn about TCS history?
squibbles | 5 years ago
Does anyone know whom Knuth was referring to? Did this have anything to do with the work on the PL/I compiler?
forgotmypw17 | 5 years ago
johnorourke | 5 years ago
Or accountancy - another old trade with immediate commercial value, though perhaps without as much intellectual property.
ballenf | 5 years ago
Talking in broad strokes about a novel solution that is later patented probably won’t get the patent invalidated. Showing the code for the implementation would.