Eh. The fact that you had to spend around $1,000 to buy the set of manuals required, back when that was perhaps $2,000 in today's dollars, was probably a bigger factor than the article makes it out to be. Standards organizations like ISO and ANSI, at least back then, made their money selling manuals, which just doesn't work for something so ubiquitous that huge numbers of people have to know the details.
Another problem that I read about at the time is that some of the ISO protocols didn't work. The "ISORMites" (http://www.amazon.com/Elements-Networking-Style-Animadversio...) were reported to be contemptuous of the Internet protocols, their origin from the "US Department of Death", etc. ... and didn't have a policy like the IETF's "running code wins" approach.
More generally, per http://www.ietf.org/tao.html "One of the 'founding beliefs' is embodied in an early quote about the IETF from David Clark: 'We reject kings, presidents and voting. We believe in rough consensus and running code'"
OSI's answer to TCP, TP4, was implemented in Windows 2000.
TP4 assumed a reasonably reliable data path, probably an X.25 virtual circuit, underneath. It didn't have all the timing, reordering, retransmit, and congestion control machinery that lets TCP work over bad links.
On the other hand, the OSI crowd had, at the time, more experience with scaling and routing. IP's class A, B and C networks, autonomous system numbers, and Border Gateway Protocol didn't scale well. Routing in the Internet is still something of a hack. Today's routers almost have to have an in-memory table mapping all 4 billion IPv4 addresses to Autonomous System Numbers. If memory wasn't so cheap, we'd be having real problems.
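For illustration, the lookup involved is a longest-prefix match over a prefix-to-origin-ASN table rather than a per-address table; a toy Python sketch (the prefixes and ASNs below are made up) of how that works:

```python
import ipaddress

# Toy prefix -> origin-ASN table (made-up entries for illustration)
TABLE = {
    ipaddress.ip_network("10.0.0.0/8"): 64500,
    ipaddress.ip_network("10.1.0.0/16"): 64501,
    ipaddress.ip_network("192.0.2.0/24"): 64502,
}

def lookup(addr):
    """Return the ASN of the longest (most specific) prefix containing addr."""
    ip = ipaddress.ip_address(addr)
    best = None
    for net, asn in TABLE.items():
        if ip in net and (best is None or net.prefixlen > best[0].prefixlen):
            best = (net, asn)
    return best[1] if best else None
```

A real router does this with specialized data structures (tries, TCAMs), but the more-specific-prefix-wins rule is the same.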
TCP is in fact pretty shitty over "bad links". Standard TCP congestion control algorithms assume that packet loss indicates congestion, not link degradation. This means actual "bad links" trick TCP into thinking there's congestion, so it backs off (which doesn't fix the problem), backs off some more (which still doesn't fix the problem), and so on, resulting in terrible throughput in the face of packet loss.
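You can see the dynamic in a crude AIMD toy model (this is a sketch, not real TCP; the loss here is random link noise, not congestion, yet the sender still backs off):

```python
import random

def aimd_avg_window(loss_rate, rounds=10000, seed=1):
    """Crude additive-increase/multiplicative-decrease sketch: the window
    halves on every lost round and grows by one otherwise. Returns the
    average window size, a stand-in for throughput."""
    random.seed(seed)
    cwnd, sent = 1.0, 0.0
    for _ in range(rounds):
        sent += cwnd
        if random.random() < loss_rate:
            cwnd = max(1.0, cwnd / 2)   # multiplicative decrease on "loss"
        else:
            cwnd += 1.0                 # additive increase otherwise
    return sent / rounds
```

Even a few percent of random loss craters the average window relative to a clean link, exactly the failure mode described above.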
There are alternative congestion control algorithms that mitigate this somewhat, but the best solution (which AFAIK there is no widely-used standard for) is forward error correction. But since the Internet, for the most part, doesn't actually have "bad links", this is not generally necessary.
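The simplest form of the idea is a single XOR parity packet per group, which lets the receiver rebuild any one lost packet without a retransmit (a minimal sketch; real FEC schemes use things like Reed-Solomon or fountain codes):

```python
def xor_parity(packets):
    """XOR equal-length data packets together into one parity packet."""
    parity = bytearray(len(packets[0]))
    for p in packets:
        for i, b in enumerate(p):
            parity[i] ^= b
    return bytes(parity)

def recover(received, parity):
    """Rebuild the single missing packet (the None entry) by XORing the
    parity packet with all the packets that did arrive."""
    missing = bytearray(parity)
    for p in received:
        if p is not None:
            for i, b in enumerate(p):
                missing[i] ^= b
    return bytes(missing)
```

One parity packet per group trades a little bandwidth for tolerating one loss per group with no round trip, which is the whole appeal over retransmission on long or lossy paths.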
We're actually integrating an OSI-based network stack into our avionics datalink product at work right now.
The next generation of digital air traffic control between aircraft and the ground towers is an OSI stack. Proprietary application in layer 7 and avionics-specific layers 2 and 1, but everything in between is bog-standard OSI (TP4 & IDRP, IS-IS & CLNP, 8208).
Interesting protocol suite, and it will be keeping aircraft going the right direction for the next 25+ years.
Hilarious - a bit further down I mentioned how Hughes Canada Systems division had a team hard at work on that project in 1994/1995. I thought it had been scrapped - apparently not.
That is interesting. In the summer I went over to Hanslope Park (a technical outstation for the Foreign Office) for an interview, and one of the technologies mentioned was OSI.
Ah, reading about some of the early packet-switch debates reminds me of the time my dad called me up and said "I crossed off one of my bucket list items: Vint Cerf called me a 'circuit-switched bigot'"
I was working at Hughes Canada Systems Division in 1994/1995, and they had an entire division dedicated to developing a next-generation Air Traffic Control system. It had been mandated about 4-5 years earlier that this project had to be written using the OSI protocol stack. They had been struggling for about 4 years, and still hadn't managed to get the protocol stack on their operating systems interoperating properly. There were dozens of people working on this project too. Total fiasco. They ended up scrapping most of their work.
Interesting. When I worked for BT I worked on OSI interconnects for the UK. It's not that hard, once you grok ASN.1, to work out what is going on.
My boss once watched a trace I was running, stopped it, pointed to a particular dword and said "that's a Sprint ADMD; you can tell because they don't do that part of the handshake properly."
OSI model, or OSI protocols? Everyone knows the OSI model - but outside of Avionics, you won't find that many people who know the protocol details beyond knowing they exist. And that TP4 is a thing.
ASN.1 managed to infest some internet protocols like SNMP and SSL certificates. It's one of those solutions to a problem that shows its committee heritage and ends up being ludicrously complicated to implement because of all the edge cases. Lots of software has been compromised through flawed ASN.1 and X.509 decoders.
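For a taste of where decoders go wrong, here is a sketch of just the DER tag-length-value framing (ignoring constructed types, high tag numbers, and BER indefinite lengths): the length field alone has two encodings, and skipping the bounds check at the end is exactly the kind of bug that has bitten real X.509 parsers.

```python
def parse_tlv(data):
    """Parse one DER TLV, returning (tag, value, remaining bytes)."""
    tag = data[0]
    first = data[1]
    if first < 0x80:                      # short form: length fits in one byte
        length, offset = first, 2
    else:                                 # long form: next N bytes hold length
        n = first & 0x7F
        if n == 0 or n > 4:
            raise ValueError("indefinite or oversized length")
        length = int.from_bytes(data[2:2 + n], "big")
        offset = 2 + n
    value = data[offset:offset + length]
    if len(value) != length:              # omit this check and you get overreads
        raise ValueError("truncated value")
    return tag, value, data[offset + length:]
```

And that's before you get to minimal-length enforcement, nested structures, OID arithmetic, and the rest of the edge cases.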
I'm old enough to remember back in the day when our customer (the biggest possible customer) allowed us to implement early versions using SMTP, but we had to design and plan to switch to X.400 since that was going to be the real standard. It was a strange era. We had things that worked, but everyone knew that the whole industry was going to switch to something completely different that solved all the same problems for a reason no one in the trenches really understood. Except the dates for switching kept moving and the software for the new way never really quite arrived in working order or with enough features to move.
There is a quite interesting book where the author (John Day) shares an inside view of the OSI committees back in the '70s and '80s, and their endless discussions: "Patterns in Network Architecture: A Return to Fundamentals" [1]
One advantage that circuit routing has over stateless routing is that of largely painless muxing / demuxing; and hence almost no buffer bloat. SONET can mux multiple channels without the need for a single buffer. Obviously the downside is the lack of flexibility, interdependency and hence increased fragility.
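The buffer-free muxing is just positional byte interleaving; a toy sketch of the idea (this is the general TDM principle, not the actual SONET frame layout):

```python
def tdm_mux(channels):
    """Byte-interleave equal-rate channels into one stream. Each channel's
    output slot is fixed by position, so no per-packet queue is needed."""
    assert len({len(c) for c in channels}) == 1, "equal-rate channels only"
    frame = bytearray()
    for i in range(len(channels[0])):
        for ch in channels:
            frame.append(ch[i])
    return bytes(frame)

def tdm_demux(frame, n):
    """Recover the n channels purely by slot position."""
    return [frame[i::n] for i in range(n)]
```

The rigidity is visible right in the assert: every tributary must run at a fixed, matching rate, which is the flexibility you give up relative to statistical packet muxing.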
Seven layers are a procrustean bed, at best. The model might be useful as a starting point, but really all it means is "good design uses layers." When you start discussing actual networking implementations, trying to fit what you're doing into "where exactly this fits in the ISO model, and it's wrong if it doesn't fit" is a bad way to think.
Devices that mix layers can do pretty interesting work, too (e.g., inspection of packets for security purposes, smart buffering for controlling host load, etc.). So the "every layer only talks to the layers immediately above and below" is kind of suspect, at least as dogma. There are probably other examples in very high bandwidth systems where you want to skip layers for performance reasons.
Although I'm not really sympathetic to the OSI model, it did get some things right. For instance, witness how almost every TCP/IP protocol has to roll its own session layer for authentication purposes...
ghshephard | 11 years ago:
Full BGP Table is around 500,000 entries, so you are off by several orders of magnitude there.
noselasd | 11 years ago:
Other classes you could choose had varying degrees of error detection and recovery. It also had a connectionless mode.
ivancamilov | 11 years ago:
Hardly forgotten, I learned about OSI at the same time I studied TCP/IP back in college. And I'm hardly a veteran, since I graduated in '09.
graycat | 11 years ago:
Finally I began to understand: The standards meetings were in Paris, Rome, London, Munich, ...!
jauer | 11 years ago:
IS-IS is an internal routing protocol sometimes used instead of OSPF.
[1] http://www.amazon.com/Patterns-Network-Architecture-Fundamen...