> I started with B2B credits in FC and told him "Fibre Channel uses exactly the same approach as hop-by-hop windows we had in X.25, with the same results…" resulting in confused blank stare. Further down the conversation I said "FCoE uses lossless Ethernet, which uses PAUSE frames, which are exactly the same thing as Ctrl-S and Ctrl-Q on an async link." Same result.
Or -- to play contrarian/devil's-advocate -- perhaps you should find more ways to convey these timeless logical concepts that don't depend on ancient war-stories or shared-suffering involving "dead" technologies? I'm not saying you have to turn your baseball cap sideways and start saying "dawg", but effective communication means you need to update your metaphors and analogies at least once in a decade as your audience changes.
If the past contains useful lessons, sometimes the best thing you (i.e. direct contemporaries who were there) can do is to isolate the most important, broadly-useful parts, and cut them loose from the gnarly matrix of specifics. Then you can teach the lesson over and over in a way that doesn't depend on audiences who worked with Tech X from years Y to Z.
This can't be overstated. If you dump a pile of acronym diarrhea on someone from a bunch of dead technologies and get upset when you get a blank stare, it is 100% your own fault for lacking the communication skills or abstraction skills to separate useless jargon from important concepts.
To be fair, you have a point. Most of this knowledge is buried in poorly indexed or otherwise restricted vaults of papers at the ACM and various foundations, institutes, and research universities. Some form of search engine for digging up prior art in CS and systems research is in order: one canonical, easy-to-use resource for programmers and architects to check before they go off reinventing flat tires.
The author of this article is making terrible mistakes in the other direction. He attributes failures of entire technologies to single issues. ATM wasn't killed just because of design failures, it was killed in large part because of being crushed by Cisco. Ethernet fabrics were fucked up by Juniper's poor execution, not because they are a bad design.
This thought process leads to unoriginal and restrictive thinking. Take the following bullet point: "Central Controller = single failure domain". When you see that, he wants you to think designs with controllers are crap. However, Google's network uses a central controller to distribute policy and make QoS optimizations. If all of its members go down (it's also a distributed system), it doesn't take down the network; you just lose some optimizations. This gets them much better utilization (>90%) than the run-of-the-mill BGP folded-Clos junk that 'experienced network engineers' churn out today.
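That degraded-mode behaviour can be sketched in a few lines. Everything below (class names, route entries, port names) is illustrative, not any real SDN API: a switch keeps locally computed default routes and treats the controller's traffic-engineered routes as an overlay, so losing the controller loses only the optimization, never connectivity.

```python
# Sketch of the "controller loss degrades, not destroys" principle:
# each switch keeps a locally computed default route and only overlays
# the controller's optimized route when one is available.

class Switch:
    def __init__(self, default_routes):
        self.default_routes = default_routes   # computed locally (e.g. shortest path)
        self.optimized_routes = {}             # pushed by the central controller

    def apply_controller_update(self, routes):
        self.optimized_routes = routes

    def controller_lost(self):
        # Failure mode: we only lose the optimizations.
        self.optimized_routes = {}

    def next_hop(self, dst):
        # Prefer the controller's traffic-engineered path, fall back to default.
        return self.optimized_routes.get(dst, self.default_routes.get(dst))

sw = Switch(default_routes={"10.0.0.0/8": "port1"})
sw.apply_controller_update({"10.0.0.0/8": "port2"})  # TE-optimized path
assert sw.next_hop("10.0.0.0/8") == "port2"
sw.controller_lost()
assert sw.next_hop("10.0.0.0/8") == "port1"          # traffic still flows
```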
Sorry for venting, but I see this guy's attitude very frequently in the network engineering community. "Oh, someone tried something like that once, and it didn't work, so we are going to refuse to do anything that has an overlapping concept." It's no wonder we still use command-lines and have to script SSH sessions to configure things. This industry is stagnant as hell.
You know what would be nice? If there were a good place to learn which technologies were dead, dying, or had gone niche instead of becoming mainstream.
Your rant implies ATM is in one of those categories, but it's surprisingly hard to find places which actually say that instead of being all polite (or whatever concept is in play here) and pretending that it's just as mainstream as DOCSIS or Ethernet.
Maybe it's too much to ask for. Maybe there's always going to be too much bias and... I don't know... hurt feelings?... for something like that to exist. It certainly wouldn't be a nice thing to have if you were in the business of selling ATM "Solutions".
I sort of remember ATM from descriptions back in the early 90's. Seemed like just the thing if you were a Telco wanting to efficiently send voice packets coast to coast and over-charge the customer for each and every bit. As a protocol for sending cat pictures and porn, not so good.
A friend of mine in the industry referred to the conversion between IP packets and ATM cells and back as a meat grinder: your packets get chopped into pieces and sent over the wire, and if you're lucky all the pieces make it through and you get a fully assembled packet on the other side. If not, you get to retry the whole packet. And of course you get charged for the ATM cells even though you couldn't use them.
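A minimal sketch of why that conversion hurts, under the simplified model above: fixed 48-byte cell payloads and whole-packet retry on any loss. This is illustrative only, not real AAL5 framing.

```python
# Illustrative sketch (not real AAL5): an IP packet is split into
# 48-byte ATM cell payloads; if any one cell is dropped, reassembly
# fails and the entire packet must be resent -- but every cell that
# did cross the wire was still billed.

CELL_PAYLOAD = 48

def segment(packet: bytes):
    return [packet[i:i + CELL_PAYLOAD] for i in range(0, len(packet), CELL_PAYLOAD)]

def reassemble(cells, expected_len):
    data = b"".join(cells)
    if len(data) < expected_len:   # any lost cell ruins the whole packet
        return None
    return data

packet = bytes(1500)               # a typical Ethernet-sized IP packet
cells = segment(packet)
assert len(cells) == 32            # ceil(1500 / 48)

lossy = cells[:10] + cells[11:]    # drop one cell in transit
assert reassemble(lossy, len(packet)) is None        # retry the whole packet
assert reassemble(cells, len(packet)) == packet
```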
I think Cisco added an option where if you were using their equipment on both sides you could turn off the IP to ATM packet conversion. Which everyone immediately did. Sayonara ATM.
For real historic knowledge, here are a few things to understand about networking:
-- Bell System #5 Crossbar. The best of the electromechanical telephone switches. No Bell System #5 crossbar central office was ever out of service for more than 30 minutes for any reason other than a natural disaster or fire. That level of reliability was not maintained in the computer era. It's useful to know how that was accomplished. Briefly, there was a big, dumb switch fabric and common shared resources. Resources included markers (which set up calls), senders (which sent data to another central office to route a call), trunks (lines to other offices), originating registers (which provided dial tone and listened to dialed digits and tones), and some specialized units such as trouble recorders (which punched cards), automatic line insulation test units (which tested lines and phones remotely), traffic service position system consoles (phone operator), and card translators (a clunky device for looking up routes). All these resources were in resource pools, used in rotation with broken units skipped. If anything failed, the call was retried once using different resources. If the retry failed, the call was rejected with the fast busy tone, on the grounds that if two tries had failed, a third retry probably would not help. Everything had hardware timeouts, so if a relay stuck, after a few seconds the unit would fault, a trouble recorder would be seized, and the trouble recorder would drop a trouble card in front of a technician. Major problems set off alarm bells. The most complex units, the markers, ran in pairs, with one checking the other. Any difference generated a trouble card. In an emergency, a marker could run without its checking other half to keep calls going through.
It's worth understanding #5 Crossbar because it was a system far more reliable than its components. It scaled up to handling entire cities, could be maintained while running, and just did not have outages.
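The failure discipline described above (pooled resources used in rotation, broken units skipped, one retry on fresh hardware, then fail fast with the fast busy tone) can be sketched like this. All names are illustrative, not taken from any real switching system.

```python
import itertools

# Sketch of the #5 Crossbar discipline: pooled resources in rotation,
# broken units skipped, one retry on different resources, then give up.

class Pool:
    def __init__(self, units):
        self.units = units
        self.rotation = itertools.cycle(units)  # use units in rotation
        self.broken = set()                     # faulted units are skipped

    def acquire(self, avoid=()):
        # One full rotation is enough to see every unit once.
        for _ in range(len(self.units)):
            unit = next(self.rotation)
            if unit not in self.broken and unit not in avoid:
                return unit
        raise RuntimeError("no units available")

def place_call(pool, works):
    """Try once; on failure retry once with a *different* unit, then give up."""
    first = pool.acquire()
    if works(first):
        return first
    pool.broken.add(first)                 # fault the unit, drop a "trouble card"
    second = pool.acquire(avoid={first})
    if works(second):
        return second
    return None                            # fast busy: a third try probably won't help

markers = Pool(["marker-A", "marker-B", "marker-C"])
assert place_call(markers, works=lambda u: u != "marker-A") in ("marker-B", "marker-C")
assert "marker-A" in markers.broken        # the bad unit is out of rotation
```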
-- Western Union Plan 55-A. This automatic telegram switching system handled most telegrams in the US in the 1950s. Think Sendmail, built out of paper tape readers and punches and a telephone switch, an email server that filled a large building. The queuing theory for the ARPAnet came from Kleinrock's thesis on Plan 55-A. Only a superficial knowledge of Plan 55-A is useful today.
The only ways people learn important lessons from the past are by 1) being taught them by someone willing to teach, 2) overhearing someone else being taught, or 3) reinventing the wheel and learning it yourself.
If 1 and 2 aren't happening, then you can't blame people for resorting to method 3. Perhaps some blame should go to teachers/curriculum, but bagging on people for what they don't know only creates frustration, not progress.
An example from the music world. I've taught the same "wisdom" about voice leading, what it is and why we like to use it, to both AP music theory students in high school as well as middle school band students learning to improvise. You can bet I change my analogies and vernacular to match their knowledge, but they've both been exposed to the concept and have some core vocabulary to research it further if they like.
You left off (0) doing your own research when you run into a problem. These days, with search engines, google scholar, google book search, Wikipedia, IRC channels full of folks trying to prove their knowledge, easy access to professors’ email addresses, etc., it’s easier than ever before.
Sometimes teaching won't focus on failure modes that much.
They might teach you the intrinsics of some outdated technology as if it were still alive and give no hints on why you no longer see any of it around. Pure waste of time.
I had a course on IBM SNA like that.
It's called a literature search... and for some reason no one does it in computing...
This reminds me that just last month I attended a talk on HP's "The Machine" where the presenter talked for about 20 minutes about all the revolutionary new ideas, before someone broke in and asked "how does this differ from an AS/400?" After all, the presentation looked like a copy-paste job from the Wikipedia page on AS/400 technology. The poor presenter didn't know, and eventually admitted he didn't even know what an AS/400 was, much less that they continue to be sold. In the end someone suggested a literature search... The presentation went downhill from there.
As someone who knows nothing about these technologies, I have to say the use of abbreviations and acronyms in the post adds a huge barrier to even wanting to unpack the information.
For example...
SONET/SDH and ATM WAN
ATM-to-the-desktop and ATM LANE
MPLS Traffic Engineering
BGP Brownouts
Or...
I was trying to explain the principles of SAN and differences between FC and
FCoE to a networking engineer a while ago. I started with B2B credits in FC and
told him Fibre Channel uses exactly the same approach as hop-by-hop windows
we had in X.25, with the same results…
Of course, I understand that I'm not the target audience, and the author didn't decide on the names for the technologies in the first place, and maybe even that these acronyms are common knowledge for the target audience, but it sure isn't helping information sharing if every sentence requires tons of extra parsing for new acronyms.
The comments section gets even worse...
None of our NOC guys know what ATDT means, even the ones that did come through
tech support. (PSTN is still more reliable than 3G for OOB, assuming you
test it often - anybody got a good automated test script?).
ATDT - priceless ;)) Thanks for bringing this up! Now that I think about that,
I started with ATDP :D... and I'm positive there's an AT command set hidden
within every 3G modem (in the good old days you could use it to send SMS
messages).
ATDT is still alive! Just set up a simple GSM/SMS-based solution and used
AT commands to control the GSM modem. It's really cool that this has not
changed over the years.
I remember having a little BBS in 1989 using some BBS SW called pirate or
black beard or something like that for my Novell and Token-Ring customers
to dial in and obtain the latest desktop drivers for IPX and IBM NICs etc.
I love the analogies on the older tech and SDN. Especially IntServ...
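Since the question of an automated test script came up above, here is a rough sketch of scripting a modem's AT command set in Python. It is an untested outline: with real hardware you would open the port with pyserial (e.g. `serial.Serial("/dev/ttyUSB0", 115200, timeout=5)`), and the device path and phone number here are placeholders, not working values.

```python
# Minimal AT-command scripting sketch. `port` is anything with
# write()/read() -- a pyserial Serial object in real use, or a fake
# object for offline testing of the command framing.

def at_command(port, cmd, ok=b"OK", timeout_reads=10):
    """Send one AT command and read until the expected token, ERROR, or give up."""
    port.write(cmd.encode("ascii") + b"\r")
    response = b""
    for _ in range(timeout_reads):
        response += port.read(64)
        if ok in response:
            return True, response
        if b"ERROR" in response:
            return False, response
    return False, response

def dial_test(port, number="5551234"):
    """AT -> sanity check, then ATDT<number> to tone-dial (ATDP for pulse)."""
    alive, _ = at_command(port, "AT")
    if not alive:
        return False
    connected, _ = at_command(port, "ATDT" + number, ok=b"CONNECT")
    at_command(port, "ATH")    # hang up either way
    return connected
```

With a loopback or fake port object exposing write()/read(), the framing can be exercised without any hardware; scheduling it from cron and alerting on a failed dial_test would be the rest of the job.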
At a certain point, I'd blame technologists who enjoy feeling smart due to the barriers put up around the knowledge they've worked hard to earn, and who chastise newcomers for not doing tons of work to slog through poorly communicated concepts, instead of newcomers who would probably love to learn more and don't know where to start.
We should communicate more clearly so that it's easy to share knowledge.
Yes, that's it in a nutshell. People who work in the same field/sub-field will use jargon to enhance/shorten communication. This has been the case for at least several thousand years.
If you're out to rid the world of jargon, you've got quite a battle ahead of you.
That said, I really enjoyed RFC 1925: https://tools.ietf.org/html/rfc1925
Since starting to build a networking software startup a year ago, that RFC has become far less April 1st material and so much more "advice to live by".
It isn't their fault for not knowing it; in the ideal world every programmer would start out with complete and perfect knowledge of everything there is to know about their craft. No, it's up to older people with more experience to adequately communicate their experience to others. The responsibility of the younger programmers is to know when to listen.
It reminds me of my father who, close to retirement, had to switch to a company led by a bunch of young whipper-snappers in their 40s. They had to solve certain problems that many of the older people had experienced ages ago and that were really solved problems. But everyone was more interested in finding creative solutions rather than just listening.
I hear chemists have a saying that you can save two weeks in the laboratory with an evening in the library.
Of course it's good to know these things. But there's simply going to be some things you know more about because you grew up in a different time than they did. That's not a real problem, that's just the way things are.
> It's up to older people with more experience to adequately communicate their experience to others. The responsibility of the younger programmers is to know when to listen.
The hitch is that the younger programmers are chasing the approval of VCs who actively encourage them to dismiss "older people" as clueless old fogies who can't think at Web Scale.
(The VCs don't actually believe that, of course, especially as many of them are old programmers themselves. But since old programmers know how to read a term sheet and tend to demand things like sane working hours and pay/equity commensurate with their skill, it's in the VCs' interest for their portfolio companies to be cults of youth.)
Of course the corollary to the claim 'there are no new ideas' (which is probably in line with this article's line of thinking) is that the only way progress is made is when a previously bad idea becomes a good one due to circumstance, and some naive fool comes along and tries it again.
A young audio engineer might know every possible fact about how Blu-ray, DVD, and CD formats work. Probably even cassette tapes and Long Play records (aka LPs). But I'm sure it's completely respectable for you to roll your eyes when an audio engineer gives you a blank stare when you bring up wow and flutter caused by a flattened pinch roller in an 8-track tape.
What would be the software equivalent? Insulting someone who started developing on PHP 5 for not knowing about the short_tags() global function which only existed in PHP 3? Someone proficient in Visual Basic .NET who doesn't know anything about VB 1-6? There is no reason to know how to develop in an ancient version of a language if every job you have had thus far has used exclusively modern versions.
Merely having been alive during the decade in which an obsolete technology was first introduced or was still popular means nothing. It may even have some relevance today when discussing the then-and-now similarities or differences, but frankly someone born decades after its obsolescence just will not care. There is already more knowledge than is possible to absorb about current technology. There is simply not enough time in a single lifetime to care about what came 20-40 years before. Perhaps if we ever push life expectancy to 1000 years, we'll spend the first 100 years of our lives reviewing every relevant detail of the past.
I have a strong interest in the history of computing, but most of this sort of stuff is squirreled away in places most folks wouldn't know to look. If you want to learn history in any other field you go search for history books, but a lot of this sort of stuff isn't conveniently compiled in books. Whose fault is that?
Both relate to flow control and trying to deliver a lossless network medium.
Regarding B2B credits / Fibre Channel and the hop-by-hop windows in X.25, the idea is that each hop has a counter of how many packets it will send to the next hop without receiving an acknowledgement; these are the "credits". Once you're out of credits, that hop stops sending traffic. X.25 was superseded by TCP/IP (which uses end-to-end acknowledgements instead of hop-by-hop) due to better performance. But B2B credits are used in Fibre Channel (storage) networks, which require lossless delivery on each hop.
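A toy model of that credit accounting, with all names and numbers made up for illustration (this is not a real Fibre Channel stack):

```python
# Hop-by-hop (buffer-to-buffer) credit flow control: the sender holds N
# credits, spends one per frame, and may only transmit while credits
# remain; each receiver acknowledgement (R_RDY in FC terms) returns one.

class CreditedLink:
    def __init__(self, credits):
        self.credits = credits
        self.in_flight = []

    def can_send(self):
        return self.credits > 0

    def send(self, frame):
        if not self.can_send():
            raise BlockingIOError("out of credits; hop stalls until R_RDY")
        self.credits -= 1
        self.in_flight.append(frame)

    def receiver_ready(self):          # receiver freed a buffer
        self.in_flight.pop(0)
        self.credits += 1

link = CreditedLink(credits=2)
link.send("frame-1")
link.send("frame-2")
assert not link.can_send()             # lossless: we stall instead of dropping
link.receiver_ready()
assert link.can_send()                 # credit returned, transmission resumes
```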
Regarding PAUSE frames and Ctrl-S/Ctrl-Q, the idea is that as a device's receive buffer fills up (it is receiving data faster than it can drain the buffer), it hits its 'pause threshold' and sends a PAUSE message to let the sender know it can't handle any more data. This is similar to Ctrl-S/Ctrl-Q, the XOFF/XON flow-control commands used to stop and start traffic when you run out of buffer space.
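A toy model of that pause-threshold behaviour; the watermark numbers are made up for illustration, not taken from any spec:

```python
# A receiver emits XOFF (or an Ethernet PAUSE frame) when its buffer
# crosses a high watermark, and XON when it drains below a low one.

class PausingReceiver:
    def __init__(self, capacity=8, high=6, low=2):
        self.buf = []
        self.capacity, self.high, self.low = capacity, high, low
        self.paused = False

    def receive(self, item):
        self.buf.append(item)
        if not self.paused and len(self.buf) >= self.high:
            self.paused = True
            return "XOFF"              # Ctrl-S / PAUSE: sender, stop
        return None

    def drain(self):
        if self.buf:
            self.buf.pop(0)
        if self.paused and len(self.buf) <= self.low:
            self.paused = False
            return "XON"               # Ctrl-Q / PAUSE(0): sender, resume
        return None

rx = PausingReceiver()
signals = [rx.receive(b) for b in range(7)]
assert "XOFF" in signals                            # filled past the high watermark
assert any(rx.drain() == "XON" for _ in range(7))   # drained below the low one
```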
It's a big firehose, but McDysan & Spohn's "ATM Theory and Applications" (ISBN-13: 978-0070453463) from 1998 covers the general principles extremely well. There have obviously been entire technologies that came and went since, but the first few chapters do a remarkable job of drawing up all the evolutionary pressures. Since it's so old, it's also pretty cheap.
The impression I got from the article is sort of like the reaction to not understanding some facet of social justice: it's apparently your job to educate yourself.
Dealing with people and organizations who don't take lessons of history seriously is very easy: take note, and outcompete them without breaking a sweat. Let them fail! This is one of the respects in which capitalism is unequivocally a force for good.
Unfortunately this does not at all apply to the software industry where many advances can be dismissed out of incredulity, or a parochial reaffirmation of the status quo either because of path dependence, or simply because of a perceived achievement in local optimum and incapability of further introspection. There are no scientific principles followed as such.
In some countries, the children learn from the parents. The children never try to teach the parents.
In the TV show "Modern Family", the daughter of the owner of a closet-manufacturing company tries to show she can contribute by offering her new design ideas, only to hear from her father that her designs are great because they did the same thing decades ago.
Using TCL?
It reminds me of Elon Musk's memo to SpaceX: https://twitter.com/collision/status/602950284864692224
Acronym soup is such an '80s/'90s thing... I for one am glad technologists finally clued up and stopped that practice for the most part.