Kind of depends on what we mean when we use the word "Internet". If we mean internet as a social/economic revolution, then sure, BBSs were relevant. If we mean Internet as a set of protocols, software that implements them, and hardware to run them on, then I don't see that BBSs had a lot of relevance (though they used sliding window protocols for file transfer such as ZMODEM/YMODEM).
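(For the curious, here is a minimal sketch of the sliding-window idea those transfer protocols relied on, in Python rather than the C they were actually written in. This is a toy go-back-N variant; the window size, loss rate, and framing are invented for illustration and are much simpler than what ZMODEM actually did.)
    # Toy go-back-N sliding-window sender: illustrates the streaming idea
    # behind ZMODEM-style transfers, not real ZMODEM framing or CRCs.
    import random

    WINDOW = 4        # frames allowed in flight before waiting on an ACK
    LOSS_RATE = 0.2   # pretend the phone line corrupts 20% of windows

    def send_file(frames):
        base = 0                      # oldest unacknowledged frame
        next_seq = 0                  # next frame to put on the wire
        while base < len(frames):
            # Keep the window full instead of stopping after each frame
            # (the stop-and-wait behavior of plain XMODEM).
            while next_seq < len(frames) and next_seq < base + WINDOW:
                print("send frame", next_seq)
                next_seq += 1
            if random.random() > LOSS_RATE:
                base = next_seq       # cumulative ACK for the window
            else:
                print("error, rewinding to frame", base)
                next_seq = base       # go-back-N: resend from last ACK

    send_file([b"chunk"] * 10)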
There was a brief period where BBSs also offered various gateway services into the Internet along with the normal BBS services like doors. I received my first ever email address from a huge local BBS back in the early '90s by paying a token fee every month. Nobody I knew had an email address until a few years later, so it was kind of pointless, but it did work.
On a larger scale, AOL offering internet services through their network was sort of the same thing, until they eventually just became an ISP.
Oh, well. I guess we should be happy they're not insisting the Internet was invented at CERN in 1995.
> The history of the internet is repeatedly reduced to the story of the singular Arpanet. But BBSs were just as important—if not more.
I mean, even the sub-heading feels wrong to me. Arpanet and BBSs are both a key part of the history of the internet. I'm 35 and both have been discussed, in a historical context, ad nauseam my entire life.
"This work is still being done on platforms like Facebook and Reddit. But unlike the sysops who enabled the flourishing of early online communities, the volunteer moderators on today's platforms do not own the infrastructures they oversee."
Volunteer moderators on Facebook. Perhaps I am misunderstanding this sentence. There have been a few documentaries showing how Facebook does moderation. It is done by low-paid workers, e.g., in the Philippines or Ireland, not volunteers. The first-line moderation work is contracted out.
There is actually a profound lack of moderation because (a) moderators are overwhelmed and cannot keep up and (b) the priority of Facebook is what attracts the most traffic ("engagement"), not meeting standards. If some group page is getting lots of traffic then it goes into a special category. The moderators cannot delete it without approval from FTEs at Facebook. The number one priority is traffic,^1 not upholding any standard for content. Any standards enforced are only created in response to complaints. Section 230 shields Facebook from most of the potential legal liability.
1. Under the guise of soundbites such as "connecting the world" and "giving everyone a voice". People can be connected and have a voice and still only "engage" with Facebook periodically/sporadically. Overall, low traffic. But without both high traffic and ads, Facebook's days are numbered. Hence, ever more indignant, angry, extreme content is required to draw "engagement".
Edit: Thank you @LambdaComplex. I think that explains it.
Each Facebook Group has its own set of admins/moderators, appointed by whoever created the group--i.e. normal users who are trying to create an online community on a platform that they have no real control over (just like subreddit moderators on Reddit). I assume that's what it's talking about.
The author seems to miss one major point that many tech journalists seem to miss these days, which is that the internet is different from the web. I had actually bookmarked the book before, but now it doesn't seem very interesting to read.
PS: What the hell is the point of writing a title as aggravating as this? It's almost as if they go out of their way to be insulting! Sheesh!
To answer the PS: it's the clickbait equivalent of a "neg" from "pick up artist" theory - put the other person in a position of low status where they have something to prove/gain.
When I see such a title, I always look at the comments first, as I don't want to reward this behaviour with a click unless the rest of the article is really worthwhile.
Perhaps it would be accurate to say that Arpanet provided the vision and ultimately protocols to create one unified network, while the online communities (BBS seems the wrong term to me) created many of the _applications_ (in concept, no code survived) that would eventually run on that unified network. Even that's not really true because NNTP was a strictly Internet thing and SMTP email existed before most online communities did. Heck you may as well throw in Prestel and Minitel while you're at it. Honestly the online communities seem like the Dodo of the modern internet, not the Coelacanth. For example I was able to get through those decades without using any of them (save for a little bit of CIS required because something...something...Microsoft developer...something needed it).
> If there is a future after Facebook, it will be led by a revival of the sysop, a reclamation of the social and economic value of community maintenance and moderation.
This is a major point of the article, more key than whether or not the history of BBSs was important in the construction of today's technology.
Current major participation platforms define how information is disseminated in our open societies, which has affected our democratic processes. Returning to a structure of self-managed online communities could help us regain control of which information flows we follow in our social groups, rather than ceding that control to large companies seeking to maximize attention.
It is a click-bait title. Judging from the first paragraph it is a book review and the author of the article was conflating the Internet with the web. Scanning the rest of the article, it looks like the author tries to prop up their claims by arguing that the social aspects of the Internet had their foundations in BBSes. Even if this is true, it isn't what most people mean by the origins of the Internet as infrastructure. (Though it probably isn't true. Even for social elements that didn't originate with the Internet, one has to consider that there was more to the online world than BBSes and the Internet anyway. Even then one has to consider that things popularised by BBSes probably found their inspiration elsewhere, including from the Internet.)
For a vastly better take on the origin story of the Internet, based on the work of someone who was there at the time, documenting it at the time, I'd recommend without reservation John Quarterman's The Matrix.
First published in 1990, just as the WWW was emerging (and failing to make the book), it was revised in 1997 (the link I'm giving here). The earlier book does discuss extensive sets of networks of which ARPANET, formally decommissioned in 1990 after spawning the then-nascent Internet, was only one instance.
The introduction includes eerily prescient discussion of the role early computer networks played in the Tiananmen Square protests and massacre of 1989.
https://www.worldcat.org/title/matrix-computer-networks-and-...
No it is not. The BBS community pioneered some of the services that later became popular on the Internet, but not at scale, and with very different motivations. It was never the origin story of the Internet. In the prime of the BBS era, still years before the Internet became commercially available, I remember a friend showing me experimental C code for his BBS for "email", via a gateway to the Internet.
I used to run a multi-user BBS in Norway with a pool of phone modems, running under an early (pirated) version of QNX on an 8086 PC. The software, CrCs, was homegrown, written in C, mostly at night, while I worked several day jobs to pay for my BBS hobby. Eventually coding and software design became my profession.
There are other aspects of the origin story that seem to be myth. For instance, the idea that the Internet was designed to route around damage, and could also treat censorship as a form of damage, and therefore it was resistant to censorship.
I once spent a long weekend reading through the early RFCs. What I found was the obvious recognition of the fact that AT&T, which at the time had a monopoly on telephone services in the USA, was circuit-switched, whereas IP was being designed to be packet-switched, and clearly a packet-switched technology could be more decentralized than a circuit-switched technology. But that is the only shred of evidence I could find in favor of something like an ideology of decentralization. I could not find anything explicit in the design documents of the 1970s and 1980s.
As near as I can tell, the more grandiose claims of decentralization don't show up till the late 1990s, and those claims are not especially made by the technologists themselves (with the exception of Bob Metcalfe). It's mostly techno-utopians and libertarians who begin to make the bolder claims about how the Internet is fundamentally decentralized and resistant to censorship. People like Kevin Kelly, writing in Wired Magazine in the 1990s, made the most utopian claims about the decentralization of the Internet.
I've looked, and I've had trouble finding evidence of bold claims of decentralization being made in the 1970s or 1980s, except in the very obvious case that a packet-switched network is less necessarily centralized than a circuit-switched network. The myth gets built in the 1990s, and is then retro-imposed on the 1970s and 1980s.
> the idea that the Internet was designed to route around damage
See, e.g., Baran's writings from 1964 about decentralization, fault tolerance, and survivability of packet-switched networks, which directly led to the ARPANET and internet:
https://www.rand.org/pubs/research_memoranda/RM3420.html
> shred of evidence I could find in favor of something like an ideology of decentralization
As to decentralization among many providers-- this only really comes into play in 1989, when BGP emerged, the NSFNet began to connect regional networks and ISPs, and internet exchanges began to flourish-- though you can find Baran writing in the late 1970s about the connection of multiple networks and ensuring that single providers or colluding groups of providers are not too powerful.
But it was a goal from the beginning. E.g. Pouzin, 1973:
"Actually, this last approach boils down to the construction of another network, and ultimate at that. ... This is not to say that a general internetwork agreement will never happen, but that it will happen gradually... This study attempts to put forward a realistic scheme allowing point to point message transfer across several independent packet switching networks.
This requires a common agreement in formatting messages, but constraints are minimized while preserving implementation freedom and efficiency."
Of course, many questions remained-- protocol translation vs. common protocol? How to do all this without dynamic routing? What about the regulatory landscape, etc.? We lucked out into a pretty good outcome, overall.
> could also treat censorship as a form of damage
This is a feature of the culture of the 'net as it existed in the '80s and '90s, based on experience with Usenet (which indeed does kind of route around censorship in many cases, but also engendered a group of users who culturally subverted attempts at censorship), and not a feature of the internet protocols themselves.
> A packet-switched network is less necessarily centralized than a circuit-switched network.
This is not actually true. The AT&T Long Lines network was originally very decentralized. When it was first fully automated, there was no central control point. There were 10 regional control centers, but they were not essential to most calls. Eventually there was central control, out of Bedminster, New Jersey. Originally all it did was collect data and send out new routing tables every few minutes. It didn't process any calls. Previously the routing tables had been fixed, changed maybe once a month. Calling pattern changes on holidays were a serious cause of overload. AT&T used to run ads asking people to make their Xmas calls earlier in the day.
There's "A History of Science and Engineering in the Bell System"[1], written by the old-timers at the AT&T breakup to record how all the old stuff worked. A general understanding of how the more advanced electromechanical switching systems worked is useful in designing high-rel systems. Number 4 and Number 5 crossbar systems had system reliability much higher than component reliability. In the entire history of the Bell System, no electromechanical central office was ever out of service for more than 30 minutes, other than due to a natural disaster or major fire. That record has not been maintained in the computer era.
You won’t find anyone saying that censorship is the type of damage to be routed around, in a serious engineering way, because that’s more of a colloquialism than a true design principle. Asserting that it was almost seems like a straw man.
The concept of routing around censorship is more of a behavior by people using the applications that the Internet has enabled. There’s no RFC for that.
> For instance, the idea that the Internet was designed to route around damage, and could also treat censorship as a form of damage, and therefore it was resistant to censorship.
Yes. That comment was actually made about USENET, not the ARPAnet. Because USENET really did that. When two USENET nodes connected, they compared lists of message IDs, and each one requested any messages the other had that it didn't. The result was distributed flooding - everybody eventually got everything in any group to which they subscribed.
If someone censored the messages going across a link, that would prevent the message from traveling over that link. But if any other possible path existed, two nodes would eventually compare message lists, note the missing ID, and transmit the missing message. So censorship was only possible if you controlled all possible connections between two networks.
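The mechanism is simple enough to sketch in a few lines of Python. The data structures here are hypothetical; real transports were UUCP batching or NNTP's IHAVE/SENDME commands, which this only gestures at.
    # Sketch of the exchange described above: on contact, each node
    # compares message IDs with its peer and pulls what it is missing.
    class NewsNode:
        def __init__(self):
            self.articles = {}            # message-id -> article text

        def exchange(self, peer):
            # Each side requests the IDs the other has that it lacks.
            for msg_id in peer.articles.keys() - self.articles.keys():
                self.articles[msg_id] = peer.articles[msg_id]
            for msg_id in self.articles.keys() - peer.articles.keys():
                peer.articles[msg_id] = self.articles[msg_id]

    a, b = NewsNode(), NewsNode()
    a.articles["<42@site-a>"] = "an article"
    a.exchange(b)                         # b now carries <42@site-a>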
This came up at Stanford in the 1980s, when some IT administrator (Ralph Gorin?) tried censoring rec.humor.funny. But there was one professor (McCarthy?) who had his own USENET node both on the Stanford network and connecting to the outside. It didn't have much capacity, but as long as there was even one low-bandwidth path, any censored messages would eventually take the slow path and arrive. So censorship was futile.
But so was spam filtering.
Wikipedia sets this one straight, anyway: https://en.wikipedia.org/wiki/ARPANET
> The ARPANET was not started to create a Command and Control System that would survive a nuclear attack, as many now claim. To build such a system was, clearly, a major military need, but it was not ARPA's mission to do this; in fact, we would have been severely criticized had we tried. Rather, the ARPANET came out of our frustration that there were only a limited number of large, powerful research computers in the country, and that many research investigators, who should have access to them, were geographically separated from them.
Citation: Charles Herzfeld, ARPA Director (1965–1967), as quoted here: https://arstechnica.com/information-technology/2019/10/50-ye...
One of my university professors helped start CSNet, which ran as an alternative to ARPANet for a few years over X.25 equipment. There's a complicated history, but it sorta ran on its own until the late 80s, when the Internet kind of swallowed everything.
My impression from my professor is that these academic precursors to the Internet were really intended to be centrally controlled by their various committees (as academic efforts usually are). The notion that these packet switched networks would eventually just sort of be run in a mostly decentralized way didn't seem to have occurred to anybody.
"Decentralization" has many meanings and often organizational and technical decentralization are conflated.
The decentralization talked about in 1990s Wired was because the technology wasn't controlled by a single company (unlike competitors like AOL or MSN).
When John Gilmore said in the early 01990s, "The Net interprets censorship as damage and routes around it," he was talking about USENET, not the internet. USENET is a peer-to-peer gossip network, like Bitcoin. John Gilmore is a techno-utopian, a libertarian, and a technologist; he was Sun employee #5, he designed BOOTP (the basis of DHCP), and he wrote substantial parts of GCC and GDB. This saying of his was quoted in Time Magazine in 01993 https://web.archive.org/web/20210408023213/https://kirste.us... which seems to be a bit confused about the relationship between the internet and USENET. I don't know how far back it dates, but I don't think he made it up for the article.
The claim is, strictly speaking, true of gossip networks like USENET. If node A talks to nodes B and C, which also talk to each other and also to another node D, and node A utters a message that node B chooses not to distribute, the situation from D's perspective is precisely the same as if node B were temporarily down. Node C will still get the message (directly from A, rather than from B) and node D will still get it from C.
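That argument is mechanical enough to check with a toy gossip round. The node names and the flooding loop below are illustrative, not any real protocol.
    # The A/B/C/D scenario above: B receives the message but refuses to
    # pass it on; D still ends up with it via C.
    links = {"A": {"B", "C"}, "B": {"A", "C", "D"},
             "C": {"A", "B", "D"}, "D": {"B", "C"}}
    has_msg = {"A"}          # A utters the message
    censors = {"B"}          # B declines to redistribute it

    changed = True
    while changed:           # gossip until nothing new propagates
        changed = False
        for node in list(has_msg):
            if node in censors:
                continue     # a censoring node never forwards
            for peer in links[node]:
                if peer not in has_msg:
                    has_msg.add(peer)
                    changed = True

    print(sorted(has_msg))   # ['A', 'B', 'C', 'D'] -- as if B were down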
You can find a lot of technologists who are also techno-utopians and libertarians making grandiose claims about the decentralizing influence of the internet in the cypherpunks archives from 01992 and 01993. At the time, the list was hosted on Hoptoad, John Gilmore's personal machine, in his house.
I think there are a lot of ways that the internet architecture is less centralized than the PSTN that aren't related to packet-switching versus circuit-switching. Regular users aren't allowed to run traceroute on the PSTN (though some phone phreaks used to do that sort of thing, and MPLS means traceroute tells you less than it used to). The PSTN doesn't have a public whois database. It won't delegate even a class C of 256 phone numbers to your PBX for local use (though that was becoming difficult to get on the internet too by the mid-01990s), much less 2^48 phone numbers. You can't get a PSTN AS number so you can multihome your PBX to two different long-distance providers. Historically there was no mapping from logical names to phone numbers, so to call someone on the phone you had to dial an area code and exchange that identified their local phone company and its exchange, so user-visible addresses were inherently owned by local regulated monopoly providers, not by users themselves.
Moreover, ideologically the deployment of new applications on the PSTN was centrally controlled and administered by the phone company, and they would run on the "switches" in the buildings without windows, while on the internet any random person could write and run an experimental server on their node, at least if their sysadmin was cool with it. Until the Carterfone decision of 01968 it was illegal in the US to plug even third-party telephones into the PSTN, and direct-connect modems didn't become legal until 01975. After that, lots of innovative applications of the PSTN were actually deployed in a decentralized way, such as answering machines, fax machines, and BBSes, but they were very limited in the features they could provide.
By comparison, the early internet was wildly decentralized.
Packet-switching does mean that a single computer can directly talk to an arbitrarily large number of other computers at once, instead of just however many circuits its connection supports. But a graph of degree three can already be a robustly cyclic connected graph, so three circuits per node would be enough. Of course, the phone company only provided most users with one at a time, with thousands of milliseconds of latency to establish a new circuit.
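A quick check of that claim, assuming the networkx library is available; the Petersen graph here is just a convenient 3-regular example, my choice rather than anything from the era.
    # Verify: a 3-regular graph can stay connected after any single
    # node failure. The Petersen graph is one such graph.
    import networkx as nx

    G = nx.petersen_graph()                # 10 nodes, every degree == 3
    assert all(deg == 3 for _, deg in G.degree())

    for node in list(G.nodes()):
        H = G.copy()
        H.remove_node(node)
        assert nx.is_connected(H)          # no single failure partitions it

    print("three circuits per node are enough for survivability")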
So it wasn't so much that we imposed decentralization retroactively on the work of the 01970s and 01980s. It was more that we observed that it had resulted.
https://web.eecs.umich.edu/~fessler/misc/funny/gore,net.txt
Al Gore and the Internet: By Robert Kahn and Vinton Cerf
"Al Gore was the first political leader to recognize the importance of the Internet and to promote and support its development."