item 19010671

The AI Threat to Open Societies

201 points | malloryerik | 7 years ago | georgesoros.com

146 comments

[+] raz32dust|7 years ago|reply
This reminds me of the "Do artifacts have politics" paper by Langdon Winner [1]. He argues that technologies have inherent political traits.

Nuclear power is considered to be supportive of autocratic political systems since nuclear power plants need centralized planning and networks to be effective. Solar power is considered democratic since anyone can harness it. It's an interesting paper and definitely worth a read.

On similar lines, I feel internet is a democratizing force, since it allowed anyone to publish data and anyone to consume it, and is (somewhat) difficult to control centrally. AI, on the other hand, is a centralizing force, since the most powerful AI can be managed and powered by the most powerful institutions.

[1] https://www.cc.gatech.edu/~beki/cs4001/Winner.pdf

[+] joe_the_user|7 years ago|reply
Indeed, one could consider machine learning an authoritarian technology if given power over individuals. AI's fundamental problem is that it winds up being a "results-oriented" approach in which an individual's characteristics are weighed by black-box systems and the individual is judged without any recourse, or even any exact idea of what the criteria are.
[+] joakinen|7 years ago|reply
The internet is out of citizens' control. The web was such a democratic space, but browsers narrow that space down because they are centralized products. A DRM-enabled-only web (if it ever comes) will kill the web.
[+] intended|7 years ago|reply
Base internet is democratizing at complexity = 1.

The moment complexity on the web goes up, your need for governance structures goes up and the web becomes a force for centralized control of entities on the web: Human or otherwise.

The higher the level of complexity, the better the tools required to manage and analyze data. Therefore the better the tools available to analyze and manage humans.

I guess, if you project this, then the highest levels of the web are probably fully centralized and firewalled networks.

At some point you have to deal with attackers, inimical and hostile networks, and attacks to take over your "stack" of complexity/society. With the internet and high enough complexity, you can finally attack some subset of human behavior, privacy, brains, or information with automation.

The internet is probably the equivalent of flight-based damage to fortresses.

And this is before people discuss something like singularity AI.

[+] fooker|7 years ago|reply
The internet once required a huge amount of centralized resources and infrastructure to be realized.

I don't see why "AI" as we know it now is any different.

[+] ben_w|7 years ago|reply
An interesting perspective; however, I do not believe that the Internet is democratising.

I regard democracy (and capitalism) as industrial-era approximations of solutions to the institution-alignment problem (analogous to the AI alignment problem, but for governments and corporations). Both have their own failure modes; in the case of democracy, the failure mode is that people do not know objective truth and can be fooled by propaganda. The Internet has allowed cheap mass propaganda for as long as spam has existed, and this has become worse as propagandists learned to use internet-friendly memes.

If you take away (or corrupt) the electorate’s knowledge, I do not think it is okay to continue to call it “democracy” even if they all still vote.

[+] nabla9|7 years ago|reply
I see the Internet as somewhat neutral in relation to democracy in already existing liberal democratic societies. It promotes the diffusion of information, but not synthesis.

The Internet really promotes anarchism, even today with most of the discussion going through a few choke points.

The Internet is an anarchist force, and as such it's prone to informal elites. These are cliques that control people without well-defined responsibility, and often without people's knowledge or consent. The controlling groups can be rich individuals, corporations, foreign states, marketers and political operators.

[+] js8|7 years ago|reply
> He argues that technologies have inherent political traits.

What are the traits of money and billionaires?

[+] gaius|7 years ago|reply
> Solar power is considered democratic since anyone can harness it

But that makes literally no sense. You can’t fabricate solar panels in your backyard. You need a factory handling toxic chemistry and a supply chain of rare-earth elements from open-cast mining! Solar is no less centralised than nukes.

[+] bronz|7 years ago|reply
technologies having inherent political traits is a consequence of a much deeper and more important aspect of technology -- that it has inherent traits of human economics. if you take a set of technological realities that might be imposed on some society, it leads to that society eventually reaching exactly one stable state.

a very simple example is the technology of guns. this technology leads inevitably to a state of the world that is characterized by the presence of gun-utilizing nations. this is because the world is a kind of market, and when guns exist the only entities that are competitive are those that use guns.

right now, market economies dominate the world. even china utilizes markets for its own internal economic affairs. when AI comes, this will turn on its head -- market economies will no longer be competitive and centralized ones will replace them. this will be a pretty shocking change.

also, rather soon, humans will stop being present. this is because they will no longer be competitive, their existence will be vestigial and therefore fragile and vulnerable to the slightest perturbation. it will be similar to endangered animals in the present -- no longer competitive, their existence no longer perpetuates itself and therefore is terminated for any old reason, such as condominium developments or pollution.

[+] Animats|7 years ago|reply
AI is only in a supporting role here. It's massive data collection and storage at low cost that's the problem. Machine learning just helps to digest the data.

Tech has solved the problem of previous attempts at Big Brother - you just couldn't afford enough watchers to watch everybody all the time. Now, you can. It's even profitable.

[+] est31|7 years ago|reply
And tech will also solve the power problem of autocratic regimes. Right now, a dictator can't control every single individual in the country on their own. They need police that can search your apt at 5am because you made a blog post critical of the government. They need lawyers to convict you, prison guards, etc. In such regimes, the dictator still has to put people into places of power so that the will of the dictator is executed. But people can refuse orders, and they can declare someone else to be president.

Coups are one of the biggest dangers for dictators, and many rebellions are semi-coups where people in power just step aside, letting the rebels do their thing.

Now, enter AI. The dictator could give all that power to an AI instead of to intermediaries. An entire government run by two entities: the dictator, with direct control over the AI that runs the remainder. All the bomb-equipped drones, all the self-driving tanks, all the bipedal robots with their machine guns. All the robot prison guards and the robot judges to put critical people into prison. If this AI answered only to the head of government, then any kind of upheaval would become practically impossible.

[+] sgt101|7 years ago|reply
Dictators have one mind; oligarchies have a handful, or scores. AI (as it is, not sci-fi) is, I think, a multiplier or leverage for a mind. Open societies' differentiation is potentially millions or billions of minds. To realise that differentiation, open societies have to create a population that has its own mind, develop a substrate that supports individual thinking, and of course manage the interaction and flow of all these minds. This is where the internet started to get interesting, but it has now run into sand: most people still lack the tools and opportunities to think for themselves, and while this is true they can be dominated.
[+] austincheney|7 years ago|reply
History has already answered this numerous times. AI is a tool much like a semi-automatic rifle or the printing press. The people who understand it well have an accelerated advantage over those who do not. It is both a good and bad thing depending upon who wields the tool. Like any force multiplier it will be misunderstood (magic) and improperly regulated by both open and closed societies alike.

Like with any tool the real victors will be those who adapt it to solve basic immediate problems: a utility.

[+] intended|7 years ago|reply
The question people need to address, is whether AI is a tool like the evolution from the Gun to the automatic rifle.

OR whether AI is the kind of tool like flight, which made fortresses redundant.

We have many areas of thought and society, which have been protected by walls of 'difficulty' making them intractable to large scale, automated and effective manipulation.

This may no longer be the case, and AI may well represent the more fundamental change of the second kind.

[+] bryanrasmussen|7 years ago|reply
Some thoughts I just had, and haven't really developed yet beyond the initial moment of having them:

It used to be that you would see theories about the internet threat to closed societies, and I suppose those threats are real too. But perhaps the threats to closed societies are the obvious threats, while the threats to open societies were not immediately obvious. Thinking on this is somewhat murky, but if there are a bunch of obvious threats and a bunch of hidden threats, then I guess people guard against the obvious ones.

Finally, maybe anything that threatens one type of human society must also threaten all types; only the threats are changed around for each type.

[+] pjc50|7 years ago|reply
Open societies are threats to closed societies, and vice versa. An open world is stable; a world of closed societies is also fairly stable internally, although much more likely to go to war with each other.

The idea of "the end of history" was that Open had won, and it was just a question of mopping up the remaining closed societies. It turns out that maybe the open societies weren't as open as they thought.

[+] jpalomaki|7 years ago|reply
Maybe one such example is the effect of "democratization of information distribution". Combine this with standard analytics (and in future AI) and you have machinery that can deliver highly targeted messages, each crafted to have maximum effect on the opinions of a specific recipient.

In the centralized Internet (think Facebook, Twitter, Google) this is something that we can to some extent understand and maybe control. In a proper decentralized world, this won't be possible. Privacy features will also protect the identities of trolls.

In the past, the content distributed was limited by the imagination of people. In future, no such limitation exists. You will have AI (think generative adversarial networks) learning to create fake content tailored for specific persons.

[+] quantum_state|7 years ago|reply
It is self-evident that labeling any knowledge, technology, or artifact as being against open society is complete nonsense and contradicts the very concept of an open society itself.
[+] paganel|7 years ago|reply
It’s not nonsense: you cannot carry out mass-killing events like the Holocaust or the Gulag without modern-ish technology like the railway system. Or, to go directly to the source, let’s use Goebbels [1], who explained that without modern technologies like the radio and the airplane the Nazis’ ascent to power wouldn’t have been possible:

> It would not have been possible for us to take power or to use it in the ways we have without the radio and the airplane. It is no exaggeration to say that the German revolution, at least in the form it took, would have been impossible without the airplane and the radio.

[1] https://research.calvin.edu/german-propaganda-archive/goeb56...

[+] DyslexicAtheist|7 years ago|reply
Jacques Ellul makes very similar points in his work La Technique (The Technological Society). He traces how technology and power interplay, from the invention of the pocket watch and the steam engine all the way to computers. It's one of the best works I've come across in this space and a must-read for technology critics and advocates alike. I wish it were (even) more widely read here[1] than it is, but finding it in languages other than English or French might prove a challenge. His follow-up work Propaganda: The Formation of Men's Attitudes is also phenomenal. These two books are among the best books I discovered in 2018 (if not in the last decade).

[1] https://hn.algolia.com/?query=Jacques%20Ellul&sort=byPopular...

[+] raverbashing|7 years ago|reply
Even though the reason the author holds these positions is perfectly understandable, I don't subscribe to his views, and there is an issue with the complete liberalization of movement across borders.

Fundamentally, AI is not a threat or a blessing, it is a technology.

[+] Aeolun|7 years ago|reply
I think the biggest threat to open societies is that nobody will be able to read the warnings against dangers if they don’t have money to subscribe to a thousand publications :/
[+] renholder|7 years ago|reply
Perhaps this is my pessimistic view, but wouldn't any government that monitors its citizens for the potential of being "against" the government (thus threatening that government's power) be authoritarian?

Let's take the second amendment in the states for an example. The right to bear arms is meant to stop oppressive government but you cannot purchase anti-aircraft or anti-tank or anti-anything-really weapons. So, the might of the government's force is disproportionately in favour of the very government that the amendment is proposed to prevent oppression from, yeah? (Not that I'm in favour of overthrowing the government of the states, whatsoever; this is just an observation.)

When the Snowden revelations came about, what they revealed was the equivalent of China's list of people to send for "re-education". Granted, in the States there's been no re-education (that we're aware of), but it's only a small step from taking the information gleaned for those "lists" to doing just that.

At what point do we draw the line between authoritarian and not? Shouldn't the very notion of putting people on a list count? After all, it was a list of people that suffered the consequences of the Night of the Long Knives, was it not?

I fail to see how an open society would be considered "open", if it includes secretive programs like that. Isn't the principle of secret programs against a government's citizenry the antithesis of an open society?

Maybe I'm missing something here, but to say that there are any open societies left (while probably dystopian in nature and bereft of any hope for the future) belies the fact that there don't seem to be very many (if any at all) open societies left.

(I'm probably talking bullshit circular logic, so feel free to ignore this tirade of discontent.)

[+] DougN7|7 years ago|reply
I think maybe you’re comparing “open society” to utopia. There will always be bad actors that have to be accounted for and handled in some way. What is bad and how bad is always up for debate, but their existence will always be with us. In my opinion that requires lists. I do agree that most countries including the US have gone too far from what I’ve read.
[+] VvR-Ox|7 years ago|reply
You post an article about open societies and then I can't access it without subscribing - LOL

Just my opinion then, without reading it:

- In China we already see what a government can do with technologies, including AI, to control the behaviour of people.
- The West always seems to think it's not going to be that evil around here because people are morally superior. I don't believe that: with Hitler in Germany we saw how quickly things can change, and with refugees and poor citizens in all those countries we see how badly people can treat others and still think it is correct, even without an AI they'd believe blindly.
- Too many techies have no morals & ethics. While studying and also in business, I saw most people just being interested in personal welfare and earning the most money they can.

TL;DR: This (whatever 'this' will be) is definitely going to happen if not enough people find their way back to humanity, with or without the support of IT/AI/... (just tools)

[+] gnomewascool|7 years ago|reply
> You post an article about open societies and then I can't access it without subscribing - LOL

While I also dislike being forced to subscribe to read something, free access to publications and open society are completely different things. After all, you had open societies before the digital age, when the vast majority of newspapers were paid.

Obviously, the two are linked, but I'm not even sure whether gratis access to all journalism encourages an open society or discourages it. (You can imagine just-so stories for either case:

1. free access to articles → everyone can read them → better informed society → open society

2. free access to articles → high quality journalism goes out of business as they have to compete against zero-cost alternatives → misinformed (or even deliberately disinformed) society → people less likely to fight for openness.

)

[+] ngcc_hk|7 years ago|reply
It is not AI. It is the Chinese Communist Party.

Otherwise we might as well have a paper called "The Internet Threat ..." or "The Pen and Paper Threat ..."

America fought the Soviet Union, but is OK with a communist party in China that is all in and everywhere in the society (and copies things on top of its contributions) ... it might be too interwoven and too big to ...

Good luck.

[+] intended|7 years ago|reply
But the Chinese party is effective.

It may be abhorrent to the societal values of most of the people reading this comment, but it's great for MANY people in China.

But even that is a red herring.

Strongly controlled and moderated communication networks are currently less brittle than open societal systems.

Even on a micro scale, on forums, you can see the ideas the Chinese apply being put to use out of sheer necessity.

When you add in state-level actors with the ability to crunch the numbers, it's entirely possible to create the impression of a consensus in human minds with the use of sock puppets/pseudo-human accounts.

The holy grail of such research is discovering intent of speakers or groups of speakers. This will be used to stop hate speech as much as it will to kill dissidents.

This is a right mess and any tool created to clean it up (other than human effort), is liable to create more problems.

[+] raverbashing|7 years ago|reply
Also, the US antagonized the Soviet Union, but ultimately it fell by itself.

Remains to be seen what will happen in the Chinese case.

[+] hilbert42|7 years ago|reply
I've been interested in science and engineering since my youngest days and I've always considered myself a hacker from way back. At school, my fellow schoolmates nicknamed me 'The Boffin' as back then the terms 'hacker' and 'nerd' hadn't yet been coined. My profession is electronics engineering and IT and for my entire career I've followed and worked with the latest developments in the field. Right: I'm an insatiable technophile!

My other studies were in philosophy (ethics, etc.) and government, and over the years I've found my formal training in them truly invaluable; they've broadened my perception and worldview about the ways science and engineering dovetail into society and make the world a better place by improving the lives of its citizens.

I have to agree with the tenet of George Soros' message for many reasons, but from my perspective perhaps the most significant one is that we are moving at a frenetic pace, headlong, from an industrial age into a post-industrial one driven by advanced technologies (and primarily by the use of information). We're entering a new era whose paradigms will have morphed into ones very different from anything humankind has ever before witnessed, and the changes are coming so fast that they'll almost certainly cause fear and social disruption on an unprecedented scale unless we act now to adapt technology to our human needs rather than to those of governments and large multinational corporations. After all, they ought to be our servants, not vice versa, as is the case at present.

At present, society is both ill prepared and ill equipped to handle monumental changes of such a magnitude without considerable preparation, and we've hardly even begun to discuss the matter let alone draw up viable plans for society to adapt to them.

Leaving ML and AI aside for a moment, let's just look at the metaphysical† aspects of the Google/Facebook revolution. Both behemoths, but especially Facebook, are floundering in the mire over very important issues such as those concerning privacy, fake news, damaging effects on democracy and politics in general, and there's precious little light on the horizon to shine upon any potential solution let alone any commonly-agreed methodologies or viable options.

Let's look at what has effectively happened here: internet technologies evolved to a stage where worldwide networks such as Facebook became feasible and thus they were built without any real thought of the wider social consequences other than the paramount need to make money. Zuckerberg et al would like us all to believe that they had actually executed both their financial and social objectives as they'd planned but as we now know this is far from being the full truth.

Not only did Big Tech companies have secret plans all of their own with the deliberate intention of exploiting users but they kept these intentions hidden from both governments and users alike thus no independent scrutiny was possible until the inevitable leaks occurred. The lesson from this is that with no oversight, undesirable metaphysical effects arose from their complex systems the consequences of which have come back to bite them. Inevitably, this will happen again and again with ML and AI unless careful and sophisticated (and mandatory) regulation is introduced. To think otherwise would be foolhardy in the extreme.

It's clear to many that these 'geniuses' of Big Tech would have been fully cognizant of how new physical properties often emerge from complex systems in ways not foreseen from just examining their less complex building blocks. Moreover, similar but metaphysical processes evolve in human minds when they encounter complex systems. For instance, examining fine architecture brings an aesthetic experience to humans that no examination of a brick to the nth degree reveals. Therefore, there can be little if any excuse for Zuckerberg and his cronies not anticipating emergent human problems (such as those that have arisen from the Cambridge Analytica fiasco).

When in 1847 the Italian chemist Ascanio Sobrero* invented nitroglycerine and immediately perceived its extreme dangers, he became so scared and concerned about what he'd done that he kept the fact secret for over a year. However, unlike Sobrero, who clearly had ethics on his side, the likes of Zuckerberg et al never gave any serious consideration to the consequences of their 'inventions'. As surely as day follows night, they were expecting human problems, but they simply ignored them until it was too late. Their lack of concern for humans, the hands that actually feed them, is palpable in the extreme; ethically and morally they're bankrupt.

As history illustrates yet again, we're now well past the point where it's safe to leave extremely powerful technologies in the hands of political novices who possess precious few ethics, or whose few ethics are easily trumped by their zealotry for certain technological fixes and/or financial objectives. The fact that they may be the inventors or owners of newer technologies such as Facebook is irrelevant; what matters first and foremost is what is best for the citizenry and society at large.

The Google and Facebook cases ought to have been non-starters from the very beginning, as the general will of the populace should have nailed them dead from the outset, but that never happened, for many reasons, including the highly addictive properties that Big Tech deliberately designed into its pernicious technologies. Tragically, over the past 40-50 years or so, many traditional ethical values that would have put the kibosh on these Tech Giants long before they'd gotten started have largely evaporated as our societies have become more homogeneous and international; nowadays, the lowest common ethical denominator is just that: pretty low.

Given that societies are still struggling with very basic ethical issues, such as the withering of our hard-fought democratic processes and the rise of totalitarian power from both governments and Tech Giants, we're not even at ground level when it comes to solving the ethics of ML and AI. For starters, there are serious cultural differences (hence little or no agreement) over how to resolve the infamous trolley-car moral dilemma‡. At present, it is abundantly clear that the various societies of an international world cannot reach a common worldview or consensus on this conceptual problem, let alone on a specific ML/AI incarnation thereof; consequently, we have precious little hope of solving the even greater moral and ethical dilemmas that these fast-advancing technologies will undoubtedly create.

It seems to me the very first steps must be taken to forge a common moral and ethical consensus for humankind. We need to begin with the easiest problems to agree upon, such as the inviolability of human life, and then work upwards. Expect this to take a long time; it will. Of course, the huge dilemma is how to hold technologists and technocrats sans ethics (and common sense) at bay whilst the various consensuses are being reached.

I am strongly of the opinion that (as I was fortunate enough to experience) we should begin by ensuring that core training for all engineers, scientists, technologists and technocrats, and for that matter politicians, also includes compulsory training in key philosophical subjects, especially ethics, moral philosophy and formal logic, as well as basic/essential political science (the study of government).

I'm realistic enough to realise that, despite such ethical studies being both core and compulsory, there is every chance that they will have only a minor impact in changing human nature, if anything at all (at least in the beginning). Nevertheless, their compulsory nature will achieve one major objective: every engineer, scientist, technologist and technocrat will be forced to learn the essentials of morals and ethics as they should be practiced in our increasingly technological societies.

Thus when their technologies go belly-up and damage both societies and people's lives, with compulsory training in ethics under their belts, the Zuckerbergs of this world will no longer be able to claim ignorance as an excuse for their negligence; they will not be able to say that they 'did not know' or that 'we never considered that outcome'. The only likely excuse they'll have left to argue is 'force majeure', and it had better be a pretty good instance thereof or they'll be toast. …And good riddance.

Good effort George, keep the pressure up.

_____________

† As many will be aware, the uncomplicated definition of 'metaphysics' is 'above and beyond physics', that is to say, ontological, a priori, deductive concepts of existence, of being, of becoming, of reality, etc. As far as physics is concerned, metaphysics deals with ethereal, intangible concepts that are inconsequential to its laws, but they are nevertheless key to human existence as we know it; what it is to be human, our values, beliefs and ethics are metaphysical.

‡ 'The Moral Machine experiment', Nature, vol. 563, pp. 59-64, 2018-10-24. https://www.nature.com/articles/s41586-018-0637-6 Especially note the graphs in Fig. 3: 'Country-level clusters'.

* Incidentally, Alfred Nobel was a student of chemist Ascanio Sobrero.

___

[+] StreamBright|7 years ago|reply
It is kind of funny how this article focuses on China and totally leaves surveillance capitalism (Google, Facebook) out of the picture. If this article were unbiased, we would see examples of how the AI threat impacted the last election in the USA as well.
[+] IfOnlyYouKnew|7 years ago|reply
It’s probably presented in such a way as to be convincing to the maximum number of people. Being addressed to an English-speaking audience that already tends to be suspicious of a Chinese grab for economic power, that serves as common ground to establish rapport with the audience.

This being Soros, any connection to the US election would already doom his message to be disregarded, or even taken as evidence for its opposite.

[+] VvR-Ox|7 years ago|reply
Very good point indeed!

That's the western filter bubble and since this trade war it's even more obvious.

Typical double standards. The news tell you who the bad guys are and everyone knows that the news never lie...

[+] hoaw|7 years ago|reply
I disagree with the premise. The threat from AI, while serious, isn't immediate compared to the threat of using things like AI to justify inequality. You want to tell me that the global elite suddenly became concerned about relatively esoteric technology to the point where common politicians talk about it like it's the tax rate? And that this just happens to coincide with the effects of globalization, crony capitalism and extreme rent-seeking? If the excuse weren't AI, it would be something else.
[+] baruchthescribe|7 years ago|reply

[deleted]

[+] dictum|7 years ago|reply
Without a clear explanation of your statement (why does he deserve this euphemized special place?) this is just a personal attack, which seems to be the common response to any assertion or action coming from him.
[+] leducw|7 years ago|reply
I've never gotten this hate for Mr. Soros. Could you take some time and perhaps expound on why you dislike him? It would be of great interest to me to have an individual perspective on why he's disliked, rather than the odd mishmash of reasons I find online about him.