This book is middling at best. From a literary perspective it's terrible. It reads like GPT-3.5 wrote it. From the perspective of introducing a new idea and understanding how AI will affect society, it's __fine__. The book is full of contradictions. Suleyman regularly points out how we've __never__ successfully constrained a disruptive technological innovation, and then says we NEED to here. I mean, absolutely absurd stuff. It ostensibly ends on an optimistic note, but is actually much more nihilistic.
The begging for regulatory capture in the AI business is so egregious that I think the government should nationalize all the AI companies 'to keep us safe' but really to prevent these shysters from making money.
Hmm, didn't we do this for cloning? I remember hearing about this on Lex Fridman when he interviewed Max Tegmark.
If I recall correctly, the entire world is in agreement that cloning is illegal, and some people in China (possibly just one) even went to prison for it.
> How often do you 'vet' authors of non-fiction books prior to reading the book?
I misread the question as 'How' rather than 'How often', but I'll repeat Jerry Weinberg's heuristic. He'd wait until three people he trusted recommended a book before reading it, as a way to filter for quality. He used it as a way to manage his limited time ("24 hours, maybe 60 good years" - Jimmy Buffett), but it also works to weed out books not worth mentioning.
I read all the negative/neutral GoodReads comments of the book (and author's other books if I'm not familiar with the author, and maybe Wikipedia if I want to dig deeper).
99% of books I learn about from recommendations (HN, blogs, other books), and the pattern I see is that the source/recommender is usually at a similar "popsci" level.
I sometimes get it wrong. In most cases I just waste a few hours. The worst mistake was taking Why We Sleep to heart before I read the rebuttal. I still think it's fine, but more on a Gladwell level.
In Suleyman's case, I recognize the name from the Inflection shenanigans, so I already have a bias against the book to start with.
I always do a little bit of research on the author online beforehand. If I'm going to read a non-fiction book I need to know if the author is credible.
I do believe Bill Gates might be a little bit biased here. I read the book some months ago, and while I can't say it's a bad book, I wouldn't call it a favorite either.
If it's something I have no grounding in, then understanding the author's potential biases is useful.
If it's something I'm relatively familiar with, or close enough that I think I'll be able to spot the potential biases in real time, then I don't usually bother.
This issue is sometimes somewhat alleviated by reading multiple sources for the same/similar information.
I basically don't read much pop sci anymore because it's almost always bad; the science from the good ones is better read unfiltered in the original papers, and the journals do a good job of making authors declare conflicts of interest.
It is the worst when these books are not written by people with scientific training because they are more likely to make logical errors or use motivated reasoning to push a narrative.
Funny you should say that because I just read his Wikipedia page and a couple of articles about him. He and Gates are successful salesmen and managers who dabbled in coding when they were young. I don't expect any insight from them about the effect of technologies on society or anything like that. The idea that they are intellectuals or scholars is laughable.
> The historian Yuval Noah Harari has argued that humans should figure out how to work together and establish trust before developing advanced AI. In theory, I agree. If I had a magic button that could slow this whole thing down for 30 or 40 years while humanity figures out trust and common goals, I might press it. But that button doesn’t exist. These technologies will be created regardless of what any individual or company does.
Is that true, though? Training runs for frontier models don’t happen without enormous resources and support. You don’t run one in your garage. It doesn’t happen unless people make it happen.
Is this really a harder coordination problem than, say, stopping climate change, which Gates does believe is worth trying?
Climate change is the byproduct of the desired outcome, energy.
Advanced AI, if you buy Yuval's argument, is the threat in and of itself.
So climate change is a problem that can be 'solved' while the main goal is pursued. This is ideologically consistent with Gates' investment in TerraPower. Whereas AI isn't, because the desired outcome is the threat, not a by-product.
So your question is a bit flawed, fundamentally.
As for Gates' point, is it true? Almost certainly yes. The game theory is: pursue and lie that you aren't, or openly pursue. You can't ever not pursue, because you do not and cannot have perfect information.
Imagine how much visibility China would demand from the US to trust it was doing nothing, far more than the US could give, and vice versa.
Do you think the US is going to give its adversaries tracking and production information on its most advanced chips? It would never, and even if it did, why would other powers trust it when there's every reason to lie?
Sam Altman convincing the ultra-rich that AI is the next best place to put their too much money after they've run out of other places to put it is not anywhere near the spirit of what he meant by people "working together." He's saying that while in a dysfunctional society, AI will only magnify the dysfunction.
Fighting climate change is a means towards the end of having a livable environment, developing AI is a means towards the end of having a better society. But, whereas fixing the environment would be its own automatic benefit, having AGI would not automatically improve the world. Something as seemingly innocuous and positive as social networking made a lot of things worse.
AI research is global and of strategic value with both the US and China competing. I don't see one stopping research while the other cracks ahead. Similar problems exist with curbing co2 which hasn't gone very well to date.
Let's face it. Our world is currently filled with rogue states waging pointless wars, spying on their own citizens, launching cyberattacks, seeding disinformation outside their borders, etc. If they want to make it happen they will. It is a damn hard coordination problem.
I just finished listening to it on Audible. It is certainly thought-provoking, but full of contradictions, as others have mentioned. The claim that this technology cannot be contained, and yet must be contained, is pretty doom and gloom. The prognostications about artificial intelligence are hardly as scary as the ones made around genetic sequencing: that you can buy a device for $30k that will print pathogens and viruses for you in your garage. That's some scary stuff.
You've been able to buy plasmids and make whatever bacteria you want for a few decades now. AI may help, but it certainly doesn't cost $30k to cause mischief. Pretty sure I learned that in Bio 102.
It's an okay book, but there isn't really anything in it that you couldn't infer after you've read the first 10%. A lot of common-sense warnings about risks from AI, bioweapons, cyberattacks, etc., but it's all very generic. There's no chapter in it that I found had any genuine insight. An interesting chapter would have been "what if I'm completely wrong and all we get is a bunch of meme generators and the next bubble", but that never appears to be a possibility.
It's oddly enough the case with a lot of books that end up on Gates' recommended lists. I saw someone say recently, maybe a bit too meanly, that we might make it to AGI because Yuval Noah Harari keeps writing books that look more and more like they were written by ChatGPT, and that's not entirely untrue for a lot of the stuff Gates recommends.
Regardless, he's certainly been in the right places to understand AI trends and Gates' write-up makes it sound like an intriguing distillation. Thanks for posting!
My favorite book on AI is Sutton's "Reinforcement Learning: An Introduction". Looking just at the URL I knew this would be some pop-sci tripe, but I'm leaving this comment here in case people want something other than what they can tout on Twitter/X.
It's a book written for non experts and clearly labeled as such. Bill Gates likely knows more about AI than you do and yet he recommends a book that normal people can understand. There may be a lesson here.
>Given that The Coming Wave assumes that technology comes in waves and these waves are driven by insiders, the solution it proposes is containment—governments should determine (via regulation) who gets to develop the technology, and what uses they should put the technology to. The assumption seems to be that governments can control access to natural choke points in the technology. One figure the book offers is that around 80% of the sand used in semiconductors comes from a single mine—control the mine and you control much of that aspect of the industry. This is not true, though. Nuclear containment, for example, relies more on peer pressure between nation states than on regulation per se. It's quite possible to build a reactor or bomb in your backyard. The more you scale up these efforts, the more likely it is that the international community will notice and press you to stop. Squeezing on one of these choke points is more likely to move the activity somewhere else than to enable you to control it.
>...
>At its heart this is a book by an insider arguing that someone is going to develop this world-changing technology, and it should be them.
> 80% of the sand used in semiconductors comes from a single mine—control the mine and you control much of that aspect of the industry.
Tangent, but I suspect the reality is that as soon as you cut off production in that mine, the math changes such that a bunch of other potential mines that weren't profitable before suddenly become profitable. The end result is just slightly more expensive sand, which is presumably only a small portion of the entire cost of a semiconductor anyway.
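That substitution reasoning can be sketched as a toy supply model. Every capacity and cost figure below is invented purely for illustration; nothing here comes from the book:

```python
def clearing_cost(mines, demand):
    """Fill demand from the cheapest mines first; return the cost per ton
    of the most expensive mine that has to be tapped (the price-setting
    marginal cost in a simple competitive model), or None if demand
    can't be met at all."""
    remaining = demand
    for capacity, cost in sorted(mines, key=lambda m: m[1]):
        remaining -= capacity
        if remaining <= 0:
            return cost
    return None

# (capacity, cost per ton) for four hypothetical mines; the first is
# the big cheap one that supposedly dominates supply
all_mines = [(80, 50), (30, 120), (40, 130), (60, 200)]

print(clearing_cost(all_mines, 100))       # 120 with the big mine online
print(clearing_cost(all_mines[1:], 100))   # 200 once it's cut off
```

Cutting off the cheap mine raises the marginal cost only to the next viable mine's cost, not without bound, which is the "slightly more expensive sand" outcome.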
While I've enjoyed the small bursts of wisdom in many of Bill Gates' shorter talks, I haven't found anything noteworthy in his written reviews and books. His viewpoints are often bizarre and radical. I still chuckle remembering his conviction, in "The Road Ahead" (1995), that ISDN would become the dominant Internet technology before the year 2000. It seemed bizarre even back then.
With all due respect, I would have hoped to see a list of other AI books reviewed, with a recommendation. Currently the article reads like a preface for the book "The Coming Wave".
Of all the recent books I've read about AI, this was by far the worst.
The Singularity is Nearer, Life 3.0, and A Brief History of Intelligence were much much much better imho
By the late 1990s, Microsoft's competitors (including Netscape and Apple) were nearly dead. In fact, the browser that Apple originally shipped with OS X was M$ Internet Explorer.
Gates was several months late to the web, but it's not like he missed the boat.
In an early chapter he talks about how well LLMs know medical and legal information, but doesn't mention how they make things up... I was hoping he'd discuss the challenges and hurdles right away...
For the record, I no longer endorse the recursive self improvement story told in Friendship is Optimal. I do not believe that we'll get FOOM from a Solomonoff reasoner.
God damn it. So far we've got the Harry Potter fan fiction, the My Little Pony fan fiction, the pop-sci book Gates is talking about, and one actual book, Reinforcement Learning: An Introduction by R. Sutton.
We need something that's technical enough to be useful, but not based on outdated assumptions about the technology used to implement AI.
These days it would be surprising if an author didn't generate at least some of the text with AI, or direct an AI to improve the prose.
These aren't mutually exclusive.
Nothing that costs ten billion dollars gets built without the explicit or implicit consent of the public.
Internationally? If it’s a big enough deal the deterrent is strategic counter value.
We’re doing this deliberately. Maybe that’s good, maybe it’s bad, but it’s on purpose and it’s dishonest to say otherwise.
edit: as a semi-related question for folks here. How often do you 'vet' authors of non-fiction books prior to reading the book?
In terms of 'how often', pretty often.
https://news.ycombinator.com/item?id=39757330
https://www.goodreads.com/book/show/90590134-the-coming-wave
There are waves that cannot be ridden.
I wrote the details here: https://www.fimfiction.net/blog/1026612/friendship-is-optima...