A_D_E_P_T | 2 years ago
I'm always amazed at the low-quality content these large venture firms and consulting groups put out. They should, by all means, have the money to hire decent and engaging writers.

This article has a bland, lifeless tone that is inferior to ChatGPT's stock settings, and in the end it says nothing important.

> The AI effect is actually part of a larger human phenomenon we call the frontier paradox. Because we ascribe to humans the frontier beyond our technological mastery, that frontier will always be ill-defined. Intelligence is not a thing that we can capture but an ever-approaching horizon that we turn into useful tools. Technology is the artifice of intelligence forged over millennia of human collaboration and competition.

Now that's what I call word salad.

ramraj07 | 2 years ago
The fact that many, if not most, of these firms unironically went all in on crypto, and no one faced any real consequences for backing what was effectively just brown fat for the economy, makes me unsurprised at the garbage content they produce as well. Clearly many of them made money just by being in the right place at the right time and getting lucky.

nocoiner | 2 years ago
I can’t decide what’s more distressing: that some millionaire GP wrote that and thought he was being super profound, or that the firm actually paid someone to write it for them.

Stripe commissions good writers to write interesting pieces. Even that SBF hagiography that one of the VC firms commissioned was more of an interesting failure than this. I can’t understand why anyone would put out this kind of anodyne prose and think they’re going to appeal to, well, anyone.

dale_glass | 2 years ago
We hold that humans are special and have superior capabilities beyond current scientific understanding. This means that words like "intelligence" don't have a precise meaning and tend to mean just "something beyond what technology has achieved so far".

This leads to constant over-promising and under-delivering. We promise that computers will be smart now, honest, but all that happens is that tech improves, still doesn't quite do what we want, and the goal moves further away.

robertlagrant | 2 years ago
> Sequoia founder Don Valentine would ask founders two questions: “why now?” and “so what?” At the heart of these questions is the combination of curiosity and rigor that asks what has changed in the world (why now?) and what will this mean (so what?).

Just drop the second sentence. The first one is fine alone. Or you could just replace the whole thing with:

> At Sequoia, we ask founders two questions: "Why now?" And "So what?"

emmender1 | 2 years ago
VCs are late to the AI game and are trying to stay relevant with these missives, but it comes across as pathetic to informed readers.

wslh | 2 years ago
> I'm always amazed at the low-quality content these large venture firms and consulting groups put out. They should, by all means, have the money to hire decent and engaging writers.

It is not about money; it reveals their real focus and execution on the issue.

j16sdiz | 2 years ago
They said they tried to be "more precise" about what they are doing, without specifying how.

papichulo2023 | 2 years ago

bluefishinit | 2 years ago
Really tired of hearing clowns who went all in on crypto scams preach their AI "wisdom". Sequoia has long been on the cutting edge of making huge, foolish investments. They helped start the trend when they invested in Color: https://en.wikipedia.org/wiki/Color_Labs

guy98238710 | 2 years ago
It's interesting that so many people present this argument now that we actually do have AIs worthy of the name. LLMs can perform an unbounded range of tasks with human-level performance, and you talk to them as if they were human. I think the AI label will stick in this case. Perhaps dismissing AI is some sort of psychological defense that protects people from the existential crisis triggered by the emergence of LLMs.

stared | 2 years ago
> If it is written in Python, it's probably machine learning
> If it is written in PowerPoint, it's probably AI

(source: https://twitter.com/matvelloso/status/1065778379612282885, 2018)

When speaking in public, using the phrase "AI" is unavoidable. I tried hard to use terms like "statistical models", "machine learning", or "deep learning" (or a concrete instance), but that is not what the general audience understands or cares about.

But when thinking about "AI", I recommend using phrases without the word "intelligence" or any of its synonyms (https://www.lesswrong.com/posts/WBdvyyHLdxZSAMmoz/taboo-your...). That way, you refrain from using ill-defined terms with many definitions, meanings, and connotations.

vouaobrasil | 2 years ago
> Intelligence is not a thing that we can capture but an ever-approaching horizon that we turn into useful tools.

I'd phrase this more as:

> Intelligence is not a thing that we can capture but an ever-approaching horizon that we turn into technology that we cannot control. We are playing God with the wisdom of a five-year-old. For every useful consequence, there will be five unexpected dangers.

unknown | 2 years ago
[deleted]

what-no-tests | 2 years ago

msaeki | 2 years ago
Good to see we’re all on the same page that this piece is lacking.

Focusing on its understanding of intelligence, which leads to the author's goalpost-moving theory: there are multiple senses of the word in English, and this dictionary definition doesn't match the one we use for what we see in, say, the cleverness of children (who "know" very little, and yet..). Or even what we see in animals. That sense refers to something real, and it is well explored in philosophy, but in branches that computer science has avoided since its inception (hence McCarthy et al.'s mistaken usage and Turing's punting on considering what it is).

The sense that computer science tends to lean toward is the one intended in things like IQ and seen in puzzle solving. To the degree that we can call software intelligent, it's because we see this kind of intelligence encoded in it (usually reflective of the authors' ability in this sense and the tradition they build on). Never the first kind, though.

That's what keeps us from accepting it as AI.

mdale | 2 years ago
I don't think the premise is correct. "AI" is a term often used late in the product lifecycle. When Apple, Google, or the technology press say a product has improved its dictation, they say the "AI" has improved.

When we talk historically about Deep Blue, we categorize it as an early AI project to have computers beat humans at chess.

It's fair that AI is a vague term for software with some training stage or approach to analyzing data, but I don't see the term not being used to describe past innovation in that (admittedly vague) domain.

The author is likely thinking about the moving goalposts of human exceptionalism as AI gains capabilities, which has a history of coverage itself, but goes about it in a roundabout way, and I don't think it applies in the way the author proposes, with more specific language being the problem.

All to say, an unfortunate article given such a cool title :)

zby | 2 years ago

mi3law | 2 years ago
Excellent observation. The single-threaded effort dominating AI today (i.e. the base assumption that OpenAI can scale GPT up as it is today into AGI) is what's causing the bottleneck.

Assuming AGI can be built as one system is assuming that mind is separate from body, which is the old dualist idea we've outgrown in our own awareness, but somehow not when it comes to AI. We've been growing AI in that direction at https://www.aolabs.ai/
AmericanOP | 2 years ago
blululu | 2 years ago
Don’t blame the writing teachers for this one.
canucyc | 2 years ago
[deleted]

trojanalert | 2 years ago
[deleted]

unknown | 2 years ago
[deleted]