One thing I’ve noticed with the AI topic is how there is no discussion of how the name of a thing ends up shaping how we think about it. There is very obviously a marketing phenomenon happening now where “AI” is being added to the name of every product, not because it’s actually AI in any rigorous or historical sense of the word, but because it’s trendy and helps you get investment dollars.
I think one of the results of this is that the concept of AI itself increasingly becomes more muddled until it becomes indistinguishable from a word like “technology” and therefore useless for describing a particular phenomenon. You can already see this with the usage of “AGI” and “super intelligence” which from the definitions I’ve been reading, are not the same thing at all. AGI is/was supposed to be about achieving results of the average human being, not about a sci-fi AI god, and yet it seems like everyone is using them interchangeably. It’s very sloppy thinking.
Instead I think the term AI is going to slowly become less marketing trendy, and will fade out over time, as all trendy marketing terms do. What will be left are actually useful enhancements to specific use cases - most of which will probably be referred to by a word other than AI.
Couldn't agree more. We are seeing a gradual degradation of the term AI. LLMs have stolen the attention of the media and the general public, and I notice that people equate ChatGPT with all of AI. It is not, but the average person doesn't know the difference. In a way, genAI is the worst thing that has happened to AI. The ML part is amazing: it allows us to understand the world we live in, which is what we have evolved to seek and value, because there is an evolutionary benefit to it--survival. The generative side is not even a solution to any particular problem; it is a problem that we are forced to pay to make bigger. I wrote "forced" because companies like Adobe use their dominant position to override legal frameworks developed to protect intellectual property and client-creator contracts, in order to grab content that is not theirs to train models they resell back to the people they stole it from, and to subject us to unsupervised, unaccountable policing of content.
What happened to ML? With the relatively recent craze precipitated by ChatGPT, the term AI (perhaps in no small part due to "OpenAI") has completely taken over. ML is a more apt description of the current wave.
> AGI is/was supposed to be about achieving results of the average human being, not about a sci-fi AI god
When we build something, we do not intend to build something that just achieves the results «of the average human being»: a slow car, a weak crane, a vague clock are built provisionally, in the process of achieving the superior aid intended... So AGI expects human-level results provisionally, while the goal remains to go beyond them. The clash you see is only apparent.
> I think the term AI is going to ... will fade out over time
Are you aware that we have been using that term for at least 60 years?
And that the Brownian minds of the masses very typically try to interfere while we proceed focusedly, regarding it as noise? Today they decide that the name is Anna, tomorrow Susie: child's play should remain nondeterminative.
The grifter is a nihilist. Nothing is holy to a grifter and if you let them they will rob every last word of its original meaning.
The problem with AI, as this article makes perfectly clear, is the same as the problem with blockchain and other esoteric grifts: it drains needed resources from often already crumbling systems¹.
The people falling for the hype beyond the actual usefulness of the hyped object are wishing for magical solutions that they imagine will solve all their problems. Problems that can't be fixed by wishful thinking, but by not fooling yourself and making technological choices that adequately address the problem.
I am not saying that LLMs are never going to be a good choice to adequately address the problem. What I am saying is that people blinded by blockchain/AI/quantum/snake-oil hype are the wrong people to make that choice, as for them every problem needs to be tackled using the current hype.
Meanwhile a true expert will weigh all available technological choices and carefully test them against the problem. So many things can be optimized and improved using hard, honest work, careful choices and a group of people trying hard not to fool themselves, this is how humanity managed to reach the moon. The people who stand in the way of our achievements are those who lost touch with reality, while actively making fools of themselves.
Again: it is not about being "against" LLMs, it is about leaders admitting they don't know when they in fact do not know. And a sure way to realize you don't know is to try yourself and fail.
¹ I had to think about my childhood friend, whose esoteric mother died of a preventable disease, because she fooled herself into believing in magical cures and gurus until the fatal end.
In the beginning I think some, if not many, people did genuinely think it was "AI". It was the closest we've ever gotten to a natural language interface, and that genuinely felt really different from anything before, even to an extreme cynic. And I also think there are many people who want us to develop AI and so were actively trying to convince themselves that e.g. GPT or whatever was sentient. Maybe that Google engineer who claimed LaMDA was "sentient" even believed himself (though I still suspect that was probably just a marketing hoax).
It's only now that everybody's used to natural language interfaces that I think we're becoming far less forgiving of things like this nonsense:
---
- "How do I do (x)."
- "You do (A)."
- "No, that's wrong because reasons."
- "Oh I'm sorry you're 100% right. Thank you for the correction. I'll keep that in mind in the future. You do (B)."
- "No that's also wrong because reasons."
- "Oh I'm sorry you're 100% right. Thank you for the correction. I'll keep that in mind in the future. You do (A)."
> One thing I’ve noticed with the AI topic is how there is no discussion on how the name of a thing ends up shaping how we think about it. There is very obviously a marketing phenomenon happening now where “AI” is being added to the name of every product. Not because it’s actually AI in any rigorous or historical sense of the word, but because it’s trendy and helps you get investment dollars.
I was reading one famous book about investing some time ago (I don't remember which one exactly; I think it was A Random Walk Down Wall Street, but don't quote me on that), and one chapter at the beginning of the book talks about the .com bubble and how companies, even ones that had nothing to do with the web, started to put .com or www in their name and saw an immediate bump in their stock price (until it all burst, as we know now).
And every hype cycle / bubble is like that. We saw something similar with cryptocurrencies. For a while, every tech demo at dev conventions had to have some relation to the "blockchain". We saw every variation of names ending in -coin. And a lot of companies that weren't even in tech had dumb projects related to the blockchain, which to anyone slightly knowledgeable about the tech were clearly complete BS, and they were almost all quietly killed off after a few months.
To a much lesser extent, we saw the same with "BigData" (does anyone even use this word anymore?) and AR/VR/XR.
And now it's AI, until the next recession and/or the next shiny thing that makes for amazing demos pops out.
That's not to say that it is all fake. There are always some genuine businesses that have actual use cases for the tech and will probably survive the burst (or get bought up and live on as MS/Google/AWS Thingamajig). But you have to be pretty naïve to think 99% of the current AI companies will survive the next 5 years, or to believe their marketing material. It doesn't matter, though, if you manage to sell before the bubble pops, and so the cycle continues.
Yeah, happens every time. Remember when people were promising blockchain but had nothing to show for it (sometimes not even an idea)? Or "cloud powered" for apps that barely made API calls? Remember when anything and everything needed an app, even if it was just some static food menu?
It's obvious BS from anyone in tech, but the people throwing money aren't in tech.
>I think the term AI is going to slowly become less marketing trendy, and will fade out over time, as all trendy marketing terms do. What will be left are actually useful enhancements to specific use cases - most of which will probably be referred to by a word other than AI.
it'll die down, but the marketing tends to stick, sadly. we'll have to deal with whether AI means machine learning or LLMs or video game pathfinding for decades to come.
The 'intelligence' label has been applied to computers since the beginning and it always misleads people into expecting way more than they can deliver. The very first computers were called 'electronic brains' by newspapers.
And this delay between people's mental images of what an 'intelligent' product can do and the actual benefits they get for their money once a new generation reaches the market creates this bullwhip effect in mood. Hence the 'AI winters'. And guess what, another one is brewing because tech people tend to think history is bunk and pay no attention to it.
> There is very obviously a marketing phenomenon happening now where “AI” is being added to the name of every product.
This isn't really a new phenomenon, though. The only thing new about it is that the marketing buzzword of the day is "AI". For a little while prior it was "machine learning". History is littered with examples of marketers and salespeople latching onto whatever is popular and trendy, and using it to sell, regardless if their product actually has anything to do with it.
Typically at this point in the hype cycle a new term emerges so companies can differentiate their hype from the pack.
Next up:
Synthetic Consciousness, "SC"
Prediction: We will see this press release within 24 months:
"Introducing the Acme Juice Squeezer with full Synthetic Consciousness ("SC"). It will not only squeeze your juice in the morning but will help you gently transition into the working day with an empathetic personality that is both supportive and a little spunky! Sold exclusively at these fine stores..."
I'd like to think that AI right now is basically a placeholder term, like a search keyword or hot topic and people are riding the wave to get attention and clicks.
Everything that is magic will be labeled AI for now, until it settles into its proper terms and is only closely discussed by those who are actually driving innovation in the space or just casually using the applications in business or private life.
The term "artificial intelligence" was marketing from its creation. It means "your plastic pal who's fun to be with, especially if you don't have to pay him." Multiple disparate technologies all called "AI", because the term exists to sell you the prospect of magic.
I worked for an AI startup that got bought by a big tech company and I've seen the hype up close. In the inner tech circles it's not exactly a big lie. The tech is good enough to make incredible demos but not good enough to generalize into reliable tools. The gulf between demo and useful tool is much wider than we thought.
This is just what happens, though. We were promised computer proliferation, and got locked-down squares with (barely) free internet access and little else to get excited for besides new ways to serve API requests. The future of programming isn't happening locally. Crypto, AI, shitty short-form entertainment, all of it is dripping from the spigot of an endless content pipeline. Of course people aren't changing the world on their cell-phone, all it's designed to do is sign up for email mailing lists and watch YouTube ads.
So I really don't actually know what the OP wants to do, besides brutalize idiots searching for a golden calf to worship. AI will progress regardless of how you gatekeep the public from perceiving it, and manipulative thought-leaders will continue to swindle idiots in hopes of turning a quick buck. These cycles will operate independently of one another, and those overeager idiots will move on to the next fad like Metaverse agriculture or whatever the fuck.
The jump to AI capabilities from data illiterate leadership is of such a pattern...
It reminds me of every past generation of focusing on the technology, not the underlying hard work + literacy needed to make it real.
Decades ago I saw this - I worked at a hardware company that tried to suddenly be a software company. Not at all internalizing - at every level - what software actually takes to build well. That leading, managing, executing software can't just be done by applying your institutional hardware knowledge to a different craft. It will at best be a half effort as the software craftspeople find themselves attracted to the places that truly understand and respect their craft.
There's a similar thing happening with data literacy, where the non data literate hire the data literate, but don't actually internalize those practices or learn from them. They want to continue operating like they always have, but just "plug in AI" (or whatever the new thing is) without fundamentally changing how they do anything.
People want to have AI, but those companies' leaders struggle with a basic understanding of statistical significance and the fundamentals of experimentation, and thus essentially destroy any culture needed to build the AI-thing.
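For what it's worth, the "basic understanding of statistical significance" bar really isn't high. A minimal sketch of the kind of check an experiment needs, using a textbook two-proportion z-test (every count below is invented purely for illustration):

```python
import math

# Hypothetical A/B test: did variant B really convert better than A,
# or is the difference just noise? The two-proportion z-test is the
# standard first check.

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    p_a = conv_a / n_a
    p_b = conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled rate under the null
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se  # z statistic

# 200/2000 vs 230/2000 conversions: looks like a 15% relative lift...
z = two_proportion_z(200, 2000, 230, 2000)
print(round(z, 2))  # ~1.53, below the usual 1.96 cutoff, so not significant
```

Which is exactly the sort of result a hype-driven leader ships as a "win" anyway.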
Do they struggle with the basics, or do they just not care?
I'm in a similar situation with my own 'C-suite' and it's impossible to try and make them understand, they just don't care. I can't make them care. It's a clash of cultures, I guess.
Senior management's skill set is fundamentally not technical competence, business competence, financial competence, or even leadership competence. It's politics and social skills (or less charitably, schmoozing). Executives haven't cared about the "how" of anything to do with their business since the last generation of managers from before the cult of the MBA aged out.
I swear this particular rant style is derived from earlier writers whom I've read, probably many times, but don't remember. It feels completely familiar, way more so than someone who started working in 2019 could possibly have invented ex nihilo. They're good at it though! And somebody has to keep the traditions going.
This post has an unnecessarily aggressive style but has some very good points about the technology hype cycle we're in. Companies are desperate to use "AI" for the sake of using it, and that's likely not a good thing.
I remember ~6 years ago wondering if I was going to be able to remain relevant as a software engineer if I didn't learn about neural networks and get good with TensorFlow. Everyone seemed to be trying to learn this skill at the same time and every app was jamming in some ML-powered feature. I'm glad I skipped that hype train, turns out only a minority of programmers really need to do that stuff and the rest of us can keep on doing what we were doing before. In the same way, I think LLMs are massively powerful but also not something we all need to jump into so breathlessly.
I empathize with it, but ultimately it's fruitless. This happens with every big tech hype. They very much want people to keep talking about it. It's part of the marketing, and tech puts a lotta money into marketing.
But that's all it is, hype. It'll die down like web3, Big Data, cloud, mobile, etc. It'll probably help out some tooling but it's not taking our jobs for decades (it will inevitably cost some jobs from executives who don't know better and ignore their talent, though. The truly sad part).
In the last few years I have come to think of AI as transformative in the same way as relational databases. Yes, right now there's a lot of fad noise around AI. That will fade. And not everyone in IT will be swimming in AI. Just like not everyone today is neck deep in databases. But databases are still pretty fundamental to a lot of occupations.
Front-end web devs might not write SQL all day, but they probably won't get very far without some comprehension. I see AI/ML becoming something just as common. Maybe you need to know some outline of what gradient descent is. Maybe you just need some understanding of prompt engineering. But a reasonable grasp of the principles is still going to be useful to a lot of people after all the hype moves to other topics.
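On the "outline of what gradient descent is" point: the outline is genuinely small. A toy sketch (made-up data and an illustrative learning rate) that fits y = 2x by repeatedly stepping a single weight against the gradient of the squared error:

```python
# Fit y = w * x to toy data with plain gradient descent.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 8.0]  # true relationship: y = 2x

w = 0.0    # initial guess for the weight
lr = 0.01  # learning rate (step size)

for _ in range(1000):
    # gradient of mean squared error w.r.t. w: mean of 2x(wx - y)
    grad = sum(2 * x * (w * x - y) for x, y in zip(xs, ys)) / len(xs)
    w -= lr * grad  # step downhill

print(round(w, 3))  # converges to ~2.0
```

That loop, scaled up to millions of weights and run on GPUs, is the core of how these models are trained; everything else is refinement.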
I agree that the world isn't changing tomorrow like so much of the hype makes it out to be. I think I disagree that engineers can skip this hype train. I think it's like the internet - it will be utterly fundamental to the future of software, but it will take a decade plus for it to be truly integrated everywhere. But I think many companies will be utterly replaced if they don't adapt to the LLM world. Engineers likewise.
Worth noting that I don't think you need to train the models or even touch the PyTorch level, but you do need to understand how LLMs work and learn how (if?) they can be applied to what you work on. There are big swaths of technology that are becoming obsolete with generative AI (most obviously/immediately in the visual creation and editing space) and IMO AI is going to continue to eat more and more domains over time.
I’ve been doing just fine ignoring AI altogether and focusing on my thing. I only have one life. Fridman had a guy on his podcast a while ago, I don’t remember his name, but he studies human languages, and the way he put it was the best summary of the actual capabilities I’ve heard so far. Very refreshing.
He is, very clearly, venting into an open mic. He starts with his bona fides (a Master's; he's built the tools, not just been an API user). He adds more throughout the article (talking about peers).
His rants are backed by "anecdotes"... I can smell the "consulting" business oozing off them. He can't really lay it all out, just speak in generalities... And where he can, his concrete examples and data are on point.
I don't know when being angry became socially unacceptable in any form. But he is just that. He might have a right to be. You might have the right to be as well, in light of the NONSENSE our industry is experiencing.
Maybe it's time to let the AI hate flow through you...
I didn't find this article refreshing. If anything, it's just the same dismissive attitude that's dominating this forum, where AI is perceived as the new blockchain. An actually refreshing perspective would be one that's optimistic.
Is it possible that this forum may be dismissive of the AI bubble because the people on HN tend to have better understanding of the technology, its limitations, and the deceptive narrative around it?
That's far from what the article actually tries to say. Did you read the full thing?
HN is disproportionately dismissive; one comment in late 2023 went like this:
Ruby is so much better than Python, and Python is only pumped up by AI hype, and the AI hype will die down soon. Ruby will regain the throne again.
Imagine that!
This article is not that. This article just tells you to get your basics correct as a company, and not to think about using AI before you are absolutely sure where and how you will use it. And non-technical people are the main drivers of AI hype (which happens to be true, besides).
Author makes good points but suffers from “i am genius and you are an idiot” syndrome which makes it seem mostly the ranting of an asshole vs a coherent article about the state of AI.
Started a career in ML/AI years before ChatGPT changed everything.
At the time, we only used the term AI if we were referring to more than just machine/deep learning techniques to create models or research something (think operations research, Monte Carlo simulations, etc.).
But even then it had already started to change.
I think startups and others will realise that to make a product successful, you need clean data and data engineers; the rest will follow. Fundamentals first.
All the startups trying to sell "AI" to traditional industries: good luck!
I've worked as an AI engineer for a big insurance company and as a contractor with a bank, and oh gosh!
I remember the hype around BigData. I was in those meetings where vendors pitched their products. Our director would ask "Do you do Big Data?" Any vendor who said no was immediately dismissed.
I still don't know what the answer to that question was supposed to be. We scraped coupons from our competitors then displayed them on our websites.
Which is a pity. The style is excellent and so wonderful, a critical relief after suffering through insane, out-of-this-world hype bordering on religion. At least to me, he doesn't read as menacing; he reads as being on a justifiably distraught polemic against total madness that's allowed to pointlessly suck up all the oxygen in the room.
We should be flipping our shit (if not each other) that we have to put up with this endless exuberant hucksterism, which robs us of agency and pollutes our noosphere with inauthentic bullshitting.
I'd so have preferred this to be true, and to ignore the AI thing (mainly to avoid any effort to change any of my habits in any way). But as an end user I can say that this is wrong. I definitely need LLMs for one critical thing: search that works.
Google has become clogged with outright spam and endless layers of indirection (useless sites that point to things that point to things that point to things, never getting me to the information that actually fucking matters), but I can ask the best LLMs queries like "what's the abc that does xyz in the context of ijk" and get meaningful answers. It only works well when the subject has a lot of "coverage" (a lot of well-trodden ground, nothing cutting-edge) but that's 80% of what I need.
I still have to check that the LLM found a real needle in the haystack rather than making up a bullshit one. (Ironically, Google works great for that once you know what the candidate needle actually is—it just sucks at finding any needle, even a hallucinated one, in the first place.) For shortest path from question to answer, LLMs are state of the art right now. They're not only kicking Google's ass, they're the first major improvement in search since Google showed up 20+ years ago.
Therefore I think this author is high on his own fumes. It reminds me of the dotcom period: yeah there was endless stupid hype and cringey grifters and yeah there were excellent rants about how stupid and craven it all was—but the internet really did change everything in the end. The ranters were right about most of the battles but ended up wrong about the war, and in retrospect don't look smart at all.
> I myself have formal training as a data scientist, going so far as to dominate a competitive machine learning event at one of Australia's top universities and writing a Master's thesis where I wrote all my own libraries from scratch. I'm not God's gift to the field, but I am clearly better than most of my competition - that is, practitioners who haven't put in the reps to build their own C libraries in a cave with scraps, but can read textbooks and use libraries written by elite institutions.
I really didn't have any illusions on the article after reading this - apparently the author believes that anyone who hasn't written a C library is below him.
I'm a Data Scientist currently consulting for a project in the Real Estate space (utilizing LLMs).
I understand the article is hyperbole, perhaps for comedic purposes, and I actually align with perhaps 80% of the author's views, but it's a bit much.
There is industry-changing tech which has become available, and many orgs are starting to grasp it. I won't deny that there's probably a large percentage of projects which fall under what the author describes, but these claims are doing a bit of a disservice to the legitimately amazing projects being worked on (and the competent people performing that work).
If you're interviewing for job at a company you're not familiar with, what are some good heuristics (and/or questions to ask) to politely get a sense of whether it's run by buzzword bingo enthusiasts?
A lot of this is on point. I work in tech diligence, talking to companies raising money. The amount of pointless AI hand waving is unreal, and the majority have not ever tested their disaster recovery plan.
A generative tool can’t Hallucinate! It isn’t misperceiving its base reality and data.
Humans Hallucinate!
ARGH. At least it’s becoming easier to point this out, compared to when ChatGPT came out.
I don't know what AI is, and nobody else does, that's why they're selling you it.
[+] [-] mdp2021|1 year ago|reply
When we build something we do not intend to build something that just achieves results «of the average human being», and a slow car, a weak crane, a vague clock are built provisionally in the process of achieving the superior aid intended... So AGI expects human level results provisionally, while the goal remains to go beyond them. The clash you see is only apparent.
> I think the term AI is going to ... will fade out over time
Are you aware that we have been using that term for at least 60 years?
And that the Brownian minds of the masses very typically try to interfere while we proceed focusedly and regarding it as noise? Today they decide that the name is Anna, tomorrow Susie: childplay should remain undeterminant.
[+] [-] atoav|1 year ago|reply
The problem with AI, as perfectly clear outlined in this article, is the same as the problem with the blockchain or with other esotheric grifts: It drains needed resources from often already crumbling systems¹.
The people falling for the hype beyond the actual usefulness of the hyped object are wishing for magical solutions that they imagine will solve all their problems. Problems that can't be fixed by wishful thinking, but by not fooling yourself and making technological choices that adequately address the problem.
I am not saying that LLMs are never going to be a good choice to adequately address the problem. What I am saying is that people blinded by blockchain/AI/quantum/snakesoil hype are the wrong people to make that choice, as for them every problem needs to be tackled using the current hype.
Meanwhile a true expert will weigh all available technological choices and carefully test them against the problem. So many things can be optimized and improved using hard, honest work, careful choices and a group of people trying hard not to fool themselves, this is how humanity managed to reach the moon. The people who stand in the way of our achievements are those who lost touch with reality, while actively making fools of themselves.
Again: It is not about being "against" LLMs, it is about leaders admitting they don't know, when they do in fact not know. And a sure way to realize you don't know is to try yourself and fail.
¹ I had to think about my childhood friend, whose esotheric mother died of a preventable disease, because she fooled herself into believing into magical cures and gurus until the fatal end.
[+] [-] somenameforme|1 year ago|reply
It's only now that everybody's used to natural language interfaces that I think we're becoming far less forgiving of things like this nonsense:
---
- "How do I do (x)."
- "You do (A)."
- "No, that's wrong because reasons."
- "Oh I'm sorry you're 100% right. Thank you for the correction. I'll keep that in mind in the future. You do (B)."
- "No that's also wrong because reasons."
- "Oh I'm sorry you're 100% right. Thank you for the correction. I'll keep that in mind in the future. You do (A)."
- #$%^#$!!#$!
---
[+] [-] maeln|1 year ago|reply
I was reading one famous book about investing some times ago (I don't remember which one exactly, I think it was a random walk into wall st, but don't quote me on that) and one chapter at the beginning of the book talk about the .com bubble and how companies, even ones who had nothing to do with the web, started to put .com or www in their name and were seeing an immediate bump in their stock price (until it all burst, as we know now).
And every hype cycle / bubble is like that. We saw something similar with cryptocurrencies. For a while, every tech demos at dev convention had to have some relation to the "blockchain". We saw every variation of names ending in -coin. And a lot of company, that where not even in tech, had dumb project related to the blockchain, which for anyone slightly knowledgeable with the tech it was clear that it was complete BS, and they almost all the time were quietly killed off after a few month.
To a much lesser extent, we saw the same with "BigData" (who even use this word anymore?) and AR/VR/XR.
And now its AI, until the next recession and/or the next shiny thing that makes for amazing demos pops-out.
It is not to say that it is all fake. There is always some genuine business that have actual use case with the tech and will probably survive the burst (or get brought up and live on has MS/Google/AWS Thingamajig). But you have to be pretty naïve if you think 99% of the current AI company will live in the next 5 years, and believe their marketing material. But it doesn't matter if you manage to sell before the bubble pop, and so the cycle continue.
[+] [-] johnnyanmac|1 year ago|reply
It's obvious BS from anyone in tech, but the people throwing money aren't in tech.
>I think the term AI is going to slowly become less marketing trendy, and will fade out over time, as all trendy marketing terms do. What will be left are actually useful enhancements to specific use cases - most of which will probably be referred to by a word other than AI.
it'll die down, but the marketing tends to stick, sadly. we'll have to deal with if AI means machine learning or LLMs or video game pathfinding for decades to come.
[+] [-] namaria|1 year ago|reply
And this delay between people's mental images of what an 'intelligent' product can do and the actual benefits they get for their money once a new generation reaches the market creates this bullwhip effect in mood. Hence the 'AI winters'. And guess what, another one is brewing because tech people tend to think history is bunk and pay no attention to it.
[+] [-] kelnos|1 year ago|reply
This isn't really a new phenomenon, though. The only thing new about it is that the marketing buzzword of the day is "AI". For a little while prior it was "machine learning". History is littered with examples of marketers and salespeople latching onto whatever is popular and trendy, and using it to sell, regardless if their product actually has anything to do with it.
[+] [-] RaftPeople|1 year ago|reply
Next up: Synthetic Consciousness, "SC"
Prediction: We will see this press release within 24 months:
"Introducing the Acme Juice Squeezer with full Synthetic Consciousness ("SC"). It will not only squeeze your juice in the morning but will help you gently transition into the working day with an empathetic personality that is both supportive and a little spunky! Sold exclusively at these fine stores..."
[+] [-] jerieljan|1 year ago|reply
Everything that is magic will be labeled AI for now, until it settles into its proper terms and is only closely discussed by those who are actually driving innovation in the space or are just casually using the applications in business or private.
[+] [-] talldayo|1 year ago|reply
So I really don't know what the OP wants to do, besides brutalize idiots searching for a golden calf to worship. AI will progress regardless of how you gatekeep the public from perceiving it, and manipulative thought-leaders will continue to swindle idiots in hopes of turning a quick buck. These cycles will operate independently of one another, and those overeager idiots will move on to the next fad like Metaverse agriculture or whatever the fuck.
[+] [-] softwaredoug|1 year ago|reply
It reminds me of every past generation's focus on the technology rather than the underlying hard work + literacy needed to make it real.
Decades ago I saw this - I worked at a hardware company that tried to suddenly be a software company. Not at all internalizing - at every level - what software actually takes to build well. That leading, managing, executing software can't just be done by applying your institutional hardware knowledge to a different craft. It will at best be a half effort as the software craftspeople find themselves attracted to the places that truly understand and respect their craft.
There's a similar thing happening with data literacy, where the non-data-literate hire the data-literate but don't actually internalize those practices or learn from them. They want to continue operating like they always have, and just "plug in AI" (or whatever new thing) without fundamentally changing how they do anything.
People want to have AI, but those companies' leaders struggle with a basic understanding of statistical significance and the fundamentals of experimentation, and thus essentially destroy any culture needed to build the AI-thing.
[+] [-] a_bonobo|1 year ago|reply
I'm in a similar situation with my own 'C-suite' and it's impossible to try and make them understand, they just don't care. I can't make them care. It's a clash of cultures, I guess.
[+] [-] habosa|1 year ago|reply
I remember ~6 years ago wondering if I was going to be able to remain relevant as a software engineer if I didn't learn about neural networks and get good with TensorFlow. Everyone seemed to be trying to learn this skill at the same time and every app was jamming in some ML-powered feature. I'm glad I skipped that hype train, turns out only a minority of programmers really need to do that stuff and the rest of us can keep on doing what we were doing before. In the same way, I think LLMs are massively powerful but also not something we all need to jump into so breathlessly.
[+] [-] johnnyanmac|1 year ago|reply
But that's all it is: hype. It'll die down like web3, Big Data, cloud, mobile, etc. It'll probably help out some tooling, but it's not taking our jobs for decades (it will inevitably cost some jobs, thanks to executives who don't know better and ignore their talent. That's the truly sad part).
[+] [-] freeopinion|1 year ago|reply
Front-end web devs might not write SQL all day, but they probably won't get very far without some comprehension of it. I see AI/ML becoming similarly common. Maybe you need to know an outline of what gradient descent is. Maybe you just need some understanding of prompt engineering. But a reasonable grasp of the principles is still going to be useful to a lot of people after all the hype moves on to other topics.
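For what it's worth, the "outline of gradient descent" really is small. A minimal sketch (illustrative only; the variable names, learning rate, and toy data are all made up for the example): fit a line y = w*x + b to a few points by repeatedly stepping opposite the gradient of the squared error.

```python
# Toy data lying roughly on y = 2x + 1
points = [(1.0, 3.0), (2.0, 5.0), (3.0, 7.0)]

w, b = 0.0, 0.0   # parameters to learn
lr = 0.05         # learning rate (step size)

for _ in range(2000):
    # Gradients of mean squared error with respect to w and b
    grad_w = sum(2 * (w * x + b - y) * x for x, y in points) / len(points)
    grad_b = sum(2 * (w * x + b - y) for x, y in points) / len(points)
    # Step downhill: move parameters against the gradient
    w -= lr * grad_w
    b -= lr * grad_b

print(w, b)  # converges toward w ≈ 2, b ≈ 1
```

That loop, scaled up to millions of parameters and run on gradients computed by backpropagation, is the core training procedure behind the models being hyped. Knowing that much is probably the SQL-for-front-end-devs level of literacy the parent is describing.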
[+] [-] openmajestic|1 year ago|reply
Worth noting that I don't think you need to train the models or even touch the PyTorch level, but you do need to understand how LLMs work and learn how (if?) they can be applied to what you work on. There are big swaths of technology that are becoming obsolete with generative AI (most obviously/immediately in the visual creation and editing space) and IMO AI is going to continue to eat more and more domains over time.
[+] [-] nutrie|1 year ago|reply
[+] [-] zer00eyz|1 year ago|reply
I'm not sure it's "unnecessary".
He is, very clearly, venting into an open mic. He starts with his bona fides (a Masters; he's built the tools, not just been an API user). He adds more throughout the article (talking about peers).
His rants are backed by "anecdotes"... I can smell the "consulting" business oozing off them. He can't really lay it out, just speak in generalities... And where he can, his concrete examples and data are on point.
I don't know when anger became socially unacceptable in any form. But he is just that. He might have a right to be. You might have the right to be as well, in light of the NONSENSE our industry is experiencing.
Maybe it's time to let the AI hate flow through you...
[+] [-] __rito__|1 year ago|reply
HN is disproportionately dismissive, where one comment in late 2023 was this:
Imagine that! This article is not that. This article just tells you to get your basics right as a company, and not to think about using AI before you are absolutely sure where and how you will use it. And non-technical people are the main drivers of AI hype (which, incidentally, is true).
[+] [-] arbfay|1 year ago|reply
At the time, we only used the term AI when referring to more than just machine/deep learning techniques for creating models or doing research (think operations research, Monte Carlo simulations, etc). But that was already starting to change.
I think startups and others will realise that to make a product successful you need clean data and data engineers; the rest will follow. Fundamentals first.
All the startups trying to sell "AI" to traditional industries: good luck!
I've worked as an AI engineer for a big insurance company and as a contractor with a bank, and oh gosh!
[+] [-] firefoxd|1 year ago|reply
I still don't know what the answer to that question was supposed to be. We scraped coupons from our competitors then displayed them on our websites.
[+] [-] jauntywundrkind|1 year ago|reply
Which is a pity. The style is excellent & so wonderful, a critical relief after suffering through insane, out-of-this-world hype-bordering-on-religion. At least to me, he doesn't read as menacing; he reads as being on a justifiably distraught polemic against total madness that's allowed to pointlessly suck up all the oxygen in the room.
We should be flipping our shit (if not each other) that we have to put up with this endless exuberant hucksterism, which robs us of agency & pollutes our noosphere with inauthentic bullshitting.
[+] [-] blast|1 year ago|reply
I'd so have preferred this to be true, and to ignore the AI thing (mainly to avoid any effort to change any of my habits in any way). But as an end user I can say that this is wrong. I definitely need LLMs for one critical thing: search that works.
Google has become clogged with outright spam and endless layers of indirection (useless sites that point to things that point to things that point to things, never getting me to the information that actually fucking matters), but I can ask the best LLMs queries like "what's the abc that does xyz in the context of ijk" and get meaningful answers. It only works well when the subject has a lot of "coverage" (a lot of well-trodden ground, nothing cutting-edge) but that's 80% of what I need.
I still have to check that the LLM found a real needle in the haystack rather than making up a bullshit one. (Ironically, Google works great for that once you know what the candidate needle actually is—it just sucks at finding any needle, even a hallucinated one, in the first place.) For shortest path from question to answer, LLMs are state of the art right now. They're not only kicking Google's ass, they're the first major improvement in search since Google showed up 20+ years ago.
Therefore I think this author is high on his own fumes. It reminds me of the dotcom period: yeah there was endless stupid hype and cringey grifters and yeah there were excellent rants about how stupid and craven it all was—but the internet really did change everything in the end. The ranters were right about most of the battles but ended up wrong about the war, and in retrospect don't look smart at all.
[+] [-] Tiberium|1 year ago|reply
I really didn't have any illusions on the article after reading this - apparently the author believes that anyone who hasn't written a C library is below him.
Also, this author is known for writing articles that are full of ranting and have rage-bait titles, for example https://news.ycombinator.com/item?id=34968457
[+] [-] timlod|1 year ago|reply
There is industry-changing tech which has become available, and many orgs are starting to grasp it. I won't deny that there's probably a large percentage of projects which fall under what the author describes, but these claims are doing a bit of a disservice to the legitimately amazing projects being worked on (and the competent people performing that work).
[+] [-] snazz|1 year ago|reply
[+] [-] iainctduncan|1 year ago|reply