I also find these features annoying and useless and wish they would go away. But that's not because LLMs are useless, nor because the public isn't using them (as daishi55 pointed out here: https://news.ycombinator.com/item?id=44479578)
It's because the integrations with existing products are arbitrary and poorly thought through, the same way that software imposed by executive fiat in BigCo offices for trend-chasing reasons has always been.
At work we started calling this trend clippification for obvious reasons. In a way this aligns with your comment: The information provided by Clippy was not necessarily useless, nevertheless people disliked it because (i) they didn't ask for help (ii) and even if by any chance they were looking for help, the interaction/navigation was far from ideal.
Having all these popups announcing new integrations with AI chatbots showing up while you are just trying to do your work is pretty annoying. It feels like this time we are fighting an army of Clippies.
I am a huge AI supporter, and use it extensively for coding, writing and most of my decision making processes, and I agree with you. The AI features in non-AI-first apps tend to be awkward bolt-ons, poorly thought out and using low quality models to save money.
I don't want shitty bolt-ons, I want to be able to give chatgtp/claude/gemini frontier models the ability to access my application data and make api calls for me to remotely drive tools.
Couldn’t agree more. There are awesome use-cases for AI, but Microsoft and Google needed to shove AI everywhere they possibly could, so they lost all sense of taste and quality. Google raised the price of Workspace to account for AI features no one wants. Then, they give away access to Gemini CLI for free to personal accounts, but not Workspace accounts. You physically cannot even pay Google to access Veo from a workspace account.
Raise subscription prices, don’t deliver more value, bundle everything together so you can’t say no. I canceled a small Workspace org I use for my consulting business after the price hike last year; also migrating away everything we had on GCP. Google would have to pay me to do business with them again.
google is sending pop-up sonyliv open emails suggesting that they will use our data and help us with a. i. which should not be accepted at all the pop-ups even don't disappear this is a real cheating and fraud
> It's because the integrations with existing products are arbitrary and poorly thought through, the same way that software imposed by executive fiat in BigCo offices for trend-chasing reasons has always been.
It's just rent-seeking. Nobody wants to actually build products for market anymore; it's a long process with a lot of risk behind it, and there's a chance you won't make shit for actual profit. If however you can create a "do anything" product that can be integrated with huge software suites, you can make a LOT of money and take a lot of mind-share without really lifting a finger. That's been my read on the "AI Industry" for a long time.
And to be clear, the integration part is the only part they give a shit about. Arguably especially for AI, since operating the product is so expensive compared to the vast majority of startups trying to scale. Serving JPEGs was never nearly as expensive for Instagram as responding to ChatGPT inquiries is for OpenAI, so they have every reason to diminish the number coming their way. Being the hip new tech that every CEO needs to ram into their product, irrespective of it does... well, anything useful, while also being so frustrating or obtuse for users to actually want to use, is arguably an incredibly good needle to thread, if they can manage it.
And the best part is, if OpenAI's products do actually do what they say on the tin, there's a good chance many lower rungs of employment will be replaced with their stupid chatbots, again irrespective of whether or not they actually do the job. Businesses run on "good enough." So it's great, if OpenAI fails, we get tons of useless tech injected into software products already creaking under the weight of so much bullhockety, and if they succeed, huge swaths of employees will be let go from entry level jobs, flooding the market, cratering the salary of entire categories of professions, and you'll never be able to get a fucking problem resolved with a startup company again. Not that you probably could anyway but it'll be even more frustrating.
And either way, all the people responsible for making all your technology worse every day will continue to get richer.
Having seen the almost rabid and fearful reactions of product owners first hand around forcing AI into every product, it’s because all these companies are in panic mode. Many of these folks are not thinking clearly and have no idea what they’re doing. They don’t think they have time to think it through. Doing something is better than nothing. It’s all theatre for their investors coupled with a fear of being seen as falling behind. Nobody is going to have a measured and well thought through approach when they’re being pressured from above to get in line and add AI in any way. The top execs have no ideas, they just want AI. You’re not even allowed to say it’s a bad idea in a lot of bigger companies. Get in line or get a new job. At some point this period will pass and it will be pretty embarrassing for some folks.
Companies that don't invent the car get to go extinct.
This is the next great upset. Everyone's hair is on fire and it's anybody's ball game.
I wouldn't even count the hyperscalers as certain to emerge victorious. The unit economics of everything and how things are bought and sold might change.
We might have agents that scrub ads from everything and keep our inboxes clean. We might find content of all forms valued at zero, and have no need for social networking and search as they exist today.
And for better or worse, there might be zero moat around any of it.
The major AI gatekeepers, with their powerful models, are already experiencing capacity and scale issues. This won't change unless the underlying technology (LLMs) undergoes a fundamental shift. As more and more things become AI-enabled, how dependent will we be on these gatekeepers and their computing capacity? And how much will they charge us for prioritised access to these resources? And we haven't really gotten to the wearable devices stage yet.
Also, everyone who requires these sophisticated models now needs to send everything to the gatekeepers. You could argue that we already send a lot of data to public clouds. However, there was no economically viable way for cloud vendors to read, interpret, and reuse my data — my intellectual property and private information. With more and more companies forcing AI capabilities on us, it's often unclear who runs those models and who receives the data and what is really happening to the data.
This aggregation of power and centralisation of data worries me as much as the shortcomings of LLMs. The technology is still not accurate enough. But we want it to be accurate because we are lazy. So I fear that we will end up with many things of diminished quality in favour of cheaper operating costs — time will tell.
We run our own LLM server at the office for a month now, as an experiment (for privacy/infosec reasons), and a single RTX 5090 is enough to serve 50 people for occasional use. We run Qwen3 32b which in some benchmarks is equivalent to GPT 4.1-mini or Gemini 2.5 Flash. The GPU allows 2 concurrent requests at the same time with 32k context each and 60 tok/s. At first I was skeptical a single GPU would be enough, but it turns out, most people don't use LLMs 24/7.
"how much will they charge us for prioritised access to these resources"
For the consumer side, you'll be the product, not the one paying in money just like before.
For the creator side, it will depend on how competition in the market sustains. Expect major regulatory capture efforts to eliminate all but a very few 'sanctioned' providers in the name of 'safety'. If only 2 or 3 remain, it might get realy expensive.
> The major AI gatekeepers, with their powerful models, are already experiencing capacity and scale issues. This won't change unless the underlying technology (LLMs) undergoes a fundamental shift. As more and more things become AI-enabled, how dependent will we be on these gatekeepers and their computing capacity? And how much will they charge us for prioritised access to these resources? And we haven't really gotten to the wearable devices stage yet.
The scale issue isn't the LLM provider, it's the power grid. Worldwide, 250 W/capita. Your body is 100 W and you have a duty cycle of 25% thanks to the 8 hour work day and having weekends, so in practice some hypothetical AI trying to replace everyone in their workplaces today would need to be more energy efficient than the human body.
Even with the extraordinarily rapid roll-out of PV, I don't expect this to be able to be one-for-one replacement for all human workers before 2032, even if the best SOTA model was good enough to do so (and they're not, they've still got too many weak spots for that).
This also applies to open-weights models, which are already good enough to be useful even when SOTA private models are better.
> You could argue that we already send a lot of data to public clouds. However, there was no economically viable way for cloud vendors to read, interpret, and reuse my data — my intellectual property and private information. With more and more companies forcing AI capabilities on us, it's often unclear who runs those models and who receives the data and what is really happening to the data.
I dispute that it was not already a problem, due to the GDPR consent popups often asking to share my browsing behaviour with more "trusted partners" than there were pupils in my secondary school.
But I agree that the aggregation of power and centralisation of data is a pertinent risk.
I don't think this is true. A lot of people had no interest until smartphones arrived. Doing anything on a smartphone is a miserable experience compared to using a desktop computer, but it's more convenient. "Worse but more convenient" is the same sales pitch as for AI, so I can only assume that AI will be accepted by the masses too.
People didn't even want mobile phones. In The Netherlands, there's a famous video of an interviewer asking people on the street ca. 1997 whether they would want a mobile phone. So not even a smartphone, just a mobile phone. The answer was overwhelmingly negative.
As a kid I had Internet access since the early 90s. Whenever there was some actual technology to see (Internet, mobile gadgets etc.) people stood there with big eyes and forgot for a moment this was the most nerdy stuff ever
I’m not even sure it’s the right question. No one knew what the long term effects of the internet and mobile devices would be, so I’m not surprised people thought it was great. Cocoa leaves seemed pretty amazing at the beginning as well. But mobile devices especially have changes society and while I don’t think we can ever put the genie back in the bottle, I wish that we could. I suspect I’m not alone.
I've seen this bad take over and over again in the last few years, as a response to the public reaction to cryptocurrency, NFTs, and now generative AI.
It's bullshit.
I mean, sure: there were people who hated the Internet. There still are! They were very clearly a minority, and almost exclusively older people who didn't like change. Most of them were also unhappy about personal computers in general.
But the Internet caught on very fast, and was very, very popular. It was completely obvious how positive it was, and people were making businesses based on it left and right that didn't rely on grifting, artificial scarcity, or convincing people that replacing their own critical thinking skills with a glorified autocomplete engine was the solution to all their problems. (Yes, there were also plenty of scams and unsuccessful businesses. They did not in any way outweigh the legitimate successes.)
By contrast, generative AI, while it has a contingent of supporters that range from reasonable to rabid, is broadly disliked by the public. And a huge reason for that is how much it is being pushed on them against their will, replacing human interaction with companies and attempting to replace other things like search.
Yes, everyone wanted the internet. It was massively hyped and the uptake was widespread and rapid.
Obviously saying “everyone” is hyperbole. There were luddites and skeptics about it just like with electricity and telephones. Nevertheless the dotcom boom is what every new industry hopes to be.
I agree with the general gist of this piece, but the awkward flow of the writing style makes me wonder if it itself was written by AI…
There are open source or affordable, paid alternatives for everything the author mentioned. However, there are many places where you must use these things due to social pressure, lock-in with a service provider (health insurance co, perhaps), and yes unfortunately I see some of these things as soon or now unavoidable.
Another commenter mentioned that ChatGPT is one of the most popular websites on the internet and therefore users clearly do want this. I can easily think of two points that refute that:
1. The internet has shown us time and time again that popularity doesn’t indicate willingness to pay (which paid social networks had strong popularity…?)
2. There are many extremely popular websites that users wouldn’t want to be woven throughout the rest of their personal and professional digital lives
It's like talking into a void. The issue with AI is that it is too subtle, too easy to get acceptable junk answers and too subtle for the majority to realize we've made a universal crib sheet, software developers included, perhaps one of the worst populations due to their extremely weak communications as a community. To be repeatedly successful with AI, one has to exert mental effort to prompt AI effectively, but pretty much nobody is willing to even consider that. Attempts to discuss the language aspects of using an LLM get ridiculed as 'prompt engineer is not engineering' and dismissed, while that is exactly what it is: prompt engineering using a new software language, natural language, that the industry refuses to take seriously, but is in fact an extremely technical programming language so subtle few to none of you realize it, nor the power that is embodied by it within LLMs. They are incredible, they are subtle, to the degree the majority think they are fraud.
Isn't "Engineering" is based on predictability, on repeatability?
LLMs are not very predictable. And that's not just true for the output. Each change to the model impacts how it parses and computes the input. For someone claiming to be a "Prompt Engineer", this cannot work. There are so many variables that are simply unknown to the casual user: training methods, the training set, biases, ...
If I get the feeling I am creating good prompts for Gemini 2.5 Pro, the next version might render those prompts useless. And that might get even worse with dynamic, "self-improving" models.
So when we talk about "Vibe coding", aren't we just doing "Vibe prompting", too?
The issue is that you have to put in more effort to solve a problem using AI, than to just solve it yourself
If I have to do extensive subtle prompt engineering and use a lot of mental effort to solve my problem... I'll just solve the problem instead. Programming is a mental discipline - I don't need help typing, and if using an AI means putting in more brainpower, its fundamentally failed at improving my ability to engineer software
Just moments ago I noticed for the first time that Gmail was giving me a summary of email I had received.
Please don't. I am going to read this email. Adding more text just makes me read more.
I am sure there's a common use case of people who get a ton of faintly important email from colleagues. But this is my personal account and the only people contacting me are friends. (Everyone else should not be summarized; they should be trashed. And to be fair I am very grateful for Gmail's excellent spam filtering.)
How long before spam filtering is also done by an LLM and spammers or black hat hackers embed instructions into their spam mails to exploit flaws in the AI?
ChatGPT is the 5th most-visited website on the planet and growing quickly. that’s one of many popular products. Hardly call that unwilling. I bet only something like 8% of Instagram users say they would pay for it. Are we to take this to mean that Instagram is an unpopular product that is rbi g forced on an unwilling public?
Would you like your Facebook feed or Twitter or even Hacker News feed inserted in between your work emails or while you are shopping for clothes on a completely different website?
If you answer no, does that make you an unwilling user of social media? It’s the most visited sites in the world after all, how could randomly injecting it into your GPS navigation system be a poor fit?
My 75 year old father uses Claude instead of google now for basically any search function.
All the anti-AI people I know are in their 30s. I think there are many in this age group that got use to nothing changing and are wishing it to stay that way.
If I want to use ChatGPT I will go and use ChatGPT myself without a middleman. I don't need every app and website to have it's own magical chat interface that is slow, undiscoverable and makes the stuff up half the time.
I downloaded a Quordle game on Android yesterday. It pushes you to buy a premium subscription, and you know what that gets you? AI chat inside the game.
I'm not unwilling to use AI in places where I choose. But let's not pretend that just because people do use it in one place, they are willing to have it shoved upon them in every other place.
Agreed. My mother and aunts are using ChatGPT all the time. It has really massive market penetration in a way I (a software engineer and AI skeptic/“realist”) didn’t realize. Now, do they care about meta’s AI? Idk, but they’re definitely using AI a lot
People want these features as much as they wanted Cortana on Windows.
Which is to say, there's already a history of AI features failing at a number of these larger companies. The public truly is frequently rejecting them.
But why are the CEOs insisting so much on AI? Because stock investors prefer to invest on anything with "AI inside". So the "AI business model" would not collapse , because it is what investors want. It is a bubble. It will be bubbly for a while, until it isn't.
It is not just that. Companies that already have lots of users interacting with their platform (Microsoft, Google, Meta, Apple ...) want to capture your AI interactions to generate more training data, get insights in what you want and how you go about it, and A/B test on you. Last thing they want is someone else (Anthropic, Deepseek ...) capturing all that data on their users and improve the competition.
Excellent Frank Zappa reference in The Famous Article is "I'm the Slime"[1].
The thing that really chafes me about this AI, irrespective of whether it is awesome or not, is emitting all of the information to some unknown server. To go with another Zappa reference, AI becomes The Central Scrutinizer[2].
I predict an increasing use of Free Software by discerning people who want to maintain more control of their information.
As far as I can tell, the AI-hate is most prominent in tech circles (creativity too, but they don't like media generation, largely embrace text though).
It seems here on the ground in non-tech bubble land, people use ChatGPT a ton and lean hard on AI features.
When Google judges the success of bolted on AI, they are looking at how Jane and John General Public use it, not how xleet007 uses it(or doesn't).
There is also the fact that AI is still just being bolted onto things now. The next iteration of this software will be AI native, and the revisions after that will iron out big wrinkles.
When settings menus and ribbon panels are optional because you can just tell the program what to do in plain English, that will be AI integration.
Ok but TFA says only 8% of REGULAR PEOPLE want these features so if you're going to directly contradict the source material we all just read (right???) you should bring a citation because otherwise, in light of the data in the article you are ostensibly discussing, I don't know how that's "as far as you can tell."
I looked for the right term but force-feeding is what it is. I yesterday also changed my default search engine from Duckduckgo to Ecosia as they seem the only one left not to provide flaky AI summaries.
In fact I also tried the communication part - outside of Outlook - but people don't like superficial AI polish
As a note on Microsoft's obnoxious Copilot push, I too got the "Your 365 subscription price is increasing because we're forcing AI on you".
Only when I went to cancel[1], suddenly they made me aware that there was a "classic" subscription that was the normal price, without CoPilot. So they basically just upsized everyone to try to force uptake.
[1] - I'm in the AI business and am a user and abuser of AI daily, but I don't need it built directly into every app. I Already have AI subscriptions and local models and solutions.
This rapist mentality pricing is really off putting.
Recently I tried to cancel notion account of some people in our org and it wouldn’t let me do it easily so just cancelled the whole notion subscription, really wish they would go out of business for doing these kind of things
Even worse: they are using your data that you are inputting into these programs to continuously train their data. That’s an even bigger violation since it breaches data privacy.
I wish there was a checkbox that controlled this. 5% of the time I need the privacy, 95% of the time I'm tired of correcting the AI in the same way that I corrected it yesterday and I would happily take a little extra time out of my day to teach them to stop repeatedly making the same mistakes.
> Before proceeding let me ask a simple question: Has there ever been a major innovation that helped society, but only 8% of the public would pay for it?
In my European country you have to pay a toll to use a highway. Most people opt to use them, instead of taking the old 2-lane road that existed before the highway and is still free.
It's also heavily subsidized in products like Cursor and Windsurf. In fact, these tools are literally marketing vehicles for the LLMs if you do the math and look at who the investors are.
This stuff costs so much, they need mass adoption. ASAP. I didn't think about it before, but I wonder how quickly they need the adoption.
Companies didn't ask your opinion when they offshore manufacturing to Asia. They didn't ask your opinion when they offshore support to call centers in Asia. Companies don't ask your opinion, they do what they think is best for their financial interest, and that is how capitalism works.
Once upon a time, not too long ago, there was someone who would bag your groceries, and someone who would clean your window at the gas station. Now you do self-checkout. Has anyone asked for this? Your quality of life is worse, the companies are automating away humanity into something they think is more profitable for them.
In a society where you don't have government protection for such companies, there would be other companies who provide a better service whose competition would win. But when you have a fat corrupt government, lobbying makes sense, and crony-capitalism births monopolies which cannot have any competition. Then they do whatever they want to you and society at large, and they don't owe you, you owe them. Your tax dollars sponsor all of this even more than your direct payments do.
I noticed that some of his choices contributed to his problem. I haven't been forced into accepting AI (so far) while I've been using duckduckgo for search, libreoffice, protonmail, and linux.
even ddg has integrated AI now and while it can be disabled, the privacy aspect seems to mean that ddg regularily forgets my settings and re-enables the ai features.
maybe i'm doing something wrong here, but even ddg is annoying me with this.
IPv6 adoption is actually limited by network effect and infrastructure transition costs, not lack of end-user benefits - unlike AI, which faces a value perception problem.
Huh? I’ve been programming for 20 years now and LLMs/GenAI have replaced search and StackOverflow for me - I’d say that means they are pretty good! They are not perfect, not even close, but they are excellent when used as an assistant and when you know the result you’re expecting and can spot its obvious errors.
The top of the list has got to be that one of their testimonials presented to investors is from "DrDeflowerMe". It's also interesting to me because they list financials which position them as unbelievably tiny: 6,215 subscribing accounts, 400 average new accounts per month, which to me sounds like they have a lot of churn.
I'm in my third year of subscribing and I'm actively looking for a replacement. This "Start Engine" investment makes me even more confident that's the right decision. Over the years I've paid nearly $200/year for this and watched them fail to deliver basic functionality. They just don't have the team to deliver AI tooling. For example: 2 years ago I spoke with support about the screen that shows you your credit card numbers being nearly unreadable (very light grey numbers on a white background), which still isn't fixed. Around a year ago a bunch of my auto transfers disappeared, causing me hundreds of dollars in late fees. I contacted support and they eventually "recovered" all the missing auto-transfers, but it ended up with some of them doubled up, and support stopped responding when I asked them to fix that.
I question if they'll be able to implement the changes they want, let alone be able to support those features if they do.
I feel an urge to build personal local AI bots that would be personal spam filters. AI filtering AI, fight fire with fire. Mostly because the world OP wants is never coming back. Everything will be AI and it's everywhere.
I also feel an urge to build spaces in the internet just for humans, with some 'turrets' to protect against AI invasion and exploitation. I just don't know what content would be shared in those spaces because AI is already everywhere in content production.
It's annoying having AI features force fed I imagine but it's come about due to many of the public liking some AI - apparently ChatGPT now has 800 million weekly users (https://www.digitalinformationworld.com/2025/05/chatgpt-stat...) and then competing companies think they should try to keep up.
I say I imagine it's annoying because I've yet to actually be annoyed much but I get the idea. I actually quite like the Google AI bit - you can always not read it if you don't want to. AI generated content on youtube is a bit of a mixed bag - it tends to be kinda bad but you can click stop and play another video. My office 2019 is gloriously out of date and does that stuff I want without the recent nonsense.
I hate the Google AI Overview. More of my knowledge-seeking searches than not are things that have a consequential, singular correct answer. It's hard to break the habit of reading the search AI response first, it feeling not quite right, remembering that I can't actually trust it, then skipping down to pull up a page with the actual answer. Involuntary injection of needless confusion and mental effort with every query. If I wanted a vibe-answer, I'd ask ChatGPT with my plus subscription instead of Google, because at least then I get a proper model instead of whatever junk is cheap enough for Google to auto-run on every query without a subscription.
And of course there's no way to disable it without also losing calculator, unit conversions, and other useful functionality.
In two months they've doubled MAUs? Without an explanation of that specific outcome I don't believe it.
Also:
> As per SimilarWeb data 61.05% of ChatGPT's traffic comes from YouTube, which means from all the social media platforms YouTube viewers are the largest referral source of its user base,
> They wanted it. They paid for it. They enjoyed it.
The counter example is open-source software.
If we talk about popular packages:
- people want it
- people enjoy it
- people do not pay for that
But force-feeding with strict licenses like Ultralytics does works. Yes, it is force-feeding, but noone wants to pay the price, unless there is no other choice.
I mostly agree with TFA, with one glaring exception: The quality of Google search results has regressed so badly in the past years (played by SEO experts), that AI was actually a welcome improvement.
People don't know how to search, that's it. Even the HN population.
Every time this gets posted, I ask for one example of thing you tried to find and what keywords you used. So I'm giving you the same offer, give me for one thing you couldn't find easily on Google and the keywords you used, and I'll show you Google search is just fine.
>A few months ago, I needed to send an email. But when I opened Microsoft Outlook, something had changed.
I cannot take OP seriously when the post started like so.
If you are using Microsoft services and products in 2025, well, it serves you right.
Big companies can force Microsoft, Google and alike to don't use companies data for AI training, small companies have no chance.
Everything nowadays is cloud based, all you need is internet and a browser.
But nope, people and companies still using Windows, spending millions with AV software that they wouldn't have to if a decent Linux distro was being used instead.
By decent I mean user friendly such as Linux Mint or even worse Ubuntu (Ubuntu lost its way years ago, still a solid option for basic users, not for advanced users)
But that's exactly the problem with proprietary software. It's not force-feeding you anything, it's working exactly as intended.
Software is loyal to the owner. If you don't own your software, software won't be loyal to you. It can be convenient for you, but as time passes and interest changes, if you don't own software it can turn against you. And you shouldn't blame Microsoft or it's utilities. It doesn't owe you anything just because you put effort in it and invested time in it. It'll work according to who it's loyal to, who owns it.
If it bothers you, choose software you can own. If you can't choose software you own now, change your life so you can in the future. And if you just can't, you have to accept the consequences.
They force more and more AI into everything so that AI can continue to learn.
Also the requests aren't answered locally. Your data is forwarded to the AI's DC, processed and the answer returned. You can be absolutely certain that they keep a copy of your data.
I am moderately hyped for AI, but I treat these corporate intrusions into my workflows the same as ads or age verification, pointing uBlock to elements which are easy to point-and-click block, and making quick browser plugins and Tampermonkey scripts for things like Google to intercept my web searches and redirect them from the All/AI search page. -And if I can, it does amuse me to have Gemini write the plugins to block Google ads/inconveniences.
"Any sufficiently advanced AI technology is indistinguishable from bullshit."
- me, a few years ago.
I find the whole situation with regard to AI utterly ridiculous and boring. While those algos might have some interesting applications, they're not as earth-shattering as we are made to believe, and their utility is, to me at least, questionable.
> Any sufficiently advanced AI technology is indistinguishable from bullshit
love this quote !
The whole sales-pitch for AI is predicated on FOMO - from developers being replaced by AI-enabled engineers to countries being left-behind by AI-slop. Like crypto, the idea is to get-big-fast, and become too big to fail. This worked for social-media but I find it hard to believe it can work for AI.
My hope is that: while some of the people can be fooled all the time, all the people cannot be fooled all the time.
I think there’s a difference between the tool that helps you do work better and the service that generates the end result.
People would be less upset if ai is shown to support the person. This also allows that person to curate the output and ignore it if needed before sharing it, so it’s a win/win.
This guy calls himself honest broker but his articles are just expressions of status anxiety. The kind of media the he loves to write about is becoming less relevant and so he lashes out at everything new from AI to TikTok.
"This is how AI gets introduced to the marketplace-by force-feeding the public. And they're doing this for a very good reason."
"Most people won't pay for AI voluntarily-just 8% according to a recent survey. So they need to bundle it with some other essential product."
"You never get to decide."
Silicon Valley and Redmond have been operating this way for quite some time.
They have been effectively removing choice long before this "AI" push. Often accomplished through "defaults".
This "AI" nonsense may be the most bold example.
"But if AI is bundled into existing businesses, Silicon Valley CEOs can pretend that AI is a moneymaker, even if the public is lukewarm or hostile."
"The AI business model would collapse overnight if they needed consumer opt-in. Just pass that law, and see how quickly the bots disappear. "
"You don't get to choose. You're never asked. It just shows up. Now you have to deal with it."
"If they gave people a choice, they would reject this tyranny masquerading as innovation."
"The AI business model would collapse overnight if they needed consumer opt-in."
We never get to find out what would happen.
One comment I would like to add here.
By removing meaningful choice and creating fabricated "demand" these so-called "tech" companies (unnecessary intermediaries) when faced with antitrust allegations then try to argue something like, "Everyone is using it therefore everyone wants it." And, "This shows everyone prefers us over the alternatives."
"Frank Zappa offers a possible mission statement for Microsoft back in 1976, a few months after the company is founded."
The weird thing about AI is that it doesn't learn over time but just in context. It doesn't get better the way a 12 year old learning to play the saxophone gets better.
But using it heavily has a corollary effect: engineers learn less as a result of their dependence on it.
Less learning all around equals enshittification. Really not looking forward to this.
the title can be shortened to "force feeding an unwilling public" which is a fairly reasonable description of our current econimic system.
we went from "supply and demand", to "we can supply demand"(the heydays of hype and advertising), to "surprise!, like it or lump it"
Just a quick quibble…the subtitle of the article calls this problem tyranny.
Tyranny is a real thing which exists in the world and is not exemplified by “product manager adding text expansion to word processor.”
The natural state of capitalism is trying things which get voted on by money. It’s always subject to boom-bust cycles and we are in a big boom. This will eventually correct itself once the public makes its position clear and the features which truly suck will get fixed or removed.
I agree copilot for answering emails is negative value.
But I find Google AI search results are very useful, can't see how they will monetise this, but can't complain for now.
I honestly can’t think of reasons to use AI. At work I have to give myself reminders to show my bosses that I used the internal ai tool so I don’t get in shit.
I don’t see the utility, all I see is slop and constant notifications in google.
You can say skill issue but that’s kind of the point; this was all dropped on me by people who don’t understand it themselves. I didn’t ask or want to built the skills to understand ai. Nor did my bosses: they are just following the latest wave. We are the blind leading the blind.
Like crypto ai will prove to be a dead end mistake that only enabled grifters
One recent thing I did was make cute little illustrations for an internal slide deck. I’m not even taking work away from an artist, there was no universe where I would have paid someone to do this, but now every presentation I give can be much more visually engaging than they would have been previously.
The reason your bosses are being obnoxious about making people use the internal AI tool is to push them into thinking about things like this. Perhaps at your company it’s genuinely not useful, but I’ve seen a lot of people say that who I’m pretty confident are wrong.
It's simply a money grab. You get this feature you don't need or want and hey, we're going to raise your price because of this. See for instance this - priceless - email:
Dear administrator,
We recently added the best of Google AI to Workspace plans to help your teams accomplish more, faster. In addition, we added new, simple to use security insights and controls to help you keep your business data safe.
We also announced updated subscription pricing. Your subscription will be subject to this updated pricing starting July 7, 2025.
We’ve provided additional information below to guide you through this change.
What this means for your organization
New Workspace features
Your updated pricing reflects the many new features now included in your Google Workspace edition. With these changes, you can:
Summarize long email threads, draft replies, and compose professional emails faster and easier with Help me write in Gmail
Write and refine documents with Gemini in Docs
Generate charts and insights with Gemini in Sheets
Automatically capture meeting notes so you can focus on the conversation with Take notes for me in Meet
Get AI assistance with brainstorming, researching, coding, data analysis, and more with Gemini Advanced
Accelerate learning by uploading your docs, PDFs, videos, websites, and more to get instant insights and podcast-style Audio Overviews with NotebookLM Plus
Enhance your organization’s security with security advisor, a new set of insights and tools. Use security advisor for threat defense with app access protection, account security with Gmail Enhanced Safe Browsing, and data protection capabilities
Customize email campaigns in Gmail. Add color schemes, logos, images, and other design elements
Starting as early as July 7, 2025, your Google Workspace Business Plus subscription price will be automatically updated to $22.00* per user, per month with an Annual/Fixed-Term Plan (or $26.40 if you have a monthly Flexible Plan).
The specific date that your subscription price will increase depends on your plan type, number of user licenses, and other factors.
*Prices will be updated in all local payment currencies.
If you have an Annual/Fixed-Term Plan, your subscription will be subject to updated pricing on your next plan renewal starting July 7, 2025. We will provide you with more specific information at least 30 days before updates to your Google Workspace plan pricing are made.
What you need to do
No action is required from you. Features have already rolled out to Google Workspace Business Plus subscriptions, including AI features in many additional languages, and subscription prices will be updated automatically starting July 7, 2025.
We know that data security and compliance are top priorities for business leaders when adopting AI, and we are committed to helping you keep your data safe. You can understand how to effectively utilize generative AI in your organization, and learn how to keep your data confidential and protected.
We’re here to help
If you wish to make changes to your subscription or payment plan, please visit the Admin console. Find which edition and payment plan you have on Google Workspace Admin Help.
Refer to the Help Center for details regarding the AI features and price updates, including updated local currency pricing.
That's such a horrific new-speak way of saying your subscription price has been raised. Just say it! This soft bullshitty choice of words is infuriating.
The major issue with AI technology is the people. The enthusiasts that pretend issues don't exist, the cheap startups trying to sell snake oil.
The AI community treats potential customers as invaders. If you report a problem, the entire thing turns on you trying to convince you that you're wrong, or that you reported a problem because you hate the technology.
It's pathetic. It looks like a viper's nest. Who would want to do business with such people?
Good point. Also, the fact that I’m adamant that one cannot fly a helicopter to the moon doesn’t mean that I think helicopters are useless. That said, if I’m inundated everyday with people insisting that one CAN fly a helicopter to the moon or that that capability is just around the corner, I might get so fed up that i say F it, I don’t want to hear another F’ing word about helicopters even though I know that helicopters have utility.
It’s not spot on. Buying and using all of these products is a choice.
The last is especially egregious. I don’t want poorly-written (by my standards) books cluttering up bookstores, but all my life I’ve walked into bookstores and found my favorite genres have lots of books I’m not interested in. Do I have some kind of right to have stores only stock products that I want?
The whole thing is just so damn entitled. If you don’t like something, don’t buy it. If you find the presence of some products offensive in a marketplace, don’t shop there. Spotify is not a human right.
Why do people who attempt to critique AI lean on the "no one wants this, everyone hates this" instead of just making their point. If your arguments are strong you don't need to wrap them in false statistics.
"There ought to be a law" is why we have nanny-state government. I imagine that is why there have been "no spitting" and "no chewing gum" laws on the books.
People going to lord it over others in the pursuit of what they think is proper.
Society is over-rated, once it gets beyond a certain size.
Along the same lines, I am currently starting my morning with blocking ranges of IP addresses to get Internet service back, due to someone's current desire to SYN Flood my webserver, which being hosted in my office, affects my office Internet.
It may soon come to a point where I choose to block all IP addresses except a few to get work done.
I’ve observed the opposite—not enough people are leveraging AI, especially in government institutions. Critical time and taxpayer money are wasted on tasks that could be automated with state-of-the-art models. Instead of embracing efficiency, these organizations perpetuate inefficiency at public expense.
The same issue plagues many private companies. I’ve seen employees spend days drafting documents that a free tool like Mistral could generate in seconds, leaving them 30-60 minutes to review and refine. There's a lot of resistance from the public. They're probably thinking that their job will be saved if they refuse to adopt AI tools.
> I’ve seen employees spend days drafting documents that a free tool like Mistral could generate in seconds, leaving them 30-60 minutes to review and refine.
What I have seen is employees spending days asking the model again and again to actually generate the document they need, and then submit it without reviewing it, only for a problem to explode a month later because no one noticed a glaring absurdity in the middle of the AI-polished garbage.
Yeah, no you cant see that yet. What you see is comparison between own super optimistic imagined idea of useful AI with either reality or even knee jerk "goverment is stupid and wastful becauce Musk said so".
The thing is, though, that time wasn’t wasted. It was spent fully understanding what they were actually trying to say, the context, the connotations of various different phrasings etc. It was spent mapping the territory. Throwing your initial, unexamined description into a prompt might generate something that looks enough like the email they’d have written, but it’s not been thought through. If the 10 minutes’ thought spent on the prompt was sufficient, the final email wouldn’t be taking days to do by hand.
Some comments were deferred for faster rendering.
dang|7 months ago
It's because the integrations with existing products are arbitrary and poorly thought through, the same way that software imposed by executive fiat in BigCo offices for trend-chasing reasons has always been.
petekoomen made this point recently in a creative way: AI Horseless Carriages - https://news.ycombinator.com/item?id=43773813 - April 2025 (478 comments)
pera|7 months ago
Having all these popups announcing new integrations with AI chatbots showing up while you are just trying to do your work is pretty annoying. It feels like this time we are fighting an army of Clippies.
CuriouslyC|7 months ago
I don't want shitty bolt-ons, I want to be able to give chatgtp/claude/gemini frontier models the ability to access my application data and make api calls for me to remotely drive tools.
827a|7 months ago
Raise subscription prices, don’t deliver more value, bundle everything together so you can’t say no. I canceled a small Workspace org I use for my consulting business after the price hike last year; also migrating away everything we had on GCP. Google would have to pay me to do business with them again.
skeptrune|7 months ago
dwayne_dibley|7 months ago
fehu22|7 months ago
1vuio0pswjnm7|7 months ago
ToucanLoucan|7 months ago
It's just rent-seeking. Nobody wants to actually build products for market anymore; it's a long process with a lot of risk behind it, and there's a chance you won't make shit for actual profit. If however you can create a "do anything" product that can be integrated with huge software suites, you can make a LOT of money and take a lot of mind-share without really lifting a finger. That's been my read on the "AI Industry" for a long time.
And to be clear, the integration part is the only part they give a shit about. Arguably especially for AI, since operating the product is so expensive compared to the vast majority of startups trying to scale. Serving JPEGs was never nearly as expensive for Instagram as responding to ChatGPT inquiries is for OpenAI, so they have every reason to diminish the number coming their way. Being the hip new tech that every CEO needs to ram into their product, irrespective of it does... well, anything useful, while also being so frustrating or obtuse for users to actually want to use, is arguably an incredibly good needle to thread, if they can manage it.
And the best part is, if OpenAI's products do actually do what they say on the tin, there's a good chance many lower rungs of employment will be replaced with their stupid chatbots, again irrespective of whether or not they actually do the job. Businesses run on "good enough." So it's great, if OpenAI fails, we get tons of useless tech injected into software products already creaking under the weight of so much bullhockety, and if they succeed, huge swaths of employees will be let go from entry level jobs, flooding the market, cratering the salary of entire categories of professions, and you'll never be able to get a fucking problem resolved with a startup company again. Not that you probably could anyway but it'll be even more frustrating.
And either way, all the people responsible for making all your technology worse every day will continue to get richer.
spacemadness|7 months ago
recursive|7 months ago
AppleBananaPie|7 months ago
Everyone nodding along, yup yup this all makes sense
mouse_|7 months ago
echelon|7 months ago
This is the next great upset. Everyone's hair is on fire and it's anybody's ball game.
I wouldn't even count the hyperscalers as certain to emerge victorious. The unit economics of everything and how things are bought and sold might change.
We might have agents that scrub ads from everything and keep our inboxes clean. We might find content of all forms valued at zero, and have no need for social networking and search as they exist today.
And for better or worse, there might be zero moat around any of it.
einrealist|7 months ago
Also, everyone who requires these sophisticated models now needs to send everything to the gatekeepers. You could argue that we already send a lot of data to public clouds. However, there was no economically viable way for cloud vendors to read, interpret, and reuse my data — my intellectual property and private information. With more and more companies forcing AI capabilities on us, it's often unclear who runs those models and who receives the data and what is really happening to the data.
This aggregation of power and centralisation of data worries me as much as the shortcomings of LLMs. The technology is still not accurate enough. But we want it to be accurate because we are lazy. So I fear that we will end up with many things of diminished quality in favour of cheaper operating costs — time will tell.
kgeist|7 months ago
PeterStuer|7 months ago
For the consumer side, you'll be the product, not the one paying in money just like before.
For the creator side, it will depend on how competition in the market sustains. Expect major regulatory capture efforts to eliminate all but a very few 'sanctioned' providers in the name of 'safety'. If only 2 or 3 remain, it might get realy expensive.
ben_w|7 months ago
The scale issue isn't the LLM provider, it's the power grid. Worldwide, 250 W/capita. Your body is 100 W and you have a duty cycle of 25% thanks to the 8 hour work day and having weekends, so in practice some hypothetical AI trying to replace everyone in their workplaces today would need to be more energy efficient than the human body.
Even with the extraordinarily rapid roll-out of PV, I don't expect this to be able to be one-for-one replacement for all human workers before 2032, even if the best SOTA model was good enough to do so (and they're not, they've still got too many weak spots for that).
This also applies to open-weights models, which are already good enough to be useful even when SOTA private models are better.
> You could argue that we already send a lot of data to public clouds. However, there was no economically viable way for cloud vendors to read, interpret, and reuse my data — my intellectual property and private information. With more and more companies forcing AI capabilities on us, it's often unclear who runs those models and who receives the data and what is really happening to the data.
I dispute that it was not already a problem, due to the GDPR consent popups often asking to share my browsing behaviour with more "trusted partners" than there were pupils in my secondary school.
But I agree that the aggregation of power and centralisation of data is a pertinent risk.
mrob|7 months ago
I don't think this is true. A lot of people had no interest until smartphones arrived. Doing anything on a smartphone is a miserable experience compared to using a desktop computer, but it's more convenient. "Worse but more convenient" is the same sales pitch as for AI, so I can only assume that AI will be accepted by the masses too.
sagacity|7 months ago
blablabla123|7 months ago
wussboy|7 months ago
danaris|7 months ago
It's bullshit.
I mean, sure: there were people who hated the Internet. There still are! They were very clearly a minority, and almost exclusively older people who didn't like change. Most of them were also unhappy about personal computers in general.
But the Internet caught on very fast, and was very, very popular. It was completely obvious how positive it was, and people were making businesses based on it left and right that didn't rely on grifting, artificial scarcity, or convincing people that replacing their own critical thinking skills with a glorified autocomplete engine was the solution to all their problems. (Yes, there were also plenty of scams and unsuccessful businesses. They did not in any way outweigh the legitimate successes.)
By contrast, generative AI, while it has a contingent of supporters that range from reasonable to rabid, is broadly disliked by the public. And a huge reason for that is how much it is being pushed on them against their will, replacing human interaction with companies and attempting to replace other things like search.
relaxing|7 months ago
Obviously saying “everyone” is hyperbole. There were luddites and skeptics about it just like with electricity and telephones. Nevertheless the dotcom boom is what every new industry hopes to be.
unknown|7 months ago
[deleted]
capyba|7 months ago
There are open source or affordable, paid alternatives for everything the author mentioned. However, there are many places where you must use these things due to social pressure, lock-in with a service provider (health insurance co, perhaps), and yes unfortunately I see some of these things as soon or now unavoidable.
Another commenter mentioned that ChatGPT is one of the most popular websites on the internet and therefore users clearly do want this. I can easily think of two points that refute that: 1. The internet has shown us time and time again that popularity doesn’t indicate willingness to pay (which paid social networks had strong popularity…?) 2. There are many extremely popular websites that users wouldn’t want to be woven throughout the rest of their personal and professional digital lives
bsenftner|7 months ago
einrealist|7 months ago
LLMs are not very predictable. And that's not just true for the output. Each change to the model impacts how it parses and computes the input. For someone claiming to be a "Prompt Engineer", this cannot work. There are so many variables that are simply unknown to the casual user: training methods, the training set, biases, ...
If I get the feeling I am creating good prompts for Gemini 2.5 Pro, the next version might render those prompts useless. And that might get even worse with dynamic, "self-improving" models.
So when we talk about "Vibe coding", aren't we just doing "Vibe prompting", too?
20k|7 months ago
If I have to do extensive subtle prompt engineering and use a lot of mental effort to solve my problem... I'll just solve the problem instead. Programming is a mental discipline - I don't need help typing, and if using an AI means putting in more brainpower, its fundamentally failed at improving my ability to engineer software
add-sub-mul-div|7 months ago
jfengel|7 months ago
Please don't. I am going to read this email. Adding more text just makes me read more.
I am sure there's a common use case of people who get a ton of faintly important email from colleagues. But this is my personal account and the only people contacting me are friends. (Everyone else should not be summarized; they should be trashed. And to be fair I am very grateful for Gmail's excellent spam filtering.)
shdon|7 months ago
daishi55|7 months ago
kemotep|7 months ago
If you answer no, does that make you an unwilling user of social media? It’s the most visited sites in the world after all, how could randomly injecting it into your GPS navigation system be a poor fit?
satyrun|7 months ago
All the anti-AI people I know are in their 30s. I think there are many in this age group that got use to nothing changing and are wishing it to stay that way.
archargelod|7 months ago
esperent|7 months ago
I'm not unwilling to use AI in places where I choose. But let's not pretend that just because people do use it in one place, they are willing to have it shoved upon them in every other place.
nonplus|7 months ago
I just don't participate in discussions about Facebook marketplace links friends share, or Instagram reels my D&D groups post.
So in a sense I agree with you, forcing AI into products is similar to forcing advertising into products.
anon7000|7 months ago
nitwit005|7 months ago
Which is to say, there's already a history of AI features failing at a number of these larger companies. The public truly is frequently rejecting them.
croes|7 months ago
I wonder how many uses of Chatgpt and such are malicious.
svantana|7 months ago
bgwalter|7 months ago
It is like Clippy, which no one wanted. Hopefully, like Clippy, "AI" will be scrapped at some point.
seydor|7 months ago
PeterStuer|7 months ago
supersparrow|7 months ago
Of course it’s a bubble! Most new tech like this is until it gets to a point where the market is too saturated or has been monopolised.
smitty1e|7 months ago
The thing that really chafes me about this AI, irrespective of whether it is awesome or not, is emitting all of the information to some unknown server. To go with another Zappa reference, AI becomes The Central Scrutinizer[2].
I predict an increasing use of Free Software by discerning people who want to maintain more control of their information.
[1] https://www.youtube.com/watch?v=JPFIkty4Zvk
[2] https://en.wikipedia.org/wiki/Joe%27s_Garage#Lyrical_and_sto...
Workaccount2|7 months ago
It seems here on the ground in non-tech bubble land, people use ChatGPT a ton and lean hard on AI features.
When Google judges the success of bolted on AI, they are looking at how Jane and John General Public use it, not how xleet007 uses it(or doesn't).
There is also the fact that AI is still just being bolted onto things now. The next iteration of this software will be AI native, and the revisions after that will iron out big wrinkles.
When settings menus and ribbon panels are optional because you can just tell the program what to do in plain English, that will be AI integration.
bgwalter|7 months ago
Marsha Blackburn's amendment to remove the "AI legislation moratorium" from the "Big Beautiful Bill" passed the Senate 99-1.
People are getting really fed up with "AI", "crypto" and other scams.
dingnuts|7 months ago
windows2020|7 months ago
blablabla123|7 months ago
In fact I also tried the communication part - outside of Outlook - but people don't like superficial AI polish
dijksterhuis|7 months ago
JoshTriplett|7 months ago
lukaslevert|7 months ago
Gigachad|7 months ago
llm_nerd|7 months ago
Only when I went to cancel[1], suddenly they made me aware that there was a "classic" subscription that was the normal price, without CoPilot. So they basically just upsized everyone to try to force uptake.
[1] - I'm in the AI business and am a user and abuser of AI daily, but I don't need it built directly into every app. I Already have AI subscriptions and local models and solutions.
ozgrakkurt|7 months ago
Recently I tried to cancel notion account of some people in our org and it wouldn’t let me do it easily so just cancelled the whole notion subscription, really wish they would go out of business for doing these kind of things
blindriver|7 months ago
__MatrixMan__|7 months ago
adastra22|7 months ago
Highways.
cosmical65|7 months ago
In my European country you have to pay a toll to use a highway. Most people opt to use them, instead of taking the old 2-lane road that existed before the highway and is still free.
seydor|7 months ago
tom_m|7 months ago
This stuff costs so much, they need mass adoption. ASAP. I didn't think about it before, but I wonder how quickly they need the adoption.
arnaudsm|7 months ago
mat_b|7 months ago
kesor|7 months ago
Once upon a time, not too long ago, there was someone who would bag your groceries, and someone who would clean your window at the gas station. Now you do self-checkout. Has anyone asked for this? Your quality of life is worse, the companies are automating away humanity into something they think is more profitable for them.
In a society where you don't have government protection for such companies, there would be other companies who provide a better service whose competition would win. But when you have a fat corrupt government, lobbying makes sense, and crony-capitalism births monopolies which cannot have any competition. Then they do whatever they want to you and society at large, and they don't owe you, you owe them. Your tax dollars sponsor all of this even more than your direct payments do.
jappgar|7 months ago
While government sponsored monopolies certainly exist, monopolies themselves are a natural outcome of competition.
Deregulation would break some monopolies while encouraging others to grow. The new monopolies may be far worse than the ones we had before.
miohtama|7 months ago
https://www.sciotoanalysis.com/news/2024/7/12/how-much-do-yo...
DangitBobby|7 months ago
ataru|7 months ago
hambes|7 months ago
maybe i'm doing something wrong here, but even ddg is annoying me with this.
daft_pink|7 months ago
It’s like IPV6, if it really was a huge benefit to the end user, we’d have adopted it already.
ethan_smith|7 months ago
immibis|7 months ago
supersparrow|7 months ago
NitpickLawyer|7 months ago
Just from current ARR announcements: 3b+ anthropic, 10b+ oai, whatever google makes, whatever ms makes, yeah people are already paying for it.
linsomniac|7 months ago
The top of the list has got to be that one of their testimonials presented to investors is from "DrDeflowerMe". It's also interesting to me because they list financials which position them as unbelievably tiny: 6,215 subscribing accounts, 400 average new accounts per month, which to me sounds like they have a lot of churn.
I'm in my third year of subscribing and I'm actively looking for a replacement. This "Start Engine" investment makes me even more confident that's the right decision. Over the years I've paid nearly $200/year for this and watched them fail to deliver basic functionality. They just don't have the team to deliver AI tooling. For example: 2 years ago I spoke with support about the screen that shows you your credit card numbers being nearly unreadable (very light grey numbers on a white background), which still isn't fixed. Around a year ago a bunch of my auto transfers disappeared, causing me hundreds of dollars in late fees. I contacted support and they eventually "recovered" all the missing auto-transfers, but it ended up with some of them doubled up, and support stopped responding when I asked them to fix that.
I question if they'll be able to implement the changes they want, let alone be able to support those features if they do.
djrj477dhsnv|7 months ago
jaimefjorge|7 months ago
I also feel an urge to build spaces in the internet just for humans, with some 'turrets' to protect against AI invasion and exploitation. I just don't know what content would be shared in those spaces because AI is already everywhere in content production.
frankzander|7 months ago
tim333|7 months ago
I say I imagine it's annoying because I've yet to actually be annoyed much but I get the idea. I actually quite like the Google AI bit - you can always not read it if you don't want to. AI generated content on youtube is a bit of a mixed bag - it tends to be kinda bad but you can click stop and play another video. My office 2019 is gloriously out of date and does that stuff I want without the recent nonsense.
waswaswas|7 months ago
And of course there's no way to disable it without also losing calculator, unit conversions, and other useful functionality.
timewizard|7 months ago
Also:
> As per SimilarWeb data 61.05% of ChatGPT's traffic comes from YouTube, which means from all the social media platforms YouTube viewers are the largest referral source of its user base,
That's deeply suspect.
ternaus|7 months ago
If we talk about popular packages: - people want it - people enjoy it - people do not pay for that
But force-feeding with strict licenses like Ultralytics does works. Yes, it is force-feeding, but noone wants to pay the price, unless there is no other choice.
m000|7 months ago
tossandthrow|7 months ago
I use Kagi who returns excellent results, also when I need non AI verbatim queries.
Nursie|7 months ago
Badly summarise articles.
Outright invent local attractions that don’t exist.
Gave subtly wrong, misleading advice about employment rights.
All while coming across as confidently authoritative.
iLoveOncall|7 months ago
People don't know how to search, that's it. Even the HN population.
Every time this gets posted, I ask for one example of thing you tried to find and what keywords you used. So I'm giving you the same offer, give me for one thing you couldn't find easily on Google and the keywords you used, and I'll show you Google search is just fine.
h4kunamata|7 months ago
I cannot take OP seriously when the post started like so. If you are using Microsoft services and products in 2025, well, it serves you right.
Big companies can force Microsoft, Google and alike to don't use companies data for AI training, small companies have no chance.
Everything nowadays is cloud based, all you need is internet and a browser. But nope, people and companies still using Windows, spending millions with AV software that they wouldn't have to if a decent Linux distro was being used instead.
By decent I mean user friendly such as Linux Mint or even worse Ubuntu (Ubuntu lost its way years ago, still a solid option for basic users, not for advanced users)
gchamonlive|7 months ago
Software is loyal to the owner. If you don't own your software, software won't be loyal to you. It can be convenient for you, but as time passes and interest changes, if you don't own software it can turn against you. And you shouldn't blame Microsoft or it's utilities. It doesn't owe you anything just because you put effort in it and invested time in it. It'll work according to who it's loyal to, who owns it.
If it bothers you, choose software you can own. If you can't choose software you own now, change your life so you can in the future. And if you just can't, you have to accept the consequences.
Grimeton|7 months ago
Also the requests aren't answered locally. Your data is forwarded to the AI's DC, processed and the answer returned. You can be absolutely certain that they keep a copy of your data.
kldg|7 months ago
ciconia|7 months ago
- me, a few years ago.
I find the whole situation with regard to AI utterly ridiculous and boring. While those algos might have some interesting applications, they're not as earth-shattering as we are made to believe, and their utility is, to me at least, questionable.
bwfan123|7 months ago
love this quote !
The whole sales-pitch for AI is predicated on FOMO - from developers being replaced by AI-enabled engineers to countries being left-behind by AI-slop. Like crypto, the idea is to get-big-fast, and become too big to fail. This worked for social-media but I find it hard to believe it can work for AI.
My hope is that: while some of the people can be fooled all the time, all the people cannot be fooled all the time.
justinclift|7 months ago
As a data point, the "Stop Killing Games" one has passed the needed 1M signatures so is in good shape:
https://www.stopkillinggames.com
isaacremuant|7 months ago
https://petition.parliament.uk/petitions/702074/
pacifika|7 months ago
People would be less upset if ai is shown to support the person. This also allows that person to curate the output and ignore it if needed before sharing it, so it’s a win/win.
But is the big money in revolution?
amadeuspagel|7 months ago
habosa|7 months ago
Some are excited about it. Some are actually making something cool with AI. Very few are both.
1vuio0pswjnm7|7 months ago
"Most people won't pay for AI voluntarily-just 8% according to a recent survey. So they need to bundle it with some other essential product."
"You never get to decide."
Silicon Valley and Redmond have been operating this way for quite some time.
They have been effectively removing choice long before this "AI" push. Often accomplished through "defaults".
This "AI" nonsense may be the most bold example.
"But if AI is bundled into existing businesses, Silicon Valley CEOs can pretend that AI is a moneymaker, even if the public is lukewarm or hostile."
"The AI business model would collapse overnight if they needed consumer opt-in. Just pass that law, and see how quickly the bots disappear. "
"You don't get to choose. You're never asked. It just shows up. Now you have to deal with it."
"If they gave people a choice, they would reject this tyranny masquerading as innovation."
"The AI business model would collapse overnight if they needed consumer opt-in."
We never get to find out what would happen.
One comment I would like to add here.
By removing meaningful choice and creating fabricated "demand" these so-called "tech" companies (unnecessary intermediaries) when faced with antitrust allegations then try to argue something like, "Everyone is using it therefore everyone wants it." And, "This shows everyone prefers us over the alternatives."
"Frank Zappa offers a possible mission statement for Microsoft back in 1976, a few months after the company is founded."
RIP.
cleandreams|7 months ago
But using it heavily has a corollary effect: engineers learn less as a result of their dependence on it.
Less learning all around equals enshittification. Really not looking forward to this.
metalman|7 months ago
iambateman|7 months ago
Tyranny is a real thing which exists in the world and is not exemplified by “product manager adding text expansion to word processor.”
The natural state of capitalism is trying things which get voted on by money. It’s always subject to boom-bust cycles and we are in a big boom. This will eventually correct itself once the public makes its position clear and the features which truly suck will get fixed or removed.
throwawayoldie|7 months ago
That is what the natural state of capitalism _would_ be in a world of honest businesspeople and politicians.
d4rkn0d3z|7 months ago
bithead|7 months ago
garyclarke27|7 months ago
rimbo789|7 months ago
I don’t see the utility, all I see is slop and constant notifications in google.
You can say skill issue but that’s kind of the point; this was all dropped on me by people who don’t understand it themselves. I didn’t ask or want to built the skills to understand ai. Nor did my bosses: they are just following the latest wave. We are the blind leading the blind.
Like crypto ai will prove to be a dead end mistake that only enabled grifters
SpicyLemonZest|7 months ago
The reason your bosses are being obnoxious about making people use the internal AI tool is to push them into thinking about things like this. Perhaps at your company it’s genuinely not useful, but I’ve seen a lot of people say that who I’m pretty confident are wrong.
jacquesm|7 months ago
Dear administrator,
We recently added the best of Google AI to Workspace plans to help your teams accomplish more, faster. In addition, we added new, simple to use security insights and controls to help you keep your business data safe.
We also announced updated subscription pricing. Your subscription will be subject to this updated pricing starting July 7, 2025.
We’ve provided additional information below to guide you through this change. What this means for your organization
New Workspace features
Your updated pricing reflects the many new features now included in your Google Workspace edition. With these changes, you can:
Starting as early as July 7, 2025, your Google Workspace Business Plus subscription price will be automatically updated to $22.00* per user, per month with an Annual/Fixed-Term Plan (or $26.40 if you have a monthly Flexible Plan).The specific date that your subscription price will increase depends on your plan type, number of user licenses, and other factors.
*Prices will be updated in all local payment currencies.
If you have an Annual/Fixed-Term Plan, your subscription will be subject to updated pricing on your next plan renewal starting July 7, 2025. We will provide you with more specific information at least 30 days before updates to your Google Workspace plan pricing are made. What you need to do
No action is required from you. Features have already rolled out to Google Workspace Business Plus subscriptions, including AI features in many additional languages, and subscription prices will be updated automatically starting July 7, 2025.
We know that data security and compliance are top priorities for business leaders when adopting AI, and we are committed to helping you keep your data safe. You can understand how to effectively utilize generative AI in your organization, and learn how to keep your data confidential and protected. We’re here to help
If you wish to make changes to your subscription or payment plan, please visit the Admin console. Find which edition and payment plan you have on Google Workspace Admin Help.
Refer to the Help Center for details regarding the AI features and price updates, including updated local currency pricing.
encom|7 months ago
That's such a horrific new-speak way of saying your subscription price has been raised. Just say it! This soft bullshitty choice of words is infuriating.
alganet|7 months ago
The AI community treats potential customers as invaders. If you report a problem, the entire thing turns on you trying to convince you that you're wrong, or that you reported a problem because you hate the technology.
It's pathetic. It looks like a viper's nest. Who would want to do business with such people?
LgLasagnaModel|7 months ago
DrillShopper|7 months ago
Any minor comment or constructive criticism is FUD and met with "oh better go destroy a loom there, Ned Ludd".
It's pathetic and I grow tired of it.
123yawaworht456|7 months ago
Bluestein|7 months ago
cs702|7 months ago
"I don’t want AI customer service—but I don’t get a choice.
I don’t want AI responses to my Google searches—but I don’t get a choice.
I don’t want AI integrated into my software—but I don’t get a choice.
I don’t want AI sending me emails—but I don’t get a choice.
I don’t want AI music on Spotify—but I don’t get a choice.
I don’t want AI books on Amazon—but I don’t get a choice."
brookst|7 months ago
The last is especially egregious. I don’t want poorly-written (by my standards) books cluttering up bookstores, but all my life I’ve walked into bookstores and found my favorite genres have lots of books I’m not interested in. Do I have some kind of right to have stores only stock products that I want?
The whole thing is just so damn entitled. If you don’t like something, don’t buy it. If you find the presence of some products offensive in a marketplace, don’t shop there. Spotify is not a human right.
amelius|7 months ago
unknown|7 months ago
[deleted]
jheriko|7 months ago
[deleted]
varelse|7 months ago
[deleted]
oliveranderson|7 months ago
[deleted]
iluvfossilfuels|7 months ago
[deleted]
kotaKat|7 months ago
I said no. Respect my preferences.
jacquesm|7 months ago
bethekidyouwant|7 months ago
drudolph914|7 months ago
goatlover|7 months ago
Disposal8433|7 months ago
doug_durham|7 months ago
otabdeveloper4|7 months ago
Is not false statistics. "Nobody wanted or asked for this" is literally true.
raintrees|7 months ago
People going to lord it over others in the pursuit of what they think is proper.
Society is over-rated, once it gets beyond a certain size.
Along the same lines, I am currently starting my morning with blocking ranges of IP addresses to get Internet service back, due to someone's current desire to SYN Flood my webserver, which being hosted in my office, affects my office Internet.
It may soon come to a point where I choose to block all IP addresses except a few to get work done.
People gonna be people.
sigh.
jasonsb|7 months ago
The same issue plagues many private companies. I’ve seen employees spend days drafting documents that a free tool like Mistral could generate in seconds, leaving them 30-60 minutes to review and refine. There's a lot of resistance from the public. They're probably thinking that their job will be saved if they refuse to adopt AI tools.
sasaf5|7 months ago
What I have seen is employees spending days asking the model again and again to actually generate the document they need, and then submit it without reviewing it, only for a problem to explode a month later because no one noticed a glaring absurdity in the middle of the AI-polished garbage.
AI is the worst kind of liar: a bullshitter.
watwut|7 months ago
taneq|7 months ago
unknown|7 months ago
[deleted]
multjoy|7 months ago
lukaslevert|7 months ago