I'd love to live in your world for a bit... I can't imagine any future where having AI in your browser is a net positive for any user. It sounds like an absolute dystopian privacy and security nightmare.
Imagine you have an AI button. When you click it, the locally running LLM gets a copy of the web site in the context window, and you get to ask it a prompt, e.g. "summarize this".
Imagine the browser asks you at some point, whether you want to hear about new features. The buttons offered to you are "FUCK OFF AND NEVER, EVER BOTHER ME AGAIN", "Please show me a summary once a month", "Show timely, non-modal notifications at appropriate times".
Imagine you choose the second option, and at some point, it offers you a feature described as follows: "On search engine result pages and social media sites, use a local LLM to identify headlines, classify them as clickbait-or-not, and for clickbait headlines, automatically fetch the article in an incognito session, and add a small overlay with a non-clickbait version of the title". Would you enable it?
Do we have to re-tread 3 years of big tech overreach, scams, user hostility in nearly every common program, questionable utility backed by hype more than results, and the way it's propping up the US economy's otherwise stagnant/weakening GDP?
I don't really have much new to add here. I've hated this "launch in alpha" mentality for nearly a decade. Calling 2022 "alpha" is already a huge stretch.
>When you click it, the locally running LLM gets a copy of the web site in the context window, and you get to ask it a prompt, e.g. "summarize this".
Why is this valuable? I spent my entire childhood reading, and my college years learning to research and navigate technical documents. I don't value auto-summarization. Proper writing should do this in its opening paragraphs.
>Imagine the browser asks you at some point, whether you want to hear about new features. The buttons offered to you are "FUCK OFF AND NEVER, EVER BOTHER ME AGAIN", "Please show me a summary once a month", "Show timely, non-modal notifications at appropriate times"
Yes, this is my "good enough" compromise, and one most applications fail to offer. Let's hope for the best.
>Imagine you choose the second option, and at some point, it offers you a feature described as follows: "On search engine result pages and social media sites, use a local LLM to identify headlines, classify them as clickbait-or-not, and for clickbait headlines, automatically fetch the article in an incognito session, and add a small overlay with a non-clickbait version of the title". Would you enable it?
No, probably not. I don't trust the powers behind such tools to identify what is "clickbait" for me. Grok shows that these are not impartial tools, and news is the last thing I want to outsource sentiment to without a lot of built trust. Meanwhile, trust has only corroded this decade.
> Imagine you have an AI button. When you click it, the locally running LLM
sure, you can imagine Firefox integrating a locally-running LLM if you want.
but meanwhile, in the real world [0]:
> In the next three years, that means investing in AI that reflects the Mozilla Manifesto. It means diversifying revenue beyond search.
if they were going to implement your imagined local LLM, there'd be no reason for them to be talking about "revenue" from LLMs.
but with ChatGPT integrating ads, they absolutely can get revenue by directing users there, in the same way they get money from Google for putting Google's ads into Firefox users' eyeballs.
that's ultimately all this is. they're adding more ads to Firefox.

0: https://blog.mozilla.org/en/mozilla/leadership/mozillas-next...
> When you click it, the locally running LLM gets a copy of the web site in the context window, and you get to ask it a prompt, e.g. "summarize this".
I'm also now imagining my GPU whirring into life with the accompanying sound of a jet plane preparing for takeoff, as my battery visibly starts draining.

Local LLMs are a pipe dream: the technology fundamentally requires far too much computation for anything resembling true intelligence to make sense on current consumer hardware.
>Imagine you have an AI button. When you click it, the locally running LLM gets a copy of the web site in the context window, and you get to ask it a prompt, e.g. "summarize this".
but.. why? I can read the website myself. That's why I'm on the website.
That last one sounds like a lot of churn and resources for little result. You're not really making these sound compelling compared to just blocking clickbait sites with a normal extension. And it could also be an extension users install and configure - why a pop-up offering it to me, and why built into the browser that directly?
For any mildly useful AI feature, there are hundreds of entirely dangerous ones. Either way I don't want the browser to have any AI features integrated, just like I don't want the OS to have them.
Especially since we know very well that they won't be locally running LLMs, everyone's plan is to siphon your data to their "cloud hybrid AI" to feed into the surveillance models (for ad personalization, and for selling to scammers, law enforcement and anyone else).
I'd prefer entirely separate, fully controlled and firewalled solutions for any useful LLM scenarios.
That pretty much sums up the problem: an "AI" button is about as useful to me as a "do stuff" button, or one of those red "that was easy" buttons Staples sells. Google Translate has offered machine translation for nearly 20 years that is more or less adequate for understanding text written in a language I don't read. Fine, add a button for that. Mediocre page summaries? Those can live in some submenu. "Agentic" things like booking flights for an upcoming trip? I would never trust an "AI" button to do that.
Machine learning can be useful for well-defined, low-consequence tasks. If you think an LLM is a robot butler, you're fundamentally misunderstanding what you're dealing with.
> The buttons offered to you are "FUCK OFF AND NEVER, EVER BOTHER ME AGAIN"
I've already hit that option before reading the other ones.
> "On search engine result pages and social media sites, use a local LLM to identify headlines, classify them as clickbait-or-not, and for clickbait headlines, automatically fetch the article in an incognito session, and add a small overlay with a non-clickbait version of the title"
Why would you bother fetching the clickbait at all? It's spam.
The main transformation I want out of a browser, the absolutely critical one, is the removal of advertising. I concede that AI might be decent at removing ads and all the overlay clutter that makes news sites unreadable; does anyone have the demo of "AI readability mode"? Crucially I do not want it changing any non-ad text found on the page.
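For what it's worth, the non-AI core of an "ad and overlay stripper" is already expressible with stdlib tooling. A crude sketch — the keyword list is a stand-in for whatever classifier, learned or not, you actually trust, and crucially it never touches the non-ad text:

```python
import re
from html.parser import HTMLParser

# Drop any subtree whose class attribute contains an ad-ish keyword;
# re-emit everything else verbatim. The keyword list is illustrative.
AD_HINTS = {"ad", "ads", "sponsor", "sponsored", "overlay", "paywall", "newsletter"}
VOID = {"img", "br", "hr", "input", "meta", "link", "source", "area", "col", "embed", "track", "wbr"}

def _is_clutter(attrs):
    # Token-split the class attribute so "ad-overlay" matches but "header" doesn't.
    classes = dict(attrs).get("class") or ""
    return any(tok in AD_HINTS for tok in re.split(r"[^a-z]+", classes.lower()))

class ClutterStripper(HTMLParser):
    def __init__(self):
        super().__init__()
        self.out = []          # emitted HTML fragments
        self.skip_depth = 0    # >0 while inside a suppressed subtree

    def handle_starttag(self, tag, attrs):
        if self.skip_depth:
            if tag not in VOID:            # void tags have no end tag
                self.skip_depth += 1
        elif _is_clutter(attrs):
            if tag not in VOID:
                self.skip_depth = 1
        else:
            self.out.append(self.get_starttag_text())

    def handle_startendtag(self, tag, attrs):
        if not self.skip_depth and not _is_clutter(attrs):
            self.out.append(self.get_starttag_text())

    def handle_endtag(self, tag):
        if self.skip_depth:
            self.skip_depth -= 1
        else:
            self.out.append(f"</{tag}>")

    def handle_data(self, data):
        if not self.skip_depth:
            self.out.append(data)
```

Everything that isn't flagged passes through untouched, which is exactly the property you'd want audited before trusting any fancier classifier.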
> Imagine you have an AI button. When you click it, the locally running LLM gets a copy of the web site in the context window, and you get to ask it a prompt, e.g. "summarize this".
I like Firefox and don't think it's about to collapse like many users here do, but I have already unchecked "Recommend features as you browse" and "Recommend extensions as you browse" along with setting the welcome page for updates to about:blank.
Ideally the user interface for any tool I use should never change unless I actively prompt it to change, and the only notifications I should get would be from my friends and family contacting me or calendars/alarms that I set myself.
Most users are entirely ignorant of privacy and security and will make choices without considering them. I don’t say that to excuse it, but it’s absolutely the reality.
I'd pay a monthly subscription fee for this. All the service would need to do to get my money is guess which words that already exist on the page I will be interested in and show me those words in black-and-white type (in a face and a size chosen by me, not the owner of the web site) free of any CSS, styling or "innovative" manner of presentation.
Specifically, the AI does not generate text for me to read. All it does is decide which parts of the text that already exists on the page to show me. (It is allowed to interact with the web page to get past any modal windows or gates.)
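That constraint — select, never generate — is trivially enforceable in code: whatever the model proposes is checked verbatim against the page, and anything not found there is discarded. A minimal sketch, with illustrative names:

```python
def extractive_only(page_text: str, selector) -> list[str]:
    """`selector` stands in for the AI; it proposes passages to show.
    Only passages that appear verbatim in the page survive."""
    proposed = selector(page_text)
    return [p for p in proposed if p in page_text]
```

The selector can be as fancy as you like; the filter guarantees the reader only ever sees words that were already on the page.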
haha, what if I told you that the currently existing, shipping product, "ChatGPT / Gemini uses a browser for you" will have more users than Firefox in two years? I will even bet you that will likely be the case in 2 months.
mcjiggerlog|2 months ago
They basically already have this feature: https://support.mozilla.org/en-US/kb/use-link-previews-firef...