I don't get what they're really complaining about here. First, it's Apple: they didn't introduce a new paid service but one that will be free to iPhone users this fall. And I strongly assume that by then the supported lineup will be the iPhone 15 Pro, iPhone 16, and iPhone 16 Pro. So people can either upgrade to an older Pro model or get the newest, hottest device.
Second: When VW produces a new car nobody complains that the million drivers won’t get an update automatically. And even if VW introduced a new AI parking system or whatever, would that mean everybody got it? I think a server-side solution could work on older devices, but that is rarely how Apple operates. They also specifically split the whole AI presentation from all the other features coming to iOS, macOS, etc., so it's not as if millions of users won't get anything with the next update.
There's always something to complain about. If the AI runs on servers then oh no there's no privacy. If it runs locally then oh no it won't run on my old hardware.
Given Apple's general emphasis on privacy and selling hardware, this seems like the right path for them. Personally I'm all for it.
It's purely about compute power[0]. The A17 Pro has 35 TOPS of Neural Engine performance, the A16 Bionic has 17, and the A15 has 15.8.
The M-series chips have lower Neural Engine performance, but I think that's compensated by their being in iPads, which can draw more power (battery) without impacting people's lives too much.
[0] https://en.wikipedia.org/wiki/Apple_A17
>When VW produces a new car nobody complains that the million drivers won’t get an update automatically.
Terrible comparison, because:
1. Car makers don't release new cars every single year.
2. For most people a car is the 2nd-biggest investment after their house, which is nowhere near a smartphone in cost (and most people get their phone subsidized by their carrier anyway).
I was thinking the other day that it's taken OpenAI quite a bit of time to bring GPT-4o to free users. Even paying users have limits on how much they can use GPT-4 and GPT-4 Turbo.
For Apple to build out a dedicated server farm to accommodate even 50% of all their current users would be extremely challenging. I think they need users to run an LLM locally simply because they wouldn't be able to handle the amount of compute required to serve every request server-side.
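To put rough numbers on that (every figure below is an assumption I made up for illustration, not anything Apple or OpenAI has published):

```python
# Rough, illustrative estimate only: every number here is an assumption,
# not a sourced figure.
active_users = 1.0e9          # assumed iPhones in active use
share_served = 0.5            # the "50%" from the comment above
requests_per_user_day = 20    # assumed AI requests per user per day
tokens_per_request = 500      # assumed prompt + response tokens
gpu_tokens_per_sec = 2000     # assumed aggregate throughput of one server accelerator

tokens_per_day = active_users * share_served * requests_per_user_day * tokens_per_request
gpus_needed = tokens_per_day / (gpu_tokens_per_sec * 86_400)

print(f"{tokens_per_day:.2e} tokens/day -> ~{gpus_needed:,.0f} accelerators running 24/7")
```

Even with fairly conservative assumptions you land in the tens of thousands of dedicated accelerators, before accounting for peak-load headroom.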
I'm very interested to see how local AI will impact the amount of RAM that comes standard with their Macs and iPhones. The 12-year-old MacBook I still have in a cupboard somewhere has 16 GB of RAM. That was the max at the time; today 128 GB is the max while they still offer 8 GB. If the lineup had been spread like that 12 years ago, they would have been offering 1, 2, 4, 8, and 16 GB of memory.
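The ratio argument, spelled out as a toy calculation (nothing more than the comment's own numbers):

```python
# Toy illustration of the ratio argument above.
# 12 years ago: 16 GB max. Today: 128 GB max, 8 GB min.
max_then, max_now, min_now = 16, 128, 8
spread = max_now // min_now               # 16x between top and bottom today
min_then_equivalent = max_then // spread  # the same spread applied back then

tiers = []
t = min_then_equivalent
while t <= max_then:
    tiers.append(t)  # doubling steps, as RAM tiers typically go
    t *= 2
print(tiers)  # the hypothetical 12-years-ago lineup
```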
> For Apple to build out a dedicated server farm to accommodate even 50% of all their current users would be extremely challenging
FYI, there have been rumors that Apple is looking to scale out Mac Pro production for its own server farms to support this. Since the Ultra is normally a super-niche product, this would obviously be something like several orders of magnitude of increase.
Obviously there is the conventional wisdom that Apple doesn't do servers and the server market doesn't do Apple, and they may continue to prefer commodity/off-the-shelf server hardware for the bulk of their iCloud infra, but things may be different given the current bottleneck on AI hardware.
Good, I don’t want hallucinating AI eating my battery.
I’ll upgrade when the latest OS no longer installs on my iPhone 11 and even then I might not get the latest and greatest - better to wait for my carrier to offer a previous-generation iPhone for peanuts and use that to upgrade.
One of the most interesting things about Apple's design here is that by heavily restricting the ways in which their on-device LLMs are used they've massively reduced the scope for hallucination.
It's far less likely to cause problems when limited to tasks like summaries, copy editing and short text rewording.
Hallucination is most likely to cause headline-grabbing mistakes when you use LLMs to answer questions about the world or generate larger volumes of text from a short prompt. Apple effectively outsource that to a clearly branded ChatGPT integration, neatly assigning any blame for hallucinations to OpenAI.
100%. It is likely that they will. The current on-device stuff is obviously going to be gated to the fastest devices for now.
I imagine they will roll out to other languages and devices relatively soon after. This article assumes too much about the future, and seems to be pushing an “Apple wants to sell you more hardware” narrative as if a hardware company wanting sales is somehow scandalous.
Despite that, I don’t think that is what is happening here. I imagine server models will be available across all/most devices not long after they do the initial launch.
Apple specifically avoids building solutions which put them in the position of being able to do lawful intercepts. I am convinced it is an explicit checkbox in their development process now.
If you've built the solution for commercial purposes, you obviously can't claim a real moral objection that would let you refuse a lawful request, whereas "build us a backdoor to unlock this iPhone" or "build us a backdoor to disable Advanced Data Protection/E2EE mode" would be compelled speech.
The things they have complied on are fairly minor, stuff like making AirDrop not work with non-friends for more than 10 minutes at a time.
There are obvious privacy problems with streaming every user interaction to the cloud, which is why they've already avoided this for things like photo tagging. Maybe you could do anonymized inference similar to how Maps works (say, Apple hosts a session for an anonymized user identifier it can't tie to a person), but it's still very intimate information regardless.
On-device really is better… not to mention the power concerns. I just do not get these objections at all, other than whining because it's Apple, and I think the same people would be whining 100x harder (with good justification) if Apple really were streaming every user interaction to the cloud. We'd be hearing about the sham of Apple's privacy marketing while it sends your sexts to the cloud, etc.
Same thing for the whining about Copilot. Yeah, if it's on-device it ultimately is going to be a file on disk somewhere (omg, SQLite, we hate that!!!). The whole disk is protected by BitLocker, and the file is additionally restricted tightly to the SYSTEM user (not just admin: the kernel-privileged user). The file has to be open during normal operation, so "double encrypting" it an additional time and then storing the encryption key inside the PC is security theater, because the file has to be opened anyway; everyone knows this and would immediately call it out in any other circumstance. Yes, if you force your way to the SYSTEM user you can read the file, because the SYSTEM user needs to read it, but at that point the SYSTEM user can also just keylog and screenshot you directly.
The complaints people are raising aren't technically cogent, but they don't need to be, because they're not technically motivated in the first place. People don't want the feature, so they find reasons to complain. And you can 10x that any time Apple is involved.
Wouldn't it be better if it was available and you turned it off if you didn't want to use it?
A huge set of media companies have shifted to using AV1 and these older devices are gonna get hammered on battery. I was planning on upgrading anyway just to get hardware AV1 decode, given YouTube Music/YouTube are among my most frequent apps. Surprised there wasn't more furore when Google mandated that shift.
>A huge set of media companies have shifted to using AV1
Such as? Most YouTube videos I watch are still VP9 at 1080p/1440p, and there's no reason to watch 4K on a phone (you still can, but shorter battery life is your own choice in that case).
> Only the iPhone 15 Pro and Pro Max — out of the 24 models compatible with the new iOS 18 — will be able to run Apple Intelligence.
Prices are above 1200 EUR in Germany. Err, no. I'll stick to ChatGPT and Claude.
---
Perplexity.ai:
The iPhone 15 Pro in Germany costs:
For the 6.1-inch model:
- 128GB storage: €1,299[1][2][3]
- 256GB storage: €1,399[1][2][3]
For the 6.7-inch Pro Max model:
- 256GB storage: €1,499[2][4]
These prices are for an unlocked, SIM-free iPhone purchased directly from retailers like Coolblue, MediaMarkt, and the Apple online store in Germany. The prices include VAT but no carrier contract.[1][2][3][4]
Citations: [1] https://www.coolblue.de/en/mobile-phones/smartphones/apple/a... [2] https://www.mediamarkt.de/de/brand/apple/iphone/iphone-15-pr... [3] https://www.apple.com/de/shop/buy-iphone/iphone-15-pro [4] https://www.apple.com/de/shop/buy-iphone/iphone-15-pro/6,7%2... [5] https://www.apple.com/de/shop/buy-iphone/iphone-15-pro/6,1%2...
But why would anyone buy a phone upfront when you can get it subsidized by the carrier with data and talk? I end up paying the same after 22 months, but I also get more. The last phone I bought upfront was 20 years ago, when WAP was still a thing.
It may "leave out" more than 90% of "current" users, but there's still the upgrade cycle a lot of us are going to be walking right into this fall.
As an iPhone 14 Pro owner who's been stuck in a carrier agreement for the past couple of years, there hasn't been any "incentive" for me to upgrade, and I'm naturally at my upgrade point now. I'll likely be going to a 16 Pro no matter what.
Some very cynical comments on this thread. Maybe, just maybe the older hardware isn't really capable of running these features properly and it's not some evil conspiracy to "force" users into upgrading?
It’s the amount of RAM that matters. It’s supported on the M1 but not on the A14. Both have the same CPU core design and neural engine, including the same number of neural engine cores. The A14, however, has 4 or 6GB RAM while the M1 has at least 8GB.
As the models used can be quite large, this probably means the A14 doesn't have enough headroom for it to give a good UX.
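Rough arithmetic for why 4-6 GB is tight (the model size and overheads below are guesses on my part, e.g. assuming something like a ~3B-parameter model at ~4-bit quantization, not published Apple figures):

```python
# Rough sketch of the RAM headroom argument; all numbers are assumptions.
params = 3e9                    # assumed ~3B-parameter on-device model
bytes_per_param = 0.5           # assumed ~4-bit quantization (0.5 bytes/param)
kv_cache_and_overhead_gb = 0.5  # assumed KV cache, activations, runtime

model_gb = params * bytes_per_param / 2**30
total_gb = model_gb + kv_cache_and_overhead_gb

# os_budget_gb is an assumed reservation for the OS and foreground apps.
for device, ram_gb, os_budget_gb in [("A14 (4 GB)", 4, 3.0), ("M1 (8 GB)", 8, 3.0)]:
    free_gb = ram_gb - os_budget_gb
    verdict = "ok" if free_gb >= total_gb else "too tight"
    print(f"{device}: need ~{total_gb:.1f} GB, have ~{free_gb:.1f} GB free -> {verdict}")
```

Under these assumptions the model plus overhead fits comfortably into an 8 GB device but not into what's left of 4 GB once the OS has its share.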
That's the plan. The money needs to come from somewhere, and Apple has perfected the upgrade game (slowing phones down, new features, ...) now that essentially every phone has been the same for some years. (I have a Mi 11 Ultra and it's the first phone I won't upgrade until it breaks.)