top | item 41946545

zomg | 1 year ago

what's a realistic use case for ai on a mobile phone? i have yet to find myself saying "gee, if only i had ai on my phone, i could do XYZ!"

discuss

1000100_1000101|1 year ago

On iPhone, if I take a picture of a plant or animal, it identifies it for me. It's not 100% accurate by any means, but it's useful enough. I've figured out which seedlings were plants I wanted vs. weeds. I've figured out species of birds I'd photographed with my SLR (i.e., the phone takes a picture of Lightroom editing the image and can identify the bird from that... I'd prefer a way that didn't require photographing my monitor, either by doing it "live" and/or by adding the functionality to the Mac). For people and pets it can find other images that contain the same subject.

When my daughter was studying Chinese, I could use the live-video translation app and see the lesson text translated to English, and see her hand-written answers also translated to English. I could see this being more broadly useful when travelling, along with live translation of spoken words.

HWR_14|1 year ago

While it's true your examples are AI, I believe "AI" in this context means LLM-based AI.

I don't know if LLM-based translation is better than previous translation models.

not_your_vase|1 year ago

Well, I'm still waiting for an AI feature that recognizes my usage patterns, and adapts the system's behavior.

E.g. if it sees that I always reopen an application 2 seconds after the OS kills it in the background, then maybe it shouldn't be killed.
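That kill/reopen heuristic could be as simple as tracking how quickly an app gets reopened after each kill. A toy sketch, assuming hypothetical event hooks; the threshold and function names are invented for illustration:

```python
from collections import deque
from statistics import median

# If the user typically reopens an app within this many seconds of the
# OS killing it, stop killing that app. (Invented threshold.)
KILL_EXEMPT_THRESHOLD = 5.0
HISTORY = 10  # how many kill->reopen intervals to remember per app

reopen_delays: dict[str, deque] = {}

def record_reopen(app: str, delay_seconds: float) -> None:
    """Record how long after a background kill the user reopened the app."""
    reopen_delays.setdefault(app, deque(maxlen=HISTORY)).append(delay_seconds)

def should_exempt_from_kill(app: str) -> bool:
    """Exempt apps the user habitually reopens right after they are killed."""
    delays = reopen_delays.get(app)
    if not delays or len(delays) < 3:  # need some evidence first
        return False
    return median(delays) < KILL_EXEMPT_THRESHOLD

# The case above: an app reopened ~2 s after every kill.
for _ in range(5):
    record_reopen("chat_app", 2.0)
print(should_exempt_from_kill("chat_app"))  # True
```

No model needed for this one; a median over recent intervals already captures the habit.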

Or if I wake up 3 minutes before the alarm would go off, and take a trip to the toilet, maybe it shouldn't blow up the speaker while I'm frantically pulling up my underpants, but recognize that I'm already awake, or at least wait with the alarm until I'm around the phone again.

Or automatic backlight shouldn't go crazy when I walk in the night under the streetlamps, it should recognize that lamps are coming and going, and that backlight adjustment every 5 seconds is silly and annoying.
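The streetlamp problem is basically a smoothing-plus-hysteresis problem: average the sensor readings and only move the backlight when the average drifts outside a deadband. A minimal sketch; the constants are invented for illustration:

```python
class BacklightController:
    """Smooth ambient-light readings so brief spikes (passing streetlamps)
    don't retrigger backlight adjustment."""

    def __init__(self, alpha: float = 0.1, deadband: float = 20.0):
        self.alpha = alpha        # EMA smoothing factor (lower = smoother)
        self.deadband = deadband  # ignore smoothed changes smaller than this (lux)
        self.smoothed = None
        self.backlight_level = None

    def update(self, lux: float) -> float:
        if self.smoothed is None:
            self.smoothed = lux
            self.backlight_level = lux
        else:
            # Exponential moving average of the raw sensor reading.
            self.smoothed = self.alpha * lux + (1 - self.alpha) * self.smoothed
            # Only move the backlight when the smoothed value drifts
            # outside the deadband -- a passing streetlamp won't do it.
            if abs(self.smoothed - self.backlight_level) > self.deadband:
                self.backlight_level = self.smoothed
        return self.backlight_level

ctl = BacklightController()
ctl.update(5.0)  # dark street: baseline
spikes = [80.0 if i % 4 == 0 else 5.0 for i in range(12)]  # streetlamps
levels = {round(ctl.update(lux), 1) for lux in spikes}
print(levels)  # {5.0} -- the spikes never move the backlight
```

Sustained changes (walking indoors) would still push the smoothed value through the deadband and adjust the backlight once, not every 5 seconds.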

I could go on. IMO there is definitely a place for machine learning/AI in phones (and other places too), especially for quality of life thingies. Just nobody is doing them, I guess because these are not as visible as image generation. My credit card has been ready to spend on such developments since at least 2021. One of these days I will have enough of waiting and do it myself, out of spite...

yunwal|1 year ago

Spellcheck, voice control, voice-to-text, autocomplete, and next-word prediction are all AI features that are already in use. Voice-to-text could certainly be much better if something like Whisper were integrated. I pretty much never actually listen to voicemails, so having a reliable transcription there would be great.

I'd also love to be able to give commands that traverse multiple apps (e.g. take my google sheet and venmo request everyone the specified amount). Most likely this would happen by teaching an AI tool use and having apps expose an API.
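One plausible shape for that is a tool registry the model dispatches into. Everything below is hypothetical (the tool names, `venmo_request`, the sheet data); real code would call APIs the apps actually expose:

```python
from typing import Callable

TOOLS: dict[str, Callable] = {}

def tool(fn: Callable) -> Callable:
    """Register a function so an assistant can call it by name."""
    TOOLS[fn.__name__] = fn
    return fn

@tool
def read_sheet(sheet_id: str) -> list[dict]:
    # Hypothetical: stands in for the spreadsheet app's exposed API.
    return [{"name": "Alice", "owes": 12.50}, {"name": "Bob", "owes": 8.00}]

@tool
def venmo_request(person: str, amount: float) -> str:
    # Hypothetical payment-app API.
    return f"requested ${amount:.2f} from {person}"

def run_plan(plan: list[tuple[str, dict]]) -> list:
    """Execute the tool calls an LLM might emit for the command above."""
    return [TOOLS[name](**args) for name, args in plan]

# The model's plan: read the sheet, then one request per row.
rows = TOOLS["read_sheet"](sheet_id="dinner-2024")
plan = [("venmo_request", {"person": r["name"], "amount": r["owes"]}) for r in rows]
print(run_plan(plan))
# ['requested $12.50 from Alice', 'requested $8.00 from Bob']
```

The LLM only produces the plan (tool name + arguments); the OS mediates the actual calls, which is also where permission prompts would live.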

I'd love to be able to give voice commands for certain things (e.g. flipping through recipes when my hands are wet) and have the phone be able to do the actual thing I want.

I actually think phones are a much better place for AI since they're so difficult to type on that voice could provide a higher-bandwidth interface.

jonathanlb|1 year ago

One use case could be improving navigation directions. Right now, map apps provide granular, step-by-step instructions that include unnecessary details, such as how to exit your own neighborhood.

AI could provide more human-oriented directions that focus on key landmarks and decisions rather than every minor turn. For example:

"Hop on 80 West, cross the bridge, take Sir Francis Drake onto 101 South, take the Alexander Avenue exit, don't go through the tunnel, and your destination will be on the right."

diggan|1 year ago

I'd love it if CarPlay/Siri just could read out stuff it finds on the Internet. Currently, all I can get out of it is "Sorry, I cannot show this to you right now" for basically everything except trying to control multimedia.

At one point, I had ChatGPT working via voice in CarPlay mode (via Shortcuts, I think?), but it seems Apple later disabled that, probably for some stupid reason.

NovaX|1 year ago

It will likely become available for application developers to use. At work, we use it to assist warehouse check-ins by allowing the guard to take photos of the truck, paperwork, seal, etc., and fill out the forms going in and out. If built in, it can run on-device, so over time a lot more workflows can be seamless.

jazzyjackson|1 year ago

I used Google lens yesterday to get the artist name of a painting I liked, that was neat.

Syonyk|1 year ago

That's not "on a phone," though. That's just schlepping an image up to the Google data center, and getting a result back. That you're using the phone as an interface to a datacenter doesn't make it "AI on the phone."

chankstein38|1 year ago

The only one I use regularly is object replacement in photos. It's great for editing a street sign out of a picture of the sky or something, especially if you just don't want to dox yourself posting a pic. It's definitely not high quality most of the time. Just a blurry redraw of what the background might look like.

Otherwise, totally with you. No idea why my phone needs AI. I can just open the ChatGPT app if I want to have a discussion with ChatGPT about something. I'm so tired of apps updating to "Add a new AI assistant!" like why do I need to talk to an LLM in most of the apps I use?