item 46816228

Apple buys Israeli startup Q.ai

131 points | ishener | 1 month ago | techcrunch.com

https://www.reuters.com/business/apple-acquires-audio-ai-sta...

48 comments


tchalla|1 month ago

> Notably, this is the second time CEO Aviad Maizels has sold a company to Apple. In 2013, he sold PrimeSense, a 3D-sensing company that played a key role in Apple’s transition from fingerprint sensors to facial recognition on iPhones. Q.ai launched in 2022 and is backed by Kleiner Perkins, Gradient Ventures, and others. Its founding team, including Maizels and co-founders Yonatan Wexler and Avi Barliya, will join Apple as part of the acquisition.

Twice, well done!

tartoran|1 month ago

What kind of tech does Q.ai bring to the table?

clueless|1 month ago

Could Q.ai be commercializing the AlterEgo tech coming out of the MIT Media Lab? i.e. "detects faint neuromuscular signals in the face and throat when a person internally verbalizes words"

Yep, looks like that is it. Recent patent from one of the founders: https://scholar.google.com/citations?view_op=view_citation&h...

rajnathani|26 days ago

If this works well, I could finally see AI wearable pins becoming socially feasible. IMO, speaking aloud to an AI in public doesn't seem like something that will work, yet it is also what OpenAI is apparently investing heavily in with its Jony Ive hardware ambitions [0].

[0] https://www.bloomberg.com/news/articles/2025-05-21/openai-to...

mikestorrent|1 month ago

Yeah...

Pardon the AI crap, but:

> ...in most people, when they "talk to themselves" in their mind (inner speech or internal monologue), there is typically subtle, miniature activation of the voice-related muscles — especially in the larynx (vocal cords/folds), tongue, lips, and sometimes jaw or chin area. These movements are usually extremely small — often called subvocal or sub-articulatory activity — and almost nobody can feel or see them without sensitive equipment. They do not produce any audible sound (no air is pushed through to vibrate the vocal folds enough for sound). Key evidence comes from decades of research using electromyography (EMG), which records tiny electrical signals from muscles: EMG studies consistently show increased activity in laryngeal (voice box) muscles, tongue, and lip/chin areas during inner speech, silent reading, mental arithmetic, thinking in words, or other verbal thinking tasks
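The EMG approach quoted above boils down to detecting when muscle-activity energy rises above a resting baseline. A toy sketch of that idea in pure Python (this is my own illustration, not anything from Q.ai or the MIT work; the window size and threshold are made-up values, and real systems use proper filtering and learned classifiers rather than a fixed threshold):

```python
import math

def rms_windows(signal, window=50):
    """RMS energy over non-overlapping windows of EMG-like samples."""
    out = []
    for i in range(0, len(signal) - window + 1, window):
        chunk = signal[i:i + window]
        out.append(math.sqrt(sum(x * x for x in chunk) / window))
    return out

def detect_subvocal(signal, window=50, threshold=0.05):
    """Flag windows whose RMS energy exceeds a baseline-noise threshold."""
    return [rms > threshold for rms in rms_windows(signal, window)]

# Synthetic demo: 100 samples of near-silence, then 100 samples of
# low-amplitude "subvocal" activity (still far below audible speech).
quiet = [0.001 * ((i % 7) - 3) for i in range(100)]
active = [0.1 * math.sin(i / 3.0) for i in range(100)]
flags = detect_subvocal(quiet + active, window=50, threshold=0.05)
print(flags)  # quiet windows flag False, active windows flag True
```

The point of the sketch is just the contrast the quote describes: the "active" signal here peaks at 0.1, which would be imperceptible to an observer, yet is easily separable from baseline noise with a sensitive enough sensor.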

So, how long until my Airpods can read my mind?

Sir_Twist|1 month ago

“Q.ai is a startup developing a technology to analyze facial expressions and other ways for communication.”

This is an interesting acquisition given their rumored Echo Show / Nest Hub competitor (1). Maybe this is part of their (albeit flawed and delayed) attempt to revitalize the Siri brand under the Apple Intelligence marketing. When you have to say exactly the right words to Siri, or else she adds "Meeting at 10" as an all-day calendar event, people get frustrated, and the non-technical illusion of the "digital assistant" is lost. If this matches Apple's model of how customers perceive Siri, then maybe their thinking is that giving Siri more non-verbal, personable capability could be a differentiating factor in the smart hub market, along with the LLM rebuild. I could also see this tying into some sort of strategy for the Vision Pro.

Now, whether this hypothetical differentiating factor is worth $2 billion, I’m not so sure on, but I guess time will tell.

(1) https://www.macrumors.com/2025/11/05/apple-smart-home-hub-20...

concavebinator|1 month ago

In case there are any Ender's Game fans here, the capability to understand micro-expressions reminds me of how Ender subvocalizes to Jane. Orson Scott Card predicted yet another technological norm.

danhite|1 month ago

Also earlier credit due to Isaac Asimov in Second Foundation [1953] "...

The same basic developments of mental science that had brought about the development of the Seldon Plan, thus made it also unnecessary for the First Speaker to use words in addressing the Student.

Every reaction to a stimulus, however slight, was completely indicative of all the trifling changes, of all the flickering currents that went on in another's mind. The First Speaker could not sense the emotional content of the Student's mind instinctively, as the Mule would have been able to do – since the Mule was a mutant with powers not ever likely to become completely comprehensible to any ordinary man, even a Second Foundationer – rather he deduced them, as the result of intensive training.

deepfriedchokes|1 month ago

Sounds pretty invasive for privacy, if this was ever paired with smart glasses in public.

Lammy|1 month ago

Hence the name, I assume.

stefanos82|1 month ago

Why do I have a feeling that one of their reasons was to trademark "iQ", to match the iSomething "franchise", so to speak?

gralab|1 month ago

Apple dropped the "i" naming scheme many years ago.

assaddayinh|1 month ago

The ability to impress CEOs and signal hotness to investors may not correlate at all with the ability to produce breakthrough technology. Thus companies like Google grow up unbought to then become ..

alecco|1 month ago

It's kind of sad watching Apple drift into irrelevancy. I know I'm not going to buy more products from them because nothing they have is worth the premium price.

jackyinger|1 month ago

I get the feeling Apple is the next Intel.

Intel went through a phase in the 2010s of buying gobs of companies with fancy tech and utterly failing to integrate those acquisitions.

More fundamentally, Intel rested on its laurels of having good hardware and got bitten hard in the end. Something similar seems to be happening at Apple.

loudandskittish|1 month ago

This story has Apple + $2B acquisition + AI

...how is this not at the top of the page?

mobiledev2014|1 month ago

It's pretty crazy that this is Apple's second-largest acquisition ever, but it's kinda boring so nobody cares. Of course, Beats was a household name and co-founded by Dre... a much more accessible story.

bnchrch|1 month ago

Wake me up when they let one of these acqui-hires update Siri to be on par with a voice assistant I could make in an afternoon with off the shelf tools.

alighter|1 month ago

This. And next word prediction / autocorrect that doesn’t look like it’s from the previous century.

wahnfrieden|1 month ago

That already made the news. It will be powered by Gemini and may launch before the next WWDC.

robinsoncrusue|1 month ago

[deleted]

tiffanyh|1 month ago

The full quote:

> enable devices to interpret whispered speech and enhance audio in noisy environments.

I personally see a lot of people using Siri on speakerphone in public places and, given the background noise, am amazed that Siri can capture even half of what's said.

null_deref|1 month ago

Why did your comment omit the American company that thought it was a good idea to buy it? Do you think it implies something about all American companies?