
Show HN: FaceTime-style calls with an AI Companion (Live2D and long-term memory)

34 points | summerlee9611 | 1 month ago | thebeni.ai

Hi HN, I built Beni (https://thebeni.ai), a web app for real-time video calls with an AI companion.

The idea started as a pretty simple question: text chatbots are everywhere, but they rarely feel present. I wanted something closer to a call, where the character actually reacts in real time (voice, timing, expressions), not just “type, wait, reply”.

Beni is basically:

- A Live2D avatar that animates during the call (expressions + motion driven by the conversation)

- Real-time voice conversation (streaming responses, not "wait 10 seconds then speak")

- Long-term memory so the character can keep context across sessions

The hardest part wasn't generating text; it was making the whole loop feel synchronized: mic input, model response, TTS audio, and Live2D animation all need to line up, or it feels broken immediately. I ended up spending more time on state management, latency, and buffering than on prompts.
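The buffering problem described here can be reduced to a small scheduling rule: the avatar's "speaking" state should follow the audio playback clock, not the arrival time of TTS chunks. A hypothetical sketch (not Beni's actual code; `SpeechScheduler` and its fields are made-up names):

```typescript
// Sketch (assumption, not Beni's implementation): schedule streamed TTS chunks
// back to back so animation state lines up with playback instead of network arrival.
type Chunk = { durationMs: number };

class SpeechScheduler {
  private cursorMs = 0; // playback position where the next chunk will start

  // Returns the [start, end) playback window for a chunk, measured from call start.
  schedule(chunk: Chunk, nowMs: number): { startMs: number; endMs: number } {
    // On a buffer underrun, restart the cursor at "now" rather than in the past,
    // so the animation never fires before the audio it belongs to.
    const startMs = Math.max(this.cursorMs, nowMs);
    this.cursorMs = startMs + chunk.durationMs;
    return { startMs, endMs: this.cursorMs };
  }

  // The avatar should stay in the "speaking" state while audio is buffered ahead.
  isSpeaking(nowMs: number): boolean {
    return nowMs < this.cursorMs;
  }
}
```

The point of the sketch is the single source of truth: every animation hook asks the playback clock, so audio and Live2D state can't drift apart.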

Some implementation details (happy to share more if anyone’s curious):

- Browser-based real-time calling, with audio streaming and client-side playback control

- Live2D rendering on the front end, with animation hooks tied to speech / state

- A memory layer that stores lightweight user facts/preferences and conversation summaries to keep continuity
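As a rough illustration of what a memory layer like the one described might look like (all names are hypothetical, not Beni's schema): lightweight facts overwrite in place, and per-session summaries are capped and replayed into the next session's context.

```typescript
// Sketch (assumption): minimal "facts + session summaries" memory layer.
interface MemoryStore {
  facts: Record<string, string>; // e.g. "favorite_music" -> "lo-fi"
  summaries: string[];           // one short summary per past session
}

function rememberFact(store: MemoryStore, key: string, value: string): void {
  store.facts[key] = value; // newer facts overwrite older ones
}

function endSession(store: MemoryStore, summary: string, keep = 10): void {
  store.summaries.push(summary);
  // Cap stored summaries so the rebuilt context stays small.
  if (store.summaries.length > keep) store.summaries.shift();
}

function buildContext(store: MemoryStore): string {
  const facts = Object.entries(store.facts)
    .map(([k, v]) => `- ${k}: ${v}`)
    .join("\n");
  const recent = store.summaries.slice(-3).join("\n"); // only the freshest few
  return `Known user facts:\n${facts}\n\nRecent sessions:\n${recent}`;
}
```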

Current limitation: sign-in is required today (to persist memory and prevent abuse). I'm adding a guest mode soon for faster try-outs, and I'm working on a mobile view now.

What I’d love feedback on:

- Does the "real-time call" loop feel responsive enough, or is it still too laggy?

- Any ideas for better lip sync / expression timing on 2D/3D avatars in the browser?

Thanks, and I’ll be around in the comments.

25 comments


augusteo|1 month ago

Building on zemo's point about parasocial relationships: traditional parasocial interaction involves a performer who doesn't know you exist. Here the AI does respond to you specifically, which changes the dynamic.

Is it still parasocial if the other party is responsive but not conscious? Or is this something new that we don't have good language for yet?

idiotsecant|1 month ago

I think maybe there needs to be a new word. It's still an asymmetric relationship. It's kind of a mix of DMing an influencer and chatting with the barista because you think she actually likes you. You're talking to a mirage.

summerlee9611|1 month ago

I think “parasocial” still captures part of it (one-to-many distribution, performer vibe), but there’s also a true interactive dyad here. It’s closer to “synthetic social interaction” or “responsive parasocial.” I don’t have a perfect word yet, but the asymmetry and the responsiveness both matter.

Charon77|1 month ago

You need to first prove that AI is not conscious.

I find it hard to even convince others that I am a conscious person.

Maybe consciousness is just a matter of belief, if I see this AI and believe that it's a person, then I am talking to a conscious entity.

nitroedge|1 month ago

For better lip sync you could try using Rhubarb Lip Sync to extract visemes from the MP3. What backend speech processor are you using to get the real-time streaming response? Rhubarb would add a bit of latency for sure.

summerlee9611|1 month ago

For real-time: we use WebRTC for streaming. Input goes through streaming STT, then a low-latency LLM, then TTS, and we drive Live2D parameters on the client. For lip sync we currently use a simple phoneme/amplitude-based approach and are testing viseme extraction. Rhubarb is on our list, but we're cautious about the added latency.
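The amplitude-based approach mentioned here boils down to pure logic: take an RMS envelope per audio frame and smooth it into a mouth-open value. A hypothetical sketch (in the browser the samples would come from a Web Audio `AnalyserNode`, and the result would drive Live2D's standard `ParamMouthOpenY` parameter; the gain and smoothing constants are illustrative):

```typescript
// Sketch (assumption): simple amplitude-driven lip sync, no visemes.
function rms(samples: Float32Array): number {
  let sum = 0;
  for (let i = 0; i < samples.length; i++) sum += samples[i] * samples[i];
  return Math.sqrt(sum / samples.length);
}

// One-pole smoothing so the mouth doesn't flicker frame to frame.
function smoothMouthOpen(
  prev: number,
  samples: Float32Array,
  gain = 4,   // illustrative: scales typical speech RMS toward full open
  alpha = 0.5 // illustrative: smoothing factor, higher = snappier
): number {
  const target = Math.min(1, rms(samples) * gain); // clamp to Live2D's [0, 1]
  return prev + alpha * (target - prev);
}
```

The trade-off versus viseme extraction (Rhubarb-style) is exactly the one discussed above: this adds essentially zero latency but only tracks loudness, so mouth shapes are generic.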

october8140|1 month ago

This is disturbing.

xattt|1 month ago

It will quickly distill down to clients using the service just for sex and sex-adjacent activities.

No kink-shaming, but this sort of thing enables self-destructive hard-to-return-from anti-social behaviour.

summerlee9611|1 month ago

Totally fair reaction. We're building this with clear boundaries: we don't position it as a therapy replacement, we add safety rails, and we give users a choice of mode, with guardrails that differ based on that choice. There's also an age restriction as a safety boundary.

dummydummy1234|1 month ago

What are you using for tts/stt/models?

summerlee9611|1 month ago

The Realtime API + ElevenLabs, but the LLMs will be diversified based on persona moving forward. We're using ChatGPT/Gemini as baseline models; we feel prompting alone has limitations.

sghimire2022|1 month ago

This is cool.

summerlee9611|1 month ago

Appreciate it. If you try it and anything feels off (latency, turn-taking, uncanny moments), I’d love concrete feedback. That’s what we’re grinding on right now.

dfajgljsldkjag|1 month ago

It creates a conflict to build a system that is both a private friend and a public performer. You cannot maximize intimacy and fame at the same time.

zemo|1 month ago

You're describing Parasocial interaction: https://en.wikipedia.org/wiki/Parasocial_interaction

Far from being impossible, it's the entire influencer economy. This form of social media has been extremely widespread for a decade or so running; it's probably the dominant form of social media.

summerlee9611|1 month ago

100% agree. Maximizing intimacy and scaling distribution pull in opposite directions. We’re experimenting with keeping the “character” consistent while letting personalization live in private memory and user-controlled settings. Still early, and this tension is real.