top | item 38991525

evmaki | 2 years ago

Awesome write-up - especially the fact that you've gotten it working with good performance locally. It certainly requires a little bit more hardware than your typical home assistant, but I think this will change over time :)

I've been working on this problem in an academic setting for the past year or so [1]. We built a very similar system in a lab at UT Austin and did a user study (demo here https://youtu.be/ZX_sc_EloKU). We brought a bunch of different people in and had them interact with the LLM home assistant without any constraints on their command structure. We wanted to see how these systems might choke in a more general setting when deployed to a broader base of users (beyond the hobbyist/hacker community currently playing with them).

Big takeaways there: we need a way to do long-term user and context personalization. This is partly a matter of knowing an individual's preferences better, but also of having a system that can reason with better sensitivity to the limitations of different devices. To give an example, the system might turn on a cleaning robot when you say "the dog made a mess in the living room" -- impressive, but in practice this hurts more than it helps because the robot can't actually clean up that type of mess.
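One way to sketch that kind of device-limitation check (purely illustrative -- the names and structure here are my own, not from the paper or the original write-up) is to have each device declare its capabilities and gate any LLM-proposed action against them before dispatching:

```python
# Hypothetical sketch: gate LLM-proposed actions by declared device
# capabilities, so "the dog made a mess" can't dispatch a robot that
# only handles dry debris. All names here are illustrative.
from dataclasses import dataclass, field


@dataclass
class Device:
    name: str
    capabilities: set = field(default_factory=set)


def can_execute(device: Device, required_capability: str) -> bool:
    """Return True only if the device declares the needed capability."""
    return required_capability in device.capabilities


robot = Device("cleaning_robot", {"vacuum_dry_debris"})

# The LLM maps "the dog made a mess" to a wet-mess cleanup action;
# the capability check rejects it instead of sending the robot out.
print(can_execute(robot, "clean_wet_mess"))     # False
print(can_execute(robot, "vacuum_dry_debris"))  # True
```

In a real system the capability set would presumably feed into the LLM's context (or a tool schema) so the model can decline or re-plan, rather than being a hard post-hoc filter.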

[1] https://arxiv.org/abs/2305.09802
