top | item 40590858


mtgr18977 | 1 year ago

The thing is: there is no LAM in rabbit r1.

Or, you could say that an automation is a LAM (it's not).

Using the definition from Silvio Savarese’s article (from the podcast you linked):

> To be clear, an LAMs job isn’t just turning a request into a series of steps, but understanding the logic that connects and surrounds them. That means understanding why one step must occur before or after another, and knowing when it’s time to change the plan to accommodate changes in circumstances. It’s a capability we demonstrate all the time in everyday life. For instance, when we don’t have enough eggs to make an omelet, we know the first step has nothing to do with cooking, but with heading to the nearest grocery store. It’s time we built technology that can do the same.


Savarese's definition highlights that a LAM must not only transform a request into a series of steps but also understand the underlying logic that connects and surrounds those steps, including the ability to adjust the plan as circumstances change.

Based on this definition, the claim that rabbit r1 is a LAM-oriented assistant seems inaccurate. If it does not demonstrate the ability to understand and adapt to contextual changes in a logical and effective manner, it cannot be classified as a genuine LAM.

For a true LAM, it is crucial that the technology not only follows a predefined sequence of steps but also understands the logic and purpose behind each step, adjusting as necessary to achieve the desired goal. If rabbit r1 does not meet these criteria, its classification as a LAM indeed needs to be reviewed.
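The distinction can be made concrete. Here is a minimal sketch (not rabbit's actual implementation; every name is hypothetical) of the capability Savarese describes with his omelet example: an agent that checks each step's preconditions and, when one fails, prepends recovery steps and retries, instead of blindly executing a fixed sequence. A mere automation would only ever run the original three steps.

```python
# Sketch of precondition-driven replanning (all names hypothetical).

def omelet_plan():
    """Steps carry explicit preconditions and (optionally) effects."""
    return [
        {"name": "crack eggs",  "requires": lambda p: p.get("eggs", 0) >= 2},
        {"name": "heat pan",    "requires": lambda p: True},
        {"name": "cook omelet", "requires": lambda p: True},
    ]

def recover(step):
    """Hand-written recovery rules; a real LAM would reason these out."""
    if step["name"] == "crack eggs":
        return [
            {"name": "go to grocery store", "requires": lambda p: True},
            {"name": "buy eggs", "requires": lambda p: True,
             "effect": lambda p: p.__setitem__("eggs", 12)},
        ]
    return []

def execute(plan, pantry):
    done, queue = [], list(plan)
    while queue:
        step = queue.pop(0)
        if not step["requires"](pantry):
            fix = recover(step)
            if not fix:
                raise RuntimeError(f"no way to satisfy {step['name']!r}")
            queue = fix + [step] + queue  # replan, then retry this step
            continue
        step.get("effect", lambda p: None)(pantry)  # apply side effects
        done.append(step["name"])
    return done
```

With an empty pantry, `execute(omelet_plan(), {"eggs": 0})` detours through the grocery store before cracking the eggs; with eggs on hand, it cooks straight away.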

And, with that in mind, I can assure you that rabbit r1 is not a LAM-oriented assistant, as they claim.


supermatt | 1 year ago

It already does do this!

I don’t know exactly how sophisticated the setup is - it’s likely some tooling around langchain or similar - but it evidently DOES do this given the nature of some of the queries that it resolves.
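If it is indeed tooling around langchain or similar, the core of that layer probably looks something like this sketch: an LLM maps a user request onto one of a fixed set of tool calls. Everything here is hypothetical (the model call is stubbed out with keyword matching), but it shows why such a router resolves real queries while still being closer to automation than to replanning.

```python
# Hypothetical LLM-driven tool router (names invented for illustration).

TOOLS = {
    "play_music": lambda q: f"playing '{q}'",
    "order_food": lambda q: f"ordering {q}",
}

def fake_llm_route(request):
    """Stand-in for the model call; a real system would prompt an LLM
    to emit a tool name and its arguments (e.g. as JSON)."""
    if "play" in request:
        return "play_music", request.replace("play", "").strip()
    return "order_food", request

def handle(request):
    """Route the request to a tool and execute it."""
    tool, arg = fake_llm_route(request)
    return TOOLS[tool](arg)
```

The fixed `TOOLS` table is the tell: if a tool breaks or a precondition fails, nothing here reroutes around it, which is the gap the parent comment is pointing at.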

You are suggesting that a LAM must route itself around a critical failure in its tooling. Maybe you also expect a LAM to grow arms and water your plants for you? You are taking an expert's definition and projecting some extra magical requirement onto it to dismiss the r1 having a LAM.

The r1's LAM sucks, for sure, but it evidently exists in some form.

As for it being a scam overall - I don’t see how they can offer ChatGPT for life with no subscription, so unless they have some other revenue stream they won’t be around for long.