Why does no one measure prompt velocity

10 points| thepaulthomson | 2 years ago |commandbar.com

11 comments

killaJ|2 years ago

Been messing with LangSmith playground recently. Pains me to say but it’s actually been a pretty good experience so far.

thepaulthomson|2 years ago

That's awesome. I've been seeing quite a bit of chat about it on X too. Seems like they've hit the mark with playground. What are you using it for specifically?

finnlobsien1|2 years ago

Super interesting how it makes "decisions". Nice that they let you tie user feedback directly into LLM refinement; otherwise it would be hard to make that info useful.

fwesss|2 years ago

I'm curious about LangSmith's 'dynamic datasets'. How does it ensure data integrity, especially when rapidly iterating on AI models?

thepaulthomson|2 years ago

From the docs it looks like they're fairly explicit about respecting env state for each dataset. I'm not sure how or where contamination would even occur, to be honest, regardless of the model used.

dazzeloid|2 years ago

Love the concept of prompt velocity, although it doesn't capture the initial quality of the prompt or whether the changes are actually effective.
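
For what it's worth, a naive version of the metric could just be revision count over time, with a quality-weighted variant to address the effectiveness gap. Rough sketch (the `PromptRevision` type, the eval score, and the weighting scheme are all my own assumptions, not anything LangSmith actually does):

```python
from dataclasses import dataclass

@dataclass
class PromptRevision:
    timestamp: float   # seconds since some epoch
    eval_score: float  # 0..1 score from an eval set (assumed to exist)

def prompt_velocity(revisions: list[PromptRevision]) -> float:
    """Raw velocity: revisions per hour over the observed window."""
    if len(revisions) < 2:
        return 0.0
    span_hours = (revisions[-1].timestamp - revisions[0].timestamp) / 3600
    return (len(revisions) - 1) / span_hours if span_hours > 0 else 0.0

def effective_velocity(revisions: list[PromptRevision]) -> float:
    """Weighted variant: only count revisions that improved the eval score."""
    if len(revisions) < 2:
        return 0.0
    improvements = sum(
        1 for prev, cur in zip(revisions, revisions[1:])
        if cur.eval_score > prev.eval_score
    )
    span_hours = (revisions[-1].timestamp - revisions[0].timestamp) / 3600
    return improvements / span_hours if span_hours > 0 else 0.0
```

So two edits in an hour where only one moved the eval score would give a raw velocity of 2/hr but an effective velocity of 1/hr, which is closer to what you'd actually want to track.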

thepaulthomson|2 years ago

Any thoughts on how they could improve here? Seems like that would be challenging.

sp332|2 years ago

What is prompt velocity? It's not mentioned in the blog post.

casstang|2 years ago

What’s LangSmith's revenue model?

thepaulthomson|2 years ago

I have no idea. They're still in beta, so they're probably figuring it out as they go. I could see them charging per token or per trace, most likely.