I thought observability in this context meant the ability to introspect why the LLM produced the output it did, which is a difficult problem because the model's parameters are effectively an unintelligible morass of numbers. Does this help with that, and if so, how?
tracerbulletx|2 years ago