top | item 38614798


DaveSchmindel | 2 years ago

Ah, nice, like feeding a custom GPT everything that goes on at the company. That seems dangerous to me, and not in the typical LLM-wariness way.

Meetings can be messy, full of ambiguity, and "unsolved" at the end. If you feed a number of these experiences into an LLM, would its knowledge base be _that great_ at helping teach somebody about the company holding said meetings?

I'd argue that in order for your LLM suggestion to be effective, the input to it would need to be human-written content, i.e., the knowledge that the OP is discussing. In that case, the human effort is still required up front.


FeepingCreature | 2 years ago

It would be interesting to see if you could train an LLM, given "<A> The foo is definitely xyfied", to exclusively answer "Well, at least two months ago, A thought that the foo was xyfied" rather than opine about foo directly.
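One way to picture this is as a preprocessing step on the training or retrieval data: rewrite each speaker-tagged transcript line into an attributed, time-stamped statement before the model ever sees it. A minimal sketch, assuming a hypothetical `<speaker> claim` transcript format and a simple month-difference heuristic (tense rewriting is left out):

```python
from datetime import date
import re


def attribute_claim(utterance: str, spoken_on: date, today: date) -> str:
    """Rewrite a tagged transcript line '<speaker> claim' into an
    attributed, time-stamped statement instead of a direct assertion.

    Hypothetical sketch: the '<speaker>' tag format and the output
    phrasing are assumptions, not an established data format.
    """
    m = re.match(r"<(\w+)>\s*(.+)", utterance)
    if not m:
        raise ValueError("expected '<speaker> claim' format")
    speaker, claim = m.groups()
    # Coarse staleness estimate in whole months between the two dates.
    months = (today.year - spoken_on.year) * 12 + (today.month - spoken_on.month)
    claim = claim.rstrip(".")
    claim = claim[0].lower() + claim[1:]
    return f"Well, at least {months} months ago, {speaker} thought that {claim}"


print(attribute_claim("<A> The foo is definitely xyfied",
                      date(2023, 10, 1), date(2023, 12, 1)))
# → Well, at least 2 months ago, A thought that the foo is definitely xyfied
```

Training on rewritten pairs like this (rather than the raw claims) would, in principle, push the model toward reporting who believed what and when, instead of asserting the claim itself.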