Unfortunately, what a token describes is exactly what an LLM doesn't understand. As I explain in the article linked previously, procedural steps with determinate outcomes call for traditional, deterministic code rather than probabilistic LLM output.
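To make the distinction concrete, here's a minimal sketch (hypothetical names, plain Python, with a random stub standing in for an LLM call): a procedural step returns the same answer every time, while a sampled prediction can vary across identical inputs.

```python
import random

# Procedural step with a determinate outcome: same input, same output, always.
def apply_discount(price: float, percent: float) -> float:
    return round(price * (1 - percent / 100), 2)

# Stand-in for an LLM-style predictor (hypothetical): it samples around the
# likely answer, so repeated calls on identical input can disagree.
def predicted_discount(price: float, percent: float, rng: random.Random) -> float:
    exact = price * (1 - percent / 100)
    return round(exact + rng.uniform(-0.5, 0.5), 2)

deterministic = {apply_discount(100.0, 15) for _ in range(5)}
rng = random.Random(0)
predicted = {predicted_discount(100.0, 15, rng) for _ in range(5)}

print(len(deterministic))      # always 1: procedural code is reproducible
print(len(predicted))          # usually more than 1: sampling adds variance
```

The point isn't that the predictor is close to right; it's that "close and variable" is the wrong contract for a step whose outcome must be exact.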
If you just want something to predict the next best step or likely outcome, LLMs can already do that by fine-tuning on the kind of data you're talking about.
FYI, today's LLMs aren't trained only on blog- and forum-type content as you suggest; their training data includes millions of books, academic papers, and other legitimate sources. They're then fine-tuned by a specific industry or company on that industry's own papers and data.
dtagames|1 year ago