tatsuya4's comments
tatsuya4 | 1 year ago | on: Reader-LM: Small Language Models for Cleaning and Converting HTML to Markdown
tatsuya4 | 1 year ago | on: Qwen2-Math
It appears that no inference provider currently supports the 72B version.
tatsuya4 | 1 year ago | on: Qwen2-Math
ModelBox has a playground for Qwen2-Math:
7B: https://model.box/try/playground/qwen/qwen2-math-7b-instruct
1.5B: https://model.box/try/playground/qwen/qwen2-math-1.5b-instru...
tatsuya4 | 1 year ago | on: AI-router-chat – An AI chat app with LLM model routing
Did you use an LLM aggregator like https://model.box ?
tatsuya4 | 1 year ago | on: Codestral Mamba
Just did a quick test in the https://model.box playground, and it looks like the completion length is noticeably shorter than that of other models (e.g., gpt-4o). However, the response speed meets expectations.
1. The quality of HTML → Markdown conversion is easier to evaluate than open-ended generation.
2. The HTML → Markdown process is essentially a more sophisticated form of copy-and-paste: the model mainly emits structural symbols (such as ## and *) rather than new content.
3. Rule-based systems are significantly more cost-effective and faster than running an LLM, making them applicable to a wider range of scenarios.
These are just my assumptions and judgments. If you have practical experience, I'd welcome your insights.
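To illustrate point 3, here is a minimal rule-based HTML → Markdown sketch using only Python's standard library. This is my own toy example, not Reader-LM's approach or any real converter's code; the tag coverage (headings, paragraphs, bold/italic, links, list items) is deliberately tiny, and production tools handle far more edge cases (nested lists, tables, escaping, whitespace rules).

```python
# Toy rule-based HTML -> Markdown converter (illustrative sketch only).
# Uses only the standard library's html.parser; no LLM involved.
from html.parser import HTMLParser


class MarkdownConverter(HTMLParser):
    def __init__(self):
        super().__init__()
        self.out = []   # accumulated Markdown fragments
        self.href = ""  # href of the currently open <a> tag, if any

    def handle_starttag(self, tag, attrs):
        if tag in ("h1", "h2", "h3", "h4", "h5", "h6"):
            # <h2> -> "## ", etc.
            self.out.append("\n" + "#" * int(tag[1]) + " ")
        elif tag in ("strong", "b"):
            self.out.append("**")
        elif tag in ("em", "i"):
            self.out.append("*")
        elif tag == "li":
            self.out.append("\n- ")
        elif tag == "p":
            self.out.append("\n\n")
        elif tag == "a":
            self.href = dict(attrs).get("href", "")
            self.out.append("[")

    def handle_endtag(self, tag):
        if tag in ("strong", "b"):
            self.out.append("**")
        elif tag in ("em", "i"):
            self.out.append("*")
        elif tag == "a":
            self.out.append(f"]({self.href})")
            self.href = ""

    def handle_data(self, data):
        self.out.append(data)


def html_to_md(html: str) -> str:
    parser = MarkdownConverter()
    parser.feed(html)
    return "".join(parser.out).strip()


if __name__ == "__main__":
    print(html_to_md(
        "<h2>Qwen2-Math</h2><p>See the "
        "<a href='https://model.box'>playground</a>.</p>"
    ))
```

A few dozen deterministic rules like these run in microseconds per page, which is the cost/speed gap point 3 is about; the interesting question is how often real-world HTML is messy enough that a small LM beats the rules.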