It’s informal language that has formal language mixed in. The informal parts determine how the final document should look. So, a simple formal-to-formal translation won’t meet their needs.
I never really understand this reasoning of "regex is hard to reason about, so we just use an LLM we custom-made instead!" I get that it's trendy, but reasoning about LLMs is impossible for many devs, so the idea that this makes things more maintainable is pretty hilarious.
Regexes require you to understand what the obscure-looking patterns do character by character in a pile of text. Then across different piles of text. Then while juggling different regexes.
For an LLM, you can just tune it to produce the right output using examples. Your brain doesn’t have to understand the tedious things it’s doing.
This also replaces a boring, tedious job with one (LLMs) that’s more interesting. Programmers enjoy those opportunities.
For as much as I would love for this to work, I'm not getting great results trying out the 1.5b model in their example notebook on Colab.
It is impressively fast, but testing it on an arxiv.org page (specifically https://arxiv.org/abs/2306.03872) only gives me a short markdown file containing the abstract, the "View PDF" link and the submission history. It completely leaves out the title (!), authors and other links, which are definitely present in the HTML in multiple places!
I'd argue that Arxiv.org is a reasonable example in the age of webapps, so what gives?
Unfortunately I'm not getting any good results for RFC 3339 (https://www.rfc-editor.org/rfc/rfc3339) either, a page that I think would be great to convert into readable Markdown.
The end result is just like the original site but without any headings and with a lot of whitespace still remaining (and with some non-working links inserted) :/
That's their existing API (which I also tried, with... less than desirable results). This post is about a new model, `reader-lm`, which isn't in production yet.
In real-world use cases, it seems more appropriate to use advanced models to generate suitable rule trees or regular expressions for processing HTML → Markdown, rather than directly using a smaller model to handle each HTML instance. The reasons for this approach include:
1. The quality of HTML → Markdown conversion results is easier to evaluate.
2. The HTML → Markdown process is essentially a more sophisticated form of copy-and-paste, where AI generates specific symbols (such as ##, *) rather than content.
3. Rule-based systems are significantly more cost-effective and faster than running an LLM, making them applicable to a wider range of scenarios.
These are just my assumptions and judgments. If you have practical experience, I'd welcome your insights.
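To make the "rule tree" idea concrete, here's a minimal sketch of what a deterministic, rule-driven HTML → Markdown converter could look like. The tag-to-marker mapping and class names are illustrative assumptions, not anything from Jina's actual pipeline:

```python
from html.parser import HTMLParser

# Hypothetical rule table: tag -> (prefix, suffix) Markdown markers.
# A stronger model could generate or extend a table like this once,
# and the conversion itself stays cheap and deterministic.
RULES = {"h1": ("# ", "\n\n"), "h2": ("## ", "\n\n"),
         "p": ("", "\n\n"), "li": ("* ", "\n"), "em": ("*", "*")}

class MarkdownRenderer(HTMLParser):
    def __init__(self):
        super().__init__()
        self.out = []

    def handle_starttag(self, tag, attrs):
        if tag in RULES:
            self.out.append(RULES[tag][0])

    def handle_endtag(self, tag):
        if tag in RULES:
            self.out.append(RULES[tag][1])

    def handle_data(self, data):
        # Body text is copied through verbatim; only markers are added.
        self.out.append(data)

def to_markdown(html: str) -> str:
    renderer = MarkdownRenderer()
    renderer.feed(html)
    return "".join(renderer.out)

print(to_markdown("<h2>Intro</h2><p>Hello <em>world</em></p>"))
```

Because the model only ever produces the rule table, not the page text, point 2 above holds by construction: the content is copied, never generated.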
An aligned future, for sure. Current commercial LLMs refuse to talk about “keeping secrets” (protection of identity) or pornographic topics (which, in the communities I frequent – made of individuals who have been oppressed partly because of their sexuality –, is an important subject). And uncensored AIs are not really a solution either.
Why is Claude 3.5 Sonnet missing from the benchmark? Even if the real reason is different and completely legitimate, or perhaps purely random, it comes across as "Claude does better than our new model, so we omitted it because we wanted the tallest bars on the chart to be ours". And as soon as readers think that, they may start to question everything else in your work, which is genuinely awesome!
So the regex version still beats the LLM solution. There's also the risk of hallucinations. I wonder if they tried to make an SLM that would rewrite or update the existing regex solution instead of regenerating the whole content again? That would mean fewer output tokens, faster inference, and output that couldn't contain hallucinations. Although I'm not sure whether small language models are capable of writing regexes.
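A minimal sketch of what that could look like: the SLM's only job is to emit or patch (pattern, replacement) pairs, which are then applied deterministically. The rules below are hypothetical examples, not output from any real model:

```python
import re

# Hypothetical rules an SLM might emit or update: (pattern, replacement).
# The rules only insert Markdown markers; the captured text passes
# through re.sub untouched, so the page content can't be hallucinated.
rules = [
    (r"<h2[^>]*>(.*?)</h2>", r"## \1\n"),
    (r"<li[^>]*>(.*?)</li>", r"* \1\n"),
]

def apply_rules(html: str) -> str:
    for pattern, repl in rules:
        html = re.sub(pattern, repl, html)
    return html

print(apply_rules("<h2>Errata</h2><li>fix dates</li>"))
```

The model would output only a small diff to `rules` per site, instead of re-emitting every token of the page.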
Not sure about the quality of the model's output. But I really appreciate this little mini-paper they produced. It gives a nice concise description of their goals, benchmarks, dataset preparation, model sizes, challenges and conclusion. And the whole thing is about a 5-10 minute read.
Feels surprising that there isn't a modern best-in-class non-LLM alternative for this task. Even in the post, they describe using a hodgepodge of headless Chrome, Readability, and lots of regex to create content-only HTML.
Best I can tell, everyone is doing something similar, differing only in the amount of custom situational regex used.
How could it possibly be (a better solution) when there are X different ways to do any single thing in HTML(/CSS/JS)? If you have a website that uses a canvas to showcase the content (think a presentation or something like that), where would you even start? People are still discussing whether the semantic web is important; not every page is UTF-8 encoded; etc. IMHO small LLMs (trained specifically for this) combined with some other (more predictable) techniques are the best solution we are going to get.
About their readability-markdown pipeline:
"Some users found it too detailed, while others felt it wasn’t detailed enough. There were also reports that the Readability filter removed the wrong content or that Turndown struggled to convert certain parts of the HTML into markdown. Fortunately, many of these issues were successfully resolved by patching the existing pipeline with new regex patterns or heuristics."
To answer their question about the potential of an SLM doing this: they see 'room for improvement', but as their benchmark shows, it's not up to their classic pipeline.
You echo their research question: "instead of patching it with more heuristics and regex (which becomes increasingly difficult to maintain and isn’t multilingual friendly), can we solve this problem end-to-end with a language model?"
[+] [-] choeger|1 year ago|reply
I don't get the usage of "regex/heuristics" either. Why can that task not be completely handled by a classical algorithm?
Is it about the removal of non-content parts?
[+] [-] baq|1 year ago|reply
A nicely formatted subset of html is very different from a dom tag soup that is more or less the default nowadays.
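To illustrate the tag-soup problem, here's a small stdlib demo (a constructed example, not any real page): a lenient parser happily consumes unclosed tags and bare attributes, but the structural boundaries a rule-based converter would key on simply never appear.

```python
from html.parser import HTMLParser

# "Tag soup" in practice: unclosed <p>, unquoted attributes, a bare <br>.
soup = "<div class=main><p>First paragraph<p>Second<br>Line</div>"

class TagLogger(HTMLParser):
    """Records the open/close events the parser actually sees."""
    def __init__(self):
        super().__init__()
        self.events = []

    def handle_starttag(self, tag, attrs):
        self.events.append(("open", tag))

    def handle_endtag(self, tag):
        self.events.append(("close", tag))

logger = TagLogger()
logger.feed(soup)
print(logger.events)
# Two <p> opens and no <p> close: the parser never reports the
# paragraph boundaries that a clean-HTML converter would rely on.
```

A browser's DOM builder applies recovery heuristics to guess where those paragraphs end, which is exactly the gap between "nicely formatted subset of HTML" and what's actually served.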
[+] [-] nickpsecurity|1 year ago|reply
[+] [-] MantisShrimp90|1 year ago|reply
[+] [-] nickpsecurity|1 year ago|reply
[+] [-] sippeangelo|1 year ago|reply
[+] [-] faangguyindia|1 year ago|reply
When you've got Google Flash, which is lightning fast and cheap.
My brother implemented it in option-k: https://github.com/zerocorebeta/Option-K
It's near instant. So why waste time on small models? They're going to cost more than Google Flash.
[+] [-] vladde|1 year ago|reply
Using their API link, this is what it looks like: https://r.jina.ai/https://www.rfc-editor.org/rfc/rfc3339
[+] [-] bberenberg|1 year ago|reply
> [Appendix B](#appendix-B). Day
So not sure if it's the length of the page, or something else, but in the end, it doesn't really work?
[+] [-] lelandfe|1 year ago|reply
[+] [-] tatsuya4|1 year ago|reply
[+] [-] fsndz|1 year ago|reply
[+] [-] Diti|1 year ago|reply
[+] [-] faangguyindia|1 year ago|reply
Basically, it's a utility that completes command lines for you.
While playing with it, we thought about creating a custom small model for this.
But it was really limiting! If we used a small model trained on man pages, bash scripts, Stack Overflow, forums, etc., we'd miss the key component: a larger model like Flash is more effective because it knows a lot more about other things.
For example, I can ask this model to simply generate a command that lets me download audio from a YouTube URL.
[+] [-] smusamashah|1 year ago|reply
I don't know if it's using their new model or their engine.
[+] [-] igorzij|1 year ago|reply
[+] [-] faangguyindia|1 year ago|reply
[+] [-] valstu|1 year ago|reply
[+] [-] rockstarflo|1 year ago|reply
[+] [-] rwl4|1 year ago|reply
[+] [-] siscia|1 year ago|reply
Instead of applying an obscure set of heuristics by hand, let the LM figure out the best way starting from a lot of data.
The model is bound to be less debuggable and much more difficult to update, for experts.
But in the general case it will work well enough.
[+] [-] unknown|1 year ago|reply
[deleted]
[+] [-] unknown|1 year ago|reply
[deleted]
[+] [-] unknown|1 year ago|reply
[deleted]
[+] [-] WesolyKubeczek|1 year ago|reply
[+] [-] alexdoesstuff|1 year ago|reply
[+] [-] monacobolid|1 year ago|reply
[+] [-] foul|1 year ago|reply
[+] [-] fsiefken|1 year ago|reply
[+] [-] Onavo|1 year ago|reply
[+] [-] Dowwie|1 year ago|reply