Parser combinators are one of those ideas that are really powerful in practice, but will never be mainstream because they had the bad fortune of coming from the functional programming world. It's a shame too, because parser combinators are what made parsers click for me.
Take the hard problem of parsing a language, break it down into the much simpler problem of reading tokens, then write simple "combinators" (and, or, one_or_many, etc etc) to combine these simple parsers into larger more complex parsers.
You could almost build an entire parser combinator library from scratch from memory, because none of the individual components are complicated.
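Indeed, a minimal from-memory sketch fits in a few dozen lines. Here is one in Python (all names are illustrative, not from any particular library); a parser here is a function from an input string to either `None` for failure or a `(remaining input, result)` pair:

```python
def char(c):
    """Parser that matches one expected character."""
    def parse(s):
        if s and s[0] == c:
            return (s[1:], c)
        return None
    return parse

def either(p, q):
    """'or' combinator: try p, fall back to q."""
    def parse(s):
        return p(s) or q(s)
    return parse

def seq(p, q):
    """'and' combinator: run p, then run q on the remainder."""
    def parse(s):
        r1 = p(s)
        if r1 is None:
            return None
        rest, a = r1
        r2 = q(rest)
        if r2 is None:
            return None
        rest2, b = r2
        return (rest2, (a, b))
    return parse

def many(p):
    """zero-or-more combinator."""
    def parse(s):
        results = []
        while True:
            r = p(s)
            if r is None:
                return (s, results)
            s, x = r
            results.append(x)
    return parse
```

For example, `seq(char('a'), many(char('b')))("abbc")` returns `('c', ('a', ['b', 'b']))`: the leftover input plus the structured result.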
Hey! I love parser combinators and wish they were more mainstream. That's why I wrote Tarsec, which is Parsec for TypeScript:
https://github.com/egonSchiele/tarsec
It also has a neat debug mode, and a way to benchmark performance. I've loved using it, hopefully someone reading will use it too! Use this as a chance to learn something cool (parser combinators)!
I’ve seen it in practice. I decided to use a Java parser combinator library for one problem. As soon as I was off the project they ripped out my code and replaced it with regexes.
That what was to be parsed wasn’t regular apparently did not matter.
> break it down into the much simpler problem of reading tokens, then write simple "combinators" (and, or, one_or_many, etc etc) to combine these simple parsers into larger more complex parsers.
The popular `pyparsing` library for Python (https://pypi.org/project/pyparsing/) expresses these concepts, just written more in traditional OO style rather than functional style (i.e. there are classes like OneOrMany that you instantiate, rather than binding a single-token-matcher argument to lambda or whatever).
Parser combinators are already mainstream. Most real languages are implemented with hand-written recursive descent parsers that closely resemble parser combinators, or at the very least borrow a bunch of concepts.
It's the name that's unlikely to become mainstream. That's squarely on FP folks giving sci-fi names to common-sense concepts.
As a sibling points out, recursive descent parsers have been mainstream for a long time. Higher order parser combinators are only moderately useful in an imperative language with regular looping constructs. (For example, the code to parse a list of Xs separated by Ys is a pretty straightforward loop.) A lot of parser combinator libraries also make the mistake of encouraging "short circuit" error handling, which is rarely what you want for a production-grade parser.
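For instance, the "list of Xs separated by Ys" production really is just a loop in an imperative language. A sketch in Python (function name and error handling are mine):

```python
def parse_int_list(s):
    """Parse digits separated by commas, e.g. '1,22,3' -> ([1, 22, 3], '').

    Returns (items, remaining input), mirroring what a sepBy combinator
    would produce, but written as a plain loop.
    """
    items, i = [], 0
    while True:
        # Scan one run of digits.
        j = i
        while j < len(s) and s[j].isdigit():
            j += 1
        if j == i:
            raise ValueError(f"expected digit at position {i}")
        items.append(int(s[i:j]))
        # A separator means another item must follow; anything else ends the list.
        if j < len(s) and s[j] == ",":
            i = j + 1
        else:
            return items, s[j:]
```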
> will never be mainstream because it had the bad fortune of coming from the functional programming world
Nah, mainstream developers don’t care where an idea comes from as long as it solves the problem and helps them hit the deadline. E.g. LINQ in C# has been broadly adopted.
Regexes are disliked by many developers, so parser combinators should be an easy sell if you can show it solves realistic problems with simpler code.
I remember logstash having a quite intuitive parser language, called grok I think, where recursive descent was built on top of regexes. We should use something like that!
> Take the hard problem of parsing a language, break it down into the much simpler problem of reading tokens, then write simple "combinators" (and, or, one_or_many, etc etc) to combine these simple parsers into larger more complex parsers.
That's a terrible example for "parser combinators beat regexes"; those are the three regular operations. (Except that you use zero_or_many in preference to one_or_many.)
I once tried to use parser combinators and was quite disappointed by the result. I found them harder to read, and harder to handle errors with, than regular recursive descent parsers.
There are numerous posts, comments, articles and so forth about when to use regex versus using a parser. The general rule is this:
If you need to do a search within a string, such as finding needle(s) in a haystack, regex is probably a better fit than a parser. If you need anything more intelligent than a list of search results, you probably want a full formal parser.
Most languages allow a form of nested regex use that increases search precision: a method that applies a regex returns its matching string result to a function, which can then apply another regex to it. That's why regex is probably enough when the job is primitive. There is a tremendously higher cost to using a full parser, considering the lexer and tokenizer plus rules, but it's so much more intelligent that it's not even comparable.
The key thing for me in making this decision is of course predicting the future.
Parsers are more work up front but in my experience much easier to debug, maintain and extend. So if I predict something will be around for a while I'll want to do a parser and if it's just a one-off I usually go with regex.
Of course this predictably leads to me having a collection of parsers that I've never needed to touch again and a handful of monstrously complex regex grammars that I'll rewrite into a parser "the next time" I need to modify them.
I still think it's the right thing to base the decision on; I just need to keep improving my prophesying.
Of course you could also pull out the good old Chomsky hierarchy and make an argument against regexes based on whatever the nature of your data is.
But the thing is: the beauty of regexes lies in their compactness (which, in turn, can make them quite difficult to debug). So, of course, if you want to optimize for some other measure, you'd use an alternative approach (e.g. a parser). And there are a number of worthwhile measures, such as the already-mentioned debuggability, appropriateness for the problem at hand in terms of complexity, processing speed, ease of using the match result, the ability to get multiple alternative matches, support for regexes in your language of choice, etc.
But simply stating "X beats regexes" without saying in what respect leaves something to be desired.
> It uses the pcre-heavy Haskell library which in turn calls out to the system-wide pcre C library for actually compiling and running the regex.
Those aren't true regular expressions; that's a backtracker with eclectic features.
The regex used in the regex solution is just `mul\((\d+),(\d+)\)`, not requiring PCRE.
If Haskell happens to provide bindings to the POSIX regcomp/regexec stuff, that would be something to try instead. \d has to be replaced with [0-9], needless to say.
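The pattern really does stay within basic regular syntax. A quick sketch with Python's `re` (standing in for any engine; the POSIX-friendly `[0-9]` form is shown, and the input string is made up):

```python
import re

# \d swapped for the POSIX-safe [0-9]; no PCRE-only features needed.
MUL = re.compile(r"mul\(([0-9]+),([0-9]+)\)")

pairs = [(int(a), int(b)) for a, b in MUL.findall("x mul(2,4) y mul(3,7)!")]
# pairs == [(2, 4), (3, 7)]
```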
“In other languages, it would be considered overkill to write a full parser when a simple regex can do the same thing. In Haskell, writing a parser is no big deal. We just do it and move on with our lives.”
I see a long code file filled with comments and long paragraph-level explanations. I think I’d rather just learn and use regex.
Whenever I write a regex, I end up with comments roughly ten times longer than the regex. That being said, regular expressions are often the right tool for the job (i.e. parsing a regular language, as opposed to a context-free language or whatever); it's just that the syntax becomes unreadable rather quickly. I’m sure you could build a nicer regular-expression syntax in Haskell.
Sounds like you think the comments and explanations are the problem? You can write regexes with comments and parsers without. Regexes are not generally known to be self-explanatory, except for trivial cases like `\d+`.
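Many engines even let the comments live inside the pattern. In Python's `re`, `re.VERBOSE` ignores whitespace and `#` comments, so each piece can be annotated in place (the pattern is the `mul` example from the thread):

```python
import re

# re.VERBOSE: whitespace and '#' comments inside the pattern are ignored,
# so the explanation sits next to the piece it explains.
mul = re.compile(r"""
    mul\(        # literal 'mul' and opening paren
    ([0-9]+)     # first operand: one or more digits
    ,            # argument separator (no spaces allowed)
    ([0-9]+)     # second operand
    \)           # closing paren
""", re.VERBOSE)

print(mul.search("do mul(11,8)").groups())  # ('11', '8')
```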
I mean, that's the nature of article code, no? You are writing for a generic audience and want to be very explicit about what your code is doing, in order to teach.
For your average Haskell programmer you could drop all of those comments, since the code isn't really doing anything that couldn't be determined by just reading it.
The main advantage of recursive descent parsing is readability. Forget the machinery of the parser, which is (a) trivial enough that AI will generate it correctly for you with next to no prompting, and (b) widely available in libraries anyway. The win is writing a parser that reads like the grammar it implements, which means changes to your grammar are easy to apply to your parser.
Regexes are effectively write-only. If you change the grammar parsed by a regex, you're probably going to discard the regex and make a new one.
Note that most implementations of both parser combinators and regexes can fail very badly (exponential time). Never use either on untrusted input, unless you can prove your implementation lets you stay linear.
This is one of the reasons I've been afraid to use parser combinators too heavily. With regular (finite-state) languages I know their time usage; with parser combinators I have no clue. There are so many different libraries and implementations, some monadic and some applicative, and few mention worst-case behavior. There are benchmarks https://gitlab.com/FinnBender/haskell-parsing-benchmarks#byt... but what kinds of "dangerous operations" do I have to watch out for when writing with parser combinators?
(I guess part of the problem is just that regular languages have been studied since Kleene had a full head of hair, while parser combinators were more or less unknown until the 80's.)
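One concrete "dangerous operation": repetition over a parser that can succeed without consuming input, which loops forever. A sketch of the failure mode and the usual guard, in hand-rolled Python (not any specific library; I believe Haskell's Parsec reports a similar error at runtime for `many` applied to an empty-accepting parser):

```python
def many(p):
    """Zero-or-more combinator. Diverges if p succeeds without consuming,
    so we guard against a pass that makes no progress."""
    def parse(s):
        out = []
        while True:
            r = p(s)
            if r is None:
                return (s, out)
            rest, x = r
            if rest == s:
                raise RuntimeError("many() applied to a parser that "
                                   "consumed no input")
            s = rest
            out.append(x)
    return parse

def optional_a(s):
    # Succeeds on ANY input: consumes an 'a' if present, else consumes nothing.
    return (s[1:], "a") if s[:1] == "a" else (s, None)

# Without the rest == s guard, many(optional_a)("bbb") would spin forever.
```

Unbounded backtracking over nested alternatives is the other classic trap, and it is harder to guard mechanically; that one you mostly avoid by factoring the grammar.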
PEGs don't have this problem right? Linear time over length of input? Though I guess most peg libraries probably aren't "pure" pegs to get around its limitations and may reintroduce this issue?
Parser combinators work well for big messy problems as well as regexes; I used a custom library of combinators to write the Fortran parser for LLVM. They’re great for error recovery and for disambiguating tricky situations, of which Fortran has more than its share; I ended up parsing cooked characters without a tokenization phase.
One hard part is getting good performance when the same production is being tried as part of multiple higher-level alternatives and backtracking is taking place. You can get into trouble on bad input cases, like expressions with thousands of operands used in a context that isn’t clear until you reach the characters after the expression. But there are ways to solve that problem, so I recommend combinators to you if you need to write a parser.
Another downside of parser combinators, at least the ones that I wrote as a basis for Fortran parsing, is that if you implement them as C++ template classes, the types get astonishingly long in their names. This leads to long compilation times and makes debuggers very hard to use. Both of these concerns only affect the developer, of course, and seemed to me to be acceptable so long as the parser is fast and robust.
I'd use parser combinators again if I ever need to write a big parser, although I might not use overloaded operators (">>", "/") and confusing function names ("many" vs. "some") inherited from Haskell's Parsec library.
I've been working a lot with parser combinators lately, they are a really nice compromise between using grammar languages (I'd put regexes in this category, as well as parser generators) and hand-rolled imperative parsers. The parser combinator library ends up being a DSL for describing grammars in your preferred language, but the boundary is fluid, and you can move back and forth between writing grammars and writing code in your preferred language.
The bold implementer could integrate regexes into their parser combinator library, and you could move between all three.
You can also use simple string search functions for this Advent of Code problem.
I used this approach for implementing orthography rules in a stenography application (implementing rules like `artistic + ly = artistically`) and in the benchmarks I did, a few lines of searching for character indexes was an order of magnitude faster than regexes. Each of my functions is about 3-10 ish lines of simple code compared to a 1-line regex.
You do have to handle cases like the character not being found, but I've had zero issues with the code and, in terms of what actually executes, it is vastly simpler. This is why the saying goes that if you make something twice as fast, you may have done something clever, but if you make it 10x faster, you stopped doing something dumb.
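As a hypothetical sketch of such a rule (this is illustrative, not the actual stenography code, and real English orthography has more cases), suffix joining can be a couple of plain string operations:

```python
def add_suffix_ly(word):
    """Illustrative orthography rule: words ending in 'ic' take 'ally',
    so artistic + ly = artistically; everything else just appends 'ly'."""
    if word.endswith("ic"):
        return word + "ally"
    return word + "ly"
```

No regex engine spins up, no pattern is compiled; the failure cases (suffix not found) are explicit branches you can see and test.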
Basic string search and indexing operations (especially when there is language/library support for cheap spans) are definitely underappreciated.
I wouldn't reach for them as a first step though, because they'd take more automated tests to give me confidence of correctness, and most problems I end up solving are not CPU bound but programmer time bound.
A few days ago I mentioned my first parser [1], which used regex in the tokenization step, based on studying the parser video by Destroy All Software. I'm glad I learned it that way first, since it takes a lot of the pain out of lexing. I've now built many parsers, including accidentally re-inventing parser combinators when working in Erlang. Two days ago I built another parser that uses regex heavily for tokenization/some parsing steps. I never parse programming languages, so my programs' needs are quite different from the parsers talked about online!
Parser combinators are great until you need to parse something real, like CSV with embedded newlines and Excel quotes. That’s when you reach for the reliable trio: awk, duct tape, and prayer.
Seems like exactly the situation where you’d want parsers, because they can do any form of quoting \ escaping you want and have no reason to care about newlines any more than they do any other character.
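To make that concrete, here is a sketch of how small a hand-rolled parser for that "real" case can be, assuming Excel-style quoting (`""` escapes a quote, newlines allowed inside quoted fields). Illustrative only, not production CSV:

```python
def parse_csv(text):
    """Minimal CSV reader: comma-separated fields, LF-terminated rows,
    double-quoted fields with "" as an escaped quote. Embedded newlines
    inside quotes are just ordinary characters here."""
    rows, row, field = [], [], []
    i = 0
    while i < len(text):
        c = text[i]
        if c == '"':
            i += 1
            while i < len(text):
                if text[i] == '"':
                    if text[i + 1:i + 2] == '"':  # "" -> literal quote
                        field.append('"')
                        i += 2
                        continue
                    i += 1  # closing quote
                    break
                field.append(text[i])  # embedded newlines land here too
                i += 1
        elif c == ",":
            row.append("".join(field))
            field = []
            i += 1
        elif c == "\n":
            row.append("".join(field))
            rows.append(row)
            row, field = [], []
            i += 1
        else:
            field.append(c)
            i += 1
    if field or row:  # flush a final row with no trailing newline
        row.append("".join(field))
        rows.append(row)
    return rows
```

Because the quote handling is an explicit state, embedded newlines need no special care at all, which is exactly the point: the parser has no reason to treat `\n` differently from any other character.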
Functional programming languages are uniquely suitable for writing parsers for grammars: their whole feature set (discriminated unions, pattern matching, etc.) is very well suited to turning text into ASTs. It's not often that one needs to parse a custom DSL grammar, though.
That said, I'd say it's a pretty rare occasion to have that need, and other languages have great libraries for accomplishing the same, I'm partial to Chevrotain for Javascript (https://chevrotain.io/docs/), which has a small DSL for describing grammars.
Parser combinators are both less efficient and less featureful than bespoke solutions, which is a problem when you're writing a parser for your custom language, and you want to add extra features, like resilience, or rich error reporting.
To add a bit of snark, functional languages seem to be best suited for writing parsers/compilers for other functional programming languages, which is pretty telling of the inward-facing functional culture.
> Parser combinators are both less efficient and less featureful than bespoke solutions, which is a problem when you're writing a parser for your custom language, and you want to add extra features, like resilience, or rich error reporting.
This is untrue. PCs are the bespoke solutions. The article shows how to introduce a feature like state. Parser generators are the technique which just dump fixed-functionality source code on you.
This is a false dichotomy -- regexes and parsers both have their place, even when solving the same problem.
The troubles start when you try and solve the whole thing in one step, using just regular expressions, or parsers.
Regular expressions are good at tokenizing input (converting a stream of bytes into a stream of other things, e.g. picking out numbers, punctuation, keywords).
Parsers are good at identifying structure in a token stream.
Neither are good at evaluation. Leave that as its own step.
Applying this rule to the example in the article (Advent of Code 2024 Day 3), I would still use regular expressions to identify `mul(\d+,\d+)`, `do()`, and `don't()`. I don't think I need a parser, because there is no extra structure beyond that token stream, and I would leave it up to the evaluator to track the state of whether multiplication is enabled or not.
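Sketching that split in Python (the function name is mine; the test string in the assertion below is the published AoC Day 3 example): the regex does only tokenizing, and the evaluator owns the do/don't state.

```python
import re

# One token class per alternative: mul(x,y), do(), don't().
TOKEN = re.compile(r"mul\(([0-9]+),([0-9]+)\)|do\(\)|don't\(\)")

def sum_enabled_muls(memory):
    """Tokenize with the regex, then evaluate with a do()/don't() flag."""
    total, enabled = 0, True
    for m in TOKEN.finditer(memory):
        tok = m.group(0)
        if tok == "do()":
            enabled = True
        elif tok == "don't()":
            enabled = False
        elif enabled:
            total += int(m.group(1)) * int(m.group(2))
    return total
```

Corrupted fragments like `mul[3,7]` or `mul(32,64]` simply never become tokens, so the evaluator never sees them.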
Parser combinators will eventually generate a push-down automaton; a regex will eventually generate a deterministic finite-state automaton. There's no contest: CPU-wise the DFA will parse the string faster, since it does one transition per character, and each transition is a single table lookup. In some cases the automaton might take more memory, though. (This also assumes you stay within the original regex scope, rather than library-specific extensions that allow backtracking.)
It's fine and fair to suggest that parser combinators are better for many use cases (first of all, they're clearly more powerful! they can parse any context-free language). But it's misleading to suggest they "beat" regexes (they don't, not on the regexes' turf).
Obligatory plug for Sprache (https://github.com/sprache/Sprache) - a nice library I've been using whenever I've needed to create a little parser for something in C#. I'm not dogmatically opposed to using regexes, just for me they feel quite clunky for many tasks and tend to grow legs and become harder to understand.
> That's why I wrote Tarsec, which is Parsec for TypeScript
I have a short and a long intro, both illustrated: https://github.com/egonSchiele/tarsec/blob/main/tutorials/5-... and https://github.com/egonSchiele/tarsec/blob/main/tutorials/th...
> You could almost build an entire parser combinator library from scratch from memory
I do this every now and then. Usually in the days leading up to advent of code.
"A parser is a function from an input to either failure or a (remaining input, result) pair" is all you need to remember to build the rest.
> I used a custom library of combinators to write the Fortran parser for LLVM
https://github.com/llvm/llvm-project/blob/main/flang/lib/Par...
> PEGs don't have this problem right? Linear time over length of input?
They can take exponential space, though, so I'm not sure why knowing you'll be able to process the data in linear time is supposed to keep you safe.
> A few days ago I mentioned my first parser [1]
[1]: https://news.ycombinator.com/item?id=43627698
> Of course you could also pull out the good old Chomsky hierarchy
[0]: https://en.wikipedia.org/wiki/Chomsky_hierarchy
> they're clearly more powerful! they can parse any context-free language
No, you can have context sensitivity.