I spent a year or two working with PEGs, and ran into similar issues multiple times. Adding a new production could totally screw up seemingly unrelated parses that worked fine before.
As the author points out, Earley parsing with some disambiguation rules (production precedence, etc.) has been much less finicky/annoying to work with. It's also reasonably fast for small parses even with a dumb implementation. I'd suggest it for prototyping, or settings where runtime ambiguity is not a showstopper, despite the remaining issues described in the article re: having a separate lexer.
Parsing computer languages is an entirely self-inflicted problem. You can easily design a language so it doesn't require any parsing techniques that were not known and practical in 1965, and it will greatly benefit readability as well.
This is entirely the case. Given a sensible grammar stated in a sensible way, it's very easy to write a nice recursive descent parser. They are fast and easy to maintain. It doesn't limit the expressiveness of your grammar unduly.
Both GCC and LLVM implement recursive descent parsers for their C compilers.
Parser generators are an abomination inflicted upon us by academia, solving a non problem, and poorly.
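For what it's worth, the style praised above is easy to show concretely. A minimal sketch (all names my own invention) of a recursive descent parser for arithmetic expressions:

```python
# Recursive descent over the classic expression grammar:
#   expr   -> term (('+'|'-') term)*
#   term   -> factor (('*'|'/') factor)*
#   factor -> NUMBER | '(' expr ')'
import re

def tokenize(src):
    return re.findall(r"\d+|[()+\-*/]", src)

class Parser:
    def __init__(self, tokens):
        self.toks = tokens
        self.pos = 0

    def peek(self):
        return self.toks[self.pos] if self.pos < len(self.toks) else None

    def eat(self, expected=None):
        cur = self.peek()
        if expected is not None and cur != expected:
            raise SyntaxError(f"expected {expected!r}, got {cur!r}")
        self.pos += 1
        return cur

    def expr(self):
        val = self.term()
        while self.peek() in ("+", "-"):
            op = self.eat()
            rhs = self.term()
            val = val + rhs if op == "+" else val - rhs
        return val

    def term(self):
        val = self.factor()
        while self.peek() in ("*", "/"):
            op = self.eat()
            rhs = self.factor()
            val = val * rhs if op == "*" else val / rhs
        return val

    def factor(self):
        if self.peek() == "(":
            self.eat("(")
            val = self.expr()
            self.eat(")")
            return val
        return int(self.eat())

def evaluate(src):
    return Parser(tokenize(src)).expr()
```

Each grammar rule becomes one method, and precedence and associativity fall out of the call structure.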
Just like email addresses. The specification/RFC/whatever could have defined a regex that determines a valid address, instead of the essential impossibility we have today.
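For comparison, a deliberately pragmatic sketch of such a regex — nowhere near full RFC 5322, just the common-sense shape of an address:

```python
import re

# One '@', no whitespace, at least one dot in the domain.
EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def looks_like_email(addr):
    return EMAIL_RE.match(addr) is not None
```

A spec built around a pattern like this would reject some exotica the RFC allows, which is arguably the point the comment is making.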
But I don't want to be able to parse only highly restricted languages. I want to be able to parse anything, including natural language or even non-languages like raw audio.
What I find annoying about using parser generators is that it always feels messy integrating the resulting parser into your application. So you write a file that contains the grammar and generate a parser out of that. Now you build it into your app and call into it to parse some input file, but that ends up giving you some poorly typed AST that is cluttered/hard to work with.
Certain parser generators make life easier by supporting actions on parser/lexer rules. This is great and all, but it has the downside that the grammar you provide is no longer reusable. There's no way for others to import that grammar and provide custom actions for them.
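One way around that downside can be sketched (this is not any particular generator's API): have the parser return a plain rule-labelled tree, and let every consumer import the grammar and bring its own table of actions.

```python
# The tree below is hand-written for illustration; in practice the
# generated parser would produce it.

def walk(node, actions):
    """Bottom-up fold: apply the consumer's action for each rule name."""
    if not isinstance(node, tuple):
        return node  # leaf token
    rule, children = node[0], node[1:]
    return actions[rule](*(walk(c, actions) for c in children))

# A tree for "2 + 3 * 4":
tree = ("add", ("num", 2), ("mul", ("num", 3), ("num", 4)))

# Two independent consumers of the same grammar's trees:
calc = {"num": lambda n: n,
        "add": lambda a, b: a + b,
        "mul": lambda a, b: a * b}
pretty = {"num": str,
          "add": lambda a, b: f"({a} + {b})",
          "mul": lambda a, b: f"({a} * {b})"}
```

The grammar stays reusable because the actions live with the consumer, not inside the grammar file.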
I don't know. In my opinion parsing theory is already solved. Whether it's PEG, LL, LR, LALR, whatever. One of those is certainly good enough for the kind of data you're trying to parse. I think the biggest annoyance is the tooling.
Parser combinators is what I've been loving in the last few years.
Pros:
* They're just a technique/library that you can use in your own language without the separate generation step.
* They're simple enough that I often roll my own rather than using an existing library.
* They let you stick code into your parsing steps - logging, extra information, constructing your own results directly, etc.
* The same technique works for lexing and parsing - just write a parser from bytes to tokens, and a second parser from tokens to objects.
* Depending on your language's syntax, you can get your parser code looking a lot like the BNF grammar you're trying to implement.
Cons:
* You will eventually run into left-recursion problems. It can be nightmarish trying to change the code so it 'just works'. You really need to step back and grok left-recursion itself - no handholding from parser combinators.
* Same thing with precedence - you just gotta learn how to do it. Fixing left-recursion didn't click for me until I learned how to do precedence.
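A minimal sketch of the combinator style described above (all names invented here — a parser is a function from `(text, pos)` to `(value, new_pos)`, or `None` on failure):

```python
def lit(s):
    def p(text, pos):
        if text.startswith(s, pos):
            return s, pos + len(s)
        return None
    return p

def seq(*parsers):  # all in order
    def p(text, pos):
        values = []
        for q in parsers:
            r = q(text, pos)
            if r is None:
                return None
            v, pos = r
            values.append(v)
        return values, pos
    return p

def alt(*parsers):  # first that succeeds
    def p(text, pos):
        for q in parsers:
            r = q(text, pos)
            if r is not None:
                return r
        return None
    return p

def many(parser):  # zero or more
    def p(text, pos):
        values = []
        while True:
            r = parser(text, pos)
            if r is None:
                return values, pos
            v, pos = r
            values.append(v)
    return p

def mapv(parser, f):  # the hook for sticking your own code into a step
    def p(text, pos):
        r = parser(text, pos)
        if r is None:
            return None
        v, pos = r
        return f(v), pos
    return p

# Reads much like the grammar:  csv ::= number (',' number)*
digit  = alt(*[lit(c) for c in "0123456789"])
number = mapv(seq(digit, many(digit)), lambda v: int(v[0] + "".join(v[1])))
csv    = mapv(seq(number, many(mapv(seq(lit(","), number), lambda v: v[1]))),
              lambda v: [v[0]] + v[1])
```

Note that `csv` would loop forever if written left-recursively — exactly the con described above.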
Aycock & Horspool came up with a 'practical' method for implementing Earley parsing (conversion to a state machine) that has an almost humorously large performance improvement over "naive" Earley, and is still reasonable to implement. Joop Leo figured out how to get the worst-case of Earley parsing down to either O(n) (left-recursive, non-ambiguous) or O(n^2) (right-recursive, non-ambiguous). That means the Earley algorithm is only O(n^3) on right-recursive, ambiguous grammars; and, if you're doing that, you're holding your language wrong.
A somewhat breathless description of all of this is in the Marpa parser documentation:
https://jeffreykegler.github.io/Marpa-web-site/
In practice, I've found that computers are so fast that, with just the Joop Leo optimizations, 'naive' Earley parsing is Good Enough™.
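For reference, a naive Earley recognizer really is small. This sketch is my own (no Leo or Aycock-Horspool optimizations, and empty productions would need extra care); the grammar is a dict mapping each nonterminal to a list of alternative bodies:

```python
def earley_recognize(grammar, start, tokens):
    # An item is (lhs, body, dot, origin).
    chart = [set() for _ in range(len(tokens) + 1)]
    for body in grammar[start]:
        chart[0].add((start, body, 0, 0))
    for i in range(len(tokens) + 1):
        changed = True
        while changed:  # iterate chart[i] to a fixed point
            changed = False
            for lhs, body, dot, origin in list(chart[i]):
                if dot < len(body):
                    nxt = body[dot]
                    if nxt in grammar:  # predict
                        for b in grammar[nxt]:
                            if (nxt, b, 0, i) not in chart[i]:
                                chart[i].add((nxt, b, 0, i))
                                changed = True
                    elif i < len(tokens) and tokens[i] == nxt:  # scan
                        chart[i + 1].add((lhs, body, dot + 1, origin))
                else:  # complete
                    for l2, b2, d2, o2 in list(chart[origin]):
                        if d2 < len(b2) and b2[d2] == lhs:
                            item = (l2, b2, d2 + 1, o2)
                            if item not in chart[i]:
                                chart[i].add(item)
                                changed = True
    return any(lhs == start and dot == len(body) and origin == 0
               for lhs, body, dot, origin in chart[len(tokens)])

# S -> S '+' S | 'n'   (ambiguous, recursive in both directions)
g = {"S": [("S", "+", "S"), ("n",)]}
```

It handles ambiguity and left recursion without any of the grammar contortions PEG or LL tools demand — at the cost of the cubic worst case discussed above.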
An extremely layman answer is that most of the interesting innovation in parsing in relatively modern times seems to have happened in the context of IDEs. I.e. incremental, high-performance parsing to support syntax highlighting, refactoring, etc.
Not sure, but I at least am certainly aware of possibilities that such writeups exclude.
In particular, you can do (a subset of) the following in sequence:
* write your own grammar in whatever bespoke language you want
* compose those grammars into a single grammar
* generate a Bison grammar from that grammar
* run `bison --xml` instead of actually generating code
* read the XML file and implement your own (trivial) runtime so you can easily handle ownership issues
In particular, I am vehemently opposed to the idea of implementing parsers separately using some non-proven tool/theory, since that way leads to subtle grammar incompatibilities later.
I'm not super familiar with the space, but tree-sitter seems to take an interesting approach in that it is an incremental parser. So instead of re-parsing the entire document on each change, it only parses the affected text, thereby making it much more efficient for text editors.
I don't know if that's specific to tree-sitter though, I'm sure there are other incremental parsers. I have to say that I've tried ANTLR and tree-sitter, and I absolutely love tree-sitter. It's a joy to work with.
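To illustrate just the incrementality idea (this is not tree-sitter's actual algorithm, which reuses subtrees of a real parse tree): keep per-unit results and recompute only what changed. A toy per-line sketch:

```python
def lex_line(line):
    return line.split()  # stand-in for real per-line lexing

class IncrementalLexer:
    def __init__(self, lines):
        self.lines = list(lines)
        self.cache = [lex_line(l) for l in self.lines]
        self.relexed = 0  # how many lines we had to redo after edits

    def edit(self, lineno, new_text):
        if self.lines[lineno] != new_text:  # untouched lines keep their cache
            self.lines[lineno] = new_text
            self.cache[lineno] = lex_line(new_text)
            self.relexed += 1

    def tokens(self):
        return [tok for line in self.cache for tok in line]
```

Editing one line of a thousand-line file re-lexes one line instead of a thousand; real incremental parsers apply the same caching idea to whole subtrees.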
I feel that most of the time the two options are presented as either write a handwritten parser or use a parser generator. A nice third way is to write a custom parser generator for the language you wish to parse. Handwritten parsers do tend to get unwieldy and general purpose parser generators can have inscrutable behavior for any specific language.
Because the grammar for a parser generator is usually much simpler than most general purpose programming languages, it is typically relatively straightforward to handwrite a parser for it.
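As a sketch of how small such a parser can be — assuming a made-up rule syntax of the form `name ::= sym sym | sym`:

```python
# Returns {nonterminal: [tuple_of_symbols, ...]}.
def parse_grammar(text):
    rules = {}
    for line in text.strip().splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        lhs, rhs = line.split("::=", 1)
        rules[lhs.strip()] = [tuple(alt.split()) for alt in rhs.split("|")]
    return rules
```

From there, a generator for your specific language only has to emit code (or drive an interpreter) for the rules it just read.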
Common example of complications of two grammars being combined: C code and character strings.
Double quotes in C code mean begin and end of a string. But strings contain quotes too. And newlines. Etc.
So we got the cumbersome invention of escape codes, and so character strings in source (itself a character string) are not literally the strings they represent.
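A concrete illustration of the two layers (using Python strings to hold both; the tiny un-escaper handles just two escapes, where real C has many more):

```python
source_form = r'"She said \"hi\"\n"'   # what sits in the .c file
denoted = 'She said "hi"\n'            # what the program sees at runtime

def unescape(literal):
    body = literal[1:-1]               # strip the outer quotes
    return body.replace(r'\"', '"').replace(r'\n', '\n')
```

The source form and the denoted string are different character sequences, which is exactly why a grammar for C has to treat string literals as their own sub-language.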
My current view of what makes parsing seem so difficult is that people want to jump straight from parsing to execution, skipping a ton of intermediate steps. That is, we often know what we want to happen at the end. And we know what we are given. It is hoped that going from one to the other is a trivially mechanical problem.
But this ignores all sorts of other steps you can take. Targeting multiple execution environments is an obvious step. Optimization is another. Trivial local optimizations like shifts over multiplications by 2 and fusing operations to take advantage of the machine that is executing it. Less trivial full program optimizations that can propagate constants across source files.
And preemptive execution is a huge consideration, of course. Very little code runs in a way that can't be interrupted for some other code to run in the meantime. To the point that we don't even think of what this implies anymore. Despite accumulators being a very basic execution unit on most every computer. (Though, I think I'm thankful that reentrancy is the norm nowadays in functions.)
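The shift-for-multiply peephole mentioned above can be sketched on a toy tuple-based AST (my own representation, rewriting x * 2^k into x << k bottom-up):

```python
# Nodes: ("mul", a, b), ("shl", a, k), ("const", n), ("var", name).
def peephole(node):
    if not isinstance(node, tuple):
        return node
    node = tuple(peephole(child) for child in node)
    if node[0] == "mul":
        _, a, b = node
        for x, y in ((a, b), (b, a)):
            # power-of-two constant on either side becomes a shift
            if y[0] == "const" and y[1] > 0 and y[1] & (y[1] - 1) == 0:
                return ("shl", x, y[1].bit_length() - 1)
    return node
```

Passes like this sit squarely in the "intermediate things" between parsing and execution that the comment is pointing at.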
What have those other things got to do with parsing though? Granted, they rely on parsing having already happened, but I don't see how there's much feedback from those considerations to the way that parsers work, or are written, or - as the article discussed - can be combined?
dataflow|2 years ago
> Earley parsing with some disambiguation rules
Any idea why GLR always gets ignored?
paulddraper|2 years ago
This is the controversial part, Lisp aficionados to the contrary.
Legend2440|2 years ago
My brain can do it, why can't my computer?
dang|2 years ago
Parsing: The Solved Problem That Isn't (2011) - https://news.ycombinator.com/item?id=8505382 - Oct 2014 (70 comments)
Parsing: the solved problem that isn't - https://news.ycombinator.com/item?id=2327313 - March 2011 (47 comments)
marcusf|2 years ago
(I may be talking out of my ass here.)
sse|2 years ago
https://soft-dev.org/pubs/html/diekmann_tratt__dont_panic/
https://drops.dagstuhl.de/storage/00lipics/lipics-vol166-eco...
PH95VuimJjqBqy|2 years ago
ugly, yes. problematic? no.