item 20192815

jpittis | 6 years ago

> I think the smaller differences are also large enough to rule out extraordinary claims, like the ones I’ve read that say writing a compiler in Haskell takes less than half the code of C++ by virtue of the language

Specifically the "by virtue of the language" part:

Seems to me it's unreasonable to claim the languages are on equal footing just because fancy parser libraries aren't allowed to be used for the project. The fancy parser libraries exist for certain languages specifically because those languages enable them to be written. (For example, in Haskell: monadic libraries, libraries that take advantage of GADTs, etc.)
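As a minimal sketch of what such a monadic parser library boils down to in Haskell (assuming only base; the names `Parser`, `satisfy`, `expr`, etc. are illustrative, not from any particular library such as parsec or megaparsec, which add error reporting and backtracking control on top of this core):

```haskell
import Control.Applicative (Alternative (..))
import Data.Char (isDigit)

-- A parser is a function from input to a possible result plus leftover input.
newtype Parser a = Parser { runParser :: String -> Maybe (a, String) }

instance Functor Parser where
  fmap f (Parser p) = Parser $ \s -> fmap (\(a, r) -> (f a, r)) (p s)

instance Applicative Parser where
  pure a = Parser $ \s -> Just (a, s)
  Parser pf <*> Parser pa = Parser $ \s -> do
    (f, s')  <- pf s
    (a, s'') <- pa s'
    pure (f a, s'')

instance Monad Parser where
  Parser p >>= f = Parser $ \s -> do
    (a, s') <- p s
    runParser (f a) s'

instance Alternative Parser where
  empty = Parser $ const Nothing
  Parser p <|> Parser q = Parser $ \s -> p s <|> q s

-- Consume one character satisfying a predicate.
satisfy :: (Char -> Bool) -> Parser Char
satisfy f = Parser $ \s -> case s of
  (c:cs) | f c -> Just (c, cs)
  _            -> Nothing

char :: Char -> Parser Char
char c = satisfy (== c)

number :: Parser Int
number = read <$> some (satisfy isDigit)

-- A toy grammar: sums like "1+2+3".
expr :: Parser Int
expr = do
  n  <- number
  ns <- many (char '+' *> number)
  pure (sum (n : ns))

main :: IO ()
main = print (runParser expr "1+2+3")  -- Just (6,"")
```

The point being made in the thread: this entire core fits in a few dozen lines because typeclasses and do-notation do the heavy lifting, which is exactly the kind of language leverage the libraries are built on.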


trishume | 6 years ago

I don't think monadic parser libraries have a real claim to be that difference. All the languages listed have excellent parsing libraries that make things similarly easy, if not by language power then by grammar DSLs with embeddable code snippets.

I think if any library could make a real difference for Haskell it's most likely to be http://hackage.haskell.org/package/lens, which a Haskeller friend of mine claims could likely make a lot of the AST traversal and rewriting much terser.

pwm | 6 years ago

While I found your article informative and interesting, I think it only works in the very specific context of this assignment. Disallowing powerful language features/libraries means it's not a level playing field and thus not a fair comparison. Some languages' standard libraries are tiny, some are huge. Some languages have lots of advanced features. E.g. GP mentioned GADTs, with which one can write type-safe, correct-by-construction ASTs. In other words, programs passing specific tests in a specific context does not imply they are comparable in terms of general correctness/robustness/maintainability (as you noted regarding caught edge cases).

howenterprisey | 6 years ago

Hoopl (data flow analysis) would also make a difference. I did a very similar project at my university in Haskell and Hoopl definitely saved us from writing quite a bit of code. We also used parser combinators in the frontend, which I think saved us time too.

anaphor | 6 years ago

I've found PEGs (Parsing Expression Grammars) to make things extremely easy and terse. E.g. OMeta, Parsley, etc.

My experience with using both PEGs and parser combinators is that there isn't a huge difference in the total number of lines of code. On the other hand though, the syntax of PEGs would be easier to understand for someone who is familiar with BNF style notation.

pyrale | 6 years ago

Recoding a viable subset of lens would have taken about 50 lines of Haskell. Likewise, rewriting parser combinators would not have taken experienced devs long. The problem here is that requiring people to recode the libs on top of the compiler is disingenuous. And if you ban idiomatic libs, you also ban most online help, tutorials, etc.
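For a sense of scale, here is a hedged sketch of the kind of "viable subset" meant: the van Laarhoven lens core (`lens`, `view`, `over`, and composition via ordinary `.`) in roughly 20 lines. The `Node`/`Span` AST types are hypothetical, invented just to illustrate rewriting a nested field; the real lens package adds traversals, prisms, and much more.

```haskell
{-# LANGUAGE RankNTypes #-}
import Data.Functor.Const (Const (..))
import Data.Functor.Identity (Identity (..))

-- A van Laarhoven lens: one polymorphic function gives both get and set.
type Lens s a = forall f. Functor f => (a -> f a) -> s -> f s

-- Build a lens from a getter and a setter.
lens :: (s -> a) -> (s -> a -> s) -> Lens s a
lens get set f s = set s <$> f (get s)

-- Read through a lens by instantiating f = Const a.
view :: Lens s a -> s -> a
view l = getConst . l Const

-- Modify through a lens by instantiating f = Identity.
over :: Lens s a -> (a -> a) -> s -> s
over l f = runIdentity . l (Identity . f)

-- Hypothetical AST-ish types to show nested rewriting.
data Span = Span { spanStart :: Int, spanEnd :: Int } deriving Show
data Node = Node { nodeSpan :: Span, nodeName :: String } deriving Show

spanL :: Lens Node Span
spanL = lens nodeSpan (\n s -> n { nodeSpan = s })

startL :: Lens Span Int
startL = lens spanStart (\s x -> s { spanStart = x })

main :: IO ()
main = do
  print (view (spanL . startL) (Node (Span 1 5) "f"))
  print (over (spanL . startL) (+10) (Node (Span 1 5) "f"))
```

Lenses compose with plain function composition (`spanL . startL`), which is what makes deep AST traversal and rewriting terse.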

steveklabnik | 6 years ago

My understanding is that in production compilers, hand-rolled parsers are the norm. Parsing libraries are cool, but they just aren't used for big projects.

sanxiyn | 6 years ago

Both OCaml and GHC use parser generators. It is incorrect to suggest that production compilers hand-roll their parsers.

tomasato | 6 years ago

Excluding the lens library (as per the article) is unusual; it provides natural getter/setter and row-polymorphism-style functionality.

More anecdotally, I'd argue parsing libraries are common; just look at the prevalence of attoparsec and others. But most parsing libraries in the ecosystem are parser combinator libraries, which don't offer the performance and nice error messages that compilers need.

willtim | 6 years ago

It depends entirely on whether the big project still has an elegant and complete formal grammar. Hand-rolled parsers are only common in industrial languages because many have grown to be far too complex and ad-hoc, requiring e.g. additional analysis and disambiguation during parsing. It is not a situation to aspire to.

WalterBright | 6 years ago

A typical translation of C++ code into D reduces the line count by a substantial amount, simply because D doesn't require .h files.

jancsika | 6 years ago

What are relative compile times like?

Building Chromium atm, and to be honest I'd be happy if it were written in a trillion lines of BASIC if that would somehow achieve even a 10x build time speedup.

galaxyLogic | 6 years ago

Using a fancy parser-library would mean we should also count the lines of code in it. It would basically mean adapting an existing solution. In practice that would make a lot of sense, but if the purpose is to compare the productivity of different languages then not so much.

geofft | 6 years ago

Isn't it a facet of the productivity of a language that it's easier to write certain types of libraries in one language than in another? If you're writing a compiler, the fact that lots of people write parser libraries in Haskell is a point in favor of Haskell, whether you intend to use those libraries (because they're available and production-tested) or to write your own (because it's demonstrably a productive language for that sort of work).