top | item 33433783

Functional programming should be the future of software

341 points | g4k | 3 years ago | spectrum.ieee.org | reply

504 comments

[+] le-mark|3 years ago|reply
It would be helpful if the article started off by defining what a functional language is. A lot of languages have functional features but are not “purely” functional. I think most would agree there’s a spectrum: dynamic vs. static, eager vs. lazy, mutable vs. immutable.

So what flavor of functional programming, one might ask, since JavaScript is a dynamically typed flavor that is ubiquitous nowadays? The fine article suggests, drum roll... Haskell! The author believes a statically typed and lazily evaluated language is what we all should be using, unlike the various dynamically typed options like Smalltalk, Scheme, or Lisp. Standard ML and OCaml, being statically typed, are eagerly evaluated.

Most popular languages have added a lot of functional features in recent years so it’s clear the world is moving that way, imo.

[+] dmitriid|3 years ago|reply
To quote "Stop Writing Dead Programs" [1]: "If what you care about is systems that are highly fault tolerant, you should be using something like Erlang over something like Haskell because the facilities Erlang provides are more likely to give you working programs."

[1] https://www.youtube.com/watch?v=8Ab3ArE8W3s

[+] commandlinefan|3 years ago|reply
> A lot of languages have functional features but are not “purely” functional.

That was my first thought. I work mostly in Java because that's what they pay me to do, but I've almost never worked with a Java programmer who could actually write Java code using the OO features that the language is based around. When I see their Scala code... it's mostly var, rarely val, because it's easy to think about.

[+] zamalek|3 years ago|reply
Functional programming won't succeed until the tooling problem is fixed. 'Tsoding' said it best: "developers are great at making tooling, but suck at making programming languages. Mathematicians are great at making programming languages, but suck at making tooling." This is why Rust is such a success story in my opinion: it is heavily influenced by FP, but developers are responsible for the tooling.

Anecdotally, the tooling is why I gave up on OCaml (given Rust's ML roots, I was seriously interested) and Haskell. I seriously couldn't figure out the idiomatic OCaml workflow/developer inner loop after more than a day of struggling. As for Haskell, I gave up maybe 20 minutes in, waiting for deps to come down for a Dhall contribution I wanted to make.

Institutionally, it's a hard sell if you need to train the whole team to just compile a project, vs. `make` or `cargo build` or `npm install && npm build`.

[+] agentultra|3 years ago|reply
I think Tsoding has the wrong idea here. Most mathematicians are not working on GHC or Haskell standards, or even using Haskell. Most are still doing mathematics on pen and paper. Many use packages like Sage or Wolfram Alpha. Few are using interactive theorem provers like Lean.

Haskell is a poor language to be doing mathematics in.

I’d say the majority of people working on GHC are software developers and CS researchers. They’re a friendly bunch.

What’s holding back tooling is that the developers of the de-facto compiler for Haskell are spread out amongst several organizations and there isn’t tens of millions of dollars funding their efforts. It’s mostly run by volunteers. And not the volume of “volunteers” you get on GCC or the like either.

That makes GHC and Haskell quite impressive in my books.

There are other factors of course but the tooling is improving bit by bit.

The whole “Haskell is for research” meme needs to go into the dustbin.

[+] zelphirkalt|3 years ago|reply
You gave up using a programming language after a day? And Haskell after installing/building some dependencies for 20 minutes? Tbh, this sounds like you were not really trying. What kind of experience with a programming language do you expect to have after a mere day? Learning takes time. Anyone might spew some non-idiomatic code within a day, but really becoming proficient usually takes longer.

Do you have any references for the "Rust is heavily influenced by FP" claim? To me it does not feel that FP. I have (for now) given up writing FP-style code in Rust. ML influence -- yeah, maybe, if I squint a bit.

[+] Karrot_Kream|3 years ago|reply
I think FP suffers from what I call the "ideological gruel" problem. Niche, ideology-oriented communities with very strong opinions tend to ignore usability issues, not necessarily because of a theory/practice dichotomy, but because the ideologically excited community is so positively motivated that it overlooks the problems. Even if you're eating gruel, if the gruel is produced by an ideology you identify strongly with, the identification alone makes it taste better than mere gruel. A lot of folks who use FP are willing to overlook the myriad rough edges around tooling because they're so excited to work with FP that they get used to the tooling. Rust made tooling UX an explicit focus of the project, which is why it was able to escape the "ideological gruel" curse.

Additionally, most prominent FP projects are old. Both Haskell and OCaml date from a time when UX expectations around language tooling were much lower (think C++). The inertia around the projects never cared much for UX anyway, so now, in 2022, when languages like Rust and Go have raised the floor of expectations for PL tooling, Haskell and OCaml struggle to keep up.

[+] pera|3 years ago|reply
That's interesting; I often find the tooling one of the best things that FP languages offer. In OCaml, for instance, I found Dune to be fantastic and extremely intuitive. Another very good experience I had was with Elixir and Hex. In Haskell I personally think there are indeed quite a few things that could be improved around the build system and packaging, but overall it's not really that bad once you learn the quirks.
[+] kitd|3 years ago|reply
Tooling is why Go gets its foot in the door much quicker than other languages IMHO. A single binary with no dependencies that does pretty much everything.
[+] Shorel|3 years ago|reply
As someone who likes both mathematics and programming, I find this comment and the article too divisive for divisiveness’ sake.

Programming is applied mathematics. An assignment is not 'sloppy' as the article claims; it is just another kind of operation that can be performed.

A proof is very much like programming, except you are also the parser, the compiler, and the (comparatively slow) computer. Learning to write proofs helps immensely with learning how to program.

We should strive to make our proofs and programs easier to understand, no matter the paradigm.

[+] munchler|3 years ago|reply
I think F# has a good tooling story, since it's part of .NET and a first-class citizen in Visual Studio. It doesn't get as much love from Microsoft as C#, but it's still quite nice to use.
[+] initplus|3 years ago|reply
Honestly Haskell's tooling is really surprisingly good these days. It works fine cross platform, there is a single "blessed" path without too many options to spend time agonizing over.

Today the Haskell example is just `cabal install --only-dependencies && cabal build`.

[+] sandruso|3 years ago|reply
The npm install experience should be the baseline for newer languages. Simply let me get into hacking fast. This is one of the top reasons I like tinkering with JS: it just works. (Yes, I know all the weaknesses of the JS ecosystem, but getting started is really easy.)
[+] dist1ll|3 years ago|reply
Tooling is not the only problem. Name one FP language that I can use for high-performance and systems programming. Is there any except ATS?

And ATS is pretty hard (unlike C, C++, and Rust). I think it will take a while until linear and dependent type systems hit the mainstream. Rust has already succeeded in that regard, so it's a great stepping stone.

[+] capableweb|3 years ago|reply
> it is heavily influenced by FP

Is it really? I agree with the rest of your post, that Rust provides great tooling, but not sure it's "heavily influenced by FP", at least that's not obvious even though I've been mainly writing Rust for the last year or so (together with Clojure).

I mean, go through the "book" again (https://doc.rust-lang.org/book/) and tell me those samples would give you the idea that Rust is a functional language. Even the first "real" example has you mutating a String. Referential transparency would be one of the main points of functional programming in my opinion, and Rust lacks that in most places.

[+] jiggawatts|3 years ago|reply
Most (all?) dependency management systems are single threaded and download thousands of tiny files one… at… a… time…

I have gigabit internet and I’m lucky if some package manager can get more than a couple of megabits of throughput.

Most industries would never accept less than 0.5% efficiency, but apparently software developers’ time is just too expensive to ever be “wasted” on frivolous tasks like optimisation.

I kid, I kid. The real problem is that the guy developing the package manager tool has the package host server right next to him. Either the same building or even a dev instance on his own laptop. Zero latency magically makes even crappy serial code run acceptably well.

“I can’t reproduce this issue. Ticket closed, won’t fix.”
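For what it's worth, the serial-download pattern being mocked is cheap to avoid. A hedged Python sketch (the `fetch` stub is invented and stands in for a real HTTP request) of overlapping per-file latency with a thread pool:

```python
from concurrent.futures import ThreadPoolExecutor

def fetch(name):
    # Stand-in for one HTTP request; a real package manager would
    # download the archive for `name` here and pay its network latency.
    return f"{name}.tar.gz"

def fetch_all(names, workers=16):
    # Overlap the per-file latency instead of paying it one... at... a... time.
    # pool.map preserves input order, so results line up with `names`.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(fetch, names))

archives = fetch_all(["lens", "aeson", "text"])
```

With a latency-bound workload, wall time drops roughly by the worker count, which is exactly the gap the "package host next to the dev" setup hides.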

[+] thejosh|3 years ago|reply
I'm really happy with Meson. A lot of Wayland (the new display protocol for Linux, the "successor" of X11) apps seem to be built in C, so using Meson is super simple and I don't have to worry about tooling (I don't deal much with C/C++, so let me make my change and run away, please).

Rust is the same; you can even pin a nightly version if you want, so the correct toolchain is run via rustup. It's fantastic, and I can contribute to projects much more easily without worrying about tooling.

[+] karmakaze|3 years ago|reply
The only problem I had with tooling using F# was getting my dev env set up. The only help I needed from the editor was navigating between classes/functions and build/run.

Even refactoring was easier because the types are sometimes left to be inferred and not named everywhere. F#'s weaker type inference even helps with both compile speed and readability, since annotations are needed where they help both the compiler and the reader.

Perhaps on larger projects other things become important, but I got the sense that it's on the devs to name things well, use type annotations where helpful, and otherwise document non-obvious aspects.

[+] hamandcheese|3 years ago|reply
There are plenty of non-academic languages with tooling issues as well. I think the larger issue is simply whether the language has significant usage in a large corporation that can sponsor tooling development, or not.

Most tooling issues are pretty minor for small apps, it’s once one employs scores of developers that the lack of tooling begins to hurt (and by hurt, I mean cost money).

[+] yodsanklai|3 years ago|reply
> I seriously couldn't figure out the idiomatic Ocaml workflow/developer inner loop after more than a day of struggling.

I tend to agree but, compiling C++ isn't just about typing "make". And it did take me more than one day to figure out python/js workflow.

[+] mrkeen|3 years ago|reply
> Functional programming won't succeed until the tooling problem is fixed.

I think different people have different wants and needs with tooling. I make (and use) binaries with Haskell. I wish more mainstream languages could make binaries.

[+] dec0dedab0de|3 years ago|reply
I don't know any purely functional languages[*], so my viewpoint is skewed here. Whenever I have seen a Python developer drink the functional kool-aid, they end up storing state in global variables, environment variables, or a dictionary they pass to every function. I then have to explain to them that passing a dictionary around and expecting variables to be in it is just half-assed OOP.

My rule of thumb is anything that needs state should be in a class, and anything that can be run without state or side effects should be a function. It is even good to have functions that use your classes, as long as there is a way to write them with a reasonable set of arguments. This can let you use those functions to do specific tasks in a functional way, while still internally organizing things reasonably for the problem at hand.

The minute people start trying to force things to be all functional or all OOP, then you know they've lost the plot.

[*] I have been wanting to learn Lisp for over a decade; I just never get around to it.
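That rule of thumb can be sketched in a few lines of Python (`Cart` and `total_with_tax` are invented names for illustration):

```python
from dataclasses import dataclass, field

@dataclass
class Cart:
    # Stateful: the cart owns and manages its mutable contents.
    items: list = field(default_factory=list)

    def add(self, price):
        self.items.append(price)

def total_with_tax(prices, rate):
    # Stateless: a pure function of its arguments, no side effects.
    return round(sum(prices) * (1 + rate), 2)

cart = Cart()
cart.add(10.0)
cart.add(5.0)
total = total_with_tax(cart.items, 0.08)  # functions can still consume the class's data
```

The function takes a reasonable set of arguments rather than a grab-bag dictionary, and the state lives in exactly one place.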

[+] skohan|3 years ago|reply
Imo functional programming is one of those things that makes sense from a theoretical perspective, but comes with compromises when it comes to reality.

The thing about functional programming is that the confidence you get from immutability comes at the cost of increased memory usage thanks to data duplication. It's probably going to create a ceiling in terms of the absolute performance which can be reached.

There are just other ways to solve problems like NPEs and memory safety, such as ADTs and ownership rules, which don't come at the same costs as FP.

[+] axilmar|3 years ago|reply
I will certainly be downvoted for this, but I want to be honest so here it is anyway:

In all my programming years, 20+ years that is, I've met hundreds of programmers, and 95%+ of them handled imperative programming languages just fine, with very few actual bugs coming from each one.

Every time this conversation comes up, I have yet to see actual concrete proof that functional programming provides a substantial increase in productivity over well-implemented imperative code.

In other words, I am still not convinced about the merits of functional programming over imperative programming. I want some real proof, not anecdotal evidence of the type "we had a mess of code with C++, then we switched to Haskell and our code improved 150%".

Lots and lots of pieces of code that work flawlessly (or almost flawlessly) have been written in plain C, including the operating systems that power most of Earth (i.e., Unix-like operating systems, Windows, etc.).

So please allow me to be skeptical about the actual amount of advancement functional programming can offer. I just don't see it.

[+] kstenerud|3 years ago|reply
I immediately distrust any article that makes sweeping claims about one-paradigm-to-rule-them-all.

The reason why multiple paradigms exist is because here in the real world, the competing issues and constraints are never equal, and never the same.

A big part of engineering is navigating all of the offerings, examining their trade-offs, and figuring out which ones fit best to the system being built in terms of constraints, requirements, interfaces, maintenance, expansion, manpower, etc. You won't get a very optimal solution by sticking to one paradigm at the expense of others.

One of the big reasons why FP languages have so little penetration is because the advocacy usually feels like someone trying to talk you into a religion. (The other major impediment is gatekeeping)

[+] nayroclade|3 years ago|reply
It's strange to me that this article focuses so much on nullability, which seems like a tangential issue. There's nothing stopping an imperative language from enforcing nullability checks. Indeed, with full strictness enabled, TypeScript will do just that, including requiring you to check every indexed access to an array.
[+] discreteevent|3 years ago|reply
Not only that but directly under the heading "Nullifying problems with null references" they start to describe problems with global variables. The article is all over the place. There may be arguments for functional programming but I wouldn't trust this writer because their thinking is so sloppy.

Also: "But many functions have side effects that change the shared global state, giving rise to unexpected consequences. In hardware, that doesn’t happen because the laws of physics curtail what’s possible."

The laws of physics? That's complete waffle. What happens when one device trips a circuit breaker that disables all other devices? What happens when you open the door to let the cat out but the dog gets out as well?

[+] klysm|3 years ago|reply
This is a good point and I agree it buys you similar safety. The annoying part is it isn’t a monadic data structure. You usually get some syntactic sugar for a limited subset of mapping (optional chaining), but a lot of the time that’s insufficient and you get imperative code.
[+] AtNightWeCode|3 years ago|reply
And you will end up with monads anyway in FP which probably are about as error prone as missing null checks.

Many compilers warn about potential null problems as well.

[+] yxhuvud|3 years ago|reply
Yes, enforced nullability checking is just a type system feature. Another example that has it is Crystal, which is an OOP language.
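In Python terms, this is roughly what a type checker like mypy enforces with `Optional`: the possible absence is part of the type, and strict checking makes you narrow away the `None` case before use. A hedged sketch with invented names:

```python
from typing import Optional

def find_user(user_id: int) -> Optional[str]:
    # May fail: the absence is visible in the return type.
    users = {1: "alice", 2: "bob"}
    return users.get(user_id)

def greet(user_id: int) -> str:
    name = find_user(user_id)
    if name is None:             # a strict checker rejects code that skips this branch
        return "hello, stranger"
    return f"hello, {name}"      # here `name` has been narrowed to `str`
```

Nothing about this requires a functional language; it's a property of the type system.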
[+] shubhamjain|3 years ago|reply
Functional programming proponents like the blog's author remind me of Linux users who swore by it as a user-friendly OS, thought of everyone else as idiots, and refused to admit its serious flaws as a general-purpose OS. I am not criticizing FP. It has great ideas, and many of them have been actively borrowed by other languages. But it's just annoying to see bad analogies repeated again and again, like these:

> Now, imagine that every time you ran your microwave, your dishwasher’s settings changed from Normal Cycle to Pots and Pans. That, of course, doesn’t happen in the real world, but in software, this kind of thing goes on all the time.

> Let me share an example of how programming is sloppy compared with mathematics. We typically teach new programmers to forget what they learned in math class when they first encounter the statement x = x + 1. In math, this equation has zero solutions. But in most of today’s programming languages, x = x + 1 is not an equation. It is a statement that commands the computer to take the value of x, add one to it, and put it back into a variable called x.

Deja vu! I read exact same arguments 10 years ago. Maybe if FP did reduce the bugs, you'd have some stats and successful projects to back them up.

I worked at a company where FP was heavily used. It didn't magically reduce the number of issues we had to fix; it possibly increased them because of the number of things we had to build from scratch. The company is default dead[1] now. Maybe bugs are not a symptom of the paradigm, but of how well the systems and teams are architected to prevent them.

[1]: http://www.paulgraham.com/aord.html

[+] deviantbit|3 years ago|reply
I believe functional programming is interesting, and fun. But it is not going to replace the world. My background has spanned from working on AAA games, to simulations for the DoE, to owning my own business developing embedded products, and graduate school (yes, I went back as a gray haired, and did that late). I fell in love with Lisp many years ago on a TI Explorer.

It is inherently difficult in functional languages to develop applications that require state to change in non-deterministic ways. In fact, I challenge you to develop a first-person shooter in Haskell (have fun).

There are many types of applications for which functional languages are perfect, but there are more for which they would be a disaster. Making broad, sweeping claims, as this article does, just encourages unnecessary discourse and shows the author's ignorance of both the domain of problems that will benefit from functional languages and the larger domain that will not.

As an employer, I don't hire people for their functional programming skills. If they have them, all the better, but we have over 3 million lines of code in C++ and close to a million in Java. We are not starting over, and new projects will leverage existing code.

[+] ParetoOptimal|3 years ago|reply
> In fact, I challenge you to develop a first-person shooter in Haskell (Have fun).

https://hackage.haskell.org/package/frag

> There are many types of applications where functional languages are perfect, but there are more that it would be a disaster.

Can you give an example or two of an application for which a functional language would be a disaster?

> To make broad sweeping claims, such as this article, just encourages unnecessary discourse, and shows the ignorance of the author, and his limited understanding

No, it's an opposing viewpoint to the popular "right tool for the job" and "take the best from functional, best from imperative, and smash them together".

See "The Curse of the Excluded Middle" by Erik Meijer.

> As an employer, I don't hire people for their functional programming skills.

I certainly choose my jobs with language and ecosystem in mind.

[+] thrown_22|3 years ago|reply
>Functional languages are inherently difficult to develop applications that require state to change in non-deterministic ways. In fact, I challenge you to develop a first-person shooter in Haskell (Have fun).

Don't know about Haskell but it would be very fun to do it in Lisp.

Performance sold separately.

[+] z9znz|3 years ago|reply
At the last two places I worked, I gradually led my teams from traditional OOP Ruby or Python behaviors to at least partially functional (if you'll pardon the pun).

The immediate value I was able to demonstrate was in testing. Three basic (not too scary) principles can get you a long way:

1. push mutations and side effects as far toward the edges as possible (rather than embedded in every method/procedure)

2. strive for single responsibility functions

3. prefer simple built-in data structures (primarily hashes) over custom objects... at least as much as possible in the inner layers of the system

If these steps are taken, tests become so much simpler. Most mocking and stubbing needs evaporate, and core logic can be well tested without touching the database, the API server, etc. Many of the factories and fixtures go away, or at least become much simpler. You get to construct the minimal data structure necessary to feed a test, without caring about all the stuff you would normally have to populate to satisfy your ORM rules (which should be covered by their own specific tests).

Once devs see this, they often warm up quickly to functional programming. Conversely, the quickest way to get an OOPist to double down and reject any FP is to build complex chains of collection operations which build and pass anonymous functions everywhere. Those things can be done where appropriate, but they don't provide as much early bang for the buck... and they likely prevent FP from getting a foot in the door to that team.

The only real downside is that naming things is hard, and good single responsibility practices result in a lot more functions that need good names.
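A minimal Python sketch of principle 1 above (the names are invented): the pricing logic is a pure function over plain dicts, so it is testable without a database or mocks, while the side effect happens only at the edge:

```python
def apply_discount(order, pct):
    # Pure core: plain dict in, new plain dict out, no I/O.
    discounted = order["total"] * (1 - pct / 100)
    return {**order, "total": round(discounted, 2)}

def checkout(order, pct, db):
    # Thin edge: the only place a side effect (the write) happens.
    priced = apply_discount(order, pct)
    db.save(priced)
    return priced
```

`apply_discount` can be exercised with a three-key dict and nothing else; only `checkout` ever needs a fake `db`.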

[+] DeathArrow|3 years ago|reply
Functional or procedural or whatever, it doesn't matter much for me as long as the paradigm is not OOP.

I strongly believe that data should live separate from the actions performed on it. I also believe that inheritance is a bad thing as there are other, better means to achieve polymorphism.

I do believe in data-oriented programming, where we waste as few CPU cycles as possible and introduce as few abstractions as possible.

[+] GaryPalmer|3 years ago|reply
I am working on a Java game where I threw all OOP knowledge out the window and use C-style pure data classes with public fields and no methods.

It's pretty refreshing to work this way compared to the design pattern madness you see in enterprise applications - but I guess it's not very safe if multiple people are working on this and some don't know what they are doing.

There has to be some middle ground, I think people have been going way overboard with OOP in the last two decades.

[+] scabel|3 years ago|reply
As someone who uses Scala on a daily basis, I might be biased. However, functional programming is truly the future because of its mathematical guarantees. Pure functions give us referential transparency, i.e. we can substitute a piece in a large system and still expect the whole thing to work, as long as it has no side effects. This has so much relevance to software maintainability. The downside is that, to take advantage of this mathematical guarantee, the code has to be pure through and through; a single side effect can spoil everything. Immutability is another aspect of pure functions, i.e. a pure function wouldn't mutate its inputs.
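Referential transparency, concretely: a call to a pure function can be replaced by its result, or by a cached copy of it, without changing program behaviour. A Python rendering of the idea (the `area` function is a made-up example):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def area(w, h):
    # Pure: same inputs always give the same output and nothing else
    # happens, so substituting a remembered result is always safe.
    return w * h

total = area(3, 4) + area(3, 4)  # the second call is served from the cache
assert total == 24
assert area.cache_info().hits == 1
```

The moment the function gains a side effect (a log write, a counter), the cached substitution silently changes behaviour, which is the "a single side effect can spoil everything" point.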
[+] agentultra|3 years ago|reply
Maybe it ought to be but it definitely isn't if you look at success as industry adoption.

The author seems to have good intentions and covers all the talking points a new convert will discover on their own.

However I'm afraid an article like this will do more harm than good in the end. There are too many network effects in play that go against a new paradigm supplanting the mainstream as it is. And the benefits of functional programming pointed out in this article haven't been convincing over the last... many decades. Without large, industry success stories to back it up I'm afraid any amount of evangelism, however good the intention of the author, is going to fall before skeptical minds.

It doesn't help that, of the few empirical studies done, none have shown any impressive results that back up these claims. Granted, those studies are few and far between and inconclusive at best, but that won't stop skeptics from using them as ammunition.

For me the real power of functional programming is that I can use mathematical reasoning on my programs and get results. It's just the way my brain works. I don't think it's superior or better than procedural, imperative programming. And heck there are some problem domains where I can't get away from thinking in a non-functional programming way.

I think the leap to structured programming was an event that is probably only going to happen once in our industry. Aside from advances in multi-core programming, which we've barely seen in the last couple of decades, I wouldn't hold out for functional programming to be the future of the mainstream. What does seem to be happening is that developments in pure functional programming are making their way to the entrenched, imperative, procedural programming languages of the world.

A good talk: "Why Isn't Functional Programming the Norm?"

https://www.youtube.com/watch?v=QyJZzq0v7Z4

[+] Ciantic|3 years ago|reply
A lot of the time, arguments for functional programming seem to describe some form of total programming and the avoidance of partial functions: enforcing null checking, exhaustive matching, avoiding panics, etc.

When I was going through Functional Programming classes in Haskell, the teacher tried to separate total programming and functional programming.

For instance, Rust programs rarely use function composition compared to Haskell. He didn't consider Rust a very good functional programming language for that very reason. But at the same time, Rust has good total programming tools like exhaustive match checking, Option, Result, etc.

Does anyone else try to separate functional programming and total programming?
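The distinction fits in a few lines of Python (invented names): `first` is partial because it is undefined, i.e. raises, on the empty list, while `first_total` is total because every input maps to a value:

```python
from typing import Optional, Sequence, TypeVar

T = TypeVar("T")

def first(xs: Sequence[T]) -> T:
    # Partial: crashes on [] -- like Haskell's `head`.
    return xs[0]

def first_total(xs: Sequence[T]) -> Optional[T]:
    # Total: the empty case is part of the return type,
    # so callers are forced to handle it.
    return xs[0] if xs else None
```

Nothing here is distinctively "functional"; it is exactly the total-programming discipline (make the failure case a value) that the comment separates from FP proper.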

[+] aryehof|3 years ago|reply
I'm reminded of the following quote by the co-author of SICP - Gerry Sussman...

"Remember a real engineer doesn't want just a religion about how to solve a problem, like object-oriented or functional or imperative or logic programming. This piece of the problem wants to be a functional program, this piece of the program wants to be imperative, this piece wants to be object-oriented, and guess what, this piece wants to be logic feed based and they all want to work together usefully. And not because of the way the problem is structured, whatever it is. I don't want to think that there's any correct answer to any of those questions. It would be awful bad writing a device driver in a functional language. It would be awfully bad writing anything like a symbolic manipulator in a thing with complicated syntax."

[+] bob1029|3 years ago|reply
I'd strongly recommend checking this paper out:

Out of the Tar Pit - http://curtclifton.net/papers/MoseleyMarks06a.pdf

I agree that functional programming is part of the future. I believe that the relational model is the other part. In this space, imperative programming exists primarily to bootstrap a given FRP domain.

We've built a functional-relational programming approach on top of SQLite using this paper as inspiration. Been using this kind of stuff in production for ~3 years now.

Remember: your user/application-defined functions in SQL do not need to be pure. You can expose your entire domain to SQL and build complete applications there, with the domain data and relational model serving as first-class citizens. With special SQL functions like "exec_sql()", and by storing your business-rule scripts in the very same database, you can build elegant systems that are 100% encapsulated within a single .db file.
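Registering an impure application function with SQLite looks like this with Python's sqlite3 module (the `notify` side effect is an invented example, not part of the approach described above):

```python
import sqlite3

calls = []  # stand-in side effect: a real app might log or send email here

def notify(customer):
    calls.append(customer)         # impure: mutates state outside SQL
    return f"notified {customer}"

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (customer TEXT, total REAL)")
conn.execute("INSERT INTO orders VALUES ('ada', 120.0), ('bob', 40.0)")

# Expose the domain function to SQL as a scalar UDF.
conn.create_function("notify", 1, notify)
rows = conn.execute(
    "SELECT notify(customer) FROM orders WHERE total > 100"
).fetchall()
```

The relational query decides *which* rows get the effect; the host language supplies *what* the effect is.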

[+] nu11ptr|3 years ago|reply
I personally think Rust should be included in this group. It isn't technically a functional language, but it has a "functional flair," with many of the same benefits functional programming provides and some of the same features (pattern matching, sum types, etc.). It takes a different approach to mutability, but the net benefit, I think, is about the same.
[+] commandlinefan|3 years ago|reply
Functional programming, as a paradigm, is way better than object-oriented programming, and also proportionally more complicated. My observation, over the past 30 years, is that 90+% of programmers aren't even capable of doing object-oriented programming, so getting them to do functional programming is a pipe dream.
[+] kleene_op|3 years ago|reply
> getting them to do functional programming is a pipe dream

I don't know if that was intentional but I'm upvoting!

[+] feoren|3 years ago|reply
> 90+% of programmers aren't even capable of doing object-oriented programming

Similar to other takes I've seen in this thread. But isn't it flawed to talk about being "capable of object-oriented programming" when object-oriented programming is itself an ill-defined, flawed idea? (I'm talking C++/Java/C#/textbook OO, not Smalltalk.) I spend the majority of my time in C#, and I'm not really sure I'm doing object-oriented programming either. Learning curves are never linear, but if they were, the curve for OO in C# would look something like:

Level 0: God-classes, god-methods. Puts the entire program in the "Main" method.

Level 1: Most of the logic is in "Main" or other static methods, with some working, mutable data stored in classes. No inheritance.

Level 2: Logic and state are starting to get distributed between classes, but lumpily -- some classes are thousands of lines long and others are anemic. Inheritance is used, badly, as a way to avoid copy/pasting code. Short-sighted inheritance based on superficial similarities, like "Dog : Animal". No clear separation of responsibilities, but "private" is starting to make an appearance. If design patterns are here, they're used arbitrarily. Still lots of mutability around. This is "OO Programming" as taught in early textbooks. It's bad.

Level 3: Methods and classes are starting to ask for contracts/abstractions instead of implementations. Inheritance hierarchies are getting smaller, include abstract classes, and are starting to be organized by need and functionality, rather than by superficial similarity; things like "TextNode : Node". Classes are clearly articulating their public surface vs. private details, with logic behind which is which. Generics are used, but mostly just with the built-in libraries (e.g. IEnumerable<T>). Design patterns are used correctly. Mutability is still everywhere. If interfaces are used, it's in that superficial enterprisey way that makes people hate interfaces: "Foo : IFoo", "Bar : IBar", for no discernable reason. This is "OO Programming" as taught in higher-level textbooks.

Level 4: No more "Dog : Animal". If inheritance is used at all, it's 1 layer deep (not counting Object), and the top layer is abstract. Code de-duplication is done via composition, not inheritance. Fluent/LINQ methods like .Select() [map] and .Where() [filter] have mostly replaced explicit loops. A large percentage of the code is "library" code -- new data structures and services for downstream use. Generics are everywhere, and not just with standard-library classes. Interfaces are defined by the needs of their consumers, not by their implementations -- you may not even see an implementation of an interface in the same project it's defined (this is a code-fragrance; a good smell!). Liberal use of Func<> and Action<> has eliminated almost all of the explicit design patterns and superficial inheritance that used to exist. Mutable state is starting to be contained and managed, perhaps via reactive programming or by limiting the sharing of mutable objects. This doesn't look much like OO as taught in textbooks.

Level 5: Almost all code is library ("upstream") code, with a clear, acyclic dependency graph. Inheritance is virtually absent; an abstract class may show up occasionally, but only because it hasn't been replaced with something better yet. Most code is declarative using fluent/functional-style methods on immutable data structures, like .Select() and .Where(). Where Level 4 may have abandoned that style at the limit of the "out of the box" data structures, Level 5 just writes their own immutable fluent/functional data structures when they need to. This means heavy use of interfaces, Func, and generics, including co- and contra-variance. It also means adapting ideas from the functional world, such as Monoids and the "monadic style" (but not an explicit Monad type, both due to the lack of higher-kinded types and due to the fact that Monad is a red-herring abstraction that is not useful on its own). Most code looks like it's written in a mini domain-specific language, whose output is not a result, but a plan (i.e. lots of lazy evaluation, but with sensible domain-specific data structures, not with raw language elements like LISP). Data is largely organized via relational concepts (see: Out of the Tar Pit), regardless of the underlying storage layer. Identity and state are separated. Data and function blend seamlessly. Mutability is almost exclusively relegated to the internals of an algorithm, mostly in said data structures. Virtually no mutable state is shared unless it's intrinsically necessary. If it is necessary, it's tightly controlled via reactive programming or something similar. A few performance-critical loops look almost like C, with their own memory models, bit twiddling, and other optimizations, but these are completely internal, private details, well commented and thoroughly tested. This looks nothing like OO Programming as taught in textbooks. It looks a lot more like functional programming (with some procedural sprinkled in) than OO.
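The declarative, lazy pipeline style described in Levels 4 and 5 can be sketched outside C# too. Here is a hypothetical example using Java streams (the `Order` record and the data are invented for illustration), where `.filter()` and `.map()` play the roles of `.Where()` and `.Select()` — note that the pipeline is a lazy plan, and nothing executes until the terminal `.toList()` call:

```java
import java.util.List;

public class Pipeline {
    // Immutable data carrier; fields are final, accessors are generated.
    record Order(String customer, double total) {}

    public static void main(String[] args) {
        List<Order> orders = List.of(
            new Order("ada", 120.0),
            new Order("bob", 30.0),
            new Order("ada", 75.0));

        // A lazy, declarative plan: filter (Where), map (Select), dedupe.
        List<String> bigSpenders = orders.stream()
            .filter(o -> o.total() > 50)
            .map(o -> o.customer().toUpperCase())
            .distinct()
            .toList();

        System.out.println(bigSpenders); // [ADA]
    }
}
```

The point of the sketch is the shape, not the API: mutable loop variables and intermediate collections are replaced by a composed description of the computation over immutable inputs.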

If there's a Level 6, I'm not there yet, nor have I seen it (or known what I was looking at if I did).

So when I see someone say "programmers aren't doing OO programming", I don't know what that means. Only Levels 2 and 3 above look much like "object-oriented programming". If nobody told you C# was supposedly an "object-oriented language", and all you saw was Level 5 code, would you know OO was supposed to be the overriding paradigm?

Are people avoiding OO programming because they can't do it, or because they evolved past it? To someone stuck at Level 3, Level 5 code might look unnecessary, overly complicated, whatever. It might look like code written by someone who doesn't know how to do OO.

[+] JackFr|3 years ago|reply
> The biggest problem with this hybrid approach is that it still allows developers to ignore the functional aspects of the language. Had we left GOTO as an option 50 years ago, we might still be struggling with spaghetti code today.

This is demonstrably false: C has always had a goto, and yet by custom and in practice its use is greatly circumscribed.

[+] anfelor|3 years ago|reply
Even though this article comes from a reputable source, it should be pointed out that the author is not a researcher in the area -- and the decision not to include various MLs, OCaml, Scala, or F# in the chart of functional languages seems controversial. So this article does not speak for the community. If you want to read more about using Functional Programming in Industry, I would recommend Yaron Minsky's https://queue.acm.org/detail.cfm?id=2038036 instead.

Why did the GOTO statement fall out of favor with programmers? If you look at Knuth's famous article weighing the costs and benefits of GOTO (https://pic.plover.com/knuth-GOTO.pdf), you can see many calculations where the GOTO statement can save you a tiny bit of runtime. Today, these matter far less than all of the other optimizations that your compiler can do (e.g. loop unrolling, SIMD vectorization, etc.). Similarly, in some domains the optimizations that functional compilers can do matter more than the memory savings mutation could bring.

Personally, I believe that within the next few decades memory usage will matter more, but even then functional programming languages can do well if they can mutate values that only they reference (https://www.microsoft.com/en-us/research/uploads/prod/2020/1...). This does not break the benefits of immutability, as other parts of the program cannot observe the mutation.

I disagree with the article's premise that functional programming is "hard to learn". It might be today, but it doesn't have to be. Monads are usually difficult for beginners, but algebraic effects are almost as powerful while being much simpler. They have slowly become mainstream (and might even make it into WASM!). It is an exciting time for functional languages, and many people are working to make them even better!

[+] thoradam|3 years ago|reply
> the decision not to include various MLs, OCaml, Scala, or F# in the chart of functional languages seems controversial

I don't know why that would be controversial. There's a very clear distinction between (MLs, OCaml, Scala, F#) and (Haskell, Elm, PureScript, etc.): the latter are pure by default.