I think these two points are important. From the article:
> Object-oriented programming is popular in big companies, because it suits the way they write software. At big companies, software tends to be written by large (and frequently changing) teams of mediocre programmers.
And from the linked analysis by Jonathan Rees:
> This is related to Lisp being oriented to the solitary hacker and discipline-imposing languages being oriented to social packs, another point you mention.
I would guess that most of the world-changing programming work that can be done is work that requires collaborating with other, unknown people. (But not all! HN itself is a good counterexample, being the work of very few developers and still having a significant influence on the world.) You'll write something at a company, get reorged away (or leave for another job), and someone else will launch it. You'll write some open-source library and years later someone will make something cool with it. You'll contribute to an existing large-scale project, because that gives you leverage you couldn't get from your own project - perhaps because of an inability to replicate the technical accomplishments of the project, but more likely an inability to replicate the network effects of it already having users. You'll build a prototype of something that proves something is doable, but gets significantly rewritten before it goes into production.
If you're interested in these problems, it's in your interest to work in a language that lends itself to other unknown people picking up your work and running with it. If you want to make single-hacker scale projects that will not grow much beyond you, there's nothing wrong with that - but I think you have to be either lucky or able to commit time for the next several years if you want your project to get traction.
Why is there never a spectrum? Why is it always "single-hacker project" vs. "a huge project with tens or hundreds of contributors"? Why do these discussions always hijack functional languages into "they are only used by hackers"? (And what does "hacker" even mean in this context? The word has like 50 meanings already.)
This fabricated extreme dichotomy is killing the discussion. Every time. And it's sad.
We the techies are supposed to be objective and act on a meritocratic basis. I get very disappointed every time I see us just bickering over imaginary strawmen.
> My own feeling is that object-oriented programming is a useful technique in some cases, but it isn't something that has to pervade every program you write.
That could apply to pretty much any programming paradigm.
I wouldn't use object oriented programming for small programs. Why bother with types when you can express things in a couple hundred lines of python with functions? Why write classes to munge some data that could be better expressed as a series of map/fold/etc. on lists of tuples?
However, for keeping large codebases approachable and maintainable, OOP is the best paradigm I have personally seen.
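The map/fold-on-tuples point above can be sketched in a few lines of Python (a made-up word-count example, not anyone's actual code):

```python
# Hypothetical example: summing counts per label as a filter + fold
# over a list of tuples -- no class in sight.
from functools import reduce

records = [("spam", 3), ("ham", 1), ("spam", 2), ("eggs", 5)]

spam_total = reduce(
    lambda acc, pair: acc + pair[1],                   # fold: accumulate counts
    filter(lambda pair: pair[0] == "spam", records),   # keep only "spam" rows
    0,
)
print(spam_total)  # 5
```

For a one-off script, this says everything a `RecordAggregator` class would, in a fraction of the code.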
> I wouldn't use object oriented programming for small programs. Why bother with types when you can express things in a couple hundred lines of python with functions?
Hm? Types don't require OO. Even your 100 line program can benefit from type checking.
Approachable code must be structured in a meaningful way. That doesn't require OO either.
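To illustrate: type annotations in Python attach to plain functions, no classes required (a minimal made-up example; tools like mypy check it statically, and at runtime the hints are inert):

```python
# A typed plain function -- type checking without any OO.
def mean(values: list[float]) -> float:
    if not values:
        raise ValueError("mean of empty list")
    return sum(values) / len(values)

print(mean([1.0, 2.0, 3.0]))  # 2.0
```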
It depends what you mean by OOP. If you're talking about class-oriented programming (aka everything is a class) then I disagree. That can get painful to maintain in large applications, especially when concurrency becomes important.
I'd say this is more or less the mainstream view these days. Both Go (2012) and Rust (2015) decided to do polymorphism with a trait system rather than inheritance, for example.
I can count on 0 hands how many times I have used inheritance in the past 6-7 years. And if you don't count times that a framework requires me to use it to accomplish something, like React.Component or NSTableView, then that number goes back something like 15 years.
"Objects are just collections of first-class functions." "The lambda-calculus was the first object-oriented language: all its data is represented behaviorally as objects."
The lambda calculus is not an object-oriented language, because it does not have hidden (mutable) state. Notably, the idea of using 'at-end? and 'read messages goes wrong precisely when there's hidden mutable state behind the "stream" (e.g. some other thread consumes the stream in between the 'at-end? and the 'read). As the saying goes, you wanted a banana but what you got was a gorilla holding the banana and the entire jungle. That can't happen in the lambda calculus.
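A rough Python rendering of the closure-encoded stream (the `at_end`/`read` message names just echo the 'at-end?/'read example above; this is an illustrative sketch, not Cook's or pg's code). With no hidden mutable state, `read` returns the value together with the rest of the stream, so nothing can change between the two messages:

```python
def make_stream(items):
    """Encode an immutable stream as a message-dispatching closure."""
    def dispatch(msg):
        if msg == "at_end":
            return not items
        if msg == "read":
            # Pure version: return the value AND the rest of the stream,
            # instead of advancing a hidden cursor. No shared state means
            # no race between 'at_end' and 'read'.
            return items[0], make_stream(items[1:])
        raise ValueError(msg)
    return dispatch

s = make_stream((1, 2))
print(s("at_end"))   # False
head, rest = s("read")
print(head)          # 1
```

A mutable-cursor variant of the same interface is exactly where the at-end?-then-read check stops being safe.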
I know Hacker News is hardly the place to disparage anything said by Paul Graham, but I have to ask: is anyone else turned off by his constant reference to "smart people"? Like in the first sentence of this essay, where he immediately declares:
> There is a kind of mania for object-oriented programming at the moment, but some of the smartest programmers I know are some of the least excited about it.
Is that supposed to convince me? "Oh, well, if he knows some smart people who are against it, maybe I should stop liking it, too, because I definitely want to fit in with the smart people Paul Graham knows."
It's such a common thing in Silicon Valley, it kills me. "Always be the dumbest person in the room. Surround yourself with people smarter than you." Like, do I have to ditch any old friends that don't make the cut? What does it mean for someone to be "smarter than me"? Do they have to have a higher SAT score? Do they have to play an instrument better than I do? Do they have to be more financially successful? Can I still learn things from people who are less successful than me?
I just really don't like the mentality of deeming someone "smart" and then using their personal preferences as an inherent argument. That's the fallacy of the argument from authority.
If you don't like OOP, great, carefully explain its shortcomings to me. And perhaps he goes on to do that. I haven't finished reading yet. I just wanted to stop after that first sentence and say that as an opener, I hate it.
However crappy it sounds, there are people who have a hard time grasping relatively simple programming concepts, and then there are also engineers who are astoundingly intelligent and productive.
It doesn't necessarily have to be attributed merely to raw intellect, but denying it isn't in line with reality.
Having worked at many different companies you can definitely see how certain cultures, problem areas and technologies attract people with distinct intellectual bents and how that impacts development velocity.
How much these talented people can leverage their skills is still dependent on the organization, but pretending those differences aren't out there or that you can't hire for these profiles is naive.
This is not pg’s best. OOP was invented in academia as a way to scale software design: you define abstractions that you can still keep in your head even after the system becomes very large. You don’t need OOP to write hello world, yet many languages such as C# and Java enforce it; to pg’s point, that hello world becomes quite ugly and verbose. On the other hand, I have known C programmers writing millions of lines of code and bragging about never needing the OOP features of C++. When I look into their code I often find that they have just reinvented OOP in their own home-brewed way, in a rather less disciplined and mediocre manner. But still they consider themselves too smart to need OOP.
16y later: OOP is still pervasive, but enjoys a healthier relationship with functional primitives in popular languages so that programmers can choose the right tool for each task. Meanwhile, Paul has built an extraordinary career in a role where it makes far more sense to aggressively dismiss established wisdom, creating and coaching new business models. All seems about right!
I personally prefer fully object-oriented languages, as long as it's an option and not a must like in Java.
Replacements for OOP are just another fad, in twenty years from now people will come up with new fancy object-oriented languages, maybe functional ones with immutable objects, or with some other paradigm that replaces traits, mixins, interfaces, etc.
OOP is good for domains that naturally give rise to a clearcut inheritance structure, such as stateful GUI frameworks or ontologies that directly mirror real-world objects. It's not a magic pink unicorn, but that's true about all other approaches.
If you want everything but small, compact executables, you can already have that today in Common Lisp. It can be used to write functional programs or in object-oriented styles, and it has almost every other programming language concept you can imagine. Still, not many people use it. The choice of the latest programming languages has almost nothing to do with their features and much more to do with the availability of tools, libraries, and programmers.
>OOP is good for domains that naturally give rise to a clearcut inheritance structure, such as stateful GUI frameworks or ontologies that directly mirror real-world objects
Stateful GUI frameworks are not the only way to build UI frameworks... It'd be pretty easy to argue that people are building stateful UI frameworks because they're predisposed to think about them in terms of OOP.
And so far as I can tell, inheritance ontologies never directly mirror any real world objects. Real world objects are purely compositional with some specialized emergent context-sensitive behaviors.
I don’t think so. Go is picking up steam with no sign of slowing down, and the more experience I get, the less I write OO code (I basically only use inheritance as a poor-man’s substitute for automatically delegating to a member, a la anonymous struct embedding in Go—basically syntax sugar).
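The "delegate to a member" idea above can be sketched in Python (a made-up Car/Engine example; `__getattr__` forwarding plays roughly the role of Go's anonymous struct embedding):

```python
class Engine:
    def start(self):
        return "vroom"

class Car:
    def __init__(self):
        self._engine = Engine()

    def __getattr__(self, name):
        # Called only when normal attribute lookup fails;
        # forward the request to the embedded member.
        return getattr(self._engine, name)

print(Car().start())  # vroom
```

Composition plus forwarding gives the reuse without making `Car` an `Engine` in the type system, which is most of what people actually want from inheritance.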
If you look at more recently developed GUI architectures, there is a heavy influence from functional reactive programming. The UIs are modelled as reductions over event streams, with no mutable state.
Most architectures around React do this, for example. The reason virtual DOM technology was developed on the web was to hide away the stateful nature of the older, OOP-influenced DOM API.
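The "UI state as a reduction over an event stream" idea can be sketched like so (a toy reducer in Python; the shape loosely mirrors Redux-style reducers, and all the event names are invented):

```python
from functools import reduce

def update(state, event):
    """Pure reducer: (state, event) -> new state; never mutates in place."""
    kind, payload = event
    if kind == "increment":
        return {**state, "count": state["count"] + payload}
    if kind == "set_title":
        return {**state, "title": payload}
    return state  # unknown events leave state untouched

events = [("increment", 1), ("set_title", "Hello"), ("increment", 2)]
final = reduce(update, events, {"count": 0, "title": ""})
print(final)  # {'count': 3, 'title': 'Hello'}
```

The rendered UI is then just a function of `final`; no widget holds mutable state of its own.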
>> Maybe I'm just stupid, or have worked on some limited subset of applications.
It seems to me that Graham answers this question in the preceding paragraph:
>> I've done a lot of things (e.g. making hash tables full of closures) that would have required object-oriented techniques to do in wimpier languages, but I have never had to use CLOS.
He simply writes ad-hoc implementation of an object system, instead of using one provided with his favourite language. Seems like a bad case of NIH syndrome.
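For the curious, a "hash table full of closures" might look like this in Python (a hypothetical counter, not pg's actual code):

```python
def make_counter(start=0):
    count = start

    def increment(by=1):
        nonlocal count
        count += by
        return count

    def value():
        return count

    # The "object" is just a dict mapping message names to closures
    # that share the same captured state.
    return {"increment": increment, "value": value}

c = make_counter()
c["increment"]()
c["increment"](2)
print(c["value"]())  # 3
```

Whether that counts as an ad-hoc object system or just a data structure is exactly the point under debate in this subthread.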
I've always disagreed with pg's contention that objects are the poor man's closures, at least w.r.t. Common Lisp. I write a lot of Common Lisp code and almost every one of my programs uses CLOS in some way (and many of my programs use closures too; they're complementary). I'd probably never use CLOS if it limited me to single inheritance or single dispatch or prohibited reclassification like most OO systems do, but it doesn't. CLOS is OO without the arbitrary restrictions.
> He simply writes ad-hoc implementation of an object system, instead of using one provided with his favourite language. Seems like a bad case of NIH syndrome.
Could be. Or maybe he just accepts OOP as a necessary tool in a small percentage of cases and opts for the lesser of the two evils -- rolling his own mini-OOP system as opposed to accepting all the baggage that comes with a first-class OOP language (like side effects and bloated code). Remember the saying "when you code with an OOP language, you not only get the banana, but the gorilla and the whole jungle".
I know I would defend OOP to my dying breath just a short 2-3 years ago. But when I started working with FP languages I discovered that OOP is vastly overrated.
I am not as naive as to think that I will change your mind. Just offering another perspective.
I don't think making a hash table of closures is necessarily equivalent to "object orientation". Sure, if you use a set of well-known keys to represent an interface, then it's headed that way. But the same structure can be used for other things, e.g. a table of opaque IDs to closures encapsulating a possible "next step" through a workflow in an interactive program. Indeed, I think the original news.arc did exactly that in many places (and contemporary Hacker News still does somewhat?).
(My reading of the "wimpier languages" bit was that it was probably a dig at Java, where [prior to Java 8], you had to explicitly write an inner class to emulate a closure...)
payne92 | 7 years ago:
A functional style is not the only way to write programs, but it’s often much easier to reason about behavior when you’re not chasing side effects.
And OO implementations often have a LOT of side effects.

krapp | 7 years ago:
If you're worried about side effects and want pure functional programming... Arc is not the language for you.
still_grokking | 7 years ago:
https://softwareengineering.stackexchange.com/questions/2472...
discreteevent | 7 years ago:
http://wcook.blogspot.com/2011/04/paul-graham-on-objects-in-...
pka | 7 years ago:
React begs to differ.
> or ontologies that directly mirror real-world objects.
Such as?

pdimitar | 7 years ago:
These can -- and have been -- influenced by corporations. Many times.
It's a vicious cycle.