The venerable master Qc Na was walking with his student, Anton. Hoping to prompt the master into a discussion, Anton said "Master, I have heard that objects are a very good thing - is this true?" Qc Na looked pityingly at his student and replied, "Foolish pupil - objects are merely a poor man's closures."
Chastised, Anton took his leave from his master and returned to his cell, intent on studying closures. He carefully read the entire "Lambda: The Ultimate..." series of papers and its cousins, and implemented a small Scheme interpreter with a closure-based object system. He learned much, and looked forward to informing his master of his progress.
On his next walk with Qc Na, Anton attempted to impress his master by saying "Master, I have diligently studied the matter, and now understand that objects are truly a poor man's closures." Qc Na responded by hitting Anton with his stick, saying "When will you learn? Closures are a poor man's object." At that moment, Anton became enlightened.
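The koan's point is easy to make concrete. Here is a minimal sketch in Python (the names are illustrative) of an "object" built from nothing but closures: the closed-over variable plays the role of a private instance field, and the "methods" are functions sharing it.

```python
def make_counter(start=0):
    # `n` is the closed-over "instance variable"; nothing outside the
    # closures below can touch it directly
    n = start

    def increment():
        nonlocal n
        n += 1

    def value():
        return n

    # the "object" is just a record of closures sharing the same state
    return {"increment": increment, "value": value}

counter = make_counter()
counter["increment"]()
counter["increment"]()
print(counter["value"]())  # prints 2
```

Going the other direction, a closure is just an object with a single method, which is why the master's stick lands either way.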
Seems to me this post is based on a misunderstanding of the Alan Kay quote. Kay wanted to minimize the role of _assignment_, not mutability. So instead of saying things like a=f(a), you say a.f(), where the method f simply mutates a in place. At least, that's how it sounds to me.
I think you misunderstood the quote. Specifically, "even if presented figuratively", and "Many programs are loaded with “assignment-style” operations now done by more expensive attached procedures".
Setters are assignment-style operations, and a.f() is an attached procedure.
Also, in http://www.c2.com/cgi/wiki?AlanKayOnMessaging you can see "[I realized] that assignments are a metalevel change from functions, and therefore should not be dealt with at the same level - this was one of the motivations to encapsulate these kinds of state changes, and not let them be done willy nilly"
Assignment and mutability aren't quite the same thing, but they're pretty close. If a.f() has the effect of mutating the internal state of a, how does that occur? By assignment!
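A tiny illustration of that point (the class here is made up for the example): even when the caller never writes `a = ...`, the mutation inside the method still happens by assignment, just to a field rather than to a caller-visible variable.

```python
class Account:
    def __init__(self):
        self.balance = 0

    def deposit(self, amount):
        # the caller wrote no assignment, but the mutation still
        # occurs right here, by assignment to a field
        self.balance = self.balance + amount

a = Account()
a.deposit(10)  # no "=" at the call site, yet state changed
print(a.balance)  # prints 10
```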
In Smalltalk, there's a fairly common pattern of immutable objects. Their state is set when the object is created and then never changed. There might be getter methods, but there are no setters, and in fact the only method that makes assignments to instance variables is the initializer. It's not hard to see how that could be extended from "fairly common" to being a fundamental part of the language.
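Sketched in Python rather than Smalltalk (the Point class is invented for illustration), the pattern looks like this: assignments to instance variables happen only in the initializer, and "modification" hands back a new object.

```python
class Point:
    # Smalltalk-style immutable object: instance variables are assigned
    # only in the initializer; there are getters but no setters
    def __init__(self, x, y):
        self._x = x
        self._y = y

    @property
    def x(self):
        return self._x

    @property
    def y(self):
        return self._y

    def translated(self, dx, dy):
        # "modifying" a Point returns a fresh one; the original is untouched
        return Point(self._x + dx, self._y + dy)

p = Point(1, 2)
q = p.translated(3, 4)
print(q.x, q.y)  # prints 4 6
```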
FP ~ math/formal logic. OOP is a grab bag of theories, ideas, and their applications to programming. Unfortunately, I don't think there's a generally accepted formal notion of what OOP actually is. Functional languages, on the other hand, while they vary somewhat, all borrow from the same core principles. Moreover, feature parity is much clearer across functional languages than across OO ones: terms such as "purity" have a well-defined meaning.
This is by no means a dogging of OOP. I personally think OOP is better suited to practical business computing. Also, FP tends to have a higher learning curve (e.g. monads), such that people off the street and in many academic programs gravitate towards the OOP paradigm. That could say something about how natural OOP is to use, or maybe it just says something about how everybody is taught that programming "should be" done (imperative vs. functional).
"Object interfaces are essentially higher-order types, in the same sense that passing functions as values is higher-order. Any time an object is passed as a value, or returned as a value, the object-oriented program is passing functions as values and returning functions as values. The fact that the functions are collected into records and called methods is irrelevant. As a result, the typical object-oriented program makes far more use of higher-order values than many functional programs."
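Cook's point in miniature (a contrived sketch): the two calls below do the same higher-order thing; the object version merely bundles the function into a record first.

```python
def apply_twice(f, x):
    # explicitly higher-order: f is a function received as a value
    return f(f(x))

class Doubler:
    def step(self, x):
        return x * 2

def apply_twice_obj(stepper, x):
    # the object is a record of functions; stepper.step is a function
    # received as (part of) a value
    return stepper.step(stepper.step(x))

print(apply_twice(lambda x: x * 2, 3))  # prints 12
print(apply_twice_obj(Doubler(), 3))    # prints 12
```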
People always half quote Kay on C++. He says "I came up with the term object-oriented, and I can tell you I did not have C++ in mind", but if you listen to the actual talk this comes up in, he immediately follows with (slight paraphrasing here) "Of course I'm not sure I had smalltalk in mind either"
The CS department at the Ohio State University built their own language on top of C++ called RESOLVE/C++. A lot of students absolutely hated it, but it introduced a pretty awesome paradigm for working with objects: swapping.
If a function needs to operate on an object, rather than messing with pointers or copying the object, the function instead "swaps" the object with an initialized object. The function's "contract" specifies that it "consumes" the object that is passed in, so in code that calls that function, it is apparent that the caller had better copy the object first if it needs to access it again.
I think this provides a nice paradigm for allowing for immutability without excessive copying. The only functions that actually modify objects are the object methods themselves.
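The swap idea can be roughly approximated in Python (RESOLVE's actual contracts and syntax are richer than this; the one-slot "cell" is an invention of the sketch): the consuming function exchanges the caller's value for a freshly initialized one instead of copying or aliasing it.

```python
def swap(cell_a, cell_b):
    # exchange the contents of two one-slot cells: no data is copied,
    # only the references change hands
    cell_a[0], cell_b[0] = cell_b[0], cell_a[0]

def total_consuming(cell):
    # swap-style contract: consumes the list held in `cell`, leaving
    # the caller holding a freshly initialized (empty) list
    mine = [[]]
    swap(mine, cell)
    return sum(mine[0])

data = [[1, 2, 3]]
print(total_consuming(data))  # prints 6
print(data)                   # prints [[]] -- visibly consumed
```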
The RESOLVE/C++ language has been around for well over a decade, but it hasn't garnered much interest outside of academia. It's a shame, because it's a reasonably high level language that allows for mathematical proofs of correctness while still compiling down to machine code.
An ex-girlfriend of mine was in the CS program at OSU and I helped her out occasionally with her homework, and as such spent many nights pulling my hair out and cursing the name of whoever created RESOLVE/C++. The algebraic proof side of things was so poorly documented and designed that it was effectively impossible to use for its intended purpose, and the macros for things like simple comparisons were just excruciating. I've dealt with many languages, but RESOLVE/C++ is in the elite group in which I found absolutely no upside. I really don't understand why it's still in use.
I think with languages like Haskell, ML, Coq, ACL2 and their ilk, you're not going to get a lot of traction for theorem proving in an OOP-ed C++ variant. The first two do compile to machine code, and have type systems that better lend themselves to theorem proving. I'm not sure if C++ OOP is formal enough (or generally accepted as such) to be a strong contender in theorem proving.
The machines that we deal with are mutable as hell (unless we see time as an implicit argument ;)). Most early languages were thin abstractions from machine code, and the world kind of built from there.
Doesn't mutability force you to actually copy data? Immutability enables sharing, which needs less copying.
(Of course, what it really comes down to is whether you use multiple different copies of your data structures. If you only ever use one, even functional languages can update them in place. See Clean's uniqueness types as an example, or Haskell's ST monad.)
Actually, today's hardware, with multiple cores/CPUs clashing over shared memory through non-shared cache, is less friendly to mutability than the old systems.
Immutability does require different data structures; a (singly-linked) list is easy to make immutable in most cases -- many lists with the same tail part can share that tail, without copying when a new list starts sharing that tail. Arrays, not so much.
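The shared-tail point, sketched with tuples as cons cells:

```python
def cons(head, tail):
    return (head, tail)

def to_list(cell):
    # walk a cons-cell chain into an ordinary Python list
    out = []
    while cell is not None:
        out.append(cell[0])
        cell = cell[1]
    return out

base = cons(2, cons(3, None))   # the list [2, 3]
with_one = cons(1, base)        # [1, 2, 3]
with_zero = cons(0, base)       # [0, 2, 3]

# both new lists share the SAME tail object; nothing was copied
print(with_one[1] is base, with_zero[1] is base)  # prints True True
print(to_list(with_one))                          # prints [1, 2, 3]
```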
Ah those were the days when computers hardly had enough memory to even keep the Haskell standard prelude in memory.
Immutability doesn't necessarily require copying data all the time, though, if you copy data only when it's modified and use the right data structures, does it?
Quite a superficial post. The Alan Kay quote is well-known, and it only touches upon mutable vs immutable data. Purity alone does not define functional programming, since there are plenty of counter-examples (Erlang, Lisp, OCaml).
The purpose of functional programming isn't to eradicate mutable state as if it were criminal. Mutable state is necessary. It's to control mutable state so that it's easy to determine where state is and how it is expected to behave. The problem with state is that if the architecture is sloppy and it's all over the place, it can lead to impossible-to-reason-about program behavior, especially when the rules that govern the state have been written emergently by several parties over years. Mutable state is a flower in the right part of the garden and a weed in the wrong part of it.
Haskell takes the most extreme approach of the mainstream languages, which is to segregate mutable state entirely into monads (e.g. IO, STM). OCaml does this in a more practical way: some types (e.g. arrays, hashtables, records with mutable fields, ref cells) are mutable while others (e.g. linked lists, tree-based associative maps) are constitutionally immutable. Common Lisp really isn't "purely functional" at all; it provides the tools necessary to write functional code, but it also has functions like NCONC and RPLACA that allow in-place modification of cons cells, put into the language for historical reasons.
"Object-oriented programming" seems to have a "blind man and the elephant" problem. People differ radically in their opinions of it based on what they think "OOP" is. The problem with OO as commonly practiced is that it encourages the promiscuous, distributed proliferation of mutable state. Sometimes this is the right model, especially when involving processes that might be on separate machines, but not usually.
For example, you can write a Die class with a sides field (integer), a topFace field (integer), and a roll method that sets topFace to a random integer between 1 and sides, inclusive. To use the Die, call roll and then getTopFace. Want to roll 5 d10s? Create Die(10) five times, then call .roll and .getTopFace on each of them. By the way, you now have five pieces of mutable state in your program, and if you ever forget to call .roll and only call .getTopFace on your dice (maybe you've been tempted to keep the Die objects around for "efficiency"), you get erroneous results.
Clearly, that's the wrong way to solve the problem. It's better to just call (rand 10) five times: in general, you only care about the results of the rolls, not the "dice". (This approach still isn't "purely functional" as written, because it still uses a randomizer, i.e. state, but it's close enough for most purposes. Passing around a PRNG state is pretty heavyweight.)
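Both versions side by side, as a sketch in Python (the method names follow the description above):

```python
import random

class Die:
    # the stateful version: each Die is a piece of mutable state
    def __init__(self, sides):
        self.sides = sides
        self.top_face = None   # stale until roll() is called

    def roll(self):
        self.top_face = random.randint(1, self.sides)

    def get_top_face(self):
        return self.top_face

# roll 5 d10s, object-style: five objects to remember to roll
dice = [Die(10) for _ in range(5)]
for d in dice:
    d.roll()
object_results = [d.get_top_face() for d in dice]

# the direct version: just ask for five results, no dice to forget to roll
direct_results = [random.randint(1, 10) for _ in range(5)]

# the hazard described above: an unrolled Die yields a non-result
print(Die(10).get_top_face())  # prints None
```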
Sometimes you need the full power of pi-calculus (communication, message passing) but usually you want to stick with lambda-calculus (referentially transparent computation) as far as you can take it. Mutable state is almost always necessary at some point, but you want to segregate it as much as you can.
I think Java, C++, et al. get it wrong by overloading the . operator. On one hand it means "gimme this field of that object", and on the other hand it means "find this function in the method table of the class". These are completely different things, and it all goes downhill from this mistake.
jimwise|15 years ago
From:
http://people.csail.mit.edu/gregs/ll1-discuss-archive-html/m...
silentbicycle|15 years ago
The parallels between objects and closures are neither here nor there.
davidmathers|15 years ago
That's because Java and C++ primarily embody Barbara Liskov's vision of object-oriented programming: the abstract data type vision.
loup-vaillant|15 years ago
Alan Kay really is for segregating mutable state.
davidmathers|15 years ago
The OP is kind of ridiculous actually.
discreteevent|15 years ago
On Understanding Data Abstraction, Revisited by William R. Cook
Discussed at:
http://lambda-the-ultimate.org/node/3668
silentbicycle|15 years ago
See also this interview with Alan Kay: http://queue.acm.org/detail.cfm?id=1039523
richcollins|15 years ago
This is an argument against publicly accessible internal state, not stateful programming.