
App Developers on Swift Evolution

83 points | ingve | 10 years ago | curtclifton.net

77 comments

[+] pcwalton|10 years ago|reply
Apple's engineers are right on this one. From a performance point of view, the only way Java (and dynamic languages like JavaScript) get away with having almost everything be virtual is that they can JIT under a whole-program analysis, add final automatically in the absence of subclassing, and deoptimize in the presence of classloaders. In other words, Java optimistically compiles under the assumption that classes that aren't initially subclassed won't be, and if that assumption later turns out to be invalid it deletes the now-incorrect JIT code and recompiles. This isn't an option for Swift, which is ahead-of-time compiled and can't recompile on the fly.

If you don't allow an ahead of time compiler to devirtualize anything, you're going to have worse method call performance than JavaScript.
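The speculative-devirtualization scheme pcwalton describes can be sketched in a few lines of Python. This is a toy model, not any real JIT's machinery; every name in it is invented for illustration. The trick is the same, though: assume a method is effectively final, cache a direct call to it, and throw that fast path away ("deoptimize") the moment a subclass appears.

```python
# Toy sketch of speculative devirtualization: treat a method as
# effectively final (cache a direct target) until a subclass is
# loaded, then fall back to normal dynamic dispatch.

class Widget:
    # Cached "devirtualized" call target; None means "use dynamic dispatch".
    _draw_target = None

    def __init_subclass__(cls, **kwargs):
        super().__init_subclass__(**kwargs)
        # A subclass was loaded: the "never overridden" assumption is
        # now invalid, so discard the specialized fast path.
        Widget._draw_target = None

    def draw(self):
        return "Widget.draw"

def call_draw(obj):
    if Widget._draw_target is not None:
        # Fast path: direct call, no virtual lookup.
        return Widget._draw_target(obj)
    # Slow path: ordinary dynamic dispatch.
    return obj.draw()

# "Compile" optimistically while no subclasses exist.
Widget._draw_target = Widget.draw
assert call_draw(Widget()) == "Widget.draw"

# Defining a subclass triggers deoptimization via __init_subclass__.
class FancyWidget(Widget):
    def draw(self):
        return "FancyWidget.draw"

assert call_draw(FancyWidget()) == "FancyWidget.draw"
```

An ahead-of-time compiler has no equivalent of that invalidation step, which is exactly the asymmetry the comment is pointing at.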

[+] dilap|10 years ago|reply
Yet Obj-C seems to get by quite fine, and everything is super-dynamic there.

Being able to dynamically hack behavior is a huge boon when coding against a closed-source, 3rd party platform.

Doesn't matter so much in an open-source context, where you can just modify code and ship your own version of the library if necessary.

[+] hyperpape|10 years ago|reply
I immediately agreed with what you wrote, but now there's a voice in the back of my head asking "if this is so obviously right, why is it just happening now, after 2.0? It's not like this Lattner guy doesn't know what method calls cost..."

A post last week featured Lattner writing about Swift treating methods as final when they weren't declared that way (https://lists.swift.org/pipermail/swift-evolution/Week-of-Mo...). I wonder if the early thought was that they'd get enough mileage out of those "cheats", and then later decided that final should be the default.

[+] coldtea|10 years ago|reply
>If you don't allow an ahead of time compiler to devirtualize anything, you're going to have worse method call performance than JavaScript.

Well, Swift as it is (i.e. before adopting final by default) has great performance.

So what gives?

[+] klodolph|10 years ago|reply
This is a cultural issue much more than a technical issue. I can relate to both sides. "I want to be able to patch anything myself," versus "I want to be able to reliably reason about how my module works."

The rhetoric on both sides can quickly get stupid. This is one of the major ways in which languages are divided. Ruby folks are used to being able to monkey patch everything, and Python folks tend to avoid it even though they can. JavaScript programmers are divided on the issue: should you add a method to Array.prototype, or is that just asking for trouble? I've certainly seen my own fair share of bugs and crashes caused by programmers substituting types where they shouldn't, and seen my fair share of frustrating limitations in sealed modules that should just expose that one method I need, dammit.
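The Array.prototype hazard has a direct Python analogue. Here is a minimal sketch, with both "libraries" invented for the example, of how two packages patching the same class silently clobber each other:

```python
# Two hypothetical libraries monkey-patch the same shared class.

class JSONList(list):
    """Stand-in for a shared built-in, like Array.prototype in JS."""

# Library A adds a lenient helper: returns None on an empty list...
def first_or_none(self):
    return self[0] if self else None
JSONList.first = first_or_none

# ...and library B later patches the *same* name with stricter
# semantics: raise IndexError on an empty list.
def first_or_raise(self):
    return self[0]
JSONList.first = first_or_raise

# Import order now decides behavior. Code written against library
# A's contract breaks once B loads:
empty = JSONList()
try:
    result = empty.first()
except IndexError:
    result = "IndexError"
assert result == "IndexError"  # A's callers just broke
```

Whichever library loads last wins, and neither can detect the conflict, which is why "should you touch the shared prototype?" stays contentious.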

Objective-C leaned towards the "go wild, do anything" approach, which grew quite naturally out of the minimalistic "opaque pointer + method table" foundations for object instances. One of the reasons that you make such a choice in the first place is the ease of implementation, but in 2015, writing your own compiler is easier than ever. So Apple is quite naturally considering application reliability as a priority (since they're competing with a platform that uses a safer language for most development).

Unfortunately, it's a cultural fight where one side or the other must necessarily lose.

[+] kstrauser|10 years ago|reply
Because I have no experience with Swift, could someone more informed explain this to me? How would my subclassing an Apple-provided class and overriding its methods affect anyone but me? In Python, if I write:

    class MyRequest(requests.Request):
        def get(self): do_something_stupid()
then my coworkers can still use `requests.Request` itself without getting my bad behavior, and if they find themselves looking at a flaw in my code, they know not to blame the upstream author. What's different about the Swift situation?

I'm kind of horrified at the idea of an OOP language that wouldn't easily let me override a parent class's behavior. If I break something in the subclass, it's on me to fix it. That never reflects poorly on that parent class.
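One answer to the question above: the override only affects you until an instance of your subclass flows back into library code that relied on the base class's behavior. A small sketch, with all class and function names hypothetical:

```python
# A hypothetical library class and a library function that uses it.

class Connection:
    def send(self, data):
        return f"sent {len(data)} bytes"

def library_upload(conn, payload):
    # Library code assumes send() faithfully reports what happened.
    return conn.send(payload)

# Your subclass, with a buggy override.
class LoggingConnection(Connection):
    def send(self, data):
        return "sent 0 bytes"   # oops: drops the data on the floor

assert library_upload(Connection(), b"hello") == "sent 5 bytes"
# The library now does whatever your override decided. From the
# library author's perspective, every overridable method is an API
# surface they are implicitly committed to supporting.
assert library_upload(LoggingConnection(), b"hello") == "sent 0 bytes"
```

So the breakage is still "on you", as you say, but the library author is the one who has to keep every override point's calling contract stable forever, which is the cost final-by-default is trying to avoid.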

[+] draw_down|10 years ago|reply
I don't really agree that JS devs are divided on this issue, though of course it's a huge community and I could just be talking about my corner of it. But we saw what happened when two libraries wanted to alter the same built-in. Polluting the global namespace and altering built-ins is a no-no.
[+] dilap|10 years ago|reply
When you control all the source, being dynamic is stupid.

When you're coding against a closed-source, external, 3rd party platform, being dynamic is really helpful.

Simple as that, IMO.

(From a distance, Swift sure does have a strong C++ vibe.)

[+] devit|10 years ago|reply
Final by default is correct, since otherwise you are effectively exposing and having to maintain an extra stable API towards subclasses, which is a nightmare and won't be done properly unless it's intentional.

In fact, having virtual methods at all outside of interfaces/traits/typeclasses is dubious language design since it meshes together the concepts of interface and implementation and makes it inconvenient, impossible or confusing to name the implementation instead of the interface.
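That "virtual only at the interface" design can be sketched with `typing.Protocol`: dynamic dispatch happens at the interface boundary, while the concrete classes are plain implementations nobody subclasses. All names here are illustrative.

```python
from typing import Protocol

class Renderer(Protocol):
    """The interface: the only place polymorphism is promised."""
    def render(self, text: str) -> str: ...

class PlainRenderer:
    """Concrete implementation; meant to be used, not subclassed."""
    def render(self, text: str) -> str:
        return text

class HtmlRenderer:
    """Another implementation of the same interface."""
    def render(self, text: str) -> str:
        return f"<p>{text}</p>"

def display(r: Renderer, text: str) -> str:
    # Dispatch through the interface, not through overriding the
    # internals of a concrete class.
    return r.render(text)

assert display(PlainRenderer(), "hi") == "hi"
assert display(HtmlRenderer(), "hi") == "<p>hi</p>"
```

This keeps "the interface" and "the implementation" nameable as separate things, which is exactly the distinction the comment says class-wide virtual methods blur.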

The issues in the discussion are instead due to Apple's framework code being closed source and unmodifiable by users and developers, and also buggy according to the author.

[+] lpsz|10 years ago|reply
I'm an app developer. This change will absolutely break some of my stuff, and it's going to suck. Even so, I do feel OP is taking an overly political stance (even using the word "banned"). This change is perfectly reasonable within the already-strict mindset of Swift. Having a less strict language just to work around potentially buggy Apple frameworks would set a bad precedent.

Using "final" also has some performance wins by reducing dynamic dispatch. [1]

[1] https://developer.apple.com/swift/blog/?id=27

[+] wvenable|10 years ago|reply
Being able to run-time patch an API installed on a device is an entirely different thing than being able to modify and distribute an open source framework.

Both are useful, but they aren't the same thing. In one case, you want to be able to get your code running on devices that are in the wild now. In the other, you want your fixes to go upstream so you can remove any hacks needed to do the former.

[+] dplgk|10 years ago|reply
> since otherwise you are effectively exposing and having to maintain an extra stable API towards subclasses

How so? I override what I want at my own peril. I'm not going to complain to the author that his change broke my code.

> Apple's framework code being closed source and unmodifiable by users and developers, and also buggy according to the author.

Apple is constantly breaking things. If we can't extend classes, then we'll use composition, and at the end of the day, what's the difference? I need code that sits in front of theirs so I can make it work correctly.
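The composition workaround is easy to sketch: wrap the framework class instead of subclassing it, and put the fix "in front of" the buggy call. Both classes here are hypothetical stand-ins.

```python
class FrameworkLabel:
    """Stand-in for a sealed framework class with a bug."""
    def text(self):
        return "Hello\u0000"   # pretend bug: trailing NUL character

class FixedLabel:
    """Composition: holds a FrameworkLabel rather than extending it."""
    def __init__(self):
        self._inner = FrameworkLabel()

    def text(self):
        # The workaround sits in front of the framework call.
        return self._inner.text().rstrip("\u0000")

assert FixedLabel().text() == "Hello"
```

The practical difference from subclassing is that the wrapper only touches calls that go through it; anywhere the framework hands you (or itself uses) a raw `FrameworkLabel`, the bug is still there.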

[+] adrianm|10 years ago|reply
I find this slow march in "modern" language design toward completely static compilation models troublesome to the extreme. It feels like a significant historical regression; they speak as if Smalltalk and the Metaobject Protocol are things to revile and shun, not elegant programming models that we as programmers should aspire to understand and use in our own programs.

To elide these features as a matter of principle implies that you believe your compilation model is perfect, and is able to deduce all information necessary for optimal compilation of your program statically, perhaps augmented with profiling information you have obtained from earlier runs. It also makes upgrading programs for users more difficult since patches must be applied in a non-portable manner across programs. I shan't mention the fact that they make iterative, interactive development an ordeal. The Swift REPL is progress (although REPLs for static languages are nothing new), but it still pales in comparison to the development and debugging experience in any Smalltalk or Lisp system.

There is no reason why the typing disciplines Swift is designed to support should demand the eradication of all dynamism in the runtime and object model.

If you have never heard of the Metaobject Protocol or similar concepts before, here is the standard reference: https://mitpress.mit.edu/books/art-metaobject-protocol

This discussion also reminds me of this essay by Richard P. Gabriel: https://www.dreamsongs.com/SeatBelts.html

[+] pcwalton|10 years ago|reply
OK, but Swift is an ahead of time compiled language, unlike Lisp or Smalltalk. That makes the tradeoffs completely different.
[+] chvid|10 years ago|reply
Absolutely agreed.

I honestly don't care about nicer switch statements when there are big classes of problems that can't be solved because of the lack of reflection.

[+] teacup50|10 years ago|reply
Smalltalk and the Metaobject Protocol are things to revile and shun.

They aren't elegant programming models and they aren't even internally consistent programming models.

If you read Smalltalk papers, they aren't really compsci papers at all. They're musings about some ideas they tried and how well they thought they worked afterwards.

The language world is moving to a more coherent, formal, mathematical understanding of type systems, programming languages, and automatable proof systems.

[+] munificent|10 years ago|reply
Methods in C# are non-virtual by default and almost every class in the core libraries is sealed and the world hasn't ended in .NET land.

I have definitely done some hacks to work around bugs in frameworks I've used. But I've also had to deal with users who broke my libraries or inadvertently wandered into the weeds because it wasn't clear what things were and weren't allowed.

This is one of those features where the appeal depends entirely on which role you imagine yourself in in scenarios where the feature comes into play.

[+] SideburnsOfDoom|10 years ago|reply
And I have heard people say that a "sealed" class in the framework is often a mistake, as they have experienced pretty much the limitations described here. In other words, Curt Clifton's theory has a lot of merit in practice.
[+] randomfool|10 years ago|reply
But .NET definitely struggled with cultural issues around making APIs virtual. Because of Microsoft's strong 'no breaking changes' rule, they were extremely cautious about adding virtuals: in my experience it was not unusual to see a single virtual on an existing method costed at a week of dev/test time (in WPF).

C++ is also non-virtual by default and I think it's worked out OK.

[+] msie|10 years ago|reply
Sealing classes by default is troubling. I'm having a bad feeling about the future of Swift. I also think it's growing too big already.
[+] pjmlp|10 years ago|reply
The fragile base class problem shows that not sealing is also troubling, much more than sealing them.
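The fragile base class problem in miniature (hypothetical classes): a subclass that was correct against one version of a base class is silently broken by a purely internal refactor in the next version.

```python
class Bag:
    def __init__(self):
        self.items = []

    def add(self, x):
        self.items.append(x)

    def add_all(self, xs):
        # v2 refactor: reuse add() instead of appending directly.
        # In v1 this loop did self.items.append(x), and the subclass
        # below counted correctly. Nothing in the public API changed.
        for x in xs:
            self.add(x)

class CountingBag(Bag):
    """Written against v1: counts in both overrides."""
    def __init__(self):
        super().__init__()
        self.count = 0

    def add(self, x):
        self.count += 1
        super().add(x)

    def add_all(self, xs):
        self.count += len(xs)      # double-counts under v2
        super().add_all(xs)

b = CountingBag()
b.add_all([1, 2, 3])
assert b.items == [1, 2, 3]
assert b.count == 6   # not 3: the self-call contract was implicit
```

Sealing (or final-by-default) prevents exactly this: if nobody can override `add`, the base class author is free to refactor `add_all` internally.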
[+] angerman|10 years ago|reply
The whole argument boils down to how developers have been treated by Apple's libraries so far. The submission/review process is quite prohibitive, and the core libraries (like almost every piece of software) have flaws. Combined with the opaque Radar bug-reporting and bug-resolution system, you had to resort to method swizzling to keep your sanity (I guess the PSPDFKit guys can speak volumes on that).

Going forward, I hope Apple sticks to the open-source approach it took with Swift, and that more of the libraries will follow, with Apple encouraging more community participation.

[+] vor_|10 years ago|reply
This is a correct decision. APIs have to be designed to support subclassing properly. It's also a performance win.
[+] msie|10 years ago|reply
Interesting, it's like how Swift forces you to think about nulls. Now you have to think about people subclassing your classes.
[+] e28eta|10 years ago|reply
As an iOS developer for years, I have never resorted to a runtime hack, subclass and re-implement, or similar trick to work around a framework bug.

Our team rarely shipped a new version of the app concurrent with a .0 release of iOS, so that might be related, but we always found ways to work around issues while respecting the APIs provided.

I understand other products and other developers have had a different experience, but I'm not overly concerned about this particular change.

[+] veidr|10 years ago|reply
OK, but: As a (Mac) OS X developer, in 2003 I implemented a custom NSTextView subclass to fix two specific bugs that were impossible to work around otherwise. That subclass was used in everything we shipped for years and years... on OS X 10.3, 10.4, 10.5, 10.6, and 10.7.

(After that I lost track, but I think one or both bugs were finally fixed.)

I feel like maybe this change will make Apple frameworks more stable in the long run, but that will take a tech eternity (10+ years).

In the meantime, the overall user experience will be degraded by system framework bugs that can no longer be worked around. It just sounds more aspirational than realistic.

"Let's make it impossible for developers to work around our bugs -- that will force us to write perfect software!"

[+] zeckalpha|10 years ago|reply
Apple has been pushing composition over inheritance for years. No surprises.
[+] bsaul|10 years ago|reply
I'm not sure, so I'm asking: is your comment ironic? I see inheritance everywhere in UIKit.