> What irritates me is when people say classes are bad. Or subclassing is bad. Thatʼs totally false. Classes are super important. Reference semantics are super important. If anything, the thing thatʼs wrong is to say, one thing is bad and the other thing is good. These are all different tools in our toolbox, and theyʼre used to solve different kinds of problems.
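The distinction he's defending can be made concrete in a few lines of Swift; a minimal sketch (type names are illustrative, not from the interview):

```swift
// Value semantics: each variable owns an independent copy.
struct PointV {
    var x: Int
}

// Reference semantics: variables share one underlying object.
final class PointR {
    var x: Int
    init(x: Int) { self.x = x }
}

var v1 = PointV(x: 1)
var v2 = v1
v2.x = 99      // v1.x is still 1: the copy is independent

let r1 = PointR(x: 1)
let r2 = r1
r2.x = 99      // r1.x is now 99 too: both names see the same object
```

Neither behavior is "bad": shared mutable identity is exactly what you want for, say, a window or a connection, and independent copies are what you want for coordinates or currency amounts.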
The issues with inheritance-based OOP are that it fits very few problems well, that it usually causes lots of problems, and that many programming languages have only inheritance-based OOP in their toolbox.

Couldn't agree more.
Java is the extreme case of this. Patterns like abstract visitor factories are hacks to express situations that cannot be expressed in an obvious way.
Subtyping adds huge amounts of complexity to type systems and type inference. Dart even chose to have an unsound type system because subtyping and parametric polymorphism (generics in Java) were deemed too hard for Google programmers to understand. The Go designers agreed.
Haskell and OCaml are a joy to program in, in part because they (mostly) eschew subtyping. So yes, subtyping is controversial.
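One concrete instance of that complexity is variance: deciding what subtyping should mean for generic containers. A small Swift sketch of the question (the class names are hypothetical):

```swift
class Animal {}
final class Dog: Animal {}

let dogs: [Dog] = [Dog(), Dog()]

// Swift accepts this covariant conversion. Because Array is a value type,
// `animals` is logically a separate array, so the conversion cannot be used
// to smuggle a non-Dog back into `dogs` -- which is exactly the soundness
// hole that mutable covariant arrays open up in Java (ArrayStoreException).
let animals: [Animal] = dogs
```

The type-system machinery needed to answer "when is `Container<Dog>` a `Container<Animal>`?" for every generic type is a large part of what makes combining subtyping with parametric polymorphism hard.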
Interesting interview. Java is mentioned many times as a language Swift aspires to replace. He is right about Kotlin:
"Kotlin is very reference semantics, itʼs a thin layer on top of Java, and so it perpetuates through a lot of the Javaisms in its model.
If we had done an analog to that for Objective-C it would be like, everything is an NSObject and itʼs objc_msgSend everywhere, just with parentheses instead of square brackets. .."
I think Swift has a real chance to reach Java-level popularity. It is already at #11 in the Redmonk ranking, and all languages above Swift are at least 15 years older than it. And once it gains server-side features like concurrency, it can become much more general purpose.
I wish Swift focused on reference semantics. One of the big problems of value types in C++ is that you have to be a language lawyer to avoid accidentally making wasteful copies of everything, and the same is true of Swift:
http://rosslebeau.com/2016/swift-copy-write-psa-mutating-dic...
I thought Objective-C had already solved this problem quite nicely with the explicit NSFoo/NSMutableFoo class pairs. I don't see why this needed to be fixed again, in a less explicit way.
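For what it's worth, Swift's standard collections get the NSFoo/NSMutableFoo split implicitly via copy-on-write: assignment is cheap, and the real copy happens only at the first mutation. A minimal sketch:

```swift
var original = [1, 2, 3]
var copy = original     // no element copying yet; both share one storage buffer
copy.append(4)          // first mutation triggers the actual copy

// `original` is untouched; only `copy` sees the change.
```

The pitfall the linked article describes is the flip side of this: a mutation through a second reference to the storage can trigger a full copy where you didn't expect one.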
For that to happen, Swift needs to be as usable as Java is on all OSes where JVMs/JDKs (some of them with AOT support since the early days of Java) exist.
I am still waiting for first class support on Windows on the download page.
Right now Rust has much better OS support than Swift.
> Kotlin is very reference semantics, itʼs a thin layer on top of Java
One of the main points of Kotlin is that it integrates tightly with IntelliJ. So Kotlin is a layer (not so thin) between a visual IDE (IntelliJ or Android Studio) and Java-decompilable bytecodes on the JVM.
You don't get that with other JVM languages, e.g. Apache Groovy only gives correct type hints in Eclipse 80% of the time, and JAD stopped working on Groovy-generated bytecodes in Groovy 1.7.
Wow, he just really really does not like C++. He is certainly an extremely knowledgeable C++ guy, obviously Swift is written in C++, but it's hard to entirely agree with his opinion on it across all fronts.
We have a schizophrenic attitude towards C++.
On one hand, we love the language: the expressive power it gives us, the type safety taken from Simula and Algol, thanks to C++'s type system.
On the other hand, as Chris puts it, it "has its own class of problems because itʼs built on the unsafety of C".
So some of us tend to work around it by using safer languages and only coming down to C++ for those tasks where those better languages cannot properly fulfil them.
But as the CVE database proves, that only works if everyone on the team cares about safety; otherwise it is a lost game, fixable only by preventing everyone on the team from writing C-style unsafe code in the first place.
Sure, nowadays there are plenty of analysers to ensure code safety, but they work mostly on source code and, like any tool, depend on people actually caring to use them.
As someone who spent over 20 years writing applications in C: anything built on C is crap, and that includes C++ and Objective-C.
Writing code is fun and interesting. But most software development is not writing code. It's a little bit of build management, even more testing, but mostly it's debugging. Debugging is not as fun as writing code. Every language feature that makes debugging more necessary, harder to do and more time intensive sucks. Dangling pointers are the absolute worst.
I can easily give up multiple inheritance for a more functional language that's far easier to write correct code in.
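Swift's own answer to multiple inheritance is protocol composition: a type adopts several capabilities without any class hierarchy. A sketch with made-up protocol names:

```swift
protocol Drawable {
    func draw() -> String
}

protocol Serializable {
    func serialized() -> String
}

// A value type adopts both capabilities; no superclass is involved.
struct Circle: Drawable, Serializable {
    var radius: Double
    func draw() -> String { "circle(r: \(radius))" }
    func serialized() -> String { "{\"radius\": \(radius)}" }
}

// Functions can require any composition of capabilities.
func render(_ item: Drawable & Serializable) -> String {
    item.draw() + " " + item.serialized()
}
```

This gives the "mix in several interfaces" effect of multiple inheritance while sidestepping the diamond problem, since protocols carry requirements rather than competing stored state.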
Well, he has written not just LLVM in C++, but also a C++ compiler in it (Clang), besides having written Swift in C++.
So that's about as far as knowing C++ can go, I'd say.

>Look at Javascript or any of these other languages out there. They started with a simple premise (“I want to do simple scripting in a web browser”) and now people are writing freaking server apps in it. What happened there? Is there a logical leap that was missed? How is this a good idea? [Audience laughs] Sorry, I love Javascript too. Iʼd just love to kill it even more.

Emphasis mine. Not that I disagree completely...
It's interesting to see people talk a bunch about how nothing is good or bad, it's all a bunch of trade offs, and then they eventually tip their hand. I think people should just come out and say what they think is good or bad.
I think that a very important aspect of achieving world domination for Swift is front-end development for the web (with the compiler targeting JS or WebAssembly).
In a way, many iOS and macOS applications are front-end software. It makes much more sense to make Swift available for other kinds of front-end development than for server-side coding.
That's the most curious part, that he wants to go more low-level with Swift instead of high-level.
Systems programming seems well catered for with Java, Go and Rust, while high-level application programming is left at the mercy of JavaScript (I like TypeScript, but it's mostly improvements borrowed from C# that are bolted on). I think there would be a lot to gain there, first and foremost by compiling Swift to WebAssembly.
> My goal for Swift has always been and still is total world domination
I hope this never happens. Swift is great: it's universal and it saves you a lot of time during coding. BUT it also has a very large syntax and a high number of features, so the documentation is huge! Most Swift programmers probably don't know the complete syntax and all the features, which is a problem in a world where we code in teams and work with open source (both cases mean that you work with code you didn't write).
We just need a new, simple way for billions of people to explain to computers what to do and, conversely, to understand what a computer was told to do, and I'm sure that it's not Swift, Java or C++.

What realistic alternative playing in the same space doesn't have this problem?
That breadth of features is why Swift is so scalable. Writing useful programs is easy for beginners using a very limited subset of the language, but as you expand your knowledge, Swift is rich with features that make complex apps much easier for professionals to write.
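That scaling can be seen in miniature: the beginner version and the professional version of the same idea, using nothing beyond the standard library:

```swift
// Beginner subset: concrete types and a plain loop.
var total = 0
for n in [1, 2, 3] {
    total += n
}

// Later: the same idea as a reusable generic, constrained by a protocol,
// so it works for any sequence of any additive element type.
func sum<S: Sequence>(_ values: S) -> S.Element where S.Element: AdditiveArithmetic {
    values.reduce(.zero, +)
}
```

The beginner never has to see `Sequence` or `AdditiveArithmetic` to get work done; the generic version is opt-in vocabulary for when the codebase needs it.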
BTW: Before we get too deep in specific language criticisms, let's not forget that Chris Lattner is awesome. The fact that two super smart guys with huge work ethics like Chris and Elon Musk couldn't get along is very disappointing to me.
> The fact that two super smart guys with huge work ethics like Chris and Elon Musk couldn't get along is very disappointing to me.
I'm storing popcorn for the day all the people who've been burned working for Musk finally come together and speak out about his insanity as a manager.
I suspect the thing holding them back is that Musk's goals are laudable and everyone still wants them to succeed.
But be glad you're a (potential?) customer of Musk's, not an employee.
Was Lattner the right person to run the Tesla Autopilot development program? Spending seven years perfecting a programming language is a very different -- and relatively serene -- job compared to putting together the mythical ML/heuristics package that will prevent people from dying in self-driving Teslas, and making it happen yesterday.
Could someone explain why I should build a language developed entirely by and for writing Apple ecosystem products? It seems like if I'm not targeting macOS or iOS directly, the long list of benefits suddenly looks much, much smaller compared to e.g. JVM, .NET, Go, etc etc.
"letʼs start hacking, letʼs start building something, letʼs see where it goes pulling on the string" feels scarily accurate, and it's unclear where the language will be in 5 years.
Among other things, there's no way to disable Objective-C interop, even though it complicates the language and feels like someone merged Smalltalk, C++, and ML—not a pretty combination. And literally the only reason you'd enable it is to work with Cocoa/UIKit.
I'm still out on ARC—it was much less of a problem than I expected on my last project, but it never feels like an optimal solution, and you can never just "forget about it for the first draft" the way you can a VM's GC.
> a language developed entirely by and for writing Apple ecosystem products
So, apparently you didn't even read the article, as it is explicitly stated that this was not the intention or direction of Swift.
> Among other things, there's no way to disable Objective-C interop, even though it complicates the language and feels like someone merged Smalltalk, C++, and ML—not a pretty combination. And literally the only reason you'd enable it is to work with Cocoa/UIKit.
Swift on Linux does not use any of the ObjC runtime features that are used on Apple platforms.
Believe it or not, this compiler option is named `-disable-objc-interop`.
> Could someone explain why I should build a language developed entirely by and for writing Apple ecosystem products?
Possibly because you have an affinity for value types, performance, or safety. A language is a lot more than just a checkbox of platforms it supports, although iOS is a pretty large checkbox right now.
> the long list of benefits suddenly looks much, much smaller compared to e.g. JVM, .NET, Go, etc etc.
Swift isn't trying to compete with any of those. I mean, sure, in the "world domination 10 year plan" sense, but for the foreseeable future the bullets that make Java attractive to enterprises (lots of developers, lots of libraries, lots of platforms) are not on anyone's todo list in the Swift community.
Rather, the short-term goal is to compete with C/C++/Rust. So: you are writing a web server (e.g. an nginx alternative, not a webapp), a TLS stack, or an h264 decoder, and buffer overflows on the internet sound scary; you are doing pro audio plugins where a 10ms playback buffer is the difference between "works" and "the audio pops"; you need to write an array out to the network in a single pass to place your buy order before the trader across the street from you, while still having a reasonably productive way to iterate on your trading algorithm because Trump got elected; etc.
As far as JVM/.NET go, a cardinal rule of software is that it bloats over time. So JVM/.NET/Go can never "scale down" to the kinds of things C/C++ developers do, but it is less clear whether a low-level language can "bloat up" to do what .NET developers do. In fact, C++ kinda does "bloat up", because C++ on .NET (C++/CLI) exists. But that is basically an accident, because C++ was not designed in the 80s with .NET developers in mind, and perhaps for that reason it is not the most popular .NET language. To the extent we have a plan, the plan with Swift is to try that "on purpose this time" and see if it works better when we're designing it to do that, rather than grabbing a round peg off the shelf and hammering it into our square hole. It probably won't ever be as good at .NET problems as .NET, but perhaps it can get close, for useful values of close.
> you can never just "forget about it for the first draft" the way you can a VM's GC.
Similarly, ARC does not exist to compete with your VM on ease-of-use, it competes with malloc/free on ease of use (and your VM on performance). If your VM is performant enough (or you can afford the hardware to make it so), great, but that just isn't the case for many programming domains, and that's the issue we're addressing.
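The ease-of-use tradeoff versus malloc/free shows up mainly in one place: reference cycles, which ARC asks the programmer to break explicitly with `weak`. A minimal sketch:

```swift
var freed = 0

final class Node {
    var next: Node?
    weak var prev: Node?   // weak back-pointer: no retain cycle
    deinit { freed += 1 }
}

do {
    let head = Node()
    let tail = Node()
    head.next = tail
    tail.prev = head       // were `prev` strong, neither node would ever be freed
}
// both nodes were released when the scope ended
```

Compared with malloc/free, that one annotation replaces every manual free call and every double-free/use-after-free bug class; compared with a tracing GC, cycles are the one case the programmer still owns.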
There is also a quasi-non-performance aspect to ARC that is often overlooked: deterministic deallocation. Most VM memory models are unbounded in the sense that resources never have to be deallocated, while in a system like ARC we have fairly tight guarantees on when deallocation will take place. So if your objects hold handles to finite resources (think open file handles, sockets, anything to clean up when they go away), the natural Swift solution will be much more conservative with resource use than the natural JVM solution. Because of that, it may be more useful to think of ARC as a general-purpose resource-minimization scheme (where memory is merely one kind of resource) rather than as a memory model or GC alternative in itself.
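That determinism is easy to observe: a `deinit` that releases a resource runs at a predictable point, with no finalizer thread involved. A sketch with a stand-in "resource" (the event log is purely illustrative):

```swift
var events: [String] = []

final class Handle {
    let name: String
    init(name: String) {
        self.name = name
        events.append("open \(name)")    // stand-in for acquiring a real resource
    }
    deinit {
        events.append("close \(name)")   // runs as soon as the last reference goes away
    }
}

do {
    let h = Handle(name: "data.txt")
    events.append("using \(h.name)")
}
events.append("after scope")
```

On a tracing GC, the "close" step would run at some unspecified later time (if at all); under ARC it is guaranteed to happen before the line after the scope executes.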