`Inductive False := .`
i.e., `False` has no constructors and hence is an empty type.
Anyway, this means that for any type `A`, you can construct a function of type `False -> A` because you just do this:
`fun (x : False) => match x with end.`
Since `False` has no constructors, a match statement on a value of type `False` has no cases to match on, and you're done. (Coq's type system requires a case for each constructor of the type of the thing being matched on.) This is why, if you assume something that is false, you can prove anything. :)
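The same construction works in Rust, the language the article is about: an empty enum plays the role of `False`. A minimal sketch (the names `Never` and `safe_unwrap` are made up for illustration):

```rust
// Rust analogue of the Coq construction: an empty enum has no
// constructors, so a `match` on it needs no arms and can claim
// any return type.
enum Never {}

fn explode<A>(x: Never) -> A {
    match x {} // zero constructors => zero match arms required
}

// One practical use: unwrapping a Result whose error type is empty.
fn safe_unwrap(r: Result<i32, Never>) -> i32 {
    match r {
        Ok(v) => v,
        Err(e) => explode(e), // provably unreachable
    }
}

fn main() {
    println!("{}", safe_unwrap(Ok(42))); // prints "42"
}
```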
I get the feeling this doesn't really get into the meat of what "drop" is. It seems you can't really explain why you "love" a function without discussing its purpose. Maybe I'm wrong, I'm only really an outsider looking in when it comes to rust, but it does fascinate me as far as its goals. I would go so far as to say that it will be important for systems programmers to know in the not too distant future (if it's not already).
Isn't it really only there in case someone needs to "hook into" the drop functionality before the variable is dropped? Please correct me if I'm wrong.
It's talking about std::mem::drop, a standard library function that drops a value before it would ordinarily go out of scope.

I wouldn't say std::mem::drop acts like free at all; it's closer to explicitly invoking a destructor in C++. Mostly useful when you're dealing with manually allocated memory, FFI, implementing an RAII pattern, etc.
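A minimal sketch of what std::mem::drop does, using a made-up `Noisy` type that records when its destructor runs:

```rust
use std::cell::RefCell;
use std::rc::Rc;

type Log = Rc<RefCell<Vec<&'static str>>>;

// Records its name into a shared log when it is dropped.
struct Noisy(&'static str, Log);

impl Drop for Noisy {
    fn drop(&mut self) {
        self.1.borrow_mut().push(self.0);
    }
}

fn scope(log: &Log) {
    let a = Noisy("a", log.clone());
    let _b = Noisy("b", log.clone());
    std::mem::drop(a); // `a` is destroyed right here, before the scope ends...
    log.borrow_mut().push("mid-scope");
    // ...while `_b` lives until the function returns, as usual.
}

fn main() {
    let log: Log = Rc::new(RefCell::new(Vec::new()));
    scope(&log);
    println!("{:?}", log.borrow()); // ["a", "mid-scope", "b"]
}
```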
One cool thing about Drop (and some other cool stuff, like MaybeUninit) is that it makes doing things like allocating/freeing in place just like any other Rust code. There may be some unsafe involved, but the syntax is consistent. Whereas in C++ seeing placement new and manually called destructors can raise eyebrows.
I haven't used rust, so can you explain this to me:
If I do the rust equivalent of:
def add1(x):
    return x + 1

x = 1
y = add1(x)
z = add1(x)
then will x have been deallocated by the first call to add1 and will the second call to add1 fail?
[You can ignore the fact that I'm using numbers and substitute an object if that makes more sense in the context of allocating / deallocating memory in rust.]
Interestingly, the type of `x` actually does matter here in Rust! For most types, yes, passing something by value into a function will cause the memory to be "moved", which means that reusing `x` will be a compiler error. That being said, you can also either pass a shared reference (i.e. `&x`), which will allow you to access the data in Rust (provided you don't move anything out from it or mutate it, which would cause a compiler error) or a mutable reference (i.e. `&mut x`), which will allow you to access or mutate the data in `x` but not take ownership of it (unless it's replaced with something else of the same type).
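A quick sketch of the two reference flavors described above (the function names are made up):

```rust
fn read_len(s: &String) -> usize { // shared borrow: read-only access
    s.len()
}

fn shout(s: &mut String) { // mutable borrow: can mutate, but ownership stays with the caller
    s.push('!');
}

fn main() {
    let mut s = String::from("hi");
    let n = read_len(&s); // `s` is only borrowed, so it is still usable
    shout(&mut s);        // same here: borrowed mutably, not moved
    println!("{} {}", n, s); // prints "2 hi!"
}
```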
However, a few types, including integers but also things like booleans and chars, implement a trait called Copy (for the purposes of this discussion, a trait is like an interface, if you're not familiar with them) that means they should be implicitly copied rather than moved. This means that in the specific example you gave above, there would not be any error, since `x` would be copied implicitly. You can also implement Copy on your own types, but this is generally only supposed to be done on things that are relatively small due to the performance overhead of large copies. Instead, for larger types, you can implement Clone, which gives a `.clone()` method that lets you explicitly copy the type while still having moves rather than copies be the default. Notably, the Copy trait can only be implemented on types that already implement Clone, so anything that is implicitly copyable can also be explicitly copied as well.
If add1 takes ownership of the argument (and x is not implicitly copyable), then yes.
Compare with C++, in particular types with deleted copy operations (e.g. unique_ptr<T>). To call a function that takes a unique_ptr by value, you must explicitly move the object into the function, e.g. consume(std::move(ptr)) - and a second consume(std::move(ptr)) call still compiles.
Linters (e.g. clang-tidy) can be configured to complain about this, but it's completely valid C++ (because the move leaves the object in a valid but unspecified state). In Rust, the argument will be automatically moved in the first call, and the second call will generate a compile-time error.
Normally yes, if using an object. In this case, the integer types implement the Copy trait, so instead of actually having your first call to add1 take ownership of x, it will just operate on a copy of the value, so your second call will work too.
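A sketch of both behaviors side by side (the function names are made up):

```rust
fn add1(x: i32) -> i32 { x + 1 }           // i32 is Copy: the argument is copied in
fn consume(s: String) -> usize { s.len() } // String is not Copy: the argument is moved in

fn main() {
    let x = 1;
    let y = add1(x);
    let z = add1(x); // fine: `x` was copied, not moved
    println!("{} {}", y, z); // prints "2 2"

    let s = String::from("hello");
    let n = consume(s);
    // let m = consume(s); // compile error: `s` was moved by the first call
    println!("{}", n); // prints "5"
}
```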
That same function appears on the second page of Philip Wadler's first Linear Logic paper, but it's called "kill" [1]
But I remember the words "drop" and "dup" being used since the early days of linear logic too. I believe they come from Forth, where they do pretty much the same thing! [2]
[1] http://homepages.inf.ed.ac.uk/wadler/papers/linear/linear.ps
[2] http://wiki.laptop.org/go/Forth_stack_operators
Do variables go out of scope after last use or when the function exits? I could see the former evolving into the language if it’s not already the default behavior.
In which case there’s only one situation where I could see this useful, and that’s when you are building a large object to replace an old one.
The semantics of

foo = buildGiantObject();

in most languages is that foo exists until reassigned. When the object represents a nontrivial amount of memory, and you don't have fallback behavior that keeps the old data, then you might see something like

drop(foo);
foo = buildGiantObject();

Most of the rest of the time it's not worth the hassle.
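That pattern sketched in Rust (with a made-up `build_giant_object` standing in for any large allocation):

```rust
fn build_giant_object() -> Vec<u8> {
    vec![0u8; 1_000_000] // stand-in for some large allocation
}

fn main() {
    let mut foo = build_giant_object();
    // A plain `foo = build_giant_object();` would construct the new
    // buffer first and free the old one only at the assignment,
    // briefly doubling peak memory. Dropping first avoids the overlap.
    drop(foo);
    foo = build_giant_object();
    println!("{}", foo.len()); // prints "1000000"
}
```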
It used to be at the end of the block, which caused all manner of annoyance. So they spent a lot of effort improving the borrow checker, and now it's 'after last use'.
It's not just a matter of memory use. References and mutable references form a sort of compile-time read-write mutex; you can't take a mutable reference without first dropping all other references. See https://stackoverflow.com/questions/50251487/what-are-non-le... for more.
Not sure how Rust mutexes work but in c++ that wouldn't work. Obvious first example is std::lock_guard which is implemented by locking in constructor and unlocking in destructor. The variable itself never has any "use", it's just created and held alive as a dummy to denote the locking scope.
Now actually this is a quite nasty object with implicit global side effects which you should avoid in the first place, but for the mutex case I don't know of a better option - maybe Rust has a better way to handle this?
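As I understand it, Rust handles this case fine: the "last use" rule applies to borrows, while values whose types implement Drop (like MutexGuard) live until the end of their scope. So the lock_guard pattern carries over directly; a small sketch:

```rust
use std::sync::Mutex;

fn main() {
    let counter = Mutex::new(0);
    {
        let mut guard = counter.lock().unwrap();
        *guard += 1;
        // `guard` has no further use, but because MutexGuard implements
        // Drop, it lives to the end of this block, and the mutex stays
        // locked until then - exactly the lock_guard behavior.
    } // guard dropped here => mutex unlocked
    println!("{}", *counter.lock().unwrap()); // prints "1"
}
```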
Variables go "out of scope" (in at least one sense) at last use, but are not `Drop`-ed (de-allocated, etc...) until the end of the function. The difference is important because of rust's rule against simultaneous aliasing and mutability. Consider this example:
fn main() {
let mut a = 1;
let b = &mut a;
*b = 2;
println!("{}", a); // prints "2"
// *b = 4; // If this line is uncommented, compile time error.
}
Because b is a mutable reference to a, this means that a cannot be accessed directly until b goes out of scope. In this sense, b goes out of scope the last time it's used. _However_, AFAIK, b isn't actually de-allocated until the end of the function.
Of course, it doesn't matter in this trivial case, because b is just some bytes in the current stack frame so there's nothing to actually de-allocate. But if b were a complex type that _also_ had some memory to de-allocate, this wouldn't happen until the end of main(). But in this case, b's scope also lasts until the end of main, which is kind of like adding that last line back in...
This can be seen in the following example, where b has an explicit type:
struct B<'a>(&'a mut i32);
impl<'a> Drop for B<'a> {
fn drop(&mut self) {
// We'd still have a mutable reference to a here...
// If B owned resources and needed to free them, this is where that would happen
}
}
fn main() {
let mut a = 1;
let b = B(&mut a);
*b.0 = 2;
std::mem::drop(b); // Comment this line out, get compiler error
println!("{}", a); // prints "2"
}
In this example, without the std::mem::drop() line, the implementation for Drop (i.e., B's destructor, B::drop) would be implicitly called at the end of the function. But in that case, B::drop() would still hold a mutable reference to a, which makes the println call produce a "cannot borrow `a` as immutable because it is also borrowed as mutable" compile time error.
In other words, this "going out of scope at last use" is really about rust's lifetimes system, not memory allocation.
IMHO... this is one of the rough edges in rust's somewhat steep learning curve. Rust's lifetimes rules make the language kind of complicated, though getting memory safety in a systems programming language is worth the trade-off. There's a lot of syntactic sugar that makes things a LOT easier and less verbose in most cases, but the learning curve trade-off for _that_ is that, when you _do_ run into the more complex cases that the compiler can't figure out for you, it's easy to get lost, because there are a few extra puzzle pieces to fit together. Still way better than the foot-gun that is C, though. At least for me... YMMV, obviously.
> Now this might seem like a hack, but it really is not. Most languages would either ask the programmers to explicitly call free() or implicitly call a magic runtime.deallocate() within a complex garbage collector.
The compiler actually implicitly adds drop glue to all dropped variables!
For me, rust is still love & hate, even after 1 year of half-time (most of the free time I have) hacking.
It's a wonderful language but there are still some PITAs. For example you can't initialize some const x: SomeStruct with a function call. Also, zero-cost abstraction is likely the biggest bullshit I've ever heard, there is a lot of cost and there's also a lot of waiting for compiler if you're using cargo packages.
That said, I'd still rather use it than C/C++/Go/Reason/OCaml/? - that is probably the love part.
BTW: I've recently stopped worrying about unsafe and it got a bit better.
So my message is probably:
- keep your deps shallow, don't be afraid to implement something yourself
- if you get pissed off, try again later (sometimes try it the rust way, sometimes just do it in an entirely different way)

(Re the const complaint: you can initialize a const with a function call if it's a `const fn`. The set of features you can use in `const` contexts is small but growing.)
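As a side note on the complaint about initializing a `const` with a function call: Rust does allow this when the function is a `const fn`, which can be evaluated at compile time. A small sketch with a made-up `Config` type:

```rust
struct Config {
    retries: u32,
    verbose: bool,
}

// A `const fn` can be evaluated at compile time,
// so it can initialize a `const` item.
const fn default_config() -> Config {
    Config { retries: 3, verbose: false }
}

const DEFAULT: Config = default_config();

fn main() {
    println!("retries = {}", DEFAULT.retries); // prints "retries = 3"
}
```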
> Also, zero-cost abstraction is likely the biggest bullshit I've ever heard, there is a lot of cost and there's also a lot of waiting for compiler if you're using cargo packages.

"(...) there are two factors that make something a proper zero cost abstraction:

No global costs: A zero cost abstraction ought not to negatively impact the performance of programs that don't use it. For example, it can't require every program to carry a heavy language runtime to benefit only the programs that use the feature.

Optimal performance: A zero cost abstraction ought to compile to the best implementation of the solution that someone would have written with the lower level primitives. It can't introduce additional costs that could be avoided without the abstraction."

https://boats.gitlab.io/blog/post/zero-cost-abstractions/

It's not about compile time...
> Also, zero-cost abstraction is likely the biggest bullshit I've ever heard, there is a lot of cost and there's also a lot of waiting for compiler if you're using cargo packages.
While someone else is right that "zero-cost" refers to runtime cost rather than compilation cost, dependencies are the biggest problem.
The program `spotifyd` takes over an hour to compile on my X200 laptop. That is, for reference, about the same amount of time that the Linux kernel and GCC take to compile (actually, I think GCC takes less time). Most of the compilation time is spent on the 300+ dependencies, which simply wrap (and in some places reimplement) C libraries that I already have installed on my system!
I hear you. I'm really hoping that Swift improves over the next few years. It seems to be in a great sweet spot, with many of the modern language features of Rust (optional/result types, parametric enums, try, generics, static compilation, etc.), but also the ergonomics of a language like Go that's explicitly designed for writing practical code and just getting work done.
Swift is still missing decent async/await support, generators and promises. Some of this stuff can be written by library authors, but doing so fragments the ecosystem. It's also still harder than it should be to write & run Swift code on non-Mac platforms. And it's not as fast as it could be - I've heard some reports of Swift programs spending 60% of their time incrementing and decrementing reference counts. Apparently optimizations are coming. I can't wait - it's my favorite of the current crop of new languages. I think it strikes a nice balance between being fancy and being easy to use. But it really needs some more love before it's quite ready to be my daily workhorse for HTTP servers and games.
> Also, zero-cost abstraction is likely the biggest bullshit I've ever heard, there is a lot of cost and there's also a lot of waiting for compiler if you're using cargo packages.
Zero cost refers to runtime cost, not compilation cost. Zero cost abstraction is not bullshit.
The beauty of programming language design is not building the most complex edifice like Scala or making the language unacceptably crippled like Go - but giving the programmer the ability to represent complex ideas elegantly and safely. Rust really shines in that regard.
i'm fairly ignorant on the various differences but my general feeling was that Go is quite useful?
One of the philosophies behind Go is to keep the language extra simple.
See "less is exponentially more".
The same way some electric bikes are restricted to a given speed to keep their user safe.
Some people call it "crippled", while some other call it "simple and safe to use".
Gotta admit, that really is a cute example. However, I was a bit surprised when the author described Go as "unacceptably crippled." What is he referring to?
Go is simple to the point where it annoys a lot of programmers, especially programmers who like to do fancy stuff with their programming language (the kind of person that's attracted to Rust, for instance).
The usual suspects for things missing from go are the following: Generics, sum types, match statements, tuple types, compile-time data-race detection, type-safe concurrent-maps, hygienic macros, immutable types/references, functional constructs such as 'map', 'filter', or monads, marker interfaces, better error handling, type-inference for consts that isn't garbage, etc.
Less common complaints are that it's missing: object oriented features like inheritance, a configurable gc (as java has), the ability to work with OS threads, c-compatible stacks for fast c-interop, ownership semantics, type-inference for arguments (e.g. as haskell does), operator overloading, dependent types, etc.
The list of things in the first set can mostly be summed up as "go has a worse type-system than C++/rust/etc, something much closer to java 1 before generics, or c". Basically, the language is intentionally crippled because it intentionally ignores advances in type-theory that have been shown to allow expressing many things more safely.
For example, sum types and match statements make modifying code much safer. People will write switch/if-else-ladder code to do exactly the same sort of thing even without them, the code will just fail at runtime rather than compile-time when a new variant is added or one is not handled by accident.
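A tiny Rust illustration of that point about sum types and exhaustive matching (the `Shape` type is made up):

```rust
enum Shape {
    Circle(f64),
    Square(f64),
    // Add a `Triangle(f64, f64)` variant here and `area` stops
    // compiling until the new case is handled: the breakage is
    // caught at compile time, not at runtime.
}

fn area(s: &Shape) -> f64 {
    match s {
        Shape::Circle(r) => std::f64::consts::PI * r * r,
        Shape::Square(w) => w * w,
    }
}

fn main() {
    println!("{}", area(&Shape::Square(3.0))); // prints "9"
}
```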
Go code feels very low-level to me, and very boilerplatey. In Rust I can write code that in almost the same way I write JavaScript (but with added type annotations), but Go makes me deal with all the little details, and makes it hard to abstract things neatly.
Lack of generics is a big part of the issue. But more generally the focus on "simple" code means that more sophisticated abstractions are actively eschewed, and personally I find this makes writing Go code quite frustrating.

Someone describes Go as "unacceptably crippled" while Uber engineering has 1500 microservices written in Go, making it their primary language.
https://news.ycombinator.com/item?id=21226347
> or making the language unacceptably crippled like Go
Gotta say, I lost a lot of respect for the author at this point. It’s not like I don’t love Rust - quite the contrary - but if the only takeaway from Go for you is that it is “unacceptably crippled” then I feel you have missed a lot of insight. Go has been one of my languages of choice for over half a decade now, and for good reason.
> > or making the language unacceptably crippled like Go
> ... if the only takeaway from Go for you is that it is “unacceptably crippled” then I feel you have missed a lot of insight.
Perhaps the author used a poor choice of words and instead could have phrased their intent along the lines of:
Go lacks the semantic density needed to express solutions in both a concise and consistent manner.
Were this the case, it would be hard to disagree as Go does, indeed, lack linguistic capabilities present in other programming languages which enable developers to encode system constraints within the language itself. Some might see this as a benefit, but I do not. YMMV.
A similar philosophy of "keep the language dirt-simple so anyone can code in it" was a driving force behind Java (the language) and JavaScript. What people have discovered is that when a programming language does not assist in expressing intrinsic problem complexity explicitly, it becomes implicitly intertwined within the source itself.
It may be an unnecessary dig, but the author may indeed be familiar with Go and still think it's unacceptably crippled for all their use cases. The whole post is just their opinion.

Go is a favorite target, but Python, Ada, Java, C#, etc. didn't remain unscathed either.
I work with Rust only these days; it's really an awesome language, and I wish everyone working with systems languages would switch to Rust. Yet I find Golang to be a much clearer language to read (and I read a shit ton of code). I hope they don't add generics, but I do wish they would add options, results, sum types in general, redeclaring variables, the ? operator, etc.
Go is a fine GC language - in many ways, it's a lot better than the likes of Java! But it's nonetheless way too clunky for many use cases, which is what the OP may have meant by their remark. And the concurrency support comes with a lot of nasty pitfalls, especially compared to Rust or even Pony.