This is so excellent, and I love seeing long-term, multi-year goals get completed. It isn't just this release, but all the releases in between. The Rust team and community are amazing.
I spent a few years as a scientific programmer, and this is exactly the sort of thing that bites you on the behind in C/C++/Fortran: the undefined behaviour can manifest as noise in your output, or as really hard-to-track-down, intermittent problems. A big win to get rid of it.
It would be nice if there were both a saturating-as and an overflow-is-a-bug-as, the latter of which is also saturating but in debug builds gets instrumentation to panic if it ever actually saturates.
The overflow-is-a-bug saturating as would be the default, and there would be a separate sat_as for "I know this saturates, it isn't a bug." ::sigh:: Rust went through this same debate for integers, initially rejecting the argument I'm giving here but switching back to it after silent, defined integer overflow concealed severe memory unsafety bugs in the standard library.
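Something close to that proposal can be written today as a plain helper function. This is only a sketch, and the name `dbg_sat_u8` is my own invention, not any proposed API:

```rust
/// Hypothetical "overflow is a bug" cast: saturates like `as` in release
/// builds, but panics in debug builds if the cast would actually saturate.
/// NaN also fails the assertion, since comparisons with NaN are false.
fn dbg_sat_u8(x: f32) -> u8 {
    debug_assert!(x >= 0.0 && x < 256.0, "f32 -> u8 cast would saturate: {}", x);
    x as u8 // since Rust 1.45, `as` saturates (and maps NaN to 0)
}

fn main() {
    assert_eq!(dbg_sat_u8(100.5), 100); // in range: plain truncation toward zero
}
```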
Well-defining something like saturation actually reduces the power of static and dynamic program analysis, because the analysis can no longer tell whether an overflow was a programmer-intended use of saturation or a bug.
Having it undefined was better from a tools perspective, even if worse at runtime, because if a tool could prove that overflow would happen (statically) or that it does happen (dynamically) that would always be a bug, and always be worth bringing to the user's attention.
So now you still get "noise" in your output, but it's the harder to detect noise of consistent saturation, and you've lost the ability to have instrumentation that tells you that you have a flaw in your code.
So I think this is again an example of Rust making a decision that hurts program correctness.
I'm not sure I understand this. Does it not produce a run time error? Why not?
This looks very dangerous, because it essentially does the "nearest to right" thing. Say you cast 256 to a u8: it's then saturated to 255. That's almost right, and the result might be wrong by only about 0.5%. Much harder to detect than if it were set to 0.
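Concretely, the saturating behaviour in question (stable as of Rust 1.45) looks like this:

```rust
fn main() {
    assert_eq!(256.0_f32 as u8, 255); // saturates to u8::MAX, off by ~0.4%
    assert_eq!(300.0_f32 as u8, 255); // also 255: all overflows collapse together
    assert_eq!(-1.0_f32 as u8, 0);    // saturates to u8::MIN
    assert_eq!(f32::NAN as u8, 0);    // NaN is defined to map to 0
}
```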
Fortran predictably overflows the result, in contrast to fptoui, which gives a poison value. I agree that even a predictable overflow can bite you on the behind, but it's better than undefined behavior. I'm not a fan of saturating casts, but having defined behavior is for sure a great improvement.
It is nice that they have defined behavior for that now.
Though I try to always scrutinize any floating-point/integer conversions during code review. The default cast from a floating-point value to an integer is frequently not what you want. In our code, for example, we usually round to the nearest integer instead; we don't normally need the fancier rounding schemes.
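A minimal illustration of the difference, since `as` truncates toward zero:

```rust
fn main() {
    let x = 2.7_f32;
    assert_eq!(x as i32, 2);                    // `as` truncates toward zero
    assert_eq!(x.round() as i32, 3);            // round half away from zero, then cast
    assert_eq!((-2.7_f32) as i32, -2);          // truncation also goes toward zero
    assert_eq!((-2.7_f32).round() as i32, -3);
}
```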
Out of interest, with (my instance of!) 1.44, "let i = 257.0; i as u8" casts to 1. "let i = usize::MAX as f64 + 1.0; i as usize" does likewise. If you assign i as an integer DIRECTLY, however, it saturates.
To be honest... struggling to see why you'd do the former, outside of situations where you're happy with saturation, though I haven't thought about it a lot. Agreed that a consistent behaviour is a big help - I can work with "Rust does x in this scenario".
In Fortran, there is the ieee_arithmetic intrinsic module that can very nicely and robustly handle undefined behavior. If people do not use it, that is their problem.
I keep seeing more and more news about Rust, and figure that perhaps it is time that I learn something new.
99% of my development work these days is C with the target being Linux/ARM with a small-ish memory model. Think 64 or 128MB of DDR. Does this fit within Rust's world?
I've noticed that stripped binary sizes for a simple "Hello, World!" example are significantly larger with Rust. Is this just the way things are and the "cost of protection"? For reference, using rustc version 1.41.0, the stripped binary was 199KiB and the same thing in C (gcc 9.3) was 15KiB.
That is a bit extreme but it demonstrates the lower bound.
There's a lot of things you can do to drop sizes, depending on the specifics of what you're doing and the tradeoffs you want to make.
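For instance, a few common release-profile settings in Cargo.toml trade runtime behaviour for size (this is one possible combination, not a universal recommendation):

```toml
[profile.release]
opt-level = "z"    # optimise for size rather than speed
lto = true         # enable link-time optimisation
codegen-units = 1  # better optimisation at the cost of compile time
panic = "abort"    # drop the unwinding machinery entirely
```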
Architecture support is where stuff gets tougher than size, to be honest. ARM stuff is well supported though, and is only going to get better in the future. The sort of default "get started" board is the STM32F4 discovery, which has 1 meg of flash and 192k of RAM. Seems like you're well above that.
Yes, there's a fixed cost because of the std library and panic-unwinding code (i.e. clean recovery instead of aborting when something goes wrong), and the tendency to statically compile everything in adds some more, but in an embedded context (a.k.a. "no_std") you'll find C and Rust code sizes are very comparable.
Especially in your case you'll find Rust to be a joy to use: you'll have way more confidence in your code being able to run for months without segfaults or memory leaks. And if you have a good understanding of the C memory model using Rust will be a breeze.
If you're running Linux with MBs of RAM, then you're well within Rust's comfort zone. If you're on embedded ARM (cortex-M) with KBs RAM, then Rust will be really nice and it does work today, but there's still a few missing features to make it nice to use.
The following is a very good resource, going from safe/practical ways to reduce the size (like `strip`) to more advanced and impractical builds (down to an 8KiB hello world, removing... a lot).
> I've noticed that stripped binary sizes for a simple "Hello, World!" example are significantly larger with Rust. Is this just the way things are and the "cost of protection"?
Some simple firmware I'm writing (controls some lights, takes rotary encoder input, prints stuff on serial) is currently 8K. It's written in Rust and targets the stm32f1xx and stm32f4xx chips.
I've been playing with Rust myself in my free time. If you use Rust without the standard library, a stripped executable should be smaller and more comparable to C than what you're seeing with the standard library included. Depending on your use case, you might be able to get away with just using the core library.
Dunno if it's apples and oranges for you, but I've seen Zig [0] being thrown around here previously as a "safer" embedded C alternative, albeit more minimal than Rust. May be worth comparing (I haven't tried either language).
When I started learning Rust over a year ago now, Rocket was extremely appealing to me. I never subscribe to any GitHub threads that I'm not involved in, except for Rocket on stable.
In general, stable proc macros is an awesome step for Rust.
If anyone in the Paris area is interested in teaching Rust at a university during the first semester next year, please get in touch :).
Is this undergrad? Genuinely curious, how will you get someone to understand what ownership helps avoid without them having experienced the pain on the other side?
I guess with younger and younger kids learning programming these days, maybe they can handle more? I am not sure my son would understand all of the intricacies in his first semester.
I'm building an embedded project that currently runs a python script for automatic brightness. It takes a brightness value from a sensor over I2C, applies a function to get an appropriate LCD brightness value and then sends that to the display driver over a serial port. Would this be an appropriate project to write in Rust to learn the basics of this language?
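The mapping step in a project like that might be sketched as follows; the function name and response curve here are invented for illustration, and the saturating `as` cast conveniently keeps the result in range:

```rust
/// Hypothetical map from a raw 16-bit ambient-light reading to an 8-bit
/// LCD brightness value; the square root gives a gentler low-light response.
fn lcd_brightness(raw: u16) -> u8 {
    let normalised = f32::from(raw) / f32::from(u16::MAX);
    (normalised.sqrt() * 255.0) as u8 // saturating cast: always in 0..=255
}

fn main() {
    assert_eq!(lcd_brightness(0), 0);
    assert_eq!(lcd_brightness(u16::MAX), 255);
}
```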
Yup, I've got a small Rust component that translates a couple of industrial sensors on Modbus + rtl_433 over to an InfluxDB database, and it's been happily running along for a few months now.
As someone who hasn't used Rust, I am curious about why Rust has macros.
I use C++ at work, which admittedly isn't the language I use most, and macros are used quite a bit in the code base. I find they just make the code harder to read, reason about, debug, and sometimes even write. I don't see them really living up to their claimed value.
Is there something different about Rust's macros that make them better?
There is almost no intersection between the kind of things that can be done with the C++ macro system and the kind of things that can be done with the Rust macro system. They are not related. You can see them as another feature that is not available from C++.
>I use C++ at work, ..., and macros are used quite a bit in the code base.
Ouch. Modern C++ has alternatives to C macros. Is it an old code base, or is it just written in the C++98 style?
Today people will typically use constexpr instead of #define. While macros can possibly do some funky things constexpr can't, you'd be hard pressed to find those things. (C++ supports constexpr if, constexpr functions, constexpr lambdas, math, and so on.)
If you have the time and effort, it may be worthwhile to slowly start modernizing the code base, bit by bit.
The advantage of modern day C++ is it catches errors at compile time that older versions did not and would crash while the software was running. You might improve the stability of the code base if you help out.
Writing macros (in any language) is a way of creating an abstraction. Creating abstractions is a way of automating the job of programming, ideally making it more efficient and less error-prone. This is why people usually prefer Java to BASIC.
Macros are a way of creating abstractions that are particularly suited to being "concreted" by the compiler, making them an ideal match for programming languages that seek to be "close to the metal", like C, C++, and Rust.
C macros are lacking because they are very primitive; e.g. they have no type system. They are also barely Turing-complete: it's extremely hard to write a meaningful algorithm in them. IMHO the real macros of the C++ language are templates and constexpr, although they are limited in other ways: e.g. it's hard to extend the syntax using them, or to do certain things like making the calling function return. They grow ever more powerful, with their own type system (concepts) and things like std::embed and static reflection, so they finally feel like a real language, albeit a clumsy, purely functional one that feels alien to a C++ programmer without exposure to Haskell.
Rust macros are actually meant to feel like Rust, not some ad-hoc bolted on language.
The first thing I would say is that regular inline macros are hygienic and type safe, just like the rest of the code you write. That way they're not just text expansion but smart code expansion.
When I say hygienic, I mean that when a macro uses a variable that isn't in its parameters and isn't static, it will be a compile error, because the macro cannot reach into the caller's scope. Any variables defined inside are scoped to the macro. Only parameters passed in can be referenced from outside the scope of the macro.
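A small example of that hygiene with a declarative macro:

```rust
// The `factor` binding inside the macro cannot collide with, or capture,
// a `factor` at the call site.
macro_rules! double {
    ($x:expr) => {{
        let factor = 2; // hygienic: invisible outside this expansion
        $x * factor
    }};
}

fn main() {
    let factor = 10;                 // unrelated to the macro's `factor`
    assert_eq!(double!(factor), 20); // caller's 10 * macro's 2
}
```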
They're also notated differently from regular functions, and they can't appear just anywhere, so it's obvious when you see a macro that it will expand into some code that won't break any of the other code in your function.
Of course people can write really terrible macros but typically macros serve a single very simple task and should be well documented and in my experience they often are.
As for procedural macros: they are just AST-in, AST-out functions, but written as a library in regular Rust rather than in some language bolted on top. That makes it easy to reason about the code and keep it safe. If you write them well, they can be very good at reporting usage errors to users.
The std lib also provides some very straightforward yet useful macros to act as inspiration. String formatting is an inline macro. vec! is a macro to quickly define Vectors. Deriving Debug, PartialEq, and Default are procedural macros, and they automate tasks that are very straightforward yet tedious to write by hand all the time.
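A quick tour of those std macros:

```rust
// `#[derive(...)]` invokes procedural macros that generate the tedious
// trait implementations for you.
#[derive(Debug, PartialEq, Default)]
struct Point { x: i32, y: i32 }

fn main() {
    let v = vec![1, 2, 3];                    // vec! builds a Vec in place
    let s = format!("{}-{}", v.len(), v[0]);  // format! checks its arguments at compile time
    assert_eq!(s, "3-1");
    assert_eq!(Point::default(), Point { x: 0, y: 0 }); // derived Default + PartialEq
    println!("{:?}", Point::default());       // derived Debug formatting
}
```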
Given the macros people publish, I would say they've done a good job at securing them as a useful feature. I've not seen many instances in actual code bases of messy macros. For examples of great macros, see serde[0], clap[1], inline-python[2]
Most languages end up needing a way to step outside that language, and if you don't have macros you end up using something even worse. Rust in particular lacks "do notation" or proper support for higher-kinded types, so it needs to use macros as an ad-hoc replacement for things like sequencing async operations or proper propagation of errors. To be honest, I'm surprised they didn't find a way to make a web framework in plain Rust, though.
The new API to cast in an unsafe manner is:
let x: f32 = 1.0;
let y: u8 = unsafe { x.to_int_unchecked() };
But as always, you should only use this method as a last resort. Just like with array access, the compiler can often optimize the checks away, making the safe and unsafe versions equivalent when the compiler can prove it.
I believe for array access you can elide the bounds checking with an assert like
assert!(arr.len() >= 255);
let mut sum = 0;
for i in 0..255 {
    sum += arr[i]; // this access doesn't emit bounds checks in the compiled code
}
I'm guessing it would work like this with casts?
assert!(x <= 255.0 && x >= 0.0);
let y: u8 = x as u8; // no check
I would say that my personal take of the temperature is "vaguely pro but not a slam dunk", at least from the opinions I've seen. Only one way to find out.
Rust has great libraries to make life easy, eg https://docs.rs/fixed/1.0.0/fixed/ (Note: I haven't benchmarked the 4 or 5+ fixed precision libraries Rust offers to see which is best.)
I think in the specific case of casting a float to an int, more instructions will be added, but it doesn't have to be a branch. Here it looks like rustc emits a conditional move: https://godbolt.org/z/1cfqof
> Just like with array access, the compiler can often optimize the checks away, making the safe and unsafe versions equivalent when the compiler can prove it.
Can it "often" solve the halting problem as well?
The hope that this kind of optimization will happen sounds a bit fanciful for any non-trivial part of a program.
You would be surprised, at least with array access stuff. And, if it doesn't, you can often help it understand with a bit of work. Sometimes an assert before a loop or re-slicing something can take a check in the body of a loop and move it out to a single one.
I ported a small C function to Rust recently that involved some looping, and all of the bounds checking was completely eliminated, even once I took the line-by-line port and turned it into a slightly higher level one with slices and iterators instead of pointer + length.
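That kind of port ends up looking something like this (a toy reconstruction, not the actual function):

```rust
// Indexed version: each `arr[i]` needs a bounds check unless the
// optimiser can prove `i` is always in range.
fn sum_indexed(arr: &[u32]) -> u32 {
    let mut sum = 0;
    for i in 0..arr.len() {
        sum += arr[i];
    }
    sum
}

// Iterator version: there is no index at all, so there is nothing to check.
fn sum_iter(arr: &[u32]) -> u32 {
    arr.iter().sum()
}

fn main() {
    let data = [1, 2, 3, 4];
    assert_eq!(sum_indexed(&data), 10);
    assert_eq!(sum_iter(&data), 10);
}
```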
sitkack|5 years ago
Rust 1.45 will be the Rocket Release. It unblocks Rocket running on stable as tracked here https://github.com/SergioBenitez/Rocket/issues/19
luhn|5 years ago
Or maybe it's an explosive weapon crafted with metal pipe and gunpowder. https://rust.fandom.com/wiki/Rocket
pitaj|5 years ago
Worth noting that a lot of the code size is a constant addition that won't really scale with your program code.
jwr|5 years ago
I really hope that more enlightened vendors (hi Nordic Semiconductor) will start supporting Rust on their platforms.
riquito|5 years ago
https://github.com/johnthagen/min-sized-rust
zozbot234|5 years ago
Relevant: https://news.ycombinator.com/item?id=23496107
maxkrieger|5 years ago
[0] https://ziglang.org/
baseballdork|5 years ago
At least i+1 elements, right? Or am I getting caught up by one of the three hardest problems again?
steveklabnik|5 years ago
https://github.com/rust-lang/blog.rust-lang.org/commit/fe241... (should roll out in a few minutes)
Tade0|5 years ago
1. Naming things.
2. Cache invalidation.
3. Off by one errors.
Number 3 is apparently the hardest one.
dagmx|5 years ago
———-
Original question: Out of curiosity, why the +1?
ldng|5 years ago
But I understand it could be more fun for the devs :-)
rvz|5 years ago
At the moment, I'd rather use something like actix-web instead.
pas|5 years ago
Maybe take a look at this I2C lib: https://github.com/rust-embedded/rust-i2cdev
[0]: https://github.com/serde-rs/serde [1]: https://github.com/clap-rs/clap [2]: https://docs.rs/inline-python/0.5.3/inline_python/
steveklabnik|5 years ago
Here's an example of how when it can detect it, it does the right thing: https://godbolt.org/z/hPqf69
I am not an expert in these hints, maybe someone else knows!
devit|5 years ago
Something so lossy and ill-conceived should not be a two-letter operator.
kgraves|5 years ago
well done rust team