fork-bomber
|
1 month ago
This. It is somewhat disheartening to hear Rust's interop with C described as an insurmountable problem. Setting aside the whole “it’s funded by the Government/Google etc” nonsense: I wish that at least a feeble attempt would be made to actually use the FFI capabilities that Rust and its ecosystem have before folks form an opinion. Personally - and I’m not ashamed to state that I’m an early adopter of the language - it’s very good. Please consider that the Linux kernel project, Google, Microsoft etc went down the Rust path not on a whim but after careful analysis of the pros and cons. The pros won out.
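To make the FFI point concrete, here's a minimal sketch of the pattern being described: Rust calling a C function directly, with the `unsafe` confined to one small, auditable wrapper (the `c_strlen` name is purely illustrative; the example assumes libc is linked, which it is by default for ordinary Rust binaries):

```rust
use std::ffi::CString;
use std::os::raw::c_char;

// Declare libc's strlen; the linker resolves it from the C library.
extern "C" {
    fn strlen(s: *const c_char) -> usize;
}

// Safe wrapper: the unsafe FFI call lives behind a safe interface,
// and CString guarantees the NUL terminator the C side expects.
fn c_strlen(s: &str) -> usize {
    let c = CString::new(s).expect("no interior NUL bytes");
    unsafe { strlen(c.as_ptr()) }
}

fn main() {
    assert_eq!(c_strlen("hello"), 5);
    println!("ok");
}
```

Callers of `c_strlen` never touch `unsafe` themselves; auditing the C boundary reduces to auditing this one function.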
fork-bomber
|
1 month ago
My perception is that the designers have taken their rough experience on board and have now settled on a reasonable development model with an emphasis on achievable feature additions. The language server is astonishingly good, the feature set as it stands is very much batteries included, and the turnaround from a highlighted discrepancy to a reasonable resolution is very commendable. I’m an embedded dev so my opinions are biased accordingly, but I do see some pretty awesome additions with vls, veb, vui etc. Of course, please conduct your own experiments and research, but I’m incrementally optimistic.
fork-bomber
|
1 month ago
Is there a missing link?
fork-bomber
|
3 months ago
You’re right. But consider that in order to be useful when not fused off, the design would need to have a bunch of additional logic (interconnect ports, power control machinery etc) at the periphery of the to-eventually-be-fused-off area that would likely remain even when things were fused off. That may impact power.
Apart from that there are the other usual angles: the very fact that there’s additional logic in the compute path (eventually fused off) means additional design and verification complexity. The additional area, although dark, eats into the silicon yield at the fab.
Not saying it’s not possible.
fork-bomber
|
3 months ago
QC likely use a lot of Arm IP, Nuvia notwithstanding, and want a way out of the general Arm monopoly. Seems to be a growing trend.
A dual-ISA decoder with fuse-off options will likely have unwelcome power-performance-area and yield consequences.
fork-bomber
|
4 months ago
Thanks. That's exactly the kind of subliminal lobbying that I was alluding to. I don't think it's FUD at all.
fork-bomber
|
4 months ago
An ISO standard is hard to geopolitically regulate, I would think.
It also cements the fact that the technology being standardized is simply too fundamental and likely ubiquitous for folks to worry about it being turned into a strategic weapon.
Taking the previously mentioned Ethernet example (not a perfect one, I should stress again): why bother blocking its uptake when it is too fundamentally useful and enabling for a whole bunch of other innovation that builds on top?
fork-bomber
|
4 months ago
A large motivation for this move is likely to ensure that attempts by some incumbent ISAs to lobby the US government to curb the uptake of RISC-V are stymied.
There appears to be an undercurrent of this sort underway where the soaring popularity of RISC-V in markets such as China is politically ripe for some incumbent ISAs to turn US government opinion against RISC-V, from a general uptake PoV or from the PoV of introducing laborious procedural delays in the uptake.
Turning the ISA into an ISO standard helps curb such attempts.
Ethernet, although not directly relevant, is a similar example. You can't lobby the US government to outright ban or generally slow the adoption of Ethernet, because it's so universal, in large part by virtue of being a standard.
fork-bomber
|
7 months ago
Word on the street is that further Cortex-R and Cortex-M development has been shelved. All the focus is on Cortex-A.
fork-bomber
|
9 months ago
In such scenarios, the assembly routines lend themselves to relatively easy manual scrutiny, given that they are small compared to the much larger higher-level language code in the project.
It's the latter that really needs the compiler's assistance in removing memory-safety issues (it is much harder for humans, given the code size and complexity). The fact that safe higher-level language code is interoperating with inherently unsafe code (per the Rust definitions) is absolutely OK.
fork-bomber
|
1 year ago
Nothing compared to the not so subtle wordplay you use in your HN handle!
fork-bomber
|
1 year ago
Internet: Please take note for future Lolz.
fork-bomber
|
1 year ago
I assume you're alluding to Rust's borrow checker. If so, your concern is misplaced, which is unfortunately a common occurrence when it comes to this topic. Note that most of the interaction with the borrow checker's rules would be tackled by the interfaces between Rust and C that are being incrementally added to the kernel. By the time the 'end users' (the embedded Linux device driver authors you allude to) are involved, all they are doing is using safe Rust wrappers for loads and stores to MMIO, as an example, where there is no fundamental interaction with the borrow checker (because those happen at another level in the call graph).
That said: To appreciate the value Rust provides there is going to be some experience driven knowledge gain needed but the efforts underway should help.
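To sketch what such a safe MMIO wrapper looks like in spirit (the `Mmio` type and its methods are invented for illustration and are not the actual Rust-for-Linux bindings; a plain array stands in for a mapped device region so the example runs in user space):

```rust
use core::ptr;

// Illustrative register-block handle; real kernel bindings differ.
struct Mmio {
    base: *mut u32,
    len: usize, // number of u32 registers in the mapped region
}

impl Mmio {
    // The safety obligation (base points to a valid region of `len`
    // registers) is discharged once, by whoever maps the region.
    unsafe fn new(base: *mut u32, len: usize) -> Self {
        Mmio { base, len }
    }

    // Safe API for driver authors: bounds-checked, volatile accesses.
    fn read(&self, reg: usize) -> u32 {
        assert!(reg < self.len, "register index out of range");
        unsafe { ptr::read_volatile(self.base.add(reg)) }
    }

    fn write(&mut self, reg: usize, val: u32) {
        assert!(reg < self.len, "register index out of range");
        unsafe { ptr::write_volatile(self.base.add(reg), val) }
    }
}

fn main() {
    // Stand-in for a mapped MMIO region.
    let mut regs = [0u32; 4];
    let mut dev = unsafe { Mmio::new(regs.as_mut_ptr(), regs.len()) };
    dev.write(1, 0xDEAD_BEEF);
    assert_eq!(dev.read(1), 0xDEAD_BEEF);
    println!("ok");
}
```

The driver author only ever calls `read`/`write`; the `unsafe` and the borrow-checker-sensitive plumbing sit one level down in the call graph, exactly as the comment describes.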
fork-bomber
|
1 year ago
Possibly. What's more likely is that folks would want to ask why major corporations are willing to invest the dollars to commit to Rust and its take on memory safety, at scale.
Once that's internalised, those 'someones' may either align or be the outliers that don't matter in the greater scheme.
fork-bomber
|
1 year ago
Reaching parity with Linux's immense suite of device drivers is perhaps the single biggest hurdle.
That's one of the biggest reasons why alternative kernels either remain fringe or fail.
An initiative like NetBSD's rump kernel, but for Linux, may provide a bridge to Linux's drivers, but that approach is brittle. There's already LKL (the Linux Kernel Library project) with similar aims, but it hasn't gained much traction.
fork-bomber
|
1 year ago
That's quite unlikely. What's more likely is the emergence of a better approach for incremental inclusion of Rust in the kernel. This policy is a decent stake in the ground.
fork-bomber
|
1 year ago
Arm has been enabling server/data-center class SoCs for a while now (eg Amazon Graviton et al). This is only going to pick up further (eg Apple Private Cloud Compute).
Also, there's nothing fundamentally stopping chiplet pick-up in traditional embedded domains. It's probably quite likely.
fork-bomber
|
1 year ago
Arm doesn't only do ISAs. It essentially wrote the standards for the AMBA/AXI/ACE/CHI interconnect space, so standardizing chip-to-chip interconnects is very much in Arm's interest. It is a double-edged sword though, since chiplets will likely enable fine-grained modularity, allowing IP from other vendors to be stitched around Arm's (eg a RISC-V IOMMU instead of Arm's SMMUv3).
fork-bomber
|
1 year ago
Surely that's an incredibly broad categorisation?
Learning Rust, like any other language, is a strategic investment that pays off with experience. Companies that are willing to invest, benefit accordingly.
Evidently, several companies that care about memory safety and programmer productivity have invested and benefited from Rust.
Finally: this is subjective of course, but the borrow checker isn't something that necessarily needs fighting 'for a month or two'. There are so many excellent resources available now that learning how to deal with it is quite tractable.
fork-bomber
|
2 years ago
With that pedantic an approach, you're missing out on an otherwise fairly informative piece.