Half the comments here are talking about the vtuber herself. Who cares; it's been talked about before. Just imagine if half the thread were discussing what gender she is. What I am interested in is the claims here: https://asahilinux.org/2022/11/tales-of-the-m1-gpu/#rust-is-.... (what is it called if it comes with a proof?).
The resident C/C++ experts here would have you believe that the same is possible in C/C++. Is that true?
In C? No, not unless you write your own scaffolding to do it.
In C++? Maybe, but you’d need to make sure you stay on top of using thread safe structures and smart pointers.
What Rust does is flip this. The default is the safe path. So instead of risking forgetting smart pointers and thread safe containers, the compiler keeps you honest.
So you’re not spending time chasing oddities because you missed a variable initialisation, or hitting a race condition or some kind of use-after-free.
While there are a lot of people who say this slows you down and that a good programmer doesn’t need it, my experience is that even the best programmers forget. At least for me, I spend more time trying to reason about C++ code than Rust, because I can trust my Rust code more.
Put another way, Rust helps reduce how much of the codebase I need to consider at any given time to just the most local scope. I work in many heavy graphics C and C++ libraries, and have never had that level of comfort or mental locality.
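As a toy illustration of "the default is the safe path" (my own invented sketch, not code from the article): sharing a plain mutable integer across threads simply does not compile in Rust, so you end up with `Arc<Mutex<..>>` whether you remembered to think about thread safety or not.

```rust
use std::sync::{Arc, Mutex};
use std::thread;

// Increment a shared counter from `n` threads. The compiler only accepts
// this once the state sits behind Arc<Mutex<..>>; handing the threads an
// unsynchronized `&mut u32` is rejected at compile time.
fn parallel_count(n: u32) -> u32 {
    let counter = Arc::new(Mutex::new(0u32));
    let handles: Vec<_> = (0..n)
        .map(|_| {
            let counter = Arc::clone(&counter);
            thread::spawn(move || {
                // The lock guarantees no data race on the increment.
                *counter.lock().unwrap() += 1;
            })
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    let total = *counter.lock().unwrap();
    total
}

fn main() {
    println!("counted: {}", parallel_count(8)); // prints "counted: 8"
}
```

The C++ equivalent compiles fine without the mutex; you only find out at runtime, if you're lucky.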
I have a lot of experience in C, a lot of experience in C++, and some experience with Rust (I have some projects which use it). My opinion is that it's true, and the other comments are good explanations of why. But I want to point out, in addition to those: there's a reason why Rust was adopted into Linux while C++ wasn't. Getting C++ to work in the kernel would almost certainly have been way less work than getting Rust to work. But only Rust can give you the strong guarantees which make you avoid lifetime-, memory- and concurrency-related mistakes.
I'm not exactly a C or Rust expert, so better to check @dagmx's comment for that, but I know some C++ and have worked with networking enough to know some pitfalls.
Talking of C++: it can be really solid when you work with your own data structures and control the code on both ends. Using templates with something like boost::serialization or protobuf for the first time is like magic. E.g. you can serialize the whole state of your super complex app and restore it on another node easily.
Unfortunately that's just not the case when you're actually trying to work with someone else's API/ABI that you have no control over. It's even worse when it's a moving target and you need to maintain several different adapters for different client/server versions.
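To make the "both ends under your control" case concrete, here is a hand-rolled round-trip sketch (in Rust, since the thread mixes languages; the struct and its fields are invented for illustration). It stands in for what boost::serialization or protobuf automate in C++: flatten app state to bytes on one node, restore it on another. It also shows the fragility the comment describes: the byte layout is fixed, so any version skew between sender and receiver breaks it.

```rust
// Toy stand-in for a serialization framework: a fixed little-endian layout
// that both ends must agree on exactly.
#[derive(Debug, PartialEq)]
struct AppState {
    tick: u64,
    score: i32,
}

impl AppState {
    fn serialize(&self) -> Vec<u8> {
        let mut buf = Vec::with_capacity(12);
        buf.extend_from_slice(&self.tick.to_le_bytes());
        buf.extend_from_slice(&self.score.to_le_bytes());
        buf
    }

    // Returns None on truncated input. If the peer adds or reorders a
    // field (the "moving target" problem), this silently misparses --
    // which is why schema-carrying formats like protobuf exist.
    fn deserialize(buf: &[u8]) -> Option<AppState> {
        let tick = u64::from_le_bytes(buf.get(0..8)?.try_into().ok()?);
        let score = i32::from_le_bytes(buf.get(8..12)?.try_into().ok()?);
        Some(AppState { tick, score })
    }
}

fn main() {
    let state = AppState { tick: 42, score: -7 };
    let restored = AppState::deserialize(&state.serialize()).unwrap();
    assert_eq!(state, restored);
    println!("round-trip ok: {:?}", restored);
}
```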
Possible? Definitely. Easier? Probably not, at least for the most part; there are a couple of things which C(++) can sometimes be more ergonomic for, and those can be isolated out and used independently.
watching a virtual persona stream their development of their M1 GPU drivers is one of the most cyberpunk things I've ever seen! it's easy to forget that this world is looking closer and closer to those dreamed up by Gibson, Stephenson, etc. what a time to be alive.
I like your optimism, but it seems more like a Philip K. Dick novel to me.
>In 2021, society is driven by a virtual Internet, which has created a degenerate effect called "nerve attenuation syndrome" or NAS. Megacorporations control much of the world, intensifying the class hostility already created by NAS.
Can someone explain this vtoon trend to me? It doesn't seem to be driven by anonymity because their real name is easily findable, so I assume it's something else? It seems very common, especially in certain communities.
The m1n1 hypervisor specialised for debugging is a pretty genius idea. Is anyone aware of anyone else taking a similar approach? Seems like it would be a pretty generally applicable technique and would make OS/hardware driver development a lot more approachable.
Even before true CPU-supported "hypervisors," there was shim software like SoftICE that worked similarly to m1n1 in that you would run an OS underneath and then use a supervisor tool to trace and debug the OS under inspection.
More recently, it's fairly common to use a hypervisor or simulator for kernel debugging in device driver development on Windows via Hyper-V.
A lot of Linux driver development is done using qemu as well, although this is usually more targeted and isn't quite the same "put a thin shim over the OS running on the hardware" approach.
The flexibility and I/O tracing framework in m1n1 are pretty uniquely powerful, though, since it was built for reverse engineering specifically.
Some developers used user mode Linux for driver development, and I think some development has happened on the NetBSD rump kernel more recently. I find the work that goes into building this kind of tooling all pretty impressive.
The nouveau project used a kernel module to intercept mmio accesses: https://nouveau.freedesktop.org/MmioTrace.html.
Generally speaking hooking onto driver code is one of the preferred ways of doing dynamic reverse engineering. For userspace components, you can build an LD_PRELOAD stub that logs ioctls, and so on.
Idea-wise, the S/360 actually ran on hardware microcode, and all these ideas of virtual machines and hypervisors came from an unauthorised development called CP-67, later VM. IBM used it for developing MVS etc., as some of the hardware for certain features was yet to be built.
But these modern-day developments are crazy. How can you manage 100+ structures in a language you just learnt (Rust), for a secret GPU whose vendor does not share info?
The fact so much hardware these days is running a full real-time OS all the time annoys me. I know it is normal and understandable but everything is such a black box and it has already caused headaches (looking at you, Intel).
This isn't even that new of a thing. The floppy disk drive sold for the Commodore 64 included its own 6502 CPU, ROM, and RAM, and ran its own disk operating system[1]. Clever programmers would upload their own code to the disk drive to get faster reads/writes, pack data more densely on the disk, and even implement copy-protection schemes that could validate the authenticity of a floppy.
There's this great USENIX talk by Timothy Roscoe [1], who is part of the Enzian Team at ETH Zürich.
It's about the dominant unholistic approach to modern operating system design, which is reflected in the vast number of independent, proprietary, under-documented RTOSes running in tandem on a single system, and eventually leading to uninspiring and lackluster OS research (e.g. Linux monoculture).
I'm guessing that hardware and software industries just don't have well-aligned interests, which unfortunately leaks into OS R&D.
Every cell in your body is running a full blown OS fully capable of doing things that each individual cell has no need for. It sounds like this is a perfectly natural way to go about things.
I don't know, this sounds very computer-sciency. We build smaller tools to help build big things. Now the big things are so good and versatile we can replace our smaller tools with the big things too. With the more powerful tools, we can build even bigger things. It is just compiler bootstrapping happening in the hardware world.
Same. It's not about the principle, but that generally these OSes increase latency etc. There's so much you can do with interrupts, DMA, and targeted code when performance is a priority.
I sometimes wonder how fast things could go if we ditched the firmware and just baked a kernel/OS right into the silicon. Not like all the subsystems which run their own OS/kernels, but really just cut every layer and have nothing in between.
I'm actually very happy about the rise of VTubers/live avatars. I imagine that there are a lot of people that would love to interactively share their knowledge/skills on youtube/twitch but avoid doing so because they're not conventionally attractive or just too shy.
The quantity of exclamation points lol. I assume I'm just too old to get it...I'm okay with that, and I'm damn impressed with the results, so more power to Lina, whatever works for her.
> It feels like Rust’s design guides you towards good abstractions and software designs.
> The compiler is very picky, but once code compiles it gives you the confidence that it will work reliably.
> Sometimes I had trouble making the compiler happy with the design I was trying to use, and then I realized the design had fundamental issues!
I experience a similar sentiment all the time when writing Rust code (which for now is admittedly just toy projects). So far it's felt like the compiler gives you just enough freedom to write programs in a "correct" way.
I don't really do unsafe/lower-level coding, so I can't speak to much there however.
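A minimal sketch of that "the design had fundamental issues" moment (an invented example, not from the post): holding a reference into a `Vec` while still pushing to it is rejected, because a push may reallocate and, in C++, would silently leave the pointer dangling.

```rust
// The borrow checker steering a design: take references only after the
// mutation phase is over (or keep an index instead of a reference).
fn boot_log() -> Vec<String> {
    let mut log = vec![String::from("boot")];

    // let first = &log[0];            // borrowing here...
    // log.push(String::from("init")); // ...makes this line a compile
    //                                 // error: cannot borrow `log` as
    //                                 // mutable while it is also
    //                                 // borrowed as immutable.

    // The shape the compiler pushes you toward: finish mutating first,
    // then take the reference.
    log.push(String::from("init"));
    let first = &log[0];
    assert_eq!(first, "boot");
    log
}

fn main() {
    let log = boot_log();
    println!("{} entries, first = {}", log.len(), log[0]);
}
```

The "fix" here is trivial, but the same rule is what forces the larger architectural rethinks the quoted text describes.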
The 2015 MBP one was the last one that was passable for me; what came after is horrible. Even the new MBP that has real ports again is still not as good as the 2015 in terms of keyboard.
Not one comment here about the “GPU drivers in Python”. I like the idea of iteration speed over pure speed.
And the coprocessor called “ASC” also has similarities with Python: the GPU is doing the heavy lifting, but the ASC (like Python) interacts using shared memory.
Python does the same with a lot of its libraries (written in C/C++).
> And the coprocessor called “ASC” also have similarities with Python
It's a processor, not a programming language :) The team has essentially strapped the API into something that you can poke with Python instead of with a native driver.
Loved reading this. About the triangle/cube screenshots: were they taken on Linux on a physical Mac computer? How were you able to deploy your driver? Does the M1 GPU have a basic text/console mode allowing you to start and work with Linux?
Displaying to the screen and stuff was already working. You can already use Asahi Linux and have a GUI and everything; it’s just that it’s all rendered by the CPU right now.
I've never played games on my M1 Macbook - what are some popular reasonably graphics intensive games that it would support? Could it run Dota2 for example?
Disco Elysium, Hades and Civ VI run really well on my MBA M1 (using a 4K display). These games are not as resource-heavy as Dota 2 AFAIK, but I’m comparing them to my maxed-out 16-inch MBP from 2020, which acted more like a cursed semi-sentient toaster than a high-spec laptop.
Resident Evil Village recently came out and it performs surprisingly well even on the low end MacBook Air M1 with only 7 GPU cores. What's even more impressive is that the game is playable (low gfx settings, 30fps) when running that machine on low power mode.
It is irksome to me given how much Linux is used inside Apple (board bringup, debugging, etc). You benefit from these gifts, Apple, give back a teensy bit in return. Everybody wins.
I think there are larger barriers to getting Windows running on Apple Silicon that would need to be addressed first.
For one example, Windows ARM kernels are pretty tied to the GIC (ARM's reference interrupt controller), but Apple has its own interrupt controller. Normally on ntoskrnl this distinction would simply need hal.dll swapped out, but I've heard from those who've looked into it that the clean separation has broken down a bit and you'd have to binary patch a windows kernel now if you don't have source access.
>Asahi Lina, our GPU kernel sourceress. Lina joined the team to reverse engineer the M1 GPU kernel interface, and found herself writing the world’s first Rust Linux GPU kernel driver. When she’s not working on the Asahi DRM kernel driver, she sometimes hacks on open source VTuber tooling and infrastructure.
Asahi Linux has been upstreaming, but of course it's ongoing. The GPU driver in particular depends on some rust inside the kernel bits which aren't in the mainline kernel, yet. The 6.1 kernel has some Rust bits, 6.2 will have more, but I don't believe that will be enough for the GPU driver ... yet.
Asahi Lina is a maintainer in the Asahi Linux project. She is now well known because of her achievement of programming the Asahi Linux GPU driver for Apple Silicon Macs.
from Johnny Mnemonic
What can we do to make it more utopian?
What would really push it into cyberpunk territory is if it turns out this is not an actual human but an AI-controlled virtual person.
1: https://en.wikipedia.org/wiki/Commodore_DOS
[1] https://youtu.be/36myc8wQhLo
For example, Intel's ME could be a really useful feature if we could do what we want with it. Instead they lock it down so it's just built-in spyware.
I'm totally fine with it (I'm grateful the story is being told at all), but it is surreal tone for technical writing.
https://youtu.be/SDJCzJ1ETsM?t=1179
How can people watch this?
Mario Brothers would make more sense though. Whoever created this is a plumber par excellence.
Awesome job.
Who is Asahi Lina? Is that an actual person?
Apple don't have linux drivers. It would be great if they wrote some, but it's never going to happen.
> Who is Asahi Lina? Is that an actual person?
The virtual persona of an actual person who has chosen to remain anonymous (hence the name which would be a crazy coincidence otherwise).
Apple's drivers are upstreamed, in Darwin. I'm not aware of any reason to believe that Apple has any Linux drivers that they could upstream.
To what upstream project?