tmandry's comments

tmandry | 2 years ago | on: Rust to stabilize `async fn` and return-position `impl Trait` in traits

I don't have numbers handy, but I can say with confidence that the answer to both questions is "yes". It depends on the use case. Boxing is a useful tool and in many cases the overhead is minimal, but you don't want to use it all the time. Likewise, static inlining can bite you in deeply nested cases and I've heard of this happening too.

The future we're working toward is for this to be a decision you can make locally without introducing compatibility hazards or having to change a bunch of code. Ideally, one day you can even ask the compiler to alert you to potential performance hazards or make reasonable default choices for you...
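To make the boxing-vs-inlining tradeoff concrete, here's a rough, stdlib-only sketch of the two choices side by side. The trait and type names (Fetch, Client) are made up for illustration, and poll_once is a toy helper that only works because these futures complete immediately:

```rust
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// Hypothetical trait showing both dispatch choices side by side.
trait Fetch {
    // Static: the concrete future type is visible to the caller and can
    // be inlined into an enclosing future with no allocation.
    fn fetch_static(&self) -> impl Future<Output = u32>;

    // Boxed: one heap allocation per call, but a uniform return type
    // that supports dynamic dispatch and keeps nesting depth flat.
    fn fetch_boxed(&self) -> Pin<Box<dyn Future<Output = u32> + '_>>;
}

struct Client(u32);

impl Fetch for Client {
    fn fetch_static(&self) -> impl Future<Output = u32> {
        let n = self.0;
        async move { n }
    }
    fn fetch_boxed(&self) -> Pin<Box<dyn Future<Output = u32> + '_>> {
        Box::pin(async move { self.0 })
    }
}

// Poll an already-ready future once, using a no-op waker.
fn poll_once<F: Future>(fut: F) -> Option<F::Output> {
    fn raw() -> RawWaker {
        fn clone(_: *const ()) -> RawWaker { raw() }
        fn noop(_: *const ()) {}
        static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
        RawWaker::new(std::ptr::null(), &VTABLE)
    }
    let waker = unsafe { Waker::from_raw(raw()) };
    match std::pin::pin!(fut).poll(&mut Context::from_waker(&waker)) {
        Poll::Ready(v) => Some(v),
        Poll::Pending => None,
    }
}

fn main() {
    let c = Client(7);
    assert_eq!(poll_once(c.fetch_static()), Some(7));
    assert_eq!(poll_once(c.fetch_boxed()), Some(7));
}
```

Switching a method from one form to the other is exactly the kind of local decision described above: callers keep awaiting it the same way.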

tmandry | 4 years ago | on: Human Rapamycin Longevity Clinical Trials Begin

Study design: Tracks 200 people over one year and measures a suite of biomarkers at the beginning, 6 months, and 12 months. Participants are randomly assigned to one of four dosages or a placebo. Double blind.

The study hopes to find an optimal dose for humans using the various biomarkers as a guide. I’m not sure how well we can connect those biomarkers to actual longevity/healthspan.

Source: https://www.lifespan.io/news/a-campaign-to-launch-rapamycin-...

tmandry | 5 years ago | on: Amazon, Apple and Google Cut Off Parler

> Otherwise we are always dependent on the good will of the companies without democratic control.

Yes, but that’s already true without this action, which doesn’t make them any more or less democratically controlled. A more democratic way of deciding clear limits on free speech would be great, but the absence of one doesn’t mean the platforms should sit on their hands and do nothing while the world burns.

tmandry | 5 years ago | on: Why Not Rust?

Most libraries don’t target nightly anymore. It certainly used to be the case that nightly had all the cool features everyone wanted, but almost all features that popular crates depended on have now been stabilized. Even Rocket (the most high-profile holdout I know of) now works on stable as of earlier this year.

As for maintenance, as with all library ecosystems it’s a mix. The most popular crates tend to be the most well-maintained in my experience. This is definitely something to consider when taking on new dependencies.

tmandry | 5 years ago | on: Foam – A Roam Research alternative with VSCode, Markdown and GitHub

You seem to be hitting the nail on the head with regard to seamless UX. I’d gladly pay to support something like this, but the lock-in / “what happens if you go away” problem is real and holds me back from investing all my knowledge and time. Short of an actual federated system, a self-hosted option OR just markdown export / backup of an entire database would completely alleviate that for me.

tmandry | 6 years ago | on: DisplayPort and 4K

Only models sold after a certain date support this, and you have to go through a menu sequence to enable HDMI 2. It’s documented on their website.

tmandry | 6 years ago | on: Async-await on stable Rust

Just to clarify for those following along, Rust async code does not use green threads and doesn't require a stack per task.

tmandry | 6 years ago | on: How Rust optimizes async/await

The concept of a pseudo-thread you're referring to is a task. A task contains a whole tree of futures awaiting other futures. So no manual propagation is necessary.

Of course, it's possible for tasks to spawn other tasks that execute independently. (To be clear, if you are awaiting something from within your task, it is not a separate task.) For spawning new tasks, there's a standard API[1], which doesn't include any executor-specific stuff like priority. You'll have to decide what you want the default behavior to be when someone calls this; for example, a newly spawned task can inherit the priority of its parent.

To get more sophisticated, you could even have a "spawn policy" field for every task that your first-party code knows how to set. Any new task spawned from within that task inherits priority according to that task's policy. The executor implementation decides what tasks look like and how to spawn new ones, so you can go crazy. (Not that you necessarily should.)
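Here's a rough sketch of what that spawn-policy inheritance might look like inside an executor. Everything here is hypothetical (SpawnPolicy, CURRENT, priority_for_new_task are invented names, not any real executor's API):

```rust
use std::cell::Cell;

// Hypothetical per-task spawn policy; not a real API.
#[derive(Clone, Copy, Debug, PartialEq)]
enum SpawnPolicy {
    InheritPriority,
    FixedPriority(u8),
}

thread_local! {
    // Set by the executor while it is polling a task:
    // (that task's priority, that task's spawn policy).
    static CURRENT: Cell<Option<(u8, SpawnPolicy)>> = Cell::new(None);
}

// Priority assigned to a task spawned through an executor-agnostic
// spawn API: derived from the running task's policy, or a default.
fn priority_for_new_task() -> u8 {
    CURRENT.with(|c| match c.get() {
        Some((prio, SpawnPolicy::InheritPriority)) => prio,
        Some((_, SpawnPolicy::FixedPriority(p))) => p,
        None => 0, // default for tasks spawned outside any task
    })
}

fn main() {
    // Pretend the executor is polling a priority-3 task that inherits.
    CURRENT.with(|c| c.set(Some((3, SpawnPolicy::InheritPriority))));
    assert_eq!(priority_for_new_task(), 3);

    // Now a task whose policy pins its children to priority 7.
    CURRENT.with(|c| c.set(Some((3, SpawnPolicy::FixedPriority(7)))));
    assert_eq!(priority_for_new_task(), 7);
}
```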

To summarize the Rust approach, I'd say you have 3 main extension points:

1. The executor, which controls the spawning, prioritization, and execution of tasks

2. Custom combinators (like join_all[2]), which let you customize the implementation of poll[3] and, say, control how sub-futures are prioritized (This operates at the same level as await, so per-future, not per-task.)

3. Leaf futures (like the ones that read or write to a socket). These are responsible for working with the executor to schedule their future wake-ups (with, say, epoll or some other mechanism). For more on this, see [4].

[1]: https://doc.rust-lang.org/1.28.0/std/task/trait.Executor.htm...

[2]: https://rust-lang-nursery.github.io/futures-api-docs/0.3.0-a...

[3]: https://doc.rust-lang.org/1.28.0/std/future/trait.Future.htm...

[4]: https://boats.gitlab.io/blog/post/wakers-i/
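To make extension points 1 and 3 concrete, here's a minimal stdlib-only sketch: a block_on executor that parks its thread until woken, plus a toy leaf future that schedules its wake-up from a helper thread (a real one would use epoll, kqueue, etc.). ThreadWaker and Timer are invented names for illustration, not any real crate's API:

```rust
use std::future::Future;
use std::pin::Pin;
use std::sync::{Arc, Mutex};
use std::task::{Context, Poll, Wake, Waker};
use std::thread;
use std::time::Duration;

// (1) A minimal executor: wake() unparks the thread running block_on.
struct ThreadWaker(thread::Thread);

impl Wake for ThreadWaker {
    fn wake(self: Arc<Self>) {
        self.0.unpark();
    }
}

fn block_on<F: Future>(fut: F) -> F::Output {
    let mut fut = std::pin::pin!(fut);
    let waker = Waker::from(Arc::new(ThreadWaker(thread::current())));
    let mut cx = Context::from_waker(&waker);
    loop {
        match fut.as_mut().poll(&mut cx) {
            Poll::Ready(out) => return out,
            Poll::Pending => thread::park(),
        }
    }
}

// (3) A leaf future: stores the waker handed to poll() and arranges
// its own wake-up once the "I/O" (here, a sleep) completes.
struct Timer {
    shared: Arc<Mutex<(bool, Option<Waker>)>>,
}

impl Timer {
    fn new(dur: Duration) -> Self {
        let shared = Arc::new(Mutex::new((false, None::<Waker>)));
        let bg = shared.clone();
        thread::spawn(move || {
            thread::sleep(dur);
            let mut s = bg.lock().unwrap();
            s.0 = true;
            if let Some(w) = s.1.take() {
                w.wake();
            }
        });
        Timer { shared }
    }
}

impl Future for Timer {
    type Output = ();
    fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<()> {
        let mut s = self.shared.lock().unwrap();
        if s.0 {
            Poll::Ready(())
        } else {
            s.1 = Some(cx.waker().clone());
            Poll::Pending
        }
    }
}

fn main() {
    let n = block_on(async {
        Timer::new(Duration::from_millis(10)).await;
        42
    });
    assert_eq!(n, 42);
}
```

The division of labor is exactly the one in the list: block_on owns scheduling, await glues futures together in between, and Timer is the only piece that talks to the wake-up mechanism directly.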

tmandry | 6 years ago | on: How Rust optimizes async/await

Futures were developed outside Rust core, in a third-party library, before being brought into the language. Working with them in combinator form definitely was less ergonomic, but async/await fixes that.

tmandry | 6 years ago | on: How Rust optimizes async/await

Disclaimer: I'm not an expert on the proposal, but have looked at it some, and can offer my impressions here. (Sorry, this got a bit long!)

The C++ proposal definitely attacks the problem from a different angle than Rust. One somewhat surface-level difference is that it implements co_yield in terms of co_await, which is the opposite of Rust implementing await in terms of yield.

Another difference is that in Rust, all heap allocations of your generators/futures are explicit. In C++, technically every initialization of a sub-coroutine defaults to being a new heap allocation. I don't want to spread FUD: my understanding is that the vast majority of these are optimized out by the compiler. But one downside of this approach is that you could change your code and accidentally disable one of these optimizations.

In Rust, all the "state inlining" is explicitly done as part of the language. This means that in cases where you can't inline state, you must introduce an explicit indirection. (Imagine, say, a recursive generator - it's impossible to inline inside of itself! When you recurse, you must allocate the new generator on the heap, inside a Box.)

To be clear, the optimizations I'm talking about in the blog post are all implemented today. I'll be covering what they do and don't do, as well as future work needed, in future blog posts.

One benefit of C++ that you allude to is that there are a lot of extension points. I admit to not fully understanding what each one of them is for, but my feeling is that some of it comes from approaching the problem differently. Some of it absolutely represents missing features in Rust's initial implementation. But as I say in the post, we can and will add more features on a rolling basis.

The way I would approach the specific problem you mention is with a custom executor. When you write the executor, you control how new tasks are scheduled, and can add an API that allows specifying a task priority. You can also allow modifying this priority within the task: when you poll a task, set a thread-local variable to point to that task. Then inside the task, you can gain a reference to yourself and modify your priority.
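A rough sketch of that thread-local mechanism, with invented names (TaskMeta, CURRENT_TASK, set_my_priority) standing in for whatever a real executor would define:

```rust
use std::cell::RefCell;
use std::rc::Rc;

// Hypothetical per-task metadata owned by the executor.
struct TaskMeta {
    priority: u8,
}

thread_local! {
    // Installed by the executor around each call to poll, cleared after.
    static CURRENT_TASK: RefCell<Option<Rc<RefCell<TaskMeta>>>> =
        RefCell::new(None);
}

// Called from *inside* the task's own code to change its priority.
fn set_my_priority(p: u8) {
    CURRENT_TASK.with(|t| {
        if let Some(meta) = t.borrow().as_ref() {
            meta.borrow_mut().priority = p;
        }
    });
}

fn main() {
    let meta = Rc::new(RefCell::new(TaskMeta { priority: 1 }));

    // Executor side: install the task's metadata, then "poll" the task.
    CURRENT_TASK.with(|t| *t.borrow_mut() = Some(meta.clone()));
    set_my_priority(9); // the task body bumping its own priority
    CURRENT_TASK.with(|t| *t.borrow_mut() = None);

    // The executor sees the updated priority on its next scheduling pass.
    assert_eq!(meta.borrow().priority, 9);
}
```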

tmandry | 6 years ago | on: How Rust optimizes async/await

It's the same underlying mechanism for generators as for futures: they are stackless coroutines. All the space they need for local variables is allocated ahead of time.

In my experience, the fact that they are stackless is not at all obvious when you're coding with them. Rust makes working with them really simple and intuitive.
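You can see the ahead-of-time sizing directly: a future's size is exactly the state it must keep across .await points, not a stack. A small demonstration (the functions are invented examples):

```rust
// Each async block compiles to a state machine whose size is computed
// up front from the locals that live across an .await.
fn small() -> impl std::future::Future<Output = u8> {
    async { 1u8 } // nothing held across an await
}

fn big() -> impl std::future::Future<Output = u8> {
    async {
        let buf = [0u8; 1024]; // lives across the await below, so it
        std::future::ready(()).await; // is stored in the state machine
        buf[0]
    }
}

fn main() {
    println!("small: {} bytes", std::mem::size_of_val(&small()));
    println!("big:   {} bytes", std::mem::size_of_val(&big()));
}
```

No matter how deep the call chain inside big() gets, its size is fixed at compile time; nothing grows at runtime the way a green thread's stack does.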

tmandry | 6 years ago | on: How Rust optimizes async/await

I'd agree with this, and emphasize the point that this stuff is really tricky to get right without GC. Fighting the borrow checker is somewhat expected when you're dealing with this level of inherent complexity in your memory management.

One of the key reasons for shipping async/await is that it erases almost all of this difficulty and lets you write straight-line code again.

tmandry | 6 years ago | on: How Rust optimizes async/await

Generators do nothing unless you call their resume() method. resume moves the generator from the last state it was in to the next yield (or return).

Internally, when the code hits a yield, it's happening inside the resume method. yield works by saving the current state of the generator in the object (see e.g. resume_from in the post), and returning from resume().
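Since generators are nightly-only, here's the same mechanism written out by hand on stable: the state machine the compiler would build for something like || { yield 1; yield 2; return 3 }. Each arm runs up to the next yield, saves where it left off, and returns out of resume():

```rust
#[derive(Debug, PartialEq)]
enum GeneratorState {
    Yielded(i32),
    Complete(i32),
}

// The saved "resume_from" state: which yield we last stopped at.
enum State {
    Start,
    AfterYield1,
    AfterYield2,
    Done,
}

struct Gen {
    state: State,
}

impl Gen {
    fn resume(&mut self) -> GeneratorState {
        match self.state {
            State::Start => {
                self.state = State::AfterYield1; // save state...
                GeneratorState::Yielded(1)       // ...then return
            }
            State::AfterYield1 => {
                self.state = State::AfterYield2;
                GeneratorState::Yielded(2)
            }
            State::AfterYield2 => {
                self.state = State::Done;
                GeneratorState::Complete(3)
            }
            State::Done => panic!("resumed after completion"),
        }
    }
}

fn main() {
    let mut g = Gen { state: State::Start };
    assert_eq!(g.resume(), GeneratorState::Yielded(1));
    assert_eq!(g.resume(), GeneratorState::Yielded(2));
    assert_eq!(g.resume(), GeneratorState::Complete(3));
}
```

The real compiler-generated version additionally stores any locals that live across a yield inside the enum variants, which is where the layout optimizations in the post come in.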

tmandry | 6 years ago | on: How Rust optimizes async/await

Most languages allocate every future (and sub-future, and sub-sub-future) separately on the heap. This adds overhead from allocating and deallocating the space that stores task state.

In Rust, you can "inline" an entire chain of futures into a single heap allocation.
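A small sketch of what that looks like (inner/outer are invented examples): the sub-future's state is laid out inside its parent's state machine, so boxing the outermost future is one allocation for the whole chain.

```rust
// inner holds a 256-byte buffer across an await, so that buffer is
// part of its state machine.
async fn inner() -> u8 {
    let buf = [0u8; 256];
    std::future::ready(()).await;
    buf[0]
}

// outer embeds inner's state machine directly; no separate allocation.
async fn outer() -> u8 {
    inner().await + inner().await
}

fn main() {
    // One heap allocation for outer *and* every inner it awaits.
    let fut = Box::pin(outer());
    println!("whole chain: {} bytes in one allocation",
             std::mem::size_of_val(&*fut));
}
```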

tmandry | 6 years ago | on: How Rust optimizes async/await

Not necessarily. They're an implementation detail of the compiler, and aren't fully baked yet to boot.

But there's plenty of reason to want generators, including the fact that they let you build streams. And the fact that async/await relies heavily on them has pushed the implementation much closer to being ready. I hope we get them at some point!
