kenjin4096 | 11 months ago
The blog post mentions it brings a 1-5% perf improvement, which is still significant for CPython. It does not complicate the source because we use a DSL to generate CPython's interpreters. So the only complexity is in autogenerated code, which is usually meant for machine consumption anyway.
The other benefit (for us maintainers I guess), is that it compiles way faster and is more debuggable (perf and other tools work better) when each bytecode is a smaller function. So I'm inclined to keep it for perf and productivity reasons.
ot | 11 months ago
If the desired call structure can be achieved in a portable way, that's a win IMO.
coldtea | 11 months ago
Several releases in, have we seen even a 2x speedup? Or more like 0.2x at best?
I'm not trying to dismiss the interpreter changes; I just want to know whether those speedup plans were even remotely realistic, and whether anything close to even 1/5 of what was promised will really come out of them...
chippiewill | 11 months ago
It's slowly getting there. I think the Faster CPython project was mostly built around the idea that the JIT can get a lot faster as it starts to optimise more and more, and the JIT only just shipped in 3.13, so there's a lot of headroom. We know that PyPy (an existing JIT implementation) is already close to 5x faster than CPython a lot of the time.
There's also now the experimental free-threaded build, which speeds up multithreaded Python applications (not by a lot right now, unfortunately).
throwaway2037 | 11 months ago
About the author: https://us.pycon.org/2023/speaker/profile/81/index.html
This dude looks God-level. Half-joking: maybe MSFT can also poach Lars Bak of Google V8 fame.