As far as I know, Zig has a bunch of things in the works for a better development experience. Almost every day there's something being worked on - like https://github.com/ziglang/zig/pull/24124 just now. I know that Zig had some plans in the past to also work on hot code swapping. At this rate of development, I wouldn't be surprised if hot code swapping was functional within a year on x86_64.
The biggest pain point I personally have with Zig right now is the speed of `comptime` - The compiler has a lot of work to do here, and running a brainF** DSL at compile-time is pretty slow (speaking from experience - it was a really funny experiment). Will we have improvements to this section of the compiler any time soon?
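For a feel of why heavy comptime work is slow: every comptime loop iteration runs in the compiler's interpreter, and Zig caps that work at 1000 backwards branches by default. A minimal made-up sketch (`sumTo` is illustrative, not from any real project):

```zig
// Made-up example: this loop runs in the compiler's interpreter, one
// interpreted iteration at a time -- the same way a comptime DSL runs.
fn sumTo(comptime n: u32) u64 {
    // The default quota of 1000 backwards branches trips almost
    // immediately for loop-heavy comptime code, so it must be raised.
    @setEvalBranchQuota(n * 10);
    comptime var total: u64 = 0;
    comptime var i: u32 = 0;
    inline while (i < n) : (i += 1) total += i;
    return total;
}

// Forces full evaluation inside the compiler:
const big = comptime sumTo(100_000);
```

An interpreter for a DSL does far more work per "instruction" than this loop, which is why it is a good stress test for comptime performance.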
Overall I'm really hyped for these new backends that Zig is introducing. Can't wait to make my own URCL (https://github.com/ModPunchtree/URCL) backend for Zig. ;)
For comptime perf improvements, I know what needs to be done - I even started working on a branch a long time ago. Unfortunately, it is going to require reworking a lot of the semantic analysis code. Something that absolutely can, should, and will be done, but is competing with other priorities.
Hot code swapping will be huge for gamedev. The idea that Zig will basically support it by default with a compiler flag is wild. Try doing that, clang.
Is comptime slowness really an issue? I'm building a JSON-RPC library and relying heavily on comptime to dispatch a JSON request to an arbitrary function. Due to strict static typing, there's no way to dynamically dispatch to a function with arbitrary parameters at runtime. The only way I found was to figure out the function type mapping at compile time using comptime. I'm sure it will blow up the code size with an additional copy of the comptime-generated code for each such function.
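A minimal sketch of that comptime dispatch pattern (hypothetical handler names; assumes a recent Zig where `std.meta.ArgsTuple` and `@call(.auto, ...)` are available). The `inline for` unrolls into one branch per handler, each monomorphized for that handler's concrete signature, which is exactly the per-function code-size cost mentioned above:

```zig
const std = @import("std");

// Hypothetical RPC handlers with different arities.
fn add(a: i64, b: i64) i64 {
    return a + b;
}
fn negate(x: i64) i64 {
    return -x;
}

const handlers = .{ .add = add, .negate = negate };

// Dispatch a runtime method name to a statically-typed function.
// Arity/type checking of `args` is skipped for brevity.
fn dispatch(method: []const u8, args: []const i64) ?i64 {
    inline for (std.meta.fields(@TypeOf(handlers))) |field| {
        if (std.mem.eql(u8, field.name, method)) {
            const f = @field(handlers, field.name);
            var call_args: std.meta.ArgsTuple(@TypeOf(f)) = undefined;
            inline for (0..call_args.len) |i| call_args[i] = args[i];
            return @call(.auto, f, call_args);
        }
    }
    return null; // unknown method
}
```

Called as e.g. `dispatch("add", &.{ 1, 2 })`. Every registered function gets its own instantiation of the call glue, which is where the code-size growth comes from.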
Is it easy to build out a custom backend? I haven't looked at it yet but I'd like to try some experiments with that -- to be specific, I think that I can build out a backend that will consume AIR and produce a memory safety report. (it would identify if you're using undefined values, stack pointer escape, use after free, double free, alias xor mut)
URCL is sending me down a rabbithole. Haven't looked super deeply yet, but the most hilarious timeline would be that an IR built for Minecraft becomes a viable compilation target for languages.
This is already such a huge achievement, yet as the devlog notes, there is plenty more to come! The idea of a compiler modifying only the parts of a binary that it needs to during compilation is simultaneously refreshing and totally wild, yet now squarely within reach of the Zig project. Exciting times ahead.
> For a larger project like the Zig compiler itself, it takes the time down from 75 seconds to 20 seconds. We’re only just getting started.
Excited to see what he can do with this. He seems like a really smart guy.
What's the package management look like? I tried to get an app with QuickJS + SDL3 working, but the mess of C++ pushed me to Rust where it all just works. Would be glad to try it out in Zig too.
Package management in Zig is more manual than in Rust: you fetch the package by URL using the CLI, then import the module in your build script. This has its upsides - you can depend on arbitrary archives, so lots of Zig packages of C libraries are just a build script with a dependency on an unmodified tarball release. But obviously it's a little trickier for beginners.
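The flow described above looks roughly like this: `zig fetch --save <url>` records the dependency's URL and content hash in `build.zig.zon`, and the build script then wires it up. A sketch assuming a Zig 0.13-era build API and a hypothetical dependency named "quickjs" that exposes a module of the same name:

```zig
// build.zig (sketch): "quickjs" is a hypothetical dependency added earlier
// with `zig fetch --save <tarball url>`, which records its URL and hash
// in build.zig.zon.
const std = @import("std");

pub fn build(b: *std.Build) void {
    const target = b.standardTargetOptions(.{});
    const optimize = b.standardOptimizeOption(.{});

    const exe = b.addExecutable(.{
        .name = "app",
        .root_source_file = b.path("src/main.zig"),
        .target = target,
        .optimize = optimize,
    });

    // The dependency can be an unmodified upstream tarball; the build
    // script decides how it is compiled and exposed.
    const quickjs = b.dependency("quickjs", .{ .target = target, .optimize = optimize });
    exe.root_module.addImport("quickjs", quickjs.module("quickjs"));

    b.installArtifact(exe);
}
```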
Zig makes it really easy to use C packages directly like this, though Zig's types are much more strict, so you'll inevitably be doing a lot of casting when interacting with the API.
I said it for D and Nature, and I'll say it for every other language that comes with its own backend: we all have a duty to support projects that try not to depend on LLVM. Compiler R&D has stagnated because of LLVM - far too many languages chose to depend on it, and far too many people don't value fast iteration time. Or perhaps they grew to not expect any better?
Fast iteration times with incremental compilation and binary patching, plus good debugging, should be the expectation for new languages - not something niche or "too hard to do".
Sounds like Julia should consider switching to Zig to get considerable performance gains. I remember authors feeling uneasy with each llvm release worrying about performance degradations.
Julia is effectively hard locked to LLVM. Large swathes of the ecosystem rely on the presence of LLVM either for intrinsics, autodiff (Enzyme) or gpu compilation. Nevermind Base and Core.
The compiler is fairly retargetable, this is an active area of work. So it’s maybe possible in the future to envision zig as an alternative compiler for fragments of the language.
That could be a way to get compile times down, but I think there is still much to do on the Julia side.
Such as a more fine-grained compile cache, better tooling to prevent invalidations, removal of the world-splitting optimisation, more use of multithreading in the compiler, automatic precompilation of concrete signatures, and generation of lazier code which hot-swaps in code when it is compiled.
People say this about every compiler backend that shows up. I have serious doubts, but if someone wants to take it on as a project it'd be pretty interesting to see what happens.
- an integrated build system that doesn't use multiple separate arcane tools and languages
- slices with known length in Zig vs arrays in C (buffer overflows)
- an explicit optional type that you're forced to check, null pointers aren't* allowed (*when they are, for integrating with C code, the type makes that obviously clear)
- enums, tagged unions and enforced exhaustive checks on "switch" expressions
- error handling is explicit, functions return an error (an enum value) that the caller must handle in some way. In C the function might return some integer to indicate an error which you're allowed to completely ignore. What is missing is a standard way of returning some data with an error that's built into the language (the error-struct-passed-through-parameters pattern feels bolted on, there should be special syntax for it)
- "defer", "errdefer" blocks for cleanup after function returns or errors
- comptime code generation (in Zig) instead of macros, type reflection (@typeInfo and friends)
- caller typically makes decisions about how and where memory is allocated by passing an allocator to libraries
- easier (at least for a noob) to find memory leaks by just using GeneralPurposeAllocator
As someone who's always used higher level languages since I started programming and strongly disliked many of the arcane, counterintuitive things about C and surrounding ecosystem whenever I tried it, Zig finally got me into systems programming in a way that I find enjoyable.
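A few of the bullet points above (explicit errors, optionals, `defer`, leak detection via GeneralPurposeAllocator) in one minimal sketch, assuming a Zig 0.13-era standard library; `parsePort` is a made-up helper:

```zig
const std = @import("std");

// Made-up helper: a parse failure becomes an explicit error the caller must handle.
fn parsePort(s: []const u8) !u16 {
    return std.fmt.parseInt(u16, s, 10);
}

pub fn main() !void {
    // GeneralPurposeAllocator tracks allocations; deinit() reports leaks.
    var gpa = std.heap.GeneralPurposeAllocator(.{}){};
    defer _ = gpa.deinit();
    const allocator = gpa.allocator();

    const buf = try allocator.alloc(u8, 16);
    // Remove this defer and gpa.deinit() prints a leak report with a stack trace.
    defer allocator.free(buf);

    // Optionals must be unwrapped before use.
    const maybe_port: ?u16 = parsePort("8080") catch null;
    if (maybe_port) |port| {
        std.debug.print("listening on port {d}\n", .{port});
    }
}
```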
I've got that stuff all figured out, should have some interesting updates for everyone over the next 2-3 months. Been redoing I/O from the ground up - mostly standard library work.
The whole "stage1/2/3" jazz is about our bootstrap process; that is, the way you get a Zig compiler starting from nothing but a C compiler. This is a tricky problem because of the fact that the Zig compiler is written in Zig. The bootstrap is unfortunately quite slow to run, for two main reasons:
* We want the final Zig binary it produces to be optimized using LLVM, and LLVM is incredibly slow.
* The start of the bootstrap chain involves a kinda-weird step where we translate a WASM binary to a gigantic C file which we then build; this takes a while and makes the first Zig compiler in the process ("stage1"/"zig1") particularly slow.
Luckily, you very rarely need to bootstrap!
Most of the time, you can simply download a recent Zig binary from ziglang.org. The only reason the bootstrap process exists is essentially so you can build those tarballs yourself (useful if you want to link against system LLVM, or optimize for your native CPU, or you are a distro package maintainer). You don't actually need to do it to develop the compiler; you just need a relatively recent build of Zig to build the compiler with, and it's fine to grab that from ziglang.org (or a mirror).
Once you have that, it's as simple as `zig build -Dno-lib` in the Zig repository root. The `-Dno-lib` option just prevents the build script from copying the contents of the `lib/` directory into your installation prefix (zig-out by default); that's desirable to avoid when working on the compiler, because it's a lot of files and so can take a while to copy.
You can also add `-Ddev=x86_64-linux` to build a smaller subset of compiler functionality, speeding up the build more. For the other `-Ddev` options, look at the fields of `Env` in `src/dev.zig`.
I'm interested in Zig but kind of discouraged by the 30 pages of open issues mentioning "segfault" on their Github tracker. It's disheartening for a systems programming language being developed in the 21st century.
Zig is not a memory-safe language and does not attempt to prevent its users from shooting themselves in the foot; it tries to make those unsafe actions explicit and simple, unlike something like C++, which drowns you in complexity. But if you really want to do pointer wrangling and use memory after freeing it, Zig allows you to do it.
This design philosophy should lead to countless segfaults that are the result of Zig working as designed. It also relegates Zig to the small niche of projects in modern programming where performance and developer productivity are more important than resilience and correctness.
The other comments get the general idea, but here's a slightly more detailed explanation.
Code generation backends in the Zig compiler work by lowering an SSA-like structured IR called AIR to machine code (or actually first to another intermediate data structure called MIR, but don't worry about that). The thing is, AIR is intentionally quite high-level, the intention being that the code emitting AIR (which is complex and difficult to parallelize) doesn't have to waste loads of valuable single-threaded time turning e.g. a single line of Zig code into tens or hundreds of instructions.
However, this approach sort of just moves this work of "expanding" these high-level operations into low-level instructions, from the code producing AIR, into the codegen backend. That's good for compiler performance (and also actually for avoiding spaghetti in the compiler :P), but it makes it much more difficult to write backends, because you need to implement much more complex lowerings, and for a much greater number of operations.
To solve this problem, `Legalize` is a new system we've introduced to the compiler which effectively performs "rewrites" on AIR. The idea is that if a codegen backend doesn't want to support a certain high-level operation, it can set a flag which tells `Legalize` that, before sending the AIR to codegen, it should rewrite all occurrences of that instruction into a longer string of simpler instructions.

This could hugely simplify the task of writing a backend. We don't have that many legalizations implemented right now, but they could, for instance, convert arithmetic on large integer types (e.g. u256) into multiple operations on a "native" integer size (e.g. u64), significantly decreasing the number of integer sizes you need to handle in order to get a functional backend. The resulting implementation might not emit machine code as efficient as it otherwise could (generally speaking, manually implementing "expansions" like the one I just mentioned in the backend rather than in `Legalize` will lead to better code, because the backend can sort of "plan ahead" better), but you can implement it with far less work.

Then, if you want, you can gradually extend the list of operations which the backend actually supports lowering directly, and just turn off the corresponding legalizations; everything works before and after, but you get better code (and possibly slightly faster compilation) from implementing the operation "directly".
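As a concrete picture of what such a legalization expands to, here is a hand-written sketch of a u256 add lowered to a 64-bit target: four native adds with carry propagation (illustration only, not the compiler's actual rewrite; assumes Zig 0.11+, where `@addWithOverflow` returns a result/overflow-bit tuple):

```zig
// What legalizing `a + b` on u256 amounts to on a 64-bit target,
// with the operands represented as four 64-bit limbs, least
// significant first.
fn add256(a: [4]u64, b: [4]u64) [4]u64 {
    var result: [4]u64 = undefined;
    var carry: u1 = 0;
    inline for (0..4) |i| {
        const s = @addWithOverflow(a[i], b[i]);
        const s2 = @addWithOverflow(s[0], @as(u64, carry));
        result[i] = s2[0];
        // At most one of the two adds can carry, so OR-ing is sufficient.
        carry = s[1] | s2[1];
    }
    return result; // wrapping semantics: the final carry out is discarded
}
```

A backend implementing this expansion itself could instead use the target's add-with-carry instructions directly, which is the "plan ahead" advantage mentioned above.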
AFAICT it's a new pass that transforms the AIR generated by Sema into AIR understood by a particular backend, since they're not all at the same level of maturity.
stratts | 9 months ago
SDL3 has both a native Zig wrapper: https://github.com/Gota7/zig-sdl3
And a more basic repackaging on the C library/API: https://github.com/castholm/SDL
For QuickJS, the only option is the C API: https://github.com/allyourcodebase/quickjs-ng
WalterBright | 9 months ago
    real    0m18.444s
    user    0m17.408s
    sys     0m1.688s

On an ancient processor (it runs so fast I just never upgraded it):

    cat /proc/cpuinfo
    processor       : 0
    vendor_id       : AuthenticAMD
    cpu family      : 15
    model           : 107
    model name      : AMD Athlon(tm) 64 X2 Dual Core Processor 4400+
    stepping        : 2
    cpu MHz         : 2299.674
    cache size      : 512 KB
    physical id     : 0
    siblings        : 2
    core id         : 0
    cpu cores       : 2
    apicid          : 0
    initial apicid  : 0
    fpu             : yes
txdv | 9 months ago
When I tried compiling Zig it would take ages, because it would go through the different stages (with the entirety of bootstrapping from WASM).
d3ckard | 9 months ago
Zig is pretty much exactly what I would want from a low-level language; I'm just waiting for it to be stable.
And, of course, kudos - I really appreciate Zig's minimalist design philosophy.
fjnfndnf | 9 months ago
While a quick compile cycle is beneficial for productivity, that's only the case if it also includes fast tests.
Thus, wouldn't it be easier to just interpret Zig for debug builds? That would also solve the issue of having to repeat the work for each target.
foresto | 9 months ago
https://github.com/ziglang/zig/wiki/FAQ#what-is-the-status-o...
txdv | 9 months ago
I would like to contribute, but I faced difficulties because compiling all of stage1/2/3 combined took a lot of time.
candrewlee | 9 months ago
And hey, I wrote a lot of the rendering code for that perf analyzer. Always fun to see your work show up on the internet.
https://github.com/andrewrk/poop
ctz | 9 months ago
FWIW there is a similar effort for Rust using cranelift: <https://github.com/rust-lang/rustc_codegen_cranelift>
9d | 9 months ago
Sorry, what?
pjmlp | 9 months ago
If anything, it is a generation rediscovering what we have lost.
0points | 9 months ago
We've been enjoying golang for the last decade and then some ;-)