It would be nice to have a more in-depth discussion of the issues that have been found with compile-time programming, rather than uncritical acclaim. Staged programming is not new, and people have run into many issues and design tradeoffs in that time. (E.g. the same stuff has been done in Lisps for decades, though most Lisps don't have a type system, which makes things a bit more complicated.)
Some of the issues that come to mind:
* Implementing generics in this way breaks parametricity. Simply put, parametricity means being able to reason about functions just from their type signature. You can't do this when the function can do arbitrary computation based on the concrete type a generic type is instantiated with.
* It's not clear to me how Zig handles recursive generic types. Generally, type systems are lazy to allow recursion. So I can write something like
type Example = Something[Example]
(Yes, this is useful.)
* Type checking and compile-time computation can interact in interesting ways. Does type checking take place before compile-time code runs, after it runs, or can they be interleaved? Different choices give different trade-offs. It's not clear to me what Zig does and hence what tradeoffs it makes.
* The article suggests that compile-time code can generate code (not just values) but doesn't discuss hygiene.
There is a good discussion of some issues here: https://typesanitizer.com/blog/zig-generics.html
I'm a pretty big fan of Zig--I've been following it and writing it on-and-off for a couple of years. I think that comptime has a few use-cases where it is very cool. Generics, initializing complex data-structures at compile-time, and target-specific code-generation are the big three where comptime shines.
However, in other situations seeing "comptime" in Zig code makes me go "oh no" because, like Lisp macros, it's very easy to use comptime to solve a problem that doesn't exist or wouldn't exist if you structured other parts of your code better. For example, the OP's example of iterating the fields of a struct to sum the values is unfortunately characteristic of how people use comptime in the wild--when they would often be better served by using a data-structure that is actually iterable (e.g. std.enums.EnumArray).
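For reference, the suggested alternative might look like this sketch (the enum and the values are made up; the EnumArray API is per recent std versions):

```zig
const std = @import("std");

const Stat = enum { health, mana, stamina };

pub fn main() void {
    // EnumArray is an ordinary array indexed by an enum: iterable at
    // runtime, no reflection required
    var stats = std.enums.EnumArray(Stat, u32).initFill(0);
    stats.set(.health, 10);
    stats.set(.mana, 5);

    var total: u32 = 0;
    var it = stats.iterator();
    while (it.next()) |entry| total += entry.value.*;
    std.debug.print("sum: {}\n", .{total});
}
```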
> Implementing generics in this way breaks parametricity. Simply put, parametricity means being able to reason about functions just from their type signature. You can't do this when the function can do arbitrary computation based on the concrete type a generic type is instantiated with.
Do you mean reasoning about a function in the sense of just understanding what a function does (or can do), i.e. in the view of the practical programmer, or reasoning about the function in a typed theoretical system (e.g. typed lambda calculus or maybe even more exotic)? Or maybe a bit of both? There is certainly a concern from the theoretical viewpoint, but how important is that for a practical programming language?
For example, I believe C++ template programming also breaks "parametricity" by supporting template specialisation. While there are many mundane issues with C++ templates, breaking parametricity is not a very big deal in practice. In contrast, it enables optimisations that are not otherwise possible (for templates). Consider for example std::vector<bool>: implementations can be made that actually store a single bit per vector element (instead of how a bool is normally represented using an int or char). Maybe this is even required by the standard, I don't recall. My point is that it makes sense for C++ to allow this, I think.
Hi, article author here. I was motivated to write this post after having trouble articulating some of its points while at a meetup, so that's why the goal of this post was focused on explaining things, and not being critical.
So, to at least address your points here:
* I do agree this is a direct trade-off with Zig style comptime, versus more statically defined function signatures. I don't think this affects all code, only code which does such reasoning with types, so it's a trade-off between reasoning and expressivity that you can make depending on your needs. On the other hand, per the post's view 0, I have found that just going in and reading the source code easily answers the questions I have when the type signature doesn't. I don't think I've ever been confused about how to use something for more than the time it takes to read a few dozen lines of code.
* Your specific example for recursive generic types poses a problem because a name being used in the declaration causes a "dependency loop detected" error. There are ways around this. The generics example in the post for example references itself. If you had a concrete example showing a case where this does something, I could perhaps show you the zig code that does it.
* Type checking happens during comptime. E.g., this code:
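(The snippet itself isn't reproduced in this thread; a reconstruction consistent with the error output below, with guessed @compileLog arguments:)

```zig
// when_typecheck.zig (reconstruction)
comptime {
    @compileLog("Hi"); // runs, and shows up in the compile log
    const a: u32 = "42"; // type error: analysis stops here
    @compileLog("Bye"); // never reached
    _ = a;
}
```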
when_typecheck.zig:3:17: error: expected type 'u32', found '*const [2:0]u8'
const a: u32 = "42";
^~~~
Compile Log Output:
@as(*const [2:0]u8, "Hi")
So the first @compileLog statement was run by comptime, but then the type check error stopped it from continuing to the second @compileLog statement. If you dig into the Zig issues, there are some subtle ways the type checking between comptime and runtime can cause problems. However it takes some pretty esoteric code to hit them, and they're easily resolved. Also, they're well known by the core team and I expect them to be addressed before 1.0.
* I'm not sure what you mean by hygiene, can you elaborate?
> being able to reason about functions just from their type signature.
This has nothing to do with compile-time execution, though. You can reason about a function from its declaration if it has a clear logical purpose, is well named, and has well named parameters. You can consider any part of a parameter the programmer can specify as part of the name, including label, type name, etc.
That's actually not a great article. While I agree with the conclusion stated in the title, it's a kind of "debate team" approach to argumentation which tries to win points rather than make meaningful arguments.
The better way to frame the debate is flexibility vs. complexity. A fixed-function generics system in a language is simpler (if well designed) than a programmable one, but less flexible. The more flexibility you give a generics system, the more complex it becomes, and the closer it becomes to a programming language in its own right. The nice thing about Zig's approach is that the meta-programming language is practically the same thing as the regular programming language (which, itself, is a simple language). That minimizes the incremental complexity cost.
It does introduce an extra complexity though: it's harder for the programmer to keep straight what code is executing at compile time vs runtime because the code is interleaved and the context clues are minimal. I wonder if a "comptime shader" could be added to the language server/editor plugin that puts a different background color on comptime code.
100%. So tiring that the discourse around this is based on 15 minute demos and not actual understandings of the trade offs. Varun Gandhi's post that you link to is great.
Based on my experience with Rust, a lot of what people want to do with its "constant generics" probably would be easier to do with a feature like comptime. Letting you do math on constant generics while maintaining parametricity is hard to implement, and when all you really want is "a trait for a hash function with an output size of N," probably giving up parametricity for that purpose and generating the trait from N as an earlier codegen step is fine for you, but Rust's macros are too flexible and annoying for doing it that way. But as soon as you replace parametric polymorphism with a naive code generation feature, you're in for a world of hurt.
You can't use the binding early like this, but inside of the type definition you can use the @This() builtin to get a value that's the type you're in, and you can presumably do whatever you like with it.
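A minimal sketch of that (the type and field names are mine):

```zig
const Node = struct {
    const Self = @This(); // the type currently being defined

    value: u32,
    next: ?*Self, // recursion through a pointer works fine

    fn total(self: *const Self) u32 {
        var sum: u32 = 0;
        var cur: ?*const Self = self;
        while (cur) |n| : (cur = n.next) sum += n.value;
        return sum;
    }
};
```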
The type system barely does anything, so it's not very interesting when type checking runs. comptime code is type checked and executed. Normal code is typechecked and not executed.
comptime is not a macro system. It doesn't have the ability to be unhygienic. It can cleverly monomorphize code, or it can unroll code, or it can omit code, but I don't think it can generate code.
* Documentation. In a sufficiently-powerful comptime system, you can write a function that takes in a path to a .proto file and returns the types defined in that file. How should this function be documented? What happens when you click a reference to such a generated type in the documentation viewer?
* IDE autocompletions, go to definition, type hinting etc. A similar problem, especially when you're working on some half-written code and actual compilation isn't possible yet.
Scheme has a "hygienic" macro system that allows you to do arbitrary computation and code alteration at compile time.
The language doesn't see wide adoption in industry, so maybe its most important lessons have yet to be learned, but one problem with meta-programming is that it turns part of your program into a compiler.
This happens to an extent in every language. When you're writing a library, you're solving the problem "I want users to be able to write THIS and have it be the same as if they had written THAT." A compiler. Meta-programming facilities just expand how different THIS and THAT can be.
Understanding compilers is hard. So, that's at least one potential issue with compile-time programming.
I find that the hard part with designing macros and other metaprogramming constructs that run at compile time is not adding power (that part is easy) but making them readable and easy to understand.
If a reader of the code needs to stop and work out what the code does, then that is a weakness, not a strength, as such a pause in code reading affects everything from refactoring to spotting bugs.
In Zig, compile time code looks like runtime code and you cannot easily know which is which without looking up the definition of the variables.
Turning some statements into compile-time constructs, like "for", requires "inline", while some, like "if", always fold at compile time if the condition is constant-resolved, skipping even semantic checking of the "dead" branch.
Grasping this at a glance is very challenging.
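A sketch of both behaviors (the function name is mine; the @typeInfo field syntax matches 0.11-era releases, which later renamed the tags):

```zig
const std = @import("std");

fn describe(comptime T: type) void {
    // a comptime-false `if` branch is never semantically analyzed,
    // so for structs this @compileError simply cannot fire
    if (@typeInfo(T) != .Struct) {
        @compileError("describe() only supports structs");
    }

    // iterating a comptime-only list requires `inline for`
    inline for (@typeInfo(T).Struct.fields) |f| {
        std.debug.print("field: {s}\n", .{f.name});
    }
}
```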
There is also the issue of generating types - and not just values - at compile time.
This means that a tool refactoring a compile time defined struct member must know how that struct member was created - which may be through arbitrary code execution.
All of this, including "build.zig" (the build tool describing how to compile a project using code invocations), makes it extremely challenging for an IDE to reason about code and provide meaningful analysis and refactoring tools.
Which also in the end may affect overall code quality.
So it’s a trade-off. For a contrast, look at C3, which has a comparable amount of compile-time functionality but tries to be IDE friendly.
I think most of those points one only stumbles over after a few thousand lines of Zig and going really deep into the comptime features.
And some features in your list are of questionable value IMHO (e.g. the "reasoning over a function type signature" one: Rust could be a much more ergonomic language if the compiler didn't have to rely on function signatures alone but could instead peek into called function bodies).
There are definitely some tradeoffs in Zig's comptime system, but I think the more important point is that nothing about it is surprising when working with it, it's only when coming from languages like Rust or C++ where Zig's comptime, generics and reflection might look 'weird'.
Parametricity is a neat trick for mathematicians, it's not really worth it in a low-level language (not least because the type system is badly unsound anyway).
That feels like the wrong word for the thing you're describing. Linguistic arguments aside, yes, you're absolutely right.
In Zig though, that issue is completely orthogonal to generics. The first implementation `foo` is the "only" option available for "truly arbitrary" `T` if you don't magic up some extra information from somewhere. The second implementation `bar` uses an extra language feature unrelated to generics to return a different valid value (it's valid so long as the result of `bar(T, x)` is never accessed). The third option `baz` works on any type with non-zero width and just clobbers some data for fun (you could golf it some more, but I think the 5-line implementation makes it easier to read for non-Zig programmers).
Notice that we haven't performed a computation with `T` and were still able to do things that particular definition of parametricity would not approve of.
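The three implementations aren't reproduced in this thread; hedged reconstructions of what they plausibly look like:

```zig
const std = @import("std");

// foo: the only honest option for truly arbitrary T is the identity
fn foo(comptime T: type, x: T) T {
    return x;
}

// bar: an unrelated language feature yields a different "valid" value,
// fine so long as the result is never actually accessed
fn bar(comptime T: type, x: T) T {
    _ = x;
    return undefined;
}

// baz: for any type with non-zero width, clobber a byte for fun
fn baz(comptime T: type, x: T) T {
    var y = x;
    std.mem.asBytes(&y)[0] +%= 1;
    return y;
}
```

None of these compute anything from `T`, yet only `foo` behaves the way a parametricity argument would predict.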
Zig does give up that particular property (being able to rely on just a type signature to understand what's going on). Its model is closer to "compile-time duck-typing." The constraints on `T` aren't an explicitly enumerated list of constraints; they're an in vivo set of properties the code using `T` actually requires.
That fact is extremely annoying from time to time (e.g., for one or two major releases the reference Reader/Writer didn't include the full set of methods, but all functions using readers and writers just took in an `anytype`, so implementers either had to read a lot of source or play a game of whack-a-mole with the compiler errors to find the true interface), but for most code it's really not hard to handle.
E.g., if you've seen the `Iterator` pattern once, the following isn't all that hard to understand. Your constraints on `It` are that it tell you what the return type is, that return type ought to be some sort of non-comptime numeric, and it should have a `fn next(self: *It) ?T` method whose return values after the first `null` you're allowed to ignore. If you violate any of those constraints (except, perhaps, the last one -- maybe your iterator chooses to return null and then a few more values) then the code will fail at comptime. If you're afraid of excessive compiler error message lengths, you can use `@compileError()` to create a friendlier message documenting your constraints.
It's a different pattern from what you're describing, but it's absolutely not hard to use correctly.
fn sum(It: type, it: *It) It.T {
    var total: It.T = 0;
    while (it.next()) |item|
        total += item;
    return total;
}
> recursive generics
A decent mental model (most of which follows from "view 4" in TFA, where the runtime code is the residue after the interpreter resolves everything it can at comptime) is treating types as immutable and treating comptime evaluation like an interpreted language.
With that view, `type Example = Something[Example]` can't work because `Example` must be fully defined before you can pass it into `Something`. The laziness you see in ordinary non-generic type instantiations doesn't cross function boundaries. I'm not sure if there's a feature request for that (nothing obvious is standing out), but I'd be a fan @AndyKelley if you're interested.
In terms of that causing problems IRL, it's only been annoying a few times in the last few years for me. The most recent one involved some comptime parser combinators, and there was a recursive message structure I wanted to handle. I worked around it by creating a concrete `FooParser` type with its associated manually implemented `parse` function (which itself was able to mostly call into rather than re-implement other parsers) instead of building up `FooParser` using combinators, so that the normal type instantiation laziness would work without issues.
> when does type checking run
Type inference is simplistic enough that this is almost a non-issue in Zig, aside from the normal tradeoffs from limited type inference (last I checked, they plan to keep it that way because it's not very important to them, it actively hinders the goal of being able to understand code by looking at a local snapshot, and that sort of complexity and constraint might keep the project from hitting more important goals like incremental compilation and binary editing). They are interleaved though (at least in the observable behavior, if you treat comptime execution as an interpreter).
D had it 17 years ago! D features steadily move into other languages.
> Here the comptime keyword indicates that the block it precedes will run during the compile.
D doesn't use a keyword to trigger it. What triggers it is being a "const expression". Naturally, const expressions must be evaluatable at compile time. For example:
int sum(int a, int b) => a + b;
void test()
{
int s = sum(3, 4); // runs at run time
enum e = sum(3, 4); // runs at compile time
}
By avoiding use of non-constant globals, I/O and calling system functions like malloc(), quite a large percentage of functions can be run at compile time without any changes.
Even memory can be allocated with it (using D's automatic memory management).
Zig looks interesting, I just wish it had operator overloading. I don't really buy most of the arguments against operator overloading. A common argument is that with operator overloading you don't know what actually happens under the hood. Which doesn't work, because you might as well create a function named "add" which does multiplication. Another argument is iostreams in C++ or boost::spirit as examples of operator overloading abuse. But I haven't really seen that happen in other languages that have operator overloading, it seems to be C++ specific.
You don't know the amount of magic that goes on behind the scenes in Python and PHP with the __ functions. I think Zig's approach is refreshing. Being able to follow the code trumps the seconds wasted typing the extra code.
In my humble opinion, a lot of the dislike of operator overloading is related to unexpected runtime performance.
My ideal solution would be for the language to introduce custom operators that clearly indicate an overload. Something like a prefix/postfix (e.g. `let c = a |+| b`). That way it is clear to the person viewing the code that the |+| operation is actually a function call.
This is still open to abuse but I think it at least removes one of the major concerns.
In C++ I only ever used operator overloading for vector/matrix math (where it is indeed very useful). I'd be fine if a language implemented the vector math syntax directly (like shading languages do). Zig at least has a @Vector() [1] type which is a bit similar to Clang's Vector Extension (but unfortunately not the Extended Vector Extension) [2].
Maybe such operators for basic linear algebra (on arrays of numbers) should just be built into the language instead of relying on overloading. I'm not sure whether such a proposal already exists.
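For what it's worth, @Vector values already support the arithmetic operators element-wise, so this kind of math needs no overloading:

```zig
const std = @import("std");

pub fn main() void {
    const a: @Vector(4, f32) = .{ 1, 2, 3, 4 };
    const b: @Vector(4, f32) = .{ 10, 20, 30, 40 };
    const c = a + b; // element-wise add, built into the language
    std.debug.print("{any}\n", .{c});
}
```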
Yeah I never got the aversion to operator overloading either.
"+ can do anything!" As you said, so can plus().
"Hidden function calls?" Have they never programmed a soft float or microcontroller without a div instruction? Function calls for every floating point op.
Ah ‘fieldNames’, looks very similar to Nim’s ‘fieldPairs’. It’s an incredibly handy construct! It makes doing efficient serialization a breeze. I recently implemented a compile-time thread-safety check on types using ‘fieldPairs’ in about 20 lines.
This needs to become a standard feature of programming languages IMHO.
It’s actually one of the biggest things I find lacking in Rust which is limited to non-typed macros (last I tried). It’s so limiting not to have it. You just have to hope serde is implemented on the structs in a crate. You can’t even make your own structs with the same fields in Rust programmatically.
At some point there was a discussion about compile-time reflection, which I guess could include functionality like that, but I think the topic died along with some kind of drama around it. Quite a bummer, because things like serde would have been so much easier to implement with compile-time reflection.
With comp-time reflection you can build frameworks like ORMs or web frameworks. The only trade-off is that you have to include such a library in the form of source code.
After having written a somewhat complete C parser library I don't really get the big deal about needing meta programming in the language itself.
If I want to generate structs, serialization, properties, instrumentation, etc, I just write a regular C program that processes some source files and outputs source files, and run that first in my build script.
How do you people debug and test these meta programs?
Mine are just regular C programs that uses the exact same debuggers and tools as anything else.
>I don't really get the big deal about needing meta programming in the language itself. If I want to generate structs, serialization, properties, instrumentation, etc, I just write a regular C program that processes some source files and outputs source files, and run that first in my build script.
This describes exactly what people don't want to do.
C# (strictly, Roslyn/dotnet) provides this in a pretty nice way: because the compiler is itself written in the language, you can just drop in plugins which have (readonly!) access to the AST and emit C# source.
Debugging .. well, you have to do a bit more work to set up a nice test framework, but you can then run the compiler with your plugin from inside your standard unit test framework, inside the interactive debugger.
Another interesting pattern is the ability to generate structs at compile time.
I've run experiments where a neural net is implemented by creating a JSON file from PyTorch, reading it in using @embedFile, and generating a struct with a specific “run” method.
This in theory allows the compiler to optimize the neural network directly (I haven't proven a great benefit from this though). Also the whole network lived on the stack, which means not having any dynamic allocation (not sure if this is good?).
I've done this sort of thing by writing a code generator in python instead of using comptime. I'm not confident that comptime zig is particularly fast, and I don't want to run the json parser that generates the struct all the time.
If you're surprised by Zig's comptime, you should definitely take a look at Nim which also has compile-time code evaluation, plus a full AST macro system.
Zig is overall pretty good as a language and it does what it needs to: staying in the lane of its purpose is very important. It is why I do not particularly care for some languages being used just because.
I hope we can have something that combines the meta-programming capabilities of Zig with the vast ecosystem, community and safety of Rust.
Looking at the language design, I really prefer Zig to Rust, but as an incompetent, amateur programmer, I couldn't write anything in Zig that's actually useful (or reliable), at least for now.
Comptime to replace macros is indeed good, comptime to replace generics on the other hand isn't and that really makes me think of the “when all you have is a hammer” quote.
Is anyone here using Zig for audio plugin development? It seems like a good candidate as an alternative to C++ but lacks the ecosystem (like JUCE). Are there any ongoing efforts to bring DSP/audio plugin development to Zig?
Mojo's compiletime metaprogramming [1] is inspired by Zig's. Though Mojo takes things further by implementing a fully-featured generic programming system.
Tiny bug report: The second code example's output is still ">>array's<< sum is 6" (emphasis mine) even though the code snippet's printout is "struct's sum is {d}"
The article went off the rails at partial evaluation as it doesn’t even show an example of partial evaluation. And then the section on generating code really went nowhere useful.
As a disclaimer, the last time I gave Zig a solid shot was when 0.12 released. The last time I played with comptime properly was in 0.11.
There's a heap of praise thrown at zig comptime. I can certainly see why. From a programming language perspective it's an elegant and very powerful solution. It's a shame that Rust doesn't have a similar system in place. It works wonderfully if you need to precompute something or do some light reflection work.
But, from an actual user perspective it's not very fun or easy to use as soon as you try something harder. The biggest issue I see is that there's no static trait/interface/concept in the language. Any comptime type you receive as a parameter is essentially the `any` type from TypeScript or `void` from C/C++. If you want to do something specific with it, like call a specific method on it, you have to make sure to check that the type has it. You can of course ignore it and try to call it without checking, but you're not going to like the errors. Of course, since there are no interfaces you have to do that manually. This is done by reading the Zig stdlib source code to figure out the type enum/structures and then pattern-matching like 6 levels deep. For every field, every method, every parameter of a method. This sucks hard. Of course, once you do check the type you still won't get any intellisense or any help at all from your IDE/editor.
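A tiny example of the kind of manual check being described, using builtins (the function name is hypothetical):

```zig
fn requireNext(comptime T: type) void {
    // a poor man's interface check: fail with a readable message
    // instead of a six-levels-deep pattern-match error
    if (!@hasDecl(T, "next")) {
        @compileError(@typeName(T) ++ " does not provide a `next` method");
    }
}
```

Note that @hasDecl only proves the declaration exists; verifying its signature still requires the deep @typeInfo pattern matching complained about above.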
Now, there are generally two solutions to this:
One would be to add static interfaces/concepts to the language. At the time this was shot down as "unnecessary". Maybe, but it does make this feature extremely difficult to use for anyone but the absolutely most experienced programmers. Honestly, it feels very similar to how Rust proc macros are impenetrable for most people.
The second one is to take a hint from TypeScript and take their relatively complex type system and type assertions. Eg. `(a: unknown): a is number => typeof a === 'number'`.
This one also seems like a bust as it seems to go against the "minimal language" mantra. Also, I don't get the feeling that the language dev team particularly cares about IDEs/LSPs as the Zig LSP server was quite bad the last time I tried it.
Now, the third solution and the one the people behind the Zig LSP server went with is to just execute your comptime functions to get the required type information. Of course, this can't really make the experience of writing comptime any easier, just makes it so that your IDE knows what the result of a comptime invocation was.
So in short it is as difficult to use as it is cool. Really, most of the language is like this. The C interop isn't that great and is severely overhyped. The docs suck. The stdlib docs are even worse. I guess I'm mostly disappointed since I was hoping Zig could be used where unsafe Rust sucks, but I walked away unsatisfied.
Rust really needs comptime, I love cargo and the ecosystem but trait level programming is weird, macros are weird, why can’t you just be normal? (rust screams)
I see a lot of people in the comments basically saying "well X did it first" and that it's not worth talking about. This misses the point for me; Zig is interesting to me personally, not because of the semantics of the std lib or anything really, it's just something nice to play around with so far. I think with that attitude we probably could have stopped systems programming at C++, and that wouldn't be too fun at all; what would we all do without Java to laugh at?
[+] [-] noelwelsh|1 year ago|reply
Some of the issues that come to mind:
* Implementing generics in this way breaks parametricity. Simply put, parametricity means being able to reason about functions just from their type signature. You can't do this when the function can do arbitrary computation based on the concrete type a generic type is instantiated with.
* It's not clear to me how Zig handles recursive generic types. Generally, type systems are lazy to allow recursion. So I can write something like
type Example = Something[Example]
(Yes, this is useful.)
* Type checking and compile-time computation can interact in interesting ways. Does type checking take place before compile-time code runs, after it runs, or can they be interleaved? Different choices give different trade-offs. It's not clear to me what Zig does and hence what tradeoffs it makes.
* The article suggests that compile-time code can generate code (not just values) but doesn't discuss hygiene.
There is a good discussion of some issues here: https://typesanitizer.com/blog/zig-generics.html
[+] [-] MatthiasPortzel|1 year ago|reply
However, in other situations seeing "comptime" in Zig code has makes me go "oh no" because, like Lisp macros, it's very easy to use comptime to avoid a problem that doesn't exist or wouldn't exist if you structured other parts of your code better. For example, the OP's example of iterating the fields of a struct to sum the values is unfortunately characteristic of how people use comptime in the wild--when they would often be better served by using a data-structure that is actually iterable (e.g. std.enums.EnumArray).
[+] [-] marhee|1 year ago|reply
> Implementing generics in this way breaks parametricity. Simply put, parametricity means being able to reason about functions just from their type signature. You can't do this when the function can do arbitrary computation based on the concrete type a generic type is instantiated with.
Do you mean reasoning about a function in the sense of just understanding what a functions does (or can do), i.e. in the view of the practical programmer, or reasoning about the function in a typed theoretical system (e.g. typed lambda calculus or maybe even more exotic)? Or maybe a bit of both? There is certainly a concern from the theoretical viewpoint but how important is that for a practical programming language?
For example, I believe C++ template programming also breaks "parametricity" by supporting template specialisation. While there are many mundane issues with C++ templates, breaking parametricity is not a very big deal in practice. In contrast, it enables optimisations that are not otherwise possible (for templates). Consider for example std::vector<bool>: implementations can be made that actually store a single bit per vector element (instead of how a bool normally is represented using an int or char). Maybe this is even required by the standard, I don't recall. My point is that in makes sense for C++ to allow this, I think.
[+] [-] ScottRedig|1 year ago|reply
So at least address your points here:
* I do agree this is a direct trade-off with Zig style comptime, versus more statically defined function signatures. I don't think this affects all code, only code which does such reasoning with types, so it's a trade-off between reasoning and expressivity that you can make depending on your needs. On the other hand, per the post's view 0, I have found that just going in and reading the source code easily answers the questions I have when the type signature doesn't. I don't think I've ever been confused about how to use something for more than the time it takes to read a few dozen lines of code.
* Your specific example for recursive generic types poses a problem because a name being used in the declaration causes a "dependency loop detected" error. There are ways around this. The generics example in the post for example references itself. If you had a concrete example showing a case where this does something, I could perhaps show you the zig code that does it.
* Type checking happens during comptime. Eg, this code:
Gives this error: So the first @compileLog statement was run by comptime, but then the type check error stopped it from continuing to the second @compileLog statement. If you dig into the Zig issues, there are some subtle ways the type checking between comptime and runtime can cause problems. However it takes some pretty esoteric code to hit them, and they're easily resolved. Also, they're well known by the core team and I expect them to be addressed before 1.0.* I'm not sure what you mean by hygiene, can you elaborate?
[+] [-] jmull|1 year ago|reply
This has nothing to do with compile-time execution, though. You can reason about a function from its declaration if it has a clear logical purpose, is well named, and has well named parameters. You can consider any part of a parameter the programmer can specify as part of the name, including label, type name, etc.
> There is a good discussion of some issues here: https://typesanitizer.com/blog/zig-generics.html
That's actually not a great article. While I agree with the conclusion stated in the title, it's a kind of "debate team" approach to argumentation which tries to win points rather than make meaningful arguments.
The better way to frame the debate is flexibility vs complexity. A fixed function generics system in a language is simpler (if well designed) than a programmable one, but less flexible. The more flexibility you give a generics system, the more complex it becomes, and the closer it becomes to a programming language in its own right. The nice thing about zig's approach is that the meta-programming language is practically the same thing as the regular programming language (which, itself, is a simple language). That minimizes the incremental complexity cost.
It does introduce an extra complexity though: it's harder for the programmer to keep straight what code is executing at compile time vs runtime because the code is interleaved and the context clues are minimal. I wonder if a "comptime shader" could be added to the language server/editor plugin that puts a different background color on comptime code.
[+] [-] withoutboats3|1 year ago|reply
Based on my experience with Rust, a lot of what people want to do with its "const generics" would probably be easier with a feature like comptime. Letting you do math on const generics while maintaining parametricity is hard to implement. When all you really want is "a trait for a hash function with an output size of N," giving up parametricity and generating the trait from N in an earlier codegen step is probably fine for your purposes, but Rust's macros are too flexible and annoying to do it that way. But as soon as you replace parametric polymorphism with a naive code generation feature, you're in for a world of hurt.
[+] [-] anonymoushn|1 year ago|reply
You can't use the binding early like this, but inside of the type definition you can use the @This() builtin to get a value that's the type you're in, and you can presumably do whatever you like with it.
The type system barely does anything, so it's not very interesting when type checking runs. comptime code is type checked and executed. Normal code is typechecked and not executed.
comptime is not a macro system. It doesn't have the ability to be unhygienic. It can cleverly monomorphize code, or it can unroll code, or it can omit code, but I don't think it can generate code.
[+] [-] miki123211|1 year ago|reply
* Documentation. In a sufficiently-powerful comptime system, you can write a function that takes in a path to a .proto file and returns the types defined in that file. How should this function be documented? What happens when you click a reference to such a generated type in the documentation viewer?
* IDE autocompletions, go to definition, type hinting etc. A similar problem, especially when you're working on some half-written code and actual compilation isn't possible yet.
[+] [-] MathMonkeyMan|1 year ago|reply
The language doesn't see wide adoption in industry, so maybe its most important lessons have yet to be learned, but one problem with meta-programming is that it turns part of your program into a compiler.
This happens to an extent in every language. When you're writing a library, you're solving the problem "I want users to be able to write THIS and have it be the same as if they had written THAT." A compiler. Meta-programming facilities just expand how different THIS and THAT can be.
Understanding compilers is hard. So, that's at least one potential issue with compile-time programming.
[+] [-] lerno|1 year ago|reply
If a reader of the code needs to stop to work out the code then that is a weakness, not a strength - as such a pause in code reading affects everything from refactoring to spotting bugs.
In Zig, compile time code looks like runtime code and you cannot easily know which is which without looking up the definition of the variables.
Turning some statements, like for, into compile-time execution requires "inline", while others, like if, always fold at compile time when the condition is constant-resolved, skipping even semantic checking of the "dead" branch.
Grasping this at a glance is very challenging.
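For comparison, C++ exhibits the same dead-branch behaviour with `if constexpr`: in a template, the discarded branch is never instantiated, so it can reference members that don't exist. A hedged sketch (names are made up, and this is an analogy rather than Zig's exact semantics):

```cpp
// In a template, the branch not taken by `if constexpr` is discarded
// without being instantiated, much like the constant-folded Zig `if`
// described above.
template <typename T>
int describe(T v) {
    if constexpr (sizeof(T) >= 4) {
        return static_cast<int>(v);
    } else {
        // Dead for int-sized T: this may call members that don't exist,
        // yet the code still compiles because the branch is discarded.
        return v.this_member_need_not_exist();
    }
}
```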
There is also the issue of generating types - and not just values - at compile time.
This means that a tool refactoring a compile time defined struct member must know how that struct member was created - which may be through arbitrary code execution.
All of this, including "build.zig" - the build tool describing how to compile a project using code invocations - makes it extremely challenging for an IDE to reason about code and provide meaningful analysis and refactoring tools.
Which also in the end may affect overall code quality.
So it’s a trade-off. For a contrast, look at C3 which has a comparable amount of compile time but tries to be IDE friendly.
[+] [-] flohofwoe|1 year ago|reply
And some features in your list are of questionable value IMHO (e.g. the "reasoning over a function type signature" - Rust could be a much more ergonomic language if the compiler wouldn't have to rely on function signatures alone but instead could peek into called function bodies).
There are definitely some tradeoffs in Zig's comptime system, but I think the more important point is that nothing about it is surprising when working with it, it's only when coming from languages like Rust or C++ where Zig's comptime, generics and reflection might look 'weird'.
[+] [-] hansvm|1 year ago|reply
That feels like the wrong word for the thing you're describing. Linguistic arguments aside, yes, you're absolutely right.
In Zig though, that issue is completely orthogonal to generics. The first implementation `foo` is the "only" option available for "truly arbitrary" `T` if you don't magic up some extra information from somewhere. The second implementation `bar` uses an extra language feature unrelated to generics to return a different valid value (it's valid so long as the result of `bar(T, x)` is never accessed). The third option `baz` works on any type with non-zero width and just clobbers some data for fun (you could golf it some more, but I think the 5-line implementation makes it easier to read for non-Zig programmers).
Notice that we haven't performed a computation with `T` and were still able to do things that particular definition of parametricity would not approve of.
Zig does give up that particular property (being able to rely on just a type signature to understand what's going on). Its model is closer to "compile-time duck-typing." The constraints on `T` aren't an explicitly enumerated list of constraints; they're an in vivo set of properties the code using `T` actually requires. That fact is extremely annoying from time to time (e.g., for one or two major releases the reference Reader/Writer didn't include the full set of methods, but all functions using readers and writers just took in an `anytype`, so implementers either had to read a lot of source or play a game of whack-a-mole with the compiler errors to find the true interface), but for most code it's really not hard to handle.
E.g., if you've seen the `Iterator` pattern once, the following isn't all that hard to understand. Your constraints on `It` are that it tell you what the return type is, that return type ought to be some sort of non-comptime numeric, and it should have a `fn next(self: *It) ?T` method whose return values after the first `null` you're allowed to ignore. If you violate any of those constraints (except, perhaps, the last one -- maybe your iterator chooses to return null and then a few more values) then the code will fail at comptime. If you're afraid of excessive compiler error message lengths, you can use `@compileError()` to create a friendlier message documenting your constraints.
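The same duck-typed constraint can be sketched with C++ templates (illustrative only; `CountDown` and `sum` are made-up names, not the Zig snippet being described): `sum` compiles for any `It` whose `next()` returns an optional of a summable value, and fails at compile time otherwise.

```cpp
#include <optional>

// Made-up iterator: yields n, n-1, ..., 1, then null.
struct CountDown {
    int n;
    std::optional<int> next() {
        return n > 0 ? std::optional<int>(n--) : std::nullopt;
    }
};

// Duck-typed: the constraints on It are exactly what the body uses --
// a next() method returning an optional of some summable value type.
template <typename It>
auto sum(It it) {
    typename decltype(it.next())::value_type total{};
    while (auto v = it.next()) total += *v;
    return total;
}
```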
It's a different pattern from what you're describing, but it's absolutely not hard to use correctly.
> recursive generics
A decent mental model (most of which follows from "view 4" in TFA, where the runtime code is the residue after the interpreter resolves everything it can at comptime) is treating types as immutable and treating comptime evaluation like an interpreted language.
With that view, `type Example = Something[Example]` can't work because `Example` must be fully defined before you can pass it into `Something`. The laziness you see in ordinary non-generic type instantiations doesn't cross function boundaries. I'm not sure if there's a feature request for that (nothing obvious is standing out), but I'd be a fan @AndyKelley if you're interested.
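For contrast, that nominal-type laziness is what makes the C++ analogue of `type Example = Something[Example]` legal: the name `Example` can be used while the type is still incomplete, as long as only pointers or references to it are formed. A sketch (`Something` is hypothetical):

```cpp
// Something<T> may hold pointers to T even while T is incomplete.
template <typename T>
struct Something {
    T* self = nullptr;
};

// Legal: `Example` names itself in its own base class list (the CRTP
// pattern), because name binding is lazy -- members of Something<Example>
// only need a complete Example at the points where one is actually used.
struct Example : Something<Example> {};
```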
In terms of that causing problems IRL, it's only been annoying a few times in the last few years for me. The most recent one involved some comptime parser combinators, and there was a recursive message structure I wanted to handle. I worked around it by creating a concrete `FooParser` type with its associated manually implemented `parse` function (which itself was able to mostly call into rather than re-implement other parsers) instead of building up `FooParser` using combinators, so that the normal type instantiation laziness would work without issues.
> when does type checking run
Type inference is simplistic enough that this is almost a non-issue in Zig, aside from the normal tradeoffs from limited type inference (last I checked, they plan to keep it that way because it's not very important to them, it actively hinders the goal of being able to understand code by looking at a local snapshot, and that sort of complexity and constraint might keep the project from hitting more important goals like incremental compilation and binary editing). They are interleaved though (at least in the observable behavior, if you treat comptime execution as an interpreter).
[+] [-] medo-bear|1 year ago|reply
SBCL, which is a very popular Common Lisp implementation, is indeed strongly typed. Coalton, which is an add-on, is even statically typed.
[+] [-] WalterBright|1 year ago|reply
> Here the comptime keyword indicates that the block it precedes will run during the compile.
D doesn't use a keyword to trigger it. What triggers it is being a "const expression". Naturally, const expressions must be evaluatable at compile time. For example:
By avoiding use of non-constant globals, I/O, and calls to system functions like malloc(), quite a large percentage of functions can be run at compile time without any changes. Even memory can be allocated with it (using D's automatic memory management).
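C++'s `constexpr` works much like the D rule described here: there is no keyword at the call site, and the function runs at compile time wherever a constant expression is required. A sketch of that idea:

```cpp
// An ordinary function, also callable at compile time.
constexpr int factorial(int n) {
    return n <= 1 ? 1 : n * factorial(n - 1);
}

// Forced compile-time evaluation: the argument of static_assert
// must be a constant expression.
static_assert(factorial(5) == 120, "computed during compilation");

// The very same function also works with runtime arguments.
int factorial_runtime(int n) { return factorial(n); }
```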
[+] [-] LAC-Tech|1 year ago|reply
In ocaml you can redefine operators... but only in the context of another module.
So if I re-define + in some module Vec3, I can do:
Or even: So there you go, no "where did the + operator come from?" questions when reading the source, and still much nicer than: I doubt zig will change though. The language is starting to crystallize and anything that solved this challenge would be massive.
[+] [-] hiccuphippo|1 year ago|reply
[+] [-] zoogeny|1 year ago|reply
My ideal solution would be for the language to introduce custom operators that clearly indicate an overload. Something like a prefix/postfix (e.g. `let c = a |+| b`). That way it is clear to the person viewing the code that the |+| operation is actually a function call.
This is still open to abuse but I think it at least removes one of the major concerns.
[+] [-] flohofwoe|1 year ago|reply
[1] https://ziglang.org/documentation/master/#Vector
[2] https://clang.llvm.org/docs/LanguageExtensions.html#vectors-...
[+] [-] bigpingo|1 year ago|reply
"+ can do anything!" As you said, so can plus().
"Hidden function calls?" Have they never programmed a soft float or microcontroller without a div instruction? Function calls for every floating point op.
[+] [-] elcritch|1 year ago|reply
This needs to become a standard feature of programming languages IMHO.
It’s actually one of the biggest things I find lacking in Rust, which is limited to non-typed macros (last I tried). It’s so limiting not to have it. You just have to hope serde is implemented on the structs in a crate. You can’t even programmatically create your own structs with the same fields in Rust.
[+] [-] pakkarde|1 year ago|reply
How do you people debug and test these meta programs? Mine are just regular C programs that use the exact same debuggers and tools as anything else.
[+] [-] coldtea|1 year ago|reply
This describes exactly what people don't want to do.
[+] [-] pjc50|1 year ago|reply
Debugging .. well, you have to do a bit more work to set up a nice test framework, but you can then run the compiler with your plugin from inside your standard unit test framework, inside the interactive debugger.
[+] [-] modernerd|1 year ago|reply
> Arbitrary compile-time execution in C:
> cl /nologo /Zi metaprogram.c && metaprogram.exe
> cl /nologo /Zi program.c
> Compile-time code runs at native speed, can be debugged, and is completely procedural & arbitrary
> You do not need your compiler to execute code for you
https://x.com/ryanjfleury/status/1875824288487571873
[+] [-] koe123|1 year ago|reply
I've run experiments where a neural net is implemented by creating a JSON file from PyTorch, reading it in using @embedFile, and generating a struct with a specific "run" method.
This in theory allows the compiler to optimize the neural network directly (I haven't proven a great benefit from this though). Also the whole network lived on the stack, which means not having any dynamic allocation (not sure if this is good?).
[+] [-] bryango|1 year ago|reply
Looking at the language design, I really prefer Zig to Rust, but as an incompetent, amateur programmer, I couldn't write anything in Zig that's actually useful (or reliable), at least for now.
[+] [-] melodyogonna|1 year ago|reply
1. https://docs.modular.com/mojo/manual/parameters/
[+] [-] dminik|1 year ago|reply
There's a heap of praise thrown at zig comptime. I can certainly see why. From a programming language perspective it's an elegant and very powerful solution. It's a shame that Rust doesn't have a similar system in place. It works wonderfully if you need to precompute something or do some light reflection work.
But, from an actual user perspective it's not very fun or easy to use as soon as you try something harder. The biggest issue I see is that there's no static trait/interface/concept in the language. Any comptime type you receive as a parameter is essentially the `any` type from TypeScript or `void*` from C/C++. If you want to do something specific with it, like call a particular method on it, you have to make sure to check that the type has it. You can of course skip the check and call it anyway, but you're not going to like the errors. Since there are no interfaces, you have to do that checking manually - by reading the Zig stdlib source code to figure out the type-info enums/structures and then pattern-matching like 6 levels deep, for every field, every method, every parameter of a method. This sucks hard. And once you do check the type, you still won't get any intellisense or any help at all from your IDE/editor.
Now, there are generally two solutions to this:
One would be to add static interfaces/concepts to the language. At the time this was shot down as "unnecessary". Maybe, but it does make this feature extremely difficult to use for anyone but the absolutely most experienced programmers. Honestly, it feels very similar to how Rust proc macros are impenetrable for most people.
The second one is to take a hint from TypeScript and take their relatively complex type system and type assertions. Eg. `(a: unknown): a is number => typeof a === 'number'`. This one also seems like a bust as it seems to go against the "minimal language" mantra. Also, I don't get the feeling that the language dev team particularly cares about IDEs/LSPs as the Zig LSP server was quite bad the last time I tried it.
Now, the third solution, and the one the people behind the Zig LSP server went with, is to just execute your comptime functions to get the required type information. Of course, this can't really make the experience of writing comptime any easier; it just means your IDE knows what the result of a comptime invocation was.
So in short it is as difficult to use as it is cool. Really, most of the language is like this. The C interop isn't that great and is severely overhyped. The docs suck. The stdlib docs are even worse. I guess I'm mostly disappointed since I was hoping Zig could be used where unsafe Rust sucks, but I walked away unsatisfied.
[+] [-] G3rn0ti|1 year ago|reply
https://web.archive.org/web/20250107090641/https://www.scott...