ArkType is a really interesting library that has a difficult time marketing itself. More than being a schema validator, it brings TS types into the runtime, so you can programmatically work with types as data with (near?) full fidelity.
I've been evaluating schema libraries for a better-than-Zod source of truth, and ArkType is where I've been focused. Zod v4 just entered beta[1], and it fixes many of the problems I had with it. For such a mature library to improve like this, v4 is a treat and speaks volumes about the quality of engineering. But ArkType has a much larger scope, and feels to me more like a data modeling language than a library. Something I definitely want as a dev!
The main downside I see is that its runtime code size footprint is much larger than Zod's. For some frontends this may be acceptable, but it's a real cost that isn't wise to pay in many cases. The good news is that with precompilation[2] I think ArkType will come into its own and look more like a language with a compiler, making it suitable for lightweight frontends too.
I recently went down this same rabbit hole for backend and stumbled on Typia[0] and Nestia[1] from the same developer. The DX with this is fantastic, especially when combined with Kysely[2], because then it's pure TypeScript end-to-end (no runtime schema artifacts; validations get AOT-inlined).
> The main downside I see is that its runtime code size footprint is much larger than Zod.
Yes, it unfortunately really does bloat your bundle a lot, which is a big reason I personally chose to go with Valibot instead (it also helps that it's a lot closer to Zod's API, so it's easier to pick up).
Thanks for linking that issue, I'll definitely revisit it if they can get the size down.
I have to agree, I know not all people like the same things, but this looks terrible to me. Maybe it’s like Tailwind in the sense that you learn to like it when you actually use it.
The thing is Zod seems fairly standard in the ecosystem, and I value that more than novelty.
Yeah, that type constraints in strings thing also threw me off big time. Zod is, if you learn it, fairly composable, and roughly feels like TypeScript types in terms of intuition about capabilities. I'm not sure I want to sacrifice all of that for a vague promise of performance improvements that I realistically never encounter.
In validation, it's never about speed. It's about how you relate the schema tree to the error-reporting tree. If you didn't already, you will figure that out eventually.
If you mess that up (by being too flat, too customizable, or too limited), library users will start coming up with their own wrappers around it, which will make your stuff slower and your role as a maintainer hell.
(source: 15 years intermittently maintaining a similar project).
There is an obvious need for a validation library nowadays that bridges OOP, functional, and natural languages. Its value, if adopted as a standard, would be immense. The floor is still lava though; can't make it work in today's software culture.
Especially when unions are involved. We have a flexible schema for many parameters (e.g., X accepts an object or an array of them, and one object has some properties that could be an enum or a detailed shape), and both Zod and Valibot produce incomprehensible and useless error messages that don’t explain what’s wrong. We had to roll our own.
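To make the complaint concrete, here is a hand-rolled sketch of the kind of path-aware union error we ended up wanting (plain TypeScript, no library; the color shape is invented for illustration):

```typescript
type Issue = { path: string; message: string }

// For a union like `"red" | "green" | "blue" | { r, g, b }`, report which
// branch failed and where, instead of one opaque "invalid input" message.
function validateColor(value: unknown, path: string): Issue[] {
  if (typeof value === "string") {
    return ["red", "green", "blue"].includes(value)
      ? []
      : [{ path, message: `not a known color name: ${JSON.stringify(value)}` }]
  }
  if (typeof value === "object" && value !== null) {
    const v = value as Record<string, unknown>
    const issues: Issue[] = []
    for (const ch of ["r", "g", "b"]) {
      if (typeof v[ch] !== "number") {
        issues.push({ path: `${path}.${ch}`, message: "expected a number" })
      }
    }
    return issues
  }
  return [{ path, message: "expected a color name or an {r, g, b} object" }]
}

console.log(validateColor({ r: 255, g: "oops" }, "$.color"))
// e.g. issues at "$.color.g" and "$.color.b", each with its own message
```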
Really conflicted about TS here. On one hand it's so impressive that a type system can do these sorts of tricks. On the other hand, if we had type introspection at runtime we wouldn't need any of this.
What good would that do for making sure a complex form has been filled out completely? Without a way to meaningfully attach error messages and so on, that only solves a small subset of the problems libraries like ArkType, Valibot, and Zod solve.
I really want to see the people that have performance issues with Zod and hear what their use cases are.
I mean it.
I've been parsing (not just validating) runtime values for a decade (io-ts, Zod, effect/schema, tcomb, etc.) and I find the performance penalty irrelevant in virtually any project, either FE or BE.
Seriously, people will fill their website with Google tracking crap, 20000 libraries, react crap for a simple crud, and then complain about ms differences in parsing?
Since recent TypeScript features have made it more possible, I'm less interested in runtime validation, and really only keen on build-time schema validation.
There are a few tools out there that generate code that TypeScript will prove validates your schema. That, I think, is the path forward.
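Roughly, the generated output is a plain type predicate; hand-written here for illustration (a real generator like Typia derives the body from the type, so the checks can't drift from the interface):

```typescript
interface RGB {
  red: number
  green: number
  blue: number
}

// The `data is RGB` annotation makes tsc narrow the value after the check.
// TS trusts the predicate body, which is why having it generated from the
// type (rather than hand-maintained) is the appealing part.
function isRGB(data: unknown): data is RGB {
  return (
    typeof data === "object" &&
    data !== null &&
    typeof (data as RGB).red === "number" &&
    typeof (data as RGB).green === "number" &&
    typeof (data as RGB).blue === "number"
  )
}

const input: unknown = JSON.parse('{"red":0,"green":128,"blue":255}')
if (isRGB(input)) {
  console.log(input.blue) // input is RGB here, no casts needed
}
```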
I don’t know too much about the TS ecosystem, but do these new systems you talk about do it via a “Smart Constructor”? That is the pattern I typically use to solve this problem in Haskell and Rust code.
With many TS features making their way into JS, I've sometimes wondered if TS is to JS what Sass is to CSS. I currently rely on TS, but I now consider Sass harmful [there being overlapping syntax w/ vanilla CSS].
What features are you thinking of? I hadn’t heard anything about TS types making their way into JS in any way (unless you count engines that can execute TS directly, but those just ignore the type syntax). That would be a massive change.
I like the idea of having types at runtime for parsing etc and generating validators in various languages. What stopped me from going there so far is that I already have TypeScript types provided for the various libraries I use. How good are the tools for importing TypeScript types into ArkType/Zod and working with types in various representations in parallel?
The way zod and arktype generally handle this is by treating the schema as the source of truth, rather than a type. They then provide a way to define the type in terms of the schema:
// Zod 3 syntax
import { z } from 'zod'

const RGB = z.object({
  red: z.number(),
  green: z.number(),
  blue: z.number(),
})

type RGB = z.infer<typeof RGB>
// same thing as:
// type RGB = { red: number; green: number; blue: number };
For the initial migration, there are tools that can automatically convert types into the equivalent schema. A quick search turned up https://transform.tools/typescript-to-zod, but I've seen others too.
For what it's worth, I have come to prefer this approach of deriving types from parsers over the other way around.
This looks interesting, however Zod has become a standard of sorts and a lot of libraries I use expect, for example, a JSON schema defined as a Zod schema. I would need some sort of adapter to a Zod schema to make this work for me.
TS: "We have added types to javascript, everything is now strongly typed, and the compiler will pester you to no end until it's happy with the types."
Me: "Awesome, so I get an object from an API, it will be trivial to check at runtime if it's of a given type. Or to have a debug mode that checks each function's inputs to match the declared types. Otherwise the types would be just an empty charade. Right?"
TS: "What?"
Me: "What?"
Realizing this was a true facepalm moment for me. No one ever thought of adding a debug TS mode where it would turn
function myFunc(a: string, b: number) {}
into
function myFunc(a: string, b: number) {
  assert(typeof a === "string")
  assert(typeof b === "number")
}
to catch all the "somebody fetched a wrong type from the backend" or "someone did some stupid ((a as any) as B) once to silence the compiler and now you can't trust any type assertion about your codebase" problems. Nada.
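That hypothetical debug mode can be approximated today with a wrapper; a sketch, where `checked` is a made-up helper that only handles primitive tags:

```typescript
type Prim = "string" | "number" | "boolean"

// Wrap a function so each positional argument is checked against a declared
// primitive tag before the call goes through — roughly what a debug-mode
// compiler could emit from the TS signature automatically.
function checked<A extends unknown[], R>(
  specs: Prim[],
  fn: (...args: A) => R
): (...args: A) => R {
  return (...args: A): R => {
    specs.forEach((spec, i) => {
      if (typeof args[i] !== spec) {
        throw new TypeError(`arg ${i}: expected ${spec}, got ${typeof args[i]}`)
      }
    })
    return fn(...args)
  }
}

const myFunc = checked(["string", "number"], (a: string, b: number) => a.length + b)

myFunc("hi", 3)              // ok
// myFunc("hi", "3" as any)  // would throw TypeError at runtime
```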
That isn’t what TypeScript is for, you’re describing a similar but unrelated problem, namely runtime type validation. There’s a reason why JSON.parse() returns any.
However, TS lets you avoid logical errors in the program's source, a class of errors that has historically been very important, since JavaScript is so highly dynamic.
The debug mode sounds interesting at first thought, but quickly explodes in complexity when you deal with more complex object types and signatures. To enable automatic runtime validation for all cases, you would need to rewrite programs so thoroughly that you’re pretty much guaranteed to introduce bugs and behaviour changes that were not present in the source code.
In my opinion it's great that TS draws a strict boundary to avoid runtime impact at all costs, and leaves that to libraries like Zod, which handle dealing with external data.
> to catch all the "somebody fetched a wrong type from the backend" or "someone did some stupid ((a as any) as B) once to silence the compiler and now you can't trust any type assertion about your codebase" problems. Nada.
Those type casts are sure annoying, but what's the alternative? Even in your hypothetical debug mode, you would not be safe here, since you're effectively telling the compiler you know better and it's supposed to treat the value as that type, not assert it. Or do you want to remove the escape hatch that "as" is? Because that would be a major pain in the ass in situations where you just do know better, or don't want to ensure perfect type safety for something you know will work.
You can’t make things idiot proof, no matter how hard you try. That doesn’t make preprocessing type hints useless.
[1] https://v4.zod.dev/v4
[2] https://github.com/arktypeio/arktype/issues/810
CharlieDigital|10 months ago
I was so shocked by how good this is that I ended up writing up a small deck (haven't had time to write this into a doc yet): https://docs.google.com/presentation/d/1fToIKvR7dyvQS1AAtp4Y...
Shockingly good (for backend)
[0] Typia: https://typia.io/
[1] Nestia: https://nestia.io/
[2] https://kysely.dev/
epolanski|10 months ago
So...it's a parser. Like Zod or effect schema.
https://effect.website/docs/schema/introduction/
Aeolun|10 months ago
It’s a miracle it can be 100x faster than Zod, but speed was never my issue with zod to begin with.
davnicwil|10 months ago
Heads up, seems overall more scannable than an equivalent zod schema though given the similarity to 'raw' TS.
Also it seems like a fairly short hop to this engine being used with actual raw TS types in a compilation step or prisma-style codegen?
re-thc|10 months ago
Still far behind if the 100x is to be believed. v4 isn't even a 10x improvement. Nice changes though.
cugul|10 months ago
https://developer.huawei.com/consumer/en/doc/harmonyos-guide...