Maybe I'm missing something and I'm glad this idea resonates, but it feels like sometime after Java got popular and dynamic languages got a lot of mindshare, a large chunk of the collective programming community forgot why strong static type checking was invented and are now having to rediscover this.
In most strong statically typed languages, you wouldn't often pass strings and generic dictionaries around. You'd naturally gravitate towards parsing/transforming raw data into typed data structures with guaranteed properties, to avoid writing defensive code everywhere: e.g., a Date object that throws an exception in the constructor if the given string doesn't validate as a date (Edit: changed this from email because email validation is a can of worms as an example). So there, "parse, don't validate" is the norm and not a tip/idea that would need to gain traction.
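To make that concrete, here's a minimal Python sketch of the Date idea (the class name and the leap-day example are mine, not from any particular codebase):

```python
from datetime import date

class ParsedDate:
    """A date that is guaranteed valid: construction either succeeds
    or raises, so no downstream code needs to re-check it."""

    def __init__(self, raw: str):
        # date.fromisoformat raises ValueError for anything that
        # isn't a real calendar date, e.g. "2024-02-30"
        self.value = date.fromisoformat(raw)

ParsedDate("2024-02-29")      # fine: a real leap day
try:
    ParsedDate("2024-02-30")  # rejected at the boundary
except ValueError:
    pass
```

Everything past the constructor gets to treat the value as a real date rather than a maybe-date string.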
> In most strong statically typed languages, you wouldn't often pass strings and generic dictionaries around.
In 99% of the projects I've worked on in my professional life, anything coming from human input is manipulated as a string, and most of the time it stays that way through all of the application layers (with more or fewer checks along the way).
As for your specific example, I can even say that I have never seen something like an "Email object".
> it feels like sometime after Java got popular [...] a large chunk of the collective programming community forgot why strong static type checking was invented and are now having to rediscover this.
I think you have a very rose-tinted view of the past: while on the academic side static types were intended for proofs, on the industrial side they were for efficiency. C didn't get static types in order to prove your code correct (and it's really not great at doing that); it got static types so you could account for memory and optimise it.
Java didn't help either, when every type has to be a separate file the cost of individual types is humongous, even more so when every field then needs two methods.
> In most strong statically typed languages, you wouldn't often pass strings and generic dictionaries around.
In most strong statically typed languages you would not, but in most statically typed codebases you would. Just look at the Windows interfaces. In fact, while Simonyi's original "Apps Hungarian" had dim echoes of static types, those got completely washed out in Systems Hungarian, which was used widely in C++, already a statically typed language.
> You'd naturally gravitate towards parsing/transforming raw data into typed data structures that have guaranteed properties instead to avoid writing defensive code everywhere e.g. a Date object that would throw an exception in the constructor if the string given didn't validate as a date
It's tricky because `class` conflates a lot of semantically-distinct ideas.
Some people might be making `Date` objects to avoid writing defensive code everywhere (since classes are types), but...
Other people might be making `Date` objects so they can keep all their date-related code in one place (since classes are modules/namespaces, and in Java classes even correspond to files).
Other people might be making `Date` objects so they can override the implementation (since classes are jump tables).
Other people might be making `Date` objects so they can overload a method for different sorts of inputs (since classes are tags).
I think the pragmatics of where code lives, and how the execution branches, probably have a larger impact on such decisions than safety concerns. After all, the most popular way to "avoid writing defensive code everywhere" is to.... write unsafe, brittle code :-(
> You'd naturally gravitate towards parsing/transforming raw data into typed data structures that have guaranteed properties instead to avoid writing defensive code everywhere e.g.
There's nothing natural about this. It's not like we're born knowing good object-oriented design. It's a pattern that has to be learned, and the linked article is one of the well-known pieces that helped a lot of people understand this idea.
My experience was that enterprise programmers burned out on things like WSDL at about the same time Rails became usable (or Django if you’re that way inclined). Rails had an excellent story for validating models which formed the basis for everything that followed, even in languages with static types - ASP.NET MVC was an attempt to win Rails programmers back without feeling too enterprisey. So you had these very convenient, very frameworky solutions that maybe looked like you were leaning on the type system but really it was all just reflection. That became the standard in every language, and nobody needed to remember “parse don’t validate” because heavy frameworks did the work. And why not? Very few error or result types in fancy typed languages are actually suited for showing multiple (internationalised) validation errors on a web page.
The bitter lesson of programming languages is that whatever clever, fast, safe, low-level features a language has, someone will come along and create a more productive framework in a much worse language.
Note, this framework - perhaps the very last one - is now ‘AI’.
2 out of the 3 most problematic bugs I've had in the last two years or so were in statically typed languages where previous developers didn't use the type system effectively.
One bug was in a system that had an Email type but didn't actually enforce the invariants of emails. The one that caused the problem was it didn't enforce case insensitive comparisons. Trivial to fix, but it was encased in layers of stuff that made tracking it down difficult.
The other was a home grown ORM that used the same optional / maybe type to represent both "leave this column as the default" and "set this column to null". It should be obvious how this could go wrong. Easy to fix but it fucked up some production data.
Both of these are failures to apply "parse, don't validate". The former didn't enforce the invariants it had supposedly parsed the data into. The latter didn't differentiate between two different parses.
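For the second bug, the usual fix is a dedicated sentinel so that "not mentioned" and "explicitly null" parse into different values. A hedged Python sketch (the names are hypothetical, not from the ORM in question):

```python
class Unset:
    """Sentinel meaning 'the caller said nothing about this column',
    as opposed to None, which means 'set this column to NULL'."""

UNSET = Unset()

def build_update(**columns):
    # Only columns the caller actually mentioned survive; an explicit
    # None will become SQL NULL, while UNSET leaves the column untouched.
    return {name: value for name, value in columns.items()
            if value is not UNSET}

# nickname is cleared, signup_date is left at its current value
build_update(nickname=None, signup_date=UNSET)
```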
In my experience that's pretty rare. Most people pass around phone numbers as strings instead of a PhoneNumber class.
Java makes it a pain though, so most code ends up primitive obsessed. Other languages make it easier, but unless the language and company has a strong culture around this, they still usually end up primitive obsessed.
I'm not sure, maybe a little bit. My own journey started with BASIC and then C-like languages in the 80s, dabbling in other languages along the way, doing some Python, and then transitioning to more statically typed modern languages in the past 10 years or so.
C-like languages have this a little bit, in that you'll probably make a struct/class from whatever you're looking at and pass it around rather than a dictionary. But dates are probably just stored as untyped numbers with an implicit meaning, and optionals are a foreign concept (although implicit in pointers).
Now, I know that this stuff has been around for decades, but it wasn't something I'd actually use until relatively recently. I suspect that's true of a lot of other people too. It's not that we forgot why strong static type checking was invented, it's that we never really knew, or just didn't have a language we could work in that had it.
Strong static type checking is helpful when implementing the methodology described in this article, but it is beside the article's focus. You still need to use the most restrictive type: for example, uint instead of int when you want to exclude negative values; a non-empty list type if your list should not be empty; etc.
When the type is more complex, specific constraints should be used. For a real-life example: I designed a type for the occupancy of a hotel booking application. The number of occupants of a room must be positive, and a child must be accompanied by at least one adult. My type Occupants has a constructor Occupants(int adults, int children) that verifies those conditions on construction (and also some maximum values).
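A rough Python version of that constructor (the maximum is made up):

```python
class Occupants:
    """Room occupancy whose invariants are checked once, at construction."""

    MAX_PER_ROOM = 6  # hypothetical maximum

    def __init__(self, adults: int, children: int):
        if adults < 0 or children < 0:
            raise ValueError("occupant counts must be non-negative")
        if adults + children == 0:
            raise ValueError("a room must have at least one occupant")
        if children > 0 and adults == 0:
            raise ValueError("a child must be accompanied by at least one adult")
        if adults + children > self.MAX_PER_ROOM:
            raise ValueError("room capacity exceeded")
        self.adults = adults
        self.children = children
```

Any Occupants value that exists has already proven those conditions, so the rest of the booking code never re-checks them.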
It's a design choice more than anything. Haskell's type safety is opt-in — the programmer has to actually choose to properly leverage the type system and design their program this way.
I worked (a long time ago) on a C project where every int was wrapped in a struct.
And a friend told me about a C++ project where every index was a uint8 or uint16, and they had to manage many different types of objects, leading to lots of bugs.
So it isn't really linked to the language.
> Edit: Changed this from email because email validation is a can of worms as an example
Email honestly seems much more straightforward than dates... Sweden had a Feb 30 in 1712, and there's all sorts of date ranges that never existed in most countries (e.g. the American colonies skipped September 3-13 in 1752).
I think you're quite right that the idea of "parse don't validate" is (or can be) quite closely tied to OO-style programming.
Essentially the article says that each data type should have a single location in code where it is constructed, which is a very class-based way of thinking. If your Java class only has a constructor and getters, then you're already home free.
Also for the method to be efficient you need to be able to know where an object was constructed. Fortunately class instances already track this information.
this is very much a nitpick, but I wouldn't call throwing an exception in the constructor a good use of static typing. sure, it's using a separate type, but the guarantees are enforced at runtime
A frequent visitor to HN. Tip: if you click on the "past" link under the title (but not the "past" link at the top of the page), you'll trigger a search for previous posts.
This is a great article, but people often trip over the title and draw unusual conclusions.
The point of the article is about locality of validation logic in a system. Parsing in this context can be thought of as consolidating the logic that makes all structure and validity determinations about incoming data into one place in the program.
This lets you then rely on the fact that you have valid data in a known structure in all other parts of the program, which don't have to be crufted up with validation logic when used.
Related, it's worth looking at tools that further improve structure/validity locality like protovalidate for protobuf, or Schematron for XML, which allow you to outsource the entire validity checking to library code for existing serialization formats.
When I came to this idea on my own, I called it "translation at the edge." But for me it was more than just centralizing data validation; it was also about giving you access to all the tools your programming language has for manipulating data.
My main example was working with a co-worker whose application used a number of timestamps. They were passing them around as strings, then parsing and doing math with them at the point of use. By parsing the inputs into the language's timestamp representation up front, their internal interfaces became much cleaner and their purpose much more obvious, since the math could be expressed at the call site instead of being buried in function logic (and, necessarily, in complex function names).
I think that's an excellent way to build a defensive parsing system, but... I still want to build that and then put a validator in front of it to run a lot of the common checks and make sure we can populate easy-to-understand (and voluminous) errors for the user/service/whatever. There is very little as miserable as loading a 20k-row CSV file into a system and receiving "Invalid value for name on line 3", knowing that there are likely a plethora of other issues you'll need to discover one by one.
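The two aren't in conflict: a parser can accumulate errors instead of bailing at the first one. A sketch of the idea in Python (the column layout and rules are invented):

```python
def parse_rows(rows):
    """Parse every (name, age) row, collecting all errors rather than
    stopping at the first; parsed is trustworthy only when errors == []."""
    parsed, errors = [], []
    for lineno, (name, age) in enumerate(rows, start=1):
        row_errors = []
        if not name.strip():
            row_errors.append(f"line {lineno}: name must not be empty")
        if not age.isdigit():
            row_errors.append(f"line {lineno}: age {age!r} is not a whole number")
        if row_errors:
            errors.extend(row_errors)
        else:
            parsed.append((name, int(age)))
    return parsed, errors

good, bad = parse_rows([("Ada", "36"), ("", "x"), ("Bob", "41")])
# bad reports both problems on line 2 at once
```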
It seems modern statically-typed and even dynamically-typed languages all adopted this idea, except Go, where they decided zero values represent valid states always (or mostly).
A sincere question to Go programmers – what's your take on "Parse, Don't Validate"?
Go was written for programmers with little to no Computer Science knowledge, a fair bit of C/C++ experience, and zero tolerance for learning abstractions that take more than 30 seconds to explain. If you have opinions about type systems, you are not the target demographic for Go.
In larger codebases, I've noticed an emergent phenomenon that usually the T{} itself (bypassing NewT constructor) tends to be unusable anyway, hence the constructor will enforce "parse, don't validate" just well enough. Only very trivial T{} won't have a nilable private field, such as a pointer, func, or chan.
I'd say that "making zero a meaningful value" does not scale well when codebase grows.
Not speaking for all Go programmers, but I think there is a lot of merit in the idea of "making zero a meaningful value". Zero Is Initialization (ZII) is a whole philosophy that uses this idea. Also, "nil-punning" in Clojure is worth looking at. Basically, if you make "zero" a valid state for all types (the number 0, an empty array, a null pointer) then you can avoid wrapping values in Option types and design your code for the case where a block of memory is initialized to zero or zeroed out.
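The same design instinct translates outside Go. A small Python sketch of a type whose "zero" is already a valid value, so no Optional wrapper is needed (the Basket example is mine):

```python
class Basket:
    """Deliberately designed so the no-argument 'zero value' is usable:
    an empty basket is a real basket, not a None to check for."""

    def __init__(self, items=()):
        self.items = list(items)

    def total(self, prices):
        # Summing over an empty basket is simply 0; no nil/None branch
        return sum(prices[item] for item in self.items)

empty = Basket()   # the zero value just works
empty.total({})    # no special-casing anywhere downstream
```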
I think, more generally, "push effects to the edges" which includes validation effects like reporting errors or crashing the program. If you, hypothetically, kept all of your runtime data in a big blob, but validated its structure right when you created it, then you could pass around that blob as an opaque representation. You could then later deserialize that blob and use it and everything would still be fine -- you'd just be carrying around the validation as a precondition rather than explicitly creating another representation for it. You could even use phantom types to carry around some of the semantics of your preconditions.
Point being: I think the rule is slightly more general, although this explanation is probably more intuitive.
Systems tend to change over time (and distributed nodes of a system don’t cut over all at once). So what was valid when you serialized it may not be valid when you deserialize it later.
This, along with John Ousterhout's talk [1] on deep interfaces was transformational for me. And this is coming from a guy who codes in python, so lots of transferable learnings.
Unfortunately, it's somewhat of a religious argument about the one true way. I've worked on both sides of the fence, and each field is equally green in its own way. I've used OCaml, with static typing, and Clojure, with maybe-opt-in schema checking. They both work fine for real purposes.
The big problem arrives when you mix metaphors. With typing, you're either in, or you're out - or should be. You ought not to fall between stools. Each point of view works fine, approached in the right way, but don't pretend one thing is the other.
I make great use of value objects in my applications, but there are things I needed to do to make it ergonomic/performant. A "small" application of mine has over 100 value objects implemented as classes. Large apps easily get into the 1000s of classes just for value objects. That is a lot of boilerplate. It's a lot of boxing/unboxing. It's a lot more typing than in "stringly typed" programs.
To make it viable, all value objects are code-generated from model schemas, and then customized as needed (only about 5% need customization beyond basic data types). I have auto-upcasting on setters so you can code stringly when wanted, but everything is validated (very useful for writing unit tests more quickly). I only parse into types at boundaries or on writes/sets, not on reads/gets (this limits the amount of boxing, particularly when reading large amounts of data). Heavy use of reflection, and auto-wiring/dependency injection.
But with these conventions in place, I quite enjoy it. Easy to customize/narrow a type. One convention for all validation. External inputs are secure by default, with nice error messages. One place where all value validation happens (the ./values classes folder).
A related talk is Richard Feldman's "Making Impossible States Impossible." Richard wrote a number of Elm packages and is the creator of the Roc language.
> Now I have a single, snappy slogan that encapsulates what type-driven design means to me, and better yet, it’s only three words long
IMHO this is distracting and sort of vain. It forces this "semantics" perspective onto the reader, just so the author can have a snappy slogan.
Also, not all languages have such freedom in type expressiveness. Some of them do, but offer terrible trade-offs.
The truth is, if you try to be that expressive in a language that doesn't support it you'll end up with a horror story. The article fails to mention that, and that "snappy slogan" makes it look like it's an absolute claim that you must internalize, some sort of deep truth that applies everywhere. It isn't.
An unconstrained json/bson parser without recursive structure limits must be bounded somehow. In many cases, the ordering of marshaled data cannot be guaranteed across platforms.
The best method is to walk the symbolic tree with a cost function and score the fitness of the data against expected structures. For example, mismatched or duplicate GUID/Account/permission/key fields reroute the message to the dead-letter queue for analysis, missing required fields trigger error messaging, and missing optional fields lower the qualitative score of the message content.
Parsers can be extremely unpredictable, and loosely typed formats are dangerous at times. =3
This article has made the rounds on the internet before. Maybe because it resonates with people (who repost it time and again). Anyway, I very much agree with the idea. In my experience, "text" or "string" is not a type. Technically it is one, of course, but I seldom see it used well where a more apt type would do better; in short, it's a last-resort thing, and it fares badly there too. Ironically, the only good use for it is as input to a... parser.
I see a lot of URLs being passed around as strings within systems perfectly capable of leveraging type theory and offering user-defined types, if only through the OOP goodness a lot of people would furiously defend. The URL in this case has often _already_ been parsed once, but is effectively "unparsed" and keeps being sent around as text in need of re-parsing at every "junction" of the system that wants to meaningfully access it. Except that parsing is approached like some ungodly litany best avoided, and thus foregone or lazily implemented with a regex where a regex isn't nearly sufficient. Perhaps it's because we lack parsers, by and large, or at the very least parser generators that are readily available, understandable (to your average developer), and simple enough to use without requiring an understanding of formal language theory: the Chomsky hierarchy, context sensitivity, grammar ambiguity, parse forests, and so on.
Same with [file] paths, HTTP header values, and other things that seem alluring to dismiss as only being text.
It wouldn't be a problem had I not seen, time and again, how the "text" breaks: URLs with malformed query parameters because why not just do `+ '?' + entries.map(([ name, value ]) => name + "=" + value).join("&")`, how hard can it be? Paths that assume a leading slash, or the lack thereof, etc.
I believe the article was born precisely of the same class of frustrations. So I am now bringing the same mantra everywhere with me: "There is no such type as string". Parse at earliest opportunity, lazily if the language allows it (most languages do) -- breadth first so as to not pay upfront, just don't let the text slip through.
I am talking from experience, really, your mileage may vary.
Along with all the general discussion, I found the concept of defensive parsing striking a chord when reading this as well: "The Seven Turrets of Babel: A Taxonomy of LangSec Errors and How to Expunge Them", https://langsec.org/papers/langsec-cwes-secdev2016.pdf
I'd love for these ideas to take hold at work, but I'm on the fringes in infosec, not a dev.
I'm not very familiar with functional programming and Haskell in particular. I think I understand the gist of this article, and "use data structures that make illegal states unrepresentable". However, is there a similar article but written with more common languages (C#, C++, Java, Go) in mind? Or is a big part of this concept only relevant for strong functional languages with sum types and pattern matching?
It is relevant to all languages with static type checkers from idris to python. But of course since it is about expressing properties via the type system the more expressive that is the easier and more applicable.
Java has sum types, incidentally. And pattern matching.
> Or is a big part of this concept only relevant for strong functional languages with sum types and pattern matching?
It need not strictly be a pure functional language for type-driven style to be usable. Type-driven style only requires the fact that some type cannot be assigned to another type, so it's kind of possible to do even in a language like C, as `int a = (struct Foo) {};` would get rejected by C compilers.
However, I don't think it's doable in languages with structural type systems like Typescript or Go's interface without a massive ergonomic hit for minimal gain. Languages with a structural type system are deliberately designed to remove the intentionality of "type T cannot be assigned to type S" in exchange for developer ergonomics.
> However, is there a similar article but written with more common languages (C#, C++, Java, Go) in mind?
For C#, there's an F#-focused article, some of which I believe can be applied to C# as well:
For modern Java, there is some attempt at popularizing "Data-Oriented Programming" which just rebranded "Type-driven design". Surprisingly, with JDK 21+, type-driven style is somewhat viable there, as there is algebraic data type via `record` + `sealed` and exhaustive pattern match & destructuring.
For Rust, due to the new mechanics introduced by its affine type system, there is much more flexibility in what you could express in Rust types compared to more common languages.
Making illegal states unrepresentable sounds like a great idea, and it is, but I see it getting applied without nuance. “Has multiple errors” can be a valid type. Instead of bailing immediately, you can collect all of the errors so that they can be reported all together rather than forcing the user to fix one error at a time.
Is this not `Result<Whatever, List<Error>>`? There's nothing enforcing that the error side needs to be the value-based equivalent of a single instance of an Exception class.
The important part is not to expose a "String -> Whatever" function publicly.
Maybe I am being contrarian, or maybe I don't understand; if I am reading input, I am always going to validate that input after parsing. Especially if it is from a user.
I understand that they should be separate, but they should be very close together.
> if I am reading input, I am always going to validate that input after parsing.
In the "parse, don't validate" mindset, your parsing step is validation but it produces something that doesn't require further validation. To stick with the non-empty list example, your parse step would be something like:
    parse :: [a] -> Maybe (NonEmpty a)
    parse (x:xs) = Just (x :| xs)
    parse []     = Nothing
So when you run this you can assume that the data is valid in the rest of the code (sorry, my Haskell is rusty so this is a sketch, not actual code):
    process data =
      case parse data of
        Just valid ->
          ... -- further uses of valid can assume parsing succeeded
        Nothing ->
          ... -- if parsing failed, handle the error here, once
That has performed validation, but by parsing it also produces a value that doesn't require any revalidation. Every function that takes the parsed data as an argument can ignore the possibility that the data is invalid. If all you do is validate (returning true/false):
    validate (_:_) = True
    validate []    = False
Then you don't have that same guarantee. You don't know that, in future uses, that the data is actually valid. So your code becomes more complex and error-prone.
    process data =
      if validate data then use data else fail "Well shit"

    use (h:t) = do_something_with h t
    use []    = fail "This shouldn't have happened, we validated it, right? Must have been called without the data being validated first."
The parse approach adds a guarantee to your code, that when you reach `use` (or whatever other functions) with parsed and validated data that you don't have to test that property again. The validate approach does not provide this guarantee, because you cannot guarantee that `use` is never called without first running the validation. There is no information in the program itself saying that `use` must be called after validation (and that validation must return true). Whereas a version of `use` expecting NonEmpty cannot be called without at least validating that particular property.
Suppose you're receiving bytes representing a User at the edge of your system. If you put json bytes into your parser and get back a User, then put your User through validation, that means you know there are both 'valid' Users and 'invalid' Users.
Instead, there should simply be no way to construct an invalid User. But this article pushes a little harder than that:
Does your business logic require a User to have exactly one last name, and one-or-more first names? Some people might go as far as having a private-constructor + static-factory-method create(..), which does the validation, e.g.
class User {
private List<String> names;
private User(List<String> names) {..}
public static User create(List<String> names) throws ValidationException {
// Check for name rules here
}
}
Even though the create(..) method above validates the name rules, you're still left holding a plain old List-of-Strings deeper in the program when it comes time to use them. The name rules were validated and then thrown away! Now do you check them when you go to use them? Maybe?
If you encode your rules into your data-structure, it might look more like:
class User {
String lastName;
NeList<String> firstNames;
private User(List<String> names) throws ValidationException {..}
}
If I were doing this for real, I'd probably have some Name rules too (as opposed to a raw String). E.g. only some non-empty collection of utf8 characters which were successfully case-folded or something.
Is this overkill? Do I wind up with too much code by being so pedantic? Well no! If I'm building valid types out of valid types, perhaps the overall validation logic just shrinks. The above class could be demoted to some kind of struct/record, e.g.
record User(Name lastName, NeList<Name> firstNames);
Before I was validating Names inside User, but now I can validate Names inside Name, which seems like a win:
class Name {
private String value;
private Name (String name) throws ValidationException {..}
}
This article always ends up being relevant once in a while.
Recently, I've been trying to make an LLM output a specific format.
It turns out that no matter how you write the prompt and perform validation, it will never be as effective as simply constraining the output with a proper BNF (via a llama.cpp grammar file).
Semi-tangent, but I am curious: for those with more experience in Python, do you just pass around generic Pandas DataFrames, or do you parse each row into an object and write logic that manipulates those instead?
Speaking personally, I try not to write code that passes around dataframes at all. I only really want to interact with them when I have to in order to read/write parquet.
Pass as immutable values, and try to enforce schema (eg, arrow) to keep typed & predictable. This is generally easy by ensuring initial data loads get validated, and then basic testing of subsequent operations goes far.
If python had dependent types, that's how i'd think about them, and keeping them typed would be even easier, eg, nulls sneaking in unexpectedly and breaking numeric columns
When using something like dask, which forces stronger adherence to typings, this can get more painful
The circumstances where you would use one or the other are vastly different. A dataframe is an optimized datastructure for dealing with columnar data, filtering, sorting, aggregating, etc. So if that is what you are dealing with, use a dataframe.
The goal is more about cleaning and massaging data at the perimeter (coming in, and going out) versus what specific tool (a collection of objects vs a dataframe) is used.
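One middle ground is to do the row-to-object parsing exactly once, at that perimeter. A sketch (the Trade shape is invented for illustration):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Trade:
    symbol: str
    qty: int
    price: float

def load_trades(raw_rows):
    """Perimeter parsing: raw dicts (e.g. from csv.DictReader or a
    DataFrame's to_dict('records')) become typed Trade values here, once."""
    return [Trade(r["symbol"], int(r["qty"]), float(r["price"]))
            for r in raw_rows]

trades = load_trades([{"symbol": "XYZ", "qty": "10", "price": "9.5"}])
```

Inside the system you then pass Trade values (or a schema-validated frame) around, never raw rows.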
I'll be honest, as someone not familiar with Haskell, one of my main takeaways from this article is going down a rabbit hole of finding out how weird Haskell is.
The casualness at which the author states things like "of course, it's obvious to us that `Int -> Void` is impossible" makes me feel like I'm being xkcd 2501'd.
If you spend your life talking about bool having two values, and then need to act as if it has three or 256 values or whatever, that's where the weirdness lives.
In C, true doesn't necessarily equal true.
In Java (myBool != TRUE) does not imply that (myBool == FALSE).
Maybe you could do with some weirdness!
In Haskell:
Bool has two members: True & False. (If it's True, it's True. If it's not True, it's False).
Unit has one member: ()
Void has zero members.
To be fair I'm not sure why Void was raised as an example in the article, and I've never used it. I didn't turn up any useful-looking implementations on hoogle[1] either.
The author's point here is great, but the post does (imho) a poor job illustrating it.
The tl;dr on this is: stop sprinkling guards and if statements all over your codebase. Convert (parse) the data into truthful objects/structs/containers at the perimeter. The goal is to do that work at the boundaries of your system, so that inside of your system you can stop worrying about it and trust the value objects you have.
I think my hangup here is on the use of terms parse vs validate. They are not the right terms to describe this.
I understand where you're coming from, but these terms seem fine to me:
This is exactly what, for example, Rust's str::parse method is for. The documentation gives the example:
let four: u32 = "4".parse().unwrap();
You will so very often have text and want typed information, and parse is exactly how we do that transformation exactly once. Whereas validation is what it looks like when we try to make piecemeal checks later.
Hot take: Static typing is often touted as the end all be all, and all you need to do is "parse, don't validate" at the edge of your program and everything is fine and dandy.
In practice, I find that staunch static typing proponents are often middle or junior engineers who want to work with an idealised version of programming in their heads. In reality what you are looking for is "openness" and "consistency", because no amount of static typing will save you from poorly defined or optimised-too-early types that encode business logic constraints into programmatic types.
This is also why, in practice, a lot of customer input ends up being passed around as "strings", or as a raw copy plus a parsed copy: business logic will move faster than whatever code you can write and fix, and exposing it as just "types" breaks the process for future programmers extending your program.
> no amount of static typing will save you from poorly defined or optimised-too-early types that encode business logic constraints into programmatic types.
That's not a fault of type systems, though.
> because business logic will move faster than whatever code you can write and fix, and exposing it as just "types" breaks the process for future programmers to extend your program
That's a problem with overly-tight coupling, poor design, and poor planning, not type systems
> In practice, I find that staunch static typing proponents are often middle or junior engineers
I find people become enthusiastic about it around intermediate stages in their career, and they sometimes embrace it in ways that can be a bit rigid and over-zealous, but again it isn't a problem with type systems
> I find that staunch static typing proponents are often middle or junior engineers
I wouldn't go this far as it depends on when the individual is at that phase of their career. The software world bounces between hype cycles for rigorous static typing and full on dynamic typing. Both options are painful.
I think what's more often the case is that engineers start off by experiencing one of these poles and then after getting burned by it they run to the other pole and become zealous. But at some point most engineers will come to realize that both options have their flaws and find their way to some middle ground between the two, and start to tune out the hype cycles.
This is such a tired take. The burden of using static types is incredibly minimal and makes it drastically simpler to redesign your program around changing business requirements while maintaining confidence in program behavior.
how does this square with very senior people putting in a lot of effort to bolt fairly good type systems onto Python and JavaScript?
> business logic will move faster than whatever code you can write and fix, and exposing it as just "types" breaks the process for future programmers to extend your program.
I just don't understand how this is the case. Fields or methods or whatever are either there, or they are not. Type systems just expose that information. If you need to change the types later on, then change the types.
Example: Kotlin allows you to say "this field will never be null", and it also allows you to say "this field will either be null or not null". Java only allows the latter. If you want the latter in Kotlin, you can still just do that, and now you're able to communicate that (or the other option) to all of your callers.
Typed Python allows you to say "yeah this function returns Any, good luck!" and at least your callers know that. It also allows you to say "this function always returns a str".
I'm sorry, I don't like to title drop, but I am a Staff Data Engineer and I find that "type driven" development is an inappropriate world view for many programming contexts that I encounter. I use "world view" carefully as it makes a contractual assumption about reality -- "give me what I expect". Data processing does not always have the luxury of such imposition. In these contexts a dynamic and introspective world view is more appropriate, "What do we have here?" "What can we use?". In 2019 I would have felt crippled by use of Haskell in data processing contexts and have instead done much in Clojure in these intervening years, though now LLM assisted use of Haskell toward such tasks would be a fun spectator sport.
To amplify what yakshaving said, this may be the worst forum in the entire industry to title drop in. Half the people in any given article's comments are a CxO or Chief or Head or Director or Founder or whatever, or wrote the article, or invented the technology in the article, or are otherwise renowned for something or another.
seanwilson|19 days ago
pjerem|19 days ago
masklinn|19 days ago
> In most strong statically typed languages, you wouldn't often pass strings and generic dictionaries around.
In most strong statically typed languages you would not, but in most statically typed codebases you would. Just look at the Windows interfaces. In fact, while Simonyi's original "apps hungarian" had dim echoes of static types, those got completely washed out in "systems hungarian", which was used widely in C++, which is already a statically typed language.
chriswarbo|19 days ago
It's tricky because `class` conflates a lot of semantically-distinct ideas.
Some people might be making `Date` objects to avoid writing defensive code everywhere (since classes are types), but...
Other people might be making `Date` objects so they can keep all their date-related code in one place (since classes are modules/namespaces, and in Java classes even correspond to files).
Other people might be making `Date` objects so they can override the implementation (since classes are jump tables).
Other people might be making `Date` objects so they can overload a method for different sorts of inputs (since classes are tags).
I think the pragmatics of where code lives, and how the execution branches, probably have a larger impact on such decisions than safety concerns. After all, the most popular way to "avoid writing defensive code everywhere" is to.... write unsafe, brittle code :-(
munificent|19 days ago
There's nothing natural about this. It's not like we're born knowing good object-oriented design. It's a pattern that has to be learned, and the linked article is one of the well-known pieces that helped a lot of people understand this idea.
thom|19 days ago
The bitter lesson of programming languages is that whatever clever, fast, safe, low-level features a language has, someone will come along and create a more productive framework in a much worse language.
Note, this framework - perhaps the very last one - is now ‘AI’.
noelwelsh|19 days ago
One bug was in a system that had an Email type but didn't actually enforce the invariants of emails. The one that caused the problem was it didn't enforce case insensitive comparisons. Trivial to fix, but it was encased in layers of stuff that made tracking it down difficult.
The other was a home grown ORM that used the same optional / maybe type to represent both "leave this column as the default" and "set this column to null". It should be obvious how this could go wrong. Easy to fix but it fucked up some production data.
Both of these are failures to apply "parse, don't validate". The former didn't enforce the invariants it had supposedly parsed the data into. The latter didn't differentiate between two different parse results.
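The ORM bug is the classic case for a dedicated sum type rather than an overloaded optional. A minimal Rust sketch of the idea (the names `ColumnUpdate` and `render` are mine, not from the comment): each intent gets its own variant, so "leave as default" and "set to NULL" can no longer be confused.

```rust
// Three distinct intents, one variant each, instead of overloading
// Option<T> (where None is ambiguous between "don't touch" and "NULL").
#[derive(Debug, PartialEq)]
enum ColumnUpdate<T> {
    KeepDefault, // leave the column as the database default
    SetNull,     // explicitly write NULL
    SetTo(T),    // write this value
}

// Returns None when the column should be omitted from the UPDATE
// entirely, Some(fragment) otherwise.
fn render<T: std::fmt::Display>(update: &ColumnUpdate<T>) -> Option<String> {
    match update {
        ColumnUpdate::KeepDefault => None,
        ColumnUpdate::SetNull => Some("NULL".to_string()),
        ColumnUpdate::SetTo(v) => Some(v.to_string()),
    }
}

fn main() {
    assert_eq!(render::<i32>(&ColumnUpdate::KeepDefault), None);
    assert_eq!(render::<i32>(&ColumnUpdate::SetNull), Some("NULL".to_string()));
    assert_eq!(render(&ColumnUpdate::SetTo(42)), Some("42".to_string()));
}
```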
bcrosby95|19 days ago
Java makes it a pain though, so most code ends up primitive obsessed. Other languages make it easier, but unless the language and company has a strong culture around this, they still usually end up primitive obsessed.
css_apologist|19 days ago
You can get ever so gradually stricter with your types, which means that the operations you perform on a narrow type are even more solid
It is also 100% possible to do in dynamic languages, it's a cultural thing
wat10000|19 days ago
C-like languages have this a little bit, in that you'll probably make a struct/class from whatever you're looking at and pass it around rather than a dictionary. But dates are probably just stored as untyped numbers with an implicit meaning, and optionals are a foreign concept (although implicit in pointers).
Now, I know that this stuff has been around for decades, but it wasn't something I'd actually use until relatively recently. I suspect that's true of a lot of other people too. It's not that we forgot why strong static type checking was invented, it's that we never really knew, or just didn't have a language we could work in that had it.
Archelaos|19 days ago
When the type is more complex, specific constraints should be used. For a real-life example: I designed a type for the occupancy of a hotel booking application. The number of occupants of a room must be positive and a child must be accompanied by at least one adult. My type Occupants has a constructor Occupants(int adults, int children) that verifies those conditions on construction (and also some maximum values).
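A sketch of that constructor in Rust rather than the comment's OO pseudocode; the maximum of 8 occupants and the error names are assumptions for illustration. The fields are private, so the checking constructor is the only way to obtain a value, and the rest of the program never re-validates the booking rules.

```rust
// Hypothetical sketch of the Occupants type described above.
#[derive(Debug)]
pub struct Occupants {
    adults: u32,
    children: u32,
}

#[derive(Debug, PartialEq)]
pub enum OccupantsError {
    Empty,    // occupancy must be positive
    NoAdults, // a child must be accompanied by at least one adult
    TooMany,  // assumed maximum of 8 occupants per room (illustrative)
}

impl Occupants {
    pub fn new(adults: u32, children: u32) -> Result<Occupants, OccupantsError> {
        if adults + children == 0 {
            return Err(OccupantsError::Empty);
        }
        if adults == 0 {
            return Err(OccupantsError::NoAdults);
        }
        if adults + children > 8 {
            return Err(OccupantsError::TooMany);
        }
        Ok(Occupants { adults, children })
    }

    pub fn total(&self) -> u32 {
        self.adults + self.children
    }
}

fn main() {
    assert!(Occupants::new(2, 1).is_ok());
    assert_eq!(Occupants::new(0, 2).unwrap_err(), OccupantsError::NoAdults);
    assert_eq!(Occupants::new(0, 0).unwrap_err(), OccupantsError::Empty);
}
```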
yakshaving_jgt|19 days ago
renox|18 days ago
jackpirate|19 days ago
Email honestly seems much more straightforward than dates... Sweden had a Feb 30 in 1712, and there's all sorts of date ranges that never existed in most countries (e.g. the American colonies skipped September 3-13 in 1752).
conartist6|19 days ago
Essentially the article says that each data type should have a single location in code where it is constructed, which is a very class-based way of thinking. If your Java class only has a constructor and getters, then you're already home free.
Also for the method to be efficient you need to be able to know where an object was constructed. Fortunately class instances already track this information.
jiehong|19 days ago
So things stay as maps or arrays all the way through.
brooke2k|19 days ago
macintux|19 days ago
https://hn.algolia.com/?query=Parse%2C%20Don%27t%20Validate&...
However, it's more effective to throw quotes into the mix, which reduces false positives.
https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...
zdw|19 days ago
The point of the article is about the locality of validation logic in a system. Parsing in this context can be thought of as consolidating all the logic that makes structure and validity determinations about incoming data into one place in the program.
This lets you then rely on the fact that you have valid data in a known structure in all other parts of the program, which don't have to be crufted up with validation logic when used.
Related, it's worth looking at tools that further improve structure/validity locality like protovalidate for protobuf, or Schematron for XML, which allow you to outsource the entire validity checking to library code for existing serialization formats.
jmholla|19 days ago
My main example was working with a co-worker whose application used a number of timestamps. They were passing them around as strings and parsing and doing math with them at the point of usage. By parsing the inputs into the language's timestamp representation instead, their internal interfaces became much cleaner and their purpose much more obvious, since that math could be exposed at the invocation rather than buried in the function logic and, by necessity, in complex function names.
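As a sketch of that idea, assuming epoch-seconds strings for simplicity (Rust's standard library has no calendar-date parsing, so a real codebase would reach for a date/time library): parse once at the boundary, then pass the typed value around and do the math on it at the call site.

```rust
use std::time::{Duration, SystemTime, UNIX_EPOCH};

// Hypothetical helper: parse an epoch-seconds string into a real
// timestamp exactly once, at the boundary.
fn parse_timestamp(s: &str) -> Result<SystemTime, std::num::ParseIntError> {
    let secs: u64 = s.trim().parse()?;
    Ok(UNIX_EPOCH + Duration::from_secs(secs))
}

// Internal code then works on SystemTime/Duration, so the "math" is
// visible at the invocation instead of hidden behind string juggling.
fn elapsed_between(start: SystemTime, end: SystemTime) -> Option<Duration> {
    end.duration_since(start).ok()
}

fn main() {
    let a = parse_timestamp("100").unwrap();
    let b = parse_timestamp("160").unwrap();
    assert_eq!(elapsed_between(a, b), Some(Duration::from_secs(60)));
    assert!(parse_timestamp("not a timestamp").is_err());
}
```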
solomonb|19 days ago
munk-a|19 days ago
r4victor|19 days ago
A sincere question to Go programmers – what's your take on "Parse, Don't Validate"?
OkayPhysicist|18 days ago
kubanczyk|19 days ago
Always aspire to that. Translating that to Go conventions, the constructor has to have a signature along the lines of `func NewT(raw string) (T, error)`, returning either a valid T or an error, never a half-initialized T.
Such signatures exist in the stdlib, e.g. https://cs.opensource.google/go/go/+/refs/tags/go1.25.7:src/... although I've met old hands that were surprised by it.
In larger codebases, I've noticed an emergent phenomenon: usually the T{} itself (bypassing the NewT constructor) tends to be unusable anyway, hence the constructor will enforce "parse, don't validate" just well enough. Only a very trivial T{} won't have a nilable private field, such as a pointer, func, or chan.
I'd say that "making zero a meaningful value" does not scale well as the codebase grows.
taylorallred|19 days ago
dang|19 days ago
Parse, Don't Validate (2019) - https://news.ycombinator.com/item?id=41031585 - July 2024 (102 comments)
Parse, don't validate (2019) - https://news.ycombinator.com/item?id=35053118 - March 2023 (219 comments)
Parse, Don't Validate (2019) - https://news.ycombinator.com/item?id=27639890 - June 2021 (270 comments)
Parse, Don’t Validate - https://news.ycombinator.com/item?id=21476261 - Nov 2019 (230 comments)
Parse, Don't Validate - https://news.ycombinator.com/item?id=21471753 - Nov 2019 (4 comments)
macintux|19 days ago
Parsix - https://news.ycombinator.com/item?id=27166162
TypeScript - https://news.ycombinator.com/item?id=28425435
C - https://news.ycombinator.com/item?id=44507405
Without comments:
Non-blank strings in Rust - https://news.ycombinator.com/item?id=34947030
Email type in Rust - https://news.ycombinator.com/item?id=34946791
Java - https://news.ycombinator.com/item?id=29250169
d0liver|19 days ago
Point being: I think the rule is slightly more general, although this explanation is probably more intuitive.
jmull|19 days ago
pcwelder|19 days ago
This, along with John Ousterhout's talk [1] on deep interfaces was transformational for me. And this is coming from a guy who codes in python, so lots of transferable learnings.
[1] https://www.youtube.com/watch?v=bmSAYlu0NcY
throw567643u8|19 days ago
sn9|19 days ago
kayo_20211030|19 days ago
Unfortunately, it's somewhat of a religious argument about the one true way. I've worked on both sides of the fence, and each field is equally green in its own way. I've used OCaml, with static typing, and Clojure, with maybe-opt-in schema checking. They both work fine for real purposes.
The big problem arrives when you mix metaphors. With typing, you're either in, or you're out - or should be. You ought not to fall between stools. Each point of view works fine, approached in the right way, but don't pretend one thing is the other.
rorylaitila|19 days ago
To make it viable, all value objects are code-generated from model schemas, and then customized as needed (only about 5% need customization beyond basic data types). I have auto-upcasting on setters so you can code stringly when wanted, but everything is validated (very useful for writing unit tests more quickly). I only parse into types at boundaries or on writes/sets, not on reads/gets (this limits the amount of boxing, particularly when reading large amounts of data). Heavy use of reflection, and auto-wiring/dependency injection.
But with these conventions in place, I quite enjoy it. Easy to customize/narrow a type. One convention for all validation. External inputs are secure by default, with nice error messages. One place where all value validation happens (the ./values classes folder).
1-more|19 days ago
https://www.youtube.com/watch?v=IcgmSRJHu_8
gaigalas|19 days ago
IMHO this is distracting and sort of vain. It forces this "semantics" perspective onto the reader, just so the author can have a snappy slogan.
Also, not all languages have such freedom in type expressiveness. Some of them do, but at terrible trade-offs.
The truth is, if you try to be that expressive in a language that doesn't support it, you'll end up with a horror story. The article fails to mention that, and the snappy slogan makes it look like an absolute claim that you must internalize, some sort of deep truth that applies everywhere. It isn't.
Joel_Mckay|19 days ago
The best method is to walk the symbolic tree with a cost function and score the fitness of the data against expected structures. For example, mismatched or duplicate GUID/Account/permission/key fields reroute the message to the dead-letter queue for analysis, missing required fields trigger error messaging, and missing optional fields lower the qualitative score of the message content.
Parsers can be extremely unpredictable, and loosely typed formats are dangerous at times. =3
hackrmn|19 days ago
I see a lot of URLs being passed around as strings within systems perfectly capable of leveraging type theory and offering user-defined types, if only through the OOP goodness a lot of people would furiously defend. The URL, in this case, has often _already_ been parsed once, but is effectively "unparsed" and keeps being sent around as text in need of parsing at every "junction" of the system that needs to meaningfully access it, except that parsing is approached like some ungodly litany best avoided, and is thus foregone or lazily implemented with a regex where a regex isn't nearly sufficient. Perhaps it's because we lack parsers, by and large, or at the very least parser generators that are readily available, understandable (to your average developer), and simple enough to use without requiring an understanding of formal language theory, with its Chomsky hierarchy, context sensitivity, grammar ambiguity and parse forests, to say the least.
Same with [file] paths, HTTP header values, and other things that seem alluring to dismiss as only being text.
It wouldn't be a problem, had I not seen time and again how the "text" breaks -- URLs with malformed query parameters because why not just do `+ '?' + entries.map(([ name, value ]) => name + "=" + value).join("&")`, how hard can it be? (No escaping, for one.) Paths that assume a leading slash or the lack thereof, etc.
I believe the article was born precisely of the same class of frustrations. So I am now bringing the same mantra everywhere with me: "There is no such type as string". Parse at the earliest opportunity, lazily if the language allows it (most languages do) -- breadth first so as not to pay the cost upfront -- just don't let the text slip through.
I am talking from experience, really, your mileage may vary.
tlavoie|19 days ago
I'd love for these ideas to take hold at work, but I'm on the fringes in infosec, not a dev.
benhoyt|19 days ago
masklinn|18 days ago
Java has sum types, incidentally. And pattern matching.
lock1|18 days ago
However, I don't think it's doable in languages with structural type systems like Typescript or Go's interface without a massive ergonomic hit for minimal gain. Languages with a structural type system are deliberately designed to remove the intentionality of "type T cannot be assigned to type S" in exchange for developer ergonomics.
For C#, there's an F#-focused series, much of which I believe can be applied to C# as well:
F# - Railway Oriented Programming - https://fsharpforfunandprofit.com/rop/
F# - Designing with Types - https://fsharpforfunandprofit.com/series/designing-with-type...
For modern Java, there is an attempt at popularizing "Data-Oriented Programming", which is essentially rebranded type-driven design. Surprisingly, with JDK 21+, the type-driven style is somewhat viable there, as there are algebraic data types via `record` + `sealed`, plus exhaustive pattern matching and destructuring.
Inside Java Blog - Data-Oriented Programming - https://inside.java/2024/05/23/dop-v1-1-introduction/
Infoq - Data-Oriented Programming - https://www.infoq.com/articles/data-oriented-programming-jav...
For Rust, due to the new mechanics introduced by its affine type system, there is much more flexibility in what you could express in Rust types compared to more common languages.
Rust - Typestate Pattern - https://cliffle.com/blog/rust-typestate/
Rust - Newtype - https://rust-unofficial.github.io/patterns/patterns/behaviou...
sevensor|19 days ago
mh2266|19 days ago
The important part is not to expose a "String -> Whatever" function publicly.
exodys|19 days ago
I understand that they should be separate, but they should be very close together.
Jtsummers|19 days ago
In the "parse, don't validate" mindset, your parsing step is validation but it produces something that doesn't require further validation. To stick with the non-empty list example, your parse step would be something like:
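The comment goes on to sketch this in Haskell; here is the same shape rendered in Rust instead, with hypothetical names. `parse_non_empty` turns a `Vec` into a `NonEmpty` exactly once, and every later function that takes `NonEmpty` can ignore the empty case, whereas the bool-returning check throws that knowledge away.

```rust
// A non-empty list: the type itself makes emptiness unrepresentable.
struct NonEmpty<T> {
    head: T,
    tail: Vec<T>,
}

// Parse: checks once, and records the result in the type.
fn parse_non_empty<T>(mut items: Vec<T>) -> Option<NonEmpty<T>> {
    if items.is_empty() {
        None
    } else {
        let head = items.remove(0);
        Some(NonEmpty { head, tail: items })
    }
}

// Validate: checks, then discards the knowledge -- callers of later
// functions can't rely on it having happened.
fn validate_non_empty<T>(items: &[T]) -> bool {
    !items.is_empty()
}

// No empty case to handle here: the type rules it out.
fn first<T>(list: &NonEmpty<T>) -> &T {
    &list.head
}

fn len<T>(list: &NonEmpty<T>) -> usize {
    1 + list.tail.len()
}

fn main() {
    let ne = parse_non_empty(vec![1, 2, 3]).expect("non-empty");
    assert_eq!(*first(&ne), 1);
    assert_eq!(len(&ne), 3);
    assert!(parse_non_empty::<i32>(vec![]).is_none());
    assert!(validate_non_empty(&[1, 2]));
}
```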
So when you run this you can assume that the data is valid in the rest of the code (sorry, my Haskell is rusty so this is a sketch, not actual code). That has performed validation, but by parsing it also produces a value that doesn't require any revalidation. Every function that takes the parsed data as an argument can ignore the possibility that the data is invalid.
If all you do is validate (returning true/false), then you don't have that same guarantee. You don't know that, in future uses, the data is actually valid. So your code becomes more complex and error-prone.
The parse approach adds a guarantee to your code: when you reach `use` (or whatever other functions) with parsed and validated data, you don't have to test that property again. The validate approach does not provide this guarantee, because you cannot guarantee that `use` is never called without first running the validation. There is no information in the program itself saying that `use` must be called after validation (and that validation must return true). Whereas a version of `use` expecting NonEmpty cannot be called without at least validating that particular property.
mrkeen|19 days ago
Instead, there should simply be no way to construct an invalid User. But this article pushes a little harder than that:
Does your business logic require a User to have exactly one last name, and one-or-more first names? Some people might go as far as having a private-constructor + static-factory-method create(..), which does the validation, e.g.
Even though the create(..) method above validates the name rules, you're still left holding a plain old List-of-Strings deeper in the program when it comes time to use them. The name rules were validated and then thrown away! Now do you check them when you go to use them? Maybe?
If you encode your rules into your data structure, it might look more like:
If I were doing this for real, I'd probably have some Name rules too (as opposed to a raw String), e.g. only some non-empty collection of UTF-8 characters which were successfully case-folded or something.
Is this overkill? Do I wind up with too much code by being so pedantic? Well no! If I'm building valid types out of valid types, perhaps the overall validation logic just shrinks. The above class could be demoted to some kind of struct/record, e.g.
Before I was validating Names inside User, but now I can validate Names inside Name, which seems like a win.
mmis1000|19 days ago
Recently, I've been trying to make an LLM output a specific format.
It turns out that no matter how you write the prompt and validate afterwards, it will never be as effective as simply constraining the output with a proper BNF grammar (via a llama.cpp grammar file).
yakshaving_jgt|19 days ago
https://www.youtube.com/watch?v=MkPtfPwu3DM
curiousgal|19 days ago
tomtom1337|19 days ago
If you need it, use a dataframe validation library to ensure that values are within certain ranges.
There are not yet good, fast implementations of proper types in Python dataframes (or databases for that matter) that I am aware of.
adammarples|19 days ago
lmeyerov|19 days ago
If Python had dependent types, that's how I'd think about them, and keeping them typed would be even easier -- e.g., catching nulls sneaking in unexpectedly and breaking numeric columns
When using something like dask, which forces stronger adherence to typings, this can get more painful
whalesalad|19 days ago
The goal is more about cleaning and massaging data at the perimeter (coming in, and going out) versus what specific tool (a collection of objects vs a dataframe) is used.
cbondurant|19 days ago
LordDragonfang|19 days ago
The casualness at which the author states things like "of course, it's obvious to us that `Int -> Void` is impossible" makes me feel like I'm being xkcd 2501'd.
mrkeen|19 days ago
In C, true doesn't necessarily equal true.
In Java (myBool != TRUE) does not imply that (myBool == FALSE).
Maybe you could do with some weirdness!
In Haskell: Bool has two members: True and False. (If it's True, it's True. If it's not True, it's False.) Unit has one member: (). Void has zero members.
To be fair I'm not sure why Void was raised as an example in the article, and I've never used it. I didn't turn up any useful-looking implementations on hoogle[1] either.
[1] https://hoogle.haskell.org/?hoogle=a+-%3E+Void&scope=set%3As...
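For what it's worth, the same zero-member type exists in Rust as an enum with no variants (the standard library's version is `std::convert::Infallible`). A quick sketch, under those assumptions, of why a total function into such a type cannot exist:

```rust
// Rust analogue of the Void discussion above: an enum with no variants
// has zero values, so no total function can ever return one -- which is
// why a (total) function Int -> Void cannot exist.
enum Void {}

// From a Void you can conjure any type, because this code path can
// never actually run: the match needs no arms.
fn absurd<T>(v: Void) -> T {
    match v {}
}

fn main() {
    // Unit has exactly one value, (), and occupies zero bytes.
    assert_eq!(std::mem::size_of::<()>(), 0);
    // Void occupies zero bytes too, but unlike (), it has zero values:
    // `absurd` can be named, but never invoked.
    assert_eq!(std::mem::size_of::<Void>(), 0);
    let _ = absurd::<u8>;
}
```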
metalliqaz|19 days ago
whalesalad|19 days ago
The tl;dr on this is: stop sprinkling guards and if statements all over your codebase. Convert (parse) the data into truthful objects/structs/containers at the perimeter. The goal is to do that work at the boundaries of your system, so that inside of your system you can stop worrying about it and trust the value objects you have.
I think my hangup here is on the use of terms parse vs validate. They are not the right terms to describe this.
tialaramex|19 days ago
danieltanfh95|19 days ago
steve_adams_86|19 days ago
jghn|19 days ago
solomonb|19 days ago
mh2266|19 days ago
beastman82|19 days ago
While we're sharing anecdotal data, I've experienced the opposite.
The older, more experienced fellows love static types and the new ones barely understand what they're missing in javascript and python.
yakshaving_jgt|19 days ago
waffletower|19 days ago
kstrauser|19 days ago
See also: "Did you win the Putnam?"
yakshaving_jgt|19 days ago
I am a Chief Technology Officer[^1].
Your opinion here is common, and misguided.
Here is why: https://lexi-lambda.github.io/blog/2020/01/19/no-dynamic-typ...
---
[^1]: Literally nobody cares.