I have a theory that the worse is better approach begets an environment where the worse is better approach is better.
At least hypothetically, I think there's an approach which is not "the right thing" or "worse is better" but rather more like "the right foundations".
Most interface complexity in my experience seems to be inherited from underlying interface complexity, and it takes a lot of work to fix that underlying interface complexity. This, I think, is where "worse is better" shines. If you try to apply a "the right thing" approach to a system where you're dealing with shitty underlying interfaces (i.e. every popular operating system out there, including every Unix and NT system) you end up with endless complexity and performance loss. So obviously nobody will want to do "the right thing", and everyone who takes the "worse is better" approach will end up way ahead of you in terms of delivering something. People will be happy (because people are almost always happy regardless of how crap your product is).
On the other hand, designing something with "the right foundations" means that "the right thing" no longer needs to involve "sacrifice implementation simplicity in favour of interface simplicity" to anywhere near the same extent because your implementation can focus on implementing whatever interface you want rather than first paving over a crappy underlying interface.
But the difficulty of "the right foundations" is that nobody knows what the right foundations are the first 10 times they implement them. This approach requires being able to rip the foundations up a few times. And nobody wants that, so "worse is better" wins again.
I think there's a lot of truth to this. It reminds me of an idea in economics about the "second-best". From the wikipedia page:
"In welfare economics, the theory of the second best concerns the situation when one or more optimality conditions cannot be satisfied. The economists Richard Lipsey and Kelvin Lancaster showed in 1956 that if one optimality condition in an economic model cannot be satisfied, it is possible that the next-best solution involves changing other variables away from the values that would otherwise be optimal. Politically, the theory implies that if it is infeasible to remove a particular market distortion, introducing one or more additional market distortions in an interdependent market may partially counteract the first, and lead to a more efficient outcome."
That worse-is-better is self-reinforcing and that it's the only stable strategy in an environment with less-than-perfect cooperation (i.e. it's the only Nash equilibrium) may both be true at the same time. In fact, if the latter is true then the former is almost certainly true.
The real question is, then, whether doing "the right thing" is a stable and winning strategy at all, i.e. viable and affordable. As you yourself suspect, the answer may well be no. Not only because it takes a few tries to figure out the right foundations, but also because what foundation is right is likely to change over time as conditions change (e.g. hardware architecture changes, programming practices -- such as the use of AI assistants -- change etc.).
I think this ties back to the idea of "get it working, then once it's working go back and make it fast | performant | better for whatever meaning of better".
I think much of the consternation towards "worse is better" comes from re-inventing things to achieve the "make it better" improvements from scratch instead of leveraging existing knowledge. Re-inventing might be fine, but we shouldn't throw away knowledge and established techniques if we can avoid it.
How would you design a network interface using your right foundations model? I'm not talking about HTML or whatnot.
I have some sort of medium, copper, fiber, whatever, and I would like to send 10 bytes to the other side of it. What are the right foundations that would lead to an implementation which isn't overly complex?
I'm always happy whenever this old article goes viral. For two reasons: First, learning to accept the fact that the better solution doesn't always win has helped me keep my sanity over more than two decades in the tech industry. And second, I'm old enough to have a pretty good idea what the guy meant when he replied, "It takes a tough man to make a tender chicken."
> Both early Unix and C compilers had simple structures, are easy to port, require few machine resources to run, and provide about 50%-80% of what you want from an operating system and programming language.
> Unix and C are the ultimate computer viruses.
The key argument behind worse-is-better is that an OS which is easy to implement will dominate the market in an ecosystem with many competing hardware standards. Operating systems, programming languages, and software in general have not worked this way in a long time.
Rust is not worse-is-better, but it's become very popular anyway, because LLVM can cross-compile for anything. Kubernetes is not worse-is-better, but nobody needs to reimplement the k8s control plane. React is not worse-is-better, but it only needs to run on the one web platform, so it's fine.
Worse-is-better only applies to things that require an ecosystem of independent implementers providing compatible front-ends for diverging back-ends, and we've mostly standardized beyond that now.
There are some differences between your examples in my opinion:
Rust started as an experiment by a Mozilla team to replace C++ with something that would help them compete with Chrome by developing safe multi-threaded code more efficiently. It took a lot of experiments to get to the current type system, most of which gives real advantages by using affine types, but the compiler is at this point clearly over-engineered for the desired type system (and there are already ideas on how to improve on it). It's still too late to restart, as it looks like it takes 20 years to productionize something like Rust.
As for React, I believe it's an over-engineered architecture from the start for most web programming tasks (and for companies / programmers that don't have separate frontend and backend teams), but low interest rates + AWS/Vercel pushed it on all newcomers (and most programmers are new programmers, as the number of programmers grew exponentially).
HTMX and Rails 8 are experiments in the opposite direction (moving back to the servers, nobuild, noSAAS), but I believe there's a lot of space to further simplify the web programming stack.
I remember when I had a lesson about OSI layers, where the teacher carefully described all the layers in detail
and then said something like "most of this is not important, these layers don't really exist, TCP/IP got popular first because it's just much simpler than OSI was"
Oh, there's an entirely different feature-length article to be written/found about how packet switching beat circuit switching and the "Internet approach" beat the telco approach. The great innovation of being able to deploy devices at the edges without needing clearance from the center.
I don't think very many people even remember X.25. The one survivor from all the X standards seems to be X.509?
I enjoyed how we got taught the two models in the 1990s, and why one has the four layers you need and the "standard" has seven layers instead.
The professor asked "How many layers do you count?" - "Seven." - "How many members do you think the ISO/OSI committee had that designed it between them?" - [laughter] - "Seven."
IMO the OSI layer system (even though using TCP/IP suite) has some merit in education. To most of us, the concept of layering protocols may seem obvious, but I've talked to people who are just learning this stuff, and they have a lot of trouble understanding it. Emphasizing that each layer is (in theory) cleanly separated and doesn't know about layers above and below, is a very useful step towards understanding abstractions.
At a certain level, sure, but C++ at least has definitely lost out. In the 90s it seemed like it might really take over all sorts of application domains, it was incredibly popular. Now and for probably the last couple decades it and C have only kept around 10% of the global job market.
Isn't "Worse is better" just a restatement of "Perfect is the enemy of Good", only slanted to make better/Perfect sound more enticing?
>The right thing takes forever to design, but it is quite small at every point along the way. To implement it to run fast is either impossible or beyond the capabilities of most implementors.
A deer is only 80% of a unicorn, but waiting for unicorns to exist is folly.
Yes and no. "Worse is Better" also implies you allow someone outside your problem domain to define abstractions you use to decompose the problem domain (and construct the solution domain.) So... I mean... that's probably not TOO bad if they're well-understood and well-supported. Until it isn't and you have to waste a lot of time emulating a system that allows you to model abstractions you want/need to use.
But at the end of the day everyone knows never to assume STD I/O will write an entire buffer to disk and YOU need to check for EINTR and C++ allows you to wrap arbitrary code in try...catch blocks so if you're using a poorly designed 3rd party library you can limit the blast radius. And it's common now to disclaim responsibility for damages from using a particular piece of software so there's no reason to spend extra time trying to get the design right (just ship it and when it kills someone you'll know it's time to revisit the bug list. (Looking at YOU, Boeing.))
I do sort of wonder what happens when someone successfully makes the argument that C++ Exceptions are a solution often mis-applied to the problem at hand and someone convinces a judge that Erlang-like supervisory trees constitute the "right" way to do things and using legacy language features is considered "negligence" by the courts. We're a long way off from that and the punch line here is a decent lawyer can nail you on gross negligence even if you convinced your customer to sign a liability waiver (at least in most (all?) of the US.)
Which is to say... I've always thought there is an interplay between the "worse is better" concept and the evolution of tech law in the US. Tort is the water in which we swim; it defines the context for the code we write.
The critical point of the article holds true of everything in human social networks (be it religion/culture/philosophy/apps/industry...).
If you don't achieve virality, you're as good as dead. Once an episteme/meme spreads like wildfire there's very little chance for a reassessment based on value/function - because the scope is now the big axis of valuation.
It's actually worse because humanity is now a single big borg. Even 30-40 years back, there were sparsely connected pools where different species of fish could exist - not any more. The elites of every single country are part of the Anglosphere, and their populations mimic them (eventually).
This tumbling towards widespread mono-memetism in every single sphere of life is a deeply dissatisfying aspect of modern human life, not just for PL/OS/... but also for culture etc.
> If you don't achieve virality, you're as good as dead.
Are you? Maybe the worse solution peaks faster, but can be supplanted by a better solution in the future, like how Rust is displacing C/C++ in new projects. The better solution may never be popular yet persist.
"Worse is better" has become like "move fast and break things". They're both sayings that reveal an often-overlooked truth, but they have both been taken far too far and result in worse things for everybody.
I see what you mean. Yet I feel like the first one (at least, as outlined in the article) is more about accepting an inevitability that you probably have little control over, while the second is more often adopted as a cultural process guideline for things you can control. But that's just my impression.
I feel like we're seeing a bit of push-back today against worse-is-better in the area of languages. Rust in particular feels more like the MIT approach, albeit with an escape hatch via the explicit keyword "unsafe." Its type system is very thoroughly specified and correct as opposed to C's YOLO typing.
Rust is not the MIT approach, because an important aspect of that approach is that it's conceptually simple. Rust is a leviathan of complexity both in interface and implementation. Common Lisp is an MIT approach language, and approaches the same problems like memory and type safety by doing "the right thing" by default and offering more advanced options like type annotations and optimization levels in a "take it or leave it" manner. Rust will force you to program the way the compiler wants in the name of safety, while Common Lisp will allow you to program safely and freely, and decide which parts are important. An observation of this idea is that Rust compilers are terribly huge and slow because they use static type-tetris for everything, while Common Lisp compilers are very fast because they do most type-checking at runtime.
I actually take this as evidence that Rust will always remain niche. It's just a very complicated language. Go or Zig is much easier to learn and reason about.
In Go you can immediately tell what the fields are in a config yaml file just by looking at struct annotations. Try doing that with Rust's Serde. Super opaque in my opinion.
I think this is a self-delusion experienced by Rustaceans because they overvalue a certain type of software correctness and because Rust gets that right, the rest of its warts are invisible.
I assume that 'worse' often means finding a fit with the average and the masses. This ensures a longer existence; later you may absorb the "better" you didn't have early on. Look how dynamic languages started to get better data types, various traits (generators, closures..), JIT .. all things they could pluck out of old "glorious" languages that were .. somehow too advanced for the mainstream. It's a strange schizophrenic situation.
I think the worse-is-better philosophy is not well encapsulated with the 4 priorities given. Perhaps it is 4 completely different priorities. Here's a strawman.
1. Minimal -- the design and implementation must be as small as possible, especially the scope (which should be deliberately "incomplete")
2. Timely -- the implementation must be delivered as soon as feasible, even if it comes before the design (get it working first, then figure out why)
3. Relevant -- the design and implementation must address an important, unmet need, eschewing needs that are not urgent at the time (you can iterate or supplement)
4. Usable -- the implementation must be integrated with the existing, working and stable infrastructure (even if that integration causes design compromises)
The other dimensions, simplicity, correctness, consistency, and completeness are very nice to have, but they are not the primary drivers of this philosophy.
JavaScript really illustrates the ultimate path-dependence of evolution. It got widely deployed during a boom time and therefore we are stuck with it forever.
Yup first thing I thought. That pathetic piece of crap conceived in basically 15 days (not kidding)... BUT it is what we have on the front-end for web apps so there's that. JavaScript is the mediocre turd I love to hate.
I know this is an article about Lisp and the specific usage of this term in the context of software acceptance, but when you use a title that provocative I want to speak specifically about the idea of "Worse is Better" with respect to a more literal idea of "sometimes things get worse overtime but you are told they have improved"
For example, why is it that central vacuums are more rare in 2024 than they were in the 1980s, despite them being superior in every way compared to regular ones?
"Worse" vacuums are "better" for the economy? (because Dyson makes jobs and consumes resources?)
EINTR's design is one of computing's absolute classics. To MIT and New Jersey, we should add the McDougals approach: "I cannot work under these conditions." When faced with the PC loser-ing issue, just don't implement the code in question.
McDougals resolves the apparent conflict between the other two. It blames the interrupt hardware as the root cause. It produces non-working, incomplete software. It's kind of a modest proposal.
However, it also produces no ripples in the design fabric. With MIT, the OS source is a maintenance nightmare. With NJ, modern software still has to deal with archaic idiosyncrasies like EINTR. With McDougals, all the "conflict-free" portions of the software advance, those that write themselves.
The result is likely immediately shelved, perhaps as an open source PoC. Over time, someone might write some inelegant glue that makes interrupts appear to behave nicely. Alternatively, the world might become perfect to match the software.
If nothing else, the software will have mimicked the way we learn. We use imperfect examples to draw the idealized conclusion. Even if it never gets to run, it will be more readable and more inspiring than either MIT or NJ.
RichardGabriel makes this observation on the survival value of software in the paper Lisp: Good News, Bad News, How to Win Big. See http://www.jwz.org/doc/worse-is-better.html for the section on WorseIsBetter.
For those seeking the first node, see http://web.archive.org/web/19990210084721/http://www.ai.mit.edu/docs/articles/good-news/good-news.html.
For even more context on WorseIsBetter see http://www.dreamsongs.com/WorseIsBetter.html. My favorite part is RichardGabriel arguing with himself.
what's the old saw about "unix design prioritizes simplicity over correctness, and on modern hardware simplicity is also no longer considered necessary"
Ironically, the main feature that separates LISP from other modern languages is homoiconicity/macros (now that features like garbage collection are mainstream).
And this leads to an easier implementation - parsing is easier (which is why code transformation via macros becomes easy).
A language which has that feature tends to get classified as a member of the Lisp family, even if it is horribly "unlispy" under the hood in its semantics.
What is better, a perfect design or one that exists?
Our mind tries to trick us into thinking that the choice is between perfect and not so perfect.
No, in real life it is common that aiming for the perfect ruins everything else, even the chance of the idea ever existing in real form, due to other constraints.
It is going to come down to context. For the most part, you never know quite what it is you are designing, so an iterative approach to design with fast feedback cycles will get you there quicker. Give people something to play with, see how they use it, and continue from there.
But sometimes you need to know what it is you are designing before giving it to people, because there are large risks associated with improvising. In those cases, making The Right Thing is still expensive, but it may reduce the risk of catastrophe.
I think, however, that the latter cases are rarer than most people think. There are ways of safely experimenting even in high-risk domains, and I believe doing so ultimately lowers the risk even more than doing The Right Thing from the start. Because even if we think we can spend years nailing the requirements for something down, there are always things we didn't think of but which operational experience can tell us quickly.
One counterargument is that people who claim "worse is better" are often making excuses for why their preferred technology didn't win.
Often in these arguments, worse means "shortcut" and better means "won". The difficulty is proving that not taking the shortcut had some other advantages that are assumed, like in the article.
I don’t get the name New Jersey approach, is it just the general association of New Jersey and poor quality? When I think of New Jersey and CS, I think of Princeton, which has a pretty good program IIRC.
—
Anyway, I wouldn’t put simplicity on the same level as the other things. Simplicity isn’t a virtue in and of itself, simplicity is valuable because it helps all of the other things.
Simplicity helps a bit with consistency, in the sense that you have more trouble doing really bizarre and inconsistent things in a simple design.
Simplicity helps massively with correctness. You can't check things that you don't understand. Personally, that means there's a complexity threshold after which I can't guarantee correctness. This is the main one I object to. Simplicity and correctness simply don't belong in different bullet-points.
Simplicity could be seen as providing completeness. The two ways to produce completeness are to either work for a really long time and make something huge, or reduce scope and make a little complete thing.
There was an article posted on here[1] a while back that I only just found again, introducing the term "expedience." The idea was that we think we live in a world where people have to have "the best" sweater, be on "the best" social network, drive "the best" car, etc. But when you look at what really WINS, it's not the best, it's the most "expedient" - i.e. sufficiently good, with built-in social proof, inoculated of buyer's remorse, etc.
Is Amazon "the best" place to go shopping? No, you might find better prices on individual items if you put a little more work into it, but it's the most expedient. Is Facebook/Instagram/Tiktok/insert here "the best" social network? No, but it is the most accessible, easy-to-use, useful one. Is a Tesla (perhaps outdated example since X) "the best" car - no, but it is the most expedient.
There is a tangent here that intersects with refinement culture as well. Among the group of society that (subconsciously) cares about these "expedient" choices, you see everyone and everything start to look the same.
"Expedient" is a common (or at least not rare) English word that means something like "practical and effective even if not directly attending to higher or deeper considerations."
For example, if two students in a class are having frequent confrontations that bring learning in the class to a halt, and attempts by teachers and counselors to address their conflict directly haven't been effective, the expedient solution might be to place them in separate classes. The "right thing" would be to address the problem on the social and emotional level, but if continued efforts to do so is likely to result in continued disruption to the students' education, it might be better to separate them. "Expedient" acknowledges the trade-off, while emphasizing the positive outcome.
Often a course of action is described as "expedient" when it seems to dodge an issue of morality or virtue. For example, if we solve climate change with geoengineering instead of by addressing thoughtless consumerism, corporate impunity, and lack of international accountability, many people would feel frustrated or let down by the solution because it would solve the problem without addressing the moral shortcomings that led to the problem. The word expedient stresses the positive side of this, the effectiveness and practicality of the solution, while acknowledging that it leaves other, perhaps deeper issues unaddressed.
> Is Amazon "the best" place to go shopping? No, you might find better prices on individual items if you put a little more work into it, but it's the most expedient.
It's not just that. Every time you do business with a new web site you assume additional risk. Amazon is a known quantity. You can be pretty sure that they are not going to outright scam you, and they aren't going to be hacked by script kiddies. There is a significant risk of getting a counterfeit item, but they have a very liberal return policy, so the real cost to you in this case is a minute or two to get a return code and possibly a trip to the nearest Whole Foods to drop it off.
Amazon sucks in many ways, but at least their suckage is a known quantity. Predictability has significant value.
Your model of the world is not perfect, so instead of trying to find a globally optimal solution, you are satisfied with a local optimum that exceeds some threshold and suffices. https://en.wikipedia.org/wiki/Satisficing
The number one reason I use Amazon, is not for the best prices, but because of their return policy. Amazon returns are actually often more painless than physical store returns.
Being able to return something predictably and easily outweighs a small difference in price.
> you might find better prices on individual items if you put a little more work into it
That extra work costs you money, too. Calculate how much your job pays you per hour, then you can deduce the $cost of spending more time to get a better deal.
I might argue it’s the one most known by the most people; the “best” takes time to get there. Google was better than Yahoo, but it took years to become #1 in terms of hits.
Related - thinking takes a lot of energy, so people prefer options that are cheap to evaluate. This definitely contributes to the preference for expedient options.
Non native here. What's the meaning of inoculated here?
It's not the first time that I struggle to parse this word. In Italian it keeps the original Latin meaning and can be translated as "injected with". You could inoculate a vaccine but you could also inoculate a poison, so it does not carry the immunity meaning by default. English (US?), as far as I can tell, uses it as a synonym of "immune", is that so?
I'll never understand the obsession with LISP. My guess is it just appeals to a certain type of person, sort of academic in my view. I'm not convinced that LISP was ever the-right-thing. The author didn't express anything about LISP vs C except to assert that C was a 50% solution and LISP was better.
I agree though that for practical purposes, practical solutions are just going to be more successful.
Over the years I've developed what I call the "lefthanded scissors" analogy: people assume that everyone's mind is wired the same way and that all good programmers are good in the same way and think in the same way, but what if that's not true? What if different people have a predisposition (like lefthandedness) to prefer different tools?
Then a righthanded person picks up the lefthanded scissors and deems them weird and uncomfortable. Which they are .. for the right hand of a right-handed person.
Other popular examples of such taste controversy are Python's semantic whitespace, the idiosyncrasies of Perl, the very unusual shape of J/APL, and anyone using FORTH for non-trivial purposes.
edit: https://news.ycombinator.com/item?id=41766753 comment about "other people's Lisp" reminds me, that working as a solo genius dev on your own from-scratch code and working in a team inside a large organization on legacy code are very different experiences, and the "inflexibility" of some languages can be a benefit to the latter.
> I'm not convinced that LISP was ever the-right-thing.
Remember that this has to be read in historical context. At the time C was invented, things like garbage collection, message-passing object-orientation, generics, rich sets of conditionals, first-class functions, etc. were brand spanking new. They were The Right Thing to do (even judged in the harsh light of hindsight), but also quite complicated to implement – so much so that the New Jersey people skipped right past most of it.
Today these things are par for the course. At the time, they were the The Right Thing that made the system correct but complex, and had adoption penalties. As time passes, the bar for The Right Thing shifts, and today, it would probably not be embodied by Lisp, but maybe by something like Haskell or Rust?
I like LISP, but I wouldn't say I'm an evangelist. Here's my 2 cents for why LISP has such a following.
1. LISP is easy to start with if you're not a programmer. There is very little syntax to get to grips with, and once you understand "everything is a list" it's super easy to expand out from there.
2. LISP really makes it easy to hack your way to a solution. With the REPL and the transparency of "code is data" model you can just start writing code and eventually get to a solution. You don't need to plan, or think about types, or deal with syntax errors. You just write your code and see it executed right there in the REPL.
For my part, I love LISP when it's just me doing the coding, but once you start adding other people's custom DSL macros or whatever the heck, it becomes unwieldy. Basically, I love my LISP and hate other people's LISP.
Such a weird take on HN. Lisp should be experimented with if only to appreciate the profound beauty of a small, powerful, cohesive design. It is a wholly different feeling from industry standard languages which are constantly changing.
In Lisp, almost all of the language’s power is in “user space.”
The ramifications for that are deep and your beliefs as to whether that is good are largely shaped by whether you believe computation is better handled by large groups of people (thus, languages should restrict users) or smaller groups of people (thus, languages should empower users).
LISP remains one of the only languages where manipulating the code looks exactly the same as executing it. This is often illustrated by pointing out that "eval" in lisp doesn't take in a string of characters. (https://taeric.github.io/CodeAsData.html is a blog I wrote on the idea a bit more.)
What this often meant was that getting a feature into your LISP program was something you could do without having to hack at the compiler.
People used to balk at how macros and such would break the ability to step-debug code. Which is still largely true, but step debugging is also sadly dead in a lot of other popular languages already.
It’s for the dreamers. The crazy ones among us that do not think of themselves as experts in programming machines to solve business problems, but rather novices in cajoling machines to think like humans do.
In the words of "Programmers Are Also Human" YouTube channel, it's just more comfortable. REPL development, ease of refactoring, dynamic typing, good CFFI, all just adds up to a developer experience that I find to be, in a word, chill.
Keep in mind that this essay was written in the early 1990s. Today there are many programming languages available that offer features that are unavailable in C but have long been available in Lisp. This was not the case in the 1980s during the AI boom of that era. There was a large chasm between classic procedural languages (C, Fortran, Pascal, Simula) and dynamic languages (Smalltalk and Lisp). Prolog was a popular alternative to Lisp in AI circles, especially in Japan back when it was pursuing the Fifth Generation Computing Project. When looking at the language landscape in the 1980s in the context of AI, it makes sense that practitioners would gravitate toward Lisp and Prolog.
Today we benefit from having a variety of languages, each with tradeoffs regarding how their expressiveness matches the problem at hand and also the strength of its ecosystem (e.g., tools, libraries, community resources, etc.). I still think Lisp has advantages, particularly when it comes to its malleability through its syntax, its macro support, and the metaobject protocol.
As a Lisp fan who codes occasionally in Scheme and Common Lisp, I don’t always grab a Lisp when it’s time to code. Sometimes my language choices are predetermined by the ecosystem I’m using or by my team. I also think strongly-typed functional programming languages like Standard ML and Haskell are quite useful in some situations. I think the strength of Lisp is best seen in situations where flexibility and malleable infrastructure are highly desirable.
It's not a perfectly reliable tell, but people who write LISP instead of Lisp generally give themselves away as knowing nothing about the language. Why not kick the tires with Common Lisp, or even Clojure, and see if you can then understand for yourself why it sparks joy in people? I'm not saying it'll spark joy in you, just that you might understand. (Do you understand Haskell's draw to certain people? Rust's?) At the very least, perhaps you'll lose your notion that it primarily appeals to academic types. Common Lisp is and always has been an industrial language.
"Please don't assume Lisp is only useful for Animation and Graphics, AI, Bioinformatics, B2B and E-Commerce, Data Mining, EDA/Semiconductor applications, Expert Systems, Finance, Intelligent Agents, Knowledge Management, Mechanical CAD, Modeling and Simulation, Natural Language, Optimization, Research, Risk Analysis, Scheduling, Telecom, and Web Authoring just because these are the only things they happened to list."
--Kent Pitman
The paper was written for an audience of Lisp programmers, so the things you're talking about were sort of out of scope. I'm not going to try to convince you that Lisp and ITS really did aim at "the right thing" in a way that C and Unix didn't; you'll see that it's true if you investigate.
How do you know someone knows lisp? They'll tell you. And tell you. And tell you...
I like lisp for the most part, but holy shit is the enduring dialog surrounding it the absolute worst part of the whole family of languages by far. No, it doesn't have or give you superpowers. Please grow up.
Arch-TK|1 year ago
wismi|1 year ago
https://en.wikipedia.org/wiki/Theory_of_the_second_best
pron|1 year ago
The real question is, then, whether doing "the right thing" is a stable and winning strategy at all, i.e. viable and affordable. As you yourself suspect, the answer may well be no. Not only because it takes a few tries to figure out the right foundations, but also because what foundation is right is likely to change over time as conditions change (e.g. hardware architecture changes, programming practices -- such as the use of AI assistants -- change etc.).
kagevf|1 year ago
I think this ties back to the idea of "get it working, then once it's working go back and make it fast | performant | better for whatever meaning of better".
I think much of the consternation towards "worse is better" comes from re-inventing things from scratch to achieve the "make it better" improvements instead of leveraging existing knowledge. Re-inventing might be fine, but we shouldn't throw away knowledge and established techniques if we can avoid it.
jpc0|1 year ago
How would you design a network interface using your right foundations model? I'm not talking about HTML or whatnot.
I have some sort of medium, copper, fiber, whatever, and I would like to send 10 bytes to the other side of it. What are the right foundations that would lead to an implementation which isn't overly complex?
gpderetta|1 year ago
Yes, and it's worse than that. The right foundations might change with time and changing requirements.
tightbookkeeper|1 year ago
The biggest benefit of simplicity in design is when the whole system is simple, so it’s easy to hack on and reason about.
ezekiel68|1 year ago
bbor|1 year ago
And, at the risk of intentionally missing the metaphor: they do in fact make automated tenderizers, now ;) https://a.co/d/hybzu2U
bccdee|1 year ago
> Unix and C are the ultimate computer viruses.
The key argument behind worse-is-better is that an OS which is easy to implement will dominate the market in an ecosystem with many competing hardware standards. Operating systems, programming languages, and software in general have not worked this way in a long time.
Rust is not worse-is-better, but it's become very popular anyway, because LLVM can cross-compile for anything. Kubernetes is not worse-is-better, but nobody needs to reimplement the k8s control plane. React is not worse-is-better, but it only needs to run on the one web platform, so it's fine.
Worse-is-better only applies to things that require an ecosystem of independent implementers providing compatible front-ends for diverging back-ends, and we've mostly standardized beyond that now.
xiphias2|1 year ago
Rust started as an experiment by a Mozilla team at replacing C++ with something that would help them compete with Chrome by developing safe multi-threaded code more efficiently. It took a lot of experiments to get to the current type system, most of which give real advantages by using affine types, but the compiler is at this point clearly over-engineered for the desired type system (and there are already ideas on how to improve on it). It's still too late to restart, as it looks like it takes 20 years to productionize something like Rust.
As for React, I believe it's an over-engineered architecture from the start for most web programming tasks (and for companies / programmers that don't have separate frontend and backend teams), but low interest rates + AWS/Vercel pushed it on all newcomers (and most programmers are new programmers, as the number of programmers grew exponentially).
HTMX and Rails 8 are experiments in the opposite direction (moving back to the servers, nobuild, noSAAS), but I believe there's a lot of space to further simplify the web programming stack.
tightbookkeeper|1 year ago
psychoslave|1 year ago
Could the industry get rid of C and of ridiculous esoteric abbreviations in identifiers, it could almost be a sane world to wander.
karel-3d|1 year ago
and then said something like "most of this is not important, these layers don't really exist, TCP/IP got popular first because it's just much simpler than OSI was"
pjc50|1 year ago
I don't think very many people even remember X.25. The one survivor from all the X standards seems to be X.509?
jll29|1 year ago
The professor asked "How many layers do you count?" - "Seven." - "How many members do you think the ISO/OSI committee had that designed it between them?" - [laughter] - "Seven."
PhilipRoman|1 year ago
supportengineer|1 year ago
gpderetta|1 year ago
And 30 years later they show few signs of letting go.
ezekiel68|1 year ago
Jach|1 year ago
stonemetal12|1 year ago
>The right thing takes forever to design, but it is quite small at every point along the way. To implement it to run fast is either impossible or beyond the capabilities of most implementors.
A deer is only 80% of a unicorn, but waiting for unicorns to exist is folly.
OhMeadhbh|1 year ago
But at the end of the day everyone knows never to assume STD I/O will write an entire buffer to disk and YOU need to check for EINTR and C++ allows you to wrap arbitrary code in try...catch blocks so if you're using a poorly designed 3rd party library you can limit the blast radius. And it's common now to disclaim responsibility for damages from using a particular piece of software so there's no reason to spend extra time trying to get the design right (just ship it and when it kills someone you'll know it's time to revisit the bug list. (Looking at YOU, Boeing.))
I do sort of wonder what happens when someone successfully makes the argument that C++ Exceptions are a solution often mis-applied to the problem at hand and someone convinces a judge that Erlang-like supervisory trees constitute the "right" way to do things and using legacy language features is considered "negligence" by the courts. We're a long way off from that and the punch line here is a decent lawyer can nail you on gross negligence even if you convinced your customer to sign a liability waiver (at least in most (all?) of the US.)
Which is to say... I've always thought there is an interplay between the "worse is better" concept and the evolution of tech law in the US. Tort is the water in which we swim; it defines the context for the code we write.
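The EINTR/partial-write burden mentioned above is easy to see in code. Here is a minimal sketch (the helper name `write_all` is my own) of the loop every caller ends up owning: `os.write` may write fewer bytes than requested, e.g. to a pipe or socket, so you must loop yourself. Note that since PEP 475, Python itself retries system calls interrupted by EINTR, but short writes remain the caller's problem.

```python
import os

def write_all(fd, data):
    """Write all of `data` to `fd`, looping over partial writes.

    os.write() may accept fewer bytes than requested, so the caller
    has to loop -- the classic "PC loser-ing" case study. EINTR
    retries are handled by Python itself (PEP 475), but short
    writes are still ours to deal with.
    """
    view = memoryview(data)
    while view:                      # empty memoryview is falsy
        written = os.write(fd, view)
        view = view[written:]        # drop the bytes already written

# Usage: round-trip a small payload through a pipe.
r, w = os.pipe()
write_all(w, b"10 bytes!!")
os.close(w)
assert os.read(r, 1024) == b"10 bytes!!"
os.close(r)
```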
unknown|1 year ago
[deleted]
th43o2i4234234|1 year ago
If you don't achieve virality, you're as good as dead. Once an episteme/meme spreads like wildfire there's very little chance for a reassessment based on value/function - because scope is now the big axis of valuation.
It's actually worse because humanity is now a single big borg. Even 30-40 years back, there were sparsely connected pools where different species of fish could exist - not any more. The elites of every single country are part of the Anglosphere, and their populations mimic them (eventually).
This tumbling towards widespread mono-memetism in every single sphere of life is a deeply dissatisfying aspect of modern human life, not just for PL/OS/... but also for culture etc.
Anthropocene of humanity itself.
esafak|1 year ago
Are you? Maybe the worse solution peaks faster, but can be supplanted by a better solution in the future, like how Rust is displacing C/C++ in new projects. The better solution may never be popular yet persist.
unknown|1 year ago
[deleted]
JohnFen|1 year ago
ezekiel68|1 year ago
sesm|1 year ago
i_s|1 year ago
"It is better to go picking blueberries before they are fully ripe, that way you won't have much competition."
api|1 year ago
dokyun|1 year ago
djha-skin|1 year ago
In Go you can immediately tell what the fields are in a config yaml file just by looking at struct annotations. Try doing that with Rust's Serde. Super opaque in my opinion.
busterarm|1 year ago
I think this is a self-delusion experienced by Rustaceans because they overvalue a certain type of software correctness and because Rust gets that right, the rest of its warts are invisible.
agumonkey|1 year ago
clarkevans|1 year ago
1. Minimal -- the design and implementation must be as small as possible, especially in scope (which should be deliberately "incomplete")
2. Timely -- the implementation must be delivered as soon as feasible, even if it comes before the design (get it working first, then figure out why)
3. Relevant -- the design and implementation must address an important, unmet need, eschewing needs that are not urgent at the time (you can iterate or supplement)
4. Usable -- the implementation must be integrated with the existing, working and stable infrastructure (even if that integration causes design compromises)
The other dimensions, simplicity, correctness, consistency, and completeness are very nice to have, but they are not the primary drivers of this philosophy.
AnimalMuppet|1 year ago
I would say that Timely and Relevant drive Minimal. I would also say that Minimal and Usable are in tension with each other.
mseepgood|1 year ago
hammock|1 year ago
It's the opposite of what marketers want you to think of when they say "uncompromising design."
Dansvidania|1 year ago
(edit: I mean it unironically)
api|1 year ago
TacticalCoder|1 year ago
Der_Einzige|1 year ago
For example, why is it that central vacuums are more rare in 2024 than they were in the 1980s, despite them being superior in every way compared to regular ones?
"Worse" vacuums are "better" for the economy? (because Dyson makes jobs and consumes resources?)
AnimalMuppet|1 year ago
pjc50|1 year ago
worstspotgain|1 year ago
McDougals resolves the apparent conflict between the other two. It blames the interrupt hardware as the root cause. It produces non-working, incomplete software. It's kind of a modest proposal.
However, it also produces no ripples in the design fabric. With MIT, the OS source is a maintenance nightmare. With NJ, modern software still has to deal with archaic idiosyncrasies like EINTR. With McDougals, all the "conflict-free" portions of the software advance, those that write themselves.
The result is likely immediately shelved, perhaps as an open source PoC. Over time, someone might write some inelegant glue that makes interrupts appear to behave nicely. Alternatively, the world might become perfect to match the software.
If nothing else, the software will have mimicked the way we learn. We use imperfect examples to draw the idealized conclusion. Even if it never gets to run, it will be more readable and more inspiring than either MIT or NJ.
shagie|1 year ago
jes5199|1 year ago
marcosdumay|1 year ago
Anyway, worse is better is about simplicity of implementation versus conceptual simplicity. By principle, that's a much harder choice.
enugu|1 year ago
And this leads to an easier implementation - parsing is easier(which is why code transformation via macros becomes easy).
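The point about parsing can be made concrete with a toy s-expression reader (function names are my own, and real Lisp readers handle far more): the entire surface syntax is parenthesized prefix notation, and the parsed program is ordinary nested lists, which is what makes macro-style code transformation easy.

```python
def read_sexp(src):
    """Parse one s-expression into nested Python lists -- a toy
    illustration of why Lisp syntax is trivial to parse."""
    # Pad parens with spaces so split() tokenizes everything at once.
    tokens = src.replace("(", " ( ").replace(")", " ) ").split()

    def read(pos):
        if tokens[pos] == "(":
            form, pos = [], pos + 1
            while tokens[pos] != ")":
                item, pos = read(pos)
                form.append(item)
            return form, pos + 1     # skip the closing ")"
        tok = tokens[pos]
        try:
            return int(tok), pos + 1  # numeric atom
        except ValueError:
            return tok, pos + 1       # symbol atom

    return read(0)[0]

# The parsed code is plain data, so "macros" are just list surgery:
tree = read_sexp("(+ 1 (* 2 3))")
assert tree == ["+", 1, ["*", 2, 3]]
```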
kazinator|1 year ago
dang|1 year ago
Lisp: Good News, Bad News, How to Win Big (1990) [pdf] - https://news.ycombinator.com/item?id=30045836 - Jan 2022 (32 comments)
Worse Is Better (2001) - https://news.ycombinator.com/item?id=27916370 - July 2021 (43 comments)
Lisp: Good News, Bad News, How to Win Big (1991) - https://news.ycombinator.com/item?id=22585733 - March 2020 (21 comments)
The Rise of Worse Is Better (1991) - https://news.ycombinator.com/item?id=21405780 - Oct 2019 (37 comments)
The Rise of Worse Is Better (1991) - https://news.ycombinator.com/item?id=16716275 - March 2018 (44 comments)
Worse is Better - https://news.ycombinator.com/item?id=16339932 - Feb 2018 (1 comment)
The Rise of Worse is Better - https://news.ycombinator.com/item?id=7202728 - Feb 2014 (21 comments)
The Rise of "Worse is Better" - https://news.ycombinator.com/item?id=2725100 - July 2011 (32 comments)
Lisp: Good News, Bad News, How to Win Big [1991] - https://news.ycombinator.com/item?id=2628170 - June 2011 (2 comments)
Worse is Better - https://news.ycombinator.com/item?id=2019328 - Dec 2010 (3 comments)
Worse Is Better - https://news.ycombinator.com/item?id=1905081 - Nov 2010 (1 comment)
Worse is better - https://news.ycombinator.com/item?id=1265510 - April 2010 (3 comments)
Worse Is Better - https://news.ycombinator.com/item?id=1112379 - Feb 2010 (5 comments)
Lisp: Worse is Better, Originally published in 1991 - https://news.ycombinator.com/item?id=1110539 - Feb 2010 (1 comment)
Lisp: Good News, Bad News, How to Win Big - https://news.ycombinator.com/item?id=552497 - April 2009 (2 comments)
dang|1 year ago
Worse Is Better - https://news.ycombinator.com/item?id=36024819 - May 2023 (1 comment)
My story on “worse is better” (2018) - https://news.ycombinator.com/item?id=31339826 - May 2022 (100 comments)
When Worse Is Better (2011) - https://news.ycombinator.com/item?id=20606065 - Aug 2019 (13 comments)
EINTR and PC Loser-Ing: The “Worse Is Better” Case Study (2011) - https://news.ycombinator.com/item?id=20218924 - June 2019 (72 comments)
Worse is worse - https://news.ycombinator.com/item?id=17491066 - July 2018 (1 comment)
“Worse is Better” philosophy - https://news.ycombinator.com/item?id=17307940 - June 2018 (1 comment)
What “Worse is Better vs. The Right Thing” is really about (2012) - https://news.ycombinator.com/item?id=11097710 - Feb 2016 (35 comments)
The problematic culture of “Worse is Better” - https://news.ycombinator.com/item?id=8449680 - Oct 2014 (116 comments)
"Worse is Better" in the Google Play Store - https://news.ycombinator.com/item?id=6922127 - Dec 2013 (10 comments)
What “Worse is Better vs The Right Thing” is really about - https://news.ycombinator.com/item?id=4372301 - Aug 2012 (46 comments)
Worse is worse - https://news.ycombinator.com/item?id=437966 - Jan 2009 (3 comments)
germandiago|1 year ago
What is better, a perfect design or one that exists?
Our mind tries to trick us into thinking that the choice is between perfect and not so perfect.
No, in real life it is common that aiming for the perfect ruins everything else, even the existence of that idea in real form, due to other constraints.
Taniwha|1 year ago
dkasper|1 year ago
orwin|1 year ago
Still, I am a bit bothered: does a counterargument exist?
ripap|1 year ago
Same author (name is an anagram).
kqr|1 year ago
But sometimes you need to know what it is you are designing before giving it to people, because there are large risks associated with improvising. In those cases, making The Right Thing is still expensive, but it may reduce the risk of catastrophe.
I think, however, that the latter cases are rarer than most people think. There are ways of safely experimenting even in high-risk domains, and I believe doing so ultimately lowers the risk even more than doing The Right Thing from the start. Because even if we think we can spend years nailing the requirements for something down, there are always things we didn't think of but which operational experience can tell us quickly.
gpderetta|1 year ago
NAHWheatCracker|1 year ago
Often in these arguments, worse means "shortcut" and better means "won". The difficulty is proving that not taking the shortcut had some other advantages that are assumed, like in the article.
Feathercrown|1 year ago
bee_rider|1 year ago
—
Anyway, I wouldn’t put simplicity on the same level as the other things. Simplicity isn’t a virtue in and of itself, simplicity is valuable because it helps all of the other things.
Simplicity helps a bit with consistency, in the sense that you have more trouble doing really bizarre and inconsistent things in a simple design.
Simplicity helps massively with correctness. You can't check things that you don't understand. Personally, that means there's a complexity point after which I can't guarantee correctness. This is the main one I object to. Simplicity and correctness simply don't belong in different bullet-points.
Simplicity could be seen as providing completeness. The two ways to produce completeness are to either work for a really long time and make something huge, or reduce scope and make a little complete thing.
It’s all simplicity.
dmansen|1 year ago
unknown|1 year ago
[deleted]
ezekiel68|1 year ago
sebastianconcpt|1 year ago
karel-3d|1 year ago
aredox|1 year ago
st_goliath|1 year ago
Via: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/lin...
ta988|1 year ago
pulse7|1 year ago
hammock|1 year ago
Is Amazon "the best" place to go shopping? No, you might find better prices on individual items if you put a little more work into it, but it's the most expedient. Is Facebook/Instagram/Tiktok/insert here "the best" social network? No, but it is the most accessible, easy-to-use, useful one. Is a Tesla (perhaps outdated example since X) "the best" car - no, but it is the most expedient.
There is a tangent here that intersects with refinement culture as well. Among the group of society that (subconsciously) cares about these "expedient" choices, you see everyone and everything start to look the same.
[1] https://tinaja.computer/2017/10/13/expedience.html
dkarl|1 year ago
For example, if two students in a class are having frequent confrontations that bring learning in the class to a halt, and attempts by teachers and counselors to address their conflict directly haven't been effective, the expedient solution might be to place them in separate classes. The "right thing" would be to address the problem on the social and emotional level, but if continued efforts to do so are likely to result in continued disruption to the students' education, it might be better to separate them. "Expedient" acknowledges the trade-off, while emphasizing the positive outcome.
Often a course of action is described as "expedient" when it seems to dodge an issue of morality or virtue. For example, if we solve climate change with geoengineering instead of by addressing thoughtless consumerism, corporate impunity, and lack of international accountability, many people would feel frustrated or let down by the solution because it would solve the problem without addressing the moral shortcomings that led to the problem. The word expedient stresses the positive side of this, the effectiveness and practicality of the solution, while acknowledging that it leaves other, perhaps deeper issues unaddressed.
lisper|1 year ago
It's not just that. Every time you do business with a new web site you assume additional risk. Amazon is a known quantity. You can be pretty sure that they are not going to outright scam you, and they aren't going to be hacked by script kiddies. There is a significant risk of getting a counterfeit item, but they have a very liberal return policy, so the real cost to you in this case is a minute or two to get a return code and possibly a trip to the nearest Whole Foods to drop it off.
Amazon sucks in many ways, but at least their suckage is a known quantity. Predictability has significant value.
d0mine|1 year ago
Your model of the world is not perfect, so instead of trying to find a globally optimal solution, you settle for a local optimum that exceeds some threshold - it merely has to suffice. https://en.wikipedia.org/wiki/Satisficing
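Satisficing is easy to sketch: instead of scanning every option for the global optimum, stop at the first one that meets an aspiration threshold. The function name and the example values below are made up for illustration.

```python
def satisfice(options, threshold, score):
    """Return the first option whose score meets the aspiration
    threshold (Herbert Simon's "satisficing"), rather than
    scanning everything for the global optimum."""
    for opt in options:
        if score(opt) >= threshold:
            return opt
    return None  # no option is good enough

# Made-up example: take any shopping deal rated at least 8/10.
deals = [("site A", 6), ("site B", 9), ("site C", 10)]
good_enough = satisfice(deals, 8, score=lambda d: d[1])
assert good_enough == ("site B", 9)  # stops before seeing site C's 10
```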
RcouF1uZ4gsC|1 year ago
The number one reason I use Amazon, is not for the best prices, but because of their return policy. Amazon returns are actually often more painless than physical store returns.
Being able to return something predictably and easily outweighs a small difference in price.
WalterBright|1 year ago
That extra work costs you money, too. Calculate how much your job pays you per hour, then you can deduce the $cost of spending more time to get a better deal.
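That back-of-the-envelope calculation looks like this (all numbers are hypothetical):

```python
# Hypothetical numbers: is 45 minutes of bargain-hunting worth a $20 saving?
hourly_wage = 50.0        # what an hour of your time is worth, in $
search_time_hours = 0.75  # extra time spent hunting for a better deal
price_saving = 20.0       # discount found by searching, in $

time_cost = hourly_wage * search_time_hours
net = price_saving - time_cost
assert time_cost == 37.5
assert net == -17.5       # the "better deal" actually loses $17.50 net
```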
AnimalMuppet|1 year ago
onlyrealcuzzo|1 year ago
The most expedient car? Or BEV in the US?
b3ing|1 year ago
jprete|1 year ago
rpastuszak|1 year ago
unknown|1 year ago
[deleted]
aulin|1 year ago
Non native here. What's the meaning of inoculated here?
It's not the first time that I struggle to parse this word. In Italian it keeps the original Latin meaning and can be translated as "injected with". You could inoculate a vaccine but you could also inoculate a poison, so it does not carry the immunity meaning by default. English (US?), as far as I can tell, uses it as a synonym of "immune" - is that so?
unknown|1 year ago
[deleted]
NAHWheatCracker|1 year ago
I agree though that for practical purposes, practical solutions are just going to be more successful.
pjc50|1 year ago
Then a right-handed person picks up the left-handed scissors and deems them weird and uncomfortable. Which they are... for the right hand of a right-handed person.
Other popular examples of such taste controversy are Python's semantic whitespace, the idiosyncrasies of Perl, the very unusual shape of J/APL, and anyone using FORTH for non-trivial purposes.
edit: https://news.ycombinator.com/item?id=41766753 comment about "other people's Lisp" reminds me, that working as a solo genius dev on your own from-scratch code and working in a team inside a large organization on legacy code are very different experiences, and the "inflexibility" of some languages can be a benefit to the latter.
kqr|1 year ago
Remember that this has to be read in historical context. At the time C was invented, things like garbage collection, message-passing object-orientation, generics, rich sets of conditionals, first-class functions, etc. were brand spanking new. They were The Right Thing to do (even judged in the harsh light of hindsight), but also quite complicated to implement – so much so that the New Jersey people skipped right past most of it.
Today these things are par for the course. At the time, they were The Right Thing that made the system correct but complex, and had adoption penalties. As time passes, the bar for The Right Thing shifts, and today it would probably not be embodied by Lisp, but maybe by something like Haskell or Rust?
yoyohello13|1 year ago
1. LISP is easy to start with if you're not a programmer. There is very little syntax to get to grips with, and once you understand "everything is a list" it's super easy to expand out from there.
2. LISP really makes it easy to hack your way to a solution. With the REPL and the transparency of "code is data" model you can just start writing code and eventually get to a solution. You don't need to plan, or think about types, or deal with syntax errors. You just write your code and see it executed right there in the REPL.
For my part, I love LISP when it's just me doing the coding, but once you start adding other people's custom DSL macros or whatever the heck, it becomes unwieldy. Basically, I love my LISP and hate other people's LISP.
mattgreenrocks|1 year ago
In Lisp, almost all of the language’s power is in “user space.”
The ramifications for that are deep and your beliefs as to whether that is good are largely shaped by whether you believe computation is better handled by large groups of people (thus, languages should restrict users) or smaller groups of people (thus, languages should empower users).
See this for more discussion: https://softwareengineering.stackexchange.com/a/237523
taeric|1 year ago
What this often meant was that getting a feature into your LISP program was something you could do without having to hack at the compiler.
People used to balk at how macros and such would break their ability to step-debug code. Which is still largely true, but step debugging is also sadly dead in a lot of other popular languages already.
bbor|1 year ago
djha-skin|1 year ago
hcarvalhoalves|1 year ago
https://www.youtube.com/watch?v=o4-YnLpLgtk
https://www.youtube.com/watch?v=gV5obrYaogU
It makes working on VSCode today look like banging rocks together, let alone 30 years ago.
linguae|1 year ago
Jach|1 year ago
unknown|1 year ago
[deleted]
buescher|1 year ago
kragen|1 year ago
Lisp definitely does depend on personality type. Quoting Steve Yegge's "Notes from the Mystery Machine Bus" (https://gist.github.com/cornchz/3313150):
> Software engineering has its own political axis, ranging from conservative to liberal. (...) We regard political conservatism as an ideological belief system that is significantly (but not completely) related to motivational concerns having to do with the psychological management of uncertainty and fear. (...) Liberalism doesn't lend itself quite as conveniently to a primary root motivation. But for our purposes we can think of it as a belief system that is motivated by the desire above all else to effect change. In corporate terms, as we observed, it's about changing the world. In software terms, liberalism aims to maximize the speed of feature development, while simultaneously maximizing the flexibility of the systems being built, so that feature development never needs to slow down or be compromised.
Lisp, like Perl and Forth, is an extremist "liberal" language, or family of languages. Its value system is centered on making it possible to write programs you couldn't write otherwise, rather than reducing the risk you'll screw it up. It aims at expressiveness and malleability, not safety.
The "right thing" design philosophy is somewhat orthogonal to that, but it also does pervade Lisp (especially Scheme) and, for example, Haskell. As you'd expect, the New Jersey philosophy pervades C, Unix shells, and Golang. Those are also fairly liberal languages, Golang less so. But a C compiler had to fit within the confines of the PDP-11 and produce fast enough code that Ken would be willing to use it for the Unix kernel, and it was being funded as part of a word processing project, so things had to work; debuggability and performance were priorities. (And both C and Unix were guided by bad experiences with Multics and, I infer, M6 and QED.) MACLISP and Interlisp were running on much more generous hardware and expected to produce novel research, not reliable production systems. So they had strong incentives to both be "liberal" and to seek after the "right thing" instead of preferring expediency.
adamnemecek|1 year ago
Blackthorn|1 year ago
unknown|1 year ago
[deleted]