camdez | 1 year ago | on: Tech terms I was pronouncing wrong
camdez's comments
camdez | 2 years ago | on: K Isn't Lisp (2004)
;; assuming v is a seq of pairs, e.g.
;; [[1 "one"] [2 "one"] [3 "two"] [4 "two"] [5 "three"] [6 "four"]]
(frequencies (map second v))
;; => {"one" 2, "two" 2, "three" 1, "four" 1}
;; or:
(update-vals (group-by second v) count)
;; => {"one" 2, "two" 2, "three" 1, "four" 1}
;; +(#:'=v[;1];?v[;1])
;; flip(count each group v[;1];unique v[;1])
So, shorter than the readable K version in a Lisp. What did we learn here? Probably nothing.
K is great for sequence operations--not sure what we're trying to imply about Lisps.
camdez | 3 years ago | on: The Janet Language
Of course you can do that simple case with a ternary operator, but:
1. It's a construct that really has no reason to exist (I argue) as distinct from `if`.
2. It doesn't compose with statements.
This duality is primarily what I'm arguing against.
A better example would have been a case statement inside of the `println`:
(println
 "Log in by"
 (case user-id
   0 "root"
   1 "local admin"
   (format "regular user (id: %d)" user-id)))
In C, you have to introduce a variable for no good reason (or do some non-idiomatic, ugly, nested ternary operators that get uglier the more cases we have). And even then, you can't just say
user_name = switch { ... }
Because `switch` is a statement.
camdez | 3 years ago | on: The Janet Language
Lispers don't feel positive about Lisp because of parentheses; change them to curly braces or brackets or ^ and $—that's really not what matters. Lisps with brackets go all the way back to the beginning (https://en.wikipedia.org/wiki/M-expression). Indentation-based Lisps have been done too (https://readable.sourceforge.io/).
The point is an expression-based syntax that directly models the code tree, is written in the data structures of the language, and is convenient for meta-programming. It's a fundamentally different approach that yields massive benefits (see my other comment in the thread if you want to hear that spelled out in more detail).
But we don't see that when we just stop at unfamiliar syntax.
Lispers have been structurally-editing code as a matter of course since at least 1970. Most of the rest of the world only got a taste of that when tree-sitter came out circa 2018 (I know I'm rounding the edges here, but the point stands). Half a century later! Why is that? It's not just curly braces vs parens—something deeper is happening here.
I do apologize if I came off rude. I'm just so frustrated at hearing this same line year after year after year from people who are missing out on some of the most powerful ideas in programming because they prefer this ASCII glyph over that one. It's nothing more than parochialism.
It just makes me want to scream (perhaps uncharitably) "surely you're not a serious engineer who works on serious problems if your biggest concern while coding is which character is used to group code?!" I want tools to help me think more clearly, ways to operate at higher levels of abstraction, better concurrency semantics—surface characteristics be damned. Sure, I have my preferences about orthography, but the tail doesn't wag the dog.
Look deeper! Learn what each language has to teach you! Then keep the parts that move our craft forward and use whatever glyphs you want. But don't reject the automobile because it doesn't have handlebars.
Moreover, the things that look familiar probably have the least to teach you.
I believe we have the ability to do so much better as an industry, but it's not going to happen if we reject the unfamiliar just for being so.
camdez | 3 years ago | on: The Janet Language
Lisps don't arbitrarily look weird—there's a deep, principled, elegant reason for it; Lisp code represents how the code will be evaluated in the most direct way, without relying on (some would say needlessly) complex parsing / precedence rules. There are no surprises and no arbitrary rules to learn. There are no useless semicolons to forget, and you'll never have to wonder if `+=` returns the RHS or the result of the operation (or does it even have a return value?).
You don't have this meaningless distinction where you can't directly reduce with `+` because—ugh—it's not a function, it's an operator. You just say `(reduce + [1 2 3])`.
You never have to do this ugly Ruby stuff...
words.map &:length
# or
words.map { |w| w.length }
...because methods are really just polymorphic functions, but language designers chose syntax that doesn't compose elegantly. You don't have this useless distinction between statements and expressions that limits how you can compose code. You never have to drop down to some ugly, limited ternary expression form (`COND ? X : Y`) of `if` because—whoops—`if` is a "statement". You just write:
(println (if me? "me" "you"))
Because, duh, we wanted an `if`. What do we gain by adding all of this noise?:
if (is_me) {
    println("me");
} else {
    println("you");
}
Absolutely nothing. The parens on the conditional, the curly braces, the semicolons, the `else` keyword—they're essentially meaningless incantations to appease the compiler. And we've introduced an undesirable opportunity for the two branches to accidentally diverge over time.
But most importantly, our code is written in the data structures of our language. Code as data means we can manipulate code as easily as we manipulate data, which means we can trivially (re-)write code with code (i.e. macros). And not shitty string-generating macros, or macros that can only do a handful of sanctioned things—we can write our own control structures in a couple of lines of code. We can add new abstractions to the programming language from user space.
Wish the language had an `if-not` construct? You can add it with, like, 3 lines of code. Wish functions could have optional parameters? Add it. Wish it had pattern-matching functions like SML or Erlang? Cool. Java-style annotations? Logging that is fully removed when running in high performance mode? A different OO model? Multi-line string literals? String interpolation? A graph of dependent calculations that only get run when used? A more convenient way to load dependencies? It's all easily doable.
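Here's a sketch of what that `if-not` addition looks like (Clojure's core library actually ships `if-not` already, which rather proves the point; the `my-` prefix is just to avoid shadowing it):

```clojure
;; A user-space control structure in three lines:
;; swap the branches and delegate to the built-in `if`.
(defmacro my-if-not
  [test then & [else]]
  `(if ~test ~else ~then))

(my-if-not (zero? 1) "non-zero" "zero")
;; => "non-zero"
```

No compiler patches, no language proposal process—it's just a function that rewrites code before evaluation.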
I've coded in Lisps (and a dozen other languages) for at least 20 years, and every time I have to use a non-Lisp syntax I just think "wow, these people really missed the boat". It's like having to write math in Roman numerals (would you rather calculate "D + L + IX" or "500 + 50 + 9"?); there's a better way, and that better way has elegant, recursive, underlying design principles that make the ergonomics way better.
But, yeah, it doesn't look like C code. And people seem to be really attached to their C syntax.
camdez | 3 years ago | on: The Janet Language
This is why I don't do any math I can't do on my fingers.
Parentheses are just too scary, and there's no way that parenthesis math junk actually has any useful ideas.
camdez | 3 years ago | on: Clojure Turns 15 panel discussion video
Objects / classes are not so much the problem, per se—it's specifically that ORMs fundamentally involve scattering uncoordinated mutable state throughout the application. A foundational thesis of Clojure is that mutation is a tremendous source of bugs, and should be avoided / thoughtfully limited.
Once you let the unmanaged mutation genie out of the bottle, it's almost impossible to put back in.
More concretely, I used to work extensively with Rails; I loved ActiveRecord (ORM) when I first started out—it makes basic things so easy.
Later I worked on a large Rails app supporting millions of users...we used ActiveRecord extensively, and had a very talented team. ActiveRecord worked fine most of the time, but I have bad memories of spending hours or even days tracking down user-reported bugs.
I'd try to figure out how to recreate the user's state locally, even cloning pieces of the production database to work with their exact DB data, but whatever state the program had gotten into was a large graph of (all!) mutable objects. How was that flag getting set? What code could have done it? When? And the answer is basically ANYTHING at ANY TIME that could possibly get a reference to the object. And web applications are far from the worst offenders in this space because the request / response cycle is (usually) fairly globally stateless.
Clojure is the exact opposite of that experience.
The state of a Clojure program will most likely be composed of data literals (think JSON data types, if you don't have experience with Clojure / Lisp data). Printable data literals. Connect to the errant server, serialize the state, read it from your machine, and you're there. It's coherent, stable, serializable, transmittable, simple.
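For instance (the state map here is hypothetical, but the round-trip works for any plain Clojure data):

```clojure
;; Plain data literals round-trip through the printer and reader—
;; no custom serialization machinery needed.
(def state {:user-id 42, :flags #{:beta}, :history [1 2 3]})

(def wire (pr-str state))     ;; a printable, transmittable string
(= state (read-string wire))  ;; => true
```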
Who can mutate your data? No one. You have an immutable reference (maybe others do too, but reading is a fundamentally safe operation). How does it change? Only along the explicit path of computations you're working through (it doesn't change, actually, you choose to hold onto a new reference to derived data when you want to).
Or, if you really need a mutable escape hatch (like, say, you're holding a handle to a database), every type of mutation in (core) Clojure has defined (and thoughtfully so) concurrency semantics. You won't see a bunch of notes in Clojure API docs that say things like "not thread safe" like you see in JavaDocs.
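A quick sketch with `atom`, the most common of those reference types (the counter is hypothetical):

```clojure
;; `swap!` applies a pure function atomically; on contention it
;; retries, so concurrent updates are never lost or interleaved.
(def counter (atom 0))

(swap! counter inc)   ;; => 1
(swap! counter + 10)  ;; => 11
@counter              ;; => 11
```

The concurrency story is part of the reference type's contract, not a per-library afterthought.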
TLDR: Clojure will happily give you object-like read-only views into your database (like Datomic's `datomic.api/entity`), or help you write queries with a knowledge of your database schema, but most Clojure persistence solutions will explicitly coordinate mutation into a single 'site' because that's the only way to maintain a coherent view of state-over-time. And that single-mutation-site story is the opposite of what ORMs (as commonly defined) do.
camdez | 3 years ago | on: Japanese explained to programmers
Logograms allow you to understand (some) meaning without understanding pronunciation.
Syllabaries and alphabets allow you to understand pronunciation (to varying degrees of success depending on spelling consistency) without necessarily understanding meaning.
It's all tradeoffs.
These days I read Chinese much better than Japanese, and it's definitely fun that I can look at a page full of Japanese and understand the meaning of many words just from knowing the (largely parallel) meaning of the kanji / Hanzi from Chinese.
Conversely, I can read (the sounds of) Hangul, but I know maybe 20 Korean words, so it's all just sounds to me.
But I can read the name off of a Korean hotel sign and communicate it to a Korean taxi driver. I can't do the same in Japan if I don't know the pronunciation, even if I know exactly what the sign means. If it affects your ability to use the language effectively to achieve what you want then I think it matters.
camdez | 5 years ago | on: Guide to Notion Landing Pages
camdez | 8 years ago | on: Readable Clojure
Infix is occasionally more readable for math, but I'd rather have a macro to transform a delimited section of infix code than to mess with the language.
camdez | 9 years ago | on: Ask HN: Will the committee that built Common Lisp make a new one in the future?
camdez | 9 years ago | on: GNU Guile 2.2.0
Or `foo(bar)(1)`.
camdez | 10 years ago | on: Steve Yegge Grades His Own Predictions
camdez | 10 years ago | on: Toki Pona: a human language with 120 words
Agreed RE "水好" being unnatural, but it's hardly incorrect.
camdez | 10 years ago | on: Toki Pona: a human language with 120 words
Thanks!
camdez | 10 years ago | on: Toki Pona: a human language with 120 words
I'm curious about the decision to include the grammatical particles, and why it seemed necessary...anyone have a full enough understanding of the grammar to know why the decision was made to allow dropping the "li" particle with "mi" and "sina", but not getting rid of it in general? Chinese similarly lacks a 'to be' copula, and gets by quite well without a subject marker.
cf.
EN: I (am) good
TP: mi pona
ZH: 我好
EN: Water is good
TP: telo >li< pona
ZH: 水好
Japanese explicitly demarcates the subject / topic, but seems to allow a bit more variety at the beginning of the sentence + uses that demarcation to add a connotation of emphasis or contrast with a previous topic + isn't a conlang. Anyone have a feeling for what it adds here?
camdez | 11 years ago | on: Quality Is Fractal
Yes, quality (often) has this "one bad apple spoils the whole bunch" property. If we have to have a math metaphor we might say that quality is multiplicative, in the sense that one low value in a sequence still impacts the entire product. Or we might say that quality has an absorbing element, by which we mean that any zero value kills the whole set (100 * 100 * 100 * 0 still equals 0). But fractal means that we see self-similarity at all levels. That hardly seems to be the case.
According to the original argument, bad software implies bad programmers which implies a bad company. That seems not only incorrect but also non-constructive. It ignores (e.g.) the idea that good employees could release bad products under bad management. Likewise, great programmers can write applications which are terrible to use. There are other skills involved in that process (interaction design, for instance). The criteria for evaluating programmers and software are vastly different and thus it doesn’t make sense to say that there’s a fractal relationship between these two vastly different kinds of entities.
`seq`, `Eq`, `prev` I pronounce like the beginnings of the underlying words, so I'd say pronounce-then-abbreviate and abbreviate-then-pronounce yield the same result.
I guess this implies people are saying "SECK" (`seq`) and "ECK" (`Eq`), rhyming `prev` with "rev", and pronouncing `id` like Freud's id? (Eek.)
In fairness, I'll confess I pronounce `enum` "E-NUM", not "E-NOOM".