> This is how I solved the third problem, by the way: [insert unreadable scribbles of a madman]
when will array enthusiasts understand that having first letter A does not mean the language should be aimed at Aliens?
get over your "code golf" tendencies and start thinking about *readability* - give us long names, give us brackets, give us ASCII and stop pulling everything into a single line in examples - _THEN_ you'll have your first tiny chance of being heard and taken seriously
aim design at common people, not gurus
> when will array enthusiasts understand that having first letter A does not mean the language should be aimed at Aliens? […] get over your "code golf" tendencies
I've always assumed that people working on these languages come from a heavy maths background, so to them the sequence of symbols is more “naturally” readable. So saying “if array language people used words I'd understand better” is not unlike saying “if the Chinese would only speak English there would be fewer language barriers”. Not wrong, of course, but perhaps not a reasonable expectation.
> At this point, comments suggesting the code is “unreadable” are bound to be thrown around, but once you learn a handful of symbols, this is in fact significantly more readable than the kinds of formulas I see on a regular basis in Excel.
To your point: what would this look like using English words as identifiers rather than unicode soup whose names I have likely never heard of? It isn't like those characters are easy to find on my keyboard.
The glyphs are not a barrier to use or learning. Learning them is no more difficult than learning the keywords and syntax of any other language. The real difficulty lies in thinking in terms of arrays and array operations rather than conditionals and loops.
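To illustrate the shift the comment describes, here is a small sketch (mine, not the commenter's) of the same computation written loop-and-conditional style versus array style, in Python with NumPy:

```python
import numpy as np

# Conditionals-and-loops style: clamp out negatives one element at a time.
def total_loop(values):
    total = 0
    for v in values:
        if v > 0:
            total += v
    return total

# Array style: express the whole computation as operations on the array.
def total_array(values):
    a = np.asarray(values)
    return a[a > 0].sum()   # boolean mask selects positives, then reduce

print(total_loop([3, -1, 4, -2]))   # 7
print(total_array([3, -1, 4, -2]))  # 7
```

The second version has no branches or explicit iteration at all, which is the mental shift being pointed at; the glyphs are incidental.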
An obvious compromise is to have a dual representation, one terse and one that’s spelled out. You could then toggle between them when seeing unfamiliar symbols.
The fact that nobody selling me on these arcane runes talks about something like that tells me it’s more about code golf for them.
Personally, I do not find that long names, brackets and general verbosity help _me_ with "readability". The less concise it looks, the less likely I am to read it. And if there is too much to read, I will definitely not read it. Just because someone has entertained themselves by writing lots of verbose code does not mean I want to read it. Skimming it and searching through numerous directories of source code files is not the same thing as actually reading each file from beginning to end.
As it happens, I find terse languages, and "code golf", more approachable and thus readable. It is lower in volume, more concise and compact. Generally, I can read all of it. I use an ASCII-based array language that uses brackets: https://en.wikipedia.org/wiki/K_(programming_language)
I am not sure why I prefer writing less code as opposed to writing more code. Maybe I just dislike typing. In any case, I am thankful these languages exist. For both reading and writing.
To insinuate that no one takes array languages seriously would of course be wrong, as one can see from the above Wikipedia page.
The question I have is why anyone who prefers popular verbose languages would care about the discussion of unpopular array languages on HN. Why even bother to comment and complain. If array languages are incomprehensible and unpopular, then why pay any attention to them. Appeals for verbosity over terseness do not seem to be working for the vast majority of Javascript I see in webpages from the larger so-called "tech" companies running the most popular websites.
Array languages are very easy to avoid. Imagine trying to avoid Python. Not easy (speaking from experience).
Telling array language authors to make their languages more verbose is like telling Bob Dylan to play what people want to hear and only then will he have a "tiny chance of being heard and taken seriously".
(Also wondering if commenter meant braces not brackets.)
This is what I love in Uiua[1]. That operators can be written as English words instead of unicode symbols. Makes it quite similar looking to functional point-free code.
[1]: https://www.uiua.org/
I agree that in a team you should not use code golf if others have to review your work. But for your own explorations, or if your team can easily read your code golf, then use it where appropriate.
The project seems interesting, but IMO the author’s case is undermined by the fact that the only example presented is completely artificial and unrelatable.
After explaining how spreadsheets enabled ordinary users to solve real-world problems better than traditional programming, the author proposes to take it to the next level with APL-style expressions, and gives a code example for the following task:
You're given a dictionary and asked to find the longest word which contains no more than 2 vowels.
I don’t believe there is a single Excel user in 40 years who ever needed to perform this operation. It’s the kind of abstracted “l33tcode” exercise that only appears in computer science classes and a certain type of programming job interview.
A practical example of how an array language IDE works better than a spreadsheet for some useful task would go a long way here.
Ah, the classic "Two Stories" [0] from Joel Spolsky:
I pretty rapidly realized that the App Architecture group knew even less than I did about macros. At least, I had talked to a handful of macro developers and some Excel old-timers to get a grip on what people actually did with Excel macros: things like recalculating a spreadsheet every day, or rearranging some data according to a certain pattern. But the App Architecture group had merely *thought* about macros as an academic exercise, and they couldn’t actually come up with any examples of the kind of macros people would want to write. Pressured, one of them came up with the idea that since Excel already had underlining and double-underlining, perhaps someone would want to write a macro to triple underline. Yep. REAL common. So I proceeded to ignore them as diplomatically as possible.
Part of modern university training is making exercises as unrelatable and impractical as possible, so that students are relieved to find their commercial work simply boring and repetitive afterwards.
The fundamental issue here is discoverability. There is no benefit to having these symbols, since you could easily have some kind of button in the UI to switch between symbols and words. It would be far better if you tab-completed the actual name of a function rather than typing some arbitrary sequence of characters to get a symbol of questionable function. You won't ever replace Excel if the first step to using your product is memorising a huge dictionary of strange runes.
I think a good language for replacing spreadsheets would be Scheme, or something similar. You can teach it to someone very quickly as the syntax is quite minimal. Provided functions are given human readable names, I think it could be very intuitive. You would also need to offer some kind of graphical editor for the code. Lisp has wins here as well since you could likely translate the code into a graph without much issue. However it would also be good to allow the inverse of such transformations.
Hard disagree. Which do you find more readable?
(plus (minus (div b (times 2 a))) (times (div 1 (times 2 a)) (sqrt (minus (pow b 2) (times 4 a c)))))
or
plus(minus(div(b, times(2,a))), times(div(1, times(2,a)), sqrt(minus(pow(b,2), times(4, a, c)))))
APL just has a small handful of symbols with really simple definitions and, heck, you already know what +, -, ×, and ÷ mean. Scheme actually has a ginormous dictionary by comparison, and in practice you do need to read the manual anyway, so discovery is a moot point.
APL is actually simpler in that regard in practice, and the gains in readability from the symbols are just as above. Better yet: for which of the above do you immediately think, "that 1/(2a) can be factored out"? Oh, and here's APL for the above:
(-b÷2×a) + (1÷2×a)×((b*2)-4×a×c)*1÷2
Quickly empowering your brain to spot high-level patterns across your codebase like that is a huge strength of APL.
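For readers who don't know APL, here is my transcription of that line into Python (a gloss, not the author's code): APL evaluates right to left, so `*1÷2` is a square root and `-b÷2×a` negates `b/(2a)`.

```python
import math

def apl_root(a, b, c):
    # (-b÷2×a) + (1÷2×a) × ((b*2) - 4×a×c) * 1÷2
    # i.e. -b/(2a) + (1/(2a)) * sqrt(b^2 - 4ac), one quadratic root
    return (-(b / (2 * a))) + (1 / (2 * a)) * math.sqrt(b**2 - 4 * a * c)

print(apl_root(1, -3, 2))  # 2.0, a root of x^2 - 3x + 2
```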
> At this point, comments suggesting the code is “unreadable” are bound to be thrown around, but once you learn a handful of symbols, this is in fact significantly more readable than the kinds of formulas I see on a regular basis in Excel. This is proof that one is not more difficult to learn than the other, and that the problem is one of marketing rather than technological.
That logic is broken. Even accepting it as true, "X is easier to read than Y once you learn a specialized symbolic language" is not "proof that one is not more difficult to learn than the other". It is, at best, proof that there is value that may make it worth learning the one even if it is harder to learn, which is very different from proof that it is not harder to learn.
This discussion wouldn't be complete without covering Lotus Improv on NeXT. It did a few things differently: data, formulas, and presentation were separate layers, and formulas weren't applied per cell but as a relational invariant between all sets of cells in a direction for the full range. Lastly, it named everything with natural names for columns and tables, much like a database schema.
The Windows version of Improv wasn't nearly as good, possibly because it seemed too advanced for PC users familiar with 1-2-3, or because it might disrupt an already very profitable revenue stream.
I believe there's a spiritual successor in Quantrix.
What was the conceptual difference from Lotus Notes? That Lotus Improv was based on spreadsheets? I remember Lotus Notes was a very successful product, closer to the low-code paradigm. I remember seeing really fast development cycles in old organizations.
The explanation of why spreadsheets can be so bad is spot on. It's for these reasons that porting a spreadsheet to a proper app is a thing. I've been tasked with doing just that, and it's horrible.
We're supposed to rejoice and hail the spreadsheet gods because a spreadsheet can take unstructured data and create a map of (key, value), where key is a value appearing in one column and value is the sum (for example) of the values appearing in another column for that key.
We literally have a simple mapping like that, call it a "pivot table", and it's supposed to be some achievement of mankind.
I beg to differ.
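The mapping being described really is small. A sketch of the heart of a pivot table in plain Python (my illustration; the helper name is hypothetical):

```python
from collections import defaultdict

def pivot_sum(rows, key_col, value_col):
    """Group rows by key_col and sum value_col -- a one-aggregate pivot."""
    totals = defaultdict(float)
    for row in rows:
        totals[row[key_col]] += row[value_col]
    return dict(totals)

sales = [
    {"region": "north", "amount": 10},
    {"region": "south", "amount": 5},
    {"region": "north", "amount": 7},
]
print(pivot_sum(sales, "region", "amount"))  # {'north': 17.0, 'south': 5.0}
```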
From what I've seen I'm pretty sure the world runs on broken spreadsheets, with bogus numbers being presented during meetings, used for decisions (including political decisions), etc.
At some point management realizes the company (or institution) has been making nonsensical decisions for years, and it's not only a minefield of bugs and near-impossible-to-debug "code" (that'd be the "I've certainly spent more time than I want to admit trying to untangle one of those bowls of spaghetti trying to find out why a certain number is computed the way it is." from TFA, which any of us who have ever been tasked with porting a spreadsheet to an app know all too well)... But it's also a minefield politically: you cannot ask too many questions or point out too many bugs, because it exposes the brokenness everything has been running on for years.
Even though somewhere, at some level, people knew everything was FUBARed (otherwise the decision to rewrite the spreadsheet as a dedicated app would not have been taken), it's a minefield. You have to walk on eggshells so as not to piss anyone off.
My epitaph may be:
"The world runs on broken spreadsheets".
I recently discovered that Excel gained “array formulas” a few years ago. [0] It’s a little hard to grasp for me because arrays are still flattened out as individual values in the sheet, but it has been useful to write a single formula to process a group of values rather than filling across/down and perhaps losing sync when the formula is changed.
[0] https://support.microsoft.com/en-us/office/guidelines-and-ex...
This article completely misunderstands the users of spreadsheets:
>users of spreadsheets are already thinking in terms of matrices of values
I am pretty sure most users of Excel are not thinking about matrices. They are thinking of rows and columns. Programmers and mathematicians might generalize this to matrices and values, but the users themselves are not thinking about it. This is akin to viewing Tinder as a "human interaction app" and lumping it in with Slack under that abstraction, and not realizing that those are two very different use cases and mindsets of the user.
> The spreadsheet paradigm hides what's important (the formulas) in favor of the superficial (the data)
To most users, the data is what is important in a spreadsheet and the formulas are implementation details.
> To most users, the data is what is important in a spreadsheet and the formulas are implementation details.
In my experience of erm... hardcore spreadsheeting (engineering, finance)... the cells whose values matter and become a final report that motivates design/decision-making are a tiny percentage of the total cells. Most of the cells in a spreadsheet are either raw data or intermediate values. Their values have no real meaning to the author; they are just steps in a long calculation. They might look at them for sanity checking, but if they need to verify their values, that is better done as another calculation, not by repeated manual inspection.
Of course there are many types of spreadsheet user. Every cell value matters for your Vinyl collection or whatever :)
The example of "why spreadsheets bad" made me wonder - why don't spreadsheets have a visual indicator for cells with a formula rather than a static value?
I wonder if someone has worked on automatic translation of Excel spreadsheets with formulas to a proper programming language. Using the Python openpyxl package, you can read an Excel spreadsheet and get the contents of each cell, either a value or a formula. Having stored this data, it should be possible to
(1) look for changes in formulas, which could indicate an error. Why is the formula for cell A99 different from that in A100?
(2) write formulas in Python/NumPy or another language that replicates the formulas.
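Point (1) is mechanical once the formula strings are in hand. A sketch of the drift check, using formula strings as `openpyxl` returns them when a workbook is loaded without `data_only`; normalizing by stripping row numbers is my assumption about what "same formula" should mean here:

```python
import re

def normalize(formula):
    # Treat formulas as equal if they differ only in row numbers,
    # e.g. "=SUM(B99:C99)" and "=SUM(B100:C100)" normalize identically.
    return re.sub(r"(?<=[A-Z])\d+", "", formula)

def find_drift(column):
    """column: list of (cell_name, formula_string) pairs, e.g. read
    via openpyxl as [(c.coordinate, c.value) for c in ws['A']]."""
    flagged = []
    for (c1, f1), (c2, f2) in zip(column, column[1:]):
        if normalize(f1) != normalize(f2):
            flagged.append((c1, c2))
    return flagged

col = [("A98", "=SUM(B98:C98)"), ("A99", "=SUM(B99:C99)"), ("A100", "=B100*2")]
print(find_drift(col))  # [('A99', 'A100')] -- the formula pattern changed here
```

A real tool would also need to handle absolute references (`$A$1`), cross-sheet references, and column-wise drift, but the skeleton is this simple.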
The critique of spreadsheets was lacking, in my opinion. Seeing the formulas is just a mode switch away, and you can even implement your own checks, like highlighting cells that aren't formulas but should be. It's kind of the same as code that is syntactically correct but wrong. I actually think spreadsheets are a fantastic programming environment: total immersion in the results and instant feedback. The alternative proposed (introducing another layer) seemed unconvincing and cumbersome. There are almost no common types of computation that would be better done outside the spreadsheet itself, and for uncommon ones where you want to do something fancy, nearly all spreadsheets can be accessed by a vast array of tools and languages. Spreadsheets have survived all challengers, including those that try to create extra layers on top.
Is the author here? Most of this article is about how ubiquitous spreadsheets are and how this language can replace them, but I want to hear the author's honest opinion on whether a reasonable spreadsheet user will actually switch to this.
If you think they will, I also ask, what language that looks like this has ever been used mainstream? Or even in a hacker community? Brainfuck and golf languages are used by a very, very small niche. Regex is probably the most similar, but you only need to learn like 4 symbols to be useful (. * + \). And that hasn't caught on in the average excel user's repertoire.
I really wish something like this would exist and be useful because I share the exact same gripes as the author.
One simple thing I think missing from Excel is 1st class table/data sheets. A datasheet can have only well-defined manual/computed columns. They are independent of worksheets so won't hurt existing use-cases / backward compatibility. I am not sure of forward compatibility.
We can add validations and types to a sheet so that data consistency is not an issue. A good UI for a scratch sheet/worksheet will allow for ad hoc work on it. Also, we can label formulas in a worksheet (aka variables), which should simplify the code.
I understand using symbols when helpful, but... they've gone too far. The way to type the symbols in doesn't even correlate to the symbol. To insert division you enter `=
Is there a reason why APL languages all fold right? I mean other than genealogical, as in they fold right because APL does. Does it offer compelling advantages?
As an outsider it looks like wasted novelty budget. There's already a lot that's new to learn and get used to in an APL-family array language, and then on top of that you have to learn to read right to left.
On that note, why do array languages use terse notation and avoid newlines? Regexes do this too, but at least Perl 6 / Raku is pushing for a multi-line form. There are even initiatives to give lisps friendlier syntax.
Not all array languages fold right. K's reduce is a left fold and BQN has a mix of both. My understanding is that a right fold was chosen because it's more expressive: you get alternating sums when folding subtraction with a right fold, and something less interesting with a left fold. FWIW, that's my understanding of the history. I don't actually think the argument is very compelling.
It's about simple and consistent order of operations. There are no precedence rules to remember other than that, so it reduces errors and improves readability compared to a complex formula written in standard notation, where you would have to read the entire thing to know which parts are calculated first.
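The fold-direction difference is easy to see with subtraction. A sketch in Python: `functools.reduce` is a left fold, and `foldr` here is a hypothetical helper for the right fold.

```python
from functools import reduce

def foldr(f, xs):
    # Right fold: f(x0, f(x1, f(x2, ... f(x_{n-2}, x_{n-1}) ...)))
    acc = xs[-1]
    for x in reversed(xs[:-1]):
        acc = f(x, acc)
    return acc

xs = [1, 2, 3, 4]
sub = lambda a, b: a - b

print(reduce(sub, xs))  # ((1-2)-3)-4 = -8
print(foldr(sub, xs))   # 1-(2-(3-4)) = 1-2+3-4 = -2, the alternating sum
```

The right fold of `-` yields the alternating sum mentioned above, which is the expressiveness argument in a nutshell.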
I think I speak for everybody when I say that if a language isn't immediately intelligible to me within two seconds of barely glancing at it then it isn't worth anyone's time. Programmers are lazy. That's not a criticism. It's something to aspire to. [end sarcasm]
29athrowaway | 2 years ago
https://www.pckeyboard.com/page/product/00UA41P4A
MonkeyClub | 2 years ago
It's not necessary to make everything toddler-safe; we already have Python.
GMoromisato | 2 years ago
I think Improv could only satisfy a subset of Excel use cases. In particular, any non-tabular sheet was hard to do.
If you have one tool that does it all, even if imperfectly, why bother learning a second, special tool just for a subset?
Note also that it was easier for Excel to superset Improv (e.g., by adding Pivot Tables) than the reverse.
karmakaze | 2 years ago
[0] https://en.wikipedia.org/wiki/Javelin_Software
llmzero | 2 years ago
solution =: >@:{.@:(\: #&>)@:(((+/@:(e.&'aeiou') <: 2:) # ])&.>)@:;:
Example
solution 'yes, today you are reading something that is not so easy to grasp'
the result is: today
Ruby : frase.split.select{|x| x.count("aeiou")<3}.sort_by(&:length).last => "today"
Edited: Added a comparison with Ruby. It seems Ruby here is easier to read and to compose.
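For a third point of comparison (my addition, not part of the thread), the same task in Python:

```python
def longest_low_vowel_word(text, max_vowels=2):
    # Keep words with at most max_vowels vowels, then take the longest.
    words = [w for w in text.split()
             if sum(ch in "aeiou" for ch in w) <= max_vowels]
    return max(words, key=len)

print(longest_low_vowel_word(
    "yes, today you are reading something that is not so easy to grasp"))
# today
```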
jd3 | 2 years ago
https://www.youtube.com/watch?v=CEG9pFNYBCI
https://mesh-spreadsheet.com/