It's not specifically stated, but the assumption seems to be that we should sometimes make our code look closer to how we display math in papers or on a whiteboard.
I'm legitimately on the fence about this.
I recently re-watched Guy Steele's PPoPP 2017 talk on "Computer Science Metanotation", and aside from wanting to make CSM an unambiguous formal system, he specifically says at one point that he wants tools to support CSM as it appears (i.e. with stacked overbars, Gentzen-style inference rules, etc.) because "anything else is a translation".
And partly I get that. There is real cognitive work if you have to constantly translate back and forth between two representations of the "same" thing.
But should we favor readability or ease of interaction / modification? Keyboards give you a way to insert a sequence of characters. Notations that are not graphically linear (e.g. a symbol that has both a subscript and a superscript) create an ambiguity about how you input them. "Modes" where we display something different than what is typed can create ambiguity about how to edit them.
And if a tool only covers 90% of the notational convention you care about, it quickly gets frustrating as you repeatedly bump up against that boundary. I experience this in Emacs Org mode with "symbol" support.
You input characters using the same notation as LaTeX (e.g. \mu or \hbar) and then tab-complete to Unicode. Even Jupyter notebooks support that convention. Most people in Julia's target audience know LaTeX already, so there is zero learning curve.
It hugely simplified my scientific code, where I already had variable names like hnu, omega_squared, and k_prime_prime, which become arduous to check against long equations.
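A minimal sketch of the difference (written in Python 3, which also accepts Greek-letter identifiers; the physical constants and values here are just illustrative, and in Julia you would enter the symbols via \hbar<TAB> and \omega<TAB>):

```python
import math

# The ASCII spellings of the kind described above:
hbar = 1.054571817e-34                 # reduced Planck constant, J*s
omega_squared = (2 * math.pi * 5.0e14) ** 2

# The same quantities with Unicode identifiers (pasted in directly here):
ħ = 1.054571817e-34
ω = 2 * math.pi * 5.0e14               # angular frequency of a 500 THz wave
E = ħ * ω                              # reads directly as E = ħω

assert ω ** 2 == omega_squared         # same value, very different readability
```

Checking `E = ħ * ω` against a paper's "E = ħω" is a glance; checking `E = hbar * omega` is a small translation step, repeated for every term.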
Emacs is actually why this is attractive to me. Its LaTeX input mode makes it easy to enter the large set of Unicode characters I care about, but when my font doesn't support them, a fallback font is used, which throws everything out of alignment and frequently messes up the line height as well.
On the other hand, if a tool only covers 10% of the notation you care about because of a restriction like plain-text, it’s very difficult to document what you’re doing as you bump up against this boundary.
Humans have been using nonlinear notation in a wide variety of fields, but computer programming languages seem especially stuck on plain text everywhere, despite the utility of such notation. I would rather faff around for a minute or two figuring out how to change my integration limits in a comment than leave some horrendous documentation comment like “computes the integral from a to b of the blah blah blah blah”, which, because you can’t see the familiar notation, you first have to interpret before you can check it against the function. (Several PhD students I know in applied mathematics and physics will “translate” these comments from code onto a scrap of paper in front of them before attempting to dig into the function.)
Personally I think it would be great if I could embed more rich text (styling, equations, and images) in comments to better communicate my intent. This stuff otherwise just winds up being left out (because it’s too hard), or in some other separate documentation which is difficult to keep in sync during development.
I am sincerely of the opposite opinion. Code that I write on a whiteboard or in a paper or in a textbook is much easier to read than ASCII code.
The overbar example you give is bad notation independently of whether it is code or not. But Unicode indices and superscripts are amazingly useful. And .dot and .kron are simply terrible notation compared to the standard Unicode operators for the same operations.
Scala made this available, and it was a mess, so to my knowledge it has been largely ignored by the community since. Early Scala projects were rife with this sort of impossible-to-read, impossible-to-maintain code.
It's just a very nice coding font.
Tons of OpenType features like stylistic alternates, etc.
But it doesn't go to excess with ligatures.
It has huge Unicode coverage.
The letter shapes are nice.
It's just good.
The asterisk is far too heavy for anyone working in a language that makes heavy use of it.
I normally use Inconsolata at "11 point" size (quotes because this is a nominal value, not an actual one); JuliaMono is larger at the same nominal size, but going down to 10pt results in a level of graininess that I don't find acceptable.
Great effort, but I'll stick with Inconsolata for now.
This font is beautiful and exactly what I was looking for.
I’ve used Courier as my coding and console font for a long time, as it had the right character design and stroke width choices to make reading really easy, even at small font sizes.
A lot of modern coding fonts are taller and thinner. I’ve tried Fira Code, Source Code Pro, Consolas, and others each for a week or so, and found myself struggling to read identifiers and punctuation as well as before.
This font however hits all the same points and is instantly readable for me. Thanks for sharing!
Courier was the default typeface of IBM typewriters for decades, mechanically monospaced just like everyone else's.
No difficulty producing columnar or tabular data on the first attempt was foreseen.
When FORTRAN programming came along, you wanted the characters to line up in columns properly too, as expected with the simple dot-matrix fonts.
Daisywheel printers all had mono Courier, with some starting to also autodetect mechanical proportionally-spaced (PS) printwheels when inserted.
Dot-matrix printers got better fonts than DOS itself, but what you saw on the screen was not very much what you got on the printout the first time. PS could increase the woe. Courier was available both ways. But tons of people stuck with the simple characters you get on cheap store receipts.
DOS text files using a monospaced font can be reselected to a different monospaced font without all the formatting difficulty you can get otherwise.
When Windows got popular, loads of people found out their old dot-matrix printer had been capable of beautiful PS output the whole time, it had just been out of reach without a WYSIWYG drafting approach.
You can still set your email window to a monospaced font and draft a message in a Notepad window set to mono, intentionally keeping all the sentences of each paragraph on a single line (disable Word Wrap before copying & pasting), with a single blank line between these _paragraphs_.
Then just copy & paste the whole text line-wise from Notepad into different email windows, and the mono spacing can be a good way to make the different local & remote word wraps work with fewer headaches.
Courier's enduring universality still makes it a top choice for this type of thing, but a new monospace font that can serve as a functional superset is good to have.
It looks fairly clean but the lowercase "r" character stuck out at me as I read the webpage. Mainly in the smaller paragraph size, not so much the titles. Something about it just looks "off" to me, sticks out like a sore thumb as I'm reading.
It's got a left-pointing serif like the i and j. It should just be a shortened version of the n. He provides alternate versions of some characters like g and a, but I don't see an alternate for r.
I would point to Cousine[1] as a nice mono font with a similar set of goals. It is not clear to me whether Cousine has comparable Unicode coverage, but it claims a "pan-European" character set.
I worked on a Haskell code base that had Unicode enabled. It rendered GitLab’s search useless, and vim/emacs had different key combos to enter them. We eventually worked them out of our code due to the hassle.
The Bulgarian example in the language section tells a story of a security guard in a warehouse who is pretending to be on post but is in fact secretly eating meatballs behind some crates.
I’ve been learning Thai and Lao, and it’s a bummer that they have virtually no monospace support. Abugida support is certainly going to be a bit of a challenge; the number of characters is definitely finite, but how does every character take up one ‘space’ when อี้ is three characters? It’s very easy for a human to visually stack them into one ‘space’, but trickier for a machine.
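For what it's worth, the stacking is visible right in the code points. A small sketch with Python's standard library (any language with Unicode tables would do) shows that the cluster above is one base consonant plus two nonspacing marks:

```python
import unicodedata

cluster = "อี้"            # renders as a single cell, but…
print(len(cluster))       # …it is three code points

for ch in cluster:
    print(f"U+{ord(ch):04X} {unicodedata.name(ch)} ({unicodedata.category(ch)})")

# A monospaced renderer has to give the two Mn (nonspacing mark)
# characters a zero-column advance and stack them over the base:
marks = [ch for ch in cluster if unicodedata.category(ch) == "Mn"]
```

Category "Mn" is essentially what terminal width functions (e.g. wcwidth) use to assign a zero-cell width, which is how three code points can end up occupying one ‘space’.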
Well, I don't know about your nationality, but as a Korean person I found that the lack of CJK characters in a coding font is usually a non-problem, mostly because there are almost no coding fonts that have them. (Apart from some fonts made by local companies, Noto CJK Mono is almost the only one... and I do not like it.)
Usually I just set a fallback CJK font and don't use my native language while coding - I just got better at English by writing comments in it.
This font could also be a good fallback for your preferred monospace font. My OS falls back to a non-monospace font, which sometimes gets really hairy to work with on some characters.
There is no LaTeX in the code. You _may_ use familiar LaTeX commands to produce certain symbols, which then _become_ the code. But that's just a convenience Julia provides. If you prefer to copy-paste from your character map application, or memorize their meta-key shortcuts, that's entirely up to you.
Regarding the heavy asterisk: the font has an alternate, lighter asterisk.
Check out the Input font here: https://input.fontbureau.com
[1] https://fonts.google.com/specimen/Cousine
Some other fonts I've been using lately are: Cascadia Code, JetBrains Mono, Monoid, Pragmata, Luculent, VL Gothic, Luxi Mono.
I'm excited to spend some time trying this out.
Bit too wide for my taste, I fall on the Pragmata Pro/Iosevka side of preferring a more condensed font for code.
This is really a very impressive effort, and it was cool reading functions that truly benefit from the expressive Unicode character set.