* Legible Mathematics, an essay about the UI design of understandable arithmetic: http://glench.com/LegibleMathematics/
* FuzzySet: interactive documentation of a JS library, which has helped fix real bugs: http://glench.github.io/fuzzyset.js/ui/
* Flowsheets V2: a prototype programming environment where you see real data as you program instead of imagining it in your head: https://www.youtube.com/watch?v=y1Ca5czOY7Q
* REPLugger: a live REPL + debugger designed for getting immediate feedback when working in large programs: https://www.youtube.com/watch?v=F8p5bj01UWk
* Marilyn Maloney: an interactive explanation of a program designed so that even children could easily understand how it works: http://glench.com/MarilynMaloney/
These are very interesting! The Legible Mathematics one also made me think of APL, which is very hard to read - but specifically it's hard to read for the same reason that traditional linear programming math is hard to read, the lack of visual grouping[1]. I wonder how it might be possible to redesign something like APL to take advantage of rich text and more complex formatting?
[1] The weird symbols might seem like an initial barrier to this, but they're only hard to read if you're unfamiliar with APL; they're actually very easy to learn and remember, as there's not that many of them and they all do very basic operations. However, APL loves operator overloading and likes to give operators different functions based on whether they're used in prefix ("monadic") or infix ("dyadic") forms, and there are also higher-order operators that consume the operators immediately adjacent to them and then operate on the expressions after that; all of this makes the nominally right-to-left parsing require a fair bit of mental effort instead of being able to rely on immediate visual recognition.
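To make the parsing point concrete, here is a toy Python sketch (not real APL, and only modeling "-") of right-to-left evaluation where the same symbol is monadic or dyadic depending on what sits to its left:

```python
def eval_apl(tokens):
    # Evaluate a whitespace-separated expression right to left, as APL does.
    value = float(tokens[-1])
    i = len(tokens) - 2
    while i >= 0:
        if tokens[i] == "-":
            if i == 0 or tokens[i - 1] == "-":
                value = -value                        # monadic: negate
            else:
                value = float(tokens[i - 1]) - value  # dyadic: subtract
                i -= 1                                # consume the left operand
        i -= 1
    return value

print(eval_apl("3 - - 2".split()))  # 3 - (-2) = 5.0
```

Even in this toy, deciding whether "-" negates or subtracts requires looking one token to the left, which is exactly the kind of mental effort (instead of immediate visual recognition) the footnote describes.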
What are your thoughts on Mathematica/Wolfram Language? Some of these ideas are present in it (like mathematical typesetting, interactive documentation and live code/data updates).
I really love your Flowsheets prototype, I hope someone pursues that direction in the future. “Spreadsheets with more tools from traditional programming” seems like such a powerful idea.
> We've simply gotten used to them: Dealing with the idiosyncrasies of bash, vi, or the JavaScript type system
This stuck out to me; there seems to be a trend in UX/UI where any move away from the "simplest path" is seen as a huge negative. Could it be that we use these tools (especially UI patterns like vi's) because, after the learning curve, they give a huge amount of value? We seem to assume that a developer tool needs the same "immediate familiarity" we build into a website where customers will bounce easily, even though this audience is willing to spend time learning a tool if it provides value to them.
Rich Hickey has a pretty interesting take on this very topic: he compares programming languages to instruments, and points out that instruments aren't made for beginners, that it takes work to become good at playing one, and that this isn't a bad thing.
> But look at this guitar player with blisters. A harpist has blisters, a bass player with blisters. There's this barrier to overcome for every musician. Imagine if you downloaded something from GitHub and it gave you blisters. Right? The horrors!
That whole talk is filled with some interesting takes on designing and building software (with the usual skew that paints Clojure in a good light, so take it with a grain of salt if necessary).
I'm a designer [0] and an engineer — you'll take my shell from my cold, dead hands.
There are two issues here:
a) Designers trying to simplify everything beyond usefulness is a good instinct gone haywire. Simplification helps, but without an understanding of accidental versus essential complexity, one is bound to end up painted into a corner with no flexibility left in the app. Few designers understand this, and those who do got it the long way round — by working on products that have a lot of essential complexity, like AdWords, and by repeatedly fighting those battles.
b) An engineer's operating environment (OS, IDE, shell, terminal) is a reflection of the inside of his or her mind writ large. Just as every Jedi has to build their own lightsaber, every engineer has to go through the pain of building out their weapon, because one's workflow is how one thinks, how one looks at the problems at hand. No UI designer can help with that.
[0] (Because it's relevant to the context: ex-Google, ex-Facebook as professional experience)
For things like vi, I'm developing the "left handed mouse" analogy:
Many mice are ambidextrous (e.g. the Apple puck). Most are weakly right-handed, with a slightly asymmetrical shape. Some are very strongly right-handed (e.g. vertical mice) and can't be used sensibly in the left hand. So left-handed mice also exist.
Some people are naturally left-handed. We (as a civilisation) used to treat this as aberrant but have now recognised it, and that different tools suit different people.
I believe that something similar exists in programming tooling in relation to how people think about programs. There are clearly some people who have a strong, unusual "handedness" and have developed tools to match (e.g. Colorforth). A few people discover these and find them amazingly usable. Most other people find them baffling.
Consider the three propositions:
a) Jimi Hendrix played guitar in the wrong way with the strings in the wrong positions
b) Jimi's configuration was correct and everyone else was wrong, because he's producing the objectively best music
c) Jimi was left handed, and had constructed an accommodation which worked for him but should not be expected to work for anyone else
Far too many discussions of programming tools devolve into (a) versus (b), largely because people want there to be an objective ranking of who the best programmer is and what the best tools are, rather than allowing for diversity of (programmer x tool).
Maybe a bit unrelated, but: I got used to Windows 2000, then Windows XP, then Gnome 2; then Gnome 3 came. So I stuck with Gnome 2, then moved to XFCE, and now with RHEL 8 I had to use Gnome 3 because there were no other options. Gnome 3 is an absolute horror show. I don't know who made it, for whom, or what the theory behind it is, but I don't see how it would be easier to use for someone non-technical; my parents both understood how to use Windows 2000. It is just weird. It is harder to multitask as efficiently as I did in XFCE and Gnome 2. Gnome 3 is not simpler - it is just more convoluted.
And I feel this is a very similar situation with other tools. I edit code with vim, in a terminal. This is simple as dirt. I do it because it is simple as dirt. Visual Studio is incredibly complicated to me, because to do the code-writing part of my job I need to understand the following:
- How code is built.
- How to build the code without using any graphical front end.
- But when you bring VS into the mix, I also need to understand Visual Studio. It does not remove complexity, it adds it.
It's a similar thing with debugging: I need to understand all the ins and outs of debugging, but bring VS into the mix and I also need to understand its stupid UI.
I like simple; my mind is simple. I can learn things, and rules and patterns make them easier to learn, but the fewer things I have to learn the happier I am. I don't have the option of not learning some things, like how to do build automation, how to debug code, how computers work, etc. But I do have the option of not learning something entirely useless like VS.
I think the lie being sold is that somehow you can be a programmer without actually knowing how to use a computer. And to know how to use a computer is not the same thing as knowing how to click on things in the UI with a mouse. To know how to use a computer you need to understand how to use it to do automation - and once you need to do this VS is just a nuisance.
I think there should be a clear distinction between your tools and, let's call them, "utilities" when it comes to UI. It is perfectly okay to make an electrical plug, a water faucet, a toaster, and the power button on your computer not only so easy that an idiot can use them, but so easy that an idiot can't use them wrong. Not only because requiring mental energy to use these is irritating, but also because it can be dangerous.
Your tools, on the other hand (chainsaw, microscope, text editor...), have no reason whatsoever to have a UI that is intuitive without training[1], because without training you are going to be either dangerous, useless, or at best just really unproductive.
[1] of course, the UI needs to be efficient after training.
My point of view is different: there has to be a really strong reason to move my eyes from the code, and editors should respect that. For example, I don't want to move from Atom to VSCode because VSCode has those huge icons that take up space and distract me, when all I want in the default editor UI is the project directory tree and the code. Everything else should be opt-in.
In "The Design of Everyday Things" Don Norman describes this distinction as knowledge encoded in the head versus knowledge encoded in the world.
For professional tools, knowledge encoded in the head, supported by appropriately encoded knowledge in the world, is absolutely a viable approach, provided there's appropriate feedback and the conceptual mapping corresponds to the user's mental model of how the tool works, i.e. actions and reactions are consistent.
With modal design patterns such as the ones used by vi, for example, this can become a problem.
Listing vi in there shows that the OP, even if they may have good points in general, is limiting what good UX is to certain specific attributes, whereas developers want to optimize for additional attributes.
And this isn't just me saying that because I like vim. It's because of the objective fact that nearly every developer tool that is created will include a vim mode. And if included as an extension it will often be one of the most popular extensions. What that objectively indicates is that there is a large contingent of developers who genuinely find the vim modal editing UX excellent, to the point they seek it in other tools as well (including browsers, mail clients, RSS readers, etc).
I think even complex products should have a linear progression from newcomers, curiously exploring the product for the first time, to experts, who want to do certain tasks as fast as possible.
It's simply not possible to learn vi by just using vi.
Also, the author emphasizes that just dumbing a product down is not the solution.
Oh yes, that is certainly what we do. And there's nothing wrong about that.
We are definitely in the business of building tools for professionals, i.e. a bit of a learning curve is not the issue. The "professional hazing process" isn't necessarily bad.
What I would say is that there is some (maybe a lot of) potential value that vi cannot realize, simply due to its very reduced form factor. I believe this has never really come to the fore because most programming languages are designed in a fairly limited way, i.e. they don't really take into account that they are a user interface.
As positive examples I would point out the kind of interactive editing mode that you can find in dependently typed programming languages. I believe e.g. Idris has a pretty cool Emacs mode.
Far be it from me to defend every UI of every development tool out there, but I think statements like these, and their illustration, could benefit from some explanation for those of us who are not designers:
> We coders still put up with horrid UX/UI when programming.
which is illustrated with a screenshot from Visual Studio... .NET 2002, I think, judging by the application icon?
Setting aside the relevance of a 20-year old screenshot, what exactly is wrong with that interface and what makes it horrid? I mean it definitely had its quirks but:
- It's spectacularly compact, certainly way better than anything I've seen in the last five years. We could display a UI builder and the associated code on a single 1024x768 screen and work on it semi-comfortably. "Beautiful" UI/UX, as understood today, is so cluttered with whitespace (oh, the irony...) that it's barely usable on a 1920x1080 screen. A similarly compact interface on today's huge screens would be a dramatic productivity improvement that, twenty years ago, we could only dream of.
- You could easily access any function through textual menus -- no hamburger menus, no obscure monochrome icons. Granted, the toolbar icons were a pain, but the way I remember it, most of us either disabled the toolbar straight away or populated it with a couple of items that were of real value and which we knew well.
- The colors have great contrast, the whole thing is readable even on a very poor-quality screenshot that seems to have been actually downsized.
- UI items have enough relief and/or distinction that it's clear what you can interact with and what you can't (maybe the item palette from the diagram editor is an exception, or at least the screenshot makes it look like one, but virtually every program in that era made it look like that so it wasn't so hard to use).
I think of most UX designers as children: they want everything to conform to the latest trends and are always in search of the new shiny thing to play with.
The problem with developer tools is not technical, it's all about the selling part. If you make a new, superior developer experience, you think developers will instantly see the benefits? Haha! You first have to teach them; then, after a few months, if you are lucky, you will get an "aha, now I understand". So first you need to manually educate each user until you have a critical mass. Then you need to market and hype your product so that developers will tell each other how cool your new technology is. Continue with that for a few years, until there is code in production that uses your product, before even thinking about a business plan. So there are a few options: either you have enough money that you do not have to "work" again and can spend your time making new tools, or your current employer lets you work on the tools. For a startup working on developer experience (language and tools) I would suggest a 10-year runway (funding), with 1/3 of the budget going into educating users and 1/3 into marketing.
Any software business that is profitable already has code in production, and that code needs to be maintained. So instead of creating a new, better experience, you can make the current experience better: putting rockets on a horse rather than creating an automobile.
> The problem with developer tools is not technical, it's all about the selling part. If you make a new superior developer experience...
Selling is going to be hard, but I feel like you underestimate the technical difficulty of replacing a large stack of complex tools that have decades of work and experience behind them. And that, in part, makes selling harder: I'm immediately suspicious of anyone who claims they've invented a superior way to work. It's more likely that they've invented a small improvement (and an arguable one at that) for a particular scenario, while developers would still have to rely on their old tools for a lot of stuff. In the worst case, they're trying to sell a tool that doesn't extend but replaces the old tools, without supporting the scenarios and workflows that existed before: a step forward on one front, three steps back on others.
Of course, small improvements to existing workflows can usually be implemented by developers for themselves (and others while at it) once they learn about the idea, and that's how the developer experience has slowly improved over the years.
For example, you can make a new fancy code editor (let's call it sublime) and hype it on features like multiple cursors. And I can have that in emacs at the cost of about 3000 sloc of elisp, and I don't have to give up any of the old things that I've grown to rely on.
But this is exactly why Rails did so well. The developer experience was good, right off the bat, in a way that a 15 minute video could portray so that developers could instantly see the benefits. There was no "manually educate" step taking months.
I told my UX buddy half a dozen years ago that I was optimistic that there seemed to be a glut of UX people coming because we were going to steal a bunch of them to look at DevEx issues.
I spent some time nerding out over woodworking hand tools a few years back and it pretty well cemented for me something that I’ve suspected for most of my career: people down in the muck have very limited vision. That your output is only better than your input by degrees.
I’m not sure there would be much fine woodworking at all if the best woodworking tools were only as good as the best software tools. There is no Lee Valley of developer tools. You can’t make me stop using JetBrains (individual licenses were their best idea ever), but it still doesn’t rate above a WoodRiver, and if I’m honest some of their stuff is Stanley level, and not even the antique stuff. And their stuff is better than just about any other tool I use all day.
I suspect Harbor Freight could make better software than Atlassian, and I don’t even mean that as a metaphor. I think I could take Harbor Freight employees and get better requirements out of them, because they wouldn’t be up to their ears in cognitive dissonance.
Atlassian truly is the Harbor Freight of developer tools!
I'm a bit hopeful that DX might finally start getting the love it deserves. -- For better or worse, Microsoft seems to understand the potential that lies in building better tools and seducing programmers to join their fold.
I don't think that DX is harder than UX because developers somehow have a more complex task than everybody else, and that there's therefore a richer "experience" to navigate than in other domains. I think it's because developers tend to be developers, so are more aware of and willing to accept the nuances and complexities that go into development. In other words, we've got a more detailed understanding of the developer experience because, as developers, it's what we experience.
We accept the heritage of developer tools - the keyboard-driven interface that displays to a teletype emulator, the edit-compile-debug workflow - and build tools that improve the processes that have grown up around those legacies. This is why, when something comes out of left field like Adele Goldberg and colleagues describing Smalltalk, we find it easy to adopt the approach to code organisation on offer and hard to adopt the image model, browser-based workflow, debugger-driven iteration, and other changes.
Meanwhile, when we go out into other domains, we use a little bit of understanding of that domain, a lot of reasoning by analogy, and an intention to "disrupt" what already exists and "eat the world", and create something that works very well for the spherical user in a vacuum without all of the detailed understanding that comes from having grown up in the system and learnt from people who grew up in it even longer ago.
A healthy exercise, I think, is to replace "developer" with "scribe" whenever we have these conversations. It becomes clear that in large part it is the overall culture that is missing the point -- which should be a kind of mass literacy. Smalltalk is a different universe indeed, but its goals were also completely different from today's. Its creators assumed that "using" a personal computer would be, in part, "programming" it, and they sought to define what that interaction would be like.
Isn’t that a circular feedback loop that leads nowhere? Developers like being developers, but being one doesn’t mean making solutions and profits. A bureaucrat likes bureaucracy, and then everything is bureaucracy under the hood, sprayed with marketing sauce. The best stories of the best products sound like “we hacked, broke, short-cutted, ignored, made a fucking lot of money, and then rewrote our fundamental instruments to match our needs”. Isn’t there a contradiction here with the nice, cool environment that developers praise?
> There has been tremendous growth in the field of UX/UI. Without a doubt, today's applications are much more user-friendly than those from 30 years ago. Back then, users were given features, interface be damned.
That 30 year time frame takes us back to 1990 and back then the user experience was limited by the technology of the time.
However a decade later we had Windows XP.
I would say that 20 year old Windows XP might in fact be a much better user experience than the modern UX/UI we have to live with today.
The much less powerful CPUs of that time felt much more responsive than the modern-day CPUs and OSes we have today.
> How do I run this thing? Where does the code start? Is my system configured correctly?
This is why there are occasional spasms of "back to basics" or plaintive remembering of the BBC Micro. You power it on, it beeps, and within a second you're in the interactive development environment. Typing code runs it directly. Typing code with a line number adds it to the program. No configuration, containers, downloads, updates, dependencies or uninformed choices to make.
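The BBC Micro dispatch rule described above is small enough to sketch. This toy Python version (with Python statements standing in for BASIC ones) shows the whole "no configuration" interaction model:

```python
# Toy BBC-Micro-style shell: input starting with a line number is stored
# into the program; anything else is executed immediately.

program = {}  # line number -> statement

def handle(line):
    head, _, rest = line.partition(" ")
    if head.isdigit():
        program[int(head)] = rest  # numbered: add to the program
    else:
        exec(line)                 # unnumbered: immediate mode

def run_program():
    # RUN: execute stored statements in line-number order.
    for _, stmt in sorted(program.items()):
        exec(stmt)

handle('10 print("hello")')
handle('20 print("world")')
handle('print("immediate mode")')  # runs right away
run_program()
```

The entire "environment" is two functions and a dict; that near-zero distance between power-on and programming is what the nostalgia is about.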
> Why do we treat this as a moral failing instead of a usability issue?
Yes. This applies in so many places. Learn from "Poka-yoke". The system should make it easier to do safe things and harder to do unsafe things.
> Tests are a usability dead end
Depends what you mean by "tests". A strong type system does away with certain categories of test (and conversely a lot of the heavy unit testing usage comes from communities with weakly typechecked languages). But both types and tests are capturing a human-level requirement of "if X then Y", a constraining of the problem space.
This is why many successful code archaeology maintenance projects start by building a test suite to capture the current functionality of the program. An executable requirements document.
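A minimal sketch of such a characterization suite; `legacy_slugify` is an illustrative stand-in for real legacy code:

```python
def legacy_slugify(title):
    # Legacy behavior we don't fully understand yet, captured as-is.
    return title.strip().lower().replace(" ", "-")

def test_characterize_slugify():
    # These assertions record what the code *does*, not what we wish it did.
    assert legacy_slugify("Hello World") == "hello-world"
    assert legacy_slugify("  padded  ") == "padded"
    # Surprising behavior gets pinned too, so a refactor can't silently change it:
    assert legacy_slugify("a  b") == "a--b"

test_characterize_slugify()
```

The point of the last assertion is that even odd behavior is part of the "requirements document" until someone decides, deliberately, to change it.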
> Tests are a usability dead end -- I know this may be contentious, but I believe test suites are another realm of excessive moralizing in lieu of better tools and better processes. Too often they function as a security blanket that simply encases the parts of the code that are unit testable, while leaving the vulnerable, untestable bits fluttering in the wind. Approaches such as generative testing seem more promising.
I'm not quite sure what the author is arguing for here in particular.
Anything that makes writing tests a bit easier, e.g. suggestions for additional test cases, would be cool, but ultimately tests are about writing down your assumptions/expectations about the code.
No, they are not formal proofs, and sometimes they are not perfect, but they still provide a lot of value. So far I haven't found a good reason not to write tests (since I outgrew my newcomer attitude), and yes, integration tests are usually what I focus on most. Wherever testing whole systems is hard today, there are fundamental challenges at play (e.g. end-to-end web UI tests). I don't quite see how tooling will get rid of the need for tests.
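The generative testing the quoted article mentions can be hand-rolled in a few lines; `encode`/`decode` here are toy functions, and real tools (QuickCheck, Hypothesis) add input shrinking and smarter generation on top of this basic loop:

```python
import random

def encode(xs):
    return ",".join(str(x) for x in xs)

def decode(s):
    return [int(x) for x in s.split(",")] if s else []

def check_roundtrip(trials=200, seed=0):
    # Instead of enumerating cases, generate random inputs and check a
    # property that must always hold: decode(encode(x)) == x.
    rng = random.Random(seed)
    for _ in range(trials):
        xs = [rng.randint(-1000, 1000) for _ in range(rng.randint(0, 20))]
        assert decode(encode(xs)) == xs, f"round-trip failed for {xs}"

check_roundtrip()
```

One property replaces dozens of hand-picked examples, which is roughly the tooling improvement the quote is gesturing at.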
> So far I haven't found a good reason not to write tests...I don't quite see how tooling will get rid of the need for tests.
I write device control software. It’s very difficult to have true automated testing of things like drivers. You can write unit tests for subsystems, like packet parsers, but integration testing generally requires good ol’ “monkey testing.”
“Just write a mock!” Is what I hear all the time.
Mocking a device is a massive project; potentially larger than designing the device, itself. Remember that the mock needs to be of unimpeachable quality, and also needs to do things like simulate adverse signal environments.
DX for that kind of thing can be awful.
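To illustrate the gap between a toy mock and an "unimpeachable" one, here is a minimal sketch (all names hypothetical, no real driver API) that already has to model framing and corrupted signals:

```python
import random

class MockDevice:
    """Toy device mock that can inject corrupted frames on demand."""

    def __init__(self, corruption_rate=0.0, seed=None):
        self.corruption_rate = corruption_rate
        self.rng = random.Random(seed)

    def read_frame(self):
        frame = b"\x02STATUS:OK\x03"  # STX ... ETX framing
        if self.rng.random() < self.corruption_rate:
            # Simulate an adverse signal environment: flip one byte.
            i = self.rng.randrange(len(frame))
            frame = frame[:i] + bytes([frame[i] ^ 0xFF]) + frame[i + 1:]
        return frame

def parse_frame(frame):
    if not (frame.startswith(b"\x02") and frame.endswith(b"\x03")):
        raise ValueError("bad framing")
    return frame[1:-1]
```

With `corruption_rate=0.0` every frame parses cleanly; raising it exercises the parser's error path. A credible mock would also need timing, partial reads, and protocol state, which is where the "massive project" comes in.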
As far as basic DX goes...
Most developer tools are wrappers for command-line OS tools, and it shows.
They can also be quite buggy, and we accept this bugginess. I use Xcode, which is quite “crashy.” I am constantly fixing issues by deleting the build folder.
I think that improving DX might be the most important thing for the future of the software industry. If a single developer devotes his life to better DX, hundreds of thousands of developers might be significantly more productive.
I really like exploring new DX approaches (just recently, I published an extension for VS Code enabling visual debugging [1]). But I find it hard to make a living out of it, as so many companies take it for granted that everything is free. They would rather hire another developer than pay for licenses that might effectively increase the efficiency of the developers they already have.
Just a little bit of feedback: while the visuals are quite nice, I think this page could be more helpful if treated like a landing page. What's the problem, and why is this helpful compared to what I currently/normally do? I might not be the target audience, though, since I couldn't quickly work out what it does or why it helps by scanning.
On impact: I've been running the numbers for a tool I'm working on and the projected savings for the industry look insane! Just by shaving off 10 minutes here or there you can contribute a lot.
This. Despite the latest <libname>-cli trend, anything slightly more sophisticated requires you to create a structure, then create structures in nodes, then a configuration structure: structures all the way down. One could object that everything we do is structures, but there is no UI to these, only a text file and formal documentation (if you're lucky).
Text files and their disconnection from documentation are a root of all our evils. Not only do they diverge with new versions of everything, but there is a constant attention switch (stacks of them!) and unnecessary diving into things that may or may not be important to the development process. There is no way to omit these checks when you learn or return to an idle project.
I have a long-standing idea that every config, format, API call, and so on should come with an inseparable documentation UI (+rationale, examples of use, best-practice links, pre-configuration, diff/merge views, etc.). Yes, texts are simple and easy to read, but we also write. You can make text from a structure in O(1), but you cannot make knowledge from an empty file in O(sensible), for any sensible sensible.
Not arguing on salaries though. Even pretenders who have no clue can take a great cut off this nonsense.
I taught Scratch to kids. They love the visual element of the language. It's so hard to make basic mistakes - partly because you can't put an Int where the code expects a Bool.
The interface shows you all the components you can use, and what data they require.
I then move on to teaching Python, and watch kids get frustrated.
Languages like DRAKON should be the future of our profession - not typing 80 char lines into a terminal.
The question is: how do you grow up from there, in terms of expressive power? I.e. is it possible to find more middle ground between Python and Scratch?
The other day I was using MusicBrainz Picard and it occurred to me just how absolutely pleasant it is. It uses standard desktop widgets, is fast, has some nice attention to detail and consistently accurately relates the state of things, is powerful, and stays out of your way. It—and this is #1-with-a-bullet more important than every other UX concern—behaves consistently.
I don’t think it would survive a pass from most “UX” folks in such a nice state. It 1000% wouldn’t survive a designer or hybrid designer/UX person (it wouldn’t look pretty in screenshots on their portfolio).
The main problems in software tools are lack of consistent behavior, lies, and tons of ways to use a bunch of tools that all do basically the same thing (and you’ll probably have to know more than one). The hardest part’s not using them, exactly, it’s knowing all the different, stupid reasons they break. It’s a general quality issue more than a broader UX thing, I think. That extends to libraries. And I don’t also mean tools and libs from big names—I mostly mean them.
I can't be the only one who prefers the way applications looked 30 years ago? When they dumped whatever they had to offer right there for you to explore and pick from.
These days everything is hidden below a million layers, trying so hard "not to be in your way" that you need a deep love of pixel-hunt-point-and-click adventures to get anything done..
Let's take an example: Specifying DNS servers in Windows.
95:
1. Rightclick "Network" on desktop -> Properties
2. Doubleclick on the TCP/IP protocol for the NIC.
3. Type new DNS server.
4. Press OK.
10:
1. Press Start.
2. Search for Control Panel.
3. Open Network and Internet settings.
4. Select Change Adapter Settings.
5. Rightclick NIC and select Properties.
6. Select TCP/IP and then Properties.
7. Type new DNS server.
8. Press OK.
Sure, some may think that Windows 10 looks better than Windows 3.11 or 95, but I don't and I can't believe everyone does.
I think developers are missing the testing practices UX designers have grown accustomed to:
1. Usability tests where developers literally sit down and watch someone install and use your library from scratch (I find a lot of developers do not like to do this): seeing where they have to look up documentation (and how they do it), what bugs they hit, and how often they make common mistakes. A lot of this could be logged, e.g. a developer signs up, you have their email + API key, and you can connect the dots between what doc pages they view, how often, and what errors they commonly run into.
2. Doing whatever it takes to minimize the time to aha moment. This is absolutely critical for any product design effort, but not many companies measure this if any at all when it comes to DX. I think Twilio and maybe Stripe are the only ones that may have had this as a key onboarding KPI.
Ultimately I think a majority of the developers capable of implementing these things are quite technical and used to the general state of DX, so they don't view bad DX as much of an issue unless it's really terrible.
Lastly, I really wish error messages were super informative. For example, "undefined method `my_method_name' for nil:NilClass (NoMethodError)" still feels a bit cryptic to someone newer to programming. If the message also told me the human-readable name of the variable that was nil, and the exact line (the stack trace by itself can be confusing), that little touch would go a long way. Compare that to something highlighted in a different color that says "The variable you used called "contact" on line 87 was found to be nil; this is likely causing the issue." That way, when you run into the error and are scanning the stack trace, the computer tells you as quickly as possible what may be wrong. Again, this is for novices; the original wording is likely succinct enough for someone more experienced.
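The friendlier wording suggested above can be prototyped cheaply. This sketch is in Python (the quoted error is Ruby, but the idea carries over) and pulls the line and function out of the traceback; `lookup_email` is a made-up example:

```python
import traceback

def explain_none_error(exc):
    # Restate an AttributeError in plain language, naming the line and
    # function where something turned out to be None.
    frame = traceback.extract_tb(exc.__traceback__)[-1]
    return (f"Something used on line {frame.lineno} (in {frame.name}) "
            f"was None: {exc}")

def lookup_email(contact):
    return contact.email  # blows up when contact is None

try:
    lookup_email(None)
except AttributeError as err:
    print(explain_none_error(err))
```

A real implementation would map the failing expression back to a variable name via the source line, but even this level of restatement answers "where, and what was nil?" faster than a raw trace.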
Why doesn't every error message have a link to a specific page with discussions, instructions, etc. Or maybe even a button you can click where the machine tries a best guess at an automatic fix?
> When was the last time you heard of a programming language discussed in terms of discoverability, succinctness, relevance, let alone beauty?
I mean, fairly often throughout the years. Particularly in communities for lisps, ruby, perl, python, C. More common 5-10 years back perhaps.
Relevance is a fairly common topic across the board. Discoverability is maybe the least common topic here, and one that's a pretty interesting one for PL design imo.
I haven't seen too many blog posts about these things lately, but they're frequent enough discussions in personal circles and in mailing lists/chats that this question seemed odd to me.
The problems the author identifies largely have already been solved. The solutions just haven't become universal, generally for reasons completely unrelated to the actual problem, and more to do with PR.
I wouldn't say it's fundamentally harder. It's fundamentally the same thing, because developers are users, and software development isn't the only domain where users have discerning tastes and needs that they want catered to. It's not the only domain where users will happily accept a steep learning curve for the years of use that will pay off that initial investment, or where shoving detailed information under the carpet can be a negative thing.
I think it would be arrogant to think of software development as exceptional in this sense, but it's certainly reflected in a lot of software designs. The reality IMO is that if you aren't working off a set of insights and observations like the one listed in the article—regardless of your users' domain—you aren't making enough of an effort to design for your users.
I wish we could scrap the term experience again and just go back to interface.
I connect experience with hyped concepts that are already forgotten today.
That said, tendency to decrease choice seems to only serve certain users. Other feel just as restricted as developers, which are also users, so the dichotomy should be questioned.
Glench|6 years ago
* Legible Mathematics, an essay about the UI design of understandable arithmetic: http://glench.com/LegibleMathematics/
* FuzzySet: interactive documentation of a JS library, which has helped fix real bugs: http://glench.github.io/fuzzyset.js/ui/
* Flowsheets V2: a prototype programming environment where you see real data as you program instead of imagining it in your head: https://www.youtube.com/watch?v=y1Ca5czOY7Q
* REPLugger: a live REPL + debugger designed for getting immediate feedback when working in large programs: https://www.youtube.com/watch?v=F8p5bj01UWk
* Marilyn Maloney: an interactive explanation of a program designed so that even children could easily understand how it works: http://glench.com/MarilynMaloney/
MrEldritch|6 years ago
[1] The weird symbols might seem like an initial barrier to this, but they're only hard to read if you're unfamiliar with APL; they're actually very easy to learn and remember, as there's not that many of them and they all do very basic operations. However, APL loves operator overloading and likes to give operators different functions based on whether they're used in prefix ("monadic") or infix ("dyadic") forms, and there are also higher-order operators that consume the operators immediately adjacent to them and then operate on the expressions after that; all of this makes the nominally right-to-left parsing require a fair bit of mental effort instead of being able to rely on immediate visual recognition.
henrikeh|6 years ago
What are your thoughts on Mathematica/Wolfram Language? Some of these ideas are present in it (like mathematical typesetting, interactive documentation and live code/data updates).
ripley12|6 years ago
werg|6 years ago
kurnikas|6 years ago
This stuck out to me: there seems to be a trend in UX/UI where any move away from the "simplest path" is seen as a huge negative. Could it be the case that we use these tools (especially UI patterns like vi) because, after the learning curve, they give a huge amount of value? It seems like we are assuming that we should build a developer tool with the same level of "immediate familiarity" that we try to build into a website where customers will bounce easily, for an audience who is willing to spend time learning a tool if it provides value to them.
adamkl|6 years ago
> But look at this guitar player with blisters. A harpist has blisters, a bass player with blisters. There's this barrier to overcome for every musician. Imagine if you downloaded something from GitHub and it gave you blisters. Right? The horrors!
That whole talk is filled with some interesting takes on designing and building software (with the usual skew that paints Clojure in a good light, so take it with a grain of salt if necessary).
[1] https://github.com/matthiasn/talk-transcripts/blob/master/Hi...
rolleiflex|6 years ago
There are two issues here:
a) Designers trying to simplify everything beyond usefulness is a good instinct gone haywire. Simplification helps, but without an understanding of accidental complexity versus essential complexity, one is bound to end up painted into a corner with no flexibility left in the app. Few designers understand this, and those who do got it the long way round — by working on products that have a lot of essential complexity, like AdWords, and by repeatedly fighting those battles.
b) An engineer's operating environment, OS, IDE, shell, terminal, is a reflection of the inside of his or her mind writ large. Like every Jedi has to build their own lightsaber, every engineer has to go through this pain of building out their weapon, because one's workflow is how one thinks, how one looks at the problems at hand. No UI designer can help with that.
[0] (Because it's relevant to the context: ex-Google, ex-Facebook as professional experience)
pjc50|6 years ago
Many mice are ambidextrous (e.g. the Apple puck). Most are weakly right-handed with a slightly asymmetrical shape. Some are very strongly right-handed (e.g. vertical mice) and can't be used sensibly in the left hand. So left-handed mice also exist.
Some people are naturally left-handed. We (as a civilisation) used to treat this as aberrant but have now recognised it, and that different tools suit different people.
I believe that something similar exists in programming tooling in relation to how people think about programs. There are clearly some people who have a strong, unusual "handedness" and have developed tools to match (e.g. Colorforth). A few people discover these and find them amazingly usable. Most other people find them baffling.
Consider the three propositions:
a) Jimi Hendrix played guitar in the wrong way with the strings in the wrong positions
b) Jimi's configuration was correct and everyone else was wrong, because he's producing the objectively best music
c) Jimi was left handed, and had constructed an accommodation which worked for him but should not be expected to work for anyone else
Far too many discussions of programming tools devolve into (a) versus (b), largely because people want there to be an objective ranking of who the best programmer is and what the best tools are, rather than allowing for diversity of (programmer x tool).
ailideex|6 years ago
And I feel this is a very similar situation with other tools. I edit code with vim, in a terminal. This is simple as dirt. I do it because it is simple as dirt. Visual Studio is incredibly complicated to me because, to do the code-creating part of my job, I need to understand the following:
- How code is built.
- How to build the code without using any graphical front end.
- But now when you bring VS into the mix I need to also understand visual studio. It does not remove complexity, it adds it.
Similar thing with debugging: I need to understand all the ins and outs of debugging, but bring VS into the mix and I need to understand its stupid UI.
I like simple, my mind is simple. I can learn things, if there are rules and patterns it makes it easier to learn, but the less things I have to learn the happier I am. I don't have an option to not learn some things, like how to do build automation, how to debug code, how computers work, etc. But I do have an option to not learn something entirely useless like VS.
I think the lie being sold is that somehow you can be a programmer without actually knowing how to use a computer. And to know how to use a computer is not the same thing as knowing how to click on things in the UI with a mouse. To know how to use a computer you need to understand how to use it to do automation - and once you need to do this VS is just a nuisance.
Just a rant I guess.
beefield|6 years ago
Your tools, then again (chainsaw, microscope, text editor...) have no reason whatsoever to have a UI that is intuitive without training[1], because without training you are going to be either dangerous, useless, or at best just really unproductive.
[1] of course, the UI needs to be efficient after training.
izietto|6 years ago
BjoernKW|6 years ago
For professional tools knowledge encoded in the head supported by appropriately encoded knowledge in the world absolutely is a viable approach, provided there's appropriate feedback and conceptual mapping corresponds to the mental model a user has about how that tool works, i.e. actions and reactions should be consistent.
With modal design patterns such as the ones used by vi, for example, this can become a problem.
addicted44|6 years ago
And this isn't just me saying that because I like vim. It's because of the objective fact that nearly every developer tool that is created will include a vim mode. And if included as an extension it will often be one of the most popular extensions. What that objectively indicates is that there is a large contingent of developers who genuinely find the vim modal editing UX excellent, to the point they seek it in other tools as well (including browsers, mail clients, RSS readers, etc).
Gehinnn|6 years ago
It's simply not possible to learn vi by just using vi.
Also, the author emphasizes that just dumbing down a product is not the solution.
werg|6 years ago
As positive examples I would point out the kind of interactive editing mode that you can find in dependently typed programming languages. I believe e.g. Idris has a pretty cool Emacs mode.
dehrmann|6 years ago
alxlaz|6 years ago
> We coders still put up with horrid UX/UI when programming.
which is illustrated with a screenshot from Visual Studio... .NET 2002, I think, judging by the application icon?
Setting aside the relevance of a 20-year old screenshot, what exactly is wrong with that interface and what makes it horrid? I mean it definitely had its quirks but:
- It's spectacularly compact, certainly way better than anything I've seen in the last five years. We could display a UI builder and the associated code on a single 1024x768 screen and work on it semi-comfortably. "Beautiful" UI/UX, as understood today, is so cluttered with whitespace (oh, the irony...) that it's barely usable on a 1920x1080 screen. A similarly compact interface on today's huge screens would be a dramatic productivity improvement that, twenty years ago, we could only dream of.
- You could easily access any function through textual menus -- no hamburger menus, no obscure monochrome icons. Granted, the toolbar icons were a pain, but the way I remember it, most of us either disabled the toolbar straight away or just populated it with a couple of items that were of real value and that we knew well.
- The colors have great contrast, the whole thing is readable even on a very poor-quality screenshot that seems to have been actually downsized.
- UI items have enough relief and/or distinction that it's clear what you can interact with and what you can't (maybe the item palette from the diagram editor is an exception, or at least the screenshot makes it look like one, but virtually every program in that era made it look like that so it wasn't so hard to use).
So what's wrong with that thing?
meddlepal|6 years ago
z3t4|6 years ago
Any software business that is profitable already has code in production, and that code needs to be maintained. So instead of creating a new, better experience, you make the current experience better: e.g. putting rockets on a horse rather than creating an automobile.
clarry|6 years ago
Selling is going to be hard, but I feel like you underestimate the technical difficulty of replacing a large stack of complex tools that have decades of work and experience behind them. And that, in part, makes selling harder: I'm immediately suspicious of anyone who claims they've invented a superior way to work. It's more likely that they've invented a small improvement (and an arguable one at that) for a particular scenario, but developers would still have to rely on their old tools for a lot of stuff. In the worst case, they're trying to sell a tool that doesn't extend but replaces the old tools, without providing support for scenarios and workflows that existed with the old tools: a step forward on one front, three steps back on others.
Of course, small improvements to existing workflows can usually be implemented by developers for themselves (and others while at it) once they learn about the idea, and that's how the developer experience has slowly improved over the years.
For example, you can make a new fancy code editor (let's call it sublime) and hype it on features like multiple cursors. And I can have that in emacs at the cost of about 3000 sloc of elisp, and I don't have to give up any of the old things that I've grown to rely on.
regularfry|6 years ago
hinkley|6 years ago
I spent some time nerding out over woodworking hand tools a few years back, and it pretty well cemented for me something that I've suspected for most of my career: people down in the muck have very limited vision, and your output is only better than your input by degrees.
I'm not sure there would be much fine woodworking at all if the best woodworking tools were only as good as the best software tools. There is no Lee Valley of developer tools. You can't make me stop using JetBrains (individual licenses were their best idea ever), but it still doesn't rate above a WoodRiver, and if I'm honest some of their stuff is Stanley level, and not even the antique stuff. And their stuff is better than just about any other tool I use all day.
I suspect Harbor Freight could make better software than Atlassian, and I don't even mean that as a metaphor. I think I could take Harbor Freight employees and get better requirements out of them because they wouldn't be up to their ears in cognitive dissonance.
werg|6 years ago
I'm a bit hopeful that DX might finally start getting the love it deserves. -- For better or worse, Microsoft seems to understand the potential that lies in building better tools and seducing programmers to join their fold.
grahamlee|6 years ago
We accept the heritage of developer tools - the keyboard-driven interface that displays to a teletype emulator, the edit-compile-debug workflow - and build tools that improve the processes that have built around those legacies. This is why when something comes out of left field like Adele Goldberg and colleagues describing Smalltalk, we find it easy to adopt the approach to code organisation on offer and hard to adopt the image model, browser-based workflow, debugger-driven iteration, and other changes.
Meanwhile, when we go out into other domains, we use a little bit of understanding of that domain, a lot of reasoning by analogy, and an intention to "disrupt" what already exists and "eat the world", and create something that works very well for the spherical user in a vacuum without all of the detailed understanding that comes from having grown up in the system and learnt from people who grew up in it even longer ago.
scroot|6 years ago
wruza|6 years ago
jussij|6 years ago
That 30 year time frame takes us back to 1990 and back then the user experience was limited by the technology of the time.
However a decade later we had Windows XP.
I would say that 20 year old Windows XP might in fact be a much better user experience than the modern UX/UI we have to live with today.
The much less powerful CPUs of that time felt much more responsive than the modern-day CPUs/OSes we have today.
pjc50|6 years ago
This is why there are occasional spasms of "back to basics" or plaintive remembering of the BBC Micro. You power it on, it beeps, and within a second you're in the interactive development environment. Typing code runs it directly. Typing code with a line number adds it to the program. No configuration, containers, downloads, updates, dependencies or uninformed choices to make.
> Why do we treat this as a moral failing instead of a usability issue?
Yes. This applies in so many places. Learn from "Poka-yoke". The system should make it easier to do safe things and harder to do unsafe things.
> Tests are a usability dead end
Depends what you mean by "tests". A strong type system does away with certain categories of test (and conversely a lot of the heavy unit testing usage comes from communities with weakly typechecked languages). But both types and tests are capturing a human-level requirement of "if X then Y", a constraining of the problem space.
This is why many successful code archaeology maintenance projects start by building a test suite to capture the current functionality of the program. An executable requirements document.
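One concrete shape such a project takes is a characterization test: before touching inherited code, you write tests that assert its current behavior, whatever that turns out to be. A minimal sketch (the `legacy_discount` function is invented for illustration):

```python
import unittest

def legacy_discount(price, code):
    # Stand-in for inherited code nobody fully understands anymore.
    if code == "VIP":
        return price * 0.8
    return price

class CharacterizationTest(unittest.TestCase):
    """Pin down what the code does *today*, not what it 'should' do."""

    def test_vip_gets_twenty_percent_off(self):
        self.assertEqual(legacy_discount(100, "VIP"), 80.0)

    def test_unknown_code_changes_nothing(self):
        self.assertEqual(legacy_discount(100, "???"), 100)
```

Once the suite captures the existing behavior, refactoring can proceed with the tests acting as that executable requirements document.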
quelltext|6 years ago
I'm not quite sure what the author is arguing for here in particular.
Anything that makes writing tests a bit easier, e.g. suggestions for additional test cases, would be cool, but ultimately tests are about writing down your assumptions/expectations about the code.
No, they are not formal proofs and sometimes they are not perfect but they still provide a lot of value. So far I haven't found a good reason not to write tests (since I outgrew my newcomer attitude) and yeah integration tests are usually what I focus on most. For any case where testing whole systems today is hard, there are some fundamental challenges (e.g. end to end web UI test). I don't quite see how tooling will get rid of the need for tests.
ChrisMarshallNY|6 years ago
I write device control software. It’s very difficult to have true automated testing of things like drivers. You can write unit tests for subsystems, like packet parsers, but integration testing generally requires good ol’ “monkey testing.”
“Just write a mock!” Is what I hear all the time.
Mocking a device is a massive project; potentially larger than designing the device, itself. Remember that the mock needs to be of unimpeachable quality, and also needs to do things like simulate adverse signal environments.
DX for that kind of thing can be awful.
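To make the point concrete, here is a deliberately toy sketch (the class and its behavior are entirely invented) of a device mock; notice that even this trivial version already has to model connection state and packet loss, and a faithful mock of a real device (timing, noise, firmware quirks) quickly becomes a project of its own:

```python
import random

class MockDevice:
    """A toy mock of a serial-style device, for illustration only."""

    def __init__(self, drop_rate=0.0, seed=None):
        self.connected = False
        self.drop_rate = drop_rate           # crude "adverse signal environment"
        self._rng = random.Random(seed)

    def connect(self):
        self.connected = True

    def send(self, packet: bytes) -> bytes:
        if not self.connected:
            raise ConnectionError("device not connected")
        if self._rng.random() < self.drop_rate:
            return b""                       # simulated dropped packet
        return b"ACK:" + packet              # echo-style acknowledgement

dev = MockDevice()
dev.connect()
print(dev.send(b"PING"))    # b'ACK:PING'
```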
As far as basic DX goes...
Most developer tools are wrappers for command-line OS tools, and it shows.
They can also be quite buggy, and we accept this bugginess. I use Xcode, which is quite “crashy.” I am constantly fixing issues by deleting the build folder.
Back to testing...
I prefer test harnesses over unit tests. I write about that here: https://medium.com/chrismarshallny/testing-harness-vs-unit-4...
Gehinnn|6 years ago
I really like to explore new DX approaches (just recently, I published an extension for VS Code enabling visual debugging [1]). But I find it hard to make a living out of it, as so many companies find it granted that everything is free. They would rather hire another developer than paying for licenses that might effectively increase the effiencency of the developers they already have.
[1] https://github.com/hediet/vscode-debug-visualizer/blob/maste...
yoshyosh|6 years ago
werg|6 years ago
On impact: I've been running the numbers for a tool I'm working on and the projected savings for the industry look insane! Just by shaving off 10 minutes here or there you can contribute a lot.
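As a back-of-envelope illustration of how small time savings multiply out (every figure below is an assumption for illustration, not a number from the comment above):

```python
# All inputs are assumed, not measured.
minutes_saved_per_day = 10
work_days_per_year = 250
hourly_cost = 60            # fully loaded cost in $/hour (assumed)
developers = 1_000_000      # hypothetical addressable population

savings_per_dev = minutes_saved_per_day / 60 * hourly_cost * work_days_per_year
total_savings = savings_per_dev * developers
print(f"${savings_per_dev:,.0f} per developer per year")   # $2,500 per developer per year
print(f"${total_savings:,.0f} across the population")      # $2,500,000,000 across the population
```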
wruza|6 years ago
Text files and their disconnection from documentation are a root of all our evils. Not only do they diverge with new versions of everything, but there is a constant attention switch (stacks of them!) and unnecessary diving into things that may or may not be important to the development process. There is no way to omit these checks when you learn or return to an idle project.
I have a long-held idea that every config, format, API call, and so on should come with an inseparable documentation UI (+ rationale, examples of use, best-practice links, pre-configuration, diff/merge views, etc.). Yes, texts are simple and easy to read, but we also write. You can make text from a structure in O(1), but you cannot make knowledge from an empty file in O(sensible), for any sensible sensible.
Not arguing on salaries though. Even pretenders who have no clue can take a great cut off this nonsense.
chii|6 years ago
why isn't a schema and then auto-complete a sensible way to do config?
Both XML and JSON have schema formats that let you build generic text editors with auto-complete.
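As a sketch of the idea, here is a minimal JSON Schema (the title and property names are invented for illustration) that schema-aware editors can use for completion, inline documentation, and validation:

```json
{
  "$schema": "http://json-schema.org/draft-07/schema#",
  "title": "MyApp configuration",
  "type": "object",
  "properties": {
    "port": { "type": "integer", "description": "TCP port to listen on" },
    "logLevel": { "type": "string", "enum": ["debug", "info", "warn", "error"] }
  },
  "required": ["port"]
}
```

A config file can then opt in, e.g. via a top-level `"$schema"` key or an editor file association, and the editor offers completions and hover docs instead of leaving you to guess key names.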
Edmond|6 years ago
https://codesolvent.com/config-node/
edent|6 years ago
The interface shows you all the components you can use, and what data they require.
I then move on to teaching Python, and watch kids get frustrated.
Languages like DRAKON should be the future of our profession - not typing 80 char lines into a terminal.
werg|6 years ago
karatestomp|6 years ago
I don’t think it would survive a pass from most “UX” folks in such a nice state. It 1000% wouldn’t survive a designer or hybrid designer/UX person (it wouldn’t look pretty in screenshots on their portfolio).
The main problems in software tools are lack of consistent behavior, lies, and tons of ways to use a bunch of tools that all do basically the same thing (and you’ll probably have to know more than one). The hardest part’s not using them, exactly, it’s knowing all the different, stupid reasons they break. It’s a general quality issue more than a broader UX thing, I think. That extends to libraries. And I don’t also mean tools and libs from big names—I mostly mean them.
koffiezet|6 years ago
werg|6 years ago
dusted|6 years ago
Let's take an example: specifying DNS servers in Windows.

Windows 95:
1. Right-click "Network" on the desktop -> Properties.
2. Double-click the TCP/IP protocol for the NIC.
3. Type the new DNS server.
4. Press OK.

Windows 10:
1. Press Start.
2. Search for Control Panel.
3. Open Network and Internet settings.
4. Select Change Adapter Settings.
5. Right-click the NIC and select Properties.
6. Select TCP/IP, then Properties.
7. Type the new server.
8. Press OK.
Sure, some may think that Windows 10 looks better than Windows 3.11 or 95, but I don't and I can't believe everyone does.
yoshyosh|6 years ago
1. Usability tests where developers literally sit down and watch someone install and use your library from scratch (I find a lot of developers do not like to do this). Things like seeing where they have to look up documentation (and how they do it), what bugs they hit, and how often they make common mistakes. I think a lot of this could be logged: e.g. a developer signs up, you have their email + API key, and you can connect the dots between what doc pages they view, how often, and what errors they commonly run into.
2. Doing whatever it takes to minimize the time to aha moment. This is absolutely critical for any product design effort, but not many companies measure this if any at all when it comes to DX. I think Twilio and maybe Stripe are the only ones that may have had this as a key onboarding KPI.
Ultimately I think the majority of developers capable of implementing these things are quite technical and used to the general state of DX, so they don't view bad DX as much of an issue unless it's really terrible.
Lastly, I really wish error messages were super informative. Getting something like "undefined method `my_method_name' for nil:NilClass (NoMethodError)" still feels a bit cryptic to someone newer to programming. If you could also tell me the human-readable name of the variable that was nil, and the exact line (the stack trace by itself can be confusing), that little touch would go a long way. Compare that error message to something highlighted in a different color that says "The variable you used called "contact" on line 87 was found to be nil; this is likely causing the issue". That way, when you run into the error and are scanning the stack trace, the computer is telling you as quickly as possible what may be wrong. Again, this is for a novice; the original wording is likely succinct enough for someone more experienced.
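The example above is Ruby, but the idea ports anywhere. Here is a minimal Python sketch (the `friendly_hint` helper is invented for illustration) that rewrites the equivalent `'NoneType' object has no attribute ...` error into the kind of message described:

```python
import traceback

def friendly_hint(exc: Exception) -> str:
    """Turn a cryptic NoneType AttributeError into a novice-friendly hint.

    A toy sketch: a real tool would inspect the AST or bytecode to recover
    the exact variable name; here we just point at the offending source line.
    """
    if not (isinstance(exc, AttributeError) and "'NoneType'" in str(exc)):
        return str(exc)  # not a nil/None error; pass the message through
    frame = traceback.extract_tb(exc.__traceback__)[-1]
    return (f"Something on line {frame.lineno} ({frame.line!r}) was None "
            f"when you tried to use it. Original error: {exc}")

# Example: 'contact' is None, like the Ruby example above.
contact = None
try:
    print(contact.name)
except AttributeError as e:
    print(friendly_hint(e))
```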
werg|6 years ago
Why doesn't every error message have a link to a specific page with discussions, instructions, etc.? Or maybe even a button you can click where the machine tries a best guess at an automatic fix?
kaens|6 years ago
> When was the last time you heard of a programming language discussed in terms of discoverability, succinctness, relevance, let alone beauty?
I mean, fairly often throughout the years, particularly in communities for lisps, Ruby, Perl, Python, and C. More common 5-10 years back, perhaps.
Relevance is a fairly common topic across the board. Discoverability is maybe the least common topic here, and one that's a pretty interesting one for PL design imo.
I haven't seen too many blog posts about these things lately, but they're frequent enough discussions in personal circles and in mailing lists/chats that this question seemed odd to me.
regularfry|6 years ago
The problems the author identifies largely have already been solved. The solutions just haven't become universal, generally for reasons completely unrelated to the actual problem, and more to do with PR.
werg|6 years ago
boomlinde|6 years ago
I wouldn't say it's fundamentally harder. It's fundamentally the same thing, because developers are users, and software development isn't the only domain where users have discerning tastes and needs that they want catered to. It's not the only domain where users will happily accept a steep learning curve for the years of use that will pay off that initial investment, or where shoving detailed information under the carpet can be a negative thing.
I think it would be arrogant to think of software development as exceptional in this sense, but it's certainly reflected in a lot of software designs. The reality IMO is that if you aren't working from a set of insights and observations like the ones listed in the article—regardless of your users' domain—you aren't making enough of an effort to design for your users.
_pmf_|6 years ago
lallysingh|6 years ago
raxxorrax|6 years ago
I wish we could scrap the term experience and just go back to interface. I connect experience with hyped concepts that are already forgotten today.
That said, the tendency to decrease choice seems to serve only certain users. Others feel just as restricted as developers, who are also users, so the dichotomy should be questioned.
azhenley|6 years ago
See here for examples of publications in the area: http://web.eecs.utk.edu/~azh/publications.html
Also relevant, I wrote a blog post for students to get started in human factors in software engineering: http://web.eecs.utk.edu/~azh/blog/guidehciseresearch.html
werg|6 years ago