As someone with a hardware background, I'll throw in my $0.02. The schematic capture elements to connect up large blocks of HDL with a ton of I/O going everywhere are one of the few applications of visual programming that I like. Once you get past defining the block behaviors in HDL, instantiation can become tedious and error-prone in text, since the tools all kinda suck with very little hinting or argument checking, and the modules can and regularly do have dozens of I/O arguments. Instead, it's often very easy to map the module inputs to schematic-level wires, particularly in situations where large buses can be combined into single fat lines, I/O types can be visually distinguished, etc. IDE keyboard shortcuts also make these signals easy to follow and trace as they pass through hierarchical organization of blocks, all the way down to transistor-level implementations in many cases.
I've also always had an admiration for the Falstad circuit simulation tool[0], as the only SPICE-like simulator that visually depicts the magnitude of voltages and currents during simulation (and not just on graphs). I reach for it once in a while when I need to do something a bit bigger than I can trivially fit in my head, but not so complex that I feel compelled to fight a more powerful but significantly shittier-to-work-with IDE to extract an answer.
Schematics work really well for capturing information that's independent of time, like physical connections or common simple functions (summers, comparators, etc). Diagrams with time included sacrifice a dimension to show sequential progress, which is fine for things that have very little changing state attached or where query/response is highly predictable. Sometimes, animation helps restore the lost dimension for systems with time-evolution. But beyond trivial things that fit on an A4 sheet, I'd rather represent time-evolution of system state with timing diagrams. I don't think there are many analogous situations in typical programming applications that call for timing diagrams, but they are absolutely foundational for digital logic applications and low-level hardware drivers.
As much as I prefer to do everything in a text editor and use open-source EDA tools/linters/language servers, Xilinx's Vivado deserves major credit from me for its block editor, schematic view, and implementation view.
For complex tasks like connecting AXI, SoC, memory, and custom IP components together, things like bussed wires and ports, as well as GUI configurators, make the process of getting something up and running on a real FPGA board much easier and quicker than if I had to do it all manually (of course, after I can dump the Tcl trace and move all that automation into reproducible source scripts).
I believe the biggest advantage of the Vivado block editor is the "Run Block Automation" flow that can quickly handle a lot of the wire connections and instantiation of required IPs when integrating an SoC block with modules. I think it would be interesting to explore if this idea could be successfully translated to other styles of visual programming. For example, I could place and connect a few core components and let the tooling handle the rest for me.
Also, a free idea (or I don't know if it's out there yet): an open-source HDL/FPGA editor or editor extension with something like the Vivado block editor that works with all the open-source EDA tools and has all the same bells and whistles, including an IP library, programmable IP GUI configurators, bussed ports and connections, and block automation. You could even integrate different HDL front-ends, as there are many more now than in the past. I know Icestudio is a thing, but that seems designed for educational use, which is also cool to see! I think a VSCode webview-based extension could be one easier way to prototype this.
> The schematic capture elements to connect up large blocks of HDL with a ton of I/O going everywhere are one of the few applications of visual programming that I like.
Right. Trying to map lines of code to blocks 1:1 is a bad use of time. Humans seem to deal with text really well. The problem comes when we have many systems talking to one another: skimming through text becomes far less effective. Being able to connect 'modules' or 'nodes' together visually (whatever those modules are) and rewire them seems to be a better idea.
For a different take that's not circuit-based, see how shader nodes are implemented in Blender. That's not (as far as I know) a Turing-complete language, but it gives one idea of how you can connect 'nodes' together to perform complex calculations: https://renderguide.com/blender-shader-nodes-tutorial/
A more 'general purpose' example is the blueprint system from Unreal Engine. Again we have 'nodes' that you connect together, but you don't create those visually, you connect them to achieve the behavior you want: https://dev.epicgames.com/documentation/en-us/unreal-engine/...
> I don't think there's many analogous situations in typical programming applications that call for timing diagrams
Not 'timing' per se (although those exist), but situations where you want to see changes over time across several systems are incredibly common, and existing tooling is pretty poor for that.
I think we need to differentiate: Visualize a program vs. Visually program.
This post seems to still focus on the former, while an earlier HN post on Scoped Propagators https://news.ycombinator.com/item?id=40916193 showed what's possible with the latter. It specifically showed what's possible when programming with graphs.
Bret Victor might argue visualizing a program is still "drawing dead fish".
The power of visual programming is diminished if the programmer aims to produce source-code as the final medium and only use visualization on top of language. It would be much more interesting to investigate "visual first" programming where the programmer aims to author, and more importantly think, primarily in the visual medium.
I think there's a very important real-world nuance here.
What you want with a programming language is to handle granular logic in a very explicit way (business requirements, precise calculations, etc.). What this article posits, and what I agree with, is that existing languages offer a more concise way of doing that.
If I wanted to program in a visual way, I'd probably still want / need the ability to do specific operations using a written artifact (language, SQL, etc). Combining them in different ways visually as a first-class operation would only interest me if it operated at the level of abstraction that visualizations currently operate at, many great examples of which are offered in the article (multiple code files, system architecture, network call).
The dead fish metaphor is so interesting because programs aren’t static objects, they move.
Most visual programming environments represent programs in a static way, they just do it with pictures (often graphs) instead of text.
Perhaps there is something to be discovered when we start visualizing what the CPU does at a very low level, as in moving and manipulating bits, and then build visual, animated abstractions on top of that.
A lot of basic bit manipulations might be much clearer that way, like shifting, masking, etc. I wonder what could be built on top to get a more bird's-eye view.
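To make that concrete, here's a tiny Python sketch of the kind of bit-level view meant here (the helper name is made up):

```python
def show_bits(value, width=8):
    """Render an integer as a fixed-width bit string so each operation is visible."""
    return format(value & ((1 << width) - 1), f"0{width}b")

x = 0b1101_0010
print(show_bits(x))         # 11010010
print(show_bits(x >> 2))    # 00110100  shift right: bits slide toward the LSB
print(show_bits(x & 0x0F))  # 00000010  mask: keep only the low nibble
print(show_bits(x | 0x01))  # 11010011  set the lowest bit
```

A visual tool could animate exactly these transitions instead of asking you to decode hex in your head.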
Bret Victor might argue visualizing a program is still "drawing dead fish".
The power of visual programming is diminished if the programmer aims to produce source-code as the final medium and only use visualization on top of language.
I disagree. We frequently break up large systems into chunks like modules, or micro-services, or subsystems. Often, these chunks' relationships are described using diagrams, like flowcharts or state transition diagrams, etc.
Furthermore, quite often there are zero direct code references between these chunks. Effectively, we are already organizing large systems in exactly the fashion the op is proposing. Inside each chunk, we just have code. But at a higher level viewpoint, we often have the abstraction described by a diagram. (Which is often maintained manually, separate from the repo.)
>I think we need to differentiate: Visualize a program vs. Visually program.
Not necessarily; programming with visual DSLs is already a thing in the field of language-oriented programming. Visual programming refers to a different thing, but it's not impossible to make a connection between the two fields.
Visual programming is now more like an umbrella term for projects (and research) exploring new ways of programming beyond the textual representation. It would probably be better to call it non-textual programming, because some of its ideas are not tied to visuals, like structural editing.
Visual programming environments offer a concrete way to program general-purpose code, while DSLs offer a very specific language to program in a domain (language-oriented programming offers ways to invent these DSLs). Often visual programming is applied to a specific domain, as an alternative to textual scripting languages. Maybe this confuses people into thinking visual languages are less powerful and not general-purpose.
What's described in the article is a visual DSL based on diagrams, used as the source for the programming itself (which is exactly what UML is). But the whole thing isn't well thought out, and I think it only serves the purpose of dunking on visual programming, or on the people who are working on it for "not understanding what professional programmers need".
The power of visual programming is diminished if the programmer aims to produce source-code as the final medium
Why would that be true?
It would be much more interesting to investigate "visual first" programming where the programmer aims to author, and more importantly think, primarily in the visual medium.
What advantages would that give? The disadvantages are so big that it will basically never happen for general-purpose programming. Getting a brand-new language to make any sort of inroads in finding a niche takes at least a decade, and that's usually with something updating and iterating on what people are already doing.
My read of this post (especially the title) is the author does differentiate normally but chose to blur the lines here for a narrative hook & a little bit of fun.
Great article. Any sufficiently complex problem requires looking at it from different angles in order to root out the unexpected and ambiguous. Visualizations do exactly that.
This is especially important in the age of AI coding tools and how coding is moving from lower level to higher level expression (with greater levels of ambiguity). One ideal use of AI coding tools would be to be on the lookout for ambiguities and outliers and draw the developer's attention to them with relevant visualizations.
> do you know exactly how your data is laid out in memory? Bad memory layouts are one of the biggest contributors to poor performance.
In this example from the article, if the developer indicates they need to improve performance, or the AI evaluates the code and thinks it's suboptimal, it could bring up a memory layout diagram to help the developer work through the problem.
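As a concrete illustration of the kind of layout problem such a diagram would surface, here's a Python/ctypes sketch; the struct names are invented and exact sizes depend on the platform ABI:

```python
import ctypes

class Padded(ctypes.Structure):
    # char, int64, char: the 8-byte field forces alignment padding around the chars
    _fields_ = [("a", ctypes.c_char), ("b", ctypes.c_int64), ("c", ctypes.c_char)]

class Reordered(ctypes.Structure):
    # same fields, largest first: far less padding
    _fields_ = [("b", ctypes.c_int64), ("a", ctypes.c_char), ("c", ctypes.c_char)]

print(ctypes.sizeof(Padded))     # typically 24 on a 64-bit ABI
print(ctypes.sizeof(Reordered))  # typically 16
```

The same data, reordered, takes a third less memory per element — the sort of fact a layout visualization makes obvious at a glance.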
> Another very cool example is in the documentation for Signal's Double Ratchet algorithm. These diagrams track what Alice and Bob need at each step of the protocol to encrypt and decrypt the next message. The protocol is complicated enough for me to think that the diagrams are the source of truth of the protocol
This is the next step in visualizations: moving logic from raw code to expressions within the various visualizations. But we can only get there bottom-up, solving one particular problem, one method of visualization at a time. Past visual code efforts have all been top-down universal programming systems, which cannot look at things in all the different ways necessary to handle complexity.
> Any sufficiently complex problem requires looking at it from different angles in order to root out the unexpected and ambiguous. Visualizations do exactly that.
To me, this is an underappreciated tenet of good visualization design.
Bad/lazy visualizations show you what you already know, in prettier form.
Good visualizations give you a better understanding of things-you-don't-know at the time of designing the visualization.
I.e. If I create a visualization using these rules, will I learn some new facts about the "other stuff"?
> Bad memory layouts are one of the biggest contributors to poor performance.
This will depend on the application, but I've encountered far more of the "wrong data structure / algorithm" kind of problem, like iterating over a list to check if something's in there when you could just make a map ("we need ordering": sure, we have ordered maps!).
The social problem with visual programming is indeed the same as with "Mythical Non-Roboticist". But there are quite a few issues on the technical side too:
- Any sufficiently advanced program has a non-planar dataflow graph. Yes, "pipelines" are fine, but for anything beyond that you are going to need labels. And with labels it becomes just like a plain old non-visual program, only less structured.
- Code formatting becomes much more important and much harder to do. With textual program representation it is more or less trivial to do auto-formatting (and the code is somewhat readable even with no formatting at all). Yet we still don't have a reliable way to lay out a non-trivial graph so that it doesn't look like a spaghetti bowl. I find UML state machines very useful and also painful, because after every small edit I have to spend ten minutes fixing the layout.
- Good data/program entry interfaces are hard to design, and novel tools rarely do a good job of it the first time. Most "visual" tools have a total disaster for a UI, vs. text editors that have been incrementally refined for some 70 years.
I am surprised I have not seen LabView mentioned in this thread. It is arguably one of the most popular visual programming languages after Excel and I absolutely hate it.
It has all the downsides of visual programming that the author mentions. The visual aspect of it makes it so hard to understand the flow of control. There is no clear left to right or top to bottom way of chronologically reading a program.
LabView’s shining examples would be trivial Python scripts (aside from the GUI tweaking). However, its runtime interactive 2D graph/plot widgets are unequaled.
As soon as a “function” becomes slightly non trivial, the graphical nature makes it hard to follow.
Structured data with the “weak typedef” is a minefield.
A simple program to solve a quadratic equation becomes an absolute mess when laid out graphically. Textually, it would be a simple 5-6 line function that is easy to read.
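For comparison, the textual version really is just a few lines; a sketch in Python (using `cmath` so complex roots fall out naturally):

```python
import cmath

def solve_quadratic(a, b, c):
    """Return both roots of a*x**2 + b*x + c = 0 (complex if the discriminant is negative)."""
    if a == 0:
        raise ValueError("not quadratic: a must be nonzero")
    d = cmath.sqrt(b * b - 4 * a * c)
    return (-b + d) / (2 * a), (-b - d) / (2 * a)

print(solve_quadratic(1, -3, 2))  # roots of x^2 - 3x + 2: 2 and 1
```

The graphical equivalent needs a wire for every intermediate term of the discriminant, which is where the mess comes from.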
Source control is also a mess. How does one “diff” a LabView program?
And Simulink. I lost years in grad school to Simulink, but it is very nice for complex state machine programming. It's self-documenting in that way. Just hope you don't have to debug it, because that's a special hell.
This is exactly why a visual representation of code can be useful for analyzing certain things, but will rarely be the best (or even preferred) way to write code.
I think a happy medium would be an environment where you could easily switch between "code" and "visual" view, and maybe even make changes within each, but I suspect developers will stick with "code" view most of the time.
Also, from the article:
> Developers say they want "visual programming"
I certainly don't. What I do want is an IDE which has a better view into my entire project, including all the files, images, DB, etc., so it can make much better informed suggestions. Kind of like JetBrains on steroids, but with better built-in error checking and autocomplete suggestions. I want the ability to move a chunk of code somewhere else, and have the IDE warn me (or even fix the problem) when the code I move now references out-of-scope variables. In short, I want the IDE to handle most of the grunt work, so I can concentrate on the bigger picture.
Most industrial automation programming happens in an environment similar to LabView, if not LabView itself. DeltaV, Siemens, Allen-Bradley, etc. Most industrial facilities are absolutely full of them, with text-based code being likely a small minority for anything higher level than the firmware of individual PLCs and such.
I think the whole flow concept is really only good for media pipelines and such.
In mathematics, everything exists at once just like real life.
In most programming languages, things happen in explicit discrete steps which makes things a lot easier, and most node based systems don't have that property.
I greatly prefer block based programming where you're dragging rules and command blocks that work like traditional programming, but with higher level functions, ease of use on mobile, and no need to memorize all the API call names just for a one off tasks.
Anyone who mentions visual scripting without mentioning the game industry just hasn't done enough research at all. It's actually a really elegant way to handle transforming data.
Look up Unreal blueprints, shader graphs, procedural model generation in blender or Houdini. Visual programming is already here and quite popular.
> One reason is because we think that other, more inexperienced, programmers might have an easier time with visual programming. If only code wasn't as scary! If only it was visual! Excel Formula is the most popular programming language by a few orders of magnitude and it can look like this:
Ahem. Excel is one of the most visual programming environments out there. Everything is laid out on giant 2D grids you can zoom in and out of. You can paint arrows that show you the whole dependency tree. You can select, copy, paste, and delete code with the mouse only. You can color things to help you categorize which cell does what. You can create user inputs, charts, and pivot grids with clicks.
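The dependency-tracking half of that model is simple enough to caricature in a few lines of Python (a toy sketch, not how Excel actually evaluates):

```python
# Cells hold either plain values or formulas (functions of other cells).
cells = {
    "A1": 10,
    "A2": 32,
    "B1": lambda get: get("A1") + get("A2"),  # like =A1+A2
    "B2": lambda get: get("B1") * 2,          # like =B1*2
}

def get(name):
    """Evaluate a cell, recursively pulling in its dependencies."""
    v = cells[name]
    return v(get) if callable(v) else v

print(get("B2"))  # 84
```

The grid itself is the visualization: every formula's inputs have a spatial address you can point an arrow at.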
I think people get too hung up on the visuals. There was a (failed) attempt to create something called intentional programming by Charles Simonyi. That happened in the middle of the model driven architecture craziness about 20 years ago.
In short, his idea was to build a language where higher-level primitives are created by doing transformations on lower-level syntax trees, all the way down to assembly code. The idea would be that you would define languages in terms of how they manipulate existing syntax trees. Kind of a neat concept. And well suited to visual programming as well.
Whether you build that syntax tree by typing code in an editor or by manipulating things in a visual tool is beside the point. It all boils down to syntax trees.
Of course that never happened and MDA also fizzled out along with all the UML meta programming stuff. Meta programming itself is of course an old idea (e.g. Lisp) and still lives on in things like Ruby and a few other things.
But more useful in modern times is how refactoring IDEs work: they build syntax trees of your code and then transform them, hopefully without making the code invalid. Like a compiler, an IDE needs an internal representation of your code as a syntax tree in order to do these things. You only get so far with regular expressions and trying to rename things. But lately, compiler builders are catching on to the notion that good tools and good compilers need to share some logic. That too is an old idea (Smalltalk and IBM's Visual Age). But it's being re-discovered in e.g. the Rust community, and of course Kotlin is trying to get better as well (being developed by Jetbrains and all).
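To make the "programs manipulating programs" point concrete, here's a toy rename-refactoring pass using Python's `ast` module (requires Python 3.9+ for `ast.unparse`; a real IDE adds scope analysis on top so shadowed names aren't clobbered):

```python
import ast

class RenameVar(ast.NodeTransformer):
    """A toy refactoring pass: rename every reference to one variable."""
    def __init__(self, old, new):
        self.old, self.new = old, new

    def visit_Name(self, node):
        if node.id == self.old:
            node.id = self.new
        return node

src = "total = price * qty\nprint(total)"
tree = RenameVar("total", "subtotal").visit(ast.parse(src))
print(ast.unparse(tree))  # the same program with `total` renamed throughout
```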
But beyond that, the idea seems a bit stuck. Too bad because I like the notion of programs being manipulated by programs. Which is what refactoring does. And which is what AI also needs to learn to do to become truly useful for programming.
Most of this isn't visual "programming" just good explanatory diagrams. I think it gets to a core issue which is a dichotomy between:
- trying to understand existing programs - for which visuals are wanted by most, but they usually need conscious input to be their best
- programming (creating new code) itself - where the efficiency of the keyboard (with its 1d input that goes straight to spaghetti code) has never been replaced by visual (mouse based?) methods other than for very simple (click and connect) type models
You are right. The diagrams are used as explanations not as the source of the program. But wouldn't it be neat if when you sketch out the state transition in a diagram (how I think about the state transitions), _that diagram_ was the source of truth for the program?
That is the implied point: let's go to places where we already draw diagrams and check if we can elevate them into the program
Yes, in order to be visual coding (or better yet, specification) it needs to be executable in its native form, or maybe via a very direct translation.
The concept of an executable specification first came to my attention in IEC 61499, the standard for Distributed Automation. First published in 2005, it was way, way ahead of its time, so far ahead that it is still gaining traction today.
Shout out to anyone reading who was involved in the creation of IEC 61499 in 2005; it was a stroke of genius, and for its time, orders of magnitude more so. It is also worth a look just to prompt thinking for anyone involved in distributed systems of any kind.
Initially I thought there was no way you could have such a thing as an executable specification, but then, over many years I evolved to a place where I could generically create an arbitrary executable specification for state based behavior (see my other post this topic).
I believe I have found the best achievable practice to allow defining behaviors for mission/safety critical functionality, while avoiding implicit state.
Programming “via” Visualization — doesn’t scale. Great for demos. Good in limited places.
Visualizations “of” a Program — quite useful. Note there are lots of different ways to visualize the same program to emphasise/omit different details. The map is not the territory, all models are wrong, etc.
For example, having models of a capacitor and a resistor, you can put them together in a schematic, which in turn can be part of a bigger design, and then test it in a simulator. That's how Simplorer works. Alternatively you can write the code in VHDL or Modelica. But visual is quicker, easier, and more reliable.
Obviously it works well for UI, was used for decades now.
As for the rest... there are visual programming environments for robots, mostly for kids.
I think the difficulty here is addressing: who is your target audience? Depending on that answer, you have different existing, relatively successful visual programming languages. For example, game designers have managed to make good use of Unreal's Blueprints to great effect. Hobbyists use ComfyUI's node language to wire up generative AI components to great effect. As far as generic computing goes, Scratch has managed to teach a lot of programming principles to people looking to learn. The problem comes in when you try and target a generic systems programmer: the target is too abstract to be able to create an effective visual language. In this article, they try and solve this issue by choosing specific subproblems for which a visual representation is helpful: codebase visualization, computer network topology, memory layouts, etc. But none of them are programming languages.
[post author] I agree. In many domains you can find a great mapping between some visual representation and how the developer (beginner or not) wants to think about the problem.
I personally don't see any one pictorial representation that maps to a general programming language. But if someone does find one, in the large and in the small, that'd be great!
Scratch is the only type of visual programming I've enjoyed using. It's easy to read if you're an experienced programmer because it has the same structure as regular code, and it's easy to read for beginners because everything is broken into large blocks that have what they do written right on them. The way code is structured in most programming languages is actually very logical and intuitive, and it's the most successful system we have so far. The problem for beginners is that they can't figure out if they enjoy programming until they've learned the syntax, which can be very discouraging for some people. I've seen Scratch bridge that gap for people a couple of times, and I think it's probably the best model when it comes to teaching people to code.
I think other types of models would only be useful for situations where writing code isn't the most intuitive way to make something. From my limited experience, a visual system for making shaders is a pretty good idea, because ideally, you don't want to have many conditional branches or loops, but you might have a lot of expressions that would look ugly in regular code.
I'm going to throw a vote in here for Grasshopper, the visual programming language in Rhino3d as doing it the right way. It is WIDELY used in architectural education and practice alike.
Unfortunately, most visuals you'll get of the populated canvas online are crap. And for those of us who make extremely clean readable programs it's kind of a superpower and we tend to be careful with how widely we spread them. But once you see a good one you get the value immediately.
Here's a good simple program I made, as a sample. [0]
Also, I want to give a shout-out to the Future of Coding community in this.
The Whole Code Catalog [1] and Ivan Reese's Visual Programming Codex [2] are great resources in the area.
I also have to mention, despite the awful name, Flowgorithm is an EXCELLENT tool for teaching the fundamentals of procedural thinking. [3] One neat thing is you can switch between the flow chart view and the script code view in something like 35 different languages natively (or make your own plugin to convert it to your language of choice!)
p.s. If you are used to regular coding, Grasshopper will drive you absolutely freaking bonkers at first, but once you square that it is looping but you have to let the whole program complete before seeing the result, you'll get used to it.
People have mentioned a bunch of successful visual programming applications, but one that I've been thinking a lot about lately is Figma.
Figma has managed to bridge the gap between designers, UXR, and engineers in ways that I've never seen done before. I know teams that are incredibly passionate about Figma and use it for as much as they can (which is clearly a reflection of Figma themselves being passionate about delivering a great product) but what impressed me was how much they focus on removing friction from the process of shipping a working application starting from a UI mockup.
I think Figma holds a lot of lessons for anyone serious about both visual programming and cross-functional collaboration in organizations.
I simply have to recommend Glamorous Toolkit to anyone interested in visual programming: https://gtoolkit.com
It focuses on the kind of visual programming the article argues for: Class layout, code architecture, semantics. It's one of the best implementations I have seen. The authors are proponents of "moldable development", which actively encourages building tools and visualizations like the ones in the article.
The issue with every one I’ve used is that it hides all the parameters away in context-aware dialog boxes. Someone can’t come along and search for something; they need to click every element to view the dialog for that element to hunt for what they are looking for. I found that every time the lead dev on a project changed, it was easier to re-write the whole thing than to try and figure out what the previous dev did. There was no such thing as a quick change for anyone other than the person who wrote it, and wrote it recently. Don’t touch the code for a year and it might as well get another re-write.
This article seems focused on "how do we help programmers via visual programming", and it presents that case very well, in the form of various important and useful ways to use visual presentation to help understand code.
There's a different problem, of helping non-programmers glue things together without writing code. I've seen many of those systems fail, too, for different reasons.
Some of them fail because they try to do too much: they make every possible operation representable visually, and the result makes even non-programmers think that writing code would be easier. The system shown in the first diagram in the article is a great example of that.
Conversely, some of them fail because they try to do too little: they're not capable enough to do most of the things people want them to do, and they're not extensible, so once you hit a wall you can go no further. For instance, the original Lego Mindstorms graphical environment had very limited capabilities and no way to extend it; it was designed for kids who wanted to build and do extremely rudimentary programming, and if you wanted to do anything even mildly complex in programming, you ended up doing more work to work around its limitations.
I would propose that there are a few key properties desirable for visual programming mechanisms, as well as other kinds of very-high-level programming mechanisms, such as DSLs:
1) Present a simplified view of the world that focuses on common needs rather than every possible need. Not every program has to be writable using purely the visual/high-level mechanism; see (3).
2) Be translatable to some underlying programming model, but not necessarily universally translatable back (because of (1)).
3) Provide extension mechanisms where you can create a "block" or equivalent from some lines of code in the underlying model and still glue it into the visual model. The combination of (2) and (3) creates a smooth on-ramp for users to go from using the simplified model to creating and extending the model, or working in the underlying system directly.
One example of a high-level model that fits this: the shell command-line and shell scripts. It's generally higher-level than writing the underlying code that implements the individual commands, it's not intended to be universal, and you can always create new blocks for use in it. That's a model that has been wildly successful.
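That composition model is easy to mimic: small blocks that only care about their input and output streams. A toy sketch in Python:

```python
# Tiny "commands" that compose like a shell pipeline: each one only
# cares about the stream of lines it receives and emits.
def grep(pattern, lines):
    return (line for line in lines if pattern in line)

def wc_l(lines):
    return sum(1 for _ in lines)

log = ["INFO start", "ERROR disk full", "INFO ok", "ERROR timeout"]
# roughly: cat log | grep ERROR | wc -l
print(wc_l(grep("ERROR", log)))  # 2
```

Creating a new "block" is just defining another function with the same shape, which is exactly the extension mechanism (3) above.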
Shameless plug, but this is what we’re trying to do at Magic Loops[0].
We joke it’s the all-code no-code platform.
Users build simple automations (think scrapers, notifications, API endpoints) using natural language.
We break their requests into smaller tasks that are then mapped to either existing code (“Blocks”) or new code (written by AI).
Each Block then acts as a UNIX-like program, where it only concerns itself with the input/output of its operation.
We’ve found that even non-programmers can build useful automations (often ChatGPT-based like baby name recommenders), and programmers love the speed of getting something up quickly.
Mindstorms is an example of what did not work. I want to provide an example of what does: the BBC micro:bit. It has a visual programming interface that is translatable to Python or JavaScript.
Most times in my career that I've seen people talking about visual programming, it's not about the developers - it's about lowering the bar so that (cheaper) non-developers can participate.
A Business Analyst may or may not have a coding background, but their specifications can be quite technical and logical and hopefully they understand the details. The assumption is that if we create our own Sufficiently Advanced Online Rule Engine they can just set it all up without involving the more expensive programmers.
This is discussed a bit in the first paragraph, but I just wanted to reiterate that most systems I had to deal with like this were talked about in terms of supplying business logic, rules, and control flow configuration to a pre-existing system or harness that executes that configuration. The "real" programmers work on that system, adding features and code blocks for anything outside the specification, while the other staff set up the business logic.
It works to some degree. I think things like Zapier can be quite good for this crowd, and a lot of mailing list providers have visual workflow tools that let non-programmers do a lot. A DSL like Excel formulas would be in this group too, since it operates inside an existing application, except that it's non-visual. Some document publishing tools like Exstream (I worked with it pre-HP, so years ago) did a lot in this space too.
I did read and appreciate the whole article, I just noticed this part for a reason - I'm working on a visual question builder again right now for a client who wants to edit their own customer application form on their custom coded website, instead of involving costly programmers. It always ended poorly in the past at my previous company, but maybe it'll be different this time.
>it's about lowering the bar so that (cheaper) non-developers can participate.
I think that is a terrible approach to anything. Programming isn't that hard and without a doubt anyone who can do business analysis is mentally capable of writing Python or whatever other scripting language.
Instead of teaching people something universal, which they can use everywhere and which they can expand their knowledge of as needed, you are teaching them a deeply flawed process, which is highly specific, highly limited and something which the developer would never use themselves.
Having a business analyst who is able to implement tasks in a standard programming language is immensely more valuable than someone who knows some graphic DSL you developed for your business. Both the interest of the learner and the corporation are in teaching real programming skills.
Even the approach of creating something so "non-programmers" can do programming to is completely condescending and if I were in that position I would refuse to really engage on that basis alone.
I remember the first time playing with "visual" programming (kind of). It was Visual Basic, probably the first version.
It lowered the bar for me.
I quickly learned how to create a UI element, and connect things. A button could be connected to an action.
So then I was confronted with event-driven programming, and that exposure was basically what was taught to me.
And then the beauty of creating a UI faded as I exhausted the abstraction of Visual Basic and ended up with a lot of tedious logic.
I had a similar experience with Xcode on macOS. I could quickly create an app, but then the user interface I created dragged me down again. It seemed to me like the elegance of a Mac user interface required what felt like a lot of tax forms to fill out to actually get from a visual app to a working app. I really wanted to ask the UI: what dummy stuff, like the app name, hasn't been filled out yet? What buttons aren't connected? How do I do the non-visual stuff visually, like dragging and dropping some connection on a routine? Ugh.
In the end there's a beauty to plain source code, because it seems like text is the main and only abstraction. It's not mixed in with a lot of config stuff that only Xcode can edit, and probably will break when Xcode is upgraded.
This actually works if it's not a generic visual programming solution, but if it's a DSL. Don't give the business people pretty graphical loops, give them more abstract building blocks.
Unfortunately that means paying the professional programmers to build the DSL, so it doesn't reduce costs in the beginning.
I think part of the problem is that coding projects can get really big - like millions of lines of code big. Not making a huge mess of things at that scale is always going to be difficult, but the approach of text-based files with version control, where everyone can bring their favorite editors and tools, seems to work better than everything else we've tried so far.
Also, code being text means you can run other code on your own code to check, lint, refactor etc.
Visual programming - that almost always locks you into a particular visual editor - is unlikely to work at that scale, even with a really well thought out editor. Visual tools are great for visual tasks (such as image editing) or for things like making ER diagrams of your database schema, but I think that the visual approach is inherently limited when it comes to coding functionality. Even for making GUIs, there are tradeoffs involved.
I can see applications for helping non-programmers to put together comparatively simple systems, like the Excel example mentioned. I don't think it will replace my day job any time soon.
It seems odd to me not to mention things like Max/MSP or Pd in an article like this. Arguably Max is one of the most successful standalone visual programming languages (standalone insofar as it's not attached to a game engine or similar; it exists purely in its own right).
Sequence diagrams (which seem not much different from swimlane diagrams) are great, so much so that I created a tool that generates them from appropriately built TLA+ specs representing message exchange scenarios: https://github.com/eras/tlsd
However, while they are good for representing scenarios, they are not that good for specifying functionality. You can easily represent the one golden path in the system, but if you need to start representing errors or diverging paths, you probably end up needing multiple diagrams, and if you need multiple diagrams, then how do you know if you have enough diagrams to fully specify the functionality?
> The protocol is complicated enough for me to think that the diagrams are the source of truth of the protocol. In other words, I'd venture to say that if an implementation of the Double Ratchet algorithm ever does something that doesn't match the diagrams, it is more likely it is the code that is wrong than vice versa.
I would believe the latter statement, but I wouldn't say the first statement is the same thing said in other words, so I don't believe this is the correct conclusion.
My conclusion would be that diagrams are a great way to visualize the truth of the protocol, but they are not a good way to be the source of truth: they should be generated from a more versatile (and formal) source of truth.
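That last idea, diagrams generated from a formal machine-readable source rather than drawn by hand, can be sketched in a few lines. This is a hypothetical illustration (the event list and helper are invented, not from the tool above): a list of message events is the source of truth, and Mermaid sequence-diagram text is derived from it.

```python
# Hypothetical event log: (sender, receiver, message) tuples as the source of truth.
events = [
    ("Alice", "Bob", "DH ratchet public key"),
    ("Bob", "Alice", "ack + new public key"),
    ("Alice", "Bob", "encrypted message"),
]

def to_mermaid(events):
    """Render a message-exchange scenario as Mermaid sequence-diagram text."""
    lines = ["sequenceDiagram"]
    for sender, receiver, msg in events:
        lines.append(f"    {sender}->>{receiver}: {msg}")
    return "\n".join(lines)

print(to_mermaid(events))
```

If the diagram ever disagrees with the code, you regenerate it; the two cannot silently drift apart.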
[+] [-] ejiblabahaba|1 year ago|reply
[0]: https://www.falstad.com/circuit/
[+] [-] stefanpie|1 year ago|reply
For complex tasks like connecting AXI, SoC, memory, and custom IP components together, things like bussed wires and ports, as well as GUI configurators, make the process of getting something up and running on a real FPGA board much easier and quicker than if I had to do it all manually (of course, afterwards I can dump the Tcl trace and move all that automation into reproducible source scripts).
I believe the biggest advantage of the Vivado block editor is the "Run Block Automation" flow that can quickly handle a lot of the wire connections and instantiation of required IPs when integrating an SoC block with modules. I think it would be interesting to explore if this idea could be successfully translated to other styles of visual programming. For example, I could place and connect a few core components and let the tooling handle the rest for me.
Also, a free idea (or I don't know if it's out there yet): an open-source HDL/FPGA editor or editor extension with something like the Vivado block editor that works with all the open source EDA tools with all the same bells and whistles, including an IP library, programmable IP GUI configurators, bussed ports and connections, and block automation. You could even integrate different HDL front-ends as there are many more now than in the past. I know Icestudio is a thing, but that seems designed for educational use, which is also cool to see! I think a VSCode webview-based extension could be one easier way to prototype this.
[+] [-] outworlder|1 year ago|reply
Right. Trying to map lines of code to blocks 1-to-1 is a bad use of time. Humans seem to deal with text really well. The problem comes when we have many systems talking to one another: skimming through text becomes far less effective. Being able to connect 'modules' or 'nodes' (whatever those modules are) together visually and rewire them seems to be a better idea.
For a different take that's not circuit-based, see how shader nodes are implemented in Blender. That's not (as far as I know) a Turing-complete language, but it gives one an idea of how you can connect 'nodes' together to perform complex calculations: https://renderguide.com/blender-shader-nodes-tutorial/
A more 'general purpose' example is the blueprint system from Unreal Engine. Again we have 'nodes' that you connect together, but you don't create those visually, you connect them to achieve the behavior you want: https://dev.epicgames.com/documentation/en-us/unreal-engine/...
> I don't think there's many analogous situations in typical programming applications that call for timing diagrams
Not 'timing' per se (although those exist), but situations where you want to see changes over time across several systems are incredibly common, and existing tooling is pretty poor for that.
[+] [-] low_tech_punk|1 year ago|reply
This post seems to still focus on the former, while an earlier HN post on Scoped Propagators https://news.ycombinator.com/item?id=40916193 showed what's possible with the latter: programming with graphs.
Bret Victor might argue visualizing a program is still "drawing dead fish".
The power of visual programming is diminished if the programmer aims to produce source-code as the final medium and only use visualization on top of language. It would be much more interesting to investigate "visual first" programming where the programmer aims to author, and more importantly think, primarily in the visual medium.
[+] [-] yoelhacks|1 year ago|reply
What you want with a programming language is to handle granular logic in a very explicit way (business requirements, precise calculations, etc.). What this article posits, and what I agree with, is that existing languages offer a more concise way of doing that.
If I wanted to program in a visual way, I'd probably still want / need the ability to do specific operations using a written artifact (language, SQL, etc). Combining them in different ways visually as a first-class operation would only interest me if it operated at the level of abstraction that visualizations currently operate at, many great examples of which are offered in the article (multiple code files, system architecture, network call).
[+] [-] dgb23|1 year ago|reply
Most visual programming environments represent programs in a static way; they just do it with pictures (often graphs) instead of text.
Perhaps there is something to be discovered when we start visualizing what the CPU does at a very low level, as in moving and manipulating bits, and then build visual, animated abstractions on top of that.
A lot of basic bit manipulations might be much clearer that way, like shifting, masking, etc. I wonder what could be built on top to get a more bird's-eye view.
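As a starting point for what such a visualization might animate, here's a minimal text-only sketch (all values invented for the demo) that lines up each bit operation in fixed-width binary so the moving and masked bits are visible at a glance:

```python
def show(label, value, width=8):
    # Render the value as fixed-width binary so every bit keeps its column.
    print(f"{label:>10}: {value:0{width}b}")

x = 0b10110100
show("x", x)
show("x >> 2", x >> 2)      # shift right: bits slide toward the low end
show("x & 0x0F", x & 0x0F)  # mask: keep only the low nibble
show("x | 0x01", x | 0x01)  # set the lowest bit
show("x ^ 0xFF", x ^ 0xFF)  # flip every bit
```

An animated version would interpolate between these rows; the bird's-eye view would come from grouping columns into fields (flags, opcodes, addresses).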
[+] [-] stcredzero|1 year ago|reply
> The power of visual programming is diminished if the programmer aims to produce source-code as the final medium and only use visualization on top of language.
I disagree. We frequently break up large systems into chunks like modules, or micro-services, or subsystems. Often, these chunks' relationships are described using diagrams, like flowcharts or state transition diagrams, etc.
Furthermore, quite often there are zero direct code references between these chunks. Effectively, we are already organizing large systems in exactly the fashion the op is proposing. Inside each chunk, we just have code. But at a higher level viewpoint, we often have the abstraction described by a diagram. (Which is often maintained manually, separate from the repo.)
What exactly are the disadvantages here?
[+] [-] mlaci|1 year ago|reply
Not necessarily; programming with visual DSLs is already a thing in the field of language-oriented programming. Visual programming refers to a different thing, but it's not impossible to make a connection between the two fields.
Visual programming is now more of an umbrella term for projects (and research) exploring new ways of programming beyond textual representation. It would probably be better to call it non-textual programming, because some of its ideas, like structural editing, are not tied to visuals.
Visual programming environments offer a concrete way to program general-purpose code; DSLs offer a very specific language to program in a domain (language-oriented programming offers ways to invent these DSLs). Often visual programming is applied to a specific domain, as an alternative to textual scripting languages. Maybe this confuses people into thinking visual languages are less powerful and not general-purpose.
What's described in the article is a visual DSL based on diagrams, used as the source for the programming itself (which is exactly what UML was). But the whole thing is not well thought out, and I think it only serves to dunk on visual programming, or on the people working on it for "not understanding what professional programmers need".
[+] [-] CyberDildonics|1 year ago|reply
Why would that be true?
> It would be much more interesting to investigate "visual first" programming where the programmer aims to author, and more importantly think, primarily in the visual medium.
What advantages would that give? The disadvantages are so big that it will basically never happen for general-purpose programming. Getting a brand new language to make any sort of inroads in finding a niche takes at least a decade, and that's usually with something that updates and iterates on what people are already doing.
[+] [-] lucideer|1 year ago|reply
My read of this post (especially the title) is the author does differentiate normally but chose to blur the lines here for a narrative hook & a little bit of fun.
[+] [-] Risord|1 year ago|reply
Aka: a more visual/structured medium for some use cases where we use text today.
[+] [-] npunt|1 year ago|reply
This is especially important in the age of AI coding tools, as coding moves from lower-level to higher-level expression (with greater levels of ambiguity). One ideal use of AI coding tools would be to be on the lookout for ambiguities and outliers and draw the developer's attention to them with relevant visualizations.
> do you know exactly how your data is laid out in memory? Bad memory layouts are one of the biggest contributors to poor performance.
In this example from the article, if the developer indicates they need to improve performance or the AI evaluates the code and thinks its suboptimal, it could bring up a memory layout diagram to help the developer work through the problem.
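The layout effect itself is easy to demo, even in Python, which hides the real layout behind object headers, so treat this only as a rough illustration (the names and sizes are made up; in C or Rust the gap is far larger). An "array of structs" scatters each record across the heap, while a "struct of arrays" keeps one field in a single contiguous buffer:

```python
import time
from array import array

N = 200_000
# "Array of structs": every point is a separate dict, scattered across the heap.
points = [{"x": float(i), "y": 0.0, "z": 0.0} for i in range(N)]
# "Struct of arrays": the x field alone, one contiguous buffer of doubles.
xs = array("d", (float(i) for i in range(N)))

t0 = time.perf_counter()
total_aos = sum(p["x"] for p in points)  # pointer-chasing and a dict lookup per element
t1 = time.perf_counter()
total_soa = sum(xs)                      # sequential scan of one buffer
t2 = time.perf_counter()

print(f"AoS: {t1 - t0:.4f}s  SoA: {t2 - t1:.4f}s")  # SoA is typically faster
```

A memory-layout diagram of `points` vs `xs` would make the difference obvious before any profiling.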
> Another very cool example is in the documentation for Signal's Double Ratchet algorithm. These diagrams track what Alice and Bob need at each step of the protocol to encrypt and decrypt the next message. The protocol is complicated enough for me to think that the diagrams are the source of truth of the protocol
This is the next step in visualizations: moving logic from raw code to expressions within the various visualizations. But we can only get there bottom-up, solving one particular problem, one method of visualization at a time. Past visual code efforts have all been top-down universal programming systems, which cannot look at things in all the different ways necessary to handle complexity.
[+] [-] ethbr1|1 year ago|reply
To me, this is an underappreciated tenet of good visualization design.
Bad/lazy visualizations show you what you already know, in prettier form.
Good visualizations give you a better understanding of things-you-don't-know at the time of designing the visualization.
I.e., if I create a visualization using these rules, will I learn some new facts about the "other stuff"?
[+] [-] red_admiral|1 year ago|reply
This will depend on the application, but I've encountered far more of the "wrong data structure / algorithm" kind of problem, like iterating over a list to check if something's in there when you could just make a map ("we need ordering": sure, we have ordered maps!).
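The list-vs-map point is cheap to see for yourself. A rough sketch (sizes invented for the demo): membership tests against a list scan it element by element, while a set (or dict) hashes straight to the answer, and Python's built-in dict even preserves insertion order if ordering matters.

```python
import time

haystack_list = list(range(20_000))
haystack_set = set(haystack_list)  # one-time O(n) conversion
needles = list(range(0, 20_000, 7))

t0 = time.perf_counter()
hits_list = sum(1 for n in needles if n in haystack_list)  # O(n) scan per lookup
t1 = time.perf_counter()
hits_set = sum(1 for n in needles if n in haystack_set)    # O(1) average per lookup
t2 = time.perf_counter()

print(f"list: {t1 - t0:.4f}s  set: {t2 - t1:.4f}s")
```

Same answers, wildly different costs once the data grows.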
[+] [-] tliltocatl|1 year ago|reply
- Any sufficiently advanced program has a non-planar dataflow graph. Yes, "pipelines" are fine, but for anything beyond that you are going to need labels. And with labels it becomes just like a plain old non-visual program, only less structured.
- Code formatting becomes much more important and much harder to do. With textual program representation it is more or less trivial to do auto-formatting (and the code is somewhat readable even with no formatting at all). Yet we still don't have a reliable way to lay out a non-trivial graph so that it doesn't look like a spaghetti bowl. I find UML state machines very useful and also painful, because after every small edit I have to spend ten minutes fixing the layout.
- Good data/program entry interfaces are hard to design, and novel tools rarely do a good job of it the first time. Most "visual" tools have a total disaster for a UI, vs. text editors that have been incrementally refined for some 70 years.
[+] [-] Harmohit|1 year ago|reply
It has all the downsides of visual programming that the author mentions. The visual aspect of it makes it so hard to understand the flow of control. There is no clear left to right or top to bottom way of chronologically reading a program.
[+] [-] BobbyTables2|1 year ago|reply
LabVIEW’s shining examples would be trivial Python scripts (aside from the GUI tweaking). However, its runtime interactive 2D graph/plot widgets are unequaled.
As soon as a “function” becomes slightly non trivial, the graphical nature makes it hard to follow.
Structured data with the “weak typedef” is a minefield.
A simple program to solve a quadratic equation becomes an absolute mess when laid out graphically. Textually, it would be a simple 5-6 line function that is easy to read.
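For comparison, here is roughly what that textual version looks like, a short sketch using the standard quadratic formula (`cmath` so complex roots come out too):

```python
import cmath

def solve_quadratic(a, b, c):
    """Return the two roots of ax^2 + bx + c = 0 (complex if needed)."""
    if a == 0:
        raise ValueError("not a quadratic: a == 0")
    d = cmath.sqrt(b * b - 4 * a * c)  # square root of the discriminant
    return (-b + d) / (2 * a), (-b - d) / (2 * a)

print(solve_quadratic(1, -3, 2))  # → ((2+0j), (1+0j))
```

Laid out as wires and operator blocks, the same expression becomes a tangle of crossing lines.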
Source control is also a mess. How does one “diff” a LabVIEW program?
[+] [-] kmoser|1 year ago|reply
I think a happy medium would be an environment where you could easily switch between "code" and "visual" view, and maybe even make changes within each, but I suspect developers will stick with "code" view most of the time.
Also, from the article: > Developers say they want "visual programming"
I certainly don't. What I do want is an IDE which has a better view into my entire project, including all the files, images, DB, etc., so it can make much better informed suggestions. Kind of like JetBrains on steroids, but with better built-in error checking and autocomplete suggestions. I want the ability to move a chunk of code somewhere else, and have the IDE warn me (or even fix the problem) when the code I move now references out-of-scope variables. In short, I want the IDE to handle most of the grunt work, so I can concentrate on the bigger picture.
[+] [-] eternityforest|1 year ago|reply
In mathematics, everything exists at once just like real life.
In most programming languages, things happen in explicit discrete steps, which makes things a lot easier; most node-based systems don't have that property.
I greatly prefer block-based programming, where you're dragging rules and command blocks that work like traditional programming, but with higher-level functions, ease of use on mobile, and no need to memorize all the API call names just for a one-off task.
[+] [-] f1shy|1 year ago|reply
It is a total abomination.
[+] [-] jayd16|1 year ago|reply
Look up Unreal blueprints, shader graphs, procedural model generation in blender or Houdini. Visual programming is already here and quite popular.
[+] [-] d--b|1 year ago|reply
> =INDEX(A1:A4,SMALL(IF(Active[A1:A4]=E$1,ROW(A1:A4)-1),ROW(1:1)),2)
Ahem. Excel is one of the most visual programming environments out there. Everything is laid out on giant 2D grids you can zoom in and out of. You can paint arrows that give you the whole dependency tree. You can select, copy, paste, delete code with the mouse only. You can color things to help you categorize which cell does what. You can create user inputs, charts and pivot grids with clicks.
[+] [-] jillesvangurp|1 year ago|reply
In short, his idea was to build a language where higher-level primitives are created by doing transformations on lower-level syntax trees, all the way down to assembly code. The idea would be that you would define languages in terms of how they manipulate existing syntax trees. Kind of a neat concept, and well suited to visual programming as well.
Whether you build that syntax tree by typing code in an editor or by manipulating things in a visual tool is beside the point. It all boils down to syntax trees.
Of course that never happened and MDA also fizzled out along with all the UML meta programming stuff. Meta programming itself is of course an old idea (e.g. Lisp) and still lives on in things like Ruby and a few other things.
But more useful in modern times is how refactoring IDEs work: they build syntax trees of your code and then transform them, hopefully without making the code invalid. Like a compiler, an IDE needs an internal representation of your code as a syntax tree in order to do these things. You only get so far with regular expressions and trying to rename things. But lately, compiler builders are catching on to the notion that good tools and good compilers need to share some logic. That too is an old idea (Smalltalk and IBM's VisualAge). But it's being re-discovered in e.g. the Rust community, and of course Kotlin is trying to get better as well (being developed by JetBrains and all).
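The tree-transformation idea fits in a few lines of Python using the standard `ast` module. This is only a toy (it renames every `Name` node and ignores scoping, which is exactly why real refactoring engines need more than this), but it shows a program manipulating a program through its syntax tree rather than its text:

```python
import ast

class RenameVariable(ast.NodeTransformer):
    """Rename every reference to `old` into `new` by rewriting the syntax tree."""
    def __init__(self, old, new):
        self.old, self.new = old, new

    def visit_Name(self, node):
        if node.id == self.old:
            node.id = self.new
        return node

source = "total = price * qty\nprint(total)"
tree = ast.parse(source)
tree = RenameVariable("total", "subtotal").visit(tree)
print(ast.unparse(tree))  # subtotal = price * qty / print(subtotal)
```

A regex rename would also have matched `total` inside strings or unrelated scopes; the tree version operates on structure, not spelling.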
But beyond that, the idea seems a bit stuck. Too bad because I like the notion of programs being manipulated by programs. Which is what refactoring does. And which is what AI also needs to learn to do to become truly useful for programming.
[+] [-] twelvechairs|1 year ago|reply
- trying to understand existing programs - for which visuals are wanted by most, but they usually need conscious input to be their best
- programming (creating new code) itself - where the efficiency of the keyboard (with its 1D input that goes straight to spaghetti code) has never been replaced by visual (mouse-based?) methods, other than for very simple (click and connect) type models
[+] [-] sbensu|1 year ago|reply
That is the implied point: let's go to places where we already draw diagrams and check if we can elevate them into the program
[+] [-] airbreather|1 year ago|reply
The concept of an executable specification first came to my attention in IEC 61499, the standard for Distributed Automation. First published in 2005, it was way, way ahead of its time, so far ahead that it is still gaining traction today.
Shout out to anyone reading who was involved in the creation of IEC 61499 in 2005; it was a stroke of genius, and for its time, orders of magnitude more so. It is also worth a look just to prompt thinking for anyone involved in distributed systems of any kind.
Initially I thought there was no way you could have such a thing as an executable specification, but then, over many years I evolved to a place where I could generically create an arbitrary executable specification for state based behavior (see my other post this topic).
I believe I have found the best achievable practice to allow defining behaviors for mission/safety critical functionality, while avoiding implicit state.
[+] [-] reddit_clone|1 year ago|reply
The intermediate representation was in sexps!
[+] [-] LeonB|1 year ago|reply
Visualizations “of” a program — quite useful. Note there are lots of different ways to visualize the same program to emphasise / omit different details. The map is not the territory, all models are wrong, etc.
[+] [-] astromaniak|1 year ago|reply
For example, having models of a capacitor and a resistor, you can put them together in a schematic, which in turn can be part of a bigger design, and then test it in a simulator. That's how Simplorer works. Alternatively you can write the code in VHDL or Modelica, but visual is quicker, easier, and more reliable.
Obviously it works well for UI; it has been used there for decades now.
As for the rest... there are visual programming environments for robots, mostly for kids.
[+] [-] chacham15|1 year ago|reply
[+] [-] sbensu|1 year ago|reply
I personally don't see any one pictorial representation that maps to a general programming language. But if someone does find one, in the large and in the small, that'd be great!
[+] [-] mashpoe|1 year ago|reply
I think other types of models would only be useful for situations where writing code isn't the most intuitive way to make something. From my limited experience, a visual system for making shaders is a pretty good idea, because ideally, you don't want to have many conditional branches or loops, but you might have a lot of expressions that would look ugly in regular code.
[+] [-] Duanemclemore|1 year ago|reply
Unfortunately, most visuals you'll get of the populated canvas online are crap. And for those of us who make extremely clean readable programs it's kind of a superpower and we tend to be careful with how widely we spread them. But once you see a good one you get the value immediately.
Here's a good simple program I made, as a sample. [0]
Also, I want to give a shout-out to the Future of Coding community in this. The Whole Code Catalog [1] and Ivan Reese's Visual Programming Codex [2] are great resources in the area.
I also have to mention, despite the awful name, Flowgorithm is an EXCELLENT tool for teaching the fundamentals of procedural thinking. [3] One neat thing is you can switch between the flow chart view and the script code view in something like 35 different languages natively (or make your own plugin to convert it to your language of choice!)
p.s. If you are used to regular coding, Grasshopper will drive you absolutely freaking bonkers at first, but once you accept that it is looping, yet you have to let the whole program complete before seeing the result, you'll get used to it.
[0] https://global.discourse-cdn.com/mcneel/uploads/default/orig...
[1] https://futureofcoding.org/catalog/
[2] https://github.com/ivanreese/visual-programming-codex
[3] http://flowgorithm.org/
[+] [-] ak217|1 year ago|reply
Figma has managed to bridge the gap between designers, UXR, and engineers in ways that I've never seen done before. I know teams that are incredibly passionate about Figma and use it for as much as they can (which is clearly a reflection of Figma themselves being passionate about delivering a great product) but what impressed me was how much they focus on removing friction from the process of shipping a working application starting from a UI mockup.
I think Figma holds a lot of lessons for anyone serious about both visual programming and cross-functional collaboration in organizations.
[+] [-] trashburger|1 year ago|reply
It focuses on the kind of visual programming the article argues for: class layout, code architecture, semantics. It's one of the best implementations I have seen. The authors are proponents of "moldable development", which actively encourages building tools and visualizations like the ones in the article.
[+] [-] JoshTriplett|1 year ago|reply
There's a different problem, of helping non-programmers glue things together without writing code. I've seen many of those systems fail, too, for different reasons.
Some of them fail because they try to do too much: they make every possible operation representable visually, and the result makes even non-programmers think that writing code would be easier. The system shown in the first diagram in the article is a great example of that.
Conversely, some of them fail because they try to do too little: they're not capable enough to do most of the things people want them to do, and they're not extensible, so once you hit a wall you can go no further. For instance, the original Lego Mindstorms graphical environment had very limited capabilities and no way to extend it; it was designed for kids who wanted to build and do extremely rudimentary programming, and if you wanted to do anything even mildly complex in programming, you ended up doing more work to work around its limitations.
I would propose that there are a few key properties desirable for visual programming mechanisms, as well as other kinds of very-high-level programming mechanisms, such as DSLs:
1) Present a simplified view of the world that focuses on common needs rather than every possible need. Not every program has to be writable using purely the visual/high-level mechanism; see (3).
2) Be translatable to some underlying programming model, but not necessarily universally translatable back (because of (1)).
3) Provide extension mechanisms where you can create a "block" or equivalent from some lines of code in the underlying model and still glue it into the visual model. The combination of (2) and (3) creates a smooth on-ramp for users to go from using the simplified model to creating and extending the model, or working in the underlying system directly.
One example of a high-level model that fits this: the shell command-line and shell scripts. It's generally higher-level than writing the underlying code that implements the individual commands, it's not intended to be universal, and you can always create new blocks for use in it. That's a model that has been wildly successful.
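As a toy illustration of properties (2) and (3), here is a hedged Python sketch (all names invented): "blocks" are plain functions registered for a higher-level glue layer, and the glue itself translates trivially to the underlying model, ordinary function composition.

```python
# Hypothetical sketch: a "block" is any function from value to value.
BLOCKS = {}

def block(fn):
    """Register a plain function as a reusable block (property 3's extension hook)."""
    BLOCKS[fn.__name__] = fn
    return fn

@block
def double(x):
    return x * 2

@block
def increment(x):
    return x + 1

def run_pipeline(names, value):
    # The high-level (visual) layer would only ever manipulate this list of
    # names; executing it is plain composition in the underlying model (property 2).
    for name in names:
        value = BLOCKS[name](value)
    return value

print(run_pipeline(["double", "increment"], 5))  # → 11
```

A shell pipeline works the same way: the glue layer stays simple, and anyone can drop down a level to write a new block.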
[+] [-] jumploops|1 year ago|reply
Shameless plug, but this is what we’re trying to do at Magic Loops[0].
We joke it’s the all-code no-code platform.
Users build simple automations (think scrapers, notifications, API endpoints) using natural language.
We break their requests into smaller tasks that are then mapped to either existing code (“Blocks”) or new code (written by AI).
Each Block then acts as a UNIX-like program, where it only concerns itself with the input/output of its operation.
We’ve found that even non-programmers can build useful automations (often ChatGPT-based like baby name recommenders), and programmers love the speed of getting something up quickly.
[0] https://magicloops.dev
[+] [-] hnick|1 year ago|reply
Most times in my career that I've seen people talking about visual programming, it's not about the developers - it's about lowering the bar so that (cheaper) non-developers can participate.
A Business Analyst may or may not have a coding background, but their specifications can be quite technical and logical and hopefully they understand the details. The assumption is that if we create our own Sufficiently Advanced Online Rule Engine they can just set it all up without involving the more expensive programmers.
This is discussed a bit in the first paragraph, but I just wanted to reiterate that most systems I had to deal with like this were talked about in terms of supplying business logic, rules, and control flow configuration to a pre-existing system or harness that executes that configuration. The "real" programmers work on that system, adding features, and code blocks for anything outside the specification, while the other staff setup the business logic.
It works to some degree. I think things like Zapier can be quite good for this crowd, and a lot of mailing list providers have visual workflow tools that let non-programmers do a lot. A DSL like Excel formulas would be in this group too, since it operates inside an existing application, except that it's non-visual. Some document publishing tools like Exstream (I worked with it pre-HP, so years ago) did a lot in this space too.
I did read and appreciate the whole article, I just noticed this part for a reason - I'm working on a visual question builder again right now for a client who wants to edit their own customer application form on their custom coded website, instead of involving costly programmers. It always ended poorly in the past at my previous company, but maybe it'll be different this time.
[+] [-] constantcrying|1 year ago|reply
I think that is a terrible approach to anything. Programming isn't that hard and without a doubt anyone who can do business analysis is mentally capable of writing Python or whatever other scripting language.
Instead of teaching people something universal, which they can use everywhere and which they can expand their knowledge of as needed, you are teaching them a deeply flawed process, which is highly specific, highly limited and something which the developer would never use themselves.
Having a business analyst who is able to implement tasks in a standard programming language is immensely more valuable than someone who knows some graphic DSL you developed for your business. Both the interest of the learner and the corporation are in teaching real programming skills.
Even the approach of creating something so that "non-programmers" can do programming is completely condescending, and if I were in that position I would refuse to really engage on that basis alone.
[+] [-] m463|1 year ago|reply
I think that might be right.
I remember the first time playing with "visual" programming (kind of). It was Visual Basic, probably the first version.
It lowered the bar for me.
I quickly learned how to create a UI element, and connect things. A button could be connected to an action.
So then I was confronted with event-driven programming, and that exposure was basically how it was taught to me.
And then the beauty of creating a UI slowed as I exhausted the abstraction of visual basic and ended up with a lot of tedious logic.
I had a similar experience with Xcode on macOS. I could quickly create an app, but then the user interface I created dragged me down again. It seemed like the elegance of a Mac user interface required filling out what felt like a lot of tax forms to get from a visual app to a working app. I really wanted to ask the UI: what dummy stuff, like the app name, hasn't been filled out yet? What buttons aren't connected? How do I do the non-visual stuff visually, like dragging and dropping a connection onto a routine? Ugh.
In the end there's a beauty to plain source code, because text is the main and only abstraction. It's not mixed in with a lot of config stuff that only Xcode can edit, and that will probably break when Xcode is upgraded.
[+] [-] nottorp|1 year ago|reply
Unfortunately that means paying the professional programmers to build the DSL, so it doesn't reduce costs in the beginning.
[+] [-] red_admiral|1 year ago|reply
Also, code being text means you can run other code on your own code to check, lint, refactor, etc.
Visual programming - that almost always locks you into a particular visual editor - is unlikely to work at that scale, even with a really well thought out editor. Visual tools are great for visual tasks (such as image editing) or for things like making ER diagrams of your database schema, but I think that the visual approach is inherently limited when it comes to coding functionality. Even for making GUIs, there are tradeoffs involved.
I can see applications for helping non-programmers to put together comparatively simple systems, like the excel example mentioned. I don't think it will replace my day job any time soon.
[+] [-] Flipflip79|1 year ago|reply
[+] [-] _flux|1 year ago|reply
However, while they are good for representing scenarios, they are not that good for specifying functionality. You can easily represent the one golden path in the system, but if you need to start representing errors or diverging paths, you probably end up needing multiple diagrams, and if you need multiple diagrams, then how do you know if you have enough diagrams to fully specify the functionality?
> The protocol is complicated enough for me to think that the diagrams are the source of truth of the protocol. In other words, I'd venture to say that if an implementation of the Double Ratchet algorithm ever does something that doesn't match the diagrams, it is more likely it is the code that is wrong than vice-versa.
I would believe the latter statement, but I wouldn't say the first statement is the same thing said in other words, so I don't believe the conclusion follows.
My conclusion would be that diagrams are a great way to visualize the truth of the protocol, but they are not a good way to be the source of truth: they should be generated from a more versatile (and formal) source of truth.
[+] [-] billfruit|1 year ago|reply