There is a deeper question undercutting this project (and Bret Victor's Drawing Dead Fish talk and related approaches). That question is: how can we represent computation in an intuitive and scalable way?
Conventional programming languages are one answer. They associate programs with text. Some believe there is another way, by associating programs with diagrams. A more abstract example: machine learning associates programs with parameters and weights.
In some weird way, I feel these are all skeuomorphisms. We choose text because that's how we comprehend literature. We choose diagrams because we are visual. We choose ML because we mimic how our brains work.
We don't, however, try to understand what "thought" is, and work backwards to form a representation of it.
For example, take thinking of programs textually. Text assumes a beginning, an end, and an ordered sequence between them. But in this small programming example, is there a well-defined ordering?
a = 0;
b = 1;
c = a + b;
Since the first and second lines can be switched, in some sense Text itself does not do Thought justice.
Visual representations like the one in this video also have their shortcomings. The most obvious being, a monitor is 2D. Thought is not 2D. Going to 3D won't help either. Thought is also not only spatial and temporal. For example, how would you represent concatenating strings visually?
I think the more interesting question is: how can we accurately represent thought?
> is there a well-defined ordering?
> a = 0; b = 1; c = a + b;
In pure functional languages, these expressions form a dependency graph, and the interpreter or compiler may choose an ordering and may cache intermediate results.
We may even represent the program itself as a graph, just like you suggest with ML programs, but this is a general purpose program.
Obviously we can't do this in imperative languages.
I think pure functional programming enables this future of thinking about programs as graphs and not as text.
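The dependency-graph point can be made concrete: treat each binding as a node whose predecessors are the variables it reads, and any topological order of that graph is a valid evaluation order. A toy sketch in Python (purely illustrative, not how any particular compiler works):

```python
from graphlib import TopologicalSorter

# Each binding maps to the set of variables it depends on.
deps = {
    "a": set(),         # a = 0
    "b": set(),         # b = 1
    "c": {"a", "b"},    # c = a + b
}

# Any order produced here is valid: "a" and "b" may come in either
# order, but both must precede "c".
order = list(TopologicalSorter(deps).static_order())
print(order)
```

That "either order is fine" freedom is exactly what a linear textual listing fails to express.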
There's someone who spent a lot of time on this: definitely check out Ted Nelson's ZigZag structure: http://xanadu.com/zigzag/
It is exactly that, an attempt to structure data in a way similar to how our thoughts are formed. I believe it was this video where he briefly explained that concept: https://www.youtube.com/watch?v=Bqx6li5dbEY
Although it might be a different video since Ted Nelson is all over the place with his documents and videos.
> We don't, however, try to understand what "thought" is, and work backwards to form a representation of it.
Yes, we do. ML/AI is a moving target that literally represents the current SOTA in doing just that, and even symbolic logic itself is the outcome of an older, a priori, way of doing that. Actually, analytic diagrams are also an outcome of one approach to that. So, all programming methods you mention come from some effort to model thought and make a representation of that model.
String example: if you have "foo" and "bar", both are lists of characters. Now, "bar" has a beginning represented by a handle, and you drag that handle to the end of "foo". Very briefly, something like that. Of course, not everything is set in stone and we need to try multiple approaches to see which one is the fastest.
> We don't, however, try to understand what "thought" is, and work backwards to form a representation of it.
I had to laugh here, because that is exactly how I designed ibGib over the past 15 years. It is built conceptually from the ground up, working in conflict (and harmony) with esoteric things like philosophy of mathematics and axiomatic logic systems, information theory, logic of quantum physics, etc. Anyway, like I said...I just had to laugh at this particular statement! :-)
> Visual representations like the one in this video also have their shortcomings. The most obvious being, a monitor is 2D. Thought is not 2D. Going to 3D won't help either. Thought also not only spatial and temporal. For example, how would you represent concatenating strings visually?
> I think the more interesting question is how we can accurately represent thought?
In ibGib, I have created a nodal network that currently can be interacted with via a d3.js force layout. Each node is an ibGib that has four properties (the database has _only_ these four properties): ib, gib, data, and rel8ns (to keep it terse). The ib is like a name/id/quick metadata, the data is for internal data, the rel8ns are named links (think merkle links), and the gib is a hash of the other three.
The ib^gib acts as a URL in a SHA-256-sized space. So each "thought" is effectively a Gödelian number that represents that "thought".
This is essentially the "state" part of it. The way that you create new ibGib is for any ibGib A to "contact" an ibGib B. Currently the B ibGibs are largely transform ibGibs that contain the state necessary to create a tertiary ibGib C. So each one, being an immutable datum with ib/gib/data/rel8ns, when combined with another immutable ibGib, acts as a pure function given the engine implemented. This pure function is actually encapsulated in the server node where the transformation is happening, so it's conceivable that A + B -> C on my node, where A + B -> D on someone else's node. So the "pure" part is probably an implementation detail for me...but anyway, I'm digressing a little.
I'm only starting to introduce behavior to it, but the gist of it is that any behavior, just like any creation of a "new" ibGib, is just sending an immutable ibGib to some other thing that produces a tertiary immutable ibGib. So you could have the engine on some node be in python or R or Bob's Manual ibGib manipulating service where Bob types very slowly random outputs. But in the visual representation of this process, you would do the same thing that you do with all other ibGib. You create a space (the rel8ns also form a dependency graph for intrinsic tree-shaking btw) via querying, forking others' ibGibs, "importing", etc. Then you have commands that act upon the various ibGib, basically like a plugin architecture. The interesting thing though is that since you're black-boxing the plugin transformation (it's an ibGib), you can evolve more and more complex "plugins" that just execute "their" function (just like Bob).
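To make the ib/gib/data/rel8ns structure a bit more concrete, here is a rough sketch of a content-addressed node in Python. This is not ibGib's actual code; it only illustrates "gib is a hash of the other three" and a transform as a pure function producing a new immutable record. All names and field choices here are illustrative:

```python
import hashlib
import json

def make_ibgib(ib, data, rel8ns):
    """Build a node whose gib is a hash of the other three fields."""
    payload = json.dumps({"ib": ib, "data": data, "rel8ns": rel8ns},
                         sort_keys=True)
    gib = hashlib.sha256(payload.encode()).hexdigest()
    return {"ib": ib, "gib": gib, "data": data, "rel8ns": rel8ns}

def apply_transform(a, b):
    """A pure 'A contacts B' step: combining two nodes yields a third."""
    data = {**a["data"], **b["data"]}
    # rel8ns as merkle-style links back to the inputs (hypothetical names)
    rel8ns = {"ancestor": [a["gib"]], "transform": [b["gib"]]}
    return make_ibgib(a["ib"], data, rel8ns)

a = make_ibgib("root", {"x": 1}, {})
b = make_ibgib("rename", {"x": 2}, {})
c = apply_transform(a, b)
```

Because the gib is derived from the content, equal "thoughts" hash to the same address, which is what makes the hash usable as a URL-like identifier.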
Anyway, I wasn't going to write this much...but like I said. I had to laugh.
Some feedback on the website: IMO, it's obnoxious and disrespectful to play full screen video like you're doing. That just made me close it immediately and leave after reading through the text.
Considering the complexity involved in developing an advanced IDE or similar, have you considered publishing an open source community version? Similar to JetBrains with IntelliJ. They seem to be doing great.
Since we're on the subject of experimental UI concepts, I'll plug Bret Victor's Inventing on Principle [0] talk. For me it was an instant classic.
[0] https://vimeo.com/36579366
Is it just me? Or does this look 10 times geekier than writing actual code?
I think the project is trying to be a user friendly way of writing programs, and I think that's an awesome idea, but the actual product looks otherwise.
I finished the video and still have no idea what the hell was going on throughout the entire video.
Same here, the video is way too long without giving actual information I was looking for. I guess they should create two videos, one for programmers and one for everybody else.
There is a plan for a lengthier explanation video.
The intention of the video was to show that it can be done. Of course, some kind of explanation and training is needed for everything, I don't dispute that.
The video was trying to do two things at once: explain the goals and high level ideas as well as demonstrate "syntax" (the clicks and drag and drops). Since these are both new to people coming to your site, it's hard to digest at the same time. I'd love a video of just explaining what the demo is doing, since I didn't really understand how to program in the new approach you described.
As soon as I saw a logic gate implemented for a single keypress I was "noping" out of there. Visual methods, to a one, break down quickly when they reproduce low-level digital logic. At that point, you have a software circuit board, and this is a thing that your CPU can represent just fine by coding in assembly and possibly adding an abstraction on top for a dataflow representation.
Graphics are absolutely wonderful, in contrast, when they are able to stick to a domain abstraction, which is why we have a notion of application software at all. I have, in fact, aimed towards discovering what a "Pong synthesizer" would look like, so I have the domain knowledge to know that it does tend to lead to the digital logic breakdown if you aim to surface every possibility as a configurable patch point. As a result I started looking more deeply into software modularity ideas and rediscovered hierarchical structures (Unix, IP addressing, etc.) as the model to follow. I'm gradually incorporating those ideas into a functioning game engine, i.e. I'm shipping the game first and building the engine as I find time, and I do have designs for making a higher level editor of this sort at some point.
However, I also intend to have obvious cutoff points. There are certain things that are powerful about visual systems, but pressuring them to expose everything at the top level is more obscuring than illuminating. So my strategy is to instead have editors that compile down to the formats of the modular lower-level system, smoothing the "ladder of abstraction" and allowing people to work at the abstraction level that suits them.
The logic gate's primary function seemed to be in limiting the paddle to not move outside of the playing field.
Otoh, in a visual programming language it'd feel more natural to make the upper and lower edges of the playing field collidable (I'm sure there's a better word for that), so that moving the paddle is inherently limited by collision with the edges.
Like so many projects on the internet: Lots of big words and ideas, but no content. Show us some actual tech and code and I might throw my money your way. Or better: Release your code under an open license and I might even throw my time your way!
I need the money to develop it fully. If it was usable at this point, I would release it, no doubt. I want something out of the door ASAP, that's why it is focusing on 2D games first. That is feasible but still hard.
This is really not a technology in the traditional sense. The runtime itself is nothing new. The real thing is the UX, that is, how programming can be made more efficient.
I believe if someone supports this, he/she supports the goal of this project, not the concrete implementation.
There is this old idea that programs are somehow "limited" by their textual representation, and that a 2D graphical syntax would unleash more possibilities. Never worked very well so far, unfortunately, except for a bunch of very specific niches.
From what I understand after watching the video, I think this abstraction could work for extremely simple implementation details, but once the implementation gets even slightly complex, the complexity of coding it with this system balloons.
That's the exact problem I encountered when I worked on a similar product for microcontrollers.
Having to use a mouse to interact with your programming IDE graphically doesn't scale. It does make for a decent tool for hobby projects or prototyping though.
I like it and I think this is the general direction that creating applications will look like in the future.
But don't throw away text-based programming yet; the wiser move would be to combine the two.
Find use-cases where visual DeepUI style programming shines and is vastly superior to text-based programming, but let me polish the details with old-school text source code.
There are apps which already do a lot of this, for example Unity - you can assemble your scenes and animations visually and tune it up with code.
I worked on Accelsor which is a tactile-spatial programming language, and I think the ideas here are actually really good (so don't let HN haters get you down).
Ultimately, though, work like this leads to needing to reinvent all of programming (unfortunately). For instance, I'm now having to build a graph database to handle Turing-complete systems that are being collaborated on in realtime (see http://gun.js.org). So prepare for a long haul of work ahead of you. Get to know people in the space, like me and the Eve team, etc.
If you persist long enough (don't let money or lack of money stop you) you'll make a dent. :)
This seems quite similar to LabVIEW. So I imagine it'll have similar pros and cons: LabVIEW is great for putting together quick prototypes for e.g. data collection or visualization, but it quickly becomes unmanageable as the complexity increases; you need to 'tidy up' the placement of the various operators or it ends up being a rat's nest.
No, they explain that LabVIEW and other visual languages are just different ways of representing the code. The idea here, I think, is that there's far greater coupling between the output of the program and the program itself. It's similar to some of Bret Victor's ideas: https://www.youtube.com/watch?v=PUv66718DII
I'd like to incorporate this coupling idea into my own visual dataflow language (http://web.onetel.com/~hibou/fmj/FMJ.html), but haven't yet decided how to implement it. My approach has been to design the language from the bottom-up, so that simple programs can be simply drawn, and there are higher level programming constructs which simplify more complex code, avoiding the complexity problem (the Deutsch limit) you've seen with LabVIEW.
i have written very large systems in labview, and your viewpoint is simply not accurate for a good labview programmer. just like any coding discipline, you keep your VIs, classes, libraries, etc. small and suited for a single purpose. what you end up with is a collection of VIs that basically have a REPL automatically built in (i.e., just run the VI). and when i say large systems, i mean multiple projects with greater than 1,000 VIs and many tens of classes.
it's a rule amongst good labview programmers that you keep your block diagram to where it fits on a single, reasonably sized/resolution monitor without scrolling. simply adhering to that rule encourages good coding practice. within my large systems, i am able to freely edit pieces with often no unintended consequences. since reference-based data types are really only used for multi-threaded communication and instrument/file communication, you typically are operating on value-based data which makes reliable code development quite easy.
and what you describe is equally applicable to any text-based language. neither labview nor text-based languages have built-in precautions against horrific coding standards.
If it were "only" for the spatial relationship between "variables" and logic, LabVIEW wouldn't be such a pain to use.
What's really annoying about LabVIEW is that its programming paradigm is kind-of functional, but it doesn't go all the way: it forces you to do things which one kind of expects to be abstracted away, and things become a mess. Let me explain my top pet peeve:
In LabVIEW the main concept is the so-called VI: Virtual Instrument. A VI consists of a number of inputs called "Controls", some logic in between, and outputs called "Indicators". Inside a VI you have the full range of programming primitives like loops (which, interestingly enough, can also work like list comprehensions through automatic indexing, but I digress) and "variables" (in the form of data flow wires), but no functions. VIs are what you use as functions. And if everything happens through VI inputs and outputs and you don't use global variables, feedback nodes or similar impure stuff, it's pretty much functional.
Somewhere your program has to start, i.e. there must be some kind of "main" VI. But VIs mostly behave like functions, so if you hit "run" for the main VI it will just follow its data flow until every input has reached what it's wired to and all subVI instances have executed, and that's it. That's perfect for a single-shot program, like you'd have on the command line or when serving a HTTP request, but it's kind of the opposite of what you want for an interactive program that has a visual UI.
Sure, there is that "run continuously" mode, which will just loop VI execution. But all it does is re-evaluate and execute each and every input and subVI again and again and again. If you're using LabVIEW in a laboratory setting, which is its main use, you probably have some sensors, actuators or even stuff like lasers controlled by this. And then you do not want to have them execute whatever command again and again.
There is a solution to this of course, which is called "event structures". Essentially it's like a large "switch" statement that will dispatch exactly once for one event. Of course this caters only toward input manipulation events and some application execution state events. And you cannot use it in "run continuously" mode without invoking all the other caveats. So what you do is, you place it in a while loop. How do you stop the while loop? Eh, splat a "STOP" button somewhere on the Front Panel (and don't forget to add a "Value Changed" event handler for the stop button, otherwise you'll click STOP without effect until you manipulate something else).
And then in the Event structure you have to meticulously wire all the data flows not touched by whatever the event does through so called "shift registers" in the while loop to keep the values around. If you forget or miswire one data flow you have a bug.
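In text-language terms, the while-loop plus event-structure plus shift-register pattern described above corresponds to an explicit event loop that threads all state through each iteration by hand. A rough Python analogue (illustrative only, obviously not LabVIEW itself):

```python
# The while loop + event structure + shift registers, spelled out:
# state is carried through each iteration explicitly, and any value
# you forget to copy into the new state is the "miswired wire" bug.
def run_ui(events):
    state = {"count": 0}           # initial shift-register values
    for event in events:           # event structure: dispatch once per event
        if event == "increment":
            state = {**state, "count": state["count"] + 1}
        elif event == "stop":      # the STOP button needs its own handler
            break
        # any state key not carried over here would silently vanish
    return state

final = run_ui(["increment", "increment", "stop", "increment"])
```

Note the event after "stop" never runs, which is the behavior the STOP-button handler buys you.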
What seriously annoys me about that is that, in principle, the whole dataflow paradigm of LabVIEW would allow for an immediate implementation of FRP (functional reactive programming): re-evaluation and execution of only those parts of the program that are affected by the change.
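That "only re-run what a change affects" idea can be sketched as a small graph of cells with change propagation. A toy Python illustration (nowhere near a full FRP implementation; it ignores diamond-shaped dependencies, for one):

```python
class Cell:
    """A toy reactive cell: changes push only to downstream dependents."""
    def __init__(self, value=None, compute=None, inputs=()):
        self.value, self.compute, self.inputs = value, compute, list(inputs)
        self.dependents, self.evals = [], 0
        for i in self.inputs:
            i.dependents.append(self)
        if compute:
            self.refresh()

    def refresh(self):
        self.value = self.compute(*(i.value for i in self.inputs))
        self.evals += 1                 # count re-executions
        for d in self.dependents:
            d.refresh()                 # propagate downstream only

    def set(self, value):
        self.value = value
        for d in self.dependents:
            d.refresh()

a = Cell(1)
b = Cell(2)
s = Cell(compute=lambda x, y: x + y, inputs=(a, b))
t = Cell(compute=lambda x: x * 10, inputs=(s,))
a.set(5)   # only s and t re-run; b is never touched
```

This is roughly what "run continuously" throws away: instead of re-executing everything, only the wires downstream of the changed control fire.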
The other thing that seriously annoys me is how poorly polymorphism is implemented in LabVIEW and how limited dynamic typing is. I'd not even go as far as saying that LabVIEW does type inference, although at least for primitive types it covers a surprisingly large set of use cases. Connect numeric type arrays to an arithmetic operation and it does it element wise. Connect a single element numeric type and an array and it again does things element wise. Have an all numeric cluster (LabVIEW equivalent of a struct) and you can do element wise operations just as well. So if we were to look at this like Haskell there's a certain type class to which numeric element arrays, clusters and single elements belong and it's actually great and a huge workload saver! Unfortunately you can't expose that on the inputs/outputs of a VI. VI inputs/outputs always have to be of a specific type. Oh yes, there are variants, but they're about as foolproof to use as `void*` in C/C++. So the proper way to implement polymorphism in LabVIEW is to manually create variants of your VI for each and every combination of types you'd like to input and coalesce them in a polymorphic VI. And since you have to do it with the mouse and VIs are stored in binary this is not something you can easily script away. Gaaahhh…
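That element-wise behavior (scalar with array, array with array, all-numeric cluster with cluster) is essentially a recursive map over matching structure. A toy Python analogue of the dispatch, using lists for arrays and dicts for clusters (just the shape of the idea, not LabVIEW's actual semantics):

```python
import operator

def elementwise(op, x, y):
    """Apply op element-wise, broadcasting scalars over lists and dicts."""
    if isinstance(x, list) and isinstance(y, list):
        return [elementwise(op, a, b) for a, b in zip(x, y)]
    if isinstance(x, list):                       # scalar on the right
        return [elementwise(op, a, y) for a in x]
    if isinstance(y, list):                       # scalar on the left
        return [elementwise(op, x, b) for b in y]
    if isinstance(x, dict) and isinstance(y, dict):  # "cluster" case
        return {k: elementwise(op, x[k], y[k]) for k in x}
    return op(x, y)

r1 = elementwise(operator.add, [1, 2, 3], 10)                       # [11, 12, 13]
r2 = elementwise(operator.mul, {"re": 2, "im": 3}, {"re": 4, "im": 5})  # {"re": 8, "im": 15}
```

The complaint in the comment is precisely that this kind of generic dispatch exists inside LabVIEW's primitives but cannot be expressed on a VI's own inputs and outputs.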
I like the idea of a physical analog to game logic, e.g. tripping a certain condition based on collision mechanics. This could be useful for something akin to Game Maker, and I'd be curious to see how it would translate to other mediums. I could imagine front end programming would be well suited for this style, especially when creating interactive prototypes.
It seems like it's great for things that have an on-screen spatial meaning, like the pong example game[1]. But what if I want to represent something abstract?
Like a tree (let's say a quad-tree, since this is for (2d?) games for now)? Or what if I want to implement AI logic (let's say I want some kind of decision-tree planner and path finding)? I'm having trouble visualising (I guess because the video didn't really explain it) how any of this can be done, as opposed to "moving something around on the screen".
I assume this has been thought about. I just couldn't figure out any of the details from the video.
[1] although even in that case, I couldn't figure out what the symbols and lines in the video meant. The symbols especially seem cryptic. A mix between logic gates and something else?
Actually, that presents the code in a visual way while this allows you to work on the thing itself.
Similar to Stop Drawing Dead Fish by Bret Victor: https://vimeo.com/64895205
Looks interesting. I would like a more detailed look at the visual language being used here. How is logic projected out into the physical world, is it simply making variables into nodes?
FRP seems about right and I wanted it to look cool :). But no flowcharts, those represent steps. If you are talking about the logical symbols, they are not step based at all. Rather more similar to physically implemented logical circuits.
Simulink and LabVIEW are to DeepUI as C is to Lisp.
Simulink and LabVIEW are made for electrical engineers transitioning to developers. They are good tools for building and shipping products.
DeepUI, on the other hand, introduces a new paradigm for visual expression, and it looks to be more targeted towards computer scientists and experimental artists (and maybe hobby game developers with an aversion to traditional programming).
I agree that in all these visual programming languages the interface is always way too slow. They tend to completely ignore the keyboard which is by far the fastest input device.
Naeron|9 years ago
He is my hero actually :)
IshKebab|9 years ago
Vimeo also behaves like this on mobile and it's far superior to Youtube, which often totally hides the fullscreen button.
zython|9 years ago
If you believe their website it has been used in the Russian Space Program.
But I have to agree with you, this DeepUI looks a PITA to work with in comparison to DRAKON.
http://drakon-editor.sourceforge.net/
vashington|9 years ago
Naeron|9 years ago
buzzybee|9 years ago
Graphics are absolutely wonderful, in contrast, when they are able to stick to a domain abstraction, which is why we have a notion of application software at all. I have, in fact, aimed towards discovering what a "Pong synthesizer" would look like, so I have the domain knowledge to know that it does tend to lead to the digital logic breakdown if you aim to surface every possibility as a configurable patch point. As a result I started looking more deeply into software modularity ideas and rediscovered hierarchical structures(Unix, IP addressing, etc.) as the model to follow. I'm gradually incorporating those ideas into a functioning game engine, i.e. I'm shipping the game first and building the engine as I find time, and I do have designs for making a higher level editor of this sort at some point.
However, I also intend to have obvious cutoff points. There are certain things that are powerful about visual systems, but pressuring them to expose everything at the top level is more obscuring than illuminating. So my strategy is to instead have editors that compile down to the formats of the modular lower-level system, smoothing the "ladder of abstraction" and allowing people to work at the abstraction level that suits them.
johnp_|9 years ago
Otoh, in a visual programming language it'd feel more natural to make the upper and lower edges of the playing field collidable (I'm sure there's a better word for that), so that moving the paddle is inherently limited by collision with the edges.
pttrsmrt|9 years ago
Naeron|9 years ago
cafebabbe|9 years ago
ipnon|9 years ago
relics443|9 years ago
Thrillington|9 years ago
Having to use a mouse to interact with your programming ide graphically doesn't scale. It does make for a decent tool for hobby projects or prototyping though.
IshKebab|9 years ago
delegate|9 years ago
But don't throw away text-based programming yet; the wiser move would be to combine the two.
Find use-cases were visual DeepUI style programming shines and is vastly superior to text-based programming, but let me polish the details with old-school text source code.
There are apps which already do a lot of this, for example Unity - you can assemble your scenes and animations visually and tune it up with code.
marknadal|9 years ago
Ultimately work like this though leads to needing to reinvent all of programming (unfortunately) for instance, I'm now having to build a graph database to handle Turing Complete systems that are being collaborated on in realtime (see http://gun.js.org). So prepare for a long haul of work ahead of you. Get to know people in the space, like me and the Eve team, etc.
If you persist long enough (don't let money or lack of money stop you) you'll make a dent. :)
Naeron|9 years ago
jdiez17|9 years ago
DonaldFisk|9 years ago
I'd like to incorporate this coupling idea into my own visual dataflow language (http://web.onetel.com/~hibou/fmj/FMJ.html), but haven't yet decided how to implement it. My approach has been to design the language from the bottom-up, so that simple programs can be simply drawn, and there are higher level programming constructs which simplify more complex code, avoiding the complexity problem (the Deutsch limit) you've seen with LabVIEW.
nikofeyn|9 years ago
it's a rule amongst good labview programmers that you keep your block diagram to where it fits on a single, reasonably sized/resolution monitor without scrolling. simply adhering to that rule encourages good coding practice. within my large systems, i am able to freely edit pieces with often no unintended consequences. since reference-based data types are really only used for multi-threaded communication and instrument/file communication, you typically are operating on value-based data which makes reliable code development quite easy.
and what you describe is equally applicable to any text-based language. neither labview nor text-based languages have built-in precautions against horrific coding standards.
datenwolf|9 years ago
What's really annoying about LabVIEW is that its programming paradigm is kind-of functional, but it doesn't go the full distance, and it forces you to do things which one expects to be abstracted away, so things become a mess. Let me explain my top pet peeve:
In LabVIEW the main concept is the so-called VI: the Virtual Instrument. A VI consists of a number of inputs called "Controls", some logic in between, and outputs called "Indicators". Inside a VI you have the full range of programming primitives like loops (which, interestingly enough, can also work like list comprehensions through automatic indexing, but I digress) and "variables" (in the form of data flow wires), but no functions. VIs are what you use as functions. And if everything happens through VI inputs and outputs, and you don't use global variables, feedback nodes, or similar impure stuff, it's pretty much functional.
Somewhere your program has to start, i.e. there must be some kind of "main" VI. But VIs mostly behave like functions, so if you hit "run" on the main VI, it will just follow its data flow until every input has reached what it's wired to and every subVI instance has executed, and that's it. That's perfect for a single-shot program, like you'd have on the command line or when serving an HTTP request, but it's kind of the opposite of what you want for an interactive program with a visual UI.

Sure, there is that "run continuously" mode, which will just loop VI execution. But all it does is re-evaluate and execute each and every input and subVI again and again and again. If you're using LabVIEW in a laboratory setting, which is its main use, you probably have sensors, actuators, or even things like lasers controlled by this. And then you do not want to have them execute whatever command again and again.

There is a solution to this, of course, called "event structures". Essentially it's like a large "switch" statement that dispatches exactly once per event. Of course, this caters only to input-manipulation events and some application execution-state events, and you cannot use it in "run continuously" mode without invoking all the other caveats. So what you do is place it in a while loop. How do you stop the while loop? Eh, splat a "STOP" button somewhere on the Front Panel (and don't forget to add a "Value Changed" event handler for the stop button, otherwise you'll click STOP without effect until you manipulate something else).
And then in the Event structure you have to meticulously wire all the data flows not touched by whatever the event does through so called "shift registers" in the while loop to keep the values around. If you forget or miswire one data flow you have a bug.
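For readers who haven't used LabVIEW, the "event structure inside a while loop" pattern described above roughly corresponds to the following Python sketch. Names and events here are purely illustrative, not LabVIEW API; the point is how state (the "shift registers") must be explicitly threaded through every iteration.

```python
# Rough Python analogue of LabVIEW's event structure in a while loop.
# The `state` dict plays the role of the shift registers: any value not
# explicitly carried into the next iteration is lost -- the miswiring
# bug described above.

import queue

def run_event_loop(events: "queue.Queue") -> dict:
    state = {"count": 0}            # values carried via "shift registers"
    while True:
        event = events.get()        # dispatch exactly once per event
        if event == "STOP":         # the STOP button's "Value Changed" event
            break
        elif event == "increment":
            # state must be explicitly re-wired on every iteration
            state = {**state, "count": state["count"] + 1}
        # any key not written back here would simply vanish
    return state

# usage
q = queue.Queue()
for e in ["increment", "increment", "STOP"]:
    q.put(e)
print(run_event_loop(q))  # {'count': 2}
```

In textual code the threading of `state` is one assignment; in LabVIEW it is a wire per value per loop boundary, which is where the "forget one and you have a bug" problem comes from.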
What seriously annoys me about that is that, in principle, the whole dataflow paradigm of LabVIEW would allow for an immediate implementation of FRP (functional reactive programming): re-evaluating and executing only those parts of the program that are affected by a change.
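The FRP-style evaluation being asked for can be sketched in a few lines: a dependency graph where changing one input re-evaluates only the nodes downstream of it, instead of re-running everything. This is a minimal illustrative sketch, not LabVIEW code and not a full FRP library.

```python
# Minimal push-based dependency graph: setting an input cell recomputes
# only its dependents, leaving unrelated parts of the graph untouched.

class Cell:
    def __init__(self, value=None, compute=None, deps=()):
        self.value = value
        self.compute = compute          # function of the dependency values
        self.deps = list(deps)
        self.dependents = []
        for d in self.deps:
            d.dependents.append(self)

    def set(self, value):
        self.value = value
        self._propagate()

    def _propagate(self):
        # re-evaluate only downstream nodes, in dependency order
        for node in self.dependents:
            node.value = node.compute(*(d.value for d in node.deps))
            node._propagate()

# a = 0; b = 1; c = a + b  -- only c recomputes when a changes
a = Cell(0)
b = Cell(1)
c = Cell(compute=lambda x, y: x + y, deps=(a, b))
c.value = c.compute(a.value, b.value)

a.set(41)
print(c.value)  # 42
```

LabVIEW's wires already encode exactly this graph; the complaint above is that the runtime re-executes the whole diagram instead of propagating changes along it.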
The other thing that seriously annoys me is how poorly polymorphism is implemented in LabVIEW and how limited dynamic typing is. I'd almost go as far as saying that LabVIEW does type inference; at least for primitive types it covers a surprisingly large set of use cases. Connect numeric-type arrays to an arithmetic operation and it works element-wise. Connect a single numeric element and an array, and it again works element-wise. Have an all-numeric cluster (the LabVIEW equivalent of a struct) and you can do element-wise operations just as well. So if we were to look at this like Haskell, there's a certain type class to which numeric arrays, clusters, and single elements all belong, and it's actually great and a huge workload saver!

Unfortunately you can't expose that on the inputs/outputs of a VI. VI inputs/outputs always have to be of a specific type. Oh yes, there are variants, but they're about as foolproof to use as `void*` in C/C++. So the proper way to implement polymorphism in LabVIEW is to manually create a variant of your VI for each and every combination of types you'd like to accept, and coalesce them into a polymorphic VI. And since you have to do it with the mouse, and VIs are stored in binary, this is not something you can easily script away. Gaaahhh…
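The "wire a scalar or an array into an arithmetic node and it does the right thing element-wise" behaviour praised above is essentially broadcasting. A purely illustrative Python sketch (this is not how LabVIEW implements it):

```python
# LabVIEW-style element-wise polymorphism: one add that handles
# scalar+scalar, scalar+list, and list+list, recursing into nesting.

def poly_add(x, y):
    x_is_seq = isinstance(x, (list, tuple))
    y_is_seq = isinstance(y, (list, tuple))
    if x_is_seq and y_is_seq:
        return [poly_add(a, b) for a, b in zip(x, y)]  # element-wise
    if x_is_seq:
        return [poly_add(a, y) for a in x]             # broadcast scalar y
    if y_is_seq:
        return [poly_add(x, b) for b in y]             # broadcast scalar x
    return x + y                                       # plain scalars

print(poly_add([1, 2, 3], 10))   # [11, 12, 13]
print(poly_add([1, 2], [3, 4]))  # [4, 6]
```

The contrast with the complaint above: in a textual language one generic function like this covers all the type combinations, whereas a VI connector pane forces a separate monomorphic copy per combination.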
swalsh|9 years ago
vvanders|9 years ago
Naeron|9 years ago
alunaryak|9 years ago
unknown|9 years ago
[deleted]
Naeron|9 years ago
dkersten|9 years ago
It seems like its great for things that have an on-screen spatial meaning like the pong example game[1]. But what if I want to represent something abstract?
Like a tree (let's say a quad-tree, since this is for (2D?) games for now)? Or what if I want to implement AI logic (let's say I want some kind of decision-tree planner and path finding)? I'm having trouble visualising (I guess because the video didn't really explain it) how any of this can be done, as opposed to "moving something around on the screen".
I assume this has been thought about. I just couldn't figure out any of the details from the video.
[1] although even in that case, I couldn't figure out what the symbols and lines in the video meant. The symbols especially seem cryptic. A mix between logic gates and something else?
petterfaiknavn|9 years ago
Naeron|9 years ago
seanmcdirmid|9 years ago
big_paps|9 years ago
blueprint|9 years ago
Naeron|9 years ago
unknown|9 years ago
[deleted]
aylmao|9 years ago
(Pro-tip for the site: add screenshots and concrete descriptions of how things work.)
godmodus|9 years ago
pjmlp|9 years ago
Fifer82|9 years ago
unknown|9 years ago
[deleted]
ungzd|9 years ago
Naeron|9 years ago
Vandash|9 years ago
Naeron|9 years ago
Naeron|9 years ago
Naeron|9 years ago
UhUhUhUh|9 years ago
org3432|9 years ago
revelation|9 years ago
IshKebab|9 years ago
Even if this wasn't spaghetti code, most of the Labview icons are totally unreadable:
http://www.ni.com/cms/images/devzone/pub/nrjsxmfm91216399872...
veli_joza|9 years ago
Simulink and LabVIEW are made for electrical engineers transitioning to developers. They are good tools for building and shipping products.
DeepUI, on the other hand, introduces a new paradigm for visual expression, and it looks to be targeted more towards computer scientists and experimental artists (and maybe hobby game developers with an aversion to traditional programming).
I agree that in all these visual programming languages the interface is always way too slow. They tend to completely ignore the keyboard, which is by far the fastest input device.
triptych|9 years ago
Naeron|9 years ago
unknown|9 years ago
[deleted]
asow92|9 years ago