> in the virtual world of computers, everything can be replicated, from complex physical phenomena to abstract ideas, and even your own mind.
The author does not understand computers well enough to be lecturing the world about the need for software "literacy". The assertion that computers can represent any of those things is a complete falsehood.
Computers are great with complex mathematical models of physical phenomena, and in some limited cases this is extremely useful. But smart engineers know the limits of their tools, and computer models are not an exception.
But to assert that abstract ideas or the human mind can be replicated in a computer only shows that the author either has no idea of the actual state of the art, and/or has no idea of what he means when he says this is possible.
One of the first things you learn in Computer Science is that not all problems can be solved with a computer. It's amazing that people so enamored with math that they want to believe they can model every phenomenon in the universe with a computer apparently disbelieve the math that proves computers can't correctly model everything.
Weird. Please don't try to teach the world that computers can do everything. They can't. And that should be lesson one in any plan to increase software literacy. Starting, apparently, with its advocates.
I remember a draft of the manifesto that indeed had way more qualifiers in it. I guess removing almost all of them for better readability was maybe not the best idea.
DecoPerson and noxToken are right that this is a general statement about computing and not about present-day technology. Whether or not the human brain can be completely simulated by a machine is still an unanswered question, but nothing I have come across so far has convinced me that it's not possible.
While it's true that not all mathematical problems can be solved algorithmically, that doesn't at all limit the kinds of things a computer can simulate. Most physical phenomena don't have a mathematically precise answer, and approximations are usually good enough. I hope I understood you correctly there.
If you say "computers can't correctly model everything" then I probably disagree with your definition of a "model". In my understanding of the word, the concept of correctness doesn't even make sense. I'm convinced that models should first and foremost be judged by their "usefulness".
To make this discussion more concrete, it would be very helpful if you could provide an example of something that cannot be modeled with a computer.
I remember the long faces when I took my programming class through an introduction to floating point on the Windows calculator, and they realized that this thing makes mistakes:
(x/(9^64))*(9^64) != x
And then the dawning realization of what fights the future would hold for them...
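The same surprise is easy to reproduce in any language that uses IEEE 754 doubles; the calculator expression above is just one instance of the same rounding. A minimal Python sketch:

```python
import math

# Classic IEEE 754 double-precision surprise: decimal fractions like
# 0.1 have no exact binary representation, so round-off shows up in
# plain arithmetic.
a = 0.1 + 0.2
print(a)         # 0.30000000000000004, not 0.3
print(a == 0.3)  # False

# Summing 0.1 ten times drifts away from 1.0 the same way.
total = sum([0.1] * 10)
print(total == 1.0)  # False

# Which is why comparisons should use a tolerance rather than ==.
print(math.isclose(a, 0.3))  # True
```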
I don't think he was talking about present day technology; rather, he was referring to abstract machines.
Whoever taught you computer science must have had a very pessimistic attitude, which is surprising to me as most of the academics I've talked with think the computers of the distant future will have no limitations.
In the present day, though, we have limited time and must choose our battles carefully. I respect the OP for his attempts, if a little misguided. Just ask any active Linux kernel contributor and they'll tell you how often they see such people undertake similarly bold approaches which deliver little outcome (and, hopefully, a life lesson...)
Change replicated to represented or modeled, and it's a true and useful observation. Maybe the author believes "replicated," or maybe he just didn't find the right word that day.
In any case, if I change that word as I read it, it becomes a useful observation to me. Just because I'm reading an author's words doesn't mean I have to accept those words as-is. I just got a useful thought out of an incorrect statement. :)
Perhaps there's an intended subtext saying that computers have the capability to do such things, but that we are unable to do them with current technology and knowledge. I mean, you're right: the statement shouldn't have been made without being explicit about that.
I dunno. I'm just trying to give benefit of the doubt.
In writing this defense of the article, I realize that such a flawed premise shouldn't exist when talking about teaching fundamentals.
>To fix this, we need to increase the currently tiny number of people who are able to write, modify, combine and share software. We need a tool that makes writing software easy and fun, and accessible to everyone. A tool that enables software literacy.
Global access to niche knowledge doesn't seem like an effective use of the world's time to me. Let's look at this same idea, but replace "develop software" with "perform brain surgery."
> To fix this, we need to increase the currently tiny number of people who are able to perform brain surgery. We need a tool that makes performing brain surgery easy and fun, and accessible to everyone. A tool that enables neurosurgical literacy.
Just like Software Engineers don't need to know how to perform neurosurgery, accountants, marketers, burger flippers, and salespeople don't need to know how to write software. I'd rather the CEO of the company I work for spend time on growing the business, not learning how to write a "Hello, world!" program.
Ok, but what makes brain surgery a good analogy? Why not replace "writing software" with "hopping on one foot" instead?
Most of us own computers that can run software, but we don't own the equipment required to do brain surgery. More people have problems that can be solved by software than can be solved by brain surgery.
I think software is more like basic arithmetic: a bit of training (plus the pen and paper you already have) and you can do something yourself that used to require a professional. Of course, software is way more powerful!
I disagree with your analogy. In my opinion, programming is more analogous to composition. Most courses of study require at least one composition course, because no matter what discipline you're studying, it's important that you be able to organize your thoughts and express them clearly as well as understand the formal expressions of others.
I personally experience a huge overlap between ordering my thoughts for writing and making a mental model of a problem and its programmatic solution. Yet it's easier for most people to pick up a pen or open a word processor and express themselves than it is to write a working program.
Why should it be so? Apart from implementation details of hardware platforms and runtimes, you're really talking about expression in different units: subroutines rather than paragraphs, APIs rather than outlines, user stories rather than theses.
To bring it back to your analogy, consider what a tool that lowered the barrier to entry to brain surgery might do. It might certainly do things like list the particular skills, tools, and medications required to perform certain procedures. It might provide a check list of crucial steps and a sort of troubleshooting tool. It isn't going to make an expert out of an unskilled user, and a particularly skillful surgeon might not benefit from the tool at all, but on balance, it would raise the baseline competency of many users and help ensure that certain critical requirements were met.
A good assisted programming tool would do the same. It would help the user accurately model the problem, it would advise the user of certain best practices, and it would certainly warn the user of common mistakes. This is all very much in line with a good word processor, with features like spellcheck and document templates.
So I disagree that programming is analogous to brain surgery. Programming itself is a tool for the expression of an idea and much more akin to composition, and there is no good argument in my opinion against tools to assist with either. Remember that for a large part of our history, literacy was considered a niche skill, and widespread literacy led to explosive growth of our knowledge of the world. I'm not one of those "literally everyone should learn to code" types, but there is quite a lot of potential in spreading programming literacy, even if you feel most people have no business practicing it.
Oh, I think we could go a lot further as a society if we had more understanding of basic software and related concepts like revision control.
It hurts my head watching people sometimes, going through major contortions to achieve something just because they do not know a few commands or programming ideas. Is it “better” to develop massive, twisted dependencies on programs like Excel, when a few lines of real code would have been equivalent and infinitely more maintainable (not to mention vendor-independent)? Is it better to have massive, manually-copied-around duplicates of files because no one knows about revision control systems? Is it better to do things manually, like repetition, that computers do extraordinarily well?
Programming not only gives you the power to solve arbitrary previously-unknown problems but it puts you in the mindset to assume that an external tool will work better than an internal one. It makes you stop waiting for Your Only Vendor to “implement” feature X, and instead realize that there are already 4 other ways to do that with existing tools.
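As a small illustration of the "few lines of real code" claim above, here is a hedged Python sketch of a typical spreadsheet chore — totaling one column grouped by another. The data and column names are made up for illustration:

```python
import csv
import io

# Hypothetical stand-in for a manual spreadsheet chore: total an
# "amount" column grouped by "category". Data is inlined so the
# example is self-contained.
data = io.StringIO(
    "category,amount\n"
    "travel,120.50\n"
    "food,30.00\n"
    "travel,80.25\n"
)

totals = {}
for row in csv.DictReader(data):
    totals[row["category"]] = totals.get(row["category"], 0.0) + float(row["amount"])

print(totals)  # {'travel': 200.75, 'food': 30.0}
```

In practice the `io.StringIO` would be replaced by `open("some_file.csv")`, and the whole thing stays vendor-independent and trivially version-controlled.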
I thought long and hard about an apt analogy and decided on "literacy" because I'm hoping that using computers as a medium for formulating and exchanging ideas would have a similar effect to the one reading and writing had.
I can very well imagine that a couple of hundred years ago people thought that writing was some niche knowledge and that it was perfectly fine to leave it to the monks.
My goal is not to make everybody a professional software engineer, just as not everybody should be a professional author. But people should be able to use the medium to discover and understand powerful ideas.
The typical U.S. resident interacts with or otherwise has their life significantly affected by software not only every single day, but likely nearly _every single minute_.
The same is not true of brain surgery -- it's probably not even true of 'medicine' in general.
I absolutely agree with the OP that being an empowered individual in this environment absolutely requires some degree of 'computational literacy', and optimally would involve nearly every person, yes, actually knowing how to write software to some extent, the more the better.
I don't agree this is realistically going to happen though. The Hypercard future is not the road we went down. I do think this has pretty major unfortunate impacts on the degree of alienation and loss of control most people are likely to continue to experience in our highly digital society though.
But I think the best thing we can do right now is focus on some notion of accessible "computational literacy" within the present actually existing environment, not thinking we're going to get everyone to the point of being able to build actually-useful software. But they still can _understand_ software better. A hypercard-like learning environment might be useful for this. But it's unlikely to change the face of software development any time soon.
Many many people write scripts in Excel. It is easy and accessible to the majority of people who have computers today. Some accountants and quants would even call it fun.
I usually try hard not to be cynical, but in this case I can't stop myself.
This seems to be another attempt at a grand, unified packaging and IPC / linking method. There have been hundreds of these, and there is no attempt at all to discuss where all the others went wrong, and how this will improve upon them.
At first glance, it seemed like yet another silicon-valley-neoliberalist-style pile of words and ideologies, with no code and practicalities. But it seems like the OP's thesis has a more hands-on approach:
- It seems to be some kind of language/computation model, loosely based on an "everything is an actor" model.
- It did look goodish. I tried following the Fibonacci example, which sort of made sense (I got the impression that recursion is handled by creating new nodes/zells). The discussion chapter also seemed interesting.
- It has that abstruse academic feel, where sometimes it is hard to assess whether the problem is my ignorance, or just vagueness of the publication.
- Motivation sections usually have an exaggerated tone to them (i.e. this will totally change everything), but this one, and the article above, are a bit over the top.
- There are also some over-the-top statements sprinkled throughout the thesis (e.g. "a model of virtual objects which exist independent of any hardware").
This looks interesting to me as a programmer, but it looks much too complex and niche-knowledge-dependent to be, in my opinion, a real contender as a tool to allow non-programmers to program.
I only glanced at it, though, so maybe it's easier than it seems, but to me, the samples looked complex enough that I don't see it.
It does look great as an experiment into different programming models though and that's something I always enjoy (and is needed if we're to get better future tools).
> Just as knowing why apples fall down and aeroplanes fly up, the citizens of the 21st century need to know that computers are not magical boxes but composed of dynamic models.
In all honesty, I don't know why apples fall down and aeroplanes fly up. I just know that they do. No doubt that improving software literacy is a worthwhile goal, but I think the author hasn't made a strong case for it in the opening paragraphs.
One of the most common challenges that I see engineers face is to effectively empathize with the user. When you know a lot about something, you just see it in a fundamentally different way. This makes it difficult to focus on making it easy for the user to do what they want, not what you think they should want.
When I drive my car, I just want to get from point A to point B while listening to a podcast. I don't care to know how they work. This doesn't make me enfeebled or ignorant, I would just rather commit my learning cycles elsewhere.
Computers need to be the same. Why would we be so foolish as to think that it's a wrongdoing that people are able to effectively use computers without having any clue about how they function?
Literacy isn't about forcing people to learn things, it's about ensuring that everyone has the baseline exposure to help them discover if they have an interest in a topic, and then the resources to explore that topic if they choose.
> Just as knowing why apples fall down and aeroplanes fly up, the citizens of the 21st century need to know that computers are not magical boxes but composed of dynamic models.
I'm not sure most of the public could explain why gravity works, or how an airplane actually gets in the sky. This is because most people are not inquisitive by nature, they generally take things at face value without questioning why. This kind of attitude does not work if you want to build something complex whether it be software, a car, or a bridge. I think the first step is how we as a society can encourage a generation of thinkers and tinkerers.
> This is because most people are not inquisitive by nature, they generally take things at face value without questioning why.
I have to disagree with this statement. Almost all people that I meet, pretty much across all group boundaries you can imagine, I find to be inquisitive. Very inquisitive in fact.
What I think you may be observing is that the depth of the questions many are interested in isn't great... gossip magazines, for example, serve to give answers to an inquisitive populace. We can question the value of such questions, but that is still a drive to know something.
Also, I find that when people do ask deeper questions they can be simply overwhelmed by the answers. I'm not particularly good at mathematics, but often times work with people that are in the very top tier in that realm: I will ask certain questions for which I'm simply not prepared to hear the answer... the answer requires background that I simply don't have... I am genuinely interested, but the required prep work is simply out of the question. One could argue that the answers can be simplified as well, but that's not always true.
I think the problem with creating a society of thinkers and tinkerers is that some people just do not find it interesting, so we should not force them into it.
I am curious by nature and like learning new things and figuring out how things work, and it has been like that since I was a kid. My brother is the exact opposite of me, yet we both had the same upbringing (we are just three years apart).
When he studied the sciences, it was just to pass his high school degree, and after that he went into the social field as a caretaker for special-needs children. I doubt he would benefit a great deal in his day-to-day life from knowing why apples fall down or planes fly up.
Whilst I like the idea of encouraging a generation of thinkers and tinkerers, I accept that it just isn't for everyone, and neither should it be. Maybe it's just not in their nature.
That's a nice example since nobody can explain how gravity works. It's the least understood force in the universe =)
I agree that most people are not inquisitive but I don't think it's by nature since most children I met were very very inquisitive. So my approach is to make it easy to stay a tinkerer.
I'm going to buck the trend of cynicism and say this is beautiful and matches my own ideas closely, though I have not read the paper yet.
How many of us seasoned programmers came to an understanding with computers by playing in a "toy world" of comprehensible, somewhat visible, documented, predictable abstractions, such as Logo, HyperCard, Excel, BASIC, or even assembly code, and now perform bizarre ceremonial rites on a daily basis to get a teetering stack to perform our bidding as part of "real" programming? There's a vicious interplay between how "bad" and complex software is and how arcane and unapproachable it is, even to experts, driving away all but the most determined to crack the code, who then work together to try to build quality components and applications against great odds.
After reading the thesis, the proposed model of computing is like a concurrent Smalltalk where everything is completely mutable, even an object's code and prototype chain. The author then writes a function to calculate the Fibonacci sequence by turning it into a distributed system, with some effort, and then runs the code and talks about its performance!
At first glance, there seems to be a lot of incidental complexity and creative choice in expressing a function from integer to integer in this system, which goes against the ideas of code reuse and separating meaning from optimization -- i.e. that there is one global Fibonacci function that we can inspect and understand and don't have to rewrite for performance, or in another language, or to run in a distributed fashion.
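For contrast, here is what the "one global Fibonacci function" reading suggests — a minimal Python sketch in which the definition is stated once and the optimization (memoization) is layered on separately, rather than rewritten into the function itself:

```python
from functools import lru_cache

# Meaning stated once; performance handled by a separate, reusable
# optimization (memoization) instead of a bespoke rewrite.
@lru_cache(maxsize=None)
def fib(n: int) -> int:
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(10))  # 55
```

The point isn't that this is how the thesis's system should work, only that separating meaning from optimization is already idiomatic in ordinary languages.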
> More and more, users are put into - sometimes golden - cages, and forced to hand over their ideas, personal information, and even identities to international quasi-monopolies that put everything into walled gardens which the creator can only access through tiny keyholes.
I don't think this has anything to do with software, it seems more like an attempt to educate the masses on the evils of facebook and twitter, but then again that isn't a software problem. You can see this kind of locking system in financial businesses like loans/mortgages, or even benign ones like the eyeglasses business.
Pleased to see more efforts to fix fundamental problems with computing. It's certainly needed, this won't get fixed by piling more crap on top of the existing stacks (including: building OSS versions of proprietary things.)
This one seems to address some of the values I think are important, so, neat.
But I'd argue ordinary people just need to be able to use software, not necessarily build it. So, available software should be high-quality. There should be meaningful choices between alternatives. I think that means: no lock-in.
Whenever the physical world intrudes on the illusory "virtual world of computers", the man behind the curtain is observed in its less than superlative aspects and we're less prone to attribute OZ powers to "software".
Software /is/ magical in many aspects. But its own inherent magic has never been anything other than sleight of hand sort of magic. Even then, software magic borrows from the glory of /insanely/ complex physical machines, and various wondrous features of Nature itself.
One case in point was when Moore's Law and economy collided and the practice had to ramp up on concurrency and deal with SMP. A bit later (and still on going) we're ramping up on distribution and dealing with CAP. In the former case, the illusory 'indivisible' platform was seen to have a sort of geography. In the latter case, the illusory notions of linear 'time' and smooth 'space' was smashed.
OP's remedy for a software-driven world's ailments is software literacy. But the physical head poking through the curtain should make it clear a large subset of these ailments have to do with physical things, such as computing devices, infrastructure, access to energy, and even softer concerns such as legal and political cover for making, providing, and operating software.
Imagine if every single person was a world class chemist and biologist. Would we be able to do away with drug companies? You have that amazing molecule all figured out -- do you have the physical capability for actually realizing it?
I'm taking the opposite opinion for the sake of argument: The more you understand about software, the more frustrated and disappointed you will be, as you see all the easily avoidable flaws around you.
Web pages that have a few lines of text are unreadable on mobile devices and drain your battery.
Setting the spin speed on a washing machine takes ages because the computer polls for input only about once per second, so it misses most of your button presses. How do you reconcile that with a clock speed of thousands to millions of cycles per second and a system that has only a handful of inputs and outputs?
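To make the polling complaint concrete, here is a hypothetical simulation (all timings made up) of a loop that samples a button only once per second and so misses short presses entirely:

```python
POLL_PERIOD = 1.0  # seconds between samples (the complaint: far too slow)
PRESS_LEN = 0.1    # how long a typical button press holds the input "down"

# Assumed press timings: the button goes down at t = 0.3 s and t = 2.5 s.
presses = [0.3, 2.5]

def sampled(t):
    """True if any press is still 'down' at sampling instant t."""
    return any(p <= t < p + PRESS_LEN for p in presses)

# Poll at t = 0, 1, 2, 3 seconds: neither 100 ms press overlaps a
# sampling instant, so both are lost.
samples = [sampled(n * POLL_PERIOD) for n in range(4)]
print(samples)  # [False, False, False, False]
```

A real firmware fix is to use interrupts or a much faster debounced polling loop; the point is only that the failure is a design choice, not a hardware limit.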
These are not tough technological limitations, but completely easily avoidable human failures. There can be nothing but negligence, cynicism and depression to explain them.
People would be a lot happier to just say "I guess it must be like this", or "My device is getting old and slow". A bit like "God works in mysterious ways" or "it must be fate" can often feel better than "our government is corrupt" or "I don't have any kind of plan".
I guess it's the same about any thing you feel passionate about. If you cared about clothes or food, seeing all the junk out there might make you less happy. A friend of mine was a barista. He wasn't happy that people were paying the same amount for worse coffee.
I haven't read the paper yet but the idea looks promising.
I've had this itch myself for a long time. I can't quite put my finger on it, but we really need something like a high-level assembly language. One that incorporates high-level concepts like networked computers, cryptographic identities for users, and access to a global, shared pool of data and algorithmic primitives regardless of which application they were originally written for, or what domain-specific higher-level language they were created in.
Like, once a user has entered their address into a computer, they shouldn't ever have to enter it again in a different piece of software. Once someone has written an algorithm which takes input x and produces output y, no one should ever have to rewrite it, but everyone should be able to reuse it. Applications shouldn't all individually handle transferring data between devices of the same user or even between users; this should already be built into our programming model. And so on.
If this sounds like some utopian dream or incoherent babble, that's because it probably is at this point. But I'm convinced this is the future of computing we should be aiming for, not piling more stuff onto our existing stack that's just barely held together by ~50-year-old technologies built for that age.
IMO we're kind of headed that way now. I always saw this as the point of Google's combined interest in TensorFlow and Kubernetes.
Containers are the piece of the toolchain we've been missing. Now we actually have some feasible logical methods (deep neural networks + gradient descent) that can be used to structure existing computational tools into deeper, more intelligent systems.
Think of it this way: what's the difference between (an ideal) container and an artificial neuron? Structurally, they are nearly identical: they both have collectors and emitters, and perform some non-linear action in concert with other similarly-structured systems. Containers can also help with some of the "trust" problems: if we're shipping around trained data models (or container images) rather than the actual training data, we can push storage out to the edge of the network and run the models there. Containers provide a common computing language that enables you to do that.
This platform is not a leap forward for the theoretical capabilities of AI; but it is a shortcut that should eventually make it easy for AI researchers to leverage vast existing libraries of software written in any one of hundreds of programming languages.
I actually suspect that this is just the first generation; there are a number of software problems that currently can't be solved easily in a multithreaded manner. However, if you can build a container that does what you need, you can eventually train a model to replicate the container -- it may be less computationally efficient, but with future orchestration platforms it may end up being more time-efficient.
This industry is in its infancy. We have some of the greatest minds working in it, and yet we barely know how to make things work right, and OP’s suggesting we try and design tools that an average person can use to create powerful software. How about we let highly-trained specialists figure out how to write complex software first, before we try to teach average programmers how to do it, and perhaps then we can turn our heads towards the masses.
Making computers incredibly simple to use seems to have worked out well in the case of the iPhone. Lots of common usability problems in computer UI simply went away with a much better design. It's hard to say where the mobile-phone industry would be today if that shift hadn't happened in 2007.
Perhaps the same could happen for computer programming. For example, in the 1990s, HTML introduced coding to lots of people who otherwise probably wouldn't ever have considered it. A forgiving declarative interface was a lot more appealing than learning what compilers, linkers, functions, parameters, and APIs were.
Since this is my first submission on hacker news, I'm a bit overwhelmed by the amount and quality of the responses. I'll try to address critiques individually but in order to not have to repeat myself continuously I wanted to thank everybody for their time and energy this way. Your feedback is much appreciated and I'll use it to debug the idea and the manifesto.
I think this misses the problem completely. The problem is not that people that trust computers need to put more effort into understanding computers. The problem is that people keep trusting computers.
What people really need to be educated on is "Why computers are not and never will be trustworthy". And they don't need the details of why, just the bottom line.
Can we please just let this idea die. Almost since the day programming was invented there has been at least one person out there trying to dumb it down to work for the average person and it always fails. Computers are hard to program because they're complicated. Programming languages strike a balance between simplifying some aspects of that complexity and exposing enough of the inner workings to efficiently implement algorithms. Different languages strike different balances but ultimately they all are more complicated than the average person can handle because at the end of the day the computer itself is more complicated than the average person can handle. No amount of dumbing down or simplifying things is going to create a programming language that you're going to want to write serious programs in but that the average person is going to be able to understand.
I think there is room for a "global librarian". Moreover, when AI starts to program itself, having a logically linked system that can be queried for potential implementation strategies will be beneficial for integrating new features. However, I think Google can already do this to some degree. Perhaps it's time for something like "site: github, lang: python, tags: [csv, excel, graph]". Then add some local db that can receive notifications from GitHub on changes?
Viewpoints Research Institute (Alan Kay's group) is working on solving some of those problems. They have published some interesting papers although I don't know how practical their ideas are.
I propose capital punishment for people who do not put enough effort into optimizing memory usage and performance. I know we have more than 64kb of ram now. That does not mean we have to add useless shit that just bloats everything.
[+] [-] skywhopper|9 years ago|reply
Computers are great with complex mathematical models of physical phenomena, and in some limited cases this is extremely useful. But smart engineers know the limits of their tools, and computer models are not an exception.
But to assert that abstract ideas or the human mind can be replicated in a computer only shows that the author either has no idea of the actual state of the art, and/or has no idea of what he means when he says this is possible.
One of the first things you learn in Computer Science is that not all problems can be solved with a computer. It's amazing that people so enamored with math that they want to believe they can model every phenomenon in the universe with a computer apparently disbelieve the math that proves computers can't correctly model everything.
Weird. Please don't try to teach the world that computers can do everything. They can't. And that should be lesson one in any plan to increase software literacy. Starting, apparently, with its advocates.
[+] [-] rtens|9 years ago|reply
DecoPerson and noxToken are right about this being a general statement about computing and not present day technology. Whether or not the human brain can be completely simulated by a machine is still an unanswered question but nothing I came across so far convinced me that it's not possible.
While it's true that not all mathematical problems can be solved algorithmically, that doesn't limit at all the kind of things a computer can simulate. Most physical phenomena don't have a mathematically precise answer and approximations are usually good enough. I hope I understood you correctly there.
If you say "computers can't correctly model everything" then I probably disagree with your definition of a "model". In my understanding of the word the concept of correctness doesn't even make sense. I'm convinced that models should be first and foremost be judged by their "usefulness".
To make this discussion more concrete, it would be very helpful if you could provide an example of something that cannot be modeled with a computer.
[+] [-] SticksAndBreaks|9 years ago|reply
And then the dawning realization, what fights the future would hold for them...
[+] [-] DecoPerson|9 years ago|reply
Whoever taught you computer science must have had a very pessimistic attitude, which is surprising to me as most of the academics I've talked with think the computers of the distant future will have no limitations.
In the preset day, though, we have limited time and must choose our battles carefully. I respect the OP for his attempts, if a little a misguided. Just ask any active member of the Linux kernel and they'll tell you how often they see such people undertake similarly bold approaches which deliver little outcome (and, hopefully, a life lesson...)
trolor | 9 years ago
Oh ... could you elaborate? I think my cut-rate, community college degree skipped that part :) it seems like a useful thing to know!
a3n | 9 years ago
In any case, if I change that word as I read it, it becomes a useful observation to me. Just because I'm reading an author's words doesn't mean I have to accept those words as-is. I just got a useful thought out of an incorrect statement. :)
noxToken | 9 years ago
I dunno. I'm just trying to give benefit of the doubt.
In writing this defense of the article, I realize that such a flawed premise shouldn't exist when talking about teaching fundamentals.
coderjames | 9 years ago
Global access to niche knowledge doesn't seem like an effective use of the world's time to me. Let's look at this same idea, but replace "develop software" with "perform brain surgery."
> To fix this, we need to increase the currently tiny number of people who are able to perform brain surgery. We need a tool that makes performing brain surgery easy and fun, and accessible to everyone. A tool that enables neurosurgical literacy.
Just like Software Engineers don't need to know how to perform neurosurgery, accountants, marketers, burger flippers, and salespeople don't need to know how to write software. I'd rather the CEO of the company I work for spend time on growing the business, not learning how to write a "Hello, world!" program.
panic | 9 years ago
Most of us own computers that can run software, but we don't own the equipment required to do brain surgery. More people have problems that can be solved by software than can be solved by brain surgery.
I think software is more like basic arithmetic: a bit of training (plus the pen and paper you already have) and you can do something yourself that used to require a professional. Of course, software is way more powerful!
FroshKiller | 9 years ago
I personally experience a huge overlap between ordering my thoughts for writing and making a mental model of a problem and its programmatic solution. Yet it's easier for most people to pick up a pen or open a word processor and express themselves than it is to write a working program.
Why should it be so? Apart from implementation details of hardware platforms and runtimes, you're really talking about expression in different units: subroutines rather than paragraphs, APIs rather than outlines, user stories rather than theses.
To bring it back to your analogy, consider what a tool that lowered the barrier to entry to brain surgery might do. It might certainly do things like list the particular skills, tools, and medications required to perform certain procedures. It might provide a checklist of crucial steps and a sort of troubleshooting tool. It isn't going to make an expert out of an unskilled user, and a particularly skillful surgeon might not benefit from the tool at all, but on balance, it would raise the baseline competency of many users and help ensure that certain critical requirements were met.
A good assisted programming tool would do the same. It would help the user accurately model the problem, it would advise the user of certain best practices, and it would certainly warn the user of common mistakes. This is all very much in line with a good word processor, with features like spellcheck and document templates.
So I disagree that programming is analogous to brain surgery. Programming itself is a tool for the expression of an idea and much more akin to composition, and there is no good argument in my opinion against tools to assist with either. Remember that for a large part of our history, literacy was considered a niche skill, and widespread literacy led to explosive growth of our knowledge of the world. I'm not one of those "literally everyone should learn to code" types, but there is quite a lot of potential in spreading programming literacy, even if you feel most people have no business practicing it.
makecheck | 9 years ago
It hurts my head watching people sometimes, going through major contortions to achieve something just because they do not know a few commands or programming ideas. Is it “better” to develop massive, twisted dependencies on programs like Excel, when a few lines of real code would have been equivalent and infinitely more maintainable (not to mention vendor-independent)? Is it better to have massive, manually-copied-around duplicates of files because no one knows about revision control systems? Is it better to do things manually, like repetition, that computers do extraordinarily well?
Programming not only gives you the power to solve arbitrary previously-unknown problems but it puts you in the mindset to assume that an external tool will work better than an internal one. It makes you stop waiting for Your Only Vendor to “implement” feature X, and instead realize that there are already 4 other ways to do that with existing tools.
rtens | 9 years ago
I can very well imagine that a couple of hundred years ago people thought that writing was niche knowledge and that it was perfectly fine to leave it to the monks.
My goal is not to make everybody a professional software engineer, just as not everybody should be a professional author. But people should be able to use the medium to discover and understand powerful ideas.
jrochkind1 | 9 years ago
The typical U.S. resident interacts with or otherwise has their life significantly affected by software not only every single day, but likely nearly _every single minute_.
The same is not true of brain surgery -- it's probably not even true of 'medicine' in general.
I absolutely agree with the OP that being an empowered individual in this environment absolutely requires some degree of 'computational literacy', and optimally would involve nearly every person, yes, actually knowing how to write software to some extent, the more the better.
I don't agree this is realistically going to happen though. The Hypercard future is not the road we went down. I do think this has pretty major unfortunate impacts on the degree of alienation and loss of control most people are likely to continue to experience in our highly digital society though.
But I think the best thing we can do right now is focus on some notion of accessible "computational literacy" within the present actually existing environment, not thinking we're going to get everyone to the point of being able to build actually-useful software. But they still can _understand_ software better. A HyperCard-like learning environment might be useful for this. But it's unlikely to change the face of software development any time soon.
CJefferson | 9 years ago
This seems to be another attempt at a grand, unified packaging and IPC / linking method. There have been hundreds of these, and there is no attempt at all to discuss where all the others went wrong, and how this will improve upon them.
pttrsmrt | 9 years ago
http://zells.org/res/Zells_DiplomaThesis.pdf
airesQ | 9 years ago
- It seems to be some kind of language/computation model, loosely based on an "everything is an actor" model.
- It did look goodish. I tried following the Fibonacci example, which sort of made sense (I got the impression that recursion is handled by creating new nodes/zells). The discussion chapter also seemed interesting.
- It has that abstruse academic feel, where sometimes it is hard to assess whether the problem is my ignorance, or just vagueness of the publication.
- Motivation sections usually have an exaggerated tone to them (i.e. this will totally change everything), but this one, and the article above, are a bit over the top.
- There are also some over-the-top statements sprinkled throughout the thesis (e.g. "a model of virtual objects which exist independent of any hardware").
dkersten | 9 years ago
I only glanced at it, though, so maybe it's easier than it seems, but to me the samples looked complex enough that I don't see it.
It does look great as an experiment into different programming models though and that's something I always enjoy (and is needed if we're to get better future tools).
gmluke | 9 years ago
In all honesty, I don't know why apples fall down and aeroplanes fly up. I just know that they do. No doubt that improving software literacy is a worthwhile goal, but I think the author hasn't made a strong case for it in the opening paragraphs.
Waterluvian | 9 years ago
When I drive my car, I just want to get from point A to point B while listening to a podcast. I don't care to know how they work. This doesn't make me enfeebled or ignorant, I would just rather commit my learning cycles elsewhere.
Computers need to be the same. Why would we be so foolish as to think that it's a wrongdoing that people are able to effectively use computers without having any clue about how they function?
Literacy isn't about forcing people to learn things, it's about ensuring that everyone has the baseline exposure to help them discover if they have an interest in a topic, and then the resources to explore that topic if they choose.
jdavis703 | 9 years ago
I'm not sure most of the public could explain why gravity works, or how an airplane actually gets in the sky. This is because most people are not inquisitive by nature, they generally take things at face value without questioning why. This kind of attitude does not work if you want to build something complex whether it be software, a car, or a bridge. I think the first step is how we as a society can encourage a generation of thinkers and tinkerers.
sbuttgereit | 9 years ago
I have to disagree with this statement. Almost all people that I meet, pretty much across all group boundaries you can imagine, I find to be inquisitive. Very inquisitive in fact.
What I think you may be observing is that the depth of the questions many are interested in isn't great... gossip magazines, for example, serve to give answers to an inquisitive populace. We can question the value of such questions, but that is still a drive to know something.
Also, I find that when people do ask deeper questions they can be simply overwhelmed by the answers. I'm not particularly good at mathematics, but often work with people who are in the very top tier in that realm: I will ask certain questions for which I'm simply not prepared to hear the answer... the answer requires background that I simply don't have... I am genuinely interested, but the required prep work is simply out of the question. One could argue that the answers can be simplified as well, but that's not always true.
Insanity | 9 years ago
I am curious by nature and like learning new things and figuring out how things work, and it has been like that since I was a kid. My brother is the exact opposite of me, yet we both had the same upbringing (we are just three years apart).
When he studied the sciences, it was just to pass his high school degree, and after that he went into the social field as a caretaker for special-needs children. I doubt he would benefit a great deal in his day-to-day life from knowing why apples fall down or planes fly up.
Whilst I like the idea of encouraging a generation of thinkers and tinkerers, I accept that it just isn't for everyone, and neither should it be. Maybe it's just not in their nature.
rtens | 9 years ago
I agree that most people are not inquisitive, but I don't think it's by nature, since most children I've met were very, very inquisitive. So my approach is to make it easy to stay a tinkerer.
dgreensp | 9 years ago
How many of us seasoned programmers came to an understanding with computers by playing in a "toy world" of comprehensible, somewhat visible, documented, predictable abstractions, such as Logo, HyperCard, Excel, BASIC, or even assembly code, and now perform bizarre ceremonial rites on a daily basis to get a teetering stack to perform our bidding as part of "real" programming? There's a vicious interplay between how "bad" and complex software is and how arcane and unapproachable it is, even to experts, driving away all but the most determined to crack the code, who then work together to try to build quality components and applications against great odds.
dgreensp | 9 years ago
At first glance, there seems to be a lot of incidental complexity and creative choice in expressing a function from integer to integer in this system, which goes against the ideas of code reuse and separating meaning from optimization -- i.e. that there is one global Fibonacci function that we can inspect and understand and don't have to rewrite for performance, or in another language, or to run in a distributed fashion.
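For comparison, the "one global Fibonacci function" the comment has in mind is only a few lines in a conventional language; this Python sketch is just to make the contrast with the zell-based encoding concrete:

```python
def fib(n):
    """Return the n-th Fibonacci number (fib(0) = 0, fib(1) = 1)."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# fib(10) -> 55
```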
sickbeard | 9 years ago
I don't think this has anything to do with software; it seems more like an attempt to educate the masses on the evils of Facebook and Twitter, but then again that isn't a software problem. You can see this kind of locking system in financial businesses like loans/mortgages, or even in benign ones like the eyeglasses business.
aethertron | 9 years ago
This one seems to address some of the values I think are important, so, neat.
But I'd argue ordinary people just need to be able to use software, not necessarily build it. So, available software should be high-quality. There should be meaningful choices between alternatives. I think that means: no lock-in.
eternalban | 9 years ago
Software /is/ magical in many aspects. But its own inherent magic has never been anything other than sleight of hand sort of magic. Even then, software magic borrows from the glory of /insanely/ complex physical machines, and various wondrous features of Nature itself.
One case in point was when Moore's Law and economy collided and the practice had to ramp up on concurrency and deal with SMP. A bit later (and still ongoing) we're ramping up on distribution and dealing with CAP. In the former case, the illusory 'indivisible' platform was seen to have a sort of geography. In the latter case, the illusory notions of linear 'time' and smooth 'space' were smashed.
OP's remedy for a software-driven world's ailments is software literacy. But the physical head poking through the curtain should make it clear a large subset of these ailments have to do with physical things, such as computing devices, infrastructure, access to energy, and even softer concerns such as legal and political cover for making, providing, and operating software.
Imagine if every single person was a world class chemist and biologist. Would we be able to do away with drug companies? You have that amazing molecule all figured out -- do you have the physical capability for actually realizing it?
Gravityloss | 9 years ago
Web pages that have a few lines of text are unreadable on mobile devices and drain your battery.
Setting the spin speed on a washing machine takes ages because the computer polls for input only about once per second, so it misses most of your button presses. How do you reconcile that with a clock speed of thousands to millions of cycles per second in a system with only a handful of inputs and outputs?
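A toy Python sketch of the sampling problem described above (all timings are made up for illustration): a quick 100 ms button tap is invisible to a once-per-second poll but trivially caught by a 10 ms poll.

```python
def button_is_down(t_ms):
    """Simulated quick tap: the button is held from t=0 to t=99 ms."""
    return 0 <= t_ms < 100

def samples_seen(interval_ms, offset_ms=0, horizon_ms=1000):
    """Count how many polls observe the press over one second."""
    return sum(button_is_down(t)
               for t in range(offset_ms, horizon_ms, interval_ms))

# samples_seen(1000, offset_ms=500) -> 0   (once-per-second poll misses the tap)
# samples_seen(10)                  -> 10  (10 ms poll sees it on every pass)
```

In other words, the controller has cycles to spare by many orders of magnitude; missing presses is a design choice, not a hardware limit.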
These are not tough technological limitations, but completely easily avoidable human failures. There can be nothing but negligence, cynicism and depression to explain them.
People would be a lot happier to just say "I guess it must be like this", or "My device is getting old and slow". A bit like "God works in mysterious ways" or "it must be fate" can often feel better than "our government is corrupt" or "I don't have any kind of plan".
I guess it's the same about any thing you feel passionate about. If you cared about clothes or food, seeing all the junk out there might make you less happy. A friend of mine was a barista. He wasn't happy that people were paying the same amount for worse coffee.
barnacs | 9 years ago
I've had this itch myself for a long time. I can't quite put my finger on it, but we really need something like a high-level assembly language. One that incorporates high-level concepts like networked computers, cryptographic identities for users, and access to a global, shared pool of data and algorithmic primitives, regardless of which application they were originally written for or what domain-specific higher-level language they were created in.
Like, once a user has entered their address into a computer, they shouldn't ever have to enter it again in different software. Once someone has written an algorithm which takes input x and produces output y, no one should ever have to rewrite it, but everyone should be able to reuse it. Applications shouldn't all individually handle transferring data between devices of the same user or even between users; this should be built into our programming model. And so on.
If this sounds like some utopian dream or incoherent babble, that's because it probably is at this point. But I'm convinced this is the future of computing we should be aiming for, not piling more stuff onto our existing stack that's just barely held together by ~50-year-old technologies built for that age.
exelius | 9 years ago
Containers are the piece of the toolchain we've been missing. Now we actually have some feasible logical methods (deep neural networks + gradient descent) that can be used to structure existing computational tools into deeper, more intelligent systems.
Think of it this way: what's the difference between (an ideal) container and an artificial neuron? Structurally, they are nearly identical: they both have collectors and emitters, and perform some non-linear action in concert with other similarly-structured systems. Containers can also help with some of the "trust" problems: if we're shipping around trained data models (or container images) rather than the actual training data, we can push storage out to the edge of the network and run the models there. Containers provide a common computing language that enables you to do that.
This platform is not a leap forward for the theoretical capabilities of AI; but it is a shortcut that should eventually make it easy for AI researchers to leverage vast existing libraries of software written in any one of hundreds of programming languages.
I actually suspect that this is just the first generation; there are a number of software problems that currently can't be solved easily in a multithreaded manner. However, if you can build a container that does what you need, you can eventually train a model to replicate the container -- it may be less computationally efficient, but with future orchestration platforms it may end up being more time-efficient.
sowbug | 9 years ago
Perhaps the same could happen for computer programming. For example, in the 1990s, HTML introduced coding to lots of people who otherwise probably wouldn't ever have considered it. A forgiving declarative interface was a lot more appealing than learning what compilers, linkers, functions, parameters, and APIs were.
ysavir | 9 years ago
What people really need to be educated on is "Why computers are not and never will be trustworthy". And they don't need the details of why, just the bottom line.
nradov | 9 years ago
http://www.vpri.org/html/work/ifnct.htm