One principle that Niklaus Wirth has always espoused and with which I agree: you cannot reduce the complexity of a programming language by expanding the size of the language with greater features. This sounds like a contradiction, but as Wirth says in the preface to his Oberon programming tutorial:
"The language Oberon emerged from the urge to reduce the complexity of programming languages, of Modula in particular. This effort resulted in a remarkably concise language. The extent of Oberon, the number of its features and constructs, is smaller even than that of Pascal. Yet it is considerably more powerful."
The FreePascal of today has grown considerably over time. The same can be said of many popular languages today: PHP, Ruby, Python, C#, Javascript, etc. Some languages start big from the beginning (e.g. Ada).
With large languages there is also the feeling you've only learned a subset of the language, blissfully unaware of many other features the language offers. At least with smaller languages you have a greater chance to master the language whole.
Of course, it doesn't automatically follow that a smaller language will be simpler or easier to grasp. (There is plenty of argument surrounding Go's supposed simplicity.)
At some point the language becomes so small that the true featureset is hidden in libraries or extensions.
Prime example: Lisp. The language has the smallest possible syntax, but its semantics are mostly in special forms. If you don't know the exact meaning of all special forms, you don't know the language.
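To make that concrete: Go, like most applicative-order languages, evaluates every argument before a call runs, which is exactly why Lisp's IF has to be a special form rather than an ordinary function. A minimal sketch (the `ifAsFunction` and `branch` names are my own, purely illustrative):

```go
package main

import "fmt"

// branch records that it was evaluated, then returns its value.
func branch(log *[]string, name string, v int) int {
	*log = append(*log, name)
	return v
}

// ifAsFunction selects one of two already-computed values; it cannot
// prevent either argument from having been evaluated first, which is
// why a real conditional needs its own evaluation rule.
func ifAsFunction(cond bool, thenV, elseV int) int {
	if cond {
		return thenV
	}
	return elseV
}

func main() {
	var log []string
	r := ifAsFunction(true, branch(&log, "then", 1), branch(&log, "else", 2))
	fmt.Println(r, log) // both "branches" ran, though only one was selected
}
```

Knowing the syntax of a call tells you nothing here; you have to know which forms follow the ordinary rule and which don't.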
Ada was a real bear of a language. When it was introduced as the language of instruction in some class at UIC back in the 80s, students had to work on teams as a single compilation was enough to consume the weekly CPU time allocation for a student account on the VM/CMS mainframe. They had to set up file sharing between accounts and move to a new account with each new day's login. I think there were a few cases where students ran out of CPU time even with pooled accounts before they could complete an assignment.
One of the nice things about Oberon-07 is that the compiler I’ve played with could compile itself and the standard library all in a tiny fraction of a second.
One of the things he talks about is the approach the Microsoft Tools team took to implementing MFC in C++ (https://hardcoresoftware.learningbyshipping.com/). He talks about a lesson he learned from Martin Carroll, one of the early C++ gurus: essentially, just because a feature exists, that confers no obligation to use it. “You’re writing code for your product, not a compiler test suite.” He took that and turned it into the MFC team's approach of using C++ as a better C, not OOP for the sake of OOP.
I remember those days well, even then I was mostly working on Unix systems with a bit of Windows here and there, but I was well aware of what MS was doing at the time. He gives a really engaging account of those times and it's interesting how many of the lessons he learned and talks about are still relevant today.
But the principle also applies the other way around: if the programming language is too "simple", complexity still arises, because you have to write explicitly into the code many things that a programming language could have simplified for you. This makes the code larger than necessary and harder to understand. So it needs the right balance. From my point of view, Oberon-07 is already too simple. Personally, I find Oberon-2 the more practical programming language (e.g. it has dynamic arrays). I also find the requirement to capitalize keywords very impractical.
> you cannot reduce the complexity of a programming language by expanding the size of the language with greater features. This sounds like a contradiction
(Maybe I'm missing something, but) I'm not sure why this should sound at all like a contradiction – how could increasing a language's size/features reduce its complexity?! Maybe you meant "the complexity of programs in a language"?
The Pascal compiler on UNIX (v7, 4.1BSD) was idiosyncratic. Probably no more so than any other implementation, but enough that I (a graduate of York, a Pascal-teaching university) was put off by the micro-differences from what I had learned.
The uplift to C was simpler in some ways. Why use a confusing port which winds up having to call C system libraries, when you can code directly in the language the system libraries are coded in?
With Fortran, oddly, this wasn't such a big deal. Maybe the bulk of necessary code being in Fortran meant the barrier to entry was lower.
When I read this book I had a hard time running and compiling the included source code. If you want to read the book and try out the example compiler (oberon0), you can follow the instructions here: https://github.com/lboasso/oberon0. You just need a JVM >= 8 and the oberonc compiler: https://github.com/lboasso/oberonc
There is a lot of Wirth influence in Go, like “goroutines” and Oberon-style type embedding. Google's Go team, including Rob Pike, are fans of Wirth, and Robert Griesemer actually comes from ETH.
> a lot of Wirth influence in Go, like “goroutines”
Go inherited a lot from Oberon, but definitely not goroutines; rather e.g. the separate declarations of methods from the type declaration and the type binding syntax (which was actually an idea by H. Mössenböck, implemented in Oberon-2 in 1991, see e.g. ftp://ftp.inf.ethz.ch/pub/publications/tech-reports/1xx/160.pdf).
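The Oberon lineage is easy to see in Go itself. A small sketch of the two features just mentioned, with illustrative type names of my own: methods declared separately from the type (echoing Oberon-2's type-bound procedures) and struct embedding (echoing Oberon's record extension):

```go
package main

import "fmt"

type Point struct{ X, Y int }

// The method is bound to Point outside the type declaration, much as
// Oberon-2 attaches a type-bound procedure via its receiver clause.
func (p Point) String() string { return fmt.Sprintf("(%d,%d)", p.X, p.Y) }

// NamedPoint extends Point by embedding it; Point's fields and methods
// are promoted, giving Oberon-style type extension without classes.
type NamedPoint struct {
	Point
	Name string
}

func main() {
	np := NamedPoint{Point: Point{X: 1, Y: 2}, Name: "p1"}
	fmt.Println(np.Name, np.String(), np.X) // promoted method and field
}
```

Goroutines, by contrast, trace back to CSP via Newsqueak and Limbo rather than to anything in the Oberon family.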
I think Delphi got caught out by the coupling of the language to its commercial IDE and the need to fund that maintenance. GNU C++ was a shot across the bows, but when Java came that business model sank.
Unfortunately. Pascal and its descendants have a lot going for them.
I grew up using big languages. Hell, my first language was Perl, then I learned C++. Recent languages I liked are Haskell and Rust.
I'm a bit concerned that my technical aesthetics gravitate towards these types of languages. I really like the ideas and design of things like Scheme and C (and I use the latter when possible). Though I suspect I admire their simplicity from an ease of implementation POV more than day-to-day usage.
But I never actually sit down and study why something like Oberon is better from a design standpoint. I just nod my head at the simplicity bullet point, and go right back into worrying about insanity like "can I std::move this smart pointer in a copy constructor" with remarkably little cognitive dissonance.
I think what I'm asking is: have you had luck using a language with a lean, Wirth-style design? I tend to fixate on what those languages lack (e.g. Go's lack of generics) vs. what you get in return.
Languages with all the bells and whistles make you worry about how all the things interact in the situation you're dealing with (like your std::move example). A simple language can be much easier to reason about, because there's less to reason about.
On the other hand, if the simple language doesn't have what you need, then you have to do it yourself. Worse, if the language tries to "protect you from yourself" (as Wirth languages tended to), the language may block you from doing it yourself.
To me, that was the most frustrating problem with the original (pre Turbo) Pascal. Some guy thousands of miles away, who knew zero about my circumstances, was deciding what his language would allow me to do. When you're on the wrong end of that, it's very frustrating.
So I would say that either large or small languages can work, they just have different trade-offs. But avoid languages that try to restrict what you are allowed to do, even if they do it "for your own good". Languages need escape hatches. If they're clearly marked in the code, that's even better.
> why something like Oberon is better from a design standpoint
It's not "better". Wirth simply left out everything that did not seem absolutely necessary to him at the time. He and his colleague wanted to save work by doing this, i.e. to make their Oberon project feasible for two developers working part-time. With Oberon-07 he went even a step further in that direction.
>"On this website you will find information on Pascal for small machines, like Wirth compilers, the UCSD Pascal system, many scanned books and other files on UCSD Pascal, Pascal on MSX and CP/M, Delphi programming on PC, Freepascal and lazarus on Windows and Raspberry Pi, Oberon systems.
Many sources of early Pascal compilers!"
[...]
"WIRTH (1)1970- Pascal compilers, the P2-P4 compilers, Pascal-S, student VU Pascal (the forerunner of the Amsterdam Compiler Kit), Andrew Tanenbaum, Professor R.P van de Riet.
1980 – UCSD P-System, Turbo Pascal, Pascal-M, 10 years VAX/VMS Pascal programmer, teacher of the Teleac support course Pascal, teacher and examinator Exin/Novi T5 Pascal
1990 – Turbo Pascal 3 on CP/M to Delphi on Windows
2010 – Freepascal + Lazarus on Windows and Linux"
Having loved Pascal and struggled with Modula 2, I'll go out on a limb and say: for cosmetic and typing issues.
Modula 2, at least in the implementation I had to use at my University, had case-sensitive keywords which were in UPPERCASE. Compared to Pascal's case-insensitive keywords, in the editors available at that time, Modula 2 stuck out like a sore thumb and made writing and reading code like a chore.
When I started grad school, in 1986, I'd written a lot of Pascal and used Turbo Pascal on my PC. But the school used Sun workstations with unix, so I had to learn C pretty quickly. Nobody seemed to use Pascal. There was a Modula II compiler around, but it was a resource hog and never popular.
Oberon was published about the same time. I thought it was pretty cool and worked on a compiler for it, though the language evolved out from under me.
My impression is that none of the Wirth languages made much of an impression in any of the US grad schools in that era. Of course, I didn't see everything that was going on, but I saw most of the compiler work. Almost 100% in C, moving eventually to C++.
I'd say it is because Turbo Pascal and then Borland Pascal got popular enough, and had TurboVision and other stuff.
Modula-2 was a nice language (and modern Go is built basically on its ideas, plus Oberon's), but it had a hard time competing for the same niche on the PC. It worked better in embedded space, though.
On Unix, there was C, a language with many shortcomings, but one big upside: it was the language the kernel and the userland were implemented in. Its compiler was readily available as part of any Unix installation. It seemed a natural choice over Pascal, unless you wanted drastically different features (then you had Fortran, or awk, or Tcl, or later Perl, etc.)
I remember the Modula-2 I used in 1988-1991 was a royal pain in the butt for doing any I/O. It was a pain compared to Turbo Pascal, which, as Koshkin stated, already had modules and a significant ecosystem.
Although Wirth seems to only have been involved with Oberon-2 and Component Pascal.
And it was the mess that Borland (by then bought up, I think) made with Delphi 8 that was critical in the decline of Delphi.
Classic Book.