item 21888096

C Is Not a Low-level Language (2018)

343 points | goranmoomin | 6 years ago | queue.acm.org

167 comments

[+] topspin|6 years ago|reply
Intel and HP attempted to deliver what this paper advocates. They designed an entirely new processing element and compiler architecture that promised to deliver high performance with less complexity. It was called EPIC and manifested as Itanium. Billions of dollars and the best minds at the disposal of an industry wide consortium couldn't make it work; one year ago the last Itanium shipped. The market has spoken.

You can run unmodified, compiled OS/360 code from the 1960s on a z15 machine built this year. The market values the tools and code it has invested in far more than any idealized computing model you care to speculate about.

The flaws in contemporary CPUs that device manufacturers perpetrated on their customers for almost 20 years are not the fault of C and its users. They are the fault of reckless manufacturers that squandered their reputation in the name of performance and, ironically, helped perpetuate the lack of innovation in programming techniques called out in this paper.

[+] ngneer|6 years ago|reply
So true. The Spectre and Meltdown flaws have nothing to do with the abstract machine that a processor implements. You might as well claim that the vulnerabilities are the result of the ISA (which itself was born to smooth over differences among machines). Rather, these vulnerabilities are a case of a confused deputy. There is nothing to say you should not speculate; just do not do so across security boundaries, leaving breadcrumbs for an adversary to discover.
[+] daxfohl|6 years ago|reply
Though nowadays if AWS or Azure or GCP finds a way to run some proprietary database more efficiently using a new kind of language and CPU architecture, the scale there somewhat changes the playing field.
[+] commandlinefan|6 years ago|reply
> The market has spoken.

The market “demands” cheap, turnkey, easily replaceable programmers who don’t really know what they’re doing, and justifies this with “I made a website last weekend, programming is easy, you’re just pretending it’s hard to keep out competition”. Until software engineering is treated as actual professional engineering, time, money, and resources will continue to be wasted frivolously.

[+] jokoon|6 years ago|reply
> the lack of innovation in programming techniques called out in this paper.

What kind of innovation? Compiler optimizations? New language paradigms?

Am I right to think computers should adopt a more parallel architecture and design, expand on techs like OpenCL and CUDA, and generalize those techniques to everything done by a computer, because we've hit a frequency limit?

We often see software being sharded, distributed, load balanced, etc., but it seems that much more performance is possible if we start building chips that force the programmer to divide up a task. Of course it seems like a very big paradigm shift, which might be too expensive and complicated, but since bleeding-edge techs like deep learning cannot be properly accomplished with traditional computers, I tend to believe the computer model of today is outdated.

[+] dragontamer|6 years ago|reply
> Intel and HP attempted to deliver what this paper advocates. They designed an entirely new processing element and compiler architecture that promised to deliver high performance with less complexity. It was called EPIC and manifested as Itanium. Billions of dollars and the best minds at the disposal of an industry wide consortium couldn't make it work; one year ago the last Itanium shipped. The market has spoken.

Because SIMD seems to be easier in practice when you actually need performance.

Every high-performance application seems to dip into explicit SIMD-parallelism: H264 / H265 encoders, GPU-based deep learning... video game shaders and raytracing, etc. etc. All of which get more performance from SIMD than VLIW / EPIC.

I think the market has spoken: if you are aiming for explicit instruction level parallelism, why not go for 32-way parallelism per clock tick (NVidia Volta or AMD RDNA architectures) instead of just 3 or 4 parallel assembly statements (aka a "bundle") that EPIC / VLIW Itanium can do?

----------

Another note: normal CPUs these days are approximately 6-way parallel. The Skylake i9-9900K can execute 6 uops per clock tick from the uop cache (e.g. 2x load instructions, 2x additions, 1x XOR, 1x 512-bit SIMD instruction), on top of pipelines, reorder buffers, and other such tricks to "extract" more parallelism from instruction streams.

EPIC / VLIW just happens to sit in an uncomfortable spot. It's "different enough" that it requires new compiler algorithms and new ways of thinking, but it's not "dramatic enough" to create the kind of huge parallelism that SIMD can easily represent.

Back in the 90s and 00s, it was probably assumed that SIMD-compute was too hard to program, while traditional CPUs couldn't scale very easily.

EPIC / VLIW was wrong on both counts. OpenCL and CUDA made SIMD far easier to program, while traditional CPUs became increasingly parallel. And that is the history IMO.

[+] pjmlp|6 years ago|reply
Intel and HP only failed because AMD existed and was allowed to design chips that provided an alternative to Itanium.

Without AMD, the market would not have spoken anything.

[+] ip26|6 years ago|reply
They also built it upon the promise of a "sufficiently smart compiler" which never manifested, and ensured there was no second source to provide market competition.
[+] api|6 years ago|reply
EPIC also had numerous problems. The most severe I think was that it exposed the innermost workings of the chip.

That sounds like an awesome idea, but it's actually a major issue because once you expose something you freeze it in time forever.

Pretty much all modern processors larger than in-order embedded cores are basically virtual machines implemented in hardware. The actual execution units are behind a sophisticated instruction decoder that schedules operations to achieve maximum instruction level parallelism and balance many other concerns including heat and power use in modern designs.

The presence of this translation layer frees the innermost core of the chip to evolve with almost total freedom. Even if fundamental innovations were discovered like practical trinary logic or quantum acceleration of some kind, this could safely be kept behind the instruction decoder.

EPIC, on the other hand, freezes the core by exposing it. I predict that if EPIC had taken over, eventually you'd have... drum roll... an instruction decoder for each EPIC "lane" or whatever that did exactly what today's instruction decoders do. It probably would have ended up evolving into a simultaneous multithreaded vector processor with cores that look not unlike today's cores, complete with pipelines and instruction schedulers and all the rest of that stuff.

I can imagine one scenario where EPIC and other designs that show their guts could work, but it would require the cooperation of operating system vendors. (Stop laughing!)

OSes could implement the instruction decoder layer in software, transpiling binaries from a standard bitcode like WASM, JVM bytecode, LLVM intermediate code, and/or even pre-existing instruction sets like X86 and ARM to the underlying instruction set of the processor core. Each new processor core would require what amounts to a driver that would look not unlike an LLVM code generator.

Have fun getting OS vendors to do that. Another major problem would be that CPU vendors would be incentivized to distribute these things as opaque blobs, making it very hard for open source OSes to support new chip versions. It would be a bit like the ARM / mobile phone binary blob hell, which is one of the factors making it hard to ship open source phone OSes or make open phones.

Keeping the instruction decoder on the silicon basically just avoids this whole shitshow. It lets the CPU vendors keep their innovations closed as they wish without imposing that closed-ness on the OS or apps.

The final issue with kernel compilation is that the performance probably wouldn't be much better than what we get now. We'd trade the overhead of an instruction decoder in silicon for a lot of JIT or AOT compilation and caching in the OS kernel. The performance and power use hit might be just as large or larger.

[+] campfireveteran|6 years ago|reply
Yeap. One of my roommates from college worked at HP during the Itanium development. It was great on paper, but customers didn't want it or need it. The bigger picture is that throwing everything away every few model years isn't "innovation," it's planned-obsolescence consumerism and pointless churn. Turing completeness and standardization > whiz-bang over-engineering.
[+] joe_the_user|6 years ago|reply
Something about this constantly appearing trope bugs me.

I began programming C and assembler on the VAX and the original PC. At that time, C was a reasonable approximation of the assembly code level. We didn't get into expanding C to assembly that much but the translation was reasonably clear.

As far as I know, what's changed between that mid-80s world and now is that a number of levels below ordinary assembler have been added. These are naturally somewhat confusing, but they aim to emulate the C/assembler model that existed way back then. These levels involve memory protection, task switching, caches, and all the things involved in having the current zillion-element Intel CPU behave approximately like the 16-register CPU of yore, but much, much faster.

I get the "there's more on heaven and earth than your flat memory model, Horatio" (apologies to Shakespeare).

BUT, I still don't see any of that making these "Your Ceeee ain't low-level no more, sucker" headlines enlightening. A clearer way to say it would be "now the plumbing is much more complicated and even C programmers have to think about it".

Because... adding levels below C and conventional assembler still leaves C exactly as many levels below "high level" language as it was before and if there's a "true low level language" for today I'd like to hear about it. And the same sorts of programmers use C as when it was a low level language and the declaration doesn't even give any context, doesn't even bother to say "anymore" and yeah, I'm sick of it.

Edit: plus this particular actual article is primarily a rant about processor design with C just pulled into the fight as a stand-in for how people normally program and modern processors treat that.

[+] lmm|6 years ago|reply
> Because... adding levels below C and conventional assembler still leaves C exactly as many levels below "high level" language as it was before and if there's a "true low level language" for today I'd like to hear about it. And the same sorts of programmers use C as when it was a low level language and the declaration doesn't even give any context, doesn't even bother to say "anymore" and yeah, I'm sick of it.

Not really. For many purposes, C is not any more low-level than a supposedly "higher level" language. 20 years ago one could argue that it made sense to choose C over Java for high-performance code because C exposed the low-level performance characteristics that you cared about. More concretely, you could be confident that a small change to C code would not result in a program with radically different performance characteristics, in a way that you couldn't be for Java. Today that's not true: when writing high-performance C code you have to be very aware of, say, cache line aliasing, or whether a given piece of code is vectorisable, even though these things are completely invisible in your code and a seemingly insignificant change can make all the difference. So to a large extent writing high-performance C code today is the same kind of programming experience (heavily dependent on empirical profiling, actively counterintuitive in a lot of areas) as writing high-performance Java, and choosing to write a program with extreme performance requirements in C rather than Java because it's easier to control performance in C is likely to be the wrong tradeoff.

[+] tybit|6 years ago|reply
The author’s point was that it’s hard to separate discussion of modern CPU design from the constraints of C. Not from a technical perspective but from a pragmatic/commercial one.

The takeaway for me was that while C is obviously a higher level abstraction than CPUs, it’s a mistake to think that C has been designed for that hardware, when nowadays it’s the other way around.

[+] defanor|6 years ago|reply
The article blames C for the processor designs that emulate older (non-parallel) processors. I think it can be summarised as C having a relatively straightforward translation into hardware-friendly assembly in the past, but these days both major CPUs and compilers are working hard to preserve the same model, so that neither the assembly is hardware-friendly nor efficient/optimizing C compilers are straightforward.
[+] msla|6 years ago|reply
> I began programming C and assembler on the VAX and the original PC. At that time, C was a reasonable approximation of the assembly code level. We didn't get into expanding C to assembly that much but the translation was reasonably clear.

Right: On the VAX, there wasn't much else for a compiler to do other than the simple, straightforward thing, and I'm including optimizations like common subexpression elimination, dead code pruning, and constant folding as straightforward. Maybe loop unrolling and rejuggling arithmetic to make better use of a pipeline, if the compiler was that smart.

> As far as I know, what's changed that mid-80s world and now is that a number of levels below ordinary assembler have been added.

You make good points about caches and memory protection being invisible to C, but they're invisible to application programmers, too, most of the time, and the VAX had those things as well.

Another thing that's changed is that chips have grown application-visible capabilities which C can't model. gcc transforms certain string operations into SIMD code, which vectorizes it and turns a loop into a few fast opcodes. You can't tell a C compiler to do that portably without relying on another standard. C didn't even get official, portable support for atomics until C11.

You can dance with the compiler, and insert code sequences and functions and hope the optimizer gets the hint and does the magic, but that's contrary to the spirit of a language like C, which was a fairly thin layer over assembly back in the heyday of scalar machines. I don't know any modern language which fills that role for modern scalar/vector hybrid designs.

[+] saagarjha|6 years ago|reply
I found it insightful, because it goes on to discuss how C has had a huge effect on the programming model all the way from the processor to the compiler and it’s led to problems in performance and security. They suggest that it may be worthwhile designing or using a language that can better handle how processors are structured today.
[+] Gibbon1|6 years ago|reply
I think you touched close to an opinion of mine, which is that anything lower-level than C becomes very hard for ordinary people to work in. I remember DSPs in the '80s and '90s, where to program them you needed to deeply understand how the machine worked. And guess what: eventually manufacturers ported C over to them so that programmers could be productive when working on the non-performance-critical parts of the code base.

If anything modern processors are even worse under the hood. With the added problem that you can't feed one of them raw 'true' instructions fast enough to keep them from stalling.

[+] philwelch|6 years ago|reply
> Because... adding levels below C and conventional assembler still leaves C exactly as many levels below "high level" language as it was before and if there's a "true low level language" for today I'd like to hear about it.

Me too, actually. I know you meant that rhetorically, but what if you designed an instruction set that better matched modern processor designs and then built a low-level compiled language on top of it? My hunch is that modern processor designs are so complicated that you’d have to do similar amounts of abstraction to make the language usable, but I’m not sure.

[+] benibela|6 years ago|reply
The optimizers are much stronger nowadays. They rewrite the program, so that the resulting assembly might have nothing to do with the code you wrote.

Especially where undefined behavior is involved. Decades ago you did not need to care about undefined behavior. You write a + b, and you know the compiler emits an ADD instruction for +, and that x86's ADD does not distinguish between signed and unsigned operands, so you get the same result for signed and unsigned numbers, regardless of overflow. But nowadays the optimizer comes along, says "wait, signed overflow is undefined," and optimizes the entire check away.

[+] mpweiher|6 years ago|reply
> still leaves C exactly as many levels below "high level" language as it was before

And the levels below assembly are inaccessible. Assembly/machine code is the lowest level that's accessible.

[+] throwaway17_17|6 years ago|reply
I have read through the comments already posted and the comments from the previous HN discussion linked also. I can’t help but feel like I got something completely different from this article than everyone else. I am convinced that Chisnall used the ‘C is not a Low-Level Language’ title as clickbait. The actual point of the article is to push the view that the x86 ISA, according to him, is structured the way it is purely out of the desire to make the massive amount of existing C code run faster. His argument is essentially that ‘low-level’ programmers are not delusional, but are purposely being deluded by chip manufacturers. C and x86 assembly, according to the article, are not low-level because they have only a passing relevance to the actual architecture of modern CPUs. Chisnall then goes on to argue that a low-level language would require an ISA that presents a clearer picture of the actual architecture and would be geared for performance given the actual functionality of the CPU. He then bats around several features that could be part of an ISA for a multi-core, hierarchical-memory, pipelined chip: alternate memory models, changes in register structure and count, a push for immutability, and other features that would adapt the ISA to reflect what actually constitutes performant code.

I’m all for his vision; it seems like there could be an x86 ISA translation layer, or a portion of cores dedicated to maintaining x86 compatibility, while transitioning to a new ISA. In fact, just have a new ISA be the target the CPU reduces x86 to, and also expose that underlying ISA. But as said elsewhere in the thread, it’s been tried before and it hasn’t worked yet.

[+] Certhas|6 years ago|reply
But it's not clear that this has been tried. Itanium wasn't exactly that, was it?

Edit: It would be easy to imagine exposing some of the lower level details in a new ISA that lives alongside x86, and then allowing languages that have the right abstractions to make use of them, creating better fits between existing abstract programming models and underlying computational resources...

[+] Typhon|6 years ago|reply
This article quotes Perlis' famous saying that "a programming language is low-level when it calls attention to the irrelevant", and I am reminded of another Perlis aphorism:

« Adapting old programs to fit new machines usually means adapting new machines to behave like old ones. »

[+] gumby|6 years ago|reply
Only met him once, but the man was a giant of computer science.
[+] todd8|6 years ago|reply
A number of claims here and in the original article are inaccurate interpretations of the almost random walk that we have made to get to our modern processor designs and the C programming language.

Instruction-level parallelism and out-of-order execution were done by the seminal CDC 6600 as early as 1964, at the time one of the world's fastest computers. (I remember a conversation with Seymour Cray about the difficulty of handling machine state during an interrupt on such an architecture.) The C programming language didn’t come along until almost 10 years later.

As the article says, C is a good fit to the architecture of the PDP-11, a minicomputer very different from the mainframes of the time. There were many competing visions for what a “high-level” programming language ought to look like back then: Pascal (1970), LISP (pre-1960), Prolog (1972), FORTRAN (pre-1960), COBOL (pre-1960), Smalltalk (1972), Forth (1970), APL (1966), Algol (pre-1960), Jovial (pre-1960), PL/I (1964), CLU (1974). As a professional developer and CS grad student during this period, I was well aware of these alternatives. Many of them had escape hatches to gain low-level access to the machines they ran on, and libraries were customarily written in assembly language. C came along during this period. It wasn’t my favorite; the whole pointer/array punning seemed unnecessary to me.

Why did C prevail? Was it because it was low-level? No, there were other well-established languages capable of low-level work. I recall a few reasons: Unix, DEC’s PDP-11, and Yourdon.

First, Unix was amazing, and C was the favored language on Unix. Just being able to type man on a TTY or VDT and see the man page for a command was so novel. The Unix OS and its commands were written in C.

Second, the PDP-11 was a very popular, good machine, and Unix ran on the PDP-11. Unix on a PDP-11 was a lot more fun than dropping off decks of punch cards to run on an IBM 360 or CDC mainframe.

Third, Edward Yourdon was an influential American software consultant, author, and lecturer. He had picked C over Pascal and other languages as a “practical” general-purpose high-level language to recommend.

Meanwhile, hardware was not at all a monoculture; even though the PDP-11 was a commercial success, it was a simple machine and just a minicomputer. There were many attempts at alternative architectures: Harvard memory architecture machines, capability-based addressing, programmable wide microcode, LISP machines, RISC systems. I’ve programmed or designed systems for most of these.

So why did the x86 become one of the dominant architectures? It was because of the power of mass production of integrated circuits. Computers using other architectures can be built, but like the LISP machines, they will be slower and more expensive than mass produced processors.

[+] cryptonector|6 years ago|reply
All those languages are very serial and branch-happy too.

What would the alternatives be? SIMD, basically, and some sort of language layered on that -- array languages most likely -- and a complete retraining of programmers.

TFA also talks about the UltraSPARC CMT architecture and says it's a bad fit for C because most C programs don't use a lot of threads. That's nonsense though, since in fact there are many C10K-style, NPROC threads/processes applications out there. Sure, many applications remain that are thread-per-client, but those were obsolete in the 90s, and most such apps I run into are Java apps because Java didn't tackle async I/O way back when. I suppose Java is also C's fault since Java resembles C.

C is a scapegoat here, but TFA still has a point if we ignore that part of it: our programming languages (not just C) are serial and branch-happy, even when they have well-developed threading and parallelism features, and this translates to pressure on CPUs to do a lot of branch prediction.

But we do have less-radical ways out. For example, the CMT architecture results in pretty lousy per hardware thread performance, but pretty good overall performance with minimal or no Spectre/Meltdown trouble (because the architecture can elide most or all branch prediction) -- this won't do for laptops, but there's no reason it shouldn't do for cloud given server applications written in C10K/CPS/await styles.

My bet would be on a hybrid world with a mix of CMT and SIMD, and maybe also some deeply pipelined cores: CMT CPUs for services, SIMD for relevant applications, and deeply pipelined CPUs for control purposes.

[+] Merrill|6 years ago|reply
The evolution of minicomputers recapitulated the evolution of mainframes. The evolution of microprocessors recapitulated the evolution of minicomputers.
[+] nine_k|6 years ago|reply
The money quote:

---

...processor architects were trying to build not just fast processors, but fast processors that expose the same abstract machine as a PDP-11. This is essential because it allows C programmers to continue in the belief that their language is close to the underlying hardware.

---

All else follows: hardware parallelism and memory hierarchy are not exposed to standard C. The compiler rewrites the code ruthlessly to replace loops with sequential instructions, vector instructions, etc (or not, then you wonder why and how to trigger the optimization).

C compilers do a number of things to continue supporting abstractions from 50 years ago. The article suggests that maybe other approaches, not compatible with C, could be considered for CPUs (not just GPUs).

[+] xenadu02|6 years ago|reply
The proposed benefit of C is that it is “close to the metal”, and from that follows that the generated code is “obvious” and thus its performance characteristics are “easy” to reason about.

It turns out that none of these three things are actually true. That just leaves us with a language poorly adapted to today’s use cases and simultaneously hardware that has optimized for the C abstract machine in not always useful or secure ways.

[+] stephc_int13|6 years ago|reply
Wait; what? If C is not a Low-Level Language, then what is a Low-Level Language?

"The features that led to these vulnerabilities, along with several others, were added to let C programmers continue to believe they were programming in a low-level language when this hasn't been the case for decades."

Now C is again the root of all evils...

But I'm afraid that's not right; all those CPU optimizations (branch prediction, speculative execution, caches, etc.) are not tied to any specific language.

They have been designed to make existing programs run faster; if all our software stack was written in Java, Lisp or PHP, I think that on the hardware front, most of the same decisions would have been made.

[+] dragonwriter|6 years ago|reply
> Wait; what? If C is not a Low-Level Language, then what is a Low-Level Language?

Assembly, actual machine code. (Contrary to the article, C was never a low-level language, when it was younger it was literally a textbook high-level language because it allows abstracting from the specific machine, and while it's less likely to be what a textbook points to as an example today, that hasn't changed.)

[+] blihp|6 years ago|reply
Microcode. Back when C became a thing, microcode only existed on 'big iron'. That changed 20+ years ago.

The microarchitecture is the 'real' architecture you're running on, the ISA that assembly language and C code is written against is a facade. It has value in that we don't need to rewrite everything every few years when the microarchitecture changes, but the downside is what we consider low level programming languages talking 'directly' to the hardware are now going through another layer of abstraction.

[+] inopinatus|6 years ago|reply
A matter of perspective.

”In a low-level language like C...” - applications programmer

”In a high-level language like C...” - chip designer

[+] Tomte|6 years ago|reply
Only marginally lower, but a common example is Ada.

You can actually describe hardware registers sanely and portably in Ada. You cannot do that in C.

(It obviously still works, because C is ubiquitous, and so processor and compiler vendors try their hardest to "make it work", but that's no accomplishment of C.)

[+] dienciebsiwbsi|6 years ago|reply
The talk of out-of-order execution and caching doesn't make much sense. You could write in machine code and still have no idea how long a memory access will take or in what order your instructions will execute, so by this article's logic machine code is a high-level language. Maybe "sequential instructions with flat memory model" is a high-level abstraction of what a modern machine does, but it is the only abstraction we have. The article proves not that C is high-level but that modern CPUs offer only a high-level interface.

(And you can get around some of this, too. You can issue prefetches and so forth.)

[+] jasonhansel|6 years ago|reply
But would this prevent the next Spectre? Not necessarily. Hardware vendors would continue to optimize execution of machine code in unexpected ways, including ways that allow for side channel attacks. They wouldn't be optimizing for C code, but they would still be optimizing for some portable low-level language.

Unless we made that low-level language constantly change as hardware microarchitecture improves (thus giving up on portability), I think we'd be back with the same problem of a mismatch between low-level languages and underlying CPU architecture.

[+] lmilcin|6 years ago|reply
I program in ARM assembler, C, Rust, Java, and Clojure on a daily basis, and for me C is definitely a low-level language.

For me, a "level" up is something that helps me structure my program in a fundamentally better way. Java has a VM and garbage collector; Lisps have macros and a REPL. For me these are fundamental enablers to create different types of flows that would not be at all practical in assembly or C.

The difference between assembly and C is just that you need a couple of instructions in assembly to get the equivalent of a line of code in C, but the fundamental program structure is the same.

The automation is nice, but other than reduced overhead, the same problems that are difficult in assembly are still difficult in C, and for me this means they are roughly on the same level.

[+] Marazan|6 years ago|reply
The point is that C has no concept of caches. Yet efficient cache usage is crucial to writing performant low-level code.

So C programmers have to engage in cargo-culted patterns to try to trigger the correct behaviour from their optimising compiler.

[+] talkingtab|6 years ago|reply
You need to define low-level language carefully. As the article says, a language like assembly is indeed close to the metal, however each assembly language is too close to some particular metal to work on other metal. On the other hand, C is probably as close to the metal as you can get and still run on a z80, z8000, m68k, i386, i286, arm, etc.

We would all love to C(!) a better common low-level language that was common across architectures but there is not one that I know of. Anyone?

There is indeed a problem associated with Spectre, Meltdown, etc., but associating that with the C language seems like a misdirection.

[+] coreai|6 years ago|reply
This paper revealed to me some of the things that modern designs have adopted, which the regular uninformed coder would never notice (and may not need to, most of the time). But recently I have been looking into HFT computers, and I see that these things are running C code (a friend works at a small startup who said they use C programs for most of the orders) on regular computers, with most even using off-the-shelf hardware (Intel's 9900KS and 9990XE are hot targets, and AnandTech and ServeTheHome have shown off some hardware). HFTs are highly sensitive to optimizations, and it is my understanding that the lower they go to the hardware, the better the returns, given how competitive it can get.

With so much in the middle from a high level C program to low level instructions on a CPU, I wonder if we will see companies like JP Morgan and Morgan Stanley (big ones with money and time to invest) enter the chip business heavily. This could then bring back some of those optimizations to the consumer space and startups in the area of fast and efficient C code then might get into trouble. As of now this area seems to be open to compete.

[+] thayne|6 years ago|reply
Sure, a new processor that is designed to be optimized for a different threading and memory model might be better in a lot of ways, but backwards compatibility is important. We can't simply throw away all of the existing software written in C, and in most cases a regression in performance for C programs would be unacceptable. Sure, such a processor might find niche use cases, where it doesn't need to run any legacy software (including the OS) and can take advantage of massive parallelism, but I don't see it being able to replace the prevalent abstract computer model used in CPUs today.
[+] daxfohl|6 years ago|reply
Is assembly a low level language? It presumably benefits from instruction parallelism, branch prediction, caching too.
[+] mnowicki|6 years ago|reply
There are no objectively defined tiers for what level each programming language is on; when people describe a language as low-level they are speaking relatively. They're saying 'picture a typical programming language, I'm talking about something like that except more low-level'.

Almost all people would consider C to be a low-level language compared to most languages they're familiar with. If you work in assembly all day then maybe you don't think C is a low level language, but those people aren't the ones calling it low-level.

Unless someone wants to define a cutoff for what a 'low-level' language is (and that would be a bad idea; the way it's used now is very useful and working as intended - it'd be better to coin a new word), then I think the way the term is typically used is perfectly fine.

I could see an argument for saying C is 'less low-level than people typically assume', but saying it's not low level makes me think it's on the same level as Java or something.

I'm probably just nitpicking though. I always feel like people ignore the idea that the point of language is to communicate what you're thinking to someone else and have them interpret what you're trying to say as closely as possible, and language is already pretty good at evolving in a way that optimizes for this. Trying to change that, or to force people to address technicalities that are outside the scope of what they're trying to communicate, only complicates it (ignoring obvious exceptions, like if you're writing a scientific paper and saying things that are factually wrong for the sake of clarity).

[+] ncmncm|6 years ago|reply
There is no surer route to being blasted on HN than to suggest there is anything about C that makes it slow, or distant from the actual machine.

Certainly, C is close to assembly language, but assembly language is itself a compiled and heavily rewritten and optimized language, nowadays.

The actual machines we have today are so complex that we are not smart enough to program them in actual machine language or anything close to it, but people insist they want a machine language that looks primitive, and close to C. So, the manufacturers give us that, and then compile the hell out of it, in hardware, trying stoically to extract performant instructions from the vague hints we provide them via the instructions we are willing to give them -- that resemble C.

We are stuck in a deadly embrace: any new language must perform well on a machine designed to emulate the C abstract machine, and any new processor has to emulate that abstract machine.

A new language designed to direct the operations of a wholly different design, no matter how capable, has no chance to succeed.

The only way forward may be for a language to be designed to program FPGAs directly, and bypass the whole C-industrial complex. Unfortunately, FPGAs are still mired in medieval-guild-style secrecy, so there is no more access to their internals than to mainstream C engines. The access offered is via Verilog or VHDL, which resembles C. It is not clear to what degree the guts of FPGAs are compromised by this orientation.

If somebody ever musters the courage to publish a fully exposed FPGA that can be programmed directly, and can be latticed by the tens or hundreds for increased power, then it will become possible to create a wholly new language that need not be efficiently translatable for execution on machines designed to resemble C machines. I won't be holding my breath.

[+] sullyj3|6 years ago|reply
> The root cause of the Spectre and Meltdown vulnerabilities was that processor architects were trying to build not just fast processors, but fast processors that expose the same abstract machine as a PDP-11.

What would an abstract machine that better matched current processors look like?

Edit: Probably should've read the rest of the article first

[+] vectorEQ|6 years ago|reply
This paper might be of interest: https://arxiv.org/pdf/1902.05178.pdf

It explains the 'concept' of these kinds of attacks and why it's more or less impossible to make an abstraction that does not suffer from such flaws in the presence of high-precision timers, which are in turn needed for high-precision applications (not sure what exactly; I guess realtime applications, or things which need to measure super precisely... maybe audio/video?).

The paper goes a bit further and IMO is a bit simpler to read than the original Spectre/Meltdown papers.

[+] gpderetta|6 years ago|reply
OoO execution, virtual memory, SIMD, virtualization, microcoding, caches, and the other features that are claimed to exist to propagate the illusion of a PDP-11 all predate, or are contemporary with, the creation of C.

They have been invented because they objectively make computers faster or easier to use.