top | item 42969667

flitzofolov | 1 year ago

Makes sense, good luck! I know that sounds snarky, I'm looking forward to rational progress and cooperation on the evolution and adoption of the standard. Just haven't seen that played out in such a planned orderly fashion yet (ipv6?).

nottorp | 1 year ago

ipv6, unicode, usb...

Why am I more worried than excited about a new standard?

By the way, bounds checking was introduced in Turbo Pascal in 1987. IIRC people ended up disabling it in release builds, but it was always on in debug.

But ... it's Pascal, right? Toy language.

pjmlp | 1 year ago

Bounds checking has existed at least since JOVIAL in 1958, or since 1957 if you count FORTRAN, whose compilers have offered a bounds-checking option for quite some time.

Here is my favourite quote for every time we discuss bounds checking.

"A consequence of this principle is that every occurrence of every subscript of every subscripted variable was on every occasion checked at run time against both the upper and the lower declared bounds of the array. Many years later we asked our customers whether they wished us to provide an option to switch off these checks in the interests of efficiency on production runs. Unanimously, they urged us not to--they already knew how frequently subscript errors occur on production runs where failure to detect them could be disastrous. I note with fear and horror that even in 1980 language designers and users have not learned this lesson. In any respectable branch of engineering, failure to observe such elementary precautions would have long been against the law."

-- C.A.R. Hoare, "The 1980 ACM Turing Award Lecture"

Guess what programming language he is referring to by "1980 language designers and users have not learned this lesson".

GoblinSlayer | 1 year ago

I heard ALGOL had bounds checking sometime in the 60s as an implementation feature. Reportedly customers liked it a lot, since without the checks their programs just produced wrong results faster.

leecommamichael | 1 year ago

Yeah, I’m wondering what this even means. I’m assuming they’ll have to define “memory safety” which is already quite the task. Memory safe in what context? On what sort of machine? What sort of OS?

crabbone | 1 year ago

> On what sort of machine? What sort of OS?

Just sharing an anecdote: recently I had to create Linux images for x86 on an ARM machine using QEMU. In the process I discovered that, for example, creating the initrd fails because of the memory page size: some code makes an assumption about the page size and calculates the memory location to access, instead of using the system interface to discover it. There's a similar problem when using the "locate" utility, and probably in a bunch more programs that have been used successfully millions, well, probably trillions of times. This manifests as QEMU segfaulting when trying to perform these operations.

But, to answer the question: I think one way to define memory safety is to ensure that the language cannot do I/O on a memory address that wasn't obtained through a system interface. Not sure if this is too much to ask. It feels like this should be OK for application development; for systems development it obviously won't work (someone has to create the system interface that supplies valid memory addresses).

pjc50 | 1 year ago

I think the usual context just requires language soundness; it doesn't depend on having an MMU or anything like that. In particular, protection against:

- out-of-bounds on array read/write

- stack corruption such as overwriting the return address

It doesn't directly say "you can't use C", but achieving this level of soundness in C is quite hard (see seL4 and its Isabelle/HOL proof).