adreid's comments

adreid | 1 year ago | on: How to improve the RISC-V specification

Fixing the whole problem is a large task, but there is a small sequence of steps that can be taken that would improve things a lot.

Specify the key formats in a JSON/XML file, then go one by one through the tools, docs, etc., changing them to use the machine-readable file.
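
A minimal sketch of the idea, using nothing beyond the Rust standard library: one machine-readable description of an instruction format (hard-coded here as a Rust value, but it could just as well be parsed from a JSON/XML file) drives a generic encoder. The R-type field layout and the ADD opcode/funct values are from the published RISC-V ISA manual; everything else is illustrative.

```rust
// A field in an instruction encoding, as a tool-neutral description.
struct Field {
    name: &'static str,
    lo: u32,    // lowest bit position of the field
    width: u32, // field width in bits
}

// RISC-V R-type layout: funct7 | rs2 | rs1 | funct3 | rd | opcode
const R_TYPE: &[Field] = &[
    Field { name: "opcode", lo: 0, width: 7 },
    Field { name: "rd", lo: 7, width: 5 },
    Field { name: "funct3", lo: 12, width: 3 },
    Field { name: "rs1", lo: 15, width: 5 },
    Field { name: "rs2", lo: 20, width: 5 },
    Field { name: "funct7", lo: 25, width: 7 },
];

// A generic encoder driven entirely by the format description: an
// assembler, disassembler, test generator, etc. could all share it.
fn encode(format: &[Field], values: &[(&str, u32)]) -> u32 {
    let mut word = 0u32;
    for (name, v) in values {
        let f = format
            .iter()
            .find(|f| f.name == *name)
            .expect("unknown field");
        assert!(*v < (1u32 << f.width), "value out of range for field");
        word |= *v << f.lo;
    }
    word
}

fn main() {
    // add x3, x1, x2  (opcode 0b0110011, funct3 0, funct7 0)
    let word = encode(
        R_TYPE,
        &[
            ("opcode", 0b0110011),
            ("rd", 3),
            ("rs1", 1),
            ("rs2", 2),
            ("funct3", 0),
            ("funct7", 0),
        ],
    );
    assert_eq!(word, 0x002081b3);
    println!("{:#010x}", word);
}
```

The point is that only the data tables differ between tools; each consumer stops retyping the spec and starts reading it.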

adreid | 5 years ago | on: Rust testing or verifying: Why not both?

The difference is really about the level of annotation you provide. In testing (and the more automated verifiers) you might add assertions. In the "auto-active" tools, you add more: function contracts, invariants, maybe hints to help find a proof, etc.
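
A rough sketch of that difference on a toy function. Version (a) is the testing style: a runtime assertion that only fires on inputs you actually run. Version (b) is the auto-active style, shown as comments in the shape of Prusti's `#[requires]`/`#[ensures]` attributes (real Prusti syntax, though this sketch does not depend on the Prusti crate), which a verifier would prove for all inputs.

```rust
// (b) Auto-active style: a tool like Prusti proves these hold for ALL
//     inputs, statically. Shown as comments so the sketch stays
//     dependency-free.
// #[requires(divisor != 0)]
// #[ensures(result * divisor + dividend % divisor == dividend)]
fn checked_div(dividend: u32, divisor: u32) -> u32 {
    // (a) Testing style: the check only catches inputs we happen to run.
    assert!(divisor != 0, "precondition violated");
    dividend / divisor
}

fn main() {
    assert_eq!(checked_div(17, 5), 3);
}
```

Same property in both cases; what changes is whether it is checked per-execution or proved once.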

adreid | 5 years ago | on: Rust testing or verifying: Why not both?

BurntSushi's quickcheck only uses the type to generate values. Proptest lets you constrain the values more precisely. E.g. you can create a BTreeMap with up to 5 entries where the keys are some subset of ints (say) and the values are some subset of some other type. This can capture many of the constraints of a function's precondition and can be implemented efficiently in both fuzzers and verifiers.
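
A stdlib-only sketch of that kind of constrained generation. In real proptest you would write something like `proptest::collection::btree_map(0..100u32, any::<bool>(), 0..=5)` (illustrative, check the crate docs for exact syntax); here a tiny linear-congruential PRNG stands in for the fuzzer's randomness so the sketch is self-contained.

```rust
use std::collections::BTreeMap;

// A minimal deterministic PRNG (constants from Knuth's MMIX LCG),
// standing in for a fuzzer's random source.
struct Lcg(u64);

impl Lcg {
    fn next(&mut self) -> u64 {
        self.0 = self
            .0
            .wrapping_mul(6364136223846793005)
            .wrapping_add(1442695040888963407);
        self.0 >> 33
    }
}

// Generate a BTreeMap with up to 5 entries whose keys are drawn from
// 0..100 -- i.e., values that satisfy a precondition by construction,
// rather than generating from the whole type and filtering.
fn gen_map(rng: &mut Lcg) -> BTreeMap<u32, bool> {
    let len = (rng.next() % 6) as usize; // 0..=5 entries
    let mut m = BTreeMap::new();
    while m.len() < len {
        let key = (rng.next() % 100) as u32; // keys constrained to 0..100
        let value = rng.next() % 2 == 0;
        m.insert(key, value);
    }
    m
}

fn main() {
    let mut rng = Lcg(42);
    for _ in 0..1000 {
        let m = gen_map(&mut rng);
        // Every generated value satisfies the precondition.
        assert!(m.len() <= 5);
        assert!(m.keys().all(|&k| k < 100));
    }
}
```

The same by-construction generator can be reinterpreted symbolically by a verifier, which is why it pays to express preconditions this way.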

adreid | 5 years ago | on: Rust testing or verifying: Why not both?

This is more at the automatic end of the spectrum of verification tools, like KLEE, SMACK, etc. Auto-active tools like Prusti are super-interesting too and I suspect that we need a hybrid approach.

adreid | 5 years ago | on: Rust testing or verifying: Why not both?

That is the project we are trying to support with our work. Unfortunately, this means I have to solve lots of hard problems like scaling, usability, modular verification, etc.

adreid | 8 years ago | on: Are natural language specifications useful?

Yes, that is a large part of what I was saying.

Also, some things are so hard to specify formally that we still don't have any kind of formal spec. Memory concurrency semantics is an example. It is only in the last couple of years that we got a good spec of fixed-size memory accesses. Then Peter Sewell's group drops the bombshell that if you have mixed-size memory accesses then you can't make programs sequentially consistent even if you add a memory barrier after every single memory access. But we still don't know how to formally specify the memory orderings associated with atomic accesses, instruction fetches, page table walks or device accesses. So, until then, we use the best natural language definition we can and hope we will be able to formalise it soon.
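
For a feel of what those fixed-size specs have to decide, here is the classic "store buffering" litmus test as a Rust sketch. With sequentially consistent atomics the outcome where both loads return 0 is forbidden; with relaxed orderings (or on bare hardware without barriers) it is allowed. A memory-model spec exists precisely to draw that line.

```rust
use std::sync::atomic::{AtomicU32, Ordering};
use std::sync::Arc;
use std::thread;

// Store buffering: thread 1 writes X then reads Y; thread 2 writes Y
// then reads X. Under sequential consistency at least one thread must
// observe the other's write, so (0, 0) is impossible.
fn store_buffering() -> (u32, u32) {
    let x = Arc::new(AtomicU32::new(0));
    let y = Arc::new(AtomicU32::new(0));

    let (x1, y1) = (Arc::clone(&x), Arc::clone(&y));
    let t1 = thread::spawn(move || {
        x1.store(1, Ordering::SeqCst);
        y1.load(Ordering::SeqCst) // r1
    });

    let (x2, y2) = (Arc::clone(&x), Arc::clone(&y));
    let t2 = thread::spawn(move || {
        y2.store(1, Ordering::SeqCst);
        x2.load(Ordering::SeqCst) // r2
    });

    (t1.join().unwrap(), t2.join().unwrap())
}

fn main() {
    for _ in 0..200 {
        let (r1, r2) = store_buffering();
        // SeqCst rules out both loads seeing the old value.
        assert!(r1 == 1 || r2 == 1);
    }
}
```

The mixed-size result mentioned above is about what happens when accesses like these overlap at different widths, which is where even this style of litmus-test reasoning breaks down.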

Also, there are parts of the natural language spec that I had not seen any value in... until I started worrying about whether the spec itself was correct or could possibly be shown to be correct. And now that I do worry about that, I am seeing new value in those parts.

adreid | 8 years ago | on: Is it important to code away from work

Knowing more ways of tackling more problems makes you a better programmer. The more difference there is between what you do at home and what you do at work, the more you will learn. If at work you write C/C++/Java, learn Haskell at home. If you do GUIs at work, do low level coding at home. (And vice versa, of course.) Me, I play with SMT solvers at work and I build my own keyboards at home.

adreid | 9 years ago | on: ARM Releases Machine Readable Architecture Specification

There has been a steady industry of people retyping the manuals of all the major architectures for use in formal verification. At PLDI last year, there was even a paper where a team used synthesis techniques to automatically generate an instruction spec for x86 by generating tests for processors and using the answers to refine their current guess.

And within ARM, there were lots of people transcribing bits of the ARM manual into various forms: C/C++, LLVM .td files, Verilog, spreadsheets, etc. Apart from all the wasted effort, this also created a verification problem: each transcription had its own set of bugs that now had to be fixed. And it missed a verification opportunity: if everyone used the same master copy then each time one group spots a bug and reports it, the spec gets better for everybody. I talked about this virtuous cycle at S-REPLS last year: https://alastairreid.github.io/papers/srepls4-trustworthy.pd... (S-REPLS is a programming language seminar in SE England.)

adreid | 9 years ago | on: ARM Releases Machine Readable Architecture Specification

I agree - that should definitely be possible. Not necessarily easy, though - but if anybody is looking for a PhD topic or, better yet, if a whole research group is looking for a challenge, you have all the bits you need to make it happen.

adreid | 9 years ago | on: ARM Releases Machine Readable Architecture Specification

Probably best not to get too excited about generating hardware from the specs... That would compete with ARM's main source of revenue and my reading of the license is that that is the one thing you cannot do with the spec. But I am not a lawyer.

Or, on a more technical note, my feeling is that if that is what you wanted to do, then you would not start from the style of specification ARM has released.

adreid | 9 years ago | on: ARM Releases Machine Readable Architecture Specification

Yes, ARM's architecture licensees (companies allowed to design their own ARM-compatible processors) have had access to the v8-A specs for years. The new bit is making it available so that researchers, companies and individuals can use it in software projects.

adreid | 9 years ago | on: ARM Releases Machine Readable Architecture Specification

I've been working on making the specs more than just a way of killing trees since April/May 2011, so I have had time to do a few things with it.

I would say I have a love-hate relationship with XML - your feelings seem less ambivalent. One good thing about XML is that you can convert it to HTML. Have a look at ISA_v82A_A64_xml_00bet3.1/xhtml/index.html (in the AArch64 tarball) - I find this the easiest way to navigate the ARM specs because I can follow hyperlinks inside the ASL code that take me to the function definitions, and keep going until I have read everything about what the ADD instruction does. Then I switch to the LDR instruction and I can chase down memory accesses and see the code that handles big-endian configurations, unaligned accesses, page table walks, page fault handling, etc. Once you have used this, you will not go back to PDF.

adreid | 9 years ago | on: Engineers Create the First Dust-Sized Wireless Implantable Human Sensors

Since you have to feed it ultrasonic energy to power it, I suspect it is not too bad because, IIRC, ultrasonic has a short range.

You could feed it electromagnetic energy - but then all you really have is a smaller version of the Great Seal bug https://en.wikipedia.org/wiki/The_Thing_(listening_device). I wouldn't be surprised if those already exist in about this size and the paper linked to from the web page talks about several such devices (and how they are not as well suited for implantation).

adreid | 9 years ago | on: Engineers Create the First Dust-Sized Wireless Implantable Human Sensors

From reading (ok, skimming) the paper, it seems that the sensor is powered by an external ultrasonic transducer and, I think, returns data by modulating that signal (a bit like RFID). That means that you lose power when the ultrasonic is removed.

It seems that you could do a lot more with it if you added a battery and some memory so that it could record signals when the ultrasonic is not present. I am thinking of things like the Michigan Mote, which was also about a cubic millimetre. http://www.eecs.umich.edu/eecs/about/articles/2015/Worlds-Sm...

adreid | 9 years ago | on: Verifying ARM processors against the official ARM specification

It takes a lot of effort to write a specification. Just think how much software you have ever seen complete specifications for - and how close that number is to zero.

So if you are going to write a specification, you really want to get maximum bang per buck out of the effort. Which means that it has to be good for many purposes:

- documentation
- verifying processors
- dynamic verification (i.e., testing)
- formal verification
- testing test suites
- verifying application software
- dynamic and formal verification
- verifying OSes, RTOSes, microkernels and other bare-metal software
- verifying compilers
- generating machine readable descriptions

Verilog is great for one of these - but less good for all the others. Starting with a relatively neutral language gives you a lot more flexibility. It makes any individual task a bit harder than if that was your only goal but you save effort overall.

There is also the virtuous cycle that bug fixes and improvements that help any one goal are folded into the master copy - so all other sub-projects benefit.

adreid | 9 years ago | on: Verifying ARM processors against the official ARM specification

A few clarifications:

The paper is about searching for bugs in ARM's designs - not designs by ARM's architecture licensees. (ARM has two types of partner: those who use ARM designed processors exactly as provided by ARM and those who create their own processors implementing ARM's architecture.)

The tests are performed on the Verilog. There are very good and thorough techniques such as Logical Equivalence Checking (LEC) for verifying that what gets taped out actually matches the Verilog.

We used the v8-A architecture spec that provides Virtual Memory and other features you need to support Linux, Android, etc. This is the architecture that applies to phones, tablets, servers, etc.

And we used the v8-M architecture spec that applies to micro-controllers.

Also, it is worth saying that this is only one of many verification techniques used. This one focuses on the processor pipeline rather than the memory system, instruction fetch, etc.
