item 39250848

rozzie | 2 years ago

Beyond the 60-bit word size, programming the 6400/6500/6600/6700 was interesting and memorable in other ways.

- Ones' complement (rather than two's complement) binary representation of integers, and thus the need to cope with "-0" in your code. Modern programmers are surprised that there was a day when "-1" had a different binary representation than it does today.

- The CPU/CPUs were not actually 'in charge' of the machine. There were ten 12-bit processors called PPUs (peripheral processing units) which did all I/O, and which had the unique capability of doing an "Exchange Jump" instruction to do a CPU task switch. In a sense, the CPUs were 'compute peripherals' to the PPUs.

- The architecture was fascinating in terms of memory hierarchy. The "central memory" used by the CPUs was augmented by a much larger "extended memory" (ECS - Extended Core Storage) with block transfer primitives. One could implement high-scale systems (such as the one I worked on - PLATO) that smoothly staged data between CM, ECS, and disk.

In those days, there was a necessarily-direct relationship between the machine language (the bit encoding of instructions for operations & registers) and the assembly language (COMPASS). As a developer it was incredibly enjoyable because, in Ellen Ullman's words, you felt very 'close to the machine'.


klelatti | 2 years ago

Hi! Author of this short post here. Thanks so much for this comment. It's been 'on my list' to do a much longer post on Control Data and Seymour Cray for quite a while. This has convinced me to bump it up the list!

KerrAvon | 2 years ago

I’m a subscriber and I’d be happy to read as much of that as you want to write. There’s not that much biographical or technical history coverage of Control Data and Cray other than what’s in the not-very-technical Supermen book and anecdotes from one or two individual engineers' memoirs.

drpixie | 2 years ago

And the intriguing load/store scheme.

There were no LOAD or STORE instructions. Instead, there were "address" registers (18 bits wide), each matched to an "operand" (60-bit data) register.

When you updated an address register, that memory address was automatically read into the corresponding operand register. Except for the last couple of address registers: updating them performed a write from the corresponding operand register into memory.

By our current way of thinking, it seems arse about. But it worked well when you understood it, and apparently improved concurrency. Loads and stores became sort-of transparent. (Remember that memory was as fast as, and sometimes faster than, the CPU, so a few instructions saved was worth the occasional unnecessary load.)

See "Design of a Computer: The Control Data 6600" by J. E. Thornton.

ithkuil | 2 years ago

An alternative way to model that in an ISA is indirect addressing, coupled with the ability to use it when expressing operands in any instruction.

The models are isomorphic.

Write to an "address" register -> write to a register directly

Write to an "operand" register -> write to "(register)" (write to the memory at the address stored in the register)

Not sure which was the first architecture to model it that way. The PDP-11 had it.

Either way it costs one bit of instruction encoding: a bit to select direct or indirect access, or an extra register-number bit because you have twice the registers.

You can save that bit if you make most instructions able to access the "operand" register only, and require that manipulation of the "address" register use special instructions.

In that case you have an "inverse load/store" architecture: instead of using load/store instructions to do indirect access, you use them to do direct access.