Fan belts only exist, briefly, in the intervals between stars
Reviewing the informative Turing’s Cathedral
Book review It's a full four years since it was published, but Reg contributor Geoffrey G Rochat has finally gotten around to reading George Dyson's worthy tome Turing’s Cathedral. He finds it's not just a Best Book list lurker, but something actually worth reading.
Ostensibly about the beginnings of computers, Turing’s Cathedral is chock-full of fantastic stuff about geniuses, near geniuses, damned fools (overlapping sets), Princeton, Brigadier General Mercer, hydrogen bombs, computerised weather forecasting, mathematical and electronic evolution, and the birth of the modern age. It begins thus:
In 1956, at the age of three, I was walking home with my father, physicist Freeman Dyson, from his office at the Institute for Advanced Study, when I found a broken fan belt lying in the road. I asked my father what it was.
"It’s a piece of the sun," he said.
My father was a field theorist, and protégé of Hans Bethe, former wartime leader of the Theoretical Division at Los Alamos, who, when accepting his Nobel Prize for discovering the carbon cycle that fuels the stars, explained that "stars have a life cycle much like animals. They get born, they grow, they go through a definitive internal development, and finally they die, to give back the material of which they are made so that new stars may live".
To an engineer, fan belts exist between the crankshaft and the water pump. To a physicist, fan belts exist, briefly, in the intervals between stars.
Marvellous. The book covers a wide field, and is full of facts and lore from the earliest days of computing, and the Institute for Advanced Study.
It’s devoted to the personalities involved. It concentrates on mathematician, physicist, inventor and polymath John von Neumann, of course, and his ideas on self-replicating machines, and also Klára von Neumann, who was not only Johnny’s wife, but also a brilliant programmer.
Also getting a well-earned mention are Edward Teller, Oswald Veblen, Stan Ulam and Julian Bigelow, the chief engineer of the IAS computer, who got utterly screwed by his IAS bosses, thereby setting the standard for all computer engineers to follow.
Left to right: Julian Bigelow, Herman Goldstine, J. Robert Oppenheimer, and John von Neumann at Princeton Institute for Advanced Study (Bigelow, own work)
The work provides many insights into the technologies involved in vacuum tube computers. For example, Chapter 7 is entitled “6J6”, which was the seven-pin miniature dual triode that was the logical workhorse of the machine.
6J6s were made in incredible quantities during WWII, and the IAS machine used them by the thousands as, by 1947, they were cheaply available as war surplus. But, being war surplus, they were somewhat dodgy in their characteristics, and the trick with the IAS was to get reliable performance out of questionable parts.
And, in a few precious sentences, written in passing, Dyson spills the beans on several tricks used in the IAS machine that I’ve been wondering about for years.
In 1955, IBM's RK Richards wrote Arithmetic Operations in Digital Computers, perhaps the earliest and best text on computers in the tube era. Richards taught computer engineering on the basis of modular logic design, the way it was done two decades later in the TTL era, rather than as circuit design.
He largely ignored circuit design in his 1955 book, leaving that for his Digital Computer Components and Circuits of 1957 (by which time he’d quit IBM and gone into consulting).
In the latter work he discusses many "systems of logic", by which he meant the circuit design philosophies and trade-offs of digital circuits made in different ways. For tube-based logic, the basic problem is that while the output plate circuit voltage of a gate swings from ground to perhaps 100 volts, the input control grid circuit voltage of a subsequent tube must swing from ground down to perhaps -20 volts. And it’s the level shifting that’s killer.
In conventional analog circuitry the level shifting is done by capacitive coupling. This works wonderfully well for audio amplifiers and radios and radars and television, as there one is dealing with more-or-less periodic signals whose average value is about zero – or can be processed as though they’re about zero and level shifted later, as DC restoration circuits in televisions do for video signals that get stomped on by superimposed sync pulses.
But digital circuits deal with largely aperiodic signals, which is not something that tube circuit designers were good at handling in the years during and after the last war.
One way out was to make digital circuits that ran, not on logic levels, but on logic pulses. Pulses pass through capacitors, and the capacitors take care of the level shifting problem. In this system of logic, logic signals are the presence or absence of pulses, and logic is performed on the coincidence, or lack thereof, of pulses.
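The pulse scheme can be sketched in a few lines. This is a toy model, not anything from the book: logic values are the presence or absence of a pulse in a clock slot, so an AND gate becomes pulse coincidence and an OR gate becomes pulse merging.

```python
# Toy model of pulse-coincidence logic: a logic "1" is the presence of a
# pulse in a given clock slot, a "0" is its absence. An AND gate is then
# pulse coincidence; an OR gate is pulse merging.

def coincidence(a, b):
    """AND: a pulse appears only where pulses on both inputs coincide."""
    return [x and y for x, y in zip(a, b)]

def merge(a, b):
    """OR: a pulse appears wherever either input carries one."""
    return [x or y for x, y in zip(a, b)]

# Two pulse trains, one slot per clock interval (1 = pulse present)
a = [1, 0, 1, 1, 0]
b = [1, 1, 0, 1, 0]

print(coincidence(a, b))  # [1, 0, 0, 1, 0]
print(merge(a, b))        # [1, 1, 1, 1, 0]
```

The catch, as the next paragraphs explain, is that real pulses passed through real capacitors don't stay as crisp as these idealised ones.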
This was how ENIAC and its ilk were designed, and strings of pulses can drive counters that perform arithmetic operations (albeit serially and slowly), so this would seem like a natural solution.
Unfortunately, pulses are delicate things, and when you pass them through capacitors you have to take great care to manage rise and fall times, otherwise they degrade to the point of not being able to trigger counters. That’s not helpful at all.
Also, pulse-based logic requires critical timing to make sure pulses meet (or not meet) when they’re supposed to. So, while pulse-based tube logic can be made to work, it’s a pain in the butt – particularly when you’re dealing with cheap war surplus components of doubtful provenance.
Resistor dividers are useful!
Bigelow, at IAS, bit the bullet and used a level-based system of tube logic, where the only pulses were a set of overall synchronizing clocks.
He handled the level shifting problem with resistor dividers. Resistor dividers are very lossy, something anathema to designers of analog tube circuitry, but Bigelow realised that in digital circuitry, which runs between cutoff and saturation, losses are immaterial up to a point.
And by getting rid of capacitors, Bigelow got rid of fickle, fragile, critical pulses, and was able to build a reliable computer with parts that others had thrown away.
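The divider trick is back-of-envelope arithmetic. The sketch below uses illustrative numbers (the actual IAS supply voltages and resistor values aren't given in the book): a divider runs from the driving plate down to an assumed negative supply, and the tap feeds the next grid.

```python
# Back-of-envelope resistor-divider level shift, with assumed values.
# The divider runs from the driving tube's plate down to a negative
# supply; the tap in the middle feeds the next tube's control grid.

def divider_tap(v_plate, v_neg, r_top, r_bottom):
    """Voltage at the tap of a divider from v_plate down to v_neg.
    r_top is plate-to-tap, r_bottom is tap-to-negative-supply."""
    return v_neg + (v_plate - v_neg) * r_bottom / (r_top + r_bottom)

V_NEG = -150.0                  # assumed negative supply rail
R_TOP, R_BOTTOM = 40e3, 60e3    # 40k over 60k: a 0.6 divider ratio

# Driving plate cut off (output high, ~+100 V) -> grid near 0 V: next tube on
print(divider_tap(100.0, V_NEG, R_TOP, R_BOTTOM))  # 0.0
# Driving plate conducting (output low, ~0 V) -> grid at -60 V: next tube off
print(divider_tap(0.0, V_NEG, R_TOP, R_BOTTOM))    # -60.0
```

The divider throws away most of the signal swing, but since the receiving tube only needs to be driven well past cutoff or well into conduction, the loss doesn't matter.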
Bigelow also hit upon a solution to the second problem with level-based tube logic. To make a tube-based flip-flop one cross-couples two inverters (housed in a single 6J6) in what is known as the Eccles-Jordan circuit. It is a single bit set/reset memory.
If driven through capacitive coupling it can be made to toggle, usually, and with a bit more work using 6AL5 dual diodes (remember, this was before semiconductors) as pulse diverters, can be made into something that looks, more or less, like the D-type flip-flop we know today.
The problem comes with that ‘usually’ and ‘more or less’. Capacitively-coupled flip-flops are sensitive to noise and, critically so, to the shapes and durations of the pulses fed to them. Getting a 40-bit register (the IAS machine word was 40 bits wide) to behave properly in a capacitively-coupled system of logic proved impractical.
So Bigelow invented the master-slave set/reset flip-flop. Picture two flip-flops, A and B, where the output of A feeds the input of B. Synchronising clock pulse 1 gates the logic preceding A into jamming long-settled data into A. After a settling interval, synchronising clock pulse 2 gates the logic preceding B into jamming the settled output of A into B. After another settling interval, new data may be jammed into A, and so forth.
This makes for a very stable operation, as Dyson points out, but it has two drawbacks. First, a level-based system of logic requires twice as many 6J6s as does a pulse-based one, although, in recompense, far fewer 6AL5s.
Second, the use of set/reset flip-flops, rather than D-type flip-flops, means that every flip-flop must be fed with both true and complement data, otherwise a flip-flop will either be permanently set or permanently reset. That’s a lotta circuits!
But there is an out, something that Dyson does not mention in his book. Assume that only true data is available, without its complement, and start with the same two flip-flops, A and B, from above. Initially both A and B are cleared. Synchronising clock pulse 1 gates the logic preceding A into jamming true data into A. If the data is asserted A is set, otherwise it stays cleared.
After a settling interval synchronising clock pulse 2 clears B. After a settling interval clock pulse 3 gates the logic preceding B into jamming the settled true output of A into B. If the data is asserted B is set, otherwise it stays cleared. After a settling interval synchronising clock pulse 4 clears A. Rinse and repeat, forever.
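The four-phase, true-data-only scheme just described can be sketched the same way. Again a toy model under the assumptions above: setting is data-driven, but clearing is its own clock phase, so no complement signal is ever needed.

```python
# Toy model of the four-phase, true-data-only scheme: clock 1 jams true
# data into A (set-only), clock 2 clears B, clock 3 jams A's output into
# B, clock 4 clears A. Clearing is a clock phase, not a data-driven
# reset, so only the true data signal is required.

class Bit:
    def __init__(self):
        self.q = 0

    def jam_true(self, data):
        if data:           # set-only: asserted data sets the bit,
            self.q = 1     # otherwise it stays cleared

    def clear(self):
        self.q = 0

def four_phase_cycle(a, b, data):
    a.jam_true(data)   # phase 1: jam true data into A
    b.clear()          # phase 2: clear B
    b.jam_true(a.q)    # phase 3: jam A's settled output into B
    a.clear()          # phase 4: clear A, ready for new data
    return b.q

a, b = Bit(), Bit()
print([four_phase_cycle(a, b, d) for d in [1, 0, 1, 1, 0]])  # [1, 0, 1, 1, 0]
```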
One might be tempted to get clever and "double up", making one clock pulse do more than one thing. The problem is the case where B feeds back to A without an intervening flip-flop, resulting in a set/reset conflict.
Rather than driving oneself into the frantic dithers making sure this can never happen (hey, we’re talking tube-based logic circuitry here anyway, folks ...), it’s a more reasonable approach to use four synchronising clock pulses and keep jamming and clearing as separate operations.
Four clock phases, folks. Not 2 or 3. By using a non-complementary level-based system of logic one can make a reasonable compromise between performance and complexity. Note, by the way, that Texas Instruments’ TMS9900 minicomputer-on-a-chip used a 4-phase clock. So, effectively, did Motorola’s MC6809. And I suspect there’s a very good reason for that. Ecclesiastes was right: regardless of one’s system of logic, there is nothing new under the sun.
A joy of Dyson’s book, then, is that, while it doesn’t go into gory engineering detail, it drops enough hints to the cognoscenti to enable them to figure it out – exercises left to the reader. Dyson does this not just for tube-based computers, but for Monte Carlo method mathematics, population dynamics and thermonuclear reactions. In the latter case the key word is “opacity”, and I betcha you need a pretty thick layer of FOGBANK to achieve that.
Unfortunately, toward the end of the book Dyson becomes vague, predictive and prescriptive, and adopts his sister Esther’s ideas on how human evolution is going to make us all cyborgs, and all the world is going to look and act like the inside of Apple’s new spaceship headquarters. As though these are good things. Clearly, Dyson, who lives in the Pacific Northwest, really needs to get his mindset outside of Silly Con Valley and breathe the air of reality, out here in what is known as The Real World.
Indeed, it would be very handy if he’d bring some of his aetherial compadres with him, as they’re all desperately in need of a dose of reality. But I’d rather that they, once schooled, go back to Silly Con Valley so that us normal folks don’t have to put up with their incessant self-regard.
Other than that, though, the book is a great read, well written, and I recommend it to you all. ®