D16 Frequently Asked Questions
Cool. So what's a minicomputer?
"Minicomputer"
is
a subjective term for a machine significantly smaller and
less expensive
than the traditional large computer or "mainframe," but
which has
been implemented with the same circuit technologies (such
as discrete
transistors, or with small- and medium-scale integrated
circuits). It
is further characterized by having a short
word length; usually no more than 16 or 18 bits.
Several firms pioneered small computers at about the same time, around 1960: Control Data Corporation (CDC) with its 12-bit Model 160, IBM with the decimal Models 1401 and 1620, and Digital Equipment Corporation (DEC) with the 18-bit PDP-1. But the first true minicomputer is generally acknowledged to be the DEC PDP-8, introduced in 1965. The PDP-8 was a 12-bit discrete-transistor machine small enough that it could actually fit on a large desktop.
The early 1970s saw the introduction of the microcomputer, whose entire CPU was typically contained in a single large-scale integrated circuit. At that time, there was still a clear downward progression in size, capability, and price from the mainframe through the minicomputer to the microcomputer.

Well, microcomputers have grown increasingly powerful, and while they are still physically small, today's units often significantly outperform even the largest mainframes of just a few years ago! Just about all new small computers are micros, and the traditional minicomputer is a thing of the past.
Why design your own computer at all?
I think of developing a CPU as something like the "graduate school" of digital design, and it was a challenge I just couldn't resist (to say nothing of its potential for fun).
Why did you design the D16 like you did? What is the philosophy behind it?
I appreciated the simplicity of early minicomputers such as the DEC PDP-8 and the Data General Nova, and early microprocessors like the Intel 8080, the Zilog Z-80, and the Motorola 6800. I regretted that when the first 16-bit micros emerged, they had begun to adopt what I felt to be unnecessary complexities, such as large register sets, segmented addressing, and software "privilege levels." Now, of course, most all microprocessors are possessed of an almost mind-numbing complexity, and it can require months to gain even a passing familiarity with them. You can literally make a career of the Intel Core!
I realize that some measure of complexity is necessary to achieve the highest performance, but I wanted a simple, orthogonal 16-bit computer to play with. Since I am partial to the single-address architecture, it seemed appropriate for the D16.
The D16 certainly has spiritual influences of the early DEC and Data General minicomputers (witness the machine's outrageous "lights and switches" front panel), but its architecture actually more closely resembles those of the early microprocessors. It is rather more sophisticated than the PDP-8 (it has a stack pointer and indexed addressing, for example), yet decidedly less so than the PDP-11 or the Motorola M68000 (powerful 16-bit machines with general-register architectures).
The D16 is a straight 16-bit machine. Why didn't you make it capable of addressing individual memory bytes, like the Motorola M68000 and most other processors having word lengths longer than a byte?
Again, I desired simplicity. A D16 address, having 16 bits, is the same length as a D16 data word. Byte addressing would have required an effective 17-bit address for the same memory size, and the nuisances of boundaries, alignment, and "endianness."
I have added three "byte-oriented" instructions for the convenience of the programmer: BSW (Byte Swap Accumulator), BSL (Byte Shift Left Accumulator), and BSR (Byte Shift Right Accumulator). These instructions represent the only concessions to 8-bit data in the machine!
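For illustration, here is a rough C model of what these three instructions do to a 16-bit accumulator, assuming they behave just as their names suggest (the exact D16 semantics, such as any effect on condition flags, may differ):

```c
#include <stdint.h>

/* A rough C model of the three byte-oriented D16 instructions, assuming they
   operate on a 16-bit accumulator exactly as their names suggest.  The actual
   D16 behavior (e.g., any interaction with condition flags) may differ. */

static uint16_t bsw(uint16_t ac) { return (uint16_t)((ac << 8) | (ac >> 8)); } /* Byte Swap        */
static uint16_t bsl(uint16_t ac) { return (uint16_t)(ac << 8); }               /* Byte Shift Left  */
static uint16_t bsr(uint16_t ac) { return (uint16_t)(ac >> 8); }               /* Byte Shift Right */

/* Example: packing two 8-bit values into one 16-bit word, then unpacking them. */
int main(void)
{
    uint16_t packed = (uint16_t)(bsl(0x12) | 0x34);  /* 0x1234 */
    uint16_t high   = bsr(packed);                   /* 0x0012 */
    uint16_t low    = packed & 0x00FF;               /* 0x0034 */
    (void)high; (void)low;
    return 0;
}
```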
64K words seems awfully small; why didn't you use a longer memory address?
Again, my sense of "orthogonality" demanded that addresses and data words be of the same length.
Second, 64K is "small" only by today's inflated standards. Twenty-five years ago it was utterly enormous, and many actual minicomputer systems were equipped with no more than 8 or 16K. Writing programs in assembly language, it's difficult for me to imagine that I'll ever need more memory than I have.
I could always build some sort of memory-management unit, like the extension board that DEC made for the PDP-8 (which, without it, could only address 4K words of core!).
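As a sketch only (this is not part of the D16 as built, and the names and widths below are purely illustrative), such a PDP-8-style extension amounts to combining a small external bank register with the CPU's 16-bit address:

```c
#include <stdint.h>

/* Hypothetical sketch of a PDP-8-style memory-extension scheme applied to a
   16-bit machine: an external bank register supplies extra high-order address
   bits, so the CPU still works with 16-bit addresses while physical memory can
   be larger.  Not part of the actual D16; widths here are illustrative only. */

#define BANK_BITS 3   /* e.g., 8 banks of 64K words = 512K words total */

static uint32_t physical_address(uint8_t bank, uint16_t cpu_address)
{
    uint32_t field = (uint32_t)(bank & ((1u << BANK_BITS) - 1u));
    return (field << 16) | cpu_address;   /* bank bits become address bits 16..18 */
}
```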
The D16 is obviously a "retro" computer. Why didn't you implement it with real ferrite-core memory?
Oh, I wanted to...
Seriously, while the architecture of the D16 (or of any machine I could practically build at home) seems somewhat retrograde, it's designed around many modern components. HCMOS logic itself, the 32K x 8 bit RAM IC's I have used in the main memory, and the 256K EPROMs used in the microprogram store, are all of relatively recent vintage.
(I have since obtained some DEC PDP-11 core memories I hope to incorporate into an external add-on unit some day).
Why did you use SSI and MSI logic for the D16? Why didn't you use a field-programmable gate array?
I have seen the future of digital logic, and it is programmable gate arrays...
The PGA is an amazing device. Designs of vast size and complexity may often be crammed into a single, relatively inexpensive IC. There are practical difficulties, though. Since the circuit is inside the IC, it is impossible to reach any node in the circuit at will, and so one cannot generally debug PGA designs in the traditional manner with 'scope probe in hand. Instead, one typically verifies the design by means of computer simulations.
Well, I hate programming computer simulations. Also, truth be told, I didn't want my homebrew processor to be "software." Ever since I first contemplated building my own machine, decades ago, my conception has been of a unit implemented the way they did it in the heyday of the minicomputer--with SSI and MSI integrated logic.
There is one programmable logic device on the board: a small Altera 5000-series EPLD that contains "debouncing" circuits associated with the front-panel controls. It is not a component of the D16 CPU.
Why did you implement the D16 control unit as microprogrammed, rather than with (faster) random logic?
In fact, the D16 began with a random-logic control unit. But after I had designed it to the last gate, I found that it would not fit on the Augat wire-wrap board I had on hand.
I figured that I could reduce the chip-count considerably by going to a microprogrammed control unit, and so I re-designed the machine; as I worked, I decided that I preferred microprogramming anyway. The irony is that when I was done, the new design didn't fit the board either!
Some time later, I chanced upon a larger Augat board, and the new microprogrammed design fit it with room to spare.
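For readers unfamiliar with the technique, here is a generic C sketch of how a microprogrammed control unit works (purely illustrative, and not the D16's actual microword format): each microword combines control-signal fields with next-address information, and a sequencer steps through the control store.

```c
#include <stdint.h>

/* Generic illustration of microprogrammed control -- NOT the D16's microword
   format.  The control store is simply a memory indexed by the microprogram
   counter; each microword's fields drive the data path for one clock. */

typedef struct {
    uint8_t  alu_op;        /* which ALU function to select            */
    uint8_t  reg_select;    /* which register drives or loads the bus  */
    uint8_t  mem_cycle;     /* 0 = none, 1 = read, 2 = write           */
    uint8_t  branch_cond;   /* 0 = fall through, else a condition test */
    uint16_t next_address;  /* control-store target taken on a branch  */
} microinstruction;

#define CONTROL_STORE_SIZE 1024
static microinstruction control_store[CONTROL_STORE_SIZE];

/* One control-unit step: fetch the microword, let its fields steer the data
   path (elided here), then compute the next microprogram-counter value. */
static uint16_t micro_step(uint16_t upc, int condition_true)
{
    microinstruction mi = control_store[upc];
    /* ...control-signal fields would drive the data path here... */
    if (mi.branch_cond != 0 && condition_true)
        return mi.next_address;
    return (uint16_t)((upc + 1) % CONTROL_STORE_SIZE);
}
```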
Why did you use EPROMs for the D16 microprogram store? Aren't they really slow?
I used EPROMs because they made the machine easy to design and debug. They are also inexpensive. There is no question, however, that their (relatively) slow speed--their access time is 70 nanoseconds--represents the biggest performance bottleneck in the machine as built.
Someday, I may replace the EPROMs either with link-programmable bipolar PROMs (a much faster approach, but one which will necessitate some additional address-mapping logic) or with fast SRAM (creating a writable control store that would be loaded on power-up, and which would permit microcode changes on-the-fly).
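The writable-control-store idea amounts to copying the microcode image from slow non-volatile storage into fast SRAM at power-up, after which the sequencer reads the SRAM instead. A minimal sketch, with purely hypothetical names and sizes:

```c
#include <stdint.h>
#include <stddef.h>

/* Sketch of a writable control store: at reset, copy the microcode image from
   slow non-volatile storage into fast SRAM, which then serves as the control
   store.  All names and sizes here are illustrative, not the D16's. */

#define MICROWORDS      1024u
#define BYTES_PER_UWORD 6u     /* e.g., a 48-bit microword */

static void load_control_store(const uint8_t *eprom_image,
                               volatile uint8_t *control_sram)
{
    for (size_t i = 0; i < MICROWORDS * BYTES_PER_UWORD; i++)
        control_sram[i] = eprom_image[i];   /* one-time copy at power-up */
    /* Afterward, individual microwords could also be patched "on the fly". */
}
```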
Why did you use HCMOS logic instead of TTL? And, why are some of the components ACMOS or F TTL rather than HCMOS?
HCMOS, a newer logic family, significantly outperforms standard TTL and LS TTL. It is faster, consumes much less power, and is even less expensive (2003). Also, its outputs swing all the way from ground to the positive supply rail, significantly improving noise margin in the circuits.
Unfortunately, not all of the traditional "7400" series logic functions have been made available in HCMOS. As far as I know, there have never been HCMOS versions of the '381 and '382 arithmetic/logic units or of the '169 up/down binary counter; hence my use of the 74F381, 74F382, and 74AC169 instead. And, wherever TTL-level logic outputs drive CMOS inputs, as in the ALU and in the microprogram ROM circuits, I used HCT parts (also HCMOS, but which have TTL-compatible inputs).
The frequency of the crystal oscillator is not specified on the schematic. What's the clock speed?
I left out that detail because I figured that I would be experimenting a lot! Right now (04/2005) I am using an 8 MHz oscillator; since the clock-extend state machine (which doubles the clock period for memory and I/O cycles) divides that by two, the actual processor clock speed for most cycles is 4 MHz. For memory and I/O cycles it is 2 MHz.
What about software? Do you have an assembler, or do you have to do it all by hand, "programming on the bare metal?"
I found a product called the "Cross-32 Meta-Assembler," available from Universal Cross Assemblers.
I highly recommend Cross-32 for anyone attempting to build his own computer. It will also work with existing mini/microprocessors, and over 50 pre-prepared tables are included with the program as "standard equipment."
A couple of friends, Loren Blaney and Rich Ottosen, have written a keyboard monitor for the D16 which executes out of ROM, and which allows downloading of cross-assembled programs from the same PC that I use as a terminal. Loren has even written a chess program!
In June of 2005, Loren ported XPL0 (a real high-level language, derived from Pascal) to the D16. It compiles to D16 assembly language, and has made it possible for me to run a large number of available application programs and games. For more information on XPL0, see Loren's XPL0 Page (in Links).
Bill Buzbee, the designer of the Magic-1 homebrew computer, has ported the C programming language to the D16. He discusses the details on the “My Other Projects” page of his Web site (see Links).