THE HISTORY OF COMPUTER MEMORY
EARLY DATA STORAGE – PART 1, BY DR DAVID MADDISON
One of the most critical technological advances driving the widespread
adoption of computers has been smaller, faster, higher-capacity memory
chips. It didn’t start with semiconductors and ICs, though; memory has been
around in various forms for a long time. This two-part series will investigate
how it started and grew into what we have today.
MEMORY is one of the most important and commonly discussed elements of a computer. These days, computer memory size is measured in gigabytes or even terabytes.
While huge & cheap memory capacities are taken for granted, early computers had tiny memories because integrated circuit technology had not yet
been developed and storing even one
byte was expensive and complicated.
For one byte, eight zero or one values need to be stored, so something
had to be duplicated eight times. Without integrated circuits, whatever was
used to store that information was
expensive and big. Note that there are
and have been systems that use bytes
with fewer or more than eight bits, but
eight is the most common number.
In this two-part series, we will focus mainly on ‘primary memory’, the working memory of the computer used for temporary storage, rather than ‘secondary memory’ used for long-term storage, such as hard disks.
However, the distinction wasn’t
always clear in early computers, which
also lacked convenient input and output systems. Hence, we will discuss
technologies like punched cards and
paper tape that were used for both
primary and secondary storage. Secondary storage may be the subject of
another article.
Bits and bytes
One byte is the unit of digital information typically used to encode a character, such as a member of the ASCII-1977 character set, which includes the letters and numerals A-Z, a-z and 0-9, punctuation and special characters. ASCII
is a 7-bit encoding scheme that represents 128 printing and non-printing
characters.
“Extended ASCII” (not an official name) uses 8 bits and has 256 characters, with extended foreign language characters, symbols, line-drawing
characters etc. The exact set of symbols
depends on various proprietary implementations or standards like ISO/IEC
8859. That has largely been supplanted
now by Unicode (see panel). Still, with
this system, one character is stored in
one byte.
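As a rough illustration (this snippet is ours, not from the original article), the following Python sketch shows a 7-bit ASCII character fitting in one byte with the top bit clear, and a character from one example 8-bit code page (ISO/IEC 8859-1, ‘Latin-1’) using the same byte with the top bit set:

```python
# A 7-bit ASCII character fits in one byte with the top bit clear.
for ch in ("A", "z", "7"):
    print(f"{ch!r}: code {ord(ch)} = {ord(ch):08b}")

# ISO/IEC 8859-1 ("Latin-1") is one example of an 8-bit extended code page:
# 'é' is still stored as a single byte, but with the top bit set.
code = "é".encode("iso-8859-1")[0]
print(f"'é': code {code} = {code:08b}")
```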
Five bits is the minimum amount of storage necessary to represent the alphabet; however, with just five bits, all 26 letters can be represented in one case (upper or lower), but there is no room left over for the digits. So most 5-bit character codes could switch between a letters mode (LTRS) and a figures mode (FIGS), allowing about 60 letters, digits and other codes to be used – see Table 1.
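To make that LTRS/FIGS shifting concrete, here is a minimal Python sketch of a shift-state decoder. The five-bit values and the tiny code table are made up purely for illustration; they are not the real Baudot/ITA2 assignments.

```python
# Hypothetical 5-bit code table for illustration only (not the real ITA2 values).
LTRS_SHIFT, FIGS_SHIFT = 0b11111, 0b11011   # two of the 32 codes reserved as shifts
LETTERS = {0b00001: "A", 0b00010: "B", 0b00011: "C"}
FIGURES = {0b00001: "1", 0b00010: "2", 0b00011: "3"}

def decode(codes):
    """Decode a stream of 5-bit codes, tracking the LTRS/FIGS shift state."""
    table, out = LETTERS, []
    for code in codes:
        if code == LTRS_SHIFT:
            table = LETTERS
        elif code == FIGS_SHIFT:
            table = FIGURES
        else:
            out.append(table.get(code, "?"))
    return "".join(out)

# The same code 0b00001 means 'A' or '1' depending on the last shift character seen.
print(decode([0b00001, 0b00010, FIGS_SHIFT, 0b00001, 0b00010]))  # -> "AB12"
```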
Today, a byte usually consists of eight bits, each represented by 0 or 1, giving values in the range 0 (decimal) or 00000000 (binary) to 255 (decimal) or 11111111 (binary). However, past computers
have used fewer, such as 6-bit codes
to represent 64 characters. Other sizes are also used for addresses and number representation in modern CPUs and GPUs (graphics processing units), such as 16, 32, 64, 128 or 256 bits and beyond, but these architectures usually still group data in multiples of 8-bit bytes.
You might have noticed a discrepancy between the stated size of a disk
drive and the size reported by the computer’s operating system. That depends on whether the size is counted in decimal or binary.
One kilobyte is 1000 bytes in decimal notation or 1024 bytes (2^10) in binary notation, while one gigabyte is one billion bytes (1,000,000,000) in decimal notation or 1,073,741,824 bytes (2^30) in binary notation. This represents a difference of about 7.4% for gigabytes or 10% for terabytes.
It is done this way because, for a
computer, indexing into a large file is
much more easily done in power-of-two chunks (like 1024) than in decimal
sizes like 1000.
This discrepancy has resulted in
new terms such as kibibyte (KiB;
1024 bytes), mebibyte (MiB; 1,048,576
bytes), gibibyte (GiB; 1,073,741,824
bytes) etc. While it might seem more
confusing at the moment, the introduction of these terms is an attempt to
reduce confusion about memory sizes.
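As a quick check of those figures, this short Python calculation (ours, not from the article) compares the decimal (SI) and binary (IEC) unit sizes and shows why a nominally “2TB” drive is reported as roughly 1.8TiB:

```python
# Decimal (SI) units versus binary (IEC) units.
KB, MB, GB, TB = 10**3, 10**6, 10**9, 10**12
KiB, MiB, GiB, TiB = 2**10, 2**20, 2**30, 2**40

print(f"1 KiB = {KiB} bytes, 1 GiB = {GiB} bytes")
print(f"GiB vs GB difference: {(GiB / GB - 1) * 100:.1f}%")   # about 7.4%
print(f"TiB vs TB difference: {(TiB / TB - 1) * 100:.1f}%")   # about 10.0%

# Why a "2TB" drive shows up as about 1.82 units when counted in TiB:
print(f"2TB drive expressed in TiB: {2 * TB / TiB:.2f}")
```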
Memory devices
The idea of using some device to
input or store data or instructions of a
variable nature is not new and has its
origins in the form of punched paper
tape or cards, as follows:
1725 Weaving looms were controlled using paper tape ‘programs’
with punched holes, a system developed by Basile Bouchon of Lyon,
France.
1804 Joseph Marie Jacquard (also
of Lyon) developed a loom control system using punched cards.
1832 Semyon Korsakov (St Petersburg, Russia) proposed using punched
cards for information search and
retrieval.
1837 Charles Babbage (London,
UK) proposed using punched cards for
inputting data and instructions to his
never completed (by him) “Analytical
Engine”, the first ‘Turing-complete’
computer. It was mechanical rather than electronic since electronic technology was still in its infancy. It still contained all the elements of a modern computer – see Fig.1.

Table 1: patterns represented for a given number of bits
Bits – Number of patterns (2^bits)
1 – 2 (0 or 1)
2 – 4 (00, 01, 10 or 11)
3 – 8 (000, 001, 010, 011, 100, 101, 110 or 111)
4 – 16 (numerals 0...9 plus some punctuation)
5 – 32 (26 letters plus some punctuation)
6 – 64 (26 letters in two cases, ten digits, space & full stop)
7 – 128 (all ASCII characters)
8 – 256 (full code page or Unicode UTF-8)
16 – 65,536 (UTF-16)
32 – 4,294,967,296 (UTF-32)
64 – 1.84 × 10^19
128 – 3.40 × 10^38
256 – 1.16 × 10^77

Fig.1: punched cards were used as the memory for the first ‘Turing-complete’ computer, Charles Babbage’s Analytical Engine. The smaller cards specify the mathematical operations to be performed, while the larger cards hold numerical variables. Source: https://w.wiki/5xR7 (CC BY 2.0).
IBM punch(ed) cards
For over half a century, the world’s
most common medium for information
storage was the once-ubiquitous IBM
punched card. They have a fascinating
and long history, but we do not have
space to cover it all here, so we will
just mention the highlights.
Punched cards were not developed
for computers, which did not yet exist,
but for machines that tabulated data.
The “IBM card” originated with
Herman Hollerith (New York, USA) in
the 1880s and 1890s, who used them
in mechanical tabulating machines.
These electromechanical machines
were used to summarise information
encoded on punched cards, such as
census data (see Figs.2 & 3). Hollerith’s company eventually became part
of IBM, and the machine became a
core product.
The tabulating machine was not a computer, but it could perform some mathematical operations, group data and print results.

Fig.2: a replica of an 1890 model Hollerith punched card tabulating machine used to process data from the 1890 US Census. Source: https://w.wiki/5xR8 (CC BY 2.0).

The Unicode Standard
Unicode is an international character set with 149,186 characters and symbols (as of version 15.0) in current use. Before Unicode, every different language required a distinct ‘code page’, making mixing different languages virtually impossible and leading to much confusion. Unicode solves this by bringing all the characters needed for human languages together in one set.
Clearly, you can’t encode that many characters in a single byte. Therefore, in modern computer memory systems, characters are generally encoded as variable-length byte strings, providing backward compatibility with existing single-byte character sets like ASCII.
There are several valid Unicode encoding schemes. Probably the most common is UTF-8, where a Unicode character that’s also part of the ASCII set is encoded as a single byte with its top bit as 0. Other characters or symbols are encoded as multiple bytes (up to four), where the first byte has its top bit set to 1. Other encoding forms that are part of the standard include UTF-16 and UTF-32, optionally preceded by a byte order mark (BOM).
The first IBM card had 22 columns
and eight rows (punch positions); by
1900, they had 24 columns and 10
rows; and by the late 1920s, 45 columns and 12 rows. In 1928, a new
version of the card was introduced
with 80 columns and 10 rows – see
Fig.4 (they moved to 12 rows in 1930).
Those punched cards are the likely reason that early alphanumeric computer
monitors had 80 columns.
The cards measured 7-⅜ inches by
3-¼ inches or 187.3mm × 82.5mm.
These dimensions were the same as those of US paper currency from 1862 to 1923.
The IBM card had many incidental
uses besides computers; they were
often used for taking notes and making
dot points for presentations, as they
fitted the inside pocket of a suit jacket.
IBM was not the only manufacturer
of punched cards or equipment to
read and write them, but they became
known by that name.
People may laugh at punched cards
today but, like books, if stored correctly, the data will be readable with
the naked eye far into the future. However, data stored on CDs, magnetic
disks and the like may deteriorate over
time (disc rot) or become unreadable
due to a lack of software and hardware support.
Fig.4: an IBM card. The data encoded is one line of a FORTRAN program: “ 12 PIFRA=(A(JB,37)-A(JB,99))/A(JB,47) PUX 0430”. Source: https://w.wiki/4icp (CC BY 2.0).

Punched paper tape
Punched paper tape is conceptually
similar to cards but can be kept on long
rolls (sometimes formed into a loop)
rather than on individual cards. It was
invented in 1725 by Basile Bouchon
to control looms, but that was impractical at the time.
Like punched cards, punched paper
tape was used for various applications in the 19th and 20th centuries,
such as programmable looms, telegraphy systems, CNC machine tools
and computer data input and storage
from the 1940s (including military
code-breaking during WW2; see Fig.5)
through to the early 1970s.
Data stored on tape was also used
as read-only memory (ROM) for computers. Tougher versions of the tape for
industrial use were made with Mylar.
Like cards, paper tape has the advantage of being able to be read by eye
and is long-lasting if used and stored
correctly.
Fig.3: a Hollerith punched card from about 1895, the predecessor of the IBM card. Source: https://w.wiki/5xR9 (public domain).

Fig.5: paper tape as used on the WW2 Colossus Mk2 code-breaking computer in 1943. This computer had no internal memory storage (RAM), so the program tape had to be continuously read in a loop. Source: https://w.wiki/5xRA

Paper tape was usually 0.1mm thick and either 17.5mm wide (11/16th of an inch) for five-bit codes, or 25.4mm
wide (one inch) for 6-bit or more
codes. The hole spacing was 2.54mm
(1/10th of an inch) in both directions.
Sprocket holes were 1.2mm (0.046
inches) apart.
Paper tape could store 10 characters
per inch (25.4mm). A standard teletype roll was 1000 feet long (305m),
so it could store up to 120kbytes, but
most tapes were much shorter than
that as many contemporary computers couldn’t handle that much data.
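That 120-kbyte figure follows directly from the stated density, as this little calculation (ours) shows:

```python
# Rough capacity of a full 1000-foot teletype roll at 10 characters per inch.
chars_per_inch = 10
roll_length_feet = 1000
capacity_chars = chars_per_inch * roll_length_feet * 12   # 12 inches per foot
print(capacity_chars)                                     # 120,000 characters
print(capacity_chars / 1000, "kbytes (at one byte per character)")
```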
Several different encoding schemes
were used, starting with Baudot’s from
the 1870s. It was developed for telegraphs and used five holes (five bits).
In 1901, the Baudot scheme was
modified to create the Murray code
that included carriage return (CR)
and line feed (LF) – see Fig.6. Western
Union used that until the 1950s; they
modified it by adding control codes,
a space and a bell (BEL) symbol to
ring a bell.
1924 The Western Union code was
used by the International Telecommunications Union (CCIT) as the basis of
the International Telegraph Alphabet
No. 2 (ITA2), a version of which was
adopted by the USA and called TTY.
TTY was used until 1963. All of the
former systems used 5-bit codes, after
which 7-bit ASCII was adopted. There
were also some encoding schemes that
used six bits.
The IBM Selective Sequence Electronic Calculator was an electromechanical machine that operated from
1948 to 1952 – see Fig.7. It used uncut
IBM card stock to create tapes that
were 7.375 inches (18.73cm) wide
and the length of an IBM punched
card (joined end-to-end). Each of the
80 columns could contain a signed
19-digit number with parity bits plus
two rows for side sprockets.
The tape(s) typically contained large
mathematical tables; with multiple
readers and up to 36 tapes, they could
be searched in about one second. There
were another 30 readers for program
data. The rolls could be continuous
or looped; a full roll weighed 400lb
(181kg). About 400,000 characters
could be stored on the tapes.
The machine also used IBM punched
cards. It gave IBM excellent publicity
and was the basis for many interpretations of what a computer looked like.
Fig.6: the five-bit code implemented on paper tape. More common characters
use fewer holes. Source: https://savzen.wordpress.com/tag/baudot/
Fig.7: a retouched version of the famous photo of the IBM Selective
Sequence Electronic Calculator. The 181kg paper tape rolls on the
readers in the background were made of IBM card stock. Source: www.thedigitaltransformationpeople.com/channels/enabling-technologies/mainframes-can-be-cool/
Fig.8: circuit diagrams of the Eccles and Jordan flip-flop from their patent application.

Fig.9: the magnetic drum memory from a Swedish BESK computer, with a sample of much more compact core memory of unknown capacity above it. Source: https://w.wiki/5xRB (GNU FDL).

Beyond punched cards & tape

1918 William Eccles and Frank Jordan filed a patent entitled “Improvements in Ionic Relays” and received
British patent 148,582 in 1920 – see
Fig.8 & siliconchip.au/link/abhs
While not intended for computer
memory (electronic computers had not
yet been invented), it was to become
the basis of later computer memory. It
comprised two valves (vacuum tubes)
that could exist together in one of two
stable states.
It was originally called the Eccles–
Jordan trigger circuit, the trigger circuit or a multi-vibrator, but today it is
known as a flip-flop.
The ability for a flip-flop to exist in
either of two stable states representing
a 0 or 1 is the basis of some computer
memory today, such as SRAM (static
random-access memory, see the 1963
entry later) and CPU registers.
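The behaviour Eccles and Jordan obtained with two valves can be caricatured in software as a cross-coupled latch. This Python model is our simplification (a logic-level sketch, not a circuit simulation); it shows the two stable states and how a set or reset pulse flips between them:

```python
def sr_latch(s, r, q, q_bar):
    """One settling pass of a cross-coupled NOR latch (S and R active-high)."""
    for _ in range(4):                      # iterate until the feedback settles
        q, q_bar = int(not (r or q_bar)), int(not (s or q))
    return q, q_bar

q, q_bar = 0, 1                             # one of the two stable states
q, q_bar = sr_latch(1, 0, q, q_bar)         # set pulse -> stores a 1
print(q, q_bar)                             # 1 0
q, q_bar = sr_latch(0, 0, q, q_bar)         # inputs released -> state is held
print(q, q_bar)                             # 1 0
q, q_bar = sr_latch(0, 1, q, q_bar)         # reset pulse -> stores a 0
print(q, q_bar)                             # 0 1
```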
1932 Austrian Gustav Tauschek
invented magnetic drum memory in
1932, which became a widely used
form of primary computer memory
(‘RAM’) in the 1950s and 1960s. How
was this device invented before the
first programmable digital computer?
It was initially devised to record data
from punched card machines and then
was adopted for early computers.
Tauschek’s original device from
1932 had a capacity of 500,000 bits or
62.5kbytes. As the name implies, drum
memory consists of a drum coated
with magnetic material; several read
and write heads are mounted along
the length of the drum.
Drum memory initially displaced
CRT and delay line memory (see
below) because it was more reliable.
Magnetic core memory gradually
replaced drum memory for primary
storage.
Drum memory was also used for
secondary (semi-permanent) storage
and, in this role, drums were eventually replaced by floppy disk drives
starting in the early 1970s. One of the
latest known uses of drum memory is
in US Minuteman ICBM launch site
computers (until the mid-1990s).
Fig.9 shows drum memory from
the 1953 Swedish BESK computer
and magnetic core memory from the
same machine. The capacity of neither
device is known. The BESK computer
was used to create the first computer
animation; see the video titled “Rendering of a planned highway (1961) First realistic computer animation” at
https://youtu.be/oQMD7oufO4s
Fig.10: an overall view of the ABC computer (left) and details of its regenerative capacitor memory unit (right), showing only one disc of 30 and one drum of two. Source: www.researchgate.net/publication/242292661

1942 John Atanasoff and Clifford Berry built the little-known ABC (Atanasoff-Berry Computer) – see Fig.10. Some argue that this machine
is the first automatic electronic digital
computer; others dispute that because
it was not programmable and was not
Turing-complete. It was at least what
would today be called the first ‘arithmetic logic unit’ (ALU), now built into
all computers.
This was the first computer to use
regenerative capacitor drum memory,
not to be confused with Tauschek’s
drum memory mentioned above.
Regenerative capacitor memory uses
individual capacitors to store memory
bits. They are either charged or discharged to represent a 1 or 0. Because
capacitors discharge with time, they
constantly need to be ‘refreshed’,
much like some other forms of memory (such as DRAM, to be discussed
next month).
The ABC computer had two drums
that stored 1500 bits each (thirty 50-bit
numbers) which rotated at 60 RPM;
the capacitors were refreshed on every
rotation.
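A crude way to picture regenerative capacitor memory is a row of charge values that leak a little every ‘rotation’ and are snapped back to a full 1 or 0 by a refresh pass. The sketch below is ours, and the leakage figures are made up purely for illustration:

```python
# Toy model of regenerative (refreshed) capacitor memory.
LEAK_PER_ROTATION = 0.85      # arbitrary decay factor, for illustration only
THRESHOLD = 0.5

bits = [1, 0, 1, 1, 0]                      # data to store
charges = [float(b) for b in bits]

for rotation in range(10):
    charges = [c * LEAK_PER_ROTATION for c in charges]          # leakage each rotation
    charges = [1.0 if c > THRESHOLD else 0.0 for c in charges]  # refresh rewrites each cell

print([int(c) for c in charges])            # still [1, 0, 1, 1, 0]
# Without the refresh line, 0.85**5 is about 0.44, so the stored 1s would soon read as 0s.
```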
The ABC computer (if it is accepted
as such) was the first computing
machine to use flip-flop memory of
the type described above by Eccles
and Jordan. You can see a fascinating
video about how the ABC works on
YouTube – “The Atanasoff-Berry Computer In Operation” – https://youtu.be/YyxGIbtMS9E
1943 The British Colossus code-breaking computer is regarded as the world’s first programmable electronic digital computer (see Fig.5). It was the first
device universally accepted as a computer to use the flip-flop design from
Eccles and Jordan.
The flip-flops were implemented
with vacuum tubes as transistors had
not yet been invented. They were used
for counting and logical operations, as
the computer had no memory except
the paper tape loop mentioned earlier.
1945 The first programmable
general-purpose digital computer was
ENIAC, used for artillery calculations
by the US military. It started with 20
words of system memory, or about 80
bytes, in the form of accumulators.
Extra data was stored on IBM punched
cards; a 100-word magnetic core memory unit was added in 1953.
‘Words’ are of variable size for different computers. For ENIAC, a word
was ten binary-coded decimal digits
in length, at a time before eight-bit
bytes were standardised. Most modern
computers use 16-bit (two-byte), 32-bit
(four-byte) or 64-bit (eight-byte) words.
Fig.11: a 256-bit Selectron tube. Source: https://w.wiki/5xRC (GNU FDL).
Fig.12: how the Selectron tube worked. The arrows near the bottom indicate the
secondary emission of electrons that generate a pulse indicating a one-bit. In
contrast, the arrows higher up and to the right indicate no secondary emission
of electrons, indicating a zero-bit. Source: https://w.wiki/5xRD
The original 20-word ENIAC memory used flip-flops in the form of a pair
of triode valves. Ten flip-flops were
joined to form a decade ‘ring counter’,
capable of storing and adding numbers. A ring counter comprises a system of flip-flops and a shift register
with the output of the last flip-flop fed
to the first to make a ‘ring’.
A PM (p for positive and m for negative) counter circuit was also used to
store the sign of the number. One PM
counter and 10 ring counters made up
an accumulator.
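A decade ring counter can be pictured as ten flip-flops of which exactly one is ‘on’; each incoming pulse advances the ‘on’ position and a carry is produced when it wraps from 9 back to 0. This Python model is our own, greatly simplified illustration of the idea:

```python
class DecadeRingCounter:
    """Ten flip-flops in a ring; the position of the single 'on' stage is the digit."""
    def __init__(self):
        self.flops = [1] + [0] * 9          # stage 0 is on, i.e. the digit 0

    @property
    def digit(self):
        return self.flops.index(1)

    def pulse(self):
        """Advance one position; return 1 as a carry when wrapping past 9."""
        pos = self.digit
        self.flops[pos] = 0
        self.flops[(pos + 1) % 10] = 1
        return 1 if pos == 9 else 0

counter = DecadeRingCounter()
carry = 0
for _ in range(13):                         # feed in 13 pulses
    carry += counter.pulse()
print(counter.digit, carry)                 # 3 1  (13 = one carry plus digit 3)
```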
1946 Development work on the
Selectron tube (Fig.11 & Fig.12) was
started by Jan A. Rajchman at RCA.
This vacuum tube stored digital memory data in the form of electrostatic
charges, similar to the Williams-
Kilburn tube discussed next. The original design was for 4096 bits, but that
was too difficult to build, so a 256-bit
form was made.
The device was never a commercial success; both it and the Williams-
Kilburn were superseded by magnetic
core memory, which was more reliable,
cheaper and easier to manufacture.
Australia's electronics magazine
The basic principle of operation is
shown in Fig.12. Electrons are emitted
from the heated cathodes at the top
of the diagram, like an electron gun
but not a point source. Each cathode
is surrounded by four selection bars,
two each running in one direction
and two at right angles to those. The
selection bars adjacent to the cathode
corresponding to the selected bit are
activated to address a particular bit.
Electrons move from the cathode
through the collector plate and toward
the storage area, which consists of
eyelets (like those on some shoes but
much smaller) embedded in a sheet
of insulating mica with a metal backing called the writing plate. The eyelets are insulated by the mica sheet
but capacitively coupled to the writing plate.
By pulsing (or not) the writing plate
at the same time as electrons are moving toward the selected eyelet storage
location (as determined by the selection bars), the eyelet can either be
charged or not, thus ‘writing’ the data
to be stored.
If the pulse is the same potential
Fig.13: a Williams-Kilburn tube
from an IBM 701 at the Computer
History Museum in Mountain View,
California, USA. Source: https://w.wiki/5xRF (CC BY-SA 3.0).
Fig.14: data in the form of dots and
double dots written to a Williams-Kilburn CRT memory tube. The
double dots are because a second dot
has been drawn as part of the erase
process. Source: https://w.wiki/5xRE
as the collector plate, electrons will
pass through the collector plate and
charge the eyelet (downward-facing
arrows on the left of the diagram). If
the potential is the same as the cathode, electrons will be blocked and not
charge the eyelet. Thus, the eyelet can
be in one of two states.
For reading the data out, electrons
from the cathodes will either pass
through an eyelet or be inhibited from
passing through to the reading plate,
depending on its charge state. By
selecting an eyelet using the selection
bars and pulsing the reading plate, the
signal from the output grid will indicate whether it is charged.
After passing through the reading
plate, electrons go through holes in
a Faraday cage and strike a phosphor
screen. This causes the phosphor to
glow, indicating the contents of individual memory locations (the eyelets)
as well as passing secondary electrons
to the output grid.
For more information on how the
Selectron worked, see the website:
www.rcaselectron.com
1946 The Williams-Kilburn tube
was patented in the UK and US in late
1946, 1947 and 1949. It was the first
fully electronic (and thus high-speed)
memory, using a CRT (cathode ray
tube) for storage. The fact that CRTs
were used this way was mentioned
briefly in our article on Display Technology in the September 2022 issue
(page 18, middle column; siliconchip.au/Article/15458).
This type of memory was first used
to run a computer program in 1948.
Simply put, a Williams-Kilburn
tube (Fig.13) stores memory on a CRT
by writing a dot pattern representing the data to be stored (Fig.14). As
with any CRT, the image has a certain persistence but eventually fades
away. Therefore, it must constantly
be ‘refreshed’ by each bit being periodically read and re-written (similar
to DRAM).
A small charge of static electricity
appears above each dot which fades
over a fraction of a second. It is this
charge that gives the tube persistent
storage. So writing a ‘one’ to the display involves steering the electron
beam to a specific position and delivering electrons from the gun to allow
the charge to build up.
To write a zero, the charge at the dot
must be neutralised. This is done by
drawing a second adjacent dot (or line)
because a negative halo is generated
around each dot. This eliminates the
positive charge of the first dot nearby.
Reading the state of a bit is done
with the aid of a thin metal plate on
top of the viewing screen. The electron beam is steered to that location
and energised, just like writing a ‘one’.
If a ‘one’ was already present, there is
no change in the charge at that location, so no current flows through that
metal plate. But if there was previously
a ‘zero’, writing the ‘one’ will cause a
detectable current to flow.
The Williams-Kilburn tube was susceptible to external influences, mainly
from electric fields, so frequent adjustments were required for error-free
operation.
One notable use of the tube was in the IBM 701, IBM’s first electronic
digital computer from 1952. It had
72 3-inch Williams-Kilburn tubes,
each having a capacity of 1024 bits,
giving a total memory of 2048 words,
each having 36 bits. The memory
could optionally be expanded to
4096 words.
Another use was MANIAC I (Mathematical Analyzer Numerical Integrator and Automatic Computer Model
I; Fig.15) at the Los Alamos National
Laboratory, which used 40 2-inch
tubes to store 1024 40-bit numbers
for hydrogen bomb calculations and
it became fully operational in 1952.
1947 Frederick Viehe filed for US
patent 2,992,414 for magnetic core
memory (Fig.16) in 1947, although it
wasn’t awarded until much later, in
1961. He filed another related patent in
1962 (US3264713), awarded in 1966.
Magnetic core memory was the dominant form of computer memory from
about 1955 to 1975. Incredibly, Viehe
was a Los Angeles pavement inspector
who played with magnetics as a hobby;
he was not a professional scientist or
engineer. IBM eventually purchased
his patents.
Core memory uses tiny toroids of
magnetic material wired as simple
transformers. By passing a current
through wires that go through the
toroid, it can be magnetised in one
direction or the other, thus storing a bit
of information. A sense wire passing
through the core detects if the toroid
has changed state.
Reading the data (magnetic polarity) is a destructive process, causing
the bit to be set to zero. To read a bit
of data, an attempt is made to flip a
bit. Nothing happens if it is a zero; if
it is a one, the toroid changes polarity,
inducing a pulse in the sense line. The
information is retained even when the
power is turned off.
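The destructive read-out and the automatic write-back that follows it can be modelled in a few lines of Python. This is our sketch only; it ignores all the analogue detail of drive currents and sense amplifiers:

```python
class CoreMemory:
    """Toy magnetic-core array: reading a bit forces it to 0, so the controller
    must write the value back whenever it sensed a '1'."""
    def __init__(self, size):
        self.cores = [0] * size

    def write(self, addr, value):
        self.cores[addr] = value

    def read(self, addr):
        sensed = self.cores[addr]    # a flip (1 -> 0) induces a pulse on the sense wire
        self.cores[addr] = 0         # the read itself leaves the core holding 0
        if sensed:
            self.write(addr, 1)      # restore the destroyed bit
        return sensed

mem = CoreMemory(8)
mem.write(3, 1)
print(mem.read(3), mem.read(3))      # 1 1 - the value survives repeated reads
```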
A piece of magnetic core memory
is one of the most desirable items in
any collection of electronic ephemera. They are fine examples of delicate
manual construction and are almost
works of art.
Other claimants to this invention were An Wang (1949; US patent 2,708,722 awarded in 1955), Jan
Rajchman (1950) and Jay Forrester
(1951); there were many ‘intellectual
property’ disputes over it. In 1964,
IBM paid MIT (where Jay Forrester
worked) US$13 million for his patent, a substantial amount of money
at the time.
Core memory eventually obtained
a volumetric density of about 900
bits per litre, and the cost went down
from about $1 per bit to 1c per bit. The
beginning of the end for core memory
was when Intel introduced the 1103
DRAM IC in 1970, costing 1c per bit.
While core memory is obsolete,
computer memory is sometimes still
referred to as “core”. A file containing
the contents of memory from when
a program was running is still often
referred to as a “core dump”.
1947 J. P. Eckert and J. W. Mauchly
applied for US patent 2,629,827 for the
mercury delay line (and other forms of
delay line) in 1947, awarded in 1953.
The mercury delay line is a member
of various delay-line-based memory
devices. Delay line memories work
by sending acoustic, electrical or light
pulses, representing one bit, along a
path. When a pulse gets to the end
of the path, it has to be refreshed by
reshaping and amplifying it. It is then
recirculated.
Such memory is accessed by waiting
for the desired bit in the ‘pulse train’ to
arrive at the read mechanism at a predictable time. The memory capacity is
therefore determined by the length of
the mechanism, the length of pulses
and the speed of sound or similar in
the medium.
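The capacity therefore follows from the transit time and the pulse (bit) rate: it is simply how many pulses are ‘in flight’ at once. The figures in this sketch of ours are illustrative only, not those of any particular machine, apart from the speed of sound in mercury, which is roughly 1450m/s:

```python
# Bits stored in a delay line = number of pulses in flight at any instant.
length_m = 1.5                 # illustrative path length
speed_m_per_s = 1450.0         # roughly the speed of sound in mercury
bit_rate_hz = 1_000_000        # illustrative 1MHz pulse rate

transit_time_s = length_m / speed_m_per_s
capacity_bits = int(transit_time_s * bit_rate_hz)
print(f"transit time ~{transit_time_s * 1e6:.0f}us, capacity ~{capacity_bits} bits")
```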
Mercury metal, a liquid at room
temperature, was a common medium
used in early computers. The resulting devices had a memory capacity of
a few thousand bits. J. P. Eckert originally developed mercury delay lines
to reduce clutter in radar return signals during WW2.
Fig.15: the aptly-named MANIAC I computer from 1952. The boxes on top of
the main structure contain two-inch Williams-Kilburn CRTs used as memory.
Source: https://permalink.lanl.gov/object/tr?what=info:lanl-repo/lareport/LAUR-83-5073
Fig.16: a 64 × 64 bit (4096 bits) array of ferrite core memory from 1961. This
module measures 10.8cm × 10.8cm. The inset shows a detail of the ferrite cores
with two address lines per bit. Source: https://w.wiki/5xRG (CC BY 2.5).
Fig.17: the “hot box” containing
mercury delay line memory used in
Australia’s CSIRAC computer. It was
named that way because the delay
lines had to be kept at 40°C. Source:
https://collections.museumsvictoria.com.au/items/406411 (CC BY 4.0).
Mercury was used in the delay lines
because its acoustic impedance is
similar to that of piezoelectric quartz
acoustic transducers, thus minimising energy loss. The speed of sound is
also very high in mercury compared to
certain other media, meaning there is
less time to wait for a pulse to arrive.
Mercury delay lines were challenging to design due to the need to ensure
there were no stray reflections. They
were tricky to set up and maintain as
they required very tight tolerances.
The UNIVAC I computer mentioned
below was an early computer that used
mercury delay lines.
1949 CSIRAC was Australia’s first
programmable digital computer and
the fifth in the world. It is the oldest
preserved first-generation computer.
Its primary memory was a mercury
delay line with a capacity of 768 20-bit
words and a supplemental disk-like
device of 4096-word capacity.
Some of the delay lines were 10mm
in diameter, 150cm long and a pulse
took 960µs to go from one end to the
other (Fig.17). You can see the computer on display at Scienceworks in
Melbourne: siliconchip.au/link/abe2
1949 Jay Forrester had the idea
to use core memory on the US Navy
Whirlwind I computer; a 1024-word
core memory was installed in 1951,
replacing CRT memory.
1950 A US military version of the
ERA 1101 computer (later renamed
the UNIVAC 1101) was the first computer to store and run programs from
electronically accessible memory,
as opposed to instructions that were
hard-wired or read from tape or cards.
The military version was known as the
ERA Atlas.
1951 Magnetic tape drives on computers were first used on the UNIVAC
I computer (Fig.18). The drive unit
was the Remington Rand UNISERVO
I (Fig.19), which used half-inch wide
metal tape (12.7mm) in 1200ft (366m)
lengths. The metal tape and reels
weighed 25lbs (11.3kg). The tape had
six data channels plus one for parity
and another for timing, and had a density of 128 bits per inch.
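The parity channel simply stores one extra bit per character so that any single-bit error can be detected. Here is a minimal Python illustration of ours; it assumes even parity, and the actual UNISERVO convention is not stated here:

```python
def with_parity(char_bits):
    """Append an even-parity bit so the total number of 1s in the frame is even."""
    return char_bits + (sum(char_bits) % 2,)

def parity_ok(frame):
    return sum(frame) % 2 == 0

frame = with_parity((1, 0, 1, 1, 0, 0))      # six data channels plus one parity channel
print(frame, parity_ok(frame))               # (1, 0, 1, 1, 0, 0, 1) True

corrupted = (0,) + frame[1:]                 # flip one bit in storage or transit
print(parity_ok(corrupted))                  # False - the error is detected
```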
Each tape could hold 1,440,000
seven-bit characters. Later versions of
these drives used plastic Mylar tape,
which became the industry standard.
The IBM standard for information formatting on tape was widely adopted.
Fig.18: a bank of reel-to-reel tape drives (background) on a UNIVAC 1108 II computer from a 1965 UNIVAC sales brochure. Source: http://s3data.computerhistory.org/brochures/sperryrand.univac1108ii.1965.102646105.pdf

You can view an original UNIVAC I promotional video titled “Remington-Rand Presents the Univac” at https://youtu.be/j2fURxbdIZs
The Autumn 1964 issue of Martins
Bank (UK) magazine reported that
when data from one inch of paper tape
was transferred to magnetic tape, it
occupied 1/80th of an inch. The same
bank reported that paper tapes were
used for programming branch computer terminals as late as 1981 – see
siliconchip.au/link/abht
1952 The concept of ferroelectric
RAM was described in Dudley Buck’s
master’s thesis. Bell Telephone Laboratories conducted some experiments
on the concept in 1955, but it was not
commercially available until the 1980s
and 1990s (which will be described in
more detail next month).
Ferroelectricity is a property of certain materials with an electric polarisation state that can be reversed by
applying an electric field. The state
is kept even without the continued
application of the electric field. The
two states can be used to store binary
information.
1952 The IBM 726 magnetic tape unit was introduced alongside the IBM 701 computer. It was the first IBM device to use magnetic-particle-coated plastic tape for storage (see Fig.20 and visit siliconchip.au/link/abhu). It could
read or write 12,500 characters per
second and each tape had a capacity
of two million characters. The tape
was about half an inch (12.7mm)
wide and had six data tracks and a
parity track.
Fig.19: a promotional image of the UNISERVO I tape drive. Source: www.computer-history.info/Page4.dir/pages/Univac.dir/images/MagTapeDrive.jpg
The storage density was 100 bits
per inch, and tapes were up to 1200ft
(366m) long.
1953 The first transistor computer
originated at the University of Manchester. There were several experimental designs from 1953, culminating in a commercial design in 1956 by the Manchester company Metropolitan-Vickers, called the Metrovick 950. Only a small number were built.
Early transistor computers may have used valves for the clock and other functions. Possibly the first fully-transistorised computer was the Harwell
CADET from 1955, but there were
several other early claimants. Philco
shipped commercial transistor computers, the S-1000 and S-2000, in 1958.
The RCA 501 and the IBM 7070 are
also from 1958.
The TRADIC (for TRAnsistor DIgital Computer or TRansistorized Airborne DIgital Computer) was an early
US transistor-based computer used on
the B-52 bomber. It had 684 Bell Labs
Type 1734 Type A cartridge transistors
and 10,358 germanium point-contact
diodes. It also used one valve in the
power supply.
Early transistor computers used
drum memory or magnetic core memory, not transistor circuits as memory
elements. However, transistors were
used as registers for CPUs and amplifiers for magnetic core memory. Diodes
were used in arrays as a form of ROM
(read-only memory).
1955 The Konrad Zuse Z22 was
the first commercial computer to use
magnetic core memory (14 words of 38
bits) as well as magnetic drum memory (8192 38-bit words). It also used
paper tape and had 600 vacuum tubes.
1957 Bell Labs introduced Twistor
memory in 1957, first used in 1965. It
comprised a piece of magnetic tape
wrapped around a current-carrying
wire and was similar in operation to
magnetic core memory. It saw limited
use; however, the ideas were incorporated into bubble memory (described
next month).
1958 The Ferranti-Sirius magnetostrictive delay line was introduced
(see Fig.21). It used the magnetostrictive effect whereby a material changes
its shape in response to a magnetic
field. A long coil of magnetostrictive
material was fabricated, with an electromagnet at one end that induced a
torsional wave (twist) in the wire that
travelled down its length.
Fig.20: an IBM 726 magnetic tape unit, as used by the IBM 701 computer system. Source: https://johnclaudielectronics.tumblr.com/post/42914025003/

Fig.21: a magnetostrictive delay line. Source: https://w.wiki/5xRH (CC BY-SA 3.0).

Videos on punched tape storage
● A homemade paper tape reader: “Paper tape reader demo” at https://youtu.be/w7_9BmthB10
● Using paper tape with an Altair 8800, a microcomputer kit sold in 1974 and the first successful PC. The computer used in the demonstration is actually a modern clone. “Altair 8800 - Video #28 - High Speed Paper Tape Reader/Punch”: https://youtu.be/wALFrUd6Ttw
Electronics Australia’s EDUC-8 was published about the same time and also supported paper tape (see siliconchip.com.au/Shop/3/1816).

Such torsional waves were more
resistant to noise than the compressive
waves used in mercury delay lines.
A typical magnetostrictive delay line
in a package about 30 × 30cm could
hold about 1kbit of data. They were
used through the 1960s in computers, video display terminals and some
calculators.
1959 US patent 3,161,861 was filed
by Kenneth Olsen, awarded in 1964,
concerning magnetic core memory.
1962 CRAM (Card Random-Access
Memory) was introduced by NCR –
see Fig.22. It used cartridges containing 256 plastic cards with magnetic
coatings, which together could hold
5.5MB. The device was mechanically
complex but surprisingly successful,
and was an alternative to magnetic
tape until being surpassed by disk
drives.
1963 Robert Norman at Fairchild patented static RAM (SRAM; US patent 3,562,721). It was faster than magnetic core memory, and its logic circuitry required fewer components than that needed for other forms of memory. It was used by IBM.

Table 2: generations of computers and technology used
Generation – Technology – Approximate date range
1st – Valves – 1940 to 1956
2nd – Transistors – 1956 to 1963
3rd – Integrated circuits – 1964 to 1971
4th – Microprocessors – 1971 to present
5th – Artificial intelligence – Present and future
According to the patent, “This
invention provides a new switching
circuit, particularly designed for a
logic memory circuit, which achieves
a substantial reduction in the number
of components required.”
1964 The first 64-bit SRAM was
designed by John Schmidt at Fairchild.
1965 We don’t have a precise date
for the introduction of rope memory
(Fig.23), but we know it was used in
Apollo Guidance Computers by 1965.
Rope memory was a form of core memory with its physical configuration
altered to be much more compact than
regular core memory (due to the woven
core pattern), giving the higher storage
density required for spaceborne computers, but was read-only memory.
Fig.22: a CRAM device from an NCR product brochure. Source: http://archive.computerhistory.org/resources/text/NCR/NCR.CRAM.1960.102646240.pdf

It was about 18 times more compact than regular core memory. It was
used not only for storing data but also
computer programs. Its operation was
vastly more complicated than standard
core memory, with multiple wires and
bits per toroid and much larger toroids.
It is described in a video titled
“MIT Science Reporter—Computer
for Apollo (1965)” at https://youtu.be/ndvmFlg1WmE?t=1245
The process of making rope memory
for the Apollo computers can be seen
from 20:45 in that video.
There is also a video about restoring
an Apollo guidance computer, which
has more details of its operation, titled
“Apollo Guidance Computer Part 14:
Bringing up fixed rope memory” at
https://youtu.be/2qe4W_USweE
Brek Martin has made a core rope
memory simulator; the first video is at
https://youtu.be/c-t2qyHOs7Y
1965 The Fixed Resistor-Card
Memory was an experimental form of
punched card. Information was stored
by severing (or not) connections to an
array of resistors on a cardboard or
plastic card; it could be punched on
existing punch-card machines.
Next month
After 1965, silicon-based memory
started rapidly taking over from the
technologies described so far. The second and final part of this series next
month will pick up where this one left
off, explaining how the semiconductor revolution radically changed computer memory up to the present day.
If you haven’t already seen it, in
preparation for the upcoming part
two, you might want to read the series
of articles on IC Fabrication technology in the June, July and August 2022
issues. They tie in with the computer
memory technology revolution that came after 1965. SC
Fig.23: a test sample of core rope memory for the Apollo Guidance Computer.
Actual production examples were much more compact than this. Source:
https://w.wiki/5xRJ (CC BY-SA 3.0).