Vintage Electronics by Don Peterson

MicroBee 256TC Restoration

This article documents my restoration of a nearly 40-year-old computer, a MicroBee 256TC. It was the last of the original MicroBee computers and incorporated several updates since the original kit version from 1982, including a faster processor, more RAM and a colour video display. I ran into significant challenges restoring it but I overcame most of them!
The MicroBee 256TC computer was released by Applied Technology (NSW) in 1987. It incorporated a Z80 8-bit processor running at 3.375MHz, 256KiB of RAM and onboard colour graphics. Like many computers of the era, it was built into a plastic case with the keyboard (see the lead photo).
I still remember the excitement of
assembling my original MicroBee kit
back in 1982. I used it extensively
over the next few years for both learning and fun. I still have that machine
in working condition, so when I saw
a rare 256TC on eBay, it appealed to
me as an example of the other end of
the MicroBee era.
The unit was advertised as not working, apparently due to a power supply
fault, but it looked in good condition
otherwise. Importantly for this model, the RTC (real-time clock) battery was reported to be in good condition with no leakage. I didn’t think a power supply problem would present too much of a challenge to repair, so I bought it.

Photo 1: the 12V to 5V DC switching power supply. Both the tantalum capacitor (at lower left) and four-way connector were damaged.
The new machine arrived unscathed
a few days later. The case was in good
condition – slightly yellowed, but
no more than you would expect for
a computer of this vintage and with
virtually no surface marks or damage. The keyboard appeared to be
brand new.
The case was held together with a
variety of mismatched screws. Removing the lid revealed a pair of Chinon
F-354L 3.5-inch floppy drives. One
drive was floating freely, but the other
was screwed to the aluminium mounting bracket as intended. Both drives
looked fine and undamaged.
Also attached to the drive mounting bracket was the standard 12V DC
to 5V switching power supply, which
looked to be in reasonable condition
except that there was half a charred
tantalum capacitor where a whole one
should have been (Photo 1). The four-way mainboard connector socket also
appeared to have badly overheated at
some point and was missing most of
one contact internally.
The keyboard wasn’t plugged in, so
I lifted it away to reveal the rest of the
mainboard.
The board was a bit dusty, but all the
bits appeared to be in the right places.
The two EPROMs even boasted stylish custom duct-tape window covers
(Photo 2). A close look at the battery
area showed clear evidence of past
leakage and widespread corrosion as
a result (Photo 3).
The battery leakage and associated
fumes had a few different effects, the
worst of which was the corrosion of
PCB tracks and pads. The solder mask
seems to have done a good job of generally protecting the tracks, but corrosion had definitely set in where the
mask was missing around component
pads or vias.
Some tracks close to the battery
area had also turned black and were
difficult to see, so the mask obviously
hadn’t protected everything.
Any exposed metal around the battery had also corroded, including component leads, IC sockets and the keyboard connector contacts.
The white component silkscreen
had detached from the PCB, and some
of it seemed to have floated away, ending up in odd random places around
the board. The identification labels on
the top of many ICs around the battery had faded, some so badly as to
be unreadable.
The machine had some good points
(case, keyboard, floppy drives), and all
the major parts seemed to be there, but
the mainboard and component damage did not look encouraging. Could
it be repaired?
Photo 2 (above): the MicroBee 256TC mainboard before any work had been done on it.

Photo 3 (left): a close-up of the battery section of the above PCB. There was evidence of battery leakage and corrosion of the PCB tracks and pads.
Cleaning up the mess
A few days later, I decided I might as
well remove the battery and clean up
the immediate area to see how bad the
damage was. I also did some research
online about how best to deal with the
leaked NiCad electrolyte.
Unfortunately, 50% of the hits said
that the battery residue is alkaline and
to neutralise it with a weak acid solution (eg, vinegar), while the other 50%
said it’s acidic and to use a weak alkali
instead (eg, bicarb). It can’t be both!
In the end, I reasoned that since the
NiCad electrolyte is an alkali (potassium hydroxide), the best course was
to use a vinegar solution initially and
then just run lots of plain water over
the whole area to remove anything that
was left, including the vinegar. To do
that properly, I’d need to remove the
components; otherwise, there would
be no proper way to clean the PCB
underneath.
I removed all the socketed ICs to get
them out of the way and checked them
for damage at the same time. Most
were located far enough away from
the battery to be unscathed, including
both PAL (programmable array logic)
chips, which would have been tricky
to replace.
The keyboard microcontroller
(M3870) and the Screen and Attribute
RAMs (TMM2015BP) are located close
to the battery and did show some surface corrosion on the adjacent pins.
Still, their sockets had taken most of
the damage, and the ICs themselves
looked like they would probably be
reusable.
Sadly, the Colour RAM and RTC
(146818) were not socketed, and since
they were also closest to the battery,
they were both write-offs. It seemed
odd to me at the time that these two
ICs would not be socketed, particularly when the other adjacent Screen
and Attribute RAMs were, but it
wasn’t until much later that I found
the reason for that when it came back
to bite me.
Next, I removed the battery and
some other nearby components I didn’t
want to get wet. The board looked even
worse with the battery gone.
I decided on a 1:2 ratio of vinegar:water and used cotton buds dipped
in that mix to liberally wipe over the
whole area several times, removing
as much of the visible battery residue
as possible. I wore latex gloves for
this part, as it was pretty messy, and
I wanted to avoid contact with anything nasty.
When the buds finally started coming away relatively clean, I ran lots of
cold water over the affected area, using
a small brush to get into all the nooks
and crannies. Much of the silkscreen
in the affected area wasn’t bonded
to the PCB anymore and was simply
washed away.
This turned out to be a useful way
of gauging how far across the board
the damage had spread – whenever I
reached an area of the board where the
silkscreen wasn’t detached, I figured I
was at the damage boundary and could
stop. I then sat the board in the sun for
a couple of hours to dry out.
It was looking a bit better now, and
I could see the extent of the track
damage. The worst area was within a
50mm radius from the battery. Numerous tracks there had gone black, presumably indicating corrosion underneath the solder mask. Surprisingly,
many of those tracks still measured
OK, but who knows how long that
would last.
Other tracks measured as open circuits, and a bit of digging showed that
most of those had corroded through
at a component pad or via, ie, where
the solder mask wasn’t present. There
was also widespread corrosion of component leads in the same area, and at
least one of the keyboard connectors
was a lovely shade of green internally,
an obvious write-off.
When I eventually downed tools for
the day, I was seriously thinking that
this machine was beyond repair.
I suspect I was visited by the Obsession Fairy again because by the next
morning my mind was made up that
this was now my very own MicroBee
256TC and I would fix it, no matter
what!
I set about removing all the damaged
components, working outwards from
the worst affected area. I’m lucky to
have a good desoldering station, which
usually makes this sort of job reasonably straightforward. However, I was
finding that while most joints desoldered OK, there were always a few on
each IC that would not desolder no
matter how many times I tried.
The problem seemed to be that I
couldn’t get enough heat into the joint
to fully melt the solder. This is a four-layer PCB, and I suspect that the inner
layers consist of large areas of copper
for power distribution and/or shielding that sink much of the heat if you
try to desolder a pin tied to either VCC
or GND.
I decided that I would have to cut all
the pins from each IC, then heat each
pin individually with the iron and pull
it out using pliers rather than trying to
desolder all the pins and extract each
IC intact. This method worked better,
but perseverance and even heating
the pin alternately from each side was
sometimes needed to extract the more
difficult ones.
Photo 4: after removing the ICs, I cleaned up the PCB using rags and methylated
spirits. The job wasn’t finished but it was a good start.
Once each IC was removed, I could
do a better job of cleaning the board
underneath using a rag and metho
(see Photo 4).
I also decided to replace all of the
original IC sockets, no matter where
they were located on the board, as
well as many of the connectors and
all the unsealed variable resistors and
capacitors, which may have had internal corrosion that I couldn’t see from
the outside.
The final task was to clear each hole
of solder, ready for the installation of
replacement components. The method
I used for this was to stand the board
vertically and apply heat to each hole
from both sides simultaneously – the
soldering iron on one side and the
desoldering iron on the other.
This conquered the heatsinking
problem nicely, and after a couple of
seconds, a quick flick of the desoldering pump trigger would generally be
enough to clear even the most problematic hole. Photo 5 shows the final
result, ready at last for the rebuild
phase.
Fixing the power supply
The power supply board is a simple switch-mode design that takes an
unregulated 12V input and delivers a
regulated 5V output. A single four-way
connector is used for both the input
and output. Oddly, this connector is
not polarised, and it is not obvious
which way around it should go!
Photo 6: I replaced the
four-way connector
on the power supply
(shown in Photo 1)
with a safer, polarised
connector.
It definitely needs to be connected
the right way around, and I suspect
that the blown capacitor and melted
connector in this machine are the
result of someone getting that wrong
in the past.
The PCB had a grey compressed
fibre sheet glued to the solder side to
insulate it from the mounting bracket.
The glue was stuck quite well, but only
in a couple of places, so it could be torn
away without doing too much damage. The glue could also be lifted from
the PCB with some persistence, and I
cleaned up what was left with metho,
leaving an undamaged PCB surface.
When I eventually reattached the
power supply to its bracket after restoration, I glued the fibre sheet to the
bracket rather than the PCB, making
any future work easier.
The blown capacitor is a 10μF tantalum across the 12V input. There was
no damage to the PCB from the failure,
and the capacitor was easily replaced. I
also replaced all the electrolytic capacitors at the same time.
Axial electrolytics have become less
common, so I used radial leaded units
as replacements instead. I fitted the
large 2200μF capacitor horizontally to
keep it within the original profile and
secured it to the board with a dollop
of hot-melt glue.
The large inductor (L1) is supposed
to be glued to the PCB, but the original
glue had failed, so I added a bit more
hot-melt glue to re-secure it.
The final restoration step for the
power supply was replacing the
melted mainboard connector and its
wiring. I used the Jaycar HM3434, a
beefy four-way connector with the
same pin pitch as the original, rated
at 7A. Importantly, this new connector is polarised (see Photo 6).
Photo 5: the board after cleaning off the corrosion, removing all components to be replaced and clearing solder out of the through-holes. Putting it back together was the next step.

I connected the board to a current-limited benchtop power supply and slowly increased the voltage to 12V. The unloaded output measured just over 5V, and there was no sign of smoke. I applied a decent load to the output and rechecked the voltage, confirming it was still measuring close to 5V.

Photo 7: a close-up showing the worst section of the board to repair. Many of the tracks were damaged and required replacement point-to-point wiring to fix.
Next, I decided to have a go at powering up the mainboard for the first
time, using my newly repaired supply board. Obviously it wouldn’t work
with half of the components missing,
but the undamaged section included
a couple of clock circuits that looked
like they should still operate. There
was a small problem, though.
The pins on my spiffy new power connector were slightly too large to fit into the PCB holes. This connector has pins with a square section; the answer was to use a small file to remove the corners from the PCB end of each pin, turning the square section into an octagon. The connector then slotted nicely into the PCB.

Photo 8: the underside of the MicroBee 256TC mainboard. Multiple thin replacement wires were required due to damaged tracks on the top side. The photo insert below shows the ECO modification made later, but provides an example of how the replacement wiring is attached.
I attached the supply board and,
using my benchtop supply as the
power source again, applied power to
the Bee through the 5-pin DIN connector. I initially set the current limit low
to ensure things wouldn’t get out of
control and then gradually increased
the voltage and current until a steady
5V was measured at VCC on the mainboard.
The benchtop supply was delivering 200-300mA, which seemed reasonable considering how unpopulated
the mainboard was. I used a DSO (digital storage oscilloscope) to check if
the mainboard clocks were working
and was very pleased to find a steady
13.5MHz for the system clock and
4MHz for the floppy controller clock.
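Those two readings tie in neatly with the CPU speed quoted earlier. Presumably (my assumption; the article doesn’t state the divider) the 3.375MHz Z80 clock is derived from the 13.5MHz master clock, and a quick arithmetic check bears that out:

# Quick check: an assumed divide-by-four from the 13.5MHz system clock
# gives exactly the 3.375MHz Z80 clock quoted at the start of the article.
system_clock_hz = 13.5e6
cpu_clock_hz = system_clock_hz / 4   # assumed divider, not stated in the article
print(f"{cpu_clock_hz / 1e6} MHz")   # prints 3.375 MHz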
Mainboard repair
The 256TC Technical Reference
Manual has a component overlay diagram and a complete parts list. Where
possible, I checked that the parts on
the board were correct. There were a
few variations – for example, some
74ALS157s on the board were supposed to be 74HC157s, according to
the manual. Wherever there was a difference, I went with the installed part
in preference to what the manual said.
Most of the 256TC components are
still readily available from mainstream
electronics suppliers, but some were
a bit more tricky to find.
• Screen/Colour/Attribute SRAM:
the original ICs were 2KiB
TMM2015BP-10 types in 24-pin
SDIP (skinny) packages, but the board
will also accept 8KiB 28-pin SDIP
ICs. Neither are available from mainstream suppliers, but the 2KiB units
at least can still be found on eBay. I
decided to use 28-pin IC sockets for
future flexibility but ordered 2KiB
HT6116 ICs from eBay, equivalent to
the TMM2015BP.
• RTC: the original IC is a type
146818 in 24-pin DIP, but a more modern Dallas DS12C887 would probably also work. I had already ordered
a 146818 from eBay, so I decided to
stick with that, but I picked up one of
the Dallas modules to try later.
• Keyboard connectors: there are
two FFC (flat flexible cable) connectors
on the mainboard, one 8-way and one
15-way. They seem largely obsolete
in the 2.54mm pitch that the 256TC
requires. While 8-way units are still
available from mainstream suppliers,
I couldn’t find the 15-way unit anywhere in reasonable quantities.
16-way devices are available, so
I went with that, and it worked out
well. The board can easily accommodate the slightly longer connector,
and the metalwork for the 16th pin is
easily removed by gripping the solder
tail with pliers and pushing it back up
into the connector housing, leaving an
empty hole that’s easily visible after
the connector has been installed.
It’s then just a matter of ensuring
that the keyboard cable is aligned to
the correct end of the 16-way connector when inserted – ie, the end without
the empty hole. The parts I used are
Mouser Cat 571-5-520315-8 (8-way)
and 571-6-520315-6 (16-way).
I ordered most of the remaining
parts from Mouser, but a handful of the
more common parts I wanted quickly
or had neglected to include in the order
came from my local Jaycar store.
Once the parts arrived, I started
by replacing all the IC sockets I’d
removed from the undamaged sections
of the board. I added a new one for the
optional sound IC (SN76489AN) that
I plan to try one day, and a couple of
28-pin SDIP sockets for a future 16KiB
to 32KiB PCG RAM expansion. PCG is
the Programmable Character Generator
that is used for displaying graphics and
customised characters.
I then moved on to the damaged
area. The 256TC board has provision
for many more bypass capacitors than
were actually fitted out of the factory,
and since I’d already cleared the solder from all the holes in the damaged
area, I thought it wouldn’t hurt to just
fit all of them as I went along. I used
100nF multi-layer ceramic capacitors
for those but stuck with 10nF for the
original factory-fitted bypass capacitors.
I decided to use IC sockets for everything that needed replacing in the
damaged section. I started work at the
edges of the damaged area, gradually
moving towards the centre, where I
knew it would get harder. The edges
were quite straightforward as most of
the tracks were still in good condition;
it was mainly just a matter of fitting
the new components and moving on.
As I approached the middle, I came
across tracks that looked dodgy or
tested as being open-circuit. For each
of these, I ran a parallel replacement
wire on the solder side and tested the
connection. Photo 7 is a close-up of
the worst section to repair. You can
see the damaged tracks, meaning lots
of replacement wires were needed on
the other side, as shown in Photo 8.
Pretty much every track in that
area needed replacing and testing. It’s
easy to make a mistake when running
replacement wires like this, as you’re
working with a mirror image of the
component pads on the reverse side.
The key I found is just to be methodical, work on only one component at
a time, and double-check everything
as you go.
Probably only about ¼ of the black
wires that you can see in Photo 8
replace tracks that actually tested as
open circuit – the others correspond to
tracks that tested OK but looked dodgy
enough that I paralleled them anyway.
The blue wires are re-implemented
factory mods detailed in section 5.20
of the 256TC Technical Manual.
Photo 9 is a top view of the completed board with all replacement
components fitted.
Troubleshooting
At this point, I was busting to power
up the machine again to see what
happened. I didn’t expect it to work
yet because there were just too many
opportunities for missed track damage or wiring mistakes. Still, there
was only one way to tell! I connected
the screen, speaker, and power supply
and switched it on. The result was a
short beep and the display shown in
Photo 10.
This was obviously not right, but the
display was pretty stable and much
closer to working than I had expected.
There was even a partly legible clock
in the right place for a 256TC kernel
boot screen. I was starting to think that
this machine was going to live again.
The next thing I did was break out
the DSO and have a poke around. I was hunting for any missing or odd-looking signals that might indicate open-circuit tracks, missing replacement wires or misrouted wiring that might be joining pins and signals that weren’t meant to be joined.

Photo 9: the top side of the completed mainboard with all the replacement components fitted.
I had seen some bridged signals in
other boards in the past, so I knew they
would likely show up as distorted or
superimposed waveforms that would
hopefully stand out as being wrong.
Unfortunately, after a couple of
hours, I had largely drawn a blank.
I hadn’t found any missing signals
anywhere; while I did see a few odd-
looking ones, I couldn’t trace any of
them to a specific fault or wiring mistake.
The main problem was that I didn’t
have a working machine for comparison, so what looked odd to me might
have been perfectly normal for a 256TC
or vice versa. I needed a more structured approach than randomly poking
a probe around the circuit and hoping
that something would jump out at me.
I noticed that there was a brief
period during the power-on process
when some of the graphics that form part of the normal 256TC kernel boot screen would display correctly, only to quickly disappear shortly afterwards and be replaced by the corrupted display. I had a couple of theories as to what might be causing that.

Photos 10 & 11: the top screenshot shows the display when first powered on, while the lower image shows the screen after it was fixed.

It could be that the boot program was crashing, and the CPU was writing rubbish all over the place. Or perhaps the CPU was doing the right thing, but something was going wrong with the Screen, Attribute or PCG RAM.

I figured that the CPU would be a good place to start, so I needed to load and execute a program I controlled to see if it ran correctly. If it didn’t, then that would mean I should concentrate my efforts on the processing parts of the circuit.

The problem was that I didn’t have a working screen to write any output to, nor did I have a working keyboard yet, because the new FFC connectors hadn’t arrived.

I had a working speaker; I could hear it beep quietly during power-on, so I could presumably use that as output if I could get a program to run. A good way to load a program would be to burn a spare EPROM and install it in place of the standard kernel ROM.

To paraphrase Red Dwarf, this was an excellent plan with only two minor flaws: I don’t have an EPROM programmer that can burn the type 27128 EPROMs the 256TC uses, and I didn’t have any 27128 EPROMs.

Could I load the program from disk instead? The 256TC will automatically boot from a floppy disk during power-up if it finds one. So, if I wrote a small test program and put it on the first sector of a disk in place of the normal CP/M bootloader, my program ought to get loaded and run automatically at power-on, without needing any keyboard input. It was worth a try!

First, though, I would need to calibrate the 2793 floppy controller. The 256TC presents the floppy alignment signals at a convenient six-way header (X8) next to the adjustment controls RV1, RV2 and CV1. The 2793 test jumper is also presented at X8, so I connected it to GND, got the DSO ready and switched it on, ready to start the adjustments.

I could set RPW and WPW with no problems, but the 250kHz DIRC signal I needed to adjust with CV1 was missing entirely. The disk controller and associated support components are located in a largely undamaged part of the board, and the controller seemed to be getting all the right inputs, but the DIRC test signal simply refused to appear despite numerous resets and power cycles.

Perhaps I had a broken 2793? I tried installing a known good controller to test this but got the same result. I came across the answer at www.pdp-11.nl/homebrew/floppy/diskstartpage.html

It seems that you must set the test jumper after applying power and after the 2793 has completed its internal initialisation. All I had to do was power off, disconnect the test jumper, power on, then reconnect the jumper, and the DIRC signal appeared as expected. The CV1 frequency adjustment was then straightforward; phew!

I reinstalled the original disk controller chip and repeated the process without problems. I finally had a fully-calibrated and hopefully functional floppy controller.

Next, I needed to write a bootloader program that I could use for the test.
What I came up with is shown in Listing 1:
# Listing 1 – assembly language test program
        ORG     00080h
ROMDisplay: EQU 0E00Ch
Start:
        LD      SP,080h
Sound:
        LD      C,007h
        CALL    ROMDisplay
        CALL    BeepDelay
        CALL    BeepDelay
        JR      Sound
BeepDelay:
        LD      BC,0FFFFh
BeepDelayLoop:
        DEC     B
        JR      NZ,BeepDelayLoop
        DEC     C
        JR      NZ,BeepDelayLoop
        RET
All this program does is call the kernel ROM to produce a beep sound, wait
a second or so, and repeat indefinitely.
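As a rough cross-check of that delay, here is a back-of-envelope estimate of one BeepDelay call, assuming standard Z80 instruction timings (4 T-states for DEC r, 12 for a taken JR) and the 3.375MHz clock; treat it as a sketch rather than a cycle-exact figure:

# Approximate duration of one BeepDelay call from Listing 1.
# Assumes standard Z80 T-state counts; ignores call/return overhead.
CPU_HZ = 3.375e6
inner_passes = 256 * 256               # the DEC B loop runs ~256 times per DEC C step
t_states = inner_passes * (4 + 12)     # DEC B (4T) + taken JR NZ (12T) per pass
print(f"~{t_states / CPU_HZ:.2f} s")   # roughly 0.3s, so two calls between beeps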
I used a HEX editor to paste the
code into the first sector of a DS80 disk
image and used it to boot a MicroBee
emulator (ubee512). After a bit of
debugging, I eventually had a working
disk image. I then produced an HFE file
from the same image. I loaded that into
my GoTEK floppy emulator to simulate a real floppy drive with my custom bootloader disk mounted, ready
for some physical machine testing.
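For anyone wanting to reproduce that step without a hex editor, the sector patch can also be scripted. The sketch below is my own illustration, not the author’s method; the file names are hypothetical and it assumes the boot code simply occupies the very start of the raw DS80 image, with 512-byte sectors:

# Patch an assembled bootloader into the first sector of a raw disk image.
# Illustrative only - the filenames and 512-byte sector size are assumptions.
SECTOR_SIZE = 512

with open("bootloader.bin", "rb") as f:      # output from the Z80 assembler
    boot = f.read()
assert len(boot) <= SECTOR_SIZE, "boot code must fit in the first sector"

with open("ds80.raw", "r+b") as img:         # working raw DS80 disk image
    img.seek(0)
    img.write(boot)                          # overwrite the normal CP/M bootloader
print(f"wrote {len(boot)} bytes to sector 0")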
I tested this setup on a known-good
machine first, then moved the GoTEK
over to the 256TC, switched it on, and
was rewarded with a nice steady heartbeat sound. Yay! The beeps were stable
for as long as I could stand to leave it
running, so things were looking good.
This simple test actually proved
quite a bit of functionality. The CPU,
RAM (at least the part that contains
my program), kernel ROM, PIO and
disk controller were all OK.
I still had a broken display and
didn’t know whether the keyboard
worked, but a large part of the machine
was working fine.
Screen resolution
By now, I was quietly confident that
the cause of the screen corruption was
in the display handling part of the circuit. The video circuitry is concentrated in the area of the PCB that had
taken the most battery damage.
What I needed now was a way of
running some controlled tests of the
various video functions so I could narrow the problem down. Ubsermon/
ubsertool is a toolset I’d used previously for automating software testing
on real hardware; its core functionality is a MicroBee resident monitor program that is controlled and operated
via a serial connection to a remote PC.
The PC then acts as a serial terminal,
providing keyboard input and screen
output for the monitor part running
on the machine under test. Ubsermon
would allow me to run all sorts of tests
easily; all I needed was a serial cable
and a way of loading ubsermon into the
256TC. For that part, I needed to write
a new bootloader, shown in Listing 2:
# Listing 2 – ubsermon bootloader
        ORG     00080h
Start:
        LD      SP,00080h
        LD      DE,00001h
        LD      HL,08000h-00080h
        LD      BC,01400h
        CALL    0E039h
        JP      08000h
This is a modified and cut-down version of the standard bootloader. It sets
some parameters, then calls a kernel
ROM routine (at 0xE039) that does all
the hard work of actually reading the
disk. Typically, the bootloader loads
CP/M from the disk into RAM and then
jumps to it, but I modified it to load
and run ubsermon instead.
The version of ubsermon I used
runs from RAM location 0x8000 (0x
means hexadecimal, so that’s 32768
in decimal). The bootloader simply
reads the first 0x1400 bytes from disk
and writes them to RAM starting at
address 0x7F80 since the first 0x80
bytes are the bootloader itself. Once
that’s done, we jump to 0x8000, the
entry point for ubsermon.
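The arithmetic behind those numbers is easy to check. This little sanity check (my own, purely illustrative) confirms that loading 0x1400 bytes at 0x7F80 puts the ubsermon payload, which sits immediately after the 0x80-byte bootloader on disk, exactly at its 0x8000 entry point:

# Sanity-check the Listing 2 memory map (illustrative, not from the article).
BOOT_LEN  = 0x80            # the bootloader occupies the first 0x80 bytes read back
LOAD_DEST = 0x8000 - 0x80   # HL: destination passed to the ROM disk-read routine
LOAD_LEN  = 0x1400          # BC: number of bytes read from the disk
ENTRY     = 0x8000          # JP target: ubsermon entry point

assert LOAD_DEST == 0x7F80
assert LOAD_DEST + BOOT_LEN == ENTRY   # the payload lands right at the entry point
print(f"loads 0x{LOAD_DEST:04X}-0x{LOAD_DEST + LOAD_LEN - 1:04X}, entry 0x{ENTRY:04X}")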
Next, I needed to create a disk image
containing both the bootloader and
ubsermon. For this, I started with a
blank RAW DS80 disk image and used
a hex editor to paste in the bootloader
program and ubsermon. The RAW
image was then easily converted into
an HFE file for use with my GoTEK
drive.
I soon had my PC communicating
with ubsermon on the 256TC and was
ready to run some tests.
I started with some simple read and
write tests to the PCG RAM and found
no problems – whatever I wrote could
be read back unchanged. I also had the
256TC display output visible while
doing this, and what was shown on
the screen coincided with the data I
was writing to the PCG. That was a
big tick for PCG functionality.
The Screen RAM test was a different
story. I could read and write individual bytes OK, but sometimes, writing
a single byte would cause two bytes
to be modified. The target byte consistently wrote OK, but more often than
not, another byte at a seemingly random place within the screen RAM map
would also get updated. Reads didn’t
seem to be a problem.
As long as I didn’t actually write
anything, the contents of the screen
RAM remained stable. I then tried the
same tests with Colour and Attribute
RAM and found that Attribute RAM
had precisely the same problems, but
Colour RAM appeared to be working
fine. Hmmm.
I spent some time with the DSO
examining signals associated with the
Screen and Attribute RAM, looking for
crossed address lines or other weirdness, but I didn’t find anything particularly wrong. I did see an odd-looking
WE (write enable) signal on these chips
– more on that later.
While I was doing this, I was starting to recall something about a screen
corruption problem being experienced
with the MicroBee Premium Plus kit
(PP+) that I had built a few years prior,
and an ECO (engineering change order)
being released at the time to deal with
it. I had automatically applied that
ECO when I built the kit, so I never
saw what the problem looked like,
but now I was wondering if it might
be relevant to what was going on here.
I dug out the ECO document to have
a read. Apparently, there was a “timing
problem in the combinatorial logic”
associated with the faster RAM the
PP+ uses; the penny was now starting
to drop. I had fitted three new 2KiB
SRAMs as part of the board repair,
and these were 70ns parts (HT6116-70), compared with the original 100ns
parts (TMM2015BP-10). Could that be
the problem?
Two of the original RAM chips were
still in reasonable condition as they
had been socketed, so I removed my
new 6116s from the Screen and Attribute positions, fitted the old original chips and switched on. Bingo! A
perfectly normal kernel boot screen
appeared, as shown in Photo 12.
That left me with two questions.
Firstly, what to do about the Colour
RAM, which was apparently working OK with a 70ns part. Why should
Colour be magically OK when the
other two clearly were not? Just
because I couldn’t trigger the Colour
RAM to misbehave didn’t mean that
there wasn’t some condition that
would, and I didn’t have a 3rd serviceable 100ns RAM chip.
Secondly, what was the underlying
problem? I decided to try to work it out
and hopefully get to the point where
the machine would function with the
faster SRAMs in all three positions.
Scope 1 shows the odd-looking WE
signal I mentioned earlier. The yellow
trace is a WE signal for one of the video
SRAMs during a single-byte write
operation. All three SRAMs (Screen,
Attribute and Colour) have a similar
signal during writes. The blue trace is
one of the SRAM address lines.
Note that the first 100ns negative
pulse is followed by a much shorter
pulse. My theory is that this extra pulse
is why faster SRAMs have a problem
with writes – they are fast enough to
react to that presumably unintentional
pulse, while the slower RAMs are not.
That could explain why an extra seemingly random byte gets updated.
The WE signal for each of the video
RAMs is generated by the Gold PAL
(U52), and one of its inputs is the CO1
clock. PP+ ECO 20120714-1 (Rev 2)
involves inserting a 1.5kΩ resistor into
the CO1 clock line, which, in combination with the input capacitance of
U52, causes the clock signal to the PAL
to be delayed slightly.
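To get a feel for how little delay that resistor adds, here is a rough calculation; the PAL input capacitance figures are my own assumptions (they aren’t quoted in the article or the ECO), so treat the result as order-of-magnitude only:

# Order-of-magnitude RC delay from the ECO's series resistor into U52's clock pin.
# The input capacitance values below are assumptions, not from the article.
R_OHMS = 1.5e3
for c_farads in (5e-12, 10e-12):
    tau_ns = R_OHMS * c_farads * 1e9
    print(f"C = {c_farads * 1e12:.0f} pF -> RC of about {tau_ns:.1f} ns")

Even a clock-edge shift of that order at the PAL appears to be enough to remove the brief second WE pulse, as Scope 2 shows.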
The 256TC is technically very
similar to the Premium, except for the
different keyboard, so I decided to try
applying this ECO and see what happened. Scope 2 shows the same two
signals after the ECO was applied.
Note that the second WE pulse in the
yellow trace has vanished. I removed
the old 100ns parts, refitted the new
70ns SRAMs and switched it on. Success! There was no longer any display
corruption.
Applying ECO 20120714-1 (Rev 2)
to the 256TC is relatively straightforward and just involves cutting a single track that runs to U52 pin 13 and
inserting a 1.5kΩ resistor across the
cut. I put the resistor inside heatshrink
tubing to prevent accidental shorts
against adjacent pads.
A fly in the ointment
Real life got in the way at this point,
so it was several more months before
I could return to finish this project.
However, when I fired up the 256TC
again, all was definitely not well.
Instead of the colourful kernel boot
screen that I had seen months earlier,
now I just had a monochrome display
filled with a mix of ASCII characters
0x00 and 0x02.
The display was flickering rapidly
and seemed to be cyclically redrawing itself several times a second; the
machine wouldn’t do anything else. It
wouldn’t boot from a floppy, either at
power-up or following a manual reset.
I had a quick look at all my track
repair wiring on the back of the board
to see if anything had come adrift, but
it seemed OK. I also had a poke around
with the DSO, but nothing stood out.
Whatever was going on, there seemed to be no attempt to access the disk controller chip, so there was no chance of using ubsermon again to help with the debugging.

Scope 1: the yellow trace shows the WE signal for one of the video SRAMs during a single-byte write operation, while the blue trace is one of the SRAM address lines.
I thought about it over the next couple of days and decided that, since it
was displaying reasonably ordered
screen content, it was probably starting to execute the kernel ROM OK.
Somewhere in the ROM code, the CRT
controller gets programmed, and that’s
when the random screen RAM data at
power-up would be replaced by the
more ordered data I was seeing.
My plan was to disassemble the
ROM and start tracing through the
code. I would compare what it said
should be happening with what I
was seeing on the screen or as signals
in the circuit using the DSO. When
I reached a part of the code that I
couldn’t see working, that might give
me a clue what the problem was and
where to look.
The code starts by setting what looks
like a flag in high Screen RAM to 0xFF.
It then performs some basic setup steps
before starting to program the CRTC
(cathode ray tube controller). I could
see that this code was working, both
by what I could see on the screen and
by checking for a CRTC chip selection
signal with the DSO.
Next, it initialises the contents of
the Colour RAM. This part looked to
be working, too, because the content
on the screen was all one colour, and
I could see an active WE signal on the
Colour RAM chip.
It then moves code from ROM to
RAM and calls another ROM routine
to clear the screen. It looked like the
screen might have been briefly cleared as part of the cyclic flickering I could see. I could also see an active WE signal on the Screen RAM chip, so it seemed to be getting that far, at least.

Scope 2: the same two signals shown in Scope 1, but after the ECO was applied (by cutting a single track and soldering a 1.5kΩ resistor along that cut).
Next, it calls another ROM routine to
initialise the Attribute RAM, and this
is where it starts to get interesting. This
routine fills the Attribute RAM with
zeros, then overwrites a block of that
with 0x02. This block is intended to
point to a “256TC” PCG graphic that’s
displayed towards the top right corner
of the kernel boot screen.
Two things were interesting about
this part of the code. Firstly, the DSO
was showing zero activity on the Attribute WE signal, so the RAM wasn’t getting written to, despite what the code
said should happen. Secondly, the
data this routine was trying to write
was the text I could see on the screen,
so it seemed it was actually writing
this data to Screen RAM instead of
Attribute RAM.
Screen RAM and Attribute RAM
occupy the same address space and are
swapped by writing to the Video Memory Latch port (0x1C). So, it seemed
something was going wrong with the
Video Memory Latch; it wasn’t switching between Screen and Attribute
RAM as it should.
Returning from the Attribute RAM
initialisation routine, the next significant action is to program the PIO. The
DSO showed that the PIO was being
selected, so I could only assume that
part was working.
Lastly, it checks the Screen RAM flag
that was set at the start, and as long as
it’s not zero, it attempts to boot from
the floppy. Unfortunately, by this point,
the flag has been set to zero by the malfunctioning Attribute RAM routine, so
it skips over the floppy boot function.
That explains why there was no attempt
to access the disk controller chip.
I stopped looking through the ROM
code because I now had a good lead to
follow; all the evidence pointed to a
Video Memory Latch problem. Looking at the circuit diagram, CPU access
to Attribute RAM data is via a bus
transceiver (U84), enabled at pin 19.
The DSO showed no activity on that
pin, and tracing back through some
logic gates showed that the LV4 signal
was permanently low.
LV4 is derived from pin 6 of U64, the
Video Memory Latch, and is supposed
to go high when bit 4 is set during a
write to port 0x1C. So why wasn’t this
happening?
Photo 12: the fully working computer showing off its colour display capabilities.
The time and date need updating though!
A rising signal on pin 11 of U64 triggers the latching of data, but looking
at that pin with the DSO showed it to
be permanently high, so no latching
could occur. Tracing this signal back
through an OR gate showed that pin
7 of U88 was permanently high. U88
is a 74HC138 used as a port decoder,
and pin 7 is supposed to go low whenever port 0x1C is accessed.
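As a purely illustrative aside (the article doesn’t give the exact address wiring), one common way for a 74HC138 to decode Z80 I/O ports in blocks of four is to feed address lines A2–A4 to its select inputs; under that assumed wiring, port 0x1C maps to output Y7, which is pin 7 on the ’138 – the very pin that was stuck high:

# Illustrative port decode: assumed wiring A2..A4 -> 74HC138 select inputs A..C.
# This wiring is an assumption, not taken from the 256TC circuit diagram.
port = 0x1C
y_output = (port >> 2) & 0b111                    # which Yn output goes low
print(f"port 0x{port:02X} selects Y{y_output}")   # Y7 is pin 7 on a 74HC138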
I could see other outputs from this
chip going low in response to activity
on other ports, but there was definitely
nothing happening on pin 7. All the
inputs to U88 seemed OK, so could
U88 itself be the problem?
I had not replaced U88 during the
rebuild a few months ago, but it is
located right on the border of the section of chips that did get replaced.
Being an original meant that it was
securely soldered in place (ie, no
socket), so it was not easy to swap it.
I did have a spare 74HC138, so I
decided to set it up on a breadboard
first and feed it with all the same input
signals as the original to see what happened with pin 7. It took some fiddling
to get all the signals hooked up, but
eventually, I did. Pin 7 on the original
was still stuck high, but pin 7 on the
spare was behaving quite differently
and regularly pulsing low in response
to activity on port 0x1C.
I think I must be an expert in removing chips from a 256TC PCB now, so it
didn’t take long to remove the old chip,
fit a shiny new 16-pin IC socket and
insert my spare 74HC138. Success!
The normal colourful 256TC kernel
boot screen was back, and there were
no problems booting from a floppy.
I wonder if this machine is deliberately setting out to give me new challenges!
Postscript
I’m a fan of IC sockets and am still
glad I decided to use them as part of
this repair, but there are a couple of
consequences to that decision that I
didn’t realise at the start.
The first problem is that the 256TC
power supply mounts underneath the
floppy drive mounting bracket and sits
very close to the main PCB when the
machine is assembled. I’m sure the
lack of clearance in this area is why the
RTC and Colour RAM ICs didn’t have
sockets fitted originally, whereas the
Screen and Attribute RAMs, located
right next door, did.
The main problem is the power supply inductor, L1, which fouls against
the side of a socketed U84 on the mainboard. I solved this by detaching L1
again and refitting it slightly further
to the rear and as close as possible to
the PCB.
The second problem is that the drive
mounting bracket ends up resting on
top of the row of socketed ICs immediately to the rear of the keyboard connectors (U95, U82, U11 etc). This is
less of a concern because it contacts the
insulated top surface of the ICs rather
than any pins, but I wasn’t happy to
leave it that way because I thought it
might eventually cause problems with
those ICs or their sockets.
My solution was to modify the drive
bracket slightly. It is installed on top of
a couple of plastic case posts at either
end of the bracket. Putting a kink in
the bracket at those two points causes
it to be lifted a few millimetres higher
and gives good clearance from all the
underlying ICs.
I suppose this solution is a bit agricultural, but it does the job and doesn’t
cause any problems with the floppy
drives or their presentation through
the case openings.
Floppy drives
The twin floppy drives that came
with the machine both looked to be
in good condition on the surface, but
unfortunately, I was not able to get
either of them to work reliably.
Drive #1 would read OK during
testing using the top head but refused
to read anything via the lower head.
Looking closely at the problematic
head showed what looked like a single
fine hair on the surface, but no amount
of cleaning would shift it.
Thinking it might be a scratch
instead, I gently ran my fingernail
along the surface to see if I could feel
anything, and it quickly became evident what the real problem was when
part of the head came away, as shown
in Photo 13.
I have no idea what could have happened to cause this damage, but I was
clearly wasting my time on this drive;
nothing short of a head replacement
was going to get it working again.
Drive #2 also had a problem with
read reliability. The bottom heads
seemed to work OK under testing, but
the top head would misread random
sectors. This problem improved somewhat with cleaning, but not enough
to be reliable. The problem may be
related to the media I’m using (HD as
opposed to DD), but the same media
works OK in other machines.
A future job might be to make one
good drive from the pair, but for now,
I’ve installed drive #2 as the B drive
just to fill the hole in the case. On
the other side, I have installed a new GoTEK drive emulator as drive A, which works very nicely (see Photo 14).

Photo 13 (above): the damaged floppy disk drive head.

Photo 14 (right): a GoTEK floppy drive emulator was installed as drive A.
Assembly
Assembling the 256TC is a bit of a
jigsaw puzzle and can be a struggle
if you don’t do everything in the correct order. I’ve found this technique
works well:
1. Attach the rear panel to the mainboard using the D socket posts. Install
the board/panel combination into the
case base and insert all screws. Three
screws attach the rear bracket to the
case, and there are another two at the
front of the mainboard. You need to
support the thin edge of the case at
the rear with one hand as those three
screws go in. Tighten all screws.
2. Attach the power supply and
floppy drives to the mounting bracket
and plug in all drive cables.
3. Facing the front of the machine,
hover the bracket roughly where it
should go and plug the main power
supply cable and both floppy drive
power cables into the mainboard
underneath. The 34-way floppy drive
cable is best left unplugged for now.
Lower the bracket into its usual resting place.
4. Lay the keyboard upside-down
on top of the floppy drives with the
cables facing forward. Run the keyboard cables down through the open
slot between the mounting bracket and
the power supply.
5. Lift the front edge of the mounting bracket/power supply and reach
underneath to plug in the keyboard
cables. Lower the mounting bracket
again, then roll the keyboard forward
so it is the right way up and in its
proper place.
6. Plug in the 34-way floppy cable
and put the case top in place. The
bracket or keyboard location might
need shifting slightly to get the top
to fit correctly. Squeeze the whole
package together at the sides and
hold it together tightly while turning
it upside down so that the screws can
be installed.
7. Install all screws loosely, starting
with the centre screws on each side
that run through the drive mounting
bracket. Check that everything stays
aligned as each screw goes in, then
tighten them all, working outwards
from the two centre screws.
8. Turn the machine over and you’ve
finished.
SC