Techno Talk
Max the Magnificent
I’m always amazed by the cunning creations of the pioneers of yesteryear, especially when I consider
the rudimentary sensors they had at their disposal. I often wonder what their reactions would be to
see the sophisticated sensors we have available to us today.
As usual, my poor old noggin is full of random thoughts bouncing around like super balls on steroids. The topic that has currently captured my attention is that of sensors.
What pops into your mind when you
hear (or see) the word ‘sensor’? Living
in an age of wonders as we do, you may
be thinking of highfalutin’ devices like
lidar (light detection and ranging) or
radar (radio detection and ranging).
When we boil things down, however, a
sensor is any device that detects some
physical phenomenon and produces a
corresponding output signal.
For our purposes here, we will assume electrical output signals – voltages or currents – used to feed electrical or electronic systems, but this isn’t cast in stone.
Victorian fax machines
Can you imagine the Victorians sending
faxes to each other? This may seem far-fetched, but in 1842, a Scottish engineer
and inventor called Alexander Bain came
up with a cunning idea. He created an
image to be transmitted by cutting it out
of a thin sheet of tin. He placed this metal
image on a movable insulated base and
connected it to one side of a battery. The
base was slowly passed under a swinging pendulum formed from a conducting
wire with a weighted point on the end.
Whenever this point connected with the
metal image, it completed the electrical
circuit, thereby converting the dark and
light areas of the image – which were
represented by the presence or absence
of tin – into an electrical signal.
This cunning creator used this electrical signal to activate a relay attached to
the end of another pendulum that was
swinging back and forth over a second
moving bed. The activated relay caused
an attached pencil to encounter a piece of
paper lying on the moving bed, thereby
reproducing the original image in metal
as a drawing in pencil.
Ray guns and TV controllers
Did you ever see the original Buck Rogers
science fiction serial from 1939 starring
Buster Crabbe? Filmed in glorious black-and-white, this was originally released as
a series of 20-minute movies, and later
shown in the 1950s and 1960s on TV.
Buck sported a ray gun in the form
of a ‘U-235 Atomic Pistol’. The reason I
mention all this is that the first practical
photoelectric cells were invented in the
1880s. In 1955, Zenith introduced the
world’s first wireless television remote
control called the Flash-Matic. Looking
like something Buck Rogers would not
be ashamed to be seen carrying, this glorified torch (flashlight) employed a beam
of light to activate four photocells located at the corners of the screen, thereby
allowing the user to control the volume
and channel selection.
Mobile sensor platforms
Do you remember the artificial intelligence (AI) called KITT (Knight Industries
Two Thousand) powering the highly advanced, very mobile, robotic automobile
in the Knight Rider TV series of the 1980s?
Today’s cars are getting close to (sometimes they surpass) KITT’s capabilities.
My own 2019 Subaru Crosstrek is
equipped with binocular vision that can
be used to detect and correct any drifting
out of lane, vary the speed of the cruise
control if we get too close to a car in front,
and slam on the brakes if it feels we are
in danger of imminent collision. All of
this is made even more exciting by my
wife screaming in my ear.
In fact, today’s autonomous cars and
robots are essentially mobile sensor and
computing platforms. A very common
scenario is to have multiple cameras
equipped with CMOS sensor arrays that
are sensitive to light in the visible part
of the spectrum. These feed advanced
processors running AI algorithms that
can perform tasks like object detection
and recognition.
These cameras can be augmented by
lidar and radar sensors. The original lidars were big, bulky, and expensive,
but new versions are coming online in
which almost everything is implemented
in semiconductor form. As opposed to
a simple time-of-flight (TOF) approach
which involves generating powerful pulses of light and measuring the round-trip
time of any reflections, companies like
SiLC Technologies are using a frequency
modulated continuous wave (FMCW)
approach that can provide distance and
velocity data on a pixel-by-pixel basis,
allowing them to perceive and identify
objects more than a kilometer away.
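To give a feel for the numbers involved, here is a minimal Python sketch of the simple TOF calculation described above; the pulse timing used is a made-up value purely for illustration, and the FMCW approach used by the likes of SiLC is considerably more involved.

```python
# Minimal time-of-flight (TOF) distance sketch.
# Assumes we can measure the round-trip time of a reflected light pulse;
# the sample timing below is a made-up value for illustration only.

C = 299_792_458.0  # speed of light in metres per second

def tof_distance(round_trip_seconds: float) -> float:
    """Distance to the target is half the round-trip path length."""
    return C * round_trip_seconds / 2.0

# A reflection arriving 6.67 microseconds after the pulse was fired
# corresponds to a target roughly one kilometre away.
print(f"{tof_distance(6.67e-6):.1f} m")
```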
Meanwhile, companies like Owl
Autonomous Imaging are creating longwave infrared (LWIR) thermal focal plane
arrays (thermal imagers). The signals
from these imagers can be employed by
AI to perform object detection, classification and ranging. As the folks at Owl
told me, ‘Within five years, all new cars
will be able to see at night!’
That’s deep!
Have you ever thought about our amazing ability to perceive the world around
us in three dimensions? Powered by our
optical sensors (eyes) and associated computers (brains), we call this ability ‘depth
perception.’ There are many aspects to
this, but we start with the fact that each
of our eyes sees a slightly different image due to their separation in our heads.
The resulting disparities are processed
in the visual cortex of our brains to yield
depth information.
Even with one eye closed, we can still
do things like track and catch a ball heading our way. In this case, our brains make
use of visual cues, including knowing
how big we expect objects to be, and our
understanding that if an object appears
to be growing bigger, then this may be a
good time to duck.
In the case of machine vision, one of
the components of depth perception
is the ability to create a 3D depth map
(point cloud) of the scene. We can do
this using two CMOS sensors to provide binocular vision, but that increases
the cost. We can employ a single CMOS
sensor in conjunction with an AI, using its understanding of the scene to
determine where and how big things
are in 3D space, but this requires a lot
of computation.
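As an aside, the classic two-sensor (binocular) approach boils down to simple triangulation: the farther away a point is, the smaller the disparity between where it appears in the left and right images. Here is a minimal Python sketch of that relationship; the focal length, baseline and disparity figures are assumptions, chosen purely for illustration.

```python
# Minimal stereo-depth sketch: depth from binocular disparity.
# depth = (focal_length * baseline) / disparity, using consistent units.
# All numbers below are assumptions for illustration only.

def depth_from_disparity(focal_px: float, baseline_m: float,
                         disparity_px: float) -> float:
    """Depth (metres) of a point seen by two horizontally offset cameras."""
    return (focal_px * baseline_m) / disparity_px

# Example: a 1400-pixel focal length, 12cm baseline and 8-pixel disparity
# put the point about 21m away; halve the disparity and the depth doubles.
print(f"{depth_from_disparity(1400, 0.12, 8):.1f} m")
```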
The folks at a company called AIRY3D
have come up with a way to use a single
CMOS sensor to generate both a regular 2D image and a 3D point cloud on a
pixel-by-pixel basis with very little computation. I don’t know about you, but
I certainly didn’t see this one coming!