Techno Talk
AI and robots – what could possibly go wrong?
Max the Magnificent
We are currently making tremendous strides in the field of artificial intelligence. There are also a lot of
interesting developments in the world of humanoid robots and cobots (collaborative robots) which work
hand-in-hand with humans. Let’s hope we all stay friends.
As I pen these words, progress is accelerating dramatically on the artificial intelligence (AI)
front. The first person to really contemplate the possibility of AI was the
English mathematician and writer Lady
Ada Lovelace (daughter of the English
romantic poet Lord Byron). In her early
20s, Ada assisted the English polymath
Charles Babbage when he started work
on his proposed general-purpose mechanical computer, the Analytical Engine, in the late 1830s.
Babbage only viewed his Analytical
Engine in the context of performing mathematical calculations. In her notes, Ada
discussed how the numbers being processed could represent abstract symbols,
such as musical notes, and that future
versions of the engine ‘might compose
elaborate and scientific pieces of music
of any degree of complexity or extent.’
During WWII, the English mathematician and computer scientist Alan Turing
started to ponder the possibilities of
machine intelligence. He gave a lecture
in 1947 that discussed how machines
could learn from experience, and in
1948 he wrote a paper entitled Intelligent Machinery, which introduced many of the
concepts that are central to today’s AI.
Unfortunately, he failed to publish this
paper, which meant most of his ideas had
to be reinvented by others later.
The AI ball starts to roll
In 1956, the Dartmouth Summer Research
Project on Artificial Intelligence was held
at Dartmouth College in New Hampshire,
US. This seven-week brain-stem-storming
session of mathematicians and scientists
is widely considered to be the founding
event that set the AI ball rolling – and it
hasn’t stopped rolling since.
Having said this, the ball did not roll
very fast at first. Although expert systems,
which use knowledge- and rule-based approaches, were formally introduced in
1965, progress was painfully slow, largely because computers of the time were
limited in memory and performance.
Work on expert systems picked up
pace in the 1970s and they really started
to proliferate in the 1980s. By the 1990s,
however, a lot of us had a sinking feeling that they weren’t living up to their
promise. Things were not helped when
the marketing weenies hopped on the
bandwagon and started to stamp ‘Artificial
Intelligence Inside’ labels on everything,
even things that had nothing to do with
AI whatsoever (in much the same way
we currently see ‘Gluten Free’ stamped
on foods that never had a hint of a sniff
of a whiff of gluten in the first place).
Just when we least expected it
To be honest, throughout the 2000s, I’d
largely relegated thoughts about AI to
the recesses of (what I laughingly call)
my mind. I knew work was still ongoing
in academic circles, but I really didn’t
envisage any real-world applications
for quite some time.
All this started to change in the 2010 to 2015 timeframe, when more powerful computing engines, new AI architectures based on digital artificial neural networks (ANNs) and new AI algorithms such as convolutional neural networks (CNNs) sprang onto the scene.
How big? How fast?
In 2018, an AI research laboratory called
OpenAI published a paper titled AI and
Compute, which defined two eras of
AI computation requirements. During
the first era, which started with the
Dartmouth Workshop and lasted until
2012, the requirements for AI computational capability doubled approximately
every two years, which roughly mapped
onto the well-known Moore’s law.
A ‘perfect storm’ occurred in 2012
with the introduction of new AI architectures and algorithms. The result was
an inflection point across multiple domains (speech, vision, language, games…)
that heralded the second (current) era, in which the computing power devoted to AI started to double roughly every 3.5 months.
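To get a feel for what that difference in doubling rates means in practice, here's a quick back-of-the-envelope calculation. This is only an illustrative sketch; the 72-month (six-year) window is my own assumption, not a figure taken from the OpenAI paper.

```cpp
// Back-of-the-envelope comparison of the two doubling rates discussed
// above. The 72-month window is purely an illustrative assumption.
#include <cmath>
#include <cstdio>

int main() {
    const double months = 72.0;  // six years, chosen for illustration

    // Doubling every 24 months (the Moore's-law pace of the first era).
    const double mooreFactor = std::pow(2.0, months / 24.0);

    // Doubling every 3.5 months (the pace of the second era).
    const double aiFactor = std::pow(2.0, months / 3.5);

    std::printf("Over %.0f months:\n", months);
    std::printf("  Doubling every 2 years:    ~%.0fx more compute\n", mooreFactor);
    std::printf("  Doubling every 3.5 months: ~%.0fx more compute\n", aiFactor);
    return 0;
}
```

Doubling every two years gives roughly an 8x increase over six years; doubling every 3.5 months gives a factor of well over a million.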
How do we do it?
The first AI systems ran on some of the
larger computers available at the time. Of
course, processor technologies have improved dramatically. Also, AI algorithms,
both big and small, have become more
sophisticated and more varied. It’s now
possible to get a humble microcontroller
to perform some simple AI functionality.
For example, the first AI app I created
ran on an Arduino Nano 33 IoT, see:
https://bit.ly/3pJ99Cz
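To give a flavour of what 'simple AI functionality' can look like on a microcontroller, here's a minimal Arduino-style sketch. It is not the app from the link above; it's just an illustrative hand-coded neural network (two inputs, two hidden neurons, one output) with weights picked so it approximates the classic XOR function.

```cpp
// Minimal illustrative sketch (not the app linked above): a tiny
// fixed-weight neural network evaluated on the microcontroller.
// The weights are hand-picked so the network approximates XOR.
#include <math.h>

float sigmoid(float x) { return 1.0f / (1.0f + expf(-x)); }

// Hidden layer: neuron 0 behaves like OR, neuron 1 like NAND.
const float w1[2][2] = { {  6.0f,  6.0f },
                         { -4.0f, -4.0f } };
const float b1[2]    = { -3.0f, 6.0f };

// Output layer combines them: OR AND NAND = XOR.
const float w2[2] = { 8.0f, 8.0f };
const float b2    = -12.0f;

float predict(float a, float b) {
  float h0 = sigmoid(a * w1[0][0] + b * w1[0][1] + b1[0]);
  float h1 = sigmoid(a * w1[1][0] + b * w1[1][1] + b1[1]);
  return sigmoid(h0 * w2[0] + h1 * w2[1] + b2);
}

void setup() {
  Serial.begin(9600);
  while (!Serial) { }  // wait for the serial port on native-USB boards
  for (int a = 0; a <= 1; a++) {
    for (int b = 0; b <= 1; b++) {
      Serial.print("XOR(");
      Serial.print(a);
      Serial.print(", ");
      Serial.print(b);
      Serial.print(") ~ ");
      Serial.println(predict(a, b), 3);  // prints close to 0 or 1
    }
  }
}

void loop() { }
```

In a real application the weights would be trained offline on a PC and then baked into the firmware; the on-device part is just this kind of multiply-accumulate-and-squash arithmetic, which even a humble microcontroller handles with ease.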
Until recently, heavy-duty AI applications ran on general-purpose devices such as field-programmable gate arrays (FPGAs) and graphics processing units (GPUs). Over the past year or so, however, I've talked to a bunch of companies that are developing special analogue, digital and even optical-based devices capable of
performing the billions-upon-billions
of computations required to implement
high-end AI models at extreme speed
while consuming relatively little power.
What about robots?
I’m glad you asked. I’ve also been talking
to several companies that are working
on humanoid robots. For example, EVE
robots from Halodi Robotics already operate as nighttime security guards in
factories. They are also employed in
hospitals and supermarkets.
I’m sure you’ve heard about ChatGPT,
the AI chatbot introduced by OpenAI
last November. A lot of people are worried about kids using chatbots like this
to do their homework, but I don’t think
we’ve fully wrapped our brains around
all the potential applications (and problems). For example, I recently read about
a non-invasive ChatGPT-based system
that can translate activity in the human
brain into a continuous stream of text:
https://bit.ly/3MqgFKy
One of my favourite science fiction books
is Great Sky River by Gregory Benford.
It's set tens of thousands of years in the future, when humans have spread across the Milky Way. As they approach the galactic centre, they butt heads (or whatever) with mechanoid civilizations. The
‘Mechs’ regard biological lifeforms as an
infestation to be eradicated. We never
do learn what happened to the Mechs’
creators, the first of whom had to be biological in nature. I don’t know about
you, but I can’t help thinking: ‘AI and
robots – what could possibly go wrong?’