One step closer to a dystopian abyss?
Techno Talk
Max the Magnificent
As always, we live in exciting times. Indeed, times are getting more exciting by the minute.
Can you imagine being able to simply look at something, ask a question, and receive a spoken
answer from an AI? I just saw such a system in action!
In my previous column (PE, April 2024), we talked about the concept
of mixed reality (MR), which encompasses augmented reality (AR),
diminished reality (DR), virtual reality
(VR) and augmented virtuality (AV).
Mixed reality is exciting, but the real
game-changer will come when we combine it with artificial intelligence (AI),
all boosted by the awesome data bandwidths promised by mmWave 5G and 6G
cellular communications. The question
is whether this will be a game-changer
for good… or the other sort.
Where are we?
A large language model (LLM) is an AI
model notable for its ability to achieve
general-purpose language understanding and generation. The first LLM to
impinge on the general public’s collective consciousness was ChatGPT.
Created by OpenAI, ChatGPT began to
roam wild and free in November 2022,
which is only around 18 months ago
as I pen these words.
This form of Generative AI (GenAI) is
now all around us. There are AI-based
writing tools (give them a few text
prompts and they will write your marketing slogans, product descriptions,
brochures… and so on); AI-based presentation-generation tools (give them a
few text prompts and they will generate
your PowerPoint presentation for you);
AI-based speech-to-text transcribers
(give them an audio or video file and
they will return the written transcript);
AI-based content summarisers (give
them an audio or video file – or the output from a transcriber – and they will
return a summary along with a list of
action items); text-to-image generators
(I’m currently having a lot of fun with
Stable Diffusion), and – most recently
– a company called DeepMotion announced a text-to-3D-animation tool
called SayMotion.
In my case, I’m particularly interested
in how AI might help with hardware
design and software development. As a
case in point, shortly before I started to
write this column, I whipped up a tiny
test program to run on an Arduino Uno.
This comprised only 34 lines of code,
20 of which were items like { and }.
Out of the 14 lines containing more
meaty statements, 11 of them (that’s
close to 80%) had bugs, and this was
one of my better days!
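By way of illustration, here's a minimal sketch of the sort of tiny Arduino test program I mean. To be clear, this isn't my original (bug-riddled) effort, just a representative stand-in: it reads a potentiometer on analogue pin A0 and uses the value to set the blink rate of the Uno's built-in LED.

    // A hypothetical stand-in for the sort of tiny Arduino Uno test
    // program described above (not the author's original code).
    const int LED_PIN = 13;   // the Uno's built-in LED
    const int POT_PIN = A0;   // potentiometer wiper on analogue pin A0

    void setup() {
      pinMode(LED_PIN, OUTPUT);
    }

    void loop() {
      // Map the 0-1023 ADC reading to a blink period of 100-1000ms
      int period = map(analogRead(POT_PIN), 0, 1023, 100, 1000);
      digitalWrite(LED_PIN, HIGH);
      delay(period);
      digitalWrite(LED_PIN, LOW);
      delay(period);
    }

Even something this small offers plenty of opportunities to get pin numbers, ranges and punctuation wrong, which is rather the point.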
Prior to the introduction of LLM-based assistants called copilots,
embedded software developers typically spent 20% of their time thinking
about the code they were poised to
write, 30% of their time writing the
code they’d just thought about, and
50% of their time debugging the code
they’d just written. By comparison,
60% of today’s embedded code is automatically generated by GitHub Copilot.
This would offer a tremendous performance boost if not for the fact that
– since Copilot was trained on open-source code – 40% of the code it
generates has bugs or security vulnerabilities. Fortunately, we have Metabob
from Metabob (don’t ask), which is a
form of copilot that identifies and addresses any problems introduced by
humans and other AIs.
Where are we heading?
There’s a famous quote: ‘It is difficult
to make predictions, especially about
the future.’ This quote is so famous
that no one knows who said it. It’s been
attributed to all sorts of people, from
Mark Twain to Niels Bohr to Yogi Berra.
Whoever did say this knew what they
were talking about. I would never have
predicted many of the technologies we
enjoy today. Contrariwise, some of the
technologies I was looking forward to
seeing have failed to materialise (in
more ways than one).
One of the questions I often ask technologists when I’m interviewing them
is, ‘Will we have technology XYZ next
year?’ (where XYZ is whatever futuristic technology forms the topic of our
conversation). Of course, they always
answer ‘No.’ My next question is, ‘Will
we have this technology in 100 years’
time?’ To this, they always respond
‘Yes.’ Then I say: ‘So, now we have the
endpoints, all we need to do is narrow
things down a little.’
I am confident it won’t be long before
we are all sporting some form of MR
‘something or other.’ Personally, I think
one of the MR interfaces intended for
daily usage that will arrive on the scene
sooner rather than later will look a bit
like a pair of ski goggles, but I’m prepared to be surprised by something else.
When people tell me that they would
have little use for an AI+MR solution, I
think back to the early 2000s when the
same folks told me they had no use for
phones that could take pictures (‘All
I want to do with my phone is make
calls’). All I can say is, ‘Look at you now!’
One of the examples I often present
is being able to ask my AI+MR combo
a question like, ‘What was that book
I was reading a few months ago that
talked about AI and Ada Lovelace?’ I
can envisage the AI responding with
the name of the book, while the MR
highlights its location on my bookshelf.
There are several required ‘building blocks’ that are starting to fall into
place. For example, a company called
Prophesee makes a teeny tiny event-based vision sensor that's only 3mm x
4mm in size and consumes only 3mW
of power. Another company called
Zinn Labs has mounted these sensors
in a pair of glasses frames that are also
equipped with an eight-megapixel forward-looking camera, microphones,
loudspeakers, and a cellular connection to a cloud-based AI. The camera
captures the scene while the sensors
track what your eyes are looking at.
I’ve seen a demo where the user looks
at something like a plant and simply
asks a question like, ‘Can I grow this
plant indoors?’ The AI immediately
responds, ‘Yes, but it requires bright
light, so you’ll need to place it near a
south-facing window.’
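To make the data flow concrete, here's a heavily simplified sketch of how such a gaze-plus-camera query might hang together. Every name here (readGazePoint, captureSceneImage, queryCloudAI) is invented purely for illustration; real glasses like Zinn Labs' will expose their own APIs, and the heavy lifting happens in the cloud-hosted model.

    // Hypothetical sketch of the AI+MR query flow described above.
    #include <iostream>
    #include <string>

    struct GazePoint { int x, y; };  // where in the frame the user is looking

    // Stub: the event-based eye sensors report the current gaze position.
    GazePoint readGazePoint() { return {320, 240}; }

    // Stub: grab a frame from the forward-looking camera (e.g., JPEG bytes).
    std::string captureSceneImage() { return "<image data>"; }

    // Stub: ship the frame, gaze point and spoken question off to a
    // cloud-hosted vision-language model and return its answer.
    std::string queryCloudAI(const std::string& frame, GazePoint gaze,
                             const std::string& question) {
      return "Yes, but it requires bright light, so you'll need to "
             "place it near a south-facing window.";
    }

    int main() {
      GazePoint gaze = readGazePoint();          // what are the eyes doing?
      std::string frame = captureSceneImage();   // what does the camera see?
      std::string answer =
          queryCloudAI(frame, gaze, "Can I grow this plant indoors?");
      std::cout << answer << '\n';               // in practice, spoken aloud
    }

The clever bit is that the gaze point tells the model which part of the scene the question is about, so 'this plant' needs no further explanation.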
Horns and swords
I feel like we are sitting on the horns of
a dilemma with a Sword of Damocles
hanging over our heads (I never metaphor I didn’t like). I hope we’re heading
toward an age of wonder; I fear we’re
one step closer to a dystopian abyss.
Pass me my dried frog pills.