Jeremy Leach's SYNTHESISER
18-Note Polyphony · Harmonic Synthesis · 3D Timbre Morphing · MIDI & USB Inputs · Low-Latency Audio · Onboard Patching
This advanced MIDI synthesiser is easy to build and can be hooked up
to any MIDI-compatible device. It lets you explore the broad range of
acoustic elements that capture the characteristics of real and imaginary
instruments. It is more than an experiment – it is a full-blown instrument
capable of forming rich, detailed sounds using a plethora of settings,
envelopes and waveforms – a blank canvas.
The MIDI Spectral Sound
Synthesiser uses seven dsPIC
chips, each running at 70 or
40MIPS, in combination to produce
sounds digitally. This gives it 18-note polyphony (ie, the ability to play 18
different notes simultaneously) and
complex sound creation, with ‘timbre morphing’ being the module’s
key feature.
It also has low latency, which is
important since you don’t want an
apparent delay between pressing a
key on a keyboard and the sound
being produced.
Being a standalone sound module, it
has the tantalising possibility of being
built into custom DIY musical instruments without the need for a computer.
The module is an adventure into
real-time sound synthesis, exploring
the broad range of acoustic elements
that capture the characteristics of
real and imaginary instruments. As
such, it is a wonderful way of appreciating types of sound, how musical
instruments work and why some
instruments are notoriously difficult to emulate.
While it is certainly a working
device as presented, it’s also a great
way to experiment with audio synthesis. This is a fun and stimulating
pursuit, with an endearing interplay
between digital waveform generation
and human perception.
The module’s design focuses on true
parallel processing by splitting the
computational load across six ‘Tone
Processor’ chips.
All the source code is available for
the firmware and the accompanying
Windows desktop software. However,
the software is also available pre-built,
including any version updates.
This is an advanced project, and
there is even a Technical Reference
Manual for those who want to explore
more deeply.
The whole topic of sound synthesis, interwoven with music history, is
a rich and intensely interesting evolutionary journey, driven by our instinctive desire to understand and create
sound. The Spectral Sound Synthesiser taps into that desire.
An overview of the system
Fig.1 shows how the overall system
works. It has a MIDI input for receiving MIDI note and control messages
(eg, from a keyboard), a USB input
for configuration by the Windows
software, and a stereo line-level
audio output jack, for hooking it up
to an amplifier.
You can create patches with the
Windows software, and a certain number of these patches are loaded onto
the module and stored internally. You
can send any tweaks to patch settings
immediately to the module and hear
the result.
The ‘Master Controller’ is a
PIC18LF25K50 8-bit micro with useful
USB connectivity. This chip is common in embedded systems requiring
USB. It functions as a hub, processing
incoming USB and MIDI messages. It
also allocates processors to tasks in
the rest of the system.
Practical Electronics | July | 2023
The six Tone Processors are
dsPIC33EP512MC502-I/SP 16-bit
chips running identical code to generate digital sound samples. Each calculates up to three live note instances
at once, so the system has a maximum
of 6 x 3 = 18-note polyphony. Each
Tone Processor holds a single patch,
but different ICs can have different
patches, making this MIDI instrument
‘multi-timbral’.
A single ‘Mixer’ chip, a 16-bit
dsPIC33FJ128GP802-I/SP, mixes the
samples from all the Tone Processors,
limits the generated audio level using
automatic gain control (AGC), then
passes the audio out through its inbuilt
stereo DAC to an MCP6022 op amp.
The output is ‘pseudo stereo’, using
a well-known audio trick called the
Haas effect, where feeding one ear a slightly delayed copy of the signal sent to the other
gives a very convincing impression of
a stereo field!
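The pseudo-stereo trick is easy to model in software. The following Python sketch is illustrative only, not the module's firmware; the delay value is an assumption (typical Haas delays are a few tens of milliseconds), as the article does not state the one used:

```python
SAMPLE_RATE = 41700      # the module's audio sample rate (from the article)
HAAS_DELAY_MS = 15       # assumed delay; not taken from the firmware

def pseudo_stereo(mono, delay_ms=HAAS_DELAY_MS, rate=SAMPLE_RATE):
    """Return (left, right) channels: right is a delayed copy of the
    mono input, which the ear fuses into a single 'wide' sound."""
    delay = int(rate * delay_ms / 1000)
    padded = [0.0] * delay + list(mono)
    return list(mono), padded[:len(mono)]
```

Because the delayed copy arrives within the Haas window, the brain hears one widened sound rather than an echo.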
The module can hold several
patches and ‘performances’ in a
24LC512 EEPROM IC.
What is additive synthesis?
Ongoing research into hearing and
human perception reveals that we
are still a long way from completely
understanding how our brains process and identify sound. A key element is timbre, which is related to a
sound’s frequency spectrum and how
it changes over time.
Additive synthesis is a method
of creating and modulating timbres
based on the fact that any periodic
function can be expressed as the sum
of a series of sinewaves – the ‘Fourier
series’, described by Joseph Fourier
in 1822. He was using it to solve heat
transfer functions, but this idea soon
became widespread, from predicting
tides to planetary motion, and much
later, audio synthesis.
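As a minimal illustration of the idea (not the module's actual code), a few lines of Python can sum weighted sinewaves into one cycle of a waveform; feeding in odd harmonics at 1/n amplitude approximates a square wave:

```python
import math

def additive_wave(harmonic_levels, length=256):
    """One cycle of a periodic wave built as a weighted sum of
    harmonic sinewaves (a truncated Fourier series)."""
    return [
        sum(level * math.sin(2 * math.pi * (h + 1) * i / length)
            for h, level in enumerate(harmonic_levels))
        for i in range(length)
    ]

# Odd harmonics at 1/n amplitude give a square-wave approximation.
square_ish = additive_wave([1.0, 0.0, 1/3, 0.0, 1/5, 0.0, 1/7])
```

Adding more odd harmonics sharpens the approximation; changing the weights instead produces an entirely different timbre from the same machinery.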
For samples of what the
Synthesiser can do, visit
siliconchip.au/link/abeo
Fig.1: this block diagram shows how the Spectral Sound Synthesiser works. The
Master Controller receives MIDI messages from the MIDI In port and patch data
from the computer via USB. It commands the six Tone Processors to generate
sounds based on the stored patches and possibly stored performance data.
These sounds are fed to the Mixer and then to the analogue audio output.
The simplicity of the idea is appealing because it means that complex timbres can be constructed just by adding
sinewaves with appropriate weights
and phases.
It turns out that phase is not generally important because our hearing disregards it. This makes sense when pondering sound waves bouncing about in a
room; despite the phases of different frequencies getting mangled, we generally
do not perceive any timbral difference.
Another appeal is that sounds in
nature are based on vibrations where
the timbral sinewaves have frequencies that are integer multiples, or harmonics, of the base ‘fundamental’ frequency. This well-defined relationship lends itself to computation. Musical instruments can be recognised by
their characteristic harmonic levels,
with some examples shown in Fig.2.
The large and evolutionary family tree of electronic synthesisers
includes prominent examples of
additive synthesis. For example, the
well-known and beloved Hammond
organ dating back to 1935 stacks tones
The MIDI Synthesiser fits in an instrument case measuring 150 x 100 x 40mm. A different case could be used as long as it's bigger; the required height depends on the heatsink you use.
generated by pickups placed close
to rotating mechanical ‘tonewheels’.
Also, the early Fairlight Quasar
synth of the 1980s was additive, as
were the Synclavier and a few Kawai
keyboards. Loom, a modern VST
instrument, is also an additive type.
With enough computing power,
additive synthesis makes the ‘morphing’ of timbres possible by altering the
set of sinewaves being summed over
time – akin to what happens all around
us with natural sounds.
Additive synthesis also has the
great advantage of operating in the frequency domain rather than the time
domain. This makes filtering a simple
concept, where the filter contour simply scales the levels of the base sinewaves. Brick-wall filtering is nothing
more than including or excluding certain sinewaves.
This method of synthesis can create rich, stimulating and captivating
sounds. But it has limitations when
emulating real instruments compared
to sample-based synthesis.
The problem is that natural sound is
far more complex than just harmonics;
there are ‘in-harmonic’ frequencies in
the spectrum, especially for percussive
sounds. There is noise from blowing,
scraping and scratching. The harmonics
are not always exact integer multiples.
So, as a sonic tool, additive synthesis is great. But it cannot always emulate natural sound easily.
The Fairlight CMI synthesiser of
the 1980s (which took its name from
a Sydney Harbour ferry) was a breakthrough in sound production through
sampling. It revolutionised pop music
with genuinely new sounds.
The irony is that the inventors
started by using additive synthesis,
according to co-founder Kim Ryrie
(interestingly, also the founder of
ETI magazine): ‘We regarded using
recorded real-life sounds as a compromise – as cheating – and we didn’t feel
particularly proud of it.’
This Fairlight model was a ‘sampler’, with the ability to record sound,
soon followed by cheaper ‘Romplers’
with recorded sound baked into ROM.
These days we can have gigabytes of
samples on solid-state hard disks.
It is undeniable that sample-based
synths can give amazing results, especially with many nuanced ‘layers’ for
parameters such as note velocity. However, they use masses of memory instead
of modelling anything on a physical basis. Nevertheless, from the early
sampled tape loops of the Mellotron
onwards, as used on classics such as
the pipe organ in the Beatles' Strawberry Fields, it is clear that samples are
here to stay.
Fig.2: approximations of the harmonic structure of different instruments. From
left to right, the bars represent the sequential harmonics above the fundamental.
The harmonic structure is what defines the timbre of an instrument, while the
fundamental frequency is determined by the pitch of the note being played.
As computing power has steadily
increased in recent years, we have
seen growth in physically modelled
sound, such as in the popular 'Pianoteq' VST pianos based on the physics of real instruments, ancient and
modern. Additive synthesis can also
be categorised as physical modelling
to a degree because of its timbre-based
approach and dynamic nature.
Harmonics and the equal-tempered scale
Real instrument sounds are generated through vibration, such as the
movement of air in a flute, the vibration of a guitar string or the oscillation of a drum's skin. The vibrations create standing waves, with
fixed nodes and moving antinodes.
The nodes divide the length into
equal divisions, leading to the integral harmonics seen in the frequency
spectrum of many instruments.
Fig.2 broadly shows how these
harmonics have characteristic levels
in different instruments, although it
is extremely generalised.
An instrument plays at a pitch we
recognise as the fundamental frequency, but the tone has a ‘colour’
dictated by the relative strength of
the harmonics. The fundamental is
known as the first harmonic; the second harmonic is at double the fundamental frequency (an octave higher),
the third harmonic at three times the
fundamental frequency and so on.
But it is seldom realised that some
harmonic frequencies only roughly
match the pitches that we recognise in
the chromatic (12-note) musical scale!
Our brains have heard the pitches of
notes from our earliest memories. Yet,
the musical scale we use today is relatively recent, and human beings have
tried several alternatives, going right
back to Pythagoras.
The scale we use today is called
the ‘equal-tempered’ scale, where
‘equal’ refers to a fixed frequency
ratio between any note and its neighbour. To calculate the frequency of a
note a semitone higher, we can multiply by this fixed factor. Since notes
one octave apart have a ratio of 2:1,
if each semitone has a fixed geometric ratio, that ratio must be the 12th
root of two (approximately 1.0595:1).
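That relationship is a single line of arithmetic, shown here as a quick illustrative sketch (the function name is ours, not from the article's software):

```python
SEMITONE = 2 ** (1 / 12)      # the equal-tempered semitone ratio, ~1.0595

def note_freq(semitones_from_a, a_freq=440.0):
    """Frequency of the note a given number of semitones above
    (positive) or below (negative) the 440Hz reference A."""
    return a_freq * SEMITONE ** semitones_from_a
```

Twelve semitones multiply the frequency by exactly two, which is precisely the octave constraint the scale was built around.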
Remember that this is a human
invention to get a system of equal
ratios so that music can be transposed without altering how it
sounds. Although the oddities of
Fig.3: the equal-tempered scale has the advantage that music can be played
in any key without retuning the instrument. But some note harmonics do not
precisely match any note in the scale, with the worst being the 7th and 11th
harmonics. Usually, though, such high harmonics are not especially loud, so
this tends not to matter.
Fig.4: the main tasks and calculations that are constantly being processed by the
six Tone Processor chips that do most of the synthesis work.
previous scales contributed to the
richness of music diversity, and
some bemoan their demise, the
equal-tempered scale makes a certain amount of sense.
Fig.3 is a detailed analysis of musical note A3 (440Hz), showing how
the harmonics of this note do not
always accurately align to the pitches
of the equal-tempered scale. Power-of-two harmonics have an exact
match in the scale, but others don’t
(although they are often quite close).
This reveals a degree of ‘in-
harmonicity’ in harmonics (ironic,
given the name). The analysis
applies to all notes; the 7th and 11th
harmonics deviate the most from
recognisable pitches, and if audible, they sound ‘flat’ and dissonant.
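The deviation is easy to quantify in cents (hundredths of a semitone). This short sketch, not taken from the article's software, measures how far any harmonic lands from the nearest equal-tempered note:

```python
import math

def cents_off_scale(harmonic):
    """Deviation in cents of a harmonic from the nearest
    equal-tempered pitch (100 cents = one semitone)."""
    cents = 1200 * math.log2(harmonic)   # cents above the fundamental
    return cents - 100 * round(cents / 100)

# Power-of-two harmonics land exactly on scale notes; the 7th is
# about 31 cents flat and the 11th about 49 cents flat.
```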
These imperfections are well-known to instrument makers. For
example, pianos are designed to have
the hammer strike the strings at the
seventh vibrational node to suppress
this ‘ugly’ seventh harmonic! The
11th harmonic is less noticeable,
often being naturally quieter.
Tone processors
Fig.4 shows the heart of one of our
Tone Processor ‘Note Instances’, showing how it generates sound. A note
instance represents a note we play on
the MIDI-connected keyboard. The
Master Controller ensures that the
played notes are evenly spread across
the available Tone Processors.
When a note instance is started on a
Tone Processor chip, the first thing that
happens is the calculation of the waveform to be used based on the ‘static’
patch settings, plus other ‘dynamic’
factors such as the note velocity. This
involves looping through all active
harmonics and adding corresponding
sinewaves together.
Because harmonics are exactly integer multiples of the fundamental, the
wavetable holds exactly one cycle of
the summed periodic wave. Once that
has been calculated, this physical table
in memory becomes ‘active’ for the
note instance. To do this, table pointers are swapped so that we never have
to copy data slowly and inefficiently
from one table to another.
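The pointer-swap idea can be sketched as follows; this is an illustrative Python model, and the table length and class name are assumptions rather than details from the firmware:

```python
import math

TABLE_LEN = 256   # hypothetical table size; the firmware's isn't stated

class NoteInstance:
    """Double-buffered wavetable: the new cycle is computed into a
    spare table, then the two table references are swapped, so the
    playback side never reads a half-built table and nothing is copied."""
    def __init__(self):
        self.active = [0.0] * TABLE_LEN
        self.spare = [0.0] * TABLE_LEN

    def refresh(self, harmonic_levels):
        for i in range(TABLE_LEN):
            self.spare[i] = sum(
                level * math.sin(2 * math.pi * (h + 1) * i / TABLE_LEN)
                for h, level in enumerate(harmonic_levels))
        self.active, self.spare = self.spare, self.active  # swap, don't copy
```

On the dsPIC the same effect is achieved by exchanging table pointers, which takes a few instructions regardless of table size.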
The wavetable for a note instance
gets regularly refreshed during its
active life cycle. The rate of refresh is
not fixed but is approximately 50Hz.
As a side note, it is fascinating to
read research where they have found
that the threshold where humans can
detect a change in timbre occurring
is often a lower rate than this. Timbre detection is clearly a demanding,
abstract recognition task in the brain.
During note generation, several 'gain envelopes' can be applied to aspects of the sound; the most familiar is the sound's amplitude envelope. Fig.5 indicates how the
system calculates envelopes. Both linear and exponential envelopes can be
created, and each section of the ADSR
(attack, decay, sustain, release) envelope has a ‘target’ value. During each
section, the envelope’s current value
moves towards the target.
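A single envelope step might look like this. It is a simplified floating-point sketch of the scheme described above, not the module's integer-based firmware:

```python
def envelope_step(current, target, rate, exponential=False):
    """Move an envelope value one step towards its section's target.
    Linear: a fixed increment per step. Exponential: a fixed fraction
    of the remaining distance, giving a natural decay curve."""
    if exponential:
        return current + (target - current) * rate   # 0 < rate < 1
    step = min(rate, abs(target - current))          # never overshoot
    return current + step if target > current else current - step
```

Repeated exponential steps converge on the target without overshooting, which suits decay and release; the linear form reaches its target in a predictable time, which suits attack.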
Exponential curves exist everywhere in nature, relating to energy
decay. So, that is a natural choice,
and not just for amplitude. For example, when plucking a string, the high-
frequency content decays first. The
Fig.5: envelopes are time-based profiles that can be applied to various synthesis
parameters. The easiest to understand is the volume envelope, which varies the
loudness of the note from the time it is triggered until it is no longer audible.
Fig.6: ‘3D timbre morphing’ is a solution to the problem that the harmonics of
various instruments can vary depending on which note is being played, how
hard it is being played and so on. This is especially obvious on instruments like
pianos, where each key can have a unique sound, and louder notes can trigger
various resonances.
jostling of atoms in high-frequency
vibration will use up energy at a
higher rate.
A note instance also features three
‘Low-Frequency Oscillators’ or LFOs:
Vibrato varies the pitch, Tremolo the
amplitude and Timbre the harmonic
levels. LFO modulation can add so
much character to sound. There are
also envelopes for the depth of this
modulation, allowing, for example, a
gradual onset of vibrato.
Another interesting point is that
whereas many synths would use Tremolo across the total sound output of
the synth, this module modulates per
note, making the overall sound more
complex and interesting.
Timbre morphing
Determining the harmonic levels to
use when constructing a waveform
is quite a complicated process. Fig.6
shows that a patch holds the harmonic data for 75 waveforms, with
the waveform to synthesise depending on three parameters. The ‘Note’
parameter is the position of the note
on the keyboard. The ‘Intensity’
parameter most often means note
velocity. The ‘Waveform’ picks from
five waveforms.
Each point in this conceptual 3D
space grid has a set of harmonic levels defining a waveform. The current
parameter values define the required
point internal to this space, and the
harmonic levels to use are interpolated
from the nearest defined grid points
in this space.
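The interpolation along one axis of that grid can be sketched as below; this is a one-axis illustration of the idea, with names of our own invention (the module interpolates along all three axes):

```python
def lerp_levels(levels_a, levels_b, t):
    """Blend two stored sets of harmonic levels: t = 0 gives the
    first set, t = 1 the second, and values between morph the
    timbre smoothly from one to the other."""
    return [a + (b - a) * t for a, b in zip(levels_a, levels_b)]

# Halfway between a bright waveform and a dull one:
blend = lerp_levels([1.0, 0.8, 0.6], [1.0, 0.2, 0.0], 0.5)
```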
Taking this further by modulating through this space with the Timbre LFO can give impressive realism
compared to plain vibrato. Typical
vibrato purely modulates the pitch of
the whole waveform, whereas timbre
modulation is harmonic-based and
therefore, a far more complex modulation for our brains to perceive.
It is thrilling to hear this difference
and realise that our brains feed off
the interest in sound. Perhaps, considering the incredibly clever processing that our brains can perform
with language, pattern detection and
all aspects of sound, this realisation
should not be too surprising.
Towards greater realism
We have already mentioned that natural sound includes in-harmonic elements, which do not fit the neat integer
multiples of harmonic frequencies. In
an attempt to address this point, this
synthesiser includes some additional
features outside of the purely additive-
synthesis approach.
First, a noise envelope is available
to help simulate ‘blown’ instruments.
This is a white noise generator with an
adjustable low-pass filter.
Second, you can add short in-
harmonic samples. These are hardcoded clips of the sound of taps,
scratches, clicks and bonks. However,
the sample feature also includes an
implementation of the well-known
‘Karplus Strong’ delay line technique
of plucked string synthesis.
The 1983 paper by Kevin Karplus
and Alex Strong entitled ‘Digital Synthesis of Plucked-String and Drum
Timbres’ first described this technique.
It is a computationally simple but
effective method of generating realistic, decaying string sounds that start
off life in the delay buffer as noise. It
adds a powerful tool to this module,
even though it does not have anything
to do with additive synthesis!
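The algorithm is short enough to sketch in full. This is an illustrative Python rendering of the published technique, not the module's dsPIC implementation:

```python
import random

def karplus_strong(freq, duration_s, rate=41700, seed=1):
    """Plucked-string synthesis: a delay line filled with noise,
    recirculated through a two-point average (a gentle low-pass).
    The buffer length sets the pitch; the filtering makes the tone
    decay from bright noise to a pure, fading note."""
    rng = random.Random(seed)
    n = int(rate / freq)                       # delay length sets the pitch
    line = [rng.uniform(-1.0, 1.0) for _ in range(n)]
    out = []
    for _ in range(int(rate * duration_s)):
        sample = line.pop(0)
        line.append(0.5 * (sample + line[0]))  # feedback with averaging
        out.append(sample)
    return out
```

Each pass through the buffer smooths the stored noise a little more, so the high frequencies die away first, exactly as they do on a real plucked string.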
There are also settings to randomly
or systematically detune the frequency
of played notes, in an attempt to introduce the impurities of real instruments. Plus, there is an option of
using two wavetable oscillators per
note instance, detuned by a specified
amount, giving a chorus-like effect,
which tries to account for the fact that
instruments like the piano use multiple detuned strings per note.
A final feature is the ‘Body Resonance Filter’. This attempts to emulate
the body resonance of a real instrument by filtering the overall system
sound. After specifying the filter contour in the app, it scales the harmonic
levels. Although this method has great
theoretical appeal, it has mathematical limitations.
Despite all these extra features, there
are still real-life complexities that the
module just cannot tackle. For example, a piano has peculiarities due to the
stiffness of its strings, where the higher
harmonics get sharper compared to
the expected harmonics of the string.
This is because the stiffness effectively shortens the string for higher frequencies, raising the harmonic oscillation frequency. This effect applies
to all stringed instruments, and is
something our system cannot address
because the model relies entirely on
integral harmonics.
A real-time system
The whole of the module is an example of a ‘hard’ real-time system, where
the deadlines of sample production
and processing are immovable. This
presents considerable challenges, and
the development of the module was a
slow evolution of coding, measuring,
refining and sometimes redesigning.
The Tone Processors all run
entirely in parallel and are polled
by the mixer chip to provide samples at the audio sampling rate of
41.7kHz. Inside each Tone Processor
is a hierarchy of interrupts, as shown
in Fig.7, made possible by the dsPIC’s
ability to assign priorities.
The main routine of a Tone Processor, the centrepiece of the entire system, is just a simple loop that recalculates wavetables. This ‘background’
task is unpredictable in duration, is not
on a deadline, and can vary depending
on the interrupt activity and the complexity of the waveform being built.
This means that the timbre refresh
rate could slow down in certain circumstances – although, in use, performance is very acceptable, and timbre changes are perceived as fluid
and smooth.
The processing ‘layers’ above this
base main loop are concerned with
calculating envelope steps, calculating the sample output and processing
received data.
A trick used to improve throughput
on the SPI bus between the Tone Processors and the Mixer is only sending the changes in sample values. The
summing of sines in a Tone Processor
can result in a total value exceeding 16
bits. The total on the Tone Processor
is a 32-bit signed integer, but the Tone
Fig.7: an overview of all the tasks that the Tone Processor chip has to handle,
in order of priority. The highest priority events are those that would cause the
sound to break up or otherwise give unexpected results if delayed.
Processor only sends the change in this
total, capped at 16 bits, to the Mixer.
This can cause signal distortion,
but statistically, this will happen
rarely. The performance advantage
of this method massively outweighs
rare anomalies that are probably not
even noticeable.
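The scheme amounts to the following; this is an illustrative model with names of our own choosing, while the real transfer happens in dsPIC code over SPI:

```python
INT16_MIN, INT16_MAX = -32768, 32767

def encode_delta(prev_total, new_total):
    """Encode only the change in the 32-bit running total, clamped
    to 16 bits for the bus transfer. Clamping occasionally distorts
    a very large jump, but halves the data sent per sample."""
    return max(INT16_MIN, min(INT16_MAX, new_total - prev_total))

def apply_delta(running, delta):
    """Mixer side: reconstruct the total by accumulating deltas."""
    return running + delta
```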
The software and hardware are
designed for speed. All the dsPICs
are running at their fastest. All calculations are integer-based, coupled
with extensive use of the on-chip
hardware multiplier via the compiler’s ‘__builtin’ commands. The code
extensively uses shift operations for
fast multiplication and division, and
numerous lookup tables are used,
including a detailed sine lookup.
Circuit details
The full circuit is shown in Fig.8. The
first thing to note is that the six Tone
Processors (IC5-IC10) are identical
dsPICs configured in the same fashion,
each with just a handful of associated
components: one Vdd bypass capacitor, one Vcap capacitor (required for
the chip’s internal regulator) and one
10kΩ MCLR pull-up resistor to prevent
spurious resets.
Besides the power supply, the only connection to these chips is a common SPI bus; they are pure number crunchers, and all commands and data are sent on this bus. The only difference between their connections is that each Tone Processor's CS2a-CS2f input (pin 4) connects to a different select line on the Mixer, IC3.
The Mixer is a different (but
related) type of dsPIC processor.
Besides being connected to this SPI
bus and the six chip select lines for
the Tone Processors, it also has two
differential analogue outputs from an
internal stereo DAC.
These signals are fed to op amp IC4,
which converts the differential signals
to single-ended audio signals suitable
for feeding to the CON2 output jack.
Simultaneously, this circuitry filters
out the DAC step artefacts using low-pass filters built from added capacitors
and the existing gain-setting resistors.
A virtual ‘half-supply’ rail is generated using zener diode
ZD1 biased from the +3.3V rail so that the audio signals
from IC3 remain within the supply rails of the op amp.
Mixer IC3 also connects to the 24LC512 EEPROM (IC11)
using a two-wire I2C serial interface (SDA and SCL). That
chip has its own bypass capacitor plus pull-up resistors
for those serial lines, and that’s it. The last task for IC3
is to drive the Mixer Alert LED, LED2, from its RA0 output (pin 2).
MIDI input, USB and other control tasks fall on the
PIC18LF25K50, IC2. It monitors the presence of USB 5V at its
RA0 digital input (pin 2) via a 2.2kΩ/10kΩ ‘divider’, which
Fig.8: the entire Synthesiser circuit, which is somewhat unusual in that it mainly consists of eight PIC microcontrollers (of
three varieties), all communicating via two separate SPI serial buses. The remainder of the circuit comprises the EEPROM
(IC11) used to store patch and performance data, the power supply, MIDI input and audio output.
mainly exists to limit the current into that pin and ensure
that it’s pulled to 0V when no USB connection is present.
IC2 and IC3 communicate via a second separate SPI bus,
with a dedicated chip select line, from pin 7 of IC2 to pin
22 of IC3. IC2 also drives LED1, the MIDI Alert LED, from
its RB6 digital output (pin 27).
External potentiometer VR1 (the volume control) connects to CON4, placing it across the 3.3V supply. Its wiper
goes to analogue input AN11 of IC2 (pin 25). IC2 reads the
voltage at the wiper using its analogue-to-digital converter
(ADC) and passes the digital value along to IC3, which
then scales its output to provide the desired volume level.
The Spectral Sound Synthesiser PCB is relatively easy to build, although with about a dozen ICs, many of them having 28 pins, there are lots of solder joints to make. Be careful to make each joint properly or the unit won't function correctly.
The pot value or type isn’t critical, but 100kΩ is reasonable. Scaling the audio sample values entering
the DAC, rather than directly adjusting the op-amp gain, simplifies the
PCB at the cost of reduced audio bit-
resolution with the volume turned
down. In practice, it’s hard to hear
this degradation.
That just leaves the MIDI input,
clock signal distribution and the
power supply to describe.
The MIDI signal is applied to CON6,
and it powers the IR LED within
FOD260L opto-isolator OPTO1. A
220Ω resistor provides current limiting, while diode D2 prevents the LED
from being reverse-biased. It is essential to use the FOD260L opto-coupler
as this is suitable for 3.3V operation
– other varieties may well not work.
The output transistor in OPTO1 is
operated in common-emitter mode
with a 470Ω pull-up resistor. The
resulting signal goes to the RX input
(pin 18) of IC2.
A single external oscillator is used
because we have eight microcontrollers that all need clock sources.
This is built using crystal X1, its two
33pF load capacitors and unbuffered
inverter IC1a. The resulting 16MHz
signal is inverted by IC1b and buffered by IC1c and IC1d, then fed to all
the microcontrollers’ clock input pins.
We don’t recommend using a buffered inverter in place of IC1, such as
the more common 74HC04, as it might
not oscillate correctly.
The power supply is simple; the
unit is powered with 5V DC from barrel socket CON1, and this flows via a
Programming the microcontrollers
This project uses eight microcontrollers of three different types. They are all
Microchip products (one PIC and seven dsPICs), so they can be programmed
with a PICkit 3, PICkit 4, Snap programmer or similar. Or you can build it from
our kit, which will come with all the micros pre-programmed.
Each different type of micro has its own software. In other words, there are
three sets of firmware. The codes are given in the parts list, and the download
package on our website includes the source code for all three, plus the three
HEX files you need to program them.
If you want to rebuild the source code to produce new HEX files (eg, you want
to make changes to the way it functions), you’ll need the Microchip XC16 Pro
compiler (which can be ordered from the Microchip website; there are also free
trial versions). Otherwise, optimisation level 3 will not be available, and the resulting firmware will not be fast enough to work correctly.
The PIC18 code is less critical, so you can probably get away with using the
free XC8 compiler to build that HEX file.
reverse-polarity protection diode to
the inputs of linear regulators REG1
and REG2. REG1 powers all the digital circuitry while REG2 powers the
analogue circuitry, which is basically
just op amp IC4 and the bias for zener
diode ZD1.
Increasing the signal-to-noise
ratio (SNR)
A challenge with any system comprising mixed digital and analogue circuitry is to stop the digital noise bleeding through into the audio output. The
module PCB takes the basic steps of
separating audio and digital components as much as possible, with separate regulators and the use of a ground
plane. However, additional measures
have been taken to ensure a generally
quiet and acceptable audio system.
One such measure is an audio limiter in the mixer audio code using
advanced look-ahead AGC. A limiter
squashes the dynamic range slightly by
attenuating peaks, thereby effectively
boosting the quieter sounds and lowering the noise floor.
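A much-simplified look-ahead limiter is sketched below. This is illustrative only; the firmware's AGC also smooths the gain over time to avoid audible pumping:

```python
def limit(samples, threshold=0.8, lookahead=3):
    """Scale each sample by the gain needed to keep the loudest
    peak within the look-ahead window under the threshold, so the
    gain can start falling before a peak actually arrives."""
    out = []
    for i, s in enumerate(samples):
        peak = max(abs(x) for x in samples[i:i + lookahead + 1])
        gain = threshold / peak if peak > threshold else 1.0
        out.append(s * gain)
    return out
```

The look-ahead is what distinguishes a limiter like this from simple clipping: the gain reduction anticipates the peak rather than reacting to it.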
This does not seem natural when
trying to emulate polyphonic instruments; however, limiters and compressors are commonplace in audio
reproduction, and it has a significant
beneficial effect in this system.
We are also using a trick called
pre-emphasis and de-emphasis. The
digital audio generated has high-
frequency boost applied, and the analogue signal processing circuitry has a
matching high-frequency attenuation
applied through a low-pass filter on
the op amps. This way, the higher,
more noticeable element of circuit
noise is suppressed.
The module actually ‘boosts’ the
higher harmonic levels by carefully
attenuating lower harmonic levels. It
is nice that complex digital filters are
not needed to do this!
Finally, the Patch Editor application automatically boosts harmonic
levels to the maximum, ensuring that
the summed wavetable waveform is
across the full signed 16-bit range,
maximising the SNR.
Construction
The Spectral Sound Synthesiser is
relatively straightforward to build,
as the use of numerous microcontrollers minimises the number of separate
components required. Most components mount on a double-sided PCB
coded 01106221 that measures 145 x
94mm. The overlay diagram for this
PCB is shown in Fig.9.
There is nothing remarkable about
construction except that it requires
good soldering skills to solder 200+
pins accurately! We recommend using
IC sockets throughout, including for
the opto-coupler; while sockets can
cause long-term reliability problems
due to oxidation of the contact points,
there is no real provision for in-circuit programming.
Still, since most constructors will be
using pre-programmed micros (or programming them before assembly), you
could consider soldering them directly
to the board as long as you are confident they have been programmed correctly. Note that we haven’t specified
a socket for IC1 as there’s little reason
to use one there.
Start with the resistors, checking
each lot of values with an ohmmeter
before soldering them in place. Follow with the three diodes. Each is a
different type, so don’t get them mixed
up, and ensure they are fitted with the
cathode stripes facing as shown.
Next come the IC and opto sockets
(or ICs and opto-coupler). Ensure they
all have pin 1 facing towards the top
of the board, and if soldering the ICs
to the board, be very careful not to get
the different 28-pin types mixed up.
After that, mount all the non-
polarised ceramic and MKT capacitors; there are 100nF ceramic and MKT
capacitors, so make sure the MKTs go
in the positions shown in Fig.9.
Now install the electrolytic capacitors with the longer positive leads to
the pads marked ‘+’ in Fig.9, followed
by the polarised pin headers and jack
socket CON2.
Next, solder the LEDs in place with
the longer leads to the side marked A.
Fit these with sufficient lead length so
that they will reach the top panel of the
case once the PCB has been installed
(see the section ‘Wiring it up’ below
for a discussion on case selection).
Follow with the two regulators, first
attaching the heatsink to REG1 using
the machine screw, nut and washer.
That just leaves crystal X1, DC
socket CON1, USB socket CON5 and
MIDI socket CON6 on the PCB. Mount
those in order of increasing height.
Finally, if you’ve soldered sockets
to the board, plug in all the ICs and
the opto-coupler now, paying careful
attention to their pin 1 orientation and
not getting the different 28-pin and
8-pin ICs mixed up.
Case selection
The PCB is designed to fit into the
case specified in the parts list, and the
front panel label (Fig.10), lid artwork
(Fig.11) and drilling template (Fig.12)
all fit that case. These can also be
downloaded as PDFs and a PNG from
the July 2023 page of the PE website at:
https://bit.ly/pe-downloads
Parts List – Spectral Sound MIDI Synthesiser
1 double-sided PCB coded 01106221, 145 x 94mm from the PE PCB Service
1 instrument case [Takachi YM-150; RS Cat 373-2255]
4 stick-on rubber feet
1 front panel label, 145 x 37mm (see Fig.10)
1 lid label, 141 x 85mm (see Fig.11)
1 5-6V DC 1A regulated plugpack •
1 16MHz crystal, HC-49 (regular or low-profile) (X1)
1 PCB-mount DC barrel socket, 2.1mm or 2.5mm ID to suit plugpack (CON1)
1 3.5mm stereo DPST switched jack socket (CON2) [Altronics P0094, RS
Cat 913-1021 or CUI SJ1-3555NG]
1 2-pin polarised header and matching plug (CON3, for power switch)
1 3-pin polarised header and matching plug (CON4, for volume control)
1 through-hole full-size type-B USB socket (CON5) [Jaycar PS0920,
Altronics P1304A/P1304B]
1 5-pin 180° DIN socket, right-angle PCB mount (CON6) [Jaycar PS0350,
Altronics P1188B or RS Cat 491-087]
1 SPST or SPDT panel-mount slide switch (S1, power)
1 100kΩ panel-mount linear potentiometer and knob (VR1, volume control)
8 28-pin narrow DIL IC sockets (optional; for IC2, IC3 and IC5-IC10)
3 8-pin DIL IC sockets (optional; for IC4, IC11 and OPTO1)
1 TO-220 heatsink (REG1) [maximum 40mm wide, 13mm deep from tab,
<18°C/W; RS Cat 263-251 used for prototype]
4 M3-tapped 6.3mm spacers
1 M3 x 10mm panhead machine screw, shakeproof washer and hex nut
8 M3 x 5mm panhead machine screws
2 M2 x 10mm countersunk screws and nuts (for slide switch mounting)
1 100mm length of rainbow cable (for wiring to S1 and VR1)
1 small tube of thermal paste
• up to 9V can be used, but 5-6V results in more reasonable dissipation
Semiconductors
1 74HCU04 unbuffered hex inverter, DIP-14 (IC1)
1 PIC18LF25K50-I/SP 8-bit micro programmed with 0110622A.HEX (IC2)
1 dsPIC33FJ128GP802-I/SP 16-bit microcontroller programmed with
0110622B.HEX (IC3)
1 MCP6022-I/P rail-to-rail op amp, DIP-8 (IC4)
6 dsPIC33EP512MC502-I/SP 16-bit microcontrollers programmed with
0110622C.HEX (IC5-IC10)
1 24LC512-I/P 64Kbyte I2C EEPROM, DIP-8 (IC11)
1 FOD260L opto-coupler, DIP-8 (OPTO1)
2 LF33CV 3.3V low-dropout linear regulators (REG1, REG2)
2 3mm high-brightness green LEDs (LED1, LED2)
1 1.8V 250mW zener diode (ZD1) [eg, 1N4614]
1 1N4004 400V 1A diode (D1)
1 1N4148 75V 150mA signal diode (D2)
Capacitors
1 100μF 6.3V electrolytic
1 10μF 16V electrolytic
8 10μF 16V X7R ceramic
2 1μF 63V MKT
4 100nF 63V MKT
13 100nF 50V X7R ceramic
2 33pF 50V ceramic
Resistors (all 1/4W 1% metal film axial)
1 1MΩ
6 4.7kΩ
1 1kΩ
1 100kΩ
4 3.3kΩ
2 470Ω
10 10kΩ
4 2.2kΩ
1 220Ω
UK builders – Kit SC6261 – We recommend UK builders purchase an almost
complete kit from Silicon Chip. It includes programmed dsPICs and everything
else except the case, feet, labels and plugpack. AU$260 including delivery to the
UK. See: https://www.siliconchip.com.au/Shop/20/6261
You could use a different case provided it’s large enough to house the PCB,
since all the connectors/controls (apart
from the two which are panel-mounted)
are along one edge of the PCB.
With a 145 x 94mm PCB, most
cases measuring at least 165 x 100mm
should be suitable. The height
required depends on the heatsink
you are using for REG1. The specified
heatsink is only 20mm tall, so cases at least 35mm tall should be fine. If you're using a taller heatsink, add 10-15mm to its height to figure out which cases will be suitable.
Possible alternative instrument cases include Altronics Cat H0374 or Cat H0378 (with a short heatsink), Jaycar Cat HB5912 or the Hammond RM2055M, which is available from Digi-Key and Mouser. It should also fit into a UB2 Jiffy box like Jaycar Cat HB6012, Altronics Cat H0152 or H0202, but they don't look as good as instrument cases, and it will be a bit harder to fit the board.
Mount the board in the case using machine screws and tapped spacers so that its top edge is against one side of the case (for an instrument case, it should be the front panel). Next, mark and drill/cut holes in the adjacent panel for the DC power barrel plug, MIDI input socket, USB socket and audio output jack.
If you're using the specified case, you can use the drilling diagram (Fig.12) to assist you. It could also be used on other cases, but you will need to adjust the placement on the panel to match your PCB mounting location.
Fig.9: like the circuit diagram, the eight PICs dominate the overlay, all in 28-pin DIL packages. Make sure to orient those correctly and don't get them mixed up.
Fig.11: the lid panel artwork (shown at approximately 85% actual size) is a nice finishing touch to the project. It's designed to be printed onto a transparent medium. Note that the two LED positions could vary somewhat, especially if you're using a different case; you could simply cut that part of the decal off and position it separately.
Wiring it up
Once you’ve confirmed these are all
accessible through the panel, if you
haven’t already, drill holes for the
volume pot and power switch in convenient locations. Then solder appropriate lengths of ribbon cable strips to
those parts and crimp/solder pins to
the other ends that you then push into
the plastic polarised header blocks (or
solder direct to the PCB).
Fig.10: the front
panel artwork can
be downloaded,
printed, laminated
(or protected in
another manner) and
then attached to the
drilled panel.
Fig.12: the positions of the holes to drill/cut in the front panel. The volume control and on/off switch are panel-mounted,
so they could be moved, but these positions are designed to clear other nearby parts. You can use this for cases other than
the recommended one, but you’ll need at least one reference point to position it correctly.
You will also need to mark and drill
two 5mm holes in the lid for the LEDs
to protrude through. Depending on
their lead lengths, you might have a
little bit of flexibility in where those
LEDs are placed as you can bend the
leads slightly. Keep in mind that if
you are applying the lid panel label, it
will have to line up with those holes.
Now is a good time to adhere the
front and/or lid panel labels (see below
for hints on making them) and cut out
the holes using a sharp hobby knife.
Verify that S1 and VR1 are wired
up correctly, mount them on the front
panel and then plug them into headers
CON3 and CON4. You can then ‘button
up’ the board inside the case, power
it up and check that it’s operational.
To do that, you will need to plug it
into a computer running Windows,
download and install the software
described in the following section,
and verify that it can connect to the
Spectral Sound Synthesiser.
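If you'd like a sanity check that is independent of the Patch Editor, you can exercise the MIDI input from any tool that can send raw MIDI bytes. A Note On message is just three bytes (status 0x90 plus the channel number, then note number and velocity), and MIDI note numbers map to pitch by the standard equal-temperament formula. The helper functions below are purely illustrative, not part of the project's firmware or software:

```python
# Minimal sketch: build standard MIDI Note On/Off messages and convert
# note numbers to frequencies. These helpers are illustrative only; the
# synthesiser's own code is not shown here.

def note_on(note: int, velocity: int = 100, channel: int = 0) -> bytes:
    """Status byte 0x90 | channel, then note and velocity (7-bit values)."""
    return bytes([0x90 | (channel & 0x0F), note & 0x7F, velocity & 0x7F])

def note_off(note: int, channel: int = 0) -> bytes:
    """Status byte 0x80 | channel; velocity 0."""
    return bytes([0x80 | (channel & 0x0F), note & 0x7F, 0])

def note_to_freq(note: int) -> float:
    """Equal temperament: A4 (MIDI note 69) = 440Hz."""
    return 440.0 * 2.0 ** ((note - 69) / 12.0)

print(note_on(69).hex())        # '904564' (A4, velocity 100, channel 1)
print(round(note_to_freq(69)))  # 440
print(round(note_to_freq(60)))  # 262 (middle C)
```

Sending those three bytes down the DIN lead at 31,250 baud (8N1) is all a MIDI keyboard does when you press a key, which is why the module can be driven from such simple DIY hardware.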
Making the labels
I've found that just printing with an inkjet printer and then spraying with art 'fixing' spray works well. For the lid label, I use decal sheets for my inkjet printer; they have a paper backing. The process is:
1. Print just as you would to any sheet of paper (but with the shiny side up).
2. Spray with varnish/lacquer/fixative.
3. Submerse in water for 30 seconds to a minute.
4. Gently slide the decal off (the very thin decal detaches from the paper backing when wet) and onto the case.
5. Consider varnishing the dried decal for added protection.
The 'Patch Editor' software
The module has an associated, powerful Windows program called the 'Patch Editor', written in C# WinForms. Screengrabs of this software are shown in Screens 1 and 2.
This is a 'ClickOnce' .NET application that I am hosting online at: https://bit.ly/pe-jul23-patch
It is held in Microsoft Azure 'blob' storage, which means that users are notified of version enhancements if it is installed from this online location. A comprehensive user guide for this software is available.
The app includes tools to help shape the timbre 'landscape', the envelopes, the filters and more. It includes 'visualisers' to view the timbres in both the time and frequency domains, and even a harmonic analyser that can grab the harmonic content from audio!
The app also has its own programming language called 'Spectral Definition Language' (SDL) [not to be confused with Simple DirectMedia Layer – Editor]. You can write SDL code to fine-tune patch definitions and easily reuse chunks of code. The idea is to 'abstract' sound design to a higher level, hiding the complexities of detailed configuration. To this end, you can store your own code snippets and execute them as necessary via the app menu – a powerful concept, with 'out of the box' default examples for setting things like a 'Hammered String' envelope!
Screens 1 and 2: sample screenshots from the powerful Windows-based Patch Editor software designed to interface with the Spectral Sound Synthesiser. Its source code is included in the download package.
Useful Links
The biological bases of musical timbre perception: https://bit.ly/pe-jul23-bio
Synthesising plucked strings: https://bit.ly/pe-jul23-pluck
Synthesising wind instruments: https://bit.ly/pe-jul23-wind
Sound quality or timbre: https://bit.ly/pe-jul23-tim1
Details on timbre: https://bit.ly/pe-jul23-tim2
Final thoughts
This project has been a very intense but rewarding journey, often feeling like 'shooting for the moon'. It shows that sound synthesis is still fertile soil for experimentation and invention. Fig.13 shows my DIY 'Electronic Clavichord': a standard MIDI controller keyboard coupled with this module, a tiny amplifier and a speaker.
One tantalising idea for the future is to approach a timbre-based system from a more holistic angle. Rather than each played note having its own wavetable, think of the required harmonics from all played notes as one giant pool of oscillators. We could then use the phenomenon of 'psychoacoustic masking' to significantly 'prune' the harmonics that actually need calculating.
This would require the ability to rank harmonics by importance and ignore those of least significance. An interesting aspect of this approach is that the threshold could be based on system performance, always processing the maximum number of harmonics possible but degrading the sound quality in a controlled way when needed. It might also allow deviating from the integer-based harmonic requirement, offering more realism.
Another idea is to return to a more sample-based approach, but instead of storing samples in the conventional PCM way, keep them as timbres or even as timbre changes. This might provide significant savings.
Other ideas question fundamentals, such as the precision needed for harmonic levels. Since humans perceive sound logarithmically, adequate level scaling might be achieved with simple bit shifting. Can we really distinguish harmonic levels finely enough to justify anything better?
Moreover, we need to think more about how our brains perceive sound, and less about the purity of the mathematical calculations. Our brains work on impression and on recognising overall characteristics, so maybe there is potential in techniques that make huge computational savings by disregarding things that simply do not matter to perception. It seems there is still much to think about regarding sound synthesis!
Fig.13: the MIDI Synthesiser was combined with a standard MIDI controller keyboard, amplifier and speaker to form this electronic clavichord.
Reproduced by arrangement with
SILICON CHIP magazine 2023.
www.siliconchip.com.au
The finished
Synthesiser
has two LEDs
on the top of
the panel to
indicate when
it is receiving
MIDI messages
and when it is
communicating
with a computer (eg,
loading patches).