What is Computational Photography?
by Dr David Maddison
There have been dramatic advances in
photography and imaging techniques in
recent times, along with similarly dramatic
advances in image processing software.
But arguably most exciting is the ability
to create images a camera is not naturally
capable of producing. Many of the techniques
fall within the realm of the emerging
field of “computational photography”.
Wikipedia defines computational photography or computational imaging as "digital image capture and processing techniques that use digital computation instead of optical processes".
Essentially, computational photography takes advantage
of the substantial computing power now available in portable devices to either augment or replace conventional
optical processing. As a result, cameras which use these
techniques can take photos in ways previously impossible
or impractical.
Probably the single most revolutionary application is "light-field photography", which captures images in a new way, allowing changes to the focus, depth of field and even perspective after the photo has been taken; it can also reconstruct captured images in three dimensions.
Other applications include novel imaging systems which
can operate at a trillion frames per second, see around corners
or see through objects. With the exception of “invisibility”,
all these techniques fall within the realm of computational
photography.
Other computational photography techniques which readers may already be aware of, or have even used, include high
dynamic range (HDR) photography and panoramic stitching.
High dynamic range imaging
High dynamic range (HDR) imaging allows the recording of a greater range of luminosity or brightness than an imaging system would normally capture, but which the human eye can easily perceive. Examples are scenes in which there is
an extreme range of luminosity such as a backlit object or
person or an indoor scene with bright light coming through
windows or a combination of sunlit and shaded areas.
Some high end cameras and smart phones have built-in HDR functions (and there are also Apps for smart phones), although many photographers prefer to do their HDR processing manually, as they are not satisfied with the built-in functions of the cameras.
The basic technique of HDR imaging is to first acquire a
series of images of the same scene with different exposure
settings. Many consumer digital cameras are able to do this
automatically (“exposure bracketing”) but it can be done
manually on any camera where exposure can be controlled.
Such a series of photos will ensure that there is at least one
photograph in which part of the scene of interest is correctly
exposed and collectively, the entire set of photos will have
all parts of the scene exposed correctly. It is then a matter of combining these pictures into one composite image.
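If you want to experiment, the merge step can be prototyped in a few lines with OpenCV's HDR module. The sketch below is a hedged example only: the file names and exposure times are placeholders for your own three-shot bracket.

# Merging an exposure bracket into one tone-mapped image with OpenCV.
import cv2
import numpy as np

# Load an exposure-bracketed series (dark, normal and bright shots).
files = ["under.jpg", "normal.jpg", "over.jpg"]
images = [cv2.imread(f) for f in files]
times = np.array([1/250, 1/60, 1/15], dtype=np.float32)  # seconds

# Recover the camera response curve, then merge into one radiance map.
response = cv2.createCalibrateDebevec().process(images, times)
hdr = cv2.createMergeDebevec().process(images, times, response)

# Tone-map the floating-point radiance map back to 8 bits for display.
ldr = cv2.createTonemap(gamma=2.2).process(hdr)
cv2.imwrite("hdr_result.jpg", np.clip(ldr * 255, 0, 255).astype(np.uint8))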
Interestingly, HDR photography was invented in the 1850s
by Gustave Le Gray. He took pictures that contained both
sea and sky and took one negative of the sea and another of
the much brighter sky and combined them to form a single
picture. The desired luminosity range could not be recorded
for such a scene using the photographic media of the time.
If you are interested in trying HDR photography there are a
number of online tools you can use to generate photographs
and also tutorials.
Panoramic imaging
Panoramic cameras were invented as early as 1843 and
often had specialised gears and curved film planes to pan
across a scene exposing a portion of the film as they rotated.
Today, panoramic imaging is a common feature found in
many modern digital cameras and phones and involves
software to “stitch together” a number of separate images to
make one single image with a wide field of view.
There are also smart phone Apps, software suites and free online services to do this; eg, Hugin (http://hugin.sourceforge.net/) and Panorama Tools (http://panotools.sourceforge.net/) are two free software suites for making panoramas and stitching photos.
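For a quick experiment of your own, OpenCV's high-level Stitcher wraps the whole feature-matching and blending pipeline. This is a hedged sketch; the image file names are placeholders (in older OpenCV 3.x the constructor is cv2.createStitcher instead).

# Stitching overlapping shots into a panorama with OpenCV.
import cv2

images = [cv2.imread(f) for f in ("left.jpg", "centre.jpg", "right.jpg")]
stitcher = cv2.Stitcher_create()
status, panorama = stitcher.stitch(images)
if status == cv2.Stitcher_OK:
    cv2.imwrite("panorama.jpg", panorama)
else:
    print("stitching failed, error code:", status)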
Panoramic photography can be greatly facilitated by a
special panoramic tripod head. Some are commercially
available and others can be home-made. Some websites
related to home-made heads are at http://teocomi.com/
build-your-own-pano-head/; www.worth1000.com/tutorials/161123/tutorial and www.peterloud.co.uk/nodalsamurai/nodalsamurai.html
A popular commercial non-automated panoramic head
is the Panosaurus: http://gregwired.com/Pano/pano.htm
If setting up a panoramic head it is desirable to find the
“nodal point” to ensure there is no parallax error in the
image. See video “Finding a lens nodal point and shooting
panoramas” https://youtu.be/JpFzBq0g7pY
A popular technique related to panoramic photography is
the creation of gigapixel resolution images. For info on this
technique, just Google “make your own gigapixel image”.
You can also read the article about military use of gigapixel
photography in the article entitled “ARGUS-IS Wide Area
Persistent Surveillance System” (SILICON CHIP, December
2014) www.siliconchip.com.au/Issue/2014/December/
The+Amazing+ARGUS-IS+Surveillance+System
Leonardo da Vinci and light-field photography
Leonardo da Vinci realised that light from an object arriving at a viewer contains all the information necessary to reproduce any view possible at that point. That is, he recognised the concept of light rays and that, if enough information could be collected, an image with any desired depth of field or focus could be formed after that information had been collected.
He wrote: "The...atmosphere is full of infinite pyramids [light rays] composed of radiating straight lines, which are produced from the surface of the bodies....and the farther they are from the object which produces them the more acute they become and although in their distribution they intersect and cross they never mingle together, but pass through all the surrounding air, independently converging, spreading, and diffused. And they are all of equal power [and value]; all equal to each, and each equal to all. By these the images of objects are transmitted through all space and in every direction, and each pyramid, in itself, includes, in each minutest part, the whole form of the body causing it."
da Vinci's 15th century depiction of what we now know as the light-field. From "The Notebooks of Leonardo da Vinci", edited by Jean Paul Richter, 1880.
High dynamic range picture by Michael D. Beckwith of the Natural History Museum in London. This would not be
possible with normal photographic techniques; with a regular photo, either the highlights would be over-exposed or the
shadows would be under-exposed. www.flickr.com/photos/118118485@N05/12645433164
An example of a “panoramic” photo: Sydney Harbour Bridge at night. Some cameras have this mode inbuilt; others require
after-shot software attention. https://upload.wikimedia.org/wikipedia/commons/e/ea/Sydney_Harbour_Bridge_night.jpg
Previous articles on gigapixel photography were published in the February 2004 & September 2011 issues of
SILICON CHIP: “Breaking The Gigapixel Barrier”, by Max
Lyons; www.siliconchip.com.au/Issue/2004/February/
Breaking+The+Gigapixel+Barrier and “World Record
111-Gigapixel Photograph", by Ross Tester; www.siliconchip.com.au/Issue/2011/September/World+Record+111-Gigapixel+Photograph
For general image manipulation, Adobe Photoshop is the
standard image processing software and it can be used to
manually stitch photos into a panorama. There are a number
of free alternatives, although they might not be as feature-rich as Photoshop. GIMP, the GNU Image Manipulation Program (www.gimp.org/), is a free image processing program that
works on many platforms and is almost as powerful as
Photoshop.
Other free programs are Photoshop Express Editor (www.photoshop.com/tools), which is an online tool but also has Apps for smart phones; Pixlr Editor or Pixlr Express (https://pixlr.com/), also online; and paint.net (download from www.getpaint.net/index.html).
Light-field and lens-less photography
HDR and panorama photography use a standard camera
with a lens, iris, shutter and an image sensor. But now there
are cameras in production or under development which
use either a micro-lens array in front of an image sensor or
multiple lenses, or dispense with the lens altogether.
Imagine a camera in which you could change the focus,
depth of field or even the perspective after you have taken
the picture and left the scene. This can be done right now
with a light-field camera, also known as a plenoptic camera.
The lens in a conventional camera focuses light rays
arriving at different angles onto the film or sensor, such that
a two-dimensional image is formed where the subject is in
sharp focus. Only the colour and intensity of the light over the film or sensor is recorded and thus no depth information is retained. All that is recorded is the one point of view, with the focus and depth of field determined by the lens setting at the time the photograph was taken.
By contrast, as well as recording colour and intensity, a plenoptic (or light-field) camera also captures information concerning the direction from which light rays arrive. This means that the image can be re-processed later, to produce a new two-dimensional image or to extract three-dimensional information (eg, to form a "point cloud").
In a light-field photograph, enough information is recorded about a scene that, with appropriate software, the depth of field or focus can be changed after the picture is taken. For example, all parts of a scene could be brought into focus, or just parts of a scene, such as only those objects at a middle distance. Or, if the lens was not properly focused at all, the focus can be improved. See the three images below for an example of what can be done with an image captured in this manner.

The first digital scanned image
The first digital scanned picture was created in 1957. The image resolution was 176 x 176, or a total of 30,976 pixels, in black and white only, but it produced a recognisable image. Multiple scans at different thresholds produced some grey scale, as shown in the image.
The group that did this work at the US National Bureau of Standards was led by Russell Kirsch. The computer used was SEAC (Standards Eastern Automatic Computer), which stored 512 words of memory in acoustic delay lines, with each word being 45 bits.
It is also possible to generate a plenoptic image using an
array of multiple conventional cameras and combining their
images with computational methods or using one camera that
is moved to a variety of positions at which images are taken
(which would work only for unchanging scenes). Plenoptic cameras, however, can yield 3D information from a single image taken with a single camera at one position.
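The camera-array approach just described can be prototyped with the classic shift-and-add method: each view is shifted in proportion to its position in the array and the results are averaged, which selects the plane that appears in sharp focus. The sketch below is a hedged illustration only; the 4x4 grid, file-naming scheme and shift values are assumptions, not details of any particular product.

# Shift-and-add synthetic-aperture refocusing over a 4x4 grid of views.
import numpy as np
import cv2

GRID = 4  # assumed 4x4 array of aligned cameras
views = [[cv2.imread(f"view_{r}_{c}.png").astype(np.float32)
          for c in range(GRID)] for r in range(GRID)]

def refocus(views, alpha):
    """Average all views, shifting each by alpha pixels per unit of
    baseline; varying alpha moves the plane of sharp focus."""
    h, w = views[0][0].shape[:2]
    acc = np.zeros_like(views[0][0])
    centre = (GRID - 1) / 2
    for r, row in enumerate(views):
        for c, img in enumerate(row):
            shift = np.float32([[1, 0, alpha * (c - centre)],
                                [0, 1, alpha * (r - centre)]])
            acc += cv2.warpAffine(img, shift, (w, h))
    return (acc / GRID ** 2).astype(np.uint8)

cv2.imwrite("near_focus.png", refocus(views, alpha=3.0))
cv2.imwrite("far_focus.png", refocus(views, alpha=-1.0))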
Leonardo da Vinci in the 15th century was the first to
recognise the idea that light arriving at a given location
contains all the information required to reproduce all possible views from that position (see box on previous page).
What does that actually mean?
A philosophical question in regard to light-field photography is: how is one expected to “use” the image? If it is printed
as a conventional image, only one possible interpretation
(depth of field, perspective) will be
rendered. Should the image remain
online and interactive where all possible interpretations of the image can
be viewed?
By the way, you may have noticed
that there are some similarities between light-field photography and
stereoscopic photography, which has
been around for a long time. However, light-field photography allows for many new possibilities, so that is what we are going to concentrate on.
Inside a Lytro camera. Apart from its unusual
rectangular design, it is very much like a regular
digital camera in layout. The distinguishing feature,
not visible here, is the presence of a microlens array
in front of an image sensor which enables the
recording of the light field. (Image source: NY Times.)
History of light-field photography
Light-field photography is not new either – the idea has been around for over 100 years. In 1903, F. E. Ives described what some consider to be the first light-field camera in the US Patent entitled "Parallax Stereogram and Process of Making Same" (US Patent 725,567). This consisted of a pinhole array located just in front of the focal plane in a conventional camera. Each pinhole image captured the angular distribution of radiance.
G. Lippmann in 1908 introduced “integral photography”
and replaced the pinholes of Ives with lenses. (Lippmann,
incidentally, was a Nobel Laureate for colour photography
and Marie Curie’s thesis advisor). See http://people.csail.
mit.edu/fredo/PUBLI/Lippmann.pdf for a translation of his
original paper.
For those interested, a presentation on the history of light-field photography by Todor Georgiev, "100 Years Light-Field",
can be read at www.tgeorgiev.net/Lippmann/100_Years_
LightField.pdf
The Lytro Camera
The Lytro plenoptic camera is essentially a conventional camera in terms of the geometry of its components, but it has a micro-lens array placed in front of the image sensor. The micro-lens array has at least 100,000 separate lenses over the image sensor (Lytro does not disclose the exact number), generating at least 100,000 slightly different micro-images of perhaps one hundred or more pixels each, all from slightly different angles. The pitch of the micro-lenses (the centre-to-centre distance) is said to be 13.9 microns.
Below: cross-section of a Lytro camera.
Photograph showing the variable depth of field (DoF) capability of a single Lytro camera image. Slight changes in perspective are also possible. Screen grabs from https://pictures.lytro.com/lytro/collections/41/pictures/1030057
(A) The PiCam Camera Array Module against a US Quarter coin (24.3mm diameter); (B) raw 4x4 array of images, each of 1000 x 750 pixels resolution or 0.75MP; (C) parallax-corrected and "super-resolved" 8MP high resolution image; and (D) high resolution 3D depth map with different colours corresponding to different distances from the camera.
The information in this large number of individual images is mathematically processed in the camera, yielding an image for which the focus, depth of field and the perspective can be changed after the picture is taken. A disadvantage of this type of technique is that the final image
is of much lower resolution than the image sensor. While
Lytro have given no particular specifications, it has been
estimated that in one model of Lytro camera, the Illum,
the sensor has a 40 megapixel resolution while the images themselves have about 1.1 megapixels of resolution.
(See discussion at www.dpreview.com/articles/4731017117/
lytro-plans-to-shed-jobs-as-it-shifts-focus-to-video).
When you think about it, having 100,000 separate images, each from a slightly different perspective, is just a scaled-up version of human vision, in which two eyes give two slightly different views. Or it might be compared with the compound eye of an insect.
Each of the thousands of individual eye elements or ommatidia in an insect eye contains between six and nine photoreceptor cells, very roughly equivalent to pixels. Interestingly, insect compound eyes are also of relatively low
resolution. To have a resolution the same as human eyes
would require a compound eye with a diameter of 11 metres!
Lytro have an image gallery on their website where you
can view and manipulate individual images from Lytro
cameras. See https://pictures.lytro.com/
In addition to the traditional camera specifications such
as lens focal length, lens f-number, ISO speed range, sensor
resolution and shutter speed range, there is an additional specification for plenoptic cameras: the light-field resolution in megarays, which refers to the number of individual light rays that can be captured by the sensor.
The Lytro Illum model, for example, has a capability of 40
megarays per picture.
The Lytro camera was developed out of the PhD work of Dr Ren Ng, who started his PhD studies in 2003 and founded Lytro in 2006, shipping the first cameras in 2012.
The 16-lens array of the PiCam and the associated RGB filters, comprising sets of two green, one red and one blue filter forming four 2x2 sub-arrays.
PiCam Camera Array Module
Recognising that photography from mobile phones is by
far the most popular form of photography today, Pelican
Imaging (www.pelicanimaging.com) is developing imaging
sensors for these devices. The problem with current mobile
phones is that they are so thin that there is insufficient depth
to have a sophisticated lens system to provide extremely
high quality images.
The PiCam uses 16 lenses over one image sensor, yielding
sixteen slightly different images instead of one. Each of the
16 different images effectively represents a different camera
with one sixteenth of the total sensor area assigned to it. Unlike the Lytro, which uses a micro-lens array with 100,000+ lenses, 16 non-micro lenses are used.
(Left): an image captured with the PiCam camera and (Right): its conversion into a 3D object represented by a "point cloud".
Now, the smaller an image sensor area is, the smaller the
size of lens that can be used to project an image onto it.
This means that instead of having one larger lens to project
an image onto a larger sensor, a series of smaller lenses can
be used to project a series of images onto a smaller sensor
area. This enables a significant reduction in the size of the
lens required and a corresponding reduction in the thickness of the device.
This relationship between lens size and sensor size can
be seen with regular digital cameras in which larger lenses
are required as the image sensor is increased in size. It also
means that cameras with smaller sensors can have larger
zoom ratios; to achieve similar zoom ratios on a camera with
a larger sensor such as an SLR would require impossibly
large lenses. Of course, the disadvantage of having a smaller
sensor size is that it gathers less light and so requires longer
exposures, and the resolution is generally lower.
A further innovation of the PiCam is to remove the colour
filters from the image sensor and have them within the lens
stack. This means that each of the 16 sensor areas will image
one particular colour range only: red, green or blue.
Having one colour range for each lens dramatically simplifies the design as each lens only has to operate over a
restricted range of wavelengths rather than the whole visible
spectrum. The lens for each colour is optimised for that colour’s range of wavelengths. Image quality is also improved
as chromatic aberration is minimised.
Not having a filter on each individual pixel on an image
sensor also has the advantage that the sensor can accept light
from a wider range of angles than if a filter were present.
This improves light gathering efficiency (to allow greater
sensor sensitivity) and reduces crosstalk between pixels
which can cause image blur.
The software associated with the camera adjusts for
parallax errors between the 16 different images and uses a
“super-resolution” process to reconstruct a final 8MP image
from the individual images, taking into account various degradations that will occur during the acquisition of an image.
The difference in optical configuration between this camera and the Lytro is that with the Lytro, a micro-lens array is placed at the focal plane of the main (conventional) lens and the image sensor is placed at the focal plane of the micro-lenses, while in the PiCam, the sensor is at the focal plane of the single 16-lens array.
As with other light-field cameras, an image can be captured
first and focused later, avoiding the delay that occurs with
focussing conventional cameras.
The PiCam is a 3D-capable device (as are all light-field
cameras, in theory) and can generate both depth maps and
“point clouds” representing the 3D object and this data can
then be converted to a conventional 3D mesh.
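Pelican's actual parallax-correction and super-resolution pipeline is proprietary, but the core idea of recovering depth from the parallax between sub-images can be illustrated with standard block matching. The sketch below is a hedged stand-in using OpenCV; the file names and matcher parameters are assumptions.

# Depth from parallax between two horizontally offset sub-images.
import cv2

left = cv2.imread("sub_left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("sub_right.png", cv2.IMREAD_GRAYSCALE)

# numDisparities must be a multiple of 16; blockSize is the match window.
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right)  # larger disparity = nearer object

# Scale the raw disparities to 0-255 for viewing as a depth map image.
depth_view = cv2.normalize(disparity, None, 0, 255, cv2.NORM_MINMAX)
cv2.imwrite("depth_map.png", depth_view.astype("uint8"))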
As a hand-held 3D capture device, the potential applications are very interesting. For example, a “selfie” from a
camera such as the PiCam could be emailed to someone to
be reproduced on a desktop 3D printer....
For further information and details of the image reconstruction process see the video “Pelican Imaging SIGGRAPH
Asia 2013: PiCam (An Ultra-Thin High Performance Monolithic Camera Array)” https://youtu.be/twDneAffZe4 Also
see “Life in 3D: Pelican Imaging CEO Chris Pickett Explains
Depth-Based Photography” https://youtu.be/CMPfRR4gHTs
For some sample images, see www.pelicanimaging.com/
parallax/index.html
(Above): Raytrix industrial light-field camera. (Right): The imaging scheme used in the Raytrix camera.

Raytrix
Raytrix is a German firm (www.raytrix.de) specialising
in light-field cameras for industrial use and specifically
targeting research, microscopy and optical inspection in
manufacturing operations.
Unlike the Lytro and the PiCam, the Raytrix camera uses
a scheme devised by Todor Georgiev that he calls “Plenoptic
2.0” in which a micro-lens array is placed in an area other
than the focal plane of the main lens. With this optical arrangement, the number of micro-lenses is not a limiting
factor in the resolution of the final image which, in theory at least, could approach the sensor resolution.
While Plenoptic 2.0 achieves a higher proportion of the
native sensor resolution than, say, the Lytro camera, substantial computation is required to achieve that result and
the camera has to be connected to a high-end computer with
a specialised graphics card for processing the video data.
In the case of the Lytro camera, video processing is done
within the camera.
The micro-lens array in the Raytrix cameras includes several different focal lengths among its 20,000 micro-lenses, and this allows the depth of field to be significantly extended.
In addition to still photography, Raytrix cameras can be
used to generate 3D video and are also being used in microscopy, where they can video living micro-organisms and ensure the whole organism is kept in focus.

Scanography
The field of "scanography" involves using a flat-bed scanner to produce images for artistic or technical purposes. Flat objects such as leaves can of course be scanned but, since a flatbed scanner has a depth of field of about 12mm, small 3D objects can be scanned as well. Three-dimensional images can also be generated using appropriate software. Some image examples are shown at https://commons.wikimedia.org/wiki/Category:Scanography

(Left): several versions of LinX imaging devices from before Apple Inc. purchased the company.
LinX Computational Imaging
LinX Computational Imaging is an Israeli company which
was recently purchased by Apple Inc, so their website no
longer exists. LinX developed a number of multi-aperture
cameras for mobile devices that had reduced height to allow
their incorporation in thin phones.
LinX offered several camera modules, including: a 1x2 array which had a colour and a monochrome sensor, for better low-light performance and basic depth mapping; a 1+1x2 array with two small aperture cameras, to make a high quality depth map; and a larger camera with a 2x2 array, for better quality depth maps, high dynamic range, better low-light performance and improved image quality. It is highly likely
that this technology (or a spin-off from it) will end up in
future iPhones.
Corephotonics
Corephotonics Ltd (http://corephotonics.com/) is another
Israeli company. It offers solutions with novel optical actuators and optical designs which also involve computational photography. Its offerings are generally customised
for particular clients but they are built around a dual camera module incorporating two 13MP sensors, a Qualcomm
Snapdragon 800 processor and special computational photography algorithms.
One of the sensors has a fixed focus telephoto lens and
the other has a wide-angle lens. The image data from both
is seamlessly integrated to provide great image sharpness
and up to five times optical zoom. This camera system can
also do high dynamic range imaging with one shot.
Superior performance in optical zoom, image noise, focus
error and camera movement reduction are possible. It is also
capable of depth mapping.
(Two images above): 3D point cloud created by a LinX camera from a single frontal image.
Corephotonics dual camera module for mobile devices.

Rambus
The Lytro, PiCam, Raytrix, LinX and Corephotonics cameras mentioned above all have some type of lens as an optical element to focus the image.
Rambus (www.rambus.com) have used a spiral diffraction grating on the surface of a sensor chip which also incorporates processing hardware, to construct a lens-less computational imaging device. The output of the grating is meaningless without computer reconstruction.
To understand how this device works, we will first consider its predecessor. A device called a planar Fourier capture array (PFCA) was invented by Patrick Gill while a student at Cornell University. This lens-less device consisted of an array of pairs of optical gratings on top of an array of photodiode image sensors. Consider that a pair of optical gratings is equivalent to a pair of picket fences: light will only pass through the gaps at angles at which the gaps in both fences are aligned with each other.
By having the pairs of optical gratings on the chip arranged at a variety of angles, it was possible to have photodiodes activated through the full possible range of angles of incident light impinging on the chip. The image data was then processed to yield the original image. A disadvantage of this device was its limited resolution and spectral bandwidth.
Patrick Gill went on to work for Rambus, where he addressed the limitations of the PFCA device. He developed a new type of diffractive element called a "phase anti-symmetric grating", which is based upon a spiral pattern. Unlike the PFCA, in which a pair of diffraction gratings corresponds to only one angle of light and is sensitive to limited light frequencies, photodiodes under the spiral grating can be sensitive to light from all angles and light frequencies. These devices promise much better quality images in smaller device packages than PFCAs.
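As noted above, the raw grating output only becomes a picture after computational reconstruction. In general terms (a hedged toy illustration, not Rambus's actual algorithm), if the optics apply a known linear transform A to the scene, the sensor records y = A·x and the scene x is recovered by solving a linear inverse problem:

# Generic linear-inverse-problem reconstruction, as a toy simulation.
import numpy as np

rng = np.random.default_rng(1)
n = 256                       # unknown scene pixels (flattened)
A = rng.normal(size=(n, n))   # stand-in for the calibrated grating response
x = rng.random(n)             # the (unknown) scene
y = A @ x                     # what the photodiode array records

# Least-squares recovery; real systems add regularisation to handle noise.
x_hat, *_ = np.linalg.lstsq(A, y, rcond=None)
print(np.allclose(x, x_hat))  # True in this noise-free simulation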
(a) Phase anti-symmetric grating and how a point of light (top left) is sensed by the imaging array (top right); (b) image of the Mona Lisa and how it is sensed (c) on the imaging array; (d) image of the Mona Lisa after data from the array is processed; (e) the same image as it would appear when generated from a PFCA device, showing inferior quality.

The Pinhole Camera
Lens-less imaging is one of the oldest ideas in photography, and F. E. Ives developed the first plenoptic camera with a series of pinhole images, as described in the text. The simplest camera uses a "pinhole" to form an image, although exposure times are long due to the small amount of light that gets through. There are many instructions on the web for making your own pinhole camera, such as at www.kodak.com/ek/US/en/Pinhole_Camera.htm
Pinhole cameras are also commercially available from a number of sources, such as www.pinholecamera.com for beautifully crafted models, or you can get mass-produced models on eBay quite cheaply (search "pinhole film camera"). An intriguing use of pinhole cameras in modern times is in "solargraphy", to capture the path of the sun as it moves across the sky. See www.solargraphy.com
Single pixel cameras
A single pixel camera, as the name implies, acquires an image with a sensor with just one pixel of resolution. The image is acquired by scanning a scene with mirrors and then mathematically reconstructing the original image.
One might ask why you would want to do this, but it does have some advantages and is the subject of active research. The concept falls under the general category of "compressed sensing" or "sparse sampling". The key difference between a conventional megapixel camera and a single pixel camera is that vast amounts of data are collected with the megapixel camera and then essentially thrown away in the compression process after the image is recorded, while in a single pixel camera, only the information that is required is recorded. It achieves this by compressing the information in the image before the data is recorded, using the sensor's built-in hardware.
Rice University, among others, has done pioneering work in single pixel imaging.
The basic principle of the single pixel camera is that light from a scene is reflected from a digital micro-mirror device (DMD) onto a single-pixel sensor such as a photodiode. The DMDs in many video projectors contain thousands of individually controllable microscopic mirrors in an array. The mirrors can be made to either reflect light in a certain direction or away from it.
Single pixel camera from Rice University. The DMD is the digital micro-mirror device, the PD is the photo-detector (the single pixel), the DSP is the digital signal processor and the RNG is the random number generator. In this case the data is transmitted wirelessly to the DSP from the device.
Using the DMD, there are two ways an image can be acquired, depending upon how the mirrors are driven.
One way is to acquire an image in raster mode, as in a CRT (an old TV or computer monitor). This is done by causing the first mirror in the DMD array to reflect light onto the sensor while all other mirrors reflect light away from it. In the next stage, the second mirror in the array reflects light onto the sensor while all the others reflect light away, and so on for all the mirrors, about 10 million of them. Eventually an image is built up which will contain all the information of the original scene. That data could then be transformed to a compressed image in the conventional way.
We know from conventional imaging that there is a lot of redundant data in most scenes that does not need to be recorded. For example, there is no need to record all pixels representing the sky in a scene because, simplifying things, we can say a certain patch of sky consisting of several thousand pixels can all be assigned the one colour. Compression algorithms do that and dispose of much of the original data.
This leads us to the second and preferred way to drive the DMD array to acquire compressed data, called the compressed sensing mode. The mathematics is quite complex and beyond the scope of this article but basically what happens is as follows.
An image can be represented as a series of wavelets, or wave-like oscillations. To construct, say, a 10MP image with wavelets would require the same number of wavelets and a lot of data. It turns out, however, that, as noted above, most realistic images contain redundant data. It might turn out that for a 10MP image there would only be 500,000 significant wavelets, with the remaining 9,500,000 representing insignificant noise, the removal of which would go unnoticed. This is the basis of image compression, although the algorithms are much more complex than described.
The objective of the compressed sensing mode is to acquire compressed data without the need for post-processing. It turns out mathematically that if, instead of using raster mode scanning, which acquires the maximum amount of uncompressed image data, one takes random measurements from a scene in a certain manner, it is possible to build up an image with far fewer than the original 10 million measurements mentioned above.
Using a random number generator, the software creates a random tile pattern in the micro-mirror array. The first measurement is made, then another random pattern is generated and another measurement taken, and so on. Light from the random tile pattern is reflected onto the single pixel sensor and sent to the digital signal processor.
After processing of this data, an image will be built up that is indistinguishably close to that from the original raster method but with approximately 20 percent of the data, or far less than that needed for the raster measurements. The data from the random tile pattern is said to be mathematically incoherent with wavelets within the image and is therefore automatically compressed at the time it appears at the single pixel detector; there is no need to compress the images that come out of the camera.
For more details, see https://terrytao.wordpress.com/2007/04/13/compressed-sensing-and-single-pixel-cameras/
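The flavour of this can be captured in a short, hedged simulation (this is not the Rice hardware's code; the scene, measurement count and solver settings are invented for illustration). Random mirror patterns yield a few hundred single-pixel readings, and the image is recovered by finding the sparsest set of DCT coefficients consistent with them:

# Single-pixel compressed sensing, simulated at 32x32 resolution.
import numpy as np
from scipy.fftpack import idct
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
N = 32                   # scene is N x N pixels: n = 1024 unknowns
n, m = N * N, 300        # recover them from only 300 measurements

# A simple synthetic scene: a bright square on a dark background.
scene = np.zeros((N, N))
scene[8:20, 10:22] = 1.0
x = scene.ravel()

# Each row of Phi is one random mirror pattern (mirror on = 1, off = 0);
# y holds the m photodiode readings.
Phi = rng.integers(0, 2, size=(m, n)).astype(float)
y = Phi @ x

# Sparsifying basis: columns of Psi are 2D inverse-DCT basis images.
D = idct(np.eye(N), norm="ortho", axis=0)
Psi = np.kron(D, D)

# L1-minimisation: find sparse coefficients s with y ~ Phi @ Psi @ s.
lasso = Lasso(alpha=0.001, fit_intercept=False, max_iter=10000)
lasso.fit(Phi @ Psi, y)
recovered = (Psi @ lasso.coef_).reshape(N, N)
print("mean reconstruction error:", np.abs(recovered - scene).mean())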
While conventional digital photography is suitable for a vast number of applications, the advantages of single pixel photography are as follows:
• The single pixel sensor requires very little power, and large amounts of CPU power are not required to drive millions of pixels or process the data.
• Data that comes from the sensor is already compressed.
• The device can be made at low cost as there is no large scale sensor to fabricate.
• The device can be miniaturised and, with low power consumption and low cost, could be used for persistent surveillance applications, eg, environmental monitoring and defence.
• A single pixel sensor can be optimised to be sensitive to certain ranges of frequencies. Making a megapixel sensor that is sensitive to visible light is straightforward with conventional sensor technology but much more difficult with sensors in, say, the IR or UV bands. Making a single pixel sensor sensitive for those bands is much easier.
• Lens-less single pixel photography is also possible, as recently demonstrated by Bell Labs.
Google is also apparently interested in single pixel photography, perhaps for use in wearable devices, and recently filed a 2015 patent; see http://google.com/patents/US20150042834
A single pixel camera using an Arduino and components made with a 3D printer can be seen at www.gperco.com/2014/10/single-pixel-camera.html and http://hackaday.com/2015/01/21/diy-single-pixel-digital-camera/
Not a single pixel camera but also of interest: researchers at the Massachusetts Institute of Technology working in the area of light-field photography have combined an old bellows view camera with a flatbed scanner as an imaging sensor. See http://web.media.mit.edu/~raskar/Mask/
Make your own light-field camera
Interested in making your own light-field camera? Here are some websites to look at. Mats Wernersson describes how he made his at http://cameramaker.se/plenoptic.htm
Here is an article that describes how to convert video of a still image with changing focus into something that resembles a light-field photograph, although it is not a real one: "Turn any DSLR into a light field camera, for free" www.pcadvisor.co.uk/how-to/photovideo/turn-any-dslr-into-light-field-camera-for-free-3434635

Cloaking – making things "invisible"
While not strictly computational photography, an interesting development in optics is a relatively simple method to give a certain area the illusion of invisibility using lenses. This method was developed at the University of Rochester and may have practical applications, such as enabling a surgeon to see "through" his hands as he operates.
For a demonstration, see "The Rochester Cloak" https://youtu.be/vtKBzwKfP8E
For those unfamiliar with Star Trek, the device is referred to as a cloaking device, after the technology used to render a space ship invisible in that show. See www.startrek.com/database_article/cloaking-device
Computed path of light rays in the cloaking lens arrangement. Image from "Paraxial ray optics cloaking" http://arxiv.org/pdf/1409.4705v2.pdf (See referenced text for details.)

Femto-photography
Femto-photography is a new field in which the propagation of light can be visualised using frame rates of around half a trillion frames a second.
The technique involves the use of a titanium sapphire laser as a light source that emits approximately 13 nanosecond long pulses, and detectors that have a timing accuracy
in the order of picoseconds. It also requires a “streak camera” which can measure the variation of the intensity of an
ultra-fast light pulse with time. Mathematical techniques
are used to reconstruct the image.
As the exposure times at such frame rates are so short
(around 2 trillionths of a second), it is not possible to capture imagery without repeating an exposure many millions
of times. This means that whatever is filmed has to be repeatable, such as a light pulse striking an object. Random
events such cannot be filmed as they are not repeatable.
To give an idea of the sort of time periods involved, bear
in mind that light travels 0.30mm in a trillionth of a second, or picosecond (10^-12 seconds), in a vacuum.
To watch a video of the propagation of a light pulse see
the videos “Visualizing Light over a Fruit with a Trillion
FPS Camera, Camera Culture Group, Bawendi Lab, MIT”
https://youtu.be/9RbLLYCiyGE and “Laser pulse shooting
through a bottle and visualized at a trillion frames per
second” https://youtu.be/-fSqFWcb4rE
Looking around corners with
femto-photography
Using the principles of femto-photography as described
above, researchers in the same group have developed
methods to image objects that are obscured and cannot be
directly seen, by analysing “light echoes”.
The principle is that if an area is illuminated, some
photons from even obscured areas will return to the source
through multiple bounces.
Knowing the time that photons were emitted in the form
of a laser pulse and given the finite speed of light and the
return time of the photons, it is possible to computationally
determine the shape of an unseen object they bounced off.
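The underlying arithmetic is simple, even if the full reconstruction is not: as noted earlier, light travels 0.30mm per picosecond, so each photon's return time fixes the total length of the path it took, and many such measurements constrain where the hidden surface can be. A toy illustration (the numbers are invented):

# Converting a photon's return time into a total path length.
MM_PER_PS = 0.30  # light travels about 0.30mm per picosecond in a vacuum

def path_length_mm(return_time_ps: float) -> float:
    """Total distance travelled by a photon detected return_time_ps
    picoseconds after the laser pulse was emitted."""
    return MM_PER_PS * return_time_ps

# A photon seen 2000ps after the pulse has travelled 600mm in total,
# over however many bounces; combining many such times, from many
# laser and sensor positions, pins down the hidden geometry.
print(path_length_mm(2000.0))  # -> 600.0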
Possible applications for this technique include seeing
around corners in endoscopic procedures or other medical
imaging or even seeing around blind corners when in a
car, or in search and rescue applications where fire fighters might have to see around a blind corner, among many
others.
A video demonstrating the technique is “CORNAR:
A camera that looks around corners” https://youtu.
be/8FC6udrMPvo
Build your own “cloaking device”
You can build your own “cloaking” device similar to the device
developed by the University of Rochester. They provide a generic
description on their web page at http://www.rochester.edu/
newscenter/watch-rochester-cloak-uses-ordinary-lenses-to-hide-objects-across-continuous-range-of-angles-70592/ (that
description is repeated many times in other locations).
A document on how to build the device is at http://nisenet.
org/sites/default/files/RochesterCloak-NISENet.pdf
You will need appropriate sources and mounting hardware for the lenses; laboratory grade lenses and components can get very expensive. A kit of lenses is available at www.
surplusshed.com/pages/item/l14575.html
(Note: this kit has not been tried or tested by SILICON CHIP).
Conclusion
We have surveyed a variety of techniques of computational photography, its history and some of the capabilities
it offers.
Computational photography can generate extremely
information-rich images that can lead to many new uses
such as simple 3D photography. Many of these advances
will end up in cameras in mobile devices which will be
used to construct 3D models of the environment.
As time goes on, fewer photos will be taken on “conventional” cameras due to the high quality achievable
with new miniaturised mobile phone cameras. Of course,
photography will still be an art and that should always be remembered, but the artistic possibilities with these new technologies will be greatly expanded. 3D photography
and movie making will be much easier and it will be easy
to generate 3D models of the environment.
3D photos such as “selfies” could even be taken and
emailed to others who could use a printer to print the
picture in 3D. New imaging technologies such as lens-less
photography and its associated miniaturisation will continue to develop. Recording of all life’s events will become
pervasive and recordings will have unprecedented detail
and we will have more information about our environment
than ever before.
SC
Experimental setup to view object behind barrier. The object is invisible
to the camera and must be imaged by reflected photons that may have
travelled back to the camera by multiple different paths. Frame grab from
https://youtu.be/JWDocXPy-iQ At right is a computationally reconstructed
image of an object hidden behind the barrier.
|