Breakthrough Aussie innovation . . .
MAKING 3D MOVIES
While there has been a lot of publicity around the latest introduction of 3D movies and the accompanying 3D screens and glasses, 3D movie production is very expensive. In fact, it comes as a surprise to most people to find that the blockbuster “Alice in Wonderland” was actually shot in conventional 2D and laboriously converted to 3D later! Now there is a new Australian camera rig, the SpeedWedge, which promises to streamline the whole process.
By BARRIE SMITH
The blockbuster “Avatar” set the new standard for 3D movies and film-goers have been very enthusiastic.
Part of its success is due to the very good 3D camera work, but the sheer bulk of 3D cameras has been a major disadvantage. The set-ups demand that the paired cameras employ either prisms or partially-reflecting mirrors to permit a controllable separation of the two lenses, to capture the left and right image pairs.
The average human eye separation is around 65mm, so image capture is best served by setting the camera lenses’ inter-ocular distance (IOD) at about 70mm for most subject material. For close-ups, a smaller IOD is preferred.
In “Dial M for Murder”, often recognised as one of the best of the 1950s’ 3D movies, Director Alfred Hitchcock was forced to use a large and inflexible camera rig. In one key dramatic scene he used a scaled-up telephone to provide an extreme close-up. The reason: it was physically impossible to rig the lenses to give a closer IOD.
A recent Australian innovation,
the SpeedWedge, could make things
much easier. It was developed by
physicist and stereographer Leonard
Coster. The rig consists of a housing
that holds a pair of gen-locked Silicon
Imaging SI-3D digital cameras. One
camera is placed on top, its lens
pointing downwards and aimed onto
a partially-silvered mirror with 50%
reflectance. This camera captures the
left eye image.
Beneath it is another, matching
camera installed horizontally within
the rig, its lens pointing ahead and
looking through the same partially-silvered mirror. This camera captures
the right eye image. The complete rig
is mounted onto a television camera
tracking pedestal.
Fig.1 shows the general concept.
The partially silvered mirror is the
key, with each camera receiving half
the light from the scene. By having
the cameras mounted at right angles to
each other, their effective lens separation can be varied from zero to as wide
as is desired, without any mechanical
interference between them.
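To picture why this works, note that the downward-pointing camera, seen via the mirror, behaves like a virtual camera reflected through the mirror plane, sitting beside the horizontal camera; sliding the real camera along its mount slides the virtual one sideways, so the effective IOD runs smoothly from zero upwards. Here is a minimal sketch of that reflection geometry (illustrative only; the function name is ours, and this is not the rig's actual alignment software):

    import numpy as np

    def reflect_through_mirror(cam_pos, plane_point, plane_normal):
        # Mirror-image of the top camera's position through the
        # beam-splitter plane: where that camera 'appears' to sit.
        # Its horizontal offset from the lower camera is the
        # effective IOD, which can be reduced all the way to zero.
        p = np.asarray(cam_pos, dtype=float)
        q = np.asarray(plane_point, dtype=float)
        n = np.asarray(plane_normal, dtype=float)
        n /= np.linalg.norm(n)
        return p - 2.0 * np.dot(p - q, n) * n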
In practice, if the scene involves
action in the foreground, the IOD is
set to a small value. Conversely, if the
scene or subject is more distant, the
IOD is set to a large value. While any video cameras could have been used in the SpeedWedge, the Silicon Imaging cameras were chosen because they use the SiliconDVR recording software, which solves a major post-production problem.

Realising that the 3D camera setup would not capture macro shots, Director Alfred Hitchcock organised a scaled-up phone for a key scene in the 1950s movie “Dial M for Murder”.

“Bwana Devil” is a 1952 drama based on the true story of the Tsavo man-eaters. It started the 3D boom in the US film-making industry from 1952 to 1954.

Fig.1: the over/under Speedwedge arrangement to hold the two cameras: the upper camera captures its view via the 50% reflectance mirror, while the lower, horizontal camera is aimed through the mirror’s 50% reflective surface.
SiliconDVR records the two camera data streams in one go so, in terms of the capture workflow, a major task is handled elegantly. As Coster says: “If you can’t synchronise your cameras and record the two data streams easily on set, you’re in a lot of trouble!”

Fig.2: the IOD is the distance between the axes of the two lenses and the convergence distance is the distance from the camera to the object they are both pointing at. Some people also refer to this as the convergence angle, which is the angle between these two axes.
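From those two definitions the convergence angle follows with simple trigonometry. A worked sketch (standard geometry, not a formula from the article; the function name is ours):

    import math

    def convergence_angle_deg(iod_mm, distance_mm):
        # Two lens axes iod_mm apart, both aimed at a point
        # distance_mm away: each axis tilts inwards by
        # atan((iod/2)/distance), so the angle between the two
        # axes is twice that.
        return 2 * math.degrees(math.atan((iod_mm / 2) / distance_mm))

    print(convergence_angle_deg(65, 2000))  # 65mm IOD at 2m: ~1.86 degrees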
The SI camera heads have a significant advantage in their small size.
This leads to a complete rig that can
be picked up by one person. If you
wanted to strap two big film cameras
into the housing, you could do it but
the weight and final size would be
impractical for hand-held operation.
In practice, the Speedwedge rig allows the IOD (inter-ocular distance) to be varied from zero (for macro shots) to 70mm, covering mid-range and telephoto shots. The IOD needs to be set differently for each and every scene and the actual setting depends on how strong the director wants the 3D effect to be.
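The article does not say how Coster arrives at a figure, but a widely quoted stereographers' rule of thumb, the “1/30 rule”, gives a starting point: set the IOD to roughly 1/30th of the distance to the nearest subject in frame. A sketch (the rule is an industry convention, not part of the SpeedWedge, and the helper name is ours):

    def rule_of_thirty_iod_mm(nearest_subject_mm):
        # Rule-of-thumb starting point only: about 1/30th of the
        # distance to the closest object in frame. The director's
        # taste for a stronger or weaker 3D effect still overrides
        # it on set.
        return nearest_subject_mm / 30.0

    print(rule_of_thirty_iod_mm(2100))  # nearest subject at 2.1m -> 70.0mm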
This leads to a further consideration: if a wide-angle or telephoto lens is to be used in 3D shooting, what it does to the apparent depth in the scene has to be taken into account. Telephoto lenses tend to fight the 3D effect because, even in 2D photography, a telephoto lens gives a foreshortening or flattening effect. Hence, the perception of depth is quite poor with a telephoto shot.
Leonard Coster says it is much the
same in 3D shooting: “We can try and
push a little bit of apparent depth back
into it by increasing the interocular
separation. However, you have to be
very careful that you don’t produce too
large an offset in those images’ background and foreground divergences
on screen — otherwise you make the
vision too hard for your viewers.”
He stresses: “We’re not producing a perfect reproduction of the real world, because we may not be using the same size sensor and a standard focal-length lens or ‘normal’ IOD, but I want to give the audience a comfortable stereo image that is immersive and visceral without causing eye strain.”

Leonard Coster and the beam-splitting camera rig.

On-set checking of the stereo effect can be made with a display set up as an anaglyph (red/cyan) picture or with a cross-polarised monitor, viewed through appropriate specs.

The Speedwedge rig used on another recent production, also photographed by DOP Tom Gleeson and directed by Tahnee McGuire.
Data handling
As already noted, the SI-2K cameras’ data streams are recorded to two
hard drives. Coster adds that if you’re
using other broadcast cameras, the
data may go to flash memory cards
or a hard drive; if you’re using 35mm
film cameras it goes onto two film rolls.
On-set monitoring can be accomplished by using a video display, with
the pair of left/right images shown on
screen as an anaglyph (red/cyan) image. It is viewed through the familiar
red/cyan specs, just as you would a
3D movie.
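Composing such an anaglyph feed digitally is straightforward. A minimal sketch (not Coster's actual monitoring software; it assumes the left/right frames arrive as 8-bit RGB NumPy arrays, and the function name is ours):

    import numpy as np

    def make_anaglyph(left, right):
        # Classic red/cyan mix: the red channel comes from the
        # left-eye frame, green and blue (together, cyan) from the
        # right-eye frame, so the specs route each view to the
        # correct eye.
        out = right.copy()
        out[..., 0] = left[..., 0]
        return out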
Alternatively, you can use small
cross-polarised monitors which take
two colour signals and give you, with
polarising glasses, full colour stereoscopic viewing.
For post-production, Coster explains, you still edit just as you normally would, with two streams of vision for every scene. You can merge these two separate data streams later to produce a file which represents a single series of frames, from which a myriad of post-production paths are possible.
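One common merged representation is a side-by-side frame pack; the article does not say which format is used here, so this is only illustrative (decoded frames as NumPy arrays, helper name ours):

    import numpy as np

    def pack_side_by_side(left, right):
        # Each output frame carries the left view in its left half
        # and the right view in its right half: one stream, one
        # series of frames, ready for whatever post path follows.
        return np.hstack([left, right])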
It’s even possible to create a stereo
DCP (Digital Cinema Package) file that
allows you to deliver a hard drive to
any DCP-compliant cinema in the
world. The cinema operator can load
it into the server and play back the 3D
vision through the house projectors.
In the post-production process, overall colour corrections and convergence can be adjusted. The latter involves offsetting the two images right or left relative to each other. What this effectively does is rack the entire scene back and forth, determining what the audience will see at the screen plane.
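In software terms that racking is just a horizontal offset applied to one eye's image. A minimal sketch (np.roll wraps the edge columns, where a real grading tool would crop or pad them; the function name is ours):

    import numpy as np

    def adjust_convergence(frame, offset_px):
        # Slide one eye's frame sideways by offset_px pixels.
        # Shifting the two views apart or together moves the whole
        # scene behind or in front of the screen plane.
        return np.roll(frame, offset_px, axis=1)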
The shoot
Colour grading and adjustment of convergence can be made post-shooting, thanks to Silicon Imaging’s software.
Producer-Director Bernie Zelvis
was asked by SMPTE Sydney in 2009
to produce a 3D short for its Dimensionale 3D film festival. To do it, he
called on Leonard Coster to supply
his new 3D rig to be used by Director
of Photography (DOP) Tom Gleeson.
The result was “Highly Strung”, with a running time of two minutes.
Tom Gleeson admits that, like many,
he had a dim view of 3D “based on
cheesy movies and red/blue paper
glasses.” Then, for the first time, he
viewed an HD film on a 3D cross-polarised monitor: “I was at first astounded
and then converted. HD and 3D are a
potent mix.”
In his view a large part of a DOP’s
job is to create a sense of depth in
2D images using lighting, lenses and
composition. When confronted with
images that actually have depth there
needs to be a rethink!
He says that, after a lifetime watching, analysing and creating 2D, it can
be confronting when the paradigm
shifts. Once immersed in 3D shooting you have powerful new tools like
IOD and convergence that control this
new depth.
Gleeson recalled that the footage
looked “amazing, with depth that felt
like you could walk into it.” On one
occasion he used smoke to enhance
the lighting in the shots and to help
create a sense of volume. He feels 3D
can immerse a viewer within a picture
and story like no other format can.
Converting ‘Flatties’ to ‘Deepies’

It may come as a surprise to some to find Tim Burton’s spectacular 3D movie “Alice in Wonderland” was not originally shot in 3D but photographed in 2D. The same applies to “Clash of the Titans”.

Hollywood producers are now looking through their back catalogs to find suitable titles that can be converted from 2D to 3D, to cash in on the current fervour for 3D titles. The last two Harry Potter films are likely candidates, as are classics such as early Star Wars, Titanic and other major titles.

The result is not always a perfect transformation: many viewers who saw “Clash” were, to say the least, unimpressed, with one blogger claiming the film was “flawed in so many ways, not least because of its underwhelming visual appeal, its lack of ‘3Dness’ but also because the story is just as flat as the visuals.” The “Clash” conversion is reputed to have taken 10 weeks to perform at a cost of around $US4.5 million. The US company involved was Prime Focus, who developed the software and employed an Indian facility to do the actual leg work.

How is it done?

A process called rotoscoping is at the heart of it: using part-manual and part Computer-Generated Imagery (CGI) processes, an operator hand-traces the main elements in each scene, so separating them and allowing each object to be tracked and “converted” to produce the second eye’s view. This of course can be extremely complex, depending on the scene.

The first step is to separate the shot into somewhere between two and eight layers of depth. One example may be an image of a person standing in front of a building, with a blue sky and clouds behind. The operator can separate this shot into three layers: the person, the building and the sky with clouds. Contour lines are then drawn around objects in each layer and a topographic layout created, with depth lines to indicate the position of each object in the stereo window.

Naturally, the objects might well be moving in the succession of frames: computer software can track this movement and create ‘in-between’ frames, so avoiding the laborious effort of tracing each frame. The software also assesses and inserts detail that may be behind each moving object.

At this point you have a collection of objects that may look like cardboard cutouts situated at different depth planes. To ‘round them out’, texture maps are taken from each object and overlaid on the shapes. This will give each character facial depth, costume detail, etc.
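That layered description maps directly onto code: shift each traced layer sideways by a disparity tied to its assigned depth, compositing nearest-last, to synthesise the second eye's view. A toy sketch only (real conversion pipelines also inpaint the detail revealed behind each shifted object, as noted above; all names here are ours):

    import numpy as np

    def synthesise_right_eye(layers, height, width):
        # layers: (rgb HxWx3 uint8, mask HxW bool, disparity_px)
        # tuples ordered farthest to nearest, as produced by the
        # rotoscoping and depth-assignment steps described above.
        out = np.zeros((height, width, 3), dtype=np.uint8)
        for rgb, mask, disparity in layers:
            # Nearer layers get larger disparities and are pasted
            # over the farther ones.
            shifted_rgb = np.roll(rgb, disparity, axis=1)
            shifted_mask = np.roll(mask, disparity, axis=1)
            out[shifted_mask] = shifted_rgb[shifted_mask]
        return out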
Coster remarks: “It’s a lot of work. Ultimately, it can produce very good results
but it is not as good as shooting in 3D.
However, you do end up with a movie asset
of far more value.”
In Hollywood, the dollar speaks loudly.
Leonard Coster has created an iPod app which helps handle the stereo
configurations when a cameraman is on a 3D shoot.
To convey and enhance the depth
in each shot he kept the camera
movements fluid, with long tracking
shots. The cameraman must also give
consideration to editing: fast cutting 3D shots can be challenging for
viewers, so longer and wider tracking
shots are often more suitable. Cutting
points also need to be thought out, as
parameters such as convergence and
IOD should not jar.
3D Drama
Although the SMPTE short was Bernie Zelvis’ first foray into stereoscopic 3D production, he currently finds himself writing a TV drama series specifically for 3D.
From the exercise, he discovered the 3D viewing experience can easily be ‘broken’ by trying to squeeze too much depth range into the picture, which ends up hurting the eyes.
He concludes there are “traps for
young players”. In his opinion, a
stereographer like Leonard Coster is
necessary to keep you from trying
things that just won’t work, as well as
to supervise the post process.
“The biggest plus with the system we were using was that completed shots could be projected onto a screen only minutes after the shoot. This impressed all who saw it. One TV executive said this wasn’t even on their radar... it is now!”
SC