The terrifying tech behind this summer's zombie assault
And how 1GB should get you a poke in the eye from Winslet
Feature So there you are in the cinema. Watching Titanic again for some reason. Kate Winslet has just saucily suggested to Leonardo DiCaprio that he paint her like one of his French girls.
Then she leans over and pokes you in the eye.
What happened there? You’ve got a fair idea how 3D films are made, and you’re pretty confident that if Jim Cameron had taken a stereo camera rig onto the set of the most successful movie ever made you’d have heard about it by now.
Someone’s been messing around with our films.
Our perception of depth is (almost) completely dictated by the placement of our eyes. Each eye has a slightly different view of the objects in front of us, and the differing perspectives are assembled by some neurological parallax magic into a three-dimensional scene inside our brains. 3D movies "cheat" this system by using glasses to deliver a different image to each eye.
The result is a fiction, but like the rapid succession of still images that makes movies, it’s a fiction that most of us can readily accept.
The idea of 3D movies is older than you think. William Friese-Greene devised a system that used synchronised side-by-side projectors and hand-held stereoscopes in the last years of the 19th century. The complexity of both camera and projection equipment, though, meant that it was never anything more than an experimental novelty.
The popular perception of early 3D films is one of those little cardboard glasses with tinted red-and-green lenses. In fact, all sorts of technological creativity was applied to the problem of simulating depth. In 1922 an entire cinema was wired with synchronised shuttered glasses to enable a full frame to be delivered to the audience’s left and right eyes in turn — relying on the same persistence of vision that made moving pictures work in the first place to create stereo images in the brain.
Eventually, after a great many false starts, the polarised lens solution we know today became standard. Even so, making 3D movies still required bulky 3D cameras, which rather limited the scope for directors to do much creatively with the medium. While attempts were made, almost from the beginning, to post-process "flat" footage into 3D, it wasn't a practical possibility until very recently. As with a lot of modern problems, computer magic has delivered the solution.
Simulating depth: Elijah Wood in The Lord Of The Rings trilogy. © 2009 Paramount Pictures
The solution takes us right back to the early days of cinema. In 1915 Max Fleischer devised a technique of tracing over live action footage to produce uncannily lifelike animation. You can see good examples of rotoscoping, as the technique became known, in Disney’s 1937 Snow White.
Or, if you’re a child of the ‘80s, A-Ha’s Take On Me video.
To build a 3D image from a 2D original, the picture is separated into foreground and background elements. Then a version is created for each eye, introducing artificial parallax by moving foreground elements slightly relative to the background.
Of course we don’t live in the world of Blade Runner. The camera can’t reconstruct the background behind our slightly shifted cut-outs of Kate Winslet. Instead, artists at 3D conversion facilities recreate the "missing" background using cloned areas from elsewhere in the film. If it’s done badly, the effect can resemble a Victorian toy theatre, with flat characters hovering in front of dimensionless scenery. If it’s done well, the results can actually look more natural than the dedicated stereo-camera approach.
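A minimal sketch of the core trick, assuming NumPy arrays for the image data: nudge the cut-out foreground a few pixels in opposite directions for each eye, over a background plate that an artist has already filled in. The function name and the pixel shift are illustrative, not Prime Focus's actual pipeline.

```python
import numpy as np

def make_stereo_pair(frame, fg_mask, background, shift=4):
    """Crude sketch of 2D-to-3D conversion: shift the foreground layer
    horizontally in opposite directions for each eye, compositing it
    over a reconstructed background plate that covers the gaps.

    frame, background: (H, W, 3) uint8 images
    fg_mask: (H, W) boolean foreground matte
    shift: horizontal parallax in pixels (an assumed value)
    """
    left, right = background.copy(), background.copy()
    ys, xs = np.nonzero(fg_mask)
    w = frame.shape[1]
    # Left eye sees the foreground nudged right; right eye, nudged left.
    left[ys, np.clip(xs + shift, 0, w - 1)] = frame[ys, xs]
    right[ys, np.clip(xs - shift, 0, w - 1)] = frame[ys, xs]
    return left, right
```

Deliver the left image to the left eye and the right to the right, and the brain's parallax machinery does the rest; the artistry is in the matte and the background reconstruction, not the shift itself.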
The masters at work
An awful lot of high-tech know-how and low-tech artistry goes into making sure things don’t end up looking flat. I visited the London studios of 3D conversion specialists Prime Focus World to find out more. When I visited they were putting the finishing touches to World War Z, and I got to see how 3D conversion can do a lot more than just create an imaginary room behind the screen.
To avoid the "Victorian theatre" syndrome, multiple elements in an actor’s face are separated and placed in their own 3D space. And that makes it possible to subtly tweak the elements of a performer’s face to make them look more interesting or, as in this case, more like a ravening zombie.
Real life... only better
The infected scale the walls in World War Z Photo by MPC/Paramount Pictures © 2013 Paramount Pictures.
Rajat Roy, the global technical supervisor at Prime Focus World, explains further:
“When people get to the theatre they expect it to look like the world around us now… like real life. Whereas of course it’s not. It’s not supposed to be that, even. It can’t be. Because when you’re looking at a thing you’re automatically converging your focus on each thing that you’re looking at.

“In the cinema you’re in a controlled fantasy environment where you’re actually looking at a physical screen that’s a certain distance from you. And whatever I’m tricking your brain to think you’re looking at by putting pictures on that screen creates a dichotomy between what is physically happening to you and what we’re showing you.

“There are things that I can do to your brain that will hurt you, that are bad for you. If you can look around the image and see those things, they’re the things we’re trying to cut out.

“Those artefacts are prevalent in stereo shooting, and they’re prevalent in stereo CG. That’s one of the things that I think are not well understood currently. 3D is not supposed to look like ‘real life’; it’s supposed to serve the purposes of a story. And where the 3D image is, where the focus is, is supposed to serve the story.”
The kit used to achieve these impressive tricks is comparatively unsophisticated: a mixture of Dell and HP boxes, either quad-core or dual quad-core, most with 24GB of RAM and an Nvidia Quadro 4000 graphics card. Fancy, but not exactly otherworldly.
Big data - literally
But the sheer quantity of data involved is dizzying. In the case of Prime Focus, there is an office in London and one in Mumbai, sharing data as required. Each digitised frame of film is over 12MB in size. Given that there are 24 such frames every second, you’re looking at an astronomical storage requirement: 288MB per second, 17.3GB per minute. Or 1.6TB for a 90-minute movie. And that’s just for the finished article.
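The arithmetic behind those figures is easy to check, using decimal units (1GB = 1,000MB) as storage vendors do:

```python
# Back-of-the-envelope check of the article's storage figures.
FRAME_MB = 12  # one digitised film frame
FPS = 24       # frames per second

per_second_mb = FRAME_MB * FPS             # 288 MB every second
per_minute_gb = per_second_mb * 60 / 1000  # ~17.3 GB per minute
movie_tb = per_minute_gb * 90 / 1000       # ~1.56 TB for 90 minutes

print(per_second_mb, round(per_minute_gb, 1), round(movie_tb, 2))
```

Which lands on 288MB/s, 17.3GB/min and roughly 1.6TB per feature, matching the figures above.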
Hollywood being Hollywood, studios insist on seeing all the options. The managing director of Prime Focus’s software-development subsidiary View-D™, Matthew Bristow, gives us a sense of how much data we’re talking about: Every shot will go through multiple iterations. One shot could have between five and 25 versions.
Bristow says: "Given that there are on average about 2,000 shots for a film, we could be looking at up to 50,000 shots. What you don’t want to be doing is pumping around all the data for every single shot so a copy of the plates will sit in every facility.
"Instead, when we make a change, we don’t send through the entire shot, we send through a file that enables us to render the shot here – that minimises the amount of data traffic."
A still from World War Z: imagine stringing 2,000 of these shots together at 100MB a pop... This is a relatively tiny 566KB morsel. Photo by: Jaap Buitendijk. © 2013 Paramount Pictures
Prime Focus’s creative director, Richard Baker, adds:
What we’ve also done, in the last year or so, is rather than preview every shot in DPX we will, up to a certain level, view changes as JPEGs… they’re like JPEG2000s which come in at about 4MB per frame, as opposed to a DPX that will be more like 12MB. The way we have things set up you should be able to open up a script in India or London and you should be able to see all the assets and render out a version.
A simple frame without too many separate elements can be rendered in as little as a minute, but more complex scenes can take the software up to 25 minutes to shade and blend.
The team were reluctant to say exactly how long it might take to convert a whole movie, partly because it’s one of those jobs that’s never finished. As in movie post-production generally, tweaking and polishing keeps happening until the project’s due in cinemas.
However, Tony Bradley at Prime Focus told us:
With all of the View-D render farm working, we use about 3.5 TeraFlops of calculation power for London; India is roughly the same, so all farms working is approx 10TFlops. Now this is a per-second figure, and over the course of a project we could use anywhere between 1,000 and 1,500 hours of this power to get a show out. This is the equivalent of a single-core machine running continuously for approx 1.8 million hours, or about 205 years.
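Bradley's equivalence roughly checks out if you assume a single CPU core manages about 7 GFlops (that per-core figure is my assumption, not his):

```python
FARM_TFLOPS = 10        # all View-D render farms combined
PROJECT_HOURS = 1250    # midpoint of the 1,000-1,500 hour range
CORE_GFLOPS = 7         # assumed throughput of one CPU core

# How many single cores equal the farm, times hours of farm use.
single_core_hours = (FARM_TFLOPS * 1e12 / (CORE_GFLOPS * 1e9)) * PROJECT_HOURS
years = single_core_hours / (24 * 365)

print(round(single_core_hours / 1e6, 2), round(years))
```

That gives about 1.8 million single-core hours, or a shade over 200 years, in the same ballpark as Bradley's numbers.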
How to make sure it doesn't all end up on The Pirate Bay
To move all this data between Hollywood, London and Mumbai, Prime Focus needs data transfer that’s not only fast, but secure. Specialist ISP Sohonet runs fibre between all of the major facilities houses.
I had a quick chat with Sohonet's chief operating officer, Damien Carroll, to get a sense of how the network operates. I asked him about the scope of the network, and how the major moviemaking cities were connected.
Carroll told me:
Sohonet utilises third-party fibre from major telecommunications companies throughout the world. Wherever possible we procure dark fibre which we light ourselves. This is particularly relevant for "tail" or "last mile" circuits.
In all major metropolitans we have multiple Points-Of-Presence (POP) with a highly resilient backbone connecting various parts of the city (eg, we have five POPs in LA providing a backbone linking Hollywood, Burbank, downtown LA and Santa Monica). These POPs are interconnected by multiple 10Gbit/s connectivity and customers are connected into this backbone via either single connections to one POP, or where diversity is required, customers are connected to more than one POP.
Each metropolitan is interconnected with other metropolitans (such as LA, New York, London, Sydney and Singapore). The main metropolitans are interconnected with multiple redundant 10G paths.
I was quite proud of my top-of-the-menu Virgin connection until I asked the COO about transfer speeds:
Customers can connect at 100Mbit/s, 1Gbit/s and 10Gbit/s. The majority of customers connect at 1Gbit/s and we are seeing an increasing trend to 10Gbit/s within metropolitans. There are further drivers for increased bandwidth with the launch of our new private, offsite storage services where customers require an increased bandwidth to access these services for storage purposes as well as moving content around.
Generally connections between metropolitans such as London and LA are up to 1Gbit/s but we're increasingly seeing requirements for greater than 1Gbit/s where customers are looking for 3, 4 or 5Gbit/s and full 10Gbit/s. Transfer speeds do not vary per location on a technical basis as customers in LA or London or Sydney can achieve the same speeds across the network. Commercial drivers dictate the level of bandwidth in markets such as Australia and Singapore where bandwidth continues to cost significantly more than in North American and European markets and as a result we see less drive toward greater than 1Gbit/s capacity at present.
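For a sense of what those links mean in practice, here's a rough, idealised calculation (my own, not Sohonet's) of how long the 1.6TB finished feature from earlier would take to move, ignoring protocol overhead:

```python
MOVIE_TB = 1.6  # finished 90-minute feature, as computed earlier

def transfer_hours(link_gbit_s):
    """Idealised transfer time for the whole movie over one link."""
    bits = MOVIE_TB * 1e12 * 8          # terabytes -> bits (decimal units)
    return bits / (link_gbit_s * 1e9) / 3600

print(round(transfer_hours(1), 1), round(transfer_hours(10), 2))
```

Roughly three and a half hours at 1Gbit/s, or about 20 minutes at 10Gbit/s; in the real world, overheads and contention would stretch those figures.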
Then again, at least I know what I’m paying every month for high-speed access to all the funny cat videos I can handle. Sohonet’s pricing structure is the kind of thing that only grown-ups understand.
“Sohonet operates on a bandwidth-based business model where customers contract to a bandwidth over a contract term," says Carroll. "This bandwidth is dedicated for a customer’s usage. There are no costs for any data transferred and the contract charge is fixed and predictable. The services are usually deployed in such a way that customers can burst upwards above the base contracted rate for project based work and can drop back to the base level when the crunch is over.
"For other services such as storage and file transfer capability, the services are deployed on a pay-as-you-go basis where customers pay per month on the amount of resources consumed – eg, storage is priced at a per TB/month charge and customers only pay for their usage. This usage can be scaled up and down on a monthly basis.”
Once all the transfers are done, Prime Focus is running a more or less seamless workspace across three continents. The seams are kept tight by data security at all levels: from secure online transfers to strong bolts on the tape cupboard. Industry "police" run spot-checks on all the major post-production facilities too, checking that there are no open USB ports or default passwords in use.
Behind those carefully locked doors there’s more than just cutting and pasting of backgrounds going on. As well as off-the-shelf applications such as Shake, Nuke and Fusion, the team develop bespoke tools to handle specific jobs.
Sometimes a project will come along that involves masking around a lot of bouncy hair, or coruscating laser beams, so the team will build a tool to help handle that kind of content, explains Richard Baker, the creative director of View-D™, London.
There isn’t one tool that does everything. They all have pros and cons so we use them all and we’ve written an in-house piece of translation software so that we can pass files between them.
Assuming everything has gone to plan, the seamless intercontinental workflow should result in a seamless viewing experience. Certainly when I attended a preview screening of World War Z, the star of the show, even more than main box-office draw Brad Pitt, was the 3D processing.
There were a few "look at this" moments to make cinema-goers who had paid the extra feel like they were getting their money’s worth, but in the main the film had a natural-looking three dimensional space that was a world away from earlier headache-inducing attempts such as Alice In Wonderland.
3D films are increasingly becoming the standard, at least for summer blockbusters, and until dedicated stereo cameras get a lot lighter, more flexible post-processing will probably be the way to go.
...Even if that does mean we get a poke in the eye from Kate Winslet now and then. ®