Lights, Camera, Artificial Action: Start-Up Is Taking AI to the Movies
PALO ALTO, Calif. — Inside an old auto body shop here in Silicon Valley, Stefan Avalos pushed a movie camera down a dolly track.
He and a small crew were making a short film about self-driving cars. They were shooting a powder-blue 1962 Austin Mini, but through special effects the rusted relic would be transformed into an autonomous vehicle that looked more like the DeLorean from “Back to the Future.”
Stepping back from the camera, Mr. Avalos referred wryly to the movie he was filming as “Project Unemployment.” The film was a way of testing new technology from a start-up called Arraiy, which is trying to automate the creation of digital effects for movies, television and games.
This new type of artificial intelligence, which is also being developed by the software giant Adobe and in other technology industry research labs, could ultimately replace many of the specialists who build such effects.
“This is no joke; it will put people out of work,” said Mr. Avalos, a Los Angeles-based filmmaker who also runs a visual effects house. “The artists are safe. But it will replace all the drudgery.”
Over the past three decades, computer-generated imagery has transformed how movies and television are made. But building digital effects is still a painstaking and enormously tedious process. For every second of movie time, armies of designers can spend hours isolating people and objects in raw camera footage, digitally building new images from scratch, and combining the two as seamlessly as possible.
Arraiy (pronounced “array”) is building systems that can handle at least part of this process. The company’s founders, Gary Bradski and Ethan Rublee, also created Industrial Perception, one of several robotics start-ups snapped up by Google several years ago.
“Filmmakers can do this stuff, but they have to do it by hand,” said Mr. Bradski, a neuroscientist and computer vision specialist with a long history in Silicon Valley. He has worked with companies as varied as the chip maker Intel and the augmented reality start-up Magic Leap.
Backed by more than $10 million in financing from the Silicon Valley venture firm Lux Capital, SoftBank Ventures and others, Arraiy is part of a widespread effort spanning industry and academia and geared toward building systems that can generate and manipulate images on their own.
Thanks to improvements in so-called neural networks — complex algorithms that can learn tasks by analyzing vast amounts of data — these systems can edit noise and mistakes out of images, apply simple effects, create highly realistic images of entirely fake people, or graft one person’s head onto the body of someone else.
Inside Arraiy’s offices — the old auto body shop — Mr. Bradski and Mr. Rublee are building computer algorithms that can learn design tasks by analyzing years of work by movie effects houses. That includes systems that learn to “rotoscope” raw camera footage, carefully separating people and objects from their backgrounds so that they can be dropped onto new backgrounds.
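As a rough sketch of what “learning to rotoscope” means in practice, the toy Python/PyTorch example below frames the task as per-pixel segmentation: a network sees a raw frame and predicts a matte separating foreground from background. The tiny encoder-decoder architecture, the layer sizes and the stand-in training data are illustrative assumptions, not Arraiy’s actual system, which would be far larger and trained on years of studio footage.

```python
import torch
import torch.nn as nn

# Hypothetical toy model: maps an RGB frame to a one-channel matte
# (values near 1 = foreground, near 0 = background).
class TinyRotoNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.encode = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decode = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1),
        )

    def forward(self, x):  # x: (batch, 3, H, W) camera frames
        return torch.sigmoid(self.decode(self.encode(x)))  # (batch, 1, H, W) matte

model = TinyRotoNet()
loss_fn = nn.BCELoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Stand-in data: random frames paired with random 0/1 mattes, in place
# of the hand-rotoscoped footage a real effects house would supply.
frames = torch.rand(4, 3, 128, 128)
mattes = torch.rand(4, 1, 128, 128).round()

for _ in range(10):  # toy-scale training loop
    opt.zero_grad()
    loss = loss_fn(model(frames), mattes)
    loss.backward()
    opt.step()
```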
Adobe, which makes many of the software tools used by today’s designers, is also exploring so-called machine learning that can automate similar tasks.
At Industrial Perception, Mr. Rublee helped develop computer vision for robots designed to perform tasks like loading and unloading freight trucks. Not long after Google acquired the start-up, work on neural networks took off inside the tech giant. In about two weeks, a team of Google researchers “trained” a neural network that outperformed technology the start-up had spent years building.
Mr. Rublee and Mr. Bradski collected a decade of rotoscoping and other visual effects work from various design houses, which they declined to identify. And they are adding their own work to the collection. After filming people, mannequins and other objects in front of a classic “green screen,” for example, company engineers can rotoscope thousands of images relatively quickly and add them to the data collection. Once the algorithm is trained, it can rotoscope images without help from a green screen.
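The reason green-screen footage is so useful here is that the matte can be extracted nearly automatically by keying on the background color, giving the algorithm a large supply of (frame, matte) training pairs. A minimal chroma-key sketch in Python with OpenCV is below; the hue thresholds and file names are hypothetical placeholders, not Arraiy’s pipeline, and real shoots tune the key per lighting setup.

```python
import cv2

def green_screen_matte(frame_bgr, h_lo=35, h_hi=85, s_min=60, v_min=60):
    """Return a rough foreground matte by keying out green pixels.

    Thresholds are illustrative defaults for a typical green screen.
    """
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    # Pixels inside the green hue band count as background (matte = 0).
    green = cv2.inRange(hsv, (h_lo, s_min, v_min), (h_hi, 255, 255))
    matte = cv2.bitwise_not(green)  # foreground = 255
    # Feather the edge slightly so hair and motion blur are not cut hard.
    return cv2.GaussianBlur(matte, (5, 5), 0)

frame = cv2.imread("green_screen_frame.png")  # hypothetical input frame
matte = green_screen_matte(frame)
cv2.imwrite("matte.png", matte)  # one (image, label) pair for training
```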
The technology still has flaws, and in some cases human designers still make adjustments to the automated work. But it is improving.
“These methods are still rough around the edges — there is still a long tail of things that can go wrong in unpredictable ways — but there aren’t any fundamental roadblocks,” said Phillip Isola, a computer vision researcher at M.I.T. and OpenAI, the artificial intelligence lab created by Tesla’s chief executive, Elon Musk, and others.
Mr. Avalos thinks this work could ultimately supplant work done by his own effects house. But he is comfortable with that. He already farms out many of the more tedious tasks, via the internet, to workers in other countries.
If tech companies can help automate some of the grunt work involved in creating special effects, creative people will have a chance to try new things, said Pasha Shapiro, a filmmaker and special effects artist who has also worked with Arraiy.
“Some work is so tedious that it is not practical,” he said. “That is where technology can help even more.”