Remedy and Nvidia Team Up To Streamline Motion Capture & Animation
2nd August 2017 | 20:12 | Rachel

Hello from an extremely overcast and murky Wednesday evening in London! Fortunately we have some pretty cool Remedy-related news to distract us from the weather...
The developer has teamed up with Nvidia to help streamline an expensive and time-consuming but important part of game development: motion capture and animation. The results were teased online a few days ago and have now been officially announced at this year's SIGGRAPH. The new technology runs on Nvidia's eight-GPU DGX-1 server and is described as a deep learning neural network.
Once the network is fed the initial reference footage (up to ten minutes of the actors delivering three phonetically complex sentences), it's possible, with some additional training, to record audio later and have the software produce realistic facial animation using the original footage as a guideline.
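The announcement doesn't go into implementation detail, but if you're curious what "audio in, facial animation out" might look like in code, here's a minimal sketch in Python/PyTorch. Everything in it (the layer sizes, the use of mel-spectrogram features, the blendshape-weight output) is a hypothetical stand-in for illustration only, not Remedy's or Nvidia's actual network.

```python
# Illustrative sketch only: a tiny network mapping audio features
# (mel-spectrogram frames) to facial animation parameters (blendshape weights).
# All names and sizes are hypothetical, not the announced system.
import torch
import torch.nn as nn

class AudioToFaceNet(nn.Module):
    def __init__(self, n_mels: int = 80, n_blendshapes: int = 52):
        super().__init__()
        # Temporal convolutions over the audio feature window.
        self.encoder = nn.Sequential(
            nn.Conv1d(n_mels, 128, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv1d(128, 128, kernel_size=3, padding=1),
            nn.ReLU(),
        )
        # Per-frame regression to blendshape weights in [0, 1].
        self.head = nn.Sequential(
            nn.Linear(128, n_blendshapes),
            nn.Sigmoid(),
        )

    def forward(self, mel: torch.Tensor) -> torch.Tensor:
        # mel: (batch, n_mels, frames) -> weights: (batch, frames, n_blendshapes)
        h = self.encoder(mel)      # (batch, 128, frames)
        h = h.transpose(1, 2)      # (batch, frames, 128)
        return self.head(h)

# Training would pair audio frames from the reference capture with the tracked
# facial animation from that same footage; afterwards, new audio alone can be
# pushed through the trained network to produce animation curves.
model = AudioToFaceNet()
dummy_audio = torch.randn(1, 80, 240)   # a few seconds of mel frames (made up)
weights = model(dummy_audio)            # -> shape (1, 240, 52)
```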
At the event, Antti Herva (Remedy's Lead Character Technical Artist) stated: "Based on the Nvidia Research work we've seen in AI-driven facial animation, we're convinced AI will revolutionise content creation. Complex facial animation for digital doubles like that in Quantum Break can take several man-years to create. After working with Nvidia to build video- and audio-driven deep neural networks for facial animation, we can reduce that time by 80 percent in large scale projects and free our artists to focus on other tasks."
The project is still in development, but it's looking very promising so far!
For more information, check out Ars Technica's article on it, HERE.