Source: MetaPost Author: MetaPostOfficial
Motion capture is the key step to “digitize” people
If the metaverse is the ultimate form of "digitization," then motion capture technology is the key step to realizing the "digitization" of people.
In film and television production, motion capture is one of the most commonly used technologies. Whether for "Avatar" or Gollum in "The Lord of the Rings," the actors' physical performances were first recorded with motion capture, and the captured movements were then rendered and processed to produce the stunning visual effects.
The game industry is another core application scenario for motion capture technology. Game animation contains many complex postures and movements; by capturing motion data from real actors and binding it to a game character's skeleton, studios can faithfully reproduce the real posture, expression, weight, and speed of the human body, giving players a more realistic game world.
With the full popularization of the "metaverse" concept, the long-term value of motion capture to the metaverse has gradually emerged. It sits at the same level as engine, transmission, computing, and display technologies, and is an important piece of the huge puzzle of the metaverse's underlying infrastructure.
The history of motion capture technology
A precursor to motion capture first appeared in 1915, when master animator Max Fleischer built a projector (the rotoscope) that displayed film footage onto a translucent table. With this device, animators could easily trace a character's movements from the shapes projected on the screen.
Figure | Drawing projector
In 1983, Tom Calvert of Canada's Simon Fraser University made a major breakthrough with a body-worn mechanical capture suit, giving people their first look at mechanical motion capture. Around the same time, MIT introduced an LED-based "graphical marionette" system, the prototype of early optical motion capture systems.
Figure | Tom Calvert’s research
This biomechanical research paved the way for later film production. In the years that followed, as motion capture met computer graphics, the newfound accessibility of motion data drove rapid development, and the technology was adopted completely and at scale in the game and film industries.
In the late 1990s, the filming of "The Lord of the Rings" brought motion capture onto the set for the first time. Pioneering motion capture actor Andy Serkis could interact with the other actors on set as the character "Gollum," which aided his characterization: only when an actor receives emotional and verbal feedback from fellow actors during a performance can his own emotions be fully released and the character become flesh and blood, truly alive.
Photo | Motion capture stills from the movie “Lord of the Rings”
The movie "Avatar," released in 2009, can be considered a pioneer in successfully combining motion capture with facial expression capture. Director James Cameron and his team used head-mounted facial capture cameras and built the largest filming and motion capture studio ever constructed at the time.
Photo | Motion capture stills from the movie Avatar
Special effects film production and games have never been far apart, and the concept of motion capture soon reached the gaming world. Among the console "big three" of Sega, Nintendo, and Sony, it was Sega that showed the most pioneering spirit in this field.
Sega's 1994 arcade game "Virtua Fighter" used motion capture to simulate character movement. In the then-crude arcade and home console market, this new concept was a breath of fresh air, and its realistic, fluid animation stunned players. The following year, Namco released "Soul Edge," which likewise succeeded as a vanguard of Namco's own motion capture technology.
Today, motion capture is nearly standard equipment for large game studios. By synchronizing real performers with animated characters, game characters appear far more realistic and vivid, which is why we can see cinematic-quality action performances in games.
Common motion capture technologies
As the technology has matured, motion capture is being applied in ever more fields, from animation and human-computer interaction to robot teleoperation and sports training; even today's virtual-human livestreams rely on motion capture.
To suit different scenarios, a variety of technical routes have emerged. The most common are optical motion capture, inertial motion capture, and visual motion capture.
Optical motion capture places simple markers directly on the body. The markers reflect light back to cameras set up in advance, and the imaging information of these reflections from different positions is used to compute each marker's movement in space, which is then positioned and output.
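The geometric core of optical capture is triangulation: the same marker seen by two calibrated cameras pins down its 3D position. A minimal sketch (toy camera matrices, not any vendor's pipeline) using the standard linear (DLT) method:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one marker from two camera views.

    Each camera is modeled by a 3x4 projection matrix P, so that the
    2D image point x satisfies x ~ P @ [X, Y, Z, 1]. Two views give
    four linear equations in the homogeneous 3D point."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # Solve A @ X = 0 via SVD; the solution is the last right singular vector.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # de-homogenize

# Two toy cameras: one at the origin, one shifted 1 unit along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

marker = np.array([0.3, 0.5, 2.0])      # true 3D marker position
x1 = P1 @ np.append(marker, 1.0)        # project into camera 1
x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(marker, 1.0)        # project into camera 2
x2 = x2[:2] / x2[2]

print(triangulate(P1, P2, x1, x2))      # recovers [0.3, 0.5, 2.0]
```

Real systems do the same computation for dozens of markers per frame, across many more than two cameras, which is why the reflective points must be visible to several cameras at once.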
Figure | Optical motion capture: body marker light points
Inertial motion capture attaches gyroscopes directly to the body. When the person moves, the gyroscopes rotate with them; the rotation information sensed by the gyroscopes is then mapped onto the human movement, achieving motion capture.
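The mapping from gyroscope readings to body movement is, at its simplest, an integration of angular velocity over time. A minimal sketch (single axis, simple Euler integration; commercial IMU suits use full 3D sensor fusion):

```python
import numpy as np

def integrate_gyro(omegas, dt):
    """Integrate a gyroscope's angular-velocity samples (rad/s, one axis)
    into an accumulated rotation angle in radians."""
    angle = 0.0
    for omega in omegas:
        angle += omega * dt  # Euler integration: angle += rate * timestep
    return angle

# A forearm rotating at a constant 90 deg/s for one second, sampled at 100 Hz:
dt = 0.01
omegas = [np.deg2rad(90.0)] * 100
angle = integrate_gyro(omegas, dt)
print(np.rad2deg(angle))  # ≈ 90.0 degrees of accumulated rotation
```

Because small per-sample errors accumulate in exactly this integration step, inertial systems drift over time, which is one reason they are often combined with other sensing methods.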
Figure | Inertial motion capture requires the wearing of various devices
Visual motion capture requires no markers or wearable equipment. An ordinary camera simply records the person's movements within its field of view, key points on the body are identified, and specialized AI algorithms then reconstruct the motion.
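Once an AI model has located body keypoints in a camera frame, useful motion quantities fall out of simple geometry. A minimal sketch (hypothetical hand-written keypoints standing in for a pose model's output) computing a joint angle from three keypoints:

```python
import numpy as np

def joint_angle(a, b, c):
    """Angle at joint b (in degrees) formed by keypoints a-b-c,
    e.g. the elbow angle from shoulder-elbow-wrist."""
    v1 = np.asarray(a, float) - np.asarray(b, float)
    v2 = np.asarray(c, float) - np.asarray(b, float)
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

# Toy pixel coordinates for shoulder, elbow, and wrist:
shoulder, elbow, wrist = (100, 100), (150, 150), (200, 150)
print(joint_angle(shoulder, elbow, wrist))  # elbow bent at 135.0 degrees
```

In a real pipeline the keypoints would come from a pose-estimation model running per frame, and the resulting joint angles drive the skeleton of a virtual character.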
Figure | AI engine driven motion capture technology
Optical and inertial motion capture both have a certain barrier to use and are more common in film, television, and games. Although the resulting capture is very accurate, there are two problems. First, the cost is high: cheap systems start at tens of thousands, and expensive ones run from hundreds of thousands to several million, costs only large film and game studios can afford. Second, they are inconvenient: on a production set, motion capture actors often wear a great deal of equipment, and donning it and running the capture requires a team of several people working together.
Visual motion capture, which is more convenient and better suited to the general consumer market, has been pursued in recent years by Apple, Meta, and other major manufacturers.
Meta handles full-body motion capture with a headset alone
As early as 2019, Meta announced its virtual avatar system, which uses 3D motion capture through VR devices to reconstruct a real person's likeness, rendering highly faithful skin tone, texture, hair, micro-expressions, and other details. Meta hopes that in the future, people will meet in virtual environments as naturally as they do in reality.
Figure | Meta’s VR device Quest can recognize facial expressions
According to foreign media reports of a paper released this month, Meta has proposed a solution that achieves full-body motion capture with a Quest headset alone. Previously, the VR headset could only capture facial expressions; now full-body motion capture is possible.
This is largely driven by the predictive power of artificial intelligence.
For upper-body tracking, the experience gained during AI training is enough to accurately translate hand movements into the virtual world with only a small amount of real-world input. For example, Quest's cameras can see your arms, elbows, and palms, so the complete posture of your upper body can be estimated well from musculoskeletal structure.
Figure | Quest headset enables full-body motion capture
Meta is now exploring the same principle for the lower body. By training an AI on the collected tracking data, realistic full-body avatar animation can be produced using only sensor data from the VR headset and two controllers.
Meta's team trained QuestSim (the AI engine) on artificially generated sensor data. To do this, the researchers simulated the motion of the headset and controllers from 8 hours of motion capture clips of each of 172 people, so they did not have to capture headset and controller data alongside body movements from scratch.
The motion capture clips included 130 minutes of walking, 110 minutes of jogging, 80 minutes of gestures and conversation, 90 minutes of whiteboard discussion, and 70 minutes of balancing.
Figure | The AI engine self-learns
After training, QuestSim can recognize which actions a person is performing from real headset and controller data. Using AI prediction, QuestSim can even simulate the movement of body parts for which no real-time sensor data is available.
The researchers further found that even without the hand controllers, the headset's pose alone (position and orientation data) is enough to reconstruct a variety of motion poses, and the reconstruction is free of physical artifacts (features that appear in the image but do not actually exist).
Looking ahead, CITIC Securities believes motion capture is poised for further development and application in biomechanics, engineering, games, film and television, VR, and other directions. In the development of the metaverse, capturing users' actions and reproducing them in the virtual world in real time is an important part of a high-quality user experience, and motion capture will have a broad base of applications in the future.