A computer visualizes the shape of a person walking using input from a TikTok video.

Jafarian Uses TikTok to Advance Computer Vision and Machine Learning


Leveraging the widespread popularity of TikTok dances, University of Minnesota Computer Science Ph.D. student Yasamin Jafarian is hopping on this viral trend to improve 3D avatars of real people. Using frame-by-frame data, Jafarian has fed over 1,000 videos into a computer algorithm in an effort to bring previously cartoonish avatars closer to reality.

While major motion pictures and video games have made big strides in this area and mastered the art of special effects, the tools used in the entertainment industry are not widely accessible, requiring substantial time, money, and data. Jafarian's goal is to design an algorithm that needs only one photo or video of a person to generate a realistic avatar. TikTok dance videos—which often feature a single person showing the full length of their body in multiple poses—provided the perfect dataset to train an algorithm toward that goal. Eventually, this technology could be used by everyday people to improve virtual reality.

Jafarian is a member of Assistant Professor Hyun-Soo Park's lab in the Department of Computer Science and Engineering on the Twin Cities campus, where she studies computer science—specifically computer vision, the field concerned with training artificial intelligence systems to understand visual data from images and video.

Read the full story on the College of Science and Engineering website, and learn more about Jafarian’s TikTok research.
