A pose-driven image-to-video generation framework built specifically for virtual human generation.
Given a reference image and a pose sequence, it generates virtual human dance videos whose quality surpasses most existing open-source models.
MusePose

MusePose is an image-to-video generation framework developed by Lyra Lab at Tencent Music Entertainment, designed to generate videos of virtual characters driven by pose control signals. It is the last building block of the Muse open-source series; together with MuseV and MuseTalk, it aims to move the community toward the vision of generating virtual characters with full-body movement and interaction capabilities. Built on diffusion models and pose guidance, MusePose can generate dance videos of the character in a reference image, with result quality surpassing almost all current open-source models on the same task.


Target group:

MusePose is mainly aimed at developers and researchers who want to generate virtual character video content. Whether in game development, animation production, or virtual reality, MusePose provides strong technical support, helping users produce high-quality virtual character videos at lower cost and with higher efficiency.

Example usage scenarios:

Game developers use MusePose to generate dynamic dance videos of game characters.

Animators use MusePose to quickly create character movements for animated shorts.

VR content creators use MusePose to add natural and fluid movements to characters in virtual environments.

Product features:

Dance video generation: Given a pose sequence, generates a dance video of the character in the reference image.

Pose alignment algorithm: Users can align any dance video to a reference image, which significantly improves inference performance and model usability.

Improved Code: Based on the code from Moore-AnimateAnyone with significant bug fixes and improvements.

Detailed tutorials: Installation and basic-usage tutorials are provided for new users.

Training Guide: Provides guidance for training the MusePose model.

Face Enhancement: If needed, face areas in the video can be enhanced using FaceFusion technology for better facial consistency.

Usage Tutorial:

Set up a Python environment and install the required packages, such as opencv, diffusers, and mmcv.
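As a quick sanity check after installation, a short script like the following (a minimal sketch; the package list mirrors the examples above plus torch and is not exhaustive) can confirm that the key dependencies import correctly:

```python
# Minimal sketch: verify that the core dependencies are importable.
# The package list mirrors the examples above (plus torch) and is not exhaustive.
import importlib

for name in ("cv2", "diffusers", "mmcv", "torch"):
    try:
        module = importlib.import_module(name)
        print(f"{name}: {getattr(module, '__version__', 'unknown')}")
    except ImportError as exc:
        print(f"{name} is missing: {exc}")
```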

Download the MusePose pre-trained model and the weights of its other required components.
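One way to fetch weights is with huggingface_hub, as in the sketch below; the repository id and target directory are assumptions, and the official README lists the full set of required checkpoints (the base diffusion weights, the pose detector, and so on) and where to place them:

```python
# Hedged sketch of downloading weights with huggingface_hub.
# The repo_id and local_dir below are assumptions -- check the official README
# for the complete checkpoint list and the expected directory layout.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="TMElyralab/MusePose",            # assumed Hugging Face repo id
    local_dir="pretrained_weights/MusePose",  # assumed target directory
)
```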

Prepare reference images and dance videos and organize them in designated folders as per the examples.

Run pose alignment to obtain the dance video's pose sequence aligned to the reference image.
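Conceptually, the alignment rescales and translates the keypoints detected in the dance video so that the skeleton matches the body size and position in the reference image. The sketch below only illustrates the idea; it is not the repository's implementation, and the function name and array shapes are hypothetical:

```python
# Conceptual sketch of 2D pose alignment (not MusePose's actual code).
# video_kpts: (frames, joints, 2) keypoints from the dance video
# ref_kpts:   (joints, 2) keypoints detected in the reference image
import numpy as np

def align_poses(video_kpts: np.ndarray, ref_kpts: np.ndarray) -> np.ndarray:
    src = video_kpts[0]                                           # first frame as anchor
    scale = np.ptp(ref_kpts[:, 1]) / (np.ptp(src[:, 1]) + 1e-8)   # match body height
    offset = ref_kpts.mean(axis=0) - src.mean(axis=0) * scale     # match position
    return video_kpts * scale + offset
```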

Add the paths to the reference image and the aligned pose to the test configuration file.
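If the test configuration is a YAML file, the paths can be filled in programmatically as in the sketch below; the file name, the "test_cases" key, and the example paths are assumptions and should be matched against the example config shipped with the repository:

```python
# Hedged sketch: point the test config at your reference image and aligned pose.
# The config path and the "test_cases" key are assumptions -- mirror the example
# config that ships with the repository.
import yaml

config_path = "configs/test_stage_2.yaml"   # assumed location
with open(config_path) as f:
    config = yaml.safe_load(f)

config["test_cases"] = {
    "./assets/images/ref.png": ["./assets/poses/align/ref_aligned.mp4"],  # hypothetical paths
}

with open(config_path, "w") as f:
    yaml.safe_dump(config, f)
```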

Run MusePose for inference and generate a virtual character video.
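Inference is typically launched as a script that takes the test config as an argument; the script name and flag below are assumptions, so check the README for the exact command:

```python
# Hedged sketch of launching inference from Python; the script name and --config
# flag are assumptions -- use the exact command given in the official README.
import subprocess

subprocess.run(
    ["python", "test_stage_2.py", "--config", "configs/test_stage_2.yaml"],
    check=True,
)
```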

If necessary, use FaceFusion technology to enhance the face areas in the video.
