Still hesitating because creating a digital human requires modeling, animation, voice acting, and collaboration across multiple teams? Worried about high production costs and long timelines? RunningHub's ComfyUI workflow, the Wan2.2-S2V Digital Human Module, turns the complex process of digital human creation into an accessible creative experience for anyone by combining "Image + Audio + Text".

Three Simple Steps to Create Your Exclusive Digital Human Video:
Step 1: Upload the “Base Look” and Set the Visual Image of Your Digital Human
Open the Wan2.2-S2V digital human workflow, click the "Image Upload" section, and select a pre-designed virtual avatar to define your digital human's visual appearance.
This frees you from depending on real presenters, avoiding scheduling conflicts and high costs, and lets a consistent virtual avatar serve as a long-term visual symbol that strengthens recognition.
Step 2: Clone the “Exclusive Voice” and Give Your Digital Human a Unique Sound
Go to the "Audio Settings" section, upload a clear one-minute audio sample, and activate the system's voice-cloning algorithm. After a few minutes, click "Preview" to confirm: you'll get a voice closely matching the sample, giving your digital human a sound of its own.
Say goodbye to the stiffness of generic voiceovers; a distinctive voice makes your digital human more recognizable and helps the audience remember it by sound alone.
Step 3: Input Text + Adjust Output and Generate Dynamic Content
Enter the script your digital human should deliver in the text box, choose an output aspect ratio (e.g., vertical for short-video platforms), and click the "Run" button.
In a short time, you'll get a video whose lip-sync, expressions, and movements align naturally with the text and voice.
No professional production team is needed: you can quickly produce high-quality dynamic content, lowering the barrier to using digital humans and meeting the demands of varied communication scenarios.
With just an image, an audio clip, and some text, you can bring your digital human ideas to life: Wan2.2-S2V lets every idea stand up and speak for itself!
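The three steps above boil down to three inputs plus one output setting. As a purely illustrative sketch (the class, field names, and checks below are assumptions for explanation, not RunningHub's actual API), the inputs could be validated like this before clicking "Run":

```python
from dataclasses import dataclass

# Assumed aspect-ratio options; the workflow's real list may differ.
ASPECT_RATIOS = {"9:16", "16:9", "1:1"}

@dataclass
class S2VJob:
    image_path: str       # Step 1: the avatar's "base look"
    audio_seconds: float  # Step 2: length of the clear voice sample (~1 minute)
    script_text: str      # Step 3: what the digital human should say
    aspect_ratio: str = "9:16"  # vertical suits short-video platforms

def validate_job(job: S2VJob) -> list[str]:
    """Collect human-readable problems with a job before it is run."""
    problems = []
    if not job.image_path:
        problems.append("Step 1: upload an avatar image first.")
    if job.audio_seconds < 60:
        problems.append("Step 2: provide at least ~60s of clear audio for cloning.")
    if not job.script_text.strip():
        problems.append("Step 3: enter the text the digital human should deliver.")
    if job.aspect_ratio not in ASPECT_RATIOS:
        problems.append(f"Unsupported aspect ratio: {job.aspect_ratio}")
    return problems

job = S2VJob("avatar.png", 75.0, "Welcome to our channel!")
print(validate_job(job))  # → []
```

An empty list means all three inputs are ready; otherwise each message points back to the step that needs attention.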
About RunningHub
RunningHub is the world's first open-source-ecosystem AIGC co-creation platform for image, audio, and video applications. Through a modular node system and integrated cloud computing power, it turns complex processes such as design, video production, and digital content generation into "building-block" operations. The platform serves users in 144 countries, processes over a million creative requests daily, and is fundamentally reshaping the traditional content production model.
RunningHub is not only a creation tool but also a creator community: it lets developers upload nodes and workflows to earn revenue, forming a sustainable "creativity – development – reuse – monetization" economic loop.