Motion Controls

  • Motion Controls is an AI-powered video effect system that dynamically animates static images or enhances existing videos with motion-driven transformations such as zoom, spin, shake, bounce, and custom camera path effects.

Features

  • Motion Controls enables fluid animation effects, simulating camera movements and object dynamics using AI-generated motion layers and keyframes.

Key Features

  • Dynamic Motion Effects: Apply zoom, spin, shake, pan, rotate, and bounce using prompt-based or preset controls.
  • Static-to-Motion Animation: Turn static images into moving visuals with lifelike camera movement and animation flows.
  • Smooth Transitions: Generates interpolated frames for consistent, flicker-free animation.
  • Prompt or Preset-Based: Supports both natural language prompts (e.g., “rotate left and zoom in”) and predefined effect types.
  • Short Clip Focus: Optimized for clips between 5 and 10 seconds for maximum quality.
  • Fast Inference Pipeline: Most clips render in under two minutes; exact time depends on duration and resolution.
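
The prompt/preset duality above means a request can describe motion in free text, name a predefined effect, or both. A sketch of a well-formed payload (field names match the request examples later on this page; the image URL is a placeholder):

```python
# Illustrative payload combining a natural-language prompt with a preset
# effect name; the image URL is an assumption, not a real asset.
payload = {
    "prompt": "rotate left and zoom in",           # specific beats "move it around"
    "image_url": "https://example.com/photo.jpg",  # placeholder source image
    "name": "360 Orbit",                           # predefined effect type
    "aspect_ratio": "16:9",
    "resolution": "480p",
    "quality": "medium",
    "duration": 5,  # stay inside the 5-10 s sweet spot
}

# Keep the duration within the documented optimum before submitting.
assert 5 <= payload["duration"] <= 10
```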

Limitations

  • Not Real-World Motion Capture: Does not track or mimic real object motion — it creates synthetic camera/path-based motion.
  • Short Clip Optimization: Not suitable for clips longer than 10 seconds without segmentation.
  • Prompt Ambiguity: Prompts that lack specificity (e.g., “move it around”) may lead to unexpected output.
  • Single Subject Focus: Works best on images/videos with one main subject or focal point.
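
Because of the 10-second ceiling noted above, longer source material has to be processed as segments. A hypothetical client-side helper (not part of the API) that splits a duration into equal chunks of at most 10 seconds:

```python
import math

def split_into_segments(total_seconds: float, max_len: float = 10.0) -> list[float]:
    """Split a clip duration into equal-length segments no longer than max_len seconds."""
    n = math.ceil(total_seconds / max_len)
    return [total_seconds / n] * n
```

A 23-second clip, for example, yields three segments of roughly 7.7 seconds each, all inside the documented 5–10 second range.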

Out-of-Scope Use

  • Motion Controls must not be used in ways that violate ethical guidelines or legal standards, including:

    1. Generating misleading content to impersonate real-world footage.
    2. Applying motion to sensitive, violent, or harmful imagery.
    3. Enhancing or animating NSFW content.
    4. Harassment or abuse via manipulated video motion effects.
    5. Using AI-generated animations for political or misinformation campaigns.

Authentication

All requests are authenticated with an x-api-key header set to your MuAPI API key (the MUAPIAPP_API_KEY value used in the examples below).

API Endpoints

Submit Task & Query Result

  • cURL

    # Submit Task
    curl --location --request POST 'https://api.muapi.ai/api/v1/generate_wan_ai_effects' \
      --header "Content-Type: application/json" \
      --header "x-api-key: {MUAPIAPP_API_KEY}" \
      --data-raw '{
        "prompt": "a blueberry person",
        "image_url": "https://d3adwkbyhxyrtq.cloudfront.net/ai-images/186/833006366055/4f966d88-9ad7-4bd9-966d-c0fd7c5a9eb9.jpg",
        "name": "360 Orbit",
        "aspect_ratio": "16:9",
        "resolution": "480p",
        "quality": "medium",
        "duration": 5
      }'
    
    # Poll for results
    curl --location --request GET "https://api.muapi.ai/api/v1/predictions/${request_id}/result" \
      --header "x-api-key: {MUAPIAPP_API_KEY}"
    
  • Python

    import os
    import requests
    import json
    import time
    
    from dotenv import load_dotenv
    load_dotenv()
    
    def main():
      print("Hello from MuApiApp!")
      API_KEY = os.getenv("MUAPIAPP_API_KEY")
      if not API_KEY:
        raise RuntimeError("MUAPIAPP_API_KEY is not set")
    
      url = "https://api.muapi.ai/api/v1/generate_wan_ai_effects"
      headers = {
        "Content-Type": "application/json",
        "x-api-key": f"{API_KEY}",
      }
      payload = {
        "prompt": "a blueberry person",
        "image_url": "https://d3adwkbyhxyrtq.cloudfront.net/ai-images/186/833006366055/4f966d88-9ad7-4bd9-966d-c0fd7c5a9eb9.jpg",
        "name": "360 Orbit",
        "aspect_ratio": "16:9",
        "resolution": "480p",
        "quality": "medium",
        "duration": 5
      }
    
      begin = time.time()
      response = requests.post(url, headers=headers, data=json.dumps(payload))
      if response.status_code == 200:
        result = response.json()
        request_id = result["data"]["request_id"]
        print(f"Task submitted successfully. Request ID: {request_id}")
      else:
        print(f"Error: {response.status_code}, {response.text}")
        return
    
      url = f"https://api.muapi.ai/api/v1/predictions/{request_id}/result"
      headers = {"x-api-key": f"{API_KEY}"}
    
      while True:
        response = requests.get(url, headers=headers)
        if response.status_code == 200:
          result = response.json()
          status = result["data"]["status"]
    
          if status == "completed":
            end = time.time()
            print(f"Task completed in {end - begin} seconds.")
            video_url = result["data"]["outputs"][0]  # first generated video URL
            print(f"Task completed. URL: {video_url}")
            break
          elif status == "failed":
            print(f"Task failed: {result['data'].get('error')}")
            break
          else:
            print(f"Task still processing. Status: {status}")
        else:
          print(f"Error: {response.status_code}, {response.text}")
          break
    
        time.sleep(0.5)
    
    if __name__ == "__main__":
      main()
    
  • JavaScript

    require('dotenv').config();
    const axios = require('axios');
    
    async function main() {
      console.log("Hello from MuApiApp!");
    
      const API_KEY = process.env.MUAPIAPP_API_KEY;
      if (!API_KEY) throw new Error('MUAPIAPP_API_KEY is not set');
    
      const url = 'https://api.muapi.ai/api/v1/generate_wan_ai_effects';
      const headers = {
        'Content-Type': 'application/json',
        'x-api-key': `${API_KEY}`,
      };
    
      const payload = {
        prompt: 'a blueberry person',
        image_url: 'https://d3adwkbyhxyrtq.cloudfront.net/ai-images/186/833006366055/4f966d88-9ad7-4bd9-966d-c0fd7c5a9eb9.jpg',
        name: '360 Orbit',
        aspect_ratio: '16:9',
        resolution: '480p',
        quality: 'medium',
        duration: 5,
      };
    
      try {
        const begin = Date.now();
        const response = await axios.post(url, payload, { headers });
    
        if (response.status === 200) {
          const requestId = response.data.data.request_id;
          console.log(`Task submitted successfully. Request ID: ${requestId}`);
    
          const resultUrl = `https://api.muapi.ai/api/v1/predictions/${requestId}/result`;
          const pollHeaders = { 'x-api-key': API_KEY };
    
          while (true) {
            const pollResponse = await axios.get(resultUrl, { headers: pollHeaders });
    
            if (pollResponse.status === 200) {
              const status = pollResponse.data.data.status;
    
              if (status === 'completed') {
                const end = Date.now();
                console.log(`Task completed in ${(end - begin) / 1000} seconds.`);
                const videoUrl = pollResponse.data.data.outputs[0];  // first generated video URL
                console.log(`Task completed. URL: ${videoUrl}`);
                break;
              } else if (status === 'failed') {
                console.log(`Task failed: ${pollResponse.data.data.error}`);
                break;
              } else {
                console.log(`Task still processing. Status: ${status}`);
              }
            } else {
              console.log(`Error: ${pollResponse.status}, ${pollResponse.statusText}`);
              break;
            }
    
            await new Promise(resolve => setTimeout(resolve, 500));
          }
        } else {
          console.log(`Error: ${response.status}, ${response.statusText}`);
        }
      } catch (error) {
        console.error(`Request failed: ${error.message}`);
      }
    }
    
    main();
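
All three examples above poll on a fixed 0.5-second interval. For longer renders, a geometric backoff is gentler on the API. A sketch of such a delay schedule (the function name is illustrative, not part of any SDK):

```python
def backoff_delays(initial: float = 0.5, factor: float = 1.5,
                   cap: float = 10.0, limit: int = 20):
    """Yield successive polling delays that grow geometrically up to a cap."""
    delay = initial
    for _ in range(limit):
        yield min(delay, cap)
        delay *= factor
```

Each poll then sleeps for the next yielded delay instead of a constant 0.5 seconds, giving up after `limit` attempts.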
    

Parameters

Task Submission Parameters

Request Parameters

| Parameter | Type | Required | Default | Range | Description |
| --- | --- | --- | --- | --- | --- |
| prompt | string | Yes | "" | - | The prompt for generating the output. |
| image_url | string | Yes | "" | - | The image for generating the output. |
| name | string | Yes | "" | - | Custom name for the task or output file. |
| aspect_ratio | string | No | 16:9 | 1:1, 9:16, 16:9 | Aspect ratio of the output video. |
| resolution | string | No | 480p | 480p, 720p | Resolution of the output video. |
| quality | string | No | medium | medium, high | Controls overall video quality (affects rendering time). |
| duration | number | No | 5 | 5 ~ 10 | Length of the generated video in seconds. |
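
The constraints in the table above can be checked client-side before submission. A minimal validation sketch (a hypothetical helper, not an official SDK):

```python
# Allowed values as documented in the request-parameter table.
ALLOWED = {
    "aspect_ratio": {"1:1", "9:16", "16:9"},
    "resolution": {"480p", "720p"},
    "quality": {"medium", "high"},
}

def validate_payload(payload: dict) -> list[str]:
    """Return a list of problems; an empty list means the payload looks valid."""
    problems = []
    for field in ("prompt", "image_url", "name"):      # required fields
        if not payload.get(field):
            problems.append(f"missing required field: {field}")
    for field, allowed in ALLOWED.items():             # enumerated fields
        value = payload.get(field)
        if value is not None and value not in allowed:
            problems.append(f"{field} must be one of {sorted(allowed)}")
    duration = payload.get("duration", 5)              # default per the table
    if not 5 <= duration <= 10:
        problems.append("duration must be between 5 and 10 seconds")
    return problems
```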

Response Parameters

| Parameter | Type | Description |
| --- | --- | --- |
| code | integer | HTTP status code (e.g., 200 for success). |
| message | string | Status message (e.g., "success"). |
| id | string | Unique identifier for the generation task. |
| outputs | array of strings | URLs pointing to the generated video output(s). |
| urls.get | string | API URL to retrieve the result by task ID. |
| has_nsfw_contents | array of booleans | Indicates whether the output contains NSFW content. |
| status | string | Status of the task: completed, failed, or processing. |
| created_at | string (timestamp) | ISO timestamp indicating when the task was created. |
| error | string | Error message if the task failed; empty if successful. |
| executionTime | number (milliseconds) | Total time taken to complete the task. |
| timings.inference | number (milliseconds) | Time taken specifically for model inference. |
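
Per the table, the urls.get field already contains the polling URL, so a client can prefer it over assembling the path by hand. A sketch (a hypothetical helper, not an official SDK):

```python
def polling_url(response: dict, base: str = "https://api.muapi.ai/api/v1") -> str:
    """Prefer the urls.get field from a submission response; fall back to
    building the predictions path from the task id."""
    url = (response.get("urls") or {}).get("get")
    if url:
        return url
    return f"{base}/predictions/{response['id']}/result"
```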

Result Query Parameters

Result Request Parameters

| Parameter | Type | Required | Default | Description |
| --- | --- | --- | --- | --- |
| id | string | Yes | - | Task ID returned at submission. |

Result Response Parameters

| Parameter | Type | Description |
| --- | --- | --- |
| code | integer | HTTP status code (e.g., 200 for success). |
| message | string | Status message (e.g., "success"). |
| id | string | Unique identifier for the generation task. |
| outputs | array of strings | URLs pointing to the generated video output(s). |
| urls.get | string | API URL to retrieve the result by task ID. |
| has_nsfw_contents | array of booleans | Indicates whether the output contains NSFW content. |
| status | string | Status of the task: completed, failed, or processing. |
| created_at | string (timestamp) | ISO timestamp indicating when the task was created. |
| error | string | Error message if the task failed; empty if successful. |
| executionTime | number (milliseconds) | Total time taken to complete the task. |
| timings.inference | number (milliseconds) | Time taken specifically for model inference. |
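
A result shaped like this table can be unpacked as follows. The accessors assume the fields sit at the top level as documented here; adjust them if your responses nest these fields under a data envelope, as the earlier code examples do:

```python
def extract_result(result: dict):
    """Return the first output URL when the task completed, None while it is
    still processing, and raise when it failed."""
    status = result.get("status")
    if status == "completed":
        outputs = result.get("outputs") or []
        return outputs[0] if outputs else None
    if status == "failed":
        raise RuntimeError(result.get("error") or "task failed")
    return None  # still processing
```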