AI Video Effects

  • AI Video Effects is a state-of-the-art video transformation engine that applies advanced stylization, restoration, and animation effects to user-submitted images, producing short AI-generated video clips.

Features

  • AI Video Effects enables seamless video enhancement and transformation using prompt-based controls or pretrained effect types.

Key Features

  • Prompt-Driven Video Effects: Apply rotation, animal, assassin, or angry effects using natural language prompts.
  • Pretrained Effects Library: Choose from a growing set of built-in effects like VHS Footage, Samurai It, Film Noir, Inflate It, and more (see the sketch after this list).
  • Frame Consistency: Maintains temporal consistency across frames for smooth output without flickering.
  • Fast Inference: Designed for high-speed inference — typically under 2 minutes for short clips.
  • Flexible Input Support: Accepts a variety of input formats including MP4, MOV, and WebM.
  • Cloud-Based or On-Prem Deployments: Can run via API or be deployed in private environments.
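
  To make these controls concrete, the sketch below pairs the prompt field with an effect selected via the name field (both documented under Parameters). The effect names are taken from the list above, but whether they are accepted verbatim as name values, and the image URL, are assumptions for illustration.

    # Illustrative request payloads only: placeholder image URL, and the exact
    # accepted effect-name strings are assumptions.
    vhs_style = {
        "prompt": "grainy late-night home video of a skateboarder",
        "image_url": "https://example.com/input.jpg",
        "name": "VHS Footage",
    }

    film_noir = {
        "prompt": "a detective lighting a cigarette in the rain",
        "image_url": "https://example.com/input.jpg",
        "name": "Film Noir",
    }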

Limitations

  • Creative Focus: The model is tuned for artistic and entertainment use cases, not real-world factual video restoration.
  • Effect Specificity: Some effects may have only a subtle impact on certain kinds of footage (e.g., low-light or fast-moving scenes).
  • Clip Duration: Designed for short-form content (under 10 seconds); longer videos may need preprocessing or chunking (see the sketch after this list).
  • Prompt Ambiguity: Vague or contradictory prompts may yield unpredictable results.
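
  If a source clip exceeds the supported duration, one option is to split it into short segments before submission. The sketch below is a minimal illustration using ffmpeg through Python's subprocess module; the segment length, file names, and helper name are assumptions, not part of the AI Video Effects API.

    import subprocess

    def split_video(input_path: str, segment_seconds: int = 10) -> None:
        """Split a clip into fixed-length segments without re-encoding (hypothetical helper)."""
        subprocess.run(
            [
                "ffmpeg",
                "-i", input_path,
                "-c", "copy",                 # copy streams, no re-encode
                "-f", "segment",              # use ffmpeg's segment muxer
                "-segment_time", str(segment_seconds),
                "-reset_timestamps", "1",
                "chunk_%03d.mp4",             # hypothetical output naming
            ],
            check=True,
        )

    split_video("long_clip.mp4", segment_seconds=10)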

Out-of-Scope Use

  • AI Video Effects must not be used in ways that violate applicable laws or ethical guidelines. Prohibited uses include, but are not limited to:

    1. Generating deepfakes intended to impersonate real individuals without consent.
    2. Creating or spreading disinformation or manipulated media with intent to deceive.
    3. Using the model to generate non-consensual or exploitative visual content.
    4. Applying effects to content involving minors in ways that could be construed as harmful or exploitative.
    5. Using AI-generated videos for harassment, threats, or coordinated abuse campaigns.
    6. Creating video output intended to incite violence, hate, or discrimination.
    7. Circumventing intellectual property rights or creating misleading representations of brands or public figures.

Authentication

  • All requests are authenticated with your API key, sent in the x-api-key request header, as shown in the examples below.

API Endpoints

Submit Task & Query Result

  • cURL

    # Submit Task
    curl --location --request POST 'https://api.muapi.ai/api/v1/generate_wan_ai_effects' \
      --header "Content-Type: application/json" \
      --header "x-api-key: {MUAPIAPP_API_KEY}" \
      --data-raw '{
        "prompt": "a cute kitten",
        "image_url": "https://d3adwkbyhxyrtq.cloudfront.net/ai-images/186/  902675646946/c0951838-8bc5-4598-8e8e-941df16446fa.jpg",
        "name": "Cakeify",
        "aspect_ratio": "16:9",
        "resolution": "480p",
        "quality": "medium",
        "duration": 5
      }'
    
    # Poll for results
    curl --location --request GET "https://api.muapi.ai/api/v1/predictions/${request_id}/result" \
      --header "x-api-key: {MUAPIAPP_API_KEY}"
    
  • Python

    import os
    import requests
    import json
    import time
    
    from dotenv import load_dotenv
    load_dotenv()
    
    def main():
      print("Hello from MuApiApp!")
      API_KEY = os.getenv("MUAPIAPP_API_KEY")
      print(f"API_KEY: {API_KEY}")
    
      url = "https://api.muapi.ai/api/v1/generate_wan_ai_effects"
      headers = {
        "Content-Type": "application/json",
        "x-api-key": f"{API_KEY}",
      }
      payload = {
        "prompt": "a cute kitten",
        "image_url": "https://d3adwkbyhxyrtq.cloudfront.net/ai-images/186/902675646946/c0951838-8bc5-4598-8e8e-941df16446fa.jpg",
        "name": "Cakeify",
        "aspect_ratio": "16:9",
        "resolution": "480p",
        "quality": "medium",
        "duration": 5
      }
    
      begin = time.time()
      response = requests.post(url, headers=headers, data=json.dumps(payload))
      if response.status_code == 200:
        result = response.json()
        request_id = result["data"]["request_id"]
        print(f"Task submitted successfully. Request ID: {request_id}")
      else:
        print(f"Error: {response.status_code}, {response.text}")
        return
    
      url = f"https://api.muapi.ai/api/v1/predictions/{request_id}/result"
      headers = {"x-api-key": f"{API_KEY}"}
    
      while True:
        response = requests.get(url, headers=headers)
        if response.status_code == 200:
          result = response.json()
          status = result["data"]["status"]
    
          if status == "completed":
            end = time.time()
            print(f"Task completed in {end - begin} seconds.")
            url = result["video"]["url"]
            print(f"Task completed. URL: {url}")
            break
          elif status == "failed":
            print(f"Task failed: {result.get('error')}")
            break
          else:
            print(f"Task still processing. Status: {status}")
        else:
          print(f"Error: {response.status_code}, {response.text}")
          break
    
        time.sleep(0.5)
    
    if __name__ == "__main__":
      main()
    
  • JavaScript

    require('dotenv').config();
    const axios = require('axios');
    
    async function main() {
      console.log("Hello from MuApiApp!");
    
      const API_KEY = process.env.MUAPIAPP_API_KEY;
      console.log(`API_KEY: ${API_KEY}`);
    
      const url = 'https://api.muapi.ai/api/v1/generate_wan_ai_effects';
      const headers = {
        'Content-Type': 'application/json',
        'x-api-key': `${API_KEY}`,
      };
    
      const payload = {
        prompt: 'a cute kitten',
        image_url: 'https://d3adwkbyhxyrtq.cloudfront.net/ai-images/186/902675646946/c0951838-8bc5-4598-8e8e-941df16446fa.jpg',
        name: 'Cakeify',
        aspect_ratio: '16:9',
        resolution: '480p',
        quality: 'medium',
        duration: 5,
      };
    
      try {
        const begin = Date.now();
        const response = await axios.post(url, payload, { headers });
    
        if (response.status === 200) {
          const requestId = response.data.data.request_id;
          console.log(`Task submitted successfully. Request ID: ${requestId}`);
    
          const resultUrl = `https://api.muapi.ai/api/v1/predictions/${requestId}/result`;
          const pollHeaders = { 'x-api-key': API_KEY };
    
          while (true) {
            const pollResponse = await axios.get(resultUrl, { headers: pollHeaders });
    
            if (pollResponse.status === 200) {
              const status = pollResponse.data.data.status;
    
              if (status === 'completed') {
                const end = Date.now();
                console.log(`Task completed in ${(end - begin) / 1000} seconds.`);
                // Per the documented response schema, output URLs are listed in data.outputs
                const videoUrl = pollResponse.data.data.outputs[0];
                console.log(`Task completed. URL: ${videoUrl}`);
                break;
              } else if (status === 'failed') {
                console.log(`Task failed: ${pollResponse.data.error}`);
                break;
              } else {
                console.log(`Task still processing. Status: ${status}`);
              }
            } else {
              console.log(`Error: ${pollResponse.status}, ${pollResponse.statusText}`);
              break;
            }
    
            await new Promise(resolve => setTimeout(resolve, 500));
          }
        } else {
          console.log(`Error: ${response.status}, ${response.statusText}`);
        }
      } catch (error) {
        console.error(`Request failed: ${error.message}`);
      }
    }
    
    main();
    

Parameters

Task Submission Parameters

Request Parameters

| Parameter    | Type   | Required | Default | Range           | Description                                               |
|--------------|--------|----------|---------|-----------------|-----------------------------------------------------------|
| prompt       | string | Yes      | ""      | -               | The prompt for generating the output.                     |
| image_url    | string | Yes      | ""      | -               | The image used for generating the output.                 |
| name         | string | Yes      | ""      | -               | Name of the effect to apply (e.g., Cakeify).              |
| aspect_ratio | string | No       | 16:9    | 1:1, 9:16, 16:9 | Aspect ratio of the output video.                         |
| resolution   | string | No       | 480p    | 480p, 720p      | Resolution of the output video.                           |
| quality      | string | No       | medium  | medium, high    | Controls overall video quality (affects rendering time).  |
| duration     | number | No       | 5       | 5 ~ 10          | Length of the generated video in seconds.                 |
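
  As a quick illustration of the table above, a minimal submission only needs the required fields; the optional parameters fall back to their documented defaults. The payload below is a sketch with a placeholder image URL.

    import os
    import requests

    # Minimal payload: only the required fields are set, so aspect_ratio,
    # resolution, quality, and duration fall back to their documented defaults.
    payload = {
        "prompt": "a cute kitten",
        "image_url": "https://example.com/input.jpg",  # placeholder image URL
        "name": "Cakeify",
    }

    response = requests.post(
        "https://api.muapi.ai/api/v1/generate_wan_ai_effects",
        headers={"x-api-key": os.getenv("MUAPIAPP_API_KEY")},
        json=payload,
    )
    print(response.json())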

Response Parameters

| Parameter         | Type                   | Description                                             |
|-------------------|------------------------|---------------------------------------------------------|
| code              | integer                | HTTP status code (e.g., 200 for success).               |
| message           | string                 | Status message (e.g., "success").                       |
| id                | string                 | Unique identifier for the generation task.              |
| outputs           | array of strings       | URLs pointing to the generated video output(s).         |
| urls.get          | string                 | API URL to retrieve the result by task ID.              |
| has_nsfw_contents | array of booleans      | Indicates whether the output contains NSFW content.     |
| status            | string                 | Status of the task: completed, failed, or processing.   |
| created_at        | string (timestamp)     | ISO timestamp indicating when the task was created.     |
| error             | string                 | Error message if the task failed; empty if successful.  |
| executionTime     | number (milliseconds)  | Total time taken to complete the task, in milliseconds. |
| timings.inference | number (milliseconds)  | Time taken for model inference, in milliseconds.        |
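
  For orientation only, the sketch below shows one plausible shape for the submission response, inferred from the fields above and from the polling code earlier on this page; the nesting of these fields under data is an assumption and should be confirmed against a live response.

    # Illustrative only: the nesting under "data" and the placeholder values
    # are assumptions inferred from the field list above.
    example_response = {
        "code": 200,
        "message": "success",
        "data": {
            "id": "task_123",           # hypothetical task ID
            "outputs": [],              # populated once the task completes
            "urls": {"get": "https://api.muapi.ai/api/v1/predictions/task_123/result"},
            "has_nsfw_contents": [],
            "status": "processing",     # completed | failed | processing
            "created_at": "2025-01-01T00:00:00Z",
            "error": "",
            "executionTime": 0,
            "timings": {"inference": 0},
        },
    }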

Result Query Parameters

Result Request Parameters

| Parameter | Type   | Required | Default | Description |
|-----------|--------|----------|---------|-------------|
| id        | string | Yes      | -       | Task ID.    |

Result Response Parameters

| Parameter         | Type                   | Description                                             |
|-------------------|------------------------|---------------------------------------------------------|
| code              | integer                | HTTP status code (e.g., 200 for success).               |
| message           | string                 | Status message (e.g., "success").                       |
| id                | string                 | Unique identifier for the generation task.              |
| outputs           | array of strings       | URLs pointing to the generated video output(s).         |
| urls.get          | string                 | API URL to retrieve the result by task ID.              |
| has_nsfw_contents | array of booleans      | Indicates whether the output contains NSFW content.     |
| status            | string                 | Status of the task: completed, failed, or processing.   |
| created_at        | string (timestamp)     | ISO timestamp indicating when the task was created.     |
| error             | string                 | Error message if the task failed; empty if successful.  |
| executionTime     | number (milliseconds)  | Total time taken to complete the task, in milliseconds. |
| timings.inference | number (milliseconds)  | Time taken for model inference, in milliseconds.        |
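
  As a small usage note, once status is completed the documented fields can be read directly from the result. The sketch below assumes the result fields are nested under data, matching the polling examples earlier on this page; fetch_outputs is a hypothetical helper.

    import os
    import requests

    def fetch_outputs(request_id: str) -> list[str]:
        """Return generated video URLs for a finished task (sketch; assumes nesting under data)."""
        resp = requests.get(
            f"https://api.muapi.ai/api/v1/predictions/{request_id}/result",
            headers={"x-api-key": os.getenv("MUAPIAPP_API_KEY")},
        )
        resp.raise_for_status()
        data = resp.json()["data"]  # assumed nesting, as in the polling examples
        if data["status"] != "completed":
            raise RuntimeError(data.get("error") or f"task not finished: {data['status']}")
        return data["outputs"]      # documented list of output video URLs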