VFX

  • The VFX model enables cinematic, high-impact visual effects to be applied to videos or images. It is designed to simulate large-scale visual effects such as explosions, disintegration, levitation, and elemental forces using AI-based compositing and motion synthesis.

Features

  • VFX empowers creators with Hollywood-style visual effects through simple prompts or preset selections. Effects are composited onto provided media using spatial, temporal, and visual consistency mechanisms.

Key Features

  • AI-Driven Visual FX: Apply dynamic, cinematic effects, such as explosions, lightning, and tornadoes, to static images or videos.
  • Preset Effects Library: Ships with prebuilt effects including Building Explosion, Car Explosion, and others.
  • Spatially-Aware Compositing: Effects are aligned and blended with subjects using AI-inferred object positioning.
  • Temporal Control: The timing of effect durations and transitions is automatically calibrated for realism.
  • Flexible Input: Works with static frames and short video clips.
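
The features above map onto a small set of request fields. As a sketch, a minimal request payload looks like the following (field names and allowed values follow the parameter reference in this document; the prompt and image URL are placeholders):

```python
# Minimal VFX request payload. "name" selects a preset effect from the
# library (e.g., "Car Explosion"); the remaining fields are optional and
# fall back to the documented defaults.
payload = {
    "prompt": "a sports car on an empty street",  # free-text description
    "image_url": "https://example.com/car.jpg",   # placeholder input image
    "name": "Car Explosion",                      # preset effect name
    "aspect_ratio": "16:9",                       # optional: 1:1, 9:16, 16:9
    "resolution": "480p",                         # optional: 480p or 720p
    "quality": "medium",                          # optional: medium or high
    "duration": 5,                                # optional: 5-10 seconds
}
```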

Limitations

  • Effect Intensity May Vary: Some presets are highly dependent on subject pose, position, or lighting for best results.
  • Single-Subject Priority: Currently optimized for scenes with one or two main visual anchors (e.g., a person or car).
  • Static Backgrounds Preferred: Scenes with camera motion or busy backgrounds may reduce realism.
  • No Fine-Tuned Control: Users cannot yet fully customize FX positioning, layering, or path animation.

Out-of-Scope Use

  • VFX must not be used for:

    1. Simulating real-world disaster footage to mislead or cause panic.
    2. Creating visual depictions of harm, violence, or abuse without context or consent.
    3. Modifying imagery of minors in sensitive scenarios.
    4. Misinformation or impersonation campaigns.
    5. Creating scenes that promote hate, extremism, or illegal activities.

Authentication

  • All API requests must include your MuAPI key in the x-api-key request header, as shown in the examples below. Store the key in the MUAPIAPP_API_KEY environment variable rather than hard-coding it.
API Endpoints

Submit Task & Query Result

  • cURL

    # Submit Task
    curl --location --request POST 'https://api.muapi.ai/api/v1/generate_wan_ai_effects' \
      --header "Content-Type: application/json" \
      --header "x-api-key: {MUAPIAPP_API_KEY}" \
      --data-raw '{
        "prompt": "a Mercedes-Benz car",
        "image_url": "https://d3adwkbyhxyrtq.cloudfront.net/ai-images/186/325013990791/fa5a980c-5c3c-42c3-87a3-578976ffd8a6.jpg",
        "name": "Car Explosion",
        "aspect_ratio": "16:9",
        "resolution": "480p",
        "quality": "medium",
        "duration": 5
      }'
    
    # Poll for results
    curl --location --request GET "https://api.muapi.ai/api/v1/predictions/${request_id}/result" \
      --header "x-api-key: {MUAPIAPP_API_KEY}"
    
  • Python

    import os
    import requests
    import json
    import time
    
    from dotenv import load_dotenv
    load_dotenv()
    
    def main():
      print("Hello from MuApiApp!")
      API_KEY = os.getenv("MUAPIAPP_API_KEY")
      print(f"API_KEY: {API_KEY}")
    
      url = "https://api.muapi.ai/api/v1/generate_wan_ai_effects"
      headers = {
        "Content-Type": "application/json",
        "x-api-key": f"{API_KEY}",
      }
      payload = {
        "prompt": "a Mercedes-Benz car",
        "image_url": "https://d3adwkbyhxyrtq.cloudfront.net/ai-images/186/325013990791/fa5a980c-5c3c-42c3-87a3-578976ffd8a6.jpg",
        "name": "Car Explosion",
        "aspect_ratio": "16:9",
        "resolution": "480p",
        "quality": "medium",
        "duration": 5
      }
    
      begin = time.time()
      response = requests.post(url, headers=headers, data=json.dumps(payload))
      if response.status_code == 200:
        result = response.json()
        request_id = result["data"]["request_id"]
        print(f"Task submitted successfully. Request ID: {request_id}")
      else:
        print(f"Error: {response.status_code}, {response.text}")
        return
    
      url = f"https://api.muapi.ai/api/v1/predictions/{request_id}/result"
      headers = {"x-api-key": f"{API_KEY}"}
    
      while True:
        response = requests.get(url, headers=headers)
        if response.status_code == 200:
          result = response.json()
          status = result["data"]["status"]
    
          if status == "completed":
            end = time.time()
            print(f"Task completed in {end - begin} seconds.")
            video_url = result["video"]["url"]
            print(f"Result URL: {video_url}")
            break
          elif status == "failed":
            print(f"Task failed: {result.get('error')}")
            break
          else:
            print(f"Task still processing. Status: {status}")
        else:
          print(f"Error: {response.status_code}, {response.text}")
          break
    
        time.sleep(0.5)
    
    if __name__ == "__main__":
      main()
    
  • JavaScript

    require('dotenv').config();
    const axios = require('axios');
    
    async function main() {
      console.log("Hello from MuApiApp!");
    
      const API_KEY = process.env.MUAPIAPP_API_KEY;
      console.log(`API_KEY: ${API_KEY}`);
    
      const url = 'https://api.muapi.ai/api/v1/generate_wan_ai_effects';
      const headers = {
        'Content-Type': 'application/json',
        'x-api-key': `${API_KEY}`,
      };
    
      const payload = {
        prompt: 'a Mercedes-Benz car',
        image_url: 'https://d3adwkbyhxyrtq.cloudfront.net/ai-images/186/325013990791/fa5a980c-5c3c-42c3-87a3-578976ffd8a6.jpg',
        name: 'Car Explosion',
        aspect_ratio: '16:9',
        resolution: '480p',
        quality: 'medium',
        duration: 5,
      };
    
      try {
        const begin = Date.now();
        const response = await axios.post(url, payload, { headers });
    
        if (response.status === 200) {
          const requestId = response.data.data.request_id;
          console.log(`Task submitted successfully. Request ID: ${requestId}`);
    
          const resultUrl = `https://api.muapi.ai/api/v1/predictions/${requestId}/result`;
          const pollHeaders = { 'x-api-key': `${API_KEY}` };
    
          while (true) {
            const pollResponse = await axios.get(resultUrl, { headers: pollHeaders });
    
            if (pollResponse.status === 200) {
              const status = pollResponse.data.data.status;
    
              if (status === 'completed') {
                const end = Date.now();
                console.log(`Task completed in ${(end - begin) / 1000} seconds.`);
                const videoUrl = pollResponse.data.video.url;
                console.log(`Result URL: ${videoUrl}`);
                break;
              } else if (status === 'failed') {
                console.log(`Task failed: ${pollResponse.data.error}`);
                break;
              } else {
                console.log(`Task still processing. Status: ${status}`);
              }
            } else {
              console.log(`Error: ${pollResponse.status}, ${pollResponse.statusText}`);
              break;
            }
    
            await new Promise(resolve => setTimeout(resolve, 500));
          }
        } else {
          console.log(`Error: ${response.status}, ${response.statusText}`);
        }
      } catch (error) {
        console.error(`Request failed: ${error.message}`);
      }
    }
    
    main();
    

Parameters

Task Submission Parameters

Request Parameters

Parameter    | Type   | Required | Default | Range           | Description
prompt       | string | Yes      | ""      | -               | The prompt for generating the output.
image_url    | string | Yes      | ""      | -               | The image used for generating the output.
name         | string | Yes      | ""      | -               | Custom name for the task or output file.
aspect_ratio | string | No       | 16:9    | 1:1, 9:16, 16:9 | Aspect ratio of the output video.
resolution   | string | No       | 480p    | 480p, 720p      | Resolution of the output video.
quality      | string | No       | medium  | medium, high    | Controls overall video quality (affects rendering time).
duration     | number | No       | 5       | 5 ~ 10          | Length of the generated video in seconds.
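
The defaults and ranges above can be enforced client-side before a task is submitted. The helper below is a hypothetical sketch (not part of the MuAPI SDK) that fills in documented defaults and rejects out-of-range values:

```python
# Hypothetical client-side validator for the documented request parameters;
# not part of the MuAPI SDK.
ALLOWED = {
    "aspect_ratio": {"1:1", "9:16", "16:9"},
    "resolution": {"480p", "720p"},
    "quality": {"medium", "high"},
}
REQUIRED = ("prompt", "image_url", "name")

def validate_payload(payload: dict) -> dict:
    """Return the payload with defaults filled in, or raise ValueError."""
    for key in REQUIRED:
        if not payload.get(key):
            raise ValueError(f"missing required field: {key}")
    # Apply documented defaults, letting caller-supplied values win.
    merged = {"aspect_ratio": "16:9", "resolution": "480p",
              "quality": "medium", "duration": 5, **payload}
    for key, allowed in ALLOWED.items():
        if merged[key] not in allowed:
            raise ValueError(f"{key} must be one of {sorted(allowed)}")
    if not 5 <= merged["duration"] <= 10:
        raise ValueError("duration must be between 5 and 10 seconds")
    return merged
```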

Response Parameters

Parameter         | Type                  | Description
code              | integer               | HTTP status code (e.g., 200 for success).
message           | string                | Status message (e.g., "success").
id                | string                | Unique identifier for the generation task.
outputs           | array of strings      | URLs pointing to the generated video output(s).
urls.get          | string                | API URL to retrieve the result by task ID.
has_nsfw_contents | array of booleans     | Indicates whether the output contains NSFW content.
status            | string                | Status of the task: completed, failed, or processing.
created_at        | string (timestamp)    | ISO timestamp indicating when the task was created.
error             | string                | Error message if the task failed; empty if successful.
executionTime     | number (milliseconds) | Total time taken to complete the task in milliseconds.
timings.inference | number (milliseconds) | Time taken specifically for model inference in milliseconds.

Result Query Parameters

Result Request Parameters

Parameter | Type   | Required | Default | Description
id        | string | Yes      | -       | Task ID.

Result Response Parameters

Parameter         | Type                  | Description
code              | integer               | HTTP status code (e.g., 200 for success).
message           | string                | Status message (e.g., "success").
id                | string                | Unique identifier for the generation task.
outputs           | array of strings      | URLs pointing to the generated video output(s).
urls.get          | string                | API URL to retrieve the result by task ID.
has_nsfw_contents | array of booleans     | Indicates whether the output contains NSFW content.
status            | string                | Status of the task: completed, failed, or processing.
created_at        | string (timestamp)    | ISO timestamp indicating when the task was created.
error             | string                | Error message if the task failed; empty if successful.
executionTime     | number (milliseconds) | Total time taken to complete the task in milliseconds.
timings.inference | number (milliseconds) | Time taken specifically for model inference in milliseconds.
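
Given the schema above, a completed result reduces to its first entry in outputs. The sketch below parses an illustrative response dict (the values are placeholders, not real API output; field names follow the table):

```python
# Hypothetical helper that extracts the first output URL from a result
# response shaped like the table above; not part of the MuAPI SDK.
def first_output(result: dict):
    """Return the first generated video URL, or None if not ready."""
    if result.get("status") != "completed":
        return None
    outputs = result.get("outputs") or []
    return outputs[0] if outputs else None

# Illustrative response (placeholder values, not real API output).
example = {
    "code": 200,
    "message": "success",
    "id": "task-123",
    "status": "completed",
    "outputs": ["https://example.com/output.mp4"],
    "has_nsfw_contents": [False],
    "error": "",
}
```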