VFX
- The VFX model applies cinematic, high-impact visual effects to videos or images. It simulates large-scale effects such as explosions, disintegration, levitation, and elemental forces using AI-based compositing and motion synthesis.
Features
- VFX empowers creators with Hollywood-style visual effects through simple prompts or preset selections. Effects are composited onto provided media using spatial, temporal, and visual consistency mechanisms.
Key Features
- AI-Driven Visual FX: Apply dynamic, cinematic effects such as explosions, lightning, and tornadoes to static images or videos.
- Preset Effects Library: Supports prebuilt effects, including Building Explosion and Car Explosion.
- Spatially-Aware Compositing: Effects are aligned and blended with subjects using AI-inferred object positioning.
- Temporal Control: Effect duration and transition timing are automatically calibrated for realism.
- Media Input: Works with static frames and short video clips.
Limitations
- Effect Intensity May Vary: Some presets are highly dependent on subject pose, position, or lighting for best results.
- Single-Subject Priority: Currently optimized for scenes with one or two main visual anchors (e.g., a person or car).
- Static Backgrounds Preferred: Scenes with camera motion or busy backgrounds may reduce realism.
- No Fine-Tuned Control: Users cannot yet fully customize FX positioning, layering, or path animation.
Out-of-Scope Use
VFX must not be used for:
- Simulating real-world disaster footage to mislead or cause panic.
- Creating visual depictions of harm, violence, or abuse without context or consent.
- Modifying imagery of minors in sensitive scenarios.
- Misinformation or impersonation campaigns.
- Creating scenes that promote hate, extremism, or illegal activities.
Authentication
- For authentication details, please refer to the Authentication Guide.
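Every request in the examples below authenticates with an `x-api-key` header. As a minimal sketch (the environment-variable name matches the code samples in this page; the fallback value is a placeholder):

```python
import os

# Read the MuAPI key from the environment; "your-api-key" is a placeholder.
API_KEY = os.getenv("MUAPIAPP_API_KEY", "your-api-key")

# These two headers are sent with every request in the examples below.
headers = {
    "Content-Type": "application/json",
    "x-api-key": API_KEY,
}
```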
API Endpoints
Submit Task & Query Result
cURL
```bash
# Submit Task
curl --location --request POST 'https://api.muapi.ai/api/v1/generate_wan_ai_effects' \
  --header "Content-Type: application/json" \
  --header "x-api-key: {MUAPIAPP_API_KEY}" \
  --data-raw '{
    "prompt": "a Mercedes Benz car",
    "image_url": "https://example.com/car.jpg",
    "name": "Car Explosion",
    "aspect_ratio": "16:9",
    "resolution": "480p",
    "quality": "medium",
    "duration": 5
  }'

# Poll for results
curl --location --request GET "https://api.muapi.ai/api/v1/predictions/${request_id}/result" \
  --header "x-api-key: {MUAPIAPP_API_KEY}"
```
Python
```python
import json
import os
import time

import requests
from dotenv import load_dotenv

load_dotenv()


def main():
    API_KEY = os.getenv("MUAPIAPP_API_KEY")

    # Submit the task
    url = "https://api.muapi.ai/api/v1/generate_wan_ai_effects"
    headers = {
        "Content-Type": "application/json",
        "x-api-key": API_KEY,
    }
    payload = {
        "prompt": "a Mercedes Benz car",
        "image_url": "https://d3adwkbyhxyrtq.cloudfront.net/ai-images/186/325013990791/fa5a980c-5c3c-42c3-87a3-578976ffd8a6.jpg",
        "name": "Car Explosion",
        "aspect_ratio": "16:9",
        "resolution": "480p",
        "quality": "medium",
        "duration": 5,
    }

    begin = time.time()
    response = requests.post(url, headers=headers, data=json.dumps(payload))
    if response.status_code != 200:
        print(f"Error: {response.status_code}, {response.text}")
        return
    result = response.json()
    request_id = result["data"]["request_id"]
    print(f"Task submitted successfully. Request ID: {request_id}")

    # Poll for the result
    result_url = f"https://api.muapi.ai/api/v1/predictions/{request_id}/result"
    poll_headers = {"x-api-key": API_KEY}
    while True:
        response = requests.get(result_url, headers=poll_headers)
        if response.status_code != 200:
            print(f"Error: {response.status_code}, {response.text}")
            break
        result = response.json()
        status = result["data"]["status"]
        if status == "completed":
            print(f"Task completed in {time.time() - begin:.1f} seconds.")
            # The output URLs are listed in "outputs"; see Response Parameters.
            video_url = result["data"]["outputs"][0]
            print(f"Video URL: {video_url}")
            break
        elif status == "failed":
            print(f"Task failed: {result['data'].get('error')}")
            break
        print(f"Task still processing. Status: {status}")
        time.sleep(0.5)


if __name__ == "__main__":
    main()
```
JavaScript
```javascript
require('dotenv').config();
const axios = require('axios');

async function main() {
  const API_KEY = process.env.MUAPIAPP_API_KEY;

  // Submit the task
  const url = 'https://api.muapi.ai/api/v1/generate_wan_ai_effects';
  const headers = {
    'Content-Type': 'application/json',
    'x-api-key': API_KEY,
  };
  const payload = {
    prompt: 'a Mercedes Benz car',
    image_url: 'https://d3adwkbyhxyrtq.cloudfront.net/ai-images/186/325013990791/fa5a980c-5c3c-42c3-87a3-578976ffd8a6.jpg',
    name: 'Car Explosion',
    aspect_ratio: '16:9',
    resolution: '480p',
    quality: 'medium',
    duration: 5,
  };

  try {
    const begin = Date.now();
    const response = await axios.post(url, payload, { headers });
    if (response.status !== 200) {
      console.log(`Error: ${response.status}, ${response.statusText}`);
      return;
    }
    const requestId = response.data.data.request_id;
    console.log(`Task submitted successfully. Request ID: ${requestId}`);

    // Poll for the result
    const resultUrl = `https://api.muapi.ai/api/v1/predictions/${requestId}/result`;
    const pollHeaders = { 'x-api-key': API_KEY };
    while (true) {
      const pollResponse = await axios.get(resultUrl, { headers: pollHeaders });
      if (pollResponse.status !== 200) {
        console.log(`Error: ${pollResponse.status}, ${pollResponse.statusText}`);
        break;
      }
      const status = pollResponse.data.data.status;
      if (status === 'completed') {
        console.log(`Task completed in ${(Date.now() - begin) / 1000} seconds.`);
        // The output URLs are listed in "outputs"; see Response Parameters.
        const videoUrl = pollResponse.data.data.outputs[0];
        console.log(`Video URL: ${videoUrl}`);
        break;
      } else if (status === 'failed') {
        console.log(`Task failed: ${pollResponse.data.data.error}`);
        break;
      }
      console.log(`Task still processing. Status: ${status}`);
      await new Promise(resolve => setTimeout(resolve, 500));
    }
  } catch (error) {
    console.error(`Request failed: ${error.message}`);
  }
}

main();
```
Parameters
Task Submission Parameters
Request Parameters
| Parameter | Type | Required | Default | Range | Description |
|---|---|---|---|---|---|
| prompt | string | Yes | "" | - | The prompt for generating the output. |
| image_url | string | Yes | "" | - | The source image used for generating the output. |
| name | string | Yes | "" | - | Name of the preset effect to apply (e.g., Car Explosion). |
| aspect_ratio | string | No | 16:9 | 1:1, 9:16, 16:9 | Aspect ratio of the output video. |
| resolution | string | No | 480p | 480p, 720p | Resolution of the output video. |
| quality | string | No | medium | medium, high | Controls overall video quality (affects rendering time). |
| duration | number | No | 5 | 5 to 10 | Length of the generated video in seconds. |
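The table above can be enforced client-side before submission. The following sketch builds a payload with the documented defaults and allowed values; `build_payload` is a hypothetical helper, not part of the MuAPI client:

```python
# Allowed values, taken from the parameter table above.
ALLOWED = {
    "aspect_ratio": {"1:1", "9:16", "16:9"},
    "resolution": {"480p", "720p"},
    "quality": {"medium", "high"},
}

def build_payload(prompt, image_url, name, aspect_ratio="16:9",
                  resolution="480p", quality="medium", duration=5):
    """Build a task-submission payload, validating against the docs."""
    for field, value in (("aspect_ratio", aspect_ratio),
                         ("resolution", resolution),
                         ("quality", quality)):
        if value not in ALLOWED[field]:
            raise ValueError(f"{field} must be one of {sorted(ALLOWED[field])}")
    if not 5 <= duration <= 10:
        raise ValueError("duration must be between 5 and 10 seconds")
    return {"prompt": prompt, "image_url": image_url, "name": name,
            "aspect_ratio": aspect_ratio, "resolution": resolution,
            "quality": quality, "duration": duration}

payload = build_payload("a Mercedes Benz car",
                        "https://example.com/car.jpg",
                        "Car Explosion")
```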
Response Parameters
| Parameter | Type | Description |
|---|---|---|
| code | integer | HTTP status code (e.g., 200 for success) |
| message | string | Status message (e.g., “success”) |
| id | string | Unique identifier for the generation task. |
| outputs | array of strings | URLs pointing to the generated video output(s). |
| urls.get | string | API URL to retrieve the result by task ID. |
| has_nsfw_contents | array of booleans | Indicates whether the output contains NSFW content. |
| status | string | Status of the task: completed, failed, or processing. |
| created_at | string (timestamp) | ISO timestamp indicating when the task was created. |
| error | string | Error message if the task failed; empty if successful. |
| executionTime | number (milliseconds) | Total time taken to complete the task in milliseconds. |
| timings.inference | number (milliseconds) | Time taken specifically for model inference in milliseconds. |
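For orientation, a response with the fields above might look like the following. This is an illustrative shape only; every value is a placeholder, and field names are taken from the table:

```json
{
  "code": 200,
  "message": "success",
  "id": "example-task-id",
  "outputs": ["https://example.com/output.mp4"],
  "urls": { "get": "https://api.muapi.ai/api/v1/predictions/example-task-id/result" },
  "has_nsfw_contents": [false],
  "status": "completed",
  "created_at": "2024-01-01T00:00:00Z",
  "error": "",
  "executionTime": 12345,
  "timings": { "inference": 11000 }
}
```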
Result Query Parameters
Result Request Parameters
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| id | string | Yes | - | Task ID |
Result Response Parameters
| Parameter | Type | Description |
|---|---|---|
| code | integer | HTTP status code (e.g., 200 for success) |
| message | string | Status message (e.g., “success”) |
| id | string | Unique identifier for the generation task. |
| outputs | array of strings | URLs pointing to the generated video output(s). |
| urls.get | string | API URL to retrieve the result by task ID. |
| has_nsfw_contents | array of booleans | Indicates whether the output contains NSFW content. |
| status | string | Status of the task: completed, failed, or processing. |
| created_at | string (timestamp) | ISO timestamp indicating when the task was created. |
| error | string | Error message if the task failed; empty if successful. |
| executionTime | number (milliseconds) | Total time taken to complete the task in milliseconds. |
| timings.inference | number (milliseconds) | Time taken specifically for model inference in milliseconds. |