---
title: "n8n Integration"
description: "Use MuAPI inside n8n workflows with the official community node package — generate images, videos, and audio with 60+ AI models using a drag-and-drop visual editor."
---
# n8n Integration
MuAPI provides an official n8n community node package that brings the full MuAPI model catalog directly into your n8n automation workflows. Generate images, videos, and audio — or upload media assets — without writing a single line of code.
| Package | Purpose |
|---|---|
| `n8n-nodes-muapi` | Full suite — 2 nodes covering all 60+ MuAPI models across 7 categories |
## Installation

### Via n8n Community Nodes (recommended)

1. Open n8n → **Settings → Community Nodes**
2. Click **Install**
3. Enter `n8n-nodes-muapi`
4. Restart n8n
### Via npm (self-hosted)

```bash
cd ~/.n8n
npm install n8n-nodes-muapi
sudo systemctl restart n8n
```
### Docker

Add the following environment variable to your Docker setup:

```bash
N8N_COMMUNITY_PACKAGES=n8n-nodes-muapi
```
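For docker-compose deployments, the same variable can be set in the service definition. A minimal sketch, assuming the standard `n8nio/n8n` image and n8n's default port (adjust to your own deployment):

```yaml
services:
  n8n:
    image: n8nio/n8n
    environment:
      - N8N_COMMUNITY_PACKAGES=n8n-nodes-muapi
    ports:
      - "5678:5678"
```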
## Credentials Setup

1. Sign up at muapi.ai and go to **Dashboard → API Keys → Create Key**
2. In n8n, go to **Credentials → New Credential → MuAPI API**
3. Paste your API key and save
The credential is validated automatically against your account balance.
## Nodes

### 1. MuAPI (Predictor)

The main generation node. Select a category, then pick a model — the node dynamically loads all available models for that category.

**Categories and models:**
| Category | Notable Models |
|---|---|
| Text to Image | FLUX Dev, FLUX Schnell, FLUX Kontext Dev/Pro/Max, HiDream Fast/Dev/Full, GPT-4o, Midjourney V7, Reve, Wan 2.1, Seedream 3/4, Qwen |
| Image to Image | FLUX Kontext Dev/Pro/Max (I2I), FLUX Kontext Effects, GPT-4o Edit, Reve Edit, Midjourney V7 (I2I/Style/Omni), SeedEdit, Qwen Edit |
| Text to Video | Veo 3, Veo 3 Fast, Wan 2.1/2.2, Runway, Kling V3 Pro/Standard, Seedance Pro/V1.5, MiniMax Hailuo, HunyuanVideo, PixVerse, Sora 2 |
| Image to Video | Veo 3, Veo 3 Fast, Wan 2.1/2.2, Runway, Kling V3, Seedance, Midjourney V7, HunyuanVideo, PixVerse, Sora 2 |
| Image Enhance | AI Upscale, Background Remover, Face Swap, Skin Enhancer, Photo Colorizer, Ghibli Style, Anime Generator, Image Extender, Object Eraser, Product Shot |
| Video Edit | Wan AI Effects, Face Swap (Video), Dress Change, AI Clipping, Lipsync |
| Audio | Suno Create/Remix/Extend, MMAudio Text-to-Audio, MMAudio Video-to-Audio |
**Execution flow:**

1. Submits your request to `POST /api/v1/{endpoint}`
2. Automatically polls `GET /api/v1/predictions/{id}/result` until complete
3. Returns the full result, including an `outputs` array with your generated media URLs

**Options:**
| Option | Default | Description |
|---|---|---|
| Max Wait Time | 600s | How long to wait before timing out |
| Poll Interval | 3s | How often to check for completion |
| Return Request ID Only | Off | Get the request_id immediately without waiting |
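The steps above amount to a submit-then-poll loop with a timeout. A minimal Python sketch of the logic the node automates, where `submit` and `get_result` stand in for the two HTTP calls (the function names and injectable style are illustrative, not the node's actual implementation):

```python
import time

def run_prediction(submit, get_result, payload,
                   max_wait=600, poll_interval=3):
    """Submit a generation request, then poll until it completes.

    `submit` and `get_result` wrap POST /api/v1/{endpoint} and
    GET /api/v1/predictions/{id}/result respectively.
    """
    request_id = submit(payload)["request_id"]
    deadline = time.monotonic() + max_wait
    while time.monotonic() < deadline:
        result = get_result(request_id)
        if result["status"] == "completed":
            return result  # includes the "outputs" array of media URLs
        if result["status"] == "failed":
            raise RuntimeError(f"prediction {request_id} failed")
        time.sleep(poll_interval)
    raise TimeoutError(f"prediction {request_id} did not finish in {max_wait}s")
```

With **Return Request ID Only** enabled, the node stops after the `submit` step and hands back the `request_id` for manual polling.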
### 2. MuAPI Upload

Upload any media file to MuAPI and receive a hosted CDN URL for use in generation tasks.

**Input modes:**

- **Binary Data** — pipe in any file from a previous n8n node (HTTP Request, Read Binary File, etc.)
- **URL** — provide a URL; the node downloads the file and re-uploads it

**Supported formats:**
| Type | Formats |
|---|---|
| Image | jpg, jpeg, png, gif, webp, bmp, tiff, svg |
| Video | mp4, mov, avi, webm, mkv |
| Audio | mp3, wav, ogg, flac, m4a, aac |
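If a workflow may receive arbitrary files, you can pre-check the extension against this table before wiring the file into the Upload node. An illustrative Python helper (the function name and structure are our own, not part of the node):

```python
# Mirrors the supported-formats table above.
SUPPORTED_FORMATS = {
    "image": {"jpg", "jpeg", "png", "gif", "webp", "bmp", "tiff", "svg"},
    "video": {"mp4", "mov", "avi", "webm", "mkv"},
    "audio": {"mp3", "wav", "ogg", "flac", "m4a", "aac"},
}

def media_type(filename):
    """Return 'image', 'video', or 'audio' for a supported extension, else None."""
    ext = filename.rsplit(".", 1)[-1].lower()
    for kind, extensions in SUPPORTED_FORMATS.items():
        if ext in extensions:
            return kind
    return None
```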
**Output:**

```json
{
  "url": "https://cdn.muapi.ai/uploads/your-file.jpg",
  "filename": "your-file.jpg",
  "mimeType": "image/jpeg",
  "size": 204800
}
```
## Example Workflows

### Text to Image

```
[Manual Trigger]
→ [MuAPI: Text to Image — FLUX Dev]
→ [HTTP Request: Download image]
```
### Image to Video pipeline

```
[Read Binary File: photo.jpg]
→ [MuAPI Upload]                      ← get hosted URL
→ [MuAPI: Image to Video — Kling V3]  ← use URL as image_url
→ [HTTP Request: Download video]
```
### Automated product photography

```
[Schedule Trigger]
→ [HTTP Request: fetch product image]
→ [MuAPI Upload]
→ [MuAPI: Image Enhance — Product Shot]
→ [MuAPI: Image Enhance — AI Upscale]
→ [Send Email: attach result]
```
### Suno music generation

```
[Webhook Trigger]
→ [MuAPI: Audio — Suno Create Music]
→ [Slack: post audio URL]
```
### Async generation (fire and poll manually)

```
[Manual Trigger]
→ [MuAPI: Text to Video — Veo 3]  ← "Return Request ID Only" ON
→ [Wait 60s]
→ [HTTP Request: GET /api/v1/predictions/{request_id}/result]
```
## Using the Hosted URL in Other Nodes

The `url` output from MuAPI Upload or the `outputs[0]` entry from MuAPI can be passed directly to any n8n node:

```
{{ $json.url }}         ← upload result
{{ $json.outputs[0] }}  ← generation result
```
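When post-processing results outside n8n expressions (for instance in an external script), the same fields are plain JSON. A hypothetical Python helper that accepts either shape, an upload result or a generation result:

```python
def first_media_url(item):
    """Return the first generated media URL from a MuAPI result item,
    or the hosted CDN URL from a MuAPI Upload item."""
    outputs = item.get("outputs")
    if outputs:
        return outputs[0]
    return item.get("url")
```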
## API Pattern Reference

All MuAPI generation calls follow an async submit → poll pattern:

```
POST /api/v1/{endpoint}
→ { "request_id": "abc123" }

GET /api/v1/predictions/abc123/result
→ { "status": "completed", "outputs": ["https://cdn.muapi.ai/..."] }
```

The MuAPI node handles this automatically. Status values are `pending`, `processing`, `completed`, and `failed`.
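When polling manually, the two terminal states (`completed`, `failed`) need different handling from the two in-flight ones. An illustrative Python helper (the function name and return shape are our own):

```python
def classify_result(result):
    """Map a prediction result to a polling decision.

    Returns ("done", outputs) on success, ("wait", None) while
    pending/processing, and raises on failure.
    """
    status = result["status"]
    if status == "completed":
        return ("done", result.get("outputs", []))
    if status == "failed":
        raise RuntimeError(result.get("error", "prediction failed"))
    return ("wait", None)  # pending or processing: poll again later
```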
## Quick Start

1. Install `n8n-nodes-muapi` via **Settings → Community Nodes**
2. Add credentials: **MuAPI API** → paste your API key
3. Drag the **MuAPI** node onto your canvas
4. Select a **Category** (e.g., "Text to Image") and a **Model** (e.g., "FLUX Dev")
5. Enter your prompt and click **Execute node**

Your generated image/video/audio URL will appear in the output panel.
## Source Code

The node package is open source:

- GitHub: github.com/SamurAIGPT/n8n-nodes-muapi
- npm: `n8n-nodes-muapi`
Pull requests and issues are welcome.