Workflow Documentation
MuAPI’s workflow system allows you to build sophisticated, multi-stage AI pipelines using a visual node-based editor or natural language chat.
Core Concepts
1. Nodes and Categories
Each node in a workflow represents a specific AI model or utility.
- Text: LLM processing, prompt engineering.
- Image: Generation, upscaling, editing.
- Video: Motion, effects, high-fidelity generation.
- Audio: Music generation, sound effects.
- Utility: Logic nodes like "Passthrough" or "Concatenator".
- API: Direct access to third-party integrations (Straico, WaveSpeed).
2. Edges and Connections
Connections (edges) define the flow of data.
- Handles: Nodes have specific input/output handles (e.g., imageOutput -> videoInput).
- Dynamic References: Use Jinja2 syntax to inject data from upstream nodes: {{ node_id.outputs[0].value }}.
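The handle-based wiring above can be sketched as a plain data structure. This is illustrative only, not the actual MuAPI graph schema; the node ids and field names are assumptions:

```python
# Hypothetical two-node graph: an image node's output handle
# feeds a video node's input handle (imageOutput -> videoInput).
workflow = {
    "nodes": [
        {"id": "image_gen_1", "category": "Image", "outputs": ["imageOutput"]},
        {"id": "video_gen_1", "category": "Video", "inputs": ["videoInput"]},
    ],
    "edges": [
        {"from": ("image_gen_1", "imageOutput"), "to": ("video_gen_1", "videoInput")},
    ],
}

def upstream_of(workflow, node_id, handle):
    """Return the (node, handle) pair wired into the given input handle,
    or None if the handle is unconnected."""
    for edge in workflow["edges"]:
        if edge["to"] == (node_id, handle):
            return edge["from"]
    return None

print(upstream_of(workflow, "video_gen_1", "videoInput"))
```

Following edges this way is how a runner would know which upstream result to feed into each input handle before executing a node.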
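A dynamic reference could be resolved roughly as follows. This is a minimal stdlib stand-in that handles only the single reference shape shown above (MuAPI itself uses Jinja2); the node id and the layout of the results dictionary are assumptions:

```python
import re

# Hypothetical results produced by upstream nodes, keyed by node id.
results = {
    "image_gen_1": {"outputs": [{"value": "https://cdn.example.com/fox.png"}]},
}

def resolve_refs(text, results):
    """Replace {{ node_id.outputs[N].value }} references with actual values."""
    pattern = re.compile(r"\{\{\s*(\w+)\.outputs\[(\d+)\]\.value\s*\}\}")

    def substitute(match):
        node_id, index = match.group(1), int(match.group(2))
        return str(results[node_id]["outputs"][index]["value"])

    return pattern.sub(substitute, text)

prompt = "Animate {{ image_gen_1.outputs[0].value }} with a slow zoom"
print(resolve_refs(prompt, results))
# -> Animate https://cdn.example.com/fox.png with a slow zoom
```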
Agentic Workflow Architect
Building complex graphs is easy with the built-in AI assistant.
- Natural Language Creation: "Design a marketing pipeline that starts with a text prompt, creates a high-res image, and then generates a 5s video."
- To-and-Fro Planning: For broad requests, the architect leads a planning discussion, proposing multiple architectural options before building the graph.
- Intelligent Refinement: Ask the architect to "Change the video model to Runway" or "Add a background removal step".
Running Workflows
Manual Execution
- Run button: Initiates the complete graph from the start nodes.
- Node-level Run: Test individual nodes in isolation with custom parameter overrides.
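A node-level run with overrides amounts to layering the per-run values over the node's saved parameters before dispatching the model call. A sketch, with hypothetical parameter names:

```python
def run_node(node_params, overrides=None):
    """Merge per-run overrides over a node's saved parameters.

    Illustrative only: the parameter names below are assumptions,
    not MuAPI's actual node configuration fields.
    """
    return {**node_params, **(overrides or {})}

saved = {"model": "flux-dev", "steps": 30, "seed": 42}
print(run_node(saved, {"seed": 7}))
```

The saved workflow is untouched; only the single test run sees the overridden values.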
API Orchestration
Workflows can be triggered via REST API.
- Endpoint: POST /api/workflow/{workflow_id}/run
- Webhook Support: Provide a webhook_url to receive a single notification when the entire workflow completes, or granular updates for each node.
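A run request could be assembled as below. The endpoint path and the webhook_url field come from the docs above; the base URL, workflow id, and the "inputs" payload key are assumptions for illustration:

```python
import json
from urllib.request import Request

def build_run_request(base_url, workflow_id, inputs, webhook_url=None):
    """Build a POST /api/workflow/{workflow_id}/run request (sketch)."""
    payload = {"inputs": inputs}  # payload key is an assumption
    if webhook_url:
        payload["webhook_url"] = webhook_url
    return Request(
        url=f"{base_url}/api/workflow/{workflow_id}/run",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Hypothetical base URL and workflow id.
req = build_run_request(
    "https://api.muapi.ai",
    "wf_123",
    inputs={"prompt": "a red fox"},
    webhook_url="https://example.com/hooks/muapi",
)
print(req.full_url)
```

Sending the request (and any authentication headers the real API requires) is left out; the sketch only shows how the endpoint and webhook registration fit together.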
Specialized External API Integrations
Access specialized cinematic and multimodal models via integrated helpers:
- Straico: One interface for dozens of leading AI models.
- WaveSpeed: Optimized for high-speed cinematic generation with automated spec parsing.