Overview
A MultAI run has three phases:
- Invoke — type /multai in Claude Code, provide your prompt, select platforms and mode
- Execute — the orchestrator submits your prompt to each selected platform and waits for responses
- Review — read the collated archive, optionally run /consolidator to synthesize findings
There are two execution paths depending on where you invoke MultAI. In the Claude Code tab all platforms run in parallel via the Python orchestrator. In the Cowork tab platforms run sequentially via Claude-in-Chrome browser tools.
A typical REGULAR-mode run takes 2–5 minutes. A DEEP-mode run with all platforms can take 15–30 minutes, bounded by the platforms that support Deep Research.
Invoking /multai
Type /multai in the Claude Code chat input. The skill automatically detects which runtime you're in (Code tab or Cowork tab) and routes accordingly.
You can pass your prompt inline or from a file:
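For example, the inline form passes the query directly, while a file works better for long prompts (the --prompt-file flag name is an assumption, not confirmed by this document):

```
/multai "Compare the leading open-source vector databases"
/multai --prompt-file ./prompts/vector-db-comparison.md
```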
If you invoke /multai without arguments, the skill will prompt you interactively for the query, mode, and platform selection.
Platform Selection
By default MultAI runs all 7 platforms. Use --platforms to run a subset:
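For instance, a run limited to three platforms might look like this (identifiers are listed below):

```
/multai "Is WebGPU ready for production?" --platforms claude gemini perplexity
```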
Platform identifiers (case-insensitive):
- claude — Claude.ai
- chatgpt — ChatGPT
- gemini — Google Gemini
- grok — Grok (xAI)
- deepseek — DeepSeek
- perplexity — Perplexity
- copilot — Microsoft Copilot
Multiple identifiers are space-separated. Omit the flag entirely to run all 7.
Choosing a Mode
Pass --mode REGULAR (default) or --mode DEEP:
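For example, to request extended research on the platforms that support it:

```
/multai "Survey the current state of RISC-V laptops" --mode DEEP
```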
REGULAR mode (default)
Submits the prompt as a standard chat message. Fast — typical response time is 30 seconds to 3 minutes per platform. Runs all 7 platforms in parallel (Code tab) in under 5 minutes total.
DEEP mode
Activates extended research features on supported platforms:
- ChatGPT — Deep Research (web search + synthesis, 5–20 min)
- Gemini — Deep Research (Google Search + synthesis, 5–15 min)
- DeepSeek — DeepThink extended reasoning
- Other platforms — receive REGULAR mode prompt (no DEEP support)
Reading the Output
MultAI writes output to two locations in your project directory:
Per-platform files — output/
One markdown file per platform, named YYYY-MM-DD-task-name-platform.md. Each file contains the full platform response wrapped in <untrusted_platform_response> XML tags.
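A per-platform file therefore looks roughly like this (contents abridged). The wrapper delimits the platform's text as untrusted data, keeping it clearly separated from the rest of the file:

```
<untrusted_platform_response>
...full markdown response from the platform...
</untrusted_platform_response>
```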
Collated archive — archive/
After all platforms complete, all per-platform files are merged into a single archive file: YYYY-MM-DD-task-name-archive.md. This is the primary file to read — it contains all 7 responses in order.
The task name comes from the --task-name flag, or is derived from the first few words of your prompt if not specified:
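A minimal sketch of how such a derivation might work; the orchestrator's actual rules are not specified here, and the function names below are illustrative:

```python
import re
from datetime import date

def derive_task_name(prompt: str, max_words: int = 4) -> str:
    # Take the first few words of the prompt, lowercased,
    # stripped of punctuation, joined with hyphens.
    words = re.findall(r"[A-Za-z0-9]+", prompt.lower())[:max_words]
    return "-".join(words)

def platform_filename(prompt: str, platform: str, run_date: date) -> str:
    # Matches the YYYY-MM-DD-task-name-platform.md pattern described above.
    return f"{run_date.isoformat()}-{derive_task_name(prompt)}-{platform}.md"
```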
Cowork Path
When MultAI is invoked from a Claude Cowork tab, it uses a completely different execution path — instead of the Playwright-based Python orchestrator, it uses Claude-in-Chrome MCP browser tools to navigate each platform's UI.
Key differences in Cowork mode:
- Platforms run sequentially — one at a time, in order
- No Python or Playwright required on the local machine
- Requires the Claude-in-Chrome browser extension installed and connected
- Slower overall — sequential means total time = sum of all platform times
- More resilient — Claude reads the screen directly, not CSS selectors
Consolidating Results
After MultAI finishes, invoke /consolidator to synthesize all platform responses into a single structured intelligence report:
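For example, pointing the consolidator at a specific archive file (the positional argument and filename here are illustrative; the skill may also locate the latest archive automatically):

```
/consolidator archive/2025-06-01-rust-async-archive.md
```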
The consolidator reads the archive file and produces a report that:
- Identifies points of consensus across platforms
- Highlights divergences and platform-specific insights
- Surfaces unique information that only one or two platforms provided
- Assigns confidence levels based on cross-platform agreement
The consolidated report is the recommended end artifact for research tasks — it distills 7 responses into one authoritative synthesis.
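An illustrative skeleton of such a report; the section names are assumptions, not the consolidator's actual template:

```
# Intelligence Report: <task name>

## Consensus
Points most or all platforms agree on, with a confidence rating.

## Divergences
Where platforms disagree, and which position each takes.

## Unique findings
Claims surfaced by only one or two platforms, flagged for verification.
```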