Chrome / CDP Connection Issues
Symptom
Error: Cannot connect to Chrome on port 9222

Chrome is not running with the `--remote-debugging-port=9222` flag, or Chrome closed after setup.

Fix
1. Run the setup script — it launches Chrome with CDP automatically:

   ```bash
   bash setup.sh
   ```

2. If setup.sh fails, launch Chrome manually:

   ```bash
   # macOS
   /Applications/Google\ Chrome.app/Contents/MacOS/Google\ Chrome \
     --remote-debugging-host=127.0.0.1 \
     --remote-debugging-port=9222

   # Linux
   google-chrome --remote-debugging-host=127.0.0.1 --remote-debugging-port=9222
   ```

3. Verify CDP is reachable:

   ```bash
   curl http://127.0.0.1:9222/json/version
   ```

   You should see JSON with Chrome version info. If you get a connection refused error, Chrome is not listening on that port.
Only one Chrome instance: If you have multiple Chrome windows open and only one was launched with the debugging flag, MultAI will only connect to the debug-enabled instance. Close all other Chrome windows if you experience unexpected behavior.
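The curl check in step 3 can also be scripted if you want to poll for readiness. A minimal Python sketch against the same `/json/version` endpoint (the helper name `cdp_version` is illustrative, not part of MultAI):

```python
import json
import urllib.error
import urllib.request


def cdp_version(host="127.0.0.1", port=9222, timeout=3):
    """Return Chrome's version string if CDP is reachable, else None."""
    url = f"http://{host}:{port}/json/version"
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            info = json.loads(resp.read().decode("utf-8"))
    except (urllib.error.URLError, OSError):
        return None  # connection refused or timeout: Chrome is not listening
    return info.get("Browser")
```

If this returns None, re-run setup.sh or launch Chrome manually with the flags above.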
Platform Authentication
Symptom
Platform shows login page — MultAI cannot proceed
MultAI uses your existing browser session cookies. If you're not logged in to a platform in the CDP-connected Chrome instance, the automation will encounter the login page and fail.
Fix
1. Open the platform URL in your Chrome window (the one with CDP enabled)

2. Log in manually — complete any 2FA or email verification steps

3. Re-run MultAI — sessions are now cached via Chrome's profile in `~/.chrome-playwright/` and will persist across runs
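To confirm you logged in inside the CDP-enabled instance rather than a different Chrome window, you can list the tabs that Chrome exposes over CDP's standard `/json` target list. A minimal sketch (the helper name is illustrative):

```python
import json
import urllib.request


def open_tab_urls(host="127.0.0.1", port=9222):
    """Return the URLs of page-type targets in the CDP-connected Chrome."""
    with urllib.request.urlopen(f"http://{host}:{port}/json") as resp:
        targets = json.loads(resp.read().decode("utf-8"))
    return [t.get("url", "") for t in targets if t.get("type") == "page"]
```

If the platform URL never appears in this list, you most likely logged in through a window that is not connected to CDP.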
MultAI copies your Chrome profile to `~/.chrome-playwright/` on first setup. Log in to all 7 platforms once in that Chrome instance and you won't need to log in again unless sessions expire.

Rate Limit Hit
Symptom
Platform skipped — "rate limit active" in status.json
The platform returned a rate limit error on a previous run. MultAI records the cooldown period and skips the platform until it expires.
Fix
1. Wait for the cooldown — check `~/.chrome-playwright/rate-limit-state.json` for the expiry timestamp

2. Or reset manually:

   ```bash
   rm ~/.chrome-playwright/rate-limit-state.json
   ```

3. To avoid DEEP mode quotas, use `--mode REGULAR` for ChatGPT and Gemini when quota is limited
Platform-specific limits to be aware of:
- ChatGPT Deep Research — daily usage quota. Resets at midnight UTC
- Gemini Deep Research — monthly cap. More generous but finite
- Perplexity — free tier has rate limits on Pro Search. REGULAR mode is unaffected
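The expiry check in step 1 can be scripted. A sketch, assuming rate-limit-state.json maps platform names to Unix expiry timestamps — the real schema may differ:

```python
import json
import time
from pathlib import Path

STATE = Path.home() / ".chrome-playwright" / "rate-limit-state.json"


def active_cooldowns(path=STATE, now=None):
    """Return {platform: seconds_remaining} for unexpired cooldowns."""
    now = time.time() if now is None else now
    path = Path(path)
    if not path.exists():
        return {}  # no recorded rate limits
    state = json.loads(path.read_text())
    return {p: round(exp - now) for p, exp in state.items() if exp > now}
```

An empty result means no cooldown is active and the platform should run on the next attempt.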
Extraction Failures
Symptom
Output file is empty or contains the prompt instead of a response
Playwright couldn't find or read the response element. This can happen when a platform UI updates its CSS selectors, when the page shows a rate limit banner, or when the platform echoed the prompt back.
Fix
1. Check if the platform loaded correctly — open Chrome and navigate to the platform manually. If you see a rate limit or error message, that's the cause

2. Reset browser state with the `--fresh` flag:

   ```bash
   /multai --fresh --prompt "your prompt"
   ```

3. Enable agent fallback — if Playwright selectors are broken, fallback will navigate the UI visually:

   ```bash
   /multai --with-fallback --prompt "your prompt"
   ```
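One common extraction failure is the platform echoing the prompt back as the "response". A minimal sketch of a check you could run on an output file — a heuristic for illustration, not MultAI's actual detector:

```python
def looks_like_echo(prompt: str, response: str, threshold: float = 0.9) -> bool:
    """Heuristic: True when the response is empty or mostly the prompt echoed back."""
    norm = lambda s: " ".join(s.lower().split())
    p, r = norm(prompt), norm(response)
    if not r:
        return True  # empty output file counts as a failure
    if p and (p in r or r in p):
        # containment plus similar length suggests an echo, not a real answer
        return min(len(p), len(r)) / max(len(p), len(r)) >= threshold
    return False
```

A real answer usually contains the prompt's keywords but is much longer, so the length ratio stays well below the threshold.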
Prompt echo detection: If an output file contains your prompt text as the "response", the platform likely echoed it back without generating a real answer. This usually indicates the submission failed silently. Try `--fresh` to reset the tab state.

Agent Fallback Errors
Symptom
Agent fallback failed — API key not found or timeout
The browser-use agent requires an API key and the browser-use package to be installed. Missing either will cause fallback to fail immediately.
Fix
1. Install browser-use — run setup with the fallback flag:

   ```bash
   bash setup.sh --with-fallback
   ```

2. Add your API key to .env:

   ```bash
   ANTHROPIC_API_KEY=sk-ant-your-key-here
   # or
   GOOGLE_API_KEY=AIzaSy-your-key-here
   ```

3. For timeout errors, increase the max steps limit:

   ```bash
   /multai --with-fallback --max-steps 100 --prompt "your prompt"
   ```

4. Check the fallback log for detailed error information:

   ```bash
   cat output/agent-fallback-log.json
   ```
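Before re-running, you can check which fallback key (if either) is actually configured. A minimal sketch, assuming the .env values have already been loaded into the process environment:

```python
import os


def configured_fallback_key(env=os.environ):
    """Return the name of the first configured fallback API key, or None."""
    for name in ("ANTHROPIC_API_KEY", "GOOGLE_API_KEY"):
        if env.get(name):
            return name
    return None
```

A None result reproduces the "API key not found" failure: fix .env (step 2) before retrying the fallback.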
Output and Archive Issues
Symptom
Output file missing or archive not created
The platform run may have failed silently, or the collation script didn't run after all platforms completed.
Fix
1. Check status.json for per-platform errors:

   ```bash
   cat output/status.json
   ```

   Look for platforms with `"status": "error"` and read the `"error"` field

2. Run collation manually if per-platform files exist but the archive is missing:

   ```bash
   python3 skills/orchestrator/engine/collate_responses.py
   ```

3. Check the task name — if `--task-name` contained special characters or spaces, the filename may have been sanitized differently than expected. Use only alphanumeric characters and hyphens:

   ```bash
   /multai --task-name "my-research-task" --prompt "your prompt"
   ```
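Step 1's status.json check can be scripted. A sketch, assuming status.json maps platform names to objects carrying the "status" and "error" keys shown above — the exact schema may differ:

```python
import json


def failed_platforms(status_path="output/status.json"):
    """Return (platform, error message) pairs for runs that ended in error."""
    with open(status_path) as f:
        status = json.load(f)
    return [
        (name, entry.get("error", ""))
        for name, entry in status.items()
        if entry.get("status") == "error"
    ]
```

An empty list with a missing archive points at collation (step 2) rather than a platform run failure.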