What It Is
Mo-BrAInz is an Adobe CEP (Common Extensibility Platform) extension that lives inside After Effects as a dockable panel. It provides conversational AI assistance for motion designers -- you describe what you want to build, and it generates ExtendScript code, applies expressions, creates compositions, and manipulates layers directly inside your active project. But it goes significantly further than a chatbot that writes code.
The Three-Layer Value Model
The product thesis, refined through extensive testing, identifies three distinct sources of value. First, workflow acceleration: a curated script library, import/export utilities, image and audio generation, voice input, and reusable actions. Second, execution help: generating AE scripts from natural language intent, inspecting live project context, fixing broken scripts, and suggesting adaptations of known-good patterns rather than generating from scratch. Third, compounding learning: remembering what worked, distilling reusable techniques, ranking known-good patterns by success rate, and improving suggestions over time.
Multi-LLM Consensus
Version 3 introduced a multi-LLM architecture in which Claude serves as the primary reasoning engine and Gemini provides verification. When the system generates code, the secondary model validates it before execution. This consensus step substantially reduces hallucinated or invalid code -- which matters because the generated code runs directly inside a creative application, often with unsaved work open.
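The generate-then-verify flow described above can be sketched as a simple gate. This is an illustrative sketch only: the two model calls are stubbed stand-ins for the Claude and Gemini API requests, and the verifier's checks here (a crude scan for ES6 syntax) are placeholders for whatever validation the real extension performs.

```javascript
// Sketch of a generate-then-verify consensus gate (assumed design;
// both model calls are stubs standing in for real API requests).

async function callPrimaryModel(prompt) {
  // Stub for a Claude API call that returns ExtendScript source.
  return 'app.project.activeItem.layer(1).name = "Hero";';
}

async function callVerifierModel(code) {
  // Stub for a Gemini API call that reviews the generated code.
  // A real verifier would check ES3 compliance, 1-based indices, etc.
  var ok = code.indexOf('const ') === -1 && code.indexOf('=>') === -1;
  return { approved: ok, reason: ok ? 'looks ES3-safe' : 'uses ES6 syntax' };
}

async function generateWithConsensus(prompt) {
  var code = await callPrimaryModel(prompt);
  var verdict = await callVerifierModel(code);
  if (!verdict.approved) {
    throw new Error('Verifier rejected generated code: ' + verdict.reason);
  }
  return code; // only verified code is handed to the host for execution
}
```

The key design point is that nothing reaches the host's scripting engine until both models agree, trading a second API round-trip for safety.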
The Knowledge System
The core competitive advantage is the learning database. A 28KB structured JSON knowledge base encodes expert-level After Effects knowledge: critical gotchas (ExtendScript is ES3 -- no const/let, no arrow functions; layer indices start at 1, not 0; effect colors need 4 elements, shape fills need 3), expression engine differences between JavaScript and Legacy ExtendScript, effect match names for cross-language compatibility, property paths, interpolation methods, and random function signatures. The system loads this on startup and injects relevant context into every prompt.
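As a sketch of how "injects relevant context into every prompt" might work, the snippet below filters a keyword-tagged knowledge base against the user's request and prepends only the matching rules. The entry shape, field names, and keyword matching are assumptions for illustration, not the actual 28KB schema (which the real extension would load from disk at startup).

```javascript
// Illustrative knowledge-base shape: each entry tags a gotcha with keywords.
// The real file is loaded from disk on startup; this inline object stands in.
const knowledgeBase = {
  gotchas: [
    { keywords: ['layer', 'index'], rule: 'Layer indices start at 1, not 0.' },
    { keywords: ['const', 'let', 'arrow'], rule: 'ExtendScript is ES3: no const/let, no arrow functions.' },
    { keywords: ['color', 'fill'], rule: 'Effect colors take 4 elements; shape fills take 3.' }
  ]
};

// Select only the rules relevant to this request and prepend them to the prompt.
function injectContext(userRequest, kb) {
  var lower = userRequest.toLowerCase();
  var relevant = kb.gotchas.filter(function (g) {
    return g.keywords.some(function (k) { return lower.indexOf(k) !== -1; });
  });
  var rules = relevant.map(function (g) { return '- ' + g.rule; }).join('\n');
  return 'Known AE constraints:\n' + rules + '\n\nRequest: ' + userRequest;
}
```

Filtering per request keeps prompts small: a request about shape fills pulls in the color-array gotcha but not the indexing one.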
Beyond static knowledge, the extension maintains a dynamic learning system. It tracks execution outcomes, stores communication patterns (how the user phrases requests, what corrections they make), and consolidates successful patterns every 15 executions. Memory is persisted to disk and survives between sessions.
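A minimal sketch of that tracking loop, assuming a simple in-memory store: outcomes accumulate, get folded into ranked patterns every 15 executions, and the pattern table is what would be persisted to disk. The data shapes and the keying of patterns by request text are illustrative assumptions.

```javascript
// Sketch of the execution tracker (interval matches the text; field
// names and pattern keying are assumptions for illustration).
const CONSOLIDATE_EVERY = 15;

function createLearningStore() {
  return { outcomes: [], patterns: {} };
}

function recordOutcome(store, request, code, success) {
  store.outcomes.push({ request: request, code: code, success: success });
  if (store.outcomes.length % CONSOLIDATE_EVERY === 0) {
    consolidate(store);
  }
}

// Fold accumulated outcomes into reusable patterns with success counts.
function consolidate(store) {
  store.outcomes.forEach(function (o) {
    var p = store.patterns[o.request] || { uses: 0, successes: 0, code: o.code };
    p.uses += 1;
    if (o.success) { p.successes += 1; p.code = o.code; }
    store.patterns[o.request] = p;
  });
  store.outcomes = [];
  // In the extension, store.patterns would be written to disk here
  // so memory survives between sessions.
}

// Rank known-good patterns by success rate, best first.
function rankedPatterns(store) {
  return Object.keys(store.patterns)
    .map(function (k) {
      var p = store.patterns[k];
      return { request: k, rate: p.successes / p.uses };
    })
    .sort(function (a, b) { return b.rate - a.rate; });
}
```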
The Technical Challenge
Building a CEP extension is uniquely constrained. The panel runs in a Chromium-based renderer with Node.js access, but communication with After Effects happens through CSInterface -- a bridge that evaluates ExtendScript strings in the host application's scripting engine. The AI must generate code in two different JavaScript dialects (modern JS for the panel, ES3 ExtendScript for the host) and handle the asynchronous bridge between them. Error handling requires capturing both panel-side exceptions and host-side script failures, then routing them back to the AI for self-correction.
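The bridge and its error routing can be sketched as a promisified wrapper around `evalScript`. CEP really does report host-side failures as the magic string `"EvalScript error."` rather than an exception, which is why the wrapper checks for a sentinel; the `StubCSInterface` here is a stand-in so the sketch is self-contained (in the panel, `CSInterface` comes from Adobe's CSInterface.js), and the rejection path is where the real extension would route the failure back to the AI for self-correction.

```javascript
// Sketch of a promisified evalScript bridge with two-sided error routing.
// StubCSInterface replaces Adobe's CSInterface so this runs standalone.

var EvalScript_ErrMessage = 'EvalScript error.'; // CEP's host-failure sentinel

function StubCSInterface() {}
StubCSInterface.prototype.evalScript = function (script, callback) {
  // Stub: pretend the host's ExtendScript engine rejected the script.
  callback(EvalScript_ErrMessage);
};

function runInHost(cs, extendScriptSource) {
  return new Promise(function (resolve, reject) {
    try {
      cs.evalScript(extendScriptSource, function (result) {
        // Host-side failures arrive as a string, not an exception.
        if (result === EvalScript_ErrMessage) {
          reject(new Error('Host-side script failed: ' + extendScriptSource));
        } else {
          resolve(result);
        }
      });
    } catch (panelError) {
      reject(panelError); // panel-side exception
    }
  });
}
```

Collapsing both failure modes into one rejected promise gives the AI loop a single place to catch errors and attempt a corrected script.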
Additional Features
Voice control via the Web Speech API for hands-free operation while animating. A mobile companion app (a separate panel) for remote control. Image and audio generation for quick asset creation. A script library with categorized, reusable utilities. And a Supabase-backed cloud RAG system for sharing knowledge across installations.
Tech Stack
CEP extension (HTML/CSS/JS panel + ExtendScript host scripts), Claude API (primary), Gemini API (verification), Supabase (cloud knowledge base with vector search), Web Speech API, CSInterface bridge, Node.js filesystem access for persistent memory.