Overview
Mulu Code is a desktop application built on Electron with a React renderer process and a Node.js main process. Because users trust us with their project files and credentials, security is a foundational concern rather than an afterthought. This page describes the specific measures we have implemented to protect user data, how our architecture enforces isolation between processes, and how AI API calls are handled without exposing credentials on the client side.
We follow a local-first model. Your project files are read from and written to your local filesystem. They are not uploaded to our servers, synced to a cloud database, or transmitted anywhere unless you explicitly choose to deploy through Mulu Cloud. The desktop app itself communicates with external services only for AI model inference and authentication, and both of those channels are locked down with the mechanisms described below.
Local-First Architecture
Mulu Code stores all project data on your local machine. When you create or open a project, the app reads and writes files directly to the directory you chose on your filesystem. There is no background sync service uploading your code to our servers. There is no cloud storage layer that mirrors your local files. Your source code, assets, configuration files, and build artifacts remain entirely under your control on your own disk.
The only network requests the desktop app makes are to our API proxy for AI model inference (described below in API Proxy), to Supabase for user authentication when you sign in, and to our update server to check for new versions of the app. None of these channels transmit your project source code. When you use AI features, only the prompt context necessary for the current request is sent to the proxy, not your entire codebase.
Encryption
AES-256-GCM
Mulu Code implements AES-256-GCM encryption for protecting sensitive data at rest on your local machine. Keys are derived from the user's password with Node's crypto.scryptSync (N=16384, r=8, p=1), producing a 256-bit key from a cryptographically random 32-byte salt. Each encryption operation generates a fresh 16-byte initialization vector using crypto.randomBytes. The output concatenates the salt, IV, GCM authentication tag, and ciphertext into a single base64-encoded string. The authentication tag provides integrity verification, ensuring that encrypted data has not been tampered with.
Electron safeStorage
For credentials that need OS-level protection, Mulu Code uses Electron's safeStorage API. This delegates encryption to the operating system's native credential store: Keychain on macOS, the Data Protection API (DPAPI) on Windows, and libsecret on Linux. The advantage of this approach is that the encryption keys are managed by the operating system itself and are tied to the current user session, meaning they cannot be extracted by simply reading application files on disk. The safeStorage functions check whether encryption is available on the current system before attempting any operations and will fail gracefully if the platform does not support it.
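In code, that availability check precedes any use of the API. The sketch below uses Electron's real safeStorage functions but is illustrative; it requires the Electron runtime, and the helper name is ours.

```javascript
// Requires the Electron runtime; illustrative helper, not Mulu Code's source.
const { safeStorage } = require('electron');

function storeCredential(plaintext) {
  if (!safeStorage.isEncryptionAvailable()) {
    // e.g. a Linux system without an unlocked keyring: fail gracefully
    // rather than persisting the credential unprotected.
    throw new Error('OS-level encryption is not available on this system');
  }
  // Returns a Buffer encrypted by Keychain / DPAPI / libsecret,
  // safe to persist to disk; decrypt later with safeStorage.decryptString().
  return safeStorage.encryptString(plaintext);
}
```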
Transit Encryption for Secrets
When secrets such as environment variable values are passed between the renderer and main processes via IPC, they are encrypted in transit using AES-256-CBC with a session-scoped random key generated at app startup using crypto.randomBytes(32). Each transit encryption operation generates a fresh 16-byte IV. The secret value is held in memory only briefly during the write operation and is then cleared. Secret values are never logged. When displaying secrets in the UI, only a masked preview showing the first four and last four characters is returned.
Process Isolation
Mulu Code enforces Electron's recommended security model. The renderer process runs with contextIsolation: true and nodeIntegration: false. This means the renderer (where the React UI runs) has no direct access to Node.js APIs, the filesystem, or system-level capabilities. All interactions between the renderer and the main process go through a preload script that exposes a carefully scoped API surface via Electron's contextBridge.
The preload script defines a fixed set of named IPC channels, and no others are accessible from the renderer. Each method maps to a specific ipcMain.handle or ipcMain.on handler in the main process that validates its inputs before performing any operation. The renderer cannot directly read files, write files, execute commands, or access credentials. It must go through the preload API, and every handler in the main process performs its own validation before acting.
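The isolation settings and the bridged API surface look roughly like the following. This requires the Electron runtime and is a sketch only: file names, the `muluApi` global, and the channel names are assumptions, not Mulu Code's actual surface.

```javascript
// main.js -- window configuration enforcing the isolation model described above.
const { BrowserWindow } = require('electron');
const path = require('path');

const win = new BrowserWindow({
  webPreferences: {
    contextIsolation: true,  // renderer runs in its own isolated JS world
    nodeIntegration: false,  // no Node.js APIs available to the renderer
    preload: path.join(__dirname, 'preload.js'), // the only bridge to main
  },
});

// preload.js -- exposes a fixed set of named channels and nothing else.
const { contextBridge, ipcRenderer } = require('electron');

contextBridge.exposeInMainWorld('muluApi', {
  readFile: (filePath) => ipcRenderer.invoke('fs:read', filePath),
  writeFile: (filePath, data) => ipcRenderer.invoke('fs:write', filePath, data),
});
```

Because only these functions cross the bridge, renderer code (and any compromised dependency running in it) can invoke exactly the handlers the main process chose to expose, each of which validates its inputs independently.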
Content Security Policy. In production builds, Mulu Code enforces a Content Security Policy via response headers. Script sources are restricted to 'self', connection targets are limited to known API endpoints (our proxy, Supabase, voice services), and object embeds are blocked. The CSP is not applied in development mode to avoid interfering with Vite's hot module replacement.
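A policy of this shape can be attached via Electron's session API, as sketched below. The directive values and endpoint hosts here are assumptions for illustration, not the actual production policy.

```javascript
// Illustrative only: exact directives and hosts are assumptions.
const { session } = require('electron');

session.defaultSession.webRequest.onHeadersReceived((details, callback) => {
  callback({
    responseHeaders: {
      ...details.responseHeaders,
      'Content-Security-Policy': [
        "default-src 'self'; script-src 'self'; object-src 'none'; " +
          "connect-src 'self' https://mulu.ai https://*.supabase.co",
      ],
    },
  });
});
```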
API Proxy
All AI model API calls from the desktop app are routed through a Cloudflare Worker that acts as a secure proxy. The proxy holds the actual API keys for providers like Anthropic, OpenAI, and Google. These keys are stored as environment secrets in the Cloudflare Worker configuration and are never sent to or stored on the user's device. The desktop app authenticates to the proxy using a separate application key, which is validated on every request.
The proxy validates the origin of incoming requests, accepting calls from the Electron app (which sends no browser Origin header), localhost during development, and the mulu.ai domain. Requests from other origins are rejected. The proxy also implements rate limiting at 30 requests per minute per IP address to prevent abuse. When a rate limit is hit, the proxy returns a 429 response and the client retries with exponential backoff.
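The origin check and the per-IP limit can be sketched as below. This is a simplified fixed-window model with illustrative origin values, not the Worker's actual code; a production Worker would track windows in Durable Objects or KV rather than in-memory state.

```javascript
// Assumed allowlist; the Electron app itself sends no Origin header.
const ALLOWED_ORIGINS = new Set(['http://localhost:5173', 'https://mulu.ai']);

function isOriginAllowed(origin) {
  if (origin === undefined || origin === null) return true; // Electron app
  return ALLOWED_ORIGINS.has(origin);
}

const WINDOW_MS = 60_000; // one-minute window
const LIMIT = 30;         // 30 requests per minute per IP
const windows = new Map(); // ip -> { start, count }

// Returns true if the request is within limits; false means respond 429.
function checkRateLimit(ip, now = Date.now()) {
  const w = windows.get(ip);
  if (!w || now - w.start >= WINDOW_MS) {
    windows.set(ip, { start: now, count: 1 });
    return true;
  }
  w.count += 1;
  return w.count <= LIMIT;
}
```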
On the request path, the proxy maps model names to actual provider model IDs, reformats request bodies to match provider-specific APIs (Anthropic Messages API vs. OpenAI Chat Completions), and forwards the request with the appropriate provider API key. On the response path, it strips internal metadata such as reasoning content from certain models before returning results to the client. The proxy supports both streaming and non-streaming responses.
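The body reformatting step hinges on a well-known difference between the two APIs: Anthropic's Messages API takes the system prompt as a top-level field, while OpenAI's Chat Completions API takes it as the first message. A minimal sketch (the generic input shape is an assumption):

```javascript
// Reshape a generic chat request into a provider-specific body.
function toProviderBody(provider, { system, messages, maxTokens }) {
  if (provider === 'anthropic') {
    // Anthropic Messages API: system prompt is a top-level field.
    return { system, messages, max_tokens: maxTokens };
  }
  // OpenAI Chat Completions: system prompt is the first message in the list.
  return {
    messages: [{ role: 'system', content: system }, ...messages],
    max_tokens: maxTokens,
  };
}
```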
File System Security
The main process maintains an allowlist of directories that the user has explicitly opened as projects. Every file system operation, including reads, writes, deletions, and search operations, validates that the target path falls within an allowed directory. Paths are normalized using path.normalize and checked with startsWith to prevent path traversal attacks where a malicious input like ../../etc/passwd might attempt to escape the project directory.
Additional safeguards include a maximum file size limit of 10 MB for read operations and a blocklist of dangerous file extensions: .exe, .dll, .bat, .cmd, .sh, and .ps1. The app will not read or serve files with these extensions through the IPC file read handlers. Search operations (grep and glob) also validate that the search base path is within an allowed project directory before executing.
For AI tool calls where the model may return malformed or hallucinated file paths, a dedicated path sanitizer strips JSON debris, non-ASCII characters, and invalid path characters before the path reaches any file system operation. This sanitizer also handles URL-encoded paths, limits path length to 500 characters, and rejects empty results. Corrupted tool arguments go through multiple recovery strategies before being used.
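A hypothetical sketch of such a sanitizer, following the rules listed above; the exact regexes and ordering are assumptions, not the actual implementation:

```javascript
// Clean a possibly-malformed path from an AI tool call; returns null if
// nothing usable remains.
function sanitizeToolPath(raw) {
  if (typeof raw !== 'string') return null;
  let p = raw;
  try {
    p = decodeURIComponent(p); // handle URL-encoded paths
  } catch {
    // not valid percent-encoding: keep the original string
  }
  p = p.replace(/["'{}\[\]]/g, '');   // strip JSON debris (quotes, braces, brackets)
  p = p.replace(/[^\x20-\x7E]/g, ''); // strip non-ASCII and control characters
  p = p.replace(/[<>|?*]/g, '');      // strip invalid path characters
  p = p.trim();
  if (p.length === 0 || p.length > 500) return null; // reject empty / oversized
  return p;
}
```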
Terminal Safety
When executing terminal commands on behalf of the user or AI agent, the main process applies a blocklist of dangerous command patterns. Commands matching patterns like rm -rf /, format, del /f, sudo, or fork bombs are rejected before execution. The working directory for all command execution is validated against the project allowlist, preventing commands from running in unexpected directories. Commands are also subject to a 30-second soft timeout and a 45-second hard timeout, after which the process is killed.
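The pattern blocklist amounts to a gate like the one below. The regexes shown are assumptions built from the examples above; a real blocklist would be considerably longer and more precise.

```javascript
// Illustrative dangerous-command patterns (assumed, not the actual list).
const BLOCKED_PATTERNS = [
  /rm\s+-rf\s+\/(\s|$)/,        // rm -rf / (filesystem wipe)
  /\bformat\b/i,                // disk format commands
  /\bdel\s+\/f\b/i,             // forced Windows deletion
  /\bsudo\b/,                   // privilege escalation
  /:\(\)\s*\{\s*:\|:&\s*\};:/,  // classic bash fork bomb
];

function isCommandBlocked(command) {
  return BLOCKED_PATTERNS.some((re) => re.test(command));
}
```

Commands that pass this gate still run with the validated working directory and the 30-second soft / 45-second hard timeouts described above.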
Secrets Management
Mulu Code provides a secrets management flow for writing environment variables to .env files in your project. Variable names are validated against the pattern /^[A-Z][A-Z0-9_]*$/, which prevents environment variable injection by ensuring names contain only uppercase letters, digits, and underscores and start with a letter. The target file path is validated to ensure it points to a .env file within an open project directory.
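The two validations above are small enough to show directly; the name pattern is quoted from the text, while the path check here is a simplified illustration:

```javascript
// Variable names: uppercase letters, digits, underscores; must start with a letter.
const ENV_NAME_PATTERN = /^[A-Z][A-Z0-9_]*$/;

function isValidEnvName(name) {
  return ENV_NAME_PATTERN.test(name);
}

// Simplified target check: must be a .env file (a real check would also
// verify the path sits inside an open project directory, as described above).
function isEnvFile(filePath) {
  return /(^|[\\/])\.env(\..+)?$/.test(filePath);
}
```

Rejecting names like `KEY=VALUE` or names containing newlines is what prevents a single write from smuggling extra variables into the file.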
Secret values are encrypted before being sent over IPC from the renderer to the main process, as described in the Transit Encryption section above. The main process decrypts the value, writes it to the target file, and immediately clears the plaintext from memory. The value is never logged at any point in this flow. When the UI needs to display existing secrets, it shows only a masked preview with the first and last four characters visible.
Security Scanner
Mulu Code includes a built-in security scanner that analyzes your project code for vulnerabilities. The scanner runs in a dedicated Worker Thread to avoid blocking the main process or the UI. It uses a two-stage approach: first, a set of regex patterns pre-filters files for potential issues including hardcoded secrets (AWS keys, GitHub tokens, JWTs, private keys, API keys, database connection strings), dangerous code patterns (eval, dangerouslySetInnerHTML, dynamic Function constructors, unsanitized exec calls), and Electron-specific misconfigurations (nodeIntegration: true, contextIsolation: false, webSecurity: false).
In the second stage, files flagged by the regex pre-filter are sent to an AI model for context-aware review. The AI evaluates whether the regex matches represent actual vulnerabilities or false positives. For example, a .pem file containing a private key is expected and not a vulnerability, while a private key hardcoded in a JavaScript source file is. The scanner skips directories like node_modules, .git, dist, and build, as well as binary files, minified files, lock files, and any file over 1 MB. It also skips .env files entirely since those are expected to contain secrets. Scan results are persisted locally with the last five scans retained per project.
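The first-stage pre-filter can be sketched as a table of named regexes run over each file's contents. The patterns below are simplified assumptions covering a few of the categories listed above, not the scanner's actual rules:

```javascript
// Simplified stage-one patterns (illustrative, not the real rule set).
const PREFILTER_PATTERNS = [
  { name: 'aws-access-key', re: /AKIA[0-9A-Z]{16}/ },
  { name: 'github-token', re: /ghp_[A-Za-z0-9]{36}/ },
  { name: 'private-key', re: /-----BEGIN (?:RSA )?PRIVATE KEY-----/ },
  { name: 'eval-call', re: /\beval\s*\(/ },
  { name: 'node-integration', re: /nodeIntegration\s*:\s*true/ },
  { name: 'context-isolation-off', re: /contextIsolation\s*:\s*false/ },
];

// Returns the names of all patterns that matched; any hit sends the file
// on to the AI review stage for context-aware triage.
function prefilter(source) {
  return PREFILTER_PATTERNS.filter(({ re }) => re.test(source))
    .map(({ name }) => name);
}
```

Keeping stage one cheap and deliberately over-broad is the point of the two-stage design: regexes are fast enough to run over a whole project, and the AI stage absorbs their false positives.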
Cloud + Deploy
When you deploy an app to Mulu Cloud, it is hosted on Cloudflare's edge network. Cloudflare provides DDoS protection, automatic SSL certificate provisioning and renewal for all domains, and TLS termination at the edge. Deployed apps receive HTTPS by default with no additional configuration required.
The database layer is powered by Supabase, which provides PostgreSQL databases with row-level security (RLS) policies. RLS ensures that database queries from your app's users can only access rows they are authorized to see, based on the policies you define. Authentication is handled by Supabase Auth, which supports email/password, OAuth providers, and magic links.
Data & Privacy
Mulu Code does not collect telemetry or usage analytics by default. The app does not phone home with information about what you are building, which files you are editing, or how you are using features. The only data transmitted from the app is authentication requests (to sign in), AI prompts (to the proxy for model inference), and deployment packages (when you explicitly publish to Mulu Cloud).
We do not sell or share your data with third parties. We do not use your code, prompts, or project data to train AI models. When you use AI features, your prompts are forwarded to the respective model provider (Anthropic, OpenAI, or Google) through our proxy, and those providers' own data policies apply to the prompts they receive. We do not retain copies of your prompts or AI responses on our proxy servers beyond what is needed to complete the request.
If you use Mulu Cloud for deployment, your deployed app files are stored on Cloudflare and your database is hosted on Supabase. You can delete your account and all associated cloud data at any time.
Reporting Security Issues
If you discover a security vulnerability in Mulu Code, please report it through our contact page with the subject "Security Report." Please include a description of the vulnerability, steps to reproduce it, and any relevant screenshots or code samples. We will acknowledge receipt within 48 hours and work to address the issue as quickly as possible.
Please do not disclose security vulnerabilities publicly before we have had the opportunity to investigate and address them. We appreciate responsible disclosure and are happy to credit researchers who report valid issues.