Black Magic AI

Your AI GTM Engineer. Turn any revenue workflow into an agentic process — from prompt to pipeline.

© 2026 Black Magic AI. All rights reserved.

Everything Black Magic AI does, on one page.

Foundations are the four primitives every agent stands on: Context, Knowledge, Skills, Integrations (with BYOK). Surfaces are where you actually use them — Chat, Drafts, Triggers, the Desktop app. Skim the table of contents, jump to what you need, ⌘F for the rest.

Foundations
Context · Knowledge · Skills · Integrations · BYOK
Surfaces
AI Chat · Drafts · Triggers · Desktop app
Related
Agents · Pricing · Changelog
Foundations · Context

Context

Context is the on-disk markdown store every agent reads and writes. ICPs, deal histories, objections, tone — one source of truth, versioned, greppable, yours.

What it is

  • Plain markdown on disk

    Context is just files. `companies/acme.md`, `signals/pipeline-health/`, `sequences/linkedin-post-signal.md`. Open any editor — it just works.

  • Version-controlled by default

    Point at a git repo and every agent edit becomes a commit. Review in your PR tool, merge on approve.

  • Grep-first retrieval

    Agents locate files by path and ripgrep, then read with full fidelity. No mystery chunk boundaries, no vector drift.

  • Edit live, in-app

    The Context panel has a full-height editor with ⌘↵ to save. No 320px chat-box clipping, no modal dance.

  • Diff every agent write

    When an agent proposes a change — new case study, updated ICP — you see the diff before accepting.

  • Auto-captured from real conversations

    Gong transcripts, reply threads, deal post-mortems can be piped into Context automatically, tagged and deduped.
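The grep-first retrieval described above (locate files by path, then read them whole) can be sketched in a few lines of Python; the helper name and directory layout are illustrative, not Black Magic AI's implementation:

```python
import re
from pathlib import Path

def grep_context(root: Path, pattern: str) -> list[tuple[Path, int, str]]:
    """Scan every markdown file under the Context root and return
    (file, line number, line) for each match. The whole file stays
    available afterwards, so there are no chunk boundaries to fight."""
    rx = re.compile(pattern, re.IGNORECASE)
    hits = []
    for md in sorted(root.rglob("*.md")):
        for i, line in enumerate(md.read_text().splitlines(), 1):
            if rx.search(line):
                hits.append((md, i, line))
    return hits
```

An agent-style lookup is then just `grep_context(root, "renewed")` followed by reading the matched files in full.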

Every moving part

Frontmatter-aware

YAML frontmatter (`model:`, `audience:`, `updated_at:`) is parsed by every agent — lets you steer behavior per-file.
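A Context file carrying those frontmatter keys might look like this (the company and field values are illustrative):

```markdown
---
model: claude-sonnet
audience: internal
updated_at: 2026-04-02
---

# Acme Corp

Mid-market logistics. Renewed twice; main objection was SSO pricing.
```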

Templates for day one

Ships with starter files for ICP, brand voice, objection library, case studies, competitor sheets — fill in the blanks.

Knowledge base sub-store

Long-form research, product docs, battle cards sit alongside operational context — same store, same rules.

Scoped per agent

Pin exactly which sub-folder an agent may read or write. Reply Guy never touches your deal records.

Local-first by default

Lives in `~/Library/Application Support/BlackMagic/context` (or your repo). Sync to cloud only if you opt in.

Works with your IDE

Edit in VS Code, Cursor, Obsidian, vim. Black Magic AI watches the directory and reloads agents as you save.

FAQ

Why markdown and not a database?

Because your team already reads and writes markdown in PRs, Notion, Linear. Databases need tooling. Files need nothing — and git gives you history, blame, review, and rollback for free.

Is there a Vault? I heard about Vault.

Vault was the old name. As of 0.5.19 (April 2026) it is called Context everywhere — same store, clearer word.

Can I point it at an existing repo?

Yes. Set `context_path` to any directory. A git-tracked folder is ideal — every agent edit becomes a reviewable commit.
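Assuming the setting lives in the `.bm/config.toml` file mentioned in the BYOK section (an assumption; these docs don't say where `context_path` is set), that is one line, with the path itself illustrative:

```toml
# .bm/config.toml -- path is illustrative
context_path = "~/repos/gtm-context"
```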

How big can Context get?

Tested up to 50k files / 2 GB. Larger stores should be split by concern (ops, content, knowledge) so retrieval stays fast.

Foundations · Knowledge

Knowledge

Knowledge sits alongside Context — product docs, battle cards, customer research, closed-won post-mortems. Every agent can read it. Every edit is tracked.

What it is

  • Imported, not retyped

    Point at a Notion workspace, a Google Drive folder, a website. Knowledge scrapes, converts to markdown, dedupes.

  • Agents cite by filename

    When Outbound says "our enterprise SSO supports SAML and SCIM", it links the exact doc it read — not a vibe.

  • Tagged by audience

    Tag docs as internal / enablement / customer-facing. Reply Guy never quotes internal roadmap at a prospect.

  • Merges with Context

    Operational Context (ICPs, playbooks) + Knowledge (product, research) are one continuous store the agents can traverse.

  • Semantic + grep

    Fast grep for filename + path, semantic recall for fuzzy questions, always with the raw doc for inspection.

  • Scheduled re-sync

    Notion/Drive sources re-sync on a schedule. Stale docs are flagged so no agent quotes a six-month-old price.

Every moving part

Upload any format

PDF, DOCX, PPTX, HTML, Notion export, Google Drive. Converted to markdown, links preserved.

Web crawl

Point at blackmagic.engineering/docs and it pulls the full site, stays in sync, and respects sitemap.xml and robots.txt.

Cite-on-write

When an agent writes a draft, citations are inline. Edit the draft, the citations follow the text.

Auto-summary

Every imported doc gets an LLM-generated one-paragraph summary at the top so agents can skim before reading.

Structured case studies

A template for closed-won post-mortems (ICP, pain, objection, outcome) that every agent knows how to query.

Gap detection

When multiple drafts need a fact no doc can answer, Knowledge flags the gap and suggests a doc to write.

Works with

Notion · Google Drive · Confluence · GitHub · ReadMe · Docusaurus

FAQ

How is this different from Context?

Context is operational — ICPs, playbooks, brand voice. Knowledge is long-form — docs, research, case studies. Same on-disk store, different folders and different agent permissions.

Will it leak internal docs to prospects?

No. Every doc has an audience tag. Outbound-facing agents are scoped to the customer-facing tag and cannot read internal-only docs.

Does it work with our existing Notion?

Yes. Connect Notion, pick the workspaces / pages to sync, and Knowledge keeps a local markdown mirror that re-syncs on a schedule you set.

What if a doc is wrong?

Edit it in place — or in Notion upstream. The next sync picks up the fix and every agent uses it immediately.

Foundations · Skills Library

Skills

Every vault ships with a library of ready-to-run Skills — brand monitoring, competitor radar, KOL discovery, SEO analysis, CMS publishing, API testing, LinkedIn intel. Each one reads your us/market/* files, fires the right integrations, writes signals/ notes, and sends a notification when done.

What it is

  • Skills as editable .md files

    Every Skill is a markdown file in playbooks/ with frontmatter (name, agent, inputs, requires) and a prompt body. Version-controlled, diff-able, forkable. No vendor lock.

  • Reads your us/market/* files

    brand-monitor-apify reads us/company.md. competitor-radar reads us/market/competitors.md. kol-outreach-draft reads us/brand/voice.md. Your context, your output.

  • Pre-flight before every run

    Each Skill declares integrations + us_files + cli needed. Pre-flight modal blocks Run until green with one-click fixes — paste a key, fill a file, copy an install command.

  • Writes to signals/, not a hidden DB

    Every Skill outputs a dated markdown note in signals/<kind>/<date>.md. Readable, searchable, version-controlled. No dashboard lock-in, no CSV export needed.

Every moving part

brand-monitor-apify

Daily Reddit + Twitter/X scan via Apify for brand mentions. Reads keywords from us/company.md. Classifies positive / neutral / negative / question / compare. Writes signals/mentions/<date>.md.

competitor-radar

Weekly competitor teardown — pricing / changelog / blog diff against prior week. Reads us/market/competitors.md. Flags material changes to the top of the report.

doc-leads-discover

Finds ICP-matching companies via Apify Google search. Reads us/market/icp.md. Enriches and drafts approval-gated outbound.

linkedin-intel-weekly

Competitor + KOL profile + post diff via Apify. Role changes, engagement-ranked replyable posts. Writes signals/linkedin/<iso-week>.md.

reddit-pulse

Daily brand + category narrative check. Question-post detection for tool-recommendation threads. High-urgency notify for time-sensitive replies.

kol-discover + kol-score + kol-outreach-draft

Creator-marketing loop: LinkedIn search via Apify → ICP-score against us/market/icp.md → approval-gated DM drafts via draft_create. CSV-tracked end-to-end.

gsc-content-brief

REWRITE / PUSH / GAP analysis from Google Search Console Search Analytics. Daemon signs the service-account JWT for you — no OAuth dance.

cms-blog-stats + cms-publish-draft

Ghost + WordPress. Blog overview and approval-gated draft push. Always creates as draft; you publish via your CMS UI.

api-endpoint-test

Generate + run a JSON test suite via apidog-cli against any REST backend. Covers auth / validation / method / 404 / happy-path. Free npm package, no account.

enrich-company + qualify-icp + enrich-contact

Building-block skills the Outbound Agent chains. Each one is independently usable — firmographics, ICP scoring, contact enrichment.

Signal scanners (free tier)

brand-mention-scan, competitor-scan, news-scan use only web_search + web_fetch. No Apify needed — run these if you just want the fast, cheap version.

Custom Skills

Drop a .md file in playbooks/ with the right frontmatter. Shows up in /skills. Invocable from chat, schedulable via trigger_create. Yours.
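Following that frontmatter contract (`kind`, `name`, `agent`, optional `inputs` and `requires`), a hypothetical custom Skill could look like this; the skill name, agent, and required files are made up for illustration:

```markdown
---
kind: skill
name: churn-risk-scan
agent: researcher
inputs: [account_slug]
requires:
  integrations: [hubspot]
  us_files: [us/market/icp.md]
---

Read the account's deal history, compare recent activity against the
ICP, and write a dated risk note to signals/churn/<date>.md.
```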

Works with

Apify · Amazon SES · Google Search Console · Ghost · WordPress · Unipile · EnrichLayer · DataForSEO · Feishu · Slack

FAQ

Can I edit the default Skills?

Yes — they're markdown files at <vault>/playbooks/. Edit freely. The revision-aware seeder only overwrites your version when the shipped one has a higher `revision:` number AND you haven't customized. Your edits win.

Can I write my own Skills?

Yes. Drop <slug>.md in playbooks/. Frontmatter needs `kind: skill, name, agent, inputs?, requires?`. Body is the prompt an agent follows. Shows up in /skills immediately.

What's the difference between a Skill and an Agent?

Agents are personas (Researcher, SDR, Outbound, GEO Analyst). Skills are capabilities — a pre-written prompt + tool list that an agent runs. One agent can invoke many skills. The Skills page lets you trigger a one-off invocation for testing.

Do Skills cost credits?

LLM reasoning goes through our billed proxy. Integration calls (Apify, SES, Google, Ghost, WordPress, GSC) use your own BYOK keys and your own provider bills — we don't touch those.

Foundations · Integrations

Integrations

Twenty-seven first-party integrations out of the box — CRMs, inboxes, data vendors, warehouses. Your existing contracts stay yours. No hidden markup.

What it is

  • BYOK — bring your own keys

    Wire your existing Apollo, ZoomInfo, Cognism, Clearbit keys. We route through them at cost. No markup, no resell.

  • 27 first-party connectors

    HubSpot, Salesforce, Gmail, Outlook, Slack, Gong, Apollo, Attio, Pipedrive, Stripe, Segment — every major GTM surface.

  • OAuth or API key, your choice

    OAuth where available (Google, HubSpot, Slack), API keys for everything else. Keychain-backed, never logged.

  • Warehouse as a first-class input

    Snowflake, BigQuery, Postgres treated as providers — an agent can SELECT the same way it can call Apollo.

  • MCP-compatible

    Any Model Context Protocol server plugs in. Tell the agent about a new tool and it uses it on the next run.

  • Observable calls

    Every integration call shows provider, args, status, cost, and latency in the run transcript.

Every moving part

CRMs

HubSpot, Salesforce, Attio, Pipedrive, Close — bidirectional, with field-level mapping and custom objects.

Comms

Gmail, Outlook, Slack, Microsoft Teams, LinkedIn (managed relay or BYOK), Reddit (managed relay), X (BYOK).

Data vendors

Apollo, ZoomInfo, Cognism, Clearbit, PeopleDataLabs (first-party), BuiltWith, Wappalyzer.

Call + content intel

Gong, Chorus, Fathom. Transcripts feed replies, signal detection, and closed-lost analysis automatically.

Warehouses

Snowflake, BigQuery, Postgres. Read-only by default; write-enabled with explicit per-table scopes.

Model providers

OpenAI, Anthropic, Google, plus the Black Magic AI gateway. Pin per agent, BYOK for any provider.

Works with

HubSpot · Salesforce · Attio · Pipedrive · Gmail · Outlook · Slack · Teams · Gong · Chorus · Apollo · ZoomInfo · Cognism · Clearbit · PeopleDataLabs · BuiltWith · Snowflake · BigQuery · Segment · Stripe · Linear · GitHub · Zendesk · Clay · LinkedIn · Reddit · X

FAQ

What does "BYOK" mean in practice?

Bring Your Own Key. Paste your Apollo / ZoomInfo / OpenAI API key once; every tool call routes through your account at your contracted rate. We never mark it up.

What if I don't have a data vendor?

First-party fallbacks (PeopleDataLabs for LinkedIn enrichment, our own email verifier) kick in. Priced per call, no minimum.

Can I add a tool you don't support?

Yes — via MCP (Model Context Protocol). If a system exposes an MCP server, the agents can use it with zero platform work from us.

Are integration calls logged?

Every call writes to the run transcript with args, response status, cost, and latency. SOC 2 auditable out of the box.

Foundations · BYOK + Local-First

BYOK

Every integration is bring-your-own-key. Keys live in a local JSON file and mirror to a plain-text .env in your vault — so scripts, skills, and other tools can all read them naturally. Nothing leaves your Mac except the LLM calls you bill credits for.

What it is

  • BYOK — you paste, you own

    22+ integrations, every one BYOK. Paste once in Integrations → Apify (or SES, GSC, Ghost, Unipile, Stripe, GitHub, …). Keys stay in ~/BlackMagic/.bm/integrations.json on your disk, forever.

  • .env mirror for scripts

Every saved integration also writes a plain KEY=value line to <vault>/.env. Your Python / Node / shell scripts just call load_dotenv() (or the equivalent) and read APIFY_API_TOKEN, AWS_ACCESS_KEY_ID, FEISHU_WEBHOOK, SES_FROM, etc.

  • Vault is just files

    companies/*.md, contacts/*.md, deals/*.md, signals/*.md, playbooks/*.md, drafts/*.md. Readable in any text editor. Version-controllable with git. Movable to any Mac with rsync.

  • Daemon runs on your machine

    All tool execution — fetch, scrape, send_email, cms_create_draft, GSC query — runs in a local Node daemon. The only outbound traffic is the target API call + our LLM proxy for billed reasoning.

  • Nothing to migrate when you switch

    Stop paying us and the vault stays on your disk. Open it with a text editor. The keys in .env still work with whatever replaces us. Literally no lock-in.

  • Git-native

    Your vault is git-init'd by default. Diff a contact's history. Revert a bad enrichment. Branch a new ICP to experiment. It's just files.

Every moving part

integrations.json

Canonical store at ~/BlackMagic/.bm/integrations.json. Per-provider `{ status, connectedAs, connectedAt, credentials }`. UI reads + writes, daemon consumes.
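An entry in that per-provider shape might look like this (the provider chosen and the field values are illustrative; the exact credential key names aren't documented here):

```json
{
  "apify": {
    "status": "connected",
    "connectedAs": "ops@example.com",
    "connectedAt": "2026-04-02T09:14:00Z",
    "credentials": { "apiToken": "apify_api_xxxx" }
  }
}
```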

Auto .env mirror

Every save regenerates <vault>/.env with predictable names — APIFY_API_TOKEN, AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, AWS_REGION, SES_FROM, FEISHU_WEBHOOK, GHOST_ADMIN_API_KEY, GHOST_ADMIN_API_URL, GSC_SERVICE_ACCOUNT_JSON, …
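Any script in the vault can read those mirrored keys with python-dotenv's `load_dotenv()`; if you'd rather not add the dependency, a minimal stdlib reader for the same `KEY=value` format is a sketch like this (real `.env` dialects can also carry quoting and `export` prefixes, which this ignores):

```python
import os
from pathlib import Path

def load_env(path: Path) -> dict[str, str]:
    """Parse KEY=value lines (as written by the .env mirror) into
    os.environ, skipping blank lines and # comments."""
    env = {}
    for line in path.read_text().splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        env[key.strip()] = value.strip()
    os.environ.update(env)
    return env
```

After `load_env(vault / ".env")`, `os.environ["APIFY_API_TOKEN"]` holds the same key you pasted in the UI.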

LLM keys stay separate

zenn_api_key (your credits token) lives in .bm/config.toml and is never mirrored to .env. A Skill cannot accidentally use your billing token to bypass the proxy.

One file per entity

A company is a markdown file. A contact is a markdown file. A run is a markdown file. You can grep, git-log, and rm -rf with confidence.

Pure Node network layer

SES requests use Node's native https module, sidestepping any Electron / Chromium network interference. GSC JWT signing uses Node crypto. No wrapper SDKs that phone home.

Your own provider bills

Apify charges you directly. AWS SES charges you directly. Unipile charges you directly. We charge you only for credits on the LLM proxy. No 3× markups.

Works with

Apify · Amazon SES · Unipile · Google Search Console · Ghost · WordPress · HubSpot · Salesforce · Feishu · Slack

FAQ

Does any of my integration data hit your servers?

No. Keys live in .bm/integrations.json on your Mac. Integration API calls (Apify → api.apify.com, SES → email.us-east-1.amazonaws.com, GSC → googleapis.com, etc.) go directly from your daemon to the provider. We route only LLM reasoning calls through our proxy — and even those carry your `ck_` credits token, not your integration keys.

Can I audit what leaves the machine?

Yes. Daemon logs every outbound request to ~/Library/Logs/BlackMagic AI/. Plus you can set a proxy and inspect every packet — SES, Apify, Feishu, and all CMS calls are plain HTTPS to the providers' public APIs.

What happens if I stop paying for BlackMagic?

The vault stays on your disk, integrations.json stays on your disk, .env stays on your disk. The desktop daemon stops receiving updates but continues to run. Skills you've customized are yours. Migrate everything to a text editor if you want — it's just .md + .json + .env.

Can I share a vault across devices?

Yes — the vault is designed to be rsync'able / Dropbox-syncable / git-pushable. Keys in integrations.json are machine-local by convention (don't check them into git), but if you want to sync them, that's your call.

Why mirror to .env if integrations.json already has the keys?

.env is the universal interface. Every Python, Node, Go, or shell script in the world knows how to load_dotenv(). Mirroring means any script you write in your vault picks up the same keys you pasted in the UI — zero glue code.

Surfaces · AI Chat

AI Chat

Ask in plain English. Black Magic AI picks the right agent, pulls the right context, calls the right tools, and writes back with receipts — every time.

What it is

  • Sixteen agents on call

    Mention an agent or just describe the job — the router picks the right one (Outbound, Pipeline Ops, Reply Guy, …) behind the scenes.

  • Grounded in your Context

    Every reply reads from the on-disk Context store — your playbooks, ICPs, past deals — not the open web.

  • Attach anything

    Drop a CSV, a deal record, a LinkedIn URL, a Gong transcript. The chat turns it into structured input for the agent.

  • Real tool calls, not vibes

    Chat writes to HubSpot, fires sequences, runs SQL against your warehouse. Every action is logged with a reversible receipt.

  • Cited answers

    Numbers link back to the CRM record or Context file they came from. One click to verify, one click to edit the source.

  • Starter prompts per agent

    Each agent ships with 6 starter chips tuned to its real doctrine — no blank-canvas paralysis.

Every moving part

Default gpt-5.5, pin anything

The default is always the latest frontier model. Pin a specific model per-agent when determinism or cost matters.

Files in, files out

Upload spreadsheets, PDFs, decks. The agent writes back markdown, CSV, or JSON you can paste straight into a board.

Voice input

Dictate your prompt between meetings. Whisper-class transcription with your own vocabulary of buyer names.

Fire-and-forget runs

Long-horizon tasks (10-minute research, 500-row enrichment) run in the background. The composer unlocks instantly.

Keyboard-first

⌘K to switch agents, ⌘↵ to send, ⌘/ to cite. Built for people who live in the command line.

Promote to agent

When a chat thread is something you want to repeat, one click turns it into a named agent with its own prompt and tools.

Works with

HubSpot · Salesforce · Gmail · Slack · Apollo · Gong · Attio · Linear · Snowflake · BigQuery

FAQ

Is this just ChatGPT with a wrapper?

No. Chat is grounded in your on-disk Context (playbooks, ICPs, past deals) and has real authenticated tools into your CRM, inbox, and warehouse. ChatGPT has neither.

Which model runs under the hood?

gpt-5.5 by default, as of April 2026. You can pin a different model per-agent or per-chat — Opus, Sonnet, or your own provider via BYOK.

Can I see what tools the agent called?

Yes — every reply shows an expandable trace of the tool calls, their arguments, and their results. Nothing is hidden.

Does it work offline?

The desktop app keeps Chat History and Context local. Model calls need network, but read-only sessions against cached answers work in airplane mode.

Surfaces · Drafts

Drafts

Drafts is the staging area for everything the agents want to send — emails, LinkedIn DMs, Slack nudges. Review, edit, schedule, or flip the switch and let auto-send ship them.

What it is

  • Every draft is a real document

    Subject, recipient, body, thread ID — stored on disk. Edit inline, your changes survive regeneration.

  • Policy, per audience

    Auto-send replies to known customers. Queue first-touch outbound for review. Per-segment rules, not global ones.

  • Auto-send, actually

    The global auto-send toggle is now authoritative. Explicit `auto: false` from an agent no longer silently overrides it.

  • Hold, edit, release

    Pause the whole queue with one switch. Edit in place. Release individually or in a batch.

  • Undo window

    Every auto-sent draft has a configurable undo window (default 60s) before it actually leaves your outbox.

  • Signal badges

    See why the draft exists — "LinkedIn post signal", "pricing page visit", "reply needed" — without opening the thread.

Every moving part

Email via your provider

Gmail, Outlook, or BYO SMTP. Drafts are real drafts in your sent folder — deliverability is yours.

LinkedIn DMs + connection notes

Through our managed relay or your own account — same review flow, same policy rules.

Scheduled send

Send-at-9am-local per prospect. Pause over weekends. Respect your own working-hours policy.

Per-agent auto toggle

Reply-to-inbound agents can auto-send while outbound agents stay queued — granular controls.

Batch approve

50 drafts look similar? Sweep them with a keyboard shortcut, skim the outliers, ship the rest.

Tone-match from Context

Every draft is written in your brand voice — pulled from `context/brand-voice.md` and last-10-sent exemplars.

Works with

Gmail · Outlook · LinkedIn · Slack · HubSpot · Salesforce

FAQ

Will auto-send really send without me?

Only if the global toggle is ON. It is OFF by default, and individual agents or audiences can be excluded. Every auto-sent message has an undo window.

What about the "auto: false" regression I heard about?

Fixed in 0.5.22 (April 2026). The global toggle is now authoritative — explicit per-call `auto: false` no longer silently overrides your setting.

Can I regenerate a single paragraph?

Yes — select text, ⌘R, describe the change. The rest of the draft is untouched.

Does it handle replies to threads?

Yes — Drafts preserves thread IDs and in-reply-to headers so sent replies thread correctly in the recipient's inbox.

Surfaces · Triggers

Triggers

Triggers watch inboxes, webhooks, CRM fields, and external feeds — and fire the right agent the second something happens. Your GTM motion, event-driven.

What it is

  • Webhook in, agent out

    Point any system at a Trigger URL. The payload becomes structured context for the dispatched agent.

  • Inbox listeners

    Watch Gmail for replies, bounces, out-of-offices. Each triggers the right follow-up agent with full thread context.

  • CRM field listeners

    HubSpot or Salesforce: a stage change, new contact, or custom field flip dispatches the matching agent instantly.

  • External feeds

    LinkedIn post signals, 6sense intent surges, RSS feeds, Reddit mentions — pluggable sources, one dispatch layer.

  • Scheduled runs

    Cron-style schedules for recurring audits — daily pipeline health, weekly closed-lost rollup — no babysitting.

  • Filters and rate limits

    ICP filters, dedup windows, per-agent rate caps (Reply Guy caps at 5 Reddit / 10 X per day) enforced by the platform.

Every moving part

Typed webhook signatures

Declare the payload shape once; incoming events are validated before an agent ever sees them.
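The declare-once, validate-before-dispatch idea can be sketched as follows; the schema format and the example trigger shape are hypothetical, not Black Magic AI's actual API:

```python
def validate(payload: dict, shape: dict[str, type]) -> dict:
    """Check an incoming webhook payload against a declared shape
    before any agent sees it; reject missing, mistyped, or extra fields."""
    errors = []
    for field, expected in shape.items():
        if field not in payload:
            errors.append(f"missing {field}")
        elif not isinstance(payload[field], expected):
            errors.append(f"{field}: expected {expected.__name__}")
    for field in payload:
        if field not in shape:
            errors.append(f"unexpected {field}")
    if errors:
        raise ValueError("; ".join(errors))
    return payload

# A hypothetical "pricing page visit" trigger shape:
VISIT_SHAPE = {"email": str, "url": str, "seconds_on_page": int}
```

A payload that passes is handed to the dispatched agent unchanged; one that fails never reaches an agent at all.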

Dedup and merge

Two signals about the same contact in 30 minutes become one agent run, not two duplicate outreaches.

Retries with backoff

Transient failures retry on an exponential schedule. Dead-letter queue for anything beyond the limit.

Run transcripts

Every trigger-started run writes a full transcript to `runs/<date>-<agent>/` — grep-able, replayable.

Works across your stack

Gmail, HubSpot, Salesforce, Apollo, Gong, Clay, Stripe, Segment, Linear, GitHub, Slack, Zendesk.

Promote a chat to a trigger

A one-off chat that solved a problem can be promoted to a recurring trigger in two clicks.

Works with

HubSpot · Salesforce · Gmail · Slack · Apollo · Gong · Segment · Stripe · Linear · GitHub · Zendesk · Clay

FAQ

Is this a Zapier replacement?

For revenue workflows, yes. Zapier is great for static flows. Triggers dispatch to agents that handle judgment — edge cases, long tails, weird inputs — without you building a forked graph for each one.

How do I stop a runaway loop?

Per-agent rate caps, per-trigger daily caps, and a global kill-switch that pauses every trigger across the workspace in one click.

Can I see what's pending?

Yes — the Runs list is chronological (fixed in 0.5.21) and includes both chat-initiated and trigger-initiated runs.

What about scheduled audits?

Cron-style schedules fire the same dispatch path — Pipeline Ops daily at 8am, Closed-lost Analysis every Monday, whatever cadence you want.

Surfaces · Desktop

Desktop app

Black Magic AI ships as a native macOS and Windows desktop app. Context lives on your disk. Tool calls happen on your machine. The cloud is optional, never required.

What it is

  • Native on macOS + Windows

    Signed, notarized, auto-updating. Feels like a native app because it is one — not an Electron tab.

  • Context on your disk

    Every file the agents read and write lives in a folder on your machine. Cancel us, keep your data.

  • Local daemon

    Tool calls, file reads, git ops run on-device. The cloud only sees what you explicitly route through it.

  • Works offline, mostly

    Browse Context, read chat history, queue drafts offline. The moment wifi returns, queued runs dispatch.

  • Keyboard-first

    ⌘K command palette, ⌘↵ send, ⌘[ history, ⌘\ sidebar. Built for people who live in their keyboard.

  • Instant cold-start

    App launches in <1s. No spinner, no auth dance, no loading bar — just your workspace where you left it.

Every moving part

macOS 13+, universal binary

Runs native on both Apple Silicon and Intel. Menu bar status, notification center support, system share sheet.

Windows 10 + 11

MSI-based installer, auto-update via Squirrel, Start Menu integration, system tray for quick access.

CLI shipped alongside

`blackmagic` on your path — pipe stdin into an agent, tail logs, script your GTM ops from a terminal.

MDM-friendly

Jamf and Intune deploy packages. Managed config via plist / registry so IT can pre-seed workspaces.

Keychain-backed secrets

API keys stored in the OS keychain — never in plaintext, never in the cloud unless you ask.

Multiple workspaces

Switch between client workspaces (consultants) or business units (in-house) with a keyboard shortcut.

FAQ

Is there a web version?

Yes — the dashboard at app.blackmagic.engineering works in any modern browser. But the desktop app is where the real ergonomics live: offline Context, local tool calls, OS-level shortcuts, and keychain-backed secrets.

How big is the download?

~80 MB signed DMG on macOS, ~90 MB MSI on Windows. Auto-update delivers patches in the background.

Do I need to re-authenticate each time?

No. Session is persisted in the OS keychain. Launch the app and you are in.

Will my Context sync across machines?

Point the `context_path` at a synced folder (Dropbox, iCloud Drive, git repo) and it travels with you. Cloud-side optional sync is on the roadmap.

Ready to try it?

The fastest read of these docs is twenty minutes inside the app. Install on macOS, point it at a vault, ship one Outbound run.

Start free · See pricing