MacCopilot - Native Copilot AI for macOS is a native macOS application that integrates advanced AI models (GPT-4, ClaudeAI, Google Gemini, and more) directly into your screen workflow. It supports seamless screen capture, AI-assisted insights, and Markdown export of results. Designed for macOS 12.0 and later, MacCopilot aims to transform how you interact with on-screen content through conversation, visual AI prompts, and streamlined content creation.
Key Capabilities
- Engage in conversations with advanced AI models directly about your screen content (GPT-4o, GPT-4, ClaudeAI, Google Gemini, etc.).
- Visual-based interaction: chat with AI models about the current screen view to gain insights, explanations, and creative ideas.
- Flexible screenshot capture: select any region of the screen using built-in screenshot tools, with optional resizing and reuse of the last selected region for quick captures.
- Multi-model support: connect to OpenAI GPT-4o, GPT-4-turbo, ClaudeAI (Sonnet, etc.), Google Gemini (Gemini Pro 1.0/1.5), Ollama/local LLM servers, Azure OpenAI, and OpenAI API-compatible servers.
- Cross-platform model integration: OpenAI, ClaudeAI, Google Generative AI, Ollama, Azure OpenAI, and more via Preferences > Models.
- Built-in Markdown export: export your interactions and capture results as Markdown for easy sharing and documentation.
- Subscription options: multiple plans including lifetime, monthly, and tiered usage to fit different needs.
- Local and cloud options: supports local Ollama servers and API-based models for flexible deployment.
How to Use MacCopilot
- Install MacCopilot on macOS 12.0 or later.
- Add AI models in Preferences > Models (OpenAI, ClaudeAI, Google Generative AI, Ollama, OpenAI Compatible, Azure OpenAI).
- Capture a region of your screen with the built-in screenshot tool, then chat with the AI about the captured content or export it as Markdown.
- Configure hotkeys (Preferences > General) for quick region captures and re-captures.
Disclaimer: Some models require API keys or local server setups. Refer to the model-specific guides within the app for instructions.
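The workflow above ends with a Markdown export of the conversation. As a rough illustration of what such an export might contain (the structure below is an assumption for illustration, not MacCopilot's documented export format), a capture-and-chat session could be serialized like this:

```python
# Sketch: serialize a screen-capture conversation to Markdown.
# The message record shape ('role'/'text' keys) is hypothetical,
# not MacCopilot's actual internal data model.

def conversation_to_markdown(title, messages):
    """Render a list of {'role': ..., 'text': ...} dicts as a Markdown document."""
    lines = [f"# {title}", ""]
    for msg in messages:
        lines.append(f"**{msg['role'].capitalize()}:**")
        lines.append("")
        lines.append(msg["text"])
        lines.append("")
    return "\n".join(lines)

doc = conversation_to_markdown(
    "Screen capture session",
    [
        {"role": "user", "text": "What does this chart show?"},
        {"role": "assistant", "text": "It appears to show monthly revenue growth."},
    ],
)
print(doc.splitlines()[0])  # → # Screen capture session
```

A flat heading-plus-transcript layout like this keeps the export readable in any Markdown viewer and easy to paste into documentation.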
Supported Models and Integration
- OpenAI GPT-4o and GPT-4o-compatible variants
- OpenAI GPT-4-turbo
- ClaudeAI (3.5 Sonnet family and beyond)
- Google Generative AI (Gemini series)
- Ollama (local LLM server)
- Azure OpenAI
- OpenAI API-compatible endpoints
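Several of the backends above (OpenAI, Azure OpenAI, OpenAI API-compatible servers) speak the public OpenAI chat-completions format, in which a screen capture is attached as a base64-encoded image part. A minimal sketch of such a request body (the field names follow the public OpenAI chat API; the model name and image bytes here are placeholders, and this is not MacCopilot's internal code):

```python
import base64
import json

# Placeholder bytes stand in for a real PNG screenshot.
screenshot_bytes = b"\x89PNG fake screenshot data"
data_url = "data:image/png;base64," + base64.b64encode(screenshot_bytes).decode()

payload = {
    "model": "gpt-4o",  # any multimodal model the endpoint serves
    "messages": [
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Explain what is on this screen."},
                {"type": "image_url", "image_url": {"url": data_url}},
            ],
        }
    ],
}

# This JSON body is what gets POSTed to the endpoint's /v1/chat/completions route.
body = json.dumps(payload)
```

Local servers such as Ollama can expose the same OpenAI-compatible route, which is why one request shape covers both cloud and local deployments.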
How to Configure and Manage Models
- Go to Preferences > Models > Add Model and choose the appropriate format (OpenAI, ClaudeAI, Google Generative AI, Ollama, OpenAI Compatible, Azure OpenAI).
- Enter required API keys or server endpoints and assign a model name.
- For text-only (non-multimodal) models, captured screen images cannot be sent to the model; only text prompts are supported.
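One way to picture a model entry from Preferences > Models is a small record holding the format, credentials, and capability flags. The fields and validation rules below are an assumption for illustration only, not MacCopilot's internal schema:

```python
from dataclasses import dataclass

@dataclass
class ModelEntry:
    name: str
    fmt: str            # e.g. "OpenAI", "ClaudeAI", "Ollama", "Azure OpenAI"
    endpoint: str = ""  # server URL; needed for Ollama / OpenAI Compatible / Azure
    api_key: str = ""   # needed for the cloud formats
    multimodal: bool = True  # False means screen captures cannot be sent

    def validate(self):
        """Return a list of configuration problems (empty list means OK)."""
        errors = []
        if self.fmt in ("Ollama", "OpenAI Compatible", "Azure OpenAI") and not self.endpoint:
            errors.append("endpoint required")
        if self.fmt != "Ollama" and not self.api_key:
            errors.append("api_key required")
        return errors

entry = ModelEntry(name="local-llava", fmt="Ollama")
print(entry.validate())  # → ['endpoint required']
```

Validating an entry up front, before the first chat request, surfaces a missing key or endpoint immediately instead of as a failed API call.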
System Requirements
- macOS 12.0 Monterey or later
Support and Licensing
- Various subscription plans available, including One-Time Lifetime Access and monthly plans with different AI request limits.
- For support or feedback, contact [email protected]
Safety and Privacy
- MacCopilot interacts with on-screen content and AI services via API calls or local servers. Ensure you manage keys and endpoints securely and follow platform policies for data handling.
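For the cloud-backed models, one common way to manage keys securely is to keep them out of source files and logs: read them from the environment and mask them anywhere they might be printed. A generic sketch of that practice (not specific to how MacCopilot stores keys; the variable name and key value are demo placeholders):

```python
import os

def load_key(var_name):
    """Fetch an API key from the environment rather than hardcoding it."""
    key = os.environ.get(var_name)
    if not key:
        raise RuntimeError(f"{var_name} is not set")
    return key

def mask(key):
    """Show only the last 4 characters when a key must appear in logs."""
    return "*" * max(len(key) - 4, 0) + key[-4:]

os.environ["DEMO_API_KEY"] = "sk-12345678"  # demo value only
print(mask(load_key("DEMO_API_KEY")))  # → *******5678
```

The same habit applies to endpoint URLs for private local servers: treat them as configuration, not as constants baked into shared documents.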
Core Features
- Native macOS app with deep integration
- Multi-model AI support: GPT-4o, GPT-4-turbo, ClaudeAI, Google Gemini, Ollama, Azure OpenAI, and more
- Screen capture with flexible region selection and reuse of last region
- Conversational AI about current screen content with multiple models
- Visual-based AI interactions for insights, explanations, and ideas
- Markdown export of content and interactions
- Direct model management from within the app (Preferences > Models)
- Various subscription plans including lifetime access
- Local and cloud deployment options for AI models