Scoopika is an open-source toolkit that lets developers build modern, fast, multimodal LLM-powered web applications 10x faster. It provides built-in streaming, error recovery, multimodal input handling, LLM output validation, and serverless memory/knowledge stores to power AI agents that can interact with data, APIs, and voice in real time.
Overview
- Open-source, developer-first platform to connect LLM providers, build AI agents, and deploy multimodal apps with ease.
- Key capabilities include multimodal inputs (text, images, audio), real-time streaming, validated JSON outputs, and edge-hosted knowledge/memory stores.
- Suited to building AI assistants, automation bots, data extraction tools, search engines, and more, all with access to APIs and external data sources.
How Scoopika Works
- Connect your LLM provider. Power agents with your preferred LLM, keeping provider keys on your own infrastructure.
- Create AI agents. Define agents that can perform tasks like automation, chat, or data extraction (a setup sketch follows this list).
- Add tools and APIs. Equip agents with APIs, tools, and knowledge sources they can use in context.
- Handle multimodal inputs. Feed text, images, and audio to agents, with support for voice interactions and streaming responses.
- Store memory and knowledge. Use managed serverless memory stores and knowledge stores to retain context and expand capabilities.
- Deploy easily. Run agents in your app or expose them as HTTP endpoints with streaming support.
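A minimal sketch of the first steps in TypeScript. The package name `@scoopika/scoopika`, the `Scoopika` and `Agent` classes, the constructor options, and the `onToken` hook are assumptions that follow the workflow described above, not verified API; check the Scoopika docs for the exact interface.

```ts
// Hypothetical sketch of the setup flow described above.
// Package name, class names, and options are assumptions, not verified API.
import { Scoopika, Agent } from "@scoopika/scoopika";

// 1. Connect your LLM provider: keys stay on your own server.
const scoopika = new Scoopika({
  token: process.env.SCOOPIKA_TOKEN,               // platform access token (assumed)
  engines: { openai: process.env.OPENAI_API_KEY }, // provider keys (assumed shape)
});

// 2. Create an AI agent by ID; its task (chat, automation,
//    data extraction) is defined in the agent's configuration.
const agent = new Agent("AGENT_ID", scoopika);

// 3. Run it with inputs and a streaming hook (hook name assumed).
const response = await agent.run({
  inputs: { message: "Summarize this document for me." },
  hooks: { onToken: (token: string) => process.stdout.write(token) },
});
```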
Key Capabilities
- Multimodal inputs: Text, images, audio (with real-time processing and streaming)
- AI agents: Build automation apps, chatbots (text & voice), and data extraction tools
- Data sources: Connect files, websites, PDFs, URLs to knowledge stores
- Memory & knowledge stores: Managed, edge-hosted memory and knowledge storage
- JSON validation: Generate validated JSON according to any schema, with retries and error handling (see the sketch after this list)
- Streaming: Real-time text and voice output for interactive experiences
- Custom tools & APIs: Use any API or custom function your code provides
- Security & hosting: Deploy endpoints on your own cloud with memory/knowledge stores managed by Scoopika
- Open source: 90% of Scoopika components are open source for easy inspection and contribution
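To make the JSON-validation capability concrete, here is a hedged sketch of schema-constrained generation. Zod is used purely as an illustrative schema library, and the `structuredOutput` method name is a hypothetical stand-in for whatever validated-output call Scoopika actually exposes.

```ts
import { z } from "zod"; // illustrative schema library, an assumption
import { Scoopika, Agent } from "@scoopika/scoopika"; // assumed package/classes

// Target shape for the extraction: the agent's raw output is validated
// against this schema, with retries on failure (the pattern described above).
const contactSchema = z.object({
  name: z.string(),
  email: z.string().email(),
  company: z.string().optional(),
});

const scoopika = new Scoopika({ token: process.env.SCOOPIKA_TOKEN }); // assumed options
const agent = new Agent("AGENT_ID", scoopika);

// `structuredOutput` is a hypothetical name for the validated-JSON call;
// the real method name may differ.
const contact = await agent.structuredOutput({
  inputs: { message: "Extract the contact info from this email: ..." },
  schema: contactSchema,
});
// `contact` now matches contactSchema, or the call fails after retries.
```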
How to Use Scoopika
- Connect your provider: Link your LLMs securely from your servers.
- Create an AI agent: Build an agent for your desired use case (automation, chat, data extraction).
- Build: Deploy the agent in your app or expose it as an API endpoint with streaming support (an endpoint sketch follows this list).
- Use cases: Multimodal AI assistants, AI-powered search engines, data extraction bots, and complex automation tasks.
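A sketch of the "expose it as an API endpoint" step, using Express and server-sent events for illustration. Only Express itself is verified here; the Scoopika classes, options, and `onToken` hook are assumptions carried over from the earlier sketch, and Scoopika may ship its own endpoint helper instead.

```ts
import express from "express";
import { Scoopika, Agent } from "@scoopika/scoopika"; // assumed package/classes

const app = express();
app.use(express.json());

const scoopika = new Scoopika({ token: process.env.SCOOPIKA_TOKEN }); // assumed options
const agent = new Agent("AGENT_ID", scoopika);

// Stream the agent's response to the client as server-sent events.
app.post("/agent", async (req, res) => {
  res.setHeader("Content-Type", "text/event-stream");
  res.setHeader("Cache-Control", "no-cache");

  await agent.run({
    inputs: { message: req.body.message },
    hooks: {
      // Forward each token as it arrives (hook name assumed).
      onToken: (token: string) => res.write(`data: ${JSON.stringify(token)}\n\n`),
    },
  });
  res.end();
});

app.listen(3000);
```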
Why Scoopika
- Built for developers and engineers: easy integration with simple APIs, strong type-safety, and clear error handling.
- Real-time streaming and memory/knowledge management reduce the burden of maintaining context.
- Edge-native deployment and global scalability across 26 regions.
- Open source core with commercial options for advanced features like long-term memory and larger knowledge stores.
Core Features
- Open-source toolkit for building multimodal LLM-powered web apps
- Real-time streaming of text and voice responses
- Multimodal inputs (text, images, audio) with fast processing
- AI agents that can interact with data, APIs, and knowledge stores
- Validated JSON output generation with retry and error handling
- Managed memory stores and knowledge stores at the edge
- Tools and API integration to fetch data or perform actions based on context (a custom-tool sketch follows this list)
- Easy deployment as app components or HTTP API endpoints
- Strong focus on performance, reliability, and scalable architecture
- No vendor lock-in with self-hosted, configurable infrastructure
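A hedged sketch of equipping an agent with a custom function tool. The `addTool` method and the JSON-schema parameter description are hypothetical stand-ins for whatever registration API Scoopika provides; the weather URL is a placeholder.

```ts
import { Scoopika, Agent } from "@scoopika/scoopika"; // assumed package/classes

const scoopika = new Scoopika({ token: process.env.SCOOPIKA_TOKEN }); // assumed options
const agent = new Agent("AGENT_ID", scoopika);

// A plain function the agent can call when the conversation needs it.
async function getWeather(args: { city: string }) {
  // Placeholder URL for illustration only.
  const res = await fetch(
    `https://api.example.com/weather?city=${encodeURIComponent(args.city)}`
  );
  return res.json();
}

// `addTool` and the schema shape are hypothetical; the idea is that the
// agent sees the tool's name, description, and parameters in context
// and decides when to call it.
agent.addTool(getWeather, {
  name: "get_weather",
  description: "Get the current weather for a city",
  parameters: {
    type: "object",
    properties: { city: { type: "string" } },
    required: ["city"],
  },
});
```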
Example References
- A typical usage snippet instantiates Scoopika and an agent, provides inputs, and registers hooks for token events and tool calls, illustrating the developer-friendly approach and extensibility for custom workflows. A hedged reconstruction of that pattern follows.
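Since the snippet itself is not reproduced here, this is a reconstruction of the pattern it describes, under stated assumptions: the package, classes, and hook names (`onToken`, `onToolCall`, `onFinish`) match the description above but are not verified API.

```ts
import { Scoopika, Agent } from "@scoopika/scoopika"; // assumed package/classes

const scoopika = new Scoopika({ token: process.env.SCOOPIKA_TOKEN }); // assumed options
const agent = new Agent("AGENT_ID", scoopika);

const response = await agent.run({
  inputs: { message: "Find flights from Berlin to Oslo next Friday." },
  hooks: {
    // Fires for each streamed token (name assumed).
    onToken: (token: string) => process.stdout.write(token),
    // Fires when the agent invokes a tool (name and shape assumed).
    onToolCall: (call: { name: string }) => console.log(`\n[tool] ${call.name}`),
    // Fires once the full response is ready (name assumed).
    onFinish: (res: unknown) => console.log("\nDone:", res),
  },
});
```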
Pricing Tiers (High-level)
- Free/Starter: Core features with unlimited AI agent runs and real-time streaming
- Pro: Voice responses, long-term memory, expanded knowledge stores, higher rate limits
- Scale: Higher memory, higher throughput, larger knowledge quotas, prioritized support
- On-demand custom development and enterprise options are also available
What’s Included
- Built-in streaming, memory encryption, and LLM output validation
- Edge-hosted memory and knowledge stores for low-latency access
- Simple, typed APIs for agents, tools, and memory/knowledge stores
- Client/server deployment options with hooks for custom behavior (a client-side sketch follows this list)
- Fully open-source core with paid upgrades for advanced features
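To illustrate the client side of the client/server split, a final hedged sketch: the `@scoopika/client` package and its `Client`/`Agent` classes are assumptions about a browser-side companion, pointed at a server endpoint like the Express route sketched earlier.

```ts
// Browser-side sketch; package and class names are assumptions.
import { Client, Agent } from "@scoopika/client";

// Points at an HTTP endpoint you host (e.g. the Express route above).
const client = new Client("https://your-app.example.com/agent");
const agent = new Agent("AGENT_ID", client);

await agent.run({
  inputs: { message: "Hi there!" },
  hooks: {
    // Render tokens into the page as they stream in (hook name assumed).
    onToken: (token: string) => {
      document.getElementById("output")!.textContent += token;
    },
  },
});
```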