Core Responsibilities
The server handles a wide range of critical functions:

Authentication
Manages user sessions, JWT-based authentication, and supports multi-user environments.
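As a rough sketch of the JWT flow, an HS256-signed token can be issued and verified with Python's standard library alone; the claim names and secret here are illustrative, not the server's actual schema:

```python
import base64, hashlib, hmac, json

def b64url(data: bytes) -> bytes:
    # JWTs use unpadded URL-safe base64
    return base64.urlsafe_b64encode(data).rstrip(b"=")

def sign_jwt(payload: dict, secret: bytes) -> str:
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = b64url(json.dumps(payload).encode())
    signing_input = header + b"." + body
    sig = b64url(hmac.new(secret, signing_input, hashlib.sha256).digest())
    return (signing_input + b"." + sig).decode()

def verify_jwt(token: str, secret: bytes) -> dict:
    header, body, sig = token.split(".")
    signing_input = (header + "." + body).encode()
    expected = b64url(hmac.new(secret, signing_input, hashlib.sha256).digest()).decode()
    if not hmac.compare_digest(sig, expected):
        raise ValueError("invalid signature")
    # restore the base64 padding stripped at signing time
    return json.loads(base64.urlsafe_b64decode(body + "=" * (-len(body) % 4)))
```

A production deployment would additionally check expiry (`exp`) and issuer claims rather than signature alone.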
LLM Orchestration
Provides a unified interface for interacting with various LLM providers (OpenAI, Anthropic, Gemini, etc.).
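A unified provider interface typically means one abstract contract plus per-provider adapters. The sketch below is a hypothetical minimal version; the class and registry names are not the server's real API:

```python
from abc import ABC, abstractmethod

class LLMProvider(ABC):
    """Common contract; each adapter translates to its provider's API."""
    @abstractmethod
    def complete(self, prompt: str) -> str: ...

class EchoProvider(LLMProvider):
    # Stand-in for a real adapter (OpenAI, Anthropic, Gemini, ...)
    def complete(self, prompt: str) -> str:
        return f"echo: {prompt}"

def get_provider(name: str) -> LLMProvider:
    # A real registry would map provider names to their adapter classes
    registry = {"echo": EchoProvider}
    return registry[name]()
```

Callers depend only on `LLMProvider`, so switching providers is a configuration change rather than a code change.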
RAG Pipeline
Handles document ingestion, embedding generation, and semantic search for Retrieval-Augmented Generation.
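The retrieval step of such a pipeline usually reduces to nearest-neighbor search over embeddings. A minimal cosine-similarity sketch (assuming embeddings are already computed; vector values here are toy data):

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    # Cosine similarity: dot product over the product of vector norms
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def top_k(query_vec: list[float], doc_vecs: dict[str, list[float]], k: int = 1) -> list[str]:
    # Rank documents by similarity to the query embedding, return the k best IDs
    ranked = sorted(doc_vecs.items(), key=lambda kv: cosine(query_vec, kv[1]), reverse=True)
    return [doc_id for doc_id, _ in ranked[:k]]
```

In practice a vector database performs this search at scale, but the ranking principle is the same.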
Workspace Management
Organizes chats, documents, and settings into logical, isolated workspaces.
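The isolation property can be illustrated with a hypothetical in-memory model, where each workspace owns its own chats, documents, and settings; the field names are assumptions, not the server's storage schema:

```python
from dataclasses import dataclass, field

@dataclass
class Workspace:
    name: str
    chats: list = field(default_factory=list)
    documents: list = field(default_factory=list)
    settings: dict = field(default_factory=dict)

_workspaces: dict[str, Workspace] = {}

def get_workspace(name: str) -> Workspace:
    # Isolation: state added to one workspace never leaks into another
    return _workspaces.setdefault(name, Workspace(name))
```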
Real-time Events
Supports WebSocket and Server-Sent Events (SSE) for streaming AI responses and real-time updates.
Protocol Server
Acts as a bridge for the Model Context Protocol (MCP), enabling external tool integration.
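MCP messages are JSON-RPC 2.0, so the bridge role largely means framing and routing such messages. A minimal request builder, under that assumption (the `tools/list` method is from the MCP spec; the id scheme is illustrative):

```python
import itertools, json

_ids = itertools.count(1)

def mcp_request(method: str, params: dict) -> str:
    # Frame a JSON-RPC 2.0 request with a fresh id, as MCP transports expect
    return json.dumps({"jsonrpc": "2.0", "id": next(_ids), "method": method, "params": params})
```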