The Backend Server is the central nervous system of the DataDot application. Built on FastAPI, it orchestrates the interaction between users, data, and AI models.

Core Responsibilities

The server handles a wide range of critical functions:

Authentication

Manages user sessions, JWT-based authentication, and supports multi-user environments.
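
As a rough illustration of how JWT-based authentication is typically wired into a FastAPI server, the sketch below shows a dependency that validates a bearer token with PyJWT. The secret key, algorithm, token URL, and claim names are placeholders, not DataDot's actual configuration.

```python
# Hypothetical sketch: a FastAPI dependency that validates a JWT access token.
# SECRET_KEY, ALGORITHM, and the token URL are placeholders, not DataDot's real values.
import jwt  # PyJWT
from fastapi import Depends, HTTPException, status
from fastapi.security import OAuth2PasswordBearer

SECRET_KEY = "change-me"   # loaded from configuration in a real deployment
ALGORITHM = "HS256"

oauth2_scheme = OAuth2PasswordBearer(tokenUrl="/api/v1/auth/login")

async def get_current_user(token: str = Depends(oauth2_scheme)) -> str:
    """Decode the bearer token and return the user identifier it carries."""
    try:
        payload = jwt.decode(token, SECRET_KEY, algorithms=[ALGORITHM])
    except jwt.PyJWTError:
        raise HTTPException(
            status_code=status.HTTP_401_UNAUTHORIZED,
            detail="Invalid or expired token",
        )
    return payload["sub"]
```

Any protected route can then declare `Depends(get_current_user)` to receive the authenticated user, which is what makes multi-user isolation enforceable at the endpoint level.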

LLM Orchestration

Provides a unified interface for interacting with various LLM providers (OpenAI, Anthropic, Gemini, etc.).
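
A unified interface usually means one adapter per provider behind a common base class. The sketch below is illustrative only; the class and method names are assumptions, not DataDot's actual API, and the echo adapter stands in for real OpenAI/Anthropic/Gemini adapters.

```python
# Illustrative provider abstraction; names are assumptions, not DataDot's API.
from abc import ABC, abstractmethod
from typing import AsyncIterator


class LLMProvider(ABC):
    """Common interface each provider adapter (OpenAI, Anthropic, Gemini, ...) implements."""

    @abstractmethod
    def stream_chat(self, messages: list[dict]) -> AsyncIterator[str]:
        """Return an async iterator of response tokens for a chat history."""


class EchoProvider(LLMProvider):
    """Toy adapter that echoes the last user message, token by token."""

    async def stream_chat(self, messages: list[dict]) -> AsyncIterator[str]:
        for word in messages[-1]["content"].split():
            yield word + " "


PROVIDERS: dict[str, type[LLMProvider]] = {"echo": EchoProvider}


def get_provider(name: str) -> LLMProvider:
    """Resolve an adapter by name so calling code never depends on a vendor SDK."""
    return PROVIDERS[name]()
```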

RAG Pipeline

Handles document ingestion, embedding generation, and semantic search for Retrieval-Augmented Generation.
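
The core of that pipeline is chunk, embed, and rank. The following is a minimal sketch of the idea, with a random-vector `embed()` stub standing in for whichever embedding model the server is actually configured to use.

```python
# Minimal RAG-flow sketch: chunk a document, embed the chunks, rank by cosine similarity.
# embed() is a stub; a real deployment calls the configured embedding model.
import numpy as np


def chunk(text: str, size: int = 500) -> list[str]:
    """Split a document into fixed-size character chunks."""
    return [text[i:i + size] for i in range(0, len(text), size)]


def embed(texts: list[str]) -> np.ndarray:
    """Placeholder: return one embedding vector per text (random here)."""
    rng = np.random.default_rng(0)
    return rng.normal(size=(len(texts), 384))


def top_k(query: str, chunks: list[str], k: int = 3) -> list[str]:
    """Return the k chunks most similar to the query."""
    vectors = embed(chunks)
    q = embed([query])[0]
    scores = vectors @ q / (np.linalg.norm(vectors, axis=1) * np.linalg.norm(q))
    return [chunks[i] for i in np.argsort(scores)[::-1][:k]]
```

The retrieved chunks are then prepended to the LLM prompt, which is what "Retrieval-Augmented Generation" refers to.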

Workspace Management

Organizes chats, documents, and settings into logical, isolated workspaces.
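
Workspace isolation generally comes down to scoping every record by a workspace foreign key. The SQLAlchemy sketch below shows the pattern; the table and column names are assumptions for illustration, not DataDot's actual schema.

```python
# Illustrative workspace-scoped models; table/column names are assumptions.
from sqlalchemy import ForeignKey, String
from sqlalchemy.orm import DeclarativeBase, Mapped, mapped_column


class Base(DeclarativeBase):
    pass


class Workspace(Base):
    __tablename__ = "workspaces"
    id: Mapped[int] = mapped_column(primary_key=True)
    name: Mapped[str] = mapped_column(String(100))


class Chat(Base):
    __tablename__ = "chats"
    id: Mapped[int] = mapped_column(primary_key=True)
    workspace_id: Mapped[int] = mapped_column(ForeignKey("workspaces.id"))
    title: Mapped[str] = mapped_column(String(200))
```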

Real-time Events

Supports WebSocket and Server-Sent Events (SSE) for streaming AI responses and real-time updates.
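
For the SSE side, a streaming endpoint in FastAPI typically looks like the sketch below. The route path and the hard-coded tokens are illustrative; in the real server the generator would yield LLM deltas from the orchestration layer.

```python
# Hedged sketch of an SSE streaming endpoint in FastAPI; path and payload are illustrative.
import asyncio

from fastapi import FastAPI
from fastapi.responses import StreamingResponse

app = FastAPI()


@app.get("/api/v1/chat/stream")
async def stream_chat():
    async def event_source():
        for token in ["Hello", " ", "world"]:   # stand-in for real LLM deltas
            yield f"data: {token}\n\n"          # SSE frame format
            await asyncio.sleep(0.05)
        yield "data: [DONE]\n\n"

    return StreamingResponse(event_source(), media_type="text/event-stream")
```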

Protocol Server

Acts as a bridge for the Model Context Protocol (MCP), enabling external tool integration.

Project Structure

The project follows a clean, modular architecture designed for scalability and maintainability.
server/
├── app/
│   ├── api/              # API route definitions (v1 endpoints)
│   ├── core/             # Configuration, database setup, and security
│   ├── crud/             # Database CRUD abstractions
│   ├── middleware/       # Custom middleware (CORS, Auth, Logging)
│   ├── models/           # SQLAlchemy ORM models
│   ├── schemas/          # Pydantic data schemas
│   ├── services/         # Core business logic (Chat, LLM, Embeddings)
│   ├── tasks/            # Background tasks and scheduler
│   └── utils/            # Helper utilities
├── main.py               # Application entry point
├── requirements.txt      # Python dependencies
└── Dockerfile            # Container definition
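
To see how these pieces come together, the sketch below shows the kind of wiring a `main.py` entry point in this layout usually contains. The commented router import and the host/port values are assumptions, not the exact modules or settings DataDot uses.

```python
# Hedged sketch of a main.py entry point for a layout like the one above;
# the router import path and server settings are illustrative assumptions.
import uvicorn
from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware

# from app.api.v1 import router as api_v1_router   # hypothetical import path

app = FastAPI(title="DataDot Backend")

app.add_middleware(
    CORSMiddleware,
    allow_origins=["*"],        # tightened to known origins in production
    allow_methods=["*"],
    allow_headers=["*"],
)

# app.include_router(api_v1_router, prefix="/api/v1")

if __name__ == "__main__":
    uvicorn.run(app, host="0.0.0.0", port=8000)
```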

Getting Started

To dive deeper into the server, check out the following resources: