Installation¶
There are several ways to install and run Remembra.
Docker (Recommended)¶
The easiest way to get started: the server and its dependencies (including Qdrant) are bundled in a single image.
```shell
docker run -d \
  --name remembra \
  -p 8787:8787 \
  -e OPENAI_API_KEY=sk-your-key \
  -v remembra-data:/app/data \
  remembra/remembra
```
See Docker Guide for production configuration.
Python Package¶
SDK Only (Client)¶
If you just need the client SDK to connect to an existing Remembra server:
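The command below assumes the package is published on PyPI under the name `remembra`; adjust if your distribution uses a different name:

```shell
pip install remembra
```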
Full Server¶
To run your own Remembra server:
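The `server` extra below matches the extras used by the from-source install (`.[server,rerank,dev]`); the PyPI package name is assumed:

```shell
pip install "remembra[server]"
```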
Then start it:
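The server entry point is the same one used by the from-source install:

```shell
python -m remembra.server
```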
With Reranking (Optional)¶
For better recall quality with CrossEncoder reranking:
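The `rerank` extra name matches the from-source install; the PyPI package name is assumed:

```shell
pip install "remembra[rerank]"
```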
From Source¶
For development or customization:
```shell
# Clone the repo
git clone https://github.com/remembra/remembra
cd remembra

# Install with uv (recommended)
uv sync --all-extras

# Or with pip
pip install -e ".[server,rerank,dev]"

# Run tests
pytest

# Start the server
python -m remembra.server
```
Dependencies¶
Required¶
- Python 3.10+
- Qdrant - Vector database (bundled in Docker, or run separately)
- OpenAI API key - For embeddings and extraction
Optional¶
- Ollama - Local embeddings (no API costs)
- Cohere - Alternative embeddings
- Redis - For rate limiting at scale
Embedding Providers¶
Remembra supports multiple embedding providers:
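Provider selection is typically driven by environment variables. The variable names below are assumptions for illustration only; see the Configuration Reference for the exact keys:

```shell
# Assumed variable names, shown for illustration only.
# Providers correspond to the dependencies above: OpenAI (default),
# Ollama (local, no API costs), or Cohere.
export EMBEDDING_PROVIDER=ollama
export OLLAMA_BASE_URL=http://localhost:11434
```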
Verifying Installation¶
Check Server Health¶
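A quick check with curl against the port exposed by the Docker command above; the `/health` path is an assumption, so substitute your server's actual health endpoint if it differs:

```shell
curl http://localhost:8787/health
```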
Expected response:
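The exact payload may vary by version, but a healthy server should return JSON along these lines (field names are illustrative):

```json
{"status": "ok"}
```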
Test the SDK¶
```python
from remembra import Memory

memory = Memory(
    base_url="http://localhost:8787",
    user_id="test",
)

# Store and recall
memory.store("Test memory")
result = memory.recall("test")
print(result)  # Should return the test memory
```
Troubleshooting¶
"Connection refused" error¶
Make sure the server is running:
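For a Docker install, check that the container is up and the port answers; the commands below assume the container name and port from the Docker section above:

```shell
docker ps --filter name=remembra
curl http://localhost:8787
```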
"API key not set" error¶
Set your OpenAI API key:
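For a local install, export the key in your shell (Docker users pass it with `-e` as shown in the Docker section):

```shell
export OPENAI_API_KEY=sk-your-key
```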
Qdrant connection issues¶
If running Qdrant separately, ensure it's accessible:
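Qdrant's REST API listens on port 6333 by default; a plain request to the root should return version info (adjust host and port if you changed them):

```shell
curl http://localhost:6333
```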
Next Steps¶
- Docker Guide - Production deployment
- Configuration Reference - All environment variables
- Python SDK - Full SDK documentation