# Quickstart
Get Remembra running in 5 minutes.
## Prerequisites
- Docker (recommended) or Python 3.10+
- OpenAI API key (for embeddings/extraction)
## Step 1: Start Remembra
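The original command for this step is not shown here. A typical Docker invocation would look like the sketch below — the image name `remembra/remembra` is an assumption, and port 8787 is taken from the dashboard URL used later in this guide; adjust both to match your actual distribution.

```shell
# Assumed image name; port 8787 matches the dashboard URL below.
docker run -d \
  --name remembra \
  -p 8787:8787 \
  -e OPENAI_API_KEY=$OPENAI_API_KEY \
  remembra/remembra
```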
## Step 2: Verify It's Running
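One way to check from the command line — assuming Remembra exposes a health endpoint (the `/health` path is an assumption, not a documented route):

```shell
# Hypothetical health-check path; adjust to your deployment.
curl http://localhost:8787/health
```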
Or open the dashboard: http://localhost:8787
## Step 3: Install the SDK
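Assuming the SDK is published on PyPI under the package name `remembra` (matching the import used in the examples below):

```shell
pip install remembra
```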
## Step 4: Store Your First Memory
```python
from remembra import Memory

# Connect to your Remembra instance
memory = Memory(
    base_url="http://localhost:8787",
    user_id="quickstart-user"
)

# Store a memory
memory.store("""
Had a great meeting with Sarah from Acme Corp today.
She mentioned they're looking for AI solutions for their
customer support team. Budget is around $50k/year.
Follow up next Tuesday.
""")

print("Memory stored!")
```
## Step 5: Recall Memories
```python
# Ask questions about your memories
context = memory.recall("What do I know about Acme Corp?")
print(context)
# Output: "Sarah from Acme Corp is looking for AI solutions
# for customer support. Budget: $50k/year.
# Follow up scheduled for Tuesday."
```
## What Just Happened?
- **Smart Extraction**: Your messy text was transformed into clean facts
- **Entity Resolution**: "Sarah" was identified as a PERSON, "Acme Corp" as an ORG
- **Relationship Mapping**: Sarah → WORKS_AT → Acme Corp
- **Vector Storage**: Facts embedded and stored for semantic search
- **Recall**: Your query found the relevant memories
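Conceptually, the pipeline above turns the raw note into structured records along these lines. This is an illustrative sketch only — the field names are invented for this example and are not the actual Remembra schema:

```python
# Hypothetical shape of what extraction produces from the Step 4 note.
extracted = {
    # Clean, atomic facts distilled from the messy text
    "facts": [
        "Sarah from Acme Corp is looking for AI solutions for customer support.",
        "Acme Corp's budget is around $50k/year.",
        "Follow up with Sarah next Tuesday.",
    ],
    # Resolved entities with their types
    "entities": [
        {"name": "Sarah", "type": "PERSON"},
        {"name": "Acme Corp", "type": "ORG"},
    ],
    # Mapped relationships between entities
    "relationships": [
        ("Sarah", "WORKS_AT", "Acme Corp"),
    ],
}

# Each fact is then embedded and stored, so recall() can run a
# semantic search over them rather than a keyword match.
```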
## Next Steps
- Installation Guide - All installation options
- Docker Deployment - Production Docker setup
- Python SDK Guide - Full SDK reference
- Entity Resolution - How entity matching works
## Example: Building a Chatbot
```python
from remembra import Memory
import openai

memory = Memory(base_url="http://localhost:8787", user_id="user_123")

def chat(user_message: str) -> str:
    # Recall relevant context
    context = memory.recall(user_message, limit=5)

    # Build prompt with memory
    messages = [
        {"role": "system", "content": f"You are a helpful assistant. Context: {context}"},
        {"role": "user", "content": user_message},
    ]

    # Get response
    response = openai.chat.completions.create(
        model="gpt-4o",
        messages=messages
    )
    assistant_message = response.choices[0].message.content

    # Store the conversation
    memory.store(f"User: {user_message}\nAssistant: {assistant_message}")

    return assistant_message

# Chat with memory!
print(chat("My name is Alex and I love hiking"))
print(chat("What do you know about me?"))  # Remembers Alex loves hiking!
```
> **Pro Tip**
>
> Store important facts explicitly, not just conversation history. The extraction model works best with clear statements.