🔓 Vendor-Agnostic Stack: Total Control + Portability

🎯 Goal

An architecture where you control everything, with no vendor lock-in, that can run on:
- Your laptop (local)
- A VPS ($5-10/mo)
- Any cloud (AWS, GCP, Azure, DigitalOcean)
- Your own datacenter

The mantra: Docker Compose anywhere = the same stack


📊 Comparison: Vendor vs. Agnostic

Vendor-Locked (Current - Recommended)

Supabase ($25) + Ollama Cloud ($15) + Cloud Run ($5)
├─ Total: $45/mo
├─ Setup: 2h
├─ Ops: 0h/week
├─ Vendor lock: ⚠️ STRONG
│  ├─ Google: Cloud Run
│  ├─ Supabase: PostgreSQL (migratable)
│  └─ Ollama Cloud: proprietary
└─ Porting elsewhere: 2-3 weeks

Vendor-Agnostic (100% Open Source)

PostgreSQL ($0) + Ollama ($0) + Stack ($0)
├─ Total: $0-15/mo (VPS)
├─ Setup: 6-8h
├─ Ops: ~3h/week
├─ Vendor lock: ✅ ZERO
│  ├─ PostgreSQL: standard SQL
│  ├─ Ollama: open source
│  ├─ Docker: standard containers
│  └─ Everything is portable
└─ Porting elsewhere: 2-3 days (Docker Compose)

🏗️ Recommended Vendor-Agnostic Stack

┌────────────────────────────────────────────────────┐
│ APPLICATION                                        │
│ ├─ slack_bot.py                                    │
│ ├─ agent.py (Google ADK) → Alternative: LangChain │
│ └─ Custom tools                                    │
└────────────────────┬───────────────────────────────┘
                     │
        ┌────────────┼────────────┐
        │            │            │
        ▼            ▼            ▼
    ┌────────┐  ┌──────────┐  ┌──────────┐
    │Database│  │Embeddings│  │Vector DB │
    │        │  │          │  │          │
    │PGSQL   │  │Ollama    │  │pgvector  │
    │+       │  │(local)   │  │in PGSQL  │
    │pgvector│  │          │  │          │
    └────────┘  └──────────┘  └──────────┘

    ┌──────────────────────────────┐
    │ Docker Compose               │
    │ (runs anywhere)              │
    └──────────────────────────────┘

Tools at Each Layer

1️⃣ Database

PostgreSQL 16 + pgvector
├─ Open source: ✅ (PostgreSQL License)
├─ Self-hosted: ✅ Trivial
├─ Cloud alternatives: AWS RDS, Azure Database, DigitalOcean
├─ Portability: 100% (standard SQL)
├─ Performance: ⭐⭐⭐⭐⭐
└─ Cost: $0 (self-hosted) or $12-20/mo (managed)

2️⃣ Embeddings

Ollama (Local)
├─ Open source: ✅ (MIT)
├─ Self-hosted: ✅ (trivial with Docker)
├─ Models: 100+ HuggingFace-compatible
├─ CPU/GPU: both supported
├─ Latency: <5ms (local)
├─ Portability: 100% (Docker container)
└─ Cost: $0 + hardware
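
To make the Ollama integration concrete, here is a minimal sketch of the request shape for Ollama's embeddings endpoint (the older /api/embeddings route takes "model" and "prompt"; the helper name and URL default are illustrative, not part of the stack above):

```python
import json

# Illustrative helper: build the payload for Ollama's /api/embeddings route.
def embeddings_request(model, text, base_url="http://localhost:11434"):
    return {
        "url": f"{base_url}/api/embeddings",
        "body": json.dumps({"model": model, "prompt": text}),
    }

req = embeddings_request("nomic-embed-text", "hello")
print(req["url"])  # http://localhost:11434/api/embeddings
```

The response carries the vector under the "embedding" key, which the memory service below consumes.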

3️⃣ Memory Service

Custom service (your own Python code)
├─ Dependencies: async Python + PostgreSQL + Ollama
├─ Vendor dependency: ❌ ZERO
├─ Portability: 100%
└─ Stack: FastAPI/aiohttp → PostgreSQL + Ollama

4️⃣ Orchestration

Docker Compose (single machine)
│
├─ For scale: Kubernetes (k3s, EKS, AKS, GKE)
├─ Fully compatible: YES (same containers)
└─ Vendor lock: ZERO (Kubernetes is a standard)

🐳 Docker Compose Stack

File: docker-compose.yml (Vendor-Agnostic)

version: '3.8'

services:
  # Database + Vector Search
  postgres:
    image: pgvector/pgvector:pg16
    container_name: ifriend-postgres
    environment:
      POSTGRES_DB: ifriend
      POSTGRES_USER: ifriend
      POSTGRES_PASSWORD: ${DB_PASSWORD:-changeme}
      # Note: pgvector needs no shared_preload_libraries; CREATE EXTENSION is enough
    volumes:
      - postgres_data:/var/lib/postgresql/data
      - ./init_db.sql:/docker-entrypoint-initdb.d/init.sql
    ports:
      - "5432:5432"
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U ifriend"]
      interval: 10s
      timeout: 5s
      retries: 5
    networks:
      - ifriend_network
    restart: unless-stopped

  # Embeddings Service
  ollama:
    image: ollama/ollama:latest
    container_name: ifriend-ollama
    volumes:
      - ollama_data:/root/.ollama
    ports:
      - "11434:11434"
    environment:
      - OLLAMA_HOST=0.0.0.0:11434
      - OLLAMA_NUM_GPU=0  # 0 for CPU, 1 for GPU
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:11434/api/tags"]
      interval: 10s
      timeout: 5s
      retries: 3
    networks:
      - ifriend_network
    restart: unless-stopped

  # API Backend
  api:
    build:
      context: .
      dockerfile: Dockerfile
    container_name: ifriend-api
    environment:
      DATABASE_URL: postgresql://ifriend:${DB_PASSWORD:-changeme}@postgres:5432/ifriend
      OLLAMA_URL: http://ollama:11434
      SLACK_BOT_TOKEN: ${SLACK_BOT_TOKEN}
      SLACK_APP_TOKEN: ${SLACK_APP_TOKEN}
    depends_on:
      postgres:
        condition: service_healthy
      ollama:
        condition: service_healthy
    ports:
      - "8000:8000"
    volumes:
      - ./:/app
    networks:
      - ifriend_network
    restart: unless-stopped

  # (Optional) Adminer to manage the DB via a UI
  adminer:
    image: adminer:latest
    container_name: ifriend-adminer
    ports:
      - "8080:8080"
    depends_on:
      - postgres
    networks:
      - ifriend_network
    restart: unless-stopped

volumes:
  postgres_data:
  ollama_data:

networks:
  ifriend_network:
    driver: bridge

File: Dockerfile (Vendor-Agnostic)

FROM python:3.11-slim

WORKDIR /app

# Install system dependencies
RUN apt-get update && apt-get install -y \
    build-essential \
    curl \
    && rm -rf /var/lib/apt/lists/*

# Copy requirements
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy application
COPY . .

# Run Slack bot + Memory service
CMD ["python", "-m", "slack_bot"]

File: .env

# Database
DB_PASSWORD=your_secure_password_here
DATABASE_URL=postgresql://ifriend:your_secure_password_here@localhost:5432/ifriend

# Slack
SLACK_BOT_TOKEN=xoxb-your-token
SLACK_APP_TOKEN=xapp-your-token

# Ollama
OLLAMA_URL=http://localhost:11434

# Logging
LOG_LEVEL=INFO
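
These variables mirror the defaults baked into docker-compose.yml, which is what makes the stack portable: the same code reads its configuration from the environment everywhere. A minimal sketch of that pattern (the `load_settings` helper is illustrative, not part of the project):

```python
import os

# Read settings from the environment, falling back to the same defaults
# used in docker-compose.yml, so the service runs unchanged on a laptop,
# a VPS, or any cloud.
def load_settings(env=None):
    env = os.environ if env is None else env
    return {
        "database_url": env.get(
            "DATABASE_URL",
            "postgresql://ifriend:changeme@localhost:5432/ifriend",
        ),
        "ollama_url": env.get("OLLAMA_URL", "http://localhost:11434"),
        "log_level": env.get("LOG_LEVEL", "INFO"),
    }

settings = load_settings({"OLLAMA_URL": "http://ollama:11434"})
print(settings["ollama_url"])  # value from the environment wins
print(settings["log_level"])   # falls back to the default: INFO
```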

📦 Deployment: From Your Laptop to Anywhere

Option 1: Your Laptop (Development)

# 1. Install Docker Desktop
# 2. Clone the repo
git clone your-repo
cd your-repo

# 3. Set up the env file
cp .env.example .env
# Edit .env with your credentials

# 4. Start the stack
docker-compose up -d

# 5. Verify
docker-compose ps
curl http://localhost:11434/api/tags  # Ollama
psql postgresql://ifriend@localhost/ifriend  # Postgres

# 6. Pull the Ollama embedding model
docker exec ifriend-ollama ollama pull nomic-embed-text

# 7. Test the API
curl -X POST http://localhost:8000/api/test

# 8. Watch logs
docker-compose logs -f api

Option 2: VPS on DigitalOcean / Linode / Vultr ($5-10/mo)

# 1. Create an Ubuntu 22.04 VM (2GB RAM, 50GB SSD)
# Cost: $5-6/mo

# 2. SSH into the VM
ssh root@your-vps-ip

# 3. Install Docker
curl -fsSL https://get.docker.com -o get-docker.sh
sh get-docker.sh
usermod -aG docker $USER

# 4. Clone and start
git clone your-repo
cd your-repo
cp .env.example .env
# Edit .env

docker-compose up -d

# 5. Set up CI/CD (optional)
# Use GitHub Actions to auto-deploy on every push

Option 3: Kubernetes (k3s) for Scale

# 1. Convert docker-compose to Kubernetes manifests
kompose convert -f docker-compose.yml

# 2. Deploy to k3s (lightweight Kubernetes)
curl -sfL https://get.k3s.io | sh -
kubectl apply -f postgres-deployment.yaml
kubectl apply -f ollama-deployment.yaml
kubectl apply -f api-deployment.yaml

# 3. Result: the same stack, now in a cluster
kubectl get pods

Option 4: Any Cloud (AWS, GCP, Azure)

# EC2 (AWS) / VM (GCP) / VM (Azure)
# Exactly like the VPS - docker-compose up

# OR
# ECS (AWS) / Cloud Run (GCP) / Container Instances (Azure)
# Push the Docker image → the cloud vendor runs the containers

💾 Initialize the Database Schema

File: init_db.sql

-- Extensions
CREATE EXTENSION IF NOT EXISTS vector;
CREATE EXTENSION IF NOT EXISTS pg_trgm;

-- Create schema
CREATE SCHEMA IF NOT EXISTS memories;

-- Memories table
CREATE TABLE memories.memories (
  id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
  app_name TEXT NOT NULL,
  user_id TEXT NOT NULL,
  session_id TEXT,
  sector TEXT NOT NULL DEFAULT 'general',
  title TEXT NOT NULL,
  content TEXT NOT NULL,
  embedding vector(768),  -- nomic-embed-text produces 768-dimensional vectors
  created_at TIMESTAMPTZ DEFAULT now(),
  updated_at TIMESTAMPTZ DEFAULT now(),
  metadata JSONB DEFAULT '{}'::jsonb
);

-- Indexes
CREATE INDEX ON memories.memories USING ivfflat (embedding vector_cosine_ops)
  WITH (lists = 100);
CREATE INDEX ON memories.memories(app_name, user_id);
CREATE INDEX ON memories.memories USING GIN(metadata);

-- Search function (parameters are prefixed to avoid ambiguity with
-- column names inside plpgsql)
CREATE OR REPLACE FUNCTION memories.match_memories(
  query_embedding vector,
  p_user_id text,
  p_app_name text,
  limit_count int DEFAULT 5
)
RETURNS TABLE (
  id uuid,
  title text,
  content text,
  similarity float8
) AS $$
BEGIN
  RETURN QUERY
  SELECT
    m.id,
    m.title,
    m.content,
    1 - (m.embedding <=> query_embedding) AS similarity
  FROM memories.memories m
  WHERE m.user_id = p_user_id AND m.app_name = p_app_name
  ORDER BY m.embedding <=> query_embedding
  LIMIT limit_count;
END;
$$ LANGUAGE plpgsql;

-- Sessions table (from Firestore migration)
CREATE TABLE memories.sessions (
  id TEXT PRIMARY KEY,
  app_name TEXT NOT NULL,
  user_id TEXT NOT NULL,
  data JSONB,
  created_at TIMESTAMPTZ DEFAULT now(),
  updated_at TIMESTAMPTZ DEFAULT now()
);

CREATE INDEX ON memories.sessions(app_name, user_id);
CREATE INDEX ON memories.sessions(created_at DESC);

-- Grants
GRANT USAGE ON SCHEMA memories TO ifriend;
GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA memories TO ifriend;
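
One practical detail when talking to this schema from Python: asyncpg ships no codec for the pgvector type, so a common workaround is to send the embedding as its text literal and cast it with ::vector in SQL. A hypothetical helper (the function name is illustrative):

```python
# Format a list of floats as a pgvector text literal, e.g. '[0.1,0.2]'.
# The resulting string can be bound as a parameter and cast with ::vector.
def to_vector_literal(embedding):
    return "[" + ",".join(repr(float(x)) for x in embedding) + "]"

print(to_vector_literal([0.25, -1.0, 3.5]))  # [0.25,-1.0,3.5]
```

An alternative is registering a proper codec on the connection (the pgvector Python package offers one for asyncpg), which avoids string formatting entirely.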

🐍 Vendor-Agnostic CustomMemoryService

File: memory_service_agnostic.py

import asyncpg
import aiohttp
import json
from typing import List, Dict, Optional
from datetime import datetime

class VendorAgnosticMemoryService:
    """
    100% vendor-agnostic memory service
    - Database: PostgreSQL (pode rodar anywhere)
    - Embeddings: Ollama (pode rodar anywhere)
    - Sem dependências em cloud vendors
    """

    def __init__(
        self,
        db_url: str = "postgresql://ifriend:changeme@localhost:5432/ifriend",
        ollama_url: str = "http://localhost:11434"
    ):
        self.db_url = db_url
        self.ollama_url = ollama_url
        self.pool = None

    async def connect(self):
        """Connect to PostgreSQL"""
        self.pool = await asyncpg.create_pool(self.db_url)

    async def disconnect(self):
        """Disconnect from PostgreSQL"""
        if self.pool:
            await self.pool.close()

    async def embed_text(self, text: str, model: str = "nomic-embed-text") -> List[float]:
        """Generate an embedding using local Ollama"""
        try:
            async with aiohttp.ClientSession() as session:
                async with session.post(
                    f"{self.ollama_url}/api/embeddings",
                    # /api/embeddings expects "prompt"; the newer /api/embed takes "input"
                    json={"model": model, "prompt": text},
                    timeout=aiohttp.ClientTimeout(total=30)
                ) as resp:
                    if resp.status != 200:
                        raise Exception(f"Ollama error: {resp.status}")
                    data = await resp.json()
                    return data["embedding"]
        except Exception as e:
            print(f"Error generating embedding: {e}")
            raise

    async def add_memory(
        self,
        app_name: str,
        user_id: str,
        title: str,
        content: str,
        sector: str = "general",
        session_id: Optional[str] = None,
        metadata: Optional[Dict] = None
    ) -> str:
        """Add memory to PostgreSQL with embedding"""
        async with self.pool.acquire() as conn:
            # Generate embedding
            embedding = await self.embed_text(content)

            # Insert to database
            query = """
            INSERT INTO memories.memories 
            (app_name, user_id, session_id, sector, title, content, embedding, metadata)
            VALUES ($1, $2, $3, $4, $5, $6, $7, $8)
            RETURNING id
            """

            memory_id = await conn.fetchval(
                query,
                app_name, user_id, session_id, sector, title, content,
                embedding, metadata or {}
            )

            return str(memory_id)

    async def search_memory(
        self,
        app_name: str,
        user_id: str,
        query: str,
        limit: int = 5,
        sector: Optional[str] = None
    ) -> List[Dict]:
        """Search memories by semantic similarity"""
        async with self.pool.acquire() as conn:
            # Generate query embedding
            query_embedding = await self.embed_text(query)

            # Search using PostgreSQL
            search_query = """
            SELECT 
              id,
              title,
              content,
              sector,
              1 - (embedding <=> $1::vector) as similarity
            FROM memories.memories
            WHERE user_id = $2 AND app_name = $3
            """ + (f"AND sector = $5" if sector else "") + """
            ORDER BY embedding <=> $1::vector
            LIMIT $4
            """

            params = [query_embedding, user_id, app_name, limit]
            if sector:
                params.append(sector)

            rows = await conn.fetch(search_query, *params)

            return [
                {
                    "id": str(row["id"]),
                    "title": row["title"],
                    "content": row["content"],
                    "sector": row["sector"],
                    "similarity": float(row["similarity"])
                }
                for row in rows
            ]

    async def delete_memory(self, memory_id: str, user_id: str) -> bool:
        """Delete memory"""
        async with self.pool.acquire() as conn:
            result = await conn.execute(
                "DELETE FROM memories.memories WHERE id = $1 AND user_id = $2",
                memory_id, user_id
            )
            return result == "DELETE 1"

# Usage
async def main():
    service = VendorAgnosticMemoryService()
    await service.connect()

    # Add memory
    memory_id = await service.add_memory(
        app_name="busca-produtos",
        user_id="user123",
        title="Customer favorite color",
        content="Customer prefers blue products"
    )
    print(f"Added memory: {memory_id}")

    # Search memory
    results = await service.search_memory(
        app_name="busca-produtos",
        user_id="user123",
        query="What color does customer like?"
    )
    print(f"Found memories: {results}")

    await service.disconnect()

if __name__ == "__main__":
    import asyncio
    asyncio.run(main())
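
For intuition about the similarity score: pgvector's <=> operator returns cosine distance, and the queries above report 1 - distance as the similarity. The same quantity in pure Python (a sketch for illustration; the service itself computes this in SQL):

```python
import math

# Cosine similarity, i.e. 1 minus pgvector's <=> cosine distance.
def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

print(cosine_similarity([1.0, 0.0], [1.0, 0.0]))  # 1.0 (same direction)
print(cosine_similarity([1.0, 0.0], [0.0, 1.0]))  # 0.0 (orthogonal)
```

A similarity near 1.0 means the stored memory points in almost the same direction as the query embedding, which is why the ORDER BY on distance returns the best matches first.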

🚀 Deployment in 3 Different Environments

Same code, different places:

# Laptop (dev)
docker-compose up -d
curl http://localhost:8000

# VPS ($5/mo)
docker-compose -f docker-compose.prod.yml up -d
curl http://your-vps-ip:8000

# Kubernetes (scale)
kubectl apply -f k8s/
kubectl port-forward svc/api 8000:8000
curl http://localhost:8000

💰 Cost Analysis: Vendor-Agnostic

Option 1: Your Laptop (Dev)

Cost: $0
Hardware: your computer
Database: local PostgreSQL
Embeddings: local Ollama
Ideal for: development

Option 2: Single-Machine VPS ($5-10/mo)

VPS: $5-10 (DigitalOcean, Linode)
├─ 2GB RAM
├─ 50GB SSD
├─ CPU: 1-2 cores
└─ Bandwidth: unlimited

Services:
├─ PostgreSQL: ✅ (runs fine)
├─ Ollama: ✅ (CPU-only is OK)
├─ API: ✅
└─ Total: $5-10/mo

ROI: $35-60/mo in savings vs. Supabase + Ollama Cloud

Option 3: VPS + Scaling ($10-30/mo)

Database VM: $5-10/mo
├─ 4GB RAM
├─ 100GB SSD
└─ PostgreSQL only

Ollama VM: $5-10/mo
├─ 2GB RAM
├─ GPU: optional
└─ Ollama only

API VM: $5-10/mo
├─ 2GB RAM
├─ 50GB SSD
└─ Slack bot

Total: $15-30/mo
Performance: better (dedicated services)

Option 4: Kubernetes (GKE / EKS / AKS)

Kubernetes cluster: $20-100/mo
├─ Same docker-compose.yml (converted)
├─ Automatic scaling
├─ Multi-region capability
└─ Container registry ($0, free tier)

ROI: $0 (Docker is free)
Cost: VM infrastructure only

📊 Overall Comparison: Vendor vs. Agnostic

Aspect            Vendor-Locked         Vendor-Agnostic
Cost              $45/mo                $0-10/mo
Lock-in           ⚠️ Strong             ✅ Zero
Setup             2h                    8h
Ops               0h/week               3h/week
Portability       3 weeks               2 days
To laptop
To VPS
To cloud          Yes                   ✅ Any
To Kubernetes     ⚠️ Needs changes      ✅ Direct
Performance       ⭐⭐⭐⭐              ⭐⭐⭐⭐
Ease of use       ⭐⭐⭐⭐⭐            ⭐⭐⭐

✅ Checklist: Implementing the Vendor-Agnostic Stack

Week 1: Local Setup

  • [ ] Docker Compose + PostgreSQL
  • [ ] Local Ollama setup
  • [ ] Connection tests
  • [ ] CustomMemoryService implemented
  • [ ] Unit tests passing

Week 2: VPS Deployment

  • [ ] Create a VPS (DigitalOcean / Linode)
  • [ ] Copy docker-compose.yml
  • [ ] Set up DNS + a reverse proxy
  • [ ] Initial deploy
  • [ ] E2E tests

Week 3: Optimization

  • [ ] Automated backups
  • [ ] Monitoring + alerting
  • [ ] Performance tuning
  • [ ] Security hardening
  • [ ] Documentation
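
For the automated-backups item, one common approach is to run pg_dump inside the compose Postgres container on a schedule. A hypothetical sketch that only builds the command (the function name is illustrative; scheduling via cron and off-site upload are out of scope):

```python
# Build a docker-exec pg_dump command targeting the compose Postgres
# container defined in docker-compose.yml above.
def pg_dump_command(container="ifriend-postgres", db="ifriend", user="ifriend"):
    return [
        "docker", "exec", container,
        "pg_dump", "-U", user, "-d", db, "--format=custom",
    ]

print(" ".join(pg_dump_command()))
# docker exec ifriend-postgres pg_dump -U ifriend -d ifriend --format=custom
```

The command list can be handed to subprocess.run and wired into cron or a systemd timer; --format=custom produces an archive restorable with pg_restore.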

Week 4: Scalability

  • [ ] Kubernetes setup (k3s)
  • [ ] Helm charts
  • [ ] Multi-region (optional)
  • [ ] Load balancing
  • [ ] Disaster recovery

🎯 Why Choose Vendor-Agnostic

✅ Advantages

1. Zero vendor lock-in
   └─ You are free to move whenever you want

2. Total control
   └─ Your code, your infrastructure, your data

3. Low cost
   └─ $0-10/mo vs. $45+ with vendors

4. Portable knowledge
   └─ PostgreSQL + Docker + Kubernetes = skills the market values

5. Operational freedom
   └─ Runs on a laptop, a VPS, any cloud, or your own datacenter

⚠️ Disadvantages

1. More operational responsibility
   └─ You handle backups, monitoring, and scaling

2. More complex initial setup
   └─ 8h vs. 2h with a vendor

3. Less "serverless" (no scale-to-zero)
   └─ The VM is always running

4. Requires DevOps skills
   └─ Not click-and-done like Cloud Run

🚀 Final Recommendation

For an MVP (Now):

Use VENDOR-AGNOSTIC (Docker Compose locally)
├─ Why: learn everything
├─ Investment: 0h ops
├─ Cost: $0
└─ Later: easy to move to a VPS

For Production (Small Scale):

Use VENDOR-AGNOSTIC (VPS)
├─ Why: control + low cost
├─ Investment: $5-10/mo
├─ Ops: ~3h/week
└─ Later: easy to scale

For Production (Growth):

Use HYBRID (Vendor + Agnostic)
├─ Database: managed PostgreSQL (AWS RDS)
├─ Embeddings: self-hosted Ollama (VPS)
├─ API: Kubernetes (any cloud)
└─ Total: $30-50/mo, better performance

For Enterprise:

Use VENDOR-AGNOSTIC (Multi-Cloud Kubernetes)
├─ Database: managed PostgreSQL (on each cloud)
├─ Embeddings: self-hosted Ollama (on each cloud)
├─ API: Kubernetes (AWS, GCP, Azure simultaneously)
└─ Total: $100+/mo, full portability

📚 Documentation Still Needed

  1. Production-optimized Dockerfile
  2. CI/CD pipeline (GitHub Actions)
  3. Monitoring + alerting (Prometheus + Grafana)
  4. Backup strategy
  5. Kubernetes manifests

🎁 Files to Create

project/
├─ docker-compose.yml          ✅ Created (above)
├─ docker-compose.prod.yml     ⏭️ For production
├─ Dockerfile                  ✅ Created (above)
├─ init_db.sql                 ✅ Created (above)
├─ .env.example                ⏭️ Example
├─ memory_service_agnostic.py  ✅ Created (above)
├─ requirements.txt            ✅ Already exists
└─ k8s/                         ⏭️ Kubernetes manifests
   ├─ namespace.yaml
   ├─ postgres-deploy.yaml
   ├─ ollama-deploy.yaml
   └─ api-deploy.yaml