# WaddleAI Testing Setup with OpenWebUI
This guide explains how to set up a complete testing environment with the WaddleAI proxy server and OpenWebUI for comprehensive LLM testing.
## Quick Start

### Prerequisites
- Docker and Docker Compose installed
- At least 4GB RAM available for containers
- Ports 3001, 8000, 8001 available on your system
### 1. Environment Setup

```bash
# Copy environment template
cp .env.testing .env

# Edit the .env file with your configuration.
# At minimum, set your WaddleAI API key:
# WADDLEAI_API_KEY=wa-your-api-key-here
```
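For reference, a minimal `.env` for local testing might look like the sketch below. Only `WADDLEAI_API_KEY` is required by this guide; the other entries mirror the settings described under Configuration Options, and their values here are illustrative placeholders that should be checked against `.env.testing`.

```bash
# Minimal .env sketch (illustrative values; confirm variable names against .env.testing)
WADDLEAI_API_KEY=wa-your-api-key-here

# Proxy behaviour (see "Configuration Options" below)
SECURITY_POLICY=balanced
CORS_ALLOWED_ORIGINS=http://localhost:3001
OPENAI_COMPATIBILITY_MODE=true

# OpenWebUI behaviour
ENABLE_SIGNUP=true
DEFAULT_USER_ROLE=user
```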
### 2. Launch Testing Environment

```bash
# Start all services
docker-compose -f docker-compose.testing.yml up -d

# Check service status
docker-compose -f docker-compose.testing.yml ps
```
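The containers can take a short while to become healthy. A small polling loop (a convenience sketch, assuming `curl` is installed on the host) waits until the proxy answers on its health endpoint:

```bash
# Poll the WaddleAI proxy health endpoint until it responds
until curl -sf http://localhost:8000/health > /dev/null; do
  echo "Waiting for WaddleAI proxy..."
  sleep 2
done
echo "WaddleAI proxy is up."
```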
### 3. Access Interfaces
| Service | URL | Purpose |
|---|---|---|
| OpenWebUI | http://localhost:3001 | Modern chat interface for testing |
| WaddleAI Proxy | http://localhost:8000 | OpenAI-compatible API endpoint |
| WaddleAI Management | http://localhost:8001 | Admin and monitoring interface |
| Documentation | http://localhost:8080 | WaddleAI documentation |
| Website | http://localhost:3000 | Marketing website |
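To confirm that every interface listed above is reachable, a quick loop over the published ports (a convenience sketch, not part of the project tooling) prints the HTTP status returned by each one:

```bash
# Check that each local interface from the table above responds
for port in 3001 8000 8001 8080 3000; do
  code=$(curl -s -o /dev/null -w '%{http_code}' "http://localhost:${port}/")
  echo "localhost:${port} -> HTTP ${code}"
done
```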
## Testing Scenarios

### OpenWebUI Testing
- First Time Setup:
  - Go to http://localhost:3001
  - Create an account (signup is enabled in testing)
  - OpenWebUI will automatically detect WaddleAI models
- Model Testing:
  - Test different models: GPT-4, Claude, LLaMA, etc.
  - Verify model switching works correctly
  - Check response streaming functionality
- Advanced Features:
  - Upload documents for RAG testing
  - Test conversation memory
  - Verify chat history persistence
### API Testing

```bash
# Test WaddleAI health endpoint
curl http://localhost:8000/health

# List available models
curl http://localhost:8000/v1/models \
  -H "Authorization: Bearer your-api-key"

# Test chat completion
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer your-api-key" \
  -d '{
    "model": "gpt-4",
    "messages": [{"role": "user", "content": "Hello, World!"}],
    "stream": false
  }'
```
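Since chat completions also support streaming, it is worth exercising the streamed path as well. The request below simply flips `stream` to `true`; an OpenAI-compatible endpoint is expected to return server-sent events, though the exact framing depends on the proxy:

```bash
# Test streamed chat completion (expects SSE-style chunks from an OpenAI-compatible endpoint)
curl -N http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer your-api-key" \
  -d '{
    "model": "gpt-4",
    "messages": [{"role": "user", "content": "Count to five."}],
    "stream": true
  }'
```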
### VS Code Extension Testing

- Setup Extension:
  - Open `/vscode-extension/waddleai-copilot/` in VS Code
  - Press F5 to launch the Extension Development Host
  - Configure the API key: "WaddleAI: Set API Key"
- Test Chat Participant:
  - Open the VS Code Chat panel
  - Type `@waddleai Hello, can you help me code?`
  - Verify responses stream correctly
## Configuration Options

### WaddleAI Proxy Settings

- `SECURITY_POLICY`: `balanced`, `strict`, or `permissive`
- `CORS_ALLOWED_ORIGINS`: Configure allowed origins for your domain
- `OPENAI_COMPATIBILITY_MODE`: Enable full OpenAI API compatibility
### OpenWebUI Settings

- `ENABLE_SIGNUP`: Allow new user registration
- `DEFAULT_USER_ROLE`: Default permissions for new users
- `ENABLE_MODEL_FILTER`: Filter available models
- `RAG_EMBEDDING_ENGINE`: Configure document processing
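After changing any of these values in `.env`, recreate the affected services so the new environment is picked up:

```bash
# Recreate services so updated .env values take effect
docker-compose -f docker-compose.testing.yml up -d --force-recreate waddleai-proxy openwebui
```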
## Troubleshooting

### Common Issues
OpenWebUI can't connect to WaddleAI:
```bash
# Check if the WaddleAI proxy is healthy
docker-compose -f docker-compose.testing.yml exec openwebui curl http://waddleai-proxy:8000/health
```
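It can also help to confirm that the OpenWebUI container received the expected connection settings from `.env`. A generic check (the exact variable names depend on the compose file) is to grep the container's environment:

```bash
# Inspect the OpenWebUI container's environment for WaddleAI/OpenAI-related settings
docker-compose -f docker-compose.testing.yml exec openwebui env | grep -iE 'openai|waddle'
```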
Models not appearing in OpenWebUI:
- Verify the API key is set correctly
- Check the WaddleAI proxy logs: `docker-compose -f docker-compose.testing.yml logs waddleai-proxy`
Database connection issues:
```bash
# Check PostgreSQL health
docker-compose -f docker-compose.testing.yml exec postgres pg_isready -U waddleai
```
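The stack also includes Redis, so a quick liveness check (assuming the service is named `redis` in the compose file) is:

```bash
# Check Redis health; a healthy instance replies with PONG
docker-compose -f docker-compose.testing.yml exec redis redis-cli ping
```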
### Logs and Debugging

```bash
# View all logs
docker-compose -f docker-compose.testing.yml logs

# View specific service logs
docker-compose -f docker-compose.testing.yml logs waddleai-proxy
docker-compose -f docker-compose.testing.yml logs openwebui

# Follow logs in real-time
docker-compose -f docker-compose.testing.yml logs -f waddleai-proxy
```
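When a service misbehaves, filtering the followed output can make problems easier to spot, for example:

```bash
# Follow proxy logs and surface only error/warning lines
docker-compose -f docker-compose.testing.yml logs -f waddleai-proxy 2>&1 | grep -iE 'error|warn'
```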
## Cleanup

### Stop Services

```bash
# Stop all containers
docker-compose -f docker-compose.testing.yml down

# Stop and remove volumes (WARNING: deletes all data)
docker-compose -f docker-compose.testing.yml down -v
```
### Reset Environment

```bash
# Complete cleanup
docker-compose -f docker-compose.testing.yml down -v --remove-orphans
docker system prune -f
```
## Architecture Overview

```
┌─────────────┐     ┌────────────────┐     ┌────────────────────┐
│  OpenWebUI  │────▶│ WaddleAI Proxy │────▶│   LLM Providers    │
│ (Port 3001) │     │  (Port 8000)   │     │ (GPT, Claude, etc) │
└─────────────┘     └────────────────┘     └────────────────────┘
       │                     │
       │            ┌────────────────┐
       │            │ WaddleAI Mgmt  │
       │            │  (Port 8001)   │
       │            └────────────────┘
       │                     │
       └──────────┬──────────┘
       ┌──────────┴─────────┐
       │ PostgreSQL + Redis │
       │ (Ports 5432, 6379) │
       └────────────────────┘
```
## Production Deployment

For production deployment:

1. Use `docker-compose.yml` instead of `docker-compose.testing.yml`
2. Set secure passwords in `.env`
3. Configure proper SSL/TLS certificates
4. Set up monitoring and backup strategies
5. Review security policies and CORS settings
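For step 2, one convenient way to generate strong secrets is `openssl` (an example approach; the variable names below are illustrative and should match whatever your `.env` actually uses):

```bash
# Generate random secrets for .env (variable names are illustrative)
echo "POSTGRES_PASSWORD=$(openssl rand -hex 32)"
echo "WEBUI_SECRET_KEY=$(openssl rand -hex 32)"
```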
## API Compatibility

WaddleAI provides OpenAI-compatible endpoints:

- `/v1/models` - List available models
- `/v1/chat/completions` - Chat completions with streaming
- `/v1/completions` - Text completions
- `/v1/embeddings` - Text embeddings (if supported)

This ensures compatibility with:

- OpenWebUI
- VS Code Extension
- OpenAI Python/JavaScript clients
- Any OpenAI-compatible tool
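As an example, the embeddings endpoint (where supported) accepts a standard OpenAI-style request; the model name below is purely illustrative and depends on which providers are configured:

```bash
# Exercise the embeddings endpoint with a standard OpenAI-style request
# (model name is illustrative; requires a provider that supports embeddings)
curl http://localhost:8000/v1/embeddings \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer your-api-key" \
  -d '{
    "model": "text-embedding-3-small",
    "input": "Hello, World!"
  }'
```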