# Streamlit Frontend
The Streamlit app provides a quick UI to:
- Select an IONOS AI model from the dropdown
- Chat with the intelligent ReAct agent
- View real-time responses with dynamic web search and reasoning
- Experience context-aware conversations with chat history
## Run

### a) Windows (PowerShell)

```powershell
# In a new shell
cd frontends/streamlit-starter
python -m venv .venv
.\.venv\Scripts\Activate.ps1
pip install -r requirements.txt  # or: pip install streamlit requests
streamlit run app.py
```

### b) macOS / Linux (bash/zsh)
```bash
# In a new shell
cd frontends/streamlit-starter
python3 -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt  # or: pip install streamlit requests
streamlit run app.py
```

Note: The frontend expects the backend at `http://backend-service:8000` by default (Kubernetes). For local development, create a `.env` file with `BACKEND_URL=http://localhost:8000`.
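The fallback described in the note can be sketched as a small helper (the function name and the exact way the app reads the variable are assumptions, not the actual frontend code):

```python
import os

# Default used in Kubernetes, where the backend is reachable by its service name.
DEFAULT_BACKEND_URL = "http://backend-service:8000"

def resolve_backend_url(env=None):
    """Return BACKEND_URL from the environment (populated from .env),
    falling back to the in-cluster default when unset or empty."""
    env = os.environ if env is None else env
    return env.get("BACKEND_URL") or DEFAULT_BACKEND_URL

# Local development: .env provides BACKEND_URL=http://localhost:8000
print(resolve_backend_url({"BACKEND_URL": "http://localhost:8000"}))  # http://localhost:8000
# In-cluster: no override set, so the Kubernetes service default is used
print(resolve_backend_url({}))  # http://backend-service:8000
```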
## Configure

- Backend URL defaults to `http://backend-service:8000` for Kubernetes deployment
- For local development: create a `.env` file with `BACKEND_URL=http://localhost:8000`
- `IONOS_API_KEY` is required in the frontend `.env` for fetching inference models
- Model selection supports both IONOS Hub inference models and Studio fine-tuned models
- The agent automatically handles web search and reasoning without additional configuration
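Because inference models are fetched directly from IONOS (not through the backend), the frontend needs `IONOS_API_KEY` for a Bearer-token request along these lines. This is a hedged sketch: the endpoint URL, helper names, and response shape are assumptions, so check the IONOS documentation for the actual models endpoint.

```python
import os

# Assumed IONOS Model Hub endpoint (OpenAI-compatible "list models" route);
# verify against the IONOS docs before relying on it.
IONOS_MODELS_URL = "https://openai.inference.de-txl.ionos.com/v1/models"

def build_auth_headers(api_key):
    """Bearer token taken from IONOS_API_KEY in the frontend .env."""
    return {"Authorization": f"Bearer {api_key}"}

def fetch_inference_models():
    """Fetch inference model ids directly from the IONOS API."""
    import requests  # imported lazily so the pure helper above has no dependency
    headers = build_auth_headers(os.environ["IONOS_API_KEY"])
    resp = requests.get(IONOS_MODELS_URL, headers=headers, timeout=10)
    resp.raise_for_status()
    # Assumed OpenAI-style payload: {"data": [{"id": "..."}, ...]}
    return [m["id"] for m in resp.json().get("data", [])]
```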
## Features

- Real-time Chat: Instant responses from IONOS AI models
- Dynamic Web Search: Automatically searches the web when needed (inference models only)
- Context Awareness: Remembers conversation history
- Model Type Selection: Switch between Inference and Fine-tuned models
- Inference Models: IONOS Hub models with web search capabilities (fetched directly from the IONOS API)
- Fine-tuned Models: Studio models for specialized tasks (fetched from the backend `/studio/models` endpoint)
- Model Selection: Choose from available models based on the selected type
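The two-step selection above (pick a model type, then a model) can be sketched as a pure helper that builds the dropdown entries. All names here are hypothetical; the sketch only assumes inference models arrive as a list of ids and Studio models as the name-to-UUID mapping returned by `/studio/models`:

```python
def build_model_options(model_type, inference_ids, studio_models):
    """Return (label, identifier) pairs for the model dropdown.

    - "inference": IONOS Hub ids such as "mistralai/Mistral-Small"
    - "fine-tuned": Studio display names mapped to raw UUIDs from /studio/models
    """
    if model_type == "inference":
        return [(mid, mid) for mid in inference_ids]
    if model_type == "fine-tuned":
        return list(studio_models.items())
    raise ValueError(f"unknown model type: {model_type}")

options = build_model_options(
    "fine-tuned",
    inference_ids=["mistralai/Mistral-Small"],
    studio_models={"qwen-gdpr": "7b19cae7-0983-4a6b-a03c"},
)
print(options)  # [('qwen-gdpr', '7b19cae7-0983-4a6b-a03c')]
```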
## Adding New Fine-tuned Models

To add a new fine-tuned model:

- Add the model UUID to the backend `.env`:

```
STUDIO_YOUR_MODEL_NAME=your-model-uuid-here
```

- Update the backend `main.py` in the `/studio/models` endpoint:
```python
@app.get("/studio/models")
async def get_studio_models():
    import os

    models = {
        "qwen-gdpr": os.getenv("STUDIO_MODEL_QWEN_GDPR"),
        "granite-gdpr": os.getenv("STUDIO_MODEL_GRANITE_GDPR"),
        "qwen3-sharegpt": os.getenv("STUDIO_QWEN3_SHAREGPT"),
        "your-model-name": os.getenv("STUDIO_YOUR_MODEL_NAME"),  # Add this line
    }
    # Only expose models whose UUID is actually configured
    return {k: v for k, v in models.items() if v}
```

- Restart the backend; the new model will appear in the frontend dropdown automatically.
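The dict comprehension at the end is what makes the dropdown update automatically: entries whose environment variable is unset are silently dropped. A standalone sketch of that filtering behavior (helper name hypothetical):

```python
import os

def configured_studio_models(env_map):
    """Mirror the /studio/models filtering: keep only entries whose
    backend .env variable resolves to a non-empty UUID."""
    models = {name: os.getenv(var) for name, var in env_map.items()}
    return {k: v for k, v in models.items() if v}

# Simulate one configured and one unconfigured model
os.environ["STUDIO_YOUR_MODEL_NAME"] = "your-model-uuid-here"
os.environ.pop("STUDIO_QWEN3_SHAREGPT", None)

print(configured_studio_models({
    "your-model-name": "STUDIO_YOUR_MODEL_NAME",
    "qwen3-sharegpt": "STUDIO_QWEN3_SHAREGPT",
}))  # {'your-model-name': 'your-model-uuid-here'}
```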
Note: The frontend sends raw Studio model UUIDs to the backend (e.g., `7b19cae7-0983-4a6b-a03c`). The backend automatically detects the UUID format and routes the request to the IONOS Studio API, while provider/name formats (e.g., `mistralai/Mistral-Small`) route to IONOS Hub.
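The routing rule in the note can be sketched as a simple format check. This is an illustration, not the backend's actual detection code, and the regex deliberately also accepts the truncated UUID shown above:

```python
import re

# Dash-separated hex groups, e.g. "7b19cae7-0983-4a6b-a03c"
# (also matches full 5-group UUIDs).
UUID_RE = re.compile(r"^[0-9a-f]{8}(-[0-9a-f]{4,12})+$", re.IGNORECASE)

def route_model(model_id):
    """Decide which IONOS API a model identifier should be sent to."""
    if "/" in model_id:          # provider/name format, e.g. mistralai/Mistral-Small
        return "hub"
    if UUID_RE.match(model_id):  # raw Studio model UUID
        return "studio"
    return "hub"                 # default: treat anything else as a Hub model

print(route_model("7b19cae7-0983-4a6b-a03c"))  # studio
print(route_model("mistralai/Mistral-Small"))  # hub
```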