🔄 Refresh: 0
📡 WS: disconnected
⚠️ Errors: 0
🕒 Last: never
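
The status bar above reflects simple client-side counters driven by the dashboard's WebSocket connection. A minimal sketch of how they might be maintained (the field names, endpoint URL, and update rules are assumptions for illustration, not part of the original template):

    // Hypothetical status-bar state; field names are assumptions.
    interface StatusBar {
      refreshCount: number;
      wsStatus: 'connected' | 'disconnected';
      errorCount: number;
      lastUpdate: Date | null;
    }

    const statusBar: StatusBar = {
      refreshCount: 0,
      wsStatus: 'disconnected',
      errorCount: 0,
      lastUpdate: null,
    };

    // Assumed WebSocket endpoint; replace with the real proxy URL.
    const ws = new WebSocket('ws://localhost:8080/ws');
    ws.onopen = () => { statusBar.wsStatus = 'connected'; };
    ws.onclose = () => { statusBar.wsStatus = 'disconnected'; };
    ws.onerror = () => { statusBar.errorCount += 1; };
    ws.onmessage = () => {
      statusBar.refreshCount += 1;
      statusBar.lastUpdate = new Date();
    };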

LLM Proxy Pro

Enterprise Dashboard v2.1

{{ systemHealth.score }}%
System Health
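
How systemHealth.score is derived is not shown in this section; one plausible sketch averages per-backend success rates into a 0-100 score (the aggregation rule is an assumption, and the real dashboard may also weight latency or queue depth):

    // Hypothetical health score: mean per-backend success rate, 0-100.
    interface BackendCounters { total_requests: number; total_errors: number; }

    function computeHealthScore(backends: BackendCounters[]): number {
      if (backends.length === 0) return 100;
      const rates = backends.map(b =>
        b.total_requests === 0
          ? 100
          : ((b.total_requests - b.total_errors) / b.total_requests) * 100
      );
      return Math.round(rates.reduce((sum, r) => sum + r, 0) / backends.length);
    }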

{{ stat.label }}

{{ stat.value }}

{{ stat.subtitle }} {{ stat.subvalue }}
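
The stat cards above bind four fields per entry; one possible shape for the backing array (field contents below are placeholders, not real values):

    // Assumed shape of the stats array driving the cards above.
    interface StatCard {
      label: string;     // headline label, e.g. "Total Requests"
      value: string;     // headline value
      subtitle: string;  // secondary label
      subvalue: string;  // secondary value
    }

    // Placeholder entry for illustration only.
    const stats: StatCard[] = [
      { label: 'Total Requests', value: '0', subtitle: 'last hour', subvalue: '0' },
    ];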

Backend Status ({{ Object.keys(backendStatus).length }} backends)

Live monitoring

{{ name }}

{{ backend.config.multimodal ? '🖼️ Multi' : '📝 Text' }} Q: {{ backend.config.quality }}/10
Load: {{ backend.status.active_connections }}/{{ backend.config.max_parallel }}
Latency: {{ formatLatency(backend.health.latency) }}
Status: {{ backend.health.status }}
Requests: {{ backend.status.total_requests }}
Errors: {{ backend.status.total_errors }}
Success: {{ getSuccessRate(backend) }}%
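
getSuccessRate and formatLatency are referenced on each backend card but not defined in this section. A minimal sketch consistent with the fields shown (the rounding and the assumption that latency is reported in milliseconds are mine):

    // Success rate from the request/error counters shown on each card.
    function getSuccessRate(backend: {
      status: { total_requests: number; total_errors: number };
    }): number {
      const { total_requests, total_errors } = backend.status;
      if (total_requests === 0) return 100;
      return Math.round(((total_requests - total_errors) / total_requests) * 100);
    }

    // Latency formatter; assumes the health check reports milliseconds.
    function formatLatency(ms: number | null | undefined): string {
      if (ms == null) return '-';
      return ms >= 1000 ? `${(ms / 1000).toFixed(2)}s` : `${Math.round(ms)}ms`;
    }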

Request Trends

Response Times

Queue Analysis

Queue Status

Main Queue: {{ queueStatus.main_queue_size }}
Priority Queue: {{ queueStatus.priority_queue_size }}
Processing: {{ queueStatus.processing }}

Performance

Avg Wait: {{ formatTime(queueStatus.estimated_wait_time) }}
Throughput: {{ calculateThroughput() }}/min

Totals

Processed: {{ queueStatus.metrics.total_processed }}
Failed: {{ queueStatus.metrics.total_failed }}
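
calculateThroughput and formatTime back the Performance figures above; a sketch of one way they could work (the rolling one-minute window and the assumption that wait time arrives in seconds are illustrative):

    // Requests completed per minute over a rolling window of completion timestamps.
    function calculateThroughput(completedAt: number[], windowMs = 60_000): number {
      const cutoff = Date.now() - windowMs;
      return completedAt.filter(t => t >= cutoff).length;
    }

    // Human-readable wait time; assumes the input is in seconds.
    function formatTime(seconds: number): string {
      if (seconds < 60) return `${seconds.toFixed(1)}s`;
      const minutes = Math.floor(seconds / 60);
      return `${minutes}m ${Math.round(seconds % 60)}s`;
    }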

Recent Activity ({{ recentRequests.length }} entries)

Time | Backend | Model | Status | Duration | Tokens
{{ formatTimestamp(req.timestamp) }} | {{ req.backend }} | {{ req.model }} | {{ req.status }} | {{ req.response_time?.toFixed(2) }}s | {{ req.tokens || '-' }}
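
formatTimestamp renders the Time column; a minimal sketch, assuming each timestamp arrives as an ISO string or epoch milliseconds:

    // Locale time-of-day for the activity table; the input format is an assumption.
    function formatTimestamp(ts: string | number): string {
      const d = new Date(ts);
      return Number.isNaN(d.getTime()) ? String(ts) : d.toLocaleTimeString();
    }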