Between 2:15 PM and 3:42 PM PST, some users experienced 2-5s of additional latency when requests routed to cloud models. Root cause: a configuration change introduced during a routine deployment. We rolled back the change and deployed a fix. No data was lost.
CLI downloads and documentation were intermittently unavailable to EU users between 8:00 AM and 9:15 AM UTC. Root cause: maintenance by our upstream CDN provider. We switched traffic to a backup CDN. All services are restored.
Sign-in and token refresh were unavailable for approximately 8 minutes starting at 11:22 PM PST. Root cause: database connection pool exhaustion during a usage spike. Auto-scaling restored capacity and resolved the issue, and we have since increased the base pool size.
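To illustrate the failure mode, here is a minimal sketch of a fixed-size connection pool. The incident report does not name our database client or its actual pool sizes, so the class, connection stand-ins, and numbers below are purely illustrative: a spike that checks out more connections than the base pool holds fails, while a larger base pool absorbs the same spike.

```python
import queue

class ConnectionPool:
    """Illustrative fixed-size pool; real pools (e.g. SQLAlchemy, HikariCP)
    layer overflow connections and wait timeouts on top of this idea."""

    def __init__(self, base_size):
        self._idle = queue.Queue()
        for i in range(base_size):
            # Stand-in strings; a real pool would open DB connections here.
            self._idle.put(f"conn-{i}")

    def acquire(self, timeout=0.01):
        try:
            return self._idle.get(timeout=timeout)
        except queue.Empty:
            # What callers observed during the spike: no connection available.
            raise RuntimeError("pool exhausted")

    def release(self, conn):
        self._idle.put(conn)

# A spike of 11 concurrent checkouts exhausts a base pool of 10...
small = ConnectionPool(base_size=10)
held = [small.acquire() for _ in range(10)]
try:
    small.acquire()
    exhausted = False
except RuntimeError:
    exhausted = True

# ...but the same spike is absorbed by an increased base size (sizes hypothetical).
large = ConnectionPool(base_size=16)
held_after_fix = [large.acquire() for _ in range(12)]
```

Auto-scaling handles sustained growth, but a larger base pool keeps short spikes from exhausting connections before new capacity comes online.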