Is GitHub Copilot Down?
Real-time status check for github.com
About GitHub Copilot Status
BlueMonitor checks GitHub Copilot (github.com) by sending automated requests to its servers. If the service responds within a normal timeframe and returns a successful status code, it's marked as operational. Response times over 3 seconds indicate the service is slow, and connection failures or server errors indicate the service may be down.
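The logic described above can be approximated with a short script. The following is a minimal sketch, not BlueMonitor's actual implementation: the target URL and the 3-second threshold are taken from the description above, and the Python `requests` library is assumed to be available.

```python
import requests

# Assumed values based on the description above; not BlueMonitor's actual code.
TARGET_URL = "https://github.com"   # service endpoint being probed
SLOW_THRESHOLD_SECONDS = 3.0        # responses slower than this are flagged as "slow"

def check_status(url: str = TARGET_URL) -> str:
    """Classify a service as 'up', 'slow', or 'down' with a single HTTP request."""
    try:
        response = requests.get(url, timeout=10)
    except requests.RequestException:
        # Connection failures (DNS errors, timeouts, refused connections) count as down.
        return "down"

    if response.status_code >= 500:
        # Server errors also count as down.
        return "down"
    if response.elapsed.total_seconds() > SLOW_THRESHOLD_SECONDS:
        return "slow"
    return "up"

if __name__ == "__main__":
    print(f"GitHub Copilot status: {check_status()}")
```

A real monitor would run a check like this on a schedule from multiple locations and only change the displayed status after several consecutive failures, to avoid flagging a single dropped request as an outage.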
Recent Incidents
Problems with third-party Claude and Codex Agent sessions not being listed in the agents tab dashboard
Apr 10, 13:28 UTC Resolved - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
Apr 10, 13:08 UTC Update - We are investigating third party Claude and Codex Cloud Agent sessions not being listed in the agents tab dashboard.
Apr 10, 13:07 UTC Investigating - We are investigating reports of impacted performance for some GitHub services.
Disruption with some GitHub services
Apr 9, 20:36 UTC Resolved - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
Apr 9, 19:52 UTC Update - We continue to investigate periodic delays in Copilot Cloud Agent job processing.
Apr 9, 18:57 UTC Update - We are continuing to investigate Copilot Cloud Agent job delays.
Apr 9, 17:48 UTC Update - Copilot Cloud Agent jobs are being processed and we are monitoring recovery.
Apr 9, 16:57 UTC Update - We are investigating delays processing Copilot Cloud Agent jobs.
Apr 9, 16:20 UTC Update - We are experiencing issues where Copilot coding agent jobs are delayed in starting.
Apr 9, 16:20 UTC Investigating - We are investigating reports of impacted performance for some GitHub services.
Disruption with some GitHub services
Apr 9, 10:15 UTC Resolved - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
Apr 9, 10:15 UTC Monitoring - The degradation has been mitigated. We are monitoring to ensure stability.
Apr 9, 09:57 UTC Update - We are investigating an issue affecting GitHub Copilot coding agent. Users may experience significant delays when starting new agent sessions, with jobs remaining queued longer than expected. Our team has identified increased load as a contributing factor and is actively working to restore normal performance.
Apr 9, 09:50 UTC Investigating - We are investigating reports of impacted performance for some GitHub services.
Disruption with GitHub notifications
Apr 9, 04:57 UTC Resolved - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
Apr 9, 04:57 UTC Monitoring - The degradation has been mitigated. We are monitoring to ensure stability.
Apr 9, 04:42 UTC Investigating - We are investigating reports of impacted performance for some GitHub services.
Disruption with some GitHub services
Apr 2, 21:48 UTC Resolved - Between 15:20 and 20:18 UTC on Thursday, April 2, Copilot Cloud Agent entered a period of reduced performance. Due to an internal feature being developed for Copilot Code Review, the Copilot Cloud Agent infrastructure started to receive an increased number of jobs. This load eventually caused us to hit an internal rate limit, suspending all work for an hour. During this hour, some new jobs timed out, while others resumed once rate limiting ended. Roughly 40% of jobs in this period were affected. Once the cause of the rate limiting was identified, we were able to disable the new CCR feature via a feature flag. Once the jobs already in the queue had cleared, we saw no further instances of rate limiting.
Apr 2, 21:48 UTC Monitoring - The degradation has been mitigated. We are monitoring to ensure stability.
Apr 2, 20:35 UTC Update - Although we are observing recovery once again, we expect continued periods of degradation. Work that is queued during times of degradation does eventually get processed. We continue to investigate and work toward a mitigation, and will update again within 2 hours.
Apr 2, 19:28 UTC Update - This issue has recurred. Customers will once again experience false job starts when assigning tasks to Copilot Cloud Agent. We are still investigating and trying to understand the pattern of degradation.
Apr 2, 18:25 UTC Update - We are once again seeing recovery with Copilot Cloud Agent job starts. We are keeping this open while we verify this won't recur.
Apr 2, 17:59 UTC Update - When assigning tasks to Copilot Cloud Agent, the task will appear to be working, but may not actually be running. We are investigating.
Apr 2, 17:49 UTC Investigating - We are investigating reports of impacted performance for some GitHub services.
Copilot Coding Agent failing to start some jobs
Apr 2, 16:30 UTC Resolved - Between 15:20 and 20:18 UTC on Thursday, April 2, Copilot Cloud Agent entered a period of reduced performance. Due to an internal feature being developed for Copilot Code Review, the Copilot Cloud Agent infrastructure started to receive an increased number of jobs. This load eventually caused us to hit an internal rate limit, suspending all work for an hour. During this hour, some new jobs timed out, while others resumed once rate limiting ended. Roughly 40% of jobs in this period were affected. Once the cause of the rate limiting was identified, we were able to disable the new CCR feature via a feature flag. Once the jobs already in the queue had cleared, we saw no further instances of rate limiting. This was the same incident declared in https://www.githubstatus.com/incidents/d96l71t3h63k
Apr 2, 16:28 UTC Update - When assigning tasks to Copilot Cloud Agent, the task will appear to be working, but may not actually be running. We are investigating.
Apr 2, 16:18 UTC Investigating - We are investigating reports of impacted performance for some GitHub services.
GitHub audit logs are unavailable
Apr 1, 16:10 UTC Resolved - On April 1, 2026, between 15:34 UTC and 16:02 UTC, our audit log service lost connectivity to its backing data store due to a failed credential rotation. During this 28-minute window, audit log history was unavailable via both the API and web UI. This resulted in 5xx errors for 4,297 API actors and 127 github.com users. Additionally, events created during this window were delayed by up to 29 minutes in github.com and event streaming. No audit log events were lost; all audit log events were ultimately written and streamed successfully. Customers using GitHub Enterprise Cloud with data residency were not impacted by this incident. We were alerted to the infrastructure failure at 15:40 UTC, six minutes after onset, and resolved the issue by recycling the affected environment, restoring full service by 16:02 UTC. We are conducting a thorough review of our credential rotation process to strengthen its resiliency and prevent recurrence. In parallel, we are strengthening our monitoring capabilities to ensure faster detection and earlier visibility into similar issues going forward.
Apr 1, 16:07 UTC Update - A routine credential rotation has failed for our audit logs service; we have re-deployed the service and are waiting for recovery.
Apr 1, 16:06 UTC Investigating - We are investigating reports of impacted performance for some GitHub services.
Disruption with GitHub's code search
Apr 1, 23:45 UTC Resolved - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
Apr 1, 23:45 UTC Update - Code search has recovered and is serving production traffic.
Apr 1, 22:00 UTC Update - We have stabilized Code Search infrastructure, and are in the final stages of validation before slowly reintroducing production traffic.
Apr 1, 19:37 UTC Update - We are still working on recovering back to a serviceable state and expect to have a more substantial update within another two hours.
Apr 1, 17:48 UTC Update - We are observing some recovery for Code Search queries, but customers should be aware that the data being served may be stale, especially for changes that took place after 07:00 UTC today (1 April 2026). We are still working on recovering our ingestion pipeline and synchronizing the indexed data. We will update again within 2 hours.
Apr 1, 16:00 UTC Update - We identified an issue in our ingestion pipeline that degraded the freshness of Code Search results. While we were fixing the ingestion pipeline issue, a deployment caused a loss of dynamic configuration, which is causing most requests for Code Search results to fail. We are working to restore the service and re-ingest the misaligned data.
Apr 1, 15:02 UTC Investigating - We are investigating reports of impacted performance for some GitHub services.
Incident with Copilot
Apr 1, 12:41 UTC Resolved - On April 1, 2026, between 07:29 and 12:41 UTC, some customers experienced elevated 5xx errors and increased latency when using GitHub Copilot features that rely on `/agents/sessions` endpoints (including creating or viewing agent sessions). The issue was caused by resource exhaustion in one of the Copilot backend services handling these requests, which in turn caused timeouts and failed requests. We mitigated the incident by increasing the service's available compute resources and tuning its runtime concurrency settings. Service health returned to normal and the incident was fully resolved by 12:41 UTC.
Apr 1, 12:10 UTC Update - The success rate and latency for creating and viewing agent sessions have stabilized at baseline levels; we are continuing to monitor recovery.
Apr 1, 12:02 UTC Update - The degradation has been mitigated. We are monitoring to ensure stability.
Apr 1, 11:37 UTC Update - The success rate for creating and viewing agent sessions has stabilized, and we're continuing to monitor latency, which is trending toward baseline levels.
Apr 1, 11:24 UTC Update - The degradation has been mitigated. We are monitoring to ensure stability.
Apr 1, 10:56 UTC Monitoring - The degradation affecting Copilot has been mitigated. We are monitoring to ensure stability.
Apr 1, 10:31 UTC Update - Users may see increased latency and intermittent errors when viewing or creating agent sessions. We are working on mitigations to return to baseline performance and success rate.
Apr 1, 10:00 UTC Update - We are investigating reports of issues with service(s): Copilot Dotcom Agents. We will continue to keep users updated on progress towards mitigation.
Apr 1, 09:58 UTC Investigating - We are investigating reports of degraded performance for Copilot.
Incident with Pull Requests: High percentage of 500s
Mar 31, 21:23 UTC Resolved - On Monday, March 31, 2026, between 13:53 UTC and 21:23 UTC, the Pull Requests service experienced elevated latency and failures. On average, the error rate was 0.15% of requests to the service, peaking at 0.28%. This was due to a change in garbage collection (GC) settings for a Go-based internal service that provides access to Git repository data. The change caused more frequent GC activity and elevated CPU consumption on a subset of storage nodes, increasing latency and failure rates for some internal API operations. We mitigated the incident by reverting the GC changes. To prevent future incidents and improve time to detection and mitigation, we are instrumenting additional metrics and alerting for GC-related behavior, improving our visibility into other signals that could cause degradation of this type, and updating our best practices and standards for garbage collection in Go-based services.
Mar 31, 21:16 UTC Monitoring - The degradation affecting Pull Requests has been mitigated. We are monitoring to ensure stability.
Mar 31, 21:12 UTC Update - We continue to see a small subset of repositories experiencing timeouts and elevated latency in Pull Requests, affecting under 1% of requests.
Mar 31, 19:28 UTC Update - Error rates remain elevated across multiple pull request endpoints. We are pursuing multiple potential mitigations.
Mar 31, 18:42 UTC Update - We continue to experience elevated error rates affecting Pull Requests. An earlier fix resolved one component of the issue, but some users may still encounter intermittent timeouts when viewing or interacting with pull requests. Our teams are actively investigating the remaining causes.
Mar 31, 17:16 UTC Update - We identified an issue causing increased errors when accessing Pull Requests. The mitigation is being applied across our infrastructure and we will continue to provide updates as the mitigation rolls out.
Mar 31, 16:35 UTC Update - We are seeing recovery in latency.
Frequently Asked Questions
Is GitHub Copilot down right now?
This page shows the real-time status of GitHub Copilot. The status is checked automatically by sending requests to GitHub Copilot's servers. If the status shows "Down", it means GitHub Copilot is currently experiencing issues.
Why is GitHub Copilot not working?
GitHub Copilot may not be working due to server outages, scheduled maintenance, network issues, or high traffic. Check the current status above for real-time information.
How do I check if GitHub Copilot is down for everyone?
BlueMonitor checks GitHub Copilot's servers from our monitoring infrastructure. If the status shows "Down" here, it's likely down for everyone. If it shows "Up" but you can't access it, the issue may be on your end.
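You can also verify this from your own machine by comparing GitHub's officially reported status with a direct request from your network. The sketch below is illustrative only: it assumes the standard Statuspage summary endpoint at githubstatus.com and Python's `requests` library, and the component name matching is a simple guess rather than a documented contract.

```python
import requests

# GitHub's public status feed (standard Statuspage endpoint; confirm against
# https://www.githubstatus.com if this path ever changes).
STATUS_SUMMARY_URL = "https://www.githubstatus.com/api/v2/summary.json"

def copilot_component_status() -> str:
    """Return the status GitHub itself reports for a Copilot component, if one is listed."""
    summary = requests.get(STATUS_SUMMARY_URL, timeout=10).json()
    for component in summary.get("components", []):
        if "copilot" in component.get("name", "").lower():
            return component.get("status", "unknown")
    return "unknown"

def reachable_from_here(url: str = "https://github.com") -> bool:
    """Check whether github.com responds without a server error from your own network."""
    try:
        return requests.get(url, timeout=10).status_code < 500
    except requests.RequestException:
        return False

if __name__ == "__main__":
    print("GitHub reports Copilot as:", copilot_component_status())
    print("Reachable from this machine:", reachable_from_here())
```

If GitHub reports the component as operational but the local check fails, the problem is more likely with your own network, DNS, or proxy configuration than with GitHub itself.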
What should I do if GitHub Copilot is down?
If GitHub Copilot is down, you can: wait a few minutes and try again, check their official social media for updates, clear your browser cache, or try using a different network connection.
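For scripted or API usage, "wait a few minutes and try again" usually means retrying with increasing delays rather than hammering the service. Below is a minimal, illustrative retry-with-backoff sketch; the URL, attempt count, and delays are arbitrary example values, not recommendations from GitHub.

```python
import time
from typing import Optional

import requests

def fetch_with_backoff(url: str, attempts: int = 4, base_delay: float = 5.0) -> Optional[requests.Response]:
    """Retry a request with exponential backoff, returning None if every attempt fails."""
    for attempt in range(attempts):
        try:
            response = requests.get(url, timeout=10)
            if response.status_code < 500:
                return response
        except requests.RequestException:
            pass  # treat connection errors like server errors and retry
        if attempt < attempts - 1:
            # Wait 5s, 10s, 20s, ... before the next attempt.
            time.sleep(base_delay * (2 ** attempt))
    return None

if __name__ == "__main__":
    result = fetch_with_backoff("https://github.com")
    print("Recovered" if result is not None else "Still unavailable")
```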