
Is GitHub Copilot Down?

Real-time status check for github.com


Official GitHub Copilot Status Page

About GitHub Copilot Status

BlueMonitor checks GitHub Copilot (github.com) by sending automated requests to its servers. If the service responds within a normal timeframe and returns a successful status code, it's marked as operational. Response times over 3 seconds indicate the service is slow, and connection failures or server errors indicate the service may be down.
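The check described above can be sketched in Python. The 3-second "slow" threshold comes from the page itself; the endpoint URL, the 10-second timeout, and the exact status-code handling are assumptions for illustration, not BlueMonitor's actual implementation.

```python
"""Minimal sketch of a BlueMonitor-style uptime probe (assumptions noted above)."""
import time
import urllib.error
import urllib.request

SLOW_THRESHOLD_S = 3.0  # responses slower than this are marked "slow" (per the page)


def classify(status_code: int, elapsed_s: float, failed: bool = False) -> str:
    """Map one probe's outcome to a status label: "up", "slow", or "down"."""
    if failed or not (200 <= status_code < 400):
        return "down"  # connection failure or non-successful status code
    if elapsed_s > SLOW_THRESHOLD_S:
        return "slow"  # succeeded, but slower than the threshold
    return "up"        # successful and within the normal timeframe


def check_status(url: str, timeout: float = 10.0) -> str:
    """Probe `url` once and classify the result."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return classify(resp.status, time.monotonic() - start)
    except urllib.error.HTTPError as err:
        # urlopen raises HTTPError for 4xx/5xx; classify by its status code
        return classify(err.code, time.monotonic() - start)
    except (urllib.error.URLError, OSError):
        return classify(0, 0.0, failed=True)
```

Separating `classify` from the network call keeps the up/slow/down decision testable without hitting any server.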

Recent Incidents

Incident with Pull Requests: High percentage of 500s

3h ago

Mar 31, 18:42 UTC Update - We continue to experience elevated error rates affecting Pull Requests. An earlier fix resolved one component of the issue, but some users may still encounter intermittent timeouts when viewing or interacting with pull requests. Our teams are actively investigating the remaining causes.
Mar 31, 17:16 UTC Update - We identified an issue causing increased errors when accessing Pull Requests. The mitigation is being applied across our infrastructure and we will continue to provide updates as the mitigation rolls out.
Mar 31, 16:35 UTC Update - We are seeing recovery in latency and timeouts of requests related to pull requests, even though 500s are still elevated. While we are continuing to investigate, we are applying a mitigation and expect further recovery after it is applied.
Mar 31, 16:15 UTC Update - We are continuing to investigate increased 500 errors affecting GitHub services. You may experience intermittent failures when using Pull Requests and other features. We are actively working to identify and resolve the underlying cause.
Mar 31, 15:39 UTC Update - We are investigating increased 500 errors affecting GitHub services. You may experience intermittent failures when using Pull Requests and other features. We are actively working to identify and resolve the underlying cause.
Mar 31, 15:06 UTC Update - We are seeing a higher than average number of 500s due to timeouts across GitHub services. We have a potential mitigation in flight and are continuing to investigate.
Mar 31, 15:05 UTC Investigating - We are investigating reports of degraded performance for Pull Requests.

Mar 31, 2026 at 03:39 PM · Resolved · Source

Issues with metered billing report generation

5h ago

Mar 31, 15:10 UTC Resolved - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
Mar 31, 15:01 UTC Monitoring - The degradation has been mitigated. We are monitoring to ensure stability.
Mar 31, 14:59 UTC Update - We have applied mitigations to a data store related to billing reports, and are seeing partial recovery in billing report generation. We continue to monitor for full recovery.
Mar 31, 14:56 UTC Update - We are seeing a high number of 500s due to timeouts across GitHub services. We are redeploying some of our core services and expect that this will allow us to recover.
Mar 31, 14:39 UTC Update - We're continuing to see high failure rates on billing report generation, and are working on mitigations for a data store related to billing reports.
Mar 31, 13:56 UTC Update - We're seeing issues related to metered billing reports, intermittently affecting metered usage graphs and reports on the billing page. We have identified an issue with a data store, and are working on mitigations.
Mar 31, 13:47 UTC Investigating - We are investigating reports of impacted performance for some GitHub services.

Mar 31, 2026 at 01:56 PM · Resolved · Source

Elevated delays in Actions workflow runs and Pull Request status updates

1d ago

Mar 30, 13:25 UTC Resolved - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
Mar 30, 13:25 UTC Update - The degradation has been mitigated. We are monitoring to ensure stability.
Mar 30, 13:20 UTC Monitoring - The degradation affecting Actions and Pull Requests has been mitigated. We are monitoring to ensure stability.
Mar 30, 13:02 UTC Investigating - We are investigating reports of degraded performance for Actions and Pull Requests.

Mar 30, 2026 at 01:25 PM · Resolved · Source

Incident with Copilot

4d ago

Mar 27, 05:00 UTC Resolved - On March 27, 2026, from 02:30 to 04:56 UTC, a misconfiguration in our rate limiting system caused users on Copilot Free, Student, Pro, and Pro+ plans to experience unexpected rate limit errors. The configuration that was incorrectly applied was intended solely for internal staff testing of rate-limiting experiences. Copilot Business and Copilot Enterprise accounts were not affected. During this period, affected users received error messages instructing them to retry after a certain time. Approximately 32% of active Free users, 35% of active Student users, 46% of active Pro users, and 66% of active Pro+ users were affected. After identifying the root cause, we reverted the change and restored the expected rate limits. We are reviewing our deployment and validation processes to help ensure configurations used for internal testing cannot be inadvertently applied to production environments.

Mar 27, 2026 at 05:00 AM · Resolved · Source

Disruption with some GitHub services

6d ago

Mar 24, 20:56 UTC Resolved - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
Mar 24, 20:38 UTC Update - We are investigating elevated error rates affecting multiple GitHub services including Actions, Issues, Pull Requests, Webhooks, Codespaces, and login functionality. Some users may have experienced errors when accessing these features. Most services are now showing signs of recovery. We'll post another update by 21:00 UTC.
Mar 24, 20:23 UTC Update - Issues is experiencing degraded performance. We are continuing to investigate.
Mar 24, 20:23 UTC Update - Pull Requests is experiencing degraded performance. We are continuing to investigate.
Mar 24, 20:20 UTC Update - Webhooks is experiencing degraded performance. We are continuing to investigate.
Mar 24, 20:18 UTC Investigating - We are investigating reports of degraded performance for Actions.

Mar 24, 2026 at 08:56 PM · Resolved · Source

Teams Github Notifications App is down

7d ago

Mar 24, 19:51 UTC Resolved - On March 24, 2026, between 15:57 UTC and 19:51 UTC, the Microsoft Teams Integration and Teams Copilot Integration services were degraded and unable to deliver GitHub event notifications to Microsoft Teams. On average, the error rate was 37.4% and peaked at 90.1% of requests to the service; approximately 19% of all integration installs failed to receive GitHub-to-Teams notifications in this time period. This was due to an outage at one of our upstream dependencies, which caused HTTP 500 errors and connection resets for our Teams integration. We coordinated with the relevant service teams, and the issue was resolved at 19:51 UTC when the upstream incident was mitigated. We are working to update observability and runbooks to reduce time to mitigation for issues like this in the future.
Mar 24, 18:50 UTC Update - We are experiencing degraded availability from Azure Teams APIs, which is impacting notifications from GitHub to Microsoft Teams. We are awaiting resolution from Azure.
Mar 24, 17:43 UTC Update - We are experiencing degraded availability from Azure APIs, which is impacting notifications from GitHub to Microsoft Teams. We are working with Azure to resolve the issue.
Mar 24, 17:09 UTC Update - We found an issue impacting notifications from GitHub to Microsoft Teams. We are working on mitigation and will keep users updated on progress towards mitigation.
Mar 24, 16:59 UTC Investigating - We are investigating reports of impacted performance for some GitHub services.

Mar 24, 2026 at 04:59 PM · Resolved · Source

Disruption with some GitHub services

9d ago

Mar 22, 10:02 UTC Resolved - On March 22, 2026, between 09:05 UTC and 10:02 UTC, users may have experienced intermittent errors and increased latency when performing Git HTTP read operations. On average, the error rate was 3.84% and peaked at 15.55% of requests to the service. The issue was caused by elevated latency in an internal authentication service within one of our regional clusters. We mitigated the issue by redirecting traffic away from the affected cluster at 09:39 UTC, after which error rates returned to normal. The incident was fully resolved at 10:02 UTC. We are working to scale the authentication service and reduce our time to detection and mitigation of issues like this one in the future.
Mar 22, 09:27 UTC Update - We are investigating intermittently high latency and errors from Git operations.
Mar 22, 09:08 UTC Investigating - We are investigating reports of impacted performance for some GitHub services.

Mar 22, 2026 at 09:27 AM · Resolved · Source

Disruption with Copilot Coding Agent Sessions

11d ago

Mar 20, 01:58 UTC Resolved - On March 19, 2026, between 01:05 UTC and 02:52 UTC, and again on March 20, 2026, between 00:42 UTC and 01:58 UTC, the Copilot Coding Agent service was degraded and users were unable to start new Copilot Agent sessions or view existing ones. During the first incident, the average error rate was ~53% and peaked at ~93% of requests to the service. During the second incident, the average error rate was ~99% and peaked at ~100% of requests with significant retry amplification. Both incidents were caused by the same underlying system authentication issue that prevented the service from connecting to its backing datastore. We mitigated each incident by rotating the affected credentials, which restored connectivity and returned error rates to normal. The mitigation time was 01:24. The second occurrence was due to an incomplete remediation of the first. We are implementing automated monitoring for credential lifecycle events and improving operational processes to reduce our time to detection and mitigation of issues like this one in the future.
Mar 20, 01:26 UTC Update - We are rolling out our mitigation and are seeing recovery.
Mar 20, 01:00 UTC Update - We are seeing widespread issues starting and viewing Copilot Agent sessions. We understand the cause and are working on remediation.
Mar 20, 00:58 UTC Investigating - We are investigating reports of impacted performance for some GitHub services.

Mar 20, 2026 at 12:58 AM · Resolved · Source

Git operations for users in the west coast are experiencing an increase in latency

12d ago

Mar 20, 00:05 UTC Resolved - On March 19, 2026 between 16:10 UTC and 00:05 UTC (March 20), Git operations (clone, fetch, push) from the US west coast experienced elevated latency and degraded throughput. Users reported clone speeds dropping from typical speeds to under 1 MiB/s in extreme cases. The root cause was network transport link saturation at our Seattle edge site, where a fiber cut affecting our backbone transport resulted in saturation and packet loss. We had a planned scale-up in progress for the site that was accelerated to resolve the backbone capacity pressure. We also brought online additional edge capacity in a cloud region and redirected some users there. Current scale with the upgraded network capacity is sufficient to prevent reoccurrence, as we upgraded from 800Gbps to 3.2Tbps total capacity on this path. We will continue to monitor network health and respond to any further issues.
Mar 20, 00:05 UTC Update - We have reached stability with git operations through our changes deployed today.
Mar 19, 23:52 UTC Update - We are seeing early signs of improvement. We are working on one more small change to further improve traffic routing on the west coast.
Mar 19, 22:57 UTC Update - We have completed the rollout of our new network path and are monitoring its impact.
Mar 19, 21:59 UTC Update - We are beginning the rollout of our new network path. During this change, users will continue to see higher latency from the west coast. We will provide another update when the rollout is complete.
Mar 19, 18:27 UTC Update - We are working to enable a new network path in the west coast to reduce load and will monitor the impact on latency for Git operations.
Mar 19, 17:49 UTC Update - We are still seeing elevated latency for Git operations in the west coast and are continuing to investigate.
Mar 19, 17:01 UTC Update - We are redirecting traffic back to our Seattle region and customers should see a decrease in latency for Git operations.
Mar 19, 16:25 UTC Investigating

Mar 19, 2026 at 04:25 PM · Resolved · Source

Issues with Copilot Coding Agent

12d ago

Mar 19, 14:32 UTC Resolved - On March 19, 2026, between 01:05 UTC and 02:52 UTC, and again on March 20, 2026, between 00:42 UTC and 01:58 UTC, the Copilot Coding Agent service was degraded and users were unable to start new Copilot Agent sessions or view existing ones. During the first incident, the average error rate was ~53% and peaked at ~93% of requests to the service. During the second incident, the average error rate was ~99% and peaked at ~100% of requests with significant retry amplification. Both incidents were caused by the same underlying system authentication issue that prevented the service from connecting to its backing datastore. We mitigated each incident by rotating the affected credentials, which restored connectivity and returned error rates to normal. The mitigation time was 01:24. The second occurrence was due to an incomplete remediation of the first. We are implementing automated monitoring for credential lifecycle events and improving operational processes to reduce our time to detection and mitigation of issues like this one in the future.
Mar 19, 14:06 UTC Update - Copilot is operating normally.
Mar 19, 14:02 UTC Update - We are investigating reports that Copilot Coding Agent session logs are not available in the UI.
Mar 19, 13:45 UTC Update - Copilot is experiencing degraded performance. We are continuing to investigate.
Mar 19, 13:44 UTC Investigating - We are investigating reports of impacted performance for some GitHub services.

Mar 19, 2026 at 01:45 PM · Resolved · Source

Frequently Asked Questions

Is GitHub Copilot down right now?

This page shows the real-time status of GitHub Copilot. The status is checked automatically by sending requests to GitHub Copilot's servers. If the status shows "Down", it means GitHub Copilot is currently experiencing issues.

Why is GitHub Copilot not working?

GitHub Copilot may not be working due to server outages, scheduled maintenance, network issues, or high traffic. Check the current status above for real-time information.

How do I check if GitHub Copilot is down for everyone?

BlueMonitor checks GitHub Copilot's servers from our monitoring infrastructure. If the status shows "Down" here, it's likely down for everyone. If it shows "Up" but you can't access it, the issue may be on your end.

What should I do if GitHub Copilot is down?

If GitHub Copilot is down, you can: wait a few minutes and try again, check their official social media for updates, clear your browser cache, or try using a different network connection.

Related Services