
Is GitHub Down?

Real-time status check for github.com


Official GitHub Status Page

About GitHub Status

BlueMonitor checks GitHub (github.com) by sending automated requests to its servers. If the service responds within a normal timeframe and returns a successful status code, it's marked as operational. Response times over 3 seconds indicate the service is slow, and connection failures or server errors indicate the service may be down.
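The classification rules above can be sketched in a few lines. This is a hypothetical re-implementation for illustration, not BlueMonitor's actual code; the function names and the 10-second timeout are my own assumptions.

```python
import time
import urllib.request

def classify(status_code, elapsed_seconds):
    """Map an HTTP status code and response time to a service state,
    following the thresholds described above."""
    if status_code is None or status_code >= 500:
        return "down"   # connection failure or server error
    if elapsed_seconds > 3.0:
        return "slow"   # responded, but took longer than 3 seconds
    return "up"         # successful response within a normal timeframe

def check_github(url="https://github.com", timeout=10):
    """Perform one check: time a single request and classify the result."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return classify(resp.status, time.monotonic() - start)
    except OSError:
        return classify(None, 0.0)  # could not connect at all
```

A real monitor would run this on a schedule from several regions and require consecutive failures before reporting "down", to avoid flagging a single dropped request.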

Recent Incidents

Disruption with some GitHub services

1d ago

Apr 14, 06:08 UTC Resolved - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
Apr 14, 06:07 UTC Update - This incident has been resolved. We will continue to monitor to ensure stability. Thank you for your patience and understanding as we addressed this issue.
Apr 14, 06:07 UTC Monitoring - The degradation has been mitigated. We are monitoring to ensure stability.
Apr 14, 04:40 UTC Update - We identified an issue that impacts the Copilot Dashboard on the Insights tab and are working on mitigation. We will continue to keep you updated on progress.
Apr 14, 03:47 UTC Update - The team continues to investigate issues accessing the Copilot Dashboard on the Insights tab. We will continue providing updates on the progress towards mitigation.
Apr 14, 02:40 UTC Update - The Copilot Dashboard on the Insights tab is not accessible and we are continuing to investigate.
Apr 14, 02:37 UTC Update - Degradation of Service - Insights Page.
Apr 14, 01:57 UTC Investigating - We are investigating reports of impacted performance for some GitHub services.

Apr 14, 2026 at 01:57 AM | Resolved | Source

Incident with Pages

1d ago

Apr 13, 20:35 UTC Resolved - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
Apr 13, 20:32 UTC Update - We have mitigated the issue with Pages.
Apr 13, 20:30 UTC Monitoring - The degradation affecting Pages has been mitigated. We are monitoring to ensure stability.
Apr 13, 19:57 UTC Update - We are investigating reports of issues with Pages. We will continue to keep users updated on progress towards mitigation.
Apr 13, 19:56 UTC Investigating - We are investigating reports of degraded availability for Pages.

Apr 13, 2026 at 07:57 PM | Resolved | Source

Disruption with some GitHub services

1d ago

Apr 13, 17:40 UTC Resolved - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
Apr 13, 16:59 UTC Update - We have identified the root cause and are rolling out a fix for Copilot. The services should now be in recovery, with expected full recovery in 5 to 10 minutes.
Apr 13, 16:41 UTC Investigating - We are investigating reports of impacted performance for some GitHub services.

Apr 13, 2026 at 04:59 PM | Resolved | Source

Problems with third-party Claude and Codex Agent sessions not being listed in the agents tab dashboard

4d ago

Apr 10, 13:28 UTC Resolved - On April 9, 2026, between 22:59 UTC and April 10, 2026, 13:24 UTC, the Copilot Mission Control service was degraded and did not display Claude and Codex Cloud Agent sessions in the agents tab dashboard. Customers were unable to see, list, or manage their third-party agent sessions during this period. The underlying agent sessions continued to function normally. This was a visibility and management issue only, and no HTTP errors were generated. The API returned successful responses with incomplete results, with an average error rate of 0% and a maximum error rate of 0%. This was due to a code change that introduced a filter which inadvertently excluded third-party agent sessions. We mitigated the incident by reverting the problematic code change and deploying the fix to production. We are working to add automated monitoring for dashboard content visibility and improve integration test coverage for third-party agent session listing to reduce our time to detection and mitigation of issues like this one in the future.
Apr 10, 13:08 UTC Update - We are investigating third-party Claude and Codex Cloud Agent sessions not being listed in the agents tab dashboard.
Apr 10, 13:07 UTC Investigating - We are investigating reports of impacted performance for some GitHub services.

Apr 10, 2026 at 01:28 PM | Resolved | Source

Disruption with some GitHub services

5d ago

Apr 9, 20:36 UTC Resolved - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
Apr 9, 19:52 UTC Update - We continue to investigate periodic delays in Copilot Cloud Agent job processing.
Apr 9, 18:57 UTC Update - We are continuing to investigate Copilot Cloud Agent job delays.
Apr 9, 17:48 UTC Update - Copilot Cloud Agent jobs are being processed and we are monitoring recovery.
Apr 9, 16:57 UTC Update - We are investigating delays processing Copilot Cloud Agent jobs.
Apr 9, 16:20 UTC Update - We are experiencing issues where jobs for the Copilot coding agent are delayed in starting.
Apr 9, 16:20 UTC Investigating - We are investigating reports of impacted performance for some GitHub services.

Apr 9, 2026 at 04:57 PM | Resolved | Source

Disruption with some GitHub services

6d ago

Apr 9, 10:15 UTC Resolved - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
Apr 9, 10:15 UTC Monitoring - The degradation has been mitigated. We are monitoring to ensure stability.
Apr 9, 09:57 UTC Update - We are investigating an issue affecting GitHub Copilot coding agent. Users may experience significant delays when starting new agent sessions, with jobs remaining queued longer than expected. Our team has identified increased load as a contributing factor and is actively working to restore normal performance.
Apr 9, 09:50 UTC Investigating - We are investigating reports of impacted performance for some GitHub services.

Apr 9, 2026 at 09:57 AM | Resolved | Source

Disruption with GitHub notifications

6d ago

Apr 9, 04:57 UTC Resolved - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
Apr 9, 04:57 UTC Monitoring - The degradation has been mitigated. We are monitoring to ensure stability.
Apr 9, 04:42 UTC Investigating - We are investigating reports of impacted performance for some GitHub services.

Apr 9, 2026 at 04:57 AM | Resolved | Source

Disruption with some GitHub services

12d ago

Apr 2, 21:48 UTC Resolved - Between 15:20 and 20:18 UTC on Thursday April 2, Copilot Cloud Agent entered a period of reduced performance. Due to an internal feature being developed for Copilot Code Review, the Copilot Cloud Agent infrastructure started to receive an increased number of jobs. This load eventually caused us to hit an internal rate limit, causing all work to suspend for an hour. During this hour, some new jobs would time out, while others would resume once rate limiting ended. Roughly 40% of jobs in this period were affected. Once the cause of this rate limiting was identified, we were able to disable the new CCR feature via a feature flag. Once the jobs that were already in the queue were able to clear, we didn't see additional instances of rate limiting afterwards.
Apr 2, 21:48 UTC Monitoring - The degradation has been mitigated. We are monitoring to ensure stability.
Apr 2, 20:35 UTC Update - Although we are observing recovery once again, we expect continued periods of degradation. Work that is queued during times of degradation does eventually get processed. We continue to investigate and find a mitigation, and will update again within 2 hours.
Apr 2, 19:28 UTC Update - This issue has recurred. Customers will once again experience false job starts when assigning tasks to Copilot Cloud Agent. We are still investigating and trying to understand the pattern of degradation.
Apr 2, 18:25 UTC Update - We are once again seeing recovery with Copilot Cloud Agent job starts. We are keeping this open while we verify this won't recur.
Apr 2, 17:59 UTC Update - When assigning tasks to Copilot Cloud Agent, the task will appear to be working, but may not actually be running. We are investigating.
Apr 2, 17:49 UTC Investigating - We are investigating reports of impacted performance for some GitHub services.

Apr 2, 2026 at 05:59 PM | Resolved | Source

Copilot Coding Agent failing to start some jobs

12d ago

Apr 2, 16:30 UTC Resolved - Between 15:20 and 20:18 UTC on Thursday April 2, Copilot Cloud Agent entered a period of reduced performance. Due to an internal feature being developed for Copilot Code Review, the Copilot Cloud Agent infrastructure started to receive an increased number of jobs. This load eventually caused us to hit an internal rate limit, causing all work to suspend for an hour. During this hour, some new jobs would time out, while others would resume once rate limiting ended. Roughly 40% of jobs in this period were affected. Once the cause of this rate limiting was identified, we were able to disable the new CCR feature via a feature flag. Once the jobs that were already in the queue were able to clear, we didn't see additional instances of rate limiting afterwards. This was the same incident declared in https://www.githubstatus.com/incidents/d96l71t3h63k
Apr 2, 16:28 UTC Update - When assigning tasks to Copilot Cloud Agent, the task will appear to be working, but may not actually be running. We are investigating.
Apr 2, 16:18 UTC Investigating - We are investigating reports of impacted performance for some GitHub services.

Apr 2, 2026 at 04:30 PM | Resolved | Source

GitHub audit logs are unavailable

13d ago

Apr 1, 16:10 UTC Resolved - On April 1, 2026, between 15:34 UTC and 16:02 UTC, our audit log service lost connectivity to its backing data store due to a failed credential rotation. During this 28-minute window, audit log history was unavailable via both the API and web UI. This resulted in 5xx errors for 4,297 API actors and 127 github.com users. Additionally, events created during this window were delayed by up to 29 minutes in github.com and event streaming. No audit log events were lost; all audit log events were ultimately written and streamed successfully. Customers using GitHub Enterprise Cloud with data residency were not impacted by this incident. We were alerted to the infrastructure failure at 15:40 UTC, six minutes after onset, and resolved the issue by recycling the affected environment, restoring full service by 16:02 UTC. We are conducting a thorough review of our credential rotation process to strengthen its resiliency and prevent recurrence. In parallel, we are strengthening our monitoring capabilities to ensure faster detection and earlier visibility into similar issues going forward.
Apr 1, 16:07 UTC Update - A routine credential rotation has failed for our audit logs service; we have re-deployed our service and are waiting for recovery.
Apr 1, 16:06 UTC Investigating - We are investigating reports of impacted performance for some GitHub services.

Apr 1, 2026 at 04:10 PM | Resolved | Source

Frequently Asked Questions

Is GitHub down right now?

This page shows the real-time status of GitHub. The status is checked automatically by sending requests to GitHub's servers. If the status shows "Down", it means GitHub is currently experiencing issues.

Why is GitHub not working?

GitHub may not be working due to server outages, scheduled maintenance, network issues, or high traffic. Check the current status above for real-time information.

How do I check if GitHub is down for everyone?

BlueMonitor checks GitHub's servers from our monitoring infrastructure. If the status shows "Down" here, it's likely down for everyone. If it shows "Up" but you can't access it, the issue may be on your end.
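You can also verify independently against GitHub's official status endpoint, which follows the standard Statuspage public API format. The sketch below assumes that format (the `indicator` field and its values come from Statuspage's API, not from this page).

```python
import json
import urllib.request

STATUS_URL = "https://www.githubstatus.com/api/v2/status.json"

def is_operational(payload):
    """Statuspage reports "none" when fully operational; "minor",
    "major", and "critical" indicate increasing levels of impact."""
    return payload["status"]["indicator"] == "none"

def fetch_github_status(url=STATUS_URL, timeout=10):
    """Fetch and parse the official status payload."""
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        return json.loads(resp.read().decode("utf-8"))
```

Comparing this official feed with a check from your own machine helps distinguish a GitHub-side outage from a problem on your end.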

What should I do if GitHub is down?

If GitHub is down, you can: wait a few minutes and try again, check GitHub's official status page or social media for updates, clear your browser cache, or try a different network connection.
