Is GitHub Down?
Real-time status check for github.com
About GitHub Status
BlueMonitor checks GitHub (github.com) by sending automated requests to its servers. If github.com returns a successful status code within 3 seconds, it is marked as operational. Responses that take longer than 3 seconds mark the service as slow, and connection failures or server errors mark it as potentially down.
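For illustration, here is a minimal sketch of the kind of check described above, using the 3-second threshold from the paragraph. The endpoint, timeout, and status labels are illustrative assumptions, not BlueMonitor's actual implementation.

```python
# Minimal sketch of an availability check like the one described above.
# The 3-second "slow" threshold comes from the text; the timeout and the
# "up"/"slow"/"down" labels are illustrative assumptions.
import time
import requests

SLOW_THRESHOLD_SECONDS = 3.0

def check_github(url: str = "https://github.com") -> str:
    start = time.monotonic()
    try:
        response = requests.get(url, timeout=10)
    except requests.RequestException:
        # Connection failures (DNS errors, timeouts, resets) count as down.
        return "down"
    elapsed = time.monotonic() - start
    if response.status_code >= 500:
        # Server errors also count as down.
        return "down"
    if elapsed > SLOW_THRESHOLD_SECONDS:
        return "slow"
    return "up" if response.ok else "down"

if __name__ == "__main__":
    print(check_github())
```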
Recent Incidents
Actions is experiencing degraded availability
May 15, 08:48 UTC Resolved - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
May 15, 08:41 UTC Monitoring - The degradation has been mitigated. We are monitoring to ensure stability.
May 15, 08:29 UTC Update - We are monitoring an issue that was affecting GitHub Actions and causing downstream issues in GitHub Coding Agent and GitHub Code Review Agent. The issue has resolved now but we are closely monitoring our systems for full recovery.
May 15, 08:27 UTC Update - The degradation affecting Pages has been mitigated. We are monitoring to ensure stability.
May 15, 08:26 UTC Update - The degradation affecting Actions has been mitigated. We are monitoring to ensure stability.
May 15, 08:14 UTC Update - Pages is experiencing degraded availability. We are continuing to investigate.
May 15, 08:13 UTC Investigating - We are investigating reports of degraded availability for Actions
[Retroactive] Incident with GitHub.com
May 15, 02:30 UTC Resolved - Beginning at 02:49 UTC on May 15, 2026 and lasting until 03:04 UTC, GitHub.com was unavailable for a subset of customers. This impact has been mitigated and normal service resumed. The issue was rooted in a sudden spike in traffic, with intermittent impact. We've identified the source of the traffic and prevented further disruption.
Incident with CodeQL
May 13, 16:03 UTC Resolved - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
May 13, 15:30 UTC Update - CodeQL impact has been mitigated. We are continuing to monitor for durable recovery.
May 13, 15:26 UTC Monitoring - The degradation has been mitigated. We are monitoring to ensure stability.
May 13, 14:58 UTC Update - We have applied a mitigation to increase processing capacity. We are continuing to monitor to confirm full recovery. We will provide another update by 15:30 UTC.
May 13, 14:43 UTC Update - We are investigating delays affecting CodeQL, the code analysis engine used by Code Scanning. Some users may experience delayed or incomplete code scanning results. Our engineering team is investigating. We will provide another update by 15:15 UTC.
May 13, 14:41 UTC Investigating - We are investigating reports of impacted performance for some GitHub services.
Incident with CodeQL, Webhooks, Notifications, and Slack Integration
May 12, 17:43 UTC Resolved - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
May 12, 17:43 UTC Update - All services have fully recovered.
May 12, 16:59 UTC Update - CodeQL has fully recovered. We're continuing to work on recovery for the remaining impacted services.
May 12, 16:29 UTC Update - Webhooks have fully recovered. Continuing to work on recovery for the other services.
May 12, 16:28 UTC Update - Webhooks is operating normally.
May 12, 16:18 UTC Update - We've established that most delays are related to a queuing service and are working to scale out. Early signals from the scale-out are showing signs of recovery for some services. We'll provide an update when services are fully recovered.
May 12, 15:44 UTC Update - Webhooks is experiencing degraded performance. We are continuing to investigate.
May 12, 15:42 UTC Update - We're continuing to investigate issues with CodeQL actions workflows. We're additionally seeing delays for notifications, webhooks, and the Slack integration.
May 12, 15:13 UTC Update - CodeQL actions are currently experiencing delays, which may result in those actions being stuck in a pending state or having failed due to a timeout.
May 12, 14:38 UTC Investigating - We are investigating reports of degraded performance for CodeQL
Incident with high errors on Git Operations
May 11, 14:33 UTC Resolved - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
May 11, 14:25 UTC Investigating - We are investigating reports of degraded performance for Git Operations
CCR and CCA failing to start for PR comments
May 7, 06:56 UTC Resolved - On May 7, 2026, between 04:12 UTC and 06:13 UTC, Copilot Cloud Agent and Copilot Code Review Agent sessions for pull requests were delayed or failed to start. The issue was caused by follow-up recovery work from a separate Pull Requests incident (https://www.githubstatus.com/incidents/f5pb5d5mr9yh). As part of that recovery, we ran a large database migration, which caused replication delays on several replica hosts. Although those replicas were not serving user traffic, our safeguards correctly treated the elevated replication lag as a signal to slow down writes to the affected database cluster. As a result, some pull request background processing was temporarily delayed. That processing is responsible for sending the internal events that Copilot agents use to begin work, so affected agents did not start until the database replicas caught up. The system recovered once replication lag returned to normal and pull request processing resumed. We are reviewing how this safeguard interacts with recovery migrations so we can reduce the chance of similar secondary impact during future incident recovery work.
May 7, 06:14 UTC Update - Copilot code review and cloud agents are starting again for pull requests, we are monitoring for full recovery.
May 7, 06:13 UTC Monitoring - The degradation has been mitigated. We are monitoring to ensure stability.
May 7, 05:02 UTC Investigating - We are investigating reports of impacted performance for some GitHub services.
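The safeguard described in this postmortem, treating elevated replica lag as a signal to slow writes, can be sketched roughly as follows. The lag threshold, backoff interval, and function names are assumptions for illustration only; GitHub's actual mechanism is not public.

```python
# Rough sketch of a lag-based write throttle like the safeguard described
# above. All thresholds and names here are hypothetical.
import time

MAX_REPLICATION_LAG_SECONDS = 5.0  # assumed threshold

def throttled_write(write_fn, get_max_replica_lag, max_wait_seconds=60.0):
    """Delay a write while any replica's lag exceeds the threshold."""
    waited = 0.0
    while get_max_replica_lag() > MAX_REPLICATION_LAG_SECONDS:
        if waited >= max_wait_seconds:
            raise TimeoutError("replication lag did not recover in time")
        time.sleep(1.0)  # back off and re-check lag
        waited += 1.0
    return write_fn()
```

Under this kind of scheme, background jobs that feed downstream consumers (here, the events Copilot agents start from) stall whenever lag stays high, which matches the secondary impact described above.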
Incident with Pull Requests
May 6, 19:04 UTC Resolved - On May 6, 2026, between 15:12 and 19:02 UTC, creation of new pull request review threads on GitHub.com failed. This included new line comments and file comments on pull requests. Existing PRs and previously created comments were unaffected. This incident was caused by a 32-bit integer key reaching its maximum value in a Vitess lookup table used during PR thread creation. The primary table had been migrated to a 64-bit integer key but the Vitess lookup table remained 32-bit. Once the values in the primary table passed the available 32-bit ID space in the lookup table, attempts to create new review threads began failing, resulting in a near-100% failure rate for new thread creation requests. We mitigated the issue by updating the impacted lookup table definitions across all shards to use 64-bit integer column types, increasing the available ID range and restoring normal operation. Service was fully restored once the schema changes completed globally. To help prevent similar incidents, we are expanding existing monitoring of database columns to include Vitess lookup tables to notify in advance of any tables that are approaching a column size limit. This work is intended to provide earlier detection of columns approaching size limits before customer impact occurs.
May 6, 19:04 UTC Update - Mitigations have been fully applied and we are seeing full recovery of functionality on Pull Request threads. We are continuing to monitor to ensure sustained recovery.
May 6, 17:52 UTC Update - Creation of new Pull Request threads (including line and file comments) continues to be affected although we are seeing partial recovery. A mitigation is being applied to continue to accelerate recovery with complete recovery expected by 8:00pm UTC. Top-level comments on pull requests still function and should remain usable during recovery. Opening and merging pull requests, actions, and other pull request operations remain functional.
May 6, 16:20 UTC Update - Creat
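For context on the failure mode above: a signed 32-bit key tops out at 2,147,483,647, so once IDs in the 64-bit primary table passed that value, the 32-bit lookup table could no longer store valid keys. A rough sketch of the kind of early-warning check the postmortem describes might look like this; the function names and alert threshold are hypothetical, not GitHub's monitoring code.

```python
# Illustration of the 32-bit ID ceiling and a simple headroom check.
# Names and thresholds are hypothetical.
SIGNED_INT32_MAX = 2**31 - 1   # 2,147,483,647: ceiling of a 32-bit key
SIGNED_INT64_MAX = 2**63 - 1   # ceiling after migrating to a 64-bit key

def id_headroom(current_max_id: int, column_max: int = SIGNED_INT32_MAX) -> float:
    """Fraction of the column's ID space still unused."""
    return 1.0 - (current_max_id / column_max)

def should_alert(current_max_id: int, column_max: int = SIGNED_INT32_MAX,
                 threshold: float = 0.20) -> bool:
    # Alert once less than 20% of the ID space remains, well before
    # inserts start failing the way the lookup-table writes did here.
    return id_headroom(current_max_id, column_max) < threshold
```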
Disruption with some GitHub services
May 6, 11:59 UTC Resolved - On May 6, 2026 between 11:02 UTC and 11:13 UTC, users were unable to start or view Copilot Cloud Agent or remote sessions. During this time, requests to the session API returned errors, preventing users from creating new sessions or viewing existing ones. The issue was caused by a configuration change to the service's network routing that inadvertently removed the ingress path for the service. The team reverted the change at 11:13 UTC which restored service. The incident remained open until 11:59 UTC while the team verified full recovery. We are taking steps to improve our deployment validation process to prevent similar configuration changes from impacting production traffic in the future.
May 6, 11:59 UTC Update - We have applied a mitigation and Copilot services have recovered.
May 6, 11:25 UTC Update - We are investigating issues with the ability to start Copilot Cloud Agent sessions and view them.
May 6, 11:21 UTC Investigating - We are investigating reports of impacted performance for some GitHub services.
Incident with Actions, we are investigating reports of degraded availability
May 6, 09:44 UTC Resolved - On May 6, 2026, from approximately 06:45 UTC to 09:15 UTC, GitHub Actions Standard Ubuntu hosted runners were degraded. 17.1% of jobs requesting a standard runner failed. This was caused by an unexpected data shape in the allocation configuration data for standard runners. That data was introduced as part of post-incident remediation work for an incident the previous day and caused new allocations to be blocked as load ramped up for the day. Removing that data at 08:51 allowed allocations to proceed and hosted runner pools to scale up and recover. We are updating the filter logic for this allocation data to be resilient to abnormal data shapes and improving monitoring to alert when allocations are blocked, allowing the team to respond before customer impact starts.
May 6, 09:44 UTC Update - Actions wait times have fully recovered.
May 6, 09:19 UTC Monitoring - The degradation affecting Actions has been mitigated. We are monitoring to ensure stability.
May 6, 09:08 UTC Update - We've applied a mitigation to fix the issues with queuing and running Actions jobs. We are seeing improvements in telemetry and are monitoring for full recovery.
May 6, 08:00 UTC Update - Actions is experiencing issues with ubuntu standard hosted runners leading to high wait times. We are actively investigating the issue
May 6, 07:19 UTC Investigating - We are investigating reports of degraded availability for Actions
Increased Latency and Failures for SSH Git Operations
May 5, 18:35 UTC Resolved - Between approximately 14:00 and 16:10 UTC on May 5, 2026, SSH-based Git operations experienced elevated latency and intermittent failures. On average, the error rate was 0.46% and peaked at 0.6% of SSH write requests. HTTP-based Git operations, including web UI and HTTPS clones, were not affected. The impact was caused by reduced SSH capacity at one of our data center sites. During a period of high traffic, the remaining hosts became overloaded, leading to connection exhaustion and some failures for SSH-based operations. Additional capacity was provisioned to expand SSH capacity and resolve the incident. The expanded capacity was fully online by 18:18 UTC. To reduce the likelihood of similar incidents, we will implement faster scaling solutions for SSH infrastructure and improved alerting for host availability and capacity thresholds.
May 5, 18:35 UTC Update - We've completed our mitigation to prevent further impact. At this time the incident is considered resolved.
May 5, 18:25 UTC Monitoring - The degradation affecting Git Operations has been mitigated. We are monitoring to ensure stability.
May 5, 17:26 UTC Update - We're continuing to work on preventing further impact from the earlier issue. No SSH-based impact is expected at this time. We'll post new updates if impact recurs or once our mitigation is in place.
May 5, 17:23 UTC Investigating - Git Operations is experiencing degraded performance. We are continuing to investigate.
May 5, 16:54 UTC Monitoring - Between approximately 14:00 and 16:10 UTC, customers using SSH-based Git operations may have experienced elevated latency and failures. HTTP-based operations were not impacted. We've identified a suspected root cause and are working to implement a mitigation to prevent further impact.
May 5, 16:49 UTC Investigating - We are investigating reports of impacted performance for some GitHub services.
Frequently Asked Questions
Is GitHub down right now?
This page shows the real-time status of GitHub. The status is checked automatically by sending requests to GitHub's servers. If the status shows "Down", it means GitHub is currently experiencing issues.
Why is GitHub not working?
GitHub may not be working due to server outages, scheduled maintenance, network issues, or high traffic. Check the current status above for real-time information.
How do I check if GitHub is down for everyone?
BlueMonitor checks GitHub's servers from our monitoring infrastructure. If the status shows "Down" here, it's likely down for everyone. If it shows "Up" but you can't access it, the issue may be on your end.
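If you want to confirm whether the problem is on your end, a quick local reachability test from your own machine can help. The sketch below only checks DNS resolution and a TCP connection to github.com on port 443; it does not test GitHub's application health.

```python
# Quick "is it just me?" check: resolve github.com and open a TCP
# connection to port 443 from the local machine.
import socket

def can_reach_github(host: str = "github.com", port: int = 443) -> bool:
    try:
        # create_connection handles DNS resolution and the TCP handshake.
        with socket.create_connection((host, port), timeout=5):
            return True
    except OSError:
        # DNS failure, timeout, or refused connection: likely a local issue
        # if the monitor above still reports GitHub as up.
        return False

if __name__ == "__main__":
    print("reachable" if can_reach_github() else "not reachable from this machine")
```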
What should I do if GitHub is down?
If GitHub is down, you can: wait a few minutes and try again, check GitHub's official status page (githubstatus.com) or social media for updates, clear your browser cache, or try using a different network connection.
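GitHub's official status page is also queryable programmatically. githubstatus.com is a Statuspage site, which conventionally exposes a JSON endpoint at /api/v2/status.json; the sketch below assumes that endpoint and response shape, so treat the exact fields as assumptions and handle missing keys defensively.

```python
# Query GitHub's official status page (githubstatus.com).
# The /api/v2/status.json path and response fields follow the common
# Statuspage convention and are assumed here, not guaranteed.
import requests

def github_official_status() -> str:
    resp = requests.get("https://www.githubstatus.com/api/v2/status.json", timeout=10)
    resp.raise_for_status()
    status = resp.json().get("status", {})
    # "indicator" is typically one of: none, minor, major, critical.
    return status.get("description", "unknown")

if __name__ == "__main__":
    print(github_official_status())
```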