
GitHub is adding AI-powered security detections to its Code Security offering, aiming to catch more vulnerabilities across a wider set of languages, frameworks, and file types than traditional static analysis alone can reach.
The new capability, entering public preview in early Q2, is designed to work alongside GitHub’s existing CodeQL engine rather than replace it. CodeQL continues to provide deep semantic analysis for supported languages, while the AI system extends coverage to parts of modern codebases that are harder to model with static analysis, such as scripts, infrastructure-as-code, and other non-core application components.
GitHub frames the move as a response to how AI and modern tooling are speeding up software development and widening the mix of technologies used in a single repository. Security teams are increasingly expected to protect code that spans many ecosystems, beyond the “core enterprise” languages that static analysis tools have historically focused on.
To address that, GitHub Code Security now pairs CodeQL with AI-powered security detections as part of a hybrid model. When a pull request is opened, GitHub automatically analyses the proposed changes and chooses the most appropriate method, CodeQL-based static analysis or AI-powered detection, before surfacing the results directly in the pull request alongside existing code scanning findings.
The types of risks GitHub calls out include:
- Unsafe, string-built SQL queries or commands
- Insecure cryptographic algorithms
- Infrastructure configurations that could expose sensitive resources
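GitHub has not published the exact detection rules, but the first two categories map to well-known insecure patterns. A minimal Python sketch of the kind of code such detections typically flag, alongside the conventional fixes (the query and hash values are illustrative only):

```python
import hashlib
import sqlite3

# Toy in-memory database for the example.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "alice"

# FLAGGED: string-built SQL. User input is concatenated into the query,
# so an input like "' OR '1'='1" would change the query's meaning.
unsafe_query = "SELECT role FROM users WHERE name = '" + user_input + "'"

# FIX: a parameterized query keeps data separate from SQL structure.
row = conn.execute(
    "SELECT role FROM users WHERE name = ?", (user_input,)
).fetchone()
print(row[0])  # admin

# FLAGGED: MD5 is cryptographically broken for security-sensitive use.
weak = hashlib.md5(b"secret").hexdigest()

# FIX: prefer a modern hash such as SHA-256.
strong = hashlib.sha256(b"secret").hexdigest()
print(len(strong))  # 64 hex characters
```

Static analyzers have long caught these patterns in mainstream languages; the point of the AI layer is to spot the same classes of mistake in files a CodeQL extractor does not model.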
This approach is meant to put security checks in the same place developers already review and approve code, without requiring them to switch tools or workflows. GitHub positions this as a way to catch vulnerabilities earlier in the development lifecycle and reduce the back-and-forth between security and engineering teams after code has shipped.
In GitHub’s internal testing over a 30-day period, the system processed more than 170,000 security findings. According to GitHub, more than 80% of developer feedback on those findings was positive, an early indicator the company uses to gauge usefulness and signal quality.
The AI detections are particularly targeted at ecosystems that were not previously covered, or not easily covered, by static analysis rules. GitHub highlights strong early coverage in:
- Shell/Bash scripts
- Dockerfiles
- Terraform configurations (HCL)
- PHP
These additions sit within what GitHub describes as its broader “agentic detection platform,” a foundation that powers multiple experiences like security, code quality, and code review across the developer workflow. The company frames this move as “expanded coverage today” that lays the groundwork for more advanced, AI-augmented static analysis over time.
The idea is to combine the precision and structure of traditional static analysis with AI’s ability to reason over a wider variety of file types and contexts, and to adapt as new vulnerability patterns appear in rapidly evolving software stacks.
GitHub underscores that this shift is not just about flagging more issues but about giving developers suggested fixes as part of the same flow. The hybrid model is built to surface both the vulnerability and a potential remediation path inside the pull request itself.
Because GitHub sits at the “merge point” of many development workflows, the platform can also serve as the enforcement layer for security policies. Teams can tie detection results to review and approval rules, helping ensure that issues are addressed before code is merged rather than after deployment.
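In practice, that enforcement can be wired up with GitHub's documented branch protection REST endpoint (`PUT /repos/{owner}/{repo}/branches/{branch}/protection`), which blocks merging until named status checks pass. A hedged sketch, where the check name "CodeQL" and the review count are illustrative assumptions that depend on how a repository's scanning is configured:

```python
import json
import urllib.request


def build_protection_payload(contexts: list[str]) -> dict:
    """Branch protection settings requiring the named status checks to pass.

    The check names in `contexts` (e.g. "CodeQL") are assumptions here;
    use the names your repository's code scanning setup actually reports.
    """
    return {
        "required_status_checks": {
            "strict": True,        # branch must be up to date before merging
            "contexts": contexts,  # checks that must succeed
        },
        "enforce_admins": True,
        "required_pull_request_reviews": {"required_approving_review_count": 1},
        "restrictions": None,
    }


def require_security_checks(owner: str, repo: str, branch: str, token: str) -> int:
    """Apply the protection rule via GitHub's REST API.

    Requires a token with admin rights on the repository.
    """
    req = urllib.request.Request(
        f"https://api.github.com/repos/{owner}/{repo}/branches/{branch}/protection",
        data=json.dumps(build_protection_payload(["CodeQL"])).encode(),
        method="PUT",
        headers={
            "Authorization": f"Bearer {token}",
            "Accept": "application/vnd.github+json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status
```

With a rule like this in place, a pull request carrying an unresolved detection cannot be merged, which is the "enforcement layer" role the article describes.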
At the RSA Conference (RSAC), GitHub plans to demonstrate how these AI-powered detections expand application security coverage within pull requests, as part of a broader pitch around hybrid detection, developer-native remediation, and platform governance. The company will be present at RSAC booth #2327 for live previews.
GitHub is also connecting these new detections more tightly with its remediation tooling. Copilot Autofix, which is already available, can propose fixes for identified issues so developers can review, test, and apply them as part of a normal code review.
GitHub reports that developers are using Autofix at scale. In 2025, Copilot Autofix addressed more than 460,000 security alerts. On average, alerts resolved with Autofix reached closure in 0.66 hours (roughly 40 minutes), compared to 1.29 hours (roughly 77 minutes) without it, cutting average time-to-close by about half.
By combining expanded detection coverage with Autofix, GitHub is pitching a shorter path from “finding risk” to “fixing it.” Detection happens at the pull request boundary, and remediation suggestions appear in the same context, which GitHub argues helps teams move faster without sacrificing review rigor.
The AI-powered detections are part of a broader set of GitHub security initiatives. The company points to its investments in open source security, including funding maintainers, partnering with Alpha-Omega, and expanding access to tools that help strengthen software supply chains and reduce the burden on open source maintainers.
GitHub also highlights the GitHub Security Lab Taskflow Agent, which it describes as effective at finding high-impact vulnerabilities such as authentication bypasses, insecure direct object references (IDORs), and token leaks. The Taskflow Agent is being used to triage categories of vulnerabilities in GitHub Actions and JavaScript projects, and GitHub is sharing tips, technical guides, and best practices through a biweekly newsletter aimed at developers.
Across these efforts, GitHub’s message is that security should be embedded directly into the developer workflow at the point of code change, with automated detection, AI-augmented analysis, and in-context fixes rather than bolted on after code has shipped.