Vulnerability Pipelines

By Zach Griffin · Published March 6, 2026 · 10 min read · Source: Level Up Coding
(Cover image: the Windows Pipes screensaver)

In the world of information security, dealing with vulnerabilities is hard. Your attack surface always seems to be growing, and like a hydra, every time one issue gets fixed, new findings take its place.

A company’s vulnerability landscape grows in multiple dimensions. Not only do new findings appear for existing tools and systems as they age, but as a business grows, we also inherit new platforms, applications, devices, and cloud services that can be exploited by threat actors. If threats expanded in only one direction, we could stay on track and maintain a solid understanding of threat evolution. But that second dimension, growth and change, forces security teams to be in multiple places at once: hunting the new while still protecting the existing footprint.

That is the challenge in governing vulnerabilities and creating a vulnerability governance program. It’s not just about fixing issues. It’s about building a workflow that consistently takes a finding from “detected” to “resolved,” while keeping teams aligned, preventing chaos, and reducing the risk of things slipping through the cracks.

This article walks through how to define and create a vulnerability pipeline at a high level. The goal is to stay platform-agnostic, so findings from different tools can be treated consistently, prioritized, routed, tracked, and reported on the same way, regardless of where they came from.

Sources

To begin creating an effective vulnerability governance program, we need to identify vulnerability sources. “Sources” in this case doesn’t refer to an endpoint or a website, but rather a tool or scanner that generates findings. Nessus could be a source. Rapid7 could be a source. An EASM tool could be a source. So could cloud security tools, container scanners, dependency scanners, and code scanners — anything that raises a finding that needs to be addressed.

Start by documenting every vulnerability source in your environment, then identify how each one presents a finding. You’ll notice they differ in ways that matter:

  - Severity scales (CVSS scores, vendor ratings, or custom labels)
  - Output formats and export options (dashboards, CSVs, APIs, webhooks)
  - How assets are identified (hostname, IP, repository, container image, cloud resource ID)
  - How duplicates and rescans are handled

These differences are one of the biggest reasons vulnerability management becomes overwhelming. If you treat every tool output as equal without translation, your teams will get flooded with inconsistent work orders, and security ends up acting like a human conversion layer instead of running a program.

Identifying sources is what allows you to build a pipeline that takes in all types of findings, processes them consistently, and outputs clean, repeatable work orders for teams to action.

One important note: this only works if you’re actually scanning what you own. If your environments aren’t covered — devices, apps, cloud workloads, external exposure — then a “lack of findings” might just mean “lack of visibility.” A pipeline can’t provide governance without detection, so coverage and scanning hygiene are part of pipeline health.

Destinations

Opposite of sources are destinations. Destinations are where vulnerability work orders land: the systems where IT team members and developers already manage their day-to-day work.

This matters more than many security teams expect. A pipeline isn’t successful because it creates tickets. It’s successful because those tickets result in fixes. The best way to make that happen is to put the work where the remediation teams already live, rather than making them hunt through a security-owned queue.

For developers, destinations are often Jira, Azure DevOps, or Linear. In some organizations, teams may even track certain work in tools like Notion if the workflow is lighter. For IT teams, it could be ServiceNow, ManageEngine, FreshService, or another ITSM platform. Many companies have multiple destinations, and that’s okay — vulnerability pipelines should support that reality.

Defining destinations early forces practical questions that you’ll need for governance anyway:

  - Which team owns which queue or project?
  - What fields and priority values does each system require?
  - How will status changes flow back to the pipeline?
  - Who is allowed to close a vulnerability ticket?

Your destination system is where accountability lives, so your pipeline should respect how teams work instead of trying to replace it.

Integration

Now that we have the left and right sides of the pipeline, it’s time to meet in the middle. Integration is where the pipeline becomes real: the logic that transforms raw scanner output into consistent, actionable work orders.

This is where vulnerability programs become scalable… or become painful. If you simply forward findings, you’ll generate an endless stream of noisy tickets, and teams will stop trusting them. A pipeline needs a middle layer that makes output controllable and repeatable.

A good way to start is to work backwards from the destination: define what a “good vulnerability ticket” looks like for the team receiving it.

At a baseline, a useful ticket includes:

  - What the finding is, in plain language
  - Which asset it affects
  - The assigned priority and due date
  - Clear remediation guidance
  - A link back to the source finding for detail
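As a sketch, a baseline work order like that might be assembled as one destination-agnostic shape. The `Finding` structure and field names here are illustrative, not any particular tool’s schema:

```python
from dataclasses import dataclass

# Hypothetical normalized finding; fields are illustrative,
# not any specific scanner's schema.
@dataclass
class Finding:
    source: str       # tool that raised the finding
    title: str
    asset: str
    remediation: str
    link: str         # deep link back to the source finding

def build_ticket(finding: Finding, priority: str, due_days: int) -> dict:
    """Render a finding as a destination-agnostic work order."""
    return {
        "summary": f"[{priority}] {finding.title} on {finding.asset}",
        "description": (
            f"Detected by {finding.source}.\n"
            f"Remediation: {finding.remediation}\n"
            f"Source finding: {finding.link}"
        ),
        "priority": priority,
        "due_in_days": due_days,
    }

ticket = build_ticket(
    Finding("Nessus", "Outdated OpenSSL", "web-01",
            "Upgrade openssl to the patched release",
            "https://scanner.example/findings/123"),
    priority="High", due_days=30,
)
print(ticket["summary"])  # [High] Outdated OpenSSL on web-01
```

From here, a thin adapter per destination (Jira, ServiceNow, and so on) maps this one shape onto each system’s fields, so ticket content stays consistent everywhere.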

But the deeper value of a pipeline comes from governance rules: the things that prevent your workflow from collapsing under volume.

Common governance rules include:

  - Deduplication, so a rescan updates an existing ticket instead of opening a new one
  - Grouping related findings by asset or by fix (one patch, one ticket)
  - Severity thresholds that decide what becomes a ticket at all
  - Rate limits or batching so teams aren’t flooded in a single day
  - Suppression rules for accepted risks and known false positives

Integration isn’t just plumbing; it’s where you enforce consistency, reduce noise, and turn raw findings into real work.
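One governance rule that pays off immediately is deduplication: give every finding a stable fingerprint so a rescan refreshes the existing work order instead of opening a duplicate. A minimal sketch, with illustrative key fields:

```python
import hashlib

def fingerprint(source: str, check_id: str, asset: str) -> str:
    """Stable identity for a finding across rescans.
    The key fields (tool, check, asset) are illustrative."""
    raw = f"{source}|{check_id}|{asset}".lower()
    return hashlib.sha256(raw.encode()).hexdigest()[:16]

tickets: dict = {}  # fingerprint -> open work order

def ingest(finding: dict) -> str:
    """Create a ticket for a new finding; refresh an existing one otherwise."""
    fp = fingerprint(finding["source"], finding["check_id"], finding["asset"])
    if fp in tickets:
        tickets[fp]["last_seen"] = finding["scan_date"]
        return "updated"
    tickets[fp] = dict(finding, last_seen=finding["scan_date"])
    return "created"

f = {"source": "Nessus", "check_id": "10863", "asset": "web-01",
     "scan_date": "2026-03-01"}
print(ingest(f))                                # created
print(ingest(dict(f, scan_date="2026-03-08")))  # updated
```

The same fingerprint also lets you detect regressions: if a finding reappears after its ticket was closed, that is a signal, not just another row.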

Normalization and Triage

Once you integrate multiple sources, you run into a fundamental problem: tools describe risk differently. This is why normalization matters.

Normalization means translating findings into a shared internal format so your pipeline can treat everything consistently. You don’t need a perfect universal schema. You just need one structure your pipeline can rely on.

A normalized vulnerability record might include:

  - The source tool and the tool’s own finding identifier
  - A normalized asset identifier
  - A title and description
  - Severity mapped onto one shared scale
  - First-seen and last-seen dates
  - Current status
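A minimal sketch of such a record, with an example severity mapping. The fields are a reasonable baseline rather than a standard schema, and the numeric thresholds simply follow CVSS bands:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class NormalizedFinding:
    source: str        # tool that raised it (Nessus, Rapid7, ...)
    source_id: str     # the tool's own identifier for the finding
    asset: str         # normalized asset identifier
    title: str
    severity: str      # mapped onto one shared scale
    first_seen: date
    last_seen: date
    status: str = "open"
    tags: list = field(default_factory=list)

def normalize_severity(raw) -> str:
    """Map per-tool severity onto one shared scale.
    Thresholds follow CVSS bands; the mapping is an example only."""
    if isinstance(raw, (int, float)):   # numeric CVSS-style score
        if raw >= 9.0:
            return "Critical"
        if raw >= 7.0:
            return "High"
        if raw >= 4.0:
            return "Medium"
        return "Low"
    return str(raw).capitalize()        # already categorical

print(normalize_severity(9.8))     # Critical
print(normalize_severity("high"))  # High
```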

With normalization in place, triage becomes manageable.

Triage is the set of decisions that determine what happens next. This matters because not every finding should become a ticket immediately, and not every ticket should be treated the same way.

A healthy pipeline supports outcomes like:

  - Create a ticket now
  - Batch with related findings into one work order
  - Suppress as a false positive
  - Accept the risk, with an expiry date
  - Escalate for immediate attention

This is also where you establish a really important distinction: severity is what the tool reports; priority is what your organization decides.

Priority is where context matters: internet exposure, production systems, business criticality, compensating controls, and likelihood of exploitation. A mature pipeline uses severity as an input, but treats priority as a decision.
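As a sketch, that decision can be expressed as severity adjusted by context. The specific bumps below are illustrative weights, not a scoring standard:

```python
LEVELS = ["Low", "Medium", "High", "Critical"]

def decide_priority(severity: str, internet_facing: bool,
                    production: bool, compensating_control: bool) -> str:
    """Severity is an input; priority is the decision.
    Context bumps the level up or down (illustrative weights)."""
    idx = LEVELS.index(severity)
    if internet_facing:
        idx += 1              # exposed to the internet: raise it
    if not production:
        idx -= 1              # non-production: lower urgency
    if compensating_control:
        idx -= 1              # mitigation in place: lower urgency
    return LEVELS[max(0, min(idx, len(LEVELS) - 1))]

print(decide_priority("High", internet_facing=True,
                      production=True, compensating_control=False))
# Critical
```

Note that the same High finding lands anywhere from Medium to Critical depending on context, which is exactly the point: the tool’s rating starts the conversation, the pipeline finishes it.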

SLA and Ownership

Once findings are normalized and triaged, governance needs two things to be true:

  1. Every actionable vulnerability has an owner.
  2. Every actionable vulnerability has an expected timeline.

If a vulnerability has no owner, it will age quietly into danger.

Ownership can be determined in a few common ways:

  - Asset tags or CMDB records that map systems to teams
  - Repository or service ownership for code and dependency findings
  - Cloud account or subscription boundaries
  - A default triage queue as a catch-all when no mapping exists

You don’t need perfect ownership mapping on day one, but you do need a strategy for improving it. Routing accuracy is one of the biggest drivers of whether teams trust your program.
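A first pass at routing can be an ordered rule list with a catch-all, so no finding is ever ownerless. The patterns and team names here are hypothetical:

```python
# First match wins; the catch-all guarantees an owner always exists.
ROUTES = [
    ("repo:payments-api", "payments-dev"),
    ("cloud-account:prod-main", "platform-eng"),
    ("tag:workstation", "it-ops"),
]
DEFAULT_OWNER = "security-triage"   # catch-all queue, re-routed by hand

def route(labels: list) -> str:
    """Return the owning team for a finding's asset labels."""
    for pattern, owner in ROUTES:
        if pattern in labels:
            return owner
    return DEFAULT_OWNER

print(route(["repo:payments-api", "tag:container"]))  # payments-dev
print(route(["tag:unknown-device"]))                  # security-triage
```

How often the catch-all fires is itself a useful metric: it measures exactly the routing accuracy that trust in the program depends on.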

Timelines are where SLAs come in. SLAs shouldn’t exist purely as “security pressure.” They exist to create consistent expectations and prevent high-risk items from quietly lingering forever.

A practical approach is to define expected fix timelines by priority level (Critical / High / Medium / Low) and then build workflow outcomes for exceptions:

  - Extension requests with a named approver and a new date
  - Formal risk acceptance with an expiry and periodic review
  - Escalation when a deadline is breached without a decision

Without defined outcomes, SLAs become meaningless numbers that everyone misses.
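A sketch of priority-based timelines with explicit states, so a deadline passing always maps to a defined outcome. The day counts are placeholders for whatever your policy sets:

```python
from datetime import date, timedelta

# Example fix windows by priority; real values are a policy decision.
SLA_DAYS = {"Critical": 7, "High": 30, "Medium": 60, "Low": 90}

def due_date(priority: str, detected: date) -> date:
    return detected + timedelta(days=SLA_DAYS[priority])

def sla_state(priority: str, detected: date, today: date) -> str:
    """on-track / at-risk / breached, so every deadline has an outcome."""
    remaining = (due_date(priority, detected) - today).days
    if remaining < 0:
        return "breached"
    if remaining <= 3:
        return "at-risk"
    return "on-track"

print(sla_state("Critical", detected=date(2026, 3, 1), today=date(2026, 3, 10)))
# breached
```

The "at-risk" state is what makes this actionable: it gives you a window to request an extension or escalate before the breach happens, instead of reporting it afterwards.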

Verification and Closure

This is the part that many vulnerability programs forget: closing the loop.

Finding → ticket → remediation is not the end of the workflow. The work isn’t complete until the issue is verified as resolved and you can trust that it will stay resolved.

Verification is how you avoid the most frustrating vulnerability management cycle: a ticket gets closed, the next scan finds the same issue, a new ticket gets opened, and trust in the program erodes a little more each time.

A vulnerability pipeline should define what it means for a finding to be “done.” That might include one or more of the following:

1) Scanner confirmation
The finding disappears in a new scan. This is the cleanest outcome when possible, but it depends on scan schedules and coverage.

2) Manual confirmation
For certain findings, the remediation team might provide evidence (configuration change, version update, deployment reference, screenshot, etc.). This is useful when scanners lag behind reality.

3) Compensating control validation
Sometimes you can’t fix an issue quickly, but you can reduce risk in the meantime (segmentation, access restrictions, hardening, monitoring, temporary blocking). If you allow this, your pipeline should capture it clearly so the issue doesn’t look “closed” when it’s simply “mitigated.”

A good pipeline also benefits from tracking a few lifecycle states:

  - Open
  - In progress
  - Remediated (pending verification)
  - Verified / closed
  - Risk accepted
  - Reopened

Even if you don’t implement all of these formally, designing for verification keeps the pipeline honest. The goal is simple: a closed ticket should mean reduced risk, not just a closed task.
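One way to keep that honesty enforceable is an explicit allowed-transition table, so a remediated finding can’t become closed without passing a verification step. The state names here are illustrative:

```python
# Allowed lifecycle transitions (illustrative state names).
# "remediated" is pending verification: it can only become
# "verified" (closed) or "reopened", never silently done.
TRANSITIONS = {
    "open": {"in_progress", "risk_accepted"},
    "in_progress": {"remediated"},
    "remediated": {"verified", "reopened"},
    "reopened": {"in_progress"},
    "risk_accepted": {"reopened"},
    "verified": set(),          # terminal
}

def advance(state: str, new_state: str) -> str:
    if new_state not in TRANSITIONS[state]:
        raise ValueError(f"illegal transition: {state} -> {new_state}")
    return new_state

def verify_by_rescan(finding_id: str, latest_scan_ids: set) -> str:
    """Scanner confirmation: verified if the finding is gone from
    the newest scan, reopened if it is still there."""
    return "reopened" if finding_id in latest_scan_ids else "verified"

print(verify_by_rescan("f-123", {"f-456", "f-789"}))  # verified
```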

Reporting

Reporting is where the pipeline becomes measurable and defensible. Without reporting, vulnerability management turns into “we’re working on it” with no clear proof of progress and no way to identify bottlenecks.

Good reporting answers questions at multiple levels.

Pipeline health (are we operating cleanly?):

  - Are findings from every source being ingested and normalized successfully?
  - How often do tickets route to the wrong owner?
  - What share of findings are suppressed as false positives?

Operational progress (are we reducing risk?):

  - Open findings by priority, and how they trend over time
  - Mean time to remediate (MTTR) by priority
  - SLA compliance, and the age of the oldest open criticals

Program improvement (are we learning and improving?):

  - Which classes of vulnerability keep recurring?
  - Which teams or systems need help, tooling, or coverage?
  - Where do findings stall in the pipeline?

Reporting isn’t a separate thing at the end. It’s designed into the pipeline through the fields you normalize and the data you consistently capture. If you invest in consistent identifiers (asset IDs, owner mapping, priority logic, lifecycle states), reporting becomes easy and meaningful.
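With those fields captured consistently, the core numbers fall out of simple aggregation. A sketch over illustrative records:

```python
from datetime import date
from statistics import mean

# Illustrative records using fields the pipeline already normalizes.
records = [
    {"priority": "Critical", "opened": date(2026, 1, 10),
     "closed": date(2026, 1, 25), "sla_days": 7},
    {"priority": "High", "opened": date(2026, 1, 5),
     "closed": date(2026, 1, 20), "sla_days": 30},
    {"priority": "Medium", "opened": date(2026, 2, 1),
     "closed": None, "sla_days": 60},
]

closed = [r for r in records if r["closed"]]
mttr = mean((r["closed"] - r["opened"]).days for r in closed)
within_sla = sum((r["closed"] - r["opened"]).days <= r["sla_days"]
                 for r in closed)

print(f"MTTR: {mttr:.1f} days")                        # MTTR: 15.0 days
print(f"SLA compliance: {within_sla}/{len(closed)}")   # SLA compliance: 1/2
print(f"Open findings: {len(records) - len(closed)}")  # Open findings: 1
```

Nothing here required a reporting tool; the metrics are a direct consequence of the fields the pipeline normalized in the first place.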

Closing Thoughts

Vulnerability governance isn’t defined by the tools you buy. It’s defined by the workflow you build.

Vulnerability pipelines take scattered scanner outputs and turn them into a repeatable process: detect, normalize, triage, route, remediate, verify, and report. When built well, pipelines reduce noise, improve accountability, and create a predictable system that scales as your environment grows.

You don’t need to solve everything on day one. Start simple:

  - Pick one source and one destination, and connect them
  - Define what a good ticket looks like for that one team
  - Add normalization, priority logic, and SLAs as the pipeline earns trust

Over time, your pipeline becomes one of the strongest foundations of your security program — not because it’s complicated, but because it’s consistent.



