
In the world of Information Security, dealing with vulnerabilities is hard. Your attack surface always seems to be growing — and every time something gets fixed, just like a hydra, new findings take its place.
A company’s vulnerability landscape grows in multiple dimensions. Not only do new findings appear for existing tools and systems as they age, but as a business grows, we also inherit new platforms, applications, devices, and cloud services that can be exploited by threat actors. If threats expanded in only one direction, we could stay on track and maintain a solid understanding of threat evolution. But that second dimension, growth and change, forces security teams to be in multiple places at once: hunting the new while still protecting the existing footprint.
That is the challenge in governing vulnerabilities and creating a vulnerability governance program. It’s not just about fixing issues. It’s about building a workflow that consistently takes a finding from “detected” to “resolved,” while keeping teams aligned, preventing chaos, and reducing the risk of things slipping through the cracks.
This article walks through how to define and create a vulnerability pipeline at a high level. The goal is to stay platform-agnostic, so findings from different tools can be treated consistently, prioritized, routed, tracked, and reported on the same way, regardless of where they came from.
Sources
To begin creating an effective vulnerability governance program, we need to identify vulnerability sources. “Sources” in this case doesn’t refer to an endpoint or a website, but rather a tool or scanner that generates findings. Nessus could be a source. Rapid7 could be a source. An EASM tool could be a source. So could cloud security tools, container scanners, dependency scanners, and code scanners — anything that raises a finding that needs to be addressed.
Start by documenting every vulnerability source in your environment, then identify how each one presents a finding. You’ll notice they differ in ways that matter:
- Some tools use CVSS, others use vendor scoring, and some prioritize based on exploit likelihood.
- Some provide solid remediation steps, while others stop at broad best-practice suggestions.
- Some assume SLAs that don’t match your organization and may not let you tune them.
- Some create clean, unique findings, while others generate duplicates or near-duplicates that represent the same underlying problem.
These differences are one of the biggest reasons vulnerability management becomes overwhelming. If you treat every tool output as equal without translation, your teams will get flooded with inconsistent work orders, and security ends up acting like a human conversion layer instead of running a program.
Identifying sources is what allows you to build a pipeline that takes in all types of findings, processes them consistently, and outputs clean, repeatable work orders for teams to action.
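One concrete place this shows up is severity scoring. As a sketch, here is how per-source severity scales might be translated into one internal scale; the source names, field shapes, and thresholds are illustrative assumptions, not real tool APIs:

```python
def to_internal_severity(source: str, raw) -> str:
    """Map a source-specific severity onto one internal label.

    'cvss_tool' and 'vendor_tool' are hypothetical source names; the
    CVSS cut-offs below follow the common 9.0/7.0/4.0 qualitative bands.
    """
    if source == "cvss_tool":  # tools that report a CVSS 0.0-10.0 score
        score = float(raw)
        if score >= 9.0:
            return "critical"
        if score >= 7.0:
            return "high"
        if score >= 4.0:
            return "medium"
        return "low"
    if source == "vendor_tool":  # tools that use their own labels
        return {"urgent": "critical", "important": "high",
                "moderate": "medium", "minor": "low"}.get(raw, "medium")
    raise ValueError(f"unknown source: {source}")

print(to_internal_severity("cvss_tool", 9.8))            # critical
print(to_internal_severity("vendor_tool", "important"))  # high
```

One translation function per source is usually enough to start; the important part is that nothing downstream ever sees a raw vendor scale.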
One important note: this only works if you’re actually scanning what you own. If your environments aren’t covered — devices, apps, cloud workloads, external exposure — then a “lack of findings” might just mean “lack of visibility.” A pipeline can’t provide governance without detection, so coverage and scanning hygiene are part of pipeline health.
Destinations
Opposite the sources sit destinations. Destinations are where vulnerability work orders land: the systems where IT team members and developers already manage their day-to-day work.
This matters more than many security teams expect. A pipeline isn’t successful because it creates tickets. It’s successful because those tickets result in fixes. The best way to make that happen is to put the work where the remediation teams already live, rather than making them hunt through a security-owned queue.
For developers, destinations are often Jira, Azure DevOps, or Linear. In some organizations, teams may even track certain work in tools like Notion if the workflow is lighter. For IT teams, it could be ServiceNow, ManageEngine, FreshService, or another ITSM platform. Many companies have multiple destinations, and that’s okay — vulnerability pipelines should support that reality.
Defining destinations early forces practical questions that you’ll need for governance anyway:
- What fields are required for ticket creation?
- What does “a good ticket” look like for each team?
- How does assignment work (user, queue, component, service)?
- Do we need to update tickets over time as context changes?
- What happens if tickets are closed without the issue being fixed?
Your destination system is where accountability lives, so your pipeline should respect how teams work instead of trying to replace it.
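Supporting multiple destinations is easier if the pipeline talks to one interface and each ticketing system gets its own adapter behind it. A minimal sketch, where `InMemoryDestination` is a hypothetical stand-in for a Jira or ServiceNow adapter:

```python
from abc import ABC, abstractmethod

class Destination(ABC):
    """One interface per ticketing system; the pipeline only calls create()."""
    @abstractmethod
    def create(self, order: dict) -> str:
        """Create a work order and return its ticket key."""

class InMemoryDestination(Destination):
    """Illustrative stand-in: records orders and returns a ticket key."""
    def __init__(self, prefix: str):
        self.prefix = prefix
        self.tickets: list[dict] = []

    def create(self, order: dict) -> str:
        self.tickets.append(order)
        return f"{self.prefix}-{len(self.tickets)}"

dev = InMemoryDestination("DEV")
print(dev.create({"title": "Outdated OpenSSL on web-01"}))  # DEV-1
```

A real adapter would call the destination's API and carry the field mapping and assignment rules for that team, but the pipeline code stays the same regardless of how many destinations exist.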
Integration
Now that we have the left and right sides of the pipeline, it’s time to meet in the middle. Integration is where the pipeline becomes real: it is the logic that transforms raw scanner output into consistent, actionable work orders.
This is where vulnerability programs become scalable… or become painful. If you simply forward findings, you’ll generate an endless stream of noisy tickets, and teams will stop trusting them. A pipeline needs a middle layer that makes output controllable and repeatable.
A good way to start is to work backwards from the destination: define what a “good vulnerability ticket” looks like for the team receiving it.
At a baseline, a useful ticket includes:
- A clear title (what is wrong, in plain language)
- A short description (what was detected and why it matters)
- A recommended fix (specific enough to act on)
- Impacted asset(s) (hostname, instance ID, repo, application name)
- Evidence (scanner output summary, detection timestamp, context)
- Tracking identifiers (CVE, plugin ID, tool finding ID)
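The baseline fields above can be sketched as a single record type. Field names and the example values here are illustrative, not a prescribed schema:

```python
from dataclasses import dataclass, field

@dataclass
class Ticket:
    """Baseline work-order fields; names are illustrative assumptions."""
    title: str            # what is wrong, in plain language
    description: str      # what was detected and why it matters
    fix: str              # specific enough to act on
    assets: list          # hostname, instance ID, repo, app name
    evidence: str         # scanner output summary, detection timestamp
    identifiers: list = field(default_factory=list)  # CVE, plugin ID, etc.

    def render(self) -> str:
        ids = ", ".join(self.identifiers) or "n/a"
        return (f"{self.title}\n{self.description}\n"
                f"Fix: {self.fix}\nAssets: {', '.join(self.assets)}\n"
                f"Evidence: {self.evidence}\nIDs: {ids}")

t = Ticket("Outdated OpenSSL", "Version 1.1.1 detected; end-of-life.",
           "Upgrade to OpenSSL 3.0.x", ["web-01"],
           "Nessus scan, 2024-05-01", ["CVE-2023-0464"])
print(t.render())
```

Rendering every ticket from the same structure is what makes output look consistent to remediation teams, no matter which scanner raised the finding.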
But the deeper value of a pipeline comes from governance rules: the things that prevent your workflow from collapsing under volume.
Common governance rules include:
- De-duplication: prevent multiple tickets for the same issue on the same asset.
- Grouping: combine similar findings into a single work item when the team will fix them together (like patching cycles).
- Filtering: remove informational noise or “not actionable” items so teams don’t get spammed.
- Routing: send issues to the right team automatically based on ownership signals.
- Escalation: increase urgency for findings that are exposed, exploitable, or high-impact.
Integration isn’t just plumbing; it’s where you enforce consistency, reduce noise, and turn raw findings into real work.
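De-duplication, the first rule above, is usually implemented with a stable fingerprint per issue-per-asset. A minimal sketch, assuming findings arrive as dicts with `asset` and `check_id` fields (both names are illustrative):

```python
import hashlib

def fingerprint(finding: dict) -> str:
    """Stable key for 'same issue on same asset'."""
    key = f"{finding['asset']}|{finding['check_id']}"
    return hashlib.sha256(key.encode()).hexdigest()[:16]

seen: dict[str, dict] = {}  # fingerprint -> first finding kept

def dedupe(findings: list[dict]) -> list[dict]:
    """Emit one finding per fingerprint; refresh last_seen on repeats."""
    fresh = []
    for f in findings:
        fp = fingerprint(f)
        if fp in seen:
            seen[fp]["last_seen"] = f["last_seen"]  # known issue, no new ticket
        else:
            seen[fp] = f
            fresh.append(f)
    return fresh

batch = [
    {"asset": "web-01", "check_id": "CVE-2023-0464", "last_seen": "d1"},
    {"asset": "web-01", "check_id": "CVE-2023-0464", "last_seen": "d2"},
    {"asset": "db-01",  "check_id": "CVE-2023-0464", "last_seen": "d1"},
]
print(len(dedupe(batch)))  # 2
```

In production the `seen` map would live in a database keyed by fingerprint, and grouping rules would layer on top of it (for example, collapsing all patching findings for one host into one ticket).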
Normalization and Triage
Once you integrate multiple sources, you run into a fundamental problem: tools describe risk differently. This is why normalization matters.
Normalization means translating findings into a shared internal format so your pipeline can treat everything consistently. You don’t need a perfect universal schema. You just need one structure your pipeline can rely on.
A normalized vulnerability record might include:
- Source (which tool reported it)
- Asset identity (how you track it internally)
- Finding type (patching, configuration, dependency, exposure, etc.)
- Severity and priority (not always the same thing)
- Detection date and last-seen date
- Recommended remediation
- Suggested destination/team
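The record fields above can be sketched as one internal type. This is an assumption about shape, not a universal schema; note that priority is deliberately left empty until triage decides it:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class NormalizedFinding:
    """One internal shape for every source; field names are illustrative."""
    source: str              # which tool reported it
    asset: str               # how you track the asset internally
    finding_type: str        # patching, configuration, dependency, exposure...
    severity: str            # technical rating, translated from the tool
    priority: Optional[str]  # decided later, during triage
    detected: str
    last_seen: str
    remediation: str
    destination: str         # suggested team/queue

nf = NormalizedFinding("nessus", "web-01", "patching", "high", None,
                       "2024-05-01", "2024-05-08",
                       "Upgrade OpenSSL to 3.0.x", "it-queue")
print(nf.severity, nf.priority)  # high None
```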
With normalization in place, triage becomes manageable.
Triage is the set of decisions that determine what happens next. This matters because not every finding should become a ticket immediately, and not every ticket should be treated the same way.
A healthy pipeline supports outcomes like:
- Create a ticket now (actionable and in-scope)
- Merge/group (related to existing work or maintenance)
- Delay (planned patch window or coordinated remediation)
- Close/ignore with justification (false positive, out of scope, or accepted risk)
- Escalate (requires fast action)
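The outcomes above can be expressed as a single triage function. The specific rules and field names here are illustrative assumptions; the point is that every finding deterministically maps to exactly one outcome:

```python
def triage(f: dict) -> str:
    """Map a normalized finding to one pipeline outcome (rules illustrative)."""
    if f.get("false_positive") or f.get("accepted_risk"):
        return "close"       # closed with a recorded justification
    if f.get("internet_exposed") and f.get("severity") == "critical":
        return "escalate"    # requires fast action
    if f.get("existing_ticket"):
        return "merge"       # related to work already in flight
    if f.get("patch_window"):
        return "delay"       # handled in a planned maintenance cycle
    return "create"          # actionable and in-scope: ticket now

print(triage({"severity": "critical", "internet_exposed": True}))  # escalate
print(triage({"severity": "medium"}))                              # create
```

Keeping these rules in one place (and in version control) also gives you an audit trail for why a finding did or did not become a ticket.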
This is also where you establish a critical distinction:
- Severity is the technical “how bad is it.”
- Priority is “how urgently do we need to fix it here.”
Priority is where context matters: internet exposure, production systems, business criticality, compensating controls, and likelihood of exploitation. A mature pipeline uses severity as an input, but treats priority as a decision.
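One simple way to make that decision explicit is to start from severity and shift it up or down based on context. The signals and weights below are illustrative assumptions, not a standard formula:

```python
LEVELS = ["low", "medium", "high", "critical"]

def priority(severity: str, *, internet_exposed: bool = False,
             production: bool = False,
             compensating_control: bool = False) -> str:
    """Severity is an input; priority is a decision shaped by context."""
    idx = LEVELS.index(severity)
    if internet_exposed:
        idx += 1  # reachable by threat actors: raise urgency
    if production:
        idx += 1  # business-critical system: raise urgency
    if compensating_control:
        idx -= 1  # mitigated in the meantime: lower urgency
    return LEVELS[max(0, min(idx, len(LEVELS) - 1))]

print(priority("high", internet_exposed=True))      # critical
print(priority("high", compensating_control=True))  # medium
```

The exact weights matter less than the fact that the logic is written down, applied consistently, and explainable to the teams receiving the tickets.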
SLA and Ownership
Once findings are normalized and triaged, governance needs two things to be true:
- Every actionable vulnerability has an owner.
- Every actionable vulnerability has an expected timeline.
If a vulnerability has no owner, it will age quietly into danger.
Ownership can be determined in a few common ways:
- Asset inventory tags (team, environment, application)
- Service ownership models (app owner, platform team)
- Repo ownership (codeowners or team mappings)
- IT-managed systems mapping (CMDB or inventory records)
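Those signals can be tried in order of trust, with an explicit fallback so unowned findings never vanish. A sketch, where the signal names and the `security-triage` default queue are illustrative assumptions:

```python
def resolve_owner(asset: dict, default: str = "security-triage") -> str:
    """Try ownership signals in order of trust; fall back to a visible queue."""
    for signal in ("inventory_tag", "service_owner", "repo_owner", "cmdb_group"):
        if asset.get(signal):
            return asset[signal]
    return default  # unowned findings still land somewhere someone watches

print(resolve_owner({"cmdb_group": "platform-team"}))  # platform-team
print(resolve_owner({}))                               # security-triage
```

Tracking how often findings fall through to the default queue is a useful metric in itself: it measures how complete your ownership mapping actually is.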
You don’t need perfect ownership mapping on day one, but you do need a strategy for improving it. Routing accuracy is one of the biggest drivers of whether teams trust your program.
Timelines are where SLAs come in. SLAs shouldn’t exist purely as “security pressure.” They exist to create consistent expectations and prevent high-risk items from quietly lingering forever.
A practical approach is to define expected fix timelines by priority level (Critical / High / Medium / Low) and then build workflow outcomes for exceptions:
- documented risk acceptance
- mitigation steps or compensating controls
- planned fix dates with tracking
Without defined outcomes, SLAs become meaningless numbers that everyone misses.
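In code, the timeline side of this is small. The SLA windows below are example numbers to tune for your organization, not a recommendation:

```python
from datetime import date, timedelta

# Example SLA windows per priority level; tune these to your organization.
SLA_DAYS = {"critical": 7, "high": 30, "medium": 90, "low": 180}

def due_date(priority: str, detected: date) -> date:
    """Expected fix date for a finding, based on its priority."""
    return detected + timedelta(days=SLA_DAYS[priority])

def is_breached(priority: str, detected: date, today: date) -> bool:
    """True once a finding has aged past its SLA window."""
    return today > due_date(priority, detected)

d = date(2024, 5, 1)
print(due_date("critical", d))                    # 2024-05-08
print(is_breached("high", d, date(2024, 6, 15)))  # True
```

The harder part is exactly what the article describes: defining what happens when `is_breached` comes back true, so a breach triggers risk acceptance, mitigation, or a tracked fix date rather than silence.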
Verification and Closure
This is the part that many vulnerability programs forget: closing the loop.
Finding → ticket → remediation is not the end of the workflow. The work isn’t complete until the issue is verified as resolved and you can trust that it will stay resolved.
Verification is how you avoid the most frustrating vulnerability management cycle:
- A ticket gets closed
- The scanner still sees the issue
- The ticket gets reopened or duplicated
- Teams lose trust and stop taking tickets seriously
A vulnerability pipeline should define what it means for a finding to be “done.” That might include one or more of the following:
1) Scanner confirmation
The finding disappears in a new scan. This is the cleanest outcome when possible, but it depends on scan schedules and coverage.
2) Manual confirmation
For certain findings, the remediation team might provide evidence (configuration change, version update, deployment reference, screenshot, etc.). This is useful when scanners lag behind reality.
3) Compensating control validation
Sometimes you can’t fix an issue quickly, but you can reduce risk in the meantime (segmentation, access restrictions, hardening, monitoring, temporary blocking). If you allow this, your pipeline should capture it clearly so the issue doesn’t look “closed” when it’s simply “mitigated.”
A good pipeline also benefits from tracking a few lifecycle states:
- Detected: the source found it
- Triaged: decisions were applied (ticket, merge, ignore, etc.)
- In Progress: remediation is underway
- Mitigated: risk reduced, but not fully fixed
- Resolved: fix applied and verified
- Exception / Accepted Risk: tracked with a reason and timeline
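The lifecycle above can be enforced as a small state machine, so a finding can never jump straight from detected to resolved without passing through triage and verification. The allowed transitions below are an illustrative subset:

```python
# Allowed lifecycle transitions (illustrative; adjust to your workflow).
TRANSITIONS = {
    "detected":    {"triaged"},
    "triaged":     {"in_progress", "exception"},
    "in_progress": {"mitigated", "resolved"},
    "mitigated":   {"in_progress", "resolved"},  # mitigated is not done
    "exception":   {"triaged"},  # re-review when the exception expires
    "resolved":    set(),
}

def advance(state: str, new_state: str) -> str:
    """Move a finding to a new state, rejecting illegal transitions."""
    if new_state not in TRANSITIONS[state]:
        raise ValueError(f"illegal transition: {state} -> {new_state}")
    return new_state

s = advance("detected", "triaged")
s = advance(s, "in_progress")
print(advance(s, "mitigated"))  # mitigated
```

Note that "mitigated" deliberately has no direct path to "closed and forgotten": it can only return to in-progress work or reach a verified resolution.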
Even if you don’t implement all of these formally, designing for verification keeps the pipeline honest. The goal is simple: a closed ticket should mean reduced risk, not just a closed task.
Reporting
Reporting is where the pipeline becomes measurable and defensible. Without reporting, vulnerability management turns into “we’re working on it” with no clear proof of progress and no way to identify bottlenecks.
Good reporting answers questions at multiple levels.
Pipeline health (are we operating cleanly?):
- How many findings are entering the pipeline?
- How many are turning into tickets vs being merged/filtered?
- Are we producing clean, consistent work orders?
- Are teams receiving manageable volumes or getting spammed?
Operational progress (are we reducing risk?):
- How many vulnerabilities are being closed each week/month?
- How long do findings stay open?
- How does backlog age look by priority level?
- Are SLAs being met?
Program improvement (are we learning and improving?):
- Which vulnerability types repeat most often?
- Which teams/systems generate the most recurring issues?
- Which sources produce the best signal vs the most noise?
- Where are the scanning gaps (areas with suspiciously low findings)?
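If the pipeline captures priority and detection dates consistently, a metric like backlog age falls out almost for free. A sketch, assuming open findings are dicts with `priority` and `detected` fields (names are illustrative):

```python
from datetime import date

def backlog_age_by_priority(open_findings: list[dict], today: date) -> dict:
    """Average days open per priority level."""
    buckets: dict[str, list[int]] = {}
    for f in open_findings:
        age_days = (today - f["detected"]).days
        buckets.setdefault(f["priority"], []).append(age_days)
    return {p: sum(ages) / len(ages) for p, ages in buckets.items()}

findings = [
    {"priority": "high", "detected": date(2024, 4, 1)},
    {"priority": "high", "detected": date(2024, 4, 21)},
    {"priority": "low",  "detected": date(2024, 1, 1)},
]
print(backlog_age_by_priority(findings, date(2024, 5, 1)))
```

Every other metric in the lists above follows the same pattern: group the normalized records by some captured field and aggregate, which is exactly why consistent identifiers make reporting easy.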
Reporting isn’t a separate thing at the end. It’s designed into the pipeline through the fields you normalize and the data you consistently capture. If you invest in consistent identifiers (asset IDs, owner mapping, priority logic, lifecycle states), reporting becomes easy and meaningful.
Closing Thoughts
Vulnerability governance isn’t defined by the tools you buy. It’s defined by the workflow you build.
Vulnerability pipelines take scattered scanner outputs and turn them into a repeatable process: detect, normalize, triage, route, remediate, verify, and report. When built well, pipelines reduce noise, improve accountability, and create a predictable system that scales as your environment grows.
You don’t need to solve everything on day one. Start simple:
- identify sources
- define destinations
- standardize ticket format
- normalize and apply triage rules
- establish ownership and expected timelines
- verify closures and track exceptions
- measure progress and improve the pipeline over time
Over time, your pipeline becomes one of the strongest foundations of your security program — not because it’s complicated, but because it’s consistent.
Vulnerability Pipelines was originally published in Level Up Coding on Medium.