
Technical Assessment, Modernization Planning & Implementation · Public Health / Government

Modernization Assessment and Roadmap for a Public Health Surveillance Platform

A phased architecture assessment and modernization roadmap engagement — completed Summer 2025.

A federally-affiliated public health surveillance platform built on Django and React had been in active operation for several years, carrying the operational weight of influenza and respiratory illness data handling across reporting seasons. The codebase reflected the accumulated decisions of a long-lived system: technically functional, mission-critical, and carrying a level of technical debt that was limiting the team's ability to extend, maintain, and transfer ownership of the platform confidently. Protabyte engaged to conduct a comprehensive modernization assessment and deliver a phased roadmap grounded in system realities — not a rewrite proposal, but a practical, prioritized path forward for the team responsible for the platform.

The challenge

The platform had reached an inflection point common to systems that have served a critical function for years while evolving under significant operational and resource constraints. The core functionality was sound. Seasonal data moved. Surveillance workflows executed. The public health mission was being met. But the engineering foundation beneath that functionality had drifted from the standard that a maintainable, extensible, and security-conscious platform requires — and the gap was widening with each passing season.

Several specific areas of concern shaped the assessment scope from the outset.

Settings and configuration architecture had accumulated complexity across environments. Django's settings system had grown to reflect years of incremental change, with environment-specific logic, embedded secrets handling, and configuration values that no single contributor fully understood. Onboarding new engineers or deploying to new environments required institutional knowledge that wasn't documented anywhere.

Security posture had not been systematically reviewed. Dependencies carried known vulnerabilities that had gone unaddressed due to the risk of uncontrolled upgrades. Authentication patterns, data access controls, and the handling of sensitive public health data had evolved organically rather than being designed to meet an explicit security standard. No dependency audit cadence existed.

CI/CD maturity was limited. Builds were not fully automated, test execution was manual in key areas, and deployment required steps that were undocumented or performed by specific individuals. The risk surface for human error in the deployment process was elevated.

Testing coverage was sparse. Unit test gaps were present throughout the codebase, integration testing was largely absent, and no systematic regression coverage existed for the seasonal workflow paths that carried the highest operational risk. Changes to core logic required developers to hold institutional knowledge rather than relying on a test suite to validate behavior.

The application server and runtime strategy had not been revisited since early in the platform's lifecycle. Gunicorn configuration, worker settings, and the approach to serving the Django application had not been reassessed against current load profiles, containerization practices, or infrastructure direction.

The frontend layer carried meaningful accessibility and maintainability debt with direct compliance consequences. The existing datatable implementations did not meet Section 508 requirements — a concrete risk for a federally-affiliated platform. TypeScript was absent entirely from the frontend layer, and the build pipeline relied on a heavily customized Webpack configuration that had grown difficult to maintain. These gaps were addressed during the engagement: TypeScript and modern React patterns were introduced to replace the aging datatables and close the 508 compliance gap, and the scope expanded from there into new feature delivery — Advanced Search functionality, task assignment workflows, and additional capability surfaces the platform needed. The Webpack configuration was assessed and resolved as part of this track.

Underlying all of this was the organizational reality: the platform was operated by a team with real capacity constraints working in a regulated, mission-critical environment where risk tolerance for disruption was low. A rewrite was not the answer. A credible, prioritized, and sequenced plan for improvement was.

Figure 3 · Technical Debt by Architecture Layer

A full-stack assessment: debt was present at every layer, from infrastructure to frontend. Severity is noted per layer; read top-down, frontend to infrastructure.

06 · React Frontend · HIGH
  • Datatable implementations did not meet Section 508 accessibility requirements — critical gap for a federally-affiliated platform
  • TypeScript absent across the frontend layer — introduced as part of the React modernization track; scope expanded to Advanced Search and task assignment workflows
  • Heavily customized Webpack configuration with 15 build-specific dev dependencies — configuration resolved during engagement

05 · REST API Layer · MEDIUM
  • Machine-to-machine endpoints protected by CSRF exemption decorators and environment header checks — not token-based authentication
  • Authentication gap enables unauthorized access if the environment header is spoofed

04 · Django Application · HIGH
  • Authentication configuration carried legacy security risks — password hasher settings, session flags, and API access controls reviewed and corrected across the Django layer
  • 337-line monolithic settings file manages all environments with branching conditional logic
  • Test suite is almost entirely HTTP response-code smoke tests — no model, serializer, or behavioral API tests

03 · Application Server · HIGH
  • Third-party library patched at Docker image build time via shell commands — routine base image updates can silently break builds
  • Python path mismatch between base image and WSGI server requires the patch to remain indefinitely
  • No path to async request handling under current architecture

02 · Container Layer · MEDIUM
  • Five Docker Compose files for different deployment targets — diverged over time with substantial duplication
  • Session and CSRF cookie security flags (Secure, HttpOnly, SameSite) present in settings but commented out
  • No base-plus-override pattern; environment-specific changes require full file maintenance

01 · Infrastructure · MEDIUM
  • Health checks return HTTP 200 only — no validation of downstream dependency availability
  • Logging is file-based with no structured format suitable for log aggregation
  • No integration with centralized log management; incident response relies on manual file inspection

Full-stack assessment · Architecture to Infrastructure

System realities and risk areas

Protabyte's assessment identified risk across eight distinct dimensions. The dimensions were not equally urgent — and recognizing that distinction is what makes a sequenced roadmap possible rather than an undifferentiated list of things to fix.

01

Dependency Health

Core dependencies including the numeric computing library were pinned to versions with documented security vulnerabilities. The application server was bound to a legacy WSGI process manager that required a custom patch to a third-party library to function — a patch applied via shell commands at Docker build time. Several dependencies had been abandoned by their maintainers. The practical consequence: routine base image updates could silently break builds, and there was no clean path to upgrading the application server without eliminating the patch dependency first.
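The first step of a dependency audit like this can be sketched with nothing but the standard library: enumerate every installed distribution and its version, producing the inventory that a scanner such as pip-audit then cross-references against vulnerability data. This is an illustrative sketch, not the tooling used in the engagement.

```python
# Enumerate installed distributions and versions using only the stdlib.
# A vulnerability scanner (e.g. pip-audit) would consume this inventory
# and cross-reference it against CVE databases.
from importlib import metadata

def installed_versions() -> dict[str, str]:
    """Map distribution name -> version for the current environment."""
    return {
        dist.metadata["Name"]: dist.version
        for dist in metadata.distributions()
        if dist.metadata["Name"] is not None
    }

inventory = installed_versions()
```

Pinning this inventory in version control alongside an audit cadence is what turns a one-off review into the recurring check the platform lacked.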

02

Security Posture

Authentication configuration had accumulated security risks over the platform's operational history. Password hasher settings, session and CSRF cookie security flags, and API access control patterns were each reviewed and corrected as part of the engagement. Session and CSRF cookie security flags (Secure, HttpOnly, SameSite) were present in the configuration but had been disabled — a pattern found across several security settings that had been introduced and then commented out without removal. Machine-to-machine API endpoints relied on CSRF exemption decorators with environment header checks as the primary access control mechanism rather than token-based authentication. Dependency versions carried documented vulnerabilities that had not been addressed due to the risk of uncontrolled upstream changes. The security posture work addressed these areas broadly rather than treating any single finding as the primary concern.
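The corrected configuration pattern can be illustrated with a short Django settings fragment. The setting names below are Django's real keys; the values are a hardening sketch, not the platform's actual configuration.

```python
# Hypothetical hardened Django settings fragment (a sketch, not the
# platform's actual configuration).

# Prefer modern hashers and drop SHA-1 from the active list, so legacy
# hashes are upgraded on a user's next login rather than accepted forever.
PASSWORD_HASHERS = [
    "django.contrib.auth.hashers.Argon2PasswordHasher",
    "django.contrib.auth.hashers.PBKDF2PasswordHasher",
]

# The cookie flags that were present but commented out in the assessed code.
SESSION_COOKIE_SECURE = True     # send session cookie over HTTPS only
SESSION_COOKIE_HTTPONLY = True   # not readable from JavaScript
SESSION_COOKIE_SAMESITE = "Lax"  # limit cross-site sends
CSRF_COOKIE_SECURE = True
CSRF_COOKIE_SAMESITE = "Lax"
```

Leaving flags like these commented out is arguably worse than omitting them, because a reader skimming the file can mistake their presence for protection.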

03

Configuration Sprawl

A 337-line monolithic settings file managed all environments with branching conditional logic. Five Docker Compose files existed for different deployment targets, with substantial duplication across them. Local debug configuration was controlled by an informal environment variable that altered static file paths and middleware behavior in ways that were not documented. The consequence was that understanding what any given deployment target actually did required reading through multiple files simultaneously.
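The remediation direction is the standard settings-package decomposition. The sketch below assumes a hypothetical config/settings/ package and an APP_ENV variable; neither name comes from the platform itself.

```python
# Minimal sketch of a base-plus-override settings layout. Module names
# (config/settings/...) and the APP_ENV variable are hypothetical.
import os

# config/settings/base.py: shared, environment-neutral defaults.
DEBUG = False
STATIC_URL = "/static/"

# config/settings/production.py would begin with `from .base import *`
# and override only what differs, e.g.:
#     ALLOWED_HOSTS = os.environ["ALLOWED_HOSTS"].split(",")
# config/settings/local.py would likewise set DEBUG = True.

# One documented variable selects the profile, replacing branching
# conditionals scattered through a single monolithic file.
ACTIVE_PROFILE = os.environ.get("APP_ENV", "local")
SETTINGS_MODULE = f"config.settings.{ACTIVE_PROFILE}"
```

The same base-plus-override idea applies to the Compose files: one base file, one small override per deployment target.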

04

Testing Gaps

The test suite was almost entirely composed of HTTP response-code smoke tests — verifying that endpoints returned 200 or 404, not that they returned correct data or behaved predictably under varied inputs. There were no model tests, no serializer tests, and no behavioral API tests. Coverage enforcement was absent. The Playwright end-to-end test suite existed and was reasonably comprehensive, but it was not connected to CI — meaning it ran manually, infrequently, and without integration into the change validation workflow.
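The gap between a smoke test and a behavioral test is easiest to see side by side. The serialize_report helper below is a toy stand-in for the platform's real serializers; the names are hypothetical.

```python
# Smoke test vs. behavioral test, using a toy serializer as a stand-in.
import unittest

def serialize_report(record: dict) -> dict:
    """Expose only public fields and normalise the season label."""
    return {"id": record["id"], "season": record["season"].strip()}

class ReportSerializerTests(unittest.TestCase):
    def test_smoke(self):
        # What most of the existing suite checked: "it didn't crash".
        result = serialize_report({"id": 1, "season": "2024-25"})
        self.assertIsInstance(result, dict)

    def test_contract(self):
        # What behavioral coverage pins down: exact fields and values,
        # and that internal fields never leak into API output.
        out = serialize_report(
            {"id": 1, "season": " 2024-25 ", "internal_notes": "x"}
        )
        self.assertEqual(out, {"id": 1, "season": "2024-25"})
        self.assertNotIn("internal_notes", out)
```

The smoke test passes even if the serializer returns the wrong season or leaks a sensitive field; only the contract test would catch either regression.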

05

CI/CD Maturity

The only GitHub Actions workflow in place was a dependency review scan. There was no automated test gate, no linting pipeline, no type-checking, and no build validation on pull requests. Changes could be merged and deployed with no automated validation of any kind. In a system with compliance obligations and operational dependencies, this created a meaningful risk surface around every deployment.

06

Frontend Architecture

The frontend layer carried both accessibility and maintainability debt that became an active implementation track during the engagement. The existing datatable implementations did not meet Section 508 compliance requirements — a concrete risk for a federally-affiliated platform handling public health data. TypeScript was absent entirely from the React layer. The build pipeline relied on a heavily customized Webpack configuration with fifteen build-specific development dependencies that had become a maintenance burden. TypeScript and modern React patterns were introduced to replace the aging datatable implementations and close the 508 compliance gap. Once that foundation was in place, the frontend scope expanded to new feature delivery: Advanced Search functionality, task assignment workflows, and additional capability surfaces the platform needed. The Webpack configuration was resolved as part of this track — the heavily customized setup was simplified, reducing build-time complexity and restoring maintainability.

07

Application Server Strategy

The platform ran on a WSGI server with a manual patch applied during image builds to work around a Python version path mismatch. This fragility meant that routine base image updates could silently break the build process. There was no path to async request handling under this architecture, which constrained what the application could do with long-running operations. Resolving this required both upgrading the application server and eliminating the dependency that made the patch necessary.
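One common migration path is to keep Gunicorn as the process manager and run ASGI via Uvicorn workers. The gunicorn.conf.py sketch below assumes that route; the values are starting points, not the platform's tuned configuration.

```python
# Hypothetical gunicorn.conf.py for an ASGI-capable replacement server.
# UvicornWorker is one common route to ASGI under Gunicorn; values below
# are a starting sketch, not the platform's tuned configuration.
import multiprocessing

bind = "0.0.0.0:8000"
workers = multiprocessing.cpu_count() * 2 + 1   # widely used starting heuristic
worker_class = "uvicorn.workers.UvicornWorker"  # requires the uvicorn package
timeout = 60  # kill stuck workers; long-running jobs belong in a task queue
```

Because Gunicorn config files are plain Python, this also eliminates the shell-script patch step: the server is configured declaratively rather than repaired at image build time.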

08

Infrastructure Redundancy

Five environment-specific Compose files had diverged over time, creating inconsistency between what ran locally, in staging, and in production. Health checks were superficial — returning HTTP 200 without validating downstream service availability. Logging was file-based with no structured format suitable for ingestion into a centralized log management system, meaning incident response required manual log file inspection rather than structured query or alerting.
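The recommended alternative to a superficial health check can be sketched in a framework-agnostic way: probe each downstream dependency and report per-check status. check_database and check_cache are hypothetical stand-ins for real probes.

```python
# "Deep" health check sketch: probe downstream dependencies instead of
# returning 200 unconditionally. The probe functions are hypothetical.
def check_database() -> bool:
    return True  # e.g. execute SELECT 1 against the primary

def check_cache() -> bool:
    return True  # e.g. round-trip a sentinel key through the cache

def health() -> tuple[int, dict[str, bool]]:
    """Return an HTTP status plus per-dependency results."""
    checks = {"database": check_database(), "cache": check_cache()}
    return (200 if all(checks.values()) else 503), checks
```

Returning the per-check breakdown in the response body also gives orchestrators and on-call engineers something actionable, rather than a bare 503.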

Figure 1 · Risk Heat Map — Eight Assessment Dimensions

Each dimension plotted by severity (columns) against remediation complexity (rows). Quadrant determines sequencing priority.

High severity, low complexity · Address First
  • Security Posture: authentication configuration · session security settings disabled · access control gaps in API layer · dependency vulnerabilities remediated
  • CI/CD Maturity: zero automated test gates · no linting or type-checking pipeline on pull requests

High severity, high complexity · Plan Carefully
  • Dependency Health: pinned CVEs · build-time library patching · abandoned maintainers
  • Application Server: patched WSGI server · no async path · fragile base image updates
  • Testing Gaps: HTTP smoke tests only · no model or behavioral coverage · e2e disconnected from CI

Medium severity, low complexity · Schedule Soon
  • Infrastructure Redundancy: superficial health checks · file-based logging · no structured log ingestion

Medium severity, high complexity · Strategic Work
  • Configuration Sprawl: 337-line monolithic settings · five diverged Docker Compose files
  • Frontend Architecture: 508 compliance gaps in datatable layer · TypeScript and React modernization delivered · Webpack configuration resolved

Assessment approach

Protabyte approached the engagement as a structured technical assessment, not a consulting review. The goal was to produce a concrete modernization roadmap grounded in what the codebase actually contained, not what a generic assessment framework assumed it might.

The assessment proceeded across eight dimensions, each evaluated for current state, risk level, and improvement feasibility given the team's capacity and the platform's operational constraints.

01

Codebase and Architecture Audit

Reviewed the Django application architecture for separation of concerns, API design patterns, view and serializer organization, and accumulated workarounds. Identified areas where architectural drift had made the codebase harder to reason about and where targeted refactoring would yield the highest maintainability return.

02

Security and Dependency Assessment

Conducted a full dependency audit against known vulnerability databases. Reviewed authentication patterns, session handling, data access controls, and the treatment of sensitive public health data. Assessed the settings architecture for embedded secrets and environment-specific security risks. Produced a prioritized remediation list distinguishing immediate-action items from scheduled improvement work.

03

CI/CD and Deployment Maturity Evaluation

Mapped the current build and deployment process in detail, including manual steps, undocumented procedures, and single-person dependencies. Evaluated the gap between the current state and a baseline of automated, repeatable, low-risk deployment. Scoped the work required to achieve that baseline as a prioritized improvement track.

04

Testing Strategy and Coverage Analysis

Assessed existing test coverage across unit, integration, and end-to-end layers. Identified the highest-risk untested paths, particularly in the seasonal surveillance workflow logic that carried the greatest operational consequence if failures were introduced. Defined a sequenced testing strategy that built toward meaningful regression coverage without requiring the team to stop feature delivery.

05

Runtime and Infrastructure Strategy Review

Evaluated the application server configuration — Gunicorn settings, worker model, request handling capacity — against current load profiles and deployment context. Assessed containerization state and the alignment between the current runtime approach and the infrastructure direction the organization was targeting.

06

Frontend Modernization & Feature Delivery

The frontend work began with a concrete compliance requirement: existing datatable implementations did not meet Section 508 accessibility standards, creating risk for a federally-affiliated platform. TypeScript and modern React patterns were introduced to replace those implementations and establish a maintainable frontend foundation. Once that baseline was in place, the scope expanded to new feature delivery — Advanced Search functionality, task assignment workflows, and additional capability surfaces. The Webpack build configuration was assessed and resolved during this track, simplifying the toolchain and eliminating accumulated build-time complexity.

07

AI-Assisted System Understanding

Applied AI-assisted analysis throughout the assessment to accelerate the archaeology of a large, long-lived codebase. AI tooling was used to navigate unfamiliar code patterns rapidly, identify structural issues across a broad surface area, surface dependency relationships that would have taken weeks to map manually, and draft elements of the roadmap documentation. What would have been a multi-month assessment was compressed into a focused engagement cycle without sacrificing depth.

08

Phased Modernization Roadmap

Synthesized findings across all eight assessment dimensions into a three-stage roadmap. Phase 1 prioritized security posture and dependency remediation — the areas where accumulated debt carried the most immediate and concrete risk. Phase 2 addressed CI/CD maturity, settings architecture, and testing foundations — the structural improvements that would make ongoing development meaningfully safer. Phase 3 addressed frontend modernization and longer-term architectural distribution, sequenced after the foundation had been stabilized and the team had built confidence in the improvement process.

Figure 2 · Ten-Phase Modernization Roadmap

Phases sequenced by risk profile and implementation dependency. Each phase is independently deliverable with explicit entry criteria. Bar width indicates relative effort.

Security

Phase 1 · Immediate Security Hardening · Days
Remove SHA-1 from active hasher list · enable session and CSRF cookie security flags · add dependency audit scan to CI

Foundation

Phase 2 · Dependency Stabilization · 1–2 wks
Adopt proper dependency management · upgrade pinned CVEs · upgrade Node runtime · establish pre-commit hooks

Phase 3 · CI/CD Foundation · 1–2 wks
Wire automated test execution, linting, type-checking, and build validation into pull request lifecycle · connect e2e suite to nightly workflow

Application Layer

Phase 4 · Application Server Replacement · 1–2 wks
Replace patched WSGI server with a modern ASGI-capable alternative · eliminate build-time patch step · establish async request handling path

Phase 5 · Configuration Architecture · 1–2 wks
Decompose 337-line monolithic settings into environment-specific profiles · consolidate five Compose files using base-plus-override pattern · document environment variable contract

Structural & Frontend

Phase 6 · Test Coverage Expansion · 2–3 wks
Incremental behavioral coverage: model tests · serializer tests · API contract tests · coverage gate in CI

Phase 7 · Django LTS Upgrade · 1–2 wks
Migrate to current Django LTS with new dependency tooling in place · validate integration surface before promoting

Phase 8 · TypeScript Migration · 3–4 wks
Introduce TypeScript to the React layer to close Section 508 compliance gaps and replace aging datatable implementations · progressively typed file by file · scope expanded to deliver Advanced Search and task assignment workflows once foundation was in place

Phase 9 · Build Toolchain Replacement · 2–3 wks
Resolve heavily configured Webpack build toolchain · eliminate 15 build-specific dev dependencies · completed during the TypeScript and React modernization track

Phase 10 · Background Task Infrastructure · 2–3 wks
Introduce proper background task queue for long-running operations · decouple synchronous request cycle from heavy processing

Figure 4 · Architecture Maturity Assessment

Eight domains scored on current state (maturity out of 5) against assessed risk. Priority and recommended phase drive roadmap sequencing.

01 · Security (auth config, session flags, API access controls) · 2/5 → target 4/5 · Critical · Phase 1
02 · Dependencies (pinned CVEs, abandoned packages, patched WSGI server) · 2/5 → target 4/5 · High · Phase 1–2
03 · CI/CD (no automated test gate, linting, or build validation on PRs) · 1/5 → target 4/5 · High · Phase 2
04 · Testing (smoke tests only — no behavioral, model, or serializer coverage) · 2/5 → target 4/5 · High · Phase 2–3
05 · Runtime/Server (build-time patch required; no async handling path) · 2/5 → target 4/5 · High · Phase 2
06 · Configuration (337-line monolithic settings; five diverged Compose files) · 2/5 → target 4/5 · Medium · Phase 2
07 · Frontend (508 compliance gap, no TypeScript, complex Webpack config) · 2/5 → target 4/5 · Medium · Phase 3
08 · Infrastructure (superficial health checks; file-based logging only) · 3/5 → target 4/5 · Medium · Phase 2–3

Scored on current operational state, not theoretical maximum.

Where AI fit in

Public health surveillance platforms are exactly the kind of system where AI-assisted analysis earns its keep. The codebase was large, mixed-stack, and had evolved over several years under real delivery pressure. Settings logic was distributed across multiple files that had diverged from each other. Dependencies had accumulated over time without consistent management. Authentication and security patterns were present in some places and missing in others. The e2e test suite ran in one context; the unit tests barely existed. Understanding where any of this actually stood — across all eight assessment dimensions simultaneously — was a significant surface area problem before it was a judgment problem.

AI tooling was used to compress the archaeology phase of the engagement. Tasks that would have taken days of manual tracing — mapping every place a configuration value was read, identifying all endpoints sharing a particular authentication pattern, surfacing the full dependency graph for a given processing function, auditing comment-to-code consistency across the settings files — were completed in a fraction of the time. This let the engineering work shift quickly from discovery to analysis: from “what is here” to “what does this mean and what should be done about it.”
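A toy illustration of the kind of tracing that was automated: statically finding every read of a Django settings value in a source file. This sketch uses only the stdlib ast module; the real tooling went far beyond it.

```python
# Statically locate every attribute read of a `settings` object in source
# code, the simplest form of "where is this configuration value used?".
import ast

SOURCE = """
from django.conf import settings
timeout = settings.REQUEST_TIMEOUT
if settings.DEBUG:
    pass
"""

def settings_reads(source: str) -> list[str]:
    """Return the sorted names of all settings.<NAME> attribute reads."""
    tree = ast.parse(source)
    return sorted(
        node.attr
        for node in ast.walk(tree)
        if isinstance(node, ast.Attribute)
        and isinstance(node.value, ast.Name)
        and node.value.id == "settings"
    )

print(settings_reads(SOURCE))  # → ['DEBUG', 'REQUEST_TIMEOUT']
```

Run across a whole repository, even a crude pass like this answers "which environments actually consume this value?" in minutes rather than days.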

AI also played a meaningful role in the documentation layer. Turning a set of technical findings spread across three people's notes, multiple file reviews, and a week of codebase examination into a structured requirements specification that both engineers and program leadership could act on is genuinely difficult work. AI-assisted drafting substantially compressed the time between raw findings and polished deliverable — producing the first version of the risk categorization, the phased roadmap structure, and the requirements specification in a form that engineers could then review, correct, and finalize rather than write from scratch.

What AI did not do is provide the engineering judgment the engagement required. Deciding that the CI/CD foundation needed to precede the test coverage investment — because tests without a CI gate provide false confidence rather than real safety — is a sequencing call that requires understanding how these systems interact in practice. Recognizing that the commented-out security flags were more dangerous than fully absent ones, because they created false confidence in anyone reading the configuration, requires knowing what false confidence costs in a regulated environment. Calibrating the phased roadmap to what the team could realistically execute given its operational obligations is an organizational judgment that no tool can make.

How AI divided the work in this engagement

AI accelerated

  • Codebase archaeology across a large, mixed-stack surface
  • Dependency graph mapping and CVE cross-referencing
  • Configuration pattern analysis across diverged files
  • Authentication and access control pattern detection
  • First-draft structuring of risk categories and findings
  • Requirements specification and roadmap documentation

Engineers decided

  • Which findings were significant vs. background noise
  • Sequencing rationale across eight interdependent domains
  • Calibration of effort estimates to team capacity constraints
  • Risk priority calls in a regulated operational context
  • Architectural recommendations and refactoring approach
  • Stakeholder communication and executive framing

Lessons for public sector and regulated organizations

Public sector and government-adjacent platforms carry a particular kind of technical debt: the kind that accumulates in systems that are too important to rebuild and too constrained to keep up with the pace of industry practice. The platform keeps running. The mission keeps being served. And underneath, the gap between what the system requires to be maintained safely and what the engineering foundation actually provides continues to grow.

The organizations that serve these platforms are often fully aware of the problem. They know the dependencies are out of date. They know the deployment process has undocumented steps. They know the test coverage is insufficient for the confidence they need to make changes without risk. What they lack is not awareness — it's a credible, prioritized path forward that respects the system they actually have, the team capacity they actually possess, and the risk tolerance that serving a mission-critical public function actually requires.

This is where the rewrite reflex does the most damage. When a long-lived system reaches an inflection point, the default recommendation from many consultants is some version of "start over." It's analytically defensible in the abstract, and completely disconnected from organizational reality. A rewrite requires sustained investment the organization may not have. It requires migrating a running system without disrupting the operational workflows that depend on it. It introduces a multi-year period during which both systems must be maintained. And it discards the institutional knowledge embedded in a codebase that has been serving a real function for years.

The more valuable answer — and the harder one to produce — is a credible modernization roadmap that works within constraints rather than pretending they don't exist. One that distinguishes between debt that carries immediate risk and debt that can be safely sequenced later. One that gives a team a realistic improvement cadence rather than a transformation mandate. One that makes the system incrementally better, safer, and more maintainable without requiring a bet-the-organization investment to execute.

AI-assisted analysis changed what's possible in this kind of engagement. The archaeology of a large, long-lived codebase — understanding what the code actually does, mapping structural relationships, surfacing risk concentrations, identifying dependency chains — has historically been the slow, expensive part of any modernization assessment. Applied thoughtfully, AI tooling compresses that timeline significantly, allowing an assessment engagement to go deeper in less time, and to produce recommendations grounded in specific code realities rather than general pattern-matching.

Technologies & domains

Python · Django · Django REST Framework · React · Public Health / Surveillance · CI/CD Architecture · Security Assessment · Legacy Modernization · Infrastructure Strategy

Outcome

The engagement delivered a comprehensive modernization assessment and phased roadmap that gave the team a clear, credible, and actionable path forward — without requiring a rewrite, without recommending a disruptive transformation the team could not execute, and without the generic advice that often passes for consulting in this space.

Phase 1 of the roadmap concentrated on security posture and dependency remediation. Specific vulnerabilities were identified and ranked by severity and upgrade complexity. Authentication and data handling gaps were documented with concrete remediation steps. The settings architecture was diagnosed and a practical refactoring approach was outlined that would progressively reduce configuration sprawl without requiring a full environment overhaul.

Phase 2 addressed the structural foundations of sustainable development velocity. CI/CD automation gaps were mapped with a step-by-step path toward automated, low-risk deployment. A testing strategy was defined that started from the areas of highest operational risk and built coverage incrementally. The settings and configuration work continued in this phase, progressing toward an architecture where environment-specific configuration was managed safely and reproducibly.

Phase 3 scoped the frontend modernization work and longer-term architectural improvements, sequenced after the team had stabilized the foundation and established confidence in their improvement cadence. The React layer upgrade path, build tooling modernization, and component architecture improvements were defined with enough specificity to be actionable when the time came.

The roadmap was delivered in Summer 2025. The team left the engagement with a shared, documented understanding of where risk was concentrated, what to address first, and what the path forward looked like across a realistic improvement timeline.

Key results

  • Comprehensive modernization assessment delivered across eight risk and readiness dimensions with a prioritized implementation roadmap
  • Security posture broadly remediated — authentication configuration corrected, session and CSRF security settings enabled, dependency vulnerabilities addressed, and access control gaps resolved across the Django application layer
  • TypeScript and modern React introduced across the frontend layer, replacing aging datatable implementations that did not meet Section 508 accessibility requirements
  • Frontend scope expanded from 508 compliance remediation to new feature delivery: Advanced Search, task assignment workflows, and additional capability surfaces built on the TypeScript and React foundation
  • Webpack build configuration resolved — heavily customized toolchain simplified, build-time dependency footprint reduced, build reliability restored
  • CI/CD maturity gaps documented with a sequenced path to automated, low-risk deployment including test gating and linting pipelines
  • Django settings architecture diagnosed with a practical refactoring path reducing configuration sprawl across environments
  • AI-assisted analysis accelerated codebase archaeology across a large, long-lived system — compressing assessment timelines without sacrificing depth

Capabilities applied

  • Platform Modernization
  • Architecture Leadership
  • Engineering Enablement
  • Regulated Environment Delivery

Related engagements

Government / Public Sector

Legacy Platform Modernization for a Regulated Public Sector Organization

We led the technical modernization of a mission-critical platform serving a regulated public sector organization — migrating from a fragile legacy codebase to a maintainable, API-first architecture while maintaining continuity of service throughout. The engagement combined direct architecture leadership with implementation work and structured knowledge transfer to the internal team.

Read case study →

Life Sciences / Scientific Computing

Modernizing Scientific Software for Operational Reality

Protabyte led the modernization of a long-running scientific and operational platform that had evolved from domain tooling into production-critical infrastructure. The engagement focused on modernizing legacy Django patterns, implementing cleaner Django REST Framework conventions, refactoring overloaded APIs, and clarifying separation of concerns — while accounting for downstream integration dependencies and protecting operational continuity throughout.

Read case study →

Cross-Industry

AI Workflow Assessment and Modernization Discovery

We conducted a structured AI readiness assessment for an organization evaluating where AI automation could deliver the highest-leverage improvements to their operational workflows — producing a prioritized opportunity map, implementation roadmap, and the technical and organizational groundwork needed to move from assessment to delivery.

Read case study →

Enterprise Technology

Engineering Leadership During a Major Platform Transformation

We provided fractional CTO leadership during a major platform transformation initiative at an enterprise technology company — resolving an architectural impasse, establishing delivery governance, aligning engineering and product teams around a shared technical direction, and providing executive-level technical communication to board and leadership stakeholders throughout.

Read case study →

Work with Protabyte

Facing a similar situation?

If you are managing a platform that has grown more fragile than it should be and more constrained than your team can comfortably work within, a structured assessment is usually the right first step. It produces a clear picture of where the risk actually lives and a sequenced plan for addressing it without disrupting operations.

Discuss a modernization assessment