From Weeks to Days: How We Re-Architected Two Legacy Platforms Without Stopping Delivery
Two systems, two domains, one root problem: scattered core logic with no single source of truth. A payroll tax consolidation engine modernized to add new tax codes in 2-3 days instead of 3-4 weeks. A gift card platform rebuilt to onboard partners in 8-11 days instead of six weeks.
Arko IT Services
Two systems, the same root problem
Arko IT Services has re-architected legacy systems in two very different domains: enterprise payroll tax processing and digital gift card platform operations. On the surface the two had nothing in common. One processed multi-jurisdiction tax remittances for hundreds of thousands of employees. The other handled gift card issuance and redemption for retail partners.
But map both against the business constraints they were creating, and the root problem is identical: the core logic was scattered, and no single place owned the truth.
In the payroll system, federal and provincial tax consolidation logic was duplicated and hardcoded in several places at once. Adding a new tax code meant hunting down every spot it touched, carefully changing each one, testing the combinations, and confirming nothing had broken in an adjacent jurisdiction. Three to four weeks per tax code.
In the gift card platform, the redemption and integration logic was one monolith. Partners connected straight into the core engine through custom code paths. Commercial deals stalled because engineering timelines could not keep pace with what sales had promised.
The fixes looked different. The pattern underneath was the same.
Story 1: re-architecting the tax consolidation engine
The system and its history
The payroll platform processed remittances for both federal and provincial tax authorities across Canada. The remittance engine was responsible for:
- Rolling up federal and provincial employee tax values from the payroll processing engine
- Applying consolidation rules for the specific remittance type (standard, trust, GRS)
- Generating sequence-managed remittance files in the required format (DDCCPP, DDCHOLD, and related formats for TOHO processing)
- Submitting files to the appropriate government endpoints and tracking acknowledgment
The system had grown by accretion over years. Each new tax code, each new jurisdiction, each new government filing format had been bolted on by a developer working inside a codebase with no extension points. The result:
- Consolidation logic for each tax code was spread across several handler classes, each with its own reading of the same underlying rules
- Federal and provincial rollup calculations were duplicated, so a rule change had to be tracked down and updated in multiple places
- Adding a new tax code meant modifying the core consolidation processor, the sequence number management service, the file generator, and the test suite, even when the new code was logically identical to an existing one apart from its rate parameters
The diagnosis
Mapping where the consolidation logic actually lived produced this picture:
BEFORE: Scattered Consolidation Logic
ConsolidationProcessor.cs
├── HandleFederalBasicTaxCode() [contains logic]
├── HandleFederalSupplementalCode() [contains similar logic, slightly different]
├── HandleProvincialAlberta() [contains similar logic, AB-specific]
├── HandleProvincialOntario() [contains similar logic, ON-specific]
├── HandleProvincialQuebec() [contains logic, QC has different rules]
└── ... (each new code = new method = new risk surface)
RemittanceFileGenerator.cs
├── GenerateDDCCPP() [calls into ConsolidationProcessor - knows about tax codes]
└── GenerateDDCHOLD() [calls into ConsolidationProcessor - knows about tax codes]
SequenceNumberManager.cs
└── AssignSequence() [tax-code-specific branching logic embedded here too]
Here is the insight that mattered: the consolidation algorithm itself was the same for every tax code. What varied was the rate parameters, the applicable jurisdictions, the file format requirements, and the sequence management rules. None of that variation needed to live inside the core algorithm.
The new architecture: plugin-based consolidation
The redesign rested on three components.
The consolidation core is a single, stable algorithm that processes any registered tax code handler. It owns the rollup sequence, the parallel-run reconciliation logic, and the file generation orchestration. It knows nothing about specific tax codes.
The tax code handlers are individual classes, one per tax code, each implementing a common ITaxCodeHandler interface. Each handler owns its rate parameters, jurisdiction applicability, and format specification.
The tax code registry is a configuration-driven registry mapping tax code identifiers to handler implementations. Adding a new tax code now means three steps: implement the interface, register the handler, write handler-level unit tests.
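In C# terms, the seam looks roughly like this. This is a minimal sketch; the interface members, the FED-BASIC identifier, and the 15% rate are illustrative assumptions, not the production code:

using System.Collections.Generic;

// Illustrative plugin seam: the core only ever sees this interface.
public interface ITaxCodeHandler
{
    string TaxCode { get; }                      // e.g. "FED-BASIC" (illustrative)
    string FormatSpecification { get; }          // e.g. "DDCCPP"
    bool AppliesTo(string jurisdiction);         // jurisdiction applicability
    decimal Consolidate(decimal grossTaxable);   // rate parameters live here
}

// One class per tax code: the handler owns its own parameters.
public sealed class FederalBasicHandler : ITaxCodeHandler
{
    public string TaxCode => "FED-BASIC";
    public string FormatSpecification => "DDCCPP";
    public bool AppliesTo(string jurisdiction) => true;  // federal: applies everywhere
    public decimal Consolidate(decimal grossTaxable) => grossTaxable * 0.15m; // illustrative rate
}

// Configuration-driven registry: the core resolves handlers by identifier
// and never branches on specific tax codes itself.
public sealed class TaxCodeRegistry
{
    private readonly Dictionary<string, ITaxCodeHandler> _handlers = new();

    public void Register(ITaxCodeHandler handler) => _handlers[handler.TaxCode] = handler;

    public ITaxCodeHandler Resolve(string taxCode) =>
        _handlers.TryGetValue(taxCode, out var handler)
            ? handler
            : throw new KeyNotFoundException($"No handler registered for tax code '{taxCode}'.");
}

The core resolves and iterates registered handlers, so a new tax code is a new class plus a registry entry. The consolidation algorithm itself never changes.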
graph TD
    subgraph CORE["Consolidation Core - stable, unchanged"]
        ROLLUP[Federal + Provincial Rollup Engine]
        SEQ[Sequence Number Manager]
        GEN[Remittance File Generator]
        ROLLUP --> SEQ
        SEQ --> GEN
    end
    subgraph REGISTRY["Tax Code Registry"]
        R[Registry - configuration-driven]
    end
    subgraph HANDLERS["Tax Code Handlers - pluggable"]
        H1[Federal Basic Handler]
        H2[Federal Supplemental Handler]
        H3[Provincial Alberta Handler]
        H4[Provincial Ontario Handler]
        H5[Provincial Quebec Handler]
        H6[NEW Handler - 2-3 days to add]
    end
    H1 --> R
    H2 --> R
    H3 --> R
    H4 --> R
    H5 --> R
    H6 --> R
    R --> CORE
    subgraph OUTPUT["Output"]
        DDCCPP[DDCCPP File]
        DDCHOLD[DDCHOLD File for TOHO Process]
        TRUSTGL[Trust GL Record]
        GEN --> DDCCPP
        GEN --> DDCHOLD
        GEN --> TRUSTGL
    end
Parallel running period
The existing system kept processing live payroll in production through the entire redesign. The new engine ran alongside it, taking the same inputs and comparing outputs for 6 weeks, covering two full payroll cycles across every active jurisdiction. By the end, reconciliation accuracy on true-positive cases was 100%.
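A simplified sketch of what the comparison harness did. The record shape and the keying by tax code and jurisdiction are assumptions for illustration:

using System.Collections.Generic;
using System.Linq;

// Hypothetical shape of one remittance line, for comparison purposes.
public sealed record RemittanceLine(string TaxCode, string Jurisdiction, decimal Amount);

public static class ParallelRunReconciler
{
    // Compare legacy and new engine outputs keyed by (tax code, jurisdiction);
    // any amount difference is a mismatch to investigate before cutover.
    public static IReadOnlyList<string> Reconcile(
        IEnumerable<RemittanceLine> legacy,
        IEnumerable<RemittanceLine> candidate)
    {
        var legacyByKey = legacy.ToDictionary(l => (l.TaxCode, l.Jurisdiction));
        var mismatches = new List<string>();

        foreach (var line in candidate)
        {
            var key = (line.TaxCode, line.Jurisdiction);
            if (!legacyByKey.TryGetValue(key, out var expected))
                mismatches.Add($"{key}: produced by the new engine only");
            else if (expected.Amount != line.Amount)
                mismatches.Add($"{key}: legacy {expected.Amount} vs new {line.Amount}");
            legacyByKey.Remove(key);
        }

        foreach (var key in legacyByKey.Keys)
            mismatches.Add($"{key}: produced by the legacy engine only");

        return mismatches;
    }
}

The real harness compared full remittance files rather than in-memory records, but the principle is the same: every line the two engines disagree on is a defect to resolve before cutover.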
The result
Adding a new tax code got 92% faster. What used to take 3 to 4 weeks (touching core logic in several classes, coordinating a risk-review sprint, running parallel testing across the full engine) became a 2 to 3 day job.
A few secondary wins came with it:
- The consolidation core became independently testable for the first time
- Federal and provincial rollup calculations finally had one authoritative implementation
- Sequence number management moved into the core, killing a long-running source of bugs nobody could reliably reproduce
Story 2: re-architecting the gift card platform
The system and its constraints
The digital gift card and loyalty platform ran on a monolithic redemption engine. Issuance, redemption, partner integrations, loyalty logic, and financial reconciliation were all welded into a single deployable unit.
The business constraints:
- Partner integration took 4 to 6 weeks each, and it was blocking commercial deals
- Operations was burning 15 hours a week on manual reconciliation
- The loyalty and gift card domains were entangled, so a bug fix in one meant regression testing both
The new architecture: adapter layer and event bus
Two targeted interventions, both using the strangler fig pattern.
The first was an integration adapter layer: a thin layer in front of the existing redemption engine, exposing a stable API contract for partner integrations. New partners connected to the adapter, not the engine, so the engine's internal interface stopped being a public contract.
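In sketch form, the seam might look like the following. All types and names here are hypothetical, and the real contract is broader than a single redemption call:

// Illustrative only: a stable partner-facing contract. Partner requests
// are expressed in adapter terms, never in the engine's internal types.
public sealed record RedemptionRequest(string PartnerId, string CardNumber, decimal Amount);
public sealed record RedemptionResult(bool Approved, decimal RemainingBalance);

public interface IRedemptionEngine   // the existing engine, behind the seam
{
    RedemptionResult Redeem(string cardNumber, decimal amount);
}

// The adapter is the only component that knows the engine's interface,
// so engine-internal changes stop at this boundary instead of breaking partners.
public sealed class PartnerIntegrationAdapter
{
    private readonly IRedemptionEngine _engine;
    public PartnerIntegrationAdapter(IRedemptionEngine engine) => _engine = engine;

    public RedemptionResult Redeem(RedemptionRequest request)
    {
        // Partner-level validation, auth, and rate limiting would sit here,
        // in front of the engine rather than inside it.
        return _engine.Redeem(request.CardNumber, request.Amount);
    }
}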
The second was an event bus. The redemption engine publishes an event to Azure Service Bus on every state change: issuance, redemption, reversal, expiry. Downstream consumers (reconciliation, the loyalty points engine, analytics) subscribe to the events they care about.
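The publish side, roughly. The topic name, event shape, and connection handling are assumptions for illustration; the Azure.Messaging.ServiceBus calls are the library's standard API:

using System;
using System.Text.Json;
using System.Threading.Tasks;
using Azure.Messaging.ServiceBus;

// Hypothetical event shape covering the four state changes.
public sealed record CardEvent(string Type, string CardId, decimal Amount, DateTimeOffset OccurredAt);

public sealed class CardEventPublisher : IAsyncDisposable
{
    private readonly ServiceBusClient _client;
    private readonly ServiceBusSender _sender;

    public CardEventPublisher(string connectionString)
    {
        _client = new ServiceBusClient(connectionString);
        _sender = _client.CreateSender("giftcard-events"); // assumed topic name
    }

    // The engine calls this on every state change: issuance, redemption,
    // reversal, expiry. Subscribers can filter on the Subject property.
    public async Task PublishAsync(CardEvent evt)
    {
        var message = new ServiceBusMessage(JsonSerializer.Serialize(evt))
        {
            Subject = evt.Type
        };
        await _sender.SendMessageAsync(message);
    }

    public async ValueTask DisposeAsync()
    {
        await _sender.DisposeAsync();
        await _client.DisposeAsync();
    }
}

Because consumers hang off the topic rather than the engine, adding one (the analytics pipeline, say) is a new subscription, not an engine change.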
graph TD
    subgraph BEFORE["Before: Monolithic Coupling"]
        PA[Partner A - custom code] --> ENGINE_OLD[Redemption Engine]
        PB[Partner B - custom code] --> ENGINE_OLD
        PC[Partner C - custom code] --> ENGINE_OLD
        ENGINE_OLD -->|manual export| OPS_OLD[Operations - 15 hrs/week]
        ENGINE_OLD --- LOYALTY_OLD[Loyalty Engine - tightly coupled]
    end
    subgraph AFTER["After: Adapter + Event Bus"]
        PD[Partner D] --> ADAPTER[Integration Adapter API]
        PE[Partner E] --> ADAPTER
        PF[Partner F] --> ADAPTER
        ADAPTER --> ENGINE_NEW[Redemption Engine]
        ENGINE_NEW --> BUS[Azure Service Bus]
        BUS --> RECON[Reconciliation Subscriber - automated]
        BUS --> LOYALTY_NEW[Loyalty Engine - decoupled subscriber]
        BUS --> ANALYTICS[Analytics Pipeline]
    end
The result
Partner integration fell from 4 to 6 weeks down to 8 to 11 days. Manual reconciliation dropped to zero. Three commercial partner integrations that had been stuck for months all closed inside 90 days.
The common thread
Two systems, two domains, two different technical interventions. The same architectural move underneath all of it: find where the core logic is scattered, pull it into one authoritative place, and make variation a first-class extensibility concern instead of a core modification.
The business result was the same in both cases. A process that took weeks now takes days. That is what re-architecture looks like when it chases business outcomes instead of technical aesthetics.