On Analytical Soundness in Structural Analysis
Many analyses explain results convincingly yet fail to guide decisions. This article examines why analytical failure is often structural rather than technical, and sets out the conditions under which analysis becomes a durable system for reasoning about change, not just explaining results.
Many analyses used in organisational decision-making are numerically correct, professionally presented, and logically consistent – yet still fail to support durable decisions. This article is not concerned with techniques, tools, or visual clarity, but with a more upstream question: when does “analysis” deserve to be trusted as analysis?
Drawing on common but often unexamined practices in financial and performance analysis, I argue that analytical failure is more often structural than technical in nature. The aim here is to make those structural failures visible, and to articulate a small set of conditions under which analysis functions as a reliable system for reasoning about change, rather than as a sequence of explanations assembled after the fact.
What I Mean by Analysis
In business contexts, the word analysis is used loosely. It often refers to reporting, commentary, segmentation, or simply the presentation of results. That is not how I am using the term here.
When I refer to analysis, I mean structured reasoning about how outcomes are generated, how they respond to change, and how decisions propagate through a system. The point of analysis, in this sense, is not to explain results after the fact, but to make causal relationships explicit enough that alternative actions and trade-offs can be assessed before decisions are taken.
Under this definition, dashboards, variance explanations, segmentation, and forecasts are not analysis in themselves. They become analytical only when embedded in a coherent structure that continues to make sense over time, across views, and as conditions change within the range implied by its assumptions.
This distinction matters because analytical soundness is a property of that underlying structure – not of the techniques used to present or summarise results. In this article, analytical soundness means structural soundness – the soundness of the causal structure.
When analysis is structurally sound, it can be examined, challenged, and reused with far less narrative mediation – a property that becomes especially visible when such structures are made inspectable through disciplined visual representation.
What I Mean by Structure (and Causal Structure)
When I use the term structure in this article, I am not referring to formatting, layout, or the organisation of content. I am referring to the internal logic an analysis uses to relate inputs, drivers, and outcomes.
A structure encodes assumptions about how a system behaves – what changes what, under what conditions, and with what constraints. It determines which elements are treated as causes, which as consequences, and how different parts of the analysis interact.
Because reasoning about causes and effects is directional, such structures can often be visualised as hierarchies, with outcomes emerging from underlying drivers and decisions. This is not because analysis must take the form of a tree, but because hierarchical representations are a natural way to depict causal reasoning. The structure itself, however, lies in the meaning of the relationships it encodes, not in the visual form used to represent them.
The term causal is used here in a practical rather than philosophical sense. I am not attempting to resolve questions of causal truth, inference, or proof. Instead, causality is treated as a modelling assumption – a way of representing how outcomes are believed to arise from activities, decisions, constraints, and their interactions. A causal structure, in this context, is judged not by whether it is ultimately “true”, but by whether it continues to support sensible reasoning about change, intervention, and trade-offs when reused over time, viewed through different segmentations, and placed under stress – that is, when conditions move towards the boundaries implied by its assumptions.
A causal structure, then, is simply a structure that makes those relationships explicit. It represents outcomes as the result of activities, decisions, constraints, and their interactions, rather than as the result of classification, aggregation, or comparison alone. It clarifies what can be intervened on, how effects propagate, and where trade-offs arise.
This distinction matters because many analyses we see every day have structure in a superficial sense – they are organised, reconciled, and internally consistent – while lacking a causal structure. They may resemble hierarchies or driver trees, yet encode accounting identities or descriptive groupings rather than mechanisms that can be tested, stressed, or acted upon.
Throughout this article, when I refer to structural soundness, I am referring to the soundness of this underlying causal structure – not to the neatness of presentation, the presence of a hierarchy, or the sophistication of analytical techniques.
Why Professional Analysis Often Fails Structurally
Most analytical failure in business does not arise from poor arithmetic, weak data, or inadequate tools. It arises because analysis is routinely asked to perform tasks it was never structurally designed to support.
In many organisations, analysis is expected to be responsive, explanatory, and reassuring. It must reconcile quickly, align with existing narratives, and adapt smoothly to each new reporting cycle. These expectations reward analyses that explain this result, this variance, or this period convincingly, even if the explanation must be rebuilt from scratch the following month.
As a result, many analytical artefacts are optimised for explanatory adequacy in the moment, not for durability over time. They are good at answering “what happened” and “how does this compare”, but become fragile when asked to support reasoning about change, intervention, or trade-offs. Much of what passes for analysis in organisations is therefore oriented towards describing results, not reasoning about them: it explains what happened convincingly, but does not encode how outcomes are generated or how they would respond to different decisions.
This limitation is rarely obvious upfront. Professional-looking analysis often appears rigorous precisely because it is internally consistent, neatly segmented, and carefully presented. Its weaknesses tend to surface only when the analysis is reused, reinterpreted, or placed under conditions it was never structurally designed to handle.
The problem, then, is not a lack of analytical effort. It is that much of what is labelled analysis is structurally descriptive rather than causally reasoned. It organises outcomes convincingly, but does not encode the mechanisms that produced them in a way that can be reused, tested, or stressed.
This distinction is easy to miss because descriptive structures can be reused as templates even when their meaning does not persist. A bridge can be rebuilt each month. A segmentation can be re-sliced. A scorecard can be reinterpreted. The work continues, but understanding does not accumulate.
The remainder of this article looks at several common forms of professional analysis that fail in this way. These examples are not presented as mistakes or misapplications of technique. They are presented as structural failure modes – patterns of analysis that look sound, feel familiar, and yet consistently fail to function as durable decision support.
Five Professional-Looking Analyses That Fail Structurally
The following examples are widely used forms of analysis that are professionally executed and often well received. Their limitation is not correctness or technique, but the fact that their underlying structure cannot sustain causal reasoning over time, across views, or under stress.
1. Identity-based driver trees
Analyses that decompose outcomes using familiar accounting identities – for example, revenue as volume × price, or cost as fixed + variable – are neat, intuitive, and easy to reconcile. They often resemble hierarchical trees and provide a reassuring sense of completeness.
The limitation is that identity is mistaken for causality. The decomposition explains how a number is constructed, not how it can be changed. In such decompositions, “volume” and “price” usually appear as aggregated outcomes (total units sold, average realised price), each reflecting many underlying decisions and constraints, rather than as levers that can be independently adjusted.
When drivers correspond to mathematical relationships rather than actionable decisions, the structure looks causal while remaining descriptive. The analysis reconciles perfectly, but offers little guidance on intervention.
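A minimal sketch with invented figures makes the point concrete: the standard volume/price variance split reconciles exactly, yet neither term is an independent lever.

```python
# Invented figures: revenue decomposed through the identity volume x price
budget = {"volume": 1000, "price": 12.0}
actual = {"volume": 1100, "price": 11.5}

volume_effect = (actual["volume"] - budget["volume"]) * budget["price"]  # +1200
price_effect = (actual["price"] - budget["price"]) * actual["volume"]    # -550
total = actual["volume"] * actual["price"] - budget["volume"] * budget["price"]

assert abs(volume_effect + price_effect - total) < 1e-9  # reconciles exactly ...
# ... yet "volume" and "price" are aggregated outcomes, not levers: the
# extra 100 units may themselves be the result of the lower price, so
# the two "drivers" cannot be varied independently.
```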
2. Period-by-period bridge analysis
Period-by-period bridge analysis is a staple of performance reporting, often presented visually as a waterfall chart. It explains changes from one period to the next clearly and efficiently, and fits neatly into recurring reporting cycles.
The structural issue is the absence of a stable causal structure linking the starting and ending results. The bridging items are not derived from a causal structure that is shown to persist over time. As a result, they can be split, merged, or relabelled without violating the internal logic of the analysis.
What appears as a distinct driver in one period may be absorbed into a residual the next, or reappear under a different label. The bridge explains this movement, but it does not constrain the explanation of the next one. The waterfall can be reused as a reporting format, but because the underlying causal structure does not persist, explanations do not accumulate and must be reconstructed each period.
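A small illustration with invented numbers: two bridges over the same movement, both internally consistent. Nothing in the format prefers one labelling, or obliges next month’s bridge to reuse either.

```python
# Hypothetical movement from 500 to 580, bridged two ways
start, end = 500, 580

bridge_v1 = {"personnel": +60, "contractors": +30, "other": -10}
bridge_v2 = {"Operations": +75, "IT": +15, "residual": -10}

for bridge in (bridge_v1, bridge_v2):
    assert start + sum(bridge.values()) == end  # both reconcile perfectly
# Internal consistency does not constrain the labelling, and nothing
# obliges next month's bridge to use either set of items.
```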
3. Segmentation-driven explanations
Segmenting results by customer, product, region, or function often reveals striking patterns. Overspends appear concentrated, growth looks uneven, and attention naturally gravitates to the largest variances. This often leads to explanations such as: “The overspend is mainly driven by Team A,” or “Growth is concentrated in Product X,” with the segment itself implicitly treated as the reason for the outcome.
The structural problem arises when segmented results are mistaken for drivers. Segmentation operates on outcomes, but is treated as if it explains their causes. Different cuts of the same data then produce different “reasons”, none of which generalise beyond the chosen view. As the segmentation changes, so does the explanation.
In these cases, segmentation highlights where effects appear, but because the underlying causal structure is not defined, the explanation changes with the view and provides no stable basis for reasoning about intervention or future outcomes.
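A minimal sketch with hypothetical variance records: the same overspend, cut two ways, nominates two different “main drivers”.

```python
from collections import defaultdict

# Hypothetical variance records for the same overspend
records = [
    {"team": "A", "category": "personnel",   "variance": 40},
    {"team": "A", "category": "travel",      "variance": 5},
    {"team": "B", "category": "personnel",   "variance": 10},
    {"team": "B", "category": "contractors", "variance": 45},
]

def top_segment(rows, key):
    totals = defaultdict(float)
    for row in rows:
        totals[row[key]] += row["variance"]
    return max(totals, key=totals.get)

print(top_segment(records, "team"))      # "B"         (55 vs 45)
print(top_segment(records, "category"))  # "personnel" (50 vs 45 vs 5)
# Two cuts, two "main drivers" - and nothing in either cut says why
# the overspend occurred.
```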
4. Scorecards without an analytical spine
Balanced scorecards and multi-dimensional performance views provide breadth and visibility. Financial, operational, customer, and people metrics are tracked together, often with targets and status indicators.
What is usually missing is an explicit logic for resolving conflicts between signals. When metrics move in opposing directions, the analysis provides no guidance on which outcome should be prioritised, or why. Trade-offs are handled through narrative, escalation, or judgement rather than encoded logic. The scorecard observes performance, but does not reason about it. As a result, the same conflicts recur period after period, resolved through judgement each time rather than through accumulated analytical logic.
5. Forecast models that fail stress tests
Many forecast models fit historical data well, reconcile cleanly, and produce plausible base-case projections. Scenario analysis is often implemented by adjusting a small number of parameters.
Their structural weakness becomes visible under stress. Constraints remain implicit, and relationships are assumed to hold indefinitely. When scenarios move beyond a certain range (the model’s implicit validity range), the causal logic embedded in the model no longer holds, even though the calculations continue to run. The model still produces numbers, but loses explanatory power. What remains is extrapolation of past patterns, rather than reasoning about how the system would actually respond to change.
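A toy sketch of the failure, assuming a linear cost relationship and a capacity limit the model does not encode: past the limit, the arithmetic still runs but no longer describes the system.

```python
# Assumed linear relationship fitted on history; CAPACITY is a real
# constraint absent from the model.
def forecast_cost(units, rate=8.0, fixed=200.0):
    return fixed + rate * units  # presumes spare capacity at the margin

CAPACITY = 1500  # beyond this: new capacity, step costs, lead times

for units in (800, 1200, 2400):
    within = units <= CAPACITY
    print(f"units={units} cost={forecast_cost(units):.0f} within_validity_range={within}")
# At 2400 units the arithmetic still runs, but the causal logic - costs
# scale linearly with volume - stopped holding at 1500.
```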
A unifying observation
What these examples share is not poor execution, but a common orientation towards describing outcomes convincingly rather than encoding causal logic durably. Each analysis works locally – for a given period, view, or scenario – without functioning as a stable system for reasoning about decisions.
When conditions change, when data is re-segmented, or when decisions must be made under pressure, this fragility becomes visible. Explanations multiply, priorities blur, and judgement is repeatedly reintroduced to compensate for what the analysis itself cannot resolve.
The Rules – What Analytically Sound Structures Require
If these failures are structural rather than technical, then the question is not how to improve individual analyses, but what an analytical structure must satisfy in order to function as durable decision support.
From the failure modes above, a small set of requirements emerges. These are not best practices, techniques, or stylistic preferences. They are the minimum conditions under which analysis can support durable decision-making, rather than function as a sequence of period-specific explanations.
An analytically sound structure satisfies the following five conditions. They can be read as diagnostic tests rather than prescriptions: if a structure fails them, it may still explain results, but it cannot be relied upon to support decisions.
1. Drivers are actionable
Elements treated as drivers must correspond to activities, decisions, or constraints that can be meaningfully intervened on (for example, pricing decisions, staffing levels, capacity limits, or policy settings). An element that cannot be influenced through identifiable actions or choices functions as a label, not a cause.
This distinction matters because many analyses treat descriptive labels as if they were drivers. Labels reconcile outcomes; genuinely actionable drivers support reasoning about change. An analysis whose “drivers” are in fact non-actionable labels may explain results convincingly, but it cannot inform decisions about what to do differently.
Actionability does not imply direct managerial control. It implies that the driver represents a lever whose movement can be influenced by intervention, even if that intervention is indirect, delayed, or constrained.
2. Granularity is decision-useful
Drivers should be decomposed to the level at which different actions or choices become meaningfully distinct (for example, hiring vs overtime, price change vs discount mix). Further decomposition should stop once it no longer changes the nature of the decisions being considered.
Granularity that is too coarse obscures choice; granularity that is too fine obscures judgement. In both cases, the structure fails to support decision-making.
Decision-useful granularity is reached when the structure differentiates between materially different courses of action, without collapsing into arithmetic detail or operational trivia.
3. Structure is persistent over time within its valid range
A sound analytical structure can be reused across periods within the range of conditions implied by its assumptions, without redefining its drivers or explanatory logic. Values may change and external conditions may shift, but the structure itself should continue to apply so long as conditions remain within that range.
If the logic of explanation must be reconstructed each cycle, the analysis is explanatory rather than structural. It may be responsive and convincing in the moment, but it does not allow understanding to accumulate.
Persistence over time is what allows analysis to move from explanation to projection – from describing what happened to reasoning about what is likely to happen if similar conditions recur or if changes occur within the structure’s valid range.
4. Structure is persistent under re-segmentation within its valid range
A sound structure retains its causal meaning when the same system is viewed through different segmentations – such as by product, function, customer, or region – within the same valid range. Segmentation should localise effects, not redefine causes.
When explanations change materially as segmentation changes, the structure is view-dependent rather than causal. In such cases, each segmentation yields a plausible explanation for the observed changes, but those explanations are not derived from a single underlying causal structure.
Persistence under re-segmentation is therefore a strong test of structural soundness. If a structure cannot survive alternative ways of slicing the same system without exceeding the bounds implied by its assumptions, it is unlikely to support reliable reasoning about decisions.
5. Trade-off logic is explicit
When outcomes move in competing directions, a sound structure makes clear how those conflicts are to be reasoned about (for example, margin vs volume, service levels vs cost, short-term vs long-term outcomes). Trade-offs between objectives should be encoded in the structure itself, rather than deferred to narrative, escalation, or judgement at the point of decision.
This does not eliminate the need for judgement. It confines judgement to situations where uncertainty genuinely exceeds what the structure can resolve.
Explicit trade-off logic is what allows analysis to guide decisions under pressure, rather than merely describe performance after the fact.
A compact synthesis
Taken together, analytically sound structures use actionable drivers at decision-useful levels of granularity, remain persistent over time and under re-segmentation within a defined valid range, and make trade-offs explicit.
When these conditions are met, analysis functions as a durable system for reasoning about change and decision-making. When they are not, even the most polished analytical artefacts tend to collapse into explanation, narrative, and repeated judgement.
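Read as diagnostics, the conditions might be sketched as a checklist – questions for a human reviewer, not an automated test; the phrasing below is illustrative only.

```python
# An illustrative checklist: the five conditions as questions a reviewer
# answers about a given analytical structure.
SOUNDNESS_TESTS = [
    "Can every element treated as a driver be intervened on?",
    "Does decomposition stop where decisions stop being distinct?",
    "Do the same drivers and logic apply next period, within the stated range?",
    "Does the explanation survive re-slicing by function, product, or region?",
    "When outcomes conflict, does the structure say how to reason about the trade-off?",
]

def diagnose(answers):
    """answers: five booleans supplied by a human reviewer."""
    return [q for q, ok in zip(SOUNDNESS_TESTS, answers) if not ok]  # failed tests
```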
A Worked Example – Explaining an OPEX Overspend
Consider a common situation.
Monthly operating expenses (OPEX) come in above budget. Senior leaders want to understand why, and what – if anything – should be done differently next month.
The data available is familiar:
- OPEX by function (Operations, Finance, HR, IT, etc.)
- OPEX by expense category (personnel, IT, administration, travel, etc.)
- Actuals and budget for the current month, with prior periods available for comparison
The expectation placed on analysis is equally familiar. It should:
- explain the overspend clearly,
- identify the main contributors,
- and support a discussion about corrective action.
What follows is not an example of poor analysis. It is an example of analysis that is professionally executed, internally consistent, and widely accepted in practice – yet structurally unsound.
The Conventional Analysis
A typical first step is to segment the overspend.
The variance is broken down by function. Operations accounts for most of the overspend. Finance and HR are broadly on budget. Attention naturally focuses on Operations.
Within Operations, the overspend is then segmented by expense category. Personnel costs and contractor spend stand out. Travel and other discretionary costs are within tolerance.
A short narrative follows:
- “The OPEX overspend is driven primarily by Operations.”
- “Within Operations, higher personnel and contractor costs are the main contributors.”
- “This reflects resourcing pressures and project-related activity during the month.”
The analysis is often presented using:
- tables of variances,
- bar charts by function and category,
- and sometimes a waterfall bridge to reconcile the movement from budget to actual.
At this point, the analysis feels complete. The numbers reconcile. The explanation is plausible. The discussion moves quickly to reassurance, context, and expectation-setting for the next month.
A Pause Here
At this point, many readers will feel that the answer is obvious.
If personnel and contractor costs in Operations are over budget, then the response seems clear: tighten control, slow hiring, reduce contractors.
But this apparent clarity is precisely the problem.
What the analysis provides is directional pressure, not decision logic. It indicates where costs appear, but not which interventions are actually available (for example, slowing hiring vs reducing overtime vs substituting contractors), what trade-offs those choices imply (delivery delays, service degradation, staff burnout), or how those effects would propagate if conditions change (such as demand increasing or deadlines tightening). “Control costs” is an intention, not an analytically supported decision.
The purpose of structural analysis is not to point at pressure points, but to make explicit how outcomes are generated – so that interventions can be evaluated before they are taken, and reused after conditions change.
In the next section, we examine this analysis against the five rules of analytically sound structure – and show precisely where it fails, even though it looks professional.
Diagnosing the Structural Failure – Rule by Rule
Rule 1: Drivers are actionable
What the analysis does
The overspend is “explained” using:
- function (Operations vs others), and
- expense category (employee salaries, contractors, travel, etc.).
Statements such as:
- “Operations drove the overspend”, or
- “Personnel costs are the main contributor”
sound like causal explanations.
Why this fails structurally
Function and expense category are classifications of results, not drivers that can be intervened on.
- “Operations” is an organisational boundary, not an activity.
- “Salary expense” is an accounting category, not a decision.
They describe where the overspend appears, not what actions or decisions caused it.
As a result:
- the analysis reconciles the number,
- but does not identify a lever that management can meaningfully pull.
The rule is violated because the analysis treats labels as causes.
Rule 2: Granularity is decision-useful
What the analysis does
The analysis drills down:
- from total operating expenses,
- to function,
- then to expense category.
This feels like “detail”.
Why this fails structurally
Despite the additional granularity, the decision space does not change.
Knowing that:
- employee salaries in Operations are over budget
does not distinguish between materially different actions, such as:
- hiring above plan,
- overtime usage,
- contractor substitution,
- timing effects,
- or role mix changes.
The structure stops at accounting detail, not at decision-relevant distinctions.
Granularity increases, but decision clarity does not.
Rule 3: Structure is persistent over time within its valid range
What the analysis does
The same segmentation is applied every month:
- by function,
- by expense category.
Why this fails structurally
While the template persists, the explanatory logic does not.
- One month, the overspend is “Operations – personnel”.
- Next month, it is “IT – contractors”.
- Residuals appear and disappear.
- Categories are emphasised or downplayed depending on the movement.
Nothing in the structure constrains what will count as a “driver” next month – or what would count as a driver when the variance flips direction. Nor does it define the structure’s valid range: the conditions under which those drivers are meant to remain meaningful.
The analysis explains this month, but does not accumulate understanding about how operating expenses behave over time.
Persistence of format is mistaken for persistence of structure.
Rule 4: Structure is persistent under re-segmentation within its valid range
What the analysis does
The same overspend can be viewed:
- by function,
- by expense category,
- by cost centre,
- or by project.
Each view yields a different “reason”.
Why this fails structurally
The explanation changes with the segmentation.
- Viewed by function, Operations is the issue.
- Viewed by category, employee salaries are the issue.
- Viewed by project, a specific initiative is the issue.
Each explanation is plausible. None is structurally derived from a single causal model of how operating expenses are generated. Nor is any boundary on validity stated – no indication of which conditions should leave the causal logic unchanged, and which should be treated as regime shifts.
Segmentation localises effects, but here it is being used to create descriptive explanations of outcomes, not to test whether a causal structure holds across views.
Rule 5: Trade-off logic is explicit
What the analysis does
The analysis implicitly invites trade-offs:
- cost vs delivery,
- resourcing vs service levels,
- budget adherence vs operational continuity.
But these trade-offs are not encoded.
Why this fails structurally
When leaders ask:
- “Should we slow hiring?”
- “Should we cut contractors?”
- “Should we accept the overspend to maintain service?”
the analysis offers no logic for prioritisation.
Decisions are resolved through:
- narrative,
- reassurance,
- escalation,
- or judgement.
The analysis observes operating expenses, but it does not reason about them.
What this diagnosis shows
At no point is the analysis wrong:
- the numbers reconcile,
- the explanations are plausible,
- the discussion feels informed.
And yet:
- no stable drivers are identified,
- no decision levers are isolated,
- no causal structure persists across time or views within a stated valid range,
- no trade-offs are made explicit.
This is why such analyses feel familiar and professional – and yet unsatisfying: they provide no durable basis for decision-making.
Reconstructing the Analysis – A Structurally Sound Approach to Operating Expenses
This is not a recommended universal OPEX analysis model. It is a demonstration of what it means to replace classifications with mechanisms.
To rebuild the analysis, the objective is not to find a better explanation of the overspend. It is to construct a structure that would remain meaningful across periods and segmentations, so long as conditions remain within the range implied by its assumptions.
That requires starting from the causal structure of operating expenses, not from their accounting presentation.
Step 1: Start from how operating expenses are generated
Operating expenses are not caused by functions or expense categories.
They are generated by activities, resourcing decisions, and constraints.
At a conceptual level, operating expenses arise from a small number of mechanisms:
- work being performed,
- capacity being provided to perform that work,
- and choices about how that capacity is sourced and deployed.
A structurally sound analysis therefore begins by identifying drivers such as:
- volume of activity or demand,
- capacity levels (headcount, contractor capacity, system capacity),
- utilisation of that capacity,
- cost per unit of capacity,
- and explicit constraints (service levels, regulatory requirements, delivery commitments).
These drivers are not yet numbers.
They are causal roles that remain meaningful across periods and segmentations.
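As an illustrative sketch only – the driver names and the reduction of capacity to FTE counts are assumptions, not a recommended model – those causal roles might be encoded like this:

```python
from dataclasses import dataclass

@dataclass
class OpexDrivers:
    demand_units: float          # volume of activity to be served
    headcount: float             # permanent capacity (FTE)
    contractor_fte: float        # flexible capacity (FTE)
    utilisation: float           # productive share of paid capacity, 0..1
    cost_per_fte: float          # loaded cost per permanent FTE
    cost_per_contractor: float   # loaded cost per contractor FTE

    def opex(self) -> float:
        # OPEX is generated by capacity provided, not by where it is booked
        return (self.headcount * self.cost_per_fte
                + self.contractor_fte * self.cost_per_contractor)

    def meets_demand(self, units_per_fte: float) -> bool:
        # an explicit constraint: capacity deployed must cover the work
        capacity = (self.headcount + self.contractor_fte) * self.utilisation * units_per_fte
        return capacity >= self.demand_units
```

The design choice is that every field names a causal role, not an accounting line.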
Step 2: Define drivers that are actionable and decision-useful
From that structure, drivers are defined at a level where different decisions become distinct.
For example, instead of treating “salary expense” as a driver, the structure distinguishes between:
- approved headcount vs actual headcount,
- permanent staff vs contractors,
- planned utilisation vs overtime,
- role mix changes vs rate changes,
- timing of hiring relative to delivery commitments.
Each of these corresponds to a different decision or trade-off.
Each can be reasoned about separately.
At this point, the analysis has not explained the overspend.
It has established what would count as an explanation.
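Illustratively, and with hypothetical field names, the single “salary expense” line separates into drivers that map to distinct decisions:

```python
from dataclasses import dataclass

@dataclass
class PersonnelDrivers:
    approved_headcount: float   # planning decision
    actual_headcount: float     # outcome of hiring pace relative to plan
    contractor_fte: float       # sourcing choice: flexible vs permanent
    overtime_hours: float       # utilisation choice, distinct from hiring
    avg_perm_rate: float        # rate and role-mix effects, distinct from volume
    avg_contractor_rate: float

# Each field maps to a different intervention: slowing hiring changes
# actual_headcount; substituting contractors changes contractor_fte;
# absorbing peaks changes overtime_hours. A single "salary expense"
# line distinguishes none of these.
```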
Step 3: Apply the same structure to budget and actuals
The same causal structure is then applied to:
- the budget, and
- the actual outcome.
Differences emerge not as ad-hoc bridge items, but as changes in the same drivers:
- higher activity volume than planned,
- faster hiring than budgeted,
- higher reliance on contractors due to timing constraints,
- lower utilisation efficiency than assumed,
- or binding service-level constraints that prevented cost deferral.
Importantly, these differences are not specific to a particular segmentation.
They exist regardless of whether the data is viewed by function, cost centre, project, or any other segmentation.
Segmentation now localises effects rather than creating explanations.
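Continuing the sketch with invented figures (costs in thousands): valuing the same drivers twice – once for budget, once for actuals – yields driver-level movements rather than ad-hoc bridge items.

```python
# Hypothetical figures, costs in thousands. The same drivers valued twice.
budget = {"headcount": 120, "contractor_fte": 10, "utilisation": 0.85,
          "cost_per_fte": 7.0, "cost_per_contractor": 11.0}
actual = {"headcount": 124, "contractor_fte": 16, "utilisation": 0.80,
          "cost_per_fte": 7.1, "cost_per_contractor": 11.5}

def opex(d):
    return d["headcount"] * d["cost_per_fte"] + d["contractor_fte"] * d["cost_per_contractor"]

overspend = opex(actual) - opex(budget)  # 114.4k
driver_moves = {k: round(actual[k] - budget[k], 2) for k in budget}
# e.g. contractor reliance +6 FTE and utilisation -0.05: movements that
# exist regardless of whether the data is later viewed by function,
# cost centre, or project.
print(f"overspend: {overspend:.1f}k", driver_moves)
```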
Step 4: Preserve the structure across time and re-segmentation within its valid range
Because the structure is defined in terms of activities, capacity, and constraints:
- the same drivers apply next month,
- the same logic applies if the overspend appears in a different function,
- and the same structure can be reused even if organisational boundaries change.
What changes over time are:
- the values of drivers,
- the binding constraints,
- and the trade-offs faced.
What does not change is the explanatory logic.
Understanding accumulates.
Step 5: Make trade-off logic explicit
With this structure in place, trade-offs can be reasoned about directly.
For example:
- reducing contractor spend may require accepting delivery delays,
- slowing hiring may increase overtime and burnout,
- holding service levels constant may imply temporary budget overruns.
These are not narrative judgements added after the fact.
They are logical consequences of the structure.
When leaders ask:
- “Should we slow hiring?”
- “Should we accept the overspend?”
- “Where can we intervene without breaking delivery?”
the analysis now provides a clear basis for assessing prioritisation choices and their consequences.
Judgement is still required – but it is applied within a structure, rather than being relied on to compensate for the absence of one.
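One such trade-off, sketched under assumed parameters (productivity, demand, and rates are invented): cutting contractors saves cost until capacity binds, at which point the delivery consequence is exposed directly by the structure.

```python
UNITS_PER_FTE = 40   # assumed productivity per deployed FTE
DEMAND_UNITS = 4300  # assumed work that must be delivered

def evaluate_cut(contractors_cut, headcount=124, contractor_fte=16,
                 utilisation=0.80, cost_per_contractor=11.5):
    remaining = contractor_fte - contractors_cut
    capacity = (headcount + remaining) * utilisation * UNITS_PER_FTE
    saving = contractors_cut * cost_per_contractor   # cost saved, thousands
    shortfall = max(0.0, DEMAND_UNITS - capacity)    # undelivered units
    return saving, shortfall

for cut in (0, 4, 8):
    saving, shortfall = evaluate_cut(cut)
    print(f"cut {cut} contractors: save {saving:.0f}k, shortfall {shortfall:.0f} units")
# Cutting 4 contractors is nearly free; cutting 8 breaches delivery. The
# trade-off is a consequence of the structure, not a narrative added
# afterwards.
```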
What changes – and what doesn’t
What changes in this reconstructed analysis is not the data, the tools, or the visualisations.
What changes is the role of analysis:
from explaining where money went → to reasoning about how operating expenses are generated and how they would respond to different decisions.
The same tables, charts, and reports can still be produced.
But they are now anchored to a structure that persists over time and under re-segmentation within its valid range – even under reporting-cycle pressure.
That is the difference between analysis that reassures, and analysis that supports decisions.
Closing bridge to the article’s core argument
The worked example illustrates a broader point.
Analytical soundness is not achieved by adding detail, improving visuals, or refining narratives.
It is achieved by designing structures that make causal reasoning possible – and reusable.
When analysis is structurally sound, explanation becomes easier, not harder.
When it is not, explanation must be rebuilt every time.
That is the distinction this article has been concerned with.
What this is not
Readers familiar with frameworks such as activity-based costing or zero-based budgeting may recognise some similarities in the way operating expenses are decomposed here. That resemblance is intentional, but limited. Those approaches are concerned primarily with cost attribution or cost justification. The focus in this article is different: whether the analytical structure used to reason about operating expenses continues to support decision-making when conditions change, trade-offs arise, and explanations must persist over time. In that sense, the concern here is not a budgeting technique, but the soundness of the underlying analytical structure.
Conclusion – What It Means for Analysis to Be Sound
The central claim of this article is not that analysis should be more detailed, more sophisticated, or more rigorously presented. It is that analysis should be structurally constraining.
An analysis deserves to be trusted not because it looks rigorous, but because its structure continues to constrain reasoning when conditions change.
Analytically sound structures do not merely support explanation; they limit it. They restrict what can plausibly count as a driver, what can be varied independently, and how trade-offs can be resolved. When structure is sound, explanations cannot be freely reassembled each period without either violating the existing causal logic or explicitly revising it. Understanding accumulates because the space of possible explanations narrows over time.
This is why analytical soundness is not a stylistic property, nor a matter of technique. It is a property of whether the same structure can be used to reason before outcomes are known, as conditions change, and after results shift again. Where this holds, judgement is still required – but it is exercised within a defined causal logic, not relied on to substitute for its absence.
Seen in this light, much of what is called analysis in organisations is structurally descriptive rather than analytically sound. It explains outcomes convincingly, but allows almost any explanation to be reconstructed next time. The effort is real, the professionalism is real – yet the same reasoning must be rebuilt each cycle, because the analysis lacks a structure that clarifies what can plausibly be said, rules out weaker explanations, or makes subsequent decisions easier.
This is also where visualisation finds its proper place. Disciplined visualisation does not compensate for weak structure, nor does it create analytical soundness. It makes sound structures inspectable. When the underlying causal logic is coherent, visualisation allows drivers, patterns, and trade-offs to be examined directly, reducing the need for narrative mediation. When the structure is unsound, visual clarity only accelerates explanation.
Clarity in analysis, then, is not achieved through better charts or better models alone. It emerges when analytically sound structures are paired with disciplined visual representation, so that reasoning about change becomes explicit, inspectable, and reusable.
That is the standard this article argues for – not as a method, but as a test of whether analysis deserves to be trusted.
© 2026 Colin Wu. All rights reserved.
Quotations permitted with attribution. No reproduction without permission.