The Second-Order Effects of Enterprise AI: What Nobody Is Modelling
Every AI business case models the direct effects — cost reduction, speed improvement, accuracy gains. Almost none model the second-order effects: how AI changes team dynamics, shifts power structures, alters information flow, and reshapes organisational boundaries.
The business case blind spot
Every enterprise AI deployment starts with a business case. The business case models direct effects: this process currently costs X, AI will reduce it to Y, the ROI is Z. These projections are usually defensible. The AI does reduce the cost. The speed does improve. The accuracy does increase.
What the business case never models — and what we’ve never seen any organisation systematically predict — are the second-order effects: the structural, behavioural, and political consequences that emerge after the AI is deployed, not from what the AI does, but from how its presence changes the organisation around it.
We tracked second-order effects across nine enterprise AI deployments over 18 months. In every case, the second-order effects were more consequential than the first-order effects the business case had modelled. In three cases, they were negative enough to erode the entire projected ROI.
Five second-order effects nobody models
1. Skill atrophy
When AI handles a task that humans previously performed, the humans' ability to perform that task degrades over time. This is well documented in aviation, where manual flying skills erode under sustained automation, and in medicine, where clinicians deskill when diagnostic support takes over. In enterprise contexts, the effect is the same but rarely discussed.
A credit assessment team that relies on AI-generated risk scores gradually loses the ability to assess risk independently. When the AI fails — and it will, eventually, in an edge case it wasn’t trained for — the human fallback capability has degraded. The organisation has become dependent on the AI not for its convenience but for its competence, because the alternative competence has atrophied.
2. Power redistribution
AI deployments change who has access to information, who makes decisions, and who can be held accountable. These are changes to the organisation’s power structure, even when they’re not framed that way.
When a sales forecasting AI replaces the regional managers’ quarterly estimates, the regional managers lose a source of organisational power — the authority that came from being the people who “know the market.” The AI doesn’t just produce a forecast. It redistributes the political capital that the forecasting process used to confer.
We observed regional managers in one organisation actively undermining the AI’s forecasts — not because the forecasts were wrong, but because the alternative (accepting them) diminished their organisational role. This is a rational response to a structural shift. No business case modelled it.
3. Information flow restructuring
AI systems change how information moves through an organisation. An AI that summarises customer feedback for executive consumption removes the layer of people who previously performed that summarisation — and with it, the contextual knowledge they added. The information reaches the executive faster but thinner.
Conversely, AI that makes data available to frontline teams changes what those teams know relative to their managers. When the call centre agent has the same customer analytics the team leader used to hold, the information asymmetry that supported the management relationship has shifted. Neither the agent nor the team leader was prepared for this.
4. Boundary creation
Every AI system creates new organisational boundaries: between the people who understand the model and those who don’t, between the data the model was trained on and the data it encounters in production, between the team that maintains the system and the teams that depend on its outputs.
These boundaries are invisible — they don’t appear on any org chart — but they create the same friction as any organisational boundary: misunderstanding, coordination costs, trust deficits, and translation errors. The AI deployment that was meant to remove friction between teams often creates new friction that nobody anticipated.
5. Accountability diffusion
When a human makes a decision, accountability is clear. When an AI recommends a decision and a human approves it, accountability becomes ambiguous. Did the human exercise judgment, or did they rubber-stamp the AI’s output? When the decision goes wrong, is it the AI’s fault or the human’s?
This ambiguity doesn’t just create legal risk. It creates organisational dysfunction. People become reluctant to override AI recommendations (because if they override and it goes wrong, they’re clearly accountable) and equally reluctant to accept them without question (because if they accept and it goes wrong, they’re also accountable). The result is decision paralysis dressed as AI-augmented decision-making.
The first-order effect of enterprise AI is usually positive: a process gets faster, cheaper, or more accurate. The second-order effects — skill atrophy, power redistribution, information restructuring, boundary creation, and accountability diffusion — are usually unmodelled, often negative, and always more consequential.
What changes
Organisations can’t predict every second-order effect. But they can build the capability to detect and respond to them:
Pre-deployment structural mapping. Before deploying AI, map the current information flows, decision rights, skill dependencies, and power dynamics in the affected area. These are the structures that will be disrupted. Knowing what they look like before disruption makes it possible to see what’s changed after.
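The mapping itself can be kept deliberately simple. As a minimal sketch — all names, flows, and decision rights below are hypothetical, not drawn from any real deployment — the pre- and post-deployment structures can be recorded as data and diffed, so that "what changed" is an explicit output rather than an impression:

```python
# Hypothetical structural maps: who passes information to whom, and who holds
# which decision right. The specific roles and rights are illustrative only.
before = {
    "info_flows": {("analyst", "regional_manager"),
                   ("regional_manager", "executive")},
    "decision_rights": {"quarterly_forecast": "regional_manager"},
}
after = {
    "info_flows": {("forecast_ai", "executive")},
    "decision_rights": {"quarterly_forecast": "forecast_ai"},
}

def structural_diff(before: dict, after: dict) -> dict:
    """Make the deployment's structural disruption explicit:
    which information flows disappeared or appeared, and which
    decision rights moved from one holder to another."""
    return {
        "flows_removed": before["info_flows"] - after["info_flows"],
        "flows_added": after["info_flows"] - before["info_flows"],
        "rights_moved": {
            right: (before["decision_rights"].get(right), holder)
            for right, holder in after["decision_rights"].items()
            if before["decision_rights"].get(right) != holder
        },
    }

diff = structural_diff(before, after)
# rights_moved shows the power redistribution directly, e.g. the forecast
# right moving from the regional manager to the AI system.
print(diff["rights_moved"])
```

Even a crude map like this surfaces the power-redistribution effect described above: the diff shows, before anyone complains, that the forecasting right has moved away from the regional managers.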
Post-deployment monitoring. Don’t just measure the AI’s performance. Measure the organisation’s response to it. Track changes in decision patterns, escalation rates, override rates, cross-functional coordination, and team satisfaction. These are the leading indicators of second-order effects.
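Two of these indicators — override rate and escalation rate — fall straight out of an ordinary decision log. A minimal sketch, assuming a hypothetical log format (the field names are illustrative, not from any particular system): a falling override rate can signal rubber-stamping, while a rising escalation rate can signal the decision paralysis described under accountability diffusion.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    """One logged human-in-the-loop decision (hypothetical schema)."""
    ai_recommendation: str
    human_action: str   # what the reviewer actually decided
    escalated: bool     # pushed up the chain instead of decided locally

def second_order_indicators(log: list) -> dict:
    """Leading indicators of second-order effects from a decision log."""
    n = len(log)
    overrides = sum(1 for d in log if d.human_action != d.ai_recommendation)
    escalations = sum(1 for d in log if d.escalated)
    return {
        "override_rate": overrides / n,
        "escalation_rate": escalations / n,
    }

# Example: one override and one escalation in four logged decisions.
log = [
    Decision("approve", "approve", False),
    Decision("approve", "reject", False),   # human overrode the AI
    Decision("reject", "reject", True),     # escalated before deciding
    Decision("approve", "approve", False),
]
print(second_order_indicators(log))
# {'override_rate': 0.25, 'escalation_rate': 0.25}
```

The absolute numbers matter less than the trend: an override rate drifting towards zero over successive quarters is exactly the kind of signal no business case predicts and no model-performance dashboard captures.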
Deliberate design. Where second-order effects are foreseeable — and many are, with enough structural awareness — design for them. If skill atrophy is likely, build deliberate practice intervals. If power redistribution is inevitable, restructure roles proactively rather than letting political resistance undermine the deployment.
The organisations that capture the full value of AI will be the ones that manage both orders of effect — not just the direct impact the business case promised, but the structural consequences that business cases never mention.