Strategic Intelligence Report
AI Is Not One Story:
Power, Practice, and the Shape of What Comes Next
A fact-grounded analysis for executives and investors navigating the AI transition
Artificial intelligence is routinely described in extremes — transformative or destructive, liberating or controlling, imminent or overhyped. None of these positions are wrong. Each is incomplete. What is missing is a layered account that integrates operational reality, economic logic, long-term capability risk, and the structural concentration of power now underway. This article provides that account, grounded in what companies are demonstrably doing today.
What Companies Are Actually Doing With AI
In most organisations today, AI is not replacing entire roles. It is being inserted into existing workflows as a productivity layer — reducing friction, accelerating output, and augmenting decision-making without assuming full authority.
The evidence is consistent across sectors:
- Microsoft's Copilot is deployed across documents, email, meetings, and coding; internal studies report measurable time savings, not structural job replacement.
- Conversational AI now handles a significant share of customer service interactions, reducing response time and operational load at scale.
- AI assists analysts in document processing and financial modelling; decision authority remains human-led.
This aligns with the practical view advanced in works like Ethan Mollick’s Co-Intelligence: AI is currently a productivity layer, not a full replacement system. The visible layer is also the least controversial. What lies beneath it is more complex.
Labour Replacement: Where Incentives Point
The economic logic behind AI adoption is not neutral. Historically, automation follows a principle that has not changed: if a task can be standardised and performed more cheaply by a machine, it eventually will be.
Current signals confirm this direction, even if the pace is uneven:
- IBM has announced a pause in hiring for roles that could be replaced by AI, particularly in back-office and administrative functions.
- Large customer service operations are reducing headcount as conversational AI matures and accuracy improves.
- Companies are experimenting with AI-generated content to reduce production costs, with legal, marketing, and summarisation functions already partially automated.
The reality is structurally uneven: AI performs well in tasks that are structured and repetitive; it performs poorly in ambiguous, high-context decisions. This creates selective replacement — parts of roles disappear before entire roles do. The risk is cumulative. Many small substitutions over time produce large structural shifts in employment that are difficult to detect in real time.
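To make the cumulative risk concrete, the sketch below compounds a modest substitution rate over a decade. The 4% annual rate and the ten-year horizon are illustrative assumptions, not estimates drawn from this report.

```python
# Hypothetical illustration: many small task substitutions compound.
# The 4% annual rate is an assumption for the sketch, not an empirical estimate.

annual_substitution = 0.04   # share of remaining human-performed tasks automated each year
human_share = 1.0            # start: all tasks performed by humans

for year in range(1, 11):
    human_share *= 1 - annual_substitution
    print(f"Year {year:2d}: {1 - human_share:5.1%} of the original task base automated")

# After ten years roughly a third of the original task base is automated,
# yet no single year's change exceeded 4%: hard to detect in real time.
```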
“The direction is not random. It follows incentives. And the incentive to reduce labour costs through standardisation has not weakened — it has found a more capable instrument.”
The Race to Build — and Control — the AI Stack
Above the operational uses visible in enterprise deployments, a small number of companies are investing at a fundamentally different scale. They are not building tools. They are competing to control the infrastructure through which AI will operate across the global economy.
Each of these primary actors is pursuing control over distinct but reinforcing layers: the most capable models, the most widely adopted platforms, and the dominant compute infrastructure. The investment scale is unprecedented: tens of billions in hardware and model development, sustained over timelines measured in decades, not quarters.
What Control Actually Looks Like
Control does not appear as direct authority. It emerges through dependency — gradually, through decisions that appear as efficiency gains.
As users interact with the world through AI systems — search, writing, analysis, customer interaction — the system becomes the filter, the interpreter, and the recommender. Search is already shifting from links to answers. The system decides what is shown, summarised, or ignored.
AI systems improve through usage. More users generate more data, which produces better models, which attract more users. This self-reinforcing cycle structurally favours large incumbents and raises the barrier to entry for competitors and regulators.
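The structural advantage this loop confers can be made visible with a minimal simulation. Every parameter below is an assumption chosen for illustration, not a measurement: demand is modelled as strongly quality-sensitive, and each round of usage data lifts a system's quality in proportion to its user share.

```python
# Hypothetical data-flywheel sketch. Assumptions, not measurements:
# users choose systems in proportion to quality**4 (quality-sensitive demand),
# and each round's data improves quality in proportion to user share.

incumbent, entrant = 1.10, 1.00   # model quality; incumbent starts 10% ahead

for round_ in range(1, 11):
    share = incumbent**4 / (incumbent**4 + entrant**4)  # incumbent's user share
    incumbent *= 1 + 0.3 * share        # more users -> more data -> better model
    entrant   *= 1 + 0.3 * (1 - share)
    print(f"Round {round_:2d}: incumbent user share = {share:.1%}")

# Under these assumptions a 10% initial quality lead compounds into a
# near-total user share within ten rounds: the loop rewards whoever is
# already ahead, which is why it raises barriers for entrants.
```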
Most companies do not build their own models. They rely on cloud platforms, APIs, and external AI services. This creates structural dependence on a small set of providers — a dependency that typically deepens the longer it is in place.
The companies building the most advanced systems effectively define how tasks are performed, what good output looks like, and which behaviours are acceptable. This norm-setting power is subtle but durable.
What Happens If It Scales: The Long-Term Question
The perspective associated with Max Tegmark’s Life 3.0 asks a different kind of question: what happens if AI systems move beyond assisting humans to outperforming them across most cognitive tasks?
Early capability signals are already visible:
- Large language models pass bar exams, medical licensing tests, and MBA-level assessments at levels comparable to trained practitioners.
- DeepMind's AlphaFold effectively solved protein structure prediction, a fifty-year open challenge in biology and a domain where AI now outperforms human methods structurally.
- Autonomous systems are being tested and deployed in logistics, defence, and industrial operations, with supervised autonomy expanding in scope.
Constraints remain significant: systems lack reliable reasoning in ambiguous contexts, alignment with human intent is not guaranteed, and generalisation across novel domains remains inconsistent. The gap between capability and control is the critical tension. This is not immediately operational for most companies, but it is directly relevant to long-term investment positioning, regulatory exposure, and dependency on AI infrastructure providers.
AI as an Information System: The Control of Perceived Reality
The framework developed in Yuval Noah Harari’s Nexus addresses a dimension that receives less attention in corporate strategy discussions: AI is not only performing tasks — it is shaping the information environment through which decisions are made.
Three distinct risks operate simultaneously:
- AI systems summarise news, generate explanations, and filter sources. Even without malicious intent, this introduces systematic bias derived from training data composition, design choices, and optimisation objectives. At scale, this influences how events are understood and interpreted.
- Recommendation systems already shape purchasing decisions, political views, and content consumption. AI systems extend this reach into professional decision-making, planning, and expert judgement, domains previously considered resistant to algorithmic influence.
- AI systems simplify complex realities into summaries, answers, and outputs. This increases efficiency but structurally reduces exposure to nuance, uncertainty, and alternative perspectives, distorting the quality of decisions made downstream.
The strategic implication is direct: companies whose marketing, customer engagement, and internal analysis run through external AI systems are ceding partial control over how information reaches and shapes their own people — and their customers.
What Is Actually Happening Inside Companies: A Fragmented Strategy
If you examine real organisations, the four layers described above are not separate narratives. They overlap — and they are rarely managed as an integrated whole.
A typical pattern combines the layers unintentionally: productivity tools adopted team by team, automation pilots driven by cost targets, deepening reliance on external model providers, and no single owner tracking the combined effect.
This creates a fragmented strategy. Few companies are deliberately pursuing full automation. But many are unintentionally moving in that direction through incremental decisions that are individually rational and collectively significant.
The transition is also slower and messier than public discourse suggests. European companies lag behind US firms in deployment due to regulatory and cultural friction. SMEs lack the data and expertise required for effective implementation. Large enterprises face internal resistance and the burden of legacy systems. AI adoption is real, directional, and structurally uneven at the same time. Navigating it starts with questions that leadership should be able to answer:
- Which parts of our operation can be standardised and automated, and what is our timeline for that shift?
- Where do we depend on external AI providers — and what leverage do those providers have over our operations?
- How does AI change our cost structure over a five-year horizon, including the costs of transition and retraining? (A minimal worked sketch follows this list.)
- Where do we risk losing control over decisions or customer relationships as AI mediates more of those interactions?
- What information systems do we rely on — and who controls the AI layer that shapes what those systems show us?
- How do we audit our own increasing dependency before it becomes structurally irreversible?
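On the cost-structure question above, a minimal five-year model shows the shape of the calculation before real numbers exist. Every figure below is a placeholder assumption to be replaced with the organisation's own estimates; the structural point is that gross savings ramp up gradually while transition and retraining costs are front-loaded.

```python
# Hypothetical five-year AI cost model. All figures are placeholder
# assumptions, not benchmarks; replace them with your own estimates.

labour_cost = 10_000_000                       # annual labour cost in scope
savings_ramp = [0.02, 0.05, 0.08, 0.10, 0.12]  # share of labour cost saved per year
transition = [1_200_000, 600_000, 300_000, 150_000, 150_000]  # tooling, integration
retraining = [400_000, 300_000, 200_000, 100_000, 100_000]

cumulative = 0.0
for year, (s, t, r) in enumerate(zip(savings_ramp, transition, retraining), start=1):
    net = labour_cost * s - t - r              # gross saving minus one-off costs
    cumulative += net
    print(f"Year {year}: net {net:+,.0f}   cumulative {cumulative:+,.0f}")

# Under these assumptions the programme is cash-negative until year five:
# a plan judged on year-one savings alone will always look worse than
# its five-year profile.
```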
The Question That Matters
AI is not a single narrative. It is the simultaneous operation of four distinct forces: a productivity tool reshaping how work is done; an economic force following incentives toward labour substitution; a long-term technological shift whose capability ceiling remains unknown; and an information system concentrating the power to shape how reality is perceived and decisions are made.
Each of these perspectives is valid. None is sufficient on its own.
The organisations that will navigate this transition most effectively are not necessarily those that adopt AI fastest. They are those that maintain coherent visibility across all four layers — understanding where AI changes cost, control, dependency, and decision quality — before those changes become difficult to reverse.
These are not technology questions. They are governance questions. And they belong at the level where strategy is set.