AI, sovereignty, cybersecurity: the three forces reshaping the technology landscape
By Together Elevate

For years, enterprise digital transformation revolved around a single keyword: the cloud. And with it, from the very start, an inseparable subject: security. Those who lived through those migrations remember the debates around hybrid architectures, access policies, and security by design in hosting infrastructures. It was never a secondary concern. It was already a question of control.
Today, that chapter is largely behind us. Organizations have migrated their infrastructures, their tools, their data. Hybrid has become the norm.
The cloud is no longer a disruption. It has become the standard.
But a new shift is now underway.
And this time, it does not rest on a single technological rupture. It rests on three dynamics advancing together, feeding one another, with implications that extend well beyond the technology domain into geopolitics, regulation, and national competitiveness.
These three dynamics are artificial intelligence, sovereignty, and cybersecurity. To understand how they interconnect is to understand what will define the major strategic decisions of the next decade.
AI: the universal accelerator, for better and for worse
AI commands every spotlight today. Investments are exploding. Use cases are multiplying.
Companies are accelerating, sometimes without a clear sense of where they are heading.
And yet the market has already delivered its verdict: AI is non-negotiable.
According to Goldman Sachs Research, generative AI could add up to 7 trillion dollars to global economic output over the next decade. Microsoft, Google, Amazon, and OpenAI are collectively deploying tens of billions of dollars per year into the infrastructure required to support it. These figures are not incidental. They signal systemic conviction, not a passing trend.
What is striking is that this conviction took hold before use cases were fully stabilized.
We already see roles disappearing in some organizations, justified by expected AI-driven productivity gains, sometimes prematurely, sometimes legitimately.
The honest reality is that no one knows exactly at what pace or to what depth AI will reshape organizations. What is certain, however, is that the massive investments being made today are creating a self-reinforcing pressure to adopt. And that pressure sometimes leads organizations to deploy AI without a clear strategy, without defined governance, and without anticipating the side effects.
That is precisely where AI shifts from opportunity to risk.
AI does not only transform productivity. It transforms the threat landscape.
For a long time, executing a sophisticated cyberattack required deep technical expertise. That barrier is collapsing. AI is democratizing capabilities once reserved for highly specialized profiles: malicious code generation, attack automation, ultra-personalized fraudulent content creation, and large-scale analysis of stolen data.
The consequences are already visible and documented. AI-generated phishing attacks are reported to have increased by more than 1,200% since 2023 (Check Point Research, 2025). Deepfake voice attacks, in which a voice or face is synthesized to impersonate an executive or employee, have risen by an estimated 1,600% over the same period. And 60% of companies report having already faced AI-linked fraud or identity-theft attempts.
What these figures illustrate is not simply a surge in attack volume. It is a qualitative transformation of the threat itself. AI enables attacks that are more targeted, more credible, harder to detect, and cheaper to produce.
We are entering what might be called an economy of veracity, where the real question is no longer simply "is my data protected?" but "how do we guarantee the integrity of what is real?" How can an executive team validate an instruction received via audio if the voice itself can be synthesized? How do we preserve trust between systems, between decision-makers, between organizations, when reality itself becomes attackable?
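One practical response to this "economy of veracity" is to stop trusting the channel and start verifying the message. As a minimal sketch (the `SHARED_KEY` is a hypothetical secret provisioned out of band, e.g. via a hardware token, never over the channel being protected), a sensitive instruction can carry a cryptographic tag that even a perfectly synthesized voice cannot forge:

```python
import hashlib
import hmac

# Hypothetical shared secret, exchanged out of band. In practice this would
# live in a secrets manager and be rotated, never hard-coded.
SHARED_KEY = b"example-key-rotated-regularly"

def sign_instruction(instruction: str, key: bytes = SHARED_KEY) -> str:
    """Return a hex HMAC-SHA256 tag the recipient can recompute."""
    return hmac.new(key, instruction.encode("utf-8"), hashlib.sha256).hexdigest()

def verify_instruction(instruction: str, tag: str, key: bytes = SHARED_KEY) -> bool:
    """Constant-time comparison: the tag proves the sender holds the key,
    independently of how convincing the voice or video looked."""
    expected = sign_instruction(instruction, key)
    return hmac.compare_digest(expected, tag)
```

The design point is not the specific primitive but the shift it represents: authenticity is anchored in something an attacker cannot synthesize, rather than in perception.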
This is no longer a data problem. It is a systemic problem of perception and trust.
Sovereignty: the quietest and most structural battle
Behind AI and cybersecurity, a subject is emerging with a discretion inversely proportional to its strategic weight: technological sovereignty. And it may be the most widely misunderstood of the three.
Europe today depends heavily on American actors for its cloud infrastructure, its AI models, its computing capacity, its data platforms, and its cybersecurity tools.
This dependency is not neutral. Technological dependency is also strategic dependency, with concrete implications across at least three dimensions.
The first is regulatory. When European data transits through infrastructure subject to US law, specifically the CLOUD Act, extraterritorial injunctions can in principle allow American authorities to access it without notifying the companies involved. This is not a theoretical scenario. It is a legal reality that many organizations still ignore.
The second is operational. Dependency on a single provider, whoever that may be, creates a continuity risk that few executives have genuinely mapped. Whether through a commercial decision, a geopolitical sanction, or a major outage, the question of resilience in the event that a critical service becomes unavailable deserves to be taken seriously.
The third is competitive and geopolitical. Access to the most powerful AI models, to computing capacity, and to training data is becoming a strategic advantage of the first order. Whoever controls the infrastructure also controls, in part, the ability of economic actors to innovate, to decide, and to act. In this context, technological dependency is not only an operational risk. It is a power relationship.
What makes this subject particularly complex is that the alternatives on offer are often illusions.
Many so-called "sovereign" initiatives still rely, on closer inspection, on non-European technologies or infrastructure.
The "sovereign cloud" label frequently obscures a reality in which the foundational layers (processors, hyperscalers, base models) remain outside European control. Being sovereign is not about hosting your data in Paris on servers operated from Seattle.
True technological sovereignty is the ability to control your data, your critical infrastructure, your models, and your dependencies in a way that protects you from unilateral decisions by an external actor, whether commercial or governmental. That is a demanding definition. And that is precisely why so few organizations meet it today.
Cybersecurity: a permanent challenge that AI and sovereignty are amplifying together
Cybersecurity is not a new subject. Those who built cloud architectures in the early 2010s know this well: security was already there, already central, already difficult to manage. It gradually became more structured, more tooled, more professionalized. But it is now entering a qualitatively different dimension, driven by the combined effects of AI and the growing complexity of digital ecosystems.
The numbers are striking. In France alone, the Ministry of the Interior recorded 348,000 digital incidents in 2024, representing a 74% increase over five years. ANSSI handled more than 3,500 security events in 2025.
Globally, Cybersecurity Ventures estimates that the cost of cybercrime could reach 10.5 trillion dollars per year in the coming years, a figure that exceeds the GDP of most countries in the world.
The question is no longer whether an organization will be targeted. It is when, and with what level of preparedness.
What is changing today is the attack surface itself. With no-code and low-code tools, and now AI assistants capable of building entire applications from a simple natural language prompt, technology creation has become broadly accessible. This is a genuine opportunity for innovation. But it creates a structural paradox: thousands of applications are now being designed and deployed by people without a development background, and therefore with little or no cybersecurity culture. The result is predictable: poorly secured APIs, approximate access management, vulnerable dependencies, sensitive data embedded in AI prompts without oversight, and secrets exposed in publicly accessible code repositories.
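The "secrets exposed in publicly accessible code repositories" problem is one place where even non-specialist teams can automate a first line of defense. A minimal sketch in Python, assuming just two illustrative detection rules (real open-source scanners ship hundreds):

```python
import re

# Two illustrative patterns only, chosen for the sketch; a production
# scanner would use a much larger, maintained rule set.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "generic_api_key": re.compile(
        r"(?i)api[_-]?key\s*[:=]\s*['\"][A-Za-z0-9_\-]{16,}['\"]"
    ),
}

def scan_for_secrets(source: str) -> list[tuple[str, int]]:
    """Return (rule_name, line_number) for every suspicious match."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for name, pattern in SECRET_PATTERNS.items():
            if pattern.search(line):
                findings.append((name, lineno))
    return findings
```

Wired into a pre-commit hook or CI pipeline, a check like this catches the most common leak before it ever reaches a public repository, regardless of whether the code was written by a developer or generated from a prompt.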
AI accelerates creation, but it potentially accelerates the creation of vulnerabilities as well.
A further and well-documented phenomenon compounds this. According to Gartner, 57% of employees already use personal AI tools in a professional context, and nearly a third have already shared sensitive data with public AI systems, often without realizing it. The greatest risk is not the tool itself. It is its ungoverned use, outside any security framework defined by the organization.
The sovereignty dimension connects directly to cybersecurity here. When sensitive data transits through AI models hosted outside Europe, under different jurisdictions, the risk is not only technical. It is regulatory, with potential implications under GDPR, and it is strategic, since organizations often do not know precisely how that data is processed, stored, or used to train future models.
This reality is fundamentally changing the nature of cybersecurity within organizations. It is no longer simply a technical layer added after the fact. It is becoming a matter of governance, trust, resilience, and strategic positioning.
What this means concretely for organizations
Faced with these three interconnected dynamics, several imperatives emerge.
Govern before you deploy. Before accelerating AI adoption, organizations must define a clear framework: which data can flow through which tools, under which jurisdiction, with what security guarantees. This governance is not a brake on innovation. It is the condition for innovation to be sustainable.
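Such a framework can be made machine-checkable rather than living only in a policy PDF. A minimal sketch, where the tool names and the data classification are assumptions standing in for an organization's real inventory:

```python
from enum import Enum

class DataClass(Enum):
    """Ordered sensitivity levels; higher value = more sensitive."""
    PUBLIC = 1
    INTERNAL = 2
    CONFIDENTIAL = 3

# Hypothetical policy table: each tool has a ceiling, the most sensitive
# data class it is cleared to receive.
TOOL_CEILING = {
    "public_chatbot": DataClass.PUBLIC,
    "enterprise_ai_eu": DataClass.INTERNAL,
    "on_prem_model": DataClass.CONFIDENTIAL,
}

def is_allowed(tool: str, data: DataClass) -> bool:
    """Allow a request only if the tool's ceiling covers the data class;
    unknown tools are denied by default."""
    ceiling = TOOL_CEILING.get(tool)
    return ceiling is not None and data.value <= ceiling.value
```

A check like this, enforced at a gateway in front of AI tools, turns the governance question "which data can flow through which tools" into a decision made consistently on every request, not once in a committee.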
Map your critical technological dependencies. Which actors, infrastructures, and models are essential to your organization's operations? What is your exposure if a critical service becomes unavailable due to a commercial decision, a geopolitical sanction, or a major incident? This mapping should be treated with the same rigor as a portfolio review or a supplier risk assessment.
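A dependency map does not need sophisticated tooling to start. A minimal sketch, where the scoring rule is a deliberate simplification (criticality, doubled when no tested fallback exists) and the example fields are assumptions, not a standard:

```python
from dataclasses import dataclass

@dataclass
class Dependency:
    name: str
    jurisdiction: str   # legal jurisdiction of the provider
    criticality: int    # 1 (low) .. 5 (operations stop without it)
    has_fallback: bool  # is a tested alternative actually available?

def exposure_score(dep: Dependency) -> int:
    """Crude illustrative score: criticality, doubled when there is no
    tested fallback. The point is ranking, not precision."""
    return dep.criticality * (1 if dep.has_fallback else 2)

def riskiest(deps: list[Dependency], top: int = 3) -> list[Dependency]:
    """Return the dependencies deserving executive attention first."""
    return sorted(deps, key=exposure_score, reverse=True)[:top]
```

Even a table this crude forces the conversation the paragraph above calls for: which providers, under which jurisdictions, would stop the business, and which of them have no tested alternative.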
Embed security at the design stage, systematically. In an environment where application creation is accessible to everyone and AI is accelerating that democratization, security by design must become a non-negotiable standard, including for non-technical teams. This requires clear policies, accessible tooling, and a security culture that extends beyond IT teams.
Elevate cybersecurity to the executive level. Security decisions have direct implications for strategy, reputation, operational resilience, and regulatory compliance. They can no longer be delegated to a technical silo. The CISO must have a seat at the strategic table, not just in technical review meetings.
Train continuously. The majority of security incidents begin with human error. In a world where AI makes attacks more credible and risky behaviors more frequent, ongoing awareness is not optional. It is the first line of defense.
Conclusion: the next decade will be defined by mastery
Artificial intelligence is probably one of the most profound technological transformations of our era. But the real challenge for organizations is not moving faster. It is building a digital future that remains reliable, secure, and genuinely sovereign.
The companies and nations that succeed tomorrow will not simply be those that invested most heavily in AI. They will be those that managed to align innovation, security, governance, and control over their strategic dependencies.
Because innovation without security creates fragility. Because technology without sovereignty creates dependency. And because dependency, in a geopolitically unstable world, is a strategic risk that few organizations have yet fully integrated into their governance model.
The next decade will not only be defined by technological innovation. It will be defined by mastery.
Sources: Goldman Sachs Research, Cybersecurity Ventures, ANSSI (Panorama de la cybermenace 2025), Gartner, Check Point Research AI Security Report 2025, Ministère de l'Intérieur, Rapport cybercriminalité France 2024.