There is a moment in every technological shift when experimentation turns into exposure. We are at that moment with artificial intelligence.
Across our divisions, AI is no longer a pilot initiative running quietly in a lab. It is embedded in productivity tools, customer platforms, analytics engines, cybersecurity detection systems, operational automation, and even decision-making support. It writes, recommends, predicts, classifies, and increasingly acts. And with every deployment, it extends our operational attack surface.
From a CISO’s perspective, this is not a story about innovation alone. It is a story about control.

AI has extraordinary potential. It can strengthen our resilience, sharpen our detection capabilities, accelerate development cycles, and unlock operational efficiencies. But it also introduces a new layer of systemic risk — one that does not behave like traditional IT risk. AI systems can produce unpredictable outputs, amplify data vulnerabilities, embed bias at scale, depend heavily on opaque third-party models, and operate in ways that blur the lines of accountability.
In the current European regulatory environment — with the AI Act and NIS2 shaping governance expectations — this reality carries executive responsibility. AI risk is no longer theoretical, and it is certainly no longer optional.
The power we now have, however, lies in clarity.
For the first time, we operate in a framework where AI governance is structured, defined, and enforceable. This is not a constraint; it is an opportunity. The European AI governance architecture creates predictability. It establishes risk classifications, oversight mechanisms, compliance expectations, and enforcement powers. Combined with NIS2’s requirements on cybersecurity governance, incident management, supply chain control, and executive accountability, it gives us something we have long needed: alignment.
AI governance is becoming an extension of cybersecurity governance. And cybersecurity governance, when embedded properly, becomes operational resilience.
That is where the COO and the operational divisions enter the picture.
AI risk does not manifest in policy documents. It materializes in procurement decisions, system integrations, vendor contracts, automation scripts, workflow redesigns, and customer-facing applications. It is in the enthusiasm of a department deploying a generative AI assistant without fully understanding data exposure. It is in a supply chain dependency on a general-purpose AI model whose training data, transparency, or resilience profile is unclear. It is in automation logic that bypasses human review because it “works most of the time.”
Operational speed is valuable. Uncontrolled operational speed is dangerous.
My message to the COO is straightforward: AI must be treated as critical infrastructure within each division. Not because we fear it, but because we respect its power.

Every AI system deployed in our organization must be known, documented, and classified. If we do not know where AI is embedded, we cannot protect it. And if we cannot protect it, we cannot defend the enterprise.
Procurement discipline becomes a frontline control. No AI-enabled system should enter a division without a structured review of its security posture, compliance standing, traceability mechanisms, and supplier governance. Third-party AI risk is no different from third-party cyber risk — except that its operational impact can be faster and less predictable.
Data governance becomes even more critical. AI systems consume data at scale, and what they ingest determines what they produce. Sensitive information, intellectual property, and regulated datasets must not flow into uncontrolled environments. Divisions must understand that convenience tools can quietly become compliance exposures.
Human oversight must remain intact. Automation is attractive precisely because it reduces friction. But friction sometimes protects us. AI systems that influence critical processes must have defined accountability, escalation paths, and override mechanisms. No algorithm should operate beyond the reach of responsible authority.
Incident preparedness must evolve as well. We must anticipate not only traditional cyberattacks but also prompt injection, model manipulation, data poisoning, and AI-driven fraud. These scenarios are no longer theoretical exercises; they are emerging operational realities. Our resilience planning must reflect that.
The partnership between CISO and COO becomes essential here. I can design the risk framework, interpret regulatory developments, and ensure monitoring and assurance. But governance without operational execution is a hollow structure. It is the divisions — guided by the COO — that transform policy into discipline.
This is not about slowing innovation. On the contrary, controlled AI adoption accelerates sustainable innovation. When divisions operate within a clear governance structure, experimentation becomes safer. When suppliers are vetted, integration becomes smoother. When accountability is defined, confidence increases. Trust becomes a competitive asset.
Even for business units or subsidiaries not formally classified as NIS2 entities, the direction of travel is unmistakable. Customers, partners, insurers, and regulators increasingly expect demonstrable control over digital and AI risk. Organizations that institutionalize AI governance now will be trusted tomorrow.
As CISO, I do not see AI as an uncontrollable force. I see it as a strategic lever — one that can strengthen our cybersecurity posture, enhance operational efficiency, and reinforce our credibility. But only if we own it.
AI must not be something that happens to the organization. It must be something the organization governs.Because today, AI is woven into cybersecurity. And cybersecurity is woven into business continuity.
The real question is not whether we will use AI. We will. So keep a eye on AI.

The question is whether we will control it — or whether we will allow uncontrolled complexity to shape our risk profile. From where I stand, control is not optional. It is leadership.
Board-Level Message
From a CISO perspective, my message to executive leadership is clear:
AI risk is manageable.
Unmanaged AI is existential.
We now have:
- European-level governance clarity
- Structured regulatory alignment
- Defined accountability
- Practical compliance pathways
This gives us power—not restriction. The organizations that institutionalize AI governance now will:
- Avoid systemic risk
- Reduce liability
- Accelerate innovation
- Strengthen resilience
AI must be controlled, documented, monitored, and continuously assessed—just like cybersecurity.
Here is a short list of important links from the European AI Office page and related pages that will help your company perform a right AI risk assessment — including key governance, regulatory, and practical resources:
Core AI Office & Governance Resources
🔗 1. European AI Office (main page)
Official EU policy page describing the role of the AI Office, its tasks, structure, and connection to AI risk governance.
👉 https://digital-strategy.ec.europa.eu/en/policies/ai-office
Strategic Context & Policy Framework

🔗 2. AI Act (Regulatory framework for AI)
Central legal framework defining risk categories for AI systems and compliance obligations for deployers and providers.
👉 https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai
🔗 3. European Approach to Artificial Intelligence
EU policy explaining risk-based AI structure and high-level principles which are essential for risk assessment models.
👉 https://digital-strategy.ec.europa.eu/en/policies/european-approach-artificial-intelligence
🔗 4. Apply AI Strategy
EU strategy focusing on adoption, risk awareness, and uptake of AI in business — helpful for internal risk prioritisation.
👉 https://digital-strategy.ec.europa.eu/en/policies/apply-ai
🔗 5. AI Continent Action Plan
High-level EU strategic plan shaping innovation investments that also identifies risk areas and sector priorities.
👉 Accessible via the AI Office page under “AI Continent action plan”
Practical Tools & Participation
🔗 6. AI Act Service Desk (Single Information Platform)
Official EU helpdesk with tools like the AI Act Explorer and Compliance Checker — critical for carrying out accurate risk assessments and compliance queries.
👉 https://ai-act-service-desk.ec.europa.eu/en
🔗 7. AI Pact
Voluntary initiative encouraging companies to map AI systems, establish governance, and work toward best practices in accountability.
👉 https://digital-strategy.ec.europa.eu/en/policies/ai-pact
🔗 8. European Artificial Intelligence Board (AI Board)
EU body coordinating national regulators and ensuring consistent implementation — useful for compliance benchmarks and guidance.
👉 https://digital-strategy.ec.europa.eu/en/policies/ai-board
Helpful Supplementary Resource
🔗 9. Artificial Intelligence Act (text + explorer)
Direct access to the Official AI Act text (Regulation 2024/1689) — indispensable for detailed risk classification and compliance requirements.
👉 https://artificialintelligenceact.eu/the-act/
These links together help you:
- Understand the EU governance framework for trustworthy and safe AI.
- Ground your risk assessment in actual regulatory obligations and enforcement oversight.
- Access practical compliance tools (Service Desk, AI Pact) for operational risk evaluation.
- Align risk models with both policy strategy and legal risk categories.
“Artificial Intelligence without human guidance is like a cruise ship drifting across the open sea — powerful, magnificent, and full of potential — yet without a captain or crew, it is directionless, vulnerable, and one storm away from disaster. Only when humanity takes the helm does technology become a journey instead of a gamble.”








