Pretend you protect us, and we will pretend to believe you. In some companies and organizations, these unspoken words reduce risk, compliance, and audit to mere formalities, stripping them of their true role as pillars of corporate governance and resilience.
Unspoken words speak volumes. They reveal hidden corporate vulnerabilities that no one dares to acknowledge, risk-blind cultures where silence replaces accountability, and procedural compliance without substance. The “no news is good news” strategy discourages risk disclosures that could shake confidence.
Sometimes, the members of the Board prioritize short-term profits over long-term security, so critical risks are downplayed until they escalate into crises. This cost-cutting mindset sometimes extends beyond operations and efficiency to risk management, creating a dangerous situation. The board hires a CEO whose primary mandate is to drive cost reductions and profitability. The CEO, in turn, hires a Chief Risk Officer who is expected to align with this cost-minimization strategy, often treating risk management as a financial burden rather than a strategic necessity.
Sometimes, the members of the Board and the CEO leave risk, compliance, and audit underfunded and understaffed, and push for minimum compliance: just enough to meet regulatory requirements, but not enough to genuinely mitigate threats. Although they always say in public that security is their top priority, behind closed doors they see it as an annoyance.
When executives prioritize perception over protection, the long-term consequences can be severe. This institutionalized risk blindness is a systemic failure to recognize, assess, and address critical threats to the entity, the critical infrastructure, and the country.
The good news: Many companies and organizations do the opposite. Many Boards understand that risk management is an essential investment. They see corporate governance as a competitive advantage. They embrace a culture of transparency, ensuring that boardroom discussions go beyond checklists and cover resilience, security, and long-term sustainability.
The Hybrid Resilience Initiative (HRI), operated by Cyber Risk GmbH, exists to promote these good corporate governance practices in the era of hybrid threats and state-sponsored adversaries. It will support organizations and promote the best practices that prioritize resilience over short-term cost-cutting. It will encourage a shift from risk avoidance to strategic risk leadership, where companies view resilience as a long-term asset.
The Hybrid Resilience Initiative (HRI) has the mission to enhance resilience against hybrid warfare tactics, cyber espionage, and asymmetric threats that target the private sector. HRI provides independent insights and strategic defenses against state-sponsored cyber intrusions, hybrid coercion tactics, strategic deception campaigns, influence operations, and insider threats, impacting corporate and national security. The initiative operates with full neutrality, free from commercial, political, or regulatory influences, and the knowledge is shared without financial, legal, or membership obligations.
The initiative envisions a world where citizens and entities of the public and private sector work together and share knowledge in a collaborative, intelligence-driven environment to defend against hybrid warfare tactics, cyber espionage, and disinformation campaigns to protect democracy, critical infrastructure, and societal stability.
HRI is guided by the following principles:
1. Independence: The initiative is free from financial, political, or commercial influence, ensuring neutrality and credibility.
2. Voluntary Participation: Engagement is entirely voluntary, with no obligations, contracts, or membership requirements.
3. Practical Impact: The initiative prioritizes actionable insights, focusing on real-world challenges and solutions in hybrid resilience.
4. Adaptability: HRI remains flexible and agile, allowing for continuous evolution based on emerging threats.
News and updates from the Hybrid Resilience Initiative (HRI) can be found in the monthly newsletter of Cyber Risk GmbH, a comprehensive publication exceeding 80 pages each month. The newsletter provides in-depth insights on hybrid warfare, cyber espionage, and resilience strategies. You can download it at no cost, with no registration, subscription, or commitment required at:
https://www.cyber-risk-gmbh.com/Reading_Room.html
What is Performative Risk Management?
British philosopher John Langshaw Austin (1911–1960) proposed a distinction between performatives and constatives in his lectures (published as "How to Do Things with Words"). Austin’s work challenged the assumption that language is merely descriptive. Instead, he showed that words shape reality.
Austin originally categorized statements into two types:
- Performatives (shaping reality with words): a performative utterance does not just describe reality; it performs an action simply by being spoken. Saying “I promise to deliver the report” is itself the act of promising.
- Constatives (describing reality): a constative utterance makes a statement that can be evaluated as true or false, such as “The report was delivered yesterday.”
Performative risk management refers to the practice of appearing to manage risk without actually mitigating real threats. It is a superficial approach that prioritizes optics over substance, often reducing risk management to a bureaucratic checklist exercise rather than an integral part of corporate strategy.
While some organizations present themselves as risk-conscious and compliant, their actual risk management efforts lack depth, enforcement, or a real commitment to security and resilience. This phenomenon is also prevalent in highly regulated industries and critical infrastructure, where some companies and organizations seek to satisfy auditors and regulators on paper while failing to implement meaningful risk controls.
In Performative Risk Management, entities draft detailed risk policies that look strong in theory but are never fully implemented in practice. Their risk management frameworks are created to pass regulatory inspections rather than to address real vulnerabilities. Risk disclosures are crafted to appear compliant, often downplaying or omitting significant concerns.
What is Internal Disinformation?
Disinformation is almost always associated with external threats, such as state-sponsored campaigns, social media manipulation, or geopolitical influence operations. However, disinformation also exists within organizations.
Internal disinformation refers to the spread of misleading, incomplete, or false information within an organization, influencing decision-making, risk management, compliance, and corporate culture. It can be found in sanitized risk reports, manipulated performance metrics, suppressed security vulnerabilities, and selective disclosure of critical information, leading to a false sense of security, regulatory exposure, and weakened resilience.
Strategic disinformation from leadership includes the manipulation of data or narratives to control investor perceptions or avoid accountability, the overstatement of security readiness to satisfy regulators or shareholders, and the misrepresentation of risk and compliance efforts. In legal terms, it can lead to severe regulatory, civil, and even criminal consequences.
Strength in Adaptation. Power in Resilience.
In an era where risks evolve rapidly, organizations must embrace continuous adaptation and proactive resilience. The ability to anticipate, adjust, and respond to emerging threats is not just a competitive advantage; it is a necessity for survival.
Strength in Adaptation – This is the ability to adjust strategies, processes, and structures in response to evolving threats. Organizations that master adaptation are proactive rather than reactive, continuously learning from internal and external disruptions to anticipate, prepare for, and capitalize on change.
The risk landscape is dynamic. Threats, regulatory requirements, economic downturns, and geopolitical instability create ever-changing challenges. Regulations evolve, and organizations must continuously update their risk and compliance frameworks to meet shifting legal obligations. Adversaries adapt; so must defenses.
Power in Resilience - This goes beyond adapting to change; it is about withstanding shocks, maintaining operations, and emerging stronger from disruptions. Attacks are inevitable; organizations cannot prevent all breaches, but they can build resilience to withstand attacks and minimize damage.
We must address all risks: traditional, emerging, and exotic.
Labeling a risk as "exotic" should never be an excuse for inaction. The financial crisis, cyber warfare, and pandemic disruptions were once considered exotic risks, until they happened. In risk management, we prepare for the unexpected and the difficult to understand, not just the usual and familiar.
Exotic risk, example 1 - The external HR departments.
How many HR departments exist in a critical entity? Officially one, but the threat actors' shadow HR teams are always hiring and promoting. This is not a joke. Threat actors aren’t just profiling; they’re running a shadow leadership program. They work to maneuver targets who can be blackmailed, bribed, and manipulated into roles with more access and responsibility.
Sophisticated threat actors commonly develop detailed profiles of individuals working in critical entities. They increasingly deploy personalized cyber attacks that exploit psychological vulnerabilities. By analyzing an individual’s behaviors and patterns, these attackers can design highly targeted and effective attacks.
Their favorite targets often include individuals with psychological disorders. An example is Obsessive-Compulsive Disorder (OCD), a mental health condition characterized by persistent, intrusive thoughts (obsessions) and repetitive behaviors (compulsions) performed to reduce anxiety or distress. This disorder affects approximately 1–2% of the global population and can significantly impair daily functioning, relationships, and quality of life.
Obsessions are intrusive and unwanted thoughts or urges that cause significant distress or anxiety. Examples include contamination fears (excessive worry about germs, dirt, or illness), symmetry and order, and intrusive aggressive thoughts. Compulsions are repetitive behaviors or mental acts performed to neutralize obsessions or prevent perceived harm. Examples include cleaning (excessive handwashing or cleaning of objects and spaces), excessive checking (repeatedly ensuring doors are locked, appliances are off, or mistakes have not been made), performing actions a specific number of times, and reassurance-seeking (asking others for validation to alleviate anxiety).
According to the Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition (DSM-5), OCD is diagnosed if obsessions/compulsions are time-consuming (e.g., take more than 1 hour per day), cause significant distress or impairment, and are not better explained by another mental health condition, such as generalized anxiety disorder. What if the compulsions take 50 minutes per day, or cause significant distress or impairment that is covered, hidden, or tolerated by polite colleagues?
Threat actors have a window of opportunity due to the challenges in addressing OCD. Individuals with OCD symptoms have a high rate of treatment resistance, and there is a delay in diagnosis and treatment, often spanning several years.
Where is the opportunity for threat actors? Individuals with OCD symptoms have an overwhelming desire to correct errors or achieve flawless outcomes. They are often very sensitive to threats related to viruses (physical and digital).
The most sophisticated cyber attacks often begin with a simple and common first step, one that opens the door for a highly complex operation.
Example 1 – Exploiting the need for perfection: A phishing email claims there’s an error in the victim’s online profile. The victim clicks on a link to “correct” the issue. This attack plays on the victim’s obsession with avoiding mistakes or achieving perfection.
Example 2 – Exploiting intrusive thoughts: Threat actors send threatening messages claiming knowledge of the victim’s “secrets”. The victim is coerced into cooperating and providing sensitive information.
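Defensively, lures like the two examples above can be surfaced even with crude heuristics. The sketch below is a hypothetical, minimal filter; the patterns and the threshold are illustrative assumptions, not a production rule set, and real defenses would combine such signals with far richer analysis.

```python
import re

# Hypothetical lure patterns: phrases that exploit the need to "correct
# mistakes" (Example 1) or that claim knowledge of secrets (Example 2).
LURE_PATTERNS = [
    r"error in your (account|profile|record)",
    r"correct (the|this) (issue|mistake) immediately",
    r"we know (your|about your) secret",
    r"verify your details to avoid",
]

def score_message(text: str) -> int:
    """Count how many known lure patterns appear in a message."""
    text = text.lower()
    return sum(1 for p in LURE_PATTERNS if re.search(p, text))

def is_suspicious(text: str, threshold: int = 1) -> bool:
    """Flag a message for human review when it matches enough lure patterns."""
    return score_message(text) >= threshold

msg = "Urgent: there is an error in your profile. Click to correct this issue immediately."
print(is_suspicious(msg))  # matches two lure patterns, so it is flagged
```

A filter like this does not stop a determined adversary, but it illustrates the point: attacks that exploit predictable psychological triggers also leave predictable textual fingerprints.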
Persons with OCD symptoms are not inherently less intelligent or capable than anyone else. In fact, many individuals with OCD possess high levels of intelligence, focus, and problem-solving abilities. They can learn to recognize when they are being targeted. This is supported by research on cognitive-behavioral approaches to addressing vulnerabilities.
We can use OCD against threat actors, by explaining to possible victims the troubles they will get in by responding to phishing emails, fake job offers, or urgent requests exploiting psychological vulnerabilities. Once trained, they will follow security protocols rigorously. Their natural inclination to prevent mistakes can be channeled into careful adherence to security and cybersecurity best practices.
We are not doctors, and our opinion about disorders does not constitute medical advice or medical assistance of any kind.
Exotic risk, example 2 - Cyber Espionage-as-a-Service (CEaaS).
Cyber Espionage-as-a-Service (CEaaS) refers to a professionalized, commodified approach to cyber espionage where actors provide espionage tools, techniques, and operational capabilities to clients for a fee. These services are often marketed through dark web forums, making them accessible to a range of actors, from nation-states to corporations to organized crime groups.
What could CEaaS providers offer?
- Custom malware development: tailored spyware, ransomware, or backdoors designed to exploit specific targets.
- Phishing kits: ready-to-deploy phishing campaigns with custom research, templates, domains, and hosting.
- Harvesting services: data extraction from targeted organizations, including passwords, proprietary information, and trade secrets.
- One-stop-shop solutions, adding exfiltration infrastructure: secure, anonymized channels for transmitting stolen data.
We will not be surprised if these providers offer loyalty programs too (every 10 hacks, you get one for free). Does it look like a joke? Have a look at their business model: They do offer subscription-based services (monthly fees for continuous access to tools and updates), one-time payments (single transactions for specific attacks or tools), profit-sharing agreements (a results-oriented model, taking a share of the profits derived from stolen data), and service level agreements (SLAs – they offer guarantees for results, data delivery, or attack success rates).
Cyber Espionage-as-a-Service (CEaaS) can also be used as a tool to overwhelm targets, creating distractions or diversions that make attacks by the real threat actor more effective and harder to understand and attribute. This tactic leverages the noise and chaos created by multiple simultaneous or consecutive cyber events to obscure the true origin or intent of the primary attack.
By launching attacks on various aspects of the target's infrastructure, CEaaS creates a multi-front challenge. This confuses defenders and forces them to spread their resources thin, reducing their ability to identify and counter the real threat. CEaaS providers may also plant fake indicators of compromise (IOCs), or use tools associated with known cybercriminal groups. This shifts suspicion away from the true perpetrators and complicates attribution efforts.
The use of CEaaS actors introduces layers of plausible deniability. Even if the tools or methods point to specific groups, it becomes challenging to establish direct links to state actors.
With repeated, visible cyberattacks, they create an environment where stakeholders focus on immediate damage control, overlooking covert activities.
Albert Einstein once observed, “Confusion of goals and perfection of means seems to characterize our age.” His insight resonates deeply in the realm of cyber espionage. The rise of CEaaS exemplifies the perfection of means (sophisticated tools, professionalized services, and efficient execution), all available for hire. The confusion of goals is accomplished with attribution masking (employing proxies or third-party groups to carry out the attacks), false flags (leaving behind evidence to implicate another actor), and global infrastructure (leveraging servers and systems worldwide to confuse attribution efforts).
Exotic risk, example 3 - The connection between country risk and environmental compliance.
Environmental compliance and country risk may appear as separate concerns, but they can be deeply interconnected. Environmental factors influence country risk, and can be weaponized, as they increasingly impact political stability, economic performance, and legal frameworks.
Country risk refers to the potential risks and uncertainties associated with investing in or conducting business in a particular country. These risks stem from a variety of factors, including political, economic, legal, and social conditions in the country. Country risk can impact businesses, investors, and governments, and it is often analyzed to determine the feasibility of entering a market or engaging in financial activities in a given region.
For example, country risk is a critical consideration under Basel III, influencing capital adequacy, liquidity management, and overall risk governance. By integrating country risk into their frameworks, banks can better withstand the challenges of cross-border operations and contribute to global financial stability.
Political Risk refers to the potential for losses or adverse effects on business operations, investments, or assets due to political changes or instability in a country. These risks arise from decisions or events within a country's political or legal framework that can impact businesses, investors, or other stakeholders. It is a key component of country risk and is critical for businesses operating internationally or investing in foreign markets.
Hybrid warfare strategies increasingly incorporate environmental risk as a tool to shape geopolitical landscapes, destabilize nations, and influence political risk. Adversaries recognize that environmental conditions—both natural and artificially induced—can serve as force multipliers in conflict, economic coercion, and disinformation campaigns.
Artificially induced environmental disasters can trigger ecological crises, contamination, and accidents. We must recognize that environmental risk is no longer just a passive factor but an active domain in geopolitical conflict.
Exotic risk, example 4 - The “Harvest Now, Decrypt Later” risk.
Adversaries already follow the “Harvest Now, Decrypt Later” strategy. It refers to a security threat where adversaries collect (“harvest”) encrypted data today, with the intention of decrypting it in the future when they have access to quantum computers powerful enough to break the encryption methods currently in use.
The assumption in cryptography that “they will see it, but they will not understand it” has historically hinged on the strength of encryption: adversaries might intercept encrypted data but cannot decipher it without the appropriate key. This belief is rooted in the computational infeasibility of breaking encryption with classical methods. However, the emergence of quantum computing fundamentally changes this dynamic, challenging long-standing assumptions about cryptographic security.
For years, no explicit legal framework addressed the post-quantum threat. Laws were technology-agnostic and assumed the continued robustness of cryptography. For example, Article 32 of the General Data Protection Regulation (GDPR) in the EU requires organizations to implement appropriate technical and organizational measures, including encryption, to ensure data security. The implication was that existing encryption could ensure security. Under GDPR Article 25, encryption must be incorporated into the design of systems handling personal data. But what constitutes “appropriate measures” and “data protection by design and by default” in the quantum era?
Governments and international bodies are beginning to draft regulations that address the anticipated challenges of quantum decryption. Bodies (like the US National Institute of Standards and Technology) are creating post-quantum cryptography standards, which may become mandatory under future laws.
Should we worry about retroactive exposure? In a legal context, this is the situation where an organization becomes liable for events, actions, or omissions in the past that were considered secure at the time but later proved to be problematic. In the realm of cybersecurity and data protection, particularly in light of quantum computing, retroactive exposure takes on new dimensions. Breaches that occur years later due to quantum decryption could still trigger liability if the organizations failed to adopt protective measures when the threat was foreseeable.
Personally Identifiable Information (PII) such as names, social security numbers, and birthdates can be exploited decades after the collection. Medical histories and genetic data remain sensitive. Bank account records, credit histories, and tax records can be exploited over extended periods. Biometric identifiers like fingerprints, retina scans, or facial recognition data can be misused indefinitely.
The timeline for adversaries to decrypt data using quantum computers depends on several factors, including the pace of advancements in quantum computing. We often read that it could take 10 years. In my opinion, it could take far less for well-funded entities, such as a secret service or a government.
It is very unlikely that secret services would openly disclose that they possess quantum computing capabilities capable of breaking encryption. The primary advantage of having quantum decryption capabilities is information asymmetry. An entity with such capabilities can intercept and decrypt communications without the target knowing, giving them a strategic edge in intelligence and counterintelligence operations.
Quantum computing disrupts the foundation of classical cryptography, creating an environment where today’s secrets may become tomorrow’s vulnerabilities. The transition to post-quantum cryptography is not just a technical upgrade but a strategic necessity to ensure the longevity of data security in an evolving threat landscape. The game has changed, and organizations must act to adapt.
What can we do? What is the first step? We must identify sensitive data with a long security lifespan. We must also begin understanding and preparing for quantum-resistant algorithms.
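The first step can be made concrete with Mosca's inequality, a common rule of thumb in post-quantum planning: if the years data must remain confidential (x) plus the years needed to migrate to quantum-resistant cryptography (y) exceed the estimated years until quantum decryption becomes practical (z), the data is already exposed to "Harvest Now, Decrypt Later". The sketch below is a minimal triage exercise; the asset list, the 5-year migration estimate, and the 10-year z are illustrative assumptions, not forecasts.

```python
# Mosca's inequality: data is at risk when x + y > z, where
#   x = years the data must remain confidential,
#   y = years needed to migrate to post-quantum cryptography,
#   z = estimated years until a cryptographically relevant quantum computer.

def at_risk(secrecy_years: float, migration_years: float, quantum_eta_years: float) -> bool:
    return secrecy_years + migration_years > quantum_eta_years

# Illustrative asset inventory with assumed security lifespans (in years).
assets = {
    "marketing brochures": 1,
    "financial records": 10,
    "genetic data": 50,
    "biometric identifiers": 70,
}

MIGRATION_YEARS = 5   # assumed time to roll out quantum-resistant algorithms
QUANTUM_ETA = 10      # assumed years until quantum decryption is practical

for name, secrecy in assets.items():
    flag = "AT RISK" if at_risk(secrecy, MIGRATION_YEARS, QUANTUM_ETA) else "ok"
    print(f"{name}: {flag}")
```

Under these assumptions, everything except short-lived material is already harvestable today, which is exactly why inventorying data by security lifespan comes before any algorithm decision.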
Exotic risk, example 5 - Space risk.
Space risk refers to the potential for financial, operational, legal, or reputational harm arising from activities conducted in outer space. It presents a dynamic legal landscape where innovation often outpaces regulation.
The Outer Space Treaty (OST, 1967) forms the cornerstone of space law, establishing the principle that outer space, including the moon and other celestial bodies, is the “province of all mankind.” It prohibits the placement of nuclear weapons in space and emphasizes that space activities must be conducted for the benefit of all countries. Unfortunately, the OST has no explicit provisions against anti-satellite (ASAT) weapons.
Today, dual-use technologies blur the line between peaceful and military purposes. Private companies are also exploring mining of asteroids and the moon. The OST prohibits national appropriation of celestial bodies, raising questions about private ownership; yet some national laws permit private entities to claim space resources, creating potential conflicts with international law.
The growing privatization of space activities introduces unique legal challenges, like contractual disputes between operators, manufacturers, and insurers.
- The weaponization of space refers to the placement and use of weapons in outer space, which can include kinetic, non-kinetic, directed energy, and cyber-based systems.
- The militarization of space refers to the use of space-based assets (e.g., satellites) for military purposes, such as communication, reconnaissance, and navigation.
Kinetic weapons rely on physical force, impact, or explosion to damage or destroy a target. Examples include anti-satellite missiles (designed to destroy satellites or space infrastructure), and kinetic kill vehicles (devices that collide with targets in space, neutralizing them through impact). Kinetic weapons create space debris, which can endanger other space assets for decades.
Non-kinetic weapons disable, disrupt, or degrade a target’s capabilities without physical impact or destruction. They operate through non-physical means, like electromagnetic interference or manipulation. Examples include jamming devices (that interfere with communication signals, rendering satellites or ground systems inoperable), and electromagnetic pulse weapons (that emit bursts of electromagnetic energy to disable electronics). Detection and attribution are very challenging.
Space is no longer just an enabler of hybrid warfare, it is becoming a direct arena for strategic conflict. The consequences extend far beyond governments and militaries. Private companies, financial markets, and everyday citizens are highly dependent on space-based infrastructure, making them vulnerable to disruption, manipulation, and coercion.
Exotic risk, example 6 - Geoengineering risk.
Geoengineering risk refers to the potential unintended consequences, legal challenges, geopolitical conflicts, and ethical dilemmas associated with the deliberate large-scale manipulation of Earth's climate systems. Geoengineering is typically framed as an effort to mitigate or reverse climate change, and involves technologies designed to alter atmospheric, oceanic, or terrestrial processes.
Solar Radiation Management (SRM) technologies reflect a portion of the sun’s energy back into space to cool the earth. A good example is the stratospheric aerosol injection, where reflective particles like sulfur dioxide are injected into the stratosphere to mimic volcanic eruptions.
Carbon Dioxide Removal (CDR) technologies remove CO₂ from the atmosphere and store it safely. A good example is ocean fertilization, where nutrients are added to oceans to stimulate algae growth, which absorbs CO₂.
Unfortunately, geoengineering interventions could disrupt ecosystems, weather patterns, and biodiversity. They may cause irreversible damage, such as altering ocean chemistry through fertilization.
Geoengineering leads to geopolitical risks. A single nation or entity acting independently could trigger international disputes, especially if adverse effects are felt by others. Weaponization of geoengineering refers to the use of geoengineering technologies as tools of geopolitical leverage, conflict, or coercion. Climate manipulation strategies, originally designed to address climate change, could be intentionally exploited to harm adversaries, disrupt ecosystems, or achieve strategic dominance. Given the transformative potential of these technologies, their misuse poses serious risks to global security and stability.
Cyber weaponization of geoengineering systems includes cyberattacks targeting geoengineering deployment mechanisms to redirect or misuse technologies. For example, hacking such systems can cause targeted disruptions in a rival nation’s climate and infrastructure.
Identifying the actors responsible for weaponized geoengineering could be difficult. Existing international laws do not specifically address the weaponization of geoengineering, leaving a regulatory vacuum.
Climate is a new battlefield. Geoengineering could expand the concept of warfare from land, sea, air, space, and cyber to climate itself. Nations could disrupt economies without firing a shot and turn climate into a geopolitical bargaining chip.
Exotic risk, example 7 - AI-driven attacks, AI-driven defences.
Emerging trends like AI-driven attacks (and AI-driven defences), the quantum cryptography arms race, deepfake-as-a-service (DFaaS), and cyber-physical espionage (covert activities that exploit vulnerabilities in interconnected cyber-physical systems to gather intelligence, sabotage operations, or influence critical infrastructure) are reshaping the landscape.
We are entering an era where AI-powered attackers and defenders compete in a dynamic and escalating arms race. This “AI vs. AI” development transforms traditional strategies and introduces complex challenges and opportunities for all stakeholders.
Attackers use AI to enhance reconnaissance, as AI automates intelligence gathering, identifying potential targets and weak points in real time. Natural language processing (NLP) can analyze communications, social media, or documents to reveal exploitable information. AI identifies vulnerabilities faster than human attackers, including zero-day vulnerabilities, and generates exploits dynamically. AI can also tailor phishing emails or social engineering attempts by analyzing target behavior, language patterns, and preferences.
AI-powered malware can learn and evolve during an operation, adapting to bypass detection mechanisms and defenses. AI can modify attack payloads based on the real-time responses of a target system.
Defenders are leveraging AI to counteract advanced attacks. In threat detection and response, AI analyzes vast amounts of data to detect anomalies and potential threats that traditional methods might miss. Behavioral analysis can identify unusual patterns in network traffic, user behavior, or system activity.
AI systems can quarantine infected devices, block suspicious activity, and deploy patches autonomously. Predictive AI models help security teams anticipate the attacker’s next move. AI can also create dynamic honeypots that lure attackers into fake environments, gathering intelligence on their tactics without risking real assets. Machine learning helps ensure these traps remain convincing and adaptive.
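The behavioral baselining described above can be illustrated with a toy example. The sketch below uses a simple z-score over hourly connection counts; the data and the 2.5-sigma threshold are assumptions for illustration, and a toy stand-in for the far richer models real AI defenses use.

```python
from statistics import mean, stdev

def zscore_anomalies(samples, threshold=2.5):
    """Return indices of samples deviating from the mean by more than
    `threshold` standard deviations -- a minimal form of the behavioral
    baselining that AI-driven defenses perform at scale."""
    mu, sigma = mean(samples), stdev(samples)
    if sigma == 0:
        return []  # perfectly uniform behavior: nothing stands out
    return [i for i, x in enumerate(samples) if abs(x - mu) / sigma > threshold]

# Hourly outbound connections from one host; the sudden spike could
# indicate data exfiltration worth investigating.
traffic = [42, 39, 44, 41, 40, 43, 38, 420, 41, 40]
print(zscore_anomalies(traffic))  # flags index 7, the 420-connection spike
```

Note the limitation this toy exposes: a single extreme outlier inflates the standard deviation and can hide itself, which is one reason production systems prefer robust statistics and learned models over naive thresholds.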
The real challenge emerges when both attackers and defenders deploy AI systems simultaneously, leading to complex, real-time interactions. Attackers may probe defender AI systems to understand their algorithms and find ways to exploit blind spots. For example, AI-powered attacks may generate data patterns that confuse or overwhelm detection systems. Defenders may probe attacker AI systems in adversarial machine learning to predict and neutralize attacker strategies.
Where both attacker and defender AIs evolve simultaneously, a feedback loop may arise, with each AI learning from the other's responses. This could lead to rapid escalation in attack sophistication and response. AI vs. AI interactions can result in unpredictable behaviors that experts will struggle to understand.
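The feedback loop can be sketched as a toy simulation with deliberately simplified, assumed dynamics (the step size and the "react to the last move" rule are illustrative, not a real model): each side adapts only to the other's most recent move, yet capability escalates round after round.

```python
# Toy escalation model: the attacker tunes its payload to just beat the
# defender's current detection level, and the defender then retrains on
# the observed attack. Neither side plans ahead, but the loop still
# produces monotonic escalation.

def simulate(rounds=5, attack=1.0, detection=1.0, step=0.5):
    history = []
    for _ in range(rounds):
        attack = detection + step      # attacker evades current defenses
        detection = attack + step      # defender adapts to the last attack
        history.append((round(attack, 2), round(detection, 2)))
    return history

for rnd, (a, d) in enumerate(simulate(), 1):
    print(f"round {rnd}: attack={a}, detection={d}")
```

Even this trivial loop shows why AI-vs-AI interactions are hard to bound: the equilibrium depends entirely on the adaptation rule, and small changes to either side's strategy change the trajectory of the whole system.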
The rise of AI vs. AI in cyberespionage represents a shift from human-centric operations to an automated, high-speed conflict. Automated systems can respond aggressively to perceived threats. Feedback loops between adaptive systems may escalate conflict beyond initial intentions. AI has blurred the lines between offense and defense, as it can perform pre-emptive actions that resemble offensive measures.
Autonomous decision-making complicates liability, as actions may not directly align with human intentions. The idea that the most aggressive AI could dominate an AI conflict is both compelling and deeply concerning. An aggressive AI will strike first, overwhelming the defender before its countermeasures can adapt. It prioritizes success over caution, exploiting opportunities others might avoid due to ethical concerns. This suggests a future where AI systems prioritize offensive actions over restraint, potentially leading to significant damage, legal ambiguities, and ethical dilemmas.
Strengthening Hybrid Resilience Through Knowledge
Cyber Risk GmbH develops and maintains 50 specialized websites, each providing critical insights into risk management, compliance, cybersecurity, and resilience.
As part of the Hybrid Resilience Initiative (HRI), these websites serve as a knowledge hub for professionals navigating the complexities of modern hybrid threats, whether in financial services, critical infrastructure, or geopolitical risk.
Explore our resources and stay informed. Knowledge is the first and most important line of defense.
a. General, sectors, industries.
11. Transport Cybersecurity Toolkit
13. Sanctions Risk
14. Travel Security
b. Understanding Cybersecurity.
4. What is Synthetic Identity Fraud?
c. Understanding Cybersecurity in the European Union.
2. The Digital Operational Resilience Act (DORA)
3. The Critical Entities Resilience Directive (CER)
5. The European Data Governance Act (DGA)
6. The European Cyber Resilience Act (CRA)
7. The Digital Services Act (DSA)
8. The Digital Markets Act (DMA)
10. The Artificial Intelligence Act
11. The Artificial Intelligence Liability Directive
12. The Framework for Artificial Intelligence Cybersecurity Practices (FAICP)
13. The EU Cyber Solidarity Act
14. The Digital Networks Act (DNA)
15. The European ePrivacy Regulation
16. The European Digital Identity Regulation
17. The European Media Freedom Act (EMFA)
18. The Corporate Sustainability Due Diligence Directive (CSDDD)
19. The Systemic Cyber Incident Coordination Framework (EU-SCICF)
20. The European Health Data Space (EHDS)
21. The European Financial Data Space (EFDS)
22. The Financial Data Access (FiDA) Regulation
23. The Payment Services Directive 3 (PSD3), Payment Services Regulation (PSR)
24. The Internal Market Emergency and Resilience Act (IMERA)
26. The European Cyber Defence Policy
27. The Strategic Compass of the European Union
28. The European Space Law (EUSL)
29. The EU-US Data Privacy Framework
31. The EU Cyber Diplomacy Toolbox