**LESSONS FROM THE PAST: WHY WE COULDN'T INTEGRATE RISK, PERFORMANCE AND STRATEGY**

Remember when companies had data silos for risk, performance and strategy execution? The owners of those silos used to wield power over insights derived from the data, leading to numerous problems:

- Risks were assessed without connection to the overall context. Back then, we called this a "risk-centric" approach. Even connections between risks were often ignored.
- Companies maintained separate databases for risks, which only a few people in the organization could access. The platforms for these databases were expensive—can you believe that?
- Performance assumptions failed to properly account for uncertainty. We often confused epistemic uncertainty (lack of knowledge) with aleatory uncertainty (randomness).
- Decisions were made with very little quantitative information, and trade-offs were opaque.
- Many C-level executives claimed they were unaware of risky decisions because they had not been warned of the potential consequences. This ignorance often served as a convenient shield from personal liability.

That was a long time ago—2024, to be precise.

**A Risk-Centric Micro-World View**

Just kidding! I'm being ironic: I accept the risk of being accused of clickbaiting. The truth is, risk management has many implementation issues that undermine what is otherwise a highly respectable body of knowledge. Most thinkers, authors, and practitioners (including me) argue that risk management has been done the wrong way. They cite numerous scientific arguments, while most organizations remain oblivious to the problem and continue practicing siloed risk management with inadequate tools that add little value, simply because they are easy to use.

Here's a fact: we were all trained to assess risks by zooming in and treating each one as if it existed in a neatly boxed bowtie—a legacy of tools like FMEA and event investigation.

Fig. 1. Risk-centric View
[[Bowtie View of RM.png]]

Is this a fair representation of risk management as a whole? It may be, but only if we're talking about a micro-world with clearly defined boundaries. Even frameworks like ISO 31000 and COSO ERM acknowledge that this perspective is far too narrow for enterprise-level decision-making. They recommend linking risk to strategy and objectives in a hierarchical, tree-like structure. However, if you focus on one risk and assess it as a standalone entity, you inevitably return to the micro-world of risk-centrism (as shown in Fig. 1).

The hallmark of corporate risk-centrism is the enterprise risk register. While this tool can be useful in specific, contained environments (e.g., a project or contract at a specific moment in time), using a relational database to list unconnected risks across hierarchical levels and timelines—and then assessing them individually to inform the Board—is the opposite of systemic risk assessment.

Uncertainty is like Einstein's gravity—it bends space and time around an organization's objectives, creating non-obvious interactions. Just as you cannot understand the universe simply by listing stars and planets, you cannot grasp uncertainty by focusing solely on individual elements. Of course, as with Newtonian gravity, micro-worlds still matter—it all depends on the level of zoom you need.

**ADOPTING SYSTEMS-THINKING**

The prevailing consensus is that when we zoom out, we should abandon risk-centrism and adopt a systems view of the drivers behind an organization's success (or failure).
The only fixed point in an enterprise system is the enterprise's adjusted value—a fuzzy and elusive metric. But that's no problem: systems are defined by their purpose, not by a central "fixed point." Consider this advice from recognized experts:

**Donella Meadows - Diana Wright**

One of the most powerful ways to influence the behavior of a system is through its purpose or goal. That's because the goal is the direction-setter of the system, the definer of discrepancies that require action, the indicator of compliance, failure, or success toward which balancing feedback loops work. If the goal is defined badly, if it doesn't measure what it's supposed to measure, if it doesn't reflect the real welfare of the system, then the system can't possibly produce a desirable result. Systems, like the three wishes in the traditional fairy tale, have a terrible tendency to produce exactly and only what you ask them to produce. Be careful what you ask them to produce.

**Stafford Beer**

The problem of management itself, which is that of regulating an immense proliferation of variety, is less horrific once the underlying homeostatic regulators are perceived, properly designed, and allowed to absorb the variety of each other's entities. This is the essence of VIABILITY. The purpose of a system is what it does. There is, after all, no point in claiming that the purpose of a system is to do what it consistently fails to do. A comparator (i.e., software), naturally enough, continuously compares the declared purpose with the purpose imputed from the results that the system delivers. This results in the feedback of an error signal, which will modify the original statement of purpose. 'Feedback' is improperly used to refer merely to response, which is how most managers use the term. Feedback generates corrective action (whether positive or negative).

**Hans Læssøe**

Strategic planning is not risk management, but by embedding elements of risk management into the strategic planning process, both processes are vastly improved.

- Strategic planning becomes more nuanced and takes uncertainties explicitly into account.
- Addressing uncertainties may lead to changes in the strategic planning schedule if, e.g., an organization finds that the world is changing more within a strategic horizon than the strategic maneuverability can handle.
- Risk management becomes a supporting element of a process rather than being a standalone and add-on process and burden on management.

**Norman Marks**

One element of risk culture is the degree to which individuals understand that risk and compliance rules apply to everyone as they pursue their business goals. To start, that requires a common understanding of the organization and its business purpose (i.e., their raison d'être). Today, some seem to have lost sight of those business goals, forgetting that they serve the company and shareholders, and not the other way around.

**Stefan Hunziker**

_Recommendations for Improvement_

- Integrate risk management into strategic planning and decision-making processes rather than treating it as a standalone activity.
- Focus on meaningful prioritization of risks aligned with organizational goals, avoiding unnecessary administrative burdens.
- It's time to shift from the negative connotation of 'risk' to a broader understanding of 'uncertainty,' which encompasses threats and opportunities.
- Effective risk management should reduce complexity and resource consumption while supporting leadership in making well-informed decisions.
- A cultural shift is needed to view risk management as a driver of strategic success rather than as a compliance or administrative task.
- Clear, practical communication free of jargon is crucial to integrating risk management into everyday business operations.

_Ultimate Goal_

- Risk management should enhance decision quality, contributing to sustainable business success by managing uncertainties effectively.
- The future of risk management lies in its integration into core business strategies and processes, emphasizing its role in fostering organizational resilience and adaptability.

**Alex Sidorenko**

Very often, however, I see other risk management matters relegated to the Audit Committee, including risk management strategy and other risk management activities. The Audit Committee becomes the place for all risk matters. That is totally against the whole purpose of risk management. If risk management is a decision-making tool (under [RM2](https://riskacademy.blog/rm1-vs-rm2-which-side-will-you-choose/) it sure is), then discussing risks, goals, objectives, performance targets or actual performance separate from risks is insanity. Risk is not a standalone item that needs to be managed (except a few compliance risks, but only because regulators missed the plot and now we all have to pretend compliance risks need to be managed and are not a driver in business decision-making); risk is the other side of the performance coin. Business performance is two-dimensional: reward and risk. How much did we make, and how much did it, or could it have, cost us (how much risk did we take to generate the revenue)? Separating the risk conversation from planning, budgeting and performance conversations should stop asap.

**Werner Gleißner - Thomas Berger**

An integrative risk management approach focused on supporting business decisions and tied to corporate planning via risk aggregation becomes the foundation of management control. The strategic, decision-oriented development of ERM has many commonalities with risk governance (see Stein and Wiedemann 2016; Stein et al. 2019; Wiedemann et al. 2022), as ERM's decision orientation aligns with the risk governance approach (see, for example, Weigel et al. 2018). Moreover, risk governance is integrative, emphasizing the importance of involving all employees in risk culture. The highest maturity level is reached when all employees consciously manage risk ("embedded risk management").

**E. Bracci · A. Bruno · G. D'Amore · R. Ievoli**

In recent years, public management scholars and regulators have stressed the importance of the integration of performance management and risk management (e.g. Halligan, 2009). Despite these calls, especially in the stream of the contribution of public value (Deidda Gagliardo & Saporito, 2021), there are still few empirical studies (e.g. Capaldo et al., 2017) that evidence how the integration works in practice. Researchers have mostly focused their attention on the risk related to disintegration (Arena et al., 2017), while the conditions that enable or hinder integration have been little investigated.

**Carol Williams**

The approach you ultimately choose should not be a carbon copy from a framework or what other organizations are doing, but rather something tailored to your company's specific needs and culture. And regardless of the chosen approach, nothing in a business occurs in a vacuum, so why should your risk assessment? You cannot, and should not, look at risks in isolation because business doesn't work that way.
How can you determine the impact a risk will have if you don't know what the risk is related to? Also, in the next phase (risk analysis and prioritization), this connection will enable you to better understand how one risk could impact other risks should it occur, among other things. It's hard to say what the most critical phase of the ERM process is – they are all equally important. Without reliable information about a risk in the form of impact, likelihood, and other parameters, it will be impossible to understand which risks deserve the most attention and which ones do not.

**Richard Chambers**

Operating with a connected risk mindset means focusing on outcomes that drive value, decision-making, performance, and competitive advantage. When an outcomes-oriented approach is aligned with the organization's strategic objectives, there's no question of your value to the business. You are not "reporting," but rather "achieving." In sum, connected risk thinkers recognize that all organizations exist to achieve objectives—and appreciate that the strategies organizations use to achieve their objectives are inevitably fraught with risks. Consequently, connected risk thinkers focus on strategic outcomes, striving to help their organizations navigate not only operational, compliance, financial, and technology risks, but strategic risks as well. They leverage their extraordinary risk acumen to recognize and understand the systems that define and influence the organization's goals and direction, and to pursue a strategic role on behalf of their organization.

**Graeme Keith**

The path is short from analysis like this to rich, meaningful discussions about the distribution of resources to manage risks, and to meaningful risk metrics like the probability of ruin, the probability of making targets, or how much it rains on a rainy day (VaR or CVaR). Risk management becomes an essential tool in the highest-level strategic operation of the company as it informs decisions regarding capital spend and financial structure. The effects of investments on the risk profile (how the curves move in different scenarios) make risk management indispensable in investment analysis.

**Hernan Huwyler**

Compliance risk assessments can be integrated into decision-making processes using artificial intelligence-based models, without requiring additional software or incurring extra costs.

**Tim Leech**

The role of an internal audit function in this system, where one exists, is to provide independent assurance that senior executives, the CEO, and the board of directors are receiving materially reliable information on the organization's residual risk status/certainty related to core strategic/value creation/value preservation objectives. This is NOT an objective many IA functions or other specialist assurance groups are accountable for today.

**Tom McLeod**

Last week I bemoaned that a profession that I hold dear - Internal Audit - had wasted the last two years since the popularisation of artificial intelligence (https://lnkd.in/gM6v3YJG). This week I thought I would set myself the challenge of imagining an Internal Audit agentic world. Firstly I needed - for my wonderfully ignorant benefit - to define what I meant by an Internal Audit agent: a program or system that is designed to perceive its environment, make decisions and take actions to achieve a specific goal or set of goals. It operates autonomously, meaning it is not directly controlled by a human operator.
So, with nomenclature set, here is my initial dream list of Internal Audit helpful/related agents with their (soon to be, if there is money in it for me!) trademarked names:

1 - Big_Mac: The agent that automatically detects where there has been an internal control weakness in any system, captures the weakness, reports what it is and sends the issue for correction to the personal agent of the person designated as the owner of the internal control weakness.

2 - Done_Your_Homework: There needs to be an agent out there that cleans up after you and makes sure that you do what has been asked of you. It captures what is wrong from Big_Mac, checks to see whether there have been previous similar issues and then creates an action plan that it will enact unless you, as the control owner, step in.

3 - George_Orwell: This agent takes what has happened and then predicts, based on the failing of that particular internal control, what is likely to happen in the next week, month and year. It will give you the opportunity to have an alert ahead of those deadlines, or it automatically makes a decision now as to what needs to be done and gets on and does it.

4 - Manchester_United: This agent is looking for significant variances in performance between what used to happen and what is happening now. Once it has detected that difference, it then searches for the reason for the variance, and if it is a control weakness or failing, it reaches out to Big_Mac for a chat.

5 - Linda_Evangelista: This agent is constantly looking for indicators of fraudulent behaviour, and you can set the misadventure tolerance so you don't have to get out of bed for minor stuff!

6 - Tendulkar_Warne: The aim of this agent is to seek out performance patterns between two competing functions, determine what it is in that performance pattern that delivers brilliance, and then send to all what they need to do to improve to that level.

7 - Socrates: This agent runs across everyone's agents globally (!) and comes back with real-time wisdom to let you know what others are doing that you should be doing to better manage your risks, and you get one chance to deny the suggested approach before it goes off and actions it... possibly making more work for Big_Mac.

______________________________________________________________________________

What we have above is just a tiny sample. The list of recognized experts who share the same beliefs about enterprise risk management is huge. It looks like the problem is not 'what', but 'how'.

**BACK TO THE QUESTION**

But what is the difficulty then? Why does this huge gap exist between what we believe is right and what ends up being done? Why isn't systems-thinking the natural choice?

**1. Integration of risk and performance for decision-making is a complex task.**

I've been working with systems representation for over three decades, mainly connecting KPIs and KRIs for Business Intelligence. It is a powerful tool for consulting work, but only recently has BI software started to handle the math necessary for predictive analysis. Systems representation as a visual resource for executive reporting is fine, but it ends up being a consultant's trick rather than something that gets implemented as routine. It is amazing how difficult it is for many people to use influence diagrams properly (a minimal sketch of what such a diagram encodes follows below).
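To make the idea concrete, here is a minimal sketch of an influence diagram expressed as a directed graph in Python (assuming the networkx library is available; every node name below is hypothetical and purely illustrative, not taken from any real model). The payoff of the representation is that, once drivers, KRIs, KPIs and objectives are connected, you can ask which indicators and objectives any given source of uncertainty can ultimately reach.

```python
# A minimal sketch of an influence diagram as a directed graph.
# All node names are hypothetical and for illustration only.
import networkx as nx

g = nx.DiGraph()

# Uncertain drivers -> risk indicator (KRI) -> performance indicator (KPI) -> objective
g.add_edge("Supplier financial stress", "KRI: supplier delay rate")
g.add_edge("Port congestion",           "KRI: supplier delay rate")
g.add_edge("KRI: supplier delay rate",  "KPI: on-time delivery")
g.add_edge("Machine downtime",          "KPI: on-time delivery")
g.add_edge("KPI: on-time delivery",     "Objective: customer retention")

# For any uncertain driver, list every indicator and objective it can reach.
for driver in ("Supplier financial stress", "Machine downtime"):
    print(driver, "->", sorted(nx.descendants(g, driver)))
```

In practice the edges would also carry quantitative relationships (regressions, conditional probabilities), which is exactly where both the analytical power and the difficulty of making this routine come from.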
Another roadblock is that risk and performance assessments happen at different times, with different teams, using diverse datasets, and they end up being reported in different meetings. Either you have very strong leadership trying to pull it all together, or you regulate and automate the decision checkpoints.

**2. Actually, silos are OK.**

We want to connect silos, not break them down. Interestingly, silos themselves are not inherently problematic—they are essential for organizing work and knowledge. The challenge lies in connecting them effectively, creating pathways for communication and integration.

**3. Convolution\* and quantitative assessment can get complex.**

Not always, of course. But aggregation of different sources of uncertainty demands knowledge and experience from the assessment team, and it is also difficult to incorporate into the company's practices.

\* In mathematics, convolution is a fundamental operation that combines two functions to produce a third function, expressing how the shape of one is modified by the other. Convolution has widespread applications across various fields, including probability theory, where it helps find the distribution of the sum of two or more independent random variables by convolving their individual distributions (a short numerical sketch appears further below).

**THE GAME CHANGER**

In the past, I researched graph technology coupled with system dynamics as a potential solution to the challenge of connecting two indicators that belonged to two diverse datasets and performing a regression analysis. For example: a recurrent defect in manufacturing and customer complaints. Are they correlated, and how? What we used to do was perform the analysis manually, either because someone asked or because you were following your instinct. Then you would throw the results into a PowerPoint file (or, even worse, onto physical slides). Back in the day at Kodak and Embraer, it took a month just to get the datasets. To connect the two indicators for good and obtain periodic insights, you had to open an expensive IT project.

About a decade ago, Power BI and Tableau partially solved the problem. Then the challenge moved on: how do I get data from several risks and from several performance indicators, convolve them with the best method for the decision to be made, and inform the decision-maker with a powerful insight—with a little help from a machine? You need some kind of mortar, a gluing application flexible enough to adapt rapidly to changes and keep connections alive between alien blocks of software. Graph technology emerged to fill that need, but it was a bit cumbersome and rigid until very recently.

Well, now this is being done in many fields, including supply chain management, healthcare, biology, chemistry and finance. AI software developers such as Palantir are leading the way, because of two technologies that have evolved very rapidly in the last two years: _Generative AI_ and _Graph Querying_.
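To ground the last two points, here is a minimal numerical sketch in Python (using numpy, with entirely made-up figures): first a quick check of whether a recurring manufacturing defect moves together with customer complaints, then a Monte Carlo aggregation of two independent loss distributions, which is the numerical counterpart of convolving them.

```python
# Minimal sketch with made-up numbers: (1) correlation between two indicators,
# (2) Monte Carlo aggregation (convolution) of two independent loss distributions.
import numpy as np

rng = np.random.default_rng(42)

# (1) Monthly defect rate (%) and customer complaints over 24 months.
defect_rate = rng.normal(2.0, 0.5, size=24)                  # hypothetical KPI
complaints = 30 + 12 * defect_rate + rng.normal(0, 5, 24)    # complaints partly driven by defects
r = np.corrcoef(defect_rate, complaints)[0, 1]
slope, intercept = np.polyfit(defect_rate, complaints, 1)
print(f"correlation: {r:.2f}, regression: complaints ≈ {slope:.1f} * defect_rate + {intercept:.1f}")

# (2) Two independent annual loss drivers (in $k). The distribution of their sum
# is the convolution of the two individual distributions; sampling realizations
# and adding them approximates it without doing the integral analytically.
supply_losses = rng.lognormal(mean=4.0, sigma=0.8, size=100_000)
quality_losses = rng.gamma(shape=2.0, scale=30.0, size=100_000)
total = supply_losses + quality_losses

print(f"expected total loss: {total.mean():.0f} $k")
print(f"95th percentile (a VaR-style figure): {np.percentile(total, 95):.0f} $k")
```

In a connected architecture this kind of calculation runs whenever the underlying datasets refresh, instead of living in an ad hoc spreadsheet or slide deck.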
**WHERE DO WE GO FROM HERE?**

I have been following the evolution of these converging technologies since GenAI became widely available in late 2022. To work well in tandem, they require a preliminary conceptual alignment so that semantics won't become a barrier. I believe that the domain of decision-making under uncertainty (and, by extension, the domains of risk management and performance management) is not yet served by any general conceptual alignment or semantic agreement, for the reasons previously explained.

You can't implement advanced AI agents without semantic modeling, so companies will have to figure it out by themselves, before their rivals eat their dinner. Unfortunately, as far as I know, ISO and COSO are still fighting old wars with old guns. They should be working together on conceptual alignment, at least so that we could all agree on a common minimal meaning for words and the systems they represent.

Here is the deal: a Large Language Model is trained on a trillion things, but not on your company's specific knowledge. If you feed the LLM a bunch of private documents, you will certainly have a better conversation, informed by that newly acquired knowledge. This is called RAG (Retrieval-Augmented Generation). The problem is that this LLM still doesn't really understand your business and is not prepared to be a reasonable agent, meaning it won't be a big help with applications.

How do you transfer that specific business knowledge to software? You need a computational framework that the machine understands: an ontology, a semantic layer and knowledge graphs to connect datasets—in other words, you should feed the business architecture to the LLM. You can do an astonishing number of nice things when you succeed at that. And you don't need to do the feeding all at once; you can do it bite by bite.

**PROJECT AND ARCHITECTURE**

Last year, in the wake of a tsunami of papers about Agentic AI and reasoning-on-graphs, I started **Project Enchiridion**, with the mission of developing a use-case that can be made tangible for any businessperson and may serve as a test platform for applying a merger of semantic architecture and data analytics to the decision-making and governance domains. I had in mind three main outcomes:

1) **Wiser decision-making**: Clearer trade-off representation of risk and performance, and useful insights at the speed of business.

2) **Dynamic Decision-based Internal Auditing**: For improved governance, Internal Auditing should not only perform risk assurance (a typically risk-centric activity) but also _chance-of-success assurance_. The Enchiridion use-case will host more than one hundred types of relevant decision-making subsystems that auditors can acknowledge, prioritize and, for the critical ones, verify the impact on objectives.

3) **Monitoring and early warning automation**: Inject uncertainty into dashboards and call for attention when thresholds are violated or when there are changes on the horizon.

The project is being developed with a free note-taking app, Obsidian, which offers many important functionalities for the task and is reasonably easy to deploy and edit by anyone interested, including old-timers like me. I am using ChatGPT (GPT-4o) to test an LLM's response to the use-case, i.e., whether the LLM improves its understanding of the specific business, and Neo4j for graphs (a nice read: [[Rattler-The GraphRAG Manifesto Adding Knowledge to GenAI]]).

The *Decision-Making Architecture* is made of:

**1. Ontology**

In computer science, an ontology is a standardized architecture for the representation of knowledge, in the form of agreed domain semantics. The fundamental asset of ontologies is their relative independence from applications, i.e., an ontology consists of relatively generic knowledge that can be reused by different kinds of applications and tasks. An ontology is built by connecting triples of entity, relationship, and another entity.
For example:

- Ontology: Risk Driver → Informs → Decision

Some authors call this a "triplet of subject, verb and predicate". Knowledge graphs are built the same way but are specific to the use-case (answering "which risk driver informs which decision?"):

- Knowledge Graph: Product Contamination (Risk Driver) → Informs → Product Safety Validation (Decision)

An ontology can be designed by an institution or group to communicate knowledge in a specific domain, which many organizations can then replicate in computer applications. This is widely done in the health sciences, biology and many other domains. For example, my custom GPT can retrieve triples from the Obsidian files and build a knowledge graph in seconds for practical applications.

**2. Applied Ontology | Knowledge Graphs**

If an organization adopts an ontology, it must develop an applied ontology (the "metadata layer") and one or more knowledge graphs. This means that the organization adapts to the generalized framework, adding specific entities underneath the ontology, and then transfers that information to specialized graph software (such as Neo4j, GraphDB, ArangoDB, Memgraph, Metaphacts and many others). Applied ontologies also add real-world details, such as causes of risks and relevant controls, enabling advanced querying and automation. The Enchiridion applied ontology provides a blueprint that any organization can tweak, regardless of industry, because it covers all common decision points and their respective trade-offs.

**3. Graph databases**

A graph database is built in one of the graph applications mentioned above, by adding real data underneath a knowledge graph. For example, the results of the risk assessments of product contamination and the test data from product sampling can be connected to the rest of the decision system.
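To make this last step tangible, here is a sketch of how the triple above, together with one assessment result, might be written into Neo4j using its official Python driver. Everything here is hypothetical: the connection details, node labels, relationship types, property names and figures are illustrative, not part of the Enchiridion ontology itself.

```python
# Hypothetical sketch: loading one knowledge-graph triple plus assessment data into Neo4j.
# Labels, properties and values are illustrative only.
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

cypher = """
MERGE (r:RiskDriver {name: $risk})
MERGE (d:Decision {name: $decision})
MERGE (r)-[:INFORMS]->(d)
// Attach the latest assessment result underneath the risk driver
MERGE (a:Assessment {period: $period})
SET a.probability = $probability, a.impact_usd = $impact
MERGE (r)-[:HAS_ASSESSMENT]->(a)
"""

with driver.session() as session:
    session.run(
        cypher,
        risk="Product Contamination",
        decision="Product Safety Validation",
        period="2025-Q1",
        probability=0.03,   # made-up figures
        impact=250_000,
    )

driver.close()
```

Once data sits underneath the knowledge graph like this, the graph stays connected to the source datasets, which is what makes the querying and monitoring described later possible.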
**OVERVIEW OF ENCHIRIDION**

Enchiridion follows the framework for decision support outlined in _Decision-Making under Deep Uncertainty – Theory and Practice_, edited by Marchau et al. for DMDU (Society for Decision-Making under Deep Uncertainty, 2019).

Fig. 2. Framework for Decision Support (page 5)

The implicit assumption is that decisions made in day-to-day operations are just as important as any others, but should involve much less tension in balancing risk and performance, because they follow stricter criteria, whether technical or procedural. The ontology was designed for decision-making under uncertainty—where options exist, information is insufficient, or uncertain input variables create a wide range of plausible outcomes. It assumes that the company establishes policies, defines targets, and sets tolerances for both risk and performance appetites based on its objectives. However, a degree of tension will always remain due to constraints in resources or values.

Risk and performance drivers are assessed to inform decisions, whether systematic or ad hoc. These decisions are made by designated entities (e.g., executive meetings, committees) and are continuously fed back into the system for monitoring, comparing real-world data with expected performance. This creates an endless cycle in a model that any company can adopt—because the beauty of ontologies is their replicability. Enchiridion's basic ontology serves as a 'template diagram' with nodes, labels, and edges (i.e., entities, properties, and their relationships) that organizations populate with their own data, transforming it into an applied ontology.

Fig. 3. Overview of Enchiridion's Architecture
[[Overview Enchiridion Ontology.png]]

Another major influence on the conceptual design was NASA's _Risk-Informed Decision Making Handbook_ (2010), which clearly explains how risk and performance intertwine to guide critical decisions in NASA programs. NASA uses the term _Continuous Risk Management_ for routine activities within operations and processes, which are conceptually separated from RIDM. Figure 4 illustrates the reasoning behind RIDM.

Fig. 4. Overview of NASA RIDM's Logic (page 19)
![[Pasted image 20250131150415.png | 750]]

Fourteen years ago, implementing that concept must have been a significant challenge for NASA. However, with advancements in AI and fast simulation, it has now become much more feasible.

**VISUALIZATION IS A HUGE BENEFIT OF CONNECTEDNESS**

This is a graph view of the Enchiridion Ontology generated by my Custom GPT, ready to feed Neo4j (the graph application):

Fig. 5. Ontology Overview
[[Ontology KG.png]]

Our basic ontology has 20 entities, but the applied ontology for the use-case will end up having hundreds, because each entity is deployed using the logic of the ontology above. You end up getting a very nice view (in graph format) of how decisions are made in the company, and you will be able to:

- Plan audits based on the quality of decision-making (which is the main root cause of excessive or unreasonable risk exposure and of not meeting targets)
- Analyze logic loopholes in how policies are designed and implemented
- Analyze specific decision flows and their interaction with resources
- Refine criteria and assure their congruence
- Refine metrics and how they are correlated
- Obtain insights on how to enhance stakeholder relationships
- Simulate future scenarios with system-dynamics applications
- And many other possibilities

And, finally, you can deploy Agentic AI with an LLM\* that knows your business. All of this with the support of visual tools to identify the critical interactions in the decision blueprint (a short query sketch further below shows the kind of question such a graph can answer).

\* Note: yes, **DeepSeek R1** is a very good candidate.

Your company wants to start with a pilot test? No problem. You can start small and grow later, because the methodology is modular.

**WHAT ABOUT MICRO-WORLDS?**

The term 'micro-world' is not very precise. At the beginning of the article, I mentioned Einstein's vs Newton's theories of gravity, because Newton was concerned with the Solar System and Earth, while Einstein was minding the Universe. But gravity is gravity, of course, and you can count on an acceleration of 9.8 m/s² on the surface of Earth regardless of whether it is a force or a curvature in space-time. Earth surely doesn't feel like a micro-world to Homo sapiens, but everything is relative.

When we know how to play with ontologies and knowledge graphs, we can apply systems-thinking at whatever level we wish. If we think in the abstraction that an enterprise or corporation is the Universe, then a routine decision on the shop floor of plant 3 is a micro-world. But another nice thing about this methodology is that it works at any level. You can design an ontology and its applications for any type of problem in which you need a clear common understanding of the system components and their interaction with uncertainty, be it at the corporate level or in a micro-world at the raw materials inspection lab.
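As promised above, and regardless of the level of zoom, here is the kind of question a populated decision graph can answer. The sketch below reuses the same hypothetical setup and schema as the earlier loading example (labels, properties and the tolerance logic are invented for illustration): it asks which decisions are informed by risk drivers whose latest assessment exceeds its stated tolerance, which is one way to prioritize audits or trigger early warnings.

```python
# Hypothetical sketch: querying the decision graph for early-warning / audit-planning signals.
# The schema (labels, relationship types, properties) is illustrative, not prescriptive.
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

cypher = """
MATCH (r:RiskDriver)-[:HAS_ASSESSMENT]->(a:Assessment),
      (r)-[:INFORMS]->(d:Decision)-[:SUPPORTS]->(o:Objective)
WHERE a.probability * a.impact_usd > r.tolerance_usd
RETURN o.name AS objective, d.name AS decision, r.name AS risk_driver,
       a.probability * a.impact_usd AS expected_loss
ORDER BY expected_loss DESC
"""

with driver.session() as session:
    for record in session.run(cypher):
        print(record["objective"], "|", record["decision"], "|",
              record["risk_driver"], "|", round(record["expected_loss"]))

driver.close()
```

A result set like this is what turns the connected graph from a nice picture into a monitoring and audit-planning instrument.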
**ON THE NEED FOR FEEDBACK AND SUPPORT**

Initially, I planned to use Enchiridion as a tool for consulting assignments, but I fell in love with the project and decided to open it up and share it. Now the project has become my assignment. Obviously, this type of work thrives on feedback, suggestions and subscriptions (I still work for money). I also welcome open criticism and am prepared to get my feelings bruised. There is a lot of work to do, but the main conceptual backbone is ready to be laid out for discussion. Your support is very important for me to continue.

The Risk Leap site ([https://riskleap.com](https://riskleap.com)) has been completely renewed and is now available to every type of visitor, no strings attached. Anyone can browse the site; the entrance to Project Enchiridion is easily located (also free). From there, visitors can enter the Obsidian Publish site and roam the semantic architecture in detail.

Risk Leap continues to be a Professional Development Program, now modified to absorb Project Enchiridion. Quantitative Assessment is still crucial in the training corpus and is one of the eight suggested coaching pathways. Keep in mind that any professional dealing with uncertainty, no matter in what function, needs quant training. One major change: Risk Leap is now primarily in English. More languages can be added if demand is sufficiently high.

Risk Leap subscribers will be able to participate in monthly forums (in English) and will get the key to the up-to-date Enchiridion vault in Obsidian. Subscribers receive a license to access Project Enchiridion's native files and copy them to their own machines, and they may modify and adapt everything to a company or a use-case. For this, subscribers only need to install Obsidian, access the updated files in the central vault, transfer them to their machines and _voilà_.

______________________________________________________________________________

The site also offers customized coaching and consulting services (in English, Portuguese or Spanish; the _voilà_ was a one-off), including the development of specific solutions. This article is the first in a series in which I will discuss the choices I had to make when designing the ontology I use, so you can save time by knowing the crossroads, pros, cons, etc., beforehand. I am sure other points of discussion will surface, such as "What if the company doesn't formalize policies and requirements?" or "I must keep using the GRC software because it was a hard sell. How do we link things?". All these questions are pertinent and must be discussed.

I am waiting for your feedback, which can come by LinkedIn message, e-mail, WhatsApp or through the Risk Leap site.