Decision Management and Machine Learning: The Ideal Partnership



In all but the simplest cases, our experience shows that machine learning (predictive analytics) greatly benefits from decision management to deliver true business value, improve interpretability and mitigate risk. In turn, the days of static business decisions based only on simple, decision-table-based business rules are numbered. Decision management needs machine learning to meet the needs of data-driven businesses: to deliver decisions that are capable of statistical inference and reactive to evolving business conditions, decisions that learn. In short, machine learning and decision management form a powerful partnership.

In this article I discuss how decision management can be used as a unifying framework to integrate machine learning (ML) models and business rules (often expressed as decision tables) into decisions that learn. I show why this partnership is beneficial to both ML and decision modelling. Using BPMN these learning decisions can then be integrated into business processes yielding a powerful, coherent platform for automating dynamic business operations. We assert that the framework of decision management is essential for any complete, accountable application of machine learning to automated business processes.


‘‘the days of static business decisions based only on simple, decision-table-based business rules are numbered’’

About Machine Learning

ML has been variously referred to as predictive, prescriptive or descriptive analytics, or even AI. I eschew these marketing terms here in favour of machine learning, because these algorithms are machines capable of learning and of providing actionable insight in a business context.

Machine learning can be supervised or unsupervised. By supervised machine learning, I refer to a range of techniques and algorithms that use an important subset of data attributes (features) to predict vital facts (labels) about new data, given exposure to labelled old data. These include classifiers, where the label is one of a number of discrete values (e.g., whether a transaction is fraudulent or not), and regressors, for which the label is a continuous value (e.g., a person’s salary). Unsupervised machine learning does not use labels. It includes algorithms that draw actionable insight from the distribution of data, often in high dimensions (e.g., clustering, outlier detection), and that make assertions (e.g., predictions, classifications) about new data instances given this distribution.
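As a minimal illustration of the supervised side of this distinction, the sketch below trains a classifier and a regressor on synthetic data using scikit-learn (the datasets and model choices are placeholders, not recommendations):

```python
from sklearn.datasets import make_classification, make_regression
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

# Classifier: the label is one of a fixed set of discrete values
X_c, y_c = make_classification(n_samples=200, n_features=5, random_state=0)
clf = RandomForestClassifier(n_estimators=20, random_state=0).fit(X_c, y_c)
fraud_flag = clf.predict(X_c[:1])[0]       # e.g., 0 = genuine, 1 = fraudulent

# Regressor: the label is a continuous value
X_r, y_r = make_regression(n_samples=200, n_features=5, random_state=0)
reg = RandomForestRegressor(n_estimators=20, random_state=0).fit(X_r, y_r)
salary_estimate = reg.predict(X_r[:1])[0]  # e.g., a predicted salary
```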

ML is also frequently partitioned into deep and shallow learning. Deep learning uses neural networks, typically well suited to image or audio classification, that offer automatic feature detection at the cost of additional compute resources. Shallow learning, by contrast, uses a range of alternative linear and non-linear techniques that are often more efficient and yield good results for tabular data, but that require features to be explicitly identified in advance by domain experts. This distinction is independent of whether the ML is supervised or unsupervised.

I believe business use of machine learning, supervised or unsupervised, deep or shallow, will become increasingly pervasive as a means of making smarter, real-time operational decisions and enabling decisions that learn.

Machine Learning: The Missing Pieces

Machine learning is very widely applicable. Various industries have used it for tasks as diverse as predicting new uses for established drugs, forecasting credit default, detecting fraud in real time, forecasting customer churn and determining market sentiment from Twitter and newspaper articles. Despite some very promising results at some companies, many find their first applications of machine learning disappointing. These early attempts are often marred by:

  • Rush to implementation, resulting in a poor understanding of, shared vision for and satisfaction of the real business need. Machine learning is fun, and ML projects are often started with very unclear business goals
  • Poor data and poor understanding of data: jumping to building models before a clean dataset, a full understanding of it and a set of features have been acquired
  • Wasted business expertise: poor integration of machine learning with the existing knowledge of a company’s human subject matter experts (SMEs), with the frequent result that ML models conclude what the experts already know
  • Ethical and regulatory concerns (e.g., GDPR): what safeguards are needed for models whose outcomes can impact people’s lives?
  • Unpleasant surprises: discontinuities in normally well-behaved models can yield surprisingly bad results under some circumstances. Some neural network models, for example, can be tricked into producing farcical results by small changes to the input data (such as the failure of facial recognition systems when the subject wears a small badge)
  • Poor interpretability: machine learning models can yield accurate results, but they are often unable to explain the rationale for their outcomes or the specific data on which they relied. Likewise, they cannot show what changes in a case would yield a different result
  • Badly handled model drift: machine learning models can perform well initially, but their predictive performance declines over time. This is often due to covariate or prior drift, a natural change over time in the relationship between variables or in their distributions
  • Unintentional bias, resulting from biased training data or even biased model or attribute selection
  • Poor integration with business processes, preventing the outcomes of machine learning from being actioned safely
  • Conflicted goals: an attempt to use a single machine learning algorithm to solve many problems at once, yielding a model that is sub-optimal for every one of them
  • No quantified benefit: an inability to take the business context into account and to connect and contribute to business key performance indicators
  • Incompleteness: an inability to capture higher-level human criteria such as compassion, ethics, business goals and common sense

How do we address these problems?


How Decision Management Helps

Decision management is a framework for expressing, maintaining and executing business decisions and supporting their integration into a business process to produce actionable insights, either under the auspices of a human operator or automatically. Decision management can combine business rules (including decision tables), machine learning models and invocations of external services into a single decision model (expressed in a notation called DMN) that:

  • Is open and transparent. Can be understood and maintained by business subject matter experts (not just developers or data scientists)
  • Supports process automation. Can be fully integrated into an automated business process in BPMN and support directly executable models with quantified business performance indicators
  • Strengthens controls. Clearly expresses the data dependencies, business objectives and performance indicators of all operations

Using this framework, we can address the above issues with machine learning.


Focuses on Business Benefit

Decision management focuses on the business need and benefit. All elements of decision making (machine learning or otherwise) are driven by a strong understanding of these factors. Every decision is explicitly associated with business objectives and key performance indicators (KPIs). Example business objectives include attaining some regulatory standard, lowering costs or retaining more customers. KPIs are more specific, providing a specific metric goal and timescale. Example KPIs include: ‘the total negative margin should not exceed 5% of the value of the portfolio’, ‘the offer rejection rate should fall at least 1% per quarter’ or ‘the number of cases requiring manual oversight should remain below 3%’. Many decision management stacks allow these business goals to be measured and tracked.

By associating learning decisions with strong business goals in this way we can avoid nebulous ML projects. Furthermore, the outcomes of machine learning can be viewed as insights that can be actioned for business benefit. Decisions that use them can be directly integrated with a company’s business process, improving business performance and accountability.


Provides Strong Data Provenance and Ethical Compliance

Decision management fosters complete understanding of all input data, business rules and machine learning models required to make a decision and the dependencies between them. This is expressed in a highly explicit and visible way. This is invaluable when applying ML.

All machine learning models are explicitly associated with specific attributes of input data called features. Decision management provides a clear chain of custody between each input data attribute and the business decisions and outcomes that rely on it. This improves the rigour of the application of machine learning and makes decision models an ideal vehicle for regulatory and ethical controls such as the European GDPR and the Californian CCPA. Decision management can support techniques such as Active Adversarial Impact Mitigation (AAIM), which determines how effectively your model can predict protected attributes such as gender or race. If it can, this is an indication that some of your features are proxies for these attributes and will result in a biased model. This works whether the features were selected manually or automatically (e.g., by a neural network).

In this way decision models can be used to avoid the use of protected attributes in machine learning, either directly or as inferred or proxy attributes. Decision models can even be used to detect and address the bias introduced by these accidental dependencies.
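The proxy check described above can be sketched as follows: train an auxiliary model to predict the protected attribute from the candidate features, and treat accuracy well above the majority-class baseline as evidence of a proxy. This is an illustrative approximation on synthetic data, not an implementation of AAIM itself:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 1000
protected = rng.integers(0, 2, n)                # hypothetical protected attribute
postcode = protected + rng.normal(0, 0.3, n)     # feature correlated with it (a proxy)
income = rng.normal(50, 10, n)                   # genuinely independent feature
X = np.column_stack([postcode, income])

# If the features predict the protected attribute much better than the
# majority-class baseline, at least one of them is acting as a proxy.
acc = cross_val_score(LogisticRegression(max_iter=1000), X, protected, cv=5).mean()
baseline = max(protected.mean(), 1 - protected.mean())
proxy_suspected = acc > baseline + 0.05
```

Here the `postcode` feature leaks the protected attribute, so the auxiliary model's accuracy far exceeds the baseline and the check fires.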


Augments Machine Learning Models with Expert Knowledge

Inexperienced data scientists often assume that machine learning models are the best solution for every business decision. All that is needed is a sufficiently large dataset of previous observations and their associated decision outcomes (labels), and a supervised model to train and optimize through cross-validation. Thereafter one can replace the decision-making process with the trained model. Surely, by this approach, we can learn to automate any business decision using ML alone? In practice this approach often fails because:

  • In some cases it is not feasible: previous observations may not be available and, even if they are, the efficacy of their outcomes is unknown and may be biased.
  • ‘Learning’ business expertise which is already on hand (in the heads of human experts) is inefficient. It requires a lot of time and computing resources. If there are complex but deterministic elements of the decision making that are well understood and do not need to be relearned, these are better represented as a network of decision tables created by human experts. Even if these are not flawless, they can be improved with ML, rather than learning everything from scratch.
  • Some logic, such as compliance, is volatile, complex and well-defined by nature. It is unwise to make this the focus of a machine learning model otherwise frequent (and expensive) retraining will be required.
  • Decision table networks created by human experts have much higher interpretability than machine learning models which, despite their accuracy, are often unable to clarify their rationale. Many aspects of automated decision making are under increasing pressure to explain their outcomes. For example, regulations demand that decisions demonstrably exhibit no bias between ethnic groups, genders or other protected criteria. In some cases, they are not allowed to use the protected criteria in any way. This is more readily achieved, and crucially more easily demonstrated, using static business rules.
  • It is sometimes not possible to train a machine learning model to have the same recall and precision as a decision model constructed by subject matter experts, because the training data can be noisy and because the training process itself is not perfect.
  • Decision outcomes are often the result of several independent decisions. Combining these into a single, optimized machine learning model increases its complexity, compromising both its accuracy and its interpretability. A better solution is to embed a set of narrowly focussed, tightly defined machine learning models into a decision that augments them with traditional business rules as needed. An example is a credit award decision, which may consist of a machine learning model to determine the likelihood of default and a network of supporting decision tables to provide compliance support and parity checking.

In short, we should use a combination of ML and traditional decision making as dictated by requirements, and use decision modelling as the means of integrating the two approaches.
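A minimal sketch of this combination, using the credit award example above. The function names, thresholds and rules are hypothetical; in practice the risk score would come from a trained model and the compliance rules from a decision table:

```python
def default_probability(applicant: dict) -> float:
    """Placeholder for a narrowly focussed trained ML model's risk score."""
    # In practice this would call a trained classifier's probability output.
    return 0.8 if applicant["missed_payments"] > 2 else 0.1

def compliance_check(applicant: dict) -> bool:
    """Deterministic, volatile rules kept as explicit logic, not learned."""
    return applicant["age"] >= 18 and applicant["country"] in {"UK", "FR", "DE"}

def credit_decision(applicant: dict) -> str:
    # The rules constrain the model: a non-compliant case is rejected
    # regardless of the model's view, and a compliant case is still
    # subject to the learned risk assessment.
    if not compliance_check(applicant):
        return "REJECT: compliance"
    return "REJECT: risk" if default_probability(applicant) > 0.5 else "ACCEPT"
```

The decision model is the glue: each component stays narrow, interpretable and independently maintainable.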


Improves Safety and Performance of Machine Learning

A machine learning model’s predictive performance can be monitored from a decision model, and any drift or discontinuity can be contained and addressed by the overall logic of the decision, improving the robustness of the outcome.
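A simple way to sketch this monitoring: track the model's rolling accuracy on recent production cases and flag when it falls below the validation baseline. The window size and tolerance below are illustrative assumptions:

```python
from collections import deque

class DriftMonitor:
    """Flags when rolling prediction accuracy falls below a floor."""

    def __init__(self, baseline: float, tolerance: float = 0.05, window: int = 100):
        self.floor = baseline - tolerance
        self.recent = deque(maxlen=window)  # rolling record of hit/miss

    def record(self, predicted, actual) -> bool:
        """Record one outcome; return True if drift is suspected."""
        self.recent.append(predicted == actual)
        if len(self.recent) < self.recent.maxlen:
            return False  # not enough evidence yet
        return sum(self.recent) / len(self.recent) < self.floor
```

The decision model can route cases to manual review, or trigger retraining, whenever the monitor fires.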

Using decision management, ML models can be combined with business knowledge from subject matter experts expressed as rules such as decision tables. These rules can augment the models with real-world business expertise, enhancing their predictive accuracy and reducing their training time. Furthermore, these rules can be used to constrain the machine learning component in accordance with required controls. Of course, there are other means of establishing these controls, but only decision management allows machine learning models and controls to be combined in ways that are transparent to non-technical business experts.

Decision management can also be used to track online machine learning models: those which learn and constantly retrain as they process production data.


Enhances Power and Interpretability of Machine Learning

Sets of different ML models can be combined, in a decision model, into an ensemble (or set of alternate specialists) which collaborate to improve overall business performance. This facilitates integration of existing models with different zones of applicability.

Decision models are highly transparent, and these techniques can be used to improve the interpretability of inscrutable machine learning models, making it possible for business subject matter experts to understand them holistically and to understand specifically how they generated each outcome in production. This approach is more applicable to shallow ML working on tabular data than deep learning on image data. However even here techniques like exemplars, attention and bounding boxes can be used to justify the outcome of neural networks. This facilitates an equal partnership between data scientists and business SMEs in the design and governance of machine learning models.



Although originally designed to manage the expression, governance and execution of business decisions based solely on decision tables, decision management is rapidly becoming the best means of integrating machine learning (especially on-line machine learning) into a robust, end-to-end business process. ML and decision management mutually strengthen one another:

  • ML provides decision models with the ability to supplement decision making with effective behaviours learned from data
  • Decision management brings ML the benefit of improved alignment with business goals and greater interpretability.

Better AI Transparency Using Decision Modeling

Many Effective AI Models Can’t Explain Their Outcome

Some AI models (machine learning predictors and analytics) have strong predictive capability but are notoriously poor at providing any justification or rationale for their output. We say that they are non-interpretable or opaque. This means that they can be trained to tell us whether, for example, someone is likely to default on a loan, but cannot explain in a given case why this is.

Let’s say we are using a neural network to determine if a client is likely to default on a loan. After we’ve submitted the client’s details (their features) to the neural network and obtained a prediction, we may want to know why this prediction was made (e.g., the client may reasonably ask why the loan was denied). We could examine the internal nodes of the network in search of a justification, but the collection of weights and neuron states we would see doesn’t convey any meaningful business representation of the rationale for the result. This is because each neuron state, and the weights of the links that connect them, are mathematical abstractions that don’t relate to tangible aspects of human decision-making.

The same is true of many other high-performance machine learning models. For example, the classifications produced by kernel support vector machines (kSVM), k-nearest neighbour and gradient boosting models may be very accurate, but none of these models can explain the reason for their results. They can show the decision boundary (the border between one result class and the next), but as this is an n-dimensional hyperplane it is extremely difficult to visualize or understand.  This lack of transparency makes it hard to justify using machine learning to make decisions that might affect people’s lives and for which a rationale is required.

Providing An Explanation

There are several ways this problem is currently being addressed:

  1. Train a model to provide rationale. In addition to training an opaque model to accurately classify data, train another to explain it.
  2. Use value perturbation. For a given set of inputs, adjust each feature of the input independently to see how much adjustment would be needed to get a different outcome. This tells you the significance of each feature in determining the output and gives insight into the reasons for it.
  3. Use an ensemble. Use a group of different machine learning models on each input, some of which are transparent.
  4. Explanation based prediction. Use an AI model that bases its outcome on an explanation, so that building a rationale for its outputs is a key part of its prediction process.

Model Driven Rationale

In this approach, alongside the (opaque) machine learning model used to predict the outcome we train another (not necessarily of the same type) to provide a set of reasons for the outcome. Because of the need for high performance (in providing an accurate explanation), this second model is also opaque. This ensemble is represented below using the business decision modelling standard DMN.

The left hand model explains the outcome generated by the right

DMN DRD Showing Two Model Ensemble: One Model Explains the Other

Both models are trained (in parallel) on sample data: the first labelled with the correct outcome and the second with the outcome and a supporting explanation. Then we apply both models in parallel: one provides the outcome and the other the explanation. This approach has some important drawbacks in practice:

  • It can be very labour intensive to train the explanation model because you cannot rely on historical data (e.g., a history of who defaulted on their loans) for the explanation labels. Frequently you must create these labels by hand.
  • It is difficult to be sure that your explanation training set has full coverage of all scenarios for which explanations may be required in future.
  • You have no guarantee that the outcome of the first model will always be consistent with that of the second. It’s possible, if you are near the decision boundary, that you might obtain a loan rejection outcome from the first model and a set of reasons for acceptance from the second. The likelihood of this can be reduced (by using an ensemble) but never eliminated.
  • The explanation outputs stand alone and no further explanation is available because the model that produced them is itself opaque.
  • The two models are still opaque, so there is still no general understanding of how they work outside the outcome of specific cases.


Value Perturbation

This approach uses a system like LIME to perturb the features of inputs to an opaque AI model to see which ones make a difference in the outcome (i.e., cause it to cross a decision boundary). LIME will ‘explain’ an observation by perturbing the inputs for that observation a number of times, predicting the perturbed observations and fitting an explainable model to that new sample space. The idea is to fit a simple model in the neighbourhood of the observation to be explained. A DMN diagram depicting this approach is shown below.

LIME uses the opaque AI module to classify perturbed variants of the data to give an explanation for the outcome

DMN DRD Showing How LIME Produces Explanations for Opaque AI Modules

In our loan example, this technique might reveal that, although the loan was rejected, if the applicant’s house had been worth $50,000 more, or their conviction for car theft had been spent, they would have been granted the loan. This technique can also tell us which features had very limited or no impact on the outcome.
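The perturbation idea can be sketched without the LIME library itself: hold the case fixed, sweep one feature at a time and record the smallest tried change that flips the model's outcome. The scoring function here is a hypothetical stand-in for an opaque trained model:

```python
def loan_model(features: dict) -> str:
    """Hypothetical stand-in for an opaque trained model."""
    score = features["house_value"] / 100_000 - 2 * features["unspent_convictions"]
    return "ACCEPT" if score > 0 else "REJECT"

def explain_by_perturbation(features: dict, deltas: dict) -> dict:
    """For each feature, find the smallest tried perturbation that flips the outcome."""
    base = loan_model(features)
    flips = {}
    for name, steps in deltas.items():
        for step in steps:
            perturbed = dict(features, **{name: features[name] + step})
            if loan_model(perturbed) != base:
                flips[name] = step  # this change alone would reverse the decision
                break
    return flips

applicant = {"house_value": 180_000, "unspent_convictions": 1}
flips = explain_by_perturbation(
    applicant,
    {"house_value": [10_000, 50_000, 100_000], "unspent_convictions": [-1]},
)
# flips records that a $50,000 rise in house value, or discounting the
# conviction, would each have reversed the rejection
```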

This technique is powerful because it applies equally to any predictive machine learning model and does not require the training of another model. However it has a few disadvantages:

  • Each case fed to the machine learning model must be replicated with perturbations to every feature (attribute) in order to test its significance, so the input data volume to the machine learning model (and therefore the cost of using it) rises enormously.
  • As above, the model is still opaque, so there is still no general understanding of how it works outside the outcome of specific cases
  • As pointed out in the original paper on LIME, it relies on the decision boundary around the point of interest being a good fit with the transparent model. This may not always be the case, so, to combat this, a measure of the faithfulness of the fit is produced.

Using an Ensemble

The use of multiple machine learning models in collaborating groups (ensembles) has been common practice for many decades. The aim of bagging (one popular ensemble technique) is typically to increase the accuracy of the overall model by training many different sub-models on the same data so they each rely on different quirks of that data. This is rather like the ‘wisdom of crowds’ idea: one gets more accurate results if you ask many different people the same question because you accept their collective wisdom whilst ignoring individual idiosyncrasies. An ensemble of machine learning models is, collectively, less likely to overfit the training data. This technique is used to make random forests from decision trees. In use, the same data is applied to many models and they vote on the outcome.

This technique can be applied to solve transparency issues by combining a very accurate opaque model with a transparent model. Transparent models, such as decision trees generated by Quinlan’s C5.0, or rule sets created by algorithms like RIPPER (Repeated Incremental Pruning to Produce Error Reduction)[1], are typically less accurate than opaque alternatives but much easier to interpret. The comparatively poor performance of these transparent models (compared to the opaque ones) is not an issue because, over the entire dataset, the accuracy of an ensemble is usually higher than that of the best member providing the members are sufficiently diverse. A DMN model explaining this approach is shown below.

The approximate transparent model provides an explanation for the accurate opaque model.

DMN DRD Showing Ensemble of Opaque and Transparent AI Modules

However, the real advantage of this approach over the others is that, because the transparent models are decision trees, rules or linear models, they can be represented statically by a decision service. In other words, the decision tree or rules produced by the transparent analytic can be represented directly in DMN and made available to all stakeholders. This means that this approach not only provides an explanation for any outcome, but also an understanding (albeit an approximation) of how the decision is made in general, irrespective of any specific input data.

Using Ensembles in Practice

The real purpose of these transparent models is to produce an outcome and an explanation. Clearly the explanation is only useful if the outcome of the transparent and opaque models agree for the data provided. This checking is the job of the top level decision in the DRD shown. For each case there are two possibilities:

  • The outcomes agree, in which case the explanation is a plausible account of how the decision was reached (note that it is not necessarily ‘the’ explanation as it has been produced by another model).
  • The outcomes disagree, in which case the explanation is useless and the outcome is possibly marginal. In a small subset of these cases (when using techniques like soft voting) the transparent model’s outcome may be correct making the explanation useful. Nevertheless the overall confidence in the outcome is reduced because of the split vote.
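The agreement check described above can be sketched with scikit-learn, pairing an opaque gradient-boosting model with a shallow (transparent) decision tree; an explanation is offered only when the two outcomes agree. Models and data here are illustrative:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=500, n_features=4, random_state=0)
opaque = GradientBoostingClassifier(random_state=0).fit(X, y)               # accurate, opaque
transparent = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y) # approximate, readable

def decide_and_explain(case):
    """Top-level decision: an explanation is offered only when the models agree."""
    outcome = opaque.predict(case.reshape(1, -1))[0]
    shadow = transparent.predict(case.reshape(1, -1))[0]
    if outcome == shadow:
        # The tree's rules stand in for a statically representable rationale.
        return outcome, export_text(transparent)
    return outcome, None  # split vote: outcome stands, but no plausible explanation

outcome, rationale = decide_and_explain(X[0])
```

The text produced by `export_text` is the static, rule-like view of the transparent member that could be carried over into DMN.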

The advantage of this approach is that we have a static representation of an opaque model in the DMN standard, which gives an indication of how the model works and can be used to explain its behaviour both in specific cases and in general.

If the precision of our opaque model is 99% and that of our transparent model is 90% (figures obtained from real examples) then the worst case probability for obtaining an accurate outcome and a plausible explanation is 89%. Having a parallel array of transparent decision models would increase this accuracy at the cost of making the whole model harder to understand. Individual explanations would retain their transparency.

Explanation Based Prediction

This is a relatively new approach, typified by examples like CEN (contextual explanation networks)[2], in which explanations are created as part of the prediction process without overhead. This is a probabilistic, graph-based approach built on neural networks. As far as we are aware there are no generally-available implementations yet. The key advantages of this approach are:

  • The explanation is, by definition, an exact explanation of why a given case yielded the outcome it did. None of the other approaches can be sure of this.
  • The time and accuracy performance of the predictor is broadly consistent with other neural networks of this kind. There are no accuracy or run-time consequences of using this approach.

The disadvantage of this approach is that it only works with a specific implementation of neural networks (which is not yet generally supported) and only yields a case-by-case explanation, rather than a static overview of decision-making.


Decision Modelling is an invaluable way of improving the transparency of certain types of narrow AI modules, machine learning models and predictive analytics which are almost always non-transparent (opaque). A particularly powerful means of doing this for datasets with fewer than 50 features is to combine both opaque and transparent predictors in an ensemble. The ensemble can then provide both an explanation for the outcome of a specific case and a general overview of the decision making process.

The transparent model uses a decision tree or rules to represent the logic of classification. Using DMN to represent these rules provides a standard and powerful means of making the model transparent to many stakeholders. DMN can also be used to represent the ensemble itself, as shown in the article’s examples.

Dr Jan Purchase and David Petchey are presenting two examples of this approach at Decision CAMP 2018 in Luxembourg, 17-19 September 2018. Why not join us?


My thanks to David Petchey for his considerable contributions to this article. Thanks also to CCRI for the headline image.

[1] Repeated Incremental Pruning to Produce Error Reduction (RIPPER) was proposed by William W. Cohen as an optimized version of IREP. See William W. Cohen: Fast Effective Rule Induction. In: Twelfth International Conference on Machine Learning, 115-123, 1995.

[2] Contextual Explanation Networks. Maruan Al-Shedivat, Avinava Dubey, Eric P. Xing, Carnegie Mellon University, January 2018.


Improve Your Chances of AI Success with Decision Modeling

Increasing Use of Narrow AI in Business Automation

In this series of articles I explore how decision management and modeling can help increase the success of AI deployments.

As part of their digital transformation initiatives, companies are going beyond the established use of predictive analytics by embedding ‘narrow’ artificial intelligence models within their automated systems. The outcomes of these models directly control the system’s actions. Not to be confused with Strong AI, which attempts to approximate general human intelligence, this ‘Narrow’ (or ‘Weak’) AI uses machine learning to supplement (or even replace) human judgement in very specific areas. Typically these systems automate the acquisition of business insights from data, previously requiring human experience, and then act on these insights with or without human supervision.

AI models provide observations that companies can use to: segment and target higher value customers; personalize service and product offerings to better anticipate customer requirements; and predict and avoid customer churn. By these means companies hope to acquire, satisfy and retain higher value customers. Other uses of narrow AI include: optimizing transport logistics by anticipating demand for products; detecting fraud in real time; and automating market sentiment analysis to understand the public mood and anticipate market changes.

Contrary to the hype, many are discovering that this use of narrow AI and machine learning does not guarantee success and has its own drawbacks. Frequently, initial attempts to use AI models can be expensive and of surprisingly limited value. Highly trained and expensive personnel and sophisticated, high-performance hardware can balloon budgets while the promised business benefits remain elusive. Why is this?

Narrow AI Informs Business Decisions

Some projects lose sight of the fact that narrow AI models are used, first and foremost, to inform a business decision: either directly, by predicting something a business can act on to make the decision more profitable, or indirectly, by providing insight into hidden relationships in business data that might be exploited. This is often a commercial decision such as:

  • Should we target this customer for sales and services? What’s their risk/reward profile?
  • What kind of customer is it: for which products and pitches are they eligible, and to which would they be most responsive?
  • What specific actions should we take to pre-emptively please customers and prevent churn?
  • How can we reduce costs and minimize delays by predicting demand for products and services?
  • Does the pattern of customer behaviour suggest fraud or some other danger (e.g., incipient insolvency)?

It may cover automation of decisions which previously required human guidance:

  • Are recent changes in blood chemistry indicative of a medical condition?
  • Given past behaviour is this employee likely to be unreliable?

When AI projects lose sight of this business decision they deliver less benefit as a result.

This is because an accurate definition of this business decision and its business context are valuable assets in selecting, training and using machine learning techniques. Decisions help to focus the application of each AI model, define its business value and guide its evolution.

How Decision Models Help

In this series of posts we’ll be looking specifically at how decision models help to:

  • Define a Business Context: to show how the use of AI models fit into the big picture—how they collaborate to generate business insight, what their requirements are and exactly how they impact company behaviour.
  • Improve the Transparency of AI models: to make even the most opaque AI models yield an explanation for their outcomes—essential to meet the increasing public and regulatory demand for transparent decision-making in all areas of business.
  • Define Business Goals: to focus AI projects and provide both a business case and a means to measure their success.
  • Help SMEs to Steer AI: providing the best fusion of machine learning and human expertise by depicting how existing expertise informs and constrains AI.

In this article, let’s look at the first of these…

How Decisions Clarify the Business Context of AI

Business decisions provide a defining context for the use of AI Models and this is important because…

Each AI Model Should Be Applied to a Well-Defined Task

Narrow AI models are best developed for a specific purpose, to be used at a specific time and under specified circumstances. They rely on training and test data that are well understood and of adequate quality for the task at hand. In many circumstances they collaborate with business rules and other analytic models to achieve a business outcome. They perform poorly if their purpose is vaguely specified or appropriate data is not provided. They likewise underperform when they are unfocused and try to address more than one issue simultaneously (the ‘jack of all trades’ problem).

Defining a decision model is an excellent way of expressing both the ‘big-picture’ and the specific details of the context in which AI models are used. A DMN decision requirements diagram (DRD) shows how AI models collaborate with other sources of business information and knowledge to make a specific contribution to a business decision. It drives out the data and knowledge requirements of models and establishes a clearly defined relationship between them and the business decisions that use them. Such a context defines the inputs and outputs of the AI models and how these align with the key business actors, processes (using process oriented metadata in the DRD) and rules.

Example Decision Model

Consider the example DMN decision requirements diagram (DRD) below. This diagram shows how two AI models collaborate to determine the details of a client mortgage offer. For clarity, we’ve added some colour to better illustrate the integration of narrow AI (note: this isn’t part of the DMN standard). Real DMN DRDs are, at this level, agnostic about the technology used to implement decisions.



A Decision Model DRD Illustrating the Context for Two AI Models (in Green)

Be aware that there is much more to a DRD than just a box-and-line diagram (see below) and much more to a decision model than just a DRD. Nevertheless, this single diagram tells us a lot about the use of narrow AI in this example.

  • The yellow rectangles represent conventional decisions based on business rules or human decision-making.
  • The green rectangles are decisions implemented by AI models. The Determine Property Risks decision uses Bayesian inference to classify the key physical perils within the vicinity of the property, and Determine Likely Property Value uses a neural network to estimate the value of the property given its survey details. This highlights the fact that each decision in a DRD can be underpinned by business rules, machine learning models or any other executable representation. The DRD is agnostic of this and just shows the dependencies between them all.
  • The yellow knowledge sources (with wavy bases) are sources of knowledge that inform or constrain their respective decisions. They represent guidelines, policies, regulations, legal mandates, etc.
  • The blue knowledge sources, attached to decisions that use narrow AI, represent the training and test data sets used to build each model. This representation holds for both batch-trained and continuously trained machine learning models.
  • The rounded rectangles represent data sources.
  • The lines represent information (or knowledge) dependencies: in each case, the item at the ‘arrow’ or ‘circle’ end of the line depends on the information or knowledge made available by the item at the ‘plain’ end.
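The collaboration the DRD depicts can also be sketched in code. The following Python fragment is purely illustrative (every function name, rule and threshold here is hypothetical, invented for this sketch rather than taken from the example): two AI sub-decisions feed a conventional, rules-based decision, mirroring the dependency lines in the diagram.

```python
# Illustrative stand-ins for the two AI models in the example DRD and
# the rules-based decision that depends on their outputs.

def determine_property_risks(location: str) -> set[str]:
    """Stand-in for the Bayesian classifier of physical perils."""
    # A real implementation would infer perils from geographic and
    # historical data; this stub only fixes the decision's interface.
    return {"flood"} if location.startswith("river") else set()

def determine_likely_property_value(floor_area_m2: float) -> float:
    """Stand-in for the neural-network value regressor."""
    return 2500.0 * floor_area_m2  # placeholder estimate, not a real model

def determine_mortgage_offer(location: str, floor_area_m2: float,
                             requested_loan: float) -> str:
    """A conventional, rules-based decision with two AI dependencies."""
    risks = determine_property_risks(location)
    value = determine_likely_property_value(floor_area_m2)
    if "flood" in risks:
        return "refer to underwriter"
    if requested_loan > 0.9 * value:
        return "decline"
    return "offer"
```

Note how the code's dependency structure mirrors the DRD: either green decision's implementation can change without affecting the top-level decision, provided its interface holds.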

How This Helps

Notice how the DRD:

  • Allows us to be explicit about the combined use of AI and business rules in business decision-making, so that all stakeholders can see the extent to which AI is being used and how.
  • Clearly defines all the dependencies in our decision making, helping us to understand and manage the collaboration and handle change in individual parts as well as informing all stakeholders precisely how AI is contributing.
  • Captures all the data requirements of the AI models—both in training and in use—in addition to those of the other decision-making elements. The process of completing a DRD drives out the information required to support the models and their relationships with data used elsewhere in the same decision. It can also identify missing data early.
  • Allows us to document the collaboration of AI modules using a recognized international standard for decision modeling: DMN. This means we can share our models more easily with others and take advantage of the many DMN tools on the market.

The DRD is just the diagrammatic representation of the dependencies between the elements of decision-making. The DMN standard also defines many properties for each of the symbols on the DRD that provide a wealth of additional information (such as its business goal and key performance indicators). In addition, the decision logic diagram provides detail on the business behaviour of each decision-making element. In the case of a decision using narrow AI, this could be a reference to a specific machine learning model and a definition of its interface.
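For an AI-backed decision, that decision-logic definition might amount to little more than a reference to the model plus a typed interface. The following is a minimal sketch, under the assumption that the decision is implemented as a service; all type, field and version names are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class SurveyDetails:
    """Input data for the valuation decision (an input in the DRD)."""
    floor_area_m2: float
    bedrooms: int
    postcode: str

@dataclass
class Valuation:
    """The decision's output, as consumed by downstream decisions."""
    estimated_value: float
    model_version: str  # recording the model version supports auditability

def determine_likely_property_value(survey: SurveyDetails) -> Valuation:
    """Decision logic: a reference to the regressor plus its interface."""
    # A trained model would be invoked here; this stub only fixes the
    # contract that the decision model establishes.
    estimate = 2500.0 * survey.floor_area_m2 + 15000.0 * survey.bedrooms
    return Valuation(estimated_value=estimate, model_version="v1-demo")
```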


The process of decision modelling adds much needed rigour to the design of AI model deployments, forcing AI engineers to be explicit about their models’ business contribution, data requirements and use scenarios early.

Currently, the AI and machine learning community has no notation to express how models interact with input data, training data and each other to achieve business outcomes, or how their use fits into a business process. DMN (and, by association, BPMN) is well placed to fulfil this role. DMN could be used, for example, to illustrate the architecture of AI model ensembles and to show, using conventional decision tables, how ensemble voting works.
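For instance, ensemble voting could be captured as a small, decision-table-like rule set. This Python sketch is purely illustrative: the first-hit evaluation order echoes a DMN decision table's hit policy, but the thresholds and outcomes are invented for the example.

```python
from collections import Counter

def majority(predictions: list[str]) -> tuple[str, float]:
    """Return the most common label and its share of the vote."""
    label, count = Counter(predictions).most_common(1)[0]
    return label, count / len(predictions)

# A decision-table-like rule set, evaluated top-down with the first
# matching row winning (analogous to a "first" hit policy).
RULES = [
    (lambda share: share >= 0.75, "accept"),
    (lambda share: share >= 0.5,  "accept with review"),
    (lambda share: True,          "refer to human"),
]

def ensemble_decision(predictions: list[str]) -> tuple[str, str]:
    """Combine ensemble members' labels into a label and an outcome."""
    label, share = majority(predictions)
    for condition, outcome in RULES:
        if condition(share):
            return label, outcome
```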

In our next article we consider how decision modelling can improve the transparency of notoriously inscrutable AI models like neural networks and support vector machines…