
Among the many new technologies, and the old technologies repackaged in new bottles, Artificial Intelligence and its sub-domain of Machine Learning have indisputably become cornerstones of digital transformation and innovation in the modern enterprise. It is fair to say most medium to large enterprises across industries have one or more AI/ML initiatives under way, and that all of them are on a journey to create tangible business value from those initiatives. Many find the journey hard for a variety of reasons, typically ranging from not having sufficient quality data for the subject areas under consideration, to picking the wrong use cases, to architecting the overall solution incorrectly.

One problem we often see now is, within the context of a specific use case, the dilemma of whether to use traditional process automation or machine learning, and the mismatched expectations that follow for both. It usually boils down to how to split the solution to a particular use case problem between the two. Let’s take a step back and look at what each of these entails.

Traditional rule-based process automation is characterized by:

  • Being deterministic and associated with certainty: it implements a set of fixed rules codified in a program, working on a known set of inputs to produce a specifically codified, known result
  • Once validated against functional “old” test data and expected results, the program consistently produces the same results as per the coded rules, with no dynamic change in behavior when run on “new” data
  • Unless there are bugs in the program, you typically expect it to give 100% correct results 100% of the time; a minimal sketch of this behavior follows this list.
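
To make the contrast concrete, here is a minimal sketch (in Python) of what rule-based automation looks like. The rules, thresholds, and field names are hypothetical, but the point stands: the same input always yields the same, fully codified result.

```python
def approve_refund(order):
    """Fixed, codified rules: the same order always gets the same decision."""
    if order["days_since_purchase"] > 30:
        return "REJECT: outside 30-day return window"
    if order["amount"] > 500:
        return "ESCALATE: manual review required above $500"
    return "APPROVE"

order = {"days_since_purchase": 12, "amount": 89.99}
print(approve_refund(order))  # "APPROVE" on every run, for this input
```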

Machine learning is characterized by:

  • Being stochastic (having an aspect of randomness) or probabilistic: intrinsically it deals with an element of uncertainty in the “predicted” results
  • As the machine learning algorithm continuously learns from data, trying to best fit the given data to one or more linear and/or non-linear mathematical functions, the results from the ML model almost always vary as it encounters “new” data
  • The stochastic/probabilistic nature directly implies that you typically cannot (and should not) expect anywhere near 100% accuracy from the model; for many business problems, an accuracy of around 70% or higher is often considered a usable model, though what counts as “good” depends on the use case. Of course, the model should keep pushing the accuracy of its predictions up as it is retrained on newer data; a sketch of this probabilistic behavior follows this list.
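
By contrast, here is a minimal sketch of that probabilistic behavior, using scikit-learn on synthetic data (a stand-in for real business data): the model outputs a probability rather than a certainty, and its accuracy on unseen data is below 100% by design.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic binary-classification data standing in for real business data
X, y = make_classification(n_samples=1000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# The model "predicts" a probability, not a determined answer
print(model.predict_proba(X_test[:1]))                # e.g. [[0.08 0.92]]
print(accuracy_score(y_test, model.predict(X_test)))  # typically well below 1.0
```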

In many use cases where machine learning is used today, the solution architecture typically has some parts that are rule-based and others that are ML model-driven. So, when do we use what? Here is a quick playbook.

Use traditional rule-based process automation for the parts of the solution where:

  • The problem domain is clear and straightforward, and the processing can be codified into a simple, fixed set of rules; much of transaction processing and reporting/BI analytics falls into this category
  • 100% accuracy of results is required and achievable every time the program is run; business processes where any significant element of material risk is unacceptable will always need this level of accuracy (for example, the backend processing at an ATM when a customer withdraws money)
  • Straight-through processing without human intervention is possible and desirable, without contravening the point above (a hypothetical sketch of such rules follows this list)
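
As a hypothetical illustration of the ATM example, the backend checks might look like the following sketch: every rule is fixed and fully codified, must be 100% correct, and runs straight through with no human in the loop (the field names and limits are made up).

```python
def authorize_withdrawal(account, amount):
    """Deterministic checks executed straight through, no human intervention."""
    if account["card_blocked"]:
        return False, "Card blocked"
    if amount > account["daily_limit"]:
        return False, "Daily limit exceeded"
    if amount > account["balance"]:
        return False, "Insufficient funds"
    return True, "Authorized"

account = {"card_blocked": False, "daily_limit": 1000, "balance": 250.00}
print(authorize_withdrawal(account, 200))  # (True, 'Authorized')
print(authorize_withdrawal(account, 300))  # (False, 'Insufficient funds')
```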

Use machine learning for the other parts of the solution where:

  • The problem domain inherently deals with uncertainty, and results are “predicted” rather than “determined”
  • The model has to learn and continuously refine the solution iteratively as it encounters newer data
  • The problem domain is typically multi-dimensional, involving many variables (often in the hundreds or even thousands) or a dynamically changing structure and nature of input data; in such cases it is generally next to impossible to codify all the possible combinations as fixed rules in a program
  • The problem domain and the solution are not well understood, or the problem domain is an open one to be explored for innovative solutions; unsupervised ML models using algorithms such as K-means clustering and association rule mining are good at detecting patterns in the input data that give a human useful information for further processing and decision-making – for example, semi-automating the detection of fraudulent transactions (see the sketch after this list) – while supervised models can assist an underwriter with a predicted probability of default for a loan applicant
  • It is accepted that the model will start with lower accuracy and keep improving as it is retrained on newer data
  • The model’s results usually assist a human in making decisions with higher productivity; this does not mean straight-through processing with no human intervention is impossible – for example, a movie recommender system is completely automated, inaccuracies in its results are not generally seen as a big issue, and the system keeps getting better over time as it gets to “know” the user
  • The problem domain involves unstructured data such as text, audio, or video; these are squarely in the domain of machine learning and extremely hard or impossible to handle with a rule-based approach.
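
To illustrate the pattern-detection point, here is a minimal sketch of unsupervised clustering with K-means on synthetic, hypothetical transaction features; the cluster labels are not verdicts but patterns a human can investigate further.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical 2-feature transactions: [amount, hour_of_day]
rng = np.random.default_rng(42)
normal = rng.normal(loc=[50, 14], scale=[20, 3], size=(200, 2))
unusual = rng.normal(loc=[900, 3], scale=[100, 1], size=(10, 2))
transactions = np.vstack([normal, unusual])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=42).fit(transactions)

# The small cluster of large, late-night transactions is a pattern worth a
# human's attention -- not a determination of fraud
labels, counts = np.unique(kmeans.labels_, return_counts=True)
print(dict(zip(labels.tolist(), counts.tolist())))  # e.g. {0: 200, 1: 10}
```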


Ramki Sethuraman