Are Machine Learning Models to be Trusted?

October 13, 2020

Well-constructed machine learning models can give a business a competitive advantage by enabling more nimble decisions, but ML models should not be left to themselves. Companies that can make decisions faster can capture new opportunities sooner and compete with, and take share from, larger rivals. While ML can be very valuable, there are also risks when models don’t perform as expected or perpetuate past mistakes.

To work, ML models need to be trained, and how we train them dictates whether outcomes are good or bad. Machine learning models are like children: they learn from what they see around them and don’t differentiate between good and bad examples. Models that malfunction or make decisions that cut against society’s ethics can cause serious damage to brands or create legal problems. ML algorithms need rules, or guardrails, to ensure that they don’t cause irreversible damage. With human-defined rules in place to monitor ML models, we can have more confidence that they will not make unscrupulous decisions.

How Can ML Get Into Trouble?

Well, by learning from our bad habits and unethical past behavior. Models are developed using historical data, and if that data includes biases, those prejudices can unwittingly be incorporated into the model. In many cases, ML models can even amplify our past poor decisions. Bias around gender and race, in particular, can lead ML models to make discriminatory decisions. For example, a model used to hire software developers may detect that the majority of software developers in the data are male and learn to disqualify applicants who are not male. Using this variable to discriminate against women is not only unethical, it is illegal.

An astute data scientist should be able to spot this error and exclude sex as a factor in the model, but secondary and tertiary variables that correlate with sex can still lead an ML model to make a sexist decision. The fact that an applicant is a member of a women’s organization may lead to rejection. While such a rule does not explicitly disqualify women, women are far more likely to belong to a women’s group or to have attended a women’s college, so the effect is still discriminatory. This is not a hypothetical example; it is what happened with a recruiting tool built by Amazon.
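One way to catch such proxies is to screen candidate features for correlation with the protected attribute before training. Below is a minimal sketch in Python, assuming a pandas DataFrame with a protected column already encoded as 0/1; the column names, threshold, and example output are illustrative, not prescriptive.

```python
import pandas as pd

def flag_proxy_features(df, protected_col, threshold=0.3):
    """Flag features that correlate strongly with a protected attribute
    and may act as proxies for it (e.g., membership in a women's group)."""
    protected = df[protected_col]  # assumed already encoded as 0/1
    # One-hot encode categoricals so correlations can be computed numerically.
    features = pd.get_dummies(df.drop(columns=[protected_col]),
                              drop_first=True, dtype=float)
    proxies = {}
    for col in features.columns:
        corr = features[col].corr(protected)
        if pd.notna(corr) and abs(corr) >= threshold:
            proxies[col] = round(corr, 3)
    return proxies

# Hypothetical usage: review any flagged columns before they reach the model.
# flag_proxy_features(applicants, "is_female")
# -> {"member_womens_org": 0.82, "college_type_womens": 0.77}
```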

Even when the most diligent data scientist removes biases from a model, data drift and new data incorporated into the model can introduce new biases. This is why someone needs to monitor models in production to make sure they are operating properly. Unfortunately, in some cases, models get orphaned, with no owner to keep track of them. Algorithms can operate for an extended period before anyone notices that they are discriminating.
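What that monitoring can look like in practice: one common drift metric is the Population Stability Index (PSI), which compares a feature’s live distribution to its training baseline. The sketch below is a simple NumPy version; the 0.2 alert threshold is a rule of thumb, and the feature and alert names in the usage comment are hypothetical.

```python
import numpy as np

def population_stability_index(baseline, current, bins=10):
    """Measure how far a production distribution has drifted from training.
    PSI values above ~0.2 are commonly treated as significant drift."""
    edges = np.histogram_bin_edges(baseline, bins=bins)
    base_pct = np.histogram(baseline, bins=edges)[0] / len(baseline)
    curr_pct = np.histogram(current, bins=edges)[0] / len(current)
    # Clip empty bins so the log term stays defined.
    base_pct = np.clip(base_pct, 1e-6, None)
    curr_pct = np.clip(curr_pct, 1e-6, None)
    return float(np.sum((curr_pct - base_pct) * np.log(curr_pct / base_pct)))

# Hypothetical check run on a schedule by whoever owns the model:
# if population_stability_index(train_ages, live_ages) > 0.2:
#     notify_model_owner("'age' has drifted; re-check the model for new bias")
```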

Another example of machine learning being led astray by humans behaving badly is the chatbot that learned to use offensive language through repeated engagement with irate customers. The bot may also have been corrupted by some mischievous adolescents.

ML models can also simply make mistakes, based on bad data or on large disruptions in data caused by rare events. If a model is ingesting erroneous data, outcomes can be catastrophic, especially when medical equipment leverages ML to automate decisions.

Risk Can Be Reduced by Implementing Business Rules

To combat the challenge of keeping algorithms on track, organizations can incorporate guardrails: business rules that trigger a specific action when an ML model’s output falls outside defined parameters. For example, if loans are not approved at the same rate for a group protected by civil rights laws as for the general population, a bias could exist in the algorithm. By building rules to flag this anomaly, models can be adjusted to make sure they are fair and just.
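One concrete form such a rule could take is the “four-fifths” rule of thumb used in US fair-lending and employment practice: flag a possible disparate impact when a protected group’s approval rate falls below 80% of everyone else’s. The sketch below is a minimal illustration with made-up data, not a compliance tool.

```python
def check_approval_parity(approvals, is_protected, min_ratio=0.8):
    """Guardrail: fire when the approval rate for a protected group drops
    below min_ratio of the rate for everyone else (four-fifths rule)."""
    protected = [a for a, p in zip(approvals, is_protected) if p]
    others = [a for a, p in zip(approvals, is_protected) if not p]
    protected_rate = sum(protected) / len(protected)
    other_rate = sum(others) / len(others)
    violated = protected_rate < min_ratio * other_rate
    return violated, protected_rate, other_rate

# Example with made-up decisions (1 = approved, 0 = denied):
approvals    = [1, 0, 0, 1, 1, 1, 0, 1]
is_protected = [1, 1, 1, 0, 0, 0, 0, 0]
print(check_approval_parity(approvals, is_protected))
# (True, 0.333..., 0.8) -> flag the model for review
```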

Rules can also monitor real-time decision making to ensure AI models do their jobs. For example, rules can be built to pause a trading system running AI models if there are wild fluctuations in the market. In a healthcare setting, bad data may lead an insulin pump to let glucose levels fall below safe limits; rules can be written to take action if certain thresholds are breached.
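As a sketch of how such guardrails can be expressed in code: each rule pairs a plain check with an action, and the checks run on every reading regardless of what the model recommends. The glucose threshold and suspend action here are illustrative, not medical guidance.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class GuardrailRule:
    name: str
    is_violated: Callable[[dict], bool]  # True when the rule should fire
    action: Callable[[dict], None]       # what to do when it fires

def apply_guardrails(rules, reading):
    """Run every guardrail against the latest reading, firing any that match."""
    for rule in rules:
        if rule.is_violated(reading):
            rule.action(reading)

# Illustrative insulin-pump guardrail: suspend automated dosing when glucose
# nears a floor, no matter what the ML model recommends.
rules = [
    GuardrailRule(
        name="glucose_floor",
        is_violated=lambda r: r["glucose_mg_dl"] < 70,
        action=lambda r: print(f"SUSPEND DOSING: glucose at {r['glucose_mg_dl']} mg/dL"),
    ),
]
apply_guardrails(rules, {"glucose_mg_dl": 64})
# -> SUSPEND DOSING: glucose at 64 mg/dL
```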

These rules need to be high level and expressed in plain language, not so technical that business managers cannot apply them when models produce results that do not align with business policy or basic ethics. With easily understood rules implemented around ML, we can put more trust in our models.

Once we know the right rules to implement, ML models are required to follow them no matter what the input data reveals. We can even keep profanity out of a chatbot’s vocabulary regardless of how many times it hears a swear word. If only we could get our kids to do the same, and do as we say, not as we do.
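As a toy illustration of that kind of hard rule, applied to the model’s output after generation and assuming a curated word list (the two entries below are harmless stand-ins):

```python
import re

# Stand-in lexicon; a real deployment would load a curated profanity list.
BLOCKED_WORDS = {"darn", "heck"}

def sanitize_reply(reply):
    """Hard rule applied after the model generates text: mask blocked words,
    no matter how often the bot has encountered them in conversation."""
    def mask(match):
        word = match.group(0)
        return "****" if word.lower() in BLOCKED_WORDS else word
    return re.sub(r"[A-Za-z']+", mask, reply)

print(sanitize_reply("Well heck, that took a while."))
# -> "Well ****, that took a while."
```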

If you would like to see a great rules engine in action, contact Decisions to see a demo and learn how your organization can try the platform for free.

Gordon Jones
Gordon Jones has founded and sold three companies with the last built using Decisions technology. He has also led factories and large IT implementations both in the US and in Asia, where he lived for over seven years.
