Artificial Intelligence and Machine Learning (AI/ML) promise to help cybersecurity professionals cope with the accelerating pace of change in threats and vulnerabilities. Without transparency and accountability, however, these tools may face serious acceptance problems.
I am on my fourth credit card this year. I would love to tell you why, but the simple answer is: I don’t know. This is all because a risk management algorithm at my bank “believes” that my cards keep being compromised. I use the tools that my bank gives me to keep my cards safe. I use their mobile app. I use their secure card feature. I use their two-factor authentication, and I am enrolled in the card provider’s security programme (read: VISA 3D Secure or MasterCard SecureCode).
Did I get a warning at the time of the high-risk transactions? No. Did I have the chance to feed back that the questionable transactions were legitimate? No. Could I prevent the reissue of the cards once the algorithm had decided that a compromise had occurred? No. Do I know how to change my behaviour to prevent the algorithm from triggering? No. Am I unhappy? Definitely!
This is a perfect example of where AI/ML algorithms are being used to manage the growing complexity of the threat landscape. They are a cost-effective way for the bank to keep the residual risk at an acceptable level. They may even be the only viable way to do so today.
From my perspective as a consumer, this is now a black box taking decisions that can no longer be explained to me. What is even worse is that I no longer have any way to argue my case, should I disagree with the decision of the algorithm. What if the algorithm got it wrong (and continues to do so)? Can I still trust my bank to act in my best interest?
So let’s take a step back and examine the wider perspective. AI/ML is becoming an ever more crucial tool for cybersecurity and risk management, and it promises great benefits for the organisations that deploy it. There is, however, a significant risk of losing transparency and accountability for decisions made by, or with the help of, these algorithms. This erodes consumer trust in the organisation, as it has done between me and my bank. More seriously, it has a direct impact on the accountability of senior executives for the decisions being taken.
These factors should be considered when implementing AI/ML tools within an organisation. The decisions taken by algorithmic tools need to be at least as explainable as those made by human employees, if not more so. And where errors, inconsistencies or biases are identified, there need to be clear pathways in place to fix them. Only then will we arrive at a trustworthy use of AI/ML within our organisations. This is especially important in an area like cybersecurity, where trust is the foundation of all actions.
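To make the point about explainability a little more concrete, here is a minimal sketch (in Python, using scikit-learn) of what attaching a human-readable explanation to every automated decision could look like. The fraud model, feature names, threshold and data below are purely illustrative assumptions on my part, not my bank's system or any particular product.

```python
# Minimal sketch: return an explanation alongside every automated decision.
# All feature names, data and the threshold are hypothetical illustrations.
import numpy as np
from sklearn.linear_model import LogisticRegression

FEATURES = ["amount_vs_typical", "new_merchant", "foreign_country", "night_time"]

# Toy data standing in for historical transactions (hypothetical).
X_train = np.array([
    [0.1, 0, 0, 0],
    [0.2, 1, 0, 1],
    [3.5, 1, 1, 1],
    [4.0, 1, 1, 0],
    [0.3, 0, 1, 0],
    [2.8, 0, 1, 1],
])
y_train = np.array([0, 0, 1, 1, 0, 1])  # 1 = later confirmed as compromised

model = LogisticRegression().fit(X_train, y_train)

def decide_and_explain(transaction, threshold=0.5):
    """Return the decision plus the per-feature contributions that drove it."""
    score = model.predict_proba([transaction])[0, 1]
    # For a linear model, coefficient * feature value is a simple, auditable
    # per-feature contribution to the risk score.
    contributions = sorted(
        zip(FEATURES, model.coef_[0] * np.array(transaction)),
        key=lambda kv: abs(kv[1]),
        reverse=True,
    )
    return {
        "decision": "block_and_reissue" if score >= threshold else "allow",
        "risk_score": round(float(score), 3),
        "top_reasons": [(name, round(float(c), 3)) for name, c in contributions[:3]],
    }

# A decision the customer (and an auditor) can now actually ask about.
print(decide_and_explain([3.2, 1, 1, 0]))
```

The point is not the model itself but the contract: every decision carries the reasons behind it, in a form that can be shown to the customer, challenged, and audited when the algorithm gets it wrong.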
These points don’t implement themselves. Today at least, they are not something that AI/ML tools come with out of the box. They therefore need to be added as part of the implementation and solution design, which will add effort to projects in the short term. In the long term, however, they promise greater acceptance of the results and lower organisational risk. It is therefore up to project sponsors and executives to ensure that this doesn’t fall by the wayside.
If you want to delve further into this topic, I can only recommend Paul R. Daugherty and H. James Wilson’s book Human + Machine: Reimagining Work in the Age of AI. They describe several actions and roles that will help bridge this gap. Something worth thinking about.