Is AI the answer to the cybersecurity minefield?
Artificial intelligence (AI) isn’t going anywhere anytime soon. With 20% of the C-suite already using machine learning and 41% of consumers believing that AI will improve their lives, wide-scale adoption is imminent across every industry – and cybersecurity is no exception.
A lot has changed in the cyber landscape over the past few years and AI is being pushed to the forefront of conversations, says Dan Panesar, VP EMEA, Certes Networks. It’s becoming more than a buzzword and delivering true business value.
Its ability to aid the cybersecurity industry is increasingly being debated; some argue it has the potential to revolutionise cybersecurity, whilst others insist that the drawbacks outweigh the benefits.
With several issues facing the current cybersecurity landscape – a disappearing IT perimeter, a widening skills gap, increasingly sophisticated cyber attacks and data breaches continuing to hit headlines – a remedy is needed. The nature of stolen data has also changed, with CVV and passport numbers now being compromised. Coupled with regulations such as GDPR, organisations are facing a minefield.
Research shows that 60% believe AI has the ability to find attacks before they do damage. But is AI the answer to the never-ending cybersecurity problems facing organisations today?
The cost-benefit conundrum
On one hand, AI could provide a substantial benefit to the overall framework of cybersecurity defences. On the other, the reality that it equally has the potential to be a danger under certain conditions cannot be ignored. Hackers are fast gaining the ability to foil security algorithms by targeting the data AI technology is trained on. Inevitably, this could have devastating consequences.
AI can be deployed by both sides – by the attackers and the defenders. It has a number of benefits, such as the ability to learn and adapt to its current environment and the threat landscape. If deployed correctly, AI could consistently collect intelligence about new threats, attempted attacks, successful data breaches and blocked or failed attacks, and learn from it all, fulfilling its purpose of defending the digital assets of an organisation. By immediately reacting to attempted breaches, mitigating and addressing the threat, cybersecurity could truly reach the next level, as the technology would be constantly learning to detect and protect.
Additionally, AI technology has the ability to pick up abnormalities within an organisation’s network and flag them faster than a member of the cybersecurity or IT team could; AI’s ability to understand ‘normal’ behaviour would allow it to bring attention to potentially malicious, suspicious or abnormal user or device activity.
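The baseline-and-flag idea described above can be illustrated with a minimal sketch: learn what ‘normal’ activity looks like from historical data, then flag readings that deviate sharply from it. This is not any vendor’s implementation – the feature (hourly login counts) and the three-standard-deviation threshold here are purely illustrative assumptions.

```python
# Illustrative sketch of baseline anomaly detection:
# learn "normal" from historical activity, flag sharp deviations.
from statistics import mean, stdev

def build_baseline(history):
    """Summarise normal behaviour as mean and standard deviation."""
    return mean(history), stdev(history)

def is_anomalous(value, baseline, threshold=3.0):
    """Flag values more than `threshold` standard deviations from normal."""
    mu, sigma = baseline
    if sigma == 0:
        return value != mu
    return abs(value - mu) / sigma > threshold

# Typical hourly login counts for a user, then a sudden spike.
history = [12, 9, 11, 10, 13, 8, 12, 11, 10, 9]
baseline = build_baseline(history)
print(is_anomalous(11, baseline))   # → False: in line with normal behaviour
print(is_anomalous(90, baseline))   # → True: flagged as suspicious
```

Production systems would of course model many more signals (device fingerprints, access times, data volumes) and use statistical or machine-learning models far richer than a single z-score, but the principle – deviation from a learned baseline – is the same.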
As with most new technologies, for each positive there is an equal negative. AI could be configured by hackers to learn the specific defences and tools that it runs up against, paving the way for larger and more successful data breaches. Viruses could be created to host this type of AI, producing malware that can bypass even more advanced security implementations.
This approach would likely be favoured by hackers as they don’t even need to tamper with the data itself – they could work out the features of the code a model is using and mirror it with their own. In this particular case, the tables would be turned and organisations could find themselves in sticky situations if they can’t keep up with hackers.
Organisations must be wary that they don’t adopt AI technology in cybersecurity ‘just because.’ As attack surfaces expand and hackers become more sophisticated, cybersecurity strategies must evolve to keep up. AI contributes to this expanding attack surface so when it comes down to deployment, the benefits must be weighed up against the potential negatives. A robust, defence-in-depth Information Assurance strategy is still needed to form the basis of any defence strategy to keep data safe.
The author is Dan Panesar, VP EMEA, Certes Networks