Artificial Intelligence as a threat to cybersecurity?
According to a study conducted by Kaspersky Lab (2016), almost 50 percent of very small businesses (VSBs), small and medium-sized businesses (SMBs) and enterprises are affected by some kind of phishing attack. As Dr. Stephen Herrod (General Catalyst Partners) pointed out during his guest lecture at Stanford University, the impact of these phishing techniques can be tremendous, since they aim at gaining access to sensitive information through the users themselves.
With the development of Artificial Intelligence, this threat is likely to grow even larger, as current research suggests.
This blog post intends to give a basic idea of phishing techniques, to show how AI could make them even more harmful, and to discuss how we can still maintain the current level of cybersecurity.
An introduction to phishing
To create a basic understanding of what phishing is and how to identify phishing attempts, I have analyzed the paper "Why Phishing Works" by Dhamija et al. (2006). Definition-wise, they express phishing in a rather simple way: in their view, phishing refers to "directing users to fraudulent websites" (Dhamija et al., 2006, p. 1). Directing users to those web pages mainly relies on gathering information about the person. One central challenge phishing faces in this context is the conflict between sophistication and scale (Benenson et al., 2017). Many of us know this conflict from receiving a poorly crafted phishing email. One of the topics of this blog post will, thus, be how AI could enable phishing that is both large-scale and sophisticated at the same time.
However, phishing does not end with the email in your inbox. One central part of Dhamija et al.'s research was, thus, to identify the effectiveness of sample phishing pages. For me, their results were very surprising: as a personal heuristic, I have always relied on the secure label in my browser bar, which gave me a feeling of being on a safe web page. As they show, the following indicators cannot guarantee a safe web page on a standalone basis (Dhamija et al., 2006):
- Verisign logo
- Certificate validation seal
- SSL indicators in a fake address bar
Even though we are talking about the upper end of phishing web pages, this level of sophistication really surprised me, and not only me: the best phishing page fooled 91 percent of the participants (Dhamija et al., 2006). As a consequence, the best technique to avoid phishing would be to keep people from clicking on the link in the first place (a simple heuristic for spotting such deceptive links is sketched below).
Image Source: Schweizerische Kriminalprävention
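To make the "don't even click" advice a bit more concrete, here is a minimal sketch of one heuristic a mail client or browser plugin could apply: flag links whose visible text names one domain while the underlying href points to another. This is my own illustration, not part of Dhamija et al.'s study, and all names in the snippet (e.g. LinkMismatchChecker) are made up for this example.

```python
# Minimal sketch (not a production filter): flag HTML email links whose
# visible text looks like a URL for one domain while the underlying href
# actually points somewhere else -- a classic phishing tell.
from html.parser import HTMLParser
from urllib.parse import urlparse


class LinkMismatchChecker(HTMLParser):
    """Collects anchor tags whose visible text names a different domain than their href."""

    def __init__(self):
        super().__init__()
        self._href = None
        self._text = []
        self.suspicious = []  # list of (visible text, real target) pairs

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href", "")
            self._text = []

    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            visible = "".join(self._text).strip()
            shown_host = urlparse(visible if "://" in visible else "http://" + visible).hostname
            real_host = urlparse(self._href).hostname
            # Only compare when the visible text itself looks like a domain.
            if shown_host and real_host and "." in shown_host and shown_host != real_host:
                self.suspicious.append((visible, self._href))
            self._href = None


checker = LinkMismatchChecker()
checker.feed('<a href="http://evil.example.net/login">www.mybank.com</a> Please log in.')
print(checker.suspicious)  # [('www.mybank.com', 'http://evil.example.net/login')]
```

Real mail filters combine many more signals, of course; the point is simply that the "keep people from clicking" step can be supported by software, not only by user training.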
How AI could revolutionize phishing
As we figured out in the last paragraph, the most critical part of phishing prevention is keeping users from being directed to malicious web pages. Currently, identifying those phishing emails seems to be rather easy since, at least in large-scale phishing, they are not targeted at you as an individual – but with AI this could change! As the cybersecurity company IRONSCALES (2018) points out, Artificial Intelligence might enable high-quality, large-scale phishing. Thinking about what we have learned in the course so far, this is not very surprising. As pointed out by Craig Martell (LinkedIn) in his guest lecture, AI is a combination of programming and statistics to develop algorithms for decision making under uncertainty. Now imagine malware being installed on a computer, gathering all of the user's communication data. By applying AI, the algorithm could learn how the user communicates in certain situations and adapt its messages accordingly – a scary vision! James Tapsfield (Daily Mail) draws a similar picture: he expects phishing to leverage huge amounts of data (e.g. from IoT devices) and contextualize phishing emails accordingly (Tapsfield, 2018). This position is also supported by Dave Palmer (Director at Darktrace), who points out that AI is easily accessible and could be applied even by individuals (Tapsfield, 2018). Given those perspectives, it seems clear that we need to be prepared for more individualized and targeted phishing attempts! But can we do something about it?
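To illustrate how little it takes to "learn how a user communicates", here is a toy sketch that builds a per-sender word-frequency profile from a handful of messages. It is purely illustrative (the names style_profile and alice_mail are made up), but it shows, in very simplified form, the raw material an AI-assisted attacker could exploit – and, as the next section argues, exactly the signal a defender can use too.

```python
# Toy sketch only: a per-contact "style profile" built from plain word counts.
from collections import Counter
import re


def style_profile(messages):
    """Relative word frequencies over a list of messages from one sender."""
    counts = Counter()
    for text in messages:
        counts.update(re.findall(r"[a-z']+", text.lower()))
    total = sum(counts.values()) or 1
    return {word: n / total for word, n in counts.items()}


alice_mail = [
    "Hi team, quick update before the standup, cheers, Alice",
    "Cheers for the quick turnaround on the draft, Alice",
]
profile = style_profile(alice_mail)
print(profile["cheers"], profile["the"])  # roughly 0.11 and 0.17
```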
How can we manage the new threats?
Fortunately, the answer to this question is: yes! As pointed out by Stephen Herrod, malware protection leverages the same technology. Applying AI, modern malware protection software will be able to learn the messaging patterns of the respective users. As a consequence, even small deviations might be enough to notify the user that a given mail might be malicious. As mentioned by Rick Grinell, a contributor at CSO Online, discrepancies in certain predefined parameters can be identified more easily, and thus AI will most likely have a huge impact on cybersecurity defenses as well (Grinell, 2017).
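As a very rough sketch of this "small deviations" idea, the snippet below scores an incoming message against a baseline built from known-good mail of the same sender and flags it when the word-usage overlap drops below a threshold. The bag-of-words features, the cosine similarity, the threshold of 0.2 and all the names are illustrative assumptions of mine, not how any particular product (or the vendors cited above) actually implements this.

```python
# Assumption-laden sketch: model a sender's "normal" wording, alert on deviation.
import math
import re
from collections import Counter


def word_counts(text):
    return Counter(re.findall(r"[a-z']+", text.lower()))


def cosine_similarity(a, b):
    dot = sum(a[w] * b[w] for w in a.keys() & b.keys())
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0


def looks_suspicious(new_message, known_good_messages, threshold=0.2):
    """Flag the new message when it barely overlaps with the sender's usual wording."""
    baseline = Counter()
    for text in known_good_messages:
        baseline.update(word_counts(text))
    return cosine_similarity(word_counts(new_message), baseline) < threshold


history = [
    "Hi team, quick update before the standup, cheers, Alice",
    "Cheers for the quick turnaround on the draft, Alice",
]
print(looks_suspicious("URGENT: verify your account credentials now!!!", history))   # True
print(looks_suspicious("Quick update: the draft is ready, cheers, Alice", history))  # False
```

Production systems would of course use far richer features (metadata, sending times, link targets) and learned thresholds, but the basic pattern – model the norm, alert on deviation – is the same.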
However, it is debatable to what degree the identification of those phishing emails can be successful when AI is applied on the attacker's side as well. As a consequence, in my view, the solution might not lie in AI alone but in combining AI with other security measures such as isolation and self-protecting applications. Looking forward to your thoughts!
Sources:
Benenson, Z., Gassmann, F., & Landwirth, R. (2017). Unpacking Spear Phishing Susceptibility. In International Conference on Financial Cryptography and Data Security (pp. 610-627). Springer, Cham.
Dhamija, R., Tygar, J. D., & Hearst, M. (2006). Why phishing works. In Proceedings of the SIGCHI conference on Human Factors in computing systems (pp. 581-590). ACM.
Grinell, R. (2017). Can AI eliminate phishing? CSO Online, Opinion; accessed from: https://www.csoonline.com/article/3239527/phishing/can-ai-eliminate-phishing.html; July 25th, 2018.
Kaspersky Lab (2016). The dangers of phishing: Help employees avoid the lure of cybercrime; accessed from: https://go.kaspersky.com/rs/802-IJN-240/images/Dangers_Phishing_Avoid_Lure_Cybercrime_ebook.pdf; July 25th, 2018.
Ironscales (2018). Artificial Intelligence is Revolutionizing Phishing – and It’s Not All Good; accessed from: https://ironscales.com/blog/Artificial-Intelligence-Revolutionizing-Phishing/; July 25th, 2018.
Tapsfield, J. (2018). Could robots pretend to be YOU? Cyber security experts warn that AI could mimic writing styles and habits of millions of users to launch devastating scams; accessed from: http://www.dailymail.co.uk/news/article-5440017/Cyber-experts-warn-threat-AI-phishing-attacks.html; July 25th, 2018.
One comment on “Artificial Intelligence as a threat to cybersecurity?”
A great post about AI and cybersecurity. The same way AI can pose a threat to cybersecurity, it opens up opportunities for securing our digital assets. Artificial intelligence (AI) can be of great help in detecting anomalies when our digital assets are accessed. Collecting data on how we access and interact with our digital assets lets AI raise a red flag when actions out of the norm occur. This can trigger security alerts or launch special security protocols, including forced multi-factor authentication (MFA), before the user can proceed. Of course, collecting all this data about the various forms of cyber attacks will help to build the required safeguards as well. I believe we will see a lot of blockchain applications adopted for security products; smart contracts using the underlying blockchain technology are one example. In general, we should expect that once complete datasets with our digital identity are stored in central, attackable locations, there will be players working on accessing these data sets in illegal ways. And they will use AI – the same way we should use AI to protect ourselves.