According to McAfee Labs, attackers and defenders are working to out-innovate each other in AI.
Human-machine teaming is becoming an essential part of cybersecurity, augmenting human judgment and decision making with machine speed and pattern recognition. Machine learning is already making significant contributions to security, helping to detect and correct vulnerabilities, identify suspicious behaviour, and contain zero-day attacks.
During the next year, we predict an arms race. Adversaries will increase their use of machine learning to create attacks, experiment with combinations of machine learning and artificial intelligence (AI), and expand their efforts to discover and disrupt the machine learning models used by defenders.
Novel attack methods
At some point during the year, we expect that researchers will reverse engineer an attack and show that it was driven by some form of machine learning. We already see black-box attacks that search for vulnerabilities and do not follow any previous model, making them difficult to detect.
Attackers will increase their use of these tools, combining them in novel ways with each other and with their existing attack methods. Machine learning could improve their social engineering, making phishing attacks harder to recognize, by harvesting and synthesizing more data than a human can. It could increase the effectiveness of weak or stolen credentials on the growing number of connected devices, or help attackers scan for vulnerabilities, boosting the speed of attacks and shortening the time from discovery to exploitation.
Whenever defenders come out with something new, attackers try to learn as much about it as possible. Adversaries have been doing this for years with malware signatures and reputation systems, for example, and we expect them to do the same with defenders' machine learning models.
This will combine probing from the outside to map the model, reading published research and public-domain material, and trying to exploit an insider. The goal is evasion or poisoning. Once attackers think they have a reasonable recreation of a model, they will work to get past it, or to damage it so that either their malware gets through or nothing gets through and the model is worthless.
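The evasion half of that goal can be illustrated with a toy sketch. Everything here is hypothetical: the "detector" stands in for a deployed model, and its hidden threshold is invented for the example. The point is the query-and-refine principle, where yes/no answers alone are enough to map a decision boundary and then craft input that slips just under it. Real model-evasion attacks apply the same idea at far larger scale and in far higher dimensions.

```python
# Toy sketch (all names hypothetical): an attacker probes a black-box
# detector to approximate its hidden decision threshold, then crafts
# an input that falls just under the boundary.

def detector(suspicion_score: float) -> bool:
    """A black box from the attacker's view: flags anything >= 0.73."""
    HIDDEN_THRESHOLD = 0.73
    return suspicion_score >= HIDDEN_THRESHOLD

def probe_threshold(queries: int = 20) -> float:
    """Binary-search the decision boundary using only yes/no verdicts."""
    lo, hi = 0.0, 1.0
    for _ in range(queries):
        mid = (lo + hi) / 2
        if detector(mid):
            hi = mid          # flagged: the boundary is at or below mid
        else:
            lo = mid          # passed: the boundary is above mid
    return hi

estimate = probe_threshold()          # close to the hidden 0.73
evasive_score = estimate - 0.01       # step just under the boundary
assert not detector(evasive_score)    # the crafted input evades detection
```

Twenty queries pin the boundary to within about one part in a million, which is why rate-limiting and randomizing a model's responses are common defensive countermeasures.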
Combining machine learning, AI, and game theory to probe for vulnerabilities
On the defenders’ side, we will also combine machine learning, AI, and game theory to probe for vulnerabilities in both our software and the systems we protect, to plug holes before criminals can exploit them. Think of this as the next step beyond penetration testing, using the vast capacity and unique insights of machines to seek bugs and other exploitable weaknesses.
Because adversaries will attack the models, defenders will respond with layers of models, operating independently, at the endpoint, in the cloud, and in the data centre. Each model has access to different inputs and is trained on a different data set, providing overlapping protections.
Speaking of data, one of the biggest challenges in creating machine learning models is gathering data that is relevant and representative of the rapidly changing malware environment. We expect to see more progress in this area in the coming year, as researchers gain more experience with data sets and learn the effects of old or bad data, resulting in improved training methods and sensitivity testing.
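The overlapping-layers idea can be sketched in miniature. This is a hypothetical illustration, not a real product architecture: each "model" is a trivial rule standing in for a trained classifier, and the feature names are invented. What it shows is why independence matters, since evading a single layer still leaves a majority of the others in agreement.

```python
# Toy sketch (all names hypothetical): three independent detection
# layers, each seeing a different view of a sample, vote on a verdict.
# Defeating one layer is not enough to bypass the ensemble.

def endpoint_model(sample: dict) -> bool:
    # Endpoint view: local file-system behaviour.
    return sample.get("writes_to_system_dir", False)

def cloud_model(sample: dict) -> bool:
    # Cloud view: global reputation of the sample.
    return sample.get("reputation_score", 1.0) < 0.3

def datacenter_model(sample: dict) -> bool:
    # Data-centre view: periodic beaconing in network traffic.
    return 25 <= sample.get("beacon_interval_s", 0) <= 35

LAYERS = [endpoint_model, cloud_model, datacenter_model]

def layered_verdict(sample: dict) -> bool:
    """Flag a sample when a majority of independent layers agree."""
    votes = sum(model(sample) for model in LAYERS)
    return votes >= 2

malware = {"writes_to_system_dir": True,
           "reputation_score": 0.1,
           "beacon_interval_s": 30}
evaded_one = dict(malware, writes_to_system_dir=False)  # beat one layer

assert layered_verdict(malware)      # all three layers flag it
assert layered_verdict(evaded_one)   # two of three still flag it
```

Because each layer is trained on different inputs, an attacker who painstakingly maps or poisons one model gains little against the other two, which is the design rationale behind deploying models at the endpoint, in the cloud, and in the data centre simultaneously.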
The machines are rising. They will work with whoever feeds them data, connectivity, and electricity. Our job is to advance their capabilities faster than the attackers, and to protect our models from discovery and disruption. From what we can see, human-machine teaming shows great potential to swing the advantage back to the defenders.
For more insights on cybersecurity, join McAfee Asia Pacific's Chief Technology Officer, Ian Yip, at ConnecTechAsia Summit 2018's EmergingTech Track. Marina Bay Sands, 26 June 2018.