Artificial Intelligence and Cybersecurity: What's the Risk?

JUNE 24TH, 2019
The risk associated with artificial intelligence (AI) is a popular topic these days. Will AI take human jobs and become an enemy of the people? Will AI develop killer robots with no moral compass? Will AI become a superpower with greater control over humans?
These are all important issues, but those tasked with cybersecurity currently have their own growing worries about AI. A Neustar survey of 301 senior technology and security professionals finds that 82 percent are concerned about the possibility of attackers using AI against their company. Fifty percent rank stolen data as their greatest worry, 19 percent cite the loss of customer trust, and 16 percent put business performance and cost implications at the top of their list. As a result of these concerns, nearly 60 percent of those surveyed are “apprehensive” about adopting AI technology within their own organizations, the report finds.
“We’re at a crossroads,” says Rodney Joffe, head of NISC and Neustar senior vice president and fellow. “Organizations know the benefits, but they are also aware that today’s attackers have unique capabilities to cause destruction with that same technology. As a result, they’ve come to a point where they’re unsure if AI is a friend or foe.”
Among the fears cited in the report are DDoS attacks, system compromise and ransomware. For those concerned about such issues, there are some steps to take:
  1. DDoS. The first major DDoS attack came 18 years ago, when a Canadian teenager effectively broke the Internet. What started as prank attacks has grown steadily since then (there are now more than 160 attacks a day) and become part of a cyberwar against businesses and governments. Aatish Pattni, regional director for the UK and Ireland at Link11, advises organizations to route their Internet traffic through an external, cloud-based protection service, because it provides a way to digitally fingerprint incoming traffic. That way, organizations can build an index of normal and malicious traffic, and block the latter when it is detected in the traffic flow, he says. As the AI algorithm learns, it can also block attacks within seconds, even when there isn’t an existing fingerprint. (A rough sketch of this approach follows the list.)
  2. System compromise. Organizations must always be on the lookout for new hacking strategies. For example, if hackers identify the data source or training method behind an organization’s spam-filtering algorithm, they can find ways to craft messages that slip past it. The tech industry is aware of such problems and is moving toward a fix. “Total encryption of data – made possible by technological advancements in processors – will go a long way in the fight against data manipulation if organizations decide to use it,” notes Christopher Zheng, writing for the Council on Foreign Relations. (A sketch of encrypting data at rest follows the list.)
  3. Ransomware. Ransomware damages topped $5 billion last year, a big leap from $325 million in 2015. Hackers are using AI to ramp up their malicious attacks and continually move on to new targets. To combat this, organizations are also turning to AI to help identify and predict attacks before they occur. (A simple detection heuristic is sketched after the list.)
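The fingerprint-and-block workflow Pattni describes maps naturally onto anomaly detection: learn what normal traffic looks like, then flag flows that deviate from it. Below is a minimal Python sketch of that idea, assuming illustrative per-flow features (packet rate, packet size, port spread) and scikit-learn’s IsolationForest as the learning component; it is a toy stand-in, not any vendor’s actual implementation.
```python
# Minimal sketch: learn a "fingerprint" of normal traffic, then flag outliers.
# The feature set and thresholds are illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Assumed per-flow features: packets/sec, bytes/packet, distinct ports hit.
normal_traffic = np.column_stack([
    rng.normal(50, 10, 1000),     # packets per second
    rng.normal(800, 100, 1000),   # bytes per packet
    rng.poisson(2, 1000),         # distinct destination ports
])

# Train on traffic captured during a known-good baseline window.
model = IsolationForest(contamination=0.01, random_state=0).fit(normal_traffic)

# A flood-like flow: huge packet rate, tiny packets, many ports touched.
suspect_flow = np.array([[5000.0, 60.0, 40.0]])
print("block" if model.predict(suspect_flow)[0] == -1 else "allow")
```
In practice, such a model would be retrained continuously on fresh traffic, which is what lets it block new attack patterns that have no stored fingerprint yet.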
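On the encryption point, one way to make stored data resistant to manipulation is an authenticated cipher, which refuses to decrypt anything that has been altered. The sketch below uses the Fernet recipe from Python’s cryptography package purely as an illustration; the sample file contents and the key-handling comment are assumptions, not a prescribed design.
```python
# Minimal sketch: encrypt data at rest with an authenticated cipher so that
# any tampering is detected when the data is read back.
from cryptography.fernet import Fernet, InvalidToken

key = Fernet.generate_key()        # in practice, store the key in a KMS/HSM
cipher = Fernet(key)

# Hypothetical training data for a spam filter.
plaintext = b"label,text\nspam,win a free prize now\nham,meeting at 10am\n"
token = cipher.encrypt(plaintext)  # ciphertext plus integrity tag

try:
    restored = cipher.decrypt(token)   # raises InvalidToken if token was altered
    assert restored == plaintext
except InvalidToken:
    print("stored data was tampered with or the wrong key was used")
```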
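AI-based ransomware detection is proprietary and varies by vendor, but one commonly cited signal is a sudden burst of high-entropy file writes, because freshly encrypted files look like random data. The sketch below illustrates only that heuristic; the watched directory, thresholds and response are hypothetical.
```python
# Minimal sketch: flag a burst of high-entropy (likely encrypted) file writes,
# a common ransomware indicator. Paths and thresholds are illustrative.
import math
import time
from pathlib import Path

WATCH_DIR = Path("/srv/shared")   # hypothetical monitored file share
ENTROPY_THRESHOLD = 7.5           # bits per byte; encrypted data nears 8.0
BURST_THRESHOLD = 20              # suspicious files per scan window

def shannon_entropy(data: bytes) -> float:
    """Average bits of information per byte in `data`."""
    if not data:
        return 0.0
    total = len(data)
    return -sum(
        (data.count(b) / total) * math.log2(data.count(b) / total)
        for b in set(data)
    )

def count_suspicious(window_seconds: int = 60) -> int:
    """Count recently modified files whose contents look encrypted."""
    cutoff = time.time() - window_seconds
    suspicious = 0
    for path in WATCH_DIR.rglob("*"):
        if path.is_file() and path.stat().st_mtime >= cutoff:
            if shannon_entropy(path.read_bytes()[:4096]) > ENTROPY_THRESHOLD:
                suspicious += 1
    return suspicious

if count_suspicious() > BURST_THRESHOLD:
    print("possible ransomware activity: isolate the host and alert the SOC")
```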
Finally, experts advise that preventing AI attacks must also involve the ongoing training of employees so that they understand the dangers – and how to prevent them, whether that means avoiding phishing emails or unsecured servers. As Neil Jacobstein, head of AI at Singularity University, says: “It’s not artificial intelligence I’m worried about, it’s human stupidity.”
