Cybercrime is a thriving and sophisticated industry. Millions of stolen credit card numbers, bank accounts and medical records, as well as malware, ransomware, DDoS services and all sorts of other advanced hacking tools, are for sale on the Dark Web, the encrypted part of the internet accessed through specialized software. It’s a booming market that no amount of law enforcement has been able to diminish; cybercriminals simply invent new attack technologies and evasion tactics.
At the BraveIT session titled “Harnessing Artificial Intelligence & Emerging Technologies for Data Security”, TierPoint’s Chief Security Officer Paul Mazzucco described the inner workings of the Dark Web and what IT professionals need to know about the role of AI and machine learning in cybercrime and IT security.
The Dark Web has prospered for three main reasons, according to Mazzucco and co-presenter Carl Herberger, Vice President of Security Solutions for Radware, a TierPoint partner.
The first is Tor (short for The Onion Router), the software that provides access to the Dark Web and protects the anonymity of users through layers of encryption and network relays.
The second contributor is the emergence of cryptocurrencies like Bitcoin, Ripple and Ethereum. Without the transactional anonymity provided by cryptocurrencies, Dark Web buyers and sellers would have to use traceable payment methods such as checks, credit cards or PayPal that law enforcement could use to track them down.
Finally, developments in AI and machine learning are enabling more sophisticated Dark Web exploits. AI can analyze and customize attacks for specific targets and it can learn from past attacks to improve evasion techniques. Once inside a system, AI tools can scan files and pinpoint valuable data.
“One of the main issues we see in protecting our Federal and Corporate infrastructures is the lack of available cybersecurity specialists in today’s competitive markets. Due to this unfortunate reality, the widespread adoption of machine learning has grown exponentially to fill the skills gap.” – Paul Mazzucco
Fortunately, AI and machine learning are also helping the good guys to more quickly and effectively identify, thwart and mitigate those attacks.
For example, security service providers are using AI to automate the analysis of security events and to identify changes in traffic patterns. AI can also perform forensics through search and analysis of big data, as well as identify inter-dependencies between attack variables to predict future targets.
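At its simplest, identifying changes in traffic patterns means establishing a statistical baseline of normal activity and flagging observations that fall far outside it. A minimal sketch of that idea, using a z-score test (the features, numbers and threshold here are illustrative, not any vendor’s actual method):

```python
import statistics

def flag_traffic_anomalies(baseline, current, threshold=3.0):
    """Return observations that deviate more than `threshold` standard
    deviations from the baseline mean (a simple z-score test)."""
    mean = statistics.mean(baseline)
    stdev = statistics.stdev(baseline)
    return [v for v in current if abs(v - mean) > threshold * stdev]

# Requests per minute during a normal week vs. a live monitoring window.
normal_week = [100, 103, 98, 101, 99, 102]
live_window = [101, 990, 97]
print(flag_traffic_anomalies(normal_week, live_window))  # → [990]
```

Production systems replace the single mean/deviation pair with learned models across many features (ports, geographies, payload sizes), but the baseline-and-deviation principle is the same.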
Likewise, machine learning enables defensive systems to improve as they learn more about what they are protecting. With “neural networks,” which are based on the workings of the human brain and use connected systems and continuous algorithmic testing, security systems can become much smarter at identifying potential attacks and suspicious files.
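The building block of such a network is a single artificial neuron: a weighted sum plus a threshold, with the weights nudged toward the correct answer every time a prediction is wrong — the “continuous algorithmic testing” in miniature. A minimal sketch classifying files as benign or suspicious (the two features and the training data are invented for illustration; real systems use large networks over far richer feature sets):

```python
def train_perceptron(samples, labels, epochs=20, lr=0.1):
    # Single artificial neuron: weighted sum + threshold, with weights
    # adjusted after every misclassification.
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred  # 0 when correct; +1/-1 when wrong
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Hypothetical file features: (byte entropy, packer score), scaled to [0, 1].
samples = [(0.2, 0.1), (0.9, 0.8), (0.3, 0.2), (0.8, 0.9)]
labels = [0, 1, 0, 1]  # 0 = benign, 1 = suspicious
w, b = train_perceptron(samples, labels)
print(predict(w, b, (0.85, 0.85)))  # → 1 (suspicious)
print(predict(w, b, (0.25, 0.15)))  # → 0 (benign)
```

The “learning” is nothing more than those small weight corrections accumulating over many examples — which is why these systems get smarter the more attack and benign samples they see.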
Another AI security approach is behavioral modeling, in which the behavior of users, applications and files is tracked and classified. Behavioral flows are categorized as either good or bad and written into “contracts” that set strict rules on how to handle the activity. The system also constantly monitors for false positives and negatives and uses them to adjust the rules, improving over time.
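In its simplest form, such a “contract” is an allow-list of expected actions per entity, with any flow outside the list flagged for review. A minimal sketch (the contract format, application names and actions here are invented for illustration):

```python
# Behavioral "contracts": the set of actions each application is
# expected to perform, learned from observation or written by hand.
CONTRACTS = {
    "web_server": {"read_config", "serve_http", "write_log"},
    "backup_agent": {"read_files", "write_archive", "send_offsite"},
}

def classify_flow(app: str, actions: list[str]) -> str:
    """Classify an observed behavioral flow against the app's contract."""
    allowed = CONTRACTS.get(app)
    if allowed is None:
        return "unknown"  # no contract yet -- route to human review
    violations = [a for a in actions if a not in allowed]
    return "bad" if violations else "good"

print(classify_flow("web_server", ["serve_http", "write_log"]))    # → good
print(classify_flow("web_server", ["serve_http", "spawn_shell"]))  # → bad
```

The feedback loop described above would sit on top of this: when a “bad” verdict turns out to be a false positive (or a “good” one a false negative), the offending action is added to or removed from the contract set.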
Of course, all these models are only as good as the data and the system’s ability to process that data in real time. Cloud-based security service providers with robust networks, such as TierPoint, analyze massive amounts of data collected by social media and cloud platform companies like Google, Facebook and Amazon to identify emerging threats before they can infiltrate customers’ systems.
Even with all that data and AI analysis, there remain some security decisions that require human input. AI and machine learning are evolving rapidly, but for now, the most effective security systems are a collaboration between humans and AI.