I can recall in my youth watching Star Wars and gravitating toward characters such as R2-D2 and C-3PO because they were unique and futuristic. As if it were yesterday, I remember asking myself questions like: Could the movie be a portrayal of the future? Would I see a time in my life when robots and humans had meaningful interactions? The question of all questions – will robots ever evolve to such an extent that they are as smart as or smarter than humans are? I could not really grapple with those questions then, but I believe I have some interesting insights now!
The advent of the personal computer (PC) and its mainstream adoption in the early 1980s ushered in an unforeseen world of opportunity. Given the mathematical capability of the computer, researchers, developers, and engineers flourished like never before, advancing technology in record time. Shortly thereafter, groups of computers began connecting on local networks and over long distances via Plain Old Telephone Service (POTS) lines. PC modems were commercialized, making it easy to access Bulletin Board Systems (BBS) that offered legal, illegal, and otherwise nefarious services. Information sharing and online service offerings exploded with the introduction of the World Wide Web in 1989 – a system so pervasive it is often conflated with the Internet itself.
Fast forward to the present day and we see the proliferation of personal computers in all facets of our daily lives. At its core, the Internet's primary tenets are still products, services, and information sharing. We see advancements across just about every business vertical as a direct result of the computer: cures for debilitating diseases and solutions for complex environmental issues. Yet the same technology means an honest, hardworking person can have their identity stolen and smeared in a matter of minutes. The age-old problem that humankind has faced since the beginning of time (good vs. evil) has worked its way into our computing lives. How do we prevent the hostile impact of computer bad actors? With Artificial Intelligence (AI) and Machine Learning (ML).
AI is an area of computer science that concentrates on creating intelligent computers, often referred to simply as machines. The big idea is to have them perform work while simulating human behavior. The activities or use cases commonly associated with AI are speech recognition, learning, planning, and problem-solving. ML is an application of AI: programmers use algorithms and statistical models to perform tasks without specific instructions, relying instead on patterns, anomalies, and inferences found in data – hence the term Machine Learning. Neither AI nor ML practice would be possible without big data.
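To make the idea of "patterns instead of specific instructions" concrete, here is a minimal sketch of statistical anomaly detection: a baseline is learned from sample data rather than hand-coded as rules. The traffic numbers and the three-sigma threshold are illustrative assumptions, not figures from any real system.

```python
# Minimal illustration of learning from patterns rather than from
# explicit instructions: fit a statistical baseline from sample data,
# then flag values that deviate from it. All numbers are made up.
from statistics import mean, stdev

def fit_baseline(samples):
    """Learn a simple baseline (mean and standard deviation)."""
    return mean(samples), stdev(samples)

def is_anomaly(value, baseline, threshold=3.0):
    """Flag values more than `threshold` standard deviations from the mean."""
    mu, sigma = baseline
    return abs(value - mu) > threshold * sigma

# Hypothetical login counts per hour observed on a server.
normal_traffic = [42, 38, 45, 40, 44, 39, 41, 43, 37, 46]
baseline = fit_baseline(normal_traffic)

print(is_anomaly(41, baseline))   # a typical value
print(is_anomaly(500, baseline))  # a sudden burst of logins
```

Real ML-based security products use far richer models, but the shape is the same: the program's behavior comes from the data it was trained on.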
Big data is the accumulation and continuous expansion of source data and its metadata – data about the source data. How, then, do AI and ML impact cybersecurity?
Red Team Operations
Borrowed from military combat operations, the term “Red Team” is synonymous with the opposing force, while the “Blue Team” corresponds to the defender. In this case, the red team refers to bad actors, hacktivists, and nation-state actors. At the time of this writing, we have no documented cases of attacks stemming from ML, but the theoretical advantage ML could create for the red team is frightening. Malware programs could be written to install a specific variant depending on the vulnerability found on the target computer. Additionally, ML behavioral analytics could be trained to focus exclusively on compromising perimeters or targets with an extremely high percentage of security coverage. Instead of probing the well-defended 99.5%, ML programs would focus the attack on the 0.5% shortfall between 99.5% and 100% efficacy. The typical cybersecurity program would not account for or detect these so-called “smart attacks,” and traditional red team evaluations, such as vulnerability and penetration tests, would prove equally ineffective.
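The arithmetic behind that gap is worth making explicit: a control that stops 99.5% of attempts still leaks, and the leak compounds with attack volume. A short sketch, using the article's illustrative 99.5% figure and assuming independent attempts:

```python
# Probability that at least one attack slips past a control that
# blocks each attempt independently with probability `efficacy`.
# The 99.5% figure is illustrative, echoing the text above.

def breach_probability(efficacy: float, attempts: int) -> float:
    return 1.0 - efficacy ** attempts

for n in (1, 100, 1000):
    print(n, round(breach_probability(0.995, n), 3))
```

Even a half-percent gap becomes near-certain compromise at scale, which is exactly the slice a "smart attack" would target.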
Fortunately, the red team has found it challenging to get its hands on the hardware and software components necessary to develop bad-actor algorithms. However, powerful open-source big data platforms like Hadoop offer the opposing force the capability to ingest hundreds of terabytes of data per day – and we have established that big data is foundational for ML. Add cloud platforms like Microsoft Azure, Amazon AWS, and Google Cloud, where bad actors can rent powerful computing resources for pennies on the dollar, and suddenly the theoretical fears do not seem so far-fetched. All aspects of cybersecurity, from data center protection to endpoint security, are moving at an accelerated pace to leverage ML to defend against the opposition.
Blue Team Operations
When we speak to executives from organizations that support the blue team, such as Optiv Security, Cyber Security Experts, and Birch Cline Technologies, we find a healthy and promising divergence. Since the very beginning of cyber blue team planning and operations, the perspective has been a defensive one. That posture is now being transformed from defensive to offensive because of ML. Cybersecurity frameworks like the National Institute of Standards and Technology (NIST) 800-53 require adopters to establish a baseline configuration for the computer systems that comprise an organization. Organizations that complete this task can layer in ML-based security controls that detect, identify, respond, and protect against attacks much faster than a security analyst can. Unlike the red team, which faces restrictions on program code and algorithms, the blue team is situated nicely with these assets, as well as with vendors, software coders, and executives eager to purchase the technology. The result is an edge for the blue team. With ML baseline-pattern and behavioral-anomaly technology deployed at the network edge and at the core, distribution, and access layers, we have a formidable security posture that would thwart the most common attacks – incredibly encouraging for the blue team.
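The baseline-configuration idea at the heart of that edge can be sketched simply: once an approved baseline exists, detection starts as a diff of observed state against it, with ML layered on top. The hostnames, ports, and baseline below are hypothetical, not drawn from NIST 800-53 itself.

```python
# Hedged sketch: with a baseline configuration established (as
# NIST 800-53 requires), a first detection pass is simply diffing
# observed state against the baseline. All values are hypothetical.

BASELINE_OPEN_PORTS = {
    "web-01": {80, 443},
    "db-01": {5432},
}

def detect_deviations(observed):
    """Return ports observed on each host that are not in its baseline."""
    alerts = {}
    for host, ports in observed.items():
        unexpected = ports - BASELINE_OPEN_PORTS.get(host, set())
        if unexpected:
            alerts[host] = unexpected
    return alerts

scan = {"web-01": {80, 443, 4444}, "db-01": {5432}}
print(detect_deviations(scan))  # web-01 is exposing an unexpected port
```

ML-based controls generalize this from static diffs to learned patterns of normal behavior, but the baseline remains the prerequisite.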
Our analysis of the impact of AI and ML on cybersecurity suggests that blue team security capability will outpace that of the red team. However, without a healthy appreciation and implementation of cybersecurity basics, ML advantages will be much harder to realize. Before we go overboard with AI- and ML-branded solutions, let's be sure to first adopt a security framework, have a credible security firm perform a risk assessment with a gap analysis, and create a 12-to-18-month remediation roadmap. As we work through the prioritized roadmap, we can layer in ML technologies at the appropriate time. This is the most responsible approach to developing an enterprise cybersecurity posture and risk management program in which ML technology benefits can be most effective.
The human brain is said to process 400 billion bits of information per second, which equates to 50 GB/second. NVidia debuted its smallest AI supercomputer, the Jetson Xavier NX, in 2019 for $399. The NX can move 51.2 GB/second. At about the same time, researchers at the Korea Advanced Institute of Science and Technology (KAIST), the University of Cambridge, Japan's National Institute of Information and Communications Technology (NICT), and Google DeepMind made headlines with “Brain code can now be copied for AI, robots.” It seems evident that for $399, one can obtain more raw throughput than the human brain. Until R&D is perfected to mirror the reasoning, emotional, and cognitive capacity of the brain, humankind remains at the top of the evolutionary order. At the current rate of technological advancement, it remains to be seen how much longer we retain our superiority!
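For readers who want to check the comparison, the bits-to-bytes conversion works out as follows (8 bits per byte, decimal gigabytes):

```python
# Convert the claimed 400 billion bits per second into GB/second.
bits_per_second = 400e9
gigabytes_per_second = bits_per_second / 8 / 1e9
print(gigabytes_per_second)  # 50.0
```

By that arithmetic the Jetson Xavier NX's 51.2 GB/second edges out the brain's 50 GB/second, though the 400-billion-bits figure is itself a popular estimate rather than a measured quantity.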