    27 March 19

    When Worlds Collide: Artificial Intelligence and Cyber Security

    Posted by Josue Vargas

    It’s Here!

    The time for integrating AI and cybersecurity has come, and we’re all involved in one way or another. Long past are the days when artificial intelligence was in its infancy. AI is rapidly transforming the technology landscape and is progressively being applied to more aspects of industry, entertainment and our daily lives.

    We’ve all heard of Alexa, Cortana and Siri. These days, however, artificial intelligence also extends to business analytics, medicine, law and many other traditionally human fields. For better or worse, AI is a reality in the cybersecurity world as well. In this post, we will explore several aspects of this subject, but first I would like to help you define and differentiate three core concepts essential to understanding artificial intelligence.

     

    Not just “Buzz Words”

    Chances are you have heard the terms “artificial intelligence”, “machine learning” and “deep learning” used together or interchangeably, which can be confusing. A good way to start drawing the line is to understand that they are all related, and that their relationship runs from the general to the more specific.

    Artificial Intelligence is the development of technologies that allow computing systems to perform activities that would typically require human intelligence. Examples include the face recognition technologies used by Facebook or Apple, speech recognition and others. The concept is broad and encompasses the more specific ones that follow.

    Machine Learning refers to the statistical techniques used to help a machine or algorithm improve its performance on specific tasks. Algorithms learn from labeled datasets and improve their ability to predict results based on the data they have previously consumed.
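    To make that concrete, here is a minimal sketch of supervised machine learning in Python with scikit-learn. The dataset is synthetic and simply stands in for any labeled security data (say, feature vectors labeled benign or malicious); none of this comes from a real product.

    ```python
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    # Synthetic labeled dataset: X holds feature vectors, y holds the labels.
    X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

    # The model learns from the labeled examples...
    model = RandomForestClassifier(n_estimators=100, random_state=0)
    model.fit(X_train, y_train)

    # ...and is then scored on data it has never seen.
    print("accuracy:", model.score(X_test, y_test))
    ```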

    Deep Learning uses neural networks, which mimic the activity of the human brain, to give machines the ability to learn even from unstructured or unlabeled data. Learning from unlabeled data in this way is known as unsupervised learning, a field under heavy development with important milestones already accomplished.
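    As a toy illustration (not any vendor’s actual implementation), the PyTorch sketch below trains a tiny autoencoder: with no labels at all, the network learns to compress and reconstruct its input, and random data stands in for real telemetry.

    ```python
    import torch
    import torch.nn as nn

    # A tiny autoencoder: no labels, the network learns to reconstruct its own input.
    class AutoEncoder(nn.Module):
        def __init__(self, n_features=20, n_hidden=4):
            super().__init__()
            self.encoder = nn.Sequential(nn.Linear(n_features, n_hidden), nn.ReLU())
            self.decoder = nn.Linear(n_hidden, n_features)

        def forward(self, x):
            return self.decoder(self.encoder(x))

    model = AutoEncoder()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()

    data = torch.randn(256, 20)  # stand-in for unlabeled telemetry
    for epoch in range(50):
        optimizer.zero_grad()
        loss = loss_fn(model(data), data)  # the target is the input itself
        loss.backward()
        optimizer.step()
    ```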

     

    The “Arms Race”

    As you probably guessed, these developments provide a variety of tools meant to enhance productivity, accuracy and effectiveness through software, but they can also be used for evil. A double-edged sword has appeared in the conflict between hackers and security professionals: those who learn how to wield these developments as weapons will cause great trouble for the other side.

    So how could hackers benefit from the features artificial intelligence provides? The possibilities are overwhelming, but let me give you a few examples:

    Rogue Chatbots: Who hasn’t had a nice (sometimes awkward) conversation with our AI friends, the chatbots? They ask for and offer information about the services you’re browsing on a website; that’s their job, replacing human customer service agents via chat. Just as hackers compromise websites to manipulate forms or entice you to click on malicious links, they’re also able to tamper with chatbots. Hackers can use compromised chatbots to collect all sorts of valuable personal information from visitors. Just picture the huge database of names, emails, phone numbers or even credit card numbers that could be built this way. And you don’t even have to look for victims; they surrender their information voluntarily.

    Tampering with Training Data: You read above that machine learning algorithms rely on labeled or structured datasets, which are provided to them as training material. What if you’re a hacker who realizes that an organization is training an ML algorithm to gain intelligence on attack types, and before the system is at its most robust you can introduce training data of your own? That’s right: you can mis-train the ML algorithm to ignore the information related to your own attacks and render the system useless against you.
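    Here is a minimal, hypothetical illustration of this kind of data poisoning in Python, again on a synthetic dataset: the attacker flips the labels on a slice of the “attack” training samples, and the model trained on the poisoned data typically scores noticeably worse on held-out data.

    ```python
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    # Synthetic stand-in for security telemetry: y = 0 is benign, y = 1 is an attack.
    X, y = make_classification(n_samples=2000, n_features=15, random_state=1)
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=1)

    clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

    # The attacker relabels a chunk of "attack" training samples as "benign".
    y_poisoned = y_train.copy()
    attack_idx = np.where(y_poisoned == 1)[0][:300]
    y_poisoned[attack_idx] = 0

    poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)

    print("clean model accuracy:   ", clean_model.score(X_test, y_test))
    print("poisoned model accuracy:", poisoned_model.score(X_test, y_test))
    ```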

    AI BotMaster: If you are familiar with botnets, you know about the “Command and Control” center. Botnets are large groups of compromised devices, typically recruited through malware, which act upon the instructions a human hacker gives them. Imagine the scale of a campaign where a hacker can rely on AI admins to run botnet operations. Besides being stealthier, such attacks would scale dramatically.

    On the other side of the equation are the security professionals and their operations. Here are some examples of how AI can also boost their job performance:

    Zero-Day Malware Detection: This is a field in which several security firms are already applying artificial intelligence. Zero-day malware has a “surprise” factor, since there is often a time window in which it is dangerously effective before it is discovered and proper signatures are created against it. With machine and deep learning, anti-malware engines can start recognizing attacks even when no signature has been published, based on deviations from secure behavior and on known malicious activities and modifications. An example of this is how Sophos’s Intercept X product analyzes thousands of malware samples daily and learns from them to help build this capability.
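    A toy version of signature-less detection can be sketched with anomaly detection in Python: an IsolationForest learns what “normal” behavioral feature vectors look like and flags samples that deviate from that baseline. The features and data below are invented for illustration and do not reflect how any particular vendor does it.

    ```python
    import numpy as np
    from sklearn.ensemble import IsolationForest

    # Rows are behavioral feature vectors (e.g. API-call counts, file writes,
    # registry changes) gathered from known-good executions; synthetic here.
    rng = np.random.default_rng(42)
    baseline = rng.normal(loc=0.0, scale=1.0, size=(5000, 12))

    detector = IsolationForest(contamination=0.01, random_state=42).fit(baseline)

    # A new sample whose behavior deviates sharply from the learned baseline.
    suspicious = rng.normal(loc=6.0, scale=1.0, size=(1, 12))
    print(detector.predict(suspicious))  # -1 means anomalous, 1 means normal
    ```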

    SOC Operations Streamlining: If you’ve had exposure to SOC environments and SIEM technology, you’ll recall the difficulty of keeping SIEM engines fine-tuned with proper correlation rules. SIEM platforms offer great benefits and advance the detection of security events, so what if we could boost them with self-learning and let staff focus on the alerts that matter? That is the proposal of innovative companies like Seceon (with aiSIEM) and LogRhythm, which use artificial intelligence engines to build dynamic threat models from collected metadata, flows and logs, performing event correlation and behavioral analytics in a way that drastically reduces manual intervention.
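    As a simplified, hypothetical illustration of the behavioral-analytics idea (not any vendor’s engine), the sketch below builds a per-user baseline of login hours from past events and flags a login that deviates strongly from it.

    ```python
    from collections import defaultdict
    from statistics import mean, stdev

    # Each event is (user, login_hour); in a real SIEM these would come from parsed logs.
    events = [("alice", 9), ("alice", 10), ("alice", 9), ("alice", 11),
              ("bob", 14), ("bob", 15), ("bob", 13), ("bob", 14),
              ("alice", 3)]  # the 03:00 login is the outlier we want to surface

    history = defaultdict(list)
    for user, hour in events[:-1]:  # build per-user baselines from past activity
        history[user].append(hour)

    def is_anomalous(user, hour, threshold=3.0):
        hours = history.get(user, [])
        if len(hours) < 3:
            return False  # not enough history to judge
        mu, sigma = mean(hours), stdev(hours)
        return sigma > 0 and abs(hour - mu) / sigma > threshold

    user, hour = events[-1]
    if is_anomalous(user, hour):
        print(f"ALERT: unusual login hour for {user}: {hour:02d}:00")
    ```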

    Self-Healing Networks: Self-healing networks use artificial intelligence so that devices can learn the proper working state of an environment and recognize deviations from that baseline. AI-powered resilience solutions then automatically generate the instructions needed to bring a compromised environment back to a safe state after the operation of a network has been damaged.
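    A deliberately simplified sketch of that loop in Python: compare a device’s observed configuration against a learned known-good baseline and emit the corrective actions. Real resilience products do far more, and the keys and values here are made up for illustration.

    ```python
    # Known-good state learned while the network was healthy (hypothetical settings).
    baseline = {
        "ssh_enabled": False,
        "open_ports": {22, 443},
        "firewall_default": "deny",
    }

    # State observed after a suspected compromise.
    observed = {
        "ssh_enabled": True,
        "open_ports": {22, 443, 4444},
        "firewall_default": "allow",
    }

    def remediation_plan(baseline, observed):
        """Diff the observed state against the baseline and list corrective actions."""
        actions = []
        for key, good_value in baseline.items():
            if observed.get(key) != good_value:
                actions.append(f"restore {key!r} to {good_value!r} (currently {observed.get(key)!r})")
        return actions

    for action in remediation_plan(baseline, observed):
        print(action)
    ```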

     

    Food for thought

    I sincerely hope this article has sparked your interest in the development and application of artificial intelligence in cybersecurity. On a final note, you may have heard of the famous global cybersecurity staff shortage. It is a reality worldwide: there is definitely a need for more cybersecurity professionals, and a lot of room for growth and opportunity in this field. But here is one more secret to share… there is also a huge need for people who, besides cybersecurity, are well-versed in data science and artificial intelligence. Want to pick a career path that takes care of you for the foreseeable future? Then listen to your buddy Josué on this one; you can thank me later.

    If you’re interested in learning more about this subject, I highly recommend that you check out my Certified Ethical Hacker v10 Technology course.
