Cybersecurity

AI in cybersecurity: Risks and opportunities

AI is unavoidable in today’s technology landscape, so it’s important for your organization to understand its role in your cybersecurity strategy.

The emergence of AI in cybersecurity

Artificial Intelligence (AI) has received a great deal of attention over the past few years, and many organizations are evaluating how it can improve their business operations. President Biden has even issued an Executive Order to establish AI standards.

AI can improve operations and strengthen cybersecurity, but it can also open the door to new risks. It has raised a number of concerns, particularly about eliminating jobs and helping threat actors develop more sophisticated attacks. There is even debate about whether unmanaged AI should be treated as a new form of shadow IT.

This article explores some of the main considerations surrounding AI and cybersecurity and offers an outlook on what lies ahead.

AI cybersecurity threats: New risks

While AI can strengthen operations and cybersecurity, it also hands attackers powerful new capabilities.

Cyber criminals have already started using AI to sharpen their attacks. The very tools that can fortify systems and data can also be turned against them. There has been a rise in malicious counterparts to ChatGPT, such as WormGPT and FraudGPT. Some of the ways criminals are using AI include:

  1. Improved phishing attacks: AI can automate and personalize phishing emails or messages. Tailored messages with clean grammar are more likely to persuade individuals to reveal sensitive information. At the same time, broad and affordable access to AI has lowered the cost of attacks, driving a substantial increase in attack volume and, more alarmingly, in phishing quality. Traditional campaigns relied on high-volume, low-efficacy blasts; because AI can consume information as accurately as it creates it, attackers can now run highly targeted spear-phishing campaigns at a fraction of the cost. What once required hours or days of social media research to craft a targeted attack can be accomplished in minutes or hours, motivating attackers to pursue a wider range of targets.
  2. Automated hacking: AI can quickly discover vulnerabilities in a system so a hacker can exploit them. It can also learn how detection systems function, allowing the attacker to circumvent them. This lets cyber criminals attack a higher volume of systems in a shorter time span. Traditional hacking required human effort and staffing, so attackers had to be selective about their targets, and a successful attack method could accomplish only so much before it was publicized and countered. AI tools allow attackers to move through more targets more quickly, making each attack more effective before a response can be mounted.
  3. Data theft: Using AI, cyber criminals can analyze and sift through vast amounts of stolen data more efficiently, speeding up exfiltration. A successful breach can also be monetized much more quickly because the data is easier to consume, reducing the time available to respond once the breach is identified. Additionally, traditional protective measures such as CAPTCHA can often be fooled by trained AI models that use image recognition to interpret pictures and fuzzy characters, enabling attackers to create fake accounts or access protected applications.
  4. AI-powered malware: Cyber criminals can leverage AI to develop malware that is more effective at infecting systems and evading detection. AI-based malware can adapt independently and make informed decisions about how to attack. In some cases, it can monitor communications and security protocols to determine the most effective payload for a specific scenario without any human intervention. This makes it more evasive, lets it observe other, unrelated attacks to learn how a given security system responds, and allows it to strike at an optimal time using optimal techniques.
  5. Use of deepfakes: Deepfakes are highly realistic fake videos or audio recordings used primarily for misinformation campaigns. They have been used to trick individuals into believing they are listening to, and sometimes interacting with, a trusted entity. Misinformation has been a major part of cyber campaigns, especially around geopolitical conflicts such as the Russia-Ukraine and Israel-Hamas wars. As more video and audio of organizational leaders is shared on social media and public websites, attackers gain more high-quality training material for deepfake tools. In one elaborate, targeted effort discovered in late January, attackers simulated a company’s CFO on a video conference and convinced employees trained to identify cyberattacks to transfer over $25 million to fake vendor accounts. As of this writing, the attackers have not been identified and the money has not been recovered.

Another concern is the unauthorized use of AI within an organization. Shadow AI, like shadow IT, is the unsanctioned use of AI outside of IT governance. It can introduce additional threats and expose organizations to intentional or unintentional malicious activity within their infrastructure. Given how quickly AI is being deployed, it can be difficult for organizations to enforce limits through technology rather than policy. IT organizations should lead the charge in preparing for AI in the enterprise, with an early emphasis on establishing usage guidelines, adopting a code of ethics, and running wide-reaching education campaigns about the capabilities and potential consequences of AI.

AI in cybersecurity defense

Despite these risks, there are several ways that you can leverage AI to improve your cybersecurity planning:

  1. Detection and response: Organizations can use AI to help detect and respond to attacks more swiftly and efficiently. AI and machine learning (ML) can learn from each attack, enhancing their ability to counter future threats.
  2. Advanced threat intelligence: Using AI-driven threat intelligence platforms can help organizations stay ahead of new and evolving threats. These platforms provide cost efficiencies, speed, scalability, prediction capabilities, and reduced human error.
  3. More effective patch management: Regularly updating systems and software is crucial. Patches to operating systems (OS), key applications, and other systems are announced by CISA daily.1 Keeping up can be challenging. AI and ML can help by automating patch testing and deployment, reducing the burden on IT staff.
  4. User behavior analytics: According to studies, 74% of organizations believe they are at least moderately vulnerable to insider threats, and more than half have experienced one in the past year.2 AI-based tools can detect patterns, trends, and correlations in user behavior that help mitigate these risks and their impacts (a minimal sketch of this approach follows the list).
  5. Data backups: Data backups are a vital part of data recovery. AI can automate backups on a more frequent schedule, reducing the reliance on IT and security teams to perform these tasks. It can also provide predictive analysis to anticipate future trends and assist with intelligent disaster recovery.
  6. Offensive security testing: AI can become part of an organization’s offensive security operations, such as penetration testing. The main benefit is faster delivery: practitioners can perform research and other routine tasks far more efficiently while reducing redundancy and burnout. AI is best applied to tasks such as reconnaissance, service and vulnerability scanning, exploitation, web application security testing, and social engineering campaigns.
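
To make the user behavior analytics item above more concrete, here is a minimal sketch of how an anomaly-detection model can flag unusual activity. It uses scikit-learn's IsolationForest on synthetic login records; the features, thresholds, and data are illustrative assumptions rather than a description of any specific product.

  import numpy as np
  from sklearn.ensemble import IsolationForest

  # Illustrative synthetic login records with three behavioral features:
  # login hour, megabytes downloaded, and distinct systems touched per session.
  rng = np.random.default_rng(42)
  normal_activity = np.column_stack([
      rng.normal(10, 2, 500),   # business-hours login times
      rng.normal(50, 15, 500),  # routine download volumes (MB)
      rng.normal(3, 1, 500),    # typical number of systems accessed
  ])
  suspicious = np.array([[3.0, 900.0, 25.0]])  # 3 a.m. login, huge transfer, many systems

  # Train on historical behavior; the contamination rate is an assumed baseline.
  model = IsolationForest(contamination=0.01, random_state=0)
  model.fit(normal_activity)

  # predict() returns -1 for anomalous records and 1 for normal ones.
  for record in np.vstack([normal_activity[:2], suspicious]):
      label = model.predict(record.reshape(1, -1))[0]
      print(record, "-> anomalous" if label == -1 else "-> normal")

In a production deployment, events would be scored as they stream in, and the assumed contamination rate would be tuned against the organization's own baseline.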

In addition to these AI-based approaches, other methods such as multi-factor authentication (MFA), encryption, cybersecurity incident response plans (CIRP) with tabletop exercises (TTX), and user training are effective. It's important to remember that cybersecurity is an ongoing effort, requiring constant vigilance and adaptability as new threats emerge.
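
As a brief illustration of one of these controls, the sketch below shows a time-based one-time password (TOTP) flow of the kind behind most authenticator-app MFA. It assumes the open-source pyotp library is installed; the account and issuer names are placeholders.

  import pyotp

  # Generate a per-user secret once at enrollment and store it server-side.
  secret = pyotp.random_base32()
  totp = pyotp.TOTP(secret)

  # The provisioning URI can be rendered as a QR code for an authenticator app.
  # "user@example.com" and "ExampleCorp" are placeholder values.
  print(totp.provisioning_uri(name="user@example.com", issuer_name="ExampleCorp"))

  # At login, check the submitted code against the current 30-second window.
  code = input("Enter the 6-digit code from your authenticator app: ")
  print("Access granted" if totp.verify(code) else "Access denied")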

Leverage AI in cybersecurity to protect your organization

It’s important to understand AI and how it can be effectively used in cybersecurity. A few recommendations include:

  • Decide how AI can and should be used in your organization.
  • Educate your team on Generative AI.
  • Develop an AI policy for the organization.

Uncertainty brings both excitement and anxiety, and reactions to AI reflect both. Yet it was just over 30 years ago that the World Wide Web emerged, building on ARPANET, whose origins date back to 1969. By 1994, 11 million American households were online, and President Bill Clinton was among the first world leaders to embrace its potential.3 While some were concerned about where the technology would lead, others embraced it. Today it is ubiquitous: it has revolutionized communications, broken down geographic barriers, optimized business, and become a normal part of everyday life. AI may not be far behind. It is unavoidable in today’s technology landscape, so your organization needs to understand its role in your cybersecurity strategy.


Endnotes

  1. “Cybersecurity Alerts & Advisories: CISA.” Cybersecurity and Infrastructure Security Agency CISA, February 6, 2024. https://www.cisa.gov/news-events/cybersecurity-advisories.
  2. “Report: More than Half of Organizations Have Experienced an Insider Threat in the Past Year.” Security Today. Accessed February 6, 2024. https://securitytoday.com/articles/2023/02/06/reportmore-than-half-of-organizations-have-experienced-an-insider-threat-in-the-past-year.aspx.
  3. “Technology in the American Household.” Pew Research. Accessed February 6, 2024. https://assets.pewresearch.org/wp-content/uploads/sites/5/legacy-pdf/582.pdf.

Let's talk!

Interested in learning more? We'd love to connect and discuss the impact CAI could have on your organization.
