Operations & Technology

AI IN CYBERSECURITY: THE GOOD, THE BAD AND BEING ON THE PRECIPICE OF A NEW ERA IN TECHNOLOGY

It may seem like artificial intelligence appeared out of the ether—or interwebs—in the past year, but AI is nothing new.  

Just ask your phone a question. Hey, Siri; OK, Google ...  

People have been thinking about ways to bring technology to “life” for generations, as evidenced by Elektro, the Westinghouse Moto-Man, which made its debut at the 1939 World’s Fair.

The anthropomorphic robot was purportedly designed to be subservient to humans “if you use me well,” a key point for any technological innovation really. Bad often emerges from good.  

Elektro was impressive for its time, but the robot relied on humans to control its information output. Today’s AI is very much the opposite in that the technology is designed to give insights to humans by consuming and examining data with unparalleled speed.  

Influencers like Mira Murati, chief technology officer at OpenAI, the company behind ChatGPT, have helped bring AI into the limelight this past year.

But as good technology and software are created, bad actors work around the clock to use or exploit them. There are reports, for instance, of a tool called WormGPT being promoted in underground forums. Its purpose is to use AI to create more sophisticated phishing and business email compromise attacks.

As security researcher Daniel Kelley told The Hacker News, “Cybercriminals can use such technology to automate the creation of highly convincing fake emails, personalized to the recipient, thus increasing the chances of success for the attack.”  

As you might expect with cybersecurity, battle lines are being drawn between the people creating AI solutions to help protect companies and the people building AI software designed to find vulnerabilities in the defenses around data; systems; financial and personal information; intellectual property (IP); and Industrial Internet of Things (IIoT) and other IoT devices.

The potential for AI bias and for intentional trickery that produces false results through so-called data poisoning is also a concern when it comes to AI misinformation and misdirection. So are AI advancements in deepfake technology intentionally designed to trick people into revealing sensitive information. There are never any guarantees that everyone using AI is doing so with good intentions.

Also, one area that I talk a lot about is password security. These days, it’s important to use multifactor authentication to help protect information and systems. Passwords alone are not enough, and AI is making them more vulnerable than ever. In one study, an AI password-cracking tool was able to crack 71% of all common passwords in less than a day.

Additionally, it can crack any password with fewer than eight characters, even one including special characters, in six minutes. The good news is it would take 6 quintillion years for it to crack an 18-character password with symbols, numbers and a mix of upper- and lowercase letters. This is why using suggested passwords and password managers is so important.
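To make the arithmetic concrete, here is a rough, illustrative sketch in Python of why length matters so much; the guessing rate and character set are assumptions chosen for illustration, not figures from that study.

```python
def crack_time_years(length: int, charset_size: int,
                     guesses_per_second: float = 1e12) -> float:
    """Worst-case brute-force time, in years, for a random password.

    A charset_size of 95 roughly covers upper/lowercase letters, digits and
    printable symbols; guesses_per_second is an illustrative assumption,
    not a measured benchmark.
    """
    keyspace = charset_size ** length            # total possible passwords
    seconds = keyspace / guesses_per_second      # time for an exhaustive search
    return seconds / (60 * 60 * 24 * 365)

for n in (6, 8, 12, 18):
    print(f"{n} characters: ~{crack_time_years(n, 95):.1e} years")
```

The takeaway is that every added character multiplies the search space, which is why long, randomly generated passwords from a password manager hold up so much better than short, clever ones.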

And it should go without saying by now, but AI will use all the information that goes into it, so please do not put sensitive data/IP into AI without thoroughly vetting who will be able to tap into that information.  

That said, we do believe the benefits of AI outweigh the risks, but don’t think the potential for bad actors using AI to increase their cyberattack success rate doesn’t keep chief information security officers up at night.   

MOVING AT LIGHTSPEED  

Today’s companies cannot be asleep at the wheel when it comes to cybersecurity. That’s because the battle between those trying to protect software and systems and those trying to penetrate and damage or steal from them is escalating with every AI advancement.   

The technology is evolving so fast that more than 30,000 people, including prominent AI experts, have signed an open letter calling for a pause on training the most powerful AI systems out of fear that, left unchecked, AI could have unforeseen consequences.

Attacking technology and IIoT is what cybercriminals do best, so empowering them through AI is truly a scary proposition. In addition to the Software as a Service (SaaS)/cloud-based technology we use every day to make our lives easier, IIoT helps support and deliver utility services (electrical systems, water systems, etc.).

There’s a good reason VentureBeat recently wrote, “AI and machine learning (ML) are becoming attackers’ preferred technologies.”  

These technologies work at a scale and speed that were previously impossible.

When it comes to cybersecurity, Professional Employer Organizations (PEOs) play a critical role in safeguarding not only their own sensitive data but also that of their small and medium-size business (SMB) clients. As we always say, everyone has a role to play in keeping an organization safe from a cyberattack. Just one slip-up can create a domino effect.

Still, while AI cyberattacks generate most of the buzz, there are several AI technologies you can lean on to help you keep the bad guys out. Here are a few you should consider including in your cybersecurity efforts.  

HELPFUL AI   

We need to make one thing clear: No cybersecurity tool is infallible, but AI can up your cybersecurity game immensely and help reduce the time required to identify, respond to and mitigate a cyberattack.

For example, 74% of cybersecurity experts say that the dynamic nature of the cloud leads to “poor visibility” and “blind spots,” according to cloud cybersecurity company CrowdStrike. The company’s software is designed to elevate an organization’s ability to hunt for threats by using AI, among other things.

Mike Sentonas, CrowdStrike’s president, said in a news release about its AI tool: “Our approach has always been rooted in the belief that the combination of AI and human intelligence together will transform cybersecurity.”  

We couldn’t agree more.  

Last year, an African technology university’s AI tools were used to thwart an attack after attackers tried to infiltrate the school through malware.

Using AI-powered intrusion detection, the university was able to spot a suspicious desktop connection and stop the infiltration. Intrusion detection tools basically scour a system for anomalies and alert users if something seems amiss.
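To give a feel for how that kind of anomaly detection works under the hood, here is a minimal sketch using scikit-learn’s IsolationForest; the connection features, numbers and model choice are hypothetical, not a description of any particular vendor’s product.

```python
# Minimal anomaly-detection sketch: learn what "normal" connections look like,
# then flag anything that falls outside that baseline.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Hypothetical baseline of normal desktop connections:
# [megabytes transferred, session minutes, hour of day]
normal = np.column_stack([
    rng.normal(50, 10, 1000),   # typical data volume
    rng.normal(30, 5, 1000),    # typical session length
    rng.normal(14, 2, 1000),    # mostly business hours
])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# A suspicious connection: huge transfer, very short session, 3 a.m.
suspect = np.array([[900.0, 2.0, 3.0]])
print("alert" if model.predict(suspect)[0] == -1 else "ok")   # prints "alert"
```

Real products train on far richer telemetry, but the principle is the same: model normal behavior and surface the outliers for a human to review.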

Before a threat walks through the door, many companies are also using AI for vulnerability management. This technology searches for “cracks” in the foundation before a cyberattacker can find them. Let’s face it: Humans can only spot so much. If you’ve ever bought a property, did the inspector find everything that was wrong with the house? Probably not. People miss things and make mistakes. (Even Albert Einstein made some from time to time.) Things can go undetected by even the best inspectors, but, unlike humans, AI has the ability to continuously search for vulnerabilities. That combination of accuracy and speed is increasingly important for dealing with risk efficiently.
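As a simplified illustration of that continuous scanning, the short sketch below checks an inventory of installed components against a feed of known-vulnerable versions; the package names, version numbers and feed are made up for the example.

```python
# Toy vulnerability-management pass: compare an asset inventory against a
# (hypothetical) feed of known-vulnerable component versions.
INSTALLED = {"examplelib": "1.0.2", "webserver": "2.4.1"}      # asset inventory
KNOWN_VULNERABLE = {"examplelib": {"1.0.1", "1.0.2"}}          # stand-in for a CVE feed

def scan(installed, vulnerable):
    findings = []
    for package, version in installed.items():
        if version in vulnerable.get(package, set()):
            findings.append(f"{package} {version} has a known vulnerability")
    return findings

# An automated scanner would run this continuously and prioritize the results.
for finding in scan(INSTALLED, KNOWN_VULNERABLE):
    print(finding)
```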

As we mentioned, AI can play a key role in malware detection. We’re sure you’ve heard this a lot, but every errant click from a phishing email could open a huge problem. Once an attacker gets through the door, the danger could easily spread throughout the organization, especially if an administrator is involved. Traditionally, malware detection worked with signatures to identify threats, which is known as “trapping.” Now, AI has helped move malware detection to a process called “hunting,” which is the inverse. In hunting, AI-powered malware detection determines what is “good” and then judges dangers based on that. The software basically says, “That’s not normal for an application to behave that way.” It’s a heuristic approach to problem-solving, and it’s an essential area where AI improves cybersecurity.  
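The contrast between the two approaches can be sketched very simply; the signature hash, process name and “normal” behavior list below are hypothetical placeholders rather than real indicators.

```python
import hashlib

# "Trapping": match a file against known-bad signatures.
KNOWN_BAD_HASHES = {"d41d8cd98f00b204e9800998ecf8427e"}   # hypothetical signature feed

def signature_match(file_bytes: bytes) -> bool:
    return hashlib.md5(file_bytes).hexdigest() in KNOWN_BAD_HASHES

# "Hunting": start from a baseline of known-good behavior and flag deviations.
NORMAL_BEHAVIOR = {
    "wordprocessor.exe": {"open_document", "print"},      # what the app normally does
}

def behavior_alert(process: str, action: str) -> bool:
    allowed = NORMAL_BEHAVIOR.get(process, set())
    return action not in allowed                          # anything unusual raises an alert

print(signature_match(b""))                               # True: hash is in the feed
print(behavior_alert("wordprocessor.exe", "spawn_shell")) # True: not normal behavior
```

Real hunting tools build that baseline with machine learning rather than a hand-written list, but the principle is the same: flag whatever does not look normal.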

One of the biggest challenges we have as humans is handling redundant tasks. If you’ve ever seen a crepe chef make mistakes while spinning out the dough even though you’re sure they’ve made thousands of them in their career, you’ll understand. As humans, we’re just not wired to do things over and over again without eventually making a mistake. That’s where AI can really help with task automation in terms of cybersecurity. One can only look at a screen for so long before their eyes start to go blurry, but AI never lacks focus. AI task automation simplifies threat detection and increases the ability to recognize threats faster through machine learning.  

Thanks to companies like Microsoft, CrowdStrike and others, these AI tools are readily available to all PEOs, and PEOs can also benefit from the pooled resources provided by PEO Defender.  

We are clearly at the precipice of a new tech age based on artificial intelligence. As weird as it might seem, we will need AI to protect us from AI, and there’s no time to waste. Remember that what you put into AI stays there. We see that in Natural Language Processing (think ChatGPT) where inputs turn into actions. Companies must do everything they can to ensure the safety of their data.   

As they said near the end of a promotional film for Elektro, “All he lacks is a heart.”  

That is the essence of what will separate humans from technology. Caring for people and doing no harm is what cybersecurity is all about, including in this new era of AI.

 

DWAYNE SMITH
Chief Information Security Officer
PrismHR
Hopkinton, MA   
