May 08, 2023

How Agencies Can Mitigate Threats Created by AI Enhancements

OpenAI’s release of GPT-4, the latest model behind ChatGPT, highlights the rapid evolution of artificial intelligence. Agencies must be proactive to keep up.

Artificial intelligence platforms have captured the public consciousness, raising existential questions about technology’s place in our lives. AI also brings cybersecurity concerns, and with a recent ChatGPT update, state and local governments are more vulnerable to phishing and ransomware attacks.

GPT-4, the newest model powering ChatGPT, is more capable of creating convincing phishing emails that could compromise systems. As Route Fifty reports, OpenAI says this latest version grants users increased “steerability.”

The AI can adopt a prescribed personality rather than keeping the “classic ChatGPT personality,” Route Fifty reports. In other words, it can tailor its tone and style to a user’s customizations instead of falling back on a single default mode.

For now, humans may still be better at phishing, but AI could quickly catch up. ChatGPT’s development is a microcosm of how fast AI technology is evolving.

The other side of the cybersecurity challenge is using AI for good without compromising an agency’s security. Therefore, government organizations need to answer two questions:

  • How can an agency bolster its cybersecurity defenses against a smarter AI?
  • How can government use AI tools such as ChatGPT responsibly?

“One piece of good news is that AI in cybersecurity is not new. ChatGPT just accelerated it,” says Srinivas Mukkamala, chief product officer at Ivanti, which has released its own government cybersecurity assessment. “You’re going to have a lot of variety, and you’re going to have a huge volume of attacks.”

A Zero-Trust Architecture Can Help Defeat Attacks

The rate at which technology evolves now is staggering, but the usual cybersecurity principles still apply: Reduce the attack surface, don’t trust implicitly and regularly conduct security posture assessments.

The difference AI has made is that attacks are going to happen faster and more often. Government agencies can no longer be reactive on cybersecurity. Instead, they need to proactively bolster their protections, Mukkamala says.

“You have to assume that attacks are going to happen at a much faster velocity,” he says. “You have to constantly look for vulnerabilities and prioritize them, and look for exploits and patch them pretty much in real time.”
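
To picture what that continuous loop might look like in practice, consider the minimal sketch below. The scanner feed and patch routine are illustrative stubs for this article, not a real vulnerability management product, and the severity threshold is an assumed policy choice:

    # Illustrative sketch of a continuous scan-prioritize-patch loop.
    # scan() and patch() are stand-ins, not a real vulnerability API.
    import time
    from dataclasses import dataclass

    SEVERITY_THRESHOLD = 7.0  # assumed policy: treat CVSS >= 7.0 as urgent

    @dataclass
    class Finding:
        asset: str
        cve: str
        cvss: float
        exploit_available: bool

    def scan() -> list[Finding]:
        # Stand-in for an agency's real scanner feed.
        return [
            Finding("web-01", "CVE-2023-0001", 9.8, True),
            Finding("db-02", "CVE-2023-0002", 4.3, False),
        ]

    def patch(finding: Finding) -> None:
        print(f"Patching {finding.cve} on {finding.asset}")

    while True:
        # Prioritize by severity and known exploits, worst first.
        urgent = sorted(
            (f for f in scan() if f.cvss >= SEVERITY_THRESHOLD or f.exploit_available),
            key=lambda f: f.cvss,
            reverse=True,
        )
        for finding in urgent:
            patch(finding)
        time.sleep(300)  # re-check every five minutes, approximating "real time"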

Part of the solution is simply doubling down on cyber hygiene: writing secure code, configuring systems securely and patching applications regularly. Proactive protection also means locking down network entry points so the agency knows who is coming and going at all times.

Ultimately, organizations that adopt a zero-trust architecture will be best positioned to ward off evolving threats.
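
At its core, zero trust means re-verifying identity, device health and authorization on every request rather than trusting anything inside the network perimeter. The sketch below illustrates that idea only; the tokens and access grants are made up, and a production deployment would rely on an identity provider and a policy engine rather than hardcoded tables:

    # Minimal illustration of zero trust: every request is re-verified,
    # with no implicit trust based on network location.
    from dataclasses import dataclass

    @dataclass
    class Request:
        user: str
        token: str
        device_compliant: bool
        resource: str

    VALID_TOKENS = {"alice": "tok-alice"}   # stand-in for an identity provider
    GRANTS = {("alice", "payroll-db")}      # least-privilege access policy

    def authorize(req: Request) -> bool:
        if VALID_TOKENS.get(req.user) != req.token:  # verify identity every time
            return False
        if not req.device_compliant:                 # verify device posture every time
            return False
        return (req.user, req.resource) in GRANTS    # enforce least privilege

    print(authorize(Request("alice", "tok-alice", True, "payroll-db")))  # True
    print(authorize(Request("alice", "tok-alice", True, "hr-records")))  # False: no grant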

LEARN MORE: How state and local agencies can establish zero trust.

Agencies Must Train Employees on Responsible AI Management

When using any kind of GPT, or generative pre-trained transformer, employees shouldn’t give it any personal details or sensitive information.

“People started giving way too much sensitive data to OpenAI. They’ll say, ‘Open my desktop and access my files,’” Mukkamala says. “You don’t even know who is sharing your desktop with OpenAI because OpenAI has an API that will say, ‘Can I access your folders?’ And with remote work, your desktop has access to all of your cloud resources as well.”

Suddenly, information that should be kept under wraps is in the hands of an external organization; employees have inadvertently exposed confidential data. Any organization must make its employees aware of this threat. This is especially true for state and local agencies, given the volume and sensitivity of the data they collect (birth and death certificates, school records, loans, licenses, registrations, etc.).
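
One practical guardrail is scrubbing obvious identifiers from a prompt before it ever leaves the agency’s network. The sketch below is a minimal illustration with assumed regex patterns; it is not a vetted data loss prevention pipeline, and names and other free-form identifiers need more than pattern matching:

    # Illustrative guardrail: redact obvious identifiers before a prompt
    # is sent to any external AI service. Patterns here are assumptions.
    import re

    PATTERNS = {
        "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
        "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
        "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    }

    def redact(prompt: str) -> str:
        for label, pattern in PATTERNS.items():
            prompt = pattern.sub(f"[REDACTED {label}]", prompt)
        return prompt

    text = "Resident SSN 123-45-6789, contact jdoe@example.com."
    print(redact(text))
    # -> "Resident SSN [REDACTED SSN], contact [REDACTED EMAIL]."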

Employees also need to be diligent in detecting phishing scams, considering that platforms such as ChatGPT are growing more convincing. Phishing is all about impersonation, and AI can learn someone’s patterns, writing style, signatures and personal preferences in just minutes to craft eerily realistic messages.

Perhaps ironically, this is where AI-powered cybersecurity tools are helpful. For example, companies such as Google have used machine learning to better detect phishing attacks. Beyond that, it’s about preparing employees to understand that they’re targets.
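
For a sense of how that kind of detection works under the hood, here is a toy text classifier in the same spirit, built with scikit-learn. It is a sketch only: real systems train on vastly larger corpora and weigh many more signals than raw message text:

    # Toy illustration of ML-based phishing detection, not a production model.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    emails = [
        "Urgent: verify your account now or it will be suspended",
        "Your invoice is attached, please wire payment immediately",
        "Meeting moved to 3 p.m., see updated agenda",
        "Lunch on Friday to celebrate the project launch?",
    ]
    labels = [1, 1, 0, 0]  # 1 = phishing, 0 = legitimate

    model = make_pipeline(TfidfVectorizer(), LogisticRegression())
    model.fit(emails, labels)

    suspect = ["Please confirm your password to avoid account suspension"]
    print(model.predict_proba(suspect))  # probabilities for [legitimate, phishing]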

“It comes down to awareness and layered defenses,” Mukkamala says. “You really have to train your people so that every single individual becomes a defense layer, not a weak point.”
