A Zero-Trust Architecture Can Help Defeat Attacks
Technology now evolves at a staggering rate, but the usual cybersecurity principles still apply: Reduce the attack surface, don’t trust implicitly, and regularly conduct security posture assessments.
The difference AI has made is that attacks are going to happen faster and more often. Government agencies can no longer be reactive on cybersecurity. Instead, they need to proactively bolster their protections, Mukkamala says.
“You have to assume that attacks are going to happen at a much faster velocity,” he says. “You have to constantly look for vulnerabilities and prioritize them, and look for exploits and patch them pretty much in real time.”
Part of the solution is simply doubling down on cyber hygiene: writing secure code, configuring systems securely and patching applications regularly. The best way to protect systems proactively is to tightly control network entry points so that it’s clear who is coming and going at all times.
Ultimately, organizations that adopt a zero-trust architecture will be best positioned to ward off evolving threats.
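The zero-trust principle described above can be sketched in a few lines: every request is authenticated and authorized individually, with access denied by default, no matter where on the network it originates. This is a minimal illustration only; the token store, policy table and resource names below are invented for the example, and real deployments use identity providers, short-lived credentials and continuous monitoring.

```python
from dataclasses import dataclass
import time

@dataclass
class Request:
    user: str
    token: str
    resource: str

# Hypothetical token store and per-resource access policy (invented data).
VALID_TOKENS = {"alice": ("tok-a1", time.time() + 3600)}
ACCESS_POLICY = {"payroll-db": {"alice"}}

def authorize(req: Request) -> bool:
    """Deny by default; grant only on a valid, unexpired token
    AND an explicit policy entry for this exact resource."""
    entry = VALID_TOKENS.get(req.user)
    if entry is None:
        return False
    token, expires = entry
    if req.token != token or time.time() >= expires:
        return False  # never trust a stale or mismatched credential
    return req.user in ACCESS_POLICY.get(req.resource, set())

# The same user is re-verified on every call -- no implicit trust.
print(authorize(Request("alice", "tok-a1", "payroll-db")))   # True
print(authorize(Request("alice", "tok-a1", "hr-files")))     # False: no policy entry
print(authorize(Request("mallory", "tok-x", "payroll-db")))  # False: unknown user
```

The key design choice is that nothing is granted by network location or prior success: a missing policy entry yields a denial even for a fully authenticated user.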
Agencies Must Train Employees on Responsible AI Management
When using any kind of GPT, or generative pre-trained transformer, employees shouldn’t give it any personal details or sensitive information.
“People started giving way too much sensitive data to OpenAI. They’ll say, ‘Open my desktop and access my files,’” Mukkamala says. “You don’t even know who is sharing your desktop with OpenAI because OpenAI has an API that will say, ‘Can I access your folders?’ And with remote work, your desktop has access to all of your cloud resources as well.”
Suddenly, information that should be kept under wraps is in the hands of an external organization, and employees have inadvertently made confidential information vulnerable. Any organization must make its employees aware of this threat. This is especially true for state and local agencies, given the volume and importance of the data they collect (birth and death certificates, school records, loans, licenses, registrations, etc.).
Employees also need to be diligent in detecting phishing scams, given that AI platforms such as ChatGPT are making scam messages far more convincing. Phishing is all about impersonation, and AI can learn someone’s patterns, writing style, signatures and personal preferences in just minutes to craft eerily realistic messages.
Perhaps ironically, this is where AI-powered cybersecurity tools are helpful. For example, companies such as Google have used machine learning to better detect phishing attacks. Beyond that, it’s about preparing employees to understand that they’re targets.
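The machine-learning phishing filters mentioned above can be illustrated with a toy naive Bayes text classifier. This is a sketch only: the training messages are invented, and production systems like Google’s rely on far richer features (sender reputation, link analysis) and vastly more data.

```python
from collections import Counter
import math

# Invented example messages for illustration only.
PHISHING = [
    "verify your account password urgently",
    "your account is suspended click to verify",
    "urgent wire transfer needed confirm password",
]
LEGIT = [
    "meeting moved to thursday agenda attached",
    "quarterly report draft ready for review",
    "lunch order confirm for thursday team meeting",
]

VOCAB = {w for d in PHISHING + LEGIT for w in d.split()}

def word_counts(docs):
    return Counter(w for d in docs for w in d.split())

def log_likelihood(msg, counts):
    # Laplace-smoothed log-probability of the message under one class.
    total = sum(counts.values())
    return sum(math.log((counts[w] + 1) / (total + len(VOCAB)))
               for w in msg.lower().split())

def is_phishing(msg):
    # Classify by whichever class makes the message more likely.
    return (log_likelihood(msg, word_counts(PHISHING))
            > log_likelihood(msg, word_counts(LEGIT)))

print(is_phishing("urgent please verify your password"))  # True
print(is_phishing("agenda for thursday team meeting"))    # False
```

Even this tiny model captures the idea: words like “verify” and “urgent” shift a message toward the phishing class, which is how statistical filters flag impersonation attempts that a busy reader might miss.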
“It comes down to awareness and layered defenses,” Mukkamala says. “You really have to train your people so that every single individual becomes a defense layer, not a weak point.”