Measure Your Organization’s AI Maturity
Secure, responsible AI use requires a baseline of literacy with the technology. That’s particularly true when using commercial AI models, because shadow AI is prevalent and very likely already happening within your organization. Shadow AI refers to the unauthorized use of a generative AI model outside of IT governance, which, well-meaning or not, creates potentially devastating cybersecurity and data privacy issues. If you’re not careful, it’s easy to feed a commercial AI model sensitive or confidential information that could leak into other models, putting that data at risk. All it takes is one bad actor finding and exploiting a vulnerability in a model such as ChatGPT.
To bolster AI maturity and make sure employees stay secure, organizations should ask themselves:
- What is our current AI literacy?
- What is the organization’s maturity curve on AI overall?
- Have we assessed the potential risks of commercial AI models, including shadow AI?
- Have we scanned our server logs to see how many people are using commercial AI models regularly? (A rough way to do this is sketched below.)
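On that last point, a quick first pass doesn’t require a dedicated tool. The sketch below counts proxy or web server log lines that reference well-known commercial AI domains; the log path, log format and domain list are illustrative assumptions rather than a prescribed approach.

```python
# Rough sketch, assuming plain-text proxy or web server access logs.
# The log path and AI domain list below are illustrative assumptions.
from collections import Counter

AI_DOMAINS = ("openai.com", "chatgpt.com", "anthropic.com", "gemini.google.com")
LOG_PATH = "/var/log/proxy/access.log"  # hypothetical location

def count_ai_usage(log_path: str = LOG_PATH) -> Counter:
    """Count log lines that mention a known commercial AI domain, keyed by domain."""
    hits: Counter = Counter()
    with open(log_path, encoding="utf-8", errors="ignore") as log:
        for line in log:
            for domain in AI_DOMAINS:
                if domain in line:
                    hits[domain] += 1
    return hits

if __name__ == "__main__":
    for domain, count in count_ai_usage().most_common():
        print(f"{domain}: {count} requests")
```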
Another way to make AI use more secure is not far off: the ability to run your own private instance of a generative AI model such as ChatGPT in your local cloud or, theoretically, on-premises in your server network. Then, organizations can block users of internal systems from accessing a public AI model or redirect them to the organization’s private model.
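As a rough illustration of that redirect pattern, the sketch below routes requests bound for public AI endpoints to an internal model instead. The hostnames, the internal URL and the pass-through policy are hypothetical placeholders, not a specific product configuration.

```python
# Minimal sketch of redirecting public AI traffic to a private model instance.
# The hostnames and internal endpoint below are hypothetical examples.
from urllib.parse import urlparse

PUBLIC_AI_HOSTS = {"api.openai.com", "chat.openai.com"}   # example public endpoints
PRIVATE_MODEL_URL = "https://ai.internal.example.gov/v1"  # hypothetical private instance

def route_ai_request(url: str) -> str:
    """Return the URL a request should actually go to: public AI endpoints are
    redirected to the organization's private model; everything else passes through."""
    host = urlparse(url).hostname or ""
    if host in PUBLIC_AI_HOSTS:
        return PRIVATE_MODEL_URL  # or block outright, depending on policy
    return url

if __name__ == "__main__":
    print(route_ai_request("https://api.openai.com/v1/chat/completions"))
```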
READ MORE: Implementing data governance strategies for AI success.
Use AI to Bolster Security
AI can create vulnerabilities, but it can also shore up defenses when used effectively. There are frameworks, such as AIOps, where AI is used to automate and streamline operations, including security functions, at a time when attack surfaces are growing and data collection is increasing exponentially. Organizations can use AI to combat alert fatigue by automating the handling of security alerts — an essential capability for state and local governments, which are generally more vulnerable to cyberattacks because of staff and budgetary constraints. AI tools can analyze alerts as they come in, trigger automatic incident responses and generate risk analyses.
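To make the alert-handling idea concrete, here is a minimal sketch of AI-assisted triage. The score_alert function is a stand-in for a real model call, and the indicators, thresholds and actions are illustrative assumptions.

```python
# Illustrative sketch of automated alert triage; score_alert() stands in
# for a real AI classifier, and the thresholds and actions are assumptions.
from dataclasses import dataclass

@dataclass
class Alert:
    source: str
    message: str

def score_alert(alert: Alert) -> float:
    """Stand-in for an AI model that rates alert risk from 0.0 to 1.0."""
    indicators = ("failed login", "privilege escalation", "exfiltration")
    return 0.9 if any(word in alert.message.lower() for word in indicators) else 0.2

def triage(alerts: list[Alert]) -> None:
    """Respond automatically to high-risk alerts; log the rest to fight alert fatigue."""
    for alert in alerts:
        risk = score_alert(alert)
        if risk >= 0.8:
            # e.g., open an incident ticket and isolate the affected host
            print(f"[RESPOND] {alert.source}: {alert.message} (risk {risk:.1f})")
        else:
            print(f"[LOG]     {alert.source}: {alert.message} (risk {risk:.1f})")

if __name__ == "__main__":
    triage([
        Alert("firewall", "Repeated failed login attempts from external IP"),
        Alert("endpoint", "Routine software update completed"),
    ])
```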
Soon, we’ll see AI-enhanced security through multi-agent AI models that use personas to turn security automation from a rules-based system into a logical, reasoned and predictive one. Organizations will be able to write specific personas, such as a white-hat hacker or a black-hat hacker, load them into different AI models, and use them to pressure-test their defense systems against multiple decision-making agents with different focuses and areas of expertise.
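As a rough sketch of what that could look like, the example below runs the same defense scenario past two hypothetical personas. The ask_model helper is a placeholder for a call to whatever generative AI model an agency chooses; the persona prompts and scenario are illustrative only.

```python
# Illustrative sketch of persona-based pressure testing; ask_model() is a
# placeholder for a real generative AI call, and the personas are examples.
PERSONAS = {
    "white_hat": "You are a defensive security analyst. Identify weaknesses and recommend fixes.",
    "black_hat": "You are an adversary. Describe how you would attempt to breach this system.",
}

def ask_model(persona_prompt: str, scenario: str) -> str:
    """Stand-in for sending a persona (as the system prompt) and a scenario to an AI model."""
    return f"[assessment of '{scenario}' from persona: {persona_prompt[:35]}...]"

def pressure_test(scenario: str) -> dict[str, str]:
    """Collect each persona's assessment of the same defense scenario."""
    return {name: ask_model(prompt, scenario) for name, prompt in PERSONAS.items()}

if __name__ == "__main__":
    results = pressure_test("Public-facing benefits portal with single-factor login")
    for persona, assessment in results.items():
        print(f"{persona}: {assessment}")
```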
This article is part of StateTech’s CITizen blog series. Please join the discussion on X (formerly Twitter).