
AI and cyber practicum
Welcome to the CISO Perspectives Weekly Briefing, where we break down this week’s conversation, providing insights into relevant research and information to help you further understand the topics discussed.
At 425 words, this briefing is about a 4-minute read.
Deploying AI securely.
Artificial intelligence (AI) holds the potential to dramatically improve productivity, customer experience, and data processing across an organization. However, each of these benefits comes with significant challenges that, if left unaddressed, can become major security and operational risks. This dynamic puts security leaders in a difficult position: the business wants to adopt AI models quickly, often overlooking the underlying risks that come with them.
There are, however, ways to deploy an AI system securely. In April 2024, a group of government agencies published joint guidance on deploying secure and resilient AI systems. With this guidance, the agencies looked to:
- Improve the confidentiality, integrity, and availability of AI systems.
- Help ensure that known AI system vulnerabilities are appropriately mitigated.
- Provide methodologies to better protect, detect, and respond to malicious activity targeting AI systems or related data.
The guidance lays out several core principles for secure deployment. First, it is critical to secure the deployment environment. Through effective governance, hardened configurations, and robust architecture, organizations can ensure their networks and systems are as secure as possible before AI systems are introduced.
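To make "hardened configurations" concrete, here is a minimal pre-flight sketch in Python: the model service refuses to start unless a few baseline hardening checks pass. The specific checks, environment variables, and certificate path are illustrative assumptions, not prescriptions from the guidance.

```python
import os
import ssl
import sys

# Hypothetical certificate location; adjust for your environment.
REQUIRED_TLS_CERT = os.environ.get("TLS_CERT_PATH", "/etc/pki/model-serving.crt")

def preflight_checks() -> list[str]:
    """Return a list of hardening failures; an empty list means safe to start."""
    failures = []

    # Never run the serving process with debug features enabled.
    if os.environ.get("DEBUG", "").lower() in ("1", "true"):
        failures.append("DEBUG mode is enabled")

    # Never run the serving process as root (Unix-only check).
    if hasattr(os, "geteuid") and os.geteuid() == 0:
        failures.append("process is running as root")

    # Require a TLS certificate so traffic to the model is encrypted.
    if not os.path.isfile(REQUIRED_TLS_CERT):
        failures.append(f"missing TLS certificate at {REQUIRED_TLS_CERT}")

    # Require a modern TLS floor for any sockets this process creates.
    ctx = ssl.create_default_context()
    if ctx.minimum_version < ssl.TLSVersion.TLSv1_2:
        failures.append("TLS minimum version is below 1.2")

    return failures

if __name__ == "__main__":
    problems = preflight_checks()
    if problems:
        for p in problems:
            print(f"HARDENING FAILURE: {p}", file=sys.stderr)
        sys.exit(1)
    print("Environment checks passed; starting model service.")
```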
Next, organizations need to continuously protect their AI systems. By securing exposed APIs, actively monitoring model behavior, and protecting model weights, organizations can address vulnerabilities, discover weaknesses, and prevent the introduction of malicious code.
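One concrete control in this layer is verifying model weights against a known-good digest before loading them, so tampered files are caught early. Below is a minimal Python sketch of that idea; the file name and expected digest are hypothetical placeholders, and in practice the trusted digest would come from a signed manifest recorded when the weights were approved.

```python
import hashlib
from pathlib import Path

# Hypothetical values for illustration only; the trusted digest would
# normally come from a signed, access-controlled manifest.
WEIGHTS_PATH = Path("models/classifier-v3.bin")
EXPECTED_SHA256 = "0" * 64  # placeholder digest

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream the file through SHA-256 so large weight files never sit fully in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def load_weights_verified(path: Path, expected: str) -> bytes:
    """Return the weight bytes only if their hash matches the trusted digest."""
    actual = sha256_of(path)
    if actual != expected:
        raise RuntimeError(
            f"Integrity check failed for {path}: expected {expected}, got {actual}"
        )
    return path.read_bytes()

# Usage: weights = load_weights_verified(WEIGHTS_PATH, EXPECTED_SHA256)
```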
Lastly, organizations must manage the human side of AI: leaders need to enforce strict access controls, prioritize end-user awareness and training, and implement logging and monitoring mechanisms.
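As a sketch of how access control and audit logging might pair up in practice, the following Python example wraps a sensitive AI administration action in a role check and logs every attempt, allowed or denied. The role names and the protected operation are illustrative assumptions, not part of the guidance itself.

```python
import functools
import logging

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(levelname)s %(message)s")
audit_log = logging.getLogger("ai.audit")

def require_role(*roles):
    """Allow the wrapped action only for users holding one of the given roles,
    and write every attempt, allowed or denied, to the audit log."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(user, *args, **kwargs):
            if user.get("role") in roles:
                audit_log.info("ALLOW user=%s action=%s", user.get("name"), fn.__name__)
                return fn(user, *args, **kwargs)
            audit_log.warning("DENY user=%s action=%s", user.get("name"), fn.__name__)
            raise PermissionError(f"{user.get('name')} may not call {fn.__name__}")
        return wrapper
    return decorator

@require_role("model-admin")  # hypothetical role name
def update_model_weights(user, new_version: str):
    print(f"Deploying model version {new_version}")

# Example usage:
update_model_weights({"name": "alice", "role": "model-admin"}, "v3.1")
# update_model_weights({"name": "bob", "role": "analyst"}, "v3.1")  # raises PermissionError
```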
Balancing AI.
The pressure to adopt AI will only continue to grow, and many organizations will feel compelled to move fast. In this rush to innovate, however, many are deploying AI models that are not secured and lack effective monitoring. These unsecured, unmonitored systems can introduce risks equal to, or even far greater than, the risks of not deploying AI at all.
AI is worth considering and investing in if the fit is right, but it needs to be implemented deliberately and securely. The question for organizations should not be whether they can deploy AI, but whether they can do so responsibly.