Kidding, of course.
In fact, as artificial intelligence (AI) and generative AI permeate and transform businesses, chief information security officers are adding even more responsibilities to their already jam-packed workloads. They’re learning how to manage the security challenges that AI presents, capitalize on its opportunities, and adapt to new ways of working – all of which demand new leadership priorities in this fast-moving and constantly changing era of AI.
“AI has matured to the extent that it’s now in every aspect of our lives,” says Candy Alexander, CISO and cyber risk practice lead at technology advisory company NeuEon. “And while the impact has been largely positive for organizations, it’s also more challenging, particularly for CISOs. They need to make sure they’re putting the appropriate parameters around the use of AI and machine learning, but without squelching creativity and innovation – and that’s a big challenge.”
1. Guide the C-suite
As businesses rush to implement AI effectively, CISOs can play an important role in guiding the C-suite on a variety of matters, starting with vetting AI use cases, Alexander says. “These are conversations with technologists, security, and the business. You can’t just jump into the AI game without really understanding what it is you want to do and how you want to do it,” she says. “You want to improve your customer experience? Great. From there, you can build that approach program but also have protections in place from the start.”

“You can’t just jump into the AI game without really understanding what it is you want to do and how you want to do it.”
Candy Alexander, CISO and cyber risk practice lead, NeuEon
Similarly, CISOs should be involved in conversations around governance, Alexander adds. “AI is really shining the light on the need for data governance,” she says. “Who owns the data? Who consumes the data? Who should have access to it? How will the data life cycle morph and change? How will you protect that data? These are all conversations CISOs need to be part of.”
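To make those governance questions concrete, here is a minimal, hypothetical sketch of how a security team might record the answers for a single AI use case. The structure, field names, and sample values are illustrative assumptions, not a framework from Alexander or NeuEon; they simply mirror the questions above about ownership, consumers, access, life cycle, and protection.

```python
from dataclasses import dataclass, field

@dataclass
class AIDataGovernanceRecord:
    """Hypothetical record of data governance answers for one AI use case."""
    use_case: str                      # what the business wants to do
    data_owner: str                    # who owns the data
    data_consumers: list[str] = field(default_factory=list)   # who consumes the data
    access_roles: list[str] = field(default_factory=list)     # who should have access
    lifecycle_policy: str = "unset"    # how the data life cycle is managed
    protection_controls: list[str] = field(default_factory=list)  # how the data is protected

# Example values are invented for illustration only.
record = AIDataGovernanceRecord(
    use_case="customer-experience chatbot",
    data_owner="CRM product team",
    data_consumers=["support analytics", "model fine-tuning pipeline"],
    access_roles=["support-agents", "ml-engineers"],
    lifecycle_policy="raw transcripts purged after 90 days",
    protection_controls=["encryption at rest", "PII redaction before training"],
)
print(record)
```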
2. Emphasize organizational literacy
Organizations are experimenting with AI in a number of ways, from writing marketing copy to developing code, but these use cases are not always recognized from an enterprise perspective, Alexander warns. Employees, for example, may not understand that unauthorized uses of AI can put sensitive corporate information at risk. “Without guardrails, you could have people inputting confidential information into a generative AI [tool], which then becomes part of the language training model. It’s absolutely terrifying.”

“CISOs used to only need to understand the business needs of the data, but now they need to understand the business needs and the implications.”
Jordan Rae Kelly, senior managing director and head of cybersecurity for the Americas, FTI Consulting
CISOs should focus this corporatewide awareness on how AI is used across various business processes, the ethical implications of AI, the organization’s policies on responsible AI use, and the potential security threats and best practices for mitigating them.
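As one illustration of the kind of mitigation such policies might describe, the sketch below shows a hypothetical pre-submission check that flags obviously sensitive strings before an employee’s prompt reaches an external generative AI tool. The patterns, function names, and messages are assumptions made for this example, not guidance from the article; real deployments would typically rely on dedicated data loss prevention tooling rather than a handful of regexes.

```python
import re

# Illustrative patterns only; a real data-loss-prevention rule set would be far broader.
SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api_key": re.compile(r"\b(?:sk|key)[-_][A-Za-z0-9]{16,}\b"),
    "internal_label": re.compile(r"\b(confidential|internal only)\b", re.IGNORECASE),
}

def screen_prompt(prompt: str) -> list[str]:
    """Return the names of any sensitive patterns found in the prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(prompt)]

def submit_prompt(prompt: str) -> str:
    """Block submission when the screen flags anything; otherwise pass the prompt along."""
    findings = screen_prompt(prompt)
    if findings:
        return f"Blocked: prompt appears to contain {', '.join(findings)}."
    # Placeholder for the actual call to a generative AI tool.
    return "Submitted to the generative AI tool."

if __name__ == "__main__":
    print(submit_prompt("Summarize this confidential Q3 roadmap for the board."))
    print(submit_prompt("Draft a friendly reminder about Thursday's team offsite."))
```

Even a simple check like this makes the policy conversation tangible: it forces the team to decide what counts as sensitive and what should happen when a prompt is blocked.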
[Read also: Racing to deploy GenAI? Security starts with good governance]
For guidance on driving organizational literacy in AI, Alexander recommends reviewing resources from industry organizations, such as the Cloud Security Alliance (CSA) and the Open Web Application Security Project (OWASP).
3. Prioritize education and training in security teams
A big challenge that security organizations face is having both breadth and depth of knowledge in areas like AI, which are rapidly changing, Kelly says.

In fact, according to a 2024 report from the CSA, C-suite executives self-report notably higher familiarity with AI technologies (52%) than their staff (11%). This goes against the conventional thinking we hear about security leaders and AI, and the assumption that “everyone is scared,” said Caleb Sima, chair of CSA’s AI security alliance, in a recent interview with VentureBeat. The survey contests the notion that every junior staffer, just by virtue of age, is somehow fluent in the latest iterations of AI and that “every CISO is saying no to AI, it’s a huge security risk, it’s a huge problem,” Sima said. If anything, it’s a good reminder that corporatewide awareness strategies (discussed above) must include specific education initiatives for IT departments.
[Read also: Here’s your ultimate guide to AI cybersecurity – benefits, risks, and rewards]
Though teams may already be stretched thin, it’s important for CISOs to intentionally build dedicated time into their teams’ schedules for focused training in AI, Alexander says. This training should prioritize the latest AI tools and technologies, their implications for cybersecurity and team members’ specific roles, and emerging threats.
4. Create a culture of curiosity
While it’s important for CISOs to prioritize AI training within their teams, it’s equally important to encourage those teams to experiment with AI, Gatha Sadhir tells the SANS Institute.

“Leaders have to lead from the back, not the front. You have to let thinkers think. A lot of ideas are coming from the team members themselves.”
Gatha Sadhir, global CISO, Carnival Corporation
Encouraging security teams to experiment with AI has a number of benefits. It motivates those teams to explore new AI technologies and methodologies, which can lead to new solutions for complex security challenges. It also promotes ongoing skill development, encourages teams to collaborate and share insights, and ultimately helps security teams understand how AI can support and align with broader organizational objectives and strategies. It can also boost the overall employee experience, something CISOs and enterprise leaders are paying closer attention to in today’s pressurized job market.
As CISOs maneuver in the changing AI landscape, it’s important that they assume a leadership role in the AI strategy of the organization, Kelly says. “[CISOs] are no longer a back-of-house job,” she says. “They need to have a full leadership role and the ability to work within an organization to anticipate what the company is doing and make those decisions about a strategic AI investment.”