Discover what it’s like to be a CISO dealing with security risks in the era of artificial intelligence (AI).
Today, it seems impossible to escape the buzz and discussions around AI. These two letters have sparked unprecedented innovation and unlocked tools we didn’t even know we needed. However, AI is a double-edged sword. The unmanaged use of AI tools by employees can pose significant threats to organizations, and just as we harness AI, cybercriminals are using it too, making their attacks more sophisticated than ever before.
So I sat down with WalkMe’s CISO, Daniel Chechik, and asked him some tough questions about how he sees this fine balance between innovation and risk management. Together, we delved into how that balance can be struck and which tools can help. If you’re still searching for answers about the risks of AI, I invite you to join us in an upcoming webinar on October 2nd, where we’ll explore the complexity of navigating AI risks more extensively and share some lessons learned from how we do that at WalkMe.
Daniel, you’ve been with WalkMe for over 7 years now, maintaining the secure operations of over 1,000 of our employees scattered across dozens of locations. Tell me, what potential risks do AI tools introduce, and how do you mitigate them?
- AI tools have made significant progress, particularly in Generative AI. However, by its very nature, this technology introduces additional risks, such as the need to sanitize generated responses and the potential for manipulating GenAI tools into producing unintended content, among other issues. Essentially, the use of AI is beneficial for productivity and efficiency. I view it more as an opportunity than as a risk that needs to be disabled or prevented. We must, however, approach its use with caution. It’s crucial to assess each use case individually and evaluate how we can manage it effectively.
We’ve seen how ChatGPT introduced a significant advancement in Generative AI, demonstrating that this technology can be easily adopted and used by everyone. Shortly after its release, we began to see many free services leveraging GenAI built on ChatGPT’s technology.
- Right, the issue starts when you don’t really know which tools are being used throughout your organization, and whether they’re safe. And even if they are safe, how do you know that employees are using those tools securely, without sharing sensitive information like code snippets, unreleased products, or customers’ personal information?
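To make that concern concrete, here is a small, purely illustrative Python sketch of the kind of pre-submission check a security team might run on text before it reaches an external GenAI service, flagging obvious patterns such as email addresses or API-key-like strings. The patterns and function names are hypothetical examples, nowhere near a complete data-loss-prevention policy.

```python
# Illustrative pre-submission check for prompts bound for external GenAI
# tools. The regexes below are simplistic examples, not a real DLP policy.

import re

SENSITIVE_PATTERNS = {
    "email address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "possible API key": re.compile(r"\b[A-Za-z0-9_-]{32,}\b"),
    "credit card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def flag_sensitive(prompt: str) -> list[str]:
    """Return the names of sensitive patterns found in the prompt."""
    return [name for name, rx in SENSITIVE_PATTERNS.items() if rx.search(prompt)]

if __name__ == "__main__":
    sample = "Please summarize this ticket from jane.doe@example.com"
    hits = flag_sensitive(sample)
    if hits:
        print("Blocked: prompt appears to contain", ", ".join(hits))
    else:
        print("Prompt passed the basic checks")
```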
Then, how do you ensure the ethical and responsible use of AI in your organization?
- It’s challenging for sure. We started by addressing the most commonly used AI tools among our employees using Shadow AI, our proprietary tool that helps uncover which tools are being used, by whom, and how. The tool uncovered thousands of AI tools, and our next step is to guide employees to use those AI tools according to our compliance, security, and data privacy requirements.
To do so, we used our digital adoption platform to provide informed guidelines for using the tools we permitted. For certain tools, we not only offered guidelines but also redirected users to our managed solutions.
Today, it is fairly quick and easy to identify the AI tools being used, create quick actions for distribution to users, and monitor their effectiveness.
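Shadow AI is proprietary, so its internals aren’t something we can show here, but to give a rough sense of the discovery step Daniel describes, here is a minimal, hypothetical Python sketch that scans an exported web-proxy log for traffic to a short watchlist of GenAI domains and tallies which tools each user has touched. The domain list, log format, and field names are assumptions made for the example, not details of the actual product.

```python
# Minimal sketch of "shadow AI" discovery from a web-proxy log export.
# Assumptions (not the actual tooling): the log is CSV with "user",
# "domain", and "timestamp" columns, and we match against a small
# hand-maintained watchlist of known GenAI service domains.

import csv
from collections import defaultdict

# Hypothetical watchlist of GenAI-related domains.
GENAI_DOMAINS = {
    "chat.openai.com": "ChatGPT",
    "api.openai.com": "OpenAI API",
    "bard.google.com": "Bard",
    "claude.ai": "Claude",
}

def discover_ai_usage(log_path: str) -> dict[str, set[str]]:
    """Return a mapping of user -> set of GenAI tools they accessed."""
    usage: dict[str, set[str]] = defaultdict(set)
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            tool = GENAI_DOMAINS.get(row["domain"].lower())
            if tool:
                usage[row["user"]].add(tool)
    return usage

if __name__ == "__main__":
    for user, tools in discover_ai_usage("proxy_log.csv").items():
        print(f"{user}: {', '.join(sorted(tools))}")
```

A real deployment would obviously pull from richer telemetry and a maintained catalog of services, but the idea is the same: surface who is using which tools so that guidance can follow.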
It seems like you’ve built a well-oiled machine that proactively keeps an eye out for risks and gently guides employees towards secure practices. How do you balance the benefits of AI with the need for data privacy and protection?
- We evaluate the benefits on a case-by-case basis. Essentially, we try to understand the tools that people want to use, evaluate whether these address a genuine business pain point, and then search for a commercial product that can meet this need.
Thanks for sharing your take on the matter, Daniel. AI has undeniably unleashed a Pandora’s box of possibilities in the realm of technology, leaving us with an abundance of questions, each answer leading to even more.
In our upcoming webinar, titled “AI in the workplace: Balancing innovation and security,” we will delve into the intricacies of AI management, ensuring that it remains an invaluable asset rather than a potential liability. If you have any questions of your own, don’t worry, there will be dedicated time for a Q&A session – so be sure to register.