Technology > Cybersecurity | 10/9/2024 9:00 AM
As AI systems based on large language models become more powerful and widespread, new cybersecurity challenges emerge. From jailbreaks to indirect prompt injection, these systems are vulnerable to a wide array of LLM-specific threats. All of these problems, however, boil down to a single issue: LLMs are probabilistic algorithms and, like any other ML system, are inherently unreliable. Does this mean they cannot be useful? Absolutely not! In this talk, we will discuss the challenges of building secure and reliable LLM-based applications and ways to make them safer and better aligned with your business goals.
Presented by Vladislav Tushkanov - Research Development Group Manager | Machine Learning Technology | AI Research | Kaspersky.