The importance of governance and least privilege for secure AI
Secure AI demands robust governance and least privilege
When you purchase through links on our site, we may earn an affiliate commission.Here’s how it works.
Artificial intelligencehas rapidly become a cornerstone of modern business, driving innovation and efficiency across industries. Yet, as companies increasingly rely on AI to handle sensitive tasks, they are also opening themselves up to new security vulnerabilities.
Businesses integratingAIinto their operations means AI entities are becoming more autonomous and gaining access to more sensitive data and systems. As a result, CISOs are facing newcybersecuritychallenges. Traditional security practices, designed for human users and conventional machines, fall short when applied to AI. So, it’s vital for companies to address emerging vulnerabilities if they are to prevent security issues from unchecked AI integration and secure their most valuable data assets.
Offensive Research Evangelist at CyberArk Labs.
AI: More than just machines
Every single type ofidentityhas a different role and capability. Humans usually know how to best protect theirpasswords. For example, it seems quite obvious to every individual that they should avoid reusing the same password multiple times or choosing one that’s very easy to guess. Machines, includingserversand computers, often hold or manage passwords, but they are vulnerable to breaches and don’t have the capability to prevent unauthorized access.
AI entities, including chatbots, are difficult to classify with regard to cybersecurity. These nonhuman identities manage critical enterprise passwords yet differ significantly from traditional machine identities like software, devices, virtual machines, APIs, and bots. So, AI is neither a human identity nor a machine identity; it sits in a unique position. It combines human-guided learning with machine autonomy and needs access to other systems to work. However, it lacks the judgment to set limits and prevent sharing confidential information.
Rising investments, lagging security
Businesses are investing heavily in AI, with 432,000 UK organizations – accounting for 16% – reporting they have embraced at least one AI technology. AI adoption is no longer a trend; it’s a necessity, so spending on emerging technologies is only expected to keep rising in the coming years. The UK AI market is currently worth over £16.8 billion, and is anticipated to grow to £801.6 billion by 2035.
However, the rapid investment in AI often outpaces identitysecuritymeasures. Companies don’t always understand the risks posed by AI. As such, following best practices for security or investing enough time in securing AI systems is not always top of the priority list, leaving these systems vulnerable to potential cyberattacks. What’s more, traditional security practices such as access controls and least privilege rules are not easily applicable to AI systems. Another issue is that, with everything they already have going on, security practitioners are struggling to find enough time to secure AI workloads.
CyberArk’s 2024 Identity Security Threat Landscape Report reveals that while 68% of UK organizations report that up to half of their machine identities access sensitive data, only 35% include these identities in their definition of privileged users and take the necessary identity security measures. This oversight is risky, as AI systems, loaded with up-to-date training data, become high-value targets for attackers. Compromises in AI could lead to the exposure of intellectual property, financial information, and other sensitive data.
Are you a pro? Subscribe to our newsletter
Sign up to the TechRadar Pro newsletter to get all the top news, opinion, features and guidance your business needs to succeed!
The threat of cloud attacks on AI systems
The security threats to AI systems aren’t unique, but their scope and scale could be. Constantly updated with new trainingdatafrom within a company, LLMs quickly become prime targets for attackers once deployed. Since they must use real data and not test data for training, this up-to-date information can reveal valuable sensitive corporate secrets, financial data, and other confidential assets. AI systems inherently trust the data they receive, making them particularly susceptible to being deceived into divulging protected information.
In particular, cloud attacks on AI systems enable lateral movement and jailbreaking, allowing attackers to exploit a system’s vulnerabilities and trick it into disseminating misinformation to the public. Identity and account compromises in thecloudare common, with many high-profile breaches resulting from stolen credentials and causing significant damage to major brands across the tech, banking and consumer sectors.
AI can also be used to perform more complex cyberattacks. For example, it enables malicious actors to analyze every single permission that’s linked to a particular role within a company and assess whether they can use this permission to easily access and move through the organization.
So, what’s the sensible next step? Companies are still at the beginning of the integration of AI and LLMs, so establishing robust identity security practices will take time. However, CISOs can’t afford to sit back and wait; they must proactively develop strategies to protect AI identities before a cyberattack happens, or a new regulation comes into place and forces them to do so.
The key steps for strengthening AI security
While there is no silver bullet security solution for AI, businesses can put certain measures in place to mitigate the risks. More specifically, there are some key actions that CISOs can take to enhance their AI identity security posture as the industry continues to evolve.
•Identifying overlaps:CISOs should make it a priority to identify areas where existing identity security measures can be applied to AI. For example, leveraging existing controls such as access management and least privilege principles where possible can help improve security.
•Safeguarding the environment:It’s crucial that CISOs understand the environment where AI operates to protect it as efficiently as possible. While purchasing an AI security platform isn’t a necessity, securing the environment where the AI activity is happening is vital.
•Building an AI security culture:It’s hard to encourage all employees to adopt best identity security practices without a strong AI security mindset. Involving security experts in AI projects means they can share their knowledge and expertise with all employees and ensure everyone is well aware of the risks of using AI. It’s also important to consider how data is processed and how theLLMis being trained to encourage employees to think of what using emerging technologies entails and be even more careful.
The use of AI in business presents both great opportunities and unprecedented security challenges. As we navigate this new landscape, it becomes clear that traditional security measures are insufficient for the unique risks posed by AI systems. The role of CISOs is no longer simply about managing conventional cybersecurity threats; it now involves recognising the distinct nature of AI identities and securing them accordingly. So, businesses must make sure they invest time and resources in finding the right balance between innovation and security to keep up with the latest trends while protecting their most valuable assets.
We’ve featured the best AI phone.
This article was produced as part of TechRadarPro’s Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here:https://www.techradar.com/news/submit-your-story-to-techradar-pro
Andy Thompson, Offensive Research Evangelist, CyberArk Labs.
Logitech Brio 105 for Business review
Sihoo Doro S100 ergonomic office chair review
Owl Labs Meeting Owl 4+ review