Could AI be the answer to the UK’s productivity problem? More than half (58%) of organizations think so, with many experiencing a diverse range of AI-related benefits, including increased innovation, improved products or services, and enhanced customer relationships.
You don’t need me to tell you this – chances are you’re one of the 7 million UK workers already using AI in the workplace, whether that’s saving a few minutes on emails, summarizing a document, pulling insights from research, or creating workflow automations.
Yet while AI is a real source of opportunity for companies and their employees, pressure on organizations to adopt it quickly can inadvertently give rise to increased cybersecurity risk. Meet shadow AI.
What is shadow AI?
Feeling the heat to do more with less, employees are looking to GenAI to save time and make their lives easier – with 57% of office workers globally turning to publicly available third-party AI apps. But when employees start bringing their own tech to work without IT approval, shadow AI rears its head.
Today this is a very real problem, with as many as 55% of global workers using unapproved AI tools while working, and 40% using those that are outright banned by their organization.
Further, internet searches for the term “shadow AI” are on the rise – leaping by 90% year-on-year. This shows the extent to which employees are “experimenting” with GenAI – and just how precariously an organization’s security and reputation hang in the balance.
Primary risks associated with shadow AI
If UK organizations are going to stop this rapidly evolving threat in its tracks, they need to wake up to shadow AI – and fast. The use of LLMs within organizations is gaining speed, with over 562 companies around the world reported to be engaging with them last year.
Despite this rapid rise in use cases, 65% of organizations still don’t fully grasp the implications of GenAI. And each unsanctioned tool can introduce significant vulnerabilities, including (but not limited to):
1. Data leakage
When used without proper security protocols, shadow AI tools put sensitive content at risk. For example, confidential information entered into a public LLM may be retained and used to train the model, leaking it beyond the organization’s control.
2. Regulatory and compliance risk
Transparency around AI usage is central to safeguarding not just the integrity of business content but also users’ personal data and safety. However, many organizations lack expertise or knowledge of the risks associated with AI, and/or are deterred by cost constraints.
3. Poor tool management
A serious challenge for cybersecurity teams is maintaining a tech stack when they don’t know who is using what – especially in a complex IT ecosystem. Comprehensive oversight is needed: security teams must have visibility and control over all AI tools in use.
4. Bias perpetuation
AI is only as effective as the data it learns from, and flawed data can lead to AI perpetuating harmful biases in its responses. When employees use shadow AI, companies are exposed to this risk, as they have no oversight of the data such tools draw upon.
The fight against shadow AI begins with awareness. Organizations must acknowledge that these risks are very real before they can pave the way for better ways of working and higher performance – in a secure and sanctioned way.
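In practice, that awareness often starts with visibility into what AI tools are actually being used. As a minimal sketch – with hypothetical domains, log fields, and allowlist, not any specific vendor’s product – a security team could compare outbound traffic from web proxy logs against an approved-tool allowlist:

```python
# Minimal sketch: flagging possible shadow AI usage from proxy logs by
# checking outbound AI-categorized traffic against an approved-tool allowlist.
# All domains, users, and the log format below are hypothetical examples.

APPROVED_AI_DOMAINS = {"ai.internal.example.com", "copilot.example.com"}

def flag_shadow_ai(proxy_log_entries):
    """Return AI-categorized log entries whose domain is not on the allowlist."""
    return [
        entry
        for entry in proxy_log_entries
        if entry["category"] == "generative-ai"
        and entry["domain"] not in APPROVED_AI_DOMAINS
    ]

logs = [
    {"user": "alice", "domain": "copilot.example.com", "category": "generative-ai"},
    {"user": "bob", "domain": "free-llm.example.net", "category": "generative-ai"},
    {"user": "carol", "domain": "intranet.example.com", "category": "internal"},
]

for entry in flag_shadow_ai(logs):
    print(f"Unapproved AI tool: {entry['user']} -> {entry['domain']}")
```

This is only a starting point – real deployments would feed such checks from existing proxy or CASB telemetry – but it illustrates the principle: you can’t govern tools you can’t see.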
Embracing the practices of tomorrow, not yesterday
To realize the potential of AI, decision makers must create a controlled, balanced environment that puts them in a secure position – one where they can begin to trial new processes with AI organically and safely. Crucially though, this approach should exist within a zero-trust architecture – one which prioritizes essential security factors.
AI shouldn’t be treated as a bolt-on. Securely leveraging it requires a collaborative environment that prioritizes safety. This ensures AI solutions enhance – not hinder – content production. Adaptive automation helps organizations adjust to changing conditions, inputs, and policies, simplifying deployment and integration.
Any security experience must also be seamless, and individuals across the business should be free to apply and maintain consistent policies without interruption to their day-to-day work. A modern security operations center features automated threat detection and response that not only spots threats but handles them directly, making for a consistent, efficient process.
Robust access controls are also key to a zero-trust framework, preventing unauthorized queries and protecting sensitive information. While these governance policies have to be precise, they must also be flexible to keep pace with AI adoption, regulatory demands, and evolving best practices.
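The deny-by-default spirit of those controls can be sketched in a few lines. This is an illustrative toy, assuming hypothetical roles and data-classification labels rather than any particular zero-trust product:

```python
# Minimal sketch of a zero-trust style gate for AI tool access:
# every request is evaluated against an explicit role/data-classification
# policy, and anything not explicitly allowed is denied by default.
# The roles and classification labels are hypothetical examples.

POLICY = {
    ("analyst", "public"): True,
    ("analyst", "internal"): True,
    ("intern", "public"): True,
}

def authorize_ai_request(role: str, data_classification: str) -> bool:
    """Allow only (role, classification) pairs explicitly listed in POLICY."""
    return POLICY.get((role, data_classification), False)

# An analyst may send internal data to an approved AI tool...
print(authorize_ai_request("analyst", "internal"))
# ...but no role may send confidential data, because no rule permits it.
print(authorize_ai_request("analyst", "confidential"))
```

The design choice worth noting is that the policy table only ever grants access; absence of a rule means denial, which is what keeps the framework flexible – new tools and classifications can be added without reopening the default posture.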
Finding the right balance with AI
AI could very well be the answer to the UK’s productivity problem. But for this to happen, organizations need to ensure there isn’t a gap in their AI strategy where employees feel limited by the tools available to them – a gap that inadvertently invites shadow AI risks.
Powering productivity needs to be secure, and organizations need two things to ensure this happens – a strong and comprehensive AI strategy and a single content management platform.
With secure and compliant AI tools, employees are able to deploy the latest innovations in their content workflows without putting their organization at risk. This means that innovation doesn’t come at the expense of security – a balance that, in a new era of heightened risk and expectation, is key.
This article was produced as part of TechRadarPro’s Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro