Shadow AI
A 2024 report on AI at work by Microsoft claims that ‘75% of knowledge workers use AI at work today, and 46% of users started using it less than six months ago’. The study highlights the importance of AI skills in product development and in freeing up time for important tasks by reducing the hours eaten up by mundane activities such as checking emails and summarising meetings.
While business recognises the importance of AI, it needs to set up controls and policies that define exactly which tools can be used to work on potentially sensitive data. Generative AI engines draw on everything they have already ingested to produce their responses, and that can include corporate data entered for seemingly harmless tasks such as tidying up reports or summarising meetings. Meeting assistants turn voice into text, so they will automatically capture potentially confidential information. An engine could return this data to an external user, either accidentally or if that user asked it the right questions. The sheer volume of data on which AI systems are built makes it difficult for their administrators to control specific outputs even when a valid complaint is filed.
A governed in-house product for Generative AI should be the answer, but the popular business solutions are not cheap. Microsoft’s Copilot corporate licence starts at around £30 per employee per month; for an organisation with, say, 1,000 staff that is an annual bill in the region of £360,000, which could eat up a large proportion of a corporate IT budget. Where controls have been imposed, or services are unavailable because of cost or system availability, there is a risk of staff turning to AI systems accessed through personal devices or browser-based solutions. There is an expanding range of such products, many based on proven engines such as OpenAI’s GPT-4 model, but it is less certain how these operators treat any information passed to and from them. At present there do not seem to be any obviously malware-hosting or fraudulent AI solutions, but it is wise to assume that if none exist now, some will in the future.
This is the world of ‘Shadow AI’. As with ‘Shadow IT’, if users are adopting third-party solutions an organisation needs to consider what risks that might entail. AI is becoming increasingly ubiquitous on the web, and users may not realise that any content created for them is also sucking up the data they have input. Search engines such as Google rely on AI algorithms. These are generally reliable, with the occasional significant failure such as Google suggesting that cheese be glued to pizza to stop it sliding off. That example is thought to have arisen from Google paying Reddit for access to its posts and comments, which were then drawn on in search results. A web search throws up multiple short references, but an AI-generated report or image can include many details or a great volume of text, making minor errors harder to spot. The banner image of this article demonstrates both the benefits and the pitfalls of AI: Kindus used the ChatGPT function of the Microsoft Copilot smartphone app to generate an image of cricket in a mountain scene, and the engine was clearly confused by what the request ‘cricket’ implies.
As AI solutions become firmly entrenched within the IT world, businesses need to rely on user training to govern their use and to control what data is passed to them. They should be regarded in the same way as web searches and email: the ultimate responsibility for their use rests with the individual user.