Data Poisoning

Data Poisoning is an attempt to fool a system by inserting rogue data.  It is a potential threat to AI systems because they build their learning models from large data sets.  If those source data sets cannot be relied upon, neither can the decisions made by the AI.

An early example of successful data poisoning comes from 2016, when Microsoft's Twitter chatbot ‘Tay’ was manipulated by its users into posting offensive tweets.  Microsoft swiftly took the service down, and in the broader AI picture the incident might be regarded as little more than a joke in poor taste.

A 2023 study demonstrated how sophisticated Data Poisoning attacks could be mounted at relatively low cost to the attacker.  These were based on taking control of selected sources that would be mined by the AI.  Wikipedia, for example, can be manipulated by changing content in the short but predictable window before the AI harvests it as a data snapshot and before Wikipedia itself reverts the pages to known-good content.  Other key sources, such as web pages, can be manipulated by adding false content or by taking control of expired domains.  The proportion of data that needs to be altered to influence the AI can be relatively small.  A worrying implication is that data poisoning could lead to subtle changes in AI output.  Fake news could be generated, or management decisions influenced, by changing what an AI system sees as ‘the truth’.  A more generic effect might be to influence spam or malware filters so that harmful content is passed rather than blocked.
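To make the principle concrete, the short sketch below shows how flipping the labels on a small fraction of training examples can weaken a simple spam filter.  It is an illustration of the general idea only, not a reproduction of any attack described in the 2023 study; the messages, model and numbers are invented for the example.

# Illustrative sketch only: label-flipping poisoning against a toy spam filter.
# The corpus and model below are invented for the example.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

rng = np.random.default_rng(0)

# Tiny, made-up training corpus: 1 = spam, 0 = legitimate.
messages = [
    "win a free prize now", "cheap pills fast delivery", "claim your reward today",
    "meeting moved to friday", "minutes from the board meeting", "invoice attached for review",
] * 50
labels = np.array([1, 1, 1, 0, 0, 0] * 50)

def train(texts, y):
    vec = CountVectorizer()
    model = MultinomialNB().fit(vec.fit_transform(texts), y)
    return vec, model

def spam_score(vec, model, text):
    # Probability the filter assigns to the message being spam.
    return model.predict_proba(vec.transform([text]))[0, 1]

probe = "claim your free prize"

# Filter trained on clean data.
vec, model = train(messages, labels)
print("clean spam score:   ", round(spam_score(vec, model, probe), 3))

# Poisoned filter: flip the labels on a fraction of the spam examples so the
# model becomes less confident that this kind of content is spam.
poisoned = labels.copy()
spam_idx = np.where(labels == 1)[0]
flip = rng.choice(spam_idx, size=int(0.3 * len(spam_idx)), replace=False)
poisoned[flip] = 0

vec_p, model_p = train(messages, poisoned)
print("poisoned spam score:", round(spam_score(vec_p, model_p, probe), 3))

Run against the clean data the probe message is scored as almost certainly spam; after poisoning, the score drops even though the filter's code is untouched, only its training data has changed.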

Data Poisoning can also be used to protect a data source.  Although this might harm the AI model, the issues here are of trust and ownership rather than any deliberate attempt to cause harm.  Artists may wish to prevent AI systems from copying and manipulating their images without prior consent.  AI copies represent a loss of income to the original artists; in addition, inferior copies dilute the brand or reputation of the original artwork.

Protecting these images is relatively easy because an AI engine does not see pictures; it relies on their context and on data representing blocks of shapes and colours within the file.  By deliberately manipulating that underlying data so that it no longer matches the visible image, the AI can be fooled.  A real-world example is Nightshade, which uses subtle changes to the shading of an image to make it appear as something else to the AI scraper.  Nightshade will not necessarily make it impossible for an AI to categorise an image accurately, but it does make the processing required noticeably harder.  Another tool is Glaze, notable for receiving a special mention among Time magazine's best inventions of 2023.  Glaze also manipulates the image composition, but to make it appear to be in a different artistic style, for example abstract rather than realistic.  Both Glaze and Nightshade can be applied to the same image.
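The sketch below illustrates the underlying idea: a small, targeted change to pixel values can alter what a model ‘sees’ while the picture looks much the same to a person.  It is not the Nightshade or Glaze algorithm; the linear ‘classifier’ is an invented toy standing in for a real image model.

# Minimal sketch of image cloaking: a small pixel-level nudge flips a toy
# model's decision. NOT the Nightshade or Glaze method; classifier is invented.
import numpy as np

rng = np.random.default_rng(1)

# Toy linear classifier over a 32x32 greyscale image:
# positive score -> "dog", non-positive score -> "cat".
w = rng.normal(size=(32, 32))

def classify(image):
    return "dog" if np.sum(w * image) > 0 else "cat"

# The "artwork" to be protected, with pixel values in [0, 1].
image = rng.uniform(0.0, 1.0, size=(32, 32))
score = np.sum(w * image)

# Cloak: nudge every pixel slightly in the direction that pushes the score
# across the decision boundary (for a linear model this follows the gradient).
epsilon = 1.5 * abs(score) / np.sum(np.abs(w))   # per-pixel change, with margin
direction = -np.sign(score) * np.sign(w)
cloaked = np.clip(image + epsilon * direction, 0.0, 1.0)

print("per-pixel change :", round(float(epsilon), 4))
print("original label   :", classify(image))
print("cloaked label    :", classify(cloaked))

The per-pixel change needed to flip the toy model's label is only a few percent of the pixel range, which is the same trade-off the real tools exploit: the alteration is barely noticeable to a human viewer but significant to the model.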

The additional computational work required by AI engines to deal with Data Poisoning, whether as an attack or as a data protection device, will increase power demands in a field that is already claimed to consume as much electricity as a small country.  Energy concerns have yet to have a serious impact on the spread of AI.  At the current stage of deployment, the key question is to what extent the output of AI can be trusted, in either its authenticity or its ownership.
