AI Washing

Artificial Intelligence has become the buzzword in computing developments and product features.  AI washing refers to products that exploit the buzz around AI while delivering nothing new in reality.

In March 2024 the US Securities and Exchange Commission issued cease and desist orders and fined two investment advisers, Delphia ($225,000) and Global Predictions ($175,000).  Both had claimed to use AI and machine learning on client data as a means of improving the investment advice they gave.  The SEC concluded that false and misleading statements had been made regarding the use of AI.

It is, however, not always clear what a promise of AI encompasses.  A consumer might reasonably assume that AI involves some form of generative AI or large language model: a system built from a large bank of existing data that can in turn produce new, relevant content.  Computing systems have long been able to make decisions based on programmed logic: if you choose ‘a’ we will do ‘c’ rather than ‘b’.  That logic can be enhanced by machine learning, where prior choices affect the options offered the next time the same decision comes up.  These can be sophisticated engines, but they should not be seen or marketed as AI assistance or advice.  In 2019 the India-based startup Engineer.ai was sued by its own chief business officer for claiming that it used AI tools to automate app building when the bulk of the decision making was done by humans.  The inclusion of an element of AI was seen as a sop to attract investors; the actual AI in use was limited to decision trees and algorithms for estimating pricing, timelines and resource allocation.
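
As a rough illustration (not taken from any of the products mentioned above), the ‘programmed logic’ and learning from prior choices described here might amount to no more than the short Python sketch below: a hard-coded decision table plus a counter that re-orders options by past popularity.  All names in it are hypothetical, and nothing in it is generative or trained on a large body of data, which is why marketing such an engine as AI invites scrutiny.

    # Illustrative sketch only: rule-based suggestions with a trivial
    # preference counter.  This is programmed logic, not generative AI.
    from collections import Counter

    # Hard-coded decision table: if the user chooses 'a', offer 'c' before 'b'.
    DECISION_RULES = {
        "a": ["c", "b"],
        "b": ["a", "c"],
    }

    history = Counter()  # how often each option has been picked before

    def suggest(choice: str) -> list[str]:
        """Return the pre-programmed options, re-ordered by past popularity."""
        options = DECISION_RULES.get(choice, [])
        # The 'learning' is only a sort on prior counts, not a trained model.
        return sorted(options, key=lambda option: -history[option])

    def record(picked: str) -> None:
        """Remember the user's pick so later suggestions lean towards it."""
        history[picked] += 1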

A product could mistakenly over-emphasise its use of computing models to attract customers; it would then be just as liable as if it had deliberately made untrue claims.  Any promotion campaign needs to consider carefully how its claims will be justified.  In February 2023 the US Federal Trade Commission issued advice on keeping claims about what AI solutions can do in check.  The FTC warned:

  • Do not exaggerate what AI can do.
  • Do not claim that AI will perform better than a non-AI solution.
  • Consider the risks if the AI does not perform as expected.
  • If you do not use AI but claim to do so, investigators will find out.

The customer also needs some way to be sure that any AI claims are genuine and will be of benefit to them.  A significant purchase or investment might be involved, and any buyer should request evidence of the AI engines at work.  The benefit of the product as a whole should also be considered: even where AI is in use, it may not justify the effort or cost of embracing it.  AI in itself should not be a guaranteed selling point or purchase requirement.
