AI Text Poses a New Risk from Scammers

Until very recently the first sign to look for in recognising a fake or scam was poor grammar.  The evolution of AI that can create decent, believable text from a few relevant prompts is a boon to the scammer.  ChatGPT and similar platforms provide automated text with good grammar and spelling.  This content could be used in phishing emails, fake websites, corporate investment portfolios and on-line chatbots, all attempting to relieve the target of financial or personal information.

From a credibility point of view, text that reads well with no mistakes could now itself be a sign of a scam or fake news.

With a little more work the operator of the AI engine can add some human-created content to make the whole system even more believable.  One chatbot used to negotiate viewings for property rentals has had human operators step in whenever the bot could not provide a meaningful response.  This example is not an out-and-out scam, but it aims to fool the customer into believing they are dealing with a human rather than a bot, while using the AI agent to reduce the time real employees spend on each customer interaction.
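
As an illustration only, that fallback pattern can be reduced to a few lines: the bot answers when it is confident and quietly hands over to a person when it is not.  The ask_bot and route_to_human helpers and the confidence threshold below are hypothetical placeholders, not the actual rental-viewings system.

```python
# Hypothetical sketch of the human-fallback pattern described above.
# ask_bot(), route_to_human() and the confidence threshold are illustrative
# placeholders, not the actual rental-viewings system.
from dataclasses import dataclass

@dataclass
class BotReply:
    text: str
    confidence: float  # the bot's own estimate that its answer is meaningful

def ask_bot(message: str) -> BotReply:
    # Stand-in for the AI agent; a real system would call a language model here.
    if "viewing" in message.lower():
        return BotReply("We have viewings available on Saturday morning.", 0.9)
    return BotReply("I'm not sure I understood that.", 0.2)

def route_to_human(message: str) -> str:
    # Stand-in for handing the conversation to a human operator.
    return f"[Human operator takes over: {message!r}]"

def handle(message: str, threshold: float = 0.6) -> str:
    reply = ask_bot(message)
    # Answer automatically only when the bot is reasonably confident;
    # otherwise a person quietly steps in, as in the rentals example.
    return reply.text if reply.confidence >= threshold else route_to_human(message)

print(handle("Can I book a viewing of the flat on Saturday?"))
print(handle("My circumstances are complicated, can we discuss the lease terms?"))
```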

The AI engines could add some kind of token or tag within the content to show that it was not created by humans.  They could also save the generated text for subsequent authenticity checking.  It is, however, probable that scammers would find some way around this sort of protection.  There is also the possibility of detecting AI-generated text by analysing the language the bots produce.  It may be grammatically sound, but the nature of computer creation means that it tends to follow certain patterns, whereas human content would be expected to be more varied.  The GPT-2 Output Detector is one example tool that reports the probability that a piece of text was AI generated; rest assured that this article rated as 0.02% fake on it.  The GPTZero application was designed to spot computer-generated text submitted in academia but could equally well be used on suspect ‘business’ documents.
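
As a hedged sketch of how that sort of pattern analysis can work, the snippet below uses the Hugging Face transformers library to score text by how predictable a GPT-2 language model finds it (its perplexity).  It illustrates the general principle only; the detectors named above use their own models and scoring, and the sample sentence is invented.

```python
# A minimal sketch of pattern-based detection: score text by how predictable
# a GPT-2 language model finds it (its perplexity).  Machine-generated prose
# often scores lower (more predictable) than human writing.  This illustrates
# the general idea, not the scoring used by the tools named above.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    # Tokenise the text and have the model predict it against itself;
    # the exponential of the loss is the perplexity.
    enc = tokenizer(text, return_tensors="pt", truncation=True, max_length=1024)
    with torch.no_grad():
        out = model(enc.input_ids, labels=enc.input_ids)
    return float(torch.exp(out.loss))

# Invented sample text; lower values suggest more 'machine-like' writing.
sample = "This exclusive offer is available today only, so please confirm your account details now."
print(f"Perplexity: {perplexity(sample):.1f}")
```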

A related privacy issue comes from sites that pretend to be AI generators or AI detectors but which are primarily hosts for malware or gatherers of personal information.  When using any such site, preference should be given to those that allow text to be pasted in rather than requiring files to be uploaded.  No confidential data should be entered.  If a site requires registration then any emails and passwords should come from safe ‘throwaway’ accounts, not corporate or preferred personal accounts.

Some of the ‘red flags’ of phishing texts and bots are certainly changing but others still remain in force:

  • Is the spelling or grammar particularly poor?
  • Is the spelling or grammar too perfect?
  • Is the style of prose flat or repetitive?
  • Are there clues that the author really knows or cares about you or your interests?
  • Is the offer or deal ‘too good to be true’?
