AI Text Poses a New Risk from Scammers

Until very recently, the first sign to look for when recognising a fake or scam was poor grammar.  The evolution of AI that can create believable text from a few relevant prompts is a boon to the scammer.  ChatGPT and similar platforms provide automated text with good grammar and spelling.  This content could be used in phishing emails, fake websites, corporate investment portfolios and online chatbots, all attempting to relieve the target of financial or personal information.

From a credulity point of view, text that reads well with no mistakes could now itself be a sign of a scam or fake news.

With a little more work, the operator of the AI engine can add some human-created content to make the whole system even more believable.  One chatbot used to arrange viewings for rental properties has human operators step in when the bot cannot provide a meaningful response.  This example is not an out-and-out scam, but it aims to fool the customer into believing it is human rather than a bot, while using the AI agent to reduce the time real employees spend on each customer interaction.

The AI engines could embed some kind of token or tag within the content to show that it was not created by humans.  They could also save the generated text for subsequent authenticity checking.  It is, however, probable that scammers would find some way around this sort of protection.  There is also the possibility of detecting AI-generated text by analysing the language the bots create.  It may be grammatically good, but the nature of computer creation means that it will follow certain patterns, whereas human content would be expected to be more varied.  The GPT-2 Output Detector is one example tool that reports the probability of text being AI generated; rest assured that this article rated as only 0.02% fake.  The GPTZero application was designed to spot computer-generated text submitted in academia but could equally well be used on suspect ‘business’ documents.
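One of the patterns detectors look for is sometimes called ‘burstiness’: human writing tends to mix long and short sentences, while machine-generated text is often more uniform.  The sketch below is a deliberately crude illustration of that idea only; it is not how GPT-2 based detectors or GPTZero actually score text, and the function name and sample sentences are invented for the example.

```python
import re
import statistics

def burstiness(text: str) -> float:
    """Crude 'burstiness' proxy: variation in sentence length.

    Returns the standard deviation of sentence lengths (in words)
    normalised by the mean length.  Higher values suggest more
    varied, human-like pacing.  Illustrative heuristic only.
    """
    # Split on sentence-ending punctuation followed by whitespace.
    sentences = [s for s in re.split(r"(?<=[.!?])\s+", text.strip()) if s]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    return statistics.stdev(lengths) / statistics.mean(lengths)

uniform = "The cat sat down. The dog ran off. The sun came up."
varied = ("Stop. The quick brown fox jumped over the extremely "
          "lazy dog while nobody watched. Why?")
print(burstiness(uniform))  # 0.0 - every sentence is four words
print(burstiness(varied))   # well above zero - lengths vary widely
```

Real detectors combine many such signals (including model-based perplexity scores) and are still far from infallible, which is why the ‘red flag’ checks below remain worthwhile.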

A related privacy issue comes from sites that pretend to be AI generators or AI detectors but are primarily hosts for malware or gatherers of personal information.  When using any such site, preference should be given to those that allow text to be pasted in rather than files uploaded.  No confidential data should be entered.  If a site requires registration, any emails and passwords should come from safe ‘throwaway’ accounts, not corporate or preferred personal accounts.

Some of the ‘red flags’ of phishing texts and bots are certainly changing, but others remain in force:

  • Is the spelling or grammar particularly poor?
  • Is the spelling or grammar too perfect?
  • Is the style of prose flat or repetitive?
  • Is there any sign that the author really knows or cares about you or your interests?
  • Is the offer or deal ‘too good to be true’?
