AI In The Office
The growth of AI has been so fast that it is entering the computerised workplace mainstream more quickly than its impact can be assessed. Tasks can be simplified and experiences ‘improved’, but to what extent can these new media be trusted?
Consider the relatively harmless action of reading from a script. Kindus has already described how spoken text can be wholly created through software. A more subtle approach is to have a live reading but disguise the presence of the script. Software such as Eye Contact manipulates a video image so that the presenter appears to be looking directly into the camera rather than reading a script off-screen. Solutions such as this are designed for video feeds that will be edited before release rather than live streams. Adapting them to live video, however, would give a false impression in work presentations, interviews, and ‘sales’ pitches both legitimate and fraudulent.
Another innovation is the digital assistant: software that aims to take much of the drudgery out of computer work. Microsoft are launching Copilot as part of Office 365 and Google have launched Duet AI. The cost and functionality of Copilot depend on any existing subscription a user has with Microsoft; a BBC investigation in October 2023 quoted a price of £25 a month. Duet AI is currently available as a free trial. Both have the same general aims, and both depend on access to the existing documents in a user’s account to produce a satisfactory product.
Proposed uses include summarising email chains or on-line meetings, and the software can go so far as to suggest an appropriate response. A user could avoid an on-line meeting completely, relying on a final summary, or be present (in body if not in mind) and rely on the summary for later understanding. The meeting camera does not know whether you are engrossed in the discussion, watching a video or simply ignoring it.
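To make the summarising idea concrete, here is a toy sketch in plain Python. Products such as Copilot and Duet AI use large language models rather than anything this simple, so this is purely an illustration of the general shape of the task: split an email thread into sentences, score each sentence by how often its words recur across the thread, and keep the highest-scoring ones. The example thread and the `summarise` helper are invented for the demonstration.

```python
import re
from collections import Counter

def summarise(text: str, max_sentences: int = 2) -> str:
    """Toy extractive summary: keep the sentences whose words
    occur most often across the whole thread."""
    # Split on sentence-ending punctuation followed by whitespace.
    sentences = [s.strip() for s in re.split(r'(?<=[.!?])\s+', text) if s.strip()]
    # Word frequencies across the whole text, case-insensitive.
    freq = Counter(re.findall(r'\w+', text.lower()))
    # Rank sentences by the total frequency of the words they contain.
    ranked = sorted(
        sentences,
        key=lambda s: sum(freq[w] for w in re.findall(r'\w+', s.lower())),
        reverse=True,
    )
    top = ranked[:max_sentences]
    # Emit the chosen sentences in their original order for readability.
    return ' '.join(s for s in sentences if s in top)

thread = (
    "The Q3 report is due Friday. "
    "Can someone check the Q3 sales figures? "
    "I had pizza for lunch. "
    "The Q3 figures look low against the Q3 forecast."
)
print(summarise(thread))
```

The frequency heuristic keeps the two sentences about the Q3 figures and drops the off-topic lunch remark, which is the behaviour being sold: the assistant filters out what it judges insignificant, for better or worse.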
Consider this example from the Google Duet AI publicity: ‘Imagine you’re a financial analyst and you get an email at 5 PM from your boss asking for a presentation on Q3 performance by 8 AM tomorrow’. The page goes on to suggest that the AI will look through the documents on the user’s Google account and create a handy summary. Doubtless it can, but the quality of the information will depend on the documents available. It would be an even better use of everyone’s time if the manager ran the request themselves and skipped contacting their colleague; the respondent is doing no more than pressing buttons. Some would argue that removing many of these management tasks and job roles is no bad thing. There needs to be some human reasoning or justification of any results to give a task real meaning: AI information gathering sees words and numbers but may not be able to weigh the relevance underlying the data. If an AI system is being used to summarise, then many of the feeder tasks, such as email discussions and drawn-out on-line meetings, themselves drop in significance. Why put much effort into opinions when the software will filter them out?
These AI systems are relatively secure in that they rely on access to specific accounts for their data and its interpretation. If those accounts were compromised, the whole house of cards would fall down. The use of information collated by AI tools makes it harder to determine whether an email sender, or even a presentation attendee, is who they say they are. An insider threat might involve a user deliberately or unknowingly using these systems to harvest information: the criminal would not only get the information they want but have it packaged in a handy summary format.