The rise of generative artificial intelligence – which can quite convincingly produce allegedly factual “news stories” about any subject you like – means it’s more important than ever for people around the world to know which sources of information they can actually trust. That’s why the BBC has just launched ‘BBC Verify’, an extension of its existing fact-checking service, to further position the Corporation as an organisation whose global news output can be relied upon, perhaps more than any other. It’s a useful extension of Britain’s ‘soft power’. It’s also why it’s more important than ever for company spokespeople and opinion-formers to use such trusted channels to get their message across, because those channels are seen as bestowing a sort of ‘mark of quality’: after all, why would the BBC, FT or Bloomberg seek this person’s opinion if they weren’t a leading and trusted player in their market? To qualify, however, spokespeople need to look and sound authoritative, as well as knowing what points to make and how to defend themselves effectively in a crisis. For most people that doesn’t come without good-quality media training, and practice.
Sadly, AI also means it’s now far easier for anyone to quickly generate all sorts of absurd stories with just enough of an element of credibility that the more gullible will believe them – and sometimes the rest of us will be taken in too. Social media makes such content easy to disseminate; many people now get most of their news via TikTok. I was struck by a point made by a former UK government ‘head of disinformation monitoring’ at a recent PRWeek conference, who warned that the volume of this rogue content is growing by orders of magnitude. Beyond the usual Covid-style conspiracies (“drinking hot water can cure Covid”, says US Hospital Board), there’s the rise of convincing AI deepfake images, with genuinely realistic deepfake videos soon to follow. AI can already clone people’s voices, so you can essentially create fake videos of public figures saying whatever you want them to say (“It’s time to nuke the Russians”, says Trump/Biden/whoever).
As an example, a few weeks ago someone sitting at home thought it would be fun to use AI image-generating software to create a picture of the Pope wearing a cool puffer jacket instead of his usual Pontiff’s robes. It circulated around the Reddit forums for a while, then was shared more widely, fooling many who had no idea it wasn’t real. So as people become more suspicious of everything they see and read, make sure those trusted media outlets see you and your media representatives as the reliable source in your field.
More on our Media Training courses