Since bursting into our consciousness in 2022, ChatGPT and other generative AI models have disrupted many industries and led to heated speculation about the future of work.
While the adoption of AI in modern communications has been rapid, we are still at an early stage of this technology, with its uses – and misuses – still evolving. From increased efficiency to better customer service outcomes, the many benefits of incorporating generative AI into your business have been breathlessly boosted by the media.
But with internet content increasingly being generated by bots, and a once ‘social’ media becoming dominated by hate-spewing AI accounts arguing with each other, public scepticism of ChatGPT and its ilk is increasing. How can your business strike the right balance?
The allure of AI – and its pitfalls
It’s easy to be dazzled by the computing power and speed of a generative AI model, in business communications and in our personal lives alike. For those who have been in the industry a long time, the efficiency AI can provide in structuring content, ensuring messaging consistency, and generating ideas is revolutionary. It has transformed workflows, accelerated research and enhanced productivity across various sectors.
AI-powered tools can assist in drafting reports, generating creative content, and automating repetitive tasks, allowing businesses to allocate human expertise where it is most valuable. For marketing teams, AI can provide data-driven insights to refine messaging, personalise customer interactions and streamline campaigns.
However, the overuse of AI in communications can negatively affect tone and empathy, reduce creativity and allow misinformation to sneak in if content is not properly fact-checked. In stakeholder engagement, you risk alienating your audience if it’s clear you’re using computer-generated content instead of genuine human interaction. Neglecting nuanced cultural and social contexts may result in missteps that damage trust and credibility.
Since AI models are trained on existing material, plagiarism is also a concern when generating ideas with these tools. If you do use AI to come up with creative concepts, it’s critical you do your own research to see if it has been done elsewhere before. This helps prevent copyright infringements and ensures you don’t spend time and resources building out an idea that has already been used.
On a personal level, I know friends who sometimes use ChatGPT to help write texts to loved ones, as it can help put difficult feelings into more eloquent words. But when an authentic voice is replaced by the generic therapy-speak of an AI model, which doesn’t truly understand your relationships, you risk taking a position or making a point that doesn’t truly reflect your own views. Moreover, one study found that when people discover a letter from a friend was written by AI, 72% find it less personal.
For those of us who have spent time on the internet in recent years, especially LinkedIn, robot-created content has become easier to spot. AI tools tend to churn out a homogenous reflection of our language, and phoney posts stand in stark contrast to those written from the heart. Of course, as these models become more sophisticated, we will be less able to distinguish the bot from the not, which may further erode trust in online communication.
Striking the right balance
So, how can you balance AI and authenticity in communications to ensure you don’t lose connection with your human audience? In short, you need a hybrid approach: genuine human expertise and authentic language, combined with AI tools that enhance efficiency.
For example, you could use AI as a starting point when developing a comms strategy – particularly when you’re struggling with ideas generation and structure. Think press release templates or brainstorming sessions.
But beyond that, it’s important to remember that writing is thinking. That’s what makes a good communicator. If you want to genuinely engage with a topic, you should be taking the time to write about it, or risk losing nuance and that deeper understanding and connection with your audience.
In the editorial phase, AI can help turn weak phrasing into strong, improve clarity for your audience and spot errors that passed you by. However, relying on AI alone to check your work comes with risks, and you should always review its outputs for tone, contextual understanding and accuracy.
How we can help
At JBP Communications, which has been supporting strategic communications and engagement for some 40 years, we do not use AI as a substitute for genuinely engaging with the information, and people, in front of us. We critically evaluate our use of technology to ensure we use emerging tools for the right purposes, never allowing them to hinder authentic, human connections.
For help with your communications strategy as you navigate difficult decisions, get in touch with us today.