Adventures in Generative AI: A content creator’s guide to GPT4
- James Gill
- Mar 8, 2024
- 16 min read

The first thing to say is that this has been written by a human (which is why it will be littered with mistakes and have a subtext of neurotic imposter syndrome).
This article aims to capture five months of working with ChatGPT and make it a useful document for marketers and writers who may be starting to use the tool.
There’s a bunch of preamble, but the article will eventually:
Explain the problem that generative AI solves
Describe ChatGPT: what it does and how it works
Describe my learning journey with GPT
Discuss the role of the writer when you have GPT
Explain how to write GPT prompts
List and describe the limits of GPT
Discuss whether AI content will rank and whether chat will replace search
Discuss generative AI’s impact on the marketing business model.
My journey into the present
I am a career writer - 10 years as a music journalist followed by 10 as a content marketer.
I read a lot of science fiction and listen to a lot of sociology/psychology podcasts so I have been listening to lots of public intellectuals, writers and thinkers about AI. What it can and can’t do. Should we be worried? Whose jobs will it replace? It was exciting.
In December 2022, a friend told me that generative AI was finally available to the public… and it was free. I immediately signed up and did what we all did, which was to ask it a bunch of mad questions (including asking questions about myself. Spoiler: it doesn’t know anything).
As a content creator, its potential was obvious, and I started exploring it in a work context. More on that later.
The Problem
Once you’ve built Facebook, it can support an almost infinite number of accounts quickly, with little extra effort from customers or resources from the business.
But content is hard to do at scale. Let me rephrase that: it’s hard to do well at scale.
A few years back, it was announced that publishers such as Forbes and Reuters were using basic AI to turn financial and sports reports into news articles, removing the need for writers.
Then, a year or two ago, I checked out Articoolo, an app that promised to create ‘unique textual content in a flash’ based on a five-word brief. It was poor. It sounded like someone playing the BBC radio game show Just A Minute, where people have to fill a minute talking about a subject they know little about. Phew, I wasn’t out of a job yet.
Sites like Fiverr proved that even the wrong humans weren’t great at producing content. (For those who haven’t commissioned content on Fiverr, it tends to be short and thin).
So brands would have to continue to pay for professional content creators to write their case studies, blogs, FAQs, thought leadership, social media, guides and ‘about us’ content.
Of course, different articles or pages take different amounts of time to create, but here are some examples:
500-word news item based on a press release - 2 hours
1,200-word Q&A including an interview - 5 hours
1,200-word guide on sector trends - 7 hours
For many brands, content is comparatively expensive, and there’s often a need for lots of it. I once worked on a high street bank’s fintech app launch, and they needed thousands of pages to compete with Monzo and Revolut for a long list of search terms. Even 100 pages at £700 was going to cost them £70,000. That’s quite an outlay for a brand, and for most SMEs this is completely prohibitive.
The solution
Enter generative AI.
End of section.
Experiments with ChatGPT
It’s January 2023. My exploration of ChatGPT at work begins. I ask it: Please write an article on the role of a mental health first aider in the workplace. Its answer was about 350 words, accurate and syntactically and grammatically perfect. But it felt superficial, skimming the surface of the topic. It certainly wasn’t offering anything better than the top page of Google would return.
I asked it: Why is it important to have a mental health first aider? It listed eight reasons and included 30 words about each. It was better, but I could see a repetitive rhythm in the semantics and language. “A mental health first aider is important because…” over and again, followed by “Use a first aider to…”.
The response was still too short to be an article, so I asked it to expand on what it had said. It did so but the response remained quite superficial. It lacked details, specificity and examples. And now that it was writing more, I could see that the writing itself was narrow: all the sentences were short and simple and had a repetitive structure with no use of dashes, asides, brackets, caveats, addenda etc.
I sort of wanted the first response and the second response, as well as something else.
I could see that it needed more input. Its response was thin because my brief was thin. I’ve commissioned content for 20 years; surely I could commission GPT so that I get what I want.
So I wrote:
Please write an article on the role of a mental health first aider in the workplace.
Please list the various roles of a mental health first aider: what they are, why they’re important and how they serve both employees and organisations
Please write a section on how to become a mental health first aider - including resources
Please write a section on how to make the most of your workplace first aider, including examples and case studies as necessary.
Its response was still short, but it was finally delivering the depth and complexity that the article needed.
The final stage was to go ‘modular’ - or ‘incremental’. I combined the list of reasons to have a mental health first aider with the extended prompt, then asked it to write the first section, then the second, the third and so on. As I wrote each prompt, I added extras as the topic required and as they occurred to me: some sections needed examples, some had supplementary lists, and some needed data and insights.
I pulled each answer into a Google Doc and had a look at the sum of its parts. It was better: It was a lot longer, more thorough and detailed but the writing still felt… uninspired. I thought of the scene from The Fly where Seth/Jeff Goldblum teleports the steak and Veronica/Geena Davis says it tastes synthetic. The machine can mimic reality, but there’s still something human and organic missing. We can’t always describe it, but we sense it.
So I reran all of my prompts for each section again, but added in: Please vary the sentence length with some short sentences and some longer, some more complex sentences. Please use a rich mix of language with lots of synonyms and different ways to say the same thing to avoid repetition. Include relevant examples and illustrations of the points made to help explain concepts and ideas to the reader. Please avoid repetition of phrases that you have used elsewhere in this chat and work to use synonyms to keep the copy varied and not repetitive.
Wow. The results were great. It read like a human. Of course it did. By this time, we’d all seen the funny experiments where you ask it to write a piece about UK conveyancing law in the style of Jerry Seinfeld etc. It can do what you ask; you just have to ask. It was so simple, but it was important to remember that you get out what you put in. And this takes a little work.
I would add it still needed tweaking - but no more than when working with a new freelance writer who doesn’t quite get the brand’s tone of voice yet. Some of the metaphors and examples were a little cheesy, and its pursuit of synonyms instead of repetition led it on some verbose journeys. Nothing a quick Grammarly check couldn’t spot in 0.01 seconds.
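For the programmatically inclined, the modular workflow above - one prompt per section, each ending with the same style constraints - can be sketched in a few lines of Python. Everything here is illustrative: the function names, the wording of the suffix and the section titles are my own, not part of any library or official API.

```python
# Sketch: assemble modular prompts for each article section, appending
# shared style constraints so every section reads like a human wrote it.
# All names are hypothetical, for illustration only.

STYLE_SUFFIX = (
    "Please vary the sentence length, use a rich mix of language, "
    "include relevant examples, and avoid repeating phrases used "
    "elsewhere in this chat."
)

def build_section_prompts(topic, sections, extras=None):
    """Return one prompt per section, each ending with the style suffix."""
    extras = extras or {}
    prompts = []
    for section in sections:
        parts = [f"For an article on {topic}, please write a section on {section}."]
        if section in extras:  # per-section additions, e.g. "include resources"
            parts.append(extras[section])
        parts.append(STYLE_SUFFIX)
        prompts.append(" ".join(parts))
    return prompts

prompts = build_section_prompts(
    "the role of a mental health first aider in the workplace",
    ["why they're important", "how to become one", "making the most of yours"],
    extras={"how to become one": "Please include resources."},
)
```

Each prompt then gets pasted (or sent) to GPT one at a time, and the answers are stitched together in a doc - exactly the ‘modular’ approach described above.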
Speaking of time. As mentioned, an average 1,200-word article takes about five hours to create. My new article took less, but it was also longer - far more comprehensive.
The role of writer in the time of generative AI
A client said that one of their people (not a writer) had asked GPT to create an article for their website, but it was rubbish - repetitive and short. “It read like it was written by a robot.” My guess is that this person wrote a short prompt like my first attempt and was disappointed by what they got back.
Of course, I was relieved to hear this. More importantly, it made me realise that writers weren’t going to be replaced; we would just become GPT operators. This may not sound fun, but we’re not out of a job yet. It has the power to speed up the content creation process, but it can’t (yet) do everything well. I still had to be involved in the process, even if it was only to ask and receive.
I wanted to map exactly what GPT was and wasn’t doing in the content creation process.

GPT4 is changing day by day, and I’m just one person, but it seems to me that GPT’s main benefit to writers is that it speeds up laborious research and turns that research into new text.
While GPT may be able to do things I have allocated to the writer, my experience is that it can’t do them well, for example using data and insights (brand guidelines, keyword research) or using humour.
For example, GPT is great at coming up with ideas. Please create a list of potential thought leadership articles for a renewable energy company. But it will take an industry insider to know which one is the best and whether it needs to be modified and how. The result also needs to be checked and judged to see if it is a good execution of that idea.
GPT prompts
As I have used GPT more and more, I have learned how to get better responses faster.
I have created a document of boilerplate prompts, for example: Assuming everything in this chat is for one article, please write an introduction for this article. Focusing on the key points, summarising why and how the article can help.
There are lots of blogs and paid-for services offering ‘phrasebooks’ packed full of prompts for a variety of purposes, from social posts to product pages. The AIPRM Chrome extension also offers ready-made prompts for a variety of purposes.
Other useful additions to prompts include:
You are an expert copywriter
Please write for an audience of healthcare experts
Please use UK English.
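Those standing instructions can live in a tiny helper that prepends them to any task prompt. Again, this is a hypothetical sketch - the function name and wording are mine, and nothing here depends on a particular tool:

```python
# Sketch: prepend reusable context lines (persona, audience, locale)
# to any task prompt. Wording is illustrative, not a fixed API.

BOILERPLATE = [
    "You are an expert copywriter.",
    "Please write for an audience of healthcare experts.",
    "Please use UK English.",
]

def with_boilerplate(prompt, lines=BOILERPLATE):
    """Prefix a task prompt with standing instructions, one per line."""
    return "\n".join(lines + [prompt])

full = with_boilerplate("Please write a guide to workplace first aid.")
```

Keeping these lines in one place means every prompt in a project carries the same persona, audience and locale without retyping them.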
One of the most important things to consider while briefing is how specifically - how literally - GPT takes your words. Humans are often loose with language; we know that the wider context will iron out ambiguities and errors. Words mean a lot to GPT - it hears exactly what you say. If you ask for a guide, you will get a guide. Not advice and guidance - a guide.
Likewise, words like ‘what’, ‘why’ and ‘how’ are instructive - and therefore important to include if relevant. It won’t answer these questions if you don’t ask them directly.
My prompts now look like an incongruous combination of:
A comprehensive and detailed brief for an expert copywriter
An introduction to copywriting for someone who has never written or read a piece of content in their life.
A colleague at another agency has taken to showing GPT some content, then asking it to give him a prompt that will result in that type of content: Assuming I would like you to do this again, please provide a prompt that will help you deliver that. This has had mixed results.
The Limits of GPT
A rich reading experience
Simple prompts deliver simple results: short, clear, simple sentences in a neutral tone of voice.
You can ask GPT to ‘write in a fun tone for kids’ or ‘write for an audience of experts’ and this will change the language it uses.
You can also ask it to vary the sentence length and structure it uses. However, its use of various linguistic devices is limited. As mentioned, I have yet to see it use brackets, italics and clauses in between dashes like this:
The thing with ChatGPT - in fact with all generative AI - is that it will inevitably end in a Cyberdyne Systems-esque takeover.
Or addenda, like this:
The thing with ChatGPT is that it will inevitably end in a Cyberdyne Systems-esque takeover - or worse.
Or ellipsis, like this:
The thing with ChatGPT is that it will inevitably end in a Cyberdyne Systems-esque takeover - or will it…?
And now that I've written that, I have yet to see it ask questions - rhetorical or otherwise. Though this may be due to my prompts.
GPT can write jokes, but I have yet to see it use humour within an article, unprompted.
This isn’t going to get in the way of GPT delivering great answers to most queries, but it may reduce a reader’s enjoyment of long-form content and will certainly reduce the ways in which GPT can communicate ideas. This may be particularly true if brands exclusively use GPT-created content so that all of their output is simple sentences and no metaphors. Imagine how boring a newspaper or magazine would be if it was all written in the same, dull way.
Judging quality
A key challenge with GPT is that it can do anything, but is it any good? It might look good, but is it? Or is it just Goldblum’s steak? Would an expert in the subject matter see errors that you won’t?
Even if you say please write for an expert audience of astrophysicists, are you in a position to judge whether the output is apt for such an audience?
A perfect analogy is AI art. A creative director friend gave MidJourney a brief: create a picture of a posh pool party. From a distance, the results look perfect, especially the simpler items - for example, the houses. It seems that the AI knows what a house is. However, look closely and you see that it doesn’t really know what a person is. It sees patterns and commonality in other pictures and reproduces an approximation which, if you’re not concentrating, might pass as accurate.

And GPT content will be the same, only the mistakes are harder to see because symbolic language isn’t as nuanced as a visual representation of a human. So we must police AI content (just as we would human content) for errors.
Errors
Related to the above, I had heard that GPT would occasionally ‘hallucinate’ - especially if there was limited information on the topic. This might seem to make GPT immediately redundant - you can’t use a ‘writer’ who’s making stuff up! But writers can make mistakes too, and so can the sources they use, so we’re no further forward or back.
The key is to understand its limits, because it won’t always tell you when it doesn’t know something. Add ‘do not hallucinate’ to your prompt dictionary. I don’t have a scientific, foolproof method for this, but if the topic you’re writing about is obscure (a relative term), GPT will struggle to get it right.
For example, I asked it to tell me about my favourite album (Sasha & Digweed’s Northern Exposure), and it kept making mistakes about the first track on the album - mistakes that a glance at the Wikipedia page could correct.
SEO
There’s a ChatGPT plug-in called ‘SEO Outline’. You just put in your keyword, and it promises to tell you the sections for a ranking article. Great! And the output looks great - super comprehensive. But I reviewed the actual boilerplate prompt behind the plug-in, and of course, it doesn’t actually contain any SEO. GPT doesn’t have access to SEMrush or Ahrefs etc., so it’s a false promise. It’s a comprehensive outline, but it’s based on the AI algorithm - the connections between words and topics. It’s not even sweeping the top 10 ranking articles to see what it should include.
I understand that Jasper.AI, Surfer SEO and others promise to plug SEO tools into generative AI. I have yet to explore this.
The key message here is, again, GPT will say it can do anything for you, but it might just look like the answer, not really be the answer.
Other formats
So far, I’ve assumed that the article I’m writing involves only desk research. GPT can’t (yet) interview a subject and it can’t write you a case study unless you input the source material.
It might also be argued that GPT can’t create genuine thought leadership - not least because the idea needs to come from somewhere. And that’s the thing: having a million ideas is as useful as having none if you don’t know which one is the best.
The other element of thought leadership is that it is something that has not been said before: it is imaginative and predictional (not predictive). AI uses past information, so it can only predict that things will stay the same, not that they are going to change. Its output is often an average of its research, rather than a new idea spawned from it.
For every person who says, GPT can’t write you a brand mission statement, someone will say, yes it can, look! A friend was working on brand values at the agency he works at. He was complaining that what the ideation session had produced was generic and didn’t communicate anything unique. To prove the point, he asked GPT: Please create a set of brand values for a digital marketing agency. The output was identical to what the group had come up with.
Sure, GPT can write you just about anything, but will it be any good? It’s a powerful tool, but even an F1 car is useless if you need to drive across a field.
Will AI content rank?
So we have a tool that can help us make better content, faster. But what if Google decides that AI-generated content is ‘bad content’? What if Google uses AI content detector software to find and de-rank content?
Well, for a start, why would they? Not only does Google have its own generative AI, but the output of AI content can be great. And, I don’t need to tell you, there’s lots of low-quality, spammy nonsense content made by humans out there! Why not just use the existing Google algorithm with its E-E-A-T methodology (experience, expertise, authoritativeness and trustworthiness) to judge whether the content is any good?
This assumes that it’s a binary: AI content or human content. Nothing I have created is 100% GPT. It might range from large chunks to a couple of elements - all of which I subedit. So, is that AI-generated?
Also, AI content detectors aren’t 100% accurate. I have found that small sections pass as human, but in the wider context the same text is flagged as AI-generated. And by using prompts such as ‘please be colloquial’, you can trick the detectors.
Links also remain such a powerful Google signal that Google is still outsourcing much of its judgement of E-E-A-T to users - which is as it should be.
In summary, I hope that Google just uses its existing algorithms to judge whether something deserves to rank.
Constant improvement
In the five months I’ve been using GPT, barely a week goes by without me noticing some new change, for example, the little 3.5/GPT4 toggle at the top.
I don’t have quantitative data, but the output also feels better. This is no doubt partly to do with improved prompts, but it also feels like the AI is getting better at writing.
By the time you read this, there will no doubt be further updates.
Will Chatbots replace search?
If content creators are using GPT to create answers to search queries, won’t people just do that themselves and avoid Google completely?
Yes.
As adoption of GPT spreads into the consumer realm, people will no doubt ask it directly for the answers they once searched for.

As this happens, content that once drove lots of traffic - and delivered brand awareness - will be replaced (this has already been happening with zero-click Google results). This means that the opportunity for content to reach audiences in search is reduced.
What can brands do if search visibility declines?
So what will brands do instead? How will they leverage GPT to reach audiences? What will they do with budget allocated to creating search content?
My prediction - and indeed my hope - is that B2B brands will stop creating simple search articles that answer queries, and focus on thought leadership in its many forms. B2C brands may invest more in entertaining content to win attention.
Red Bull knows that no one cares about the drink itself, what’s in it, how it tastes, what it does. As such there’s little point in creating content about the product.
Instead, their content strategy has long been to reach their audience of young men and boys by creating content about ‘cool s***’: extreme sports, plane races and a guy parachuting to Earth from space.

If no one is discovering your brand through product or even informational search, you might use the Red Bull approach and simply focus on more of what your audience want - regardless of how relevant it is to your product.
Where will the information come from?
Let’s pull on that thread. So brands stop producing new content because GPT is making it redundant. GPT will be able to use legacy content to answer queries related to the past. But what happens when GPT is asked to address something recent or current? Before 2022, any number of organisations might have created content about a new topic or query, but now they’ve stopped. So where will GPT get its information from? Will it say I don't know or just make it up?
Is this in fact a remaining opportunity for brands to reach audiences through search? If brands can react to emerging search trends they might have a window in which to reach audiences through ‘traditional’ search.
Watching and reacting to search trends has long been a strategy for some brands - especially those who want to be known as ahead-of-the-curve, authoritative, market leaders.
ChatGPT, digital marketing and the business model
GPT is reducing the time it takes to create certain types of content. Will agencies pass on these savings to their clients - if only by producing more of it?
There might be a moment before clients understand that the cost of content creation has gone down, but disruptive ‘AI content services’ will no doubt enter the market and undercut traditional agencies’ fees. Sooner or later, this will drive down what agencies can charge.
This assumes that clients are happy to use AI-generated content. While there may be scepticism at first, as we normalise AI-generated content, more and more brands will leverage it and enjoy the savings. The key will be transparency: letting clients know when and how they’re using AI and when they’re not. And of course, clients will be able to choose.
Smart agencies will adapt ahead of this curve and offer their clients more value by producing more and/or pivoting to the types of content that AI cannot generate: case studies, about us, our mission, team interviews, testimonials, surveys, podcasts, white papers and other thought leadership content.
Will content creation go in-house?
Will clients even outsource their content any more?
To create content in-house, brands will still need someone who can do all of the parts of the process above that GPT doesn’t do:
The idea
The brief (the GPT prompt)
(GPT does the research and the writing)
Quality checking.
Many brands will be happy for a marketing person to do this. Especially as the marketing person can judge as well as a writer (if not better) which ideas are strong. They can probably use the prompt libraries that exist online - or even ask GPT to create the prompts. The only thing missing will be the fine-tuning of language and the flair of a great turn of phrase - the difference between good and OK. In a competitive market, that’s an important difference, and many brands want better than just OK.
Commitment to excellence
Some brands currently use non-experts and non-writers to create thin, generic content that drives no engagement. Their use of GPT will be no different.
While their GPT content might be better than stuff created by non-experts, it will put them no further forward because other brands who don’t value quality content will be doing the same.
In conclusion
Generative AI is a powerful tool that marketers should not ignore.
Generative AI will not replace writers*; it will become just another tool they use. This will mean that productivity goes up for certain types of content.
Generative AI still needs people who have the skills to create high-quality content with it. Being awesome at writing prompts will be the key skill in creating that high-quality content.
A modular approach to long-form content will produce the best results.
Brands will be able to invest in more creative, unique content - in fact, they will have to as chat replaces search. This may also mean that paid promotion and PR become more important than ever as organic reach through search is reduced.
*Famous last words. I may be on a welding course with my writer pals by year’s end…


