Artificial intelligence can write a novel. Can it autocomplete my tasks?

ELIZA managed to fool many people, and the technology has improved since then – but the recent leaps and bounds have surprised Walsh, who has worked in the field for 40 years.

The state of affairs

One of the most well-known programs for text generation is ChatGPT. “It’s autocomplete, like the one on your phone, but on steroids,” he says.

For auto-completion, “they took a dictionary of words and their frequency and trained it on what was the most likely letter and word that would complete what you typed. ChatGPT and GPT-3 are very similar, but on a much larger scale – and they didn’t take the dictionary, they took the whole internet. So not only can it complete the word, it can complete the sentence, the paragraph.”
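Walsh’s description of autocomplete – pick the word that most often follows what you just typed – can be sketched in a few lines of Python. This is a toy bigram model on a made-up corpus, an illustration of the general idea only; ChatGPT’s models are vastly larger and predict from far more context than a single word.

```python
# Toy frequency-based autocomplete: count which word most often
# follows each word in a small corpus, then suggest it.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept on the mat".split()

# Build a bigram table: how often each word follows each other word.
following = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1

def complete(word):
    """Suggest the word most frequently seen after `word`, or None."""
    if word not in following:
        return None
    return following[word].most_common(1)[0][0]

print(complete("on"))  # suggests "the", the only word seen after "on"
```

Swap the dictionary for the whole internet and predict letters, words, and whole paragraphs instead of one word, and you have the “on steroids” version Walsh describes.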

Done right, using programs that can produce natural-sounding text should mean we can skip the boring parts of life. “We have always used tools to empower ourselves. What’s new is that instead of tools that can strengthen our muscles, we now have tools that can strengthen our minds,” says Walsh.

How is it already being used?

When I told a colleague about this article, his first question was: “But won’t students use it to cheat?” The short answer is yes, of course. The longer one: they already are. With just one prompt, these programs can produce compelling essays, potentially making last-minute all-nighters a thing of the past. There are programs designed to catch this kind of cheating by spotting regularities in word choice and rhythm. “The problem here is that it’s always an arms race,” says Walsh. “And you’re never going to win the arms race in the end.” Once a checker is developed, the teams working on the text generators use it to improve their output. And so it goes, on and on.

But maybe this is an opportunity to improve things. “It’s about thinking carefully. Should we stop asking people to write essays that could be written convincingly enough by computers? Or should we set something more meaningful and test their intellectual abilities more rigorously?”

Are the machines coming for our jobs?

In December 2022, business consultant Danny Richman tweeted about using GPT-3 to “help a young bloke with poor literacy skills starting a landscaping business.” The man would type in a simple string of text and the program would convert it into formal business language.

“This is a wonderful example of how ChatGPT can be used in a positive way. It doesn’t put anyone out of a job, it creates jobs,” says Walsh.

However, if we track the likely course of these programs, some jobs will almost certainly be lost. Just as work for typists dried up once people started writing their own letters, as AI text generation improves and chatbots become more sophisticated, roles that rely on repetitive text-based tasks are likely to disappear.

As AI gets better and better at taking on repetitive tasks, we may need to rethink what “work” looks like. Credit: Matt Davidson (digital editing)/iStock

For many people, this is a warning light on the horizon. Chatbots are already used in some customer-facing roles – something you’ll know all too well if you’ve ever been stuck in the loop of polite unhelpfulness that arises when trying to solve a basic problem online outside business hours. As the technology improves, they will be used more and more often. Writers of all kinds – copywriters, creative writers, even journalists – will likely see these programs creep into their fields, at least for tasks that are repetitive, formulaic and don’t require human intuition.

But maybe that’s a good thing. In theory, outsourcing work that’s boring and repetitive – work that Walsh argues should never have been done by humans in the first place – should be cause for celebration; it should mean people are free to do other things in life, things they actually want to do.


There is a difficult road ahead, and it will require some shifts in perspective. “There’s no shortage of work, maybe a shortage of jobs,” he says. Caring for children and the elderly are key examples Walsh cites. “There are many caring jobs that we humans don’t pay for that we could afford if we had income generated by machines.”

He also highlights the market for artisan goods – that people will pay more for “things touched by human hands”. Mass-produced goods are cheaper, but “we pay extra for handmade bread or cheese”.

Okay, let’s offload some drudgery to the AI

Angus Thomson, one of our journalists, asked ChatGPT to write him a cover letter.

“I’m drawing toward the end of my 12-month internship…which means I have to prove to senior editors that I deserve a full-time contract. I asked ChatGPT to write a persuasive letter convincing Herald editor Bevan Shields to hire me as a full-time journalist, with examples of my published work,” he writes.

“The bot seems to have a good grasp of cover letter conventions and describes me as ‘a passionate and dedicated writer with a strong track record of creating quality, engaging content.’ But when it comes to my past experience, it plays a little loose with the truth. It boasts (on my behalf) of work for the Herald and ABC News and an internship at the Guardian. That all sounds great, except only one of them is true.

“Whether the AI software incorrectly scraped information off LinkedIn or is lying through its virtual teeth, it seems it has a long way to go before it learns a key pillar of good journalism: telling the truth.”

With each update and improvement, it becomes harder to tell what is written by AI and what is written by humans. Credit: Matt Davidson (digital editing)/AdobeStock

To be honest…

How these programs handle the truth is one of the greatest dangers of AI-generated text. “The fact that these programs just make things up and do it in a very convincing way is problematic,” says Walsh. It’s fun when it’s a short story or a tongue-in-cheek attempt at a cover letter, but in the wrong hands it can do a lot of damage.

These programs work quickly and rely on prompts or questions. As a tool, they can be used to personalize phishing scams to increase the likelihood that someone will click on a link or disclose information, Walsh explains.

There is also the problem of fake news. Filling websites with lies is easy with these programs. “If you have a favorite conspiracy theory, you can write some very plausible content and post a lot of tweets to support and spread your conspiracy theory.”

Is it better at being us than we are?

Last week I saw a post from Australian Twitter user Jamie Moffatt: “[the above] tweet was generated by training an AI on my tweets and having it suggest new ones in my style. jAImie is freaking funnier than me and I don’t know what to make of it.” They had used a tool that generated 10 different options – and it’s difficult to see how the one they chose differs from their regular posts.


Early in our conversation, Walsh brought up the idea of using AI to write complaint letters. He then predicted that eventually there would be so many of these letters that a program would be developed on the other side to process them. In the end, computers would simply communicate with other computers, without needing the formal human language we originally programmed them to use. Where does that leave us? Standing politely on the sidelines while the 1s and 0s argue about why the postman always jams magazines into the mail slot, I suppose.

Our novelists are probably safe. While an AI could likely spit out a very entertaining, formulaic story in time, Walsh believes that, as with art, it won’t be able to dig into deeper issues. “A machine will never have these human experiences and will never speak to us the way a fellow human being speaks to us in a novel.”

It seems strange that we have this technology and use it to generate things like creative texts or tweets, though maybe that’s just curiosity. In an ideal scenario – with many caveats – we would use these tools to skip the boring parts and give ourselves more time for the things we care about and want to do. But we always have to keep the big picture in mind: the natural endpoint.

“As with all technology, especially artificial intelligence, there are positives and negatives,” says Walsh. “There are useful things we can do with these technologies. But we should also be aware of the risks involved.”

With Helen Pitt and Angus Thomson


Jaclyn Diaz

