Generative AI, friend or foe?

As someone who writes constantly on the web, I am among the first to try anything that helps me improve my writing.

When Grammarly first came out, I was thrilled. English is not my first language, and even though I am fairly proficient with grammar, I still make mistakes from time to time. I am a terrible speller. And for that, Grammarly was a godsend.

So far, so good. A friend to the writer. Helping writers write better.

But now we have this thing called generative AI. There’s a more industry-specific term for it: synthetic media.

From Wikipedia:

“A catch-all term for the artificial production, manipulation, and modification of data and media by automated means, especially through artificial intelligence algorithms, such as to mislead people or change an original meaning.”

Let’s talk about artificial production. What exactly is that?

Grammarly tells you what’s wrong with what you have written. Now imagine a tool that writes for you, and you only have to tell it whether what it wrote makes sense.

Bye-bye, writer’s block.

To some extent, that’s what artificial production is. And there’s been a lot of that happening recently.

It feels like software is starting to create content on its own. And it’s not limited to writing: DALL-E 2 generates images, Copy.ai generates marketing copy, and LEX and Moonbeam help you write better or generate ideas you can build on.

In the next few months, there will be a lot of machine-generated content on social media and in the blogosphere.

It might not all be great. But then, human writers will make it better. The question is, is this a good thing or a bad thing?

On the one hand, it’s great that we have tools to help us create content faster. On the other hand, it’s worrisome that the line between human-generated and machine-generated content is getting blurrier and blurrier.

It’s also worrying how quickly AI is getting better. For example, you can provide Moonbeam with an outline, and it will write the entire blog post.

But here’s the thing. My concern is not with the content AI is generating. It’s with how these models are being developed.

For AI to help you write, given some context, it first needs to learn what to write about. To make that happen, GPT-3 was fed a huge amount of writing from the internet. Real humans created that writing, with original or borrowed ideas. All of that data went into the model, and we can now leverage those ideas to develop our own content.
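To make the idea concrete, here is a minimal sketch of what “artificial production” looks like in code, using OpenAI’s GPT-3 completion API (the openai Python library as it existed around the time of this post). The outline, model name, and prompt are illustrative assumptions on my part, not what tools like Moonbeam actually do under the hood.

```python
# A minimal sketch of "artificial production": asking GPT-3 to draft
# a post from an outline. Illustrative only; the outline and prompt
# are assumptions, not any product's real pipeline.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder; supply your own key

outline = """
- Generative AI tools are arriving fast
- They were trained on human writing scraped from the web
- Is using them closer to plagiarism or to research?
"""

response = openai.Completion.create(
    model="text-davinci-003",  # a GPT-3 family model
    prompt="Write a short blog post from this outline:\n" + outline,
    max_tokens=400,
    temperature=0.7,
)

# The writer's job shifts from drafting to judging:
# does this output actually make sense?
print(response.choices[0].text)
```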

So does that mean we steal every time we use AI? Or, as they say, if you steal from one, it’s plagiarism, but if you steal from many, it’s research.

I don’t know. I just used Grammarly and Wordtune for this post. Full disclosure.