OK Google! write this blog post for me

by Peter Hyun

How should I start this blog post? Maybe I don’t need to start it at all…

“Artificial intelligence today is more complex than ever before, and there’s a growing number of companies and research groups working on ways to make it more intelligent. One such company is called the Brain in a Bottle project, which has been developing the technology for the past decade. The project is aimed at bringing artificial intelligence to the battlefield. The Brain in a Bottle project is not a government-funded project. Rather, it’s the brainchild of a group of scientists who believe that the use of artificial intelligence is the future.” – GPT-2 July 14, 2021

The above paragraph was created by a text-generating artificial-intelligence model, GPT-2. Full disclosure: Brain in a Bottle, at least in the form described above, does not exist. The model was created by OpenAI and released in 2019, and Max Woolf created a Google Colaboratory page to run the Python code needed to use it. I simply copied Max Woolf's code, fed the model the prefix "Artificial intelligence today", and it did the rest. I didn't need to learn about the inner workings of the GPT-2 model. I didn't need to concern myself with training the model on pages and pages of text in order to churn out cohesive, understandable English. I certainly didn't worry about questions like which algorithm to use or how to code this in Python. And the model didn't need to know my intent. It wasn't concerned with the meaning of the words or how they would be perceived by others. I gave it a prompt, asked for 100 words, and that's what it produced. To be honest, it was too easy.

OpenAI hypothesizes this tool could be used for malicious purposes, including to:

  • Generate misleading news articles
  • Impersonate others online
  • Automate the production of abusive or faked content to post on social media
  • Automate the production of spam/phishing content

I definitely find these troubling, but even as I sit over my laptop trying to finish this post, I wonder: is Natural Language Generation really necessary? Anyword offers a service that will "Generate effective copy for ads, emails, landing pages, and content," and Rytr and Grammarly likewise offer to aid, edit, or write content for us. When GPT-2 builds a sentence by calculating the next best word, ranked by frequency of use across its gigabytes of sample text, how does that compare to what I do to formulate sentences that bear some meaning to me? Words with meaning that I wish to share with the person who then reads my words, my thoughts, and my feelings? At the end of the day, what does it matter what SEO value is assigned to this blog post if I simply used my own algorithm to generate keywords to artificially boost a ranking done by another algorithm? Computer-on-Computer crime, as it were.

(By the way, Grammarly scored my last paragraph a 92, and GPT-2’s paragraph a 96. Not that I am keeping track or anything.)
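The next-best-word guessing game described above can be sketched with a toy bigram model. This is far cruder than GPT-2, which is a transformer that samples from learned probabilities over subword tokens rather than counting raw word frequencies, but the spirit is the same: look at the last word, pick its most common follower, repeat. Everything below (the tiny corpus included) is an illustrative assumption, not Max Woolf's or OpenAI's code.

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count how often each word follows each other word."""
    words = text.lower().split()
    follows = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1
    return follows

def generate(follows, start, n_words):
    """Greedily extend the text by always picking the most frequent next word."""
    out = [start]
    for _ in range(n_words - 1):
        candidates = follows.get(out[-1])
        if not candidates:  # dead end: no word ever followed this one
            break
        out.append(candidates.most_common(1)[0][0])
    return " ".join(out)

# A deliberately tiny "training set"
corpus = "the cat sat on the mat and the cat ran"
model = train_bigrams(corpus)
print(generate(model, "the", 4))  # → "the cat sat on"
```

A real language model replaces the greedy `most_common(1)` pick with weighted sampling over a learned distribution, which is why GPT-2's output varies from run to run while this toy always produces the same string for a given prefix.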

I’m not trying to sound like a High-Tech Heretic à la Clifford Stoll, but in a world that looks to optimize my entire life and squeeze every ounce of productivity out of me, maybe there is something worthwhile in having to sit down, remove all distractions, and come up with an idea about what it is we’re asking of AI, and what we’re offering to AI in exchange. These words are mine, and I hope that I have effectively shared their meaning with you, poor Grammarly score be damned.