Augmented Blogging - The Art of Prompting

2023-04-09 Personal Augmented-by-AI Innovation Machine-Learning Nerdy Thoughts

This is a landmark blog post. I just introduced a tag called Augmented-by-AI. And this post is tagged with it.

This is going to be the new normal. Let me explain …

First things first: This blog post was not written by ChatGPT.

It is Sunday morning, 6am, and I am sitting here in my living room in my PJs with a cup of coffee, typing this. Believe me …

By now we have all used ChatGPT a couple of times for various tasks (telling jokes, writing essays and poems, writing job descriptions, having a fireside chat, …).

The reactions to the experience and the results range from …

OMG … this is amazing. Let’s use it for everything (e.g. let’s ask it to develop a cure for cancer and a plan for world peace (or at least a plan for peace in Ukraine))

… to …

OMG … this is horrible and dangerous and we have to shut it down (or at least stop using it for 6 months) until we can make sure that the AIs are not going to replace us as the apex predator in our eco-system (on earth).

And there are lots of articles that (rightfully) add a lot of good shades of grey to both of these (extreme) statements (I leave it as an exercise to the reader to ask ChatGPT about it). The truth is probably (as always) somewhere in the middle.

What ChatGPT cannot do (right now) is tell you what I am thinking about all of this (partly because I am not famous enough and partly because the data that was used to train it (obviously) does not include all of the discussions that have happened in the last 6 months).

Here are a couple of my thoughts …

  • I do not think that AIs will replace us anytime soon. But … I am convinced that there is value in using AIs to get to better outcomes. Basically, combine the best of a human being with the best of an AI. Means … I do not foresee this to be(come) an either-or situation, but more of a symbiosis, where AIs will augment humans and turn us into super-humans (seeing, hearing, reading, writing, …). For instance, blogging will move from writing down answers to writing down really good, thoughtful prompts. Blogging will be(come) the Art of Prompting. The answers will get (re)generated frequently (initially once a month; later any time somebody looks at the post). I would call this Augmented-Blogging. And you can see why this will work well: First, we are far away from AIs that can ask good questions, and second, the questions age much more slowly than the answers, and writing up the (good) answers takes a lot of time that is probably better spent thinking about good questions/prompts. Very soon we will get worksheets that guide an AI through a series of prompts to articulate the result of the discussion (a rough sketch of what I mean follows this list).
  • And … it is not only going to be Augmented-Blogging. It is going to be Augmented-Everything. Which is not really new, because we have been doing Augmented-Driving and Augmented-Cooking (my dad is using YouTube for cooking; he is 85) and Augmented-Learning and Augmented-??? … for a while now.
  • But … there are also problems with using AIs, or ChatGPT in particular. One problem that needs solving is transparency and trust. One thing that would go a long way with me is making sure we continue to understand the source of a statement. Who said this? Was it Roland? Or Churchill? Or an AI? In that context, I think we need water-marking (as a law), an explain feature, and a bias-filter: content that is generated by an AI gets (water-)marked as AI-generated; you can ask the AI to explain how it came to the conclusion that this is a good answer; maybe it even puts a confidence interval on the quality of the answer; and lastly it tells you if there is a bias in the generated answer and what that bias is. THAT would be fantastic and would address a good part of the concerns that I have about using AIs (the second sketch after this list shows what such a marking could look like).
  • And … I think the water-marking is also needed for another reason. Over the next couple of years we want to keep training models to get better and better, but we run the risk that more and more of the content we use to train the models was generated by previous versions of the model. And that content obviously lacks originality and creativity. ChatGPT can rehash/rearrange/reformat existing content/ideas in a way that answers the question, but it cannot come up with a new idea. Means we need to find a way to exclude AI-generated content from training the next version of the model. This is not only true for language models. It is also true for image processing and other kinds of machine learning.
  • Last but not least, I think I need to make peace with the fact that going forward I will not blog for other people to read, but to provide training material for the next, better model that will be trained 6 months from now. I hope that my blog posts will help that model to be better than the last one. I also think that I will continue to write blog posts not because I think that people will read them, but because it is my way to reflect on a topic (like this one). Almost like journalling. Or the good old quote attributed to Albert Einstein, that the best way to learn about relativity is to give a lecture about it (means you are not developing the lecture for the students; you are developing it for yourself and using it as a tool to think something through).
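
To make the Augmented-Blogging idea a bit more concrete, here is a minimal sketch of what such a prompt worksheet could look like. It is only an illustration: the `PromptWorksheet` class, the `regenerate_post` function, and the `call_llm` stub are hypothetical names of mine, and the model call is faked so the snippet runs without any external service.

```python
# Hypothetical sketch: a blog post stored as prompts, with answers
# regenerated on demand by whatever model happens to be current.
from dataclasses import dataclass
from datetime import date


@dataclass
class PromptWorksheet:
    """A blog post expressed as a series of prompts instead of answers."""
    title: str
    prompts: list[str]


def call_llm(prompt: str) -> str:
    """Stand-in for a call to a language-model API of your choice."""
    # A real setup would send the prompt to a model; here we just echo
    # a placeholder so the sketch runs without any dependencies.
    return f"[model-generated answer to: {prompt}]"


def regenerate_post(worksheet: PromptWorksheet) -> str:
    """Walk the worksheet and assemble a freshly generated post."""
    sections = [f"# {worksheet.title} (regenerated {date.today()})"]
    for prompt in worksheet.prompts:
        sections.append(f"## {prompt}\n\n{call_llm(prompt)}")
    return "\n\n".join(sections)


if __name__ == "__main__":
    worksheet = PromptWorksheet(
        title="Augmented Blogging - The Art of Prompting",
        prompts=[
            "Will AIs replace human writers, or augment them?",
            "Why do good questions age more slowly than good answers?",
        ],
    )
    print(regenerate_post(worksheet))
```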

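And here is a second, equally hypothetical sketch of the water-marking idea: every statement carries provenance metadata (who or what produced it, an explanation, a confidence score, any known bias), and a toy filter uses that marker to keep AI-generated content out of the next training set. The field names and the `filter_training_corpus` helper are assumptions of mine, not an existing standard or API.

```python
# Hypothetical sketch: provenance metadata attached to generated content,
# plus a toy filter that keeps AI-marked content out of a training corpus.
from dataclasses import dataclass


@dataclass
class ProvenanceMark:
    """Who/what produced a piece of content, and how much to trust it."""
    author: str              # e.g. "Roland", "Churchill", or a model name
    ai_generated: bool       # the (water-)mark itself
    explanation: str = ""    # how the model arrived at the answer
    confidence: float = 1.0  # rough quality estimate, 0.0 to 1.0
    known_bias: str = ""     # any bias detected in the answer


@dataclass
class Statement:
    text: str
    mark: ProvenanceMark


def filter_training_corpus(corpus: list[Statement]) -> list[Statement]:
    """Drop AI-generated statements before training the next model."""
    return [s for s in corpus if not s.mark.ai_generated]


if __name__ == "__main__":
    corpus = [
        Statement("A human-written blog post.",
                  ProvenanceMark(author="Roland", ai_generated=False)),
        Statement("A fluent but rehashed answer.",
                  ProvenanceMark(author="some-model", ai_generated=True,
                                 explanation="pattern-matched prior text",
                                 confidence=0.6)),
    ]
    kept = filter_training_corpus(corpus)
    print(f"kept {len(kept)} of {len(corpus)} statements for training")
```
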
And here is what ChatGPT had to say about the topic …