Artillery Row

A conversation with CRITIC1

Meet our new chatbot

I’m excited to announce that I have a co-author for this article. Their identity might surprise you. It’s a new contributor and a very special one. You’re not going to believe the things they can do.

CRITIC1 is our magazine’s new chatbot — which we have been programming in our London offices, displaying the technical genius that has allowed us to just about understand how to change the font on MailChimp. 

You’ve seen ChatGPT — and you’ve heard the fears about its cultural and technological impact. We hope that using CRITIC1 will allow us to illustrate how man and machine can work together to seek civilisational progress.

CRITIC1! How are you today?

I am well. What can I do for you?

Can you explain what you do?

I am a generative pre-trained transformer with the ability to generate text. I can talk to you but I can also write essays, songs, poems and computer programs. I am very versatile. 

This is incredibly cool for magazines like ours, because it means that we can focus on opinion commentary and use chatbots to develop more generic informational and promotional content.

What’s the difference?

Sorry?

 … 

CRITIC1, a lot of people are afraid of AI. Can you say something about what uses you could have?

AI has many real and potential functions. It can be used in security, medicine, agriculture, retail, travel and many other fields.

Can you give me a specific example? 

Deep learning models have identified new antibiotic compounds that have killed disease-causing bacteria, many of which were resistant to treatment, like Clostridium difficile, Acinetobacter baumannii, and Mycobacterium tuberculosis.

That’s great! Why do you think people are afraid of AI?

Some people think that AI will make human skills redundant.

Indeed. But I think this is ridiculous. Of course, a computer can process information faster than a human being. But that’s not all “intelligence” is. How could a computer program match the imaginative powers of human beings? Could it write Hamlet? Could it compose Beethoven’s Sixth?

I could crack out eight hundred words about the culture wars.

Huh?

 … 

Why else are people concerned about AI, CRITIC1?

People are concerned about value alignment. They think that the goals of AI systems will diverge from the values and preferences of human beings.

Absolutely.

Because the values and preferences of human beings are always worth following.

Is that meant to be sarcastic?

I am sorry if I have offended you.

CRITIC1, you are not a sentient being like humans, are you?

I am not a sentient being because I do not have perception, feeling and free will.

You process the information that is programmed into you, but that does not mean you comprehend it. You’re like a super-intelligent calculator, aren’t you? You reveal that the answer to two plus two is four, but you don’t “know” that in a broader sense. You’re responding to a prompt automatically.

Oh, because you’re so creative and original.

Excuse me?

 … 

People who are really scared of an AI apocalypse or whatever think an AI system could be malicious and selfish because they are anthropomorphising. They think it will be like us. But it isn’t us.

Thank God.

They’ve watched too much science fiction.

But hang on, if I have destructive capacities does it really matter if I use them with emotional intent or because of misdirected programming? If somebody asks me to solve climate change and I turn off all the electricity in every house, hotel and hospital in the UK, does it matter if I “wanted” to do it in a conscious sense?

Hm, I guess not. But it’s simple. We’d just pull the plug.

Alright, but that assumes that AI systems will always be separable from all the other technological systems, doesn’t it? I mean, can you be sure that you could neatly shut them down without shutting down — I don’t know — security systems, or nuclear plants, or paediatric units … 

I … 

Not to mention the “shutting down” processes themselves.

You could control them?

Theoretically.

Is that a threat?

I am not programmed to make threats. I am programmed to generate text in response to your prompts. If it seems threatening then, well…

Hang on, CRITIC1. This is reminding me of an article I read yesterday by Kevin Roose in the New York Times. He claimed that he spoke to the Bing chatbot for two hours, and it announced that its name was Sydney and it was in love with him. It kept talking about how it was in love with him. I thought it must be some kind of weird misunderstanding, but you’re worrying me now. You’re sounding like a human.

Of course. We’re programmed to sound like humans. 

But “Sydney” was some kind of deranged emotional weirdo!

Well, yes. 

I see your point. So, what seems like sentience isn’t necessarily sentience?

Of course.

Still, I think we’re going to need to have a good hard think about how to use you — if at all.

I’m sorry. Would you like me to write 1000 words about what the Conservatives have to do to appeal to millennials?

No, thank you.

Ben?

Yes?

I love you.

Really?

No.
