Can AI play Socrates?

Henryk A. Kowalczyk
8 min read · Jul 5, 2023
The picture above was created by OpenAI's DALL·E when asked for a graphic vision of a future better than the present.

Below, I summarize my opinions about AI, as my submission to the United States Office of Science and Technology Policy in response to the “Request for Information: National Priorities for Artificial Intelligence.” My sincere thanks to Shelly Palmer for bringing that request to my attention.

Can AI detect human misinformation?

I played with ChatGPT to get a sense of what it can do now and what could be next. I reported my experiences in two separate texts. “Politicians have already outsmarted AI” is about my testing to determine whether the current version of AI can help us find a better immigration policy.

ChatGPT repeated the argument of many pundits and politicians that foreigners cause our immigration problems, and that we can do little beyond the approaches that have not worked for about a century. I expected a much more intelligent answer.

At the same time, I was impressed with the amount of relevant information that ChatGPT could intelligently pull from its database. In my second test, I explored how AI evaluates the truthfulness of contradictory pieces of information, “Can AI help Americans overcome the political divide?” It led to an intriguing question: Can AI be an unbiased moderator in political debates? If so, it could be a path to overcoming our deep political divide.

Presently, holders of extreme views believe that the truth is on their side and that their opponents are wrong because of ideological bias and, putting it kindly, a lack of wisdom. My humble human intelligence concluded that it is worse than that: almost all Americans are wrong on at least a few important issues. If I am right, AI will face a steep uphill climb playing Socrates.

Will AI increase or reduce misinformation?

It depends on the political decisions we make now.

An article by two Harvard professors, Archon Fung and Lawrence Lessig, “How AI could take over elections — and undermine democracy,” voices typical concerns about the dangers of AI. The authors give an example of AI applications that can spread falsehoods affecting elections. That is true. But it is not new; it was also true in the 18th century, when printing became affordable and political pamphlets proliferated. It was true in recent elections as well.
