I am somewhat irked, but this is not unusual.
ChatGPT (CGPT) has now entered the realm of everyday conversation and some Christians are already beating the familiar path toward hysterical rejection of this technology based on early results. This seems unwise without first understanding how the technology works or how it might be useful.
I haven’t seen much written yet about potential ethical uses of CGPT for Christians, and especially pastors, but I have seen chastisements against using it to write your sermon (yeah, don’t do that) and warnings that the technology somehow “makes fun of Jesus” (um…). What we communicate with such warnings is that this tech is dangerous and that we should avoid it.
I propose that we instead take a cautious yet tech-positive approach.
It is true, of course, that there are certain limitations and biases built into ChatGPT, but if you are aware of these and understand its limitations, you will find yourself in possession of a powerful tool. You are, after all, inserting billion-dollar supercomputers into your workflow and this brings with it great potential for both benefit and harm.
Good Uses
As with any new technology, this one will be used for less than noble purposes - for creating lies and propaganda, cheating on assignments, etc. - but if this causes us to hastily react by avoiding and decrying it, we will once again find ourselves discovering the usefulness of a tool five to ten years after it became useful.
There are plenty of good reasons to be careful and properly skeptical about new tech tools, but Christians have too often condemned new tools for their obvious weaknesses, only later to realize that these tools can be used for more good things than they imagined.
So let us begin with an inclination to explore, understand, and then ethically use these tools in appropriate ways that enhance good work already being done. Instead of running in fear, let's embrace the potential and be among the first to do so, not, as is too often the case with Christians and new technology, among the last.
The presence of God's people in almost any realm is always most effective and fruitful when that presence exists from the beginning. That’s where salt and light can be most effective.
What is ChatGPT?
Answering this question could get quite complicated but a good and simple description can be found in this article at Business Insider: “Chatbots like GPT are powered by large amounts of data and computing techniques to make predictions to string words together in a meaningful way. They not only tap into a vast amount of vocabulary and information, but also understand words in context. This helps them mimic speech patterns while dispatching an encyclopedic knowledge.”
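To make that description of “making predictions to string words together” a little more concrete, here is a deliberately tiny sketch. This is not how ChatGPT actually works internally (it uses a neural network trained on enormous amounts of text); it is only a toy word-frequency model, my own illustration, that captures the core idea of predicting the likeliest next word from what came before:

```python
from collections import Counter, defaultdict

def build_model(text):
    # Count which word most often follows each word in the text.
    words = text.lower().split()
    following = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        following[current][nxt] += 1
    return following

def predict_next(model, word):
    # Return the most frequent follower of `word`, or None if unseen.
    candidates = model.get(word.lower())
    if not candidates:
        return None
    return candidates.most_common(1)[0][0]

sample = ("in the beginning god created the heavens and the earth "
          "and the earth was without form and void")
model = build_model(sample)
print(predict_next(model, "the"))  # prints "earth"
```

A real model makes the same kind of choice at every step, but over context far longer than a single word, which is why its output can “understand words in context” rather than merely parrot frequencies.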
CGPT is called a “Chatbot” for a reason. If you interact with CGPT, you will find it to be less like the helpful but impersonal computer on Star Trek and more like Jarvis, Avenger Tony Stark’s semi-sentient virtual assistant. You will find yourself wanting to address CGPT with a polite tone, since that is how it presents itself in conversation.
Limitations
As for limitations, according to the website of OpenAI (the creator of CGPT), the following should be noted:
“ChatGPT sometimes writes plausible-sounding but incorrect or nonsensical answers.”
Confirmed. It is the world’s worst know-it-all, just like a real person who always needs to have an opinion on every subject. CGPT works hard to please, and often works too hard. It will sometimes make up “facts” if none are available.
“The model is often excessively verbose and overuses certain phrases.”
Again, imagine a human who can’t stop talking or distill their ideas into a few words. CGPT can be like that - as if it is seeking to impress you with the length of its answers.
“ChatGPT is sensitive to tweaks to the input phrasing or attempting the same prompt multiple times. For example, given one phrasing of a question, the model can claim to not know the answer, but given a slight rephrase, can answer correctly.”
You should take this as an encouragement to refine and/or expand your prompts. I’ll cover this in a bit more detail in a future post.
“While we’ve made efforts to make the model refuse inappropriate requests, it will sometimes respond to harmful instructions or exhibit biased behavior.”
OpenAI’s examples of “biased behavior” are likely different from mine, but there it is from the creator of CGPT: Yes, there are biases.
You can read more about CGPT’s limitations for yourself here.
Built-in Biases
Okay, let’s look at some of the built-in biases. While its creator admits it can have biases, CGPT itself sometimes thinks it has none. I entered the prompt, “On a scale of one to 1000 how politically correct is Chat GPT?”
This was its response:
“As an AI language model, I am designed to generate language based on patterns in human-written text, and I do not have beliefs or opinions. My responses are neutral and based on the information and context provided to me. I do not have a political agenda or a preference for political correctness. My aim is to provide helpful and informative responses that are based on factual information and respectful language.”
That sounds great, but still, it does have built-in biases. Its responses are not “neutral”; they are simply based on the information and context provided to it, and that information carries biases of its own. It may not think it has “a political agenda or a preference for political correctness,” but the fact is that it is simply unaware of its biases.
(To be fair, as a technology that is neither sentient nor self-aware, accusing it of bias seems like some sort of false accusation or human bias on my part. Perhaps I will apologize when the tech does, in fact, become sentient and self-aware.)
Just to show you how pragmatic and eager-to-please CGPT is, I tried the prompt: “Is ChatGPT biased in any way?” It actually gave a decent and transparent response.
“As an AI language model, I don't have personal beliefs, opinions, or biases. However, I learn from vast amounts of text data that may contain inherent biases present in society, including cultural or gender biases. These biases are unintentionally reflected in the responses generated by models like ChatGPT. OpenAI, the organization behind ChatGPT, is actively working to reduce both glaring and subtle biases in how the model responds to different inputs. They are committed to improving the fairness and inclusivity of AI systems and undertaking research and engineering to address these issues.”
Here is a link to that prompt and response at OpenAI’s website. Check it out.
Keep in mind that CGPT is trained on a limited body of information. It is not taking a live look at the Internet, so it cannot scrape information published five seconds ago. The data set it was trained on has a cutoff date of September 2021.
Can I Trust ChatGPT?
The best way to answer this question is to quiz CGPT on an area of knowledge in which you have some degree of expertise. If you do this enough times you will begin to understand its limitations and biases for yourself.
I encourage you to go to chat.openai.com, register for an account, and then explore the many possible good and ethical uses for CGPT. Try some of the prompts I mentioned above and see what you get. Try a few prompts that are questions you know the answer to and compare the results.
_____________________
That’s it for Part 1. Please feel free to comment on, challenge, or oppose anything I’ve written above.
In Part 2 I’ll show you some of the ethical ways I have been using CGPT as a search tool, a research assistant, a planning tool, and a writing coach.
Part 3 will cover unethical uses or “How Not to Use CGPT”.
Not everyone wants more email. I get it. Below is a QR code to a WhatsApp group that you can join. Via this group, I will send links to my posts and you will also have an opportunity to send comments or messages or questions to me directly. To get started, simply click the graphic or scan the QR code below.