ChatGPT in Writing: Enhancing Productivity or Cheating?

Exploring the Role of ChatGPT
in Professional Writing and Its Ethical Implications

I remember the first time my math teacher introduced calculators for exams. I couldn’t believe my luck. No more frantic scribbling on scratch paper, struggling against the ticking clock on the teacher’s desk. From that moment, I believed math would be a breeze. Gone were the days of painstakingly hand-written calculations and the inevitable errors that came with them. I felt liberated. But then came Algebra, and I realized the calculator was merely a tool, not a savior.

This memory resurfaced as I sat at my desk using ChatGPT for the first time. It offered freedom from the mundane task of producing business content, eliminating hours of research, typing, deleting, rewording, and endless proofreading. In one fell swoop, ChatGPT significantly reduced my workload. Could it be true?

With ChatGPT, productivity and accuracy soared. Suddenly, good writing became accessible to those who struggled to string words together coherently. Individuals lacking writing skills could now claim credit for well-crafted articles replete with data, documentation, and citations.

As a published author, I did feel it was cheating. Many of us may agree, although it’s not that straightforward. For the purposes of this article, let’s assume we’re discussing ChatGPT in a professional context. Using it dishonestly for personal gain, such as letting your children have a chatbot write their English papers, would indeed be considered cheating.

Let’s Hear From The Source Itself

Me: Do you consider it cheating to use ChatGPT in professional writing?

ChatGPT: As an AI language model, I do not possess opinions or beliefs, and I am incapable of labeling anything as cheating. It is ultimately up to individuals to determine whether they feel comfortable employing AI language models like me in their writing and to adhere to any rules or ethical standards established by their specific context or community.

The answer is fair enough. However, it raises the question: is it ethical to misrepresent oneself in the workplace by producing high-quality written content or communications when lacking such writing skills?

To shed light on this matter, let’s turn to an analogy. Before hammers were invented, carpenters would use rocks or improvised tools to build things. Imagine one carpenter having the brilliant idea of a hammer and eagerly showing it to a fellow carpenter. This innovation revolutionized their craft, boosting productivity, accuracy, and quality. Soon, they began selling hammers and quality carpentry materials across the land.

Did the carpenter cheat by using a hammer while others still used rocks? How about the advent of cordless screwdrivers in the construction realm? While others struggled with manual screwdrivers, the person with the cordless drill swiftly drove screws, outpacing the manual laborer.

In both scenarios, the new technology could be perceived as misrepresentation: presenting oneself as more skilled and efficient than one really is. So, is Grammarly cheating? I am using it while writing this article. Grammarly, an AI-powered tool, improves our grammar. As you can see, the lines blur rapidly when discussing Artificial Intelligence (AI), as these simple analogies illustrate, so I posed the hammer analogy to ChatGPT.

Me: Discuss how ChatGPT may be likened to a carpenter using a hammer for increased efficiency.

ChatGPT: Similar to a hammer enhancing a carpenter’s efficiency and effectiveness, ChatGPT serves as a tool that helps writers generate content more quickly and accurately. In both cases, the tool does not replace the skill and expertise of the user, but rather enhances their abilities and facilitates superior outcomes.

Excellent response, ChatGPT.

What Do We Really Know About ChatGPT?

Perhaps we should find out a bit more about ChatGPT before we continue. What is it, really? I will consolidate the answers provided by ChatGPT to shed light on what ChatGPT truly is and present some statistics for us to consider.

ChatGPT is a specialized version of the GPT (Generative Pre-trained Transformer) architecture developed by OpenAI. While it shares its foundation with GPT-3, ChatGPT is designed specifically for conversational applications, making it ideal for interactive discussions and written conversations. It’s important to note that ChatGPT is not the same as GPT-3, which is a larger and more general-purpose language model.

GPT-3, the larger model, boasts an astonishing 175 billion parameters, making it one of the most powerful and extensive language models ever created. It was trained on a massive dataset of over 45 terabytes of text, encompassing web pages, books, and articles. For some context, 45 terabytes of plain text is roughly the equivalent of tens of millions of 300-page books, read cover to cover. GPT-3’s vast parameter count enables it to perform a wide range of language-related tasks, including translation, summarization, and question-answering.
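The book comparison is easy to sanity-check with back-of-the-envelope arithmetic. The figures below for characters per page and bytes per character are rough assumptions for plain English text, not numbers published by OpenAI:

```python
# Rough estimate: how many 300-page books would equal 45 TB of plain text?
# Assumed figures (illustrative only): ~1,800 characters per printed page,
# ~1 byte per character for plain English text.
BYTES_PER_CHAR = 1
CHARS_PER_PAGE = 1_800
PAGES_PER_BOOK = 300

dataset_bytes = 45 * 10**12  # 45 terabytes
bytes_per_book = PAGES_PER_BOOK * CHARS_PER_PAGE * BYTES_PER_CHAR
books = dataset_bytes // bytes_per_book
print(f"Roughly {books:,} books")
```

Under these assumptions the answer lands in the tens of millions of books; changing the per-page assumptions shifts the figure, but not its order of magnitude.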

ChatGPT itself is a neural network language model fine-tuned on conversational data, including dialogue examples and other conversational text. The most recent model, GPT-4, expands its capabilities further, although OpenAI has not publicly disclosed its parameter count.

However, it’s essential to acknowledge that ChatGPT has limitations like any other AI model. As a machine learning model, it may occasionally make mistakes or provide incorrect information if the input data is incomplete or inaccurate. Therefore, it’s crucial to verify the information provided by ChatGPT and not solely rely on its responses.

Productivity Or Cheating?

Now that we have a clearer understanding of ChatGPT’s nature and statistics, let’s revisit the original question.

Is it considered cheating to use AI in your daily work? Unfortunately, there is no one-size-fits-all answer. Different companies and contexts may have varying perspectives on using AI in professional writing. Some may embrace it as a powerful tool that enhances productivity and accuracy, while others may view it as an unfair advantage or misrepresentation.

Personally, I perceive ChatGPT as a tool, much like Grammarly, Microsoft Office, or Adobe Creative Cloud. It is the modern-day equivalent of a hammer in an office worker’s toolkit, enabling increased efficiency and accuracy. However, it’s crucial to recognize that ChatGPT is not a substitute for the individual’s skill and expertise. It complements their abilities, augments their productivity, and aids in achieving better results.

It would be considered cheating if someone intentionally misrepresents themselves, particularly for personal gain. Still, for most of us, ChatGPT can help raise the overall quality of the communication of thoughts and ideas within a business environment.

Some Final Thoughts

Having used this AI technology since it became publicly available, I have noticed some of its limitations and peculiarities. For example, I have had to correct it several times when the information output was simply incorrect. When I inform it of the error, it politely apologizes and then generates the correct information. In one case, there was a significant error related to a particular company’s legal history. In my experience, it generally produces about 85% of an article without the need for human alteration or intervention. However, human intervention is often still required, and if you run ChatGPT’s text through Grammarly or other editing tools, additional corrections are made.

During my conversation with the bot, it also suggested that I “further expand on how the model generates responses and the potential biases or limitations associated with it.” These factors are real and should be taken into account. Remember, ChatGPT was trained on us.

One major concern is the potential for biases in the training data. If the data used for training contains biases or reflects societal prejudices, the model may inadvertently generate biased or unfair responses. It is crucial to be aware of these limitations and critically evaluate the responses provided by AI models to avoid perpetuating or amplifying biases in written content.

In other words, if something (or someone) with authority, like ChatGPT-produced content, states misinformation, it can get repeated many times over and soon becomes a self-replicating falsehood.

Finally, the ethical implications of using AI in professional writing depend on individual perspectives, company policies, and the specific context. While AI models like ChatGPT can provide valuable assistance, it is essential to maintain transparency, uphold ethical standards, and ensure that such tools enhance rather than replace human skill and expertise.

As we navigate the ever-evolving landscape of AI technology, engaging in ongoing discussions and establishing guidelines to promote responsible and ethical use is crucial. With the right balance between human creativity and AI assistance, we can harness the power of AI language models like ChatGPT to achieve greater heights in our written communication and foster a more productive future.

Is ChatGPT cheating? No, but anything can be abused.

Copyright © 2002-2023
Digital Marketing Services, Inc.
