ChatGPT in Writing: Enhancing Productivity or Cheating?

Exploring the Role of ChatGPT
in Professional Writing and Its Ethical Implications

I remember the first time my math teacher introduced calculators for exams. I couldn’t believe my luck. No more frantic scribbling on scratch paper, struggling against the ticking clock on the teacher’s desk. From that moment, I believed math would be a breeze. Gone were the days of painstakingly hand-written calculations and the inevitable errors that came with them. I felt liberated. But then came Algebra, and I realized the calculator was merely a tool, not a savior.

This memory resurfaced as I sat at my desk using ChatGPT for the first time. It offered freedom from the mundane task of producing business content, eliminating hours of research, typing, deleting, rewording, and endless proofreading. In one fell swoop, ChatGPT significantly reduced my workload. Could it be true?

With ChatGPT, productivity and accuracy soared. Suddenly, good writing became accessible to those who struggled to string words together coherently. Individuals lacking writing skills could now claim credit for well-crafted articles replete with data, documentation, and citations.

To me, as a published author, it did feel like cheating. Many of us may agree, although it’s not that straightforward. For the purposes of this article, let’s assume we’re discussing ChatGPT in a professional context. Using it dishonestly for personal gain, such as a student having a chatbot write an English paper, would indeed be considered cheating.

Let’s Hear From The Source Itself

Me: Do you consider it cheating to use ChatGPT in professional writing?

ChatGPT: As an AI language model, I do not possess opinions or beliefs, and I am incapable of labeling anything as cheating. It is ultimately up to individuals to determine whether they feel comfortable employing AI language models like me in their writing and to adhere to any rules or ethical standards established by their specific context or community.

The answer is fair enough. However, it raises the question: is it ethical to misrepresent oneself in the workplace by producing high-quality written content or communications when lacking such writing skills?

To shed light on this matter, let’s turn to an analogy. Before hammers were invented, carpenters would use rocks or improvised tools to build things. Imagine one carpenter having the brilliant idea of a hammer and eagerly showing it to a fellow carpenter. This innovation revolutionized their craft, boosting productivity, accuracy, and quality. Soon, they began selling hammers and quality carpentry materials across the land.

Did the carpenter cheat by using a hammer while others still used rocks? How about the advent of cordless screwdrivers in construction? While everyone else struggled with manual screwdrivers, the person with a cordless drill drove screws swiftly, outpacing the manual laborer.

Thanks to new technologies, both scenarios could be perceived as misrepresentation: presenting oneself as more skilled and efficient than one really is. So, is Grammarly cheating? I am using it while writing this article. Grammarly, an AI-powered tool, improves our grammar. As you can see, the lines blur rapidly when discussing Artificial Intelligence (AI), as these simple analogies illustrate, so I posed the hammer analogy to ChatGPT.

Me: Discuss how ChatGPT may be likened to a carpenter using a hammer for increased efficiency.

ChatGPT: Similar to a hammer enhancing a carpenter’s efficiency and effectiveness, ChatGPT serves as a tool that helps writers generate content more quickly and accurately. In both cases, the tool does not replace the skill and expertise of the user, but rather enhances their abilities and facilitates superior outcomes.

Excellent response, ChatGPT.

What Do We Really Know About ChatGPT?

Perhaps we should find out a bit more about ChatGPT before we continue. What is it really? Let me consolidate what ChatGPT has shared to understand its nature and capabilities better, especially with the advancements introduced by GPT-4.

ChatGPT is part of OpenAI’s series of language models based on the GPT (Generative Pre-trained Transformer) architecture. Its purpose is simple yet profound: to generate human-like text responses based on input. The earlier GPT-3 model was a groundbreaking achievement, boasting an astonishing 175 billion parameters. To put that into perspective, its training drew on a raw text corpus of roughly 45 terabytes, filtered down substantially before training. Imagine reading tens of millions of 300-page books cover to cover; that’s the magnitude of text GPT-3 drew on to become one of the most powerful language models ever created.
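If you’re curious where a comparison like that comes from, a quick back-of-the-envelope calculation gets you there. The figures below (one byte per character, about 2,000 characters per page, 300 pages per book) are my own rough assumptions, not OpenAI’s numbers:

    # Rough book-equivalent of a 45-terabyte text corpus.
    # Assumptions (illustrative only): 1 byte per character,
    # ~2,000 characters per page, 300 pages per book.
    corpus_bytes = 45 * 10**12          # 45 TB
    bytes_per_book = 2_000 * 300        # ~600 KB per book
    books = corpus_bytes / bytes_per_book
    print(f"{books:,.0f} books")        # about 75,000,000

Under those assumptions, 45 terabytes works out to roughly 75 million books’ worth of text.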

While GPT-3 demonstrated incredible versatility, ChatGPT’s design is tailored for conversational applications. This specialization makes it more interactive and better suited for writing, brainstorming, and business communication tasks. The latest iteration, GPT-4, builds upon this foundation with even greater capability. Though OpenAI has not disclosed the exact parameter count of GPT-4, it surpasses GPT-3 in contextual understanding and creative reasoning. It processes longer and more intricate prompts while maintaining coherence and accuracy.
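To make “tailored for conversational applications” a little more concrete, here is a minimal sketch of a chat-style request using OpenAI’s Python client. The role-tagged message format is the documented chat-completions convention; the system and user prompts are placeholders of my own choosing:

    # Minimal chat-completion sketch (assumes the openai Python package
    # is installed and OPENAI_API_KEY is set in the environment).
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # Chat models take a list of role-tagged messages rather than a
    # single prompt string; that is what makes them conversational.
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": "You are a concise business-writing assistant."},
            {"role": "user", "content": "Draft a two-sentence status update for a delayed project."},
        ],
    )

    print(response.choices[0].message.content)

The system message steers the assistant’s behavior for the whole exchange, a kind of control that a single-prompt completion model never offered.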

One of GPT-4’s most exciting features is its multimodal capability, allowing it to handle both text and image inputs. This innovation opens new possibilities for people who rely on visual data, enabling tasks like interpreting charts, analyzing diagrams, or integrating images into detailed content.
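Here is a sketch of what that can look like in practice, assuming access to a vision-capable GPT-4 model. The model name follows OpenAI’s published examples at the time of writing, though naming and availability vary, and the chart URL is a made-up placeholder:

    # Multimodal sketch: one user message mixing text and an image URL.
    from openai import OpenAI

    client = OpenAI()

    response = client.chat.completions.create(
        model="gpt-4-vision-preview",  # vision-capable model; name may vary
        messages=[
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": "Summarize the trend shown in this chart."},
                    {"type": "image_url",
                     "image_url": {"url": "https://example.com/chart.png"}},
                ],
            }
        ],
        max_tokens=300,
    )

    print(response.choices[0].message.content)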

However, ChatGPT, including GPT-4, is not without its limitations. As with any machine learning model, it can provide incorrect or outdated information. Additionally, biases present in its training data can influence its responses. These quirks remind us that while ChatGPT is an incredibly powerful tool, it still requires human oversight to ensure accuracy.

In many ways, ChatGPT is like an advanced hammer for the knowledge worker—it enhances productivity and unlocks new potential but doesn’t replace the need for human skill. With GPT-4, the possibilities are greater than ever, yet the principles remain the same: it’s a tool to amplify, not replace, human expertise.

Productivity Or Cheating?

Now that we have a clearer understanding of ChatGPT’s nature and the numbers behind it, let’s revisit the original question.

Is it considered cheating to use AI in your daily work? Unfortunately, there is no one-size-fits-all answer. Different companies and contexts may have varying perspectives on using AI in professional writing. Some may embrace it as a powerful tool that enhances productivity and accuracy, while others may view it as an unfair advantage or misrepresentation.

Personally, I perceive ChatGPT as a tool, much like Grammarly, Microsoft Office, or Adobe Creative Cloud. It is the modern-day equivalent of a hammer in an office worker’s toolkit, enabling increased efficiency and accuracy. However, it’s crucial to recognize that ChatGPT is not a substitute for the individual’s skill and expertise. It complements their abilities, augments their productivity, and aids in achieving better results.

It would be considered cheating if someone intentionally misrepresented themselves, particularly for personal gain. Still, for most of us, ChatGPT can help raise the overall quality of how thoughts and ideas are communicated within a business environment.

Some Final Thoughts

After using AI technology since it became publicly available, I have noticed some of its limitations and peculiarities. For example, I have had to correct it several times when its output was simply incorrect. When I point out the error, it politely apologizes and then generates the correct information. In one case, there was a significant error concerning a particular company’s legal history. In my experience, it generally produces about 85% of an article without the need for human alteration or intervention. However, human intervention is often still required, and if you run ChatGPT’s text through Grammarly or another editing tool, additional corrections surface.

During my conversation with the bot, it also suggested that I “further expand on how the model generates responses and the potential biases or limitations associated with it.” These factors are real and should be taken into account. Remember, ChatGPT was trained on us.

One major concern is the potential for biases in the training data. If the data used for training contains biases or reflects societal prejudices, the model may inadvertently generate biased or unfair responses. It is crucial to be aware of these limitations and critically evaluate the responses provided by AI models to avoid perpetuating or amplifying biases in written content.

In other words, if something (or someone) with authority, like ChatGPT-produced content, states misinformation, it can get repeated many times over and soon becomes a self-replicating falsehood.

Finally, the ethical implications of using AI in professional writing depend on individual perspectives, company policies, and the specific context. While AI models like ChatGPT can provide valuable assistance, it is essential to maintain transparency, uphold ethical standards, and ensure that such tools enhance rather than replace human skill and expertise.

As we navigate the ever-evolving landscape of AI technology, engaging in ongoing discussions and establishing guidelines to promote responsible and ethical use is crucial. With the right balance between human creativity and AI assistance, we can harness the power of AI language models like ChatGPT to achieve greater heights in our written communication and foster a more productive future.

Is ChatGPT cheating? No, but anything can be abused.
