<aside> <img src="notion://custom_emoji/d31d05e8-3abe-400a-9d74-6c4928237bcb/1890c7d7-5716-809a-aa61-007a25acccd9" alt="notion://custom_emoji/d31d05e8-3abe-400a-9d74-6c4928237bcb/1890c7d7-5716-809a-aa61-007a25acccd9" width="40px" />

By Tarek Awwad @ Sydyk

</aside>


In this article, I discuss AI-assisted writing and the importance of tools that assess its contributions, which enhance transparency in published works. I’ll also introduce Refine, a Notion integration I built to help measure AI involvement and promote responsible use of AI in the writing process.

AI-Assisted Writing — Where’s the Limit?

In recent years, the democratization of Large Language Models (LLMs) has reshaped how people research, question, understand, and communicate the ideas and topics they care about. The accessibility and ease of use of LLMs, driven by their inherent ability to comprehend and generate human language, have transformed them into natural companions for virtually any writing task—whether it’s writing articles, developing code, compiling reports, or more. Unfortunately, this convenience has also led to the proliferation of AI-generated written content which, without exaggeration, poses a real threat to the intellectual and artistic integrity of written expression.

<aside> <img src="/icons/bugle_green.svg" alt="/icons/bugle_green.svg" width="40px" />

Ideas, opinions, and the work itself are what truly matter. Yet, the way one expresses those ideas also contributes to humanity’s intellectual and artistic heritage—because writing is, after all, an art. Preserving the authenticity of expression and fostering its diversity is essential.

</aside>

Relying on AI, in the form of LLMs, to fully express our thoughts in its own way puts this authenticity and diversity—as well as their natural evolution over time—at significant risk of gradually eroding. It is therefore crucial to promote a responsible approach to AI in the writing process: one in which not only the ideas but also the style, the reasoning, and the structure are dictated by the author.

To support this, authors need to devise workflows that help them harness AI’s strengths without compromising the originality and integrity of their style. I believe that a key part of developing these workflows is the ability to quantify the AI-generated contribution to a given text. Doing so allows authors to refine their workflows until AI’s involvement reaches a level they’re comfortable with and, just as importantly, to be transparent with their readers about the extent of that involvement.

In the next section, I share my transparency workflow. It describes a) how I leverage LLMs to write my articles, b) how I keep an eye on the amount of AI-contributed text by continuously measuring and aggregating it, and c) how I try to be transparent about it.

The Transparency Workflow

How do I use LLMs?

For all the reasons mentioned earlier, when it comes to writing, my use of LLMs is limited to enhancing the linguistic quality and clarity of my articles. My goal is simply to ensure that the reader isn’t distracted or overwhelmed by unclear language or awkward phrasing. To this end, the workflow I used—up until recently—can be summarized as follows:

  1. The prompt: In ChatGPT (GPT-4), which I found to best “mimic” my writing style among available AI tools and models I’ve tested (the tests were neither exhaustive nor rigorous!), I start a new conversation with these instructions as the first prompt:

<aside> <img src="/icons/user_green.svg" alt="/icons/user_green.svg" width="40px" />

I will share sections of a post with you. Refine them while strictly following these rules:

</aside>

  2. The writing process: When I write a paragraph, I iterate on it until the idea is clear enough and the text structure reflects the reasoning I want the reader to follow. I copy the paragraph into the conversation and ask ChatGPT to refine it, then review the output to ensure it aligns with both the structure and content of the original text. While the output often meets my expectations, there are instances where it does not, for one of two reasons: either a) the initial instructions are not strict or clear enough—though this is now less likely, since I have refined them as part of this workflow—or b) my input text lacks clarity, as small linguistic nuances can create significant ambiguities in conveying an idea. My rule of thumb is that if ChatGPT misunderstands a passage, most people would too. In such cases, I revise my input text to remove the ambiguity and feed it back into ChatGPT for refinement if needed. When the output is satisfactory, I copy both my input text and the generated output into a separate file for later use.
  3. The transparency step: Once I complete the article, I use a Python script to compute similarity measures between the initial text (all input texts) and the final text (all output texts). The details of this computation are described later in this article. I include this similarity score at the bottom of my article to give readers an idea—albeit self-computed and unverifiable—of the extent of assistance I received from the “AI”.
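The exact similarity measure is described later in the article; as a rough illustration of the transparency step, here is a minimal sketch, assuming a word-level ratio based on Python’s standard-library `difflib` (the function names, the pairing of input/output paragraphs, and the length-weighted aggregation are my own assumptions, not the author’s actual script):

```python
from difflib import SequenceMatcher

def similarity(original: str, refined: str) -> float:
    """Return a 0-1 ratio of how much of the original wording
    survives in the refined version (word-level gestalt matching)."""
    return SequenceMatcher(None, original.split(), refined.split()).ratio()

def article_similarity(pairs: list[tuple[str, str]]) -> float:
    """Aggregate per-paragraph similarities over an article,
    weighting each (input, output) pair by its combined word count."""
    total = sum(len(a.split()) + len(b.split()) for a, b in pairs)
    weighted = sum(
        similarity(a, b) * (len(a.split()) + len(b.split()))
        for a, b in pairs
    )
    return weighted / total if total else 1.0

# Hypothetical (input, output) paragraph pairs saved during the workflow.
pairs = [
    ("The cat sat on the mat.", "The cat sat on the mat."),
    ("LLMs is very useful tools.", "LLMs are very useful tools."),
]
print(f"Similarity: {article_similarity(pairs):.2f}")  # → Similarity: 0.91
```

A score near 1.0 means the published text is close to the author’s original input; lower scores indicate heavier AI rewriting.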