<aside> <img src="notion://custom_emoji/d31d05e8-3abe-400a-9d74-6c4928237bcb/1890c7d7-5716-809a-aa61-007a25acccd9" alt="" width="40px" />
By Tarek Awwad @ Sydyk
</aside>

In this article, I discuss AI-assisted writing and the importance of tools for assessing its contributions, which enhance transparency in published works. I'll also introduce Refine, a Notion integration I built to help measure AI involvement and promote responsible use of AI in the writing process.
In recent years, the democratization of Large Language Models (LLMs) has reshaped how people research, question, understand, and communicate the ideas and topics they care about. The accessibility and ease of use of LLMs, driven by their inherent ability to comprehend and generate human language, have transformed them into natural companions for virtually any writing task—whether it’s writing articles, developing code, or compiling reports. Unfortunately, this convenience has also led to the proliferation of AI-generated written content which, without exaggeration, poses a real threat to the intellectual and artistic integrity of written expression.
<aside> <img src="/icons/bugle_green.svg" alt="" width="40px" />
Ideas, opinions, and the work itself are what truly matter. Yet, the way one expresses those ideas also contributes to humanity’s intellectual and artistic heritage—because writing is, after all, an art. Preserving the authenticity of expression and fostering its diversity is essential.
</aside>
Relying on AI, in the form of LLMs, to fully express our thoughts in its own way puts this authenticity and diversity—as well as their natural evolution over time—at significant risk of gradually eroding. It is therefore crucial to promote a responsible approach to AI in the writing process, one in which not only the ideas but also the style, the reasoning, and the structure are dictated by the author.
To support this, authors need to devise workflows that help them harness AI’s strengths without compromising the originality and integrity of their style. I believe a key part of developing these workflows is the ability to quantify the AI-generated contribution in a given text. Doing so allows authors to refine their workflows until AI’s involvement reaches a level they’re comfortable with and, just as importantly, to be transparent with their readers about the extent of that involvement.
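How Refine itself computes this is not detailed here, but as a rough illustration, one simple way to quantify AI contribution is to diff the author's draft against the AI-refined version and report the fraction of the final text that was not carried over verbatim. The sketch below uses Python's standard `difflib`; the function name `ai_contribution` is hypothetical, not part of any library or of Refine.

```python
from difflib import SequenceMatcher

def ai_contribution(draft: str, refined: str) -> float:
    """Illustrative metric: fraction of the refined text that does not
    appear verbatim in the author's draft, via longest-matching-block diff."""
    if not refined:
        return 0.0
    matcher = SequenceMatcher(a=draft, b=refined, autojunk=False)
    # Total characters of `refined` that were matched against `draft`.
    matched = sum(block.size for block in matcher.get_matching_blocks())
    return 1.0 - matched / len(refined)

draft = "The cat sat on the mat."
refined = "The cat sat comfortably on the mat."
print(f"AI contribution: {ai_contribution(draft, refined):.0%}")
```

A character-level diff like this is crude (it counts any rewording as AI text and ignores whether the ideas changed), but it is enough to track a rough per-section percentage and aggregate it across an article.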
In the next section, I share my transparency workflow. It describes a) how I leverage LLMs to write my articles, b) how I keep an eye on the amount of AI-contributed text by continuously measuring and aggregating it, and c) how I try to be transparent about it.
For all the reasons mentioned earlier, when it comes to writing, my use of LLMs is limited to enhancing the linguistic quality and clarity of my articles. My goal is simply to ensure that the reader isn’t distracted or overwhelmed by unclear language or awkward phrasing. To this end, the workflow I used—up until recently—can be summarized as follows:
<aside> <img src="/icons/user_green.svg" alt="" width="40px" />
I will share sections of a post with you. Refine them while strictly following these rules: