The Manuscript That Would Not Write Itself

By the sixth session of The Research Clinic, the room had changed.

The team had crossed from why to how. They had seen Dr. Junaid’s promotion rejection up close, navigated the PMDC and CPSP regulatory maze, understood why evidence-based medicine matters at three in the morning, and absorbed the 120-day roadmap from first question to final submission. They were no longer frightened of research as a concept.

What frightened them now was the blank page.

Dr. Yaqoob could see it the moment they walked in. Dr. Sumaira Talib, the FCPS Surgery trainee who had sent that first WhatsApp message, “I’m signing up, who’s in?”, arrived first, but tonight she looked tired in a different way. Not the tiredness of a long surgical shift. The tiredness of someone who had been sitting in front of a screen for hours with very little to show for it.

She placed a printed draft on the desk without saying a word. The title was there. The methods section looked solid. But the introduction, the most important few hundred words of any manuscript, was a mess of crossed-out sentences and margin notes.

“I have the data,” she said quietly. “I just cannot make the words work.”

Dr. Junaid arrived next, holding his notebook, the same one he had carried since the beginning, now half-filled with careful handwriting. He glanced at Sumaira’s draft and nodded slowly. Fifteen years of clinical practice had made him fluent in diagnoses, ward rounds, and patient families. Academic writing was still a foreign language.

Dr. Hammad Ali bounded in behind him, still enthusiastic, still slightly chaotic. He had printed something from the internet and was waving it around. “Sir, I found a tool,” he said before he had even sat down. “ChatGPT wrote my entire background section in thirty seconds. Should I just use that?”

The room went still.

Dr. Hassan Raza looked up from his phone. Dr. Zunaira Malik turned from the window. Even Dr. Sumaira straightened slightly. Because Hammad had said out loud the thing every single one of them had already tried, or thought about trying, or felt guilty for wondering about.

Dr. Yaqoob set down his coffee.

“Close the door,” he said to Hammad. “And sit down. Because this is the conversation we were always going to have and tonight is the right time to have it.”

The Tools That Never Get Tired

He opened his laptop and turned the screen so everyone could see.

“Before I answer Hammad’s question,” he said, “let me show you what is actually out there. Because most of you are already using these tools, you just do not know how to use them correctly.”

He pulled up a list with multiple names. Each one, he explained, represented a different kind of assistant.

ChatGPT. Created by OpenAI. Trained to process natural language, meaning it understands how humans write, ask questions, and explain ideas. “You can ask it to help you restructure a muddled introduction,” he said, “or rephrase a sentence that feels awkward, or explain a statistical concept you half-understood in your methods course.” Dr. Sumaira leaned forward. That was exactly what she needed.

“But here is the danger,” he added. “ChatGPT sometimes invents references. It will give you a citation that looks completely real: journal name, volume number, page range, author names, and the paper simply does not exist. It is called hallucination, and it has destroyed more than one researcher’s credibility. You use it for flow, not for findings.”

He wrote that on the whiteboard in capital letters.

FLOW, NOT FINDINGS.

Gemini. Google’s AI. Unlike ChatGPT, this one is connected to the internet in real time. “It can fetch recent studies, summarise recent papers, and update you on the latest guidelines,” he said. “If you need to know what the WHO published last month on hypertension, Gemini can find it.” Hassan raised an eyebrow. He liked efficiency. “But the same rule applies. For every study it mentions, verify it yourself in PubMed or Google Scholar. Real science requires validation, not convenience.”

Claude. Developed by Anthropic. “If you upload your entire research draft as a PDF, thirty, forty, or fifty pages, Claude can read all of it without losing context. Then you ask it to rewrite your discussion section, and it will do so while remaining consistent with everything it read before.” He watched Hammad’s expression shift from excitement to something more careful. “That is genuinely useful during revisions. But you are still the author. You still own every sentence that goes into the final version.”

Grok. Developed by xAI, integrated with X, formerly Twitter. “This one scans fast-moving conversations, policy announcements, and public health discussions in real time,” he said. “Good for brainstorming. Good for spotting an emerging topic. Not good for evidence. Treat anything you find here as a lead, not as proof. Always verify in peer-reviewed databases.”

Dr. Yaqoob also touched briefly on Perplexity AI and Meta AI.

Then he talked about Elicit, a tool that specialises in literature discovery, finding relevant papers quickly across large databases. And then Scite.ai, which quickly became one of the most discussed tools in the room. “Scite tells you not just how many times a paper has been cited, but whether later studies supported or contradicted it. For systematic reviews, that is invaluable.”

He looked around the room. Six faces, no longer tired. Curious.

“And for polishing your language without changing your meaning,” he added, “Grammarly and Quillbot. Research Rabbit for visualising relationships between papers. Consensus.app for summarising overall scientific agreement on a topic.”

What Sumaira Asked That Changed the Conversation

He had barely finished the list when Dr. Sumaira raised her hand.

“Sir,” she said, with the directness he had come to expect from her, “if we use AI to help us write, is that plagiarism? Is it unethical?”

It was the right question. The most important one in the room.

“That,” he said, “is the question I was waiting for.”

He set down the marker.

“Using AI without acknowledgement or review, taking what it produces and passing it off as your own thinking, then yes, that crosses a line. But using it transparently, critically, as a grammar tool or a brainstorming partner? That is acceptable. The same way you use Grammarly. The same way you ask a colleague to read your draft. The tool helps with the surface. You are responsible for the science underneath.”

He pulled up the ICMJE guidelines on screen, the International Committee of Medical Journal Editors, the body that sets authorship standards for thousands of journals worldwide. Their position is clear: AI tools cannot be listed as co-authors. They cannot take responsibility for published work. They cannot be held accountable for errors. A human researcher must always remain accountable.

“COPE, the Committee on Publication Ethics, says the same thing,” he continued. “And journals like Nature, Elsevier, and JAMA now require that you disclose AI use in your acknowledgement section. Not as a confession. As a standard part of the record.”

He showed them the example that he had drafted for a manuscript earlier in the week:

“The authors used ChatGPT (OpenAI, 2025) and Perplexity AI to assist with language editing and reference organisation. All analysis, interpretation, and conclusions are the authors’ own.”

Dr. Junaid, who had been listening with the focused attention of someone catching up on fifteen years of missed information, wrote that sentence down word for word.

The Warning Nobody Talks About

“Before you all go home and start uploading your data to every AI platform you can find,” he said, “there is one more thing.”

He looked at Hammad specifically. He had that expression.

Anything you share with a public AI tool may be stored on servers outside your control. It may be used to train future models. If your data includes patient records, imaging, lab results, even anonymised in your mind but identifiable in combination, you could unintentionally violate patient privacy laws and your institution’s ethics agreements. For sensitive projects, the only tools you should use are secured or offline systems approved by your organisation.

“AI learns from what you give it,” he said. “Be careful what you teach it.”

Hassan, the practical one, asked the follow-up he expected: “What about detection? Can journals tell if AI wrote part of a paper?”

“Yes,” he said. “And increasingly, they check. Turnitin’s AI Detector is now running alongside plagiarism checkers at many journals. They do not automatically reject AI-assisted text. They flag it for editorial review. Your transparency and proper disclosure are still your best protection. If you have been honest about your use, you have nothing to fear.”

The Rule Hammad Needed to Hear

He turned back to Hammad and his printed ChatGPT background section.

“Can I use this?” Hammad asked again.

“Read it to me,” he said.

He read three sentences. They were smooth, well-structured, and completely devoid of any reference to the specific context of his study, his patient population, his local variables, or his research question.

“That,” he said, “is the problem. AI writes for everyone, which means it writes for no one in particular. Your introduction needs to earn its way into your specific paper. It needs to show the gap that your study fills. ChatGPT does not know what your gap is. Only you know that.”

Hammad looked at the printout, then at his blank notebook, and slowly set the printout aside.

“So I still have to think,” he said, with the slightly defeated air of someone who had hoped to avoid exactly that.

“You still have to think,” he confirmed. “AI can organise your ideas. It cannot generate them. It can polish your words. It cannot give them meaning.”

Before You Leave Tonight

He ended the session the way he ended most of them: with a practical summary. Here is how to use AI tools in your research workflow, correctly:

  • Use AI for: brainstorming, restructuring drafts, grammar and clarity, paraphrasing for flow, finding literature leads, summarising long documents you have already verified.
  • Never use AI for: generating or interpreting your results, producing references without verification, uploading unpublished or patient data, replacing your own analysis or conclusions.
  • Always do: verify every claim, cross-check every citation, disclose AI use in your acknowledgements, anonymise any data before using AI tools, and apply the same critical eye to AI output that you would apply to any other source.
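The “cross-check every citation” rule above can even be partly scripted. As a minimal sketch, here is one way to look a cited title up in PubMed using NCBI’s public E-utilities search endpoint (the endpoint and its `db`/`term`/`retmode` parameters are NCBI’s real API; the function names here are only illustrative):

```python
import json
from urllib.parse import urlencode
from urllib.request import urlopen

# NCBI E-utilities endpoint for searching PubMed (public API).
ESEARCH_URL = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def build_pubmed_query(title: str) -> str:
    """Build an esearch URL that looks a paper up by title.

    Searching the [Title] field returns matching PubMed IDs;
    an empty result is a strong hint the citation may be hallucinated.
    """
    params = {"db": "pubmed", "term": f"{title}[Title]", "retmode": "json"}
    return f"{ESEARCH_URL}?{urlencode(params)}"

def looks_verified(esearch_response: dict) -> bool:
    """Report whether any PubMed record matched, given the parsed
    esearch JSON (shape: {"esearchresult": {"count": "1", ...}})."""
    result = esearch_response.get("esearchresult", {})
    return int(result.get("count", "0")) > 0

def check_citation(title: str) -> bool:
    """Fetch and evaluate the search result for one cited title."""
    with urlopen(build_pubmed_query(title)) as response:
        return looks_verified(json.load(response))
```

A match here is only a lead, not proof: you still open the paper and confirm the authors, year, and findings match what the AI claimed.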

And remember the one rule that never changes:

AI hallucinations sound confident. Confident does not mean correct. Every fact it provides is a starting point for your own verification, not an endpoint.

What Happened After the Session

Dr. Sumaira Talib stayed back, as she often did.

She opened her laptop, pulled up her struggling introduction, and asked ChatGPT to summarise the three longest PDFs she had been trying to read simultaneously for two weeks. It took four minutes. She read the summaries, verified the key points against the original papers, and started writing her own words, her own framing, but with the fog finally lifted.

At midnight, she sent him a message:

“Introduction done. Verified everything. Acknowledged the tools. Sir, this is the first time writing has not felt like surgery.”

He replied with one word.

Good.

Because that is what AI in research should feel like. Not a shortcut. Not a ghostwriter. A pair of reading glasses: something that helps you see your own work more clearly, so that the thinking, the honesty, and the authorship remain exactly where they belong.

With you.

References

[1] ICMJE — Defining the Role of Authors and Contributors: https://www.icmje.org/recommendations/browse/roles-and-responsibilities/defining-the-role-of-authors-and-contributors.html

[2] COPE Position on Authorship and AI Tools: https://publicationethics.org/guidance/cope-position/authorship-and-ai-tools

Follow the UPMED Medical Consultancy Channel to stay updated on the 120-post journey of this research series. We will share daily posts covering all the latest updates and progress. Link: https://whatsapp.com/channel/0029VaCu9r86buMKJD4wx40j

You can also connect with the writer of this blog post series to share or receive suggestions: Dr. Junaid Rashid (Founder of UPMED) at 03042397393 (WhatsApp).
