
Reading, Reasoning, and the Rise of AI - Why we must stay human in an algorithmic age

1. Introduction - When AI Becomes a Research Companion

 

Artificial intelligence (AI) has rapidly evolved from a curiosity to a valuable research companion. From tools that summarise complex papers to models that aid in drafting entire grant proposals, AI now sits at the heart of academic productivity. For many researchers, not using it can even feel like falling behind.


"AI may well be the new calculator of our age: indispensable, powerful, and transformative, but only if we remember that the calculator never solved the problem. It was always the human behind it."

However, lately my Instagram feed has turned into a classroom I never signed up for. Between endless clips of study hacks and PhD confessions, a new wave of “academic influencers” has emerged, each promoting a different AI tool that promises to make research effortless. Among the flood of new platforms, one tool has gained unusual visibility, largely through social media: Anara, an app that claims to help users read papers and even “chat” with their references.


It’s easy to understand the appeal. In a world where academics are buried under mountains of unread PDFs, AI tools promise relief: faster reading, cleaner notes, fewer late nights. But beneath the polished reels and smiling faces lies a growing unease. When research becomes marketing content, what happens to integrity, transparency, and the very idea of scholarship?

 

2. The Rise of Anara - When Research Meets Reels

 

Anara presents itself as a helpful research assistant: an AI tool that can read, summarise, and build chats around scientific papers. On the surface, it sounds like a dream come true for anyone overwhelmed by the constant reading load of academic life. Yet the reason it has become so visible isn’t academic journals or conferences; it’s social media.


Over the past few months, Instagram has been flooded with near-identical videos from supposed PhD students, early-career researchers, and study influencers. Each follows a familiar pattern: soft lighting, a friendly voice, and the promise of a shortcut.


So I started my own research online and found only a few posts and comment threads, such as discussions on Reddit’s r/PhD forum, pointing out how coordinated this content appears. A recent blog post from Unfiltered Academia also warns that these campaigns risk trivialising scholarship, turning research into a performance rather than a process of discovery. I reached out to almost twenty of these accounts and received no replies. I also contacted the author of that blog, Louise Pay, who kindly agreed to be interviewed about this.


In one clip, a creator confidently declares, “My biggest hot take is if you’re a PhD or master’s student, you shouldn’t be using ChatGPT anymore, let me show you a better tool.” She then opens Anara, demonstrating how she can upload “as many papers as she wants” and find “newer related articles” in seconds, a feature that is, in reality, available only in the paid version.


Another influencer assures viewers that she’s not a genius but knows “how to write a killer research paper,” showing how she “talks to her papers” through Anara’s chat feature and gets “super detailed answers instantly.” 


A third presents it almost like a secret weapon passed down from postgrads: “Apparently, PhD students were gatekeeping this new AI website from us undergrads,” she says, claiming it’s “changed my life, so let me show you. I’m about to change yours, too.”


Across these videos, the tone, pacing, and even camera gestures are strikingly similar, as if pulled from the same script. Each presents Anara not as a professional tool but as a lifestyle upgrade, packaged in the language of relatability and FOMO (fear of missing out): a tool you must have. None of these posts is marked as an advertisement.


This formula fits perfectly within the broader ecosystem of influencer marketing, transplanted into a space where authenticity is supposed to matter most: academia. Over the last few years, brands have shifted from traditional influencers to UGC (user-generated content) creators - people paid to produce promotional videos that appear personal and “unpaid.” The goal is to sell trust, not just products.


Many of the Anara accounts fit this new model almost too neatly. Their feeds consist almost entirely of similar videos about the same tool, often with identical editing styles, background sounds, and captions. Some creators describe themselves as PhD students or researchers, yet offer no visible trace of actual academic life - no lab photos, no publications, no research-related content - just a stream of Anara promotions. Their profiles read more like templates than personal journeys. (Just type #anara in your search bar to find examples of these.) 


As Louise Pay explained when I interviewed her, the problem goes beyond repetitive scripts. She told me she had looked into many of these accounts and found troubling patterns: “I looked into some of them, their LinkedIn profiles, what I could find about them online, and some of them are not even grad students. They’re undergrads who are obviously being paid as marketers, but they’re presenting themselves online as though they are grad students doing research, publishing papers.” She was equally concerned about the impact this might have on early-career researchers: “There’s so much bad advice out there already on how to write scientifically or actually learn how to read a paper. New PhD students who are very much online are going to see this and think, ‘this is how we do this,’ if they don’t catch that those are just undisclosed ads, basically rage-baiting people into buying the product.”


This raises uncomfortable questions about authenticity and manipulation. Are these genuine students sharing tools that truly help them, or are they content creators hired to perform academic credibility? The absence of clear ‘ad’ disclosure makes it impossible for viewers to tell. It also undermines trust in legitimate academic voices who use social media to share knowledge transparently. In a community that depends on peer review and citation, the quiet creep of marketing masquerading as mentorship blurs ethical lines in ways that go beyond a simple sponsored post.


It’s not difficult to understand why this strategy works. Many students and researchers are exhausted, overwhelmed by publication pressures and the endless reading demands of academia. A tool that promises to handle all that, to summarise, organise, and even “talk” to your papers, sounds like salvation. But when the message shifts from helping you think to thinking for you, it starts to feel less like empowerment and more like an invitation to disengage.


“Anything that’s being marketed as something that’s going to do part of your job for you, instead of being something that is a companion to your work, is bad,” Louise Pay told me.


Anara might genuinely be a useful tool, but the way it’s being promoted raises deeper questions about trust, transparency, and authenticity in academic spaces. What happens when the people teaching you to “do better research” aren’t mentors or peers, but marketers?

 

3. The Ethical Equation

 

The challenge of using AI in academia is not the tool itself, but how it reshapes the act of thinking. Reading, questioning, and interpreting are not chores to be automated; they are the very mechanisms through which scientific reasoning develops. Science is also not only analytical; it is deeply creative. Every step of research, from forming hypotheses to interpreting unexpected results, requires imagination, and when we automate too much, we risk shrinking the creative space where new ideas are born. Comprehension is not passive; it’s the process by which ideas collide, contradictions emerge, and new theories take shape. When we outsource that friction to a machine, we lose the part of research that transforms information into understanding. The danger isn’t only misinformation or hallucinated facts; it’s the quiet erosion of intellectual ownership. When every answer arrives pre-digested, we stop wrestling with uncertainty, and that’s where discovery lives.


There’s also a broader ethical concern about how these systems handle the materials they process (not to mention the environmental impact of all this). Much of the content uploaded into tools like Anara includes paywalled papers, copyrighted research, or sensitive datasets. Even if the platform claims not to use the data for external training, the act of mass-uploading academic papers into a private system raises legitimate questions about consent, licensing, and data security. Who owns the knowledge once it’s fed into an algorithm? And what happens when the incentive to “get results faster” outweighs the responsibility to respect authors’ rights?


Felin and Holweg’s 2024 paper, Theory Is All You Need: AI, Human Cognition, and Causal Reasoning, offers a striking lens for this discussion. They argue that AI operates through data-based prediction, while human cognition relies on theory-based causal reasoning. In simpler terms, AI learns from what already exists; humans learn by imagining what doesn’t yet exist. AI can mirror patterns in vast amounts of data, but it cannot step outside them to propose genuinely new ideas. It can reproduce the past with dazzling fluency, but not invent the future.


The authors give the example of Galileo. If an AI trained on 17th-century texts had been asked whether the Earth revolved around the Sun, it would have confidently sided with the prevailing geocentric model, because that was the dominant view in its training data. The same logic applies today: AI tools like Anara can only reinforce the patterns of knowledge already present in their datasets. They may summarise a paper perfectly, but they cannot make the conceptual leap that challenges it.


This distinction is vital for researchers. The task of science is not to repeat or repackage what is known, but to go beyond it. As Felin and Holweg put it, human thought involves forward-looking causal logic, the ability to form theories, test them, and generate new data. AI, by contrast, is backward-looking and imitative: it predicts based on correlations, not understanding. It can process the world, but it cannot intervene in it.


This is why the way Anara is marketed feels troubling. Its message, “make reading effortless”, risks confusing efficiency with insight. Reading is not wasted effort; it’s the cognitive process that allows scientists to detect gaps, errors, and new possibilities. To remove that struggle is to remove the opportunity for innovation itself.


From my point of view, this cultural shift resembles what happened last century. When calculators first entered classrooms, many feared they would destroy mathematical ability. Instead, they became tools that extended it, but only because students still learned how to calculate by hand first. The calculator amplifies existing understanding; it doesn’t create it. And crucially, there is always a human behind the calculator, someone who interprets, questions, and decides what the numbers mean.


AI in research should serve a similar purpose. It can accelerate routine work, surface patterns, and lighten the cognitive load, but it cannot grasp theory, intention, or meaning. Without a human mind interpreting and questioning its results, AI’s outputs remain static reflections of existing knowledge. At the same time, dismissing AI entirely would be shortsighted; the technology itself isn’t the enemy, it’s how we integrate it. AI can serve as a powerful partner in research, helping to identify patterns, summarise vast amounts of literature, or spark new connections that a single human reader might miss. It can democratise access for students without institutional resources, especially when paired with open science initiatives. The challenge is keeping the human at the centre of the process.


Interestingly, Louise Pay also pointed out that this shift is already well underway and that AI has become impossible to avoid. “AI is going to be everywhere. It’s in Grammarly, in Word, on every new computer. The solution isn’t ‘never use it’; it’s learning how to use it ethically, as a complement rather than something that does things for you.” She later added: “If there’s anything you’re using it for that you think you couldn’t do without AI, you need to go back and learn how to do it without AI first.”


So the ethical question is not “Should we use AI?” but “Who is doing the thinking?” The danger lies in mistaking the tool for the thinker. As Felin and Holweg argue, knowledge emerges not from data alone but from theory, curiosity, and imagination: the uniquely human capacity to ask ‘what if?’ and ‘why not?’


AI may well be the new calculator of our age: indispensable, powerful, and transformative, but only if we remember that the calculator never solved the problem. It was always the human behind it.

 

4. Conclusion

 

The rise of AI in academia reveals both our aspirations and our anxieties: the wish to think faster, produce more, and never fall behind. It’s no surprise that tools like Anara find such a receptive audience among researchers who are exhausted by the demands of productivity culture. But when the promise of convenience overshadows the purpose of curiosity, something essential begins to erode.


AI can make research more accessible and efficient, but it cannot make it more meaningful. The real risk lies not in using these systems, but in believing they can replace the messy, uncertain, and deeply human process of learning. The challenge for our generation of scientists and scholars is to integrate technology without surrendering the very skills it was meant to support: reasoning, comprehension, creativity, and doubt.


I want to be transparent here: I used AI in the making of this very piece. It helped me transcribe the videos I analysed, organise my ideas, and polish my English. Without those tools, this essay would likely have taken me twice, if not three times, as long to complete. But that assistance didn’t replace the thinking; it supported it. The arguments, structure, and reflections were still the result of reading, questioning, and wrestling with the topic myself. The AI helped me shape my words, but it didn’t tell me what to say.


And that’s the distinction I hope we don’t lose sight of. AI is a tool, one that can extend our reach but not define our reasoning. It can summarise, polish, and assist, but it cannot imagine, challenge, or feel the weight of a question. Just as a calculator can help you work through an equation but not decide what problem is worth solving, AI can help us think faster, but it cannot think for us.


Anara, in many ways, represents the dream of what such tools could be. The idea of “discussing with papers”, turning passive reading into active dialogue, is powerful and potentially transformative for how we engage with research. Yet, when a tool like this is marketed through opaque influencer campaigns and choreographed authenticity, transparency becomes the first casualty. The problem isn’t the idea, it’s how it’s being sold.

When I asked Louise Pay what she believes we, as researchers and as a broader scientific community, should do, her answer was straightforward: “Talk about it. If you see a video like this, comment underneath it saying, ‘this is an undisclosed ad.’ Make people aware. It’s not yet a threat that’s infiltrating universities, but if nobody pushes back, it could become one.” She also reassured me that part of my concern might come from being immersed in this online bubble, where the problem can start to feel larger than life. As she put it: “Most people don’t want to outsource the thing that they enjoy about their work. So it’s not a massive issue, but it needs to be talked about, so it doesn’t become one. I don’t think we’re seeing Anara going into universities and marketing to the people who purchase software for the universities. I think that’s a red flag as well.”

As researchers, our task isn’t to reject technology, but to use it wisely, to preserve the struggle, the doubt, and the intellectual friction that make discovery possible. Because in the end, progress doesn’t come from how quickly we can generate answers, but from how bravely we keep asking new questions.


This article was written by Ginevra Sperandio and edited by Rebecca Pope, with graphics produced by Ginevra Sperandio. If you enjoyed this article, be the first to be notified about new posts by signing up to become a WiNUK member (top right of this page)! Interested in writing for WiNUK yourself? Contact us through the blog page and the editors will be in touch.
