Generative AI, especially large language models (LLMs), presents exciting, unprecedented opportunities and complex challenges for academic research and scholarship.
As LLM-based tools such as ChatGPT, Gemini, Claude, Perplexity.ai and Grok continue to proliferate, academic research is beginning to undergo a significant transformation.
Students, researchers and instructors in higher education need AI literacy knowledge, competencies and skills to address these challenges and risks.
In a time of rapid change, students and academics are advised to look to their institutions, programs and units for discipline-specific policy or guidelines regulating the use of AI.
Researcher use of AI
A recent study led by a data science researcher found that at least 13.5 per cent of biomedical abstracts last year showed signs of AI-generated text.
Large language models can now support nearly every stage of the research process, although caution and human oversight are always needed to judge when use is appropriate, ethical or warranted — and to account for questions of quality control and accuracy. LLMs can:
- Help brainstorm, generate and refine research ideas and formulate hypotheses;
- Design experiments and conduct and synthesize literature reviews;
- Write and debug code;
- Analyze and visualize both qualitative and quantitative data;
- Develop interdisciplinary theoretical and methodological frameworks;
- Suggest relevant sources and citations, summarize complex texts and draft abstracts;
- Support the dissemination and presentation of research findings in popular formats.
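To make one of these uses concrete, here is a minimal sketch of how a researcher might script an LLM to summarize an abstract. It assumes the openai Python package and an API key set in the environment; the model name is a placeholder, and the same pattern works with any provider's chat API.

```python
# A minimal sketch: asking an LLM to summarize a paper abstract.
# Assumes the `openai` package (pip install openai) and an OPENAI_API_KEY
# environment variable; the model name below is a placeholder.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

abstract = "..."  # paste the abstract to be summarized here

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; substitute any available model
    messages=[
        {
            "role": "system",
            "content": (
                "You are a careful research assistant. Summarize accurately "
                "and do not invent findings or citations."
            ),
        },
        {
            "role": "user",
            "content": f"Summarize this abstract in three sentences:\n\n{abstract}",
        },
    ],
)

print(response.choices[0].message.content)
```

As the paragraph above stresses, output from a script like this still needs human verification before it enters any scholarly work.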
However, there are significant concerns and challenges surrounding the appropriate, ethical, responsible and effective use of generative AI tools in the conduct of research, writing and research dissemination. These include:
- Misrepresentation of data and authorship;
- Difficulty in replication of research results;
- Data and algorithmic biases and inaccuracies;
- User and data privacy and confidentiality;
- Quality of outputs, data and citation fabrication;
- And copyright and intellectual property infringement.
AI research assistants, ‘deep research’ AI agents
There are two categories of emerging LLM-enhanced tools that support academic research:
1. AI research assistants: The number of AI research assistants that support different aspects and steps of the research process is growing rapidly. These technologies have the potential to enhance and extend traditional research methods in academic work. Examples include AI assistants that support:
- Concept mapping (Kumu, GitMind, MindMeister);
- Literature and systematic reviews (Elicit, Undermind, NotebookLM, SciSpace);
- Literature search (Consensus, ResearchRabbit, Connected Papers, Scite);
- Literature analysis and summarization (Scholarcy, Paper Digest, Keenious);
- And research topic and trend detection and analysis (Scinapse, tlooto, Dimension AI).
2. ‘Deep research’ AI agents: The field of artificial intelligence is advancing quickly with the rise of “deep research” AI agents. These next-generation agents combine LLMs, retrieval-augmented generation and sophisticated reasoning frameworks to conduct in-depth, multi-step analyses.
Researchers are now evaluating the quality and effectiveness of these deep research tools, and new criteria are being developed to assess their performance. The criteria include cost, speed, editing ease and overall user experience, as well as citation and writing quality and how closely the tools adhere to prompts.

The purpose of deep research tools is to meticulously extract, analyze and synthesize scholarly information, empirical data and diverse perspectives from a wide array of online and social media sources. The output is a detailed report, complete with citations, offering in-depth insights into complex topics.
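To show how the pieces described above fit together, here is a deliberately simplified sketch of one retrieve-and-generate cycle. TF-IDF similarity stands in for a real embedding model, the three-passage corpus is invented, and ask_llm is a hypothetical placeholder for any LLM call; actual deep research agents repeat this cycle many times, generating follow-up queries and assembling a cited report.

```python
# A simplified sketch of one retrieval-augmented generation (RAG) cycle.
# TF-IDF stands in for a real embedding model, and `ask_llm` is a
# hypothetical placeholder for any LLM API call.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Toy "document store" of source passages (invented for illustration).
corpus = [
    "Passage 1: empirical findings on topic A from a 2024 field study.",
    "Passage 2: methods commonly used to investigate topic B.",
    "Passage 3: a review of ongoing scholarly debates about topic A.",
]

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k passages most similar to the query."""
    vectorizer = TfidfVectorizer().fit(corpus + [query])
    scores = cosine_similarity(
        vectorizer.transform([query]), vectorizer.transform(corpus)
    )[0]
    return [corpus[i] for i in scores.argsort()[::-1][:k]]

def ask_llm(prompt: str) -> str:
    # Hypothetical stand-in: replace with a real LLM API call.
    return f"[an LLM would answer here, given this prompt]\n{prompt}"

query = "What is known about topic A?"
context = "\n".join(retrieve(query))
print(ask_llm(
    f"Answer using only the context below, and cite the passages you use.\n"
    f"Context:\n{context}\n\nQuestion: {query}"
))
```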
In the span of just a few months (December 2024 to February 2025), several companies, including Google (Gemini), Perplexity.ai and OpenAI (ChatGPT), introduced their “deep research” platforms.
The Allen Institute for Artificial Intelligence, a non-profit AI research institute based in Seattle, is experimenting with a new open access research tool called Ai2 ScholarQA that helps researchers conduct literature reviews more efficiently by providing more in-depth answers.
Emerging guidelines
Several guidelines have been developed to encourage the responsible and ethical use of generative AI in research and writing.
LLMs support interdisciplinary research
LLMs are also powerful tools to support interdisciplinary research. Emerging research (yet to be peer reviewed) on the effectiveness of LLMs for research suggests they have great potential in areas such as the biological sciences, chemical sciences, engineering, and the environmental and social sciences. It also suggests LLMs can help break down disciplinary silos by bringing together data and methods from different fields and by automating data collection and generation to create interdisciplinary datasets.
By helping to analyze and summarize large volumes of research across various disciplines, LLMs can also aid interdisciplinary collaboration. “Expert finder” AI-powered platforms can analyze researcher profiles and publication networks to map expertise, identify potential collaborators across fields and reveal unexpected interdisciplinary connections.
This emerging knowledge suggests these models will be able to help researchers drive breakthroughs by combining insights from diverse fields — like epidemiology and physics, climate science and economics or social science and climate data — to address complex problems.
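As a toy illustration of the “expert finder” idea, the sketch below represents co-authorships as a graph and flags collaborations that bridge fields. The researchers, fields and edges are invented, and it assumes the networkx package; real platforms apply the same basic idea to millions of publication records.

```python
# A toy "expert finder": model co-authorships as a graph and surface
# collaborations that cross disciplinary boundaries. All data here is
# invented for illustration; assumes the networkx package.
import networkx as nx

G = nx.Graph()

# Nodes are researchers, each tagged with a home discipline.
G.add_node("Researcher A", field="epidemiology")
G.add_node("Researcher B", field="physics")
G.add_node("Researcher C", field="economics")
G.add_node("Researcher D", field="climate science")
G.add_node("Researcher E", field="physics")

# Edges are co-authored publications.
G.add_edges_from([
    ("Researcher A", "Researcher B"),
    ("Researcher B", "Researcher E"),  # same-field collaboration
    ("Researcher C", "Researcher D"),
    ("Researcher B", "Researcher D"),
])

# Flag cross-field edges: candidate interdisciplinary connections.
for u, v in G.edges:
    fu, fv = G.nodes[u]["field"], G.nodes[v]["field"]
    if fu != fv:
        print(f"{u} ({fu}) <-> {v} ({fv})")
```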
Research-focused AI literacy
Canadian universities and research partnerships are providing AI literacy education to people in universities and beyond.
The Alberta Machine Intelligence Institute offers K-12 AI literacy programming and other resources. The institute is a not-for-profit organization and part of Canada’s Pan-Canadian Artificial Intelligence Strategy.
Many universities are offering AI literacy educational opportunities that focus specifically on the use of generative AI tools in assisting research activities.
Collaborative university work is also happening. For example, as vice dean of the Faculty of Graduate & Postdoctoral Studies at the University of Alberta (and an information science professor), I have worked with deans from the University of Manitoba, the University of Winnipeg and Vancouver Island University to develop guidelines and recommendations around generative AI and graduate and postdoctoral research and supervision.

Considering the growing power and capabilities of large language models, there is an urgent need to develop AI literacy training tailored for academic researchers.
This training should focus on both the potential and the limitations of these tools at the different stages of the research and writing process.

The post “How large language models are transforming research” by Ali Shiri, Professor of Information Science & Vice Dean, Faculty of Graduate & Postdoctoral Studies, University of Alberta was published on 07/21/2025 by theconversation.com