Reclaiming our words: how generative AI helps multilingual scholars find their voice

Dr Lynette Pretorius

Dr Lynette Pretorius is an award-winning educator and researcher specialising in doctoral education, academic identity, student wellbeing, AI literacy, autoethnography, and research skills development.


Redi Pudyanti

Redi Pudyanti is an educator and researcher pursuing her PhD on the influence of local wisdom on graduate employability. Her other research interests are Indigenisation, decolonisation, and generative AI.


Acknowledgement: This blog post is a shortened version of the presentation we are delivering at the Higher Education Research and Development Society of Australasia (HERDSA) Conference 2025. We acknowledge the other co-authors of our paper, as it was a truly collaborative project: Huy-Hoang Huynh, Ziqi Li, Abdul Qawi Noori, and Zhiheng Zhou. As the South African proverb says: “If you want to run fast, run alone; if you want to run far, run together”.

In the halls of academia, where prestige often correlates with fluency in a particular kind of English communication, having a voice can feel like a privilege, not a right. For multilingual scholars, this can create a disconnect between who they are and what academia expects of them. These scholars have rich and diverse intellectual contributions, but these are often filtered, flattened, or forgotten by the English-language customs of academia. This isn’t just about grammar or vocabulary. It’s about whose knowledge counts, whose voice is deemed legitimate, and how power circulates in scholarly spaces.

Academic writing is often seen as a neutral skill: something that anyone can learn with enough practice, feedback, and hard work. In reality, though, this idea of neutrality is misleading. Beneath the surface, academic writing carries a host of hidden expectations: about how to structure an argument, what kind of tone sounds “professional”, which sources are seen as credible, and even what types of ideas are considered valuable or “good”. These expectations aren’t universal: they’re shaped by English-speaking academic traditions and Western ways of thinking.

For multilingual scholars, especially those coming from different cultural and educational backgrounds, this can feel like stepping into a performance where the rules haven’t been explained. They’re expected not just to write clearly, but to sound a certain way: to mirror the phrasing, logic, and stylistic choices of native English speakers who have been immersed in these Western ways of thinking from an early age. It’s a bit like being asked to join a play mid-scene, in a language that’s not your own, with the added pressure of sounding polished and convincing. The result is often a quiet and persistent pressure to conform: to smooth out cultural expression, to set aside familiar ways of knowing, and to rewrite one’s voice to match what academia deems “legitimate”. In this context, writing isn’t just about communicating ideas; it becomes a test of belonging.

This experience can be deeply isolating. When your ideas are dismissed because they don’t fit a particular format, or when you constantly feel like your writing is being judged through the lens of language proficiency rather than substance, it can leave you feeling invisible. For many multilingual scholars, it’s not just a matter of learning the rules; it’s the emotional weight of having to silence parts of who you are just to be taken seriously. Over time, this can lead to a sense of marginalisation, where your contributions feel undervalued, and your cultural perspective feels out of place. It’s certainly not that these scholars lack ideas or insight; it’s that the academic system often fails to make room for how those ideas are expressed.

We have found that these challenges can make academic life feel like a constant uphill battle, especially when the very structures meant to support learning and innovation exclude our ways of thinking and being. Yet, rather than remain silent or adapt unquestioningly, we have been actively seeking new ways to engage with academia in ways that honour both our cultural identities and scholarly ambitions. This is where our latest research began: with a shared desire to not only survive academia, but to reshape it. Through community, reflection, and the careful integration of generative AI, we began to imagine what a more just and inclusive academic future could look like.

Writing together, thinking together: a decolonising vision for academic writing

Our new paper offers a timely vision of the future: one where academic spaces are reimagined as inclusive, relational, and linguistically diverse, and where generative AI is embraced not as a threat to academic integrity and rigour, but as a partner in knowledge creation. To develop this vision of academia, we combined the Southern African philosophy of Ubuntu (a philosophy that says, “I am because we are”) with collaborative autoethnography and the strategic use of generative AI, reframing the technology as a relational tool for epistemic justice. As noted in another blog post, epistemic justice is about fairness: it ensures that everyone’s voice and knowledge are equally respected, no matter where they come from or how they express themselves. In our vision of a more just and inclusive academic future, multilingual scholars will feel empowered to contribute fully, confidently, and in ways that honour their linguistic and cultural identities within global scholarly conversations.

One of the most important parts of our study was how we chose to think about and use generative AI. Ubuntu reminded us that we’re shaped by our relationships with others, and that knowledge and growth are shared, not owned by any one person. In many academic settings, writing is treated as something you do alone. Seeing academic writing through the Ubuntu philosophy, however, we saw knowledge creation and dissemination as something academia should do together. In our group, we gave each other feedback not to criticise, but to support and learn from one another. In this spirit, generative AI became more than just a helper. It became a kind of thinking partner that joined us in our conversations, helping us express our ideas more clearly while still keeping our voices true to who we are. Our generative AI use empowered us while also honouring our identities as multilingual speakers engaging with global academia.

At its heart, our work is about challenging the status quo in academia: we aim to decolonise how knowledge can be created and shared in academia. As shown in our figure below, we started with Ubuntu, a philosophy that puts relationships, community, and shared responsibility at the centre. From there, we used a method called collaborative autoethnography, which allowed us to tell our personal stories, learn from each other in a supportive, reflective way, and explore the cultural complexities present within academia. Then we brought in generative AI, not to make our writing faster, but to help us express our ideas more clearly, question academic norms, and speak up in ways that felt true to ourselves. These three elements aren’t separate steps. Like threads in a tapestry, they are woven together to create a new way of doing research. Together, they helped us imagine a more inclusive kind of academic voice: one that’s ethical, shared, and shaped by many perspectives, not just one. The dots in the diagram show how these ideas flow between people, values, and technology, all working together to build a better future for academic work.

Stories of reclamation and agency

One of the most vivid examples from the study involves the translation of a Chinese idiom which, when processed through a conventional tool, was reduced to a flat literalism: “seeing flowers in the mist, looking at the moon in the water”. While technically accurate, the translation missed the metaphorical essence of the idiom. When Ziqi, one of the authors, posed the same phrase to ChatGPT, the response captured both the poetic beauty and interpretive depth she needed by offering: “The situation is shrouded in mystery, constantly shifting, and challenging to grasp”. In that moment, the idiom didn’t just survive translation, it transcended it. For Ziqi, this wasn’t merely a linguistic success; it was a profound moment of affirmation. Her cultural ways of knowing embedded in the metaphor’s symbolism and rhythm didn’t have to be abandoned or diluted to be legible in academic English. They could be translated with meaning, not despite it.

For others in the group, generative AI proved equally transformative in different contexts. It supported the generation of constructive feedback, assisted in structuring complex presentations, and offered clarity around dense theoretical frameworks. In Redi’s case of balancing the demands of doctoral research with motherhood, generative AI became an unexpected ally in maintaining wellbeing. Whether generating weekly schedules, planning meals, or brainstorming research questions, it helped her lighten the cognitive load, carving out space for reflection, family, and rest.

Importantly, though, our paper isn’t a love letter to generative AI. We are acutely aware of the ethical tensions. Generative AI tools are shaped by the biases of their training data, data which are often steeped in colonial logics, linguistic hierarchies, and Western-centric perspectives. The risk of overreliance or uncritical adoption is real. Yet, what shines through in our reflections is not techno-optimism, but intentionality. We didn’t blindly accept what generative AI offered. Instead, we engaged with it critically, revising, interrogating, and adapting output to ensure the content preserved cultural nuance and scholarly integrity.

Implications for academia

What we did wasn’t about quietly fitting in or changing ourselves to match the usual expectations of academic writing. It was something more powerful: an act of academic reclamation. We used generative AI thoughtfully and with care, not to erase our voices, but to amplify them. Our voices are shaped by different cultures, languages, and ways of thinking, and these types of voices don’t always fit neatly into the typical mould of English-speaking academia. By working with generative AI, not just relying on it, we found ways to express our ideas more clearly without losing who we are. We’re not just trying to keep up, we’re helping to change what academic writing can be. We’re showing that it’s possible to honour cultural and linguistic diversity in research, and that there’s real value in broadening what counts as a “legitimate” academic voice. In doing so, we’re not just joining the conversation, we’re reshaping it. Prompt by prompt, paragraph by paragraph, we’re building an academic world that listens to more voices, tells more stories, and reflects more diverse ways of knowing.

Our study calls on educators, institutions, publishers, and policy-makers to rethink what counts as “good writing” and whose voices are heard in academic discourses. It invites everyone to question the academic orthodoxy that frames multilingual ways of thinking as flawed or generative AI use as inherently dishonest. It shows that when used ethically and reflexively, generative AI can level the playing field, not by simplifying scholars’ ideas, but by enabling them to be expressed more fully. By integrating Ubuntu, collaborative autoethnography, and generative AI, we empowered each other and contributed to decolonising the academy by advancing non-traditional voices. Our research presents a compelling vision for a more inclusive academy: one where multilingualism is celebrated, not hidden, and where academic voice is something to be reclaimed, not earned through conformity. As Lynette notes in the paper:

I have also had many discussions with colleagues in other countries who seem to believe that the use of generative AI has led to the loss of academic rigour or critical thinking in students’ work. They either lament that they cannot clearly detect AI written work with tools such as Turnitin, or claim that whenever they see the words “delve” or “tapestry” they know that it is written by AI and should therefore be considered as cheating. […] I see this viewpoint as a form of academic orthodoxy, where written academic work is considered “rigorous” only when it has been written as it has always been. […] I wonder whether the same debates were circulating in academia when the typewriter was invented and those who used to write academic missives by hand thought that the typewriter would be a danger to academic rigour?

By the way, Lynette is also sharing two other studies at this year’s HERDSA conference. Follow the links below to explore those studies in more detail.

Questions to ponder

Whose standards define “good” academic writing? How do linguistic norms in academia privilege certain voices while marginalising others? In what ways might generative AI disrupt or reinforce these norms, and what responsibilities do scholars have in shaping its use?

Can technology be decolonial? Given that most generative AI tools are trained on predominantly Western data sources, is it possible for them to support decolonial knowledge practices? What conditions would need to be met for generative AI to serve as a truly inclusive and relational academic partner?

What does ethical generative AI use look like? Reflecting on the Ubuntu-inspired approach described in the post, how can scholars use generative AI tools ethically, without losing their cultural specificity or scholarly voice? How might institutions better support this kind of critical and agentive generative AI engagement?

Call for abstracts for our book on positionality and reflexivity in research!

Dr Lynette Pretorius


We are seeking expressions of interest for our new book provisionally titled Positionality & Reflexivity in Research (Editors: Sun Yee Yip and Lynette Pretorius from Monash University).

Whose research is it? Who owns it? Whose interests does it serve? Who benefits from it? Who has designed its questions and framed its scope? Who will carry it out? Who will write it up? How will its results be disseminated?

(Smith, 2021, p. 10)

Research across various knowledge traditions has challenged the notions of neutrality and objectivity, increasingly recognising that the framing of a research problem is inextricably linked to who is granted the power to participate in knowledge creation within the institutional spaces of the academy, and who has access to that knowledge.

To address the presence and impact of knowledge makers on the forms that knowledge takes, social science research has introduced “position” and often “positionality statements” as genres in which researchers typically consider certain social identities, including but not limited to race, class, gender, sexual orientation, and (dis)ability. A researcher’s positionality can influence all aspects of the research process, including study design, research questions, data collection, and analysis; understanding one’s positionality can shape the outcomes and trustworthiness of the results (Yip, 2023). While traditionally a common feature in qualitative research, some researchers have recently also argued for its relevance in quantitative research (Jamieson et al., 2022).

Reflexivity, the process of critically examining one’s assumptions, biases, and perspectives and how they might impact the research process, is considered a fundamental element in addressing a researcher’s positionality. It challenges researchers to critically analyse their positionality (their role, assumptions, and influence on the research process) and to reflect on how their engagement shapes their understanding of the issue under investigation, their research design, their findings, the theories they develop, and the communication of results (Addyman, 2025; Smith, 2021).

Yet, despite the growing recognition of the importance of positionality and reflexivity, there remains a surprising lack of evidence in resulting publications of researchers explicitly addressing their lived experiences in the field and how they practice reflexivity. This lack of transparency obscures the iterative and adaptive role that reflexivity plays in shaping research practices, insights, and contributions to theory development. By conceptualising their positionality and embracing reflexivity more effectively, researchers can examine their impact on the research process, reveal their work’s relational and emotional dynamics, and contribute in academically rigorous and practically relevant ways. 

Given the increasing demand for researchers to disclose their positions in relation to the research they conduct and articulate their reflexivity practices, we invite chapters that offer profound and critical insights into personal experiences of examining positionalities and engaging in reflexivity within your research. This may pertain to your PhD projects or beyond.

We suggest that the chapter address (but is not limited to) the following guiding questions:

  1. What is your research about?
  2. What motivated you to embark on this research?
  3. What is your position or standpoint in relation to your research?
  4. How does your position impact different aspects of your research (e.g., research design, methodology, findings/results, theorisation)?
  5. How did you practice reflexivity? What strategies did you adopt/not adopt? Why did you utilise these strategies? In what ways are they helpful/unhelpful? What are the challenges? How did you address these challenges/overcome them?
  6. What did you learn in the process?
  7. How has this shaped your future practice?

Please express your interest by submitting an abstract/chapter proposal of no more than 500 words by clicking on the button below. You can also refer to this list, which we regularly update to reflect relevant published work on this topic.

All abstracts must be submitted by the 31st of July 2025. Notices of abstract acceptance will be emailed to authors by the 30th of September 2025.

If your abstract is accepted, you will develop your chapter, bearing in mind the following features: 

  • Write in the first-person narrative style. 
  • Chapters should be between 5,000 and 6,000 words, including abstract, references, footnotes, figures, and tables.
  • Include a 200-word abstract at the start of the chapter.
  • Use APA 7 referencing style.

The full manuscript is due on 31 January 2026 and will be subjected to rigorous peer review. 

If you have any questions regarding the expression of interest process, please email:
Dr Sun Yee Yip at sunyee.yip@monash.edu 

Learn the basics of generative AI

The AI Literacy Lab

Republished from The AI Literacy Lab.

The AI Literacy Lab is an interdisciplinary collaboration of educators and researchers investigating the implications of generative AI for teaching, learning, and scholarly inquiry.

Generative AI is reshaping the world, one image, paragraph, and data point at a time. Whether you’re a curious newcomer, an educator trying to keep up with the latest trends, or a student dipping your toes into artificial intelligence, you’re in the right place. Let’s unpack what generative AI actually is, why it matters, and how you can begin your learning journey with a few hand-picked videos.

What is generative AI?

Generative AI refers to a class of artificial intelligence that can create new content like text, images, music, code, and more. Think of it as a creative partner trained on vast amounts of data. These systems learn patterns, styles, and structures, and then use that knowledge to generate novel outputs that often feel surprisingly human-like.

Why should you care?

Generative AI isn’t just a tech trend, it’s a shift in how we produce knowledge, express creativity, and interact with machines. For educators, it’s reshaping pedagogy. For students, it’s changing how assignments are written and evaluated. For researchers, it’s opening up new methods of inquiry, simulation, and communication.

How do you use generative AI?

Prompt design is crucial when using generative AI because the quality of your prompt directly shapes the relevance, clarity, and creativity of the AI’s response. Well-crafted prompts help you guide generative AI more effectively, turning it into a powerful tool for learning, research, and problem-solving.
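To make “well-crafted” a little more concrete, here is a minimal, hypothetical sketch in Python of how a prompt might be assembled from explicit components (role, context, task, and constraints). The function and its components are illustrative assumptions for this post, not part of any particular AI tool’s interface.

```python
def build_prompt(role: str, context: str, task: str, constraints: list[str]) -> str:
    """Assemble a structured prompt from explicit components.

    Spelling out the role, context, task, and constraints tends to yield
    more relevant and consistent responses than a bare question.
    """
    constraint_lines = "\n".join(f"- {c}" for c in constraints)
    return (
        f"You are {role}.\n\n"
        f"Context: {context}\n\n"
        f"Task: {task}\n\n"
        f"Constraints:\n{constraint_lines}"
    )


# A vague prompt leaves everything to the model...
vague = "Explain generative AI."

# ...while a structured prompt guides it explicitly.
structured = build_prompt(
    role="an educator introducing AI literacy to first-year students",
    context="The students have no technical background.",
    task="Explain what generative AI is in under 150 words.",
    constraints=["Use one everyday analogy.", "Avoid jargon."],
)
print(structured)
```

The exact wording matters far less than the habit: naming your audience, purpose, and limits before you ask.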

Developing AI literacy

Understanding the basics isn’t just about staying relevant; it’s about becoming literate in a rapidly evolving digital world. AI literacy is now a key component of digital citizenship, academic integrity, and lifelong learning.

Want to learn more? These posts are a good start:

Call for chapters for our new open access book on AI!

The AI Literacy Lab

Republished from The AI Literacy Lab.

Are you exploring how generative AI is transforming the research landscape? Have you developed innovative approaches, ethical insights, or practical applications regarding AI in research? If so, we invite you to contribute a chapter to our forthcoming open access book: Generative AI-Enhanced Research: Ethical, Practical, and Transformative Approaches.

This edited collection will serve as a go-to resource for researchers, academics, educators, and students interested in harnessing generative AI tools across the research lifecycle. Our aim is to showcase a diverse range of perspectives, theoretical frameworks, and methodological innovations that illuminate the evolving role of AI in academic work.

We welcome contributions in the form of conceptual papers, empirical studies, reflective case narratives, and practical guides. Key areas of interest include (but are not limited to):

  • Ethical challenges and considerations in generative AI-enhanced research
  • Generative AI in research design and literature review
  • Generative AI in data collection and analysis
  • Generative AI in writing, publishing, and dissemination
  • Generative AI and research training, critical thinking, and future trends

Interested? Learn more and submit your abstract here.

Abstracts are due by 30 June 2025!

Update: Abstract deadline extended to 18 July 2025 – get your abstracts in quick if you haven’t yet!

Join us at the 2025 International Conference on AI for Higher Education!

The AI Literacy Lab

Republished from The AI Literacy Lab.

You are warmly invited to participate in the International Conference on AI for Higher Education (AI4HE). Facilitated by the Human-AI Collaborative Knowledgebase for Education and Research (HACKER) and the AI Literacy Lab, the conference provides an opportunity to share knowledge of AI in Higher Education, network with peers and participate in practical workshops.

The conference will be on 26 and 27 November 2025 and will run electronically through Zoom. The conference is FREE 🙂

Presentations can take various formats and should focus on the use of generative AI in higher education settings. Some questions you can use to prompt your thinking are:

  • What constitutes AI literacy for researchers today?
  • How can we effectively embed AI literacy into research training and higher education curricula?
  • What new methodological possibilities or tensions arise when generative AI is integrated into the research process?
  • How do we ethically use generative AI in research without compromising scholarly integrity, originality, trustworthiness, and rigour?
  • Who gets to decide what constitutes ‘authorship’ or ‘contribution’ when generative AI tools are involved in the production of knowledge?
  • How does the use of generative AI in research reshape our understanding of the researcher’s role, voice, and epistemic authority?
  • What does it mean to ‘position oneself’ in relation to a generative AI tool? Is it a collaborator, instrument, co-author, or something else entirely?

Abstracts are due by the 20th of June. To submit an abstract or register to attend, click on the button below. See you there!

ChatGPT as a qualitative research partner

Dr Lynette Pretorius


Chris Pretorius

Chris Pretorius is a doctoral candidate specialising in spiritual health and practice, with an interest in the intersections between technology and theology.

The rise of generative AI has sparked new conversations about its role in academic research. While generative AI tools like ChatGPT have proven effective for summarisation, pattern recognition, and text classification, their potential in deep, interpretive qualitative data analysis remains underexplored. In our recent study, we examine the integration of ChatGPT as an active collaborator in qualitative data analysis. Our findings highlight ChatGPT’s ability to streamline initial coding, enhance reflexivity and higher-order thinking, and support knowledge co-construction while emphasising the necessity of human oversight.

Our study marks an exciting step forward in the integration of generative AI into qualitative inquiry. By approaching generative AI as a partner rather than a passive tool, we believe researchers will be able to harness its potential while preserving the richness and depth that define qualitative research.

As illustrated in another blog post, qualitative data analysis is often a laborious process, requiring meticulous coding, interpretation, and reflection. Traditional computer-assisted qualitative data analysis software, such as NVivo and MAXQDA, has long been used to help streamline aspects of qualitative data analysis. However, generative AI, and specifically ChatGPT, introduces an additional layer of adaptability, offering real-time feedback and dynamic analytical capabilities. This made us wonder how effective it would be in the qualitative data analysis process.

In our paper, we explore how ChatGPT can function beyond a simple data processing tool by actively participating in the interpretive process. Rather than merely classifying text, we found that ChatGPT could highlight implicit themes, suggest theoretical frameworks, and prompt deeper reflections on the data from both the researcher and participant. However, ChatGPT’s capacity is highly contingent on the researcher’s ability to craft well-designed prompts.

One of the key takeaways from the study is the significance of effective prompt design. We note that ChatGPT’s responses were only as good as the prompts it received. Initially, we found that ChatGPT’s responses lacked depth or were fixated on single aspects of a topic while neglecting others. By refining our prompts, explicitly defining key concepts, and structuring questions carefully, we were able to guide ChatGPT toward more nuanced and insightful analyses.

We developed a series of 31 prompts to explore our dataset (see the prompts here). This iterative prompting process not only improved ChatGPT’s analytical output but also helped the researcher clarify her own theoretical perspectives. Our study consequently frames this prompt design process as a reflexive exercise, demonstrating how the act of crafting prompts can refine a researcher’s conceptual thinking and analytical approach.
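As an illustration of this kind of refinement (a hypothetical sketch, not the study’s actual prompts), the Python example below shows how an initial coding prompt might be revised to define a key concept explicitly before asking the model to apply it:

```python
from typing import Optional


def coding_prompt(excerpt: str, concept: str, definition: Optional[str] = None) -> str:
    """Build a qualitative-coding prompt for one data excerpt.

    An initial prompt simply names the concept; a refined prompt adds an
    explicit definition so the model applies the concept as the
    researcher intends.
    """
    prompt = (
        f"Identify whether the concept '{concept}' is present in the "
        f"following excerpt, and justify your answer.\n\n"
        f"Excerpt: {excerpt}"
    )
    if definition is not None:
        prompt = f"Definition: '{concept}' here means {definition}\n\n" + prompt
    return prompt


excerpt = "I kept publishing because that is simply what counts in my field."

# Initial prompt: leaves the model to interpret the concept on its own.
initial = coding_prompt(excerpt, "illusio")

# Refined prompt: pins down the intended meaning before asking.
refined = coding_prompt(
    excerpt,
    "illusio",
    definition="investment in the socially constructed values within a field.",
)
```

Writing the definition down is itself a reflexive act: the researcher must articulate exactly what the concept means before the model can look for it.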

An unexpected yet valuable outcome of using ChatGPT in the research process was its ability to stimulate the researcher’s higher-order thinking. By engaging with the ChatGPT-generated interpretations, the researcher was prompted to critically assess underlying assumptions, refine theoretical lenses, and explore alternative perspectives she might not have initially considered. This process encouraged deeper engagement, pushing the researcher to interrogate her own biases and methodological choices. As a result, the interaction with ChatGPT became an intellectual exercise in itself, allowing the researcher to refine and expand her analytical thinking in ways that traditional methods may not have facilitated as effectively.

One of the most striking findings from our study was ChatGPT’s ability to uncover implicit meanings within qualitative data. For example, when asked about concepts like “illusio” (investment in the socially constructed values within a field), ChatGPT was able to infer instances of this concept even when it was not explicitly mentioned in the data. However, we also found that the ChatGPT-generated interpretations sometimes diverged from participants’ own perspectives. This emphasises the critical role of human oversight. Generative AI lacks self-awareness (at least at the moment!), meaning that its responses must be carefully evaluated. Generative AI can be a powerful tool for organising and prompting analysis, but it is the researcher’s interpretive lens that ultimately determines the depth and rigour of qualitative inquiry.

One of the most innovative aspects of our study is its participatory approach, in which both the researcher and the participant engaged with ChatGPT’s analyses. Instead of using generative AI as a behind-the-scenes tool, the study involved participants in critically appraising ChatGPT’s findings, thereby decentralising the researcher’s authority over data interpretation. This triadic model (researcher, participant, and ChatGPT) fostered greater participant agency in the research process. By giving participants the opportunity to review and respond to ChatGPT-generated interpretations, we ensured that the generative AI-assisted analyses did not overwrite or misrepresent participants’ lived experiences. This approach not only enhanced the ethical integrity of the generative AI-assisted research but also enriched the depth and authenticity of the findings.

Questions to ponder

What are the potential benefits and risks of using AI tools like ChatGPT in qualitative research?

How can researchers ensure that ChatGPT-assisted analyses remain ethically sound and participant-driven?

The ETHICAL framework for responsible generative AI use

Dr Lynette Pretorius


The advent of generative artificial intelligence (GenAI) has opened up transformative possibilities in academic research. Tools like ChatGPT, Gemini, and Claude hold the potential to help with idea and content development, structure and research design, literature review and synthesis, data management and analysis, as well as proofreading and editing. However, as enticing as these advancements are, they bring ethical challenges that require careful navigation. To bridge this gap between potential and responsibility, my colleagues and I developed the ETHICAL framework for GenAI use, which has just been published open access!

The ETHICAL framework offers a structured approach, with each letter in the acronym representing a principle that users should embed into their practices. The framework has been summarised in this handy picture.

The ETHICAL Framework for Responsible Generative AI Use, republished from here under a CC-BY license.

Examine policies and guidelines
Researchers must consult international, national, and institutional GenAI policies. This involves not only aligning with global GenAI ethics recommendations but also understanding the specifics of local guidelines. Adhering to these ensures compliance and fosters trust. As an example, my institution has an entire policy suite relating to responsible GenAI use in both teaching and research.

Think about the social impacts
GenAI can reinforce biases and perpetuate inequalities. Researchers should critically evaluate the societal consequences of using GenAI, considering both environmental sustainability and digital equity.

Harness understanding of the technology
A robust understanding of how GenAI tools operate (beyond their surface-level functionalities) is essential. Researchers must grasp the limitations and ethical implications of the technologies they use and should promote AI literacy within their academic communities. I have written other blog posts about what AI literacy is and how you can build your AI literacy. This handy video explains the components of AI literacy.

Indicate use transparently
Transparency is key to maintaining academic integrity. Researchers should explicitly disclose where and how GenAI tools were used, documenting their role in the research process. This fosters accountability and mitigates risks related to copyright and authorship disputes. This video provides a simple guide to formatting GenAI acknowledgements.

Critically engage with outputs
GenAI outputs are not infallible and require rigorous validation. Researchers bear the ultimate responsibility for ensuring that GenAI-generated content aligns with disciplinary standards and is free from inaccuracies or ethical breaches.

Access secure versions
Security and privacy are paramount when using GenAI. Free versions of tools may not offer adequate protections for sensitive data, underscoring the need for secure, institutional subscriptions or private deployments of GenAI models.

Look at user agreements
Many GenAI tools have complex user agreements, which can have significant implications for data ownership and privacy. Researchers should carefully review these terms to ensure ethical compliance and to safeguard their intellectual property.

The ETHICAL framework encourages universities to incorporate AI literacy into their curricula, ensuring that both students and faculty are prepared to navigate the ethical complexities of GenAI-enhanced research. It is also not just a set of guidelines; it is a call to action. For educators, researchers, and institutions alike, the message is clear: the future of GenAI in higher education depends on our collective ability to navigate its challenges responsibly. The ETHICAL framework provides a compass for doing just that, fostering a research culture that is as ethical as it is forward-thinking.

Questions to ponder

How can universities integrate AI literacy into their existing curricula effectively?

What steps can researchers take to ensure equitable access to GenAI tools across diverse socio-economic contexts?

How should publishers and peer-review committees adapt to the growing use of GenAI in manuscript preparation?

Whose story is it anyway? The transformative power of pseudonym choice

Dr Lynette Pretorius


Dr Sweta Vijaykumar Patel

Dr Sweta Vijaykumar Patel is a lecturer, researcher, and mentor specialising in early childhood education, creative methodologies, teacher education, and culturally responsive pedagogy.

As qualitative researchers, we’ve often used pseudonyms in our work to protect the identities of participants. It’s a standard practice and one that’s meant to safeguard confidentiality while ensuring their stories remain authentic. But recently, we conducted a study that made us pause and rethink how we approach pseudonyms. It highlighted the power of inviting participants to choose their own pseudonyms and how that simple act can transform the research process.

In our study, 40 doctoral students shared their experiences of academia, and part of that was choosing pseudonyms for themselves and their institution. They were also asked to explain why they chose those names. Reading through their choices, we were struck by how much thought and emotion they poured into these names. For some, the pseudonym chosen was deeply personal. One participant, for instance, chose “Chess” to reflect their strategic navigation through life as an autistic, trans individual. Another participant selected “Kurdi,” proudly emphasising their Kurdish heritage and lifelong pursuit of knowledge. These names were more than identifiers; they were declarations of identity, resilience, and aspiration.

Some picked hopeful names like “The University of Dreams” for their institutions, reflecting admiration or ambition. But not all pseudonyms were positive. One participant, for example, referred to their institution as “The University of Business,” critiquing the commodification of education. Another chose a name layered with nuance, highlighting disillusionment with systemic issues they had encountered. These choices offered us a window into their experiences, highlighting both their struggles and triumphs.

Letting participants name themselves isn’t just a small methodological tweak; it’s a purposeful act of empowerment. By giving participants the opportunity to take control of their own representation, we were able to disrupt the traditional power dynamics that so often define research. It wasn’t just about collecting data; it was about fostering trust, collaboration, and authenticity.

Of course, there are challenges with this approach. Participants might feel pressure to choose names that conform to researchers’ expectations or worry about how their pseudonyms will be interpreted. It also takes time and effort to create a supportive environment where participants feel comfortable making these decisions. But the benefits (including greater trust, richer data, and more ethical representation) far outweigh the hurdles.

Conducting this study has changed how we think about our own research practices. It’s a reminder that the small details, even something as simple as giving a participant the chance to name themselves, can carry huge implications. When participants take control of their representation, it deepens the authenticity of their stories and strengthens the research process.

We also see this as a challenge to examine the systems within which we work. We’ve shown that names aren’t just labels; they’re an opportunity for participants to reclaim their stories, critique their environments, and express their identities on their own terms. As noted in another blog post, epistemic justice is about fairness: it ensures that everyone’s voice and knowledge are equally respected, no matter where they come from or how they express themselves. As researchers, we’re in positions of power, and it’s easy to perpetuate epistemic injustice without even realising it. But when we hand the reins to participants, we’re making a deliberate choice to amplify their voices and honour their expertise.

For us, this study is also a reminder to slow down, reflect, and listen. Research isn’t just about collecting data; it’s about honouring the people behind the stories. And sometimes, it starts with something as simple and as profound as the researcher asking, “What’s in a name?”

Questions to ponder

How can you create a space where research participants feel truly empowered to represent themselves?

What does it mean to approach research as a collaboration, rather than a process of data extraction?

How can naming practices become tools for resistance and critique in your own work?

You can now subscribe to the blog!

Dr Lynette Pretorius


Hi everyone,

I’ve got some exciting news to share with you today and, yes, I know, it’s been a long time coming. After a fair bit of tinkering, troubleshooting, and maybe a little procrastination (let’s be honest), I’ve finally figured it out: you can now subscribe to this blog! 🎉

Why should you subscribe? Well, it’s pretty straightforward: once you subscribe, you’ll get a handy email notification when a new post is published. No more missing out on updates or frantically trying to remember the name of the blog (though I hope it’s etched into your heart by now). Also, it is completely free!

So, if you enjoy the content here, subscribing is the easiest way to stay in the loop. Whether you’re here for research tips, musings on academic life, or something else entirely, I’m excited to have you along for this journey. So click that subscribe button and let’s keep the conversation going.

Join my 28 subscribers!


Psychological capital in the PhD

Dr Lynette Pretorius


Hi there! Have you ever wondered what it really takes to thrive during a PhD? For over a decade, I’ve worked closely with graduate students, helping them navigate the academic and emotional hurdles of this journey. Along the way, I’ve seen a lot—students overwhelmed by isolation, stress, and uncertainty. Many come to me not knowing why they feel the way they do or where to turn for help.

Let’s talk about the bigger picture. Mental health is a major issue in our society. Did you know that nearly half of all Australians will experience a mental illness at some point in their lives? Now imagine the pressures of a PhD—high workloads, financial insecurity, and a lack of work-life balance. It’s no surprise, then, that PhD students are twice as likely to struggle with mental health compared to the general population. Some studies even compare their stress levels to those of incarcerated individuals. Shocking, right?

So how can we make this better? This is where the concept of psychological capital comes in; it is the focus of my recent paper. The definition I like to use refers to psychological capital as the HERO within. HERO stands for hope, efficacy, resilience, and optimism. These are the mental tools that help us stay motivated, face challenges, bounce back from setbacks, and believe in our ability to succeed.

While there’s plenty of research on mental health issues among PhD students, we rarely hear their personal stories. These stories matter because they show what it’s really like and can drive change in universities. In my study, I focused on one PhD student, whom I call “Em,” at a large Australian university, using narrative ethnography as my methodology. Narrative ethnography is a type of autoethnography that blends the researcher’s own experiences with the stories and insights of others. It focuses on understanding others’ experiences while adding depth by including the researcher’s perspective.

I gathered Em’s data through an online survey, email-based conversations, and my own field notes. The vivid and emotional imagery in her story comes directly from her, reflecting the challenges of her PhD journey. To ensure the analysis was trustworthy, I cross-checked information from the survey, emails, and my experiences, with Em actively helping to interpret the data and providing additional context where needed. This means Em and I worked together to shape the insights, turning what started as a survey into a collaborative and ongoing conversation.

I collected my own data using a simple reflective approach, starting with recorded Zoom conversations with myself. As I wrote the manuscript, I kept reflecting on my thoughts and feelings about the experiences I was describing. This process brought back memories from my childhood that shaped my identity, helping me connect more deeply with my experiences and Em’s story. Em’s vulnerability also pushed me to think more critically about my own time in academia. I even talked with my family to confirm my memories and reviewed old documents, like leave applications and my thesis examiner reports. Finally, I looked back at my field notes to add richer context to Em’s responses.

So what did I find? Reflecting on my PhD journey, I can see how the challenges I faced shaped my academic identity. A lab accident left me with a severe allergy, forcing me to change the focus of my research and teaching. Later, a serious injury required months of recovery and even learning to walk again. Despite these setbacks, I adapted and managed to complete my PhD in under four years with the support of my supervisors and family. These experiences tested my resilience and resourcefulness, but my strong sense of purpose and hope kept me moving toward my goal of becoming a university lecturer. My upbringing played a key role, as my parents encouraged my love for learning from an early age, which gave me confidence in my ability to succeed.

These formative experiences, along with my faith and a strong sense of identity, gave me the optimism and determination to overcome adversity. The challenges I faced during my PhD also sparked my interest in improving doctoral education, particularly in addressing the mental health struggles many students experience. I believe it’s crucial to create academic environments that nurture hope, resilience, and a sense of belonging. By doing so, we can help future researchers thrive and transform academia into a more inclusive and supportive space.

Em’s PhD journey highlights the immense challenges she faced as an international student in Australia. Passionate about her research topic, Em began her doctoral studies with optimism but soon encountered significant psychological distress. Her struggles included isolation, pressure to publish, and concern over her ability to secure research funding. At one point, her mental health deteriorated so severely that she experienced frequent panic attacks, leaving her physically and emotionally exhausted. Despite these challenges, Em initially avoided seeking help due to the stigma surrounding mental illness and fear of being judged, particularly within her cultural and religious circles.

Over time, Em reached a turning point when she realised her wellbeing was essential for completing her PhD. This shift in mindset helped her prioritise self-care and develop strategies to manage her mental health while continuing her studies. She described the process as steering a small boat through stormy seas, learning new skills along the way to stabilise her journey. Em found support in her faith, her curiosity for her research, and a growing sense of compassion for herself. Importantly, she began sharing her struggles with fellow PhD students, discovering a sense of community and mutual encouragement that helped her regain resilience and optimism. By sharing her story, Em not only found strength in vulnerability but also highlighted the importance of creating spaces where PhD students can thrive despite the challenges inherent in academia.

What can we learn from this? First, we need to create supportive spaces where students can connect and collaborate—like writing groups that foster not just skills but also a sense of belonging. Second, we need to reframe adversity as an opportunity for growth. By sharing our own challenges as educators, we can model resilience and show that failure is part of the process.

Lastly, let’s normalise seeking help. Supervisors should have open, honest conversations about mental health and encourage students to access professional support when needed. Resilience isn’t about enduring hardship alone—it’s about having the tools and support to bounce back stronger.

PhDs are tough, but they don’t have to be isolating. By fostering hope, resilience, and optimism, we can create an academic environment where students don’t just survive—they thrive. Thank you for reading, and let’s keep working together to make academia a more inclusive and supportive space.

Questions to ponder

What role does cultural and societal stigma play in preventing students like Em from seeking help, and how can institutions effectively counteract these barriers?

In what ways can the researcher’s own reflections and personal journey add value to the study’s insights, and how might this influence the interpretation of participants’ experiences?

What specific steps can universities take to normalise discussions about mental health and integrate support systems that build resilience and community among PhD students?