Call for abstracts for our book on positionality and reflexivity in research!

Dr Lynette Pretorius

Dr Lynette Pretorius is an award-winning educator and researcher specialising in doctoral education, academic identity, student wellbeing, AI literacy, autoethnography, and research skills development.


We are seeking expressions of interest for our new book provisionally titled Positionality & Reflexivity in Research (Editors: Sun Yee Yip and Lynette Pretorius from Monash University).

Whose research is it? Who owns it? Whose interests does it serve? Who benefits from it? Who has designed its questions and framed its scope? Who will carry it out? Who will write it up? How will its results be disseminated?

(Smith, 2021, p. 10)

Research across various knowledge traditions has challenged notions of neutrality and objectivity, increasingly recognising that the framing of a research problem is inextricably linked to who is granted the power to participate in knowledge creation within the institutional spaces of the academy, and who has access to that knowledge.

To address the presence and impact of knowledge makers on the forms that knowledge takes, social science research has introduced “position” and, often, “positionality statements” as genres in which researchers typically consider certain social identities, including but not limited to race, class, gender, sexual orientation, and (dis)ability. A researcher’s positionality can influence all aspects of the research process, including study design, research questions, data collection, and analysis; understanding one’s positionality can therefore shape the outcomes and trustworthiness of the results (Yip, 2023). While positionality statements have traditionally been a common feature of qualitative research, some researchers have recently argued for their relevance in quantitative research as well (Jamieson et al., 2022).

Reflexivity, the process of critically examining one’s assumptions, biases, and perspectives and how they might impact the research process, is considered a fundamental element in addressing a researcher’s positionality. It challenges researchers to critically analyse their positionality (their role, assumptions, and influence on the research process) and to reflect on how their engagement shapes their understanding of the issue under investigation, their research design, their findings, the theories they develop, and the communication of results (Addyman, 2025; Smith, 2021).

Yet, despite the growing recognition of the importance of positionality and reflexivity, resulting publications show surprisingly little evidence of researchers explicitly addressing their lived experiences in the field and how they practise reflexivity. This lack of transparency obscures the iterative and adaptive role that reflexivity plays in shaping research practices, insights, and contributions to theory development. By conceptualising their positionality and embracing reflexivity more effectively, researchers can examine their impact on the research process, reveal their work’s relational and emotional dynamics, and contribute in academically rigorous and practically relevant ways.

Given the increasing demand for researchers to disclose their positions in relation to the research they conduct and articulate their reflexivity practices, we invite chapters that offer profound and critical insights into personal experiences of examining positionalities and engaging in reflexivity within your research. This may pertain to your PhD projects or beyond.

We suggest that chapters address (but need not be limited to) the following guiding questions:

  1. What is your research about?
  2. What motivated you to embark on this research?
  3. What is your position or standpoint in relation to your research?
  4. How does your position impact different aspects of your research (e.g., research design, methodology, findings/results, theorisation)?
  5. How did you practise reflexivity? What strategies did you adopt or not adopt, and why? In what ways were they helpful or unhelpful? What challenges did you face, and how did you address or overcome them?
  6. What did you learn in the process?
  7. How has this shaped your future practice?

Please express your interest by submitting an abstract/chapter proposal of no more than 500 words by clicking on the button below. You can also refer to this list, which we regularly update to reflect relevant published work on this topic.

All abstracts must be submitted by the 31st of July 2025. Notices of abstract acceptance will be emailed to authors by the 30th of September 2025.

If your abstract is accepted, you will develop your chapter, bearing in mind the following features: 

  • Write in the first-person narrative style. 
  • Chapters should be between 5,000 and 6,000 words, including abstract, references, footnotes, figures, and tables.
  • Include a 200-word abstract at the start of the chapter.
  • Use APA 7 referencing style.

The full manuscript is due on 31 January 2026 and will be subjected to rigorous peer review. 

If you have any questions regarding the expression of interest process, please email:
Dr Sun Yee Yip at sunyee.yip@monash.edu 

Learn the basics of generative AI

The AI Literacy Lab

Republished from The AI Literacy Lab.

The AI Literacy Lab is an interdisciplinary collaboration of educators and researchers investigating the implications of generative AI for teaching, learning, and scholarly inquiry.

Generative AI is reshaping the world, one image, paragraph, and data point at a time. Whether you’re a curious newcomer, an educator trying to keep up with the latest trends, or a student dipping your toes into artificial intelligence, you’re in the right place. Let’s unpack what generative AI actually is, why it matters, and how you can begin your learning journey with a few hand-picked videos.

What is generative AI?

Generative AI refers to a class of artificial intelligence that can create new content like text, images, music, code, and more. Think of it as a creative partner trained on vast amounts of data. These systems learn patterns, styles, and structures, and then use that knowledge to generate novel outputs that often feel surprisingly human-like.

Why should you care?

Generative AI isn’t just a tech trend; it’s a shift in how we produce knowledge, express creativity, and interact with machines. For educators, it’s reshaping pedagogy. For students, it’s changing how assignments are written and evaluated. For researchers, it’s opening up new methods of inquiry, simulation, and communication.

How do you use generative AI?

Prompt design is crucial when using generative AI because the quality of your prompt directly shapes the relevance, clarity, and creativity of the AI’s response. Well-crafted prompts help you guide generative AI more effectively, turning it into a powerful tool for learning, research, and problem-solving.
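As a concrete illustration, here is a minimal sketch of one common heuristic: spelling out a role, context, task, and constraints rather than asking a bare question. The function and wording below are hypothetical, not a prescribed formula.

```python
# A minimal, illustrative sketch of structured prompt design.
# The four elements (role, context, task, constraints) are a common
# heuristic, not a fixed rule; the wording below is hypothetical.

def build_prompt(role: str, context: str, task: str, constraints: str) -> str:
    """Assemble a structured prompt from four common elements."""
    return (
        f"You are {role}.\n"
        f"Context: {context}\n"
        f"Task: {task}\n"
        f"Constraints: {constraints}"
    )

prompt = build_prompt(
    role="an experienced research assistant",
    context="I am analysing interview transcripts about doctoral student wellbeing.",
    task="Suggest five candidate themes in the excerpt below, each with a one-line rationale.",
    constraints="Use neutral language and do not infer anything not present in the excerpt.",
)
print(prompt)
```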

Developing AI literacy

Understanding the basics isn’t just about staying relevant; it’s about becoming literate in a rapidly evolving digital world. AI literacy is now a key component of digital citizenship, academic integrity, and lifelong learning.

Want to learn more? These posts are a good start:

Call for chapters for our new open access book on AI!

The AI Literacy Lab

Republished from The AI Literacy Lab.

Are you exploring how generative AI is transforming the research landscape? Have you developed innovative approaches, ethical insights, or practical applications regarding AI in research? If so, we invite you to contribute a chapter to our forthcoming open access book: Generative AI-Enhanced Research: Ethical, Practical, and Transformative Approaches.

This edited collection will serve as a go-to resource for researchers, academics, educators, and students interested in harnessing generative AI tools across the research lifecycle. Our aim is to showcase a diverse range of perspectives, theoretical frameworks, and methodological innovations that illuminate the evolving role of AI in academic work.

We welcome contributions in the form of conceptual papers, empirical studies, reflective case narratives, and practical guides. Key areas of interest include (but are not limited to):

  • Ethical challenges and considerations in generative AI-enhanced research
  • Generative AI in research design and literature review
  • Generative AI in data collection and analysis
  • Generative AI in writing, publishing, and dissemination
  • Generative AI and research training, critical thinking, and future trends

Interested? Learn more and submit your abstract here.

Abstracts are due by 30 June 2025!

Join us at the 2025 International Conference on AI for Higher Education!

The AI Literacy Lab

Republished from The AI Literacy Lab.

You are warmly invited to participate in the International Conference on AI for Higher Education (AI4HE). Facilitated by the Human-AI Collaborative Knowledgebase for Education and Research (HACKER) and the AI Literacy Lab, the conference provides an opportunity to share knowledge of AI in higher education, network with peers, and participate in practical workshops.

The conference will be held on 26 and 27 November 2025 and will run online via Zoom. The conference is FREE 🙂

Presentations can take various formats and should focus on the use of generative AI in higher education settings. Some questions you can use to prompt your thinking are:

  • What constitutes AI literacy for researchers today?
  • How can we effectively embed AI literacy into research training and higher education curricula?
  • What new methodological possibilities or tensions arise when generative AI is integrated into the research process?
  • How do we ethically use generative AI in research without compromising scholarly integrity, originality, trustworthiness, and rigour?
  • Who gets to decide what constitutes ‘authorship’ or ‘contribution’ when generative AI tools are involved in the production of knowledge?
  • How does the use of generative AI in research reshape our understanding of the researcher’s role, voice, and epistemic authority?
  • What does it mean to ‘position oneself’ in relation to a generative AI tool? Is it a collaborator, instrument, co-author, or something else entirely?

Abstracts are due by the 20th of June. To submit an abstract or register to attend, click on the button below. See you there!

ChatGPT as a qualitative research partner

Dr Lynette Pretorius

Chris Pretorius

Chris Pretorius is a doctoral candidate specialising in spiritual health and practice, with an interest in the intersections between technology and theology.

The rise of generative AI has sparked new conversations about its role in academic research. While generative AI tools like ChatGPT have proven effective for summarisation, pattern recognition, and text classification, their potential in deep, interpretive qualitative data analysis remains underexplored. In our recent study, we examine the integration of ChatGPT as an active collaborator in qualitative data analysis. Our findings highlight ChatGPT’s ability to streamline initial coding, enhance reflexivity and higher-order thinking, and support knowledge co-construction while emphasising the necessity of human oversight.

Our study marks an exciting step forward in the integration of generative AI into qualitative inquiry. By approaching generative AI as a partner rather than a passive tool, we believe researchers will be able to harness its potential while preserving the richness and depth that define qualitative research.

As illustrated in another blog post, qualitative data analysis is often a laborious process, requiring meticulous coding, interpretation, and reflection. Traditional computer-assisted qualitative data analysis software, such as NVivo and MAXQDA, has long been used to help streamline aspects of qualitative data analysis. However, generative AI, and specifically ChatGPT, introduces an additional layer of adaptability, offering real-time feedback and dynamic analytical capabilities. This made us wonder how effective it would be in the qualitative data analysis process.

In our paper, we explore how ChatGPT can function beyond a simple data processing tool by actively participating in the interpretive process. Rather than merely classifying text, we found that ChatGPT could highlight implicit themes, suggest theoretical frameworks, and prompt deeper reflections on the data from both the researcher and participant. However, ChatGPT’s capacity is highly contingent on the researcher’s ability to craft well-designed prompts.

One of the key takeaways from the study is the significance of effective prompt design. We note that ChatGPT’s responses were only as good as the prompts it received. Initially, we found that ChatGPT’s responses lacked depth or fixated on single aspects of a topic while neglecting others. By refining our prompts, explicitly defining key concepts, and structuring questions carefully, we were able to guide ChatGPT toward more nuanced and insightful analyses.
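To illustrate the kind of refinement we mean, compare a vague prompt with one that defines its key concept and structures the request. These are simplified illustrations written for this post, not the actual prompts from our study:

```python
# Simplified illustration of prompt refinement; not the study's actual prompts.

vague_prompt = "What themes do you see in this transcript?"

refined_prompt = (
    "'Illusio' refers to a person's investment in the socially constructed "
    "values within a field. Read the transcript excerpt below and: "
    "(1) identify passages that suggest illusio, quoting the participant's "
    "own words; (2) explain your reasoning for each passage; and "
    "(3) flag any passages where the evidence is ambiguous."
)
```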

We developed a series of 31 prompts to explore our dataset (see the prompts here). This iterative prompting process not only improved ChatGPT’s analytical output but also helped the researcher clarify her own theoretical perspectives. Our study consequently frames this prompt design process as a reflexive exercise, demonstrating how the act of crafting prompts can refine a researcher’s conceptual thinking and analytical approach.

An unexpected yet valuable outcome of using ChatGPT in the research process was its ability to stimulate the researcher’s higher-order thinking. By engaging with the ChatGPT-generated interpretations, the researcher was prompted to critically assess underlying assumptions, refine theoretical lenses, and explore alternative perspectives she might not have initially considered. This process encouraged deeper engagement, pushing the researcher to interrogate her own biases and methodological choices. As a result, the interaction with ChatGPT became an intellectual exercise in itself, allowing the researcher to refine and expand her analytical thinking in ways that traditional methods may not have facilitated as effectively.

One of the most striking findings from our study was ChatGPT’s ability to uncover implicit meanings within qualitative data. For example, when asked about concepts like “illusio” (investment in the socially constructed values within a field), ChatGPT was able to infer instances of this concept even when it was not explicitly mentioned in the data. However, we also found that the ChatGPT-generated interpretations sometimes diverged from participants’ own perspectives. This emphasises the critical role of human oversight. Generative AI lacks self-awareness (at least at the moment!), meaning that its responses must be carefully evaluated. Generative AI can be a powerful tool for organising and prompting analysis, but it is the researcher’s interpretive lens that ultimately determines the depth and rigour of qualitative inquiry.

One of the most innovative aspects of our study is its participatory approach, in which both the researcher and the participant engaged with ChatGPT’s analyses. Instead of using generative AI as a behind-the-scenes tool, the study involved participants in critically appraising ChatGPT’s findings, thereby decentralising the researcher’s authority over data interpretation. This triadic model (researcher, participant, and ChatGPT) fostered greater participant agency in the research process. By giving participants the opportunity to review and respond to ChatGPT-generated interpretations, we ensured that the generative AI-assisted analyses did not overwrite or misrepresent participants’ lived experiences. This approach not only enhanced the ethical integrity of the generative AI-assisted research but also enriched the depth and authenticity of the findings.

Questions to ponder

What are the potential benefits and risks of using AI tools like ChatGPT in qualitative research?

How can researchers ensure that ChatGPT-assisted analyses remain ethically sound and participant-driven?

The ETHICAL framework for responsible generative AI use

Dr Lynette Pretorius


The advent of generative artificial intelligence (GenAI) has opened up transformative possibilities in academic research. Tools like ChatGPT, Gemini, and Claude hold the potential to help with idea and content development, structure and research design, literature review and synthesis, data management and analysis, as well as proofreading and editing. However, as enticing as these advancements are, they bring ethical challenges that require careful navigation. To bridge this gap between potential and responsibility, my colleagues and I developed the ETHICAL framework for GenAI use, which has just been published open access!

The ETHICAL framework offers a structured approach, with each letter in the acronym representing a principle that users should embed into their practices. The framework has been summarised in this handy picture.

The ETHICAL Framework for Responsible Generative AI Use, republished from here under a CC-BY license.

Examine policies and guidelines
Researchers must consult international, national, and institutional GenAI policies. This involves not only aligning with global GenAI ethics recommendations but also understanding the specifics of local guidelines. Adhering to these ensures compliance and fosters trust. As an example, my institution has an entire policy suite relating to responsible GenAI use in both teaching and research.

Think about the social impacts
GenAI can reinforce biases and perpetuate inequalities. Researchers should critically evaluate the societal consequences of using GenAI, considering both environmental sustainability and digital equity.

Harness understanding of the technology
A robust understanding of how GenAI tools operate (beyond their surface-level functionalities) is essential. Researchers must grasp the limitations and ethical implications of the technologies they use and should promote AI literacy within their academic communities. I have written other blog posts about what AI literacy is and how you can build your AI literacy. This short video explains the components of AI literacy.

Indicate use transparently
Transparency is key to maintaining academic integrity. Researchers should explicitly disclose where and how GenAI tools were used, documenting their role in the research process. This fosters accountability and mitigates risks related to copyright and authorship disputes. This video provides a simple guide to formatting GenAI acknowledgements.
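For instance, a disclosure statement might read something like the following (the wording is an illustrative template only; always follow your institution’s and publisher’s required format): “I acknowledge the use of ChatGPT (OpenAI) to suggest initial codes during data analysis and to refine the clarity of my writing. All outputs were critically reviewed, verified, and revised by the author, who takes full responsibility for the final content.”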

Critically engage with outputs
GenAI outputs are not infallible and require rigorous validation. Researchers bear the ultimate responsibility for ensuring that GenAI-generated content aligns with disciplinary standards and is free from inaccuracies or ethical breaches.

Access secure versions
Security and privacy are paramount when using GenAI. Free versions of tools may not offer adequate protections for sensitive data, underscoring the need for secure, institutional subscriptions or private deployments of GenAI models.

Look at user agreements
Many GenAI tools have complex user agreements, which can have significant implications for data ownership and privacy. Researchers should carefully review these terms to ensure ethical compliance and to safeguard their intellectual property.

The ETHICAL framework encourages universities to incorporate AI literacy into their curricula, ensuring that both students and faculty are prepared to navigate the ethical complexities of GenAI-enhanced research. The ETHICAL framework is also not just a set of guidelines; it’s a call to action. For educators, researchers, and institutions alike, the message is clear: the future of GenAI in higher education depends on our collective ability to navigate its challenges responsibly. The ETHICAL framework provides a compass for doing just that, fostering a research culture that is as ethical as it is forward-thinking.

Questions to ponder

How can universities integrate AI literacy into their existing curricula effectively?

What steps can researchers take to ensure equitable access to GenAI tools across diverse socio-economic contexts?

How should publishers and peer-review committees adapt to the growing use of GenAI in manuscript preparation?

Whose story is it anyway? The transformative power of pseudonym choice

Dr Lynette Pretorius

Dr Sweta Vijaykumar Patel

Dr Sweta Vijaykumar Patel is a lecturer, researcher, and mentor specialising in early childhood education, creative methodologies, teacher education and culturally responsive pedagogy.

As qualitative researchers, we’ve often used pseudonyms in our work to protect the identities of participants. It’s a standard practice and one that’s meant to safeguard confidentiality while ensuring their stories remain authentic. But recently, we conducted a study that made us pause and rethink how we approach pseudonyms. It highlighted the power of inviting participants to choose their own pseudonyms and how that simple act can transform the research process.

In our study, 40 doctoral students shared their experiences of academia, and part of that was choosing pseudonyms for themselves and their institution. They were also asked to explain the reasons why they chose those names. Reading through their choices, we were struck by how much thought and emotion they poured into these names. For some, the pseudonym chosen was deeply personal. One participant, for instance, chose “Chess” to reflect their strategic navigation through life as an autistic, trans individual. Another participant selected “Kurdi,” proudly emphasising their Kurdish heritage and lifelong pursuit of knowledge. These names were more than identifiers; they were declarations of identity, resilience, and aspiration. Some picked hopeful names like “The University of Dreams” for their institutions, reflecting admiration or ambition. But not all pseudonyms were positive. One participant, for example, referred to their institution as “The University of Business,” critiquing the commodification of education. Another layered nuance onto their name, highlighting disillusionment with systemic issues they encountered. These choices offered us a window into their experiences, highlighting both their struggles and triumphs.

Letting participants name themselves isn’t just a small methodological tweak; it’s a purposeful act of empowerment. By giving participants the opportunity to take control of their own representation, we were able to disrupt the traditional power dynamics that so often define research. It wasn’t just about collecting data; it was about fostering trust, collaboration, and authenticity.

Of course, there are challenges with this approach. Participants might feel pressure to choose names that conform to researchers’ expectations or worry about how their pseudonyms will be interpreted. It also takes time and effort to create a supportive environment where participants feel comfortable making these decisions. But the benefits (including greater trust, richer data, and more ethical representation) far outweigh the hurdles.

Conducting this study has changed how we think about our own research practices. It’s a reminder that the small details, even something as simple as giving a participant the chance to name themselves, can carry huge implications. When participants take control of their representation, it deepens the authenticity of their stories and strengthens the research process.

We also see this as a challenge to examine the systems within which we work. We’ve shown that names aren’t just labels; they’re an opportunity for participants to reclaim their stories, critique their environments, and express their identities on their own terms. As noted in another blog post, epistemic justice is about fairness: it ensures that everyone’s voice and knowledge are equally respected, no matter where they come from or how they express themselves. As researchers, we’re in positions of power, and it’s easy to perpetuate epistemic injustice without even realising it. But when we hand the reins to participants, we’re making a deliberate choice to amplify their voices and honour their expertise.

For us, this study is also a reminder to slow down, reflect, and listen. Research isn’t just about collecting data; it’s about honouring the people behind the stories. And sometimes, it starts with something as simple and as profound as the researcher asking, “What’s in a name?”

Questions to ponder

How can you create a space where research participants feel truly empowered to represent themselves?

What does it mean to approach research as a collaboration, rather than a process of data extraction?

How can naming practices become tools for resistance and critique in your own work?

You can now subscribe to the blog!

Dr Lynette Pretorius


Hi everyone,

I’ve got some exciting news to share with you today and, yes, I know, it’s been a long time coming. After a fair bit of tinkering, troubleshooting, and maybe a little procrastination (let’s be honest), I’ve finally figured it out: you can now subscribe to this blog! 🎉

Why should you subscribe? Well, it’s pretty straightforward: once you subscribe, you’ll get a handy email notification when a new post is published. No more missing out on updates or frantically trying to remember the name of the blog (though I hope it’s etched into your heart by now). Also, it is completely free!

So, if you enjoy the content here, subscribing is the easiest way to stay in the loop. Whether you’re here for research tips, musings on academic life, or something else entirely, I’m excited to have you along for this journey. So click that subscribe button and let’s keep the conversation going.

Join my 28 subscribers!


Psychological capital in the PhD

Dr Lynette Pretorius


Hi there! Have you ever wondered what it really takes to thrive during a PhD? For over a decade, I’ve worked closely with graduate students, helping them navigate the academic and emotional hurdles of this journey. Along the way, I’ve seen a lot—students overwhelmed by isolation, stress, and uncertainty. Many come to me not knowing why they feel the way they do or where to turn for help.

Let’s talk about the bigger picture. Mental health is a major issue in our society. Did you know that nearly half of all Australians will experience a mental illness at some point in their lives? Now imagine the pressures of a PhD—high workloads, financial insecurity, and a lack of work-life balance. It’s then no surprise that PhD students are twice as likely to struggle with mental health compared to the general population. Some studies even compare their stress levels to those of incarcerated individuals. Shocking, right?

So how can we make this better? This is where the concept of psychological capital, the focus of my recent paper, comes in. The definition I like to use refers to psychological capital as the HERO within. HERO stands for hope, efficacy, resilience, and optimism. These are the mental tools that help us stay motivated, face challenges, bounce back from setbacks, and believe in our ability to succeed.

While there’s plenty of research on mental health issues among PhD students, we rarely hear their personal stories. These stories matter because they show what it’s really like and can drive change in universities. In my study, I focused on one PhD student, whom I call “Em,” at a large Australian university, using narrative ethnography as my methodology. Narrative ethnography is a type of autoethnography that blends the researcher’s own experiences with the stories and insights of others. It focuses on understanding others’ experiences while adding depth by including the researcher’s perspective.

I gathered Em’s data through an online survey, email-based conversations, and my own field notes. The vivid and emotional imagery in her story comes directly from her, reflecting the challenges of her PhD journey. To ensure the analysis was trustworthy, I cross-checked information from the survey, emails, and my experiences, with Em actively helping to interpret the data and providing additional context where needed. This means Em and I worked together to shape the insights, turning what started as a survey into a collaborative and ongoing conversation.

I collected my own data using a simple reflective approach, starting with recorded Zoom conversations with myself. As I wrote the manuscript, I kept reflecting on my thoughts and feelings about the experiences I was describing. This process brought back memories from my childhood that shaped my identity, helping me connect more deeply with my experiences and Em’s story. Em’s vulnerability also pushed me to think more critically about my own time in academia. I even talked with my family to confirm my memories and reviewed old documents, like leave applications and my thesis examiner reports. Finally, I looked back at my field notes to add richer context to Em’s responses.

So what did I find? Reflecting on my PhD journey, I can see how the challenges I faced shaped my academic identity. A lab accident left me with a severe allergy, forcing me to change the focus of my research and teaching. Later, a serious injury required months of recovery and even learning to walk again. Despite these setbacks, I adapted and managed to complete my PhD in under four years with the support of my supervisors and family. These experiences tested my resilience and resourcefulness, but my strong sense of purpose and hope kept me moving toward my goal of becoming a university lecturer. My upbringing played a key role, as my parents encouraged my love for learning from an early age, which gave me confidence in my ability to succeed.

These formative experiences, along with my faith and a strong sense of identity, gave me the optimism and determination to overcome adversity. The challenges I faced during my PhD also sparked my interest in improving doctoral education, particularly in addressing the mental health struggles many students experience. I believe it’s crucial to create academic environments that nurture hope, resilience, and a sense of belonging. By doing so, we can help future researchers thrive and transform academia into a more inclusive and supportive space.

Em’s PhD journey highlights the immense challenges she faced as an international student in Australia. Passionate about her research topic, Em began her doctoral studies with optimism but soon encountered significant psychological distress. Her struggles included isolation, pressure to publish, and concern over her ability to secure research funding. At one point, her mental health deteriorated so severely that she experienced frequent panic attacks, leaving her physically and emotionally exhausted. Despite these challenges, Em initially avoided seeking help due to the stigma surrounding mental illness and fear of being judged, particularly within her cultural and religious circles.

Over time, Em reached a turning point when she realised her wellbeing was essential for completing her PhD. This shift in mindset helped her prioritise self-care and develop strategies to manage her mental health while continuing her studies. She described the process as steering a small boat through stormy seas, learning new skills along the way to stabilise her journey. Em found support in her faith, her curiosity for her research, and a growing sense of compassion for herself. Importantly, she began sharing her struggles with fellow PhD students, discovering a sense of community and mutual encouragement that helped her regain resilience and optimism. By sharing her story, Em not only found strength in vulnerability but also highlighted the importance of creating spaces where PhD students can thrive despite the challenges inherent in academia.

What can we learn from this? First, we need to create supportive spaces where students can connect and collaborate—like writing groups that foster not just skills but also a sense of belonging. Second, we need to reframe adversity as an opportunity for growth. By sharing our own challenges as educators, we can model resilience and show that failure is part of the process.

Lastly, let’s normalise seeking help. Supervisors should have open, honest conversations about mental health and encourage students to access professional support when needed. Resilience isn’t about enduring hardship alone—it’s about having the tools and support to bounce back stronger.

PhDs are tough, but they don’t have to be isolating. By fostering hope, resilience, and optimism, we can create an academic environment where students don’t just survive but thrive. Thank you for reading, and let’s keep working together to make academia a more inclusive and supportive space.

Questions to ponder

What role does cultural and societal stigma play in preventing students like Em from seeking help, and how can institutions effectively counteract these barriers?

In what ways can the researcher’s own reflections and personal journey add value to the study’s insights, and how might this influence the interpretation of participants’ experiences?

What specific steps can universities take to normalise discussions about mental health and integrate support systems that build resilience and community among PhD students?

Data coding for qualitative research

Dr Lynette Pretorius


Coding is an essential step in transforming raw and often messy data into structured insights that reveal the nuanced layers of human experiences and perceptions. In this post, I will explore the basics of data coding. It is important to note that there is no one “correct” way to code, with different researchers preferring different approaches. As such, this post explores a general strategy that is applicable across methodologies.

What is Data Coding?

Data coding is the method by which researchers assign labels, or “codes”, to segments of data (such as lines or passages in an interview transcript). These codes categorise information and can be used to identify recurring themes, patterns, or unique insights. Unlike quantitative data, where analysis often relies on numbers, qualitative coding seeks to draw out meanings, emotions, and context. Think of coding as sorting a vast array of thoughts and words into labelled tags. Each tag represents a concept or idea that helps in making sense of the information collected. Coding provides a foundation for further analysis and interpretation, guiding researchers towards a deeper understanding of the underlying messages within their data.

Before diving into the coding process, certain preparatory steps can help clarify your objectives and streamline your approach:

  1. Define your research questions. Knowing what you’re aiming to understand or explore will guide you towards relevant codes and themes.
  2. Familiarise yourself with your data. Read through the data a few times to understand the overall flow and main ideas. This initial reading is crucial for getting a feel for the tone, structure, and range of content in the dataset.
  3. Decide on your coding approach. Will you use deductive coding (where you start with a predefined set of codes) or inductive coding (where codes emerge from the data as you go along)? Inductive coding is particularly useful in exploratory studies where themes are not predetermined. Note that you can use both deductive and inductive coding, which is usually the strategy I prefer.
  4. Organise your work process. Whether you’re coding manually (with highlighters and notes) or digitally (with software like NVivo or MAXQDA), set up a system that allows you to easily store, retrieve, and organise your codes.
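If you are working digitally, even a short script can serve as that organising system. As a minimal sketch (the codes, definitions, and examples below are hypothetical), a codebook can be a simple mapping from each code to its definition and an example excerpt:

```python
# A minimal, hypothetical codebook: one entry per code, with a definition
# and an example excerpt to keep coding consistent over time.
codebook = {
    "working together": {
        "definition": "Participant describes collaborating with others on a shared task.",
        "example": "I then worked with the two co-editors...",
    },
    "emotional response": {
        "definition": "Participant expresses a feeling about an event or text.",
        "example": "I was quite surprised...",
    },
}

# Print a quick reference sheet.
for code, entry in codebook.items():
    print(f"{code}: {entry['definition']}")
```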

How Do You Code?

Now that you are ready to begin, here is a step-by-step approach.

Initial Coding (Open Coding):

Go through your data line-by-line or paragraph-by-paragraph and assign descriptive codes to sections that seem relevant to your research questions or themes of interest. These codes should capture the essence of each segment. The open coding stage is often exploratory, and it’s normal to have a large number of codes that may seem disconnected. Coding can feel overwhelming at this stage, especially when dealing with large volumes of data. Break down the coding process into manageable sessions and focus on specific sections.

As an example, I’ll use the coding I did for a recent paper I wrote. Let’s imagine a participant said:

“I then worked with the two co-editors to get the people who were part of the writing group to submit some abstracts for what they would like to write for a book chapter. When we received these abstracts, I was quite surprised because they actually fit quite neatly into three categories.”

As I read this quote, I can see several concepts or ideas mentioned, including collaboration, teamwork, writing groups, book authorship, chapter authorship, emotional response to texts, categorisation, and similar experiences. These can be considered inductive codes and are the ones I would assign to this sentence. This also highlights that one sentence can have multiple codes because ideas are often complex and interrelated. As I mentioned earlier, I tend to use both inductive and deductive coding approaches. To do deductive coding of these sentences, I need to use the concepts of my theoretical framework (which in this study was Communities of Practice). As such, I coded these sentences under the community element of the communities of practice framework.
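As a sketch of how such a record might look digitally (the structure is illustrative; the codes are the ones just discussed), a single coded segment could be stored like this:

```python
# One coded segment: a single excerpt can carry multiple inductive codes
# plus a deductive code drawn from the theoretical framework.
segment = {
    "text": (
        "I then worked with the two co-editors to get the people who were "
        "part of the writing group to submit some abstracts..."
    ),
    "inductive_codes": [
        "collaboration", "teamwork", "writing groups", "book authorship",
        "chapter authorship", "emotional response to texts",
        "categorisation", "similar experiences",
    ],
    "deductive_codes": ["community (Communities of Practice)"],
}
```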

Review and Refine Codes:

Once the initial coding is done, it’s time to refine. Read through your list of codes, combining those that overlap or represent similar concepts and eliminating codes that appear redundant. This process reduces the number of codes and creates a clearer structure. For example, let’s say I had codes for “teamwork”, “working as a team”, “collaborating”, and “working together” in the overall coding of my dataset. This highlights one challenge of coding: code drift. Over time, the researcher may use slightly different wordings, or the meaning of certain codes can evolve. Keeping a codebook (a reference document that defines each code) can help maintain consistency. During this refining stage, the four codes I mentioned above can be collapsed into one code (e.g., “working together”) because having four separate codes for the same idea is redundant. You want to make sure that the codes you have are representative of unique concepts, even though they may be closely related.
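A sketch of what that collapsing step might look like digitally (the mapping mirrors the example above; in practice the merging decisions are interpretive, not mechanical):

```python
# Collapse near-synonymous codes into one canonical code, mirroring the
# 'working together' example above. Which codes to merge is a judgement call.
merge_map = {
    "teamwork": "working together",
    "working as a team": "working together",
    "collaborating": "working together",
}

coded_segments = [
    {"text": "...", "codes": ["teamwork", "writing groups"]},
    {"text": "...", "codes": ["collaborating", "categorisation"]},
]

for seg in coded_segments:
    # Map each code to its canonical form; dict.fromkeys drops duplicates
    # while preserving order.
    seg["codes"] = list(dict.fromkeys(merge_map.get(c, c) for c in seg["codes"]))

print(coded_segments)
```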

Group Codes (Axial Coding):

Axial coding involves grouping related codes into larger categories, sometimes known as thematic clusters. At this stage, your job as the researcher is to start looking for connections between codes. Here, you’ll determine the relationships between your codes, creating categories and sub-categories that add coherence to the coded data. For example, let’s say I had codes for book authorship, chapter authorship, deciding author order, editorial decisions, and tasks in the publication process. These five ideas could be grouped into a cluster, such as “complexities of publishing”, since they are all closely related.
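Continuing the sketch, axial coding can be recorded as a mapping from each category to the codes it groups (the category and codes below are the ones from the example above; the grouping itself remains the researcher’s interpretive work):

```python
# A thematic cluster groups closely related codes under one category,
# as in the 'complexities of publishing' example above.
categories = {
    "complexities of publishing": [
        "book authorship",
        "chapter authorship",
        "deciding author order",
        "editorial decisions",
        "tasks in the publication process",
    ],
}
```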

Further Selective Coding to Create Themes:

Once you have your categories, the final step is to create your themes. A theme represents the core idea of several of your categories, thereby giving overarching insights that help you answer your research questions. There are different approaches to creating themes, as I highlighted in another blog post, but I tend to use Braun and Clarke’s reflexive thematic analysis in my work.

Let’s look back at that initial quote I had. In the final paper, this quote was under a theme called “Same, same, but different: Everybody has a story”. This theme is most closely related to that initial code I had called similar experiences. However, while the initial code was descriptive of my data, it did not yet fully reflect the nuance and complexity of the meaning of my participants’ quotes. I had to use my deep understanding of my participants’ words to develop a theme which provided answers to my research question. When I looked back at my codes, I noticed that my participants used words like “everybody has a story” and that they noted everyone’s experiences “were all similar to each other and at the same time different from each other”. These ideas were frequently repeated, and so were clustered together during axial coding. To then create my theme, I used my participants’ words (“everybody has a story”, sometimes termed an in vivo code) and combined them with a catchy phrase (“same, same, but different”). This helped me to answer my research question, which was related to what participants learnt from reading and providing feedback on each other’s work.

It is also important to note that themes are often interrelated, reflecting the complexity of human experience. It can, therefore, be useful to create a detailed explanation for your reader of how the themes work together to address your research topic. For example, this is what I wrote in my paper to explain the connection between the first and second themes in my study:

The first theme (“same, same, but different: everybody has a story”) underscores a dual realisation among participants: while everyone brings distinct and unique life stories and perspectives to the table, there is a profound commonality in the challenges and experiences they share, particularly in the context of writing and self-reflection. The second theme (“I am not alone: everyone has problems”) is related to the first, highlighting the transformative power of shared experiences in academic settings. By recognising the commonalities in their struggles, participants felt that they were able to foster a supportive community that valued openness, mutual support, and collective growth, ultimately enhancing their PhD journey and personal development.

Final Thoughts

Coding qualitative data may seem daunting at first, but the process becomes clearer with practice. At its core, coding is about translating real human stories into research findings that can inform, inspire, and change our understanding of complex issues. Through careful, thoughtful coding, you unlock the full potential of qualitative data: capturing not just what people say, but the deeper insights hidden within their words. Happy coding!

You can also learn more about research designs and methods by watching the videos below.

Questions to ponder

What criteria might a researcher use to decide whether a code is redundant or unique enough to retain during the coding refinement phase?

What are the potential advantages and limitations of using qualitative data analysis software (like NVivo or MAXQDA) compared to manual coding?

What role does familiarity with the data (from initial readings) play in the accuracy and depth of the coding process? Could familiarity also pose any risks?