Dr Lynette Pretorius is an award-winning educator and researcher in the fields of academic language, literacy, research skills, and research methodologies.
Hi there! Have you ever wondered what it really takes to thrive during a PhD? For over a decade, I’ve worked closely with graduate students, helping them navigate the academic and emotional hurdles of this journey. Along the way, I’ve seen a lot—students overwhelmed by isolation, stress, and uncertainty. Many come to me not knowing why they feel the way they do or where to turn for help.
Let’s talk about the bigger picture. Mental health is a major issue in our society. Did you know that nearly half of all Australians will experience a mental illness at some point in their lives? Now imagine the pressures of a PhD—high workloads, financial insecurity, and a lack of work-life balance. It’s no surprise, then, that PhD students are twice as likely to struggle with mental health compared to the general population. Some studies even compare their stress levels to those of incarcerated individuals. Shocking, right?
So how can we make this better? This is where the concept of psychological capital, the focus of my recent paper, comes in. The definition I like to use refers to psychological capital as the HERO within. HERO stands for hope, efficacy, resilience, and optimism. These are the mental tools that help us stay motivated, face challenges, bounce back from setbacks, and believe in our ability to succeed.
While there’s plenty of research on mental health issues among PhD students, we rarely hear their personal stories. These stories matter because they show what it’s really like and can drive change in universities. In my study, I focused on one PhD student, whom I call “Em,” at a large Australian university, using narrative ethnography as my methodology. Narrative ethnography is a type of autoethnography that blends the researcher’s own experiences with the stories and insights of others. It focuses on understanding others’ experiences while adding depth by including the researcher’s perspective.
I gathered Em’s data through an online survey, email-based conversations, and my own field notes. The vivid and emotional imagery in her story comes directly from her, reflecting the challenges of her PhD journey. To ensure the analysis was trustworthy, I cross-checked information from the survey, emails, and my experiences, with Em actively helping to interpret the data and providing additional context where needed. This means Em and I worked together to shape the insights, turning what started as a survey into a collaborative and ongoing conversation.
I collected my own data using a simple reflective approach, starting with recorded Zoom conversations with myself. As I wrote the manuscript, I kept reflecting on my thoughts and feelings about the experiences I was describing. This process brought back memories from my childhood that shaped my identity, helping me connect more deeply with my experiences and Em’s story. Em’s vulnerability also pushed me to think more critically about my own time in academia. I even talked with my family to confirm my memories and reviewed old documents, like leave applications and my thesis examiner reports. Finally, I looked back at my field notes to add richer context to Em’s responses.
So what did I find? Reflecting on my PhD journey, I can see how the challenges I faced shaped my academic identity. A lab accident left me with a severe allergy, forcing me to change the focus of my research and teaching. Later, a serious injury required months of recovery and even learning to walk again. Despite these setbacks, I adapted and managed to complete my PhD in under four years with the support of my supervisors and family. These experiences tested my resilience and resourcefulness, but my strong sense of purpose and hope kept me moving toward my goal of becoming a university lecturer. My upbringing played a key role, as my parents encouraged my love for learning from an early age, which gave me confidence in my ability to succeed.
These formative experiences, along with my faith and a strong sense of identity, gave me the optimism and determination to overcome adversity. The challenges I faced during my PhD also sparked my interest in improving doctoral education, particularly in addressing the mental health struggles many students experience. I believe it’s crucial to create academic environments that nurture hope, resilience, and a sense of belonging. By doing so, we can help future researchers thrive and transform academia into a more inclusive and supportive space.
Em’s PhD journey highlights the immense challenges she faced as an international student in Australia. Passionate about her research topic, Em began her doctoral studies with optimism but soon encountered significant psychological distress. Her struggles included isolation, pressure to publish, and concern over her ability to secure research funding. At one point, her mental health deteriorated so severely that she experienced frequent panic attacks, leaving her physically and emotionally exhausted. Despite these challenges, Em initially avoided seeking help due to the stigma surrounding mental illness and fear of being judged, particularly within her cultural and religious circles.
Over time, Em reached a turning point when she realised her wellbeing was essential for completing her PhD. This shift in mindset helped her prioritise self-care and develop strategies to manage her mental health while continuing her studies. She described the process as steering a small boat through stormy seas, learning new skills along the way to stabilise her journey. Em found support in her faith, her curiosity for her research, and a growing sense of compassion for herself. Importantly, she began sharing her struggles with fellow PhD students, discovering a sense of community and mutual encouragement that helped her regain resilience and optimism. By sharing her story, Em not only found strength in vulnerability but also highlighted the importance of creating spaces where PhD students can thrive despite the challenges inherent in academia.
What can we learn from this? First, we need to create supportive spaces where students can connect and collaborate—like writing groups that foster not just skills but also a sense of belonging. Second, we need to reframe adversity as an opportunity for growth. By sharing our own challenges as educators, we can model resilience and show that failure is part of the process.
Lastly, let’s normalise seeking help. Supervisors should have open, honest conversations about mental health and encourage students to access professional support when needed. Resilience isn’t about enduring hardship alone—it’s about having the tools and support to bounce back stronger.
PhDs are tough, but they don’t have to be isolating. By fostering hope, resilience, and optimism, we can create an academic environment where students don’t just survive—they thrive. Thank you for watching, and let’s keep working together to make academia a more inclusive and supportive space.
Questions to ponder
What role does cultural and societal stigma play in preventing students like Em from seeking help, and how can institutions effectively counteract these barriers?
In what ways can the researcher’s own reflections and personal journey add value to the study’s insights, and how might this influence the interpretation of participants’ experiences?
What specific steps can universities take to normalise discussions about mental health and integrate support systems that build resilience and community among PhD students?
Coding is an essential step in transforming raw and often messy data into structured insights that reveal the nuanced layers of human experiences and perceptions. In this post, I will explore the basics of data coding. It is important to note that there is no one “correct” way to code, with different researchers preferring different approaches. As such, this post explores a general strategy that is applicable across methodologies.
What is Data Coding?
Data coding is the method by which researchers assign labels, or “codes”, to segments of data (such as lines or passages in an interview transcript). These codes categorise information and can be used to identify recurring themes, patterns, or unique insights. Unlike quantitative data, where analysis often relies on numbers, qualitative coding seeks to draw out meanings, emotions, and context. Think of coding as sorting a vast array of thoughts and words into labelled tags. Each tag represents a concept or idea that helps in making sense of the information collected. Coding provides a foundation for further analysis and interpretation, guiding researchers towards a deeper understanding of the underlying messages within their data.
Before diving into the coding process, certain preparatory steps can help clarify your objectives and streamline your approach.

First, define your research questions. Knowing what you’re aiming to understand or explore will guide you towards relevant codes and themes.

Second, spend time familiarising yourself with your data. Read through the data a few times to understand the overall flow and main ideas. This initial reading is crucial for getting a feel for the tone, structure, and range of content in the dataset.

Third, decide on your coding approach: deductive coding (where you start with a predefined set of codes), inductive coding (where codes emerge from the data as you go along), or a combination of the two, which is usually the strategy I prefer. Inductive coding is particularly useful in exploratory studies where themes are not predetermined.

Finally, organise your work process. Whether you’re coding manually (with highlighters and notes) or digitally (with software like NVivo or MAXQDA), set up a system that allows you to easily store, retrieve, and organise your codes. Even something as simple as the codebook sketched below can work.
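If you’d rather not use dedicated software, a plain data structure can serve as that system. Here is a minimal codebook sketch in Python; the codes, definitions, and segments are invented purely for illustration:

```python
# A minimal codebook: each code maps to a definition and an illustrative
# example of when to apply it. All entries here are invented examples.
codebook = {
    "working together": {
        "definition": "Participant describes collaborating on a shared task.",
        "example": "We wrote the chapter as a team.",
    },
    "emotional response": {
        "definition": "Participant expresses a feeling about an event or text.",
        "example": "I was quite surprised by the abstracts.",
    },
}

# Coded segments link each piece of data to one or more codes.
coded_segments = [
    {"segment": "We wrote the chapter as a team.", "codes": ["working together"]},
]

def segments_for(code):
    """Retrieve every data segment tagged with a given code."""
    return [s["segment"] for s in coded_segments if code in s["codes"]]

print(segments_for("working together"))
```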
How Do You Code?
Now that you are ready to begin, here is a step-by-step approach.
Initial Coding (Open Coding):
Go through your data line-by-line or paragraph-by-paragraph and assign descriptive codes to sections that seem relevant to your research questions or themes of interest. These codes should capture the essence of each segment. The open coding stage is often exploratory, and it’s normal to have a large number of codes that may seem disconnected. Coding can feel overwhelming at this stage, especially when dealing with large volumes of data. Break down the coding process into manageable sessions and focus on specific sections.
As an example, I’ll use the coding I did for a recent paper I wrote. Let’s imagine a participant said:
“I then worked with the two co-editors to get the people who were part of the writing group to submit some abstracts for what they would like to write for a book chapter. When we received these abstracts, I was quite surprised because they actually fit quite neatly into three categories.”
As I read this quote, I can see several concepts or ideas mentioned, including collaboration, teamwork, writing groups, book authorship, chapter authorship, emotional response to texts, categorisation, and similar experiences. These can be considered inductive codes and are the ones I would assign to this excerpt. As I mentioned earlier, I tend to use both inductive and deductive coding approaches. To code these sentences deductively, I need to use the concepts of my theoretical framework (which in this study was Communities of Practice). As such, I coded these sentences under the community element of the communities of practice framework. This also highlights that one passage can have multiple codes because ideas are often complex and interrelated.
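To make this concrete, here is one possible way such a multi-coded record could be stored if you were organising your codes programmatically; the structure below is an illustrative sketch, not a prescribed format:

```python
# One excerpt can carry several inductive codes plus deductive codes
# drawn from the theoretical framework (here, Communities of Practice).
coding_record = {
    "excerpt": (
        "I then worked with the two co-editors to get the people who were "
        "part of the writing group to submit some abstracts..."
    ),
    "inductive_codes": [
        "collaboration", "teamwork", "writing groups", "book authorship",
        "chapter authorship", "emotional response to texts",
        "categorisation", "similar experiences",
    ],
    # Deductive code taken from the Communities of Practice framework.
    "deductive_codes": ["community"],
}

print(len(coding_record["inductive_codes"]), "inductive codes assigned")
```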
Review and Refine Codes:
Once the initial coding is done, it’s time to refine. Read through your list of codes, combining those that overlap or represent similar concepts, and eliminating codes that appear redundant. This process reduces the number of codes and creates a clearer structure. For example, let’s say that across my dataset I had separate codes for teamwork, working as a team, collaborating, and working together. This illustrates a common challenge of coding: code drift. Over time, the researcher may use slightly different wordings, or the meaning of certain codes can evolve. Keeping a codebook (a reference document that defines each code) helps maintain consistency. During the refining stage, those four codes can be collapsed into one (e.g., working together), because having four separate codes for the same idea is redundant. You want each remaining code to represent a unique concept, even though some concepts may be closely related.
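If you are tracking your codes programmatically, the collapsing step might look something like the sketch below; the mapping itself is, of course, a researcher’s judgement call that belongs in the codebook:

```python
# Map each near-duplicate wording to one canonical code.
merge_map = {
    "teamwork": "working together",
    "working as a team": "working together",
    "collaborating": "working together",
}

def refine(codes):
    """Replace each code with its canonical form and drop duplicates."""
    return sorted({merge_map.get(code, code) for code in codes})

print(refine(["teamwork", "collaborating", "categorisation"]))
# -> ['categorisation', 'working together']
```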
Group Codes (Axial Coding):
Axial coding involves grouping related codes into larger categories, sometimes known as thematic clusters. At this stage, your job as the researcher is to start looking for connections between codes. Here, you’ll determine the relationships between your codes, creating categories and sub-categories that add coherence to the coded data. For example, let’s say I had codes for book authorship, chapter authorship, deciding author order, editorial decisions, and tasks in the publication process. These five codes could be grouped into a cluster, such as “complexities of publishing”, since they are all closely related.
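Continuing the programmatic sketch, axial coding amounts to grouping codes under a category label; inverting the grouping then makes it easy to roll any coded segment up to its category:

```python
# Axial coding: related codes grouped under a larger category.
categories = {
    "complexities of publishing": [
        "book authorship",
        "chapter authorship",
        "deciding author order",
        "editorial decisions",
        "tasks in the publication process",
    ],
}

# Invert the grouping so any code can be looked up by category.
code_to_category = {
    code: category
    for category, codes in categories.items()
    for code in codes
}

print(code_to_category["editorial decisions"])
# -> complexities of publishing
```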
Further Selective Coding to Create Themes:
Once you have your categories, the final step is to create your themes. A theme represents the core idea of several of your categories, thereby giving overarching insights that help you answer your research questions. There are different approaches to creating themes, as I highlighted in another blog post, but I tend to use Braun and Clarke’s reflexive thematic analysis in my work.
Let’s look back at that initial quote I had. In the final paper, this quote was under a theme called “Same, same, but different: Everybody has a story”. This theme is most closely related to that initial code I had called similar experiences. However, while the initial code was descriptive of my data, it did not yet fully reflect the nuance and complexity of the meaning of my participants’ quotes. I had to use my deep understanding of my participants’ words to develop a theme which provided answers to my research question. When I looked back at my codes, I noticed that my participants used words like “everybody has a story” and that they noted everyone’s experiences “were all similar to each other and at the same time different from each other”. These ideas were frequently repeated, and so were clustered together during axial coding. To then create my theme, I used my participants’ words (“everybody has a story”, sometimes termed an in vivo code) and combined them with a catchy phrase (“same, same, but different”). This helped me to answer my research question, which was related to what participants learnt from reading and providing feedback on each other’s work.
It is also important to note that themes are often interrelated, reflecting the complexity of human experience. It can, therefore, be useful to create a detailed explanation for your reader of how the themes work together to address your research topic. For example, this is what I wrote in my paper to explain the connection between the first and second themes in my study:
The first theme (“same, same, but different: everybody has a story”) underscores a dual realisation among participants: while everyone brings distinct and unique life stories and perspectives to the table, there is a profound commonality in the challenges and experiences they share, particularly in the context of writing and self-reflection. The second theme (“I am not alone: everyone has problems”) is related to the first, highlighting the transformative power of shared experiences in academic settings. By recognising the commonalities in their struggles, participants felt that they were able to foster a supportive community that valued openness, mutual support, and collective growth, ultimately enhancing their PhD journey and personal development.
Final Thoughts
Coding qualitative data may seem daunting at first, but the process becomes clearer with practice. At its core, coding is about translating real human stories into research findings that can inform, inspire, and change our understanding of complex issues. Through careful, thoughtful coding, you unlock the full potential of qualitative data: capturing not just what people say, but the deeper insights hidden within their words. Happy coding!
Questions to ponder
What criteria might a researcher use to decide whether a code is redundant or unique enough to retain during the coding refinement phase?
What are the potential advantages and limitations of using qualitative data analysis software (like NVivo or MAXQDA) compared to manual coding?
What role does familiarity with the data (from initial readings) play in the accuracy and depth of the coding process? Could familiarity also pose any risks?
Unlocking meaningful insights from data starts with selecting the right analysis strategy. Each approach to data analysis offers unique pathways for understanding complex information, yet choosing the best fit depends on knowing what each method can reveal. In this post, I explore five key strategies: statistical analysis, content or document analysis, thematic analysis, phenomenological analysis, and visual analysis. By unpacking their distinct strengths and limitations, I explore how these methods empower researchers to make well-informed, impactful decisions tailored to their specific research goals. It is important to note that most qualitative data analysis methods require you to do qualitative data coding. You can read more about how to do data coding in this blog post.
Statistical Analysis
Statistical analysis is a quantitative method that processes numerical data to uncover patterns, relationships, or trends. By applying statistical techniques such as regression analysis, ANOVA, or t-tests, researchers can make inferences about populations from sample data. Statistical analysis allows for reliable, data-backed conclusions and predictions, often supporting findings with measurable probabilities or confidence intervals.
The strength of statistical analysis is in its rigour and objectivity. This method allows researchers to draw generalisable conclusions from data and provides a level of precision that is invaluable in fields like healthcare, economics, and the natural sciences. When well-designed, statistical studies yield insights that can influence policies, predict trends, and drive impactful organisational decisions. Here is an example of how statistical analysis might be conducted in a research context. Imagine a researcher wants to determine if a new medicine significantly improves patients’ recovery from illness compared to the traditional treatment. To do this, they select two separate groups of patients: one that receives the traditional medicine and one that receives the new medicine. Both groups’ recovery times are collected over a period of time. After collecting the data, the researcher would apply a statistical test to compare the two groups and see whether any observed difference is statistically significant. A statistically significant result would indicate that one medicine improved recovery time compared with the other. Statistical calculations are often done using software.
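As a rough illustration of that final step, here is how the comparison might be run in Python with an independent-samples t-test; the recovery times below are made-up numbers purely for demonstration:

```python
from scipy import stats

# Hypothetical recovery times (in days) for two independent patient groups.
traditional = [14, 16, 15, 18, 17, 15, 16, 19, 14, 17]
new_medicine = [12, 13, 14, 12, 15, 11, 13, 14, 12, 13]

# Independent-samples t-test: do the two group means differ significantly?
t_stat, p_value = stats.ttest_ind(traditional, new_medicine)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

# A p-value below the chosen threshold (commonly 0.05) suggests the
# difference in mean recovery time is unlikely to be due to chance alone.
```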
Statistical analysis has its limitations, however. One major drawback is that it requires large, high-quality datasets to yield dependable results. Small sample sizes can introduce significant errors, as they may not accurately represent the larger population, leading to misleading conclusions. Poorly collected or biased data also compromises results, as it can distort findings or exaggerate relationships that are not truly there, reducing the accuracy and generalisability of insights drawn from the analysis. Misuse or misinterpretation of statistical tools can further compound these issues. Without careful application, analysts may draw incorrect conclusions. Over-reliance on p-values or neglecting assumptions underlying statistical tests can also lead to erroneous claims. These pitfalls make statistical literacy essential for researchers, as understanding the limitations and appropriate applications of statistical techniques is crucial for producing valid and actionable results.
Content or Document Analysis
Content and document analysis is a method used to systematically review and interpret how topics, themes, and narratives are represented in texts or documents, such as news articles, speeches, policy documents, or social media posts. Researchers segment content into manageable sections, categorising them into themes to identify patterns and relationships. By systematically coding data, analysts can explore the underlying discourses, values, and beliefs present in the material, shedding light on how social issues, ideas, or viewpoints are communicated over time. Imagine a researcher is interested in understanding how the government’s language around literacy programs has evolved over the past decade, focusing on whether policies emphasise economic benefits, educational equity, or community development. The researcher would identify and collect a series of key government policy documents on literacy published over that decade. A coding framework would then need to be created to categorise the main themes, keywords, and topics that appear in these documents. The researcher can then begin systematically coding each document, highlighting instances of relevant themes and noting their frequency and context. By comparing the frequency and placement of these themes, the researcher can identify patterns, such as whether the focus on literacy policy has shifted from an economic and workforce development perspective to a broader emphasis on lifelong learning and social inclusion.
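Here is a deliberately simplified sketch of the counting step in Python. Real coding frameworks are far richer than keyword lists, and the keywords and document text below are invented for illustration:

```python
from collections import Counter

# Hypothetical coding framework: each theme has indicative keywords.
framework = {
    "economic benefits": ["workforce", "productivity", "employment"],
    "educational equity": ["access", "equity", "disadvantage"],
    "community development": ["community", "lifelong learning", "inclusion"],
}

def count_themes(document_text):
    """Tally how often each theme's keywords appear in one document."""
    text = document_text.lower()
    counts = Counter()
    for theme, keywords in framework.items():
        counts[theme] = sum(text.count(keyword) for keyword in keywords)
    return counts

# Comparing counts across documents from different years can reveal
# whether the policy emphasis has shifted over time.
doc_2015 = "Literacy programs boost productivity and employment in the workforce."
print(count_themes(doc_2015))
```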
One of the key strengths of content and document analysis is its ability to make sense of qualitative data in a structured, quantitative-like manner. This method allows researchers to conduct reviews efficiently, identifying trends and shifts in discourse over time or between different contexts. By examining context, document analysis reveals nuanced cultural and historical insights that might otherwise remain obscured. This approach is also highly adaptable, meaning it can be used across various fields, from sociology and media studies to political science and business research. Additionally, it enables researchers to process large volumes of textual data systematically, making it ideal for studies with extensive datasets. With a robust coding framework, content analysis also provides a replicable process, allowing other researchers to verify findings or apply similar frameworks in different contexts.
Despite its usefulness, content and document analysis has several limitations that researchers must consider. One significant drawback is the potential for researcher bias. Without a well-defined and consistently applied coding framework, results may vary considerably between different researchers or even the same researcher over time. Additionally, content analysis may fail to capture deeper, underlying meanings or nuanced insights if the categories are too rigid or simplistic, leading to an oversimplification of complex messages. Another limitation is the dependency on the quality and relevance of the chosen content; if the material does not comprehensively represent the research topic or if the data source is limited, findings may not accurately reflect broader trends or social phenomena.
Thematic Analysis
Thematic analysis is a flexible qualitative method that involves identifying, analysing, and reporting patterns or themes within data. Researchers use this approach to make sense of qualitative data by sorting segments of information into recurring themes, which can reveal insights into the attitudes, motivations, and concerns of participants. Thematic analysis is lauded for its versatility, as it can be used across a wide range of research contexts and types of qualitative data. Unlike more rigid methods, thematic analysis is adaptable and accessible, allowing researchers to approach data with a flexible coding framework. This makes it easier to derive meaningful interpretations from raw data, especially when exploring complex social or cultural issues. Furthermore, it provides an excellent balance of depth and structure, enabling researchers to gain valuable insights without requiring advanced technical skills.
There are various forms of thematic analysis, each offering unique perspectives on how themes are identified and interpreted. One well-known form that I like to use is Braun and Clarke’s reflexive thematic analysis, which emphasises the active role of the researcher in identifying, coding, and constructing themes. In this method, themes are developed through an iterative, reflective process that is deeply influenced by the researcher’s insights and engagement with the data. For example, imagine a researcher examining interviews about people’s experiences in adult literacy programs. Following this approach, the researcher would first familiarise themselves with the data, repeatedly reading transcripts to get an initial sense of participants’ responses. Next, they would begin generating initial codes, such as “empowerment,” “challenges in accessing resources,” or “improvements in confidence.” As the researcher moves through the iterative stages of analysis, these codes are grouped into potential themes that are refined and reviewed against the dataset, ultimately allowing the researcher to construct meaningful, nuanced themes such as “literacy is the gateway to success”. Through this reflexive process, the researcher develops a rich, context-sensitive understanding that captures both the explicit and implicit meanings in participants’ narratives.
On the downside, thematic analysis is susceptible to researcher bias due to the subjectivity involved in constructing themes. This subjectivity is both a strength, as it allows for rich, multifaceted interpretations, and a challenge, as it introduces inconsistencies that can complicate analysis. There is also the risk of oversimplification, as complex data may be reduced to general themes that lack the depth and detail needed to fully represent participants’ experiences or the nuances of the research topic. Thorough documentation and transparency are crucial in minimising these risks and ensuring that the analysis process is both rigorous and trustworthy. Finally, thematic analysis can be time-consuming, especially in larger datasets, as it requires careful, repeated reading and coding.
Phenomenological Analysis
Phenomenological analysis is a qualitative research method designed to uncover individuals’ lived experiences and subjective perceptions regarding a specific phenomenon. Unlike other methodologies, phenomenology prioritises personal perspectives, allowing researchers to understand the meaning and significance of phenomena as experienced by individuals themselves. Through in-depth interviews or narrative analysis, researchers gather first-hand accounts and, by reflecting on the essence of these experiences, reveal what it is like to live through particular events, conditions, or relationships.
The power of phenomenological analysis lies in its depth and authenticity, allowing researchers to explore phenomena from the insider’s perspective. By focusing on the unique context of each participant’s experience, phenomenology illuminates insights that can inform personalised interventions or support programmes tailored to the specific needs of individuals. Imagine a researcher is interested in understanding the lived experiences of casual workers in the fast food industry, particularly the challenges and feelings associated with their roles. The goal is to uncover the essence of what it is like to work in a demanding, often high-turnover environment, exploring both the daily struggles and occasional rewards that come with casual, temporary employment in fast food.
Step 1: Transcribe and Identify Significant Statements

After gathering the data, the researcher transcribes the interviews and begins identifying significant statements or phrases that capture the core experiences of these workers. Statements like “feeling undervalued,” “pressure to work quickly under stress,” or “sense of camaraderie with coworkers” highlight key aspects of their daily lives. These meaningful phrases are then extracted from each participant’s narrative and listed as individual units of meaning.
Step 2: Develop Themes and Patterns

Next, the researcher clusters these significant statements into overarching themes that reflect shared experiences. Common themes might include “navigating stressful customer interactions,” “lack of stability and security,” and “finding support among coworkers.” Each theme represents a pattern that emerges across multiple participants, although the researcher also notes any unique perspectives that deepen the understanding of individual differences within the group.
Step 3: Describe the Essence of the Experience

Finally, the researcher synthesises these themes to describe the essence of being a casual worker in the fast food sector. This description may reveal that while workers face a lack of job security, low wages, and high levels of stress, many also experience a strong bond with their coworkers, who share similar challenges and help them get through demanding shifts. The analysis highlights the dual reality of fast food work: a job often marked by emotional and physical demands, but also by a sense of resilience and solidarity among workers who rely on each other for support.
Phenomenological analysis, while valuable for exploring personal experiences, has several limitations. One major limitation is that it typically relies on small sample sizes, as the focus is on in-depth, detailed accounts rather than broader trends. This means that while the findings may offer profound insights into individual experiences, they may not accurately represent the experiences of a larger population, limiting the ability to generalise results. Additionally, phenomenological analysis is highly dependent on participants’ abilities to articulate their feelings and experiences clearly, which can vary greatly among individuals and may lead to incomplete or skewed data. Finally, the researcher’s own biases and interpretations must be carefully managed, given the focus on individual lived experiences. This makes it essential for researchers to maintain reflexivity and transparency throughout the process.
Visual Analysis
Visual analysis is a qualitative method focused on interpreting visual data, such as photographs, paintings, videos, or digital media. Researchers use this approach to explore what images convey and how they might reflect cultural, social, or political realities. Analysts can understand how visuals communicate ideas, influence perceptions, and reinforce or challenge narratives by examining elements like composition, symbolism, and visual rhetoric.
The main strength of visual analysis is its ability to reveal insights beyond text-based data, capturing nuances that are often more immediately impactful or emotive. Visual analysis allows researchers to assess how visuals shape public opinion, influence social norms, and reflect societal values, often bringing forth perspectives that written data might overlook. As an example, consider how visual analysis of a photograph used in a social media campaign could help a researcher explore messaging about climate change.
Step 1: Observing and Describing the Visual

Imagine the photo shows a lone polar bear stranded on a small floe of melting ice surrounded by vast, open water under a grey, clouded sky. The analyst begins by describing the scene in detail without adding interpretations. Key elements would include the polar bear’s posture, the ice floe’s size, and the cold, muted tones of the colour palette.
Step 2: Analysing the Composition and Symbolism

Next, the analyst examines the visual’s composition and symbolic meaning. The isolated bear on a shrinking piece of ice might symbolise the vulnerability of polar species in the face of climate change. The expansive ocean suggests a loss of habitat and the looming threat of extinction. The choice of muted colours and the cloudy sky could symbolise a bleak future, reinforcing the urgency of the climate message.
Step 3: Interpreting the Social and Cultural Context

Finally, the analyst interprets the broader social context. This image might aim to elicit an emotional response from viewers, compelling them to act or advocate for environmental conservation. Given the popularity of this type of imagery in climate change campaigns, the photo not only serves as a call to action but also taps into widely shared cultural understandings about climate risks and human responsibility for environmental impact. The analysis would consider how effectively this image communicates its message and resonates with the public.
However, interpreting visual data can be highly subjective, leaving significant room for varying interpretations depending on the researcher’s perspective and background. Each viewer may bring their own cultural, historical, and personal biases to the analysis, which can influence how elements within an image are perceived and understood. For accurate analysis, researchers need a strong foundation in visual literacy, including concepts like semiotics, which explores the relationship between symbols and their meanings, and symbolism, which examines how visual elements can represent abstract ideas or emotions. Understanding these concepts is essential for decoding the deeper layers of meaning within an image, yet applying them correctly can be complex and time-intensive. Additionally, the effectiveness of visual analysis is limited by the context and quality of the visuals themselves. Images captured in one context may not convey the same meanings elsewhere, as cultural interpretations can vary widely. Without sufficient contextual information, researchers may draw flawed conclusions or miss important nuances.
Questions to ponder
How might the researcher influence the data analysis? Is this something that should be controlled for or should this subjectivity be valued as part of the analysis process?
What challenges might arise when using visual analysis to interpret visuals from diverse cultural backgrounds, and how could researchers account for these differences in their analyses?
In research, data collection is the cornerstone of meaningful analysis. Whether you’re conducting a small-scale qualitative study or a large quantitative survey, the method you use determines the depth, breadth, and reliability of your findings. Imagine you’re trying to understand how people form habits such as saving money, staying fit, or using technology. Do you observe them in their natural environment? Ask them about their experiences in a one-on-one interview? Or perhaps send out a survey to gather insights from a larger group? The data collection method you choose shapes not only the data you gather but also the kinds of insights you can uncover. Each strategy offers unique advantages depending on your research goals. In this post, we’ll explore some of the most common data collection strategies used in research.
Collecting Artefacts
Artefacts, objects that hold significance for the participant or community involved in the research, are powerful tools in research. Researchers often collect artefacts like historical photographs, tools, artwork, or social media histories to understand the lived experiences of participants. Artefacts can reveal much about how individuals or groups engage with their environment, what they value, and how they express themselves. By analysing these items, researchers can uncover layers of meaning that may not be easily articulated through words alone.
One of the strengths of using artefacts in research is their ability to offer concrete, visual, or material evidence of practices. This makes them particularly valuable in anthropological, archaeological, and social science research, where these objects help reconstruct historical contexts and everyday life. Artefacts can provide insights into how people interact with their environments and how these interactions evolve over time. For example, in a study on family traditions, examining heirlooms passed down through generations could reveal shifting cultural identities and values. This historical dimension allows researchers to trace changes in practices, beliefs, and social norms, adding depth to their understanding of how communities adapt over time. Artefacts also offer a unique way to bridge the gap between the tangible and intangible aspects of culture. They allow researchers to explore not only what people produce or use but also the symbolic meanings attached to these objects. For instance, a simple household item such as a handmade quilt can hold multiple layers of meaning, representing family heritage, craftsmanship, and the personal stories of those who contributed to its creation. Finally, artefacts like online posts or digital art also act as significant cultural markers that help researchers understand contemporary social dynamics.
However, artefacts also come with limitations. The interpretation of artefacts is often subjective and context-dependent, which can introduce bias or misinterpretation. Researchers must be careful not to impose their own meanings on the artefacts they study, as the significance of an object can vary widely between individuals and cultures. Additionally, artefacts may not always provide a complete picture of a community or participant’s experience. They can also be difficult to access, especially in historical research, where certain items may have been lost or destroyed, or in digital contexts, where data may be incomplete or obscured. Artefact collection also requires careful contextual analysis to avoid over-reliance on material evidence alone, which might overlook the symbolic or emotional dimensions of social life. Despite the challenges, artefacts remain invaluable, as they provide concrete evidence of how people navigate and negotiate their worlds through material culture.
Collecting Documents
Documents, whether they are reports, policies, letters, or media articles, serve as rich sources of data that reflect the discourses operating within a society. By analysing documents, researchers can explore the language, ideas, and ideologies that shape public and private life. This method is particularly helpful in analysing discourse, where the focus is on understanding how power, identity, and knowledge are constructed through text. This method is also flexible, allowing for both content analysis, which examines what is explicitly stated in the text, and critical analysis, which explores underlying power dynamics, biases, and ideologies embedded in the language.
One of the key strengths of using documents in research is their ability to capture the official and formal expressions of institutions and governments. For example, policy documents, government reports, or corporate memos offer direct insight into the decision-making processes, policies, and organisational culture of institutions. In addition, documents can also serve as powerful tools for examining the relationship between society and governance. For instance, legal documents, such as court rulings or legislative texts, can reveal how laws and policies are interpreted and enforced, shedding light on issues of justice, inequality, or rights. Similarly, media articles or public speeches can provide clues about public sentiment, government propaganda, or resistance movements. Documents can also be used to track shifts in public perception, showing how key events or social movements are framed in different contexts. In this way, document analysis not only helps to trace the evolution of ideas but also highlights how different stakeholders, such as governments, corporations, or social groups, compete to control narratives and influence public opinion. Finally, documents provide a historical record, making it possible to trace how policies or social norms have evolved over time, revealing patterns of change and continuity in societal values or institutional priorities.
However, documents are not without limitations. One significant challenge is that they often reflect the perspectives of dominant groups or institutions, potentially marginalising the voices and experiences of less powerful or underrepresented populations. For example, government reports may present an idealised view of policy implementation, leaving out the practical challenges or opposition encountered. Furthermore, documents are often created for specific purposes or audiences, which may influence the way information is framed or omitted. As a result, researchers need to critically assess the context in which documents were produced and the intentions behind their creation to avoid drawing incomplete or biased conclusions. The availability and accessibility of documents, particularly in historical research, can also pose challenges, as certain materials may be lost, censored, or restricted. Despite their limitations, documents remain essential for understanding power dynamics and societal change across time and space, offering a window into both public discourse and the hidden workings of institutions.
Diaries and Reflections
Diaries and personal reflections provide intimate access to participants’ thoughts, emotions, and daily routines. These self-recorded accounts enable researchers to explore individuals’ subjective experiences over time. This method can also encourage participants to not only record their experiences but also engage in deeper reflection, offering more nuanced and insightful data as the collection process continues.
One of the key strengths of diaries and reflections is their ability to capture real-time data, allowing researchers to understand how participants experience events as they happen, without the distortions of hindsight. Unlike retrospective interviews, where participants may unintentionally reshape their memories, diaries minimise the risk of memory bias by recording emotions, thoughts, and experiences in the moment. Diaries enable researchers to delve into the minutiae of everyday life, offering insights into personal decision-making processes, routines, and small events that might not otherwise surface in structured research. In addition to providing real-time insights, diaries and personal reflections can also foster a sense of participant agency, giving individuals greater control over how they narrate their experiences. This can be particularly important in sensitive research areas such as mental health, trauma, or marginalised communities, where participants may feel more comfortable expressing themselves privately rather than in face-to-face interviews. Diaries offer a space for self-reflection, allowing participants to articulate thoughts or emotions they might struggle to share openly. Furthermore, researchers can use diary entries to analyse patterns over time, identifying triggers or recurring themes in participants’ lives.
However, diaries and reflections come with certain limitations. One challenge is that they require a significant commitment from participants, who must regularly engage in recording their thoughts and experiences. This can lead to participant fatigue or incomplete data if individuals struggle to maintain consistent entries. Another limitation is the potential for participants to edit or censor their reflections, especially when writing about sensitive or painful topics. In such cases, they may choose to omit certain details, leading to gaps in the data. Additionally, the subjective nature of diaries means that the data can be highly personal and unique to each participant, making it difficult to generalise findings across larger populations. Researchers must also be mindful of the ethical implications of analysing such intimate, personal data, particularly in sensitive research areas. Despite the challenges, diaries and reflections remain invaluable tools for capturing both the immediacy of lived experience and the complex process of self-examination, making them essential for studies that aim to explore personal growth, emotional responses, or long-term behavioural changes.
Focus Groups
Focus groups bring together small groups of participants to discuss a particular topic in a structured way. They are particularly useful for exploring collective opinions, social norms, and group dynamics. Through moderated discussions, researchers can observe how ideas are shaped through interaction and debate, offering insights that might not emerge in individual interviews.
One of the key strengths of focus groups is their ability to generate rich, interactive discussions, where participants can build on each other’s ideas or challenge differing perspectives. This interaction often leads to deeper exploration of topics, uncovering group opinions, shared values, and the nuances of social norms. In addition to promoting rich discussions, focus groups can also reveal the influence of peer dynamics on individual opinions and behaviours. By observing how participants respond to each other, researchers can explore how group pressures, conformity, or dissent shape collective decision-making. The interactive nature of focus groups can also encourage participants to challenge or refine their initial thoughts, resulting in more reflective and considered responses. This dynamic makes focus groups an ideal method for understanding both consensus and conflict within a group, offering a deeper understanding of social norms and collective thought processes. Finally, the method is flexible, allowing researchers to ask specific, structured questions or explore more open-ended themes, making it adaptable to a wide range of topics and disciplines.
However, focus groups also have certain weaknesses. Group dynamics can sometimes inhibit individual expression, particularly when strong personalities dominate the conversation or when participants feel pressured to conform to the majority opinion. This “groupthink” effect can skew the data, making it difficult to discern individual perspectives. Additionally, the presence of a moderator can influence the discussion, either consciously or unconsciously guiding participants toward certain viewpoints. The setting of focus groups also limits their use for highly personal or sensitive topics, as some participants may be reluctant to share openly in a group. Furthermore, the data collected from focus groups is inherently qualitative and subjective, making it less generalisable than data collected through larger, more quantitative methods like surveys. Despite the challenges, focus groups remain a powerful tool for capturing group-level insights and understanding how social interactions shape opinions and behaviours.
Interviews
Interviews are one of the most popular data collection methods in qualitative research. They allow researchers to gather in-depth, personal responses on a wide range of topics. Interviews can range from highly structured (with predetermined questions) to semi-structured or unstructured, allowing for flexibility and exploration of unexpected themes.
One of the key strengths of interviews is their ability to provide rich, detailed data that offers insight into individual perspectives, experiences, and emotions. As researchers can ask follow-up questions and adapt their line of inquiry based on participants’ responses, this method allows for a more nuanced understanding of the topic. In addition to offering depth, interviews can be adapted to suit various research contexts. Interviews also offer flexibility in terms of the medium, with telephone or video interviews becoming increasingly common. Furthermore, interviews can foster a trusting, one-on-one environment that encourages participants to share more openly, especially on sensitive or personal topics. This direct interaction allows researchers to pick up on non-verbal cues, such as body language or tone of voice, which can offer additional layers of meaning beyond spoken words. Finally, follow-up interviews can be used to clarify or deepen initial responses, allowing researchers to probe into evolving themes. This makes interviews particularly valuable for exploring complex, multifaceted issues where a deep understanding of participant perspectives is crucial.
However, interviews also present some challenges. One of the main weaknesses is their time-intensive nature, both in conducting the interviews and in analysing the data. Interviews typically require significant time and resources, especially in large-scale studies. Additionally, the quality of the data collected depends heavily on the skill of the interviewer. Bias, leading questions, or poor rapport with the participant can affect the reliability of the responses. Interviews also rely on the participants’ ability to articulate their thoughts and memories accurately, which can be limited by factors such as recall bias, personal discomfort, or cultural differences in communication. Furthermore, interview data is subjective and context-dependent, making it difficult to generalise findings across larger populations. Despite their limitations, interviews remain one of the most effective tools for capturing personal experiences and rich qualitative data.
Observations
Observation is a powerful method for understanding practices in real-time, in their natural setting. By immersing themselves in the environment they are studying, researchers can capture behaviours, interactions, and routines that might otherwise be missed. Observations can be overt, where participants know they are being watched, or covert, where they do not.
One of the key strengths of observational research is its ability to capture complex social interactions as they unfold naturally, without the interference of structured questioning. Observations offer rich, contextual data that reveals not just what people do, but how and why they do it. This method is especially useful in fields like ethnography, education, and organisational studies, where understanding the subtle dynamics of human interaction is crucial. In addition to capturing real-time interactions, observation also allows researchers to see the unspoken or routine aspects of social life, which reveal cultural norms, power structures, and social roles that participants might not have consciously articulated. Observational data can be particularly useful in uncovering contradictions between what people say and what they do, offering insights into gaps between stated beliefs and actual practices. This makes observation an indispensable tool for capturing the nuanced, often subtle dimensions of human behaviour and social organisation that might remain invisible through other methods.
However, observational research also has its limitations. One challenge is the potential for observer bias, where the presence of the researcher or their personal interpretations influence the data being collected. Even in covert observations, researchers bring their own perspectives, which can affect how they perceive and record events. Furthermore, observational studies can be time-consuming and resource-intensive, requiring researchers to spend extended periods in the field to build trust and capture a comprehensive picture of the environment. Another limitation is the potential ethical concerns that arise in covert observation, where participants are unaware they are being observed. This can raise issues around informed consent and privacy, particularly in sensitive research contexts. Since observations rely heavily on the researcher’s interpretation, they may lack the objectivity that other data collection methods, like surveys or interviews, can provide. Despite its challenges, observation remains one of the most effective ways to understand real-world interactions and social processes in depth.
Psychological Measures
Psychological measures, such as standardised tests and scales, are often used in research on human behaviour and cognition. These tools help quantify aspects of the mind, such as emotional states. Researchers use these measures to assess correlations between psychological factors and other variables, such as academic performance or wellbeing.
One of the key strengths of psychological measures is their ability to turn complex, abstract constructs, like anxiety or resilience, into quantifiable data that can be statistically analysed. This enables researchers to draw meaningful comparisons across individuals or groups and to detect patterns that may not be evident through qualitative methods. These tools are particularly valuable in large-scale studies, as they provide a reliable and consistent way to gather data from diverse populations, making them crucial in fields like psychology, education, and healthcare. Additionally, because these measures are often standardised, they can be used to track changes over time or assess the impact of interventions, providing objective benchmarks for evaluating progress or treatment outcomes. For example, researchers might use cognitive tests to assess the impact of educational programmes on children’s problem-solving abilities or employ depression scales to evaluate the effectiveness of therapeutic interventions.
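As a simple illustration of how such scales are quantified, here is a sketch of scoring a hypothetical five-item Likert scale with reverse-worded items; the item layout is invented and does not correspond to any particular published instrument:

```python
# Scoring a hypothetical 5-item Likert scale (responses from 1 to 5), where
# some items are reverse-worded and must be flipped before summing.
REVERSE_ITEMS = {2, 4}   # zero-based positions of reverse-worded items
SCALE_MAX = 5

def score_scale(responses):
    """Return the total scale score after reverse-scoring flagged items."""
    total = 0
    for i, answer in enumerate(responses):
        if i in REVERSE_ITEMS:
            answer = (SCALE_MAX + 1) - answer  # 1<->5, 2<->4, 3 stays 3
        total += answer
    return total

print(score_scale([4, 5, 2, 4, 1]))  # -> 4 + 5 + 4 + 4 + 5 = 22
```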
However, psychological measures also have limitations. One major challenge is ensuring that the tools are culturally appropriate and valid for the population being studied. Psychological constructs may be interpreted differently across cultures, which can lead to biased results if the measures are not adapted accordingly. Additionally, while standardised tests aim to be objective, they may not capture the full complexity of human experience, reducing rich, multifaceted behaviours or emotions to simple scores. This can sometimes lead to an oversimplified understanding of psychological phenomena, neglecting the nuances that qualitative approaches might uncover. Another limitation is the potential for test anxiety or other external factors to influence the results, which can affect the reliability of the data. Despite these limitations, psychological measures remain indispensable for capturing and understanding the key psychological factors that influence human behaviour.
Surveys and Questionnaires
Surveys and questionnaires are go-to methods for gathering large amounts of data from a broad audience. Surveys are useful for identifying trends, patterns, and generalisable insights about populations. They are particularly valuable in quantitative research, where the goal is to collect numerical data that can be statistically analysed. It is important to note, though, that surveys can also be used in qualitative research, where researchers can gather information through a series of open-ended questions.
One of the major strengths of surveys and questionnaires is their scalability. Researchers can administer surveys to thousands of participants at once, whether online, by mail, or in person, allowing them to gather data quickly and efficiently. This scalability makes them ideal for large-scale studies that require a broad reach, such as national polls on public opinion or global studies on health trends. Surveys are also versatile, covering a wide range of topics from political attitudes to consumer preferences, making them an essential tool in disciplines like sociology, marketing, and public health. By collecting standardised data from many individuals, surveys enable researchers to make population-wide inferences and identify large-scale patterns or correlations that may not be visible in smaller studies.
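As a small worked example of such a population-wide inference, here is how a survey proportion and its 95% confidence interval might be calculated; the figures are invented for illustration:

```python
import math

# Hypothetical survey result: 540 of 1,200 respondents agree with a statement.
agree, n = 540, 1200
p = agree / n

# Normal-approximation 95% confidence interval for a proportion.
margin = 1.96 * math.sqrt(p * (1 - p) / n)
print(f"{p:.1%} agree (95% CI: {p - margin:.1%} to {p + margin:.1%})")
# -> 45.0% agree (95% CI: 42.2% to 47.8%)
```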
However, surveys and questionnaires also come with certain limitations. One significant challenge is ensuring that the questions are well-designed to avoid biases that can skew results. Poorly worded, leading, or ambiguous questions can be misinterpreted by participants, making the data less reliable. Additionally, surveys rely on self-reporting, which can introduce response bias if participants provide socially desirable answers or if they misunderstand the question. Response rates can also vary significantly depending on the length, complexity, or perceived relevance of the survey, which can impact the representativeness of the data. Furthermore, surveys may not capture the depth or nuance of participant experiences, particularly when dealing with complex or sensitive topics, where qualitative methods like interviews might provide richer insights. Despite these challenges, when properly executed, surveys and questionnaires remain a cornerstone of research, providing comprehensive, actionable insights on a large scale.
Visual Methods
Visual methods use photographs, videos, or other visual media to collect and analyse data. These methods are particularly powerful for capturing complex social realities that are difficult to describe in words. Researchers may ask participants to take photos of their environment or document particular processes through video, adding a layer of richness to the data.
One of the major strengths of visual methods is their ability to capture rich, multifaceted data that can reveal subtle social and cultural dynamics. Visual data, whether photographs, drawings, or videos, enable participants to express themselves creatively, offering a participatory dimension to research that empowers individuals to represent their own experiences and perspectives. By allowing participants to use cameras or create visual artefacts, researchers can access more nuanced data that reflects not just what people say, but how they see and experience their world. Visual methods also provide an immediacy and emotional impact that words sometimes struggle to convey. This immediacy makes visual methods particularly effective for engaging with audiences beyond academia, helping to communicate research findings to the public, stakeholders, and decision-makers. Visual methods are also well-suited for research on space, place, identity, and memory, as they allow researchers to “see” how individuals interact with their surroundings and capture details that may otherwise go unnoticed. In this way, visual methods can enhance the richness and depth of qualitative research, making them a powerful complement to traditional text-based methods.
However, visual methods come with their own set of challenges. One significant limitation is the potential for subjectivity in both the collection and interpretation of visual data. The meaning behind a visual artefact can vary widely depending on the viewer’s perspective, cultural background, or emotional response. Researchers must be cautious not to impose their own interpretations on the visual data without considering the context in which it was created. Additionally, while visual methods are valuable for capturing rich qualitative data, they can be resource-intensive, requiring time, technology, and skills to produce and analyse. Ethical concerns can also arise, particularly around consent and privacy, especially when working with vulnerable groups or in public spaces. Ensuring participants’ rights to control how their images are used is crucial to maintaining ethical standards in visual research. Despite the challenges, visual methods offer a unique and compelling way to capture the complexity of human experience and translate research findings into powerful, accessible narratives.
Conclusion
Selecting the right data collection method is not just a procedural step in research; it is a decision that shapes the quality, depth, and relevance of the findings. Whether you are collecting artefacts, conducting interviews, or using psychological measures, each method offers unique insights and comes with its own set of strengths and challenges. The most effective research often combines multiple approaches, allowing for a more comprehensive understanding of complex issues.
As research continues to evolve with advances in technology and new methodologies, the ability to critically evaluate and adapt data collection strategies becomes even more important. Researchers must consider not only the nature of their research questions but also ethical concerns, cultural contexts, and the practicalities of time and resources. By thoughtfully selecting and integrating various methods, researchers can ensure that their studies yield meaningful, reliable, and impactful data.
You can also learn more about research designs and methods by watching the videos below.
Questions to ponder
What ethical considerations should researchers keep in mind when using personal narratives such as diaries and reflections?
How might visual methods offer unique insights into social practices compared to traditional textual data collection methods?
In what contexts might interviews be more effective than surveys, and vice versa, for gathering rich, qualitative data?
Dr Lynette Pretorius is an award-winning educator and researcher in the fields of academic language, literacy, research skills, and research methodologies.
The way researchers select their participants impacts the validity and reliability of their findings, making participant recruitment one of the most crucial steps in the research process. But how do researchers go about this task? What strategies do they use to ensure their sample accurately reflects the broader population or the group they are investigating? Let’s explore some common participant recruitment strategies, breaking down their strengths, weaknesses, and best use cases. This post will cover six key sampling techniques: convenience sampling, purposive sampling, snowball sampling, random sampling, stratified sampling, and quota sampling.
Convenience Sampling
Convenience sampling, as the name implies, revolves around ease of access and availability. This method involves selecting participants who are nearby, easily accessible, and willing to take part in the study. It’s a go-to choice for researchers when they need to collect data quickly and with minimal effort. Instead of engaging in time-consuming and resource-intensive processes to identify and recruit participants, convenience sampling allows researchers to gather data from those who happen to be in the right place at the right time, or who meet the study’s basic criteria and are easy to contact.
One of the most notable benefits of convenience sampling is its speed and cost-effectiveness. Since participants are easy to reach, researchers can save both time and resources that would otherwise be spent on recruitment strategies, travel, or extensive outreach. For example, if you’re studying employee engagement in the workplace, you might simply survey your colleagues, since they are readily available and meet the general criteria of being employees. You don’t need to look far or conduct an elaborate recruitment process. This ease of implementation is especially valuable when dealing with limited budgets or tight deadlines. Convenience sampling also simplifies logistics, as researchers don’t need to source participants from outside their immediate environment, which can be particularly helpful in the initial stages of research where the primary goal is to test ideas or gather preliminary data.
Despite its practicality, convenience sampling carries a significant risk of bias. Since the sample is drawn from a pool of easily accessible participants, it may not reflect the diversity of the broader population. This lack of representation can lead to skewed results, limiting the generalisability of the study’s conclusions. Moreover, convenience sampling often captures a very specific subset of individuals: those who are willing to participate. People who are available and motivated to take part may differ significantly from those who are harder to reach, potentially introducing a self-selection bias. This means that the participants in your study might share certain characteristics that make them different from the larger group you’re trying to understand, thereby limiting the accuracy and breadth of the findings.
Convenience sampling is best suited for exploratory research, pilot studies, or projects where time and resources are constrained. It’s a practical method when the research goal is to test hypotheses, gather preliminary data, or explore an emerging field. However, for studies where generalising findings to a larger population is critical, convenience sampling is not recommended. In these cases, a more representative sampling method, such as random or stratified sampling, would yield more reliable and valid results.
Purposive Sampling
Purposive sampling, also known as purposeful sampling, is a strategically driven approach to participant selection, designed to align closely with the objectives of the research. It involves the deliberate selection of individuals who possess specific characteristics, knowledge, or experiences that are directly relevant to the study’s focus. The intention here is not to gather a wide, diverse group of participants, but to choose individuals whose particular insights can provide depth and richness to the data. In purposive sampling, researchers carefully define the criteria for inclusion, selecting participants based on how well they fit the study’s needs. This targeted approach helps ensure that the participants are not only suitable but also capable of offering the kind of focused and contextually relevant information that the research seeks to uncover.
The primary strength of purposive sampling lies in its efficiency and precision. By handpicking participants based on specific criteria, researchers can ensure that every individual involved in the study has a direct connection to the research topic, which enhances the quality of the data collected. For instance, if a researcher is investigating the experiences of people recovering from cancer, they would purposefully select participants who have undergone cancer treatment, ensuring that the data collected is directly relevant to the research question. This method is especially useful in qualitative research, where the goal is often to gain a deeper understanding of a particular phenomenon rather than to generalise findings to a larger population. Moreover, purposive sampling is often more practical when working with small or hard-to-reach populations. In studies involving niche groups, such as people with rare medical conditions or members of specific subcultures, purposive sampling enables researchers to focus on finding individuals who meet the study’s strict criteria, bypassing the need for broader recruitment efforts that may yield less relevant participants.
While purposive sampling offers many advantages in terms of relevance and efficiency, it also comes with inherent limitations, the most significant of which is the risk of selection bias. Since participants are chosen subjectively by the researcher, there is always the potential for bias in the selection process. The researcher’s choices may be influenced by preconceived notions about who would provide the most useful data, which could result in an unbalanced or unrepresentative sample. Since the sample is intentionally selective, it does not provide an accurate cross-section of a broader group. As a result, purposive sampling is not ideal for studies where broad generalisability is a key objective.
Purposive sampling is most commonly employed in qualitative research, where the goal is to explore specific themes, experiences, or phenomena in great detail. It is particularly useful when researchers are investigating a clearly defined group or phenomenon, such as in case studies, ethnographic research, or studies focusing on specialised areas like mental health, education, or organisational behaviour. Additionally, purposive sampling is often used in evaluation research, where the goal is to assess a programme, policy, or intervention. By focusing on individuals with firsthand experience, researchers can gather detailed feedback that is crucial for evaluating the effectiveness of the intervention.
Snowball Sampling
Snowball sampling is a participant recruitment method that relies heavily on social networks and personal referrals to build a sample. The process begins with a small group of initial participants who are chosen based on their relevance to the study. These participants are then asked to refer others they know who meet the study’s criteria, who in turn refer more people, and so on, creating a snowball effect. Over time, the sample grows organically, expanding through connections within a specific community or network.
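To illustrate how these referral waves build a sample, here is a minimal sketch in Python; the seed participants and the referral network are, of course, invented.

```python
# A minimal sketch of a snowball sample growing through referral waves.
# The referral network below is invented for illustration.
referrals = {  # who each participant is willing to refer
    "Seed A": ["P1", "P2"],
    "Seed B": ["P3"],
    "P1": ["P4", "P5"],
    "P3": ["P6"],
}

sample = ["Seed A", "Seed B"]  # the initial, purposefully chosen seeds
wave = list(sample)

while wave:  # each wave of participants refers the next
    next_wave = []
    for person in wave:
        for referred in referrals.get(person, []):
            if referred not in sample:
                sample.append(referred)
                next_wave.append(referred)
    wave = next_wave

print(sample)  # ['Seed A', 'Seed B', 'P1', 'P2', 'P3', 'P4', 'P5', 'P6']
```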
This method is especially useful when researchers are working with hard-to-reach populations. These might include people in marginalised groups, individuals involved in illegal activities, or those with experiences that are not easily accessible through conventional recruitment methods, such as people who have experienced homelessness or are part of underground subcultures. In many cases, people within these groups may not want to reveal their identities to researchers, especially if their involvement in the group is sensitive or stigmatised. However, through personal referrals from trusted peers, they may be more likely to participate. The trust established between members of the community can make them more comfortable with sharing their experiences, allowing researchers to collect rich, authentic data from participants who would otherwise be unreachable. Snowball sampling can also be highly cost-efficient and flexible.
Despite its advantages, snowball sampling has several potential drawbacks, the most notable being the risk of bias. Since participants are recruited through personal networks, the sample is often restricted to people who are socially connected, which can limit the diversity of the sample. This lack of diversity can skew the results, making it difficult to generalise findings to the broader population. Moreover, snowball sampling can create a chain of referrals that is disproportionately shaped by the initial participants. If the first few participants are not representative of the population being studied, their referrals may perpetuate this imbalance, further reducing the sample’s representativeness. Another challenge is the difficulty in controlling the sample size. Since snowball sampling relies on personal referrals, the growth of the sample can be unpredictable. In some cases, the “snowball” may gather momentum quickly, leading to a large, varied participant pool. In other instances, recruitment may stall if participants are unwilling or unable to refer others, resulting in a sample that is too small to draw meaningful conclusions.
Given its strengths and limitations, snowball sampling is most effective in studies where recruiting participants through traditional methods would be difficult or impractical. It is particularly well-suited for research involving rare populations, sensitive topics, or hidden communities where members may be reluctant to come forward on their own. This method is also useful in qualitative research, where the goal is to collect in-depth, nuanced data from a specific group rather than to achieve broad generalisability. In exploratory research, snowball sampling can help researchers generate preliminary data about populations that are otherwise difficult to access. It allows for a gradual expansion of the sample, giving researchers the flexibility to adjust their recruitment strategy based on the data collected. However, because of the potential for bias, snowball sampling is generally not recommended for studies that require representative samples or where generalisability to the broader population is a primary concern.
Random Sampling
Random sampling, as the name suggests, is a method where each individual in the population has an equal chance of being selected, which makes the process akin to drawing names out of a hat. By giving every person an equal opportunity to be included, random sampling minimises bias and maximises the likelihood that the sample will accurately represent the broader population. A simple example would be assigning numbers to everyone in a population and using a random number generator to pick participants. This representativeness is what makes random sampling a preferred choice in large-scale surveys and experimental research, where the goal is to ensure that the findings can be applied to a larger group.
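As a minimal illustration of this “names out of a hat” process, the sketch below uses Python’s random module; the population is invented, and the fixed seed is there only so the draw can be reproduced and audited.

```python
# A minimal sketch of simple random sampling: every member of the
# population has an equal chance of being selected.
import random

population = [f"Person {i}" for i in range(1, 1001)]  # 1,000 people

random.seed(42)  # fixed seed so the selection can be reproduced
sample = random.sample(population, k=50)  # draw 50 without replacement

print(sample[:5])  # the first few randomly selected participants
```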
One of the most notable strengths of random sampling is its ability to provide high external validity. Since the method does not favour any particular subset of the population, the findings from a study using random sampling are more likely to be generalisable—meaning that they can be applied to the wider population with a greater degree of confidence. Another key benefit is the reduction of systematic bias. In other sampling methods, certain individuals or groups may be over-represented due to researcher influence or convenience. With random sampling, this risk is minimised because the selection process is left entirely to chance. The random nature of this method ensures that personal preferences, biases, or logistical factors do not affect who is chosen for the study.
Despite its many advantages, random sampling can be challenging to implement, particularly in studies with large populations. Some of the main drawbacks are the time and cost involved. To conduct random sampling on a large scale, researchers need access to a complete and up-to-date list of the population from which they’re drawing their sample. In some cases, obtaining such a list can be difficult or impossible, especially when working with fragmented or hard-to-reach populations. Additionally, there can be significant logistical hurdles. In small populations, random sampling may be fairly straightforward, but when dealing with larger populations, coordinating a random selection process can become complex. This can involve significant costs, not just in terms of the initial recruitment of participants, but also in terms of travel, communication, and follow-up procedures.
Given the costs and logistical challenges, random sampling is best suited for large quantitative studies, particularly those where generalisability is the primary goal. If the research is designed to draw conclusions about the broader population, such as in public health research, market research, or large-scale sociological studies, random sampling is ideal because it provides the most unbiased and representative data possible. In cases where time and budget constraints are more pressing, or where the research is exploratory rather than aiming for population-level generalisability, other sampling methods (such as convenience or purposive sampling) might be more appropriate.
Stratified Sampling
Stratified sampling is a method used by researchers to ensure that their sample accurately reflects the diversity of the population by focusing on key subgroups, or “strata.” The basic idea is that the population is divided into distinct groups based on important characteristics such as age, gender, income level, education, or ethnicity. Once these groups are defined, participants are then randomly selected from each stratum. This approach allows researchers to ensure that the sample mirrors the proportions of these subgroups in the overall population, leading to more precise and reliable findings. For example, if a population consists of 40% males, 55% females, and 5% transgender people, the researcher ensures that the sample has the same proportional representation. This method is particularly effective in studies where the population consists of individuals with varying characteristics that could influence the outcome of the study. By ensuring that all relevant subgroups are proportionally represented, stratified sampling helps researchers avoid over-representing or under-representing certain groups.
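The sketch below illustrates this proportional selection in Python, using the 40% / 55% / 5% example above; the population itself is invented for illustration.

```python
# A minimal sketch of proportional stratified sampling.
import random

random.seed(42)

# Divide the population into strata based on a key characteristic.
strata = {
    "male":        [f"M{i}" for i in range(400)],  # 40% of 1,000
    "female":      [f"F{i}" for i in range(550)],  # 55% of 1,000
    "transgender": [f"T{i}" for i in range(50)],   # 5% of 1,000
}

population_size = sum(len(group) for group in strata.values())
sample_size = 100

# Randomly select from each stratum in proportion to its share of the
# population, so the sample mirrors the population's composition.
sample = []
for name, group in strata.items():
    n = round(sample_size * len(group) / population_size)
    sample.extend(random.sample(group, k=n))
    print(f"{name}: {n} participants selected")

print(f"Total sample size: {len(sample)}")
```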
One of the main strengths of stratified sampling is its ability to produce a highly representative sample of the population. By ensuring that each subgroup is properly represented, this method increases the precision of the results, which in turn improves the reliability of the study’s findings. This is especially important in research where differences between subgroups are a key focus. Moreover, by dividing the population into strata and then randomly selecting participants from each group, stratified sampling ensures a more balanced and accurate representation, which minimises the risk of sampling errors. Finally, the ability to analyse subgroup differences is a key advantage of stratified sampling, particularly in fields like sociology, economics, and public health, where understanding these differences is critical.
While stratified sampling offers many advantages, it does come with certain challenges, particularly in terms of the time and resources required to implement it. One of the most time-consuming aspects of this method is the need to define and organise the strata before selecting participants. Researchers must have a clear understanding of which characteristics are most relevant to the study and must have detailed information about the population to create the strata. Furthermore, in some cases, this information may not be readily available, or the population may be too complex to neatly divide into well-defined strata. Stratified sampling can also be more logistically complicated than simpler methods like convenience sampling. Researchers need to ensure that they have enough participants in each stratum to allow for meaningful analysis, which can require more recruitment efforts. If some strata are smaller or harder to reach, the researcher may need to put in extra effort to find participants from those groups, increasing both time and costs.
Given its ability to provide a highly representative sample, stratified sampling is best used in studies where representation across key subgroups is critical. It is particularly useful when researchers are interested in analysing differences between subgroups, such as age, income, or geographic location. Stratified sampling is also valuable in demographic studies, where the goal is often to understand the characteristics of various subgroups within a population.
Quota Sampling
Quota sampling is a sampling method that shares certain goals with stratified sampling, particularly the aim of capturing diversity across specific subgroups. However, the fundamental difference lies in how the sample is selected. While stratified sampling relies on random selection from each subgroup, quota sampling allows researchers to directly control who is recruited by actively seeking participants to fill predefined quotas based on certain characteristics, such as age, gender, education level, or income. Once the quota for each subgroup is filled, no further participants from that group are recruited, ensuring that the final sample meets the predetermined criteria for representation.
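As a minimal illustration of this quota-filling logic, consider the sketch below; the quotas and the stream of volunteers are invented.

```python
# A minimal sketch of quota sampling: volunteers are recruited as they
# appear, and recruitment for a subgroup stops once its quota is full.
quotas = {"18-34": 3, "35-54": 3, "55+": 2}
recruited = {group: [] for group in quotas}

# Volunteers arrive in whatever order the researcher encounters them;
# there is no randomisation, which is the key difference from
# stratified sampling.
volunteers = [
    ("Ana", "18-34"), ("Ben", "35-54"), ("Cho", "18-34"), ("Dev", "55+"),
    ("Eve", "18-34"), ("Fay", "18-34"), ("Gus", "35-54"), ("Hal", "55+"),
    ("Ivy", "35-54"), ("Jo", "55+"),
]

for name, group in volunteers:
    if len(recruited[group]) < quotas[group]:
        recruited[group].append(name)  # quota not yet filled: recruit
    # otherwise the volunteer is turned away, as the quota is already met

for group, names in recruited.items():
    print(f"{group}: {names}")
```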
One of the main advantages of quota sampling is that it guarantees the inclusion of specific subgroups in the sample. By setting quotas for each group, the researcher ensures that the final sample reflects the desired characteristics or proportions, which is particularly important when the goal of the research is to compare different groups. Another key benefit of quota sampling is its efficiency. Since the researcher can directly seek out participants who meet the required criteria, the process can be completed more quickly and at a lower cost than methods like stratified sampling. Moreover, quota sampling offers a greater degree of control over the composition of the sample. The researcher can adjust the quotas based on the needs of the study, ensuring that specific groups are represented according to the study’s objectives.
Despite its advantages, quota sampling also has several limitations, the most significant of which is the potential for bias. Since participants are not selected randomly, there is a risk that the sample may not accurately represent the broader population, even if the quotas are met. The recruitment process is subjective, as it relies on the researcher’s judgement and outreach methods, which can introduce selection bias. This lack of randomisation means that the results from a quota sample may not be generalisable to the larger population, especially if certain characteristics or perspectives are overlooked during recruitment. Additionally, quota sampling can lead to incomplete representation within each subgroup. While the researcher may set quotas based on broad characteristics like age or gender, other important factors may not be considered. This can result in a sample that, while meeting the quota criteria, lacks internal diversity within the subgroups, which can limit the depth and richness of the data collected. Another challenge with quota sampling is that it requires detailed knowledge of the population beforehand. The researcher must have a clear understanding of the proportions of different groups within the population to set accurate quotas. This can be difficult if reliable demographic data is not available, or if the population is highly fragmented or diverse in ways that are not easily captured by simple quotas.
Quota sampling is best suited for studies where the primary goal is to compare specific groups or ensure representation across key subgroups. It is commonly used in market research, opinion polling, and social research, where researchers need to gather data quickly and cost-effectively while ensuring that certain groups are represented. This method is also useful in studies where strict randomisation is not feasible or necessary. For example, in research involving focus groups or interviews, where the goal is to gather in-depth insights from specific subgroups, quota sampling allows the researcher to select participants who fit the desired profile without the logistical complexities of random selection.
You can also learn more about research designs and methods by watching the videos below.
Questions to ponder
How do different sampling methods influence the validity of research findings?
Can convenience sampling ever be justified in large-scale research?
In what scenarios might snowball sampling offer a better solution than random sampling?
Dr Lynette Pretorius is an award-winning educator and researcher in the fields of academic language, literacy, research skills, and research methodologies.
In research, the design chosen plays a pivotal role in determining how data are collected, analysed, and interpreted. Each design provides a unique lens through which researchers can explore their questions, offering distinct advantages and limitations. Below, I summarise ten common research designs, spanning qualitative, quantitative, and mixed methods approaches.
Action Research
Action research is a collaborative and iterative approach that seeks to solve real-world problems while simultaneously generating knowledge. Action research is characterised by its participatory nature, where researchers and participants collaborate to identify problems and implement solutions. This collaborative process ensures that the research is deeply rooted in the needs and realities of the community or organisation being studied. By involving stakeholders in every step, action research not only increases the relevance of the findings but also empowers participants by giving them ownership of the process. This makes it particularly impactful in settings like schools, where teachers and administrators can actively contribute to shaping educational practices.
What sets action research apart is its cyclical nature. Unlike traditional research, where data are collected and analysed in a linear fashion, action research involves continuous cycles of planning, acting, observing, and reflecting. Another important feature of action research is its adaptability. As new insights emerge, the research design can be adjusted to address unforeseen challenges or opportunities. This flexibility allows for iterative learning and continuous improvement, fostering a more dynamic and responsive research environment. This makes it particularly well-suited for environments where ongoing change is necessary, such as schools or businesses aiming to improve their operations or outcomes. However, this adaptability also introduces challenges, particularly in maintaining rigour and objectivity. Balancing the need for scientific validity with the practical demands of real-world problem-solving requires careful planning and reflective practice, often making the role of the researcher one of facilitator as much as investigator.
Autoethnography
I have previously written another blog post which explains autoethnography in detail. In essence, autoethnography is a research design that combines the study of personal experience with broader social and cultural analysis. In this approach, the researcher uses their own life as the primary source of data, reflecting on their personal experiences to explore larger cultural or societal issues. Researchers are the participants in their own studies and the stories which are told often explore transformative experiences for the researcher, frequently taking the form of epiphanies that significantly influenced the author’s worldview. By blending autobiography and ethnography, autoethnography allows researchers to provide an insider’s perspective on their own social context, making it a powerful tool for examining how individual identity and experiences are shaped by—and in turn, shape—cultural norms, values, and power dynamics.
One of the strengths of autoethnography is its ability to highlight marginalised voices or experiences that are often overlooked in traditional research. It provides a platform for self-reflection and critical analysis, allowing researchers to connect their individual stories to larger collective experiences. However, the highly personal nature of this research design also presents challenges. Balancing subjectivity with academic rigour requires careful reflection to avoid the research becoming overly introspective or self-indulgent. Autoethnographers must navigate the fine line between personal storytelling and scholarly analysis, ensuring that their narrative contributes meaningfully to the understanding of broader social or cultural issues. Despite these challenges, autoethnography remains a powerful approach for exploring the intersection of the personal and the political, offering rich, emotionally resonant insights into the complexities of human experience.
Note that autoethnography can be done by one researcher or by a group of researchers. When done together, this type of autoethnography is called collaborative autoethnography. Collaborative autoethnography is particularly pertinent when examining complex social phenomena, such as marginalisation and the pursuit of social justice, as it facilitates the inclusion of multiple perspectives and voices. In this way, the individual voices of the researchers work together to illuminate common themes or experiences.
Case Study Research
Case study research is particularly effective for exploring complex phenomena in depth and within their real-life context. The case study design focuses on an in-depth examination of a ‘case,’ which could be an individual, group, organisation, or event. Case studies can be either descriptive, exploring what is happening, or explanatory, seeking to understand why and how something occurs. They often use multiple data sources—such as interviews, observations, and documents—to provide a comprehensive understanding of the case. Unlike other designs that seek to generalise findings across large populations, case studies focus on the intricacies of a ‘case’. The depth of focus of a case study also presents limitations—namely, the findings from a single case may not be applicable to other contexts. Despite this, case studies are often used as a stepping stone for further research, providing in-depth insights that can inform broader studies.
The distinction between single-case and multiple-case designs lies in the scope and focus of the research. A single-case design centres around an in-depth examination of one particular case, which is often chosen because it is either unique, critical, or illustrative of a broader phenomenon. This design is beneficial when the case is exceptional or offers significant insight into a rare or novel situation. In contrast, a multiple-case design involves studying several cases to compare and contrast findings across different contexts or instances. Multiple-case designs offer more robust evidence, as they allow researchers to identify patterns or variations across cases, increasing the potential for generalising findings to a broader population or set of circumstances.
Document and Policy Analysis
Document or policy analysis is a qualitative research design that involves systematically reviewing and interpreting existing documents to extract meaningful data relevant to a research question. These documents can range from government reports, personal letters, and organisational records to media articles, policy documents, and historical texts. Policy analysis, in particular, examines the formulation, implementation, and outcomes of policies by analysing relevant data, understanding stakeholder perspectives, and evaluating the potential impacts of various options. Researchers use document analysis to identify patterns, themes, or trends within written materials, which can offer valuable insights into social, political, or organisational contexts. One of the strengths of document analysis is that it allows researchers to access data that is already available, making it a relatively unobtrusive approach that does not require direct interaction with participants.
This research design is particularly useful when studying past events, policies, or organisational practices, as documents can provide a rich historical or contextual backdrop. Additionally, document analysis can be used in conjunction with other research designs, such as case studies, to triangulate findings and enhance the depth of the research. However, one of the challenges of this design is assessing the credibility, bias, or completeness of the documents. Researchers must critically evaluate the sources to ensure that the information is reliable and relevant to their study. Despite these challenges, document analysis remains a valuable tool for exploring existing written records and uncovering insights that may not be easily accessible through other research designs.
Ethnography
Ethnography is a deeply immersive research design that involves the researcher becoming part of the community or environment they are studying. This approach allows researchers to gather first-hand insights into the social dynamics, practices, and beliefs of a group from the inside. Rather than relying on external observation or second-hand accounts, ethnographers immerse themselves among their participants, often for extended periods. This enables them to capture the complexities of human behaviour in its natural setting, offering a nuanced understanding of cultural practices and social interactions.
One of the unique aspects of ethnography is its emphasis on the participants’ perspectives. By prioritising the voices and experiences of the people being studied, ethnographers aim to represent the world as seen through the eyes of the participants. However, this approach also raises challenges, particularly around maintaining objectivity and managing the researcher’s role in influencing the group they are observing. Ethnography requires careful ethical considerations, such as gaining informed consent and respecting privacy, given the often intimate nature of the research. Despite these challenges, the rich, contextual insights that ethnography provides make it a powerful approach for understanding the lived experiences of individuals within their cultural and social environments.
Experimental and Quasi-Experimental Research
Experimental research is a highly controlled design that seeks to establish cause-and-effect relationships by manipulating one or more independent variables and observing their impact on dependent variables. This research design typically involves two groups: an experimental group that receives the treatment or intervention and a control group that does not. By randomly assigning participants to these groups, researchers can minimise bias and ensure that differences in outcomes are directly attributable to the variable being tested, rather than external factors. This randomisation strengthens the internal validity of the experiment.
Quasi-experimental designs are similar to experimental research but differ in one key aspect: they lack the random assignment of participants to experimental and control groups. In cases where randomisation is either impractical or unethical—such as in educational settings or when studying pre-existing groups—quasi-experimental designs provide a valuable alternative. While researchers still manipulate an independent variable and observe its effect on a dependent variable, the absence of randomisation means that there may be pre-existing differences between groups. As a result, researchers must account for these differences when analysing the outcomes, often using statistical methods to control for confounding variables.
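To make this distinction concrete, here is a minimal sketch in Python of the random assignment step that experimental designs include and quasi-experimental designs lack; the participant list is invented.

```python
# A minimal sketch of randomly assigning participants to experimental
# and control groups.
import random

participants = [f"Participant {i}" for i in range(1, 21)]

random.seed(7)  # fixed seed so the allocation can be reproduced
random.shuffle(participants)  # randomise the order of participants

# Split the shuffled list in half: chance alone decides group membership.
midpoint = len(participants) // 2
experimental_group = participants[:midpoint]  # receives the intervention
control_group = participants[midpoint:]       # does not

print("Experimental:", experimental_group)
print("Control:", control_group)
```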
Grounded Theory
Grounded theory is a qualitative research design that aims to generate theory directly from the data rather than testing an existing hypothesis or using a pre-existing theoretical framework. Unlike more traditional research approaches, grounded theory allows the theory to emerge naturally through the iterative process of data collection and analysis. Researchers continuously compare new data with previously gathered information. This ongoing comparison enables them to identify recurring patterns, concepts, and categories, which are then refined into a coherent theoretical framework. Grounded theory is particularly useful when studying processes, interactions, or behaviours where theories do not yet exist or existing ones may not fully explain the phenomena.
One of the major advantages of grounded theory is its flexibility. Since it does not require researchers to adhere to a rigid hypothesis or framework from the start, the design allows for the exploration of unexpected insights that may arise during data collection. This makes it a powerful approach for investigating complex or under-researched topics. However, the open-ended nature of grounded theory can also be a challenge, as it requires researchers to be highly reflexive and adaptable throughout the research process. The absence of a pre-set framework means that analysis can be time-consuming, with researchers needing to sift through large amounts of data to construct a meaningful theory that adequately reflects the participants’ experiences and emerging patterns.
Narrative Inquiry
Narrative inquiry is a qualitative research design that focuses on the stories people tell about their personal experiences, aiming to understand how individuals construct meaning in their lives. Unlike other research approaches that may prioritise external observation or objective measurement, narrative inquiry dives into the subjective world of the participant. Researchers collect these narratives through interviews, journals, letters, or even autobiographies, and analyse how individuals structure their stories to make sense of their experiences. This approach is particularly useful in fields where understanding personal identity, life transitions, or cultural contexts requires a close examination of how people frame and interpret their lived experiences.
A key feature of narrative inquiry is its emphasis on the co-construction of meaning between the researcher and the participant. The researcher does not just passively collect stories but actively engages in dialogue, interpreting the narratives while considering how their own perspectives and biases influence the analysis. This collaborative process allows for a richer understanding of the subject matter but also demands a high level of reflexivity from the researcher. Since narratives are shaped by memory, culture, and social influences, researchers must carefully navigate issues of subjectivity, ensuring that the participant’s voice is authentically represented while also providing a critical analysis of how the story fits within broader social or cultural patterns.
Phenomenology
Phenomenology is a qualitative research design that seeks to explore and understand individuals’ lived experiences of a particular phenomenon. Rather than focusing on objective measures or external observations, phenomenology prioritises subjective experience, aiming to uncover the essence of how people perceive, interpret, and make sense of their experiences. Researchers using this design typically collect data through a variety of in-depth methods such as interviews or reflections, allowing participants to describe their personal encounters with the phenomenon in their own words. The goal is to view the experience as closely as possible through the eyes of the individuals who lived it, capturing its richness and complexity without external influence.
While this research design provides deep insights into human consciousness and subjective experience, it can be challenging to generalise the findings due to the intensely personal nature of the data. Nevertheless, phenomenology’s strength lies in its ability to provide a profound, context-rich understanding of how individuals uniquely experience and interpret specific aspects of life, making it invaluable for exploring complex, emotionally charged, or abstract phenomena.
Survey Research
Survey research is a widely utilised design in both quantitative and qualitative research that involves gathering data from a large group of respondents, typically through structured questionnaires. This approach is highly versatile, allowing researchers to collect information about a wide range of topics, including attitudes, behaviours, preferences, and demographic characteristics. One of the main advantages of survey research is its ability to gather data from a broad population efficiently, making it possible to identify trends, correlations, or patterns within large datasets. Surveys can be administered in various formats, such as online, by phone, or in person, providing flexibility in how researchers reach their target audience.
However, the quality and reliability of the data collected through surveys depend heavily on the survey’s design. Well-constructed surveys require carefully worded questions that avoid bias and confusion, and they must be designed to ensure that respondents understand and can accurately answer the questions. Another challenge is ensuring a high response rate, as low participation can skew results and affect the study’s representativeness. Despite these limitations, survey research remains a powerful tool in fields like marketing, social sciences, public health, and education, where large-scale data collection is necessary to inform policies, identify trends, or make generalisations about a population’s characteristics or behaviours.
You can also learn more about research designs and methods by watching the videos below.
Questions to ponder
How does the nature of the research question influence the decision to use a particular research design?
How do ethical concerns shape the choice of research design?
What types of research questions are best suited for case study research, and how do these differ from questions better addressed through autoethnography?
Dr Basil Cahusac de Caux is an Assistant Professor with a specialization in the sociology of higher education, postgraduate research, and the sociology of language.
Dr Lynette Pretorius is an award-winning educator and researcher in the fields of academic language, literacy, research skills, and research methodologies.
Have you ever wondered how doctoral students can navigate the challenging journey of academic writing? For many, the answer lies in the strength of community and the power of collaborative feedback. Our recent paper explores this very subject, examining how doctoral writing groups can transform the academic experience through peer feedback and collective learning.
Our study centres on a collaborative book project where doctoral students wrote and peer-reviewed each other’s chapters, ultimately producing a book titled Wellbeing in Doctoral Education: Insights and Guidance from the Student Experience. This project wasn’t just about writing; it was about creating a community of practice, where students learned together, shared experiences, and supported each other through the arduous process of academic writing. The concept of communities of practice is pivotal in understanding this study. These communities are formed by individuals who share a passion or concern for something they do, learning to do it better through regular interaction.
In the context of our specific doctoral writing groups, the shared domain was the academic writing and publishing of the book, and the community was formed through mutual engagement and support. Participants were united by their commitment to improving their academic writing through peer feedback. This shared focus provided a common ground for all members, fostering a sense of belonging and purpose. Building a supportive community was crucial. The writing groups created a space where students felt safe to share their work, provide feedback, and discuss their challenges. This environment of trust and collegiality was essential for effective learning and personal growth. Through their interactions, the group developed a shared repertoire of resources, experiences, and practices. This included not just the technical aspects of writing but also the emotional and psychological support needed to thrive in academia. Participants learned from each other, gaining insights into different writing styles, feedback techniques, and academic expectations.
One of the most significant findings from our study was the transformative power of peer feedback. Participants found that receiving and giving feedback was instrumental in improving their writing. Feedback was not only about correcting mistakes but also about providing affirmation and recognising the potential and effort of the writers. This helped build confidence and self-esteem. Another powerful aspect of peer feedback was the opportunity to learn from others. This process helped participants identify their own mistakes and areas for improvement. By reviewing peers’ work, participants also gained new perspectives and ideas that they could apply to their own writing.
Our findings illustrate how peer feedback and collaborative practices within writing groups can significantly enhance the doctoral experience. Participants discovered that, despite their unique backgrounds and stories, they shared common challenges in their academic journeys. This realisation fostered a sense of community and mutual understanding. Our findings highlight the dual nature of the doctoral experience: each student has a unique narrative, yet their struggles and successes resonate with others. Uncovering these commonalities amidst diversity facilitated a deeper understanding and appreciation of one another’s viewpoints, strengthening the collegiality within the group. This collective recognition of shared struggles also helped alleviate feelings of isolation and promoted a supportive environment. Our findings also emphasise the importance of reflective writing and feedback in promoting personal growth and academic development. Through sharing their stories, participants articulated and reshaped their identities in academia, which helped them navigate both personal and academic development.
Our study highlights the immense value of collaborative writing and peer feedback in doctoral education. By fostering a supportive community of practice, doctoral students can navigate the complexities of academic writing more effectively, develop their academic identities, and build the confidence needed to succeed in academia. This approach not only improves writing skills but also provides emotional and psychological support, making the doctoral journey a more enriching and less isolating experience.
The findings of our study have several important implications for doctoral education:
Institutions should encourage the formation of writing groups and other collaborative learning opportunities to help doctoral students develop their writing skills and academic identities.
Developing students’ ability to give and receive feedback is crucial. Our study shows that feedback literacy can significantly enhance the quality of academic writing and the overall learning experience.
Creating a safe and supportive environment where students can share their work and experiences is essential for their personal and academic growth.
Taken together, our study shows that embracing the power of community and collaboration could be the key to transforming the doctoral experience, making it more supportive, inclusive, and ultimately, more successful for all students involved.
Questions to ponder
How do your emotions influence academic writing and reactions to feedback?
Are there hidden practices of publishing that should be discussed more openly?
How can academic institutions better support the formation of communities of practice among doctoral students?
What are some challenges that might arise in implementing peer feedback systems, and how can they be addressed?
In what ways can the process of giving and receiving feedback be made more effective and less emotionally taxing for students?
Dr Lynette Pretorius is an award-winning educator and researcher in the fields of academic language, literacy, research skills, and research methodologies.
Dr Basil Cahusac de Caux is an Assistant Professor with a specialization in the sociology of higher education, postgraduate research, and the sociology of language.
In an era where generative artificial intelligence (AI) permeates every aspect of our lives, AI literacy in higher education has never been more crucial. In our recent paper, we delve into our own journeys of developing AI literacy, showcasing how educators can seamlessly integrate AI into their teaching practices. Our goal is to cultivate a new generation of AI-literate educators and graduates. Through our experiences, we also created a comprehensive framework for AI literacy, highlighting the transformative potential of embracing AI in educational settings.
We embraced AI with optimism and enthusiasm, seeing it as a tool to be harnessed rather than feared. In our recent paper, we passionately argue that AI literacy is an indispensable skill for today’s graduates. We emphasise that this mindset requires a significant cultural shift in higher education, advocating for the integration of AI as a valuable learning aid. By fostering this change, we can unlock AI’s potential to enhance education and empower students to thrive in an increasingly digital world.
Our journey began with curiosity and a willingness to experiment with AI in our educational practices. Lynette, for instance, integrated AI into her role, showcasing its capacity as an academic language and literacy tutor. She encouraged her students, many of whom are from non-English speaking backgrounds, to use tools like Grammarly and ChatGPT to improve their academic writing. By doing so, she highlighted the importance of collaboration between students and AI, promoting deeper learning and engagement.
In a Master’s level course on autoethnography, Lynette inspired her students to harness generative AI for creative data generation. She showcased how tools like DALL-E could be used to create artworks that visually represent their research experiences. This approach not only ignited students’ creativity but also deepened their engagement with their assignments, allowing them to explore their research from a unique and innovative perspective.
Basil introduced his students to the power of generative AI through hands-on assignments. One notable task involved creating a public awareness campaign centred around the UN’s Sustainable Development Goals. Students utilised DALL-E to produce compelling visuals, showcasing AI’s ability to amplify creativity and enhance learning outcomes. This practical approach not only highlighted the transformative potential of AI but also encouraged students to engage deeply with important global issues through innovative and impactful media.
While the benefits of AI in education were clear to us, we also encountered ethical considerations and challenges. In our paper, we emphasised the importance of transparency and informed consent when using AI in research and teaching. For example, we ensured that students and research participants were aware of how their data would be used and the potential biases inherent in AI-generated content. Moreover, we highlighted the environmental impact of using AI technologies. The energy consumption of AI models is significant, raising concerns about their sustainability. This awareness is crucial as educators and institutions navigate the integration of AI into their practices.
From our experiences and reflections, we developed a groundbreaking AI literacy framework for higher education, encompassing five domains: foundational, conceptual, social, ethical, and emotional. As illustrated in the figure below, this comprehensive framework is designed to empower educators and students with the essential skills to adeptly navigate the intricate AI landscape in education. By promoting a holistic and responsible approach to AI literacy, our framework aims to revolutionise the integration of AI in academia, fostering a new generation of informed and conscientious AI users.
From these essential domains, we crafted a comprehensive framework for AI literacy in higher education, which underscores the following key features:
Foundational Understanding: Mastering the basics of accessing and using AI platforms.
Information Management: Skilfully locating, organising, evaluating, using, and repurposing information.
Interactive Communication: Engaging with AI platforms as interlocutors to create meaningful discourse.
Ethical Citizenship: Conducting oneself ethically as a digital citizen.
Socio-Emotional Awareness: Incorporating socio-emotional intelligence in AI interactions.
Our AI literacy framework has significant implications for higher education. It provides a structured approach for integrating AI into teaching and research, emphasising the importance of ethical considerations and emotional awareness. By fostering AI literacy, educators can prepare students for a future where AI plays a central role in various professional fields.
Embracing AI literacy in higher education is not just about integrating new technologies; it’s about preparing students for a rapidly changing world. Our AI literacy framework offers a comprehensive guide for educators to navigate this transition, promoting ethical, effective, and emotionally aware use of AI. As we move forward, fostering AI literacy will be crucial in shaping the future of education and empowering the next generation of learners.
Questions to ponder
How can educators ensure that all students, regardless of their technological proficiency, can access and utilise generative AI tools effectively?
In what ways can generative AI tools be used to enhance students’ conceptual understanding of course materials?
How can the concept of generative AI as a collaborator be integrated into classroom discussions and activities?
How can educators model ethical behaviour and digital citizenship when using generative AI tools in their teaching?
How can understanding the emotional impacts of generative AI interactions improve the overall learning experience?
How can the AI literacy framework be practically integrated into different academic disciplines and curricula?
Dr Lynette Pretorius is an award-winning educator and researcher in the fields of academic language, literacy, research skills, and research methodologies.
I have recently developed and delivered a masterclass about how you can develop your AI literacy in your writing and research practice. This included a series of examples from my own experiences. I thought I’d provide a summary of this masterclass in a blog post so that everyone can benefit from my experiences.
Artificial intelligence (AI) has been present in society for several years and refers to technologies which can perform tasks that used to require human intelligence. This includes, for example, computer grammar-checking software, autocomplete or autocorrect functions on our mobile phone keyboards, or navigation applications which can direct a person to a particular place. Recently, however, there has been a significant advancement in AI research with the development of generative AI technologies. Generative AI refers to technologies which can perform tasks that require creativity. In other words, these generative AI technologies use computer-based networks to create new content based on what they have previously learnt. Such creative outputs have previously been thought to be the domain of human intelligence alone and, consequently, the introduction of generative AI has been hailed as a “game-changer” for society.
I am using generative AI in all sorts of ways. The AIs I use most frequently include Google’s built-in generative AI in email, chat, Google Docs, and so on, which learns from your writing to suggest likely responses. I also use Grammarly Pro to help me identify errors in my students’ writing, allowing me more time to give constructive feedback about their writing rather than spending it finding examples. This is super time-saving, particularly given how many student emails I get and the number of assignments and thesis chapters I read! I also frequently use a customised version of ChatGPT 4, which I have trained to do things the way I would like them to be done. This includes responding in a specific tone and style, reporting information in specific ways, and doing qualitative data analysis. Finally, I use Leonardo AI and DALL-E to generate images, Otter AI to help me transcribe some of my research, Research Rabbit to help me locate useful literature on a topic, and AILYZE to help conduct initial thematic analysis of qualitative data.
The moral panic initiated at the start of 2023 by the advent of ChatGPT sparked debates in higher education. Some people insisted that generative AI would encourage students to cheat, thereby posing a significant risk to academic integrity. Others, however, advocated that the use of generative AI could make education more accessible to those who are traditionally marginalised and help students in their learning. I came to believe that the ability to use generative AI would be a core skill in the future, but that AI literacy would be essential. This led me to publish a paper where I defined AI literacy as:
AI literacy is understanding “how to communicate effectively and collaboratively with generative AI technologies, as well as evaluate the trustworthiness of the results obtained”.
This prompted me to start developing ways to teach AI literacy in my own practice. I have collated some tips below.
Firstly, you should learn to become a prompt wizard! One of the best tips I can give you is to provide your generative AI with context. You should tell your AI how you would like it to do something by giving it a role (e.g., “Act as an expert on inclusive education research and explain [insert your concept here]”). This will give you much more effective results.
Secondly, as I have already alluded to above, you can train your AIs to work for you in specific ways! So be a bit brave and explore what you can do.
Thirdly, when you ask it to make changes to something (e.g., to fix your grammar or improve your writing’s clarity and flow), ask it to also explain why it made the changes it did. In this way, you can use the collaborative discussion you are having with your AI as a learning process to improve your own skills. The sketch below shows how the first and third of these tips might look in a scripted workflow.
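To make these tips concrete for those who like to script their workflows, here is a minimal sketch using the OpenAI Python SDK. This is an illustration under my own assumptions rather than a recipe from the masterclass (the model name and example sentence are placeholders I invented), but it shows how giving the AI a role (tip one) and asking it to explain its changes (tip three) fit together:

```python
# A minimal, illustrative sketch of tips one and three using the OpenAI
# Python SDK. The model name and example sentence are placeholders; any
# chat-capable model would work here.
from openai import OpenAI

client = OpenAI()  # reads your API key from the OPENAI_API_KEY variable

draft = "Their are many reasons why AI literacy matter for students."

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        # Tip one: give the AI a role and context via the system message.
        {"role": "system",
         "content": "Act as an expert academic editor in higher education."},
        # Tip three: ask it to explain every change it makes.
        {"role": "user",
         "content": "Fix the grammar in this sentence and explain "
                    f"each change you make: {draft}"},
    ],
)
print(response.choices[0].message.content)
```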
The most common prompts I use in my work are listed below. The Thesis Whisperer has also shared several common prompts, which you can find here.
“Write this paragraph in fewer words.”
“Can you summarise this text in a more conversational tone?”
“What are five critical thinking questions about this text?”
I have previously talked about how you can use generative AI to help you design your research questions.
I have since also discovered that you can use generative AI as a data generation tool. For example, I have recently used DALL-E to create an artwork which represents my academic identity as a teacher and researcher. I have written a chapter about this process and how I used the conversation between myself and DALL-E as a data source. This chapter will be published soon (hopefully!).
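For readers who want to experiment with this themselves, DALL-E can also be called programmatically. Below is a minimal sketch using the OpenAI Python SDK; the prompt is a hypothetical placeholder rather than the one from my chapter, and my own process was an extended back-and-forth conversation with the AI rather than a single request:

```python
# A minimal, illustrative sketch of generating an image with DALL-E.
# The prompt text is a hypothetical placeholder.
from openai import OpenAI

client = OpenAI()

response = client.images.generate(
    model="dall-e-3",
    prompt=("An artwork representing an academic identity that blends "
            "teaching and research"),
    size="1024x1024",
    n=1,
)
print(response.data[0].url)  # link to the generated image
```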
Most recently, I have started using my customised ChatGPT 4 as a data analysis tool. I have a project that has a large amount of qualitative data. To help me with a first-level analysis of this large dataset, I have developed a series of 31 prompts based on theories and concepts I know I am likely to use in my research. This has allowed me to start the analysis of my data and has given me direction as to areas for further exploration. I have given an example of one of the research prompts below.
In this study, capital is defined as the assets that individuals vie for, acquire, and exchange to gain or maintain power within their fields of practice. This study is particularly interested in six capitals: symbolic capital (prestige, recognition), human capital (technical knowledge and professional skills), social capital (networks or relationships), cultural capital (cultural knowledge and embodied behaviours), identity capital (formation of work identities), and psychological capital (hope, efficacy, resilience, and optimism). Using this definition, explain the capitals which have played a part in the doctoral student’s journey described in the transcript.
What I have been particularly impressed by so far is my AI’s ability to detect implicit meaning in the transcripts of the interviews I conducted. I expected it to be pretty good at explaining explicit mentions of concepts, but I had not anticipated it would be so good at understanding more nuanced and layered meanings. This project is still in progress, and I expect very interesting results.
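If you are curious what this looks like at scale, below is a hypothetical sketch of how a bank of analysis prompts could be run over a folder of transcripts using the OpenAI Python SDK. The folder layout, file names, and model are my own illustrative assumptions, not the actual setup from my project, and the privacy considerations discussed below apply before any participant data is sent to an AI:

```python
# A hypothetical sketch: running a bank of analysis prompts over a folder
# of interview transcripts. Paths, file names, and model are illustrative.
from pathlib import Path
from openai import OpenAI

client = OpenAI()

# One analysis prompt per line, e.g. the "capitals" prompt shown above.
prompts = Path("analysis_prompts.txt").read_text().splitlines()

for transcript_file in sorted(Path("transcripts").glob("*.txt")):
    transcript = transcript_file.read_text()
    for prompt in prompts:
        response = client.chat.completions.create(
            model="gpt-4o",
            messages=[
                {"role": "system",
                 "content": "You are a qualitative research assistant."},
                {"role": "user",
                 "content": f"{prompt}\n\nTranscript:\n{transcript}"},
            ],
        )
        # Print each first-pass analysis for later human review.
        print(f"--- {transcript_file.name} ---")
        print(response.choices[0].message.content)
```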
There are some ethical considerations which should be taken into account when using generative AIs.
Privacy/confidentiality: Data submitted to some generative AIs could be used to train the generative AI further (often depending on whether you have a paid or free version). Make sure to check the privacy statements and always seek informed consent from your research participants.
Artwork: Generative AIs were trained on artwork without express consent from the artists. Additionally, it is worth considering who the actual artist/author/creator of the artwork is when you use generative AI to create it. I consider both the user and the AI to be collaborators working to create the artwork together.
Bias propagation: Since generative AIs are trained based on data from society, there is a risk that they may reflect biases present in the training data, perpetuating stereotypes or discrimination.
Sustainability: Recent research demonstrates that generative AI does contribute significantly to the user’s carbon footprint.
It is also important to ethically and honestly acknowledge how you have used generative AI in your work by distinguishing between the work you have done and the work it has done. I have previously posted a template acknowledgement for students and researchers to use. I have recently updated the acknowledgement I use in my work and have included it below.
I acknowledge that I used a customised version of ChatGPT 4 (OpenAI, https://chat.openai.com/) during the preparation of this manuscript to help me refine my phrasing and reduce my word count. The output from ChatGPT 4 was then significantly adapted to reflect my own style and voice, and was further revised during the peer review process. I take full responsibility for the final content of the manuscript.
My final tip is – be brave! Go and explore what is out there and see what you can achieve! You may be surprised how much it revolutionises your practices, freeing up your brain space to do really cool and creative higher-order thinking!
Questions to ponder
How does the use of generative AI impact traditional roles and responsibilities within academia and research?
Discuss the implications of defining a ‘collaborative’ relationship between humans and generative AI in research and educational contexts. What are the potential benefits and pitfalls?
How might the reliance on generative AI for tasks like grammar checking and data analysis affect the skill development of students and researchers?
The blog post mentions generative AI’s ability to detect implicit meanings in data analysis. Can you think of specific instances or types of research where this capability would be particularly valuable or problematic?
Reflect on the potential environmental impact of using generative AI as noted in the blog. What measures can be taken to mitigate this impact while still benefiting from AI technologies in academic and research practices?
Dr Lynette Pretorius is an award-winning educator and researcher in the fields of academic language, literacy, research skills, and research methodologies.
In today’s data-driven world, there is a lot of talk about making decisions based on so-called objective data. For example, schools and universities use information about the mix of students and staff to shape how they teach and run things. Information such as age, where people live, how much schooling they have had, or their income is collected to help make these so-called “informed” decisions. But here’s the problem – we sometimes forget that the people collecting these forms of data and those making these decisions have their own biases. Decisions reflect the majority view, which means that other experiences are often sidelined.
We need to understand that different parts of our backgrounds interact and affect the way we experience the world, often in very different ways. This is what is termed intersectionality. Using intersectionality as a lens helps us to recognise that we cannot look at parts of someone’s identity in isolation. We need to see the whole person and how all parts of their identity come together, influencing their experiences and the way the world sees and treats them. It is like saying, “To understand the whole story, you can’t just read one page. You need to read the entire book.”
This highlights that researchers and decision-makers need to work to improve processes for data collection and analysis to better reflect the diversity of people’s experiences. So, why is it so crucial to bring diverse perspectives into the research mix?
Firstly, past research has not done a great job of representing everyone. Surveys can often be pretty narrow, missing out on the complete picture of who participants are, which means we are not getting the full story on how to solve problems for everyone.
Secondly, by embracing diversity in research, we stand up for fairness and social justice. Imagine surveys that only see concepts in black and white, leaving out people who do not fit neatly into specific boxes. We are missing out on understanding different experiences and perspectives, which can make our research richer and more meaningful.
Finally, acknowledging a wide range of experiences helps us dig deeper into our findings, giving us a clearer view of what is really going on in our context. This approach challenges us to think beyond the usual categories and consider the complex ways people identify themselves. By doing so, we can better reflect society’s diversity and push for changes that make society more inclusive and equitable for everyone.
Despite some improvements in how we collect data in recent years, there is still a long way to go. We need to ensure that our research methods allow people to share the full spectrum of their identities, respecting the richness of their experiences. It is all about giving everyone a voice and ensuring research serves us all, not just a privileged few.
The thing is, when we are exploring human experiences, we must embrace the messiness and all the different parts of who people are. But, sadly, many times, research just clumps people into simple categories, missing out on their full stories. This approach does not just overlook the richness of their identities; it can also make them feel like their voices do not matter, with their real-life experiences either ignored or questioned.
In my recent paper, I propose a new way of collecting data about research participants. Instead of asking specific questions that limit people’s responses, I suggest we say, “Hey, let people tell us about themselves in their own words.” To do this, I argue that researchers should include a question where people can share their own diversity stories when they fill out surveys. Why? Because it does justice to their experiences and knowledge.
I have seen firsthand how rich and deep data can be when people share their stories this way, especially when this data is combined with other open-ended research questions. My paper makes the case for letting people have a say in how they are represented in research. It is about giving them the power to share their identities in their own words. The main findings from my study include:
When I asked open questions, the replies were eye-opening: I decided to ask people to tell me about themselves in their own way, without the usual checkboxes. And wow, did I get a treasure trove of responses! Some people went the traditional route, but others shared stories and parts of their identities I would never have captured with a simple tick box. This approach really highlighted how everyone has their own unique blend of experiences and backgrounds.
Self-written diversity statements are gold mines of insights: One particularly unique aspect of my study is that I asked people to jot down their thoughts on what makes them, well, them. I did this by asking them to write their own diversity statement. The depth of what I got back was incredible – from personal tales of grappling with ableism to rich descriptions of cultural heritage and everything in between. It is like these self-written snippets opened a window into the real lives and challenges people face, way beyond what any standard survey could capture.
Weaving stories together to highlight the tapestry of people’s lived experiences: One of the most exciting findings from my study is how I took all these different bits of info from the surveys and wove them into what I call holistic introductory stories. Imagine taking a bit from here and a snippet from there to stitch together a complete narrative about someone. It is like getting a full-colour, 3D picture of a person rather than a flat, 2D sketch. This way, I was not just seeing bits and pieces of someone’s identities; I was developing a better understanding of how all those bits fit together to make my participants who they are.
My findings highlight the importance of encouraging epistemic justice in our research practices. What is epistemic justice, you may ask? Epistemic justice is about fairness: it ensures that everyone’s voice and knowledge are equally respected, no matter where they come from or how they express themselves. It is about ensuring all perspectives are considered, especially those often ignored or undervalued. To really do justice to everyone’s knowledge, we have to be open to different, even incomplete ways of understanding. That is why I am using open questions and these stories to give everyone a platform to share their experiences. I believe stories are how we make sense of our world. As has been highlighted by other researchers, stories help us understand not just the surface-level stuff people share but the deeper, sometimes hidden layers of their lives.
My focus has been on getting people to write down their stories because there is power in writing. But now that this study is finished, I am thinking, why stop there? There are so many other ways to share and understand each other’s experiences. So, looking ahead, I am keen on mixing things up even more, using all sorts of creative methods to make sure everyone feels seen and heard, especially those who have been left out of the conversation for too long.
Questions to ponder
If you had to write a short diversity statement about yourself, what would you say?
How does the incorporation of self-written diversity statements and open-ended questions in surveys challenge traditional methods of data collection in qualitative research?
The paper advocates for epistemic justice through methodological innovations in order to reduce biases and inequalities in research. How does giving participants the agency to define themselves challenge or change the researcher’s role?
The research outlines a more artistic way of understanding participants through holistic introductory stories. What advantages does this creative approach offer, and what challenges might it pose in traditional research environments?