Participant recruitment strategies in research

Dr Lynette Pretorius

Dr Lynette Pretorius is an award-winning educator and researcher in the fields of academic language, literacy, research skills, and research methodologies.

The way researchers select their participants impacts the validity and reliability of their findings, making participant recruitment one of the most crucial steps in the research process. But how do researchers go about this task? What strategies do they use to ensure their sample accurately reflects the broader population or the group they are investigating? Let’s explore some common participant recruitment strategies, breaking down their strengths, weaknesses, and best use cases. This post will cover six key sampling techniques: convenience sampling, purposive sampling, snowball sampling, random sampling, stratified sampling, and quota sampling.

Convenience Sampling

Convenience sampling, as the name implies, revolves around ease of access and availability. This method involves selecting participants who are nearby, easily accessible, and willing to take part in the study. It’s a go-to choice for researchers when they need to collect data quickly and with minimal effort. Instead of engaging in time-consuming and resource-intensive processes to identify and recruit participants, convenience sampling allows researchers to gather data from those who happen to be in the right place at the right time, or who meet the study’s basic criteria and are easy to contact.

One of the most notable benefits of convenience sampling is its speed and cost-effectiveness. Since participants are easy to reach, researchers can save both time and resources that would otherwise be spent on recruitment strategies, travel, or extensive outreach. For example, if you’re studying employee engagement in the workplace, you might simply survey your colleagues, since they are readily available and meet the general criteria of being employees. You don’t need to look far or conduct an elaborate recruitment process. This ease of implementation is especially valuable when dealing with limited budgets or tight deadlines. Convenience sampling also simplifies logistics, as researchers don’t need to source participants from outside their immediate environment, which can be particularly helpful in the initial stages of research where the primary goal is to test ideas or gather preliminary data.

Despite its practicality, convenience sampling carries a significant risk of bias. Since the sample is drawn from a pool of easily accessible participants, it may not reflect the diversity of the broader population. This lack of representation can lead to skewed results, limiting the generalisability of the study’s conclusions. Moreover, convenience sampling often captures a very specific subset of individuals: those who are willing to participate. People who are available and motivated to take part may differ significantly from those who are harder to reach, potentially introducing a self-selection bias. This means that the participants in your study might share certain characteristics that make them different from the larger group you’re trying to understand, thereby limiting the accuracy and breadth of the findings.

Convenience sampling is best suited for exploratory research, pilot studies, or projects where time and resources are constrained. It’s a practical method when the research goal is to test hypotheses, gather preliminary data, or explore an emerging field. However, for studies where generalising findings to a larger population is critical, convenience sampling is not recommended. In these cases, a more representative sampling method, such as random or stratified sampling, would yield more reliable and valid results.

Purposive Sampling

Purposive sampling, also known as purposeful sampling, is a strategically driven approach to participant selection, designed to align closely with the objectives of the research. It involves the deliberate selection of individuals who possess specific characteristics, knowledge, or experiences that are directly relevant to the study’s focus. The intention here is not to gather a wide, diverse group of participants, but to choose individuals whose particular insights can provide depth and richness to the data. In purposive sampling, researchers carefully define the criteria for inclusion, selecting participants based on how well they fit the study’s needs. This targeted approach helps ensure that the participants are not only suitable but also capable of offering the kind of focused and contextually relevant information that the research seeks to uncover.

The primary strength of purposive sampling lies in its efficiency and precision. By handpicking participants based on specific criteria, researchers can ensure that every individual involved in the study has a direct connection to the research topic, which enhances the quality of the data collected. For instance, if a researcher is investigating the experiences of people recovering from cancer, they would purposefully select participants who have undergone cancer treatment, ensuring that the data collected is directly relevant to the research question. This method is especially useful in qualitative research, where the goal is often to gain a deeper understanding of a particular phenomenon rather than to generalise findings to a larger population. Moreover, purposive sampling is often more practical when working with small or hard-to-reach populations. In studies involving niche groups, such as people with rare medical conditions or members of specific subcultures, purposive sampling enables researchers to focus on finding individuals who meet the study’s strict criteria, bypassing the need for broader recruitment efforts that may yield less relevant participants.

While purposive sampling offers many advantages in terms of relevance and efficiency, it also comes with inherent limitations, the most significant of which is the risk of selection bias. Since participants are chosen subjectively by the researcher, there is always the potential for bias in the selection process. The researcher’s choices may be influenced by preconceived notions about who would provide the most useful data, which could result in an unbalanced or unrepresentative sample. Since the sample is intentionally selective, it does not provide an accurate cross-section of a broader group. As a result, purposive sampling is not ideal for studies where broad generalisability is a key objective.

Purposive sampling is most commonly employed in qualitative research, where the goal is to explore specific themes, experiences, or phenomena in great detail. It is particularly useful when researchers are investigating a clearly defined group or phenomenon, such as in case studies, ethnographic research, or studies focusing on specialised areas like mental health, education, or organisational behaviour. Additionally, purposive sampling is often used in evaluation research, where the goal is to assess a programme, policy, or intervention. By focusing on individuals with firsthand experience, researchers can gather detailed feedback that is crucial for evaluating the effectiveness of the intervention.

Snowball Sampling

Snowball sampling is a participant recruitment method that relies heavily on social networks and personal referrals to build a sample. The process begins with a small group of initial participants who are chosen based on their relevance to the study. These participants are then asked to refer others they know who meet the study’s criteria, who in turn refer more people, and so on, creating a snowball effect. Over time, the sample grows organically, expanding through connections within a specific community or network.
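Purely to illustrate the referral mechanics (a toy simulation over a made-up referral network, not a recruitment tool), the “snowball” can be pictured as successive waves of referrals spreading out from a few initial seeds:

```python
import random

random.seed(1)  # for a reproducible illustration

# Hypothetical referral network: who each person knows within the community
contacts = {person: random.sample(range(100), k=3) for person in range(100)}

seeds = [0, 1]            # initial participants chosen for their relevance
recruited = set(seeds)
current_wave = list(seeds)

# Each wave of participants refers people they know; the sample grows
# until referrals dry up or a rough target size is reached.
while current_wave and len(recruited) < 30:
    next_wave = []
    for person in current_wave:
        for referral in contacts[person]:
            if referral not in recruited:
                recruited.add(referral)
                next_wave.append(referral)
    current_wave = next_wave

print(f"Recruited {len(recruited)} participants through referral chains")
```

In a real study, of course, each referral depends on trust and willingness rather than a pre-mapped network, which is precisely why the sample can grow quickly in some communities and stall completely in others.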

This method is especially useful when researchers are working with hard-to-reach populations. These might include people in marginalised groups, individuals involved in illegal activities, or those with experiences that are not easily accessible through conventional recruitment methods, such as people who have experienced homelessness or are part of underground subcultures. In many cases, people within these groups may not want to reveal their identities to researchers, especially if their involvement in the group is sensitive or stigmatised. However, through personal referrals from trusted peers, they may be more likely to participate. The trust established between members of the community can make them more comfortable with sharing their experiences, allowing researchers to collect rich, authentic data from participants who would otherwise be unreachable. Snowball sampling can also be highly cost-efficient and flexible.

Despite its advantages, snowball sampling has several potential drawbacks, the most notable being the risk of bias. Since participants are recruited through personal networks, the sample is often restricted to people who are socially connected, which can limit the diversity of the sample. This lack of diversity can skew the results, making it difficult to generalise findings to the broader population. Moreover, snowball sampling can create a chain of referrals that is disproportionately shaped by the initial participants. If the first few participants are not representative of the population being studied, their referrals may perpetuate this imbalance, further reducing the sample’s representativeness. Another challenge is the difficulty in controlling the sample size. Since snowball sampling relies on personal referrals, the growth of the sample can be unpredictable. In some cases, the “snowball” may gather momentum quickly, leading to a large, varied participant pool. In other instances, recruitment may stall if participants are unwilling or unable to refer others, resulting in a sample that is too small to draw meaningful conclusions.

Given its strengths and limitations, snowball sampling is most effective in studies where recruiting participants through traditional methods would be difficult or impractical. It is particularly well-suited for research involving rare populations, sensitive topics, or hidden communities where members may be reluctant to come forward on their own. This method is also useful in qualitative research, where the goal is to collect in-depth, nuanced data from a specific group rather than to achieve broad generalisability. In exploratory research, snowball sampling can help researchers generate preliminary data about populations that are otherwise difficult to access. It allows for a gradual expansion of the sample, giving researchers the flexibility to adjust their recruitment strategy based on the data collected. However, because of the potential for bias, snowball sampling is generally not recommended for studies that require representative samples or where generalisability to the broader population is a primary concern.

Random Sampling

Random sampling, as the name suggests, is a method where each individual in the population has an equal chance of being selected, making the process akin to drawing names out of a hat. By giving every person an equal opportunity to be included, random sampling minimises selection bias and maximises the likelihood that the sample accurately represents the broader population. A simple example would be assigning numbers to everyone in a population and using a random number generator to pick participants. This quality is what makes random sampling a preferred choice in large-scale surveys and experimental research, where the goal is to ensure that the findings can be applied to a larger group.
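To make the “names out of a hat” idea concrete, here is a minimal sketch in Python (using a made-up population list purely for illustration) of how a random number generator can draw a simple random sample:

```python
import random

# Hypothetical sampling frame: a complete, up-to-date list of the population
population = [f"Person {i}" for i in range(1, 1001)]  # 1,000 people

random.seed(42)  # fixed seed only so the example is reproducible
# random.sample() gives every individual an equal chance of selection
# and never selects the same person twice
sample = random.sample(population, k=50)

print(sample[:5])  # a peek at the first few selected participants
```

Note that the draw is only as good as the sampling frame: the list must genuinely cover the whole population, which is the practical hurdle discussed below.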

One of the most notable strengths of random sampling is its ability to provide high external validity. Since the method does not favour any particular subset of the population, the findings from a study using random sampling are more likely to be generalisable—meaning that they can be applied to the wider population with a greater degree of confidence. Another key benefit is the reduction of systematic bias. In other sampling methods, certain individuals or groups may be over-represented due to researcher influence or convenience. With random sampling, this risk is minimised because the selection process is completely unbiased. The random nature of this method ensures that personal preferences, biases, or logistical factors do not affect who is chosen for the study.

Despite its many advantages, random sampling can be challenging to implement, particularly in studies with large populations. Some of the main drawbacks are the time and cost involved. To conduct random sampling on a large scale, researchers need access to a complete and up-to-date list of the population from which they’re drawing their sample. In some cases, obtaining such a list can be difficult or impossible, especially when working with fragmented or hard-to-reach populations. Additionally, there can be significant logistical hurdles. In small populations, random sampling may be fairly straightforward, but when dealing with larger populations, coordinating a random selection process can become complex. This can involve significant costs, not just in terms of the initial recruitment of participants, but also in terms of travel, communication, and follow-up procedures.

Given the costs and logistical challenges, random sampling is best suited for large quantitative studies, particularly those where generalisability is the primary goal. If the research is designed to draw conclusions about the broader population, such as in public health research, market research, or large-scale sociological studies, random sampling is ideal because it provides the most unbiased and representative data possible. In cases where time and budget constraints are more pressing, or where the research is exploratory rather than aiming for population-level generalisability, other sampling methods (such as convenience or purposive sampling) might be more appropriate.

Stratified Sampling

Stratified sampling is a method used by researchers to ensure that their sample accurately reflects the diversity of the population by focusing on key subgroups, or “strata.” The basic idea is that the population is divided into distinct groups based on important characteristics such as age, gender, income level, education, or ethnicity. Once these groups are defined, participants are then randomly selected from each stratum. This approach allows researchers to ensure that the sample mirrors the proportions of these subgroups in the overall population, leading to more precise and reliable findings. For example, if a population consists of 40% males, 55% females, and 5% transgender people, the researcher ensures that the sample has the same proportional representation. This method is particularly effective in studies where the population consists of individuals with varying characteristics that could influence the outcome of the study. By ensuring that all relevant subgroups are proportionally represented, stratified sampling helps researchers avoid over-representing or under-representing certain groups.
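As a rough sketch of the proportional logic described above (with hypothetical strata and made-up identifiers), selection within each stratum is still random, but the number drawn from each stratum mirrors its share of the population:

```python
import random

random.seed(7)  # for a reproducible illustration

# Hypothetical population grouped into strata (gender, in this example)
strata = {
    "male":        [f"M{i}" for i in range(400)],  # 40% of the population
    "female":      [f"F{i}" for i in range(550)],  # 55% of the population
    "transgender": [f"T{i}" for i in range(50)],   # 5% of the population
}

total = sum(len(group) for group in strata.values())
sample_size = 100

stratified_sample = []
for label, group in strata.items():
    # The number drawn from each stratum is proportional to its share
    # of the whole population; selection within the stratum is random.
    n = round(sample_size * len(group) / total)
    stratified_sample.extend(random.sample(group, n))

print(len(stratified_sample))  # 100 participants, split roughly 40/55/5
```

In practice, the strata and their relative sizes would come from reliable demographic information about the population, which is exactly the prerequisite discussed further below.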

One of the main strengths of stratified sampling is its ability to produce a highly representative sample of the population. By ensuring that each subgroup is properly represented, this method increases the precision of the results, which in turn improves the reliability of the study’s findings. This is especially important in research where differences between subgroups are a key focus. Moreover, by dividing the population into strata and then randomly selecting participants from each group, stratified sampling ensures a more balanced and accurate representation, which minimises the risk of sampling errors. Finally, the ability to analyse subgroup differences is a key advantage of stratified sampling, particularly in fields like sociology, economics, and public health, where understanding these differences is critical.

While stratified sampling offers many advantages, it does come with certain challenges, particularly in terms of the time and resources required to implement it. One of the most time-consuming aspects of this method is the need to define and organise the strata before selecting participants. Researchers must have a clear understanding of which characteristics are most relevant to the study and must have detailed information about the population to create the strata. Furthermore, in some cases, this information may not be readily available, or the population may be too complex to neatly divide into well-defined strata. Stratified sampling can also be more logistically complicated than simpler methods like convenience sampling. Researchers need to ensure that they have enough participants in each stratum to allow for meaningful analysis, which can require more recruitment efforts. If some strata are smaller or harder to reach, the researcher may need to put in extra effort to find participants from those groups, increasing both time and costs.

Given its ability to provide a highly representative sample, stratified sampling is best used in studies where representation across key subgroups is critical. It is particularly useful when researchers are interested in analysing differences between subgroups, such as age, income, or geographic location. Stratified sampling is also valuable in demographic studies, where the goal is often to understand the characteristics of various subgroups within a population.

Quota Sampling

Quota sampling is a sampling method that shares certain goals with stratified sampling, particularly the aim of capturing diversity across specific subgroups. However, the fundamental difference lies in how the sample is selected. While stratified sampling relies on random selection from each subgroup, quota sampling allows researchers to directly control who is recruited by actively seeking participants to fill predefined quotas based on certain characteristics, such as age, gender, education level, or income. Once the quota for each subgroup is filled, no further participants from that group are recruited, ensuring that the final sample meets the predetermined criteria for representation.
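The core mechanic, recruiting until each predefined quota is filled and then closing recruitment for that subgroup, can be sketched as follows (a simplified illustration with hypothetical quotas; in a real study the stream of potential participants would come from the researcher’s outreach rather than a simulated list):

```python
import random

random.seed(3)  # for a reproducible illustration

# Hypothetical quotas the researcher wants to fill (by age group)
quotas = {"18-34": 20, "35-54": 20, "55+": 10}
recruited = {group: [] for group in quotas}

def age_group(age):
    if age < 35:
        return "18-34"
    if age < 55:
        return "35-54"
    return "55+"

# Simulated stream of people reached through (non-random) outreach
volunteers = [(f"Volunteer {i}", random.randint(18, 90)) for i in range(500)]

for name, age in volunteers:
    group = age_group(age)
    # Recruit only while the quota for this subgroup is still open
    if len(recruited[group]) < quotas[group]:
        recruited[group].append(name)
    # Stop once every quota has been met
    if all(len(recruited[g]) >= quotas[g] for g in quotas):
        break

print({group: len(members) for group, members in recruited.items()})
```

Unlike stratified sampling, nothing in this process is random: whoever happens to be reached first fills the quota, which is where the bias discussed below can creep in.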

One of the main advantages of quota sampling is that it guarantees the inclusion of specific subgroups in the sample. By setting quotas for each group, the researcher ensures that the final sample reflects the desired characteristics or proportions, which is particularly important when the goal of the research is to compare different groups. Another key benefit of quota sampling is its efficiency. Since the researcher can directly seek out participants who meet the required criteria, the process can be completed more quickly and at a lower cost than methods like stratified sampling. Moreover, quota sampling offers a greater degree of control over the composition of the sample. The researcher can adjust the quotas based on the needs of the study, ensuring that specific groups are represented according to the study’s objectives.

Despite its advantages, quota sampling also has several limitations, the most significant of which is the potential for bias. Since participants are not selected randomly, there is a risk that the sample may not accurately represent the broader population, even if the quotas are met. The recruitment process is subjective, as it relies on the researcher’s judgement and outreach methods, which can introduce selection bias. This lack of randomisation means that the results from a quota sample may not be generalisable to the larger population, especially if certain characteristics or perspectives are overlooked during recruitment. Additionally, quota sampling can lead to incomplete representation within each subgroup. While the researcher may set quotas based on broad characteristics like age or gender, other important factors may not be considered. This can result in a sample that, while meeting the quota criteria, lacks internal diversity within the subgroups, which can limit the depth and richness of the data collected. Another challenge with quota sampling is that it requires detailed knowledge of the population beforehand. The researcher must have a clear understanding of the proportions of different groups within the population to set accurate quotas. This can be difficult if reliable demographic data is not available, or if the population is highly fragmented or diverse in ways that are not easily captured by simple quotas.

Quota sampling is best suited for studies where the primary goal is to compare specific groups or ensure representation across key subgroups. It is commonly used in market research, opinion polling, and social research, where researchers need to gather data quickly and cost-effectively while ensuring that certain groups are represented. This method is also useful in studies where strict randomisation is not feasible or necessary. For example, in research involving focus groups or interviews, where the goal is to gather in-depth insights from specific subgroups, quota sampling allows the researcher to select participants who fit the desired profile without the logistical complexities of random selection.

Questions to ponder

How do different sampling methods influence the validity of research findings?

Can convenience sampling ever be justified in large-scale research?

In what scenarios might snowball sampling offer a better solution than random sampling?

Exploring 10 popular research designs: a quick guide

Dr Lynette Pretorius

In research, the design chosen plays a pivotal role in determining how data are collected, analysed, and interpreted. Each design provides a unique lens through which researchers can explore their questions, offering distinct advantages and limitations. Below, I summarise ten common research designs, spanning qualitative, quantitative, and mixed methods approaches.

Action Research

Action research is a collaborative and iterative approach that seeks to solve real-world problems while simultaneously generating knowledge. It is characterised by its participatory nature, with researchers and participants working together to identify problems and implement solutions. This collaborative process ensures that the research is deeply rooted in the needs and realities of the community or organisation being studied. By involving stakeholders in every step, action research not only increases the relevance of the findings but also empowers participants by giving them ownership of the process. This makes it particularly impactful in settings like schools, where teachers and administrators can actively contribute to shaping educational practices.

What sets action research apart is its cyclical nature. Unlike traditional research, where data are collected and analysed in a linear fashion, action research involves continuous cycles of planning, acting, observing, and reflecting. Another important feature of action research is its adaptability. As new insights emerge, the research design can be adjusted to address unforeseen challenges or opportunities. This flexibility allows for iterative learning and continuous improvement, fostering a more dynamic and responsive research environment. This makes it particularly well-suited for environments where ongoing change is necessary, such as schools or businesses aiming to improve their operations or outcomes. However, this adaptability also introduces challenges, particularly in maintaining rigour and objectivity. Balancing the need for scientific validity with the practical demands of real-world problem-solving requires careful planning and reflective practice, often making the role of the researcher one of facilitator as much as investigator.

Autoethnography

I have previously written another blog post which explains autoethnography in detail. In essence, autoethnography is a research design that combines the study of personal experience with broader social and cultural analysis. In this approach, the researcher uses their own life as the primary source of data, reflecting on their personal experiences to explore larger cultural or societal issues. Researchers are the participants in their own studies and the stories which are told often explore transformative experiences for the researcher, frequently taking the form of epiphanies that significantly influenced the author’s worldview. By blending autobiography and ethnography, autoethnography allows researchers to provide an insider’s perspective on their own social context, making it a powerful tool for examining how individual identity and experiences are shaped by—and in turn, shape—cultural norms, values, and power dynamics.

One of the strengths of autoethnography is its ability to highlight marginalised voices or experiences that are often overlooked in traditional research. It provides a platform for self-reflection and critical analysis, allowing researchers to connect their individual stories to larger collective experiences. However, the highly personal nature of this research design also presents challenges. Balancing subjectivity with academic rigour requires careful reflection to avoid the research becoming overly introspective or self-indulgent. Autoethnographers must navigate the fine line between personal storytelling and scholarly analysis, ensuring that their narrative contributes meaningfully to the understanding of broader social or cultural issues. Despite these challenges, autoethnography remains a powerful approach for exploring the intersection of the personal and the political, offering rich, emotionally resonant insights into the complexities of human experience.

Note that autoethnography can be done by one researcher or by a group of researchers. When done together, this type of autoethnography is called collaborative autoethnography. Collaborative autoethnography is particularly pertinent when examining complex social phenomena, such as marginalisation and the pursuit of social justice, as it facilitates the inclusion of multiple perspectives and voices. In this way, the individual voices of the researchers work together to illuminate common themes or experiences.

Case Study

Case study research is particularly effective for exploring complex phenomena in depth and within their real-life context. The case study design focuses on an in-depth examination of a ‘case,’ which could be an individual, group, organisation, or event. Case studies can be either descriptive, exploring what is happening, or explanatory, seeking to understand why and how something occurs. They often use multiple data sources—such as interviews, observations, and documents—to provide a comprehensive understanding of the case. Unlike other designs that seek to generalise findings across large populations, case studies focus on the intricacies of a ‘case’. The depth of focus of a case study also presents limitations—namely, the findings from a single case may not be applicable to other contexts. Despite this, case studies are often used as a stepping stone for further research, providing in-depth insights that can inform broader studies.

The distinction between single-case and multiple-case designs lies in the scope and focus of the research. A single-case design centres around an in-depth examination of one particular case, which is often chosen because it is either unique, critical, or illustrative of a broader phenomenon. This design is beneficial when the case is exceptional or offers significant insight into a rare or novel situation. In contrast, a multiple-case design involves studying several cases to compare and contrast findings across different contexts or instances. Multiple-case designs offer more robust evidence, as they allow researchers to identify patterns or variations across cases, increasing the potential for generalising findings to a broader population or set of circumstances.

Document or Policy Analysis

Document or policy analysis is a qualitative research design that involves systematically reviewing and interpreting existing documents to extract meaningful data relevant to a research question. These documents can range from government reports, personal letters, and organisational records to media articles, policy documents, and historical texts. Policy analysis, in particular, examines the formulation, implementation, and outcomes of policies by analysing relevant data, understanding stakeholder perspectives, and evaluating the potential impacts of various options. Researchers use document analysis to identify patterns, themes, or trends within written materials, which can offer valuable insights into social, political, or organisational contexts. One of the strengths of document analysis is that it allows researchers to access data that is already available, making it a relatively unobtrusive approach that does not require direct interaction with participants.

This research design is particularly useful when studying past events, policies, or organisational practices, as documents can provide a rich historical or contextual backdrop. Additionally, document analysis can be used in conjunction with other research designs, such as case studies, to triangulate findings and enhance the depth of the research. However, one of the challenges of this design is assessing the credibility, bias, or completeness of the documents. Researchers must critically evaluate the sources to ensure that the information is reliable and relevant to their study. Despite these challenges, document analysis remains a valuable tool for exploring existing written records and uncovering insights that may not be easily accessible through other research designs.

Ethnography

Ethnography is a deeply immersive research design that involves the researcher becoming part of the community or environment they are studying. This approach allows researchers to gather first-hand insights into the social dynamics, practices, and beliefs of a group from the inside. Rather than relying on external observation or second-hand accounts, ethnographers immerse themselves among their participants, often for extended periods. This enables them to capture the complexities of human behaviour in its natural setting, offering a nuanced understanding of cultural practices and social interactions.

One of the unique aspects of ethnography is its emphasis on the participants’ perspectives. By prioritising the voices and experiences of the people being studied, ethnographers aim to represent the world as seen through the eyes of the participants. However, this approach also raises challenges, particularly around maintaining objectivity and managing the researcher’s role in influencing the group they are observing. Ethnography requires careful ethical considerations, such as gaining informed consent and respecting privacy, given the often intimate nature of the research. Despite these challenges, the rich, contextual insights that ethnography provides make it a powerful approach for understanding the lived experiences of individuals within their cultural and social environments.

Experimental and Quasi-Experimental Design

Experimental research is a highly controlled design that seeks to establish cause-and-effect relationships by manipulating one or more independent variables and observing their impact on dependent variables. This research design typically involves two groups: an experimental group that receives the treatment or intervention and a control group that does not. By randomly assigning participants to these groups, researchers can minimise bias and ensure that differences in outcomes are directly attributable to the variable being tested, rather than external factors. This randomisation strengthens the internal validity of the experiment.
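As a small illustration of the random-assignment step (a sketch only, with hypothetical participant identifiers), shuffling the participant list and splitting it in half gives every person the same chance of landing in either group:

```python
import random

random.seed(11)  # for a reproducible illustration

# Hypothetical pool of recruited participants
participants = [f"Participant {i}" for i in range(1, 41)]

# Randomly assign participants: shuffle the list, then split it in half
random.shuffle(participants)
midpoint = len(participants) // 2
experimental_group = participants[:midpoint]   # receives the intervention
control_group = participants[midpoint:]        # does not receive it

print(len(experimental_group), len(control_group))  # 20 and 20
```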

Quasi-experimental designs are similar to experimental research but differ in one key aspect: they lack the random assignment of participants to experimental and control groups. In cases where randomisation is either impractical or unethical—such as in educational settings or when studying pre-existing groups—quasi-experimental designs provide a valuable alternative. While researchers still manipulate an independent variable and observe its effect on a dependent variable, the absence of randomisation means that there may be pre-existing differences between groups. As a result, researchers must account for these differences when analysing the outcomes, often using statistical methods to control for confounding variables.

Grounded Theory

Grounded theory is a qualitative research design that aims to generate theory directly from the data rather than testing an existing hypothesis or using a pre-existing theoretical framework. Unlike more traditional research approaches, grounded theory allows the theory to emerge naturally through the iterative process of data collection and analysis. Researchers continuously compare new data with previously gathered information. This ongoing comparison enables them to identify recurring patterns, concepts, and categories, which are then refined into a coherent theoretical framework. Grounded theory is particularly useful when studying processes, interactions, or behaviours for which theories do not yet exist or do not fully explain the phenomena.

One of the major advantages of grounded theory is its flexibility. Since it does not require researchers to adhere to a rigid hypothesis or framework from the start, the design allows for the exploration of unexpected insights that may arise during data collection. This makes it a powerful approach for investigating complex or under-researched topics. However, the open-ended nature of grounded theory can also be a challenge, as it requires researchers to be highly reflexive and adaptable throughout the research process. The absence of a pre-set framework means that analysis can be time-consuming, with researchers needing to sift through large amounts of data to construct a meaningful theory that adequately reflects the participants’ experiences and emerging patterns.

Narrative Inquiry

Narrative inquiry is a qualitative research design that focuses on the stories people tell about their personal experiences, aiming to understand how individuals construct meaning in their lives. Unlike other research approaches that may prioritise external observation or objective measurement, narrative inquiry dives into the subjective world of the participant. Researchers collect these narratives through interviews, journals, letters, or even autobiographies, and analyse how individuals structure their stories to make sense of their experiences. This approach is particularly useful in fields where understanding personal identity, life transitions, or cultural contexts requires a close examination of how people frame and interpret their lived experiences.

A key feature of narrative inquiry is its emphasis on the co-construction of meaning between the researcher and the participant. The researcher does not just passively collect stories but actively engages in dialogue, interpreting the narratives while considering how their own perspectives and biases influence the analysis. This collaborative process allows for a richer understanding of the subject matter but also demands a high level of reflexivity from the researcher. Since narratives are shaped by memory, culture, and social influences, researchers must carefully navigate issues of subjectivity, ensuring that the participant’s voice is authentically represented while also providing a critical analysis of how the story fits within broader social or cultural patterns.

Phenomenology

Phenomenology is a qualitative research design that seeks to explore and understand individuals’ lived experiences of a particular phenomenon. Rather than focusing on objective measures or external observations, phenomenology prioritises subjective experience, aiming to uncover the essence of how people perceive, interpret, and make sense of their experiences. Researchers using this design typically collect data through a variety of in-depth methods such as interviews or reflections, allowing participants to describe their personal encounters with the phenomenon in their own words. The goal is to view the experience as closely as possible through the eyes of the individuals who lived it, capturing its richness and complexity without external influence.

While this research design provides deep insights into human consciousness and subjective experience, it can be challenging to generalise the findings due to the intensely personal nature of the data. Nevertheless, phenomenology’s strength lies in its ability to provide a profound, context-rich understanding of how individuals uniquely experience and interpret specific aspects of life, making it invaluable for exploring complex, emotionally charged, or abstract phenomena.

Survey Research

Survey research is a widely utilised design in both quantitative and qualitative research that involves gathering data from a large group of respondents, typically through structured questionnaires. This approach is highly versatile, allowing researchers to collect information about a wide range of topics, including attitudes, behaviours, preferences, and demographic characteristics. One of the main advantages of survey research is its ability to gather data from a broad population efficiently, making it possible to identify trends, correlations, or patterns within large datasets. Surveys can be administered in various formats, such as online, by phone, or in person, providing flexibility in how researchers reach their target audience.

However, the quality and reliability of the data collected through surveys depend heavily on the survey’s design. Well-constructed surveys require carefully worded questions that avoid bias and confusion, and they must be designed to ensure that respondents understand and can accurately answer the questions. Another challenge is ensuring a high response rate, as low participation can skew results and affect the study’s representativeness. Despite these limitations, survey research remains a powerful tool in fields like marketing, social sciences, public health, and education, where large-scale data collection is necessary to inform policies, identify trends, or make generalisations about a population’s characteristics or behaviours.

Questions to ponder

How does the nature of the research question influence the decision to use a particular research design?

How do ethical concerns shape the choice of research design?

What types of research questions are best suited for case study research, and how do these differ from questions better addressed through autoethnography?

The power of collaborative writing and peer feedback in doctoral writing groups

Dr Basil Cahusac de Caux

Dr Basil Cahusac de Caux is an Assistant Professor with a specialization in the sociology of higher education, postgraduate research, and the sociology of language.

Dr Lynette Pretorius

Have you ever wondered how doctoral students can navigate the challenging journey of academic writing? For many, the answer lies in the strength of community and the power of collaborative feedback. Our recent paper explores this very subject, examining how doctoral writing groups can transform the academic experience through peer feedback and collective learning.

Our study centres on a collaborative book project where doctoral students wrote and peer-reviewed each other’s chapters, ultimately producing a book titled Wellbeing in Doctoral Education: Insights and Guidance from the Student Experience. This project wasn’t just about writing; it was about creating a community of practice, where students learned together, shared experiences, and supported each other through the arduous process of academic writing. The concept of communities of practice is pivotal in understanding this study. These communities are formed by individuals who share a passion or concern for something they do, learning to do it better through regular interaction.

In the context of our specific doctoral writing groups, the shared domain was academic writing and publishing of the academic book, and the community was formed through mutual engagement and support. Participants were united by their commitment to improving their academic writing through peer feedback. This shared focus provided a common ground for all members, fostering a sense of belonging and purpose. Building a supportive community was crucial. The writing groups created a space where students felt safe to share their work, provide feedback, and discuss their challenges. This environment of trust and collegiality was essential for effective learning and personal growth. Through their interactions, the group developed a shared repertoire of resources, experiences, and practices. This included not just the technical aspects of writing but also the emotional and psychological support needed to thrive in academia. Participants learned from each other, gaining insights into different writing styles, feedback techniques, and academic expectations.

One of the most significant findings from our study was the transformative power of peer feedback. Participants found that receiving and giving feedback was instrumental in improving their writing. Feedback was not only about correcting mistakes but also about providing affirmation and recognising the potential and effort of the writers. This helped build confidence and self-esteem. Another powerful aspect of peer feedback was the opportunity to learn from others. This process helped participants identify their own mistakes and areas for improvement. By reviewing peers’ work, participants also gained new perspectives and ideas that they could apply to their own writing.

Our findings illustrate how peer feedback and collaborative practices within writing groups can significantly enhance the doctoral experience. Participants discovered that, despite their unique backgrounds and stories, they shared common challenges in their academic journeys. This highlights the dual nature of the doctoral experience: each student has a unique narrative, yet their struggles and successes resonate with others. Uncovering these commonalities amidst diversity facilitated a deeper understanding and appreciation of one another’s viewpoints, fostering a sense of community and collegiality within the group, alleviating feelings of isolation, and promoting a supportive environment. Our findings also emphasise the importance of reflective writing and feedback in promoting personal growth and academic development. Through sharing their stories, participants articulated and reshaped their identities in academia, which helped them navigate both personal and academic development.

Our study highlights the immense value of collaborative writing and peer feedback in doctoral education. By fostering a supportive community of practice, doctoral students can navigate the complexities of academic writing more effectively, develop their academic identities, and build the confidence needed to succeed in academia. This approach not only improves writing skills but also provides emotional and psychological support, making the doctoral journey a more enriching and less isolating experience.

The findings of our study have several important implications for doctoral education:

  • Institutions should encourage the formation of writing groups and other collaborative learning opportunities to help doctoral students develop their writing skills and academic identities.
  • Developing students’ ability to give and receive feedback is crucial. Our study shows that feedback literacy can significantly enhance the quality of academic writing and the overall learning experience.
  • Creating a safe and supportive environment where students can share their work and experiences is essential for their personal and academic growth.

Taken together, our study shows that embracing the power of community and collaboration could be the key to transforming the doctoral experience, making it more supportive, inclusive, and ultimately, more successful for all students involved.

Questions to ponder

How do your emotions influence academic writing and reactions to feedback?

Are there hidden practices of publishing that should be discussed more openly?

How can academic institutions better support the formation of communities of practice among doctoral students?

What are some challenges that might arise in implementing peer feedback systems, and how can they be addressed?

In what ways can the process of giving and receiving feedback be made more effective and less emotionally taxing for students?

The AI literacy framework for higher education

Dr Lynette Pretorius

Dr Basil Cahusac de Caux

In an era where generative artificial intelligence (AI) permeates every aspect of our lives, AI literacy in higher education has never been more crucial. In our recent paper, we delve into our own journeys of developing AI literacy, showcasing how educators can seamlessly integrate AI into their teaching practices. Our goal is to cultivate a new generation of AI-literate educators and graduates. Through our experiences, we also created a comprehensive framework for AI literacy, highlighting the transformative potential of embracing AI in educational settings.

We embraced AI with optimism and enthusiasm, seeing it as a tool to be harnessed rather than feared. In our recent paper, we passionately argue that AI literacy is an indispensable skill for today’s graduates. We emphasise that this mindset requires a significant cultural shift in higher education, advocating for the integration of AI as a valuable learning aid. By fostering this change, we can unlock AI’s potential to enhance education and empower students to thrive in an increasingly digital world.

Our journey began with curiosity and a willingness to experiment with AI in our educational practices. Lynette, for instance, integrated AI into her role, showcasing its capacity as an academic language and literacy tutor. She encouraged her students, many of whom are from non-English speaking backgrounds, to use tools like Grammarly and ChatGPT to improve their academic writing. By doing so, she highlighted the importance of collaboration between students and AI, promoting deeper learning and engagement.

In a Master’s level course on autoethnography, Lynette inspired her students to harness generative AI for creative data generation. She showcased how tools like DALL-E could be used to create artworks that visually represent their research experiences. This approach not only ignited students’ creativity but also deepened their engagement with their assignments, allowing them to explore their research from a unique and innovative perspective.

Basil introduced his students to the power of generative AI through hands-on assignments. One notable task involved creating a public awareness campaign centred around the UN’s Sustainable Development Goals. Students utilised DALL-E to produce compelling visuals, showcasing AI’s ability to amplify creativity and enhance learning outcomes. This practical approach not only highlighted the transformative potential of AI but also encouraged students to engage deeply with important global issues through innovative and impactful media.

While the benefits of AI in education were clear to us, we also encountered ethical considerations and challenges. In our paper, we emphasised the importance of transparency and informed consent when using AI in research and teaching. For example, we ensured that students and research participants were aware of how their data would be used and the potential biases inherent in AI-generated content. Moreover, we highlighted the environmental impact of using AI technologies. The energy consumption of AI models is significant, raising concerns about their sustainability. This awareness is crucial as educators and institutions navigate the integration of AI into their practices.

From our experiences and reflections, we developed a groundbreaking AI literacy framework for higher education, encompassing five domains: foundational, conceptual, social, ethical, and emotional. As illustrated in the figure below, this comprehensive framework is designed to empower educators and students with the essential skills to adeptly navigate the intricate AI landscape in education. By promoting a holistic and responsible approach to AI literacy, our framework aims to revolutionise the integration of AI in academia, fostering a new generation of informed and conscientious AI users.

Elements of AI Literacy in Higher Education.

From these essential domains of AI literacy, we have crafted a comprehensive framework for AI literacy in higher education.

The framework underscores the following key features:

  • Foundational Understanding: Mastering the basics of accessing and using AI platforms.
  • Information Management: Skillfully locating, organising, evaluating, using, and repurposing information.
  • Interactive Communication: Engaging with AI platforms as interlocutors to create meaningful discourse.
  • Ethical Citizenship: Conducting oneself ethically as a digital citizen.
  • Socio-Emotional Awareness: Incorporating socio-emotional intelligence in AI interactions.

The AI Literacy Framework for Higher Education.

Our AI literacy framework has significant implications for higher education. It provides a structured approach for integrating AI into teaching and research, emphasising the importance of ethical considerations and emotional awareness. By fostering AI literacy, educators can prepare students for a future where AI plays a central role in various professional fields.

Embracing AI literacy in higher education is not just about integrating new technologies; it’s about preparing students for a rapidly changing world. Our AI literacy framework offers a comprehensive guide for educators to navigate this transition, promoting ethical, effective, and emotionally aware use of AI. As we move forward, fostering AI literacy will be crucial in shaping the future of education and empowering the next generation of learners.

Questions to ponder

How can educators ensure that all students, regardless of their technological proficiency, can access and utilise generative AI tools effectively?

In what ways can generative AI tools be used to enhance students’ conceptual understanding of course materials?

How can the concept of generative AI as a collaborator be integrated into classroom discussions and activities?

How can educators model ethical behaviour and digital citizenship when using generative AI tools in their teaching?

How can understanding the emotional impacts of generative AI interactions improve the overall learning experience?

How can the AI literacy framework be practically integrated into different academic disciplines and curricula?

Developing AI literacy in your writing and research

Dr Lynette Pretorius

I have recently developed and delivered a masterclass about how you can develop your AI literacy in your writing and research practice. This included a series of examples from my own experiences. I thought I’d provide a summary of this masterclass in a blog post so that everyone can benefit from my experiences.


Artificial intelligence (AI) has been present in society for several years and refers to technologies which can perform tasks that used to require human intelligence. This includes, for example, computer grammar-checking software, autocomplete or autocorrect functions on our mobile phone keyboards, or navigation applications which can direct a person to a particular place. Recently, however, there has been a significant advancement in AI research with the development of generative AI technologies. Generative AI refers to technologies which can perform tasks that require creativity. In other words, these generative AI technologies use computer-based networks to create new content based on what they have previously learnt. These types of artistic creations have previously been thought to be the domain of only human intelligence and, consequently, the introduction of generative AI has been hailed as a “game-changer” for society.

I am using generative AI in all sorts of ways. The AIs I use most frequently include Google’s built-in generative AI in email, chat, Google Docs, etc., which learns from your writing to suggest likely responses. I also use Grammarly Pro to help me identify errors in my students’ writing, allowing me more time to give constructive feedback about their writing, rather than trying to find examples. This is super time-saving, particularly given how many student emails I get and the number of assignments and thesis chapters I read! I also frequently use a customised version of ChatGPT-4, which I trained to do things the way I would like them to be done. This includes responding in a specific tone and style, reporting information in specific ways, and doing qualitative data analysis. Finally, I use Leonardo AI and DALL-E to generate images, Otter AI to help me transcribe some of my research, Research Rabbit to help me locate useful literature on a topic, and AILYZE to help conduct initial thematic analysis of qualitative data.

The moral panic sparked at the start of 2023 by the advent of ChatGPT caused heated debates in higher education. Some people insisted that generative AI would encourage students to cheat, thereby posing a significant risk to academic integrity. Others, however, argued that the use of generative AI could make education more accessible to those who are traditionally marginalised and help students in their learning. I came to believe that the ability to use generative AI would be a core skill in the future, but that AI literacy would be essential. This led me to publish a paper where I defined AI literacy as:

AI literacy is understanding “how to communicate effectively and collaboratively with generative AI technologies, as well as evaluate the trustworthiness of the results obtained”.

Pretorius, L. (2023). Fostering AI literacy: A teaching practice reflection. Journal of Academic Language & Learning, 17(1), T1-T8. https://journal.aall.org.au/index.php/jall/article/view/891/435435567   

This prompted me to start to develop ways to teach AI literacy in my practices. I have collated some tips below.

  • Firstly, you should learn to become a prompt wizard! One of the best tips I can give you is to provide your generative AI with context. You should tell your AI how you would like it to do something by giving it a role (e.g., “Act as an expert on inclusive education research and explain [insert your concept here]”). This will give you much more effective results (see the sketch after this list).
  • Secondly, as I have already alluded to above, you can train your AIs to work for you in specific ways! So be a bit brave and explore what you can do.
  • Thirdly, when you ask it to make changes to something (e.g., to fix your grammar, improve your writing clarity/flow), ask it to also explain why it made the changes it did. In this way, you can use the collaborative discussion you are having with your AI as a learning process to improve your skills.
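As a minimal sketch of the first tip, giving the AI a role, here is how a role instruction might be sent programmatically (this assumes the OpenAI Python SDK and an API key; the model name is a placeholder, and the same idea works in the chat interface simply by typing the role instruction as your first message):

```python
from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY is set in your environment

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name; substitute the model you use
    messages=[
        # The system message gives the AI its role and context up front
        {"role": "system",
         "content": "Act as an expert on inclusive education research."},
        # The user message asks for the task; replace the bracketed
        # placeholder with your own concept, and ask for an explanation
        # of the reasoning so the exchange doubles as a learning tool
        {"role": "user",
         "content": "Explain [insert your concept here], and briefly explain "
                    "how you structured your answer."},
    ],
)

print(response.choices[0].message.content)
```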

The most common prompts I use in my work are listed below. The Thesis Whisperer has also shared several common prompts, which you can find here.

  • “Write this paragraph in less words.”
  • “Can you summarise this text in a more conversational tone?”
  • “What are five critical thinking questions about this text?”

I have previously talked about how you can use generative AI to help you design your research questions.

I have since also discovered that you can use generative AI as a data generation tool. For example, I have recently used DALL-E to create an artwork which represents my academic identity as a teacher and researcher. I have written a chapter about this process and how I used the conversation between myself and DALL-E as a data source. This chapter will be published soon (hopefully!).

Most recently, I have started using my customised ChatGPT 4 as a data analysis tool. I have a project that has a large amount of qualitative data. To help me with a first-level analysis of this large dataset, I have developed a series of 31 prompts based on theories and concepts I know I am likely to use in my research. This has allowed me to start the analysis of my data and has given me direction as to areas for further exploration. I have given an example of one of the research prompts below.

In this study, capital is defined as the assets that individuals vie for, acquire, and exchange to gain or maintain power within their fields of practice. This study is particularly interested in six capitals: symbolic capital (prestige, recognition), human capital (technical knowledge and professional skills), social capital (networks or relationships), cultural capital (cultural knowledge and embodied behaviours), identity capital (formation of work identities), and psychological capital (hope, efficacy, resilience, and optimism). Using this definition, explain the capitals which have played a part in the doctoral student’s journey described in the transcript.
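
To illustrate how a reusable analysis prompt like the one above could be applied consistently across a large qualitative dataset, here is a minimal sketch that loops over a folder of transcript files and sends each one, together with the prompt, to the OpenAI API. The folder name, model name, and helper function are assumptions for illustration rather than the actual workflow of this project, and participant data should only be sent to an external service where privacy settings and informed consent allow it (see the ethical considerations below).

```python
# A minimal sketch (not the project's actual pipeline) of running one
# first-level analysis prompt over a folder of interview transcripts.
# Folder name, model name, and prompt text are illustrative assumptions.
from pathlib import Path
from openai import OpenAI

client = OpenAI()

CAPITALS_PROMPT = (
    "In this study, capital is defined as the assets that individuals vie for, "
    "acquire, and exchange to gain or maintain power within their fields of "
    "practice. [...] Using this definition, explain the capitals which have "
    "played a part in the doctoral student's journey described in the transcript."
)

def analyse_transcript(transcript_text: str) -> str:
    """Send one transcript plus the analysis prompt and return the AI's response."""
    response = client.chat.completions.create(
        model="gpt-4",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "Act as a qualitative researcher in education."},
            {"role": "user",
             "content": f"{CAPITALS_PROMPT}\n\nTranscript:\n{transcript_text}"},
        ],
    )
    return response.choices[0].message.content

# Only run this on data you are permitted to share with an external service.
for path in sorted(Path("transcripts").glob("*.txt")):
    print(f"--- {path.name} ---")
    print(analyse_transcript(path.read_text(encoding="utf-8")))
```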

What I have been particularly impressed by so far is my AI’s ability to detect implicit meaning in the transcripts of the interviews I conducted. I expected it to be pretty good at explaining explicit mentions of concepts, but I had not anticipated it being so good at understanding more nuanced and layered meanings. This is a project that is still in progress, and I expect very interesting results.

There are some ethical considerations which should be taken into account when using generative AIs.

  • Privacy/confidentiality: Data submitted to some generative AIs could be used to train the generative AI further (often depending on whether you have a paid or free version). Make sure to check the privacy statements and always seek informed consent from your research participants.
  • Artwork: Generative AIs were trained on artwork without the express consent of artists. Additionally, it is worth considering who the actual artist/author/creator of the artwork is when you use generative AI to create it. I consider both the user and the AI as collaborators working to create the artwork together.
  • Bias propagation: Since generative AIs are trained based on data from society, there is a risk that they may reflect biases present in the training data, perpetuating stereotypes or discrimination.
  • Sustainability: Recent research demonstrates that generative AI does contribute significantly to the user’s carbon footprint.

It is also important to ethically and honestly acknowledge how you have used generative AI in your work by distinguishing what work you have done and what work it has done. I have previously posted a template acknowledgement for students and researchers to use. I have recently updated the acknowledgement I use in my work and have included it below.

I acknowledge that I used a customised version of ChatGPT 4 (OpenAI, https://chat.openai.com/) during the preparation of this manuscript to help me refine my phrasing and reduce my word count. The output from ChatGPT 4 was then significantly adapted to reflect my own style and voice, and was further revised during the peer review process. I take full responsibility for the final content of the manuscript.

My final tip is – be brave! Go and explore what is out there and see what you can achieve! You may be surprised how much it revolutionises your practices, freeing up your brain space to do really cool and creative higher-order thinking!

Questions to ponder

How does the use of generative AI impact traditional roles and responsibilities within academia and research?

Discuss the implications of defining a ‘collaborative’ relationship between humans and generative AI in research and educational contexts. What are the potential benefits and pitfalls?

How might the reliance on generative AI for tasks like grammar checking and data analysis affect the skill development of students and researchers?

The blog post mentions generative AI’s ability to detect implicit meanings in data analysis. Can you think of specific instances or types of research where this capability would be particularly valuable or problematic?

Reflect on the potential environmental impact of using generative AI as noted in the blog. What measures can be taken to mitigate this impact while still benefiting from AI technologies in academic and research practices?

Moving beyond binaries in research: weaving the tapestry of participants’ experiences

Dr Lynette Pretorius

In today’s data-driven world, there is a lot of talk about making decisions based on so-called objective data. For example, schools and universities use information about the mix of students and staff to shape how they teach and run things. Information such as age, where people live, how much schooling they have had, or their income is collected to help make these so-called “informed” decisions. But here’s the problem – we sometimes forget that the people collecting these forms of data and those making these decisions have their own biases. Decisions reflect the majority view, which means that other experiences are often sidelined.

We need to understand that different parts of our backgrounds interact and affect the way we experience the world, often in very different ways. This is what is termed intersectionality. Using intersectionality as a lens helps us to recognise that we cannot look at parts of someone’s identity in isolation. We need to see the whole person and how all parts of their identity come together, influencing their experiences and the way the world sees and treats them. It is like saying, “To understand the whole story, you can’t just read one page. You need to read the entire book.”

This highlights that researchers and decision-makers need to work to improve processes for data collection and analysis to better reflect the diversity of people’s experiences. So, why is it so crucial to bring diverse perspectives into the research mix?

  • Firstly, past research has not done a great job of representing everyone. Surveys can often be pretty narrow, missing out on the complete picture of who participants are, which means we are not getting the full story on how to solve problems for everyone.
  • Secondly, by embracing diversity in research, we stand up for fairness and social justice. Imagine surveys that only see concepts in black and white, leaving out people who do not fit neatly into specific boxes. We are missing out on understanding different experiences and perspectives, which can make our research richer and more meaningful.
  • Finally, acknowledging a wide range of experiences helps us dig deeper into our findings, giving us a clearer view of what is really going on in our context. This approach challenges us to think beyond the usual categories and consider the complex ways people identify themselves. By doing so, we can better reflect society’s diversity and push for changes that make society more inclusive and equitable for everyone.

Despite some improvements in how we collect data in recent years, there is still a long way to go. We need to ensure that our research methods allow people to share the full spectrum of their identities, respecting the richness of their experiences. It is all about giving everyone a voice and ensuring research serves us all, not just a privileged few.

The thing is, when we are exploring human experiences, we must embrace the messiness and all the different parts of who people are. But, sadly, many times, research just clumps people into simple categories, missing out on their full stories. This approach does not just overlook the richness of their identities; it can also make them feel like their voices do not matter, with their real-life experiences either ignored or questioned.

In my recent paper, I propose a new way of collecting data about research participants. I propose that we say, “Hey, let people tell us about themselves in their own words” rather than asking specific questions that limit their responses. To do this, I argue that researchers should include a question where people can share their own diversity stories when they fill out surveys. Why? Because it does justice to their experiences and knowledge.

I have seen firsthand how rich and deep data can be when people share their stories this way, especially when this data is combined with other open-ended research questions. My paper makes the case for letting people have a say in how they are represented in research. It is about giving them the power to share their identities in their own words. The main findings from my study include:

  • When I asked open questions, the replies were eye-opening: I decided to ask people to tell me about themselves in their own way, without the usual checkboxes. And wow, did I get a treasure trove of responses! Some people went the traditional route, but others shared stories and parts of their identities I would never have captured with a simple tick box. This approach really highlighted how everyone has their own unique blend of experiences and backgrounds.
  • Self-written diversity statements are gold mines of insights: One aspect that was particularly unique in my study is that I asked people to jot down their thoughts on what makes them, well, them. I did this by asking them to write their own diversity statement. The depth of what I got back was incredible – from personal tales of grappling with ableism to rich descriptions of cultural heritage and everything in between. It is like these self-written snippets opened a window into the real lives and challenges people face, way beyond what any standard survey could capture.
  • Weaving stories together to highlight the tapestry of people’s lived experiences: One of the most exciting findings from my study is how I used all these different bits of info from the surveys and weaved them into what I call holistic introductory stories. Imagine taking a bit from here and a snippet from there to stitch together a complete narrative about someone. It is like getting a full-colour, 3D picture of a person rather than a flat, 2D sketch. This way, I was not just seeing bits and pieces of someone’s identities, but I was developing a better understanding of how all those bits fit together to make my participants who they are.

My findings highlight the importance of encouraging epistemic justice in our research practices. What is epistemic justice, you may ask? Epistemic justice is about fairness: it ensures that everyone’s voice and knowledge are equally respected, no matter where they come from or how they express themselves. It is about ensuring all perspectives are considered, especially those often ignored or undervalued. To really do justice to everyone’s knowledge, we have to be open to different, even incomplete ways of understanding. That is why I am using open questions and these stories to give everyone a platform to share their experiences. I believe stories are how we make sense of our world. As has been highlighted by other researchers, stories help us understand not just the surface-level stuff people share but the deeper, sometimes hidden layers of their lives.

My focus has been on getting people to write down their stories because there is power in writing. But now that this study is finished, I am thinking, why stop there? There are so many other ways to share and understand each other’s experiences. So, looking ahead, I am keen on mixing things up even more, using all sorts of creative methods to make sure everyone feels seen and heard, especially those who have been left out of the conversation for too long.

Questions to ponder

If you had to write a short diversity statement about yourself, what would you say?

How does the incorporation of self-written diversity statements and open-ended questions in surveys challenge traditional methods of data collection in qualitative research?

The paper advocates for epistemic justice through methodological innovations in order to reduce biases and inequalities in research. How does giving participants the agency to define themselves challenge or change the researcher’s role?

The research outlines a more artistic way of understanding participants through holistic introductory stories. What advantages does this creative approach offer, and what challenges might it pose in traditional research environments?

Theoretical and conceptual frameworks in research

Dr Lynette Pretorius

Frameworks in research play a crucial role in shaping the direction of a research project. They serve as the foundation upon which studies are built and analysed, offering a lens through which researchers can interpret their findings. However, they are also a source of confusion for researchers, so in this blog post I explain the differences between theoretical and conceptual frameworks, why they are important in research, and how researchers can choose a framework for their study.

Theoretical and conceptual frameworks: what are the differences?

The most common confusion I see when talking to researchers about frameworks is that they do not understand the difference between a theoretical and a conceptual framework. So what are the differences?

  • A researcher uses a theoretical framework when they are using a pre-established theory for their study. A theoretical framework is founded upon a well-established theory, focuses on explaining phenomena using this existing theory, and is chosen based on examining which theories are commonly used in the field.
  • A researcher uses a conceptual framework when they select concepts from several theories and construct them into a framework for a specific study. A conceptual framework is constructed by the researcher, aims to explore relationships between different concepts, and is developed by synthesising different ideas from the field together.

It is important to note that you should choose either a theoretical or a conceptual framework – usually, one project does not have both. This is because they serve the same purpose – they provide the vision for your study. Having two competing frameworks, therefore, would confuse your study. The only time you would have two frameworks is if your project had two distinct parts that were unrelated to each other.

Why do frameworks matter in research?

So why is it important to have a framework in your study? Frameworks are important because they provide a foundation and vision for your project. They help the researcher make sense of the chaos of the data by revealing the theories or concepts that will be used to analyse and explain findings. Frameworks influence:

  • The research design: Theoretical and conceptual frameworks help shape the research design, including the choice of data collection and analysis methods. By setting out the key concepts and their relationships, they provide a roadmap for conducting the study.
  • The data analysis: They offer a lens for interpreting the data collected during the research. This is particularly crucial in qualitative data analysis, where the researcher seeks to understand complex phenomena through the perspectives of participants.
  • The rigour and relevance of a study: The use of a theoretical or conceptual framework enhances the rigour of a study by ensuring that its findings can be linked back to broader discussions in the field.
  • How research builds on existing knowledge: Grounding research in a theoretical or conceptual framework ensures that new studies build upon or challenge the existing body of knowledge.

Frameworks also reveal the beliefs and assumptions of the researcher. This necessitates that the researcher carefully articulates their framework, explaining key concepts within the framework and why they are important for the particular study. Concepts can often mean different things to different people. For example, if you mention to your friend that you just adopted a dog but do not provide further details, the image of your dog in your head could be quite different from what your friend imagines. In the same way, research concepts are often multifaceted and require careful explanation in a research project.

Identifying frameworks in the literature

Identifying frameworks in published articles is a crucial skill for researchers, enabling them to understand a study’s theoretical or conceptual underpinnings, replicate research designs, or apply similar frameworks in their work. However, this is not always easy, and sometimes frameworks are not explicitly stated. So how do you identify a framework in a published study?

  • Look for explicit mentions: This is the easiest strategy, provided, of course, that the framework is mentioned explicitly. Search for sentences such as “This study is grounded in X theory” or “We employ Y’s conceptual model as a framework.”
  • Consider the research questions: The nature of the research questions or hypotheses can often indicate the type of framework being used.
  • Trace the references: Frameworks can be identified by looking at the citations of foundational works within a published source. Checking these references can provide a deeper understanding of the framework that the authors have used. Below, I have listed some of the most common frameworks used in social sciences, particularly educational research. Searching for words such as these will help you find the framework in a study.
    • Bourdieu’s Theory of Practice
    • Foucault’s theories on power, discourse, and knowledge
    • Bronfenbrenner’s Ecological Systems Theory
    • Cultural-historical theory (Vygotsky) or Cultural-historical activity theory (Vygotsky and Leontiev)
    • Theories of learning (e.g., community of practice, experiential learning, transformative learning)
    • Performativity and/or embodiment (e.g., gender performativity)
    • Intersectionality and/or epistemic (in)justice
    • New literacies theory or digital literacies
    • TPACK (Technological pedagogical content knowledge)

So how do you choose a framework for your study?

Choosing the right framework for your research is a critical decision that shapes your study’s direction, coherence, and trustworthiness. Whether you opt for a theoretical or conceptual framework depends on your research objectives, the nature of your study, and the existing literature in your field.

  • The first step in choosing a framework is to clarify your research objectives: What are you trying to discover, explore, explain, or analyse? Your research objectives will guide your choice of framework by highlighting the key concepts, theories, or models that are most relevant to addressing your research questions.
  • The second step is to conduct an extensive literature review to identify potential frameworks that have been applied to similar research problems. By examining how these frameworks have been used in past studies, you can identify gaps (i.e., look for areas where existing frameworks may not fully address your research problem or where there is room for further exploration), assess applicability (i.e., determine how well the frameworks align with your research objectives and the specific context of your study), and draw inspiration (i.e., even if existing frameworks do not fit your needs exactly, they can provide a starting point for developing your own).
  • The third step is to assess the theoretical alignment by ensuring the framework’s underlying assumptions align with your research philosophy. This involves thinking about the philosophical assumptions (e.g., your ontology, epistemology, and axiology) that underlie different frameworks and whether they are compatible with your view of the world.
  • The fourth step is to consider the methodological fit of each potential framework. The framework you choose should complement your research methodology, as it will influence the design, data collection, and analysis methods you will use.
  • The fifth step is to explore the flexibility of the framework to your specific research context. Some frameworks may be too rigid, limiting your ability to explore the nuances of your research problem fully. Others might offer the adaptability needed to address unique aspects of your study.
  • Finally, the sixth step is to reflect on the potential originality and contribution of the framework. Your chosen framework should enable you to make a meaningful contribution to your field. This might involve applying an existing framework in a new context, combining frameworks innovatively, or developing a new conceptual framework based on your findings.

Remember that context matters. Just because a theory is common in your field does not mean it is appropriate for your particular context. You might have to modify it by integrating theories or concepts relevant to your context, ensuring it more accurately reflects the cultural, political, religious, or additional aspects of your environment.

  • Conduct a context analysis: Understand the broader and specific conditions in which your research is situated.
  • Engage with local knowledge: Leverage insights from local experts and contextual literature to understand the context-specific nuances.
  • Evaluate frameworks for context sensitivity: Assess potential frameworks for their ability to accommodate or adapt to the specificities of your research context.
  • Consider the adaptability of the framework: Determine if the framework can be modified to better align with contextual demands without losing its theoretical integrity.

Incorporating context into the decision-making process for selecting a research framework underscores a commitment to conducting thoughtful, rigorous, and impactful research that resonates with its intended audience and addresses the complexities of the real world.

Some final thoughts…

The adoption of a well-defined framework in research is not merely a formal requirement but a cornerstone for ensuring the rigour, coherence, and depth of your study. A framework acts as a compass, guiding researchers through the complexities of their inquiry, from formulating research questions to interpreting findings. It provides a structured lens through which data can be examined, theories tested, and new insights uncovered, ensuring that research is not conducted in a vacuum but is connected to, and informed by, the wider scholarly community. Moreover, selecting and applying a framework necessitates a deep engagement with the subject matter and the methodological approaches best suited to exploring it. This engagement fosters a more nuanced understanding of the research problem and enriches the research design, ultimately enhancing the quality and impact of the findings.

Questions to ponder

What challenges might arise from applying a framework in a context different from where it was originally developed, and how can these be addressed?

Can a study be considered rigorous without a clearly defined theoretical framework? Why or why not?

Demystifying research paradigms

Dr Lynette Pretorius

Let’s talk about one of the most complex parts of research – understanding the philosophical underpinnings of your worldview and how this shapes the way your research is done. This is called a research paradigm and is one of the areas I get the most frequent questions about from graduate research students. The popularity of my research paradigm videos also shows that this is clearly a concept that is difficult to understand and hard to articulate. So, in this blog post, I demystify what a research paradigm is, explore the most common research paradigms, demonstrate how a research paradigm influences the research process, and explain why it is important to articulate your paradigm in your research.

What is a research paradigm?

A research paradigm explains what you believe reality is and how you think knowledge can be understood. The technical terms used to articulate a research paradigm are ontology, epistemology, and axiology.

Ontology refers to the nature of reality. When people see reality as objective, it is called a realist ontology. When people see reality as subjective, it is called a relativist ontology. In other words, you need to explain whether you think reality is a singular, objective entity waiting to be discovered, or if it is a construct of individual experiences and perceptions.

Epistemology is concerned with the nature of knowledge and how we come to know something. There are three common epistemologies. First, reality can be objectively known if you can measure it with the right tools. Second, reality can only be partially known because it is created in the minds of individual people. Third, reality is always changing, so it is impossible to fully comprehend at any given moment.

Axiology deals with what is valued in research. It prompts us to consider how our own perspectives and values can shape the ways our research is designed and our findings are interpreted. In other words, you need to explain whether you think research should be value-free or whether you think research is value-laden. Do you think that steps should be taken to remove the researcher’s influence from the research, or is subjectivity inevitable?

What are the most common research paradigms?

I preface this discussion by noting that these are certainly not the only research paradigms that exist. Rather, these are just the ones that are most commonly found in research.

Positivism: Positivism is grounded in the belief that reality is singular and can be objectively observed and quantified. From this viewpoint, the researcher is independent of the subject of research, and knowledge is generated through empirical observation and measurement. Positivists believe that knowledge should be derived from empirical experience and logical reasoning, so they traditionally assert that research can and should be value-free.

Post-positivism: As the name suggests, this paradigm is closely related to positivism, having the same ontology and epistemology. However, post-positivists have a different axiology, arguing that complete objectivity is unattainable and that research cannot be entirely free from values, as choices about what to study, how to study it, and how findings are presented inevitably reflect value judgements. Consequently, they take steps to minimise the researcher’s influence in the way they design their studies by being reflexive about their own biases and the potential value implications of their work.

Constructivism: Constructivists believe that reality is subjective and that this reality can only be partially known because it is constructed in the minds of individual people. Constructivism emphasises that individuals’ realities are constructed through social interactions, so they seek to understand how individuals construct their realities and how these realities are influenced by social, cultural, and historical contexts. Constructivists embrace the value-laden nature of research, emphasising reflexivity and ethical responsibility, and valuing the co-construction of knowledge with participants. This paradigm acknowledges the subjective and interpretive nature of knowledge creation, viewing it as a strength that enriches understanding and contributes to the depth and authenticity of research findings. Constructivist researchers are also often concerned with the social implications of their research, seeking not only to understand the world but also to contribute positively towards social change.

Interpretivism: Interpretivists also believe reality is subjective and that reality can only be partially known because it is constructed in the minds of individual people. Interpretivism emphasises that individuals are the experts in their own experiences and focuses on understanding the subjective meanings and interpretations that individuals attach to their experiences. While interpretivism recognises the influence of researcher and participant values, it leans more towards understanding and interpreting the values and meanings inherent within the study context. Consequently, they believe that the researcher’s role is to immerse themselves in the participants’ experiences to gain a deep, empathetic understanding of their perspectives.

Pragmatism: Pragmatism is a flexible paradigm that suggests research methods should be chosen based on what best addresses the research question. Pragmatists believe reality is constantly changing or debated, so this paradigm does not commit to a single reality or method of inquiry. This allows for a mix of qualitative and quantitative methods. Pragmatism encourages an integrative approach to values, blending both subjective and objective perspectives. It recognises the importance of researchers’ and participants’ values and experiences in shaping research processes and outcomes. Yet, it also values objective evidence and the outcomes of research practices. This balance reflects pragmatism’s broader philosophical stance that truth and value are found in the practical implications of research and how it can address real-world problems.

Critical realism: Critical realists believe that an objective reality exists, but that it is layered, consisting of different levels that are not always observable. They believe our understanding of the world is subjective because it is influenced by societal and historical contexts and power relations. Critical realism is deeply intertwined with a commitment to understanding the world as it is, while also striving to transform it for the better. Critical realists believe that researchers bring their own beliefs, biases, and values to the research process, and urge researchers to critically reflect on and disclose their values and how these might impact their research.

I also recognise the importance of acknowledging that historically, some perspectives and bodies of knowledge have been overlooked or marginalised in research. This extends to the development and recognition of research paradigms as well. As such, I encourage you to also explore other diverse paradigms, especially those that may be considered ‘non-traditional’ or originate from non-Western contexts. Embracing a broader spectrum of paradigms can enrich our research approaches and outcomes, offering a wider range of valuable insights and perspectives.

How do research paradigms influence the research process?

Understanding and choosing a research paradigm is crucial because it shapes the entire research process—from framing the research question to deciding on methods, and interpreting the results. In essence, the research paradigm not only guides the technical aspects of study design, data collection, and analysis but also influences how researchers perceive and interact with their subject matter. It shapes the ethical considerations, the relationship with participants, and the ultimate goals of the research. By understanding and consciously choosing a research paradigm, researchers ensure that their work is coherent, meaningful, and aligned with their philosophical perspectives on reality, knowledge, and values. Let’s delve into how these philosophical underpinnings can shape each stage of the research process with some examples.

Formulating research questions

The paradigm a researcher adopts fundamentally shapes the nature of the questions they ask. For instance, in positivism, research questions are often framed to test hypotheses or measure variables, seeking to establish causal relationships or correlations. Constructivist, interpretivist, and critical realist paradigms, however, encourage questions that explore the meanings, experiences, and perceptions of participants, aiming to understand the complexity of human behaviour in particular contexts. These questions are then further refined based on the focus of the paradigm, leading to studies that focus on either societal change or a deeper understanding of lived experience. Pragmatism allows for flexibility, meaning research questions can be designed to solve specific problems, often leading to mixed-methods approaches.

Choosing methodologies and methods

The choice of methodologies and methods is also deeply influenced by the researcher’s paradigmatic stance. Positivists might lean towards structured methodologies that mimic the scientific method, employing quantitative measures such as surveys or experiments. Constructivists and interpretivists prefer qualitative methods like in-depth interviews, participant observation, or thematic analysis, which allow for a deep dive into participants’ lived experiences and the meanings they ascribe to them. Pragmatists select methods based on what best answers the research question, often combining qualitative and quantitative approaches in a pragmatic, problem-solving orientation. Critical realists might use a combination of qualitative and quantitative methods to explore both the surface phenomena and the underlying social or structural mechanisms contributing to these phenomena.

Data collection, analysis, and presentation of findings

Paradigms also dictate how data are collected and analysed, influencing the interaction with participants and the interpretation of data. Positivist approaches tend to emphasise objectivity and detachment, aiming for a neutral stance that minimises the researcher’s influence on the data. Findings are usually presented as objective truths or confirmed hypotheses, using statistical analysis to support conclusions. In contrast, constructivist and interpretivist paradigms view the researcher as a key instrument in the research process, engaging in reflective practices to interpret nuanced meanings within data, acknowledging their subjective influence. Findings are usually presented as insights into the participants’ perspectives, often narratively or through rich, descriptive accounts, emphasising the subjective nature of knowledge. The pragmatist paradigm focuses on practical outcomes, guiding the selection of data collection and analysis techniques that are most likely to produce actionable insights. Results are usually presented in a way that highlights their practical implications. The critical realist paradigm involves analysing data to identify not just what is happening but why it is happening, looking for patterns that reveal the influence of hidden structures or power relations. Findings are usually presented in a way which highlights or critiques the underlying societal mechanisms leading to a particular phenomenon.

Why is it important to explain your research paradigm?

Explaining the research paradigm in research publications or theses is pivotal for several reasons, each contributing to the clarity, integrity, and impact of the research. This explanation serves not just as a methodological formality but as a fundamental component that illuminates the researcher’s philosophical stance, guiding principles, and the rationale behind methodological choices. Here’s why delineating this paradigmatic foundation is crucial:

  • Enhancing transparency and trustworthiness: Detailing the research paradigm enhances the transparency of the study, allowing readers to understand the foundational assumptions that underpin the research. This clarity helps peers, reviewers, and readers assess the rigour or trustworthiness of the research process and its findings because it allows for an assessment of how well the research design, methodology, and methods align with the paradigmatic assumptions.
  • Justifying methodological choices: As noted above, the research paradigm informs the researcher’s methodological choices. By explaining this in their publications or theses, researchers provide a rationale for their methodological decisions. This explanation helps readers understand why certain methods were chosen over others and how these choices are consistent with the researcher’s philosophical stance, enhancing the coherence and integrity of the study.
  • Aiding interpretation of findings: Understanding the paradigmatic perspective of a study aids in the interpretation of its findings. Different paradigms can lead to different interpretations of the same phenomenon. By stating their research paradigm, researchers help readers place the findings within the appropriate conceptual and philosophical framework, ensuring that interpretations are made with an understanding of the underlying assumptions.
  • Contributing to disciplinary dialogue: The explicit articulation of research paradigms contributes to ongoing disciplinary dialogue and debate about the nature of reality and knowledge construction in a field. It encourages reflexivity among researchers, prompting them to consider and articulate their own positions. This reflexivity enriches the field by fostering a diversity of perspectives and approaches, leading to a more robust and dynamic academic discourse.
  • Helping others build upon your study: For those looking to replicate or build upon a study, understanding its paradigmatic underpinnings is essential. It ensures that subsequent research is grounded in the same philosophical assumptions, or consciously diverges from them, maintaining a level of methodological consistency and rigour across studies.

So, in summary, understanding and explaining your research paradigm in publications or theses is not merely a theoretical exercise but a practical necessity that underpins the trustworthiness, clarity, and impact of research. It serves as a bridge connecting the philosophical foundations of a study with its practical execution and interpretation, enriching both the research process and its contributions to knowledge.

To cite this article, you can reference the pre-print. The correct APA citation for the article is: Pretorius, L. (2024). Demystifying research paradigms: Navigating ontology, epistemology, and axiology in research. [Pre-print]. Monash University Research Repository. https://doi.org/10.26180/27013843

Questions to ponder

Should research be value-free or is it value-laden?

In what ways do your ontology, epistemology, and axiology shape the way you conduct research?

Consider a recent study you’ve encountered. What research paradigm does it seem to align with, and how did this influence its findings and conclusions?

Accurately assessing students’ use of generative AI acknowledgements in assignments

Dr Lynette Pretorius

Lecturers play a pivotal role in shaping the learning of their students. In a metric-focused university environment, this learning necessitates the assessment of students’ learning throughout their educational journey. Assessing assignments not only gauges the understanding of the subject matter but also evaluates the development of critical academic skills. These skills, such as research, analysis, and effective communication, are integral components of a well-rounded higher education.

Assessing transferable skills

The skills assessed must align with what is taught within the unit. When students perceive a direct connection between what is taught and what is assessed, their engagement and comprehension are heightened. Consequently, if we are going to assess students not only on their content knowledge but also their transferable skills, we need to provide them with the tools to succeed.

I believe that transferable skills enhance the applicability of students’ disciplinary knowledge. For years, I have worked to develop a suite of academic skills resources which are now embedded across the units within our Faculty. These resources include a suite of just-in-time online videos freely available on YouTube, as well as two written booklets (Doing Assignments and Writing Theses) that explicitly teach academic communication skills.

Over the years, I have also worked to improve the assignment rubrics within our Faculty to more accurately assess the skills that are taught within individual units. For example, I have worked with another staff member to develop templates for staff to provide feedback on academic language and literacy. We designed these templates to allow assessors to label specific mistakes for students and to provide students with referrals to appropriate support. Giving students specific labels for their errors helps them to see where they can improve. The referrals to appropriate resources and support help the student improve their skills, encouraging self-directed learning.

It is important to note that we usually recommend that these skills account for no more than 10% of the total grade for the assignment. This is because the main focus of the assessment should be the content – students should be able to clearly demonstrate an understanding and critical evaluation of the topic of the assignment. However, the students’ use of academic language and academic literacy can enhance the quality of their disciplinary content, or it can hinder the meaning of their ideas. As such, our templates allow for 5% to be attached to academic language (specifically, the elements listed in blue here) and 5% to academic literacy (the elements listed in purple here).

Assessing AI literacy

In the era of rapid technological advancement, the rise of generative artificial intelligence (AI) introduces a new dimension to education. As students are increasingly exposed to AI tools, it becomes imperative for educators to teach them how to use these tools effectively. As I have highlighted in another blog post, I firmly believe that it is our role as educators to teach students how to collaborate effectively with AI and evaluate the results obtained, a concept termed AI literacy. I see AI literacy as an essential transferable skill.

A key component of using AI ethically is acknowledging it effectively in written work. It is important to highlight, though, that if we are going to require students to demonstrate AI literacy, including the accurate acknowledgement of the use of AI tools, we need to teach it in our units and also assess it accurately. In my units, I teach students that an acknowledgement should include the name of the AI used, a description of how it was used (including the prompt used where appropriate), and an explanation of how the information was then adapted in the final version of the document. I also provide students with the example below so that they can see how an acknowledgement is used in practice.

I acknowledge that I used ChatGPT (OpenAI, https://chat.openai.com/) in this assignment to improve my written expression quality and generate summaries of the six articles I used in the annotated bibliography section. The summary prompt provided to the AI was “Write a 350 word abstract for this article. Include a summary of the topic of the article, the methodology used, the key findings, and the overall conclusion”. I adapted the summaries it produced to reflect my argument, style, and voice. I also adapted the summaries to better link with my topic under investigation. When I wanted the AI to help me improve my writing clarity, I pasted my written text and asked it to rewrite my work “in less words”, “in a more academic style”, or “using shorter sentences”. I also asked it to explain why it made the changes it did so that I could use this collaborative discussion as a learning process to improve my academic communication skills. I take responsibility for the final version of the text in my assignment.

Clear guidelines within rubrics should also be established to evaluate the ethical and responsible use of AI, reinforcing the importance of acknowledging the role of these tools in academic work. Given my previous work developing rubric templates for staff, I have recently developed a template for the acknowledgement of AI use within assignments. In my template, this criterion falls within the “academic literacy” section of the rubric I mentioned earlier. I have included the rubric criteria below so that other educators can use it as needed. The grading scale is the one used in my university, but it can be easily adapted to other grading scales.

  • High Distinction (80-100%): There was an excellent explanation about how generative AI software was used. This included, where appropriate, explicit details about the software used, the prompts provided to the AI, and explanations as to how the output of the generative AI was adapted for use within the assignment.  
  • Distinction (70-79%): There was a clear explanation about how generative AI software was used. This included, where appropriate, sufficient detail about the software used, the prompts provided to the AI, and explanations as to how the output of the generative AI was adapted for use within the assignment. 
  • Credit (60-69%): There was a reasonably clear explanation about how generative AI software was used. The explanation lacked sufficient details regarding one of the following: the software used, the prompts provided to the AI, and/or explanations as to how the output of the generative AI was adapted for use within the assignment.
  • Pass (50-59%): There was some explanation about how generative AI software was used. The explanation lacked several of the following: the software used, the prompts provided to the AI and/or explanations as to how the output of the generative AI was adapted for use within the assignment.
  • Fail (Below 50%): There was little or no explanation about how generative AI software was used.

Questions to ponder

The blog post outlines a rubric for assessing the acknowledgement and use of generative AI in student assignments. Considering the varying levels of detail and adaptation of AI-generated content required for different grades, what are your thoughts on the fairness and effectiveness of this approach?

How might this rubric evolve as generative AI technology becomes more advanced and commonplace in educational environments?

Autoethnography: What is it and how do you do it?

Dr Lynette Pretorius

Autoethnography has become an increasingly popular research methodology, particularly within the humanities and social sciences. I use it regularly because its emphasis on personal experiences, reflexivity, and storytelling allows for a deeper exploration of complex experiences and societies. So what is autoethnography? The name comes from its three core aspects: self (auto), culture (ethno), and writing (graphy). Literally, then, autoethnography is an approach to research and writing that seeks to describe and systematically analyse personal experience to better understand cultural experiences.

As I’ve noted in a recent book chapter, there are several reasons why I find autoethnography a particularly compelling research methodology.

  • First, autoethnography allows researchers to purposely explore personal experiences to understand a particular culture or society. Researching personal experiences is becoming increasingly important as individuals’ stories are recognised as valuable sources of knowledge. Personal experiences can provide unique insights into social, cultural, and historical contexts and highlight the complexities of human experience.
  • Second, autoethnography considers insider knowledge as a valuable source of data. Researchers are the participants in their own studies and the stories which are told often explore transformative experiences for the researcher, frequently taking the form of epiphanies that significantly influenced the author’s worldview. I believe that this allows researchers to provide more meaningful insights into complex phenomena compared with more traditional objective research methods.
  • Third, autoethnography empowers researchers as it allows them to embrace emotionality and uncertainty and highlight topics that may be considered hidden or taboo. Autoethnography allows researchers to connect with their own emotions and experiences and, in doing so, find their voice. It allows them to challenge the narratives that often dominate research and to tell their own stories in their own words.
  • And finally, autoethnography is a more accessible type of research for those outside of academia because it is written from personal experience in easy-to-understand language. The autoethnographer also does not merely narrate an experience for their audience. Instead, they try to engage the audience in the conversation so that the audience can understand experiences which may be different from their own. By sharing their own experiences, they can create a space for others to share theirs, fostering a more equitable and inclusive research process.

It is important to note that autoethnography does have some challenges. Some researchers critique it as a methodology because it is not scientific enough, while others say it is not artistic enough. I believe, however, that these critiques fail to see the value of combining both science and art when exploring complex phenomena. By weaving the two together, autoethnographers can advocate for social change to address perceived societal wrongs.

So how do you actually do autoethnography in your research project? It is important to remember that there is no one way to do autoethnography. What is most important is to develop systematic data collection and analysis methods that help you deeply explore your personal experience.

First, it is important to have a series of reflective prompts to help you explore your experiences. I use a simple prompt strategy, which gives very open initial prompts to allow me to delve into my personal experiences, analyse my emotions and thoughts during that period, reflect on how I feel now, and determine how my previous experiences have impacted my current philosophy or practice.

  • Describing the experience
    • What happened?
    • What did I do?
  • Analysing the experience
    • What was I thinking and feeling?
    • How do I feel now?
    • What went well?
    • What could I have done better?
  • Creating a step-by-step plan
    • How will this information be useful in the future?
    • How can I modify my practice in the future?
    • What help do I need?

Second, you need a way to record your reflections. I like to start my reflection journey by voice or video recording a conversation I have with myself, thinking about my past experiences. I start by thinking about what happened, what I did, what I was thinking and feeling at the time, and how I feel now. Then, I explore how I think the experience has informed my way of being now. How has it shaped my future practice? Why? After finishing the recording, I transcribe it and use this transcription as my initial data.
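
If you would rather automate the transcription step, speech-to-text tools can produce a first-pass transcript that you then check and correct by hand. The sketch below shows one possible approach using the open-source openai-whisper package; the package choice, model size, and file name are assumptions for illustration (a tool such as Otter AI achieves the same thing).

```python
# A minimal sketch of automated first-pass transcription using the
# open-source openai-whisper package (pip install openai-whisper).
# The model size and audio file name are illustrative assumptions.
import whisper

model = whisper.load_model("base")           # smaller models are faster; larger ones are more accurate
result = model.transcribe("reflection.m4a")  # path to your recorded reflection

# Save the draft transcript so it can be corrected and used as initial data.
with open("reflection_transcript.txt", "w", encoding="utf-8") as f:
    f.write(result["text"])
```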

Third, you can also consult relevant artefacts as part of your autoethnography, such as photos and documents from the past, to help you think and reflect more deeply about an experience. You can also consult other important figures, such as family or friends from your past, to help you see the experience from multiple viewpoints. It is important to note that you will require ethics approval for your study if you use photos with other people in them, or if the significant people you consult are potentially identifiable in your final project.

Fourth, you use the writing process itself as part of your reflection. As you write, you further reflect on what you were thinking and feeling during the experiences you are describing. These reflections can remind you of other experiences that shaped your understanding of that experience. This continuous writing and re-writing of your story becomes a further data source that allows you to engage more deeply with your experiences. Remember to lean into your story’s more emotive and vulnerable parts, as this will allow you to uncover hidden perspectives in your understanding more effectively. Ask yourself: why did this experience make me feel this way? What does it tell me about the context I found myself in?

Finally, as you write about your experiences, you should incorporate your theoretical analysis. Start looking for key concepts you have identified in your reflections and how they link to your overarching research problem. Which theoretical concepts do they reflect? What can others learn from your experience?

In conclusion, good quality autoethnography explores personal experiences to illuminate a particular cultural context. Autoethnography is not merely telling your story; it is analysing your story to uncover previously ignored perspectives within a particular research context.

Questions to ponder

Autoethnography emphasises the value of personal experiences in understanding cultural contexts. Reflect on an experience from your life that could offer unique insights into a particular cultural or societal aspect. How could analysing this personal experience using autoethnography enhance our understanding of broader cultural phenomena?

What are your thoughts on balancing the scientific rigour and artistic expression in autoethnography? Can you think of any specific situations or contexts where this methodology might be particularly beneficial or problematic?