When conducting a process evaluation of implementation in the context of a clinical effectiveness trial (a Hybrid Type 1 design), the qualitative data can be used to inform the findings of the effectiveness trial.
Thus, an effectiveness trial that finds substantial variation might purposefully select participants using a broader strategy, such as sampling for disconfirming cases, to account for that variation.
Alternatively, a narrow strategy may be used to account for the lack of variation. In either instance, the choice of a purposeful sampling strategy is determined by the outcomes of the quantitative analysis that is based on a probability sampling strategy. In Hybrid Type 2 and Type 3 designs where the implementation process is given equal or greater priority than the effectiveness trial, the purposeful sampling strategy must be first and foremost consistent with the aims of the implementation study, which may be to understand variation, central tendencies, or both.
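As a hedged illustration of how quantitative results can steer the choice of qualitative cases, the sketch below flags "disconfirming" candidates as those farthest from the overall trend. The record fields, outcome scores, and distance-from-the-mean rule are illustrative assumptions, not methods taken from the article.

```python
import statistics

# Hypothetical trial records: participant id and a quantitative outcome score.
records = [
    {"id": "p01", "outcome": 12.0}, {"id": "p02", "outcome": 48.5},
    {"id": "p03", "outcome": 30.1}, {"id": "p04", "outcome": 29.4},
    {"id": "p05", "outcome": 55.2}, {"id": "p06", "outcome": 31.0},
]

def disconfirming_cases(records, k=2):
    """Select the k cases farthest from the overall trend (here, the mean),
    i.e. candidates for disconfirming-case interviews."""
    mean = statistics.mean(r["outcome"] for r in records)
    return sorted(records, key=lambda r: abs(r["outcome"] - mean), reverse=True)[:k]

print([r["id"] for r in disconfirming_cases(records)])  # → ['p01', 'p05']
```

In a real study the selection rule would follow from the effectiveness analysis itself (e.g., residuals from a fitted model) rather than raw distance from a mean.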
In all three instances, the sampling strategy employed for the implementation study may vary based on the priority assigned to that study relative to the effectiveness trial. For instance, purposeful sampling for a Hybrid Type 1 design may give higher priority to variation and comparison to understand the parameters of implementation processes or context as a contribution to an understanding of effectiveness outcomes.
In contrast, purposeful sampling for a Hybrid Type 3 design may give higher priority to similarity and depth to understand the core features of successful outcomes only.
Finally, multistage sampling strategies may be more consistent with innovations in experimental designs representing alternatives to the classic randomized controlled trial that offer greater feasibility, acceptability, and external validity in community-based settings.
Optimal designs represent one such alternative to the classic RCT and are addressed in detail by Duan and colleagues in this issue. Like purposeful sampling, optimal designs are intended to capture information-rich cases, usually identified as individuals most likely to benefit from the experimental intervention. The goal here is not to identify the typical or average patient, but patients who represent one end of the variation, as in an extreme case, intensity, or criterion sampling strategy.
Hence, a multistage sampling strategy might begin by sampling for variation at the first stage and then sample for homogeneity within a specific parameter of that variation.
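This two-stage movement from breadth to focus can be sketched in a short illustration; the site names, fidelity scores, and cut points are hypothetical, and in a real study the second-stage band would come from the first-stage analysis rather than fixed numbers.

```python
# Illustrative two-stage purposeful sampling: sample for variation first,
# then for homogeneity within one parameter of that variation.
sites = [
    {"site": "A", "fidelity": 0.21}, {"site": "B", "fidelity": 0.35},
    {"site": "C", "fidelity": 0.52}, {"site": "D", "fidelity": 0.58},
    {"site": "E", "fidelity": 0.77}, {"site": "F", "fidelity": 0.91},
]

def stage_one_variation(sites):
    """Stage 1: maximum variation -- take the lowest, a middle, and the
    highest case on the dimension of interest."""
    ranked = sorted(sites, key=lambda s: s["fidelity"])
    return [ranked[0], ranked[len(ranked) // 2], ranked[-1]]

def stage_two_homogeneity(sites, lo, hi):
    """Stage 2: a homogeneous follow-up sample within one band of the
    variation identified at stage 1."""
    return [s for s in sites if lo <= s["fidelity"] <= hi]

broad = stage_one_variation(sites)               # sites A, D, F
narrow = stage_two_homogeneity(sites, 0.5, 0.6)  # the mid-fidelity band: C, D
```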
Another alternative to the classic RCT is the family of adaptive designs proposed by Brown and colleagues (Brown et al.). Adaptive designs are a sequence of trials that draw on the results of existing studies to determine the next stage of evaluation research.
They use cumulative knowledge of current treatment successes or failures to change qualities of the ongoing trial. An adaptive intervention modifies what an individual subject (or community, for a group-based trial) receives in response to his or her preferences or initial responses to an intervention. Consistent with multistage sampling in qualitative research, the design is somewhat iterative in nature, in the sense that information gained from analysis of data collected at the first stage influences the nature of the data collected, and the way they are collected, at subsequent stages (Denzin). Furthermore, many of these adaptive designs may benefit from a multistage purposeful sampling strategy at early phases of the clinical trial to identify the range of variation and core characteristics of study participants.
This information can then be used for the purposes of identifying optimal dose of treatment, limiting sample size, randomizing participants into different enrollment procedures, determining who should be eligible for random assignment as in the optimal design to maximize treatment adherence and minimize dropout, or identifying incentives and motives that may be used to encourage participation in the trial itself.
In this instance, the first stage of sampling may approximate the strategy of sampling politically important cases (Patton), followed by other sampling strategies intended to maximize variation in stakeholder opinions or experience. On the basis of this review, the following recommendations are offered for the use of purposeful sampling in mixed method implementation research. First, many mixed methods studies in health services research and implementation science do not clearly identify or provide a rationale for the sampling procedure for either the quantitative or qualitative components of the study (Wisdom et al.); such a rationale should be made explicit.
Second, use of a single stage strategy for purposeful sampling for qualitative portions of a mixed methods implementation study should adhere to the same general principles that govern all forms of sampling, qualitative or quantitative.
Kemper and colleagues identify seven such principles. Third, the field of implementation research is itself at a stage where qualitative methods are intended primarily to explore the barriers and facilitators of EBP implementation and to develop new conceptual models of implementation process and outcomes.
This is especially important in state implementation research, where fiscal necessities are driving policy reforms for which knowledge about EBP implementation barriers and facilitators is urgently needed. Thus, a multistage strategy for purposeful sampling should begin with a broader view, emphasizing variation or dispersion, and then move to a narrower view, emphasizing similarity or central tendencies.
Such a strategy is necessary for the task of finding the optimal balance between internal and external validity. Fourth, if we assume that probability sampling will be the preferred strategy for the quantitative components of most implementation research, the selection of a single or multistage purposeful sampling strategy should be based, in part, on how it relates to the probability sample: for answering the same question, a strategy emphasizing variation and dispersion is preferred; for answering related questions, a strategy emphasizing similarity and central tendencies is preferred.
Fifth, it should be kept in mind that all sampling procedures, whether purposeful or probability, are designed to capture elements of both similarity and differences, of both centrality and dispersion, because both elements are essential to the task of generating new knowledge through the processes of comparison and contrast. Selecting a strategy that gives emphasis to one does not mean that it cannot be used for the other.
Having said that, our analysis has assumed at least some degree of concordance between the breadth of understanding associated with quantitative probability sampling and purposeful sampling strategies that emphasize variation, on the one hand, and between depth of understanding and purposeful sampling strategies that emphasize similarity, on the other. While there may be some merit to that assumption, depth of understanding requires an understanding of both variation and common elements.
Finally, it should also be kept in mind that quantitative data can be generated from a purposeful sampling strategy and qualitative data can be generated from a probability sampling strategy. Each set of data is suited to a specific objective and each must adhere to a specific set of assumptions and requirements.
Nevertheless, the promise of mixed methods, like the promise of implementation science, lies in its ability to move beyond the confines of existing methodological approaches and develop innovative solutions to important and complex problems.
For states engaged in EBP implementation, the need for these solutions is urgent.

The publisher's final edited version of this article is available at Adm Policy Ment Health.
Abstract
Purposeful sampling is widely used in qualitative research for the identification and selection of information-rich cases related to the phenomenon of interest.

Principles of Purposeful Sampling
Purposeful sampling is a technique widely used in qualitative research for the identification and selection of information-rich cases for the most effective use of limited resources (Patton).

Types of purposeful sampling designs
There exist numerous purposeful sampling designs.
Table 1. Purposeful sampling strategies in implementation research (strategy; objective; example; considerations).

Emphasis on similarity

Criterion-i. Objective: to identify and select all cases that meet some predetermined criterion of importance. Example: selection of consultant trainers and program leaders at study sites to assess facilitators and barriers to EBP implementation (Marshall et al.). Considerations: can be used to identify cases from standardized questionnaires for in-depth follow-up (Patton).

Criterion-e. Objective: to identify and select all cases that exceed or fall outside a specified criterion. Example: selection of directors of agencies that failed to move to the next stage of implementation within the expected period of time.

Typical case. Objective: to illustrate or highlight what is typical, normal, or average. Example: a child undergoing treatment for trauma (Hoagwood et al.). Considerations: often used for selecting focus group participants.

Snowball. Objective: to identify cases of interest by sampling people who know people that generally have similar characteristics who, in turn, know people, also with similar characteristics.

Emphasis on variation

Intensity. Objective: same as extreme case sampling but with less emphasis on extremes. Example: clinicians providing usual care and clinicians who dropped out of a study prior to consent, to contrast with clinicians who provided the intervention under investigation.

Maximum variation. Objective: to identify important shared patterns that cut across cases and derive their significance from having emerged out of heterogeneity. Example: sampling mental health services programs in urban and rural areas in different parts of the state (north, central, south) to capture maximum variation in location (Bachman et al.). Considerations: can be used to document unique or diverse variations that have emerged in adapting to different conditions (Patton).

Critical case. Considerations: depends on recognition of the key dimensions that make for a critical case; particularly important when resources may limit study to only one site (program, community, population) (Patton).

Theory-based. Objective: to find manifestations of a theoretical construct so as to elaborate and examine the construct and its variations. Example: sampling therapists based on academic training to understand the impact of CBT versus psychodynamic training in graduate school on acceptance of EBPs. Considerations: sample on the basis of potential manifestation or representation of important theoretical constructs; sample on the basis of emerging concepts, with the aim of exploring the dimensional range or varied conditions along which the properties of concepts vary.

Confirming and disconfirming case. Objective: to confirm the importance and meaning of possible patterns and check the viability of emergent findings with new data and additional cases. Example: once trends are identified, deliberately seeking examples that are counter to the trend. Considerations: usually employed in later phases of data collection; confirmatory cases are additional examples that fit already emergent patterns and add richness, depth, and credibility; disconfirming cases are a source of rival interpretations as well as a means for placing boundaries around confirmed findings.

Stratified purposeful. Objective: to capture major variations rather than to identify a common core, although the latter may emerge in the analysis. Example: combining typical case sampling with maximum variation sampling by taking a stratified purposeful sample of above average, average, and below average cases of health care expenditures for a particular problem. Considerations: represents less than the full maximum variation sample, but more than simple typical case sampling.

Purposeful random. Objective: to increase the credibility of results. Example: selecting for interviews a random sample of providers to describe experiences with EBP implementation. Considerations: not as representative of the population as a probability random sample.

Nonspecific emphasis

Opportunistic or emergent. Objective: to take advantage of circumstances, events, and opportunities for additional data collection as they arise. Considerations: usually employed when it is impossible to identify the sample, or the population from which a sample should be drawn, at the outset of a study; used primarily in conducting ethnographic fieldwork.

Convenience. Objective: to collect information from participants who are easily accessible to the researcher. Example: recruiting providers attending a staff meeting for study participation. Considerations: although commonly used, it is neither purposeful nor strategic.

Challenges to use of purposeful sampling
Despite its wide use, there are numerous challenges in identifying and applying the appropriate purposeful sampling strategy in any study.
Purposeful Sampling in Implementation Research

Characteristics of Implementation Research
In implementation research, quantitative and qualitative methods often play important roles, either simultaneously or sequentially, for the purposes of answering the same question through convergence of results from different sources; answering related questions in a complementary fashion; using one set of methods to expand or explain the results obtained from use of the other set of methods; using one set of methods to develop questionnaires or conceptual models that inform the use of the other set; and using one set of methods to identify the sample for analysis using the other set of methods (Palinkas et al.).
Table 2. Purposeful sampling strategies and mixed method designs in implementation research. Refers to sequential structure; refers to simultaneous structure.

Summary
On the basis of this review, the following recommendations are offered for the use of purposeful sampling in mixed method implementation research.

References
Advancing a conceptual model of evidence-based practice implementation in child welfare.
Implementation of evidence-based practice in child welfare: Care, Health and Development.
Research methods in anthropology: Qualitative and quantitative approaches.
Bloom HS, Michalopoulos C. When is the story in the subgroups? Strategies for interpreting and reporting intervention effects for subgroups.
Dynamic wait-listed designs for randomized trials: New designs for prevention of youth suicide.
Intent-to-treat analyses for integrating the perspectives of person, place, and time. Drug and Alcohol Dependence.
Adaptive designs for randomized trials in public health. Annual Review of Public Health.

Stigma theory posits that individuals are socially marked or stigmatized by negative cultural evaluations because of visible differences or deformities, as defined by the community.
Patterns of avoidance and denial of the disabled mark the socially conditioned feelings of revulsion, fear, or contagion. Personal experiences of low self-esteem result when negative messages are internalized by, for example, persons with visible impairments, or the elderly in an ageist setting. Management of social stigma by individuals and family is as much a focus as is management of impairments.
Stigma is related significantly to compliance with prescribed adaptive devices (Zola; Luborsky). A graphic case of this phenomenon is that of polio survivors who were homebound due to dependence on massive bedside artificial ventilators. With the recent advent of portable ventilators, polio survivors gained the opportunity to become mobile and travel outside the home, but they did not adopt the new equipment, because the new independence was far outweighed by the public stigma they experienced (Kaufert and Locker). A final point is that sampling for meaning can also be examined in terms of sampling within the data collected.
For example, the entire corpus of materials and observations with informants needs to be examined in the discovery and interpretive processes aimed at describing relevant units for analyses and dimensions of meaning.
This is in contrast to reading the texts to describe and confirm a finding without then systematically rereading the texts for sections that may provide alternative or contradictory interpretations.
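As a hedged illustration of sampling within the data, the sketch below partitions a corpus of passages into confirming, disconfirming, and unclassified sets, so that potentially contradictory sections are deliberately surfaced rather than skipped. The keyword lists are invented stand-ins for a real coding scheme, not a validated instrument.

```python
# Illustrative keyword sets for a coded theme (assumptions, not a real scheme).
CONFIRMING = {"helpful", "improved", "benefit"}
DISCONFIRMING = {"but", "however", "worse", "refused"}

def partition_passages(passages):
    """Return (confirming, disconfirming, unclassified) passage lists so the
    whole corpus -- not just a convenient subset -- is examined.
    Disconfirming markers take priority over confirming ones."""
    confirming, disconfirming, other = [], [], []
    for p in passages:
        words = set(p.lower().split())
        if words & DISCONFIRMING:
            disconfirming.append(p)
        elif words & CONFIRMING:
            confirming.append(p)
        else:
            other.append(p)
    return confirming, disconfirming, other

passages = [
    "The device was helpful at home",
    "It improved mobility but felt stigmatizing outside",
    "No strong feelings either way",
]
confirming, disconfirming, other = partition_passages(passages)
```

Note the design choice: the second passage contains both a confirming word and a contrast marker, and is routed to the disconfirming pile so it is reread rather than counted as support.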
As discussed earlier, probability sampling techniques cannot be used for qualitative research by definition, because the members of the universe to be sampled are not known a priori, so it is not possible to draw elements for study in proportion to an as yet unknown distribution in the universe sampled. A review of the few qualitative research publications that treat sampling issues at greater length reveals a consensus among these authors: the paramount importance they assign to theory to guide the design and selection of samples (Platt). These are briefly reviewed as follows.
First, convenience or opportunistic sampling is a technique that uses an open period of recruitment that continues until a set number of subjects, events, or institutions are enrolled. Here, selection is based on a first-come, first-served basis. This approach is used in studies drawing on predefined populations such as participants in support groups or medical clinics.
Second, purposive sampling is a practice where subjects are intentionally selected to represent some explicit predefined traits or conditions.
This is analogous to stratified samples in probability-based approaches. The goal here is to provide for relatively equal numbers of different elements or people to enable exploration and description of the conditions and meanings occurring within each of the study conditions.
The objective, however, is not to determine prevalence, incidence, or causes. Third, snowballing or word-of-mouth techniques make use of participants as referral sources.
Participants recommend others they know who may be eligible. Fourth, quota sampling is a method for selecting numbers of subjects to represent the conditions to be studied rather than to represent the proportion of people in the universe. The goal of quota sampling is to assure inclusion of people who may be underrepresented by convenience or purposeful sampling techniques.
Fifth, case study (Ragin and Becker; Patton) samples select a single individual, institution, or event as the total universe. A variant is the key-informant approach (Spradley), or intensity sampling (Patton), where a subject who is expert in the topic of study serves to provide expert information on the specialized topic.
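The quota technique described above can be sketched as follows; the conditions, quota sizes, and arrival order of candidates are hypothetical.

```python
# Hedged sketch of quota sampling: fill fixed per-condition quotas rather
# than mirroring population proportions.
def quota_sample(candidates, quotas):
    """Take candidates in arrival order until each condition's quota is full."""
    filled = {cond: [] for cond in quotas}
    for person, cond in candidates:
        if cond in filled and len(filled[cond]) < quotas[cond]:
            filled[cond].append(person)
    return filled

candidates = [
    ("p1", "caregiver"), ("p2", "caregiver"), ("p3", "retiree"),
    ("p4", "caregiver"), ("p5", "retiree"), ("p6", "retiree"),
]
sample = quota_sample(candidates, {"caregiver": 2, "retiree": 2})
# caregivers p1, p2 and retirees p3, p5 fill the quotas; p4 and p6 are declined
```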
When qualitative perspectives are sought as part of clinical or survey studies, the purposive, quota, or case study sampling techniques are generally the most useful. "How many subjects?" is the perennial question.
There is seldom a simple answer to the question of sample or cell size in qualitative research. There is no single formula or criterion to use. The question of sample size cannot be determined by prior knowledge of effect sizes, numbers of variables, or numbers of analyses—these will be reported as findings. Sample sizes in qualitative studies can only be set by reference to the specific aims and the methods of study, not in the abstract.
The answer only emerges within a framework of clearly stated aims, methods, and goals and is conditioned by the availability of staff and economic resources.
In practice, from 12 to 26 people in each study cell seems just about right to most authors. In general, it should be noted that Americans have a propensity to define bigger as better and smaller as inferior. Quantitative researchers, in common with the general population, question such small sample sizes because they are habituated to opinion polls or epidemiology surveys based on hundreds or thousands of subjects. However, sample sizes of less than 10 are common in many quantitative clinical and medical studies where statistical power analyses are provided based on the existence of very large effect sizes for the experimental versus control conditions.
Other considerations in evaluating sample sizes are the resources, time, and reporting requirements. In anthropological field research, a customary formula is that of the one to seven. Thus, in studies that use more than one interviewer, the ability to collect data also increases the burden for analyses. An outstanding volume exploring the logic, contributions, and dilemmas of case study research (Ragin and Becker) reports that survey researchers resort to case examples to explain ambiguities in their data, whereas qualitative researchers reach for descriptive statistics when they do not have a clear explanation for their observations.
Again, the choice of sample size and group design is guided by the qualitative goal of describing the nature and contents of cultural, social, and personal values and experiences within specific conditions or circumstances, rather than of determining incidence and prevalence. In the tradition of informant-based and of participatory research, it is assumed that all members of a community can provide useful information about the values, beliefs, or practices in question.
Experts provide detailed, specialized information, whereas nonexperts provide information about daily life. In some cases, the choice is obvious, dictated by the topic of study; for example, childless elderly, retirees, people with chronic diseases or new disabilities. In other cases, it is less obvious, as in studies of disease that require insights from sufferers but also from people not suffering, to gain an understanding for comparison with the experiences and personal meanings of similar people without the condition.
Comparisons can be either on a group basis or matched more closely on a one-to-one basis for many traits e. However, given the labor-intensive nature of qualitative work, sometimes the rationale for including control groups of people who do not have the experiences is not justifiable. Currently, when constructing samples for single study groups, qualitative research appears to be about equally split in terms of seeking homogeneity or diversity.
There is little debate or attention to these contrasting approaches. For example, some argue that it is more important to represent a wide range of different types of people and experiences in order to represent the similarities and diversity in human experience, beliefs, and conditions e.
In contrast, others select informants to be relatively homogeneous on several characteristics to strengthen comparability within the sample as an aid to identifying similarities and diversity. To review, the authors suggest that explicit objective criteria to use for evaluating qualitative research designs do exist, but many of these focus on different issues and aspects of the research process, in comparison to issues for quantitative studies.
This article has discussed the guiding principles, features, and practices of sampling in qualitative research. The guiding rationale is that of the discovery of the insider's view of cultural and personal meanings and experience. Major features of sampling in qualitative research concern the issues of identifying the scope of the universe for sampling and the discovery of valid units for analyses. The practices of sampling, in comparison to quantitative research, are rooted in the application of multiple conceptual perspectives and interpretive stances to data collection and analyses that allow the development and evaluation of a multitude of meanings and experiences.
This article noted that sampling concerns are widespread in American culture rather than an esoteric, specialized concern of scientific endeavors (Luborsky and Sankar). Core scientific research principles are also basic cultural ideals (Luborsky). Knowledge about the rudimentary principles of research sampling is widespread outside of the research laboratory, particularly with the relatively new popularity of economic, political, and community polls as a staple of news reporting and political process in democratic governance.
Core questions about the size, sources, and features of participants are applied to construct research populations, courtroom juries, and districts to serve as electoral universes for politicians. The cultural contexts and popular notions about sampling and sample size have an impact on scientific judgments. It is important to acknowledge the presence and influence of generalized social sensibilities or awareness about sampling issues.
Such notions may have less direct impact on research in fields with long-established and formalized criteria and procedures for determining sample size and composition.
The generalized social notions may come to exert a greater influence as one moves across the spectrum of knowledge-building strategies to more qualitative and humanistic approaches.
Even though such studies also have a long history of clearly articulated traditions of formal critique, the authors suggested that some of the rancor between qualitative and quantitative approaches is rooted in deeper cultural tensions. Prototypic questions posed to qualitative research in interdisciplinary settings derive both from the application of frameworks drawn from other disciplines' approaches to sampling and from those of the reviewers as persons socialized into the community where the study is conceived and conducted.
Such concerns may be irrelevant or even counterproductive. The guiding logic of qualitative research, by design, generally prevents it from being able to fulfill the assumptions underlying statistical power analyses of research designs. The discovery-oriented goals, use of meanings as units of analyses, and interpretive methods of qualitative research dictate that the exact factors, dimensions, and distribution of phenomena identified as important for analyses may not always be specified prior to data analyses activities.
These emerge from the data analyses and are one of the major contributions of qualitative study. No standardized scales or tests exist yet to identify and describe new arenas of cultural, social, or personal meanings. Meaning does not conform to normative distributions by known factors. No probability models exist that would enable prediction of distributions of meanings needed to perform statistical power analyses.
Qualitative studies, however, can and should be judged in terms of how well they meet the explicit goals and purposes relevant to such research. The authors have suggested that the concept of qualitative clarity be developed to guide evaluations of sampling as an analog to the concept of statistical power. Qualitative clarity refers to principles that are relevant to the concerns of this type of research.
That is, the adequacy of the strength and flexibility of the analytic tools used to develop knowledge during discovery procedures and interpretation can be evaluated even if the factors to be measured cannot be specified. The term clarity conveys the aim of making explicit, for open discussion, the details of how the sample was assembled, the theoretical assumptions and the pragmatic constraints that influenced the sampling process.
These are briefly described next. In the absence of standardized measures for assessing meaning, the analogous qualitative research tools are theory and discovery processes. Strong and well-developed theoretical preparation is necessary to provide multiple and alternative interpretations of the data. The relative degree of theoretical development in a research proposal or manuscript is readily apparent in the text, for example, in terms of extended descriptions of different schools of thought and possible multiple contrasting of interpretive explanations for phenomena at hand.
In brief, the authors argue that given the stated goal of sampling for meaning, qualitative research can be evaluated to assess if it has adequate numbers of conceptual perspectives that will enable the study to identify a variety of meanings and to critique multiple rich interpretations of the meanings.
Sampling within the data is another important design feature. The discovery of meaning should also include sampling within the data collected. The entire set of qualitative materials should be examined rather than selectively read after identifying certain parts of the text to describe and confirm a finding without reading for sections that may provide alternative or contradictory interpretations. As a second component of qualitative clarity, sensitivity to context refers to the contextual dimensions shaping the meanings studied.
It also refers to the historical settings of the scientific concepts used to frame the research questions and the methods. Researchers need to be continually attentive to examining the meanings and categories discovered for elements from the researchers' own cultural and personal backgrounds. The first of these contexts is familiar to gerontologists. Another, more implicit contextual aspect to examine as part of the qualitative clarity analysis is evidence of a critical view of the methods and theories introduced by the investigators.
Because discovery of the insiders' perspective on cultural and personal meanings is a goal of qualitative study, it is important to keep an eye to biases derived from the intrusion of the researcher's own scientific categories.
Qualitative research requires a critical stance as to both the kinds of information and the meanings discovered, and to the analytic categories guiding the interpretations.
One example is recent work that illustrates how traditional gerontological constructs for data collection and analyses do not correspond to the ways individuals themselves interpret their own activities and conditions, or label their identities. A second example is the growing awareness of the extent to which past research tended to define problems of disability or depression narrowly in terms of the individual's ability, or failure, to adjust, without giving adequate attention to the societal-level sources of the individual's distress (Cohen and Sokolovsky). Thus researchers need to demonstrate an awareness of how the particular questions guiding qualitative research, and the methods and styles of analyses, are influenced by the cultural and historical settings of the research (Luborsky and Sankar), in order to keep clear whose meanings are being reported.
To conclude, our outline for the concept of qualitative clarity, which is intended to serve as the qualitatively appropriate analog to statistical power, is offered to gerontologists as a summary of the main points that need to be considered when evaluating samples for qualitative research.
The descriptions of qualitative sampling in this article are meant to extend the discussion and to encourage the continued development of more explicit methods for qualitative research. Ongoing support for the second author from the National Institute on Aging is also gratefully acknowledged. Federal and foundation grants support his studies of sociocultural values and personal meanings in early and late adulthood, and how these relate to mental and physical health, and to disability and rehabilitation processes.
He also consults and teaches on these topics. His gerontological research interests include social relations of the elderly, childlessness in later life, and the home environments of old people.
National Center for Biotechnology Information , U. Author manuscript; available in PMC Nov 3. The publisher's final edited version of this article is available at Res Aging. See other articles in PMC that cite the published article. Abstract In gerontology the most recognized and elaborate discourse about sampling is generally thought to be in quantitative research associated with survey research and medical research. Contributions, Logic and Issues in Qualitative Sampling Major contributions Attention to sampling issues has usually been at the heart of anthropology and of qualitative research since their inception.
Ideals and Techniques of Qualitative Sampling

The preceding discussion highlighted the need first to identify the ideal or goal for sampling and second to examine the techniques and dilemmas for achieving that ideal.
Core ideals include the determination of the scope of the universe for study and the identification of appropriate analytic units when sampling for meaning.

Defining the universe: This is simultaneously one of qualitative research's greatest contributions and one of its greatest stumbling blocks to wider acceptance in the scientific community.
Sampling for meaning: The logic and premises of qualitative sampling for meaning are incompletely understood in gerontology.

Techniques for selecting a sample: As discussed earlier, probability sampling techniques cannot, by definition, be used for qualitative research, because the members of the universe to be sampled are not known a priori; it is therefore not possible to draw elements for study in proportion to an as yet unknown distribution in the universe sampled.
Who and who not? Homogeneity or diversity: Currently, when constructing samples for single study groups, qualitative research appears to be about equally split between seeking homogeneity and seeking diversity.

Summary and Reformulation for Practice

To review, the authors suggest that explicit objective criteria for evaluating qualitative research designs do exist, but many of these focus on different issues and aspects of the research process than do the criteria for quantitative studies.
Qualitative Clarity as an Analog to Statistical Power

The guiding logic of qualitative research, by design, generally prevents it from fulfilling the assumptions underlying statistical power analyses of research designs.

Rich and diverse theoretical grounding: In the absence of standardized measures for assessing meaning, the analogous qualitative research tools are theory and discovery processes.
Sensitivity to contexts: As a second component of qualitative clarity, sensitivity to context refers to the contextual dimensions shaping the meanings studied.
References

- Who Cares for the Elderly? Temple University Press, Philadelphia.
- A Path Not Taken.
- Unraveling the Mystery of Health.
- A Continuity Theory of Aging.
- Continuity After a Stroke: Family and Social Networks.
- Cohen, Carl; Sokolovsky, Jay. Old Men of the Bowery.
- Journal of Life History and Narrative.
- The Self, the Third, and Desire.
- Psychosocial Theories of the Self.
- Shweder, R.; LeVine, R., editors. Essays on Mind, Self, and Emotion.

These are discussed in greater detail in the Qualitative Ready module covering data types.
In order to collect these types of data for a study, a target population, community, or study area must first be identified. It is not possible for researchers to collect data from everyone in a study area or community; instead, the researcher must gather data from a sample, or subset, of the population under study.
In quantitative research, the goal is random sampling that ensures the sample is representative of the entire population, so that the results can be generalized to that population.
The goal of qualitative research, by contrast, is to provide in-depth understanding, and it therefore targets a specific group, type of individual, event, or process. To accomplish this goal, qualitative researchers focus on criterion-based sampling techniques to reach their target group.
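The contrast between probability sampling and criterion-based selection can be sketched in a few lines of code. This is a minimal illustration, not a research procedure; the participant pool, field names, and the caregiving criterion are all invented for the example.

```python
import random

# Hypothetical participant pool; ids, ages, and fields are invented.
population = [
    {"id": 1, "age": 72, "years_caregiving": 0},
    {"id": 2, "age": 68, "years_caregiving": 5},
    {"id": 3, "age": 81, "years_caregiving": 12},
    {"id": 4, "age": 75, "years_caregiving": 2},
    {"id": 5, "age": 79, "years_caregiving": 9},
    {"id": 6, "age": 83, "years_caregiving": 20},
]

# Quantitative logic: a simple random sample, aiming for representativeness.
random.seed(0)                 # fixed seed so the draw is reproducible
random_sample = random.sample(population, k=3)

# Qualitative logic: criterion-based (purposive) selection, aiming for
# information-rich cases -- here, experienced caregivers rather than
# whoever the draw happens to pick.
def meets_criterion(p):
    return p["years_caregiving"] >= 5

purposive_sample = [p for p in population if meets_criterion(p)]

print([p["id"] for p in purposive_sample])  # -> [2, 3, 5, 6]
```

The random draw changes with the seed; the purposive sample is fixed by the criterion, which is the point of criterion-based techniques.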
There are three main types of qualitative sampling; the descriptions that follow explain the reasons for choosing a particular method. A note on sample size: once a sampling method has been determined, the researcher must consider the sample size. In qualitative studies, sampling typically continues until information redundancy, or saturation, occurs.
This is the point at which no new information is emerging from the data. It is therefore critical in qualitative studies that data collection and analysis occur simultaneously, so that the researcher knows when the saturation point has been reached.
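The stopping rule implied here can be made concrete: code each interview as it arrives, track which codes are new, and stop once several consecutive interviews add nothing. The sketch below is an assumption-laden toy model (the interview codes and the two-interview patience threshold are invented), not a prescription for real saturation judgments.

```python
# Coded interviews, collected in order; each set holds the themes
# identified in one interview. All data here are invented.
interviews = [
    {"stigma", "cost"},
    {"cost", "transport"},
    {"stigma", "family"},
    {"cost"},
    {"family", "stigma"},
    {"cost", "transport"},
]

def sample_until_saturated(interviews, patience=2):
    """Analyse interviews in order; stop when `patience` consecutive
    interviews yield no new codes (a crude proxy for saturation)."""
    seen = set()
    no_new = 0
    for n, codes in enumerate(interviews, start=1):
        new_codes = codes - seen
        seen |= codes
        no_new = 0 if new_codes else no_new + 1
        if no_new >= patience:      # information redundancy reached
            return n, seen
    return len(interviews), seen    # pool exhausted before saturation

n, codes = sample_until_saturated(interviews)
print(n, sorted(codes))  # -> 5 ['cost', 'family', 'stigma', 'transport']
```

Because analysis runs inside the collection loop, the researcher learns the stopping point as data come in, which is exactly why simultaneous collection and analysis matter.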
It is important to understand that the saturation point may occur prematurely if the researcher has a narrow sampling frame, a skewed analysis of the data, or poor methodology. Because of this, the researcher must carefully frame the research question, select an appropriate target group, set aside his or her own biases, and analyze data continuously and thoroughly throughout the process to ensure the validity of the data collected.
Qualitative Research Methods - A Data Collector's Field Guide - This comprehensive, detailed guide describes various types of sampling techniques and provides examples of each, as well as pros and cons.
Qualitative Sampling Methods

The following module describes common methods for collecting qualitative data.

Learning Objectives:
- Describe common types of qualitative sampling methodology.
- Explain the methods typically used in qualitative data collection.
- Describe how sample size is determined.
Three of the most common sampling methods used in qualitative research are purposive sampling, quota sampling, and snowball sampling. A review of the few qualitative research publications that treat sampling issues at greater length (e.g., Depoy and Gitlin; Miles and Huberman; Morse; Ragin and Becker) identifies five major types of nonprobability sampling techniques for qualitative research.
What are the most appropriate sampling methods in qualitative research? One aim in the literature has been to clarify the distinction between purposeful sampling and theoretical sampling in nursing research; theoretical sampling belongs to a method of analysing qualitative data in order to produce a theory.