{"id":200,"date":"2018-12-28T13:32:08","date_gmt":"2018-12-28T13:32:08","guid":{"rendered":"https:\/\/pressbooks.ulib.csuohio.edu\/swk627\/chapter\/module-3-chapter-4\/"},"modified":"2024-08-19T12:19:11","modified_gmt":"2024-08-19T12:19:11","slug":"module-3-chapter-4","status":"publish","type":"chapter","link":"https:\/\/pressbooks.ulib.csuohio.edu\/swk627\/chapter\/module-3-chapter-4\/","title":{"rendered":"Module 3 Chapter 4: Participant Recruitment, Retention, and Sampling"},"content":{"raw":"Until this point, most of our discussions have treated intervention and evaluation research as being very similar. One major way in which they differ relates to participants and the pool or population from which they are selected. Recall that intervention research aims to draw conclusions and develop generalizations about a population based on what is learned with a representative sample. Thus, intervention research is strongest when the participants are systematically drawn from the population of interest. The aim of evaluation research is different: the knowledge gained is to be used to inform the practice and\/or program being evaluated, not generalized to the broader population. As a result, evaluation research typically engages participants receiving those services. While the principles of systematic and random sampling might apply in both scenarios, the pool or population of potential participants is different, and the generalizability of results derived from the sample of participants differs, as well. The principles learned in our prior course about sampling and participant recruitment to understand social work problems and diverse populations apply to social work intervention and evaluation research for understanding interventions. 
Because much of evaluation and intervention research is longitudinal in nature, participant retention, as well as participant recruitment, is of major concern.\n\nIn this chapter you:\n<ul>\n \t<li>review features of sample size and filling a study design, and learn how they apply to effect sizes and research for understanding social work interventions;<\/li>\n \t<li>review features of participant recruitment and retention, and learn how they apply to research for understanding social work interventions;<\/li>\n \t<li>learn about random assignment of participants to study design conditions in intervention and evaluation research.<\/li>\n<\/ul>\n<h2><strong>Sample Size Reviewed &amp; Expanded<\/strong><\/h2>\nSample size is not a significant issue if interventions are being evaluated from a qualitative approach where the aim is depth of data rather than generalizability from a sample to a population. Sample size in qualitative studies is generally kept relatively small as a means of keeping the volume of data to be analyzed manageable.\n\nSample size does matter in quantitative approaches where investigators will generalize from the sample to a population. In our prior course you learned how sample size matters in terms of the sample\u2019s ability to represent the population. Remember the green M&amp;Ms example where the small samples were quite varied compared to each other and to the true population, but the larger (combined) samples varied less? Sample size issues remain important in intervention research where generalizations are to be made to the population based on the sample. This might be an issue, as well, in evaluation research where there are many participants involved in the intervention being evaluated and the investigators choose to work with data from a sample rather than participants representing the entire population served. 
In either case, intervention or evaluation research, investigators need to determine what constitutes an adequately sized sample. Two issues need to be addressed: numbers needed to fulfill the requirements of a study design and sample size needed to detect meaningful effects.\n<h3><strong>Filling a quantitative study design: <\/strong><\/h3>\nYou may recall from our prior course how a study design relates to the number of study participants that need to be recruited (and retained). The study design might include two or more independent variables (the ones being manipulated or compared). To ensure sufficient numbers of participants for analyzing these variables, investigators need to be sure that participants of the designated types are recruited and retained so that their outcome (dependent variable) data can be analyzed. Here is an example of the numbers of each type needed to fulfill a 2 X 3 design. This example has neighborhoods as the unit of analysis; individual participants are embedded within those neighborhood units. This example is relevant to research for understanding social work interventions at a meso or macro level.\n\nImagine a study concerning the impact of a community empowerment intervention designed to help members of local communities improve health outcomes by reducing exposure to air and water environmental toxins and contaminants inside and outside of their homes. Investigators are concerned that the intervention might differently impact very low-income, low-income, and moderate-income neighborhoods. They have chosen to conduct a random assignment study where \u00bd of the neighborhoods receive the intervention immediately and the other \u00bd receive it one year later (delayed intervention with the no intervention period serving as the control). They have determined that for the purposes of their analysis plan, they need a minimum of 12 neighborhoods in each condition. 
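The cell arithmetic behind filling a design like this can be sketched in a few lines of Python (a minimal illustration using only the numbers from this example; the variable names are my own):

```python
# Minimal sketch of the cell arithmetic for the 2 x 3 design described above.
# Conditions and income strata come from the example; all numbers are from the text.

conditions = ["immediate intervention", "delayed intervention"]       # 2 levels
income_levels = ["very low income", "low income", "moderate income"]  # 3 levels
min_per_cell = 12                                                     # neighborhoods per cell

cells = len(conditions) * len(income_levels)     # 6 cells in a 2 x 3 design
total_neighborhoods = cells * min_per_cell       # 6 * 12 = 72 neighborhoods

# Each income level appears in both conditions:
per_income_level = len(conditions) * min_per_cell  # 24 neighborhoods per income level

# 15-20 households are engaged within each neighborhood:
households_low = 15 * total_neighborhoods   # 1080 households
households_high = 20 * total_neighborhoods  # 1440 households

print(cells, total_neighborhoods, per_income_level, households_low, households_high)
```

Counting the cells first and only then multiplying by the per-cell minimum generalizes to any factorial design: a 3 X 4 design needing 10 units per cell would require 12 cells times 10, or 120 units.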
The sampling design would look like this:\n\n<img class=\"aligncenter size-full wp-image-188\" src=\"https:\/\/pressbooks.ulib.csuohio.edu\/projectmanagement2ndedition\/wp-content\/uploads\/sites\/185\/2018\/12\/Screen-Shot-2018-12-28-at-10.18.54-AM.png\" alt=\"Neighborhood income status sampling design\" width=\"602\" height=\"120\">\n\nFilling the study design cells for this 2 X 3 design requires a minimum of 72 neighborhoods (6 cells times 12 units each=72 units total). These would be recruited as: 24 very low income, 24 low income, and 24 moderate income neighborhoods. Within each neighborhood, they hope to engage 15-20 households, meaning that they will engage with between 1080 and 1440 households (15 x 72=1080, 20 x 72=1440).\n<h3><strong>Sample size related to effect size.<\/strong><\/h3>\nPreviously in this chapter you read about differences that are clinically meaningful. Intervention researchers are often asked to consider an analogous problem: what is the size of the effect detected in relation to the intervention? While an observed difference might be statistically significant, it is important to know whether the size of that difference is meaningful. <strong><em>Effect size<\/em><\/strong> information helps interpret statistical findings related to interventions\u2014their power to effect meaningful amounts of change in the desired outcomes. The size or magnitude of the effect detected is determined statistically, and sample size is one part of the formula for computing effect sizes. As a result, the size of a study\u2019s sample has an impact on the size of effect that can be detected.\n\nHere, the logic can sometimes become a bit confusing. 
This diagram helps explain the relationship between effect size and sample size without getting into the detailed statistics involved.\n\n<img class=\"aligncenter size-full wp-image-189\" src=\"https:\/\/pressbooks.ulib.csuohio.edu\/projectmanagement2ndedition\/wp-content\/uploads\/sites\/185\/2024\/08\/Screen-Shot-2018-12-28-at-10.19.51-AM.png\" alt=\"diagram illustrating the relationship between Sample Size and Effect Size\" width=\"680\" height=\"420\">\n\nIn other words, if an investigator wishes to be as sure as possible of detecting an effect that exists, a larger sample size will help. A small sample leaves the question unanswered if no effect is detected (see the small\/small peach-colored box): the study will have to be repeated to determine whether there really is no effect (see the large\/large pink-colored box) or whether the intervention actually has an effect, large or small.\n\nTo refresh your skills in working with Excel and gain practice with the topic of sample size related to effect size, we have an exercise to visit in the Excel workbook.\n<div class=\"textbox textbox--learning-objectives\"><header class=\"textbox__header\">\n<p class=\"textbox__title\">Interactive Excel Workbook Activities<\/p>\n\n<\/header>\n<div class=\"textbox__content\">\n\nComplete the following Workbook Activity:\n<ul>\n \t<li><a href=\"https:\/\/ohiostate.pressbooks.pub\/swk3401workbook\/chapter\/swk-3402-3-4-1-exercise-on-sample-and-effect-sizes\/\">SWK 3402.3-4.1 Exercise on Sample and Effect Sizes<\/a><\/li>\n<\/ul>\n<\/div>\n<\/div>\n<h3><strong>Participant diversity. <\/strong><\/h3>\nIn our prior course we also examined issues related to participant diversity and heterogeneity in study samples. 
Intervention and evaluation researchers working with samples need to consider the extent to which those samples are representative of the diversity and heterogeneity present in the population to which the intervention research will be generalized or the population of those served by the program being evaluated. Ideally, the strategies for <strong><em>random selection<\/em><\/strong> that you learned in our prior course (probability selection) aid in the effort to achieve representativeness. Convenience sampling and snowball sampling strategies, as you may recall, tend to tap into <strong><em>homophily<\/em><\/strong> and create overly homogeneous samples.\n\nDiversity among participants in qualitative studies is not intended to be representative of the diversity occurring in the population. Instead, heterogeneity among participants is intended to provide breadth in the perspectives shared, as a complement to depth in the data collected.\n<h2><strong>Participant Recruitment &amp; Retention Reviewed &amp; Expanded<\/strong><\/h2>\nIn our prior course you learned about the distinction between <strong><em>participant recruitment<\/em><\/strong> (individuals entering into a study) and <strong><em>participant retention <\/em><\/strong>(individuals remaining engaged with a longitudinal study over time). Intervention and evaluation research investigators need to have a strong recruitment and retention plan in place to ensure that the study can be successfully completed. Without the right numbers and types of study participants, even the best-designed studies are doomed to failure: over 80% of clinical trials in medicine fail due to under-enrollment of qualified participants (Thomson CenterWatch, 2006)! How well this figure represents what happens in behavioral and social work intervention research is unknown (Begun, Berger, &amp; Otto-Salaj, 2018). 
In fact, several authors have recommended having study personnel assigned specifically to the tasks of implementing a detailed participant recruitment and retention plan over the lifetime of the intervention or evaluation study (Begun, Berger, &amp; Otto-Salaj, 2018; Jimenez &amp; Czaja, 2016).\n\n<img class=\"aligncenter size-full wp-image-190\" src=\"https:\/\/pressbooks.ulib.csuohio.edu\/projectmanagement2ndedition\/wp-content\/uploads\/sites\/185\/2024\/08\/Pie-chart-representing-20-percent-success-and-80-percent-failure.png\" alt=\"Pie chart representing 20 percent success and 80 percent failure\" width=\"303\" height=\"253\">\n\nHere is a brief review of some of those principles and elaboration of several points relevant to intervention and evaluation research.\n<h3><strong>Participant recruitment.<\/strong><\/h3>\nInvestigators are notorious for overestimating the numbers of participants that are available, eligible, and willing to participate in their studies, particularly intervention studies (Thoma et al., 2010). This principle has been nicknamed Lasagna\u2019s Law after the scientist Louis Lasagna, who first described this phenomenon:\n<blockquote><em>\"Investigators all too often commit the error of (grossly) overestimating the pool of potential study participants who meet a study\u2019s inclusion criteria and the number they can successfully recruit into the study\" (Begun, Berger, &amp; Otto-Salaj, 2018, p. 10).<\/em><\/blockquote>\n<img class=\"aligncenter size-full wp-image-191\" src=\"https:\/\/pressbooks.ulib.csuohio.edu\/projectmanagement2ndedition\/wp-content\/uploads\/sites\/185\/2024\/08\/lasagna-on-a-plate.png\" alt=\"lasagna on a plate\" width=\"189\" height=\"160\">\n\nYou may recall learning in our prior course about a 3-step process related to participant recruitment (adapted from Begun, Berger, &amp; Otto-Salaj, 2018): generating contacts, screening, and consent.\n<h4><em>Generating contacts.<\/em><\/h4>\nThe first step involved generating initial contacts, soliciting interest from potential participants. Important considerations included the media applied (e.g., newsletter announcements, radio and television advertising, mail, email, social media, flyers, posters, and others) and the nature of the message inviting participation. Remember that recruitment messages are invitations to become a participant and need to address:\n<ul>\n \t<li>why someone might wish to engage in the study<\/li>\n \t<li>the details needed to make an informed choice about volunteering<\/li>\n \t<li>how they can become involved<\/li>\n \t<li>cultural relevance of the invitation message.<\/li>\n<\/ul>\nOne strong motivation for individuals to participate in intervention research is the potential for receiving a new form of intervention, one to which they might not otherwise have access. This might be particularly motivating for someone who has been dissatisfied with other intervention options. The experimental option might seem desirable because it is different from something that has not worked well for them in the past, it seems more relevant, it is more practical\/feasible than other options, or they were not good candidates for other options.\n\nOn the other hand, one barrier to participation is the experimental, unknown nature of the intervention\u2014the need to conduct the study suggests that the outcomes are somewhat uncertain, and this may include unknown side effects. 
This barrier might easily outweigh the influence of a different motivation: the altruistic desire to contribute to something that might help others. Altruism is sometimes a motivation to engage in research but cannot be relied on alone to motivate participation in studies with long-term or intensive commitments of time and effort. It may be sufficient to motivate participation in an evaluation for interventions that are part of their service or treatment plan, however.\n\n<img class=\"aligncenter size-full wp-image-192\" src=\"https:\/\/pressbooks.ulib.csuohio.edu\/projectmanagement2ndedition\/wp-content\/uploads\/sites\/185\/2024\/08\/Illustration-saying-you-are-invited.png\" alt=\"Illustration saying you are invited\" width=\"191\" height=\"147\">\n<h4><em>Screening.<\/em><\/h4>\nScreening is part of the intervention research recruitment process. Intervention and evaluation research typically require participants to meet specific study criteria\u2014<strong><em>inclusion criteria <\/em><\/strong>and <strong><em>exclusion criteria<\/em><\/strong>. These criteria might relate to their condition or problem, their past or present involvement in services or treatment, or specific demographic criteria (e.g., age, ethnicity, income level, gender identity, sexual orientation, or others). For example, inclusion criteria for a study might specify including only persons meeting the DSM-5 diagnostic criteria for a substance use disorder. Exclusion criteria for that same study might specify excluding all persons with additional DSM-5 diagnoses such as schizophrenia or dementia unrelated to substance use and withdrawal.\n\nStudy investigators need to establish clear and consistent screening protocols for determining who meets inclusion\/exclusion criteria for participation. 
Screening might include answers to simple questions, such as \u201cAre you over the age of 18?\u201d or \u201cAre you currently pregnant?\u201d Screening might also include administering one or more standardized screening instruments, such as the Alcohol Use Disorder Identification Test (AUDIT), the Patient Health Questionnaire screening for depression, the Mini-Mental State Examination (MMSE) screening for possible dementia, or the Hurt, Insult, Threaten, Scream (HITS) screening for intimate partner violence.\n\nRegardless of the tool, screening information is not data, since screening occurs prior to participant consent. The sole purpose of screening is to determine whether a potential volunteer is eligible to become a study participant. Ethically, screening protocols also need to include a strategy for referring persons who made an effort to participate in the intervention (expressed a need for intervention) but could not meet the inclusion criteria. In other words, if the door into the study is closed to them, alternatives need to be provided.\n\n<img class=\"aligncenter size-full wp-image-193\" src=\"https:\/\/pressbooks.ulib.csuohio.edu\/projectmanagement2ndedition\/wp-content\/uploads\/sites\/185\/2024\/08\/door-imposed-upon-numerous-polygons.png\" alt=\"door imposed upon numerous polygons\" width=\"231\" height=\"231\">\n<h4><em>Consent.<\/em><\/h4>\nInformed consent is the third phase of the recruitment process. You learned in our prior work what is required for informed consent. For intervention research with a generalizability aim, the consent process should be reviewed by an Institutional Review Board (IRB). On the other hand, evaluation research that is to be used primarily to inform the practitioner, program, agency, or institution might not require IRB review. 
However, the agency should secure consent to participate in the evaluation, particularly if any activities involved fall outside of routine practice and record-keeping.\n\nImportant to keep in mind throughout the three phases of recruitment is that all three phases and what transpires during intervention relate to participant retention over the course of a longitudinal study. While it is critical during recruitment to consider, from the participants\u2019 point-of-view, why they might wish to become involved in an intervention or evaluation study, in longitudinal studies it is equally important to consider why they might wish to continue to be involved over time. This topic warrants further attention.\n\n<img class=\"aligncenter size-full wp-image-194\" src=\"https:\/\/pressbooks.ulib.csuohio.edu\/projectmanagement2ndedition\/wp-content\/uploads\/sites\/185\/2024\/08\/Pen-on-top-a-sheet-of-paper-that-says-I-agree.png\" alt=\"Pen on top a sheet of paper that says I agree\" width=\"237\" height=\"158\">\n<h3><strong>Participant retention. <\/strong><\/h3>\nA great deal of resources and effort devoted to participant recruitment and delivering interventions is wasted each time a study participant drops out before the end of a study (called study attrition, this is the opposite of retention). Furthermore, the integrity of study conclusions can be jeopardized when study attrition occurs. An interesting meta-analysis was conducted to assess the potential impact on longitudinal studies of our nation\u2019s high rates of incarceration, especially in light of extreme racial disparities in incarceration rates (Wang, Aminawung, Wildeman, Ross, &amp; Krumholz, 2014). The investigators combined the samples from 14 studies into a complete sample of 72,000 study participants. Based on U.S. incarceration rates, they determined that longitudinal studies stand to lose up to 65% of black men from their samples. 
Under these conditions, study results and generalizability conclusions are potentially seriously impaired, especially since participant attrition is not occurring in a random fashion equivalently across all groups.\n\n<img class=\"aligncenter size-full wp-image-195\" src=\"https:\/\/pressbooks.ulib.csuohio.edu\/projectmanagement2ndedition\/wp-content\/uploads\/sites\/185\/2024\/08\/Person-looking-on-into-the-abyss.png\" alt=\"Person looking on into the abyss\" width=\"339\" height=\"227\">\n<h3><strong>Relationship &amp; Retention. <\/strong><\/h3>\nAs previously noted, a major factor influencing participants\u2019 willingness to remain engaged with an intervention or evaluation study over time is the nature of their experiences with the study. An important consideration for intervention and evaluation researchers is how they might reduce or eliminate barriers and inconveniences associated with participation in their studies (Jimenez &amp; Czaja, 2016). These might include transportation, time, schedule, child care, and other practical concerns. Another factor influencing potential participants\u2019 decisions to commit to engaging with a study concerns stigma\u2014the extent to which they are comfortable becoming identified as a member of the group being served. Consider, for example, the potential stigma associated with being diagnosed with a mental illness, identified as a victim of sexual assault, categorized as \u201cpoor,\u201d or labeled with a criminal record. Strategies for minimizing or eliminating the stigma associated with participation in a deficit-defined study, and for emphasizing strengths instead, would go a long way toward encouraging participation. 
For example, recruiting persons concerned about their own substance use patterns is very different from recruiting \u201caddicts\u201d (see Begun, 2016).\n\nBefore launching a new program or extending an existing program to a new population, social workers might solicit qualitative responses from potential participants, to determine how planned elements are likely to be experienced by future participants. This may be performed as a preliminary focus group session where the group provides feedback and insight concerning elements of the planned intervention. Or, it may be conducted as a series of interviews or open-ended surveys with representatives of the population expected to be engaged in the intervention. Similarly, investigators sometimes conduct these preliminary studies with potential participants concerning the planned research activities, not only the planned intervention elements.\n\nTwo examples of focus groups assisting in the planning or evaluation of interventions come from Milwaukee County. HEART to HEART was a community-based intervention research project designed to reduce the risk of HIV exposure among women at risk of exposure by virtue of their involvement in risky sexual and\/or substance use behavior. Women were to be randomly assigned to a preventive intervention protocol (combined brief HIV and alcohol misuse counseling) or a \u201ccontrol\u201d condition (educational information provided about risk behaviors). Focus group members helped plan the name of the program, its identity and branding, many of the program elements, and the research procedures to ensure that it was culturally responsive, appropriate, and welcoming. 
Features such as conducting the work in a non-stigmatizing environment (a general wellness setting rather than a treatment center), creating a welcoming atmosphere, providing gender- and ethnicity-relevant materials, and providing healthful snacks were considered strong contributors to the women\u2019s ongoing participation in the longitudinal study (Begun, Berger, &amp; Otto-Salaj, 2018).\n\nIn a different intervention research project, a focus group was conducted with partners of men engaged in a batterer treatment program. The purpose of the focus group was to develop procedures for safely collecting evaluation data from and providing research incentive payments to the women at risk of intimate partner violence. The planned research concerned the women\u2019s perceptions of their partners\u2019 readiness to change the violent behavior, and investigators were concerned that some partners might respond abusively to a woman\u2019s involvement in such a study. The women helped develop protocols for the research team to follow in communicating safely with future study participants and for materials future participants could use in safely managing their study participation (Begun, Berger, &amp; Otto-Salaj, 2018).\n\n<img class=\"aligncenter size-full wp-image-196\" src=\"https:\/\/pressbooks.ulib.csuohio.edu\/projectmanagement2ndedition\/wp-content\/uploads\/sites\/185\/2024\/08\/figure-of-a-heart-made-of-ribbons.png\" alt=\"figure of a heart made of ribbons\" width=\"249\" height=\"227\">\n<h2><strong>Random Assignment Issues<\/strong><\/h2>\nFirst, it is essential to remember that random selection into a sample is very different from the process of randomization or random assignment to experimental conditions.\n<blockquote><strong><em>\u201cRandom selection<\/em><\/strong><em> refers to the way investigators generate a study sample that is reflective and representative of the larger population to which the study results are going to be generalized (external validity)\u2026<strong>Random assignment<\/strong>, often colloquially called <strong>randomization<\/strong>, has a different goal and is used at a different point in the intervention research process. Once we have begun to randomly select our participants, our study design might call for us to assign these recruited individuals to experience different intervention conditions\u201d (Begun, Berger, &amp; Otto-Salaj, 2018, pp. 17-18).<\/em><\/blockquote>\nSeveral study designs examined in Chapter 3 of this module involved random assignment of participants to one or another experimental condition. The purpose of random assignment is to improve the ability to attribute any observed group differences in the outcome data to the groups rather than to pre-existing differences among group members (internal validity). For example, if we were comparing individuals who received a novel intervention with those who received a treatment as usual (TAU) condition, we would be in trouble if there happened to be more women in the novel treatment group and more men in the TAU group. We would not know if differences observed at the end were attributable to the intervention or if they were a function of gender instead.\n\nConsider, for example, the randomized controlled trial (RCT) design from an intervention study to prevent childhood bullying (Jenson et al., 2010). A total of 28 elementary schools participated in this study, with 14 having been randomly assigned to the experimental condition (the new intervention) and 14 to the no-treatment control group. This allowed investigators to compare outcomes of the two treatment conditions with considerable confidence that the observed differences were attributable to the intervention; however, they were somewhat unlucky in their randomization effort since a greater percentage of children in the experimental condition were Latino\/Latina than in the control condition. 
This ethnicity factor needs to be taken into consideration in conclusions about the observed significant reduction in bully victimization among students in the experimental schools compared to the control group.\n\n<img class=\"aligncenter size-full wp-image-197\" src=\"https:\/\/pressbooks.ulib.csuohio.edu\/projectmanagement2ndedition\/wp-content\/uploads\/sites\/185\/2024\/08\/Illustration-of-an-angry-figure-with-a-word-cloud-of-negative-terms.png\" alt=\"Illustration of an angry figure with a word cloud of negative terms\" width=\"350\" height=\"270\">\n<h3><strong>Random assignment success &amp; failure. <\/strong><\/h3>\nRandomly assigning participants to different experimental or intervention conditions requires investigators to introduce chance to the process. Randomness means a lack of systematic assignment. So, if you were to alternately assign participants to one condition or the other based on the order in which they enrolled in the study, a certain degree of chance is invoked: persons 1, 3, 5, 7 and so forth=control group, persons 2, 4, 6, 8 and so forth=experimental group. This system is only good if there is nothing systematic about how they were accepted into the study\u2014nothing alphabetical or gendered or otherwise nonrandom. Systems of chance include lottery, roll of the dice, playing card draws, or use of a random numbers table\u2014the same kinds of systems you read about in our prior course when we discussed how individuals might be randomly selected for participation in the sample. 
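To make the contrast with systematic assignment concrete, here is a minimal Python sketch of assignment by a system of chance (hypothetical code, not a protocol from any study cited here): shuffle the roster of enrolled participants, then split the shuffled list in half.

```python
import random

# Hypothetical sketch: assign an even number of enrolled participants to
# control and experimental conditions by shuffling, so that assignment is
# driven by chance rather than by enrollment order or any other systematic rule.
def randomly_assign(participant_ids, seed=None):
    rng = random.Random(seed)      # seed is only to make the example reproducible
    shuffled = participant_ids[:]  # copy so the original roster is untouched
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return {"control": shuffled[:half], "experimental": shuffled[half:]}

# 24 enrolled participants split 12 and 12:
groups = randomly_assign(list(range(1, 25)), seed=42)
print(len(groups["control"]), len(groups["experimental"]))
```

The fixed seed is only so the illustration is reproducible; in an actual study the whole point is that the split is left to chance, and any deviation from the shuffled order reintroduces systematic assignment.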
What could possibly go wrong?\n\n<img class=\"aligncenter size-full wp-image-198\" src=\"https:\/\/pressbooks.ulib.csuohio.edu\/projectmanagement2ndedition\/wp-content\/uploads\/sites\/185\/2024\/08\/two-dice-being-rolled.png\" alt=\"two dice being rolled\" width=\"114\" height=\"129\">\n\nUnfortunately, relying on chance does mean that random assignment (randomization) may fail to result in a balanced distribution of participants based on their characteristics, even when the assigned groups are equal in size. This unfortunate luck was evident in the Jenson et al. (2010) study previously mentioned, where the distribution of Latino\/Latina students was disproportionate in the two intervention condition groups. However, those investigators were only unlucky on this one dimension\u2014there was reasonable comparability on a host of other variables.\n\nAnother way that random assignment sometimes goes wrong is through failure to stick to the rules of the randomization plan. Perhaps a practitioner really wants a particular client to experience the novel intervention (or the client threatens to participate only if assigned to that group). Suppose the practitioner somehow manipulates the individual\u2019s assignment, intending to replace that person with another individual so the numbers assigned to each group even out. Unfortunately, the result is reduced integrity of the overall study design\u2014those individuals\u2019 assignments not being random means that systematic assignment has crept into the study, jeopardizing study conclusions. \u201cRandomization ensures that each patient has an equal chance of receiving any of the treatments under study\u201d and generates comparable groups \u201cwhich are alike in all the important aspects\u201d with the notable exception of which intervention the groups receive (Suresh, 2011, p. 8). 
In reality, investigators, practitioners, and participants may be tempted to \u201ccheat\u201d chance to achieve a hoped-for assignment.\n\n<img class=\"aligncenter size-full wp-image-199\" src=\"https:\/\/pressbooks.ulib.csuohio.edu\/projectmanagement2ndedition\/wp-content\/uploads\/sites\/185\/2024\/08\/person-with-hand-behind-back-with-fingers-crossed.png\" alt=\"person with hand behind back with fingers crossed\" width=\"237\" height=\"177\">\n<h3><strong>Assessing randomization results.<\/strong><\/h3>\nInvestigators need to determine the degree to which their random assignment or randomization efforts were successful in creating equivalent groups. To do this, they often turn to the kinds of statistical analyses you learned about in our prior course: chi-square, independent samples t-test, or analysis of variance (ANOVA), depending on the nature of the variables involved. The major difference here, compared to the analyses we previously practiced, is what an investigator hopes the result of the analysis will be. 
Here is the logic explained:\n<ol>\n \t<li>the null hypothesis (H0) is that no difference exists between the groups.<\/li>\n \t<li>if the groups are equivalent, the investigator would find no difference.<\/li>\n \t<li>the investigator hopes not to find a difference\u2014this does not guarantee that the groups are the same, only that no difference was observed.<\/li>\n<\/ol>\nTo refresh your memory of how to work with these three types of analyses and to make them relevant to the question of how well randomization worked, we have three exercises in our Excel workbook.\n<div class=\"textbox textbox--learning-objectives\"><header class=\"textbox__header\">\n<p class=\"textbox__title\">Interactive Excel Workbook Activities<\/p>\n\n<\/header>\n<div class=\"textbox__content\">\n\nComplete the following Workbook Activities:\n<ul>\n \t<li><a href=\"https:\/\/ohiostate.pressbooks.pub\/swk3401workbook\/chapter\/swk-3402-3-4-2-exercise-testing-randomization-with-chi-square-analysis\/\">SWK 3402.3-4.2 Exercise Testing Randomization with Chi-Square Analysis<\/a><\/li>\n \t<li><a href=\"https:\/\/ohiostate.pressbooks.pub\/swk3401workbook\/chapter\/swk-3402-3-4-3-exercise-testing-randomization-with-independent-samples-t-test-analysis\/\">SWK 3402.3-4.3 Randomization Check: Independent Samples <em>t<\/em>-Test<\/a><\/li>\n \t<li><a href=\"https:\/\/ohiostate.pressbooks.pub\/swk3401workbook\/chapter\/swk-3402-3-4-4-exercise-testing-randomization-with-analysis-of-variance-anova\/\">SWK 3402.3-4.4 Randomization Check: Anova<\/a><\/li>\n<\/ul>\n<\/div>\n<\/div>\n<h2><strong>Chapter Summary<\/strong><\/h2>\nIn this chapter you reviewed several concepts related to samples, sampling, and participant recruitment explored in our prior course. The sample size topic was expanded to address how sample size relates to effect size in intervention and evaluation research. 
Issues related to participant recruitment were reviewed, particularly as they relate to the need for engaging a diverse and representative sample of study participants and how these concerns relate to a study\u2019s external validity. This topic was expanded into a 3-phase model of recruitment processes: generating contacts, screening volunteers for eligibility, and consenting participants. You then read about issues concerning participant retention over time in longitudinal intervention and evaluation studies, especially the importance of participants\u2019 experiences and relationships with the study. This included a discussion of participants\u2019 experiences with random assignment to study conditions, depending on the study design, and how randomization might or might not work. You learned a bit about how to assess the adequacy of the randomization effort in our Excel exercises and to think about how randomization successes and failures might affect a study\u2019s integrity and internal validity.\n<h2>Stop and Think<\/h2>\n<div style=\"float: left;min-height: 120px;width: 99%;margin-bottom: 10px;padding: 10px;background-color: #f1f7fe\">\n\n<img class=\"size-full wp-image-194 alignleft\" src=\"https:\/\/pressbooks.ulib.csuohio.edu\/projectmanagement2ndedition\/wp-content\/uploads\/sites\/185\/2024\/08\/Stop_ThinkV2-150x150-1.png\" alt=\"Stop and Think\" width=\"150\" height=\"150\">Take a moment to complete the following activity.\n\n[h5p id=\"10\"]\n\n<\/div>","rendered":"<p>Until this point, most of our discussions have treated intervention and evaluation research as being very similar. One major way in which they differ relates to participants and the pool or population from which they are selected. Recall that intervention research aims to draw conclusions and develop generalizations about a population based on what is learned with a representative sample. 
Thus, intervention research is strongest when the participants are systematically drawn from the population of interest. The aim of evaluation research is different: the knowledge gained is to be used to inform the practice and\/or program being evaluated, not generalized to the broader population. As a result, evaluation research typically engages participants receiving those services. While the principles of systematic and random sampling might apply in both scenarios, the pool or population of potential participants is different, and the generalizability of results derived from the sample of participants differs, as well. The principles learned in our prior course about sampling and participant recruitment to understand social work problems and diverse populations apply to social work intervention and evaluation research for understanding interventions. Because much of evaluation and intervention research is longitudinal in nature, participant retention, as well as participant recruitment, is of major concern.<\/p>\n<p>In this chapter you will:<\/p>\n<ul>\n<li>review features of sample size and filling a study design, and learn how they apply to effect sizes and research for understanding social work interventions;<\/li>\n<li>review features of participant recruitment and retention, and learn how they apply to research for understanding social work interventions;<\/li>\n<li>learn about random assignment of participants to study design conditions in intervention and evaluation research.<\/li>\n<\/ul>\n<h2><strong>Sample Size Reviewed &amp; Expanded<\/strong><\/h2>\n<p>Sample size is not a significant issue if interventions are being evaluated from a qualitative approach where the aim is depth of data rather than generalizability from a sample to a population. 
Sample size in qualitative studies is generally kept relatively small as a means of keeping the volume of data to be analyzed manageable.<\/p>\n<p>Sample size does matter in quantitative approaches where investigators will generalize from the sample to a population. In our prior course you learned how sample size matters in terms of the sample\u2019s ability to represent the population. Remember the green M&amp;Ms example where the small samples were quite varied compared to each other and to the true population, but the larger (combined) samples were less different? Sample size issues remain important in intervention research where generalizations are to be made to the population based on the sample. This might be an issue, as well, in evaluation research where there are many participants involved in the intervention being evaluated and the investigators choose to work with data from a sample rather than participants representing the entire population served. In either case, intervention or evaluation research, investigators need to determine what constitutes an adequately sized sample. Two issues need to be addressed: numbers needed to fulfill the requirements of a study design and sample size needed to detect meaningful effects.<\/p>\n<h3><strong>Filling a quantitative study design: <\/strong><\/h3>\n<p>You may recall from our prior course how a study design relates to the number of study participants that need to be recruited (and retained). The study design might include two or more independent variables (the ones being manipulated or compared). To ensure sufficient numbers of participants for analyzing these variables, investigators need to be sure that participants of the designated types are recruited and retained so that their outcome (dependent variable) data can be analyzed. Here is an example of the numbers of each type needed to fulfill a 2 X 3 design. 
This example has neighborhoods as the unit of analysis; individual participants are embedded within those neighborhood units. This example is relevant to research for understanding social work interventions at a meso or macro level.<\/p>\n<p>Imagine a study concerning the impact of a community empowerment intervention designed to help members of local communities improve health outcomes by reducing exposure to air and water environmental toxins and contaminants inside and outside of their homes. Investigators are concerned that the intervention might differently impact very low-income, low-income, and moderate-income neighborhoods. They have chosen to conduct a random assignment study where \u00bd of the neighborhoods receive the intervention immediately and the other \u00bd receive it one year later (delayed intervention with the no intervention period serving as the control). They have determined that for the purposes of their analysis plan, they need a minimum of 12 neighborhoods in each condition. 
The sampling design would look like this:<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter size-full wp-image-188\" src=\"https:\/\/pressbooks.ulib.csuohio.edu\/projectmanagement2ndedition\/wp-content\/uploads\/sites\/185\/2018\/12\/Screen-Shot-2018-12-28-at-10.18.54-AM.png\" alt=\"Neighborhood income status sampling design\" width=\"602\" height=\"120\" srcset=\"https:\/\/pressbooks.ulib.csuohio.edu\/swk627\/wp-content\/uploads\/sites\/185\/2018\/12\/Screen-Shot-2018-12-28-at-10.18.54-AM.png 602w, https:\/\/pressbooks.ulib.csuohio.edu\/swk627\/wp-content\/uploads\/sites\/185\/2018\/12\/Screen-Shot-2018-12-28-at-10.18.54-AM-300x60.png 300w, https:\/\/pressbooks.ulib.csuohio.edu\/swk627\/wp-content\/uploads\/sites\/185\/2018\/12\/Screen-Shot-2018-12-28-at-10.18.54-AM-65x13.png 65w, https:\/\/pressbooks.ulib.csuohio.edu\/swk627\/wp-content\/uploads\/sites\/185\/2018\/12\/Screen-Shot-2018-12-28-at-10.18.54-AM-225x45.png 225w, https:\/\/pressbooks.ulib.csuohio.edu\/swk627\/wp-content\/uploads\/sites\/185\/2018\/12\/Screen-Shot-2018-12-28-at-10.18.54-AM-350x70.png 350w\" sizes=\"auto, (max-width: 602px) 100vw, 602px\" \/><\/p>\n<p>Filling the study design cells for this 2 X 3 design requires a minimum of 72 neighborhoods (6 cells times 12 units each=72 units total). These would be recruited as: 24 very low income, 24 low income, and 24 moderate income neighborhoods. Within each neighborhood, they hope to engage 15-20 households, meaning that they will engage with between 1080 and 1440 households (15 x 72=1080, 20 x 72=1440).<\/p>\n<h3><strong>Sample size related to effect size.<\/strong><\/h3>\n<p>Previously in this chapter you read about differences that are clinically meaningful. Intervention researchers are often asked to consider an analogous problem: what is the size of the effect detected in relation to the intervention? 
While an observed difference might be statistically significant, it is important to know whether the size of that difference is meaningful. <strong><em>Effect size<\/em><\/strong> information helps interpret statistical findings related to interventions\u2014their power to effect meaningful amounts of change in the desired outcomes. The size or magnitude of the effect detected is determined statistically, and sample size is one part of the formula for computing effect sizes. As a result, the size of a study\u2019s sample has an impact on the size of effect that can be detected.<\/p>\n<p>Here, the logic can sometimes become a bit confusing. This diagram helps explain the relationship between effect size and sample size without getting into the detailed statistics involved.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter size-full wp-image-189\" src=\"https:\/\/pressbooks.ulib.csuohio.edu\/projectmanagement2ndedition\/wp-content\/uploads\/sites\/185\/2024\/08\/Screen-Shot-2018-12-28-at-10.19.51-AM.png\" alt=\"diagram illustrating the relationship between Sample Size and Effect Size\" width=\"680\" height=\"420\" srcset=\"https:\/\/pressbooks.ulib.csuohio.edu\/swk627\/wp-content\/uploads\/sites\/185\/2024\/08\/Screen-Shot-2018-12-28-at-10.19.51-AM.png 680w, https:\/\/pressbooks.ulib.csuohio.edu\/swk627\/wp-content\/uploads\/sites\/185\/2024\/08\/Screen-Shot-2018-12-28-at-10.19.51-AM-300x185.png 300w, https:\/\/pressbooks.ulib.csuohio.edu\/swk627\/wp-content\/uploads\/sites\/185\/2024\/08\/Screen-Shot-2018-12-28-at-10.19.51-AM-65x40.png 65w, https:\/\/pressbooks.ulib.csuohio.edu\/swk627\/wp-content\/uploads\/sites\/185\/2024\/08\/Screen-Shot-2018-12-28-at-10.19.51-AM-225x139.png 225w, https:\/\/pressbooks.ulib.csuohio.edu\/swk627\/wp-content\/uploads\/sites\/185\/2024\/08\/Screen-Shot-2018-12-28-at-10.19.51-AM-350x216.png 350w\" sizes=\"auto, (max-width: 680px) 100vw, 680px\" \/><\/p>\n<p>In other words, if an investigator wishes to be as sure as 
possible to detect an effect if it exists, a larger sample size will help; having a small sample leaves the question unanswered if no effect is detected (see the small\/small peach colored box)\u2014the study will have to be repeated to determine if there really is no effect (see the large\/large pink colored box) or there actually is an effect of the intervention\u2014large or small.<\/p>\n<p>In order to refresh your skills in working with Excel and gain practice with the topic of sample size related to effect size, we have an exercise in the Excel workbook to visit.<\/p>\n<div class=\"textbox textbox--learning-objectives\">\n<header class=\"textbox__header\">\n<p class=\"textbox__title\">Interactive Excel Workbook Activities<\/p>\n<\/header>\n<div class=\"textbox__content\">\n<p>Complete the following Workbook Activity:<\/p>\n<ul>\n<li><a href=\"https:\/\/ohiostate.pressbooks.pub\/swk3401workbook\/chapter\/swk-3402-3-4-1-exercise-on-sample-and-effect-sizes\/\">SWK 3402.3-4.1 Exercise on Sample and Effect Sizes<\/a><\/li>\n<\/ul>\n<\/div>\n<\/div>\n<h3><strong>Participant diversity. <\/strong><\/h3>\n<p>In our prior course we also examined issues related to participant diversity and heterogeneity in study samples. Intervention and evaluation researchers working with samples need to consider the extent to which those samples are representative of the diversity and heterogeneity present in the population to which the intervention research will be generalized or the population of those served by the program being evaluated. Ideally, the strategies for <strong><em>random selection<\/em><\/strong> that you learned in our prior course (probability selection) aid in the effort to achieve representativeness. 
Convenience sampling and snowball sampling strategies, as you may recall, tend to tap into <strong><em>homophily<\/em><\/strong> and create overly homogeneous samples.<\/p>\n<p>Diversity among participants in qualitative studies is not intended to be representative of the diversity occurring in the population. Instead, heterogeneity among participants is intended to provide breadth in the perspectives shared, as a complement to depth in the data collected.<\/p>\n<h2><strong>Participant Recruitment &amp; Retention Reviewed &amp; Expanded<\/strong><\/h2>\n<p>In our prior course you learned about the distinction between <strong><em>participant recruitment<\/em><\/strong> (individuals entering into a study) and <strong><em>participant retention <\/em><\/strong>(individuals remaining engaged with a longitudinal study over time). Intervention and evaluation research investigators need to have a strong recruitment and retention plan in place to ensure that the study can be successfully completed. Without the right numbers and types of study participants, even the best designed studies are doomed to failure: over 80% of clinical trials in medicine fail due to under-enrollment of qualified participants (Thomson CenterWatch, 2006)! How accurately this figure represents what happens in behavioral and social work intervention research is unknown (Begun, Berger, &amp; Otto-Salaj, 2018). 
In fact, several authors have recommended having study personnel assigned specifically to the tasks of implementing a detailed participant recruitment and retention plan over the lifetime of the intervention or evaluation study (Begun, Berger, &amp; Otto-Salaj, 2018; Jimenez &amp; Czaja, 2016).<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter size-full wp-image-190\" src=\"https:\/\/pressbooks.ulib.csuohio.edu\/projectmanagement2ndedition\/wp-content\/uploads\/sites\/185\/2024\/08\/Pie-chart-representing-20-percent-success-and-80-percent-failure.png\" alt=\"Pie chart representing 20 percent success and 80 percent failure\" width=\"303\" height=\"253\" srcset=\"https:\/\/pressbooks.ulib.csuohio.edu\/swk627\/wp-content\/uploads\/sites\/185\/2024\/08\/Pie-chart-representing-20-percent-success-and-80-percent-failure.png 303w, https:\/\/pressbooks.ulib.csuohio.edu\/swk627\/wp-content\/uploads\/sites\/185\/2024\/08\/Pie-chart-representing-20-percent-success-and-80-percent-failure-300x250.png 300w, https:\/\/pressbooks.ulib.csuohio.edu\/swk627\/wp-content\/uploads\/sites\/185\/2024\/08\/Pie-chart-representing-20-percent-success-and-80-percent-failure-65x54.png 65w, https:\/\/pressbooks.ulib.csuohio.edu\/swk627\/wp-content\/uploads\/sites\/185\/2024\/08\/Pie-chart-representing-20-percent-success-and-80-percent-failure-225x188.png 225w\" sizes=\"auto, (max-width: 303px) 100vw, 303px\" \/><\/p>\n<p>Here is a brief review of some of those principles and elaboration of several points relevant to intervention and evaluation research.<\/p>\n<h3><strong>Participant recruitment.<\/strong><\/h3>\n<p>Investigators are notorious for over-estimating the numbers of participants that are available, eligible, and willing to participate in their studies, particularly intervention studies (Thoma et al., 2010). 
This principle has been nicknamed Lasagna\u2019s Law after the scientist Louis Lasagna, who first described this phenomenon:<\/p>\n<blockquote><p><em>&#8220;Investigators all too often commit the error of (grossly) overestimating the pool of potential study participants who meet a study\u2019s inclusion criteria and the number they can successfully recruit into the study&#8221; (Begun, Berger, &amp; Otto-Salaj, 2018, p. 10).<\/em><\/p><\/blockquote>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter size-full wp-image-191\" src=\"https:\/\/pressbooks.ulib.csuohio.edu\/projectmanagement2ndedition\/wp-content\/uploads\/sites\/185\/2024\/08\/lasagna-on-a-plate.png\" alt=\"lasagna on a plate\" width=\"189\" height=\"160\" srcset=\"https:\/\/pressbooks.ulib.csuohio.edu\/swk627\/wp-content\/uploads\/sites\/185\/2024\/08\/lasagna-on-a-plate.png 189w, https:\/\/pressbooks.ulib.csuohio.edu\/swk627\/wp-content\/uploads\/sites\/185\/2024\/08\/lasagna-on-a-plate-65x55.png 65w\" sizes=\"auto, (max-width: 189px) 100vw, 189px\" \/><\/p>\n<p>You may recall learning in our prior course about a 3-step process related to participant recruitment (adapted from Begun, Berger, &amp; Otto-Salaj, 2018): generating contacts, screening, and consent.<\/p>\n<h4><em>Generating contacts.<\/em><\/h4>\n<p>The first step involved generating initial contacts, soliciting interest from potential participants. Important considerations included the media applied (e.g., newsletter announcements, radio and television advertising, mail, email, social media, flyers, posters, and others) and the nature of the message inviting participation. 
Remember that recruitment messages are invitations to become a participant and need to respond to:<\/p>\n<ul>\n<li>why someone might wish to engage in the study<\/li>\n<li>what details they need to make an informed choice about volunteering<\/li>\n<li>how they can become involved<\/li>\n<li>the cultural relevance of the invitation message.<\/li>\n<\/ul>\n<p>One strong motivation for individuals to participate in intervention research is the potential for receiving a new form of intervention, one to which they might not otherwise have access. This might be particularly motivating for someone who has been dissatisfied with other intervention options. The experimental option might seem desirable because it is different from something that has not worked well for them in the past, it seems more relevant, it is more practical\/feasible than other options, or they were not good candidates for other options.<\/p>\n<p>On the other hand, one barrier to participation is the experimental, unknown nature of the intervention\u2014the need to conduct the study suggests that the outcomes are somewhat uncertain, and this may include unknown side effects. This barrier might easily outweigh the influence of a different motivation: the altruistic desire to contribute to something that might help others. Altruism is sometimes a motivation to engage in research but cannot be relied on alone to motivate participation in studies with long-term or intensive commitments of time and effort. 
Altruism may be sufficient, however, to motivate participation in the evaluation of interventions that are already part of a client\u2019s service or treatment plan.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter size-full wp-image-192\" src=\"https:\/\/pressbooks.ulib.csuohio.edu\/projectmanagement2ndedition\/wp-content\/uploads\/sites\/185\/2024\/08\/Illustration-saying-you-are-invited.png\" alt=\"Illustration saying you are invited\" width=\"191\" height=\"147\" srcset=\"https:\/\/pressbooks.ulib.csuohio.edu\/swk627\/wp-content\/uploads\/sites\/185\/2024\/08\/Illustration-saying-you-are-invited.png 191w, https:\/\/pressbooks.ulib.csuohio.edu\/swk627\/wp-content\/uploads\/sites\/185\/2024\/08\/Illustration-saying-you-are-invited-65x50.png 65w\" sizes=\"auto, (max-width: 191px) 100vw, 191px\" \/><\/p>\n<h4><em>Screening.<\/em><\/h4>\n<p>Screening is part of the intervention research recruitment process. Intervention and evaluation research typically require participants to meet specific study criteria\u2014<strong><em>inclusion criteria <\/em><\/strong>and <strong><em>exclusion criteria<\/em><\/strong>. These criteria might relate to their condition or problem, their past or present involvement in services or treatment, or specific demographic criteria (e.g., age, ethnicity, income level, gender identity, sexual orientation, or others). For example, inclusion criteria for a study might specify including only persons meeting the DSM-5 diagnostic criteria for a substance use disorder. Exclusion criteria for that same study might specify excluding all persons with additional DSM-5 diagnoses such as schizophrenia or dementia unrelated to substance use and withdrawal.<\/p>\n<p>Study investigators need to establish clear and consistent screening protocols for determining who meets inclusion\/exclusion criteria for participation. 
Screening might include answers to simple questions, such as \u201cAre you over the age of 18?\u201d or \u201cAre you currently pregnant?\u201d Screening might also include administering one or more standardized screening instruments, such as the Alcohol Use Disorder Identification Test (AUDIT), the Patient Health Questionnaire screening for depression, the mini mental state examination (MMSE) screening for possible dementia, or the Hurt, Insult, Threaten, Scream (HITS) screening for intimate partner violence.<\/p>\n<p>Regardless of the tool, screening information is not data since screening occurs prior to participant consent. The sole purpose of screening is to determine whether a potential volunteer is eligible to become a study participant. Ethically, screening protocols also need to include a strategy for referring persons who made an effort to participate in the intervention (expressed a need for intervention) but could not meet the inclusion criteria. In other words, if the door into the study is closed to them, alternatives need to be provided.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter size-full wp-image-193\" src=\"https:\/\/pressbooks.ulib.csuohio.edu\/projectmanagement2ndedition\/wp-content\/uploads\/sites\/185\/2024\/08\/door-imposed-upon-numerous-polygons.png\" alt=\"door imposed upon numerous polygons\" width=\"231\" height=\"231\" srcset=\"https:\/\/pressbooks.ulib.csuohio.edu\/swk627\/wp-content\/uploads\/sites\/185\/2024\/08\/door-imposed-upon-numerous-polygons.png 231w, https:\/\/pressbooks.ulib.csuohio.edu\/swk627\/wp-content\/uploads\/sites\/185\/2024\/08\/door-imposed-upon-numerous-polygons-150x150.png 150w, https:\/\/pressbooks.ulib.csuohio.edu\/swk627\/wp-content\/uploads\/sites\/185\/2024\/08\/door-imposed-upon-numerous-polygons-65x65.png 65w, https:\/\/pressbooks.ulib.csuohio.edu\/swk627\/wp-content\/uploads\/sites\/185\/2024\/08\/door-imposed-upon-numerous-polygons-225x225.png 225w\" sizes=\"auto, (max-width: 
231px) 100vw, 231px\" \/><\/p>\n<h4><em>Consent.<\/em><\/h4>\n<p>Informed consent is the third phase of the recruitment process. You learned in our prior work what is required for informed consent. For intervention research with a generalizability aim the consent process should be reviewed by an Institutional Review Board (IRB). On the other hand, evaluation research that is to be used primarily to inform the practitioner, program, agency, or institution might not require IRB review. However, the agency should secure consent to participate in the evaluation, particularly if any activities involved fall outside of routine practice and record-keeping.<\/p>\n<p>It is important to keep in mind that what transpires during all three recruitment phases, as well as during the intervention itself, relates to participant retention over the course of a longitudinal study. While it is critical during recruitment to consider, from the participants\u2019 point-of-view, why they might wish to become involved in an intervention or evaluation study, in longitudinal studies it is equally important to consider why they might wish to continue to be involved over time. 
This topic warrants further attention.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter size-full wp-image-194\" src=\"https:\/\/pressbooks.ulib.csuohio.edu\/projectmanagement2ndedition\/wp-content\/uploads\/sites\/185\/2024\/08\/Pen-on-top-a-sheet-of-paper-that-says-I-agree.png\" alt=\"Pen on top a sheet of paper that says I agree\" width=\"237\" height=\"158\" srcset=\"https:\/\/pressbooks.ulib.csuohio.edu\/swk627\/wp-content\/uploads\/sites\/185\/2024\/08\/Pen-on-top-a-sheet-of-paper-that-says-I-agree.png 237w, https:\/\/pressbooks.ulib.csuohio.edu\/swk627\/wp-content\/uploads\/sites\/185\/2024\/08\/Pen-on-top-a-sheet-of-paper-that-says-I-agree-65x43.png 65w, https:\/\/pressbooks.ulib.csuohio.edu\/swk627\/wp-content\/uploads\/sites\/185\/2024\/08\/Pen-on-top-a-sheet-of-paper-that-says-I-agree-225x150.png 225w\" sizes=\"auto, (max-width: 237px) 100vw, 237px\" \/><\/p>\n<h3><strong>Participant retention. <\/strong><\/h3>\n<p>A great deal of resources and effort devoted to participant recruitment and delivering interventions is wasted each time a study participant drops out before the end of a study (called study attrition, this is the opposite of retention). Furthermore, the integrity of study conclusions can be jeopardized when study attrition occurs. An interesting meta-analysis was conducted to assess the potential impact on longitudinal studies of our nation\u2019s high rates of incarceration, especially in light of extreme racial disparities in incarceration rates (Wang, Aminawung, Wildeman, Ross, &amp; Krumholz, 2014). The investigators combined the samples from 14 studies into a complete sample of 72,000 study participants. Based on U.S. incarceration rates, they determined that longitudinal studies stand to lose up to 65% of black men from their samples. 
Under these conditions, study results and generalizability conclusions are potentially seriously impaired, especially since participant attrition is not occurring in a random fashion equivalently across all groups.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter size-full wp-image-195\" src=\"https:\/\/pressbooks.ulib.csuohio.edu\/projectmanagement2ndedition\/wp-content\/uploads\/sites\/185\/2024\/08\/Person-looking-on-into-the-abyss.png\" alt=\"Person looking on into the abyss\" width=\"339\" height=\"227\" srcset=\"https:\/\/pressbooks.ulib.csuohio.edu\/swk627\/wp-content\/uploads\/sites\/185\/2024\/08\/Person-looking-on-into-the-abyss.png 339w, https:\/\/pressbooks.ulib.csuohio.edu\/swk627\/wp-content\/uploads\/sites\/185\/2024\/08\/Person-looking-on-into-the-abyss-300x201.png 300w, https:\/\/pressbooks.ulib.csuohio.edu\/swk627\/wp-content\/uploads\/sites\/185\/2024\/08\/Person-looking-on-into-the-abyss-65x44.png 65w, https:\/\/pressbooks.ulib.csuohio.edu\/swk627\/wp-content\/uploads\/sites\/185\/2024\/08\/Person-looking-on-into-the-abyss-225x151.png 225w\" sizes=\"auto, (max-width: 339px) 100vw, 339px\" \/><\/p>\n<h3><strong>Relationship &amp; Retention. <\/strong><\/h3>\n<p>As previously noted, a major factor influencing participants\u2019 willingness to remain engaged with an intervention or evaluation study over time is the nature of their experiences with the study. An important consideration for intervention and evaluation researchers is how they might reduce or eliminate barriers and inconveniences associated with participation in their studies (Jimenez &amp; Czaja, 2016). These might include transportation, time, schedule, child care, and other practical concerns. Another factor influencing potential participants\u2019 decisions to commit to engaging with a study concerns stigma\u2014the extent to which they are comfortable becoming identified as a member of the group being served. 
Consider, for example, the potential stigma associated with being diagnosed with a mental illness, identified as a victim of sexual assault, categorized as \u201cpoor,\u201d or labeled with a criminal record. Strategies for minimizing or eliminating the stigma associated with participation in a deficit-defined study, and for emphasizing a strengths base instead, would go a long way toward encouraging participation. For example, recruiting persons concerned about their own substance use patterns is very different from recruiting \u201caddicts\u201d (see Begun, 2016).<\/p>\n<p>Before launching a new program or extending an existing program to a new population, social workers might solicit qualitative responses from potential participants, to determine how planned elements are likely to be experienced by future participants. This may be performed as a preliminary focus group session where the group provides feedback and insight concerning elements of the planned intervention. Or, it may be conducted as a series of interviews or open-ended surveys with representatives of the population expected to be engaged in the intervention. Similarly, investigators sometimes conduct these preliminary studies with potential participants concerning the planned research activities, not only the planned intervention elements.<\/p>\n<p>Two examples of focus groups assisting in the planning or evaluation of interventions come from Milwaukee County. HEART to HEART was a community-based intervention research project designed to reduce the risk of HIV exposure among women at risk of exposure by virtue of their involvement in risky sexual and\/or substance use behavior. Women were to be randomly assigned to a preventive intervention protocol (combined brief HIV and alcohol misuse counseling) or a \u201ccontrol\u201d condition (educational information provided about risk behaviors). 
Focus group members helped plan the name of the program, its identity and branding, many of the program elements, and the research procedures to ensure that it was culturally responsive, appropriate, and welcoming. Features such as conducting the work in a non-stigmatizing environment (a general wellness setting rather than a treatment center), creating a welcoming environment, using gender- and ethnicity-relevant materials, and providing healthful snacks were considered strong contributors to the women\u2019s ongoing participation in the longitudinal study (Begun, Berger, &amp; Otto-Salaj, 2018).<\/p>\n<p>In a different intervention research project, a focus group was conducted with partners of men engaged in a batterer treatment program. The purpose of the focus group was to develop procedures for safely collecting evaluation data from and providing research incentive payments to the women at risk of intimate partner violence. The planned research concerned the women\u2019s perceptions of their partners\u2019 readiness to change the violent behavior, and investigators were concerned that some partners might respond abusively to a woman\u2019s involvement in such a study. 
The women helped develop protocols for the research team to follow in communicating safely with future study participants and for materials future participants could use in safely managing their study participation (Begun, Berger, &amp; Otto-Salaj, 2018).<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter size-full wp-image-196\" src=\"https:\/\/pressbooks.ulib.csuohio.edu\/projectmanagement2ndedition\/wp-content\/uploads\/sites\/185\/2024\/08\/figure-of-a-heart-made-of-ribbons.png\" alt=\"figure of a heart made of ribbons\" width=\"249\" height=\"227\" srcset=\"https:\/\/pressbooks.ulib.csuohio.edu\/swk627\/wp-content\/uploads\/sites\/185\/2024\/08\/figure-of-a-heart-made-of-ribbons.png 249w, https:\/\/pressbooks.ulib.csuohio.edu\/swk627\/wp-content\/uploads\/sites\/185\/2024\/08\/figure-of-a-heart-made-of-ribbons-65x59.png 65w, https:\/\/pressbooks.ulib.csuohio.edu\/swk627\/wp-content\/uploads\/sites\/185\/2024\/08\/figure-of-a-heart-made-of-ribbons-225x205.png 225w\" sizes=\"auto, (max-width: 249px) 100vw, 249px\" \/><\/p>\n<h2><strong>Random Assignment Issues<\/strong><\/h2>\n<p>First, it is essential to remember that random selection into a sample is very different from the process of randomization or random assignment to experimental conditions.<\/p>\n<blockquote><p><strong><em>Random selection<\/em><\/strong><em> refers to the way investigators generate a study sample that is reflective and representative of the larger population to which the study results are going to be generalized (external validity)\u2026<strong>Random assignment<\/strong>, often colloquially called <strong>randomization<\/strong>, has a different goal and is used at a different point in the intervention research process. Once we have begun to randomly select our participants, our study design might call for us to assign these recruited individuals to experience different intervention conditions\u201d (Begun, Berger, &amp; Otto-Salaj, 2018, p. 
17-18).<\/em><\/p><\/blockquote>\n<p>Several study designs examined in Chapter 3 of this module involved random assignment of participants to one or another experimental condition. The purpose of random assignment is to improve the ability to attribute any observed group differences in the outcome data to the intervention conditions rather than to pre-existing differences among group members (internal validity). For example, if we were comparing individuals who received a novel intervention with those who received a treatment as usual (TAU) condition, we would be in trouble if there happened to be more women in the novel treatment group and more men in the TAU group. We would not know whether differences observed at the end were attributable to the intervention or were a function of gender instead.<\/p>\n<p>Consider, for example, the randomized controlled trial (RCT) design from an intervention study to prevent childhood bullying (Jenson et al., 2010). A total of 28 elementary schools participated in this study, with 14 randomly assigned to the experimental condition (the new intervention) and 14 to the no-treatment control group. This allowed investigators to compare outcomes of the two treatment conditions with considerable confidence that observed differences were attributable to the intervention; however, they were somewhat unlucky in their randomization effort, since a greater percentage of children in the experimental condition were Latino\/Latina than in the control condition. 
This ethnicity factor needs to be taken into consideration in conclusions about the observed significant reduction in bully victimization among students in the experimental schools compared to the control group.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter size-full wp-image-197\" src=\"https:\/\/pressbooks.ulib.csuohio.edu\/projectmanagement2ndedition\/wp-content\/uploads\/sites\/185\/2024\/08\/Illustration-of-an-angry-figure-with-a-word-cloud-of-negative-terms.png\" alt=\"Illustration of an angry figure with a word cloud of negative terms\" width=\"350\" height=\"270\" srcset=\"https:\/\/pressbooks.ulib.csuohio.edu\/swk627\/wp-content\/uploads\/sites\/185\/2024\/08\/Illustration-of-an-angry-figure-with-a-word-cloud-of-negative-terms.png 350w, https:\/\/pressbooks.ulib.csuohio.edu\/swk627\/wp-content\/uploads\/sites\/185\/2024\/08\/Illustration-of-an-angry-figure-with-a-word-cloud-of-negative-terms-300x231.png 300w, https:\/\/pressbooks.ulib.csuohio.edu\/swk627\/wp-content\/uploads\/sites\/185\/2024\/08\/Illustration-of-an-angry-figure-with-a-word-cloud-of-negative-terms-65x50.png 65w, https:\/\/pressbooks.ulib.csuohio.edu\/swk627\/wp-content\/uploads\/sites\/185\/2024\/08\/Illustration-of-an-angry-figure-with-a-word-cloud-of-negative-terms-225x174.png 225w\" sizes=\"auto, (max-width: 350px) 100vw, 350px\" \/><\/p>\n<h3><strong>Random assignment success &amp; failure. <\/strong><\/h3>\n<p>Randomly assigning participants to different experimental or intervention conditions requires investigators to introduce chance into the process. Randomness means a lack of systematic assignment. So, if you were to alternately assign participants to one condition or the other based on the order in which each enrolled in the study, a certain degree of chance is invoked: persons 1, 3, 5, 7 and so forth = control group; persons 2, 4, 6, 8 and so forth = experimental group. 
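<\/p>\n<p>The two assignment schemes can be sketched in a few lines of code. This is an illustrative sketch only\u2014the participant labels and random seed are hypothetical, not drawn from any study described in this chapter\u2014contrasting alternation by enrollment order with a chance-based shuffle:<\/p>

```python
import random

def assign_alternating(participants):
    # Alternate by enrollment order, as described in the text:
    # persons 1, 3, 5, 7, ... -> control; persons 2, 4, 6, 8, ... -> experimental.
    return participants[0::2], participants[1::2]

def assign_randomly(participants, seed=None):
    # Chance-based assignment: shuffle the pool, then split it in half.
    pool = list(participants)
    random.Random(seed).shuffle(pool)
    half = len(pool) // 2
    return pool[:half], pool[half:]

# Hypothetical participants, listed in enrollment order.
people = [f"person_{i}" for i in range(1, 9)]

control, experimental = assign_alternating(people)
# control now holds person_1, person_3, person_5, person_7
```

<p>Note that even the shuffle-based version relies purely on chance, so it can still, by bad luck, produce groups that differ on some characteristic.<\/p>\n<p>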
This system is only good if there is nothing systematic about how participants were accepted into the study\u2014nothing alphabetical or gendered or otherwise nonrandom. Systems of chance include lottery, roll of the dice, playing card draws, or use of a random numbers table\u2014the same kinds of systems you read about in our prior course when we discussed how individuals might be randomly selected for participation in the sample. What could possibly go wrong?<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter size-full wp-image-198\" src=\"https:\/\/pressbooks.ulib.csuohio.edu\/projectmanagement2ndedition\/wp-content\/uploads\/sites\/185\/2024\/08\/two-dice-being-rolled.png\" alt=\"two dice being rolled\" width=\"114\" height=\"129\" srcset=\"https:\/\/pressbooks.ulib.csuohio.edu\/swk627\/wp-content\/uploads\/sites\/185\/2024\/08\/two-dice-being-rolled.png 114w, https:\/\/pressbooks.ulib.csuohio.edu\/swk627\/wp-content\/uploads\/sites\/185\/2024\/08\/two-dice-being-rolled-65x74.png 65w\" sizes=\"auto, (max-width: 114px) 100vw, 114px\" \/><\/p>\n<p>Unfortunately, relying on chance means that random assignment (randomization) may fail to produce a balanced distribution of participant characteristics even when the assigned groups are equal in size. This unfortunate luck was evident in the Jenson et al. (2010) study previously mentioned, where the distribution of Latino\/Latina students was disproportionate in the two intervention condition groups. However, those investigators were only unlucky on this one dimension\u2014there was reasonable comparability on a host of other variables.<\/p>\n<p>Another way that random assignment sometimes goes wrong is through failure to stick to the rules of the randomization plan. Perhaps a practitioner really wants a particular client to experience the novel intervention (or the client threatens to participate only if assigned to that group). 
Suppose the practitioner somehow manipulates the individual\u2019s assignment with the intent to replace that person with another individual so the numbers assigned to each group even out. Unfortunately, the result is reduced integrity of the overall study design\u2014those individuals\u2019 assignments not being random means that systematic assignment has crept into the study, jeopardizing study conclusions. \u201cRandomization ensures that each patient has an equal chance of receiving any of the treatments under study\u201d and generates comparable groups \u201cwhich are alike in all the important aspects\u201d with the notable exception of which intervention the groups receive (Suresh, 2011, p. 8). In reality, investigators, practitioners, and participants may be tempted to \u201ccheat\u201d chance to achieve a hoped-for assignment.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter size-full wp-image-199\" src=\"https:\/\/pressbooks.ulib.csuohio.edu\/projectmanagement2ndedition\/wp-content\/uploads\/sites\/185\/2024\/08\/person-with-hand-behind-back-with-fingers-crossed.png\" alt=\"person with hand behind back with fingers crossed\" width=\"237\" height=\"177\" srcset=\"https:\/\/pressbooks.ulib.csuohio.edu\/swk627\/wp-content\/uploads\/sites\/185\/2024\/08\/person-with-hand-behind-back-with-fingers-crossed.png 237w, https:\/\/pressbooks.ulib.csuohio.edu\/swk627\/wp-content\/uploads\/sites\/185\/2024\/08\/person-with-hand-behind-back-with-fingers-crossed-65x49.png 65w, https:\/\/pressbooks.ulib.csuohio.edu\/swk627\/wp-content\/uploads\/sites\/185\/2024\/08\/person-with-hand-behind-back-with-fingers-crossed-225x168.png 225w\" sizes=\"auto, (max-width: 237px) 100vw, 237px\" \/><\/p>\n<h3><strong>Assessing randomization results.<\/strong><\/h3>\n<p>Investigators need to determine the degree to which their random assignment or randomization efforts were successful in creating equivalent groups. 
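<\/p>\n<p>One common equivalence check is to test whether a categorical characteristic, such as gender, is distributed similarly across the assigned groups. The sketch below computes a Pearson chi-square statistic for a 2\u00d72 table of gender by assigned condition; the counts are invented for illustration and come from no study cited in this chapter:<\/p>

```python
# Hypothetical 2x2 table: assigned condition by gender.
# These counts are invented for illustration only.
observed = {
    ("control", "women"): 22, ("control", "men"): 18,
    ("experimental", "women"): 19, ("experimental", "men"): 21,
}

def chi_square_2x2(obs):
    """Pearson chi-square statistic for a 2x2 contingency table."""
    groups = ["control", "experimental"]
    cats = ["women", "men"]
    total = sum(obs.values())
    row = {g: sum(obs[(g, c)] for c in cats) for g in groups}
    col = {c: sum(obs[(g, c)] for g in groups) for c in cats}
    stat = 0.0
    for g in groups:
        for c in cats:
            expected = row[g] * col[c] / total  # count expected if balanced
            stat += (obs[(g, c)] - expected) ** 2 / expected
    return stat

stat = chi_square_2x2(observed)
# stat is about 0.45, well below the 1-df critical value of 3.84 at alpha = .05,
# so no gender imbalance is detected between the two groups in this example.
```

<p>In practice investigators would use statistical software rather than hand computation, but the logic is the same: a small statistic means no detectable imbalance was observed.<\/p>\n<p>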
To do this, they often turn to the kinds of statistical analyses you learned about in our prior course: chi-square, independent samples <em>t<\/em>-test, or analysis of variance (ANOVA), depending on the nature of the variables involved. The major difference here, compared to the analyses we previously practiced, is what an investigator hopes the result of the analysis will be. Here is the logic explained:<\/p>\n<ol>\n<li>The null hypothesis (H<sub>0<\/sub>) is that no difference exists between the groups.<\/li>\n<li>If the groups are equivalent, the analysis should find no difference.<\/li>\n<li>The investigator therefore hopes not to find a difference; this does not guarantee that the groups are the same, only that no difference was observed.<\/li>\n<\/ol>\n<p>To refresh your memory of how to work with these three types of analyses and to make them relevant to the question of how well randomization worked, we have three exercises in our Excel workbook.<\/p>\n<div class=\"textbox textbox--learning-objectives\">\n<header class=\"textbox__header\">\n<p class=\"textbox__title\">Interactive Excel Workbook Activities<\/p>\n<\/header>\n<div class=\"textbox__content\">\n<p>Complete the following Workbook Activities:<\/p>\n<ul>\n<li><a href=\"https:\/\/ohiostate.pressbooks.pub\/swk3401workbook\/chapter\/swk-3402-3-4-2-exercise-testing-randomization-with-chi-square-analysis\/\">SWK 3402.3-4.2 Exercise Testing Randomization with Chi-Square Analysis<\/a><\/li>\n<li><a href=\"https:\/\/ohiostate.pressbooks.pub\/swk3401workbook\/chapter\/swk-3402-3-4-3-exercise-testing-randomization-with-independent-samples-t-test-analysis\/\">SWK 3402.3-4.3 Randomization Check: Independent Samples <em>t<\/em>-Test<\/a><\/li>\n<li><a href=\"https:\/\/ohiostate.pressbooks.pub\/swk3401workbook\/chapter\/swk-3402-3-4-4-exercise-testing-randomization-with-analysis-of-variance-anova\/\">Workbook SWK 3402.3-4.4 Randomization Check: ANOVA<\/a><\/li>\n<\/ul>\n<\/div>\n<\/div>\n<h2><strong>Chapter Summary<\/strong><\/h2>\n<p>In this 
chapter you reviewed several concepts related to samples, sampling, and participant recruitment explored in our prior course. The sample size topic was expanded to address how sample size relates to effect size in intervention and evaluation research. Issues related to participant recruitment were reviewed, particularly as they relate to the need for engaging a diverse and representative sample of study participants and how these concerns relate to a study\u2019s external validity. This topic was expanded into a 3-phase model of recruitment processes: generating contacts, screening volunteers for eligibility, and consenting participants. You then read about issues concerning participant retention over time in longitudinal intervention and evaluation studies, especially the importance of participants\u2019 experiences and relationships with the study. This included a discussion of participants\u2019 experiences with random assignment to study conditions, depending on the study design, and how randomization might or might not work. 
You learned a bit about how to assess the adequacy of the randomization effort in our Excel exercises and thought about how randomization successes and failures might affect a study\u2019s integrity and internal validity.<\/p>\n<h2>Stop and Think<\/h2>\n<div style=\"float: left;min-height: 120px;width: 99%;margin-bottom: 10px;padding: 10px;background-color: #f1f7fe\">\n<p><img loading=\"lazy\" decoding=\"async\" class=\"size-full wp-image-194 alignleft\" src=\"https:\/\/pressbooks.ulib.csuohio.edu\/projectmanagement2ndedition\/wp-content\/uploads\/sites\/185\/2024\/08\/Stop_ThinkV2-150x150-1.png\" alt=\"Stop and Think\" width=\"150\" height=\"150\" \/>Take a moment to complete the following activity.<\/p>\n<div class=\"h5p-iframe-wrapper\"><iframe id=\"h5p-iframe-10\" class=\"h5p-iframe\" data-content-id=\"10\" style=\"height:1px\" src=\"about:blank\" frameBorder=\"0\" scrolling=\"no\" title=\"Module 3 ch 4\"><\/iframe><\/div>\n<\/div>\n","protected":false},"author":3,"menu_order":5,"template":"","meta":{"pb_show_title":"on","pb_short_title":"","pb_subtitle":"","pb_authors":[],"pb_section_license":""},"chapter-type":[],"contributor":[],"license":[],"class_list":["post-200","chapter","type-chapter","status-publish","hentry"],"part":110,"_links":{"self":[{"href":"https:\/\/pressbooks.ulib.csuohio.edu\/swk627\/wp-json\/pressbooks\/v2\/chapters\/200","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/pressbooks.ulib.csuohio.edu\/swk627\/wp-json\/pressbooks\/v2\/chapters"}],"about":[{"href":"https:\/\/pressbooks.ulib.csuohio.edu\/swk627\/wp-json\/wp\/v2\/types\/chapter"}],"author":[{"embeddable":true,"href":"https:\/\/pressbooks.ulib.csuohio.edu\/swk627\/wp-json\/wp\/v2\/users\/3"}],"version-history":[{"count":1,"href":"https:\/\/pressbooks.ulib.csuohio.edu\/swk627\/wp-json\/pressbooks\/v2\/chapters\/200\/revisions"}],"predecessor-version":[{"id":201,"href":"https:\/\/pressbooks.ulib.csuohio.edu\/swk627\/wp-json\/pressbooks\/v2\/chapters\/200\/revisions\/201"
}],"part":[{"href":"https:\/\/pressbooks.ulib.csuohio.edu\/swk627\/wp-json\/pressbooks\/v2\/parts\/110"}],"metadata":[{"href":"https:\/\/pressbooks.ulib.csuohio.edu\/swk627\/wp-json\/pressbooks\/v2\/chapters\/200\/metadata\/"}],"wp:attachment":[{"href":"https:\/\/pressbooks.ulib.csuohio.edu\/swk627\/wp-json\/wp\/v2\/media?parent=200"}],"wp:term":[{"taxonomy":"chapter-type","embeddable":true,"href":"https:\/\/pressbooks.ulib.csuohio.edu\/swk627\/wp-json\/pressbooks\/v2\/chapter-type?post=200"},{"taxonomy":"contributor","embeddable":true,"href":"https:\/\/pressbooks.ulib.csuohio.edu\/swk627\/wp-json\/wp\/v2\/contributor?post=200"},{"taxonomy":"license","embeddable":true,"href":"https:\/\/pressbooks.ulib.csuohio.edu\/swk627\/wp-json\/wp\/v2\/license?post=200"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}