Psychosocial program evaluation study involving older adults

Description
For this paper, you are to identify and critique a psychosocial program evaluation study involving older adults. This should be a published study appearing in the research literature. You cannot use the Bruce et al. (2004) article described in Unit 8.
Your critique should address each of the criteria described in Unit 8 that apply to program evaluation research with older adults (e.g., meaningful between-group differences, use of multiple measures of response). (Please see the content below and the uploaded material (Chapter 11) for the Unit 8 description.) You are also to provide a summary/overall appraisal of the effectiveness of the program examined and of the evaluation study conducted.
UNIT 8:
Introduction
More and more, governments and funding agencies want to see evidence of the benefits and cost effectiveness of social interventions to ensure that limited resources are spent wisely. As a result, evaluation research has become one of the most common types of applied research in the social sciences. Such research is rarely theoretically driven; instead, it examines practical concerns (e.g., cost/benefit analyses of a new functional maintenance program for older adults in comparison to existing programs). As with research in other domains, many of the best evaluation studies combine quantitative and qualitative design elements.
Learning Objectives
This unit will help you understand the intent and general design of evaluation research.
By the end of this unit, you should be able to:
• describe the intent of evaluation research and how it differs from theory-driven research
• compare and contrast each of the five types of program evaluation research
• describe how quasi-experimental design is used in evaluation
Elements of Evaluation Research
The second assignment for this course entails the selection and critique of a specific gerontological evaluation study. To assist you in this task, read the Bruce et al. (2004) study. Critique this study using the following criteria:
1. Identify the experimental design. Is it a true experiment, quasi-experiment, or qualitative study? Does this study reflect a combined design (i.e., qualitative and quantitative methodologies)? How might the design of this study have been improved?
2. Did the researcher(s) make use of multiple outcome measures? List each of the response variables measured in this study. Do they provide a sufficient range of data (i.e., provide complete data in order to address the study’s purposes)? Are there any conflicting findings, and, if so, how adequately are these explained by the authors?
3. Do the treatment and control groups differ with respect to key secondary variables? How might between-group differences with respect to secondary variables be related to dependent variables?
4. Are between-group differences meaningful or just statistical artefacts? Particularly with repeated-measures analyses and large sample sizes, it is possible to identify statistically significant between-group differences that have little practical relevance. For example, a between-group difference of one or two points on the Beck Depression Inventory (BDI-II) might be sufficient to indicate statistical significance; in terms of the participants’ experience of symptoms, however, this difference may be meaningless (i.e., the relative experience of the two groups is virtually identical). It is important that evaluation studies identify both statistical and practical between-group differences in order to demonstrate program effectiveness.

One reason statistical significance may result when between-group differences are practically meaningless relates to sample size. As sample sizes increase, it becomes possible to identify smaller between-group differences as statistically significant (i.e., the analyses acquire what is known as greater statistical power). The corollary also holds: with small sample sizes, between-group differences may fail to reach statistical significance even though participants in the treatment group, as opposed to the control group, experience substantively different outcomes. In other words, small sample sizes can obscure meaningful between-group differences and lead the researcher to the erroneous conclusion that the groups do not differ. (For further discussion of statistical power, see Cohen, 1992; a numeric sketch of the sample-size/power relationship appears after this list.)
5. Were cost/benefit analyses performed? In other words, was a dollar value derived to quantify program benefits relative to the costs of the intervention? Often, interventions will demonstrate the intended effects (e.g., 24-hour, in-home nursing care will keep many seniors out of institutions). Yet when the costs are examined, there may be no net benefit of the program (e.g., the cost per day of institutional care is less than that of 24-hour private nursing care). (A worked example appears after this list.)
6. Were contamination effects considered? This point concerns internal validity. Are we certain that the observed benefits of the program are a result of the intervention, or could they be caused by extraneous factors? Did the researchers control for contamination sufficiently to rule out such extraneous effects? As an example, some interventions are delivered by a single clinician. It is possible, therefore, that observed benefits arise from the personality, enthusiasm, or specific skills of this one clinician. Any intervention delivered by that clinician may appear effective irrespective of the quality of the intervention itself (e.g., reading tea leaves). For this reason, it is ideal to have interventions delivered by more than one clinician and to have analyses performed to determine whether net program benefits are the same for all (see the sketch after this list).
7. Was multiple time-series measurement performed? Were the time intervals between points of measurement sufficient? Was there adequate follow-up after discontinuation of the intervention to ensure that the full effect of program benefits was assessed? This was a criticism of early program evaluation studies that performed only very brief follow-ups (e.g., three months after program termination). When researchers claim that their intervention provides prophylactic effects (e.g., reduced rate of relapse), it is particularly important that the effects of programs be measured over an extended period (often one year or more).
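To make criterion 4 concrete, here is a minimal sketch of the sample-size/power relationship. It is not part of the original unit materials: it assumes Python with the statsmodels package, and the one-point BDI-II difference and ten-point standard deviation are illustrative assumptions, not values from any real study.

    # Minimal power-analysis sketch (illustrative numbers only).
    from statsmodels.stats.power import TTestIndPower

    analysis = TTestIndPower()

    # A 1-point between-group difference on the BDI-II, against an assumed
    # standard deviation of 10 points, is a tiny standardized effect (d = 0.1).
    d = 1.0 / 10.0

    # Per-group sample size needed to detect d = 0.1 at alpha = .05, power = .80:
    n_per_group = analysis.solve_power(effect_size=d, alpha=0.05, power=0.80,
                                       alternative='two-sided')
    print(f"n per group to detect d = {d}: {n_per_group:.0f}")  # roughly 1571

    # Power to detect the same tiny effect with only 30 participants per group:
    small_n_power = analysis.power(effect_size=d, nobs1=30, alpha=0.05)
    print(f"power with n = 30 per group: {small_n_power:.2f}")  # roughly 0.07

With roughly 1,600 participants per group, a clinically trivial one-point difference becomes statistically significant; with 30 per group, even a real difference of that size is almost never detected. This is exactly the trade-off criterion 4 asks you to evaluate.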
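Similarly, for criterion 5, a back-of-the-envelope calculation shows how a program can deliver its intended effect yet yield no net dollar benefit. Every dollar figure below is invented for illustration; real analyses would use audited program and comparator costs.

    # Hypothetical cost/benefit calculation (all figures invented).
    DAYS_PER_YEAR = 365
    cost_institutional_per_day = 250.0  # assumed cost of institutional care
    cost_home_nursing_per_day = 400.0   # assumed cost of 24-hour in-home nursing

    # Benefit: institutional costs avoided by keeping a senior at home.
    benefit = cost_institutional_per_day * DAYS_PER_YEAR
    # Cost: what the program itself costs over the same period.
    cost = cost_home_nursing_per_day * DAYS_PER_YEAR

    net_benefit = benefit - cost
    print(f"Net benefit per client per year: ${net_benefit:,.0f}")  # $-54,750 (a net loss)

Here the intervention "works" (the senior stays out of the institution), yet the program costs more than it saves, which is the scenario described in criterion 5.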
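Finally, for criterion 6, one common way to check whether benefits depend on who delivers the intervention is to test a treatment-by-clinician interaction. The sketch below simulates data and fits a two-way ANOVA with statsmodels; the variable names, group sizes, and effect sizes are invented for illustration.

    # Sketch of a treatment-by-clinician interaction test (simulated data).
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n = 200
    data = pd.DataFrame({
        "treatment": rng.integers(0, 2, n),  # 0 = control, 1 = intervention
        "clinician": rng.integers(0, 3, n),  # which of three clinicians delivered it
    })
    # Simulated outcome: depression score drops 2 points under treatment, plus noise.
    data["outcome"] = 10 - 2 * data["treatment"] + rng.normal(0, 3, n)

    # A significant treatment:clinician interaction would suggest that program
    # benefits differ by clinician -- a possible contamination effect.
    model = smf.ols("outcome ~ C(treatment) * C(clinician)", data=data).fit()
    print(sm.stats.anova_lm(model, typ=2))

A non-significant interaction term is consistent with benefits being uniform across clinicians, which is what a well-controlled evaluation hopes to show.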

Process
It is strongly recommended that you contact your tutor-marker with the title and abstract of your proposed study before you begin writing to confirm that your choice is, in fact, an evaluation study.
A good place to start is one of the electronic research databases, such as AgeLine or PsycINFO. The evaluation study you critique can be based on any program for older adults, but it must examine a psychosocial intervention, not a pharmaceutical randomized clinical trial. (Also avoid pilot studies and pre-evaluation studies with no comparison group.)