Abstract

Purpose: This article reviews the effectiveness of a new training program for improving nursing staff's detection of depression within long-term care facilities. The course was designed to increase recognition of the Minimum Data Set (MDS) mood trigger items, to be brief, and to rely on images rather than didactics. Design and Methods: This study used a delayed intervention design. Twenty nurses from two facilities participated in all four sessions of the study. Results: Staff exposed to the intervention (Site 1) improved significantly in their ability to detect mood symptoms in videotaped patients after completing the training course compared with those exposed to the delayed intervention (Site 2). Improvement in detection skills at Site 2 following the training confirmed the intervention's utility. The improvement was demonstrated across levels of staff (licensed and unlicensed). Maintenance of skills was demonstrated at the 4-month follow-up. Implications: Staff successfully improved their knowledge of the MDS mood triggers and their skill in detecting them. This method may lend itself to other MDS domains.

Decision Editor: Eleanor S. McConnell, RN, PhD

Depression and other psychiatric illnesses are highly prevalent among nursing home residents, but nursing staff are often ill equipped to detect symptoms of these disorders (National Institutes of Health Consensus Development Conference 1991; Parmelee, Katz, and Lawton 1989; Rovner et al. 1991; Wood et al. 2000). The federally mandated Minimum Data Set (MDS) requires observations of mood symptoms, but few tools for implementing these requirements are available (Phillips et al. 1997). An educational intervention that improves the detection of depression symptoms by nursing staff has the potential to increase the number of residents treated for depression. The goal of the current project was to develop a brief training program specifically designed for the nursing home culture and its many training challenges.

The MDS has been found to be a reliable and valid tool for most domains when completed by research nurses. However, reliability is lowest for mood symptoms, even when the instrument is completed by trained research nurses, with a kappa coefficient of less than .4 (Phillips et al. 1997). Burrows, Morris, Simon, Hirdes, and Phillips (2000) recently developed an MDS-based depression rating scale and reported that these ratings compared favorably with independent assessments on the Geriatric Depression Scale when nurses used standard observations. Its efficacy compared with typical nurse ratings has not been reported. In a recent study, Schnelle, Wood, Schnelle, and Simmons (2001) demonstrated that MDS-derived quality indicators for depression underestimated prevalence as documented by independent assessment in long-term care facilities. A facility with on-site psychiatric services was much more accurate in its MDS ratings than was one that relied solely on nursing staff to complete the MDS mood ratings. Although the two homes had equivalent rates of depression based on independent evaluations, the state normative ratings for these homes—75th percentile for the first home (indicating high rates of depression) and 25th percentile for the second (indicating low rates of depression)—suggest that most facilities underdetect depression.

In a separate study (Wood et al. 2000), licensed vocational nurses' (LVNs') ratings of patients on the Neuropsychiatric Inventory—Nursing Home version were compared with an independent assessment of mood symptoms. We found that although the licensed staff were reasonably accurate at the detection of some triggers like agitation and psychosis, they were very poor informants in terms of depression recognition. The nursing assistants (NAs) in that study performed at roughly chance levels. Taken together, these studies suggest that the development of an effective staff training program to address these deficiencies is warranted.

In 1986, the Institute of Medicine issued a report encouraging training interventions with nursing home staff in order to improve skill, boost self-esteem, and improve resident care (Institute of Medicine 1986). In 1987, the Omnibus Budget Reconciliation Act mandated an increase in the amount of training received by NAs (Omnibus Budget Reconciliation Act, 1987). There have been numerous attempts by intervention researchers to meet this challenge over the years, with specific emphasis on behavioral management of agitation, increasing self-reliant behavior of residents, and incontinence care (Baltes, Neumann, and Zank 1994; McCallion, Toseland, and Freeman 1999; Schnelle, Newman, and Fogarty 1990; Stevens et al. 1998). These training interventions have been successful but have either required a significant time commitment from the staff or have relied on research staff. For example, Stevens and associates reported significant improvement in behavioral management skills after a 5-hr inservice plus 3 weeks of on-the-job training. Baltes and colleagues reported an increase in independence-supportive behaviors of staff after 10 group sessions and 4 weeks of skills practice. McCallion and associates recently designed a staff intervention to improve communication between nurses' aides and dementia residents that was less time intensive, but the training required about 6 hr of inservice time and significant follow-up from the paid trainer. These interventions have the potential to improve quality of care, but because of the significant cost of implementing the procedures, it is unlikely that they will be broadly used in typical for-profit facilities. Furthermore, these attempts to improve care have not been specifically linked to the MDS.

Training nursing staff is a challenge for numerous reasons, including high turnover rates (about 50%–80% per year), various primary languages, burnout leading to poor motivation, and time constraints due to the heavy workloads and frequent deadlines (Burgio and Burgio 1990; Waxman, Carner, and Berkenstock 1984). Consequently, we designed our training program to be (a) brief (the training took about 60 min total in a series of four modules); (b) flexible, meaning that it could be administered one-on-one or to a group and all at once or over four sessions; (c) relevant, based directly on the MDS mood triggers and not unfamiliar psychiatric constructs; and (d) engaging, using video-based scenarios versus English-based lectures. We included all levels of staff. We hypothesized that the staff would significantly improve on our outcome measures, which consisted of a series of videotapes of residents from other homes that may or may not have MDS mood triggers and a written test that asked questions about the MDS and depression in general.

Methods

Nursing Staff Participants

The nursing staff were recruited at two local not-for-profit 75-bed facilities. All nursing staff at each facility were invited to participate, and the first 15 to do so at each site were enrolled. At Site 1, participants included 5 licensed staff (LVN or registered nurse) and 10 unlicensed staff (NAs). At Site 2, participants included 8 licensed staff and 12 unlicensed staff. In total, there were 6 men and 29 women. The participants at the intervention site (Site 1) and the participants at the control site (Site 2) did not differ significantly in terms of age, gender, ethnic heritage, level of education, or total length of service (Table 1). The average age of participants was 44 years, and the average length of service was 12 years. Fifty percent of the participants reported English as their first language, 20% reported Spanish, and 14% reported Tagalog; overall, six languages were represented. Over the course of the 6-month study, there was significant attrition: At Time 2 (posttraining), 29 of the 35 staff remained, and at the 4-month follow-up, 20 of 35 remained (7 at Site 1 and 13 at Site 2). The noncompleters did not differ significantly from the groups as a whole in terms of scores on pretest measures, length of service, or age. The nursing staff were paid $50 for their participation.

Study Personnel/Trainers

Stacey Wood, a licensed clinical psychologist familiar with the MDS and mental health issues in the nursing home, developed the curriculum and training tool. The primary trainer was a bachelor's-level project coordinator with an undergraduate degree in psychology. Two secondary trainers were psychology students in their senior year at Scripps College. It was important that the training program be structured so that it could be carried out successfully by a bachelor's-level trainer, as most curriculum directors at nursing homes do not have advanced degrees.

Training Curriculum

The primary course agenda was to improve the identification of the 17 mood triggers listed in the MDS. The training was done in four sessions. Each session had a theme and consisted of 3–5 residents on videotape with some of the target behaviors for that theme. The videotape clips for the training sessions contained some of the residents from that home and some unfamiliar residents. The use of familiar residents helped to engage the staff in discussion. Module 1 focused on sad affect and negative statements. This tape included residents exhibiting sad facial expressions, residents making negative statements, and evidence of hopelessness and tearfulness. The theme for Module 2 was anxiety and isolation. This tape focused on the major anxiety symptoms from Section E of the MDS and included an anxious and socially isolated resident. Module 3 focused on the triggers of irritability, anger, and somatic complaints. The fourth module was a wrap-up session, including an array of Section E MDS mood items for review, which included crying, somatic complaints, sad affect, and no symptoms.

A training tool was developed to help the staff analyze the video clips according to the MDS and to familiarize them with MDS language for mood triggers. The tool consisted of a list of all 17 mood triggers from Section E of the MDS. Each trigger was explained in simple terms, and an example of the behavior was given. For instance, the statement "Patient makes negative statements like ‘I want to die’" exemplified the first trigger (see Appendix for complete listing of the tool).

Outcome Measures

Three outcome measures were used in this study: a videotaped vignette test, a written test, and a qualitative course evaluation/motivation survey. The vignette test consisted of five nursing home residents displaying a total of 22 mood triggers. The participants viewed this tape at the beginning of the program and were instructed to list all the MDS mood triggers that they saw for each resident. The test vignettes were never discussed and were not used in the subsequent training sessions. At the end of the study, the same videotape was used, and the participants were again asked to list all the MDS triggers.

The second measure was a written test. This measure consisted of five questions, including one that asked staff to list five symptoms of depression and a second that asked staff to list as many of the MDS triggers for mood as they could. These two questions were directly related to the course agenda, and data from these questions were saved and analyzed. The three other questions asked about the difference in the appearance of depression in young people and elderly people, possible effects of untreated depression, and familiarity with the MDS instructions for assessing mood. These were not scored.

The third measure was a qualitative course evaluation and a motivation survey. The course evaluation simply asked the staff if they found the course helpful, what they liked or disliked about it, and if they preferred the vignette style to the traditional inservice-style training. The motivation survey consisted of five questions concerning the staff's motivation for taking the class. They were asked to state their primary motivation for course participation, their interest in continuing education credits, and whether or not they would participate without a financial incentive. We did not offer continuing education credit for this program because at the time the State of California required that all inservices be 50 min long, and our typical module was 20 min in duration.

Procedure

The training program was as highly scripted as possible so that, if successful, it could be implemented easily at other sites. Before the training, the primary trainer and the psychologist viewed the video clips and rated each resident separately using the MDS checklist tool described earlier. After achieving consensus on the symptoms displayed by each resident, they created a scoring key (e.g., Resident 1 displayed negative statements and tearfulness). They next completed each module together for the first resident to ensure that the trainer and psychologist were consistent in their descriptions of the symptoms.
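One way to picture the scoring-key step is as a per-resident set intersection: a rater's vignette score is the number of keyed triggers he or she listed for each resident. The trigger labels, resident identifiers, and the `vignette_score` helper below are illustrative assumptions for this sketch, not the study's actual materials or scoring rules (e.g., whether false positives were penalized is not specified here).

```python
# Hypothetical sketch of vignette-test scoring: each resident has a consensus
# key of MDS Section E triggers, and a rater's score is the number of keyed
# triggers he or she listed. This simple count ignores false positives.
def vignette_score(key, ratings):
    """key, ratings: dicts mapping resident id -> set of trigger labels."""
    return sum(len(key[r] & ratings.get(r, set())) for r in key)

# Invented consensus key and one rater's responses, for illustration only.
consensus = {
    "resident_1": {"negative statements", "tearfulness"},
    "resident_2": {"sad facial expression", "repetitive complaints"},
}
observed = {
    "resident_1": {"negative statements"},
    "resident_2": {"sad facial expression", "repetitive complaints", "anger"},
}
print(vignette_score(consensus, observed))  # 3 of the 4 keyed triggers listed
```

Here "anger" is not in the key for resident 2, so it contributes nothing to the count; summing across the five test residents would give a score out of the 22 keyed triggers.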

Following recruitment and informed consent, a pretest was scheduled with each participant according to his or her schedule. The vignette pretest was administered first, followed by the written pretest. The participants were asked to list any mood symptoms that they saw in each of the videotaped residents. After the pretest was administered, the first training session was scheduled. Each training session followed the same format: (a) a vignette pretest, (b) a case-by-case discussion, and (c) a vignette posttest. For the pretest, participants viewed a series of vignettes and wrote down any mood symptoms they observed in each resident. Next, the learning tool was provided, including a list of all of the mood triggers from Section E of the MDS and examples of each behavioral cue. The researcher then played the same tape and paused after each patient to discuss with the staff what they saw and what behaviors corresponded to the MDS learning tool. After the discussion, the participants were instructed to watch the vignette series again and, using their tool, write down everything they could remember about the mood and behavior symptoms presented in the videotaped vignettes from the previous discussion. Vignette Sessions 1, 2, 3, and 4 all followed the same procedure. The sessions were given either individually or in small groups of 3 to 4 according to the staff's preference. After Training Session 4 was completed, a posttest was scheduled. Approximately 4 months following the last session, a follow-up session was scheduled with remaining staff in order to determine maintenance of skills. At this session, the posttest was administered.

Design

This study used a parallel-group, delayed-intervention design. Site 1 served as the intervention site and Site 2 served as the control/delayed intervention site. It was not feasible to randomly assign staff to either control or intervention because both homes were small and participants would likely communicate with each other. At Site 1, the nursing staff took the pretest during a specified week, and then individual sessions were scheduled. Average duration for the entire program was 4 weeks, including the posttest. At Site 2, participants took the pretest, had a delay of 4 weeks, and then were administered the posttest. The pretest and posttest were identical. At that point, Site 2 began the training program, and a second posttest was administered at completion approximately 4 weeks later. Approximately 4 months following the last session, the follow-up session was scheduled to assess maintenance of skills. At follow-up, the staff took the videotaped vignette and written tests a final time. The use of the same measure allowed for direct comparisons (see Table 2 for design).

Analysis

All analyses were done in SPSS version 9.0 (SPSS Inc., Chicago, IL) using the general linear model repeated-measures analysis of variance (ANOVA). The statistical analyses used a mixed design, with time (Pretest 1 vs Pretest 2) as the within-subject variable and site (intervention vs control) as the between-subjects variable. A separate ANOVA was done to examine the performance of licensed versus unlicensed staff. Finally, a within-subjects analysis was done at Site 2 to compare the Pretest 2 scores with the posttest scores obtained after training. Time 4 was a 4-month follow-up to assess maintenance of skills.
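As an illustration of the mixed design described above, the following is a minimal hand-rolled 2 × 2 mixed ANOVA (site as the between-subjects factor, time as the within-subject factor). It assumes equal group sizes, and the generated scores are invented for illustration; they are not the study data and the function is not the SPSS procedure the authors used.

```python
# Minimal 2 (site: between) x 2 (time: within) mixed ANOVA, assuming equal
# group sizes. The scores generated below are invented for illustration only.
import numpy as np
from scipy import stats

def mixed_anova_2x2(pre_a, post_a, pre_b, post_b):
    """Return (F_site, F_time, F_interaction) for a 2 x 2 mixed design."""
    Y = [np.column_stack([pre_a, post_a]), np.column_stack([pre_b, post_b])]
    n, T, G = len(pre_a), 2, 2
    N = G * n                                    # total subjects
    grand = np.mean([y.mean() for y in Y])
    subj = [y.mean(axis=1) for y in Y]           # one mean per subject
    gmean = [y.mean() for y in Y]                # site means
    tmean = np.mean([y.mean(axis=0) for y in Y], axis=0)  # time means

    ss_site = T * n * sum((g - grand) ** 2 for g in gmean)
    ss_subj = T * sum(((s - g) ** 2).sum() for s, g in zip(subj, gmean))
    ss_time = N * ((tmean - grand) ** 2).sum()
    ss_int = n * sum((Y[g][:, t].mean() - gmean[g] - tmean[t] + grand) ** 2
                     for g in range(G) for t in range(T))
    # Within-subject error: residuals after removing subject and cell effects.
    ss_err = sum(((Y[g] - subj[g][:, None]
                   - (Y[g].mean(axis=0) - gmean[g])[None, :]) ** 2).sum()
                 for g in range(G))

    ms_err_b = ss_subj / (N - G)                 # between-subjects error term
    ms_err_w = ss_err / ((N - G) * (T - 1))      # within-subjects error term
    return (ss_site / (G - 1) / ms_err_b,
            ss_time / (T - 1) / ms_err_w,
            ss_int / ((G - 1) * (T - 1)) / ms_err_w)

# Invented scores: the intervention site improves, the control site does not.
rng = np.random.default_rng(0)
pre_a = rng.normal(11, 3, 13); post_a = pre_a + rng.normal(6, 2, 13)
pre_b = rng.normal(9, 3, 13);  post_b = pre_b + rng.normal(1, 2, 13)

F_site, F_time, F_int = mixed_anova_2x2(pre_a, post_a, pre_b, post_b)
df_err = 2 * 13 - 2
print(f"Site:        F(1, {df_err}) = {F_site:.2f}")
print(f"Time:        F(1, {df_err}) = {F_time:.2f}")
print(f"Site x Time: F(1, {df_err}) = {F_int:.2f}")
```

Because both factors have only two levels, each F here equals the square of a corresponding t statistic (e.g., an independent-samples t on the change scores reproduces the Site × Time interaction), which provides a convenient arithmetic check.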

Results

There was no significant difference at Time 1 between the intervention and the control sites on the pretest videotaped vignette measure, t(28) = 1.7, ns, or on the pretest written measure, t(28) = 1.1, ns. These results suggest that the control group and the intervention group had equivalent baseline skills (see Table 1 for baseline means). There was, however, a difference between the levels of nursing staff: Licensed staff performed significantly better on the written pretest than did unlicensed staff across sites (p < .05) but did not perform significantly better on the vignette test. To assess stability, we calculated a Pearson correlation for the videotaped vignette test and for the written test at Site 2 between Time 1 and Time 2 (there was no intervention between these two times at the delayed site). The Pearson correlations were significant, r = .800, p < .01, for the vignette test and r = .504, p < .05, for the written test. These results indicate that the outcome measures were stable and that changes in performance were not due to practice effects alone.
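The stability check reported above amounts to a simple test–retest correlation, which can be sketched as follows; the score vectors are invented for illustration and are not the study data.

```python
# Sketch of the test-retest stability check: correlate Time 1 and Time 2
# scores at the delayed site, where no training occurred between the tests.
# The score vectors below are invented for illustration, not the study data.
import numpy as np
from scipy import stats

time1 = np.array([11, 8, 9, 13, 7, 10, 12, 6, 9, 11, 8, 10, 7], dtype=float)
time2 = time1 + np.array([1, -1, 0, 1, 0, -1, 1, 0, -1, 0, 1, 0, 1], dtype=float)

r, p = stats.pearsonr(time1, time2)
print(f"r = {r:.3f}, p = {p:.4f}")  # a high r indicates a stable measure
```

A high correlation across a no-intervention interval supports the claim that later gains reflect training rather than measurement drift, though it does not by itself rule out practice effects.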

To evaluate the effect of the training at Site 1 versus the delayed control site, we performed a repeated measures ANOVA. In terms of the videotaped vignette test, there was a significant main effect for time, F(1, 24) = 16.17, p < .01; a significant main effect for site, F(1, 24) = 12.16, p < .01; and a significant Site × Time interaction, F(1, 24) = 5.1, p < .05. These results indicate that performance on the vignette test was better after training at Site 1, the intervention site, in comparison with the control site. Similar analyses were done for the written test. There was a significant main effect for time, F(1, 24) = 15.93, p < .01; a significant main effect for site, F(1, 24) = 6.67, p < .01; and a significant Site × Time interaction, F(1, 24) = 5.06, p < .05. These results demonstrate improvement at the training site in comparison with the control site (see Fig. 1 and Fig. 2).

To evaluate whether the training was effective at Site 2 after the delay, a paired samples t test was done for the Site 2 data comparing the pretest at Time 2 with the posttest following implementation of the training at the control site. There was significant improvement for both the vignette test, t(12) = −4.40, p < .01, and the written measure, t(12) = −2.63, p = .02. These results confirm the findings at Site 1 and increase confidence in the effect of the intervention (see Fig. 3).

After 4 months, a follow-up evaluation was done to determine whether staff had maintained their skills. Twenty of the 28 staff members participated. These participants did not differ significantly from the original cohort in terms of baseline skills assessed by the pretest or on any other demographic variable. The average time posttraining was 16 weeks, 10 days. There was significant improvement compared with baseline for both the vignette test, t(12) = −4.791, p < .01, and the written measure, t(12) = −3.35, p = .03. These results suggest that staff maintained their skills (see Fig. 4).

Separate analyses were done to examine the effect of staff level of training. In terms of the videotaped vignette test, a repeated measures ANOVA demonstrated a significant main effect for time, F(1, 24) = 32.78, p < .01, and the Time × Nurse Level interaction approached significance, F(1, 24) = 3.79, p = .06, suggesting that licensed staff benefited more from the training intervention than did the NAs. However, there was no Time × Nurse Level interaction for the written measure, indicating equal improvement on that measure. Paired t tests demonstrated that at the 4-month follow-up, both groups maintained their skills (p < .05; see Fig. 5).

During the training, it appeared as though the participants were performing progressively better on the module pretests. Using a repeated-measures ANOVA, we found that there was a significant effect for module number on pretest percentage score, F(3, 24) = 10.37, p < .01. A trend analysis showed a significant linear effect, F(3, 24) = 18.18, p < .01, and quadratic effect, F(3, 24) = 4.61, p < .05, revealing a steep improvement after Session 2. This finding suggests a learning curve and, more important, an application of a skill.

Qualitative data were collected through a course evaluation and a motivation survey (Table 3). The results were favorable. Of the 25 participants who filled out the course evaluation, 21 (84%) reported that they liked the training sessions, 9 (36%) reported that the inservice was interesting, and 20 (80%) reported that it was informative. The course evaluation also indicated that 19 participants (76%) felt the inservice was very helpful in improving patient care. Nineteen participants completed the motivation survey, and all 19 reported that they would take the class for continuing education credit. When asked about their motivation, only 1 of the 19 participants (5%) reported that money was the primary motivator. Furthermore, responses on the motivation survey indicated that 95% of those who responded favored this inservice over traditional inservices led by a doctor (see Table 4).

Discussion

The videotaped training technique was successful at teaching nursing home staff to identify MDS mood triggers in residents on videotape and to familiarize themselves with general knowledge about the MDS and depression. These results were achieved without lecture-style teaching. Improvement was demonstrated at all levels of staffing included in the training, and this improvement was evident 4 months later. We believe that the type of interactive teaching described in this program engages the nursing staff and gives them tools that are directly relevant to their jobs. The minimal time commitment and use of images rather than English-based lectures work well in the nursing home culture of high workloads, high turnover, and diversity of staffing.

The intervention was successful across levels of staffing; however, the licensed staff appeared to benefit most, at least on the videotaped vignette measure. Although both levels of staff demonstrated significant maintenance of skills, the licensed staff maintained their skills at a higher level than did the NAs. These findings are consistent with our previous work suggesting that, ultimately, LVNs and registered nurses are better informants regarding patient behavior (Wood et al. 2000). However, the significant improvement on the videotaped and written tests, the enthusiastic participation, and the presence of a learning curve across levels of staffing allow us to be cautiously optimistic for all levels of staff.

In contrast to most interventions of this type, staff demonstrated maintenance of skills 4 months posttraining with no incentive of any type. This was a concern because of the poor maintenance reported following other behavioral interventions (Schnelle et al. 1990). Stevens and colleagues (1998) reported improved maintenance of behavioral management skills using incentives, written feedback from supervisors, and self-monitoring, indicating that maintenance is enhanced when NAs believe that their behavior can make a difference and is rewarded. Our results suggest that staff had incorporated the skills into their routine and that this served to reinforce the skills. Including NAs after training in the case conferences held to rate the MDS would probably improve the resident ratings and the self-efficacy of the staff even further.

The best training outcomes leading to behavior change have been reported using academic detailing techniques (Avorn and Soumerai 1983). Academic detailing, a marketing technique that has been widely applied by the pharmaceutical industry, entails working around staff's schedules, having a clear take-home message, and using professional audiovisual tools. Our program is similar in design: It is flexible and has a simple take-home message in each session. Currently, the licensing laws in California allow staff to receive continuing education credit for two 30-min inservice sessions, and the program could easily be structured to meet this requirement. The modules could be combined to meet the requirements in other states.

The staff appeared to enjoy this training program. Results from our qualitative course evaluations suggested that the staff preferred this type of training to the other alternatives currently available. The choice of the word doctor in our questionnaire may have been unfortunate, given that physicians rarely devote time to training nursing staff. In future assessments of staff preferences, we would ask the staff to compare our program with traditional inservices or with those taught by experts. This type of program, currently on videotape, could potentially be converted to DVD, and an interactive program could be individualized to take into account education, language, and general sophistication with the MDS. Cost is currently minimal and could be decreased further if a trainer need not be present. Furthermore, this program could be implemented in sites using the MDS-based depression rating scale described by Burrows and associates (2000) to improve the validity of the data.

There are other MDS domains that might lend themselves to this simple approach. For example, ratings of delirium, psychosocial well-being, and behavioral symptoms such as agitation and psychosis could be improved using videotaped clips. At the very least, these types of clips could be used to assess the baseline skills of a facility's MDS mood rater. In California, the Department of Health Services has mandated that NAs take 6 continuing education hours' worth of training on Alzheimer's disease. Videotaped clips could be used to illustrate the disorder's typical behavioral and cognitive symptoms and effective behavioral strategies to reduce aggression.

There are several caveats to the current study. The first concern is the relatively small sample enlisted in the study. Thirty-five participants began the study, but only 20 were present at the 4-month follow-up, representing a 41% turnover. Although the cross-over effect demonstrated at Site 2 increases our confidence in the program's effectiveness, the small sample size may limit the generalizability of the results. Future studies should begin with a sample large enough to withstand the expected attrition common after 6 months. A second concern is the possibility of practice effects. The staff were shown the videotaped vignette test three times at Site 1 and four times at Site 2. We attempted to control for this confound by using a delayed intervention design so that Site 2 was tested after a control period and by withholding feedback to the staff about their test performances. However, it is possible that after three viewings, staff could improve by practice alone. Future studies may need to include two delay periods to control for the potential impact of practice. Another caveat pertains to mode of administration of the program: individual versus group. We specifically allowed the staff to choose the timing and type of session best suited to their needs and did not control for group versus individual sessions. It would be helpful to determine in future studies which mode is most effective.

The current study is clearly the first step in achieving the objective of improving care of depressed nursing home residents. Although we have demonstrated an improvement in staff's ability to identify symptoms in videotaped residents, it will be necessary to determine whether the program translates to an improvement in MDS ratings and quality of care. Future studies that use this type of program should include other important outcome measures such as MDS ratings, physician referrals, and psychotropic use in order to determine whether the training will lead to improvement in quality of care.


Appendix

Mood and Behavior Patterns From the Minimum Data Set Manual

1. makes negative statements like "I want to die now"

2. says bad things about themselves like "I am a burden to my family"

3. sad, pained, worried facial expressions like wrinkled brow, pout

4. frequent crying

5. repeats questions often like "Where am I? When am I going home?"

6. says same thing over and over like "Help me, Help me"

7. has fears that are unfounded like "My husband is having an affair, I know it!"

8. thinks something awful is about to happen like believes he is going to die before Christmas

9. complains of health constantly like "I have pneumonia and I am going to die" repeated over and over when resident only has a cold

10. repetitive physical movements like tapping chair or table repeatedly

11. anxiously complains often like repeatedly asks when his clothes will be cleaned

12. is angry and easily irritated like angry at caretakers, strikes out at them

13. is unpleasant in the morning

14. has difficulty falling asleep, staying asleep, or oversleeping

15. has drastic change in eating patterns like not eating enough food including the ice cream that the resident always used to eat

16. no longer goes to activities he/she once enjoyed like does not go to Bingo even when prompted by nurses

17. less social interaction like rarely talks to other residents or staff, has no interest in phone calls from family

Table 2.

Delayed Intervention Design

Site 1: Pretest (Time 1) → Training → Posttest → Follow-up (Time 4)
Site 2: Pretest (Time 1) → Delay → Pretest (Time 2) → Training → Posttest (Time 3) → Follow-up (Time 4)
Table 1.

Demographics of Nursing Home Staff at the Beginning of the Study

Demographic                          Site 1                     Site 2                     p
Age (years), M (SD)                  40 (14)                    47 (11)                    ns
Gender                               3 men, 12 women            3 men, 17 women            ns
Level of education                   5 licensed, 10 unlicensed  9 licensed, 11 unlicensed  ns
Length of service (years), M (SD)    8.7 (6.4)                  14.2 (8.0)                 ns
Pretest vignette mean                11.0                       8.5                        ns
Pretest written mean                 8.1                        6.6                        ns

Note: ns = not significant.
Table 3.

Course Evaluation and Motivation Survey Responses

Question and response                                                      % (n)

Course evaluation
1. Were the inservices interesting, informative, and easy to follow?
   Liked the inservices                                                    84 (21/25)
   Noted the inservices were interesting                                   36 (9/25)
   Noted the inservices were informative                                   80 (20/25)
   Noted dislike of Geriatric Depression Scale questions (2/6),
   quality of videotapes (2/6), depiction of crying on tapes (1/6),
   and sessions being too spread out (1/6)                                 24 (6/25)
2. How helpful were the inservices in terms of improving your ability
   to detect depressive symptoms in your own patients?
   Not helpful                                                             0 (0/25)
   Helpful                                                                 24 (6/25)
   Very helpful                                                            76 (19/25)
3. How approachable and helpful was the trainer in answering your
   questions?
   Not at all                                                              0 (0/25)
   Somewhat                                                                8 (2/25)
   Very                                                                    92 (23/25)

Motivation survey
1. Would you take this course if there was no monetary gain?
   Yes                                                                     95 (18/19)
   No                                                                      5 (1/19)
2. Would you choose our inservice over another, more traditional,
   inservice?
   Yes                                                                     95 (18/19)
   No                                                                      5 (1/19)
3. Would you take this course if no money was offered, but continuing
   education credit was given?
   Yes                                                                     100 (19/19)
   No                                                                      0 (0/19)
Table 4.

Means (Standard Deviations) for Written and Vignette Tests by Site and by Nurse Level

                    Time 1          Time 2          Time 3          Time 4
Written test by site
  Site 1            8.14 (2.98)     13.85 (5.00)    11.43 (3.41)    —
  Site 2            7.00 (4.61)     9.22 (4.12)     15.53 (4.03)    11.54 (4.98)
Vignette test by site
  Site 1            11.07 (3.69)    15.23 (2.39)    14.00 (2.58)    —
  Site 2            8.20 (4.53)     9.89 (4.12)     13.41 (4.35)    14.00 (3.56)
Written test by level
  Licensed          9.85 (4.58)     9.43 (4.72)     15.38 (4.99)    14.44 (2.79)
  Unlicensed        6.00 (2.83)     9.09 (3.94)     14.35 (4.14)    9.09 (4.04)
Vignette test by level
  Licensed          11.23 (3.00)    11.43 (2.23)    16.31 (2.43)    14.44 (2.79)
  Unlicensed        8.36 (4.77)     8.91 (5.24)     12.59 (3.73)    13.63 (3.56)

Figure 1.

Mean of vignette pretest and posttest scores by site. Site 1 served as the intervention site and Site 2 served as the control site. *p < .05.

Figure 2.

Mean of written pretest and posttest scores by site. Site 1 served as the intervention site and Site 2 served as the control site. *p < .05.

Figure 3.

Mean vignette test scores at pretest, posttest, and follow-up for the delayed intervention site (i.e., Site 2). *p < .05.

Figure 4.

Mean written test scores at pretest, posttest, and follow-up for the delayed intervention site (i.e., Site 2). *p < .05.

Figure 5.

Mean of vignette test scores by site across time. *p < .05.

This project was funded by an Alzheimer's Disease Center of California grant, a National Institute on Aging Alzheimer's Research Center Award AG 16570, the Sidell-Kogan Foundation, and a Scripps College Faculty Development Award. Portions of this article have been presented at the annual meetings of the American Psychological Association (August 2000) and The Gerontological Society of America (November 2000). We would like to thank the staff at Hillcrest Woods in La Verne, California, and the staff at Mount San Antonio Gardens in Claremont, California, for their enthusiasm and support of this project. The videotapes and training materials can be obtained by contacting Stacey Wood.

References

Avorn, J., & Soumerai, S. B. (1983). Improving drug therapy decisions through educational outreach: A randomized controlled trial of academic detailing. New England Journal of Medicine, 320, 227-232.

Baltes, M. M., Neumann, E.-M., & Zank, S. (1994). Maintenance and rehabilitation of independence in old age: An intervention program. Psychology and Aging, 9, 179-188.

Burgio, K., & Burgio, L. (1990). Institutional staff training and management: A review of the literature and a model for geriatric, long-term care facilities. International Journal of Aging and Human Development, 30(4), 287-302.

Burrows, A. B., Morris, J. N., Simon, S. S., Hirdes, J. P., & Phillips, C. (2000). Development of a Minimum Data Set-based depression rating scale for use in nursing homes. Age and Ageing, 29, 165-172.

Institute of Medicine. (1986). Improving the quality of care in nursing homes. Washington, DC: National Academy of Sciences Press.

McCallion, P., Toseland, R., & Freeman, K. (1999). An evaluation of the family visit education program. Journal of the American Geriatrics Society, 47, 203-214.

National Institutes of Health Consensus Development Conference. (1991). Diagnosis and treatment of depression in late life (Consensus Statement Vol. 9, No. 3). Bethesda, MD: Office of Medical Applications of Research.

Omnibus Budget Reconciliation Act of 1987, Pub. L. No. 100-203, 42 U.S.C. §1395i-3 (1987).

Parmalee, P., Katz, I., & Lawton, M. (1989). Depression among institutionalized aged: Assessment and prevalence estimation. Journal of Gerontology: Medical Sciences, 44, M22-M29.

Phillips, C. D., Morris, J. N., Hanks, C., Fries, B. E., Mor, V., Nennsteil, M., & Iannacchione, V. (1997). Association of the Resident Assessment Instrument with changes in function, cognition, and psychosocial status. Journal of the American Geriatrics Society, 45, 986-993.

Rovner, B., German, P., Brant, L., Clark, R., Burton, L., & Folstein, M. (1991). Depression and mortality in nursing homes. Journal of the American Medical Association, 265, 993-996.

Schnelle, J. F., Newman, D. R., & Fogarty, T. (1990). Management of patient continence in long-term care nursing facilities. The Gerontologist, 30, 373-376.

Schnelle, J. F., Wood, S., Schnelle, E. R., & Simmons, S. F. (2001). Measurement sensitivity and the MDS depression quality indicator. The Gerontologist, 41, 401-405.

Stevens, A. B., Burgio, L. D., Bailey, E., Burgio, K. L., Paul, P., Capilouto, E., Nicovich, P., & Hale, G. (1998). Teaching and maintaining behavior management skills with nursing assistants in a nursing home. The Gerontologist, 38, 379-384.

Waxman, H. M., Carner, E. A., & Berkenstock, G. (1984). Job turnover and job satisfaction among nursing home aides. The Gerontologist, 24, 503-509.

Wood, S., Cummings, J. L., Hsu, M., Barclay, T., Veen Wheatley, M., Yarema, K. T., & Schnelle, J. F. (2000). The use of the Neuropsychiatric Inventory in nursing home residents: Characterization and measurement. American Journal of Geriatric Psychiatry, 8(1), 75-83.