Improving Math Computation via Self-Monitoring and Performance Feedback

Publication Type: Academic Article

Authors: Ossa, N., Pham, A. V., Peláez, M., & Lazarus, P. J.

The development of math computation skills is necessary for academic success as well as independent living (Codding, Hilt-Panahon, Panahon, & Benson, 2009). Research suggests that math curricula and instruction that involve daily and routine practice lead to improved math achievement outcomes (Agodini et al., 2009). However, a large proportion of students in the United States have struggled to attain proficiency in mathematics. In 2013, only 41 percent of fourth graders, 34 percent of eighth graders, and 26 percent of twelfth graders scored at or above proficiency in mathematics (U.S. Department of Education, Institute of Education Sciences, 2013). Thus, there is a great need to develop and implement classroom interventions that benefit students who are not proficient in math computation (U.S. Department of Education, Office of Planning, Evaluation and Policy Development, 2010).

Self-management interventions allow students to monitor, regulate, or manage academic behavior in order to increase positive educational outcomes (Montague, 2007; Rafferty, 2010). One of the most widely used self-management interventions for improving students’ math proficiency is self-monitoring (Rafferty & Raimondi, 2009). Self-monitoring is an effective technique that researchers and practitioners have applied frequently because of its versatility in school settings (McDougall, Skouge, Farrell, & Hoff, 2006). Several studies support the use of self-monitoring to improve math performance among students with disabilities, including those within inclusive or special education settings (Falkenberg & Barbetta, 2013; Lannie & Martens, 2008; Rock, 2005; Shimabukuro, Prater, Jenkins, & Edelen-Smith, 1999). Studies have also shown that, when implemented regularly, self-monitoring interventions can improve mathematical proficiency by increasing math accuracy and fluency (e.g., Rock, 2005).

The importance of monitoring performance for student achievement has been well documented in numerous studies (e.g., McDougall, Saunders, & Goldenberg, 2007). When combined with performance feedback, self-monitoring interventions can improve students’ accurate and fluent responding (Eckert, Ardoin, Daisey, & Scarola, 2000) and their overall academic performance (Codding, Chan-Iannetta, George, Ferreira, & Volpe, 2011). Performance feedback interventions have been described as procedures that provide students with information regarding their specific performance on an academic task (Ysseldyke & Elliott, 1999) and can serve as motivation for children to exceed prior performance (Shapiro, 2004). Some examples include verbal feedback, self-scoring, and graphing data. Many studies have found that providing performance feedback to students is effective in several academic areas, particularly in reading (Eckert, Dunn, & Ardoin, 2006), although results regarding its effectiveness in mathematics have been inconsistent (Carson & Eckert, 2003; Codding, Eckert, Fanning, Shiyko, & Solomon, 2007; Skinner, Bamberg, Smith, & Powell, 1993).

Skinner et al. (1993) examined the effectiveness of a Cover-Copy-Compare math intervention on the completion of division facts by three elementary school-age students in a self-contained classroom, using a within-subjects, across-tasks, multiple-baseline design. Two of the students improved in their computation fluency using the math intervention alone, while the third student required the intervention along with performance feedback and goal setting to reach mastery. These findings provided preliminary evidence of the effectiveness of performance feedback in conjunction with the math intervention. In another study using a similar Cover-Copy-Compare math intervention, Codding et al. (2007) compared the isolated effects of the intervention with two types of performance feedback (i.e., verbal and graphical) for three general education students struggling with math computation. Using an alternating treatments design, they found no differentiation between treatment conditions and therefore could not conclude that the inclusion of performance feedback produced better math fluency (digits correct per minute [DCPM]). In other words, the students performed similarly whether or not they received feedback. However, the students in that study did not produce their own graphs, which might have provided more motivation and better self-monitoring of their academic progress.

The use of both verbal and graphical feedback can aid students in managing their academic behaviors and in monitoring progress. Students can record the frequency, duration, accuracy, or rate of their own behavior on a bar or line graph. By graphing their own performance, students obtain feedback immediately after they complete a task and become more cognizant of their progress. Moreover, self-monitoring techniques in conjunction with performance feedback have been shown to improve academic skills in children with or at risk of learning disabilities compared to repeated practice, tutoring, or reinforcement alone (Morgan & Sideridis, 2006). Consistently measuring and monitoring progress toward achieving educational goals has also been shown to improve students’ overall academic outcomes (Brophy & Good, 1986).

Several studies have examined the effects of self-monitoring on the academic behavior of children with learning disabilities. Maag, Reid, and DiGangi (1993) measured the math progress of six students with learning disabilities in fourth and sixth grade general education classrooms, using a combined multiple-schedule and multiple-baseline-across-subjects design. After the baseline phase, students were trained to use three types of self-monitoring (i.e., of accuracy, productivity, and on-task behavior) during the intervention phase before the intervention was faded. Findings showed that students who used the self-monitoring interventions increased only in actual accuracy and productivity (Maag et al., 1993). No specific relationship was found between self-monitoring of on-task behavior (i.e., attention) and actual on-task behavior. Although accuracy and productivity have been shown to increase in previous studies, the conditions responsible for these increases varied by grade (Maag et al., 1993) and by individual student (Lannie & Martens, 2008). Additional research is needed to understand the conditions under which math performance is likely to improve as a result of self-monitoring.

Similarly, Shimabukuro et al. (1999) studied the effects of self-monitoring of academic performance on students with learning disabilities and Attention-Deficit Hyperactivity Disorder (ADHD). Three male students from a self-contained classroom participated in this study (one sixth grader and two seventh graders). Results indicated that self-monitoring of academic performance increased academic productivity, accuracy, and on-task behavior during independent class work. The authors found that both academic productivity and on-task behavior improved for each student in various subject areas. Gains in productivity were greater than gains in accuracy, and productivity gains were greater for both reading comprehension and math than for written expression (Shimabukuro et al., 1999).

Other studies have investigated whether self-monitoring would be useful for children without disabilities within the general education setting. For instance, Rock (2005) examined the effects of the ACT-REACT intervention on the academic engagement, non-targeted problem behavior, performance, and attention of elementary school-age students with and without disabilities who were enrolled in inclusive classrooms. ACT-REACT is a six-step self-monitoring strategy that requires students to: 1) Articulate academic and behavioral goals, 2) Create a self-monitoring work-plan to record academic and behavioral performance, 3) Take picture(s) of behavioral goals using self-modeling, 4) Reflect on academic and behavioral goal attainment after each class, 5) Evaluate academic and behavioral progress over time, and 6) ACT again continuously. Students were given a graphic organizer, a timing device, and self-monitoring handouts that included visual cues and goal statement prompts (e.g., “My math goal today is to complete…”). Findings from that study suggested that ACT-REACT was effective for increasing academic engagement (i.e., time on task), accuracy (i.e., percentage correct of completed problems), and productivity (i.e., total number of completed problems daily) on math tasks (Rock, 2005). In a follow-up study, Rock and Thead (2007) continued to use the self-monitoring intervention with five elementary school-age students with and without disabilities. However, their findings revealed variability and inconsistencies in students’ academic accuracy and productivity when completing independent math seatwork. They attributed the lack of gains to limited instructional time and guided practice of math concepts prior to initiating seatwork.

A limited number of studies have investigated the combined use of self-monitoring techniques along with reinforcement during the intervention phase (e.g., Falkenberg & Barbetta, 2013; Legge, DeBar, & Alber-Morgan, 2010). For example, Lannie and Martens (2008) examined the effects of a self-monitoring program with a reinforcer menu on fifth grade students’ math performance. Using a multiple-baseline design, they found that three of the four students increased in DCPM when self-monitoring their productivity, but only after meeting on-task and accuracy criteria (Lannie & Martens, 2008). Because the researchers used a set criterion within the self-monitoring phase instead of establishing stability in the participants’ performance, the demonstration of experimental control was weakened (Lannie & Martens, 2008).

Falkenberg and Barbetta (2013) investigated the efficacy of a self-monitoring intervention on the math and spelling homework completion and accuracy rates of four fourth grade students with disabilities within an inclusive classroom. The self-monitoring intervention consisted of a self-monitoring worksheet containing homework tips, which was completed at home, and a computer-based self-monitoring program called KidTools, which was completed at school. The performance feedback consisted of a brief conference with the special education teacher to review the self-monitoring sheets with the students. Results provided strong support for the effectiveness of self-monitoring in improving the completion and accuracy of spelling and math homework for students with disabilities (Falkenberg & Barbetta, 2013). However, the study was limited by the use of home self-monitoring sheets completed by the parents. Because parents might have prompted their child to complete homework, or assisted with and checked homework without the researchers’ knowledge, it was difficult to determine how much self-monitoring the students actually exhibited at home. Nevertheless, the results provided further support for using both self-monitoring and performance feedback for children with disabilities within an inclusive setting.

Despite previous research supporting the use of self-monitoring techniques, few studies have examined the efficacy of self-monitoring on the math computation of children at risk for math difficulties in the general education classroom. Of the extant studies, many were conducted in self-contained or special education classrooms, with a limited number using curriculum-based measurement in math (M-CBM) in inclusive settings. The present study examined the combined effects of an intervention consisting of self-monitoring and performance feedback on children at risk of math difficulties, using M-CBM. Although the participants monitored the accuracy and productivity of their performance for the purposes of the intervention, we also collected data on their completion rate by calculating DCPM for each session. Unlike previous studies, which used multiple-baseline designs, the self-monitoring intervention was implemented using a B-A-B withdrawal design to determine the efficacy of the intervention in the classroom. The research questions were as follows: 1) What are the effects of self-monitoring and performance feedback on the completion of math problems (productivity) for students in an inclusive general education classroom? 2) What are the effects of self-monitoring and performance feedback on math computation accuracy for students in an inclusive general education classroom?

Method

Participants and Setting

The participants included four second-grade elementary school students (three males and one female) enrolled in a general education classroom (Aaron, Barry, Clara, and Dean). They were selected from a K-8 school in a large urban school district in southeastern Florida. The ethnic and racial backgrounds of the participants included African American, Hispanic, and Caucasian. The mean age of the students was seven years.

All four participants were recommended for the study by their classroom teacher based on their below average performance in math within the past four months. Their current achievement levels (grades of C or below), as reported by the teacher, indicated that the students demonstrated moderate difficulties in math computation. Additional screening took place to find students whose accuracy rate on math computation probes (M-CBM) was less than 75%. Students who met this criterion were included in the study. Although each of the participants had below average math performance, none were receiving any additional tutoring or math support inside or outside school. They were enrolled in the same classroom, which comprised 36 students and two classroom teachers. After the selection of the participants, consent forms were sent home to their parents/legal guardians. All consent forms were signed by the parents/legal guardians and returned to the investigators, allowing the students to participate in the study.

All experimental sessions took place in the participants’ general education classroom during math instruction over the course of 11 sessions during a six-week time period. The two teachers and the school principal also provided their consent for the study to be conducted in the classroom. The session length varied according to the study phase. Each session was carried out by the graduate student researcher in a quiet area of the classroom.

Materials

Curriculum-based measurement in math (M-CBM)

Math computation probes were created using an online CBM website, Intervention Central (www.interventioncentral.org), which randomly generated calculation problems according to the math skills chosen. This method was chosen because the resulting measures were similar to the math content taught in the classroom. Previous studies have shown that M-CBM provides high reliability and validity and can be used for early identification and formative evaluation (e.g., Clarke & Shinn, 2004). M-CBM probes were created based on input provided by the teacher and matched each student’s level of instruction. Each participant was given one M-CBM probe per session for all phases of the study. Initial screening using M-CBM was conducted to determine whether students met the criterion to participate in the study (accuracy rate below 75%). The math skills chosen were appropriate for second grade. Based on the students’ second grade curriculum, the following types of math problems were included in the computation probes: 1-to-2-digit addition without regrouping, 3-digit addition without regrouping, 1-to-2-digit addition with regrouping, 1-digit subtraction, and 2-digit subtraction without regrouping. Each probe contained 20 total problems arranged in a 4 x 5 matrix, with each problem aligned in a vertical format on one side of the paper.
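For readers who want a concrete picture of the probe format, the following Python sketch generates a 20-item probe laid out 4 x 5 from a simplified subset of the skill types listed above. It is only an illustration; the study's probes were generated by the Intervention Central worksheet tool, and the function names here are hypothetical.

```python
# Hypothetical probe generator; the study's probes came from Intervention Central.
import random

def addition_without_regrouping(digits):
    """Return an addition pair whose column sums never exceed 9 (no regrouping)."""
    while True:
        a = random.randint(10 ** (digits - 1), 10 ** digits - 1)
        b = random.randint(10 ** (digits - 1), 10 ** digits - 1)
        cols = zip(str(a).zfill(digits), str(b).zfill(digits))
        if all(int(x) + int(y) <= 9 for x, y in cols):
            return a, b

def make_probe(n_problems=20, columns=5):
    """Build one probe: 20 second-grade problems arranged in a 4 x 5 matrix."""
    problems = []
    for _ in range(n_problems):
        skill = random.choice(["add_1to2_digit", "add_3_digit_no_regroup", "sub_1_digit"])
        if skill == "add_1to2_digit":
            a, b = random.randint(1, 99), random.randint(1, 99)
        elif skill == "add_3_digit_no_regroup":
            a, b = addition_without_regrouping(3)
        else:
            a = random.randint(1, 9)
            b = random.randint(0, a)          # keep differences non-negative
            problems.append(f"{a} - {b}")
            continue
        problems.append(f"{a} + {b}")
    return [problems[i:i + columns] for i in range(0, n_problems, columns)]

for row in make_probe():
    print(row)
```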

M-CBM has been described as a brief, repeatable method of monitoring academic progress (Shapiro, 2004). These fluency-based measures are sensitive to improvement in students’ achievement over time toward mastery of a specific skill, and can provide formative data for educational planning and regular monitoring of instruction. Research has supported the reliability of M-CBM (Clarke & Shinn, 2004). The test-retest reliability of M-CBM over a one-week interval was good (r = .82). Data collected from the M-CBM included the DCPM from each probe.

Self-monitoring graph

The self-monitoring graph was used in each session of the intervention phases to graph the number of problems answered correctly per session. The graph served two purposes: it allowed participants to record their current performance and to track their progress over time. It was printed on both sides of an 8.5-inch by 11-inch sheet of paper. On each side of the sheet, the abscissa (x-axis) showed the dates of the scored probes and was labeled “Session Date,” and the ordinate (y-axis) showed the total number of math computation problems completed correctly. The graphs used vertical bars representing the number of correct problems completed in each successive session and were labeled with the child’s name.
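The paper graph can be approximated digitally as a simple bar chart. The matplotlib sketch below only illustrates the layout described above; the session dates and scores are hypothetical placeholders, not study data.

```python
# Illustrative reconstruction of the paper self-monitoring graph; data are hypothetical.
import matplotlib.pyplot as plt

session_dates = ["4/6", "4/8", "4/13", "4/15"]   # placeholder "Session Date" labels
correct_per_session = [11, 13, 12, 15]           # placeholder counts of correct problems

plt.bar(session_dates, correct_per_session)
plt.xlabel("Session Date")                                   # abscissa label from the study
plt.ylabel("Math Computation Problems Completed Correctly")  # ordinate label from the study
plt.title("Self-Monitoring Graph (child's name here)")       # graphs were labeled with each child's name
plt.ylim(0, 20)                                              # each probe contained 20 problems
plt.show()
```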

Performance feedback

After each intervention session, the researcher reviewed the self-monitoring graph with each participant. The researcher also reviewed the participant’s previous scores and provided verbal feedback on the participant’s progress throughout the intervention sessions. The verbal feedback consisted of a few questions and reminders, including “How many correct math problems did you have?” and “How many math problems did you do?” After verbal feedback was given, the researcher and participant jointly decided whether the participant had surpassed the previous intervention session’s score. If the participant surpassed his or her previous score, he or she received a tangible reward from the reinforcement menu. After jointly determining whether the participant had made progress, the researcher would set a goal for the participant to answer one more problem correctly, or complete one more problem, in the following session.
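As a compact restatement of the decision rule described above, the sketch below encodes the reward check and the plus-one goal; the function name is hypothetical and is offered only as a summary of the procedure.

```python
# Minimal sketch of the feedback rule: a reward is earned only when the current
# score surpasses the previous intervention session's score, and the next goal
# is one more correct (or completed) problem.
def feedback_decision(previous_correct, current_correct):
    earned_reward = current_correct > previous_correct
    next_goal = current_correct + 1
    return earned_reward, next_goal

print(feedback_decision(previous_correct=12, current_correct=14))  # (True, 15)
```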

Reinforcement menu

The reinforcement menu was composed of a list of tangible rewards for the second-graders who participated in this study. The menu was developed in consultation with the classroom teachers and aligned with the incentive systems already in place in the classroom. The completed reinforcement menu consisted of the following: pencils, stickers, and erasers. Each participant was allowed to choose one reward every time he or she surpassed the previous session’s score. The teachers approved the list of rewards to be used in conjunction with the self-monitoring intervention to promote work completion in the classroom. Alternative positive reinforcers (e.g., snacks, activities, verbal praise) were initially considered but were not used, due to the possibility of classroom disruption and to minimize confounds.

Measurement

Participants were given two minutes to complete each M-CBM probe for each session. During the intervention and withdrawal phases, each participant counted the number of math computation problems answered correctly (accuracy) and the number of problems (productivity score) he or she attempted for each session. Next, the participant recorded and graphed the number of total correct items on the self-monitoring graph. 

Although both math computation accuracy and productivity were recorded and graphed by the participant as part of the intervention procedures, math completion rates were also recorded by calculating DCPM for each probe in order to monitor progress and fluency. When providing verbal feedback, the researcher discussed each participant’s performance in relation to the total number of correct answers (accuracy) on each probe. For the purposes of the study, DCPM was also collected and reported as the dependent variable to determine the efficacy of the intervention. DCPM was calculated by adding the number of correct digits from the child’s responses on each probe and dividing the sum by two.
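As a worked illustration of the DCPM calculation just described (a sketch, not the authors' scoring code), the function below sums the correct digits scored for each attempted item and divides by the two-minute probe length.

```python
# Sketch of the DCPM calculation: total correct digits divided by probe minutes.
def digits_correct_per_minute(correct_digits_per_item, probe_minutes=2):
    """correct_digits_per_item: correct digits scored for each attempted item."""
    return sum(correct_digits_per_item) / probe_minutes

# Example: a child answers 15 + 9 with 23 (answer 24), earning 1 correct digit
# (the "2"); a fully correct two-digit answer earns 2 correct digits; and so on.
print(digits_correct_per_minute([1, 2, 2, 3, 2, 2, 1, 2, 2, 2]))  # 19 digits / 2 min = 9.5 DCPM
```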

Experimental Design and Procedures

A repeated measures B-A-B withdrawal design was used to evaluate the efficacy of the intervention consisting of self-monitoring and performance feedback and its effects on math computation rate. This design was chosen for practicality and because of the time constraints of the study, as the implementation of the self-monitoring intervention took place during the last few months of the school year. One advantage of this design is the immediate implementation of the intervention while also concluding the study with the intervention in its final phase. Limitations include the lack of baseline or substantial preintervention data; however, growth across the sessions would indicate a combined effect of the intervention. The intervention phases consisted of the participants completing one M-CBM probe per session for two minutes. Participants then checked their answers against an answer key that was provided. Along with the graduate researcher, participants graphed the number of problems answered correctly (i.e., accuracy) on their self-monitoring graph. The graduate researcher also calculated the DCPM for each probe. Participants transitioned to the next phase when stable performance was established. Stability of performance was defined as 80% of the scores falling within 15% of the mean (Tawney & Gast, 1984). The withdrawal phase (A) consisted of the participants completing an M-CBM probe per session for two minutes without the intervention.
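The stability criterion cited from Tawney and Gast (1984) can be expressed directly; the short check below is a sketch of that 80%-within-15%-of-the-mean rule, using made-up scores for illustration.

```python
# Sketch of the phase-change stability check: at least 80% of a phase's scores
# must fall within 15% of the phase mean (Tawney & Gast, 1984).
def is_stable(scores, band=0.15, proportion=0.80):
    mean = sum(scores) / len(scores)
    within = [s for s in scores if abs(s - mean) <= band * mean]
    return len(within) / len(scores) >= proportion

print(is_stable([17, 18, 19, 18, 20]))  # True: every score is within 15% of the mean (18.4)
print(is_stable([10, 18, 19, 25, 14]))  # False: only 2 of 5 scores fall inside the band
```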

Intervention Phases

Before starting the sessions, each participant was taught how to develop the self-monitoring graph for the intervention phases of the study. The graduate student researcher modeled the procedures for each participant. During each phase, the participant’s accuracy and productivity on the M-CBM probes were measured and recorded. The participant was given the following materials during each session: a pencil, a crayon, a self-monitoring graph, an M-CBM probe, and the answer sheet corresponding to that probe. The participant was given two minutes to complete the M-CBM probe each session. While each participant completed the M-CBM probe, the answer sheets were laid face down on the table. Upon completion, all participants checked their responses using the answer sheets. Participants recorded the number of problems completed correctly and then illustrated the results on the self-monitoring graph using a crayon. The graduate student researcher subsequently recorded and calculated DCPM for each M-CBM probe. Performance feedback was given after each session regarding progress, and only accuracy was graphed by the participants, since it was easier for the second-grade students to graph and monitor one variable rather than two. The graduate student researcher reviewed the self-monitoring graph with each participant and provided feedback on his or her progress. Each participant had the opportunity to earn a reward at the end of each session; to obtain a reward, the participant had to surpass his or her previous accuracy score. The intervention phases continued until each participant demonstrated performance stability.

Withdrawal Phase (A)

During the withdrawal phase, the participants were given two minutes to complete one M-CBM probe per session. No additional materials were provided; in other words, no self-monitoring graph, performance feedback, or rewards were given during this phase. For each session, the participants were instructed to complete math computation problems within two minutes, and the researcher told each participant to stop working on the probe after two minutes. Afterwards, math computation accuracy and productivity were determined, and stability of the performance data was established with a mean line. The graduate researcher also calculated DCPM for each probe.

Inter-observer Agreement

One classroom teacher served as an independent observer for the sessions. The independent observer accompanied the graduate student researcher for approximately 30% of the observations. The independent observer and the graduate student researcher reviewed each participant’s responses on the M-CBM probes and recorded the number of problems answered correctly (accuracy) and the number of problems completed per M-CBM probe (productivity). Inter-observer reliability was calculated by dividing the number of agreements by the sum of the agreements and disagreements and multiplying by 100. This resulted in an inter-observer agreement of 100%.
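The agreement formula reported above can be written out explicitly; the following sketch (with hypothetical scores) simply restates agreements / (agreements + disagreements) x 100.

```python
# Sketch of the inter-observer agreement calculation described in the text.
def interobserver_agreement(observer_a, observer_b):
    """Each argument is a list of per-probe scores recorded by one observer."""
    agreements = sum(1 for a, b in zip(observer_a, observer_b) if a == b)
    disagreements = len(observer_a) - agreements
    return agreements / (agreements + disagreements) * 100

# Hypothetical example: both scorers record identical counts on three probes.
print(interobserver_agreement([14, 16, 15], [14, 16, 15]))  # 100.0
```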

Treatment Integrity

Treatment integrity was also reviewed by the independent observer during 30% of the sessions for each participant. The graduate researcher reviewed the list of steps for each session. The steps outlined the following information: (a) the intervention procedures, (b) M-CBM scoring, and (c) the materials required. Each intervention and M-CBM probe step was described and included the questions that the graduate researcher was to ask the student during each intervention phase. The observer recorded whether the graduate researcher followed each step. Treatment integrity across observed sessions was 100%.

Results

Math Computation Rate

Figure 1 presents the mathematics computation rate (i.e., DCPM) results for each participant. All students started at the instructional level (range of 14-21 DCPM) in mathematics, according to M-CBM fluency benchmarks compiled for second grade (Burns, VanDerHeyden, & Jiban, 2006). Because the majority of data points indicated only gradual effects during intervention, all four remained at the instructional level by the end of the intervention. For Aaron, performance was relatively higher and more stable during the initial phase of the intervention, with a median of 18.5 DCPM, compared to the withdrawal phase, where performance ranged from 13 to 20 DCPM (median = 14 DCPM). The last intervention phase showed modest improvement (median = 17 DCPM) compared to the withdrawal phase, but performance was variable during the last few sessions, with a range of 15-19 DCPM. Although performance in the last phase, based on visual analysis, slightly decreased compared to the initial phase of the intervention, the median DCPM was higher when the intervention was in place, indicating some differentiation between conditions.

For Barry, performance ranged from 13-18 DCPM during the initial phase of the intervention, but improved particularly during the last six sessions (sessions 6-11) leading up to the end of the second intervention phase. A gradual increase in DCPM emerged and an increasing trend was observed. Performance increased to 19.5 DCPM during the second intervention phase, exceeding the highest performance of the initial intervention phase (18 DCPM). An increasing trend was evident, yet performance during the withdrawal phase could not be clearly differentiated from performance under the treatment condition.

For Clara, performance was relatively stable during the initial intervention phase (median = 14 DCPM); however, during the withdrawal phase, a change in level indicated a decline from 19.5 DCPM during session 5, at the start of the withdrawal phase, to 12.5 DCPM by the end of the phase. This unusual pattern suggests that the participant may have been prepared to discuss and graph her performance during session 5, not knowing that the self-monitoring intervention had been withdrawn. Only at the start of session 6 did the participant realize that she would not have an opportunity to discuss and graph her data. It was not until the second intervention phase that Clara’s performance showed a gradual increase in DCPM (median = 17 DCPM), ranging between 16 and 18 DCPM during the last four sessions of that phase. Thus, the improvement in DCPM was modest here as well.

Figure 1: Math Fluency Data: Digits Correct Per Minute

For Dean, an increasing trend was also evident: median performance was 14 DCPM (range = 12-17) during the initial intervention phase, 16 DCPM (range = 13-18) during the withdrawal phase, and 17.5 DCPM (range = 17.5-19) during the second intervention phase. Although there was slight variability during the withdrawal phase, similar to Aaron and Barry, performance appeared less variable by the end of the intervention, and Dean attained his highest math computation rate of 19 DCPM during the final session.

Discussion

The purpose of this study was to evaluate the efficacy of a classroom intervention consisting of self-monitoring and performance feedback on the math computation of four second-grade students in a general education classroom setting. The self-monitoring intervention produced modest improvements in math computation accuracy and rate, particularly when compared to the withdrawal phase. However, all participants remained at the instructional level across conditions during the short time the intervention was implemented. Nevertheless, there was an unusual pattern during the initial session of the withdrawal phase (session 5), when all participants showed a slight increase in DCPM but then declined in the next session (session 6), particularly Aaron and Clara. This may be due to their initial expectation of receiving the intervention. Participants realized only after they completed their tasks that they would not receive performance feedback or the visual graph during the withdrawal phase. As a result, their performance declined somewhat until the second intervention phase.

The second-grade students chosen to participate in this study were, at best, low to average achieving students in math computation, based on teacher input. Participants had not received failing grades or scores on math tests, but demonstrated inconsistencies in their math computation performance: they were able to complete math problems successfully one day, but less successfully the next. Thus, a self-monitoring intervention may prove useful for students who lack automaticity in completing math problems efficiently and who may require extra support and self-monitoring to maintain consistent performance. All participants appeared to improve when the intervention was in place, particularly by the second intervention phase, although it would have been ideal to continue the intervention sessions to see whether these gains were maintained over time. However, considering that their performance remained at the instructional level throughout the sessions, any additional increases after the second intervention phase would likely have been minimal by the end of the school year, since all participants approached 19 to 20 DCPM during the final sessions. Students who demonstrate inconsistent math computation fluency may often be overlooked for additional support because they are not described as failing. Thus, the intervention can assist those who may be considered at risk for math difficulties, especially if students have not attained or built on fundamental math skills before starting more challenging computational problems (e.g., multi-digit multiplication or division).

Findings from the current study supported results from previous studies (Lannie & Martens, 2008; Maag et al., 1993) in which students demonstrated improvement in math computation during the implementation of the intervention phases. However, the findings should be considered in light of several limitations, including the lack of maintenance data and the minimal number of sessions or data points across all phases of the study. Additional screening could have included standardized math assessment measures, which would have established the students’ math proficiency compared to same-aged peers as well as the criterion and predictive validity of the M-CBM. Such screening measures might include a larger number and variety of math problems in order to avoid a ceiling effect. Nevertheless, each student had some knowledge of basic math computation, which made the self-monitoring intervention more useful because it taught each student to monitor behavior and progress rather than directly teaching fundamental computation skills. Each student was individually assessed using a curriculum-based measure to determine the number of problems completed within two minutes and whether accuracy in math computation was less than 75% (i.e., fewer than 15 out of 20 correct problems). Only those students whose accuracy in math computation was less than 75% were included in the present study. Future research could screen students and determine the efficacy of self-monitoring interventions for children with varying cognitive and academic skill levels, along with assessment in other mathematical domains besides computation and fluency, such as math word problems.

Strengths of this particular withdrawal design include the immediate implementation of the intervention as well as ending the study with the intervention in place. Even though the design did not use the recommended minimum of four phases with at least five data points based on What Works Clearinghouse design standards (Kratochwill et al., 2010), upon visual inspection of the data there were few overlapping data points between the different phases, and changes in condition were accompanied by changes in math performance. Additionally, because less variability was observed during the second intervention phase (e.g., for Barry, Clara, and Dean), there was evidence of a greater likelihood of stability had the intervention continued. Of course, the design of the study could have been strengthened by including measures of maintenance and generalization. Because the data were collected near the end of the school year, the study was conducted on a very restricted timeline (six consecutive weeks), and thus data regarding maintenance or final grades were not collected.

Additionally, the use of twenty items for every M-CBM probe might have resulted in a ceiling effect, where students approached the maximum number of items during the sessions. Including additional computation items over time would have addressed this limitation, although students might have felt uncertain or overwhelmed if they observed more items being added after each session. Although the intervention called for monitoring accuracy (i.e., the percentage of completed items that are correct), this approach also has its limitations. If the solution to a problem contains one or more incorrect digits, that solution is marked wrong and the student receives no credit. In contrast, assessing DCPM allows the student to receive credit for each individual correct digit appearing in the solution to the math problem. Therefore, even if the overall solution to an item is incorrect, the child may still receive partial credit if one of the digits is correct within a two-digit response. Scoring computation problems by the digit using DCPM, rather than as a whole answer, allows for closer analysis of a child’s computational skills. By separately scoring each digit in the response to determine DCPM, the researcher is able to recognize the student’s partial math competencies. However, when comparing accuracy and DCPM data across sessions, the trends and variability were very similar across each phase.
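To make the contrast concrete, the hypothetical example below scores the same four responses two ways: whole-answer accuracy versus digit-level credit. The item values are invented for illustration, and the digit comparison is simplified by aligning digits left to right (the responses here have the same number of digits as the answers).

```python
# Hypothetical items as (correct answer, child's response); 52 vs 53 has one wrong digit.
items = [(24, 24), (131, 131), (52, 53), (8, 8)]

whole_answer_correct = sum(1 for ans, resp in items if ans == resp)
correct_digits = sum(
    sum(1 for a, r in zip(str(ans), str(resp)) if a == r)  # digit-by-digit credit
    for ans, resp in items
)
print(whole_answer_correct)       # 3 of 4 items correct under whole-answer scoring
print(correct_digits)             # 7 correct digits: the "5" in 53 still earns credit
print(correct_digits / 2)         # 3.5 DCPM for these items on a two-minute probe
```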

Another method of recording the rate and efficiency of math computation is errors per minute (EPM). Although collecting EPM provides information related to accuracy, or the lack thereof, all four participants committed few to no errors across all conditions of the study; therefore, only DCPM was reported. Future research may investigate EPM in children with math disabilities, who may be more likely to commit errors on math computation and fluency problems. Considering that discussion of DCPM may be difficult for students to understand, choosing more familiar concepts such as problems correct (accuracy) or problems completed (productivity) may lead to more favorable outcomes in responding.

Because participants were able to acquire rewards as part of the intervention, the inclusion of additional reinforcement might have confounded the results of the study in each intervention phase. Therefore, it is difficult to tease out the potential additive effects that the rewards may have provided over the self-monitoring intervention alone. Since the teachers were already using tangible rewards in the classroom and had incorporated some form of positive reinforcement schedule prior to the study, it was not feasible for the researchers to remove the rewards from the intervention. The rewards were considered part of the overall behavior management strategy used by the two classroom teachers rather than an essential part of the study. On the other hand, it may not be practical for every classroom to provide these incentives on a regular basis.

The self-monitoring intervention provided modest improvement in computation rate, but to varying degrees across participants. Efforts were made to minimize threats to internal validity by implementing the intervention with fidelity. However, it is possible that external factors influenced the results of this study, including distractions within the classroom environment. On the other hand, the study was not conducted in a laboratory, a separate classroom, or an office, and therefore reflects the distractions that typically occur in classrooms. Consequently, the results of this study may be more generalizable to schools where extra space is difficult to find and interventions need to occur in the classroom. Future research should collect maintenance data to determine how long the positive effects last, along with conducting similar research with even younger children (e.g., first grade), whose self-monitoring behaviors are still emerging at that developmental stage. Additional studies could also investigate whether self-monitoring interventions can be used for homework or in other academic content areas (e.g., spelling, writing) that require frequent monitoring of behavior.

In conclusion, the improvements in math computation rate and accuracy observed in this study were modest. Increasing trends were also observed during the withdrawal phase for some students, which makes it difficult to determine whether improvements were due to the intervention itself. It is unknown whether a particular component of the intervention was responsible for its limited efficacy or whether the combined components (e.g., individualized attention, support and encouragement from the researcher) were responsible. Although computation accuracy and rate increased, individual differences were observed. Sharper changes in slope were found for Aaron and Clara when the intervention was removed, likely due to their expectation of receiving the intervention. A steady increase was observed when the intervention was implemented during the second phase, particularly for Barry, Clara, and Dean. These are important considerations for teachers and practitioners and suggest that individual evaluation and progress monitoring are important for providing the most efficacious interventions. In addition, the brevity of the intervention makes it practical for use by general education classroom teachers, who are under pressure to increase students’ achievement while dealing with the reality of time constraints.

References

Agodini, R., Harris, B., Atkins-Burnett, S., Heaviside, S., Novak, T. & Murphy, R. (2009). Achievement effects of four early elementary school math curricula: Findings from first graders in 39 schools (NCEE 2009-4052). Washington, DC: National Center for Education Evaluation and Regional Assistance, Institute of Education Sciences, U.S. Department of Education.

Brophy, J., & Good, T. (1986). Teacher behavior and student achievement. In M. Wittrock (Ed.), Handbook of research on teaching (3rd ed., pp. 328–375). New York: Macmillan.

Burns, M. K., VanDerHeyden, A. M., & Jiban, C. L. (2006). Assessing the instructional level for mathematics: A comparison of methods. School Psychology Review, 35, 401-418.

Carson, P. M., & Eckert, T. L. (2003). An experimental analysis of mathematics instructional components: Examining the effects of student-selected versus empirically selected interventions. Journal of Behavioral Education, 12, 35‐54.

Clarke, B., & Shinn, M. R. (2004). A preliminary investigation into the identification and development of early mathematics Curriculum-Based Measurement. School Psychology Review, 33, 234-248.

Codding, R. S., Chan-Iannetta, L., George, S., Ferreira, K., & Volpe, R. (2011). Early number skills: Examining the effects of class-wide interventions on kindergarten performance. School Psychology Quarterly, 26, 85-96.

Codding, R. S., Eckert, T. L., Fanning, E., Shiyko, M., & Solomon, E. (2007). Comparing mathematics interventions: The effects of cover-copy-compare alone and combined with performance feedback on digits correct and incorrect. Journal of Behavioral Education, 16, 125-141.

Codding, R. S., Hilt-Panahon, A., Panahon, C. J., & Benson, J. L. (2009). Addressing mathematics computation problems: A review of simple and moderate intensity interventions. Education and Treatment of Children, 32, 279-312.

Eckert, T. L., Ardoin, S. P., Daisey, D. M. & Scarola, M. D. (2000). Empirically evaluating the effectiveness of reading interventions: The use of brief experimental analysis and single case designs. Psychology in the Schools, 37, 463-473.

Eckert, T. L., Dunn, E. K., & Ardoin, S. P. (2006). The effects of alternate forms of performance feedback on the oral reading fluency of elementary-aged students. Journal of Behavior Education, 15, 149-162.

Falkenberg, C. A., & Barbetta, P. M. (2013). The effects of a self-monitoring package on homework completion and accuracy of students with disabilities in an inclusive general education classroom. Journal of Behavioral Education, 22, 190-210.

Kratochwill, T. R., Hitchcock, J., Horner, R. H., Levin, J. R., Odom, S. L., Rindskopf, D. M., & Shadish, W. R. (2010). Single-case designs technical documentation. Retrieved November 24, 2015, from http://ies.ed.gov/ncee/wwc/pdf/reference_resources/wwc_scd.pdf

Lannie, A. L., & Martens, B. K. (2008). Targeting performance dimensions in sequence according to the instructional hierarchy: Effects on children's math work within a self-monitoring program. Journal of Behavioral Education, 17, 356-375.

Legge, D. B., DeBar, R. M., & Alber-Morgan, S. R. (2010). The effects of self-monitoring with a MotivAider on the on-task behavior of fifth and sixth graders with autism and other disabilities. Journal of Behavior Assessment and Intervention in Children, 1, 43-52.

Maag, J. W., Reid, R., & DiGangi, S. A. (1993). Differential effects of self-monitoring attention, accuracy, and productivity. Journal of Applied Behavior Analysis, 26, 329-344.

Math work - Math worksheet generator. (n.d.). Retrieved February 12, 2015, from http://www.interventioncentral.org/teacher-resources/math-work-sheet-gen...

McDougall, D., Saunders, W.M., & Goldenberg, C. (2007) Inside the black box of school reform: Explaining the how and why of change at Getting Results schools. International Journal of Disability, Development and Education, 54, 51-89.

McDougall, D., Skouge, J., Farrell, C. A., & Hoff, K. (2006). Research on self-management techniques used by students with disabilities in general education settings: A promise fulfilled. Journal of the American Academy of Special Education Professionals, 1, 36–73.

Montague, M. (2007). Self-regulation and mathematics instruction. Learning Disabilities Research & Practice, 22, 75-83.

Morgan, P. L., & Sideridis, G. D. (2006). Contrasting the effectiveness of single-subject interventions on fluency for students with learning disabilities: A multilevel random coefficient modeling meta-analysis. Learning Disabilities: Research and Practice, 21, 191-210.

Rafferty, L. A. (2010). Step-by-step: Teaching students to self-monitor. Teaching Exceptional Children, 43, 50-58.

Rafferty, L. A., & Raimondi, S. (2009). Self-monitoring of attention versus self-monitoring of performance: Examining the differential effects among students with emotional disturbance engaged in independent math practice. Journal of Behavioral Education, 18, 279-299.

Rock, M. L. (2005). Use of strategic self-monitoring to enhance academic engagement, productivity, and accuracy of students with and without exceptionalities. Journal of Positive Behavior Interventions, 7, 3-17.

Rock, M. L., & Thead, B. K. (2007). The effects of fading a strategic self-monitoring intervention on students’ academic engagement, accuracy, and productivity. Journal of Behavioral Education, 16, 389-412.

Shapiro, E. S. (2004). Academic skills problems: Direct assessment and intervention (3rd Edition). New York: The Guilford Press.

Shimabukuro, S. M., Prater, M. A., Jenkins, A., & Edelen-Smith, P. (1999). The effects of self-monitoring of academic performance on students with learning disabilities and ADD/ADHD. Education & Treatment of Children, 22, 397-414.

Skinner, C. H., Bamberg, H. W., Smith, E. S., & Powell, S. S. (1993). Cognitive cover, copy, and compare: Subvocal responding to increase rates of accurate division responding. Remedial and Special Education, 14, 49–56.

Tawney, J. W., & Gast, D. L. (1984). Single subject research in special education. Columbus, OH: Merrill.

U.S. Department of Education, Institute of Education Sciences, National Center for Education Statistics. (2013). National Assessment of Educational Progress (NAEP), various years, 1992–2013 Mathematics and Reading Assessments, Washington D.C.

U.S. Department of Education, Office of Planning, Evaluation and Policy Development, Policy and Program Studies Service. (2010). Evaluation of the Comprehensive School Reform Program Implementation and Outcomes: Fifth-Year Report, Washington, D.C.

Ysseldyke, J., & Elliott, J. (1999). Effective instructional practices: Implications for assessing educational environments. In C. R. Reynolds & T. B. Gutkin (Eds.), The handbook of school psychology (3rd ed., pp. 497-518). New York: Wiley.

 
