Effects of Technology on Mathematics Education in Middle School

Abstract: 

The literature suggests a strong positive correlation between students' effective use of technology and improvement in standardized test scores. Using technology to teach math can improve student skills not only in math but also in using technology itself. Unfortunately, three major barriers impede progress in implementing this idea: cost, training, and technology failure. Teachers are skeptical, and rightly so, about how much progress can be made when continual updates and installations repeatedly put the technology on hold. This paper analyzes how these concepts, ideas, and problems have been discussed in the past in order to form a solid platform that will support technology in schools in the future. It also discusses the suggestive findings of the project and how future research on this topic could be improved.

    Introduction

    “Modes of assessment should reflect what is important for the world beyond school,” wrote Sutherland et al. (2004, p. 424).

    Math teachers hear the same complaint from their students: why will I ever need to know this? For math teachers this question is especially difficult to answer because complex math is rarely visible outside of school, unlike the subject matter of many other courses. Math teachers must constantly fight for their students' attention because students cannot relate to math the way they can to other subjects. One increasingly popular way to handle these issues is to use technology to illustrate the relationship of math to students' everyday lives in a learner-friendly way. Technology has the potential to unlock new ways to give students a glimpse into an analytical world and the tools to explore it.

    Background

    Mathematics education has been a focus of computer programs since they were first created; in fact, mathematics software is one of the most popular categories of software (TERC, 1999). Implementing these programs in the classroom has been a long-term goal for many researchers because of the potential this technology holds for the future math student. Contrary to popular belief, the technology is meant as an aid for the teacher to use in combination with other teaching techniques and is in no way a replacement for the teacher (TERC, 1999).

    Literature Review

    In this literature review, I compare and contrast previous studies in order to learn how technology has evolved over the years to become more effective in the classroom. This analysis will reveal which methods have worked well and which factors have contributed to the failure of technological innovations to improve performance. These results can then be used to evaluate the new project that is the focus of this research project, Interactive Classroom Technology (ICT).

    ICT refers to the use of technology in the classroom, whether by computer, overhead projector, television, or any other electronic device that allows students to instantly interact with the mathematical problems at hand. The major focus of ICT in the classroom is understanding, and the literature indicates that “…goals are best served by the creation of communities of learners in which students are actively engaged in the process of mathematical sense making [sic]” so that each student may learn a different way to approach a topic (TERC, 1999, p. 3). The ICT communities are built mainly among students, as well as between students and teachers, and can be quite varied. Results indicate that classes involving discussion boards tend to do better on standardized tests than classes that do not use them, especially in the more analytical subjects.

    The most common form of ICT in the classroom is a set of classroom computers that the students use to access assignments and participate in activities that allow them to manipulate problems, verbalize mathematical processes, and get immediate feedback about why certain answers are correct while others are not (Bottino, 2004). This form helps students stay current with their work by reducing the amount of turn-around time it would normally take to get a detailed explanation of why a particular line of reasoning fails in one case but works in another. In some cases, the software can tell students immediately that a mistake was made and give them a detailed description of how to correct it, which allows the students to fix mistakes before they finish the assignment (Bottino, 2004).

    A sort of virtual classroom can also be formed from ICT technology. It can be useful for students who have difficulty receiving instruction on a regular basis. Students from rural areas or medically disadvantaged students, for example, do not always have the option of being in school every day, but with a modem connection there is now a way to get the information to them. The interactive whiteboard is another method teachers can use to communicate with students in remote locations, a major advance for students who miss school regularly (Sutherland et al., 2004).

    A common complaint among teachers and administrators is the little things that completely derail a project after weeks of planning. As one study put it, “Computer parts such as towers, monitors, keyboards, or mice disappear, bulbs burn out, batteries run out, viruses attack, and even theft alarms can sound in the middle of class sessions” (Bunz, 2001, p. 7). Even when these problems do not come up, teachers still need a large amount of planning time to work these systems into their lessons effectively.

    Cost, of course, is the most commonly cited reason for failure to implement technological integration. The technology is incredibly expensive for a school budget. Furthermore, even when schools have initial help installing technology, maintenance and repair of these systems fall on the schools (Dale, Robertson, and Shortis, 2004). Government involvement typically does not last more than a year or two after the research is complete, and the schools are then stuck paying for advancements and upgrades until the entire system becomes so obsolete that they are forced to scrap the program entirely.

    Data Analysis

    A survey of teachers using technology to teach (Reynolds, Treharne, and Tripp, 2003) found that the majority of teachers think that computers can raise the educational bar. The dissent over the use of technology arose mainly over its implementation; few lacked faith in the technology itself. Teachers who are now veterans of the system are in complete support of it, finding it hard to give up the immediacy of the results (Gerace, Dufresne, and Leonard, 1999). Teachers seem to like the immediate feedback because it lets them know how to pace the rest of the lecture rather than just plunging on with the lesson and hoping that students will ask when something does not make sense.

    While most of the studies found that technology in the classroom was proving effective most of the time, not all were supportive of it. Those who claimed to have more problems than solutions with ICT usually cited upkeep, typical technological issues, and training gaps that left teachers unsure of how to use these materials properly. Cost has always weighed heavily on schools, so it is difficult to convince them to spend so much money on technology that will be obsolete in five years, especially while promising benefits but accepting no responsibility when things go awry.

    Attitudes toward ICT might be an important factor. In schools where this type of education was available, it was more successful when teachers worked harder on their lesson plans, took more prep time to ready their classroom, and fully threw their support behind the program. Teachers who were less enthusiastic in the beginning have found that their results are significantly lower than the results of other classrooms. This may mean that courses for teachers should address their attitudes toward ICT, as well as their skills with the technology.

    Summary

    All these studies focused on the problems that arose with the technology and how the students progressed or failed, along with possible reasons for each. Some teachers think of this as an “out” for not teaching the actual material when that is not the purpose of the technology-based approach. When this happens, the teachers blame ICT for not picking up the slack and researchers are frustrated with the teachers who expect machines to completely take over their classroom (TERC, 1999). Each study also included teacher and/or researcher remarks about how and why the results were the way they were.

    The results indicate that the more effective ICT was, the more willing the teacher was to use it in the first place. This may prove to be a significant factor later, where studies could directly link the teacher’s views with the outcome success rate. It may also be that teachers’ opinions of how the technology should work caused some teachers to use the technology in ways it was not intended to be used, in effect causing the approach to fail (Reynolds, Treharne, and Tripp, 2003).

    Despite the dissent from a minority of teachers, most teachers saw a dramatic improvement in their students following technology instruction. “Secondary schools stated that ICT is particularly useful for motivating traditional underachievers,” meaning those students who rarely interact with their classes now have a way to communicate (Reynolds, Treharne, and Tripp, 2003, p. 161). The one question that was never really asked in any of these studies was how the students felt about the new methodology for teaching. This should be an important research question for any researcher and it is interesting to speculate on why this question was not asked.

    Hypothesis

    The main hypothesis behind this project is that students who receive only technology-based instruction will do better than the instructional control group. Based on the literature available, technology seems to be quite effective in the classroom when it is properly used, so it is reasonable to expect this trend to continue. However, two additional conditions must be added to this hypothesis: 1) the students must be consulted about their opinions on the methods, and 2) guidelines for how and when teachers use the technology, as well as their attitudes toward the use of technology, must be more strictly measured and controlled in order to achieve uniform results. Based on the findings from prior studies, an interactive site for messages should be included to promote discussion pertinent to the material being covered in order to increase students’ communication in and outside of the classroom. Teachers again are not replaced by technology, but should use it to enhance what they are teaching in their classroom to help students visualize the methods being used and what makes them more effective than other methods.

    Importance

    If teachers can learn to use ICT to their advantage by interacting more with it themselves or allowing their students greater freedom on the machines, it may help the students gain skills and experience that they will need later in life. For now, the major focus should be on what the most effective method is, how it should be used, and at what point in the learning process it should be introduced. After this preliminary work, the most effective methods should be evaluated as an option for students who are geographically isolated or on extended periods of absence so that they may keep up with their class work and not fall behind, but this remains a goal for the future. Although the potential exists, the kinks must be worked out of the system at the school level first.

    Methods

    Briefly, students from Strickland Middle School in grades seven and eight were separated into control and experimental groups at random, and all were given a pretest on the material to be covered during the two-week period for each session. The control group was taught only by the instructor and the experimental group was taught only by the computer programs during the two weeks following the pretest. Students were given a posttest at the end of two weeks with the same questions as the pretest to determine the amount of progress they made. At the end of the third session, interviews were conducted with randomly selected students to understand their views of the different methods of instruction. Both sets of data were transcribed and analyzed.

    Consent

    Notices were sent home with the proposed subjects informing them of the project and the risks/rewards that were associated with it. Once those forms had been read, signed, and returned, the study began.

    Grouping

    The 10 subjects were divided into a control group and an experimental group with 5 students in each group. The groups were chosen randomly from the 10 participants, and students were not identified in any potentially compromising way. The students were known only through coded pseudonyms such as “EMary” and “CScott,” where “E” denotes the experimental group and “C” denotes the control group.
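The random assignment and pseudonym coding described above can be sketched as follows. This is only an illustration: the names and the fixed seed are hypothetical, and the study used its own roster.

```python
import random

# Hypothetical roster of the 10 consenting participants (names invented;
# the actual study used its own roster and coding).
participants = ["Mary", "Scott", "Ana", "Ben", "Cara",
                "Dev", "Ella", "Finn", "Gia", "Hugo"]

rng = random.Random(2005)  # fixed seed so the split is reproducible
shuffled = participants[:]
rng.shuffle(shuffled)

# First five go to the experimental group ("E" prefix), the rest to the
# control group ("C" prefix), matching the paper's pseudonym scheme.
experimental = ["E" + name for name in shuffled[:5]]
control = ["C" + name for name in shuffled[5:]]
```

Because the shuffle touches every participant exactly once, each student has an equal chance of landing in either group, which is what the pretest comparison later relies on.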

    Instruction

    The control group learned through the traditional teaching method, in which the teacher instructed the classroom with minimal technological aids, meaning no computer programs or games, including the GoToLearn™ diagnostic software. The experimental group used both the traditional teaching methods and the approved computer programs. In this case, teachers taught the lesson for one day and then allowed students to practice the concepts on the computers in order to provide them with, in theory, a more interactive lesson.

    Evaluation

    The study began with a pretest before each session based on the Texas Assessment of Knowledge and Skills (TAKS) objectives for the seventh grade. This test was given to all of the participating subjects as a baseline for the experiment, against which each group's progress could be measured. During the last week of the third two-week session, the investigator began interviews of randomly selected students from each group. These interviews examined how the students viewed the technology in the classroom and their evaluation of its effectiveness. At the end of each two-week session, students were given a posttest, and the scores were compared to their original test scores to measure differences. After scores were reported, the pseudonyms used for the comparison were properly destroyed so that only the scores and group (control or experimental) identifiers remained. The students were then labeled, for example, ESubject1, in no particular order so that each student’s data were still matched up but no names were used. This allowed publication of results without compromising anonymity.

    The second and third sessions ran the same as the first, using the same 10 students. These sessions still had five students in the control group and five students in the experimental group, and all other terms were likewise the same as in the first session.

    Results

    Quantitative Results

    The first of the three two-week sessions did not support the hypothesis that the students relying only on technology were more successful than their control group counterparts. As shown in Table 1, in the ANOVA for the first session the two groups did not differ on the pretest scores (F = .914, p = .367), which we would expect if the students were randomly assigned to the experimental and control groups. The two groups did not differ on the posttest either (F = .050, p = .829). As shown in Table 2, in the second two-week period the two groups did not differ on the pretest, as expected (F = .600, p = .461), but they were significantly different on the posttest (F = 5.556, p = .046). In a student-to-student comparison, the control group actually improved its scores more than the experimental group in both sessions 1 and 2, as shown in Figure 1 and Figure 2. Due to problems with the execution of the third two-week session, those data were not analyzed; this is explained further in a later section of the paper.
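A two-group one-way ANOVA like the ones reported above can be computed directly from the group scores. The sketch below uses hypothetical posttest scores (five students per group, 0-100 scale); the actual data are summarized in Tables 1 and 2.

```python
def one_way_anova(group_a, group_b):
    """Return (F, df_between, df_within) for a two-group one-way ANOVA."""
    def mean(xs):
        return sum(xs) / len(xs)

    all_scores = group_a + group_b
    grand_mean = mean(all_scores)

    # Between-groups sum of squares: group size times squared deviation
    # of each group mean from the grand mean.
    ss_between = sum(len(g) * (mean(g) - grand_mean) ** 2
                     for g in (group_a, group_b))
    # Within-groups sum of squares: deviations from each group's own mean.
    ss_within = sum((x - mean(g)) ** 2
                    for g in (group_a, group_b) for x in g)

    df_between = 1                      # two groups: 2 - 1
    df_within = len(all_scores) - 2     # N - number of groups
    f_stat = (ss_between / df_between) / (ss_within / df_within)
    return f_stat, df_between, df_within

# Hypothetical posttest scores, not the study's data.
control = [70, 60, 80, 50, 90]
experimental = [40, 50, 60, 30, 70]
f_stat, df_b, df_w = one_way_anova(control, experimental)
# For these invented scores, F = 4.0 with df = (1, 8).
```

The resulting F is then compared against the F distribution with (1, 8) degrees of freedom, which is how the p-values in Tables 1 and 2 were obtained.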

    Qualitative Results

    This was one of the more enlightening parts of the study because it focused on how the students felt toward and responded to their method of instruction. Each of the students in the control group, for example, believed that using computers during their instruction time would have been very helpful because computers were more “fun and more interactive” than teacher-only instruction. The students in the experimental group, however, believed that the computers were not nearly as helpful as the teacher had been: while the repetition of concepts was fine when they understood the lesson, it made a concept very hard to grasp when they did not. These experimental students felt that it “didn’t help to keep getting it wrong” and not be able to ask why.

    Most of the students in the experimental group foresaw problems with technology being integrated into their future math classes because of technical issues, but none of the control group students saw any problems with it. Students from both groups agreed that an approach mixing teacher and technology-based instruction would probably have worked better. While none of the students said that math was typically easy for them, most felt they had a better grasp on it at the end of the study than at the beginning. One comment all of the students made, when asked how they thought using computers in the future would help them, was that the computers would not yell at them like the teacher would. This suggests that although the technology provided far less structure and guidance than the teacher did, the students preferred the lighter touch.

    In the future, it is therefore recommended that research take these findings into account and examine the progress made by math students in an environment combining technology and instruction, in addition to control groups comprised of technology only, instruction only, or both.

    Limitations of the Study

    There were many problems in the collection of data for this project that may be useful to future researchers and thus warrant inclusion in this thesis. Problems came up at the start of each data collection set with the pre- and posttests used in the experiments. Each time, the tests had to be researched, compiled, reformatted, proofread, worked out, compared to material covered, and related to the ability level of the participating students, usually in less than three days. Due to the nature of educational research, topics can be planned months in advance, but until the day is imminent, no one can be sure that the students will be covering the proposed lesson. This rush created problems with proofreading the test questions for accuracy, and even though the questions were used nationwide for math testing, some multiple-choice items did not include a correct answer.

    There was also a problem with student absences. As with any activity involving students, there was always at least one student from the study not in class on each day of the sessions. Sometimes this was due to illness, fire drills, and other excused absences, but some of the students simply decided not to go to class and “skipped” that part of the day. In this class of remedial math students, it was not uncommon for students to skip class, and the faculty did not seem disturbed by their absence. This suggests that the students do not want to be in class because they do not enjoy the time they spend there. That alone should be reason enough to keep trying to find a way to keep these children interested in their education, because these students are only in middle school. These absences caused students to miss hours of instruction and delayed testing, and their scores often reflected this negatively, perpetuating the hopeless feeling the students have toward math achievement.

    The students were not the only ones missing from this study. One of the privileges of being a teacher is paid vacation days. During the course of the evaluations, there were two days during the second two-week session and two days during the third session when the teacher took personal days. While she was perfectly within her rights to do so, it created problems for the study because on those days the students sat in a room with a substitute doing worksheets, with no instruction. The students who were supposed to be learning with technology did not have access to computers, and the students who were supposed to have instruction were not given any.

    Any teacher will agree that middle school students are also not the most reliable when it comes to taking paperwork home and bringing it back. This may explain why 50 permission forms went out to the students, but only 12 came back and only 10 of those granted consent. The teacher provided grade point incentives for the return of the form, but most students did not return it, so the sample size was considerably smaller than the size projected at the beginning of the study.

    Class periods at this middle school are only 45 minutes long, so the tests could contain at most 10 questions due to time constraints. This did not allow for a very detailed summary of the students’ abilities and magnified the differences in scoring. There were also some grading discrepancies that counted some students wrong for one answer and other students right for the same answer, so all tests were re-graded upon submission to the researcher. During the first session, after looking over the pretests two hours before testing, the teacher aiding the project decided that the questions (all word problems, as TAKS questions typically are) were too difficult for the students. She created a new test consisting of simple computations without answer choices. This did not comply with the format agreed to earlier, so the computational results were not used and the original tests were given the next day. This cost a day of instruction as well.

    The third two-week session was held after the students had completed the TAKS test, during a period locally termed “TAKS meltdown,” when the students are apathetic and the teachers lack the strength to try to put more into their knowledge banks. The teacher for this study felt it appropriate to play games and draw Mandalas (a type of symmetrical pattern involving regular shapes). Such activities are difficult to test, so it was agreed that for the purpose of this study, the students would review adding and subtracting numbers and practice absolute value. These concepts were admittedly simple at this level of cognition, so it was agreed that the entire session would run six class days, two of which would be used for pre- and posttesting. This left four days of instruction, the first of which was the general lesson introduction, where all students were introduced to the concepts to be covered. On the second day of this session the teacher forgot to separate the students into the control and experimental groups, so all were taught with instruction, and on the last two days of instruction the students had a substitute. This session was therefore shorter than the others, mixed the groups, and lacked the instruction needed for the experiment, so these data were excluded from this study.

    Conclusion

    Students in the experimental group did much better than their classmates with the lessons involving more algebraic material, but fell short when tested on more geometrical concepts. The program they were using centered on repetition of information and rules but was not able to provide them with any physical manipulatives. This suggests that for the abstract concepts of algebra and arithmetic, where knowledge of the rules dominates the success rate of the students, technology would be very useful because it forces the students to read and reread the rules for better understanding, patiently working through examples to demonstrate points. For more spatial concepts, however, technology was not as successful, because students in this age range typically do better when they have something concrete and tangible to work with. In both cases, though, the students are more reassured if there is a teacher around to answer questions.

    There is no way to find an educational setting free of problems with absenteeism, school activity interruptions, and testing irregularities. In classrooms with 30 students to one teacher, such problems are expected. The question is how teachers expect to teach them, and whether technology will be part of that answer. The findings in this paper affirm that this is a possible strategy. Although the results were not all significant, they indicate that technology is a tool that can help increase student learning. Technology should definitely be used in addition to, rather than in place of, a teacher, and both should be used fully. The earlier computers are introduced, the more comfortable students will be using them. They are excellent tools for practicing concepts, definitions, and theory, but if concrete models are available, those should be used alongside an instructive lesson from the teacher. As with anything in education, more research should be conducted in this area before anything is officially decided. For now, it is safe to say there is a definite reason to look into it further and implement it in the schools that support the programs.

    References

    • Bottino, R. (2004). The evolution of ICT-based learning environments: Which perspectives for the school of the future? British Journal of Educational Technology, 35, pp. 553-67.
    • Bunz, U. (2001). “Theoretically, that’s how you do it…”: Using narratives when computers let you down in the technology classroom. Unpublished doctoral dissertation, University of Kansas, Lawrence. Retrieved September 19, 2005, from ERIC database.
    • Dale, R., Robertson, S. and Shortis, T. (2004). ‘You can’t not go with the technological flow, can you?’ Constructing ‘ICT’ and ‘teaching and learning.’ Journal of Computer-Assisted Learning, 20, pp. 456-70.
    • Gerace, W., Dufresne, R., and Leonard, W. (1999). Using technology to implement active learning in large classes. University of Massachusetts, Amherst, Physics Education Research Group. Retrieved September 19, 2005, from ERIC database.
    • Reynolds, D., Treharne, D., and Tripp, H. (2003). ICT: the hopes and the reality [Abstract]. British Journal of Educational Technology, 34, pp. 151-67. Retrieved October 3, 2005, from ERIC database.
    • Sutherland, R., Armstrong, V., Barnes, S., Brawn, R., Breeze, N., Gall, M., et al. (2004). Transforming teaching and learning: Embedding ICT into everyday classroom practices. [Electronic version]. Journal of Computer-Assisted Learning, 20, pp. 413-25.
    • TERC. (1999, December). Technology meets math education: Envisioning a practical future in education. [Electronic version]. Cambridge, MA: Rubin, A. Retrieved October 2, 2005, from ERIC database.

    Table 1: ANOVA comparing the experimental and control group on the pretest and posttest for first two-week period

                         Sum of Squares   df   Mean Square      F    Sig.
    Pretest
      Between Groups            160.000    1       160.000   .914   .367
      Within Groups            1400.000    8       175.000
      Total                    1560.000    9
    Posttest
      Between Groups             10.000    1        10.000   .050   .829
      Within Groups            1600.000    8       200.000
      Total                    1610.000    9

     

    Table 2: ANOVA comparing the experimental and control group on the pretest and posttest for second two-week period

                         Sum of Squares   df   Mean Square      F    Sig.
    Pretest
      Between Groups             90.000    1        90.000   .600   .461
      Within Groups            1200.000    8       150.000
      Total                    1290.000    9
    Posttest
      Between Groups           1000.000    1      1000.000  5.556   .046
      Within Groups            1440.000    8       180.000
      Total                    2440.000    9

     

    Figure 1: Comparison of the experimental and control group subjects for the first two-week period

    E: experimental/technology only
    C: control/instruction only

    Figure 2: Comparison of the experimental and control group subjects for the second two-week period

    E: experimental/technology only
    C: control/instruction only