Measuring Student Success: The Effectiveness of Scholastic Reading Counts!

Abstract: 

Stanovich (1986) observed a phenomenon in literacy development he labeled the Matthew Effect. This notion proposes that there is a reciprocal connection between the amount of reading students accomplish and their advancement in reading ability. The current action research investigates this idea through the study of Scholastic Reading Counts! data in a fifth grade classroom. The purpose of this study was to examine the effectiveness of using Reading Counts! as an indicator of student success. The Reading Counts! program promotes independent reading and provides software-based assessment on various titles. Information was collected from the Reading Counts! portal that revealed each student’s personal Lexile level and individual data on every quiz taken. The findings indicated that as a student’s personal Lexile level increased, the percentage of on-level texts that student read decreased, an inverse relationship. These results suggest the need for a more monitored approach to using Reading Counts!.

    Introduction

    When walking into an average elementary classroom, it is evident that reading and literacy development are of the utmost importance. Language is typically strewn across the walls in the form of various anchor charts and print-rich posters, while shelves are stocked with books of all levels. Researchers today affirm this emphasis on reading, stating that literacy development is a gateway to future success (Arya, Hiebert, & Pearson, 2011; Fiester, 2010; Lee, 1999; Ray & Meyer, 2011; Sumara, 2002). These researchers suggest that the ability to read correlates with students’ health as adults, since literate adults are better able to understand symptoms and their options for treatment in the healthcare system.

    In addition, students will have frequent and persistent exposure to higher-level texts as they progress through subsequent grade levels. Reading middle school and high school texts requires students to have prior experience and developed strategies. Furthermore, the number of non-fiction textbooks in middle and high school increases in comparison with elementary school reading. Students will struggle to succeed across all content areas if they are not able to comprehend the topics within expository texts. Students who lack reading ability by the time they reach fourth grade are said to already be on track to dropping out, decreasing the pool of competent, educated individuals in the future workforce (Fiester, 2010). Reading also expands students’ perceived possibilities in life and their imagination and interpretation of commonplace occurrences, deepening their worldly experiences and knowledge (Sumara, 2002). While students have an urgent need to read in order to climb the academic ladder to graduation, there are also the aforementioned real-world consequences of a lack of reading ability for life skills development.

    Based on the necessity and importance of reading, teachers in elementary schools seek a way to track students’ reading competence. Reading management programs, electronic systems that assess and reward students’ independent reading of books, such as Accelerated Reader® 360 (Renaissance Learning, 2015) and Scholastic Reading Counts!® (Scholastic, 2015), serve these purposes. Each program operates differently, but all provide data on students’ reading comprehension performance on independent texts. Comprehension is tested with short quizzes following the completion of each independent text. The data management systems strive to motivate readers to read more by awarding points for successful completion of each quiz. Both systems generate individual student data in the form of Lexile reading levels of students, Lexile levels of books they have read, passing rates on quizzes, points accumulated, grade level equivalences, and titles of books chosen. In essence, teachers are supplied with a plethora of data from reading data management programs. However, the question lies in discerning what information teachers should track regarding the growth of students’ reading competence and the development of the complex reading skills needed beyond elementary school.

    Problem

    Students could potentially pursue the accumulation of reading points with little regard for the Lexile reading levels of books, thereby limiting advancement in reading skill and in the range of literature read, particularly in the non-fiction genre.

    Purpose

    The purpose of this study was to determine the relationship between students’ personal Lexile levels and reading choices, points earned, and types of books chosen, as measured by Scholastic Reading Counts! data.

    Research Questions

    1. What is the relationship between the personal Lexile level (PLL) of the student and the average Lexile level (ALL) of student book choices?
    2. What is the relationship between PLL and the percentage of books (PB) that were on or above their level?
    3. What is the relationship between PLL and points accumulated (PA) for passing book reading tests?
    4. What is the relationship between PLL and the percentage of non-fiction books (NF) that were on or above their level?

    Hypotheses

    1. It was hypothesized that the relationship between PLL and ALL would be positive.
    2. It was hypothesized that the relationship between PLL and PB would be positive.
    3. It was hypothesized that the relationship between PLL and PA would be positive.
    4. It was hypothesized that the relationship between PLL and NF would be positive.

    General Definitions

    Lexile Reading Level 

    A reader’s Lexile level expresses a numerical measure of his/her reading ability on the MetaMetrics Lexile scale – 550L is 550 Lexile. A higher Lexile level expresses a higher reading level ability. This number anticipates a 75 percent comprehension rate for a book of the same Lexile measure (MetaMetrics, 2015a).

    Lexile Text Level 

    A text’s Lexile level is determined by its sentence structure and vocabulary features and is represented as a numerical measure. The Lexile level of a text is displayed the same way as a reader’s measure – 600L is 600 Lexile. The higher the Lexile level, the higher the text complexity of the book (MetaMetrics, 2015a).

    Grade Level Equivalent 

    This is the readability of a book by grade level. The number represents the grade level at which a student reading on grade level would be able to independently read the text (Manna, 2015).

    Literature Review

    According to Stanovich (1986), there is a phenomenon observed in literacy development coined the Matthew Effect. This concept suggests that there is a reciprocal relationship between the reading volume of students and their growth in reading ability. As children read more, they gain exposure to new, higher-level vocabulary and further their decoding skills. Children with less extensive vocabulary bases and a lack of enjoyment for reading tend to develop their vocabulary knowledge at a slower rate, which hinders the growth of their reading ability compared to peers. Essentially, the rich get richer and the poor get poorer (Stanovich, 1986). The inequitable supply of knowledge and experiences that students bring into the classroom shapes their individual efforts at comprehending a text (Shea, 2011). The consistent inadequacy of the “poor” has an adverse effect on their attitude toward reading, weakening the likelihood that they will pick up a book of their own volition. “The more successful experiences that the students have, the more likely it is that the students will continue to read” (Culmo, 2009, p. 12). Therefore, the factor of motivation has become an increasing concern for educators as they consider how to remedy the aforementioned achievement gap.

    Independent reading time has become a strategy teachers use to boost motivation and reading achievement in students. A study conducted by Jill R. Culmo (2009) on the impact of structured daily independent self-selected reading in a second grade classroom looked at two groups of students: those who had independent reading time with follow-up activities and those with no follow-up activities. The class with no follow-up activities still maintained independent self-selected reading time. Both groups showed consistently increased motivation scores at the conclusion of the study. In regard to reading achievement, structured daily independent reading of self-selected texts produced considerable gains in scores for both groups from pretest to posttest.

    The opportunity for choice is a significant contributing factor to student engagement and interest in reading. According to Vieira and Grantham (2011), students who perceive control over book choice generally develop an interest in what they are reading and become more engaged in the content. Those who did not perceive choice simply went through the motions of reading, never becoming emotionally invested in the story. The self-selected component of independent reading time allows students to devote themselves to a book that appeals to them, increasing engagement and vocabulary acquisition. Culmo (2009) noticed within her study that students also develop specific preferences for books and authors because of the extended exposure to literature. In addition, students establish a more refined strategy for choosing books that they are then able to articulate to others. A significant finding that arose from her study was that no student pronounced himself or herself a good reader or a bad reader. They all established themselves as readers through their accounts of personal preferences and individual ways to become a better reader.

    Since there are few resources on the relationship between students’ book choices and their success in the Reading Counts! (RC) program, there is no viable resource to provide a consensus on the matter. Culmo (2009) identified four distinct thematic categories when interviewing students about personal preference in book choice: favorite authors, series, fiction/nonfiction, and affect. Many students maintained a favorite author throughout the study whose writing was appealing enough for them to read multiple works by that author. Other students chose to read books from a series, enjoying the predictable nature or the growing storyline of each subsequent book. In regard to fiction/nonfiction, students preferred one, the other, or a combination of both types when making book choices. Some students relished the fantasy aspect of fiction books, with moments such as animals talking or performing human actions. The students who preferred nonfiction appreciated learning about “real things” as opposed to the contrary. The one overarching theme in student book choices was a preference for books that made them feel something while reading; the general agreement among student responses was that they preferred books they thought to be funny.

    Kragler and Nolley (1996) also found a recurring theme of book choices related to student preference for an author or interest in a particular series with fourth grade students. The researchers went a step further and studied the students’ abilities to choose books at appropriate difficulty levels. The classroom teacher conducted conferences with each student regularly and chose three random records to help determine an individual’s capacity to self-select a book on his or her level. Kragler and Nolley collected two pieces of data from the conferences: 1) a readability check on each book discussed in conference; and 2) notes on significant miscues as the student read. Of the book choices, 62% were at a student’s independent reading level, defined as relatively easy for the student to read, with a word accuracy score of 95% (Partnership for Reading, 2001). Another 25% of book choices were at a student’s instructional level, challenging but manageable for the reader with a word accuracy score of 90% (Partnership for Reading, 2001). The remaining 18% of book choices were at frustration level, making the text difficult for the student to read with less than 90% word accuracy (Partnership for Reading, 2001). Even though only 14% of students reported choosing a book based on the appropriate difficulty level, almost 90% of the books chosen by students were within a suitable reading level. This demonstrated that the majority of students, whether consciously or not, tended to choose books appropriate for their skillset.

    Method

    Participants and Setting

    The participants were 51 students in two sections of a fifth grade Language Arts/Social Studies classroom. Of the student group, 23 students were girls and the remaining 28 were boys. The student demographic consisted of 9.8% African American students, 9.8% Hispanic students, 2% Asian students, and 78.4% White students. The participants had differing academic performance levels. Eight students participated in EXPO, a gifted and talented program for high-performing students. Six students were identified as special education students and two were identified with dyslexia; one of the special education students was also identified as an English Language Learner, and one of the students with dyslexia was also identified as a special education student. Students had exposure to the Scholastic Reading Counts! program in earlier school years.

    Instrument

    For the purposes of this study, Scholastic Reading Counts! was the program used to examine independent reading success. Reading Counts! (RC) is a system of electronic instruments that provides reading and testing practice, formative assessment of reading competence, and evaluation of student reading level. The Scholastic Reading Inventory (SRI), the evaluative component of RC, is a computer-administered test that evaluates students’ reading comprehension and produces a Personal Lexile reading level for each student (Scholastic, 2015). The reader consistency correlation for the SRI produced an r value of 0.894, suggesting high reliability for the assessment (Scholastic, 2007).

    The purpose of the Personal Lexile is to provide students and teachers with a recommended starting point for selecting books to read that align with or exceed Personal Lexile levels. Ascertaining appropriate difficulty levels of materials for students is important to foster maximum opportunities for learning (Shea, 2011). A second component of the RC program is the set of computer-generated Reading Counts! quizzes. Each quiz consists of multiple-choice questions based on the content of the book and requires a default passing score of 70% to demonstrate comprehension. Points are awarded by the program on a pass/no pass basis: students who fail a quiz are awarded zero points, while students who pass are awarded the full point value for comprehending the text. Points are assigned to a text based on a combination of interest level and word count (Scholastic, 2015). Interest levels are measured by whether an elementary, middle, or high school student would be interested in reading the text (Scholastic, 2015). Each time a student reads an RC book, they independently complete an RC quiz on the book.
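
    To make the pass/no pass scoring rule above concrete, the following is a minimal Python sketch. The 70% default passing threshold reflects the program description above; the function name, the example point value, and everything else in the listing are hypothetical illustrations rather than Scholastic’s actual implementation.

        def award_points(quiz_score, book_points, passing_threshold=0.70):
            """Return the book's full point value for a passed quiz, zero otherwise.

            quiz_score        -- proportion of quiz questions answered correctly (0.0-1.0)
            book_points       -- point value assigned to the book (from interest level and word count)
            passing_threshold -- default RC passing score of 70%
            """
            return book_points if quiz_score >= passing_threshold else 0

        # Example: a student scoring 80% on a 6-point book earns all 6 points;
        # a student scoring 60% on the same book earns none.
        print(award_points(0.80, 6))  # 6
        print(award_points(0.60, 6))  # 0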

    The system allows students to test and retest as many times as needed to pass a quiz. Each interaction with the program updates a continuous report on the child’s activity in the system. The report includes the following: the child’s personal Lexile score, the number of points accumulated, the title of each book read, the Lexile level of each book, the grade level of each book, and the level of success on each quiz. These data are available to students and teachers for the purposes of tracking each student’s reading progress. Teachers are at liberty to use the data for guiding students’ reading choices, testing strategies, and decision making regarding students’ reading progress relative to their personal Lexile levels.

    Procedure

    All students were required to independently select books, read, test, and track RC points. Students were instructed to select texts based on their personal interest and reading level. Students read selected books during designated times throughout the class day or when free time was available. They were also expected to read selections at home. Students accessed the Reading Counts! software on a school computer after reading a book. Students found quizzes by searching for the title of the text or the author’s name in the RC system. Three computers were available for taking quizzes in the classroom. Computers were also accessible in the school library and other teachers’ classrooms as needed. Students were responsible for tracking progress and setting goals for obtaining required points. As students completed a quiz, they were expected to examine the Student Reading Report for points accumulated in the six-week period and determine whether they had met the 35-point requirement set by the teacher. In addition to a specific number of RC points, the classroom teacher required students to select books across genres such as non-fiction and poetry, in addition to other personal choices students made. Multiple opportunities were provided throughout the class period to complete quizzes and obtain points.

    Once the morning tasks on the board were completed, students could read a “special friend,” an independently selected book. Students could remain in their desks or sit in a spot around the room until the day’s lesson began. Blankets, pillows, and stuffed animals were available at the front to make the reading experience more comfortable for students choosing to sit around the room. Shelves with various book titles were situated on each of the walls and were grouped according to Lexile reading level. A designated shelving unit housed all of the non-fiction books.

    Depending on the unit of study in Social Studies, the classroom teacher selected multiple non-fiction titles from the library to place on a desk at the front of the room. These texts reinforced topics across the two content areas (reading and social studies). Lexile levels were not a consideration when adding social studies/non-fiction books to the collection on the desk. Students lacking non-fiction points were encouraged to select from the non-fiction, social studies topic books on the desk.

    Read-a-thon days were days when the students had the entire class period to read. Read-a-thon days provided additional opportunities for students to acquire the required thirty-five points. To support the comfort and enjoyment of reading on these days, students were allowed to take off their shoes in the classroom and bring pillows and blankets from home. During Read-a-thon days students were free to read and quiz at will.

    If a student failed a quiz, no matter when it was completed, he or she was required to retake it the next day to acquire the points. As the conclusion of the six weeks approached, students who had not obtained all required points did not participate in recess each day until the points were accumulated. All students acquired all six-weeks points by the due date. Students were very aware of and oriented toward accumulating required RC points; point accumulation became the purpose for reading and quizzing.

    Data Collection

    Data were collected from the Scholastic Achievement Manager (SAM), the teacher portal for Reading Counts! The data represented all books the students had read and tested on since the beginning of the school year. Individual Student Reading Reports within SAM generated the date, book title, Lexile level of the book, reading level of the book, quiz score, points earned, and words read for each Reading Counts! quiz taken. Students completed the Scholastic Reading Inventory (SRI) in October, which produced the personal Lexile level of each student. Once a week, data were extracted from SAM to generate updated student performance information on Reading Counts! quizzes and detailed text reports.

    Data used from the Student Reading Report included student name, personal Lexile level, average Lexile level of books tested, average reading level of books tested, average quiz score, total points earned, and average words read. This information was loaded into an Excel spreadsheet with column headings for each category. For comparison purposes, additional data were generated showing the percentage of books read and tested that were on or above the student’s personal Lexile level. In addition, a final column was added to the spreadsheet representing the percentage of non-fiction books read within the category of books on or above the student’s personal Lexile level.
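
    Although the study’s calculations were performed in Excel, the derivation of the two comparison columns can be illustrated with a small pandas sketch. The column names, sample values, and grouping logic below are hypothetical; the definitions of the two percentages follow the description above.

        import pandas as pd

        # One row per Reading Counts! quiz taken (hypothetical columns and values).
        quizzes = pd.DataFrame({
            "student": ["A", "A", "A", "B", "B"],
            "book_lexile": [650, 540, 720, 900, 610],
            "nonfiction": [True, False, True, False, False],
        })
        # One row per student with the personal Lexile level from the SRI.
        students = pd.DataFrame({"student": ["A", "B"], "PLL": [600, 880]})

        df = quizzes.merge(students, on="student")
        df["on_or_above"] = df["book_lexile"] >= df["PLL"]

        rows = []
        for student, g in df.groupby("student"):
            on_level = g[g["on_or_above"]]
            rows.append({
                "student": student,
                # PB: percentage of all books tested that were on or above the student's PLL
                "PB": 100 * g["on_or_above"].mean(),
                # NF: percentage of the on-or-above-level books that were non-fiction
                "NF": 100 * on_level["nonfiction"].mean() if len(on_level) else 0.0,
            })
        summary = pd.DataFrame(rows)
        print(summary)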

    Data Analysis

    Group means across all categories were calculated and are represented in Table 1.

    Multiple correlations were conducted in Excel on the data in Table 1. Correlation coefficients were calculated for the relationships between: 1) the personal Lexile level (PLL) and the average Lexile level (ALL) of student book choices; 2) PLL and the percentage of books (PB) that were on or above the student’s level; 3) PLL and the percentage of non-fiction books on level (NF); and 4) PLL and points accumulated (PA) for passing reading tests.
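
    For reference, the coefficients reported below are Pearson’s r values, the statistic Excel’s CORREL function computes. The same four correlations can be sketched in Python as follows; the per-student summary values in the listing are hypothetical placeholders, not the study’s data.

        import pandas as pd

        # Hypothetical per-student summary (one row per student), mirroring the
        # spreadsheet columns described above.
        data = pd.DataFrame({
            "PLL": [520, 640, 810, 950, 1210],
            "ALL": [600, 650, 700, 720, 760],
            "PB":  [65, 58, 30, 15, 0],
            "PA":  [150, 220, 280, 360, 430],
            "NF":  [70, 55, 35, 15, 0],
        })

        for col in ["ALL", "PB", "PA", "NF"]:
            r = data["PLL"].corr(data[col])  # Pearson's r by default
            print(f"PLL vs {col}: r = {r:.3f}")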

    Results

    The data analyses showed variation in relationships. In this study, it was hypothesized that the relationships between 1) PLL and ALL, 2) PLL and PB, 3) PLL and PA, and 4) PLL and NF would be positive. Results of the correlation analyses tested these hypotheses.

    PLL/ALL

    The correlation coefficient of 0.615 demonstrated a positive relationship between the personal Lexile level of students and the average Lexile level of student book choices. This finding supports the first hypothesis: as a student’s Lexile level increases, so does the Lexile level of the student’s average book choice. This value suggested a strong positive relationship between the two sets of data. Figure 1 provides a graphical representation of this relationship with a trend line. The recommended Lexile level range for books read is represented on the graph as a red line. This line is the ideal Lexile level range for books read by students to provide a sufficient challenge based on recorded personal Lexile levels (MetaMetrics, 2015b).

    However, compared to the possible Lexile range, noted by the red line in Figure 1 showing a linear relationship between potential PLL and ALL, most students were not approximating their PLL, particularly among higher-level readers. Figure 1 shows that students within Lexiles 400-599 were on average reading books above their Lexile level. Students within the 600-1299 range were on average reading texts on or below their Lexile level. Although the original hypothesis was supported, the expectation of reading on or above the personal Lexile level was unrealized for students with PLL > 600.

    PLL/PB

    The correlation coefficient of -0.870 indicates a very strong negative relationship between the personal Lexile level (PLL) of the students and the percentage of books (PB) that were on or above the student’s level. This finding does not support the hypothesis that the relationship would be positive. Figure 2 shows the decline in the percentage of books read and tested that were on or above a student’s personal Lexile level. The relationship is further supported by the data in Table 1. Students within the 400-699 personal Lexile range were reading approximately 60% of their books on or above level. The percentage declines from 46% for the 700-799 range to 19% for the 800-899 range and continues to decrease thereafter. Students within the highest Lexile range (1200-1299) had 0% of their book choices on or above their Lexile level. The mean for the entire student group was 22% PB. The numbers suggest that lower Lexile level students were choosing more than half of their books on or above their Lexile level while their upper-level counterparts were selecting fewer than 20%.

    PLL/PA

    The relationship between PLL and points accumulated (PA) for passing book reading tests produced a correlation coefficient of 0.393, which indicated a moderate positive relationship. The hypothesis that there would be a positive relationship between PLL and PA was supported, but only moderately. Figure 3 demonstrates that as personal Lexile level increased, student point totals increased. Students within the highest Lexile level had a mean accumulated point total of 430, almost double the 158 points obtained by students within the lowest Lexile range. These means suggest that students in the higher Lexile ranges were reading books with higher point values and/or more books in general.

    PLL/NF

    The relationship between PLL and the percentage of non-fiction books on or above level (NF) produced a correlation coefficient of -0.551. The hypothesis that there would be a positive correlation was not supported. The negative relationship is shown in Figure 4. The percentage of non-fiction books read decreased across groups as the personal Lexile level increased. Students within the highest Lexile range (1200-1299) read a mean of 0% non-fiction books on or above their personal Lexile levels. Students within the 500-599 PLL range had the highest mean percentage of NF books, at 71%. These findings align with the relationship between PLL and PB in that NF decreases as Lexile level ascends.

    Discussion

    The effectiveness of the Scholastic Reading Counts! program can be both supported and refuted based on how it is used in the classroom. With regard to the value of points, students accumulated the points expected of them; therefore, it can be said that the program was effective in helping the teacher manage students’ reading choices, quiz completion, and points accumulated. Students were able to retake quizzes to gain the points they lacked. Framing reading in terms of point values eased students’ access to the program and their management of point profiles. In addition, the teacher could adjust Read-a-thon days, recess loss, and encouragement to read the non-fiction books on the desk according to students’ point acquisition across the six weeks. The fact that all students accumulated all required points every six weeks attests to the effectiveness of the program with regard to point accumulation. In spite of this mark of effectiveness, a question arose regarding the other data produced by the program: are the data being used to the fullest?

    The answer to the question can be found by reflecting on the research hypotheses and the information found in analyzing the data for relationships beyond nominal point counts. The relationship between PLL and PA proved to be positive, supporting the prediction made in H3. Students were accumulating higher point totals as Lexile levels increased. This aligns with the idea that students with a higher reading level would be reading more books or reading books with higher point values. The aggregation of points could be viewed as successful in the classroom as long as students had met the required points and books in each genre. H1 was also supported by the data shown in Figure 1, which depicts a positive relationship between PLL and ALL.

    However, a closer examination of the data reveals that upper Lexile level readers selected books far below their personal Lexile levels, reading much lower Lexile level books to accumulate points. MetaMetrics (2015b) describes the Lexile “sweet spot” as 100L below a student’s personal Lexile level to 50L above. In the case of the students in this study, readers with personal Lexile levels between 800 and 1300 did not meet the sweet spot recommendation. The findings suggest that students performing at the higher Lexile levels were not choosing books within the suggested range. This could mean that these students are not adequately challenging themselves, but are motivated by collecting points.
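
    The sweet spot rule is simple enough to state as a check: a book falls within a reader’s sweet spot when its Lexile measure is no more than 100L below and no more than 50L above the reader’s personal Lexile level. A hypothetical helper illustrating the comparison used in this discussion:

        def in_sweet_spot(book_lexile, personal_lexile):
            """True when the book's Lexile falls within 100L below to 50L above the reader's PLL."""
            return personal_lexile - 100 <= book_lexile <= personal_lexile + 50

        # Example: for a reader with a 900L personal Lexile, an 820L book is within
        # the sweet spot, while a 600L book chosen only for easy points is not.
        print(in_sweet_spot(820, 900))  # True
        print(in_sweet_spot(600, 900))  # False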

    The refutation of H2 by the data further supports this argument by showing that students were, on average, reading below their level as Lexile level increased. Advanced readers within the 1000-1299 Lexile range had average PB levels below 3%, supporting the notion that they did not challenge themselves as readers. The PLL/NF findings show that, of the books on their level, students in the upper Lexile range (900-1299) were reading less than 20% non-fiction. The low percentage of non-fiction books read implies that these upper Lexile level readers could be under-prepared for future non-fiction reading expectations in middle and high school. While these students were considered advanced readers, the lack of exposure to on- or above-Lexile-level books could present a challenge later (Ray & Meyer, 2011).

    Findings show that students in the study focused more on the points accumulated than on the challenge of reading, particularly those within the upper Lexile ranges. The RC data provide valuable pieces of information on the patterns and choices that students made in reading, yet the focus in the class was on the single column of total points. However, teachers have the option to use all of the data produced in the RC reports. The comparisons conducted in this study show that relying on a single measure from the plethora of available data yields simplicity, but also a lack of knowledge about students’ readiness for the more complex text reading of subsequent years.

    Attention should be paid to students extending their reading comprehension skills in preparation for middle school texts. Students within the upper Lexile ranges chose to read below their personal Lexile level in the interest of points. The strategy for high-level readers had become one of obtaining the points needed through multiple lower-level texts, not one of stretching their skills as readers. If these students continue on this plateau of unchallenging reading, it is possible that their complacency will be shaken in middle school when they struggle to comprehend texts.

    Students in the lower Lexile ranges exceeded the appropriate reach for the Lexile level of their books for the same reason: points. Books with higher Lexile levels were stressed in the classroom because of their higher point values. The lower Lexile level students did not consider that the texts were beyond an adequate challenge. These same students were at risk of dropping out, based on their low reading levels for the grade and the battle that lies ahead of them in middle school with expository texts (Fiester, 2010). Nevertheless, they were encouraged to read books outside their range for the purpose of accumulating points by the end of the six weeks. The findings in this study are important because they communicate to teachers that this program has the potential to support a deeper understanding of students’ reading potential, well beyond points accumulated. It is important to consider future reading needs, current student success, and the relationship between students’ initial personal Lexile level and their progress toward higher levels of competence. Student success can also be measured by comparisons between personal Lexile scores and other factors found on the students’ profile pages.

    Conclusion

    The results of this study refuted the effectiveness of Scholastic Reading Counts! in regard to measuring student success using a point accumulation approach only. While students were collecting the points required, those in the upper Lexile levels were, on average, not choosing books on or above their level. Students within the lower Lexile levels were, on average, choosing books too far above their Lexile level. However, Reading Counts! could be an effective program if certain considerations are taken into account. First, it is a likely conclusion that if teachers track and manage both accumulated points and the Lexile levels of book choices, students in the upper Lexile levels will be more appropriately challenged and students in the lower Lexile levels will better comprehend texts the first time. The fact that Reading Counts! produced a sufficient and detailed amount of data shows that the program is varied and valuable to teachers on multiple levels. Teachers could expand their vision to the other columns produced by the student reading reports and take into account the Lexile level of each book, the score on each quiz, and the average Lexile level of texts tested to measure success and growth.

    If students were reading within the range defined by MetaMetrics, sufficient growth would be expected in reading ability (MetaMetrics, 2015b). This growth could be shown by re-administering the SRI throughout the year to produce updated personal Lexile levels for students and new instructional goals for teachers to advance students’ reading competencies. In this alternate approach to data use, students in the upper Lexile levels would not be able to work around the system if the teacher’s focal point were on suitable Lexile level choices in texts. Another recommendation would be to allow students to test only on books within their Lexile range, to better manage how they obtain their points and to strengthen the goal of on-level book choices. This would eliminate lower-level “fluff points” meant simply to bring up a student’s point total for the six weeks.

    This action research project served as a platform for future research in classrooms that use Reading Counts! as a reading data management system. Future research on the use of Reading Counts! and strategies for implementing the program in an elementary classroom could further support the results of this study. It would be beneficial to examine student Lexile growth from SRI data in a classroom valuing point accumulation versus a classroom valuing appropriate book Lexile choices. The results could determine which strategy increases student comprehension, and thus student preparedness for middle school reading. This study examined data from the beginning of the school year up until the fourth six weeks. It would be valuable to determine whether the same pattern holds across the entire school year. The results could also affirm the findings of this study, namely that upper Lexile level students consistently choose to read below level. Subsequent administrations of the SRI in a proposed study might suggest that upper Lexile level students are not growing in reading ability because of less ambitious book choices. Such outcomes would mean that the stressed importance of points is actually hindering students’ reading development and their growth toward readiness for middle school texts.

    References

    Arya, D. J., Hiebert, E. H., & Pearson, P. D. (2011). The effects of syntactic and lexical complexity on the comprehension of elementary science texts. International Electronic Journal of Elementary Education, 4(1), 107–125.

    Culmo, J. R. (2009). The Impact of Structured Daily Independent Self-Selected Reading on Second Grade Students. ProQuest LLC. Retrieved from https://libproxy.library.unt.edu:9443/login?url=http://search.ebscohost.com/login.aspx?direct=true&db=eric&AN=ED515968&scope=site

    Fiester, L. (2010). Early Warning!: Why Reading by the End of the Third Grade Matters. A KIDS COUNT Special Report (pp. 1–62). Retrieved from www.aecf.org

    Lee, P. P. (1999). Why Literacy Matters. Archives of Ophthalmology, 117(1), 100. doi:10.1001/archopht.117.1.100

    Manna, R. (2015). Leveled Reading Systems, Explained | Scholastic.com. Retrieved March 27, 2015, from http://www.scholastic.com/teachers/article/leveled-reading-systems-explained

    MetaMetrics. (2015a). What Does the Lexile Measure Mean? Retrieved March 27, 2015, from http://cdn.lexile.com/m/uploads/downloadablepdfs/WhatDoestheLexileMeasureMean.pdf

    MetaMetrics. (2015b). What is a Lexile Measure? | The Lexile® Framework for Reading. Retrieved March 19, 2015, from https://lexile.com/about-lexile/lexile-overview/

    Partnership for Reading. (2001). Fluency: An Introduction | Reading Rockets. Retrieved January 4, 2015, from http://www.readingrockets.org/article/fluency-introduction

    Ray, M. N., & Meyer, B. J. F. (2011). Individual differences in children’s knowledge of expository text structures: A review of literature. International Electronic Journal of Elementary Education, 4(1), 67–82.

    Renaissance Learning. (2015). Accelerated Reader - Reading Software - Accelerated Reader Tool. Retrieved March 23, 2015, from http://www.renaissance.com/products/accelerated-reader

    Scholastic. (2007). SRI Technical Guide. Retrieved March 23, 2015, from http://teacher.scholastic.com/products/sri_reading_assessment/pdfs/SRI_TechGuide.pdf

    Scholastic. (2015). Frequently Asked Questions about Scholastic Reading Counts! Retrieved January 4, 2015, from http://teacher.scholastic.com/products/independent_reading/scholastic_reading_counts/faqs.htm

    Shea, K. P. (2011). Impact of the Scholastic Reading Counts program on reading level ability: A study of third- through fifth-grade students in west central Florida. ProQuest Dissertations and Theses. Capella University, Ann Arbor. Retrieved from https://libproxy.library.unt.edu/login?url=http://search.proquest.com/docview/899257943?accountid=7113

    Stanovich, K. E. (1986). Matthew Effects in Reading: Some Consequences of Individual Differences in the Acquisition of Literacy. Reading Research Quarterly, 21(4), 360–407. doi:10.2307/747612

    Sumara, D. J. (2002). Why Reading Literature in School Still Matters: Imagination, Interpretation, Insight (p. 200). Routledge. Retrieved from https://books.google.com/books?hl=en&lr=&id=IL6QAgAAQBAJ&pgis=1

    Table 1: Group means for RC data

    Lexile range – a 100-point range of students’ personal Lexile levels

    PLL – personal Lexile level

    PA – points accumulated

    PB – percentage of books on or above personal Lexile level

    NF – percentage of non-fiction books on or above personal Lexile level

    ALL – average Lexile level of books read

    RL – average reading level of books read

    Score – average score for Reading Counts! quizzes

    WR – average words read for book selections

    Figure 1: Relationship between PLL and ALL

    Figure 2: Relationship between PLL and PB

    Figure 3: Relationship between PLL and PA

    Figure 4: Relationship between PLL and NF