University of Oregon

Current Research Projects

Project ICEBERG

Date: 2015-2020    Grades: Pre-K and K

Project Iceberg (Intensifying Cognition, Early literacy, and Behavior for Exceptional Reading Growth) aims to enhance early learning data-based decision-making for preventing reading disabilities in preschool and kindergarten classrooms. Our primary goal is to build tablet-based resources and products for helping teachers effectively address children’s identified early risk for reading difficulties.

Inter-Rater Reliability Study (Oregon Extended Assessment/ORExt)

Date: Spring 2018

Behavioral Research & Teaching plans to observe a sample of Oregon’s Qualified Assessors (QAs) who administer the paper/pencil version of the Oregon Extended Assessment (ORExt) to determine the reliability of administration and scoring. We will not include the tablet administration or the Oregon Observational Rating Assessment (ORora). The study will be conducted in two ways:

  1. QTs in each district will observe an assigned sample of their respective QAs using the observation protocol.
  2. Expert reviewers from ODE and/or Behavioral Research & Teaching (BRT) will observe district-level QTs/QAs who give the assessment in more than one school/district.

The observation protocol will be completed for the identified QA, but the student(s) and content area(s) observed will be selected by the QT or QA. BRT researchers will contact district-level QTs prior to the test window, which runs from February 15 to April 26, 2018, to arrange multiple observations that, ideally, can be completed within one school day.

Modeling Growth for Students who Participate in AA-AAS (Alternate Assessment based on Alternate Achievement Standards)

Date: Current

Dan Farley is currently researching the prerequisites for modeling the magnitude of academic growth for students with significant cognitive disabilities (SWSCDs) who participate in alternate assessments based on alternate achievement standards (AA-AAS). Modeling the magnitude of growth presents many measurement challenges, such as the need for a common scale and the problem of missing data, even in general education statewide assessment contexts. Modeling growth for students who participate in AA-AAS is even more complex because of small sample sizes and population heterogeneity, among other factors. His research includes a literature synthesis, as well as multilevel modeling techniques to determine the prerequisites for measuring growth, what typical growth for SWSCDs looks like, and which covariates affect growth estimates. It is hoped that the research findings will help support states attempting to implement growth-based accountability systems using AA-AAS results.

Computerized Oral Reading Evaluation (CORE)

Date: 2014-2019

Traditional oral reading fluency assessments are quite useful in predicting students’ general reading proficiency, but although assessing one student takes only a couple of minutes, assessing an entire classroom or school requires substantial time and resources because each student is tested individually. CORE helps solve this problem with an automated speech recognition engine that scores students’ oral reading, so that many students can be tested at once. In addition, CORE uses an advanced psychometric model to overcome some of the inadequacies of traditional oral reading assessments. The purpose of CORE is to help alleviate the resource demands of one-to-one test administration, increase the reliability of fluency scores, and reduce instructional time lost to testing for both teachers and students.

Spanish Language Vocabulary Measures, Grades 2-8 (easyCBM)

Date: Fall 2018    Grades: 2-8

To Be Added in the Fall: Thanks to all the teachers and students who assisted with piloting the Spanish Language Vocabulary items, we now have sufficient data to assemble alternate forms for Benchmark Screening and Progress Monitoring. We will add these measures to the easyCBM District and Deluxe Edition systems this summer, in preparation for use next year.

Once the measures are added to the system, students and teachers will be provided with their raw scores after each test (just as with the other easyCBM measures). We will add percentile rank information, to help interpret these raw scores, incrementally after each seasonal benchmark, because calculating percentile ranks requires the scores from the students who take the tests in the fall, winter, and spring.

Thus, we’ll have percentile ranks for the Fall measures by November 15, 2018; percentile ranks for the Winter measures by March 15, 2019; and percentile ranks for the Spring measures by July 15, 2019. And, once we have the data, we’ll be able to provide percentile ranks hard-coded into the system for immediate retrieval in subsequent years.
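To illustrate why percentile ranks must wait for each seasonal benchmark, here is a minimal sketch of the computation: a raw score is interpreted against the distribution of scores collected during that benchmark window. The function name and the sample scores are hypothetical, not part of the easyCBM system.

```python
def percentile_rank(score, benchmark_scores):
    """Percent of benchmark scores at or below the given raw score."""
    at_or_below = sum(1 for s in benchmark_scores if s <= score)
    return round(100 * at_or_below / len(benchmark_scores))

# Hypothetical fall benchmark raw scores for one vocabulary measure:
fall_scores = [12, 15, 15, 18, 20, 22, 25, 25, 27, 30]

print(percentile_rank(22, fall_scores))  # 60
```

Because `benchmark_scores` cannot be assembled until the fall, winter, or spring window closes, the rank for any individual raw score is only available after that window, which is why the ranks are added incrementally rather than at test time.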