Project Studies

The research consists of two phases, with several studies conducted within each phase. Participants will include middle school students, undergraduate students, and mathematics graduate students.

Phase I: The overarching goal is to understand middle school students’ representations of mathematical objects. The Phase I studies will explore middle school students’ representations of categories and similarity in two content domains within mathematics: number and geometry. Similarity and category representations are the basis of inductive strategies in non-mathematical domains. One question, for example, is whether students have consistent and robust intuitions about similarities between mathematical objects and, critically, what determines those similarities. As part of this investigation we will explore expert-novice and ability-level differences.

Study 1.1: Sorting & Tree Building
This study uses a sort-re-sort procedure. Participants are presented with a set of items and asked to form groups. Further probes ask about combining and splitting groups. When no further groupings or distinctions are made, the items are re-shuffled and participants are asked if there is another way to organize the items. Participants’ explanations for their groupings are recorded as well.

Study 1.2: Typicality & Similarity Ratings
Participants respond to an individually administered survey. Questions will be of two types: triad similarity judgments and typicality judgments. Similarity judgments present one target object and two test objects. The task is to indicate which test object is more similar to the target. Typicality judgments also present triads. In these cases the target is a category label (e.g., “Triangle”) and the test objects are two instances of the category. Participants select the test object that is the better (more representative) example of the category.

Study 1.3: Similarity & Typicality Survey
Participants respond to a written assessment composed of similarity/typicality rating problems; specific problems will be based on results from Studies 1.1 and 1.2. Because studies in Phase II will include non-mathematical items as well, we will use the assessment to collect similarity and typicality ratings for the non-mathematical (animal) items. A small sample of participants will be selected for follow-up interviews in order to probe the rationale behind their assessment responses.

Phase II: The overarching goal is to understand the connections between middle school students’ out-of-mathematics and in-mathematics reasoning and investigate inductive inference in different contexts. In particular, the Phase II studies will explore how students make inductive inferences. For example, do students consider the similarity and typicality of examples used in mathematical arguments when assessing the quality of the argument? Do students select different kinds of examples to evaluate different kinds of hypotheses? We will continue the Phase I focus on potential group differences in inductive strategies.

Study 2.0: Argument Formulation & Evaluation
Participants will engage in a semi-structured interview designed to elicit their argument formulations as they respond to tasks prompting the verification and justification of various hypotheses. Half of the tasks will present hypotheses that participants will be prompted to verify and justify without any externally imposed structure. The remaining tasks will present varying arguments, and participants will discuss the relative strengths and weaknesses of those arguments.

Study 2.1: Argument Evaluation & Evidence Selection Experimental Task
Participants respond to a questionnaire comprising two types of questions. Argument evaluation questions present a target hypothesis and a series of pieces of evidence relating to the hypothesis. Participants rank and rate each piece of evidence for its degree of support of the hypothesis. Evidence selection questions present a hypothesis and then several sets of examples that could be examined (the results of which are unknown). Participants serially select which set of examples they would examine to assess the hypothesis.

Study 2.2: Survey 1: Variations in Evidence Type
The procedure follows that used in Study 1.3, and the written assessment will contain the same questions used in Study 2.1. We will seek to confirm the findings from the small-scale experimental tasks with a larger sample. Do we see the same kinds of distinctions among different types of examples, and are any domain differences replicated? Do higher-achieving students evaluate evidence differently than lower-achieving students? Are some students more likely to use reasoning strategies developed outside mathematics when reasoning about mathematical content? A small sample of participants will be selected for follow-up interviews in order to probe the rationale behind their assessment responses.

Study 2.3: Survey 2: Quantified Hypothesis
The structure of the written assessment in Study 2.3 will be quite similar to that used in Study 2.2. The major change is that we will focus on differences in hypothesis types, rather than differences in evidence types. All items will involve the same set of evidence types: a few similar examples, many diverse examples, and a non-example. Hypotheses will involve one of three quantifiers: all, some, or only. For some items the evidence will provide disconfirmation (rather than a degree of confirmation) of the hypothesis. For this reason the argument evaluation measure will be changed to ask how the evidence affects one’s confidence that the hypothesis is true or false.