Grand Challenge 2:
Do current measures of spatial and temporal reasoning accurately assess the skills required in the various geoscience specialties? If not, what other types of assessments need to be developed?
Rationale
Before assessing a spatial or temporal reasoning skill, a researcher must first establish that the particular type of reasoning they are studying is critical to some aspect of success in the geosciences (see GC 1). With an understanding of the essential types of spatial and temporal reasoning required by the primary geoscience specialties and tasks, we can then empirically test whether those tasks actually recruit the spatial and temporal reasoning skills that were "mapped" in GC 1. That is, if we think locating fossils requires penetrative thinking, disembedding, mental rotation, and transformation, does performance on measures of these skills predict success in locating fossils? If this investigation identifies domain-specific geoscience tasks or skills that do not align with any existing spatial or temporal reasoning measure, an important next step would be to design a more appropriate measure.
Measurement is a critical part of documenting student progress toward skill mastery and of assessing the impacts of different learning experiences (see GC 3). Many tools already exist, especially for assessing spatial thinking (see spatiallearning.org for some examples), while others likely need to be developed. For example, Resnick & Shipley (2013) introduced a new measure of mental brittle transformation in order to distinguish differences in visualization practices between geologists and organic chemists, and Dodick & Orion (2006) designed three instruments to measure middle and high school students' perceptions of time. Previous studies have used a wide array of instruments to measure spatial thinking, including the Geologic Block Cross-Sectioning Test (used by Atit, Gagnier, & Shipley, 2015); the Topographic Map Assessment; visualization, rotation, and perceptual speed tests (used in Hambrick et al., 2012); open-ended interviews with children (Ault, 1982); and measures of mental rotation, penetrative thinking, and disembedding (Ormand et al., 2014). Temporal thinking has received less attention, but instruments include the Geological Time Aptitude Test (GeoTAT; used in Dodick & Orion, 2003a) and the Temporal Spatial Test and Strategic Factors Test (TST and SFT, respectively; used in Dodick & Orion, 2003b).
Newcombe & Shipley (2015) provide a recent review of the types of spatial thinking and of assessments of spatial thinking, with particular attention to measures of disembedding, spatial visualization, mental rotation, spatial perception, and perspective taking. Uttal & Cohen (2012) and Uttal et al. (2013) reviewed studies that assessed the impact of spatial training; these reviews reference numerous spatial assessment instruments. Determining which of the current instruments capture the skills required by domain-specific geoscience tasks is an important next step.
With respect to temporal thinking, Shipp, Edwards, & Lambert (2009) provide an extensive review of temporal focus ("the attention individuals devote to thinking about the past, present, and future," p. 1), as well as a brief overview of other temporal constructs, giving for each a short definition, sample measures, whether the domain assessed is cognitive, affective, or behavioral, and known covariates or consequences. These constructs include time perspective, temporal orientation, temporal depth, time attitude, preferred polychronicity, hurriedness, and pacing style; they have not been addressed in depth within the geoscience education research literature.
Recommended Research Strategies
- Additional literature reviews would be of great benefit in establishing what assessment tools already exist and what they measure. Such reviews would be invaluable in bringing together disparate literature from cognitive science and other DBER fields, such as Physics Education Research (PER; e.g., Dori & Bara, 2001, examined the development of spatial understanding using virtual and physical molecular modeling).
- Proof-of-concept tests are needed to assess the "fit" of existing assessment tools. For example, if we hypothesize that domain-specific task X requires spatial reasoning skill Y (see Grand Challenge 1), does a test of Y predict performance on task X? Going further with that example, we might assume that mapping a bedrock anticline requires penetrative thinking; is someone's ability to map that anticline correlated with measures of penetrative thinking? (A minimal analysis sketch illustrating this kind of test follows this list.)
- Identify or develop additional metrics as appropriate to assess the spatial and temporal nature of geoscience tasks. This follows from the previous strategy and may be necessary if performance on domain-specific tasks does not correlate with existing measures of spatial and temporal thinking.
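To make the proof-of-concept strategy concrete, the sketch below shows one way to test whether scores on a penetrative-thinking measure predict performance on an anticline-mapping task. It is a minimal illustration, not a prescribed protocol: the file name, column names, and scoring scheme are hypothetical, and a real validity study would also need to attend to instrument reliability, sample size, and potential confounds such as prior coursework.

```python
# Minimal sketch of a predictive-validity check: do scores on a spatial
# reasoning measure (a hypothetical penetrative-thinking test) predict
# performance on a domain-specific task (mapping a bedrock anticline)?
# The file name and column names below are illustrative only.

import pandas as pd
from scipy import stats

# One row per participant: a penetrative-thinking test score and an
# expert-scored measure of anticline-mapping performance.
data = pd.read_csv("study_scores.csv")  # columns: penetrative_score, mapping_score

# Bivariate check: Pearson correlation between the two scores.
r, p = stats.pearsonr(data["penetrative_score"], data["mapping_score"])
print(f"Pearson r = {r:.2f}, p = {p:.3f}")

# The same relationship framed predictively: how much does mapping
# performance change per unit change in the penetrative-thinking score?
result = stats.linregress(data["penetrative_score"], data["mapping_score"])
print(f"mapping_score = {result.intercept:.2f} + {result.slope:.2f} * penetrative_score")
print(f"R^2 = {result.rvalue**2:.2f}")
```

A strong correlation in such an analysis would support the hypothesized mapping from GC 1; a weak or absent one would point to the final strategy above: either the task does not recruit that skill, or no existing instrument captures the reasoning the task requires, and a new or adapted measure may be needed.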