4  Course Competencies

4.1 Tier 1: Foundations of Quantitative Methods

4.1.1 Assessment Structure:

  • Competency: Completes Labs 1-3 with Meets Expectations
  • Mastery: Completes Case Study #1 with Meets Expectations

4.1.2 Concept 1: The Quantitative Revolution

Guiding Question: Why did geography become a quantitative discipline, and how did it change how we study space?

  • Describe the origins of the Quantitative Revolution in geography and how political and academic pressures pushed geography towards a “spatial science” 
  • Identify assumptions built into quantitative reasoning (measurability, objectivity, universality)
  • Reflect on critiques and limits of the Quantitative Revolution from human and critical geographers
  • Interpret early spatial models
  • Explain how the modifiable areal unit problem (MAUP), spatial dependence, spatial heterogeneity, and distance decay influence the structure and interpretation of spatial data
  • Connect the “spatial problem” to violations of independence in classical inference

4.1.3 Concept 2: Data and Descriptive Statistics

Guiding Question: How do we summarize and describe variation in data?

  • Classify different types of geographic data and variables (e.g., primary vs. secondary, individual vs. aggregated, qualitative vs. quantitative, level of measurement), and explain how these distinctions influence analysis
  • Explain the major sources of geographic data and the key dimensions of data quality (coverage, resolution, scale, temporality, completeness, positional accuracy, and bias), and how these dimensions influence analysis
  • Describe basic geographic data structures (vector/raster)
  • Explain and evaluate measurement quality in geographic data, including precision, accuracy, validity, and reliability
  • Summarize and interpret attribute data using descriptive statistics (central tendency, dispersion, shape) and data visualizations
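The descriptive measures listed above can be sketched in a few lines of Python (a minimal illustration; the `rainfall` values are hypothetical, not course data):

```python
import statistics

# Hypothetical attribute values, e.g. rainfall (mm) at ten stations
rainfall = [12.0, 15.5, 9.8, 22.1, 18.4, 14.2, 30.7, 11.9, 16.3, 13.6]

mean = statistics.mean(rainfall)    # central tendency
median = statistics.median(rainfall)
stdev = statistics.stdev(rainfall)  # dispersion (sample standard deviation)
cv = stdev / mean                   # coefficient of variation (relative spread)

print(f"mean={mean:.2f} median={median:.2f} sd={stdev:.2f} cv={cv:.2f}")
```

Comparing the mean (16.45) to the median (14.85) here already hints at right skew from the single large value, which is the kind of shape reading the bullet points describe.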

4.1.4 Concept 3: Spatial Description

Guiding Question: How do we summarize and describe variation in geographic data?

  • Use maps to visually analyze spatial patterns and understand the role of classification schemes for grouping spatial data values
  • Compute and interpret spatial descriptive statistics (mean center, standard distance, standard deviational ellipse) and explain how they extend non-spatial descriptive measures
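As a sketch of how the spatial descriptive statistics extend their non-spatial counterparts, the mean center and standard distance can be computed directly (coordinates below are hypothetical projected values):

```python
import math

# Hypothetical event coordinates in a projected CRS (e.g. metres)
xs = [2.0, 4.0, 6.0, 8.0]
ys = [1.0, 3.0, 5.0, 7.0]
n = len(xs)

# Mean center: the spatial analogue of the mean
mx = sum(xs) / n
my = sum(ys) / n

# Standard distance: the spatial analogue of the standard deviation,
# i.e. the typical distance of events from the mean center
sd = math.sqrt(sum((x - mx) ** 2 + (y - my) ** 2
                   for x, y in zip(xs, ys)) / n)

print(f"mean center=({mx}, {my}) standard distance={sd:.3f}")
```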

4.2 Tier 2: Classical Statistical Inference

4.2.1 Assessment Structure:

  • Competency: Completes Labs 4-6 with Meets Expectations
  • Mastery: Completes Case Study #2 with Meets Expectations

4.2.2 Concept 4: Probability

Guiding Question: How can we use probability to quantify uncertainty and understand patterns in events?

  • Explain the characteristics of stochastic processes and justify the use of probability theory in modeling systems with uncertainty
  • Explain how empirical probability is used to interpret likelihood based on observations
  • Explain how theoretical probability is used to interpret likelihood based on models of possible outcomes and their underlying assumptions
  • Describe the assumptions of the binomial and normal distributions and interpret situations in which each model is applicable
  • Compute and interpret simple event (empirical) probabilities using historical data
  • Compute and interpret probabilities for discrete and continuous data using the binomial and normal distribution
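The binomial and normal computations above can be sketched with the standard library alone (the rain and temperature scenarios are hypothetical examples, not course data):

```python
import math

# Binomial: P(X = k) for n independent trials with success probability p
def binom_pmf(k: int, n: int, p: float) -> float:
    return math.comb(n, k) * p**k * (1 - p) ** (n - k)

# Normal: P(X <= x), via the error function
def norm_cdf(x: float, mu: float = 0.0, sigma: float = 1.0) -> float:
    return 0.5 * (1 + math.erf((x - mu) / (sigma * math.sqrt(2))))

# Hypothetical discrete case: exactly 3 rainy days in 7, if P(rain) = 0.3
p_rain = binom_pmf(3, 7, 0.3)

# Hypothetical continuous case: temperature below 25 given mu=20, sigma=4
p_temp = norm_cdf(25, mu=20, sigma=4)
```

Note how the two cases differ exactly as the bullets describe: the binomial assigns probability to a count, while the normal assigns probability to an interval of a continuous measurement.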

4.2.3 Concept 5: Inferential Statistics

Guiding Question: How can we use sample data to understand and estimate characteristics of a population?

  • Describe the key assumptions underlying classical statistical inference (e.g., independence, randomness in sampling)
  • Explain the Central Limit Theorem and why it allows us to generalize from samples to populations
  • Explain how sampling variability leads to differences between sample statistics and true population parameters
  • Explain the role of the standard normal (z) and t distributions in statistical inference
  • Describe the main spatial and non-spatial sampling designs (simple, systematic, stratified, cluster) and select an appropriate sampling strategy for a given research context
  • Estimate and interpret population parameters (means and proportions) from samples (calculate point estimates and confidence intervals)
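A point estimate and confidence interval for a mean can be sketched as follows (the sample values are a hypothetical stand-in; with a large n, the z approximation from the Central Limit Theorem is used here, whereas a small sample would call for the t distribution):

```python
import math
import statistics

# Hypothetical sample of 36 household incomes (thousands); stand-in data
sample = [40 + (i % 9) for i in range(36)]
n = len(sample)

xbar = statistics.mean(sample)   # point estimate of the population mean
s = statistics.stdev(sample)
se = s / math.sqrt(n)            # standard error of the mean

# 95% confidence interval using z = 1.96 (large-sample approximation)
lo, hi = xbar - 1.96 * se, xbar + 1.96 * se
print(f"95% CI: ({lo:.2f}, {hi:.2f})")
```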

4.2.4 Concept 6: Hypothesis Testing

Guiding Question: How can we generate and test hypotheses about a population using sample data?

  • Describe the issues with using classical hypothesis testing for spatial data
  • Describe the idea of “natural samples” in geography
  • Distinguish between the null (H₀) and alternative (H₁) hypotheses
  • Identify the two types of error in hypothesis testing (Type I and Type II)
  • Formulate testable hypotheses and identify an appropriate statistical test based on the characteristics of the data 
  • Run and interpret one-sample, two-sample, and paired t-tests to compare means within or between groups
  • Run and interpret Chi-Square tests to assess relationships between categorical variables
  • Differentiate statistical significance from substantive importance
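As a sketch of the chi-square test named above, computed from first principles (the 2×2 table is hypothetical; 3.841 is the standard critical value for df = 1 at α = 0.05):

```python
# Chi-square test of independence for a 2x2 contingency table
observed = [[30, 10],   # e.g. urban: flooded / not flooded (hypothetical)
            [20, 40]]   # e.g. rural: flooded / not flooded (hypothetical)

row_totals = [sum(r) for r in observed]
col_totals = [sum(c) for c in zip(*observed)]
grand = sum(row_totals)

chi2 = 0.0
for i, row in enumerate(observed):
    for j, o in enumerate(row):
        e = row_totals[i] * col_totals[j] / grand   # expected count under H0
        chi2 += (o - e) ** 2 / e

# Critical value for df = 1 at alpha = 0.05 is 3.841
significant = chi2 > 3.841
```

A result can clear this threshold yet still be substantively trivial, which is exactly the distinction the final bullet draws between statistical significance and importance.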

4.3 Tier 3: Spatial Statistics

4.3.1 Assessment Structure:

  • Competency: Completes Labs 7-10 with Meets Expectations
  • Mastery: Completes Case Study #3 with Meets Expectations

4.3.2 Concept 7: Patterns in Events

Guiding Question: How can we assess the distribution/pattern of individual events in space?

  • Define spatial point pattern data and identify datasets that qualify as point data
  • Distinguish between first-order (intensity) and second-order (interaction) spatial processes
  • Explain the null hypothesis of Complete Spatial Randomness (CSR) and its representation using a homogeneous Poisson process
  • Compute and interpret quadrat test statistics to assess first-order spatial randomness and statistical significance
  • Determine when an inhomogeneous Poisson process is required to account for spatial heterogeneity in intensity prior to conducting second-order analyses
  • Calculate and interpret nearest neighbor statistics (e.g., ANN, L-function) to evaluate second-order spatial patterns and departures from CSR
  • Explain why Monte Carlo simulations are used to generate empirical probability distributions for assessing statistical significance in spatial point pattern analysis
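A minimal sketch of the quadrat approach: count events in a grid of cells and compare the variance of counts to their mean (the variance-mean ratio), which is near 1 under CSR. The simulated points below are a hypothetical pattern, not course data:

```python
import random

random.seed(42)

# Hypothetical point pattern: 100 events in a unit square
pts = [(random.random(), random.random()) for _ in range(100)]

# Quadrat counts on a 4x4 grid
k = 4
counts = [0] * (k * k)
for x, y in pts:
    counts[min(int(x * k), k - 1) * k + min(int(y * k), k - 1)] += 1

mean = sum(counts) / len(counts)
var = sum((c - mean) ** 2 for c in counts) / (len(counts) - 1)
vmr = var / mean   # ~1 under CSR; >1 suggests clustering; <1 dispersion
```

In practice, significance for such a statistic is often assessed with a Monte Carlo procedure: simulate many CSR patterns, recompute the statistic for each, and locate the observed value in that empirical distribution.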

4.3.3 Concept 8: Patterns in Values

Guiding Question: How can we detect patterns in attribute values across points or areas?

  • Define and justify alternative spatial neighborhood structures (k-nearest neighbors, contiguity such as Queen’s case, and distance-based thresholds) for different data types and hypothesized spatial processes
  • Compute and interpret global spatial autocorrelation statistics (e.g., Moran’s I), including statistical significance under appropriate null hypotheses of spatial randomness
  • Decompose and evaluate local spatial autocorrelation (LISA) results to identify and classify spatial regimes (hot spots, cold spots, spatial outliers), and critically interpret their statistical reliability and spatial meaning
  • Diagnose and explain how first-order and second-order spatial processes generate observed autocorrelation patterns
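Global Moran's I can be sketched from first principles on a toy example (a hypothetical four-unit "chain" with binary contiguity weights; real analyses typically use a library and row-standardized weights):

```python
# Global Moran's I for a 4-unit chain with binary contiguity weights
values = [2.0, 3.0, 8.0, 9.0]      # hypothetical attribute values
W = [[0, 1, 0, 0],                  # units 1-2, 2-3, 3-4 are adjacent
     [1, 0, 1, 0],
     [0, 1, 0, 1],
     [0, 0, 1, 0]]

n = len(values)
mean = sum(values) / n
dev = [v - mean for v in values]
s0 = sum(sum(row) for row in W)    # sum of all weights

num = sum(W[i][j] * dev[i] * dev[j]
          for i in range(n) for j in range(n))
den = sum(d * d for d in dev)
moran_i = (n / s0) * (num / den)   # values above E[I] = -1/(n-1)
                                   # indicate positive autocorrelation
```

Here similar values sit next to each other (low-low, high-high), so I comes out well above its expectation of -1/3, i.e. positive spatial autocorrelation.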

4.3.4 Concept 9: Bivariate Spatial Relationships

Guiding Question: How can we examine and model relationships between two variables across space?

  • Visualize and diagnose bivariate relationships using maps and scatterplots, and formulate directional, theory-informed hypotheses about underlying spatial processes
  • Estimate and interpret simple OLS models, including coefficients, residuals, and model fit, in relation to geographic processes
  • Diagnose violations of OLS assumptions by testing for spatial autocorrelation in residuals and interpreting implications for inference
  • Differentiate and apply spatial regression models (e.g., spatial lag vs. spatial error) based on hypothesized spatial processes (diffusion vs. unobserved spatial structure)
  • Compare and justify model selection between non-spatial and spatial regression approaches using both statistical evidence and theoretical expectations
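A sketch of the simple OLS step, fit by hand on hypothetical data; the comment at the end points to the residual diagnostic described above:

```python
# Simple OLS by hand: y = b0 + b1*x (hypothetical data)
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 4.0, 6.2, 7.9, 10.1]

n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
b1 = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
      / sum((x - mx) ** 2 for x in xs))
b0 = my - b1 * mx

residuals = [y - (b0 + b1 * x) for x, y in zip(xs, ys)]
# In a spatial setting, the next step is to test these residuals for
# spatial autocorrelation (e.g. with Moran's I) before trusting OLS
# inference; patterned residuals motivate a spatial lag or error model
```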

4.3.5 Concept 10: Spatially Continuous Surfaces

Guiding Question: How can we estimate values at unsampled locations and understand spatial clustering and uncertainty across continuous surfaces?

  • Explain and apply inverse distance weighting (IDW) to estimate values at unsampled locations, and interpret how distance-based weighting influences predicted surfaces
  • Evaluate and justify IDW parameter choices (e.g., inverse distance power) using cross-validation metrics (e.g., RMSE), and assess how parameter changes alter spatial smoothness vs. locality
  • Interpret semivariograms by identifying nugget, range, and sill, and explain what these features indicate about spatial autocorrelation and scale of influence
  • Select and justify an appropriate semivariogram model (e.g., spherical, exponential) based on empirical patterns and theoretical considerations
  • Explain the conceptual differences between deterministic interpolation (IDW) and probabilistic approaches (kriging), including assumptions about spatial processes and treatment of uncertainty
  • Estimate and interpret kriging models as realizations of an underlying spatial process and assess how prediction reliability varies across space
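The IDW estimator above can be sketched at a single unsampled location (the sample points and `power` value are hypothetical; raising `power` makes the surface more local, lowering it makes it smoother, which is what the cross-validation bullet asks students to evaluate):

```python
import math

# Hypothetical samples: ((x, y), observed value)
samples = [((0.0, 0.0), 10.0), ((1.0, 0.0), 20.0), ((0.0, 1.0), 30.0)]
target = (0.25, 0.25)   # unsampled location to estimate
power = 2.0             # distance-decay exponent

num = den = 0.0
for (x, y), value in samples:
    d = math.hypot(x - target[0], y - target[1])
    if d == 0:
        num, den = value, 1.0   # exact match: return the observed value
        break
    w = 1.0 / d ** power        # nearer samples get more weight
    num += w * value
    den += w

estimate = num / den
```

Unlike kriging, this estimator carries no model of the spatial process and reports no prediction uncertainty; the estimate is simply a distance-weighted average bounded by the observed values.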