
Theme: Specialist Statistics


9 matching courses


This course will provide a detailed critique of the methods and philosophy of the Null Hypothesis Significance Testing (NHST) approach to statistics, which is currently dominant in social and biomedical science. We will briefly contrast NHST with alternatives, especially Bayesian methods. We will use some computer code (Matlab and R) to demonstrate particular issues, but the focus will be on the big picture rather than on the implementation of specific procedures.
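
A minimal base-R sketch (an illustration added here, not course material) of one issue such a critique typically raises: when the null hypothesis is true, p-values are approximately uniformly distributed, so a fixed 0.05 threshold produces false positives at roughly that rate by construction.

    # Behaviour of p-values under a true null (illustrative sketch only).
    set.seed(1)
    p_null <- replicate(10000, {
      x <- rnorm(30)            # group 1: true mean 0
      y <- rnorm(30)            # group 2: true mean 0, i.e. no real effect
      t.test(x, y)$p.value
    })
    mean(p_null < 0.05)         # long-run false-positive rate, approximately 0.05
    hist(p_null, main = "p-values under a true null", xlab = "p")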

Evaluation Methods Mon 16 Mar 2020   10:00 [Places]

This course aims to provide students with a range of specific technical skills that will enable them to undertake impact evaluation of policy. Too often, policy is implemented but not fully evaluated; without evaluation we cannot tell what the short- or longer-term impact of a particular policy has been. On this course, students will learn the skills needed to evaluate particular policies and will have the opportunity to do some hands-on data manipulation. A particular feature of this course is that it provides these skills in the real-world context of policy evaluation. It also focuses primarily not on experimental evaluation (randomised controlled trials, RCTs) but on quasi-experimental methodologies that can be used where an experiment is not desirable or feasible.

Topics:

  • Regression-based techniques
  • Evaluation framework and concepts
  • The limitations of regression-based approaches and RCTs
  • Before/After and Difference-in-Differences (DiD) methods
  • Computer exercise on difference-in-differences methods (a minimal R sketch follows this list)
  • Instrumental variables techniques
  • Regression discontinuity design.
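
As a minimal illustration of the difference-in-differences idea listed above (a hedged sketch in R on simulated data; the variable names and the effect size of 2 are invented), the treatment effect is the coefficient on the interaction of a treatment-group indicator and a post-period indicator:

    # Difference-in-differences on simulated data (illustrative sketch only).
    set.seed(42)
    n  <- 2000
    df <- data.frame(
      treated = rep(c(0, 1), each = n / 2),   # treatment group indicator
      post    = rep(c(0, 1), times = n / 2)   # pre/post period indicator
    )
    # Outcome: group difference + common time trend + a true treatment effect of 2
    # that applies only to the treated group in the post period.
    df$y <- 1 + 0.5 * df$treated + 1.5 * df$post +
            2 * df$treated * df$post + rnorm(n)

    did <- lm(y ~ treated * post, data = df)
    summary(did)   # the treated:post coefficient recovers the effect (about 2)
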
Factor Analysis Mon 2 Mar 2020   11:00 [Full]

This module introduces the statistical techniques of Exploratory and Confirmatory Factor Analysis. Exploratory Factor Analysis (EFA) is used to uncover the latent structure (dimensions) of a set of variables: it reduces the attribute space from a larger number of variables to a smaller number of factors. Confirmatory Factor Analysis (CFA) examines whether collected data correspond to a model of what the data are meant to measure. Stata will be introduced as a powerful tool for conducting CFA, and a brief introduction will be given to structural equation modelling (SEM). (A brief EFA sketch in R follows the session list.)

  • Session 1: Exploratory Factor Analysis Introduction
  • Session 2: Factor Analysis Applications
  • Session 3: CFA and Path Analysis with Stata
  • Session 4: Introduction to SEM and programming
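
The course itself uses Stata; purely as an illustration, the sketch below runs an exploratory factor analysis in base R on simulated data (the variables v1-v6 and the two-factor structure are invented for the example):

    # Exploratory factor analysis on simulated data (illustrative sketch only).
    set.seed(7)
    n  <- 500
    f1 <- rnorm(n)                        # latent factor 1
    f2 <- rnorm(n)                        # latent factor 2
    dat <- data.frame(
      v1 = 0.8 * f1 + rnorm(n, sd = 0.5),
      v2 = 0.7 * f1 + rnorm(n, sd = 0.5),
      v3 = 0.9 * f1 + rnorm(n, sd = 0.5),
      v4 = 0.8 * f2 + rnorm(n, sd = 0.5),
      v5 = 0.7 * f2 + rnorm(n, sd = 0.5),
      v6 = 0.9 * f2 + rnorm(n, sd = 0.5)
    )

    efa <- factanal(dat, factors = 2, rotation = "varimax")
    print(efa$loadings, cutoff = 0.3)     # loadings recover the two blocks of items
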
Meta Analysis Mon 9 Mar 2020   09:00 [Full]

In this module students will be introduced to meta-analysis, a powerful statistical technique allowing researchers to synthesize the available evidence for a given research question using standardized (comparable) effect sizes across studies. The sessions teach students how to compute treatment effects, how to compute effect sizes based on correlational studies, and how to address questions such as "What is the association between bullying victimization and depression?". The module will be useful for students who seek to draw statistical conclusions in a standardized manner from literature reviews they are conducting.
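
As a brief illustration of the pooling step (a sketch in base R; the effect sizes and variances below are invented, not taken from real studies), a fixed-effect meta-analysis is simply an inverse-variance weighted average of the study effect sizes:

    # Inverse-variance pooling of standardised mean differences (illustrative sketch).
    yi <- c(0.30, 0.45, 0.15, 0.60)     # per-study standardised mean differences
    vi <- c(0.02, 0.05, 0.01, 0.08)     # their sampling variances

    wi     <- 1 / vi                    # inverse-variance weights
    pooled <- sum(wi * yi) / sum(wi)    # fixed-effect pooled estimate
    se     <- sqrt(1 / sum(wi))         # standard error of the pooled estimate

    c(estimate = pooled,
      lower = pooled - 1.96 * se,
      upper = pooled + 1.96 * se)

    # In practice a package such as metafor (e.g. rma(yi, vi)) would usually be used,
    # adding random-effects models and heterogeneity statistics.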

Multilevel Modelling Wed 11 Mar 2020   09:30 [Full]

In this module, students will be introduced to multilevel modelling (MLM), also known as hierarchical linear modelling. MLM allows the user to analyse how outcomes are influenced by factors acting at multiple levels. So, for example, we might conceptualise children's educational process as being influenced by individual- or family-level factors, as well as by factors operating at the level of the school or the neighbourhood. Similarly, outcomes for prisoners might be influenced by individual and/or family-level characteristics, as well as by the characteristics of the prison in which they are detained. (A minimal random intercept sketch in R follows the session list.)

  • Introduction to Stata/MLM theory
  • Applications I - Random intercept models
  • Applications II - Random slope models
  • Applications III - Revision session/growth models
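
The sessions use Stata; as a rough R counterpart of the random intercept model listed above (a sketch assuming the lme4 package; the pupil and school variables are invented), the school-level grouping enters as a random intercept:

    # Random intercept model on simulated pupils-within-schools data (illustrative
    # sketch, assuming the lme4 package; variable names are invented).
    library(lme4)

    set.seed(3)
    n_schools <- 50
    n_pupils  <- 20
    school    <- rep(1:n_schools, each = n_pupils)
    u_school  <- rnorm(n_schools, sd = 2)        # school-level random intercepts
    ses       <- rnorm(n_schools * n_pupils)     # pupil-level covariate
    score     <- 50 + 3 * ses + u_school[school] +
                 rnorm(n_schools * n_pupils, sd = 5)
    dat       <- data.frame(score, ses, school = factor(school))

    m <- lmer(score ~ ses + (1 | school), data = dat)  # random intercept per school
    summary(m)   # variance components split pupil-level and school-level variation
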
Panel Data Analysis (Intensive) Wed 26 Feb 2020   09:00 [Full]

This module provides an applied introduction to panel data analysis (PDA). Panel data are gathered by taking repeated observations from a series of research units (e.g. individuals, firms) as they move through time. This course focuses primarily on panel data with a large number of research units tracked for a relatively small number of time points.

The module begins by introducing key concepts, benefits and pitfalls of PDA. Students are then taught how to manipulate and describe panel data in Stata. The latter part of the module introduces random and fixed effects panel models for continuous and dichotomous outcomes. The course is taught through a mixture of lectures and practical sessions designed to give students hands-on experience of working with real-world data from the British Household Panel Survey. (A minimal fixed effects sketch in R follows the topic list.)

  • Introduction to PDA: Concepts and uses
  • Manipulating and describing panel data
  • An overview of random effects, fixed effects and ‘hybrid’ panel models
  • Panel models for dichotomous outcomes
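
As a rough illustration of the fixed effects approach listed above (a sketch assuming the plm package; the data are simulated, not the British Household Panel Survey), the "within" estimator removes time-invariant unit effects:

    # Fixed effects ("within") panel model on simulated data (illustrative sketch,
    # assuming the plm package).
    library(plm)

    set.seed(11)
    n_id   <- 200
    n_year <- 5
    id     <- rep(1:n_id, each = n_year)
    year   <- rep(2001:2005, times = n_id)
    alpha  <- rnorm(n_id, sd = 2)                     # time-invariant unit effects
    x      <- rnorm(n_id * n_year) + 0.5 * alpha[id]  # regressor correlated with them
    y      <- 1 + 2 * x + alpha[id] + rnorm(n_id * n_year)
    dat    <- data.frame(id, year, x, y)

    fe <- plm(y ~ x, data = dat, index = c("id", "year"), model = "within")
    re <- plm(y ~ x, data = dat, index = c("id", "year"), model = "random")
    summary(fe)      # the within estimator recovers the coefficient on x (about 2)
    phtest(fe, re)   # Hausman test comparing fixed and random effects
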
Propensity Score Matching Wed 19 Feb 2020   09:00 [Places]

Propensity score matching (PSM) is a technique that simulates an experimental study in an observational data set in order to estimate a causal effect. In an experimental study, subjects are randomly allocated to “treatment” and “control” groups; if the randomisation is done correctly, there should be no differences in the background characteristics of the treated and non-treated groups, so any differences in the outcome between the two groups may be attributed to a causal effect of the treatment. An observational survey, by contrast, will contain some people who have been subject to the “treatment” and some people who have not, but they will not have been randomly allocated to those groups. The characteristics of people in the treatment and control groups may differ, so differences in the outcome cannot simply be attributed to the treatment. PSM attempts to mimic the experimental situation by creating two groups from the sample whose background characteristics are virtually identical: people in the treatment group are “matched” with similar people in the control group. The difference between the treatment and control groups may therefore more plausibly be attributed to the treatment itself. PSM is widely applied in many disciplines, including sociology, criminology, economics, politics, and epidemiology.

The module covers the basic theory of PSM, the steps in its implementation (e.g. variable choice for matching and types of matching algorithms), and the assessment of matching quality. We will also work through practical exercises using Stata, in which students will learn how to apply the technique to the analysis of real data and how to interpret the results.
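
A minimal sketch of this matching workflow in R, assuming the MatchIt package rather than the Stata tools used on the course (the covariates, treatment assignment and effect size are simulated for illustration):

    # Propensity score matching on simulated data (illustrative sketch, assuming
    # the MatchIt package; the course itself uses Stata).
    library(MatchIt)

    set.seed(5)
    n   <- 1000
    age <- rnorm(n, mean = 40, sd = 10)
    edu <- rnorm(n, mean = 12, sd = 2)
    # Treatment assignment depends on the covariates, so the raw groups differ.
    treat <- rbinom(n, 1, plogis(-4 + 0.05 * age + 0.15 * edu))
    y     <- 2 * treat + 0.1 * age + 0.3 * edu + rnorm(n)   # true effect = 2
    dat   <- data.frame(y, treat, age, edu)

    m <- matchit(treat ~ age + edu, data = dat, method = "nearest")
    summary(m)                   # covariate balance before and after matching
    matched <- match.data(m)     # the matched sample, with matching weights

    # Simple post-matching estimate of the treatment effect:
    summary(lm(y ~ treat, data = matched, weights = weights))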

Psychometrics Tue 15 Oct 2019   14:00 Finished

An introduction to the design, validation and implementation of tests and questionnaires in social science research, using both Classical Test Theory (CTT) and modern psychometric methods such as Item Response Theory (IRT). This course aims to enable students to construct and validate a test or questionnaire; understand the strengths, weaknesses and limitations of existing tests and questionnaires; and appreciate the impact and potential of modern psychometric methods in the internet age. (A brief IRT sketch in R follows the week outline.)

Week 1: Introduction to psychometrics
a. Psychometrics, ancient and modern. Classical Test Theory
b. How to design and build your own psychometric test

Week 2: Testing in the online environment
a. Testing via the internet. How to, plus do’s and don’ts
b. Putting your test online

Week 3: Modern Psychometrics
a. Item Response Theory (IRT) models and their assumptions
b. Advanced assessment using computer adaptive testing

Week 4: Implementing adaptive tests online
a. How to automatically generate ability items
b. Practical
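
A brief illustration of the CTT and IRT material in Weeks 1 and 3 (a sketch assuming the ltm package in R and its bundled LSAT item-response data; not course material):

    # Classical item statistics and a two-parameter IRT model (illustrative sketch,
    # assuming the ltm package and its bundled LSAT data: 1000 respondents, 5 items).
    library(ltm)

    data(LSAT)
    descript(LSAT)                # classical item statistics, including Cronbach's alpha

    fit_2pl <- ltm(LSAT ~ z1)     # two-parameter logistic model (difficulty, discrimination)
    coef(fit_2pl)                 # estimated item parameters
    plot(fit_2pl, type = "ICC")   # item characteristic curves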

Time Series Analysis (Intensive) Wed 19 Feb 2020   09:00 [Places]

This module introduces the time series techniques relevant to forecasting in social science research and the computer implementation of those methods. Background in basic statistical theory and regression methods is assumed. Topics covered include time series regression, Vector Error Correction and Vector Autoregressive models, time-varying volatility, and ARCH models. The study of applied work is emphasized in this non-specialist module. A short R sketch of unit root testing and VAR estimation follows the topic list. Topics include:

  • Introduction to Time Series: time series and cross-sectional data; components of a time series; overview of forecasting methods; measuring forecasting accuracy; choosing a forecasting technique
  • Time Series Regression: modelling linear and nonlinear trend; detecting autocorrelation; modelling seasonal variation using dummy variables
  • Stationarity; unit root tests; cointegration
  • Vector Error Correction and Vector Autoregressive models; impulse responses and variance decompositions
  • Time-varying volatility and ARCH models; GARCH models
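
As a short illustration of the unit root and VAR topics above (a sketch assuming the tseries and vars packages; Canada is an example dataset bundled with vars):

    # Unit root test and a small VAR (illustrative sketch, assuming the tseries
    # and vars packages).
    library(tseries)
    library(vars)

    data(Canada)                      # quarterly Canadian macroeconomic series
    adf.test(Canada[, "U"])           # Augmented Dickey-Fuller unit root test

    VARselect(Canada, lag.max = 8, type = "const")$selection   # choose lag order
    var_fit <- VAR(Canada, p = 2, type = "const")
    summary(var_fit)

    irf(var_fit, n.ahead = 10)        # impulse response functions
    fevd(var_fit, n.ahead = 10)       # forecast error variance decompositions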