CoSMo 2014
From Bayesian Behavior Lab
This page contains course materials for the CoSMo 2014 summer school.
Introduction - overview of sensory-motor control (Aug 4-5)
DREAM database - Introduction to the data and model sharing initiative (Aug 4, evening)
Lecturer: Gunnar Blohm
You can get the DREAM project from Gunnar on a USB drive. DREAM can also be downloaded piecewise (data sets, models, tools, and documentation) from CRCNS: http://crcns.org/data-sets/movements/dream/downloading-dream. You will need to create an account on CRCNS to download the project files.
If you want "all" of DREAM (models, tools, and documentation), click here: AllDream.zip
Motor control & learning (Aug 6-7)
Sensory-motor transformations (Aug 8-9)
Prosthetics (Aug 11-12)
The Bayesian Brain (Aug 13-14)
Schrater slides

Afternoon tutorials

Problem 1
Start with AttractProjectGoals.m, which relies on two other files, faceimanalysis.mat and faceimgui.m. The project explores the question "Where do cues come from?" in cue combination. Experimenters normally make clever guesses, but in less-studied domains little is known about the information subjects use to infer properties. In this project we treat a toy version of a real problem: what are the cues to facial attractiveness? I have taken a database of face images together with attractiveness ratings on a 1-10 scale, and performed an initial unsupervised dimensionality reduction of the images. Your job is to characterize the cues to attractiveness given the low-dimensional image representation via a simple data analysis. Each dimension is a potential cue to attractiveness, and your goal is to characterize P(cue_j|attractiveness). Use faceimgui to interactively view the relationship between the cues and the face images; load the face images from the faceimanalysis.mat file. The second challenge is to remove the cue-independence assumption and redo the analysis, estimating the joint probability of the cues, P(cue1,cue2,cue3,...|attractiveness). One possibility is to assume a multivariate Gaussian.

Problem 2
Work through the explaining-away model described in class for image size and touch. The instructions are in the ExplainingAway.m file.

Problem 3
We will explore simple Bayesian inference by analyzing data from the DREAM database (Kording 2004). Go to http://crcns.org/data-sets/movements/dream/data-sets/Kording_2004/ and download the paper and the dataset. In this tutorial we will first predict data from Körding and Wolpert (2004), then fit the actual data.
In the experiment, subjects move a cursor to a target at 0 cm, with a random lateral shift between the hand position and the cursor. The cursor is rendered with one of two levels of blur, or occluded entirely, increasing the uncertainty about the cursor position and predicting an increased reliance on the prior. On each trial there is a true shift, x_true, given by the cursor offset, and a noisy estimate of the cursor position, x_sensed. From these, subjects form an estimate of the cursor position by combining sensory data with prior knowledge. Assuming that reach endpoints reflect the best estimate of the target location, we can compare predictions with reach endpoints:

x_hat = argmax P(x_true|x_sensed), which for Gaussians is the mean of P(x_true|x_sensed).

By Bayes' rule,

P(x_true|x_sensed) = P(x_sensed|x_true) P(x_true) / P(x_sensed),

where

P(x_sensed) = Int_{x_true} P(x_sensed|x_true) P(x_true) dx_true.

We need to specify the prior P(x_true) and the likelihood P(x_sensed|x_true). Using the following parameters, your goal today is to produce simulations that replicate the prediction graphs in the paper:

mu_prior = .01;
sd_prior = 0.5/100;
xtrue = [-0.015 -0.01 -0.005 0 0.005 0.01 0.015];
sd_smallblur = 0.1/100;
sd_largeblur = 1/100;

Use Bayes' rule to derive the formula relating the cursor perturbation and prior knowledge, then compute predictions for each of the four conditions.

SIMULATE THE OBSERVER FOR ONE CONDITION
We will assume the observer has had 1010 trials of feedback to estimate the prior, so the subject's estimates of the prior's parameters are approximately correct: mu_prior_hat ≈ mu_prior and sd_prior_hat ≈ sd_prior.
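The prediction step above can be sketched in a few lines of Python (the course materials are in MATLAB; this is an illustrative translation using the standard Gaussian posterior-mean formula, with the average sensed position assumed equal to the true shift):

```python
import numpy as np

# Parameters from the tutorial (in meters)
mu_prior = 0.01
sd_prior = 0.5 / 100
xtrue = np.array([-0.015, -0.01, -0.005, 0.0, 0.005, 0.01, 0.015])
sd_smallblur = 0.1 / 100
sd_largeblur = 1 / 100

def posterior_mean(x_sensed, sd_like, mu0=mu_prior, sd0=sd_prior):
    """Mean of P(x_true | x_sensed) for a Gaussian prior N(mu0, sd0^2)
    and a Gaussian likelihood N(x_true, sd_like^2)."""
    w = sd_like**2 / (sd0**2 + sd_like**2)  # weight placed on the prior
    return w * mu0 + (1 - w) * x_sensed

# Predicted mean estimates per blur condition: larger likelihood noise
# pulls the estimates toward the prior mean, as in the paper's figures.
for sd_like in (sd_smallblur, sd_largeblur):
    print(f"sd_like={sd_like}:", posterior_mean(xtrue, sd_like))
```

Because the weight on the prior grows with sd_like, the large-blur condition should produce predictions compressed toward mu_prior relative to the small-blur condition.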
Treat this as data, and analyze it as in the Kording paper for ONE condition.

Challenge: relax the assumption that subjects have learned the prior accurately.
1) Simulate data as above.
2) Estimate the prior from feedback as a function of the number of learning trials.
3) How accurate do we expect the prior to be after 1010 trials, (a) with perfect memory, and (b) with imperfect memory, where you forget? For part (b) you will need a Kalman filter.

Computational neuroscience in industry (Aug 14, 4pm)

Computational Neuroimaging (Aug 15-16)
Womelsdorf_CosMo_Lecture_1_Prelude.pdf
Afternoon tutorials

Surveys for CoSMo 2014
CoSMo overall
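For the imperfect-memory part of the Kalman-filter challenge above, here is a minimal Python sketch (the course code is MATLAB; the process-noise value q and the initial uncertainty are illustrative assumptions, not values from the paper). Treating the observer's belief about the prior mean as a random-walk state turns forgetting into process noise q; setting q = 0 recovers the perfect-memory running average.

```python
import numpy as np

rng = np.random.default_rng(0)
mu_prior, sd_prior = 0.01, 0.5 / 100  # tutorial parameters (meters)

def estimate_prior_mean(shifts, q, r=sd_prior**2):
    """Scalar Kalman filter tracking the mean of the prior from
    trial-by-trial feedback. q > 0 is process noise modeling imperfect
    memory (old trials gradually leak away); q = 0 reduces to an
    ordinary running average."""
    mu_hat, p = 0.0, 1.0  # start with a broad (uncertain) belief
    for x in shifts:
        p = p + q                  # predict: memory degrades by q
        k = p / (p + r)            # Kalman gain
        mu_hat = mu_hat + k * (x - mu_hat)
        p = (1.0 - k) * p
    return mu_hat, p

# 1010 feedback trials drawn from the true prior
shifts = rng.normal(mu_prior, sd_prior, size=1010)
mu_perfect, p_perfect = estimate_prior_mean(shifts, q=0.0)
mu_forget, p_forget = estimate_prior_mean(shifts, q=1e-7)
```

With perfect memory the posterior variance shrinks roughly as sd_prior^2 / n, while with forgetting it settles at a steady-state floor near sqrt(q) * sd_prior, so the learned prior never becomes exact no matter how many trials are given.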
Final Project Presentations - Aug 15, 4:30-5:30pm and 7-11pm
Presentation schedule:
4:30pm - The Kalman touch
5:00pm - Rolling heads
5:30pm - 7:00pm - dinner break
7:00pm - Arousing Decisions
7:30pm - Triple Threat a.k.a. The SuperModel(ers) a.k.a. The CoSMo-nauts
8:00pm - Tic Toc Pong
8:30pm - Group X
9:00pm - Bayesic Kalman Sense
9:30pm - Followers of Titipat
10:00pm - On Fire
10:30pm - HuStLa’z
Please rate project presentations HERE