© Dan Birman 2015–Present
As a graduate student working with Justin Gardner, one of my major projects has been to examine how contrast and motion coherence can be used to understand how the brain reads out from sensory representations during perceptual decision making.
I started this project by showing that we can jointly model the perception of contrast and motion coherence using neural responses in early visual cortex, and I presented that work at SfN 2016. The manuscript from this work has been submitted.
Using the sensitivity maps built in the first part of this project, we then sought to understand how the readout underlying motion visibility perception is performed. We are preparing this manuscript for submission as well.
In 2013 I lived in Berlin and worked in the lab of John-Dylan Haynes on this project.
We have an intuition that we "commit" to a decision at a specific moment. Despite this intuition, early neuroscience researchers found that brain activity becomes predictive of our intentions far in advance, sometimes up to 10 seconds. In this experiment we showed that, in reality, the point of no return, after which an action is guaranteed to happen, occurs only about 200 ms before motor activity. Until that point the brain has not fully committed, and the movement can still be cancelled.
Birman, D.*, Schultze-Kraft, M.*, Rusconi, M., Allefeld, C., Görgen, K., Dähne, S., ... & Haynes, J. D. (2015). The point of no return in vetoing self-initiated movements. Proceedings of the National Academy of Sciences, 201513569. *Equal author contribution. at hayneslab
I am very interested in building better models to link human and non-human animal behavior, as a way of connecting physiological data across different model systems. We became interested in this project while examining a set of monkey physiology data in which the order of task training turned out to be critical to interpreting the results.
Birman, D., & Gardner, J. L. (2016). Parietal and prefrontal: categorical differences? Nature Neuroscience, 19(1), 5-7. at Gardner Lab.
My dissertation project, carried out with Justin Gardner, asks whether what is usually called feature-based (or category) attention can be understood through brain mechanisms more classically associated with spatial attention. These projects are at the pilot stage.
Let's face it. We're awful at teaching cognitive neuroscience! When you learn about physics you get to play with kinetics, electricity, magnetism, and thermodynamics. When you learn about chemistry you get to blow things up! But for some reason we seem to think that students will get excited if we show them pictures and movies about psychology and call it a day.
In my time at Stanford I've invested a lot of work into building playful brain simulation tutorials for cognitive neuroscience. These tutorials help students re-discover classic experiments about the visual system. Brain.js (below) is the new interface to all of these, but I keep a list of older tutorials here.
Instructor: Justin Gardner
Head TA: Dan Birman
TAs: Anna Khazenzon, Natalia Velez, Anthony Stigliani, Rosemary Le, Minyoung Lee, Lior Bugatus, Zeynep Enkavi, Guillaume Riesen, Mona Rosenke, Akshay Jagadeesh, Jon Walters
Undergraduate TAs: Emma Master, Stephanie Zhang, Kawena Hirayama, Henry Ingram, Storm Foley
Instructors: Russ Poldrack, Justin Gardner
TA: Dan Birman
Instructors: Ewart Thomas, Benoit Monin
TAs: Dan Birman, Stephanie Gagnon, Robert Hawkins