The human speech production system generates meaningful sentences through a large network of cortical regions that stream and encode information at multiple time scales. We use intracranial recordings and dynamical systems theory to model how different speech features are encoded, and how they interact, across speech-related cortical areas.
Neural activity in people with epilepsy is typically organized across multiple temporal and spatial scales. Using tools borrowed from dynamical systems theory and machine/statistical learning, we aim to understand and model these dynamics. Our goal is to leverage these findings to develop effective diagnostic and forecasting tools for people with epilepsy.