I was a postdoctoral researcher between Harvard University and Caltech, focusing on statistical methods that quantify human irrationality when acting in uncertain and risky environments.
I am generally interested in mitigating the risks associated with human-AI interactions. Most of my early work involved identifying cases where AI or machine learning could augment human performance to improve situational awareness or provide decision support. Once such cases were identified, I developed algorithms to improve human performance over status-quo operations, and tuned those algorithms to minimize the operational risk associated with errors in model output.
Throughout my career, I have led high-profile projects and made contributions across several domains, including open-source intelligence (Dataminr), defense (MIT Lincoln Laboratory), healthcare (Mayo Clinic), education (Transfr VR), and transportation (University of Minnesota). I have also taught undergraduate courses at Wellesley College, the University of Minnesota, and Harvard University, where I earned a certificate of teaching distinction.
I am currently working as an independent researcher investigating the risks associated with the misuse of AI for producing false information. I am also interested in how these risks can be mitigated through interventions that minimize the impact of false information on human beliefs.
I developed algorithms across several domains to improve operational efficiency
I am interested in mitigating the impact of dis/misinformation on human beliefs
I have a keen interest in improving human performance under uncertainty and risk
I have extensive experience quantifying risk across several mission-critical environments
2007 - 2010 | Harvard University and Caltech | Postdoctoral Associate | Cambridge, MA
I was a postdoctoral researcher between Harvard University and Caltech, focusing on statistical methods that quantify human irrationality when acting in uncertain and risky environments.
2000 - 2007 | University of Minnesota | Doctoral Degree (PhD) | Minneapolis, MN
I completed a doctoral degree in Cognitive and Brain Sciences with a minor in Human Factors. My thesis was titled "Statistical Decision-Theory for Human Perception-Action Cycles."
1994 - 1998 | Minnesota State University | Bachelor of Science | Mankato, MN
I completed my B.S. degree with a major in Psychology and a minor in Biology.
Schlicht, E.J., Lee, R., Wolpert, D., Kochenderfer, M., and Tracey, B. (2012). Predicting the behavior of interacting humans by fusing data from multiple sources. In Proceedings of UAI 2012.
Schlicht, E.J. (2024). Evaluating the propensity of Generative AI for producing harmful disinformation during the 2024 US election cycle. arXiv: Artificial Intelligence [cs.AI].
Schlicht, E.J. (2025). Analyzing the temporal dynamics of linguistic features contained in misinformation. arXiv: Computation and Language [cs.CL]. [Explore Data on Your Own!]
Misinformation Monitor is a volunteer effort that leverages open-source data to understand the role of technology in the creation and propagation of false information. Our mission is to better understand the characteristics of dis/misinformation in order to help mitigate its impact on human beliefs. Research and analysis are produced regularly, so check back for recent output.
Article overviews my postdoctoral work that redefined an effective 'poker face'
Article summarizes our MIT LL panel on Serious Gaming for investigating national security topics
Interview with the Machine Learning Conference regarding my work on multifidelity simulation