Videos
Recorded talks, conference presentations, panels, and workshops. Newest first.
2026
Brown AI Winter School 2026: Reinforcement Learning for Orbital Transfers
A 2.5-hour interactive, hands-on workshop at the 2026 AI Winter School, hosted by the Center for the Fundamental Physics of the Universe at Brown University.
An exploration of how to frame a classic orbital mechanics problem as an RL control task: using the Hohmann transfer as an analytic benchmark, defining the state, action, reward, and termination conditions for a simplified 2D two-body environment, and training PPO agents with discrete and continuous thrust control. The session walks through a practical RL workflow - training, debugging, and diagnosing failure modes (e.g., chatter, overcorrection, fuel inefficiency) - and compares learned trajectories against the analytic baseline using delta-v efficiency and stability diagnostics.
Venue/Host: AI Winter School 2026, Center for the Fundamental Physics of the Universe at Brown University.
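The workshop's environment and training code aren't reproduced here; as a flavor of the analytic baseline the agents are benchmarked against, the sketch below computes the two-impulse Hohmann transfer delta-v between circular, coplanar orbits. The Earth gravitational parameter and the LEO/GEO radii are illustrative choices, not values from the talk.

```python
import math

# Standard gravitational parameter mu = G*M; Earth value as an example.
MU_EARTH = 3.986004418e14  # m^3 / s^2

def hohmann_delta_v(r1: float, r2: float, mu: float = MU_EARTH) -> float:
    """Total delta-v (m/s) for a two-impulse Hohmann transfer
    between circular, coplanar orbits of radii r1 and r2 (meters)."""
    v1 = math.sqrt(mu / r1)                 # circular speed at r1
    v2 = math.sqrt(mu / r2)                 # circular speed at r2
    a = 0.5 * (r1 + r2)                     # semi-major axis of transfer ellipse
    vp = math.sqrt(mu * (2 / r1 - 1 / a))   # transfer-orbit speed at r1
    va = math.sqrt(mu * (2 / r2 - 1 / a))   # transfer-orbit speed at r2
    return abs(vp - v1) + abs(v2 - va)      # sum of the two impulse magnitudes

# Example: LEO (~6,778 km radius) to GEO (~42,164 km radius), roughly 3.9 km/s total.
dv = hohmann_delta_v(6.778e6, 4.2164e7)
```

A learned policy's cumulative thrust can then be compared against this closed-form optimum as a delta-v efficiency metric.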
2025
Brown AI Winter School 2025: Exploring LLMs and RAG
A 2.5-hour interactive workshop at the 2025 AI Winter School, hosted by the Center for the Fundamental Physics of the Universe at Brown University.
An exploration of LLM tools and techniques, including the OpenAI API, an open LLaMA model, and retrieval-augmented generation (RAG) for improving AI system performance with external data. We set up an LLM using both OpenAI's API and a locally installed LLaMA model, then build a basic RAG system to demonstrate how external data can be leveraged to enhance AI outputs.
Venue/Host: AI Winter School 2025, Center for the Fundamental Physics of the Universe at Brown University.
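The workshop's own notebooks aren't shown here; purely as a minimal sketch of the retrieve-then-augment pattern it demonstrates, the snippet below uses a toy bag-of-words similarity in place of real embeddings. The corpus and the `retrieve` / `build_prompt` names are hypothetical; in the session, the augmented prompt would go to the OpenAI API or a local LLaMA model.

```python
import math
import re
from collections import Counter

# Toy corpus standing in for the external knowledge base behind a RAG system.
DOCS = [
    "The Hubble constant describes the expansion rate of the universe.",
    "PPO is a policy-gradient reinforcement learning algorithm.",
    "Retrieval-augmented generation grounds LLM answers in external documents.",
]

def _vec(text: str) -> Counter:
    # Lowercased word counts; real systems would use learned embeddings.
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def _cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs=DOCS) -> str:
    """Return the document most similar to the query: the 'R' in RAG."""
    return max(docs, key=lambda d: _cosine(_vec(query), _vec(d)))

def build_prompt(query: str) -> str:
    """Augment the prompt with retrieved context before calling the LLM."""
    context = retrieve(query)
    return f"Context: {context}\n\nQuestion: {query}\nAnswer using the context."
```

The same structure applies whether the final generation step is a hosted API call or a locally running open model; only the retrieval index and the LLM client change.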
2024
SXSW 2024 Panel with DARPA: Real or Not - Defending Authenticity in a Digital World
A panel discussion on authenticity, trust, and detecting synthetic media in an AI-native information ecosystem.
Venue/Host: SXSW and DARPA.
2021
Using Deep Learning to Detect Abusive Sequences of Member Activity on LinkedIn
LinkedIn's Anti-Abuse AI Team presents a production deep learning model that operates directly on raw sequences of member activity to better detect and prevent adversarial abuse at scale across heterogeneous attack surfaces. The talk covers early results, including detection of logged-in accounts scraping member profile data.
Venue/Host: Scale Exchange.
2020
Preventing Abuse Using Unsupervised Learning
A presentation on applying isolation forests and unsupervised ML to prevent abuse at production scale.
Venue/Host: Spark+AI Summit 2020 (recording hosted on YouTube).
2019
Fighting Abuse @Scale: Preventing Abuse Using Unsupervised Learning
Detecting abusive activity on a large social network is an adversarial challenge with quickly evolving behavior patterns and imperfect ground-truth labels. These characteristics limit the use of supervised learning techniques, but they can be overcome with unsupervised methods. To address these challenges, we created a Scala/Spark implementation of the isolation forest unsupervised outlier-detection algorithm, which we recently open-sourced (github.com/linkedin/isolation-forest).
Venue/Host: @Scale Conference.
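The open-sourced library is a Scala/Spark implementation; purely to illustrate the isolation principle behind it (and not the library's API), here is a toy Python sketch. The core idea is that anomalous points are separated from the rest of the data by random axis-aligned splits in fewer steps than typical points, so a short average isolation depth signals an outlier.

```python
import random

def _isolation_depth(x, data, depth=0, max_depth=10):
    """Grow one random isolation tree on the fly and return the depth at
    which point x is isolated (shorter path => more anomalous)."""
    if len(data) <= 1 or depth >= max_depth:
        return depth
    dim = random.randrange(len(x))                 # pick a random feature
    lo = min(p[dim] for p in data)
    hi = max(p[dim] for p in data)
    if lo == hi:
        return depth                               # feature can't split further
    split = random.uniform(lo, hi)                 # random split value
    # Keep only the points that fall on the same side of the split as x.
    side = [p for p in data if (p[dim] < split) == (x[dim] < split)]
    return _isolation_depth(x, side, depth + 1, max_depth)

def anomaly_score(x, data, n_trees=100):
    """Average isolation depth over many random trees; lower = more anomalous."""
    return sum(_isolation_depth(x, data) for _ in range(n_trees)) / n_trees

random.seed(0)
cluster = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(200)]
outlier = (8.0, 8.0)   # far from the Gaussian cluster
inlier = cluster[0]    # a typical point inside the cluster
# The outlier is isolated in far fewer splits, on average, than the inlier.
```

Because no labels are needed, this fits the adversarial setting described above: the forest flags whatever is statistically isolated, even as attack patterns shift.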