Talks

Towards Better Dynamics in Full-Duplex Spoken Language Models

November 18, 2025

Invited Talk, UT Austin, Texas, US

Slides: Towards Better Dynamics in Full-Duplex Spoken Language Models
Related Survey: Speech-Trident

This talk starts from our survey paper, “On the Landscape of Spoken Language Models” (TMLR 2025), to introduce spoken language models (SLMs) and the development of full-duplex models that can listen and speak simultaneously. It then covers two contributions: (1) the Game-Time Benchmark (ICASSP 2026), which evaluates temporal dynamics in SLMs such as reaction timing, tempo adherence, and silence management, and (2) Full-Duplex-Bench-v2, a multi-turn evaluation framework with an automated examiner for assessing full-duplex SLMs.


ICASSP 2023 Tutorial: Parameter-Efficient Learning for Speech and Language Processing: Adapters, Prompts, and Reprogramming

Tutorial, ICASSP 2023, Rhodes Island, Greece

Presented by: Dr. Huck Yang, Dr. Pin-Yu Chen, Prof. Hung-yi Lee, Kai-Wei Chang, Cheng-Han Chiang
Slides: Parameter-Efficient Learning for Speech Processing
Related Survey: Speech-Prompts-Adapters