Date: February 1, 2024
Time: 12:00 PM - 1:00 PM
Location: 1889 Museum Road, Gainesville, Florida, 32611
Host: Department of CISE; Faculty Host: Dr. Prabhat Mishra
Zoom Link: https://ufl.zoom.us/my/prabhatmishra
Bio: Taejoon Kim is an Associate Professor in the EECS Department at the University of Kansas (KU), where he researches 6G networked systems, distributed learning, security, information theory, and agricultural AI. He leads eight projects as a PI or Co-PI. He has received numerous awards, including the KU School of Engineering Research Excellence Award, the Harry Talley Excellence in Teaching Award, the IEEE Transactions on Communications Best Paper Award (Stephen O. Rice Prize), and the IEEE PIMRC Best Paper Award. He earned his Ph.D. in ECE from Purdue University and held positions at Nokia Bell Labs, KTH Royal Institute of Technology, and the City University of Hong Kong.
Title: Joint Learning and Optimization for Robust Channel Coding and Collaborative Learning
Abstract: Adversarial feedback channel coding is a fundamental problem in information theory that has remained unsolved for almost five decades. It involves designing codes that reliably transmit information over a noisy channel in the presence of an active adversary who can tamper with the feedback. In this talk, we will present a novel deep learning approach to adversarial feedback coding. We will show how deep learning overcomes the limitations of conventional feedback coding to achieve unprecedented levels of reliability and scalability, and how our codes address hard problems in channel coding such as reliability at short block lengths.
We will also propose a new model, called the Kolmogorov Model (KM), for learning and fusing compressed interference information in wireless networks. Interference is a major source of performance degradation and a security risk in wireless networks. The KM can interpret, predict, and classify interference signals and enable distributed learning. We will explain how to optimize the KM using dual optimization and approximation algorithms. Finally, we will present a novel differential privacy mechanism for collecting compressed/quantized data in networked systems and show how lattice quantization can balance privacy and utility in distributed learning.