Predictions and Learning Almost Surely
Date: Thu, November 06, 2025
Time: 4:30pm - 5:30pm
Location: MSB 100; online available
Speaker: Dr. Narayana Santhanam, University of Hawaiʻi at Mānoa
Hosted by ICS
Zoom Link: https://hawaii.zoom.us/j/88209756495 (Passcode: ics)
ECE Graduate Students: This will count towards your seminar credit.
Abstract:
The DESCARTES collaboration at UH Manoa applies AI to several priority sectors of Hawaii: power, healthcare, and communications. Recently, we have piloted an AI companion in our courses, which we will briefly introduce before turning to the research agenda of the talk.
In machine learning and statistics, one often fits models to data or observations. The model in turn makes predictions about a quantity of interest, and the quality of these predictions is scored by a loss function. While much of learning theory envisions a train-once, deploy-forever setting, an online framework in which models are continually refined and tested against observations is more natural and realistic.
We distinguish between nuances of prediction and learning when a (possibly refined) sequence of models is deployed over long time windows. The classical Jeffreys-Lindley paradox, in which a significance test rejects a hypothesis even though the posterior puts arbitrarily high weight on it, even in the limit of arbitrarily many observations, illustrates how the simplest scenarios here can reveal very interesting insights.
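(The paradox is easy to reproduce numerically. The sketch below is not part of the talk materials; it is a minimal illustration, assuming a point null H0: theta = 1/2 for a coin's bias against a uniform prior on theta under H1, with the function name chosen for illustration. Holding the z-statistic fixed at 2, a 5%-level test rejects H0 for every sample size n, yet the Bayes factor in favor of H0 grows without bound.)

```python
import math

def jeffreys_lindley_demo(n):
    # Pick k heads out of n tosses so the z-statistic stays fixed at ~2,
    # i.e. the two-sided p-value stays near 0.046 and a 5%-level
    # significance test rejects H0: theta = 1/2 for every n.
    z = 2.0
    k = round(n / 2 + z * math.sqrt(n) / 2)
    # Exact log-probability of k heads under H0, via log-gamma:
    # log C(n, k) - n log 2.
    log_p_h0 = (math.lgamma(n + 1) - math.lgamma(k + 1)
                - math.lgamma(n - k + 1) - n * math.log(2))
    # Marginal likelihood of k heads under H1 with a uniform prior:
    # integral of C(n,k) theta^k (1-theta)^(n-k) d theta = 1 / (n + 1).
    log_p_h1 = -math.log(n + 1)
    # Bayes factor in favor of H0; grows roughly like sqrt(n).
    return math.exp(log_p_h0 - log_p_h1)

for n in (100, 10_000, 1_000_000):
    print(n, round(jeffreys_lindley_demo(n), 2))
```

As n increases the Bayes factor for H0 keeps growing, so the posterior concentrates on the very hypothesis the significance test rejects at every n.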
With more interesting problems, broad trends emerge with respect to regularization, the topological properties of the model parameter sets, and almost-sure prediction and learning. Cover's challenge of estimating whether the bias of a coin is a rational number or not, using only observations of tosses of that coin, is a striking example. We will consider more challenging problems in risk modeling, compression, and slowly mixing Markov processes to illustrate some of these emergent principles.
Biography:
Narayana Santhanam is a professor in the ECE department at the University of Hawaiʻi at Mānoa. He leads the NSF-funded DESCARTES collaboration at UH Manoa, which brings together students from multiple departments of UHM to study and apply AI to many sectors of Hawaii. He was an Associate Editor for the IEEE Transactions on Information Theory from 2017 to 2023, was part of the NSF Science and Technology Center for Science of Information from 2015 to 2024, is a winner of the Information Theory Best Paper Award, and has served on the technical program committees of several flagship conferences in information theory and machine learning. His research interests lie in the general areas of machine learning, statistics, AI, and information theory.