Session: Learning, Robustness, and Fairness
Chair: Cagil Kocyigit
Cluster: Optimization Under Uncertainty and Data-driven Optimization
Talk 1: Fairness in Federated Learning
Speaker: Daniel Kuhn
Abstract: We study group fairness regularizers in federated learning with the aim of finding a globally fair model in a distributed fashion. Distributed training poses unique challenges because the fairness regularizers based on probability metrics that are popular in centralized training cannot be decomposed across clients. To circumvent this challenge, we propose a function-tracking scheme for the global fairness regularizer based on the maximum mean discrepancy (MMD), which incurs only a small communication overhead. The proposed function-tracking scheme can readily be incorporated into most federated learning algorithms while maintaining rigorous convergence guarantees, as we exemplify in the context of FedAvg. When enforcing differential privacy, the kernel-based MMD regularization allows for easy analysis via a change of kernel, leveraging an intuitive interpretation of kernel convolution. Numerical experiments validate our theoretical findings.
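As a concrete reference point for the regularizer the abstract mentions, here is a minimal sketch of an empirical squared-MMD penalty between the model scores of two protected groups. The Gaussian kernel, its bandwidth, and the synthetic scores are illustrative assumptions; the function-tracking scheme and the federated communication protocol are not shown.

```python
import numpy as np

def gaussian_kernel(x, y, sigma=1.0):
    # k(x, y) = exp(-(x - y)^2 / (2 sigma^2)) for scalar model scores
    d = x[:, None] - y[None, :]
    return np.exp(-d**2 / (2 * sigma**2))

def mmd2(scores_a, scores_b, sigma=1.0):
    """Biased empirical estimate of the squared MMD between two score samples."""
    k_aa = gaussian_kernel(scores_a, scores_a, sigma).mean()
    k_bb = gaussian_kernel(scores_b, scores_b, sigma).mean()
    k_ab = gaussian_kernel(scores_a, scores_b, sigma).mean()
    return k_aa + k_bb - 2.0 * k_ab

# Fairness penalty: squared MMD between model scores of two protected groups.
rng = np.random.default_rng(0)
scores_group_a = rng.normal(0.2, 1.0, size=200)  # hypothetical scores, group A
scores_group_b = rng.normal(0.5, 1.0, size=200)  # hypothetical scores, group B
print(f"MMD^2 fairness penalty: {mmd2(scores_group_a, scores_group_b):.4f}")
```

A training objective would add this penalty, scaled by a regularization weight, to the usual empirical loss; the talk's contribution concerns how to track this non-decomposable quantity across clients.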
Talk 2: Robust Offline Policy Learning Under Covariate Shifts
Speaker: Phebe Vayanos
Abstract: We study the problem of distribution shifts in offline policy learning, where the policy training distribution differs from the deployment distribution, potentially leading to harmful or suboptimal policy actions at deployment. In real-world applications, changes to an allocation system can cause shifts in measured covariates; for example, wording changes in survey questions may elicit different responses from individuals experiencing homelessness. As a result, a non-robust allocation policy may incorrectly over- or under-allocate resources based on the original offline data distribution. Adopting a Wasserstein distributionally robust approach, we learn an allocation policy that is not restricted to any functional form and is robust to potential covariate shifts in the population of allocatees.
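For readers unfamiliar with the distributionally robust viewpoint, the sketch below illustrates the standard strong-duality reformulation of a type-1 Wasserstein worst-case expectation, applied to the loss of a fixed toy policy under covariate shift: the supremum over distributions within radius epsilon of the empirical distribution equals the infimum over lam >= 0 of lam * epsilon plus the average of max_x [loss(x) - lam * |x - x_i|]. The scalar covariate, logistic loss, grid search, and radius are all illustrative assumptions, not the speakers' method.

```python
import numpy as np

def wasserstein_dro_loss(loss_fn, x_samples, radius, lambdas, x_grid):
    """Dual bound on the worst-case expected loss over a 1-Wasserstein ball.

    sup_{Q : W(Q, P_hat) <= radius} E_Q[loss]
      = min_{lam >= 0} lam * radius + (1/n) sum_i max_x (loss(x) - lam * |x - x_i|).

    The inner maximization over shifted covariates is done on a grid, and the
    outer minimization over a finite set of lam values, so this is an upper bound.
    """
    best = np.inf
    for lam in lambdas:
        # Adversary moves each sample x_i, paying lam per unit of transport.
        inner = np.max(loss_fn(x_grid)[None, :]
                       - lam * np.abs(x_grid[None, :] - x_samples[:, None]), axis=1)
        best = min(best, lam * radius + inner.mean())
    return best

# Toy example: loss of a fixed allocation policy as a function of one covariate.
loss_fn = lambda x: 1.0 / (1.0 + np.exp(-x))           # hypothetical policy loss
x_samples = np.random.default_rng(1).normal(size=100)  # offline covariates
robust = wasserstein_dro_loss(loss_fn, x_samples, radius=0.3,
                              lambdas=np.linspace(0.0, 5.0, 51),
                              x_grid=np.linspace(-5.0, 5.0, 501))
print(f"Worst-case expected loss within radius 0.3: {robust:.4f}")
```

Learning the policy itself then amounts to minimizing this worst-case objective over policies, which the talk does without restricting the policy to a parametric class.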
Talk 3: Learning Robust Risk Scores under Unobserved Confounders
Speaker: Cagil Kocyigit
Abstract: We study the problem of learning robust risk scores from observational data in the presence of unobserved confounders. In the absence of unobserved confounders, a well-known approach to adjust for confounding is inverse probability weighting (IPW) of the data. In the presence of unobserved confounders, however, estimating these weights is challenging, even in large-data regimes. We formulate a robust maximum likelihood problem whose objective is to maximize the worst-case likelihood over all possible weights within an uncertainty set, which we construct by drawing inspiration from sensitivity analysis in observational data settings. We then reformulate this problem as a convex optimization problem by leveraging duality techniques rooted in robust optimization. Numerical experiments show that our robust estimates outperform both IPW and direct estimation methods on synthetic data designed to reflect realistic scenarios.
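To make the inner worst case concrete, here is a minimal sketch assuming a simple box uncertainty set around nominal IPW weights with a fixed total mass: for a fixed parameter vector, the adversarial weights solve a linear program with fractional-knapsack structure. The logistic likelihood, the multiplicative band, and the sensitivity parameter gamma are hypothetical choices for illustration; the abstract's actual uncertainty set and its convex duality reformulation are not reproduced here.

```python
import numpy as np

def worst_case_weights(ll, w_lo, w_hi, total):
    """Solve min_w sum_i w_i * ll_i  s.t.  w_lo <= w <= w_hi, sum(w) = total.

    This inner problem of the robust MLE has fractional-knapsack structure:
    start every weight at its lower bound, then greedily raise the weights of
    the samples with the lowest log-likelihoods until the mass budget is spent.
    """
    w = w_lo.copy()
    budget = total - w_lo.sum()
    for i in np.argsort(ll):                 # lowest log-likelihood first
        step = min(w_hi[i] - w_lo[i], budget)
        w[i] += step
        budget -= step
    return w

def robust_loglik(theta, x, y, w_lo, w_hi, total):
    """Worst-case weighted log-likelihood of a logistic model at theta."""
    p = 1.0 / (1.0 + np.exp(-x @ theta))
    ll = y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12)
    return worst_case_weights(ll, w_lo, w_hi, total) @ ll

# Toy data with nominal IPW weights and a multiplicative sensitivity band.
rng = np.random.default_rng(2)
x = rng.normal(size=(200, 2))
y = (rng.random(200) < 0.5).astype(float)
w_hat = np.ones(200)                         # nominal IPW weights (here: uniform)
gamma = 2.0                                  # hypothetical sensitivity parameter
w_lo, w_hi = w_hat / gamma, w_hat * gamma    # uncertainty set around w_hat
print(robust_loglik(np.zeros(2), x, y, w_lo, w_hi, total=w_hat.sum()))
```

The robust estimate then maximizes this worst-case log-likelihood over theta; the talk's convex reformulation avoids the explicit inner-outer loop that a naive implementation of this max-min problem would require.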