
Guanghua Forum: Fairness-aware Machine Learning for Multi-task Learning and Domain Generalization
Posted: 2023-05-18

Topic: Fairness-aware Machine Learning for Multi-task Learning and Domain Generalization

Speaker: Dr. Chen Zhao, Assistant Professor, Baylor University (USA)

Host: Professor Xu Shan

Time: 10:00, Monday, May 22, 2023

Venue: Room B206, Jingshi Building; Tencent Meeting ID: 207768482

Organizer: Office of Research


About the Speaker

Dr. Zhao is an Assistant Professor in the Department of Computer Science at Baylor University, Waco, Texas. Prior to joining Baylor, he was a senior R&D computer vision engineer at Kitware Inc. He received his doctoral degree in computer science from The University of Texas at Dallas in 2021. In 2016, he received dual M.S. degrees in computer science and biomedical science from the University at Albany, SUNY and Albany Medical College, respectively. His research focuses on machine learning, deep learning, data mining, and computer vision, and in recent years he has published more than 20 papers in premier conferences including KDD, CVPR, ICASSP, AAAI, WWW, ICDM, and PAKDD. He has also served as a Program Committee member for top international conferences such as KDD, NeurIPS, AAAI, IJCAI, ICDM, BigData, ECML-PKDD, AISTATS, WSDM, and WACV. Homepage: https://charliezhaoyinpeng.github.io/homepage/


Abstract

Machine learning now plays an increasingly prominent role in our lives, as decisions once made by humans are delegated to automated systems. In recent years, a growing number of reports have shown that human bias can surface in AI systems deployed by technology companies; Amazon's AI recruiting tool, for example, was revealed to be biased against minorities. A critical component of developing responsible and trustworthy machine learning models is ensuring that such models do not unfairly harm any population sub-group. However, most existing fairness-aware algorithms address machine learning problems limited to either a single task or a static environment; how to learn a fair model (1) jointly across multiple biased tasks and/or (2) in changing environments has barely been touched. In this talk, I will first present several selected published and ongoing works on fairness-aware machine learning under online/offline paradigms and static/changing environments, and then conclude with future directions and research on related topics.
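As an illustrative sketch only (not the speaker's method), one common way to quantify whether a model "unfairly harms a population sub-group" is a group-fairness criterion such as demographic parity, which compares the model's positive-prediction rate across protected groups; the function names and toy data below are hypothetical:

```python
def positive_rate(preds, groups, g):
    """Fraction of positive predictions (1s) within group g."""
    members = [p for p, grp in zip(preds, groups) if grp == g]
    return sum(members) / len(members)

def demographic_parity_gap(preds, groups):
    """Largest difference in positive-prediction rates across groups.

    A gap of 0 means every group receives positive decisions at the
    same rate; larger gaps indicate a potential disparate impact.
    """
    rates = [positive_rate(preds, groups, g) for g in set(groups)]
    return max(rates) - min(rates)

# Toy binary decisions (1 = positive outcome) for two groups "a" and "b".
preds = [1, 1, 0, 1, 0, 0, 0, 1]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]

print(demographic_parity_gap(preds, groups))  # 0.75 - 0.25 = 0.5
```

Fairness-aware training methods typically add a constraint or penalty on such a gap to the usual loss; the challenge the talk addresses is keeping a gap like this small across multiple tasks or shifting data distributions, not just on one static dataset.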
