Format: Tencent Meeting, Meeting ID: 416 966 638
Abstract: Mobile apps include privacy settings that allow their users to configure how their data should be shared. These settings, however, are often hard for users to locate and understand, even in popular apps such as Facebook. More seriously, they are often set to share user data by default. We report the first systematic study of this problem. Beyond privacy settings, machine learning algorithms (e.g., deep learning) may also have flaws. Compared with adversarial examples (AEs) in the digital space, physical adversarial attacks are considered a more severe threat to applications such as face recognition in authentication and object detection in autonomous driving cars. In particular, deceiving object detectors in practice is more challenging, since the relative position between the object and the detector may keep changing. In this talk, we present systematic solutions for building robust and practical AEs against real-world object detectors and automatic speech recognition systems.
Kai Chen (陈恺) is a Professor and Ph.D. supervisor at the Institute of Information Engineering, Chinese Academy of Sciences. He is Deputy Director of the State Key Laboratory of Information Security and Director of the editorial office of the Journal of Cyber Security (《信息安全学报》). He received his Ph.D. from the Graduate University of the Chinese Academy of Sciences in 2010. His main research areas include software and system security and AI security. He has published over 100 papers at conferences and in journals such as IEEE S&P, USENIX Security, ACM CCS, TIFS, and TDSC, and has led or participated in more than 40 projects funded by national ministries, including the National Key R&D Program, key projects of the National Natural Science Foundation of China, and the 863 Program. He was selected as a young top-notch talent of the national "Ten Thousand Talents Program" and for Beijing's "Outstanding Young Scholar" program.