Time: 11:00-12:00, Jan. 13
Location: SIST 1A 200
Host: Fu Song
Mobile apps are now indispensable in people's daily lives. To make them more powerful, many deep learning models are embedded into mobile apps. Compared to offloading deep learning from smartphones to the cloud, performing machine learning on-device can improve latency, connectivity, and power consumption. However, deploying models to devices may expose model details, because reverse engineering of Android apps is mature, resulting in potential security concerns. In this talk, I will present our latest work on attacking on-device models in Android apps. Based on a large-scale empirical study of AI deployment in real-world Android apps, we proposed different attacks, including adversarial and backdoor attacks, to mislead on-device deep learning models (ICSE'21). In addition to attacking AI models, I will also briefly describe how our group adopts deep learning models in GUI testing for conventional non-AI apps.
Dr Chunyang Chen is a Lecturer (Assistant Professor) in the Faculty of IT, Monash University. His main research interest lies in automated software engineering, especially data-driven mobile app development. He is also interested in Human-Computer Interaction (e.g., collaborative editing, UI design) and software security (e.g., neural network attacks). He has published 50+ research papers in top venues such as ICSE, FSE, ASE, TSE, TOSEM, and CSCW, and has received three ACM SIGSOFT Distinguished Paper Awards (ICSE'21, ICSE'20, ASE'18), one best paper award (SANER'16), and one best tool demo award (ASE'16). He has established extensive collaboration with industry, including Google and Facebook (Meta).