I have published 60+ papers in top-tier conferences and journals, including:
AI/ML: NeurIPS, CVPR, ICCV, ECCV, ICML, AAAI, UAI, AAMAS, Springer-ML
Software Engineering: ACM-TOSEM, ACM-TECS, IEEE-TSE, Elsevier-IST, ASE
Safety, Reliability and Security: Elsevier-RESS, IEEE-TR, ISSRE, DSN, CCS, SafeComp
Robotics & Autonomous Systems: Nature-CE, IEEE-RA-L, ICRA, IROS, ITSC, IV
Please refer to my (carefully maintained) Google Scholar profile for the full list of publications. If you have any difficulty obtaining any of them, please feel free to contact me.
Featured Publications on Explainable and Trustworthy AI:
[ICCV] Adversarial Training for Probabilistic Robustness.
[ACM TOSEM] Hierarchical Distribution-Aware Testing of Deep Learning.
[ICCV] SAFARI: Versatile and Efficient Evaluations for Robustness of Interpretability.
[UAI] BayLIME: Bayesian Local Interpretable Model-Agnostic Explanations.
[IEEE TR] Coverage-Guided Testing for Recurrent Neural Networks.
Featured Publications on Probabilistic Verification of Autonomous Systems:
[Nature Comm. Eng.] Bayesian Learning for the Robust Verification of Autonomous Robots.
[ASE] Interval Change-Point Detection for Runtime Probabilistic Model Checking.
[AAAI] Probabilistic Model Checking of Robots Deployed in Extreme Environments.
Featured Publications on Reliability Assessment of Safety-Critical Systems:
[IEEE TSE] The Unnecessity of Assuming Statistically Independent Tests in Bayesian Software Reliability Assessments.
[Elsevier IST] Assessing Safety-Critical Systems from Operational Testing: A Study on Autonomous Vehicles. (journal extension of the ISSRE'19 work, which was a best paper nominee)
[Elsevier RESS] Modeling the Probability of Failure on Demand (pfd) of a 1-out-of-2 System in Which One Channel is “Quasi-Perfect”.
Featured Publications on Safety Assurance for Learning-Enabled Systems:
[ACM TECS] Reliability Assessment and Safety Arguments for Machine Learning Components in System Assurance. (journal extension of the AISafety'21 work, which received the best paper award)
[ITSC] Instance-Level Safety-Aware Fidelity of Synthetic Data and Its Calibration.
[ITSC] STPA for Learning-Enabled Systems: A Survey and a New Practice.
[SafeComp] A Safety Framework for Critical Systems Utilising Deep Neural Networks.