
Towards Cross-task Universal Perturbation against Black-box Object Detectors in Autonomous Driving

Zhang, Q, Zhao, Y, Wang, Y, Baker, T, Zhang, J and Hu, J (2020) Towards Cross-task Universal Perturbation against Black-box Object Detectors in Autonomous Driving. Computer Networks, 180. ISSN 0169-7552

Text: Towards Cross-task Universal Perturbation against Black-box Object Detectors in Autonomous Driving.pdf - Accepted Version
Restricted to Repository staff only until 15 July 2021.
Available under License Creative Commons Attribution Non-commercial No Derivatives.



Deep neural networks are the main research branch of artificial intelligence and are suited to many decision-making fields. Autonomous driving and unmanned vehicles often depend on deep neural networks for accurate and reliable detection, classification, and ranging of surrounding objects in real on-road environments, either locally or through swarm intelligence among distributed nodes over 5G channels. However, deep neural networks have been shown to be vulnerable to well-designed adversarial examples that are imperceptible to human eyes in computer vision tasks, and studying this vulnerability is valuable for enhancing the robustness of neural networks. Existing adversarial examples against object detection models are image-dependent, so in this paper we implement adversarial attacks against object detection models using universal perturbations instead. We find that universal perturbations transfer across tasks, models, and datasets. We first train a universal perturbation generator and then add the universal perturbation to target images in two ways, resizing and pileup, to solve the problem that universal perturbations cannot be applied directly to attack object detection models. We then exploit the transferability of universal perturbations to attack black-box object detection models, which reduces the time cost of generating adversarial examples. A series of experiments on the PASCAL VOC and MS COCO datasets demonstrates the feasibility of cross-task attacks and proves the effectiveness of our attack on two representative object detectors: a regression-based model (YOLOv3) and a proposal-based model (Faster R-CNN).
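The two application schemes named in the abstract, resizing and pileup, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names, the nearest-neighbour resize, the L-infinity budget `eps`, and the example array shapes are all assumptions made for the sketch.

```python
import numpy as np

def apply_resize(image, perturbation, eps=8.0):
    """Resize the universal perturbation (nearest-neighbour, assumed here)
    to the target image's spatial size, then add it under an L-infinity
    budget eps so the change stays visually small."""
    h, w = image.shape[:2]
    ph, pw = perturbation.shape[:2]
    rows = np.arange(h) * ph // h          # map each output row to a source row
    cols = np.arange(w) * pw // w          # map each output column to a source column
    resized = perturbation[rows][:, cols]  # nearest-neighbour upsampling
    return np.clip(image + np.clip(resized, -eps, eps), 0, 255)

def apply_pileup(image, perturbation, eps=8.0):
    """Tile ("pile up") the fixed-size perturbation across the image,
    crop the overhang, then add it under the same L-infinity budget."""
    h, w = image.shape[:2]
    ph, pw = perturbation.shape[:2]
    reps = (-(-h // ph), -(-w // pw), 1)   # ceiling division for tile counts
    tiled = np.tile(perturbation, reps)[:h, :w]
    return np.clip(image + np.clip(tiled, -eps, eps), 0, 255)

# Hypothetical shapes: a 416x416 detector input and a 64x64 universal perturbation.
image = np.zeros((416, 416, 3), dtype=np.float64)
pert = np.random.uniform(-20, 20, size=(64, 64, 3))
adv_resize = apply_resize(image, pert)
adv_pileup = apply_pileup(image, pert)
```

Either result keeps the target image's original resolution, which is what lets a perturbation trained at one size attack detectors whose inputs are a different size.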

Item Type: Article
Uncontrolled Keywords: 08 Information and Computing Sciences, 09 Engineering, 10 Technology
Subjects: Q Science > QA Mathematics > QA75 Electronic computers. Computer science
Divisions: Computer Science & Mathematics
Publisher: Elsevier
Date Deposited: 07 Jul 2020 08:57
Last Modified: 06 Aug 2020 10:15
DOI or Identification number: 10.1016/j.comnet.2020.107388
URI: https://researchonline.ljmu.ac.uk/id/eprint/13261
