Quantum computing can help secure the future of AI systems
By Dr Muhammad Usman, senior author of the paper and team leader of Quantum Systems at CSIRO's Data61.

Artificial intelligence algorithms are quickly becoming a part of everyday life. Many systems that require strong security are either already underpinned by machine learning or soon will be. These systems include facial recognition, banking, military targeting applications, and robots and autonomous vehicles, to name a few.

This raises an important question: how secure are these machine learning algorithms against malicious attacks?

In an article published today in Nature Machine Intelligence, my colleagues at the University of Melbourne and I discuss a potential solution to the vulnerability of machine learning models. We propose that integrating quantum computing into these models could yield new algorithms with strong resilience against adversarial attacks.

The dangers of data manipulation attacks

Machine learning algorithms can be remarkably accurate and efficient for many tasks. They are particularly useful for classifying and identifying image features. However, they're also highly vulnerable to data manipulation attacks, which can pose serious security risks.

Data manipulation attacks, which involve very subtle manipulation of image data, can be launched in several ways.

An attack may be launched by mixing corrupt data into the dataset used to train an algorithm, leading it to learn things it shouldn't. Manipulated data can also be injected during the testing phase (after training is complete), in cases where the AI system continues to train the underlying algorithms while in use.

People can even carry out such attacks from the physical world. Someone could put a sticker on a stop sign to fool a self-driving car's AI into identifying it as a speed-limit sign. Or, on the front lines, troops might wear uniforms that can fool AI-based drones into identifying them as landscape features.

Either way, the consequences […]