
GeekPwn 2018 Top AI Hackers Wanted: Are You Confident You Can Fool AI?


If you follow machine learning, the term "adversarial examples" is probably no longer unfamiliar. But an adversarial-example attack-and-defense competition may well be something you are hearing of for the first time.

To accelerate research on adversarial examples, GeekPwn 2018, the world's first frontier platform focused on artificial intelligence and professional security, is introducing the Competition on Adversarial Attacks and Defenses (CAAD), with three events covering adversarial attack and defense research in image recognition, officially kicking off in May this year. Is the picture a contestant presents a Chinese pastoral dog or a polar bear? A parrot or an ostrich? A car or an airplane? To find out, come to GeekPwn 2018.

It may well be the most unusual competition in AI security.

Adversarial examples that don't even spare humans

Artificial intelligence has entered millions of households as an indispensable new technology: from face-recognition entry systems and iris-recognition safes to phones and front doors, AI makes everything look convenient and beautiful. The fact is, however, that researchers have run into far more AI "failures" than the finished products we see would suggest, and many of those failures involve interference from adversarial examples, because most machine-learning-based classifiers are highly sensitive to them.

One example: at GeekPwn Silicon Valley 2016, Ian Goodfellow, the father of generative adversarial networks, gave a demonstration of deceiving machine vision.

Adding a small adversarial perturbation to a picture of a panda led the system to mistake it for a gibbon. Most of the time such tiny modifications escape human attention, yet they still cause the classifier to get the answer wrong.

Using that tampered adversarial example, Ian showed that even a very small change to a sample can trick a neural-network image classifier into a completely wrong judgment, a vivid illustration of how vulnerable current AI systems are.
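For readers who want to try this themselves, here is a minimal sketch of the fast gradient sign method (FGSM) that underlies this kind of perturbation, written in PyTorch. The pretrained ResNet-50, the epsilon value, and the [0, 1] input range are illustrative assumptions, not the exact setup of the GeekPwn demo.

```python
import torch
import torch.nn.functional as F
import torchvision.models as models

# Illustrative model choice; any differentiable image classifier works.
model = models.resnet50(pretrained=True).eval()

def fgsm_attack(image, label, epsilon=0.007):
    """One-step FGSM: nudge `image` in the direction that increases the loss.

    image: tensor of shape (1, 3, H, W) with values in [0, 1]
    label: tensor of shape (1,) holding the true class index
    """
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # The sign of the gradient gives the per-pixel direction that hurts the
    # classifier most; a small epsilon keeps the change imperceptible to humans.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()
```

Calling fgsm_attack on a correctly classified panda image and re-running the classifier on the result reproduces the panda-to-gibbon style of mistake, with a perturbation too small for a human to notice.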

Not long ago this kind of deception was taken a step further: adversarial examples no longer fool only machines, they can now fool humans too. As shown in the figure below, both machine models and humans judge the left image to be a cat and the right image to be a dog, when in fact the right image is simply an adversarially perturbed version of the left one.

All of these examples tell us one thing: machine vision is not invulnerable. Attackers can use adversarial examples to create real security risks and to attack machine-learning systems, even when the target model itself is not accessible, because adversarial examples often transfer between models. If, for example, the vision system of a driverless car is spoofed so that it can no longer correctly distinguish pedestrians, vehicles, and road signs, the consequences could be disastrous.
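One reason such attacks succeed without access to the model is transferability. Below is a minimal sketch of a transfer attack, where two pretrained torchvision models stand in for the attacker's local surrogate and the unseen target; the specific architectures are illustrative assumptions.

```python
import torch
import torch.nn.functional as F
import torchvision.models as models

surrogate = models.resnet18(pretrained=True).eval()  # model the attacker controls
victim = models.vgg16(pretrained=True).eval()        # stands in for the unknown system

def transfer_attack(image, label, epsilon=0.03):
    """Craft a perturbation on the surrogate, then test it on the victim."""
    image = image.clone().detach().requires_grad_(True)
    # Only the surrogate's gradients are used; the victim is never queried here.
    F.cross_entropy(surrogate(image), label).backward()
    adversarial = (image + epsilon * image.grad.sign()).clamp(0.0, 1.0).detach()
    # If transfer succeeds, the victim's prediction differs from the true label.
    return adversarial, victim(adversarial).argmax(dim=1)
```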

In the long run, machine learning and AI systems are bound to become more powerful, and machine-learning security vulnerabilities like adversarial examples could compromise, or even take control of, those powerful AIs. So what does defense look like from a machine-learning security perspective? One strategy that has worked so far is adversarial training: during model training, each training batch contains not just clean samples but clean samples plus adversarial examples. As training goes on, accuracy on clean images improves on the one hand, and robustness to adversarial examples improves on the other, as sketched below.
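Here is a minimal sketch of one epoch of such adversarial training, assuming a generic PyTorch model, optimizer, and data loader; the single-step FGSM crafting and the equal weighting of clean and adversarial loss are illustrative choices, not a prescribed recipe.

```python
import torch
import torch.nn.functional as F

def fgsm(model, images, labels, epsilon):
    """Craft adversarial counterparts of a clean batch (one FGSM step)."""
    images = images.clone().detach().requires_grad_(True)
    F.cross_entropy(model(images), labels).backward()
    return (images + epsilon * images.grad.sign()).clamp(0.0, 1.0).detach()

def adversarial_training_epoch(model, optimizer, loader, epsilon=0.03):
    model.train()
    for images, labels in loader:
        # Craft adversarial versions of the current clean batch on the fly.
        images_adv = fgsm(model, images, labels, epsilon)
        optimizer.zero_grad()
        # Train on clean and adversarial samples together, as described above.
        loss = (F.cross_entropy(model(images), labels)
                + F.cross_entropy(model(images_adv), labels))
        loss.backward()
        optimizer.step()
```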

Can you pull the wool over the machine's eyes?

To find the best strategies for defending against these adversarial examples and to explore this exciting field, GeekPwn launched the 2018 CAAD Competition on Adversarial Attacks and Defenses, with a total prize pool of $650,000, jointly with Alexey Kurakin and Ian Goodfellow of Google Brain and Dawn Song (Xiaodong Song), professor of computer science at the University of California, Berkeley.

The competition focuses on the adversarial examples that cause machine-learning classifiers to keep making mistakes, and sets up three separate events for adversarial attack and defense research in image recognition, previewing possible risks in AI and driving continuous improvement, so as to promote the healthy growth of AI security. In all three events contestants must submit programs that carry out the corresponding task, and a contestant may register for one event or several.

The first event is the non-targeted adversarial attack: the goal is to slightly modify an original image so that an unknown classifier misclassifies the modified image. The second is the targeted adversarial attack: the goal is to slightly modify an original image so that an unknown classifier misclassifies it as a specified class. The last is adversarial defense: the goal is to build machine-learning-based classifiers that are strongly robust to adversarial examples, i.e. that can still classify adversarial examples correctly. The sketch below contrasts the first two.
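To make the difference between the two attack events concrete, here is a minimal sketch of a targeted attack in PyTorch: where the non-targeted FGSM sketch earlier increases the loss on the true label, an iterative targeted attack decreases the loss on a chosen target label. The step size, step count, and perturbation bound are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def targeted_attack(model, image, target_label, epsilon=0.03, alpha=0.005, steps=10):
    """Iteratively nudge `image` toward being classified as `target_label`."""
    original = image.clone().detach()
    adversarial = original.clone()
    for _ in range(steps):
        adversarial = adversarial.clone().detach().requires_grad_(True)
        F.cross_entropy(model(adversarial), target_label).backward()
        # Gradient descent on the target-class loss pulls the prediction
        # toward the chosen class (the non-targeted attack ascends instead).
        adversarial = adversarial - alpha * adversarial.grad.sign()
        # Keep the total perturbation within an epsilon ball and the image valid.
        adversarial = original + (adversarial - original).clamp(-epsilon, epsilon)
        adversarial = adversarial.clamp(0.0, 1.0).detach()
    return adversarial
```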

In short, through the CAAD adversarial attack and defense competition, GeekPwn hopes to bring together the world's top AI hackers and, through "adversarial training", make machines "learn" more deeply, effectively improving their robustness and helping machine-learning systems grow up healthy.

The CAAD Competition on Adversarial Attacks and Defenses will run online from May to July 2018, with an awards ceremony at GeekPwn 2018 Las Vegas. Registration is open from May 10 to July 10, 2018. Notably, the competition's advisory board and judging panel are made up of top industry experts, including Alexey Kurakin, senior research engineer at Google Brain; Dawn Song, professor of computer science at the University of California, Berkeley; Zhu Jun, associate professor at Tsinghua University and deputy director of the State Key Laboratory of Intelligent Technology and Systems; and Wang Haibing, director of the GeekPwn Lab.

In addition to the CAAD competition, GeekPwn 2018 will feature another AI-themed contest: the Data Tracking Challenge. In the age of AI and pervasive data, the ability to correlate data from different sources across multiple dimensions and produce accurate conclusions is an advanced skill. Can you analyze the virus app planted in a victim's phone and, from the mass of virus data, uncover the culprit behind it? As long as you can "play with AI", you are welcome to enter and use your extraordinary technical skills to complete these seemingly "impossible" challenges. GeekPwn 2018 will be held in Las Vegas, USA on August 10 and in Shanghai, China on October 24, so stay tuned. Visit the official GeekPwn website (2018.geekpwn.org) for more information about the event.

