How powerful is weaponized AI?
E Security, February 9 — Cyber threats are a constantly moving target: new security breaches are discovered at an accelerating rate every week, and organizations around the world receive thousands of security alerts from their monitoring systems every day. Ovum, a leading independent analyst firm for the telecommunications industry, reports that more than 30% of banks receive over 20 security alerts daily. How should enterprise organizations prepare for the challenge of faster, AI-driven attacks?
Once a security breach occurs, the patching process begins, but the system remains highly vulnerable while the fix is underway. Moreover, whenever IT departments adopt new technologies to counter threats, threat actors improve their own tooling in response, which means reacting quickly to such changes has become a prerequisite for effective security.
From a defense-versus-offense perspective, artificial intelligence (AI) is being applied both to defensive tooling and to hacking tools, and the weaponization of AI is now widely regarded as one of the most significant cybersecurity threats of 2018: 62% of security experts believe AI will be used as a cyber attack weapon in the coming year.
Real world applications of AI technology
Several Australian companies have already begun exploring AI for network security. The Commonwealth Bank of Australia, for example, announced in December 2016 that it was developing AI solutions to assist with cybersecurity, fraud detection, and regulatory compliance. The bank now uses machine learning to make sense of large volumes of unstructured data and to manage alerts in areas that need attention.
For hackers, AI provides the perfect tool for executing attacks efficiently at scale and for deciding autonomously when, where, how, and against whom to launch an attack. AI also makes it easy to gather target-related information from social media and other public sources to build personalized phishing attacks.
US security firm ZeroFOX recently ran an experiment to test whether AI is better than humans at launching phishing attacks, using an AI solution to monitor user behaviour on social media and then create and distribute its own phishing lures. The AI solution, called "Snap_R", proved roughly six times as effective as a human attacker at luring Twitter users into clicking malicious links. The experiment compared:
Snap_R sent spear-phishing tweets to more than 800 users at a rate of 6.75 tweets per minute, quickly luring in 275 victims.
The human attacker, by contrast, sent tweets at a rate of only 1.075 per minute and attracted just 49 users.
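The "roughly six times" figure can be sanity-checked directly from the numbers reported above (the variable names here are illustrative, not from the experiment):

```python
# Throughput and success figures from the ZeroFOX Snap_R experiment.
snap_r_rate = 6.75      # phishing tweets per minute (AI)
human_rate = 1.075      # phishing tweets per minute (human)
snap_r_victims = 275    # users lured by the AI
human_victims = 49      # users lured by the human

# The AI sent lures about six times faster...
rate_ratio = snap_r_rate / human_rate
# ...and hooked five to six times as many victims.
victim_ratio = snap_r_victims / human_victims

print(f"tweet-rate ratio: {rate_ratio:.1f}x")   # ≈ 6.3x
print(f"victim ratio:     {victim_ratio:.1f}x") # ≈ 5.6x
```

Both ratios land in the same ballpark, which is why the AI is described as roughly six times as effective.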
How to defend against AI-based attacks?
To defend against such attacks, you first need to define exactly what you are protecting, then ensure you have proper controls in place for threat and vulnerability management, patch management, identification and encryption of critical data, and visibility across the whole environment. The key throughout is the ability to change direction quickly.
Clear requirements for procedures and processes are also essential. Even in the best case, the real-world performance of the most advanced technology depends on how it is actually implemented: technology can only augment these processes, not replace them entirely.
Adequate understanding of the normal state of one's environment
In addition, organizations of all types must understand the normal state of their own environment. A lack of contextual information is a challenge for most organizations: they need to fully understand their assets and build context around how those assets communicate and interact. Once that context is established, anomalous events can be isolated and investigated far more easily. Security and governance should be woven into daily operations rather than treated as a one-off task at a particular point in time.
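A minimal sketch of this idea, assuming the "normal state" is captured as the mean and spread of some activity metric (the metric, numbers, and function names here are hypothetical, purely to illustrate baseline-based anomaly detection):

```python
from statistics import mean, stdev

def build_baseline(samples):
    """Capture the normal state: mean and spread of an activity metric
    (e.g. outbound connections per host per hour)."""
    return mean(samples), stdev(samples)

def is_anomalous(value, baseline, threshold=3.0):
    """Flag values more than `threshold` standard deviations from normal."""
    mu, sigma = baseline
    return abs(value - mu) > threshold * sigma

# Hypothetical hourly connection counts observed during normal operation.
normal_hours = [98, 102, 95, 110, 105, 99, 101, 97, 103, 100]
baseline = build_baseline(normal_hours)

print(is_anomalous(104, baseline))  # within the normal range -> False
print(is_anomalous(450, baseline))  # sudden spike -> True
```

Real deployments use far richer context (asset inventories, communication graphs, per-entity behaviour models), but the principle is the same: establish what normal looks like first, then investigate deviations from it.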
As defenses grow stronger, criminals tend to shift tactics and target the weakest link in the chain. This can be mitigated by focusing on a few key matters: laying a solid governance foundation, understanding one's assets, and capturing the characteristics of normal conditions.
AI technology is neutral by nature and can be exploited by attackers and defenders alike. It can not only drive attacks rapidly but also shift tactics and strategies rapidly, which means AI-driven defenses must be equally responsive. The key to this contest is to establish what normal looks like and use it as the basis for identifying irregular or abnormal activity.
Note: This article was compiled and reported by E Security; please credit the original address when reprinting:
https://www.easyaq.com/news/1341752595.shtml