Thanks to machine learning and artificial intelligence, security solutions can automatically initiate countermeasures in the event of a cyber attack. But what countermeasures are possible, and how far can the response to an attack go?
When machine learning and artificial intelligence found their way into the field of cyber security, most providers stressed that artificial intelligence (AI) could only offer support and that the final decision would always lie with humans. But the interaction of AI and humans, i.e. AI-assisted humans, is not the only way to bring artificial intelligence into security: AI could also become a "lone fighter" in security.
Some internet users would be fine with that: according to a survey, a quarter of people would prefer cybersecurity powered by artificial intelligence.
According to the survey by the Association of the Internet Industry eco, AI and machine learning can support those responsible for cyber security with routine tasks, such as correctly evaluating security warnings. Sixty-five per cent of those surveyed said that AI methods should already be used today to defend against cyber attacks, and 59 per cent even think that, in a few years, AI systems will essentially take over the defense against cyber attacks autonomously.
AI's role then changes from assistant to security analysts and security officers to that of an automatic security operator. This raises various questions regarding responsibility, reliability and liability. Last but not least, there is the question of how far automatic cyber defense can and may go.
AI In Security Automation
AI already helps to automate both the detection of cyber attacks and the response to detected attacks. Automated security functions include, for example, analyzing security events, blocking infected end devices, removing malware, detecting and patching vulnerabilities, and logging off suspicious users. Automation can accelerate and optimize the response to cyber attacks, but it is essential to define precisely what the AI solution can and cannot do.
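Defining what the automation may and may not do on its own often comes down to an explicit policy. The following is a minimal sketch of such an allowlist-based response dispatcher; all names here (`Alert`, `RESPONSE_POLICY`, the action strings) are invented for illustration and do not come from any particular product.

```python
from dataclasses import dataclass

@dataclass
class Alert:
    kind: str    # e.g. "malware", "suspicious_login"
    target: str  # device or account identifier

# Actions the automation is allowed to take on its own.
ALLOWED_ACTIONS = {"quarantine_device", "remove_malware"}
# Actions that always require a human decision.
ESCALATE_ACTIONS = {"block_user"}

# Which action each alert type maps to (illustrative).
RESPONSE_POLICY = {
    "malware": "remove_malware",
    "infected_device": "quarantine_device",
    "suspicious_login": "block_user",
}

def respond(alert: Alert) -> str:
    """Dispatch an alert: act automatically, escalate, or just log."""
    action = RESPONSE_POLICY.get(alert.kind, "log_only")
    if action in ALLOWED_ACTIONS:
        return f"auto:{action}:{alert.target}"
    if action in ESCALATE_ACTIONS:
        return f"escalate:{action}:{alert.target}"
    return f"log:{alert.target}"

print(respond(Alert("malware", "host-17")))         # auto:remove_malware:host-17
print(respond(Alert("suspicious_login", "j.doe")))  # escalate:block_user:j.doe
```

The key design point is that the boundary between "act autonomously" and "ask a human" is data, not buried logic, so it can be reviewed and adjusted without touching the detection code.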
Depending on the reaction initiated, the consequences can be quite noticeable. For example, a user's access may be blocked automatically because the AI has become suspicious, i.e. the user's behavior has deviated too far from the norm. Perhaps the user only needed to do something unusual as a one-off, but now they no longer have access to IT.
Such "drastic" reactions are therefore usually first presented to a human IT security expert, who decides on the proper response. With a fully autonomous cyber defense, however, humans would only act as decision-makers if the automated security processes were explicitly interrupted.
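To make the "deviated too far from the norm" idea concrete, here is a minimal sketch of behavior-based anomaly scoring with a human-in-the-loop escalation step. The feature (daily login counts) and the thresholds are illustrative assumptions, not taken from any real security solution.

```python
import statistics

def anomaly_score(baseline: list[float], observed: float) -> float:
    """How many standard deviations 'observed' lies from the baseline mean."""
    mean = statistics.mean(baseline)
    stdev = statistics.pstdev(baseline) or 1.0  # avoid division by zero
    return abs(observed - mean) / stdev

def decide(score: float, soft: float = 2.0, hard: float = 4.0) -> str:
    """Mild deviations trigger gentle measures; drastic ones go to a human."""
    if score >= hard:
        return "escalate_to_analyst"        # drastic action: a human decides
    if score >= soft:
        return "require_reauthentication"   # low-impact automatic measure
    return "allow"

logins_per_day = [4, 5, 3, 4, 6, 5, 4]      # hypothetical user baseline
print(decide(anomaly_score(logins_per_day, 30)))  # escalate_to_analyst
```

In this sketch only low-impact measures happen automatically; anything that would cut off a user's access is routed to an analyst, mirroring the workflow described above.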
AI Must Be Secure To Create Security
One of the main problems with AI is its stability. As one security provider puts it: "While AI is usually a useful addition to security solutions, incorrect implementation can lead to unsatisfactory results. The use of AI and ML in malware detection requires constant fine-tuning. Today's AI doesn't know to ignore benign files that don't match the expected patterns. If the mesh of a neural network is too wide, malware can evade detection; too fine, and the security solution keeps throwing false alarms."
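The "mesh width" metaphor describes a classic decision-threshold trade-off. The sketch below uses invented detection scores (not output from any real detector) to show how loosening or tightening the threshold trades missed malware against false alarms.

```python
# (score, is_malicious) pairs a hypothetical detector might produce.
samples = [
    (0.95, True), (0.80, True), (0.55, True),    # malware
    (0.60, False), (0.35, False), (0.10, False)  # benign files
]

def evaluate(threshold: float) -> tuple[int, int]:
    """Return (missed_malware, false_alarms) at a given detection threshold."""
    missed = sum(1 for score, bad in samples if bad and score < threshold)
    false_alarms = sum(1 for score, bad in samples if not bad and score >= threshold)
    return missed, false_alarms

print(evaluate(0.9))  # wide mesh:  (2, 0) -> malware evades detection
print(evaluate(0.3))  # fine mesh:  (0, 2) -> repeated false alarms
```

No single threshold fixes both failure modes here, which is why the quoted provider stresses that such systems need constant fine-tuning rather than one-off calibration.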
Malwarebytes also warns of possible misuse: "The rushed introduction of AI in security technology also creates an opportunity for cybercriminals to use the weaknesses of the AI currently used against security providers and users. Once threat actors figure out what a security program is looking for, they can find solutions to help them bypass detection and keep their malicious files under wraps."
AI Could Itself Be Attacked
If the AI itself is not sufficiently secure, this has fatal consequences for autonomous cyber defense: an insecure AI makes the very IT it is supposed to protect insecure as a whole.
Artificial intelligence offers enormous possibilities for shaping our economy and our information society. With the increasing networking of companies and public institutions, however, their potential vulnerability to cyber attacks is also growing. We therefore also have to look at the risks for cyber security and data protection: AI can improve the security of IT systems, but at the same time potential attackers can use AI as well.
AI systems themselves could also be the target of hackers in the future. The resilience of AI systems against manipulation must therefore be researched more intensively.
Another possibility is that humans monitor the security of the AI systems, while the AI itself looks after the security of other IT systems. Human-assisted AI would then be the basis for a (semi-)autonomous cyber defense. This also shows that the need for security experts will not disappear in the future, even with security automation.