AI, on the other hand, can save teams considerable time and effort in risk assessment and threat detection. It can also help with response, although that must be handled carefully. An AI model can, in effect, shoulder-surf analysts to learn how they triage incidents, then either perform those tasks on its own or prioritize cases for human review. But teams need to be sure the right people are instructing the AI.
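To make the idea concrete, here is a minimal sketch of that learn-then-prioritize loop. It assumes past incidents have already been reduced to numeric features and that analysts' historical verdicts are available as labels; the feature names, the synthetic data, and the choice of a random forest are all illustrative, not a prescription.

```python
# Sketch: learn from past analyst triage decisions, then rank new
# cases for human review. All data and features here are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Hypothetical features per incident: [MB transferred, off-hours score,
# destination rarity]. Labels stand in for analysts' past verdicts
# (1 = confirmed exfiltration, 0 = benign).
X_train = rng.random((500, 3))
y_train = (X_train[:, 0] * X_train[:, 2] > 0.4).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Score a batch of new incidents and surface the riskiest ones first,
# leaving the final call to a human analyst.
X_new = rng.random((20, 3))
risk = model.predict_proba(X_new)[:, 1]
for idx in np.argsort(risk)[::-1][:5]:
    print(f"incident {idx}: risk {risk[idx]:.2f} -> queue for review")
```

The key design point is the last loop: the model only reorders the queue, it does not close cases, which keeps a human in the response path.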
Years ago, for example, I ran an experiment in which 10 analysts of varying skill levels reviewed 100 cases of suspected data exfiltration. Two senior analysts correctly identified all positives and negatives, three less experienced analysts got almost every case wrong, and the remaining five produced essentially random results. No matter how good an AI model is, it would be useless if trained on labels from a team like that.
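The practical defense the anecdote argues for is to measure each labeler's agreement with adjudicated outcomes before their verdicts are used as training data. Below is a hedged sketch of that check; the fabricated verdicts mirror the three groups in the experiment, and the 0.9 accuracy threshold is an arbitrary assumption, not a standard.

```python
# Sketch: vet analyst label quality against known ground truth before
# training on their verdicts. All data here is fabricated.
import numpy as np

rng = np.random.default_rng(1)
ground_truth = rng.integers(0, 2, size=100)  # 100 adjudicated cases

analysts = {
    "senior_1": ground_truth.copy(),            # matches every outcome
    "junior_1": 1 - ground_truth,               # almost always wrong
    "mid_1": rng.integers(0, 2, size=100),      # effectively guessing
}

for name, verdicts in analysts.items():
    accuracy = (verdicts == ground_truth).mean()
    status = "trust labels" if accuracy >= 0.9 else "exclude from training"
    print(f"{name}: accuracy {accuracy:.2f} -> {status}")
```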
AI is like a powerful car: It can do wonders in the hands of an experienced driver or a lot of damage in the hands of an inexperienced one. That’s one area where the skills shortage can affect AI’s cybersecurity impact.
Given the hype about AI, organizations might be tempted to simply rush into adopting the technology. But in addition to properly training AI, there are questions CTOs need to answer, starting with suitability issues:
Does AI fit into the organization's ecosystem? This includes the platform; external components, such as databases and search engines; free and open source software and licensing; and the organization's security posture and certifications, backup, and failover.