Did you know that 96 percent of Singaporean businesses have reportedly suffered a data breach? And cybercrime is not slowing down. With the financial risk from cyberattacks estimated at US$5.2 trillion between 2019 and 2023, cybercrime poses an ongoing challenge for investors, corporations, and consumers around the world. In Singapore, experts detected approximately 4.66 million web threats in 2019. Statistics like these reinforce the need for innovative ways of enhancing cybersecurity within our region.
Earlier this year, Finance Minister Heng Swee Keat revealed that the Singapore government will be investing S$1 billion to strengthen its cyber and data security systems to safeguard its critical information infrastructures, as well as its citizens’ data. Moving forward as a digital economy and smart nation, and with increasingly adopted technologies like artificial intelligence (AI), Machine Learning (ML), and Internet of Things (IoT), the Singapore government will also provide more funding to local deep-tech startups and small and midsized businesses (SMBs).
While the term AI was first coined in 1956, today it is a field of computer science focused on how machines can imitate human intelligence. Successful applications of AI include beating humans at Go, diagnosing cancer, and operating autonomous vehicles. Over the last 10 years, the potential of AI to help with cybersecurity problems has evolved from being overhyped into a critical ingredient of enterprise security programs. In their Top Security and Risk Trends for 2020, Gartner projects that “AI, and especially machine learning (ML), will continue to automate and augment human decision making across a broad set of use cases in security and digital business.”
Finding a Needle in a Haystack
While it is common knowledge to security professionals, others may be surprised by the daily volume of security logs generated by enterprises. The logs generated by firewalls, authentication servers, endpoints, and a variety of other devices and security tools can total millions every day. Security information and event management (SIEM) tools can use rules to filter and prioritize these logs into alerts, but it is the job of security analysts to investigate the most critical ones. For example, out of 10 million daily logs, hundreds may require expert human investigation.
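To make this concrete, here is a toy sketch of rule-based alert triage: each rule contributes a severity score, and only logs whose combined score crosses a threshold surface as alerts for an analyst. The rule conditions, field names, and weights here are invented for illustration and do not come from any particular SIEM product.

```python
# Hypothetical triage rules: (condition, severity weight).
RULES = [
    (lambda log: log["failed_logins"] > 10, 40),        # brute-force pattern
    (lambda log: log["bytes_out"] > 1_000_000, 30),     # large outbound transfer
    (lambda log: log["src_country"] != "SG", 10),       # unusual geography
]

def triage(logs, threshold=40):
    """Return (score, log) pairs whose summed rule scores meet the threshold,
    sorted so the highest-priority alerts come first."""
    alerts = []
    for log in logs:
        score = sum(weight for rule, weight in RULES if rule(log))
        if score >= threshold:
            alerts.append((score, log))
    return sorted(alerts, key=lambda a: a[0], reverse=True)

logs = [
    {"failed_logins": 25, "bytes_out": 500, "src_country": "SG"},
    {"failed_logins": 0, "bytes_out": 100, "src_country": "SG"},
    {"failed_logins": 12, "bytes_out": 2_000_000, "src_country": "US"},
]
print(triage(logs))  # only the two suspicious entries survive the filter
```

Even this crude filter illustrates the funnel: millions of raw logs in, a short prioritized queue out, with everything below the threshold never reaching a human.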
Security analyst investigations include examining detailed log data, reviewing correlated events and threat intelligence, and looking for suspicious behavior. Analysts must quickly determine whether the event has actually compromised the organization’s security, is a potential threat, or is a false positive. This difficult and time-consuming work is made even more challenging by the high percentage of alerts that are false positives. This is why it is not uncommon for security analysts to get “alert fatigue” – losing motivation to thoroughly investigate alerts.
Reactive investigations are necessary but insufficient for a robust security defense. Security teams should also proactively hunt for threats that are not triggered by system alerts. Targeted attacks often aim at stealing critical data and use techniques like obtaining user credentials, upgrading access to a privileged user, and moving laterally across the network. These attacks, also known as advanced persistent threats (APTs), can result in an attacker gaining unauthorized access to a system or network and remaining there for an extended period of time without being detected. The time a hacker goes undetected on your network, or “dwell time”, is commonly measured in months. APTs that use multi-stage attacks occurring over longer periods, commonly referred to as low-and-slow attacks, are hard to detect with rule-based analytics alone. The practice of hackers changing or morphing their attack techniques further adds to the challenge of threat hunting.
AI to the Rescue
Initial approaches to detecting threats used a subset of AI called unsupervised machine learning to detect anomalies. Unfortunately, while AI has proven capable of predicting significant future events, the range of behaviors of users, applications, and external data is so complex that it is very hard to isolate genuinely malicious outliers. The result was many AI-powered products that generated too many false positives to be practical.
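A minimal sketch of the idea, using a simple statistical z-score test on a single metric (the data and threshold are invented for illustration). It also shows why unsupervised detection over-alerts: any unusual value is flagged, whether it is an attack or just a legitimate spike.

```python
from statistics import mean, stdev

def zscore_outliers(values, threshold=3.0):
    """Return indices of values more than `threshold` standard
    deviations from the mean -- a classic unsupervised anomaly test."""
    mu, sigma = mean(values), stdev(values)
    return [i for i, v in enumerate(values)
            if sigma > 0 and abs(v - mu) / sigma > threshold]

# Daily login counts; the final value is a large deviation.
daily_logins = [102, 98, 110, 95, 105, 99, 480]
print(zscore_outliers(daily_logins, threshold=2.0))  # flags index 6
```

Note that the model has no notion of intent: a marketing campaign that quadruples logins would be flagged exactly like a credential-stuffing attack, which is why purely unsupervised products drowned analysts in false positives.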
While unsupervised learning attempts to find patterns among data points without knowing the meaning of the data, supervised learning infers a relationship based on existing data labels. For example, an AI model can learn to recognize pictures of a table after being trained on a large number of images that are identified as tables. However, in the field of cybersecurity, it is very hard to obtain labelled data to train detection models. Additionally, hackers can change or adapt their attack techniques faster than a supervised learning model can be retrained.
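The contrast with the unsupervised case can be sketched with a tiny supervised learner: a nearest-centroid classifier that averages labeled examples per class and assigns new events to the closest class. The feature vectors and labels are invented for illustration; real security telemetry is far harder to label, which is exactly the limitation described above.

```python
def train(examples):
    """Compute the mean feature vector (centroid) for each label."""
    sums, counts = {}, {}
    for features, label in examples:
        acc = sums.setdefault(label, [0.0] * len(features))
        sums[label] = [s + f for s, f in zip(acc, features)]
        counts[label] = counts.get(label, 0) + 1
    return {lbl: [s / counts[lbl] for s in acc] for lbl, acc in sums.items()}

def predict(centroids, features):
    """Assign the label of the nearest centroid (squared Euclidean distance)."""
    return min(centroids, key=lambda lbl: sum(
        (f - c) ** 2 for f, c in zip(features, centroids[lbl])))

# Labeled training data: (feature vector, label) -- labels are the scarce part.
train_set = [
    ([0.1, 0.2], "benign"), ([0.2, 0.1], "benign"),
    ([0.9, 0.8], "malicious"), ([0.8, 0.9], "malicious"),
]
model = train(train_set)
print(predict(model, [0.85, 0.9]))  # → malicious
```

The classifier itself is trivial; the expensive part in security is producing those labels, and a model frozen at training time cannot keep up with attackers who change tactics.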
The solution to these limitations is active supervised learning, which engages human experts to help create and train threat hunting models. Organizations that combine AI with human expertise are reportedly 20 times more effective against cyberattacks than those using traditional methods alone. The resulting AI models, refined by expert feedback, can quickly learn to distinguish between malicious and normal behavior. AI-powered threat hunting enables security analysts to significantly increase productivity and to detect and respond to more real threats that would otherwise have resulted in a damaging breach.
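The active-learning loop can be sketched as follows. The `analyst_label` oracle is a hypothetical stand-in for a human expert's verdict, and the uncertainty heuristic (distance from the decision boundary) is one common query strategy, not the method of any particular product: the model asks the analyst only about the events it is least sure of, so a small amount of expert time labels the most informative data.

```python
import random

def analyst_label(event):
    # Hypothetical stand-in for a human analyst's verdict.
    return "malicious" if event["anomaly_score"] > 0.7 else "benign"

def uncertainty(event, boundary=0.5):
    # Events scored near the decision boundary are the most ambiguous.
    return -abs(event["anomaly_score"] - boundary)

events = [{"anomaly_score": round(random.random(), 2)} for _ in range(100)]
labeled = []
for _ in range(3):  # three rounds of expert feedback
    unlabeled = [e for e in events if "label" not in e]
    # Query the analyst on the 5 most uncertain unlabeled events, then
    # (in a real system) retrain the model on the grown labeled set.
    for event in sorted(unlabeled, key=uncertainty, reverse=True)[:5]:
        event["label"] = analyst_label(event)
        labeled.append(event)

print(f"{len(labeled)} events labeled out of {len(events)}")
```

After each round a real system would retrain on the enlarged labeled set, so the analyst's scarce attention continually sharpens the model instead of being spent triaging obvious cases.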
Can AI Defend Against AI?
Just as security teams and technology vendors are adopting AI to detect and contain threats, hackers can also use AI to power their attacks. Hackers are expected to use AI techniques to target organizations, develop new exploits, and detect vulnerabilities. AI is expected to increase the speed of attacks while reducing their cost. For example, writing an effective phishing email takes time and creativity; AI can help automate this process.
The good news is that developers of security tools are also rapidly adopting AI as part of their product development and enhancement. However, there is still a lot of marketing hype around AI, so we advise you to dig into the details and assess whether your vendors are fully leveraging AI/ML technologies before you make the leap.
Organizations can use machine learning to detect suspicious and unusual patterns that are nearly impossible to discover with the human eye. Intelligent detection algorithms can continuously compare network data packets to discover anomalous traffic, then apply strategies such as statistical monitoring and anomaly detection to identify malware variants communicating over a network. Cybersecurity work is traditionally very time-consuming, but with effective use of AI you can begin to make your cybersecurity teams more efficient.