The AV Era: Based on signatures and heuristics. Highly labor intensive, frequent updates, and only effective against known threats.
The Machine Learning Era: Now able to detect zero-day exploits. However, detection capabilities are limited to human-selected features.
The Deep Learning Era: Even higher detection rates are achieved with the ability to skip human engineering and analyze all the available raw data in a file.
The era of Antivirus solutions: The AV software isolates suspicious files based on existing file signatures, heuristic analysis and file reputation. This is only effective against known malicious threats and vulnerabilities.
As AI technologies start to mature, we enter the era of Machine Learning: Endpoint protection, detection & response is made possible by machine learning-based static analysis, heuristic behavioral analysis, and memory protection. Indeed a big step forward, but still not optimal. Machine learning systems rely on feature engineering, which is limited by the knowledge of the security expert who must handcraft the features used for detection. Machine learning-based solutions still produce low detection rates for new malware and high false-positive rates.
Enter Deep Learning: The autonomy of the training and prediction stages is enhanced with Deep Learning, so that the algorithm can analyze all the raw data in a file and is not limited by an expert’s capabilities. This represents a quantum leap in computer science. For cybersecurity this enables a more advanced level of protection, with higher detection rates of unknown malware, lower false-positive rates, and the ability to detect prior to execution, effectively in zero time.
Feature Engineering & Extraction
Requires a human domain expert to define and engineer the features used for classification.
Looks at all the raw data in a fully autonomous manner.
A Fraction of Available Data is Analyzed
By converting the data into a small vector of features, e.g. statistical correlations, it inevitably ignores most of the data.
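To make this concrete, here is a minimal sketch of what hand-crafted feature extraction looks like in practice. The specific features (file size, byte entropy, printable-byte ratio) are illustrative choices, not any particular vendor's feature set; the point is that an arbitrarily large file collapses to a handful of expert-chosen numbers, and everything else is thrown away.

```python
import math
from collections import Counter

def handcrafted_features(data: bytes) -> list[float]:
    """Reduce an arbitrary file to a tiny, expert-chosen feature vector."""
    counts = Counter(data)
    n = len(data) or 1
    # Shannon entropy of the byte distribution: one number for the whole file.
    entropy = -sum((c / n) * math.log2(c / n) for c in counts.values())
    # Fraction of printable ASCII bytes: another single summary statistic.
    printable = sum(c for b, c in counts.items() if 32 <= b < 127) / n
    return [float(len(data)), entropy, printable]

# A ~1 MB file collapses to just 3 numbers; the rest of the data is discarded.
sample = bytes(range(256)) * 4096
print(handcrafted_features(sample))
```

However sophisticated the classifier that consumes this vector, it can only ever reason about the three quantities the expert chose to compute.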
Processes 100% of Available Raw Data
One of the major strengths of deep learning is the massive number of characteristics from the raw data that it processes to obtain a decision.
Limited in its Scalability
Although machine learning can scale across diverse datasets, there is an information threshold beyond which additional training data does not provide any further accuracy.
Improves With More Exposure
The deep neural network continually improves as the training dataset grows; it is the only method that benefits from scaling to hundreds of millions of training samples.
Limited File Types Are Covered (Only PE)
Today, only PE files are supported. Because the feature extraction process is time- and cost-intensive, it is difficult to extend coverage to other file types.
Coverage for Most File Types
Deep learning is input-agnostic, and therefore not file type dependent. This allows deep learning to be easily applied without requiring substantial modifications or adaptations.
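The input-agnostic claim can be illustrated with a sketch of a raw-byte input pipeline in the spirit of MalConv-style models. The fixed length, padding token, and function name below are assumptions for illustration; the takeaway is that any file, whatever its format, is reduced to the same fixed-length integer sequence, with no per-format parser required.

```python
import numpy as np

def bytes_to_model_input(data: bytes, max_len: int = 4096) -> np.ndarray:
    """Turn ANY file (PE, PDF, script, ...) into the fixed-length integer
    sequence a raw-byte deep model consumes. Values 0-255 are byte values;
    256 is a hypothetical padding token for files shorter than max_len."""
    arr = np.frombuffer(data[:max_len], dtype=np.uint8).astype(np.int64)
    pad = np.full(max_len - len(arr), 256, dtype=np.int64)
    return np.concatenate([arr, pad])

# The same two lines handle a PE header and a PDF header identically.
pe_like = bytes_to_model_input(b"MZ\x90\x00")
pdf_like = bytes_to_model_input(b"%PDF-1.7")
print(pe_like.shape, pdf_like.shape)
```

Because the pipeline never interprets the format, extending coverage to a new file type requires no feature-engineering work, only more labeled samples of that type.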
High Level of False Positives
Due to the feature selection approach, these models routinely misidentify benign files as malicious (false positives), resulting in a significant and unnecessary resource drain.
Low Level of False Positives
As the deep learning algorithm analyzes 100% of the data and is not subject to human error, false positives are dramatically diminished.
Traditional machine learning uses engineered features. These can be easily modified by attackers to bypass the AI model, as has been documented with commercial Next-Gen AVs.
End-to-end deep learning models, using raw features such as raw byte content, are more robust and resilient to adversarial attacks.
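A toy example shows why engineered features are easy to game. An entropy threshold is a hypothetical stand-in for any hand-crafted feature: an attacker who appends inert low-entropy padding (bytes the loader never executes) can drag the file-level entropy far below a detection threshold without changing the payload at all.

```python
import math
from collections import Counter

def byte_entropy(data: bytes) -> float:
    """Shannon entropy of a file's byte distribution, in bits per byte."""
    counts = Counter(data)
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# A packed/encrypted payload has near-maximal entropy (~8 bits/byte)...
payload = bytes(range(256)) * 64
# ...but appending inert zero-byte padding drags the file-level entropy
# feature down, slipping under a hypothetical "high entropy = packed" rule.
evasive = payload + b"\x00" * (len(payload) * 4)
print(byte_entropy(payload), byte_entropy(evasive))
```

A model reading the raw bytes end to end still sees the unchanged payload region, which is the intuition behind the robustness claim above.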