iboss: Reputation Scoring Brings Enhanced Security
It’s no secret that more and more attacks are slipping past traditional security solutions. One only has to look at the rise in ransomware attacks to see how poorly endpoint security solutions are dealing with that particular problem. Much of the fault lies with archaic signature-based detection systems, where the endpoint security product relies on an identifiable signature to determine whether data is malicious in nature.
While signatures are great in theory, they suffer from a significant real-world problem, one that can be summed up simply: you can only identify malicious software if you already know its signature. In today’s connected world, that proves to be a major shortcoming, one that attackers are leveraging by morphing code to avoid signature-based detection. Nowhere is that truer than in the malicious realm of APTs (Advanced Persistent Threats) and ransomware, where code obfuscation and redirection are taking their toll on many a victim.
Fortunately, there is more than one way to detect malicious software and malicious activity, something that can be achieved by using the concept of a reputation score. Although reputation scoring is not a new concept, it is making serious inroads into security platforms and can take many different forms.
For example, there are ways to validate the reputation of users, websites, applications, emails, and so forth, which in turn creates something akin to automatic whitelisting or blacklisting, where suspicious pieces of code are quarantined. For the technology to be effective, it is important to understand how reputation scoring works and how those scores are calculated.
For the most part, reputation scoring systems rely on some form of machine intelligence, or more specifically, AI (Artificial Intelligence) or ML (Machine Learning). Put simply, the system must judge a user (or application) based upon expected interactions and what has occurred in the past. In other words, advanced algorithms are used to learn what is normal behavior and what is not. If a new application tries to access something it should not (a website, an encryption algorithm, etc.), its low reputation (since it is a new application) should prevent the activity and create an alert. Over time, the reputation scoring system learns what activity is expected, and assigns higher reputation levels to those users and processes that have been vetted.
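As a rough illustration, the lifecycle described above can be sketched in a few lines of Python. Everything here is hypothetical: the starting score, thresholds, and increment are invented for the example, and real products derive these with ML rather than fixed constants.

```python
from collections import defaultdict

# Hypothetical thresholds -- real systems tune these dynamically.
BLOCK_THRESHOLD = 30
TRUSTED_THRESHOLD = 70

class ReputationTracker:
    """Toy reputation scorer: new entities start low and earn trust over time."""

    def __init__(self):
        # Unknown applications/users default to a low score of 10.
        self.scores = defaultdict(lambda: 10)

    def record_expected_activity(self, entity):
        # Each vetted, expected action nudges the score upward (capped at 100).
        self.scores[entity] = min(100, self.scores[entity] + 5)

    def check_access(self, entity, resource):
        score = self.scores[entity]
        if score < BLOCK_THRESHOLD:
            return ("block_and_alert", score)  # low reputation: deny and alert
        if score < TRUSTED_THRESHOLD:
            return ("allow_monitored", score)  # middling reputation: allow but watch
        return ("allow", score)                # vetted entity: allow

tracker = ReputationTracker()
print(tracker.check_access("new_app.exe", "crypto_api"))  # brand-new app is blocked
for _ in range(15):
    tracker.record_expected_activity("new_app.exe")
print(tracker.check_access("new_app.exe", "crypto_api"))  # allowed after vetting
```

The key design point is the default: an entity the system has never seen before is treated as untrustworthy until its observed behavior earns it a higher score.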
By combining the collected data on behavior and reputation, an intelligent security system should quickly end most threats, and at the very least, warn a human that unauthorized activity may be occurring. While reputation scoring may not solve all security problems, it does go a long way towards preventing malware or unauthorized users from damaging systems. What’s more, access to websites with a low reputation (e.g. hacking sites, the dark web, and so on) can be blocked, preventing other attack vectors from compromising systems.
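The combination of signals could look something like the sketch below. The site scores, thresholds, and verdict names are assumptions made up for illustration, not the output of any real reputation feed:

```python
# Hypothetical site-reputation feed: scores 0-100, where low means risky.
SITE_REPUTATION = {
    "intranet.example.com": 95,
    "known-hacking-forum.example": 5,
}

def decide(url, user_reputation, behavior_is_anomalous, site_block_threshold=20):
    """Combine site reputation, user reputation, and behavior into one verdict."""
    site_score = SITE_REPUTATION.get(url, 50)  # unknown sites get a neutral score
    if site_score < site_block_threshold:
        return "block"                 # low-reputation destination: block outright
    if behavior_is_anomalous and user_reputation < 50:
        return "block_and_alert"       # unusual activity from an unvetted user
    if behavior_is_anomalous:
        return "alert_human"           # trusted user, odd behavior: warn an analyst
    return "allow"

print(decide("known-hacking-forum.example", 80, False))  # block
print(decide("intranet.example.com", 30, True))          # block_and_alert
print(decide("intranet.example.com", 90, True))          # alert_human
```

Note how the "warn a human" outcome falls out naturally: when the signals conflict (trusted user, anomalous behavior), the system escalates rather than deciding on its own.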
In the security realm, reputation has come to mean everything: it determines whom or what a process should trust, allowing only those elements that can prove their worthiness to access resources.
You can read the original article here.