7/9/2014, by Mark Chapman, President & CEO of PhishLine
More and more security awareness professionals are feeling trapped by management who believe the only metric that matters is the infamous “click-through rate” for mock phishing campaigns.
On the surface, the idea is simple. To validate the effectiveness of our training and to measure real-world user behavior, we will send mock phishing emails to our employees. Then, we measure the percentage of users who clicked on the suspicious-looking link in the email. Ultimately, if our security awareness program is effective, the click-through rate should go down over time. Case closed, game over? Not quite.
Unlike many information-security related metrics, everyone, including management, can understand the basic concept of “click-through rates”. It is an appealing metric because of its simplicity and its ability to win buy-in. Adding fuel to this fire are sustained vendor marketing efforts that drive home the idea of the “click-through rate” as the sole “magic metric” indicating the effectiveness of your security awareness program.
So, what is the problem?
The problem is that the “click-through-rate” is rarely a direct and fair measure of training effectiveness because many other factors influence the metric.
Here are a few factors that have a significant impact on “click-through-rates”:
Theme of the phish: Is this a run-of-the-mill “You’ve won the BIG prize!” theme with an overtly obvious sending domain, or is it a directed, timely, and polished spear phish?
Design level: Was this a simple, bland text only message or is it a marketing-quality message addressed to the recipient by name with images and design that enhance the authenticity profile?
Timing: Consider the times or seasons where a particular phish will provide a different click-through-rate just by the nature of the message itself. Is a Valentine’s Day E-Card hoax more enticing and inviting around the holiday itself or around Independence Day?
Message rate: Did all the messages go out at once and trigger an onsite, “cubicle frenzy” in which colleagues abandon productivity to warn one another not to click the link we just received because “it’s just another one of those tests”?
Attack vector: Was the mock social engineering exercise based on email, SMS-text, voice, portable media or a combination?
Repetition: Have you sent the same message in the past or is this something new?
There are countless factors that weaken the relationship between the raw “click-through rate” and the effectiveness of your security training initiatives. The hard part is deciding what to do about it.
What are acceptable risk levels?
The click-through-rate metric poses great challenges with respect to defining acceptable risk targets. One can easily argue that it is not possible to drive the metric to 0% since, given the chance, some users will always click. So, for an actionable metric, what does success look like? Perhaps benchmarking and trend line reporting can help paint a useful picture using the raw numbers.
It is reasonable to assume that as you lower the click-through rate, you lower your likelihood of risk. (For this blog, we will assume that all phishing attacks have the same potential risk impact. That, of course, is not true, and is yet another problem with the “magic metric”.)
Even though a 10% click-through rate is arguably “better” than a 20% rate, is the 10% literally twice as good as the 20% in terms of raw numbers with respect to your organization’s true information security posture? This leads to rhetorical questions like “How many holes does it take to sink a ship?” or, “How many technical vulnerabilities does it take to increase the likelihood of a hacking attempt?” There must be a way for us to elevate the “click-through rate” to provide actionable contextual insights rather than treating it as a direct metric.
Can I learn more through contextual relative risk?
One of the best things to do is to provide context to the raw metrics by performing objectives-based A|B testing. For example, send out two versions of a phish. The first could have spelling errors, grammar errors or other hallmarks of phishing. The second would be similar, but would not have the same characteristics. Half of your employees get one phish, the other half gets the other.
Within this context, the raw “click-through-rate” is not as important as the ratio. If the first phish with spelling errors had a 20% click-through-rate and the one without spelling errors had a 40% rate, you might conclude that your employees are half (20/40=1/2) as likely to click on emails with spelling errors. On the other hand, if the ratios were (30/30=1/1), or (40/20=2/1), you may draw different conclusions.
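The ratio arithmetic above can be sketched in a few lines of Python. Everything here is illustrative: the function name and campaign counts are hypothetical examples, not output from any particular platform.

```python
# Illustrative A|B ratio calculation for two mock phishing campaigns.
# All counts below are hypothetical, matching the 20%/40% example above.

def click_rate(clicks, recipients):
    """Fraction of recipients who clicked the mock phishing link."""
    return clicks / recipients

# Phish A: obvious spelling/grammar errors; Phish B: clean, polished copy.
rate_a = click_rate(20, 100)   # 20% click-through rate
rate_b = click_rate(40, 100)   # 40% click-through rate

# The contextual ratio: how likely users are to click the error-laden
# phish relative to the polished one (smaller = errors are noticed more).
ratio = rate_a / rate_b
print(f"Spelling-error click ratio: {ratio:.2f}")  # 0.50 -> half as likely
```

The point of the sketch is that the output you report is the ratio, not either raw rate on its own; the same ratio could arise from a 2%/4% split or a 20%/40% split.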
From a benchmarking or trending perspective, stating things like “in our industry, the spelling mistake click-through ratio is 2/1, but for us it is 3/1” may be much more useful than, “in our industry, the click through rate is 41.2% vs us at 32.4%”.
How to use contextual risk to take reasonable actions?
The most important step then becomes the course of action you choose based on the discoveries. In this scenario, you might decide to adjust the curriculum to emphasize “watch for spelling and grammar errors”, because you have metrics that demonstrate the relationship between grammar and user susceptibility to social engineering threats. Instilling a context-based approach at the human layer allows you to continuously re-prioritize your curriculum based on facts, metrics, and observations specific to your organizational culture.
You gain even more influence by sharing these results as part of your awareness initiatives: “We performed an A|B test and found that, as an organization, we are doing a relatively good job by not clicking on emails with grammar errors, but here’s where we need to improve and focus.”
People crave details on how they are performing and want to know how something affects them personally. Organizations are generating waves of momentum by taking their security awareness efforts from general and conceptual to contextual and factual. In essence, they are telling their organization’s information security story backed by facts and discoveries. This approach has an impact.
What other contexts are useful?
Objectively, you should do A|B testing to see if your training and awareness materials themselves make any impact on desired behaviors. Simply look at the relative click-through-rates for those who had online training versus those who attended a class versus those who had no formal training. Did your latest awareness program make a statistically significant difference? How long did it take before the benefits diminished? Should you be spending resources elsewhere?
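One standard way to answer the “statistically significant” question for two cohorts is a two-proportion z-test. The sketch below is a minimal, self-contained version using only the standard library; the cohort sizes and click counts are invented for illustration.

```python
import math

def two_proportion_z_test(clicks_a, n_a, clicks_b, n_b):
    """Two-sided z-test for a difference between two click-through rates."""
    p_a, p_b = clicks_a / n_a, clicks_b / n_b
    pooled = (clicks_a + clicks_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical cohorts: 200 users who completed online training vs.
# 200 users with no formal training.
z, p = two_proportion_z_test(30, 200, 56, 200)
print(f"z = {z:.2f}, p = {p:.4f}")
if p < 0.05:
    print("Difference is statistically significant at the 5% level.")
```

With small cohorts or low click counts, the difference between a 10% and a 14% rate often is not significant at all, which is exactly why the raw comparison can mislead.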
Imagine what you can learn by implementing A|B testing based on user risk profile. For example, are your Data Loss Prevention (DLP) violators more or less likely to click than others? What about people who recently called the help desk for a password reset?
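A segment-level view like this reduces to computing each cohort's rate relative to a baseline. The sketch below assumes hypothetical segments and counts (the names `dlp_violators`, `password_resetters`, and the numbers are invented for illustration).

```python
# Hypothetical per-segment mock phishing results; segment names and
# counts are invented for illustration, not real campaign data.
segments = {
    "dlp_violators":      {"clicks": 18, "recipients": 60},
    "password_resetters": {"clicks": 12, "recipients": 80},
    "baseline":           {"clicks": 20, "recipients": 200},
}

baseline_rate = segments["baseline"]["clicks"] / segments["baseline"]["recipients"]

# Relative risk: each segment's click rate divided by the baseline rate.
for name, data in segments.items():
    rate = data["clicks"] / data["recipients"]
    print(f"{name}: rate={rate:.0%}, relative risk={rate / baseline_rate:.1f}x")
```

A segment clicking at three times the baseline rate is a far more actionable finding than an organization-wide average, because it tells you where to focus follow-up training.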
Again, with this approach in place, you will find the raw “click-through rate” becomes less important and less interesting than the contextual ratio. Through this context-driven approach, you will start to discover non-training-related factors that influence your organization’s security posture and drive change through a level of engagement that is downright effective.
When done in a thoughtful manner, you become equipped with a new set of results-based metrics to share with upper management. “In the last 3 months, we learned 4 things about our environment and we adjusted our training program as a result of these discoveries. In addition, we are working with other parts of the organization to improve our security posture through X, Y, and Z and here is the impact it’s having on the security posture of our organization.”
Where do I go from here?
While we are all fans of “actionable metrics”, sometimes a single metric takes on a life of its own. Be sure to understand and appreciate the value of context; by doing so, you will be in a position to leverage the power of objectives-based perspectives and drive real change in your organization.
In the end, you should feel empowered by metrics, not trapped.