Telling Users To ‘Avoid Clicking Bad Links’ Still Isn’t Working
By David C., Technical Director for Platforms Research and Principal Architect – UK National Cyber Security Centre
A New Approach To Cyber Threats
Why Organisations Should Avoid ‘Blame And Fear’, And Instead Use Technical Measures To Manage The Threat From Phishing Links.
Organisations: If It’s Broken, Let’s Fix It
Let’s start with a basic premise: several of the established tenets in security simply don’t work.
One example is advising users not to click on bad links. Users frequently need to click on links from unfamiliar domains to do their job, and being able to spot a phish is not their job. The NCSC carries out and reviews red team operations, and a common observation is that red teamers (and indeed criminals or hostile states) only need one person to fall for a ruse for an attacker to access a network.
We’re even aware of some cases where people have forwarded suspicious emails from their home accounts to their work accounts, assuming that the security measures in place in their organisations will protect them. And once a link in a phishing email is clicked and an attack launches, the stigma of clicking can prevent people reporting it, which then delays the incident response.
So, what if we assume that users will sometimes, completely unintentionally, click on bad links and that when they’re at work, it’s their organisations that are responsible for protecting them?
The Consequences Of ‘Bad Links’ In Organisations
Let’s first consider what happens when someone clicks on a ‘bad link’ in an email. One of two things generally happens:
- the user is persuaded to enter their log-in details into a fake page, so attackers can steal or exploit their credentials (or, with OAuth ‘consent phishing’, the user is tricked into granting a malicious application access to their account), or
- the user downloads a malicious file via a link or attachment, such as a document, executable or script
Note that although browser exploits may also be a consequence of clicking on a bad link, they are less common and generally seen only in high-end attacks. (And if you’ve automated browser patching, only zero-day exploits – which are outside the threat model for most organisations – should be a concern here.)
Mitigating Credential Theft For Organisational Services
Although attackers are very good at designing phishing pages to look genuine, your organisation can entirely mitigate the threat of credential theft by mandating strong authentication across its services, such as device-based passwordless authentication with a FIDO token. Or, if your organisation isn’t ready for passwordless authentication, you can make it much harder for attackers to exploit credentials by setting up multi-factor authentication (MFA). You can then use single sign-on (SSO) for any third-party websites your organisation uses, which gives confidence that controls are widely applied.
For websites outside of your control, encouraging your users to use password managers and allowing autocompletion of passwords in browsers can help. A password manager in a browser shouldn’t provide a password for an incorrect site (although a user might still be persuaded to manually enter a password). Employees should also be encouraged to enable MFA on any services they use.
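The protection a password manager offers here comes from exact origin matching: a credential saved for one site is never offered on a lookalike domain, however convincing the page looks to a human. A minimal sketch of that check in Python (the domains and credentials are hypothetical):

```python
from urllib.parse import urlsplit

# Credentials a password manager has stored, keyed by exact origin
# (scheme + host). The site and credential here are made-up examples.
VAULT = {
    "https://mail.example.com": ("alice", "correct horse battery staple"),
}

def credentials_for(page_url: str):
    """Offer a credential only when the page's origin exactly matches
    the origin the credential was saved for. A convincing lookalike
    domain fails this check, so nothing is auto-filled."""
    parts = urlsplit(page_url)
    origin = f"{parts.scheme}://{parts.netloc}"
    return VAULT.get(origin)  # None for any unrecognised origin

# The real site matches; the phishing lookalike (examp1e) does not.
assert credentials_for("https://mail.example.com/login") is not None
assert credentials_for("https://mail.examp1e.com/login") is None
```

This is why autocompletion helps: the comparison is mechanical and byte-exact, where a user under time pressure is comparing domains by eye.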
Organisations can also reduce the risk of credential abuse by making sure that only managed organisational devices can access resources, or by blocking OAuth consent grants to arbitrary applications at the cloud tenancy level (although note that this requires users to request that an application is enabled for OAuth integrations).
Mitigating Malicious Downloads Through Defence In Depth
In the other common attack method where a user downloads a malicious file through a link or attachment, files can be directly exploitable (like an executable), or they might be files that allow execution, such as Microsoft Office macros.
Attackers also use layers of different files to sneak past other controls, perhaps encrypting a zip file, or using a file users aren’t familiar with, like a .iso disk image.
It’s harder to prevent successful attacks from files like this, but it is possible. Organisations have the power to put in place technical measures that reduce the responsibility on a user. By implementing the enterprise-level actions below, it’s possible to greatly reduce the chance of successful attacks on your network.
Let’s break the measures down into three stages.
1. Preventing delivery of the phishing email:
- use email scanning and web proxies to help remove some threats before they arrive
- DMARC and SPF policies can significantly reduce delivery of spoofed emails to users
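To illustrate the second point: SPF and DMARC are both published as DNS TXT records on the sending domain. A minimal sketch for a hypothetical domain follows – the hosts and addresses are placeholders, and a real deployment would typically start with a monitoring policy (`p=none`) and review the reports before tightening to `quarantine` or `reject`:

```dns
; SPF: only the domain's MX hosts and the listed address may send as example.org
example.org.         IN TXT "v=spf1 mx ip4:203.0.113.25 -all"

; DMARC: quarantine mail that fails SPF/DKIM alignment, and send
; aggregate reports to the security team for review
_dmarc.example.org.  IN TXT "v=DMARC1; p=quarantine; rua=mailto:dmarc-reports@example.org"
```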
2. Preventing execution of initial code:
- put in place allow-listing to make sure that executables can’t run from any directory to which a user can write – this will prevent a significant number of attacks
- for anything not covered in allow-listing, use registry settings to ensure that dangerous scripting or file types are opened in Notepad and not executed – for PowerShell, you can minimise risk by using PowerShell constrained mode and script signing
- disable the mounting of .iso files on user endpoints
- make sure that macro settings are locked down (see the NCSC’s guidance on macro security) and that only users who absolutely need them – and are trained on the risks they present – can use them
- enable attack surface reduction rules
- ensure you update third-party software, such as PDF readers, or even better, use a browser to open such files
- keep up to date with current threats, reading widely about new attack vectors as they emerge
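The allow-listing measure above boils down to a default-deny rule: execution is permitted only from directories the user cannot write to, so a payload dropped into Downloads or a temp folder simply won’t run. A toy Python sketch of that decision follows – real enforcement lives in operating system policy (such as AppLocker or similar), not application code, and the paths are illustrative:

```python
from pathlib import PurePosixPath

# Directories from which execution is allowed. These are managed,
# admin-controlled locations; everything else is denied by default.
# (Illustrative paths only.)
ALLOWED_DIRS = [
    PurePosixPath("/usr/bin"),
    PurePosixPath("/opt/corp-apps"),
]

def may_execute(image_path: str) -> bool:
    """Default deny: allow execution only if the binary sits under
    an allowed (non-user-writable) directory."""
    path = PurePosixPath(image_path)
    return any(path.is_relative_to(d) for d in ALLOWED_DIRS)

# A managed binary runs; malware dropped into a user-writable
# location does not, regardless of what the user clicked.
assert may_execute("/usr/bin/python3")
assert not may_execute("/home/alice/Downloads/invoice.exe")
```

The design point is that the rule never asks “is this file malicious?” – it asks “is this file somewhere a user could have put it?”, which is a question the system can answer reliably.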
3. Preventing further harm:
- allow-listing is again a powerful way to prevent further harm once a malicious file is opened
- DNS filtering tools, such as PDNS (for UK public sector and also the private sector) can block suspicious connections and prevent many early-stage attacks
- organisations can also carry out endpoint detection and response (EDR) and monitoring to look for suspicious behaviour on hosts
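The effect of a protective DNS filter can be sketched as a denylist check before normal resolution: queries for known-bad domains (and their subdomains) are answered with a block instead of a real address, cutting off the malware’s callback. A toy Python version – the domains are made up, and a real service such as PDNS works from large, continuously updated threat feeds:

```python
# Known-bad domains, as a protective DNS service would hold them.
# These names are invented for illustration.
BLOCKLIST = {"evil-payload.example", "phish-login.example"}

def resolve(qname: str) -> str:
    """Block the query if the name, or any parent domain, is listed;
    otherwise hand off to normal resolution (stubbed here)."""
    labels = qname.rstrip(".").split(".")
    for i in range(len(labels)):
        if ".".join(labels[i:]) in BLOCKLIST:
            return "BLOCKED"      # e.g. answer NXDOMAIN or a sinkhole IP
    return "RESOLVE"              # pass through to the upstream resolver

# Subdomains of a listed domain are caught too; ordinary names pass.
assert resolve("cdn.evil-payload.example") == "BLOCKED"
assert resolve("www.example.org") == "RESOLVE"
```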
Does This Mean We Can Stop Training People To Recognise Suspicious Links?
Let’s be clear that if your organisation implements the measures above, and tests and maintains them, it’s likely there will be a significant drop in attackers exploiting your users to gain initial access. But it’s still worth training users to spot suspicious links. Why is this?
- Firstly, because one of the above controls may fail, and so defence in depth is always good.
- Secondly, a determined attacker who is very focused on finding a route into a particular company network may also target users’ personal accounts to get to their end objective. So it’s ideal if users can also spot suspicious emails in their personal accounts, where organisational protections aren’t in place. (This has the added benefit that it also helps protect them against phishing that seeks to steal money or otherwise extort them.)
- And finally, if users can spot suspicious emails and have the mechanisms to report them, it can be a really useful source of intelligence for organisations, throwing light on compromise attempts that otherwise might be missed. This is particularly true for organisations facing greater threats.
Building A Strong Reporting Culture
It’s time for organisations to move away from blame and fear around clicking links; after all, clicking a bad link is almost always unintentional. This means, for example, not running phishing exercises that chastise users for clicking on bad links.
Imagine a scenario where a user isn’t embarrassed to report when they’ve clicked on a malicious link, so they do so promptly, the security team thanks them for their swift action, and then works quickly to understand the resulting exposure. This is a much more constructive sequence of events, and with the added security benefit that an attack is identified early on.
We should also make it easy for users to report suspicious emails, for example by deploying report-an-email add-ins widely.
Usability And Security Can Go Together
This article is a call to encourage organisations to think about the problem from a different perspective. We know that telling users not to click on bad links just isn’t working, so let’s suspend disbelief for a second: what would we do differently if we were actually encouraging users to click links without fear?
We’re not of course, but the point here is that we don’t have to choose between usability and security. Bringing the two together can achieve the right level of security, while also allowing people to get on with their jobs, and without blaming them when things go wrong.