
Assessing the effectiveness of security controls

Once you have determined which security controls you will apply to the risks that have been identified across your service, you need to verify that they are operating as intended.

This is a follow-up activity to performing a security risk assessment and mitigating security risks. It includes the appraisal of both technical controls (such as access management and firewalls) and administrative controls (including policies and procedures).

A robust controls testing and verification process will allow you to confirm that your security controls are operating as intended throughout the life of the service.

You should test the effectiveness of controls as they are introduced to the service. Ideally this will happen during the design and build phases, so delivery teams can embed effective security measures into the service from the outset. New services or features should not go live until they have been deemed secure.

There should also be routine testing of controls scheduled throughout the service lifecycle, as well as whenever there is a change to the scope of the service or the risk landscape.

Completing this activity will help you to achieve the outcomes included in the Secure by Design principles to design usable security controls, build in detect and respond security, design flexible architectures, minimise the attack surface, defend in depth and embed continuous assurance.

Who is involved

This activity should be carried out by developers and DevOps engineers in collaboration with security and technical architects.

The results of controls testing and verification activity should be reviewed and endorsed by senior leaders including your Senior Responsible Owner (SRO) and service owner as well as your organisation’s security assurance team.

How to assess the effectiveness of security controls

Step 1: Document security control implementation

An output from the mitigating security risks activity should be a risk treatment plan, provided to the delivery team who will apply it to the service. The security controls to be implemented may include both technical controls (such as access management and firewalls) and administrative controls (such as policies and procedures).

Technical teams should follow best practice methods for implementing security controls, documenting what they have done in a repository that can be reviewed and verified. This information should be held securely and only accessed by those updating security controls or conducting security control reviews.
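For instance, implementation details could be documented in a machine-readable form so that reviews and verification can work from a consistent record. Below is a minimal sketch in Python; the record format and field names are illustrative assumptions, not a prescribed schema:

```python
from dataclasses import dataclass, asdict
from datetime import date
import json

# Hypothetical record format for documenting an implemented control.
# Field names are illustrative, not a prescribed government schema.
@dataclass
class ControlRecord:
    control_id: str       # internal reference for the control
    description: str
    control_type: str     # "technical" or "administrative"
    implemented_on: date
    implemented_by: str
    evidence: str         # pointer to where verification evidence is held

record = ControlRecord(
    control_id="AC-01",
    description="Role-based access management for the admin console",
    control_type="technical",
    implemented_on=date(2024, 1, 15),
    implemented_by="platform-team",
    evidence="Peer-reviewed configuration in the infrastructure repository",
)

# Serialise the record for storage in a securely held controls repository.
print(json.dumps(asdict(record), default=str, indent=2))
```

A structured record like this makes it straightforward to check that every control in the risk treatment plan has corresponding implementation evidence.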

Step 2: Monitor and verify security controls

Develop a process to continuously monitor the security controls that have been implemented to assess their effectiveness. Your schedule should use a combination of manual and automated tests, for example reviewing system logs or surveying your delivery team.

This is not an exhaustive list. The checks you put in place should be specific to the controls that have been implemented and proportionate to the level of risk that they are mitigating. Not every test needs to be conducted at the same frequency. For example, you may review system logs daily, but only conduct security surveys with your team annually.
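The idea of running different checks at different frequencies can be sketched as a simple schedule. The check names and intervals below are illustrative assumptions, not a prescribed set:

```python
from datetime import date, timedelta

# Illustrative schedule mixing daily, monthly and annual checks.
CHECKS = {
    "review_system_logs": timedelta(days=1),
    "verify_access_management": timedelta(days=30),
    "team_security_survey": timedelta(days=365),
}

def checks_due(last_run: dict, today: date) -> list[str]:
    """Return the checks whose interval has elapsed since they last ran."""
    return [
        name for name, interval in CHECKS.items()
        if today - last_run[name] >= interval
    ]

last_run = {
    "review_system_logs": date(2024, 1, 30),
    "verify_access_management": date(2024, 1, 10),
    "team_security_survey": date(2023, 6, 1),
}
print(checks_due(last_run, date(2024, 1, 31)))  # → ['review_system_logs']
```

In practice the automated checks themselves would be scheduled by your pipeline or monitoring tooling; the point is that each control's test frequency is recorded explicitly and proportionate to the risk it mitigates.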

Step 3: Report your test results

Create a regular report using the information gathered, including measurable metrics where possible, such as the number of security events or the percentage of vulnerabilities addressed. This will allow trends to be monitored over time, making it easier to spot any anomalies that require escalation.
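As a sketch of this kind of measurable reporting, the functions below compute the percentage of vulnerabilities addressed and flag an unusual jump in security events. The anomaly threshold is an illustrative assumption; a real report would use thresholds agreed with your security assurance team:

```python
def vulnerabilities_addressed_pct(found: int, fixed: int) -> float:
    """Percentage of identified vulnerabilities that have been addressed."""
    return 0.0 if found == 0 else round(100 * fixed / found, 1)

def is_anomalous(history: list[int], latest: int, factor: float = 2.0) -> bool:
    """Flag the latest count if it exceeds `factor` times the historical mean."""
    if not history:
        return False
    return latest > factor * (sum(history) / len(history))

print(vulnerabilities_addressed_pct(found=40, fixed=34))  # → 85.0

monthly_security_events = [12, 9, 14, 11]
print(is_anomalous(monthly_security_events, latest=31))   # → True
```

Tracking the same metrics in every reporting period is what makes trends, and therefore anomalies, visible.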

The results of these assessments should be used to capture residual risks and inform plans for security improvements. The information may need to be relayed back to those who conducted your risk assessment or compiled your risk register if it is determined that the controls put in place are not sufficient to mitigate security risks.

This information will be valuable to senior leaders, including your SRO and service owner, and to your organisation’s security assurance team.

Further reading

This activity is part of the ‘Manage cyber security risks’ stage of Secure by Design, which also includes performing a security risk assessment and mitigating security risks.

Read the Secure by Design activities

The Secure by Design approach will evolve to reflect the needs of government digital services. Your feedback will help us to improve it.

Last update: 31 January 2024