Workplace use of facial recognition and fingerprint scanning

February 2024

Just because you can use biometric data doesn’t mean you should

The use of biometric data is escalating, and recent enforcement action by the UK Information Commissioner’s Office (ICO) concerning its use for workplace monitoring is worth taking note of. We share 12 key steps to take if you’re considering facial recognition, fingerprint scanning or other biometric systems.

In a personal context, many of us use fingerprint or iris scans to open our smartphones or laptops. In banking, facial recognition, voice recognition, fingerprint scans and retina recognition have become commonplace for authentication and security purposes. The UK Border Force is set to trial passport-free travel using facial recognition technology. And increasingly, organisations are using biometrics for security or employee monitoring purposes.

Any decision to use biometric systems shouldn’t be taken lightly. If biometric data is being used to identify people, it falls under the definition of Special Category Data under UK GDPR. This means there are specific considerations and requirements which need to be met.

What is biometric data?

Biometric data is special category data whenever you process it for the purpose of uniquely identifying an individual. To quote the ICO:

Personal information is biometric data if it:

  • relates to someone’s physical, physiological or behavioural characteristics (e.g. the way someone types, a person’s voice, fingerprints, or face);
  • has been processed using specific technologies (e.g. an audio recording of someone talking is analysed with specific software to detect qualities like tone, pitch, accents and inflections); and
  • can uniquely identify (recognise) the person it relates to.

Not all biometric data is classified as ‘special category’ data, but it is when you use it, or intend to use it, to uniquely identify someone. It will also be special category data if, for example, you use it to infer other special category data, such as someone’s racial/ethnic origin or information about their health.

Special category data requirements

There are key legal requirements under data protection law when processing special category data. In summary:

  • Conduct a Data Protection Impact Assessment.
  • Identify a lawful basis under Article 6 of UK GDPR.
  • Identify a separate condition for processing under Article 9; there are ten different conditions to choose from, and your lawful basis and special category condition do not need to be linked.
  • Five of the special category conditions require additional safeguards under the UK’s Data Protection Act 2018 (DPA 2018).
  • In many cases you’ll also need an Appropriate Policy Document in place.

Also see the ICO Special Category Data Guidance.

ICO enforcement action on biometric data use in the workplace

The Regulator has ordered Serco Leisure and a number of associated community leisure trusts to stop using Facial Recognition Technology (FRT) and fingerprint scanning to monitor workers’ attendance. They’ve also ordered the destruction of all biometric data which is not legally required to be retained.

The ICO’s investigation found the biometric data of more than 2,000 employees at 38 leisure centres was being unlawfully processed for the purpose of attendance checks and subsequent payment.

Serco Leisure was unable to demonstrate why it was necessary or proportionate to use FRT and fingerprint scanning for this purpose. The ICO noted there are less intrusive means available, such as ID cards and fobs. Serco Leisure said these methods were open to abuse by employees, but no evidence was produced to support this claim.

Crucially, employees were not proactively offered an alternative to having their faces and fingers scanned. It was presented to employees as a requirement in order to get paid.

Serco Leisure conducted a Data Protection Impact Assessment and a Legitimate Interests Assessment, but these fell short when subject to ICO scrutiny.

Lawful basis

Serco Leisure identified their lawful bases as contractual necessity and legitimate interests. However, the Regulator found the following:

1) While recording attendance times may be necessary to fulfil obligations under employment contracts, it doesn’t follow that the processing of biometric data is necessary to achieve this.

2) Legitimate interests will not apply if a controller can reasonably achieve the same results in another less intrusive way.

Special category condition

Serco Leisure had not initially identified a condition before implementing the biometric systems. It later relied on the condition for employment, social security and social protection purposes, citing Regulation 9 of the Working Time Regulations 1998 and the Employment Rights Act 1996.

The ICO found the special category condition chosen did not cover processing to purely meet contractual employment rights or obligations. Serco Leisure also failed to produce a required Appropriate Policy Document.

Read more about this ICO enforcement action.

12 key steps when considering using biometric data

If you’re considering biometric systems which will be used to uniquely identify individuals for any purpose, we’d highly recommend taking the following steps:

1. DPIA: Carry out a Data Protection Impact Assessment.

2. Due diligence: Conduct robust due diligence of any provider of biometric systems.

3. Lawful basis: Identify a lawful basis for processing and make sure you meet the requirements of this lawful basis.

4. Special category condition: Identify an appropriate Article 9 condition for processing special category biometric data. The ICO says explicit consent is likely to be most appropriate, but other conditions may apply depending on your circumstances.

5. APD: Produce an Appropriate Policy Document where required under DPA 2018.

6. Accuracy: Make sure biometric systems are sufficiently accurate for your purpose, and test for and mitigate bias. For example, bias and inequality may be caused by a lack of diverse training data, or by bugs and inconsistencies in biometric systems.

7. Safeguards: Consider what safeguards will be necessary to mitigate the risks of discrimination and of false acceptance and false rejection (see the sketch after this list).

8. Transparency: Consider how you will be open and upfront about your use of biometric systems. How will you explain this in a clear, concise and easy-to-access way? If you are relying on consent, you’ll need to clearly tell people what they’re consenting to, and consent will need to be freely given. See also: Consent: Getting it Right.

9. Privacy rights: Assess how people’s rights will apply, and have processes in place to recognise and respond to individual privacy rights requests.

10. Security: Assess what security measures will be needed by your own organisation and by any biometric system provider.

11. Data retention: Assess how long you will need to keep the biometric data. Have robust procedures in place for deleting it when no longer required.

12. Documentation: Keep evidence of everything!
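
To illustrate steps 6 and 7, here is a minimal, hypothetical sketch (in Python) of how false acceptance and false rejection rates could be compared across demographic groups, using records from a pilot or from a provider’s testing. The record fields, group labels and figures are illustrative assumptions rather than output from any real system, and the group labels would themselves need careful data protection handling.

```python
# Hypothetical sketch: comparing false acceptance/rejection rates across groups.
# Assumes per-attempt test records with a group label, whether the attempt was
# genuine (the person really is who they claim to be) and whether it was accepted.
from collections import defaultdict

def error_rates_by_group(attempts):
    counts = defaultdict(lambda: {"fa": 0, "impostors": 0, "fr": 0, "genuine": 0})
    for attempt in attempts:
        c = counts[attempt["group"]]
        if attempt["genuine"]:
            c["genuine"] += 1
            if not attempt["accepted"]:
                c["fr"] += 1  # genuine user wrongly rejected
        else:
            c["impostors"] += 1
            if attempt["accepted"]:
                c["fa"] += 1  # impostor wrongly accepted
    return {
        group: {
            "false_rejection_rate": c["fr"] / c["genuine"] if c["genuine"] else None,
            "false_acceptance_rate": c["fa"] / c["impostors"] if c["impostors"] else None,
        }
        for group, c in counts.items()
    }

# Made-up example records; a large gap between groups would be a finding to raise
# with the provider and to record in the DPIA.
sample = [
    {"group": "A", "genuine": True, "accepted": True},
    {"group": "A", "genuine": True, "accepted": False},
    {"group": "B", "genuine": True, "accepted": True},
    {"group": "B", "genuine": False, "accepted": True},
]
print(error_rates_by_group(sample))
```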

More detail can be found in the ICO Biometric Data Guidance.

Workplace monitoring – justified or intrusive?

October 2023

Almost one in five people believe they’ve been monitored by an employer, and would be reluctant to take a new job if they knew they were going to be monitored. Research commissioned by the UK’s Information Commissioner’s Office (ICO) also shows 70% of the public believe it’s intrusive to be monitored in the workplace.

However, the research also shows workers generally understand employers might carry out checks on the quality and quantity of their work. Similarly, they appreciate the necessity of monitoring for health and safety reasons, or to meet other regulatory requirements.

There are plenty of reasons why employers might want to monitor staff: to check they’re working, to detect and prevent criminal activity, to ensure policy compliance, and for safety and security reasons.

With more people working from home and advances in technology, there are multiple options for employers seeking to monitor their workforces:

  • Camera surveillance, including body worn cameras
  • Webcams and screenshots
  • Monitoring timekeeping or access control
  • Keystroke monitoring
  • Internet tracking for misuse
  • Covert audio recording

I’ve even heard of AI which runs sentiment checks on emails, scanning language to detect content that might be discriminatory, bullying or aggressive. Personally, I find this terrifying. Imagine if this technology had been available during the ‘Reds under the bed’ paranoia of 1950s America, or indeed 1930s Germany.

The fundamental question is this – just because you can monitor staff, should you?

The ICO has recently published guidance: Employment practices and data protection – monitoring workers. Emily Keaney, Deputy Commissioner – Regulatory Policy at the Information Commissioner’s Office, says; “While data protection law does not prevent monitoring, our guidance is clear that it must be necessary, proportionate and respect the rights and freedoms of workers. We will take action if we believe people’s privacy is being threatened.”

Summary of workplace monitoring considerations

1. Is your workplace monitoring lawful, fair and transparent?

To be lawful you need to identify a lawful basis under UK GDPR and meet relevant conditions. Remember consent would only work where employees have a genuine choice. Often an imbalance of power means consent is not appropriate in an employee context.

To be fair you should only monitor workers in ways they would reasonably expect, and in ways which wouldn’t have unjustified adverse effects on them. The ICO says you should conduct a Data Protection Impact Assessment to make sure monitoring is fair.

To be transparent you must be open and upfront about what you’re doing; monitoring should not routinely be done in secret. Monitoring conducted without transparency is fundamentally unfair. There may, however, be exceptional circumstances where covert monitoring is justified.

2. Will monitoring gather sensitive information?

If monitoring involves special category data, you’ll need to identify a special category condition, as well as a lawful basis.

Special category data includes data revealing racial or ethnic origin, political opinions, religious or philosophical beliefs, trade union membership, genetic and biometric data, data concerning health, and data about a person’s sex life or sexual orientation.

You may not automatically think this is relevant, but be mindful that even monitoring emails, for example, is likely to lead to the processing of special category data.

3. Have you clearly set out your purpose(s) for workplace monitoring?

You need to be clear about your purpose(s) and not monitor workers ‘just in case’ it might be useful. Details captured should not subsequently be used for a different purpose, unless this is assessed to be compatible with the original purpose.

4. Are you minimising the personal details gathered?

Organisations are required not to collect more personal information than they need to achieve their defined purpose(s). This should be approached with care, as many monitoring technologies and methods are capable of gathering more information than is necessary. You should take steps to limit the amount of data collected and retained.

5. Is the information gathered accurate?

The ICO says organisations must take all reasonable steps to make sure the personal information gathered through monitoring workers is not incorrect or misleading, and that people should have the ability to challenge the results of any monitoring.

6. Have you decided how long information will be kept?

Personal information gathered must not be kept for any longer than is necessary, and it shouldn’t be kept just in case it might be useful in future. Organisations must have a data retention schedule and delete any information in line with this. The UK GDPR doesn’t tell us precisely how long this should be; organisations need to be able to justify any retention periods they set.

7. Is the information kept securely?

You must have appropriate organisational and technical measures in place to protect personal information. Data security risks should be assessed, access should be restricted, and those handling the information should receive appropriate training.

If monitoring is outsourced to a third-party processor, you’ll be responsible for compliance with data protection law. Processors will have their own security obligations under UK GDPR.

8. Are you able to demonstrate your compliance with data protection law?

Organisations need to be able to demonstrate their compliance with UK GDPR. This means making sure appropriate policies, procedures and measures are put in place for workplace monitoring activities. As with everything, this must be proportionate to the risks. The ICO says organisations should make sure “overall responsibility for monitoring workers rest at the higher senior management level”.

Monitoring people is by its very nature intrusive; it must be proportionate and justified, and people should in most circumstances be told it’s happening. The overriding message from the ICO is: carry out a Data Protection Impact Assessment if you’re considering monitoring people in the workplace. This should fully explore any impact on people’s rights and freedoms.

Is bias and discrimination in AI a problem?

September 2022

Artificial Intelligence - good governance will need to catch up with the technology

The AI landscape

We hear about the deployment and use of AI in many settings. The types and frequency of use are only going to increase. Major uses include:

  • Cybersecurity analysis to identify anomalies in IT structures
  • Automating repetitive maintenance tasks and guiding technical support teams
  • Ad tech to profile and segment audiences for advertising targeting and optimise advertising buying and placement
  • Reviewing job applications to identify the best-qualified candidates in HR
  • Research scientists looking for patterns in health to identify new cures for cancer
  • Predicting equipment failure in manufacturing
  • Detecting fraud in banking by analysing irregular patterns in transactions.
  • TV and movie recommendations for Netflix users
  • Inventory optimisation and demand forecasting in retail & transportation
  • Programming cars to self-drive

Overall, the different forms of AI will serve to improve our lives, but from a privacy point of view there is a danger that the governance around AI projects is lagging behind the evolving technology.

In that context, tucked away in its three-year plan published in July, the ICO highlighted that AI-driven discrimination might become more of a concern. In particular, the ICO is planning to investigate concerns about the use of algorithms to sift recruitment applications.

Why recruitment applications?

AI is used widely in the recruitment industry. A Gartner report suggested that all recruitment agencies used it for some of their candidate sifting, and the CEO of the US jobs site ZipRecruiter is quoted as saying that three-quarters of submitted CVs are read by algorithms. There is plenty of scope for data misuse, hence the ICO’s interest.

The Amazon recruitment tool – an example of bias/discrimination

The ICO are justified in their concerns around recruitment AI. Famously, Amazon developed its own tool to sift through applications for developer roles. The model was trained on 10 years of recruitment data from an employee pool that was largely male. As a result, the model discriminated against women and reinforced the gender imbalance by downgrading applications that indicated the candidate was a woman.
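
As a deliberately simplified illustration of how this happens (this is not Amazon’s system, and the data below is entirely synthetic), the following Python sketch trains a classifier on historical hiring decisions made in a largely male pool. The model learns to treat gender as a predictor of ‘success’, so two identically skilled candidates get different scores.

```python
# Simplified, synthetic illustration of bias learned from historical hiring data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
gender = rng.integers(0, 2, n)   # 1 = male, 0 = female (synthetic)
skill = rng.normal(size=n)       # the attribute we actually care about
# Historical hiring decisions favoured men regardless of skill:
hired = (skill + 1.5 * gender + rng.normal(scale=0.5, size=n)) > 1.2

model = LogisticRegression().fit(np.column_stack([gender, skill]), hired)

# Two candidates with identical skill, differing only by gender:
print("P(hire | male):  ", round(model.predict_proba([[1, 1.0]])[0, 1], 2))
print("P(hire | female):", round(model.predict_proba([[0, 1.0]])[0, 1], 2))
# The gap exists only because the training data encoded past bias. Simply
# dropping the gender column helps less than you might hope if other
# features act as proxies for it.
```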

What is AI?

AI can be defined as: 

“using a non-human system to learn from experience and imitate human intelligent behaviour”

The reality is that most “AI” applications are machine learning. That is, models are trained on historical data to calculate outcomes. Pure AI is technology designed to simulate human behaviour. For simplicity, let’s call machine learning AI.

Decisions made using AI are either fully automated or with a “human in the loop”. The latter can safeguard individuals against biased outcomes by providing a sense check of outcomes. 

In the context of data protection, it is becoming increasingly important that those impacted by AI decisions should be able to hold someone to account.

You might hear that all the information is in a “black box” and that how the algorithm works cannot be explained. This excuse isn’t good enough – it should be possible to explain how a model has been trained and risk assess that activity. 
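
One practical starting point, sketched below in Python with scikit-learn, is permutation importance: it measures how much each input feature drives a trained model’s predictions, which gives you something concrete to document and risk assess. The model, feature names and data here are illustrative assumptions; real explainability work would go further.

```python
# Illustrative sketch: inspecting which features drive a trained model's output.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 3))   # pretend columns: years_experience, test_score, postcode_code
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.3, size=1000)) > 0

model = RandomForestClassifier(n_estimators=100, random_state=1).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=1)

for name, score in zip(["years_experience", "test_score", "postcode_code"],
                       result.importances_mean):
    print(f"{name:17s} importance: {score:.3f}")
# If a feature you cannot justify - or a likely proxy for a protected
# characteristic - dominates, that belongs in your DPIA and in the
# explanation you give to individuals.
```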

How is AI used? 

AI can be used to make decisions:

1.     A prediction – e.g. you will be good at a job

2.     A recommendation – e.g. you will like this news article

3.     A classification – e.g. this email is spam. 

The benefits of AI

AI is generally a force for good:

1.     It can automate a process and save time

2.     It can optimise the efficiency of a process or function (often seen in factories or processing plants)

3.     It can enhance the ability of individuals – often by speeding up processes

Where do data protection and AI intersect?

An explanation of AI-assisted decisions is required: 

1.     If the decision is made solely by automated means, without any human involvement

2.     If it produces legal or similarly significant effects on an individual – e.g. not getting a job.

Individuals should expect an explanation from those accountable for an AI system. Anyone developing AI models using personal data should ensure that appropriate technical and organisational measures are in place to integrate safeguards into processing. 

What data is in scope?

  • Personal data used to train a model
  • Personal data used to test a model
  • On deployment, personal data used or created to make decisions about individuals

If no personal data is involved at any of these stages, the AI is not in scope for data protection.

How to approach an AI project?

 Any new AI processing with personal data would normally require a Data Protection Impact Assessment (DPIA). The DPIA is useful because it provides a vehicle for documenting the processing, identifying the privacy risks as well as identifying the measures or controls required to protect individuals. It is also an excellent means of socialising the understanding of AI processing across an organisation. 

Introducing a clear governance framework around any AI projects will increase project visibility and reduce the risks of bias and discrimination. 

Where does bias/discrimination creep in?

Behaviour prohibited under the Equality Act 2010 is any that discriminates against, harasses or victimises another person on the basis of any of these “protected characteristics”:

  • Age
  • Disability
  • Gender reassignment
  • Marriage and civil partnership
  • Pregnancy and maternity
  • Race
  • Religion or belief
  • Sex
  • Sexual orientation. 

When using an AI system, you need to ensure, and be able to show, that your decision-making process does not result in discrimination.

Our Top 10 Tips

  1. Ask how the algorithm has been trained – the “black box” excuse isn’t good enough
  2. Review the training inputs to identify possible bias with the use of historic data
  3. Test the outcomes of the model – this seems obvious, but it is not done regularly enough (see the sketch after this list)
  4. Consider the extent to which the past will predict the future when training a model – recruitment models will have an inherent bias if only based on past successes
  5. Consider how to compensate for bias built into the training – a possible form of positive discrimination
  6. Have a person review the outcomes of the model if they are challenged, and give that person real authority to challenge the model’s decision
  7. Incorporate your AI projects into your data protection governance structure
  8. Ensure that you’ve done a full DPIA identifying risks and mitigations
  9. Ensure that you’ve documented the processes and decisions to incorporate into your overall accountability framework
  10. Consider how you will address individual rights – can you easily identify where personal data has been used or has it been fully anonymised? 
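
To illustrate tip 3, here is a minimal Python sketch comparing a model’s selection rates across groups. The group names and figures are synthetic, and the 0.8 ratio threshold (borrowed from the US “four-fifths” rule of thumb) is purely an illustrative assumption, not legal guidance.

```python
# Hypothetical sketch for tip 3: compare the model's selection rates by group.
from collections import Counter

def selection_rates(decisions):
    """decisions: iterable of (group, selected) tuples, selected being a bool."""
    selected, total = Counter(), Counter()
    for group, was_selected in decisions:
        total[group] += 1
        if was_selected:
            selected[group] += 1
    return {group: selected[group] / total[group] for group in total}

# Synthetic outcomes from an imaginary shortlisting model:
decisions = ([("men", True)] * 60 + [("men", False)] * 40
             + [("women", True)] * 35 + [("women", False)] * 65)

rates = selection_rates(decisions)
best = max(rates.values())
for group, rate in rates.items():
    ratio = rate / best
    flag = "  <-- investigate" if ratio < 0.8 else ""
    print(f"{group}: selected {rate:.0%} (ratio vs best group: {ratio:.2f}){flag}")
```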

In summary

AI is complex and fast-changing. Arguably, the governance around the use of personal data is having to catch up with the technology. Even when people believe these models are mysterious and difficult to understand, a lack of explanation for how they work is not acceptable.

In future, clearer good-governance processes will have to develop to understand the risks and to consider ways of mitigating them, so that data subjects are not disadvantaged.