The ICO has now finalised the key component of its “AI Auditing Framework” following consultation. The Guidance covers what the ICO considers “best practice” in the development and deployment of AI technologies and is available here.
It is not a statutory code and there is no penalty for failing to follow the Guidance. However, there are two good reasons to comply with the Guidance in any event:
- Firstly, the ICO makes clear that it will be relying on the Guidance to provide a methodology for its internal investigation and audit teams.
- Secondly, in most cases where an organisation utilises AI, it will be mandatory to conduct a Data Protection Impact Assessment (DPIA) – and the ICO suggests that your DPIA process should not only comply with data privacy laws generally but also conform to the specific standards set out in the Guidance.
Therefore, it would be advisable for your DPO, compliance and technical teams to pay careful attention to the contents of the Guidance, as the ICO will take the Guidance into account when taking enforcement action.
The Guidance is divided into four sections. We set out a brief summary of the key takeaways from each section as follows:
Accountability and Governance
Accountability issues for AI are not unlike governance issues for other technologies. For example, the ICO suggests that your organisation should set its risk appetite, ensure there is senior buy-in, and ensure that compliance is handled by diverse, well-resourced teams rather than being left to the technologists.
The ICO recommends that a DPIA is carried out. A DPIA must be meaningful and not a box-ticking exercise. It should be carried out at an early stage of product development and show evidence that less risky alternatives to an AI system were considered. The Guidance includes all of the standard elements of a DPIA (as set out in the GDPR) but also adds some interesting specifics. The DPIA should include:
- An explanation of any relevant margins of error in the system's performance which may affect the fairness of the processing;
- An explanation of the degree of human involvement in the decision-making process and at what stage this takes place;
- Assessment of necessity (i.e. evidence you could not accomplish the purposes in a less intrusive way) and proportionality (i.e. weighing the interests of using AI against the risks to data subjects, including whether individuals would reasonably expect an AI system to conduct the processing);
- Documentation of trade-offs (e.g. between data minimisation and statistical accuracy) “to an auditable standard”;
- Consideration of potential measures to mitigate the identified risks.
As best practice, there should be both a “technical” and a “non-technical” version of the DPIA, the latter being used to explain AI decisions to individual data subjects.
The ICO flags that Controller and Processor relationships are a complicated area in the context of AI. However, the final version of the Guidance retreats from specific advice as to characteristics of Controllers, Processors and Joint Controllers. Instead, the ICO will consult on this with stakeholders, with a view to publishing more details in updated Cloud Computing Guidance in 2021.
Lawfulness, Fairness and Transparency
On lawfulness, a different legal basis will likely be appropriate for different “phases” of AI technology (i.e. development vs deployment).
The ICO flags key issues which relate to each different type of basis, in particular:
- Consent – if Article 6(1)(a) of GDPR is relied upon, consent must meet all the requirements of GDPR-standard consent. It may be a challenge to ensure that the consent is specific and informed given the nature of AI technology. Consent must also be capable of being easily withdrawn.
- Contract – if Article 6(1)(b) of GDPR is relied upon, the processing must in practice be objectively necessary for the performance of the contract – which also means there is no less intrusive way of processing data to provide the same service. The ICO adds that this basis may not be appropriate for the purposes of the development of the AI.
- Legitimate Interests – if Article 6(1)(f) of GDPR is relied upon, the “three-part test” should be worked through in the context of a legitimate interests assessment (LIA). Where this basis is used for the development of the AI, the purposes may initially be quite broad, but as more specific purposes are identified, the LIA will have to be reviewed.
On fairness, the Guidance highlights the need to ensure that statistical accuracy (i.e. how often the AI gets the right answer) and risks of bias (i.e. the extent to which the outputs of AI lead to direct or indirect discrimination) are addressed both in development and procurement of AI systems.
On transparency, the ICO refers to its more detailed guidance on transparency, developed alongside the Alan Turing Institute (“Explaining decisions made with AI”), which is available here.
Data Security and Data Minimisation
AI poses new security challenges due to the complexity of the development process and reliance on third parties in the AI ecosystem. In addition to good practice cybersecurity measures (such as ensuring that your organisation tracks vulnerability updates in security advisories), the ICO addresses specific security challenges:
- Development phase: technical teams should record all data flows and consider applying de-identification techniques to training data before it is shared internally or externally. Alternative privacy-enhancing technologies (PETs) can also be considered. There are particular challenges because most AI systems are not built entirely in-house but rely on externally maintained software, which may itself contain vulnerabilities (e.g. the “NumPy” Python vulnerability discovered in 2019).
- Deployment phase: AI is vulnerable to specific types of attack, e.g. “model inversion” attacks, where attackers who already hold some personal data about an individual infer further personal data from how the model operates, and “adversarial” attacks, which involve feeding deliberately crafted false data to compromise the operation of the system. To minimise the likelihood of attack, pertinent questions should be asked about how the AI is deployed, e.g. what information should the end user be able to access – or even (if your organisation developed the AI) should your external third-party client be able to access the model directly, or only through an API?
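To make that last question more concrete, below is a minimal sketch (our own illustration, not drawn from the Guidance) of API-mediated access: the client receives only a coarse decision and a rounded confidence score, never the model artefact itself, which limits what a model inversion attack can learn. The framework (FastAPI), endpoint, field names and placeholder scoring logic are all hypothetical assumptions.

```python
# Illustrative sketch only: expose a model through a narrow API so that clients
# receive a prediction, never the model artefact or its parameters.
# All names (paths, fields) and the scoring logic below are hypothetical.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()


class CreditQuery(BaseModel):
    age: int
    income: float


class CreditResponse(BaseModel):
    decision: str      # coarse output, e.g. "approve" or "refer"
    confidence: float  # rounded to limit what can be inferred about the model


def score(query: CreditQuery) -> tuple[str, float]:
    # Placeholder for the real model call, which stays server-side only.
    raw = 0.5 + 0.001 * (query.income / 1000) - 0.002 * query.age
    prob = max(0.0, min(1.0, raw))
    return ("approve" if prob >= 0.5 else "refer", round(prob, 2))


@app.post("/score", response_model=CreditResponse)
def score_endpoint(query: CreditQuery) -> CreditResponse:
    decision, confidence = score(query)
    return CreditResponse(decision=decision, confidence=confidence)
```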
Data minimisation is also a challenge because AI systems generally require large amounts of data. Nevertheless, the principle still needs to be complied with in both phases:
- Development phase: in the training phase, your organisation needs to consider whether all of the data used is necessary (e.g. not all demographic data about data subjects will be relevant to a particular purpose, such as calculating credit risk) and whether the use of personal data is necessary for the purposes of training the model at all. Statistical accuracy needs to be balanced against the principle of data minimisation, and privacy-enhancing methods, such as the use of “synthetic” data, should be considered (see the sketch after this list).
- Deployment phase: in the inference phase, it may be possible to minimise the data processed, e.g. by converting personal data into less “human readable” formats (such as facial recognition using “faceprints” instead of digital images of faces), or by only processing data locally on the individual’s device.
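By way of illustration of the development-phase point, the sketch below enforces a documented allow-list of features so that a training extract contains no more personal data than is needed for the stated purpose. The column names and the allow-list are hypothetical; in practice the allow-list would be justified and recorded in the DPIA.

```python
# Illustrative sketch only: restrict a training extract to a documented
# allow-list of features so no more personal data is used than necessary.
# Column names are hypothetical; the allow-list would be justified in the DPIA.
import pandas as pd

NECESSARY_FEATURES = ["income", "existing_debt", "repayment_history"]
TARGET = "defaulted"


def minimise_for_training(df: pd.DataFrame) -> pd.DataFrame:
    keep = [c for c in NECESSARY_FEATURES + [TARGET] if c in df.columns]
    dropped = sorted(set(df.columns) - set(keep))
    if dropped:
        # Record what was excluded so the minimisation decision is auditable.
        print(f"Excluding columns not justified for this purpose: {dropped}")
    return df[keep]


if __name__ == "__main__":
    raw = pd.DataFrame({
        "income": [32000], "existing_debt": [4500], "repayment_history": [0.92],
        "postcode": ["AB1 2CD"], "marital_status": ["single"], "defaulted": [0],
    })
    print(minimise_for_training(raw))
```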
Anonymisation may also play an important role in data minimisation in the context of AI technologies. The ICO states that it is currently developing new guidance in this field.
Individual Rights
During the AI lifecycle, organisations will have to consider how to operationalise the ability for individuals to exercise their rights:
- Development phase: it may be challenging to identify a data subject’s personal data within training data, due to the “pre-processing” applied to it (e.g. stripping out identifiers). However, if it remains personal data, your organisation will still have to respond. Where the request relates to data incorporated in the model itself, in certain cases (e.g. where the individual exercises their right to erasure) it may be necessary to erase the existing model and/or re-train it (see the sketch after this list).
- Deployment phase: typically, once deployed, the outputs of an AI system are stored in an individual’s profile (e.g. targeted advertising driven by a predictive model based on a customer’s profile) – which, of course, may be easier to access for compliance purposes. The ICO suggests that requests for rectification are more likely to concern model outputs than training data. Data portability does not apply to inferred data, so it is unlikely to apply to the outputs of AI models.
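As flagged in the development-phase bullet above, the following is an illustrative sketch (not a method prescribed by the ICO) of operationalising an erasure request against a pseudonymised training set: it removes the individual's rows and reports whether anything was removed, so the team can then decide whether the model itself needs to be re-trained or erased. The token scheme and column names are hypothetical.

```python
# Illustrative sketch only: remove one data subject's rows from a pseudonymised
# training set and report whether anything was removed. The token scheme and
# column names are hypothetical assumptions.
import pandas as pd


def erase_data_subject(training_df: pd.DataFrame,
                       subject_token: str,
                       token_column: str = "customer_token") -> tuple[pd.DataFrame, bool]:
    """Return the training set without the subject's rows, plus a removal flag."""
    mask = training_df[token_column] == subject_token
    removed = bool(mask.any())
    return training_df.loc[~mask].reset_index(drop=True), removed


if __name__ == "__main__":
    data = pd.DataFrame({"customer_token": ["t1", "t2"], "income": [32000, 41000]})
    remaining, removed = erase_data_subject(data, "t2")
    print(remaining)
    print("removed:", removed)
```

If rows were removed and the individual's data influenced the model currently in use, the Guidance notes it may also be necessary to re-train or erase that model.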
Automated decision-making requires careful consideration. Article 22 of GDPR will apply, unless there is human input – which must be meaningful and not a “rubber-stamp”. Where AI is used to assist human decision-making (but human input is involved, so it is not solely automated decision-making), the ICO states that your organisation should train the decision-makers to tackle:
- Automation bias (i.e. humans routinely treating the output of a machine as inherently trustworthy and not using their own judgement).
- Lack of interpretability (i.e. outputs are difficult for humans to interpret, so they agree with the recommendations of the system, rather than using their own judgement).
Human reviewers should have the authority to override the output generated by the AI system and should be monitored to check whether they are routinely agreeing with the AI system’s outputs.
Conclusion
The Guidance is concise, focused and pragmatic.
There will be a forthcoming ICO “toolkit” for organisations linked to the Guidance. Whether this includes a suggested framework for an “Enhanced DPIA” remains to be seen, but it would be a welcome addition for DPOs in a fast-moving industry where compliance needs to be proactive rather than reactive.
More articles on AI, including our piece on Artificial Intelligence in Smart Cities, are available on our Business Going Digital microsite.