The Information Commissioner’s Office (ICO) has published its annual report for 2019-20.
Described as covering a “transformative period” for privacy, data protection and broader information rights, the report addresses a range of topical issues, including the use of facial recognition technology and the protection of children online.
Highlights from the report, which covers the 12 months to 31 March 2020, include:
- The Age Appropriate Design Code – introduced by the Data Protection Act 2018 and published in January 2020, the Code will, when it comes into force, help steer businesses towards compliance with current information rights legislation;
- Facial Recognition – the ICO intervened in the High Court case on the use of facial recognition technology by the South Wales Police as part of its work to ensure that the use of this technology does not infringe people’s rights;
- Brexit Implications – guidance for businesses and organisations on data protection and Brexit implementation, to help them comply with the law once the UK leaves the EU;
- Freedom of Information – a new freedom of information strategy was launched, setting out how the ICO will work to create a culture of openness in public authorities. It also commits the ICO to making the case for reform of access to information law.
The report also indicates that during the 2019-2020 period, the ICO:
- Received 38,514 data protection complaints;
- Closed 39,860 data protection cases (up from 34,684 in 2018/19);
- Received 6,367 freedom of information complaint cases;
- Took regulatory action 236 times in response to breaches of legislation including 54 information notices, eight assessment notices, seven enforcement notices, four cautions, eight prosecutions and 15 fines;
- Conducted over 2,100 investigations; and
- Settled a case with Facebook, which had been brought under the Data Protection Act 1998.
The full report can be found on the ICO website.
In addition to the Annual Report, the ICO has also published new guidance on AI and data protection.
The past few years have seen an increase in the use of Artificial Intelligence (AI) in areas such as online retail, banking and healthcare. What ethical issues does it create? How can we be sure the use of AI is lawful?
Although AI offers opportunities that could bring marked improvements for society, shifting the processing of personal data to these complex and sometimes opaque systems comes with inherent risks, and understanding how to assess compliance with data protection principles can be challenging. With this in mind, the ICO has released guidance on artificial intelligence as part of its commitment to enabling good data protection practice in AI.
The guidance contains recommendations on best practice and technical measures that organisations can use to mitigate the risks caused or exacerbated by the use of this technology. It is the culmination of two years of research and consultation by Professor Reuben Binns and the ICO AI team, involving a wide range of stakeholders who provided feedback throughout.
The guidance can be accessed via the ICO website.