Embedding 'ethical by design' AI and data practices
What we did
Federal government delivery agencies increasingly use the power of data to evolve their services to citizens. They face decisions across a wide range of factors, from collection parameters to access controls, from data mining to algorithmic decision-making, and from robotic process automation to predictive modelling. Data science practices in delivery agencies are therefore not without risk, especially when applied to vulnerable populations. We were engaged to frame, create and help an agency establish a robust ethical and technical operating environment for its teams working with data.
How we did it
We surveyed the existing landscape, which offered a surfeit of policy but little concrete guidance for teams working with data. We also uncovered a widespread belief that ethics and privacy treatments are the same thing.

We used the Cynefin complexity framework to analyse the agency's workstyle preferences and ensure our solutions were tailored for a cultural fit. This shaped how stakeholder interactions, governance models, policies and products were developed, socialised and launched during the assignment.

We researched the convergence of international principles for data and AI ethics before opting to collaborate with the UK, writing local guidance for their existing principles. We also became a pilot partner of the EU's "Trustworthy AI" initiative.

We built a Data Workbook as the primary tool for surfacing tensions around data and ethics. Structured around the data lifecycle, and framed as open, peer-reviewed questions, it provides a logbook for related project decisions. We also created and worked with a new function to coordinate reviews, analyse risks, provide insight and guidance, and build common ways of working across the agency.
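To make the Data Workbook concept concrete, the sketch below models one workbook entry as a lifecycle-stage question with a recorded decision and peer reviewers. All names, stages and fields here are illustrative assumptions, not the agency's actual tool.

```python
from dataclasses import dataclass, field
from enum import Enum

class LifecycleStage(Enum):
    """Assumed data-lifecycle stages; the real workbook may differ."""
    COLLECTION = "collection"
    STORAGE = "storage"
    ANALYSIS = "analysis"
    SHARING = "sharing"
    RETENTION = "retention"

@dataclass
class WorkbookEntry:
    """One open, peer-reviewed question and the project decision it logs."""
    stage: LifecycleStage
    question: str
    decision: str = ""
    reviewers: list = field(default_factory=list)

    def is_reviewed(self) -> bool:
        # A decision only counts once at least one peer has reviewed it.
        return bool(self.decision) and len(self.reviewers) > 0

# Example: logging a collection-stage decision
entry = WorkbookEntry(
    stage=LifecycleStage.COLLECTION,
    question="Could collecting this field disadvantage a vulnerable group?",
)
entry.decision = "Field excluded; proxy risk identified during peer review."
entry.reviewers.append("peer-reviewer-1")
print(entry.is_reviewed())  # True
```

Framing entries as questions rather than checkboxes keeps the workbook open-ended: teams record their reasoning, and the review function can audit decisions across the lifecycle.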