“Responsibility in Industry” panel – 22 September 2021

The need for responsibility in industry becomes clearer by the day. While data science, computer science, and advanced statistics make possible a wealth of new innovation, we are becoming more aware of the social and societal costs. These can be as direct as environmental impacts, such as the energy and physical hardware these systems consume, or as unintended yet deeply damaging as discrimination and bias. The RTI’s Student Network recently held a “Responsibility in Industry” panel event, with excellent input from the chair, Professor Marina Jirotka, and the speakers, Alison Berthet, Jack Stilgoe, and Maria Axente. The event focused on what it means to be responsible in industry in the context of AI: how can companies today meaningfully take on responsibility for their role in the AI lifecycle, and what might they already be doing?

Key to the discussion was the acknowledgement that responsibility is an iterative process, one that requires commitment from both companies themselves and the governance ecosystem surrounding them. Beyond accountability for accidents, companies must look at the broader, systemic impacts of their own work and that of their industry partners, particularly on vulnerable groups. There is an underlying need for a consistent and dedicated approach to AI ethics and human rights across industry in order to raise standards and protect societal interests. How, then, can companies meaningfully incorporate responsible innovation, design, and development?

First, companies need to show their commitment to the cause by investing time, training, and leadership in responsible innovation. This starts with creating clear internal governance and accessible guidelines, developed with a variety of stakeholders: potentially impacted groups, ethics experts, and designers, to name a few. Companies must make clear how they define their ethical approach: what they will and will not do in developing their product. This may include restrictions on how the product can be used, as with safety-critical products, and on who is allowed to purchase it, for products that could be turned to social harms. In upholding their commitment to responsibility, companies need to foster and encourage whistleblowing both within and outside the company, holding themselves accountable not just to regulators but to the public. Companies must also invest in adequate training across the organisation, rather than relying on individual teams such as design or legal to integrate an ethical approach on their own. And companies must continually reassess their impact and practices.

Second, companies must consider the entirety of the AI lifecycle. Stakeholders and actors across the industry, both pre- and post-production, need to consider how the product could affect, and is affecting, wider groups: both users and the systems they live within. A responsible AI ecosystem does not begin and end with a single company; it must be reinforced at every step, from data gathering, to design and scoping, to training and testing, through deployment and beyond. It is critical that companies reassess both their impacts and their processes not only internally, but by demanding accountability from their AI lifecycle partners. For example, a company could require that the training data it purchases has been lawfully and ethically obtained with full consent from the data subjects, and further ensure that, in use, the data cannot be traced back to an individual or group. This process is an ongoing one, and must not stop at an initial or cursory stage.
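To make that last example concrete, such requirements can be enforced mechanically at the point where purchased data enters a pipeline. The minimal Python sketch below is purely illustrative: the schema (the consent and source fields) and the APPROVED_SOURCES list are our own assumptions, not anything prescribed by the panel, and dropping a direct identifier is only a first step toward non-traceability, not a guarantee of it.

```python
from dataclasses import dataclass

@dataclass
class Record:
    """One purchased training record plus its provenance metadata (hypothetical schema)."""
    subject_id: str  # direct identifier: must never reach the training set
    consent: bool    # did the data subject consent to this use?
    source: str      # vendor the record was obtained from
    features: dict   # the actual training payload

# Vendors whose collection practices have passed an audit (illustrative names).
APPROVED_SOURCES = {"vendor-a", "vendor-b"}

def ingest(records):
    """Admit only consented records from approved sources, stripped of identifiers."""
    admitted, rejected = [], []
    for r in records:
        if r.consent and r.source in APPROVED_SOURCES:
            # Keep only the payload, so the direct identifier is dropped.
            # Real de-identification needs more (e.g. quasi-identifier checks);
            # this gate is a first step, not a privacy guarantee.
            admitted.append(dict(r.features))
        else:
            rejected.append(r.subject_id)  # flag for follow-up with the vendor
    return admitted, rejected

# Example: one record is admitted, one is rejected for lacking consent.
records = [
    Record("u-001", True, "vendor-a", {"age_band": "30-39", "clicks": 12}),
    Record("u-002", False, "vendor-a", {"age_band": "40-49", "clicks": 3}),
]
train_set, follow_up = ingest(records)
```

The design point is that the check happens at ingest, before any training run, so accountability is demanded of the lifecycle partner up front rather than audited after harm has occurred.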

Third, both regulators and industry must strive to create a culture of responsibility. For the regulator, this involves cultivating a balance between innovation and impact, considering who will benefit from a technology as well as how potential harms may be distributed, and incentivising responsible development, at the very least by disincentivising irresponsible behaviour. For companies, this involves a willingness to enter into dialogue with regulators, to be openly accountable to the public, and to cease harmful practices such as hype marketing, so that a practical and practicable understanding of the technology in question can be built. All participants in the AI ecosystem must act to redraw the boundaries of responsibility as new impacts are discovered: we must not only redress the harm they cause, but act to prevent its recurrence.

Overall, we are beginning to see companies in AI industries take notice of the impact and importance of responsible development. But we are still in the early days, and building a culture of responsibility and a consistent ethical approach across the industry requires a great deal of investment. Yet it is just that, an investment: an opportunity for companies to build trust, create better products, and foster a more equitable marketplace. Each step forward in responsible industry saves a step back spent redressing harm down the road, and today’s markers of success become tomorrow’s baseline for development.