Terah Lyons: AI and diversity – the cultural and societal context behind artificial intelligence



Good morning, everyone. It's great to be here. Thank you so much to the WIRED UK team for having me here today. I'm here to talk to you a little bit about how AI is created and by whom, and how, ultimately, we are all responsible for contributing to the creation of a future in which we all want to live.

Since I'm first up among today's speakers, I'm going to start out by actually defining what AI is. It's getting a lot of attention these days, and I'm going to be talking today a lot about why everybody is talking about it. There's no single, universally agreed-upon definition of AI; in fact, if you locked a bunch of scholars in a room together, you would find yourself in the middle of an hours-long argument, as we have in some of our own work. But some define AI loosely as a computerized system that exhibits behavior commonly thought of as requiring intelligence, much as human minds have, and that is where we're going to leave it for today, for the sake of simplicity.

AI, very importantly, isn't just one technology. It's a constellation of technologies that have enabled new capabilities across many different applications and domains. The current wave of progress and enthusiasm for AI began around 2010, driven by three primary factors that built upon each other: the availability of big data from a number of different sources, including e-commerce, business, governments, and more, which provided raw material for dramatically improved machine learning techniques and algorithms, which in turn rely on the capabilities of more powerful computers. All three of these factors together have produced a powerful new wave of industry. AI is the new infrastructure upon which the modern technological revolution is being built.

Just by way of illustration, this slide shows the incredible transformative power that AI-supported capabilities have had on global industry alone. By 2016, as you can see here, the five largest companies in the world by market cap had all become technology companies, and around that period they all also declared themselves, or have since at least declared themselves, AI businesses explicitly.

AI isn't just relegated to industry, however, though it is true that this handful of companies is deeply ingrained in many of our day-to-day lives. It matters because almost every single one of us here in this room uses AI, or interacts with it, every single day. Most importantly, even beyond the experience of any individual user, we live in a world that is being fundamentally altered by this new technology area being deployed at scale. We use the companies that produce and generate it, and we contribute to it ourselves by being participants in a vast data economy that feeds assessments made by machines about what we want and what we like, how we think, and even what we feel, and even, as I will talk a little bit more about later, how likely we are to do things like commit or recommit a crime, in some cases. We are also citizens in a world that needs to figure out how to effectively govern the power that comes with the vastness of this information and the decisions that AI supports. And that is the focus of what I want to talk to you about today.

There is almost no end to the questions that we currently have about the impacts of artificial intelligence on society, and this handful of recent headlines demonstrates that. We are asking ourselves questions like: how do we ensure that AI technologies are safe, secure, and trustworthy? That they represent principles of justice, fairness, and human rights? That AI is inclusive and doesn't have unintended consequences, so that it supports, instead of marginalizes, the most vulnerable among us? That the value of AI, and all that it generates, will be broadly distributed? In essence: how do we effectively govern AI technologies, and govern around the massively scaled societal impacts that they will bring? To illustrate how important these governance questions have become to the field
and to humanity, here's a list of just a subset of the organizations that have worked to develop and adopt their own principles for responsible artificial intelligence development, just in the last couple of years, during which this topic has really exploded. It's in tiny font, which I apologize for, but it really is just to illustrate the volume. These organizations span governments and multilateral institutions, companies, and NGOs, and this list isn't even comprehensive: for example, the European Commission just put out its own set of guidelines with a high-level expert group, as did the Beijing Academy of Artificial Intelligence. And if you squint, you'll notice that one of the earliest of these efforts was actually from the Partnership on AI, which was formulated back in 2016 to grapple with some of the toughest questions that the field currently faces.

The Partnership is a broad global coalition of almost 100 organizations spanning industry, civil society, and academia. We are all working together to define what responsible practice looks like in developing and deploying artificial intelligence. The founders of the Partnership were a group of AI research scientists at some of those very large companies I was telling you about before, including Apple, Amazon, IBM, Facebook, Google, and Microsoft. These individuals, in their capacity as scientists, realized that there was an imperative for the AI developer community to collaborate deeply with other organizations and disciplines to identify answers to a set of questions which they were ill-equipped to address by themselves, questions that will shape the ways in which technology, in turn, shapes our world.

We have a set of thematic pillars as an organization, spanning issues ranging from the safety of AI systems to questions of fairness, accountability, and transparency. These are the logos of some of our other partners, although this, too, is not comprehensive. We also have a set of tenets, which were actually the principles to which that catalogue of AI principles was referring. They inform the work that our community conducts at a very high level, but, like many principles developed by many organizations on that list I just displayed, they don't really go so far as to define what technology ethics actually looks like on the ground, every day, in research and product development contexts that affect millions of people every moment. Thus the goal of the Partnership, ultimately, is to move beyond platitudes and toward practice, and to help define what those practices should entail.

I think that one of the most important informing principles for this work, which is often overlooked, is one of our tenets: that AI research and development efforts need to be actively engaged with, and accountable to, a broad range of stakeholders. The statement, as it stands alone, doesn't mean much, but when you think about what it looks like in practice, it takes on a lot of weight. This was the primary premise upon which the Partnership on AI as an organization was founded.

So what does this actually look like? I'm going to tell a story from our recent work that may help to illustrate. Back in late 2018, the state of California in the United States passed a bill called Senate Bill 10, which replaced the money bail system in California with a requirement that counties in the state use algorithmic risk assessment tools to make pretrial detention decisions about defendants in the criminal justice system. Now, as many of you probably know, the US criminal justice system is notoriously flawed, and here are some visualizations that help demonstrate the scale and magnitude of this problem. This slide shows incarceration rates in the United States relative to the OECD and historical baselines; that top line is the United States, compared to many other countries in the world. There's a midpoint with another green line there, which is the US
in 1960. This figure shows US state and federal incarceration rates over the last several decades, and this slide demonstrates US state and federal incarceration rates relative to all reported crimes over the last several decades.

Introducing algorithms as decision-making tools into such a fraught context involves many concerns, including a fundamental philosophical and legal question as to whether or not it is acceptable to make determinations about an individual's liberty based on data about their group. Incarceration overwhelmingly and disproportionately impacts African American communities in the United States, where one of the biggest reasons people don't show up to their court date (a significant variable considered in some risk assessment tools, including many of the kind being considered for use in California) is that they don't have access to transportation or to child care.

This issue caught our community's attention for good reason, and an overwhelming majority of the Partnership's consulted experts agreed that current risk assessment tools are not ready for use in helping to make decisions to detain, or to continue to detain, criminal defendants without an individualized hearing. We set out to define a baseline set of requirements for the use of such tools in the criminal justice context, which we published as a community last month. Broadly, the concerns and risks of such tools fell into the following three categories: issues associated with the validity, accuracy, and bias of the tools themselves; challenges with the interface between the tools and the humans who actually interact with them, including the question of whether judges, for example, can interpret statistical confidence intervals; and questions of governance, transparency, and accountability.

This set of concerns is not merely limited to the California context, either. There are other legal and judicial systems in which the use of data, and where and how that data interacts with justice and with algorithms, is a real challenge. Just two weeks ago, the Law Society here in England and Wales published a report outlining similar concerns in the UK context.

So what does all of this have to do with the accountability that I highlighted before? This type of work isn't possible without a broad range of stakeholders in the room. In this case, we had over 40 institutions work with us on this set of recommendations and report: companies, civil society organisations, and academics from a wide range of disciplines, including statisticians and machine learning experts, criminologists, lawyers and legal scholars, social scientists, policy experts, and advocates. Combined, they all made it possible to understand the technology in context: where and how it operates in day-to-day life, and how it impacts, or in this case how it might impact, the people who are on the front lines of deployment and may have their lives altered irreparably as a result of it.

Society is the place where technology unfolds, and there are structures, institutions, and contextual factors to keep in mind as we are building and deploying it. Bryan Stevenson is the founder of an organization called the Equal Justice Initiative, based in Montgomery, Alabama, which provides legal representation to prisoners who have been wrongly convicted of crimes. He likes to say something which I think rings deeply true in the world of artificial intelligence, especially today: that you cannot be an effective problem solver from a distance. We have to get close to the problems, in other words, that we're trying to solve, and we have to involve the people that they impact, whether we're building health or data or dating apps, talking about online news or retail recommendations, or, as in this case, the criminal justice system. This has never been more important in technology development than it is today, in this cultural, historical, and technological moment. To create true best practices that better inform product and policy development, we need the critical perspectives of those who are impacted by technology joining with those who are creating it. And this is the premise of the work of organizations like the Partnership on AI, and of the hundreds of individuals in our community who we work with every day, to try to build a more inclusive, just, and better world with the promise of technology and humanity combined. Thank you so much. [Applause]
