Ethical Responsibility in the Fourth Industrial Revolution

Hello everyone, really glad that you're here; really appreciate your coming out for a great Dreamforce. We have a fantastic panel to hear today, full of thought leaders from across civil society. But really, this panel, it's about them, but it's really about you. You are the leaders that we're looking to to shape the world we're going to live in, and we really want to ask the hard questions about what that world is going to look like. You know, we are part of this fourth Industrial Revolution, where everything is connected to everything else, and there are some profound questions being asked, and we've got the right panel to at least think across silos, to get new thinking and a breath of fresh air into the dialogue. So with that, I'm going to introduce our panel, who have come from all around the world, really: we have someone from Europe, we have someone from New York. So why don't you all come on out. Thank you.

Okay, so we have Terah Lyons from the Partnership on AI; Father Éric Salobir, who is with the OPTIC network; Commissioner Sharon Bowen, formerly of the CFTC, now with Seneca Women; and our very own Richard Socher, who is our chief scientist here at Salesforce. So thank you very much. Okay everyone, thank you for joining us. We really are excited to have you here to talk about AI and machine learning and all of these emerging technologies. Why don't we very quickly go down the line, and talk a little bit about what each and every one of your organizations is doing to provide an ethical, safe, sustainable, inclusive future.

Sure, thanks for having us here. My name is Terah Lyons; I am the executive director of an organization called the Partnership on Artificial Intelligence, and we are a nonprofit organization, actually based here in San Francisco. We're a multi-stakeholder initiative, and we're a consortium comprised of over 70 member institutions spanning all sorts of different sectors and disciplines, coming from civil society and advocacy organizations, large
for-profit industry companies like Salesforce, and a lot of others, and a lot of academic institutions too. So our mission really is part and parcel with the purpose of the conversation today: we are keenly focused on the responsible use and development of artificial intelligence technologies, and working to commission research and generate practice in our member institutions against that mission area.

Great. Father Éric?

So I think I will not try to answer this question at the scale of the whole Catholic Church, because it's a lot, it's big. I mean, the Church is the first provider of education and health care worldwide, and also the institution is facing some key ethical challenges currently. But let's say, to stay in this conversation at the scale of OPTIC: we have two key responsibilities. First one, we are a think tank, so we help people to think different about the impact and ethics of technology. So first we help them to zoom out, to have a full picture of the impact of the implementation of those technologies in society, and we also help them to understand that compliance is not enough; ticking the boxes of the list given by your ethics committee doesn't put you on the safe side. What people want to know is: what are your core values, why do you do the things you do? So that's the first thing, let's say: helping them to think different. And we are also a do-tank, so we interact with developers of those technologies to see how they can more easily face the challenges of society and solve problems. So currently we work a lot on the concept of ethics by design, and the point is just to help people not to have to choose between being profitable or having a positive impact, but to try to put all of that together, and to see how it's possible to make added value using and respecting core values.

Great. Commissioner Bowen?

Yes, happy to be here; thank you so much for having me. I have the privilege of thinking about
this issue through really different lenses. I'm a former commissioner of the Commodity Futures Trading Commission; I serve on the board of the Intercontinental Exchange; I'm also part of the senior leadership team of Seneca Women. But one of the things I think all of these organizations, and in fact all organizations, should have in common in thinking about this is the importance of transparency. I think that's important for every organization. I think the second most important thing, in terms of our responsibilities, is to make sure we have equal access and opportunity. You know, you shouldn't assume that the playing field is level for all of us, whether you're talking about the financial markets or talking about your own organizations: making sure that each individual has equal access and opportunity. And the third thing I think we need to do is to make sure we have proper incentives in place, to make sure that people are in the same boat, if you will, in terms of making sure that you're part of the same mission statement. And then finally, I would say, I think the concept of having a safe, sustainable, inclusive organization frankly is everyone's job. I think too often we depend on the CEO, or the chief diversity officer, or the chief inclusion officer of an organization, and I think in many respects organizations miss the point: it should be about every employee being engaged, being part of the solution.

Great. Richard?

So I'm the chief scientist at Salesforce, and so I wear a couple of different hats: I'm leading the research group, mostly in artificial intelligence, and then helping infuse those breakthroughs into a variety of our different products. And we realized, especially in AI, that ethical issues arise very quickly now that it actually works, and we see it in all our different products, and a lot of issues around potential biases in datasets that we want to be aware of. So we're hiring a Chief Ethical and Humane Use Officer at Salesforce, and really, I think, realize the
responsibilities that we have as a platform, helping hundreds of thousands of other companies also think through these issues, both as they use our own platform as well as work on their core expertise.

Great. So, such a diverse panel here, and we're really lucky to have you all here, so thank you for coming the long way. Commissioner Bowen, why don't you walk us through your thoughts about the emerging technologies and their impact on the financial markets? It's been in the news a bit this week; it's a pretty big topic.

Frankly, our markets have really evolved from the days of the open outcry markets of the commodities markets or the New York Stock Exchange floor; we had specialists in the marketplace, and so technology has resulted in transactions being conducted at faster and faster speeds. With that, it brings certain challenges. You may ask, how fast is too fast? And so we have challenges whether we're talking about algorithmic trading in our markets or high-frequency trading in our markets: the impact that those types of technologies have in terms of a level playing field. At the same time, we know that technology enables us to have much more liquidity, a lot more competition, and therefore cheaper costs to the customer, to end users, so there are many benefits in that respect. Data analytics is extremely important. And this is not to replace, obviously, human judgment, but data has become powerful in helping us to make policy decisions as regulators, and for organizations to make decisions, in decision-making as well. But how we use that data, I think, is equally important. And, you know, in terms of the financial markets, our job is to protect the integrity of our marketplace, and so to the extent that we use artificial intelligence and machine learning, it helps us to really root out, if you will, bad behavior. It's a tool for regulators to use; it's a way for the marketplace also to use that information to mitigate, I guess, the
risk in the marketplace.

Thank you. Terah, Commissioner Bowen has talked a lot about transparency; that's a really important topic. To some of us, AI and machine learning might seem hard to access. Is transparency one of the topics on the mind at the Partnership on AI? What are the topics that this group is coming together to talk about?

Yeah, absolutely, it's a great question. So we were founded back in 2016, which was when this whole initiative was announced, and last year we became operational. The organizations driving the founding of the Partnership were Apple, Amazon, Facebook, IBM, Microsoft, and Google; our board also includes organizations like the ACLU, the MacArthur Foundation, nonprofit organizations like OpenAI, and some independent academic board directors. And when they started the organization, they were really keenly focused on dividing the AI space, which is galactic in its scope, especially if you're a technologist, into discrete categories for this community that I talked a little bit about before to actually get to work on. So those categories for us include issues spanning a whole range of areas of focus: we do work on AI, labor, and the economy, or the future-of-work suite of issues; we have work ongoing on issues of fairness, transparency, and accountability, which is another sort of galaxy of issues unto itself; we have a set of work happening around safety and safety-critical artificial intelligence.

Why don't you describe that a little bit, what safety entails?

So, you know, for us, and actually this is an interesting study in the way that our community approaches problem-solving: when you get a bunch of technologists, lawyers, philosophers, sociologists, anthropologists, etc., into a room together, the first-order issue is actually coming to a common language across those different disciplines and organizational perspectives about what it is exactly we're talking about when we name a term like safety, or we name a term
like transparency. So what PAI is doing in the immediacy, in the work that we're starting to conduct, is establishing a common framework for talking and thinking about these issues, one which spans organizations across sectors and also spans disciplinary boundaries. And that's really important as you're thinking about configuring best practices, which is principally what we're focused on as an organization. So it looks something like standard-setting, but it takes place a little bit prior to that, where there's still a conversation to be shaped around the way that practice should be formulated and implemented in organizations like the technology companies that founded this institution, and certainly how those practices should be interrogated and formulated in collaboration and conversation with these other disciplines.

So, Father Éric, technology might not be the first thing people think of when they think of the Catholic Church, and I'm wondering if you could just address why religion should be at the table in this dialogue.

Yeah, for sure there's no Vatican Tech, far from that, let's say.

But you do hackathons, right?

But we do hackathons. And time to time, in French we say, far from the mountain you see the mountain better. So just to get a bigger picture, time to time it's also good to have a bit of distance, and I would say 2,000 years of track record makes for a different perspective. And for sure, one thing the religions have to bring is a new set of questions, instead of just providing answers. I think that we come very often with a new set of questions, a new way to think about ethics, which, as I said, is not only compliance, it's not only consequences, it's something more universal. And actually religions have been challenged by society all the time, for centuries, and so we know that everything we do builds a kind of society. So my key question, when I meet stakeholders from the
tech companies or policy makers, is: which kind of society do we build using those technologies? And that's what we can bring from outside into the conversation: new questions, a different approach, and also perhaps a good understanding of what it is to be human together. We were talking about having an inclusive society, having a fair society; that's something that actually we know how to deal with, even if we were trapped many times, and perhaps because we failed many times, we know where the pitfalls are, I would say, and we know how the human being works. And very often that's what is missing in the conversations among, I would say, very smart people: time to time, a kind of wisdom coming a bit from the past.

It's really interesting, we have a builder on the panel, a creator. Why don't you, Richard, walk us through, I mean, how do you create an ethical, inclusive, safe algorithm? Sounds like a challenge.

So ethics, I think, is a mindset, not a checklist. There's not like, oh, just do these five things and then you'll have an ethical algorithm. What's also important to understand, especially for AI algorithms, is that they're only as good as the training data that they get. So whenever there is a biased algorithm, it wasn't because some evil programmer sat there and said, oh, I don't like women, they shouldn't get a loan, I'm going to program something into this algorithm against giving loans to women. What happens is they have an abstract algorithm, and then they're getting a training data set based on past behavior, usually human behavior, and that data set will include certain biases. And so if in the past an organization was part of redlining and didn't want to give loans to Black people in the United States, or just historically didn't have as many female applicants for loans in a certain country, or something like that, then those algorithms will take that training data and then learn from that, and then make predictions based on that training data. And that's where we see the majority of the issues, and
that's where we think the most important part is: actually educating companies that use our platform to think about what kinds of biases could be in their data set, what kinds of minorities they might be missing, and then basically try to get better training data, or try explicitly to mitigate those kinds of effects. And so it starts really with the mindset, and thinking through the potential issues that could be in your data set as they're touching people's lives. And so we actually have, in the United States, some very bad algorithms that are running in the judicial system, where you want to define who should get parole, or how long people should be in prison, based on previous days in jail. And it turns out, if you're poor and you can't pay for a parking ticket, you might have spent a day in jail, and now that will make you more likely to go to prison for something else, and that's just a very unfair system. And so when AI and machine learning, and really just statistics, and every technology touches human lives, people should think about all the stakeholders that are involved, and if and how they are affected.

But you mentioned that there are some bad algorithms in the United States, and you mentioned the justice system types of algorithms. Are there places in the world that are doing it right, that can be a model for us?

It's such a new technology, and it's only now getting started to be used broadly. I think the most important thing at this point is almost to have a beginner's mindset, because we're really seeing the beginning of a new technology. And in some ways you can think of this as if, when the internal combustion engine was invented, people had already been thinking about pollution; they didn't, and, you know, that led to suboptimal outcomes. And now that we're seeing AI getting started to be used more and more, in more and more places, we are thinking about these issues. So it's really just crucial to have the beginner's mindset, and you
constantly question yourself: how is this algorithm affecting people's lives, and really, how is your technology in general affecting people's lives? And this training data issue sounds kind of abstract, but it's very concrete. When YouTube makes a recommendation for a video that you see, and you watched one conspiracy video, they learn you like conspiracy videos. You're optimizing for what you're seeing in the training data: if I see people click on this video, they're probably also going to look at these other conspiracy videos. And then you go down some rabbit hole, and you see more and more conspiracies. And so you need to really carefully think about what's going on in your training data, and how that is affecting your product.

Well, Terah, it's so interesting, you've got so many different companies together in this Partnership on AI. We think of them as competitors, but they're coming together for a common cause. What are some of the big projects? There's a term in Silicon Valley called "moonshot" that we like to use a lot for the big, audacious projects. What are the moonshots that the Partnership on AI is doing together?

Sure, yeah. So at PAI, we kind of think of ourselves as a moonshot, to be honest, just because of the challenge of getting the competitors who created this organization together to collaborate in the ways that they're now showing themselves to be capable of. So I think all of our work is contextualized in that broader framing, as that sort of ambitiously scaled type of project. You know, I would say, actually picking up on the point Richard made earlier, and this was also touched on earlier by Commissioner Bowen as well: this project of embedding inclusion as a sort of full-stack priority across an organization is something we're really trying to help equip our member institutions to be able to do. And that's a project that requires not just technology companies, whether in competition or
collaboration with each other, but also the meaningful participation and representation of affected communities, civil society organizations, and advocates, sort of spanning this ecosystem of concerns and challenges associated with AI, and also its opportunities. So I think one moonshot for us is in making sure that the work we do is really meaningfully influenced by those types of voices, and it's not a muscle that the technology industry has traditionally demonstrated being able to exert. You know, I think this is sort of a new era; this is a repositioning of the way that we're thinking about the technology development process. It's not just about product designers and engineers, or, you know, hacking your way through something, sort of out-geeking a problem; it's also about consulting with the communities that technology is influencing, and situating work done in the product development context in the context of the situations in which it will be scaled.

So I would love to be in some of the meetings where you're trying to sort through these divisions and problems. I'm very curious, since you have so many different kinds of stakeholders: what are some of the disagreements at the table?

Mmm, well, so, I mean, as with any organization that is comprised of large corporations and advocates and civil society organizations, some of which traditionally have been working, not necessarily in opposition to them, but certainly in a broader, societally scaled conversation about the way that industry does its work, I think that one big opportunity we have ahead is in making sure that we're able to bridge that gap and equip these different organizations with the, again, you'll hear me say this a lot, sort of lingua franca, this common language that's needed, not just to cross disciplines in conversation with one another, but also the different types of institutions we have working
on the projects that they're working on. So I think that's a really big project for us, and it's not necessarily an explicit disagreement, but I think there's always danger of rifts opening along the lines of sectors in any multidisciplinary organization, or, excuse me, multi-stakeholder organization; there are a lot of multis in the work that we do. So I think that's a big one. I think there are also just disagreements in the technology community about how to deal with some of the big questions ahead on AI, and I don't think those are just going to disappear anytime soon. And our hope really isn't to quash those and make sure that there's perfect agreement across the spectrum, but rather to tease out the disagreements, to really give them airtime to play out, and to give all the players in this ecosystem the space that they need to talk to each other in a sort of vulnerable way that allows organizations to really be collaborative about designing the best answers, instead of confrontational.

That's great. Father Éric, I've read that you worry at night about machine learning and AI, and I'm curious: what is it that worries you as we move into this future? What keeps you up at night about this technology?

I would say, first, I'm more enthusiastic than I'm worried; I'm really a technophile, yes, that's for sure. My point is just to try to flag the pitfalls, just to be sure that we can enjoy more everything positive the technology can bring. I would say that the danger for us is less related to the technology than to the way we use it, as human beings and as a society. And for example, as you were mentioning, the use of AI in criminal justice: you can have better algorithms, you can have better data, but at the end of the day, you know that there are some characteristics of this technology which are embedded
in the technology, and which make those technologies proper for some uses but not for other ones. For example, the fact that it reinforces the same tendencies for sure has to be taken into consideration, and some things probably cannot be done with it. I mean, you cannot do everything with a hammer; time to time you need a screwdriver, or the contrary. And more or less, that's what happens. And very often, I would say, the people who implement those technologies, and they are not specifically from the tech companies, but also, I mean, policymakers, or people in government administration, and so on, they don't take that enough into consideration. And so if it's not the right tool being used, we don't get the right outcome, and we have those biases, we have those difficulties. And I think that, unfortunately, when it comes to the sovereign activities of the state, so justice, defense, law enforcement, it's super, super tricky. So we try to work on that. What we do is more or less the next step from what the Partnership on AI does: trying to work also with regulators and policymakers to be sure that, let's say, our society is mature enough to welcome those technologies. So the point is not, is the technology good enough, but, is the society ready to welcome it, and is it the best way to use the technology? If not, perhaps we should not use it immediately, but try to, let's say, wait for the next step, and then we will be ready. So that's more or less what we work on.

Well, we have a former regulator on the panel, and I'm sure you would like to jump in with your ideas: can we really regulate this, and is it possible, via regulation, to create a safe, sustainable, inclusive future?

I think regulation is important; we definitely need rules of the road. Just because you can go a hundred and twenty miles an hour doesn't mean you should do that in a school zone. And so I think regulation can serve a really important function in making sure that we have a safe
environment for everyone. At the same time, we don't want to stifle innovation, and so there's always going to be this tension between balancing innovation in technology versus consumer protection. In the best of all worlds, you have excellent self-regulation, but, having said that, there isn't a common language, people don't have the same goals in mind, and so there are inherent conflicts of interest with some organizations, whether they use this technology for good or bad. For example, whether we're talking about Bitcoin or blockchain: at the CFTC, we had launched LabCFTC to better understand virtual currencies and Bitcoin, and you can say a lot about the pros and cons of that. It's clear, at least, that blockchain technology has lots of legs, and it's equally clear that the transparency that we can bring to a regulated exchange, which has been proposed, I think will make it a lot more viable, to make sure that we've got the integrity, the trust, and the confidence of individuals, to make sure that the playing field is level. But we also need cooperation globally; this is not a U.S.
issue; our markets are interdependent across the globe, and so the common language has to go beyond our borders. And, you know, I think some governments are taking different positions on how you deal with ICOs, and how you deal with disruptive technologies, if you will, but I think the key is to have effective dialogue, and to make sure we try to achieve the right balance, with investor protection and consumer protection being in the best interest, without stifling innovation.

So, Richard, do you agree? It sounds like this panel thinks regulation might be in order. What type of regulation should there be?

So I think it's important to understand that it doesn't make sense to regulate all of AI in the abstract; but as AI touches human lives in specific industries and verticals, it makes sense to regulate it for those kinds of use cases. So, you know, generally regulating all computer vision algorithms that classify images doesn't really make that much sense, but when you want to apply a computer vision algorithm to medical use cases, and want to identify breast cancer, or classify brain bleeds in head CT scans, and things like that, then you need to go through the FDA and have them really verify that you're at least as accurate as a doctor. And that also can be tricky, because doctors aren't always accurate; being more accurate than even a panel of, like, the top three doctors in the world on a certain issue might not make you a hundred percent accurate. So there are issues there still. But in general, we do need regulation, I think, both in terms of the data that we have, so GDPR is a good step in the right direction, though there is this fine balance between innovation, not trying to suppress it, and consumer protection. It is actually kind of tough for some smaller companies to be GDPR compliant in Europe, and so there's this fine balance. But in general, on the data set side, we do need regulation for privacy issues and things like
that. And then on the AI side, and the application side, we do need to work with the regulatory bodies, be it the FDA, or the transportation authorities, which would come in when it comes to self-driving cars. And then there's also the macro scale of thinking about jobs, and the impact on jobs, and how do we manage that transition. I think a future where we have a lot more automation is clearly great: a hundred and fifty years ago or so, over ninety percent of people worked in agriculture. I'm fairly confident ninety percent of the room is pretty happy that they don't have to work in the field every day, right? But when all of a sudden you have tractors everywhere, and certainly that rollout was slower than the rollout you can have for AI, because it's software, not hardware, what do those farmers do as they're losing their jobs? You need to think through political systems and social safety nets and healthcare and especially education, to bring people on board with this transition, and make that transition as smooth as possible.

Baristas, beware. I was on Market Street just this morning, and there was this automated coffee machine that was doing almost everything that a barista does. So yes, displacement is happening, even on the streets of San Francisco today. So what about the rest of you, in terms of things that keep you up at night, concern you? I'd be really curious just to see what concerns you, and then maybe a little bit about what each of your organizations is doing concretely, at a granular level, something that you're proud of. Terah?

Sure, man, that's a tough question. You know, I mean, I think this is a theme that's been drawn through all the discussion that we've had today on this panel, but I am really concerned about this sort of challenge ahead, insofar as connecting affected communities, frequently disenfranchised from the technology development process, directly to the types of conversations that we all have the luxury of being a part of here
in Silicon Valley, in San Francisco. I'm pretty sure that's the only robotic barista, and it certainly would be in San Francisco. But, you know, I think the types of discussions that we have here in the Valley, certainly in the tech industry more generally, sometimes can confine themselves to somewhat of a bubble. And so I think it's really important, especially as we're talking about the ways in which AI can have discrete and definable impacts on these communities, that we find a way to really bring them directly into the conversation. So, you know, that would be my worry. I think, for our part at PAI, like I said before, we're really working hard on that problem. So part of how we've done it is by directly sponsoring the deep involvement of the nonprofit organizations we have involved in our membership. We're a global body; we're comprised of over 60% nonprofit institutions, even though we were founded by a lot of large technology companies. And making sure that they're invited to the conversation is not sufficient for the work that we're doing; we really have to directly sponsor their deep involvement in those conversations, and again make sure that there's a common language that all the participants are speaking around the direct subject of the work that we're grappling with. So that's part of what we're giving thought to at PAI. There are a lot of different directions, I think, for that work in the future, and in the immediate term it's just meant that we're very conscious about making sure that we're not just inviting people to show up at the table, but we're really making sure that they get there.

Great. Father Éric?

Yeah, I would say, for me, the key point is to be sure that we build a society which is inclusive enough, in two ways perhaps. First of all, we see that we used to have, let's say, a digital divide, but now it's a tech divide, between continents and countries, and between people in the same country. And we know that the more people are in need, the more the
technology can help them to solve their problems; the problem is that between people having those problems and people having potential solutions, the connection is not so easy. That's why we've started to organize hackathons at, let's say, a global scale: act local and think global. And because we have the opportunity, we have the chance, to have a real international institution, with NGOs everywhere in the world, directly in the field, working with the people, we try to leverage this experience to identify what the real problems are and how to solve them, and then scale up those solutions, let's say, helping those big NGOs to implement those technologies to help as many people as possible. So that's one point. The second one is, when I say inclusive, it means also that if this technology is only designed by the same type of people, I mean, studying in the same places, coming from the same places, and so on, I'm not so sure that it will be relevant for everybody and that it will fit. I mean, one size doesn't fit all, and we see that people coming from some countries in Africa and Latin America will probably not be able to use those technologies, just because it's too far from their mindset, it's too far from their culture. So my question, and that's also what we're working on, is how can we bridge the gap between people, to help more people to be in the loop of the design of those technologies, not only of the use of those technologies, in a way to be sure that those technologies will really be suitable for everybody.

Okay. Of course, wearing my former regulatory hat, you know, my biggest concern is: we may have solved for the last financial crisis, but I'm really worried about the next one, and the threat of cyber attacks and hacking are real threats, I think, to our financial markets. And in that respect, I think we need a global solution and global cooperation; I don't think we can be isolated in our decision-making in this regard, because, I mean, there are no boundaries in terms of where people
went a hike markets or where they would like to attack our systems from sort of where in my hat in terms of diversity inclusion one of the things I'm most proud about that we're doing it's Innoko women is that we have just created our first mobile platform for companies to use to engage the employees it's a powerful tool I think for companies to really take the pulse of their organizations and for employees to be able to engage with the company about helpful tips and tactics and ways to overcome obstacles because I think the 360-degree conversation is really important I think you know for organizations there's a real disconnect in that sense in terms of things being much more from the top down as opposed to from the bottom up and so I think you know to the extent that we have tools to empower our employees and others to you know bring their purpose to work every day I think that's really meaningful great Richard so I think in terms of worries it's really about the Omni or dual Omni use capabilities of AI it that worried me a little bit I've done a lot of research over the last decade in artificial intelligence and just getting algorithms to be more accurate but once they're accurate and they work and we open sourced them now everybody can use them it's very easy to misuse them but now the genies out of the bottle and like you you want the good uses but you don't want some of the bad news you don't want it to be used for military and autonomous weapons and things like that but it's the same underlying technology that can help you identify breast cancer it's literally sometimes the exact same algorithm that could to say should not route versus this type of breast cancer or not and so you want the good uses you don't want the bad choices and in general we have a platform or a platform company to and we don't always look exactly into customers data in fact we never do trust our normal on value and so we want to just educate people and help them be part of this 
conversation and think about the ethical implications and so on the positive side I'm actually I feel very fortunate to be part of Salesforce because we have I think coming from the top a lot of really great values mark talks about stakeholder capitalism's shareholder capitalism about how the business of businesses to improve the state of the world and that seeps through all the values and into all our many employees and so we have trust and equality diversity as as real values I'm really excited that trailhead our learning platform is part of our strategy to help people gain new skills to help also companies empower their employees to gain new skills as maybe parts of their job will just get augmented but maybe others in service could in the next couple of years get replaced and so I think there's a lot to be to be proud of and happy at Salesforce with the values and that we bring into the way it's supposed to brought this group together I want to close we just have a few minutes left but I'm going to go back down the line here one more time there's a really great Wall Street Journal article about a week ago that talked a little bit about the difference between humanity and what AI and the machine learning can do expressly it pointed out that the machines even the best machines even the best AI doesn't have empathy can't really be creative that there will always need to be a creator or a maker behind they're calling the shots I'm wondering about your thoughts Richard on on that is do you agree with that and and as this idea that that it's the humans that drive the values and all of this AAI is that is that really important for the future as we design I do think so I do think it's important and in some ways I think I will help us also be able to focus more on what makes us uniquely human that is having empathy with each other you can have an algorithm do a lot of things for you and automate a lot of things but if you want another person to be empathetic to your 
situation you you cannot program that almost by definition you can pretend and pretend better and better or in the future but there are a lot of things where you have you require empathy and creativity and those kinds of jobs I think will be valued more and more in the future and in that sense and with the right economic and political systems I think manatees bound to benefit a lot from AI in the future right Commissioner Bowen well there's never a substitute for good judgment and so I think the human element will always be important it you know at the same time I think it's that we really have to have everyone have a seat at the table it can't be a an audience of people all think the same way who look the same way and so I think it's important that we help everyone participate in this discussion otherwise I think we will be solving problems that could be made worse frankly in that respect thank you to my mind I agree let's say that the machine cannot do everything the program is as human being we're lazy to be honest and we are happy not to make decisions we want to be free but we are so happy if someone else make the decision and very often we see that if the machine can order the pizza it's better whatever the pizza is or whatever where it comes from but I think it can be the same in other let's say fields more for more tricky questions even related to defense and so on because choosing or engaging a target is a difficult decision so time to time I think that the danger could be not from the side or the technology but from the side of the human being to lose or to let its agency or our agency go away and not to take our responsibilities and noting that that's exactly what makes us human is to to let's say face our responsibilities and make our own decisions so we need to be sure that the way we use those technologies especially AI which is super powerful will be in a way which will really empower us instead of let's say dehumanize dehumanizing us just because we 
will let things go and just to be sure that we keep the control and we use those super powerful technologies to build what you want what we want to build instead of being just in the floor let's say thank you yeah I guess I'll just add to a lot of points I already agree with by saying but I think one of the promises of AI is that it can has a promise of allowing us to solve a lot of decisions at scale and I think what's interesting about that is that it opens up new opportunities first to think about new ways of making decisions and organizing organizations around that promise and the potentiality of the impact that this technology is going to have so again I think those are muscles that a lot of organizations are in the midst of growing right now and I hope that entities like ours can be helpful to that process but more than anything returning to your original point you know I think technology is nothing if not a set of decisions that are made by human beings and you know this room is full of a lot of those human beings this panel has a lot of those human beings on it and in general I think it's really important to hold close to that empowerment and just remember the the power that we have to not change on the systems that we're talking about great so with our last two and a half minutes I'm just going to quickly go down the line and ask you are you optimistic are you pessimistic or you both net optimistic for sure I'm optimistic just pay attention about the little dangers and for sure we'll do it we'll make our way yeah great definitely an optimist I think we all better be I'm optimistic in the long term I'm worried a little bit in the short term for it can you explain that a little bit what I mentioned before in that the transition as AI automates more and more jobs the economical systems social safety nets and healthcare and things like that that are sometimes too tied to jobs and this is maybe my European background in Germany where have free healthcare free 
education those kinds of things will help alleviate that in those countries and if more countries are sort of helping people with the transition as certain kinds of jobs will get automated as they have to re-educate themselves and like learn new kinds of skills for the jobs of the future as long as that's happening then I'm also more optimistic okay well thank you all so much we so appreciate it everyone goods give them applause thank you really thank you for coming really really do appreciate it
