Responsible AI with Dr. Saleema Amershi

Saleema Amershi on the Microsoft Research Podcast

Episode 105 | February 5, 2020

Transcript

Saleema Amershi: We’re trying to make sure we think carefully and thoughtfully about how to build these systems, so in that sense we might need to slow things down, but we’re also trying to push the boundaries, right? That means like coming up with new techniques so we can then accelerate our progress. What are the new methodologies and techniques and tools we need to build so that we can still make rapid progress, but do so carefully?

Host: You’re listening to the Microsoft Research Podcast, a show that brings you closer to the cutting edge of technology research and the scientists behind it. I’m your host, Gretchen Huizinga.

Host: There’s an old adage that says if you fail to plan, you plan to fail. But when it comes to AI, Dr. Saleema Amershi, a principal researcher in the Adaptive Systems and Interaction group at Microsoft Research, contends that if you plan to fail, you’re actually more likely to succeed! She’s an advocate of calling failure what it is, getting ahead of it in the AI development cycle and making end-users a part of the process.

Today, Dr. Amershi talks about life at the intersection of AI and HCI, and does a little AI myth-busting. She also gives us an overview of what – and who – it takes to build responsible AI systems, and reveals how a personal desire to make her own life easier may make your life easier too. That and much more on this episode of the Microsoft Research Podcast.

(music plays) 

Host: Saleema Amershi, welcome to the podcast.

Saleema Amershi: Thanks. Thanks for having me.

Host: You work in a group called the Adaptive Systems and Interaction Group and you’re a principal researcher there. Tell us, in broad strokes, what you do for a living and why you do it. What gets you up in the morning?

Saleema Amershi: Sure. So I work at the intersection of human computer interaction and artificial intelligence. And so I create technologies and techniques to help people both build and use AI systems. So that means, you know, I think a lot about developers and data scientists and the tools that they can use to create our AI systems, but also the end-users who ultimately will use and interact with AI systems in their everyday lives. And so I think about how we can make these systems, and people interacting with them, more efficient and effective. And then more recently, I’ve started to think a lot more about responsible AI and how we can help people interact with these things safely and responsibly so that they can trust them and use them to help them in their everyday lives.

Host: Right.

Saleema Amershi: And so, you know, if I think about what wakes me up in the morning, it’s this responsible AI stuff. It’s both exciting and terrifying, you know? There’s so much potential for AI to help people and society, but also a lot of potential for harm, so that gives me a lot of work to do.

Host: Well, before we get specific, I want to get a little geeky, from an academic point of view, and talk about methodology. I don’t talk about it that much on the show. And I think it would be interesting to kind of take a quick look at the tools that you use to equip ML software engineers with the tools they’ll use.

Saleema Amershi: Mmm-hmmm. So a lot of my work involves designing and building new interactive AI systems. So that means I end up using both design tools and prototyping tools as well as, you know, development tools and machine learning tools to build AI models. And I think, you know, to build these systems effectively, you really need to understand some of, like, how these AI and machine learning systems work, you know? You need to understand what knobs are available to give to people so that they can interact with them more effectively. So I think a lot of my work involves actually like building and developing systems. And then, in terms of methodologies, I use both qualitative and quantitative methods. You know, I really like to understand the needs of people before I start building things. That’s really the, you know, user centered design approach, right? Where you really, like… all your decisions are based on user needs.

Host: Right.

Saleema Amershi: And oftentimes for that, you use a lot of qualitative methods, interviewing techniques and surveys to get that sort of rich feedback. But at the same time, I come from a math background so I really like to see numbers and hard evidence, so I also do a lot of quantitative research: controlled studies where we, you know, statistically compare things so I can, you know, trust my own work and understand whether or not the things I build actually help people. I think there are benefits and limitations to all these methods. You know, if you use quantitative techniques, you really have to control a lot of variables and that means you can really only answer very narrow questions.

Host: Right.

Saleema Amershi: Right. So yes, you can get statistical significance and numbers, but you don’t really understand, qualitatively, like, why this is happening and why it’s working better for people or not. And I think I really like to use both methods in all the work I do. I like to have a quantitative and a qualitative perspective because they really feed into each other and that’s how you can really understand why things are working or not.
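
(For readers curious what “statistically comparing things” might look like in practice, here is a small Python sketch of the quantitative side of a controlled study: comparing task-completion times between two interface conditions. The data, the variable names, and the choice of a t-test are illustrative assumptions, not details from Amershi’s actual studies.)

```python
# A hypothetical sketch of the quantitative side of an HCI controlled study:
# comparing task-completion times (in seconds) across two interface
# conditions. The numbers are made up; the t-test is one common choice.
from scipy import stats

control_times = [41.2, 38.5, 45.1, 39.8, 42.6, 44.0, 40.3, 43.7]
new_ui_times = [35.9, 33.4, 37.2, 34.8, 36.5, 32.1, 38.0, 35.3]

# Independent-samples t-test: did the new interface change completion time?
t_stat, p_value = stats.ttest_ind(new_ui_times, control_times)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

# A small p-value suggests the difference is unlikely to be chance, but it
# says nothing about *why* the new design works better or worse; that is
# what the qualitative interviews and surveys Amershi describes are for.
```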

Host: Let’s get into your work right now. There’s a lot of discussion today about ethics in AI. I think it’s because we’re starting to see some of the ramifications of these systems that we’re putting out in the real world. And Microsoft has actually been a leader in this space, so I want to talk about several threads of research you’ve undertaken and the implications of your findings. And I want to start by setting the stage and operationalizing the term Responsible AI, or RAI. What is RAI, and what does it look like IRL, in the real world?

Saleema Amershi: I’m a really practical person, so to me, Responsible AI is really all about the how, right? Like, how do we design, develop and deploy these systems that are fair, reliable, safe and trustworthy and all those things. And to do this, I really believe that we need to think of Responsible AI as a set of socio-technical problems, okay? So that means that we need to go beyond just data and models and making those better. We have to think about the people who are ultimately going to be interacting with these systems.

Host: Right.

Saleema Amershi: Even if you collect a really huge, diverse data set and your models are tuned appropriately, if a person can’t effectively understand the AI or, you know, take over control when it inevitably fails, that can also cause problems. So I think when we create responsible AI systems we need to think about these systems responsibly, which, you know, opens up many challenges, but also new opportunities.

Host: Right. I like to say that failure has a lot of faces and not all of them are ugly. You take it a step further and say that Responsible AI requires planning to fail! Why?

Saleema Amershi: Mmm-hmm. Yeah, so this is something I’ve been thinking about a lot lately and some types of failures are really inherent and unavoidable in AI. So yes, we should be doing our due diligence and trying to make sure we deploy systems that are, you know, as error-free as possible, that we’ve debugged them carefully… we need to do all that work. But we also need to recognize that we can never get rid of all of these errors. And that’s by design. So…

Host: Wait…

Saleema Amershi: …let me give you…

Host: Wait…!

Saleema Amershi: I can give you an example.

Host: Yeah, please!

Saleema Amershi: So imagine, you know, you have a facial recognition system that’s used for access control. By design, an algorithm will be tuned to optimize for some metric. So maybe you try to optimize for precision or recall, which really, like, affects the amount of false positive and false negative errors you can have.

Host: Okay.

Saleema Amershi: You can never really get rid of all of those errors because, by definition, an AI model is a simplification of the world. You can never fully capture the world. So AI algorithms are designed to do the best under the circumstances: the algorithm will try to determine a model that generalizes well to new data, and that means it may sacrifice parts of the input space to get something that’s optimized accordingly. And so you will definitely have some false positives and false negatives.

Host: Okay.

Saleema Amershi: And so the choice of parameters you use, or thresholds you use, is really important and you really need to think about the user scenario there. So if you get that choice wrong, that’s going to be costly to the users. So in the facial recognition scenario, false positives are much worse than false negatives, right? If somebody accesses your account, that can be much more costly than if you have trouble accessing it yourself.

Host: Right.

Saleema Amershi: So if you don’t get that right, that’s a failure. In the same sense, you can’t avoid all false positives and false negatives, so you need to ensure that you give people mechanisms to not only understand when those are happening, but also to override the system, or take over control when they inevitably happen. So if you can’t get into your system, provide another means of accessing your system that doesn’t rely on that technology. So anticipating those failures, as designers and developers, if you recognize these common AI failures and make sure you design interfaces and your systems to help people address those failures, that’s how we can work towards creating responsible AI systems.
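
(To make the trade-off Amershi describes concrete, here is a minimal Python sketch, not from the episode, of how moving the acceptance threshold in a verification system shifts errors between false positives and false negatives. The function, scores, and labels are all hypothetical.)

```python
# A hypothetical sketch of the threshold trade-off in a face-verification
# access-control setting. Raising the threshold reduces false positives
# (impostors admitted) at the cost of more false negatives (genuine users
# locked out); you can never drive both to zero.

def error_counts(scored_examples, threshold):
    """Count false positives/negatives at a given acceptance threshold.

    scored_examples: iterable of (match_score, is_genuine_user) pairs,
    where match_score is the model's confidence that the face matches.
    """
    false_positives = false_negatives = 0
    for score, is_genuine in scored_examples:
        accepted = score >= threshold
        if accepted and not is_genuine:
            false_positives += 1  # impostor admitted: the costly failure here
        elif not accepted and is_genuine:
            false_negatives += 1  # genuine user rejected: needs a fallback path
    return false_positives, false_negatives

# Made-up validation data: (model score, ground truth).
validation = [(0.95, True), (0.80, True), (0.62, True),
              (0.70, False), (0.55, False), (0.30, False)]

for t in (0.50, 0.65, 0.90):
    fp, fn = error_counts(validation, t)
    print(f"threshold={t}: false positives={fp}, false negatives={fn}")

# For access control you would bias toward a high threshold and, as Amershi
# notes, provide another way in for the genuine users you inevitably reject.
```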

Host: Go back to the “planning for failure” then, in the design part of that. What does that mean?

Saleema Amershi: So that’s about, I would say, enumerating the common types of AI failures. So false positives, false negatives, being uncertain, being only partially correct, but also, again, thinking of AI systems as socio-technical problems, right? So that means going beyond just the model errors themselves to places where the system can fail in terms of how the user is able to interact with it.

Host: Okay.

Saleema Amershi: So again, like the mismatched expectations issue, right? I would consider that a failure. If a person has higher expectations of the system than it’s capable of, that’s a failure, right?

Host: Right.

Saleema Amershi: Because if a doctor is relying on a system to make clinical recommendations about their patients, if they think that the system is smarter than it actually is, they may over-rely on it and that can result in harms.

Host: So what’s the mitigation there? Because you’ve got a system that’s going to fail… is it more educating up front?

Saleema Amershi: This is where we can get creative as an industry. For some of these, we often go to, let’s just give people all the documentation, right? Like list out everything, but nobody reads documentation, right?

Host: I was just going to say, I don’t even read the apps.

Saleema Amershi: Exactly, right? And so what are other ways that we can sort of expose the capabilities and limitations of these systems? We’ve done a lot of work in this space, trying to understand what is effective for this and other types of failures. So showing examples of the variety of things that an AI system can do effectively is a way of giving people an understanding of its capabilities.

Host: Right.

Saleema Amershi: Giving people controls to turn the knobs themselves. That gives them an understanding, but it also makes them part of the process and so they’re sort of more accepting when these things inevitably break down and are more willing to interact with them, and continue to interact with them, because they are part of that process.

(music plays)

Host: Let’s zoom in and talk about Responsible AI, writ large, and kind of following up on some of the points that you just made about planning for failure and understanding that these things fail. I know that there are some myths out there about it and some hurdles that people have to overcome, both on the making side and the using side. So talk sort of generally about what we’re facing here, and what we need to do to build Responsible AI. What are the necessary building blocks?

Saleema Amershi: Yeah, this is a great question and it’s something we’ve really started to look into recently. We’ve started to do some preliminary work to understand sort of the challenges people face in trying to build responsible AI systems.

Host: Yeah.

Saleema Amershi: And also the perceptions that people have about responsible AI issues around, like, bias and fairness and, you know, some of the things that we hear about in the news. And some of what we’ve been finding is really interesting. There are AIs now that are being used to make hiring decisions or recommendations. So imagine you’re using an AI system to make recommendations about who to hire for a technology job, or an engineering job. You know, we’ve seen in the news, sometimes these systems can be biased and so maybe your system is biased against recommending women for engineering positions. If you ask people about this fairness and bias issue, a lot of people will come and say, you know, well this is just reality. If we try to fix this, we’re actually just adding our own biases and who are we to change reality?

Host: Right.

Saleema Amershi: And this goes back to, you know, what I was saying earlier, which is that, this is not reality, right? Like AI models are, by definition, a simplification of the world. Like we cannot represent the world and all the factors that will impact whether or not a person will be a good hire. We can’t represent all of that.

Host: Right.

Saleema Amershi: And so, while it might be true that there’s, you know, gender disparity in technology, it’s incorrect to say that this is reality. Another thing that we hear is that, you know, this is just math, right? AI is math, so it can’t be wrong, you know? And yes, it’s true, like the math is designed to do what you tell it to do, but again, going back to what I was saying earlier, the AI is designed to do the best under the circumstances. And because you can’t fully capture the real world, that means that it will try to, you know, minimize errors. Not eliminate them, but minimize them, right? And so even if the math is doing what it’s supposed to, you know, it’s operating over data and that data can be limited, how you show that information to the user can cause problems and failures. And so the math might not be wrong, but the AI system, overall, could still be wrong.

Host: Right.

Saleema Amershi: I think we should think about humans and machines as having different error regions, right? So yes, each of these systems will have uncertainty, but they’ll be different. And so ideally, those uncertainty regions don’t overlap that much, right? And then they can complement each other effectively.

Host: Right, right, right.

Saleema Amershi: And it’s true because AI systems can see things that people can’t, right? And vice versa. And so I think that’s the way we should be looking at these and, you know, thinking about what is that overlapping region and ensuring that people understand sort of the limitations of those systems and where it might fail versus where you might fail.

Host: What are we facing on the making end of this? We’re talking here about how these systems operate with users. Are there any challenges that we face as developers?

Saleema Amershi: I think this is where a lot of the work needs to be done. We have to, I think, re-think how we go about building these systems. I’m a firm believer that building responsible AI systems requires an interdisciplinary approach. And so for example, a lot of times we put a lot of our emphasis and resources towards building better models and getting better data, but again, you know, these are socio-technical systems. So we also have to think about how those systems will be deployed in the world and who it’s going to affect. Who are the different stakeholders? What are the implications to those different stakeholders when things go wrong? And I think that really requires a user-centered approach.

Host: Right.

Saleema Amershi: And so we should be leveraging, you know, the skills and expertise of, for example, our user researchers and designers who have the training to sort of understand the needs of people. And that understanding should be driving all of our AI decisions, including, you know, what algorithms to use. If you need a system that can provide an explanation to a user so that they can make an appropriate decision, you’re going to need an interpretable model.

Host: Right.

Saleema Amershi: So that’s going to affect the choice of algorithm you use. Understanding the users is going to affect the parameters you choose, the data you collect, right? All of those decisions should be driven by the scenario. And I think sometimes we do it the reverse way in the industry, right?

Host: Right.

Saleema Amershi: We build technologies and then sort of stick an interface on it and hope it works for people. But I think, to do it responsibly, we need to do the reverse.
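(As a small illustration of the scenario-first approach Amershi describes, here is a hypothetical Python sketch in which a user-facing requirement, the need to show an explanation, drives the choice of algorithm. The helper function and model choices are ours, for illustration only, not anything prescribed in the episode.)

```python
# A hypothetical sketch of letting the user scenario drive the algorithm
# choice. If end-users need a human-readable reason for each prediction,
# an interpretable model may be the right call even at some cost in raw
# accuracy. The specific model classes here are illustrative.
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression

def pick_model(needs_user_facing_explanation: bool):
    """Choose a model family based on the deployment scenario."""
    if needs_user_facing_explanation:
        # A linear model's coefficients map directly to per-feature
        # contributions that an interface can surface to the user.
        return LogisticRegression(max_iter=1000)
    # Without that constraint, a higher-capacity black box may be acceptable.
    return GradientBoostingClassifier()

# Usage: a clinical decision-support tool (like the doctor scenario above)
# would likely pass True; a background ranking pipeline with no user-facing
# rationale might pass False.
model = pick_model(needs_user_facing_explanation=True)
```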

Host: So there’s a level of opacity, I think, in terms of form factor when we deliver an AI system. When I think of what I get now, it’s a laptop, it’s a phone… but inside of that is where these AI systems are being deployed, and so I don’t necessarily differentiate between my phone that used to do what it did and my phone that I can now talk to. Or my phone that looks at my face and says okay, Gretchen, you can come in. How do you think about design on those things that alert users that they’re now dealing with AI, and how do you educate about that?

Saleema Amershi: AI systems are fundamentally different than our traditional computing systems and I think our practitioners and our product teams are sort of really struggling to design effective systems because of those differences. What we know about how to design effective traditional computing systems, like making sure your interfaces are consistent, or your systems behave consistently, so people know how to use them, know what to expect.

Host: Right.

Saleema Amershi: That’s something that’s inherently very difficult for AI systems because they can operate differently in subtly different contexts or from one user to the next. How do we design for that? I think we need to come up with a lot more guidance to help our teams understand what is the effective way of designing these systems so people can interact with them effectively.

Host: Right. Bill Buxton, design guru here at Microsoft Research, was on the podcast and he talked a lot about the need for great design from the get-go and spoke about who gets to make decisions about how something is designed. And we’re not just talking about the form factor, we’re talking about the entire package. And so, with these new AI systems, how can we bring new people to the table – people who really need to speak to the design up front – in that traditional process?

Saleema Amershi: Yeah, this is something I think about a lot and it requires really a cultural shift, right? It’s about recognizing and understanding the skills and expertise that each of these different disciplines brings to the table and how they can complement each other in order to create responsible systems. That is just something that requires education. It requires trying it out, right? Like, hey, if you do bring people in earlier, you’ll likely create a better system because now your decisions are driven by what’s actually going to be useful for people. I’m actually really hopeful right now. I joined MSR about seven years ago in the machine learning group and I think things were much different back then, right? You know, it was much harder for me to explain, like, what I brought to the table for machine learning, you know, and why I was even there, you know? I was the first, like, HCI hire in the machine learning group, and it took me a while before people really understood the complementary skills that I brought to the table that helped, you know, make these things effective.

Host: Right.

Saleema Amershi: And I think people are more open to that now so even though it requires this shift in our industry, I’m hopeful that it will happen.

Host: Right. Well, and Bill talked quite a bit about the fact that the arguments that you hear are, “we don’t have enough time, we don’t have enough money, we need to ship…” There are constraints that are inherent in that cycle.

Saleema Amershi: There are ways to plan for this, right? Like reserving some resources for dealing with responsible AI issues, you know. If you know they’re going to be there and you reserve some resources for them, then you don’t run into this problem of we don’t have enough time or resources. I think another thing that helps is calling these issues failures. I use the word failures really intentionally because we used to call responsible AI issues, you know, issues or problems, but just that term will put it lower in the priority stack.

Host: Right.

Saleema Amershi: Right? But if you call it a failure, like if people might get harmed or if there’s a bias against people, like that’s a failure. That’s something that’s a showstopper, right? You’re not going to deploy something that has a failure.

Host: Right.

Saleema Amershi: So I think it’s important to talk about it that way so that we ensure that we’re actually prioritizing fixing them.

Host: AI brings both new challenges and new opportunities in innovation in user interface and user experience. You address this issue in a paper that proposes guidelines for human-AI interaction from a research perspective, a development perspective and a user perspective, which I think is cool. Talk about the genesis of this work and the findings you presented in a paper at CHI this year.

Saleema Amershi: So we created the guidelines because we were really seeing that our product teams and practitioners were struggling to design for AI. And I spoke about this a bit earlier, which is, you know, this fact that AI systems are fundamentally different from traditional computing systems. So they’re going to be inconsistent. They’re going to be error prone. So what we know about designing for traditional systems doesn’t always work. This is evidenced by just the failures that we see every day in the news, you know, that range from like funny, like, auto-completion errors to like really harmful errors. And at the same time, there’s actually been a lot of research advances over the last like twenty years around how to develop these types of AI systems effectively. And for me, that was somewhat frustrating, seeing sort of people struggle, and missing out on some of what’s been going on in the academic community or industrial circles, and I felt that there was a lot of sort of reinventing the wheel and wasted time. And that’s, you know, partly because guidance is really scattered across many different industrial circles. Guidance that is available hasn’t always been evaluated in a wide variety of scenarios, so if you know something works well for like a bot, how do you know it’s going to work well for any other AI system? And sometimes guidance is presented at very different altitudes. Like you can have really high-level guidance like, make sure these things are, you know, trustworthy, but like how do you do that, right?

Host: What does that even mean?

Saleema Amershi: Exactly, right? Like so, we wanted to provide guidance that was more actionable, right?

Host: Sure.

Saleema Amershi: So we got together with a large group of people and said like, hey, let’s just try to synthesize what we know across the industry and come up with guidance that’s clear, actionable and that we know is relevant to a wide variety of scenarios. So we went through this iterative process. We collected guidance and best practice recommendations from a wide variety of sources. I think we had like two hundred to begin with, and then we iteratively grouped them, revised them, tested them with real practitioners to understand if they were really something that people could detect and notice in interfaces, and that’s how we developed them. We tried to take a really rigorous and systematic approach so that we could, you know, feel confident in recommending these as things that we know are tried and true. And I think that helps both, you know, researchers, developers, and end-users, you know. It helps practitioners design better systems. That gives end-users better systems to use. And I think it also helps accelerate research because, like I said, you know, I felt that we were sort of reinventing the wheel and I think, by synthesizing this work, it can kind of reveal where the real gaps in our knowledge are and so we can try to, then, really push the boundaries into what we don’t know.

Host: Who’s this for? I mean, when you say guidelines for human-AI interaction, who’s your audience?

Saleema Amershi: I would say primarily practitioners. So product teams. Like, we want them to, you know, have the knowledge to create good systems. But also researchers. Like I said, you know, I want to really advance the field. Here’s what we know now; let’s figure out how to improve the situation and do research in areas where we have less knowledge.

Host: You currently chair a really interesting working group at Microsoft on human-AI interaction and collaboration, and it’s a part of Microsoft’s Aether effort, which is a company-wide initiative around Responsible AI. And I know there’s a lot of Venn diagram overlap with this and what we’ve been talking about up to this point, but I want you to drill in a little bit on why the issue of human-AI collaboration warrants an actual group – or task force, maybe, is a good word – and who’s involved?

Saleema Amershi: Like you said, you know, much of what I’ve talked about today actually came out of this working group. So the, you know, the studies about people’s perceptions around responsible AI failures, the guidelines work… all came out of this Human-AI Interaction and Collaboration working group, which we call the HAIC working group. And, you know, just like any new set of problems, this is a new space, right? Like we don’t know how to do human-AI interaction and collaboration well yet. And so we need people to be really trying to push the boundaries of this area, you know? People with the right expertise in order to make advances. So that includes, you know, coming up with new best practices and techniques, but also even just advancing the state-of-the-art in terms of the methodologies we use. Something we’ve been looking into recently is that a lot of our techniques and methodologies for building traditional systems don’t work that well. So, you know, in design, we have prototyping techniques like Wizard of Oz, for example, that are often used to do early prototyping and testing. But that’s really hard with AI systems because it’s hard to mimic the different ways an AI system will behave. And that means, if you don’t have that in place, then you need to take a dependency on a model, which means that you can’t sort of test your interface early, and that causes problems, right?

Host: Right.

Saleema Amershi: So we really need experts, people who understand human-computer interaction, user research methodologies and AI systems to think deeply about new methodologies to enable rapid prototyping and iteration and other methodologies for evaluation, testing and building AI systems.

Host: Let me drill in a little bit there. Technology advances at a very rapid pace now, maybe faster than it ever has, and you’re a group that’s a little bit putting the brakes on, saying, wait, we need to make sure this isn’t going to harm anybody. How much influence do you have among the various groups of people that are putting this technology out?

Saleema Amershi: So I would characterize it not necessarily that we’re putting the brakes on things. Yes, we’re trying to make sure we think carefully and thoughtfully about how to build these systems, so in that sense we might need to slow things down, but we’re also trying to push the boundaries, right? That means like coming up with new techniques so we can then accelerate our progress. What are the new methodologies and techniques and tools we need to build so that we can still make rapid progress, but do so carefully? So I kind of see us as, you know, partially pressing the brakes, but partially, you know, pressing the accelerator!

(music plays)

Host: Well, Saleema, we’ve reached the part of the podcast where I ask all my guests the same thing: what keeps you up at night? And obviously, a lot of your work is based on what keeps all of us up at night. I’m glad you’re doing it. Ethical AI is a huge issue here and you’re tackling it head on. But that said, not everyone is going to comply with best practices, so what kinds of things can be done to mitigate the undesirable consequences of these powerful tools and ensure that I don’t lose any sleep at night?

Saleema Amershi: Yeah, you know, I mean, this space, like I said earlier, is both exciting and terrifying for me, you know? And I really believe that to create ethical and responsible systems, we really need a diverse set of perspectives at the table, right? This is both at the macro level and the micro level. At the macro level, in terms of coming up with policies, like, I think we need policies around this, but that needs to be driven by both industry and government agencies working together, right? If you have just one of those entities making decisions, sometimes you’ll come up with things that just don’t work.

Host: Right.

Saleema Amershi: And then at the micro level, you know, building individual products. I really feel that we need people with not only diverse backgrounds in terms of, you know, race and ethnicity and gender, but also different skill sets, you know? People with different experiences, different tools and methodologies that they use. How do we enable these people to work together? And I think that’s going to require a cultural shift, which is hard to do, but, you know, I’m trying my best! You know, the Aether working group and HAIC, Human-AI Interaction and Collaboration, this is sort of what we are really trying to do.

Host: Microsoft Research is not a monolith. People come from all over here. Different backgrounds and life experiences and unique personal stories. So tell us yours, Saleema. What got you started in computer science and what landed you here at Microsoft Research?

Saleema Amershi: I did my PhD at the University of Washington, which is really just across the lake from Redmond here, and so that made it really easy to collaborate with MSR, and there were just so many interesting people to work with. And so I ended up doing three internships here at MSR. I just kept coming back. And even, like, when I wasn’t doing internships, I would collaborate with people. I just always loved it. I loved the energy, the breadth of experience and expertise, and everyone is just willing to talk to you and work together, and I sort of always knew that I wanted to come back here. And so after grad school, you know, I applied, came here. When I first started, people would ask me if I was, you know, back for another internship.

Host: You again?!

Saleema Amershi: No, I’m really working here now!

Host: Well, rewind before University of Washington and your PhD. What got a young Saleema Amershi interested? Where did you start? Are you from Washington state?

Saleema Amershi: So I grew up in Vancouver, B.C. Ummm, yup!

Host: I did not know that.

Saleema Amershi: Yeah, so I went to UBC for undergrad and my masters, and I actually started off as a math major. In fact, I never thought I would go into computer science. You know, computers were just sort of becoming common in high schools and I wasn’t really exposed to them that much, but I really liked math. That’s what I wanted to do. I wanted to be a math professor. And so when I started undergrad, at the time, you know, you had to take computer science courses as part of your math major, and that’s, I think, when I got really exposed to computer science, which is really about putting math to work. You know, it’s about making math do things. And do cool things, you know? And that’s when I started transitioning to computer science. And then I started working at the Laboratory for Computational Intelligence at UBC, working on intelligent tutoring systems, right? That’s where I got exposed to AI systems, and then, as I was building those systems, I found it was hard to do, you know? So I was trying to make my life easier, you know, by building better tools, and that’s kind of what led me to HCI, and here I am now, working at this intersection.

Host: What’s one interesting thing – a trait, a characteristic, a life event, a side quest – that people might not know about you, and that maybe has impacted your career as a researcher?

Saleema Amershi: I think of myself as a pragmatist and most of what’s driven my work, I would say, and driven the path that I took, was trying to make my own life easier, you know? So when I was working with intelligent tutoring systems back at UBC, I remember at that time, to build these things you would create, like, these giant belief networks that were hand-tuned by experts for just one system. And that was, like, yes, it was powerful, but it was not scalable. And that’s what got me into machine learning, right? It’s like okay, how do we do this without having to hand-tune all these things? So that’s how I started using machine learning for intelligent tutoring systems. Then, when I was using machine learning, like I said, you know, the tools that we had for, you know, data collection and cleaning, and understanding and debugging, were just really hard to use. And it was really difficult. There was a steep learning curve. So that’s what got me into interactive machine learning and HCI. You know, because I wanted to create better tools for myself, you know? So I could build these things more efficiently and effectively. And then, you know, at the same time, I don’t just build these systems, I’m also a user, right? So when I interact with these AI systems in my everyday life, you know, like social networking systems or recommender systems, it really frustrates me when I can’t do what I want, you know? I can’t steer these things the way I want. And I think that’s about giving everyday people the right controls and knobs in order to steer these things. I think there’s a myth that people won’t want to put in the time and effort to interact with these or steer their systems, but I don’t think that’s true, you know. I think, if people start to understand the benefits of doing so and if you give them easy controls, they’d be willing to do so. And so really it’s about helping myself. You know, making my life easier, which in turn will help other people build and use these things effectively.

Host: At the end of every show I give my guests a chance to encourage, inspire or even instruct our listeners pretty much in any way they see fit. Do you have any parting words? Any thoughts on what we might need from next gen researchers for next gen technologies?

Saleema Amershi: Yeah, I really believe that there’s so many interesting opportunities to work at the intersection of different fields. There’s a lot of opportunities to bridge different communities, enable them to work together more effectively to create really novel solutions to our problems. So, you know, what I would recommend for, you know, the students out there, the next generation of researchers, is to explore those opportunities and work across interdisciplinary boundaries, you know, train yourself in multiple different fields so you can understand problems that might be solved by bringing those together. I think that could really help change the world.

Host: Saleema Amershi, thank you so much for joining us today!

Saleema Amershi: Thank you for having me!

(music plays)

To learn more about Dr. Saleema Amershi and how researchers are working to make AI robust and responsible, visit Microsoft.com/research.
