Episode 145: The Responsible Tech Revolution with Tanya Johnson, Chief Product Officer at Auror
In this episode of Product Thinking, Tanya Johnson, Chief Product Officer at Auror, joins Melissa Perri to explore the importance of responsible and ethical use of tech and AI. They unveil the essence of ethical product development, embracing responsibility and diversity across product teams; explore strategies for building authentic diversity in tech teams; and share Auror’s framework for responsible tech and AI.
You’ll hear them talk about:
[07:00] - Tanya highlights the importance of responsible technology and AI, especially in critical sectors like retail crime intelligence. While tech should solve problems, it must also strive to minimize harm, which involves product development and data management to maximize positive impacts without compromising user safety or privacy. As our understanding of responsible tech deepens, the industry must shift from indiscriminate tech applications to more thoughtful, inclusive, and deliberate development and deployment practices that aim to increase societal good at scale.
[10:50] - When developing responsible tech products, you must consider both target and unintended users. Product managers should conduct thorough research beyond their core audience, considering the broader context and potential consequences. Rather than relying on restrictive policies alone, addressing these issues should involve proactive discovery, thoughtful design, and user training. Balancing access restrictions with a broad consideration of user impact is critical for ethical product development.
[17:21] - Regarding tech responsibility, every team member, not just the CEO, bears responsibility for ethical technology development. While there's a misconception that only bad people cause harm, it's usually a lack of intentionality and accountability that leads to negative outcomes. A diverse team is also essential for recognizing risks, such as how data use could threaten vulnerable groups. The responsibility lies with creators to ensure their technology respects and protects all users.
[22:50] - Building diverse teams is vital for creating successful products that resonate with a broad user base. Cultivating diversity starts with company founders and leaders, who must implement intentional hiring practices from the beginning. Objectively assess candidates by standardizing the hiring process and focusing on role-relevant criteria rather than subjective preferences. Also, ensure your hiring panels are diverse. Above all, understand that diversity goes beyond gender and ethnicity to include cultural, linguistic, and financial backgrounds.
[29:20] - Auror developed a framework for responsible tech and AI that prioritizes fairness, equality, and community benefit. The company commits to designing non-discriminatory AI, adhering to international and local laws, and respecting human rights. Transparency and accountability in AI processes are key, and human oversight is required to balance autonomous decision-making. Additionally, Auror ensures that its technology supports the safety and well-being of its customers and the wider community.
Melissa Perri - 00:00:34: Hello and welcome to another episode of the Product Thinking Podcast. Today we're talking all about Responsible Tech and AI. We're joined by Tanya Johnson, who's the Chief Product Officer of Auror, which is a retail crime intelligence platform. Tanya has been at the forefront of ethical product development, ensuring that technology not only advances, but does so with purpose and responsibility. From her advocacy for diverse teams to her pioneering work in organizations like New Zealand tech women, Tanya's journey is a testament to the profound impact of principled leadership. So I hope you really enjoy this episode because I loved talking to Tanya all about ethics and what that means for product managers. Before we turn it over to Tanya, we're going to jump into our question from Dear Melissa this week. So let's see what our listeners wrote in.
Dear Melissa, how are companies thinking about using AI with the emergence of ChatGPT? We're training our product managers, but we're not sure how much we should be pivoting to help them understand AI versus just continuing their education.
So this is a great question and I know a very timely one, as we have ChatGPT and all these wonderful AI advances coming out there. A lot of people are wondering, how much should I actually dive into this? AI and Machine Learning and ChatGPT and everything like that is just another technological tool that I need you to keep in your tool belt, in your arsenal. Your product managers still need to know how to do great product management. People are looking at AI in many different ways, and I see a lot of companies actually scrambling to adopt AI everywhere they possibly can, just throwing it at all these problems so that they can say their company has an AI component. That is not the way we should be treating it. We need to really deeply understand our users' problems first and then say, “is AI something where we can create a solution using it to help solve this problem better?” And that's how we should be looking at it. Now if you do have an AI component in your product, it is important that product managers understand how that works. And we had a really good episode previously with Christina Pawlikowski, who was talking all about how you should be thinking about AI, what it really means, and what product managers need to learn very specifically about Machine Learning and AI. So if you're curious about that, I would go back to that episode and look for Christina and really check out what she has to say there, because she gives a great guide for how we should think about when AI can be applied and when it cannot or should not be applied. So that's how I would start to think about that.
Continue your product managers on their journey. I don't think you need to pivot anywhere to learn about AI, but definitely make sure that they understand some of the implications of what AI is and how it gets made. Because it's not just like building normal workflows, right? We do have a huge data component, and the models that you have in AI and Machine Learning are only as good as the data that goes into them. So we want to make sure that we understand those parts as well. We also want to make sure that we look at the ethical ramifications of building AI, which is something that we're going to be talking with Tanya about very shortly. So this episode should help you think about that. When it comes to standard product management, though, and applying AI, I need you to think about it in the same way you would think about any other tool, an API, a platform, all these different things. Is it going to be relevant in our solution? And is it going to make our solution better for our customers? Or if we're applying it to internal processes, which a lot of people are doing as well, will it help us be faster, more efficient? And Tanya has a great example in the podcast coming up about where they're thinking about using AI to surface insights internally about where people might be bad actors or using their products incorrectly. So things like that are definitely how companies are thinking about AI and Machine Learning. But don't panic, don't freak out about this new thing out there. ChatGPT, LLMs, they're fantastic. You should be looking at them, you should be seeking to understand when they can be used, but you should not be panicking and taking all your people and pivoting them in that direction, especially if it does not apply for you.
So first, think about it as a tool, train your product managers well in great product management, and then you can approach whether AI should be used here and get them up to speed on what that actually means. So hope that helps you. And for those of you listening out there, if you have any questions for me, please submit them to dearmelissa.com. We answer them every single episode. And now we're going to go and say hi to Tanya and learn all about responsible tech. Ever wish for total alignment with executives, the end to those never-ending debates, results that make everyone sit up and take notice, amplifying influence across your organization? The secret? It's not just about managing, it's about facilitating. Level up your ability to facilitate clear, powerful conversations with stakeholders through Voltage Control's Facilitation Certification Program. Learn more and get $500 off at voltagecontrol.com/product. We'll be putting that link in the show notes for you as well. Hi, Tanya. Thanks so much for being with us today.
Tanya Johnson - 00:05:47: Thanks very much for having me.
Melissa Perri - 00:05:49: So you've been working in Responsible Tech and AI for quite a while now. I know this is a big passion of yours. How did you get involved in this field?
Tanya Johnson - 00:05:59: I got passionate about ethics and tech when I was working at a company that built education software and seeing how sometimes the things that we built weren't used the way we intended and what the potential consequences of that were. So for example, our software was installed on school laptops to be used by students. And one of the things that it did was it monitored what they did on those laptops, which we felt pretty okay about. Until we realized that the context that these students were in meant that sometimes these laptops actually went home with them. And sometimes this was the only device that their family had that connected them to the internet. And now the things that their siblings or their parents were doing on the internet were able to be monitored by the school. And so the consequences of this could have been quite embarrassing and they also could have been quite dangerous. We made rapid changes and obviously iterated to restrict the monitoring to only within school hours and only when the laptop was within the school's IP range. But it really got me thinking about how we built and tested the products that we work on and how we could build things that we wouldn't regret later.
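(For readers who want to picture the kind of guardrail Tanya describes, here is a minimal, hypothetical sketch of a check that only permits monitoring during school hours and on the school network. The network range, hours, and function name are illustrative assumptions, not the actual product's implementation.)

```python
from datetime import datetime, time
from ipaddress import ip_address, ip_network
from typing import Optional

# Hypothetical values for illustration only.
SCHOOL_NETWORK = ip_network("10.20.0.0/16")   # assumed school IP range
SCHOOL_HOURS = (time(8, 30), time(15, 30))    # assumed school day

def monitoring_allowed(client_ip: str, now: Optional[datetime] = None) -> bool:
    """Monitor only when the laptop is on the school network during school hours."""
    now = now or datetime.now()
    on_school_network = ip_address(client_ip) in SCHOOL_NETWORK
    start, end = SCHOOL_HOURS
    during_school_hours = start <= now.time() <= end
    return on_school_network and during_school_hours

# A laptop taken home in the evening is never monitored.
print(monitoring_allowed("192.168.1.12", datetime(2024, 3, 5, 20, 0)))  # False
```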
Melissa Perri - 00:07:00: That's a really fascinating story. When you look at the landscape of Responsible Tech as well, I know it's a really big thing at Auror, the company that you're at now. Can you tell us a little bit about Auror, and why do they take Responsible Tech so seriously?
Tanya Johnson - 00:07:15: Yeah, that's a great question. Auror is a retail crime intelligence platform. And we empower the retail community to report crime, reduce loss and harm, and to make stores safer. That sounds like quite a lot, but if we're just going to break it down, what Auror is is a faster way for organizations to report crime in structured ways, but then help them to connect the dots on the crime that's happening, and help police focus on solving organized retail crime rather than petty theft. And really help our customers keep their teams and customers safe in the face of rising levels of aggression that we're seeing. And so it's quite important to us, the Responsible Tech and AI piece, because I think that if we're working in an area like movie recommendations, for example, the worst harm you might have is recommending someone a movie that they don't enjoy, and they've wasted two hours of their Saturday night. But working in crime, the consequences here can be really significant. And so we want to be exceedingly careful about what we do.
Melissa Perri - 00:08:11: Yeah, and you definitely see some of those things, especially in the US on the news lately. There was a whole article about a guy who was mistakenly arrested for theft, petty theft too, the kind of thing you're trying to move past, just because he looked like some other guy on the screen. So all of those things I think are really relevant to what everybody's going through today. When you think about Responsible Tech, that phrase, and especially when it comes to AI, how would you define that for people who may not be familiar with this area?
Tanya Johnson - 00:08:42: So I would think about Responsible Tech as technology that solves important problems. So tech for good and technology that solves it in ways that maximize the positive impact while removing or mitigating any negative impact. And it's not just about the products that we create, but also about the ongoing way that we manage and protect the data and our tech assets that drive our products. And when we're talking about data, we're talking obviously about our own data, but also about that of our customers.
Melissa Perri - 00:09:08: And how have you seen this field really evolve over time? I know there are so many conversations out there about data, like we've got our Facebook things, we've got TikTok, huge thing in the US about how China's stealing our data, according to some people. And I feel like this is front and center news of things we're talking about all day long now. Can you tell us a little bit about when you first started looking into Responsible Tech, what did it look like, and why is it so critical now?
Tanya Johnson - 00:09:35: I think we've been on a real learning journey over the years, and I feel like, and I hope, that we've shifted beyond that belief that any problem can and should be solved by tech, where we tend to just apply tech to anything indiscriminately and pretty bluntly. There's a Maya Angelou quote that says, “do the best that you can until you know better, and then when you know better, do better”. I feel like this is the journey that we're on with tech and AI as we go along. We are learning so much more about the impacts and the consequences that this could have. We've seen the impact of the products that we built when our teams were homogeneous, when we didn't think about unintended users, the people who use our products that we didn't anticipate, and when we didn't take a deliberate approach to training our systems. So now, as tech continues to evolve and our understanding of it evolves, we need to keep doing better here, and I think in the early days we were just so excited by the possibilities of it that we didn't really think about the impacts. And when bias seeps into these systems, what we end up doing is propagating societal problems through code at enormous scale. It's pretty frightening, and I wonder how we can actually change it so that what we're propagating is the good rather than the bad.
Melissa Perri - 00:10:51: Two things you said really stood out for me. One that I want to dive into is the unintended users piece, because I think that's a big piece of product management. So can we go a little deeper on that for a minute? I've seen a lot of product managers out there being like, “well, they're using it wrong because they're not our persona”. We didn't intend for them to use it, right? We didn't even think about them. When you start a company and then when you're scaling a company, this comes out quite a bit. Focus on your core user. Don't think about the people who are not your core, who might churn, all that type of stuff. How do you balance that as a product manager building Responsible Tech? What kind of things would you advise for people who are in these kinds of mission-critical companies, like Auror and others, whose products can actually have bad unintended consequences on other people's lives? Like, how do you make sure you're building this way?
Tanya Johnson - 00:11:46: That's actually such an interesting point. And I think sometimes when we hear product managers say, “oh, well, you know, they're using the product wrong”, it's such a cynical approach to take. I think it really lacks context, and it means we need to do so much more around understanding the context that those people are in and really, you know, what lives they're leading, what kind of decisions they make, and how they feel about what they're using. And it definitely is something where, in terms of building products that are very successful financially, we do often need to identify our ideal customer profile, right? And those are the people that we're going after. But as I saw from the example when I was working in education, our users really were teachers and school districts and students. The unintended users for us were the siblings and parents using those laptops when they went home. And we just hadn't thought about that. But if we'd really done a little bit more discovery and actually gone out into schools more than we had, I think we would have quickly picked up on that and thought about who else was in the home. So one of the things that I think we need to do, and it doesn't take a long time, right, is think about not just who you're intending to use this product, but who else might use the product and how they might use it. And more than just who might use your product, who might be impacted by it. In our case, for example, working in crime, we are being used by loss prevention people, we're being used by law enforcement, but there are also people directly impacted by our product: people who are involved in crimes, innocent bystanders in stores around them, the communities that support these people. There are so many different kinds of people who are impacted, and I really do think that we need to widen our circle here and think a little bit more about it. And it isn't onerous. It's not really a deeply time-consuming process. So I don't really think there's a great excuse for us not to be doing this kind of thinking.
Melissa Perri - 00:13:41: One of the things that I think is interesting about what you're talking about, and what I see companies actually do, is that instead of going to do the hard work and looking at all the people who could possibly be using it and doing that user research, a lot of times they try to solve this problem through policies or restrictions or locking down their product. Can you talk a little bit about what you do at Auror when you're trying to balance locking things down versus going out and doing the research, and what you've seen in other companies there too?
Tanya Johnson - 00:14:12: That's an interesting question in terms of Auror because we're very much B2B. And you talk about people restricting access and things. We do restrict access and we have to restrict access because we're working with crime data. That's incredibly important to us. But I think a lot of the times when companies do that, it's because that is an easy way for them to approach it. And we often hear when we're having conversations about software, that's a people problem rather than a tech problem or a software problem. And sometimes customers come to us and ask us to solve problems in tech that would be better solved by process or by managing things on the people side by training, for example. But sometimes it seems easier to solve it with the tech and with restricting access. So it's a little bit of a difficult question for me to answer from the perspective of Auror because our access is so highly controlled.
Melissa Perri - 00:15:00: Well, that's a good point.
Tanya Johnson - 00:15:02: Yeah, which is a little bit different, I think, from, for example, working in education, where that software could really have been used by anyone within a school context. And we needed to be quite careful about different roles and permissions. But for example, you think about a teacher who leaves their laptop open on their desk and students passing by. How do you manage that kind of thing with different roles and permissions? And sometimes we saw students jokingly use their teacher's laptop to send out really inappropriate messages to students at the school. And that's the kind of thing that often you only find out about when your users actually alert you to it, unless you've got very serious instrumentation and you're watching what's happening and who's using it and how they're using it, which, to be honest, I think you should be as well.
Melissa Perri - 00:15:44: So there's a lot of customer research, it sounds like, on your end, that you promote, especially when you're looking into Responsible Tech and AI here.
Tanya Johnson - 00:15:52: Yeah, absolutely. So we do quite a lot of talking about our hypothesis, the problem that we're trying to solve, how we might understand when we've solved it. We go out and speak to people a lot. But one of the things we also use is a very significant amount of data. Our product is really well instrumented so that we can see who's using it, when, how they're using it. And then we can use that as a signal for who we actually go out and speak to. And I've got to tell you, I know that we are moving remote a lot of the time, but there is a lot to be said for actually going out and meeting your customers where they are. I've learned some really interesting things that have changed how we've built our product based on the physical spaces that we see people in. I remember going and visiting security people at a particular large retailer and realizing that the person who used our software was in the back of a really noisy warehouse and they were perched on top of a tiny filing cabinet with a very small monitor and it wasn't properly connected to the internet. I took a team of designers and developers and our QA folks in there and I watched as the QA person followed the internet cable and realized it wasn't plugged in and I watched as the designers realized that this user didn't have three massive monitors out there and we made a lot of changes from seeing that kind of thing. So I think it's really important to actually get out there in person and visit customers and understand the context that they're using your product in more widely than we do now.
Melissa Perri - 00:17:22: So when you think about responsible tech and understanding your users and figuring all these different things out, do you find that there are any misconceptions out there about what Responsible Tech means, how you operate to be responsible, let's say as well?
Tanya Johnson - 00:17:39: I think the biggest misconceptions that I've encountered are around responsibility and accountability. A lot of people that I talk to feel like the responsibility for this lies with a specific person in the organization, usually the CEO, but often I think they're happy with just “anyone that's not me”, when it's really the responsibility of the entire team. We're all responsible, and at the end of the day, we'll all end up being accountable for what happens. And then there's the misconception that incompetence and malice are the sole causes of issues. People think that it's bad people building bad tech, when really a lot of the time it's just a lack of intentionality and accountability that leads to unforeseen consequences. Behind many of the news headlines are simple cases of bias and unrepresentative data, and it can happen to anyone. I think that sometimes when we do this whole bad-people-building-bad-tech thing, we distance ourselves from the reality that we're the ones building tech and this could happen to us as well.
Melissa Perri - 00:18:36: You do bring up an interesting point and I do think it's true. I think there's a lot of media out there who are smearing these tech companies and these people working on it as evil for collecting this data or doing these things because they get wrapped up into something that has a negative impact. But I don't firmly believe all these companies are full of bad people. I believe the product managers who are building these things are mostly, hopefully, good, but maybe just not thinking about all the consequences. But now there's a fear out there about how people use the data. And like for me, you know, in the United States, like TikTok is a big one about we got to lock it down because the Chinese government is stealing our data. You know, people's parents are afraid of it and stuff like that. And I'm over here going, I don't really care. What are they going to get from me? I'm just watching dog and cat videos. But there is a concern out there about data and how we manage it and how we actually think about it. So when you think about Responsible Tech, how can we start to really consider how we track that data and what should we be doing to make sure that we are not having bad unintended consequences about these things?
Tanya Johnson - 00:19:46: You know, the TikTok example. So TikTok is obviously operating across the world and you're watching cute dog and cat videos. But think about the ways that data can leak things about us to people in places where that might not necessarily be okay. We've heard about things like your shopping history and the things that you search for leaking pregnancies, including teenage pregnancies, to parents. I'm queer, and the kinds of things that I might look at on TikTok, if that were to be revealed in certain places, would put me at quite a significant amount of risk. And so sometimes I think, you know, that comes back again to the point around diverse teams and making sure that we have people with diverse experiences, who come from different backgrounds, who can raise those kinds of things as risks. I want to bring it back to an example in our platform: our platform is deliberately incorrect in one area, and that is gender. Like we all know, right, gender is not black and white. It is not just male and female; gender is a spectrum. However, we made a decision to leave our product like that because we understood that we operate in many regions in the world, and not all of those regions are safe for people who exist outside that gender binary. And so if we expanded that, we were at risk of identifying those people in places where it is not safe. We really do need to think quite deeply about the data that we're collecting. And I think that's on us as the people who are building those products rather than on the consumer who just wants to use this thing, right? They're not going to read your 10 pages of terms and conditions. They just never are. I think we need to be responsible with data.
Melissa Perri - 00:21:35: That's a fantastic point. I'm passionate about this as well. It's about building the diverse teams who are actually going to think through these things. And I think what you're homing in on is really important, because you're absolutely right. For me, it's dogs and cats, but other people could potentially be arrested. And we had this issue too at HBS when we went online. We went online to teach classes during COVID and a lot of people had to go back to their home countries. And they made a policy that was really interesting: we could not record and let the recordings of those classes be widely spread, because in some of the classes they talked about sensitive issues that were illegal to talk about in certain countries. And if that person was in that country and they were engaging in that discussion, it could put their life at risk. On the flip side, though, if somebody missed a class or somebody was sick, like a student of mine who was extremely sick and needed the class recording, we had to go through many hoops and hurdles to be able to actually get that person the recording so they could keep up with class and still pass and still get a good grade. So it was such an interesting balance. And I do agree, it's not black and white there, but a lot of companies try to make it that way. And I think that's where the diverse teams come in when you're starting to think about these edge cases or these things that we want to get into. So can you tell us a little bit about how do you think about building diverse teams? And I know a lot of people say, “I need diversity in my team”, like everybody talks about that, right? How do you actually do it? How do you make sure you're building a very well-rounded team?
Tanya Johnson - 00:23:05: Yeah, I'm glad for a start that people are recognizing the importance of it, because when we're building products, if we don't have a diverse team, there's so much that we're going to miss. And your product can be so much more successful if the team's building it, representative of the people who are going to be using it. There really is significant business benefit, and I'm glad we've come around to that because it's taken quite a long time. Now, when I started out in tech, it was the late 90s, and I was almost always the only woman in the room. It was a little bit frustrating. So a number of years into it, I decided that I couldn't possibly be the only woman there. And meeting other women occasionally at conferences, we decided to start a woman in tech network. We saw men really getting ahead based on networks and just how they operate with each other, and we were missing something similar. And so we often hear about like the pipeline problem and there's not enough women in tech, and it's not actually representative of what we see from the data, which is that women do come into tech, but they tend to leave after a certain period, whether that's because they are being shifted out into less technical roles. Sometimes we're good at speaking to people and we're good at communication, and so we find ourselves in people management and that kind of thing, or whether it's because we go into hostile companies and we end up leaving. So we started looking at it from a gender perspective, but when you look at a company and you start building and you're hiring people that you know, what can often happen is, you know, you've got a male founder and some co-founders and you've heard about this great guy that someone went to college with, or when you're interviewing, you're thinking about whether or not you'd like to have a beer with this person after work. And so you end up hiring people who are like you and who think like you. And as your network expands, what you end up with in your company is diversity debt.
You now have a company with 50 engineers, for example, and they are almost all White men. And at that point, trying to get a woman in the door is unbelievably difficult because no one really wants to come into that scenario. So I think that company founders need to be thinking about this really early, and they need to be thinking about their hiring practices. So one of the things that I like to do is, and this is going to sound so nerdy, I'm really sorry, but I use a hiring rubric. I identify what are the kind of things that I want for this particular role? What questions am I going to ask? And I specify what questions I'm going to ask and what a bad answer, a good answer, and an excellent answer is. And this means that whether I'm the one doing the interview or my team's doing the interview, we're asking everyone the same questions, and we can rate them more fairly based on the answers that they're giving us. And we entirely take out that “I would like to have a beer with this person” aspect of it. We're also making sure that our hiring teams are diverse, and that's quite important. You want to see how people interact with different people in the room. One thing that's worth noting, though, is that a long time ago, again, you know better, you do better, some of the things that we would look for when we were hiring were, for example, if a woman asks a question to a male candidate, does he respond to her? How does he handle being challenged? Is the person you're interviewing making eye contact? And now we're starting to get this understanding that we've got quite a lot of neurodiverse people in the workplace, and there are so many cultural differences that can impact how people are in interviews. And so what we were doing was just ending up with a slightly different kind of bias, right? We were ending up with culturally homogeneous teams. And so there's so much more that we need to think about. But I think it's amazing that this is a conversation we're having, and it impacts Responsible Tech and AI in such a critical way that I don't know if you can really do responsible tech if you have a homogeneous team. I just don't know how it would be possible.
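(As an aside, a hiring rubric like the one Tanya describes can live in a shared document, but here is a small, hypothetical sketch of how it might be structured so that every interviewer asks the same questions and scores against the same anchors. The questions, anchors, and scoring are illustrative assumptions, not Auror's actual rubric.)

```python
# Hypothetical structured hiring rubric: the same questions for every candidate,
# with pre-agreed anchors for what a bad, good, and excellent answer looks like.
RUBRIC = [
    {
        "question": "Tell me about a time a feature you shipped had an unintended consequence.",
        "anchors": {
            1: "Blames users or downplays the impact.",
            3: "Describes the issue and the fix that followed.",
            5: "Explains the root cause, the fix, and how discovery changed afterwards.",
        },
    },
    {
        "question": "How do you decide who might be impacted by a product beyond its target users?",
        "anchors": {
            1: "Considers only the ideal customer profile.",
            3: "Mentions unintended users.",
            5: "Covers unintended users, bystanders, and affected communities.",
        },
    },
]

def score_candidate(scores_by_question: dict) -> float:
    """Average the per-question scores so every interviewer rates on the same scale."""
    return sum(scores_by_question.values()) / len(scores_by_question)

# Two interviewers scoring the same candidate against the same anchors.
print(score_candidate({0: 3, 1: 5}))  # 4.0
```

(The point of the structure is exactly what Tanya notes: the “would I like to have a beer with this person” factor drops out, because every rating has to map back to an agreed anchor.)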
Melissa Perri - 00:26:56: And I think it's good to maybe talk about what diversity actually means here too. We're not just talking about gender, we're not just talking about backgrounds or what country you come from or anything like that. It goes a lot deeper. So when you're thinking through what diverse looks like, what are all the different factors that you would actually consider for teams?
Tanya Johnson - 00:27:16: Yeah. I hear a phrase thrown around every now and then, and that phrase is diversity of thought. Diversity of thought is what we're aiming to get to when we hire diverse teams. We're aiming to get to a place where we can consider all perspectives of a thing. And to do that, you need to hire people who come from different backgrounds, who are different genders, different ethnicities. You need to think about things like financial backgrounds as well. Are you only hiring people who come from wealthy backgrounds? There is so much to think about. But what I have seen sometimes happen is that people try to leapfrog past hiring people with diverse experiences and just go straight for diversity of thought. And what you end up with then is, “oh, we've got four White men in the room, but they think differently, so it's okay. We've got diversity of thought”. And that is not a thing. You get diversity of thought as an outcome and an output of a diverse team. There is no way to leapfrog it. So you asked about the things that we think about. Gender is the first one that tends to come to mind. Ethnicity is often second. We also think about things like sexual orientation. We think about things like cultural backgrounds and language; we think about English as a first language and English as a second language when we're building English-language products, for example. And we think about, as I've talked about, wealth and inequality and that kind of thing as well. And there are probably a lot of different types of diversity that we're missing and that we'll discover over time. You and I talked briefly about neurodiversity; we look at things like disability; there are so many aspects that you can look at. But if you have to start somewhere, start by looking at your target market and what that looks like, and aim to at least have a team that is representative of your target market.
Melissa Perri - 00:29:12: What a fantastic idea. I think you might be the first person I've heard actually say that. And I think that's such a great way to think about it. So with Responsible Tech and AI, as we're talking about this, I know that at Auror you came up with a framework for doing Responsible Tech and AI and you've open sourced it, so thank you for making that available for other people to actually look at. So what are some of the key principles in that framework?
Tanya Johnson - 00:29:36: So some of the principles that we've agreed to follow at Auror are around fairness and equality. For example, when we are designing and developing AI systems, we want to make sure that we don't unjustly harm, exclude, disempower or discriminate against individuals or particular groups. We want to make sure that we abide by all applicable laws in the relevant jurisdictions. And I bring this one up because with tech being so ubiquitous, it's often hard to know whose laws apply. And so you'll often find tech companies, for example, only abiding by the laws of the country in which they reside rather than the countries that their users are in. And so we really need to make sure that we are abiding by the laws of every jurisdiction that we work in. And then we look at human rights, so those recognized under domestic and international law, and the specific rights of indigenous people and the cultures in which we operate. So that all comes under the fairness and equality banner. We look at things like transparency, explainability and accountability, and I think this is super critical when it comes to AI. So for example, we have a principle that we will be transparent about our AI activities, including how and when AI is used in our platform, and that the operation and outputs of our AI systems will be transparent, auditable, and generally explainable so that we can understand any outcomes that we're getting from there. And then we look at reliability, security and privacy. So we need to make sure that AI systems and related data are reliable, accurate, and secure, that we're protecting individuals' personal information and privacy, and that we're managing new risks on an ongoing basis. We look at human control and oversight, so we don't let AI systems make decisions on their own. We need to make sure that there is an appropriate level of human oversight of our systems and their outputs. And we also need to put in safeguards to reduce and mitigate potential misuse of the platform, which is something that I think people often don't think about until they actually encounter that misuse, and sometimes it's too late. And then lastly, community benefit. So for us, we're very community-focused and we want to make sure that any systems that we design and develop promote and support the wellbeing and safety of our customers, of our law enforcement partners, but also the communities in which we operate. And so I think other companies might identify slightly different principles. Those are the ones that work for us in the space that we're operating in, which is largely around crime. And it gives us a really good basis to have conversations. So it's something that the entire company has agreed to. And when we are trying to work through how we implement something, or whether we work with a particular system, it's something that we can always drag people back to. It's a little bit like a hook and an anchor for a conversation: we can say, “hey, how does this actually fit with our principle of human control and oversight”, for example. So it's a bit of a tool that product managers can use to make sure that we are doing things the right way, because we've had these principles agreed at the outset.
Melissa Perri - 00:32:34: What a fantastic set of principles. That's really nice and robust, probably more robust than some I've heard before. So I really like how you're looking at every aspect there. When you're thinking about those principles, is there any one in particular that you found the most challenging to implement?
Tanya Johnson - 00:32:50: So the thing that I find most challenging at tech companies is motivating for the development of internal tools, the tools that we need to support the products that we build and the customers using them. We're often so focused on building products that we can sell and drive revenue, and so it's a lot harder to actually motivate to build the tools that need to support the products and the users behind the scenes. And the impact that I find this has, for example, on Responsible Tech and AI, is that often when we talk about things like auditing, people say, “oh, but we have logs”. And “but we have logs” is the worst thing for me to hear, because we need auditing that is available and accessible, that you can understand, interrogate, and ask questions about. And you can also see where there are outliers, where there's behavior that's unexpected. And so often motivating for us to build this auditing, this consumable auditing, into our products is the hardest challenge for me in tech. And I've found that in many, many companies that I've worked with.
Melissa Perri - 00:33:51: Yeah, I think even in companies that are not as critical to people's livelihoods as something that deals with crime, I've seen them be just as unable to put this in place. But I agree with you, it's absolutely critical when you've got lives on the line, or anything that could really impact people like this, to be able to see what's going on. So when you're doing that auditing, what kind of data are you looking for? You mentioned outliers, you mentioned a couple different pieces of it. What would be your ideal setup? What types of information do you want to look at to make sure that you're acting as responsibly as possible?
Tanya Johnson - 00:34:27: I'm going to use the product manager answer to almost everything, which is it depends. So it really does depend on what we're looking at. If I'm looking at our system, for example, and law enforcement usage of it, I would like to see a table of usage that is sortable and searchable and filterable. I want to see who is running searches and what kind of search terms are they using. I want to see if someone is repeatedly searching for the same thing over a very long period of time. I want to see where there's behavior that seems outside the norm. So if, for example, someone is running thousands of searches in a very short period of time, and a lot of the time there's context that explains this kind of stuff. But what it does is it again gives us a bit of a signal as who to ask questions about things and to go to someone and say, “hey, we've noticed that you've searched for the same vehicle registration plate, a license plate a thousand times”. Like, is this about an ongoing criminal case or are they searching for their own license plate repeatedly? And I would love for us to actually use AI as well and Machine Learning to identify outliers here and identify patterns of behavior that we might want to interrogate further. At a very, very, very low level, I do not want to have to ask a developer to search through logs. I want to be able to export data and check it in great detail by myself or to be able to provide that to someone in a position where they have oversight over their organization and make sure that the use that we're seeing is responsible.
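(To make that concrete, here is a small, hypothetical sketch of the kind of outlier flagging Tanya describes over a searchable audit trail: repeated searches for the same term, or an unusually high search volume in a short window. The event fields, thresholds, and function names are illustrative assumptions, not Auror's actual tooling.)

```python
from collections import Counter
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Dict, List

@dataclass
class SearchEvent:
    user: str
    term: str          # e.g. a vehicle registration plate
    timestamp: datetime

def flag_outliers(events: List[SearchEvent],
                  repeat_threshold: int = 1000,
                  burst_threshold: int = 500,
                  burst_window: timedelta = timedelta(hours=1)) -> List[str]:
    """Return human-readable flags for audit-trail patterns worth a follow-up conversation."""
    flags = []

    # 1. The same user searching the same term an unusual number of times overall.
    repeats = Counter((e.user, e.term) for e in events)
    for (user, term), count in repeats.items():
        if count >= repeat_threshold:
            flags.append(f"{user} searched '{term}' {count} times; review with customer success")

    # 2. A burst of searches by one user within a short window.
    by_user: Dict[str, List[datetime]] = {}
    for e in events:
        by_user.setdefault(e.user, []).append(e.timestamp)
    for user, times in by_user.items():
        times.sort()
        for i, start in enumerate(times):
            in_window = sum(1 for t in times[i:] if t - start <= burst_window)
            if in_window >= burst_threshold:
                flags.append(f"{user} ran {in_window} searches within {burst_window}; check the context")
                break

    return flags
```

(As Tanya notes, most flags will turn out to have an innocent explanation; the value is that the signal is available and consumable without asking a developer to dig through raw logs.)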
Melissa Perri - 00:36:07: Yeah, definitely critical. What do you do in situations where you find that the use is not well-intentioned, right? Like if you're looking at an outlier and you see all of these things, what's the responsibility, I guess, of the company to mitigate that and to look into it? I know we talked a little bit about authority and responsibility across the company. And I could see tech teams being like, “oh, this just sits with our safety team, I'm just going to tell them about it”. But what do you think a product manager's role is there when they see something being used inappropriately?
Tanya Johnson - 00:36:39: Yeah, I think that sometimes it's not necessarily the product manager's role to handle the consequences, but it is certainly the product manager's role to raise it when they have spotted something. And I think working with the customer success team, the people who manage the relationships with that particular organization or user, is a really good start. Sometimes the stuff that we see is unintentional. It's someone not fully understanding the impact of what they're doing. And there I think discussions and training might be an appropriate way of dealing with it, but we have been known to just cut off access entirely where we've seen behavior that isn't in line with our terms and conditions and with the agreements and partnerships that we have with the organizations that we work with. The first time we see something, we'll have a bit of a discussion about it, but if there's continued misuse, access is cut off. And I think that that's a really interesting thing when it comes to tools that are used for business. A lot of the tools that we use in our day-to-day life are tools for entertainment. They are tools that just supplement what we're doing. The tool that I work on is usually a very core part of someone's work. And so cutting off access to that tool would significantly impede their ability to do their job, which means that it's a pretty good lever, frankly.
Melissa Perri - 00:37:57: To make sure that people are not acting out of turn or using it for bad behavior. When you think about the future of Responsible Tech and AI, especially with AI being a hot topic everybody's talking about today, and all of our LLMs and ChatGPT and all these things out there. What are you concerned about when it comes to responsible tech and how do you really see this area of the industry evolving over time?
Tanya Johnson - 00:38:24: I think my biggest personal concern is around generative AI and specifically where it comes to content creation. So what we're seeing is the ability to convincingly create videos and audio content that mimics specific people. And in the industry that I'm in, I'm quite concerned about how that might impact evidence and how much we're going to need to find tools that can actually identify where evidence has been tampered with or where evidence has been generated. So I'm quite concerned about that. I don't really feel like we have the sophistication yet to verify that kind of generated content. And so that's deeply concerning to me. That's obviously quite context specific to where I'm working, but I think everyone should be quite concerned about that.
Melissa Perri - 00:39:14: Yeah, you do see this trend online as well of people taking video as the real facts, and what happens when you can't trust video because of deepfakes and things like that? So that is extremely concerning. So as a product leader, how are you thinking about what to do with that going forward, since you're in crime prevention and things like that? Maybe it's "I haven't figured it out yet", but what types of things are you monitoring to keep abreast of that, knowing that this is going to become more of an issue later?
Tanya Johnson - 00:39:43: I think we need to keep across all advances in technology and be having good discussions around them. We have a number of lawyers on our team who look across laws and codes in different jurisdictions. And in fact, one of the co-founders of our company was a privacy lawyer before founding the company. And so we've always come at it from that aspect. So I think we're keeping across changes in technology, changes in the markets that we operate in, changes in law, and how everything works quite significantly. When it comes to other companies and how they do this, you might not have the benefit of having legal folks and researchers and data folks on your team. And so you may not be in as good a position to really understand exactly what's happening and what's out there, but there are so many amazing books and resources and things that are being developed. A little bit like Auror's framework, there's a lot out there that can help guide you on what questions to ask and how to interrogate these things so that you can at least get a start on identifying risks to your business. So there's so much out there. If anything, there's probably too much out there. And it's a case of narrowing it down to what's going to be most useful to you.
Melissa Perri - 00:41:01: And that's useful to you in your specific context. Is that what you're talking about?
Tanya Johnson - 00:41:05: Yeah, absolutely. So I think different people need to go to different levels depending on the product that they're developing, who it's being used by, and the level of risk. We all have different levels of risk with what we're doing. And as I talked about, we're in a very, very high area of risk with what we're doing. So we do need to go to fairly extreme levels to make sure that what we're doing is safe. You may not necessarily need to do the same if the product that you're building is for purposes of entertainment or that kind of thing.
Melissa Perri - 00:41:37: I think that's really good advice for all the product managers listening out there. And I hope they take this to heart, do their research on AI, understand their own biases, and look at their teams and try to create diverse teams. For those who are listening that have power: I know I'm in an extremely privileged situation compared to most people out there that are affected by the things we build, so trying to keep them in mind every single day, I think, is a critical message. So thank you so much, Tanya, for sharing that with us. If people want to learn more about your work or even find the framework to get started with Responsible Tech, where can they go?
Tanya Johnson - 00:42:12: They can go to auror.co, that's A-U-R-O-R.C-O, and there's a link to our Responsible Tech and AI framework in the Trust Center on our website. And we're looking forward to really iterating on it, as we do with everything in product. If anyone does use it, I'm really keen to understand how it helped and how we can improve it over time. We're just keen to move on from the position we were in when we started looking into this, where there was very little available, and what was available was very large governmental documents. So we started with something like a 36-page impact assessment template before we put anything in, and realized that we needed something that was a lot simpler and a lot more accessible for people to use, so that we could actually keep up with innovating and developing technology responsibly.
Melissa Perri - 00:43:01: Well, thank you so much for creating that. I'm sure it's going to help a lot of people out there to get started. And thank you so much for being on the Product Thinking Podcast.
Tanya Johnson - 00:43:08: Thanks for inviting me. I've been a long time listener and a huge fan, so it's pretty exciting to be able to get to chat to you today. And I really appreciate it.
Melissa Perri - 00:43:16: I've really enjoyed having you on here too. So for those of you listening, if you enjoyed this episode of the Product Thinking Podcast, please like and subscribe so that you never miss another episode. We're going to link all of Tanya's links in our show notes. So if you go to productthinkingpodcast.com, you can find a link there to the framework and a link to reach out to Tanya on LinkedIn. So make sure that you head to productthinkingpodcast.com to follow us there as well. And we will see you next Wednesday with another fabulous guest.