
Ethical challenges when creating avatars and chatbots: interview with Mitu Khandaker

March 5, 2018

(This interview transcript has been edited for clarity and hilarity. LISTEN to the entire conversation. We chat with Mitu about the impact of immersive technology on how we relate to characters, comparisons to Westworld, what Alexa might look like, and everyone's favorite character--Clippy)

 

Joe Leonardo: Welcome to the Funny as Tech podcast. I am your co-host and comedian Joe Leonardo. And to my left is David Ryan, the tech ethicist, and we have a special guest today. We're fortunate to have Mitu Khandaker.

 

David Ryan Polgar: Mitu Khandaker is a game designer, scholar, and entrepreneur. She holds a PhD on the aesthetics of interactivity in video games, completed at the University of Portsmouth in 2015. Prior to that, she was a 2008 Kauffman Global Scholar and received a Master's in Computer Engineering from the University of Portsmouth. A rising star in the games industry, she won the Breakthrough Brit BAFTA in 2013 and the Creative English Trailblazer Award in 2014.

 

Welcome to Funny as Tech.

 

Mitu Khandaker: Hi, thanks for having me, guys.

 

DRP: So we love this issue, and we've discussed it a few times, but we've never really dug into the concept of virtual assistants, especially how we interact with them, some of the gender issues, how we boss them around. We'd love to start the conversation there.

 

JL: I think actually my Alexa bosses me around because I slur my words a lot and she doesn't understand and I get frustrated and I take it personally as if she doesn't like me or something. I think I have the opposite problem with tech. (laughter)

 

MK: (laughing) You feel abused by tech?

 

(banter)

 

DRP: When most people right now think of virtual assistants, which is a broad term, what are they mostly thinking about?

 

MK: So conversational interfaces as a whole are obviously really taking off. A lot of people have Alexa, and a lot of people are now more comfortable than they were before interacting with, like, Siri or Google Assistant on their phones. And I think a large part of that is just because it's a really easy interaction, right? Like, we use language every day. It's a super easy way to interact, and I think that if we're thinking about accessibility of interfaces, it does make sense.

 

The thing that I did my PhD on was looking at interfaces, specifically in video games, the sort of interfaces that we use in fictional settings: as they become more immersive (i.e., VR and AR), how does that change our relationship with the characters that we're interacting with in those experiences?

 

So let's say a character is in trouble in a video game. Does the way we respond to them change if we're playing that game in virtual reality versus just playing it while looking at a screen?

 

JL: I wonder if it's because all of your senses are immersed in virtual reality?

 

MK: Right. The idea being that we start thinking of them more like real people, so we expect them to behave like real people, and our sort of mental connection with them is more similar to the way we think about each other as humans. And I think that matters a lot.

 

My day job now is I'm the co-founder of a company called SpiritAI. We actually operate in sort of a space which is very similar to things like chatbots. What we do is we make tools to help create dynamic, autonomous characters: AI-driven characters who can improvise over a certain narrative space. So if you imagine Westworld, that's a really good way of thinking of it.

 

So in Westworld, the characters are sort of tied to these stories that they're a part of, but they can kind of improvise around those stories. Those are the tools we're helping create. We're basically helping to create Westworld. (laughing)

 

(banter)

 

DRP: A happier version of Westworld.

 

MK: But this is the thing. I think Westworld obviously raises some really interesting issues around this, right? Around the personhood that we assign to these characters, because we know that they're virtual--even though they seem real to us in every kind of way.

 

MK: I'm very interested in this idea of how our conversational agents are going to be represented in the future. Right now, with our tools, we're helping all kinds of partners create AI-driven characters. Some of them are represented in various ways: they have a virtual avatar, or we're helping people create holograms, etc. What will it mean when Alexa becomes a hologram? What will it mean when Siri becomes a hologram?

 

Because right now, obviously, these things are represented like people in some way, in that they have recognizably female voices. But will they look like women? What will be the race with which they're represented? What will they look like?

 

JL: They could go with like a non-human figure. Or make it customizable.

 

DRP: That's the idea behind the Hello Barbie Hologram. This idea that you could actually change the skin color, change the hair color, change the eyes.

 

JL: I'd make mine me.

 

(laughter)

 

MK: I think the reason that this is such an interesting set of questions is because right now, if you think about kids growing up with Alexa and Siri...the fact that with Siri and Alexa you can just give them orders. And they don't necessarily expect, like, a please or thank you or anything. They're also represented with a female voice. What does that do to kids growing up? And even adults, right?

 

DRP: And we're calling them "assistants."

 

MK: I mean, let's face it, a lot of the history of this stuff came from the fantasy of secretaries who can't say no...who have limitless energy and will fulfill any request for you. So once we start rolling in the idea of these assistants, or virtual characters of any kind, having bodies, we really need to start thinking about what they look like. What race are they represented as, as well? Because that, especially, has massive implications.

 

JL: A lot of it is on the developers' experience in the world. If it's just a bunch of straight white dudes with a very limited experience, you know, that could impact what they design.

 

(DRP discusses how test marketing may lead to choosing female characters)

 

DRP: Does the very nature of repeating the past then just perpetuate gender stereotypes?

 

MK: Yeah, I think so. It's not just the representation point, it's how we talk to them, right?

 

Let's say it's a digital assistant of some kind, with an avatar that's female and, let's say, looks like a person of color. And we are just ordering it around. What does that mean? Is that OK? Would it be OK if it was represented as white? Would it ever be OK? Is it OK to order any AI character around at all? Maybe we need guidelines in place so that we treat all robots with kindness.

 

JL: What if they just make a very big version of Clippy?

 

MK: Maybe that would solve it. (laughing)

 

JL: It's not a human, it's not an animal, it's just a thing.

 

MK: A very annoying sentient paperclip? I think that's a solution. Microsoft Word clearly had it. (laughing)

 

 

LISTEN to the entire conversation below and also connect with Mitu Khandaker!

 

Twitter: @MituK

SpiritAI

 

 
