It's the TED Radio Hour from NPR. I'm Manoush Zomorodi. On the show today, Humor Us.
VINITH MISRA: I have some jokes. I can't, you know, promise that they'll be funny. But they are jokes. I think you'll agree that they are jokes.
ZOMORODI: This is Vinith Misra.
MISRA: These are - the way I describe it is if you can imagine a late night host cracking these. That's the aesthetic.
ZOMORODI: Vinith is a computer scientist. But I called him to hear some jokes.
MISRA: A new report says that Americans are living longer. That's the good news. The bad news is that a lot of them are living in New Jersey.
ZOMORODI: (Laughter) All right. I grew up in New Jersey, but I can take it.
MISRA: (Laughter) Jersey is a bit of a punching bag. But speaking of punching bags, a Florida man says he found a rattlesnake in a bag of frozen broccoli. That's terrible.
MISRA: You should have bought fresh.
ZOMORODI: (Laughter) OK. I'm laughing because it's true. Frozen broccoli is gross. It's just mushy. Now, it's debatable whether these jokes are funny - maybe, like, so bad, they're funny. They're definitely weird.
MISRA: I'll give you one more. I enjoy this one. A woman in Australia who found a spider in her bananas took it to a reptile park, where it laid over 1,000 eggs. The woman said she was surprised to find so many eggs. But then she remembered she'd bought them at Costco.
ZOMORODI: Oh, yeah. That's good. That's good. That one's for my mother-in-law, I think.
What makes these jokes so interesting is that Vinith did not write them. A friend did not write them. They were not discovered in some bargain bin joke book. In fact, no one wrote them.
MISRA: Yeah. So these jokes were actually generated by a machine.
MISRA: There is a model called GPT-3, which is made accessible via an organization called OpenAI. And it's actually not a joke model, per se. It's a more general language model. And it serves a lot of different purposes. But it turns out it's also quite good at figuring out the aesthetic and shape of jokes.
ZOMORODI: Vinith has long been fascinated by artificial intelligence. He's worked on AI at IBM, Netflix and now Roblox. And he says a big hurdle with artificial intelligence is just getting people to interact with machines and AI more seamlessly. Computational humor could help bridge that gap.
MISRA: That's right. I'm talking about computational humor. That's using computers to generate and understand humor.
ZOMORODI: Here's Vinith Misra on the TED stage.
MISRA: It's an actual field, no joke.
MISRA: Sorry. So computers today, see, they're getting smarter. They're getting smarter. But they're also developing a sense of humor. And they have the potential to change how we relate to our mechanical friends, but also how we relate to each other. And to be clear, I don't think this is just a curiosity. As computers increasingly surround us in our lives, I think it's going to be a necessity. Now, I wasn't always convinced of the value of relatable machines, let alone making you laugh with software. Why would I need my software to lighten the mood, right? But then I took a closer look at myself. See, I'm not an angry man. But I routinely fantasize about taking my laptop and smashing it against a rock. Now, people - people frustrate me, too. But the difference is that with people, I have a safety valve called humor. Even on a call with Comcast, someone cracks a joke, it changes the whole dynamic. We can look at humor as sort of the WD-40 of human interactions. In a world where we're increasingly surrounded by computers, we're going to desperately need some of that lubrication, or we're going to drown in the frustration.
ZOMORODI: Vinith's appreciation of the power of humor goes way back to his childhood, when he realized how quickly jokes could help him make friends.
MISRA: We were moving around a lot. And I didn't really have a very static set of friends. I was born in India. I moved to Pennsylvania, then Alabama, then California. And, you know, I wasn't necessarily the most gregarious person to begin with. And it wasn't the easiest thing. But I did realize, you know, like, cracking jokes could be a way to make those connections. And what I resorted to was, in hindsight, basically, creating these sort of algorithms for jokes.
MISRA: So I'd start with a joke like this one. What do you call a bee that eats too much? Chubby. OK. I was in elementary school, cut me some slack. Now, I'd recognized, not being a total moron, that the humor in this joke comes from the similarity of chubby and bee. And then I'd replicate this tons of times. What do you call a bee that's good for your health? A Vitamin B. What do you call a newborn bee? A baby. What do you call a bee in the spring? A maybe. You get the idea. Now...
MISRA: ...What's interesting, though, is that this process that I described for you, this very hack-ish and uncreative process, it may not sound like an algorithm, but it is.
ZOMORODI: And now, as an adult, as a technologist, you look back and you see sort of very rudimentary coding that was happening in your mind, right?
MISRA: Yeah. Yeah. Absolutely. I mean, that stuff was essentially algorithms, right? It's a very simplistic algorithm. So if you had to replicate what I was doing with a computer - and this is, actually, kind of, like, old school natural language processing or computational humor. So in this case, you craft a sentence that's structured in a very particular way, that needs certain words or phrases that fit certain parts of speech and rules within that sentence. And you look up, basically, lists of words that kind of fit that profile. And you kind of swap them in. It's just a very deterministic, honestly, kind of a dumb process. Most of the thinking is happening when you're creating that structure in the first place. That's where that - that's where to put on your thinking hat.
ZOMORODI: It kind of reminds me of Mad Libs, you know, just swapping in nouns and verbs to make funny, surprising sentences.
MISRA: Yeah, no, absolutely. I think Mad Libs is a wonderful connection to, you know, these sort of algorithms. And Mad Libs are actually - they are an algorithm. They just have a prompt where humans need to enter some of the data in there.
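The template-and-swap process Misra describes can be sketched in a few lines of code. This is a hypothetical illustration, not his actual childhood "algorithm": a fixed sentence frame with slots, plus a hand-built word list of "bee"-sounding puns that get swapped in deterministically.

```python
# A hypothetical template in the spirit of the bee puns: a fixed sentence
# structure with slots, plus a word list whose entries fit those slots.
TEMPLATE = "What do you call a bee {trait}? A {word}!"

# Each entry pairs a "bee"-sounding word with the trait that sets up the
# pun (illustrative examples modeled on the jokes in the interview).
WORD_LIST = [
    ("that eats too much", "chub-bee"),
    ("that's good for your health", "vitamin B"),
    ("that was just born", "ba-bee"),
    ("in the spring", "may-bee"),
]

def generate_puns():
    """Deterministically swap each (trait, word) pair into the template."""
    return [TEMPLATE.format(trait=trait, word=word) for trait, word in WORD_LIST]

for joke in generate_puns():
    print(joke)
```

As Misra notes, all the real "thinking" happens when a human crafts the template and the word list; the swap itself is purely mechanical, which is exactly what old-school computational humor looked like.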
MISRA: But to use this more generally and broadly, we're going to have to go beyond puns to the more unstructured and subtle humor that we humans engage in pretty regularly. I mean, think about the last thing that made you laugh. Chances are not only was it not a pun, it probably wasn't even a joke. In some sense, the real goal for us here, it's not necessarily to create machines that are going to write jokes for us but to create machines with personalities that we find humorous or amusing.
Now, to get to personalities, though, we often have to go through language. And language is a bear. Your average English speaker knows tens of thousands of words and breaks grammatical rules about as often as he follows them. And even if you can get past that, there's issues of ambiguity, context and general commonsense knowledge. When I ask you, how much does President Obama make, somehow you know I'm asking about his salary, not about how much soup he makes. This is very hard to encode into an algorithm. But in recent days, we may have caught a break. We may have found a back door. And this back door has a big sign on it - that's a hint - and the sign reads data.
MISRA: Most behaviors, most phenomena in the world are just so complicated, it's actually pretty hard to capture all of them with a set of rules. And that's where machine learning kind of came in as an alternative to all of this, where instead of trying to write a lot of these rules yourself, you actually let the machines kind of do that for you. You give a bunch of data to them. That's just sort of evidence and observations about whether it's human behavior, human language, what have you, maybe examples of jokes. And you sort of allow the machines to figure out those rules on their own.
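A minimal sketch of "letting the machine figure out the rules": a word-level bigram model that counts, from a tiny made-up corpus, which word tends to follow which, then walks those learned transitions to generate new text. The corpus and all names here are invented for illustration; real language models like GPT-3 work on the same give-it-data principle at vastly larger scale.

```python
import random
from collections import defaultdict

# Toy training "data": a few joke-shaped sentences, made up for illustration.
corpus = [
    "a man walks into a bar",
    "a dog walks into a library",
    "a man walks into a library and asks for a book",
]

# "Learn the rules" by counting which word follows which -- a bigram model.
transitions = defaultdict(list)
for sentence in corpus:
    words = sentence.split()
    for current, nxt in zip(words, words[1:]):
        transitions[current].append(nxt)

def generate(start="a", max_words=8, seed=0):
    """Walk the learned transitions to produce a new (often nonsensical) line."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(max_words - 1):
        options = transitions.get(out[-1])
        if not options:
            break
        out.append(rng.choice(options))
    return " ".join(out)

print(generate())
```

No rules were written by hand: the model's "knowledge" is just counted observations, which is also why its output has the shape of the training data without any guarantee of making sense, much like the cherry-picked GPT-3 jokes Misra describes.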
ZOMORODI: And how good are they at this point? Because, you know, of course, we have facial recognition and...
ZOMORODI: ...Photo recognition that's gotten really good. Like, it knows if it's a tiger, that's for sure.
ZOMORODI: It now knows if it's you. But where are we in terms of - language is a very different beast for...
ZOMORODI: ...Machines to understand and parse, right? And I feel like a lot of the time I hear that as we talk about machine learning, we don't necessarily know what the machine has taught itself to look for, right?
MISRA: Yeah. Yeah. There is a little bit of this black box phenomenon where you're sort of throwing a bunch of data at it and it's sort of figuring it out, but it's actually kind of mysterious what it's doing inside there. You know, most of the computers you interact with today when they say something to you or they print some language to you, something for you to read, more often than not, that is - almost always that is not generated by a computer. That's usually written by a human being and it's being kind of canned and delivered to you, right? And the reason is that the generation just isn't quite reliable and good enough. So even those examples of jokes I gave earlier, those are cherry-picked, right? Like, those are probably, like, the top 10% of outputs from the model. I can give you the more typical outputs.
ZOMORODI: Yeah, let's hear it.
MISRA: Yeah. So here's one. A man in Florida was arrested after he tried to pay for his McDonald's order with cocaine. The police said they knew something was up when he tried to get change for a Happy Meal. So it's kind of, like, surreally funny. And it has the shape of a joke, right? It's got, like, Florida. It's got, you know, McDonald's, cocaine. These are all things you would expect to be funny.
ZOMORODI: (Laughter) It has Florida.
MISRA: That's a really strong indicator, honestly. But it's - but at the end of the day, it's not really a cohesive joke. It doesn't really make a lot of sense.
ZOMORODI: No, it's kind of trippy. I mean, the key here is that machines need vast amounts of data, right? Like, so where does this data come from?
MISRA: Yeah. No, it's a great question. And yeah, there's this tenet in machine learning called garbage in, garbage out, right? Like, if you put in garbage data, your machine is going to basically be producing garbage for you, too. But it's also biases. And, you know, like, the type of data you're training on will inform your outputs in ways that you might not even realize. It's really, like, holding up a mirror to ourselves and in some cases not the prettiest part of ourselves.
ZOMORODI: You're reminding me of that infamous incident a few years ago when Microsoft had a Twitter bot and - racist Twitter bot. Do you remember that?
MISRA: Yeah. Yeah, that was bad.
ZOMORODI: Can you explain what happened there?
MISRA: So what Microsoft created was a - basically a chat bot on, I believe it was, Twitter. And the interesting thing about this bot was not only was it trained on historical language, but it was also being trained on the conversations it was continuing to have with people. That is, people could basically continue to influence how this bot behaved by engaging with it on Twitter. So you can imagine where the story goes. It wasn't very long before they had turned this seemingly, you know, innocent social experiment into, you know, this horrifying, like, Nazi-like Twitter presence that was saying things that, you know, I couldn't even repeat right now.
ZOMORODI: OK. So, clearly, computational humor can get dangerously offensive very quickly, especially if it's trying too hard.
ZOMORODI: But what about when the stakes are lower?
MISRA: I think, like, it's interesting because designers, I feel like, have some rules of thumb, and they've evolved over time around, like, what is the right amount of humor to inject into your web app design or your copy that you're putting into your website? And those norms have shifted over time. Like, if you look at what a 404 page looked like in, you know, 1998 and you compare it to the kind of pun-type, jokey 404 pages that we see nowadays...
ZOMORODI: You mean like when a link is broken and you end up on a webpage where it says, oops, nothing to see here.
MISRA: Yeah. I think, you know, there's maybe a computational humor opportunity there, right? Like, what if you could actually remix and have multiple, like, 404 pages and it actually becomes kind of, like, a little bit of softening of the experience, of the frustration that you feel when you're not finding what you're looking for?
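The 404 "remix" Misra imagines is easy to sketch. This is a hypothetical example, assuming a hand-written pool of messages rather than anything generated by a model: the server simply varies which bit of humorous copy it serves, softening the dead-end experience.

```python
import random

# Hypothetical pool of 404 messages -- the "remix" idea: vary the copy so
# the error page softens the frustration a little (messages invented here).
MESSAGES = [
    "Oops, nothing to see here.",
    "This page took a wrong turn somewhere.",
    "404: this page is on a coffee break.",
]

def pick_404_message(rng=None):
    """Return one message at random; pass a seeded Random for reproducibility."""
    rng = rng or random.Random()
    return rng.choice(MESSAGES)

print(pick_404_message(random.Random(42)))
```

A step further, in the spirit of the interview, would be swapping the fixed pool for model-generated copy, which is where the reliability problems he raises come back into play.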
ZOMORODI: You're reminding me of my experience the other day trying to rebook an airline ticket with a bot that the airline named Nelly. And she was not funny. She wasn't even that helpful. But could we get to a place where these programs are good enough to be reliably funny and make these interactions a little more enjoyable?
MISRA: It's a good question, and I think it's, like - it's also a bit of a philosophical one around art because, like - because comedy is a form of art. And I think humor is an example of that, where - you know, a joke, you know, was not written by a human. Is that still funny? And I guess in some ways it is. I think it's definitely testing of the limits of, you know, like, what, you know, human-computer relations can look like. Yeah.
ZOMORODI: I mean, is this the ultimate Turing test, do you think - like, the ultimate?
MISRA: That's a great question. In certain ways, yes - right? - because humor is - it is often thought of as sort of, like, the - you know, the place where we're not - the farthest frontier of human intelligence and natural language because it encompasses so many things that are difficult to quantify. It contains a lot of cultural context around, like, the types of things that people tend to find funny, and those often vary culture to culture. And yet it still has to kind of make sense, which is the - you know, arguably the least quantifiable and hardest thing to really get your head around.
MISRA: So yeah, I mean, I would definitely argue, you know, like, if we've been able to solve humor, that is, in many ways, the hardest problem. So, again, I think, like, there's clear, like, utility to humor, but I do think there's some nuance around, like, when it's appropriate to use it and when it's not. So - and I think even, like, in these human-designed, like, experiences, that's still, like, being figured out, honestly.
ZOMORODI: So watch this space.
MISRA: Yes, definitely watch this space.
ZOMORODI: That's Vinith Misra. He's a computer scientist at the video game company Roblox. You can see his full talk at ted.com.
On the show today, humor us.
Copyright © 2022 NPR. All rights reserved. Visit our website terms of use and permissions pages at www.npr.org for further information.
NPR transcripts are created on a rush deadline by an NPR contractor. This text may not be in its final form and may be updated or revised in the future. Accuracy and availability may vary. The authoritative record of NPR’s programming is the audio record.