This is a transcript of episode 276 of the Troubleshooting Agile podcast with Jeffrey Fredrick and Douglas Squirrel.

Are assumptions killing your curiosity? On the podcast this week, Squirrel and Jeffrey discuss abduction - but not the alien kind! Listen to learn about the methods of abductive reasoning and coherence busting, and find out how you can apply the principles of Wordle to improve your outcomes.

Show links:

Listen to the episode on SoundCloud or Apple Podcasts.

Introduction

Listen to this section at 00:11

Squirrel: Welcome back to Troubleshooting Agile. Hi there, Jeffrey.

Jeffrey: Hi, Squirrel. You know, I was really hoping to talk about abduction today.

Squirrel: Wonderful. Well, as you know, I’m a big believer in UFOs, and I was recently taken up to orbit. So alien abduction is near and dear to my heart. But maybe we want to talk about a different kind of abduction.

Jeffrey: Yeah actually, what I meant was abductive reasoning.

Squirrel: Oh, that’s another important topic. We’ll talk about the little green men next time.

Jeffrey: Yes, please.

Squirrel: Excellent. Well, abductive reasoning is an idea I just encountered a couple of weeks ago; I think you know more about it than I do, Jeffrey. The idea is that it’s neither inductive nor deductive. It’s not Sherlock Holmes and it’s not scientific method. Is that what you wanted to talk about?

Jeffrey: That’s the one. The kind where the Wikipedia page gives you the example of Mastermind, though I think more relevant for people these days is probably playing Wordle.

Squirrel: Oh, yeah. Good one.

Jeffrey: My wife loves Wordle and is always saying, “Oh, I got it today in three,” for example. That’s a good day.

Going the Wrong Way for a Better View

Listen to this section at 01:23

Squirrel: But that means she failed twice. Twice she came up with a guess, and each one almost certainly was better than the previous one, and each one was plausible. She didn’t guess that the word was 15 letters. She guessed that it was zebra, or combs, or something. You make a reasonable guess, and you iteratively improve it. But you’re neither trying to reason from first principles to determine that the answer must be zebra, nor are you observing an awful lot of pieces of data and then getting an answer. What you’re doing is coming up with a best inference, which you can then refine and improve. This turns out to have tremendous relevance to Agile software development.

Jeffrey: Well, one thing you said is that each guess was better than the one before, and that’s almost explicitly not true in her approach. Some of the guesses are made simply to gather information. In other words, it’s not that the guess is likely, but it’s a guess that’s designed to gather the maximum information.

Squirrel: Okay, better in a different sense. So better in terms of getting more information. It might be that you try all the vowels within the first two guesses, and therefore you can make your third guess in a more educated way.

Jeffrey: Precisely. “Oh, look, one ‘E’ is the only vowel in my five-letter word.”
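The information-gathering guess Jeffrey describes can be sketched in code. This is an illustrative sketch, not anything from the episode: it scores each candidate guess by the entropy of the Wordle-style feedback it would produce over the remaining possible answers, so the highest-scoring guess is the one expected to narrow the field the most. The word list and function names are invented for the example, and the feedback function simplifies Wordle’s repeated-letter rules.

```python
from collections import Counter
import math

def feedback(guess, answer):
    # Simplified Wordle feedback: 'g' = right letter, right spot;
    # 'y' = letter appears elsewhere; '.' = letter absent.
    # (Real Wordle handles repeated letters more carefully.)
    result = []
    for g, a in zip(guess, answer):
        if g == a:
            result.append('g')
        elif g in answer:
            result.append('y')
        else:
            result.append('.')
    return ''.join(result)

def expected_information(guess, candidates):
    # Entropy of the feedback distribution: a guess that splits the
    # remaining candidates into many small groups is worth more bits.
    counts = Counter(feedback(guess, answer) for answer in candidates)
    total = len(candidates)
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

def best_probe(candidates):
    # The "information-gathering" guess: not necessarily the likeliest
    # answer, but the one that gathers the most information.
    return max(candidates, key=lambda g: expected_information(g, candidates))

words = ["crane", "slate", "adieu", "zebra", "combs"]
probe = best_probe(words)
```

This is the abductive loop in miniature: each probe is a plausible guess chosen to refine the next one, not a deduction of the final answer.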

Squirrel: But the important abductive characteristic is that the guesses are plausible, the guesses are reasonable approximations. The claim is that humans have this capability, and machines are catching up to us only on their inductive ability: ChatGPT can ingest the entire Internet and talk to us close to the way a human would, and that’s inductive behavior. That’s taking a lot of data and converting it into a conclusion. Abduction says, “I’m going to take some data and convert it into a partial conclusion, which I can then iterate on.” And that’s what we do in Agile software development! That’s what I was most excited about.

Jeffrey: Exactly. It’s that point that for me is the most interesting, because I hope this podcast helps people to put a label on something they may have done but not had a word for, which in this case is to proceed on the basis of a guess, knowing that you are dealing with uncertainty. And I wanted to contrast that with the fallacy of affirming the consequent, which is when people come up with an explanation and assume it’s correct because it’s consistent. The example on Wikipedia is, “If the lamp were broken, then the room would be dark,” and taking that and going, “Well, the room is dark, so the lamp must be broken.” Now that’s not valid, because there are other possibilities for why the room might be dark, like the switch is off or there’s no lamp in the room.

Squirrel: I’m wearing a blindfold.

That Would Make Sense

Listen to this section at 04:21

Jeffrey: Yeah, exactly. Many possible explanations. A common mistake I see is that people will come up with one explanation, and then, because it could be correct, they stop. In fact, it’s so common it’s laid out in the book Thinking, Fast and Slow, where they describe it with the acronym WYSIATI: “what you see is all there is.” This describes the process by which our “fast brain” works: not by what’s correct, but by what’s coherent with the data you have. And as soon as you have a coherent explanation, you stop.

Squirrel: And that leads to the fallacy that I hear so often from my clients. Often when they start getting coaching they say, “I’m just so frustrated!” Either it’s “my engineers commit to a date and then they don’t get anything done by then!” Or they’re the recipients, and they say, “I’m trying to deliver, and I explain what things go wrong, but everyone says, well, it has to be done on this date.” Both of those folks are applying deductive reasoning, and they’re affirming the consequent: “We’ve made a plan. The plan was based on reasonable assumptions and beliefs, and therefore we should be able to achieve the plan.” Nothing could be further from the truth. That’s the unfortunate reality that Agile development keeps trying to help people understand. Unfortunately the fallacy is so attractive that many of my clients pay me a lot of money to disabuse them of it.

Jeffrey: Right. We’ve talked about this in a different form in the past when we’ve talked about the need for coherence busting. I think it’s been so long, it’s worth describing what we mean by “coherence busting.”

Squirrel: Absolutely. Do you want to do it or should I?

Jeffrey: Why don’t you.

Squirrel: Okay, I will. So it was an idea that you came up with, if I remember right, because you used to do it with your kids. And then I gave it the name coherence busting. What you do when you coherence bust is you notice yourself affirming the consequent. You notice yourself applying deductive reasoning where it doesn’t apply, or where you think it might not. The strongest signal of that is your confidence. When you think it’s really right, that’s exactly when you need coherence busting. You might believe that the engineers are too lazy to finish their code on time, and none of them are working at the pace that they should, and we should push them more. That would be a very coherent belief. It has a lot of consequences which you can observe, and you can say, “This matches the data that I’m observing. I’ve come to this conclusion by watching things happen. I can see the steps that I followed to get here. And so it looks right,” which is why you need to do something else. So in coherence busting, you come up with as many different explanations as you can. They should be inconsistent, so that they can’t all be true. And at least one of them should be ridiculous in a humorous way, because when the humor centers in your brain are operating, you’re more open to learning. You can actually check that neurologically. So in that case where I thought the engineers were lazy, I might come up with several different explanations. One is that the engineers don’t have the right skills, so they’re trying but not succeeding. Another is that the engineers are secretly being paid by a competitor, so they’re intentionally sandbagging the effort and trying to go more slowly. And a third is that aliens, the ones who were abducting me, have beamed messages into their brains, scrambling them and causing them to be bad at their jobs. Now, you laughed at that one.
And that’s very good, because it’s extremely unlikely to be correct. And actually all three are unlikely to be correct. And they can’t all be true, right? They are inconsistent. They’re inconsistent with the original conclusion. And that means that if you hold all of them in your head at once, and they all match the data, then your brain is more ready for inquiry, for being curious. You can then in that spirit go into the situation and say, “Well, how could I eliminate aliens? That’s pretty easy. How could I eliminate bribery from a competitor? Not too hard. Missing skills? Well, I’d have to ask a lot of questions. That would be good. And maybe I’ll discover it’s neither skills nor aliens nor laziness, but some other thing.” And that’s what you want. You want to discover that unexpected result, which you’re going to get to by abductive reasoning rather than deductive.

Observations are Not Symptoms

Listen to this section at 09:01

Jeffrey: Yeah, and an example that came to mind while you were describing that: I was talking to some people worried about a particular engineer, who they felt maybe wasn’t very good, wasn’t performing well, because they weren’t checking in a lot of code and they weren’t getting much done. That was one possible reason. Maybe this person wasn’t very good, or maybe they were checked out; that’s something I’ve often heard people described as. But when we looked into what was really happening, the person was actually one of the most senior people on the team and was spending a lot of their time helping everyone else. All these other people were making great progress because this person was coming around and unblocking them, going from one to the next across about a dozen people. They’d essentially make a cycle, spending 5 or 10 minutes with each person helping them get on their way, and that’s how their day went. So the whole team was making progress because of this individual, and the people at a distance who weren’t aware of that felt like, “Oh, this person seems kind of checked out, they’re not really getting much done.” Very, very far from the truth.

Squirrel: Had you gone in with the assumption that this person was checked out, you would be asking them lots of questions about their commitment and their interest, and you might never discover that actually the person was giving lots of help to others and was very valuable to them. With a coherence-busting preparatory step, you get yourself ready to think, “Well, you know, I’d better talk to more people than just this person who appears to be slacking,” and that gives you the opportunity to learn more. So that’s the abductive approach, which, so far as I know, we haven’t managed to program ChatGPT to do. So it’s a place where I think our jobs are safe.

Jeffrey: And I think that’ll give people a place to use this. I’m hoping people will use the idea of abductive reasoning in their workplaces to question the certainty of what plans they have in place or what prior thoughts they have. But even more, I know everyone will get to use it now, because ChatGPT is something everyone’s talking about and you can sort of wisely sit back and say, “Yes, but how is it on abductive reasoning?”

Squirrel: And you can smile while people talk about aliens. Don’t, don’t worry, I don’t actually believe in aliens, but it’s more fun to start that way. Thanks, Jeffrey.

Jeffrey: Thanks, Squirrel.