This is a transcript of episode 258 of the Troubleshooting Agile podcast with Jeffrey Fredrick and Douglas Squirrel.

A listener asks, “how will we know it’s working?” when considering changes of process or technology. Jeffrey and Squirrel discuss how to set “imperfect indicators” to measure your progress—or lack thereof!—along the J-shaped curve that takes you through learning to improvement.

Show links:

Listen to the episode on SoundCloud or Apple Podcasts.

Introduction

Listen to this section at 00:11

Squirrel: Welcome back to Troubleshooting Agile. Hi there, Jeffrey.

Jeffrey: Hi, Squirrel. We have a listener question this week, something we haven’t done in a while, so-

Squirrel: We have listeners? And they ask us questions!? I’m shocked.

Jeffrey: It happens from time to time. This week we heard from Jonathan, who says: “Hi Douglas and Jeffrey, I have a question that’s been rolling around in my head and bouncing off others’ for a while.”

Squirrel: Sounds painful.

Jeffrey: “I still don’t feel like I have a satisfactory answer. Now, the question is probably deceptively simple at first blush. ‘How do we know if this is working?’ And by this I mean some new initiative or change in how we’re working. Let me illustrate with an example. So a team has really slow software delivery, and it takes two weeks for their separate QA team to do testing before release. So we decide to move QA into the development team. And chaos ensues. Many on the team start asking to revert to the old configuration because it felt better. So the question is this: how do we know in the midst of this chaos if this is working or it should be aborted? How do we know when it’s time to call the experiment a failure rather than try longer?” He also gives some other examples. “Maybe this one’s too easy because it’s a very standard change people make, but maybe there are others people might experience where it’s less obvious, like switching from Scrum to Kanban or vice versa, or going from a co-located team to distributed or distributed to co-located.”

Steady Hand or Change Course?

Listen to this section at 01:36

Squirrel: Sure, but they’re all the same. I don’t think any are more obvious. I think there’s a common pattern here.

Jeffrey: So what do we do? How do we know? Following the change, when someone says, “Hey, look, the old way was better, let’s stop this experiment,” how do we know whether they’re right or whether we should keep on?

Squirrel: So I have a whole theory on this which I teach often in a different context, but it applies just as well here. When I’m doing strategy workshops for companies and we’re working out what the organization—the tech team, the company as a whole, or some division—should be doing, one of the things that often comes up is “How will we know if this nifty new amazing market or technology or approach is working?” Well, let me say what you don’t need. What you don’t need is a very accurate measure that’s very carefully calculated and comes in a year later. If it’s delayed in that way, it won’t be useful. You’ll discover a year afterward that this was a disaster. What you want to do is find out a week in that it was a disaster. So that’s why I say that what you want to look for is imperfect indicators. There’s a common notion that people talk about of leading indicators; I’ve modified that to say imperfect because I want to underline how important it is that these indicators are wrong. The reason I say that they need to be wrong is that almost certainly whatever it is that you’re going to measure… what was the first example that Jonathan gave us?

Jeffrey: Moving the QA team into development.

Squirrel: Very good, often works, sometimes fails, but you’re not going to know whether that’s actually improved productivity until you’ve been through perhaps quite a few sprint cycles, you’ve released a number of products, you’ve seen whether the quality has actually gone up or gone down, and you’ve had a chance to get feedback from customers. That’s going to be months if not years in the future, by which time you’ve invested. Maybe some people have quit in disgust because they didn’t like the new method. You’ve put a lot at risk for a long time with no information. You wouldn’t want to do that in any circumstance. What’s much better is to find something that is not going to give you an accurate picture. So you’re looking for something that’s wrong, but that tells you that you’re on the right track. That gives you enough information to be able to say not “This definitely is wrong, we definitely should change it, we should definitely abandon this” or “It’s definitely a success,” but that the balance of the evidence, the preponderance of the evidence, is that this is probably working, so we should probably continue. So in the QA example, Jeffrey, can you think of any imperfect indicators in that case? What would be an imperfect way of knowing that your QA efficiency had gone up, that your team was catching bugs, without waiting for releases and customers and complaints?

Jeffrey: Right. That’s a great question. I think it has to do with the theory of what benefit we expect to see, which I’ll come back to in a moment. But the fact that people say, “Oh, this thing that we’re doing that’s new feels worse.” That should be expected. I think that’s worth saying more about. But on the other hand it’s not really data. We’re trying something new and it’s hard. Well, of course it is. You didn’t know how to do it. Back to the theory of what you expect to be better: in this case the problem we’re trying to solve is that this separate QA team and downstream testing leads to this extra two-week cycle at the end. Well, without waiting for multiple releases and seeing the long-term impacts, are we able to get a release out without having that separate step? That might be something I would look for because that’s my hypothesis of where we expect to see benefit. So I’m going to ask “Do we see evidence that our motivating reasoning was accurate, that this has the potential to pay off the way we hoped?” So that’s what comes to mind. Does that fit the kind of thing you’re looking for?

Squirrel: It does, because it has an important element, namely it’s wrong. By that I mean it’s going to tell you merely the time taken. It might be that quality is worse, that morale is down, that we’re getting less information because of all these adjustment factors, all the things that are happening as we’re learning how to do this new method. But you’re measuring an important factor: “Does this actually speed things up?” Because it’s entirely possible that in your situation, for your world, the fact that you haven’t done the QA afterwards results in actually slower releases. Maybe you have to do an extra compliance step, there are additional certifications that you need to do, and those things take longer because you didn’t separate out the QA bits. Well, if that was happening, then it is a disaster. You should stop and reconsider. Maybe you can do it a different way so you don’t wind up with this problem, but you’re making the core thing worse, so it’s unlikely that you’re going to get the ultimate benefit. You’re not measuring these other elements, which are very important and which may scupper the plan as well, but you measure the key one, and by measuring that one, you get a real customer-focused benefit. That’s another element of imperfect indicators: it’s customer-focused, it’s driving your profit. There’s a business reason to do it, and your indicator has some connection—not a perfect connection—with the outcome that you’re ultimately looking for, namely getting those features more quickly to customers and making more money from them.

Jeffrey: That’s right. I like that what we’re really looking for in an imperfect indicator is tied to our expected benefit, because now I can look at the other examples that Jonathan provided, like Scrum to Kanban or vice versa. I like the fact that it’s vice versa because it could be either change. Either way we’re doing it, we have a theory, a motivating reason to try it, and so we’re going to focus on checking for some progress towards that thing that we said we cared about.

Squirrel: Exactly.

Jeffrey: That will matter more than comfort.

Squirrel: As long as you have one or more imperfect indicators, then you kind of have a glide path. You kind of know whether you’re on track to land the plane short of the runway, or past the runway and bang into the trees, or whether you’re basically headed for the right height, the right altitude. When you have that guide, it gives you a much clearer sense of whether, as Jonathan asks, you should stop the project, adjust it, or keep going.

Expected Friction

Listen to this section at 08:38

Jeffrey: This topic ties back to something we’ve talked about before, the expectation that things will be worse before they get better. We talked about the J-curve and the cost of making change, the need to recognize upfront that it’s going to be difficult, and that you should expect, for conservation-of-energy reasons, that if you’re learning something, your output elsewhere will be lower because you’ve put your energy into the learning. I feel this idea of the imperfect indicator is another sort of preparation that you’re doing with people psychologically. We should expect it to be uncomfortable, and now you’re saying we’re going to come up with this measure and it’s imperfect, so that when people say, “But wait, you’re missing all these other factors!” your answer is, “Well, yes, that’s what I meant when I said it was imperfect. Those things are important; maybe my initial imperfect indicator isn’t sufficient, maybe there’s another one that’s also imperfect but better. But we’re not looking for the ultimate correct answer where we know for sure. We want to know how soon we can get some signal that this change is working.” Is that right? Do you see that as an important part here, the psychological side and managing people’s expectations in that way?

Squirrel: Absolutely. You should set up your imperfect indicators at the beginning. That’s what I do in my strategy sessions, and that’s what I would recommend to Jonathan in these sorts of organizational change situations as well: ideally, as you and I would always say, you jointly design this so you get people involved and committed and interested while you still make a firm decision. And when you’ve made that decision, you say, “Okay, we’re going to measure with this for a while. If we really seem to be missing something, we can always adjust the indicator. But we know whatever we use, it’s going to be imperfect. And we’re going to read this out every week, every day, every month.” I would say every week is probably the longest interval you want to aim for; if you’re reading out the indicator and checking where you are any less frequently than that, you’re going to leave a lot of room for doubt and uncertainty, and you’ll miss opportunities to change the change program, to change whatever it is you’re doing, or to drop it.
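As a minimal sketch of what such a weekly readout might look like in the QA example, here is one way to track a single imperfect indicator, time from “dev complete” to “released”, grouped by week. The data, field names, and code are illustrative assumptions made for this write-up, not anything specified in the episode; in practice the dates would come from your ticket tracker or deployment logs.

```python
# Hypothetical weekly readout of one imperfect indicator:
# days from "dev complete" to "released", summarized per ISO week.
from collections import defaultdict
from datetime import datetime
from statistics import median

# Illustrative data only: (work finished, released to customers)
releases = [
    (datetime(2024, 3, 4), datetime(2024, 3, 18)),
    (datetime(2024, 3, 11), datetime(2024, 3, 22)),
    (datetime(2024, 3, 25), datetime(2024, 4, 1)),
]

by_week = defaultdict(list)
for finished, released in releases:
    year, week = released.isocalendar()[:2]      # group by release week
    by_week[(year, week)].append((released - finished).days)

for (year, week) in sorted(by_week):
    days = by_week[(year, week)]
    print(f"{year}-W{week:02d}: median {median(days)} days "
          f"from dev-complete to release ({len(days)} releases)")
```

The point is not the code but the shape of the habit: the indicator is cheap to compute, deliberately ignores other factors like quality and morale, and gets read out on a short cycle so the team can decide early whether to continue, adjust, or stop.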

Jeffrey: I like that because if you keep going back to your measure, you’re aligning people back to “Let’s remember what’s important. Let’s remember why we’re doing that.” In my experience, that element is really helpful in any kind of change program because you’re coming back to “Remember, this is the reason why we’re doing this. Yes, it’s painful right now. And guess what? We’re probably bad at this because it’s new to us. But let’s remember what we’re going for.” Adjustments that we make will probably be more effective because we have that end in mind.

Squirrel: There you go. Okay. Well, Jonathan, I hope that we answered your question. We certainly enjoyed talking about it. It was a great prompt for us; feel free to ask us more about it. Thanks, Jeffrey.

Jeffrey: Thanks, Squirrel.