This is a transcript of episode 250 of the Troubleshooting Agile podcast with Jeffrey Fredrick and Douglas Squirrel.
Following last week’s debate, Squirrel and Jeffrey share their different takes on the value of estimates when a tech team reviews its planned work in a retrospective at the end of a cycle, such as a sprint or a quarter. Jeffrey argues that checking what the team achieved against what it planned can lead to important lessons about dependencies, inefficiencies, and skills gaps, while Squirrel thinks the risk of the estimates “escaping” is too high, and that the team ought to be able to notice process failings as they go without the “crutch” of estimates.
Show links:
Listen to the episode on SoundCloud or Apple Podcasts.
Introduction
Listen to this section at 00:11
Squirrel: Welcome back to Troubleshooting Agile. Hi there, Jeffrey.
Jeffrey: Hi Squirrel. So, as you predicted…(laughs)
Squirrel: We’re picking up from last week, are we?
Jeffrey: That’s right. Last week we talked about estimates at the beginning of a process, and then you said, “Let’s talk about the other half. What about the end of the process?” I talked about how we had just finished our planning cycle, and I think among the most important parts of that cycle is the lessons learned. We look back at what we said at the start of the previous quarter. In this case, starting Q4, we look back at Q3: based on what we said versus what we did, what did we learn throughout the quarter? Then we make sure we take those lessons into our Q4 planning.
Squirrel: I’m a big fan of that. I think that’s fantastic. I just think you can do it without lying to yourself with estimates.
Jeffrey: One of the things we got into last time is “who are the estimates for?” There are different stakeholders: there are estimates we do for stakeholders outside the team, and those we do for the team itself. What I have been focused on here is the latter. At the end of the quarter, part of the value for the team itself is to compare what they had predicted versus what they actually did. I think that’s a valuable reflective exercise to then decide what we want to do differently going forward. One of the lessons learned that came out this last time was that they felt they had not understood, not fully appreciated, the interdependency across different teams on this project. This is a fairly complicated project we were running, with multiple teams, and the dependencies and hand-offs between teams had more impact than they had realized. So essentially, when people were estimating, they didn’t do enough cross-functional collaboration to take into account the workload generated by the dependencies between teams.
Are Estimates the Only Source of Feedback?
Listen to this section at 02:55
Squirrel: Okay, you got some tremendous value here in understanding what you could do differently for the next phase of this project, but the question I have is: why were estimates important in that? Because it sure seems to me that the team would say, “Man, we had no idea that there would be so many dependencies. As I’m thinking back on the quarter, I found myself phoning Team X all the time, asking them what was done and what wasn’t, and trying to figure out how to coordinate. We didn’t appreciate that at all at the beginning.” I could imagine getting that insight, and notice I didn’t say anything about the estimate.
Jeffrey: I think you’re right that there are other ways to get it than estimates, at least in some cases. For me, the estimates act as a fallback. They’re sort of like an end-to-end test of the whole system, which will catch things that other checks don’t. It’s not that I couldn’t have caught them with something else, but I often don’t. I’m heavily influenced by a discussion session I had many years ago about the book You Are Not So Smart, a collection of 50 cognitive biases we’re subject to. We did this reading and we wanted to ask, “Well, what could we do if we accept that we’re subject to all these biases?” Especially hindsight bias: the fact that we have mutable memories, that we rewrite our memories based on things we learn later. How do we learn effectively when we can’t trust ourselves, or our own memories? The single most effective thing we could try was to write things down ahead of time, and then go back later and check what happened against what we wrote down at the time, rather than against our memory of our prediction.
Squirrel: You’ve just said a key word there. You said prediction. I think I agree fanatically with that one. There’s a great example from the scientific world: the notion of P-hacking, and other ways of fooling yourself into thinking you were testing the hypothesis you actually proved rather than the one you went in with. You can prove a lot of stuff that turns out not to be true, and it’s got Nobel laureates and other very clever people fooling themselves, because they didn’t write down their prediction at the beginning of the psychology experiment or whatever it was they were trying to test. So I think that’s fantastic. I just don’t see why you need an estimate. You could have the prediction. For example, “we will have something really valuable to show every week,” or “every week the conversion rate will go up and we will never have any difficulties with the vendors that provide the tools we’re using.” That could be one of your predictions. You get to the end of the quarter and you say, “Oh my God, February was a total write-off. The vendors couldn’t get anything done and we were on the phone to them instead of writing code for the whole month; our prediction was wrong.” That sounds really useful. Again, I don’t need to say “I think I will have this thing done by February the 14th at 4:57 in the afternoon” in order to get that insight.
Jeffrey: We don’t do that. We don’t have the granularity you describe, in two ways. One is that on this project, we’re not at a place where we have end users using it in this time period. We’ll have beta access, so we are looking to say, “we expect to see some change in behavior that we can measure among beta clients,” for example.
Squirrel: Okay. “I predict that our beta clients will love this and they’ll write to us about how great it is and they’ll use the features every week.”
Success Is Not Accurate Estimates
Listen to this section at 07:24
Jeffrey: Right. But we do make predictions of what those features are going to be. So we’re saying, “we think we’ll be able to do this set of things within this quarter, and it will have this effect on their behavior.” When we don’t deliver that set of things, the question becomes, “well, why not?” Here’s a key point: this is not a bludgeoning exercise to say, “oh, you thought you would get these four things done and you only got two things done, so you’re terrible.” That’s not how this goes. It’s “you thought you’d get these four things done, you got two of them done. What happened?” There can be different kinds of answers, including, “hey, we learned that there is this fifth thing that’s even more valuable, so we changed priorities.” Or it could be, “oh, we learned this thing was a lot harder than we thought and we decided there were better alternatives.” Or even, “yeah, we were just wrong about what we could get done,” which has the clear follow-up: “Why is that?” I think this is where we get into what it was that we failed to anticipate and how we can avoid that failure in the future. That’s where the value comes in that discussion: finding places where we can change our behavior in the future for better results. Also, those missed estimates sometimes represent a lack of knowledge we can now remedy by understanding where we were wrong.
Squirrel: I would love that result. I’m not a fan of saying, “well, let’s estimate so that we can estimate better in the future”; that seems self-referential to me. But the second part, about learning where we don’t understand our system, sounds great. Why wait till the end of the quarter to discover that? Why not discover it as you’re delivering each week, when you hit the problem, and then look back at the end of the quarter and say, “Yeah, this was happening a lot. This is something we can do something about”? I just think the estimate is seductive. It’s so easy to think, “Well, if we could just be more predictable, if we could just have better estimates, then that would be a signal that we’re doing better.” And I’m just not convinced that that’s the case.
Jeffrey: Okay. This is interesting: estimates as a signal that we’re doing better. I think this is where you and I agree: I would never say the accuracy of our estimates is the measure of success.
Squirrel: Oh, great. Okay, good. Glad we agree about that one.
Jeffrey: I think that’s a really interesting distinction here. Because the goal is not to have accurate estimates. It’s just what we’re looking at to see what could help us learn more. I wouldn’t say that we’re good because we have accurate estimates. We’re good if we’re getting good results with our clients, if we’re delivering value.
Squirrel: That’s what I’m after. I just say skip the estimates, because I don’t see how they add anything to it. If they do for you, I’m not going to argue with success. But I’ve seen a lot of cases where the estimates escape, they get off the ranch and become a bludgeoning exercise; that’s the thing I’m trying to hedge against.
Jeffrey: I think I would look at that and say, “well, it turns out you have other problems in that environment.”
Squirrel: No question. But I think that happens enough that it makes me worry that many of our listeners and many of my clients have those problems. Therefore, I think estimates are kind of like TNT. You know, if you put it in the right place and blow it up, you can get a tunnel. That’s good. You can also have a real problem.
Jeffrey: I can definitely agree: if we’re in an environment where these are going to be misused, then we should be cautious about how we proceed. Estimates might not have the right risk-reward ratio there.
Squirrel: There we go. Well, that was a very interesting debate. Thanks, Jeffrey.
Jeffrey: Thanks.