This is a transcript of episode 218 of the Troubleshooting Agile podcast with Jeffrey Fredrick and Douglas Squirrel.
Dragan returns to talk with us about why small batch sizes are valuable, and how to apply this idea way beyond just release frequency, to pull requests, co-creation, and more.
Show links:
- Dragan’s website
- Dragan’s Twitter
- Dragan’s LinkedIn
- Systems Thinking
- Promiscuous Pairing
- Theory of Constraints
- Donald Reinertsen, The Principles of Product Development Flow
Listen to the episode on SoundCloud or Apple Podcasts.
Introduction
Listen to this section at 00:11
Squirrel: Welcome back to Troubleshooting Agile. Hi there, Jeffrey. We have a guest again, don’t we?
Jeffrey: We do! We are joined once again by Dragan. We were talking last episode about psychological safety, and we were very excited to have a follow up here about the impact of batch size in learning and building psychological safety. So, batch size as an intervention point in a system. Last episode, Dragan, you talked about the difference between a monitoring point and an intervention point, and we’d like to hear more about how batch size plays in. I’m curious and have all kinds of questions here, what’s the monitoring point that corresponds to batch size? What are the places of impact when you adjust batch size?
Dragan: There are so many things related to batch size, and ‘lean’ folks have done tremendous work when it comes to that. They noticed this interesting relationship between the size of the batches and psychological safety. The teams and organizations that lacked psychological safety also tended to be incentivized towards bigger batches, because it’s difficult to endure continuous work together and some people think that they’re being monitored or observed, right? So we try to get all the things as perfect as possible before trying to get the feedback. And there’s this other side of it: if we reduce the size of the batch, we tend to get more chances for building psychological safety through building acceptance and trust in the team. When I talk about that, I have in mind this idea of ‘if it hurts, do it more often’ which Squirrel mentioned last time. But the idea is trying to drive down the cycle of pairing and mobbing in the sense of ‘how long does it take us before we switch the roles?’
Squirrel: ‘Wait, I just learned about pairing, I’m confused. You’re saying that I’m supposed to switch roles. How does that work?’
Dragan: Exactly, every pairing and mobbing session involves different roles, and one typical way of doing it is ‘drivers and navigators,’ and after some time we get to switch the roles. It can be 20 minutes, 50 minutes, 10 minutes, whatever, as long as we rotate all the roles throughout the team. I think it’s important, when it comes to building psychological safety in teams that are just starting with mobbing, to try to make this cadence, this cycle, as short as possible. Because, you know, then no one person gets stuck under the pressure of being observed or watched, the way some people tend to interpret this when they start working in this way through co-creation. Shortening the cadence tends to help with that because we rotate faster and get to expose ourselves sooner, and then we start building on this trust even more as we go. So that’s one of the interesting examples of how batch size helps with psychological safety.
Squirrel: The other pattern I’ve seen frequently is one I’ve heard called ‘promiscuous pairing.’ This is where you pair with many different people, so that you get an opportunity to interact with many other folks. That would be another small batch size; I think you might say we should do that frequently? Or do you see it differently?
What is a Batch, Really
Listen to this section at 04:18
Dragan: Yes, promiscuous pairing is one of the ways of doing pair programming. You get to switch the pairs after some amount of time, and if you turn this up to 11, so to speak, then you get to mobbing, right? If you accelerate promiscuous pairing to the max, you end up in the state of mob programming, because we switch the pairs more and more frequently until we reach the point where we are actually all working together. So yes, that’s one of the patterns.
Jeffrey: It occurs to me that we’re using the phrase batch size, and we might be using it in different ways to refer to different things. Can you give me some examples of batch size in a software team? What are the things that fall into batch size? I think as we’ve talked, we’ve established that a promiscuous pairing session is one type of batch, and we also talked a little bit about releases. These seem like very different concepts. How can we tie them together with batch size?
Dragan: If you think about the typical way of working in a team that tends to be a bit more siloed, you have different roles, you know, designers and developers and testers. So the idea of batch size is actually the amount of work that is transferred from one stage to the other, the size of the work that is transferred from designer to developer and developer to QA, etc. This batch size is much more visible in the asynchronous way of working, in this very traditional role separation. But as you work together more, these things tend to be vaguer and harder to see, right? To give you an example, if you do pull requests, which something like ninety-five percent of the industry is currently doing, you have development time and a review time, right? So someone is authoring the pull request, then someone is reviewing it. The amount of work that is transferred from development to review is the size of the batch. But if you try to squeeze this time between development and review, and you continue reducing the size of the pull request, then at one point you get to a place where you have a pull request of one line of code that is reviewed as it’s being typed, so you reduce the latency between the author and reviewers. That also helps you reduce the size of the batch. Effectively, you’re getting into co-creation, into pair programming, because then things actually have become continuous.
Squirrel: I never thought of that. It’s almost like a limiting process in calculus: you’re taking a process that’s discrete and happening infrequently, you do it more and more frequently, and in the limit you end up with pairing, mobbing, and co-creation. Did I hear that right?
Dragan: Yeah, exactly.
Jeffrey: And from a systems point of view, lean theory or theory of constraints or whichever, what’s good about small batch size? What do we get from it? I think for all of us, we start a priori in this conversation knowing that small batch sizes are good. It’s worth, I think, going back to that. Why are small batches good again? We can see that we end up with co-creation, so one way to look at this is that co-creation patterns are good because we end up with small batch sizes. What are the benefits we’re getting from those small batch sizes?
Dragan: There are so many benefits. One of the ways to think about it comes back to the agile movement itself, which I interpret as getting value to users sooner. That means if we try to think about being able to course correct sooner, that also includes reducing the size of the batch. So what this means is, instead of us sending 10 features to a customer in one release, we could release just one feature. We are going to get the feedback about it sooner, which is going to help us course correct if needed. We are working in complex adaptive systems, so exploring the unknown space is kind of the default, right? So trying to build in the mechanisms that help us navigate, to find this value sooner, means building in the mechanisms that help us get the feedback sooner. If we want to do that, then we need to reduce the amount of work that we have bundled for sending to the end customer. It involves lots of steps in between, all the way from the business to the end of the value stream, trying to reduce the amount of work in progress in order to get this piece of work to customers. This way we are able to figure out sooner if we are on the wrong path. There are lots of other things when it comes to small batches; going into lean theory, the throughput tends to go up, so that’s another thing. If you want to dive deeper into this topic, Donald Reinertsen’s book The Principles of Product Development Flow is a deep dive into it. So that’s one more recommendation.
Squirrel: That book is absolutely wonderful but, health warning, it is very dense, so be ready for a lot of theory.
Connecting the Dots
Listen to this section at 10:41
Jeffrey: Fantastic. One thing I liked in your description there is you capture the different levels this is happening on. That batch size, on the one hand, is a batch that’s coming out from the company at the business level to the customers. Then you also have the smaller batches inside the system as work is being done, and you bring in the different intervention points. We could be having the smallest possible batch size between us. We could be doing co-creation and have these very small batches between people within the team, and still have a very large batch that we’re releasing out to clients.
Dragan: Yes.
Jeffrey: These are two different kinds of batch size. So if we come back to what we’re monitoring, on the one hand we could be looking at monitoring the release cadence of the team, and then within the team we can be looking at, as you talked about, maybe pull request size, what’s the average time or size of pull requests? Do I have that right as places we might look to, to see what’s happening, to understand our system?
Dragan: Yes. Our end goal is definitely to try to have across the whole value stream reduction of the batch size, right? Because we might be doing small batches in one part of the value stream or the whole development flow is going that way, but if at the end we release this in a huge chunk, then it causes lots of other problems downstream. One of the benefits of small batches is staying in the context, so if something goes wrong in production with the customers, the context is very recent so you are able to troubleshoot and course correct sooner. But if you batch up the work before releasing to the customer and something goes wrong, you have to find a needle in a haystack. The context has been lost because who knows when this feature was worked on or developed? So yes, trying to have this in the whole value stream as much as possible.
Jeffrey: Developers of a certain age have lived through the rise of continuous integration servers at least, if not continuous integration as a practice between people. Having automated build feedback is something that I’ve seen spread fairly ubiquitously, and something that didn’t exist for the most part in the 90s, and it’s the same idea that we’re getting faster feedback with smaller bits of code. So rather than what used to happen, six or nine months of developer work that then were going through an integration phase, the batch size of people trying to figure out how to make their code work together was literally months and months of work. Today, if I commit my code, I have the expectation I should be hearing back fairly soon. Minimally, there should be a nightly build process that will tell me, and the difference in batch size between 24 hours or five minutes as opposed to months is a radical difference.
Dragan: Exactly. We were talking also about the social and emotional parts of the socio-technical system. When you think about these old times—I’m sure there are still companies doing this even now—where you release the software after a year and then you have a hardening sprint or fixing bugs and stuff like that, I’ve seen companies opening champagne and celebrating after being able to go through all of these phases and finally release. This is emotional-level feedback, and this excitement and celebration also tells us about the size of the batches that we have in the system. It’s very valuable feedback to have in mind. In my opinion, teams should feel it’s business as usual almost all of the time. If we have these spikes in the amount of excitement, or the opposite of that, it can also tell us a lot about the size of the batches that we have in the system and delayed feedback loops.
Jeffrey: I love this idea of the champagne consumption graph as a monitoring point for your batch size.
Squirrel: Maybe we should just have champagne all the time.
Jeffrey: Well actually, that’s exactly what I was going to say! Really, having moved to daily demos where people say ‘here’s what we’ve completed today,’ and demoing out of production, I have exactly that feeling. We don’t drink as much champagne, but-
Squirrel: Just a thimbleful, a little bit every day.
Jeffrey: That’s right.
Squirrel: That sounds super. Well, we could keep talking to Dragan about this forever. We really appreciate having you on and talking to us about so many interesting topics: emotions, psychological safety, batch sizes and champagne. Just wondering if people want to get in touch with you? Where’s the best way to do that?
Dragan: I tend to post on LinkedIn, Twitter, and also have my blog with micro blog posts. Small batches!
Squirrel: Excellent. Thanks, Jeffrey and Dragan.
Jeffrey: Thank you, Squirrel.
Dragan: Thank you.