This is a transcript of episode 313 of the Troubleshooting Agile podcast with Jeffrey Fredrick and Douglas Squirrel.

ChatGPT is a boring writer, but it can be a fantastic teacher. This week on Troubleshooting Agile, we discuss how you can use AI language models to learn how to have difficult conversations with colleagues and disappoint them helpfully.

Show links:

Listen to the episode on SoundCloud or Apple Podcasts.

Looking Back on ChatGPT

Listen to this section at 00:14

Squirrel: Welcome back to Troubleshooting Agile. Hi there Jeffrey.

Jeffrey: Hi, Squirrel.

Squirrel: So it’s the New Year, and I’m desperately keen that we not make a list of predictions. Can we please not do the state of agile in 2024? Or you know, seven things that we think are going to affect the world? I’ve just had too many of those. Can we skip that, Jeffrey?

Jeffrey: Okay, apparently you and I are on different mailing lists because mine have been blissfully free of that. But I completely understand, it sounds like you’ve been a bit overexposed to everyone’s predictions of the future.

Squirrel: Certainly. And I did one. And people can go read my mailing list if they want to hear what my predictions are, but I’m prediction-ed out.

Jeffrey: Hahaha, okay. Well, instead of a prediction, I have a thought about reflection. In the past year, 2023, there was something, surprisingly, that was a big story for the year that actually did seem to affect me personally, and I’m actually kind of…

Jeffrey: This may lead us to some thoughts about the future, but this whole topic of LLMs - large language models - and ChatGPT and generative AI, I actually think there was something pretty big there. While it’s been discussed a lot, I think it actually had some interesting elements to it that I haven’t heard discussed much, particularly with respect to the kind of things we talk about in this podcast. So would you be willing to reflect back a bit? And if that leads us to the future, that’s fine. It’s not a top ten list, though.

Squirrel: As long as it’s not a top ten list of what’s coming this year, that’s great for me. And I think we’ve been refreshing - at least it’s refreshing to me, when I record these with you, that we haven’t overexposed on this. This is probably our second commentary on the use of large language models in a year in which they were kind of the top technical story in every outlet. So I think we’re scoring pretty well. Let’s try it.

Jeffrey: Hahaha, okay. So, the large language models - maybe I’ll expose a bit about what I’ve done with them this year, or what we’ve done with them in my organization. I have a data science group that reports to me, and we’re becoming kind of the internal experts for us on LLMs. And we were also looking to see how we could democratize knowledge and use of LLMs.

Jeffrey: And so there was a deliberate initiative within the company to boost people’s knowledge and awareness and even use of the technologies. One of the things that I personally did was try to use the LLMs to see how they were as a communication tool. So I took one of the case studies that you and I have talked about in the past, which is the Ted and Paula case study. This is something that we got from Roger Schwarz in his Ground Rules for Effective Teams-

Squirrel: It’s an example of a difficult conversation: Ted and Paula don’t agree about how a presentation went, and Paula isn’t saying it, and-

Jeffrey: Yeah, that’s right.

Squirrel: The goal of the exercise, when you do it yourself - we do this with audiences often - is to get the audience to think of ways that, as Paula, you could say more of what you’re thinking. So that’s just the setup of what Jeffrey’s talking about. Keep going.

Jeffrey: That’s right. Yeah, yeah. And it’s actually called the Withholding Information case study, which Roger Schwarz also uses in his book, The Skilled Facilitator.

Squirrel: We’ll link to it in our show notes. I think we have it somewhere. It’s certainly in lots of his books. Anyway, we’ll find it. Go ahead.

Jeffrey: Yeah. So what’s really interesting is that everyone, of course, reads the case in our two-column case study format - the one we talk about in our book, and that also appears in that book. The right-hand column, which has the dialog, is very placid; there’s really not a lot going on. Paula’s asking, ‘well, how do you think it went?’ and asking some questions, but it doesn’t seem like there’s a lot of conflict there, in a way that’s totally normal and natural. When I do the role play and I only speak out the right-hand side, people are like, ‘yeah, that sounds like a conversation that happens all the time.’

Squirrel: And it sounds like a good conversation, like it’s effective and everybody’s happy. Then you read the left-hand column and you see the seething, boiling lava underneath.

Jeffrey: Hahahahaha! That’s right! It’s a totally different story. There’s all kinds of emotional energy going on, and when you act out the internal voice, you can tell how upset Paula is - all the energy and emotion that’s being withheld. And it’s just not part of the dialog.

ChatGPT Shares Information

Listen to this section at 04:54

Jeffrey: And what we talk about on this podcast and in our book is how to bring more of that into the conversation, so you can have an actually productive conversation, with productive conflict, where different views can evolve. So it’s a very nice teaching tool. And I tried feeding it into ChatGPT to see how it would work. I tried to get it to say, you know, ‘I’m Paula and here’s what I’m thinking and feeling. What should I say?’ And what I found was, it actually gave really good advice. Its version of Paula would have shared all those things, and I actually kind of struggled a bit to try to get it to role play Paula not wanting to say things.
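For listeners who’d like to try this themselves, here is a minimal sketch of the kind of prompt Jeffrey describes. The transcript doesn’t specify his exact setup, so the details below are assumptions: it uses the OpenAI Python SDK (he may simply have used the chat interface), and the model name and placeholder text are illustrative.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Placeholder: paste the full two-column case study text here.
case_study = "...text of the Ted and Paula 'Withholding Information' case..."

response = client.chat.completions.create(
    model="gpt-4",  # illustrative choice; any capable chat model would do
    messages=[
        {"role": "system",
         "content": "You are helping someone prepare for a difficult "
                    "conversation with a colleague."},
        {"role": "user",
         "content": case_study + "\n\nI'm Paula. Here's what I'm thinking "
                    "and feeling - my left-hand column. What should I say "
                    "to Ted?"},
    ],
)
print(response.choices[0].message.content)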

Squirrel: Yeah. So trying to be the original Paula who’s doing a bad job and isn’t sharing information. Yeah, I’m not surprised.

Jeffrey: ‘I’m trying to avoid conflict. I don’t want to hurt anyone’s feelings’ - trying to think of all the reasons and justifications. And while it would modify the wording, it found good ways to bring those elements in. So one thing it made me think about is the future of these kinds of agents as a potential conversational partner, as a way to help people practice their conversations, and also to get advice.

Jeffrey: ‘I want to send this email to this person, and I’m really angry, and here’s what happened’ - as a way to help coach people through. Because I can do it, and certainly you’ve done it with people one on one, but they often struggle to do it on their own. This is why we have the exercises we do, to build those skills. And I just thought it’d be really interesting to see LLMs become a more common part of people’s toolkit for productive conversations, and maybe even expanding that.

Jeffrey: What could they do as maybe a facilitator in a group conversation? So as I’m looking back at the year and thinking about something that was really very different this year than other years, and that I think has big promise for the future - that’s what comes to mind for me.

Squirrel: Well, that’s interesting. I’ve also started to recommend this to my coaching clients, rather than finding a local eight-year-old, which is one of my favorite pieces of advice, because eight-year-olds are very good, first of all, at pretending and, second of all, at being annoying. So if you want to have somebody pretend to be your annoying boss, an eight-year-old is a really good candidate - but also people in your life, colleagues who know the person, or maybe who don’t know the person but know you.

Squirrel: There are lots of good conversation partners that I’ve been recommending for a long time for people to practice a difficult conversation - a trust conversation that you want to have with a stakeholder, like customer service, or the tech team when you’re wanting to agree on deadlines, or some other difficult conversation - and those folks can be very helpful. Now I have said, ‘try out one of the large language models.’ I tend to send people not to ChatGPT, for a reason I’ll say in a moment, but to character.ai, which I am told - and I have anecdotal evidence, but certainly nothing concrete, I haven’t tried it very much - is better at pretending to be somebody else.

Squirrel: So you can have a conversation there with Einstein or Richard Feynman or somebody like that; I think there are kind of these preset characters. But if I understand it right - which I may not, and I would love listeners to tell me what their experience is. And maybe there are better ones out there; these things are cropping up all the time.

Squirrel: The experience is that you can tell it a bit about the person you want - the person who might be difficult, the person who seems to you to be recalcitrant, unwilling to budge on a deadline or give you more information about a client or something like that - and it’ll play that better. It’s kind of designed to be a player. And if character.ai isn’t it, this is where the future is.
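Similarly, here is a hypothetical sketch of the kind of persona description Squirrel means - usable as a character definition on character.ai or as a system prompt for any chat model (that equivalence is our assumption; the stakeholder ‘Alex’ and the details are invented for illustration).

```python
# A hypothetical persona for role-play practice. "Alex" and the details are
# invented; adapt them to the real person and situation you want to rehearse.
persona = """
You are role-playing Alex, a stakeholder in my company.
Alex is recalcitrant: unwilling to budge on the release deadline, and
reluctant to share more information about the client.
Alex avoids conflict by easing in - polite on the surface, but withholding
what he really thinks.
Stay in character. Do not give me advice or break the role-play.
I will play myself, trying to have a difficult conversation with you.
"""
```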

Squirrel: So, okay, fine, we’re in prediction mode - you got me there eventually. But this is one of the things, one of the several things, that large language models are actually good at. They’re terrible at writing: please don’t write anything and put it on the internet, or write your homework, or anything like that with these things - they’re boring as heck. Not to mention they make stuff up. But most important, they just write the lowest common denominator of the internet, which is not a very high common denominator.

Squirrel: But here’s the thing: they’re very good at making stuff up. So let’s use that, and let’s have them be actors. And I really find them effective for that purpose. The problem with the ones that are generic, like ChatGPT and Bard and the new one, Gemini - which apparently is fake or something, Google was messing up their video, but it’s supposed to be really revolutionary - is that all of these are carefully trained. It’s part of the production of these models that they are fine-tuned to be extremely helpful, to share information, to give you more. You ask it, ‘is Rio de Janeiro the capital of Brazil?’ and it says, ‘no! And let me tell you why. Here’s the actual capital and here’s when it was built.’ It gives you a lot of extra information. It’s supposed to do that, because it was essentially given electric shocks during its training if it didn’t. So I’m not surprised at all that it can be helpful, and that it struggles to not be helpful. You have to train it in a different way to really get this ability to act unhelpfully.

ChatGPT is Subservient

Listen to this section at 10:08

Jeffrey: And I think that’s what you had told me was your big concern: while we do tell people to be more transparent and more curious, there are also times where you have to, you know, share your own views and advocate that you’re going to do something that people don’t want you to do - the phrase is ‘disappointing people helpfully.’ And I think that’s the thing where you think the models just aren’t tuned in such a way to assert a position: ‘I’m sorry, I’m not going to get to B because I’m doing A; we agreed that that’s a higher priority, or it’s a higher priority to me, so therefore I’m doing A and not B.’ You’re thinking that just doesn’t fit in the current robot mindset - that the LLMs would be like, ‘oh, you want me to do B? Okay, fine, sure, I’ll go do that instead.’

Squirrel: Or they’ll do an awful lot of what we would call - and what Chris Argyris, who originated all this stuff, would call - ‘easing in.’ Because they’re trained to be so polite and so helpful, you have to think of the tools that are commercially and commonly available, like ChatGPT, as kind of subservient, as servants. They’re trained to act that way, and you notice it in their responses. So I would expect that it might say, you know, ‘I’m terribly sorry. If we could hire some more people, we’d be able to do the thing that you would like. We’re going to do this instead, but it really breaks my heart.’

Squirrel: Whereas that may not be how you actually feel, and it is not the most effective way to disappoint someone helpfully. It’s much more effective to boldly and clearly assert that the priorities have shifted - that their item is less important and something else is more important, and why - and not to give a lot of apologies, easing in, trying to make the other person feel good. Because you don’t actually want the other person to feel good; that’s not sharing what you have in your left-hand column, what you are actually thinking and feeling. Now, if you really do feel that way, that’s great, and the tools probably can help you with phrasing if that’s something you struggle with. But most of us don’t.

Squirrel: Most of us struggle with coming in and saying, ‘hey, Jeffrey, we’re not recording a podcast today. It’s more important that I shovel the garden, because there’s been ten inches of snow here in Britain.’ There hasn’t been, but if there were, that would be the sort of thing that I think Jeffrey would find most helpful, because then he could say, ‘great! I’m going to go surfing!’ He doesn’t live in Britain; he lives someplace where he could do that. And that would be more helpful than me spending five minutes apologizing to him before I told him.

Squirrel: So that’s not what I’m going to see these models doing yet. Now, two weeks from now somebody may invent something new - it is a new year, and we’ve had a lot of revolution in the past year. But right now, if you want to practice having difficult conversations, which, if you listen to us at all, you know we really want you to do, use the models with caution. Try to get them to play a difficult role, to be the difficult character, if you can - but you’re still, I think, going to need some human help at the moment to learn how to disappoint people helpfully, as assertively as you probably need to.

Jeffrey: Yeah. And we’d love to hear people’s feedback on that. One thing I’ll also say is, I’d love to hear from people if you go and try to get it to help you with what you say: put all your context in, put all your left-hand thoughts in, all the things-

Squirrel: Very important, yes.

Jeffrey: -that you’re struggling with. Don’t give ChatGPT your draft; instead, give it all your context and see what it comes up with. I want to see if you have the experience I did in the role play, which is that it was actually better and braver about bringing in some of the elements than you might have been yourself, I think. And I’ll be curious, Squirrel, whether your point about the lack of confrontation is something that people see.
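As one last illustrative sketch, here is a context-first prompt in the shape Jeffrey suggests: the situation, the unspoken left-hand column, and the goal, rather than a draft to polish. The scenario and wording are invented for illustration.

```python
# A hypothetical context-first prompt; the scenario is invented. Note that it
# shares the situation and the unspoken "left-hand column", not a draft email.
prompt = """
Situation: I promised a stakeholder feature B this quarter, but we've agreed
that feature A is the higher priority, so B will slip.
What I'm thinking but not saying (my left-hand column): I'm worried they'll
escalate over my head; I'm frustrated that the scope keeps growing; I don't
want to seem unhelpful.
Goal: disappoint them helpfully - state the trade-off clearly and be curious
about their concerns, without easing in or over-apologizing.
What should I say?
"""
```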

Jeffrey: But I do think this is an area that, as I look back on 2023, was one of the most interesting things to happen in terms of communication, and I think it could have a big impact going into 2024 and beyond on communication and teamwork and the kind of things that we talk about. So really, I’m hoping this is something that our listeners go and test, and then write in and tell us, like, ‘yeah, it was great,’ or ‘no, it was terrible, and here’s why.’ Because I think there’s a lot of potential here.

Squirrel: We would really love to hear from you if you’re doing anything with these tools. If you are writing documents that people read, please don’t write to us using ChatGPT - I think that would probably be terribly boring. So please write to us in your own voice. We would sure love to hear from you if you are using them to practice, and if maybe you’ve been inspired by us to go and try it. What are your results? When you ask the tool to be a boss, or a difficult employee, or somebody that you’re having a difficult conversation with, what do you see? What results do you get?

Squirrel: This is an experimental world - we’re no longer writing deterministic programs here, so we’re going to have different results, different outcomes. And I know Jeffrey and I are going to continue experimenting in this area, so we’d sure love to hear what you think. You may also think this is the dumbest idea ever, and that humans are never going to be replaced by computers for human conversation. That would also be an interesting point of view, and we’d love to hear all of that from you.

Squirrel: The way to get in touch with us, of course, is at agileconversations.com. We post on X, and we’re trying out other things - tell us where else we should - so you can find us there. We would love to hear from you. And of course, there you’ll also find our book, free videos and material for practicing conversations, and a whole bunch of other good stuff. So we’d really like to see you at agileconversations.com. And the other way to keep in touch, whether you use ChatGPT to summarize us or not, is to come back next Wednesday, when we’ll have another episode of Troubleshooting Agile. Thanks, Jeffrey.

Jeffrey: Thanks, Squirrel.