How many times are we faced with this situation: you’ve got 9 balls up in the air and someone tosses in a few more. But these are on fire. And have spikes. And the severity of the flames and points depends on which person is looking at all of the things you have up in the air.
So what do you do?
You may be lucky enough to have departments that communicate at multiple levels, so priorities have been decided before anything gets tossed your way, and you can hand a few balls to someone else to juggle while you keep everything else up in the air. At other times, everyone’s project is THE MOST IMPORTANT THING IN THE UNIVERSE and you have to finish it NOW NOW NOW!!!
I think most of us can admit that we run into the latter far more often than the former. So, how have I survived this long without losing my hair? (Okay, most of my hair?)
- Surround yourself with people you can trust, who are competent, and who are willing to pitch in when needed.
- Identify those who are not going to do more than is absolutely necessary, and find something to inspire them to go the extra few feet, or take the extra step.
- PLAN PLAN PLAN for the what-ifs that can force you to start juggling one-handed or blindfolded.
- Take those short pauses between major incidents to figure out how to become more efficient: maybe automate parts of the process, maybe train or promote someone to be your backup.
I think the main mistake people make when they come out from under one of these huge events is to sit back and relax. To me, this is the most important time to learn: What went right? What went wrong? And how can we take the emergency out of the situation so that next time it can be planned for and handled better?
I like planning mock engagements; I’ve done it in several past jobs. If time permits, I have the team practice before deployments to make sure all the materials we need (test plans, logins, setup data, bodies) are ready and available. This has helped shape our process, which I’m pretty proud of at this point. I’ve even gone so far as to hamstring a practice run to see how we handle it and to gauge how much it throws us off. I once walked out of my office in the middle of a practice deployment and shut off one of my QA testers’ machines while he was testing. He was not very happy, but he recovered quickly and moved to another machine while a teammate printed out the materials he needed.
He responded quickly, but it still cost us time in the end: he hadn’t been tracking where he was in the test case, so he had to start that section over. We held a post-mortem meeting where we all sat down and went through what was good, what was bad, and where we could make things more efficient. The main thing that came out of that meeting was that everyone should keep a printed summary of everything they need, so losing a machine or access to the network drive wouldn’t hinder them as much. We also decided that physically checking off steps, with the manager reviewing for completion, was a good idea.
Honestly, as a manager, it’s really amazing to watch how dedicated people react in a crisis with aplomb and efficiency, and how they take what they have experienced and turn it into a tool to make us better.
Of course, there is also the other side of the coin: “My machine is dead, so I can’t do anything.” Or, “I sent you an email; I was waiting for you to respond. Do you smell smoke?” How do you motivate them in daily work?