Teams

When random selection of leaders enhances group performance, what can we learn about leadership?

This year, I am the director of a course on Team Leadership & Human Resource Management. Organising 200 students, 21 Teaching Assistants, and 7 lecturers keeps me quite busy! While reviewing literature for the course, I came across an interesting article from 1998 in which the authors examined the effects of different kinds of leaders on group performance.

In short, what they found was that when group leaders were selected randomly (or more precisely, by alphabetical order of last name), groups actually performed BETTER at a problem-solving task than when leaders were selected by performance on a leadership quality test (formal selection), when leader selection was informal (by "whatever means you see fit"), or when no leader was assigned.

The task was a "survival" exercise, where a group is given a scenario in which they have been stranded in the wild and must rank a list of items in order of importance for survival - perhaps you've participated in something like this yourself.

The researchers suggest that the formally selected leaders may have felt superior to and apart from the group, which would explain the worse performance on the task. Everyone in a group first ranked the items individually and then had to reach a group decision on the ranking. In groups with a random leader, members' individual rankings deviated less from the group's ranking than in the formal-leader or control groups. So the random leaders were more "one with the group" than the formal leaders.
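To make the deviation idea concrete, here is a minimal sketch of one way such a measure could be computed: sum, for each item, the absolute difference between a member's rank and the group's rank. The metric, item names, and rankings are my own illustration, not taken from the paper.

```python
# A minimal sketch (not the paper's actual metric) of one way to quantify
# how much a member's individual ranking deviates from the group ranking:
# sum the absolute differences in rank position for each item.
# Item names and rankings are hypothetical, for illustration only.

def rank_deviation(individual, group):
    """Sum of absolute rank-position differences between two rankings."""
    group_pos = {item: i for i, item in enumerate(group)}
    return sum(abs(i - group_pos[item]) for i, item in enumerate(individual))

group_ranking = ["water", "mirror", "map", "rope"]    # hypothetical group decision
member_ranking = ["water", "map", "mirror", "rope"]   # one member's individual ranking

print(rank_deviation(member_ranking, group_ranking))  # -> 2 (two adjacent items swapped)
```

A lower total across members would then indicate a group whose final decision stayed closer to what its members individually believed.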

A very interesting finding in the paper concerned the attitudes and opinions reported by leaders and followers. The randomly selected leaders were seen as less legitimate than the formal leaders, the formal leaders enjoyed their role more, participants in random-leader groups were less satisfied with the decision-making process, and formal leaders considered themselves more effective. This is all rather ironic considering that the random-leader groups made clearly superior decisions.

They further tested implicit theories of leadership by asking (other) people under which condition they thought groups would perform better. Not a single subject thought that the random-leader groups would be superior - so the results of the experiment are clearly counter-intuitive.


What the paper does not answer is by what mechanism the random-leader groups performed better. If it were just formal vs. random leaders, then fine: the "formal leader feels superior" explanation is a plausible reason they performed worse as a group, as the leader might have had unwarranted influence on the group's results, for example. But that does not explain why random-leader groups also performed better than informal-leader or no-leader groups - this was not really addressed in the paper.

Perhaps the randomly assigned leader felt the responsibilities of the leader role but less of the "glory" possibly associated with being selected by the test, in a way that positively influenced results. I believe there are some leader behaviours that are positive for group decision-making, for example making sure that everyone is heard, and some leader behaviours or effects that are detrimental to group decisions, such as one's own opinion carrying extra weight. Leaders were selected for scoring high on "leadership", not for having special knowledge of survival situations, so there is no reason to give any extra weight to their opinion on the ranking of items. Whether this is what happened cannot be judged from the paper, as it did not measure this exactly, so I am simply allowing myself some speculation here. :)

In any case, this does not mean randomly selecting leaders is always better. It is likely that the nature of the task matters: this was a problem-solving task with "one right answer", the kind of task where the "wisdom of the crowd" is more likely to lead to a correct answer. It is, however, very interesting that a random leader was more effective than no leader at all.
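As an aside, here is a minimal sketch of what a simple "wisdom of the crowd" aggregation could look like for a ranking task: average each item's rank across members and sort by that average. The items and individual rankings are hypothetical, purely for illustration; the paper does not describe such an aggregation.

```python
# A minimal sketch of a "wisdom of the crowd" aggregation for a ranking task:
# average each item's rank across members and sort by that average.
# The items and individual rankings are hypothetical, not from the paper.
from statistics import mean

individual_rankings = [
    ["water", "mirror", "map", "rope"],
    ["mirror", "water", "rope", "map"],
    ["water", "map", "mirror", "rope"],
]

items = individual_rankings[0]
avg_rank = {item: mean(r.index(item) for r in individual_rankings) for item in items}
crowd_ranking = sorted(items, key=avg_rank.get)

print(crowd_ranking)  # -> ['water', 'mirror', 'map', 'rope']
```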

Staffing for analytics, a matter of transfer and emergence

Analytics and big data are all the rage these days, and one very interesting issue is that of talent matching. Organizations claim there is a lack of analytics talent, and data scientists accuse organizations of only hiring "unicorns".

I think the truth is that there is a bit of a gap between the talent that does exist and the context in which it is now needed. Companies are looking for unicorns for two reasons, I think. One is that they just don't really know what they need. The second is that they know they don't really know what they need, so they want someone who does know, who can manage the whole effort, and who can ideally do everything, so you only need to pay one person. Because the field in its current incarnation is new, I don't think it's a stretch to say that there is an actual shortage of people with many years of experience managing analytics teams.

So you don't need a data science unicorn; you need a team of people with complementary strengths. The good news is that you can probably find a lot of that talent in your organization already. The bad news is that just hiring a bunch of people is probably not going to make a functioning data science team. Bringing in people from academia means they have to transfer and adapt what they know to a new context, a business context. The people in your data science team need to be able to communicate with each other, and with the rest of the organization, for the unicorn to emerge as the whole becomes greater than the sum of its parts.

I don't have a prescription for how this is done, but I do know that, as the field is in its infancy, there will be few plug-and-play solutions. I also think that analytics isn't really a separate... part... of an organization but a capability or organizational skill that must be developed. It's really about being able to learn better and quicker. It is about upgrading both the organization's "senses" and its "prefrontal cortex".

Some great points from Robin Bloor, "A data-science rant", about what a data science team should be and do:

1. There is nothing new at all about what is being called “data science.” It is the application of statistics to specific activities.
2. We name sciences according to what is being studied, and the behavior involved is (or should be) along the lines of the scientific method. If what is being studied is business activity, and that’s usually the case, then it is not “data science,” it is business science. It is a language standard.
3. This statistical activity is identical to what we also call data analysis.
— Robin Bloor