Views from ‘Seeing Data’ research (Part 1)

This is the first in a series of three blogposts about the Seeing Data project. This first post is guest-written by Helen Kennedy, Professor of Digital Society at the University of Sheffield and director of Seeing Data. Part one discusses some of the findings and what they mean for how we think about ‘effective’ visualisations. The next two posts will focus on our ‘talking mats’ method (part two) and what our findings mean for visualisation designers (part three).


Part 1: What we found and what this means for how we think about ‘effective’ visualisations

WHAT WE DID

Regular readers of this blog will know that Andy and I have been working with Rosey Hill (University of Leeds) and Will Allen (University of Oxford) on Seeing Data, a research project which had two core aims. The first was to improve our understanding of the factors that affect people’s engagement with data visualisations and the second was to think about the implications of our findings for how we define and measure effectiveness in visualisations. We went about our research using this mix of methods:

  • We commissioned visualisation agency CLEVER°FRANKE to produce visualisations of two migration datasets, which were used as case studies in our research
  • We analysed visualisations themselves, using what is known as a social semiotic approach (that is, analysing and understanding visual elements as shaped by the contexts in which they are embedded)
  • We interviewed 13 visualisation professionals about the place of users in their design processes, the degree of user involvement, how they think about their users and where these ideas emerge from
  • We ran nine focus groups around the UK with a total of 46 participants, most of whom kept a diary for the week prior to their focus group, detailing their encounters with visualisations in their daily lives (excluding work). We showed them these eight visualisations in the focus group – here is an explanation of how we chose them.
  • We used a technique in the focus groups called Talking Mats, which required participants to place thumbnails of the visualisations on a grid according to how much they liked and learnt from them (in the next Seeing Data blog, Andy will say much more about how we used these)
  • Seven of our participants kept a visualisation diary for a month after the focus groups, and we interviewed them about their diaries at the end of the month.

WHAT WE FOUND

Two points stand out from our research: the profound way that a range of social, human factors affect people’s encounters with visualisations, and the importance of emotions when engaging with datavis.

First, the social/human factors. These are:

Subject matter: Visualisations do not exist in isolation from the subject matter they represent. When the subject matter spoke to participants’ interests, they were engaged; when it didn’t, they weren’t.

“I didn’t like what the topic was. What was the point?”

Source/location: The source of a visualisation is important: it has implications for whether users trust it. When visualisations were encountered in already-trusted media that participants view or read regularly, participants were more likely to trust them.

“You see more things wrong or printed wrong in The Sun I think” (said one participant who usually reads The Daily Mail).

Beliefs and opinions: Participants trusted the newspapers they regularly read and therefore trusted the visualisations in those newspapers, because both the newspapers and the visualisations fitted with their views of the world. But beliefs don’t only matter when visualisations confirm them. Some participants liked data in visualisations that called their beliefs into question, because being provoked and challenged broadened their horizons. So beliefs and opinions matter in this way too.

“It was surprising, it was something I hadn’t even thought of and it was like, ‘Wow!’. […] it was something I didn’t expect.”

Time: Engaging with visualisations can be seen as work, or laborious, by people for whom doing so does not come easily. Because of this, having time available is crucial in determining whether people are willing to do this ‘work’.

“Because I don’t have a lot of time to like read things and what have you, so if it’s kept simple and easy to read, then I’m more likely to be interested in it and reading it all and, and you know, to look at it, have a good look at it really.”

Confidence and skills: Participants needed to feel confident in their ability to make sense of a visualisation in order to be willing to give it a go. This usually meant feeling confident that they had the following skills (which many participants doubted):

Language skills, to be able to read the text within visualisations (not always easy for people for whom English is not their first language).

Mathematical or statistical skills, to know how to read particular chart types or what the scales mean.

“How would you know what that is and what all this is unless you’ve got a certain level of maths skills or English skills as well?”

Visual literacy skills, to understand the meanings attached to the visual elements of datavis.

“It was all these circles and colours and I thought that looks like a bit of hard work; don’t know if I understand.”

Computer skills, to know how to interact with a visualisation on screen, where to input text, and so on.

Critical thinking skills, to be able to ask what has been left out of a visualisation, or what point of view is being prioritised.

Of course, visual elements, style and arrangement also played a role in determining whether participants engaged with the visualisations we showed them. These sometimes appealed to participants, but sometimes they were deemed unfamiliar and off-putting.

“It was a pleasure to look at this visual presentation because of the co-ordination between the image and the message it carries.”

“Frustrated. It was an ugly representation to start with, difficult to see clearly, no information, just a mess.”

The quotes in this post also show that all of these factors (subject matter, media location, beliefs and opinions, skills) provoked strong emotional reactions amongst participants. We found that people reacted emotionally to:

  • Visualisations as visualisations (for example enjoyment of beauty or frustration with lack of interactivity as seen in the above quotes)
  • Data (“data told me crime was increasing and I felt frightened”)
  • Subject matter (“I hate Shakira and Rihanna – why would you visualise junk?”)
  • Source/media location (“I love Buzzfeed, I’m addicted to it”)
  • A combination of these things (“I actually found it a little offensive”)

During our interviews with month-long diary keepers, we asked what these participants remembered about the visualisations that they had seen during the focus groups. We were struck by the fact that none of them could remember any specific data from the visualisations they had looked at, but they could remember the overall impressions the visualisations made and, importantly, the way the visualisations had made them feel. Again, this shows the importance of emotions in engaging with data through visualisations.

This might all seem like common sense, but these are things that don’t get talked about much amongst visualisation designers and researchers. The challenge for such folks is how to translate these findings into practice and research – Andy will say more about this in one of the next two Seeing Data blogs. I’ll end this one with some reflections about what our findings mean for how we should define effectiveness in relation to data visualisations.

DEFINING EFFECTIVENESS

Based on our findings, we think that definitions of effectiveness in the visualisation field need to be broadened. Such definitions need to take into account the fact that people don’t always look at visualisations with the aim of accessing specific information quickly or remembering it forever. Visualisations in the media that are targeted at non-specialists might aim to persuade, and all of them need to attract attention in order for people to commit time to finding out about the data. This means that effectiveness can be defined in many different ways, including:

  • Provoking questions and the desire to engage in discussions with others
  • Creating empathy for other humans in the data
  • Generating enough curiosity to draw the user in
  • Reinforcing or backing up existing knowledge
  • Provoking surprise
  • Persuading or changing minds
  • Learning something new
  • Acquiring new confidence in understanding visualisations or data
  • Finding the data useful for one’s own purposes
  • Enabling an informed or critical engagement with a topic
  • Having a pleasurable experience or being entertained
  • Enjoying the visual
  • Provoking a strong emotional response.

What this means for visualisation designers will be the subject of one of the next posts by Andy. In the meantime, if you’re interested in whether visualisations can make a positive contribution to society, given all of these complexities, sign up for this event!

We aim to publish a longer article which says more about our findings in the next couple of months. Follow @visualisingdata or @seeing_data to find out when it’s out!
