
In-conversation about digital health and Data Selves

Deborah Lupton, Benjamin Hanckel and Shayda Kashef

20 August 2019

On 13th June 2019 the Social Science & Urban Public Health Institute (SUPHI) at King’s College London hosted a special in-conversation event with Professor Deborah Lupton.

During the event Professor Lupton discussed her forthcoming book Data Selves and reflected on the role that digital technologies are playing in the urban public health landscape. Below is an edited version of the transcript from the event. Special thanks to the event sponsor, the PLuS Alliance.

What is digital health and where do you see the field now?

Digital health is a short, snappy title for the huge range of digital technologies applied to health: from older technologies such as websites, search engines and online discussion forums, through to newer arrivals like health apps and the 3D printing of body parts. For me, digital health encompasses the diverse ways in which healthcare and health communication can be digitised. Given that more and more of these technologies keep emerging, I never have to wonder what to write about: there is always something new on the horizon, and it is really interesting to trace their trajectories. Google search is probably the single most highly used digital health technology, because people use it as a form of self-triage, yet such older technologies are often forgotten in the rush and excitement of the social imaginaries surrounding newer technologies, such as apps and wearable devices, which are presented as the brave new world of healthcare. People often still get far more value from websites and online discussion forums, for example, than they do from health apps.

Putting this in the context of your current work, can you tell us about the Vitalities Lab you set up at the University of New South Wales and how it relates to digital health?
I’ve been building on my long-standing interest in Foucauldian theory and incorporating perspectives from new materialisms, particularly feminist new materialisms and vital materialisms; the two overlap but are not the same thing. I’ve only been at the University of New South Wales for four months, but as part of my appointment I was encouraged to set up a research team. The name Vitalities is meant to denote my current directions and interests. Over the past few years I’ve been writing a lot about ‘lively data’, by which I mean people’s personal data, the digital data economy and how digital data about people take on value. These data are lively because people engage with data about their bodies and themselves in ways that change their own lives: they may respond to their own data and alter aspects of their lives based on what the data are telling them. That’s the notion of lively data.

Vital materialism gets back to that idea of vitalities as well. The feminist new materialism scholarship I’ve been engaging with, particularly the work of Karen Barad, Jane Bennett, Donna Haraway and Rosi Braidotti, talks a lot about capacities and affective forces and how they are generated through the interactions people have with other humans and with non-human actors. It’s very much the idea that capacities are generated when people come together with other people and with non-humans. In my recent research I’m trying to explore this in the context of particular digital technologies: how people engage with their digital devices and their data to generate new capacities. That gets back to vitality too; these capacities are constantly being generated, reformed and reconfigured with and through devices and data.

Can you expand on how ‘vitalities’ might contribute to new ways of thinking about methods and methodological enquiry?
In terms of methods, there’s an emerging approach to qualitative empirical research, post-qualitative inquiry, that I’ve found really interesting to work with lately. Post-qualitative inquiry overlaps strongly with more-than-human theory, because it sees research as always being a research assemblage and as always partially emergent, moving away from the positivist style of qualitative research that has dominated a lot of health-related qualitative inquiry of late. Every type of research is a research assemblage: the researcher is always part of the data they generate. That’s where post-qualitative research departs from the more positivist approach to qualitative research. It’s interesting how more-than-human theory is now being brought into research methods, and that’s what I’m trying to do in my recent work: when writing up and analysing my own empirical research, I bring in post-qualitative perspectives alongside the more-than-human theory I’m thinking with as I generate the concepts I use to analyse my empirical data. The empirical data might include traditional forms such as interview or focus group transcripts, but with post-qualitative methods they now often include arts-based materials, drawings and storyboards that people might have made in workshops. I’ve also been experimenting with a method called story completion, which involves giving people the beginning of a story and inviting them to create the rest of the narrative; that’s another form of research material that can be quite interesting for understanding people’s experiences.
It has been a really fruitful new method: another way of accessing people’s feelings and experiences that they themselves might find hard to articulate if we simply asked them in an interview. These are often such mundane experiences that coming at them in a more oblique or creative way can be an effective route into those experiences and fears.

Can you expand on how you have engaged with some of these themes in your most recent work, and in particular in your forthcoming book Data Selves?
Data Selves covers what I call ‘living data’. It gets back to the lively data I was talking about earlier, but also to how people live with, through and alongside their personal data. In Data Selves I’m trying to expand on feminist new materialism and the idea of human–data assemblages, and I argue that people’s personal data are often represented in dematerialised and depersonalised ways, such as when we talk about the big data phenomenon, the data tsunami or being overwhelmed by data. We often forget that not all the data generated by, for example, smart cities or any other form of data generation are about non-humans. A lot of those data are about actual humans: about their lives, their bodily practices, habits and routines.

With Data Selves, as the title suggests, I wanted to bring in that more-than-human aspect, to understand human–data assemblages as still fundamentally human assemblages, and to re-humanise these data. For me that raises a different form of ethics around them. I’m arguing that we should think of personal data as in some sense embodied and human: not fleshy, but about our flesh, with that ambiguous ontology. We need to think of them somewhat as we think of other body parts and attributes that people donate, give or sell in some situations, very much as human remains, and that is how I argue we should treat people’s personal data. That raises questions about the ethics of how other people might use those data and seek to profit from them.

In the book I draw on a few of my empirical research projects about how people engage with and make sense of their data, and I argue that we need to understand people’s engagements with their data as very often infused with affect, vulnerabilities and multi-sensory engagement. There’s a chapter on what I call materialisations of data, where I discuss social imaginaries of data: the very utopian ideas of data as productive and generative, and of how people themselves can benefit from their own data. So there’s that very positive representation. What’s interesting is that over the past few years, when talking about people’s personal data, there has also been a very dystopian representation in which privacy no longer exists. So you’ve got really interesting polar representations of how people’s data can be used, in both positive and negative ways.

I did a project I called the Data Personas Project, which built on the design-methods approach of personas. I asked people to think about their data persona: the profile of you assembled from the details of your online and app-related encounters and engagements. I then asked them to imagine the futures of their data persona, because I think there are intriguing ways of inviting people to imagine futures, rather than having futures imagined for them, on their behalf, by others. I also asked them how similar or different their data persona is from them. Some people did imagine a dystopian scenario in which nothing is private: ‘the internet knows everything about me’. But most people said the internet doesn’t know everything about them; it doesn’t know their internal beliefs and feelings, and so on. I thought that was really interesting, because in media studies and surveillance studies in particular we get a discourse that is very critical of the idea that people think privacy is dead and aren’t concerned about it, the so-called privacy paradox: people say “I’m worried about my privacy” but don’t do anything to actually protect it. But the research I did using the data persona concept shows that people don’t think their privacy has been completely taken over by the internet.

There is an ongoing debate about data capture for the common good, versus data capture that is perceived as morally questionable. How might we think about these boundaries?
I try to avoid a strongly normative approach to these kinds of ethical discussions. Context is everything, and people’s contexts are so variable and unique to them; that’s what really comes out when you look at the ways people engage with digital technologies and digital data. The Association of Internet Researchers’ document on the ethics of doing research with online materials is really interesting, because it argues that you have to look at the context of each research project. There shouldn’t be hard and fast guidelines about how social research uses people’s personal data. More recently, human research ethics committees have become far more aware that people might be putting their information out there when they go online, so the situation has become more complicated. It’s not as easy to get ethics approval: you actually have to argue for why and how you’ll get people’s consent, or, if you won’t, why not. All I would say is that there need to be very detailed, lengthy considerations of the context.

Then there are all those issues around whether people know you’re accessing their data, and to what extent, and now there’s the issue of de-anonymisation too. If they know what they’re doing, data harvesters can be really good at de-anonymising data to generate detailed profiles about people.

But even when a decision is made about whether it is appropriate to generate these data and what people’s data should be used for, because it might improve public health or the treatment of medical conditions, even really strong data privacy and security measures can be leaked, breached or hacked. You don’t know what the future of those lively data might be, and that’s very difficult.

The event concluded with a brief Q&A session with the audience which covered a range of issues, including:

  • An expansion of the debate about data collection and how we manage data capture in the context of emerging technologies, and
  • A discussion about the possibilities for technologies to benefit certain people who are marginalised, such as people with disabilities, with Professor Lupton acknowledging that there is more to do in this area.

To respond or comment on this feature, please contact

In this story

Benjamin Hanckel
Research Fellow

Shayda Kashef
Student Services Officer

Features written within the SUPHI research group