The article [[Obsidian Publish/Can digital data diagnose mental health problems.pdf]]
# Can digital data diagnose mental health problems? A sociological exploration of digital phenotyping
Sociology of Health & Illness Vol. 0 No. 0 2020 ISSN 0141-9889, pp. 1–15 doi: 10.1111/1467-9566.13175
[[HOPE-S Project Index]]
Rasmus H. Birk1 and Gabrielle Samuel2
1Department of Communication & Psychology, Aalborg University, Aalborg, Denmark 2Department of Global Health & Social Medicine, King’s College London, London, UK
# Introduction
- Thomas Insel - former director of the National Institute of Mental Health - said there is a "lack of objective measurement" in psychiatry.
- The term "digital phenotype" was coined by Jain and colleagues (Jain et al. 2015). The idea is that organisms leave behind digital traces.
- I am thinking the assumption is that "internal dynamics" get expressed and observed externally. - Is this "Mind - Body - Behavioral observation?"
- Tutton, 2012 "Some scholars also agree that these promises draw attention away from social determinants of health towards an illusion of a technological fix for health problems in our societies."
- This reminded me to keep the biopsychosocial and systemic perspectives of illness constantly in mind.
- [[What's our conceptual framework]]
- [[Systemic Thinking]]
- What are the unintended effects of personalised medicine reducing patients to data points? Does it depersonalise or disempower them?
- What does empowerment look like? (Bradstreet et al, 2019)
## Origin of Phenotype
- Genetic - genes expressing and having effects beyond the organism and into the environment, e.g. beavers creating lakes.
"Digital phenotyping is the use of large-scale digital data to come to conclusions about an individual's behaviour and traits, and therefore their disease phenotypes."
**There is no objective measurement in psychiatry; thus the potential of having digital data to show "unwellness" is attractive.**
- Noting that the 'lack of objective measurement' has hamstrung psychiatry in both diagnosing and treating mental disorders, Insel is optimistic about the potential of smartphones to solve this impasse... 'Digital phenotype' was first coined by Jain and colleagues (Jain et al. 2015). Phenotype: the traits and behaviours an organism expresses. The idea here is that when a person suffers from any kind of health condition, it is expressed in the digital traces that the person leaves behind, and this can be traced through our devices and their sensors (Jain et al. 2015).
**The use of technology to give "objective" measure of patient's unwellness when they are in their community, "real world".**
- Technology promises to make predictions about both present and imminent mental health states by correlating a variety of sensor data with self-report data (surveys of feelings, experiences, environment, etc) ...‘The promise of digital phenotyping’, Insel writes, ‘is that this objective measure happens in the context of the patient’s lived experience, reflecting how he/she functions in his/her world, not in our clinic’ (Insel 2018: 276)
- So far, research in digital phenotyping has been applied in pilot studies of people:
- (1) diagnosed with schizophrenia (Barnett et al. 2018),
- (2) with major depression in danger of relapsing (Matcham et al. 2019),
- (3) with PTSD (Bourla et al. 2018),
- (4) with suicidal thoughts (Kleiman et al. 2018), or
- (5) experiencing stress (Goodday and Friend 2019).
**Digital phenotyping promises to give patients an increased sense of control over their health, but may lead to misrepresentation and unrealistic expectations, as it oversimplifies the problem; it shifts attention away from social determinants of health towards an illusion of technology fixing health problems.**
- Preliminary small-scale studies suggest that there may be some potential for digital phenotyping to offer benefit to those with mental ill health, for example, feeling an increased sense of control over one’s own health (Simblett et al. 2019).
- This, these scholars say, is problematic, because promises of benefit have often been shown to be misaligned with the true capabilities of a technology: technologies are misrepresented and this leads to unrealistic expectations about the technology (Bubela et al. 2009, McLeod and Nerlich 2017). Other scholars also argue that these promises draw attention away from the social determinants of health towards an illusion of a technological fix for health problems in our societies (Tutton 2012), and divert attention away from the failure of big science and heroic medicine to make a tangible difference for people’s health (Prainsack 2017).
**Personalised medicine has been criticised for de-personalising and disempowering patients.**
- In personalised medicine, for example, promises of patient empowerment have been particularly criticised....However, critical scholars argue that in some circumstances, this type of medicine has the potential to disempower patients and/or lead to a ‘de-personalised’ medicine and care in clinical practice (Day et al. 2017, Prainsack 2017).
**How is empowerment demonstrated in practice?**
- it is vital that empowerment is demonstrated in practice rather than assumed. Broader questions about the relationship of empowerment to individual and/or social responsibility for one’s health also remain unconsidered. There are, as yet, very few empirical studies exploring this, and those that have explored it suggest that at least for a minority of patients, digital phenotyping can lead to a range of detrimental unintended consequences (Bradstreet et al. 2019).
Argument for digital phenotyping:
**Phenotype - the traits an organism expresses, e.g. red hair, pale skin. Phenotype corresponds to genotype - the genetic constitution of the organism - but can be affected by environmental factors (e.g. height is genetic, but influenced by diet).**
- The first definition of ‘digital phenotyping’ seems to have appeared in 2015 in a brief commentary in Nature Biotechnology (Jain et al. 2015). The authors here claim to draw on Richard Dawkins’ idea of the ‘extended phenotype’ (Dawkins 1982, Jain et al. 2015). The phenotype at its most basic level is a term for the traits an organism expresses, such as having red hair or pale skin. Phenotypes correspond to a particular genotype (the specific genetic constitution of an organism) but may also be affected by environmental factors (e.g. the phenotype ‘height’ is related to both a person’s genetic constitution and, along with other environmental factors, their diet).
**The environment has a bigger effect on organisms than the other way around - i.e. the phenotypes that aid survival get chosen by natural selection. (But can't humans shape the environment for their survival?)**
- While environmental factors may play a role in defining a person’s phenotype, the phenotype itself is not considered to have an effect on the environment, and natural selection for the phenotype takes place at the organism level. For example, historically, in cooler climates, people with paler skin were selected for: they absorbed more sunlight, which improved their health/fitness.
**Extended Phenotype is the idea that genes express and have an effect on the environment.**
- The extended phenotype, somewhat simplistically put, is the idea that genes can be expressed (have an effect) beyond the organism and into the environment. Dawkins mentions the lakes created by beavers’ high dams as an example of this. The genotypes of beavers interact with their direct environment, so that natural selection functions at the level of the environment (i.e. good dams), rather than the single organism. Jain et al. use this concept to argue that the increasing embeddedness of technology into everyday life might mean that the extended phenotype reaches into the realm of the digital as well; that is, that our genotype is expressed not only in us as individuals but also in our digital traces, much like the beavers’ genes are expressed as high dams.
**Does who we are shape/influence how we use technology? Or can we infer who we are from how we use technology?**
- Using this analogy, just as extended phenotype adaptations can provide insight into an individual organism’s genetic constitution, these authors raise the question of whether ‘\[...\] our interface with technology \[can\] be somehow diagnostic and/or prognostic for certain conditions?’ (Jain et al. 2015: 462).
==But is there really an underlying biological phenotype cause to behaviour?==
**Three critiques: (1) potential bias within digital phenotyping; (2) how digital phenotyping attempts to make sense of a person's social experience; (3) the use of biological metaphors and concepts that runs the risk of reifying mental disorders as primarily biological.**
- We then put forward three key critiques. First, we explore forms of potential bias within digital phenotyping, namely related to normality and pathology, and – following Benjamin (2019a) – inequality and racism. Second, we critique how digital phenotyping typically attempts to ’sense’ and measure the social. Third, we critique the usage of biological metaphors and concepts in digital phenotyping, arguing that this runs the risk of reifying mental disorders as primarily biological.
## The Potential for Bias in Sensing
**In digital phenotyping, we have to be clear about the relationship between correlation and causation.**
- First, that the relationship between correlation and causation here has been overlooked: while depression may well be related to a subsequent lack of mobility, it does not mean that a lack of mobility necessarily indicates depression.
**There are unspoken assumptions about what it means to live a normal life.**
- Second, that the assumption that depression and mobility are related speaks to a tacit assumption about what constitutes a normal life. However, such assumptions can be problematic.
**There are psychological and socio-economic reasons that can limit people's access to sociability or to the movement captured by GPS sensors.**
- Marginalised populations, for example, may not have access to the forms of mobility that are presupposed in these studies. One’s neighbourhood may not be safe to traverse, or there may be a lack of public transport which decreases one’s ability to travel to new places.
**Data need to be representative. The selection criteria for research participants might skew the data collected. If we are not careful, the algorithms trained on them will have bias coded in.**
- Or what about homeless people – a group whose high prevalence of mental health problems is well-documented? This is likely to be a group that would fall outside the ‘ideal’ user (Oudshoorn and Pinch 2003) that studies such as the above have in mind. It is of course well-documented by social scientists that mobility – and the infrastructures that enable forms of mobility – are dispersed unequally across societies (e.g. Sheller 2016). In this sense, it would be a mistake to presuppose that a lack of mobility necessarily indicates depression: in fact, it may indicate a systemic and unequal access to opportunities, including the forms of mobility. Likewise, it is not obvious how studies such as this consider the challenges faced by those who are differently abled and whose mobility may be structurally and architecturally limited.
- There are also outstanding questions about which data these apps are trained on and therefore whose data comes to count as the benchmark for normality. Huckvale, Venkatesh and Christensen (2019: 5) have argued that machine learning models, generally and in digital phenotyping, can display bias if they are trained on limited populations such as U.S. college students. This reflects a problem often mentioned in psychological research and the behavioural sciences more largely – these samples consist primarily of people from ‘Western, Educated, Industrialized, Rich and Democratic societies’ (Henrich et al. 2010: 61). Simply put, digital phenotyping may, if we are not careful, be tailored primarily to groups which hold rather particular positions in society and runs the risks of not being representative of the diversity of populations.
**Examples and objections from scholars about coded biases**
- These concerns of bias also align with similar concerns which have been repeatedly noted in the ‘big data’ and artificial intelligence literature (Loh 2018: 61). Many scholars are sceptical about digital technologies, for health and otherwise, because of the propensity for (often racial) bias in these technologies (e.g. Benjamin 2019a, 2019b). It is well-established that inequalities can be coded directly into health applications (Gianfrancesco et al. 2018, Zarsky 2016). This is because, as Benjamin (2019a, 2019b) notes, the very data that algorithms become trained on are inevitably shaped by structural and historical inequalities and forms of discrimination that risk becoming perpetuated and reified in the coding. If algorithms and digital tools are trained on data from the world, then in so far as the world contains injustices and forms of bias, these algorithms risk reflecting that (Benjamin 2019a: 422).
- A recent study exposed how an algorithm used in U.S. hospitals to allocate health care has been discriminating against people of African-American descent (Obermeyer et al. 2019). Some researchers within digital phenotyping do acknowledge the risk for bias and the need to tailor their methods to different populations, with for example Wang and colleagues suggesting that researchers interested in ‘depression sensing’ need to ‘[...] conduct a large-scale, longitudinal study with a diverse cohort well beyond students’. (Wang et al. 2018: 42:22, our emphasis, see also; Huckvale et al. 2019).
**If large data sets are biased, interventions will not truly be personalised because inequality might have been built in**
- In sum, the lack of representation in digital phenotyping research risks betraying the promise of personalising these algorithms. That is, if these algorithms are trained primarily on very particular samples of the population – in terms of income, ethnicity, occupational position and so on – then the notion that these applications may personalise mental health is compromised.
**Unintentionally, social inequalities and injustices may become misattributed to a personal mental health problem.**
- Detailed personalisation also carries other risks: for example that societal inequalities and injustices (such as access to infrastructures of mobility) become misidentified and reified as individual mental health problems.
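The training-data bias point above can be made concrete with a toy sketch of my own (not from the article); the population, the numbers and the "at risk" rule are all hypothetical, and real digital-phenotyping models are far more complex than a single threshold:

```python
# Toy illustration: a "low mobility = at risk" rule whose threshold is
# learned from one population misfires when applied to a population
# whose mobility is structurally limited (poor transport, unsafe area).

def learn_threshold(daily_km):
    """Flag anyone below the training group's mean daily mobility."""
    return sum(daily_km) / len(daily_km)

def flag_low_mobility(km, threshold):
    """True if this person's mobility falls below the learned norm."""
    return km < threshold

# Hypothetical training sample: mobile, car-owning college students.
students = [12.0, 15.0, 9.0, 14.0, 10.0]
threshold = learn_threshold(students)  # 12.0 km/day becomes "normal"

# A person in a neighbourhood with little public transport moves less
# for structural reasons, yet the rule flags them as "at risk".
print(flag_low_mobility(4.0, threshold))   # True - bias, not diagnosis
print(flag_low_mobility(14.0, threshold))  # False
```

The point is not the arithmetic but that the benchmark for "normal" is whatever population the model happened to be trained on.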
## Sensing the Social
**Studies that use sensors as a proxy to infer one's social life tell us little about the quality of one's social connections.**
- Another example comes from a well-cited and recent study on symptoms of depression and post-traumatic stress disorder (PTSD) (Place et al. 2017). Seventy-three participants, who had at least one symptom of PTSD and/or depression, were given a smartphone with a preinstalled application that would collect digital trace data about physical activities (using accelerometers and gyroscopes), location (GPS), vocal cues and ‘social’ data, which in this case was the registration of in-going and out-going phone calls and text messages. In the latter, while the authors described using data about ‘how the user is interacting with others through the phone’ (Place et al. 2017, not paginated, see Table 1 in the paper), ==what is actually tracked is the number and timing of in-going and out-going phone calls and text messages.== This measure has been used extensively in other studies as well (e.g. Faurholt-Jepsen et al. 2014) and will reveal some degree of how people use their phones in everyday life. For example, ==it will reveal at what time calls are made, how many calls are made in a row and so on - so one will presumably be able to see the times at which the person is socially active, and get an inkling of their social life. However, at a more meaningful level, data such as this tells us very little about how people use their phones or interact with others, about a person’s social life, of the meanings they attach to this, or even of the purposes of these phone calls.== Several hour-long phone calls over a week may indicate a rich social life with meaningful conversations with close friends, spouses or parents - or it may reflect particularly frustrating experiences with the customer service department in one’s bank. Alternatively, you may be less likely to make/receive phone calls when you are physically socialising, so an increase in phone calls may correlate with a decrease in actual social contact. And while at least one study (Doryab et al. 2019) corrects for this by asking for the phone numbers of parents and friends to provide more information on who people are calling, this still leaves unaddressed the issue of meaning and purpose.
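A toy sketch of my own (not the study's code) makes the call-log point vivid: once a week of phone use is reduced to counts and durations, two completely different social situations become indistinguishable. The feature names and numbers are hypothetical:

```python
# Toy illustration: call-log features of the kind a passive-sensing
# pipeline might compute. They capture *how much* the phone is used,
# not what the calls meant to the person.

def call_features(calls):
    """calls: list of (direction, seconds) tuples for one week."""
    return {
        "n_calls": len(calls),
        "total_seconds": sum(sec for _, sec in calls),
        "n_outgoing": sum(1 for d, _ in calls if d == "out"),
    }

# A week of long, warm conversations with a parent...
rich_social_life = [("out", 3600), ("in", 3500), ("out", 3700)]
# ...versus a week on hold with the bank's customer service.
bank_on_hold = [("out", 3600), ("in", 3500), ("out", 3700)]

# Identical feature vectors: the sensor cannot tell these lives apart.
print(call_features(rich_social_life) == call_features(bank_on_hold))  # True
```

This is the "meaning and purpose" gap: the representation, not the sensor accuracy, is what limits inference about a person's social life.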
**A person's subjective view of their situation is central to their mental health. Therefore patient self-report is important.**
- As mentioned, some researchers have attempted to overcome these hurdles in digital phenotyping by including self-report measures or questionnaires within applications, to help gauge how people are feeling (e.g. Matcham et al. 2019). In so doing, they emphasise that subjective interpretations are not just, as for example Place and colleagues (2017) argue, reflections of problematic ‘bias’ that may skew otherwise objective measurements. But we would argue that a person’s interpretations of their situation – including how they think and feel about their social life – is the central thing to gauge, exactly because it fundamentally reflects and shapes their engagement with the world and, hence, their mental health.
**Some "states" cannot be inferred from passively sensed data - such as "loneliness", a self-interpreted social situation. The person may not experience loneliness even though their passive data shows few phone calls or little movement (GPS).**
- Our point, more broadly, is that there are limits to uses of passive sensing. For example, Doryab and colleagues (2019) sought to identify the ‘behavioural phenotypes’ of loneliness, specifically from passive sensing. In this study, they surveyed university students about their feelings of loneliness before and after a semester, and collected digital data (e.g. GPS locations, call logs) from them in the interim. Their algorithm was able to infer which students had high scores of loneliness at an accuracy rate of 80.2% (see Doryab et al. 2019). However, we would argue that this study misses something crucial in its attempt to infer states of loneliness purely from passively sensed digital data: it misses out on the fact that loneliness is not an objectively measured quality but rather one’s self-interpreted social situation (Hawkley and Cacioppo 2010). Loneliness is rarely static, and may indeed fluctuate from situation to situation. Thus, we would argue, there are some states that cannot be inferred purely from passively sensed data.
**Bluetooth distance is not emotional distance or social proximity.**
- ==Reminded me of biases and to be careful of such errors... what can be measured may not be valuable; what is valuable may still need to be measured.==
- Our final example reflects on the previously mentioned use of Bluetooth sensors to gauge the social. Some authors argue that Bluetooth sensing can be used to generate data on ‘\[...\] how individuals interact with others’ (Torous et al. 2017: 3). However, this claim seems exaggerated - physical distances between people are not the same as how individuals interact (e.g. what they say or what their interactions are about and so on). Furthermore, this measure does not gauge proximity between persons, but between Bluetooth sensors. Thus, studies that use Bluetooth sensors to estimate the quality of social bonds rest on the assumption that they are estimating the distances between persons carrying smartphones, rather than smartphones left on a table, in coat pockets, prams or purses (consider, here, the different behaviours of men and women carrying smartphones). While most researchers will explicitly acknowledge this methodological limitation, it still means that the use of Bluetooth sensors to gauge social interactions is problematic.
## The Risk of Reifying Mental Disorder as Biological
- There is no clear-cut biomarker for mental illness... let's shift focus from biological biomarkers to digital ones.
- **Reframing? To see mental illness as a problem with *living*?** (What does this mean? If patients have a mental illness, are they not living "correctly"?)
- "The exploration for markers in this environment aligns more with recent work which, rather than trying to attribute a biological causation of mental health, sees mental illness as problems of living (Borsboom 2017) - that is, as a much less stable phenomenon that cannot be understood fully without considering the environment and the life of the individual."
**We must stay alert and remind ourselves that behaviours are complex interactions of biopsychosocial, cultural, and economic factors - not simply "driven" by an underlying "mental illness".**
- "The language of biomarkers and phenotypes conveys the impression that digital traces are products of underlying biology rather than as markers for complex problems of living embedded within a social, cultural and economic environment."
- This reminds me that alerts, EMA must be about surfacing challenges in a social, cultural, economic and systemic context.
- i.e., any deviation or alert should be interpreted as a signal to uncover the "how come", to understand the reasons.
- Picking up symptoms is one thing, and the easy part; the next is providing interventions, management, and treatment.
- If we only stop at creating an alert system and leave it at that, it can be reductive. We need to remain humanistic and combine it with a human approach.
**Interventions must be guided by coherent theoretical constructs. Find out what the patients' problems are, then use technological tools to solve that problem. Tools to serve humans, not humans to fit into theory** ^cc9d19 ^a83b3b
- In other words, digital phenotyping has access to a wealth of data, generated by particular sensors and instruments, and now the field is searching for constructs of the social – and of mental health – that these may measure. We might speculate that what is happening here is that, rather than letting coherent theoretical constructs guide what is being measured, the capabilities of smartphone-based sensors guide the creation of theoretical constructs. We argue that this approach to mental health diagnosis and prognosis is the wrong way around: we need to think more about mental health problems as problems of living (Borsboom 2017) and then think about how these might be explored via a variety of tools, of which digital tools might be one approach. If this does not happen, then digital phenotyping’s promise of sensitivity to the context and environment of the person is undermined. This relates to a broader argument which has been made elsewhere by Datta Burton et al. (Forthcoming), that discourses of data-driven value should be inverted such that first we ask what is valuable (in this sense in terms of improving living with mental health), and then we explore approaches to achieving this (with the digital solutions being one of many methods).
- Related to [[ARTICLE Relapse prediction in schizophrenia through digital phenotyping]], [[The V3 framework for validating passive digital biomarkers]], [[Oxford Handbook of phenomenological psychopathology]]
---
My Takeaways
==We can detect shifts in their personal trends, but we should not infer too much; we can check in with the patient to simply ask how they are doing.==
[[How language distorts reality]]
[[🏠 030 Language and Psychology]]
- Metaphor - language creates images, e.g. "Cloud Computing" and the "Tsunami of Big Data" (like a wave).
- Similarly, by saying it is a "digital biomarker" it brings about a certain image
- Instead, perhaps call it a "digital footprint" or a "trace" - that implies something more transient.
- Remember, Technology should serve humans. Not the other way round.
- Sensors should serve humans, not humans trying to fit sensors.