The conference was a very engaging and diverse event, with speakers approaching negative emissions from a range of perspectives.
The full playlist of videos from the conference is available here:
Also see the conference communiqué which emphasised that “to limit warming at 2 degrees or less requires NETs to draw down past and future emissions and store this carbon in land, ocean, and geological reservoirs” while also suggesting that “to deploy NETs at the scales envisioned would require acceptance by industries and communities whose economic activities, environments and amenities may be affected by large-scale deployment of NETs”.
In work with Holly Jean Buck, my own presentation suggested that the development of negative emissions technologies represents an opportunity to build a robust role for the social sciences into an area of technical and regulatory development that remains at an early, and hence undetermined, stage. At present, however, much of the social science related to carbon removal focuses on “social impacts”, especially as a bounding factor on the feasibility of various carbon removal methods: the impacts follow downstream from deployment.
This follows a broader tendency in which scholarly literatures and popular commentary frame technology as “black-boxed” and well defined, with an independent, asocial logic that results in “impacts” or “effects”. In this context social questions are often narrowly framed as “impacts” or “risk” issues, also placing the site of social science inquiry firmly “downstream” of development processes.
However, various approaches in science and technology studies have shown that technologies cannot be black-boxed and separated from the sets of social relations that constitute them. Interdisciplinary social science research can help illuminate the social and political dynamics of negative emissions. The research framework presented here considers both social impacts and social drivers, through both past-oriented and future-oriented social science methods. By re-thinking the role of the social sciences and taking the reflexive governance of negative emission technologies as a central concern, social scientific approaches can provide real-time social intelligence that will be critical to the scalability of negative emission technologies.
We look to #STS and constructivist ways of seeing to set out a new framework for remaking public participation in science and democracy. It comprises four paths, each with new approaches and criteria that have been missing from many existing frameworks. These paths are…
3️⃣ Responsible democratic innovations. Don’t only evaluate the positives of participation after the event. Anticipate downsides & social/ethical implications of democratic innovations in advance. Links to work by @Ulrike_Felt, @robdoubleday, @Jackstilgoe, @Sujatha__Raman & others
4️⃣ Reconstitute participation. Turn participation around from being a problem of publics engaging to one of institutions and powers that be recognising, listening & responding to diverse already existing public involvements. Links to work by @SJasanoff, Brian Wynne & others.
Possibly the single best tweet on sci-comm, the deficit model and public engagement. Jack Stilgoe nails it again.
And therein lies the trouble with John’s analysis. The deficit model critique is not about practice. It’s about assuming public trust is a function of knowledge. Scientism, not sci comm, is the problem. See our special issue of PUS: https://t.co/Ued2DW3XKf https://t.co/QfGF3qPc3p
By Dr. Declan Kuch*, Assoc. Prof. Matthew Kearnes*, Dr. Georgia Miller*, Dr. Walter Muskovic*
With Labor announcing a Senate inquiry into the ‘whole My Health Record system’, it’s worth considering what has been learnt so far in public discussions, which have brought the Federal Health Minister Greg Hunt into the spotlight in quite a different way to his normal announcements this year. Back in January, Hunt launched a major report from the Australian Council of Learned Academies on ‘The Future of Precision Medicine in Australia’ that included contributions from all major scientific, humanities and social science perspectives. It’s an interesting, well-written overview of this emerging field. One way of understanding Precision Medicine is that it builds on advances in ‘-omics’ methods (see section 2.5 of the report) to build large datasets with a view to developing new diagnostic and therapeutic techniques. Minister Hunt has spent much of the year announcing funding in line with this vision, especially through the ‘Genomic Health’ mission, as well as funding for specific conditions including cerebral palsy, severe epilepsy and kidney disease.
With only a few notable exceptions, public discussion of Precision Medicine and its potential has been completely disconnected from the My Health Record discussion. Instead, the focus has been on multiple dimensions of privacy (especially stigma and domestic violence risks); the breadth of organisations and individuals who can access the document; multiple concerns about insurance companies accessing records through the My Health Record database; and the clinical utility or otherwise of using data based on billing, and record summaries organised as PDFs. Experts in systems management, procurement or coding point to ways the problems with security, access and control are ‘locked-in’, tempering others’ calls to ‘fix the problems’ through amending legislation or penalties.
There are lessons in all this for the successful rollout of Precision Medicine, which hinges in large part on the widespread aggregation of biological materials. A combination of careful public consultation and a more open-ended design ethos could have avoided many of these problems, which are likely to have profound flow-on effects.
Lesson 1: Make a clear and compelling case for how the records will make us healthier
The ‘All of Us’ programme in the United States — presented as ‘a historic effort to gather data from one million or more people living in the United States to accelerate research and improve health’, associated with the US Precision Medicine Initiative set up in the final months of the Obama presidency — made a compelling case for the need for a large, voluntary biobank through which new diseases can be diagnosed and treatments developed. The ‘sales pitch’ was a very broad one: give us your tissue samples, and data relevant to social and environmental determinants of health like fitness tracking data, and approved scientists will mine the data to make you better; crucially, considering America’s patchy history of biomedical research involving minority populations, this ask has been framed as a deeply inclusive and participatory one. The benefits of the My Health Record are less clear: the website focuses on the portability of data without any reference to providing infrastructure for researchers or other government agencies, despite these featuring very prominently in the legislation.
This problem of how different constituencies understand trade-offs is compounded by insurers and pharmaceutical companies publicly seeking the records for uses that do not align with the interests of citizens. ‘Managing costs’ for insurance companies, for example, may mean excluding people from health or life insurance products based on genetic mutations. What is completely lost is any narrative about progress — will this system help us be healthier? How?
Lesson 2: No issue, no (effective) data platform.
“Digital Health is the penicillin of our time, with Precision Medicine and genomics offering opportunities to cure previously incurable diseases and deliver better life saving medicine”. — Greg Hunt.
The rhetoric about Precision Medicine seems boundless — another magic bullet to cure in a single hit. However, what the debates over the Australian My Health Record reveal is that databases of ‘health data’ are not inherently useful. They require careful crafting around agreed standards for data inputs and ethical access, and are most successful when responding to pressing issues in the community. One could extend Noortje Marres’ quip about ‘no issue, no public’ in this way — data is quite literally an act of giving, not a passive unit with inherent power.
Understanding the relations between experts, patients, devices and government in successful health database projects like the Hip and Knee Replacement Register demonstrates this etymological point. This registry was developed due to shared professional concerns about poor devices being approved and has successfully led to regulatory changes.
Furthermore, without justification for how and why government agency and third-party access defaults are set, journalists and members of the public are less likely to accept trading away their privacy to such a wide range of actors, many of whom have interests counter to their own health. The fate of the UK’s ‘care.data’ project, led by the Australian Digital Health Agency’s Tim Kelsey, bears out this point: the project was abandoned in July 2016 over concerns patient data could be sold to insurance and pharmaceutical companies.
Lesson 3: May a thousand databases bloom? Need for a diverse economy of health data
Data cooperatives like Midata.coop and other non-traditional ways of aggregating data are likely to be an important part of the emerging horizon of Precision Medicine. Decentralisation has the advantages of spreading the risk of hacking and enabling more democratic decision-making, as well as a more democratic flow of benefits. Open Application Programming Interfaces (APIs) can enable information exchange between platforms, mitigating a key security concern around hacking. An ecosystem of democratically controlled, patient-centred databases would gel with calls for transforming both our understanding of ‘the economy’ and its structure.
Lesson 4: Ownership matters to both the ethics of the program and ‘public acceptance’
Trust isn’t enough. Trust is active too — we trust healthcare institutions to care for us, but this trust is not absolute and does not automatically extend to ‘digital health’. Publics across the globe understand that privately owned healthcare institutions act according to their incentives, which seldom align with societal goals of equitable access to care.
The success of Precision Medicine hinges on public dialogue about the promise of better health. In this context, ownership over health data goes beyond the legal question of who technically owns the rights to patient records. How do we have a discussion about the collective ownership of health in ways that acknowledge the power of gifts to bond a society, for example? How could an appreciation of the diverse forms of social bonds that make up good health become the starting point for a discourse of innovation, rather than seen as a barrier to technological progress?
Lesson 5: ‘Health’ is an increasingly slippery concept in a post-genomic age
A key feature of Precision Medicine is its emphasis on bringing social and environmental data to bear on our understanding of disease risk. Thus, new databases of ‘healthy’ populations, such as the Medical Genome Reference Bank, are helping to shed light on how risky gene mutations manifest as disease.
Concerns over tumour sequencing data being added to the My Health Record through systems designed by the Garvan Institute’s corporate arm, ‘Genome.One’, show both the power of genomic data in the public imagination and the need for appropriate protections. Australia remains one of the last jurisdictions without legislation forbidding insurers from denying coverage based on genetic conditions. This should be an easy step for the government, and one that hopefully saves us from another ‘Downfall’ parody.
The societal implications of the scientific developments around Precision Medicine are profound and require a wide-ranging public discussion about what it means to be healthy or diseased, who is the baseline against which claims of health are to be measured, and who should benefit from the scientific research into health and disease. These discussions require a sociological imagination — one that ties cases to populations and societal institutions; and can enable a sober analysis of the tradeoffs precision medicine poses.
*UNSW, Sydney; Australian Research Council Centre of Excellence in Convergent Bio-Nano Science and Technology.
** Crossposted from an original post on Declan Kuch’s Medium page.
The Environmental Humanities programme at the University of New South Wales (UNSW) in Sydney, Australia, is advertising three PhD scholarships in specific areas in the environmental humanities and science and technology studies (with collaboration from colleagues in a range of other areas).
1.) Protecting Indigenous Knowledge and Confronting Biopiracy
2.) The Impact of Artificial Intelligence on Education Policy
3.) Multispecies Studies: Rethinking Human/Wildlife Interactions
(See below for descriptions)
These Scientia PhD Scholarships are specifically designed to attract high quality PhD candidates across a range of strategic research areas.
PhD Scholarship benefits under the scheme include:
— $40K a year stipend for four years
— Tuition fees covered for the full 4 year period
— Coaching and mentoring will form a critical part of your highly personalised leadership development plan
— Up to $10k each year to build your career and support your international research collaborations
To be competitive, candidates would most likely already have completed work at Masters level (or equivalent) and published with a leading academic publisher.
More information on these scholarships is available here.
More information on the Environmental Humanities programme is available here.
This project analyses the commodification of nature, natural products and Indigenous knowledge. It will conduct a range of case studies, legal analyses and ethnography aimed at identifying biopiracy cases. It will also analyse Indigenous mechanisms for protecting environmental knowledge, including customary laws and community protocols. The project will reflect upon the implementation of the UN Nagoya Protocol to the Convention on Biological Diversity and its role in preventing biopiracy, as well as the work of the World Intellectual Property Organization Intergovernmental Committee on Traditional Knowledge.
This project will be part of investigations into the ongoing and potential impact of artificial intelligence on both education policy making and analysis. PhD projects that address any or all of the following questions are welcome: (1) What are the possibilities and challenges for education and education policy that are occurring and will occur by implementing artificial intelligence into governance, instructional and assessment settings? How might these possibilities and challenges relate to changes already occurring around algorithmic governance and big data in education? (2) What are the ethical, economic, political and biosocial considerations of implementing artificial intelligence into educational organizations? This includes issues of trust and transparency relating to the ‘black box’ of AI and prediction. (3) How does artificial intelligence, including machine learning, use ideas from social policy, including policy and value networks, and how can policy analysts use these same ideas? What are the epistemological and ontological issues, such as those around representation, posed by AI for policy and analysis?
Multispecies Studies is an emerging field of interdisciplinary research that draws the humanities into dialogue with the biological sciences and ethnographic methods to better understand the shifting and highly consequential relationships between human communities and wildlife in a period of escalating social and environmental change. Providing new perspectives on issues as diverse as biodiversity loss, climate change and globalization, work in this area seeks to better understand and intervene in human/wildlife interactions to produce more sustainable, equitable and flourishing outcomes for all parties.
Chris makes some insightful observations – both about the volume itself and the broader field of participatory politics. I was particularly taken with the final paragraph of Chris’ review, which outlines a series of challenges for the ways in which the interpretive and normative strands of STS-inspired research intersect. I’ve taken the liberty of copying Chris’ final paragraph below.
Constructivist STS, in its various shades, now takes as canonical the view that redescription (here, of participation) does not – contra Winner – simply leave things as they are. There is normative content to the constructivist stance, namely that this process of redescription is better than some ‘realist’ alternative. But is this ‘better’ a matter of (1) a stance being pragmatically better at achieving pre-set goals, (2) epistemologically better (more ‘true’), or (3) ethically and/or politically better (productive of more flourishing lives, achieving a more nuanced or richer kind of procedural or other justice)? The first option would slip back into a non-reflexive instrumentalism. The second would accord to constructivist redescription an epistemological privilege it explicitly refuses to ‘realism’, recalling Latour’s problematic remark that the constructivist should ‘just describe the state of affairs at hand’ (Latour, 2005, p. 144). The third calls for ‘us’ to create forms of participation that are ‘cosmopolitan, reflexive, responsible, and [plural]’ (262) in preference to forms of participation that are less so. Here, however, one must explore further the meaning of these terms, and the normative space in which they are gathered. Winner asked whether constructivist STS can position itself, in the light of its analyses, to then ask ‘which ends, principles and conditions deserve not only our attention but also our commitment’ (Winner, 1993, p. 374). Reflexivity, and cosmopolitanism, and responsibility (or response-ability), along with other concepts like care, are presented widely in contemporary STS as demanding our attention, by scholars who call in each case for ‘more’ of a commitment to x. But as to whether, and why, x should ultimately demand our commitment, questions remain. 
A condition of post-normality demands reflexivity towards our commitments but the treatment of reflexivity in STS leaves us wondering how, as Latour puts it, we are ultimately to distinguish between good and bad attachments (Latour, 2004, p. 457). Without interrogating directly the meaning and justification of such commitments (and therefore edging further into the territory of the philosophy of technology), the reflexivity prized by STS moves towards a perhaps arbitrary limit, and potentially a species of unacknowledged realism about what ultimately reflexivity is for.
Excellent stuff – and surely a challenge that will prove to be one of the defining features of STS research over the coming decades.