Digitalisation: A Union Action Guide For Public Services, Work and Workers

This report, written by Christina J. Colclough, sets out the issues that public service unions face as public services and employment become digitalised, the actions unions can take and the resources available.

This is the first publication of PSI’s 3-year project Our Digital Future - a partnership with FES and EPSU to ensure public service unions and workers understand the challenges digitalisation poses to workers, unions and public services and are empowered to influence them.

This report provides a snapshot of the key digital developments and discussions within international organisations, political bodies and amongst leading experts that are relevant to the core political and thematic work of unions, particularly those with members in public services. While it was written primarily for the affiliated unions of Public Services International, its core learnings and strategies have relevance for the wider labour movement.

Grouped under eleven different headings, the report offers a critical overview of topical priorities and selected literature. Throughout the report, the focus is on three key areas:

  • the direct effects of digital technologies on public service workers

  • how the public sector can, and should, govern data and algorithmic systems to ensure Quality Public Services

  • how workers’ data rights and privacy rights must be improved through negotiating much stronger data rights.

Each section ends with a list of areas of exploration for unions. These recommendations seek to bridge the gaps and to empower not only unions and workers but also public services as a whole as digitalisation is infused into public sector work.

This is by no means an exhaustive report. It is concerned with how digital technologies, broadly speaking, are affecting our core sectors and priorities. It also presents some of the key digitalisation narratives inside multilateral institutions and as pushed by the private sector. It therefore does not critically address all technologies, such as the emerging use of blockchain, nor does it discuss issues such as net neutrality or the role of tech companies in guaranteeing or limiting freedom of speech. It should, rather, be seen as a reference document helping to embed union work in a digital context and suggesting possible ways forward. The report ends with a short summary and reflections.

Learn more about the project

Our Digital Future

This project is implemented by PSI in partnership with Friedrich-Ebert-Stiftung (FES).

https://publicservices.international/resources/projects/our-digital-future?lang=en&id=11534&showLogin=true

Executive Summary

Digital technology is not born evil. It is not born good either. The impact of digital technologies on the quality of work, on workers’ and trade union rights, on human rights and privacy rights, and on the types of work that will be available is essentially a result of the regulation that is - or is not - in place. Through collective agreements, public service workers and their unions are well placed to ensure that workplace digital technologies augment and support worker wellbeing and gender equality, and not the opposite. But this demands that staff representatives and unions build their capacity to negotiate on the core of digital technology - namely data. Workers’ data rights are poorly developed across the world. This is no coincidence and is most certainly a result of heavy industry lobbying. These rights need to be improved. Unions also need a foundation of knowledge about the different types of digital technology and, importantly, about the instructions given to artificial intelligence, algorithmic systems and machine learning. They need to know what transparency to demand, and they need to be ready to hold management accountable.

There is no way around acquiring this knowledge and understanding if unions are to maintain or build power in the workplace and society. They need to know what questions to ask management about the systems, and they must ensure that management mitigates adverse effects on workers. We know datasets are biased, because we humans are. To ensure diverse and inclusive labour markets, workers need a seat at the governance table.

Quality Public Services are not a given, and the quest by public sector unions to ensure them is under attack from the corporate power grab taking place through digitalisation. Unions across all sectors need to reach out and work with citizens to raise public awareness and campaign together for the democratic provision of QPS. Unions and their members need to be aware of the problems of digitalisation in outsourcing and privatisation and the dangers that need to be exposed and opposed. We must push for better management and governance practices concerning the terms of engagement with private companies in any form of PPP, procurement or outsourcing. This will allow us to retain the ability to assert democratic control and preserve the legal and practical possibility of remunicipalisation. We must work with the public to educate them about these dangers to service quality, inclusion and democracy. Data extracted and generated in these public-private relations must at the very least be jointly controlled and jointly assessed. If not, the public sector will lose its autonomy to interpret and act upon the data findings. Without public sector capacity building, a void is created that will only allow the continuation of the private sector power grab, leading to even more public sector dependency and a hollowing out of the public sector’s means to govern.

Unions collectively should become the spearhead of an alternative digital ethos - one that does not commodify and objectify workers and citizens, but empowers them. Public service unions have a key role to play, as they represent the workers who see the impact on society, carry responsibility for regulating digital capital, protect democratic institutions and are involved in public policy making. Here, stronger workers’ data rights are an essential prerequisite for the establishment of collective data rights and the formation of data cooperatives, data collectives, data commons, open data and/or data trusts. If nothing changes, workers and their unions will very soon be subject to opaque digital systems that they - and indeed management in many circumstances - have little power over.

As the social and economic consequences of the COVID-19 crisis become clearer, there seems little doubt that we are entering years of economic hardship. With this will follow the need to find cost-cutting measures in both the public and private sector. Automation of tasks and jobs and the expanded use of digital technologies will most likely occur, although their long-term cost-saving benefits are actually not clear. To fund the welfare state and ensure quality public services, new tax regimes and, through these, a renewed redistribution of wealth will be vital.

The digitalisation of our economies, public services and societies demands a digitally-informed response from unions.

Foreword

By Rosa Pavanelli, General Secretary, PSI

COVID-19 has all too clearly shown how digital tools have become an integral part of our work and societies. In the public sector, workers are faced with increased surveillance and monitoring as employers try to mitigate the “risks” of remote working. In addition, to ensure the needs of citizens are met, public authorities and agencies are mobilising quickly to put new e-systems in place. Contact-tracing apps may be the best known right now, but the digital transformation goes right to the heart of public services: to healthcare, social benefits, infrastructure, citizen safety (through CCTV and other monitoring mechanisms) and education. Whilst many economies have plummeted, the stocks of Apple, Amazon, Alphabet, Microsoft and Facebook, the five biggest US IT companies, which also own much of the technology we use, rose 37 percent in the first seven months of 2020 alone (NYT, Aug 19, 2020).

Big Tech’s Domination of Business Reaches New Heights

As the economy contracts and many companies struggle to survive, the biggest tech companies are amassing wealth and influence in ways unseen in decades.

https://www.nytimes.com/2020/08/19/technology/big-tech-business-domination.html

This is not new. It has been underway for some time, blurring the boundaries and responsibilities between public and private actors in everything from e-government and e-governance, education technologies and public procurement outcomes to public infrastructure monitoring and public workplace human resources. The digital transformation is, however, accelerating like never before across the world. It is likely to continue as the expected economic downturn in the wake of COVID-19 places pressure on public sector budgets for years to come. The risk is that in this environment governments will not use these changes to provide better and more universal quality public services, but will instead accelerate cost cutting, outsourcing and privatisation through a rapid adoption of new technologies at the expense of users and workers. We must ask: how should we as workers in the public sector respond? What are the key discussions being held by international political and industrial organisations, and how can we form and position our responses?

This is not the first time that workers and unions have had to grapple with the rapid introduction of new technology. As in the past, we know that an unfettered rapid digital transformation with no worker or community involvement is unacceptable. We must be vigilant and demand good governance - with us at the table. We must defend and develop our rights. We must demand that we are party to any digital transformation be it at national, regional, local or sectoral level. And we must demand that our governments across the world put the long-term interests of their people above a blind faith in the miracles of digital technology. The risk of austerity comes at a time when we need more sustainable investment, regulation of unaccountable corporations, and innovation from the state, not less.

The digital transformation will demand much of us as unions. We too must have the foresight and courage to change. We must fight to ensure our own digital future as workers, but we must also work with the community to ensure that government and public services are democratic, inclusive and of high quality for all. For this we need to be prepared. This report helps unions understand these issues and sets out how we can best face these challenges in the coming years.

1. Democracy - the Duty, Rights and Means to Govern

This section deals with the very essence of the digital economy: the datafication of all social and economic relations. It does so in four main parts.

  • It firstly is concerned with the datafied welfare state and the move away from social issues and problems being understood as shared, to a logic that attributes “risk” to the individual.

  • Secondly, it argues that public services can both abuse, and be abused by, digital technologies, causing widespread discrimination and attacks on human rights. Governance of digital technologies, together with transparency and accountability around their intended and unintended purposes and structure, is urgently needed.

  • Building on the first two parts, the third calls for the establishment of collective data rights as a principle, exemplified in practice through the formation of data collectives and/or data trusts.

  • The section ends with a critical evaluation of the public procurement regulations and guidelines that aim to govern trillions of dollars in worldwide public spending. It argues that, to protect and ensure the duty, rights and means to govern, public authorities must have much stronger control over, and access to, the data generated and extracted in the procured task.

The Datafied State

Social services, means testing and students’ grades are in many states data-driven (Dencik, Lina, 2021, forthcoming). Local, regional and/or national levels of government are generating their own data from e-governance processes, buying data, for example, from data brokers, or relying on third-party analytics software to interpret it. Within public services in many parts of the world, datafication and profiling are replacing would-be face-to-face interactions with caseworkers. Core decision-making processes, as well as their execution, are therefore being partially or completely delegated to automated systems, producing de facto automated decision-making.

The datafication of the welfare state relies on elaborate digital technologies, many of which are supplied to the state by private companies. As efficiency has conveniently been tabled as a goal in and of itself, the private sector has been quick to offer its services at attractive prices. Corporate consultancy companies, lobbyists and big tech firms have succeeded in creating an efficiency narrative that governments have bought into. This has not least been manifested in decades of neo-liberal political economics, which claims that regulation is an innovation-damaging burden and that the public sector is bureaucratic and too heavy.

The private-sector interest in the datafication of the state must be met with caution. As private companies, supported by the OECD, the EU and the UN, embrace e-governance, we must critically ask whether the governance institutions of the state have kept pace with the adoption of digital technologies. Who is holding these private companies and digital technologies accountable, and how?

Take for example Cambridge Analytica, Palantir and Clearview AI.

The now dissolved Cambridge Analytica, through political profiling based on social media datasets, meddled in the 2016 US election, the UK Brexit vote, and elections in Trinidad and Tobago, Australia, India, Mexico, Zambia, South Africa, Italy, Argentina and more. In a 2018 sales pitch, executives boasted that the company had worked on more than 200 elections around the world. Going right to the core of democracy - the right to vote freely - Cambridge Analytica micro-targeted the electorate, using harvested Facebook data to serve voters typically transient and fleeting adverts, beyond easily identifiable means of scrutiny or accountability.

Major tech firm Palantir is in no way any better. Indeed, Palantir helped Cambridge Analytica, and Palantir’s co-founder and chairman sits on Facebook’s Board of Directors. Palantir’s surveillance software is used by many police departments in the US as well as by the US Immigration and Customs Enforcement agency (ICE). Palantir has been subject to many lawsuits, including a racial discrimination lawsuit (2016).

On September 26, 2016, the Office of Federal Contract Compliance Programs of the U.S. Department of Labor filed a lawsuit against Palantir alleging that the company discriminated against Asian job applicants on the basis of their race. According to the lawsuit, the company "routinely eliminated" Asian applicants during the hiring process, even when they were "as qualified as white applicants" for the same jobs. Palantir settled the suit in April 2017 for $1.7 million while not admitting wrongdoing.

Clearview AI, which the New York Times called “The Secretive Company That Might End Privacy as We Know It”, is another example of a rights-violating business built on the idea that technology can solve any problem at any cost. Clearview AI helps law enforcement match photos of unknown people to their online images. The system - whose backbone is a database of more than three billion images that Clearview claims to have scraped from Facebook, YouTube, Venmo and millions of other websites - goes “far beyond anything ever constructed by the United States government or Silicon Valley giants”, claims the New York Times. Police officers said that while they had only limited knowledge of how Clearview works and who is behind it, they had used its app to help solve shoplifting, identity theft, credit card fraud, murder and child sexual exploitation cases.

We must ask what problems these technologies are trying to solve, whether it is appropriate to use these digital tools for the problems identified, and who is actually defining the problem in the first place. The consequences of poorly understood and poorly governed data-driven systems in the state are ultimately an attack on democracy and on the idea of a universal, inclusive and empowering state. They mark a move away from a collective vision of social issues and problems towards a logic that attributes “risk” to the individual. This individualisation and risk-based approach is facilitated by data-driven inferencing, used in some countries in welfare benefit calculations, predictive policing and algorithmically defined exam grading.

“When public sector organisations integrate tools and platforms from providers within this [digital] economy to administer the welfare state, they therefore implement not only the systems themselves, but also a regime that propels the further datafication of social life.” Dencik, Lina (2021, forthcoming)

This, coupled with biased algorithms (see the section on Bias and Inequalities below), has led to a proven ‘disadvantaging of the already disadvantaged’ - especially people of colour, people in poorer neighbourhoods and women. This has been all too evident in the use of algorithms in predictive policing and in the justice system, where artificial intelligence has poorly recognised people of colour (one examination of facial-analysis software showed an error rate of 0.8 percent for light-skinned men and 34.7 percent for dark-skinned women). Numerous studies have shown that algorithms exacerbate gendered bias disfavouring women. In the world of work, Facebook’s microtargeting ad service helped dozens of Canadian public and private sector employers exclude people from receiving job ads on the basis of age and gender.

Use of Facebook targeting on job ads could violate Canadian human rights law, experts warn | CBC News

Facebook is allowing employers — including federal, provincial and municipal governments — to post job ads that target prospective employees in a way that some experts say could violate Canadian human rights law, CBC News has learned.

https://www.cbc.ca/news/politics/facebook-employment-job-ads-discrimination-1.5086491

In the datafied state, citizens and workers are subject to massive surveillance as data is extracted through the use of digital technologies. These datasets are aggregated, analysed and used to predict and manipulate behaviour and opinions.

The Rise of EdTech

Whilst at least 463 million students have no access to remote schooling, the global Education Technology (EdTech) market is expected to reach USD 89.1 billion by the end of 2020, up from USD 76.4 billion in 2019. With a predicted annual growth rate of 18%, the market is anticipated to reach USD 285.2 billion by 2027.
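
As a rough plausibility check (an illustrative calculation only, assuming simple annual compounding of the stated 18% growth rate from the 2020 figure), the projected figures are broadly consistent with each other:

```python
# Illustrative check of the projected EdTech market size, assuming
# simple annual compounding of 18% from the estimated 2020 base.
base_2020 = 89.1   # USD billion, estimated market size at end of 2020
growth = 0.18      # predicted annual growth rate
years = 7          # 2020 -> 2027

projection_2027 = base_2020 * (1 + growth) ** years
print(f"Projected 2027 market size: ~USD {projection_2027:.1f} billion")
# Prints roughly 284, in line with the USD 285.2 billion cited above;
# the small difference comes from rounding and the exact base year used.
```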

Yet Education International, the Global Union for organisations of teachers and other education employees, conducted a global member survey on digital technologies in 2020. More than two-thirds of the member organisations answered that they were not involved in the assessment of digital technologies in the workplace.

Deepening digital divides, these technologies aim to support learner and teacher assessments, scheduling tools, individualised learning plans, instant teacher-learner feedback, and much more. Common to these tools is that they are 1) digital, 2) data-driven and data-extracting, and 3) the products of private companies. One such company, ALEF Education, claims to extract 50 million data points per day.

The COVID-19 pandemic has only accelerated what Ben Williamson and Anna Hogan describe as “global edtech industry solutionism,” whereby private and commercial actors have “set the agenda, offered technical solutions for government departments of education to follow, and is actively pursuing long-term reforms.”

This raises the questions: where does this leave the human rights and privacy rights of educators and learners alike? Who has the responsibility to check whether these tools are exacerbating or bridging inequalities? Are they reaching rich areas or poor, urban environments or rural? Are educators, with their wealth of knowledge, pedagogy and emotional insight, involved in the assessment of these technologies and their impact on learners?

The former UN Special Rapporteur on Extreme Poverty and Human Rights, Philip Alston, has flagged how technology solutionism can be extremely dangerous for equality. The rise of what he labelled “the digital welfare state” has been accompanied by considerable budget cuts, intrusive forms of conditionality, opaque and immutable decision-making processes, and the indiscriminate collection and processing of personal data.

This mirrors the warnings political scientist Virginia Eubanks gave in her 2018 book, Automating Inequality: How High-Tech Tools Profile, Police and Punish the Poor. Here the author outlines the life-and-death impacts of automated decision-making on public services in Indiana in the USA through three case studies relating to welfare provision, homelessness and child protection services. As book reviewer Louise Russell-Prywata explains: “The story of Indiana’s welfare reform contains all the key elements of an automation bogeyman: an explicit aim to reduce costs and move people off benefits; a whiff of dodginess about the award process for a $1.3 billion contract to privatise a state service; widespread tech failure upon implementation; the inability to effectively hold the corporate contractor to account for this failure; the removal of human connections; and pressure on community services such as food banks to deal with the consequences.”

In summary, unless properly and democratically governed, the digitalisation of public services runs the risk of re-commodifying citizens and subjecting them to adverse control, bias and discrimination. A key requirement for ensuring quality public services that are also inclusive, democratic and based on collective risk will be to decouple digitalisation from the current private-sector-led logic of efficiency and to enforce new norms and new governance mechanisms.

The Public Sector’s Duty to Govern

As some of the examples above demonstrate, public services can both abuse, and be abused by, digital technologies, causing widespread discrimination and attacks on human rights. Governance of these technologies, as well as transparency and accountability around their purpose, structure and intent, is urgently needed. Whilst several, yet globally sporadic, regulations are in place, such as data protection regulations [e.g. the California Consumer Privacy Act (CCPA) and the European General Data Protection Regulation (GDPR)], much more needs to be done, and the public sector must lead the way. Simply put, privacy protections that hinge on obtaining individual consent to enable data to be captured and repurposed by companies and the state cannot be separated from the discriminatory, or otherwise adverse, outcomes of that use. This is especially true in an era when most of us (including technology companies themselves) cannot fully understand what algorithms do or why they produce specific results. Whilst the state needs certain amounts of data to operate, every public authority or agency deploying algorithmic systems must be held accountable for their use. These systems should be democratically managed with the consent and participation of multiple voices (especially those of citizens but importantly also workers), as discussed below in the section Governing Algorithmic Systems.

A further concern is data ownership: states must urgently begin to look into how the data derived from and generated by private and public actors alike should be collectivised for the public good. The next section is concerned with these issues.

Collective Data Rights

PSI has already begun exploring the importance of shifting from an individualistic view of data to one of collective ownership, access and control. Parminder Jeet Singh’s research commissioned by PSI argues that data-sharing must be made mandatory to safeguard democracy.

“Widespread access to society’s data – currently in the hands of a few digital corporations – is a precondition for a fair economy, quality public services, public policy-making and democratic governance. Asserting collective ownership rights over data is one of the most fundamental policy issues of our time.” (Singh, PSI 2020)

Establishing public data infrastructures, such as data trusts, is, he argues, the very foundation of a strong domestic digital and AI industry.

A data trust provides independent, fiduciary stewardship of data
From the Open Data Institute

Data trusts are an approach to looking after and making decisions about data in a similar way that trusts have been used for other forms of asset in the past, such as land trusts that govern land on behalf of local communities. They involve one party authorising another to make decisions about data on their behalf, for the benefit of a wider group of stakeholders.

With data trusts, the independent person, group or entity stewarding the data takes on a fiduciary duty. In law, a fiduciary duty is considered the highest level of obligation that one party can owe to another – a fiduciary duty in this context involves stewarding data with impartiality, prudence, transparency and undivided loyalty.

There are lots of ways to steward and govern the sharing of data, and other types of ‘data institutions’ – this is just one of them.

Pension funds are an example of institutions with fiduciary responsibilities; credit unions are another. Pentland et al. argue in a recent white paper that credit unions could very well become data unions.
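
To make the fiduciary idea more concrete, the following is a minimal, purely illustrative sketch, not an existing implementation; the class, field and rule names are hypothetical. It shows how a trustee might decide on data access requests on behalf of the people whose data is pooled, guided by purposes and redlines the members have agreed:

```python
# Purely illustrative sketch of a data trust: a trustee decides on access
# requests on behalf of the people whose data is pooled, guided by purposes
# and redlines the members have agreed. All names here are hypothetical.
from dataclasses import dataclass, field


@dataclass
class DataTrust:
    purpose: str                                   # what the pooled data is for
    allowed_purposes: set = field(default_factory=set)
    redlines: set = field(default_factory=set)     # uses that are never permitted

    def review_request(self, requester: str, requested_use: str) -> bool:
        """Fiduciary decision: approve only uses that serve the members."""
        if requested_use in self.redlines:
            return False
        return requested_use in self.allowed_purposes


trust = DataTrust(
    purpose="community mobility planning",
    allowed_purposes={"aggregate transport statistics"},
    redlines={"individual profiling", "resale to data brokers"},
)
print(trust.review_request("city transport dept", "aggregate transport statistics"))  # True
print(trust.review_request("ad network", "individual profiling"))                     # False
```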

The main argument in Singh’s work is that we must have collective ownership over the systemic digital intelligence about ourselves currently extracted mainly by private companies, as well as over the data from which it is derived. The paper argues that a society’s data and digital intelligence are public goods and hence should be treated and provided as such. Singh continues: “Public data infrastructures have to be a key part of the new digital institutional ecologies. Most of them will be directly run by the public sector as a part of existing public departments or agencies in different areas, or will be operated by setting up new cross-sectoral agencies. Some data infrastructures could be managed in partnerships with nonprofits or businesses, and others run privately as regulated utilities. Effective regulation for data markets is also required. Public sector capacities need to evolve for all these roles.”

Singh and PSI call for a long-term vision - an alternative digital ethos - one led and shaped by the trade union movement in cooperation with other progressive forces that emphasises the collective rights to data.

In the medium-term, trade unions could begin to create ‘worker data collectives’ by pooling their members’ data into data trust structures. Colclough (2020) argues that Workers’ Data Collectives will prevent the irreparable commodification of work and workers and could become a knowledge and insight hub to boost union campaigning.

With a well-defined purpose, governance structure, policies and redlines for the use of, and access to, the Collective’s data, the pooling of individuals’ data will empower workers and offer alternative interpretations of the real world to those currently unilaterally controlled by data-extracting companies. The establishment of Workers’ Data Collectives at the national and international level will naturally require union capacity building (see the section on Skills and Competencies below), but much inspiration can be found in the historical relation between trade unions and credit unions.

It is essential in the short term that unions collaborate to prevent the private sector colonisation of the public sector through digital technologies, and to hold governments accountable and responsible for the technologies they use. Collective data rights must be established.

A possible place to start would be to revamp global and national procurement regulation and guidelines. The next section deals with why.

Public Procurement

Much work has been conducted by PSI on public procurement and its effects on workers, working conditions and sustainability (see Tackling the Challenges of Global Urbanization, 2019, and Digital trade rules and big tech, 2020). This section concentrates on the digital aspect of procured tasks. The focus is not on the digitalisation of procurement processes or on how public procurement, outsourcing and PPPs affect working conditions, as this is briefly covered in the section Work, Workers Rights and Workplace Governance. Rather, this section is specifically concerned with the power imbalance between contracting authorities and private contractors in relation to the data extracted and generated in procured tasks. It argues that the “state duty to protect” indeed extends to situations where a commercial “nexus” exists between public actors and businesses - namely the datafied nexus of knowledge and information.

In May 2020, the World Bank estimated that public procurement — the process by which governments purchase goods, services and works from the private sector — amounted to $11 trillion out of global GDP of nearly $90 trillion in 2018. In other words, 12 percent of global GDP is spent following procurement regulation. The majority of public procurement spending in the OECD countries (63%) is carried out at sub-central government level (OECD 2017).

Whilst many global guidelines and regulations exist that aim to open markets for procurement bidding (see literature list) and to increase transparency and accountability, very few of them include explicit mention of data obligations between private actors and the public sector. Indeed, the European Commission states: “The EU public procurement Directives regulate the procedures governing purchases by public bodies but do not intervene into the subject of purchases. This refers also to a possible data produced within contracts awarded following public procurement procedures. Having said that, due to the very nature of public procurement (purchasing on the open market something a public buyer needs) normally contracts provide that rights to any data created within them remain with the contracting authority.” (DG Internal Market, Industry, Entrepreneurship and SMEs, email of August 28, 2020)

In other words, in the EU it is down to the contracting authority (the public authority) to determine whether data rights, or data control and access, should be part of the contract. It was not possible to obtain examples of such contract provisions, yet this does not mean they don’t exist.

The ILO’s 2011 “Terms and Conditions Applicable to ILO Contracts for Services” Annex 1 art 6.1.1 includes wording that provides better protections, namely:

“All documents (including drawings, estimates, manuscripts, maps, plans, records, reports, recommendations) and other proprietary items (including data, devices, gauges, jigs, mosaics, parts, patterns, photographs, samples, and software) (jointly referred to as Proprietary Items), either developed by the Contractor or its Personnel in connection with the Contract or furnished to the Contractor by or on behalf of the ILO to support the performance of the Contractor’s obligations under the Contract, are the exclusive property of the International Labour Organization; and, will be used by the Contractor and its Personnel solely for the purposes of the Contract.”

It has not been possible to find out from the ILO whether this mention of “data” as a proprietary item includes the data extraction and generation from algorithmic systems.

As digitalisation progresses, with the current imbalance of data control and access in the hands of very few private companies, coupled with a growing subsector of often opaque data broker firms, Singh (2020) warns that democracy is at stake.

Currently the US and China together account for 90 per cent of the market capitalisation value of the world’s largest digital platforms. These platforms in turn are superpowers dominating markets and societies. Seven of them - Microsoft, followed by Apple, Amazon, Google, Facebook, Tencent and Alibaba - account for two thirds of the total market value.

If public authorities rely mostly on the data analyses or the tools designed to do the data analysis marketed to them by these private companies, then their scope for developing their own datasets and interpretations of the data findings disappears. This overreliance on the private sector locks the authorities into a dependency relation, characterised by an uneven access to not only information, but also to the knowledge that can be derived from the data. A vicious cycle is formed, where the capacity building inside public authorities to gather, understand, store and make use of data lags behind that of private actors, leading to a further dependency on the private sector and less capacity building et cetera. In addition, public services and procurement authorities must be trained to ask some very important questions:

  • What are the individual and collective risks involved in using a digital technology?

  • What mitigations need to be made to overcome bias and discrimination in the technology?

  • What problem can or will the technology solve?

  • Is the tool’s risk and impact profile proportionate to its use?

  • What data is generated and extracted, and what privacy-preserving measures need to be in place to ensure responsible joint data access and control?

  • Is the problem at hand better solved through non-technical means?

As Mulligan and Bamberger (2019) write: “These [machine learning] systems frequently displace discretion previously exercised by policymakers or individual front-end government employees with an opaque logic that bears no resemblance to the reasoning processes of agency personnel. However, because agencies acquire these systems through government procurement processes, they and the public have little input into—or even knowledge about—their design or how well that design aligns with public goals and values.”

In the World Economic Forum’s 2020 AI Procurement in a Box reports and toolkits, it is remarked that:

“The lack of data sharing and data governance in the public sector often leads to a lack of data availability, discoverability and usability. Since data is currently often the basis of any AI development, these challenges are a great barrier to AI adoption” (p. 6).

Whilst it is not surprising that the WEF is promoting an agenda to open up data availability for the spread of AI across government, it is striking that even the WEF notes that there is a lack of governance. Less surprising is that these otherwise elaborate toolkits and recommendations do not address the fact that data produced in procured tasks is often held exclusively in the hands of the contractors, reinforcing the vicious cycle: lack of data - lack of public sector capacity - lack of public sector governance. This reflects a broader trend in which public data is expected to be made freely available to corporations while privately held data is monopolised for their exclusive use, profit and power. It seems that neither workers nor trade unions were members of the stakeholder groups informing the reports and guidelines. This is evident in both the Workbook for policy and procurement officials and the AI Government Procurement Guidelines report. In both, employees are only mentioned as those affected by AI, not as those who can also co-govern AI in the workplace (see the section Governing Algorithmic Systems in this report for a suggestion as to how that co-governance could take place).

Joint Data Access and Control in Procurement

The public sector faces a two-fold risk. Firstly, the increasing risk that it lacks the data, and the knowledge derived from that data, to actually perform its duties in the public interest. Secondly, the risk that it lacks the governance structures and capacities to govern those systems so as to secure the public good against corporate capture.

Some initiatives in the EU and the US have been tabled to address the first challenge specifically. In 2018, the European Union established a High-Level Expert Group on Business-to-Government (B2G) Data Sharing. Its work concluded in 2020 with the publication of a final report and recommendations, “Towards a European strategy on business-to-government data sharing for the public interest.” The report includes examples of good practice, including mention of several national legislations that require access to private data (page 34). The experts conclude by advising that, in order to make data sharing in the EU easier, policy, legal and investment measures are needed in three main areas:

  1. Governance of B2G data sharing across the EU: such as putting in place national governance structures;

  2. Setting up a recognised function (‘data stewards’) in public and private organisations, and;

  3. Exploring the creation of a cross-EU regulatory framework.

The report also recommends that public authorities enhance transparency, citizen engagement and ethics: such as making B2G data sharing more citizen-centric, developing ethical guidelines, and investing in training and education.

Interestingly, select members of this expert group have also worked with the US-based think tank GovLab on its Open Data Policy Lab’s “Leveraging Private Data for Public Good: A Descriptive Analysis and Typology of Existing Practices”. This Lab and its subtasks explore how ‘data collaboratives’ can be the structure that enables the use of private-sector-held data for the public good. GovLab recommends that so-called data stewards coordinate, drive decision-making and review and implement opportunities for unlocking the public value of a company’s data. During 2020 the Open Data Policy Lab spearheaded The Summer of Open Data, a three-month project in which they interviewed public and private experts involved in open data projects at local, regional and government level, as well as national statistical agencies and international bodies. Their panel discussions and videos cover topics such as the state of subnational data, developing data collaboratives at local and community level, promoting the sustainability of data projects, advocating for open data’s value, and developing appropriate data skills. Here are just two of the conclusions from the panels:

“We are seeing localities without any sort of data infrastructure thinking and looking into data more than ever. They need [data] to understand transportation issues [and] economic issues.”
“Beyond just viewing it as a tool for transparency, governments needed to incorporate data and open data into an overall strategy targeted at their priorities. Partnerships and new approaches to data reuse, such as data collaboration, could be used to help institutions capture and realize this value for themselves and their citizens.”

The second issue has to do with the contracting authorities’ competencies to negotiate on shared data access and control, as well as to govern algorithmic systems in the workplace. With regard to competencies at all levels of public authority, this simply must be dealt with by employing the best people in government and giving priority to training. Work in the public sector needs to be attractive and well paid. With this, tenure will rise, and so will the competencies of public authorities to negotiate good procurement deals that put the interests of the public centre stage.

Furthermore, the section Governing Algorithmic Systems elsewhere in this report provides a model for the co-governance of algorithmic systems. It aims to bring workers and responsible managers to the table to put sound and periodic governance mechanisms in place, ensuring that all algorithmic systems comply with the law and with the AI principles adopted by many governments. Here too, competencies must be prioritised and improved.

Areas of Exploration for Unions

This section has looked at public services’ duties, rights and means to govern in an increasingly digitalised world. Unless properly and democratically governed, the digitalisation of public services, whether through the misuse of their own data or through dependency on private companies, runs the risk of re-commodifying and individualising citizens and subjecting them to adverse control, bias and discrimination. To defend democracy and democratic control from the private sector power grab, a transition to collective data rights, control and access will be essential.

In this respect, unions would benefit from the following activities:

Short term

  1. Hold workshops and training on the governance model for algorithmic systems and on data rights, including the risks of automated decision-making tools in public services exacerbating inequality and the importance of governments safeguarding their right to access data and AI that is in the public interest.

  2. Build case studies that ‘make real’ the implications of not safeguarding government and public sector control of public interest data.

  3. Find examples of good data governance in government and public services.

  4. Bring together interested unions and partners to examine concrete examples of the lack of digital governance in specific sectors, and recommend policy and actions to further this work.

  5. Investigate the number of procurement contracts that include joint data access and control or sole data access and control. Compile and share these insights.

  6. Map which public procurement regulations or guidelines could most likely be improved to include stronger collective data rights, including joint data access and control.

  7. Demand that governments establish central agencies that can provide advice to all parts of government on core policy issues regarding the digitalisation of public services, workers’ rights and the economy.

Medium term

  1. Instigate, with public authorities, training on the dangers that outsourcing and PPPs pose to data control, and build the capacity to protect public access to data where outsourcing and PPPs cannot be stopped.

  2. Run coordinated campaigns on procurement and data.

  3. Conduct investigative work on data trusts/workers’ data collectives.

  4. Engage with relevant international and regional bodies: UN, OECD, WEF, EU, AU and others on discussions concerning joint data access and control as a way to protect democracy and the rights and means to govern.

  5. Work with PSI and TUAC, the trade union advisory committee to the OECD, to map the link between the United Nations Guiding Principles on Business and Human Rights (UNGPs) and articles on data access and control. Can the UNGPs be used to support government/public authorities’ autonomy?

Long term

  1. Build worker data trusts to empower workers.

  2. Build public community data structures to ensure governments have the data needed to govern responsibly and in a privacy-preserving, efficient way, while at the same time empowering citizens.

Key Literature

Data trust literature

Procurement Literature

2. Work, Workers Rights And Workplace Governance

This section begins with a description of the rising precariousness of work in public services and reflections on what effect the Covid-19 crisis will have on public sector work. We then move to the influence of digital technologies and automation on workers’ rights, before ending with reflections on potential union strategies to empower workers and safeguard their rights through collective bargaining and co-governance structures.

This section makes the argument that data and algorithmic systems are the foundation of all digital tools and services. As we will discuss in other sections, employers are either buying or leasing proprietary tools and systems or designing their own. This is directly and indirectly affecting workers, the nature of their work and their rights. As a result, workers and their unions must demand much stronger rights over the data extracted and generated at work, as well as a seat at the table in the governance of algorithmic systems.

The Changing Nature Of Work

PSI and EPSU have published several reports on the effects of digitalisation on work. They portray a rising precariousness of public sector work, in some countries spurred by a combination of fiscal austerity policies and a digitalisation strategy that mainly aims to cut costs through layoffs and staff reductions in public services (p. 66, PSI). EPSU’s report highlights that the biggest impact of digitalisation is on the intensification of work (nearly two thirds of respondents), followed by monitoring of work and workers, performance-oriented management and mental health outcomes (over half of respondents). Those surveyed also mentioned a perceived loss/standardisation of social relationships with colleagues (37%).

The summary report of Digitalisation and Public Services: a Labour Perspective very clearly lays out how digitalisation affects public service employment and labour markets in five main ways (p. 20-21):

  1. Creating jobs and new professions related to new digital technologies (e.g. big data analysts, app designers, cybersecurity specialists, digital device and maintenance experts, digital research and development engineers, etc.);

  2. Destroying jobs and tasks that can be digitalized, automatized or robotized, especially low value-added and low skill, simple cases, repetitive tasks, or dangerous, tedious or strenuous work (e.g. invoice handling and processing, database management, administrative tasks, security and surveillance patrolling, routine medical testing, etc.);

  3. Changing employment content (e.g. complexification with more skills and tasks required to perform the same job), blurring the workplace and work/life boundaries (combining mobile work, office work and telework), and bringing in new forms of digitally-enabled management (e.g. digital check on working time and performance, digital office sharing and teamwork software etc.)

  4. Changing the relation with citizens/service users, notably reducing human contact and interface (e.g. intelligent machine interfaces, chatbots and digital user services and care; phone conversations replaced by computerized user handling; online interfaces for service access and self-service digital facilities such as in registry services and libraries; smart meters and automated consumption sensors, etc.)

  5. Shifting the employment relationship, following the rise of digital employment services and platform work, typically for outsourced or privatised services (e.g. in health and social services), which is associated with the spread of non-standard, precarious forms of employment, including “bogus” self-employment with no formal employment contract and “zero-hours” contracts with partial or no social protection and security coverage.

As mentioned by General Secretary Rosa Pavanelli in the foreword of this report, these observations and experiences could well be intensified as the public sector scrambles to find ways forward during and post the Covid-19 crisis. At the same time, and not unrelated, the automation of managerial tasks, including human resources, scheduling and performance management is taking place inside public services.

Beware of the jobs

As PSI has flagged on many occasions, the digitalisation of public services is also affecting the types and nature of work in the services, as well as changing the conditions of work. Chapter 3 in the ‘Digitalisation and Public Services: a labour perspective’ report deals with precisely these issues: from employment creation to employment destruction, changes in employment content and in relations with citizens/users, and shifts in employment systems and the relations between employee and employer.

Work is also becoming increasingly intensified as routine tasks are digitised and automated (see the section The Changing Nature of Work in this report); it is monitored and surveilled, and the boundary between work life and private life is blurring. The ‘always on’ culture fed by digital tools has indeed led several countries and many unions to push for the Right to Disconnect (France, Germany, Italy, the Philippines, Luxembourg and Spain have national legislation in place; Belgium, Canada and India are in the process of adopting it).

Job losses are also expected. According to PSI's report, in France it is estimated that between 3% and 8% of staff (40,000 to 110,000 workers) will be affected in the near future, particularly in administrative and technical jobs. The British multi-sector union UNITE believes that over 230,000 of its 1.4 million members could lose their jobs to automation by 2035, with many workers in health services and local government being at risk. The FNV trade union in the Netherlands reported that 1,500 mostly lower-skilled jobs out of a total 15,000 were cut as a result of the digitalisation of legal services.

A Universal Basic Income (UBI) has been proposed as a reaction to the automation of jobs and the expected overall decline in the number of jobs. In PSI’s 2019 report “Universal Basic Income: A Union Perspective”, however, General Secretary Rosa Pavanelli raises significant concerns, describing UBI as “a generous gift to the wealthy who don’t need it.” She continues:

“With a UBI in place many have argued that the state’s obligations to welfare will have been met. That people would then be free to use the money as they best need – free from government interference. With such a large increase in public spending required to fund a UBI it would certainly provide those who prefer market solutions to public provision with powerful arguments to cut what targeted welfare spending might remain.”

There is no doubt that a UBI subordinates the interests of people to those of technological change and to the digital solutionism school of thought. Instead, to combat job cuts, unions should push for a mandatory commitment by public employers to the People Plan - a plan that commits employers to investment in, and the career development of, staff in connection with the introduction of disruptive technologies. The plan is presented in more detail in the People Plan section.

Bias and Inequalities

As clearly discussed in PSI’s reports Digital trade rules and big tech and Digitalisation and Public Services: a labour perspective, ungoverned digitalisation will continue to deepen existing inequalities in the labour market on the basis of skills, gender and other worker characteristics.

Algorithms are biased because humans are or, as some put it, “garbage in, garbage out”. The datasets that algorithms or AI systems are trained on will be biased. This is why we need governance: governance over the instructions given to the algorithms, over the data that goes in and the data that comes out, and over the outcomes of algorithmic systems.
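
As a deliberately simplified illustration (hypothetical data, no real system or dataset), a naive model trained on historically skewed hiring decisions simply reproduces that skew. This is exactly why the inputs, the instructions and the outcomes all need to be governed:

```python
# Deliberately simplified, hypothetical example of "garbage in, garbage out":
# a naive "model" that learns from past hiring decisions reproduces the
# historical bias baked into that data.
from collections import Counter

# Historical hiring records (hypothetical): most past hires were men,
# regardless of comparable qualifications.
past_hires = (
    [("man", "hired")] * 80 + [("woman", "hired")] * 20
    + [("man", "rejected")] * 20 + [("woman", "rejected")] * 80
)

def hire_rate(records, gender):
    """Share of past applicants of this gender who were hired."""
    outcomes = Counter(outcome for g, outcome in records if g == gender)
    return outcomes["hired"] / (outcomes["hired"] + outcomes["rejected"])

# A naive system that scores applicants by the historical hire rate of
# "people like them" will rank men above equally qualified women.
print(f"learned score for men:   {hire_rate(past_hires, 'man'):.2f}")    # 0.80
print(f"learned score for women: {hire_rate(past_hires, 'woman'):.2f}")  # 0.20
```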

UNESCO’s recent report: Artificial intelligence and gender equality: key findings of UNESCO’s Global Dialogue, states: “Research... unambiguously shows that the gender biases found in AI training data sets, algorithms and devices have the potential of spreading and reinforcing harmful gender stereotypes. These gender biases risk further stigmatizing and marginalizing women on a global scale. Considering the increasing ubiquity of AI in our societies, such biases put women at risk of being left behind in all realms of economic, political and social life. They may even offset some of the considerable progress that countries have made towards gender equality in the recent past.”

The report ends with a set of clear-cut recommendations, including that governments should: “Commit to policies, regulations, and mechanisms (proactively and through redress) that promote gender equality in and through AI; encourage the development of AI applications that do not perpetuate bias or negatively impact girls and women but that rather respond to their needs and experiences; create funding mechanisms for participatory AI, access to AI and AI education for girls and women; promote diversity and equality through hiring, supply chain and related practices; and contribute to the collection of sex-disaggregated data.”

Bias and discrimination are not limited to gender lines. Racial discrimination in technologies such as facial recognition, audio software and judicial algorithms is clearly documented. With real-life effects in policing, welfare benefit systems, financial services and hiring algorithms, it is well overdue that we demand that all autonomous systems (algorithms, AI) be governed by multiple stakeholders.

To avoid the commodification of public sector employment across all sectors, workers and their unions will need to forcefully bridge gaps in many of the world’s data protection regulations and negotiate for several key provisions:

  1. Firstly, for much improved workers’ data rights. Indeed, across the world workers are either directly exempted from data protection regulations or partially so. Even in the GDPR, stronger provisions for workers’ data that were tabled by the European Parliament were not included in the final text.

  2. For co-governance of algorithmic systems deployed at work that extract or otherwise include workers’ data or the surveillance and monitoring of workers.

  3. For a new policy making it mandatory for employers adopting disruptive technologies to re-train or upskill their current workforce and to offer training and career transition support to those who will be displaced.

It is to these three issues we now turn.

Workers’ Data Rights, Including the Data Lifecycle at Work

In many of the world’s data protection regulations, workers’ data are either directly exempted or have far weaker protections than citizen data. In addition, all of these regulations are based on individuals’ rights, not collective rights (Colclough, 2020). This results in a growing power grab by employers (public and private) to the detriment of workers’ rights, human rights and trade union power. As part of a longer-term union vision of collective control of, and access to, data (Singh for PSI, 2020), unions should begin to expand their collective agreements by negotiating the Data Life Cycle at work. This will give workers better rights over the data extracted, used, stored and even sold by employers.

The data life cycle depicted here is inspired by the ILO’s 1997 Code of Practice - Protection of Workers’ Personal Data.

Figure 1: The Data Life Cycle at Work by Christina J. Colclough

The data-collection phase covers internal and external collection tools, the sources of the data, whether shop stewards and workers have been informed about the intended tools, and whether they have the right to rebut or reject them. Much data extraction is hidden from the worker (or citizen), and management must be held accountable. Where the GDPR applies, companies are obliged to conduct data protection impact assessments (DPIAs) on the introduction of new technology likely to involve a high risk to others’ information. They are also obliged to consult the workers. Yet very few unions have access to, or even know about, these assessments - unions should claim their right to be party to them.

In the data-analyses phase, unions must cover the regulatory gaps which have been identified - namely the lack of rights with regard to the inferences (the profiles, the statistical probabilities) drawn from algorithmic systems. Workers should have greater insight into, and access to, these inferences and rights to rectify, block or even delete them. Such inferences can be used to determine “optimal” scheduling, wages (if linked to performance metrics) or, in human resources, whom to hire, promote or fire. They can be used to predict behaviour based on historic patterns, emotional and/or activity data. Access to the inferences is key to the empowerment of workers and indeed to human rights. Without these rights, there will be few checks and balances on management’s use of algorithmic systems or on data-generated discrimination and bias.

The data-storage phase is important, but will become more so if the e-commerce provisions on the ‘free flow of data’ being negotiated within, and on the fringes of, the World Trade Organization are implemented. This would entail data being moved across borders to what we can expect would be the areas of least privacy protection. The data could then be used, sold, rebundled and sold again in whatever way corporations saw fit.

Unions must also be vigilant in the data off-boarding phase. This refers to the deletion of data, but also to the sale and transfer of data sets, with associated inferences and profiles, to third parties. Unions should negotiate much better rights to know what is being off-boarded and to whom, with scope to object to or even block the process; this is hugely important in light of the e-commerce trade negotiations. Equally, unions should as a minimum have the right to request that data sets and inferences are deleted when their original purpose has been fulfilled, in line with the principle of data minimisation recognised in the GDPR (article 5.1c).
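As a minimal sketch of the data-minimisation demand in the off-boarding phase, such a check could flag data sets for deletion once their stated purpose has lapsed. The data-set names and retention dates below are invented for illustration.

```python
from datetime import date

# Hypothetical inventory of worker data sets held by an employer
datasets = [
    {"name": "2019 shift-scheduling logs", "purpose": "rota planning",
     "purpose_ends": date(2020, 1, 1)},
    {"name": "current payroll records", "purpose": "payroll",
     "purpose_ends": date(2030, 1, 1)},
]

def due_for_deletion(inventory, today=None):
    """Return data sets whose original purpose has been fulfilled (data minimisation)."""
    today = today or date.today()
    return [d["name"] for d in inventory if d["purpose_ends"] < today]

# Unions could request deletion of the sets returned here
print(due_for_deletion(datasets))
```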

With these improved rights, secured through collective agreements or law, many of the issues related to the surveillance and monitoring of workers could be addressed. They will also lay the foundation for the transition to collective data rights, as mentioned elsewhere in this report and as depicted in PSI's report Economic Rights in a Data-Based Society.

Governing Algorithmic Systems

Improved data rights must be supplemented by new structures in the workplace for the governance of algorithmic systems. These new structures can be enabled through collective bargaining or law. However, whilst many AI principles exist, very few people have worked on how to make these principles implementable in practice. An exception is the Alan Turing Institute and Dr David Leslie’s 2019 report Understanding artificial intelligence ethics and safety: A guide for the responsible design and implementation of AI systems in the public sector. This is a comprehensive guide to public sector governance of algorithmic systems. The complexities at hand are well portrayed, yet the report offers concrete advice on good procedures:

“While aspects of this topic can become extremely technical, it is important to make sure that dialogue about making your AI system interpretable remains multidisciplinary and inclusive.” (page 44 of the report)

In addition, the Digital Future Society recommends in their report “Towards Gender Equality in Digital Welfare” four basic principles for ensuring gender equality in automated decision-making systems:

  1. Gender-relevant datasets and statistics

  2. Gender mainstreaming in planning

  3. Co-design, oversight, and feedback

  4. Equality by default

In order to turn words of good intent into something measurable and workable in workplaces, we need to learn how to govern all (semi-)autonomous systems at work that directly or indirectly affect workers. We will call these systems 'algorithmic systems': a term that covers all data-generating and data-driven systems, whether or not they include artificial forms of intelligence. They all include some form of algorithmic analysis. What is important here is that these systems are used in human resources (e.g. automated hiring systems), for scheduling (e.g. of homecare workers’ routes) and/or for monitoring workflows, speeds and efficiencies. The use of algorithms implies that a result will be produced that says something normative about the worker’s performance or activities.

Workers must be party to the governance of these systems both before they are used and periodically afterwards. What is the purpose of the system, what data is it trained on, what are the risks (bias, discrimination) for individuals or groups of workers (or indeed citizens), and is the algorithm fair, and if so, to whom? After the algorithmic system has been used, its outcomes should be governed. Did it have the intended effect? How do the results match the purpose? Were new discriminations or biases revealed? How can these be mitigated, and how can the system be modified or improved? Figure 2 below depicts the governance cycle.




Figure 2: Co-governance model for algorithmic systems.

Importantly, the results of the evaluations must be logged and saved. This enables aggregate learning from the systems deployed and checks for legal compliance. It is also one way of holding management accountable and responsible for the systems they deploy.
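A minimal sketch of what such a logged evaluation could look like follows. The structure, field names and example answers are assumptions made for illustration; they simply mirror the questions listed above and are not a prescribed standard.

```python
import json
from datetime import date

def log_evaluation(system_name, purpose, training_data, pre_deployment,
                   post_deployment, log_file="algorithm_governance_log.jsonl"):
    """Append one co-governance evaluation of an algorithmic system to an audit log."""
    entry = {
        "system": system_name,
        "purpose": purpose,
        "training_data": training_data,
        "pre_deployment": pre_deployment,    # answers agreed before use
        "post_deployment": post_deployment,  # periodic review afterwards
        "logged_on": date.today().isoformat(),
    }
    with open(log_file, "a") as f:
        f.write(json.dumps(entry) + "\n")
    return entry

# Hypothetical example covering the questions in the text
log_evaluation(
    system_name="home-care routing tool",
    purpose="schedule homecare workers' routes",
    training_data="12 months of historical visit logs",
    pre_deployment={"risks_of_bias": "routes may under-serve rural clients",
                    "fair_to_whom": "assessed jointly by management and shop stewards"},
    post_deployment={"matched_purpose": True,
                     "new_bias_found": "longer routes assigned to part-time staff",
                     "mitigation": "re-weight scheduling constraints; re-evaluate in 6 months"},
)
```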

People Plan

A third union demand, pursued via collective agreement or law, is a mandatory requirement that employers invest in their people as part of any investment in new digital or otherwise disruptive technologies. For example, an employer that invests in a new algorithmic scheduling tool has spent time and money scoping the market, speaking to consultants, buying or leasing the product and perhaps training management. As part of this investment, they should budget for a People Plan, which can include measures such as those depicted in Figure 3 below.

Figure 3: Elements of a People Plan as a mandatory requirement to employers

Areas of Exploration for Unions

Data and algorithmic systems are the foundation of all digital tools and services. Employers are either buying or leasing proprietary tools and systems or designing their own. This directly and indirectly affects workers, the nature of their work and their rights.

Unions should explore:

  1. Strategising for collective data rights firstly through the improvement of workers’ data rights across the world

  2. Establishing training modules for leadership, shop steward education and training experts, and other staff. These modules should cover negotiating the data lifecycle at work, co-governing algorithmic systems and other digitally-relevant issues.

  3. Creating module guides to the above, including topics and questions that will support the affiliates and staff in their transition to negotiating digitally.

  4. Proposing model collective agreement articles to support affiliates in their negotiations.

  5. Conducting a survey on how COVID-19 has changed the nature of work, including its effects on precariousness and digitalisation.

Key literature

3. Digital Trade and Tax

This section is concerned with two issues: firstly, digital trade and the emerging e-commerce discussions in trade agreements as well as in, and on the fringes of, the World Trade Organization; secondly, discussions on digital tax.

Digital Trade

Trade negotiations are a complex field, often riddled with unique terms, references and legal language, making it hard for anyone but the experts to understand. However, what is essentially at stake is national governments’ right to govern: to put regulations in place that serve the interests of their workers and citizens, and to pursue long-term strategies of their own, free from corporate interests, for digital industrialisation that promotes local businesses, policies and processes (see the PSI reports The Really Good Friends of Transnational Corporations Agreement, TISA versus Public Services & Overview of international trade and Africa).

As Professor Jane Kelsey wrote in the PSI report Digital Trade Rules And Big Tech: “The goal [of multinational companies] is to shrink the size and power of the state, expand the size and scope of profit-driven markets, and increase the global power of transnational corporations. As new sources of profit and expansion emerge, so the trade rules expand. Since the 1990s the agreements have targeted government laws and policies on services, including finance and telecommunications, government procurement, intellectual property and technological knowhow. Over the past decade, as the digital revolution gained momentum, there has been a new focus on electronic commerce or digital trade. As the subject matter expands, so do the restrictions on governments’ right to regulate.”

Digital trade rules across various trade agreements and negotiations have very similar wordings. Essentially, the powerful tech elite is attempting to convince governments that by adopting these rules, new development opportunities and potential cost savings will arise. In reality these rules are designed to tie the hands of governments. The rules under negotiation would remove the following key governmental rights: the possibility to enforce data localisation, the right to view source code, the right to demand the physical and therefore legal presence of multinationals, and the right to demand technology transfer.

PSI and OWINFS (Our World Is Not For Sale) led a successful campaign which stopped many of these provisions being adopted in the now shelved Trade in Services Agreement (TISA), but the corporations promoting them have shifted to other forums. PSI’s Digital Trade Rules and Big Tech report lays out how these demands were adopted into the TPPA, the Trans-Pacific Partnership Agreement. The same provisions are now being discussed in, and on the fringes of, the World Trade Organization (see Digital Trade Rules, by James, D: 2020).

In order to bypass WTO negotiations, the USA is signing smaller bilateral trade agreements that include digital trade, the free flow of data and more (for example with Brazil, China and Japan). Unlike a comprehensive free trade deal, these smaller deals do not require the approval of Congress, which can stall an agreement for many months, or sink it entirely. The United States Trade Representative has published the contents of the US-Japan trade agreement, which reveals very similar content to the ongoing WTO negotiations.

Without going into detail on each of the demands, it is interesting to note how they affect all of the sectors and themes raised in this report, including:

  • 1. the right and means for local and national governments to govern freely and thus avoid a private sector power grab (see analysis from Latin America by PSI here);

  • 2. public procurement (public authorities would lose the scope to procure from and prioritise local firms, forcing SMEs to compete against large MNCs with their global supply chains);

  • 3. data rights and the opposition to localising data (which would essentially prevent collective data trusts);

  • 4. Smart Cities (see the example from Toronto below);

  • 5. health care and health data;

  • 6. taxation, and more.

Key Literature

Digitalisation and tax

Tax serves a number of purposes: to raise revenue for government spending, to redistribute wealth and as a regulatory tool to encourage or discourage certain activities by making them more or less expensive. Tax is also a critical part of democracy and the social compact - the demand for accountability from government is strengthened when individuals pay tax to it.

Using tax to regulate Digital change

To measure the actual value of digital activities, UNCTAD lists a number of initiatives in its Digital Economy Report 2019 (p. 50). Bill Gates famously proposed a robot tax at a rate similar to the tax the (now displaced) workers would have paid, so that the revenue could fund more employment in education and elder care. The idea, according to Gates, is also to slow down the speed of the technology’s adoption, to give society more time to adjust. Around the same time, MEP Mady Delvaux put forward a robot tax proposal in the European Parliament, which was strongly voted down.

Other commentators have noted that digital transformations of the labour market will have a profound impact on the revenue base, although the effects will differ in the developing world where corporate taxation makes up a larger proportion of the tax base. In February 2019 the then European Commission Head of Unit for DG ECFIN, Jakob Wegener Friis, wrote: “Displacement of workers as a result of AI is very likely to put significant pressure on public revenues, if the current system based on personal income tax and social contributions does not adapt. This impact on tax revenues may be heightened by the shift from employment based on formal open-ended contracts, with employers generally responsible for collecting tax revenues, to freelance/gig work, which is more complex to tax. It is also likely to require increased spending on unemployment benefits, active labour market policies, and education.”

Whilst many agree that a solution is needed, problems exist with all of the above proposals. What is a robot? Is Excel a robot? How can a definition be made sufficiently bulletproof to avoid loopholes and industry tax avoidance? Also, without better forms of global tax governance we simply risk the rise of robot tax havens. And how do we ensure that platform workers enjoy the same levels of employment and social protection as employed workers, and that the same level of tax is paid?

What could be some ways to change the tax incentives for robots or digital technologies? One solution would be to disallow accelerated depreciation for investments in automation. Professors Abbott and Bogenschneider (2018) propose that businesses with high levels of worker automation could have their tax depreciation automatically reduced beyond a certain threshold.
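The incentive can be illustrated with stylised numbers. In the sketch below, all figures, the 30% tax rate and the 5% discount rate are invented for illustration and are not taken from Abbott and Bogenschneider: immediate expensing raises the present value of the tax deduction on a robot compared with straight-line depreciation over five years, lowering the robot’s effective after-tax cost.

```python
def pv_of_deductions(cost, years, tax_rate, discount_rate, accelerated=False):
    """Present value of the tax savings from depreciating a capital investment."""
    if accelerated:  # full expensing in year 1
        return cost * tax_rate
    annual_deduction = cost / years  # straight-line depreciation
    return sum(annual_deduction * tax_rate / (1 + discount_rate) ** t
               for t in range(1, years + 1))

ROBOT_COST, TAX_RATE, DISCOUNT = 100_000, 0.30, 0.05   # hypothetical figures

straight_line = pv_of_deductions(ROBOT_COST, years=5, tax_rate=TAX_RATE,
                                 discount_rate=DISCOUNT)
expensing = pv_of_deductions(ROBOT_COST, years=5, tax_rate=TAX_RATE,
                             discount_rate=DISCOUNT, accelerated=True)

print(f"Tax saving, straight-line over 5 years: {straight_line:,.0f}")
print(f"Tax saving, immediate expensing:        {expensing:,.0f}")
# The gap between the two is the extra subsidy to automation that
# disallowing accelerated depreciation would remove.
```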

Abbott and Bogenschneider assert:

“The tax system incentivizes automation even in cases where it is not otherwise efficient...The vast majority of tax revenues are now derived from labor income, so firms avoid taxes by eliminating employees.” (p.1)

Daron Acemoglu and Pascual Restrepo (2019) agree in their article “Artificial Intelligence, Automation, and Work” and add:

“At the root of this negative effect is the fact that subsidies induce firms to substitute capital for labor even when this is not socially cost-saving (though it is privately beneficial because of the subsidy)” (p. 225)

The above research was picked up in February 2019 by Eduardo Porter, writing for the New York Times, who argued that many companies invest in automation because the tax code encourages it, not because robots are more productive. He asserts that the purpose of taxing robots is not simply to stop them from killing jobs; it is to level the playing field, ensuring that investments in automation genuinely raise productivity.

Taxing Digital Companies

The ability of multinational companies to artificially shift their profits into tax havens to avoid paying tax in the country where the workers are and the economic activity takes place has resulted in a declining share of corporate taxation over many decades. The global corporate taxation rules, written over 100 years ago, are unable to cope with modern multinational business models. Digital business models developed over the last few decades have exploited these loopholes and enabled digital corporations, who are now the largest companies in the world, to stop paying any corporate tax at all in many cases. PSI’s series Fixing Corporate Tax: Briefs for Workers and Unions outlines how this is done and the devastating impact on workers and unions.

Responding to growing discontent at tax avoidance, the G20 tasked the OECD with developing solutions, resulting in the Base Erosion and Profit Shifting (BEPS) work in 2013. Lobbying by large corporations ensured that the BEPS outcomes were weak and failed to deal with the underlying problem of profit shifting to tax havens. The main reason for its failure was the unwillingness to treat multinationals as single global entities (unitary taxation) and to allocate global tax revenue to countries, using a formula, according to where it is genuinely earned rather than artificially shifted (formulary apportionment). Unitary taxation and formulary apportionment are long-standing demands of the global tax justice movement and the Independent Commission for the Reform of International Corporate Taxation (ICRICT).

However, even the OECD conceded that the level of abuse by digital companies was so great that a new subset of rules was needed for digital business models. Importantly, it also acknowledged that the infiltration of digital business activities into many other sectors made it impossible to separate digital firms from the rest of the economy for tax purposes, and that the solution required taxing corporations on a global basis using a form of unitary taxation. But even these attempts have been weak and have been blocked by the USA, working in the interests of the tech industry.

In May 2020 the South Centre issued a research report, “National Measures on Taxing the Digital Economy”, which examined the problems with the OECD’s attempt to set global standards. It concluded:

  1. Many OECD countries are instead taking national measures, much to the chagrin and dismay of the OECD. Countries are being asked to adhere to the negotiations when OECD countries themselves are not, and we have seen a proliferation of countries unilaterally introducing some form of digital taxation.

  2. All countries have the right to take national measures. The measures introduced can be broadly categorised into three types: (a) digital services taxes; (b) new nexus rules, mainly significant economic presence; and (c) withholding taxes on digital transactions.

  3. National measures have positively impacted the multilateral discussion: as more and more countries began taking national measures on taxing the digital economy, the OECD was forced to take steps to hasten the multilateral discussions.

Many countries, frustrated by the lack of progress and desperate for revenue, have indeed introduced digital services taxes. The Centre for International Corporate Tax Accountability and Research (CICTAR) points out that the design of digital services taxes is critical.

While the OECD’s model is flawed, it is much better to tax digital firms at the national level on the basis of the country’s share of global profits than to tax individual sales. Digital sales taxes are more easily shifted to consumers and, like all sales taxes, are regressive, whereas profits taxes are less easily shifted to consumers and hence more progressive. It is a basic principle that profits should be taxed, so that unprofitable companies are not taxed out of business. Introducing digital sales taxes (sometimes misleadingly called digital services taxes) at a national level is unlikely to raise much revenue, but it would provide these firms with an easy escape from demands that they pay their fair share. Ultimately, digital sales taxes move us further from the global solutions we need.

For these reasons it is essential that the growing public anger with digital tax avoidance is used to drive digital corporate taxes based on global profits and not individual sales. Countries do not need to wait for the OECD to introduce national taxes based on this design.

Areas of Exploration for Unions

Unions could:

  1. Continue to raise awareness about Digital Trade and Tax - it is important that these issues have a coherent push from trade unions across the world.

  2. Engage in multilateral tax and trade discussions offering viable alternative solutions.

  3. Raise public awareness of the above.

  4. Support solutions to tax the digital economy on the basis of global profits not sales

  5. Engage with CICTAR to expose corporate wrongdoing and build public and political support for necessary changes

Key literature

Digital Trade

Digital Tax

4. Digitalisation and Development

Although all the issues outlined throughout the report, including in the sections on Democracy and the Rights and Means to Govern and on Digital Trade and Tax, are relevant to the developing world, there are particularities in the dissemination and impact of digital technologies on the Global South that warrant specific attention. This section covers digital divides, digital colonialism and the impact of current digital trade negotiations on the ability of developing countries to shape their own sovereign industrial and digital transformations.

Digital Divides

Globally, just over half of households (55%) have an internet connection, according to UNESCO. In the developed world 87% are connected, compared with 47% in developing nations and just 19% in the least developed countries. In addition, people in rural areas across the world, but especially in the Global South, have far less access to the internet than those in urban areas. According to a UN report, the main issues preventing the adoption and use of ICTs in rural areas are:

  • Lack of electricity: 15 percent of the world population is estimated to be without electricity

  • Literacy: many people (13 percent of the world’s population) still lack basic reading and writing skills

  • Gender: women are 50 percent less likely to be online

  • Poverty: millions of people still live below the international poverty line

  • Affordability: the high cost of broadband access in many countries

  • Language: most online content is only in a handful of languages

  • Local content: lack of locally appealing apps hinders usage

  • Network coverage: 3G networks reached 70 percent of the world’s population in 2016, but only 29 percent of the rural population.

In addition to the rural/urban divide, gender inequality in digital technology is even more alarming. Women are less likely to have internet access than men, and this gap is widening. The 2019 UNESCO publication "I'd Blush If I Could", produced under the EQUALS Global Partnership, illustrates that women are now four times less likely than men to be digitally literate and represent just 6% of software developers. Other inequalities persist that receive far less attention: for example, caste in South Asia, and age in many countries.

These divides and inequalities look likely to worsen due to the COVID-19 pandemic. According to the World Bank, poverty is set to increase sharply as a result of the pandemic, with as many as 200 million more people living on less than $5.50 per day, heavily concentrated in South Asia and sub-Saharan Africa.

Digital Colonialism

In the Global North, digitalisation has been a sequential evolution spanning a century: building on initial state infrastructure investments, mobile phones, smartphones and the internet could grow. By contrast, the countries of the Global South are, mainly through private sector investments, leapfrogging straight into wireless technology. These investments come mainly from Big Tech companies in the US and China: Microsoft, Apple, Amazon, Google, Facebook, Tencent and Alibaba together account for two-thirds of the global market value of digital platforms. They are superpowers dominating markets and societies.

The term ‘digital colonialism’ refers to this modern-day “Scramble for the Global South” where large-scale tech companies enable internet access (at a high cost) all while extracting, analysing, and owning user data for profit and market influence with nominal benefits for users. Under the guise of altruism, large scale tech companies can use their power and resources to access untapped data.

For example, Facebook’s Free Basics app (a mobile app and web platform created by Facebook) has since its 2015 launch been hailed by Facebook CEO Mark Zuckerberg as the "first step towards digital equality" because of its audacious plan to "introduce" millions of people, many of whom live in developing countries, to the internet. It provides access, free of data charges, to a variety of basic services such as news, weather and health information, job ads and, of course, Facebook. Although banned in India, the program is currently active in over 63 countries in Africa, Asia and Latin America. Free Basics features a glut of third-party services from private companies in the US; it harvests huge amounts of metadata about users and violates the principles of net neutrality. Its popularity is not least due to the extremely expensive data costs in many developing countries. According to the Alliance for Affordable Internet (2019), across Africa the average cost of just 1GB of data is 7.12% of the average monthly salary. In some countries, 1GB costs as much as 20% of the average salary, making it too expensive for all but the wealthiest few.

2.3 billion people live in a country where a 1GB mobile broadband plan is unaffordable for individuals earning an average income. Most of these people live in the least developed countries, where the average cost of 1GB of internet access is 14.8% of Gross National Income (GNI) per capita, with users in countries such as the Democratic Republic of the Congo, the Central African Republic and Haiti obliged to pay almost half of their monthly income.

Affordability is much better in some LDCs like Cambodia, with 1GB of internet access costing less than 2% of GNI per capita. The major driver behind this is Cambodia's highly competitive internet markets.
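As a note on how such affordability figures are commonly calculated (the exact methodology varies by source, and the prices and incomes below are invented placeholders): the cost of 1GB is expressed as a share of average monthly income, that is, GNI per capita divided by twelve.

```python
def affordability(price_1gb, gni_per_capita_annual):
    """Price of 1GB of mobile data as a share of average monthly income."""
    monthly_income = gni_per_capita_annual / 12
    return 100 * price_1gb / monthly_income

# Hypothetical figures, both in the same currency
print(f"{affordability(price_1gb=6.0, gni_per_capita_annual=1500):.1f}% of monthly income")
print(f"{affordability(price_1gb=2.0, gni_per_capita_annual=18000):.1f}% of monthly income")
```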

Jumping headlong into the digital age has indeed provided financially strapped countries in the Global South with digital technology, new opportunities and greater connectedness. But the introduction of technology has often outpaced the establishment of the state institutions, legal regulations and other mechanisms that could manage the new challenges arising from it. For the private (often foreign) companies, this has opened the space for the unfettered extraction, analysis and control of data. It has also allowed private corporations to control what local populations can see and access on the internet, as well as what services and goods they are offered.

The values and norms embedded in digital technologies

The tech world does not exist in an abstract or external space. Geography matters! All digital technologies embed cultural and normative legacies and values, through the bias of the data sets an algorithm is trained on or the instructions provided to the algorithm.

In other words, digital technologies do things in a particular way because of the culture, norms, values and interests of the people who design them, or of the data the algorithm has been trained on. The “things” it is told to do (i.e. first reject people with no formal education, and second reject those whose eyes glance to the left when asked an emotional question) are the instructions. Instructions can be very culturally bound. Think of an automated hiring system deployed in Kenya to find a public utility engineer. The tool is designed in the USA and is trained on US data sets. An analysis of legacy data sets shows that engineers with soft, monotone voices and a Midwest accent are more trustworthy and productive than others. The analysis also shows that these engineers are typically men aged 30-40, with no children, living in the urban suburbs.

This automated hiring system, with its ‘learnings’, is then bought by the Kenyan public services and deployed as is. It is not hard to imagine that the system will fail to find the most suitable Kenyan candidates for the job. Whilst this is an exaggerated thought experiment, these kinds of scenarios are an element of digital colonialism.
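The thought experiment can be made concrete with a deliberately crude sketch. The features, weights and threshold below are entirely fictional and mirror the exaggerated example above rather than any real hiring tool; the point is that the 'instructions' learnt from US legacy data are hard-coded cultural assumptions, so candidates who do not fit them are filtered out regardless of their actual suitability.

```python
# Fictional "learnings" extracted from US legacy data by the imagined hiring tool
def us_trained_score(candidate: dict) -> float:
    """Score a candidate using culturally-bound features (illustrative only)."""
    score = 0.0
    score += 0.4 if candidate.get("accent") == "US Midwest" else 0.0
    score += 0.3 if candidate.get("voice") == "soft monotone" else 0.0
    score += 0.2 if 30 <= candidate.get("age", 0) <= 40 else 0.0
    score += 0.1 if candidate.get("residence") == "urban suburb" else 0.0
    return score

def shortlist(candidates, threshold=0.6):
    return [c["name"] for c in candidates if us_trained_score(c) >= threshold]

kenyan_applicants = [
    {"name": "A", "accent": "Kenyan English", "voice": "energetic",
     "age": 34, "residence": "Nairobi", "experience_years": 12},
    {"name": "B", "accent": "Kenyan English", "voice": "soft monotone",
     "age": 45, "residence": "rural county", "experience_years": 20},
]

# Both experienced engineers are rejected: the "instructions" encode
# US cultural proxies, not competence relevant to the Kenyan public utility.
print(shortlist(kenyan_applicants))  # -> []
```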

The algorithms go beyond singular use cases. As mentioned above, they also define which goods and services, news stories, music recommendations and much more are made visible to users of the internet. Are local firms disadvantaged, or less visible, than foreign multinationals? Are Global North interpretations of geopolitical events pushed to the top of news feeds? Once more, the power of the tech companies also becomes a power over meaning creation and values.

Digital Trade & Development

These power asymmetries will only grow if new digital trade rules are adopted more widely. They will not only limit national governments’ ability to regulate in the public interest; they will also limit the public sector’s access to information, as this is mainly held in the hands of private companies.

In Singh’s words:

“Widespread access to society’s data – currently in the hands of a few digital corporations – is a precondition for a fair economy, quality public services, public policy-making and democratic governance.” (p. 1, PSI 2020)

In the Global South especially, we end up in a downward spiral that effectively ties the hands of governments, preventing them from establishing the state institutions, legal regulations and other mechanisms needed to ensure their own sustainable and empowering digital transformation.

Sovereignty

Private companies are filling the void created by the lack of governmental and intergovernmental investment in public digital infrastructure in the Global South. They are the ones who are capturing the space, enabling them to extract, utilise and profit from the data and the untapped markets. This infrastructural takeover is additionally used to stifle local industry. Profits are harvested and extracted back to the home country all whilst local tax contributions are kept at a bare minimum or avoided entirely.

For example, a 2018 PSI report exposed how a UK private equity firm with investments in upgrading the Ugandan electricity network funnelled over USD 100 million into a subsidiary in Mauritius, depriving the country of up to USD 38 million in taxes. A vicious cycle is thus formed: a lack of public resources opens the space for private industry takeover, which, due to profit extraction and tax flight, leaves the local economy with a stifled industry and much lower tax revenue. The sovereignty of governments to form and shape their own digital transformations is thereby removed.

Areas of Exploration for Unions

The Global South faces particular challenges in relation to digitalisation’s effects on the establishment of quality public services, a fair economy and democratic governance. These challenges are further exacerbated by the COVID-19 crisis, the predicted disruption of global supply chains and the general economic downturn expected in the years to come. Unions could pursue the following strategies:

In the short and medium term:

  1. Unions should continue to demand that multilateral institutions, trade agreements and multinational organisations reverse and oppose digital colonialism through public investments and empowering policies.

  2. Unions should support national digital industrialisation policies and programmes that are free from the oppression of digital trade rules enabling the Global South to form their own paths into a digital economy that works for them.

  3. In all sectors, themes and priorities, the understanding that digital technology is place-bound and value-bound should be mainstreamed and accounted for, with messages, communications and strategies adjusted to prevent further digital colonialism and a deepening of digital divides.

  4. Create networks and alliances with progressive civil society organisations in the Global South concerned with privacy rights, internet affordability, empowering digital transformation, digital trade and taxes.

  5. Support the global redistribution of taxes (see Digital Tax above), responsible technological disruption policies (see the People Plan section above), quality jobs across value and supply chains, and public investment in the countries and public services of the Global South to ensure just and equitable economic growth.

  6. Highlight how entrepreneurial, innovative public sector policies can better support the Just Transition commitment in the Paris Agreement, for example through regulation on obligatory accounting of corporate just transition activities (see the People Plan section above).

Key Literature

5. Public Administration - e-governance

The transition to e-government and e-governance across public services is explored in PSI’s report ‘Digitalisation and Public Services: a labour perspective’ and the corresponding policy-orientated summary report. Both reports speak of a rapid digitalisation of central government across the world that is not without risks. PSI points to the following main risks:

  • the digital divide and exclusion of some citizens/users from essential public service access;

  • the dependency on private providers for digitalization technology, counselling, training and maintenance;

  • privacy and security issues related to the control, use and ownership of the data gathered from citizens and from strategic services (e.g. justice, health) – especially when such functions are contracted out to private providers;

  • the difficulty for the government to predict and control costs for private digitalization technology providers.

Transition to e-governance: Beware of the risks

There are significant risks to deploying algorithmic systems without a sound governance infrastructure in place (see the sections on the Datafied Welfare State and on Governing Algorithmic Systems above).

In the education sector, the recent A-level algorithm scandal in the UK is just one example. Here, “the problem” the algorithm set out to solve was to standardise A-level results across the country; it remains unclear who decided this was important. Regardless, the effects on students, when broken down by socioeconomic status, were severe: pupils in the wealthiest of three categories saw the proportion receiving grade C or above fall from 89% to 81%, a drop of eight percentage points, while pupils in the lowest category saw the proportion receiving a C or above drop by more than 10 percentage points, to 74.6%.
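A heavily simplified sketch of the kind of standardisation at issue is shown below. The grade shares and pupil lists are invented, and this is not the actual model used; it only illustrates the mechanism: if grades are allocated by ranking pupils within a school against that school's historical results, an individual's grade is capped by how previous cohorts performed, which is how pupils at historically lower-attaining schools could be marked down regardless of their own work.

```python
def standardise(pupils_ranked_best_first, historical_grade_shares):
    """Allocate grades so each school's distribution matches its past results."""
    grades, allocated = [], 0
    n = len(pupils_ranked_best_first)
    for grade, share in historical_grade_shares:   # e.g. ("A", 0.2)
        count = round(share * n)
        grades += [(p, grade) for p in pupils_ranked_best_first[allocated:allocated + count]]
        allocated += count
    # anyone left over gets the lowest grade in the table
    grades += [(p, historical_grade_shares[-1][0])
               for p in pupils_ranked_best_first[allocated:]]
    return grades

# Two invented schools with identical pupils but different histories
history_affluent = [("A", 0.5), ("B", 0.3), ("C", 0.2)]
history_deprived = [("A", 0.1), ("B", 0.3), ("C", 0.6)]
pupils = ["p1", "p2", "p3", "p4", "p5", "p6", "p7", "p8", "p9", "p10"]

print(standardise(pupils, history_affluent))  # half the cohort gets an A
print(standardise(pupils, history_deprived))  # only one A, most capped at C
```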

In a separate case, one student even figured out how to string together key words and sentences to obtain a 100% score from an algorithm's automated marking system.

Dr Joanna Redden from the Data Justice Lab recently told the Guardian [Councils scrapping use of algorithms in benefit and welfare decisions, Aug 2020]:

“We are finding that the situation experienced here with education is not unique … algorithmic and predictive decision systems are leading to a wide range of harms globally, and also that a number of government bodies across different countries are pausing or cancelling their use of these kinds of systems."

In the justice system there is a widespread academic literature on the bias and discrimination inherent in algorithmic risk assessment tools. These tools are used in a variety of criminal justice decisions, assessing data such as an offender’s criminal history, education, employment, drug use and mental health, and then predicting the likelihood that the person will reoffend. In 2016, an investigative report by ProPublica called into question the objectivity and fairness of algorithmic risk assessment in predicting future criminality. Their study looked at data on over 7,000 arrestees scored with COMPAS in a pretrial setting in a southern county of Florida. It concluded that the popular risk assessment tool COMPAS discriminates against African Americans because its algorithm over-predicts their risk of reoffending.

Another AI-driven tool used in law enforcement is Clearview AI, a highly criticised facial recognition tool used by law enforcement agencies ranging from local police in Florida to the F.B.I. and the Department of Homeland Security in the US. Clearview AI’s software allows organisations to match pictures of people’s faces against a database containing more than 3 billion images taken from social media platforms and other websites. A report from February 2020 revealed that Clearview AI is used in 26 countries outside the US, including Belgium, Denmark, Finland, France, Ireland, Italy, Latvia, Lithuania, Malta, the Netherlands, Norway, Portugal, Slovenia, Spain, Sweden, Switzerland, and the United Kingdom.

AI is also used in social benefit systems across the world, often with little accuracy. For example, the Australian government has announced it will refund $720 million to almost 400,000 welfare recipients who were unjustly saddled with debt by a faulty algorithm. The automated welfare system, nicknamed ‘robodebt’, pitted welfare recipients against faulty “income averaged” annual pay data. Lately, there has been a widespread withdrawal in the UK public sector from algorithmic systems used in benefit and welfare decisions.

In the European Union, the AI High Level Expert Group (AI HLEG) issued in June 2020 a set of sectoral recommendations, including on the public sector (p. 9 onwards). Isabelle Schömann of the ETUC was the only trade union member of the group. The group too acknowledges the risks:

“The Policy and Investment Recommendations for Trustworthy AI devoted significant attention to the public sector, which the AI HLEG considers as a catalyst for achieving progress in this domain. However, it is also an area with significant potential for fundamental rights violations, ethical issues and potential undesirable societal and socio-economic impacts.” (p.9).

Interestingly, their recommendations partially address the risks identified by PSI, as well as some of the recommendations in other sections of this report. Some of their recommendations are listed below, with commentary from PSI.

  • It is essential to ensure that AI systems and interfaces used for the provision of public services do not compound the digital divide. Citizens and legal entities should continue to benefit from equal access to public services, regardless of their digital capabilities. This requires that public administrations maintain multiple interfaces for the provision of public services including telephone calls, paper documents etc. These can be handled automatically or through internet services and service kiosks that suit the populations. PSI goes one step further than the HLEG and warns of an additional divide, namely gender, resulting from structural imbalances in the fields of ICT and technology. Digitalisation may therefore well exacerbate existing horizontal and vertical gender-based employment segregation. (p. 64)

  • AI-enabled e-Government services should be accompanied by adequate arrangements in terms of accountability and traceability, enabling ex post verification. This requires the adoption of auditable AI solutions, as well as the digitalisation and storage of records of background information and data relevant to decisions adopted by public administrations. This is precisely what we call for in the co-governance of AI systems presented in the section Governing Algorithmic Systems. However, the HLEG do not describe who should be party to this verification. We believe that workers must be included.

  • The ability to describe, contextually and after the decision is made, the reasons that led to a specific course of action by a public administration should be a foundational element of the relationship between government and citizens (and a realisation of the right to “good administration”, as included in Art. 41 of the EU Charter of Fundamental Rights). This is again part of the Governing Algorithmic Systems model presented in this report, which stresses the importance of logging the outcomes, mediations and trade-offs. Obliging the public authorities who use algorithmic systems to describe in plain language the tool and the reasons that led to a decision is an essential step in keeping a human, and not a machine, in charge of and responsible for outcomes.

  • Take action to promote data and algorithmic literacy amongst the public administration. Civil servants need to be trained in: data collection, management, cleaning and storage practices; prioritising quality; conducting ethical and social impact assessments; and securing compliance with relevant privacy and data protection rules. In a nutshell, civil servants should be increasingly acquainted with the ethical, legal, social and economic impact of AI while remaining wary of potential adverse impacts on fundamental rights, democracy and the rule of law. We too point to this in the sections on Public Procurement and Remunicipalisation. The adequate training of all personnel involved in the deployment of algorithmic systems should be a prerequisite for their use. These powerful and, as we have seen, potentially discriminatory tools must only be used by people who 1. understand their instructions and the reasons for using the tool, 2. have early-warning structures in place to enable the mitigation of discrimination and bias, and 3. can flag unintended results and know this will be listened to and acted upon.

  • Develop AI and data strategies within all relevant branches of governments. It makes perfect sense to have a “whole-of-government” strategy concerning the use of these tools, and importantly have the governance and mitigation structures in place.

  • Promote interoperability to enable efficient and secure communication between jurisdictions. This recommendation encourages countries in the EU to implement systems which can talk to one another. It is also relevant for countries with decentralised, federal structures such as the USA and Australia.

Lastly and importantly, the HLEG also remark:

"The need to respect diversity and inclusion was also raised in various contexts during the workshops, especially in relation to the inclusion of employees throughout the deployment of AI in organisations (considered to be essential when it comes to decisions made with regard to the work environment);"

This is again in line with the recommendations on the governance of AI systems presented elsewhere in this report. It is imperative to have new co-governance structures in place, through which many of the problems identified in the use of algorithms would be identified pre-deployment and dealt with.

Areas of Exploration for Unions

E-governance and e-government are two areas of public service transformation receiving a great deal of attention from international bodies and private companies alike. The rhetoric is that the transition to e-services cannot happen fast enough. But as this section has shown, this transformation can be riddled with problems that deepen rather than bridge digital divides, and entrench rather than eliminate bias and discrimination.

Much of this has to do with:

  • a. rapid private sector capture of the public sector’s digital transformation, without adequate public sector governance, clear demands and oversight;

  • b. low capacity levels inside public services;

  • c. a blind and somewhat naive belief that automated systems are better and cheaper than human-led systems;

  • d. a narrative within public services and governments that the public sector is ‘too heavy’ and ‘too bureaucratic’ and must be made more cost-efficient and productive.

Unions could explore the following avenues of inquiry and demands:

  1. Push for the public sector’s digital capacity and autonomy to be vastly improved through investments into its own digital solutions.

  2. Ensure all algorithmic systems in place in public administration are co-governed by workers’ representatives and leadership (see Governing Algorithmic Systems above).

  3. Demand that any and all agreements with the private sector are made public, clear and transparent, including information about who has control of and access to the data, how the data is governed, and for what purposes it is used.

  4. Public administrations should be able to describe why a partnership with the private sector is favoured over building their own capacities.

Key literature

6. Local and Regional Government

The focus in this section is on three local and regional government developments that are characterised by a high degree of digitalisation: smart cities, public infrastructure and remunicipalisation.

Smart Cities

“Smart City” has spread as a catchword describing a very heterogeneous set of trends and technological and organisational changes occurring at the level of local and regional governments. The overall idea is to use information and communication technologies to improve public services provided by local governments through a more efficient use of resources (e.g. water, public lighting, waste), resulting in cost and energy savings and reduced environmental footprints, but also to offer additional services in fields such as local transport (e.g. traffic management, real-time information and on-demand services), information and communication with public authorities via apps, websites or chatbots, crime detection, schools, libraries, hospitals and other services. What makes the difference between whether smart cities work for people or not is whether they are “public-led” or “corporate-led”.

Smart cities are based on features which are very closely related to the internet, sensors, digital devices and the gathering and analysis of large amounts of data. While a smart city can simply encapsulate a select few landmark projects, it can also encompass a wide range of different city services and municipal departments.

In 2019 the OECD launched its Smart Cities and Inclusive Growth Programme, which aims to:

  • redefine the concept of smart cities around the contribution of digital innovation to better lives for all people;

  • measure how smart cities perform and ultimately deliver well-being outcomes for citizens; and

  • guide local and national governments in their efforts to reshape city governance, business models and stakeholder engagement.

The OECD held its first roundtable on Smart Cities on July 9, 2019; no union representatives were present (see the list of participants here). Judging by the agenda, a strong union voice would have been beneficial, as would that of citizen and community groups. One item on the agenda was:

Agenda Item III - Digital Innovation and Disruption to city governance: revisiting business models and citizen engagement.

Despite its numerous benefits, digital innovation can also disrupt the way cities are governed and financed. Without an integrated, multi-sectoral, and whole of government perspective at national and local levels, digital innovations can upend legal and regulatory frameworks safeguarding affordability objectives, but also consumer protection, taxation, labour contracts and fair competition. They can jeopardise citizen data, privacy and safety, and shake the decision-making powers and modalities in the era of real-time - and often asymmetric – information. And, equally important, they can deepen inequality among digitally marginalised groups unless local governments recognise that tech-driven solutions are as important to the poor as they are to the affluent.

One interesting failed Smart City project is that of Toronto Quayside involving Google’s Sidewalk Labs. This example cuts into, and across, several of the sections of this current report: collective data trusts, the use of smart technologies and the surveillance that follows and the changing balance of power between the public and private sector.

Prime Minister Justin Trudeau of Canada promised the Toronto Quayside project would create “a city of tomorrow” and “technologies that will help us build smarter, greener, more inclusive communities”. Sensors inside buildings would measure aspects such as noise, while an array of cameras and outdoor sensors would track everyone who lives, works or merely passes through the area, measuring everything from air pollution to the movement of people and vehicles through intersections.

Quayside came under immense public scrutiny and expert criticism. A key issue was how transparent the collection of that data would be. One of the world’s leading experts on data trusts, Sean McDonald, was one of the most prominent critics of Sidewalk Labs’ specific proposal for a Civic Data Trust to govern the smart city’s data. He points to several key problems with Sidewalk Labs’ proposal and concludes: “The proposed Civic Data Trust is a piecemeal adaptation of existing pieces of civic data infrastructure, situated in one of the highest profile public procurement processes in recent history […] Proposing that Toronto should base ownership determinations on the urbanity of a data set is a departure from Canadian data ownership law and a precedent that, if approved, could extend far beyond this project.”

We can draw a number of key lessons from the Toronto smart city case.

  • 1. Whilst data trusts are essential, their design, and who is included, is critical.

  • 2. Citizen groups need to be included from the start.

  • 3. Where there is no alternative to the involvement of private sector companies, the smart city project must have a strong public sector framing, including mechanisms for democratic control.

  • 4. Transparency is key. There must be a commitment to the public disclosure of all information and negotiation items.

  • 5. A smart city is a surveillance city, which creates significant privacy concerns.

Once again, in the words of Shoshana Zuboff, the author of the acclaimed book The Age of Surveillance Capitalism:

“We should not call smart phones, smart speakers, smart cities, smart. We should call them surveillance phones, surveillance speakers, surveillance cities.”

In the EU, the ‘European Innovation Partnership on Smart Cities and Communities (EIP-SCC)’ is an initiative supported by the European Commission that brings together cities, industry, small businesses (SMEs), banks, researchers and others. Interestingly, no activities or publications have been posted on the Smart Cities webpage of the European Commission since 2018.

However, the Community of Practice (CoP-CITIES) is an initiative of the European Commission which is open to external stakeholders (cities and networks of cities, international and intergovernmental organisations, and research bodies). It builds on and brings together ongoing work and expertise on cities by the JRC and DG REGIO. It draws together the conversations within the European Innovation Partnership on Smart Cities and Communities, the Digital Transition Partnership of the Urban Agenda for the EU, H2020 projects, the Digital Cities Challenge initiative and the Green Digital Charter.

The substantial report The Future of Cities is well worth a read. It is a product of an initiative of the Joint Research Centre (JRC), the science and knowledge service of the European Commission, and is supported by the Commission's Directorate-General for Regional and Urban Policy (DG REGIO). It covers areas such as the provision of services, health, social segregation, climate, tech and urban governance. The report is supported by an interactive platform.

Whilst there seem to be many EU initiatives, they appear scattered across the large European structure. It could be interesting to dig deeper into how highly the Commission really prioritises Smart Cities.

In developing countries in Asia, smart cities are also a tool to attract FDI. Development Banks play a key role in pushing and shaping smart city ideas. The ASEAN Smart Cities Network (ASCN) was established in 2018 as a collaborative platform where up to three cities from each ASEAN member state, including capitals, work towards the common goal of smart and sustainable urban development. The primary goal of the Network is to improve the lives of ASEAN citizens, using technology as an enabler. Specifically, the ASCN aims to:

  • facilitate cooperation on smart cities development;

  • catalyse bankable projects with the private sector; and

  • secure funding and support from ASEAN's external partners.

25 cities have handed in action plans, which are available upon registration here. It remains unclear what role the ASCN plays as a broker of the big investment deals via the development banks. Given the issues raised above with regard to the Toronto Quayside project, unions in the region should follow these action plans and link them to discussions on digital colonialism.

Gender and Smart Cities

In a speech at the WSIS Forum 2019 called (En)gendering the Smart City, Caitlin Kraft-Buchman (CEO of Women@theTable) discussed the dual nature of cities; she referred to popular representations of cities as “places of economic opportunity, liberation, and reinvention”, but conversely also of “fear, danger, and violence for women, from dark city streets to public transport”.

Kraft-Buchman argued that deliberations about the future of smart cities should focus on how people experience these issues rather than focus on the efficiencies such as how to keep traffic running smoothly through urban areas. She asserted that in practice and research the gender dimension of Smart Cities is not receiving attention.

(Smart?) Public Infrastructure

The above lessons from the Smart City concept can be well transferred to digital developments in public infrastructure development and maintenance.

The Dutch university TU Delft offers an interesting conceptualisation of infrastructure(s). They distinguish between common infrastructure and computational infrastructure:

  • Common infrastructure is the one we live with, the one we have already come to know and govern. We can define them as physical, institutional and human structures that organize everyday social life. For example – water systems, sewage systems, electrical grids, roadways, railways and waterways. They also include cityscapes and their administration, systems that deliver healthcare, and education infrastructures; they co-exist with us and rely on political administrators, public authorities, commercial ecosystems, labor organizations and other social groups. In this first category we also include the information or data that play an increasingly operational role in how we organize institutions and everyday activities. By virtue of our intimate acquaintance with this first category of infrastructure, societies around the world have a deeply historical sense of the values that these infrastructures are intended to serve. Over time, we have developed governance structures and societal norms based on which we come to manage these infrastructures. The public interest tends to matter. So do a variety of goals like universal access and justice. Although, we should acknowledge that in some cases infrastructures are built and sustained by unjust practices like colonization or labor exploitation, in others, they are used to control or violate populations, which further underlines their significance for the fabric of our societies.

  • Computational infrastructure they define as follows: “Fast forward to today, and we see software being increasingly offered and developed through ‘platforms and infrastructures-as-a-service’. These ‘service-oriented architectures’ promise customers the ability to respond more quickly and cost-effectively to changing market conditions. They enable rapid data feedback about people’s behavior, physical environments and digital environments. In response, software developers have moved towards a more data intensive form of developing digital services using iterative engineering methods…” and: “Our concern is that the computational infrastructures are far more than a technological ecosystem alone. Like all infrastructure, they incentivize us to embed their values, and therewith much of their politics in the lower layers of the technology stack. Comparable to the cables and control equipment in electrical networks that determine what can and cannot be connected to it, computational infrastructures embed constraints on what can and cannot be built on top of it, as well as what is accessible to those needing to audit or validate its functionality. Furthermore, the rapidly growing environmental/carbon implications (already on par with the global aviation industry) as well as the detrimental reliance on labor remain largely non-transparent.” (extracts from the Seminar on Programmable Infrastructures, TU Delft 2020)

The distinction between infrastructures offered here is helpful not only for critically understanding Smart City debates, but also for understanding the increasing digitalisation and privatisation of traditional public infrastructures such as water, energy and sanitation.

As PSI General Secretary Rosa Pavanelli firmly underlined in her address to the UN High Level Political Forum in 2018: “The current market-based approaches are not providing the answer. We have seen too many Independent Power Producers use their legally-binding Power Purchase Agreements to drain tax-payers’ money for their shareholders. The IPP-PPA model proves to be a disgrace for too many communities… We need more than technological fixes. We need:

  • Strong public policies and governance, with the participation of users and workers for quality public services to defend our common interests and meet our collective needs.

  • To address the inequality of the global tax system to make all pay their fair share.

  • To protect the workers and communities that will suffer the worst dislocations as we move to a low-carbon planet. A Just Transition based on social justice, and labour rights.

Indeed, the corporate influence in many of the discussions held on energy, water and sanitation supplies is striking. The privatisation of what should be a universal right of access is being further spurred by digital solutions aimed at linking supply with demand monitoring and controls through smart meters. Ultimately this is founded on an intense surveillance of consumers and consumption. For workers in the utilities sectors, automation and privatisation (and semi-privatisation through PPPs, outsourcing and procurement) have fundamentally changed their jobs, the number of jobs and the skills profiles demanded in the sectors.

Whilst e-solutions to limit water waste, find pipe leakages or produce clean energy are potentially conducive to combating climate change, the price paid by workers and communities deemed less “investable” by private interests can be very high. Many are disadvantaged due to their (remote) geographical location or socio-economic characteristics, as profit concerns outweigh a universal obligation to supply on equal terms.

Remunicipalisation

Many cities and geographies are taking back control of a range of fundamental sectors, including water supply. According to TNI, there have been 1,400 cases of remunicipalisation since the turn of the millennium, involving more than 2,400 cities across 58 countries.


Figure 5: from The Future is Public, TNI 2019

Trade unions have been key to building the case for remunicipalisation and to the successful transition from private to public ownership. Not only has remunicipalisation proved to save costs, increase democratic participation and improve the quality of services, it also has a potential public sector capacity-building effect. But this depends on whether the digital services (the data analysis, data visualisations, monitoring and maintenance) are also under public control.

Areas of Exploration for unions

This section shows the need for citizen and worker involvement in inclusive governance of digital systems and the public-private sector relations that surround them. The following policies and strategies could be pursued by unions:

  1. Representatives of workers and citizens should at all times be party to discussions on Smart Cities and infrastructure; as shown above, these discussions currently tend to omit the workers’ and community voice.

  2. Unions should follow Smart City developments, and give specific attention to digital colonialism tendencies, data trust formations and whether citizens and workers in these Smart Cities are truly empowered by the data trusts or not.

  3. Unions must understand the gendered nature of smart city discussions, bringing the voice of women and their lived experiences to the forefront in these discussions.

  4. Good examples of remunicipalisation should be publicised, and discussions held across all levels of government on the work quality and other beneficial aspects of this process.

  5. Unions must be involved to ensure decent work and working conditions, just transitions for displaced workers, and the future of democratic control over public services. As in the previous section on Public Administration, the following should be explored:

    1. the public sector’s digital capacity and autonomy should be vastly improved through investment in its own digital solutions.

    2. algorithmic systems in place in public administration should be co-governed by workers’ representatives and leadership (see Governing Algorithmic Systems below)

    3. Unions should demand that any and all agreements with the private sector are made public, clear and transparent, including information about who controls and has access to the data, how the data is governed, and for what purposes it may be used.

    4. Public administrations should be able to describe why a partnership with the private sector is favoured over building their own capacities.

Key Literature

7. Climate Change & Climate Migration

In this section, the focus will be on if and how technologies can be used to combat climate change. In June 2020, PSI published its Confronting the climate crisis: Time to Act. Toolkit for Trade Unions in Public Services. It is a comprehensive report that spurs union action from the ground up in partnership with climate change NGOs. It identifies various educational activities that can be used to build the capacity of public services workers and unions to understand the climate debate and mobilise. It emphasises:

“All levels of government have a vital role to play in dealing with climate change – it is the public sector that must lead and ensure reductions in greenhouse gas emissions economy-wide (in both the public and private sectors). It is the public sector that must deal with the consequences of climate crisis” (p. 11)

The report continues: "Neoliberal restructuring since the 1970s has fundamentally shifted social balances worldwide, privileging private interests over the public good. In this context, the climate crisis presents both threats and opportunities. Threats because the dominant political and economic processes are responsible for the crisis. Opportunities precisely because present economic and political processes cannot adequately resolve the problems the climate crisis generates. The field is open more than ever - the possibilities and potential for new thinking, new actions and new directions are growing. Public service trade unions can ally with other organisations who are working towards alternative energy, economic and social futures where climate justice is at the core of programmes for social and economic justice.” (p. 43)

Without a widespread commitment to tech for good that respects human rights and privacy rights at the same time as it contributes to a better climate, some of these tech solutions could in fact do more harm than good. The aim of this section is therefore to suggest an agenda that serves the interests of people (workers) and the planet.

Just Transition Policies

One consequence of a transition to a greener economy to meet the Paris Agreement is a drastic change in the kinds and types of jobs available. In response, the trade union movement coined the demand for a just transition. The genealogy of ‘Just Transition’ is well laid out in this ILO Actrav publication from 2018. At its heart, it is about making the shift to a low-carbon and sustainable society as equitable as possible. Many unions celebrated after the Just Transition was included in the preamble to the 2015 Paris Agreement:

“Taking into account the imperatives of a just transition of the workforce and the creation of decent work and quality jobs in accordance with nationally defined development priorities.”

However, in many of the Nationally Determined Contributions (NDCs) made by the signatories of the Paris Agreement, just transition is only marginally mentioned, if at all. According to the ILO Actrav report, even the detailed presentation of the EU’s NDC (UNFCCC 2017) refers to this aspect only under the heading “creating an enabling environment” and labour issues are referred to as “skills agenda”.

A Just Transition policy must not be reduced to merely a question of reskilling or upskilling. As with the employers’ de facto reduction of digitalisation to a question of adequate skills policies, such reductionism is neither sustainable nor inclusive. Unions must ensure government commitment to a new social contract, to decarbonisation, to social dialogue and to decent work for all.

As the ILO Actrav publication cautions:

“The three dimensions of sustainable development – economic, social and environmental – are strongly interrelated and need to be addressed by the use of a comprehensive and coherent policy framework.” (p. 3)

What About Technology?

There is a growing literature on how new and emerging technologies can support the transition to more climate-friendly practices, such as the ITU report “Turning digital technology innovation into climate action.” The evidence and case studies presented cover a range of measures that are being deployed to build resilience to the climate crisis – from using space sensing observation to track deforestation, to developing smart grids to accelerate the energy transition, to strengthening early warning systems against the rising number of extreme weather events. Acknowledging that digital technologies themselves have a substantial carbon footprint, the ITU has also established a Focus Group which will provide a global platform to raise awareness of the environmental impacts of artificial intelligence and other frontier technologies, as well as these technologies’ ability to contribute to the achievement of the Sustainable Development Goals and the objectives of the Paris Agreement. The report specifically mentions the following frontier technologies as potentially able to mitigate and, where possible, reverse the effects of climate change:

  1. The Internet of Things (IoT): enabling advanced services by interconnecting things (physically and virtually) based on existing and evolving interoperable information and communication technologies. IoT is increasingly responsible for connectivity-based service models in areas as diverse as water, sanitation, healthcare, agriculture, education and finance.

  2. Artificial intelligence (AI): potential for AI to be a tool in the effort to decouple economic growth from rising carbon emissions.

  3. Renewable energy technologies: the integration of automated sensors, data capture, performance measurement or other mechanisms with renewable energy technologies that enable the creation of electricity, heat and fuel from renewable sources, such as solar, wind, hydro, wave and tidal power, heat-exchange/geothermal and bioenergy.

  4. Digital twins: A digital twin is the virtual representation of a physical object or system across its life cycle. It uses real-time data and other sources to enable learning, reasoning and dynamic recalibration for improved decision making. Within the context of climate change and response, digital twins are an attractive proposition, particularly for urban areas that are rapidly growing in population, size and energy consumption.

  5. 5G technology: expected to leave behind a smaller environmental footprint than current technologies because it will be more directional and efficient, resulting in less energy and power being wasted.

PwC’s report The State of Climate Tech 2020: The next frontier for venture capital shows how private investments in climate tech have exploded over the last six years, amounting to $60 billion. These investments have mainly been in mobility and transport:

  • Developments which increase efficiency (of engines, design, or materials) associated with movement of goods or people by land, air or sea;

  • Development of electric vehicles and micro-mobility vehicles, and the infrastructure used to propagate these technologies, including ride sharing apps and charging points

  • Development of battery technologies for mobility applications and associated infrastructure

  • Improvements to the efficiency of transport systems, including use of autonomous & sensor technologies, improvements to maintenance and repair, and urban planning and design.

Figure 6: From The State of Climate Tech 2020: The next frontier for venture capital, p. 34. FALU = food, agriculture and land use; GHG = greenhouse gas.

The geographical split shows that nearly half of all venture investment in climate tech startups, $29 billion, went to startups in the USA and Canada. China is the second most significant region at $20 billion. The European market is approximately a third of China’s at $7 billion invested.

An example of tech for good is ICOS, the open data portal. It is fed by a network of carbon sensors across Europe and has transformed how scientists study the climate. Another is Rewiring America, an initiative to rapidly decarbonize the US through electrification, creating 15 million to 20 million jobs in the next decade, with 5 million permanent jobs after that.

An industry report by GeSI and Deloitte from 2019 argues that digital technologies can be used to achieve 103 of the 169 SDG targets (p. 7), including SDG 8 on “sustained, inclusive and sustainable economic growth, full and productive employment and decent work for all”. In their view, technologies help to:

  • Connect & Communicate, opening up relationships, information, ideas and opportunity;

  • Monitor & Track the world around us, so that our impact is transparent and we can make targeted interventions;

  • Analyse vast swathes of information; Optimise processes, procedures and resource productivity; and Predict where we need to intervene; and

  • Augment our human abilities and Autonomate systems to carry out activities on our behalf by creating an ‘active bridge’ between the physical and digital worlds.

The carbon footprint of technology

But what about the carbon footprint of digital technologies themselves? One study, Green AI by Schwartz et al (2019), concluded that “the computations required for deep learning research have been doubling every few months, resulting in an estimated 300,000x increase from 2012 to 2018”. We all use energy just by using our mobile phones, paying with our credit cards, sending emails or watching TV. When we do, we are sending and receiving data requests and responses to and from warehouse-sized buildings around the world, full of hundreds of thousands of servers. These data centers are among the most energy-intensive systems on the planet, representing approximately 10% of global electricity generation.
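
As a rough back-of-the-envelope check of what those study figures imply (an illustrative calculation of our own, not taken from the study itself), a 300,000-fold increase over the six years from 2012 to 2018 corresponds to compute doubling roughly every four months:

```python
import math

# Illustrative arithmetic only: 300,000x growth over 72 months (2012-2018)
# implies a doubling time of 72 / log2(300,000) months.
doubling_time_months = 72 / math.log2(300_000)
print(round(doubling_time_months, 1))  # ~4.0 months, i.e. "doubling every few months"
```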

Quantum computing, although still mainly a theoretical construct, promises much lower energy consumption. But any digital system should be vetted for, and held accountable to, its energy use and impact on climate change. Again, if the many AI principles now being adopted are to be realised so that the interests of people and planet are put above all else, this significant fact needs far more attention.

It Takes Two To Tango

Common across the reports mentioned above, which are a snapshot of a growing literature on climate change and digital technologies, is the omission of any mention of the role of workers and unions in managing, implementing and governing these technological systems for the common good. Another commonality is that none of them mentions citizens’ or workers’ privacy rights or human rights.

Another key area of concern is that most of the tech solutions are designed and deployed by the private sector. Returning to the discussions in the section on Democracy And The Rights And Means To Govern, it is crucial for public services to be vigilant and demand equal access to and control over the data and algorithms deployed in these technologies. Here much inspiration can be drawn from the role of union activism and partnerships in remunicipalisation efforts (see section Public Administration and Infrastructure).

Climate Migration & Digital Identities

As early as 1990, the Intergovernmental Panel on Climate Change (IPCC) noted that the greatest single impact of climate change could be on human migration—with millions of people displaced by shoreline erosion, coastal flooding and agricultural disruption.

Indeed, recent estimates predict that by 2050 between 150 and 200 million people risk being forced to leave their homes as a result of desertification, rising sea levels and extreme weather conditions (UNESCO 2017). Already today, people are forced to uproot due to the climate crisis. For lack of an adequate definition under international law, such migrants are almost invisible in the international system: no institution is responsible for collecting data on their numbers, let alone providing them with basic services. Unable to prove political persecution in their country of origin they fall through the cracks in asylum law.

A potential human rights crisis is looming if the rights of climate migrants are not secured. Here some organisations and experts are speaking of the value of Digital Identities to ensure the fundamental rights of displaced persons. In March 2019, the UNHCR published a document “UNHCR Strategy on Digital Identity and Inclusion” that stipulates:

“Globally, more than 1 billion people lack identity papers. In our modern world this excludes [them] from services and socio-economic participation; it limits access, for example, to work, housing, a mobile phone and a bank account. In fact, the lack of a documented identity constitutes for vulnerable and already marginalized people a constant risk of transgressing the lines between legal and illegal.”

The UNHCR’s ‘Population Registration and Identity Management EcoSystem’ (PRIMES) programme seeks to give all refugees and otherwise displaced persons a digital identity. In late 2018, 8 in every 10 refugees registered by UNHCR received a biometric identity. Data of more than 2.4 million persons of concern are securely hosted in proGres v4, one of the core modules of UNHCR’s central Population Registry.

Using the UNHCR digital identities, the World Food Programme (WFP) is deploying blockchain technology to enable refugees to pay for their food by means of entitlements recorded on a blockchain-based computing platform that is linked to their digital identity. Refugees purchase food from local supermarkets in the camp by using a scan of their eye instead of cash, vouchers or e-cards.

However, Digital Identities do not come without their own fundamental issues, which could actually exacerbate existing biases, discrimination or power imbalances. A critical report from 2019, Digital Identity in the Migration & Refugee Context: Italy Case Study, concluded that:

  • Migrants exchange identity data for resources without meaningful consent. Privacy, informed consent, and data protection are compromised throughout the process of migrant and refugee identification.

  • Systemic bureaucratic biases present obstacles that would likely impede the fair development and integration of digital identity systems.

  • Trust is lacking in the sociotechnical systems that are intertwined with identity. Cultural mediators can be uniquely positioned in the system to build trust and literacy around privacy rights and informed consent. Moreover, if NGOs collecting identity data obtain the capacity and literacy, they can become ready access points to bolster data protection for migrant and refugee beneficiaries.

To the latter point, it should be added that these access points should ideally be held within public bodies, either at the multilateral level (the UN) and/or with local or regional government agencies. In any case, it is essential that these access points are well governed, have secure data governance procedures and policies in place that prevent the misuse of data, and have the capacities and skills to uphold them.

Areas of Exploration for Unions

Digital technologies are increasingly being deployed to mitigate or prevent the impacts of climate change on people, societies and livelihoods. Many of these solutions are owned by private companies. Some are developed within the United Nations.

A commonality across them all, regardless of ownership, is that, despite doing good, they can also pose a potential threat to the fundamental rights of people. Massive data extraction by private companies, unless rigorously governed, can lead to the tracking and tracing of ethnic, religious or cultural groups. Digital identities can be misused, spurring further bias and discrimination.

Unions could:

  • Explore how digital technologies can be used to further - not diminish - worker and citizen rights in digital climate change solutions.

  • Investigate how digital technologies - if implemented correctly - support governments’ action on climate change, drought, famine and land degradation. Unions must be a key force pushing politicians to (re)commit to effective democratic multilateralism.

  • Push for energy-use accounting of any digital system so its impact on climate change can be scrutinised, holding the OECD accountable to its AI Principles and highlighting the importance of reducing the carbon footprint of digital technologies.

  • Ensure the private sector does not further hollow out the competencies of local, regional and national governments by hijacking climate crisis remedies.

  • Explore whether, and if so under what conditions, digital identities can be used to ensure the fundamental rights of all citizens regardless of legal status.

Key Literature

8. Health & Social Care Services

In 2019, Paul Webster wrote the following in The Lancet:

“There are huge industry opportunities here. Big pharma and the tech companies are circling around while the NHS sells its records and data. You’ve got a multi-billion pound industry of providing technology, providing consultancies, and another huge new industry providing patients with wearable health monitoring devices and apps. The NHS doesn’t have the capacity to do this. We are letting too many market entities in to monetise it.”

Although this quote addresses the UK NHS, it is valid for many public healthcare services across the world. The digitalisation of healthcare in all of its services is a multibillion dollar industry valued at USD 111.4 billion in 2019 and expected to reach USD 510.4 billion by 2025.

The private sector interest in healthcare covers both “HealthTech” and “MedTech”, such as:

  • Diagnostics & Error Reduction (for example x-rays and scans analysed by AI, predictive and preventative treatments)

  • Surgery (for example robot-assisted or conducted surgery)

  • Administration (electronic patient files, automation of history-taking)

  • Fitness (wearable sensors, patient health monitoring)

  • Drug Development

  • Mental Health (telepsychiatry service, online digital therapy)

  • Communication (tele-health, online consultations, inter-hospital secure communication)

As with the digitalisation of public administrations, the aim of HealthTech is to increase efficiency, accessibility and to open new markets and services. As explored in the PSI report Digitalisation and Public Services: a labour perspective: “Even in tech-savvy countries, the current wave of digital change and innovation has caused some anxiety. This stems from the obvious fact that large multinational tech companies are driving the proliferation of untested, unregulated digital health tools, their main motivation being the gathering of data to explore new, profitable avenues of medicine, health services and business models.” (p. 38)

HealthTech is being implemented across the world, with private investors using decades of under-funded public healthcare systems as a justification. For example, the International Finance Corporation - a World Bank agency that serves as the largest multilateral investor in private healthcare in emerging markets - claims that “Brazil is one of the most attractive and promising healthcare markets globally”. It continues: “Although Brazilians are guaranteed free and universal public healthcare coverage, private spending surpasses public spending generating one of the largest private healthcare systems in the world. At the same time, the Brazilian healthcare system faces multiple challenges, including an overstretched and underfunded public system, a shortage of beds, and an uneven distribution of resources and physicians. Given this context, Brazilian healthcare providers are looking for new solutions that can improve care and reduce costs, offering great business opportunities to health-tech innovators.”

Benefits of HealthTech

Although district hospitals and decentralised health posts are generally preferable to ensure that citizens have access to adequate health care, HealthTech has several potential advantages. It can connect citizens in remote areas with life-saving medical interventions (provided these citizens have a phone line, a computer and the digital skills to operate the system). It can provide early warning signals of upcoming health conditions (see box for an example). It can lessen waiting times by allocating patients to doctors more efficiently, optimise the use of operating rooms, and speed up lab results. It can, as PSI points out in its report Digitalisation and Public Services, help and support doctors, nurses, teachers and administrative staff in the delivery of health and social services.

Using AI to give doctors a 48-hour head start on life-threatening illness

DeepMind* has developed technology that, in the future, could give doctors a 48-hour head start in treating acute kidney injury (AKI).

AKI affects up to one in five hospitalised patients in the UK and the US; the condition is notoriously difficult to spot and deterioration can happen quickly. Working with the US Department of Veterans Affairs (VA), the DeepMind team applied AI technology to a comprehensive de-identified electronic health record dataset collected from a network of over a hundred VA sites. The research shows that the AI could accurately predict AKI up to 48 hours earlier than it is currently diagnosed. Importantly, the model correctly predicted 9 out of 10 patients whose condition deteriorated so severely that it required dialysis.

* DeepMind Technologies is a UK-based artificial intelligence company and research laboratory founded in September 2010 and acquired by Google in 2014.

Risks of HealthTech

Although billions of dollars are being invested in HealthTech and the benefits are obvious, the risks remain high. These risks concern health outcomes, privacy rights and the lack of investment in adequate public health systems across all geographies. Existing rules for deploying AI in clinical settings, such as the standards for FDA clearance in the US or a CE mark in Europe, focus primarily on accuracy. There are no explicit requirements that an AI must improve the outcome for patients or contribute to a stronger universal health system.

The EU High Level Expert Group on AI, amongst others, picks up on this risk by recommending that policymakers: “Promote and recognise AI skills applied to healthcare. Given the possible impacts of the use of AI in healthcare, consideration should be given to mechanisms for assuring that the developers of AI systems are competent in the context of health.”

In Burkina Faso, where the healthcare system recently underwent digital transformation, efficiency has improved, but there have also been several breaches of doctor-patient confidentiality as medical data went digital.

Other risks include:

  • costly data breaches: personalized health data is particularly vulnerable to threats because of the high volume of sensitive information. As with any tech company, data privacy is a massive concern.

  • an overreliance on tech: the Emergency Care Research Institute (ECRI) in the US notes that patients can be at risk when medical devices do not detect alarms, when medical staff receive no notifications of the alarms, and when medical workers do not respond to alarms in a timely fashion.

  • remote health: the risk of misdiagnosis and reduced quality of care

  • programming errors: connected medical devices, such as morphine pumps or automatic pacemaker adjustors, create new risks to patient safety due to programming errors or network errors.

  • hostile takeover of remotely controlled medical implants. As with all things digitally controlled, they can be hacked.

One of the biggest threats of HealthTech is the elaborate extraction and analysis of huge sets of data: from patient files, to hospital journals, to “Google searches”, to more elaborate data extractions that combine your lifestyle with medical data obtained legally (or not). For example, as reported in the Financial Times in September 2020: “Of more concern may be the launch last week of the Coefficient Insurance Company, a new venture for Google’s life sciences and healthcare subsidiary Verily. While it aims to help refine employers’ risks in health insurance, a Bloomberg report quoted a Verily executive as saying the company also plans to work to identify specific employees who may be at risk of developing certain conditions and intervene. That could, for example, involve monitoring an employee’s vital signs via their smartphone and setting them up with a virtual coach to help manage a chronic condition like diabetes or heart disease.”

Indeed, many experts are now speculating that large amounts of health data will mark the end of the collective risk model and pricing of insurance. Once again, the co-governance of algorithmic systems used in public services will be necessary to ensure that these services put human rights and citizen rights before efficiency requirements. Here inspiration can be found in some of the recommendations of the EU HLEG on AI (p. 12):

  1. There are concerns on a possible trade-off between access to quality treatment and privacy, as well as possible discrimination as a result of AI use. To prevent discrimination and deepening of health inequalities, AI use in the healthcare sector has to be thoroughly and independently monitored. Transparency should be sought in relation to data access and the purpose for which the data is used. Best practices in communicating the potential risks and challenges from using AI on sensitive health data (e.g. genomic data) should be developed and shared. The security of healthcare data and the resilience of AI-based healthcare systems is of particular importance. Investments in anonymisation and encryption techniques for healthcare data should be strongly considered.

Social Care Services

In social care services in high income economies, digital tools are becoming more and more commonplace. For example, scheduling tools are used to algorithmically manage work schedules and routes between clients, electronic patient files and journals are accessible remotely, smart speakers allow care personnel to check in with clients at a distance and/or robot carers augment or displace care workers.

The increasing automation of the management of social care services has both positive and negative effects. In a recent publication, the European Foundation for the Improvement of Living and Working Conditions reports that digital technologies can contribute to an increased sense of safety for users and have enabled older people to continue living in their own homes for longer. In addition, care provided with the aid of robots, telepresence and wearable devices can reduce the risk of contagion and ensure the continuity of care in case of quarantine, lockdown and/or social distancing. Conversely, workers risk being individualised and isolated from contact with colleagues and/or managers as the management of care services becomes automated.

Sentab seniors platform – Estonia - connecting people

From: “Impact of digitalisation on social services”, Eurofound 2020

The Sentab system is a platform providing entertainment, social interaction and monitoring functions to connect seniors, their caregivers and relatives over the interfaces that are the most common for them, for instance a television interface for seniors and web and mobile interface for caregivers. Early trials at five nursing homes indicated that Sentab features are well aligned with the needs of seniors and caregivers.

However, the system could be improved with further data analysis capabilities to determine the emotional state and short- and long-term memory of older people.

Sentab has developed a set of complex software algorithms for detecting mood patterns and recognising trends in seniors’ emotional states, along with an internet protocol-based communications system and a database management system. For seniors, the system improves accessibility to social care and supports inclusiveness because they can live more independently.

The example from Eurofound (2020) discusses a digital system that provides entertainment and social interaction but also enables monitoring to take place between seniors, their caregivers and relatives. What Eurofound does not go into is how “Sentab has developed a set of complex software algorithms for detecting mood patterns and recognising trends in seniors’ emotional states.” This is a good example of a digital technology that may well have a reasonable explanation and use, but that simultaneously raises questions: 1. Can AI truthfully detect emotions? 2. What happens with this data? 3. Have “emotional responses” been trialled? If so, on whom? Did they know? Were the care workers involved in the system design?

The Gender Inequality of Care

As populations grow and change, the demand for health workers is estimated to almost double by 2030 with the expected creation of around 40 million new health worker jobs, primarily in upper-middle and high-income countries.

However, this projected growth takes place alongside a potential shortfall of 18 million health workers, primarily in low- and middle-income countries, if universal health coverage is to be achieved and sustained by 2030, as envisaged in WHO’s global strategy on human resources for health. Without targeted interventions, the situation in resource-constrained settings could be further exacerbated by increased labour mobility towards countries with greater demand and higher earnings, thereby undermining already vulnerable health systems. Investing in the quality of jobs in terms of working conditions, labour protection and rights at work is the key to retaining health workers where they are needed, and to improving the attractiveness of the sector to ensure a sufficient supply of care workers.

The imminent shortfall of health workers is not leading to better working conditions for health care workers across the world, nor to a formalisation of much of the unpaid care labour that is currently taking place. Across the world, women carry out three-quarters of unpaid care work, or more than 75 per cent of the total hours provided. Women dedicate on average 3.2 times more time than men to unpaid care work. As a result, women are constantly time poor, which constrains their participation in the labour market.

Women face another imminent threat to their labour market participation: if the shortfall of 18 million healthcare workers is not solved, women are more likely to pull out of employment to care for the young, the elderly and the sick. This would be a terrible blow to the gender equality improvements made over the last decades. It is a matter that should receive urgent and prioritised attention across the world.

So what could digitalisation do? Digital technologies are already being introduced to enable remote caring/contact or to augment health care workers in the form of robots or other types of machines. Whilst these tools have their merit, they do not deal with the fact that these highly needed occupations are undervalued by society. To avoid the right to care becoming dependent on wealth, urgent change is needed to drastically revalue the work of health care workers, both those in paid employment and those offering unpaid care work. To really map the extent of paid and unpaid care work, workers could work with their unions to collect their own data through the use of the app WeClock (as presented below).

Job Changes & Risks

It is baffling that the increasing demand for healthcare workers and a falling supply have not led to better wages and working conditions. Rather, recent years have seen the rise of even more precarious work through the platform economy. Care.com is the largest, but far from the only, platform. Cooperatives have also sprung up. According to UN, ICA and B20 figures, some 100 million households worldwide enjoy access to healthcare thanks to cooperatives. The presence of this enterprise model has been confirmed within the health systems of 76 countries, registering more than 3,300 health cooperatives with an overall turnover of 15 billion dollars.

Despite some of the added value of the cooperative model, according to the ILO, cooperatives in the care economy face various challenges that hinder their sustainability and viability. Issues such as limited access to capital and start-up revenues, a lack of cooperative know-how and knowledge gaps across the care sector impede cooperatives’ potential. Although not mentioned in the ILO report, cooperatives still operate on a for-profit basis, collectivising market risks among their members, and thus often paving the way for privatisation. Unions should critically examine the growing number of cooperatives, as they often serve as a means to remove healthcare as a state responsibility.

The rising precariousness of work and the worker-led establishment of cooperatives raise risks for workers, who are often working alone. Here unions have a key role to play in creating a home and a base for healthcare workers. Unions also have to demand the training, retraining and retaining of health workers. HealthTech should not be utilised as a “cost-cutting” measure entailing lay-offs among already understaffed health and social care workforces.

Digital technologies can be of aid in assisting remote workers with alarm functions, automatic GPS tracking and the quick and easy access to support. But these technologies should be implemented with the patients’ and workers' privacy rights at their core. With the increasing influence of private sector companies in the healthcare sector, it is doubtful that this will happen unless unions advocate for it.

COVID-19 - Contact Tracing Apps

Many governments have rushed to implement contact-tracing apps and systems for monitoring the sick, their whereabouts and who they have been in contact with. Private companies and research consortia have also developed initiatives. In the private sector, the largest initiative came from the recent collaboration between Apple and Google. Their plan relies on tapping the short-range Bluetooth signals from smartphones. Phones would keep track—anonymously—of other phones they were near. When the owner of one of those phones was diagnosed with COVID-19, the plan was that alerts from the public sector would be sent to others who had recently been nearby. The idea was to help public health officials more quickly track down potentially exposed people and stem the spread of the virus. However, the original plan faltered as many states in the US and governments across the world failed to implement the system and build their side of the deal: the ability to send notifications to citizens. So now the tech giants will also provide the technology for sending and receiving alerts. The companies term it Exposure Notification Express. It will allow state health agencies to use the software without creating a customised app. Google and Apple maintain that their system respects the privacy rights of citizens.

Other initiatives have been well underway, for example the Swiss-led DP-3T (Decentralized Privacy-Preserving Proximity Tracing), an open protocol developed in response to the COVID-19 pandemic to facilitate digital contact tracing of infected participants. The protocol, like the competing Pan-European Privacy-Preserving Proximity Tracing (PEPP-PT) protocol, uses Bluetooth Low Energy to track and log encounters with other users. The protocols differ in their reporting mechanism: PEPP-PT requires clients to upload contact logs to a central reporting server, whereas with DP-3T the central reporting server never has access to contact logs, nor is it responsible for processing and informing clients of contact. Because contact logs are never transmitted to third parties, DP-3T has major privacy benefits over the PEPP-PT approach; however, this comes at the cost of requiring more computing power on the client side to process infection reports.
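
To make the decentralised idea concrete, the sketch below illustrates the matching step in a DP-3T-style protocol: phones broadcast short-lived identifiers derived from a secret daily key, and when someone tests positive only the daily key is published, so each phone can check its own encounter log locally. This is a simplified illustration under assumed key-derivation and identifier formats, not the actual DP-3T specification or any deployed app's code.

```python
import hashlib
import hmac
import os

def daily_key() -> bytes:
    """A random secret key for one day (simplified stand-in for DP-3T's key schedule)."""
    return os.urandom(32)

def ephemeral_ids(day_key: bytes, slots: int = 96):
    """Derive short-lived broadcast identifiers from a daily key.
    Each slot (e.g. 15 minutes) gets its own ID, so broadcasts cannot be linked to a person."""
    return [hmac.new(day_key, f"slot-{i}".encode(), hashlib.sha256).digest()[:16]
            for i in range(slots)]

# Phone B records the ephemeral IDs it hears over Bluetooth (its local encounter log).
key_a = daily_key()                          # phone A's secret daily key
heard_by_b = set(ephemeral_ids(key_a)[:4])   # B was near A for a few 15-minute slots

# If A later tests positive, A uploads only its daily key -- never its encounter log.
published_keys = [key_a]

# Phone B re-derives IDs from the published keys locally and checks its own log.
exposed = any(eid in heard_by_b
              for k in published_keys
              for eid in ephemeral_ids(k))
print("Exposure detected:", exposed)  # True in this toy example
```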

India’s Aarogya Setu app - meaning “Bridge to Health” - was launched in early April 2020. India has made it mandatory for citizens living in containment zones and for government and private sector employees to download it. Yet users and experts in India and around the world say the app raises huge data security concerns. Their main concerns are that Aarogya Setu stores location data and requires constant access to the phone’s Bluetooth, making it invasive from a security and privacy viewpoint. The app allows the authorities to upload the collected information to a government-owned and operated server, which will “provide data to persons carrying out medical and administrative interventions necessary in relation to Covid-19”. The Software Freedom Law Centre, a consortium of lawyers, technology experts and students, says this is problematic as it means the government can share the data with “practically anyone it wants”, including the police. The government denies it will do that. Aarogya Setu is also not open source, which means that it cannot be audited for security flaws by independent coders and researchers.

South Korea has one of the world’s most successful contact tracing systems, backed up by teams of experts and thousands of workers engaged in surveillance and monitoring. Some of the tactics used in South Korea, which has a population of about 51 million, might be difficult to replicate in vastly populated emerging countries. The surveillance technology might also not be acceptable to citizens of many Western nations. Tracing potential contacts in South Korea involved reviewing hundreds of hours of surveillance camera footage and going through mobile phone and credit card transactions. CCTV cameras are ubiquitous in South Korea, as virtually all streets and workplaces have them. Jung, a former director of the Korea Centers for Disease Control and Prevention, said:

“We had a smaller absolute number of cases than other nations, but more importantly, the social norm, where people are okay with their privacy being infringed for the wider public interest, allowed comprehensive investigations, which is just unimaginable in western countries."

Other governments have failed in their introduction of contact tracing; the UK government’s attempts are probably the best-known failure. These failures have a lot to do with the following:

  1. Contact tracing systems will very quickly lead to a false sense of security and potentially haphazard behaviour if COVID-19 testing figures are not, simultaneously, vastly increased.

  2. Governments must simultaneously put in place sound governance structures for the data procured.

Further recommendations are:

  • We need to hold app developers and deployers, as well as governments, accountable and push for transparent and ethical adoption and implementation of these new apps. We must demand that the data is ephemeral (transitory, or existing only briefly), that the apps themselves are temporary, and that the systems are governed by a broad group of representatives from all walks of life.

  • Even with the above measures in place, we need to consider that many citizens need to download and use the app for it to offer sound and reliable results. Many vulnerable groups (for example, the elderly), however, do not use mobile phones. How do we protect these groups without confining them to their homes in isolation for months to come?

  • Who controls and has access to the data is of primary importance. Put differently, organizations/authorities who have control over and access to the data also have control over the narrative, and can tell the rest of us what the state of affairs is. They could hide certain truths, exaggerate others. All data interpretation relies on who is doing the interpreting. This is why we need governance mechanisms but also data access rights for different stakeholders.

  • We must have a centralized mechanism whereby we can safely report privacy breaches and adverse uses of the apps.

Contact tracing apps are never stronger than their weakest link, which in many countries, although vastly improved, remains testing numbers and frequency. Regardless of the perceived or real necessity of these apps, however, they potentially have a significant effect on citizens’ privacy rights. In the world of work, unions must be vigilant and demand that the following conditions be applied:

  1. That use of location-tracing apps is voluntary: employers must be prohibited from demanding that a worker downloads and uses the app as a precondition of returning to work, including by seeking “informed consent” from workers individually and/or collectively.

  2. That no worker should be forced to hand over app data to their employer as a form of monitoring.

  3. That workers have a right to share app data with their union for evaluation and risk assessment.

  4. That all existing health and safety rules and agreements are followed.

The same technology - tapping into the Bluetooth sensors on mobile devices - as well as tracking GPS location can be responsibly used by workers and their unions to track how often workers are within 1.5 metres of one another. This can be done using the privacy-preserving app WeClock.
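
As an illustration of the underlying idea (a minimal sketch of our own, using hypothetical field names and self-collected GPS pings; it does not represent WeClock's actual implementation), proximity flagging can be as simple as computing pairwise distances and counting how often two workers fall within the 1.5-metre threshold:

```python
from math import radians, sin, cos, asin, sqrt

def distance_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two GPS points in metres (haversine formula)."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6_371_000 * asin(sqrt(a))

# Hypothetical self-collected pings: (worker_id, timestamp, latitude, longitude)
pings = [
    ("A", "09:00", 55.676100, 12.568300),
    ("B", "09:00", 55.676110, 12.568310),
]

THRESHOLD_M = 1.5
close_contacts = [
    (p[0], q[0], p[1])
    for i, p in enumerate(pings) for q in pings[i + 1:]
    if p[0] != q[0] and p[1] == q[1]
    and distance_m(p[2], p[3], q[2], q[3]) <= THRESHOLD_M
]
print(close_contacts)  # e.g. [('A', 'B', '09:00')] -- a close-contact event
```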

Areas of Exploration for Unions

Corporate influence in the healthcare sector is growing exponentially, along with the risk that the right to quality care becomes a direct function of citizens’ personal wealth. Also at stake are workers’ and citizens’ privacy rights, as digital technologies in place or being developed feed into the massive data extraction that is potentially so harmful to our human rights.

Unions should explore:

  1. Advocating for investment in upskilling and reskilling to harness the potential of Trustworthy AI in public healthcare. Specific upskilling, training and education in AI at all levels and for all roles in the domain of healthcare should be provided in order to ensure that humans and AI work together to achieve significant progress in this domain and to ensure that privacy rights and the right to health are at the core of all digital technologies.

  2. Training within unions to build capacity in relation to negotiating digital rights. See the sections on Workers’ Data Rights and Governing Algorithmic Systems.

  3. Push for agreements between the public sector and private healthcare providers that include terms favouring the public sector concerning the data extracted and generated, with the aim of responsibly building capacity in the public sector and safeguarding workers’ and citizens’ privacy rights and human rights.

  4. Push for an in-depth analysis of the regulatory needs for the use of AI and other digital technologies in healthcare, both from the point of view of health professionals and of patients. In particular, existing regulation already includes detailed provisions on the levels of safety and performance that medical devices should achieve; but, to make it fully fit for the use of AI, an evaluation of the possible need to update the existing regulatory framework should be undertaken.

  5. Push for an inclusive AI development and policy framework. The participation of workers and communities in the policy-making process in relation to AI in healthcare is critically important. Stakeholders include, but are not limited to, patients, patient groups, citizens, healthcare professionals at all levels and their trade unions/professional associations and member state health authorities.

  6. Ensure that contact tracing apps respect the privacy rights of citizens, and push for the necessary government institutional structures to manage the data responsibly.

  7. Use WeClock to provide the workers’ story and their de facto risk environments.

Key Literature

9. Skills and Competencies

The digital capacity of public services must be increased to ensure autonomy from private sector interests. On a micro scale, the skills and nature of many public sector jobs are also changing as digital tools become an embedded part of workers’ jobs: from handling apps, to shifting towards more complex citizen enquiries and tasks as automated solutions (for example chatbots) increasingly take over the more common and routine aspects of the work. As outlined in PSI’s 2019 summary report:

“Digitalization increases job complexity and the skill requirements workers need to perform the same work. As the most simple and repetitive tasks are replaced by software, computers and robots, employers place new multiple demands on workers, expecting them to be more qualified, multitask and to master intellectual, social and high-tech skills, often without providing matching training. Skills that are increasingly needed within a digitalized context include problem-solving, creativity, communication skills or the ability to think in a comprehensive and networked manner.” (p. 22)

The following training needs have been outlined by union representatives (p. 80-81).

  • adapting initial and further training programmes at company level as well as in occupational profiles (national, sector-level);

  • (re-) classifying pay groups according to new digital tasks and job profiles;

  • providing all workers with basic digital skills, including workers less affected by digitalisation and older workers;

  • re- and upskilling workers whose jobs are automatised in order to protect them against redundancy;

  • integrating new occupational profiles into company specific training, skills development and qualification programmes;

  • integrating digital tools and methods into initial and further training courses and programmes;

  • guaranteeing a right to training for every worker.

There is nothing gender neutral about the impact of digitalisation in the future world of work. From flexible working hours, to life-long learning and digital-skill training, to the technology gap and labour segregation, digitalisation will have a significant, and often overlooked, impact on women. Given that the majority of workers with care responsibilities are women, the issue of when during the day training takes place will have an enormous impact on the gender composition of (re)trained workers.

One key task for public sector employers and workers alike will be to stay abreast of digital change so that, early in transformation processes, they can anticipate what skills will be required. This is mirrored in a 2015 EPSU Local Government European social partner Joint Declaration, in which the signatories commit to:

  • Consider the information and training needs for workers at different stages in the implementation process and for different groups within the workforce;

  • Identify how workers feel about the loss of personal/telephone contact with clients after the introduction of digital systems;

Remembering the Importance of Soft Skills

AI systems are being deployed which aim to identify skills gaps before they grow too large (e.g. HeadAI, LinkedIn via OECD). These systems use public and open data to map changing skills demands against the supply of skills. While there is a lot of sense in this, there are also many potential pitfalls [OECD Forum 2019, Putting Hard Edges on Soft Skills: The future(s) of artificial intelligence and skills in the emerging digital economy]. The most important is that these systems reduce workers to their formal skills and competencies and do not, because they currently cannot, include workers’ “soft skills” or “human competencies”. These competencies are often what make all the difference in the workplace. Are you the glue that holds the organisation or department together? Are you the one with all antennas out, aware of your colleagues’ wellbeing? Are you the jester, the creative mind, the change maker? Very few workers report on their soft skills online, nor are many workplaces or annual appraisal systems designed to surface and name the importance of these skills. Yet they are the least automatable. They are also the hardest to measure in quantifiable ways, but can be the ones that make your team or workplace function well. For example, nurses’ reassurances, the gentle touch and the person-to-person caring all have a key role in patients’ recovery. Or teachers’ ability to understand each pupil and their personal circumstances, their nature and psychology, and to meet them where they are, cannot be measured in school grades alone. Unions should take the lead and push for workplace as well as systemic awareness of the importance of these human competencies.

Indeed, as public sector workers are not yet subject to the same sales/profit evaluations as many private sector workers, their interpersonal competencies should be given much more weight and significance. The OECD Future of Education and Skills 2030 project, in its report “The Future of Education and Skills: Education 2030”, supports the key role of transformative competencies that address the growing need for young people to be innovative, responsible and aware (pp. 5-6):

  1. Creating new value

    1. Increasingly, innovation springs not from individuals thinking and working alone, but through cooperation and collaboration with others to draw on existing knowledge to create new knowledge. The constructs that underpin the competency include adaptability, creativity, curiosity and open-mindedness.

  2. Reconciling tensions and dilemmas

    1. To be prepared for the future, individuals have to learn to think and act in a more integrated way, taking into account the interconnections and inter-relations between contradictory or incompatible ideas, logics and positions, from both short- and long-term perspectives. In other words, they have to learn to be systems thinkers.

  3. Taking responsibility

    1. The third transformative competency is a prerequisite of the other two. Dealing with novelty, change, diversity and ambiguity assumes that individuals can think for themselves and work with others. Equally, creativity and problem solving require the capacity to consider the future consequences of one’s actions, to evaluate risk and reward, and to accept accountability for the products of one’s work.

The University of Southern California has identified a portfolio of soft skills widely sought by employers around the world – competencies to guide choices about AI and digitalisation more generally. The five essential attributes are Adaptability, Cultural Competency, Empathy, Intellectual Curiosity and 360-Degree Thinking.

Soft Skills: Core Attributes

From University of Southern California

Based on our research we uncovered five attributes that are critical to success in today’s complex world, yet are grossly under supplied in the market. Those competencies are:

  • Adaptability
    Demonstrates mental agility and remains comfortable with ambiguous, unstructured environments and flexible in the face of continual change. Willingness to adjust one’s thinking and approach in response to new, unexpected or changing conditions and information.

  • Cultural Competency
    Demonstrates emotional and cross-cultural intelligence; capable of working inclusively, respectfully and effectively across cultures or organizations that have different values, norms, customs, and language or terminology. Also demonstrates broad, cross-functional thinking, shunning the limitations of structural, geographic, departmental, or other organizational boundaries.

  • Empathy
    Capable of understanding and recognizing others’ needs, goals, feelings, priorities and perspectives by engaging in active listening and focusing on reflective responses that clarify and strengthen dialogue. Able to effectively interpret others’ viewpoints and integrate these insights into more effective approaches for problem-solving and need fulfilment.

  • Intellectual Curiosity
    Possesses a hunger for new knowledge, information, and understanding that fuels ever-higher levels of learning and performance. Engages in novel opportunities and experiences, strives for measurable growth and demonstrates emotional intelligence and savvy.

  • 360-Degree Thinking
    Takes a holistic, multi-dimensional, analytical approach to problem-solving. Able to convert information into insights, infer implications from data and extrapolate from data to real-world applications and engage in sense-making by “connecting the dots.”

These important competencies are, according to the researchers, a counterpart to STEM. The researchers believe they are even more important than technical skills, yet they are grossly undersupplied in the market.

Areas of Exploration for Unions

Whilst many unions are already engaged in the topic of provision of skills and training for their members either directly through collective agreements or via public policy advocacy, there are a few additional points unions could push for:

  1. To ensure equal access to training opportunities, training should take place in normal working hours. This is particularly important for workers with care responsibilities.

  2. Training should be regarded by employers as an element of workers’ job tasks, not an addition to these. This is important to ensure that workloads and expectations are properly balanced so workers who do engage in further training are not overburdened with their tasks.

  3. Unions must spearhead the campaign to value and appraise human competencies, both in relation to ongoing workplace appraisal systems but also in “just transition” policies (see section on the People Plan).

  4. Unions could consider pooling their training offers to their members to prevent duplication. This is particularly important in the field of digital competencies as the topic is often multifaceted and complex.

  5. Unions could also consider opening their training schemes to union members from other parts of the world. This would help bridge digital divides and bring a wider range of perspectives into the training.

Key Literature

10. Digital Transformation of Trade Unions

The union movement needs to engage strategically, organisationally, practically and culturally with the powers, potentials and challenges of digital technologies. Power asymmetries in the labour market, and in public services in particular, are growing. Unions simply cannot afford not to transform themselves and to push for an alternative digital ethos that puts the privacy of workers (and citizens!) first.

At the same time, trade unions must not be blind to how digital technologies can be used to infringe their rights to organise, to assemble, to free speech and to campaign. Recent Amazon and Facebook leaks have revealed that these companies are designing and deploying tools to flag possible union organising activities amongst their employees. Other AI-driven tools are being deployed to scrape available data from social media, local news and chat rooms to provide companies with early warnings of employee discontent.

One such company, prewave.ai, even made its way into a trade union and technology training event hosted in Europe. Many unions use WhatsApp to communicate with members and between one another. Yet WhatsApp is owned by Facebook, which has stated that it wants to share data between the two services. Facebook, well known for monetising user data, could well become the means through which union organising is identified and cracked down on before it even gets off the ground. Similarly, many unions have opted for cloud-based solutions, leaving all documents and emails in the cloud for the cloud owners to see. It is therefore pertinent that unions reconsider the privacy, security and protection of their tools and strategies. Alternatives exist.

Digital Maturity Framework(s)

For unions to become frontrunners for an alternative digital ethos, they have to engage in a proactive digital transformation themselves, one that builds their capacity and preparedness to engage with the digital world of work. A Digital Maturity Framework is a good tool to guide this transformation. Many such frameworks exist, mainly for the private sector and some for government [Data Orchard, What have we found out about Data Maturity so far?, 2016]. A very useful version has been developed for the not-for-profit sector by Data Orchard and DataKind in the UK.

Figure 8: DataKind and Data Orchard’s Data Maturity Framework for NGOs

This framework (called a Data Maturity Framework) lets users assess their digital maturity along 7 key dimensions: Uses, Data, Analysis, Leadership, Culture, Tools and Skills. For each dimension, users receive a score on one of 5 maturity levels, with clear explanations of what needs to be in place at each level: Unaware, Emerging, Learning, Developing and Mastering.

As such, this Data (Digital) Maturity Framework gives users and organisations a way to discuss where they stand on the 7 key dimensions and what needs to change to progress on the maturity journey.

By assessing its maturity across the 7 key dimensions, your union can identify its strengths and weaknesses and get concrete indications of what needs doing to create a holistic digital transformation.
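
To make the idea concrete, here is a minimal sketch, in Python, of how a union might record such a self-assessment and identify its weakest dimensions. The dimension names and maturity levels come from the framework described above; the example scores, and the scoring logic itself, are illustrative assumptions rather than part of the Data Orchard/DataKind tool.

```python
# A minimal sketch only: not part of the Data Orchard/DataKind tooling.
# It records a hypothetical self-assessment across the framework's 7 dimensions
# and converts the 5 maturity levels into scores to spot the weakest areas.
LEVELS = ["Unaware", "Emerging", "Learning", "Developing", "Mastering"]

self_assessment = {  # example scores are hypothetical
    "Uses": "Emerging",
    "Data": "Learning",
    "Analysis": "Unaware",
    "Leadership": "Developing",
    "Culture": "Emerging",
    "Tools": "Learning",
    "Skills": "Emerging",
}

# Convert level names to scores from 1 (Unaware) to 5 (Mastering)
scores = {dim: LEVELS.index(level) + 1 for dim, level in self_assessment.items()}
weakest = sorted(scores, key=scores.get)[:2]

print(f"Average maturity: {sum(scores.values()) / len(scores):.1f} out of 5")
print("Dimensions to prioritise:", ", ".join(weakest))
```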

Data Governance

Another key dimension of a union’s digital transformation is governing the data it collects carefully and responsibly. Unions have, or potentially have, lots of data, for example from membership information: age, gender, name, wages, workplace, education, courses taken, home address, email, phone number and so forth. If this information is structured in a spreadsheet or a database, analyses can be run on gender pay gaps, occupational changes, career paths and patterns, geographical wage differences and much more. Under many data protection regulations this data is regarded as highly sensitive information and must therefore be governed and protected carefully. Unions potentially hold lots of other data too: collective agreements, changes in these over time, membership surveys, legal affairs, regulatory changes and much more. All of this can be used wisely, but it can also be misused terribly.
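
As an illustration of the kind of analysis structured membership data makes possible, the sketch below estimates a simple gender pay gap. The column names and wage figures are hypothetical, and pandas is used here only as one common analysis tool.

```python
# A minimal sketch using hypothetical membership data; column names and
# figures are illustrative only.
import pandas as pd

members = pd.DataFrame({
    "gender": ["F", "M", "F", "M", "F", "M"],
    "sector": ["health", "health", "admin", "admin", "health", "admin"],
    "monthly_wage": [2800, 3100, 2950, 3300, 2700, 3150],
})

# Median wage by gender, overall and per sector
overall = members.groupby("gender")["monthly_wage"].median()
by_sector = members.groupby(["sector", "gender"])["monthly_wage"].median()

# Simple pay gap: how much lower the median female wage is, as a percentage
gap_pct = 100 * (overall["M"] - overall["F"]) / overall["M"]

print(f"Median gender pay gap: {gap_pct:.1f}%")
print(by_sector)
```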

To create an alternative digital ethos that protects and respects the privacy rights of workers and citizens alike, unions must be wary of what data they collect, for what purposes, how long it is kept, where it is stored and who inside the union has access to it. Good Data Governance protects against the misuse of data, ensures decent cybersecurity measures are in place to prevent hacking, and ensures the union is not hoarding data for no reason other than to have it.
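
One practical starting point, sketched below purely as an illustration, is an internal data register that records what each dataset is, why it is collected, how long it is kept, where it is stored and who may access it. The field names and entries are hypothetical.

```python
# A minimal sketch of an internal data register; field names and entries are
# hypothetical and meant only to illustrate the governance questions above.
from dataclasses import dataclass

@dataclass
class DataRegisterEntry:
    dataset: str            # what the data is
    purpose: str            # why it is collected
    retention_months: int   # how long it is kept
    storage: str            # where it is stored
    access: tuple           # who inside the union may access it

register = [
    DataRegisterEntry("Membership records", "Member administration and dues",
                      retention_months=24, storage="On-premise database",
                      access=("membership team",)),
    DataRegisterEntry("Bargaining surveys", "Preparation of claims",
                      retention_months=12, storage="Encrypted file server",
                      access=("research unit",)),
    DataRegisterEntry("Old event sign-up lists", "No current purpose",
                      retention_months=60, storage="Cloud drive",
                      access=("all staff",)),
]

# Flag datasets with no clear purpose or unusually long retention for review
for entry in register:
    if entry.purpose == "No current purpose" or entry.retention_months > 24:
        print(f"Review: {entry.dataset} (kept {entry.retention_months} months)")
```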

Data Governance is typically an expensive field of operations, but it can also be approached as a matter of internal union procedures and policy. The UK union Prospect co-developed an online, privacy-preserving guide to help unions with their own data governance: Lighthouse.

Unionists can use Lighthouse as a form of online guide, or quiz, in which participants rate their methods and practices across a range of topics.

WeClock

More and more private companies and public services are extracting work-related data. Unions need a response so that workers and unions are not disempowered by employers’ unilateral interpretations of work. The best way to do that is for unions to work with their members and gather data on their working conditions, data that can later be used in campaigning and in conversations with employers.

One responsible and privacy-preserving way for unions to use tech for good and gather information about their members’ working conditions is to use the new open source app, WeClock. It aims to support workers in combating wage theft and promoting worker wellbeing. It works on Android, iPhone and Apple Watch and can help workers log key conditions of their work. The data is stored exclusively on the worker’s device. The worker can then decide whether to share it with their organiser, their union or a trusted third party.

For example, homecare workers can use WeClock to track the distance they travel between clients and how much time they spend at each location. Is their schedule reasonable? Would a different route be more beneficial to them and their clients? If they use their own cars, do they get sufficient fuel compensation? Or a civil servant can use the Android version’s app-usage tracking to log how often they use work apps outside of working hours.

Unions can ask members to track their information for a limited period and share the data back to the union; with a data analyst at hand, unions can then begin their journey into data storytelling and visualisation. It is more important than ever that workers’ realities are analysed and their struggles told. See how unions and organisers can use WeClock in their campaigning in the UnionKit. There is also a video explaining how WeClock works and why it offers a privacy-preserving alternative for workers and unions.
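
Purely as an illustration of what such data storytelling could start from, the sketch below summarises hypothetical WeClock-style logs shared by homecare workers. The field names, figures and reimbursement rate are assumptions made for the example; the actual WeClock export format may differ.

```python
# A minimal sketch using hypothetical WeClock-style logs; field names, values
# and the fuel rate are assumptions, not WeClock's actual export format.
import pandas as pd

logs = pd.DataFrame({
    "worker_id": ["A", "A", "B", "B", "B"],
    "date": ["2021-03-01", "2021-03-01", "2021-03-01", "2021-03-02", "2021-03-02"],
    "km_travelled": [12.4, 8.1, 20.3, 15.0, 9.7],
    "minutes_at_client": [45, 30, 60, 50, 40],
})

RATE_PER_KM = 0.30  # assumed reimbursement rate, for illustration only

# Total distance and client time per worker, plus the fuel compensation due
summary = logs.groupby("worker_id").agg(
    total_km=("km_travelled", "sum"),
    client_minutes=("minutes_at_client", "sum"),
)
summary["fuel_due"] = (summary["total_km"] * RATE_PER_KM).round(2)

print(summary)  # a starting point for visualisation and campaigning
```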

Areas of Exploration for Unions

  1. Establish training modules and collectively pool experiences and good practices.

  2. Work with members and collaboratively find data analysts/data safe harbours to support unions in their analysis and visualisation of data.

  3. Establish a list of safe-to-use digital tools to limit the capture of workers’ data. For example, unions could refrain from using WhatsApp and promote Signal instead. Unions could also beneficially explore the wider use of Virtual Private Networks (VPNs) to safeguard internal communications.

Key literature/links

This publication was developed for PSI by Dr Christina J. Colclough, head of The Why Not Lab.

Futures of Work | The Why Not Lab | Christina J. Colclough

The Why Not Lab puts workers centre stage in discussions on the future of work. With eye-opening angles, methods and questions, we help progressive organisations and governments think outside the box, innovate and take concrete steps towards sustainable digital change in the labour market.

https://www.thewhynotlab.com

The publication is supported by a partnership between PSI and the Friedrich-Ebert-Stiftung.