The hub provides real-world bargaining clauses, union guidance, and framework agreements that can be adapted for use at the bargaining table.
6 Digital tools, artificial intelligence, and algorithms
This theme is concerned with what digital tools and systems are being deployed, how they are being used, and what co-governance rights workers have in this process. This section additionally includes corresponding restriction clauses.
Digital tools are a key way that worker data is generated, but they aren't the only way. For this reason, we suggest that you review these clauses in conjunction with the information presented in Section 5 which concerns workers' data rights more broadly.
Monitoring and surveilling workers is nothing new. However, the influx of digital technologies has expanded the ways these activities are undertaken. The list of tools and their surveillance capabilities is almost endless: examples include recording online activities, movement analysis and geotracking, biometric and facial recognition, emotion tracking, keystroke monitoring, and speech analysis. While tools sometimes function as planned, they are also prone to error and misuse. This is why unions must be well informed about how digital tools affect workplace functions, including hiring, pay, and evaluation.
Collective bargaining agreements are needed to address the following topics and themes in the context of workers’ rights, digital tools, AI and management by algorithms:
This sub-theme is concerned with information sharing practices related to digital tools at work and the importance of clearly defining how digital tools will be used.
Technologies can often be used in multiple ways. Some of the same tools used to aid workers can also be used to surveil or discipline them.
Unions should negotiate clear language that specifies what tools will be used in the workplace and how tools will be used. The goal should be to ensure that tools and technologies benefit rather than harm workers. We encourage unions to be specific about what is permitted and to restrict use to only the things that have been explicitly agreed to.
Trade unions in action: Consultation on the introduction of new tools
The introduction of digital tools at work does not have to be a unilateral, employer-driven decision. Unions can negotiate to have an active role in determining where and how technologies will be used. Canadian union CUPE has developed contract language to secure information sharing practices about where GPS technologies are installed. Their language, referenced below, also gives unions the right to provide recommendations about installation locations.
The Union shall provide its recommendation on the placement and range of the existing video/audio equipment and RAM tracking or any GPS device through discussion with two (2) Union executives or designates. The Union may also provide suggestions on additional locations.
Prior to installing, moving, or modifying existing or new video/audio equipment and RAM tracking or any GPS device for general surveillance, the Employer shall advise the Union and, if requested, shall discuss the matter with two (2) Union executives or designates. The Employer shall clearly mark all vehicles, identifying the vehicle is equipped with the capacity for GPS tracking.
Union: Canadian Union of Public Employees
Country: Canada
Year: various
Document type: Collective bargaining agreement
Clause number: 10114
Trade unions in action: Governing generative AI
Since 2023, generative Artificial Intelligence (AI) technologies have become significantly more accessible. These technologies are AI systems that are capable of creating new content, including images and text. Unions representing entertainment sector and communications sector workers have been some of the first to negotiate guardrails for the use of this type of technology. Contract provisions have clarified how authorship is determined when humans and AI both contribute to the development of a project, and under what conditions generative AI can be used by the employer and by workers. Below is an example from the Writers Guild of America.
A writer will be required to adhere to the Company’s policies regarding the use of GAI [Generative Artificial Intelligence] (e.g., policies related to ethics, privacy, security, copyrightability or other protection of intellectual property rights). Any purchase of literary material from a professional writer is also subject to such policies. A writer must obtain the Company’s consent before using GAI. The Company retains the right to reject the use of GAI, including the right to reject a use of GAI that could adversely affect the copyrightability or exploitation of the work.
Country: United States
Year: 2023
Document type: Collective bargaining agreement
Clause number: 10545
The Company may not require, as a condition of employment, that a writer use a GAI [Generative Artificial Intelligence] program which generates written material that would otherwise be ‘literary material’ (as defined in Article 1.A.5.) if written by a writer (as defined in Article 1.B.1.a. and Article 1.C.1.a.) (e.g., a Company may not require a writer to use ChatGPT to write literary material). The preceding sentence does not prohibit a Company from requiring a writer to use a GAI program that does not generate written material, such as a GAI program that detects potential copyright infringement or plagiarism.
Country: United States
Year: 2023
Document type: Collective bargaining agreement
Clause number: 10546
This sub-theme concerns how unions can introduce restrictive language that places limitations on the tools and technologies used at work and on how they are used.
It is important to be explicit about how technologies and tools will be used, but it is equally important to specify how they will not be used. Many of the collective agreements identified include language on acceptable uses of technologies and place limits on their use. When it comes to restrictions, unions will want to be on guard to ensure that technologies are not used for surveillance or punitive aims.
Trade unions in action: Restricting the use of digital tools
Unions have developed a wide range of language for using and restricting the use of technological tools at work. The example below comes from UK union CWU, whose members work at a company where forward-facing cameras have been installed. The collective agreement language explains why this technology was introduced (i.e. to provide a better understanding of what happens if there is a vehicle incident) but also places limits on how it will be used. In this case, there is a commitment that sound recording will not be used. This language is clear and firm, and changes to the policy are dependent on consultation.
[Company] has taken a policy to install Forward Facing Cameras to all its insured fleet vehicles […].
The cameras will be used to provide a better understanding of what happens in a vehicle incident / near miss; encourage improvement in driving style following a road traffic accident / near miss; speed up the claims process thereby saving costs; support the defence of third party claims against [Company].
[…] Sound recording is not activated whether recording or not. Consultation must take place before this can be changed. Should a steward occasionally wish to satisfy himself that a camera sound is still set to the off position, such a request will be accommodated by the local manager. […]
Country: United Kingdom
Year: 2020
Document type: Collective bargaining agreement
Clause number: 10117
Trade unions in action: Prohibiting the use of technologies and tools for punitive purposes
Some unions, like Korean union KPTU, have developed language that places limits on the use of digital tools. This provision also gives unions the right to negotiate when and for how long such technologies will be used. Canadian union CUPE, meanwhile, has developed strong language about where surveillance systems can and can't be used. Additionally, the use of such systems requires employee consent and CUPE has taken additional steps to protect workers who opt out of monitoring programs from reprisal. Both of these examples are included below.
Article 117 [Surveillance Equipment]
The employer shall not install surveillance equipment or software aimed at recording workers' movements and work processes, such as computers, telephones, video cameras, biometrics (fingerprint, iris, vein), RFID, and other information and communications, sound, and video technologies. However, where equipment is installed to prevent accidents or dangers, such as for occupational safety or theft prevention, the following shall be agreed with the union in advance, and measures shall be taken so that workers and the union can recognise the equipment while it is in use.
- Purpose of installation and period of use
- Installation location, installation method and recorded content
- Types and technical details of surveillance equipment
- Department and person in charge
The employer shall not reflect the contents of records made by surveillance equipment in performance assessments or use them as grounds for disciplinary action. Where equipment installed without the union's prior consent violates the law or the collective agreement, or when the purpose of installation or the period of use has expired, the employer shall immediately remove the surveillance equipment and notify the union of the result.
The employer shall not install surveillance equipment in places where privacy can be invaded, such as restrooms, changing rooms, washrooms, staff lounges, and dormitories.
Union: Korean Public Service and Transport Workers' Union
Country: South Korea
Year: various
Document type: Collective bargaining agreement
Clause number: 10110
Surveillance cameras, any technology or systems capable of monitoring Employees or their work and any other related equipment shall not be used in Employee occupied areas without [their] knowledge […].
No electronic monitoring of employees or their work shall be undertaken unless there is written consent. Such consent shall be subject to withdrawal at any time and must be renewed for each contract year. […]
It is understood that there shall be no reprisal against any member of the bargaining unit who chooses not to give such written permission. […]
The Employer agrees to inform the Union if they plan to put any area of the workplace under electronic surveillance, or plan to monitor communications. […] The employer agrees that employees shall be notified of the purpose of such monitoring and any occasions […].
There shall be no electronic monitoring and/or surveillance of a covert nature. The Union and all employees in a work location where there is electronic monitoring and/or surveillance shall be advised in writing of the location, and the nature of any equipment used for electronic monitoring and/or surveillance. […]
The Union shall be notified and a notice shall be posted in all workplaces in which the Employer has installed electronic monitoring or surveillance equipment. Such equipment shall not be used to conduct general, on-going supervision of employees.
Union: Canadian Union of Public Employees
Country: Canada
Year: various
Document type: Collective bargaining agreement
Clause number: 10122
This sub-theme is concerned with the importance of transparency so that workers can fully understand how digital tools and algorithmic management practices affect the workplace.
The tools and technologies that workers work with, under, and in the presence of help to collect significant amounts of worker data. This data can later be used by management or by automated systems such as algorithms for a range of workplace functions, with the potential to influence the distribution of work, worker productivity, and even human resource procedures. Algorithmic systems are also often built by third parties, which means these technologies are sometimes poorly understood by the very managers who use them.
Unions should bargain for language that enhances their understanding of how automated systems are designed and function at work. This can contribute to greater transparency in workplace management. Initiating conversations about the need for transparency when it comes to the design and function of tools, artificial intelligence and algorithms can open the door to productive conversations and redistribute power within the workplace.
Trade unions in action: Bargaining for algorithmic transparency
Algorithmic management is the use of computer algorithms and AI techniques to control employees. This form of automated management is driven by a series of binary decisions that often use data extracted from the digital tools that workers work with, under and alongside.
Among its many applications, algorithmic management can be used to hire, fire, discipline and reward workers. It can also be used to distribute work, organize production, and streamline processes. Algorithmic management systems are notoriously opaque, making monitoring difficult and transparency necessary.
In Spain, a royal decree, referenced below, was issued to implement a collective agreement involving the union UGT. It includes the requirement that the parameters and decision-making logic embedded in algorithms be made available.
Be informed by the company of the parameters, rules and instructions on which algorithms or artificial intelligence systems are based that affect decision-making that may have an impact on working conditions, access to and maintenance of employment, including profiling. (extract from the Royal Decree implementing the agreement)
Country: Spain
Year: 2021
Document type: Collective bargaining agreement
Clause number: 10129
This sub-theme is concerned with how workers and unions can intervene in algorithmic systems when a problem arises or when automated systems do not perform as expected.
Artificial intelligence and algorithmic systems involve automated decision-making. AI and algorithmic technologies are built by humans, but sometimes they malfunction or, in the case of machine learning, evolve independently of human oversight. It is important that workers and unions are able to intervene when things go wrong.
Trade unions in action: Keeping humans in the loop
A key method for intervening in algorithmic management is to ensure that systems are built with 'humans in the loop' and that, should something go awry, humans are in control and can intervene. GDPR Article 22 gives individuals the right not to be subject to decisions based solely on automated processing, along with the right to human intervention. Whether or not unions are located in Europe, similar language can be negotiated at the bargaining table.
The European Social Partners Framework Agreement on Digitalisation established a multi-sector agreement, referenced below, that promotes good governance principles for AI and algorithms. The first principle is that such technologies should abide by a 'human in control' approach.
Deployment of AI systems:
- should follow the human in control principle;
- should be safe, i.e. it should prevent harm. A risk assessment, including opportunities to improve safety and prevent harm such as for human physical integrity, psychological safety, confirmation bias or cognitive fatigue should be undertaken;
- should follow the principles of fairness, i.e. ensuring that workers and groups are free from unfair bias and discrimination;
- needs to be transparent and explicable with effective oversight. The degree to which explicability is needed is dependent on the context, severity and consequences. Checks will need to be made to prevent erroneous AI output.
In situations where AI systems are used in human-resource procedures, such as recruitment, evaluation, promotion and dismissal, performance analysis, transparency needs to be safeguarded through the provision of information. In addition, an affected worker can make a request for human intervention and/or contest the decision along with testing of the AI outcomes.
AI systems should be designed and operated to comply with existing law, including the General Data Protection Regulation (GDPR), guarantee privacy and dignity of the worker.
Country: Europe
Year: 2020
Document type: Framework agreement
Clause number: 10126
Additional reading on Theme 6: Digital tools, artificial intelligence and algorithms
UC Berkeley Labor Center’s report on “Data & Algorithms at Work: The Case for Worker Technology Rights.”
Coworker published a new report “Little Tech Is Coming for Workers” and launched a new database to track workplace “bossware” tech.
AI Now’s report “Algorithmic Accountability for the Public Sector.”
Partnership on AI (PAI) report: "Fairer Algorithmic Decision-Making and Its Consequences: Interrogating the Risks and Benefits of Demographic Data Collection, Use, and Non-Use." PAI also introduced an AI Incident Database, a systematized collection of AI incidents that have resulted in safety, discrimination, or other problems.
Ada Lovelace Institute (2022): Algorithmic impact assessment: a case study in healthcare
Senators Wyden and Booker and Representative Clarke have introduced the "Algorithmic Accountability Act of 2022," which would require companies to conduct impact assessments for the automated systems they use to make critical decisions, such as those that provide access to financing, employment or housing.