Generative AI: A Worker’s Perspective


Promises and Perils


1. What is Generative AI? 

Generative AI refers to a subset of artificial intelligence that focuses on creating (or generating) new data, content, or information that mimics human-like output. Instead of simply processing data or performing predefined tasks, generative AI can generate text, images, music, code, and even videos. This category of AI is particularly powerful because it doesn’t just learn patterns or analyse existing information—it produces new content based on the patterns it has learned from its training data. 

One of the most well-known examples of generative AI is OpenAI's ChatGPT series, which is trained on massive amounts of text data and can generate essays, answer questions, write poetry, and even engage in conversation. Other prominent examples include DALL·E (which generates images from text), Midjourney (which produces visual content), Anthropic's Claude (a conversational assistant from a company focused on AI safety and explainability), and the Hugging Face platform (a hub for community-driven generative AI development). 


2. How Does Generative AI Work? 

Generative AI works by combining machine learning (ML) algorithms with vast amounts of training data. At the heart of generative AI are models like neural networks, specifically deep learning architectures. These networks are loosely inspired by the brain's functioning: they learn patterns, make predictions, and generate new information based on prior inputs. 

Most generative AI models rely on unsupervised learning algorithms. Unlike supervised learning algorithms, which rely on labelled data to steer AI systems towards a desired output, unsupervised learning algorithms are fed large amounts of unlabelled data. From this, generative AI systems learn to generate similar content without being explicitly programmed. In text-based AI, for instance, models are trained using vast datasets from books, websites, and articles. By analyzing this data, AI systems learn grammar, context, tone, and other nuanced aspects of language. 

Two important components of how generative AI models operate (illustrated in the toy sketch after this list) are: 

  • Training Phase: During this phase, the model is fed with vast amounts of data. It learns to recognize patterns, structures, and relationships in the data. 

  • Inference Phase: When the model is put into action, it generates new content by predicting what should come next based on the prompt or input provided by the user. 
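To make these two phases concrete, here is a minimal, deliberately toy sketch in Python. It uses a simple word-pair ("bigram") model rather than a real neural network, and the training text is made up for illustration, but the core idea is the same: the training phase learns patterns from example data, and the inference phase generates new text by repeatedly predicting what should come next.

```python
import random
from collections import defaultdict

# --- Training phase: learn which word tends to follow which ---
training_text = "the city council meets monthly . the city budget is public ."
words = training_text.split()

model = defaultdict(list)
for current_word, next_word in zip(words, words[1:]):
    model[current_word].append(next_word)  # record each observed pattern

# --- Inference phase: generate new text from a user's prompt ---
word = "the"  # the user's prompt
output = [word]
for _ in range(6):
    if word not in model:
        break  # no learned continuation for this word
    word = random.choice(model[word])  # predict (sample) the next word
    output.append(word)

print(" ".join(output))  # e.g. "the city budget is public ."
```

Real systems like ChatGPT do the same thing at vastly greater scale, predicting the next token with billions of learned parameters instead of simple word-pair counts.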

What is important to know is that these systems are fueled by incredible amounts of energy. Within the next six years, the data centres required to develop and run the kinds of next-generation AI models that Microsoft is investing in may use more power than all of India. They will be cooled by millions upon millions of litres of water. 

Watch these short videos on Generative AI and Large Language Models (LLMs are a specific type of generative AI that focuses on understanding and creating human-like text). If you want to do a deep dive, check out this video. (Note: you can choose subtitles in many languages.) 



3. Use Cases in Public Services  

Generative AI is finding its place in a wide range of public service sectors, from health and education to administrative work and law enforcement. Here are some use cases: 

  • Healthcare: Generative AI is used in creating personalized health reports, prompting doctors to request certain information from patients during office visits, or assisting doctors with diagnosis. For example, AI tools can generate summaries from medical records, recommend treatments based on symptoms, and create predictive models for patient outcomes. IBM’s Watson is one such AI system that has been used in oncology to provide treatment recommendations. 

  • Education: In schools, AI can assist teachers by generating lesson plans, creating automated quizzes, or even offering personalized tutoring to students. Tools like Khan Academy have started integrating generative AI into their platforms to assist students in real-time problem-solving. 

  • Government and Administration: Public services, such as city management or public welfare, can use AI to generate reports, respond to public inquiries, decide how public benefits are distributed, or even automate basic tasks like form generation. A city administration might use AI to predict and respond to public transportation needs, or automatically generate tax forms based on user input. 

  • Law Enforcement: AI is used across the world to assist law enforcement by generating reports, analyzing crime patterns and predicting future crime occurrences, or even drafting legal documents. 

  • Customer Service: Many public-facing institutions now use AI-driven chatbots to provide citizens with immediate responses to queries. For example, local governments can use generative AI to create virtual assistants that answer questions about public services like waste management or tax payments. 


4. Why Would Public Services Use Generative AI? 

As many public services are financially squeezed, they may be tempted to use Generative AI for the following reasons.  

  • Increased Efficiency: Generative AI can automate repetitive tasks, allowing workers to focus on more complex, meaningful work. For example, an AI tool can generate reports or answer citizen queries, which would otherwise take hours of manual labour. 

  • Cost Savings: By reducing the need for workers to perform labour-intensive tasks, generative AI can lower operational costs. Governments can reallocate budgets away from staff costs to other parts of their service. 

  • Enhanced Creativity: AI can assist in creative problem-solving by generating multiple solutions or ideas for complex issues. For instance, it can help city planners come up with new designs for public spaces or assist policy makers in drafting new legislation. 

  • Data Processing and Decision-Making: Generative AI can analyze vast amounts of data and generate actionable insights, which could improve public decision-making. For example, it could assist healthcare workers in analyzing population health trends or crafting appropriate public health responses. 

  • Personalization of Services: In sectors like education and healthcare, generative AI can personalize services based on the needs of individuals. AI can help customize learning experiences or tailor medical treatments to individual patients. 


5. Problems with Generative AI in Public Services 

While generative AI can offer some benefits, it also poses significant challenges, especially when deployed in public services. Below is an expanded analysis of potential problem areas, from ethical concerns to technical limitations.


Economic and Workforce Displacement  

Generative tools can be used as assistants, augmenting human creativity, but they can also be used to automate certain types of work. There are many open questions about which tasks will be most easily automated and whether that automation will result in a reduction in total jobs, a profound change in how certain work is valued, or a restructuring of labour as new jobs are created. For example, a public service employee responsible for communication could either (1) lose their job because management decides to let generative AI write the press releases; (2) get a reduction in salary as they face more competition (from machines) in the market; or (3) no longer write as much manually, but instead be in charge of producing final texts using AI, or perhaps fact-checking AI-produced texts. 

Generative AI could lead to significant changes in the public service workforce, with potential job displacement being a major concern: 

  • Job Losses in Certain Sectors: As generative AI takes over tasks like report writing, legal document drafting, and customer service, many public service jobs may be at risk. Clerical, administrative, and even some professional roles could be automated, leading to job losses and economic disruption.

  • Shift in Skill Requirements: AI is also likely to change the nature of public service work. Workers may need to acquire new skills in AI oversight, data management, or ethical compliance. This can create a divide between those who are able to upskill and those who are left behind. 

Accountability and Lack of Transparency  

AI decision-making is often seen as a "black box," where the internal workings of the system are opaque even to those who design and deploy it. This lack of transparency can be problematic in public services, where accountability is crucial: 

  • Opaque Decision-Making: When AI is used to make or inform decisions—such as allocating public housing or determining eligibility for social services—there may be little clarity on how those decisions are made. This can erode public trust, especially if citizens feel they are being treated unfairly by an AI system they don’t understand. 

  • Limited Legal Recourse: If an AI system makes a mistake that harms a citizen—such as denying a benefit or misclassifying someone in a law enforcement database—it can be difficult to establish legal responsibility. Should the blame fall on the AI vendor, the public institution using the system, or the policymakers who approved its use?  

Lack of Human Oversight and Over-Reliance on AI  

Generative AI is a powerful tool, but public services run the risk of over-relying on it, leading to a lack of critical human oversight in decision-making: 

  • Automating Human-Centric Roles: If generative AI is used to automate tasks that require human empathy and understanding—such as social work, counselling, or healthcare triage—it can lead to depersonalised, inappropriate, or even harmful interactions. For instance, using AI to assess welfare applications without human review could result in vulnerable individuals who are experiencing unique challenges being denied essential services. 

  • Erosion of Skills: Over-reliance on AI can lead to a decline in human expertise and judgment in critical areas. Workers may lose skills and experience if they depend too heavily on AI-generated solutions without fully understanding the underlying issues or context. 

Bias and Discrimination  

Generative AI systems are trained on vast datasets that often reflect historical biases. When these datasets contain biased information—whether it's about race, gender, socio-economic status, or any other characteristic—AI models tend to perpetuate and even amplify these biases. In public services, this can have serious consequences. For example: 

  • Racial or Gender Bias in Decision-Making: As AI systems learn to recognize patterns in unstructured datasets, they may identify patterns based on variables that correspond to identity features protected by anti-discrimination legislation. Two common examples are race and gender. AI systems used in law enforcement or the criminal justice system could exhibit racial bias in their predictions, as seen in some predictive policing tools. Or, AI systems used for hiring in public-sector jobs could favour male candidates over female ones if the training data reflects historical gender imbalances. 

  • Discriminatory Welfare Allocation: AI systems used to allocate social benefits or public housing may discriminate against marginalized communities if the data they rely on is skewed. For example, if historical data shows that certain demographics were less likely to receive benefits, the AI may unjustly continue this pattern. 

Addressing bias requires continuous auditing and re-training of AI models on diverse datasets. However, some public services might think this is costly and time-consuming, especially if resources are limited. Additionally, diverse datasets may not exist, or they may not be context-appropriate for the intended use of the system. In many cases, the public service in question will need to manually override the AI system to limit the output of harmful material, but these sorts of interventions are necessarily brittle and imperfect. Indeed, workers in public services in the UK spoke of the constant necessity of overriding a system to avoid harm to citizens.
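To make "auditing" concrete, here is a minimal sketch in Python of one basic check: comparing an AI system's approval rates across demographic groups. The decision log, group names, and the four-fifths threshold are illustrative assumptions only, not a universal legal standard; real audits are far more extensive and should involve workers and their unions.

```python
from collections import defaultdict

# Hypothetical log of AI-assisted benefit decisions: (group, approved?)
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

totals = defaultdict(int)
approvals = defaultdict(int)
for group, approved in decisions:
    totals[group] += 1
    approvals[group] += approved  # True counts as 1

# Approval rate per group
rates = {group: approvals[group] / totals[group] for group in totals}
print("Approval rates:", rates)

# Flag a disparity using the "four-fifths" rule of thumb: the lowest group's
# rate should be at least 80% of the highest group's rate.
if min(rates.values()) < 0.8 * max(rates.values()):
    print("Warning: possible disparate impact -- escalate for human review.")
```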

Cultural Insensitivity and Lack of Contextual Understanding  

Generative AI models are often trained on datasets that differ from the context where the system is used. When a system developed in one context is deployed in a different location, it may reflect cultural interpretations, values, or sensitivities that local users perceive as insensitive or inappropriate. This is also a problem when AI models trained on global datasets are used for a particular population. This can be problematic in multicultural societies where public services must cater to diverse groups, and across geographies where systems predominantly trained on the Western world’s data are used in other regions: 

  • Inaccurate Translation Services: AI translation tools used in public services may struggle with cultural idioms, local dialects, or context-specific language use, leading to miscommunications. For instance, a generative AI translating government documents might miss the cultural significance of certain terms, resulting in misunderstandings. 

  • Insensitive Public Messaging: When generative AI is used to generate public-facing communications, such as government announcements or public health campaigns, there is a risk that the messaging may be culturally tone-deaf or inappropriate. For example, a generative AI system used to create content for a public health campaign might unknowingly offend certain religious or ethnic communities if it doesn't understand their customs. 

  • Language Barriers: AI-powered chatbots or customer service systems deployed in public services may not understand or respond accurately to regional dialects or minority languages. For instance, in India, where many languages are spoken, an AI service might favour Hindi or English, leaving speakers of less common languages at a disadvantage. 

  • Misinterpretation of Cultural Practices: AI used in healthcare or legal systems might misinterpret traditional practices, leading to poor advice or decisions. For example, if a public healthcare AI is programmed based on Western medical practices, it might disregard indigenous or cultural healing practices, leaving minority groups feeling misunderstood or mistreated. 

  • Stereotyping: Because generative AI is trained on large datasets that include biased or stereotypical content, it can reinforce or even amplify harmful cultural stereotypes when interacting with people from different regions. For example, if an AI is asked to generate images or descriptions of certain nationalities, it might produce exaggerated or stereotypical portrayals based on outdated or biased data.

  • Bias in Historical or Social Context: Generative AI trained on certain datasets may present historical events or social issues from a narrow perspective, often omitting or misrepresenting important cultural viewpoints from other regions. 

Fabrication and Misinformation  

Generative AI systems are also known to regularly produce entirely fabricated or incorrect information that may sound plausible but is factually inaccurate. This is commonly known as ‘hallucination’ (also called ‘fabrication’ or ‘confabulation’). This can be problematic in public services where accuracy is crucial, for example: 

  • Erroneous Legal Documents: If AI is used to generate legal documents or advice in public administration, hallucinations could lead to flawed legal reasoning or incorrect documentation, causing harm to individuals or delays in legal proceedings. 

  • Misleading Medical Advice: In healthcare, generative AI could hallucinate information that leads to inappropriate treatments or wrong diagnoses. Imagine a public health chatbot advising citizens based on false or incomplete information—it could seriously jeopardize public health. 

Sustainability and Environmental Impact  

The environmental costs of training and running large AI models are huge. Generative AI systems, particularly large language models like ChatGPT, require significant computational resources, which consume a vast amount of energy: 

  • High Carbon Footprint: The computational power needed to train and maintain generative AI models can have a significant carbon footprint. Public institutions must weigh the environmental cost of deploying large AI systems, particularly in light of broader government commitments to reduce carbon emissions. 

  • Resource Intensity: The hardware required for AI—high-performance servers, cooling systems, etc.—is resource-intensive. This can strain public budgets and infrastructure, particularly in regions that lack advanced technological capabilities. 

Privacy Violations and Data Security Risks  

Generative AI models are typically trained on large datasets, some of which may contain sensitive personal information. In public services, where AI is often tasked with handling confidential data (such as health records, social security information, or legal files), privacy concerns are paramount: 

  • Data Breaches: AI systems may inadvertently reveal sensitive data. For instance, if a generative AI system is used to assist with case management in social services, it might generate content that discloses private information about individuals without their consent. 

  • Re-Identification Risk: Even anonymized datasets can pose a privacy risk if AI models are able to "re-identify" individuals based on patterns in the data. This could be especially problematic in public health, where data about disease outbreaks might inadvertently expose personal details of patients. 
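To see how easy re-identification can be, here is a minimal sketch in Python using entirely made-up records: an "anonymized" health dataset is linked back to named individuals through quasi-identifiers (postcode, birth year, and gender). All names, field names, and values are hypothetical.

```python
# "Anonymized" health records: names removed, quasi-identifiers kept.
health_records = [
    {"postcode": "2000", "birth_year": 1980, "gender": "F", "diagnosis": "diabetes"},
    {"postcode": "3050", "birth_year": 1992, "gender": "M", "diagnosis": "asthma"},
]

# A separate, publicly available register (e.g. an electoral roll) with names.
public_register = [
    {"name": "Jane Citizen", "postcode": "2000", "birth_year": 1980, "gender": "F"},
    {"name": "John Voter", "postcode": "3050", "birth_year": 1992, "gender": "M"},
]

def quasi_id(record):
    """The combination of attributes shared by both datasets."""
    return (record["postcode"], record["birth_year"], record["gender"])

# Linking the two datasets on the shared attributes re-identifies the patients.
lookup = {quasi_id(person): person["name"] for person in public_register}
for record in health_records:
    name = lookup.get(quasi_id(record))
    if name:
        print(f"{name} can be linked to diagnosis: {record['diagnosis']}")
```

This is why stripping names alone is not enough: any combination of attributes that is rare in the population can act as a fingerprint.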

Intellectual Property (IP) Concerns  

Generative AI systems often create new content—whether text, images, or even music—by using and transforming existing works. In public services, where content such as public documents, reports, and educational materials is generated, this raises questions about intellectual property rights: 

  • Copyright Infringement: Generative AI tools might inadvertently generate content that closely resembles copyrighted works, leading to potential legal disputes. For example, if an AI system generates a public campaign image or slogan that too closely resembles a commercial logo or phrase, public institutions could be held liable for infringement.  

  • Ownership of AI-Generated Content: In public service contexts, determining who owns the rights to AI-generated content can be a legal grey area. Does the government own the rights to reports or documents produced by an AI, or does the original creator of the AI hold these rights? Clear legal frameworks are still evolving, but public institutions must be cautious in handling this issue. 

6. Tips for Workers in Public Services  

While the field of generative AI is still emerging and regulatory responses are pending, we would like to offer a few fluid, changeable, and non-exhaustive responses that public sector workers can adopt.  

Before introducing generative AI into the workplace, workers should demand clear communication, transparency, and safeguards from management to ensure the technology is deployed ethically, safely, and fairly.  

But first, as an individual, remember the following:

  • Never ever upload sensitive material to these systems. This includes datasets that contain personal data or personally identifiable information, case files, and the like. It is important to remember that these systems are developed by private companies. Once data is uploaded, you lose control over it, and it may be used to train the company’s models. 

  • Never trust the output. Always double-check the results you receive – fact-check them and validate any content with reputable sources. 

  • Consider the environmental impact of playing around with these systems.  

In your work setting, try to get the following in place before you agree to use generative AI at work:

Management has, in consultation with you, established a clearly defined internal policy governing how generative AI will be used in the workplace. This should cover topics such as: 

  • Stipulations that sensitive materials should never be uploaded to these systems. This includes datasets that contain personal data or personally identifiable information, case files, and the like. 

  • Language guaranteeing that workers will not be required to use their personal email address to create an account for work purposes. 

  • A “No Job Loss Guarantee”: i.e. that the introduction of AI will not lead to immediate job losses. 

  • Clear instructions on who can use AI, how it will be integrated into work processes, what kinds of tasks it is permitted to assist with, and a plan for upskilling/reskilling workers during working time. 

  • A training scheme to ensure workers have a good comprehension of what the AI can and cannot do, to avoid misunderstandings about its potential impact and performance. 

  • Language establishing safeguards if working conditions are changed.  

  • Information to all workers, in plain language, about the nature and purpose of the systems used. 

  • Clear privacy safeguards, for example concerning what other purposes the logged data (on when the generative AI system is used, for what purposes, and by whom) can be used for. 

  • Human-in-control principles that ensure workers have the right and adequate time to check the validity of the information produced by generative AI systems. 

  • Clear policies on how data will be protected, stored, and used if management uses generative AI to process employee data. This includes personal information and communication that may be analysed by Generative AI. 

  • Bias mitigation strategies, including how management will ensure that AI systems do not reinforce existing biases, especially in decision-making processes affecting workers and/or the public. 

  • Accountability structures: Management must establish accountability structures, so it’s clear who is responsible for decisions made with AI input. Workers and the public should know where to raise concerns if AI is making errors or causing harm. To this end, establish a whistleblowing system for safe reporting. 

  • Environmental accounting, for example of the energy and water the systems consume. 

  • Grievance mechanisms to enable problems to be addressed early, before they escalate, as well as to help identify patterns over time. 

  • Supply chain due diligence, such as guaranteeing that all workers involved in the development and moderation of generative AI systems, including data annotators, content moderators, and contract workers, are paid a living wage based on their region. 

  • Materials generated by AI in collaboration with or under the direction of workers shall be jointly owned by the worker(s) and the public service. Workers retain partial ownership rights, particularly when their creativity, expertise, or knowledge significantly influences the AI's output. 

  • Any material, prompts, data, or specifications provided by workers to guide the AI system will be recognized as their intellectual property (IP). The worker shall have the right to be credited as a co-creator of any AI-generated content that stems from their input. 

  • If AI-generated materials result in commercial gain or other benefits, workers shall be entitled to a fair share of profits or other benefits in proportion to their input. Transparent mechanisms for profit-sharing or any other means of redistribution of gains into the workforce should be established.  

  • A guarantee that these policies shall be reviewed regularly, with input from employees, to ensure they remain relevant as AI evolves and its role in the workplace changes. 

Note: If you don’t have a statutory right of consultation and this is not guaranteed in your collective agreement, negotiate for the right to be consulted before management introduces generative AI systems, and try to include as many of the items listed above as possible. 

You could additionally raise the following points with management.  

Management must be able to prove that they meet all legal obligations, such as those relating to health and safety, data protection (including impact assessments), equality and human rights law.  

Management has put in place regular system audits that will be conducted in cooperation with you/your union. 

Management must guarantee non-discriminatory practices: AI usage must be thoroughly vetted for fairness and non-discrimination. 

Transparency in AI Decision-Making: The public must have the right to know when Generative AI has been involved in decisions such as welfare assessments, law enforcement, or public health responses. The public service must remain transparent and accountable to workers and the public. 

7. Conclusion  

There is a considerable risk that the use of AI systems by public services will only help Big Tech companies consolidate their power. Corporate control over public services is already widespread and is spurred by the increasing push to integrate digital systems into everything from the provision of social services, to assessing job applications, to policy making and democracy. Couple this with overall privatisation trends and the rise in public procurement of privately developed digital systems for public services, and the picture is clear. 

In addition, Big Tech companies control massive amounts of data, not only through public services but also through the services they offer businesses and the public. They have the financial means, the computational power needed to run AI systems, and the technical expertise required to do so. 

To safeguard quality public services and democracy, it is therefore pertinent that public service unions across the world advocate for inclusive AI governance laws and policies. These policies should strengthen public service autonomy and capacities to limit dependency on commercial interests; revise public procurement demands to include environmental and social impact accounting; safeguard quality jobs and redistribute profits made from digital technologies back into the workforce; ensure transparency and accountability of AI systems; enhance data ownership; include obligatory inclusive governance of AI systems; and much more.

Many of these policy pushes can be directed through collective bargaining. To this end, PSI has developed three tools that can be helpful. The first is the Digital Bargaining Hub, an open database of collective bargaining clauses, framework agreements, and the like from unions across the world. Stay tuned there as unions negotiate around the use of generative AI! 

The second is the co-governance guide, and the third is the data lifecycle at work. All three are important when management uses generative AI systems to inform managerial decisions about workers, and when such systems are used in the services you provide to the public. 

Video

This 18-minute video covers key questions such as: what is generative AI, how does it work, how do I use it, and what are some of the risks and limitations? It also covers topics like autonomous agents, the role of us humans, prompt-engineering tips, AI-powered product development, the origin of ChatGPT, different types of models, and some tips about mindset around this whole thing.

Generative AI in a Nutshell - how to survive and thrive in the age of AI

Read more

Visit our digitalisation page, where you can find our key publications, resources, and news on how unions can shape the digital transformation in the interests of workers and public services.