
From Hype to Practice: Four Conversations About AI & Society…

Dr Thomas Redshaw

The impact of artificial intelligence (AI) technologies is becoming hard to ignore for colleagues across all disciplines. In just one year, the proportion of students in the UK using AI tools for assessments jumped from 53% to 88% (Freeman, 2025). Half the UK population now uses AI in some form, while one in six businesses has begun integrating these tools (Office for National Statistics, 2023). Alongside this surge in use, the UK government has launched a £14bn AI Opportunities Action Plan.

Amid this rapid change, universities face urgent questions. How should we approach AI ethically and pedagogically? What kinds of research and policies are needed? And how can we critically engage with the hype surrounding AI to understand its real-world effects?

To explore these questions, our research centre launched the AI & Society Forum earlier this year: a monthly discussion series bringing together colleagues from across disciplines and guests from a range of industries and sectors. Over the first four sessions, we explored what AI is, what it gets wrong, how to use it responsibly, and whether it truly enhances productivity.

Defining AI and Why It Matters

Our opening session sought to address a simple question: what do we mean when we talk about AI?

AI is often treated as a single tool, but it’s better understood as both a field of research and a constellation of technologies. It includes everything from natural language processing systems like ChatGPT, to machine vision for facial recognition, to machine learning algorithms that shape our social media feeds and search results.

As Ewa Luger (2023) notes, AI systems are always social as well as technical – they are built by humans and embedded within the messy realities of social life. This means they carry ethical, political, and cultural implications.

Our first forum discussion helped establish a shared vocabulary and set out colleagues’ key concerns. It was clear that interdisciplinary conversations are essential if universities are to engage critically with AI, rather than simply react to external pressures.

Understanding AI Beyond the Hype

Building on this foundation, our second session turned to how AI actually works, and what it often gets wrong.

As Dan McQuillan (2022) argues, machine learning is a process of “brute force mathematics”, optimising for fluency rather than truth. These systems are powerful, but they also misrepresent and oversimplify the world, leading to errors and harms that often fall hardest on marginalised communities.

These issues were further emphasised through examples shared by participants:

  • Bryn Phillips, a community organiser, discussed a custom GPT he built to advise renters on their housing rights.
  • Dr Muhammad Khan from SBS highlighted the staggering and escalating energy demands of AI systems.
  • Dr Keith Silika from Policing raised concerns about universities’ reliance on Big Tech for data storage and shared his research into hybrid cloud models.

We also discussed a recent study on co-designing AI policies with students (Judson, 2025), which spoke to the anxieties many of us had around the use of AI among students. It became clear we needed to dedicate a full session to this!

Responsibility and Governance

Following on from that discussion, our third session focused on how universities should respond to the proliferation of AI. We discussed three dominant positions emerging across the sector:

  1. Rapid embrace: AI as a competitive advantage, closely aligned with government innovation agendas.
  2. Responsible integration: developing ethical frameworks and practices that prioritise transparency and inclusion.
  3. Resistance: rejecting AI integration in classrooms, as seen in an open letter signed by over 800 educators worldwide.

Colleagues including Craig Smith, Sudi Sharifi, Graeme Sherriff, and others debated these perspectives, exploring how universities might balance innovation with professionalism. One practical idea was to involve students directly in shaping AI policies rather than imposing rules from above, a bottom-up approach to governance.

Productivity and Labour

With July’s session addressing students’ use of AI, August’s turned to its use by colleagues. AI tools are marketed as a way for professionals to save time and boost efficiency, but the emerging evidence paints a more complicated picture.

  • One study found that software developers were 19% slower when using AI tools because of the need for oversight (Becker et al., 2025).
  • Another reported that 77% of workers experienced increased workloads after adopting AI (Whittle, 2025).

Our discussion highlighted that AI is most useful when it assists with tasks we already know how to do. Yet the evidence also suggests that, where this is the case, organisations come to expect more output from fewer staff and recruit fewer people, especially for entry-level positions, a trend affecting graduates in particular.

We also discussed the arguments of sociologist Mark Carrigan (2024), who calls on scholars to see AI not as a tool for automating tasks but as an interlocutor: something we engage with critically and reflexively. This framing resonated with participants including Pal Vik, Sharon Coen, Simona Merlusca, Tania Goddard, Emma Kwegyir-Afful and others who reflected on their own teaching and research practices.

Building a Community of Practice

Across these four sessions, a few key threads have emerged. AI is deeply social, not just technical. Its integration into higher education brings both opportunities and risks, from environmental and labour impacts to questions of governance and ethics. And while its productivity promises are enticing, the reality is far more complex.

The AI & Society Forum has shown the value of creating interdisciplinary spaces where researchers and practitioners can grapple with these issues together. Future sessions will explore topics such as AI’s environmental costs, its role in assessment, and how creative practices can shape AI futures.

As AI continues to evolve, our task is not simply to keep up, but to shape how these tools are understood and used, ensuring higher education remains a place of critical inquiry and shared responsibility.

Dr Tom Redshaw joined the University of Salford in 2018, having been a Lecturer in Sociology at Loughborough University (2017-18) and St Mary’s University, London (2016-17). Tom has led and taught on undergraduate and postgraduate programmes covering a variety of topics in sociology and acquired Fellowship of the HEA in 2020. Tom has a PhD in Sociology (University of Manchester, 2017) and conducts research into the social impact of new technologies. 

References

Becker, J., Rush, N., Barnes, E., & Rein, D. (2025). Measuring the Impact of Early-2025 AI on Experienced Open-Source Developer Productivity. https://arxiv.org/abs/2507.09089

Carrigan, M. (2024). Generative AI for Academics. London: Sage.

Freeman, J. (2025). Student Generative AI Survey 2025. Higher Education Policy Institute. https://www.hepi.ac.uk/reports/student-generative-ai-survey-2025/

Judson, H. (2025). “But I Didn’t Use ChatGPT!”: Democratic Course Design and Generative Artificial Intelligence in Higher Education Landscape. Teaching Sociology, 53(3), 246-256. https://doi.org/10.1177/0092055X251342530

Luger, E. (2023). What Do We Know and What Should We Do About AI? London: Sage.

McQuillan, D. (2022). Resisting AI: An Anti-Fascist Approach to Artificial Intelligence. Bristol: Bristol University Press.

Office for National Statistics. (2023, June 16). Understanding AI uptake and sentiment among people and businesses in the UK. https://www.ons.gov.uk/businessindustryandtrade/itandinternetindustry/articles/understandingaiuptakeandsentimentamongpeopleandbusinessesintheuk/june2023

Whittle, J. (2025, July 15). Does AI actually boost productivity? The evidence is murky. The Conversation. https://theconversation.com/does-ai-actually-boost-productivity-the-evidence-is-murky-260690
