2025 Conference on International Cyber Security | 4-5 November 2025

Panel 7 | Shaping the Digital: Cybersecurity Communities Beyond the State

Liliya Khasanova

Liliya Khasanova is a post-doctoral researcher at the Hitachi Center for Technology and International Affairs at the Fletcher School of Law and Diplomacy. Her research interests include national perspectives on the international law of cyberspace, global-regional data governance, and the impact of ICTs on the international legal order. Before joining the Fletcher School, Liliya was a post-doctoral fellow at the Berlin-Potsdam Research Group ‘International Rule of Law – Rise or Decline?’. She was selected as a Women, Peace and Security (WIIS) Next Generation Fellow in 2021 and is currently a Center Associate with the Davis Center for Russian and Eurasian Studies at Harvard University.

Josephine Wolff

Abstract

Keynote

Private and Public Sector Stakeholder Perspectives on the EU AI Act

Artificial intelligence is now recognised as a central force shaping economies, societies and power relations, whether welcomed or not. AI innovation, led mainly by big tech companies, is also increasingly positioned as a geopolitical matter. What does this mean for the traditional role of states as central actors in global governance? How do these fast-moving technologies shift regulatory agency? The paper addresses these questions by focusing on the European Union’s evolving regulatory framework for AI.

Drawing on a chronological and thematic analysis of the various drafts of the EU AI Act, this paper investigates the interplay between the regulatory ambitions of states and private-sector influence. It seeks to understand the roles played by different stakeholders in the legislative process, their motivations, and the extent to which they are satisfied with the language of the law. At a high level, many of these motivations are apparent: tech companies generally lobbied for deregulation and independent security assessments, while advocacy groups called for more stringent government oversight. However, little research has examined the details of what these stakeholders were advocating for and how policymakers arrived at the eventual draft.

The research will proceed in two stages: (1) document analysis and (2) semi-structured interviews. The document analysis stage will involve reviewing all drafts of and amendments to the AI Act to identify significant changes and trace the law’s evolution, along with public and leaked statements by companies, advocacy organizations and lawmakers. This will allow us to map out a timeline of major changes to the EU AI Act text, as well as a network of involved stakeholders in both the public and private sectors to target for the interview stage.

Through this lens, the paper reveals regulators’ deeper dependency on corporate compliance, cooperation, and infrastructure. It illustrates how regulatory agency is shifting, as global tech companies not only implement but also shape the rules that govern AI development and deployment. By placing these developments within the broader themes of order, disorder, and re-order in cyberspace, the paper argues that AI regulation is increasingly becoming a site of negotiated power between governments and global technology companies.