Senior Lecturer at Bar-Ilan University & Non-resident Research Scholar at the Center for Long-Term Cybersecurity (CLTC), University of California, Berkeley

Dr. Gil Baram researches how AI is reshaping the cybercrime and cyber conflict landscape, with a focus on what this shift means for incident response, attribution, and cross-sector coordination. Over the past two years she has led tabletop exercises in Berkeley, Singapore, and Tel Aviv, bringing together senior practitioners from government, law enforcement, critical infrastructure, and the technology sector to stress-test how existing playbooks hold up against AI-accelerated operations. She advises industry, government, and international organizations on AI-enabled cyber threats, cyber diplomacy, and policy.
AI is widely described as a force multiplier for cybercrime. Yet most threat intelligence discourse stays at the technique level (AI-generated phishing, polymorphic malware, deepfake-enabled fraud) and underplays what practitioners encounter operationally: generative AI is reshaping the cybercrime ecosystem faster than incident response playbooks, attribution workflows, and cross-sector coordination mechanisms can adapt.
This talk presents operational findings from three tabletop exercises conducted with senior practitioners in Berkeley (2024), Singapore (2025), and Tel Aviv (2025). Participants came from government, law enforcement, critical infrastructure operators, technology firms, and civil society: the same mix of actors that must coordinate during a real incident. Across all three sites, a consistent pattern emerged. The dominant failure mode was not detection; detection generally worked. The breakdown occurred in the decision layer above it: attribution calls, public disclosure, escalation, and cross-sector communication, where teams were forced to commit before facts could be verified.
Three operational shifts drove these failures, and each maps onto practitioner workflows. First, AI-enabled operations compressed decision timelines below the cycle time of standard IR, disclosure, and reporting processes. Second, generative AI raised the floor on deception quality, making attribution noisier and forcing analysts to act on lower-confidence signals than their playbooks assume. Third, AI further blurred the line between criminal, state-linked, and hybrid actors, breaking the classification logic that drives escalation and coordinated response.
The implication for the threat intelligence community is concrete: the next generation of operational risk lies less in new TTPs and more in whether IR architectures, attribution standards, and cross-sector coordination protocols can hold up under AI-accelerated tempo. The talk closes with practical takeaways for threat intelligence workflows, IR decision-making, and the design of cross-sector coordination playbooks.