Sydney AI Security Summit 2025
Unlock the strategic imperative of AI security. A first-of-its-kind gathering of forward-thinking leaders, practitioners and security teams to confront the "AI security chasm" head-on and build a foundation of trust for AI initiatives.
Check out last year's event


Defend your AI future with expert insights, real-world strategies, and interactive sessions on model security, adversarial defence, and compliance.
This November, we’re bringing together security leaders, AI practitioners, and industry innovators to address the fast-emerging risks of AI adoption.
Discover best practices for securing AI models, defending against adversarial threats, and ensuring compliance as AI moves from pilot to production. Dive into how organisations are tackling challenges from deepfakes to model poisoning, and learn what’s needed to build resilient, trustworthy AI systems. Engage in interactive sessions, technical demos, real-world case studies, and expert debates to stay ahead of the latest threats.
Key Themes
- Securing AI Models in Production
- AI Supply Chain Risks
- Detecting and Preventing Model Poisoning
- Defending Against Adversarial Attacks
- Guarding Against Deepfakes and Synthetic Media
- Regulatory and Compliance Requirements for AI Security
- AI Identity & Access Risks (Shadow Access, AI Agents)
- Building Trustworthy and Resilient AI Systems
Who Should Attend?
CISOs, security leaders, AI engineers, IT leaders, risk managers, and anyone eager to understand and solve the security challenges of the AI era while networking with peers facing the same pressures.
Presented by Foundation Partner

Our Speakers
Agenda
AI offers huge opportunities, from streamlining operations to unlocking new customer value, but without the right guardrails it can also introduce risks around security, ethics, compliance, and trust. In this keynote, Anna will share how organisations can lay the foundations to adopt AI safely, responsibly, and with confidence. She will explore how to balance innovation with governance, ensuring that both leaders and employees are equipped to manage AI’s promise and its pitfalls.
We’ll cover:
- Set governance early: Establish risk, compliance, and accountability frameworks before rollout
- Enable people: Equip staff with skills and policies to use AI securely
- Design in trust: Embed security and transparency into AI systems from the start
AI is fast becoming core to business. Securing it isn’t just a technical task; it’s essential for resilience, trust and growth. This session explores how to embed security across the AI lifecycle, with insights from Snyk on integrating security into development and scaling a security-first approach.
- AI Security as Business Resilience: Why securing AI is a strategic imperative, not just a technical task
- Secure at Inception: How to embed security across the entire AI lifecycle from design to deployment
- The Human and Governance Layers: Cultural shifts and organisational changes needed for safe and sustainable AI adoption
At The Warehouse Group, New Zealand’s largest retailer, AI has moved well past pilot projects; it’s woven into customer experiences, supply chains, and daily operations. But bringing AI into the heart of the business also exposed gaps and risks: how to keep sensitive data safe, how to manage tools the business wanted to use before security was ready, and how to build trust in decisions made by machines. In this keynote, Ankit will share the messy reality of securing AI at enterprise scale, including what worked, where things went wrong, and what he'd do differently. Expect practical lessons grounded in a real-world retail journey, not theory.
We’ll cover:
- The expanding risk surface: Where AI introduced new exposures across data, models, and vendors, and what actually made a difference in reducing them
- Balancing innovation and control: The guardrails that stuck (and the ones that teams ignored), and how to keep security aligned with business pace
- Lessons for peers: The pitfalls to avoid and the steps other leaders can take right now to strengthen AI security programmes in 2025
In this innovative session, attendees will be presented with a series of scenarios they may encounter in their roles. Attendees will discuss the possible courses of action with their peers to consider the ramifications of each option before logging their own course of action.
Results will be tallied and analysed by our session facilitator, and those results will shape the way the group moves through the activity.
Will we collectively choose the right course of action?
The rise of AI is introducing supply chain–style risks we’ve seen before in open source. The way developers pick, use, and secure components has direct parallels to how organisations will need to think about models, training data, and AI tooling. This keynote connects the dots between securing the software supply chain and what it means to secure AI.
We’ll cover:
- Parallels to open source: How today’s AI risks echo the dependency, trust, and governance challenges seen in software supply chains.
- Where AI helps (and misleads): The promise and pitfalls of using AI to guide security choices and code fixes.
- Applying the lessons: Practical steps organisations can take from supply chain security to get ahead of AI-specific threats.
AI promises speed and scale, but without clear guardrails it can also create serious risk. This panel brings together senior leaders to unpack how organisations are approaching AI risk management and governance in practice.
We’ll cover:
- Where risk shows up first: From model bias to shadow AI, the issues leaders are most worried about.
- Governance in practice: What frameworks and policies are actually working (and which ones aren’t).
- Balancing speed and safety: How to keep innovation moving without losing control of risk.
Large Language Models (LLMs) are reshaping business and security landscapes, but they also introduce a new wave of risks. This session will break down the OWASP Top 10 for LLMs, highlighting real-world vulnerabilities and mitigation strategies. From prompt injection to data leakage and model supply chain risks, you’ll gain practical guidance on how organisations can safely innovate with AI while staying ahead of emerging threats.
- Know the Risks: Understand the most critical vulnerabilities unique to LLMs.
- Defend Smartly: Learn practical defence techniques to reduce AI security risks.
- Adopt Safely: Gain a framework to align innovation with secure AI adoption.
Choose one Roundtable topic to join on the day.
- Protecting Data in AI Models
- AI and Compliance
- Managing Shadow AI
- AI Governance
- Building the AI Security Framework
- Securing AI Agents
Past Speaker Highlights
Who Attends?
Chief Information Security Officer
Head of AI
Head of ML
Head of Cybersecurity
Information Security Director
Head of AI Security
Head of Cloud Security
Head of DevSecOps
Head of Application Security
MLOps Lead
ML/AI Security Engineer
AI Infrastructure Architect
Penetration Testing Lead
Head of AI Risk
Head of Cybersecurity Operations
Head of Cybersecurity GRC
Head of Cloud Platform Engineering
Head of AI Engineering
Head of Engineering
Head of Software Security
Our event sponsors
Past Sponsors
Event Location
W Sydney

Frequently Asked Questions
Get In Touch
Contact our event team for any enquiry

Danny Perry
For sponsorship opportunities.

Lili Munar
For guest and attendee enquiries.

Ben Turner
For speaking opportunities & content enquiries.

Taylor Stanyon
For event-related enquiries.
