Melbourne AI Security Summit 2026
AI security is now a strategic imperative. Join a gathering of forward-thinking leaders, practitioners, and security teams focused on building robust, trusted foundations for enterprise AI.

Defend your AI future with expert insights, real-world strategies, and interactive sessions on model security, adversarial defence, and compliance.
We’re bringing together security leaders, AI practitioners, and industry innovators to address the fast-emerging risks of AI adoption.
Discover best practices for securing AI models, defending against adversarial threats, and ensuring compliance as AI moves from pilot to production. Dive into how organisations are tackling challenges from deepfakes to model poisoning, and learn what’s needed to build resilient, trustworthy AI systems. Engage in interactive sessions, technical demos, real-world case studies, and expert debates to stay ahead of the latest threats.
Key Themes
- Securing AI Models in Production
- AI Supply Chain Risks
- Detecting and Preventing Model Poisoning
- Defending Against Adversarial Attacks
- Guarding Against Deepfakes and Synthetic Media
- Regulatory and Compliance Requirements for AI Security
- AI Identity & Access Risks (Shadow Access, AI Agents)
- Building Trustworthy and Resilient AI Systems
Who Should Attend?
CISOs, security leaders, AI engineers, IT leaders, risk managers, and anyone eager to understand and solve the security challenges of the AI era while networking with peers facing the same pressures.
Our Speakers
Agenda
Join us from 8:30am to network with your peers over a light breakfast and fresh, barista-made coffee.
AI offers huge opportunities, from streamlining operations to unlocking new customer value. Without the right guardrails, however, it can also introduce risks around security, ethics, compliance, and trust. This keynote explores how organisations can lay the foundations to adopt AI safely, responsibly, and with confidence. It will examine how to balance innovation with governance, ensuring that both leaders and employees are equipped to manage AI’s promise and its pitfalls.
We’ll cover:
- Set governance early: Establish risk, compliance, and accountability frameworks before rollout
- Enable people: Equip staff with skills and policies to use AI securely
- Design in trust: Embed security and transparency into AI systems from the start
AI is fast becoming core to business. Securing it isn't just a technical task; it's essential for resilience, trust, and growth. This session explores how to embed security across the AI lifecycle, with insights on integrating security into development and scaling a security-first approach.
This talk will cover:
- AI Security as Business Resilience: Why securing AI is a strategic imperative, not just a technical task
- Secure at Inception: How to embed security across the entire AI lifecycle from design to deployment
- The Human and Governance Layers: Cultural shifts and organisational changes needed for safe and sustainable AI adoption
As AI moves beyond pilot projects and becomes embedded in customer experiences, supply chains, and day-to-day operations, organisations are confronting new gaps and risks. These include protecting sensitive data, managing AI tools adopted faster than security controls can keep pace, and building trust in decisions made by machines. This keynote explores the messy reality of securing AI at enterprise scale: what worked, where things went wrong, and what should be done differently. Expect practical lessons grounded in real-world experience, not theory.
We’ll cover:
- The expanding risk surface: Where AI introduces new exposures across data, models, and vendors, and what actually makes a difference in reducing them
- Balancing innovation and control: The guardrails that stick (and the ones teams ignore), and how to keep security aligned with business pace
- Lessons for peers: The pitfalls to avoid and the steps leaders can take now to strengthen their AI security programmes
In this interactive session, attendees will work through a series of scenarios they may encounter in their roles. They will discuss the possible courses of action with their peers, weighing the ramifications of each option before logging their own choice. Responses will be tallied and analysed by our session facilitator, and the outcomes will shape how the group moves through the activity.
Will we collectively choose the right course of action?
The rise of AI is introducing supply chain–style risks we’ve seen before in open source. The way developers pick, use, and secure components has direct parallels to how organisations will need to think about models, training data, and AI tooling. This keynote connects the dots between securing the software supply chain and what it means to secure AI.
We’ll cover:
- Parallels to open source: How today’s AI risks echo the dependency, trust, and governance challenges seen in software supply chains.
- Where AI helps (and misleads): The promise and pitfalls of using AI to guide security choices and code fixes.
- Applying the lessons: Practical steps organisations can take from supply chain security to get ahead of AI-specific threats.
As AI agents move from experimentation to real-world execution, traditional controls quickly fall short. This session shares a practical, production-tested approach to securing agentic AI without slowing delivery.
We’ll cover:
- Where traditional security models break down for agentic AI
- How to design and enforce effective guardrails in production
- Monitoring, evaluation, and control patterns that actually work
- Lessons learned from deploying autonomous systems safely
AI promises speed and scale, but without clear guardrails it can also create serious risk. This panel brings together senior leaders to unpack how organisations are approaching AI risk management and governance in practice.
We’ll explore:
- Where risk shows up first: From model bias to shadow AI, the issues leaders are most worried about.
- Governance in practice: What frameworks and policies are actually working (and which ones aren’t).
- Balancing speed and safety: How to keep innovation moving without losing control of risk.
Choose one topic to discuss with a curated group of your peers!
- Securing AI From Idea To Production
- Open Source In AI: Know The Risk
- Trusting Code Written With AI Help
- Managing Legal And Licensing Exposure
- Making Dev And Security Work Together
- Right-Sizing Security Tests In Pipelines
- Shipping GenAI Features Safely
- Supply Chain Integrity For AI Products
Large Language Models (LLMs) are reshaping business and security landscapes, but they also introduce a new wave of risks. This session will break down the OWASP Top 10 for LLMs, highlighting real-world vulnerabilities and mitigation strategies. From prompt injection to data leakage and model supply chain risks, you’ll gain practical guidance on how organisations can safely innovate with AI while staying ahead of emerging threats.
We'll explore:
- Know the Risks: Understand the most critical vulnerabilities unique to LLMs.
- Defend Smartly: Learn practical defence techniques to reduce AI security risks.
- Adopt Safely: Gain a framework to align innovation with secure AI adoption.
As AI regulation accelerates globally, organisations must balance compliance with the need to move fast. From the EU AI Act to emerging APAC and U.S. frameworks, regulation is reshaping how AI is built, deployed, and secured. This think tank brings together cybersecurity and technology leaders to debate whether regulation meaningfully improves AI security or unintentionally slows innovation and drives risk elsewhere.
Debate topics will include:
- Which regulations actually reduce AI security risk?
- How can organisations stay compliant without slowing delivery?
- Does regulation curb shadow AI, or push it underground?
- What governance approaches work in practice?
- By 2027, will regulation help or hinder secure AI adoption?
Past Speaker Highlights
Who Attends?
Chief Information Security Officer
Head of AI
Head of ML
Information Security Director
Head of Cloud Security
Head of Application Security
Head of Cybersecurity
Head of AI Security
Head of DevSecOps
ML/AI Security Engineer
Penetration Testing Lead
Head of Cyber Operations
Head of Cloud Platform Engineering
Head of Engineering
MLOps Lead
AI Infrastructure Architect
Head of AI Risk
Head of Cyber GRC
Head of AI Engineering
Head of Software Security

Attendee Testimonials
Our Event Sponsors
Past Sponsors

Event Location
Collins Square Events Centre

Frequently Asked Questions
Get In Touch
Contact our event team with any enquiries.

Danny Perry
For sponsorship opportunities.

Lili Munar
For guest and attendee enquiries.

Steph Tolmie
For speaking opportunities & content enquiries.

Taylor Stanyon
For event-related enquiries.
