November 12, 2026
8:30am - 3:30pm
Collins Square Events Centre

Melbourne AI Security Summit 2026

AI security is now a strategic imperative. Join forward-thinking leaders, practitioners, and security teams focused on building robust, trusted foundations for enterprise AI initiatives.

Melbourne AI Security Summit 2026
Defend your AI future with expert insights, real-world strategies, and interactive sessions on model security, adversarial defence, and compliance.

We’re bringing together security leaders, AI practitioners, and industry innovators to address the fast-emerging risks of AI adoption.

Discover best practices for securing AI models, defending against adversarial threats, and ensuring compliance as AI moves from pilot to production. Dive into how organisations are tackling challenges from deepfakes to model poisoning, and learn what’s needed to build resilient, trustworthy AI systems. Engage in interactive sessions, technical demos, real-world case studies, and expert debates to stay ahead of the latest threats.

Key Themes
  • Securing AI Models in Production
  • AI Supply Chain Risks
  • Detecting and Preventing Model Poisoning
  • Defending Against Adversarial Attacks
  • Guarding Against Deepfakes and Synthetic Media
  • Regulatory and Compliance Requirements for AI Security
  • AI Identity & Access Risks (Shadow Access, AI Agents)
  • Building Trustworthy and Resilient AI Systems
Who Should Attend?

CISOs, security leaders, AI engineers, IT leaders, risk managers, and anyone eager to understand and solve the security challenges of the AI era while networking with peers facing the same pressures.

Our Speakers


Agenda

8:30am
Registration, Breakfast & Barista Coffee

Join us from 8:30am to network with your peers over a light breakfast snack and fresh, barista-made coffee.

9:25am
Welcome & Opening Remarks
9:30am
Opening Keynote: Preparing Organisations for Secure and Responsible AI Adoption

AI offers huge opportunities, from streamlining operations to unlocking new customer value. Without the right guardrails, however, it can also introduce risks around security, ethics, compliance, and trust. This keynote explores how organisations can lay the foundations to adopt AI safely, responsibly, and with confidence. It will examine how to balance innovation with governance, ensuring that both leaders and employees are equipped to manage AI’s promise and its pitfalls.

We’ll cover:

  • Set governance early: Establish risk, compliance, and accountability frameworks before rollout
  • Enable people: Equip staff with skills and policies to use AI securely
  • Design in trust: Embed security and transparency into AI systems from the start
9:50am
Keynote: Building AI Security into the Foundations of Your Company

AI is fast becoming core to business. Securing it isn't just a technical task; it's essential for resilience, trust, and growth. This session explores how to embed security across the AI lifecycle, with insights on integrating security into development and scaling a security-first approach.

This talk will cover:

  • AI Security as Business Resilience: Why securing AI is a strategic imperative, not just a technical task
  • Secure at Inception: How to embed security across the entire AI lifecycle from design to deployment
  • The Human and Governance Layers: Cultural shifts and organisational changes needed for safe and sustainable AI adoption
10:20am
Panel: Securing AI at Scale - Exploring Real-World Secure AI Applications

As AI moves beyond pilot projects and becomes embedded in customer experiences, supply chains, and day-to-day operations, organisations are confronting new gaps and risks. These include protecting sensitive data, managing AI tools adopted faster than security controls can keep up, and building trust in decisions made by machines. This panel explores the messy reality of securing AI at enterprise scale: what worked, where things went wrong, and what should be done differently. Expect practical lessons grounded in real-world experience, not theory.

We’ll cover:

  • The expanding risk surface: Where AI introduces new exposures across data, models, and vendors, and what actually makes a difference in reducing them
  • Balancing innovation and control: The guardrails that stick (and the ones teams ignore), and how to keep security aligned with business pace
  • Lessons for peers: The pitfalls to avoid and the steps leaders can take now to strengthen their AI security programmes
10:50am
Morning Tea & Networking
11:20am
Interactive Audience Activity: AI Rogue Agent Attack Simulation

In this innovative session, attendees will work through a series of scenarios they may encounter in their roles. Attendees will discuss the possible courses of action with their peers, weighing the ramifications of each option before logging their own decision. Responses will be tallied and analysed by our session facilitator, and the results will shape how the group moves through the activity.

Will we collectively choose the right course of action?

11:40am
How I Solved... Software Supply Chain Lessons & What They Teach Us About Securing AI

The rise of AI is introducing supply chain–style risks we’ve seen before in open source. The way developers pick, use, and secure components has direct parallels to how organisations will need to think about models, training data, and AI tooling. This keynote connects the dots between securing the software supply chain and what it means to secure AI.

We’ll cover:

  • Parallels to open source: How today’s AI risks echo the dependency, trust, and governance challenges seen in software supply chains.
  • Where AI helps (and misleads): The promise and pitfalls of using AI to guide security choices and code fixes.
  • Applying the lessons: Practical steps organisations can take from supply chain security to get ahead of AI-specific threats.
11:55am
How I Solved... Guardrails for Agentic AI in Production

As AI agents move from experimentation to real-world execution, traditional controls quickly fall short. This session shares a practical, production-tested approach to securing agentic AI without slowing delivery.

We’ll cover:

  • Where traditional security models break down for agentic AI
  • How to design and enforce effective guardrails in production
  • Monitoring, evaluation, and control patterns that actually work
  • Lessons learned from deploying autonomous systems safely
12:10pm
Panel Discussion: AI Risk and Governance - Getting Control Before It Gets Away

AI promises speed and scale, but without clear guardrails it can also create serious risk. This panel brings together senior leaders to unpack how organisations are approaching AI risk management and governance in practice.

We’ll explore:

  • Where risk shows up first: From model bias to shadow AI, the issues leaders are most worried about.
  • Governance in practice: What frameworks and policies are actually working (and which ones aren’t).
  • Balancing speed and safety: How to keep innovation moving without losing control of risk.
12:40pm
Roundtable Discussions

Choose 1 topic to discuss with a curated group of your peers!

  1. Securing AI From Idea To Production
  2. Open Source In AI: Know The Risk
  3. Trusting Code Written With AI Help
  4. Managing Legal And Licensing Exposure
  5. Making Dev And Security Work Together
  6. Right-Sizing Security Tests In Pipelines
  7. Shipping GenAI Features Safely
  8. Supply Chain Integrity For AI Products
1:30pm
Lunch & Networking
2:20pm
Keynote: OWASP Top 10 for LLMs

Large Language Models (LLMs) are reshaping business and security landscapes, but they also introduce a new wave of risks. This session will break down the OWASP Top 10 for LLMs, highlighting real-world vulnerabilities and mitigation strategies. From prompt injection to data leakage and model supply chain risks, you’ll gain practical guidance on how organisations can safely innovate with AI while staying ahead of emerging threats.

We'll explore:

  • Know the Risks: Understand the most critical vulnerabilities unique to LLMs.
  • Defend Smartly: Learn practical defence techniques to reduce AI security risks.
  • Adopt Safely: Gain a framework to align innovation with secure AI adoption.
2:50pm
Think Tank: Will Regulation Save AI Security... or Break Innovation?

As AI regulation accelerates globally, organisations must balance compliance with the need to move fast. From the EU AI Act to emerging APAC and U.S. frameworks, regulation is reshaping how AI is built, deployed, and secured. This think tank brings together cybersecurity and technology leaders to debate whether regulation meaningfully improves AI security or unintentionally slows innovation and drives risk elsewhere.

Debate topics will include:

  • Which regulations actually reduce AI security risk?
  • How can organisations stay compliant without slowing delivery?
  • Does regulation curb shadow AI, or push it underground?
  • What governance approaches work in practice?
  • By 2027, will regulation help or hinder secure AI adoption?
3:30pm
Event Closed

Past Speaker Highlights

Shubham Arora

Chief Engineer - AI Platform, Commonwealth Bank

Anna Aquilina

Chief Information Security Officer, UTS

Ankit Gupta

General Manager Group Technology, The Warehouse Group

Leron Zinatullin

Chief Information Security Officer, Linkly

John Morcos

Head of Cyber Security Governance and Operations, Blackmores Group

Edwin Kwan

Head of Product Security, Domain

Arnav Sharma

Principal Security Architect, News Corp

Shenphen Ringpapontsang

Head of Risk & AI Ethics, Future Group

Who Attends?

Chief Information Security Officer

Head of AI

Head of ML

Information Security Director

Head of Cloud Security

Head of Application Security

Head of Cybersecurity

Head of AI Security

Head of DevSecOps

ML/AI Security Engineer

Penetration Testing Lead

Head of Cyber Operations

Head of Cloud Platform Engineering

Head of Engineering

MLOps Lead

AI Infrastructure Architect

Head of AI Risk

Head of Cyber GRC

Head of AI Engineering

Head of Software Security


Attendee Testimonials


Our event sponsors

For sponsorship opportunities, please get in touch with Danny Perry, danny@clutchgroup.co

Past Sponsors

Event Location

Collins Square Events Centre

Level 5, Tower 2/727 Collins St, Docklands VIC 3008

Frequently Asked Questions


Get In Touch

Contact our event team for any enquiry

Danny Perry

Director of Sales
For sponsorship opportunities.
danny@clutchgroup.co

Lili Munar

Director of Client Relations
For guest and attendee enquiries.
lilibeth@clutchgroup.co

Steph Tolmie

Director of Conference Production
For speaking opportunities & content enquiries.
stephanie@clutchevents.co

Taylor Stanyon

Director of Operations
For event-related enquiries.
taylor@clutchgroup.co