Melbourne AI Engineering and Infrastructure Summit 2026
Shape the future of AI and join industry leaders for hands-on sessions and insights on scalable AI systems and high-performance infrastructure.

Join us for the second edition of the AI Engineering and Infrastructure Summit!
We're bringing together AI engineers, data scientists, and technology leaders to explore scalable AI systems and high-performance infrastructure.
Discover best practices for deploying AI models at scale, optimising data pipelines for machine learning workloads, and implementing continuous integration and deployment. Dive into Edge AI, discuss ethics in AI engineering, and debate whether cloud or on-prem solutions are best for AI development. Engage in interactive sessions, real-world case studies, panel discussions, and debates to stay ahead of emerging trends in AI engineering.
Key Themes:
- Building Scalable AI Systems
- Leveraging AI modernisation to transform applications and systems
- High-Performance AI Infrastructure
- Deploying AI Models at Scale
- Optimising Data Pipelines for ML Workloads
- Implementing Continuous Integration and Deployment
- Edge AI
- Ethics in AI Engineering
- Cloud vs. On-Prem: Which Is Best for AI Development?
Register now to secure your place and receive announcements when our full program launches.
Our Speakers
Agenda
Beat the rush and join us early for complimentary barista-made coffee and breakfast.
Why AI that looks brilliant in a demo so often collapses in the real world, and the engineering, infrastructure, and operating fundamentals required to make AI truly production-ready in 2026.
Why most AI failures stem from data pipelines, infrastructure, and operating models, not the model itself. The real production bottlenecks: latency, reliability, cost, and scale. What “production-ready AI” actually requires in 2026, and what teams must stop overlooking.
- Learn why most AI projects fail due to data, infrastructure, and operating models, not the model itself.
- Understand the real bottlenecks to scaling AI: latency, reliability, cost, and operational complexity.
- Discover what it truly takes to make AI production-ready in 2026.
As AI moves into real-world systems, responsibility must be built into the technology itself, not added later. Engineering teams are now expected to design AI that is fair, transparent, and accountable, while still delivering innovation at speed.
This panel explores how organisations are embedding ethical considerations into AI engineering, from bias mitigation and model transparency to governance and accountability.
- Identifying and reducing bias in AI systems
- Improving transparency and explainability in models
- Embedding accountability and governance into AI development
A case study on designing efficient data pipelines to handle large-scale data ingestion, processing, and storage for AI applications.
In this interactive session, attendees will work through a series of scenarios they may encounter in their roles, discussing the possible courses of action with their peers and weighing the ramifications of each option before logging their own decision.
Results will be tallied and analysed by our session facilitator, shaping the way the group moves through the activity.
Will we collectively choose the right course of action?
As we deployed AI agents to automate accounting workflows in a highly regulated environment, we quickly learned that model-level safeguards weren’t enough. This session explores how we built a layered safety chain, spanning the user’s role, the agent, and the underlying services and data, to manage blast radius, permissions and autonomy in production. We’ll share how targeted beta rollouts and deliberate UI design helped define clear boundaries between essential human oversight and safe, scalable automation.
Enterprises are rapidly moving beyond a single-model strategy, deploying multiple LLMs, specialised models, and AI tools across different platforms and providers. While this unlocks new capabilities, it also introduces growing complexity around orchestration, infrastructure, governance, and cost control.
This panel explores how engineering and AI platform leaders are managing multi-model environments in practice. From model selection and orchestration to platform design, monitoring, and governance, panellists will share how organisations are keeping AI systems scalable, reliable, and maintainable as the model ecosystem continues to expand.
- Understand how enterprises are orchestrating multiple AI models across providers and platforms
- Learn practical approaches to infrastructure design, monitoring, and reliability in multi-model environments
- Explore strategies for managing governance, risk, and cost as AI usage scales across the organisation
Select a topic of discussion and engage in an interactive roundtable discussion with a group of your like-minded peers.
Put your knowledge to the test in this fast-paced quiz covering real-world trivia, key concepts, and emerging trends. Compete for bragging rights (and a travel voucher) as the top scorer takes the crown.
An exploration of the complexities involved in engineering scalable AI systems, including data pipelines, model deployment, and real-time processing.
A lively session exploring whether cloud-based or on-premises infrastructure offers the optimal environment for AI development and deployment. As organisations scale AI, leaders are grappling with where those workloads should run, balancing the cloud’s rapid access to compute and AI services against the greater control over data, cost, and performance offered by on-premises environments.
This interactive think tank puts the question directly to the audience. Participants vote live on a series of real-world scenarios, explore the results together, and vote again as perspectives shift through the discussion.
- Where are most of your organisation’s AI workloads currently running?
- For large-scale model training, which environment delivers the best results?
- What is the biggest driver influencing where AI workloads are deployed?
- Where do you see the biggest hidden cost in AI infrastructure?
- For highly sensitive or regulated data, where should AI workloads run?
- Looking ahead three years, what will the dominant AI infrastructure model be?
Unwind with your peers for a couple of drinks on us!
Past Speaker Highlights
Who Attends?
AI Infrastructure Architect
Agile Transformation Director
Head of AI Risk
Heads of AI
Head of AI Engineering
ML/AI Security Engineer
Head of Infrastructure
Heads of Cloud and IT Infrastructure




Attendee Testimonials
Our event sponsors


Past Sponsors




Event Location
Collins Square Events Centre

Frequently Asked Questions
Get In Touch
Contact our event team for any enquiry

Danny Perry
For sponsorship opportunities.

Lili Munar
For guest and attendee enquiries.

Steph Tolmie
For speaking opportunities & content enquiries.

Taylor Stanyon
For event-related enquiries.
