Sydney AI Engineering & Infrastructure Summit 2025
Shape the future of AI and join industry leaders for hands-on sessions and insights on scalable AI systems and high-performance infrastructure.

Join us at the AI Engineering and Infrastructure Summit to shape the future of AI systems.
In July, we're bringing together AI engineers, data scientists, and technology leaders to explore scalable AI systems and high-performance infrastructure.
Discover best practices for deploying AI models at scale, optimising data pipelines for machine learning workloads, and implementing continuous integration and deployment. Dive into Edge AI, discuss ethics in AI engineering, and debate whether cloud or on-prem solutions are best for AI development. Engage in interactive sessions, real-world case studies, panel discussions, and debates to stay ahead of emerging trends in AI engineering.
Key Themes:
- Building Scalable AI Systems
- High-Performance AI Infrastructure
- Deploying AI Models at Scale
- Optimising Data Pipelines for ML Workloads
- Implementing Continuous Integration and Deployment
- Edge AI
- Ethics in AI Engineering
- Cloud vs. On-Prem: Which Is Best for AI Development?
Who Should Attend?
AI engineers, data scientists, IT professionals, technology leaders, and anyone eager to enhance their understanding of AI engineering and infrastructure.
Don't miss this chance for a day of learning, innovation, and collaboration.
Registrations open in early 2025.
Program Highlights
Our Speakers
Agenda
Discover a proven roadmap for building and deploying enterprise AI solutions that deliver real business value. Explore the essential steps, from strategic planning and resource alignment to streamlined development workflows, that ensure future growth and adaptability.
This panel discussion dives into the compelling reasons to leverage AI when modernising tech estates and businesses, outlines sophisticated implementation approaches, and identifies the essential stakeholders who can champion a truly transformative agenda.
- Examining the drivers prompting organisations to transform applications and systems
- Highlighting key opportunities, including using AI both to drive modernisation and as part of the modernised state
- Understanding the cross-functional roles necessary to orchestrate AI-enabled modernisation successfully
- Outlining advanced strategies for refining infrastructure, data pipelines, and MLOps processes to scale AI effectively
- Fostering collaboration, robust governance, and continuous learning to ensure AI’s long-term viability and impact
In this innovative session, attendees will work through a series of scenarios they may encounter in their roles. They will discuss the possible courses of action with their peers, weighing the ramifications of each option before logging their own choice.
Results will be tallied and analysed by our session facilitator, and the outcomes will shape how the group moves through the activity.
Will we collectively choose the right course of action?
How do you architect an AI-native platform purpose-built for vector search, LLM workflows, and scale? In this technical session, Relevance.ai Co-Founder Daniel Palmer shares the foundational decisions and cutting-edge infrastructure behind the platform's rapid growth. Gain insights into the real engineering behind enabling powerful, scalable, and enterprise-ready AI capabilities.
- Designing for Scale from Day One: How Relevance.ai architected a vector-first platform to support fast, scalable AI-driven use cases
- Compute and Cost Efficiency at Scale: Balancing performance, latency, and cost using smart infrastructure strategies across multi-cloud environments
- LLM Workflow Orchestration: Powering custom enterprise AI workflows with modular, flexible pipelines and real-time data processing
- From MVP to Enterprise-Ready: Evolving the platform to meet enterprise security, reliability, and integration requirements without sacrificing speed
Building AI systems is easy in the lab—but scaling them into real-world environments without blowing out costs, risking reliability, or overwhelming teams is where most organisations struggle. In this session, we’ll share how we helped companies move from small pilots to scalable, production-ready AI systems that deliver real impact without the growing pains.
- How we built infrastructure that could flex with model complexity and user demand without needing constant rebuilds.
- How we reduced deployment time and operational risk by standardising pipelines, monitoring, and retraining workflows.
- How we helped teams control compute costs and performance trade-offs by designing smarter scaling and resource allocation strategies.
Explore the technical intricacies of designing, deploying, and scaling AI infrastructure. Delve into the tools, frameworks, and architectures that power high-performance AI solutions, and learn how to balance agility, security, and cost-efficiency.
- How do teams architect resilient, high-performance computing environments to support AI workloads at scale?
- How can teams ensure real-time, high-volume data flow for AI?
- Which pipelines streamline model development, deployment, and continuous monitoring?
Roundtable topics to be shared with registered attendees for their selection
In the rush to implement Large Language Models, many organisations are overlooking the strategic value of traditional machine learning approaches. Through real-world examples and practical frameworks, this talk challenges the "LLM everywhere" mindset and demonstrates how a hybrid approach combining targeted traditional ML with LLMs can create more reliable, cost-effective, and safer AI systems.
In this interactive session, participants will explore and debate five hot-button issues shaping AI’s future. Expect divergent views and lively discussion on how these trends could redefine both engineering practices and business outcomes.
- Cloud vs. On-Prem High-Performance Computing – Balancing elasticity, control, and cost
- AutoML Tools – Do they empower teams or oversimplify complex engineering challenges?
- Ethical AI vs. Speed to Market – Should organisations slow innovation to ensure responsible development?
- Edge AI vs. Centralised Processing – Is pushing more AI to the edge truly efficient or overly complex?
- Low-Code/No-Code AI – Does democratising AI risk quality and governance, or is it the key to widespread adoption?
Who Attends?
Chief Technology Officer
Head of Machine Learning
Head of AI
Head of Engineering
Head of AI Engineering
Head of Cloud
Head of Data
Head of Infrastructure
Chief Data Officer
Digital Transformation Director
Head of DevOps
Application Development Director
Software Architect
Cloud Architecture Manager
Site Reliability Engineering Manager
Head of Platform
Benefits For Attendees





Event Location
Dockside

FAQs
Get In Touch
Contact our event team for any enquiry

Danny Perry
For sponsorship opportunities.

Lili Munar
For guest and attendee enquiries.

Ben Turner
For speaking opportunities & content enquiries.

Taylor Stanyon
For event-related enquiries.