Feature flags are a powerful tool that can help software teams ship faster, safer, and smarter. In this session, you will learn what feature flags are, how to use them effectively, and the many benefits they offer.
Who is this session for?
This session is for anyone who wants to learn more about feature flags and how to use them to improve their software development process.
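As a minimal sketch of the core pattern (the flag name, the beta cohort, and the in-memory store below are illustrative, not a specific flag product):

```java
import java.util.Map;
import java.util.Set;

// Minimal in-memory feature flag store. Real deployments typically back this
// with a flag service or config system; this sketch only shows the pattern.
public class FeatureFlags {
    private final Map<String, Boolean> defaults; // flag name -> globally on?
    private final Set<String> betaUsers;         // cohort that sees flags early

    public FeatureFlags(Map<String, Boolean> defaults, Set<String> betaUsers) {
        this.defaults = defaults;
        this.betaUsers = betaUsers;
    }

    // A flag is on when globally enabled, or when the user is in the beta cohort.
    public boolean isEnabled(String flag, String userId) {
        return defaults.getOrDefault(flag, false) || betaUsers.contains(userId);
    }

    public static void main(String[] args) {
        FeatureFlags flags = new FeatureFlags(
                Map.of("new-checkout", false), Set.of("alice"));
        // Ship the new code path dark, then turn it on per cohort:
        System.out.println(flags.isEnabled("new-checkout", "alice")); // true
        System.out.println(flags.isEnabled("new-checkout", "bob"));   // false
    }
}
```

The point is the separation: the new code path ships to production disabled, and turning it on is a config change rather than a deploy.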
At Uber, we migrated our entire Java test codebase from JUnit4 to JUnit5 using OpenRewrite, fully automating one of the largest test migrations in the industry. The migration covered over 75,000 test classes, 400,000 tests, and 15M+ lines of code, with 1.5 million lines modified — completed in just a few months with minimal developer disruption.
This session covers the engineering approach, tooling design, and validation strategies that made this possible. Topics include customizing OpenRewrite for Uber’s monorepo, ensuring migration correctness at scale, managing rollout safety, and integrating automated refactoring into CI pipelines. Attendees will learn practical strategies for applying large-scale code transformations safely and efficiently in complex production environments.
Internally managed frameworks let companies support their developers at scale by providing reasonable defaults, managing upgrades, and handling integration with internal tooling.
However, the teams that own these frameworks carry a significant burden: maintaining the framework itself and supporting users migrating to newer versions.
The microservices team at Squarespace has addressed these concerns by adopting Moderne to help manage our internal Spring Boot starters, custom Java images, and more!
In this session we will discuss:
As codebases grow and architectures decentralize, developers spend an increasing amount of time trying to understand existing systems before making changes. Text search, IDE features, and AST tools help, but they break down when teams need more accuracy, cross-repo visibility, and deeper type-aware insight. We will explore how semantic code search—powered by Moderne’s Lossless Semantic Tree (LST)—fills this gap. We’ll examine how LSTs provide the necessary level of fidelity, how that enables developers to add new code without breaking existing systems, and why semantic search is becoming a core capability for platform and architecture teams. See practical examples of real-world searches (finding API usages, tracing dependencies, impact analysis), and get a deeper understanding of what it means to treat code search as collaboration infrastructure rather than a one-off migration tool.
There is growing interest in applying LLMs to code migration and refactoring.
This talk will share successful applications of, and experiences with, LLMs for code migrations at Google. We see evidence that LLMs can significantly reduce the time migrations take, and can lower the barriers to starting and completing migration programs.
We hope that industry practitioners will find our insights useful.
Modern Java applications use hundreds of dependencies, and each additional dependency adds complexity to the development process. Java developers need to think about how to manage both internal and external libraries.
OpenRewrite recipes are an important tool for managing Java dependencies. In this talk, we will share our experiences with OpenRewrite. We will examine key recipes that we are using to modernize our systems.
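For a flavor of what such recipes look like, a declarative OpenRewrite recipe can bundle a dependency upgrade; the recipe name, coordinates, and version below are illustrative, not the actual recipes from the talk:

```yaml
# Hypothetical declarative OpenRewrite recipe bundling a dependency upgrade.
type: specs.openrewrite.org/v1beta/recipe
name: com.example.UpgradeJacksonDatabind
displayName: Upgrade jackson-databind to a patched version
recipeList:
  - org.openrewrite.java.dependencies.UpgradeDependencyVersion:
      groupId: com.fasterxml.jackson.core
      artifactId: jackson-databind
      newVersion: 2.17.x
```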
After helping Fortune 500 companies modernize millions of lines of code across thousands of repositories, we've discovered that technical excellence isn't enough—the human and organizational challenges often determine success or failure. This session distills real-world experiences from large-scale OpenRewrite and Moderne adoptions into 10 actionable lessons that will save your organization months of effort and millions in technical debt reduction costs.
Hi, Spring fans! There's never been a better time to be a JVM / Spring developer! Spring brings the worlds of security, batch processing, NoSQL and SQL data access, AI and agentic systems, enterprise application integration, web and microservices, gRPC, GraphQL, observability, and so much more to your doorstep. In this talk we're going to look at some of the amazing opportunities that lie before the Spring Boot developer in 2026!
Maintaining large-scale distributed systems poses significant challenges due to their complexity, scale, and the risks of live changes. This talk presents a case study of a system which processes vast volumes of items in real time for billions of users. Over nine months, this system underwent a live architectural refactoring to improve maintainability, developer efficiency, and reliability. Key strategies included staged rollouts, automated testing, and impact validation, resulting in a 42% boost in developer efficiency, 87% reliability improvement, and notable gains in system performance and resource savings.
Looking forward, in this talk, we will explore the growing role of AI-driven refactoring techniques in accelerating development, enhancing reliability, and optimising performance in complex systems. This talk offers an overview of our current efforts, practical insights, and future directions for code maintenance and refactoring empowered by AI.
REST APIs often fall into a cycle of constant refactoring and rewrites, leading to wasted time, technical debt, and endless rework. This is especially difficult when you don't control the API clients.
But what if this could be your last major API refactor? In this session, we’ll dive into strategies for designing and refactoring REST APIs with long-term sustainability in mind—ensuring that your next refactor sets you up for the future.
You’ll learn how to design APIs that can adapt to changing business requirements and scale effectively without requiring constant rewrites. We’ll explore principles like extensibility, versioning, and decoupling, all aimed at future-proofing your API while keeping backward compatibility intact. Along the way, we’ll examine real-world examples of incremental API refactoring, where breaking the cycle of endless rewrites is possible.
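One of those principles, additive backward-compatible change, can be sketched in a few lines; the `OrderResponse` type and its fields are hypothetical, not from a specific API:

```java
// Additive evolution: "v2" adds an optional field with a safe default, so
// existing v1-era call sites keep compiling and behaving the same.
public record OrderResponse(String id, String status, String currency) {
    // v1-shaped constructor: old callers get a sensible default for the
    // field they do not know about yet.
    public OrderResponse(String id, String status) {
        this(id, status, "USD");
    }

    public static void main(String[] args) {
        OrderResponse v1 = new OrderResponse("o-1", "SHIPPED");        // old caller
        OrderResponse v2 = new OrderResponse("o-2", "SHIPPED", "EUR"); // new caller
        System.out.println(v1.currency()); // USD
        System.out.println(v2.currency()); // EUR
    }
}
```

The same idea applies on the wire: add optional fields with defaults, and have clients act as tolerant readers that ignore fields they do not recognize, so a change never forces a coordinated rewrite.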
This session is perfect for API developers, architects, and tech leads who are ready to stop chasing their tails and want to invest in designing APIs that will stand the test of time—so they can focus on building great features instead of constantly rewriting code.
Migrating a massive, actively developed codebase from one language to another is a daunting challenge—especially when the goal is to maximize developer productivity and long-term maintainability. At Meta, we embarked on a multi-year journey to translate tens of millions of lines of Java code, going far beyond incremental adoption to pursue a full-scale, automated migration.
In this talk, I’ll share the technical and organizational lessons learned, with a focus on the custom refactoring infrastructure we built to make it possible. I’ll discuss how we leveraged metaprogramming, static analysis, and even runtime data collection to tackle the thorniest issues—like null safety and dependency graph complexity—while minimizing developer toil and risk.
This session follows our journey at SAP from developer frustration to establishing the large scale refactoring capability. We’ll share how we moved from grassroots discovery and community workshops to building a robust business case for automation. Learn how we leveraged an InnerSource model to bridge the gap between central technology teams and product units, effectively “paving the road” for seamless upgrades, migrations, and recurring code changes.
How do you confidently deploy automated changes across thousands of repositories, in hours instead of weeks? At Netflix, we've reimagined the way code changes and dependency updates are delivered—swiftly, safely, and at scale. In this talk, we’ll share how we reimagined our approach to automated changes at scale, blending technical innovation with a focus on developer trust and safety.
We’ll dive into the systems and strategies that enabled us to move from manual, slow updates to a fully automated, zero-touch process. Key to this transformation were advanced dependency management, automated SCM changes, and comprehensive artifact observability, allowing us to track and validate every change across a vast codebase.
But automating at scale only works if engineers are confident in the process. Drawing from Netflix’s experience, we’ll explore how sharing validation results, feedback loops, and impact analysis data builds trust in automation—empowering teams to embrace rapid, platform-driven changes. We’ll discuss how shifting verification left, providing developer self-service insights, and analyzing failure impacts enable both high velocity and robust quality, even as the number of automated changes grows.
Finally, we’ll highlight future directions—such as language-agnostic tooling and proactive security measures—offering practical takeaways for any organization looking to accelerate and scale their automated change management with confidence.
The Human Side of AI: Unlocking Opportunity in the New Bottlenecks of Software Development
Artificial Intelligence is transforming software development at unprecedented speed—reshaping roles, workflows, and organizational dynamics. Yet, with every acceleration comes a new constraint.
Kiran Rouzie (Head of Technology Strategy, North America & Global Head of Organizational Change & Transformation) and Rachel Laycock (CTO) of ThoughtWorks will discuss how AI introduces fresh bottlenecks that shift where value is created in the delivery lifecycle — and how these shifts unlock opportunities for business analysts, QA professionals, and other often-overlooked roles to rise in influence. More broadly, they will reflect on the human side of AI adoption: how diversity of thought, perspective, and lived experience must shape the integration of AI into the workforce.
Far from being a story of replacement, this conversation emphasizes the collective contribution of people in guiding, validating, and amplifying AI’s potential. Kiran and Rachel will highlight how voices from diverse and minority communities are essential in shaping trustworthy AI practices, and how organizations can foster inclusion while reimagining roles.
Attendees will walk away with a deeper understanding of:
As AI agents take more and more of a leading role in crafting code, it’s suddenly become apparent that the “first user” of engineering productivity tooling will be shifting towards agents rather than individual human developers.
With a human still at the helm of a fleet of agents producing software, and less and less involved in writing individual lines of code, maximizing engineering value delivery means making every tool call faster, more token-efficient, and more accurate.
Many of our members have now brought in not one but several coding agents and foundation models in an effort to accelerate everything from feature delivery to application modernization. Attention is now shifting to ever-tighter tool-call feedback loops: each percentage gain in tool-call efficiency, multiplied across an increasingly large fleet of agents, creates an outsized impact.
We’ll cover a variety of concrete cases where tool call efficiency can be harvested immediately:
It’s amazing how we can now build working apps just by few-shot prompting LLMs. But try doing this with monorepos of tens of millions of lines of code, like the ones used for planet-scale apps that must be secure and compliant. Responsible agentic coding at scale, where thousands of engineers materialize code changes through deep sessions with AI, is genuinely challenging. We want to share Airbnb’s journey: the trade-offs we picked, our learnings, and the outcomes and productivity impact we observed.
At Tinder, modernizing our backend platform means tackling two major challenges at once: the code and the coder.
In this talk, we’ll share how Tinder built its “upgrade muscle” through repeatable, increasingly automated platform initiatives, and how OpenRewrite helped make that possible. We’ll anchor the story in two migrations: our first Java upgrade, which exposed the limits of manual coordination at scale, and our Java 17 + Spring Boot 3 effort — where we leaned hard on automation to keep change boring and predictable.
We’ll then zoom in on the Java 17 and Spring Boot 3 upgrade, starting with a critical testing dependency migration. By authoring a custom OpenRewrite recipe to migrate unit tests from JMockit to Mockito, we refactored thousands of tests across our backend codebase and completed the effort in roughly a month, without widespread manual changes or developer disruption.
We’ll close by showing how OpenRewrite has evolved into a continuous platform tool at Tinder, enabling safe, large-scale refactoring over time. Attendees will learn how to combine deterministic refactoring tools like OpenRewrite with LLMs in a complementary, cost-effective way, and how to apply that approach to build a sustainable, developer-friendly upgrade strategy.
Modern codebases don’t change one repo at a time. Shared libraries ripple across services, migrations span hundreds of targets, and simple refactors get stuck in coordination and rollout. This talk explores how AI can turn large-scale code change into a repeatable capability, with agents that plan work across multiple repos and produce reviewable pull requests. We’ll discuss what makes this safe at scale, including standardized tool interfaces to internal systems, orchestration for sequencing and dependency propagation, and quality gates that keep humans in control.
Modernizing legacy systems seemed exciting…until I found myself absorbed in rewrites, facing business blockers, and watching tech debt pile up instead of shrink. In this talk, I’ll share the biggest traps I’ve seen and experienced firsthand while working on modernization efforts in large organizations—and what helped us avoid (or recover from) them. From picking the wrong architecture patterns too early to losing stakeholder trust halfway through, I’ll walk through real examples of what not to do, along with the principles and strategies that helped us get back on track. Whether you’re breaking down a monolith or updating a business-critical system, I’ll help you steer clear of common pitfalls and make smarter, more sustainable decisions.
What This Talk Will Answer:
- What are the most common and costly mistakes teams make during architecture modernization?
- How do you choose between refactoring, rewriting, or rearchitecting a legacy system?
- How can Domain-Driven Design reduce risk and improve focus in modernization efforts?
- What strategies keep modernization aligned with business priorities and avoid loss of momentum?
- How do you avoid turning tech upgrades into long-running, low-impact projects?
At Duolingo, we used OpenRewrite and AI to upgrade Java services to a shared Golden Path—a set of clear standards designed to reduce complexity and ensure consistency. OpenRewrite handled the deterministic refactors, while AI addressed the remaining service-specific changes and build failures. This talk covers how combining the two made it possible to consistently ship working upgrades at scale.
Kafka messaging is easy to use until something breaks…
Dig into recovery, batch processing, and retries, and things get hairy pretty quickly. Most companies have these practices documented, but how do you check whether they are actually implemented? This is where Moderne comes in. With the help of AI you can easily “codify” your best practices and create recipes for them. You also need to apply other tricks, like reducing false positives with pre-filtering, grouping recipes into logical units, and finally summarizing results. Use Moderne's deterministic approach to identify gaps in the current implementation to ensure consistency, and then treat the outcomes as inputs to either AI flows or “fix” actions.
Agents have intelligence and tools. What they lack is expertise: the encoded know-how that carries your workflows, your definitions, your institutional habits. In this session, Dania will share what the Stacklok team is learning as they build skills for knowledge workers. Want to ensure your AI agents have more leverage and impact? Jump into this session.
Modern engineering organizations aren’t limited by what they can build — they’re limited by how safely and efficiently they can evolve what already exists.
At Uber’s scale, even “routine” migrations — language upgrades, API deprecations, security remediations, framework shifts — can span thousands of services across multiple monorepos. Without strong automation and coordination, these efforts create review overload, ownership ambiguity, and operational risk — slowing down critical platform and business initiatives.
In this talk, we’ll share how we built Shepherd, Uber’s large-scale migration platform, to make organizational change a structured, repeatable capability.
To date, Shepherd has transformed over 4.3 million lines of code, generated and orchestrated 30,000+ diffs across five monorepos, and enabled critical company-wide initiatives — including major Java upgrades, context propagation adoption, Redis version migrations, and other foundational platform improvements. Efforts that previously required quarters of manual coordination can now be executed in weeks, with measurable progress and controlled rollout.
Shepherd combines deterministic, compiler-aware code transformations with AI-assisted remediation, automated diff orchestration, ownership-aware routing, and integrated validation pipelines. Migrations become observable workflows — with safety gates, feedback loops, and production-aware safeguards — rather than one-off engineering campaigns.
But the broader impact extends beyond migrations.
The infrastructure built to automate change at scale — validation gating, large-scale diff management, review signal collection, and ownership-aware execution — is now forming the foundation for production-safe, agentic development workflows. By embedding AI within deterministic, validated engineering loops, we move beyond isolated code generation toward systems that can propose, validate, and safely land changes in real production environments.
This talk examines how to scale change — technically and organizationally — and what that enables for the next generation of AI-driven software development.
Software development in the era of AI is fraught with risk, especially in rapidly evolving large enterprise software organizations. In this talk Rui and Nachi share the tools Meta has implemented to mitigate risk. Specifically, Meta has developed, deployed, and enforced Diff Risk Score (DRS) and other code health metrics to tackle production risk. Equipped with a model that predicts if a code change might cause a product customer disruption, Meta developers can build features and workflows to improve almost every aspect of writing and pushing code. Today, DRS powers many risk-aware features that optimize product quality, developer productivity, and computational capacity efficiency. Notably, DRS has helped us eliminate major code freezes, letting developers ship code when they historically could not with minimal impact to customer experience and the business.
Topics and outline
Morgan Stanley has been leveraging Moderne for close to a year now as part of its suite of modernization tools to automate large-scale refactoring and technical debt remediation. Join us to learn more about our approach, our successes, and our challenges to date, and what we have planned next to continue this transformation.
AI is accelerating software development at an unprecedented pace, but many teams are discovering a frustrating reality: faster coding isn’t translating into faster delivery.
The reason is counterintuitive. When you accelerate one part of a system, you don’t improve the system… you stress it. More code becomes more review, more coordination, more cognitive load, and ultimately, less flow.
This talk connects that modern failure mode to a foundational systems insight from The Goal: local optimization usually degrades overall performance. From there, Michael Carducci shows how to apply the Theory of Constraints to modern software delivery.
Using concrete examples, you’ll see how practices like XP, DevOps, Domain-Driven Design, and Team Topologies act as targeted interventions on specific bottlenecks—and how misapplying them can make things worse.
You’ll leave with a practical mental model for identifying constraints in your system, reasoning about trade-offs, and designing for flow in an AI-accelerated world.
Gartner just declared the semantic layer a non-negotiable foundation for AI. Most of the industry responded with a blank stare.
This presentation is the answer to that blank stare.
Your AI has a dirty secret: there is no mechanism in its architecture for truth. Only probability. Every response is a hallucination — most just happen to overlap with the facts. The philosophers figured out why 2,500 years ago, and they also gave us the solution. Plato defined knowledge as justified true belief. RAG is our architecture for justification. But there's a problem — your structured data is wholly inaccessible to it, because your JSON is full of magic strings that mean nothing outside the system that generated them.
This presentation shows you how to fix that. Not with a new framework, a bigger model, or an enterprise triple store. With a discipline — the discipline of making meaning explicit. JSON-LD, RDFS, OWL, and Schema.org form a standards stack that has been quietly solving this problem for 30 years. Your AI is already fluent in it. Half the web already speaks it. Google built an empire on it.
You'll leave with a concrete understanding of what the semantic layer actually is, why it matters, and — most importantly — how to start building it this week with the APIs you already have.
Your data isn't worthless. AI just doesn't know what it means yet.
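The core move, replacing magic strings with terms from a shared vocabulary, can be sketched in a few lines. The order payload below is hypothetical; the vocabulary terms come from Schema.org:

```java
public class SemanticPayload {
    // Before: a "magic string" payload. Status code 3 means nothing outside
    // the system that generated it.
    static String plainJson() {
        return """
            { "id": "o-1", "status": 3 }""";
    }

    // After: the same fact with its meaning made explicit. The @context maps
    // terms onto the Schema.org vocabulary, so any JSON-LD-aware consumer
    // (human, crawler, or LLM) can interpret the payload without tribal
    // knowledge of the producing system.
    static String jsonLd() {
        return """
            {
              "@context": "https://schema.org",
              "@type": "Order",
              "orderNumber": "o-1",
              "orderStatus": "https://schema.org/OrderDelivered"
            }""";
    }

    public static void main(String[] args) {
        System.out.println(jsonLd());
    }
}
```

Note that nothing about the transport changed: it is still JSON over the APIs you already have, which is why this is a discipline rather than a new framework.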
AI adoption is accelerating across engineering organizations, but translating that adoption into meaningful impact remains uneven. While many teams see immediate gains in speed, those gains often stall as new bottlenecks emerge around verification, coordination, and trust. This talk shares lessons learned from observing AI adoption at scale across large engineering systems. We will explore what actually drives effective usage, why increased output does not always translate into better outcomes, and how organizations can rethink workflows to fully realize the benefits of AI.
We will cover practical strategies for driving adoption beyond surface-level usage, approaches for handling verification in AI-assisted development, and ways to measure success that go beyond activity metrics. The goal is to provide a clear, experience-backed framework for turning AI from a promising tool into a reliable driver of engineering impact.
Java has powered enterprise systems for nearly three decades—but many teams still struggle with aging codebases, performance bottlenecks, and the complexity of modern cloud environments. In this session, we explore how generative AI is transforming not just how we write code, but how we modernize and optimize entire Java workloads through practical, end-to-end workflows.
You’ll learn how to use AI-assisted tools to refactor and upgrade legacy Java code, adopt modern frameworks and architectures, and continuously improve performance. We’ll go beyond code-level changes and show how AI can help analyze runtime behavior, uncover bottlenecks, and guide optimizations using tools like Java Flight Recorder and other observability techniques.
We’ll also cover how to combine different approaches—from existing tools to custom AI Embabel agents—to build a repeatable workflow that connects code changes with measurable performance improvements in production systems.
Managing thousands of components across millions of lines of code is a massive challenge. At Spotify, we moved from manual multi-month migrations to a fleet-first mindset, using tools like OpenRewrite and AI-powered background coding agents to run automated, daily code refactorings and infrastructure optimisations.
This talk covers how we scaled to over one million automated changes, our evolution from deterministic recipes to agentic loops, and how we use test automation and LLM judges to maintain quality.
Automation and AI-powered development tools are dramatically increasing how fast we can write code, but most teams aren’t shipping any faster. The bottleneck has shifted.
In this talk, we’ll explore the growing gap between code generation and code delivery, and why traditional engineering practices are struggling to keep up. While developers can now produce changes at unprecedented speed, teams are still constrained by long testing cycles, slow and inconsistent code reviews, and branching and release processes designed for a different era.
We’ll examine the systemic challenges that emerge when scaling AI-assisted development across teams, including maintaining code quality, ensuring confidence in automated changes, and avoiding process bottlenecks that negate productivity gains.
More importantly, this session will focus on practical strategies to close the gap. We’ll cover how to rethink testing through automation and continuous validation, how smaller and more frequent changes can reduce risk and accelerate feedback, and how modern release and branching approaches can enable faster, safer delivery.
Attendees will leave with a clearer understanding of why speed gains from AI often stall before production and what concrete changes are needed to truly accelerate software delivery end-to-end.
In tech teams it's a constant firefight. We react. Then we react to the reaction… the cycle continues. In all this noise, in all this chaos, how do we move forward? How do we remain proactive?
A great leader must be an enabler for the team. At times this means insulating the team from the noise. At other times it means improving the environment for the team. At all times, however, it requires setting clear priorities and conditions for success.
This session is focused on the art of moving forward in even the noisiest environments.
Agentic DevOps means AI agents are now active members of your engineering team — writing code, catching security issues, and resolving production incidents without waiting for a human to start the work. But getting agents to work reliably across a large engineering org is harder than it looks. In this session, Microsoft, GitHub, and Moderne show how Moderne acts as the deterministic harness for agents like GitHub Copilot — giving them the precision tooling they need to make changes you can trust across thousands of repos.
We'll walk through real examples showing what this looks like in practice — and what it means for how your engineering org operates.