Feature flags are a powerful tool that can help software teams ship faster, safer, and smarter. In this session, you will learn what feature flags are, how to use them effectively, and the many benefits they offer. We will cover topics such as:
Who is this session for?
This session is for anyone who wants to learn more about feature flags and how to use them to improve their software development process.
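At its core, a feature flag is just a runtime conditional that gates a code path. A minimal, dependency-free sketch in Java (the `FeatureFlags` class and the flag name are hypothetical, for illustration; production systems back the flag store with a management service so flags can be toggled without a deploy):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Minimal in-memory flag store; real systems fetch flag state from a
// central service so toggles take effect without a redeploy.
class FeatureFlags {
    private final Map<String, Boolean> flags = new ConcurrentHashMap<>();

    void set(String name, boolean enabled) {
        flags.put(name, enabled);
    }

    boolean isEnabled(String name) {
        // Unknown flags default to off, so new code ships "dark".
        return flags.getOrDefault(name, false);
    }
}

public class CheckoutDemo {
    public static void main(String[] args) {
        FeatureFlags flags = new FeatureFlags();
        flags.set("new-checkout-flow", true);

        // The flag gates which path runs, so the new flow can be
        // enabled gradually and rolled back instantly if it misbehaves.
        if (flags.isEnabled("new-checkout-flow")) {
            System.out.println("new checkout flow");
        } else {
            System.out.println("legacy checkout flow");
        }
    }
}
```

The key property is that the decision happens at runtime rather than at build time, which is what enables dark launches, gradual rollouts, and instant rollback.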
At Uber, we migrated our entire Java test codebase from JUnit4 to JUnit5 using OpenRewrite, fully automating one of the largest test migrations in the industry. The migration covered over 75,000 test classes, 400,000 tests, and 15M+ lines of code, with 1.5 million lines modified — completed in just a few months with minimal developer disruption.
This session covers the engineering approach, tooling design, and validation strategies that made this possible. Topics include customizing OpenRewrite for Uber’s monorepo, ensuring migration correctness at scale, managing rollout safety, and integrating automated refactoring into CI pipelines. Attendees will learn practical strategies for applying large-scale code transformations safely and efficiently in complex production environments.
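For teams attempting a similar move, OpenRewrite publishes a composite JUnit 4-to-5 recipe in its rewrite-testing-frameworks module; a declarative activation might look roughly like this (recipe name as published in OpenRewrite's docs at the time of writing, so verify against your version; the `com.example` wrapper name is a placeholder):

```yaml
# rewrite.yml -- activate the JUnit 4 -> 5 composite recipe
type: specs.openrewrite.org/v1beta/recipe
name: com.example.MigrateToJUnit5
displayName: Migrate our tests to JUnit 5
recipeList:
  - org.openrewrite.java.testing.junit5.JUnit4to5Migration
```

Running `mvn rewrite:run` (or the Gradle equivalent) with rewrite-testing-frameworks on the plugin's recipe classpath then applies the migration across a module.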
Internally managed frameworks provide companies with the ability to support their developers at scale by providing reasonable defaults, managing upgrades, and managing integration with internal tooling.
However, the teams managing these frameworks carry a significant burden: maintaining the framework itself and supporting users migrating to newer versions of it.
The microservices team at Squarespace has addressed these concerns by adopting Moderne to help manage our internal Spring Boot starters, custom Java images, and more!
In this session we will discuss:
As codebases grow and architectures decentralize, developers spend an increasing amount of time trying to understand existing systems before making changes. Text search, IDE features, and AST tools help, but they break down when teams need more accuracy, cross-repo visibility, and deeper type-aware insight. We will explore how semantic code search—powered by Moderne’s Lossless Semantic Tree (LST)—fills this gap. We’ll examine how LSTs provide the necessary level of fidelity, how that enables developers to add new code without breaking existing systems, and why semantic search is becoming a core capability for platform and architecture teams. See practical examples of real-world searches (finding API usages, tracing dependencies, impact analysis), and get a deeper understanding of what it means to treat code search as collaboration infrastructure rather than a one-off migration tool.
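As a small, concrete taste of type-aware search, OpenRewrite's built-in `FindMethods` recipe matches call sites against resolved types rather than raw text; a declarative sketch (the method pattern and the `com.example` wrapper name are illustrative):

```yaml
type: specs.openrewrite.org/v1beta/recipe
name: com.example.FindLoggerInfoCalls
displayName: Find org.slf4j.Logger.info(..) call sites
recipeList:
  - org.openrewrite.java.search.FindMethods:
      methodPattern: "org.slf4j.Logger info(..)"
```

Because matching happens on the resolved type, this finds calls made through subtypes and skips unrelated methods that merely share the name, a distinction plain text search cannot make.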
There is growing interest in applying LLMs to code migration and refactoring.
This talk will share successful applications of, and our experience with, LLMs for code migrations at Google. We see evidence that the use of LLMs can significantly reduce the time needed for migrations, and can lower the barriers to starting and completing migration programs.
We hope that industry practitioners will find our insights useful.
Modern Java applications utilize hundreds of dependencies, and each additional dependency adds complexity to the application development process. Java developers need to think about how to manage both internal and external libraries.
OpenRewrite recipes are an important tool for managing Java dependencies. In this talk, we will share our experiences with OpenRewrite. We will examine key recipes that we are using to modernize our systems.
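As one illustration of the kind of recipe discussed here, OpenRewrite's dependency-upgrade recipe can pin a library across Maven and Gradle builds; a sketch (the coordinates and version below are examples, not a recommendation, and the `com.example` name is a placeholder):

```yaml
type: specs.openrewrite.org/v1beta/recipe
name: com.example.UpgradeJackson
displayName: Keep jackson-databind on a current patch line
recipeList:
  - org.openrewrite.java.dependencies.UpgradeDependencyVersion:
      groupId: com.fasterxml.jackson.core
      artifactId: jackson-databind
      newVersion: 2.17.x
```

Encoding the upgrade as a recipe, rather than as a tribal-knowledge checklist, is what makes it repeatable across many repositories.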
After helping Fortune 500 companies modernize millions of lines of code across thousands of repositories, we've discovered that technical excellence isn't enough—the human and organizational challenges often determine success or failure. This session distills real-world experiences from large-scale OpenRewrite and Moderne adoptions into 10 actionable lessons that will save your organization months of effort and millions in technical debt reduction costs.
Hi, Spring fans! There's never been a better time to be a JVM / Spring developer! Spring brings the worlds of security, batch processing, NoSQL and SQL data access, AI and agentic systems, enterprise application integration, web and microservices, gRPC, GraphQL, observability, and so much more to your doorstep. In this talk we're going to look at some of the amazing opportunities that lie before the Spring Boot developer in 2026!
Maintaining large-scale distributed systems poses significant challenges due to their complexity, scale, and the risks of live changes. This talk presents a case study of a system which processes vast volumes of items in real time for billions of users. Over nine months, this system underwent a live architectural refactoring to improve maintainability, developer efficiency, and reliability. Key strategies included staged rollouts, automated testing, and impact validation, resulting in a 42% boost in developer efficiency, 87% reliability improvement, and notable gains in system performance and resource savings.
Looking forward, we will explore the growing role of AI-driven refactoring techniques in accelerating development, enhancing reliability, and optimising performance in complex systems. This talk offers an overview of our current efforts, practical insights, and future directions for code maintenance and refactoring empowered by AI.
REST APIs often fall into a cycle of constant refactoring and rewrites, leading to wasted time, technical debt, and endless rework. This is especially difficult when you don't control the API clients.
But what if this could be your last major API refactor? In this session, we’ll dive into strategies for designing and refactoring REST APIs with long-term sustainability in mind—ensuring that your next refactor sets you up for the future.
You’ll learn how to design APIs that can adapt to changing business requirements and scale effectively without requiring constant rewrites. We’ll explore principles like extensibility, versioning, and decoupling, all aimed at future-proofing your API while keeping backward compatibility intact. Along the way, we’ll examine real-world examples of incremental API refactoring, where breaking the cycle of endless rewrites is possible.
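One extensibility principle alluded to above is the "tolerant reader": clients read only the fields they need and ignore the rest, so the server can add fields without breaking anyone. A dependency-free sketch, assuming a response already deserialized into a map (the field names are hypothetical):

```java
import java.util.Map;

public class TolerantReaderDemo {
    // Represents a deserialized response as a map of fields; a real
    // client would get this from a JSON library configured to ignore
    // properties it does not recognize.
    static String customerName(Map<String, Object> response) {
        // Read only the field we need; unknown fields such as
        // "loyaltyTier" below are simply never touched.
        Object name = response.get("name");
        return name instanceof String s ? s : "<unknown>";
    }

    public static void main(String[] args) {
        // A v2 server added "loyaltyTier"; this v1-style client
        // keeps working unchanged.
        Map<String, Object> v2Response =
                Map.of("name", "Ada", "loyaltyTier", "gold");
        System.out.println(customerName(v2Response));
    }
}
```

Paired with additive-only schema evolution on the server side, this pattern is what lets an API grow without forcing a breaking version bump.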
This session is perfect for API developers, architects, and tech leads who are ready to stop chasing their tails and want to invest in designing APIs that will stand the test of time—so they can focus on building great features instead of constantly rewriting code.
The age of hypermedia-driven APIs is finally upon us, and it’s unlocking a radical new future for AI agents. By combining the power of the Hydra linked-data vocabulary with semantic payloads, APIs can become fully self-describing and consumable by intelligent agents, paving the way for a new class of autonomous systems. In this session, we’ll explore how mature REST APIs (level 3) open up groundbreaking possibilities for agentic systems, where AI agents can perform complex tasks without human intervention.
You’ll learn how language models can understand and interact with hypermedia-driven APIs, and how linked data can power autonomous decision-making. We’ll also examine real-world use cases where AI agents use these advanced APIs to transform industries—from e-commerce to enterprise software. If you’re ready to explore the future of AI-driven systems and how hypermedia APIs are the key to unlocking it, this session will give you the knowledge and tools to get started.
Migrating a massive, actively developed codebase from one language to another is a daunting challenge—especially when the goal is to maximize developer productivity and long-term maintainability. At Meta, we embarked on a multi-year journey to translate tens of millions of lines of Java code, going far beyond incremental adoption to pursue a full-scale, automated migration.
In this talk, I’ll share the technical and organizational lessons learned, with a focus on the custom refactoring infrastructure we built to make it possible. I’ll discuss how we leveraged metaprogramming, static analysis, and even runtime data collection to tackle the thorniest issues—like null safety and dependency graph complexity—while minimizing developer toil and risk.
This session follows our journey at SAP from developer frustration to establishing a large-scale refactoring capability. We’ll share how we moved from grassroots discovery and community workshops to building a robust business case for automation. Learn how we leveraged an InnerSource model to bridge the gap between central technology teams and product units, effectively “paving the road” for seamless upgrades, migrations, and recurring code changes.
How do you confidently deploy automated changes across thousands of repositories, in hours instead of weeks? At Netflix, we've reimagined the way code changes and dependency updates are delivered—swiftly, safely, and at scale. In this talk, we’ll share how we rebuilt our approach to automated changes at scale, blending technical innovation with a focus on developer trust and safety.
We’ll dive into the systems and strategies that enabled us to move from manual, slow updates to a fully automated, zero-touch process. Key to this transformation were advanced dependency management, automated SCM changes, and comprehensive artifact observability, allowing us to track and validate every change across a vast codebase.
But automating at scale only works if engineers are confident in the process. Drawing from Netflix’s experience, we’ll explore how sharing validation results, feedback loops, and impact analysis data builds trust in automation—empowering teams to embrace rapid, platform-driven changes. We’ll discuss how shifting verification left, providing developer self-service insights, and analyzing failure impacts enable both high velocity and robust quality, even as the number of automated changes grows.
Finally, we’ll highlight future directions—such as language-agnostic tooling and proactive security measures—offering practical takeaways for any organization looking to accelerate and scale their automated change management with confidence.
The Human Side of AI: Unlocking Opportunity in the New Bottlenecks of Software Development
Artificial Intelligence is transforming software development at unprecedented speed—reshaping roles, workflows, and organizational dynamics. Yet, with every acceleration comes a new constraint.
Kiran Rouzie (Head of Technology Strategy, North America & Global Head of Organizational Change & Transformation) and Rachel Laycock (CTO) of ThoughtWorks will discuss how AI introduces fresh bottlenecks that shift where value is created in the delivery lifecycle — and how these shifts unlock opportunities for business analysts, QA professionals, and other often-overlooked roles to rise in influence. More broadly, they will reflect on the human side of AI adoption: how diversity of thought, perspective, and lived experience must shape the integration of AI into the workforce.
Far from being a story of replacement, this conversation emphasizes the collective contribution of people in guiding, validating, and amplifying AI’s potential. Kiran and Rachel will highlight how voices from diverse and minority communities are essential in shaping trustworthy AI practices, and how organizations can foster inclusion while reimagining roles.
Attendees will walk away with a deeper understanding of:
As AI agents take on more of a leading role in crafting code, it has become apparent that the “first user” of engineering productivity tooling is shifting from individual human developers to agents.
With a human still at the helm of a fleet of agents, but increasingly less involved in writing individual lines of code, maximizing engineering value delivery means making every tool call faster, more token-efficient, and more accurate.
Many of our members have now brought in not one but several coding agents and foundation models in an effort to accelerate everything from feature delivery to application modernization. Attention will now shift to tightening the tool-call feedback loops ever further: each percentage gain in tool-call efficiency, multiplied across an increasingly large fleet of agents, creates an outsized impact.
We’ll cover a variety of concrete cases where tool call efficiency can be harvested immediately:
It’s amazing how we can now build working apps just by few-shot prompting LLMs. But try doing this with monorepos of tens of millions of lines of code, like the ones used for planet-scale apps that must be secure and compliant. Responsible agentic coding at scale, where thousands of engineers materialize code changes by engaging in deep sessions with AI, is genuinely challenging. We want to share Airbnb’s journey: the trade-offs we picked, our learnings, and the outcomes and productivity impact we observed.
At Tinder, modernizing our backend platform means tackling two major challenges at once: the code and the coder.
In this talk, we’ll share how Tinder built its “upgrade muscle” through repeatable, increasingly automated platform initiatives, and how OpenRewrite helped make that possible. We’ll anchor the story in two migrations: our first Java upgrade, which exposed the limits of manual coordination at scale, and our Java 17 + Spring Boot 3 effort — where we leaned hard on automation to keep change boring and predictable.
We’ll then zoom in on the Java 17 and Spring Boot 3 upgrade, starting with a critical testing dependency migration. By authoring a custom OpenRewrite recipe to migrate unit tests from JMockit to Mockito, we refactored thousands of tests across our backend codebase and completed the effort in roughly a month, without widespread manual changes or developer disruption.
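For readers who want to attempt something similar, rewrite-testing-frameworks ships a JMockit-to-Mockito composite recipe; a declarative activation might look like this (recipe name as published at the time of writing, so check the current OpenRewrite docs, and note this is the off-the-shelf recipe rather than Tinder's custom one; the `com.example` wrapper is a placeholder):

```yaml
type: specs.openrewrite.org/v1beta/recipe
name: com.example.JMockitToMockito
displayName: Migrate unit tests from JMockit to Mockito
recipeList:
  - org.openrewrite.java.testing.jmockit.JMockitToMockito
```

Starting from the published recipe and layering codebase-specific fixes on top is usually cheaper than authoring the whole migration from scratch.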
We’ll close by showing how OpenRewrite has evolved into a continuous platform tool at Tinder, enabling safe, large-scale refactoring over time. Attendees will learn how to combine deterministic refactoring tools like OpenRewrite with LLMs in a complementary, cost-effective way, and how to apply that approach to build a sustainable, developer-friendly upgrade strategy.
Modern codebases don’t change one repo at a time. Shared libraries ripple across services, migrations span hundreds of targets, and simple refactors get stuck in coordination and rollout. This talk explores how AI can turn large-scale code change into a repeatable capability, with agents that plan work across multiple repos and produce reviewable pull requests. We’ll discuss what makes this safe at scale, including standardized tool interfaces to internal systems, orchestration for sequencing and dependency propagation, and quality gates that keep humans in control.
Modernizing legacy systems seemed exciting…until I found myself absorbed in rewrites, facing business blockers, and watching tech debt pile up instead of shrink. In this talk, I’ll share the biggest traps I’ve seen and experienced firsthand while working on modernization efforts in large organizations—and what helped us avoid (or recover from) them. From picking the wrong architecture patterns too early to losing stakeholder trust halfway through, I’ll walk through real examples of what not to do, along with the principles and strategies that helped us get back on track. Whether you’re breaking down a monolith or updating a business-critical system, I’ll help you steer clear of common pitfalls and make smarter, more sustainable decisions.
What This Talk Will Answer:
-What are the most common and costly mistakes teams make during architecture modernization?
-How do you choose between refactoring, rewriting, or rearchitecting a legacy system?
-How can Domain-Driven Design reduce risk and improve focus in modernization efforts?
-What strategies keep modernization aligned with business priorities and avoid loss of momentum?
-How do you avoid turning tech upgrades into long-running, low-impact projects?
At Duolingo, we used OpenRewrite and AI to upgrade Java services to a shared Golden Path—a set of clear standards designed to reduce complexity and ensure consistency. OpenRewrite handled the deterministic refactors, while AI addressed the remaining service-specific changes and build failures. This talk covers how combining the two made it possible to consistently ship working upgrades at scale.
Managing thousands of components across millions of lines of code is a massive challenge. At Spotify, we moved from manual multi-month migrations to a fleet-first mindset, using tools like OpenRewrite and AI-powered background coding agents to run automated, daily code refactorings and infrastructure optimisations.
This talk covers how we scaled to over one million automated changes, our evolution from deterministic recipes to agentic loops, and how we use test automation and LLM judges to maintain quality.
Kafka messaging is easy to use until something breaks…
Dig into recovery, batch processing, and retries, though, and things get hairy pretty quickly. Most companies have these practices documented, but how do you check whether they are actually implemented? This is where Moderne comes in. With the help of AI you can easily “codify” your best practices and create recipes for them. You also need to apply other tricks, like reducing false positives with pre-filtering, grouping recipes into logical units, and finally summarizing results. Use Moderne's deterministic approach to identify gaps in the current implementation to ensure consistency, then treat the outcomes as inputs to either AI flows or “fix” actions.
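To ground the retry practice such a recipe might look for, here is a dependency-free sketch of a bounded retry-with-backoff loop in Java (the attempt counts, delays, and simulated flaky task are all illustrative, not any company's actual consumer code):

```java
import java.util.concurrent.Callable;
import java.util.concurrent.atomic.AtomicInteger;

public class RetryDemo {
    // Runs the task up to maxAttempts times with exponential backoff,
    // rethrowing the last failure once the retry budget is exhausted.
    static <T> T withRetry(int maxAttempts, long baseDelayMs,
                           Callable<T> task) throws Exception {
        Exception last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return task.call();
            } catch (Exception e) {
                last = e;
                if (attempt < maxAttempts) {
                    // Exponential backoff: base * 2^(attempt - 1).
                    Thread.sleep(baseDelayMs << (attempt - 1));
                }
            }
        }
        throw last;
    }

    public static void main(String[] args) throws Exception {
        AtomicInteger calls = new AtomicInteger();
        // Simulated flaky "message processing" that succeeds on try 3.
        String result = withRetry(5, 1, () -> {
            if (calls.incrementAndGet() < 3) {
                throw new IllegalStateException("transient failure");
            }
            return "processed";
        });
        System.out.println(result + " after " + calls.get() + " attempts");
    }
}
```

A recipe checking for this practice would search for consume paths that lack any such bounded-retry structure, which is exactly the kind of gap analysis the abstract describes.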
Building high impact AI agents comes down to two things: great prompts (that deliver context) and great tools (that support action in real environments).
In this session we will explore how enterprises are using the Model Context Protocol to optimize prompts and tools in concert. There will be specific examples of how leaders are designing and running evals, distilling context, refining tool selection and more to drive reproducible, exceptional results.
Modern engineering organizations aren’t limited by what they can build — they’re limited by how safely and efficiently they can evolve what already exists.
At Uber’s scale, even “routine” migrations — language upgrades, API deprecations, security remediations, framework shifts — can span thousands of services across multiple monorepos. Without strong automation and coordination, these efforts create review overload, ownership ambiguity, and operational risk — slowing down critical platform and business initiatives.
In this talk, we’ll share how we built Shepherd, Uber’s large-scale migration platform, to make organizational change a structured, repeatable capability.
To date, Shepherd has transformed over 4.3 million lines of code, generated and orchestrated 30,000+ diffs across five monorepos, and enabled critical company-wide initiatives — including major Java upgrades, context propagation adoption, Redis version migrations, and other foundational platform improvements. Efforts that previously required quarters of manual coordination can now be executed in weeks, with measurable progress and controlled rollout.
Shepherd combines deterministic, compiler-aware code transformations with AI-assisted remediation, automated diff orchestration, ownership-aware routing, and integrated validation pipelines. Migrations become observable workflows — with safety gates, feedback loops, and production-aware safeguards — rather than one-off engineering campaigns.
But the broader impact extends beyond migrations.
The infrastructure built to automate change at scale — validation gating, large-scale diff management, review signal collection, and ownership-aware execution — is now forming the foundation for production-safe, agentic development workflows. By embedding AI within deterministic, validated engineering loops, we move beyond isolated code generation toward systems that can propose, validate, and safely land changes in real production environments.
This talk examines how to scale change — technically and organizationally — and what that enables for the next generation of AI-driven software development.
Software development in the era of AI is fraught with risk, especially in rapidly evolving large enterprise software organizations. In this talk Rui and Nachi share the tools Meta has implemented to mitigate risk. Specifically, Meta has developed, deployed, and enforced Diff Risk Score (DRS) and other code health metrics to tackle production risk. Equipped with a model that predicts whether a code change might cause a customer-facing product disruption, Meta developers can build features and workflows to improve almost every aspect of writing and pushing code. Today, DRS powers many risk-aware features that optimize product quality, developer productivity, and computational capacity efficiency. Notably, DRS has helped us eliminate major code freezes, letting developers ship code at times when they historically could not, with minimal impact to customer experience and the business.
Topics and outline
Morgan Stanley has been leveraging Moderne for close to a year now as part of its suite of modernization tools to automate large-scale refactoring and technical debt remediation. Join us to learn more about our approach, our successes, and our challenges to date, and what we have planned next to continue this transformation.
Technical debt is often treated as a code-level problem, addressed in the cracks of time between features. In practice, it behaves more like a hidden tax on innovation, quietly reducing the amount of time and energy teams can devote to new ideas.
In this session, we examine data on the prevalence of technical debt across modern engineering organizations, showing that debt is not an exception but the norm. We connect this debt to everyday forms of developer toil that result from existing debt and actively contribute to its continued accumulation.
Drawing on quantitative signals, we show how unpaid technical debt progressively erodes innovation time, even when teams appear busy and delivery metrics remain stable. Rather than focusing on strategies for paying down debt after the fact, this talk reframes technical debt as a systemic outcome of incentives, planning assumptions, and work design.
To realize meaningful returns on AI investments, leadership must take accountability and ownership of establishing best practices, enabling engineers, measuring impact, and ensuring proper guardrails are in place.
When prompting practice and reflexive AI use are driven from the top down, engineers can align on the highest value use cases and experience peak productivity gains. When coupled with DX's AI Measurement Framework, leaders can gain a clear picture of AI's true impact, identify the real bottlenecks in the SDLC that can be augmented with AI, and drive improvement.
In this session, Justin Reock, Deputy CTO at DX, and author of DX's Guide to AI Assisted Engineering, Advanced Prompting Guide, and Strategic AI Leadership Playbook, will explain what the most effective leaders of AI enabled engineering organizations are doing to drive satisfactory utilization, augmentation, and psychological safety across their teams. Based on interviews, use cases, and data, leaders will walk away with an understanding of how to best lead their teams through mature AI rollouts.
The most dangerous sentence in product development today is “let's add AI to this.” In ops-heavy environments — where software meets physical operations, store associates, warehouses, and supply chains — this mistake is even more costly. I've spent years deploying AI products where the end users aren't sitting at desks but standing on warehouse floors, scanning freight, and making decisions under time pressure.
This talk covers how to run a value-first audit of your product to find where AI actually changes outcomes, why the best AI features are often invisible to users, how to design AI that replaces broken workflows rather than decorating them, and when to kill an AI project even if the model performs perfectly. I'll share equal time on what we shipped and what we killed — and why the kills mattered more.