What sounds like a mechanical task, rewriting C and C++ into Rust with the help of AI, quickly turns into an exercise in archaeology. Large codebases are not syntax trees but sedimentary layers of forgotten assumptions, undefined behavior, and historical accidents. Rust forces those hidden contracts into the open, and every time it does, something breaks. Sometimes even the original author would need a moment to remember why the code worked in the first place.

Memory safe, except where we gave up

I stumbled over a job opening on LinkedIn, and boy, this one was quite interesting. It was posted by Galen Hunt, Distinguished Engineer at Microsoft, and it reads less like a vacancy and more like a declaration of intent. Eliminate every line of C and C++ from Microsoft by 2030. Not reduce. Not modernize. Eliminate. The plan is equally bold. Combine algorithms and AI to rewrite Microsoft’s largest codebases, guided by a North Star that sounds almost deliberately provocative. One engineer, one month, one million lines of code. Anyone who has ever stared at a legacy system long enough knows that this sentence alone should trigger a dozen warning lights. And that is exactly why it is fascinating.

Translation is easy, semantics are not

At first glance, rewriting C or C++ into Rust sounds like a mechanical task. Parse the code, build an AST, understand control flow and data flow, map constructs, generate Rust equivalents. With enough infrastructure, enough compute, and enough patience, that part is tractable. At least, it looks tractable on paper.

In reality, there are days when I open my own code from the day before and need a moment to remember what on earth I was thinking. And that code was written by me, with full context, intent, and mental model available at the time. Now imagine trying to reconstruct intent from a codebase that has passed through dozens, sometimes hundreds, of engineers over decades, each leaving behind perfectly rational decisions that only made sense in the moment they were made.

Software systems are not syntax trees. They are sedimentary layers of assumptions, shortcuts, undefined behavior, performance hacks, and historical accidents. Much of that behavior was never written down. It lives in the gaps between what the language promises and what the compiler happened to do for the last fifteen years. C and C++ code often works not because it is correct, but because everyone involved learned where the landmines are and carefully steps around them. Knowledge is tribal. Comments are vague. Tests encode behavior without explaining why that behavior exists.

Rust does not allow that kind of informal contract. It forces assumptions into the open. Lifetimes must be explicit. Aliasing must be justified. Ownership must be proven. What used to be “don’t touch this or it breaks” becomes “explain why this is safe.”
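A minimal sketch of what that shift looks like. The names here, `Scratch` and `format_into`, are hypothetical: they stand in for a common C pattern where a function hands back a pointer into an internal buffer and callers are simply expected to know not to hold it across the next call. In Rust, that unwritten rule becomes a lifetime the compiler enforces.

```rust
// Hypothetical example: in the C original, the "contract" was a comment
// saying "do not keep this pointer across calls". In Rust, the contract
// is the borrow itself.

struct Scratch {
    buf: [u8; 64],
}

impl Scratch {
    // The returned &str borrows `self`, so the borrow checker rejects any
    // caller that keeps it alive across a later mutable use of the buffer.
    fn format_into(&mut self, value: u32) -> &str {
        let s = format!("{value}");
        let bytes = s.as_bytes();
        self.buf[..bytes.len()].copy_from_slice(bytes);
        std::str::from_utf8(&self.buf[..bytes.len()]).unwrap()
    }
}

fn main() {
    let mut scratch = Scratch { buf: [0u8; 64] };
    let first = scratch.format_into(7);
    assert_eq!(first, "7");
    // Uncommenting the next two lines makes the program fail to compile:
    // the compiler, not tribal knowledge, rejects holding `first` across
    // the second call.
    // let second = scratch.format_into(8);
    // println!("{first} {second}");
}
```

The point is not that the Rust version is cleverer. It is that the assumption which used to live in a comment now lives in the type system, where a translator has to state it explicitly or fail to compile.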

Every time that happens, something breaks.

Sometimes it breaks because the original code was wrong and nobody noticed. Sometimes it breaks because the original code was relying on undefined behavior that just happened to work. Sometimes it breaks because the translation is technically correct but semantically different in a way no test ever covered. And sometimes it breaks because nobody, including the original author, really remembered why the code was written that way in the first place. This is the uncomfortable truth behind large scale translation. The hard part is not understanding what the code does today. The hard part is understanding what it was never supposed to do, but somehow ended up doing anyway, and which other parts of the system now quietly depend on. That is where “mechanical translation” stops being mechanical, and where even the best infrastructure and the smartest AI will repeatedly run into human history encoded as software.

Undefined behavior, now with consequences

One of the first things that will go wrong is that Rust will surface bugs that were always there. Code that relied on struct layout, alignment, integer overflow, aliasing, or lifetime assumptions that were technically undefined but practically stable. In C and C++, these bugs often survive for decades. They get performance tuned, wrapped, reused, and depended on. In Rust, they become compilation errors, or worse, logic changes that pass the compiler but subtly alter behavior. At that point, the translation system has three choices. Redesign the code, which is slow and risky. Preserve the behavior using unsafe, which defeats part of the goal. Or paper over it and hope tests catch it, which is how you accumulate future outages. Multiply that decision by millions of lines of code.
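Signed integer overflow is the textbook case. In C, `INT_MAX + 1` is undefined behavior that usually "just wraps", and twenty years of code may depend on the wrap. A translator has to pick a semantics and say so. A sketch of two of the choices, using invented function names:

```rust
// Hypothetical translation choices for a C counter that silently wrapped.

fn legacy_counter_step(x: i32) -> i32 {
    // Choice 1: preserve the observed two's-complement wrap, but make it
    // explicit. The old behavior, now on purpose.
    x.wrapping_add(1)
}

fn strict_counter_step(x: i32) -> Option<i32> {
    // Choice 2: surface the overflow as a real error instead of hiding it.
    // The bug, now visible — and now the caller's problem.
    x.checked_add(1)
}

fn main() {
    assert_eq!(legacy_counter_step(i32::MAX), i32::MIN);
    assert_eq!(strict_counter_step(i32::MAX), None);
}
```

Neither choice is free. The first freezes an accident into a specification. The second changes every call site. That is the decision that has to be made millions of times.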

Unsafe is not the exception, it is the pressure valve

Rust advertises memory safety, but any serious systems programmer knows that unsafe is not an edge case. It is a pressure valve. Every time the borrow checker says no, but the engineer knows better, unsafe becomes tempting. Every time performance matters more than elegance, unsafe becomes expedient. Every time the surrounding ecosystem is still C or C++, unsafe becomes unavoidable. In a translation effort of this scale, unsafe will concentrate exactly where you would expect. FFI boundaries, OS interfaces, custom allocators, concurrency primitives, lock free data structures, shared memory, performance critical loops. None of that is accidental. It is structural. The danger is not that unsafe exists. The danger is that it spreads. Once unsafe appears in one place, it tends to attract more unsafe around it, until the supposed safety of the surrounding code is largely theoretical. That is how you end up with Rust code that is technically memory safe by language definition, but operationally just as fragile as the C it replaced.
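The FFI boundary is the clearest example of unsafe being structural rather than accidental. Even calling something as harmless as libc's `abs` requires an unsafe block, because the C ABI carries no Rust-side guarantees. A sketch, where the wrapper name `c_abs` is invented:

```rust
// The C ABI promises nothing the borrow checker can verify, so the call
// site is necessarily unsafe. The goal is to keep that boundary thin and
// wrap it exactly once.
extern "C" {
    fn abs(input: i32) -> i32;
}

/// Safe wrapper. The one invariant worth documenting: C's abs(INT_MIN)
/// is itself undefined behavior, so we refuse that input up front.
fn c_abs(x: i32) -> i32 {
    assert_ne!(x, i32::MIN, "abs(i32::MIN) is UB in C");
    // SAFETY: abs has no preconditions beyond the i32::MIN case checked above.
    unsafe { abs(x) }
}

fn main() {
    assert_eq!(c_abs(-5), 5);
    assert_eq!(c_abs(5), 5);
}
```

Every such boundary is a place where unsafe is legitimate. The structural question is whether it stays a thin wrapper like this, or whether the unsafe leaks outward into the callers.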

Memory safe, except where we gave up.

Performance will quietly drive bad decisions

Another failure mode is performance, not catastrophic, but subtle and corrosive. Safe Rust is often fast, but not always fast in the same way as hand tuned C or C++. When translation introduces even small regressions, teams will notice. Latency budgets will be exceeded. Throughput will dip. Someone will open a ticket. That is when unsafe starts to feel justified. It will be added with good intentions. Just this one loop. Just this one allocation path. Just until we understand the regression better. Those justifications age badly. Over time, the translated codebase risks becoming a patchwork of safe Rust shells around unsafe cores, optimized to behave exactly like the original C, quirks and all. At that point, the safety story becomes harder to explain, even if it is technically defensible.
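The "just this one loop" pattern tends to look like this sketch, with invented names: bounds checks removed from a hot path via unsafe indexing, justified by an invariant that is checked somewhere else.

```rust
// Hypothetical "just this one loop": correct today, fragile the day the
// length invariant is weakened somewhere far away from this function.

fn sum_pairs(a: &[u32], b: &[u32]) -> u32 {
    assert_eq!(a.len(), b.len(), "sum_pairs requires equal lengths");
    let mut total = 0u32;
    for i in 0..a.len() {
        // SAFETY: i < a.len() == b.len(), guaranteed by the assert above.
        total = total
            .wrapping_add(unsafe { a.get_unchecked(i).wrapping_add(*b.get_unchecked(i)) });
    }
    total
}

fn main() {
    assert_eq!(sum_pairs(&[1, 2], &[3, 4]), 10);
}
```

Note that safe indexing (`a[i]`) would very likely compile to the same machine code here, since the compiler can see the bounds. That is exactly the problem with performance-motivated unsafe: it often buys nothing, but it costs a safety argument forever.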

AI will amplify mistakes as efficiently as it amplifies progress

AI is a multiplier. That is both its strength and its danger. When the underlying model of the system is correct, AI accelerates change. When the model is incomplete or wrong, AI accelerates the propagation of mistakes. At the scale described here, small misunderstandings do not stay small for long. An incorrect assumption about aliasing. A misinterpreted lifetime. A subtle difference in integer semantics. These things can ripple through millions of lines of generated code before a human ever notices. This is why algorithmic guardrails matter more than model quality. And it is also why this effort will be limited not by how smart the AI is, but by how good the invariants are. If the invariants are weak, the AI will confidently do the wrong thing, very fast.
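One cheap but effective guardrail against this amplification is differential checking: run the translated function against a reference model of the old behavior across a sweep of inputs, so a wrong assumption fails loudly before it propagates. A sketch with invented names:

```rust
// Hypothetical guardrail: the reference model encodes the old C semantics
// (8-bit truncating arithmetic); the translated function must agree on
// every input. A mistranslation — say, saturating instead of wrapping
// arithmetic — would fail this sweep immediately.

fn original_semantics(x: u8) -> u8 {
    x.wrapping_mul(2).wrapping_add(1)
}

fn translated(x: u8) -> u8 {
    x.wrapping_mul(2).wrapping_add(1)
}

fn main() {
    for x in 0..=u8::MAX {
        assert_eq!(translated(x), original_semantics(x), "diverged at input {x}");
    }
}
```

For an 8-bit domain the sweep is exhaustive; for wider types it becomes sampling or property-based testing. Either way, the guardrail is only as good as the reference model, which is another way of saying the invariants are the bottleneck, not the model.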

Organizational friction is inevitable

Even if the technology works, the organization will push back. Teams trust code they wrote more than code that was generated. They trust bugs they understand more than bugs they do not. They trust ownership more than abstraction. When a translated Rust system misbehaves, the first question will not be “is the code correct.” It will be “who do I blame.” That is not a cynical observation, it is a practical one. Accountability matters in production systems. Machine generated code muddies that accountability unless the tooling makes provenance and intent painfully clear. If that clarity is missing, adoption will stall, no matter how impressive the demo.

What success actually looks like

Success here does not look like zero unsafe. That is a fantasy. Success looks like containment. Unsafe code exists, but it is isolated, documented, reviewed, and boring. It lives at the edges, not in the heart of the system. Success looks like translated systems that engineers are willing to touch, debug, and extend. Not because they are perfect, but because they are understandable. Success looks like fewer security advisories, fewer memory corruption bugs, fewer late night incidents caused by ancient pointer arithmetic. And yes, success still includes plenty of places where someone shrugged and said, “we gave up here, for now.”
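Contained unsafe, in practice, tends to mean a pattern like this sketch (the name `prefix` is invented): the unsafe operation exists, but its single precondition is checked once at a small, documented boundary, and everything outside that boundary is ordinary safe code.

```rust
// Hypothetical example of "isolated, documented, reviewed, and boring":
// one precondition, checked once, with the SAFETY comment next to the
// unsafe block it justifies.

/// Returns the first `n` elements of `data`.
/// Panics if `n > data.len()`.
fn prefix(data: &[u8], n: usize) -> &[u8] {
    assert!(n <= data.len(), "prefix out of range");
    // SAFETY: n <= data.len() was just checked, so the raw slice is in
    // bounds and lives as long as `data`.
    unsafe { std::slice::from_raw_parts(data.as_ptr(), n) }
}

fn main() {
    let bytes = [1u8, 2, 3, 4];
    assert_eq!(prefix(&bytes, 2), &[1, 2]);
}
```

The unsafe block is one line, its justification is one sentence, and a reviewer can audit it in isolation. That is what boring looks like, and boring is the success criterion.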

Why this is still worth doing

Despite all of this, the attempt matters. C and C++ are a permanent tax on organizations at this scale. Security incidents, maintenance cost, onboarding friction, and fear of change all compound over time. Doing nothing is not a neutral option. If this effort fails, it will still teach the industry more than a dozen conference talks about AI coding assistants ever will. If it succeeds, even partially, it will change how we think about technical debt, legacy systems, and what machines can realistically help us do.

Just do not confuse “memory safe” with “problem solved.”