Why I am (Temporarily) Leaving Claude Code for Antigravity

I switched from Claude Code (Opus 4.5) to Google Antigravity (Gemini 3 Pro) for the raw performance and generous usage limits. While Claude Code offers better workflow rigor for CLI users, Antigravity's Gemini 3 Pro model is currently the superior engine for programming tasks.

A Religious Claude User Crosses the Aisle

Claude Code as the Oasis in a Vibe-Coding Desert

Six months ago, when I decided to finally cave and try the leading AI code editors (Windsurf and Cursor) and command-line clients (Claude Code), there was a clear winner for me: Claude Code. Nothing could beat the quality, developer experience (DX), and, most importantly, value for money that Claude Code running on Opus 4.5 brought to the table.

I then spent the last six months building every type of application my mind could come up with through this powerful CLI vehicle named Claude Code, gaining with every step a deeper understanding of how to use it and of its quirks. Anthropic, in return, responded with new improvements to the tool and (arguably more important) new AI models heavily skewed towards programming.

By the time Sonnet 4.0 came out, it was pretty much clear: we were counting down the days until programmers became a commodity. I started to rely more and more on its capabilities, learning how to take advantage of every new MCP server I crossed paths with, agentic behaviors, parallel workflows, and even context management, which I consider to be at the top of what you should learn when starting out with Claude Code.

Every new interaction also revealed an ever-increasing dependency on the tool, as I kept veering into more and more uncharted territory. In all fairness, Claude Code was always up to the task, especially on the newer models, reinforcing the sense that programming has been commoditized for good. However, I never believed the real core of software development was programming in the first place.

Architecture, design, and creativity are not easily replicated by the token generators we call generative AI. Programming is a hard skill: it is the vehicle through which all these other skills culminate. Although it can still make or break the final product, it does not do so in the same way the other skills do.

The Honeymoon Phase: Infinite Tokens Before the Enshittification

I have just told you an incredible story about how Claude Code changed forever the way I, along with loads of other enthusiasts, develop software. But, as you have already read, this post is about how I left Claude Code for Google Antigravity. What finally led me to take action?

There were multiple reasons, but the main one was the enshittification of Claude Code. The introduction of weekly limits had never affected me, until it did. My workflow has been pretty simple and linear for the last six months: I stayed on the Pro subscription, exhausted my limits three times on most days of the week, and never hit the weekly cap. This past week, it finally happened: I hit the weekly limit on a Wednesday.

Let me introduce some additional but crucial details:

  • I never switched away from the Opus 4.5 model.

  • During this week and the previous three, I never used the Claude chat interface. The entire subscription was dedicated to Claude Code. The reason was Gemini 3 Pro: it is such an incredible model that it took over that part of my daily workflow.

  • I have used the Max subscription before and found it was too much for me. It was far more expensive, and my usage would sit at around 60% at most. In essence, I did not consider it as good value for money as the Pro version.

Unfortunately for Anthropic, my work does not stop just because their compute tap stops pouring towards my account. So I was in the market for an alternative. The choice was easy: I was ready to give Google Antigravity a try, especially after the great impression Gemini 3 Pro had left on me, to the point where I would only use Claude Code and never touch Claude chat.

The Problem Was Not Opus 4.5

You could tell me: "the Opus 4.5 model simply burns more compute, so Anthropic cannot provide the same limits". To which I say: "ever heard of enshittification?"

Enshittification is the degradation of the customer experience in pursuit of profit. This could be read as a classic case from that playbook, except that, for me, the answer lies deeper, in the state of the AI industry in general.

Simple math reveals the strategy: AI companies are racing to provide the cheapest possible model that still delivers incredible performance. Were they to describe their ideal model to us, I imagine it would be powerful enough that it could only run in the cloud, and efficient enough that their entire business model could finally become profitable.

I am by no means a doomer, and I want AI to succeed, because I believe in what is on the other side: the world will become a better place. But I am also aware that these companies currently have a big monetary hole being plugged by heavy investment. Therefore, worsening the value-for-money equation is an obvious move, especially considering how generous Claude Code is in comparison to API usage.

Fluidity vs. Rigor: The Problem with Unstructured Agents

The next steps were clear: I installed Google Antigravity on a Windows computer. This detail is important, because I expect a different experience on a different OS. It was clear with Claude Code that using Ubuntu (either directly or through WSL on Windows) yielded noticeably better performance; when I started out, Claude Code did not even run in the native Windows command line.

The Benchmark: An Automated Audio Editor

The benchmark was a simple automated audio editor. The requirements were modest, but they gave me something to measure against Claude Code, as I had developed a similar application with it before. You could argue it was not a head-to-head comparison, because by the time I built the app on Antigravity, I already had a better grasp of the requirements, implementation details, and other quirks that come with time.

Here are the requirements (a rough sketch of the commands this pipeline wraps follows the list):

  • Next.js on the front-end
  • Node.js over Express.js on the back-end
  • OpenAI Whisper to detect filler words
  • An FFmpeg microservice to handle everything else (conversion between different video and audio formats, audio operations such as denoising and compression, ...)
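
To make that scope concrete, here is a rough sketch of the kind of commands the Whisper and FFmpeg services end up wrapping. The file names, model size, and filter settings are illustrative assumptions, not details from the actual project.

    # Extract mono 16 kHz audio from an uploaded video (a common input format for speech models)
    ffmpeg -i upload.mp4 -vn -ac 1 -ar 16000 audio.wav

    # Transcribe with word-level timestamps so filler words ("um", "uh") can be located
    whisper audio.wav --model base --output_format json --word_timestamps True

    # Denoise and apply light dynamic-range compression to the edited track
    ffmpeg -i audio.wav -af "afftdn,acompressor" cleaned.wav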

The Side-By-Side Experience

First Impressions of a Familiar IDE Layer

It goes without saying that Google Antigravity is closer to Cursor and Windsurf (forks of the VS Code IDE with a sidebar and some extra fluff) than to Claude Code (a literal command-line tool, even if you can technically set it up with the same kind of sidebar as the other tools mentioned here), so the experience will differ by design. We must keep these differences in mind and focus on the results and DX.

As I have used Cursor in the past, I was not surprised by the layout of the IDE. It was familiar and inviting. I liked the addition of actual software design documents by default, which are certainly easier to review than Claude Code's Markdown files and plans read off the command line. With Claude Code, most of the time I would need to copy the terminal output into a new text file inside VS Code and then make my remarks from there. This is a point in favor of Antigravity.

The Workflow Gap Between Planning and Implementation

When interacting with Antigravity's AI agent (which I used only with Gemini 3 Pro, to get as much raw power out of this experiment as possible), you are taken to an initial planning stage. It works quite well and is similar to Claude Code's plan mode, not to mention the presence of planning files outside the terminal, which was the big plus highlighted above.

But once I started the implementation and needed some back and forth to polish the requirements and design, I noticed the first shortcoming: I could not switch back to planning in a defined manner. With Claude Code, I could just hit Shift + Tab and the tool and I would be on the same page, with not a single line of code written until we had figured out the implementation details.

Antigravity does not provide the defined mode I had grown accustomed to with Claude Code. I would see it writing code when I wanted to organize my thoughts with it, decide on the next implementation steps, or debug the issues that inevitably come along with a software development project.

This led to the impression of a broken workflow, which I could eventually tame by rejecting its changes and letting it know that we do not do things like that around here. If this is the main tool Google is using for software development internally, I would expect a bit more polish in this regard. So this is a point for Claude Code.

Tool Use Reliability and the Git Cache Failure

The third differentiator is autonomous command execution. Online reports are filled with horror stories of deleted databases and exposed environment keys, so caution is heavily advised.

Nonetheless, I consider letting these agents run some commands part of the experience, albeit under careful supervision. As a Windows user, I would have liked Antigravity to use WSL, my preferred terminal for most tasks, but by default it uses the Windows command line. I am not sure whether this can be changed, but this is where I saw what looked like a bug. I asked Antigravity to add some files to .gitignore and remove the ones already committed to git. Antigravity thought it had done so, but it had not, and I ended up having to run the command myself (which worked).
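
For anyone who hits the same snag, the manual fix comes down to the standard untrack-then-ignore sequence sketched below; the path is a placeholder, not one from my project.

    # Stop tracking files that were committed before being added to .gitignore
    # ("uploads/" is an illustrative path)
    echo "uploads/" >> .gitignore
    git rm -r --cached uploads/        # removes the files from the index only; they stay on disk
    git commit -m "Stop tracking uploads/ and ignore it going forward"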

I am not saying something like this would never happen with Claude Code, but it highlights another point I felt: Claude Code seems more fluid. The proximity of the terminal proved to be a better place to develop from, closer to the trenches. Sure, having AI models integrated directly into the IDE, without constantly copying and pasting to interact with them, is a definite DX advantage, but on the other side there is a closeness you just do not get anywhere else.

I also noticed it was easier in Claude Code to interrupt the agent when the results were heading in a direction I did not expect, or to add extra information as needed. Once I hit Enter in Antigravity, it seemed harder to bring it back into a discussion or to stop its work, even with a stop button nearby.

Parallel Workflows & Context Management

Another difference was parallel workflows. This could be my lack of experience with Antigravity speaking, but I could not work out how to launch multiple instances on the same repository. Now, let me be clear: I am not a fan of doing this even with Claude Code, as I confuse myself a lot when working with git worktrees, but I know some of you just love launching five instances of Claude Code at the same time. And sometimes I do that too, as it makes working on different parts of the same application easier.

These two points (the difficulty of interrupting the agent and the lack of parallel instances) can be a big deal when you remember tiny improvements as you go. I am pretty sure my repository would not be so polluted with Claude Code, because I would have remembered to launch another instance and set up .gitignore from the start.
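
For readers unfamiliar with the pattern, this is roughly what the worktree setup for parallel Claude Code instances looks like; the branch and directory names here are made up for illustration.

    # Create a second working copy on its own branch, then run another
    # Claude Code instance inside it (names are illustrative)
    git worktree add -b feature/audio-cleanup ../audio-editor-cleanup
    cd ../audio-editor-cleanup
    claude                                   # second instance, isolated from the main checkout

    # When the branch is merged, remove the extra worktree
    git worktree remove ../audio-editor-cleanup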

The Gemini 3 Pro Performance Advantage

With that said, Gemini seemed faster at development, to the point where I would probably need multiple instances of Claude Code to obtain similar results in the same amount of time. This could be the AI model shining through: Gemini 3 Pro is just a beast at everything I keep throwing at it. Additionally, I did not need to clear the context even once, whereas with Claude Code you know you need to plan well ahead for that. On the same app, Claude Code could easily have auto-compacted at least five times, which would have required me to keep better documentation and plan better. I guess this is the life of using a model that puts 1M tokens of context at my disposal.

Lastly, Antigravity is in its honeymoon stage, so there were no interruptions for cooldowns. I used the Google AI Pro subscription, so your mileage may vary with other plans. Safe to say this is temporary, but right now (January 2026) you get a lot more value for money out of Antigravity. Given that what led to this experiment was the weekly Claude Code limit, that is a big deal.

Anthropic Lost a Customer Today (But Maybe Not Forever)

Given the arguments laid out throughout this post, I have to admit: the jury is out on Claude Code. The biggest advantage that made me choose Claude over ChatGPT in the past was Claude Code itself (besides a better experience with Anthropic's models for my specific daily tasks). I know Codex exists and I intend to try it sometime soon, but Claude Code never made me want to switch things up.

Now, even with its quirks, Google's Antigravity seems like the better deal. Considering the heavy promotions Google is running on its subscriptions, the virtually unlimited usage you get (for now), and the raw performance of Gemini 3 Pro, I am sorry to admit it: I am making the switch from Claude Code to Antigravity. And since Claude Code was the main thing keeping me with Anthropic, as I already use Gemini for my daily chat interactions, I might abandon my Claude subscription, at least for now.

When enshittification eventually kicks in on Google's side, I am not sure this will still be the case. So it is not a goodbye to Claude; it is a bye for now.


Building in public. Follow my journey at InvisiblePuzzle, where I document how I'm building B2B automation tools while working full-time.

Tags: #googleantigravity #claudecode #gemini3pro #aicodeeditor #softwareengineering #devops #productivity
