How I used ChatGPT's $20 Plus plan to fix a nightmare bug - fast

I used GPT-5.2-Codex to save a customer and upgrade my product in less than an hour - for only $20

ZDNET's key takeaways

  • A $20 ChatGPT Plus plan can handle real-world bug fixes.
  • Codex helped identify both code bugs and hosting issues.
  • AI saved time by fixing code and drafting support emails.

When you're a lone programmer, you both cherish and dread tech support tickets.

You cherish them because interactions with users often result in a better understanding of what your code is doing out there in the wild. You dread them because sometimes those interactions result in fairly large homework assignments where you need to fix broken code.

The initial problem

Last week, I got one such ticket. A user wrote in to tell me that she couldn't get my security tool to block access to her website. I maintain an open source WordPress plugin that is designed to make a website private. The plugin is free, but my expenses are mostly supported by a series of add-ons.

I sometimes get complaints that the plugin won't block access. The solution is almost always one of two fixes: turn on compatibility mode, which changes blocking behavior for certain themes, or turn off caching, because cached pages ignore changes in protection status.

Also: I've tested free vs. paid AI coding tools - here's which one I'd actually use

I sent back a response to her, but she told me neither fix worked. We went back and forth for a while, but none of my normal tips seemed to work. Credit to the user: she stuck with me and answered all my questions. Sometimes, users just give up, and you're left wondering what might be going on out there. But a diligent user who's willing to be a partner in finding a solution is like gold.

We don't need to go into too much detail about the problem because this is mostly a ChatGPT story. But I eventually identified that the setting that turned on blocking wouldn't stick, a situation that occurred only on the few websites with a certain configuration related to the robots.txt file. It traced back to a feature I added back in the fall, but it was behavior I had never seen in my test environment.

That said, at least one other user had run into the issue: someone had left a one-star review on the WordPress plugin repository complaining about this exact symptom. That user wasn't helpful, having never reached out to me. He just happily slapped a one-star "this sucks" review on the main place I promote my plugin and went away. I'm sure it dissuaded a bunch of would-be downloaders, but at least I had validation that the bug I'd found was real.

(Disclosure: Ziff Davis, ZDNET's parent company, filed an April 2025 lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)

ChatGPT to the rescue

In September, when I initially added the feature that would cause this user difficulty, I was using the $200-per-month ChatGPT Pro plan. That's because I kept running out of the Plus plan's capacity in about five hours and didn't want to wait days to continue. In October, once I was done with the programming project, I reverted to the much more reasonable $20-per-month ChatGPT Plus plan. That's what I'm signed up for now.

Also: I got 4 years of product development done in 4 days for $200, and I'm still stunned

I was pleased to discover that the $20-per-month plan is more than enough for occasional bug fixes and feature addition runs. This entire process was done using the ChatGPT Plus plan.

I have kept my ChatGPT integration in my VS Code development environment, so to get back into the code and diagnose this bug, I just opened VS Code and started typing into the Codex pane. I selected GPT-5.2-Codex, which is OpenAI's latest and greatest coding model, and went to work.

My first attempt was to simply share the user's complaint with ChatGPT and ask the AI to scan the code to see if it could find an error. That did not work because the user's initial complaint didn't contain enough information to diagnose the problem. The user said that she was not knowledgeable about web administration, so I asked for permission to look at her site myself.

A few minutes of looking around showed me that when one of my newest features, an AI scraping defense capability, was enabled on her site, no other changes in the same tab group would stick. You could click a checkbox and hit save, but it never did save. I hadn't seen the problem before, but it became clear the issue had to do with her server configuration.

Once I identified the problem, I asked ChatGPT to fix it. What I found particularly interesting was that before Codex would make any changes, it reminded me that my code had a settings export feature and asked me to get those settings from the user's system. It wanted to double-check how the settings data actually looked before touching anything.

That was not a mindless AI request. It was a fairly sophisticated request from "someone" fully versed in the overall architecture of my security product. It was not something a first-year programmer would think to ask for, but rather something an experienced developer might check before making changes.

Also: 10 ChatGPT Codex secrets I only learned after 60 hours of pair programming with it

I went ahead and produced the settings JSON file and fed it to Codex. Once it looked at that, it went through my code and identified a usage pattern that would, indeed, cause the buggy behavior. Fixing that bug required some engineering, and while I mixed spinach with hummus and lime pepper seasoning for dinner that night, ChatGPT rewrote my code and fixed the bug.
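
If you're curious what that sort of pre-flight check actually amounts to, it's nothing fancy: load the exported settings and confirm the values look the way the code expects before anything gets changed. Here's a rough sketch of the idea in Python. The option names are placeholders I made up for illustration, not my plugin's actual settings keys.

```python
import json
import sys

# The option names below are made-up placeholders for illustration,
# not the plugin's actual settings keys.
EXPECTED_TYPES = {
    "block_access": bool,        # the master "make this site private" switch
    "compatibility_mode": bool,  # alternate blocking behavior for some themes
    "ai_scrape_defense": bool,   # the newer robots.txt-related feature
}

def sanity_check(path: str) -> None:
    """Load an exported settings file and flag anything oddly shaped."""
    with open(path, encoding="utf-8") as f:
        settings = json.load(f)

    for key, expected in EXPECTED_TYPES.items():
        if key not in settings:
            print(f"MISSING   {key}")
        elif not isinstance(settings[key], expected):
            # e.g., the string "1" where a boolean was expected: exactly the
            # kind of shape mismatch worth spotting before changing any code
            print(f"BAD TYPE  {key} = {settings[key]!r}")
        else:
            print(f"OK        {key} = {settings[key]!r}")

if __name__ == "__main__":
    sanity_check(sys.argv[1] if len(sys.argv) > 1 else "settings-export.json")
```

In my case, Codex did the equivalent inspection itself; the point is simply that checking the shape of the data first is cheap insurance against "fixing" the wrong thing.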

I recreated the user's configuration locally and tested the plugin both before and after the fix. The revised code did solve the problem, so I sent it back to the user.

But it was supposed to work

At this point, you might expect me to tell you the user was satisfied and all was good in Webland once again. But no. While the setting now stuck, so protection could be turned on, the user's site still appeared to be unprotected.

I still had access to the user's site, so I went back in and found that some pages were protected, but others were not. All the caching plugins on the site had been turned off, so it wasn't site-specific caching that was the issue.

I updated Codex with all this information, and the AI made a bunch of suggestions, ranging from the ridiculous (delete the entire server and start again) to the incredibly helpful. I've long since learned that coding AIs like to throw out wacky suggestions, which I assume is to make sure we humans are paying attention. Once those options are eliminated, the AIs tend to become more grounded.

It also made a bunch of logging and tracking suggestions that would have worked if I were the server administrator with shell access to that machine. But since the user was very unfamiliar with server tech, and her hosting provider didn't offer shell access to its users, those options weren't possible.

Also: How to use ChatGPT: A beginner's guide to the most popular AI chatbot

To its credit, once I explained to ChatGPT that those approaches wouldn't be possible, it came up with a new strategy. It asked me to append a test query parameter, something like ?mps_hide=1, to the URLs of the pages that wouldn't hide.

This, it explained, would force a fresh page to be served rather than a cached copy. If the URL with the test parameter was blocked, but the URL without it was not, that would confirm there was, indeed, caching somewhere between the server and the browser.

It did, and there was. Some system-level caching, well beyond the control of my code, was serving old versions of pages that should have been behind my security plugin. Because those pages were never served through my plugin, it never had the chance to block them.
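
If you ever want to run that same kind of check yourself, it boils down to fetching the page twice, once as-is and once with a throwaway query parameter, and comparing the two responses along with any cache headers they carry. Here's a rough sketch in Python using only the standard library. The URL is a placeholder, and ?mps_hide=1 is just the illustrative parameter mentioned above, not something my plugin requires.

```python
import urllib.error
import urllib.request

# Placeholder URL: substitute a page that should be blocked but isn't.
PAGE = "https://example.com/some-page-that-should-be-private/"

# Headers that commonly reveal a cache sitting in front of the site.
CACHE_HEADERS = ("age", "x-cache", "cf-cache-status", "x-litespeed-cache", "cache-control")

def probe(url: str) -> None:
    req = urllib.request.Request(url, headers={"User-Agent": "cache-probe/0.1"})
    try:
        resp = urllib.request.urlopen(req)
        status = resp.status
    except urllib.error.HTTPError as err:
        # A 401/403 here is actually good news: the page is being blocked.
        resp, status = err, err.code
    body = resp.read().decode("utf-8", errors="replace")

    print(f"\n{url}\n  status: {status}")
    for name in CACHE_HEADERS:
        value = resp.headers.get(name)
        if value:
            print(f"  {name}: {value}")
    # Crude content check: does the response look like a login/block screen?
    print("  looks blocked:", status in (401, 403) or "password" in body.lower())

# The bare URL may be answered by a cache somewhere along the way; the URL
# with the throwaway parameter should force a fresh response from the server.
probe(PAGE)
probe(PAGE + "?mps_hide=1")
```

If the bare URL comes back unblocked (or with headers like age or x-cache: HIT) while the parameterized URL comes back blocked, you have your proof that a cache is answering on the server's behalf.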

The only fix was to disable caching at the hosting level. This was out of my control, out of the AI's control, and even out of the user's control. The problem had to be escalated to the hosting provider's tech support team.

Signed, sealed, delivered

Here's where this story veers away from coding, even though I did all this inside my VS Code development environment.

Here was my next challenge: I had to explain to a decidedly non-technical user how to convey a very technical requirement to a historically unresponsive and fairly uncooperative hosting provider's tech support team. I have history with this hosting provider. It once tried to get me fired from ZDNET because I wouldn't give it a five-star review, back two editors-in-chief ago. So, yeah. History.

What I needed to do was give my user the text of a tech support request she could pass on to the oh-so-friendly folks at the hosting provider. I didn't want any of my residual resentment to appear anywhere in this text. I also didn't want to spend the hour or so it would take to carefully write a technical document for them to work from.

So, right inside my development environment, I told Codex that a non-technical user needed to send a tech support request to the hosting provider, and that the request needed to include enough detail to get the job done.

Not only did Codex write that for me, but it included what it called "proof of diagnosis," effectively showing the hosting provider's tech support team how we had proved that the problem was host-level caching and couldn't be fixed on our end.

I sent the text to the user. She sent it to the hosting provider. A day later, she reported that everything was working again.

The Plus plan

ChatGPT's $20-per-month Plus plan is great for occasional bug fixing and the sort of work described in this article. While you won't be able to get it to build a whole new product for you, it's a very welcome addition to a VS Code workflow, especially if you already have ChatGPT Plus for some other purpose.

The new GPT-5.2-Codex was particularly helpful and mostly stayed on track. While my interactions with the user played out over days, the actual time I spent working on the code and the solution clocked in at under an hour, thanks to Codex's help. Oh, and the customer left me a much-appreciated five-star review.

Also: How ChatGPT actually works (and why it's been so game-changing)

Have you tried using an AI coding assistant like Codex to debug a real-world problem under time pressure? Did it actually save you time, or did it add overhead and false leads? How comfortable are you relying on AI for user-facing communication, like writing a support email that has to be both accurate and diplomatic? 

And when the root cause turns out to be something outside your code, like host-level caching, what's your go-to approach for proving the diagnosis and getting a hosting provider to act? Share your experiences and tips in the comments below.


You can follow my day-to-day project updates on social media. Be sure to subscribe to my weekly update newsletter, and follow me on Twitter/X at @DavidGewirtz, on Facebook at Facebook.com/DavidGewirtz, on Instagram at Instagram.com/DavidGewirtz, on Bluesky at @DavidGewirtz.com, and on YouTube at YouTube.com/DavidGewirtzTV.
