
How We Used AI to Analyze and Improve a Public Codebase


Exploring Code with AI: A Practical Test

Today, AI tools are changing how we interact with software projects. Instead of reading through dozens of files and manually setting up environments, developers can now ask an AI assistant to explain code, resolve issues, and even enhance functionality.

In this post, we document a practical test: we selected a public codebase, opened it in Cursor, and used AI to understand, run, and improve the application. We had no prior knowledge of the project—just a straightforward attempt to see how far AI could assist in reviewing and working with unfamiliar code.

The process included code analysis, Docker setup, debugging, and applying enhancements—all supported by AI prompts. Here’s how it went.

Step 1: Getting Started with the Code

We started by downloading a public code repository to our local machine. The goal was to keep it as raw as possible—no prep, no walkthroughs, just the code and Cursor.

Once downloaded, we opened the project in Cursor and let it load everything.

Step 2: Letting AI Analyze the Project

With the project ready, we gave Cursor a simple prompt:

“Analyze the code and let me know what it does, what technologies it uses, what are the patterns followed.”

Within moments, Cursor returned a helpful breakdown of the project’s structure, frameworks, and logic. This gave us the clarity we needed to move forward—no guessing, no digging through every file manually.

Step 3: Running the App with Docker

Next, we asked Cursor to containerize and run the app:

“Containerize the application and run it using Docker. Give me the .env file in the prompt to create the file in case you don’t have access to create it.”

Cursor generated all the necessary files. After a few retries, the app finally started running—completely isolated and ready to test in Docker.
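The post doesn't show the files Cursor produced, and it never names the project's stack, so the following is purely an illustrative sketch of what the "containerize" prompt typically yields—a minimal Dockerfile plus a .env template—assuming a hypothetical Node.js app that listens on port 3000:

```dockerfile
# Hypothetical Dockerfile sketch — the stack, file names, and port
# are assumptions for illustration, not the project's actual setup.
FROM node:20-alpine
WORKDIR /app

# Install dependencies first so Docker can cache this layer
COPY package*.json ./
RUN npm ci

# Copy the rest of the source and declare the app port
COPY . .
EXPOSE 3000
CMD ["node", "server.js"]
```

A matching .env file (with placeholder values) would then be passed in at run time, e.g. `docker build -t app .` followed by `docker run --env-file .env -p 3000:3000 app`.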

Step 4: Fixing Errors

Unsurprisingly, the app didn’t run perfectly on the first try. We saw errors, and instead of troubleshooting them manually, we pasted them directly into Cursor and asked for help.

This turned out to be one of the most efficient parts of the process. Cursor quickly identified the causes and offered fixes—some of which worked immediately, others took a few iterations. But the process was smooth and fast.

Pro tip: Don’t rely on AI to find the errors itself—paste the exact error message in. It saves time.

Step 5: Enhancing the Code

Once the app was running, we moved on to improvements. We asked Cursor to review the code and suggest enhancements. Specifically:

  • Add documentation

  • Make the frontend responsive

  • Improve Docker setup

Cursor applied all of these updates directly. The result? A smoother, more modern app experience without hours of manual effort.

Step 6: Trying a Core Code Change

To push the experiment further, we had Cursor add password hashing for better security. It wrote the code, applied the changes to existing users, and even handled the migration.

It took a few tries, and one of the changes broke the app temporarily, but Cursor helped fix that too. The result: secured passwords and a functional UI.
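Cursor's actual changes aren't shown in the post, so here is only a rough sketch of what such a password-hashing migration can look like, using Python's standard library (`hashlib.pbkdf2_hmac`). The user record shape and the "no `$` means plain text" detection rule are assumptions made for illustration:

```python
import hashlib
import hmac
import os
from typing import Optional

def hash_password(password: str, salt: Optional[bytes] = None) -> str:
    """Return 'salt$hash' using PBKDF2-HMAC-SHA256 with a random salt."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return f"{salt.hex()}${digest.hex()}"

def verify_password(password: str, stored: str) -> bool:
    """Recompute the hash with the stored salt and compare in constant time."""
    salt_hex, _ = stored.split("$")
    candidate = hash_password(password, bytes.fromhex(salt_hex))
    return hmac.compare_digest(candidate, stored)

def migrate(users: list) -> None:
    """One-off migration: hash any password still stored in plain text.

    Assumes hashed values always contain '$' and plain-text ones never do.
    """
    for user in users:
        if "$" not in user["password"]:
            user["password"] = hash_password(user["password"])
```

The salt is stored alongside the digest so each user's hash is unique even for identical passwords, and `hmac.compare_digest` avoids timing leaks during verification.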

Wrapping Up

This was never about building something from scratch. It was about testing a public project, using AI to understand and run it, and seeing how far we could take it with almost no context.

What we did:

  • Opened a public repo

  • Analyzed it with Cursor

  • Ran the app in Docker

  • Fixed issues

  • Enhanced the UI and backend

  • Applied a real security improvement

Simple, fast, and surprisingly effective.

Working with unfamiliar code doesn’t have to be slow or overwhelming. This test highlights how AI can serve as a practical partner—not just for writing code, but for understanding, running, and improving existing projects. When used thoughtfully, tools like Cursor can turn exploration into progress, helping developers move faster with more confidence, even in codebases they’ve never seen before.

