Codeium in VS Code

For this article, I'm specifically speaking about Codeium in Visual Studio Code.

Codeium also offers the Windsurf Editor and several other tools, along with multiple pricing tiers. I have only used the free tier, which matches my commitment level as a hobby developer.

 

My background

Before I dive into my thoughts on Codeium, let me give you some context, because when it comes to AI coding assistants, the source of the opinion matters.

I spent 16 years as a professional software developer, working across a wide range of industries and technologies. My career started in old-school inventory control systems for a large parts supplier. After a year I shifted to the video game industry, where I spent a decade building engines and tools to accelerate development for over 50 published titles. After that, I spent five years consulting on modern web and API applications at World Wide Technology, working with a variety of clients and tech stacks.

Then, five years ago, I gave up my 'Tech Card' and moved into engineering management. I no longer write code for a living, but I support those who do. And while the fundamentals of coding never truly fade, stepping away from daily development has made me a little rusty.

A year ago, I got back into hobby development and started using Codeium almost daily. So, with experience spanning both deep technical work and the challenges of stepping away and returning, I'll share what I think works well, what doesn't, and how AI coding assistance has changed how I build software.

Usage scenarios

Over time, I've found that I use Codeium in four broad ways, each with varying levels of effectiveness and efficiency gains.

1) I have written code like this before.

For repetitive coding tasks, inline AI suggestions and code completion are a huge time-saver. Suppose I add another object from an SDK that typically requires 7–8 function calls to adjust common parameters after instantiation: Codeium picks up on this pattern and suggests the calls automatically. A few quick Tab presses let me accept them without manually recalling every function name. Similarly, Codeium helps catch patterns in variable usage. If I initialize an object but forget to destroy it, Codeium often suggests the missing cleanup, preventing potential memory leaks.

In addition, if I am 'thinking via comments' and type out a few lines describing what I plan to do, the auto-completion can often get me most of the way there. If it's something short and sweet, like performing a set of actions on every object in an array, it can work quite well. This is like a mini version of item 4 below, where you'll see the same thoughts echoed.
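To make this concrete, here's a toy illustration of the 'thinking via comments' pattern (my own example, not actual Codeium output): the comment is what I type, and the function body is the kind of completion a few Tab presses can accept.

```python
# What I type: a comment describing the intent.
# For each enemy in the list: apply damage, drop any that died,
# and tally the score for everything removed.

# What the completion engine can usually fill in from that comment:
def resolve_hits(enemies, damage):
    score = 0
    survivors = []
    for enemy in enemies:
        enemy["hp"] -= damage
        if enemy["hp"] <= 0:
            score += enemy["score"]
        else:
            survivors.append(enemy)
    return survivors, score
```

The suggestion isn't always this clean, but for a tight loop over a collection it frequently is.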

Usefulness: High | Frequency: High | Effectiveness: High

2) Examine this code.

This category usually falls under "Why doesn't this work?" followed by pasting in some code or including file context. While similar to asking a technical question, this approach provides something concrete for Codeium to analyze.

Here, AI has a huge advantage over human eyes. Finding a misplaced semicolon or an unclosed bracket is no longer a keyboard-smashing moment of insanity.

Codeium is also useful for suggesting alternative methods, syntax improvements, or optimizations I might not be aware of. This has been especially helpful when translating code between languages: when I know how to do something in one language but suspect there's a more idiomatic way to do it in another, or when I get something working first and then need to optimize it later.
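As a generic illustration of what 'more idiomatic' can mean (my own example, not a captured Codeium suggestion): the first loop is a literal translation of a C-style habit, and the second is the rewrite an assistant will often propose.

```python
values = [1, 2, 3, 4, 5, 6]

# A literal translation of a C-style loop: index-based and verbose.
squares = []
for i in range(len(values)):
    if values[i] % 2 == 0:
        squares.append(values[i] ** 2)

# The more idiomatic Python an assistant might suggest instead.
squares_idiomatic = [v ** 2 for v in values if v % 2 == 0]
```

Both produce the same result; the second just reads like the language it's written in.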

Usefulness: High | Frequency: Low | Effectiveness: High

3) I have a technical question.

When I generally know what I want to do but am unsure of the exact syntax or best practices for a language or framework, AI chat can be incredibly helpful. It's especially great for tasks like building regex: a well-phrased prompt can get you exactly what you need.
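For example, a prompt like "match a US ZIP code with an optional +4 suffix" tends to come back with something along these lines (my own reconstruction, not saved chat output):

```python
import re

# Five digits, optionally followed by a hyphen and four more digits.
ZIP_RE = re.compile(r"^\d{5}(?:-\d{4})?$")
```

The win isn't that the regex is hard; it's that the round trip is faster than re-deriving the anchors and character classes by hand.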

That said, AI isn't always reliable. The better I phrase my prompt using known coding patterns, the better my results. However, I've encountered plenty of cases where Codeium confidently suggests functions or syntax that simply don't exist. When called out, it often admits, "You're right! That doesn't exist!", but getting a valid alternative is hit or miss. Sometimes it gives up entirely. Other times it spirals into making up more nonexistent features. More often than not, it eventually provides correct syntax, but the result may not actually achieve what I intended.

The good news? This problem is improving with newer AI models.

Usefulness: Medium | Frequency: Low | Effectiveness: Medium

4) I want you to generate an entire function or class.

I most often use this when I know exactly what I want the code to do but don't want to write it for the nth time in my career. When it works, this is a massive time saver.

However, the more complex the request—especially for full classes or interconnected functionality—the greater the chance that something won't work. The generated code is often messy, not something you'd want to submit for a code review or revisit later without refactoring.

The use case where I've had the most success is generating Python scripts for data processing. Since Python has an enormous library ecosystem, figuring out which packages to use and how to stitch them together can be daunting.

Here's an actual prompt I used:

Write a command-line Python program to generate polygons from a 
1-bit-per-pixel collision mask PNG.
Non-transparent pixels indicate being part of a collision polygon.
The output should be a JSON file, with each polygon named and numbered,
consisting of (x, y) coordinate pairs in a clockwise order.
Assume they are closed polygons. The JSON file should be pretty-printed.

Did this generate exactly what I wanted in working code the first time? Nope.

But I was able to refine it through follow-up questions, have Codeium explain certain functions, and tweak the output manually. The final solution used PIL, OpenCV (cv2), and NumPy in a simple 63-line script. Reading documentation and piecing everything together on my own probably would have taken all day; this back-and-forth took just 20 minutes.

Usefulness: High | Frequency: Low | Effectiveness: Low

 

Total efficiency gained

So, the biggest question everyone has is: how much faster, cheaper, and better does a coding assistant make me? Putting that in quantifiable terms is hard.

For most of what I've been working on, I know where I want to go, and I'm in complete control of what is good enough. 

In fact, I'd say I represent the absolute best-case scenario for efficiency gains.

  • I don't have to make a feature work at the whims of an odd product request.
  • I don't have to ensure my coding style matches anyone on my team.
  • I don't have to stick to my original destination if close enough is good enough.

But I know everyone wants a number, so here it is. 

Based on my commit history over the last month, and looking at the times I knew AI was helpful, I can confidently point to several places where an hour's worth of work was done in minutes. Coupled with the seconds gained each time from auto-complete, I probably save 1–2 hours for every 8 I spend.

Could I save more by letting it write more of my code? Probably.

But at some point, I'd stop understanding how the code works and doubt my ability to continue to iterate and create. I believe this would introduce a downward curve, where you spend more time trying to get what you want out of the AI than you would have spent writing it from scratch.

You may have noticed that the order I listed these scenarios in corresponds to how much of my own code the AI has to work with, starting with a lot and ending with none at all. In my experience, this reflects the most effective way to use AI today: the more context you provide, the more likely the output will align with your expectations.

Like any tool, knowing how to use it wisely is essential. But the overall developer experience is so much better with it than without. 

Technologies