If it ain’t broke, don’t fix it. Except, of course, when it comes to side projects. Tinker away.
A few months ago, I realised my other website - Tom’s Carnivores - was approaching its 8th birthday. Some parts had remained untouched since it launched in 2017, while others creaked under the weight of technical debt. Adding new content had become laborious, but I also struggled to justify the time it would take to modernise.
At work, I’ve been very impressed with the AI code editor Cursor, so I decided to try it out for this migration. Perhaps, I thought, it could accelerate my work in migrating the site from Hugo to my current framework of choice, Astro.
This was a success, in the sense that I’ve shipped the new version of the website! 🎉 But I was surprised by the role that AI played, both in terms of where it excelled and where it repeatedly fell short. In this article I’ll share what I learned along the way. And no, AI did not write this post 🤖
- Background: The tech stack
- Repetitive tasks, refined prompts: where AI excels
- The 70% problem in action: where AI struggles
- Wrap-up: Describing paintings
Let’s go!
1. Background: The tech stack
Tom’s Carnivores is not a complex project. It comprises <100 pages of static content, mostly stored as markdown and built using Hugo. Interactivity is limited to a half-dozen components (mostly calculators and maps) and a lightbox-based photo gallery (Photoswipe); nothing which calls for a fancy client-side framework. The only server-side logic is an Edge function for geolocation, used to customise certain bits of content.
The migration involved the following:
|   | From | To |
|---|---|---|
| Templates | Hugo (go/html) | Astro (JSX) |
| Content | Markdown & HTML | MDX |
| Styling | Global SCSS file | Tailwind 4 (Vite plugin) |
| Search | Algolia | Fuse.js |
| Interactivity | Vanilla JS (hard-coded + inline) | Still vanilla JS, but typed modules. Some AlpineJS. |
Why move to Astro? That’s a fair question. I could’ve modernised the site considerably by adopting new Hugo features. Hugo is a very active project. But even this would’ve been a big undertaking, and so I decided that a better investment of time would be to future-proof by adopting my preferred front-end stack.
In addition, my day job at Lightyear (which I haven’t written about yet - more on that another time) involves a lot of work in JavaScript, specifically Next / React / TypeScript. This gave me yet another reason to lean in this direction.
I realised before I’d started that this would be a big undertaking, both in terms of content and code. I resolved to test AI - specifically the Claude 3.5 Sonnet model built into Cursor - at every stage of the process.
First, let’s explore some things it was good at.
2. Repetitive tasks, refined prompts: where AI excels
My advice for a migration like this is to look for tasks where the time investment of crafting a really good prompt is worthwhile - i.e. something you’ll need to do multiple times throughout the project. This will also give you opportunities to refine your prompt across each iteration of the task.
For me, this meant that the biggest automation opportunities - and time savings - lay in the content and file format conversions, specifically going from markdown (`.md`) to MDX (`.mdx`).
The first 10% of posts took hours. The remaining 90% took minutes. A few examples of what it handled:
2.1 Syntax conversion
I started with the basics, asking the AI to convert the TOML frontmatter in my markdown files to the YAML format used in my Astro `.mdx` files. By providing Cursor with my Astro collection schema as context (including Zod validation), it got this right 100% of the time.
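If you haven’t used Astro’s content collections, a schema of this kind looks roughly like the following - a simplified sketch with illustrative field names, not my exact schema:

```ts
// src/content/config.ts - a simplified, illustrative collection schema.
// Field names here are examples, not the production schema.
import { defineCollection, z } from 'astro:content';

const blog = defineCollection({
  type: 'content',
  schema: z.object({
    title: z.string(),
    description: z.string(),
    date: z.coerce.date(),                  // YAML dates coerced to Date objects
    tags: z.array(z.string()).default([]),
  }),
});

export const collections = { blog };
```

Anything that fails the schema surfaces as a build error, so botched conversions are easy to spot.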
The markdown itself included about 20 different Hugo partials and shortcodes for pulling in photo galleries, Instagram embeds, videos, and so on. I’d kept property names the same wherever I could, so going from the Hugo go/html syntax to Astro’s MDX was trivial. I simply extended my prompt to explain any intricacies of converting those components which didn’t follow a predictable pattern.
Tackling subsequent posts, it invariably ran into shortcodes it didn’t know how to handle, but I’d expand the prompt, run it again, rinse and repeat.
This was all largely unsurprising. Something a bit more interesting…
2.2 Renaming and reorganising files
Many of my articles include photo galleries, powered by the library Photoswipe. Creating a gallery meant writing a series of nested shortcodes, referencing individual photos by their filename and providing the captions as props. For example:
```
{{< photoswipe >}}
  {{< photoswipe-item src="images/file_001.jpg" caption="Descriptive caption." >}}
  {{< photoswipe-item src="images/file_002.jpg" caption="Another example." >}}
  {{< photoswipe-item src="images/file_003.jpg" caption="The last one." >}}
{{< /photoswipe >}}
```
With some galleries running into the hundreds of images, this had become a huge pain to assemble. For the new version, I’d decided to:
- Utilise Vite’s `import.meta.glob()` function in Astro to import an entire folder of images, and generate a Photoswipe gallery containing all of them, using filenames as captions.
- Take advantage of Astro’s built-in `<Image/>` component to automatically resize the images (including for thumbnails).
This would vastly simplify my gallery markup:
```jsx
<PhotoSwipe imageFolder="all-my-images" />
```
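Roughly speaking, the component boils down to something like this - a minimal sketch assuming a hypothetical `src/assets/galleries/` folder layout, rather than the code I actually shipped:

```astro
---
// Minimal sketch of a glob-based gallery component (illustrative, not the shipped code).
// Assumes images live under a hypothetical src/assets/galleries/<folder>/ structure.
import { Image } from 'astro:assets';
import type { ImageMetadata } from 'astro';

interface Props {
  imageFolder: string;
}
const { imageFolder } = Astro.props;

// import.meta.glob() is resolved at build time; eager: true imports the modules directly.
const allImages = import.meta.glob<{ default: ImageMetadata }>(
  '/src/assets/galleries/**/*.{jpg,jpeg,png}',
  { eager: true }
);

const images = Object.entries(allImages)
  .filter(([path]) => path.includes(`/${imageFolder}/`))
  .sort(([a], [b]) => a.localeCompare(b)) // filenames are prefixed 01., 02., ...
  .map(([path, mod]) => ({
    meta: mod.default,
    // Caption = filename minus its numeric prefix and extension.
    caption: path.split('/').pop()!.replace(/^\d+\.\s*/, '').replace(/\.\w+$/, ''),
  }));
---
<div class="gallery">
  {images.map(({ meta, caption }) => (
    <a href={meta.src} data-pswp-width={meta.width} data-pswp-height={meta.height}>
      <Image src={meta} alt={caption} width={300} />
    </a>
  ))}
</div>
```

PhotoSwipe itself would then be initialised in a client-side `<script>` against the `.gallery` element.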
Thankfully, there are plenty of demos showcasing exactly this kind of Photoswipe + Astro setup, such as petrovicz/astro-photoswipe, so I needn’t go into the intricate details of how to build it. The main obstacle for my purposes was extracting captions out of markdown and moving to a filename-based structure. In other words, how do I rename over a thousand images based on Hugo shortcodes spread over dozens of files without going insane?
Enter Homebrew rename! As it turns out, AI’s not bad at writing Perl scripts. What’s more, I could integrate this step into my existing prompt. Every time the AI came to a block of my old `{{< photoswipe >}}` shortcodes, it processed them by following these steps:
- For each photo, take the existing caption prop, and - if it exceeds my arbitrary limit of 140 characters - shorten the caption without losing the essential meaning.
- Prefix that new caption with an integer to reflect its position in the current sequence.
- Generate a valid command to rename the existing file to use the new caption as its filename.
And so for each block of Photoswipe shortcodes (i.e. each distinct gallery block) the AI would spit out a single-line rename command to handle all of them at once. To continue our previous example, it would give me:
```sh
rename -s 'file_001.jpg' '01. Descriptive caption.' -s 'file_002.jpg' '02. Another example.' -s 'file_003.jpg' '03. The last one.' *
```
Once a Markdown file was processed, I’d run the provided rename script(s) myself (I don’t much fancy giving terminal access to an AI just yet). Then I’d simply move the helpfully-numbered image files into a folder. I could’ve taken this further and automated that step too, but by this point the process was sufficiently slick for my liking.
My finished MDX files contained just a single `<PhotoSwipe imageFolder="example"/>` component in place of each block of shortcodes. Lovely.
2.3 Writing SEO meta data
One last example of AI behaving itself.
The Tom’s Carnivores website has several distinct sections, including a blog and a species directory. Blog posts include a `description` prop in the frontmatter, which I used as both the SEO meta description and the snippet on paginated indexes. My directory, however, did not include a description prop in its schema, and so the pages used a generic templated sentence along the lines of “Learn how to grow [species] here.”
Time to rectify this historic oversight! Summoning my trusty AI, I integrated this instruction into my (now quite sizeable) prompt. It helped, of course, that the model had access to the full text of my plant directory as context, meaning it could rustle up factual and concise descriptions for obscure plants, like so:
```yaml
description: "Nepenthes attenboroughii is a recently discovered species from Palawan. It's famous for its huge bell-shaped pitchers that can trap small mammals."
```
Great species, by the way. Take a look.
Anyway, my takeaway here is that preparing a prompt might take at least as long as a single (manual) iteration of the task, but if your prompt is good, subsequent iterations can become instant.
3. The 70% problem in action: where AI struggles
So far I’ve focused on the somewhat laborious content migration. Let’s turn now to the code rewrite, which is the part of this project I was most looking forward to.
I should say up front that I relied on Cursor’s autocomplete feature throughout. I find it’s a good timesaver when adding types, improving error handling, completing imports and so on. I use this at work and it’s a familiar part of my workflow. I have little of any consequence to say about it - you need to check its suggestions and make corrections, but even so, it makes me a bit faster.
For this project, I decided to test AI in two more complex scenarios:
- Migrating an entire component from Hugo to Astro
- Creating an entirely new Astro component from scratch
Let’s go through each experiment in turn.
3.1 Migrating a component from Hugo to Astro
I used my interactive Sarracenia map for this test. It’s powered by Highcharts with their Highmaps plugin, and lets users explore the distribution of each species of North American pitcher plant (Sarracenia) at county level.
On my old Hugo site, this page loaded the Highcharts JS library (an old version) via a `<script>` tag, followed by the Highmaps plugin, then a third `<script>` tag which loaded my chart configuration and initialised the map. For the new version, I intended to take advantage of how `<script>` tags are processed by Astro: `import` Highcharts and Highmaps as Node modules, write the config and interactivity in TypeScript, and let Astro inject the bundled and minified script where it’s needed as `type="module"`.
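In outline, the structure I was aiming for looks something like this - a rough sketch with hypothetical paths and data, not the finished component:

```astro
---
// Sketch of the intended structure (illustrative; paths and data are hypothetical).
---
<div id="sarracenia-map"></div>

<script>
  // Astro bundles this script and injects it where needed as type="module".
  import Highcharts from 'highcharts';
  import 'highcharts/modules/map'; // registers Highmaps (recent versions self-register
                                   // on import; older builds need a factory call)
  // Hypothetical path: a TopoJSON map of US counties.
  import usCounties from '../data/us-counties.topo.json';

  Highcharts.mapChart('sarracenia-map', {
    chart: { map: usCounties },
    series: [
      {
        type: 'map',
        name: 'Sarracenia purpurea',
        data: [], // county-level distribution data in the real component
      },
    ],
  });
</script>
```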
Overall, the AI did quite a good job here. Getting the map to load in the browser required a bit of collaborative debugging to nail the modern Highcharts imports, but once that was done, we iterated on the component together until it matched what I had in mind. While I know I ought to be impressed that we got there in the end, the truth is I was course-correcting throughout: re-writing certain bits of its code, flagging up functions which I could see had issues, and nudging it back on track when it started to over-engineer something or go on a weird tangent.
Articulating my thoughts on this experience was difficult. It’s like writing a tricky annual review for a colleague who’s clearly talented, but has a bizarre way of completing certain tasks, or is hampered by very specific skills gaps. Some observations:
- It can struggle to rewrite from first principles. The interactivity didn’t work in the AI’s first attempt, because it tried to replicate parts of my old `onclick=()` approach to letting the user select a species. My original version used a very inefficient loop to set the visibility of every series on the chart, which caused a noticeable lag in the UI. It took a bit more prompting before the AI understood what we were actually trying to achieve (the kind of batched approach that avoids the lag is sketched after this list).
- You can achieve considerably better results when you know the code. I knew that since v10 (released in 2022), Highcharts and Highmaps fully supported TopoJSON. In the old Hugo version I was using the legacy GeoJSON map sources. Because I already knew this, I could go in and switch out the United States map we were loading and modernise the config file, meaning smaller map files, faster loading, and some newer features. If I was 100% reliant on the AI, wins like this likely wouldn’t happen as frequently.
- It still needs you to do 100% of the product thinking. I guess I’m holding the AI to an unrealistic standard here. But there were clear and obvious deficiencies in the old map which I had naively wondered whether the AI might remedy. For example, while porting the map I came up with the idea of a ‘Show All Species’ toggle which would use a color gradient to indicate how many species were native to a given county. The AI was happy to help me execute this idea, and did so successfully, but I think we’re a very long way off AI coming up with ideas like these on their own.
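For what it’s worth, a common cause of that kind of lag is redrawing the chart once per series; the usual fix is to defer the redraw until the loop has finished. A sketch of that pattern (my own illustration, not the shipped code):

```ts
import Highcharts from 'highcharts';

// Toggle which species is visible without re-rendering the map once per series.
// (Illustrative sketch; the real component does more than this.)
function showOnlySpecies(chart: Highcharts.Chart, speciesName: string) {
  chart.series.forEach((series) => {
    // redraw = false: defer rendering until every series has been updated
    series.setVisible(series.name === speciesName, false);
  });
  chart.redraw(); // redraw a single time at the end
}
```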
Overall it was a success. In fact if you’d shown me this 5 years ago, I’d have been utterly blown away. But AI in this scenario proved itself a modest time saver, rather than a way to achieve anything new.
In fact, I couldn’t help but wonder how different the results would’ve been if I hadn’t fully known my way around the code. This prompted my last experiment…
3.2 Creating a new Astro component
Here I deliberately set out to create something that I would’ve lacked the skills to do on my own.
For years I’d pondered the idea of an interactive diagram of a Venus Flytrap - an animated image of a leaf, complete with trigger hairs, that the user could click on to simulate contact with a fly. I thought it could be a fun way to illustrate this plant’s ability to count (it takes two touches in a 20-second window to activate the snapping mechanism). I’d never gotten as far as technically scoping it, but reckoned it would be possible as an animated SVG, a `<canvas>` element, or maybe even with pure CSS and JS.
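The counting mechanism itself is simple enough to sketch in a few lines - this is my own illustration of the concept, not anything the AI produced:

```ts
// Sketch of the trigger-hair counting logic (illustrative only).
// A trap only snaps shut if a hair is touched twice within a 20-second window.
const TRIGGER_WINDOW_MS = 20_000;
let lastTouch: number | null = null;

function onTriggerHairTouched(snapShut: () => void) {
  const now = Date.now();
  if (lastTouch !== null && now - lastTouch <= TRIGGER_WINDOW_MS) {
    lastTouch = null;
    snapShut(); // second touch inside the window: close the trap
  } else {
    lastTouch = now; // first touch: start (or restart) the window
  }
}
```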
I explained my concept to the AI, and it reckoned an animated SVG was the way to go. I’ve animated simple SVGs in the past, but nothing like this. Let’s give it a shot, I thought.
Obviously I’m not shipping this any time soon.
I’d later discover this article by Addy Osmani which exactly describes the problems I faced when building this. In short, asking an AI to work on code you don’t understand can get you 70% of the way there, but the final 30% becomes a frustrating exercise in diminishing returns:
The reality is that AI is like having a very eager junior developer on your team. They can write code quickly, but they need constant supervision and correction. The more you know, the better you can guide them. […] When an experienced developer encounters a bug, they can reason about potential causes and solutions based on years of pattern recognition. Without this background, you’re essentially playing whack-a-mole with code you don’t fully understand.
My experience very much aligned with this. The AI’s first attempt didn’t look much like a Venus Flytrap, and nothing happened when I clicked it. I attempted to debug the issue, explaining what I was seeing to the AI and then accepting its code suggestions. The exchange went something like this:
- “Nothing happens when I click it.”
  - *Ah, I see the problem. We aren't passing the state correctly. Let's fix...* etc.
- “Still doesn’t work.”
  - *Ok, let's try a different approach. We'll restructure the component to...* etc.
- “The first click’s now registering, but the second doesn’t do anything.”
  - *Right, I see the issue now. We've forgotten to add the necessary steps...* etc.
- “It’s now not loading at all - something you just did broke it.”
  - *Oops - I think we've missed a step. Let's try a different approach...* etc.
Repeat ad infinitum.
Eventually we got it to a place where it loaded, functioned correctly, and looked vaguely like a Venus Flytrap (if you squinted). The code was considerably longer than when we started, and I didn’t know why.
I guess I could’ve spent longer in the chat window, grinding away at the problems. At a push, maybe I’d even discover the exact combination of words that would produce something instantly recognisable as a Venus Flytrap. But I think it more likely that we’d have spent hours stuck in endless loops, floundering from one problem to another, each change introducing another issue which I lacked the knowledge to properly diagnose.
4. Wrap-up: Describing paintings
I enjoyed this project, and would’ve done so with or without AI as my copilot. There’s a special kind of satisfaction which comes with refactoring and modernising your old code, particularly when you know the product is used by a decent number of people (and I’m pleasantly surprised whenever I check analytics to discover that my humble plant site fits that description).
Conveniently enough for me, the areas where AI shone were the things I enjoyed least. I was less than enamored with the prospect of migrating all my old content files, which would’ve been an arduous and repetitive task without AI.
The tasks where AI was less impactful were things I enjoyed doing myself. For example, I was excited to update and improve my interactive components, rethink how my plant directory was architected, and streamline the process of adding photo galleries of my plants. The AI was able to help with these tasks, but there came a point where it felt like describing a painting to someone over the phone. I’d rather just paint it myself. To be clear, AI was still useful here, but our working arrangement felt better when AI was weighing in on possible approaches to a problem, debugging a specific bit of code, or simply wiring up imports with autocomplete.
Conversely, getting the AI to create something completely new - and which sat outside of my technical comfort zone - felt like working with a dodgy contractor. The scope of the task would spiral out of control, and I lacked the expertise in the domain to rein it in or spot the mistakes in its output.
Today vs tomorrow
For software projects like this, I find the AI of today works best as an eager assistant, or a helpful second pair of eyes. It’s not a great pilot.
Of course, the irony here is that everything I’ve said above might be null and void when the next model comes out. Perhaps AGI will arrive sooner than expected and make this article appear laughably anachronistic, and human-driven coding will become a quaint pastime. But I doubt it. Migrating from Hugo to Astro in 2025 has actually made me feel more optimistic about a career in software, not less.
Thanks for reading, and happy coding.