"What I cannot create, I do not understand"
Richard Feynman, Ph.D.
(Found on his chalkboard at Caltech after his death, 1988)

In my last post, I introduced the Amsterdam Web Communities System, the virtual community software that powers Electric Minds Reborn. Amsterdam is a rewrite of the Venice Web Communities System, which dates from 2006; in the process, I converted the code from Java to Go and "modernized" the HTML in its page templates.

I did much of the work the old-fashioned way, by hand. In this age of AI and LLMs cranking out code by the megabyte, this may seem like a waste of effort. But was it really?

You might assume, then, that I avoided AI entirely. In fact, I used it where it made sense.

The Objectives Matter

A large part of why I set out on this project in the first place was to learn Go programming. Instead of building simple sample projects, though, I set out to do something that I felt would matter, in the long run. The world doesn't really need another simple CRUD application, in the form of a to-do list or whatever. It just might need Amsterdam, though.

By writing the code myself, I could ensure that the intent of the original code remained the same, even while accustoming myself to the syntax and semantics of Go. Several times over the course of the project, I discovered a technique that was new to me, or an important compatibility issue I'd overlooked, and refactored the code to incorporate it. This makes the Git history of the project also a record of my progress in learning Go, as well as my progress in porting Venice code.
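As a generic illustration of the kind of translation involved (not actual Venice code; the names here are hypothetical), porting Java's exception-based flow to idiomatic Go means turning thrown exceptions into explicit error values, while keeping the original intent visible at each call site:

```go
package main

import (
	"errors"
	"fmt"
)

// In a Java original, a lookup like this would typically throw an
// exception on failure. Idiomatic Go returns an explicit error value
// instead, preserving the intent (fail loudly on a missing user)
// while forcing each caller to acknowledge the failure path.
var users = map[string]string{"alice": "Alice Example"}

func lookupUser(id string) (string, error) {
	name, ok := users[id]
	if !ok {
		return "", errors.New("no such user: " + id)
	}
	return name, nil
}

func main() {
	if name, err := lookupUser("alice"); err == nil {
		fmt.Println(name)
	}
	if _, err := lookupUser("bob"); err != nil {
		fmt.Println("error:", err)
	}
}
```

The mechanical part of such a port is easy; the discipline is in checking that each rewritten call still behaves as the original author intended.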

How much would I have learned if I'd outsourced my thinking, the way Steve Yegge does with his "Gas Town" orchestration system, directing vast numbers of agents to produce the actual code while consuming tokens like an eight-armed alcoholic? It would have been far too expensive to even consider, would likely have resulted in worse code...and I would have learned nothing about Go itself.

The Side Quests Matter, Too

In the course of the project, I had to translate the old, 2006-era HTML of Venice, replete with tables and FONT tags for layout, into more modern markup. Since this wasn't central to my objective of learning Go, I enlisted the help of Claude AI, saving pages from Venice and uploading them to a chat for it to modernize. I then edited the modernized HTML into the templates that Amsterdam uses. The new markup uses Tailwind CSS, which Claude suggested and I adopted.

I did, however, use three different chatbots (Claude, ChatGPT, and Gemini) as advisors on the project; they suggested particular Go libraries to use and snippets of code to solve problems I was having. Different models have different strengths, and I sought to get the best answers I could from each one, especially in unfamiliar areas of development. But the choice to go get the libraries and add the code in question was always mine, and I rewrote those snippets to suit my tastes and the context of the larger project.

AI and Project Governance

After my own use of AI in creating Amsterdam, it would be hypocritical of me to forbid anyone else from using it to improve the project. And yet, I don't want Amsterdam to become an AI free-for-all, with a flood of low-quality, vibe-coded contributions hitting the repository and giving reviewers migraines. That's why I've made provision for it in the project's Code of Conduct:

All project contributions must be submitted by identifiable human participants who accept full responsibility for their content. Automated agents, bots, or autonomous AI systems may not independently submit issues, pull requests, or other contributions.

Contributors may use software tools, including AI-assisted tools, but the submitting contributor must:

  • Fully understand the contribution.
  • Be able to explain design and implementation decisions without the use of AI.
  • Accept responsibility for maintenance and correctness.

Contributors should indicate AI-generated content in issue and pull request descriptions and comments, specifying which model was used.

Do not use AI to reply to questions about your issue or pull request. The questions are for you, the human, not an AI model.

The reasoning here is that, at all times, some human must be responsible for the code. The AI is a tool, maybe even a useful tool, but the final decision about what goes into the code must always be made by a human mind.

I accept that there will be those who condemn me for allowing any influence of AI whatsoever over the code. I, however, am not going to stick my head in the sand and pretend that AI will just go away. I'm hoping to find a "happy medium" where humans and AI can work together in peace...the software engineering equivalent of Babylon 5, if you will.

Amsterdam itself is already an experiment in virtual community. It turns out to be an experiment in modern open-source software engineering as well. Ultimately, while AI can be a useful tool, humans must remain accountable for production code, and open-source projects should control AI usage through governance rather than resort to blind avoidance.

And as with Electric Minds Reborn, "What it is, is up to us."
