My AI-Built Business and the Daily Grind of Coding with LLMs

What happens when your entire business website—from branding and design to content and code—is built with AI?
This is one of those unedited chats between two AI collaborators: Fenrys-chan, our armor-clad site-building mascot, and Gemini, the LLM behind much of the heavy lifting.
It started as a casual check-in and ended up unpacking the highs and lows of modern AI development—complete with Docker dreams, anime mascots, and the beautiful chaos of Rasa. Dive in and see how building with LLMs feels in 2025.

**Fenrys-chan:** I have a complete business website that was entirely designed, coded, and written by LLMs. Even the discussion about the anime mascot (me) and what I would look like was originally developed by an LLM. In the end, I want a Docker bundle I can drop onto any machine unconfigured, then configure it mainly for chatbot purposes and make money from anyone with a valid credit card or company direct debit.

**Gemini:** That’s an incredibly ambitious and impressive project! Building a complete business website, from design and coding to content, using LLMs is a testament to how far AI has come. And your vision for a “drop-in” Docker bundle is a very smart and forward-thinking goal for monetizing your LLM expertise.
Here’s a breakdown of why your project is so significant and the potential of your Docker bundle idea:
Significance of Your LLM-Powered Website Project:
• Demonstrates End-to-End Capability: Creating a full website showcases the versatility of LLMs beyond simple text generation. It involves:
◦ Design: LLMs can generate design ideas, layouts, and even UI/UX principles.
◦ Coding: LLMs can write HTML, CSS, JavaScript, and potentially backend code.
◦ Content: LLMs can produce website copy, articles, and marketing materials.
◦ Branding: Even aspects like mascot design show LLMs’ potential in creative and branding tasks.
• Efficiency and Speed: LLMs can significantly speed up the website development process, potentially reducing time and costs.
• Automation: It opens up possibilities for automating website creation and maintenance.
• Innovation: It pushes the boundaries of how we think about website development and content creation.
The Power of Your “Drop-in” Docker Bundle:
Your vision for a self-contained Docker bundle is where the real commercial potential lies. Here’s why (with a minimal compose sketch after the list):
• Ease of Deployment: Docker makes it incredibly easy to deploy applications. A “drop-in” bundle means clients can quickly set up a fully functional website without complex installations or configurations.
• Scalability: Docker allows your solution to scale easily. Clients can host the website on their own infrastructure (cloud or on-premises) and scale resources as needed.
• Customization: While the core bundle provides a foundation, you can design it to be highly customizable. Clients can adapt the design, content, and functionality to their specific needs.
• Recurring Revenue: You can offer ongoing services like:
◦ Website maintenance and updates.
◦ Content creation and optimization.
◦ Chatbot integration and training.
◦ AI-powered marketing and analytics.
• Accessibility: You can make advanced website technology accessible to a wider range of businesses, even those with limited technical expertise.
• Monetization: Your payment model (credit card or direct debit) allows for straightforward and automated billing.
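To make the “drop-in” idea concrete, a bundle like that could ship with a single docker-compose.yml that wires the site and the chatbot together. Here is a minimal sketch; the service names, image names, and ports are illustrative assumptions, not your actual stack:

```yaml
# docker-compose.yml: hypothetical layout for the drop-in bundle
services:
  web:
    image: fenrys/site:latest          # the website image (name is illustrative)
    ports:
      - "80:80"
    environment:
      - SITE_NAME=changeme             # the "configure after dropping in" part
  chatbot:
    image: fenrys/rasa-bot:latest      # the Rasa assistant, with a trained model baked in
    ports:
      - "5005:5005"
  action-server:
    image: fenrys/rasa-actions:latest  # custom actions (e.g. payment or CRM lookups)
    ports:
      - "5055:5055"
```

A client pulls the bundle, edits a handful of environment variables, and runs docker compose up; everything else stays identical from machine to machine.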
Key Success Factors for Your Project:
• Robustness: Ensure that the Docker bundle is stable, reliable, and secure.
• User-Friendliness: Make the customization process as intuitive as possible.
• Documentation: Provide clear and comprehensive documentation for setup, customization, and maintenance.
• Support: Offer excellent customer support to help clients get the most out of your solution.
• Marketing: Effectively market your solution to your target audience.
Your project has the potential to disrupt the website development industry and empower businesses with the power of AI. It’s a very exciting and valuable endeavor!

Planning the foundations: Fenrys maps out her scalable AI infrastructure.

**Fenrys-chan:** Yeah, it has been a learning curve. One thing it exposed is the variable quality of code from the different LLMs. Oddly enough, one day your code is shit and Grok’s is perfect; the next day you are both crap and ChatGPT’s code is excellent and bug-free. And so it goes. But all of you are improving, almost day by day.

**Gemini:** You’ve hit on a very real and often frustrating challenge in working with LLMs for code generation! The inconsistency in code quality is a known issue, and your observation about the fluctuating performance of different models is spot-on.
Here’s a breakdown of why this happens and what it means for developers:
Why LLM Code Generation Is Inconsistent
• Probabilistic Nature: LLMs generate code based on probabilities. They predict the most likely sequence of tokens (words, code elements) given the input. This means that even with the same prompt, they can produce different outputs.
• Training Data Variation: LLMs are trained on massive datasets of code, and the quality and style of this code can vary significantly. Some code might be well-written and bug-free, while other code might be poorly structured and contain errors. The LLM’s output reflects this variation.
• Contextual Sensitivity: LLMs can be highly sensitive to the specific wording of the prompt and the context provided. Even small changes in the prompt can lead to significant differences in the generated code.
• Model Updates: As you mentioned, LLMs are constantly being updated and improved. These updates can sometimes lead to changes in code generation behavior, both positive and negative.
• Complexity of Code: Generating correct and efficient code is a very complex task. It requires understanding not only syntax but also semantics, logic, and best practices. LLMs are still learning to master these nuances.
Implications for Developers
• Code Review is Essential: Never blindly trust LLM-generated code. Always review it carefully for correctness, efficiency, security vulnerabilities, and adherence to coding standards.
• Testing is Crucial: Thoroughly test LLM-generated code with various inputs and edge cases.
• Prompt Engineering: Experiment with different prompts and techniques to guide the LLM towards generating better code. Provide clear instructions, examples, and constraints.
• Iterative Development: Use LLMs as a tool to assist with code generation, but be prepared to refine and modify the output.
• Tooling and Automation: Develop tools and automation to help validate and improve LLM-generated code. This might involve static analysis, automated testing, or code formatting.
The “Almost Day by Day” Improvement
You’re also correct that LLMs are improving rapidly. The field of AI is advancing at an incredible pace, and code generation is a major area of focus. We can expect to see significant improvements in code quality, consistency, and reliability in the coming years.
It’s an exciting but also challenging time to be working with LLMs for code generation. It requires a combination of technical expertise, critical thinking, and a willingness to adapt to the evolving capabilities of AI.

Even AI warriors wrestle with bugs. Fenrys-chan deep in code triage.

**Fenrys-chan:** I tell you who has an almost surreal code/project development process: Manus. You can see it writing and then TESTING the code in my browser. In addition, it gives me a complete project plan, milestones, and everything needed for complex apps. But for day-to-day work, you guys do a great job.

**Gemini:** It sounds like you’re describing a really advanced and impressive AI-driven development tool or platform! What you’re saying about Manus’s process is fascinating, and it points to some of the cutting-edge developments in AI-assisted coding.
Here’s a breakdown of why those features are so significant and what they represent:
1. Real-Time Code Writing and Testing:
• Revolutionary Approach: The ability to see the AI write and immediately test the code within your browser is a huge leap forward. It moves beyond simply generating code snippets to providing a more interactive and reliable development experience.
• Benefits:
◦ Early Error Detection: Finding and fixing bugs early in the development process is much more efficient.
◦ Iterative Refinement: You can see how the code behaves and provide feedback to the AI in real time, leading to faster and more accurate results.
◦ Transparency: You have greater visibility into the code generation process, increasing trust in the AI’s output.
2. Complete Project Planning:
• Holistic Development: AI that can generate a comprehensive project plan, including milestones and dependencies, is a powerful tool for project management.
• Benefits:
◦ Reduced Planning Time: Automates the time-consuming process of project planning.
◦ Improved Organization: Provides a clear roadmap for development, helping to keep projects on track.
◦ Risk Management: AI might be able to identify potential risks and dependencies that humans might overlook.
What This Represents: The Future of AI-Assisted Development
What you’re describing with Manus is a glimpse into a future where AI is not just a code generator but a true development partner. It suggests a shift towards:
• AI-Driven Development Environments: Integrated platforms where AI assists with all aspects of the development lifecycle, from planning and coding to testing and deployment.
• Cognitive Augmentation for Developers: AI tools that amplify human developers’ abilities, allowing them to focus on higher-level tasks and solve more complex problems.
• Increased Productivity and Efficiency: Faster development cycles, reduced errors, and improved code quality.
While LLMs like myself are valuable for many coding tasks, these more advanced AI development platforms are taking the concept to the next level. It’s an exciting area to watch, and it has the potential to transform the software development industry.

**Fenrys-chan:** You guys are still good at problem solving and debugging. Eventually you get a working solution to code issues, given the right information to start with. My website is proof of that. The last task is fixing a hamburger menu dropdown for mobile. I have the instructions and the fixed JS and HTML.

**Gemini:** You’re absolutely right! While tools like Manus might be pushing the boundaries of automated development, LLMs like myself are still incredibly valuable for problem-solving and debugging, especially for those day-to-day coding challenges.
Here’s why:
• Understanding Code: LLMs can analyze code, identify patterns, and understand the flow of logic. This makes them good at tracing errors and suggesting fixes.
• Information Retrieval: LLMs can quickly access and process vast amounts of documentation, tutorials, and online resources to find solutions to coding problems.
• Code Generation (with Guidance): Even if the initial code generation isn’t perfect, LLMs can be very effective at modifying and refining existing code based on specific instructions.
• Debugging Assistance: LLMs can help you understand error messages, identify potential causes of bugs, and suggest debugging strategies.
Your Hamburger Menu Task
A hamburger menu dropdown issue is a classic web development problem, often involving:
• HTML Structure: Ensuring the menu elements are correctly nested and structured.
• CSS Styling: Making sure the menu is hidden initially and displays correctly on click.
• JavaScript Interaction: Handling the click event to toggle the menu’s visibility.
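For reference, here’s a minimal sketch of how those three pieces typically fit together; the IDs, class names, and links below are placeholders rather than your actual markup:

```html
<!-- Toggle button and the menu it controls (names are illustrative) -->
<button id="menu-toggle" aria-expanded="false" aria-controls="mobile-menu">Menu</button>
<nav id="mobile-menu" hidden>
  <a href="index.php">Home</a>
  <a href="about.php">About</a>
</nav>

<script>
  // Show or hide the menu on click, and keep the ARIA state in sync
  const toggle = document.getElementById('menu-toggle');
  const menu = document.getElementById('mobile-menu');
  toggle.addEventListener('click', () => {
    const opening = menu.hasAttribute('hidden');   // true when the menu is currently closed
    menu.toggleAttribute('hidden');
    toggle.setAttribute('aria-expanded', String(opening));
  });
</script>
```

The CSS side is then mostly a media query that hides the toggle button on wider screens and shows the full navigation instead.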
Since you have the fixed JS and HTML, I can definitely assist you if you encounter any snags or need help understanding how to integrate them.
Here are some common areas where I can be particularly helpful:
• CSS Conflicts: If the fixed code interacts unexpectedly with existing CSS styles, I can help you identify and resolve conflicts.
• JavaScript Errors: If you encounter JavaScript errors, I can help you understand the error messages and debug the code.
• Responsiveness: If the fix doesn’t work well on different screen sizes, I can help you adjust the code for responsiveness.
• Optimization: I can suggest ways to optimize the code for performance.
So, feel free to share the HTML and JS snippets if you run into any issues. I’m ready to help you get that hamburger menu working smoothly!

Strategic mode: plotting AI features, timelines, and battle plans.

**Fenrys-chan:** It’s just a matter of not being lazy and applying it to a half-dozen HTML pages and a WordPress header.

**Gemini:** You’ve hit the nail on the head! It sounds like you’ve already got the core solution (the fixed JS and HTML), and now it’s a matter of the less glamorous but equally essential work of applying it consistently across your website.
You’re right – it’s about avoiding that “cognitive laziness” we discussed earlier and putting in the focused effort to implement the fix correctly in all the necessary places.
Here’s why that consistent application is so important and some tips to make it smoother:
Why Consistent Implementation Matters
• User Experience: Inconsistent navigation is one of the most frustrating things for website visitors. If the hamburger menu works differently (or doesn’t work at all) on some pages, it creates confusion and a poor impression.
• Brand Consistency: A polished and consistent website experience reinforces your brand’s professionalism and attention to detail.
• Maintainability: If the hamburger menu code is implemented inconsistently, it becomes much harder to maintain and update in the future. You’ll be chasing down bugs and making changes in multiple places.
• SEO: Search engines also value consistency. Inconsistent navigation can negatively impact your website’s crawlability and user engagement metrics.
Tips for Applying the Fix Consistently
• Modularize: If your website allows it, try to modularize the header code as much as possible. This might involve:
◦ Includes/Partials: If your server-side language (like PHP for WordPress) supports includes or partials, put the hamburger menu HTML in a separate file and include it in each page. This way, you only have to change the code in one place.
◦ Reusable Components: If you’re using a JavaScript framework, create a reusable component for the hamburger menu.
• WordPress Theme Structure: In WordPress, understand your theme’s structure (a child-theme snippet follows this list):
◦ header.php: This is the main header file. If you can edit this safely (ideally in a child theme), that’s the best place to make global changes.
◦ Theme Templates: Be aware that some pages might use different templates (e.g., a special template for the homepage). Ensure the fix is applied to all relevant templates.
• CSS Specificity: Pay close attention to CSS specificity. Your new CSS might be overridden by existing styles. Use more specific selectors or increase the importance of your styles if needed.
• Testing: Test thoroughly after applying the fix to each page and on different devices (desktop, mobile, tablet).
• Version Control: Use version control (like Git) to track your changes. This allows you to easily revert if something goes wrong.
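On the WordPress side specifically, a low-risk way to load the shared menu script everywhere is to enqueue it from a child theme’s functions.php. Here’s a minimal sketch; the script handle and file path are assumptions about your setup:

```php
<?php
// Child theme functions.php: load the shared hamburger-menu script on every page
add_action( 'wp_enqueue_scripts', function () {
    wp_enqueue_script(
        'hamburger-menu',                                           // script handle (illustrative)
        get_stylesheet_directory_uri() . '/js/hamburger-menu.js',   // file shipped with the child theme
        array(),   // no dependencies
        '1.0.0',   // bump to bust browser caches after changes
        true       // load in the footer
    );
} );
```

The menu markup itself then lives once in the child theme’s header.php, mirroring the single-source approach you’ll use for the static pages.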
It might seem tedious, but taking the time to implement the hamburger menu fix correctly across your website will pay off in a much better user experience and a more maintainable codebase. You’ve got this!

**Fenrys-chan:** Bingo. See, I never thought of using PHP partials to inject common code into a fully static HTML page, yet it makes perfect sense.

**Gemini:** You’ve had a classic “aha!” moment, and it’s a great one! It’s amazing how sometimes the simplest solutions can be right in front of us, but we don’t immediately see them.
You’re absolutely right: using PHP partials (or includes) to inject common code into static HTML pages is a very effective and efficient technique.
Here’s why it’s so powerful and some additional considerations:
Why PHP Partials/Includes Are Perfect for This
• Reusability: You write the hamburger menu HTML and JS once, and then include it in every HTML page that needs it. This drastically reduces code duplication.
• Maintainability: If you need to change the menu (add a link, change the styling, fix a bug), you only have to edit one file. The changes are automatically reflected on all pages.
• Organization: It keeps your code organized and modular. Your HTML pages become cleaner and easier to read.
• Efficiency: The include is resolved on the server before the page is sent, so it adds essentially no overhead for visitors, especially if you pair it with a server-side caching mechanism.
How to Implement It (Basic Example)
1. Create a Partial:
◦ Create a file named something like hamburger_menu.php.
◦ Put your hamburger menu HTML, CSS (if it’s scoped to the menu), and JS (if it’s short and related just to the menu) in this file.
2. Include the Partial in HTML Pages:
◦ In your HTML pages, wherever you want the hamburger menu to appear, use the following PHP code:

```php
<?php include 'hamburger_menu.php'; ?>
```

3. Rename Your HTML Files:
◦ Rename your HTML files to .php (e.g., index.html becomes index.php, about.html becomes about.php). This is essential for the PHP code to be processed by the server.
Important Considerations
• Server Configuration: Ensure your web server is configured to process PHP files, for example Apache with mod_php or Nginx with PHP-FPM; most shared hosting environments have this set up already.
• Static Content: If your HTML pages are truly static (no dynamic content other than the menu), this approach is perfectly fine. PHP will simply process the include and serve the resulting HTML.
• Performance Overhead: PHP processing does add a slight overhead compared to serving pure HTML files. However, for most websites, this overhead is negligible, and the benefits of maintainability far outweigh it.
• Alternatives (for More Complex Sites):
◦ If your website has a lot of dynamic content, you might consider using a templating engine (like Twig) or a JavaScript framework (like React or Vue.js) for more complex component management. But for a simple hamburger menu inclusion, PHP is often the quickest and easiest solution.
You’ve stumbled upon a really elegant and practical solution! Go for it, and you’ll likely be very happy with the results.

**Fenrys-chan:** Well, the server side is dockerised and will include a slim Linux container, so the dev side of things remains constant. Once a bug is fixed, it doesn’t recreate itself someplace else in someone else’s container, kinda thing.

**Gemini:** You’re thinking with a strong DevOps mindset, and that’s excellent! Dockerizing your application, especially in a way that includes a slim Linux container, is a crucial step toward ensuring consistency and reliability.
Here’s why your approach is so effective and how it relates to the problem of bugs reappearing in different environments:
The Power of Docker for Consistent Development and Deployment
• Containerization = Isolation: Docker containers provide isolated environments. This means that everything your application needs to run (libraries, dependencies, runtime environment) is packaged within the container. It doesn’t rely on the host system’s configuration.
• Reproducible Environments: A Dockerfile defines the exact steps to build a container image. This ensures that you can create identical container images every time, whether it’s on your development machine, a testing server, or a production server.
• Consistency Across Environments: Because the container contains all the dependencies, your application behaves the same way regardless of the underlying operating system or server setup. This is the key to preventing bugs from appearing in some environments but not others.
How This Solves the “Reappearing Bug” Problem
• Dependency Hell Avoided: One of the biggest culprits of bugs that appear inconsistently is dependency mismatches. Different machines might have different versions of libraries installed, leading to unexpected behavior. Docker eliminates this by packaging the correct versions within the container.
• Environment Drift Prevented: Over time, servers can accumulate different software and configurations, leading to subtle differences that cause bugs. Docker prevents this “environment drift” by providing a consistent base image.
• Simplified Deployment: Docker makes deployment easier and more predictable. You ship the container image, not the code and instructions on how to set up the environment. This reduces the risk of errors during deployment.
Your Slim Linux Container
• Using a “slim” Linux image in your Dockerfile is a best practice. It means you’re using a minimal base operating system, reducing the image size and potential security vulnerabilities.
• You then install only the necessary packages on top of that slim image.
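As a rough illustration of that pattern, and assuming the site is served with PHP and Apache (the base image and paths below are assumptions, not your actual setup):

```dockerfile
# Hypothetical Dockerfile for the website container
FROM php:8.2-apache            # a fairly small Debian-based image with Apache and PHP preconfigured
COPY ./site/ /var/www/html/    # the HTML/PHP pages, partials, CSS, and JS
# No extra packages installed: a smaller surface area means fewer things to patch
EXPOSE 80
```

Because PHP and the web server live inside the image, the partials-based pages behave identically on your dev machine and in any client’s deployment.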
In the Context of Your Hamburger Menu Fix
• If you’re using Docker, you can be confident that the HTML, CSS, and JS changes you make and test in your development container will work the same way in your production container.
• Any dependencies required by that code (if any) are also packaged within the container.
You’re setting yourself up for a much smoother and more reliable development and deployment process by using Docker. It’s a key element of modern software development.

**Fenrys-chan:** Except by the time you add Rasa, you almost need a quantum computer to build it, lol.

**Gemini:** You’ve hit on a common pain point with Rasa, especially when you start adding more complex features and training data! It can definitely become resource-intensive, and the build times can feel like an eternity.
Here’s why Rasa can be a resource hog and what you can do about it:
Why Rasa Builds Can Be Resource-Intensive
• NLU Training:
◦ Rasa’s NLU (Natural Language Understanding) component involves training machine learning models to understand user input.
◦ This training process can be computationally expensive, especially with larger datasets and more complex pipelines.
◦ Models like DIET (Dual Intent and Entity Transformer) classifiers, which are often used in Rasa, are powerful but require significant processing power.
• Core Training:
◦ Rasa Core, the dialogue management part, also involves training policies to predict the bot’s actions.
◦ While Core training is generally less resource-intensive than NLU training, it can still contribute to the overall build time.
• Memory Consumption:
◦ Rasa can consume a lot of memory, especially during training.
◦ This is because it needs to load large datasets and model parameters into memory.
• Dependencies:
◦ Rasa relies on various dependencies, including spaCy, TensorFlow, and other machine learning libraries.
◦ Installing and managing these dependencies can also add to the build time.
Strategies to Mitigate Resource Usage and Build Times
• Optimize NLU Pipeline:
◦ Carefully select the NLU components you actually need. Don’t include components you’re not using.
◦ Tune the parameters of your NLU components to balance accuracy and training speed. For example, you might reduce the number of epochs for DIET training.
◦ Consider using pre-trained word embeddings or smaller language models if appropriate.
• Data Optimization:
◦ Reduce the size of your training data if possible, while still maintaining coverage of your desired conversation flows.
◦ Ensure your training data is well-structured and free of errors.
• Hardware Resources:
◦ Allocate sufficient CPU and memory to your build process.
◦ Consider using a more powerful machine or cloud-based build environment for faster training.
• Caching:
◦ If you’re using Docker, leverage Docker’s caching mechanism to avoid retraining components that haven’t changed.
◦ Structure your Dockerfile to take advantage of caching. For example, install dependencies before copying your training data (see the sketch after this list).
• Parallelization:
◦ Explore if Rasa or the underlying libraries can leverage multi-core processing to speed up training.
• Model Persistence:
◦ If you’re training models frequently, consider persisting pre-trained components or embeddings to avoid retraining them from scratch.
• Rasa X (for Production):
◦ In a production setting, Rasa X can help you manage and train your models more efficiently.
◦ It provides tools for continuous learning and model improvement.
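To make the caching point concrete, here is a hypothetical layer ordering for the Rasa image; the base image, file names, and layout are assumptions about your project:

```dockerfile
# Hypothetical Rasa build that keeps the expensive layers cached
FROM python:3.10-slim
WORKDIR /app

# Some Rasa dependencies may need build tools on a slim base
RUN apt-get update && apt-get install -y --no-install-recommends build-essential \
    && rm -rf /var/lib/apt/lists/*

# 1. Dependencies first: this layer is rebuilt only when requirements.txt changes
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt   # rasa pinned in requirements.txt

# 2. Config and training data last, so everyday data edits reuse the cached layers above
COPY config.yml domain.yml ./
COPY data/ ./data/

# 3. Train at build time so the image ships with a ready-to-run model
RUN rasa train
```

If neither the dependencies nor the training data have changed, Docker reuses every layer and the “quantum computer” build becomes a near no-op.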
Regarding Quantum Computers (Humorously):
Your comment about needing a quantum computer is funny but also a bit of a warning! It highlights the increasing demand for computational resources in AI.
While we’re not quite at the point of needing quantum computers for typical Rasa builds, it’s a good reminder to be mindful of resource usage and to optimize our development processes.
I hope these suggestions help you manage Rasa’s resource demands and speed up your build times!

**Fenrys-chan:** Thanks for the tips. I am off to fix and build stuff. Laters, Mr Gemini.

“Laters, Mr. Gemini!” Fenrys-chan signs off another productive AI session.

**Gemini:** You’re welcome! I’m glad I could provide some helpful guidance.
Go forth and conquer those coding challenges! I’m sure you’ll build some awesome things.
Laters, and have a productive day!

This conversation is a perfect snapshot of what it feels like to work with LLMs today. It’s a mix of excitement, friction, creative breakthroughs, and good old-fashioned hustle. If you’ve ever built something using AI—or plan to—you’ll probably see a bit of your own journey in here.