The Three-Month Experiment: A Pragmatic Approach to Unscalable Solutions in Software Development
In the realm of entrepreneurship and startup culture, the advice from renowned investor Paul Graham echoes repeatedly: “Do things that don’t scale.” Yet when it comes to applying that philosophy to the code itself, the territory remains largely uncharted. After eight months of developing my AI podcast platform, I’ve adopted a straightforward principle: every unscalable technique or workaround gets a lifespan of three months. At the end of that period, it must either prove its value and be refined into something durable, or be discarded entirely.
As software engineers, we are conditioned to prioritize scalability from the outset. We think in terms of design patterns, microservices, and distributed architectures—all aimed at accommodating vast user bases. In a startup, however, that mindset often produces unnecessary complexity: building for scale becomes a costly form of procrastination. Why spend time refining features for users who have yet to materialize? My three-month rule compels me to ship straightforward, admittedly imperfect code quickly, which in turn yields invaluable insight into what users actually need.
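The post doesn’t prescribe any tooling for enforcing the rule, but one minimal sketch, assuming a Python codebase, is to tag each shortcut with an explicit expiry date so the code itself complains once the three months are up. The decorator name, the date, and the pricing example below are hypothetical, not part of the platform.

```python
# expiry.py - hypothetical helper for tracking deliberate shortcuts.
# Each hack gets a three-month deadline; once it passes, importing the
# module emits a warning until the shortcut is refined or deleted.
import datetime
import warnings


def unscalable(expires: str, note: str = ""):
    """Mark a function as a temporary hack with an expiry date (YYYY-MM-DD)."""
    deadline = datetime.date.fromisoformat(expires)

    def decorator(func):
        if datetime.date.today() > deadline:
            warnings.warn(
                f"{func.__name__} expired on {expires}: {note} "
                "Refine it or delete it.",
                stacklevel=2,
            )
        return func

    return decorator


# Example usage; the date and pricing logic are illustrative only.
@unscalable(expires="2025-09-01", note="Hardcoded pricing table.")
def get_price(plan: str) -> int:
    prices = {"basic": 9, "pro": 29}
    return prices[plan]
```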
Current Infrastructure Innovations: Simplifying for Learning
1. Consolidating Resources on a Single VM
My entire operational setup—encompassing the database, web server, background jobs, and Redis—runs on a single $40/month virtual machine with no backup redundancy. While some may view this as reckless, it has been a revelation. In just a couple of months, I learned more about my actual resource consumption than any capacity-planning document could have told me. Surprisingly, my platform peaks at about 4GB of RAM; the elaborate Kubernetes setup I almost built would have spent its time orchestrating empty containers. Real crashes exposed weaknesses in my infrastructure I would never have predicted, and each one pointed to a concrete improvement.
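The post doesn’t say how that 4GB peak was measured; one plausible sketch is a small sampler left running on the VM. The use of psutil (a third-party package), the script name, and the sampling interval are assumptions, not details from the post.

```python
# monitor.py - rough sketch of sampling memory use on the single VM.
# Requires `pip install psutil`; the interval and output format are arbitrary.
import time

import psutil


def log_peak_memory(interval_seconds: int = 60) -> None:
    peak_used = 0
    while True:
        mem = psutil.virtual_memory()
        peak_used = max(peak_used, mem.used)
        print(
            f"used={mem.used / 2**30:.2f} GiB  "
            f"peak={peak_used / 2**30:.2f} GiB  "
            f"available={mem.available / 2**30:.2f} GiB"
        )
        time.sleep(interval_seconds)


if __name__ == "__main__":
    log_peak_memory()
```

A few weeks of output like this shows whether the machine is anywhere near its limits, which is the kind of learning a capacity plan can only guess at.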
2. Embracing Hardcoded Configurations
Instead of using configuration files or environment variables, I have hardcoded constants throughout my codebase. Changes require a redeployment, which may sound labor-intensive. Ironically, this approach has become a powerful asset: any configuration value can be found with a single search across the codebase. Each price revision is tracked in version control, and each update undergoes code review—even if it’s just by myself! The time this saves far outweighs the effort it would take to build a dedicated configuration service.
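As a concrete illustration, such a module might look like the sketch below. Every name and value here is hypothetical rather than taken from the platform; the point is that a plain project-wide search (e.g. `grep -r PRICE_PRO .`) finds every setting, and any change to one shows up as a reviewed commit.

```python
# config.py - hypothetical example of hardcoded configuration.
# Changing any value means a commit, a (self-)review, and a redeploy,
# which is what keeps the full history of every setting in git.

# Pricing (USD per month)
PRICE_BASIC = 9
PRICE_PRO = 29

# Limits
MAX_EPISODES_PER_USER = 50
AUDIO_UPLOAD_LIMIT_MB = 200

# External services (everything lives on the same VM)
REDIS_URL = "redis://localhost:6379/0"
SQLITE_PATH = "/var/data/app.db"
```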
3. Using SQLite for Concurrent Users
Yes, I opted for SQLite as my production database, even with concurrent users.
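The section ends here, so the sketch below shows only the standard way SQLite is typically configured to tolerate concurrent readers alongside a single writer: write-ahead logging plus a generous busy timeout. None of the pragmas, paths, or queries are taken from the post.

```python
# db.py - sketch of a concurrency-friendly SQLite setup (WAL mode).
import sqlite3


def connect(path: str = "/var/data/app.db") -> sqlite3.Connection:
    conn = sqlite3.connect(path, timeout=30)    # wait up to 30s on a locked DB
    conn.execute("PRAGMA journal_mode=WAL;")    # readers no longer block the writer
    conn.execute("PRAGMA synchronous=NORMAL;")  # safe with WAL, fewer fsyncs
    conn.execute("PRAGMA busy_timeout=30000;")  # retry instead of failing fast
    return conn


if __name__ == "__main__":
    conn = connect(":memory:")  # in-memory DB for a quick smoke test
    conn.execute(
        "CREATE TABLE IF NOT EXISTS episodes (id INTEGER PRIMARY KEY, title TEXT)"
    )
    conn.execute("INSERT INTO episodes (title) VALUES (?)", ("Hello, world",))
    conn.commit()
    print(conn.execute("SELECT id, title FROM episodes").fetchall())
```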
One Comment
This post offers a compelling perspective on balancing rapid experimentation with technical pragmatism, especially within startup contexts. The “three-month rule” serves as a practical guideline to avoid getting bogged down in over-engineering early on—a lesson many of us learn the hard way.
Your approach to infrastructure—consolidating resources on a single VM and hardcoding configurations—embodies the “do things that don’t scale” philosophy effectively. It reminds me that sometimes, simplicity accelerates learning and reduces technical debt, enabling us to focus on truly understanding user needs before investing in scalable solutions.
Using SQLite for concurrency is an interesting choice; it underscores the importance of evaluating real-world usage patterns before committing to more complex setups. It’s a great example of the “learn by doing” approach, where immediate feedback guides future architecture decisions.
Overall, your framework encourages a mindset shift: prioritize learning and iteration over premature optimization. It’s a valuable reminder that engineering decisions should be driven by current needs, not hypothetical future scalability. Thanks for sharing this insightful strategy!