Embracing the Imperfect: The 3-Month Rule for Unscalable Solutions
In the world of startups and emerging technologies, the mantra often echoed is Paul Graham’s wisdom: “Do things that don’t scale.” However, the practical implementation of this advice within the realm of software development is less frequently discussed. In my experience developing an AI podcast platform over the past eight months, I’ve devised a straightforward yet effective strategy: every unscalable “hack” is granted a lifespan of three months. After this period, it either proves its worth through tangible results, or it’s discarded.
As engineers, we’re trained to build robust, scalable solutions from the start. We’re drawn to sophisticated design patterns, microservices, and architectures capable of handling millions of users. But that approach serves large organizations far better than startups, where the greater danger is over-investing in scalability while neglecting immediate user needs.
Instead of getting caught up in optimizations for hypothetical future users, my three-month rule compels me to focus on creating straightforward, albeit imperfect, solutions that can be deployed quickly and provide real insights into user requirements.
My Strategic Infrastructure Hacks: Insights from the Trenches
1. Consolidating Services on a Single VM
I’ve chosen to run my entire platform, including the database, web server, background jobs, and Redis, on a single $40/month virtual machine. This setup lacks redundancy and relies on manual backups to my local system.
Why is this a wise decision? In just two months, I’ve gained invaluable knowledge about my resource needs. I learned that my app peaks at 4GB of RAM, which means the advanced Kubernetes configuration I almost implemented would have spent its time orchestrating mostly idle resources. And when the server crashes (which has happened twice), I gather concrete data about the underlying cause, which is rarely what I would have guessed.
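To make that concrete, here is a minimal sketch of the kind of throwaway monitoring script that surfaces numbers like the 4GB peak. It’s illustrative rather than my exact tooling: the psutil dependency, the log path, and the sampling interval are all assumptions.

import time
import psutil  # third-party: pip install psutil

LOG_PATH = "/var/log/resource_peeks.log"  # hypothetical location

def log_resource_usage(interval_seconds: int = 60) -> None:
    """Append RAM and CPU usage to a log file once per interval."""
    peak_ram = 0
    while True:
        mem = psutil.virtual_memory()
        peak_ram = max(peak_ram, mem.used)
        with open(LOG_PATH, "a") as log:
            log.write(
                f"{time.strftime('%Y-%m-%d %H:%M:%S')} "
                f"ram_used={mem.used // 2**20}MB "
                f"ram_peak={peak_ram // 2**20}MB "
                f"cpu={psutil.cpu_percent()}%\n"
            )
        time.sleep(interval_seconds)

if __name__ == "__main__":
    log_resource_usage()

A week or two of output like this tells you more about real capacity needs than any amount of up-front capacity planning.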
2. Hardcoded Configuration Values
Instead of using external configuration files, I’ve scattered constants throughout my codebase:
PRICE_TIER_1 = 9.99
PRICE_TIER_2 = 19.99
MAX_USERS = 100
AI_MODEL = "gpt-4"
Any change requires a redeployment, but there’s a silver lining. This method allows me to quickly search for specific values throughout my code, and every price modification is logged in Git history. Over three months, I’ve updated these values three times, resulting in a mere 15 minutes of total overhead.
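As a hedged illustration of why this works in practice, here is roughly how such constants might be consumed; the function names below are hypothetical, not taken from my codebase.

# Constants as they appear above.
PRICE_TIER_1 = 9.99
PRICE_TIER_2 = 19.99
MAX_USERS = 100
AI_MODEL = "gpt-4"

def monthly_price(tier: int) -> float:
    # A single search for PRICE_TIER_ finds every place pricing is decided.
    prices = {1: PRICE_TIER_1, 2: PRICE_TIER_2}
    return prices[tier]

def can_register(current_user_count: int) -> bool:
    # Hard cap on signups until the platform outgrows a single VM.
    return current_user_count < MAX_USERS

Changing a price is then a one-line diff, which is exactly why the full history of every pricing decision ends up in Git for free.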
One Comment
This approach of implementing the 3-month rule offers a pragmatic balance between agility and learning—especially valuable for startups and early-stage projects. By intentionally opting for quick-and-dirty solutions with clear time boundaries, you’re creating a safe space for experimentation and real user feedback without the paralysis that often accompanies over-engineering. The key insight here is that the value isn’t just in getting a system running—it’s in iterative learning and adaptation. Your example of consolidating services on a single VM demonstrates that understanding actual resource use early on can save significant effort and expense later, while hardcoded configs, despite their drawbacks, serve as a quick-feedback loop. Would love to hear more about how you plan to evolve these solutions once the three-month window expires—do you have a strategy for refactoring or scaling once your initial hypotheses are validated?