Embracing the 3-Month Rule: A Practical Approach to Building Unscalable Solutions
In the world of startups and product development, entrepreneur and essayist Paul Graham famously advised: “Do things that don’t scale.” The mantra resonates widely, but what it looks like in actual code is rarely spelled out.
Over the past eight months, as I’ve been building my AI podcast platform, I’ve followed a simple rule: every unscalable hack gets a lifespan of exactly three months. At the end of that window, the hack must either justify itself with tangible results and a path for further development, or it gets phased out.
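Keeping myself honest about the deadline takes nothing more than a dated list checked by a script. Here is a minimal sketch of the idea; the hack names, dates, and helper are illustrative, not my actual ledger:

```python
from datetime import date, timedelta

# Illustrative ledger of unscalable hacks and when each one shipped.
HACKS = [
    {"name": "single-vm-deploy", "shipped": date(2025, 1, 15)},
    {"name": "hardcoded-config", "shipped": date(2025, 2, 1)},
]

REVIEW_PERIOD = timedelta(days=90)  # the three-month lifespan

def hacks_due_for_review(today: date | None = None) -> list[str]:
    """Return hacks past their deadline: prove their worth or get deleted."""
    today = today or date.today()
    return [h["name"] for h in HACKS if today - h["shipped"] >= REVIEW_PERIOD]

if __name__ == "__main__":
    for name in hacks_due_for_review():
        print(f"{name}: three months are up, promote it or phase it out")
```

A weekly cron job or CI step that runs something like this is enough to force the promote-or-delete decision instead of letting hacks linger indefinitely.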
As technologists, we are typically conditioned to craft “scalable” architectures from the outset. We delve into design patterns, microservices, and distributed systems aimed at accommodating vast user bases. Yet, such grand designs can lead to costly delays in a startup environment. Often, we find ourselves optimizing for future users who may never materialize, while neglecting the insights from our current, real users. The 3-month rule compels me to develop straightforward, albeit imperfect, code that can be swiftly deployed and genuinely enhances my understanding of user needs.
My Current Infrastructure Hacks and Why They Pay Off:
1. Consolidated Operations on One Virtual Machine
Running a database, web server, background jobs, and Redis on a single $40/month virtual machine may seem imprudent, especially with no redundancy and only manual backups. But this strategy has proven invaluable. In just two months, I learned more about actual resource consumption than volumes of planning documents could have told me. For instance, my “AI-heavy” platform peaked at only 4GB of RAM, which makes an elaborate Kubernetes setup unnecessary at this stage. And each crash has produced real data about where the system is fragile, often in places that differed from my initial expectations.
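Measuring this kind of thing does not require a monitoring stack; a cron-driven snapshot script appending to a log file is enough. A minimal sketch, assuming the third-party psutil package (the log path and fields are illustrative):

```python
import psutil  # third-party: pip install psutil
from datetime import datetime

def log_usage(path: str = "usage.log") -> None:
    """Append one line of RAM/CPU usage; run from cron every few minutes."""
    mem = psutil.virtual_memory()
    cpu = psutil.cpu_percent(interval=1)  # sample CPU over one second
    with open(path, "a") as f:
        f.write(
            f"{datetime.now().isoformat()} "
            f"ram_used_gb={mem.used / 1e9:.2f} cpu_pct={cpu:.1f}\n"
        )

if __name__ == "__main__":
    log_usage()
```

A few weeks of lines like these answer the capacity question with real numbers rather than projections.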
2. Directly Hardcoded Configurations
Instead of relying on configuration files or environment variables, I have opted for straightforward constants throughout my codebase, like:
```python
PRICE_TIER_1 = 9.99
PRICE_TIER_2 = 19.99
MAX_USERS = 100
AI_MODEL = "gpt-4"
```
This approach allows me to perform quick searches across the code for any config value. Each change is tracked in version control, creating a clear history of modifications. While developing a dedicated configuration service might have taken a week, my three changes in three months have required nothing more than editing a constant and committing it.
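And if one of these constants ever earns real configurability, the graduation path still doesn’t have to be a configuration service. A minimal sketch of the smallest next step, using environment-variable overrides with the hardcoded values as defaults (the variable names are illustrative):

```python
import os

# Hardcoded defaults stay greppable and version-controlled; an environment
# variable can override any of them without a configuration service.
PRICE_TIER_1 = float(os.getenv("PRICE_TIER_1", "9.99"))
PRICE_TIER_2 = float(os.getenv("PRICE_TIER_2", "19.99"))
MAX_USERS = int(os.getenv("MAX_USERS", "100"))
AI_MODEL = os.getenv("AI_MODEL", "gpt-4")
```

The defaults stay in the code where they are easy to find, and nothing changes until an override is actually needed.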
One Comment
This is a compelling approach that underscores the importance of validating assumptions through rapid iteration and real-world testing. I appreciate how the 3-month rule creates a disciplined timeframe for assessing the viability of unscalable hacks—transforming them from impulsive experiments into validated features or decisive deletions.
Your emphasis on starting with simple, resource-efficient setups—like consolidating on one VM and hardcoding configurations—resonates with the agile startup mindset. These strategies enable quick learning cycles and conserve resources that can be better allocated once proven concepts are validated.
One additional insight I’d add is the value of documenting your experiments, even as quick notes on what worked, what didn’t, and why. Over time, those notes become a record that can guide future decisions without over-investing in unvalidated infrastructure or architecture.
Ultimately, this approach balances the necessity of doing “unscalable” work early on with a clear measure of when to pivot, ensuring rapid progress while avoiding technical debt accumulation. Thanks for sharing such a practical and insightful framework!