The 3-Month Experiment: A Practical Approach to Building an Unscalable Foundation
In the tech world, Paul Graham famously urged entrepreneurs to “do things that don’t scale.” Yet practical strategies for carrying that philosophy into day-to-day coding and development are scarce.
After eight months of creating my AI podcast platform, I’ve established a straightforward framework: any unscalable workaround I implement is given a three-month lifespan. At the end of this period, it either demonstrates its value and gets properly integrated, or it gets phased out.
As developers, we’re typically trained to focus on building scalable solutions right from the start. We become enamored with design patterns, microservices, and distributed systems—all the components that can support millions of users. However, this mindset often aligns more with larger organizations and can lead to expensive delays in startup environments. My three-month rule compels me to focus on writing straightforward, albeit imperfect, code that enables rapid deployment and reveals the genuine needs of my users.
My Current Approaches: Why They Make Sense
1. Everything Consolidated on a Single VM
My entire infrastructure—database, web server, background jobs, and Redis—runs on a single $40/month virtual machine with no redundancy, backed up manually to my local system.
This strategy has proven insightful: I’ve learned more about my actual resource needs in the past two months than any capacity-planning meeting ever taught me. For instance, my “AI-intensive” platform maxes out at 4GB of RAM. The Kubernetes setup I nearly implemented? It would have only served to manage idle containers.
When the system does crash—as it has twice—I gather genuine data about failure points. Spoiler alert: it’s never what I initially anticipated.
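The manual-backup step above can be sketched in a few lines. This is my own illustration, not the author’s actual script—the directory layout, naming scheme, and use of a local tarball are all assumptions:

```python
import datetime
import shutil

def backup(data_dir: str, dest_dir: str) -> str:
    """Create a timestamped .tar.gz snapshot of data_dir inside dest_dir.

    Hypothetical sketch of a 'manual backup to my local system' step;
    paths and naming are assumptions, not taken from the post.
    """
    stamp = datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
    base = f"{dest_dir}/backup-{stamp}"
    # shutil.make_archive appends the archive extension itself
    # and returns the full path to the file it created.
    return shutil.make_archive(base, "gztar", root_dir=data_dir)
```

Run by hand, on your own schedule—the lack of automation is exactly the kind of unscalable shortcut the three-month rule is meant to stress-test.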
2. Hardcoded Values for Configuration
PRICE_TIER_1 = 9.99
PRICE_TIER_2 = 19.99
MAX_USERS = 100
AI_MODEL = "gpt-4"
There are no configuration files or environment variables—just constants scattered throughout the code. Altering any configuration means a swift redeployment.
The unanticipated benefit? I can search my entire codebase for any configuration parameter in the blink of an eye. Each price modification leaves a footprint in the Git history, and every change undergoes code review—even if it’s self-imposed.
Creating a configuration management service would require a week, yet I’ve updated these values merely three times.
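To make the grep-ability point concrete, here is a minimal sketch of code consuming those constants directly. The constants come from the post; the `monthly_price` and `can_register` functions are my hypothetical examples of consumers, not the author’s code:

```python
# Constants as listed in the post; no config files, no environment variables.
PRICE_TIER_1 = 9.99
PRICE_TIER_2 = 19.99
MAX_USERS = 100
AI_MODEL = "gpt-4"

def monthly_price(tier: int) -> float:
    """Look up the monthly price for a tier; raises KeyError on unknown tiers."""
    return {1: PRICE_TIER_1, 2: PRICE_TIER_2}[tier]

def can_register(current_users: int) -> bool:
    """Hard cap on signups, enforced straight from the constant."""
    return current_users < MAX_USERS
```

Changing a price is then a single grep (e.g. `grep -rn PRICE_TIER_1 .`), one commit that documents the change in Git history, and one redeploy.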
One Comment
This approach highlights a pragmatic balance between rapid iteration and strategic learning—something that’s often overlooked in favor of immediate scalability. I appreciate how the three-month rule serves as a valuable psychological and practical framework, encouraging you to implement unscalable solutions confidently, observe their effects, and then decide whether to invest in more scalable alternatives.
Your emphasis on direct, low-overhead experimentation—like running everything on a single VM or hardcoding config values—reminds us that early-stage development is about discovering what truly matters for your users and infrastructure. Overengineering upfront can indeed distract from rapid product iteration and user feedback.
This mindset also aligns with lean startup principles, emphasizing validated learning over speculative investments. As your project matures, transitioning to more scalable solutions becomes a natural next step, but your current method effectively minimizes wasted effort early on. Thanks for sharing this insightful framework—it’s a valuable reminder that sometimes simplicity and speed pave the way for sustainable growth.