Embracing the 3-Month Rule: A Practical Approach to Unscalable Solutions
In the world of startup development, there’s a well-known piece of advice from Paul Graham: “Do things that don’t scale.” How to apply that advice to the code itself, however, is rarely discussed. After eight months of building my AI podcast platform, I’ve settled on a simple framework: every unscalable technique gets a lifespan of three months. After that, it either proves its worth and earns a more robust implementation, or it is discarded.
As engineers, we are trained to build scalable solutions from the outset. We reach for design patterns, microservices, and distributed systems, envisioning architectures that can serve millions of users. In a startup, however, this mindset can be a hindrance.
In many cases, focusing on scalability can lead to costly delays — particularly when you are optimizing for users who have yet to materialize and solving challenges that may never arise. My three-month framework compels me to write straightforward, albeit imperfect, code that is deployable. This allows me to gather genuine insights into what users truly need.
Current Infrastructure Shortcuts and Their Merits
1. Consolidating Everything on One Virtual Machine
I run my database, web server, background jobs, and Redis on a single $40/month virtual machine with no redundancy and manual backups to my local system. While this might seem reckless, it let me accurately assess my resource requirements in just two months. My “AI-heavy” platform peaks at just 4GB of RAM, so the complex Kubernetes setup I almost built would have spent most of its time managing idle resources.
When the system crashes (which has happened twice), I gather valuable data on the failures. Interestingly, the problems are rarely what I anticipated.
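For a sense of how I arrived at numbers like the 4GB peak, the sketch below shows the kind of lightweight resource logging I mean. It is not the platform’s actual code; it assumes psutil is installed, and the log path and interval are placeholders.

```python
# Minimal resource logger for a single VM: run it in a tmux session or as a cron job.
# Assumes `pip install psutil`; LOG_PATH and INTERVAL_SECONDS are illustrative values.
import time
import psutil

LOG_PATH = "/var/log/resource_usage.log"  # hypothetical location
INTERVAL_SECONDS = 60

def snapshot() -> str:
    mem = psutil.virtual_memory()
    cpu = psutil.cpu_percent(interval=1)  # sample CPU over one second
    timestamp = time.strftime("%Y-%m-%dT%H:%M:%S")
    return f"{timestamp} mem_used={mem.used / 2**30:.2f}GiB cpu={cpu:.0f}%"

if __name__ == "__main__":
    while True:
        with open(LOG_PATH, "a") as f:
            f.write(snapshot() + "\n")
        time.sleep(INTERVAL_SECONDS)
```

A log like this, tailed after a crash, is often enough to see what the box was doing in its final minutes.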
2. Using Hardcoded Configurations
Instead of relying on config files or environment variables, I maintain hardcoded constants within my code, such as:
```python
PRICE_TIER_1 = 9.99
PRICE_TIER_2 = 19.99
MAX_USERS = 100
AI_MODEL = "gpt-4"
```
This approach might seem outdated, but the advantage is significant: I can grep the entire codebase for any configuration value, each price adjustment is recorded in my git history, and every config change goes through the same review as any other code change, albeit an informal one.
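When a constant survives its three months and earns a more robust implementation, the promotion can be as small as an environment-variable override that keeps the hardcoded value as the default. Here is a minimal sketch of that pattern; it is not the platform’s actual code, and the environment variable names simply mirror the constants above.

```python
# Sketch: promoting hardcoded constants to environment-variable overrides.
# Defaults stay in the source, so grep and git history keep working.
import os

PRICE_TIER_1 = float(os.environ.get("PRICE_TIER_1", "9.99"))
PRICE_TIER_2 = float(os.environ.get("PRICE_TIER_2", "19.99"))
MAX_USERS = int(os.environ.get("MAX_USERS", "100"))
AI_MODEL = os.environ.get("AI_MODEL", "gpt-4")
```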
One Comment
This post offers compelling insights into the practical application of “doing things that don’t scale” within the context of engineering and startup development. The three-month framework is a smart way to balance experimentation with accountability, fostering a mindset that prioritizes learning over perfection early on.
I especially appreciate the emphasis on rapid prototyping and the willingness to accept short-term inefficiencies, such as consolidating infrastructure or hardcoding configurations, that can significantly speed up iteration cycles. This approach aligns well with the “test and learn” mindset common in early-stage startups, allowing teams to validate assumptions quickly before scaling.
One aspect worth exploring further is how to manage technical debt accumulated during these unscaled phases. Would you consider implementing regular review points after each three-month period to refactor or optimize codebases, or perhaps establish clear metrics to determine when a solution is ready for scaling?
Overall, this methodology exemplifies the importance of flexibility and intentional experimentation in building resilient startups—thanks for sharing such a practical perspective!