Python Backend Development: From Django ORM Pitfalls to FastAPI Performance
There's a massive gap between knowing Python syntax and understanding how to build backend systems that actually work at scale. You can learn Django in a weekend, build a working API in an afternoon with Flask, or create endpoints with FastAPI's automatic documentation. But none of that prepares you for the moment when your endpoint starts timing out because it's making 200 database queries per request.
The real learning happens when you deploy to production and everything breaks.
The Django ORM Problem Nobody Talks About
Django's ORM is elegant. You write Python, it generates SQL. But that abstraction hides performance problems until they bite you.
Here's a common pattern that looks perfectly reasonable:
```python
def post_detail(request, post_id):
    post = Post.objects.get(id=post_id)
    comments = Comment.objects.filter(post=post)
    return render(request, 'post.html', {'post': post, 'comments': comments})
```
Clean code. Clear intent. Seems fine. Now open that template:
```html
<h1>{{ post.title }}</h1>
<p>By {{ post.author.username }}</p>
{% for comment in comments %}
  <p>{{ comment.text }}</p>
  <small>- {{ comment.author.username }}</small>
{% endfor %}
```
This innocent-looking code just triggered a database query storm. When the template accesses `post.author.username`, that's a query. Each comment's `comment.author.username` is another query. One post with 50 comments = 53 database queries: one for the post, one for the comments, one for the post's author, and one per comment's author.
This is the N+1 problem, and it's the single biggest performance trap in Django.
The fix requires understanding Django's query optimization:
```python
def post_detail(request, post_id):
    post = Post.objects.select_related('author').get(id=post_id)
    comments = Comment.objects.filter(post=post).select_related('author')
    return render(request, 'post.html', {'post': post, 'comments': comments})
```
Now it's 2 queries total. `select_related` performs a SQL JOIN, loading related objects in the same query. The difference between 53 queries and 2 is the difference between a slow app and a fast one.
But when do you use `select_related` versus `prefetch_related`?
Use `select_related` for:
- ForeignKey relationships
- OneToOne relationships
- When you always access the related object
Use `prefetch_related` for:
- ManyToMany relationships
- Reverse ForeignKey relationships (accessing comments from a post)
- When you might not access all related objects
Here's where it gets interesting—combining them:
```python
from django.db.models import Prefetch

posts = Post.objects.prefetch_related(
    Prefetch(
        'comment_set',
        queryset=Comment.objects.filter(approved=True).select_related('author')
    )
).select_related('author')
```
This loads posts with their authors (JOIN), then efficiently prefetches only approved comments with their authors. It's the kind of optimization you only learn by debugging production slowness.
Flask: Architectural Decisions From Day One
Flask gives you freedom, which means you make architectural decisions whether you realize it or not. Every Flask app starts simple:
```python
from flask import Flask, jsonify

app = Flask(__name__)

@app.route('/api/posts')
def get_posts():
    posts = db.query('SELECT * FROM posts')
    return jsonify(posts)
```
Works great for the first prototype. But real applications need structure. Where does business logic live? How do you test this? How do you handle authentication across multiple routes?
A mature Flask application looks more like this:
```
project/
├── app/
│   ├── __init__.py          # Application factory
│   ├── models/
│   │   ├── user.py
│   │   └── post.py
│   ├── routes/
│   │   ├── auth.py
│   │   └── api.py
│   ├── services/
│   │   └── post_service.py  # Business logic
│   └── middleware/
│       └── auth.py
├── config.py
└── tests/
```
The application factory pattern separates configuration from application code:
```python
# app/__init__.py
from flask import Flask
from flask_sqlalchemy import SQLAlchemy

db = SQLAlchemy()

def create_app(config_name='default'):
    app = Flask(__name__)
    app.config.from_object(f'config.{config_name}')

    db.init_app(app)

    # Imported inside the factory to avoid circular imports
    from app.routes.auth import auth_bp
    from app.routes.api import api_bp
    app.register_blueprint(auth_bp, url_prefix='/auth')
    app.register_blueprint(api_bp, url_prefix='/api')

    return app
```
Business logic moves to service layers:
```python
# app/services/post_service.py
from app import db
from app.models.user import User
from app.models.post import Post

class PostService:
    @staticmethod
    def create_post(title, content, author_id):
        # Validation logic
        if len(title) < 5:
            raise ValueError('Title too short')

        # Business rules
        user = User.query.get(author_id)
        if user.post_count >= user.daily_limit:
            raise ValueError('Daily post limit reached')

        # Database operations
        post = Post(title=title, content=content, author_id=author_id)
        db.session.add(post)
        db.session.commit()
        return post
```
Routes stay thin, handling only HTTP concerns:
```python
# app/routes/api.py
from flask import Blueprint, request, jsonify, g
from app.services.post_service import PostService

api_bp = Blueprint('api', __name__)

@api_bp.route('/posts', methods=['POST'])
@require_auth  # Decorator handles authentication
def create_post():
    data = request.get_json()
    try:
        post = PostService.create_post(
            title=data['title'],
            content=data['content'],
            author_id=g.current_user.id
        )
        return jsonify(post_schema.dump(post)), 201
    except ValueError as e:
        return jsonify({'error': str(e)}), 400
```
This separation makes testing straightforward—you can test business logic without HTTP, and test routes without implementing full services.
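That claim is easy to demonstrate with a framework-free sketch. The `create_post` function below is a simplified stand-in for the service method above (names and rules are illustrative): its business rule can be exercised with plain function calls, no HTTP client or database required.

```python
def create_post(title: str, content: str) -> dict:
    # Business rule: titles must be at least 5 characters
    if len(title) < 5:
        raise ValueError('Title too short')
    return {'title': title, 'content': content}

# Plain-function tests -- no server, no database
def test_rejects_short_title():
    try:
        create_post('Hi', 'body')
        assert False, 'expected ValueError'
    except ValueError as e:
        assert str(e) == 'Title too short'

def test_accepts_valid_title():
    post = create_post('Hello world', 'body')
    assert post['title'] == 'Hello world'

test_rejects_short_title()
test_accepts_valid_title()
```

If the logic lived inside a route handler, each of these checks would need a test client, a request payload, and an authenticated session.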
FastAPI: When Async Actually Matters
FastAPI's killer feature is native async/await support. But async isn't magic—it's a specific tool for specific problems.
The classic example everyone shows:
```python
# Synchronous - blocks for ~900ms total (three sequential ~300ms calls)
import requests

user = requests.get(f'https://api.example.com/users/{id}').json()
posts = requests.get(f'https://api.example.com/posts/{id}').json()
comments = requests.get(f'https://api.example.com/comments/{id}').json()
```
versus
```python
# Asynchronous - blocks for ~300ms total
import asyncio
import aiohttp

async def fetch_all(id):
    async with aiohttp.ClientSession() as session:
        tasks = [
            session.get(f'https://api.example.com/users/{id}'),
            session.get(f'https://api.example.com/posts/{id}'),
            session.get(f'https://api.example.com/comments/{id}')
        ]
        responses = await asyncio.gather(*tasks)
        return responses
```
The async version runs all three requests concurrently. While waiting for the first API to respond, it starts the other two. Total time is limited by the slowest request, not the sum of all requests.
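You can observe this without any network at all. The sketch below swaps the HTTP calls for `asyncio.sleep` delays and times the `gather`:

```python
import asyncio
import time

async def fake_request(delay: float) -> str:
    # Stands in for an HTTP call that spends `delay` seconds waiting on I/O
    await asyncio.sleep(delay)
    return f'response after {delay}s'

async def main():
    start = time.perf_counter()
    # All three "requests" wait concurrently
    results = await asyncio.gather(
        fake_request(0.1),
        fake_request(0.2),
        fake_request(0.3),
    )
    elapsed = time.perf_counter() - start
    return results, elapsed

results, elapsed = asyncio.run(main())
print(f'{len(results)} responses in {elapsed:.2f}s')  # ~0.3s, not 0.6s
```

The total tracks the slowest delay (0.3s), not the 0.6s sum, which is exactly the behavior the aiohttp version relies on.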
But here's what tutorials don't tell you: async doesn't help with CPU-bound work. If you're processing images, compressing videos, or running complex algorithms, async won't speed anything up. For those cases, you need multiprocessing:
```python
import asyncio
from concurrent.futures import ProcessPoolExecutor
from typing import List

from fastapi import FastAPI, UploadFile

app = FastAPI()

def process_image(image_data: bytes):
    # CPU-intensive work runs here, in a worker process
    return apply_filters(image_data)

@app.post('/process-images')
async def process_images(images: List[UploadFile]):
    # UploadFile.read() is itself async -- read everything before handing off
    payloads = [await img.read() for img in images]
    loop = asyncio.get_running_loop()
    with ProcessPoolExecutor() as executor:
        # Runs in separate processes, uses multiple CPU cores
        results = await asyncio.gather(*[
            loop.run_in_executor(executor, process_image, data)
            for data in payloads
        ])
    return results
```
The pattern: async for I/O-bound work (APIs, databases, files), multiprocessing for CPU-bound work.
Authentication: Beyond the Happy Path
Every backend needs authentication. The basic implementation looks like this:
```python
from datetime import datetime, timedelta

from fastapi import HTTPException
from passlib.context import CryptContext
from jose import jwt

pwd_context = CryptContext(schemes=['bcrypt'])

def create_access_token(user_id: int) -> str:
    payload = {'sub': user_id, 'exp': datetime.utcnow() + timedelta(hours=1)}
    return jwt.encode(payload, SECRET_KEY, algorithm='HS256')

@app.post('/login')
async def login(username: str, password: str):
    user = await db.get_user_by_username(username)
    if not user or not pwd_context.verify(password, user.hashed_password):
        raise HTTPException(status_code=401)
    token = create_access_token(user.id)
    return {'access_token': token}
```
Seems solid. But production authentication needs more:
Refresh tokens (short-lived access tokens, long-lived refresh tokens):
```python
from jose import jwt, JWTError

# Assumes create_access_token has been extended to accept an
# `expires` timedelta instead of a hard-coded one hour
def create_tokens(user_id: int):
    access_token = create_access_token(user_id, expires=timedelta(minutes=15))
    refresh_token = create_access_token(user_id, expires=timedelta(days=7))
    return access_token, refresh_token

@app.post('/refresh')
async def refresh(refresh_token: str):
    try:
        payload = jwt.decode(refresh_token, SECRET_KEY, algorithms=['HS256'])
        new_access_token = create_access_token(
            payload['sub'], expires=timedelta(minutes=15)
        )
        return {'access_token': new_access_token}
    except JWTError:
        raise HTTPException(status_code=401)
```
Rate limiting (prevent brute force attacks):
```python
from fastapi import Request
from slowapi import Limiter, _rate_limit_exceeded_handler
from slowapi.errors import RateLimitExceeded
from slowapi.util import get_remote_address

limiter = Limiter(key_func=get_remote_address)
app.state.limiter = limiter
app.add_exception_handler(RateLimitExceeded, _rate_limit_exceeded_handler)

@app.post('/login')
@limiter.limit("5/minute")  # Max 5 login attempts per minute
async def login(request: Request, username: str, password: str):
    ...  # authentication logic
```
Token invalidation (logout, security):
The problem with JWTs: you can't invalidate them without a database lookup, which defeats the purpose of stateless tokens. Solutions:
- Short-lived tokens (15 minutes) + refresh tokens
- Token blocklist in Redis
- Version numbers in tokens that increment on password change
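The blocklist idea can be sketched without Redis. Here's a minimal in-memory version (the class and names are illustrative; production would use Redis with TTLs so revocations survive restarts and are shared across workers):

```python
import time

class TokenBlocklist:
    """Tracks revoked token IDs (jti claims) until their natural expiry."""

    def __init__(self):
        self._revoked = {}  # jti -> expiry timestamp

    def revoke(self, jti: str, expires_at: float):
        # No need to remember a token past its own expiry
        self._revoked[jti] = expires_at

    def is_revoked(self, jti: str) -> bool:
        # Prune entries whose tokens have expired anyway
        now = time.time()
        self._revoked = {k: v for k, v in self._revoked.items() if v > now}
        return jti in self._revoked

blocklist = TokenBlocklist()
blocklist.revoke('token-123', time.time() + 900)  # revoke for 15 minutes
print(blocklist.is_revoked('token-123'))  # True
print(blocklist.is_revoked('token-456'))  # False
```

Note the trade-off: every request now pays a lookup, which is why short-lived tokens plus the version approach below are often preferred.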
```python
# Token version approach
def create_access_token(user_id: int, token_version: int):
    payload = {
        'sub': user_id,
        'version': token_version,
        'exp': datetime.utcnow() + timedelta(minutes=15)
    }
    return jwt.encode(payload, SECRET_KEY, algorithm='HS256')

async def verify_token(token: str):
    payload = jwt.decode(token, SECRET_KEY, algorithms=['HS256'])
    user = await db.get_user(payload['sub'])
    if user.token_version != payload['version']:
        raise HTTPException(status_code=401, detail='Token invalidated')
    return user
```
When a user changes their password or you need to invalidate all their tokens, increment `token_version` in the database.
Database Patterns That Actually Matter
Backend development is database-heavy. The patterns that matter:
Connection pooling (don't create a new connection per request):
```python
# SQLAlchemy
from sqlalchemy import create_engine

engine = create_engine(
    'postgresql://localhost/mydb',
    pool_size=20,       # Maintain 20 connections
    max_overflow=10,    # Allow 10 more if needed
    pool_timeout=30,    # Wait 30s for an available connection
    pool_recycle=3600   # Recycle connections after 1 hour
)
```
Transactions (atomic operations):
```python
from sqlalchemy.orm import Session

def transfer_funds(from_user_id: int, to_user_id: int, amount: float):
    with Session(engine) as session:
        with session.begin():  # Starts a transaction
            from_user = session.query(User).filter_by(
                id=from_user_id).with_for_update().first()
            to_user = session.query(User).filter_by(
                id=to_user_id).with_for_update().first()
            if from_user.balance < amount:
                raise ValueError('Insufficient funds')
            from_user.balance -= amount
            to_user.balance += amount
            # Commits automatically if no exception
            # Rolls back if any exception occurs
```
`with_for_update()` locks the selected rows, preventing race conditions where two concurrent transfers could both read the same balance before either writes.
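The same race exists in plain Python, which makes it easy to demonstrate. In this sketch a `threading.Lock` plays the role of the row lock -- without it, concurrent read-modify-write updates can lose increments:

```python
import threading

balance = 0
lock = threading.Lock()

def deposit(times: int):
    global balance
    for _ in range(times):
        with lock:  # Analogous to SELECT ... FOR UPDATE on the row
            current = balance      # read
            balance = current + 1  # write based on the read

threads = [threading.Thread(target=deposit, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(balance)  # 40000 -- remove the lock and this can come up short
```

The lock forces each read-then-write to complete before the next begins, exactly what `FOR UPDATE` guarantees at the database level.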
Migrations (versioned schema changes):
```python
# Using Alembic
from alembic import op
import sqlalchemy as sa

def upgrade():
    op.add_column('users', sa.Column('email_verified', sa.Boolean, default=False))
    op.create_index('idx_users_email', 'users', ['email'])

def downgrade():
    op.drop_index('idx_users_email', table_name='users')
    op.drop_column('users', 'email_verified')
```
Every schema change is versioned and reversible. You can upgrade, downgrade, and see the full history of your database structure.
What Separates Good from Great
Backend development isn't about knowing frameworks—it's about understanding systems. The differences that matter:
Thinking about failure: What happens when the database goes down? When Redis is unreachable? When an API call times out? Great backends gracefully degrade instead of crashing.
Understanding trade-offs: Using PostgreSQL vs MongoDB isn't about one being "better"—it's about your access patterns, consistency requirements, and team expertise.
Monitoring from day one: Production issues are inevitable. Having metrics, logs, and traces means you can diagnose problems instead of guessing.
Writing testable code: Separating business logic from framework code makes testing possible without spinning up servers or databases.
The path to mastery isn't memorizing Django settings or Flask extensions. It's building real systems, watching them break under load, debugging production issues at 2am, and learning from each failure. That experience—knowing why systems fail and how to prevent it—is what makes a senior backend developer.
Practice on Vibe Interviews to sharpen these skills in realistic scenarios, but remember: the real learning happens when you build something, deploy it, and deal with the consequences.
Vibe Interviews Team
Part of the Vibe Interviews team, dedicated to helping job seekers ace their interviews and land their dream roles.