How Understanding Concurrency Saved Me in a Microsoft Interview
A few months ago, I was interviewing at Microsoft for an SDE-2 role. I was a bit nervous, but things were going well: I'd nailed the algorithm round, answered some system design questions, and felt confident.
Then, midway through my system design round, came the curveball.
👉 “What happens if two transactions try to update the same row in a database at the same time?”
For a second, my brain froze. Was the answer about isolation levels? Locks? Maybe just “the database takes care of it”? Saying that would’ve been the blunder of my career.
Luckily, I had brushed up on concurrency and locking just a week earlier. I took a breath and explained how databases handle multiple users querying and writing at the same time. The interviewer nodded, smiled, and said:
“That’s exactly what I wanted to hear.”
That day, I realized that knowing how databases really handle concurrency isn’t optional for backend developers. It’s critical. Let me share what I explained in that interview (and what saved me).
🧵 Why Concurrency Is a Problem
Concurrency in a nutshell
Databases often serve hundreds or thousands of users simultaneously. Some users only read data (queries), while others write updates or deletes. Without control, chaos happens.
Take this example:
-- Transaction 1
UPDATE accounts SET balance = balance - 100 WHERE id = 1;

-- Transaction 2
UPDATE accounts SET balance = balance - 100 WHERE id = 1;
If both execute at the same time, and each first checks that enough money is available, both checks can pass before either update lands. The account loses $200 even though the user only had $100.
💡 Imagine two people withdrawing cash from the same ATM at the same time — if the machine doesn’t coordinate, you both walk away with the same $100 bill.
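To make that race concrete, here's a minimal sketch of the check-then-withdraw pattern going wrong. It assumes the same accounts table as above, with both sessions starting from a $100 balance; the interleaving in the comments is one possible ordering, not the only one.

-- Session 1: reads the balance, sees 100
BEGIN;
SELECT balance FROM accounts WHERE id = 1;

-- Session 2: reads the balance at the same moment, also sees 100
BEGIN;
SELECT balance FROM accounts WHERE id = 1;

-- Both sessions decide the withdrawal is allowed and subtract 100
UPDATE accounts SET balance = balance - 100 WHERE id = 1;   -- Session 1
COMMIT;
UPDATE accounts SET balance = balance - 100 WHERE id = 1;   -- Session 2
COMMIT;

-- Final balance: -100, even though each check saw enough money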
🔒 Locks: The First Line of Defense
Databases prevent this chaos using locks:
- Shared Locks (S): Multiple reads are fine (like people browsing books in a library).
- Exclusive Locks (X): Only one writer at a time (like checking a book out — no one else can use it until you’re done).
- Row Locks: Only the row being updated is locked.
- Table Locks: The entire table is locked (simple and safe, but blocks far more work).
Together, these make sure that conflicting operations never touch the same data at the same time, while compatible ones still run in parallel.
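In case it helps, here's a minimal sketch of requesting these locks explicitly. The syntax is PostgreSQL's (MySQL/InnoDB expresses the shared variant as SELECT ... LOCK IN SHARE MODE, or FOR SHARE since 8.0), and the accounts table is the same hypothetical one used above.

-- Shared (S) row lock: other readers can still read, but writers must wait
BEGIN;
SELECT * FROM accounts WHERE id = 1 FOR SHARE;
COMMIT;

-- Exclusive (X) row lock: only this transaction may update the row until it commits
BEGIN;
SELECT * FROM accounts WHERE id = 1 FOR UPDATE;
UPDATE accounts SET balance = balance - 100 WHERE id = 1;
COMMIT;

-- Table lock: locks the whole table, blocking all other access until commit
BEGIN;
LOCK TABLE accounts IN ACCESS EXCLUSIVE MODE;
COMMIT;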
🕹️ Pessimistic vs Optimistic Concurrency
Different databases take different approaches to concurrency.
Pessimistic Concurrency

- Assume conflicts will happen.
- Lock data immediately.
- Example: MySQL InnoDB row locks (sketched right after this list).
- ✅ Safe, but can lead to lots of waiting.
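A rough sketch of the pessimistic style, assuming the same hypothetical accounts table as before: the row is locked up front, and any competing transaction simply waits its turn.

-- Pessimistic: grab the row lock before doing anything else (works in MySQL/InnoDB and PostgreSQL)
BEGIN;
SELECT balance FROM accounts WHERE id = 1 FOR UPDATE;    -- competing writers block right here
UPDATE accounts SET balance = balance - 100 WHERE id = 1;
COMMIT;                                                  -- lock released; the waiter proceeds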
Optimistic Concurrency

- Assume conflicts are rare.
- Let everyone proceed, then check at commit if data changed.
- If conflict → rollback.
- Example: PostgreSQL’s MVCC (Multi-Version Concurrency Control); a version-check sketch follows this list.
- ✅ Faster for reads, scales better.
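MVCC itself lives inside the engine, but the optimistic idea is easy to see at the application level with a version column. The version column below is my own assumption for illustration; it isn't part of the earlier example.

-- Optimistic: read without locking, and remember the version you saw
SELECT balance, version FROM accounts WHERE id = 1;   -- suppose this returns version = 7

-- At commit time, only apply the change if nobody bumped the version in the meantime
UPDATE accounts
SET balance = balance - 100, version = version + 1
WHERE id = 1 AND version = 7;

-- 0 rows updated means someone else changed the row first: roll back and retry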
💡 Pessimistic is like holding a pen tightly so no one else can grab it. Optimistic is leaving pens on a desk and only checking later if two people wrote on the same page.
📚 MVCC: Keeping Readers and Writers Happy
In PostgreSQL and Oracle, concurrency is handled with MVCC.
- Writers don’t overwrite rows. Instead, they create new row versions.
- Readers always see a consistent snapshot of the database.
- Writers never block readers, and readers never block writers.
It’s like Google Docs — you see a frozen copy while someone else edits. Once they’re done, your version updates. No waiting, no chaos.
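Here's a rough two-session sketch of this in PostgreSQL. REPEATABLE READ is used to make the snapshot behavior obvious, and the accounts table is the same hypothetical one as before.

-- Session 1: open a transaction, which pins a snapshot at its first query
BEGIN ISOLATION LEVEL REPEATABLE READ;
SELECT balance FROM accounts WHERE id = 1;   -- sees 100

-- Session 2: updates and commits (autocommit) while Session 1 is still open
UPDATE accounts SET balance = 50 WHERE id = 1;

-- Session 1: still sees its snapshot, and was never blocked by the writer
SELECT balance FROM accounts WHERE id = 1;   -- still sees 100
COMMIT;

-- A new transaction in Session 1 now sees the committed value, 50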
⚡ Transaction Isolation Levels
SQL defines levels of isolation that control how strict concurrency rules are:
- Read Uncommitted → You might see uncommitted (dirty) data.
- Read Committed → Only committed data is visible (Postgres default).
- Repeatable Read → Queries within a transaction always return the same result.
- Serializable → The strictest. Transactions behave as if they ran one by one.
💡 Think of these like hotel privacy:
- Read Uncommitted = curtains open, anyone can peek.
- Serializable = locked doors, complete privacy.
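Picking a level is a one-liner. Here's a hedged sketch: the first block is PostgreSQL syntax, the second is the MySQL flavor of the same thing (and note the defaults differ: READ COMMITTED in Postgres, REPEATABLE READ in MySQL/InnoDB).

-- PostgreSQL: run one transaction at the strictest level
BEGIN ISOLATION LEVEL SERIALIZABLE;
UPDATE accounts SET balance = balance - 100 WHERE id = 1;
COMMIT;   -- can fail with a serialization error, in which case the application retries

-- MySQL: same idea, set the level before starting the transaction
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
START TRANSACTION;
UPDATE accounts SET balance = balance - 100 WHERE id = 1;
COMMIT;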
⚔️ Deadlocks: When Transactions Get Stuck
Sometimes, two transactions wait for each other forever:
-- Transaction 1
BEGIN;
SELECT * FROM accounts WHERE id = 1 FOR UPDATE;   -- locks row A
SELECT * FROM accounts WHERE id = 2 FOR UPDATE;   -- now waits for row B

-- Transaction 2
BEGIN;
SELECT * FROM accounts WHERE id = 2 FOR UPDATE;   -- locks row B
SELECT * FROM accounts WHERE id = 1 FOR UPDATE;   -- now waits for row A
Both are waiting, and neither can proceed. This is a deadlock.
Databases solve this by detecting deadlocks and rolling back one transaction automatically.
Deadlock in picture
Imagine two people stuck in a narrow hallway: “You go first.” “No, you go first.” Until security drags one person back.
🎯 Why This Matters in Interviews (and in Real Life)
That interview question wasn’t about trivia. It was about whether I could think about systems at scale.
Understanding concurrency helps you:
- Avoid designing queries that block each other unnecessarily.
- Choose the right isolation level for your app.
- Debug deadlocks when they happen.
- Build scalable systems where hundreds of users work at once.
When I explained all this in the interview, I ended with:
“To process transactions safely, I’d use an atomic conditional update (UPDATE ... WHERE balance >= amount ... RETURNING) for simple cases, or row-level locks (SELECT ... FOR UPDATE) when complex business logic is involved. At scale, I’d add a reservation system or Redis counters with background reconciliation.”
The interviewer smiled. I didn’t oversimplify, but I didn’t drown in theory either. I connected the business problem (two users ordering from the same place) to the technical concept (database concurrency).
And that’s what saved me from a big blunder.
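For reference, here's roughly what those two patterns look like in SQL. It's a sketch against the same hypothetical accounts table, with 100 as a placeholder amount; the RETURNING clause is PostgreSQL syntax.

-- Simple case: one atomic, conditional update
UPDATE accounts
SET balance = balance - 100
WHERE id = 1 AND balance >= 100
RETURNING balance;   -- zero rows back means insufficient funds: reject the withdrawal

-- Complex business logic: lock the row first, then decide
BEGIN;
SELECT balance FROM accounts WHERE id = 1 FOR UPDATE;
-- run whatever checks the business requires, then:
UPDATE accounts SET balance = balance - 100 WHERE id = 1;
COMMIT;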
✅ Takeaway
The next time someone asks you:
👉 “What happens when multiple users query the same database at once?”
You’ll know it’s not magic. It’s a mix of locks, isolation levels, and MVCC, all working together to ensure data consistency.
And trust me — if you get this question in an interview, showing that you understand concurrency could be the difference between stumbling and shining.
