March 25, 2025
Measure Before You Optimize
How to avoid optimizing by reflex and make decisions with evidence instead of vague discomfort.
Andrews Ribeiro
Founder & Engineer
4 min · Intermediate · Systems
The problem
A lot of optimization starts with impatience.
The page feels slow.
The list feels heavy.
The API seems to take too long.
Then the team starts changing code before answering the most basic question: where is the cost actually coming from?
Without that answer, the usual risks show up fast:
- optimizing the wrong part
- adding complexity for no real gain
- declaring success on top of an improvement nobody proved
Mental model
Think about it like this:
perception raises a hypothesis; measurement confirms it or kills it.
That applies to frontend, backend, database, and async processing.
The goal is not to become obsessed with dashboards.
The goal is to stop treating discomfort like diagnosis.
Performance work gets much cleaner when the order is simple:
- notice the problem
- choose the right metric
- capture a baseline
- change something
- compare again
If you skip that order, there is a good chance you are optimizing in the dark.
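A minimal sketch of that loop in TypeScript, assuming a hypothetical /api/customers endpoint and an environment (browser or Node 16+) where performance.now is available:

// Minimal measure-change-compare harness. The endpoint, label, and
// run count are illustrative assumptions.
async function timeIt(label: string, fn: () => Promise<unknown>, runs = 20): Promise<number> {
  const samples: number[] = [];
  for (let i = 0; i < runs; i++) {
    const start = performance.now();
    await fn();
    samples.push(performance.now() - start);
  }
  const avg = samples.reduce((a, b) => a + b, 0) / samples.length;
  console.log(`${label}: avg ${avg.toFixed(1)}ms over ${runs} runs`);
  return avg;
}

// Capture the baseline, change the code, run the exact same call again,
// and compare the two numbers instead of comparing impressions.
// await timeIt("load customers", () => fetch("/api/customers"));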
Breaking it down
First: which flow is actually bad?
“The system is slow” is usually a weak description.
You need a better cut:
- opening the home page?
- saving a form?
- fetching data?
- rendering a table?
Without that cut, the measurement includes too much noise.
Then: which metric represents that pain?
Not every performance problem needs the same metric.
Examples:
- API response time
- time until the page becomes interactive
- query duration
- number of unnecessary re-renders
- job processing time
A good metric should represent what actually got worse for the user or for the system.
Measuring an easy number just because it exists is not the same as measuring the right thing.
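For example, if the pain is “saving a form feels slow”, a measurement scoped to exactly that flow might look like this sketch, where the endpoint and mark names are hypothetical:

// Timing one narrow flow with the browser Performance API,
// instead of a vague whole-page number.
async function measureSaveForm(payload: unknown): Promise<void> {
  performance.mark("save-form:start");
  await fetch("/api/forms", { method: "POST", body: JSON.stringify(payload) });
  performance.mark("save-form:end");

  performance.measure("save-form", "save-form:start", "save-form:end");
  const entries = performance.getEntriesByName("save-form");
  console.log(`save-form: ${entries[entries.length - 1].duration.toFixed(1)}ms`);
}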
Baseline is what stops self-deception
Before changing code, you need to record the current state.
That baseline might be:
- average duration
- a relevant percentile
- render count
- CPU usage
- payload size
The exact format matters less than the principle:
you need a real “before” if you want to trust the “after”.
Without that, the team is mostly hoping the change helped.
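As a sketch, recording a percentile baseline takes very little code. The sample durations below are made-up numbers:

// p95 shows the slow tail that users actually feel,
// which a single average tends to hide.
function percentile(samples: number[], p: number): number {
  const sorted = [...samples].sort((a, b) => a - b);
  const idx = Math.min(sorted.length - 1, Math.ceil((p / 100) * sorted.length) - 1);
  return sorted[idx];
}

const baselineMs = [182, 190, 175, 240, 510, 188, 195, 203, 181, 930];
console.log(`p50: ${percentile(baselineMs, 50)}ms, p95: ${percentile(baselineMs, 95)}ms`);
// p50: 190ms, p95: 930ms — the average (~299ms) would bury that tail.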
Measurement also helps kill the wrong theory
This part is underrated.
Sometimes everyone is convinced the problem is inside a component, but the real delay comes from:
- network
- database
- a third-party script
- hydration
Measurement is not only for validating a fix.
It is also what keeps you from wasting time in the wrong place.
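A quick way to test the “it must be the component” theory in the browser is to look at what the network actually cost. A sketch using the Resource Timing API, with a hypothetical /api/ URL filter:

// If these durations dominate the user-visible delay, the cost is in the
// backend or the network, and component tuning will not fix it.
const apiEntries = performance
  .getEntriesByType("resource")
  .filter((e): e is PerformanceResourceTiming => e.name.includes("/api/"));

for (const e of apiEntries) {
  const ttfb = e.responseStart - e.requestStart;
  console.log(`${e.name}: total ${e.duration.toFixed(0)}ms, ttfb ${ttfb.toFixed(0)}ms`);
}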
Not every improvement is worth the cost
This is where experience usually shows up.
An optimization can improve a number and still bring:
- harder code to maintain
- awkward coupling
- extra branches in the logic
- more room for bugs
If the gain is small and the maintenance cost is real, the change may not be worth keeping.
Simple example
Imagine a customer list that “feels slow”.
The impulsive reaction is to start sprinkling memo, useMemo, and useCallback everywhere.
A better approach is:
- open the profiler
- check whether the cost is in render, network, or processing
- measure how many renders are happening
- compare before and after the change
If the real delay was coming from the API, stuffing the component with micro-optimizations did not solve the problem.
It only made the code more annoying.
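Here is a sketch of the “measure renders first” step. useRenderCount is a hypothetical helper, not a React API:

import { useRef } from "react";

// Logs how many times a component has rendered, so the render count is a
// measured number instead of a guess. (StrictMode double-renders in dev.)
function useRenderCount(label: string): void {
  const count = useRef(0);
  count.current += 1;
  console.log(`${label} render #${count.current}`);
}

function CustomerList({ customers }: { customers: { id: string; name: string }[] }) {
  useRenderCount("CustomerList");
  return (
    <ul>
      {customers.map((c) => (
        <li key={c.id}>{c.name}</li>
      ))}
    </ul>
  );
}

If the count stays low while the list still feels slow, memoization was never the fix.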
Common mistakes
- optimizing without a baseline
- measuring a metric that does not represent the real pain
- trusting only the feeling on your local machine
- declaring success without comparing before and after
- keeping an expensive optimization even when the gain was trivial
How a senior thinks
More experienced engineers tend to treat performance like an investigation, not a reflex.
The reasoning usually sounds like this:
Before I buy complexity, I want proof of where the cost is and how much this change improves the metric that actually matters.
That protects the team from two very common traps:
- vanity tuning
- performance complexity with no meaningful return
Seniority here is not about knowing every profiling tool.
It is about understanding that performance without evidence quickly turns into opinion wearing technical clothes.
What the interviewer wants to see
In interviews, this topic often shows up when the interviewer wants to test judgment.
They want to see whether you:
- separate hypothesis from evidence
- talk about baseline and comparison
- choose the metric based on the real problem
- understand that optimization also adds maintenance cost
A strong answer usually sounds like this:
First I narrow down the flow that feels slow. Then I choose a metric that represents that pain and capture the baseline. Only after that do I propose a change. If the change does not meaningfully improve the metric that matters, I do not buy the extra complexity.
What has not been measured yet is still a suspicion, not a diagnosis.
Good optimization is not the cleverest one. It is the one that improves something important without making the surrounding system worse.
Quick summary
What to keep in your head
- Optimizing without a baseline usually trades clarity for complexity without proving real user benefit.
- Good measurement starts by choosing the flow, the metric, and the part of the system that is actually hurting.
- Perception helps raise a hypothesis, but it does not replace evidence.
- In interviews, strong answers usually follow the same order: observe, measure, compare, then change code.
Practice checklist
Use this when you answer
- Can I explain the difference between a suspected bottleneck and a proven one?
- Do I know how to choose a metric that represents the real pain instead of any convenient number?
- Can I describe a baseline and how I would compare before and after the change?
- Can I say when an optimization is not worth the maintenance cost it adds?