March 15, 2025
Core Web Vitals: What They Are and What Actually Affects the Score
A practical way to think about LCP, INP, and CLS when you need to improve real experience, not just please a tool.
Andrews Ribeiro
Founder & Engineer
4 min · Intermediate · Systems
The problem
A lot of performance discussion turns into score worship.
The page gets a bad Lighthouse grade and the team panics.
That panic usually mixes three different things:
- the metric
- the cause
- the business priority
If you do not separate those layers, the team starts optimizing numbers instead of experience.
Mental model
Core Web Vitals try to answer three simple questions:
- did the user see the main content early?
- did the page respond quickly when they interacted?
- did the interface stay stable or jump around?
Today that usually shows up as:
- LCP (Largest Contentful Paint)
- INP (Interaction to Next Paint)
- CLS (Cumulative Layout Shift)
If you want the short version:
Core Web Vitals are an imperfect but useful attempt to measure loading, response, and stability in the way users actually perceive them.
The important point is that the acronym does not explain the cause.
It only tells you where to look.
Breaking the problem down
LCP: did the main content appear too late?
LCP marks the moment the largest content element in the viewport became visible.
In practice, it usually suffers because of:
- slow HTML delivery
- a heavy critical resource, like the hero image
- CSS or JS delaying rendering
- CPU tied up hydrating or executing too much script
So bad LCP is not automatically “an image problem.”
It might be the image.
It might be network.
It might be rendering.
It might be too much JavaScript on the path.
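Before guessing among those causes, it helps to see which element the browser is actually counting. A minimal browser-only sketch you can paste into DevTools on a real page:

```html
<script>
  // Sketch: log which element the browser treats as the LCP candidate
  // and when it appeared. The element tells you whether the bottleneck
  // is the hero image, a text block, or something else entirely.
  new PerformanceObserver((list) => {
    for (const entry of list.getEntries()) {
      console.log('LCP candidate:', entry.element, 'at', entry.startTime, 'ms');
    }
  }).observe({ type: 'largest-contentful-paint', buffered: true });
</script>
```

If the candidate is the hero image, you investigate delivery and priority; if it is a text block, you look at CSS and font loading instead.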
INP: did the interface respond too late?
INP looks at interaction responsiveness across the whole visit, reporting roughly the slowest interaction.
It tries to capture that feeling of clicking and waiting for the app to react.
It usually gets worse because of:
- a long task on the main thread
- a heavy event handler
- expensive rendering after the event
- too much JavaScript fighting for CPU
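The common fix for the long-task case is to split the work and yield back to the event loop between chunks, so queued interactions get handled in the gaps. A minimal sketch — the names `processInChunks` and `handleItem` are illustrative, not from any library:

```javascript
// Sketch: process a large list without blocking the main thread for the
// whole duration. Yielding between chunks lets pending input events run.
async function processInChunks(items, handleItem, chunkSize = 50) {
  for (let i = 0; i < items.length; i += chunkSize) {
    for (const item of items.slice(i, i + chunkSize)) {
      handleItem(item);
    }
    // Hand control back to the event loop before the next chunk.
    await new Promise((resolve) => setTimeout(resolve, 0));
  }
}
```

In newer browsers, `scheduler.yield()` expresses the same intent more directly, but the `setTimeout` yield works everywhere.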
CLS: was the screen jumping around?
CLS measures visual instability.
The user is about to click one thing, the screen moves, and they hit something else.
The most common causes are:
- images without reserved dimensions
- content inserted above existing content
- fonts changing layout late
- ads or embeds changing size after load
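The countermeasure for most of these is the same idea: reserve the space before the content arrives. A minimal sketch — the class name and font name are illustrative:

```html
<!-- width/height give the browser the aspect ratio, so the image's box
     is laid out before the file arrives. -->
<img src="hero.jpg" width="1200" height="600" alt="Hero" />

<style>
  /* Reserve a slot for a late-loading ad or embed. */
  .ad-slot { min-height: 250px; }

  /* Show fallback text immediately instead of shifting layout late.
     The swap itself can still shift; tuning fallback metrics reduces it. */
  @font-face {
    font-family: "Brand";
    src: url("/fonts/brand.woff2") format("woff2");
    font-display: swap;
  }
</style>
```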
Score does not replace reading the flow
One page can have a good score and still frustrate users in an important flow.
Another can have a mediocre score but solve the user's need fast enough.
That is why mature performance work does not ask only “what score did we get?”
It also asks:
- on which device?
- on which page?
- at which step of the flow?
- with what impact on the user?
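Answering those questions takes field data from real sessions, not just a lab run. One common approach is Google's web-vitals library, which reports all three metrics as users experience them. A minimal sketch — it assumes the library is loaded from a CDN, and `/vitals` is a hypothetical endpoint you would own:

```html
<script type="module">
  import { onLCP, onINP, onCLS } from 'https://unpkg.com/web-vitals@4?module';

  // Send each metric with page context to your own endpoint, so you can
  // later slice the data by page, device, and flow step.
  function report(metric) {
    navigator.sendBeacon('/vitals', JSON.stringify({
      name: metric.name,
      value: metric.value,
      page: location.pathname,
    }));
  }

  onLCP(report);
  onINP(report);
  onCLS(report);
</script>
```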
Simple example
Imagine a landing page with a large hero, an external font, and analytics scripts loading too early.
The score shows:
- bad LCP
- light CLS
- acceptable INP
A shallow read would say:
“Let’s optimize everything.”
A better read would say:
- the main bottleneck is LCP
- the hero image probably carries some of the cost
- maybe the loading and priority of that resource are wrong
- maybe CSS or JS is also blocking render on the way
In other words, the metric does not tell you to touch everything.
It helps you shrink the investigation space.
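Under that reading, the change stays narrow. A sketch of the kind of fix the hypothesis points at — file names are illustrative:

```html
<head>
  <!-- Tell the browser the hero is the priority... -->
  <link rel="preload" as="image" href="/hero.jpg" fetchpriority="high" />

  <!-- ...and that analytics can wait until parsing is done. -->
  <script src="/analytics.js" defer></script>
</head>
<body>
  <img src="/hero.jpg" fetchpriority="high" width="1600" height="800" alt="Hero" />
</body>
```

Nothing here touches CLS or INP, because the metrics said those were not the bottleneck.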
Common mistakes
- Treating score as the final goal instead of experience.
- Talking about Core Web Vitals without knowing what each metric is actually trying to capture.
- Assuming there is only one cause for each metric.
- Trying to improve all metrics at the same time with no priority order.
- Declaring victory with lab improvements while ignoring the real user flow.
How a senior thinks
People who think better about performance usually follow this path:
- which metric is the worst?
- what is that metric trying to signal?
- which classes of cause make sense here?
- which hypothesis has the best cost-benefit for investigation?
That keeps you from becoming a servant of framework checklists.
Seniority here means knowing that a metric is a compass, not a full map.
And also knowing that performance only matters when it improves an experience users actually have.
What the interviewer wants to see
They want to see whether you:
- understand what LCP, INP, and CLS are trying to measure
- know how to connect each metric to likely kinds of causes
- do not confuse score with diagnosis
- can prioritize investigation and improvement
If you say something like “Bad LCP makes me look at the loading and render path for the main content. Bad INP makes me look at the main thread and post-interaction cost,” you are already showing a more mature read.
A good score does not compensate for a bad experience. A bad score does not explain by itself what to fix.
Core Web Vitals help most when you use the metric to investigate, not when you perform for the tool.
Quick summary
What to keep in your head
- Core Web Vitals are a way to approximate what users feel with measurable signals.
- Each metric points to a different kind of problem: late loading, late response, or layout instability.
- Improving the score without understanding the cause usually turns into blind micro-optimization.
- In interviews, mature answers connect a bad metric to a concrete hypothesis around network, rendering, or CPU.
Practice checklist
Use this when you answer
- Can I explain LCP, INP, and CLS in human language instead of hiding behind the acronym?
- Do I know how to connect each metric to the most common kinds of causes?
- Can I explain why a bad score does not automatically mean everything is slow?
- Can I prioritize which metric to work on first based on the real product flow?
Next step
Measure Before You Optimize →