March 24 2025
Product Events: How to Instrument Without Generating Analytics Garbage
How to instrument product events in a way that helps decisions, debugging, and learning instead of becoming a pile of broken names and empty dashboards.
Andrews Ribeiro
Founder & Engineer
5 min read · Intermediate · Thinking
The problem
Some teams treat instrumentation like a minor task.
Something like:
- “just add some tracking there”
- “we’ll organize it later”
- “send everything you can and we’ll see what we use”
It almost always ends the same way.
You get:
- duplicate events
- inconsistent names
- properties with no standard
- dashboards that look rich but answer nothing
This is the main point:
too much data without a contract almost always becomes useless data.
Mental model
Think of a product event as an observation contract.
You are saying:
- which behavior matters
- when that behavior happened
- with which minimum context it needs to be interpreted
That is not very different from modeling an API.
You do not want:
- a random name
- an unstable shape
- semantics that change without warning
With events it is the same.
If the contract is weak, later interpretation becomes guesswork.
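One way to make the contract explicit is to give events a fixed shape, the same way you would type an API payload. A minimal sketch (the class and field names here are illustrative, not a real SDK):

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class ProductEvent:
    """An observation contract: which behavior, when, with what minimum context."""
    name: str               # which behavior matters
    occurred_at: datetime   # when that behavior happened
    properties: dict        # minimum context needed to interpret it

event = ProductEvent(
    name="signup_started",
    occurred_at=datetime.now(timezone.utc),
    properties={"entry_point": "landing_page"},
)
```

Freezing the dataclass is the point: an event, once emitted, is a fact, and its shape should not drift quietly.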
Breaking the problem down
Not everything deserves to become an event
This is the first filter.
An event should exist when it helps answer something like:
- did users reach this step?
- where did they drop off?
- did they use this new capability?
- where in the journey did the error happen?
- did the flow change improve or worsen behavior?
If the answer is only:
- “maybe someone will want to know one day”
the event usually starts out weak.
An event describes behavior, not implementation
This mistake shows up a lot.
Weak names:
- `clicked_blue_button`
- `modal_open_v2`
- `submit_form_new`
Those names age badly because they are glued to interface details.
A better name tries to represent product intent or journey step:
- `signup_started`
- `checkout_payment_submitted`
- `search_result_opened`
Visual detail can become a property when it truly matters.
But the main name should survive a layout refactor.
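A naming rule like this can even be enforced mechanically at review or build time. A small sketch, assuming a hypothetical convention (snake_case, journey noun plus a past-tense stage verb, no UI vocabulary):

```python
import re

# Hypothetical convention: journey words, ending in a stage verb.
NAME_PATTERN = re.compile(r"^[a-z]+(_[a-z]+)*_(started|submitted|completed|failed|opened)$")
# Words that betray implementation detail rather than product behavior.
UI_WORDS = {"button", "modal", "click", "clicked", "popup"}

def is_valid_event_name(name: str) -> bool:
    if not NAME_PATTERN.match(name):
        return False
    return not any(word in name.split("_") for word in UI_WORDS)

is_valid_event_name("signup_started")       # True
is_valid_event_name("clicked_blue_button")  # False: UI detail, wrong stage verb
```

The exact pattern matters less than having one: a check like this turns "everyone names events however they want" into a failing build.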
Properties exist to explain context, not carry the whole world
The other common extreme is a bloated payload.
The event carries:
- the whole screen state
- the entire user object
- arbitrary text fragments
- flags nobody understands
That creates three problems:
- unnecessary cost and complexity
- higher privacy risk
- worse analysis, because nobody knows what is trustworthy
A good property usually answers:
- which variant was active?
- in which plan or segment did this happen?
- what was the flow entry point?
- was there an error? what class of error?
If the property does not help interpretation, it is probably extra.
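The same "minimum context" idea can be encoded as an allowlist per event, so the payload dump never reaches the pipeline. A minimal sketch; the event and property names are hypothetical:

```python
# Hypothetical contract: which properties each event is allowed to carry.
ALLOWED_PROPERTIES = {
    "signup_failed": {"plan_type", "entry_point", "error_code"},
}

def trim_properties(event_name: str, properties: dict) -> dict:
    """Drop any property the contract does not know about."""
    allowed = ALLOWED_PROPERTIES.get(event_name, set())
    return {k: v for k, v in properties.items() if k in allowed}

payload = {
    "plan_type": "free",
    "entry_point": "pricing_page",
    "full_user_object": "...",   # the whole world: rejected
    "screen_state": "...",       # interface detail: rejected
}
trim_properties("signup_failed", payload)
# keeps only plan_type and entry_point
```

Whether you trim silently or fail loudly on unknown keys is a team choice; either way, the contract is now written down instead of implied.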
The firing moment matters more than it looks
Sometimes the event is right but it fires at the wrong moment.
Classic example:
- recording `checkout_completed` when the person clicked the button instead of when the purchase was confirmed
That destroys the metric.
The question needs to be:
at which moment can I actually say this behavior happened?
Click, attempt, success, and failure are different moments.
Mixing them creates false readings.
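In code, keeping attempt, success, and failure apart usually means firing at different points around the operation, not on the click. A sketch with a hypothetical `track` sink and gateway call:

```python
# Hypothetical in-memory sink; a real tracker would send these somewhere.
events = []

def track(name, **props):
    events.append({"name": name, **props})

def submit_checkout(payment_gateway):
    track("checkout_payment_submitted")   # attempt: the request was sent
    confirmed = payment_gateway()         # success exists only after confirmation
    if confirmed:
        track("checkout_completed")
    else:
        track("checkout_failed", error_code="payment_declined")

submit_checkout(lambda: True)   # emits submitted, then completed
submit_checkout(lambda: False)  # emits submitted, then failed
```

The click itself can still be tracked, but under its own name; it must never masquerade as `checkout_completed`.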
A simple taxonomy already solves more than it seems
You do not need to start with an encyclopedia.
But you do need a minimum rule set.
Something like:
- consistent past-tense or participle verbs
- journey-oriented names
- common properties with a stable convention
- clear distinction between attempt, success, and failure
Without that, everyone sends events however they want.
And the data turns into local dialect.
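That minimum rule set is small enough to live as data. A sketch of one hypothetical journey with its stages and shared properties:

```python
# Minimal taxonomy as data: journeys, their stages, their common properties.
# Names are illustrative, not a prescribed standard.
TAXONOMY = {
    "signup": {
        "stages": ["started", "submitted", "completed", "failed"],
        "common_properties": ["plan_type", "entry_point"],
    },
}

def event_name(journey: str, stage: str) -> str:
    """Build a name only if the taxonomy knows the journey and stage."""
    stages = TAXONOMY[journey]["stages"]
    if stage not in stages:
        raise ValueError(f"unknown stage {stage!r} for journey {journey!r}")
    return f"{journey}_{stage}"

event_name("signup", "completed")  # "signup_completed"
```

Because names are derived instead of typed by hand, the "local dialect" problem disappears: an unknown stage fails immediately instead of shipping as a fifth spelling of the same concept.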
Simple example
Imagine a signup flow.
Bad instrumentation:
- `clicked_register_button`
- `register_submit`
- `sign_up_done`
- `register_error`
Each screen names it differently.
Nobody knows whether done means click, request sent, or account created.
Better instrumentation:
- `signup_started`
- `signup_submitted`
- `signup_completed`
- `signup_failed`
With controlled properties:
- `plan_type`
- `entry_point`
- `error_code` when there is a failure
Now there is a readable line.
You can measure:
- how many start
- how many try to submit
- how many complete
- where they fail
Without having to reconstruct intent later.
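With consistent names, that funnel is a counting exercise. A minimal sketch over a hypothetical in-memory event stream:

```python
from collections import Counter

# Hypothetical stream of signup events, in the naming scheme above.
stream = [
    "signup_started", "signup_started", "signup_started",
    "signup_submitted", "signup_submitted",
    "signup_completed",
    "signup_failed",
]

counts = Counter(stream)
funnel = {
    "started":   counts["signup_started"],    # how many start
    "submitted": counts["signup_submitted"],  # how many try to submit
    "completed": counts["signup_completed"],  # how many complete
    "failed":    counts["signup_failed"],     # where they fail
}
```

With the bad instrumentation above, the same query would first require guessing which of four spellings means what; here the funnel falls out of the names.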
What usually goes wrong
- Instrumenting after the feature is already out and trying to patch meaning back in.
- Naming the event after the component instead of the journey.
- Adding properties because “maybe they will be useful.”
- Mixing click events with success events.
- Failing to agree on minimum rules across product, engineering, and analytics.
- Changing event semantics without treating it as a contract break.
How someone more senior thinks
A more mature person usually thinks across three layers at the same time:
- product
- operations
- data longevity
Product:
- which decision does this need to support?
Operations:
- does this event also help explain an error, a drop, or a regression?
Longevity:
- when the interface changes, will this tracking still be readable?
That view avoids both decorative tracking and chaotic instrumentation.
Interview angle
This topic appears less as a literal question and more as a follow-up.
For example:
- “how would you measure the success of this feature?”
- “which events would you instrument?”
- “how would you know where the user is dropping off?”
The interviewer usually wants to see whether you:
- measure real behavior
- distinguish attempt from success
- think in stable contracts
- do not answer with a generic dashboard
Weak answer:
I would add events to the main buttons and then look at the data.
Strong answer:
I would choose events aligned with journey steps, separating start, attempt, success, and failure. I would also keep names oriented to product behavior, with a few stable properties, so the interpretation stays valid even if the interface changes.
Closing
Instrumenting well is not about sending more events.
It is about sending less, with more intent.
If the data does not help someone decide, diagnose, or learn, it is only taking up space.
In the end, good tracking looks simple.
But that simplicity is usually a sign that someone thought carefully before implementing it.
Quick summary
What to keep in your head
- A good event represents a relevant product behavior, not any random click someone decided to register.
- Name, properties, and firing moment need to be stable enough to survive interface evolution.
- If the same concept appears under five different names, you did not gain visibility. You gained noise.
- Mature instrumentation is designed together with the decision it needs to support, not patched in afterward.
Practice checklist
Use this when you answer
- Can I explain which decision this event helps support?
- Does the event name describe product behavior instead of a visual screen detail?
- Do the properties carry enough context without becoming a payload dump?
- If the UI changes tomorrow, does the event still mean the same thing?