Product Events: How to Instrument Without Generating Analytics Garbage

How to instrument product events so they support decisions, debugging, and learning instead of becoming a pile of broken names and empty dashboards.

Andrews Ribeiro

Founder & Engineer

The problem

Some teams treat instrumentation like a minor task.

Something like:

  • “just add some tracking there”
  • “we’ll organize it later”
  • “send everything you can and we’ll see what we use”

It almost always ends the same way.

You get:

  • duplicate events
  • inconsistent names
  • properties with no standard
  • dashboards that look rich but answer nothing

This is the main point:

too much data without a contract almost always becomes useless data.

Mental model

Think of a product event as an observation contract.

You are saying:

  • which behavior matters
  • when that behavior happened
  • what minimum context is needed to interpret it

That is not very different from modeling an API.

You do not want:

  • a random name
  • an unstable shape
  • semantics that change without warning

With events it is the same.

If the contract is weak, later interpretation becomes guesswork.
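The contract idea can be made concrete as a type. This is a minimal sketch, not a real SDK: the event name, property names, and allowed values here (signup_started, entry_point, plan_type) are all illustrative assumptions.

```typescript
// Sketch of an "observation contract" for one event:
// which behavior, when it happened, and the minimum context to interpret it.
type SignupStartedEvent = {
  name: "signup_started";                         // the behavior that matters
  occurred_at: string;                            // when it happened (ISO 8601)
  properties: {                                   // minimum interpretive context
    entry_point: "landing_page" | "pricing_page" | "invite_link";
    plan_type: "free" | "pro";
  };
};

function buildSignupStarted(
  entryPoint: SignupStartedEvent["properties"]["entry_point"],
  planType: SignupStartedEvent["properties"]["plan_type"],
): SignupStartedEvent {
  return {
    name: "signup_started",
    occurred_at: new Date().toISOString(),
    properties: { entry_point: entryPoint, plan_type: planType },
  };
}
```

Because the shape is a type, a renamed property or an unexpected value fails at compile time instead of silently polluting the data.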

Breaking the problem down

Not everything deserves to become an event

This is the first filter.

An event should exist when it helps answer something like:

  • did users reach this step?
  • where did they drop off?
  • did they use this new capability?
  • where in the journey did the error happen?
  • did the flow change improve or worsen behavior?

If the answer is only:

  • “maybe someone will want to know one day”

the event usually starts out weak.

An event describes behavior, not implementation

This mistake shows up a lot.

Weak names:

  • clicked_blue_button
  • modal_open_v2
  • submit_form_new

Those names age badly because they are glued to interface details.

A better name tries to represent product intent or journey step:

  • signup_started
  • checkout_payment_submitted
  • search_result_opened

Visual detail can become a property when it truly matters.

But the main name should survive a layout refactor.
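One way to picture the split: the name stays tied to the journey step, and the visual detail, when it truly matters, is demoted to a property. The property names below (result_position, ui_variant) are illustrative assumptions.

```typescript
// Sketch: stable behavioral name, UI detail as a property.
const tracked = {
  name: "search_result_opened",   // survives a layout refactor
  properties: {
    result_position: 3,           // context that aids interpretation
    ui_variant: "card",           // visual detail demoted to a property
  },
};
```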

Properties exist to explain context, not carry the whole world

The other common extreme is a bloated payload.

The event carries:

  • the whole screen state
  • the entire user object
  • arbitrary text fragments
  • flags nobody understands

That creates three problems:

  1. unnecessary cost and complexity
  2. higher privacy risk
  3. worse analysis, because nobody knows what is trustworthy

A good property usually answers:

  • which variant was active?
  • in which plan or segment did this happen?
  • what was the flow entry point?
  • was there an error? what class of error?

If the property does not help interpretation, it is probably extra.
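One lightweight way to enforce this is an allowlist at the tracking boundary: only properties the team agreed on get through. This is a sketch; the allowed property names are assumptions standing in for whatever your team actually agrees on.

```typescript
// Sketch of an allowlist filter for event properties.
// The set below stands in for a team-agreed property contract.
const ALLOWED_PROPS = new Set(["variant", "plan_type", "entry_point", "error_class"]);

function pickAllowedProps(raw: Record<string, unknown>): Record<string, unknown> {
  const out: Record<string, unknown> = {};
  for (const [key, value] of Object.entries(raw)) {
    if (ALLOWED_PROPS.has(key)) out[key] = value;   // everything else is dropped
  }
  return out;
}
```

Dropping unknown keys at the boundary is a design choice: it keeps the whole screen state and the entire user object from ever reaching the warehouse.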

The firing moment matters more than it looks

Sometimes the event is right but it fires at the wrong moment.

Classic example:

  • recording checkout_completed when the person clicked the button instead of when the purchase was confirmed

That destroys the metric.

The question needs to be:

at which moment can I actually say this behavior happened?

Click, attempt, success, and failure are different moments.

Mixing them creates false readings.
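The distinction is easy to see in code. In this sketch, track and confirmPurchase are hypothetical stand-ins for your analytics call and your backend confirmation; the point is that the success event fires only after confirmation, never on click.

```typescript
// Sketch: attempt, success, and failure are distinct moments.
function submitCheckout(
  track: (name: string) => void,        // hypothetical analytics call
  confirmPurchase: () => boolean,       // hypothetical: true once the backend confirms
): void {
  track("checkout_payment_submitted");  // the attempt, fired on submit
  if (confirmPurchase()) {
    track("checkout_completed");        // success, only after confirmation
  } else {
    track("checkout_failed");           // failure is its own event
  }
}
```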

A simple taxonomy already solves more than it seems

You do not need to start with an encyclopedia.

But you do need a minimum rule set.

Something like:

  • consistent past-tense or participle verbs
  • journey-oriented names
  • common properties with a stable convention
  • clear distinction between attempt, success, and failure

Without that, everyone sends events however they want.

And the data turns into local dialect.
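A minimum rule set can even be checked mechanically. This is a sketch of a naming lint, assuming the team settled on snake_case names ending in a past-tense verb from a short agreed list; the verb list is an illustrative assumption.

```typescript
// Sketch of a minimal event-name lint: snake_case, ending in an agreed verb.
const ALLOWED_SUFFIXES = ["started", "submitted", "completed", "failed", "opened"];

function isValidEventName(name: string): boolean {
  if (!/^[a-z]+(_[a-z]+)*$/.test(name)) return false;  // snake_case, no digits
  const lastToken = name.split("_").pop()!;
  return ALLOWED_SUFFIXES.includes(lastToken);         // consistent verbs only
}
```

Run in CI against the list of tracked names, a check like this stops local dialects before they reach the warehouse.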

Simple example

Imagine a signup flow.

Bad instrumentation:

  • clicked_register_button
  • register_submit
  • sign_up_done
  • register_error

Each screen names it differently. Nobody knows whether done means click, request sent, or account created.

Better instrumentation:

  • signup_started
  • signup_submitted
  • signup_completed
  • signup_failed

With controlled properties:

  • plan_type
  • entry_point
  • error_code when there is a failure

Now there is a readable line.

You can measure:

  • how many start
  • how many try to submit
  • how many complete
  • where they fail

Without having to reconstruct intent later.
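With stable names, the funnel reduces to counting occurrences per step. A minimal sketch, assuming the four signup event names above:

```typescript
// Sketch: the funnel is a simple count per stable event name.
type TrackedEvent = { name: string; properties?: { error_code?: string } };

function funnelCounts(events: TrackedEvent[]): Record<string, number> {
  const steps = ["signup_started", "signup_submitted", "signup_completed", "signup_failed"];
  const counts: Record<string, number> = Object.fromEntries(steps.map((s) => [s, 0]));
  for (const e of events) {
    if (e.name in counts) counts[e.name] += 1;  // unknown names are ignored
  }
  return counts;
}
```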

What usually goes wrong

  • Instrumenting after the feature is already out and trying to patch meaning back in.
  • Naming the event after the component instead of the journey.
  • Adding properties because “maybe they will be useful.”
  • Mixing click events with success events.
  • Failing to agree on minimum rules across product, engineering, and analytics.
  • Changing event semantics without treating it as a contract break.

How someone more senior thinks

A more mature person usually thinks across three layers at the same time:

  1. product
  2. operations
  3. data longevity

Product:

  • which decision does this need to support?

Operations:

  • does this event also help explain an error, a drop, or a regression?

Longevity:

  • when the interface changes, will this tracking still be readable?

That view avoids both decorative tracking and chaotic instrumentation.

Interview angle

This topic appears less as a literal question and more as a follow-up.

For example:

  • “how would you measure the success of this feature?”
  • “which events would you instrument?”
  • “how would you know where the user is dropping off?”

The interviewer usually wants to see whether you:

  • measure real behavior
  • distinguish attempt from success
  • think in stable contracts
  • do not answer with a generic dashboard

Weak answer:

I would add events to the main buttons and then look at the data.

Strong answer:

I would choose events aligned with journey steps, separating start, attempt, success, and failure. I would also keep names oriented to product behavior, with a few stable properties, so the interpretation stays valid even if the interface changes.

Closing

Instrumenting well is not about sending more events.

It is about sending less, with more intent.

If the data does not help someone decide, diagnose, or learn, it is only taking up space.

In the end, good tracking looks simple.

But that simplicity is usually a sign that someone thought carefully before implementing it.
