July 17, 2025
Sharing Context Between Requests and Jobs Without Global Hacks
How to carry useful context from a synchronous flow into a job with traceability and a clear boundary.
Andrews Ribeiro
Founder & Engineer
3 min · Intermediate · Systems
The problem
One request comes in.
The backend decides something.
Then it dispatches a job.
At that point, there is almost always context that still matters:
- tenant_id
- correlation_id
- trigger origin
- priority
- rule version
- operational reason
The common mistake is never deciding explicitly how that context should travel.
Then the system falls into one of these paths:
- global variable
- implicit framework context
- shared singleton
- a job that looks up half the state later “because it can”
All of that seems fine until the first serious investigation.
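To make the first path concrete, here is a minimal sketch in Python; the function and variable names are hypothetical. The global "works" only while the job runs in the same process that handled the request, which is exactly what stops being true with a real worker:

```python
# Anti-pattern: the job reads tenant from a module-level global
# that was set during the request.
_current_tenant = None

def handle_request_global(tenant_id):
    global _current_tenant
    _current_tenant = tenant_id           # context travels implicitly
    return {"task": "issue_invoice"}      # payload says nothing about tenant

def issue_invoice_global(payload):
    # Only works if this runs in the same process as the request.
    # A separate worker process sees _current_tenant == None.
    return _current_tenant

# Explicit version: the payload carries what the job needs.
def handle_request_explicit(tenant_id):
    return {"task": "issue_invoice", "tenant_id": tenant_id}

def issue_invoice_explicit(payload):
    return payload["tenant_id"]
```

The explicit version survives serialization, a different process, and a retry next week; the global version survives none of those.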
Mental model
A job does not continue a request.
A job starts a new execution moment.
So it needs enough context to:
- execute the action
- be auditable
- be reprocessable
- be investigable
That does not mean carrying everything.
It means carrying the right context.
Simple example
Imagine one request approves an order and schedules invoice issuance.
The invoice job may need:
- order_id
- tenant_id
- correlation_id
- requested_by
- issued_under_policy_version
It probably does not need:
- full headers
- the entire request payload
- the user’s whole session
When you send too much context, the job becomes a truck full of coupling.
When you send too little, the job becomes a black box.
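One way to keep that balance is to make the payload an explicit contract. A minimal sketch, assuming Python and the field names from the example above (the class and helper are illustrative, not from any framework):

```python
from dataclasses import dataclass, asdict

@dataclass(frozen=True)
class InvoiceJobPayload:
    """Just enough to execute, audit, and reprocess -- not the whole request."""
    order_id: str                     # identity of the target
    tenant_id: str                    # operational scope
    correlation_id: str               # links the job back to its origin
    requested_by: str                 # actor that triggered the flow
    issued_under_policy_version: str  # included only because it changes execution

def to_message(payload: InvoiceJobPayload) -> dict:
    # Serialize for whatever queue is in use; the contract stays visible.
    return asdict(payload)
```

Anything not listed in the class simply cannot ride along, which is the point: adding a field becomes a deliberate decision instead of an accident.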
The common mistake
The common mistake is assuming observability can solve everything later.
Without minimum context in the job payload, you end up with:
- logs with no clear link to the origin
- opaque retries
- reprocessing with no semantics
- a job that needs too many lookups just to understand why it exists
Another common mistake is serializing the entire request “just in case.”
That usually only moves junk further down the line.
What usually helps
It helps to split what the job needs into three groups:
- identity of the target
- identity of the operational context
- minimum tracing metadata
In practice, that usually becomes:
- relevant ids
- correlation id
- tenant or scope
- reason, version, or priority when that changes execution
That is usually enough to preserve clarity without carrying garbage.
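The three groups can be made visible at the dispatch site. A hypothetical helper, sketched in Python, that forces the caller to say which group each field belongs to:

```python
def build_job_payload(target: dict, scope: dict, tracing: dict) -> dict:
    """Assemble a job payload from the three groups; names are illustrative."""
    payload = {}
    payload.update(target)   # identity of the target: relevant ids
    payload.update(scope)    # operational context: tenant, reason, version
    payload.update(tracing)  # minimum tracing metadata: correlation id
    return payload

# Usage at the point where the request dispatches the job:
payload = build_job_payload(
    target={"order_id": "ord-1"},
    scope={"tenant_id": "t-1", "issued_under_policy_version": "v3"},
    tracing={"correlation_id": "c-1"},
)
```

A field with no obvious group is usually a field that came along for convenience.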
How a senior thinks
Engineers who have already debugged real async flows usually ask:
- if this job fails tomorrow, will I understand where it came from?
- if I reprocess it, can I preserve the important context?
- is this field necessary for execution or did it only come along for convenience?
- am I carrying one hidden dependency from the request into the job?
That conversation removes a lot of invisible hacks.
Interview angle
This topic appears in backend, queues, jobs, tracing, and incidents.
The interviewer wants to see whether you understand:
- that an async job needs its own contract
- that too much context and too little context break different things
- that traceability and reprocessing depend on explicit propagation
A strong answer often sounds like this:
“I would not try to share context through a global variable or request local storage. I would prefer to put in the job payload only the target identity and the minimum operational context, like tenant, correlation id, and policy version when that affects execution.”
Direct takeaway
Context between request and job should not be magical.
It should be explicit enough for the flow to remain understandable.
Quick summary
What to keep in your head
- An async job needs explicit context that is sufficient to operate and audit, not magical access to the original request.
- Correlation id, tenant, technical actor, and trigger reason are usually more important than copying the whole payload.
- Too much implicit context makes debugging, retry, and reprocessing worse.
- Good propagation carries only what is necessary for continuity, not the whole shadow of the request.
Practice checklist
Use this when you answer
- Can I explain which context the job actually needs to execute and investigate?
- Is that context explicit in the job payload or hidden in a singleton, thread-local, or fragile lookup?
- Do retry and reprocessing still keep enough traceability?
- Am I propagating only the essential parts or throwing the whole request into the queue for convenience?