Sampling
Learn how to configure the volume of error and transaction events sent to Sentry.
Adding Sentry to your app gives you a great deal of very valuable information about errors and performance you wouldn't otherwise get. And lots of information is good -- as long as it's the right information, at a reasonable volume.
To send a representative sample of your errors to Sentry, set the sample_rate option in your SDK configuration to a number between 0 (0% of errors sent) and 1 (100% of errors sent). This is a static rate, which will apply equally to all errors. For example, to sample 25% of your errors:
import sentry_sdk

sentry_sdk.init(
    # ...
    sample_rate=0.25,
)
The error sample rate defaults to 1.0, meaning all errors are sent to Sentry.
Changing the error sample rate requires re-deployment. In addition, setting an SDK sample rate limits visibility into the source of events. Setting a rate limit for your project (which only drops events when volume is high) may better suit your needs.
To sample error events dynamically, set the error_sampler option to a function that returns the desired sample rate for the event. The error_sampler takes two arguments, event and hint. event is the Event that will be sent to Sentry; hint includes Python's sys.exc_info() information in hint["exc_info"].
Your error_sampler function must return a valid value. A valid value is either:
- A floating-point number between 0.0 and 1.0 (inclusive) indicating the probability an error gets sampled, or
- A boolean indicating whether or not to sample the error.
One potential use case for the error_sampler is to apply different sample rates to different exception types. For instance, if you would like to sample an exception called MyException at 50%, discard all events of another exception called MyIgnoredException, and sample all other exception types at 100%, you could use the following code when initializing the SDK:
import sentry_sdk
from sentry_sdk.types import Event, Hint


def my_error_sampler(event: Event, hint: Hint) -> float:
    error_class = hint["exc_info"][0]

    if error_class == MyException:
        return 0.5
    elif error_class == MyIgnoredException:
        return 0

    # All the other errors
    return 1.0


sentry_sdk.init(
    # ...
    error_sampler=my_error_sampler,
)
You can define at most one of the error_sampler and the sample_rate. If both are set, the error_sampler will control sampling, and the sample_rate will be ignored.
We recommend sampling your transactions for two reasons:
- Capturing a single trace involves minimal overhead, but capturing traces for every page load or every API request may add an undesirable load to your system.
- Enabling sampling allows you to better manage the number of events sent to Sentry, so you can tailor your volume to your organization's needs.
Choose a sampling rate that balances performance and volume concerns against data accuracy. You don't want to collect too much data, but you do want to collect enough to draw meaningful conclusions. If you're not sure what rate to choose, start with a low value and gradually increase it as you learn more about your traffic patterns and volume.
The Sentry SDKs have two configuration options to control the volume of transactions sent to Sentry, allowing you to take a representative sample:
- Uniform sample rate (traces_sample_rate), which:
  - Provides an even cross-section of transactions, no matter where in your app or under what circumstances they occur
  - Uses default inheritance and precedence behavior
- Sampling function (traces_sampler), which:
  - Samples different transactions at different rates
  - Filters out some transactions entirely
  - Modifies default precedence and inheritance behavior
By default, neither of these options is set, meaning no transactions will be sent to Sentry. You must set one of them to start sending transactions.
To do this, set the traces_sample_rate option in your sentry_sdk.init() to a number between 0 and 1. With this option set, every transaction created will have that percentage chance of being sent to Sentry. (So, for example, if you set traces_sample_rate to 0.2, approximately 20% of your transactions will get recorded and sent.) That looks like this:
sentry_sdk.init(
    # ...
    # Set traces_sample_rate to 1.0 to capture 100%
    # of transactions for tracing.
    # We recommend adjusting this value in production.
    traces_sample_rate=1.0,
)
To use the sampling function, set the traces_sampler option in your sentry_sdk.init() to a function that will accept a sampling_context dictionary and return a sample rate between 0 and 1. For example:
def traces_sampler(sampling_context):
    # Examine provided context data (including parent decision, if any)
    # along with anything in the global namespace to compute the sample rate
    # or sampling decision for this transaction
    if "...":
        # These are important - take a big sample
        return 0.5
    elif "...":
        # These are less important or happen much more frequently - only take 1%
        return 0.01
    elif "...":
        # These aren't something worth tracking - drop all transactions like this
        return 0
    else:
        # Default sample rate
        return 0.1
sentry_sdk.init(
    # ...
    traces_sampler=traces_sampler,
)
For convenience, the function can also return a boolean. Returning True is equivalent to returning 1 and will guarantee the transaction is sent to Sentry. Returning False is equivalent to returning 0 and will guarantee the transaction is not sent to Sentry.
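For instance, a boolean-returning sampler can act as a simple on/off switch. This is a minimal sketch, assuming a hypothetical TRACING_ENABLED environment variable set by your deployment:

import os

def traces_sampler(sampling_context):
    # Hypothetical: send every transaction when tracing is enabled,
    # and none otherwise.
    return os.environ.get("TRACING_ENABLED") == "1"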
The information contained in the sampling_context object passed to the traces_sampler when a transaction is created varies by platform and integration. For Python-based SDKs, it includes at least the following:
{
    "transaction_context": {
        "name": <string>,  # transaction title at creation time
        "op": <string>,  # short description of transaction type, like "http.request"
    },
    "parent_sampled": <bool>,  # if this transaction has a parent, its sampling decision
    ...  # custom context as passed to `start_transaction`
}
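As a sketch of how you might use this data, the following sampler keys off the transaction's op and name. The endpoint name and rates here are made up for illustration:

def traces_sampler(sampling_context):
    transaction_context = sampling_context.get("transaction_context", {})
    op = transaction_context.get("op", "")
    name = transaction_context.get("name", "")

    # Hypothetical: sample checkout requests heavily, everything else lightly
    if op == "http.request" and name.startswith("POST /checkout"):
        return 0.5
    return 0.05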
When using custom instrumentation to create a transaction, you can add data to the sampling_context by passing it as an optional second argument to start_transaction. This is useful if there's data to which you want the sampler to have access but which you don't want to attach to the transaction as tags or data, such as information that's sensitive or that's too large to send with the transaction. For example:
sentry_sdk.start_transaction(
    # kwargs passed to Transaction constructor - will be recorded on transaction
    name="GET /search",
    op="search",
    data={
        "query_params": {
            "animal": "dog",
            "type": "very good"
        }
    },
    # `custom_sampling_context` - won't be recorded
    custom_sampling_context={
        # PII
        "user_id": "12312012",
        # too big to send
        "search_results": { ... }
    }
)
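The keys passed in custom_sampling_context then appear at the top level of the sampling_context your traces_sampler receives. As a sketch, reusing the hypothetical search_results key from the example above:

def traces_sampler(sampling_context):
    # "search_results" comes from custom_sampling_context above; it's visible
    # to the sampler but never recorded on the transaction itself.
    results = sampling_context.get("search_results") or {}

    # Hypothetical rule: searches with no results are less interesting
    if not results:
        return 0.01
    return 0.25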
Whatever a transaction's sampling decision, that decision will be passed to its child spans and from there to any transactions they subsequently cause in other services.
(See Distributed Tracing for more about how that propagation is done.)
If the transaction currently being created is one of those subsequent transactions (in other words, if it has a parent transaction), the upstream (parent) sampling decision will be included in the sampling context data. Your traces_sampler can use this information to choose whether to inherit that decision. In most cases, inheritance is the right choice, to avoid breaking distributed traces. A broken trace will not include all your services.
def traces_sampler(sampling_context):
    # always inherit
    if sampling_context["parent_sampled"] is not None:
        return sampling_context["parent_sampled"]

    ...
    # rest of sampling logic here
If you're using a traces_sample_rate rather than a traces_sampler, the decision will always be inherited.
If you know at transaction creation time whether or not you want the transaction sent to Sentry, you also have the option of passing a sampling decision directly to the transaction constructor (note, not in the custom_sampling_context object). If you do that, the transaction won't be subject to the traces_sample_rate, nor will traces_sampler be run, so you can count on the decision that's passed not to be overwritten.
sentry_sdk.start_transaction(
    name="GET /search",
    sampled=True
)
There are multiple ways for a transaction to end up with a sampling decision.
- Random sampling according to a static sample rate set in traces_sample_rate
- Random sampling according to a sample rate returned by traces_sampler
- Absolute decision (100% chance or 0% chance) returned by traces_sampler
- If the transaction has a parent, inheriting its parent's sampling decision
- Absolute decision passed to start_transaction
When there's the potential for more than one of these to come into play, the following precedence rules apply:
- If a sampling decision is passed to start_transaction, that decision will be used, overriding everything else.
- If traces_sampler is defined, its decision will be used. It can choose to keep or ignore any parent sampling decision, use the sampling context data to make its own decision, or choose a sample rate for the transaction. We advise against overriding the parent sampling decision because it will break distributed traces.
- If traces_sampler is not defined, but there's a parent sampling decision, the parent sampling decision will be used.
- If traces_sampler is not defined and there's no parent sampling decision, traces_sample_rate will be used.
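As a small illustration of the first rule, a decision passed to start_transaction wins even when a traces_sampler is configured. This is a sketch only; the sampler and transaction name are placeholders:

import sentry_sdk

sentry_sdk.init(
    # ...
    traces_sampler=traces_sampler,  # defined as in the examples above
)

# Sent to Sentry no matter what traces_sampler would have returned.
sentry_sdk.start_transaction(name="GET /important", sampled=True)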