diff --git a/docs/platforms/ruby/common/configuration/options.mdx b/docs/platforms/ruby/common/configuration/options.mdx
index 49955f5533f8f..bb646f46fee0e 100644
--- a/docs/platforms/ruby/common/configuration/options.mdx
+++ b/docs/platforms/ruby/common/configuration/options.mdx
@@ -326,6 +326,32 @@ config.trace_ignore_status_codes = [404, (502..511)]
+
+
+Automatically capture how long requests wait in the web server queue before processing begins. The SDK reads the `X-Request-Start` header set by reverse proxies (Nginx, HAProxy, Heroku) and attaches the queue time to transactions as `http.queue_time_ms`.
+
+This helps identify when requests are delayed due to insufficient worker threads or server capacity, which is especially useful under load.
+
+To disable queue time capture:
+
+```ruby
+config.capture_queue_time = false
+```
+
+**Nginx:**
+
+```nginx
+proxy_set_header X-Request-Start "t=${msec}";
+```
+
+**HAProxy:**
+
+```haproxy
+http-request set-header X-Request-Start t=%Ts%ms
+```
+
+
 The instrumenter to use, `:sentry` or `:otel` for [use with OpenTelemetry](../../tracing/instrumentation/opentelemetry).
diff --git a/docs/platforms/ruby/common/tracing/instrumentation/automatic-instrumentation.mdx b/docs/platforms/ruby/common/tracing/instrumentation/automatic-instrumentation.mdx
index 06e5c62996a28..969849a219443 100644
--- a/docs/platforms/ruby/common/tracing/instrumentation/automatic-instrumentation.mdx
+++ b/docs/platforms/ruby/common/tracing/instrumentation/automatic-instrumentation.mdx
@@ -20,5 +20,7 @@ Spans are instrumented for the following operations within a transaction:
   - includes common database systems such as Postgres and MySQL
 - Outgoing HTTP requests made with `Net::HTTP`
 - Redis operations
+- Queue time for requests behind reverse proxies (Nginx, HAProxy, Heroku)
+  - Requires the `X-Request-Start` header to be set by the reverse proxy
 
 Spans are only created within an existing transaction. If you're not using any of the supported frameworks, you'll need to create transactions manually.
diff --git a/docs/platforms/ruby/common/tracing/instrumentation/performance-metrics.mdx b/docs/platforms/ruby/common/tracing/instrumentation/performance-metrics.mdx
index 8955adc964609..ab8c95a9b51ee 100644
--- a/docs/platforms/ruby/common/tracing/instrumentation/performance-metrics.mdx
+++ b/docs/platforms/ruby/common/tracing/instrumentation/performance-metrics.mdx
@@ -22,6 +22,8 @@ Sentry supports adding arbitrary custom units, but we recommend using one of the
 
+
+
 ## Supported Measurement Units
 
 Units augment measurement values by giving meaning to what otherwise might be abstract numbers. Adding units also allows Sentry to offer controls - unit conversions, filters, and so on - based on those units. For values that are unitless, you can supply an empty string or `none`.
diff --git a/docs/platforms/ruby/guides/good_job/index.mdx b/docs/platforms/ruby/guides/good_job/index.mdx
new file mode 100644
index 0000000000000..280419981eafd
--- /dev/null
+++ b/docs/platforms/ruby/guides/good_job/index.mdx
@@ -0,0 +1,278 @@
+---
+title: GoodJob
+description: "Learn about using Sentry with GoodJob, an ActiveJob adapter for Postgres-based job queuing."
+---
+
+The GoodJob integration adds support for [GoodJob](https://github.com/bensheldon/good_job), a multithreaded, Postgres-based ActiveJob backend for Ruby on Rails. It provides automatic error capture with enriched context, performance monitoring with execution time and queue latency tracking, and cron monitoring for scheduled jobs.
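+
+Conceptually, the integration wraps each job execution in a Sentry scope, tags it with job metadata, and reports anything the job raises. The sketch below is illustrative only and is not the integration's actual code (the real implementation hooks into GoodJob's ActiveJob lifecycle), but it shows the idea using standard `sentry-ruby` APIs:
+
+```ruby
+# Illustrative sketch, not the real implementation: roughly what the
+# integration does around every job run.
+def run_job_with_sentry(job)
+  Sentry.with_scope do |scope|
+    # Job metadata that shows up as context on captured events.
+    scope.set_tags(job_class: job.class.name, queue: job.queue_name)
+    scope.set_extras(job_id: job.job_id)
+    begin
+      job.perform_now
+    rescue StandardError => e
+      Sentry.capture_exception(e)
+      raise
+    end
+  end
+end
+```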
+
+## Install
+
+Install `sentry-good_job`:
+
+```bash
+gem install sentry-good_job
+```
+
+Or add it to your `Gemfile`:
+
+```ruby
+gem "sentry-ruby"
+gem "sentry-good_job"
+```
+
+## Configure
+
+### Automatic Setup with Rails
+
+If you're using Rails and have GoodJob in your dependencies, the integration will be enabled automatically when you initialize the Sentry SDK.
+
+```ruby {filename:config/initializers/sentry.rb}
+Sentry.init do |config|
+  config.dsn = "___PUBLIC_DSN___"
+  config.breadcrumbs_logger = [:active_support_logger, :http_logger]
+
+  # Set traces_sample_rate to 1.0 to capture 100%
+  # of transactions for tracing.
+  config.traces_sample_rate = 1.0
+end
+```
+
+### Manual Setup
+
+For non-Rails applications or when you need more control, you can configure the integration explicitly:
+
+```ruby
+require "sentry-ruby"
+require "sentry-good_job"
+
+Sentry.init do |config|
+  config.dsn = "___PUBLIC_DSN___"
+  config.traces_sample_rate = 1.0
+
+  # Configure GoodJob-specific options
+  config.good_job.report_after_job_retries = false
+  config.good_job.include_job_arguments = false
+  config.good_job.auto_setup_cron_monitoring = true
+end
+```
+
+Make sure that `Sentry.init` is called before GoodJob workers start processing jobs. For Rails applications, placing the initialization in `config/initializers/sentry.rb` ensures proper setup.
+
+## Verify
+
+To verify that the integration is working, create a job that raises an error:
+
+```ruby {filename:app/jobs/debug_job.rb}
+class DebugJob < ApplicationJob
+  queue_as :default
+
+  def perform
+    1 / 0 # Intentional error
+  end
+end
+```
+
+Enqueue the job:
+
+```ruby
+DebugJob.perform_later
+```
+
+When GoodJob processes the job, the error will be captured and sent to Sentry. You'll see:
+
+- An error event with the exception details
+- Enriched context including the job name, queue name, and job ID
+- A performance transaction showing job execution time and queue latency
+
+View the error in the **Issues** section and the performance data in the **Performance** section of [sentry.io](https://sentry.io).
+
+## Features
+
+### Error Capture
+
+The integration automatically captures exceptions raised during job execution:
+
+- Exceptions are captured with full context (job name, queue, job ID, and arguments if enabled)
+- Trace context is propagated across job executions
+- Error reporting is configurable (after retries, only for dead jobs, and so on)
+
+### Performance Monitoring
+
+Job execution is automatically instrumented for performance monitoring:
+
+- **Execution time**: Time spent executing the job
+- **Queue latency**: Time the job spent waiting in the queue before execution
+- **Trace propagation**: Jobs maintain the trace context of the code that enqueued them
+
+Transactions are created with a name of the form `queue.active_job/<JobClass>` and include:
+
+- A span for the job execution
+- A queue latency measurement
+- Breadcrumbs for job lifecycle events
+
+### Cron Monitoring
+
+The integration provides two ways to monitor scheduled jobs:
+
+#### Automatic Setup
+
+GoodJob cron configurations are automatically detected and monitored:
+
+```ruby {filename:config/initializers/good_job.rb}
+Rails.application.configure do
+  config.good_job.cron = {
+    example_job: {
+      cron: "0 0 * * *", # Daily at midnight
+      class: "ExampleJob"
+    }
+  }
+end
+```
+
+With `auto_setup_cron_monitoring` enabled (the default), Sentry will automatically create cron monitors for all jobs in your GoodJob cron configuration. Monitor slugs are generated from the cron key.
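+
+For the configuration above, the automatic setup behaves roughly like adding the cron check-in mixin that `sentry-ruby` ships to the job yourself. The sketch below assumes the monitor slug is taken directly from the `example_job` cron key; the integration's exact slug normalization may differ:
+
+```ruby
+class ExampleJob < ApplicationJob
+  include Sentry::Cron::MonitorCheckIns
+
+  # Assumed slug, derived from the :example_job cron key above.
+  sentry_monitor_check_ins slug: "example_job",
+                           monitor_config: Sentry::Cron::MonitorConfig.from_crontab("0 0 * * *")
+
+  def perform
+    # ...
+  end
+end
+```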
+
+Cron monitors are created when your application starts and the GoodJob configuration is loaded. You don't need to create monitors manually in Sentry.
+
+#### Manual Setup
+
+For more control over cron monitoring, use the `sentry_cron_monitor` method in your job:
+
+```ruby {filename:app/jobs/scheduled_cleanup_job.rb}
+class ScheduledCleanupJob < ApplicationJob
+  include GoodJob::ActiveJobExtensions::Crons
+
+  sentry_cron_monitor(
+    schedule: { cron: "0 2 * * *" }, # 2 AM daily
+    timezone: "America/New_York"
+  )
+
+  def perform
+    # Cleanup logic
+  end
+end
+```
+
+The `sentry_cron_monitor` method accepts:
+
+- `schedule`: Cron schedule hash (e.g., `{ cron: "0 * * * *" }`)
+- `timezone`: Timezone for the schedule (optional, defaults to UTC)
+
+If you use manual cron monitoring with `sentry_cron_monitor`, set `auto_setup_cron_monitoring` to `false` to avoid duplicate monitors.
+
+View your monitored jobs at [sentry.io/insights/crons](https://sentry.io/insights/crons/).
+
+## Options
+
+Configure the GoodJob integration with these options:
+
+### `report_after_job_retries`
+
+Only report errors to Sentry after all retry attempts have been exhausted.
+
+When `true`, errors are only sent to Sentry after the job has failed its final retry attempt. When `false`, errors are reported on every failure, including during retries.
+
+```ruby
+Sentry.init do |config|
+  config.dsn = "___PUBLIC_DSN___"
+  config.good_job.report_after_job_retries = true
+end
+```
+
+### `report_only_dead_jobs`
+
+Only report errors for jobs that cannot be retried (dead jobs).
+
+When `true`, errors are only sent to Sentry for jobs that have permanently failed and won't be retried. This is stricter than `report_after_job_retries`.
+
+```ruby
+Sentry.init do |config|
+  config.dsn = "___PUBLIC_DSN___"
+  config.good_job.report_only_dead_jobs = true
+end
+```
+
+### `include_job_arguments`
+
+Include job arguments in the error context sent to Sentry.
+
+When `true`, job arguments are included in the event's extra context. **Warning**: This may expose sensitive data. Only enable this if you're certain your job arguments don't contain PII or other sensitive information.
+
+```ruby
+Sentry.init do |config|
+  config.dsn = "___PUBLIC_DSN___"
+  config.good_job.include_job_arguments = true
+end
+```
+
+Job arguments may contain personally identifiable information (PII) or other sensitive data. Only enable this option if you've reviewed your job arguments and are certain they don't contain sensitive information, or if you've configured [data scrubbing](/platforms/ruby/data-management/sensitive-data/) appropriately.
+
+### `auto_setup_cron_monitoring`
+
+Automatically set up cron monitoring by reading GoodJob's cron configuration.
+
+When `true`, the integration scans your GoodJob cron configuration and automatically creates Sentry cron monitors for scheduled jobs.
+
+```ruby
+Sentry.init do |config|
+  config.dsn = "___PUBLIC_DSN___"
+  config.good_job.auto_setup_cron_monitoring = false
+end
+```
+
+Disable this if you prefer to use manual cron monitoring with the `sentry_cron_monitor` method.
+
+### `logging_enabled`
+
+Enable detailed logging for debugging the integration.
+
+When `true`, the integration logs detailed information about job monitoring, cron setup, and error capture. This is useful for troubleshooting, but should be disabled in production.
+
+```ruby
+Sentry.init do |config|
+  config.dsn = "___PUBLIC_DSN___"
+  config.good_job.logging_enabled = true # Only for debugging
+end
+```
+
+## Supported Versions
+
+- Ruby: 2.4+
+- Rails: 5.2+
+- GoodJob: 3.0+
+- Sentry Ruby SDK: 5.28.0+
diff --git a/platform-includes/performance/queue-time-capture/ruby.mdx b/platform-includes/performance/queue-time-capture/ruby.mdx
new file mode 100644
index 0000000000000..8385fa823c739
--- /dev/null
+++ b/platform-includes/performance/queue-time-capture/ruby.mdx
@@ -0,0 +1,50 @@
+## Automatic Queue Time Capture
+
+The Ruby SDK automatically captures queue time for Rack-based applications when the `X-Request-Start` header is present. This measures how long requests wait in the web server queue (for example, waiting for a Puma thread) before your application begins processing them.
+
+Queue time is attached to transactions as `http.queue_time_ms` and helps identify server capacity issues.
+
+### Setup
+
+Configure your reverse proxy to add the `X-Request-Start` header:
+
+**Nginx:**
+
+```nginx
+location / {
+  proxy_pass http://your-app;
+  proxy_set_header X-Request-Start "t=${msec}";
+}
+```
+
+**HAProxy:**
+
+```haproxy
+frontend http-in
+  http-request set-header X-Request-Start t=%Ts%ms
+```
+
+**Heroku:** The header is automatically set by Heroku's router.
+
+### How It Works
+
+The SDK:
+
+1. Reads the `X-Request-Start` header timestamp set by your reverse proxy
+2. Calculates the time difference between the header timestamp and when the request reaches your application
+3. Subtracts `puma.request_body_wait` (if present) to exclude time spent waiting for slow client uploads
+4. Attaches the result to the transaction as `http.queue_time_ms`
+
+### Disable Queue Time Capture
+
+If you don't want queue time captured, disable it in your configuration:
+
+```ruby
+Sentry.init do |config|
+  config.capture_queue_time = false
+end
+```
+
+### Viewing Queue Time
+
+Queue time appears in the Sentry transaction details under the "Data" section as `http.queue_time_ms`, measured in milliseconds.
diff --git a/src/mdx.ts b/src/mdx.ts
index 9a8b34dd00e32..af0374a87ebf7 100644
--- a/src/mdx.ts
+++ b/src/mdx.ts
@@ -268,6 +268,7 @@ export async function getDevDocsFrontMatterUncached(): Promise<FrontMatter[]> {
     const source = await readFile(file, 'utf8');
     const {data: frontmatter} = matter(source);
+
     return {
       ...(frontmatter as FrontMatter),
       slug: fileName.replace(/\/index.mdx?$/, '').replace(/\.mdx?$/, ''),
diff --git a/src/types/frontmatter.ts b/src/types/frontmatter.ts
index a336bcefefe48..aed7608e1ec61 100644
--- a/src/types/frontmatter.ts
+++ b/src/types/frontmatter.ts
@@ -39,11 +39,11 @@ export interface FrontMatter {
    * A list of keywords for indexing with search.
    */
   keywords?: string[];
+
   /**
    * Set this to true to show a "new" badge next to the title in the sidebar
    */
   new?: boolean;
-
   /**
    * The next page in the bottom pagination navigation.
    */
@@ -53,6 +53,7 @@ export interface FrontMatter {
   next?: string;
   /**
    * takes precedence over children when present
    */
   next_steps?: string[];
+
   /**
    * Set this to true to disable indexing (robots, algolia) of this content.