Conversation
📝 Walkthrough

The pull request adds Sentry error tracking integration to the application. Changes include defining Sentry environment variables in `.env.example`, adding the `sentry-sdk` dependency to `requirements/base.txt`, and initializing the SDK in `tcf_core/settings/base.py`.
Estimated code review effort: 🎯 2 (Simple) | ⏱️ ~12 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (inconclusive)
Actionable comments posted: 4
🧹 Nitpick comments (1)
.env.example (1)
49-50: Use safer example sampling defaults to avoid accidental 100% prod ingestion.

Line 49 and Line 50 currently suggest full sampling. Consider documenting conservative defaults (e.g., traces 0.1, profiles 0.0) and opting up intentionally.

Suggested .env.example tweak:

```diff
-# SENTRY_TRACES_SAMPLE_RATE=1.0 # 100% of requests for performance monitoring (lower in prod if needed)
-# SENTRY_PROFILES_SAMPLE_RATE=1.0 # 100% of requests for profiling (lower in prod if needed)
+# SENTRY_TRACES_SAMPLE_RATE=0.1 # safer default; increase intentionally
+# SENTRY_PROFILES_SAMPLE_RATE=0.0 # start disabled; enable selectively
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.env.example around lines 49 - 50, Update the .env.example to use conservative Sentry sampling defaults instead of 1.0: set SENTRY_TRACES_SAMPLE_RATE to a lower default like 0.1 and SENTRY_PROFILES_SAMPLE_RATE to 0.0, and update their inline comments to clarify these are safer example defaults and that teams should intentionally opt up if higher sampling is desired.
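For context, a minimal sketch of how these sample-rate variables might be read with conservative fallbacks. This uses plain `os.environ` for illustration (the project itself uses `django-environ`'s `env.float`); the helper name and the clamping behavior are assumptions, not existing code:

```python
import os

def read_sample_rate(name: str, default: float) -> float:
    """Read a Sentry sample rate from the environment, clamped to [0.0, 1.0]."""
    raw = os.environ.get(name)
    if raw is None:
        return default
    return min(max(float(raw), 0.0), 1.0)

# Conservative defaults: 10% tracing, profiling off.
traces_rate = read_sample_rate("SENTRY_TRACES_SAMPLE_RATE", 0.1)
profiles_rate = read_sample_rate("SENTRY_PROFILES_SAMPLE_RATE", 0.0)
```

Clamping guards against a typo like `SENTRY_TRACES_SAMPLE_RATE=10` silently requesting 1000% sampling.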
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 0221ac73-4715-48c3-82f5-77701e15bdb5
📒 Files selected for processing (3)
- .env.example
- requirements/base.txt
- tcf_core/settings/base.py
```python
SENTRY_DSN = env.str("SENTRY_DSN", default=None)
SENTRY_TRACES_SAMPLE_RATE = env.float("SENTRY_TRACES_SAMPLE_RATE", default=1.0)
SENTRY_PROFILES_SAMPLE_RATE = env.float("SENTRY_PROFILES_SAMPLE_RATE", default=1.0)


if SENTRY_DSN:
    sentry_sdk.init(
        dsn=SENTRY_DSN,
        integrations=[DjangoIntegration()],
        # Add data like request headers and IP for users
        send_default_pii=True,
        # Only send ERROR and above to Sentry (CloudWatch handles INFO/DEBUG)
```
🧩 Analysis chain
🏁 Script executed:

```shell
# First, let's find and examine the base.py settings file
find . -type f -name "base.py" | grep settings
```

Repository: thecourseforum/theCourseForum2
Length of output: 101

🏁 Script executed:

```shell
# Let's read the specific lines mentioned in the review
cat -n tcf_core/settings/base.py | sed -n '160,185p'
```

Repository: thecourseforum/theCourseForum2
Length of output: 1279

🏁 Script executed:

```shell
# Let's see more context around the Sentry initialization
cat -n tcf_core/settings/base.py | sed -n '150,200p'
```

Repository: thecourseforum/theCourseForum2
Length of output: 2099

🏁 Script executed:

```shell
# Check if SENTRY_SEND_DEFAULT_PII is already configured anywhere
rg "SENTRY_SEND_DEFAULT_PII" .
```

Repository: thecourseforum/theCourseForum2
Length of output: 56

🏁 Script executed:

```shell
# Check if send_default_pii is configured elsewhere in the codebase
rg "send_default_pii" .
```

Repository: thecourseforum/theCourseForum2
Length of output: 132

🏁 Script executed:

```shell
# Look for environment variable patterns in the settings file
cat -n tcf_core/settings/base.py | head -50
```

Repository: thecourseforum/theCourseForum2
Length of output: 1785
Make PII capture explicitly configurable via environment variable (default off).
Line 175 unconditionally enables send_default_pii=True without an option to disable it. This is a privacy and compliance risk in production. Other Sentry settings like SENTRY_TRACES_SAMPLE_RATE and SENTRY_PROFILES_SAMPLE_RATE are already configurable via environment variables—PII capture should follow the same pattern.
Gate PII forwarding behind env var:

```diff
 SENTRY_DSN = env.str("SENTRY_DSN", default=None)
 SENTRY_TRACES_SAMPLE_RATE = env.float("SENTRY_TRACES_SAMPLE_RATE", default=1.0)
 SENTRY_PROFILES_SAMPLE_RATE = env.float("SENTRY_PROFILES_SAMPLE_RATE", default=1.0)
+SENTRY_SEND_DEFAULT_PII = env.bool("SENTRY_SEND_DEFAULT_PII", default=False)
@@
-        send_default_pii=True,
+        send_default_pii=SENTRY_SEND_DEFAULT_PII,
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```python
SENTRY_DSN = env.str("SENTRY_DSN", default=None)
SENTRY_TRACES_SAMPLE_RATE = env.float("SENTRY_TRACES_SAMPLE_RATE", default=1.0)
SENTRY_PROFILES_SAMPLE_RATE = env.float("SENTRY_PROFILES_SAMPLE_RATE", default=1.0)
SENTRY_SEND_DEFAULT_PII = env.bool("SENTRY_SEND_DEFAULT_PII", default=False)

if SENTRY_DSN:
    sentry_sdk.init(
        dsn=SENTRY_DSN,
        integrations=[DjangoIntegration()],
        # Add data like request headers and IP for users
        send_default_pii=SENTRY_SEND_DEFAULT_PII,
        # Only send ERROR and above to Sentry (CloudWatch handles INFO/DEBUG)
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tcf_core/settings/base.py` around lines 166 - 176, Add a new environment flag
(e.g., SENTRY_SEND_DEFAULT_PII) and use it to control send_default_pii instead
of hardcoding True: read the env var (prefer env.bool("SENTRY_SEND_DEFAULT_PII",
default=False)) near where SENTRY_DSN and sample rates are defined, and pass its
value to sentry_sdk.init(...) as send_default_pii=... so that PII forwarding is
opt-in and configurable; reference SENTRY_DSN, SENTRY_TRACES_SAMPLE_RATE,
SENTRY_PROFILES_SAMPLE_RATE, send_default_pii, and sentry_sdk.init to locate the
changes.
```python
if SENTRY_DSN:
    sentry_sdk.init(
        dsn=SENTRY_DSN,
        integrations=[DjangoIntegration()],
        # Add data like request headers and IP for users
        send_default_pii=True,
        # Only send ERROR and above to Sentry (CloudWatch handles INFO/DEBUG)
        enable_logs=True,
        event_level="error",  # Only ERROR, CRITICAL go to Sentry as events
        # Sample performance traces (1.0 = 100%, adjustable via env var)
        traces_sample_rate=SENTRY_TRACES_SAMPLE_RATE,
        # Sample profiling (1.0 = 100%, adjustable via env var)
        profiles_sample_rate=SENTRY_PROFILES_SAMPLE_RATE,
        # Ignore common noise
        ignore_errors=[
            KeyboardInterrupt,
            # Add other exceptions to ignore if needed
        ],
        before_send=_sentry_before_send,
    )
```
Fix used-before-assignment for _sentry_before_send (merge blocker).
Line 188 references _sentry_before_send before it is defined (Line 192), which breaks when SENTRY_DSN is set.
Move hook definition above init:

```diff
+def _sentry_before_send(event, hint):
+    """Filter out noise before sending to Sentry."""
+    # Ignore 404 errors
+    if event.get("logger") == "django.request":
+        if "exception" in event:
+            exc_type = event["exception"]["values"][0].get("type")
+            if exc_type == "Http404":
+                return None
+
+    # Add request_id from context
+    from tcf_core.request_logging import request_context
+    context = request_context.get()
+    if context:
+        event.setdefault("tags", {})
+        event["tags"]["request_id"] = context.get("request_id")
+
+    return event
+
 if SENTRY_DSN:
     sentry_sdk.init(
         dsn=SENTRY_DSN,
@@
         before_send=_sentry_before_send,
     )
-
-
-def _sentry_before_send(event, hint):
-    ...
```

Also applies to: 192-208
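For reference, the Http404-filtering part of the suggested hook can be sketched as a self-contained, unit-testable function. This variant drops the `tcf_core.request_logging` dependency (which a later comment flags as nonexistent); the function name and event shape follow Sentry's standard `before_send` event dict, but this is an illustrative sketch, not the PR's exact code:

```python
def sentry_before_send(event, hint):
    """Drop Http404 noise from django.request logger events before they reach Sentry."""
    if event.get("logger") == "django.request" and "exception" in event:
        values = event["exception"].get("values", [])
        if values and values[0].get("type") == "Http404":
            return None  # returning None suppresses the event entirely
    return event  # everything else passes through unchanged
```

Because the function is pure (no module-level imports of request state), it can be tested without initializing the SDK at all.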
🧰 Tools
🪛 GitHub Actions: Continuous Integration
[error] 188-188: pylint (E0601): Using variable '_sentry_before_send' before assignment (used-before-assignment)
🪛 Ruff (0.15.9)
[error] 188-188: Undefined name _sentry_before_send
(F821)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tcf_core/settings/base.py` around lines 170 - 189, The Sentry init calls
sentry_sdk.init(... before the _sentry_before_send function is defined, causing
a used-before-assignment when SENTRY_DSN is set; fix by moving or declaring the
_sentry_before_send hook above the sentry_sdk.init invocation (or define a
placeholder that is replaced later) so that sentry_sdk.init references a defined
callable; update the block around sentry_sdk.init and the _sentry_before_send
function to ensure _sentry_before_send is defined (e.g., place the
_sentry_before_send function definition immediately before the sentry_sdk.init
call) and keep existing names and parameters unchanged.
```python
        enable_logs=True,
        event_level="error",  # Only ERROR, CRITICAL go to Sentry as events
```
🧩 Analysis chain
🌐 Web query:
For sentry-sdk Python 2.0.x, what parameters are accepted by sentry_sdk.init()? Specifically, are `enable_logs` and `event_level` valid init kwargs, and what is the recommended way to send only ERROR+ logs?
💡 Result:
For sentry-sdk Python 2.0.x, sentry_sdk.init accepts parameters including dsn, debug, traces_sample_rate, enable_logs (to enable Sentry structured logs), integrations (list of Integration instances), and many others like release, environment, sample_rate, before_send, etc. Full list in official docs and API reference.

- enable_logs is a valid top-level kwarg (citations: 4, 6, 10, 48; confidence: high). It enables sending logs via sentry_sdk.logger (available since 2.35.0).
- event_level is NOT a valid top-level init kwarg (citations: 1, 3, 48; confidence: high). It is an option for LoggingIntegration (event_level=logging.ERROR by default, controls when log records create error events) (citations: 6, 50, 51; confidence: high).

Recommended way to send only ERROR+ logs: use enable_logs=True for structured logs via sentry_sdk.logger.error (and higher). For the standard logging module, customize LoggingIntegration with sentry_logs_level=logging.ERROR (captures ERROR+ as Sentry logs) and/or event_level=logging.ERROR (sends ERROR+ logs as events), passed via integrations=[LoggingIntegration(...)]. Example:

```python
import logging

import sentry_sdk
from sentry_sdk.integrations.logging import LoggingIntegration

sentry_sdk.init(
    dsn="DSN",
    enable_logs=True,  # Optional, for sentry_sdk.logger
    integrations=[
        LoggingIntegration(
            sentry_logs_level=logging.ERROR,
            event_level=logging.ERROR,
        ),
    ],
)
```

This filters to ERROR and above per official logging docs (citations: 6, 10, 50; confidence: high).
Citations:
- 1: https://docs.sentry.io/platforms/python/configuration/options/
- 2: https://docs.sentry.io/platforms/python/configuration/options.md
- 3: https://docs.sentry.io/platforms/python/
- 4: https://docs.sentry.io/platforms/python/integrations/logging/
- 5: https://docs.sentry.io/platforms/python/logs.md
- 6: https://getsentry.github.io/sentry-python/api.html
- 7: https://docs.sentry.io/platforms/python/integrations/logging.md
- 8: https://getsentry.github.io/sentry-python/_modules/sentry_sdk/integrations/logging.html
🏁 Script executed:

```shell
cat -n tcf_core/settings/base.py | sed -n '170,190p'
```

Repository: thecourseforum/theCourseForum2
Length of output: 1040
Remove event_level from sentry_sdk.init() parameters and use LoggingIntegration instead.
The event_level parameter is not a valid top-level argument for sentry_sdk.init(). It only accepts event_level as a configuration option for LoggingIntegration. Passing it directly will cause a TypeError: unexpected keyword argument 'event_level' at startup.
To achieve the intent of sending only ERROR and above logs, import and add LoggingIntegration to the integrations list:
```python
import logging

from sentry_sdk.integrations.logging import LoggingIntegration

sentry_sdk.init(
    dsn=SENTRY_DSN,
    integrations=[
        DjangoIntegration(),
        LoggingIntegration(event_level=logging.ERROR),
    ],
    enable_logs=True,
    # ... other parameters
)
```

Remove line 178 entirely.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tcf_core/settings/base.py` around lines 177 - 178, Remove the invalid
event_level keyword from the sentry_sdk.init(...) call and instead add
LoggingIntegration to the integrations list: import
sentry_sdk.integrations.logging.LoggingIntegration and Python's logging, then
call sentry_sdk.init(dsn=SENTRY_DSN, integrations=[DjangoIntegration(),
LoggingIntegration(event_level=logging.ERROR)], enable_logs=True, ...) so that
only ERROR and above are sent via the logging integration; ensure you delete the
existing event_level argument and update imports accordingly.
```python
    from tcf_core.request_logging import request_context
    context = request_context.get()
```
🧩 Analysis chain
🏁 Script executed:

```shell
#!/bin/bash
set -euo pipefail
echo "1) Find candidate modules named request_logging.py"
fd -i '^request_logging\.py$'
echo
echo "2) Locate request_context symbol definitions/usages"
rg -n -C3 --type=py '\brequest_context\b'
echo
echo "3) Inspect tcf_core package exports (if present)"
fd -i '^__init__\.py$' tcf_core --exec sed -n '1,200p' {}
```

Repository: thecourseforum/theCourseForum2
Length of output: 1195

🏁 Script executed:

```shell
# Search for request_logging across all file types
rg -i 'request_logging' --type=py
# Search for request_context definitions (not just usage)
rg -n 'def\s+request_context|request_context\s*=' --type=py
# Look for modules that might define context or logging utilities
fd --type f -name '*.py' | xargs grep -l 'context' | head -20
# Check for any Python files in tcf_core with logging or context in name
fd --type f -path 'tcf_core/*' -name '*.py' | sort
```

Repository: thecourseforum/theCourseForum2
Length of output: 573

🏁 Script executed:

```shell
# List all Python files in tcf_core
find tcf_core -type f -name '*.py' | head -30
# Search for any mention of request_context (definition or usage)
rg 'request_context' --type=py -A 2 -B 2
# Check if there's a logging module or middleware
find tcf_core -type f -name '*logging*' -o -name '*context*' -o -name '*middleware*'
```

Repository: thecourseforum/theCourseForum2
Length of output: 915

🏁 Script executed:

```shell
# Check context_processors.py
cat -n tcf_core/context_processors.py
# Check record_middleware.py
cat -n tcf_core/settings/record_middleware.py
# Search for ContextVar or contextvars usage (common pattern for request context)
rg 'contextvars|ContextVar' --type=py -A 3 -B 1
```

Repository: thecourseforum/theCourseForum2
Length of output: 2030

🏁 Script executed:

```shell
# Check cognito_middleware.py
cat -n tcf_core/cognito_middleware.py
# Check handle_exceptions_middleware.py
cat -n tcf_core/settings/handle_exceptions_middleware.py
# Search for any comment or reference to request_id or sentry
rg 'request_id|sentry' --type=py -B 2 -A 2 | head -50
```

Repository: thecourseforum/theCourseForum2
Length of output: 4535

🏁 Script executed:

```shell
# Read the _sentry_before_send function completely
sed -n '180,220p' tcf_core/settings/base.py
# Check if request_context might be from a third-party library
rg 'import.*request_context|from.*request_context' --type=py
# Search for any TODO or FIXME comments related to this
rg 'TODO|FIXME' tcf_core/settings/base.py -B 2 -A 2
```

Repository: thecourseforum/theCourseForum2
Length of output: 1710
Fix broken request_context import; module does not exist in codebase.
The import at lines 202-203 references a non-existent tcf_core.request_logging module. The module and request_context are not defined anywhere in the repository. Either implement the missing request_logging module with a request_context object, or remove this code block if request_id tagging is not essential for Sentry events.
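If the first option is taken, a minimal sketch of such a module could use a `contextvars.ContextVar` populated by middleware. Everything here is hypothetical (the module path, middleware class name, and `request_id` key are assumptions, not code that exists in the repository):

```python
# tcf_core/request_logging.py (hypothetical)
import uuid
from contextvars import ContextVar

# Holds per-request metadata; None outside a request.
request_context: ContextVar = ContextVar("request_context", default=None)


class RequestContextMiddleware:
    """Assigns a request_id to each request and stores it in the context var."""

    def __init__(self, get_response):
        self.get_response = get_response

    def __call__(self, request):
        token = request_context.set({"request_id": str(uuid.uuid4())})
        try:
            return self.get_response(request)
        finally:
            # Restore the previous value so context never leaks across requests.
            request_context.reset(token)
```

With this in place, `request_context.get()` returns the dict inside a request and `None` outside one, matching how the suggested `_sentry_before_send` hook reads it; the middleware would also need to be added to `MIDDLEWARE` in settings.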
🧰 Tools
🪛 GitHub Actions: Continuous Integration
[error] 202-202: pylint (E0401): Unable to import 'tcf_core.request_logging' (import-error)
[error] 202-202: pylint (C0415): Import outside toplevel (tcf_core.request_logging.request_context) (import-outside-toplevel)
[error] 202-202: pylint (E0611): No name 'request_logging' in module 'tcf_core' (no-name-in-module)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tcf_core/settings/base.py` around lines 202 - 203, The import of
request_context from tcf_core.request_logging is broken because that module
doesn't exist; either implement a tcf_core.request_logging module that exposes a
request_context object (with get() method used by settings.base) and wire it
into the request lifecycle, or remove the two lines from settings.base (the
"from tcf_core.request_logging import request_context" and "context =
request_context.get()") and instead obtain request_id/tag Sentry context at
request-handling time where an actual request context exists (e.g., middleware
or the function that initializes Sentry), updating any references to
request_context.get() to use the new source; specifically edit settings.base to
eliminate or replace uses of request_context and ensure Sentry tagging uses the
new request-scoped mechanism.
What I did
Questions/Discussions/Notes
`SENTRY_DSN` env variable to the deployment. For now, we can use my personal Sentry as a trial, but we will need to move to a tCF Sentry in the future.