feat: Real-time exports, phase 2 - Intacct #695
Conversation
Walkthrough

This update introduces a new real-time export feature for workspace schedules, adds a corresponding database field and migration, and adapts scheduling and export logic throughout the application to support this capability. It also refines error handling, updates function signatures for clarity and flexibility, removes deprecated scheduling APIs, and adds comprehensive signal handling and tests for expense group settings.
Sequence Diagram(s)

Real-Time Export Scheduling and Export Flow

```mermaid
sequenceDiagram
    participant User
    participant API
    participant WorkspaceSchedule
    participant Scheduler
    participant Exporter
    User->>API: Update workspace schedule (enable real-time export)
    API->>WorkspaceSchedule: Set is_real_time_export_enabled = True
    API->>Scheduler: Remove existing schedule if real-time export enabled
    Note right of Scheduler: No periodic job scheduled
    User->>API: Trigger export (manual or via event)
    API->>Exporter: export_to_intacct(triggered_by=...)
    Exporter->>API: Export result
```
Expense Group Settings State Change Signal

```mermaid
sequenceDiagram
    participant ExpenseGroupSettings
    participant SignalHandler
    participant Configuration
    participant AsyncTaskQueue
    ExpenseGroupSettings->>SignalHandler: pre_save event
    SignalHandler->>Configuration: Fetch workspace config
    SignalHandler->>SignalHandler: Check for state changes
    alt State changed and config enabled
        SignalHandler->>AsyncTaskQueue: async_task(create_expense_groups, ...)
    end
```
Actionable comments posted: 7
🔭 Outside diff range comments (2)
apps/workspaces/actions.py (1)
Lines 23-29: ⚠️ Potential issue — Update function docstring to match signature

The docstring still references the removed export_mode parameter but doesn't document the new triggered_by parameter.

```diff
 """
 Export expenses to Intacct
 :param workspace_id: Workspace ID
-:param export_mode: Export mode
 :param expense_group_ids: Expense group IDs
+:param triggered_by: Source that triggered the export
 :return: None
 """
```

apps/fyle/tasks.py (1)
Lines 81-90: ⚠️ Potential issue — schedule_expense_group_creation now calls the updated signature with too few arguments

create_expense_groups requires four positional arguments (workspace_id, fund_source, task_log, imported_from) after this PR, but the async invocation only passes three. This will raise TypeError: create_expense_groups() missing 1 required positional argument: 'imported_from'.

Suggested fix (and import the enum at top):

```diff
-from django_q.tasks import async_task
+from django_q.tasks import async_task
+from fyle_accounting_library.fyle_platform.enums import ExpenseImportSourceEnum
 ...
-    async_task('apps.fyle.tasks.create_expense_groups', workspace_id, fund_source, task_log)
+    async_task(
+        'apps.fyle.tasks.create_expense_groups',
+        workspace_id,
+        fund_source,
+        task_log,
+        ExpenseImportSourceEnum.BACKGROUND_SCHEDULE
+    )
```
🧹 Nitpick comments (8)
apps/workspaces/models.py (1)
Lines 210-210: Consider indexing the new boolean field for performance

The is_real_time_export_enabled field will often be used in filtering schedules. Adding a database index can significantly speed up these queries.
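A hedged illustration of what the indexed field could look like on the model — the field name and default come from this review; the rest is an assumption, not the repository's actual definition:

```python
# Sketch only: adding db_index so schedule queries filtering on this flag stay fast.
is_real_time_export_enabled = models.BooleanField(default=False, db_index=True)
```

Adding the index after the fact would also require a small follow-up migration.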
tests/test_workspaces/test_views.py (1)

Lines 2-2: Remove unused import to satisfy Flake8 / Ruff

datetime is imported but never used, causing the CI failure (F401). Either delete the import or use it (e.g., for the dynamic start_date fix below).

```diff
-from datetime import datetime
```

🧰 Tools
🪛 Ruff (0.11.9)
2-2: datetime.datetime imported but unused — Remove unused import: datetime.datetime (F401)
🪛 GitHub Actions: Continuous Integration
[error] 2-2: flake8: F401 'datetime.datetime' imported but unused
apps/fyle/signals.py (1)
Lines 80-100: Signal fires async tasks inside pre_save – risk of double execution

Because the trigger is in pre_save, the async task is queued even when the subsequent DB save eventually rolls back (e.g., due to a validator error or transaction rollback). This can lead to “phantom” tasks working with stale data or creating duplicates.

If possible, move this logic to post_save after the row is safely persisted, or wrap the async call in an outer transaction/on-commit hook:

```diff
 from django.db import transaction
 # ...
 def run_pre_save_expense_group_setting_triggers(...):
     # ...
     if should_trigger:
-        async_task(...)
+        transaction.on_commit(
+            lambda: async_task(
+                'apps.fyle.tasks.create_expense_groups',
+                workspace_id=instance.workspace_id,
+                fund_source=[FundSourceEnum.PERSONAL],
+                task_log=None,
+                imported_from=ExpenseImportSourceEnum.CONFIGURATION_UPDATE,
+            )
+        )
```

This guarantees the task is enqueued only when the surrounding transaction succeeds.

apps/fyle/helpers.py (1)
Lines 490-505: Guard against empty message/error in helper

update_task_log_post_import sets:

```python
task_log.detail = {"message": message} if message else {"error": error}
```

If both arguments are None, the detail becomes {'error': None}, which is misleading. Consider validating that exactly one of the two is provided.

```diff
-def update_task_log_post_import(task_log: TaskLog, status: str, message: str = None, error: str = None) -> None:
+def update_task_log_post_import(
+    task_log: TaskLog,
+    status: str,
+    *,
+    message: str | None = None,
+    error: str | None = None
+) -> None:
     """Helper function to update task log status and details"""
-    if task_log:
-        task_log.status = status
-        task_log.detail = {"message": message} if message else {"error": error}
+    if task_log:
+        if not any([message, error]):
+            raise ValueError("Either `message` or `error` must be supplied")
+        task_log.status = status
+        task_log.detail = {"message": message} if message else {"error": error}
```

apps/workspaces/views.py (1)
Lines 487-489: Redundant get_queryset override

The override simply delegates to super() and adds no value—remove it to keep the view concise.

```diff
-    def get_queryset(self) -> QuerySet[LastExportDetail]:
-        return super().get_queryset()
```

apps/workspaces/tasks.py (2)
Lines 73-83: Potential data-shape issue when persisting additional_email_options

additional_email_options.append(email_added) will create a nested list if email_added itself is a list. If the caller is already sending a single dict, append is correct; if it is sending multiple addresses, prefer extend instead.

Please confirm the expected type; if it is a list, adjust:

```diff
-        ws_schedule.additional_email_options.append(email_added)
+        ws_schedule.additional_email_options.extend(email_added)
```
Lines 81-101: Atomicity & race-condition window

Deletion/creation of the django-q schedule and updating the WorkspaceSchedule happen in separate DB transactions. A concurrent request could observe an inconsistent state (e.g., is_real_time_export_enabled=True but still referencing an old schedule). Wrapping the whole branch in a single transaction.atomic() block will guarantee consistency.

Not mandatory but highly recommended for correctness.
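As a rough sketch of the suggested guard — assuming the ws_schedule and Schedule objects referenced elsewhere in this review — the schedule swap could be wrapped like this:

```python
from django.db import transaction

# Sketch only: perform the flag update and the schedule deletion in one transaction
# so concurrent readers never observe a half-updated WorkspaceSchedule.
with transaction.atomic():
    old_schedule = ws_schedule.schedule
    ws_schedule.is_real_time_export_enabled = True
    ws_schedule.schedule = None
    ws_schedule.save()
    if old_schedule:
        old_schedule.delete()
```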
tests/test_fyle/test_signals.py (1)
Lines 11-14: ExpenseGroupSettings.objects.get may raise DoesNotExist in a clean DB

The first test assumes a default ExpenseGroupSettings row exists. In fresh databases this fails before the signal is even triggered.

Safer pattern:

```python
expense_group_settings, _ = ExpenseGroupSettings.objects.get_or_create(workspace_id=workspace_id)
```

This keeps the test focused on signal behaviour, not fixture setup.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (24)
- apps/fyle/helpers.py (2 hunks)
- apps/fyle/signals.py (2 hunks)
- apps/fyle/tasks.py (8 hunks)
- apps/fyle/views.py (2 hunks)
- apps/internal/tasks.py (1 hunks)
- apps/sage_intacct/queue.py (4 hunks)
- apps/sage_intacct/views.py (2 hunks)
- apps/workspaces/actions.py (2 hunks)
- apps/workspaces/apis/advanced_settings/serializers.py (1 hunks)
- apps/workspaces/apis/advanced_settings/triggers.py (1 hunks)
- apps/workspaces/migrations/0045_workspaceschedule_is_real_time_export_enabled.py (1 hunks)
- apps/workspaces/models.py (1 hunks)
- apps/workspaces/tasks.py (4 hunks)
- apps/workspaces/urls.py (0 hunks)
- apps/workspaces/views.py (3 hunks)
- requirements.txt (1 hunks)
- tests/sql_fixtures/reset_db_fixtures/reset_db.sql (7 hunks)
- tests/test_fyle/fixtures.py (1 hunks)
- tests/test_fyle/test_signals.py (1 hunks)
- tests/test_workspaces/fixtures.py (1 hunks)
- tests/test_workspaces/test_apis/test_advanced_settings/fixtures.py (2 hunks)
- tests/test_workspaces/test_tasks.py (3 hunks)
- tests/test_workspaces/test_views.py (3 hunks)
- workers/export/worker.py (0 hunks)
💤 Files with no reviewable changes (2)
- apps/workspaces/urls.py
- workers/export/worker.py
🧰 Additional context used
🧬 Code Graph Analysis (3)
apps/sage_intacct/views.py (1)
- apps/workspaces/actions.py (1): export_to_intacct (22-128)
apps/internal/tasks.py (1)
- apps/workspaces/actions.py (1): export_to_intacct (22-128)
apps/fyle/helpers.py (1)
- apps/tasks/models.py (1): TaskLog (47-78)
🪛 Ruff (0.11.9)
apps/workspaces/actions.py
22-22: Do not use mutable data structures for argument defaults — Replace with None; initialize within function (B006)

tests/test_workspaces/test_views.py
2-2: datetime.datetime imported but unused — Remove unused import: datetime.datetime (F401)
🪛 GitHub Actions: Continuous Integration
tests/test_workspaces/test_views.py
[error] 2-2: flake8: F401 'datetime.datetime' imported but unused
🔇 Additional comments (30)
tests/test_workspaces/fixtures.py (1)
Lines 1-67: ✅ Verification successful

Removal of deprecated workspace_schedule fixture

The workspace_schedule key has been removed to align with the deprecation of scheduling endpoints. Ensure no other tests or fixtures reference this field to prevent breakages. Run this script to confirm no leftover references:

🏁 Script executed:

```bash
#!/bin/bash
rg -n "workspace_schedule"
```

Length of output: 9356

Deprecated fixture ‘workspace_schedule’ removal verified

No instances of the singular workspace_schedule key remain in tests or fixtures. All matches for workspace_schedules (plural) are expected in SQL fixtures and code.

requirements.txt (1)

Lines 13-13: Upgrade fyle-accounting-mappings to 2.2.0

The fyle-accounting-mappings package has been updated to 2.2.0. Verify compatibility with the new real-time export logic and run the full test suite to catch any integration issues.

tests/test_workspaces/test_apis/test_advanced_settings/fixtures.py (2)

Lines 27-27: Include new is_real_time_export_enabled flag in fixture

Added is_real_time_export_enabled: False to workspace_schedules under advanced_settings. This matches the updated model and ensures tests cover the default state.

Lines 56-56: Include new is_real_time_export_enabled flag in response fixture

Added is_real_time_export_enabled: False to the API response fixture. This keeps the test fixture in sync with the serializer changes.

tests/test_fyle/fixtures.py (1)

Lines 43-46: ❓ Verification inconclusive

Supplement fixture with new webhook fields

The added keys is_posted_at_null, report_title, payment_number, and masked_corporate_card_number ensure the fixture matches the updated webhook payload structure. Please verify that downstream tests and any schema validations are updated to consume these fields. Run the following script to confirm tests reference the new keys:

🏁 Script executed:

```bash
#!/bin/bash
# Verify new fixture keys are used in test suite
rg -n "report_title" -A2 tests/test_fyle
rg -n "payment_number" -A2 tests/test_fyle
rg -n "masked_corporate_card_number" -A2 tests/test_fyle
```

Length of output: 14594

Test Fixtures Include New Webhook Fields – Please Verify Schema/Model Updates

The test suite’s fixtures under tests/test_fyle/fixtures.py now reference the added keys (is_posted_at_null, report_title, payment_number, masked_corporate_card_number) in dozens of scenarios (e.g. initial block at lines 43–47 and repeats at lines 285, 316, 360, … 1131). This confirms downstream tests consume those fields.

• tests/test_fyle/fixtures.py: initial webhook fixture at lines 43–47 plus ~150 other occurrences
• tests/test_fyle/test_helpers.py: filters based on report_title in multiple tests

Next Steps:
- Manually confirm that any payload‐validation layer (JSON schema, Pydantic/Django models, serializers, etc.) in your webhook handler has been updated to accept and validate these new fields.
tests/test_workspaces/test_tasks.py (3)
Lines 40-41: Added new parameter to match updated function signature

The is_real_time_export_enabled=False parameter is now passed to the schedule_sync function, aligning with changes to the function signature in the main codebase. This maintains backward compatibility with existing test behavior.

Lines 57-58: Added new parameter to maintain test consistency

The same parameter is added to the second call to schedule_sync, maintaining consistency with the updated function signature.

Lines 145-146: Added new parameter to email notification test

The is_real_time_export_enabled=False parameter has been consistently applied to the schedule_sync call in the email notification test case.

apps/workspaces/apis/advanced_settings/serializers.py (1)
Lines 149-150: Added new field to support real-time export feature

The is_real_time_export_enabled field has been added to the WorkspaceSchedulesSerializer, exposing this configuration through the API. This aligns with the model changes and enables clients to control real-time export behavior.

apps/workspaces/migrations/0045_workspaceschedule_is_real_time_export_enabled.py (1)

Lines 1-18: Clean migration for adding real-time export field

This migration properly adds the is_real_time_export_enabled boolean field to the WorkspaceSchedule model with a default value of False. The migration follows Django conventions and appropriately depends on the previous migration.
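For orientation, a minimal sketch of what such a migration typically contains — the field name and default come from this review, while the dependency name below is an assumed placeholder, not the real predecessor:

```python
# Hypothetical reconstruction of migration 0045; only the AddField with default=False
# is confirmed by the review.
from django.db import migrations, models


class Migration(migrations.Migration):
    dependencies = [
        ('workspaces', '0044_previous_migration'),  # assumed predecessor name
    ]

    operations = [
        migrations.AddField(
            model_name='workspaceschedule',
            name='is_real_time_export_enabled',
            field=models.BooleanField(default=False),
        ),
    ]
```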
apps/workspaces/apis/advanced_settings/triggers.py (1)

Lines 20-21: Updated trigger to pass new real-time export parameter

The run_post_configurations_triggers method now forwards the is_real_time_export_enabled setting from the workspace schedule to the schedule_sync function, ensuring the real-time export configuration is properly applied.

apps/fyle/views.py (2)
Lines 75-80: Good update to use named parameters instead of positional arguments.

This change improves code readability by explicitly naming the parameters being passed to the create_expense_groups function, making the intent clearer and the code more maintainable.

Lines 515-515: Good refactoring to use named parameters.

Consistent with the other changes, switching from positional to named parameters makes the function call clearer and reduces the risk of parameter order mistakes during future modifications.

apps/sage_intacct/queue.py (4)
apps/sage_intacct/queue.py (4)
92-93
: Good defensive programming improvement.Adding the check for
triggered_by
being truthy prevents accidentally overwriting the existing value withNone
or an empty value, which improves robustness of the task log update logic.
193-194
: Good defensive programming improvement.Same improvement as in the previous function - checking that
triggered_by
is truthy before updating the task log'striggered_by
field prevents potential data loss.
257-258
: Good defensive programming improvement.Consistent application of the pattern to check for truthy
triggered_by
before updating the task log in the bills creation scheduling function.
321-322
: Good defensive programming improvement.Consistent application of the same defensive check in the charge card transaction creation scheduling function, ensuring the
triggered_by
field is only updated when a valid value is provided.apps/sage_intacct/views.py (2)
Lines 16-16: Appropriate import added for the ExpenseImportSourceEnum.

This import is required for the changes made to the export_to_intacct function call on line 128.

Lines 128-128: Good update to use named parameters with explicit source tracking.

This change aligns with the updated export_to_intacct function signature in workspaces/actions.py, which now determines the export mode internally based on the triggered_by parameter. Using DASHBOARD_SYNC as the source correctly identifies that this export was triggered by a user action in the dashboard.
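As an illustration of the call shape this comment describes — a hedged sketch rather than the repository's verbatim code:

```python
# Module paths are assumed; the enum import mirrors the one quoted earlier in this review.
from fyle_accounting_library.fyle_platform.enums import ExpenseImportSourceEnum
from apps.workspaces.actions import export_to_intacct

export_to_intacct(
    workspace_id=workspace_id,
    expense_group_ids=expense_group_ids,
    triggered_by=ExpenseImportSourceEnum.DASHBOARD_SYNC,
)
```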
apps/internal/tasks.py (1)

Lines 70-70: Well-structured update to the export function call.

This change correctly:
- Removes the now-obsolete positional 'AUTO' export mode parameter
- Uses explicit keyword arguments for better readability
- Sets the triggered_by source to INTERNAL, properly identifying this as an automatic internal re-export

The change aligns with the updated signature of export_to_intacct in workspaces/actions.py, which now determines the export mode internally based on the source.

apps/workspaces/actions.py (2)
Lines 36-36: New export mode logic looks good

The refactored code now determines export mode internally based on the triggered_by parameter, which centralizes this logic and simplifies the API.

Lines 51-59: Consistent propagation of triggered_by parameter

The changes have properly propagated the triggered_by parameter to all scheduling functions, making it available for downstream processing.

Also applies to: 60-67, 69-76, 85-92, 94-101, 103-110, 112-119
tests/sql_fixtures/reset_db_fixtures/reset_db.sql (7)
Lines 5-6: Update pg_dump version comment

The dump header is updated to reflect pg_dump version 15.13; this is purely informational and has no functional impact.

Lines 5964-5965: Register new migrations in django_migrations fixture

Entries for the failed_events and workspace_schedules migrations have been appended in sequence, matching your migration files.

Lines 9415-9415: Update failed_events COPY statement

The COPY command now includes the is_resolved column, keeping the fixture in sync with the schema.

Lines 9895-9895: Update workspace_schedules COPY statement

The COPY command now includes is_real_time_export_enabled, reflecting the new column in the schema.

Lines 10005-10005: Set django_migrations_id_seq to new max ID

The sequence is updated to 236, which matches the highest migration ID in this dump.

Lines 2799-2801: ✅ Verification successful

Add is_real_time_export_enabled column to workspace_schedules

This aligns the fixture with the new migration (0045). Please verify that the corresponding migration provides a default value or backfills existing rows to satisfy the NOT NULL constraint.

🏁 Script executed:

```bash
#!/bin/bash
# Verify migration default/backfill for is_real_time_export_enabled
rg -n "is_real_time_export_enabled" -C2 apps/workspaces/migrations/0045_workspaceschedule_is_real_time_export_enabled.py
```

Length of output: 321

Migration Backfill Verified

The 0045_workspaceschedule_is_real_time_export_enabled migration adds is_real_time_export_enabled with default=False, satisfying the NOT NULL constraint and backfilling existing rows accordingly. The SQL fixture and migration are now in sync.

Lines 2861-2862: ❓ Verification inconclusive

Add is_resolved column to failed_events

Matches the new migration (0004). Confirm that the migration handles existing data by providing a default or backfilling this non-nullable column.

🏁 Script executed:

```bash
#!/bin/bash
# Verify migration default/backfill for is_resolved
rg -n "is_resolved" -C2 apps/rabbitmq/migrations/0004_failedevent_is_resolved.py
```

Length of output: 177

🏁 Script executed:

```bash
#!/bin/bash
set -e
echo "=== Locating all migrations directories ==="
find . -type d -name migrations
echo
echo "=== Listing Python files in each migrations directory ==="
find . -type d -name migrations | while read dir; do
  echo "Files in $dir:"
  ls -1 "$dir"/*.py || echo "  (no .py files)"
done
echo
echo "=== Searching for 'is_resolved' in all migration files ==="
rg -n "is_resolved" -C2 --glob "*/migrations/*.py" || echo "No occurrences found"
```

Length of output: 20808

🏁 Script executed:

```bash
#!/bin/bash
set -e
echo "=== Locate migration files related to failed events ==="
find . -type f -name "*fail*event*.py" || echo "  (no matching files)"
echo
echo "=== Search for 'is_resolved' globally ==="
rg -n "is_resolved" -C3 . || echo "  (no occurrences found)"
echo
echo "=== Search for AddField operations adding is_resolved in migrations ==="
rg -n "AddField" -C5 --glob "*/migrations/*.py" | rg -n "is_resolved" -C5 || echo "  (no AddField for is_resolved found)"
```

Length of output: 25812

Missing migration for failed_events.is_resolved

I wasn’t able to locate apps/rabbitmq/migrations/0004_failedevent_is_resolved.py in the codebase. Please verify:
- That the migration file exists at the correct path (apps/rabbitmq/migrations/0004_failedevent_is_resolved.py).
- It adds the is_resolved column with a default or includes a backfill step for existing rows to satisfy the NOT NULL constraint.

apps/workspaces/tasks.py (1)

Lines 116-146: Minor: clarify keyword arguments in export_to_intacct

Good job converting to keyword arguments – increases readability and saves positional-order bugs.
```diff
@@ -19,7 +19,7 @@
 logger.level = logging.INFO


-def export_to_intacct(workspace_id: int, export_mode: bool = None, expense_group_ids: list = [], triggered_by: ExpenseImportSourceEnum = None) -> None:
+def export_to_intacct(workspace_id: int, expense_group_ids: list = [], triggered_by: ExpenseImportSourceEnum = None) -> None:
```
Replace mutable default argument with None

The function signature uses a mutable default argument (list = []), which can lead to unexpected behavior if the function is called multiple times and modifies this list.

```diff
-def export_to_intacct(workspace_id: int, expense_group_ids: list = [], triggered_by: ExpenseImportSourceEnum = None) -> None:
+def export_to_intacct(workspace_id: int, expense_group_ids: list = None, triggered_by: ExpenseImportSourceEnum = None) -> None:
```

Then initialize the list inside the function:

```diff
+    if expense_group_ids is None:
+        expense_group_ids = []
```
📝 Committable suggestion

‼️ IMPORTANT: Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```python
def export_to_intacct(
    workspace_id: int,
    expense_group_ids: list = None,
    triggered_by: ExpenseImportSourceEnum = None,
) -> None:
    if expense_group_ids is None:
        expense_group_ids = []
    # … rest of the function …
```
🧰 Tools
🪛 Ruff (0.11.9)
22-22: Do not use mutable data structures for argument defaults — Replace with None; initialize within function (B006)
🤖 Prompt for AI Agents
In apps/workspaces/actions.py at line 22, the function export_to_intacct uses a
mutable default argument list = [], which can cause unexpected behavior. Change
the default value of expense_group_ids to None in the function signature, and
inside the function, check if expense_group_ids is None and if so, initialize it
to an empty list. This prevents shared mutable defaults across function calls.
url = "/api/workspaces/{}/export_detail/?start_date=2025-05-01".format(workspace_id) | ||
|
||
api_client.credentials( | ||
HTTP_AUTHORIZATION="Bearer {}".format(test_connection.access_token) | ||
) | ||
|
||
LastExportDetail.objects.get(workspace_id=workspace_id) | ||
# last_exported_at=datetime.now(), total_expense_groups_count=1 | ||
|
||
TaskLog.objects.create(type='CREATING_EXPENSE_REPORT', status='COMPLETE', workspace_id=workspace_id) | ||
|
||
failed_count = TaskLog.objects.filter(workspace_id=workspace_id, status__in=['FAILED', 'FATAL']).count() | ||
|
||
response = api_client.get(url) | ||
assert response.status_code == 200 |
🛠️ Refactor suggestion

Hard-coded future date makes the test non-deterministic

start_date=2025-05-01 will make this test fail on any CI run executed before that date. Generating the date at runtime keeps the test evergreen and also gives purpose to the datetime import.

```diff
-    url = "/api/workspaces/{}/export_detail/?start_date=2025-05-01".format(workspace_id)
+    start_date = datetime.utcnow().date().isoformat()
+    url = f"/api/workspaces/{workspace_id}/export_detail/?start_date={start_date}"
```
📝 Committable suggestion

‼️ IMPORTANT: Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```python
start_date = datetime.utcnow().date().isoformat()
url = f"/api/workspaces/{workspace_id}/export_detail/?start_date={start_date}"

api_client.credentials(
    HTTP_AUTHORIZATION="Bearer {}".format(test_connection.access_token)
)

LastExportDetail.objects.get(workspace_id=workspace_id)
# last_exported_at=datetime.now(), total_expense_groups_count=1

TaskLog.objects.create(type='CREATING_EXPENSE_REPORT', status='COMPLETE', workspace_id=workspace_id)

failed_count = TaskLog.objects.filter(workspace_id=workspace_id, status__in=['FAILED', 'FATAL']).count()

response = api_client.get(url)
assert response.status_code == 200
```
🤖 Prompt for AI Agents
In tests/test_workspaces/test_views.py around lines 461 to 475, the test uses a
hard-coded future date "2025-05-01" for the start_date query parameter, which
makes the test non-deterministic and prone to failure before that date. Replace
the hard-coded date with a dynamically generated date at runtime using the
datetime module, such as today's date or a relative date, to ensure the test
remains evergreen and deterministic.
```python
def retrieve(self, request: Request, *args, **kwargs) -> Response:
    instance = self.get_object()
    serializer = self.get_serializer(instance)
    response_data = serializer.data

    start_date = request.query_params.get('start_date')

    if start_date and response_data:
        misc_task_log_types = ['CREATING_REIMBURSEMENT', 'CREATING_AP_PAYMENT', 'FETCHING_EXPENSES']

        task_logs = TaskLog.objects.filter(
            ~Q(type__in=misc_task_log_types),
            workspace_id=kwargs['workspace_id'],
            updated_at__gte=start_date,
            status='COMPLETE',
        ).order_by('-updated_at')

        successful_count = task_logs.count()

        failed_count = TaskLog.objects.filter(
            ~Q(type__in=misc_task_log_types),
            status__in=['FAILED', 'FATAL'],
            workspace_id=kwargs['workspace_id'],
        ).count()

        response_data.update({
            'repurposed_successful_count': successful_count,
            'repurposed_failed_count': failed_count,
            'repurposed_last_exported_at': task_logs.last().updated_at if task_logs.last() else None
        })
```
🛠️ Refactor suggestion
last() returns the oldest record; failed_count ignores start_date

- With order_by('-updated_at'), queryset.last() yields the oldest record. To capture the most recent export, use first().
- failed_count is not filtered by start_date, so exports prior to the window are included, skewing the count.
- Minor: successful_count can be computed directly from the same task_logs queryset—no change needed.

```diff
-        task_logs = TaskLog.objects.filter(
+        task_logs = TaskLog.objects.filter(
             ~Q(type__in=misc_task_log_types),
             workspace_id=kwargs['workspace_id'],
             updated_at__gte=start_date,
             status='COMPLETE',
         ).order_by('-updated_at')

-        failed_count = TaskLog.objects.filter(
+        failed_count = TaskLog.objects.filter(
             ~Q(type__in=misc_task_log_types),
             status__in=['FAILED', 'FATAL'],
             workspace_id=kwargs['workspace_id'],
+            updated_at__gte=start_date,
         ).count()

         response_data.update({
             'repurposed_successful_count': successful_count,
             'repurposed_failed_count': failed_count,
-            'repurposed_last_exported_at': task_logs.last().updated_at if task_logs.last() else None
+            'repurposed_last_exported_at': task_logs.first().updated_at if task_logs.exists() else None
         })
```
📝 Committable suggestion

‼️ IMPORTANT: Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```python
def retrieve(self, request: Request, *args, **kwargs) -> Response:
    instance = self.get_object()
    serializer = self.get_serializer(instance)
    response_data = serializer.data

    start_date = request.query_params.get('start_date')

    if start_date and response_data:
        misc_task_log_types = ['CREATING_REIMBURSEMENT', 'CREATING_AP_PAYMENT', 'FETCHING_EXPENSES']

        task_logs = TaskLog.objects.filter(
            ~Q(type__in=misc_task_log_types),
            workspace_id=kwargs['workspace_id'],
            updated_at__gte=start_date,
            status='COMPLETE',
        ).order_by('-updated_at')

        successful_count = task_logs.count()

        failed_count = TaskLog.objects.filter(
            ~Q(type__in=misc_task_log_types),
            status__in=['FAILED', 'FATAL'],
            workspace_id=kwargs['workspace_id'],
            updated_at__gte=start_date,
        ).count()

        response_data.update({
            'repurposed_successful_count': successful_count,
            'repurposed_failed_count': failed_count,
            'repurposed_last_exported_at': task_logs.first().updated_at if task_logs.exists() else None
        })
```
🤖 Prompt for AI Agents
In apps/workspaces/views.py lines 490 to 519, replace task_logs.last() with
task_logs.first() to get the most recent export since last() returns the oldest
record when ordered by '-updated_at'. Also, add the filter
updated_at__gte=start_date to the failed_count queryset to ensure it only counts
failures within the specified date range, matching the successful_count filter.
This will correct the counts to reflect the intended time window.
```diff
 def schedule_email_notification(workspace_id: int, schedule_enabled: bool, hours: int) -> None:
     """
     Schedule email notification
     :param workspace_id: workspace id
     :param schedule_enabled: schedule enabled
     :return: None
     """
-    if schedule_enabled:
+    if schedule_enabled and hours:
```
🛠️ Refactor suggestion
hours parameter is undocumented and effectively ignored in the schedule that is created

- The doc-string still lists only workspace_id and schedule_enabled; it should document the new hours parameter.
- Inside the body, hours is only used in the if guard (schedule_enabled and hours) but not when the schedule is actually created – the schedule is always set to run every 24 * 60 minutes (once a day). This makes the extra guard misleading and can silently disable notifications when hours == 0 even though the cadence is fixed to a day.

If you intend the mail job to run daily, drop the hours dependency; if you intend it to be configurable, use the supplied value:

```diff
-    if schedule_enabled and hours:
+    if schedule_enabled:
         ...
-            'minutes': 24 * 60,
+            'minutes': hours * 60 if hours else 24 * 60,
```

Also update the doc-string accordingly.
📝 Committable suggestion

‼️ IMPORTANT: Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```diff
 def schedule_email_notification(workspace_id: int, schedule_enabled: bool, hours: int) -> None:
     """
     Schedule email notification
     :param workspace_id: workspace id
     :param schedule_enabled: schedule enabled
     :return: None
     """
-    if schedule_enabled and hours:
+    if schedule_enabled:
         ...
-            'minutes': 24 * 60,
+            'minutes': hours * 60 if hours else 24 * 60,
         ...
```
🤖 Prompt for AI Agents
In apps/workspaces/tasks.py around lines 31 to 38, the hours parameter is
undocumented and not used in scheduling, causing confusion and potential silent
disabling of notifications. Update the doc-string to include the hours parameter
with its purpose. Then, modify the scheduling logic to use the hours value to
set the schedule interval dynamically (e.g., convert hours to minutes) instead
of a fixed 24*60 minutes, or remove the hours parameter and related condition if
the schedule should always be daily.
```python
if is_real_time_export_enabled:
    # Delete existing schedule since user changed the setting to real time export
    schedule = ws_schedule.schedule
    if schedule:
        ws_schedule.schedule = None
        ws_schedule.save()
        schedule.delete()
else:
    schedule, _ = Schedule.objects.update_or_create(
```
Disabling schedule does not reset is_real_time_export_enabled flag

When a user switches off scheduling (the schedule_enabled=False block below), the boolean stays whatever it was previously. This can lead to a workspace that is “disabled” but still treated as real-time enabled elsewhere.

Add an explicit reset:

```diff
 elif not schedule_enabled and ws_schedule.schedule:
     schedule = ws_schedule.schedule
     ws_schedule.enabled = schedule_enabled
+    ws_schedule.is_real_time_export_enabled = False
     ws_schedule.schedule = None
     ws_schedule.save()
     schedule.delete()
```
Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In apps/workspaces/tasks.py around lines 85 to 93, when disabling the schedule
by setting ws_schedule.schedule to None and deleting the schedule, the
is_real_time_export_enabled flag is not reset, which can cause inconsistent
state. To fix this, explicitly set is_real_time_export_enabled to False when
disabling the schedule and save the ws_schedule object to persist this change.
```python
if len(expense_group_ids):
    if is_state_change_event:
        # Trigger export immediately for customers who have enabled real time export
        is_real_time_export_enabled = WorkspaceSchedule.objects.filter(workspace_id=workspace.id, is_real_time_export_enabled=True).exists()

        # Don't allow real time export if it's not supported for the branded app / setting not enabled
        if not is_real_time_export_enabled or not feature_configuration.feature.real_time_export_1hr_orgs:
            return

    logger.info(f'Exporting expenses for workspace {workspace.id} with expense group ids {expense_group_ids}, triggered by {imported_from}')
```
🛠️ Refactor suggestion
Feature-flag guard bypasses export for unsupported apps but leaves the task log stuck in IN_PROGRESS

When not is_real_time_export_enabled or not feature_configuration.feature.real_time_export_1hr_orgs, the function exits early without updating task_log (set to IN_PROGRESS earlier) – leaving it stuck. Call update_task_log_post_import(task_log, 'COMPLETE') (or similar) before returning.
🤖 Prompt for AI Agents
In apps/fyle/tasks.py around lines 267 to 276, the early return when real-time
export is not enabled or supported causes the task_log to remain in the
IN_PROGRESS state. To fix this, before returning in that condition, call
update_task_log_post_import(task_log, 'COMPLETE') to properly mark the task as
complete and avoid leaving it stuck.
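A hedged sketch of how the early return could close out the task log first, using the update_task_log_post_import helper whose signature is quoted earlier in this review; the message text is illustrative only:

```python
# Sketch only: mark the task log COMPLETE before bailing out so it is not left IN_PROGRESS.
if not is_real_time_export_enabled or not feature_configuration.feature.real_time_export_1hr_orgs:
    update_task_log_post_import(
        task_log,
        'COMPLETE',
        message='Real-time export is not enabled for this workspace'  # assumed wording
    )
    return
```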
```python
expense_groups = ExpenseGroup.objects.filter(expenses__id__in=[expense_ids], workspace_id=workspace.id, exported_at__isnull=True).distinct('id').values('id')
expense_group_ids = [expense_group['id'] for expense_group in expense_groups]
```
Incorrect wrapper causes id__in lookup to receive a list of QuerySets

ExpenseGroup.objects.filter(expenses__id__in=[expense_ids], …) wraps the QuerySet in an additional list, producing [&lt;QuerySet …&gt;], which Django will attempt to coerce into ints and fail.

```diff
-expense_groups = ExpenseGroup.objects.filter(expenses__id__in=[expense_ids], workspace_id=workspace.id, exported_at__isnull=True)...
+expense_groups = ExpenseGroup.objects.filter(
+    expenses__id__in=expense_ids,
+    workspace_id=workspace.id,
+    exported_at__isnull=True
+).distinct('id').values('id')
```
📝 Committable suggestion

‼️ IMPORTANT: Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```python
expense_groups = ExpenseGroup.objects.filter(
    expenses__id__in=expense_ids,
    workspace_id=workspace.id,
    exported_at__isnull=True
).distinct('id').values('id')
expense_group_ids = [expense_group['id'] for expense_group in expense_groups]
```
🤖 Prompt for AI Agents
In apps/fyle/tasks.py around lines 264 to 266, the filter uses
expenses__id__in=[expense_ids], which wraps the QuerySet in a list, causing a
type error. Remove the square brackets around expense_ids so that the filter
receives the QuerySet or list directly, i.e., use expenses__id__in=expense_ids
instead of expenses__id__in=[expense_ids].
Coverage Report
* feat: Real-time exports, phase 2 - Intacct
* fix lint
* fix turn off bug
Description
Please add PR description here, add screenshots if needed
Clickup
https://app.clickup.com/
Summary by CodeRabbit
New Features
Improvements
Bug Fixes
Removals
Tests
Chores