
feat: Real-time exports, phase 2 - Intacct #695


Merged
4 commits merged into master on Jun 2, 2025

Conversation

@ashwin1111 (Contributor) commented May 21, 2025

Description

Please add PR description here, add screenshots if needed

Clickup

https://app.clickup.com/

Summary by CodeRabbit

  • New Features

    • Added a real-time export option to workspace scheduling, including a new setting to enable or disable real-time exports.
    • Last export details endpoint now provides additional statistics, such as successful and failed export counts and last export timestamp.
  • Improvements

    • Enhanced error handling and task logging for import and export processes.
    • Refined scheduling logic to support real-time export and improved email notification scheduling.
    • Improved clarity by using keyword arguments in key function calls.
    • Export mode is now determined automatically based on the trigger source.
  • Bug Fixes

    • Fixed export task scheduling to prevent overwriting trigger sources with invalid values.
  • Removals

    • Removed the workspace schedule API endpoint and related views.
  • Tests

    • Added and updated tests to cover new scheduling, export, and signal logic, and removed outdated schedule API tests.
  • Chores

    • Updated dependencies and database fixtures to support new scheduling features.

coderabbitai bot commented May 21, 2025

Warning

Rate limit exceeded

@ashwin1111 has exceeded the limit for the number of commits or files that can be reviewed per hour. Please wait 15 minutes and 53 seconds before requesting another review.


📥 Commits

Reviewing files that changed from the base of the PR and between 7a6267b and 07aad31.

📒 Files selected for processing (2)
  • apps/sage_intacct/queue.py (4 hunks)
  • apps/workspaces/tasks.py (4 hunks)

Walkthrough

This update introduces a new real-time export feature for workspace schedules, adds a corresponding database field and migration, and adapts scheduling and export logic throughout the application to support this capability. It also refines error handling, updates function signatures for clarity and flexibility, removes deprecated scheduling APIs, and adds comprehensive signal handling and tests for expense group settings.

Changes

  • apps/fyle/helpers.py: Refactored error handling in handle_import_exception to accept an optional TaskLog; added an update_task_log_post_import helper for centralized task log updates.
  • apps/fyle/signals.py, tests/test_fyle/test_signals.py: Added a pre-save signal for ExpenseGroupSettings to trigger async tasks on state changes; comprehensive tests for the signal logic and edge cases.
  • apps/fyle/tasks.py: Updated function signatures to accept an optional task_log; centralized task log updates; added feature-flag and import-source logic for exports.
  • apps/fyle/views.py: Updated calls to create_expense_groups to use explicit keyword arguments.
  • apps/internal/tasks.py: Refactored the export_to_intacct call to use only keyword arguments, removing the positional export_mode.
  • apps/sage_intacct/queue.py: Updated logic to only update task_log.triggered_by if the new value is truthy.
  • apps/sage_intacct/views.py: Added a triggered_by=ExpenseImportSourceEnum.DASHBOARD_SYNC argument to the export trigger.
  • apps/workspaces/actions.py: Refactored export_to_intacct to determine export_mode internally based on triggered_by.
  • apps/workspaces/apis/advanced_settings/serializers.py: Added an is_real_time_export_enabled field to WorkspaceSchedulesSerializer.
  • apps/workspaces/apis/advanced_settings/triggers.py: Passed is_real_time_export_enabled to schedule_sync in post-configuration triggers.
  • apps/workspaces/migrations/0045_workspaceschedule_is_real_time_export_enabled.py: Migration adding the is_real_time_export_enabled boolean field to WorkspaceSchedule.
  • apps/workspaces/models.py: Added the is_real_time_export_enabled field to the WorkspaceSchedule model.
  • apps/workspaces/tasks.py: Updated scheduling functions to handle the real-time export flag; altered schedule creation/deletion logic accordingly.
  • apps/workspaces/urls.py, apps/workspaces/views.py, tests/test_workspaces/test_views.py: Removed ScheduleView and related API endpoints/tests; enhanced LastExportDetailView to provide export stats, with a new test.
  • requirements.txt: Updated fyle-accounting-mappings from 1.45.0 to 2.2.0.
  • tests/sql_fixtures/reset_db_fixtures/reset_db.sql: Updated the DB schema to add the is_real_time_export_enabled and is_resolved columns; updated migration entries.
  • tests/test_fyle/fixtures.py: Added new keys to the expense webhook fixture for more detailed test data.
  • tests/test_workspaces/fixtures.py: Removed the workspace_schedule key from workspace test fixtures.
  • tests/test_workspaces/test_apis/test_advanced_settings/fixtures.py: Added is_real_time_export_enabled to advanced settings and response fixtures.
  • tests/test_workspaces/test_tasks.py: Added is_real_time_export_enabled=False to all schedule_sync test calls.
  • workers/export/worker.py: Removed early message rejection based on retry count; simplified the message processing flow.

Sequence Diagram(s)

Real-Time Export Scheduling and Export Flow

sequenceDiagram
    participant User
    participant API
    participant WorkspaceSchedule
    participant Scheduler
    participant Exporter

    User->>API: Update workspace schedule (enable real-time export)
    API->>WorkspaceSchedule: Set is_real_time_export_enabled = True
    API->>Scheduler: Remove existing schedule if real-time export enabled
    Note right of Scheduler: No periodic job scheduled
    User->>API: Trigger export (manual or via event)
    API->>Exporter: export_to_intacct(triggered_by=...)
    Exporter->>API: Export result
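Under the hood, this flow implies a branch in schedule_sync along the following lines. This is a minimal sketch assuming names from the changes table (WorkspaceSchedule, and a run_sync_schedule task path); the PR's actual signature and fields may differ:

from django_q.models import Schedule

from apps.workspaces.models import WorkspaceSchedule  # app path taken from the changes table


def schedule_sync(workspace_id: int, schedule_enabled: bool, hours: int, is_real_time_export_enabled: bool, **kwargs) -> None:
    ws_schedule, _ = WorkspaceSchedule.objects.get_or_create(workspace_id=workspace_id)
    ws_schedule.is_real_time_export_enabled = is_real_time_export_enabled

    if schedule_enabled and is_real_time_export_enabled:
        # Real-time mode: drop the periodic django-q job; exports are now
        # triggered by incoming events instead of a timer.
        old_schedule = ws_schedule.schedule
        ws_schedule.schedule = None
        ws_schedule.save()
        if old_schedule:
            old_schedule.delete()
    elif schedule_enabled:
        # Periodic mode: (re)create the recurring export job.
        ws_schedule.schedule, _ = Schedule.objects.update_or_create(
            func='apps.workspaces.tasks.run_sync_schedule',  # assumed task path
            args='{}'.format(workspace_id),
            defaults={'schedule_type': Schedule.MINUTES, 'minutes': hours * 60},
        )
        ws_schedule.save()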

Expense Group Settings State Change Signal

sequenceDiagram
    participant ExpenseGroupSettings
    participant SignalHandler
    participant Configuration
    participant AsyncTaskQueue

    ExpenseGroupSettings->>SignalHandler: pre_save event
    SignalHandler->>Configuration: Fetch workspace config
    SignalHandler->>SignalHandler: Check for state changes
    alt State changed and config enabled
        SignalHandler->>AsyncTaskQueue: async_task(create_expense_groups, ...)
    end
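Concretely, a pre-save handler of roughly this shape would produce the flow above. It is illustrative only: the compared field (expense_state), the configuration flag, and the string values passed to the task are assumptions, not the PR's exact code:

from django.db.models.signals import pre_save
from django.dispatch import receiver
from django_q.tasks import async_task

from apps.fyle.models import ExpenseGroupSettings
from apps.workspaces.models import Configuration


@receiver(pre_save, sender=ExpenseGroupSettings)
def run_pre_save_expense_group_setting_triggers(sender, instance, **kwargs) -> None:
    old = ExpenseGroupSettings.objects.filter(id=instance.id).first()
    if old is None:
        return  # row is being created, nothing to diff against

    configuration = Configuration.objects.filter(workspace_id=instance.workspace_id).first()

    # Queue an import only when the state actually changed and the matching
    # export object is configured for the workspace (assumed flag name).
    if (configuration and old.expense_state != instance.expense_state
            and configuration.reimbursable_expenses_object):
        async_task(
            'apps.fyle.tasks.create_expense_groups',
            workspace_id=instance.workspace_id,
            fund_source=['PERSONAL'],
            task_log=None,
            imported_from='CONFIGURATION_UPDATE',
        )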

Poem

A bunny hops through fields of code,
Real-time exports now bestowed!
Signals listen, tasks align,
Schedules change, the stars align.
With every fix and tested view,
This patch brings something bright and new.
🐇✨


@github-actions bot added the size/L (Large PR) label on May 21, 2025
@ashwin1111 requested a review from Hrishabh17 on May 21, 2025 at 17:12

Coverage

Coverage Report
File | Stmts | Miss | Cover | Missing
apps
   exceptions.py | 47 | 11 | 77% | 27, 33, 39–44, 50, 56–61, 67, 73, 79–80
apps/fyle
   actions.py | 117 | 7 | 94% | 212, 261, 317–321
   constants.py | 1 | 0 | 100% |
   helpers.py | 258 | 27 | 90% | 127, 210–245, 290, 504, 586, 593–596, 599–600
   models.py | 336 | 23 | 93% | 313–315, 319–321, 325–327, 331–333, 337–339, 343–345, 406, 512–545, 568, 585
   queue.py | 28 | 0 | 100% |
   serializers.py | 41 | 0 | 100% |
   signals.py | 50 | 6 | 88% | 28, 61–62, 75–77
   tasks.py | 194 | 65 | 66% | 160–161, 164–165, 168–169, 191–201, 213–218, 239, 268–287, 339–386
   views.py | 188 | 15 | 92% | 96–103, 271, 356–357, 390–391, 416–420, 513–517
apps/internal
   actions.py | 18 | 0 | 100% |
   views.py | 40 | 1 | 98% | 37
apps/mappings
   constants.py | 1 | 0 | 100% |
   exceptions.py | 60 | 3 | 95% | 62–64
   helpers.py | 20 | 5 | 75% | 45–55
   models.py | 70 | 0 | 100% |
   serializers.py | 10 | 0 | 100% |
   signals.py | 84 | 7 | 92% | 85–87, 122, 169–174
   tasks.py | 103 | 16 | 84% | 112–122, 241, 244, 247, 252–259
   utils.py | 36 | 0 | 100% |
   views.py | 51 | 2 | 96% | 35–36
apps/mappings/imports
   schedules.py | 15 | 0 | 100% |
apps/mappings/imports/modules
   base.py | 153 | 2 | 99% | 122, 345
   categories.py | 64 | 5 | 92% | 112–113, 123, 125, 157
   cost_centers.py | 55 | 5 | 91% | 91–92, 103, 105, 139
   expense_custom_fields.py | 69 | 3 | 96% | 60–65
   merchants.py | 26 | 0 | 100% |
   projects.py | 61 | 4 | 93% | 94–95, 111, 147
   tax_groups.py | 15 | 0 | 100% |
apps/sage_intacct
   dependent_fields.py | 259 | 37 | 86% | 306, 342–345, 374–383, 385–386, 388–389, 391–392, 396–398, 442, 462, 502–504, 550–552, 600–606, 610–611
   helpers.py | 50 | 0 | 100% |
   models.py | 774 | 34 | 96% | 70–71, 221, 328, 335, 379, 400–417, 476, 538, 544, 579, 749, 1066–1069, 1236–1256, 1346
   queue.py | 186 | 15 | 92% | 77–79, 125, 141, 149, 178–180, 242–244, 306–308
   serializers.py | 31 | 0 | 100% |
   tasks.py | 977 | 88 | 91% | 99–100, 126–129, 163, 199–200, 204–207, 209, 212, 262, 296–297, 442, 448, 456, 611–612, 632–633, 679–680, 776–777, 844–845, 941–942, 998–999, 1093–1094, 1150–1151, 1272–1273, 1339–1348, 1390–1413, 1531–1535, 1591–1593, 1679–1689, 1692, 1726–1728, 1762–1765, 1827, 1836
   utils.py | 792 | 68 | 91% | 152, 382–383, 388, 401–402, 410–411, 487–488, 562–566, 670, 910–933, 1346–1368, 1391–1401, 1629–1633, 1661–1665, 1693–1697, 1739–1743, 1796–1800, 1820, 1829, 1838, 1847–1848, 1859, 1882–1884, 2041
   views.py | 110 | 8 | 93% | 194–196, 201–202, 208–209, 261
apps/sage_intacct/errors
   errors.py | 2 | 0 | 100% |
   helpers.py | 26 | 1 | 96% | 73
apps/tasks
   models.py | 67 | 0 | 100% |
   serializers.py | 6 | 0 | 100% |
   views.py | 37 | 0 | 100% |
apps/users
   helpers.py | 12 | 0 | 100% |
   models.py | 53 | 0 | 100% |
   views.py | 24 | 0 | 100% |
apps/workspaces
   actions.py | 46 | 2 | 96% | 42, 126
   models.py | 109 | 0 | 100% |
   permissions.py | 34 | 9 | 74% | 33, 59–67
   serializers.py | 39 | 1 | 97% | 53
   signals.py | 26 | 1 | 96% | 70
   tasks.py | 116 | 8 | 93% | 87–91, 184, 214–215
   utils.py | 9 | 5 | 44% | 15–24
   views.py | 214 | 7 | 97% | 68–71, 104, 428–429
apps/workspaces/apis/advanced_settings
   serializers.py | 78 | 3 | 96% | 248, 251, 254
   triggers.py | 10 | 0 | 100% |
   views.py | 11 | 0 | 100% |
apps/workspaces/apis/errors
   serializers.py | 20 | 0 | 100% |
   views.py | 15 | 0 | 100% |
apps/workspaces/apis/export_settings
   serializers.py | 98 | 3 | 97% | 256, 259, 262
   triggers.py | 27 | 2 | 93% | 33–34
   views.py | 11 | 0 | 100% |
apps/workspaces/apis/import_settings
   serializers.py | 140 | 17 | 88% | 231–237, 242–248, 256, 274, 277, 305, 308–309, 317
   triggers.py | 47 | 1 | 98% | 31
   views.py | 33 | 0 | 100% |
workers/export
   actions.py | 11 | 1 | 91% | 20
   worker.py | 46 | 5 | 89% | 37–38, 66–67, 100
TOTAL | 6757 | 523 | 92% |

Tests: 322 | Skipped: 0 💤 | Failures: 0 ❌ | Errors: 0 🔥 | Time: 40.098s ⏱️

@coderabbitai bot (Contributor) left a comment

Actionable comments posted: 7

🔭 Outside diff range comments (2)
apps/workspaces/actions.py (1)

23-29: ⚠️ Potential issue

Update function docstring to match signature

The docstring still references the removed export_mode parameter but doesn't document the new triggered_by parameter.

    """
    Export expenses to Intacct
    :param workspace_id: Workspace ID
-    :param export_mode: Export mode
    :param expense_group_ids: Expense group IDs
+    :param triggered_by: Source that triggered the export
    :return: None
    """
apps/fyle/tasks.py (1)

81-90: ⚠️ Potential issue

schedule_expense_group_creation now calls the updated signature with too few arguments

create_expense_groups requires four positional arguments (workspace_id, fund_source, task_log, imported_from) after this PR, but the async invocation only passes three.
This will raise TypeError: create_expense_groups() missing 1 required positional argument: 'imported_from'.

Suggested fix (and import the enum at top):

-from django_q.tasks import async_task
+from django_q.tasks import async_task
+from fyle_accounting_library.fyle_platform.enums import ExpenseImportSourceEnum
...
-    async_task('apps.fyle.tasks.create_expense_groups', workspace_id, fund_source, task_log)
+    async_task(
+        'apps.fyle.tasks.create_expense_groups',
+        workspace_id,
+        fund_source,
+        task_log,
+        ExpenseImportSourceEnum.BACKGROUND_SCHEDULE
+    )
🧹 Nitpick comments (8)
apps/workspaces/models.py (1)

210-210: Consider indexing the new boolean field for performance
The is_real_time_export_enabled field will often be used in filtering schedules. Adding a database index can significantly speed up these queries.
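If adopted, the declaration might look like this (a sketch; surrounding fields omitted, and the comment is illustrative):

from django.db import models

class WorkspaceSchedule(models.Model):
    # ...existing fields...
    is_real_time_export_enabled = models.BooleanField(
        default=False,
        db_index=True,  # speeds up the frequent filter(is_real_time_export_enabled=True) lookups
    )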

tests/test_workspaces/test_views.py (1)

2-2: Remove unused import to satisfy Flake8 / Ruff

datetime is imported but never used, causing the CI failure (F401).
Either delete the import or use it (e.g., for the dynamic start_date fix below).

-from datetime import datetime
🧰 Tools
🪛 Ruff (0.11.9)

2-2: datetime.datetime imported but unused

Remove unused import: datetime.datetime

(F401)

🪛 GitHub Actions: Continuous Integration

[error] 2-2: flake8: F401 'datetime.datetime' imported but unused

apps/fyle/signals.py (1)

80-100: Signal fires async tasks inside pre_save – risk of double execution

Because the trigger is in pre_save, the async task is queued even when the
subsequent DB save eventually rolls back (e.g., due to a validator error or
transaction rollback). This can lead to “phantom” tasks working with stale data
or creating duplicates.

If possible, move this logic to post_save after the row is safely
persisted or wrap the async call in an outer transaction/on-commit hook:

from django.db import transaction

# ...
def run_pre_save_expense_group_setting_triggers(...):
    # ...
    if should_trigger:
-        async_task(...)
+        transaction.on_commit(
+            lambda: async_task(
+                'apps.fyle.tasks.create_expense_groups',
+                workspace_id=instance.workspace_id,
+                fund_source=[FundSourceEnum.PERSONAL],
+                task_log=None,
+                imported_from=ExpenseImportSourceEnum.CONFIGURATION_UPDATE,
+            )
+        )

This guarantees the task is enqueued only when the surrounding transaction
succeeds.

apps/fyle/helpers.py (1)

490-505: Guard against empty message / error in helper

update_task_log_post_import sets:

task_log.detail = {"message": message} if message else {"error": error}

If both arguments are None, the detail becomes {'error': None}, which is
misleading. Consider validating that exactly one of the two is provided.

-def update_task_log_post_import(task_log: TaskLog, status: str, message: str = None, error: str = None) -> None:
+def update_task_log_post_import(
+    task_log: TaskLog,
+    status: str,
+    *,
+    message: str | None = None,
+    error: str | None = None
+) -> None:
     """Helper function to update task log status and details"""
-    if task_log:
-        task_log.status = status
-        task_log.detail = {"message": message} if message else {"error": error}
+    if task_log:
+        if not any([message, error]):
+            raise ValueError("Either `message` or `error` must be supplied")
+        task_log.status = status
+        task_log.detail = {"message": message} if message else {"error": error}
apps/workspaces/views.py (1)

487-489: Redundant get_queryset override

The override simply delegates to super() and adds no value—remove it to keep
the view concise.

-    def get_queryset(self) -> QuerySet[LastExportDetail]:
-        return super().get_queryset()
apps/workspaces/tasks.py (2)

73-83: Potential data-shape issue when persisting additional_email_options

additional_email_options.append(email_added) will create a nested list if email_added itself is a list.
If the caller is already sending a single dict, append is correct; if it is sending multiple addresses, prefer extend instead.

Please confirm the expected type; if it is a list, adjust:

-        ws_schedule.additional_email_options.append(email_added)
+        ws_schedule.additional_email_options.extend(email_added)

81-101: Atomicity & race-condition window

Deletion/creation of the django-q schedule and updating the WorkspaceSchedule happen in separate DB transactions.
A concurrent request could observe an inconsistent state (e.g., is_real_time_export_enabled=True but still referencing an old schedule). Wrapping the whole branch in a single transaction.atomic() block will guarantee consistency.

Not mandatory but highly recommended for correctness.
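A sketch of the recommended wrapper, reusing the names from the snippet in question:

from django.db import transaction

with transaction.atomic():
    # Swap out the django-q schedule and persist the WorkspaceSchedule in one
    # transaction so concurrent readers never observe a half-updated state.
    schedule = ws_schedule.schedule
    if schedule:
        ws_schedule.schedule = None
        ws_schedule.save()
        schedule.delete()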

tests/test_fyle/test_signals.py (1)

11-14: ExpenseGroupSettings.objects.get may raise DoesNotExist in a clean DB

The first test assumes a default ExpenseGroupSettings row exists. In fresh databases this fails before the signal is even triggered.

Safer pattern:

expense_group_settings, _ = ExpenseGroupSettings.objects.get_or_create(workspace_id=workspace_id)

This keeps the test focused on signal behaviour, not fixture setup.
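A fixture-agnostic setup could look like this (a sketch; the workspace id and field value are placeholders, and pytest-django is assumed):

import pytest

from apps.fyle.models import ExpenseGroupSettings


@pytest.mark.django_db
def test_pre_save_trigger_fires_on_state_change():
    # get_or_create keeps the test independent of seeded fixtures
    expense_group_settings, _ = ExpenseGroupSettings.objects.get_or_create(workspace_id=1)
    expense_group_settings.expense_state = 'PAID'
    expense_group_settings.save()  # fires the pre_save signal under test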

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between a790c92 and be7a1cc.

📒 Files selected for processing (24)
  • apps/fyle/helpers.py (2 hunks)
  • apps/fyle/signals.py (2 hunks)
  • apps/fyle/tasks.py (8 hunks)
  • apps/fyle/views.py (2 hunks)
  • apps/internal/tasks.py (1 hunks)
  • apps/sage_intacct/queue.py (4 hunks)
  • apps/sage_intacct/views.py (2 hunks)
  • apps/workspaces/actions.py (2 hunks)
  • apps/workspaces/apis/advanced_settings/serializers.py (1 hunks)
  • apps/workspaces/apis/advanced_settings/triggers.py (1 hunks)
  • apps/workspaces/migrations/0045_workspaceschedule_is_real_time_export_enabled.py (1 hunks)
  • apps/workspaces/models.py (1 hunks)
  • apps/workspaces/tasks.py (4 hunks)
  • apps/workspaces/urls.py (0 hunks)
  • apps/workspaces/views.py (3 hunks)
  • requirements.txt (1 hunks)
  • tests/sql_fixtures/reset_db_fixtures/reset_db.sql (7 hunks)
  • tests/test_fyle/fixtures.py (1 hunks)
  • tests/test_fyle/test_signals.py (1 hunks)
  • tests/test_workspaces/fixtures.py (1 hunks)
  • tests/test_workspaces/test_apis/test_advanced_settings/fixtures.py (2 hunks)
  • tests/test_workspaces/test_tasks.py (3 hunks)
  • tests/test_workspaces/test_views.py (3 hunks)
  • workers/export/worker.py (0 hunks)
💤 Files with no reviewable changes (2)
  • apps/workspaces/urls.py
  • workers/export/worker.py
🧰 Additional context used
🧬 Code Graph Analysis (3)
apps/sage_intacct/views.py (1)
apps/workspaces/actions.py (1)
  • export_to_intacct (22-128)
apps/internal/tasks.py (1)
apps/workspaces/actions.py (1)
  • export_to_intacct (22-128)
apps/fyle/helpers.py (1)
apps/tasks/models.py (1)
  • TaskLog (47-78)
🪛 Ruff (0.11.9)
apps/workspaces/actions.py

22-22: Do not use mutable data structures for argument defaults

Replace with None; initialize within function

(B006)

tests/test_workspaces/test_views.py

2-2: datetime.datetime imported but unused

Remove unused import: datetime.datetime

(F401)

🪛 GitHub Actions: Continuous Integration
tests/test_workspaces/test_views.py

[error] 2-2: flake8: F401 'datetime.datetime' imported but unused

🔇 Additional comments (30)
tests/test_workspaces/fixtures.py (1)

1-67:

✅ Verification successful

Removal of deprecated workspace_schedule fixture
The workspace_schedule key has been removed to align with the deprecation of scheduling endpoints. Ensure no other tests or fixtures reference this field to prevent breakages.

Run this script to confirm no leftover references:


🏁 Script executed:

#!/bin/bash
rg -n "workspace_schedule"

Length of output: 9356


Deprecated fixture ‘workspace_schedule’ removal verified
No instances of the singular workspace_schedule key remain in tests or fixtures. All matches for workspace_schedules (plural) are expected in SQL fixtures and code.

requirements.txt (1)

13-13: Upgrade fyle-accounting-mappings to 2.2.0
The fyle-accounting-mappings package has been updated to 2.2.0. Verify compatibility with the new real-time export logic and run the full test suite to catch any integration issues.

tests/test_workspaces/test_apis/test_advanced_settings/fixtures.py (2)

27-27: Include new is_real_time_export_enabled flag in fixture
Added is_real_time_export_enabled: False to workspace_schedules under advanced_settings. This matches the updated model and ensures tests cover the default state.


56-56: Include new is_real_time_export_enabled flag in response fixture
Added is_real_time_export_enabled: False to the API response fixture. This keeps the test fixture in sync with the serializer changes.

tests/test_fyle/fixtures.py (1)

43-46:

❓ Verification inconclusive

Supplement fixture with new webhook fields
The added keys is_posted_at_null, report_title, payment_number, and masked_corporate_card_number ensure the fixture matches the updated webhook payload structure. Please verify that downstream tests and any schema validations are updated to consume these fields.

Run the following script to confirm tests reference the new keys:


🏁 Script executed:

#!/bin/bash
# Verify new fixture keys are used in test suite
rg -n "report_title" -A2 tests/test_fyle
rg -n "payment_number" -A2 tests/test_fyle
rg -n "masked_corporate_card_number" -A2 tests/test_fyle

Length of output: 14594


Test Fixtures Include New Webhook Fields – Please Verify Schema/Model Updates
The test suite’s fixtures under tests/test_fyle/fixtures.py now reference the added keys (is_posted_at_null, report_title, payment_number, masked_corporate_card_number) in dozens of scenarios (e.g. initial block at lines 43–47 and repeats at lines 285, 316, 360, … 1131). This confirms downstream tests consume those fields.

• tests/test_fyle/fixtures.py: initial webhook fixture at lines 43–47 plus ~150 other occurrences
• tests/test_fyle/test_helpers.py: filters based on report_title in multiple tests

Next Steps:

  • Manually confirm that any payload‐validation layer (JSON schema, Pydantic/Django models, serializers, etc.) in your webhook handler has been updated to accept and validate these new fields.
tests/test_workspaces/test_tasks.py (3)

40-41: Added new parameter to match updated function signature

The is_real_time_export_enabled=False parameter is now passed to the schedule_sync function, aligning with changes to the function signature in the main codebase. This maintains backward compatibility with existing test behavior.


57-58: Added new parameter to maintain test consistency

The same parameter is added to the second call to schedule_sync, maintaining consistency with the updated function signature.


145-146: Added new parameter to email notification test

The is_real_time_export_enabled=False parameter has been consistently applied to the schedule_sync call in the email notification test case.

apps/workspaces/apis/advanced_settings/serializers.py (1)

149-150: Added new field to support real-time export feature

The is_real_time_export_enabled field has been added to the WorkspaceSchedulesSerializer, exposing this configuration through the API. This aligns with the model changes and enables clients to control real-time export behavior.

apps/workspaces/migrations/0045_workspaceschedule_is_real_time_export_enabled.py (1)

1-18: Clean migration for adding real-time export field

This migration properly adds the is_real_time_export_enabled boolean field to the WorkspaceSchedule model with a default value of False. The migration follows Django conventions and appropriately depends on the previous migration.
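For reference, such a migration typically has this shape (a sketch; the 0044 dependency name is a placeholder, since the previous migration's name is not shown in this PR):

from django.db import migrations, models


class Migration(migrations.Migration):

    dependencies = [
        ('workspaces', '0044_previous_migration'),  # placeholder name
    ]

    operations = [
        migrations.AddField(
            model_name='workspaceschedule',
            name='is_real_time_export_enabled',
            field=models.BooleanField(default=False),
        ),
    ]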

apps/workspaces/apis/advanced_settings/triggers.py (1)

20-21: Updated trigger to pass new real-time export parameter

The run_post_configurations_triggers method now forwards the is_real_time_export_enabled setting from the workspace schedule to the schedule_sync function, ensuring the real-time export configuration is properly applied.

apps/fyle/views.py (2)

75-80: Good update to use named parameters instead of positional arguments.

This change improves code readability by explicitly naming the parameters being passed to the create_expense_groups function, making the intent clearer and the code more maintainable.


515-515: Good refactoring to use named parameters.

Consistent with the other changes, switching from positional to named parameters makes the function call clearer and reduces the risk of parameter order mistakes during future modifications.

apps/sage_intacct/queue.py (4)

92-93: Good defensive programming improvement.

Adding the check for triggered_by being truthy prevents accidentally overwriting the existing value with None or an empty value, which improves robustness of the task log update logic.


193-194: Good defensive programming improvement.

Same improvement as in the previous function - checking that triggered_by is truthy before updating the task log's triggered_by field prevents potential data loss.


257-258: Good defensive programming improvement.

Consistent application of the pattern to check for truthy triggered_by before updating the task log in the bills creation scheduling function.


321-322: Good defensive programming improvement.

Consistent application of the same defensive check in the charge card transaction creation scheduling function, ensuring the triggered_by field is only updated when a valid value is provided.
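The guard being praised is essentially this (sketch):

# Only overwrite the stored source when the caller supplied a real value;
# a None/empty triggered_by leaves the existing attribution intact.
if triggered_by:
    task_log.triggered_by = triggered_by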

apps/sage_intacct/views.py (2)

16-16: Appropriate import added for the ExpenseImportSourceEnum.

This import is required for the changes made to the export_to_intacct function call on line 128.


128-128: Good update to use named parameters with explicit source tracking.

This change aligns with the updated export_to_intacct function signature in workspaces/actions.py which now determines the export mode internally based on the triggered_by parameter. Using DASHBOARD_SYNC as the source correctly identifies that this export was triggered by a user action in the dashboard.

apps/internal/tasks.py (1)

70-70: Well-structured update to the export function call.

This change correctly:

  1. Removes the now-obsolete positional 'AUTO' export mode parameter
  2. Uses explicit keyword arguments for better readability
  3. Sets the triggered_by source to INTERNAL, properly identifying this as an automatic internal re-export

The change aligns with the updated signature of export_to_intacct in workspaces/actions.py which now determines the export mode internally based on the source.
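The resulting call reads roughly like this (a sketch of the described change, not a verbatim excerpt):

export_to_intacct(
    workspace_id=workspace_id,
    expense_group_ids=expense_group_ids,
    triggered_by=ExpenseImportSourceEnum.INTERNAL,
)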

apps/workspaces/actions.py (2)

36-36: New export mode logic looks good

The refactored code now determines export mode internally based on the triggered_by parameter, which centralizes this logic and simplifies the API.


51-59: Consistent propagation of triggered_by parameter

The changes have properly propagated the triggered_by parameter to all scheduling functions, making it available for downstream processing.

Also applies to: 60-67, 69-76, 85-92, 94-101, 103-110, 112-119

tests/sql_fixtures/reset_db_fixtures/reset_db.sql (7)

5-6: Update pg_dump version comment
The dump header is updated to reflect pg_dump version 15.13; this is purely informational and has no functional impact.


5964-5965: Register new migrations in django_migrations fixture
Entries for the failed_events and workspace_schedules migrations have been appended in sequence, matching your migration files.


9415-9415: Update failed_events COPY statement
The COPY command now includes the is_resolved column, keeping the fixture in sync with the schema.


9895-9895: Update workspace_schedules COPY statement
The COPY command now includes is_real_time_export_enabled, reflecting the new column in the schema.


10005-10005: Set django_migrations_id_seq to new max ID
The sequence is updated to 236, which matches the highest migration ID in this dump.


2799-2801:

✅ Verification successful

Add is_real_time_export_enabled column to workspace_schedules
This aligns the fixture with the new migration (0045). Please verify that the corresponding migration provides a default value or backfills existing rows to satisfy the NOT NULL constraint.


🏁 Script executed:

#!/bin/bash
# Verify migration default/backfill for is_real_time_export_enabled
rg -n "is_real_time_export_enabled" -C2 apps/workspaces/migrations/0045_workspaceschedule_is_real_time_export_enabled.py

Length of output: 321


Migration Backfill Verified
The 0045_workspaceschedule_is_real_time_export_enabled migration adds is_real_time_export_enabled with default=False, satisfying the NOT NULL constraint and backfilling existing rows accordingly. The SQL fixture and migration are now in sync.


2861-2862:

❓ Verification inconclusive

Add is_resolved column to failed_events
Matches the new migration (0004). Confirm that the migration handles existing data by providing a default or backfilling this non-nullable column.


🏁 Script executed:

#!/bin/bash
# Verify migration default/backfill for is_resolved
rg -n "is_resolved" -C2 apps/rabbitmq/migrations/0004_failedevent_is_resolved.py

Length of output: 177


🏁 Script executed:

#!/bin/bash
set -e

echo "=== Locating all migrations directories ==="
find . -type d -name migrations

echo
echo "=== Listing Python files in each migrations directory ==="
find . -type d -name migrations | while read dir; do
  echo "Files in $dir:"
  ls -1 "$dir"/*.py || echo "  (no .py files)"
done

echo
echo "=== Searching for 'is_resolved' in all migration files ==="
rg -n "is_resolved" -C2 --glob "*/migrations/*.py" || echo "No occurrences found"

Length of output: 20808


🏁 Script executed:

#!/bin/bash
set -e

echo "=== Locate migration files related to failed events ==="
find . -type f -name "*fail*event*.py" || echo "  (no matching files)"

echo
echo "=== Search for 'is_resolved' globally ==="
rg -n "is_resolved" -C3 . || echo "  (no occurrences found)"

echo
echo "=== Search for AddField operations adding is_resolved in migrations ==="
rg -n "AddField" -C5 --glob "*/migrations/*.py" | rg -n "is_resolved" -C5 || echo "  (no AddField for is_resolved found)"

Length of output: 25812


Missing migration for failed_events.is_resolved
I wasn’t able to locate apps/rabbitmq/migrations/0004_failedevent_is_resolved.py in the codebase. Please verify:

  • That the migration file exists at the correct path (apps/rabbitmq/migrations/0004_failedevent_is_resolved.py).
  • It adds the is_resolved column with a default or includes a backfill step for existing rows to satisfy the NOT NULL constraint.
apps/workspaces/tasks.py (1)

116-146: Minor: clarify keyword arguments in export_to_intacct

Good job converting to keyword arguments – increases readability and saves positional-order bugs.

@@ -19,7 +19,7 @@
 logger.level = logging.INFO


-def export_to_intacct(workspace_id: int, export_mode: bool = None, expense_group_ids: list = [], triggered_by: ExpenseImportSourceEnum = None) -> None:
+def export_to_intacct(workspace_id: int, expense_group_ids: list = [], triggered_by: ExpenseImportSourceEnum = None) -> None:

⚠️ Potential issue

Replace mutable default argument with None

The function signature uses a mutable default argument (list = []), which can lead to unexpected behavior if the function is called multiple times and modifies this list.

-def export_to_intacct(workspace_id: int, expense_group_ids: list = [], triggered_by: ExpenseImportSourceEnum = None) -> None:
+def export_to_intacct(workspace_id: int, expense_group_ids: list = None, triggered_by: ExpenseImportSourceEnum = None) -> None:

Then initialize the list inside the function:

+    if expense_group_ids is None:
+        expense_group_ids = []
📝 Committable suggestion


Suggested change
-def export_to_intacct(workspace_id: int, expense_group_ids: list = [], triggered_by: ExpenseImportSourceEnum = None) -> None:
+def export_to_intacct(
+    workspace_id: int,
+    expense_group_ids: list = None,
+    triggered_by: ExpenseImportSourceEnum = None,
+) -> None:
+    if expense_group_ids is None:
+        expense_group_ids = []
+    # … rest of the function …
🧰 Tools
🪛 Ruff (0.11.9)

22-22: Do not use mutable data structures for argument defaults

Replace with None; initialize within function

(B006)

🤖 Prompt for AI Agents
In apps/workspaces/actions.py at line 22, the function export_to_intacct uses a
mutable default argument list = [], which can cause unexpected behavior. Change
the default value of expense_group_ids to None in the function signature, and
inside the function, check if expense_group_ids is None and if so, initialize it
to an empty list. This prevents shared mutable defaults across function calls.

Comment on lines +461 to +475
url = "/api/workspaces/{}/export_detail/?start_date=2025-05-01".format(workspace_id)

api_client.credentials(
HTTP_AUTHORIZATION="Bearer {}".format(test_connection.access_token)
)

LastExportDetail.objects.get(workspace_id=workspace_id)
# last_exported_at=datetime.now(), total_expense_groups_count=1

TaskLog.objects.create(type='CREATING_EXPENSE_REPORT', status='COMPLETE', workspace_id=workspace_id)

failed_count = TaskLog.objects.filter(workspace_id=workspace_id, status__in=['FAILED', 'FATAL']).count()

response = api_client.get(url)
assert response.status_code == 200

🛠️ Refactor suggestion

Hard-coded future date makes the test non-deterministic

start_date=2025-05-01 will make this test fail on any CI run executed before that
date. Generating the date at runtime keeps the test evergreen and also gives
purpose to the datetime import.

-    url = "/api/workspaces/{}/export_detail/?start_date=2025-05-01".format(workspace_id)
+    start_date = datetime.utcnow().date().isoformat()
+    url = f"/api/workspaces/{workspace_id}/export_detail/?start_date={start_date}"
📝 Committable suggestion


Suggested change
-    url = "/api/workspaces/{}/export_detail/?start_date=2025-05-01".format(workspace_id)
+    start_date = datetime.utcnow().date().isoformat()
+    url = f"/api/workspaces/{workspace_id}/export_detail/?start_date={start_date}"

     api_client.credentials(
         HTTP_AUTHORIZATION="Bearer {}".format(test_connection.access_token)
     )

     LastExportDetail.objects.get(workspace_id=workspace_id)
     # last_exported_at=datetime.now(), total_expense_groups_count=1

     TaskLog.objects.create(type='CREATING_EXPENSE_REPORT', status='COMPLETE', workspace_id=workspace_id)

     failed_count = TaskLog.objects.filter(workspace_id=workspace_id, status__in=['FAILED', 'FATAL']).count()

     response = api_client.get(url)
     assert response.status_code == 200
🤖 Prompt for AI Agents
In tests/test_workspaces/test_views.py around lines 461 to 475, the test uses a
hard-coded future date "2025-05-01" for the start_date query parameter, which
makes the test non-deterministic and prone to failure before that date. Replace
the hard-coded date with a dynamically generated date at runtime using the
datetime module, such as today's date or a relative date, to ensure the test
remains evergreen and deterministic.

Comment on lines +490 to +519
def retrieve(self, request: Request, *args, **kwargs) -> Response:
    instance = self.get_object()
    serializer = self.get_serializer(instance)
    response_data = serializer.data

    start_date = request.query_params.get('start_date')

    if start_date and response_data:
        misc_task_log_types = ['CREATING_REIMBURSEMENT', 'CREATING_AP_PAYMENT', 'FETCHING_EXPENSES']

        task_logs = TaskLog.objects.filter(
            ~Q(type__in=misc_task_log_types),
            workspace_id=kwargs['workspace_id'],
            updated_at__gte=start_date,
            status='COMPLETE',
        ).order_by('-updated_at')

        successful_count = task_logs.count()

        failed_count = TaskLog.objects.filter(
            ~Q(type__in=misc_task_log_types),
            status__in=['FAILED', 'FATAL'],
            workspace_id=kwargs['workspace_id'],
        ).count()

        response_data.update({
            'repurposed_successful_count': successful_count,
            'repurposed_failed_count': failed_count,
            'repurposed_last_exported_at': task_logs.last().updated_at if task_logs.last() else None
        })

🛠️ Refactor suggestion

last() returns the oldest record; failed_count ignores start_date

  1. With order_by('-updated_at'), queryset.last() yields the oldest record.
    To capture the most recent export, use first().

  2. failed_count is not filtered by start_date, so exports prior to the
    window are included, skewing the count.

  3. Minor: successful_count can be computed directly from the same
    task_logs queryset—no change needed.

-        task_logs = TaskLog.objects.filter(
+        task_logs = TaskLog.objects.filter(
             ~Q(type__in=misc_task_log_types),
             workspace_id=kwargs['workspace_id'],
             updated_at__gte=start_date,
             status='COMPLETE',
         ).order_by('-updated_at')

-        failed_count = TaskLog.objects.filter(
+        failed_count = TaskLog.objects.filter(
             ~Q(type__in=misc_task_log_types),
             status__in=['FAILED', 'FATAL'],
             workspace_id=kwargs['workspace_id'],
+            updated_at__gte=start_date,
         ).count()

         response_data.update({
             'repurposed_successful_count': successful_count,
             'repurposed_failed_count': failed_count,
-            'repurposed_last_exported_at': task_logs.last().updated_at if task_logs.last() else None
+            'repurposed_last_exported_at': task_logs.first().updated_at if task_logs.exists() else None
         })
📝 Committable suggestion


Suggested change
     def retrieve(self, request: Request, *args, **kwargs) -> Response:
         instance = self.get_object()
         serializer = self.get_serializer(instance)
         response_data = serializer.data

         start_date = request.query_params.get('start_date')

         if start_date and response_data:
             misc_task_log_types = ['CREATING_REIMBURSEMENT', 'CREATING_AP_PAYMENT', 'FETCHING_EXPENSES']

             task_logs = TaskLog.objects.filter(
                 ~Q(type__in=misc_task_log_types),
                 workspace_id=kwargs['workspace_id'],
                 updated_at__gte=start_date,
                 status='COMPLETE',
             ).order_by('-updated_at')

             successful_count = task_logs.count()

             failed_count = TaskLog.objects.filter(
                 ~Q(type__in=misc_task_log_types),
                 status__in=['FAILED', 'FATAL'],
                 workspace_id=kwargs['workspace_id'],
+                updated_at__gte=start_date,
             ).count()

             response_data.update({
                 'repurposed_successful_count': successful_count,
                 'repurposed_failed_count': failed_count,
-                'repurposed_last_exported_at': task_logs.last().updated_at if task_logs.last() else None
+                'repurposed_last_exported_at': task_logs.first().updated_at if task_logs.exists() else None
             })
🤖 Prompt for AI Agents
In apps/workspaces/views.py lines 490 to 519, replace task_logs.last() with
task_logs.first() to get the most recent export since last() returns the oldest
record when ordered by '-updated_at'. Also, add the filter
updated_at__gte=start_date to the failed_count queryset to ensure it only counts
failures within the specified date range, matching the successful_count filter.
This will correct the counts to reflect the intended time window.

Comment on lines +31 to +38
 def schedule_email_notification(workspace_id: int, schedule_enabled: bool, hours: int) -> None:
     """
     Schedule email notification
     :param workspace_id: workspace id
     :param schedule_enabled: schedule enabled
     :return: None
     """
-    if schedule_enabled:
+    if schedule_enabled and hours:

🛠️ Refactor suggestion

hours parameter is undocumented and effectively ignored in the schedule that is created

  1. The doc-string still lists only workspace_id and schedule_enabled; it should document the new hours parameter.
  2. Inside the body, hours is only used in the if guard (schedule_enabled and hours) but not when the schedule is actually created – the schedule is always set to run every 24*60 minutes (once a day). This makes the extra guard misleading and can silently disable notifications when hours == 0 even though the cadence is fixed to a day.

If you intend the mail job to run daily, drop the hours dependency; if you intend it to be configurable, use the supplied value:

-    if schedule_enabled and hours:
+    if schedule_enabled:
 ...
-                'minutes': 24 * 60,
+                'minutes': hours * 60 if hours else 24 * 60,

Also update the doc-string accordingly.

📝 Committable suggestion


Suggested change
 def schedule_email_notification(workspace_id: int, schedule_enabled: bool, hours: int) -> None:
     """
     Schedule email notification
     :param workspace_id: workspace id
     :param schedule_enabled: schedule enabled
     :return: None
     """
-    if schedule_enabled and hours:
+    if schedule_enabled:
     ...
-                'minutes': 24 * 60,
+                'minutes': hours * 60 if hours else 24 * 60,
     ...
🤖 Prompt for AI Agents
In apps/workspaces/tasks.py around lines 31 to 38, the hours parameter is
undocumented and not used in scheduling, causing confusion and potential silent
disabling of notifications. Update the doc-string to include the hours parameter
with its purpose. Then, modify the scheduling logic to use the hours value to
set the schedule interval dynamically (e.g., convert hours to minutes) instead
of a fixed 24*60 minutes, or remove the hours parameter and related condition if
the schedule should always be daily.

Comment on lines +85 to +93
if is_real_time_export_enabled:
    # Delete existing schedule since user changed the setting to real time export
    schedule = ws_schedule.schedule

    if schedule:
        ws_schedule.schedule = None
        ws_schedule.save()
        schedule.delete()
else:
    schedule, _ = Schedule.objects.update_or_create(

⚠️ Potential issue

Disabling schedule does not reset is_real_time_export_enabled flag

When a user switches off scheduling (the schedule_enabled=False branch below), the boolean keeps whatever value it had previously.
This can leave a workspace that is "disabled" but still treated as real-time enabled elsewhere.

Add an explicit reset:

     elif not schedule_enabled and ws_schedule.schedule:
         schedule = ws_schedule.schedule
         ws_schedule.enabled = schedule_enabled
+        ws_schedule.is_real_time_export_enabled = False
         ws_schedule.schedule = None
         ws_schedule.save()
         schedule.delete()

Committable suggestion skipped: line range outside the PR's diff.

🤖 Prompt for AI Agents
In apps/workspaces/tasks.py around lines 85 to 93, when disabling the schedule
by setting ws_schedule.schedule to None and deleting the schedule, the
is_real_time_export_enabled flag is not reset, which can cause inconsistent
state. To fix this, explicitly set is_real_time_export_enabled to False when
disabling the schedule and save the ws_schedule object to persist this change.

Comment on lines +267 to +276
if len(expense_group_ids):
    if is_state_change_event:
        # Trigger export immediately for customers who have enabled real time export
        is_real_time_export_enabled = WorkspaceSchedule.objects.filter(workspace_id=workspace.id, is_real_time_export_enabled=True).exists()

        # Don't allow real time export if it's not supported for the branded app / setting not enabled
        if not is_real_time_export_enabled or not feature_configuration.feature.real_time_export_1hr_orgs:
            return

    logger.info(f'Exporting expenses for workspace {workspace.id} with expense group ids {expense_group_ids}, triggered by {imported_from}')

🛠️ Refactor suggestion

Feature-flag guard bypasses export for unsupported apps but leaves task log COMPLETE

When not is_real_time_export_enabled or not feature_configuration.feature.real_time_export_1hr_orgs the function exits early without updating task_log (set to IN_PROGRESS earlier) – leaving it stuck.
Call update_task_log_post_import(task_log, 'COMPLETE') (or similar) before returning.

🤖 Prompt for AI Agents
In apps/fyle/tasks.py around lines 267 to 276, the early return when real-time
export is not enabled or supported causes the task_log to remain in the
IN_PROGRESS state. To fix this, before returning in that condition, call
update_task_log_post_import(task_log, 'COMPLETE') to properly mark the task as
complete and avoid leaving it stuck.
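A sketch of the suggested fix, using the update_task_log_post_import helper introduced in apps/fyle/helpers.py (the message text here is illustrative):

if not is_real_time_export_enabled or not feature_configuration.feature.real_time_export_1hr_orgs:
    # Close out the task log before bailing so it is not left IN_PROGRESS.
    update_task_log_post_import(task_log, 'COMPLETE', message='Real-time export skipped for this workspace')
    return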

Comment on lines 264 to 266
expense_groups = ExpenseGroup.objects.filter(expenses__id__in=[expense_ids], workspace_id=workspace.id, exported_at__isnull=True).distinct('id').values('id')
expense_group_ids = [expense_group['id'] for expense_group in expense_groups]


⚠️ Potential issue

Incorrect wrapper causes id__in lookup to receive a list of QuerySets

ExpenseGroup.objects.filter(expenses__id__in=[expense_ids], …) wraps the QuerySet in an additional list, producing [<QuerySet …>], which Django will attempt to coerce into ints and fail.

-expense_groups = ExpenseGroup.objects.filter(expenses__id__in=[expense_ids], workspace_id=workspace.id, exported_at__isnull=True)...
+expense_groups = ExpenseGroup.objects.filter(
+    expenses__id__in=expense_ids,
+    workspace_id=workspace.id,
+    exported_at__isnull=True
+).distinct('id').values('id')
📝 Committable suggestion


Suggested change
-expense_groups = ExpenseGroup.objects.filter(expenses__id__in=[expense_ids], workspace_id=workspace.id, exported_at__isnull=True).distinct('id').values('id')
+expense_groups = ExpenseGroup.objects.filter(
+    expenses__id__in=expense_ids,
+    workspace_id=workspace.id,
+    exported_at__isnull=True
+).distinct('id').values('id')
 expense_group_ids = [expense_group['id'] for expense_group in expense_groups]
🤖 Prompt for AI Agents
In apps/fyle/tasks.py around lines 264 to 266, the filter uses
expenses__id__in=[expense_ids], which wraps the QuerySet in a list, causing a
type error. Remove the square brackets around expense_ids so that the filter
receives the QuerySet or list directly, i.e., use expenses__id__in=expense_ids
instead of expenses__id__in=[expense_ids].


Coverage

Coverage Report
File | Stmts | Miss | Cover | Missing
apps
   exceptions.py | 47 | 11 | 77% | 27, 33, 39–44, 50, 56–61, 67, 73, 79–80
apps/fyle
   actions.py | 117 | 7 | 94% | 212, 261, 317–321
   constants.py | 1 | 0 | 100% |
   helpers.py | 258 | 27 | 90% | 127, 210–245, 290, 504, 586, 593–596, 599–600
   models.py | 336 | 23 | 93% | 313–315, 319–321, 325–327, 331–333, 337–339, 343–345, 406, 512–545, 568, 585
   queue.py | 28 | 0 | 100% |
   serializers.py | 41 | 0 | 100% |
   signals.py | 50 | 6 | 88% | 28, 61–62, 75–77
   tasks.py | 194 | 65 | 66% | 160–161, 164–165, 168–169, 191–201, 213–218, 239, 268–287, 339–386
   views.py | 188 | 15 | 92% | 96–103, 271, 356–357, 390–391, 416–420, 513–517
apps/internal
   actions.py | 18 | 0 | 100% |
   views.py | 40 | 1 | 98% | 37
apps/mappings
   constants.py | 1 | 0 | 100% |
   exceptions.py | 60 | 3 | 95% | 62–64
   helpers.py | 20 | 5 | 75% | 45–55
   models.py | 70 | 0 | 100% |
   serializers.py | 10 | 0 | 100% |
   signals.py | 84 | 7 | 92% | 85–87, 122, 169–174
   tasks.py | 103 | 16 | 84% | 112–122, 241, 244, 247, 252–259
   utils.py | 36 | 0 | 100% |
   views.py | 51 | 2 | 96% | 35–36
apps/mappings/imports
   schedules.py | 15 | 0 | 100% |
apps/mappings/imports/modules
   base.py | 153 | 2 | 99% | 122, 345
   categories.py | 64 | 5 | 92% | 112–113, 123, 125, 157
   cost_centers.py | 55 | 5 | 91% | 91–92, 103, 105, 139
   expense_custom_fields.py | 69 | 3 | 96% | 60–65
   merchants.py | 26 | 0 | 100% |
   projects.py | 61 | 4 | 93% | 94–95, 111, 147
   tax_groups.py | 15 | 0 | 100% |
apps/sage_intacct
   dependent_fields.py | 259 | 37 | 86% | 306, 342–345, 374–383, 385–386, 388–389, 391–392, 396–398, 442, 462, 502–504, 550–552, 600–606, 610–611
   helpers.py | 50 | 0 | 100% |
   models.py | 789 | 35 | 96% | 70–71, 221, 328, 335, 379, 400–417, 476, 538, 544, 579, 749, 1066–1069, 1236–1256, 1346, 1778
   queue.py | 190 | 19 | 90% | 77–80, 126, 142, 150, 179–182, 244–247, 309–312
   serializers.py | 31 | 0 | 100% |
   tasks.py | 977 | 88 | 91% | 99–100, 126–129, 163, 199–200, 204–207, 209, 212, 262, 296–297, 442, 448, 456, 611–612, 632–633, 679–680, 776–777, 844–845, 941–942, 998–999, 1093–1094, 1150–1151, 1272–1273, 1339–1348, 1390–1413, 1531–1535, 1591–1593, 1679–1689, 1692, 1726–1728, 1762–1765, 1827, 1836
   utils.py | 788 | 68 | 91% | 152, 382–383, 388, 401–402, 410–411, 474–475, 549–553, 657, 897–920, 1333–1355, 1378–1388, 1616–1620, 1648–1652, 1680–1684, 1726–1730, 1783–1787, 1807, 1816, 1825, 1834–1835, 1846, 1869–1871, 2028
   views.py | 110 | 8 | 93% | 194–196, 201–202, 208–209, 261
apps/sage_intacct/errors
   errors.py | 2 | 0 | 100% |
   helpers.py | 26 | 1 | 96% | 73
apps/tasks
   models.py | 67 | 0 | 100% |
   serializers.py | 6 | 0 | 100% |
   views.py | 37 | 0 | 100% |
apps/users
   helpers.py | 12 | 0 | 100% |
   models.py | 53 | 0 | 100% |
   views.py | 24 | 0 | 100% |
apps/workspaces
   actions.py | 46 | 2 | 96% | 42, 126
   models.py | 109 | 0 | 100% |
   permissions.py | 34 | 9 | 74% | 33, 59–67
   serializers.py | 39 | 1 | 97% | 53
   signals.py | 26 | 1 | 96% | 70
   tasks.py | 116 | 9 | 92% | 87–91, 112, 185, 215–216
   utils.py | 9 | 5 | 44% | 15–24
   views.py | 214 | 7 | 97% | 68–71, 104, 428–429
apps/workspaces/apis/advanced_settings
   serializers.py | 78 | 3 | 96% | 248, 251, 254
   triggers.py | 10 | 0 | 100% |
   views.py | 11 | 0 | 100% |
apps/workspaces/apis/errors
   serializers.py | 20 | 0 | 100% |
   views.py | 15 | 0 | 100% |
apps/workspaces/apis/export_settings
   serializers.py | 98 | 3 | 97% | 256, 259, 262
   triggers.py | 27 | 2 | 93% | 33–34
   views.py | 11 | 0 | 100% |
apps/workspaces/apis/import_settings
   serializers.py | 140 | 17 | 88% | 231–237, 242–248, 256, 274, 277, 305, 308–309, 317
   triggers.py | 47 | 1 | 98% | 31
   views.py | 33 | 0 | 100% |
workers/export
   actions.py | 11 | 1 | 91% | 20
   worker.py | 46 | 5 | 89% | 37–38, 66–67, 100
TOTAL | 6772 | 529 | 92% |

Tests: 322 | Skipped: 0 💤 | Failures: 0 ❌ | Errors: 0 🔥 | Time: 40.140s ⏱️

@ashwin1111 merged commit 84fdd69 into master on Jun 2, 2025
4 checks passed
ashwin1111 added a commit that referenced this pull request Jun 4, 2025
* feat: Real-time exports, phase 2 - Intacct

* fix lint

* fix turn off bug
Labels: size/L Large PR
2 participants