Commit f2fd765

Abel Milash and claude committed:
Document atomicity trade-off and <=1000 guidance for chunked operations

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

1 parent: 173a490
4 files changed: 25 additions, 4 deletions

.claude/skills/dataverse-sdk-use/SKILL.md

Lines changed: 1 addition & 1 deletion

@@ -25,7 +25,7 @@ Use the PowerPlatform Dataverse Client Python SDK to interact with Microsoft Dat
 - `client.batch` -- batch multiple operations into a single HTTP request

 ### Bulk Operations
-The SDK supports Dataverse's native bulk operations: Pass lists to `create()`, `update()`, or `upsert()` for automatic bulk processing; for `delete()`, set `use_bulk_delete=True`. Lists exceeding 1,000 records are automatically split into sequential 1,000-record chunks — no manual pre-splitting needed. Operations across chunks are **not atomic**: a failure mid-way may leave earlier chunks applied.
+The SDK supports Dataverse's native bulk operations: Pass lists to `create()`, `update()`, or `upsert()` for automatic bulk processing; for `delete()`, set `use_bulk_delete=True`. Lists exceeding 1,000 records are automatically split into sequential 1,000-record chunks — no manual pre-splitting needed. Operations across chunks are **not atomic**: a failure mid-way may leave earlier chunks applied. Callers that require atomicity should limit their input to ≤ 1,000 records.

 ### Paging
 - Control page size with `page_size` parameter
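The automatic splitting described in the bulk-operations note can be pictured as a small chunking helper. This is a hypothetical sketch of the documented behavior, not the SDK's actual internals:

```python
def chunked(records, size=1000):
    # Split a list into consecutive chunks of at most `size` items,
    # mirroring the SDK's documented 1,000-record chunking.
    return [records[i:i + size] for i in range(0, len(records), size)]

rows = [{"name": f"Account {i}"} for i in range(2500)]
batches = chunked(rows)
print([len(b) for b in batches])  # [1000, 1000, 500]
```

Each chunk becomes its own request, which is exactly why a mid-run failure can leave earlier chunks applied.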

README.md

Lines changed: 2 additions & 1 deletion

@@ -188,7 +188,8 @@ client.records.delete("account", ids, use_bulk_delete=True)

 > **Large batches**: Lists exceeding 1,000 records are automatically split into sequential
 > 1,000-record chunks — no manual pre-splitting needed. Note that chunked operations are
-> **not atomic**: a failure mid-way may leave earlier chunks applied.
+> **not atomic**: a failure mid-way may leave earlier chunks applied. Callers that require
+> atomicity should limit their input to ≤ 1,000 records.

 ### Upsert operations
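For callers who need the atomicity guarantee, a simple guard at the call site keeps an input inside a single chunk. A minimal sketch; the commented-out `client.records.create` call is the hypothetical usage, following the README's own API shape:

```python
def require_single_chunk(records, limit=1000):
    # Refuse inputs that would be split into multiple, non-atomic chunks.
    if len(records) > limit:
        raise ValueError(
            f"{len(records)} records exceed the {limit}-record chunk size; "
            "split the work yourself if you need per-batch atomicity"
        )
    return records

# client.records.create("account", require_single_chunk(rows))  # hypothetical call
```

Failing fast here is cheaper than discovering after a mid-run error that only some chunks were applied.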

src/PowerPlatform/Dataverse/operations/dataframe.py

Lines changed: 4 additions & 2 deletions

@@ -182,7 +182,8 @@ def create(
         1,000-row chunks before sending to ``CreateMultiple``. You do not
         need to pre-split large DataFrames. Note that chunked operations
         are **not atomic** — a failure mid-way may leave earlier chunks
-        applied.
+        applied. Callers that require atomicity should limit input to
+        ≤ 1,000 rows.

     Example:
         Create records from a DataFrame::

@@ -259,7 +260,8 @@ def update(
         1,000-row chunks before sending to ``UpdateMultiple`` (or a single
         PATCH for one row). You do not need to pre-split large DataFrames.
         Note that chunked operations are **not atomic** — a failure
-        mid-way may leave earlier chunks applied.
+        mid-way may leave earlier chunks applied. Callers that require
+        atomicity should limit input to ≤ 1,000 rows.

     Example:
         Update records with different values per row::
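The "not atomic" caveat repeated in these docstrings can be demonstrated with a plain-Python simulation. All names here are invented for illustration; the code only mimics the documented chunk-by-chunk sequencing:

```python
def apply_in_chunks(rows, apply_chunk, size=1000):
    # Send successive chunks; a failure leaves earlier chunks applied.
    for i in range(0, len(rows), size):
        apply_chunk(rows[i:i + size])

committed = []

def flaky_server(chunk):
    # Simulated backend: the third chunk fails mid-run.
    if len(committed) == 2:
        raise RuntimeError("HTTP 500")
    committed.append(len(chunk))

try:
    apply_in_chunks(list(range(2500)), flaky_server)
except RuntimeError:
    pass

print(committed)  # [1000, 1000]: the first two chunks stay applied
```

There is no rollback step: once a chunk has been accepted by the server, a later chunk's failure cannot undo it.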

src/PowerPlatform/Dataverse/operations/records.py

Lines changed: 18 additions & 0 deletions

@@ -77,6 +77,12 @@ def create(

         :raises TypeError: If ``data`` is not a dict or list[dict].

+        .. note::
+            Lists exceeding 1,000 records are automatically split into
+            sequential chunks. This is **not atomic** — a failure mid-way
+            may leave earlier chunks applied. Callers that require atomicity
+            should limit input to ≤ 1,000 records.
+
     Example:
         Create a single record::

@@ -135,6 +141,12 @@ def update(
         :raises TypeError: If ``ids`` is not str or list[str], or if ``changes``
             does not match the expected pattern.

+        .. note::
+            Lists exceeding 1,000 IDs are automatically split into sequential
+            chunks. This is **not atomic** — a failure mid-way may leave
+            earlier chunks applied. Callers that require atomicity should
+            limit input to ≤ 1,000 IDs.
+
     Example:
         Single update::

@@ -486,6 +498,12 @@ def upsert(self, table: str, items: List[Union[UpsertItem, Dict[str, Any]]]) ->
         neither a :class:`~PowerPlatform.Dataverse.models.upsert.UpsertItem` nor a
         dict with ``"alternate_key"`` and ``"record"`` keys.

+        .. note::
+            Lists exceeding 1,000 items are automatically split into
+            sequential chunks. This is **not atomic** — a failure mid-way
+            may leave earlier chunks applied. Callers that require atomicity
+            should limit input to ≤ 1,000 items.
+
     Example:
         Upsert a single record using ``UpsertItem``::
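The dict form accepted by `upsert` (an ``alternate_key`` plus a ``record``, per the docstring above) makes the chunk arithmetic easy to check up front. The field names below are invented for illustration:

```python
items = [
    {"alternate_key": {"accountnumber": str(i)},  # hypothetical key column
     "record": {"name": f"Acct {i}"}}
    for i in range(1200)
]

# 1,200 items span two sequential chunks; per the note above, only the
# first chunk's outcome is certain if the second one fails.
chunk_sizes = [len(items[i:i + 1000]) for i in range(0, len(items), 1000)]
print(chunk_sizes)  # [1000, 200]
```

A caller wanting all-or-nothing semantics would upsert at most 1,000 items per call and handle the loop (and any partial failure) explicitly.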
