System.DmlException: Update failed. First exception on row 0; first error: UNABLE_TO_LOCK_ROW, unable to obtain exclusive access to this record or 1 records: [recordId]: []
What does this error mean?
Salesforce uses row-level locking to ensure data consistency. When an Apex transaction attempts a DML operation on a record, Salesforce acquires an exclusive lock on that row in the database. If another concurrent transaction already holds a lock on the same record — and the second transaction's lock wait times out — Salesforce throws this error and rolls back the second transaction entirely.
This is not a code bug per se — it is a concurrency problem that arises when multiple processes compete for the same record at the same time.
A typical sequence: transaction A starts an update and locks the record; transaction B attempts to update the same record and hits the conflict; transaction B waits for the lock, times out (after roughly 10 seconds), and fails with the DmlException.
Common Causes
1. Multiple Flows or Triggers Firing Simultaneously
When the same record (or its parent) is updated by multiple automated processes at the same time — e.g., two record-triggered flows, or a flow and a trigger both updating a parent Account — lock contention is almost guaranteed at scale.
2. Batch Jobs Updating the Same Parent Record
A common pattern: a batch job processes many child records, each of which rolls up changes to a shared parent. All batch threads compete to lock and update the same parent row simultaneously.
3. Concurrent API Calls from Integrations
An external system making parallel API requests to update related Salesforce records can trigger lock conflicts when multiple requests target the same record in the same time window.
4. Implicit Parent Locks from Master-Detail DML
Inserting, updating, or deleting a detail record in a master-detail relationship acquires an implicit lock on the master record. High-volume inserts of detail records that share the same master will contend for that parent lock.
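One common mitigation for this, sketched below under assumed names (`Invoice__c` is a hypothetical detail object with a master-detail field `Account__c`, and `buildDetails()` is a hypothetical source of records), is to group detail rows by their master before DML so each parent's implicit lock is acquired in one contiguous run rather than repeatedly:

```apex
// Sketch: order detail records by master so each implicit parent lock
// is taken for a contiguous run of rows. Object and field names are
// placeholders, not part of the original article.
List<Invoice__c> details = buildDetails(); // hypothetical source
Map<Id, List<Invoice__c>> byMaster = new Map<Id, List<Invoice__c>>();
for (Invoice__c inv : details) {
    if (!byMaster.containsKey(inv.Account__c)) {
        byMaster.put(inv.Account__c, new List<Invoice__c>());
    }
    byMaster.get(inv.Account__c).add(inv);
}
List<Invoice__c> ordered = new List<Invoice__c>();
for (List<Invoice__c> grp : byMaster.values()) {
    ordered.addAll(grp);
}
insert ordered;
```

The same principle applies to Bulk API loads: sorting source files by parent Id keeps records that share a master in the same batch, which Salesforce recommends for reducing parent-lock contention.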
5. Long-Running Transactions Holding Locks
A transaction that performs DML early and then continues with heavy Apex processing (complex calculations, large loops, additional queries) holds its row locks until the entire transaction commits. Conversely, a transaction that spends a long time in callouts or computation before its DML is more likely to collide with a faster transaction that touched the same records in the meantime. Either way, long-running transactions widen the window for lock conflicts.
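When a transaction genuinely must work with a contended record across lengthy processing, one option is to take the lock explicitly at the start with SELECT ... FOR UPDATE. Competing transactions then wait for the lock (and can still time out after roughly 10 seconds) instead of interleaving partway through. A minimal sketch, in which `accountId`, `Open_Cases__c`, and the `recountOpenCases` helper are assumptions for illustration:

```apex
// Sketch: lock the parent row for the duration of this transaction so
// a competing transaction cannot grab it between our read and update.
Account acct = [
    SELECT Id, Open_Cases__c
    FROM Account
    WHERE Id = :accountId
    FOR UPDATE
];
acct.Open_Cases__c = recountOpenCases(accountId); // hypothetical helper
update acct;
```

Use this sparingly: FOR UPDATE trades lock errors for lock waits, so it helps most when the protected section is short.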
How to Fix It
Solution 1: Implement a Retry Pattern
For transient lock conflicts, the most practical fix is to catch the exception and retry the DML after a brief delay. Use a Queueable with retry logic so you don't block the original transaction.
public class RetryUpdateJob implements Queueable {
    private List<SObject> records;
    private Integer retryCount;
    private static final Integer MAX_RETRIES = 3;

    public RetryUpdateJob(List<SObject> records, Integer retryCount) {
        this.records = records;
        this.retryCount = retryCount;
    }

    public void execute(QueueableContext ctx) {
        try {
            update records;
        } catch (System.DmlException e) {
            if (e.getDmlType(0) == StatusCode.UNABLE_TO_LOCK_ROW
                    && retryCount < MAX_RETRIES) {
                // Re-enqueue with a 1-minute delay so the competing
                // transaction has time to commit and release its lock
                System.enqueueJob(
                    new RetryUpdateJob(records, retryCount + 1), 1
                );
            } else {
                // Max retries exceeded — log the failure
                System.debug(LoggingLevel.ERROR,
                    'Lock retry failed after ' + retryCount +
                    ' attempts: ' + e.getMessage()
                );
            }
        }
    }
}
// Usage from a trigger handler or service class
System.enqueueJob(new RetryUpdateJob(recordsToUpdate, 0));
Solution 2: Consolidate DML — Update Parent Once
Instead of updating the parent record inside a loop (once per child), collect all changes in memory and execute a single DML update at the end. This reduces the window during which the parent lock is needed.
// ❌ BAD: Updates parent on every child — multiple locks
for (Case c : cases) {
    Account parent = new Account(
        Id = c.AccountId,
        Open_Cases__c = getCount(c.AccountId)
    );
    update parent; // Lock acquired & released per iteration
}
// ✅ GOOD: Compute all changes first, one update at the end
Map<Id, Account> toUpdate = new Map<Id, Account>();
for (Case c : cases) {
    if (!toUpdate.containsKey(c.AccountId)) {
        toUpdate.put(c.AccountId, new Account(
            Id = c.AccountId,
            Open_Cases__c = 0
        ));
    }
    // Apex does not support ++ on sObject fields, so read and reassign
    Account acct = toUpdate.get(c.AccountId);
    acct.Open_Cases__c = acct.Open_Cases__c + 1;
}
update toUpdate.values(); // One DML, one lock window
Solution 3: Stagger Batch Scope Size
When a batch job causes lock contention on parent records, reduce the batch scope (records per chunk). Batch Apex runs its chunks serially within a job, but each chunk is its own transaction, so a smaller scope shortens each transaction and narrows the window during which parent locks are held while other automation runs concurrently.
// Reduce scope from default 200 to limit concurrency on parents
Database.executeBatch(new MyBatchJob(), 50);
// Or use a serial Queueable chain for the most contention-prone operations
public class SerialProcessingJob implements Queueable {
    private static final Integer CHUNK_SIZE = 50;
    private List<Id> remaining;

    public SerialProcessingJob(List<Id> remaining) {
        this.remaining = remaining;
    }

    public void execute(QueueableContext ctx) {
        // Apex's List class has no subList(), so split the work manually
        List<Id> chunk = new List<Id>();
        List<Id> rest = new List<Id>();
        for (Integer i = 0; i < remaining.size(); i++) {
            if (i < CHUNK_SIZE) {
                chunk.add(remaining[i]);
            } else {
                rest.add(remaining[i]);
            }
        }
        processChunk(chunk);
        // Process one chunk per transaction, then re-enqueue the rest,
        // so only one transaction touches the contended records at a time
        if (!rest.isEmpty()) {
            System.enqueueJob(new SerialProcessingJob(rest));
        }
    }

    private void processChunk(List<Id> ids) {
        // Domain-specific work goes here: query the records and update them
    }
}
Solution 4: Use Native Roll-Up Summary Fields
If you are computing roll-up values (counts, sums) on a parent from children — and that's causing the lock contention — consider replacing the Apex logic with a native Roll-Up Summary Field on the master object. Salesforce performs the aggregation internally, eliminating the custom DML on the parent that creates the contention in the first place.
Pro Tip: When diagnosing this error in debug logs, search for UNABLE_TO_LOCK_ROW. The record ID listed in the error message is the locked row — use it to identify which object and which automation is holding the competing lock. Cross-reference with Setup → Apex Jobs or the Event Log Files to find simultaneous transactions.
Avoid infinite retry loops. Always cap retry attempts (e.g., MAX_RETRIES = 3) and log failures. An unbounded retry loop can exhaust your async job quota and create cascading failures across the org.