System.LimitException: Apex CPU time limit exceeded
What does this error mean?
Salesforce limits how long Apex code can run in a single transaction: 10,000 ms of CPU time for synchronous transactions and 60,000 ms for asynchronous ones. When your code exceeds this allocation, the platform throws a System.LimitException and rolls back the entire transaction. Like other limit exceptions, it cannot be caught in Apex.
CPU time counts the time your Apex code is actively executing — time spent waiting on callouts or database I/O is excluded. All code running in the transaction shares this pool: triggers, flows invoked from triggers, validation rules, and any Apex called within.
Common Causes
1. Nested or Complex Loops
Quadratic or cubic time complexity (nested loops over large collections) is the most frequent cause. Each extra loop multiplies CPU cost rapidly as record volume grows.
// ❌ BAD: O(n²) — nested loop over large lists
for (Account acc : allAccounts) {
    for (Contact con : allContacts) {
        // Runs allAccounts.size() × allContacts.size() times
        if (con.AccountId == acc.Id) {
            // ... expensive logic here
        }
    }
}
2. String Manipulation in Loops
String concatenation using + inside a loop creates a new String object on every iteration, rapidly consuming CPU and heap. Use String.join() or a List<String> instead.
3. JSON Serialization of Large Objects
Calling JSON.serialize() or JSON.deserialize() on deeply nested or very large objects is CPU-intensive. Limit the object graph before serializing.
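As an illustration, serializing a small wrapper class that carries only the fields a consumer needs is far cheaper than serializing full sObjects with their related records. The OrderSummary class below is a hypothetical example, not part of any standard library:

```apex
// Hypothetical wrapper: serialize only the fields the consumer needs,
// not the full sObject graph with all related records.
public class OrderSummary {
    public Id orderId;
    public String status;
    public Decimal total;

    public OrderSummary(Order o) {
        this.orderId = o.Id;
        this.status = o.Status;
        this.total = o.TotalAmount;
    }
}

List<OrderSummary> summaries = new List<OrderSummary>();
for (Order o : orders) {
    summaries.add(new OrderSummary(o));
}
// Serializes a flat list of small objects instead of full Order records
String payload = JSON.serialize(summaries);
```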
4. Excessive Flow + Apex Combinations
Record-triggered Flows that call Apex actions or invocable methods share the same CPU budget. A trigger and a flow running on the same record both draw from the same 10,000 ms pool; their CPU time accumulates within the single transaction.
5. SOQL for() Loops with Complex Body Logic
SOQL for() loops (chunked iteration) are designed for heap efficiency, but if the loop body contains expensive operations, CPU will accumulate across all chunks.
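A minimal sketch of keeping the loop body light by hoisting lookup work out of the loop. The object names and filter here are illustrative assumptions:

```apex
// Build any lookup data once, outside the SOQL for loop
Map<Id, Account> accountsById = new Map<Id, Account>(
    [SELECT Id, Name FROM Account WHERE Industry = 'Technology']
);

// Chunked iteration is heap-friendly, but the body still burns CPU per record
for (Contact con : [SELECT Id, AccountId FROM Contact
                    WHERE AccountId IN :accountsById.keySet()]) {
    // O(1) map lookup instead of a nested query or inner loop
    Account acc = accountsById.get(con.AccountId);
    // ... keep per-record work minimal ...
}
```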
How to Fix It
Solution 1: Replace Nested Loops with Maps
Replace O(n²) nested loops with a Map lookup, reducing the complexity to O(n). Build the map once, then access it in constant time.
// ✅ GOOD: O(n) — map lookup instead of inner loop
Map<Id, List<Contact>> contactsByAccount = new Map<Id, List<Contact>>();
for (Contact con : allContacts) {
    if (!contactsByAccount.containsKey(con.AccountId)) {
        contactsByAccount.put(con.AccountId, new List<Contact>());
    }
    contactsByAccount.get(con.AccountId).add(con);
}

// Single loop with O(1) map lookup
for (Account acc : allAccounts) {
    List<Contact> contacts = contactsByAccount.get(acc.Id);
    if (contacts != null) {
        // process contacts...
    }
}
Solution 2: Fix String Concatenation in Loops
// ❌ BAD: String + in a loop
String result = '';
for (String s : values) {
    result += s + ','; // New object every iteration
}

// ✅ GOOD: List + String.join()
List<String> parts = new List<String>();
for (String s : values) {
    parts.add(s);
}
String result = String.join(parts, ','); // Also avoids the trailing comma
Solution 3: Move Heavy Work to Async
For genuinely expensive operations — large-scale data processing, complex calculations over thousands of records — move the work to an asynchronous context (Queueable, Batchable, or a future method), where the CPU limit is 60,000 ms.
// Move heavy processing to a Queueable
public class HeavyProcessingJob implements Queueable {
    private List<Id> recordIds;

    public HeavyProcessingJob(List<Id> ids) {
        this.recordIds = ids;
    }

    public void execute(QueueableContext ctx) {
        // 60,000 ms CPU limit here — process away
        List<Account> records = [
            SELECT Id, Name FROM Account
            WHERE Id IN :recordIds
        ];
        // ... complex logic ...
    }
}

// Enqueue from trigger — keySet() returns a Set<Id>, so convert it to a List
System.enqueueJob(new HeavyProcessingJob(new List<Id>(Trigger.newMap.keySet())));
Solution 4: Monitor CPU Consumption
// Check CPU usage at key points to find bottlenecks
System.debug('CPU before: ' + Limits.getCpuTime() + 'ms');
doExpensiveWork();
System.debug('CPU after: ' + Limits.getCpuTime() + 'ms');
System.debug('Remaining: ' + (Limits.getLimitCpuTime() - Limits.getCpuTime()) + 'ms');
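The same Limits methods can also be used defensively. The sketch below checks remaining CPU before each expensive unit of work and defers the rest; processRecord and DeferredWorkJob are hypothetical names standing in for your own per-record logic and a Queueable like the one above:

```apex
// Hypothetical guard: stop before hitting the limit and defer the remainder
List<Id> remaining = new List<Id>();
for (Id recordId : recordIds) {
    // Leave a safety margin (here 2,000 ms) below the transaction limit
    if (Limits.getLimitCpuTime() - Limits.getCpuTime() < 2000) {
        remaining.add(recordId);
        continue;
    }
    processRecord(recordId); // assumed expensive per-record method
}
if (!remaining.isEmpty()) {
    // Hand off unprocessed records to an async job with a fresh CPU budget
    System.enqueueJob(new DeferredWorkJob(remaining)); // hypothetical Queueable
}
```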
Pro Tip: In your debug log, search for CUMULATIVE_LIMIT_USAGE near the end of the transaction. The Maximum CPU Time line shows your exact consumption. Use Limits.getCpuTime() strategically to isolate which method is the bottleneck.