Best practices for avoiding the “Apex CPU time limit exceeded” error
What is the “Apex CPU time limit exceeded” error and why does it occur?
CPU time is calculated for all executions on the Salesforce application servers that occur in one Apex transaction: the executing Apex code plus any processes called from that code, such as package code and workflows. CPU time is private to a transaction and is isolated from other transactions. The “Apex CPU time limit exceeded” error occurs when a single transaction consumes more CPU time than the governor limit allows (10,000 milliseconds for synchronous transactions and 60,000 milliseconds for asynchronous transactions), typically because the code runs inefficiently or performs too many calculations.
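You can check how much CPU time the current transaction has consumed, and what its limit is, directly from Apex. The snippet below is a minimal Execute Anonymous example:

// Prints the CPU time consumed so far and the CPU limit for this transaction, in milliseconds
System.debug('CPU time used: ' + Limits.getCpuTime() + ' ms of ' + Limits.getLimitCpuTime() + ' ms');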
How can I avoid this error?
There are several best practices that can be adopted to help avoid this error.
Bulkification
Use Bulkification: the code should be optimized so that it processes many records at a time, making the processing more efficient and reducing the risk of hitting the CPU time limit in a single transaction.
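As a minimal sketch (the class and method names are made up for illustration), the idea is to collect changes inside the loop and perform a single DML statement outside it:

public class AccountBulkHelper {
    // Bulkified: collect the changes, then issue one DML statement for the whole list.
    public static void tagAccountsInBulk(List<Account> accounts) {
        List<Account> toUpdate = new List<Account>();
        for (Account acc : accounts) {
            // Anti-pattern to avoid: calling "update acc;" inside this loop would
            // run one DML statement per record and waste both DML and CPU limits.
            acc.Description = 'Processed in bulk';
            toUpdate.add(acc);
        }
        if (!toUpdate.isEmpty()) {
            update toUpdate; // a single DML statement for all records
        }
    }
}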
Asynchronous Processing
Use Asynchronous Processing: Batch Apex, Queueable jobs, scheduled jobs, and @future methods can move complex calculations into a separate asynchronous transaction, which also has a higher CPU time limit (60,000 ms instead of 10,000 ms), helping reduce CPU timeout errors.
One great way of significantly reducing CPU time in the synchronous transaction is to hand the heavy work off to asynchronous @future methods, which I will try to use more often. Here is why:
CASE STUDY: Let’s say you want to set the AccountNumber of Account records to 1 if the Account Type is Customer.
This is as simple as it sounds, but the difference shows up in the numbers. Let’s see it.
Create a CpuAnalyzer Apex class with the following synchronous method:
public class CpuAnalyzer {

    public static void processRecordsSync() {
        // Limits.getCpuTime() returns the CPU time (in milliseconds) consumed so far in this transaction
        Integer totalPoints = Limits.getCpuTime();
        System.debug('Total Points Start: ' + totalPoints);

        // Retrieve a large number of records
        List<Account> accounts = [SELECT Id, Name, Type, AccountNumber FROM Account LIMIT 10000];
        for (Account acc : accounts) {
            // Perform some processing on each account
            // Code that consumes CPU time
            if (acc.Type == 'Customer') {
                acc.AccountNumber = '1';
            }
        }
        update accounts;

        totalPoints = Limits.getCpuTime();
        System.debug('Total Points End: ' + totalPoints);
    }
}
Run the processRecordsSync() synchronous method in the Developer Console (Execute Anonymous):
CpuAnalyzer.processRecordsSync();
Then analyze the debug statements: Total Points End: 2424, which represents the CPU time in milliseconds.
Or you can investigate the debug log and see:
Number of SOQL queries: 1 out of 100
Number of query rows: 1003 out of 50000
Number of SOSL queries: 0 out of 20
Number of DML statements: 1 out of 150
Number of Publish Immediate DML: 0 out of 150
Number of DML rows: 1003 out of 10000
Maximum CPU time: 2424 out of 10000
Maximum heap size: 0 out of 6000000
Number of callouts: 0 out of 100
Number of Email Invocations: 0 out of 10
Number of future calls: 0 out of 50
Number of queueable jobs added to the queue: 0 out of 50
Number of Mobile Apex push calls: 0 out of 10
We will compare Maximum CPU time: 2424 out of 10000 against an asynchronous approach that processes the records in chunks.
So, first, we need to split the Account list into chunks, for example of 500 records if the code complexity allows it, or of 200 if it does not. But keep in mind there shouldn’t be more than 50 chunks per transaction, as the maximum number of @future calls allowed in one transaction is 50.
Once we split the Accounts into chunks, we loop over accountChunks and pass each chunk to the @future processChunkAsync() method. Notice that I serialize each chunk of Accounts into a String and later deserialize it back into a list of Accounts, because @future methods do not accept SObject parameters.
In the CpuAnalyzer Apex class, add the following methods:
public class CpuAnalyzer {

    public static void processRecordsAsync() {
        Integer totalPoints = Limits.getCpuTime();
        System.debug('Total Points Start: ' + totalPoints);

        // Query the fields that the @future method reads and updates
        List<Account> accounts = [SELECT Id, Name, Type, AccountNumber FROM Account LIMIT 10000];

        // Split the list into smaller chunks to process
        List<List<Account>> accountChunks = splitList(accounts, 500); // chunks of 500 (or 200) records, depending on code complexity
        System.debug('[AF] accountChunks: ' + accountChunks);

        // Process each chunk asynchronously to avoid the CPU time limit
        for (List<Account> chunk : accountChunks) {
            String serializedAccounts = JSON.serialize(chunk);
            processChunkAsync(serializedAccounts);
        }

        totalPoints = Limits.getCpuTime();
        System.debug('Total Points End: ' + totalPoints);
    }

    @future
    public static void processChunkAsync(String serializedAccounts) {
        List<Account> accounts = (List<Account>) JSON.deserialize(serializedAccounts, List<Account>.class);
        for (Account acc : accounts) {
            // Perform some processing on each account
            // Code that consumes CPU time
            if (acc.Type == 'Customer') {
                acc.AccountNumber = '1';
            }
        }
        update accounts;
    }

    // Method to split a list into smaller chunks
    private static List<List<Account>> splitList(List<Account> listToSplit, Integer chunkSize) {
        List<List<Account>> chunks = new List<List<Account>>();
        Integer chunkCount = listToSplit.size() / chunkSize;
        for (Integer i = 0; i < chunkCount; i++) {
            List<Account> chunk = new List<Account>();
            for (Integer j = i * chunkSize; j < Math.min((i + 1) * chunkSize, listToSplit.size()); j++) {
                chunk.add(listToSplit[j]);
            }
            chunks.add(chunk);
        }
        // Add the remaining records
        Integer remainingRecords = listToSplit.size() - (chunkCount * chunkSize);
        if (remainingRecords > 0) {
            List<Account> lastChunk = new List<Account>();
            for (Integer j = chunkCount * chunkSize; j < listToSplit.size(); j++) {
                lastChunk.add(listToSplit[j]);
            }
            chunks.add(lastChunk);
        }
        return chunks;
    }
}
Run the following line in the Developer Console:
CpuAnalyzer.processRecordsAsync();
Now we can see a huge difference: Maximum CPU time: 151 out of 10000 instead of 2424 milliseconds.
Number of SOQL queries: 1 out of 100
Number of query rows: 1003 out of 50000
Number of SOSL queries: 0 out of 20
Number of DML statements: 0 out of 150
Number of Publish Immediate DML: 0 out of 150
Number of DML rows: 0 out of 10000
Maximum CPU time: 151 out of 10000
Maximum heap size: 0 out of 6000000
Number of callouts: 0 out of 100
Number of Email Invocations: 0 out of 10
Number of future calls: 3 out of 50
Number of queueable jobs added to the queue: 0 out of 50
Number of Mobile Apex push calls: 0 out of 10
In conclusion, our optimized code used only 151 milliseconds of CPU time in the synchronous transaction to process 1003 records; the actual field updates now run in the @future transactions, each with its own limits.
While this is a big difference, it is important to understand when to use @future methods. Salesforce allows only 50 @future invocations per transaction, which means we are not free to process a very large list of records that would require more than 50 chunks.
But this is just one use case; requirements differ from task to task, and you might find this pattern very useful for yours.
Think of a @future method as a new, separate transaction started from the current one. So, if a trigger handler Apex class, for example AccountHandler, calls processRecordsAsync(), every chunk passed to the @future processChunkAsync() method opens a new transaction for the asynchronous code. If those calls are made from a loop, make sure the loop makes no more than 50 of them, or you will hit the “Number of future calls” limit.
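If the data volume would need more than 50 chunks, Batch Apex is usually the better fit, because the platform does the chunking for you and each execute call runs in its own transaction with its own CPU time limit. The class below is only a sketch of that alternative for the same case study; the class name and the 200-record scope are choices made for illustration:

public class AccountNumberBatch implements Database.Batchable<SObject> {

    public Database.QueryLocator start(Database.BatchableContext bc) {
        // A QueryLocator can walk up to 50 million records
        return Database.getQueryLocator(
            [SELECT Id, Type, AccountNumber FROM Account WHERE Type = 'Customer']);
    }

    public void execute(Database.BatchableContext bc, List<Account> scope) {
        // Each invocation receives one chunk and runs as its own transaction
        for (Account acc : scope) {
            acc.AccountNumber = '1';
        }
        update scope;
    }

    public void finish(Database.BatchableContext bc) {
        System.debug('Batch finished: ' + bc.getJobId());
    }
}

// Start the job from the Developer Console with a scope of 200 records per chunk:
// Database.executeBatch(new AccountNumberBatch(), 200);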
SOQL Queries
Optimize Database Queries: write bulkified SOQL, keep queries out of loops, and use selective WHERE filters and LIMIT clauses so that a single request returns only the records you need, reducing the overall number of round trips to the database.
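As a minimal sketch (the object pairing and the helper name are arbitrary choices), contrast one query per record with a single query whose results are grouped into a Map:

public class ContactQueryHelper {
    // Bulkified lookup: one SOQL query for all Accounts instead of one query per Account.
    public static Map<Id, List<Contact>> getContactsByAccount(List<Account> accounts) {
        Map<Id, List<Contact>> contactsByAccount = new Map<Id, List<Contact>>();
        // Anti-pattern to avoid: a [SELECT ...] inside a loop over the Accounts would
        // consume one of the 100 allowed SOQL queries on every iteration.
        for (Contact con : [SELECT Id, AccountId FROM Contact WHERE AccountId IN :accounts]) {
            if (!contactsByAccount.containsKey(con.AccountId)) {
                contactsByAccount.put(con.AccountId, new List<Contact>());
            }
            contactsByAccount.get(con.AccountId).add(con);
        }
        return contactsByAccount;
    }
}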
Recursive Calls
Limit Recursive Calls: the code should make use of static variables to prevent recursive calls (for example, triggers that re-fire themselves through their own updates) from running indefinitely and consuming too much CPU time.
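A common way to do this is a static collection of already-processed Ids in the handler class. The handler and method names below are assumptions for illustration:

public class AccountTriggerHandler {
    // Static variables keep their value for the duration of one transaction,
    // so they can record which records this handler has already processed.
    private static Set<Id> processedIds = new Set<Id>();

    public static void handleAfterUpdate(List<Account> accounts) {
        List<Account> toProcess = new List<Account>();
        for (Account acc : accounts) {
            if (!processedIds.contains(acc.Id)) {
                processedIds.add(acc.Id);
                toProcess.add(acc);
            }
        }
        if (toProcess.isEmpty()) {
            return; // every record was already handled earlier in this transaction
        }
        // ... perform the actual work on toProcess ...
    }
}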
Monitor Performance
Monitor Performance: Regularly monitor the performance of your code using debug logs and the Developer Console log inspector in Salesforce to identify areas where CPU utilization can be improved.
By following these best practices, you can greatly reduce the chances of receiving the “Apex CPU time limit exceeded” error. In addition, you will also improve the overall performance of your Apex code.
If you are unable to resolve the issue using these best practices then it is recommended to contact the Salesforce Support team for assistance.
What are some of the best practices for preventing this error from occurring in your Apex code executions, specifically those that use loops or recursive function calls?
Note that operations that don’t consume application server CPU time aren’t counted toward CPU time. What is counted: all Apex code, library functions exposed in Apex, and workflow execution. What is not counted: database operations (the portion of execution time spent in the database for DML, SOQL, and SOSL) and the waiting time for Apex callouts.
If not managed properly, loops and recursive calls can use up a large portion of CPU resources and cause the “Apex CPU time limit” error.
For loops, it is important to minimize the number of iterations and, if necessary, break the work up into smaller chunks. This helps prevent too many records from being processed in a single request, reducing the chances of CPU time reaching its maximum. Additionally, the code should use static variables to ensure that recursive calls don’t run in an infinite loop and consume too much CPU time.
When working with loops or recursive function calls, there are a few best practices that can be adopted to help prevent the “Apex CPU time limit” error:
– Minimize the number of loop iterations: if possible, reduce the number of records processed in a single transaction so that CPU time stays well below the limit.
– Avoid deeply nested loops: nested loops can be very slow when processing a large volume of records, and code in managed packages counts against the same CPU limit and may add further delays (see the sketch after this list).
– Break up loops: if a loop cannot be reduced, consider breaking it up into multiple smaller loops, or moving the work into asynchronous processing to help avoid the CPU timeout error.
– Use static variables when writing recursive calls to prevent an infinite loop.
– Consider batch-processing large computations or data processing operations using asynchronous mechanisms such as Batch Apex, Queueable jobs, or @future methods.
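To illustrate the nested-loop point above (the object pairing and the field copy are arbitrary examples), replacing an inner loop with a Map lookup turns n × m comparisons into a single pass over each list:

public class OpportunityLabeler {
    // Replaces a nested loop (one pass over every Account for every Opportunity)
    // with a single Map lookup per Opportunity.
    public static void copyAccountNames(List<Opportunity> opportunities, List<Account> accounts) {
        Map<Id, Account> accountsById = new Map<Id, Account>(accounts);
        for (Opportunity opp : opportunities) {
            Account acc = accountsById.get(opp.AccountId);
            if (acc != null) {
                opp.Description = acc.Name; // example of copying data from the matched Account
            }
        }
    }
}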
Can other measures be taken to help avoid this error, such as modifying your organization’s settings or using parallel Apex processing?
Yes, in addition to following best practices when writing code, there are a few other measures that can be taken to help prevent the “Apex CPU time limit” error.
At the organization level, the CPU limit itself cannot be raised (it is fixed at 10,000 ms for synchronous and 60,000 ms for asynchronous transactions), but you can reduce pressure on it by reviewing the automation configured on busy objects, since workflows, processes, and flows all share the same transaction’s CPU time. Additionally, Salesforce offers parallel processing through Batch Apex and Queueable jobs, which spread heavy work across separate asynchronous transactions and decrease the risk of running into the CPU limit (see the Queueable sketch at the end of this answer).
Finally, you can also consider using tools such as StackStorm or Salesforce DX to improve your development processes. These tools can help automate testing and code analysis so that inefficient code is caught before it reaches production.
Overall, it is important to follow best practices when writing Apex code and to keep your organization’s automation configured appropriately in order to minimize the risk of encountering the “Apex CPU time limit exceeded” error.
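As a minimal sketch of that parallel, asynchronous option (the class name is an assumption, and the processing simply mirrors the case study above), a Queueable job can carry a chunk of records directly as a member variable, with no JSON serialization needed:

public class AccountChunkJob implements Queueable {
    private List<Account> chunk;

    public AccountChunkJob(List<Account> chunk) {
        this.chunk = chunk;
    }

    public void execute(QueueableContext context) {
        // Runs in its own asynchronous transaction with its own CPU time limit
        for (Account acc : chunk) {
            if (acc.Type == 'Customer') {
                acc.AccountNumber = '1';
            }
        }
        update chunk;
    }
}

// Enqueue one job per chunk (up to 50 enqueues per transaction):
// for (List<Account> chunk : accountChunks) {
//     System.enqueueJob(new AccountChunkJob(chunk));
// }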
What should you do if you encounter this error in your development environment and how can you troubleshoot it further?
If you encounter the “Apex CPU time limit exceeded” error in your development environment, the first step should be to review your code and ensure that it follows all of the best practices outlined above. Additionally, check for any inefficient loops or recursive calls that may be causing excessive CPU time utilization and refactor them where possible.
You can use the Developer Console to view debug logs and analyze performance. In the log inspector, the Execution Overview panel’s Limits tab shows how much of the CPU time limit each transaction used.
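To narrow the problem down further, you can instrument suspect sections of code with Limits.getCpuTime() checkpoints, just as the case study above does, and compare the deltas. The section names below are placeholders:

// Execute Anonymous sketch: measure the CPU time consumed by individual sections
Integer checkpoint = Limits.getCpuTime();

// ... section 1: e.g. record preparation ...
System.debug('Section 1 CPU ms: ' + (Limits.getCpuTime() - checkpoint));

checkpoint = Limits.getCpuTime();
// ... section 2: e.g. the main processing loop ...
System.debug('Section 2 CPU ms: ' + (Limits.getCpuTime() - checkpoint));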
How will Salesforce address this issue in future releases of Apex and what are some possible workarounds for now?
Salesforce continues to refine the Apex governor limit framework and its debugging tooling with each release, which helps developers identify and address CPU-heavy code more quickly.
Ultimately, following best practices and leveraging the tools available will help developers optimize their code and minimize the risk of encountering the “Apex CPU time limit exceeded” error in the future.
Good luck with your Apex development!