SFDC Stop - Always the latest about Salesforce



Friday, 1 May 2020

How to display warnings in Salesforce? | Become an #AwesomeAdmin using Platform Event Toast LWC

Hello Trailblazers,

I recently created a new generic toast component named "Platform Event Toast LWC" which can be used to display warnings in Salesforce. I open-sourced it on GitHub and it can be accessed at this link:- https://github.com/rahulmalhotra/Platform-Event-Toast-LWC

Have you ever faced a requirement where you need to show a warning message to a user who is trying to save a record, but you also want the record to be saved successfully without any restriction?

In this post, we're going to learn how we can show warnings in Salesforce using Platform Event Toast LWC and take one step forward to become an #AwesomeAdmin.

The Platform Event Toast can be installed as an unmanaged package in any developer/sandbox/production environment by clicking on the links below:-

  1. Developer/Production Environment:- https://login.salesforce.com/packaging/installPackage.apexp?p0=04t7F000005ZBvB
  2. Sandbox Environment:- https://test.salesforce.com/packaging/installPackage.apexp?p0=04t7F000005ZBvB

Use Case

Let's say you want to show a warning to a user who is updating the Annual Revenue of an account. The new annual revenue is >= $50,000 but the user forgot to update the rating to "Hot". In this scenario, the client doesn't want to restrict the user from saving the record, but the user should know that the preferred rating for this account is "Hot".

Our AWESOME Salesforce Admin can fulfill this requirement using Platform Event Toast. All they have to do is create a Process Builder on Account as shown below:-

The Process Builder will run when an account record is created or edited, as we want to show the warning when a record is updated by the user.


We're setting the criteria that all of the conditions are met and we've added two conditions:-

  1. Annual Revenue >= $50,000
  2. Rating != Hot

We'll execute the Process Builder only when the specified changes are made to the record, as we want to show the warning only once.


At the end, we added an action where we're creating a Toast Event record, setting up the properties of the toast as shown below:-



Note:- Key is a required field and must be filled. A key is used to uniquely identify a toast event when it's displayed on a page. Moreover, it'll help you as an admin to debug and find the Process Builder very easily if you set the key as ProcessBuilderName_ConditionName, since a key should be unique.
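For developers, it's worth noting that the Process Builder action above is simply publishing a platform event record that the toast component subscribes to, so the same toast could also be triggered from Apex (for example, from a batch job). The snippet below is a rough, hedged sketch only: the event API name (Toast_Event__e) and field names (Key__c, Title__c, Message__c, Variant__c) are my assumptions and may differ from the actual component, so check the GitHub repository above for the real names before trying this.

    // Hedged sketch: the event API name and field names below are assumptions,
    // verify them against the Platform Event Toast LWC repository before use.
    Toast_Event__e toastEvent = new Toast_Event__e(
        Key__c = 'AccountProcess_RatingWarning',    // must match the key configured on the component
        Title__c = 'Warning',
        Message__c = 'Annual Revenue is >= $50,000. The preferred rating for this account is "Hot".',
        Variant__c = 'warning'
    );
    Database.SaveResult publishResult = EventBus.publish(toastEvent);
    System.debug('Toast event published: ' + publishResult.isSuccess());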

Once your process builder is ready and activated, it's time to add our component to the lightning page. We just have to make sure that we're adding the same key here as we specified in the process builder:-



We can leave the other fields empty for this toast as we've set up everything in the process builder. In case we haven't specified the values of the other fields in the process builder, we can fill those fields here. Note that if you've added values in the process builder as well as in the component, the values from the process builder have higher priority and will override the values set in the component.

Once you've set up the toast successfully, you'll see the below result when the Account's Annual Revenue >= $50,000 and Rating != Hot:-



This is just a single use case; however, there can be multiple use cases where you can use this toast. You also have an option:- Run in System Mode, which allows a toast to run in system mode, i.e. the toast will be visible to all users of the org if the toast exists on the page that is opened by the user. This option can be used to create org-wide announcements by adding the toast as a part of the utility bar so that it's available at the application level, no matter which page is opened by the user.

If you want to know in detail about the Run in System Mode feature, let me know in the comments down below and I'll write another blog on the same.

Considerations while using Platform Event Toast

Let's have a look at the LinkedIn post below that I published when I created this Platform Event Toast:-



As you can see in the LinkedIn post above, there is a comment by René Winkelmeyer @muenzpraeger, who is an Architect, Developer Evangelism at Salesforce. He highlighted that the Platform Event Toast will work for small/medium-sized orgs, but may cause issues in large organizations where we have a large number of users or we're using Platform Events in various places for different purposes. As platform events work in real time, they have limits on the number of event notifications and concurrent subscribers in a 24-hour period. You can have a detailed look at the limits here:- https://developer.salesforce.com/docs/atlas.en-us.platform_events.meta/platform_events/platform_event_limits.htm

Thanks René for highlighting this 😊

You can also have a look at the video for the same below:-



If you liked this post, make sure to share it in your network and let me know your feedback in the comments down below.

Happy Trailblazing..!!

Wednesday, 15 April 2020

100% Test code coverage for multiple dependent apex HTTP callouts without creating a mock class

Hello Trailblazers,

Welcome to the 5th tutorial in the Simplifying the Callouts in Salesforce Tutorial Series. In this tutorial, we're going to create a test class for multiple dependent Apex HTTP callouts in a single transaction without creating a mock class. This is a Video-Only Tutorial Series, so I am giving you a summary of the tutorial below and you can learn in detail by watching the video at the end of the blog.

Note:- This tutorial uses resources from the previous tutorial. So, in case you want to implement the same example on your own, make sure to have a look at the previous tutorial first.

Remember our OrgConnectService class from the previous tutorial? In this tutorial, we're going to create a test class for it. Just to give a brief recap, our OrgConnectService class has a single method named createAccountAndContact(), which performs 3 dependent callouts using HTTPCalloutFramework. The first callout is responsible for creating an Account in the connected org using the standard API, the second callout creates a Contact and links it with the Account created in the previous callout, and the third callout queries this contact and account from the linked org. You can have a look at that class here.

Now, let's have a look at the test class below:-

As you can see above, we have 4 methods in this test class, but we'll concentrate in detail on the 1st method only, as it covers the positive scenario and is responsible for 76% of the code coverage. The remaining 3 methods cover the negative scenarios and are similar. In the createAccountAndContactTest() method, we're first of all creating individual mocks for each callout that we're going to perform from the service class. Then we created an instance of HTTPCalloutServiceMultiMock named multiMock and added all 3 individual mocks to this multi-mock using the addCalloutMock() method.

This method takes the endpoint URL as the first parameter and the individual mock as the second parameter. To set the endpoint URL properly, we created an instance of the HTTPCalloutService class named destinationOrgService by passing the custom metadata name in the constructor, as we've done in previous tutorials. Finally, we set the multi-mock using the Test.setMock() method, in which we passed HTTPCalloutMock.class as the first parameter (the type) and our multiMock instance as the second parameter.

Then we simply called our createAccountAndContact() method with the required parameters, and it automatically used our multi-mock setup to get the fake responses that we set in the individual mocks, according to the URL each callout hits. Finally, we checked the returnValueMap we're getting from the method to make sure that the callout is successful.
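For readers who can't watch the video right away, here is a minimal sketch of what such a test could look like. It is not the author's exact test class: the HTTPCalloutServiceMock, HTTPCalloutServiceMultiMock, addCalloutMock() and Test.setMock() usage follows the description above, but the constructor signatures, the getEndpointURL() getter, the REST sub-paths, the URL-matching behaviour of the multi-mock, and the return-map keys are all my assumptions, so compare with the actual test class in the repository.

    @isTest
    private class OrgConnectServiceTest {

        @isTest
        static void createAccountAndContactTest() {
            // One individual mock per callout: fake status code + fake response body
            HTTPCalloutServiceMock accountMock = new HTTPCalloutServiceMock(201, '{"id":"001xx000000000AAA","success":true}');
            HTTPCalloutServiceMock contactMock = new HTTPCalloutServiceMock(201, '{"id":"003xx000000000AAA","success":true}');
            HTTPCalloutServiceMock queryMock = new HTTPCalloutServiceMock(200, '{"totalSize":1,"done":true,"records":[]}');

            // Resolve the base endpoint from the same metadata record the service class uses
            HTTPCalloutService destinationOrgService = new HTTPCalloutService('CustomerRubyOrg');
            String baseURL = destinationOrgService.getEndpointURL(); // assumed getter name

            // Route each endpoint URL to its individual mock (exact matching behaviour is an assumption)
            HTTPCalloutServiceMultiMock multiMock = new HTTPCalloutServiceMultiMock();
            multiMock.addCalloutMock(baseURL + '/sobjects/Account', accountMock);
            multiMock.addCalloutMock(baseURL + '/sobjects/Contact', contactMock);
            multiMock.addCalloutMock(baseURL + '/query/', queryMock);
            Test.setMock(HTTPCalloutMock.class, multiMock);

            Test.startTest();
            Map<String, String> returnValueMap =
                OrgConnectService.createAccountAndContact('Test Account', new Contact(LastName = 'Test'));
            Test.stopTest();

            // Assert on the return map (key names depend on the actual service class)
            System.assertEquals(null, returnValueMap.get('error'));
            System.assertNotEquals(null, returnValueMap.get('accountId'));
        }
    }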

Let's discuss one method with a negative scenario too. In the createAccountAndContactTestWrongResponseAccount() method, you can see that I intentionally passed QUERY_SUCCESS_CODE in the mock because I want to cover the scenario where the response code from the account callout is not 201. As I am testing the negative scenario for the first request, I don't need to create a multi-mock, because I'll be getting a response code of 200 (set using QUERY_SUCCESS_CODE) during the test run for the first callout (account creation, where the expected response code is 201), and the method will return an error without executing further code.

In another method, createAccountAndContactTestWrongResponseContact(), I am checking the negative scenario for the second callout. Because of this, I created a multi-mock where I passed CREATE_SUCCESS_CODE into accountMock, as I want this request to be successful, whereas I passed QUERY_SUCCESS_CODE into contactMock, as I want this request to fail, and finally asserted the ERROR_CODE and CONTACT_ERROR_MESSAGE for this request.

Want to learn in depth? Have a look at the video below:-



In this tutorial, we learned how we can get 100% code coverage for multiple dependent apex HTTP callouts without creating a mock class as shown below:-


If you liked this tutorial, make sure to share it in your network and let me know your feedback in the comments down below.

Happy Trailblazing..!!

Friday, 10 April 2020

Multiple Dependent Apex HTTP Callouts in a Single Transaction from Salesforce | HTTPCalloutFramework

Hello Trailblazers,

Welcome to the 4th tutorial in the Simplifying the Callouts in Salesforce Tutorial Series. In this tutorial, we're going to perform multiple dependent Apex HTTP callouts in a single transaction. This is a Video-Only Tutorial Series, so I am giving you a summary of the tutorial below and you can learn in detail by watching the video at the end of the blog.

Note:- This tutorial uses resources from the previous tutorial. So, in case you want to implement the same example on your own, make sure to have a look at the previous tutorial first.

We've made some updates to our CustomerRubyOrg metadata record as shown below:-


As you can see above, I have added a header with key:- Content-Type and value:- application/json. I have updated the method to POST and the endpoint to callout:CustomerRubyAPI/services/data/v48.0, where I am referring to CustomerRubyAPI, which is the name of the Named Credential record that we created in the previous tutorial.

Let's have a look at the code below:-


As you can see above, we have an OrgConnectService class with a single method named createAccountAndContact(). This method receives a string for the account name and a contact object as parameters. Inside this method, we're performing 3 callouts to another Salesforce org (we'll call it the source org) one by one. So, in total, 3 operations are performed in the source org as shown below:-
  1. Creating a new Account Record using the account name received in parameter.
  2. Creating a new Contact record from the contact received as a parameter and linking it with the Account record created in step 1.
  3. Querying the contact and related account record.

First of all, we're creating an instance of HTTPCalloutService named destinationOrgService, passing the custom metadata name CustomerRubyOrg in the constructor. Then we're setting the endpoint URL and request body for each callout using the getter and setter methods present in HTTPCalloutService, which is a part of HTTPCalloutFramework.

Finally, we're sending the request and checking whether the response code is correct or not. For record creation in Salesforce, the response code should be 201, and for querying it should be 200. For the first two callouts, I am parsing the response body using the JSON.deserializeUntyped() method, which returns an Object that I typecast into Map<String, Object>. I am checking the value of success and getting the id of the record which was created. However, in the third callout, I am simply displaying the response using System.debug().

Whether the request is successful or not, we're forming a Map<String, String> which is returned by this method. We'll be using this map in the next tutorial, where we'll create a test class for this class in order to add asserts for all 3 callouts depending upon the return value map.

I have added the necessary comments to help you understand the code. The basic flow is:- we're creating an account record by calling out to the Salesforce standard API. Then we're creating a contact record and linking it with the account record whose id we got in the response from the first API callout. Finally, we're querying the contact and related account record using the contact id that we got in the response from the 2nd callout. So, in this way, each callout depends on the previous callout's response, which is a very common requirement that we usually face in real-life projects while working on integrations.
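To make the flow above easier to follow without the video, here is a minimal sketch of what such a service class could look like. It is not the author's exact class: the HTTPCalloutService constructor taking the HTTPCalloutConfiguration record name, the getter/setter usage, and sendRequest() follow the description above, but the exact method names (getEndpointURL, setEndpointURL, setRequestMethod, setRequestBody), the REST sub-paths, and the return-map keys are my assumptions, so compare with the actual class linked in the post.

    public with sharing class OrgConnectService {

        public static Map<String, String> createAccountAndContact(String accountName, Contact contactRecord) {
            Map<String, String> returnValueMap = new Map<String, String>();
            HTTPCalloutService destinationOrgService = new HTTPCalloutService('CustomerRubyOrg');
            String baseURL = destinationOrgService.getEndpointURL(); // callout:CustomerRubyAPI/services/data/v48.0

            // Callout 1: create the account in the source org via the standard REST API
            destinationOrgService.setEndpointURL(baseURL + '/sobjects/Account');
            destinationOrgService.setRequestBody(JSON.serialize(new Map<String, Object>{ 'Name' => accountName }));
            HttpResponse accountResponse = destinationOrgService.sendRequest();
            if (accountResponse.getStatusCode() != 201) {
                returnValueMap.put('error', 'Account creation failed');
                return returnValueMap;
            }
            String accountId = (String) ((Map<String, Object>) JSON.deserializeUntyped(accountResponse.getBody())).get('id');
            returnValueMap.put('accountId', accountId);

            // Callout 2: create the contact and link it with the account created above
            destinationOrgService.setEndpointURL(baseURL + '/sobjects/Contact');
            destinationOrgService.setRequestBody(JSON.serialize(new Map<String, Object>{
                'LastName' => contactRecord.LastName,
                'AccountId' => accountId
            }));
            HttpResponse contactResponse = destinationOrgService.sendRequest();
            if (contactResponse.getStatusCode() != 201) {
                returnValueMap.put('error', 'Contact creation failed');
                return returnValueMap;
            }
            String contactId = (String) ((Map<String, Object>) JSON.deserializeUntyped(contactResponse.getBody())).get('id');
            returnValueMap.put('contactId', contactId);

            // Callout 3: query the contact and its related account back from the source org
            destinationOrgService.setRequestMethod('GET');   // assumed setter; the metadata record defaults to POST
            destinationOrgService.setRequestBody('');        // no body for the query request
            destinationOrgService.setEndpointURL(baseURL + '/query/?q=' +
                EncodingUtil.urlEncode('SELECT Id, Name, Account.Name FROM Contact WHERE Id = \'' + contactId + '\'', 'UTF-8'));
            HttpResponse queryResponse = destinationOrgService.sendRequest();
            if (queryResponse.getStatusCode() == 200) {
                System.debug(queryResponse.getBody());
                returnValueMap.put('status', 'SUCCESS');
            } else {
                returnValueMap.put('error', 'Query failed');
            }
            return returnValueMap;
        }
    }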

Want to learn in depth? Have a look at the video below:-



In the next tutorial, we'll see how we can create a test class for such a scenario, where we have 3 dependent callouts in a single transaction, and that too without creating a mock class, using HTTPCalloutFramework. If you liked this tutorial, make sure to share it in your network and let me know your feedback in the comments down below.

Happy Trailblazing..!!

Tuesday, 7 April 2020

Connect two salesforce orgs using Named Credentials, Authentication Provider & HTTPCalloutFramework

Hello Trailblazers,

Welcome to the 3rd tutorial in the Simplifying the Callouts in Salesforce Tutorial Series. In this tutorial, we're going to connect two Salesforce orgs, thereby learning about Named Credentials, Authentication Providers, and HTTPCalloutFramework in detail. As this is a Video-Only tutorial series, I am giving you a brief of what we've done in this tutorial below, and you can have a detailed look in the related video. So, let's begin.

As we're going to connect two orgs, that means we'll be fetching some data from one org into another. Let's call the org from which we're going to fetch data the Source Org, and the org which is pulling the data from the source org the Destination Org.

Connected App in Source Org

The first step is to create a connected app at the third party which is responsible for providing the data. In this case, our third party is our Source Org. So, we need to create a connected app in our source org as shown below:-


Notice that we have selected two OAuth scopes here:-

  1. Access and manage your data (api) :- Used to query data from Salesforce
  2. Perform requests on your behalf at any time (refresh_token, offline_access) :- Used to obtain the refresh token, which the authentication provider uses to get a new access token automatically when the access token expires.
We'll be using the Consumer Key and Consumer Secret from this connected app in our authentication provider.

Authentication Provider in Destination Org

We have created an authentication provider record in destination org as shown below:-

Points to notice here:-
  1. Provider type is Salesforce as we're going to connect to a Salesforce Org.
  2. Consumer Key and Consumer Secret are copied and pasted from our connected app which is made in another salesforce org as shown above.
  3. Callback URL is automatically generated when you save your authentication provider record. We need to copy and paste this callback URL into the connected app which was created in the source org and update that connected app (you can see above that our connected app has the same Callback URL, which is given by our authentication provider). I updated it afterwards.

Named Credential in Destination Org

We created a named credential in the destination org which is using our authentication provider.

Please make sure that the URL is the base URL of the source org, which is visible in Salesforce Classic mode as shown in the above image. As you can see, the Identity Type for the named credential record is Named Principal, and the authentication provider that we created before is selected here. When you save this record for the first time, you'll be taken to a login page where you have to log in with the credentials of the "Source Org", i.e. the org where you've created your connected app. Make sure that the authentication status is updated to Authenticated as <source org username>, as shown in the above image.

HTTPCalloutFramework in Destination Org

Please make sure that you've installed HTTPCalloutFramework in your destination org. You can install it from here. Once you have it installed in your org, you can create a custom metadata record of HTTPCalloutConfiguration as shown below:-

We've set the endpoint as:- callout:CustomerRubyAPI/services/data/v48.0/query/ where CustomerRubyAPI is the name of my named credential. As I am going to query some records from my source org, I have set up that URL as the endpoint here. In the URL parameters, I have q:<query>, so I'll update the URL parameter with key q and set its value to my actual query before calling out, and the final URL will be similar to this:- callout:CustomerRubyAPI/services/data/v48.0/query/?q=SELECT+Name+FROM+Account.

Fetching Data from Source Salesforce Org to Destination Salesforce Org


You can have a look at the above code, which is used to perform any query in the source org and get the results as the response of the API. You can see the debug log behind the code, which shows the result of a query on Account. You can perform any query, and you don't have to worry about authentication, as it's handled by the Named Credential and Authentication Provider. All other configuration is handled by HTTPCalloutFramework.
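For reference, a minimal sketch of what that query callout could look like in anonymous Apex, assuming an HTTPCalloutConfiguration record named CustomerRubyOrg and a setURLParameter() setter on HTTPCalloutService (both names are assumptions based on the description above; the actual code is shown in the video):

    // Sketch only: the metadata record name and setURLParameter() are assumptions;
    // authentication itself is handled by the Named Credential behind the endpoint.
    HTTPCalloutService service = new HTTPCalloutService('CustomerRubyOrg');
    service.setURLParameter('q', 'SELECT Name FROM Account'); // becomes ...?q=SELECT+Name+FROM+Account
    HttpResponse response = service.sendRequest();
    System.debug('Status: ' + response.getStatusCode());
    System.debug('Body: ' + response.getBody()); // JSON query result from the source org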

Note:- Here we can easily plug in any dynamic query because it's handled via the URL parameters. But in case you need to add a dynamic value to the request body, I would recommend keeping a template of the body in the HTTPCalloutConfiguration record and using string replace along with the getters/setters to update the request body before the callout.

If you want to have a detailed look, you can watch the video below:-



That's all for this tutorial. In the next tutorial, we'll have a look at how we can create a test class to manage multiple callouts in a single transaction. Our test class will have 100% code coverage, and that too without creating a mock class. So, stay tuned.

Happy Trailblazing..!!

Saturday, 4 April 2020

100% code coverage for apex HTTP Callout in Salesforce without creating a mock class

Hello Trailblazers,

This is the second tutorial in the Simplifying the Callouts in Salesforce Tutorial Series. In this tutorial, we created a new class named SFDCStopService which is responsible for performing a callout to the SFDC Stop Blogs API:- https://sfdcstop.herokuapp.com/blogs

You can have a look at the SFDCStopService class below:-


For this class, we created a test class so that we can cover the code, and we were able to achieve 100% code coverage without creating a mock class, using HTTPCalloutFramework. Let's have a look at the test class below:-


So, the question is: how are we able to achieve 100% code coverage without creating a mock class? The answer is that HTTPCalloutFramework provides you an in-built mock class named HTTPCalloutServiceMock. You can create an instance of it, and in the mock class constructor you just need to pass the fake response code and the response body. That's it: the in-built mock class will create a fake response for you when you're calling out during the test.
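As a rough illustration (not the author's actual test class; the SFDCStopService method name and the fake response body below are assumptions), such a test could look like this:

    @isTest
    private class SFDCStopServiceTest {

        @isTest
        static void blogsCalloutTest() {
            // In-built mock from the framework: fake status code + fake response body,
            // so no custom HttpCalloutMock implementation is needed.
            HTTPCalloutServiceMock mock = new HTTPCalloutServiceMock(200, '{"status":"success","blogs":[]}');
            Test.setMock(HTTPCalloutMock.class, mock);

            Test.startTest();
            String responseBody = SFDCStopService.getBlogs(); // hypothetical method name and return type
            Test.stopTest();

            System.assert(responseBody.contains('blogs')); // assert against the fake body set above
        }
    }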



As this is a Video-Only Tutorial Series, in case you want to know more detail, you can have a look at the related video below:-



Do give this framework a try and let me know your feedback in the comments down below. You can find all the related code in the singlecallouttest branch of my HTTPCalloutFramework Tutorial Series GitHub repository here. In the next tutorial, we're going to connect two Salesforce orgs very easily and fetch data from one Salesforce org into another using HTTPCalloutFramework.

Happy Trailblazing..!!

Thursday, 2 April 2020

Introduction and Installation of HTTPCalloutFramework

Hello Trailblazers,

About 4 months ago, I created a new framework to simplify HTTP callouts in Salesforce. I made it free, open-sourced it on GitHub, and shared it with my #SalesforceOhana, i.e. YOU. I got valuable feedback from some of you, which you can see in the comments on the post below:-



Thanks to everyone who gave their suggestions on the above post. I improved the framework and recently planned a webinar on it. At that time, I decided to create a small tutorial series instead of a webinar, because there is so much to tell about this framework that it can't be covered in one webinar. (Thanks Radhika Bansal for your suggestion 😊)

So, here I present the first tutorial in the Simplifying the Callouts Tutorial Series by SFDC Stop. In this tutorial, we're going to have a brief look at HTTPCalloutFramework, which is a free and open-source solution to simplify Apex HTTP callouts in Salesforce. This is going to be a Video-Only Tutorial Series, and I am going to cover everything in about 6 or 7 tutorials at max (estimated). I am going to share all the resources (presentation/code/videos) in the blogs that I publish under this tutorial series.

For the first tutorial, we have a presentation covering: Why do we need this framework? What's the current scenario that we're dealing with? And how does it help you simplify the callout implementation?

We're also going to cover the following:-
  1. Installation of HTTPCalloutFramework
  2. Configuring the framework in your org
  3. We're going to perform a callout using 2 LINES OF CODE to a static SFDC Stop Blogs API:- https://sfdcstop.herokuapp.com/blogs 

    YES..!! You read it right...!! We can perform a callout in only 2 lines of code using this framework, as sketched below.
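For context, a minimal sketch of what those 2 lines could look like, assuming an HTTPCalloutConfiguration metadata record already points at the SFDC Stop Blogs API (the record name below is a placeholder; the actual setup is covered in the video):

    // Placeholder metadata record name - configure the endpoint in HTTPCalloutConfiguration first
    HTTPCalloutService service = new HTTPCalloutService('SFDCStopBlogsAPI');
    HttpResponse response = service.sendRequest();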

To know more, have a brief look at the presentation below:-



And here is a video featuring HTTPCalloutFramework:-



If you liked this tutorial, make sure to give HTTPCalloutFramework a try and let me know your feedback in the comments section below. You can also contribute to the framework by sending a pull request on GitHub.

Happy Trailblazing..!!

Saturday, 28 March 2020

Are Scheduled Flows Bulkified ? Let's Find Out...!!

Hi Trailblazers,

Have you heard about scheduled flows in Salesforce? If not, let me give you a brief introduction. Scheduled Flows are flows that are scheduled to run at a specific date and time, either once, daily, or weekly. So, the question arises: if the flows are scheduled, how do they process records? Do they process records one by one, or are they bulkified and process many records at a time?

In this post, we're going to create a scheduled flow and check whether scheduled flows are bulkified or not.

Use Case / Scenario:- Let's say that, for reporting purposes, you need to find out, for each account which is active, how many contacts have opted out of email. So, as an awesome admin, you designed a solution for it and decided to create a scheduled flow that will run daily at 12:00 A.M., check how many contacts have opted out of email for each account, and finally update the count.

It's a custom roll-up of sorts, with the difference that it will update all accounts once on a daily basis. Before we begin, I have created two custom fields on the Account object: the first is a custom Number field with the API name NumberOfContactsOptedOutOfEmail__c, as we're going to update this field using our flow, and the second is a custom Checkbox field with the API name IsActive__c, as we're only going to check accounts that are active. You can create the number field as shown below:-


Also the checkbox field is shown below:-


So, we have our fields ready. Now it's time to make a scheduled flow.

1. Go to setup and search for flows.


2. Click on the New Flow button. This will open the flow builder. In the New Flow dialog, choose Autolaunched Flow and click on the Create button.


3. Double click on the Start button that's already available in the flow builder and you'll see a dialog where you can schedule the flow. As you can see below, for the radio button you should select Scheduled jobs—flow runs for a batch of records. In the Set a Schedule section, you need to enter the Start Date and Start Time. You can set the date to the next day and the time to 12:00 AM, as we want our scheduled flow to run at 12:00 AM, and you should also set the Frequency to Daily.


Also, in the Run the Flow for a Set of Records (Optional) section, set the object to Account, and in Condition Requirements select Conditions are Met (AND). We're adding a condition to consider only active accounts here, so we have set the field to IsActive__c, the Operator to Equals, and the Value to {!$GlobalConstant.True}. Click on Done.

Now, this flow will run daily at 12:00 A.M. and will get all accounts that are active. Each account in this flow will be stored in the {!$Record} variable automatically. So, our next step is to get all contacts that have opted out of email and are related to the current account record.

4. For this, drag and drop the Get Records element onto the flow builder from the left panel and set the label, API name, and description as shown in the image below:-


As you can see above, we've selected the object as Contact, as we're getting contact records here, and below that, in the condition, we've selected Conditions are Met. I have added two conditions as shown below: the 1st is to check that the contact has opted out of email, so that checkbox should be true, and the second is to confirm that we're only getting contact records related to the current account record, so here AccountId Equals {!$Record.Id}.


The contact records are not sorted in our case, as we don't need that. Below that, we have a few more options:-


As you can see above, we need to get all records that satisfy the above conditions, so we've selected the All records radio button. To store the data, we're only querying the Id field, as we just need to calculate the count of these contacts. Also, Salesforce will automatically assign a variable for these queried contacts.

5. Now we've queried the contacts related to the current account record who have opted out of email. It's time to run a Loop on those contacts and count the records. Let's have a look at the below image, where we dragged and dropped a Loop element onto the flow builder. I have given it a name and description, and in the Collection Variable, we need to select the variable containing the list of contacts. As we gave that responsibility to Salesforce, you can see that Salesforce has automatically created a variable named Contacts from Get_contacts_who_has_opted_out_of_Email. Select this collection variable.


We've then selected to loop from first item to last item, as order doesn't matter to us here. For Loop Variable, I am going to create a new Loop Variable to refer to the current contact in the loop.


6. We've created a new Loop Variable of type Record with the Object set to Contact, which is Available for Output, as we just need it to count the records.


Click on Done and you'll see our new variable is set.


Click on Done again.

7. Now we're looping over the contacts and we'll get each contact record in the Contact variable. So, it's time to count those contacts and store the total count in a variable. For that, I dragged and dropped an Assignment element onto the flow builder as shown below:-


If you click on the Variable field, you'll see that you have the Contact variable, which will hold one contact at a time inside the loop.


But here, we need to assign the count so, we're going to create a New Resource for that. Click on New Resource and set the values as shown below:-


So, we've made a new resource of type Variable which has an API name of ContactsCount and a Data Type of Number with 0 decimal places, and the default value is also 0. This variable is available for both input and output, as we'll be updating it for each contact and using it to update our account at the end. Once you've filled in these values, click on Done.

8. You'll see that the ContactsCount variable is now available in the assignment. For each contact we encounter in the loop, we need to Add 1 to the ContactsCount variable. You can set this as shown below.


So, the ContactsCount variable will calculate the total number of contacts who have opted out of email and are related to the current Account. Click on Done.

9. The above assignment will be inside the loop. Outside the loop, once we have the total contacts count, we need to assign that count to the NumberOfContactsOptedOutOfEmail__c field of the current Account record. You can see another Assignment element below for this purpose.


Now, we need to update the current account record. So, click on Done, drag and drop the Update Records element onto the flow builder, and select our {!$Record} variable, which represents the current account record in the scheduled flow, as shown below:-


Click on Done and connect all the elements together. Your final flow will look as shown below:-


Note in the above image that the assignment which updates the ContactsCount variable is inside the loop, and the assignment which updates the NumberOfContactsOptedOutOfEmail__c field is outside the loop, so that the total is calculated and assigned to the current account field after the last item in the loop. Finally, we updated the current account record.

10. Save the flow by clicking on the Save button on the top right with values as shown below:-


11. Activate the flow by clicking on the Activate button on the top right as shown below:-


Congratulations..!! Your scheduled flow is activated and will update all the active accounts with the number of related contacts who have opted out of email.

Note:- Have you noticed that we've built the whole flow considering only the current account record that is stored in the {!$Record} variable? But in reality, there may be hundreds of active accounts stored in Salesforce. So, in order to check how scheduled flows deal with bulk records, I created about 500 accounts, each with 4 contacts, out of which 2 have opted out of email, as shown below.
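The exact data-creation steps aren't shown in the post, but an anonymous-Apex sketch along these lines would create test data of that shape (the field API names are the ones defined earlier in this post; the account/contact names are placeholders):

    // Create 500 active accounts, each with 4 contacts, 2 of which have opted out of email
    List<Account> bulkAccounts = new List<Account>();
    for (Integer i = 0; i < 500; i++) {
        bulkAccounts.add(new Account(Name = 'Bulk Test Account ' + i, IsActive__c = true));
    }
    insert bulkAccounts;

    List<Contact> bulkContacts = new List<Contact>();
    for (Account acc : bulkAccounts) {
        for (Integer j = 0; j < 4; j++) {
            bulkContacts.add(new Contact(
                LastName = 'Contact ' + j,
                AccountId = acc.Id,
                HasOptedOutOfEmail = (j < 2) // the first 2 of the 4 contacts opt out of email
            ));
        }
    }
    insert bulkContacts;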


The flow was already activated and it ran at 12:00 A.M. Now, it's time to check the logs.


I found that a total of 4 logs were created for the 500 accounts and the roughly 2000 contacts in my org, as each account was linked to 4 contacts.

I also queried the accounts and checked that all accounts were updated with the value 2 in the number of contacts opted out of email field, as shown below:-
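The exact verification query isn't shown, but a quick aggregate query in the Developer Console along these lines would confirm it:

    // Group active accounts by the rollup field value; we expect a single bucket with value 2
    List<AggregateResult> results = [
        SELECT NumberOfContactsOptedOutOfEmail__c rollupValue, COUNT(Id) totalAccounts
        FROM Account
        WHERE IsActive__c = true
        GROUP BY NumberOfContactsOptedOutOfEmail__c
    ];
    System.debug(results); // e.g. one row: rollupValue = 2, totalAccounts = 500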


So let's have a look at all 4 logs one by one:-

Log No. 1


We got to know that 500 accounts were retrieved for our scheduled flow.

Log No. 2


In this log, we got to know that our flow began processing 100 account records, and for these 100 account records, 200 contacts were queried, as each account has 2 contacts that have opted out of email.


Also, at the end 100 accounts were updated in a single DML statement as shown below:-


So, our log no. 2 actually processed 100 accounts in which 1 SOQL query was used to query 200 contacts and 1 DML statement was used to update 100 accounts. Let's have a look at Log No. 3.

Log No. 3


In this log, we got to know that our flow began processing 200 account records, and for these 200 account records, 400 contacts were queried, as each account has 2 contacts that have opted out of email.


Also, at the end 200 accounts were updated in a single DML statement as shown below:-


So, our log no. 3 actually processed 200 accounts in which 1 SOQL query was used to query 400 contacts and 1 DML statement was used to update 200 accounts.

Log No. 4 is exactly like Log No. 3 and also processes 200 accounts. So, in this way, our 500 accounts were processed in batches of 100, 200, and 200 accounts respectively.

Therefore, we conclude that Scheduled Flows are bulkified and process records in batches of up to 200 records.

Note:- In case your flow is not running on the scheduled time, make sure that you're setting the right time for your flow based on the default time zone in Company Information (Go to Setup -> Company Information)

If you liked this post, do share it in your network and let me know your feedback in the comments down below.

Happy Trailblazing..!!

Saturday, 14 March 2020

Before Save Update Flows v/s Process Builder | Salesforce #Spring20 Release

Hello Trailblazers,

Salesforce recently launched a new feature in the Spring '20 release, i.e. before-save updates using Lightning flows. We can simply define these before-save updates as flows that update a record before it's committed to the database. Let's call them "Before-Save Update Flows".

So, we can say that before-save update flows can be fired when a record is created, when a record is updated, or both (when a record is created/updated), because in these 3 conditions we can update a record before it's committed to the database.

Now, the question arises:-

We can perform the same operation using a Process Builder.

Why should we use "Before-Save Update Flows" ?


Well, the Salesforce documentation says that before-save update flows are 10 times faster compared to process builders.

To answer this question, we're going to take a simple use case and we'll create a before-save update flow, as well as a process builder and see the difference. So, let's begin 😎

I am taking a sample use case in which I have created a new picklist field on my Opportunity object named Opportunity Rank (Opportunity_Rank__c), which can have 4 values:- Bronze, Silver, Gold and Platinum. I want the Opportunity Rank field to be automatically updated when a record is saved, based on the following criteria (also restated as a quick Apex sketch after the list):-

  1. Opportunity Rank is Bronze if Amount < 10,000
  2. Opportunity Rank is Silver if 10,000 <= Amount < 50,000
  3. Opportunity Rank is Gold if 50,000 <= Amount < 100,000
  4. Opportunity Rank is Platinum if Amount >= 100,000
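Purely to restate the four thresholds above unambiguously, here is a minimal Apex sketch of the same branching logic (the post itself implements this declaratively with a flow and a process builder, not in Apex; the class and method names are hypothetical):

    public class OpportunityRankHelper {
        // Hypothetical helper mirroring the four criteria above; null handling is my own assumption
        public static String getRank(Decimal amount) {
            if (amount == null) return null;        // no amount, no rank
            if (amount < 10000) return 'Bronze';    // Amount < 10,000
            if (amount < 50000) return 'Silver';    // 10,000 <= Amount < 50,000
            if (amount < 100000) return 'Gold';     // 50,000 <= Amount < 100,000
            return 'Platinum';                      // Amount >= 100,000
        }
    }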

It's time to create a flow now:-

1. To set the flow to fire automatically when a record is saved, double click on the Start icon.

2. You'll see a dialog box as shown below. Set the What Launches the Flow radio button to New or updated records - flow makes fast field updates, and the Choose When to Launch the Flow radio button to A record is created or updated. Also, set the object to Opportunity, as we're dealing with opportunities here.


3. Click on Done. Next, we created a Decision element where we stored our conditions as shown below. You can click on the (plus) + button to the right of the OUTCOME ORDER heading in the left panel to add more outcomes, as I have added below:-

  1. Amount < 10,000
  2. 10,000 <= Amount < 50,000
  3. 50,000 <= Amount < 100,000
  4. Amount >= 100,000

Now our outcomes are ready according to the conditions, so it's time to update our record. Salesforce provides a $Record global variable in this type of flow, which is actually the record being inserted/updated. So, all we need to do is update the Opportunity Rank field in that $Record variable and Salesforce will handle the rest.

In order to update the field, I have created 4 assignments for my 4 conditions which are given below:-

  1. Update Opportunity Rank to Bronze
  2. Update Opportunity Rank to Silver
  3. Update Opportunity Rank to Gold
  4. Update Opportunity Rank to Platinum

Finally, I connected each outcome to its corresponding assignment, so that the final flow looks as shown below:-

I activated this flow, tested it, and got the correct result as shown below:-


Now it's time to create a process builder. I am not going to give step-by-step instructions for it, but I am pasting the necessary screenshots below so that you can build it easily:-

1. Creating a new process builder which will fire on record change.


2. The process builder is created on the Opportunity object and will fire when a record is created or edited.


3. There are different criteria nodes in the process builder, like the one shown below, which is checking:- 10,000 <= Amount < 50,000


4. For each condition there is a related action which will update the Opportunity Rank field of the Opportunity record that started the process. As you can see below, for the above condition, I have updated my Opportunity Rank field to Silver.


5. The final process builder will look as shown below. Don't forget to Activate the process builder to use it.


Now comes the fun part. As we know, we have a before-save update flow and a process builder doing the same thing. There is no workflow rule, assignment rule, or any other automation on the Opportunity object in my org, but I do have a Validation Rule and an after-insert Trigger (which contains a SOQL query) on the Opportunity object to test the performance.

First, I activated the flow, deactivated the process builder, and created a new opportunity. Then I deactivated the flow, activated the process builder, and created a new opportunity again. I got the two logs shown below:-


Can you see the difference in the processing time above?

Now, let's discuss why this happened. I checked the logs and observed the scenarios below:-

1. Before-Save Update Flow:- The events occurred in the following order:-

  • Before-Save Update Flow executed and updated the rank field on opportunity.
  • Validation rule executed.
  • Opportunity Trigger (written on after insert) executed and performed a query (SOQL Count - 1).

In the above case, the number of SOQL queries was 1, as my trigger executed once, and everything else was 0, as you can see below:-


2. Process Builder:- The events occurred in the following order:-

  • Validation rule executed.
  • Opportunity Trigger (written on after insert) executed and performed a query (SOQL Count - 1).
  • Process Builder executed. The process builder queried the record which was saved in the database (SOQL Count - 2), updated the rank field, and saved the record again (DML - 1).
  • As the operation was performed on the same record, the validation rule was executed again.
  • Similarly, the Opportunity Trigger executed again (SOQL Count - 3).

In this case, we had 3 SOQL queries and 1 DML statement executed, as you can see below:-


If you check the order of execution in the Salesforce Docs here


You'll get to know that this happened because Before-Save Update Flows execute after the system validations and before everything else. Therefore, all the changes are applied even before the record is saved to the database.

However, a process builder is executed like a workflow rule: you can see that at point 7 the record is saved but not committed, and the workflow rules execute after that at point 11. In our case, point 12 also applied, i.e. our process builder updated the same record, therefore the triggers and validations were fired again. So the process builder queried the record again, as it was already saved (+1 SOQL), saved the record again in the database (+1 DML), and the trigger (+1 SOQL) and validation rule were executed again, as written in point 12.

The only strange thing that happened in my case was that the Custom Validation Rules were also fired again, even though the documentation says they are not run again. I'll research more on that and update you accordingly. But after all this research, we came to the conclusion that:-

"Before-Save Update Flows are faster & take less processing time than Process Builders"

That's all for this post. If you liked it make sure to share it in your network and let me know your feedback in the comments down below.

Happy Trailblazing..!!