As I mentioned in my previous articles, I am trying to investigate the details of how the entitlements and the API Service Protection work, and how the entitlements are planned to be rolled out. I recently had a very interesting call with some of the nice people on the product team, which shed some more light on the entitlement issue and on how they suggest the API should best be used. The suggested method is to spread the API request load over the different users in the instance/tenant using impersonation. I will walk through what this means, and what I think about it, in the article below.
First, if you have not read my previous post on entitlements, I do suggest you do that first. It describes what entitlements are compared to the API Service Protection. I still see a lot of people mixing these up, and that is not strange, but they are two different things, and we need to keep track of which one we are talking about.
As mentioned in that article, the point of enforcing the entitlements, whenever that happens (the timing is still a bit unclear), is that the compute consumed by a small organization should be proportionate to that of a large organization. So, let us go back to the actual per-user licenses and look at an example.
Let us say we have a 5 000-user Sales Enterprise org; that means we get:
- 5 000 users who each have 20 000 API request entitlements.
- 100 000 API Requests for non-licensed users.
Compare this to a 5-user Sales Enterprise org, which will have:
- 5 users who each have 20 000 API request entitlements.
- 100 000 API Requests for non-licensed users.
Both of these numbers are totally independent of how many instances the first or the second org has.
The first observation is of course that the 100k API requests for non-licensed users do not scale at all with the size of the organization or the number of users. How does that align with the goal that a large org should get more compute than a small one? The second observation is that 20 000 API requests per user, which the normal UI also consumes from, is a very large number. You would have to be one busy salesperson to generate 20 000 API requests manually in 24 hours; so busy that I am tempted to say it is virtually impossible to break, unless you have very heavy automations running under your account. This was also what the Microsoft rep I talked to mentioned: this large number is meant to be used on a per-user basis. Hence the natural question was, if we use impersonation in the API, will the entitlements honor that? The answer was an unequivocal yes.
Hence, this is the clear answer on how we need to build future integrations: we need to spread the load over many of the users in the system, using impersonation.
Microsoft docs on how to do impersonation
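To make it concrete, here is a minimal sketch of what impersonation looks like against the Dataverse Web API, assuming you already have an OAuth bearer token for your integration's application user; the environment URL and GUIDs are placeholders. Impersonation is done with a request header: MSCRMCallerID takes the systemuserid of the user to impersonate (alternatively, CallerObjectId takes their Azure AD object id).

```python
import requests

# Placeholders: substitute your own environment, token and user id.
ENV_URL = "https://yourorg.crm4.dynamics.com"
TOKEN = "<oauth bearer token for the integration's application user>"
impersonated_user = "00000000-0000-0000-0000-000000000000"  # systemuserid to impersonate

headers = {
    "Authorization": f"Bearer {TOKEN}",
    "Content-Type": "application/json",
    "OData-MaxVersion": "4.0",
    "OData-Version": "4.0",
    # The request is executed, and should be counted, as this user.
    "MSCRMCallerID": impersonated_user,
}

# Create a contact as the impersonated user (any create/update works the same way).
payload = {"lastname": "Booking import 4711"}
resp = requests.post(f"{ENV_URL}/api/data/v9.2/contacts", json=payload, headers=headers)
resp.raise_for_status()
```

Note that the impersonated user still needs the privileges required for the operation, which is exactly the complication I get into below.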
If we do this the right way, it would probably be possible for most organizations to, over time, adapt their integrations to stay within the entitlements.
However, it will not be easy as we need to have a tight control of the privileges of all the users. Let me give you an example from a customer I work with:
They are an online travel agency and have people working at the destinations with very restricted privileges. A lot of bookings (orders) are integrated from the booking systems; these should hence be spread out over many users instead of the single application user that is used today. There is no natural user to direct the bookings to, as it is a B2C business and no person at the travel agency “owns” these customers per se, so the load needs to be distributed in a more randomized fashion. So, let us say we have these users:
- John Smith – System Admin (Full access)
- John Doe – Power User (can create orders but not refunds)
- John Surf Dude – Destination Specialist (can view but not create orders, cannot even read refunds)
When rebuilding the integration, we can use John Smith and John Doe but not John Surf Dude, and the only generic way of knowing this is to compare what we want to do with the privileges of each user, to get a shortlist of users that can be used for the integration.
However, we do not want to use a user that is already close to its 20k API requests for that day, so we might also need to query the current API request entitlement usage per user, so that we can filter the shortlist down to an even shorter list before deciding which users to use for impersonation.
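As a minimal sketch of that selection logic (not a finished implementation): pick a user from a pre-vetted shortlist and skip anyone who is getting close to their daily entitlement. The shortlist and the usage numbers are assumed to come from your own bookkeeping, for example a config table and a request counter you maintain yourself, since I know of no straightforward API for reading per-user entitlement consumption today.

```python
import random

# Hypothetical, pre-vetted shortlist: users known to have the privileges
# the integration needs (the John Smith / John Doe kind of users above).
CANDIDATES = [
    "11111111-1111-1111-1111-111111111111",
    "22222222-2222-2222-2222-222222222222",
]

DAILY_ENTITLEMENT = 20_000   # Sales Enterprise entitlement per user per 24 h
SAFETY_MARGIN = 2_000        # stay well below the limit

usage_today: dict[str, int] = {}  # your own counter of requests per impersonated user


def pick_impersonation_user() -> str:
    """Pick a random candidate that still has headroom for today."""
    eligible = [
        user for user in CANDIDATES
        if usage_today.get(user, 0) < DAILY_ENTITLEMENT - SAFETY_MARGIN
    ]
    if not eligible:
        raise RuntimeError("All candidate users are close to their daily entitlement")
    user = random.choice(eligible)
    usage_today[user] = usage_today.get(user, 0) + 1
    return user
```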
- A way forward – I think this can be used, although there are some tricks to it. For my customer we might be able to cut a significant amount of API calls this way, which will make a huge difference compared to not using this technique.
- Impersonation not always viable – as in the example above, when there is no obvious owner to link to, we need to figure out some other logic for how to spread the API entitlement load, and things start to become tricky.
- Impersonation is complex.
- More complex dependencies on the security model
As mentioned above, trying to execute an action as a user that does not have the correct privileges will not work, so we need to know the privileges up front. And just setting everyone as System Administrator will not work either.
- Logical user or just random users – trying to map the users to some logical connection from the other system, or just randomizing the load. A logical user is probably preferable but will probably not be a very common pattern.
- Integration often system-to-system not user-to-user
- Integrations are more often done on a system-to-system basis, not a user-to-user basis. When looking at CRM-ERP integrations for instance, the user bases of the two systems seldom overlap except for a few users.
- Takes time to refactor code to handle impersonation – There are many organizations out there with numerous complex integrations, and changing integrations on this level will require significant work. The question is whether there is time to complete this work before the entitlement enforcement goes GA.
- Strange audit trail – if we use randomized users to update or create data in Dataverse, that will undoubtedly create very strange audit trails and confusing created by / modified by fields. This needs to be taken into consideration.
- Power Apps per app users have very few requests – Not all licenses have 20k API requests per 24h. The Power Apps per app plan has only a 1 000 API request entitlement per 24h, and these can run out just by using the system heavily. So do consider the API entitlements when looking at the licenses.
- Still not GA – Entitlement enforcement has still not gone GA. Hence the best time to let Microsoft know what you think is good or bad about this is now. But do be civil; there will be some feature like this that handles fair allocation of compute consumption. Contact Microsoft through your local user group, your local MVP, or via the comments below, or send me a message on LinkedIn and I will put you in contact with the right people. You can also submit an idea to the idea portal.
- There might be a point to binding all entitlements to users, in the case that, in the future, any overshooting would not only result in angry emails but in service degradation or shut-off for that user. Imagine creative citizen devs unknowingly creating some infinitely looping Flow or massively recursive logic that causes a lot of requests. This approach would then just block that user, not the entire tenant, significantly reducing the severity of the problem.
Personally, I think this method is just way too complex. I think simply pooling all the API entitlements on the tenant level would be fair, and then deducting all usage from this pool. I think Microsoft could skip the 100 000 for non-licensed users, for simplicity. Based on the examples above, that would make:
5 000 Sales Enterprise users
- 5 000 users who each have 20 000 API request entitlements.
Total API entitlement for the tenant: 100 M / 24 h
5 Sales Enterprise users
- 5 users who each have 20 000 API request entitlements.
Total API entitlement for the tenant: 100 K / 24 h
And all users, as well as all non-licensed users, draw from the same pool.
As for the potential problem of creative users blocking the entire tenant, I would suggest adding a “per user” API request limit, which admins can change but which by default is set to exactly the same value as the entitlement. That would, for instance, allow admins to reduce the limit to 10k for enterprise users, while keeping server-to-server integrations running in a proper and entitled way.
I think this would align with Microsoft’s goals and make it easy to understand for customers and we do not have to rewrite tons of code and make strange workarounds. But maybe there is something I am missing. If so, and you see it, please leave a comment!
“Do you really need to delete records like a Ferrari?” – that question was posed to me a few years ago when I complained about the poor performance of the bulk deletion functionality in Power Platform (at that time Dynamics 365 Online) to a friend at Microsoft who I will not name. And my simple answer is yes, we do need to delete records like a Ferrari, for many reasons. I will discuss why in this article, and I have for that reason also created an idea on the Power Apps Community site on this subject; I hope you agree with me and vote for it! You will find it at the link below.
Click here to go to the idea article and vote for faster bulk deletion
So, why is fast bulk deletion important? I would say there are several reasons, and I will walk through the ones I have thought of. If you have any others, please drop a comment.
- Keep your data in check – remove unnecessary data
- GDPR and other compliance and legal issues
- Power Platform growing into Citizen developer platform
- Entitlements effectively block using external tools
Keeping data in check
For larger organizations, especially those with many integrated modules and systems, many running Flows, workflows, Customer Voice surveys, etc., the system will generate a lot of data, especially in a B2C scenario. A few of these have built-in features that automatically remove old logs, but most don’t, and we as admins and system caretakers (isn’t that a fancy title!) need to tend to this, typically by setting up jobs that clean old data. I would recommend looking at the PPAC statistics for which tables are the largest, making a habit of doing so at regular intervals, and downloading the numbers. That way you can see the trends over time. A suggestion for an addon to the CoE Starter Kit would be a trend analysis of all tables, with growth numbers per week for each table, warnings for quickly growing tables, and a prognosis; a rough sketch of what that could look like follows below.
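As a rough sketch of that trend analysis, assuming you download the capacity statistics from PPAC to CSV every week (the file names and column layout below are just assumptions):

```python
import csv

def load_sizes(path: str) -> dict[str, float]:
    """Assumed layout: one row per table with columns 'table' and 'size_mb'."""
    with open(path, newline="", encoding="utf-8") as f:
        return {row["table"]: float(row["size_mb"]) for row in csv.DictReader(f)}

last_week = load_sizes("capacity_w10.csv")   # hypothetical weekly exports
this_week = load_sizes("capacity_w11.csv")

for table, size in sorted(this_week.items(), key=lambda kv: -kv[1]):
    previous = last_week.get(table, 0.0)
    growth = size - previous
    pct = (growth / previous * 100) if previous else float("inf")
    warning = "  <-- growing fast" if previous and pct > 20 else ""
    print(f"{table:40s} {size:10.1f} MB  (+{growth:.1f} MB, {pct:.0f}%){warning}")
```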
However, as instances grow beyond 50-100 GB in size (of structured data), it soon becomes too much to handle the data with bulk deletion. Some tables might still be manageable this way, but in general the performance, when I have tried to measure it (albeit a few years ago), was around 1-3 records per second. A B2C customer of mine, for whom I wanted to remove the Voice of the Customer solution, which had been used a lot, had over 50 million survey invites. It is not possible to remove the solution without first removing the data, and if we were to use Bulk Delete and put it on crack so it got to 10 records per second, it would still take around 2 months. I did it with SSIS/Kingswaysoft instead and it took a few days. If Bulk Delete could reach around 200 records/second, it would take a little less than 3 days.
I have also noted that when trying to bulk delete very large datasets, Bulk Delete simply fails; I think the FetchXML query might hit a SQL timeout or something like that, but I am not exactly sure what happens. Since the same thing works with Kingswaysoft, I don’t know what the difference might be.
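For reference, this is roughly what such an external cleanup job boils down to, and why it eats API requests: every page retrieved is one request and every delete is one more, so 50 million rows means at least 50 million requests. A minimal sketch against the Web API (the table name and filter are illustrative only):

```python
import requests

ENV_URL = "https://yourorg.crm4.dynamics.com"   # placeholder environment
HEADERS = {
    "Authorization": "Bearer <token>",
    "OData-MaxVersion": "4.0",
    "OData-Version": "4.0",
    "Prefer": "odata.maxpagesize=5000",
}

# Illustrative example: old survey invites created before 2019.
query = (f"{ENV_URL}/api/data/v9.2/msdyn_surveyinvites"
         "?$select=msdyn_surveyinviteid&$filter=createdon lt 2019-01-01")

deleted = 0
while True:
    page = requests.get(query, headers=HEADERS).json()      # 1 API request per page
    rows = page.get("value", [])
    if not rows:
        break
    for row in rows:
        rid = row["msdyn_surveyinviteid"]
        requests.delete(f"{ENV_URL}/api/data/v9.2/msdyn_surveyinvites({rid})",
                        headers=HEADERS)                     # 1 API request per row
        deleted += 1

print(f"Deleted {deleted} rows, spending at least as many API requests")
```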
GDPR and other compliancy and legal issues
As GDPR and similar compliance regulations have come into play in many countries around the world, it has become ever more important to strictly follow these detailed rules. They might look simple from a PowerPoint, C-level perspective, but when you dig down to the detailed level, where they actually need to be implemented, things are seldom as simple as in a PowerPoint. Complying with the right to be forgotten, for instance, often means deleting large volumes of records within a set time frame, and that requires deletion that actually performs.
Power Platform growing into Citizen developer platform
As the Power Platform grows from being just the platform on which Dynamics 365 is delivered to being a huge platform for digitalizing entire organizations, with almost 100% user saturation, more and more of those organizations will start using Dataverse. The amount of data stored in Dataverse will hence grow massively, and an effective tool to manage this data becomes all the more important. It is probably important to such a degree that Bulk Delete cannot even scratch the surface of what we need to be able to do from a data management perspective, as data will keep growing by leaps and bounds and admins will need to manage not only Flows and Apps but also data, in both size and content.
Entitlements effectively block using external tools
The soon-to-be-enforced entitlements, as mentioned in my previous post, Entitlements are not throttling | Powerplatform.se, also effectively stop the use of external tools like SSIS/Kingswaysoft for deleting unwanted data. One of the customers I am working with generates between 10-20 M API requests PER DAY, and the bulk of these come from deletion jobs or other maintenance jobs trying to keep the instances in check. With the new entitlement charges there is no way this can continue, but the customer is caught between a rock and a hard place: either the data grows by leaps and bounds or the API calls become a huge cost, and there is no easy way to handle it. What advice am I to give the customer? I would think the most reasonable thing would be for the platform to provide the tools to maintain the data without incurring those costs. Whether that is Bulk Delete or some other, more elaborate feature is up to the product team, but I do think they should hold off on activating the entitlements until there is a good alternative for managing an instance's data within the platform (without generating API requests).
What else is missing?
Bulk deletion is not only not performant enough, it also lacks the effective filtering logic that is required for more complex queries. For some customers I have had to construct rather elaborate SSIS scripts which start with a complex FetchXML query and then filter the data through several Cache Transforms, for instance against GDPR consents and similar, to be able to get the final list. I must admit that I haven’t tried using the new T-SQL endpoint for this; if it could handle full T-SQL complexity, and were available in Bulk Delete or Kingswaysoft, it would make querying considerably more powerful.
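A minimal sketch of the kind of multi-step filtering I mean, done client-side against the Web API instead of with Cache Transforms (the consent table and column names are made up for the example): fetch the deletion candidates, fetch the blocking consents, and only keep candidates without a consent.

```python
import requests

ENV_URL = "https://yourorg.crm4.dynamics.com"   # placeholder
HEADERS = {
    "Authorization": "Bearer <token>",
    "OData-MaxVersion": "4.0",
    "OData-Version": "4.0",
}

def fetch_all(url: str) -> list[dict]:
    """Follow OData paging and return all rows."""
    rows: list[dict] = []
    while url:
        page = requests.get(url, headers=HEADERS).json()
        rows += page.get("value", [])
        url = page.get("@odata.nextLink")
    return rows

# Step 1: candidate contacts, e.g. not modified for years (illustrative filter).
candidates = fetch_all(f"{ENV_URL}/api/data/v9.2/contacts"
                       "?$select=contactid&$filter=modifiedon lt 2018-01-01")

# Step 2: contacts with an active consent record (hypothetical new_consent table).
consented = {
    row["_new_contactid_value"]
    for row in fetch_all(f"{ENV_URL}/api/data/v9.2/new_consents"
                         "?$select=_new_contactid_value&$filter=statecode eq 0")
}

# Step 3: the final delete list is the candidates minus the consented contacts.
to_delete = [c["contactid"] for c in candidates if c["contactid"] not in consented]
print(f"{len(to_delete)} contacts would go to the delete job")
```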
Should a five user organization be entitled to the same amount of compute as a 5 000 user organization?
Entitlements are the limits that Microsoft have set on the platform, based on which type of license each user has. This is not the same as the API Service limits, which are much more liberal. The entitlements have not yet been fully enforced, as the reporting capabilities of the platform have not been fully rolled out yet. But they will be. With this blog post I attempt to give my perspective on entitlements in the Power Platform and Dynamics 365 (the CRM part).
My previous post was about the API Service limits, which are commonly referred to as the throttling limits of the platform. The entitlement limits have their own part of the Microsoft docs (and here) that goes into them a bit deeper. I’d first like to go into why there are two different “protections” or limitations.
The API Service limits are there to protect the platform from noisy neighbours. Some of us who have been around since the earlier days of Dynamics 365/CRM Online remember that performance used to be rather shaky. This was often because some other instance, hosted on the same hardware as yours, was being slammed with massive amounts of requests, for example during a migration. To make sure this “noisy neighbour” problem doesn’t occur, the API limits were put in place, and since then things have been a lot better, so they do seem to work.
The entitlements are there for another reason. Let’s say you buy two (2) Dynamics 365 Sales users and then integrate a custom-built front end for B2C purposes using one of those users (or an app user) and, still within the API Service limits, hammer the APIs from morning to night with an amazing number of requests. The B2C aspect would be covered from a licensing perspective by what was previously called the “external connector” license, which is nowadays included in the normal license. However, the amount of compute that the instance is using is way above what you are paying for. This is the reason Microsoft have created the entitlements, as far as I know anyway. And I think it only makes sense that there is some kind of reasonable proportionality there.
To quote the Microsoft docs page: “These limits represent the number of requests users are entitled to make each day. The allocated limit depends on the type of license assigned to each user.“
What is a request?
The first question is then: what is a request? Previously we were told that a batch request (ExecuteMultiple) was one (1) request, but that has since changed and it is now counted as all its subparts. I would even think that a batch request has the extra overhead of the batch itself; hence, a batch request with 10 creates will actually be counted as 11 requests: 1 for the batch and 10 for the creates. The exact definitions are not disclosed, but we get a rather good description from the docs in this paragraph, where I have highlighted some interesting parts:
“For Dataverse, API requests include all data operations that interact with table rows where rows are created, retrieved, updated, or deleted (CRUD). Special operations such as share and assign are included because they are considered updates. These requests can be from any client or application and using any endpoint. These include, but are not limited to, operations performed by plug-ins, async workflows, custom controls, and $batch (ExecuteMultiple) operations. There are a small set of system internal operations that are excluded, like login, sign out, and system metadata operations.”
The important takeaway here is hence that you cannot create a workaround by using a plugin and the internal context pseudo-API to do the calls, as these are counted as well. The difference might be that they are done in the context of a specific user, and that user has a rather large entitlement, which might hence “flatten the curve” so to speak. An interesting aspect, though, is the exception to this rule:
“Power Platform API request allocations include use of Power Automate, AI Builder, and Connector APIs. All requests through a connector that result in a Dataverse request will represent 1 Power Platform request.”
This strongly indicates that Microsoft want us to use the Power Platform tools and that using these should at least not incur additional costs. There are, however, still some inconsistencies in this area that I really hope they fix, such as:
- Microsoft supplied integrations in ADF
- Integrations to Dynamics 365 Finance & Operations
- Dynamics 365 Business Central
- Exports to ADLS
- Data Export Service
The latter two can be really heavy on the APIs if you have an enterprise system or a B2C system. I work with a customer that currently has a database of >400 GB and uses the Data Export Service; the number of Data Export Service notifications just for contacts over a year often runs into the hundreds of millions.
Other areas which are not mentioned, but which I think are included, are add-on first-party apps like Customer Insights (Sales Insights), which actually uses ADLS in the background (not that you can access it). I have heard stories of support tickets where the API Service Protection ceiling was hit and it turned out that Sales Insights was causing it, which would indicate that these requests are indeed counted. I think the intention is to include all of these so that the license for them also covers the API entitlements. I just wish they would fix the gaps, as customers are being affected.
Entitlement telemetry might not be the same as API Service protection telemetry
That actually brings up another interesting aspect. The measurements used for the API Service Protection are probably NOT the same as the measurements used for entitlements, but this is a personal hunch and not based on any facts. It is mainly based on the assumption that the areas excluded from the entitlement measurements above probably are not excluded from the API Service Protection.
Another definition of request!?
On this page there is another definition of what a request is, which differs from the one above. I believe it is older than the one mentioned above, as it uses the term “CDS”, which has now been replaced by Dataverse. I am not sure though, as this page’s last change is dated February 2, 2021, while the other is dated March 5, 2020. The main difference is that this one does not make the exception mentioned above; hence every call through a connector, and every successful or failed call in Power Automate, would be counted as one request. Hopefully Microsoft will clear this up soon.
Entitlements per user
At this link you can find the specific entitlements per license. They are all measured over a 24 h period and range from 20 000 for the full enterprise versions of Dynamics 365 down to the Power Apps per app plan, which gets 1 000 requests.
Entitlements for non-licensed users, which will mainly be application registrations/application users, are fixed per tenant based on the highest license purchased on the tenant. This means the following pooled, included non-licensed entitlements:
- 1 Sales Enterprise -> 100 000
- 1 Sales Professional -> 50 000
- 1 Power Apps license -> 25 000
- 1 Sales Enterprise, 1000 Power Apps -> 100 000
- 10 000 Sales Enterprise, 10 000 Power Apps -> 100 000
The important note here is that this does not scale at all; it is fixed. And if you plan to do some integrations with a Power Apps-only tenant, you’d be wise to buy at least one Dynamics 365 Enterprise license, just to get the non-licensed user entitlements, as Sales Enterprise is around $95 and each additional 10 000 requests is $50, which means that the saving when getting to 100 000 calls / 24h is:
$50 * (100 000 - 20 000) / 10 000 - $95 = $400 - $95 = $305/month
Buying extra capacity
It is also possible to buy extra API capacity. You can read more about this in the Licensing Guide for the Power Platform. I am not able to find a current price for this at this time, but the list price was previously set at $50 (per 10 000 requests per 24h). These are then allocated to the users as you wish.
“Users will not be blocked from using apps for occasional and reasonable overages at this point of time.“
What will happen when, or if, you overshoot? A very important question. Most organizations will do so at some time, most probably during migration of data from old systems. The statement from Microsoft above, especially the highlighted “at this point of time”, is rather ominous. It does indicate that at some point the hammer will come down. But for now it won’t: admins will be harassed with emails about overshooting, and just as with overshooting data capacity, Microsoft might start by blocking some features while you are overshooting. It is mentioned in one of the FAQ articles that after the transition period they will start blocking. So that will become a real fact, unless they change their mind on it.
My very strong advice is hence that all organizations that are not compliant need to start looking at this as soon as possible. I have some tips on what you can do further down in this article. Please refer to these and feel free to leave a comment if you have questions on the subject not answered here.
There are many ISVs which export rather large amounts of data. The first that come to mind are the marketing automation products like Adobe Marketing, Click Dimensions, Dot Digital and more. These all synchronize at least contacts, marketing lists and marketing list members, which for larger installations can be quite large datasets. I think it would be advantageous if these ISVs could include the API entitlements that are required, or if they were billed by Microsoft and in turn billed the customer with a surcharge. At the very least Microsoft have to take ISVs into the equation here, as they are an essential part of the ecosystem, especially from the customer perspective.
Tips on how to handle future entitlement enforcement
- Start by using the PPAC to get an overview of how your situation looks even though you might not get an exact picture.
- Consider the overhead of batching. There can be performance advantages to batching, as mentioned in my previous article, but there is no entitlement saving: each operation inside a batch still counts as a request, plus one for the batch itself.
- Consider “outsourcing” large datasets to ADLS – although the ADLS export also uses API calls.
- Overshooting might not be a problem if it is short term – for now.
- Consider using official connectors or Power Automate instead (although that might incur costs of its own).
- If you are building Power Apps license-based solutions and have heavy integrations, buy one Dynamics 365 Enterprise license.
- If possible, impersonate the data load across all the users. This can be done with plugins and synchronous workflows, for instance. One pattern is staging tables in Dataverse where the owner is set on each row, and a plugin is then triggered that slices the row into many pieces, running as the owner of the import record. I am not sure whether impersonation using the API will have any effect on this; that needs to be investigated. If it can be used to spread the load, that would be a good pattern to use.
- Refactor inefficient code. Depending on the implementation, maybe increase the use of caching or other techniques to reduce the number of requests. Make sure you have skilled Power Platform/Dynamics 365 developers doing the work, as knowing how to do this is very particular to this platform.
Microsoft representatives, locally in Sweden anyway, are telling our customers and potential customers that they need not worry about this. I find that message somewhat at odds with what I read here. On the other hand, I think this will be a very rough change for many organizations. If your organization will be very negatively affected by this, and you feel that you are still paying “fairly” for your part, then I suggest you contact Microsoft and describe your business scenario in detail. If you need help with who to contact, you can always start with the people who have written the docs articles, whom you can ask to forward your feedback to the right people, use your local user group, or ask a local MVP for help, as they often have contacts directly with the product group (and many other experts do too).
Good luck and do leave a comment or share this if you like it!
Some people might have heard of an industry best practice that you should never have custom columns (fields) on the systemuser table (entity) in Dataverse. Is this true, and why? This article is based on my understanding of how Dataverse works internally, and hence what you need to think about when designing your application so that you don’t unintentionally destroy your environment’s performance. In short, be careful about adding custom columns to systemuser, and if you do, only add columns that hold static data, i.e. data that doesn’t change often. Let me describe this in more detail.
First of all, I would like to give credit for a lot of this to my friend and former Business Applications MVP Adam Vero, who described it in detail for me. I have also discussed it with other people and have since had it confirmed, but I have not actually seen it documented as such, which is why it might not be fully official. I do, however, not see any problem with people understanding this, rather the opposite.
Dataverse is an application platform with security built in as an integral part; security roles, system users, teams and business units form the core pieces of the security model. As the system often needs to query data from these four tables, it has a built-in “caching” functionality that, per environment, loads these four tables and precalculates them into an in-memory table for easy and fast access. This is then kept in memory for as long as the data in these four tables stays static, in other words as long as nothing is changed: no updates, no creates, no deletes.
What could then happen if you add a column to the systemuser table? Well, that depends. If it is a column that you set when the user is created and then never change, that isn’t a problem, as it wouldn’t affect the precalculated in-memory table. However, consider a column whose data is constantly being changed: for instance, you add a column called “activities last 24h” and then create a Flow that increments it every time an email, appointment, etc. is created, and resets it every night for every user. Then, every time this writes to any user, the precalculated in-memory table will be flushed and recalculated before it can be used again, causing a severe performance hit that can be very hard to troubleshoot.
How would you then build a solution for the “activities last 24h” requirement described above? Well, I would probably create a related entity called userstatistics with a relationship to systemuser. In this case it could even be smarter to have a 1:N relationship to this other entity, as you could then have many userstatistics rows per user and measure differences in activities day by day.
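As a sketch, logging to such a related row instead of updating systemuser could look like this via the Web API; the new_userstatistic table and its columns are hypothetical names for the design above, and the upsert/increment logic for an existing daily row is left out.

```python
import requests
from datetime import date

ENV_URL = "https://yourorg.crm4.dynamics.com"   # placeholder
HEADERS = {
    "Authorization": "Bearer <token>",
    "Content-Type": "application/json",
    "OData-MaxVersion": "4.0",
    "OData-Version": "4.0",
}

def log_activity(systemuserid: str) -> None:
    """Write today's counter to a related row; systemuser itself is never touched,
    so the precalculated security cache is left alone."""
    payload = {
        "new_name": f"{systemuserid} {date.today().isoformat()}",  # hypothetical columns
        "new_activitycount": 1,
        # N:1 lookup from the hypothetical new_userstatistic table to systemuser
        "new_UserId@odata.bind": f"/systemusers({systemuserid})",
    }
    requests.post(f"{ENV_URL}/api/data/v9.2/new_userstatistics",
                  json=payload, headers=HEADERS).raise_for_status()
```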
But wouldn’t the NLBs (Network Load Balancers) make this irrelevant, as each environment is hosted together with many others? Well, due to NDA I cannot talk about the details of how the NLBs actually work for the online environments, but I can say this: no, it is still relevant. The NLBs are there for performance reasons, but they do not make this behavior go away.
As for teams, it is only the owner teams that count in this equation; access teams are only used for sharing or other types of grouping and are hence never part of this precalculation.
And the savvy person will then of course realize that the multiplied size of:
systemusers x owner teams x business units x security roles
does make up the size of this precalculated table, and for large implementations this can give an indication of where performance starts to matter, as every time a user does not have organization-level privilege, the system has to go through the entire table to check what is allowed. And then of course there is the POA, but that table is a story for another day and another article.
Just a final word: the platform is constantly shifting, and even though this was true and probably still is, there might be changes going on, or that have already happened, that I am unaware of and that change how this works. If I hear of any, I will let you know.
Dataverse brings new terms for picklists, fields and more. This has been described in detail by many others in the community, like Jukka Niiranen. However, Microsoft are saying that there will be no new API and that all APIs will stay the same, so this article is just going to discuss the subject briefly.
First of all, they will, of course, do exactly what they are saying. The current Web API will be available for many years to come; we have seen the old 2011 SOAP API still be available and used a lot, even today, often because there isn’t full functional parity between the APIs.
Some tools, like Kingswaysoft, allow you to seamlessly choose between the APIs, and I have asked Daniel Cai, the owner and head developer/architect at Kingswaysoft, which API I should use. I don’t know if he has changed his recommendation, but last year he still recommended, if my memory hasn’t failed me, the old SOAP API due to stability and functionality issues. I will let them describe their thoughts on this on their own blog, if they want to.
However, I think this is rather telling of how Microsoft act and will act regarding an older API: it will be maintained and kept alive for many years.
That brings us to the interesting question of whether there will be a new API based on these new terms. I frankly do not know, but my deduction says a definite yes.
I think the easiest way of seeing this is to falsify it and extrapolate. In other words, what would happen if they didn’t create a new API, and we looked at the platform as a whole in 5-7 years without knowing the history? It would look totally crazy: a platform with one set of terms in the UI and another set of terms in the APIs. It would make documentation really complex, as there would need to be translations between the two sets of concepts everywhere, and the bar for getting developers into the platform would be higher than necessary.
Hence the only reasonable deduction is that they will make a new API with the new terms. It will probably be released in six months or so, but the current Web API will continue to work for many years to come.
These are my personal thoughts and are not based on any information from Microsoft, despite the fact that I am an MVP. I urge you to make your own deductions, and I think you will find that this seems reasonable. Please share your thoughts, disagreements or otherwise, in the comments section below.