Impersonation – the Entitled future

As I mentioned in my previous articles, I am trying to investigate the details of how the entitlements and API Service Protections work, and how the entitlements are planned to be rolled out. I had a very interesting call with some of the nice people on the product team last week, which shed some more light on the entitlement issue and on how they suggest the API is best used. The suggested method is that the API request load be spread out over the different users in the instance/tenant using impersonation. I will walk through what this means, and what I think about it, below.

First, if you have not read my previous post on entitlements, I suggest you do that first. It describes what entitlements are compared to the API Service Protection. I still see a lot of people mixing these up, and that is not strange, but they are two different things and we need to keep track of which one we are talking about.

As mentioned in that article, the point of enacting the entitlements, whenever that happens (the timing is still a bit unclear), is to make the compute consumed by a small organization proportionate to that consumed by a large one. So, let us go back to the actual per-user licenses and look at an example.

Let us say we have a 5 000-user Sales Enterprise org. That means we get:

  • 5 000 users who each have 20 000 API request entitlements.
  • 100 000 API Requests for non-licensed users.

Compare this to a 5-user Sales Enterprise org, which will have:

  • 5 users who each have 20 000 API request entitlements.
  • 100 000 API Requests for non-licensed users.

Both these are totally independent of how many instances the first or the second org has.

The first observation is of course that the 100k API requests for non-licensed users do not scale at all with the size of the organization or the number of users. How does that square with the goal that a large org should have more compute than a small one? The second observation is that 20 000 API requests, which the normal UI also consumes, is a very large number. You would have to be one busy salesperson to generate 20 000 API requests manually in 24 hours; so busy that I am tempted to say it is virtually impossible to break the limit unless you have very heavy automations running under your account. This was also what the Microsoft rep I talked to mentioned: this large number is meant to be used on a per-user basis. Hence the natural question was, if we use impersonation in the API, will the entitlements honor that? The answer was unequivocally: yes.

Hence, this is the clear answer on how we need to create future integrations. We need to spread the load using impersonation over many of the users in the system.

Microsoft docs on how to do impersonation
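
If you are writing .NET code against CDS, impersonation itself is close to a one-liner on the connection. Here is a minimal sketch, assuming the classic SDK (Microsoft.Xrm.Tooling.Connector / Microsoft.Xrm.Sdk); the connection string, account values and user id are placeholders of mine, not anything from the product team:

```csharp
// A minimal impersonation sketch, assuming the classic CDS .NET SDK
// (Microsoft.Xrm.Tooling.Connector). Connection string and user id are
// placeholders.
using System;
using Microsoft.Xrm.Sdk;
using Microsoft.Xrm.Tooling.Connector;

public static class ImpersonationExample
{
    public static Guid CreateAsUser(string connectionString, Guid impersonatedUserId)
    {
        var client = new CrmServiceClient(connectionString);

        // Every request on this client now executes on behalf of this user,
        // so the API request should count against that user's entitlement.
        client.CallerId = impersonatedUserId;

        var account = new Entity("account");
        account["name"] = "Impersonation test";
        return client.Create(account);
    }
}
```

For the Web API, the equivalent is sending the MSCRMCallerID header (or CallerObjectId with an Azure AD object id) on each request. In both cases, the account making the call needs the "Act on Behalf of Another User" privilege (prvActOnBehalfOfAnotherUser).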

If we do this the right way, it would probably be possible for most organizations to, over time, rework their integrations to fit this model.

However, it will not be easy, as we need tight control of the privileges of all the users. Let me give you an example from a customer I work with:

They are an online travel agency and have people working at the destinations with very restricted privileges. A lot of bookings (orders) are integrated from the booking systems; these should hence be spread out over many users instead of the single application user being used today. There is no natural user to direct the bookings to, as it is a B2C business and no person at the travel agency “owns” these customers per se, so the load needs to be distributed in a more randomized fashion. So, let us say we have these users:

  1. John Smith – System Admin (Full access)
  2. John Doe – Power User (can create orders but not refunds)
  3. John Surf Dude – Destination Specialist (can view but not create orders, cannot even read refunds)

When rebuilding the integration, we can use the users John Smith and John Doe but not John Surf Dude, and the only generic way of knowing this is to check what we want to do and compare that to the privileges of each user, to get a shortlist of users that can be used for the integration.
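
One way of building that shortlist programmatically is to resolve the privilege we need (for example prvCreateSalesOrder for creating orders) and test each candidate user against it. A sketch, again assuming the classic .NET SDK; the helper and its names are mine:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using Microsoft.Crm.Sdk.Messages;
using Microsoft.Xrm.Sdk;
using Microsoft.Xrm.Sdk.Query;

public static class PrivilegeShortlist
{
    // Returns the subset of candidate users holding the named privilege,
    // e.g. "prvCreateSalesOrder".
    public static List<Guid> UsersWithPrivilege(
        IOrganizationService service,
        IEnumerable<Guid> candidateUserIds,
        string privilegeName)
    {
        // Resolve the privilege name to its id via the privilege entity.
        var query = new QueryExpression("privilege")
        {
            ColumnSet = new ColumnSet("privilegeid")
        };
        query.Criteria.AddCondition("name", ConditionOperator.Equal, privilegeName);
        Guid privilegeId = service.RetrieveMultiple(query).Entities.Single().Id;

        var shortlist = new List<Guid>();
        foreach (Guid userId in candidateUserIds)
        {
            // Accumulated privileges from all of the user's roles and teams.
            var response = (RetrieveUserPrivilegesResponse)service.Execute(
                new RetrieveUserPrivilegesRequest { UserId = userId });

            if (response.RolePrivileges.Any(p => p.PrivilegeId == privilegeId))
                shortlist.Add(userId);
        }
        return shortlist;
    }
}
```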

However, we do not want to use a user that is close to 20k API requests for the day, so we might need to query the current API request entitlement usage per user, so that we can filter the shortlist down to an even shorter list before choosing which users to impersonate.
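
As far as I know, there is no API today that tells you how much of a user's entitlement has already been consumed, so until there is one, the integration itself has to keep score of the calls it makes. A hedged sketch of a round-robin pool with a self-imposed per-user ceiling; the 15 000 default is an arbitrary safety margin below the 20k entitlement, and resetting the counters every 24 hours is left out for brevity:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Round-robins impersonation over a shortlist of users, skipping users
// approaching a self-imposed daily ceiling. Note: this only counts the
// calls this integration makes; UI usage by the same users is invisible.
public class ImpersonationPool
{
    private readonly List<Guid> _shortlist;
    private readonly Dictionary<Guid, int> _callsToday = new Dictionary<Guid, int>();
    private readonly int _dailyCeiling;
    private int _next;

    public ImpersonationPool(IEnumerable<Guid> shortlist, int dailyCeiling = 15000)
    {
        _shortlist = shortlist.ToList();
        _dailyCeiling = dailyCeiling; // stay well below the 20k entitlement
    }

    public Guid NextCaller()
    {
        for (int i = 0; i < _shortlist.Count; i++)
        {
            Guid candidate = _shortlist[_next];
            _next = (_next + 1) % _shortlist.Count;

            _callsToday.TryGetValue(candidate, out int used);
            if (used < _dailyCeiling)
            {
                _callsToday[candidate] = used + 1;
                return candidate;
            }
        }
        throw new InvalidOperationException(
            "All impersonation candidates have reached their daily ceiling.");
    }
}
```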

Analysis

  • A way forward. I think this can be used, although there are some tricks to it. For my customer, we might be able to cut a significant number of API calls this way, which will make a huge difference compared to not using this technique.
  • Impersonation not always viable – as in the example above, when there is no obvious owner to link to, we need to figure out some other logic for how to spread the API entitlement load. And then things start to become tricky.
  • Impersonation is complex.
    • More complex dependencies on security model
      As mentioned above, trying to execute an action as a user that does not have the correct privileges won’t work, so we need to know that first. And setting everyone as System Administrator just will not work.
    • Logical user or just random users – do we map the users to some logical connection in the other system, or just randomize the load? A logical user is probably preferable, but it will probably not be a very common pattern.
  • Integration often system-to-system not user-to-user
    • Integrations are more often done on a system-to-system basis than a user-to-user basis. Looking at CRM–ERP integrations, for instance, the user bases of the two systems seldom overlap except for a few users.
  • Takes time to refactor code to handle impersonation – There are many organizations out there with numerous complex integrations, and changing integrations at this level will require significant work. The question is whether there is time to complete this work before the entitlement feature goes GA.
  • Strange audit trail – if we use randomized users to update or create data in Dataverse, that will undoubtedly create very strange audit trails and created by/modified by fields. This needs to be taken into consideration.
  • Power Apps per-app users have very few requests – Not all licenses come with 20k API requests per 24h. The Power Apps per-app license has only a 1 000 API request entitlement per 24h, which can run out just by using the system heavily. So do consider the API entitlements when looking at the licenses.
  • Still not GA – Entitlements have still not gone GA. Hence the best time to let Microsoft know what you think is good or bad about this is now. But do be civil; there will be some feature like this that handles fairness management of compute consumption. Contact Microsoft through your local user group, your local MVP, via the comments below, or send me a message on LinkedIn and I will put you in contact with the right people. You can also submit an idea to the idea portal.
  • There might be a point to binding all entitlements to users, if, in the future, overshooting would not only result in angry emails but in service degradation or shut-off for that user. Imagine creative citizen devs unknowingly creating some infinitely looping Flow or massively recursive logic that causes a lot of requests. This approach would then just block that user, not the entire tenant, significantly reducing the severity of the problem.

Suggestions

Personally, I think this method is just way too complex. I think simply pooling all the API entitlements at the tenant level would be fair, and then deducting all usage from that pool. I think Microsoft could skip the 100 000 for non-licensed users, for simplicity. Based on the examples above, that would make:

5 000 Sales Enterprise users

  •  5 000 users who each have 20 000 API request entitlements.

Total API Entitlement for the Tenant: 100 M / 24 h

5 Sales Enterprise users

  • 5 users who each have 20 000 API request entitlements.

Total API Entitlement for the Tenant: 100 K / 24 h

And all users, and all non-licensed users use from the same pool.

As for the potential problem of creative users blocking the entire tenant, I would suggest adding a “per user” API request limit, which can be changed by the admins but by default is set to exactly the same as the entitlements. That would allow admins to reduce the limit to, say, 10k for enterprise users, to ensure that server-to-server integrations are still enabled in a proper and entitled way.

I think this would align with Microsoft’s goals, make it easy for customers to understand, and spare us from rewriting tons of code and making strange workarounds. But maybe there is something I am missing. If so, and you see it, please leave a comment!

Why I love Kingswaysoft

The reason I often recommend Kingswaysoft over other methods of data migration, or even sometimes data integration, is rather simple: it can do what others can’t. Not even Microsoft. And it can do it fast.

So, have I been paid by Kingswaysoft to write this? No, not a penny. I haven’t even been given a free license, even though they might give me one if I asked. I think it is only fair that I explain why I am such a strong proponent of this product, and if anyone disagrees, please feel free to drop a comment below.

API knowledge

Kingswaysoft know how the APIs of the Power Platform/CDS/Dataflex (called CDS below) work, sometimes even better than Microsoft themselves; I have seen them give feedback, in scary detail, on what is missing from specific APIs to reach feature parity. Kingswaysoft also has a blog with detailed recommendations, plus recommendations built into the product, on how to maximize performance with the CDS API. They have also built in handling for throttling, multi-threading, batching and more, to make it all transparent. Looking at other integration products and connectors, I have never seen anything that comes close to this depth.

Datamodel and Dynamics knowledge

Dynamics 365 Sales, Customer Service and the other first-party apps have some very peculiar oddities that need to be taken into consideration, for example:

  • How to delete Marketing List members – by using the party member and list as key
  • How audit logs are handled
  • How Activities work, with activity pointer and activity parties
  • Setting CreatedOn/CreatedBy/ModifiedOn/ModifiedBy
  • Reading/Writing personal views (saved advanced finds)
  • Setting status values of some fringe entities

Even though many tools, like Power Automate, Data Flows, Azure Data Factory, etc. all seem to handle common tasks like creating or reading a contact or account rather well, it soon becomes a problem when you start looking at some of the areas above. And you often need to in a migration.
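
To make a couple of the oddities above concrete, here is a sketch, assuming the classic .NET SDK, of removing a marketing list member (a dedicated message keyed on list and member, not a plain delete) and of preserving CreatedOn on migrated records (which requires the "override created on" privilege); the names and values are placeholders:

```csharp
using System;
using Microsoft.Crm.Sdk.Messages;
using Microsoft.Xrm.Sdk;

public static class MigrationOddities
{
    public static void RemoveListMember(IOrganizationService service, Guid listId, Guid memberId)
    {
        // List membership has no row you can delete directly; it is removed
        // through a dedicated message keyed on the list and the member.
        service.Execute(new RemoveMemberListRequest { ListId = listId, EntityId = memberId });
    }

    public static Guid CreateWithOriginalCreatedOn(IOrganizationService service, DateTime originalCreatedOn)
    {
        var contact = new Entity("contact");
        contact["lastname"] = "Migrated contact";
        // CreatedOn cannot be written directly; setting overriddencreatedon
        // on create makes the platform show the original date instead.
        contact["overriddencreatedon"] = originalCreatedOn;
        return service.Create(contact);
    }
}
```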

It can use the power of SSIS

SQL Server Integration Services is a very powerful framework once you start getting your head around it. Yes, it has some quirks, but in general it can do some rather cool things and is good when you want to sequence many different tasks to reduce overall runtime. Using built-in features like Cache Transforms with memory storage, you can run filters with memory-based lookups over millions of records super fast, once everything is loaded.

…and extend it

And of course Kingswaysoft didn’t settle for just building connectors to Dynamics 365. They have a pack called the “Productivity Pack” that adds a lot of nice features to SSIS and makes your day a lot easier.

And if you have problems – great support

And if you ever run into problems, their support is great. We have identified bugs, and they have fixed them and sent us a special build just a day or two later.

Flip side

There is always a flip side. I think one is that SSIS isn’t always the most stable product. I don’t think Kingswaysoft is to blame for this, but it affects their product experience nonetheless.

Another obvious flip side is that this requires a skillset that is a bit “off the beaten track”; even though it isn’t super hard to learn the basics, becoming really proficient with SSIS takes time.

Finally

So, a product as good as this is probably super expensive, right? No, it isn’t. If you are running things from within Visual Studio, you don’t even need a license, but if you plan to run a migration or similar, I certainly recommend one anyway, to get access to support. And buy the Ultimate license to get access to the Productivity Pack and all the other connectors while you’re at it. It is worth it.

Just to be clear, I am not saying that the other products are bad. I am just saying that once you choose a product, you will start to descend the rabbit hole. If the product isn’t up to speed you will have a couple of options:

  • Back up and redo with another technology
  • Patch with another technology
  • Create some kind of workaround

None of these options is particularly palatable. Hence it is often tempting to choose a product that you know can do the job. And apart from coding, SSIS with Kingswaysoft will very seldom have any issues.

Team member licensing – hammer coming down!

The team member licensing option has been a subject of debate for quite some time. What can it actually be used for, and what not? I have heard “experts” suggest that it be used for integrations, but if you read the Dynamics 365 Licensing Guide, appendix A, you can read all about what team member licensing is and isn’t. Typical scenarios for team members are:

  • Read only users
  • Users that use only slim parts of the system, and not the “cool” First-Party-App features in Sales, Customer Service, Field Service etc.
  • Users that just track activities

There are some updates to the details of what a team member can and cannot do, and I think the most important is that a team member cannot CUD (Create, Update or Delete) accounts any more.

How urgent is this? Well, for new instances, this will be enforced as of April 1, 2020, but for existing instances it will be enforced on July 1, 2020. Hence, you still have some time if you have an existing org.

You can opt in to early access updates. It goes without saying that you shouldn’t do this in your production system, and probably not even in your dev/test environments, since as soon as you have opted in you won’t be able to deploy changes. So having an out-of-ALM environment where you can test this might be a good idea.

So, the BIG question is, “Are we compliant?” 

Well, there is actually a report that you can generate from the Power Platform Admin Center, and I have recorded a video below that shows how you can use it and collate the data with pivoting in Excel.

Top Table Usage in PPAC

Top Ten Table usage is back after being lost when Organizational Insights was discontinued. It is a bit tricky to find, so check out the video. It is an awesome tool when trying to reduce the size of large instances, which is especially important now that the price per GB is going up to $40/GB (subject to your license agreement).

API per user limits – The good, the bad and the ugly

(Updated) Microsoft recently released some throttling that has been causing a stir in the community. Especially since the latest throttle, the concurrency throttling, was not very openly announced, some partners and customers were hit rather hard by it, as it affected their ability to manage large data loads in the system.

Now Microsoft has announced another API-based limitation, this one based on users and the type of licenses they have. You can read about it here if you like. This article will discuss what this means, and my personal view of the good, the bad and the ugly of it.

First of all, we need to understand what it is. It is an API limit that will be set per user, based on the type of license the user is allocated. The highest tier is a Dynamics 365 app license, like Sales, Customer Service or similar, which will give you 20 000 requests per 24 hours. The lowest is a Power Apps per-app license, which will give you 1 000 requests per 24 hours. Note that these are connected to the user and not summed/aggregated at the instance level (although I would think that would be a good idea). Well, really, the lowest of them all are application users, non-interactive users or admin users that don’t use a license, as these will be allocated 0.

I have not seen any UI for this yet, so I don’t know how this will look, but what the page is saying is that API calls can be reallocated from normal users to application users/non-interactive users. (UPDATE – See the update at the bottom regarding this; thank you, observant readers!) I am not sure if it will also be possible to reallocate API calls between one normal user and another.

There will also be an additional SKU for buying 10 000 additional API calls per day that can be allocated to a user.

 

The Good

What is good about this, you might ask? Well, I think it is fair. Large customers pay a lot of money for their instances and usually use them a lot, with a lot of integrations. It is only fair that they are allowed to use the APIs more than a small customer who has created some super-duper application that blasts Dynamics with massive amounts of calls. The small customer can still do this; they just have to pay a bit extra for those API calls if they aren’t covering them with their users.
I also hope that this might enable Microsoft to relax the currently rather tight throttling on the APIs a bit.

According to the licensing documentation in general, existing customers will not be hit by this until October 2020, in other words more than a year from now. For now, this will hence probably only affect new customers.

The Bad

This implementation certainly has some bad parts. The most obvious is the too-stringent connection to users, which makes it weird. I don’t know how this will be managed in the UI, but let’s say we have an instance with 500 users, a mix of Sales Enterprise, Customer Service Professional and Team Member. We also have 10 application users that are used for Portals, Forms Pro and custom integrations to many other systems, each integration using a separate integration user to reduce the attack surface in the unlikely event of a hacker attack. So what we will need to do is first figure out how much API usage all the normal users generate (for instance via PCFs, Flows, plugins, workflows etc.) and how much the integration application users generate. Currently https://admin.powerplatform.microsoft.com does not give us this granularity. There are indications, but in this case one would need deeply granular data, preferably with trend analysis.

Another part of this that could be done better is the buying of additional API calls. Why not just adopt the method used in Azure? In other words, you pay as you go. With the current method, you have to know beforehand how much a particular user will use, and if you overshoot, the user will be shut down, causing unnecessary support costs for customers, partners and Microsoft.

I also wonder how this is practically going to be handled. Are admins going to go into each of the 500 user records, reduce the API calls allocated, and move them to application users? If an admin moves all of a user’s calls, which effectively will stop plugins, workflows, JavaScript with server calls etc., how will the error handling of that look?

The Ugly

What is really the difference between something bad and something ugly? I would say that something bad is a design decision that we might dislike or that might disadvantage customers; it requires some sort of conscious perspective. Ugly, on the other hand, covers the parts where, in this case, Microsoft just has forgotten to think about something, or neglected perspectives, in a way that causes issues for partners or customers. Based on this, I would say that the following are the ugly aspects of this:

Timing

Again Microsoft is rolling out a change with a rather short timeframe. They probably feel that a month or two of notice, via the article above, is notice enough, but they have to realize that many customers cannot act that fast. If you are a small customer with extensive use of Dynamics, for instance using Dynamics 365 in a B2C setting with a marketing automation integration, targeting millions of customers with sendouts and mirroring hits on your webpage into Dynamics all the time, this will cause some hefty API traffic. And your org might not be very big if you are totally e-commerce oriented.

Maybe only new customers, for now

Lastly, I really hope that it is true that the API limitation will not affect current customers; it is not very clear, and hence we are left in the dark again. If there is a problem with application users etc. not being able to log in, I hope Microsoft support will be ready for the storm that will hit them.

On the other hand, new customers might have tested the system and evaluated the costs, and are now faced with this. Not sure that will be optimal either; there is a risk of losing a customer or two there.

Communication

This is a rather drastic change and may be viewed as a “breaking change”, unless the one-year grace period mentioned in the licensing in general applies to it. No matter what, this should have been communicated very clearly, months ahead, to remove any kind of doubt from partners and customers: both via blogs and via emails to admins of organizations using application users/non-interactive users, as these should be easy to identify via telemetry. Currently no one knows exactly when this will hit them or their customers, or how they are to manage it.

 

This is generally very unclear. I shouldn’t have to write an article like this, speculating about what is or isn’t going to happen. If I, being an MVP, have problems figuring this out, customers are probably very much in the dark, both existing and new.

 

Conclusion

In conclusion, I think this is a good idea that got rushed. It should have been passed through a couple more hoops before being launched, to get the right feedback. The main things that I think Microsoft should change before rolling this out, which from my perspective still give the same effect, are:

  1. Aggregate all API calls that are counted to a per-instance level. That will make it easier to manage, remove the breaking change, and make it easier to understand.
  2. Enable admins to add a per-use, after-the-fact payment option (like Azure) for any additional API calls.

Whether this is going to be useful or not also depends very much on whether we can reallocate a lot of the API calls from users to the integration users. For instance, I have a B2C customer with 1M+ API calls per 24h, and if it will not be possible to take the sum of hundreds of users and allocate those to the application users we are using, then this will be a very hurtful change.

In the meantime, I do recommend that you keep a close eye on what is going on in this area, as it will most likely affect you if you are running any application accounts, which you probably are: Dynamics Portal, Forms Pro, Voice of the Customer and many more. If you go into the list of users and change the view to “Application users” (or whatever it might be called in your language) you will see the list. I think Microsoft will make some changes, or some announcements, before October 1. Let’s see.

Update 2019-09-04

There has been some chatter going around regarding this, so do note the comments below, which include interesting links and good thoughts. There are some additional points that need to be made. Instead of changing the original article, I will continue to add updates like these.

Normal UI usage will count

Initially I did not think that normal UI usage would count towards the API request limit. With “normal” in this case, as an old Dynamics 365/CRM geek, I of course mean a model-driven app, but the same also goes for canvas apps, or actually any use of the CDS whatsoever. What will happen when a user runs out of API requests will be interesting to see. How many requests the application uses of course depends a lot on what you do. If you press F12 in Chrome you can check the network traffic and see for yourself.

Batching will be your friend

From now on, using batching will not only be a general best practice but will also save you money. If you use tools like Kingswaysoft this is easy to configure, to make sure that you use large batches when, for instance, doing CUD calls. When writing code yourself, you will need to understand how to do this. Note that sometimes this will require entire rewrites of the code. I have seen programs off the shoulder of Orion that you wouldn’t believe, with tons of single queries instead of one single call, most often written by devs who have little or no experience writing code against Dynamics 365/CDS.
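
For SDK code, the standard way to batch is ExecuteMultipleRequest. A minimal sketch; 1 000 is the documented maximum number of requests per batch:

```csharp
using System.Collections.Generic;
using Microsoft.Xrm.Sdk;
using Microsoft.Xrm.Sdk.Messages;

public static class BatchingExample
{
    // Creates records in batches of up to 1 000, so a million single
    // creates become roughly a thousand billable requests instead.
    public static void CreateInBatches(IOrganizationService service, IEnumerable<Entity> records)
    {
        ExecuteMultipleRequest batch = NewBatch();
        foreach (Entity record in records)
        {
            batch.Requests.Add(new CreateRequest { Target = record });
            if (batch.Requests.Count == 1000) // platform maximum per batch
            {
                service.Execute(batch);
                batch = NewBatch();
            }
        }
        if (batch.Requests.Count > 0)
            service.Execute(batch);
    }

    private static ExecuteMultipleRequest NewBatch() => new ExecuteMultipleRequest
    {
        Requests = new OrganizationRequestCollection(),
        Settings = new ExecuteMultipleSettings { ContinueOnError = false, ReturnResponses = false }
    };
}
```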

Unclear if possible to move API-calls

As several people here and on Twitter have commented, it is probably incorrect to interpret that API calls can be moved from normal users to application users and non-interactive users. This will cause major headaches for some customers, which will be struck with lots of additional costs. Costs that are not very welcome, as the per-GB cost recently increased 800%, hurting especially the larger customers with massive integrations and extensive use of the system. I have, for instance, a customer that exceeds 1M requests per day, 365 days a year. This would require them to buy over 100 of the 10k API request add-on SKUs, despite the fact that their 500 users give them a total of over 5M requests per day, something they will not be using through the UI unless someone is drinking very large amounts of coffee. – NEW UPDATE: This was an incorrect interpretation. You cannot reallocate API calls from normal users.

The price is here

The price for the 10k/24h SKU will be $50/month. This means that for a customer like mine, with major integrations causing around 1M API calls per day, this would cost an additional $5 000 per month, or $60 000 per year. I sincerely hope they will relax the throttling to make it worth it. If/when they do, I will read my Machiavelli again.

 

Update 2019-09-05

First of all, I will write a new blog article on this when the dust settles and we know what is going on. Currently there are quite a lot of unknowns, and I wouldn’t be surprised if Microsoft announced a thing or two soon. I have been told that the FAQ will be updated in a couple of days.

Batching – again

There were some discussions on whether batching was actually going to be useful in this case or not. I have now had it confirmed that a batched request will be counted as one (1) call. This goes both for batched creates/updates/deletes and for queries returning multiple records (it would be very strange if a query counted per record, but I had to ask).

Data Export Service etc.

Data Export Service and other services that run under the system account will not count towards the API requests. This is good news, as it opens up for many users to use this method to offload the APIs for reads.

What is the competition up to?

I checked to see how SFDC is handling this, and as far as I can see they have a similar setup, as can be read here:

https://developer.salesforce.com/docs/atlas.en-us.salesforce_app_limits_cheatsheet.meta/salesforce_app_limits_cheatsheet/salesforce_app_limits_platform_api.htm

and here

https://support.geckoboard.com/hc/en-us/articles/216804218-I-ve-hit-my-Salesforce-API-request-limit

I am no expert on their licensing model, but I think it is good to know that this isn’t just a Power Platform thing. However, there are some distinct differences:

  1. API calls are not counted for normal browser/client usage; only “real” API calls count.
  2. They have real enforcement, blocking an entire instance/org if it overshoots.
  3. All per-user-license API calls are summed up at the org level.

Microsoft add-on apps will include requests

If you buy Dynamics Portals, it will include some additional API requests. The same goes for Forms Pro. Hence there should be some default API request assignment to the application users that are installed. I do wonder if it would be financially beneficial to piggyback on those application users. There is also currently no method for ISVs to bundle API requests into their product if they install an application user upon installation.

CSP / Distributor silence

We have still heard nothing about the 10k add-on SKU from any distributor, EA or CSP. It will be interesting to see if it reaches the entire distribution chain by October 1, when (new) customers will start being notified that they are in violation.