New Entitlement limits announced!

Yesterday at Ignite, new Entitlement limits were announced and the corresponding Microsoft Docs page was updated: Requests limits and allocations – Power Platform | Microsoft Docs
In general, I am very positive about these changes, as they correspond more closely to what was said to be the overall point of Entitlements.

First of all, the most important change is that the allocation for non-licensed users has been raised from 100k to 500k requests for all Dynamics 365 Enterprise and Professional licenses. On top of that, 5k requests are added per Dynamics USL, capped at 10M in total. Hence a small org with 5 Dynamics Pro users will get 500k + 5x5k = 525k requests per 24 h. A large org with 1,000 Enterprise users will get 500k + 1,000x5k = 5.5M. An even larger org with 10k Enterprise Sales users will be capped at 10M requests.
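
To make the arithmetic concrete, here is a minimal sketch of the accumulation logic as I read the announcement (the function name and structure are mine, nothing official):

```python
def request_entitlement(usl_count: int) -> int:
    """Requests per 24 h: 500k base for non-licensed users,
    plus 5k per Dynamics USL, capped at 10M."""
    BASE, PER_USL, CAP = 500_000, 5_000, 10_000_000
    return min(BASE + PER_USL * usl_count, CAP)

print(request_entitlement(5))       # 525,000    - small org, 5 Pro users
print(request_entitlement(1_000))   # 5,500,000  - 1,000 Enterprise users
print(request_entitlement(10_000))  # 10,000,000 - capped
```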

Normal, paid users have also had their limits changed a bit. An Enterprise or Pro license is entitled to 40k requests. Note that this does not include Team Member licenses, which are entitled to 6k.

The capacity add-on has also been changed to include 50k requests instead of the previous 10k. Whether the price is still $50 I don't know at this time; if it is, the effective price per request has been reduced to 20% of what it was. I will get back to this later.

This change is good, as it will probably mean that the majority of customers do not exceed their entitlement. There will probably still be a few larger customers or complex solutions that exceed it, and I suggest that you talk to your partner and your Microsoft account manager to try to arrange something.

There are still some issues that I think need addressing:

  • How can ISVs like ClickDimensions, which by nature are rather verbose, bundle requests?
  • Larger corporate/global tenants with multiple instances are still punished by this model and would benefit from splitting one large tenant into several smaller ones. But that makes things a lot more complex from an IT perspective, and isn't the point that it is great to keep them all in one?
  • Licensing is still rather complex from a capacity perspective, and that might scare customers. I have talked to customers that have chosen Salesforce for this reason alone.

There is more to be discussed regarding this, but I wanted to give my perspective as soon as I could and also shed some light on the changes. I will be back on the subject.

Incorrect API Optimization recommendations by Microsoft

On the Microsoft Docs page that discusses API service protections, there is, towards the end, a section with some recommendations. Some are great, like the recommendation to use many threads and to remove the affinity cookie. However, when I read it, I really bounced at the recommendation that batching shouldn't be used. That just didn't rhyme with my experience of doing heavy data loads to Dataverse. So I thought I might test whether it was true or not by creating a simple script in SSIS with Kingswaysoft. My results: using batching, compared to not using it, gives more than a 10x performance increase. Continue reading to understand how I tested this, along with some deeper analysis.
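
For context, "batching" here means packing many operations into a single request. In the Web API this is done with the OData $batch endpoint; Kingswaysoft handles all of this for you, but a hand-rolled sketch makes the mechanics visible (the org URL and token below are placeholders, and the helper is mine, not an official sample):

```python
import json
import requests

ORG = "https://yourorg.crm.dynamics.com"  # placeholder org URL
TOKEN = "<bearer token>"                  # acquired elsewhere, e.g. via MSAL

def create_contacts_in_one_batch(records):
    """Pack many create operations into a single OData $batch request,
    so one HTTP round trip carries a whole batch of records."""
    batch, changeset = "batch_1", "changeset_1"
    parts = []
    for i, rec in enumerate(records, start=1):
        parts.append(
            f"--{changeset}\r\n"
            "Content-Type: application/http\r\n"
            "Content-Transfer-Encoding: binary\r\n"
            f"Content-ID: {i}\r\n\r\n"
            f"POST {ORG}/api/data/v9.2/contacts HTTP/1.1\r\n"
            "Content-Type: application/json\r\n\r\n"
            f"{json.dumps(rec)}\r\n"
        )
    body = (
        f"--{batch}\r\n"
        f"Content-Type: multipart/mixed;boundary={changeset}\r\n\r\n"
        + "".join(parts)
        + f"--{changeset}--\r\n--{batch}--\r\n"
    )
    resp = requests.post(
        f"{ORG}/api/data/v9.2/$batch",
        headers={
            "Authorization": f"Bearer {TOKEN}",
            "Content-Type": f"multipart/mixed;boundary={batch}",
            "OData-MaxVersion": "4.0",
            "OData-Version": "4.0",
        },
        data=body,
    )
    resp.raise_for_status()
    return resp

# e.g. create_contacts_in_one_batch([{"lastname": f"Test {n}"} for n in range(20)])
```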

Parameters and Excel

The first thing I did was to create an Excel sheet for storing all the results. I really did have to think about the different parameters that could affect the result, and I chose the following columns:

Dataload – how many records. This needed to be large enough to make sure that the 5-minute throttling window was passed.

Operation – Different Dataverse operations take different amounts of time. For instance, creates are typically rather fast, but deletes, depending on the table, can be a lot slower, as the platform might execute cascading deletes based on one single delete. For instance, if you remove a contact with 100 tasks connected to it via the regarding relationship set to “parental” or “cascade delete”, it will actually remove all 100 tasks. If set to “remove link”, the platform has to make an update to each of the tasks, removing the link. There are also special operations like merge, which are rather complex.

Table – There is a large difference between the different tables. Some of the OOB tables have a lot of built-in logic, and really small non-activity custom tables can be a lot quicker to create, update or delete.

Threads – How many threads were used.

Batch – The size of the batches being used.

Duration / Duration (ms) – I input the duration as a normal time, and a calculated column converts it to the corresponding number of milliseconds.

Time per record (ms) – This is the duration in ms divided by the total number of records. During this first test, I always used 100,000 records as the data load, but it could be interesting in the future to see the differences between data loads, all else being equal. This is also the main output of this test.

Strategy – It is possible to use different strategies. In this first version I just ran everything at once, hence I called the strategy “All at once”. A different strategy might be “5 on, 5 off”, meaning that you design the script to run super fast for 5 minutes, the throttling window, then stop and do nothing for 5 minutes, and loop this. It is not always possible to use that kind of strategy, but for massive deletes of, for instance, marketing list members (which cannot be removed with bulk delete), it might be an option, as sketched below.
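
As a rough sketch of what such a “5 on, 5 off” loop could look like (submit_batch is a placeholder for whatever actually sends the data to Dataverse):

```python
import time

THROTTLE_WINDOW = 5 * 60  # the 5-minute sliding window, in seconds

def five_on_five_off(batches, submit_batch):
    """Run at full speed for one throttling window, then idle for one,
    so the sliding window clears before the next burst."""
    it = iter(batches)
    done = False
    while not done:
        deadline = time.monotonic() + THROTTLE_WINDOW
        while time.monotonic() < deadline:
            try:
                submit_batch(next(it))
            except StopIteration:
                done = True
                break
        if not done:
            time.sleep(THROTTLE_WINDOW)  # go dark while the window resets
```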

API – There are currently two APIs that can be used: the new Web API, which uses JSON payloads, and the older SOAP API, which uses XML payloads. It stands to reason that the smaller JSON payloads should make the Web API faster than the corresponding SOAP API. However, the payloads are also compressed in transit, which might make these differences smaller than expected. There is also a server-side aspect to this, as the APIs run through different parts of the code on the server side, which could affect performance.

No of columns – How many columns are being sent to the API. Of course there will be a difference between sending a create message with 3 columns and one with 30, hence this is a relevant parameter. It is still a bit rough, as there is a huge difference between setting a boolean column, a 2,000-character nvarchar, or a lookup. This could also be refined later.

Existing records – How many records existed in the system prior to running the test. I am not sure if this makes any difference; in other words, everything else being equal, would it take more time to write 100k records to a system with 0 records than to one with 10M records? As I don’t know and cannot rule it out, I added it.

Latency (ms) – Daniel Cai, founder of Kingswaysoft, always recommends that the SSIS script with Kingswaysoft be run “as close as possible to the dataverse”. That in other words implies that the latency to the server affects the performance. To calculate this, I used diag.aspx from the computer running the script.
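
If you just want a rough number from the machine that will run the script, you can also time a lightweight request yourself; a minimal sketch (the URL is a placeholder, and this approximates rather than replicates what diag.aspx measures):

```python
import time
import requests

def rough_latency_ms(url="https://yourorg.crm.dynamics.com", samples=5):
    """Average round-trip time of a simple GET, as a latency approximation."""
    timings = []
    for _ in range(samples):
        start = time.perf_counter()
        requests.get(url, timeout=30)
        timings.append((time.perf_counter() - start) * 1000)
    return sum(timings) / len(timings)

print(f"~{rough_latency_ms():.0f} ms to the instance")
```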

Location – Which geo the instance is located in. This is more for general information; the latency is really the important factor here. The throughput might also have some effect if you are using a really bad line to the dataverse; I was using a wired 1 Gbit line. In this test, I was using an instance I got hold of as an MVP, which is located in the US, and my own desktop computer at home (an AMD Ryzen 9 3900X 12-core processor at 3.79 GHz with 32 GB of memory). Hence the latency was rather high and not in line with Daniel Cai’s recommendations. That is also something to investigate further.

No of users – As I, and some others in the community, have described, throttling is done on a per-user and per-front-end-server basis. Hence, utilizing several service principals/application users can effectively multiply the throughput. In this test I used just one; the idea is sketched below.
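
A hypothetical sketch of that idea (the session class and its method are stand-ins I made up; in reality each session would hold its own OAuth token and post real batches):

```python
from itertools import cycle

class UserSession:
    """Stub for one authenticated application user (service principal)."""
    def __init__(self, name: str):
        self.name = name

    def post_batch(self, batch) -> None:
        # Placeholder for an authenticated $batch POST as this user.
        print(f"{self.name} sends a batch of {len(batch)} records")

app_users = ["app-user-1", "app-user-2", "app-user-3"]
rotation = cycle(UserSession(u) for u in app_users)

# Each user is throttled separately, so round-robining the batches
# across several users raises the effective throughput ceiling.
batches = [[{"lastname": f"Test {i}"} for i in range(20)] for _ in range(6)]
for batch in batches:
    next(rotation).post_batch(batch)
```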

Instance type – It is well known that sandbox instances do not have the same performance as production instances. If you catch Microsoft support on a good day, and you are working with a larger (number of licenses) instance, you might also get them to relax the throttles a bit, especially if you mention that you are doing a migration. As these factors strongly affect the performance of large data loads, I had to add this. During this test I was using a non-enhanced production instance, in other words a production instance on which no throttles had been relaxed.

DB version – The final parameter that I thought might affect this is the actual version of the Dataverse instance. As improvements, and, God forbid, suboptimal “improvements”, can enhance or degrade performance, this is necessary to document.

SSIS/Kingswaysoft setup

For the setup of create tests in SSIS with the Kingswaysoft add-ons, I used a Data Spawner (from the Productivity Pack) to generate the data and sent it directly to the CDS Destination.

And the Data Spawner config:

And the CDS Destination config:

The main changes done in this case were to the parameters “Batch size” (set to 20 in the picture above) and how many threads to use (also set to 20 in the picture above).

After each run, I checked the log from SSIS to see how long the entire process took. As I have a computer with many threads and, in this case, enough memory, it is my perception that most of the threads allocated were also used.

Results

What are the results? This is a picture of the Excel sheet:

As you can see I did try both Create and Delete operations, but the results are rather obvious.

  • 20 threads/batch size 20, for both create and delete, took around 45 minutes
  • Reducing to 16 threads/batch size 10 made only a minor difference – 48 minutes
  • Microsoft’s recommendation of not using batching, i.e. 20 threads/batch size 1, took over 10 hours, for both delete and create
  • Using only 1 thread with batch size 1 was more or less the same as using 20/1
  • 1 thread with 20 records per batch (1/20) took almost 5 hours, which is around half the time of 1/1 or 20/1

I think the results clearly show that Microsoft Docs is currently incorrect in its recommendation not to use batching. Perhaps they will update it soon. From an entitlement perspective, one needs to understand the additional cost of the “batch unpacking” request that is made. With 20 records in every batch, this is an overhead of 1/21, but if you lowered the batch size to 4, it would be 1/5. Hence, using as large a batch size as you can without losing performance is generally what I would recommend.
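
Put as a small illustration of that arithmetic, the entitlement cost per record as a function of batch size:

```python
def requests_per_record(batch_size: int) -> float:
    """Each batch of n operations consumes n + 1 requests:
    the n operations plus one 'batch unpacking' request."""
    return (batch_size + 1) / batch_size

print(requests_per_record(20))  # 1.05 -> the unpack request is 1/21 of the total
print(requests_per_record(4))   # 1.25 -> the unpack request is 1/5 of the total
```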

As I have implied in this article, there are a lot of other parameters to investigate in the API. I have a hunch that a create with 10 lookups, compared to one with 10 text fields, will also make a significant difference, but I will need to test it.

Also, do consider the request timeout. When working with complex and large batches, one request may take quite some time. You will know if you exceed it, as a timeout exception will be returned. Note that some records in that batch may have been written anyway; it is just that your client wasn’t waiting around for the answer.
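
A minimal sketch of that failure mode, assuming a requests-based client like the batch example earlier (the URL and payload are placeholders):

```python
import requests

try:
    resp = requests.post(
        "https://yourorg.crm.dynamics.com/api/data/v9.2/$batch",  # placeholder
        data="<batch payload>",
        timeout=120,  # client-side limit: how long we wait for the answer
    )
    resp.raise_for_status()
except requests.exceptions.Timeout:
    # The client gave up waiting, but the server may still have committed
    # some or all records in the batch - verify what landed before retrying.
    print("Request timed out; check which records were actually written")
```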

I also encourage others to try out other parameters in the API to find what is really optimal from many different aspects. From a mathematical perspective, this can be seen as a multidimensional surface on which we are attempting to find the highest points. I have now started this journey, and I hope it was an interesting read. Please leave a comment if you have any experience to share or just want to comment.

Why I love Kingswaysoft

The reason I often recommend Kingswaysoft over other methods of data migration, or even sometimes data integration, is rather simple: it can do what others can’t. Not even Microsoft. And it can do it fast.

So, have I been paid by Kingswaysoft to write this? No, not a penny. I haven’t even been given a free license, even though they might give me one if I asked. I just think it is fair that I explain why I am such a strong proponent of this product, and if anyone disagrees, please feel free to drop a comment below.

API knowledge

Kingswaysoft know how the APIs of the Power Platform/CDS/Dataflex (called CDS below) work, sometimes even better than Microsoft themselves; I have seen feedback from them on what is missing from specific APIs to reach feature parity, in such detail that it is scary. Kingswaysoft also has a blog with detailed recommendations, and recommendations built into the product, on how to maximize performance with the Dataflex API. They have also built in handling for throttling, multi-threading, batching and more, to just make it transparent. Looking at other integration products and connectors, I have never seen anything that comes close to this depth.

Datamodel and Dynamics knowledge

Dynamics 365 Sales, Customer Service and the other first-party apps have some very peculiar oddities that need to be taken into consideration. These oddities include, for example:

  • How to delete Marketing List members – by using the party member and list as key
  • How audit logs are handled
  • How Activities work, with activity pointer and activity parties
  • Setting CreatedOn/CreatedBy/ModifiedOn/ModifiedBy
  • Reading/Writing personal views (saved advanced finds)
  • Setting status values of some fringe entities

Even though many tools, like Power Automate, Data Flows, Azure Data Factory, etc., all seem to handle common tasks like creating or reading a contact or account rather well, it soon becomes a problem when you start looking at some of the areas above. And you often need to in a migration.

It can use the power of SSIS

SQL Server Integration Services is a very powerful framework once you start getting your head around it. Yes, it has some quirks, but in general it can do some rather cool things and is good when you want to sequence many different tasks to reduce overall runtime. Using built-in features like Cache Transforms with memory storage, you can make filters using memory-based lookups with millions of records super fast, once everything is loaded.

…and extend it

And of course Kingswaysoft didn’t settle for just building connectors to Dynamics 365. They have a pack called “Productivity Pack” that adds a lot of nice features to SSIS that make your day a lot easier.

And if you have problems – great support

And if you ever run into problems, their support is great. We have identified bugs, and they have fixed them and sent us a special build just a day or two later.

Flip side

There is always a flip side. One is that SSIS isn’t always the most stable product. I don’t think Kingswaysoft is to blame for this, but it affects their product experience nonetheless.

Another obvious flip side is that this requires a skillset that is a bit off the beaten track. Even though it isn’t super hard to learn the basics, becoming really proficient with SSIS takes time.

Finally

So, a product as good as this is probably super expensive, right? No, it isn’t. If you are running things from within Visual Studio, you don’t even need a license, but if you plan to run a migration or similar, I certainly recommend one anyway, to get access to support. And buy the Ultimate license to get access to the Productivity Pack and all the other connectors while you’re at it. It is worth it.

Just to be clear, I am not saying that the other products are bad. I am just saying that once you choose a product, you will start to descend the rabbit hole. If the product isn’t up to speed, you will have a couple of options:

  • Back up and redo with another technology
  • Patch with another technology
  • Create some kind of workaround

None of these options is particularly palatable. Hence it is often tempting to choose a product that you know can do the job. And apart from coding, SSIS with Kingswaysoft will very seldom have any issues.

Team member licensing – hammer coming down!

The Team Member licensing option has been a subject of debate for quite some time. What can it actually be used for, and what not? I have heard “experts” suggest that it be used for integrations, but if you read the Dynamics 365 Licensing Guide, Appendix A, you can read all about what Team Member licensing is and isn’t. Typical scenarios for Team Members are:

  • Read only users
  • Users that use only slim parts of the system, and not the “cool” First-Party-App features in Sales, Customer Service, Field Service etc.
  • Users that just track activities

There are some updates to the details of what a Team Member can and cannot do, and I think the most important is that a Team Member can no longer CUD (Create, Update or Delete) accounts.

How urgent is this? Well, for new instances, this will be enforced as of April 1, 2020, but for existing instances it will be enforced on July 1, 2020. Hence, you still have some time if you have an existing org.

You can opt in to the early access updates. It goes without saying that you shouldn’t do this in your production system, and probably not even in your dev/test environments, since as soon as you have, you won’t be able to deploy changes. So, having an out-of-ALM environment where you can test this might be a good idea.

So, the BIG question is, “Are we compliant?” 

Well, there is actually a report that you can generate from the Power Platform Admin Center and I have recorded a video below that shows how you can use that and collate the data with pivoting in Excel.

Top Table Usage in PPAC

Top Ten Table Usage is back, after being lost when Organizational Insights was discontinued. It is a bit tricky to find, so check out the video. It is an awesome tool when trying to reduce the size of large instances, which is especially important now that the price per GB is going up to $40/GB (subject to your license agreement).