Large migration – lessons learned

I was recently in charge of a large migration. It all went fine, but not without the kind of hiccups that typically come with moving large amounts of data to Dataverse. We were using SSIS with Kingswaysoft and ended up using a local SQL database as a staging database as well. This article discusses the lessons learned and gives some concrete tips for doing similar migrations.

One of my more popular articles describes how to optimize the writing of data to Dataverse/CDS. If you are working with migrating large amounts of data, as I will be describing here, I suggest you have a look at it: https://powerplatform.se/fast-data-management-in-a-limited-cds-world/. I will not discuss those concepts in any detail here, but we did use all the techniques mentioned in that article.

I was recently in charge of a migration which used CSV-file exports from an old German system (with German field names!) containing many millions of records in large tables like “Contact” and “Sales Order”. The system we migrated from had a completely different data model than the one used in Dynamics. For instance, each row describing a “Flight” had to be split into two rows in the order detail table, one for the outgoing flight and one for the return flight. We also had to create a lot of related data referenced from the “Flight” table, for example location, agent and brand. In other words, there were quite heavy transformations and a lot of logic involved, such as changing the format of the old data to match the Dataverse model and applying rules to resolve old issues, such as bugs.

Initially we only got a quite small subset of the full database, and we started our migration journey by creating all the migration logic in SSIS (which keeps the logic in one place and makes updates easy to handle). The script included some functions that “joined” rather large tables, both from the CSV files and from related data fetched from Dataverse based on primary and alternate keys. I was clear with the customer from the very beginning that I wanted a full export with the same amount of data that we could expect in the final migration, mainly to be able to stress-test the SSIS script before the migration to the production environment took place, and after a while we got the big files…

…And this was when the excrement hit the wind generator. The aforementioned lookups just stalled forever. We noted that a lookup (using the Kingswaysoft Premium Lookup) works fine on a computer with 16 GB of memory up to a few hundred thousand records. However, once the data reaches 500k records and more, it just stalls forever (and don’t even get me started on the sort tools…). I am not sure if it could have been fixed by adding more cores and memory; we didn’t try. We hence had to rewrite the script and implement a staging database instead. What we found is that a dataflow with 1M+ records of lookups will be around 100x faster if you import the data into SQL and do a join instead. Lookups still work for smaller tables and I am not against them per se, as they do make the migration simpler. Adding more tables to a staging database does increase complexity: if you want to add a column to a table, that table does not only have to be changed in one SSIS dataflow, but probably in a few more, and you also have to run an ALTER TABLE in SQL to add the field there. It is therefore important to have a good mapping set before you start to create the script, and to keep the solution as simple as possible. You can also use SQL tasks in the migration script to update the staging tables automatically straight after reading them, if you need to apply some kind of rule after the load and find it easiest to express as an SQL query.
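
To make the staging-and-join idea concrete, here is a minimal T-SQL sketch. All table and column names (src_Flight, map_Location and so on) are made up for illustration; the point is simply that a set-based join replaces a row-by-row lookup component:

    -- Staging tables loaded straight from the CSV exports (names are examples only):
    --   src_Flight(LegacyFlightId, LocationCode, AgentCode, ...)
    --   map_Location(LocationCode, LocationId)  -- LocationId read back from Dataverse

    -- Resolve the reference with one set-based join instead of a lookup per row
    SELECT  f.LegacyFlightId,
            f.AgentCode,
            l.LocationId              -- the Dataverse GUID needed for the lookup column
    INTO    stg_FlightResolved
    FROM    src_Flight f
    LEFT JOIN map_Location l
           ON l.LocationCode = f.LocationCode;

    -- Rows where the join failed can be inspected before anything is written to Dataverse
    SELECT * FROM stg_FlightResolved WHERE LocationId IS NULL;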

The method we used for developing the migration was to first make a “skeleton” migration based on the target data model. In other words, we started by trying to get a few of the easiest fields, not all, from all tables that were to be involved in the migration – call it a model-first approach – instead of starting with one table, completing it and then moving on to the next. The advantage of the model-first approach is that you can start testing the data quite early, for instance setting up quantitative tests by checking the source system for the number of contacts and then comparing that number to the target. The tests can typically be done by other people than the ones building the migration scripts, and hence this method scales a lot better than a table-by-table approach. It also makes it possible for several developers to work in parallel on different tasks. Typically the more senior developer builds the skeleton and the more junior ones add field by field to each respective table. A negative aspect of this approach is that it requires a lot of re-loads (keep in mind that this was a first migration, so there was no prior data in Dataverse that we needed to consider) and re-mapping. It may also be easier to “fall out of” the structure if you just need “to add a little bit here and there”. It is, however, genuinely hard to go table by table, especially with related data. If you already have a lot of live data, you should think about a way to easily identify the migrated data so that it can be bulk-deleted. And do not forget to engage the client early with questions and the mapping, to make sure you have understood everything correctly and avoid unnecessary errors.

We also tried to create unique row identities that were strictly based on the source data. This is very useful as it allows for delta migration, or for continuing where we left off in case of a problem. Let’s say, for instance, that you want to migrate 3 million contacts. If the script breaks for some reason after 2.1 million contacts, it is good to be able to continue at 2.1M instead of restarting. In this case we didn’t use the modifiedon date to build full delta-migration logic, but it is certainly possible. For this we used the cache transforms to easily fetch the already migrated data (if any) by the unique key, and then sorted out the rows that matched the key. The same filter can also be expressed in SQL against the staging database, as sketched below.
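
This is just an illustrative sketch with made-up table names (stg_Contact holding the staged source rows, map_Contact holding the legacy ids of what has already been written to Dataverse):

    -- Only pick source rows whose legacy id has not already been migrated
    SELECT  s.*
    FROM    stg_Contact s
    LEFT JOIN map_Contact m
           ON m.LegacyId = s.LegacyId
    WHERE   m.LegacyId IS NULL;   -- no match = not migrated yet, safe to load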

Another pattern we used was that, after creating the records of a specific table, like contact, we read back the record id (in this case contactid) together with the legacy id into a mapping table. When we later loaded tables with dependencies on contact, such as lookups, we could join directly with this mapping table and get the contactid straight away when querying the related table. A hypothetical example of such a join is shown below.
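
Assuming a mapping table map_Contact(ContactId, LegacyId) that was read back from Dataverse after the contact load (the names are only illustrative), the join could look like this:

    -- Resolve the customer lookup on sales orders via the mapping table
    SELECT  o.LegacyOrderId,
            o.OrderDate,
            c.ContactId AS CustomerId   -- the GUID used for the customer lookup in Dataverse
    FROM    stg_SalesOrder o
    INNER JOIN map_Contact c
            ON c.LegacyId = o.LegacyCustomerId;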

Tips

  • When migrating from CSV files, import them directly as source tables in the staging database. That way, if you need to fix something, you have a good reference for the quantities.
  • Get an example of the full data load as early as possible. A script that works for a subset might not work at all for the full data load, as was the case for us.
  • Automate as much as possible. Don’t use any hardcoded values that are environment specific, such as transactioncurrencyid, but rather read these into small tables or into SSIS variables. Use SQL TRUNCATE to remove all data from a table quickly, and make this part of the SSIS script as an SQL task at the appropriate stage.
  • Always check the quantities: how many rows there are in the source data, how many rows there are after a match, and whether the numbers differ, so that you can identify bugs in your script very early – bugs that might be the reason rows are dropped. For example, you might use a JOIN when you should use an OUTER JOIN. Always check the total number and see if it is what you expect. Watch out for duplicates, and always check that your unique IDs (if you got some from the source data) really are unique and not NULL. Do note that joining on duplicates will create multiplications; hence it is possible to get more records after a SELECT statement with joins than there were in the initial table. A simple duplicate check in SQL is sketched after this list.
  • Define reasonable goals and test cases for the migration. Some examples:
    • 99.9% of all contacts are to be migrated correctly. With 1M records, this means that anything lower than 1 000 incorrectly migrated or missed contacts is still defined as OK.
    • Randomly pick 10-20 records at a base level, like 20 customers, and compare these in the UAT/Test environment to the source system as the data appears there. This needs to be done by the business people, so that they can have a say on whether the migrated data is fine.
    • Select some filters, like “all customers in Munich” and some other segmentations, and compare the source system to the destination. If there are large amounts of errors, backtrack through the staging database to see where you lost records or created too many (not uncommon).
  • Complete the entire transformation to destination tables in the staging DB. Then you can move the data directly from there to Dataverse. This is particularly important when moving large quantities of data, where managing the data in SSIS can be problematic.
  • Make sure to have unique identifiers on all tables, preferably ones that can be regenerated from the data. Store these in some “Legacy ID” field. This allows for delta-migration logic, i.e. where part of the data is migrated first and the rest later. It also means that if one of the dataflows has an issue and stops on the 3 230 234th record of 6M, you can continue from there and don’t have to redo it all. If there is no decent way of getting a legacy id, you can generate classic row numbers by creating an identity column. The migration can then use that number, but it is only valid within that particular instance and load of the staging DB, so you must be careful every time you reload the database.
  • Utilize the backup-restore functionality of the dataverse environments. Do note that you can make manual backups just before you start migration. If you have a production environment, this will need to be converted to a sandbox environment before you can restore to it. Another option I got from a colleague was to use 3 different environments, with temporary names, and then just rename the final one when done.
  • Once you have transferred an entire table to the target system, it is typically very useful to have a mapping table with just the table record id and the legacy id. So, for instance, after migrating Contact, read all contacts from Dataverse with the contactid and the legacy id. That way, when later migrating sales orders, which identify the customer by legacy id, it is easy to just join with this table to get the contactid.
  • Production environments are faster. The fastest option is to ask Microsoft Support to relax the throttles on all environments that are used during the migration.
  • Use a VM that is located geographically close (or really, with low latency and high throughput) to where the environments are hosted. This is a very common recommendation from Kingswaysoft too.
  • The settings for number of threads and batch size need to be set based on several factors, namely:
    • Production/Sandbox
    • Whether throttles have been relaxed
    • Size of payload (i.e. how many columns) – larger payload -> smaller batches
    • Type of action – creates are faster than deletes, and updates are in between
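
As promised in the tip about quantities above, here is a simple sketch of a duplicate/NULL check and a quantity comparison in SQL. The table and column names are examples only:

    -- Keys that are NULL or occur more than once will multiply rows in later joins
    SELECT  LegacyId,
            COUNT(*) AS Occurrences
    FROM    stg_Contact
    GROUP BY LegacyId
    HAVING  COUNT(*) > 1 OR LegacyId IS NULL;

    -- Quick quantity check: compare row counts before and after a transformation step
    SELECT (SELECT COUNT(*) FROM stg_SalesOrder)         AS SourceRows,
           (SELECT COUNT(*) FROM stg_SalesOrderResolved) AS RowsAfterJoin;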

I hope these tips can help you along. If you have any comments or other experiences on this subject, don’t hesitate to leave a comment.

During this migration and the writing of this article, I had excellent help from my highly intelligent colleague Ebba Linnea Nilsson, and it is certainly true that two heads are better than one; the end result is often a lot better than just the sum of two people’s work. So, as my final recommendation, make sure to have a good colleague with you to help you out, as you will most probably run into some issues, and having someone to discuss them with is really valuable!

Good luck on your migration challenge!

Why I love Kingswaysoft

The reason I often recommend Kingswaysoft over other methods of data migration, or even sometimes data integration, is rather simple: it can do what others can’t. Not even Microsoft. And it can do it fast.

So, have I been paid by Kingswaysoft to write this? No, not a penny. I haven’t even been given a free license, even though they might give me one if I asked. I just think it is fair that I explain why I am such a strong proponent of this product, and if anyone disagrees, please feel free to drop a comment below.

API knowledge

Kingswaysoft knows how the APIs of the Power Platform/CDS/Dataflex (called CDS below) work, sometimes even better than Microsoft themselves; I have seen them give feedback in such detail on what is missing from specific APIs to reach feature parity that it is scary. Kingswaysoft also has a blog with detailed recommendations, as well as recommendations built into the product, on how to maximize performance with the Dataflex API. They have also built in handling for throttling, multi-threading, batching and more, to make it all transparent. Looking at other integration products and connectors, I have never seen anything that comes close to this depth.

Datamodel and Dynamics knowledge

Dynamics 365 Sales, Customer Service and the other first-party apps have some very peculiar oddities that need to be taken into consideration. These oddities include, for example:

  • How to delete Marketing List members – by using the party member and list as key
  • How audit logs are handled
  • How Activities work, with activity pointer and activity parties
  • Setting CreatedOn/CreatedBy/ModifiedOn/ModifiedBy
  • Reading/Writing personal views (saved advanced finds)
  • Setting statusvalues of some fringe entities

Even though many tools, like Power Automate, Data Flows, Azure Data Factory, etc., seem to handle common tasks like creating or reading a contact or account rather well, it soon becomes a problem when you start looking at some of the areas above. And in a migration, you often need to.

It can use the power of SSIS

SQL Server Integration Services is a very powerful framework once you start getting your head around it. Yes, it has some quirks, but in general it can do some rather cool things and is good when you want to sequence many different tasks to reduce the overall runtime. Using built-in features like Cache Transforms with memory storage, you can make filters using memory-based lookups with millions of records super fast, once everything is loaded.

…and extend it

And of course Kingswaysoft didn’t settle for just building connectors to Dynamics 365. They have a pack called “Productivity Pack” that adds a lot of nice features to SSIS and makes your day a lot easier.

And if you have problems – great support

And if you ever run into problems, their support is great. We have identified bugs, and they have fixed them and sent us a special build just a day or two later.

Flip side

There is always a flip side. One is that SSIS isn’t always the most stable product. I don’t think Kingswaysoft is to blame for this, but it affects their product experience nonetheless.

Another obvious flip side is that this requires a skillset that is a bit “off the beaten track”. Even though it isn’t super hard to learn the basics, becoming really proficient with SSIS takes time.

Finally

So, a product as good as this is probably super expensive, right? No, it isn’t. If you are running things from within Visual Studio, you don’t even need a license, but if you plan to run a migration or similar, I certainly recommend getting one anyway, to get access to support. And buy the Ultimate license to get access to the Productivity Pack and all the other connectors while you’re at it. It is worth it.

Just to be clear, I am not saying that the other products are bad. I am just saying that once you choose a product, you will start to descend into the rabbit hole. If the product isn’t up to speed, you will have a couple of options:

  • Back up and redo with another technology
  • Patch with another technology
  • Create some kind of workaround

None of these options is particularly palatable. Hence it is often tempting to choose a product that you know can do the job. And apart from coding, SSIS with Kingswaysoft will very seldom have any issues.

Fast data management in a limited CDS world

In May 2019, Dynamics 365 CE/CDS enacted some new throttling mechanisms that have caused some headaches for anyone wanting to manage a lot of data in CDS (I will refer to Dynamics 365/CDS as just CDS below). There are several different throttles, but the one that has caused me the most trouble is the concurrency throttle. Kingswaysoft will release support for handling this in the next release, and you can also request a special version from them if you ask nicely. In the meantime, this post can give you some help on how to work as fast as possible using application user multiplexing and a loop with a 5-minute wait to make sure that the throttles are reset.

The new throttling on the main CDS API, described here: https://docs.microsoft.com/en-us/dynamics365/customer-engagement/developer/api-limits, needs to be carefully considered when doing heavy data manipulation in CDS. One of my customers has a large system with numerous integrations, of which the most data-heavy are the marketing automation system and the booking systems. And yes, this is business-to-consumer.

With the new per-GB pricing, keeping the database as small as possible has become an essential task, and bulk delete just doesn’t work for large data loads at the time of writing this article. I do hope that Microsoft increases its speed so that it becomes more useful, but currently it runs at somewhere around 1-2 records per second.

Bulk delete also has the limitation that it can only base its selection on a query, i.e. a FetchXML. Often this is not enough, for instance when you want to remove “all emails except those that have any connection to either a case or a contact which has a case”.
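
Just to illustrate why a single FetchXML falls short here, this is roughly what that selection looks like when expressed against staged copies of the tables. The table and column names are simplified and hypothetical, and the “connection” is reduced to the regarding field for brevity:

    -- “All emails except those connected to a case, or to a contact that has a case”
    SELECT  e.ActivityId
    FROM    stg_Email e
    WHERE   NOT EXISTS (SELECT 1 FROM stg_Incident i
                        WHERE i.IncidentId = e.RegardingObjectId)      -- not regarding a case
      AND   NOT EXISTS (SELECT 1 FROM stg_Incident i
                        JOIN stg_Contact c ON c.ContactId = i.CustomerId
                        WHERE c.ContactId = e.RegardingObjectId);      -- not regarding a contact that has a case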

For these reasons I almost always opt for using SSIS with the Kingswaysoft connectors for CDS when working with complex data management. This article is about how to get decent performance now that there is tougher throttling to take into consideration.

User multiplexing

As the throttling is measured per user, one trick is of course to use multiple users and spread the load over them. You can of course use normal users, but that will cost you licenses, so the smart move is to use application users instead. If you don’t know how to create application users in Dynamics 365, check it out here: https://docs.microsoft.com/en-us/dynamics365/customer-engagement/admin/create-users-assign-online-security-roles#create-an-application-user . In the example below, I will be using four different application users, one as the source account and three as destinations. The reason for this is that it is typically easy to read several thousand rows per request, but seldom efficient to do batch creates/writes/deletes of more than 10-20 records at a time.

To do this with SSIS/Kingswaysoft, you start by setting up the connections, in this case the four CDS/CRM connections, using the OAuth auth type like below.

As you might want to have several packages in the same project and have them share the connections, it may be a good idea to use project connections. I also use an Azure SQL DB for logging any errors. Previously I used CDS for this, but now with the throttling that is not such a good idea, as the error itself might be caused by throttling, and hence logging the error can cause another error. Writing to a target that you know will not fail is therefore a good idea for error logging. When you are done with the connections, it should look something like this:

Now it is time to build the actual flow. Where you would normally have one Source and one Target, it will now look something like the image below, which I will explain.

First of all, the Premium Derived Column creates a new column which simply contains the row number. It will look something like this: 

I like to use the components that are available in the Productivity Pack from Kingswaysoft, and this Premium Derived Column is one of them. In this case I think it is actually equivalent whether you use IncrementalValue() or RowIndex(). I think you can create this logic with a normal Derived Column too, it just has fewer features.

Next we need to create a Conditional Split that divides the rows evenly between the three destination components. This is done using the mathematical operator modulus, which is written using the “%” sign. For those that didn’t study this in school, it simply means the remainder of a division. For instance 5%3=2: if you divide 5 by 3 you get 1 with a remainder of 2. What we will do is assign RowNr%3 == 0 to Case 1, RowNr%3 == 1 to Case 2 and the rest to Case 3. That should divide them evenly. It looks like this:
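
For those who prefer to read the split logic as a query rather than from the screenshot, here is the same modulus-based bucketing sketched in SQL (the table and column names are made up):

    -- Assign each row to one of three destination buckets, matching the conditional split
    SELECT  RowNr,
            ContactId,
            CASE RowNr % 3
                WHEN 0 THEN 'Case 1'
                WHEN 1 THEN 'Case 2'
                ELSE        'Case 3'
            END AS DestinationBucket
    FROM    stg_RowsToProcess;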

You then create the three destination components. I typically create one first, then copy it and change it, as that is faster. Make sure that you set the Connection Manager of each to one of the three different target connections.

I also recommend that you fiddle a bit with the batch size and the number of threads and test which combination gives the best results for you, for the entity and action you are working on. There is no single answer here. I would typically start at batch size 10 and 16 threads.

Tuning DataFlow property settings

If you go back to the Control Flow view and right-click on the Data Flow you have created, there are some other interesting settings you can tweak.

DefaultBufferMaxRows – 10 000

DefaultBufferSize – 10 485 760 (10MB)

EngineThreads – 10

These can be tuned to allow the Data Flow to handle more rows, use more memory and use more parallel threads, which of course makes it faster (if that is the bottleneck, which it typically is not when working with Dynamics).

What I have found is that changing max rows to 100k, the buffer size to 100 MB and engine threads to 32 will not hurt, but you can find several other blog articles specializing in SSIS that discuss this.

Crude throttle handler

What I have noticed is that many of my dataflows simply seem to grind to a halt after 400-600k rows read from Dynamics. I am not sure if it is the read or the write part that causes this, but I figured that probably the most pragmatic way of solving it would be to create a loop that runs a data flow limited in the number of records, typically 400k, waits 5 minutes and then iterates. The smartest version is of course to have a control variable which checks when there are no more rows and then breaks the loop; a simpler version is to just loop n times to cover the amount of data you are trying to move, i.e. number of rows per iteration x number of iterations. It would look something like the picture to the left.

If you would like to refine the loop a bit to make it more automatic, create a variable of type Int, for instance RowCount, and set the initial value to 10 or something else different from 0. Then set the EvalExpression of the loop to “@RowCount > 0”. After this, add a Row Count component to the Data Flow and connect it to the variable RowCount. When the Data Flow runs and returns 0 rows, it will run to the end, the EvalExpression will evaluate to “False”, and the loop will break.

Using this technique, I am able to remove several million records in just a few hours. With one of these jobs I managed to remove 20 GB of structured data in less than two days (no attachments or similar, just records). By adding more application accounts, both on the source and particularly on the destination side, you can increase the speed further.

I also advise you to be on the lookout for Kingswaysoft’s new version, which I think will come soon, and, like me, make sure to always download both the Dynamics and the Productivity Pack. I have read that there are great things coming to the Productivity Pack!

How to reduce SubscriptionTrackingDeletedObject table in CDS/Dynamics 365 – Updated!

One of my customers is a B2C company with a very large online database exceeding 500 GB. With a very active marketing automation tool integrated, we generate a lot of data in Dynamics 365 CE which, after defined retention periods, needs to be removed. This has caused a side effect: a table called SubscriptionTrackingDeletedObject has become very large. This article describes how to set a configuration to reduce its size. UPDATED – Based on some new learnings and information from Microsoft, this article has now been updated!

We often monitor Organization Insights, and lately also the brand new capacity feature that can be found in the left-hand menu in https://admin.powerplatform.microsoft.com, if you have a CDS/Dynamics 365 CE instance.

An interesting table had started growing rapidly, and we had no clue what it was; in my now 15 years of working with Dynamics 365 I had never seen it. It was called SubscriptionTrackingDeletedObject. When I came back from my Swedish summer vacation, it had grown to an amazing 181M records. Time to fix this.

The first thing to do, as usual, is of course to google it (yes, it is a verb, get used to it). All I found was this somewhat informative post by my good friend Chris Cognetta, who is an ace with infrastructure issues.

http://cognettacloud.net/2016/06/21/crm-database-log-growth-issue/ 

However, it seemed that they just truncated the table, and we were online, so that was a bit tricky, to say the least. I was at this point a bit upset that Microsoft was taking up around 50 GB of space for my customer without giving me any way of managing it, or any direct use of it. I counted to ten and called Microsoft Support.

After a few emails back and forth, the excellent support technician at Microsoft informed me that there is actually a setting in the infamous super-secret settings tool with the Star Trek-sounding name OrgDBOrg (it is pronounced “Org-D-Borg” in case you ever get stuck in Dynamics trivia). The setting is called ExpireSubscriptionsInDays. I will quote the support technician on what this table is used for, and if anyone has any more information, please leave a comment.

“The SubscriptionTrackingDeletedObject table is the table that logs records for number of days before deleting inactive subscriptions as well as timed out deletion services.”

I am not sure for which purpose it exists, whether it relates to GDPR or to some restore mechanism; I would like to know, though. The default value is 90, which means that these logs will be stored for 90 days. The minimum it can be set to is 1. As I am currently not entirely sure what these logs are used for, I would not recommend setting it to 1, but I did set my customer’s to 5, hoping that this is not going to come back with a vengeance.

During the day we have seen a dramatic drop in the number of records in this table, by about 30M, and it is still ongoing, but it is hard to measure as there is a delay in the capacity measurement in the Power Platform admin portal.

Update! The size of the table fell to 160M rows but never below that, so after some further discussions with Microsoft support, they did some more investigation into the subject and came back with the following recommendations:

1. Reduce the value gradually, from 90 to 60 and then onwards
2. Never go below 15

There is, however, another related setting called ExpireChangeTrackingInDays, which is located just next to ExpireSubscriptionsInDays. It defaults to 30. We reduced it to 15.

Based on these recommendations we tried 60 days and this resulted in a most dramatic drop to around 20M rows. – End of update

 

So, how do you do this? First, download the OrgDbOrgSettings tool and install it in the instance where you are having issues. Check out the links below for that:

https://github.com/seanmcne/OrgDbOrgSettings/blob/master/readme.md

https://github.com/seanmcne/OrgDbOrgSettings/releases

A word of advice regarding OrgDbOrg: don’t think that you are Captain Kirk, flying off into the Beta Quadrant and beaming up every single setting just because you can. It won’t make your system better, rather the opposite. Make really sure what you are doing, and don’t even trust a blog article like this; read the KB article linked in the tool and make up your own mind. It is a powerful tool, like a jackhammer.

After you have installed the OrgDbOrgSettings tool, you can see it and open it by clicking its display name.

Then just find “ExpireSubscriptionsInDays”, press “Edit” and change it to whatever you would like it to be. You will typically have to confirm to save it to Dynamics 365 CE/CDS.

With that done, you should just have to wait for the magic to happen.

As far as I have understood, these two settings and this table indicate how long changes and deletes are stored in this table and related tables, so that integrating systems can read them. This can, for example, be the Data Export Service, the old Dynamics for Outlook client, etc. Hence, reducing the number to, for instance, 15 (the lowest number recommended by Microsoft) can result in some changes not being propagated to these integrated systems, in case the integrations or an offline client are offline for more than 15 days. I also got the feeling that setting it to 5 was below some internal threshold and hence wasn’t really supported, despite the fact that OrgDBOrg says the lowest value is 1.

Anonymizing data in UAT/Dev – GDPR

On the eve of GDPR, what could be more fitting than a post on GDPR? I think everyone is probably deadly tired of all the consent emails, and I think they have probably even reached our friends in the US and Asia by now.

This article touches on legal matters around GDPR; it is based on my personal interpretations and is not to be viewed as legal advice.

One interesting thing that has to be considered in relation to GDPR is how to handle personal information in non-production environments/instances. Microsoft has made it painfully easy with the instance manager to copy the production instance, but do you really have the right to use your customers’ personal information in a development, UAT or staging environment? Do you have legal support for that? I think that would be a very hard argument to make. Have you gotten your customers’ explicit consent to use their personal data for that purpose? Probably not. Hence, if you are planning on keeping the instance/environment for more than 30 days, you will need to remove all personal information from it to stay within the boundaries of GDPR. I have found that using SSIS with Kingswaysoft and the Anonymization component in the Productivity Pack is very useful for this. In this rather lengthy article I will describe how I have used it to set up an anonymization script.

First of all you need to download SSDT and the Kingswaysoft Dynamics 365 and Productivity Packs.

Then start a new Business Intelligence -> Integration Services project.

Then right-click in the empty field at the bottom where it says “Connection Managers” and choose “New connection…”.

In the dialog that shows up choose “DynamicsCRM”

You should now see a dialog showing the connection settings to Dynamics.

Connection settings for your instance

Choose the right settings for your instance and test the connection at the bottom when you are done to make sure it works. Make extra sure you are not connecting to your production environment – you wouldn’t want to anonymize that!

When this is done, it is time to make your first Data Flow Task. Work in SSIS is divided into two parts, Control Flow and Data Flow. The Control Flow is the orchestration, which tells SSIS in which order everything is to be run. If you want things to run in parallel, just place two boxes next to each other; if you want one Data Flow to run before another, drag the arrow from the first to the second. It is also possible to have entire “Sequence Containers” which can hold several components and make sure they execute before moving to the next stage.

Let’s start by dragging one new Data Flow from the Toolbox on the left hand side to the Control Flow work pane. Then double click it. This will open it up.

You will now see that the tab at the top has changed to “Data Flow” from the previous “Control Flow”. In the “Data Flow” view you will also have a different set of toolbox components available.

In the “Data Flow” you will control a single data flow. For instance the anonymization of Contact.

Start by dragging the Dynamics CRM Source from the toolbox (on the left) to the workspace (the big pane in the middle), then double-click it. Before looking at the details of the source component: I like using FetchXML when building queries, and of course the best way to build FetchXML queries is FetchXML Builder in XrmToolBox (thanks Jonas Rapp and Tanguy Touzard for all your work!). If that feels like too much heavy lifting (it really isn’t), the easiest way to get a FetchXML query is to make an Advanced Find query and export it with the “Download FetchXML” button on the right-hand side of the ribbon in the Advanced Find query builder. So let’s say we have decided that the following fields in Contact are personal information and need to be anonymized:

The column editor in Advanced Find – don’t use composite fields like “fullname” or “Address1”

Downloading the FetchXML and setting up the Source component in SSDT (Visual Studio) will make it look like this:

I have set the “Connection Manager” to “Target” as we are using the same source and target (reading from and writing to the same system).

I am leaving the batch size at 2000, which seems to work well. Don’t reduce it too much; remember the API limit of 60 000 calls per 5-minute period.

If you would like to try it out, you can set “Max rows returned” to, for instance, 10, and check that it isn’t doing anything crazy.

As Source type I prefer FetchXML – remember that a FetchXML query can include data from several entities, which can make queries a lot easier than trying to match the data with lookups in SSIS.

I always try to read all data in UTC and write it in UTC, which in most cases makes it correct. But make sure you understand how time zones work if you need to fiddle with this.

Also, don’t include more columns than you need; that will just make your script slow. After adding or changing the FetchXML, the component will try to parse it and read the metadata from Dyn365/CRM, so there might be a slight delay. You can check what data this component will output by clicking on “Columns” on the left-hand side.

When done, press “OK” to go back to the “Data Flow” pane.

Now add a “Data Anonymization” component and drag the arrow from the Source component to the anonymizer. Then double-click the anonymizer.

You should see something like this, where I have set anonymization settings for the different columns. By default it will say “Ignore” on all columns.

Try out the different anonymization types; some are more generic than others. When done, click OK and go back to the data flow pane. Add a Dynamics CRM Destination and drag the blue arrow from the anonymizer (blue arrows are the normal data output, red arrows are error output) to the Dynamics CRM Destination component. Then double-click it. The view in the data flow should look something like the picture below.

Dyn365 Source -> anonymizer -> writing to Dyn365
Dyn365 destination component – set values where the arrows are

When setting up the destination component, there are a few things to consider:

  • In this case we are always doing updates – hence set the action to update. It is faster than upsert.
  • You have to set the Destination Entity.
  • If you write data to Dyn365 with high latency, no batching and no threads, you will be able to update at about 2-3 records per second. With low latency, correct batch settings and multi-threading, I have been able to get up to 300 records per second, though this is very dependent on the entity. Hover the mouse over the blue “i” just after “Enable Multithread Writing” for some deep-end tips from the scholars at Kingswaysoft.
  • Error handling is recommended to be directed at a file or some other output where you can monitor it. If you do nothing about it and you get an error, it will break the flow and stop. You can control the error handling of the destination component by clicking on the “Error Handling” tab on the left-hand side. Remember that you will also get all types of exceptions thrown by Dyn365 here, like missing privileges, disabled records, etc.
A normal problem that needs to be handled is that records are deactivated, and deactivated records cannot be changed; they have to be reopened first, then changed, then re-closed.
This is an example of a dataflow handling this:
Data flow which splits off the deactivated records to the left, adds two special columns to reactivate them, writes an update with only the statecode & statusreason to the record, and then merges the two data streams and writes the original values, which re-closes the ones that were closed from the beginning

This is how the conditional split is defined – if statecode is 0, send the row to the output called “open”, otherwise send it to the output called “closed”

Then you can test run your data flow by right-clicking in the data flow pane and pressing “Execute Task”.

And when you have assembled a larger control flow – like for instance this one:

A control flow with several dataflows (some disabled) – all in sequence.

you can execute the entire flow by clicking the green plus sign in the ribbon.

Gustaf Westerlund
MVP, Founder and Principal Consultant at CRM-konsulterna AB
www.crmkonsulterna.se