API per user limits – The good, the bad and the ugly

(Updated) Microsoft recently released some throttling changes that have been causing a stir in the community. The latest one, the concurrency throttling, was not very openly announced, and some partners and customers were hit rather hard by it as it affected their ability to manage large data loads in the system.

Now Microsoft have announced another API limitation, this one based on users and the type of licenses they have. You can read more about it here if you like. This article will discuss what this means, and my personal view of the good, the bad and the ugly of it.

First of all, we need to understand what it is. It is an API limit that will be set per user, based on the type of license that the user has been allocated. The highest is the Dynamics 365 App user license, like Sales, Customer Service or similar, which will give you 20 000 requests per 24 hours. The lowest is the Power Apps – Per App license, which will give you 1 000 requests per 24 hours. Note that these are connected to the user and not summed/aggregated at the instance level (although I would think that would be a good idea). Well, really, the lowest of them all are application users, non-interactive users and admin users that don't use a license, as these will be allocated 0.

I have not seen any UI for this yet, so I don't know how this will look, but what the page is saying is that API calls can be reallocated from normal users to application users/non-interactive users. (UPDATE – See the update at the bottom regarding this, thank you observant readers!) I am not sure if it will also be possible to reallocate API calls between one normal user and another.

There will also be an additional SKU for buying 10 000 additional API calls per day that can be allocated to a user.

 

The Good

What is good about this, then, you might ask? Well, I think it is fair. Large customers pay a lot of money for their instances and usually use them a lot, with a lot of integrations. It is only fair that they are allowed to use the APIs more than a small customer who has created some super duper application that blasts Dynamics with massive amounts of calls. The small customer can still do this, they just have to pay a bit extra for those API calls if they aren't covering them with their users.
I also hope that this might enable Microsoft to relax the currently rather tight throttling on the APIs a bit.

According to the licensing documentation in general, existing customers will not be hit by this until October 2020, in other words, more than a year from now. Hence, for now, this will probably only affect new customers.

The Bad

This implementation certainly has some bad parts. The most obvious is the overly strict connection to users, which makes it weird. I don't know how this will be managed in the UI, but let's say we have an instance with 500 users with a mix of Sales Enterprise, Customer Service Professional and Team Member licenses. We also have 10 application users that are used for Portals, Forms Pro and custom integrations to many other systems, each integration using a separate integration user to reduce the attack surface in the unlikely event of a hacker attack. What we will need to do is first figure out how much API usage all the normal users account for (for instance via PCFs, Flows, Plugins, Workflows etc.) and how much the integration application users account for. Currently https://admin.powerplatform.microsoft.com does not give us this granularity. There are indications, but in this case one would need deeply granular data, preferably with trend analysis.

Another part of this that could be done better is the "buying additional API calls". Why not just adopt the method used in Azure? In other words, you pay as you go. With the current method, you have to know beforehand how much a particular user will use, and if you overshoot, the user will be shut down, causing unnecessary support costs for customers, partners and Microsoft.

I also wonder how this is practically going to be handled. Are admins going to go into each of the 500 user records, reduce the API calls allocated and move them to application users? If the admin moves all calls, which effectively will stop plugins, workflows, JavaScripts with server calls etc., how will the error handling of that look?

The Ugly

What is really the difference between something bad and something ugly? I would say that something bad is a design decision that we might dislike or that might be a disadvantage to the customers; it requires some sort of conscious perspective. Ugly, on the other hand, covers the parts where, in this case, Microsoft just have forgotten to think about something or neglected perspectives, which causes issues for partners or customers. Based on this, I would say that the following are the ugly aspects of this:

Timing

Again, Microsoft are rolling out a change with a rather short timeframe. They probably feel that a month or two of notice, by publishing the article above, is notice enough, but they have to realize that many customers cannot act that fast. If you are a small customer with extensive use of Dynamics, for instance using Dynamics 365 in a B2C scenario with a Marketing Automation integration, targeting millions of customers with sendouts and with hits on your webpage being mirrored into Dynamics all the time, this will cause some hefty API traffic. And your organization might not be very big if you are totally e-commerce oriented.

Maybe only new customers, for now

Lastly, I really hope that it is true that the API limitation will not affect current customers; it is not very clear, and hence we are left in the dark again. If there is a problem with application users etc. not being able to log in, I hope Microsoft support will be ready for the storm that will hit them.

On the other hand, new customers might have tested the system, evaluated the costs and are now faced with this. Not sure that will be optimal either; there is a risk of losing a customer or two there.

Communication

This is a rather drastic change, and it may be viewed as a "breaking change" if the one-year grace period mentioned in the licensing in general does not apply to it. In any case, this should have been communicated very clearly, months ahead, to remove any kind of doubt from partners and customers; both via blogs and via emails to admins of organizations using application users/non-interactive users, as these should be easy to identify via telemetry. Currently no one knows exactly when this will hit them or their customers, or how they are to manage it.

 

This is generally very unclear. I shouldn't have to write an article like this, speculating about what is or isn't going to happen. If I, being an MVP, have problems figuring this out, customers are probably very much in the dark, both existing and new.

 

Conclusion

In conclusion, I think this is a good idea that got rushed. It should have been passed through a couple more hoops before being launched, to get the right feedback. The main things that I think Microsoft should change before rolling this out, which from my perspective would still give the same effect, are:

  1. Aggregate all API calls that are counted to a per-instance level. This will make it easier to manage, remove the breaking change and make it easier to understand.
  2. Enable admins to add a per-use, after-the-fact payment option (like Azure) for any additional API calls.

Whether this is going to be useful or not also depends very much on whether we can reallocate a lot of the API calls from normal users to the integration users. For instance, I have a B2C customer with 1M+ API calls per 24 hours, and if it is not possible to take the sum of hundreds of users and allocate it to the application users we are using, then this will be a very hurtful change.

In the meantime, I do recommend that you keep a close eye on what is going on in this area, as it will most likely affect you if you are running any application accounts, which you probably are, with products like Dynamics Portal, Forms Pro, Voice of the Customer and many more. If you go into the list of users and change the view to "Application users" (or whatever it might be called in your language) you will see the list. I think Microsoft will make some changes, or some announcements, about this before October 1. Let's see.

Update 2019-09-04

There has been some chatter going around regarding this, and do note the comments below, which include interesting links and good thoughts. There are some additional points that need to be made. Instead of changing the original article, I will continue to add updates like these.

Normal UI usage will count

Initially I did not think that normal UI usage would count towards the API request limit. By "normal" in this case, as an old Dynamics 365/CRM geek, I of course mean a model-driven app, but the same also goes for canvas apps, or actually any use of the CDS whatsoever. What this will mean when a user runs out of API requests will be interesting to see. How many requests are used when the application is used depends, of course, a lot on what you do. If you switch on F12 in Chrome you can check the network traffic and see for yourself.

Batching will be your friend

Using batching will from now on not only be a general best practice, it will also save you money. If you use tools like Kingswaysoft this is easy to configure, to make sure that you have large batches when, for instance, doing Create/Update/Delete calls. When writing code directly, you will need to understand how to do this yourself. Note that sometimes this will require entire rewrites of the code. I have seen programs off the shore of Orion that you wouldn't believe, with tons of single queries instead of one single call, most often written by devs who have little or no experience of writing code towards Dynamics 365/CDS.
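To make this concrete, here is a minimal sketch in PowerShell of what a Dynamics 365 Web API $batch request can look like, creating two accounts in one single request. The instance URL, the token and the v9.1 endpoint are placeholders and assumptions on my part, not something from a real implementation:

# Hypothetical values – replace with your own instance URL and a valid OAuth bearer token
$orgUrl = "https://yourorg.crm.dynamics.com"
$token = "<access token>"

$batchBoundary = "batch_001"
$changeSet = "changeset_001"

# Two creates wrapped in one change set – the whole payload counts as a single request
$body = @"
--$batchBoundary
Content-Type: multipart/mixed;boundary=$changeSet

--$changeSet
Content-Type: application/http
Content-Transfer-Encoding: binary
Content-ID: 1

POST $orgUrl/api/data/v9.1/accounts HTTP/1.1
Content-Type: application/json

{"name":"Account 1"}
--$changeSet
Content-Type: application/http
Content-Transfer-Encoding: binary
Content-ID: 2

POST $orgUrl/api/data/v9.1/accounts HTTP/1.1
Content-Type: application/json

{"name":"Account 2"}
--$changeSet--
--$batchBoundary--
"@

Invoke-RestMethod -Method Post -Uri "$orgUrl/api/data/v9.1/`$batch" `
    -Headers @{ Authorization = "Bearer $token"; "OData-Version" = "4.0" } `
    -ContentType "multipart/mixed;boundary=$batchBoundary" `
    -Body $body

In the SDK, the corresponding mechanism is ExecuteMultiple, which as far as I know is what tools like Kingswaysoft use under the hood.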

Unclear if possible to move API-calls

As several people here and on Twitter have commented, it is probably incorrect to interpret that API calls can be moved from normal users to application users and non-interactive users. This will cause major headaches for some customers, who will be struck with lots of additional costs. Costs that are not very welcome, as the per-GB cost recently increased 800%, hurting especially the larger customers with massive integrations and extensive use of the system. I do, for instance, have a customer that exceeds 1M requests per day, 365 days a year. This would require them to buy over 100 of the 10k API request add-on SKUs, despite the fact that their 500 users give them a total of over 5M requests per day, something they will not be using through the UI unless someone is drinking very large amounts of coffee. – NEW Update: This was an incorrect interpretation. You cannot reallocate API calls from normal users.

The price is here

The price for the 10k/24h SKU will be $50/month. This means that for a customer like mine, with major integrations causing around 1M API calls per day, this would cost an additional $5 000 per month, or $60 000 per year. I sincerely hope they will relax the throttling to make it worth it. If/when they do, I will read my Machiavelli again.

 

Update 2019-09-05

First of all, I will write a new blog article on this when the dust settles and we know what is going on. Currently there are quite a lot of unknowns, and I wouldn't be surprised if Microsoft announced a thing or two soon. I have been told that the FAQ will be updated in a couple of days.

Batching – again

There were some discussions about whether batching was actually going to be useful in this case or not. I have now gotten it confirmed that a batched request will be counted as one (1) call. This goes both for batched Creates/Updates/Deletes and for queries returning multiple records (it would have been very strange if that had counted as more than one, but I had to ask).

Data Export Service etc.

Data Export Service and other services that run under the system account will not count towards the API request limit. This is good news, as it opens up the possibility for many customers to use this method to offload the APIs for reads.

What is the competition up to?

I checked to see how SFDC are handling this, and as far as I can see they have a similar setup, as can be read here:

https://developer.salesforce.com/docs/atlas.en-us.salesforce_app_limits_cheatsheet.meta/salesforce_app_limits_cheatsheet/salesforce_app_limits_platform_api.htm

and here

https://support.geckoboard.com/hc/en-us/articles/216804218-I-ve-hit-my-Salesforce-API-request-limit

I am no expert on their licensing model, but I think it is good to know that this isn't just a Power Platform thing. However, there are some distinct differences:

  1. The API calls are not counted for normal browser/client usage, only "real" API calls.
  2. They have real enforcement, blocking an entire instance/org if it overshoots.
  3. All API calls per user license are summed up at the org level.

Microsoft add-on apps will include requests

If you buy Dynamics Portals, it comes with some additional licenses. The same goes for Forms Pro. Hence there should be some default API request allocation for the application users that these install. I do wonder if it would be financially beneficial to piggyback on those application users? There is also currently no method for ISVs to bundle API requests into their product if they install an application user upon installation.

CSP / Distributor silence

We have still heard nothing about the 10k add-on SKU from any distributor, EA or CSP. It will be interesting to see if it will reach the entire distribution chain by October 1, when customers (new ones) will start being notified that they are in violation.

TCP Chimney

Today I was working with a customer where we have had some weird intermittent SSIS errors, with the Native OLE DB / SQL Client giving me trouble with some really strange error messages.

Errors like these:

CRM service call returned an error: CRM service call returned an error: The request channel timed out while waiting for a reply after 00:01:59.9990005. Increase the timeout value passed to the call to Request or increase the SendTimeout value on the Binding. The time allotted to this operation may have been a portion of a longer timeout. (Error Type / Reason: Timeout)

[OLE DB Destination [611]] Error: SSIS Error Code DTS_E_OLEDBERROR. An OLE DB error has occurred. Error code: 0x80004005.
An OLE DB record is available. Source: “Microsoft SQL Server Native Client 11.0” Hresult: 0x80004005 Description: “Communication link failure”.
An OLE DB record is available. Source: “Microsoft SQL Server Native Client 11.0” Hresult: 0x80004005 Description: “TCP Provider: An existing connection was forcibly closed by the remote host.

After some searching and troubleshooting, I found the following blog post, which finally seemed to address the error at hand. When I disabled "TCP Chimney", the error went away. Please read more about it here:

TCP Chimney Setting and SQL Server Error: TCP Provider: An existing connection was forcibly closed by the remote host
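For reference, the actual change is a one-liner from an elevated PowerShell or command prompt on the machine running the integration. This is the standard Windows command as I know it, nothing specific to SSIS or Kingswaysoft, and as always, test it outside production first:

# Disable TCP Chimney offload on the NIC stack
netsh int tcp set global chimney=disabled

# Verify the current setting afterwards
netsh int tcp show global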

SSIS and Kingswaysoft are great tools, but sometimes the errors are really tricky and you have to be rather persistent to find the root cause. In this case it seems to be related to the NIC.

 

Embedded PowerApp in Dynamics 365 CE

PowerApps, or Canvas Apps as the new name for them is, are a very fast low/no-code way of creating applications that can fill in the blanks where Dynamics 365 CE sometimes doesn't quite fit. This can for instance be when there is a mapping entity, complex calendar functionality, or even integration with a camera or other applications directly. These things can be done, and some are even supported within the new UI/UCI if you are running it through the tablet/mobile client. However, there is currently no easy way that I have found to just add a PowerApp/Canvas App (I will just call it PowerApp below to make things easier) to Dynamics. But after a chat with my friend, ex-MVP and now Microsoft Global Black Belt Carsten Groth (isn't that a cool name for a team, by the way?) he told me that it is possible to embed PowerApps. So I googled a bit and found that no one seemed to have blogged about this (if you have, please tell me in the comments) except concerning other applications like SharePoint. I tried it out and found that it wasn't that hard at all to embed a PowerApp into Dynamics 365.

The first thing we have to do is create our PowerApp. You can get to this place by browsing to https://web.powerapps.com or clicking the waffle in Office 365 and then selecting PowerApps. In my case, I used the Dynamics 365 template, pointed it to the Case entity and more or less pressed next-next-finish. I did replace the field shown in the first screen from the GUID of the case to the field "title", as that makes a little more sense to look at.

So what I was looking for when creating the PowerApp was the AppId, in this case:
904e3ac1-1e26-4b4b-a384-971485f6709c

What we then need is the embedding syntax for PowerApps which is:
<iframe src="https://web.powerapps.com/webplayer/iframeapp?source=iframe&screenColor=rgba(104,101,171,1)&appId=/providers/Microsoft.PowerApps/apps/[AppId]" />

You need to replace [AppId] with your AppId. Also, if you do not specify a size, the iframe will typically render rather small, so I would recommend either setting the size directly with width and height attributes or using CSS, the latter being more kosher.
I tried setting up an iframe directly with this in Dynamics 365, but that didn't seem to work. All I got was a spinner like this:
Spinner which never stops when trying to add the PowerApp directly
Not sure why this is.
So, I circumvented it and created a very small HTML page like this:
<html>
<body>
<iframe width="1024" height="768" src="https://web.powerapps.com/webplayer/iframeapp?source=iframe&screenColor=rgba(104,101,171,1)&appId=/providers/Microsoft.PowerApps/apps/904e3ac1-1e26-4b4b-a384-971485f6709c"></iframe>
</body>
</html>
This time, when trying the TestPowerApp.html page from my local drive, the spinner was quickly replaced by this, and then by the app.
Then I uploaded the file as a web resource to Dynamics.
Create a web resource with the simple HTML page pointing to the PowerApp
After uploading it, in the web resource screen, you can test the page by clicking on the URL. It should show a webpage with the PowerApp.
The PowerApp running as a web resource in Dynamics 365
Now you can use this web resource wherever you want, like directly in the sitemap:
Added the PowerApp web resource to the Marketing area as a subarea called "Power App Cases"
It should be possible to send parameters to the PowerApp as well, making it context aware, so that we can create PowerApps that for instance show up within a case and show specific data for that case after you have selected it in Dynamics 365. However, if you look at the iframe syntax you will see that the AppId and so on are already query string parameters, and I need to research this a bit more to figure it out. I think the HTML web resource probably has to shuffle the query string parameters from Dynamics over to PowerApps. I will leave this hanging and hope that someone else in the community catches this ball! Otherwise I will try to figure it out later.
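One possible avenue, and note that this is an untested assumption on my part: the PowerApps web player seems to pass additional query string parameters through to the app, where they can be read with the Param() function. So appending something like &caseid=<the case guid> to the iframe src in the web resource, and reading it inside the PowerApp with Param("caseid") ("caseid" being a name I just made up), could be the way to shuffle the context across. If you try it, let me know in the comments.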
As CDS (Common Data Service) and model-driven apps are the same thing as Dynamics 365, just with less sugar on top, this technique can be used there as well.
If you find anything unclear or have any questions, please leave a comment (moderated).

Gustaf Westerlund
MVP, Founder and Principal Consultant at CRM-konsulterna AB
www.crmkonsulterna.se

Setting up Data Export Service without PowerShell Script

Setting up the Dynamics 365 Data Export Service requires an Azure Key Vault, which is typically set up using a PowerShell script found in the Data Export Service setup wizard. However, if you run into issues with that, it might be easier to do it directly in Azure, minimizing the steps of the script. This was a tip that my friend and Business Solutions MVP Scott Durow gave me. He mentions it in his very instructive video, but doesn't actually show how, so I thought I'd detail how I made it work.

First some background. The reason why I even started investigating how to do this manually was that I ran into problems when running the PowerShell script supplied by Microsoft in the wizard.

Press the “i” icon to get a window containing the PowerShell Script that Microsoft recommends for setting up the Key Vault.

When running the PowerShell script, both as myself (not a global admin) and when asking a global admin to do it, it failed in the latter parts. The key vault was created, but some of the access policies seemed to be missing and it just didn't work. My user's rights in Azure were Contributor on the resource group, and interestingly, the global admin and I got different error messages. But when I finally managed to create the key vault manually, I could do it all with my own user, so it didn't seem I was missing any rights.

The first step is to make sure you have all your data straight. The PowerShell script is good for this. Check out Scott's clip if you want to know how to find the different strings; he shows it very clearly.

Just copied from the PS script:

$subscriptionId = '<subscription ID>'
$keyvaultName = 'MyVault'
$secretName = 'MySecretName'
$location = 'North Europe'
$connectionString = 'Server=tcp:<db-name>,1433;Initial Catalog=<catalog>;Persist Security Info=False;User ID={your_username};Password={your_password};MultipleActiveResultSets=False;Encrypt=True;TrustServerCertificate=False;Connection Timeout=30;'
$organizationIdList = '<DYN365GUID>'
$tenantId = '<AZURE TENANT ID>'

The placeholder values have to be replaced by your settings. I will use these variables as references further on in this article.

Search for Key Vault and add the “Key vault”, the top one in this picture

Then we have to set it up. Not so tricky if you have worked with Azure before. Consider whether you want to work in an existing resource group or create a new one. Typically you need Azure SQL services running as well, so it might be good to keep everything together, to be able to see the costs and control who has access, which is why a resource group is a good idea; it should hence already exist. If not, you can create it. I would recommend keeping the Azure SQL server and the key vault in the same resource group. I am not sure if it actually works across different resource groups; it probably does, but I haven't tested it.

Creating the key vault – in this case I am creating a new resource group, normally it would already exist

Azure will add you as the default principal with access to the key vault. We will add the Data Export Service to this later. For now, just create it.

Now we need to open the Key vault and select the “Secrets” section in the menu on the left hand side and press the button:

“+ Generate/Import” 

Then you have to enter your secret name ($secretName) as the name and the connection string ($connectionString) as the value.

Creating a secret – $secretName in Name and $connectionString in Value

Press “Create”.

You should now return to the previous screen and see a row for your secret.
Select it.

It should open the settings panel for the secret. Press the "Tags" part, which is located in the middle, and add a tag that has the organization id list ($organizationIdList) as the key and the tenant id ($tenantId) as the value. I have blurred them out below as they are rather private.

Adding a tag with OrgIdList and tenantId to a Secret

You then need to go back to the key vault and click on the "Access policies" menu item. You should see yourself as a principal, as this was set when we created the key vault. We now need to add the Data Export Service as a valid principal with read access rights.

So click "Add", click "Select Principal" and search for "b861dbcc-a7ef-4219-a005-0e4de4ea7dcf", which is the ID for the Data Export Service. It should show up like this:

It needs to have “Secret Management Operations – GET” permissions and nothing else.
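As a side note: if it is only this step that fails for you, it can also be done with a single cmdlet instead of the whole script. A minimal sketch, assuming the AzureRM PowerShell module that the wizard's script uses, and with 'MyVault' standing in for your $keyvaultName:

# Grant the Data Export Service principal GET access to secrets in the vault
Set-AzureRmKeyVaultAccessPolicy -VaultName 'MyVault' `
    -ServicePrincipalName 'b861dbcc-a7ef-4219-a005-0e4de4ea7dcf' `
    -PermissionsToSecrets get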

Now, go back to the Secret and copy the URI to the Secret.

Getting the URI for the Key Vault Secret

Paste it into the Data Export Service Wizard field for Key Vault.

Fill in the other information and press validate. Hopefully it will work out well!

Some issues

Being too cheap with the Azure SQL tier
If you don't go for an Azure SQL P1, and choose a lower tier instead, you might get this warning:

We tried an S0 for our dev environment, tried to sync a couple of million records, and that just didn't work; we got tons of errors. We upgraded the Azure SQL database to an S2 and then at least we didn't get any errors. We are planning for P1s in UAT and production.

Might have to set an activation date on the secret
It seems that you might have to set an activation date on the secret. Not sure why this is, as the PowerShell script doesn't seem to do it, but it is not very hard.

Added an activation date on the secret from June 4th

Using a database schema that has not been created
The default database schema in the Data Export Service wizard is "dbo". If you change this to something else, like "crm", and you haven't created that schema in the database, you will get an error. It is simple to fix: you just have to go into the database and create the schema. To create the schema "crm", open a query and run:
CREATE SCHEMA crm

For more information on how to create schemas, check this site: https://docs.microsoft.com/en-us/sql/t-sql/statements/create-schema-transact-sql?view=sql-server-2017

Once the schema has been created, there should be no problem using it, as long as the user has permissions on it.

I hope this works for you. If you have any questions, don’t hesitate to leave a comment.

Gustaf Westerlund
MVP, Founder and Principal Consultant at CRM-konsulterna AB
www.crmkonsulterna.se

New API Limit

Photo by Vidar Nordli-Mathisen on Unsplash

Related to my last post, on working with the API quickly, Microsoft have now released official documentation saying that they will, effective March 19th, start limiting the number of API calls per instance that are allowed, to stop what are called "noisy neighbour" problems.

First of all, read the full article here: https://docs.microsoft.com/en-us/dynamics365/customer-engagement/developer/api-limits 

Let's break this down a bit: 60 000 calls per 5 minutes translates to about 200 calls per second. If you exceed this, you will start getting exceptions until the 5-minute period has ended. You are expected to back off and essentially handle this. That is the short version; read the full article for more details.
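To illustrate what "backing off" can look like, here is a minimal sketch in PowerShell that retries a Web API call when the limit is hit. I am assuming here that the throttle surfaces as an HTTP 429 with a Retry-After header, and the function itself is my own construction (Windows PowerShell 5.1 semantics assumed):

function Invoke-CrmWithBackoff {
    param(
        [string]$Uri,
        [hashtable]$Headers,
        [int]$MaxRetries = 5
    )
    for ($attempt = 0; $attempt -le $MaxRetries; $attempt++) {
        try {
            return Invoke-RestMethod -Method Get -Uri $Uri -Headers $Headers
        }
        catch {
            $response = $_.Exception.Response
            # 429 = Too Many Requests: wait as instructed by the server, then try again
            if ($response -and [int]$response.StatusCode -eq 429 -and $attempt -lt $MaxRetries) {
                $wait = $response.Headers["Retry-After"]
                if (-not $wait) { $wait = [math]::Pow(2, $attempt) }  # exponential fallback
                Start-Sleep -Seconds ([int]$wait)
            }
            else {
                throw   # a different error, or out of retries
            }
        }
    }
}

The point is simply that hitting the limit should not be fatal, as long as the integration knows how to wait and resume.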

Update: George Doubinski, a friend of mine and one of the brains behind CRM Tip of the Day, made me aware of the fact that the limit is per user. I will update the article below on what this means.

What does this mean? Is this a problem?

For most organizations, no, at least not the ones I work with; they are not even close to breaking this. If they are using integration tools like Kingswaysoft or other tools that enable multithreaded integrations, but generally do not need that kind of data throughput, they might temporarily be shut down, but it should self-heal after some time, as after each 5-minute time span you will get another 60 000 requests. That could probably quite easily be fixed by checking the settings of the integration tool. Update: Also, if you integrate each system using a separate account, you do not risk one system temporarily blocking many other systems from integrating to Dynamics 365. If you are using normal users, this will of course entail a certain license cost, which is why I generally recommend using app users for integrations, if possible. And following this change, you should have one app user for each integrating system.

However, there are some organizations where I foresee issues, and these are organizations which meet a combination of any of the following criteria:

  1. Third party products, like Marketing Automation tools (ClickDimensions, FreshRelevance, Salesforce Marketing Cloud), which have not had time to address this or do not have it in scope yet, and which integrate large amounts of data into Dynamics 365. Update: Especially if the user they are using to integrate, the service user, is a normal user, either one used by an actual person or one shared with integrations to other systems.
  2. Legacy code that has been upgraded to the new SDK but uses an inefficient architecture; it can for example have issues using ExecuteMultiple, which the article above describes as the recommended best practice, typically because the architecture of the code would require major rewriting to allow for ExecuteMultiple. Update: In this case I strongly recommend looking at using a dedicated user for this specific integration, to isolate any limitations set on that user.
  3. Organizations with multiple heavy integrations to Dynamics 365. It will be hard to control that the sum total does not exceed 60k per 5 minutes, and to handle back-off in a controlled way. The only reasonable way would probably be to rewrite the integrations to use a proxy or a queue, like Azure Service Bus queues, and have a single integration interface. Probably a lot easier to write in a blog article than to do in real life. Update: This was an incorrect deduction on my part; as the limit is not based on the sum total but on the sum per user, this is not a risk unless many integrations use the same user, which I do not recommend.
  4. Organizations with complex, heavy integrations with thousands of lines of integration code that needs to be redesigned, rewritten, tested and deployed before March 19th. And there is no way to test it, as there is no TAP/beta program for this "feature". Update: This is still very relevant. Even such a small change as changing the integrating user for an external system should be thoroughly tested, and for larger implementations that can be hard to do before March 19.

Example

A typical case I see is a B2C organization running Dynamics 365 with a marketing automation add-on with email tracking and web tracking. They also have a very time-critical integration of orders, to be able to handle any incidents. Even if the order integration in itself does not reach the limits, it is not unforeseeable that a mail blast, especially a good mail blast, where many customers read the emails, click the links, go to your site, check your offers and start ordering, would cause a surge of traffic on the Marketing Automation to Dynamics 365 API integration. This of course depends on the settings, but perhaps it is critical that all events are tracked to Dynamics. With a mail blast to, let's say, 1 million recipients, you would quickly hit the 60k/5 min limit. When this happens, it would also block all orders from going to Dynamics, causing an effective stop to working with any new incidents in the system.
Update: This is, of course, only relevant if both systems are integrating using the same user. Don't. However, the marketing automation system above would hit the limit fast anyway, and if the supplier of this system hasn't had time to update their product/service, it will handle this incorrectly. I recommend checking your integrating systems and trying to turn down the verbosity of what they write to Dynamics 365. Then, after March 18, when we see how this falls out in detail, you can test a more verbose setting in a test environment and see how that goes.

Summary

For small and medium companies with low complexity, working mainly with B2B, I don't see much of a problem. Larger companies with complex integrations, large databases and integrations to web tracking and email tracking, which often will be B2C companies with higher levels of automation and larger databases of customers, will probably have bigger problems with this and need to start thinking about it right now.

We will need to come back to this subject after March 19 to see how it really works. But I think the real problem will be for the larger orgs with many, heavy integrations.

I would be really glad to hear your views on this, like I got George's.

Gustaf Westerlund
MVP, Founder and Principal Consultant at CRM-konsulterna AB
www.crmkonsulterna.se