Migrating data to a live on-premises CRM system?

While working on one project, we were required to move data from a legacy system into a live production system within a limited blackout window. You can estimate the time required based on the number of records, server configuration, and so on, and even add a buffer. But you never know when Murphy will play his part.

Below are a few points that, if considered, can help keep Murphy in his seat. (Just some pre-checks that help the process :))

  • Confirm disk space on the SQL, web, and application servers. Keep some buffer space, as logs will grow during the import.
  • As your target system is a live production system, there will surely be scheduled maintenance jobs running to keep the servers healthy as part of disaster management. These jobs are life savers, but the problem is that they are scheduled during down time because they consume high resources, and unfortunately that is the only time when we can perform our import. You may therefore have to pause the maintenance jobs, but do remember to turn them back on once the import is done. Some of the resource-consuming jobs are: consistency check, database backup, async-operation cleanup, POA cleanup, etc. Pausing these jobs helps the import utilize the maximum available server resources.
  • Check the database log size and clear it. The log will grow during the import, and if it reaches the maximum available threshold, the import process throws timeout errors. Also review the shrink settings; the Simple recovery model is preferred for the duration of the import.
  • Check CPU and memory usage on the SQL, web, and application servers.
  • Check for blocking on the SQL server: look for any scripts that are blocking, or for slow-running queries.
  • Clear the AsyncOperationBase table.
  • You may run into a scenario where a restart of the SQL service is required. Make sure this does not affect any other process. Also, in the case of NLB, restarting the SQL service switches the active node, so you will have to include that node in your performance checks as well.
  • You may have to disable user logging and turn off workflows and plug-ins.

And last but not least: take a FULL database backup before starting the process. You may also have to do the import in several passes due to the amount of data and the limited down time. Identify the steps that can be performed outside the blackout window without affecting the live system; this buys extra time for completing the critical steps.

I would recommend doing the import in multiple small passes, which helps keep a buffer and reduces the chances of breaking things or running on the edge. After all, “Rome was not built in a day”.
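The multi-pass idea can be sketched as a simple chunked loop. This is a minimal sketch, not CRM-specific code; `import_batch` is a hypothetical stand-in for whatever call actually pushes one batch into the target system:

```python
# Minimal sketch of a multi-pass import: split the records into small
# batches so each pass stays well inside the blackout window and a
# failure only invalidates one batch, not the whole run.
# `import_batch` is a hypothetical stand-in for your real import call.

def chunk(records, size):
    """Yield consecutive batches of at most `size` records."""
    for start in range(0, len(records), size):
        yield records[start:start + size]

def run_import(records, import_batch, batch_size=500):
    imported, failed = 0, []
    for batch in chunk(records, batch_size):
        try:
            import_batch(batch)      # push one batch to the live system
            imported += len(batch)
        except Exception:
            failed.extend(batch)     # collect for a retry in a later pass
    return imported, failed
```

Failed batches are collected rather than aborting the run, so a later pass can retry just those records.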

These are some of the steps that helped me. As always, they may not match your requirements exactly, but some of them surely will. And I don’t guarantee any of these steps; the risk is yours, as it’s your production system.

If you have anything to add, please write it in a comment and I will update the content. Thanks!

Performance issue after upgrade to CRM 2015

While working on one upgrade project, multiple performance issues were experienced after the successful completion of all activities. While we were trying to troubleshoot the issue, possibly in the wrong direction, I found this great article today from Sudhir Nory which explains the root cause. This may not be the cause of every performance issue on upgrade projects, but it is surely helpful to know.

When a database is upgraded to SQL Server 2014 from any earlier version of SQL Server, the database retains its existing compatibility level if it is at least 100 (SQL Server 2008). If the compatibility level is 110 or lower, it uses the old query optimizer, which may not be effective. We found that the compatibility level displayed in the Compatibility level list box was SQL Server 2008 for the ORG_MSCRM database. SQL Server 2014 includes substantial improvements to the component that creates and optimizes query plans, and these improvements are utilized only when the database compatibility level is set to 120 (SQL Server 2014).

http://blogs.msdn.com/b/crminthefield/archive/2015/06/04/improve-crm-query-performance-using-compatibility-version-120-when-using-sql-2014.aspx

Microsoft System Center Management Pack for Dynamics CRM 2015

You can get the System Center Operations Manager management pack from the Microsoft Download Center.

The System Center Management Pack for Microsoft Dynamics CRM 2015 enables you to administer the Microsoft Dynamics CRM 2015 application in Microsoft System Center Operations Manager (SCOM) 2012 or a later version.

Feature Summary

  • Monitors the availability and health of the following components:
    • Microsoft Dynamics CRM Server 2015
    • Microsoft Dynamics CRM 2015 E-mail Router
    • Microsoft Dynamics CRM 2015 Reporting Extensions
  • Monitors the availability and health of the following component services:
    • Microsoft Dynamics CRM Asynchronous Processing Service
    • Microsoft Dynamics CRM Asynchronous Processing Service (maintenance)
    • Microsoft Dynamics CRM Sandbox Processing Service
    • Microsoft Dynamics CRM E-mail Router Service
    • Microsoft Dynamics CRM Unzip Service
    • World Wide Web Publishing Service
    • Indexing Service
  • Monitors the availability and health of the following application components and functionality:
    • Operability of ISV plug-ins
    • Web application requests processing, SOAP exceptions, and unexpected failures
  • Monitors the performance metrics of the following components:
    • Web application requests processing
    • Database query processing
  • Monitors the system for configuration-related failures.

Supported Operating System

Windows Server 2008 R2 SP1, Windows Server 2012, Windows Server 2012 Essentials

Other Software

  • System Center Operations Manager 2012 or a later version
  • Microsoft Dynamics CRM Server 2015

 

***Reference: The Microsoft Dynamics CRM Blog

Plug-in performance analysis and Plug-in Type Statistics

There are a number of ways to determine what is happening with a plug-in in the CRM application when the problem occurs outside of the database layer.

The first is analysis of plug-in performance. For plug-ins operating in the sandbox, plug-in statistics can be queried within the application to give an indication of how often a plug-in is running and how long it typically takes to run. A plug-in type statistic record stores information about the run-time behavior of a plug-in type, along with other statistical information. This entity is used by the platform to record execution statistics for plug-ins registered in the sandbox (isolation mode). The schema name for this entity is PluginTypeStatistic.
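A FetchXML query against this entity can surface the slowest or most failure-prone sandboxed plug-ins. The sketch below just builds and validates such a query; the attribute names are assumptions based on the PluginTypeStatistic entity schema, so verify them against your organization’s metadata before use:

```python
# Sketch of a FetchXML query against the PluginTypeStatistic entity to
# spot slow or failing sandboxed plug-ins. Attribute names below are
# assumed from the entity schema; verify against your org's metadata.
import xml.etree.ElementTree as ET

def plugin_stats_fetchxml(top=10):
    return f"""
    <fetch top="{top}">
      <entity name="plugintypestatistic">
        <attribute name="plugintypeidname" />
        <attribute name="executecount" />
        <attribute name="failurecount" />
        <attribute name="averageexecutetimeinmilliseconds" />
        <order attribute="averageexecutetimeinmilliseconds" descending="true" />
      </entity>
    </fetch>"""

# Sanity-check that the query is well-formed XML before sending it.
root = ET.fromstring(plugin_stats_fetchxml())
```

The resulting string would be passed to a RetrieveMultiple-style call; here it is only parsed locally to confirm it is valid XML.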

 

 

When certain errors occur, using the server trace files to understand where related problems may be arising in the platform can also be useful.

Why aren’t plug-ins/workflows intended for long-running processes?

Plug-ins/workflows aren’t batch-processing mechanisms. Long-running or high-volume actions aren’t intended to be run from plug-ins or workflows.

Dynamics CRM isn’t intended to be a compute platform, and especially isn’t intended as the controller for driving big groups of unrelated updates. If you need to do that, offload the work and run it from a separate service, such as an Azure worker role for Dynamics CRM Online (see here) or a Windows Service for on-premises deployments.
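The offloading pattern in miniature: the external service owns the long-running loop and issues only short, individual requests to CRM. A minimal sketch, where `update_record` is a hypothetical stand-in for a single SDK update call:

```python
# Sketch of offloading bulk work to an external service: the worker owns
# the long-running loop and sends CRM only short, individual requests,
# instead of doing the whole batch inside a plug-in or workflow.
# `update_record` is a hypothetical stand-in for one SDK update call.

def process_backlog(record_ids, update_record):
    """Drive many short requests from outside the platform."""
    done = 0
    for record_id in record_ids:
        update_record(record_id)   # one short transactional request each
        done += 1
    return done
```

Each iteration is a short transaction from CRM’s point of view, so no single request holds locks or sandbox time for long.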

This question is still open at one end, and any new inputs are welcome. I will be happy to add them to the blog!

Only update things you need to

While it is important not to reduce the benefit of a Dynamics CRM system by excluding activities that would be beneficial, requests are often made to include customizations that add little business value but drive real technical complexity.

Consider a simple business scenario:

If every time we create a task we also update the user record with the number of tasks currently allocated to that user, that could introduce a secondary level of blocking, as the user record would now also be heavily contended. It would add another resource that each request may need to block and wait for, despite not necessarily being critical to the action. In that example, consider carefully whether storing the count of tasks against the user is important, or whether the count can be calculated on demand or stored elsewhere, for example using the native hierarchy and rollup field capabilities in Dynamics CRM. A limited-update philosophy should also be adopted when extending CRM using processes, plug-ins, or scripts.
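The "calculate on demand" alternative can be sketched in a few lines: rather than writing a counter back to the user record on every task create (contending for a write lock on the user row), derive the count when it is needed. The data shapes here are illustrative, not CRM entities:

```python
# Sketch of the "calculate on demand" alternative: derive the per-user
# task count from the task data itself instead of maintaining a counter
# on the user record (which would contend for a write lock every time).
from collections import Counter

def task_counts(tasks):
    """Count open tasks per owner from the task list itself."""
    return Counter(t["owner"] for t in tasks if t["state"] == "open")

tasks = [
    {"owner": "alice", "state": "open"},
    {"owner": "alice", "state": "open"},
    {"owner": "bob",   "state": "closed"},
]
counts = task_counts(tasks)   # no user record was updated to get this
```

No write lock on the user row is ever taken; the count is simply computed from data that already exists.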

 

 

This is something that reminds me of normalization in DBMS :)

***Reference: Scalable Dynamics CRM Customization documentation

Transactions in Dynamics CRM and “Waiting for Resources” process status

In Dynamics CRM, the database is at the heart of almost all requests to the system and the place data consistency is primarily enforced.

  • No CRM activities, either core platform or implementation, work completely in isolation.
  • All CRM activities interact with the same database resources, either at a data level or an infrastructure level such as processor, memory, or IO usage.
  • To protect against conflicting changes, each request takes locks on resources to be viewed or changed.
  • Those locks are taken within a transaction and not released until the transaction is committed or aborted.

A common reason that problems occur in this area is a lack of awareness of how customizations can affect transactions. SQL Server determines the appropriate locks to be taken by transactions on the data, such as:

  • When retrieving a particular record, SQL Server takes a read lock on that record.
  • When retrieving a range of records, in some scenarios it can take a read lock on that range of records or the entire table.
  • When creating a record, it generates a write lock against that record.
  • When updating a record, it takes a write lock against the record.
  • When a lock is taken against a table or record, it’s also taken against any corresponding index records.

Let’s consider SQL Server database locking and the impact of separate requests trying to access the same data. In the following example, creating an account has set up a series of processes, some with plug-ins that are triggered as soon as the record is created, and some in a related asynchronous workflow that is initiated at creation. The example shows the consequences when an account update process has complex post processing while other activity also interacts with the same account record. If an asynchronous workflow is processed while the account update transaction is still in progress, this workflow could be blocked waiting to obtain an update lock to change the same account record, which is still locked.

[Figure: an asynchronous workflow blocked waiting for the lock held by the account update transaction]

It should be noted that transactions are only held within the lifetime of a particular request to the platform. Locks aren’t held at a user session level or while information is being shown in the user interface. As soon as the platform has completed the request, it releases the database connection, the related transaction, and any locks it has taken.
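The blocking scenario above can be mimicked in miniature with an ordinary lock: a second request that needs the same record simply waits until the first transaction releases it. This is only an analogy for illustration, not how SQL Server locking is implemented:

```python
# Miniature illustration of the blocking described above: request A holds
# a lock on the "account record" while doing slow post-processing, and
# request B (the async workflow) must wait for the same lock.
import threading
import time

account_lock = threading.Lock()   # stands in for the row lock on the account
events = []

def update_account():
    with account_lock:            # transaction A takes the write lock
        events.append("A:locked")
        time.sleep(0.2)           # slow plug-in / post-processing
        events.append("A:done")
    # lock released when the "transaction" commits

def async_workflow():
    time.sleep(0.05)              # starts while A is still running
    with account_lock:            # blocks until A releases the lock
        events.append("B:locked")

a = threading.Thread(target=update_account)
b = threading.Thread(target=async_workflow)
a.start(); b.start(); a.join(); b.join()
```

The workflow thread always observes the lock only after the update thread has finished, just as the blocked asynchronous workflow in the example waits for the account transaction to commit.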

**REFERENCE – Scalable Dynamics CRM Customization document.

Timeouts and Platform constraints with Dynamics CRM

The Dynamics CRM platform has a number of constraints (which I used to treat as errors) imposed to prevent any one action from impacting the rest of the system. The question was whether the constraints can be lifted, but this is rarely a good approach when you consider the broader implications. As per the scalability document provided by Microsoft, at the heart of these constraints is the idea that the Dynamics CRM platform is a transactional, multi-user application where quick response to user demand is the priority. It’s not intended to be a platform for long-running or batch processing. It is possible to drive a series of short requests to Dynamics CRM, but it isn’t designed to handle batch processing or large iterative processing. In those scenarios, a separate service can host the long-running process, driving shorter transactional requests to Dynamics CRM itself. It is worth being aware of and understanding the platform constraints that do exist, so that you can allow for them in your application design. Also, if you do encounter these errors, you can understand why they are happening and what you can change to avoid them.

  • Plug-in timeouts
    • Plug-ins will time out after 2 minutes
    • Long-running actions shouldn’t be performed in plug-ins. This protects the platform, the sandbox service, and ultimately the user from a poor experience
  • SQL timeouts
    • Requests to SQL Server time out after 30 seconds to protect against long-running requests
    • Provides protection within a particular organization and its private database
    • Also provides protection at a database server level against excessive use of shared resources such as processors/memory
  • Workflow limits
    • Operates under a Fair Usage policy
    • No specific hard limits, but resources are balanced across organizations
    • Where demand is low, an organization can take full advantage of the available capacity, but where demand is high, resources and throughput are shared
  • Maximum concurrent connections
    • Maximum connection pool limit of 100 connections from each web server’s IIS connection pool to the database
    • I have never seen a scenario where this should be increased. If you hit this limit, it is an indication of an error in the system; look at why so many connections are blocking
    • With multiple web servers, each with 100 concurrent connections to the database at a typical <10 ms per request, this suggests a throughput of >10,000 database requests per second for each web server. This should not be required, and you would hit other challenges well before that
  • ExecuteMultiple
    • ExecuteMultiple is designed to assist with collections of messages being sent to Dynamics CRM from an external source
    • Processing large groups of these requests can tie up vital resources in CRM at the expense of more response-critical requests by users, so this is limited to 2 concurrent ExecuteMultiple requests per organization
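The ExecuteMultiple ceiling can be respected on the client side by capping concurrency, for example with a semaphore. A minimal sketch, where `send_execute_multiple` is a hypothetical stand-in for the real SDK call:

```python
# Client-side sketch of respecting the 2-concurrent-ExecuteMultiple limit:
# a semaphore caps how many batch requests are in flight at once.
# `send_execute_multiple` is a hypothetical stand-in for the SDK call.
import threading
import time
from concurrent.futures import ThreadPoolExecutor

MAX_CONCURRENT = 2                 # platform allows 2 concurrent requests
gate = threading.Semaphore(MAX_CONCURRENT)
counter_lock = threading.Lock()
in_flight = 0
peak = 0                           # highest concurrency actually observed

def send_execute_multiple(batch):
    global in_flight, peak
    with counter_lock:
        in_flight += 1
        peak = max(peak, in_flight)
    time.sleep(0.02)               # pretend the platform processes the batch
    with counter_lock:
        in_flight -= 1

def guarded_send(batch):
    with gate:                     # blocks while 2 batches are already running
        send_execute_multiple(batch)

batches = [list(range(i, i + 10)) for i in range(0, 100, 10)]
with ThreadPoolExecutor(max_workers=8) as pool:
    list(pool.map(guarded_send, batches))
```

Even with eight worker threads, the semaphore guarantees no more than two batches are ever in flight, so the client never trips the platform limit.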

A common misconception is that these limits apply only to Dynamics CRM Online. This isn’t accurate: they apply equally to an on-premises deployment of Dynamics CRM. In an on-premises deployment there is more scope to configure and relax these constraints, which gives the impression that Dynamics CRM Online throttles more.

However, as described earlier, hitting these constraints is an indication of behavior that affects other areas of your system, so investigating and addressing the root cause of the behavior is preferable to simply loosening the constraints, even in an on-premises deployment. In an on-premises deployment you’re still affecting your users with slower than necessary performance.

How does Guid.NewGuid() affect CRM performance?

Recently I read an interesting article which explains how CRM uses sequential GUIDs for performance, and which recommends that users creating a record in CRM using an SDK message, either in a plug-in or a workflow, let the platform assign the ID instead of using System.Guid.NewGuid().

Microsoft Dynamics CRM SDK Best practices for developing with Microsoft Dynamics CRM, states, “Allow the system to automatically assign the GUID (Id) for you instead of manually creating it yourself”. This suggestion allows Microsoft Dynamics CRM to take advantage of sequential GUIDs, which provide better SQL performance.

A plug-in or application that needs to create records in CRM using the SDK sometimes populates the record’s ID with a GUID generated by System.Guid’s NewGuid method. NewGuid() does not generate sequential GUIDs, which hurts insert performance.
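The effect is easy to demonstrate: random GUIDs arrive in no particular order, so consecutive inserts land all over the clustered index, while a sequential scheme always appends at the end. A small illustration (the "sequential" generator below is purely illustrative and is not CRM’s or SQL Server’s NEWSEQUENTIALID algorithm):

```python
# Illustration of why random GUIDs hurt clustered-index inserts: random
# keys arrive unordered (causing page splits), while sequential keys
# always append at the end. The sequential generator here is purely
# illustrative; it is NOT the algorithm CRM or NEWSEQUENTIALID() uses.
import uuid

def sequential_guids(n):
    """Monotonically increasing UUIDs built from a simple counter."""
    return [uuid.UUID(int=i) for i in range(1, n + 1)]

random_keys = [uuid.uuid4() for _ in range(1000)]
seq_keys = sequential_guids(1000)

random_sorted = random_keys == sorted(random_keys)   # almost surely False
seq_sorted = seq_keys == sorted(seq_keys)            # True: always appends
```

An index receiving the sequential keys only ever grows at the end; one receiving the random keys must insert into the middle of existing pages, which is the fragmentation cost the article describes.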

A detailed explanation is provided in the link below. Thanks!

http://blogs.msdn.com/b/crminthefield/archive/2015/01/19/the-dangers-of-guid-newguid.aspx

Performance Best Practices for Dynamics CRM

  1. Use a multithreaded application.
    Add threading support to your application to break up the work across multiple CPUs.
  2. Allow the system to automatically assign the GUID instead of manually creating it.
    This suggestion allows Microsoft Dynamics CRM to take advantage of sequential GUIDs, which provide better SQL performance.
  3. Use early-bound types wherever possible.
  4. Write plug-ins that execute faster.
    If you intend to register your plug-ins for synchronous execution, we recommend that you design them to complete their operation in less than 10 seconds.
  5. Limit the data you retrieve.
    When you use the methods that retrieve data from the server, retrieve the minimum amount of data that your application needs. You do this by specifying the column set, which is the set of entity attributes to retrieve.
  6. Limit operations that cascade to related entities.
    When you use the Update method or UpdateRequest message, do not set the OwnerId attribute on a record unless the owner has actually changed. When you set this attribute, the changes often cascade to related entities, which increases the time that is required for the update operation.
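Practice 5 (limit the data you retrieve) in miniature: request only the columns the caller needs rather than whole records. A language-neutral sketch; `DATABASE` and `retrieve` are illustrative stand-ins, not the SDK’s actual retrieve call and column set:

```python
# Sketch of practice 5: request only the attributes the caller needs
# (a "column set") rather than every column on the entity.
# `DATABASE` and `retrieve` are illustrative stand-ins for the SDK call.

DATABASE = {
    "accounts": {
        "A-1": {"name": "Contoso", "phone": "555-0100",
                "address": "1 Main St", "revenue": 1_000_000},
    }
}

def retrieve(entity, record_id, column_set):
    """Return only the requested columns for one record."""
    record = DATABASE[entity][record_id]
    return {col: record[col] for col in column_set}

# Pull just the two fields the screen needs, not the whole record.
slim = retrieve("accounts", "A-1", ["name", "phone"])
```

The caller never receives (or pays the transfer cost for) the address and revenue columns it did not ask for.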

http://msdn.microsoft.com/en-in/library/gg509027.aspx