If you are planning to integrate Microsoft Dynamics CRM 2011 with another business application, there are several items which you will want to have in place prior to go-live to ensure the success of the integration. A good partner like Customer Effective will do their best to make the integration run error-free; however, any integration needs periodic care and feeding to maintain good performance.
The following list of items was written with Scribe Insight in mind, but most of them would apply to any integration tool, such as SQL Server Integration Services (SSIS).
1. A local integration administrator: someone on your staff who has been trained on the integration tool and is familiar with the integration processes. This person is the first line of troubleshooting when a problem arises. While you may still rely on your partner for enhancements and break-fixes, if nobody on your team is familiar with how the integration works, you may find yourself in a situation where the integration stops and nobody locally knows about it.
2. Error trapping: In Scribe, this takes the form of setting up rejected rows tables to capture rows that fail during the integration. This step is optional and frequently overlooked in integration development, and skipping it can cause unnecessary problems down the road. During design and testing everything runs well, but if your integration later stops and a record fails to migrate, it can trigger a chain reaction of failures, as any child records that depend on that record will also fail. If you don't capture the errors and failed rows, it becomes very difficult to troubleshoot and identify the problem.
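To make this concrete, here is a minimal sketch of what a rejected rows table can look like. The table and column names below are illustrative assumptions, not Scribe's generated schema; the key idea is that the full source row is preserved alongside enough error detail to troubleshoot and retry it later.

```sql
-- Illustrative only: a rejected rows table mirrors the source row
-- and adds error metadata so failures can be diagnosed and retried.
CREATE TABLE dbo.Order_RejectedRows (
    RejectedRowId   INT IDENTITY(1,1) PRIMARY KEY,
    -- Original source columns, preserved as-is
    OrderNumber     NVARCHAR(50),
    CustomerNumber  NVARCHAR(50),
    OrderDate       DATETIME,
    TotalAmount     MONEY,
    -- Error metadata captured at failure time
    RejectedOn      DATETIME DEFAULT GETDATE(),
    ErrorMessage    NVARCHAR(MAX)
);
```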
3. Alerts: If there is a problem, somebody needs to know about it, so it is very important that you be notified when one occurs. Of course, before alerts can do any good, you need to have identified your integration administrator (#1). Alerts should go to the administrator, who can open the Scribe Console, review the execution log and rejected rows, and determine the issue. You can also have the alerts copied to your integration partner (like Customer Effective).
4. Redundancy: In previous posts, I've discussed how to filter your data to import a delta of records created or changed since the last run date. If you do this, it is important to build some level of redundancy into your process so that if a row fails for some reason, it will be retried several times. In many cases, failures are dependency issues: the order is related to a customer that has not yet been imported, so it fails, but when the customer load runs next, the customer is loaded. In this scenario, if the order fails initially, it will come in successfully when retried, because all of its dependencies are now present.
There are several approaches you can take to build in redundancy for failed rows. If you use XML message queues in Scribe, redundancy is automatic: Scribe will retry failed messages a number of times. If your process is more of a batch process, such as an hourly integration job, you will need to account for retrying failures in your DTS design. Any of the following approaches can work; a sketch of each follows the list:
Use a “dirty bit”: this approach adds a bit column to the source table. When a record is successfully imported, set the value to 0, and if the import fails, set it to 1. You can then point the query in your integration process at records where dirtybit = 1, so a failed row is retried the next time the job runs. It also gives you a very easy way to see which rows have not yet been imported successfully.
Have a supplemental integration process that uses the rejected rows table as its source. When a row fails in Scribe, the complete row is written to the rejected rows table; a supplemental DTS can query that table to retry the row and, if successful, delete it from the SCRIBEINTERNAL database.
Build redundancy into your time thresholds. Say you are bringing in all records created or updated in the past hour: if the integration stops, or one of the records cannot be imported, any rows not imported by the next hourly run will be left behind. To accommodate that, widen your time threshold; for example, instead of looking for records created or updated in the past hour, look for records created or updated in the past day. Records will be updated multiple times, but if a row fails, it will also be retried multiple times.
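Here is a minimal T-SQL sketch of each approach. Every name below (SourceOrders, dirtybit, modifiedon, and the Order_RejectedRows table sketched earlier) is an illustrative assumption, and the literal key values stand in for whichever row the integration just processed:

```sql
-- Approach 1: the "dirty bit" source query. dirtybit = 1 means the
-- row has not yet been imported successfully.
SELECT OrderNumber, CustomerNumber, OrderDate, TotalAmount
FROM dbo.SourceOrders
WHERE dirtybit = 1;

-- On a successful import, flip the flag so the row is not picked
-- up again; a failed row keeps dirtybit = 1 and is retried.
UPDATE dbo.SourceOrders
SET dirtybit = 0
WHERE OrderNumber = 'SO-1001';  -- placeholder for the imported row

-- Approach 2: a supplemental retry process that reads the rejected
-- rows table as its source, then deletes rows that retry cleanly.
SELECT OrderNumber, CustomerNumber, OrderDate, TotalAmount
FROM dbo.Order_RejectedRows;

DELETE FROM dbo.Order_RejectedRows
WHERE OrderNumber = 'SO-1001';  -- placeholder for the retried row

-- Approach 3: a widened time threshold. Looking back a day instead
-- of an hour means a row that fails one hourly run is retried by
-- every run for the next 24 hours.
SELECT OrderNumber, CustomerNumber, OrderDate, TotalAmount
FROM dbo.SourceOrders
WHERE modifiedon >= DATEADD(DAY, -1, GETDATE());
```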
By building redundancy into your process, most failures and data issues become self-healing.
5. Scheduled maintenance for the SCRIBEINTERNAL database: The heart of Scribe is the SCRIBEINTERNAL database in SQL Server. It stores your integration process schedule, the execution log, product registration information, and other items the Scribe application needs to function. Maintain this database just like your other SQL databases: back it up regularly, update statistics, and rebuild indexes.
Scribe makes this easy by providing a SQL maintenance script called Scribeinternal.sql, which you can find in the C:\Program Files (x86)\Scribe folder. The script deletes old execution log entries and alerts, updates statistics, and performs other database maintenance to keep Scribe performing optimally.
Take note that by default, the script deletes execution log entries that are more than a year old. If you have a very frequent or high-volume integration, or if you ever notice that the execution log in the console is unresponsive, you may want to edit the script to delete more recent entries. In most cases you really only need the execution log for the past couple of weeks: that is enough for any jobs that depend on last-run information to function correctly, and it keeps any recent issues visible. There is usually no issue with having the maintenance script delete execution log entries older than one month, as in the sketch below.
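For example, the retention delete could be tightened along these lines. This is a sketch only; the execution log table and column names here are assumptions, so verify them against your copy of Scribeinternal.sql before editing:

```sql
-- Sketch only: tighten execution log retention from one year to one
-- month. The table and column names are assumptions; check them
-- against your copy of Scribeinternal.sql before using this.
DELETE FROM dbo.EXECUTIONLOG
WHERE ExecutionTime < DATEADD(MONTH, -1, GETDATE());
```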
Bonus tip: Want to give your users an easier way to monitor the success or failure of integration jobs? Bring it into CRM.