Duplicate Management: Implementing a Well-Rounded Approach

By: Bethany Avery, Director

When it comes to managing duplicate records in Blackbaud CRM™, it's important to implement a well-rounded approach. Even the most seasoned organizations can benefit from a periodic review of their processes to minimize gaps and ensure they are taking advantage of a broad range of functionality. When you review your processes holistically, we recommend four things.

1. Have a blend of automated and manual processes: First and foremost, it's important that you have a solid nightly duplicate search and merge process in place to keep up with the most egregious duplicates that enter the system over the course of the day. This process should look for your highest match scores (many organizations use 90% and higher as the threshold) and automatically merge them so that valuable staff time is not lost reviewing obvious duplicates. Keep in mind that the standard processes include checks to exclude any record types you would not want to risk pulling into an automated process (such as major donors, board members, and organizations), so this shouldn't be a reason to shy away from automation. Run the process regularly to keep its run times from getting out of hand.

It's equally important to have a process that specifically targets the types of records you do want to manually review and don't want to automate. If you are always excluding, say, major donors, child sponsors, members, or whichever your highest-touch audiences are, build a process specifically targeted to this group so you can periodically confirm there are no duplicates on this front either. Such duplicates are exceedingly common, since even major donors, board members, and the like may engage with your website, virtual events, and other channels.
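To make the triage rule concrete, here is a minimal sketch of the nightly decision described above: pairs at or above an auto-merge threshold are merged automatically, while protected record types always route to manual review. The 90% threshold, the field names, and the `triage` function are all illustrative assumptions, not Blackbaud CRM APIs.

```python
# Hypothetical triage of candidate duplicate pairs. Threshold and record
# types are illustrative; adjust to your organization's policy.
AUTO_MERGE_THRESHOLD = 90          # % match score; many orgs use 90 and higher
PROTECTED_TYPES = {"major donor", "board member", "organization"}

def triage(pair):
    """Decide what to do with one candidate duplicate pair."""
    types = pair["source"]["types"] | pair["target"]["types"]
    if types & PROTECTED_TYPES:
        return "manual review"     # never auto-merge high-touch records
    if pair["score"] >= AUTO_MERGE_THRESHOLD:
        return "auto merge"        # obvious duplicate; no staff time needed
    return "manual review"

pair = {
    "source": {"name": "Jon Smith", "types": {"constituent"}},
    "target": {"name": "John Smith", "types": {"constituent"}},
    "score": 94,
}
print(triage(pair))  # -> auto merge
```

Note that the exclusion check runs before the score check, so a 99% match on a board member still lands in the manual queue, mirroring the safeguards in the standard processes.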

2. Leverage both out-of-the-box and low-lift custom functionality: One of the biggest pain points of the out-of-the-box duplicate search functionality in Blackbaud CRM™ is that it is heavily dependent on a physical mailing address; it is not great at finding pairs that match on everything else (including email address) when a mailing address is missing from one or both records. To get around this, we have built many variations of a custom global change that can search for and stage duplicate pairs based on the criteria you need most, for example, where first name, last name, and email address are exact matches, or where names have a fuzzy match and email addresses match. These lightweight global changes stage the merge pairs as a "Duplicate record source" for use in the out-of-the-box merge processes, so you effectively outsource the weaker aspects of the out-of-the-box search while still leveraging the standard merge process itself, which works quite well. Pair targeted global changes such as these (which focus on email) with the out-of-the-box processes (which are quite strong with mailing addresses), and you have a well-rounded yet economical way to stay on top of your dupes.
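As a rough sketch of the "fuzzy name plus exact email" pairing described above (not BrightVine's actual global change), the following stages candidate merge pairs using only the Python standard library; the record shape and the 0.85 similarity threshold are assumptions for illustration.

```python
# Stage duplicate pairs where email matches exactly and names are a fuzzy match.
from difflib import SequenceMatcher
from itertools import combinations

def name_similarity(a, b):
    """Crude fuzzy score in [0, 1], case-insensitive."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def stage_pairs(records, name_threshold=0.85):
    """Return candidate merge pairs: same email, similar names."""
    pairs = []
    for r1, r2 in combinations(records, 2):
        if r1["email"] and r1["email"].lower() == r2["email"].lower():
            if name_similarity(r1["name"], r2["name"]) >= name_threshold:
                pairs.append((r1["id"], r2["id"]))
    return pairs

records = [
    {"id": 1, "name": "Katherine Jones", "email": "kjones@example.org"},
    {"id": 2, "name": "Kathrine Jones",  "email": "KJones@example.org"},
    {"id": 3, "name": "Sam Patel",       "email": "spatel@example.org"},
]
print(stage_pairs(records))  # -> [(1, 2)]
```

In practice the staged pairs would be written to a duplicate record source table for the standard merge process to consume, rather than returned as a Python list.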

"The work provided by BrightVine resulted in a 98% decrease in manual work for my team. Accurate data leads to increased revenue for the organization and better stewardship of our donors." Lori Mead, Save The Children

3. Ensure your import logic doesn't add to your messes: Even with the most well-oiled nightly processes, you need to ensure that you aren't creating more duplicates at the source than those processes can keep up with. Fix the problem upstream, not downstream, to the extent possible. For those leveraging the BrightVine Data Link, there are myriad ways to control for duplicates at the point of entry: configurable weighted match scores that let you increase or decrease duplicate scores based on your particular needs, matching by common formulas like Name Soundex, point-and-click selection of the incoming data points to use in your formula, and much more. At a minimum, make sure that your imports are set up to take advantage of these. On top of that, we recommend periodically reviewing the duplicates caught in your nightly processes. Which process(es) do the bulk of them originate from? Often you'll find that one or a few problem platforms or integrations are responsible for the lion's share of your duplicates. Where this is the case, take a closer look at those imports, in particular to fine-tune the matching logic based on the particular nuances of those platforms.
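To illustrate the two techniques named above, here is a sketch of American Soundex plus a weighted match score. The weights, field names, and `match_score` function are hypothetical examples of the configurable scoring idea, not the BrightVine Data Link's actual implementation; the Soundex encoding itself follows the standard American Soundex rules.

```python
def soundex(name):
    """American Soundex: first letter plus three digits, e.g. 'Robert' -> 'R163'."""
    codes = {}
    for letters, digit in (("BFPV", "1"), ("CGJKQSXZ", "2"), ("DT", "3"),
                           ("L", "4"), ("MN", "5"), ("R", "6")):
        for ch in letters:
            codes[ch] = digit
    name = name.upper()
    encoded = name[0]
    prev = codes.get(name[0], "")
    for ch in name[1:]:
        if ch in "HW":             # H and W do not separate same-coded letters
            continue
        digit = codes.get(ch, "")
        if digit and digit != prev:
            encoded += digit
        prev = digit               # vowels (empty digit) reset the previous code
    return (encoded + "000")[:4]

# Hypothetical weights: tune up or down per your organization's needs.
WEIGHTS = {"email_exact": 50, "name_soundex": 30, "zip_match": 20}

def match_score(a, b):
    """Sum the weights of each matching signal between two incoming records."""
    score = 0
    if a["email"] and a["email"].lower() == b["email"].lower():
        score += WEIGHTS["email_exact"]
    if soundex(a["last_name"]) == soundex(b["last_name"]):
        score += WEIGHTS["name_soundex"]
    if a.get("zip") and a.get("zip") == b.get("zip"):
        score += WEIGHTS["zip_match"]
    return score

print(soundex("Robert"), soundex("Rupert"))  # -> R163 R163
```

A score threshold then decides whether an incoming row matches an existing constituent or creates a new one, which is exactly where raising or lowering individual weights changes how aggressive your import matching is.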

4. Set up Queues: Finally, to manage the mayhem, queues are your friend! Be strategic about how you set them up. For your nightly queue, put the automated search and merge processes that will find and process the most records at the top, leaving fewer records for the lower-level processes to churn through. Double-check that your "End queue" and "Continue queue" dependencies are set appropriately so that if a dependency step fails, your queue halts accordingly. When it comes to scheduling, if you're finding it challenging to fit everything into your nightly window, there are ways to split it up. For instance, you can schedule the longer-running out-of-the-box searches for the morning (some crossover with business hours is typically not impactful with these processes in particular) and the quicker custom global change searches and merge processes for the evening.
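The ordering and failure-handling principle above can be sketched as a simple queue runner. The step names are made up for illustration, and this models only the behavior of "End queue" versus "Continue queue" dependencies, not Blackbaud CRM's actual queue engine.

```python
# Illustrative queue: broadest automated steps first, each tagged with what
# should happen to the rest of the queue if that step fails.
steps = [
    ("Nightly auto search & merge (90%+)", "End queue"),
    ("Custom email-match global change",   "End queue"),
    ("Targeted major-donor search",        "Continue queue"),
]

def run_queue(steps, run_step):
    """Run steps in order; halt only when a failed step is marked 'End queue'."""
    for name, on_failure in steps:
        ok = run_step(name)
        if not ok and on_failure == "End queue":
            print(f"Halting queue: '{name}' failed")
            return False
    return True

print(run_queue(steps, lambda name: True))  # -> True
```

With this setup, a failure in a core merge step stops everything downstream, while a failure in a lower-stakes targeted search lets the rest of the night's work proceed.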

Has all of this left you thinking that your organization could use a hand with reviewing or optimizing your duplicate management processes? If so, send us a message or give us a call! We'd love to partner with you on a tailored plan or augment your existing processes wherever you have gaps.
