Prevent Duplicate Records
Hi,
I have created an object called Daily Diet. It holds the daily diet information of a patient, with the following fields:
Patient - Master-Detail
Date - Date
Diet - Lookup
Food Item - Lookup
Consumed Quantity - Decimal
E.g.: Mark - 12.10.2008 - Breakfast - Biscuit - 5 => Daily Diet ID 1 (Name field)
Mark - 12.10.2008 - Breakfast - Tea - 1 => Daily Diet ID 2
Currently, if I add a new record with the same values, it is simply added:
Mark - 12.10.2008 - Breakfast - Biscuit - 6 => Daily Diet ID 3
Similarly, if I change Tea to Biscuit in record 2, it becomes:
Mark - 12.10.2008 - Breakfast - Biscuit - 1 => Daily Diet ID 2
So I end up with three records containing the same data. How can I prevent this?
Check the Apex documentation for examples of Triggers.
Note: what if Mark actually does order tea and biscuits twice on the same day? Are you going to reject it? On what criteria? In other words, be sure that you really know what constitutes a duplicate, and that you have sufficient data in the record to catch one.
Best, Steve.
In the case of an update, if there is a duplicate row, it is clear that the user has updated the record to make its contents identical to those of an existing record, so it should not be allowed. In the case of an insert, there may be cases like the one you mentioned (tea and biscuits twice in a day); here, the total quantity field of the existing record should be incremented by the corresponding value in the new record. So my doubt regarding that part is resolved.
Let me give the trigger I have written:
Here a patient can have different diet dates, and on each date he may take different diets (e.g. breakfast, lunch, dinner). Each diet may have different food items.
So, how can we retrieve the daily diet details in a single query? I've written a query inside the trigger, but I know it is not a good one: if Patient A has updated records for dates 10 and 11, and Patient B has updated records for dates 14 and 15, my query will retrieve records of both Patients A and B for dates 10, 11, 14 and 15 (note that there is no need to retrieve data for dates 14 and 15 for Patient A).
So would you please tell me whether it is enough, or whether we can make a more optimized one? Expecting your valuable reply.
public void checkDuplicatesInDailyDiet()
{
    // Get the list of updated or inserted Daily Diet records
    List<SF__Daily_Diet__c> dailyDiets = (List<SF__Daily_Diet__c>) m_triggerData;

    // Create maps for all patients, all dates, all diets and all food items
    Map<ID,ID> mapPatient = new Map<ID,ID>();
    Map<Date,Date> mapDate = new Map<Date,Date>();
    Map<ID,ID> mapDiet = new Map<ID,ID>();
    Map<ID,ID> mapFoodItem = new Map<ID,ID>();

    // For each inserted or updated record, collect its patient, date, diet and food item
    for (SF__Daily_Diet__c dailyDiet : dailyDiets)
    {
        if (!mapPatient.containsKey(dailyDiet.SF__Patient__c))
            mapPatient.put(dailyDiet.SF__Patient__c, dailyDiet.SF__Patient__c);
        if (!mapDate.containsKey(dailyDiet.SF__Date__c))
            mapDate.put(dailyDiet.SF__Date__c, dailyDiet.SF__Date__c);
        if (!mapDiet.containsKey(dailyDiet.SF__Diet__c))
            mapDiet.put(dailyDiet.SF__Diet__c, dailyDiet.SF__Diet__c);
        if (!mapFoodItem.containsKey(dailyDiet.SF__Food_Item__c))
            mapFoodItem.put(dailyDiet.SF__Food_Item__c, dailyDiet.SF__Food_Item__c);
    }

    // Get all Daily Diet records matching any of the patients, dates, diets and food items in the trigger data
    Map<ID, SF__Daily_Diet__c> mapDailyDietObject = new Map<ID, SF__Daily_Diet__c>(
        [select Id, SF__Patient__c, SF__Date__c, SF__Diet__c, SF__Food_Item__c
         from SF__Daily_Diet__c
         where SF__Patient__c in :mapPatient.keySet()
           and SF__Date__c in :mapDate.keySet()
           and SF__Diet__c in :mapDiet.keySet()
           and SF__Food_Item__c in :mapFoodItem.keySet()]);

    // If any database record has the same field values as a trigger record, that record is a duplicate
    for (SF__Daily_Diet__c dailyDiet : dailyDiets)
    {
        for (SF__Daily_Diet__c dbObject : mapDailyDietObject.values())
        {
            if (Trigger.isUpdate)
            {
                if (dailyDiet.SF__Patient__c == dbObject.SF__Patient__c &&
                    dailyDiet.SF__Date__c == dbObject.SF__Date__c &&
                    dailyDiet.SF__Diet__c == dbObject.SF__Diet__c &&
                    dailyDiet.SF__Food_Item__c == dbObject.SF__Food_Item__c)
                    throw new AlreadyExistingException('Duplication of Daily Diet record');
            }
            else
            {
                // existing total += new total
            }
        }
    }
}
If you know that this is always going to be input manually, and never in bulk data loads, etc. you could code it as a simple loop through all the records in the trigger, running a select on each one to find a match, and handling it accordingly. It's not bulk-safe, but if you know your usage, this may not be vital.
Your code seems to be trying for bulk safety, which is good. But it seems like you're trying to constrain all the values at the same time, and I wonder if this is needed. Assuming Patients are unique, and there are few Daily Diet records for each patient (given the cost of a day in a hospital, I'd think so :-) ), perhaps you could decide that Patient is the field that will be the most effective filter of the rows returned to you for checking (or perhaps Date?).
Instead just loop through Trigger.new and build a Set of the Patient values that are in all the rows in the Trigger. Then, query for all the DailyDiet records with an "IN" constraint against that set.
So now you'll have a superset of all the records in the system for the Patients in the records in your Trigger. Now loop through Trigger.new again, this time manually comparing the rest of the fields against the values you've returned from the query. As you go through, you deal with the duplicates you find differently depending on isUpdate or isInsert. Perhaps not as efficient in that it returns more rows, but perhaps faster execution time overall? I'm not sure but I might think about it.
However, if knowing your data you decide that building multiple "IN" constraints is the better way to go, I suppose that's fine as well. However, since you'll still have to loop through the Trigger records again and search for them in your returned data set, I wonder if you're saving much.
Also, your Map constructs (e.g. Map<ID,ID> mapFoodItem = new Map<ID,ID>();) could be simple Sets, since you only ever use the keys.
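For instance, the patient map above could be written as a Set — a sketch in Apex using the same field names as your trigger:

```apex
// A Set stores each Id only once, so no containsKey check is needed
Set<ID> patientIds = new Set<ID>();
for (SF__Daily_Diet__c dailyDiet : dailyDiets)
{
    patientIds.add(dailyDiet.SF__Patient__c); // add() silently ignores duplicates
}
// The SOQL bind works the same way as with Map.keySet():
//   ... where SF__Patient__c in :patientIds
```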
Note too that you probably don't want to just throw an exception that knocks out the entire Trigger. Look at the definition and uses for sObject.addError() for the row and something like SF__DailyDiet__c.Food_Item__c.addError, to add error messages to specific fields. You want to add an error to the specific record in the set of Trigger records that causes a problem. That way if most of the records in a bulk insert/update are fine, and one has an error, the rest may progress.
Either way, exception handling in Triggers is worth looking into.
Best, Steve.
Message Edited by SteveBower on 10-28-2008 10:23 PM
Hi Steve,
Thanks for your valuable information.
From your explanation, I understand that if we need a combination of fields to be unique (here, Patient-Date-Diet-Food), there is no way other than writing multiple queries in nested loops. Like:
For each Daily Diet record, create a map like
Map<Patient ID, Map<Date, Map<Diet ID, Map<Food ID, Daily Diet Record>>>>
Then create nested loops over this map:
for each Patient ID
{
    for each Date
    {
        for each Diet
        {
            Select from the Daily Diet table matching this patient ID, date and diet ID, with food in the food key list
            If a record exists, Daily Diet Record.addError('Duplicate')
        }
    }
}
I'm new to Apex. So would you please tell me if I am on the right track?
Rgds
Shabu
List<SF__Daily_Diet__c> patientsToUpdate = new List<SF__Daily_Diet__c>();
Set<ID> setOfPatients = new Set<ID>();
for (SF__Daily_Diet__c d : Trigger.new) {
    setOfPatients.add(d.SF__Patient__c); // Add the Id of the Patient from this trigger record to the set
}
// Get all daily diet records of any of the patients in the trigger data
List<SF__Daily_Diet__c> patient_records =
    [select Id, SF__Patient__c, SF__Date__c, SF__Diet__c, SF__Food_Item__c, SF__Quantity__c
     from SF__Daily_Diet__c
     where SF__Patient__c in :setOfPatients];
// As in your previous post, if you think this will return too many records, you could build another setOfFood_Items, and
// change the query to: "where SF__Patient__c in :setOfPatients and SF__Food_Item__c in :setOfFood_Items". However I'm not
// sure that will really reduce the numbers all that much in exchange for a more complicated query.
//
// Also, if the Trigger will almost always fire with just one record in Trigger.new, you could also write a check for
// Trigger.new.size() == 1 and then issue the query directly against the values in Trigger.new.
for (SF__Daily_Diet__c x : Trigger.new) {
    for (SF__Daily_Diet__c p : patient_records) {
        if (p.Id == x.Id) continue; // On update, don't compare a record against itself
        if ((p.SF__Date__c != x.SF__Date__c) ||
            (p.SF__Food_Item__c != x.SF__Food_Item__c) ||
            (p.SF__Diet__c != x.SF__Diet__c)) continue;
        // If we're here, it is a duplicate.
        if (Trigger.isInsert) {
            x.addError('Don\'t want to insert a duplicate. Instead, update the original');
            p.SF__Quantity__c = p.SF__Quantity__c + x.SF__Quantity__c; // add the new quantity to the original record
            patientsToUpdate.add(p);
        } else {
            // Update: reject making an existing record a duplicate of another
            x.addError('Not allowed to update an existing record to be a duplicate of an already existing record');
        }
    }
}
update patientsToUpdate;
return;
Since you have potentially multiple records in Trigger.new, you can't write one query that will return just the set of exact duplicates. Your initial approach meant you were getting a superset based on a union of criteria; my approach means you're potentially getting too many patient records because it excludes the other criteria. Both are valid, but either way you have to do extra scans. Yours might mean bringing back fewer rows to scan; mine might execute faster. Try yours; it degenerates into an exact query when there is only one record in Trigger.new.
I, too, don't like too many select queries.
I think there will be a lot of records even for a single patient, so the previous query can do the best.
Once again, thanks for your patience.
One more suggestion: in a future version, would you please introduce functionality to implement a composite unique key on Salesforce objects? :-)
Rgds
Shabu
Declarative composite keys would be a nice feature. If you really wanted to, I suppose you could create a formula field that hashes (or concatenates) the values of all the other fields into a single text field. Then you could build the same value in your trigger and just run a simple query against that one field.
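A rough sketch of that idea — the field name SF__Dedup_Key__c below is hypothetical, standing in for a text/formula field that concatenates the four values; adjust names to your org:

```apex
// Build the same composite key the formula field would produce for each trigger record.
// SF__Dedup_Key__c is a made-up field name, used only for illustration.
Set<String> keys = new Set<String>();
for (SF__Daily_Diet__c d : Trigger.new) {
    keys.add(d.SF__Patient__c + '|' + String.valueOf(d.SF__Date__c) + '|' +
             d.SF__Diet__c + '|' + d.SF__Food_Item__c);
}
// A single simple query then finds every potential duplicate in one round trip
List<SF__Daily_Diet__c> possibleDups =
    [select Id, SF__Dedup_Key__c
     from SF__Daily_Diet__c
     where SF__Dedup_Key__c in :keys];
```

If the platform also lets you mark such a field unique, the database itself would reject duplicates, which is sturdier than trigger logic alone.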
You could check ideas.salesforce.com and post the idea if nobody else has done it already.
Best, Steve