Combine CSV files, remove duplicates #11
@libchris I think you've got the right approach here - but first a question: is there a common value in both files (e.g. a system number, or even just the title)?
If they're true duplicates they should have the same accession number, which is recorded in the files.
Safer than title, as potentially two unique books could have the same title.
Thanks @libchris. You could take the approach you describe - combine the two files outside OpenRefine, import the combined file, sort by the Accession number column and (this is important for later) 'Reorder permanently' (an option you access from the 'Sort' drop-down menu, which displays once you have applied a sort). You could then do a 'Facet by Duplicate' on the Accession number column - this will show you all the rows whose accession number appears more than once in the file. You then have a few options:
To combine data from multiple rows that have the same accession number into a single row, you need to convert your rows into 'records' (which allows you to have multiple rows linked as a 'record'). The way the 'records' mode works is slightly odd, but basically it relies on the first column holding the 'key' to the record, with the first row of the record containing the key value and the next X rows (that belong to the same record) being blank. This is probably easier to see in practice than through words:
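To make that concrete, here is a tiny sketch (plain Python, not OpenRefine) of what the sorted data looks like once the repeated key values are blanked out so consecutive rows group into one record - the 'Accession Number' and 'Source' column names here are just illustrative:

```python
# Illustrative only: how rows sorted by a key column turn into 'records'
# when the repeated key values are blanked (the first row of each record
# keeps the key; the following rows of the same record are left blank).

rows = [
    {"Accession Number": "A001", "Source": "never borrowed"},
    {"Accession Number": "A001", "Source": "never browsed"},
    {"Accession Number": "A002", "Source": "never borrowed"},
]

previous_key = None
for row in rows:
    key = row["Accession Number"]
    shown_key = "" if key == previous_key else key  # blank a repeated key
    previous_key = key
    print(f"{shown_key:<10} | {row['Source']}")

# Prints roughly:
# A001       | never borrowed
#            | never browsed
# A002       | never borrowed
```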
This is clearly a bit complicated, but it is fairly mechanical and, once you get used to the records/rows modes in OpenRefine, relatively straightforward - there is a good introduction to this at http://kb.refinepro.com/2012/03/difference-between-record-and-row.html
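If it helps, here is a rough sketch of the same workflow done outside OpenRefine in Python with pandas - concatenate the two exports, sort by accession number (the equivalent of the permanent reorder), flag the duplicates (roughly what the duplicate facet shows), and collapse rows that share an accession number into one row. The file names and the 'Accession Number' / 'Title' column headings are assumptions, so adjust them to match the real CSVs:

```python
import pandas as pd

# Hypothetical file names and column headings - adjust to match the real exports.
never_borrowed = pd.read_csv("never_borrowed.csv")
never_browsed = pd.read_csv("never_browsed.csv")

# Record where each row came from before combining the two files.
never_borrowed["Source"] = "never borrowed"
never_browsed["Source"] = "never browsed"
combined = pd.concat([never_borrowed, never_browsed], ignore_index=True)

# Sort by accession number (the equivalent of sorting and reordering
# permanently in OpenRefine).
combined = combined.sort_values("Accession Number").reset_index(drop=True)

# Flag rows whose accession number appears more than once - roughly what
# the duplicate facet would show.
combined["Duplicate"] = combined.duplicated("Accession Number", keep=False)

# Collapse rows sharing an accession number into a single row: keep the
# first title and join the sources, so every book appears exactly once.
merged = (
    combined.groupby("Accession Number", as_index=False)
    .agg({"Title": "first", "Source": lambda s: "; ".join(sorted(set(s)))})
)

merged.to_csv("never_borrowed_or_browsed.csv", index=False)
```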
I have two CSV files, never borrowed and never browsed. I want to create a file that shows those books that are never borrowed or browsed. So ... presumably I can combine them using the functions shown in wk2 ...
Then use OpenRefine to find the duplicates. I'm then a little unsure how I remove one of the duplicates, so that all titles are shown but only one copy of each duplicate is kept.