Design and Implementation of a Computerised Library Stock Matching System


CHAPTER ONE

INTRODUCTION

The data matching process enables an analyst to reduce data duplication and improve the quality of a data source. Matching analyses the degree of duplication among all records of a single data source, returning weighted probabilities of a match between each set of records compared. You can then decide which records are matched and take the appropriate action in the source data (De Andes, 1993).
Matching data comes with several benefits, which include the following: it enables the elimination of differences between data values that should be equal, determining the correct values and reducing the errors that data differences can cause. For example, names and addresses are often the identifying data used for matching, and they can vary over time. Performing matching to identify and correct these errors can make the data easier to use and maintain (Winkler, 1993).
Data matching also ensures that the names of books in the library that are equivalent, but were entered in a different style or format, are rendered uniform. It is also necessary to note that data matching involves merging records that correspond to the same entities from several databases. Most times the entities under consideration are people, such as patients, customers, taxpayers or travellers, but this research will consider data matching in a library setting.
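As an illustration of this idea, the following short Python sketch (an assumption of this write-up, not the actual library system) scores every pair of book titles and flags pairs whose similarity exceeds a chosen threshold as possible duplicates; the titles and the 0.85 threshold are invented for illustration.

from difflib import SequenceMatcher

# Invented book titles; the second is the same book entered in a different style.
titles = [
    "Introduction to Database Systems",
    "Introduction to Data Base Systems",
    "Principles of Library Management",
]

def similarity(a: str, b: str) -> float:
    """Return a similarity score between 0 and 1 for two strings."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

# Compare every pair of titles once and report likely duplicates.
for i in range(len(titles)):
    for j in range(i + 1, len(titles)):
        score = similarity(titles[i], titles[j])
        if score > 0.85:  # illustrative threshold
            print(f"Possible duplicate ({score:.2f}): {titles[i]!r} ~ {titles[j]!r}")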

This research involves the full or partial integration of two or more data sets on the basis of information held in common. It enables data obtained from separate sources to be used more effectively, thereby enhancing the value of the original sources. Data matching can also reduce the potential burden on data providers by reducing the need for further data collection. However, where data matching involves the integration of records for the same units, the product of the research will raise important issues about confidentiality and security (Copas, J.R. & Hilton, F.J., 1990).

AIM OF RESEARCH

This project aims at designing and developing a computerised record matching system for the university library. In developing a data matching system for the school library, attempts will be made to achieve absolute confidence in the accuracy, completeness, robustness and consistency over time of the entity identifiers used, because any error in such an identifier will result in wrongly matched records.

OBJECTIVE OF RESEARCH

  1. A common entity identifier will be used across the databases to be matched. Where such an identifier is not available, attributes that contain partially identifying information, such as the name of the publisher, the place of publication and the dates of publication, will be used; the name and brief details of the author could also be used (Winkler 1986, 1987). A sketch of combining such attributes into a single match score is given after this list.
  2. Rather than develop a special survey to collect data for policy decisions, data from available book sources will be matched. This has potential advantages because those sources contain a greater amount of data, and the data may be more accurate due to improvement over a period of years (Swain et al., 1992).
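As noted in the first objective, several partially identifying attributes can be combined into a single weighted match score. The Python fragment below sketches one way of doing this; the field names, weights and the idea of a fixed threshold are assumptions made for illustration rather than part of the system specification.

from difflib import SequenceMatcher

def sim(a: str, b: str) -> float:
    """String similarity between 0 and 1."""
    return SequenceMatcher(None, a.lower().strip(), b.lower().strip()).ratio()

# Illustrative weights: the title contributes most to the overall score.
WEIGHTS = {"title": 0.4, "author": 0.3, "publisher": 0.2, "year": 0.1}

def match_score(rec_a: dict, rec_b: dict) -> float:
    """Weighted agreement across the attributes common to two records."""
    return sum(
        weight * sim(str(rec_a.get(field, "")), str(rec_b.get(field, "")))
        for field, weight in WEIGHTS.items()
    )

a = {"title": "Data Matching Concepts", "author": "W. Winkler",
     "publisher": "Springer", "year": "2012"}
b = {"title": "Data matching: concepts", "author": "Winkler W.",
     "publisher": "Springer", "year": "2012"}
print(round(match_score(a, b), 2))  # a high score suggests the same book

A pair whose score falls above an agreed threshold would then be treated as a candidate match and reviewed or merged.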

SCOPE OF RESEARCH

The research sets out how all those involved in the production of data matching for the university library will meet their commitment to protect the confidentiality of data within their care whilst also, where appropriate, maximising the value of those data through data matching (Cooper, W.S. & Maron, M.E., 1987).

LIMITATIONS OF RESEARCH

There are several limitations that would be encountered during this research work and thereafter. Some of these challenges are:

  1. Lack of unique entity identifier and data quality.
  2. Computation complexity.
  3. Lack of training data containing the true match status.
  4. Privacy and confidentiality.

LACK OF UNIQUE IDENTIFIER
Generally, the databases to be matched or de-duplicated do not contain unique entity identifiers or keys. Even when entity identifiers are available in the databases to be matched, one must be absolutely confident in the accuracy, completeness, robustness and consistency over time of these identifiers, because any error in such an identifier will result in wrongly matched records.
Finally, if no entity identifiers are available in the databases to be matched, then the matching needs to rely upon the attributes that are common across the databases (Decurre, Y., 1998).
COMPUTATION COMPLEXITY
When matching two databases, potentially each record from one database needs to be compared with all the records in the other database in order to determine whether a pair of records corresponds to the same entity or not. The computational complexity of data matching therefore grows quadratically as the databases to be matched get larger.
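One widely used way of containing this quadratic growth, sketched below under assumed field names, is blocking: records are first grouped by a cheap blocking key (here, publication year plus the first letter of the title) and detailed comparison is only performed within each group, rather than across every possible pair.

from collections import defaultdict
from itertools import combinations

records = [
    {"id": 1, "title": "Library Automation", "year": "2005"},
    {"id": 2, "title": "Library Automation.", "year": "2005"},
    {"id": 3, "title": "Cataloguing Basics", "year": "1998"},
]

def blocking_key(rec: dict) -> str:
    """A cheap key: publication year plus the first letter of the title."""
    title = rec["title"].strip().lower()
    return f"{rec['year']}:{title[:1]}"

blocks = defaultdict(list)
for rec in records:
    blocks[blocking_key(rec)].append(rec)

# Only records that share a block are compared, keeping the number of
# comparisons well below the n*(n-1)/2 pairs of a naive approach.
for key, group in blocks.items():
    for a, b in combinations(group, 2):
        print(f"Compare record {a['id']} with record {b['id']} (block {key})")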

LACK OF TRAINING DATA CONTAINING THE TRUE MATCH STATUS
In many data matching applications, the true match status of two records compared across the two databases is not known; that is to say, there is no ground truth or gold standard data available that specifies whether two records correspond to the same entity or not. Without extra information, one cannot be sure that the outcomes of a data matching project are correct (Deming, W.E. & Glesser, G.J., 1959).
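In the absence of labelled training data, a common fallback (sketched here with thresholds chosen purely for illustration) is to sort candidate pairs into matches, possible matches requiring clerical review, and non-matches on the basis of their comparison score alone.

def classify(score: float, upper: float = 0.85, lower: float = 0.60) -> str:
    """Assign a candidate pair to a class based only on its comparison score."""
    if score >= upper:
        return "match"
    if score >= lower:
        return "possible match (send for clerical review)"
    return "non-match"

for s in (0.92, 0.70, 0.40):
    print(s, "->", classify(s))
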
PRIVACY AND CONFIDENTIALITY
As previously mentioned, with data matching commonly relying on personal information such as names, addresses and dates, privacy and confidentiality need to be carefully considered. The analysis of matched data has the potential to uncover aspects of individuals or groups of entities that are not obvious when a single database is analysed separately (Harberman, S.J., 1975).

JUSTIFICATION OF THE RESEARCH

One of the important reasons why this research is necessary is that it enables users to eliminate differences between data values that should be the same, determining the correct values and reducing the errors that data differences can cause. Another reason why this topic is justified is that it ensures that values that are equivalent, but were entered in a different format or style, are rendered uniform (Hill, T., 1991).
Furthermore, there will be avoidance of duplicate records in a database where different identifiers are used for the same entity (Fellegi, 1999). Finally, data matching identifies exact and approximate matches, enabling the user or administrator to remove duplicate data as it is identified.
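As a small illustration of rendering equivalent values uniform, the following Python sketch standardises case, punctuation and a few common abbreviations before comparison; the abbreviation table and example strings are assumptions made for this example, not part of the system design.

import re

# Illustrative abbreviation table; a real system would use a fuller list.
ABBREVIATIONS = {"intro.": "introduction", "vol.": "volume", "ed.": "edition"}

def normalise(value: str) -> str:
    """Standardise case, abbreviations and punctuation in a field value."""
    value = value.lower().strip()
    for short, full in ABBREVIATIONS.items():
        value = value.replace(short, full)
    value = re.sub(r"[^\w\s]", "", value)  # drop stray punctuation
    value = re.sub(r"\s+", " ", value)     # collapse repeated spaces
    return value

# Two differently formatted entries now compare equal.
print(normalise("Intro. to Databases, 2nd Ed.") == normalise("introduction to databases 2nd edition"))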

TERMS ASSOCIATED WITH DATA MATCHING

  1. Key:  The combination of data fields which are the basis of comparison in a data matching application.
  2. Matched Results:  the set of matched records produced by a data matching application.
  3. Matched Records:  Two or more records brought together as a match.
  4. Name Inconsistencies:  When the same individual is recorded with varied identity details by different agencies.
  5. Name Tokens:  A component of the full or raw name, such as family name, first given name or title.
  6. Name Type:  Describes the nature of a name used currently or previously by an individual, such as legal name, maiden name or an alias.
  7. Non-matched records:  Records for which a data matching application failed to find a matching record in one or more other data files. N.B.: This is not to say that a record for the individual does not exist elsewhere, only that the application failed to find one.
  8. Profile groups:  In the interpretation of identity data matching results, the allocation of matched records to particular groups depending on the way in which the match was obtained. Used to better allocate resources to subsequent processing of results.
  9. Unicode standard:  A character code of 1-4 bytes that defines every character in most of the world's written languages.
  10. Data matching :  The bringing together of data from different sources and comparing it.
  11. Data topology:  The order relationship of specific items of data to other items of data.
  12. Address elements : The individual component elements/fields of an address string e.g street number, street name, street type, town/suburb.
  13. Algorithm:  A set of logic rules determined during the design phase of a data matching application; the ‘blueprint’ used to turn logic rules into computer instructions that detail which steps to perform in which order.
  14. Application:  The final combination of software and hardware which performs the data matching.
  15. Control group:  In a data matching context, a set of records of a known type (e.g. previously identified fraudulent identities, deceased individuals) which are used to better interpret data matching results.
  16. Cross Agency :  The matching of data from one agency with those of one or more other agencies.
  17. Data matching database: A structured collection of records or data that is stored in a computer system.
  18. Data cleansing: The proactive identification and correction of data quality issues which affect an agency’s ability to effectively use its data.
  19. Data integrity:  The quality of correctness, completeness and compliance with the intention of the creators of the data, i.e. ‘fit for purpose’.
  20. Enrollment :  The process of a
