Travellers on the London Underground will be familiar with the phrase “Mind the Gap”, a visual and audible warning to make sure that you do not become an unfortunate statistic on the railway – you may lose items between the train and platform (hang on to your Kindle when boarding!) or, worse still, trip or slip in the gap.
It is also important that you “Mind the (data) gap” when ‘things’ are transferred between organisations, departments and systems. I have used the term ‘thing’ to cover designs, products, services, responsibilities etc.
One of the most often quoted challenges over business definitions is the case of gaining a common definition of the term “Customer” within an organisation. This can result in lengthy debate over whether a customer is the person who pays the bill, the person who receives the goods, a branch office of a purchasing organisation, and so on.
A recent conversation reminded me that there is another area where there is difficulty gaining a common definition, and that is ensuring that there is a common definition of the word ‘asset’. Without common definitions, investments in data, processes and systems may be put at risk and the effectiveness of business decision making may be compromised.
One of the greatest factors that can lead to the degradation of information quality is the behaviour of users, data administrators and external parties. I referred to this in a previous post, “There is no such thing as a data quality problem…”, which was deliberately a little provocative in order to make a point. In this related post I will explore an interesting contradiction between staff behaviours related to physical assets and behaviours related to information assets.
As those who know me will appreciate, a lot of my experience relates to the world of physical asset management and maintenance management; however, some of the points I raise also apply in more general business contexts.
I recently came across two examples of how to handle poor quality supplier data, one good and one bad, in the same business unit of a large organisation.
The organisation concerned is reliant on contractors supplying accurate data for the work they have undertaken. Because of the complexity of the data entries, in particular where different ‘layers’ of data interact, the likelihood of data error is high. Physically checking the data would carry cost and safety implications, and the data itself has cost and possible long-term safety implications if incorrect, so it is essential that the data supplied for input is correct.
One team member demonstrated a very poor approach when receiving data that they believed to be incorrect – they entered into the system what they thought the correct data should be! This is decidedly risky: they may actually be making the error worse; by changing data supplied by the contractor they take over liability for its accuracy; and the contractor will tend to repeat the errors in future, as they know the data is likely to be corrected for them. If the data had been entered as supplied, this team member would have retained only some liability for the data (as they knew it to be wrong), and the contractor would have retained the majority of the liability.
A different team operated the correct process for addressing data that was supplied with errors – they rejected the batch, stating that it contained data errors but not the nature of those errors. The supplier then had to make sure they understood the data requirements in order to correct the data, which in turn leads to better quality data in future. Additionally, the team receiving the data did not pick up any liability for the data errors.
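The whole-batch rejection approach can be sketched in code. This is a minimal illustration, not the organisation's actual process: the record fields, validation rules and function names (`validate_record`, `process_batch`) are all hypothetical, and real validation would reflect the data specification agreed with the contractor.

```python
def validate_record(record):
    """Illustrative checks only - a stand-in for the real data specification."""
    return (
        isinstance(record.get("asset_id"), str) and record["asset_id"] != ""
        and isinstance(record.get("quantity"), (int, float)) and record["quantity"] >= 0
    )

def process_batch(records):
    """Accept the batch only if every record is valid.

    On failure, reject the whole batch with a generic message: the supplier
    must re-check their submission against the data requirements themselves,
    and the receiving team takes on no liability for corrections.
    """
    if all(validate_record(r) for r in records):
        return {"status": "accepted", "records": records}
    return {
        "status": "rejected",
        "reason": "Batch contains data errors; please review and resubmit.",
    }
```

The key design point is that the rejection message deliberately does not say *which* records are wrong, pushing the supplier to understand the requirements rather than patch individual values.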
The second approach takes less effort and carries less risk, whereas the first took more effort and assumed much of the risk – so why did they do it?