KM in FM

i.e. Knowledge Management in Facilities Management


Many of you will know that we farm and store data; when data is correlated and made available, it becomes information. Information can then be used to create knowledge, and when that is done properly, 'knowledge is power'.

 

All too often we lose information or don’t fully exploit it to benefit our organisations.

 

Quite often information is not shared, because every politician in an organisation knows the axiom that 'knowledge shared is power lost'.

 

With property and estates representing such a large proportion of any organisation's capital and operating expenditure, not managing knowledge is like burning money.

 

Over the next four articles I am going to look at knowledge management and information tools in FM, why you should be considering them, and what can be done.

 

OK so you’ve got data and information, but how do you make the best use of it?

 

Let’s start with a more general overview of KM:

 

 

 

 

Many organisations realise that information is a key asset, but the amount of information commonly generated and stored can make it very difficult to manage – particularly when there are duplicates and versions.

 

So how can you manage this knowledge to let others in your organisation get to the information they need efficiently?

 

 

 

 

 

There are two main types of solution to the problem of access to information:

 

  • Indices or maps
  • Search engines
 

Search engines like Google take a different approach: distributed processing frameworks (Google's MapReduce, which inspired the open-source Hadoop) let them build and query enormous indices automatically, removing the need for hand-maintained maps in many situations, but maps still have their place.
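To make the index idea concrete, here is a minimal sketch in Python of the inverted index at the heart of any search engine; the documents and their contents are invented for illustration. Each word maps to the set of documents containing it, so a keyword query needs no hand-maintained map.

```python
# Minimal inverted index: each word maps to the set of document ids
# containing it. Documents are invented examples.
from collections import defaultdict

docs = {
    "doc1": "boiler maintenance schedule for building A",
    "doc2": "cleaning contract renewal building A",
    "doc3": "boiler replacement quote building B",
}

index = defaultdict(set)
for doc_id, text in docs.items():
    for word in text.lower().split():
        index[word].add(doc_id)

def search(*keywords):
    """Return the ids of documents containing every keyword."""
    results = [index.get(k.lower(), set()) for k in keywords]
    return set.intersection(*results) if results else set()

print(sorted(search("boiler")))              # ['doc1', 'doc3']
print(sorted(search("boiler", "building")))  # ['doc1', 'doc3']
```

A real engine adds ranking, stemming and scale, but the core lookup is just this set intersection.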

 

The most basic form of knowledge map is a listing of resources, both assets and people, on paper and/or as web pages.

 

These ‘portal’ sites proliferate on the web, compiled by enthusiasts, organisations and search engines to point to key resources in other fields.

 

Although technically straightforward to implement in its most basic form, such a resource requires a dedicated person or team to keep the classification of categories and descriptions up to date.

 

Some organisations with dedicated knowledge managers, librarians or information services distribute a standard set of intranet and internet 'bookmarks', making a coherently organised set of information sites available to the rest of the organisation from their desktops.

 

This, of course, can happen at an informal level as colleagues swap new discoveries.

 

It’s easy to imagine the advantages of having a dynamically updated information system that would not need to wait on planned releases before it could be modified.

 

On reflection though, when you compare it with paper … paper is portable, it never crashes, it is easily annotated and more.

 

A better strategy is a user-centred design approach, which would typically consult user groups to establish their most common information needs (how do they think about the world?).

 

Users may be asked to build an actual map of their world, and to evaluate the current search engine to see how well it performs.

 

The big 'but' is that users do not always know what they need, and new tools can change the way people behave and present opportunities they hadn't imagined.

 

Think about the way we rapidly retrieve information sources on the internet with search engines such as Google: it's often quicker to type in a few keywords than to take the trouble of bookmarking the page, or of retyping the full address if known.

 

The answer to this dilemma lies in planning time to deploy a series of prototypes and evaluate the resulting patterns of usage.

 

These patterns cannot be predicted in advance; they are a function of the specific user group working under the demands of its unique situation.

 

 

https://ontotext.com/knowledgehub/fundamentals/metadata-fundamental/

 

 

Now, essential to KM is 'metadata': data that is used to describe other data.

 

Metadata about the contents of an information resource, or about the way in which people behave, can come from people or from computers. The contrast is between (human) declared structure and (machine) inferred structure.
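The contrast can be sketched in a few lines of Python. The document text, field names and stop-word list below are invented for illustration: declared metadata is supplied by a person, while inferred metadata is derived by the machine from the content itself.

```python
# Declared vs inferred metadata for one document (illustrative only).
from collections import Counter

text = "The planned maintenance schedule covers lifts, boilers and fire alarms."

# Declared structure: a person fills in the fields.
declared = {
    "title": "Planned maintenance schedule",
    "keywords": ["maintenance", "lifts", "boilers"],  # chosen by the author
    "category": "Hard FM",                            # from a controlled vocabulary
}

# Inferred structure: the machine derives keywords from the text itself,
# here crudely, as the most frequent non-trivial words.
stopwords = {"the", "and", "covers"}
words = [w.strip(".,").lower() for w in text.split()]
inferred = {
    "keywords": [w for w, _ in Counter(
        w for w in words if w not in stopwords).most_common(3)],
}

print(declared["keywords"])
print(inferred["keywords"])
```

Real systems replace the crude word count with proper text mining, but the division of labour (person declares, machine infers) is the same.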

 

To make this clearer, consider some examples:

  • Portals (topic-centred websites with categorised, validated links to relevant information). The categories can be either predefined by the portal’s designers, or automatically clustered by analysing the content of linked information.
  • E-commerce website customer profiles. Customers can either select their shopping interests from a predefined list of topics, or the site can try to analyse their interests by tracking their purchases, and inferring what other kinds of products they might buy.
  • Document/news classification. Users can either be asked to assign keywords that provide a machine-readable summary of the document or news item (such as terms selected from a controlled vocabulary), or the system can try to analyse the text and build an abstraction of its 'meaning'.
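As a toy illustration of the machine side of the document-classification example above, the sketch below (topic vocabularies and document text are invented) scores a new document against each predefined topic by word overlap and assigns the best match:

```python
# Toy document classifier: assign a document to the topic whose
# vocabulary it overlaps most. Topics and text are invented examples.
topics = {
    "maintenance": {"boiler", "repair", "schedule", "inspection"},
    "cleaning":    {"cleaning", "waste", "hygiene", "contract"},
    "security":    {"access", "cctv", "guard", "alarm"},
}

def classify(text):
    """Return the topic with the largest word overlap with the text."""
    words = set(text.lower().split())
    scores = {topic: len(words & vocab) for topic, vocab in topics.items()}
    return max(scores, key=scores.get)

print(classify("boiler inspection schedule for block C"))  # maintenance
```

Production classifiers use statistical models rather than raw overlap, but the principle of matching content against categories without human tagging is the same.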
 


 

The advantage of asking humans to classify and abstract is that, on a case-by-case basis, they may do a better job than a machine. However, someone has to define the categories; staff have biases and an incomplete awareness of who may want to find the document, and are notoriously poor at using complex categorisations.

 

Metadata from trained librarians or information scientists (typically employed only in large organisations) is likely to be high quality in terms of consistency and coverage. Metadata from other staff may still be useful, but varies in quality.

 


 

The advantage of having machines infer the meaning of texts or images is that they are not so vulnerable to these human traits, and can more easily keep up with the changing information space as new material is published. Tools concerned with text mining, information extraction, thesaurus generation/maintenance and ICT network usage analysis will continue to grow in sophistication.

 

Machine-generated analyses of raw texts, images and user activity patterns may therefore be a promising way to support the construction of meta-knowledge that can answer questions such as 'what do we know?', 'what's out there?' and 'what do people do?', since they do not require people to change their behaviour by explicitly categorising their work products or processes.

In the next blog I want to look at KM applied specifically to Facilities and Estate Management.

 


