Sunday, November 30, 2003
About Boxes and Arrows
Boxes and Arrows is the definitive source for the complex task of bringing architecture and design to the digital landscape. There are various titles and professions associated with this undertaking—information architecture, information design, interaction design, interface design—but when we looked at the work that we were actually doing, we found a “community of practice” with similarities in outlook and approach that far outweighed our differences.
Boxes and Arrows is a peer-written journal dedicated to discussing, improving and promoting the work of this community, through the sharing of exemplary technique, innovation and informed opinion.
Boxes and Arrows strives to provoke thinking among our peers, to push the limits of the accepted boundaries of these practices and to challenge the status quo by teaching new or better techniques that translate into results for our companies, our clients and our comrades.
MDLinx
MDLinx links healthcare professionals and patients to tomorrow's medical knowledge, and provides the pharmaceutical and healthcare industry with highly targeted, interactive marketing, education, content, and research solutions.
MDLinx's proprietary content aggregation technology is the industry leader in the healthcare vertical market. Currently, MDLinx owns and operates a network of 34 websites and over 700 different daily e-mail newsletters that provide highly focused content to 255,000 physicians and healthcare professionals, as well as to a growing number of patients.
Through MDLinx's software and expert medical team, the company serves the needs of both healthcare professionals and healthcare marketers. For healthcare professionals, MDLinx has created a network of comprehensive one-stop sites for each medical specialty and therapeutic category that provide the focused information they need to stay current. MDLinx recently launched PatientLinx, a free website that provides reliable clinical updates for patients. For healthcare marketers, MDLinx provides a comprehensive online marketing solution that utilizes specialty-focused websites, e-mail newsletters, content licensing, CME solutions, and market research offerings to reach the highest quality audience.
Using the power of the Internet, MDLinx puts clinical physicians a mouse-click away from the most up-to-date medical news in their specialty and subspecialty. With software that scans hundreds of the most trusted peer-reviewed medical publications, we can locate top articles and reports in a given specialty area and categorize them by very focused subspecialties. We eliminate the need to surf the Web, saving doctors valuable time better spent with patients. Simply put, MDLinx is a one-stop medical resource for physicians.
KM Toolbox
The following ‘toolbox’ presents some of the most common tools and techniques currently used in knowledge management programmes. The aim is to give an introduction, to present an overview of what is involved, and to provide some pointers to further resources.
Each item in the toolbox follows a common format:
What is it?
What are the benefits?
How do I go about it?
Are there any other points I should be aware of?
Resources and references
Saturday, November 29, 2003
Future of KM: Business Roadmap
Knowledge Organisation Transformation
European KM Forum
June 2003 release
Contains a good section on the Knowledge Transformation Cycle.
Sharing Knowledge Through Stories
The story format is a teaching tool. A good story for imparting knowledge drawn from personal experience includes the following six parts (see the sketch after this list):
Background - how you learned what you know
Context - why it was important
Beginning - what you did or tried first
Middle - the failure or breakthrough that was the lesson
End - how you confirmed your success or failure
Reflection - what you have learned about it since
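For teams that keep such stories in a shared knowledge base, here is a minimal sketch (an illustration, not from the source) of how the six-part template above might be captured as a structured record; the field names simply mirror the list:

from dataclasses import dataclass

@dataclass
class Story:
    """One experience-based story, following the six-part template above."""
    background: str  # how you learned what you know
    context: str     # why it was important
    beginning: str   # what you did or tried first
    middle: str      # the failure or breakthrough that was the lesson
    end: str         # how you confirmed your success or failure
    reflection: str  # what you have learned about it since

    def as_text(self) -> str:
        """Render the story as brief, labelled prose for sharing."""
        parts = [
            ("Background", self.background), ("Context", self.context),
            ("Beginning", self.beginning), ("Middle", self.middle),
            ("End", self.end), ("Reflection", self.reflection),
        ]
        return "\n".join(f"{label}: {text}" for label, text in parts)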
Stories work best when kept brief and to the point. As the saying goes, "brevity is the soul of wit," and most busy readers would agree.
Ref: Figallo, Cliff & Rhine, Nancy (2002). Building the Knowledge Management Network. Wiley Technology Publishing, p. 221.
The nonsense of 'knowledge management'
T.D. Wilson
Professor Emeritus
University of Sheffield, UK
Visiting Professor, Högskolan i Borås
Borås, Sweden
Abstract
Examines critically the origins and basis of 'knowledge management', its components and its development as a field of consultancy practice. Problems in the distinction between 'knowledge' and 'information' are explored, as well as Polanyi's concept of 'tacit knowing'. The concept is examined in the journal literature, the Web sites of consultancy companies, and in the presentation of business schools. The conclusion is reached that 'knowledge management' is an umbrella term for a variety of organizational activities, none of which are concerned with the management of knowledge. Those activities that are not concerned with the management of information are concerned with the management of work practices, in the expectation that changes in such areas as communication practice will enable information sharing.
Full Article
Friday, November 28, 2003
Data Mining Concepts and Techniques
Jiawei Han and Micheline Kamber
The Morgan Kaufmann Series in Data Management Systems, Jim Gray, Series Editor
Morgan Kaufmann Publishers, August 2000. 550 pages. ISBN 1-55860-489-8
Thursday, November 27, 2003
Competing on Knowledge
2000 Handbook of Business Strategy
(New York: Faulkner & Gray, 1999), pp. 81-88
Michael H. Zack [see his other papers]
College of Business Administration
Northeastern University
214 Hayden Hall
Boston, MA 02115
(617) 373-4734
m.zack@nunet.neu.edu
Introduction
Most executives and managers today embrace the notion of knowledge as a strategic asset. They are implementing a variety of knowledge management programs to explicitly and proactively harness and exploit the intellectual resources of their organizations. However, despite the potential strategic advantage from developing and applying knowledge, most knowledge management projects today are not explicitly aligned with competitive strategy. Several recent knowledge management surveys and research projects confirm the findings of my own research and consulting experiences with over 25 companies. In not one case was business strategy or firm value the primary motivator of the project or measure of its success. Despite their nod to strategy, organizations most frequently conceive of knowledge management as an operational issue addressed by information technology. Online repositories represent the primary approach to managing and sharing what is often called explicit or codified knowledge; an online repository of best practices created by most consulting firms is a good example.
Organizations are beginning to recognize that explicit knowledge is merely the tip of the intellectual iceberg. The vast majority of knowledge in organizations is tacit, hard to articulate, and held in people's heads. It is created and shared via direct person-to-person interaction, story-telling, and shared experience. While explicit knowledge is more easily managed and shared, tacit knowledge potentially has more strategic value, being derived from particular circumstances and events and thus unique and hard to imitate. Many firms are following the framework developed by Professor Ikujiro Nonaka, within which tacit knowledge is made explicit so that it may be shared with others in the organization, who then internalize it as tacit knowledge by reusing it in a new context. This interplay between tacit and explicit knowledge provides a balance between knowledge creation and application on the one hand, and knowledge sharing and reuse on the other. This distinction has led organizations to supplement their information technology with new organizational forms and cultures that promote interaction and collaboration. By aligning and integrating technological and organizational capabilities, these firms become well positioned to create, share and apply both explicit and tacit knowledge. However, being able to manage knowledge well does not guarantee that the firm is managing the right knowledge.
Full Article
Proceedings of The Third European Conference on Organizational Knowledge, Learning and Capabilities, Athens, Greece, April 5, 2002
Presentations available for download.
Sunday, November 23, 2003
AFTER ACTION REVIEW (AAR)
What is an AAR?
The after action review (AAR) is a simple process to help people learn from experience. AARs focus on learning during and immediately after an event so that people can apply what is learnt as quickly as possible to subsequent actions. The ultimate objective is to improve performance. The process was developed by the U.S. Army to enable its transformation from a late "industrial age" army to an "information age" army for the 21st century. It is considered one of the most powerful learning tools ever used.
When should you have an AAR?
The AAR is used during and immediately after each identifiable event, while memory is fresh and unvarnished, participants are still available, and learning can be applied straightaway.
What is an event?
An event must have a beginning and an end. It may be either a small action or part of a larger action (shift handover, planning meeting, and so on).
How do you conduct an AAR?
Every event is divided into specific activities, each of which should have an identifiable objective and plan of action. A discussion lasting 30 to 45 minutes should cover the following three questions (a scheduling sketch follows the breakdown below).
- what was supposed to happen? (25% of time is spent on this subject)
Everyone shares their understanding of what should have happened. It is established how well the objective and plan were understood from the start. Actions to correct any lack of clarity are identified.
- what actually happened? (25% of time is spent on this subject)
The facts about what happened are established - the ground truth. In seeking the ground truth, the group tries to identify a problem, not a culprit.
- why were there differences, and what can we learn? (50% of time is spent on this subject)
The real learning begins when the group compares the plan to what actually happened. Successes and shortfalls are identified and discussed. Action plans are agreed on to sustain the successes and improve on the shortfalls.
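As a minimal sketch, assuming nothing beyond the 25/25/50 split described above (the function name and default session length are illustrative), the time allocation can be expressed as:

# Illustrative sketch: splitting an AAR session across the three questions
# using the 25/25/50 time split described above.
AAR_QUESTIONS = [
    ("What was supposed to happen?", 0.25),
    ("What actually happened?", 0.25),
    ("Why were there differences, and what can we learn?", 0.50),
]

def aar_agenda(total_minutes: int = 40) -> list[tuple[str, int]]:
    """Return (question, minutes) pairs for an AAR of the given length."""
    return [(q, round(total_minutes * share)) for q, share in AAR_QUESTIONS]

for question, minutes in aar_agenda(40):
    print(f"{minutes:2d} min  {question}")  # 10, 10 and 20 minutes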
Who is involved in an AAR?
Everyone involved in the event participates, including the team conducting the event and other teams that were involved. The leaders and participants are all on an equal footing in the learning process. A facilitator (a team leader or a close observer of the project) ensures that everyone's views are heard but does not provide the answers.
Important Reminders
It is important to be objective: balance inquiry and advocacy. There must be a climate of openness and learning; the objective is to fix the problem, not to place blame. AARs are learning events, not evaluations or critique sessions.
Lessons must be converted into usable knowledge and disseminated to selected audiences for a broader learning experience throughout the organisation.
AARs can be audiotaped or videotaped for use as training tools, but conducting an AAR should never be put on hold over such details. An AAR can be done with paper and pencil, and should be held at a key point during the event or as soon as possible afterward, in order to capture the lessons learned while they are still fresh in participants' minds.
Source: Baird, Lloyd and Henderson, John C. (2001). The Knowledge Engine. Berrett-Koehler Publishers, San Francisco, CA.
Friday, November 21, 2003
A Review of the Various Methods of Measuring and Valuing Knowledge
Abstract
Every business generates value in some form, otherwise it would not exist. In response to the growing recognition that traditional financial systems are inadequate in capturing the true value of modern business, various methods have been proposed to measure and value knowledge in an organization. This paper looks at three of these methods and proposes a list of criteria to evaluate their effectiveness. The three methods are Skandia’s Intellectual Capital Model, the Balanced Scorecard and the Intellectual Capital Index. While the evaluation shows that these methods do address the gaps in the traditional financial measurement systems, there are limitations. A more rigorous, complete and practical method still needs to be researched before measuring and valuing knowledge becomes an integral part of running a business.
Introduction
Knowledge is the accumulation of everything an organization knows and uses in carrying out its business. It originates and resides in the minds of people (Davenport and Prusak, 1998). In an era of rapid technological change, when entire product categories can disappear overnight, when competition can come from unexpected directions, and new types of relationships are being forged between suppliers, manufacturers and customers, managers are recognizing that the most significant acts of value creation that keep companies alive lie in the cultivation and leverage of knowledge, and less in the management and measurement of physical and financial assets (Edvinsson and Malone, 1997).
What is the value of knowledge to an organization? It depends on the organization's vision, strategies and objectives. The knowledge that companies have embedded into processes has a value that may be known to the owning companies but, when traded, has a value dependent on the buyer's context of use, and this varies from buyer to buyer (Pike and Roos, 2000). How then do we measure the knowledge in an organization? It is difficult to measure directly because of its intangible nature. Instead, proxy measures are used, in which the outcome of applying knowledge is measured. Companies measure intellectual capital (IC), which is the sum of the knowledge of their members and the practical translation of this knowledge, that is, brands, trademarks and processes (Roos et al, 1997).
In traditional accounting methods adopted by companies, value lies in assets. Assets in turn are everything owned by a company that has money value. Intangible assets arose in response to a growing recognition that non-bookkeeping-type factors could have an important role to play in a company’s real value. They include patents, trademarks, copyrights and exclusive market rights, all of which confer on their owners a competitive advantage that has an impact on the bottom line. Others seemed to affect the liability side. For example, when a company dedicated years of research and development funds to the development of a new process or technology, that investment, too, ultimately contributed to the value of the company. And a systematic method, called amortization, was derived to convert this cost into an expense as the intangible asset was used up. But even this wasn’t enough to capture all the intangible assets of a company. There were other, even less rigorous factors that often only made themselves known when the enterprise was sold. What is this added value? It might be the loyalty of customers, the recognition of a business name that had been around for decades, store location, or even the character of the employees. These factors were lumped under the title of goodwill and can be amortized over intervals ranging from five to forty years, depending on how long it takes to enjoy the full benefits of that goodwill.
One major problem with the current approach to valuing assets is that investments in research and development, which are supposed to make the company’s financial numbers look good in the long run, will in the near term make those same numbers appear weak against shortsighted competitors who maintain the status quo, and that in turn will compromise the company’s ability to obtain capital. Simply put, the smart, forward-looking company is punished for trying to maintain its competitiveness and earnings capability (Edvinsson and Malone, 1997). Companies have found the current financial accounting system inadequate as it fails to capture the true value of the modern enterprise. To address this inadequacy, there have been many attempts to identify those intangible, off-balance-sheet factors, measure them, and find a way to present them coherently.
This paper looks at three well-known methods of measuring and valuing knowledge in an organization. None of them are pure financial models, since, as noted above, financial models alone are inadequate. The three methods selected for this review are:
· Skandia’s Intellectual Capital Model (IC Model)
· Balanced Scorecard (BSC)
· Intellectual Capital Index (IC-Index)
Criteria for Evaluation
The following criteria are used to evaluate the effectiveness of the three methods of measuring and valuing knowledge reviewed in this article (Pike and Roos, 2000):
· Is the method auditable and reliable?
· Is it easy to use and does it impose a large measurement overhead?
· Does it facilitate strategic and tactical management?
· Does it generate the information needed by shareholders and investors?
Auditable and reliable. One obvious approach to measuring and valuing knowledge is to try to retain as much of the rigor of conventional accounting as possible (Pike and Roos, 2000). The measurement approach must meet the following requirements in order for it to be auditable and reliable:
· Validity - the measure must be acceptable based on truth and supported by theory.
· Reliability - the measure must be able to give consistent results in the way you expect. It must be observable and measurable and unlikely to fail by giving inaccurate or misleading results.
· Verifiable - the measure may be proved by a third party to make certain that it is correct.
· Precise - the measure must be exact and accurate without ambiguity. It should be distinct and free from overlaps.
· Rigorous - the measure must have been examined from every angle to make certain it is correct, and it must be an agreed measure of the attribute in question.
Overhead and ease-of-use. Measurement schemes must be easy and practical to implement. Two dangers emerge from over-measurement. The first is that the cost of data collection far outweighs the benefits of having it. The second is that collection causes considerable irritation amongst those doing the measuring and those being measured, especially if redundancy leads to justified accusations of micro-management and tends to instill unwanted behaviours. This latter point arises because people want to improve performance and will focus on the many trivial elements of an over-elaborate measurement system, losing sight of the bigger and more important picture.
Strategic management. If a measurement and management system is to be of any real value, then it must give managers a means of translating their strategic intent into appropriate actions, and feedback information showing whether these actions are working or not. Furthermore, if measurement is to support management effectively, then the measures have to be dominated by those that look forward. Here lies one of the principal weaknesses of accounting-based methodologies for valuing knowledge: accounting is based on historical transactions and is thus dominated by lagging measures. For companies to succeed in the next century, they will have to find a way of managing the present by looking to the future rather than the past.
Shareholder information. In order to communicate with stakeholders outside the company, the information must be in a form that the stakeholder understands. In the knowledge era, the concept of stakeholder value extending beyond simple financial performance measures is a crucial change. To communicate with stakeholders now requires a deeper understanding of the attributes of value from the point of view of the stakeholder groups. The ideal measurement approach would facilitate the comparison of results across companies, industries and countries.
Skandia’s Intellectual Capital Model (IC Model)
Skandia is a Swedish global insurance company. Skandia first developed its IC report internally in 1985, and in 1994 it became the first company in the world to publish an Intellectual Capital Report to augment its Annual Report and Accounts. The Skandia Navigator used in the report, and the value scheme of IC components that underlies it, represent the first systematic attempt to uncover the true value of a company and to establish key indicators and their metrics. Skandia defined IC as the possession of the knowledge, applied experience, organizational technology, customer relationships and professional skills that provide Skandia with a competitive edge in the market (Edvinsson and Malone, 1997). It followed that the value of IC was the extent to which these intangible assets could be converted into financial returns for the company. The Skandia IC Model targeted both valuation and navigation.
Figure 1 shows the Skandia Market Value Scheme. It contains both financial and non-financial building blocks and allows Skandia to achieve a balance in trying to represent both financial and non-financial reporting, uncovering and visualizing its intellectual capital, tying its strategic vision to the company’s core competencies and reflecting its market value better.
At the heart of the Skandia IC model was the idea that the true value of a company’s performance lies in its ability to create sustainable value by pursuing a business vision and its resulting strategy. From this strategy, one could determine certain success factors that must be maximized. These success factors could in turn be grouped into four distinct areas of focus: Financial, Customer, Process, and Renewal and Development. A commonly shared fifth area is Human Focus. Within each of these five areas of focus, one could identify numerous key indicators to measure performance. Combined, these five factors created a new holistic and dynamic reporting model, which Skandia called the Navigator. Figure 2 shows the Skandia Navigator.
Figure 1. The Skandia Market Value Scheme
Figure 2. The Skandia Navigator
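In structural terms, the Navigator amounts to grouping indicators under the five focus areas and reading each area on its own terms. A minimal sketch follows; the example indicators are invented for illustration and are not Skandia's actual metrics:

# Hypothetical sketch of Navigator-style grouping: the focus areas are from
# the text; one invented indicator per area (Skandia's real report uses 164).
navigator = {
    "Financial": {"premium income per employee": 1.9},
    "Customer": {"satisfied-customer index (%)": 84.0},
    "Process": {"IT expense / administrative expense (%)": 12.0},
    "Renewal and Development": {"training hours per employee": 41.0},
    "Human": {"employee turnover (%)": 7.5},
}

for focus_area, indicators in navigator.items():
    for name, value in indicators.items():
        print(f"{focus_area:>25}: {name} = {value}")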
Auditable and reliable. Skandia’s IC Model has commonalities with at least three other companies (WM-Data, PLS-Consult & Celemi) that independently reported intangible assets (Sveiby, 1997). These companies faced the same core questions and developed almost identical reporting formats and indicators. Though these efforts certainly do not provide full validation of the Skandia IC Model, they do suggest that when a company decides to look into the measurement of its intangible assets, it will inevitably follow a similar path.
The Skandia IC report uses 164 metrics to measure the five areas of focus. The indices can easily be audited. However, it is still a challenge for an organization to determine which ones are redundant and which ones are less important relative to the company’s mission and objectives. This is subject to interpretation by the executives of the company and is therefore a source of ambiguity.
Another weakness is the inclusion of Structural Capital indices that count computers and the like as creators of true value. This inclusion presumes that employees showing up for work and sitting in front of their computers end up investing knowledge into those computers, which translates into the company’s competitive advantage. This is not a valid assumption, as data given to the employee must be transformed into value-added knowledge before any value can be derived, and this is rarely automatic.
Overhead and ease-of-use. Skandia’s IC Model is easy to understand and visualize. However, it has 164 indicators to measure. This is a sizable number and would incur huge measurement overheads. At the same time, there is a lack of understanding of the priorities and the relationships between the different indicators. Thus, all indicators are taken to be equally important and this of course makes the management task that much more complex.
Strategic Management. Skandia’s market value scheme enables management to look beyond traditional assumptions of what creates value for organizations, while Skandia’s Navigator is an aid to a company’s leadership as it maps the organization’s IC patterns. The five focus areas include leading and lagging indicators as well as measurements focusing on the outside and inside of the company. It is a management system that allows a company to keep track of many dimensions in a systematic way. However, the measurements are static and do not take into consideration the dynamic flows within an organization.
Shareholder Information. Skandia’s IC Model allows shareholders to visualize hidden IC of a company. It helps shareholders to accurately assess the future competitiveness, development and investment potential of companies. The Skandia Navigator can also be applied to non-profit organizations as it looks at human and structural factors and not just financial factors, as value creators. It provides a common yardstick to measure and compare value growth in every type of enterprise in a society. However, generic standards for measuring IC among companies or across industries are not established as yet and the current diversity of the indices and the context specificity hinder any possible comparison.
Balanced Scorecard (BSC)
After a multi-year, multi-company study, Kaplan and Norton (1996) suggested that managers need a multi-dimensional measurement system to guide their decisions - a Balanced Scorecard (BSC). The name reflects the balance provided between short- and long-term objectives, between financial and non-financial measures, between lagging and leading indicators, and between external and internal performance perspectives. The BSC encourages systematic measurement of these indicators and links all the measures in a coherent system. The objectives and measures of the scorecard are derived from an organization’s vision and strategy, and view organizational performance from four perspectives: financial, customer, internal business process, and learning and growth. These four perspectives provide the framework for the BSC, as indicated in Figure 3 below.
All measures of a BSC are linked through a cause and effect chain that culminates in a relation to financial results. As time goes by, managers monitor whether the strategy they chose is correctly implemented and then check whether the assumptions they made about the cause and effect relations hold true. If financial results are not achieved, then either the causal chain is different from their hypothesis, or time lags are longer than forecasted.
Figure 3. The Balanced Scorecard
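A minimal sketch of this cause-and-effect linkage follows; the perspectives are from the text, but the measures and the links between them are invented for illustration:

# Toy cause-and-effect chain of BSC measures: each non-financial measure
# points to the measure it is hypothesized to drive, terminating in a
# financial result. Measure names and links are hypothetical.
scorecard = {
    "employee training hours": ("Learning and Growth", "process cycle time"),
    "process cycle time": ("Internal Business Process", "on-time delivery"),
    "on-time delivery": ("Customer", "return on capital employed"),
    "return on capital employed": ("Financial", None),  # end of the chain
}

def chain(measure):
    """Follow the hypothesized links from a driver to the financial result."""
    path = []
    while measure is not None:
        path.append(measure)
        measure = scorecard[measure][1]
    return path

print(" -> ".join(chain("employee training hours")))

If the financial result fails to move, it is this hypothesized chain (or its assumed time lags) that managers revisit, as described above.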
Auditable and reliable. BSC provides a clear business planning methodology that is auditable. The drivers are derived from an explicit and rigorous translation of the organization’s strategy into tangible objectives and measures.
The four perspectives of the BSC have been found to be robust across a wide variety of companies and industries. No mathematical theorem establishes that four perspectives are both necessary and sufficient, but Kaplan and Norton indicated that they have yet to see companies using fewer than these four.
In terms of reliability, BSC has the following weaknesses:
· The perspectives drive the Key Success Factors (KSF). This is limiting because KSFs typically cut across perspectives, simultaneously impacting more than one dimension of the company’s intangible resources.
· Consideration of the external environment is limited to customers, yet companies also interact with, and leverage their relationships with, suppliers, alliance partners, the local community, unions and final consumers.
· Employees are lumped together with IT systems into the learning and growth perspective while innovation is part of the internal business process perspective. In reality, innovation is the result of human learning and action. It feels almost as if innovation is considered a routine, something the organization can do without the people, or at least independently of them. As a consequence, the specific challenge of managing people and their knowledge is underestimated by the BSC.
Overhead and ease-of-use. Considering that each of the four perspectives in the BSC can require between four and seven separate measures, businesses often have scorecards with up to 25 measures. Kaplan and Norton do not think this is too much to handle, as the BSC is not a replacement for an organization’s day-to-day measurement system, which uses many more than 25 measures to keep operations functioning.
A typical BSC rollout project can last for 16 weeks. The schedule is largely determined by senior executives’ availability for interviews, workshops, and subgroup meetings. Once built, BSCs are embedded into ongoing management systems. BSCs are dynamic and need to be continually reviewed, assessed, and updated to reflect new competitive, market, and technological conditions.
BSC is well developed and has consistent literature to support systematic measurements that are linked in a coherent system. There is also clear correlation between indicators and financial performance. Once developed, the BSC is easy to use.
Strategic Management. The process of building a BSC starts with a reinterpretation of the vision, or long-term strategy through the lenses of the four perspectives. This yields key success factors for each perspective, which can be translated into critical measures.
The BSC emphasizes that financial and non-financial measures must be part of the information system for employees at all levels of the organization. It includes leading and lagging indicators as well as measurements focusing on the outside and inside of the company. It is more than a measurement system: innovative companies use the scorecard as the central, organizing framework for their management processes. It brings together seemingly disparate elements of a company’s competitive agenda. The BSC also reduces sub-optimization, because it forces the user to consider all the operational measures together, i.e. whether improvement in one area comes at the expense of another.
Shareholder Information. External comparisons are difficult, since each business has its own vision and key success factors. The BSC is therefore more effective as an internal document.
Intellectual Capital Index (IC-Index)
Goran Roos and his colleagues at Intellectual Capital Services Ltd first advanced the notion of an IC-Index. It is an example of the ‘second generation’ of IC practices, which attempt to consolidate all the different individual indicators into a single index and to correlate the changes in intellectual capital with changes in the market (Roos et al. 1997).
Many companies have so far applied only a ‘balance sheet’ approach to IC. A complementary ‘profit and loss’ approach that considers the dynamic flows is a natural extension. Thus, flows among the different forms of capital, intellectual and material, should be analyzed and managed as much as the stocks. It is these flows that generate and alter the stocks. A summary index provides an immediate improvement to having long lists of individual indicators, because it requires companies to understand the priorities and relationships that exist between their different measures. Figure 4 shows how the IC-Index changes over time with respect to the Relationship Capital, Innovation Capital, Human Capital and Infrastructure Capital indices of a financial company.
Figure 4. IC-Index of a Financial Company.
Obviously, second generation practices do not substitute for first generation ones; rather, they complement them. It will be necessary to examine the components to understand what caused a given change in the aggregated index.
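As a minimal sketch, assuming a simple weighted average (the actual choice of capital forms and weights is made by each company, which is exactly where the value judgments noted below enter), consolidating the four capital-form indices from Figure 4 into one IC-Index looks like this:

# Hypothetical weights and index values, invented for illustration only.
WEIGHTS = {
    "Relationship Capital": 0.30,
    "Innovation Capital": 0.30,
    "Human Capital": 0.25,
    "Infrastructure Capital": 0.15,
}

def ic_index(indices):
    """Consolidate the capital-form indices into a single weighted IC-Index."""
    assert abs(sum(WEIGHTS.values()) - 1.0) < 1e-9  # weights must sum to 1
    return sum(WEIGHTS[form] * value for form, value in indices.items())

q1 = {"Relationship Capital": 110, "Innovation Capital": 95,
      "Human Capital": 102, "Infrastructure Capital": 100}
q2 = {"Relationship Capital": 118, "Innovation Capital": 97,
      "Human Capital": 104, "Infrastructure Capital": 99}

# Managers watch the trend of the aggregate, then drill into the components.
print(f"IC-Index: {ic_index(q1):.1f} -> {ic_index(q2):.1f}")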
Auditable and reliable. The concept of the IC-Index focuses on the monitoring of the dynamics of IC. This is a great improvement over existing IC measurement systems. Like most other measures of intangible assets, an IC-Index does depend on value judgments, in the choice of weights and indicators. The IC-Index is capable of taking into account performance from prior periods. One limitation of this is that it is subject to ‘one-off special events’ which can have a strong influence on moving the index up or down for some years after the event.
The most important characteristic of the IC-Index is that it is a self-correcting index in that if performance of the IC-Index does not reflect changes of the market value of the company, then the choice of capital forms, weights and/or indicators is flawed. Better tests of correlation between IC-Index and market value are needed, and more research needs to go into the study of the time delays, so that they can be included into the picture as well.
Overhead and ease-of-use. The IC-Index approach consolidates IC indicators into an aggregate IC index. It prioritizes the different measures and thus reduces them to only a handful to help managers reduce complexity. Trends are used to gain useful management insights and therefore precise measurements of the indicators are not necessary. As a single index, it is much easier to implement and measurement overheads are greatly reduced.
Strategic Management. The IC-Index allows managers to understand the effects a particular strategy has on the IC of a company and compare two alternatives to understand which one is preferable from an IC point of view. At the same time, it will also allow managers to understand the efficiency of the company in transforming IC into financial value and financial capital into IC.
Shareholder Information. The IC-Index has made an even bigger part of the company (stock flows) visible to internal and external stakeholders compared to first generation IC approaches. The value of an IC-Index lies in its measurement of changes in IC stocks. This stock flow perspective is quite powerful for researchers, since they can examine firms as organizational learning systems that try to minimize stock flow misalignment. Bontis et al. (1999) suggest that changes in an IC-Index reflect changes in the underlying IC elements, that in turn signal changes in the underlying drivers of future earnings potential.
The IC-Index provides the basis of comparing IC performance of companies. However, it must be noted that it is very much context specific and is therefore limited in its universality among companies.
Summary
A summary of the advantages and limitations of the three methods of measuring and valuing knowledge in comparison with the traditional financial systems approach is listed in Table 1.
Criteria / Test                                       Traditional   Skandia     BSC       IC-Index
                                                      Financial     IC Model
                                                      Systems
Auditable and reliable
  Validity                                            High          Low         Low       Low
  Reliability                                         High          Medium      Medium    Medium
  Verifiable                                          High          High        High      High
  Precise                                             High          Low         Low       Low
  Rigorous                                            High          Low         Low       Low
Overhead and ease of use
  Measurement overhead                                Low           High        High      High
  Easy to initiate and use                            High          Low         Low       Low
Strategic management
  Data addresses forward-looking needs                -             High        High      High
  Allows trade-off decisions                          Medium        Low         Low       High
  Measures stock                                      High          High        High      Low
  Measures flow                                       -             -           -         High
Shareholder information
  Comparability                                       Medium        Low         Low       High
  Provides data at all levels in the company          Medium        High        High      High
  Engages all the value attributes of all stakeholders  Low         Medium      Medium    High
Table 1. Comparison of the various methods of measuring and valuing knowledge
Conclusion
Various methods exist that measure the value of knowledge. This paper has presented a number of criteria for evaluating three of these methods. As the analysis shows, each method has its own advantages and limitations. In applying these methods, therefore, it is important to recognize what each aims to accomplish and the conditions under which those accomplishments are possible.
References
1. Bontis, Nick (2001). Assessing knowledge assets: a review of the models used to measure intellectual capital. International Journal of Management Reviews 3(1), 41-60.
2. Bontis, Nick, Dragonetti, N.C., Jacobsen, Kristine and Roos, Goran (1999). The Knowledge Toolbox: A Review of the Tools Available to Measure and Manage Intangible Resources. European Management Journal 17(4), 391-402.
3. Davenport, T.H. and Prusak, L. (1998). Working Knowledge: How Organisations Manage What They Know. HBSP, Boston.
4. Edvinsson, Leif and Malone, Michael S. (1997). Intellectual Capital. Judy Piatkus (Publishers) Ltd, London.
5. Kaplan, Robert S. and Norton, David P. (1996). The Balanced Scorecard. HBSP, Boston.
6. Pike, Steve and Roos, Goran (2000). Intellectual Capital Measurement and Holistic Value Approach (HVA). Works Institute Journal (Japan) 42.
7. Roos, Johan, Roos, Goran, Dragonetti, Nicola Carlo and Edvinsson, Leif (1997). Intellectual Capital: Navigating the New Business Landscape. Macmillan Press Ltd, London.
8. Sveiby, K.E. (1997). The New Organisational Wealth. Berrett-Koehler, San Francisco.
Abstract
Every business generates value in some form, otherwise it would not exist. In response to the growing recognition that traditional financial systems are inadequate in capturing the true value of modern business, various methods have been proposed to measure and value knowledge in an organization. This paper looks at three of these methods and proposes a list of criteria to evaluate their effectiveness. The three methods are Skandia’s Intellectual Capital Model, the Balanced Scorecard and the Intellectual Capital Index. While the evaluation shows that these methods do address the gaps in the traditional financial measurement systems, there are limitations. A more rigorous, complete and practical method still needs to be researched before measuring and valuing knowledge becomes an integral part of running a business.
Introduction
Knowledge is the accumulation of everything an organization knows and uses in the carrying out of its business. It originates and resides in the minds of people (Davenport and Prusak, 1998). In an era of rapid technological change, when entire product categories can disappear overnight, when competition can come from unexpected directions, and new types of relationships are being forged between suppliers, manufacturers and customers, managers are recognizing that the most significant acts of value creation that keep companies alive, are in the cultivation and leverage of knowledge and less on the management and measurement of physical and financial assets (Edvinsson and Malone, 1997).
What is the value of knowledge to an organization? It is dependent upon its vision, strategies and objectives. The knowledge that companies have embedded into processes has a value that may be known as far as the owning companies are concerned but, when traded, has a value dependent on the context of use of the buyer and this varies from buyer to buyer (Pike and Roos, 2000). How then do we measure the knowledge in an organization? It is difficult to measure directly because of its intangible nature. Instead, proxy measures are used where the outcome of applying knowledge is being measured. Companies measure intellectual capital (IC), which is the sum of the knowledge of its members and the practical translation of this knowledge, that is brands, trademarks and processes (Roos et al, 1997).
In traditional accounting methods adopted by companies, value lies in assets. Assets in turn are everything owned by a company that has money value. Intangible assets arose in response to a growing recognition that non-bookkeeping-type factors could have an important role to play in a company’s real value. They include patents, trademarks, copyrights, exclusive market rights – all conferred on their owners a competitive advantage that has an impact on the bottom line. Others seemed to affect the liability side. For example, when a company dedicated years of research and development funds to the development of a new process or technology, that investment, too, ultimately contributed to the value of the company. And a systematic method, called amortization, was derived to convert this cost into an expense as the intangible asset was used up. But even this wasn’t enough to capture all the intangible assets of a company. There were other, even less rigorous factors that often only made themselves known when the enterprise was sold. What is this added value? It might be the loyalty of customers, or the recognition of a business name that had been around for decades, or store location, even the character of the employees. These factors were lumped under the title goodwill and can be amortized over intervals ranging from five to forty years, depending on how long it takes to enjoy the full benefits of that goodwill.
One major problem with the current approach to valuing assets is that, investments in research and development which are supposed to make the company’s financial numbers look good in the long run, will, in the near term make those same numbers appear weak against shortsighted competitors who maintain the status quo – and that in turn will compromise the company’s ability to obtain capital. Simply put, the smart, forward-looking company is punished for trying to maintain its competitiveness and earnings capability (Edvinsson and Malone, 1997). Companies have found the current financial accounting system inadequate as it fails to capture the true value of the modern enterprise. To address this inadequacy, there are many attempts to identify what those intangible factors are off the balance sheet, measure them, and find a way to present them in a coherent way.
This paper looks at three well-known methods of measuring and valuing knowledge in an organization. None of them are pure financial models, since, as noted above, financial models alone are inadequate. The three methods selected for this review are:
· Skandia’s Intellectual Capital Model (IC Model)
· Balanced Scorecard (BSC)
· Intellectual Capital Index (IC-Index)
Criteria for Evaluation
The following criteria is used to evaluate the effectiveness of the three methods of measuring and valuing knowledge reviewed in this article (Pike and Roos, 2000):
· Is the method auditable and reliable?
· Is it easy to use and does it impose a large measurement overhead?
· Does it facilitate strategic and tactical management?
· Does it generate the information needed by shareholders and investors?
Auditable and reliable. One obvious approach to measuring and valuing knowledge is to try to retain as much of the rigor of conventional accounting as possible (Pike and Roos, 2000). The measurement approach must meet the following requirements in order for it to be auditable and reliable:
· Validity - the measure must be acceptable based on truth and supported by theory.
· Reliability - the measure must be able to give consistent results in the way you expect. It must be observable and measurable and unlikely to fail by giving inaccurate or misleading results.
· Verifiable - the measure may be proved by a third party to make certain that it is correct.
· Precise - the measure must be exact and accurate without ambiguity. It should be distinct and free from overlaps.
· Rigorous - the measure must have been looked at from every angle to make certain it is correct and agreeable in that they are an agreed measure of the attribute.
Overhead and ease-of-use. Measurement schemes must be easy and practical to implement. Two dangers emerge from over measurement. The first is that the cost of data collection far outweighs the benefits of having it and that its collection also causes considerable irritation amongst those doing the measurement and those being measured. This is especially if the redundancy leads to justified accusations of micro-management and the tendency to instill unwanted behaviours. This latter point arises since people tend to want to improve performance and will tend to focus on many trivial elements in an over-elaborate measurement system. In doing this, they loose sight of the bigger and more important picture.
Strategic management. If a measurement and management system is to be of any real value, then it must give managers a means of translating their strategic intent into appropriate actions and feedback information showing whether these actions are working or not. Furthermore, if measurement is to support management effectively, then the measures have to be dominated by those that look forward. Here lies one of the principle weaknesses of accounting-based methodologies of knowledge. Accounting is based on historical transactions and is thus dominated by lagging measures. For companies to succeed in the next century, they will have to find a way of managing the present by looking to the future rather than the past.
Shareholder information. In order to communicate with stakeholders outside the company, the information must be in a form that the stakeholder understands. In the knowledge era, the concept of stakeholder value extending beyond simple financial performance measures is a crucial change. To communicate with stakeholders now requires a deeper understanding of the attributes of value from the point of view of the stakeholder groups. The ideal measurement approach would facilitate the comparison of results across companies, industries and countries.
Skandia’s Intellectual Capital Model (IC Model)
Skandia is a Swedish global insurance company. Skandia first developed its IC report internally in 1985 and in 1994, it became the first company in the world to publish an Intellectual Capital Report to augment its Annual Report and Accounts. The Skandia Navigator and the value scheme of IC-components that underlies it, used in the report, is the first systematic attempt to uncover the true value of a company and to establish the key indicators for establishing their metrics. Skandia defined IC as the possession of the knowledge, applied experience, organizational technology, customer relationships and professional skills that provide Skandia with a competitive edge in the market (Edvinsson and Malone, 1997). It followed then that the value of IC was the extent to which these intangible assets could be converted into financial returns for the company. The Skandia IC Model targeted both valuation and navigation.
Figure 1 shows the Skandia Market Value Scheme. It contains both financial and non-financial building blocks and allows Skandia to achieve a balance in trying to represent both financial and non-financial reporting, uncovering and visualizing its intellectual capital, tying its strategic vision to the company’s core competencies and reflecting its market value better.
At the heart of the Skandia IC model was the idea that the true value of a company’s performance lies in its ability to create sustainable value by pursuing a business vision and its resulting strategy. From this strategy, one can determine certain success factors that must be maximized. These success factors can in turn be grouped into four distinct areas of focus: Financial, Customer, Process, and Renewal and Development. A fifth area, Human Focus, is common to and shared by all the others. Within each of these five areas of focus, one can identify numerous key indicators to measure performance. Combined, these five areas created a new holistic and dynamic reporting model, which Skandia called the Navigator. Figure 2 shows the Skandia Navigator.
Figure 1. The Skandia Market Value Scheme
Figure 2. The Skandia Navigator
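To make the Navigator's structure concrete, here is a minimal sketch of how it might be represented in software. The five focus areas come from the model itself; the indicator names and values are hypothetical illustrations, not Skandia's actual 164 metrics:

# A minimal sketch of the Skandia Navigator's five focus areas.
# The focus areas are from the model; the indicators and values
# below are hypothetical examples, not Skandia's actual metrics.
navigator = {
    "Financial": {"revenue_per_employee": 310_000, "profit_margin": 0.12},
    "Customer": {"satisfaction_index": 0.81, "retention_rate": 0.93},
    "Process": {"admin_cost_per_contract": 48.0, "it_expense_ratio": 0.06},
    "Renewal and Development": {"training_days_per_employee": 5.2,
                                "new_product_revenue_share": 0.17},
    "Human": {"employee_turnover": 0.08, "average_years_of_service": 6.4},
}

for focus, indicators in navigator.items():
    print(focus)
    for name, value in indicators.items():
        print(f"  {name}: {value}")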
Auditable and reliable. Skandia’s IC Model has commonalities with at least three other companies (WM-Data, PLS-Consult & Celemi) that independently reported intangible assets (Sveiby, 1997). These companies faced the same core questions and developed almost identical reporting formats and indicators. Though these efforts certainly do not provide full validation of the Skandia IC Model, they do suggest that when a company decides to look into the measurement of its intangible assets, it will inevitably follow a similar path.
The Skandia IC report uses 164 metrics to measure the five areas of focus. The indices can easily be audited. However, it remains a challenge for an organization to work out which indices are redundant and which are less important relative to the company’s mission and objectives. This is subject to interpretation by the company’s executives and is therefore a source of ambiguity.
Another weakness is the inclusion of Structural Capital indices that count computers and similar equipment as creators of true value. This presumes that employees showing up for work and sitting in front of their computers invest knowledge in those computers, and that this translates into competitive advantage. That is not a valid assumption: data given to an employee must be transformed into value-adding knowledge before any value can be derived, and this transformation is rarely automatic.
Overhead and ease-of-use. Skandia’s IC Model is easy to understand and visualize. However, it has 164 indicators to measure. This is a sizable number and would incur huge measurement overheads. At the same time, there is a lack of understanding of the priorities and the relationships between the different indicators. Thus, all indicators are taken to be equally important and this of course makes the management task that much more complex.
Strategic Management. Skandia’s market value scheme enables management to look beyond traditional assumptions of what creates value for organizations, while Skandia’s Navigator aids a company’s leadership by mapping the organization’s IC patterns. The five focus areas include leading and lagging indicators as well as measurements focusing on the outside and inside of the company. It is a management system that allows a company to keep track of many dimensions in a systematic way. However, the measurements are static and do not take into consideration the dynamic flows within an organization.
Shareholder Information. Skandia’s IC Model allows shareholders to visualize the hidden IC of a company. It helps shareholders assess the future competitiveness, development and investment potential of companies more accurately. The Skandia Navigator can also be applied to non-profit organizations, as it treats human and structural factors, and not just financial factors, as value creators. It provides a common yardstick to measure and compare value growth in every type of enterprise in a society. However, generic standards for measuring IC among companies or across industries have not yet been established, and the current diversity of the indices and their context specificity hinder any meaningful comparison.
Balanced Scorecard (BSC)
After a multi-year, multi-company study, Kaplan and Norton (1996) suggested that managers need a multi-dimensional measurement system to guide their decisions - a Balanced Scorecard (BSC). The name reflects the balance provided between short- and long-term objectives, between financial and non-financial measures, between lagging and leading indicators, and between external and internal performance perspectives. The BSC encourages systematic measurement of these indicators and links all the measures in a coherent system. The objectives and measures of the scorecard are derived from an organization’s vision and strategy, and they view organizational performance from four perspectives: financial, customer, internal business process, and learning and growth. These four perspectives provide the framework for the BSC, as indicated in Figure 3 below.
All measures of a BSC are linked through a cause and effect chain that culminates in a relation to financial results. As time goes by, managers monitor whether the strategy they chose is correctly implemented and then check whether the assumptions they made about the cause and effect relations hold true. If financial results are not achieved, then either the causal chain is different from their hypothesis, or time lags are longer than forecasted.
Figure 3. The Balanced Scorecard
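The cause-and-effect requirement described above can even be checked mechanically: every non-financial measure should lie on a chain of links that ends in a financial result. Here is a minimal sketch; the measures and the causal links between them are hypothetical, chosen only to illustrate the idea, not prescribed by the BSC literature:

# Hypothetical cause-and-effect links between scorecard measures.
# Each key is a measure; its value lists the measures it is assumed to drive.
links = {
    "employee_training_hours": ["process_quality"],      # learning and growth
    "process_quality": ["on_time_delivery"],             # internal process
    "on_time_delivery": ["customer_satisfaction"],       # internal process
    "customer_satisfaction": ["repeat_sales"],           # customer
    "repeat_sales": ["revenue_growth"],                  # customer -> financial
}
financial_measures = {"revenue_growth"}

def reaches_financial(measure, seen=None):
    """Return True if a chain of links from `measure` ends in a financial result."""
    seen = seen or set()
    if measure in financial_measures:
        return True
    if measure in seen:
        return False
    seen.add(measure)
    return any(reaches_financial(m, seen) for m in links.get(measure, []))

for m in links:
    print(m, "->", "reaches financial result" if reaches_financial(m) else "DANGLING")

A measure flagged as dangling is one whose hypothesized causal chain never culminates in a financial outcome, which is exactly the situation the BSC's design is meant to rule out.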
Auditable and reliable. BSC provides a clear business planning methodology that is auditable. The drivers are derived from an explicit and rigorous translation of the organization’s strategy into tangible objectives and measures.
The four perspectives of the BSC have been found to be robust across a wide variety of companies and industries. No mathematical theorem establishes that four perspectives are both necessary and sufficient, but Kaplan and Norton indicated that they had yet to see companies using fewer than these four.
In terms of reliability, BSC has the following weaknesses:
· The perspectives drive the Key Success Factors (KSFs). This is limiting because KSFs typically cut across perspectives, simultaneously affecting more than one dimension of the company’s intangible resources.
· Consideration of the external environment is limited to customers, yet companies also interact with and leverage relationships with suppliers, alliance partners, local communities, unions and final consumers.
· Employees are lumped together with IT systems into the learning and growth perspective, while innovation is placed in the internal business process perspective. In reality, innovation is the result of human learning and action. It is almost as if innovation were considered a routine - something the organization can do without its people, or at least independently of them. As a consequence, the BSC underestimates the specific challenge of managing people and their knowledge.
Overhead and ease-of-use. Since each of the four perspectives in the BSC can require between four and seven separate measures, businesses often have scorecards with up to 25 measures. Kaplan and Norton do not think this is too much to handle, as the BSC is not a replacement for an organization’s day-to-day measurement system, which typically needs far more than 25 measures to keep operations functioning.
A typical BSC rollout project can last for 16 weeks. The schedule is largely determined by senior executives’ availability for interviews, workshops, and subgroup meetings. Once built, BSCs are embedded into ongoing management systems. BSCs are dynamic and need to be continually reviewed, assessed, and updated to reflect new competitive, market, and technological conditions.
BSC is well developed and has consistent literature to support systematic measurements that are linked in a coherent system. There is also clear correlation between indicators and financial performance. Once developed, the BSC is easy to use.
Strategic Management. The process of building a BSC starts with a reinterpretation of the vision, or long-term strategy through the lenses of the four perspectives. This yields key success factors for each perspective, which can be translated into critical measures.
The BSC emphasizes that financial and non-financial measures must be part of the information system for employees at all levels of the organization. It includes leading and lagging indicators as well as measurements focusing on the outside and inside of the company. It is more than a measurement system: innovative companies use the scorecard as the central, organizing framework for their management processes, bringing together seemingly disparate elements of a company’s competitive agenda. The BSC also reduces sub-optimization, since it forces the user to consider all the operational measures together - that is, whether improvements in one area are achieved at the expense of another.
Shareholder Information. External comparisons are difficult since each business has its own vision and key success factors. The BSC is therefore more effective as an internal document.
Intellectual Capital Index (IC-Index)
Goran Roos and his colleagues at Intellectual Capital Services Ltd first advanced the notion of an IC-Index. It is an example of a ‘second generation’ of IC practices that attempt to consolidate all the different individual indicators into a single index, and to correlate changes in intellectual capital with changes in the market (Roos et al. 1997).
Many companies have so far applied only a ‘balance sheet’ approach to IC. A complementary ‘profit and loss’ approach that considers the dynamic flows is a natural extension. Thus, flows among the different forms of capital, intellectual and material, should be analyzed and managed as much as the stocks. It is these flows that generate and alter the stocks. A summary index provides an immediate improvement to having long lists of individual indicators, because it requires companies to understand the priorities and relationships that exist between their different measures. Figure 4 shows how the IC-Index changes over time with respect to the Relationship Capital, Innovation Capital, Human Capital and Infrastructure Capital indices of a financial company.
Figure 4. IC-Index of a Financial Company.
Obviously, second generation practices do not substitute for first generation ones; rather, they complement them. It will be necessary to examine the components to understand what caused a given change in the aggregated index.
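A minimal sketch of how such an aggregation might work follows. The four capital forms come from Figure 4, but the indicator names, their normalized scores and the weights are hypothetical; in practice each company must choose its own, which is precisely why the index is context specific:

# Hypothetical indicator scores, already normalized to a 0-1 scale.
indicators = {
    "Relationship Capital":   {"customer_retention": 0.90, "partner_satisfaction": 0.70},
    "Innovation Capital":     {"new_product_revenue_share": 0.40, "patents_filed": 0.55},
    "Human Capital":          {"training_index": 0.65, "employee_retention": 0.85},
    "Infrastructure Capital": {"it_uptime": 0.95, "process_documentation": 0.60},
}
# Hypothetical weights reflecting the company's priorities; they sum to 1.
weights = {"Relationship Capital": 0.35, "Innovation Capital": 0.30,
           "Human Capital": 0.20, "Infrastructure Capital": 0.15}

# Each capital-form index is the average of its indicators;
# the IC-Index is the weighted sum of the capital-form indices.
form_indices = {form: sum(vals.values()) / len(vals)
                for form, vals in indicators.items()}
ic_index = sum(weights[form] * idx for form, idx in form_indices.items())

for form, idx in form_indices.items():
    print(f"{form}: {idx:.2f}")
print(f"IC-Index: {ic_index:.2f}")

Because the per-form indices are retained alongside the aggregate, a manager who sees the IC-Index move can drill back down to the component that caused the change, as the preceding paragraph requires.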
Auditable and reliable. The concept of the IC-Index focuses on the monitoring of the dynamics of IC. This is a great improvement over existing IC measurement systems. Like most other measures of intangible assets, an IC-Index does depend on value judgments, in the choice of weights and indicators. The IC-Index is capable of taking into account performance from prior periods. One limitation of this is that it is subject to ‘one-off special events’ which can have a strong influence on moving the index up or down for some years after the event.
The most important characteristic of the IC-Index is that it is self-correcting: if the performance of the IC-Index does not reflect changes in the market value of the company, then the choice of capital forms, weights and/or indicators is flawed. Better tests of correlation between the IC-Index and market value are needed, and more research needs to go into the study of the time delays, so that these can be included in the picture as well.
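Such a correlation test is straightforward to sketch once time series exist for both the index and market value. The annual figures below are made up purely for illustration; the check is run at several lags because changes in IC may show up in market value only after a delay:

import numpy as np

# Made-up annual series: the IC-Index and market value (both indexed to 100).
ic_index = np.array([100, 104, 103, 110, 115, 118, 125, 124])
market_value = np.array([100, 101, 106, 105, 112, 118, 121, 129])

# Correlate market value against the IC-Index shifted back by `lag` years.
for lag in range(0, 3):
    if lag == 0:
        r = np.corrcoef(ic_index, market_value)[0, 1]
    else:
        r = np.corrcoef(ic_index[:-lag], market_value[lag:])[0, 1]
    print(f"lag {lag} year(s): r = {r:.2f}")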
Overhead and ease-of-use. The IC-Index approach consolidates IC indicators into an aggregate IC index. It prioritizes the different measures and thus reduces them to only a handful to help managers reduce complexity. Trends are used to gain useful management insights and therefore precise measurements of the indicators are not necessary. As a single index, it is much easier to implement and measurement overheads are greatly reduced.
Strategic Management. The IC-Index allows managers to understand the effects a particular strategy has on the IC of a company and compare two alternatives to understand which one is preferable from an IC point of view. At the same time, it will also allow managers to understand the efficiency of the company in transforming IC into financial value and financial capital into IC.
Shareholder Information. The IC-Index has made an even bigger part of the company (its stocks and flows of IC) visible to internal and external stakeholders compared with first generation IC approaches. The value of an IC-Index lies in its measurement of changes in IC stocks. This stock-flow perspective is quite powerful for researchers, since they can examine firms as organizational learning systems that try to minimize stock-flow misalignment. Bontis et al. (1999) suggest that changes in an IC-Index reflect changes in the underlying IC elements, which in turn signal changes in the underlying drivers of future earnings potential.
The IC-Index provides the basis of comparing IC performance of companies. However, it must be noted that it is very much context specific and is therefore limited in its universality among companies.
Summary
A summary of the advantages and limitations of the three methods of measuring and valuing knowledge in comparison with the traditional financial systems approach is listed in Table 1.
Criteria | Test | Traditional Financial Systems | Method 1 (Skandia's IC Model) | Method 2 (BSC) | Method 3 (IC-Index)
Auditable and reliable | Validity | High | Low | Low | Low
Auditable and reliable | Reliability | High | Medium | Medium | Medium
Auditable and reliable | Verifiable | High | High | High | High
Auditable and reliable | Precise | High | Low | Low | Low
Auditable and reliable | Rigorous | High | Low | Low | Low
Overhead & ease of use | Measurement overhead | Low | High | High | High
Overhead & ease of use | Easy to initiate and use | High | Low | Low | Low
Strategic management | Data addresses forward-looking needs | - | High | High | High
Strategic management | Allows trade-off decisions | Medium | Low | Low | High
Strategic management | Measures stock | High | High | High | Low
Strategic management | Measures flow | - | - | - | High
Shareholder information | Comparability | Medium | Low | Low | High
Shareholder information | Provides data at all levels in the company | Medium | High | High | High
Shareholder information | Engages all the value attributes of all stakeholders | Low | Medium | Medium | High
Table 1. Comparison of the various methods of measuring and valuing knowledge
Conclusion
Various methods exist that measure the value of knowledge. This paper has presented a number of criteria used to evaluate three of these methods. As the analysis shows, each method has its own advantages and limitations. Therefore, in applying these methods, it is important to recognize what each method aims to accomplish and the conditions under which those aims can be achieved.
References
1. Bontis, Nick (2001). Assessing knowledge assets: a review of the models used to measure intellectual capital. International Journal of Management Reviews 3(1), 41-60.
2. Bontis, Nick, Dragonetti, N.C., Jacobsen, Kristine and Roos, Goran (1999). The Knowledge Toolbox: A Review of the Tools Available to Measure and Manage Intangible Resources. European Management Journal 17(4), 391-402.
3. Davenport, T.H. and Prusak, L. (1998). Working Knowledge: How Organisations Manage What They Know. HBSP, Boston.
4. Edvinsson, Leif and Malone, Michael S. (1997). Intellectual Capital. Judy Piatkus (Publishers) Ltd, London.
5. Kaplan, Robert S. and Norton, David P. (1996). The Balanced Scorecard. HBSP, Boston.
6. Pike, Steve and Roos, Goran (2000). Intellectual Capital Measurement and Holistic Value Approach (HVA). Works Institute Journal (Japan) 42.
7. Roos, Johan, Roos, Goran, Dragonetti, Nicola Carlo and Edvinsson, Leif (1997). Intellectual Capital: Navigating the New Business Landscape. Macmillan Press Ltd, London.
8. Sveiby, K.E. (1997). The New Organisational Wealth. Berrett-Koehler, San Francisco.
Thursday, November 20, 2003
Application of Data Mining to support Customer Relationship Management in the Financial Services Industry
Data mining is the process of exploration and analysis, by automatic or semi-automatic means, of large quantities of data in order to discover meaningful patterns and rules (Berry & Linoff, 2000). The purpose of data mining is to make better decisions. Difficult, high-value problems that would benefit from data mining include allocating marketing and promotional resources, identifying likely buyers for products and services, and assessing customer behaviour and value. The financial services industry (FSI) provides exactly this kind of environment. This paper looks at how data mining is applied to customer relationship management (CRM) in this industry.
The Financial Services Industry
The FSI consists of the banking, insurance and securities industries. The insurance industry, in particular, is rich in data, and data analysis is used to measure risk and price products (SAS Institute, 2002). The life insurance business is quite different from the property and casualty business, because whole life insurance is really an investment, and term life insurance often complements investments. Life insurance companies expect their future competition to come from other financial services companies, such as banks and mutual funds. As banks, insurance companies and securities firms continue to form new corporations, holding companies and distribution agreements, competition in the FSI should continue, if not increase significantly. Today’s competitive environment mandates that financial service providers align their sales and service strategies with customers’ needs to retain those they have and those they want (Peppard, 2000). Detailed data analysis is fundamental to this, and this is where data mining comes into the picture. A significant number of financial services providers already use data mining and rely on predictive modeling to run their businesses profitably (Mazier, 2002). For these companies, data mining enhances their capability to run their business.
Customer Relationship Management
CRM refers to a marketing approach that uses continuously refined information about current and potential customers to anticipate and respond to their needs (Peppard, 2000). CRM appears to be the philosophy that will drive marketing strategies in the 21st century (Danna & Gandy, 2002). There are only three ways to increase the profitability of a customer base: acquire more customers, optimize the value of existing customers, or retain the right customers longer - and all of these must be achieved at lower cost. As the economic climate becomes more competitive, the fight over customers intensifies. Of the three choices above, acquiring new customers is the most expensive: research shows that acquiring a new customer costs 5 to 10 times more than retaining an existing one. Studies also show that loyal customers buy more over their lifetime and are willing to pay a premium for doing business with someone they like and trust. Therefore, while organizations will clearly continue looking for new customers, they now know that, once a customer is acquired, it is worth a significant investment to keep them. CRM is a way to do that.
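A back-of-envelope sketch makes the retention economics concrete. Only the 5-to-10-times cost ratio comes from the research cited above; every other figure here is hypothetical:

# Hypothetical figures; only the 5-10x cost ratio comes from the text.
retention_cost = 40.0                   # annual cost to keep an existing customer
acquisition_cost = 7 * retention_cost   # mid-range of the 5-10x rule of thumb
annual_margin = 120.0                   # gross margin per customer per year
years_retained = 5

lifetime_margin = annual_margin * years_retained
net_if_acquired = lifetime_margin - acquisition_cost - retention_cost * years_retained
net_if_existing = lifetime_margin - retention_cost * years_retained
print(f"Newly acquired, kept {years_retained} years: net {net_if_acquired:.0f}")
print(f"Already on the books, kept {years_retained} years: net {net_if_existing:.0f}")

Under these assumptions the retained customer is worth more than three times the newly acquired one over the same period, which is why the investment case for retention is so strong.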
CRM begins with in-depth analysis of customer behaviour and attributes to achieve complete knowledge of the customers, their habits and desires and their needs (Peppard, 2000). It then applies this knowledge to the formulation of marketing campaigns, strategies, and treatment plans. Herein lies the value of data mining. Data mining technologies have allowed firms to discover and predict whom their most profitable customers will be by analyzing customer information aggregated from previously disparate databases. These customers are then targeted for special treatment based on their anticipated future value to the company. In order to support CRM, a data mining system must be able to sift enterprise-wide data enabling financial service providers to:
· Analyse the profiles and preferences of existing customers
· Predict customer buying habits
· Focus sales and marketing campaigns on prospects who have a high probability of becoming customers (SAS Institute, 2002).
Specifically, data mining can support CRM efforts of financial services providers in the following areas:
Customer Retention
A bank was concerned with the behaviour of customers over time - whether they would be transactors (who pay off their balances every month), revolvers (who pay the minimum balance and lots of interest), or convenience users (who pay off the balance over several months) (Berry & Linoff, 2000). In particular, the bank was concerned about churn, the word used to describe customers who are likely to leave in the near future. Churn management consists of developing techniques that enable companies to retain their profitable customers, and it aims at increasing customer loyalty (Lejeune, 2001). The premise is that existing customers are more profitable than new customers; that it is less expensive to sell an incremental product to existing customers; that customer retention is maximized by matching products and levels of service more closely to customer expectations; and that attracting new customers is expensive (Peppard, 2000).
Through data mining activities, banks are able to identify which customers have a high lifetime value ranking and are at risk of leaving the bank for another financial services provider (Koh & Chan, 2001; Danna & Gandy, 2002). At the same time, they are also able to identify which customers have a low lifetime value and are unlikely to leave the bank. Armed with this information, banks can vary their offerings to reduce churn. In the case of customers with a high lifetime value ranking, the offerings are meant to strengthen the bank’s relationship with the customer, and in communicating them the bank emphasizes the value it places on having them as customers. Thus banks can reward customers perceived to be more valuable with lower prices for services, while those perceived to be less lucrative are likely to face rising service fees. Chase Manhattan Bank, for example, used data mining to model customer churn and took the unusual step of reducing the required minimum balance in customers’ checking accounts for two consecutive years. The result was that the proportion of profitable customers in the overall customer base improved.
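A minimal sketch of such a churn model follows, using a decision tree - one of the standard techniques in the data mining literature the sources draw on. The field names and the synthetic data are hypothetical, not any bank's actual variables:

import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Synthetic customer records: [tenure_months, products_held, balance_trend].
# A negative balance trend means balances are draining away (a churn warning sign).
X = np.column_stack([
    rng.integers(1, 120, 500),   # tenure in months
    rng.integers(1, 6, 500),     # number of products held
    rng.normal(0, 1, 500),       # balance trend over recent months
])
# Synthetic label: short-tenure customers with draining balances churn more often.
y = ((X[:, 0] < 24) & (X[:, 2] < 0)).astype(int)

model = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# Score the whole book and flag the highest-risk customers for a retention offer;
# in practice the score would be combined with a lifetime-value ranking.
churn_prob = model.predict_proba(X)[:, 1]
at_risk = np.argsort(churn_prob)[::-1][:10]
print("Top 10 at-risk customer rows:", at_risk)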
Data mining is also used to model the lifetime value of a bank’s customers and to estimate the “growability” of certain segments (Danna & Gandy, 2002). The purpose is to take a more customer-centric approach to management by tiering the customer base in order to better channel communication and services. With this data in hand, a bank can set out to differentiate its offerings. Take the example of the Royal Bank of Canada. The bank converted 60% of its customers that were paying on a fee-for-service basis into flat-fee packages, because such customers tend to stay loyal to the bank. The bank was not concerned about retaining the loyalty of the 40% of its fee-for-service customers it did not manage to convert.
Customer Acquisition
Many financial institutions now use sophisticated profitability, potential and propensity models to determine how best to invest scarce marketing resources for customer acquisition (Peppard, 2000). Data mining methods are used to discover attributes in customer databases that predict response rates to a bank’s marketing campaigns (Koh & Chan, 2001). Attributes identified as campaign friendly can then be matched to new lists of non-customers to increase the effectiveness of the marketing campaign. The Canadian Imperial Bank of Commerce utilised data mining to achieve a phenomenal response rate of 47% to its direct mail campaign; much of the success was attributed to targeting the right customers and being able to predict their responses. At Fleet Bank, data mining was used to identify the best prospects for marketing its mutual funds, based on customer demographics and account data.
A life insurance company recognized the need for a direct insurance business that would supplement the agent networks through which most of its life insurance is sold (Berry & Linoff, 2000). The key to selling life insurance is determining which prospects are likely to purchase it. The purchase is a one-time event, but it often comes only after multiple contacts with the prospect. As part of the investment in building the direct side of the business, the company built a prospect data warehouse for data mining to support direct marketing campaigns.
Customer Extension
A major bank used data mining to improve its ability to cross-sell (Berry & Linoff, 2000). Selling additional services to the customers you already have is referred to as cross-selling; the closely related practice of getting existing customers to trade up to more profitable products is called up-selling. When banks merge, one of the justifications usually given is the opportunity to cross-sell products and services. For example, when a customer comes in for a new car loan, what could be more natural than to offer him or her car insurance at the same time? This requires determining precisely which products should be offered to which customers, and figuring out how best to reach them with the message. The bank built on data drawn from its customer information file to improve its ability to cross-sell new products to existing customers by determining each existing customer’s best next offer - the offer most likely to elicit a positive response from that customer. Equipped with customer profiles enriched by data mining, representatives in the bank’s customer service call centre were able to identify the products and services most relevant to callers (Koh & Chan, 2001).
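The "best next offer" logic itself is simple once a model has produced response probabilities: for each customer, score every product the customer does not already hold and pick the one with the highest predicted response. A minimal sketch, with hypothetical customers, products and probabilities:

# Hypothetical model output: predicted response probability per customer and product.
response_prob = {
    "cust_001": {"car_insurance": 0.31, "mutual_fund": 0.12, "credit_card": 0.25},
    "cust_002": {"car_insurance": 0.08, "mutual_fund": 0.44, "credit_card": 0.19},
}
already_held = {"cust_001": {"credit_card"}, "cust_002": set()}

def best_next_offer(customer):
    """Highest-probability product the customer does not already hold."""
    candidates = {p: prob for p, prob in response_prob[customer].items()
                  if p not in already_held[customer]}
    return max(candidates, key=candidates.get)

for cust in response_prob:
    print(cust, "->", best_next_offer(cust))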
Prudential, for example, predicts the likelihood of customers purchasing additional products (Mazier, 2003). This helps the insurer limit its marketing dollars to those people most likely to respond to its offers. The global marketing department at Prudential primarily uses data mining to identify key characteristics of products and customers that indicate how customers may react to particular offers.
Customer Risk Management
Banks use data mining to manage risk (Koh & Chan, 2001). Credit card fraud detection is the application that best represents success in data mining: there are massive amounts of data, the penalty for a mistake is severe, and the classification has to be done very quickly, online. Data mining techniques that can spot a potentially fraudulent credit card transaction before it has been completed are a good example. In the moments after the credit card is swiped, the most important thing is to make a quick and accurate prediction (Berry & Linoff, 2000).
In loan approval and overdraft facilities, assessing credit risk was long a rule-based affair, until more accessible and easier-to-use data mining software made it possible to apply powerful data mining techniques to risk assessment (Berry & Linoff, 2000). For example, a decision tree solution for credit risk assessment produces credit-scoring rules for all the accounts in a bank database, and credit jeopardy lists can be drawn up through multiple database queries. This scoring, or classification into high/low risk, is based on the attributes of each consumer account, such as overdraft records, outstanding loans, history of derogatory credit reports, account type, income levels and other information.
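A minimal sketch of extracting human-readable credit-scoring rules from a trained decision tree follows. The attributes and the synthetic data are hypothetical stand-ins for the account fields listed above:

import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(1)
features = ["overdrafts_last_year", "outstanding_loans", "income_thousands"]

X = np.column_stack([
    rng.integers(0, 10, 300),    # overdraft incidents in the last year
    rng.integers(0, 5, 300),     # number of outstanding loans
    rng.integers(20, 200, 300),  # annual income in thousands
])
# Synthetic label: frequent overdrafts combined with low income -> high risk.
y = ((X[:, 0] > 5) & (X[:, 2] < 60)).astype(int)

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
# export_text prints the tree as if-then rules a credit officer can read.
print(export_text(tree, feature_names=features))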
Data mining is also used to profile consumers against a risk criterion (Danna & Gandy, 2002). Those profiled above some risk criterion level are unlikely to learn about lending programs and other credit offers. The decision about who receives information about lending programs is made in a context where risk is no longer defined in terms of default, but as the failure to be significantly profitable.
For example, Corestates Bank analyses its customer and credit portfolio to reduce its credit risk and monitor high-risk accounts. The Bank of Montreal analyses its customers’ mortgage transaction history in checking, savings and other accounts for insight into customers’ risk of default. At the Bank of America, the mortgage division uses data mining on customer behaviour data to estimate bad loans, so that credit risk managers can allocate optimal loan loss reserves - which affect profitability directly.
Insurance companies rely on data mining to make profitable business decisions. Insurers must be able to accurately assess the risks posed by their policyholders to set insurance premiums at competitive levels (Apte, Liu, Pednault & Smyth, 2002). For example, overcharging low-risk policy holders would motivate them to seek lower premiums elsewhere; undercharging high-risk policy holders would attract more of them due to the lower premiums. In either case, costs would increase and profits inevitably decrease. The Farmers Insurance Group was able to use data mining to turn up one particular segment of experienced drivers who are unusually accident-prone. The conventional wisdom is that experienced drivers tend to have relatively low claim frequencies.
In order to set policy premiums, insurers need to predict the cost of claims filed by policy holders annually, given what is known about each policy holder. For example, data for motor insurance is analysed to build a predictive model to estimate loss ratios for policies (Berry & Linoff, 2000). Loss ratio is the insurance term for the ratio of claims paid out to premiums collected. It is a key driver of profitability.
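The loss-ratio calculation itself is simple; the modeling effort goes into predicting it for each policy segment. A minimal sketch with hypothetical segment figures:

# Hypothetical per-segment totals for a motor insurance book.
segments = {
    "young_urban":       {"claims_paid": 820_000, "premiums": 900_000},
    "experienced_rural": {"claims_paid": 310_000, "premiums": 650_000},
    "multi_car_sports":  {"claims_paid": 140_000, "premiums": 260_000},
}

for name, s in segments.items():
    loss_ratio = s["claims_paid"] / s["premiums"]
    print(f"{name}: loss ratio = {loss_ratio:.2f}")

A segment whose predicted loss ratio approaches or exceeds 1.0 is unprofitable at current premium levels, which is what drives the repricing decisions described next.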
At the Farmers Insurance Group, data mining found that if a sports car was not the only vehicle in the household, the accident rate is not much greater than that of a regular car. The conventional wisdom is that all drivers of high-performance sports cars are more likely to have accidents than drivers of other types of cars. By focusing on owners of multiple cars who happen to have a sports car, and offering them reduced rates, the car insurance company was able to grow market share with a minimum of risk.
For fraud detection, the MetLife Auto & Home division of MetLife Inc. mines its own claims data from the past two years to look for policyholders committing rate evasion - for instance, lying about where they live or where they garage their car in order to pay lower premiums. The company checks policyholders’ ZIP codes against their home telephone numbers to see whether the cities match (Chordas, 2003).
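A minimal sketch of that cross-check follows. The lookup tables here are tiny hypothetical stand-ins for the commercial ZIP-code and telephone-exchange databases an insurer would actually use:

# Hypothetical lookups: ZIP code -> city and phone area code -> city.
zip_to_city = {"10001": "New York", "07030": "Hoboken"}
area_code_to_city = {"212": "New York", "201": "Hoboken"}

policyholders = [
    {"id": "P1", "zip": "07030", "phone": "201-555-0101"},  # consistent
    {"id": "P2", "zip": "07030", "phone": "212-555-0199"},  # possible rate evasion
]

for p in policyholders:
    zip_city = zip_to_city.get(p["zip"])
    phone_city = area_code_to_city.get(p["phone"].split("-")[0])
    if zip_city != phone_city:
        print(f"{p['id']}: ZIP says {zip_city}, phone says {phone_city} - review")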
Conclusion
In FSI, every transaction is linked to a particular customer about whom much is already known. Bank statements, policy documents, ATM transactions, and online banking Web sites all have the potential to be used for personalized messages chosen on the basis of information gathered throughout the history of the relationship of the customer with the financial services provider. Although this paper focuses on FSI, the application of data mining to support CRM is an important strategy for any company that offers its customers a wide range of products or services. Data mining is a way to make businesses more profitable – something that is bound to be of interest in any industry!
References
1. Apte, Chidanand; Liu, Bing; Pednault, Edwin P.D. & Smyth, Padhraic (2002). Business Applications of Data Mining. Communications of the ACM. Volume 45, No. 8, pp. 49-53.
2. Berry, Michael J. A. & Linoff, Gordon (2000). Mastering Data Mining: The Art and Science of Customer Relationship Management, Wiley Computer Publishing, New York, NY.
3. Chordas, Lori (2003). Data-Mining Information Helps MetLife Detect Fraud. Best’s Review.
4. Danna, Anthony & Gandy, Oscar H., Jr. (2002). All that glitters is not gold: Digging beneath the surface of data mining. Journal of Business Ethics. Volume 40, No. 4, pp. 373.
5. Koh, Hian Chye & Chan, Kin Leong, Gerry (2001). Data Mining and Customer Relationship Marketing in the Banking Industry. Singapore Management Review. Volume 24, No. 2.
6. Lejeune, Miguel A.P. (2001). Measuring the impact of data mining on churn management. Internet Research. Volume 11, No. 5, pp. 375-87.
7. Mazier, E.E. (2002). Insurers Are Striking Gold With Data Mining Technology. National Underwriter, pp. 48-49
8. Peppard, J. (2000). Customer Relationship Management in Financial Services. European Management Journal. Volume 18, No. 3, pp. 312-27.
9. SAS Institute (2002). Data Mining in the Insurance Industry. A SAS Institute White Paper.
Wednesday, November 19, 2003
Knowledge Sharing and Collaboration: Strategies for Sales and Marketing
Sales and marketing bridges the gap between an organization’s external knowledge (about customers, competitors and markets) and its internal knowledge (about services, products and technology). Competitive advantage does not flow automatically from the possession of knowledge; one has to know how to extract value from knowledge through sharing and collaboration. This paper looks at the crucial role of sales and marketing in facilitating the sharing of knowledge between the marketplace and the organization. Presented in the light of advances in the field of knowledge management, it should help sales and marketing professionals derive the strategies that will be most effective in their own organizations. In today’s competitive environment, the speed and effectiveness with which knowledge is shared will be key to an organisation’s survival.
For a copy of the paper, drop me a note.
Tuesday, November 18, 2003
Most KM sites focus on the breadth of information; here the focus is on depth. There will be specifics that you can take home, use, and report back on - successes and issues alike. We will investigate specific KM topics in detail and drill down. Together we can make KM more practical ... Enjoy.