Decision Automation in BI: Design Guidelines for Business Analytics and Rules June 18, 2007

Posted by Cyril Brookes in Decision Automation, General, Issues in building BI reporting systems.

Authors routinely ignore the specifics of designing automated decision components for BI systems, I suspect because they believe these details are application dependent. However, I believe there can be, and should be, more rigor in the specification of the business analytics, rules and predictions that underlie these designs. Generalizations can only take us so far; sooner or later we have to get down and dirty.

In this, my third recent post that discusses decision automation, I offer some guidelines that can provide the requisite structure; but you, Dear Reader, can judge for yourself, as always.

To recap, and hopefully spare you the need to read my recent stuff, my hypothesis is that the project selection and specification of BI systems incorporating decision automation should follow the five phases below. Earlier posts have considered the overall issue (here) and Phases 1 through 3 (here).

  1. Identify the controllable business variables in your business environment
  2. Determine the business processes, and relevant decisions, that impact those controllable variables
  3. Identify the BI systems that support the business processes and decision contexts selected in Phase 2
  4. Design the business analytics that are the basis of the decision automation: business rules, predictive analyses, time series analyses, etc. wherever Phase 3 indicates potential value
  5. Evaluate feasibility and profitability of implementing the analytics created in Phase 4

This post covers Phase 4, arguably the most interesting from a technical viewpoint.

It is axiomatic, I believe, that we should now revisit the steps in the decision making process that are to be automated. Each step requires a different style of automation and offers distinct benefits; some are more complex than others.

Drawing on Management 101, with Herbert Simon as our mentor, we know that the universal key steps in the decision making process, be it manual or automated, high level strategic or low level operational, are:

  • Measuring and assessing the business process status: Where are we? Is it good or bad? What are people saying about us?
  • Finding “problems”: i.e. situations (including opportunities) that need a response, out of specification KPIs, adverse trends or predictions, unusual circumstances, people telling us we have a problem!
  • Diagnosing the problem context: i.e. how bad is it, what happens if nothing is done, has this happened before, what happened then, etc.?
  • Determining the alternatives for problem resolution: i.e. what did we do last time, what are people suggesting we do?
  • Assessing the consequences of the outcomes from each alternative: i.e. predictive modelling, computing merit formulae, what happened after previous decisions in this problem area?
  • Judging the best perceived outcome, and hence making the decision: i.e. comparing the merit indicators, accepting or rejecting people’s opinions.

Most of you, Dear Readers, will know all this, but we need to be sure we are all on the same page. Otherwise, confusion reigns, as indeed it does in several of the recent articles on this subject, especially the marketing hype ones.

Our objective with BI system design is to enable improved business process performance. Our primary channel to do this is to collect and report information that supports management decision making. Apart from passively dumping a heap of facts and figures on a screen, we know we can empower improved decision making in two ways:

Create action oriented BI systems by presenting the information in a pre-digested way that highlights good and bad performance and spurs the executive to react appropriately. Dashboards and scorecards are obvious examples of how we can do this. I proposed in earlier posts some general design principles for summarization and drill-down specification. OR

We can actually make decisions automatically as part of the BI system, adjusting the controllable parameters of the business process without reference to a manager.

We’re focusing here on the second option. It’s not that this is a new idea; we’ve been doing it for decades. But the business analytics, rule management and prediction software now available makes the whole process much easier than when we had to rely on IF…AND…THEN…ELSE statements to make it all work. A reference that shows the scope of what’s available is by James Taylor.

Let us now consider how we can automate all, some, or one of Simon’s decision process steps. Clearly, even automating one step effectively may be beneficial. It shouldn’t be essential to “go the whole hog” in the name of decision automation; just do what makes business sense and leave the balance to the manager. Further, we can complete the automation project in stages, progressively removing human interaction; this is Phase 5 from the list above.

Assessing status:

Our BI system will tell us the status; if it doesn’t, fix it. We must have available, in machine-readable format, all the business KPIs and metrics relevant to the business process, including the state of the controllable variables (Phase 1 of the methodology above). The trick is to manipulate that information so we can compute automatically whether the status is good or bad. Remember, we won’t have a manager to decide this for us if we’re automating. Don’t forget that the status assessment will often include comments or opinions from real people. Just because we’re automating the decision doesn’t mean we can’t accept human inputs.

The most common type of human input that impacts the decision automation is a commentary on the validity, accuracy or timing of a KPI or other important metric. And the most common impact such an input has is: Abort Immediately. You don’t want the automated system making decisions with garbage data.

For this reason, the design should desirably contain some output of status data for human monitoring purposes.
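To make this concrete, here is a minimal Python sketch of the status step as described above: statuses are computed from machine-readable KPIs, and any human comment flagging a metric as suspect triggers the Abort Immediately response. All the names and sample figures (KPI, flagged_kpis, the metrics) are invented for illustration.

    from dataclasses import dataclass

    @dataclass
    class KPI:
        name: str
        value: float
        target: float

    class DataQualityVeto(Exception):
        """Raised when a human has flagged an input metric as suspect."""

    def assess_status(kpis, flagged_kpis):
        """Return a per-KPI status, aborting on any flagged input."""
        suspect = [k.name for k in kpis if k.name in flagged_kpis]
        if suspect:
            # The most common human input says "this number is wrong": abort.
            raise DataQualityVeto(f"Aborting: suspect inputs {suspect}")
        return {k.name: "good" if k.value >= k.target else "bad" for k in kpis}

    kpis = [KPI("gross_margin_pct", 31.5, 35.0), KPI("on_time_delivery", 0.97, 0.95)]
    print(assess_status(kpis, flagged_kpis=set()))
    # {'gross_margin_pct': 'bad', 'on_time_delivery': 'good'}

The raised exception, or the status dictionary itself, is also a natural output to log for the human monitoring suggested above.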

When we have the metrics and other input data accessible, we can move to consider automating the Problem Finding step.

Finding problems:

Situations that need a response can often be determined automatically by examining the degree of status goodness or badness. Fortunately, there are a limited number of available techniques for this. They are mostly the same methods we use to alert managers to problems in a non-automated environment. Almost all can be automated by applying business rules, statistical procedures and/or predictive models. The only human input likely is when the CEO, or another potentate, says you have a problem; this being self-evident thereafter.

The automatable problem finding techniques I use most often include:

Performance Benchmark Comparison: Compare the important KPIs with benchmarks that make sense from a problem identification viewpoint. Obvious examples include: actual versus budget or plan; previous corresponding period; best practice; etc. In addition, you can compute all kinds of metrics that relate to performance and compare them across divisions, products, locations, market segments, etc.

Performance Alerting: The next step is to use the above automated comparisons to identify bad, or superior, performance. This normally involves placing relevant metrics on a scale of awful to excellent. It’s a form of sophisticated exception analysis. The need for action response is usually determined automatically by the assessed position of the metrics on the scale.
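A hedged sketch of what such a scale might look like in Python; the variance bands, thresholds and sample figures are assumptions for illustration, not recommendations:

    def variance_pct(actual, benchmark):
        return 100.0 * (actual - benchmark) / benchmark

    def rate(v):
        """Place a variance percentage on a coarse awful-to-excellent scale."""
        if v < -20: return "awful"
        if v < -5:  return "poor"
        if v <= 5:  return "acceptable"
        if v <= 20: return "good"
        return "excellent"

    actual_vs_budget = {"sales": (940_000, 1_000_000), "margin": (0.38, 0.30)}
    for metric, (actual, budget) in actual_vs_budget.items():
        band = rate(variance_pct(actual, budget))
        if band in ("awful", "poor", "excellent"):
            print(f"ALERT {metric}: {band}")  # a response is needed at the extremes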

Trend Analysis and Alerting: If no problem is found with the basic performance analysis, it is time to bring in the heavy statistical artillery. Trends of performance metrics, either short or long term, are often good indicators of problems that are not immediately apparent. Alerts based on favorable or adverse trends that trigger a need for a response are easily automated; current statistical software is well up to the task.
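As a minimal illustration, a least-squares slope over a short KPI history is often enough to drive a trend alert; the series and the 1%-of-base-per-week threshold below are invented:

    def slope(series):
        """Ordinary least-squares slope of y against time index 0..n-1."""
        n = len(series)
        xbar, ybar = (n - 1) / 2.0, sum(series) / n
        num = sum((i - xbar) * (y - ybar) for i, y in enumerate(series))
        den = sum((i - xbar) ** 2 for i in range(n))
        return num / den

    weekly_margin = [0.340, 0.336, 0.331, 0.329, 0.322, 0.318]
    s = slope(weekly_margin)
    if s < -0.01 * weekly_margin[0]:   # falling faster than 1% of base per week
        print(f"Trend alert: margin declining {s:.4f} per week")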

Forecasting and Alerting: Even if current metrics are within acceptable bounds, the future may be problematic, and problems are better corrected early than late. Applying predictive models, and then reassessing the adequacy of the forecast critical performance metrics, is often valuable and also relatively easy to automate.
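One hedged sketch of forecast-based alerting, using Holt’s linear exponential smoothing as a stand-in for whatever predictive model suits the application; the smoothing parameters, series and 30-day bound are assumptions:

    def holt_forecast(series, alpha=0.5, beta=0.3, horizon=4):
        """Project the series 'horizon' periods ahead with level + trend smoothing."""
        level, trend = series[0], series[1] - series[0]
        for y in series[1:]:
            prev = level
            level = alpha * y + (1 - alpha) * (level + trend)
            trend = beta * (level - prev) + (1 - beta) * trend
        return level + horizon * trend

    cash_cover_days = [42, 41, 39, 38, 35, 33]
    projected = holt_forecast(cash_cover_days)
    if projected < 30:   # assumed acceptable lower bound
        print(f"Forecast alert: cash cover heading to {projected:.1f} days")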

Alerting to Unusual Situations: Time series analysis will often highlight hidden issues, e.g. with changes in customer, supplier, manufacturing or marketing activity. For example, the credit rating of a customer may be altered if the statistical properties of its payment pattern alter significantly.

Diagnosing the context:

The scope, and the necessity, for diagnosis in an automated decision environment are limited.

In a non-automated context this is an important part of a BI or decision support system. It involves assisting the human decision maker to understand how bad the problem is, what will happen if no action is taken, and how rapidly disaster will strike.

Normally the automated decision context is operational and relatively simple. I have found that it is often desirable to validate the problem identification procedures specified earlier. Hence, I look for ways to check that the problem is both real and significant enough to warrant automatic rectification action. This could include notifying a human monitor that action is imminent and giving a veto opportunity.

Determining alternatives:

If you’re following the methodology I outlined earlier you will have identified the controllable variables in the target business process. This would have been done in Phase 1. Some suggestions as to potential controllable variables were presented in an earlier post.

Obviously this is a critical step in the design of an automated decision system. However, provided you have done the initial homework on the control levers available for adjusting the performance of the business process, it is easy. It is simply a case of determining which levers to move, whether they move up or down, and by how much.

It may require some modelling work to answer these questions, but most often (I find) a basic table linking variance of performance metric to control adjustment is adequate. Implementing such a specification using modern rule management systems is trivial.
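A minimal sketch of such a table; the variance bands, levers and adjustment sizes are invented for illustration:

    RULES = [
        # (low %, high %, control lever, adjustment)
        (-100.0, -15.0, "price",       -0.05),   # deep sales shortfall: cut price 5%
        ( -15.0,  -5.0, "advertising", +0.10),   # mild shortfall: lift ad spend 10%
        (  -5.0,   5.0, None,           0.00),   # within tolerance: do nothing
        (   5.0, 100.0, "production",  +0.08),   # demand surge: raise output 8%
    ]

    def adjustment_for(variance_pct):
        """Look up which lever to move, and by how much, for a given variance."""
        for low, high, lever, delta in RULES:
            if low <= variance_pct < high:
                return lever, delta
        return None, 0.0

    print(adjustment_for(-8.2))   # ('advertising', 0.1)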

Evaluating outcomes:

In the automated decision context this is usually a simple or non-existent step. The rules for determining the alternatives usually imply a certain outcome.

Often only one alternative is available, given the shortage of control variables. If more than one solution option is available, e.g. inadequate sales volume presages either a decreased price or increased advertising expense, it may require some modelling to determine the best outcome.

Complexity arises when more than one performance metric is out-of-specification. This will usually imply that more than one control variable needs adjustment. There may be interactions between the variables that require arbitration; or we may simply throw in the automation towel and advise a human monitor of the issues.
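One hedged way to express that arbitrate-or-surrender logic in code: apply the proposed adjustments only when they agree in direction on each lever, and otherwise hand the conflict to a human monitor. The proposal format is an assumption:

    def arbitrate(proposals):
        """proposals: list of (metric, lever, delta). Apply or escalate."""
        by_lever = {}
        for metric, lever, delta in proposals:
            by_lever.setdefault(lever, []).append((metric, delta))
        actions, conflicts = {}, []
        for lever, wants in by_lever.items():
            directions = {delta > 0 for _, delta in wants}
            if len(directions) > 1:              # lever pulled in opposite directions
                conflicts.append((lever, wants))
            else:
                actions[lever] = sum(delta for _, delta in wants) / len(wants)
        if conflicts:
            return ("ESCALATE_TO_HUMAN", conflicts)   # throw in the automation towel
        return ("APPLY", actions)

    print(arbitrate([("margin", "price", +0.03), ("volume", "price", -0.05)]))
    # ('ESCALATE_TO_HUMAN', [('price', [('margin', 0.03), ('volume', -0.05)])])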

Decision making:

For most decision automation systems the decision is effectively made with the alternative determination, and judgement is not required. If more than one alternative is identified, then an automated assessment of the evaluation determines the decision. Subjective input is usually not relevant or sought. If subjective issues are relevant, then a human assessor is required.

 

In a further post I will consider the implementation issues and recap on the overall method, since I’ve been formalizing my thinking as these posts have been created. Please advise, Dear Reader, if you have any comments on the process thus far, especially if you find it helpful or otherwise.

Building Automated Decision Making into BI System Design – A Methodology Overview May 18, 2007

Posted by Cyril Brookes in BI Requirements Definition, Decision Automation, General, Issues in building BI reporting systems.

Automated decision making for business is just about flavor of the month. Most emphasis has been on automating business analytics, for example underwriting in the insurance industry and stock market program trading. But there are ample opportunities for incorporating automation in more conventional BI systems, especially corporate performance management, where there has so far been little discussion.

Tom Davenport’s recent work on business analytics has been widely reported and commented upon. The consultants and software marketers are circling the wagons.

To highlight opportunities and stimulate discussion among BI analysts, this post explores how relevant BI system targets for automation might be identified.

Most BI analysts see their role as designers of systems to support management decision making through effective presentation of information. That is, of course, commendable and important. But is that all there is? That focus doesn’t preclude building automated decision making systems if the context is suitable. It’s just that it isn’t done often. We seem reluctant to try to replace managers; maybe it’s because they are our bread and butter?

There are three generally accepted classes of decisions in business; operational, tactical and strategic. It’s pretty obvious that automatic decision making is almost always associated with operational, and perhaps some tactical, contexts. If it’s strategic, then forget it. Since many BI environments serve a mix of strategic and operational users, the prevailing focus is almost always on information presentation, rather than active replacement of human decision makers.

This discussion reminds me of a 25-year prediction from a long-forgotten business journal article of the 1960s: “Boards of Directors will be retained for sentimental reasons; computers will make all the decisions….”. Didn’t happen, and won’t. A similar, but contrary, forecast in the HBR of June 1966: “A manager in the year 1985 or so will sit in his paperless, peopleless office with his computer terminal and make decisions based on information and analyses displayed on a screen…” There still seem to be a lot of executive assistants around!

My intention with this post is to suggest a methodology or process which demonstrates how BI analysts can effectively and efficiently identify opportunities beyond the passive aim of information presentation. Even if the resulting design only partially automates decision making, it is likely to be a better, more effective solution than its passive counterpart, simply because it will be the result of a more creative and challenging design process.

In the current spate of articles there are many examples of apparently successful automated business process systems. While these may whet the appetite of a designer, they are not, in my view, useful guides when the task of synthesising a BI system incorporating automated decision making is being undertaken. When your child is given his/her first bicycle, showing someone cycling down the street isn’t going to be much help in teaching him/her how to ride. Hands-on synthesis is needed. Big pictures may create envy, but don’t instruct much.

I suggest that it will be worthwhile for a BI analyst and executive team to review the corporate BI environment, existing and planned, and assess the potential for including automated decision making in the BI systems supporting each business segment.

Further, such a review should use a project planning method which segments activities into several bite-sized Phases. Here’s a suggested outline, with more detail on each Phase to follow.

Phase 1: Identify the controllable business variables in the target businesses, ignoring specific business processes

Most articles on automated decision making start with the business process and BPM analyses. I think this is the wrong initial focus. To me, the optimal review starting point is to identify the control parameters of typical business processes that are amenable to automatic adjustment. The number of business process control “levers” available to management is finite, quite small in fact, and the number that might be controlled automatically, with profit, is even smaller. Examples include: Automatic pricing adjustment, dynamic production scheduling, staff re-assignment.

A more complete discussion on identifying control variables follows in a later post. It is, I believe, the most important part of project selection and specification. Get this wrong and you will certainly miss out on the best opportunities.

Phase 2: Identify potential business processes, existing or planned, that utilize one or more of these candidate control parameters and may benefit from automation

The same control variables are likely to appear in multiple business processes. For example, automatic price adjustment could impact BI systems supporting Order Entry, Production Scheduling, CRM, Inventory Management, etc.

Phase 3: Identify components of the candidate BI systems that may profitably incorporate automated decision making

Management 101, since Herbert Simon’s day, tells us that there is a defined decision making process, with several component steps between becoming aware of a problem or opportunity, and deciding what action to take. Automating the decision process clearly requires that one or more of these steps should be performed without reference to a human.

It is relatively easy to consider each of these decision process components in turn, to determine the extent to which it/they can be automated. My later post will give more detail if you are interested, Dear Reader.

Phase 4: Design the business analytics; business rules, predictive analysis, time series analysis wherever Phase 3 indicates potential utility

This is the fun part. The software tools for business rules management are much improved since I first started playing with IF…AND…THEN…ELSE statements as the basis for automation, as are the forecasting and statistical analysis packages.

I leave it to you to work out the details, as they are always application dependent. But always be aware that rules change, sometimes quickly, so dynamic management, or decision making agility if you will, is important. Enjoy.
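As one hedged illustration of that agility, the rules can live in data rather than in code, so they change without a rebuild or redeployment; the rule format and field names here are invented:

    import json, operator

    OPS = {"<": operator.lt, ">": operator.gt, "==": operator.eq}

    RULES_JSON = """[
      {"if": ["stock_days", "<", 10], "then": "expedite_replenishment"},
      {"if": ["stock_days", ">", 90], "then": "schedule_markdown"}
    ]"""

    def evaluate(facts, rules):
        """Fire every rule whose single condition holds for the given facts."""
        fired = []
        for rule in rules:
            field, op, value = rule["if"]
            if OPS[op](facts[field], value):
                fired.append(rule["then"])
        return fired

    rules = json.loads(RULES_JSON)             # could equally come from a file or DB
    print(evaluate({"stock_days": 7}, rules))  # ['expedite_replenishment']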

Also, note that Phase 4 will be an iterative process, with frequent Phase 5 reviews to ensure that business sense prevails, limiting the scope for white elephant projects; even though they can be fun.

Phase 5: Evaluation and feasibility reviews of the costs and benefits of automated decision making components within the BI system

Try not to let the excitement of creating rules and embedding predictive analytics in a BI system carry you away; well only a little bit anyway! To me, this is one of the most interesting and absorbing roles of being a BI analyst and designer; certainly it beats specifying reports.

Building automation into BI is highly recommended, especially if you are looking for a challenge!

DIY BI Design Best Practice April 23, 2007

Posted by Cyril Brookes in BI Requirements Definition, General, Issues in building BI reporting systems.

Backing up my conviction that DIY business intelligence is going mainstream, I’ve put together a set of good practice guidelines that might, with profit, be followed by the responsible BI Rogue. Will these renegades with spreadsheet in hand, data warehouse on tap and a vague specification in mind have regard for guidelines? Only time will tell, but we won’t have to wait long; the Mongolian hordes are at the PerformancePoint gate.

Many of these points are covered already in this blog, but Dear Reader, let’s face it; a man only gets a few good ideas in a lifetime, so one must expect some repetition!

Check #1: Existence

Does another existing report or spreadsheet cover the perceived requirements, fully or partially?

A no-brainer, but has to be asked

Check #2: Compliance

Will reporting these data and information complicate the Corporate regulatory situation in respect of SOX and similar? Are there security issues relating to the data to be purloined, massaged and disseminated?

This is probably best ignored by the average DIY BI Rogue, except in a bank or some such place where spooks abide. Worry about it when a result is to hand?

Check #3: Iterations

Irrespective of your confidence in your spreadsheet skills and all other aspects of this BI project, be assured that it will require several iterations of specification, build and test before the result is deemed adequate, or other issues supersede the whole episode.

Plan on starting simple, and increase complexity and report niceties in subsequent iterations.

Check #4: Specifications

This is where it is all at. Do this right and it will be fine; ignore it and a mess will result. Make sure you have a specification for each iteration. A whole treatise could be written here, but, Dear Reader, if you want detail look here.

It is self-evident to say that you need to know what information is to be provided, the data required to obtain the information and the transformations needed to convert data to information. Don’t start without at least this. See Check #6 for suggestions on presentation, but they can be later iterations – get the data and basic transformation going first.

Check #5: Know your data

Knowing your data implies knowing its metadata, lineage, update schedules, dimensions and planned amendments. My tool to do this is described here.

Just because a cube has the data you want today doesn’t mean it will be there tomorrow, or that the update schedule is right for your specification. Don’t waste a lot of time on MDX expressions that will only work from Thursdays to Mondays, because that’s when the update cycle is complete.

Check #6: Presenting results to aid assessment

Part of the specification task, but best left to later iterations, is the design of result presentation. I don’t mean graph versus table versus bar charts; this is relatively trivial. What is important is the way the raw information obtained from the data transformations is pre-analyzed to aid the assessment of implications. This is the point where the amateur and professional, or competent, DIY Rogues part company. Chalk and cheese has nothing on this differentiator.

Again there’s a treatise here, but basically the conscientious DIY BI Rogue should be aware that he/she can offer, at a minimum, the following (a small sketch follows the list):

  • Goal Variances (exception reporting, if you will);
  • Benchmark Comparisons (actual versus budget, plan or anything reasonable);
  • Trend Analysis;
  • Forecasts (based on time series of the data, if it’s available of course);
  • Drilldown (more detail about a context, provided the narrower dimensions are in the data cube).
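As a minimal sketch of the first two offerings (goal variances and a benchmark comparison) over a small extract, with pandas as the DIY workhorse; the column names and the 10% tolerance are assumptions about your data:

    import pandas as pd

    df = pd.DataFrame({
        "product": ["A", "B", "C"],
        "actual":  [120.0, 80.0, 95.0],
        "budget":  [100.0, 100.0, 100.0],
        "prior":   [110.0, 90.0, 96.0],   # previous corresponding period
    })

    df["goal_variance_pct"] = 100 * (df["actual"] - df["budget"]) / df["budget"]
    df["vs_prior_pct"]      = 100 * (df["actual"] - df["prior"]) / df["prior"]
    df["exception"]         = df["goal_variance_pct"].abs() > 10   # assumed tolerance

    print(df[df["exception"]])   # the exception report: products A and B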

Check #7: Validation

Even DIY Rogues should be aware that the non-numeric data associated with supposedly factual data is important. By this I mean the comments, previous assessments, opinions, suggestions, etc. that relate to this sales or gross margin figure. My more complete and earlier exposition is here.

At a minimum, the subject expert who can offer clarification and amplify context for a number should be identified as part of the reporting. Links to team comments, forecasts, etc. are probably beyond the scope of your average DIY BI project, but keep them in mind for later iterations.

See, it’s not that hard!

Social Bookmarks and Tagging in BI Fail the Just-in-Time Test February 20, 2007

Posted by Cyril Brookes in General, Issues in building BI reporting systems, Tacit (soft) information for BI, Taxonomies, Tags, Corporate Vocabularies.

Tagging and Social Book-marking for BI applications is a hot topic. See, for example, Bill Ives’ comment. But I think there are barriers to its success in the corporate context. It doesn’t lend itself easily to the dynamics that are, or should be, key aspects of BI system design.

Sure, I am completely in agreement that information, particularly soft information, needs to be tagged, or classified, before it can be useful. I’ve talked about this several times in this blog. Social book-marking is better than none.

If information isn’t categorized then it cannot be selectively disseminated or easily searched for.

The social book-marking ethos implies that people create their own tags. But, of course, no one else knows (at least knows in a short time frame) that this tag is being applied for this purpose.

Until the tag’s existence and meaning are widely known, no item of, say, competitive intelligence with this tag can be subject to targeted personalization to relevant decision makers. More importantly, if the tag describes a concept that is identical, or nearly so, to those linked to one or more other tags, then confusion is likely.

It follows that social book-marking can be effective in information retrieval, if the tags are managed, moderated and disseminated. However, this approach is not likely to be valuable for alerting purposes, especially in dynamic business environments. This is because those being alerted will not know of the tag’s existence, and will be frustrated by multiple tags with the same meaning.

In any case, corporate wide management of social bookmark tags is always going to be a big ask.

Knowledge in a business is often created via group collaboration. The smart corporation enables such new knowledge to be disseminated rapidly to those who should know it, and can take requisite action. There is no time to create new tags that may be redundant anyway, and to disseminate their existence and meaning widely.

Business intelligence has two basic purposes:

1. Helping executives and professionals assess status and find problems

2. Supporting problem solving, usually by less senior staff

For the corporate BI context the alerting and problem finding objectives are usually more valuable than problem solving. Knowing an issue exists will often be absolutely critical; resolving it is usually less difficult and less important. We cannot solve problems we don’t know exist.

As I opined recently, it is the combination of subject matter and assessed importance that is the key to effective alerting, or selective dissemination. And if an executive is to have a personalization profile it must use tags that are pre-specified and whose meaning is understood widely. Social book-marking does not usually imply assessing importance. Often importance can only be determined by people outside the group that creates the information, and the tag.

In the BI context a corporate vocabulary of preferred terms will be more useful than various sets of personally created, and probably redundant, social bookmarks. This is because the standard terms are widely known. Further, they are usually grouped in hierarchies of broader and narrower concepts and this facilitates retrieval and alerting.
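To illustrate, here is a small sketch of moderating free-form tags against a corporate vocabulary of preferred terms, and of using the broader/narrower hierarchy when matching items to an interest profile; all vocabulary entries are invented:

    PREFERRED = {               # free-form synonym -> preferred term
        "gm": "gross margin",
        "margin": "gross margin",
        "px gross margin": "product X gross margin",
    }
    BROADER = {                 # narrower concept -> broader concept
        "product X gross margin": "gross margin",
    }

    def normalize(tag):
        """Fold a free-form tag onto its preferred term, if one is known."""
        tag = tag.strip().lower()
        return PREFERRED.get(tag, tag)

    def matches(item_tag, profile_term):
        """True if the item's tag equals the profile term or narrows it."""
        t = normalize(item_tag)
        return t == profile_term or BROADER.get(t) == profile_term

    print(matches("PX gross margin", "gross margin"))   # True: the hierarchy aids alerting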

 

Executives can seek items of high importance that are classified by a broader term (say, overall gross margin issues), or those about a narrower term (say, product X gross margin) that are of lower importance. In either case, they will not be inundated with large numbers of items.

Of course, inside a project team and other tightly knit groups social bookmarks may be suitable ways to tag documents and other material for retrieval.

However, I don’t believe that the wider corporate environment will benefit to the same extent. It’s a case where more formality and discipline brings better results.

Collaborative BI Implies a Personalized Grapevine – but, make it Smart Alerting or it’s all Blah! February 12, 2007

Posted by Cyril Brookes in BI Requirements Definition, General, Issues in building BI reporting systems, Tacit (soft) information for BI.

Effective collaboration depends on the dynamic creation of groups that can exchange and share intelligence. Collaborating people in a group create knowledge; often it is new knowledge that can improve business performance. However, finding the right group participants, and disseminating the knowledge to empower action, both require targeted, selective dissemination, of information – that’s personalization. Truly, I heard it on the grapevine!

Some regard personalized alerting in BI as creating a “market of one” for information.

I disagree. As I see it, it is creating a group of relevant people, the “A list” if you will, for the issue at hand. How can this happen, not occasionally with serendipity, but routinely? Groups must be dynamic, different for each issue, expanding and contracting in size as the issue grows in importance, or declines.

Markets of one work for marketing situations, e.g. books with Amazon.com, but I don’t believe it is the paradigm for collaborative BI.

Clearly, the traditional BI report, with information prepared by others and submitted to potential decision makers, is discredited. Today we have lakes and lakes of information available; Herbert Simon got it right in 1971: “Information abundance creates scarcity of attention”. And one can add: Knowledge poverty.

Informing decision makers doesn’t cut it anymore. Maybe it never did? We need to change the process, introducing dynamics to the grapevine.

Issues grow in business importance when people, in the know, determine they have grown in importance. There’s no other way.

All messages, ideas, news items, etc. on a topic are not of the same value or criticality to a business. Most are irrelevant to decision makers; they are waffle, padding, dross, blah.

Some of those items will be interesting to the professional; fewer are important, business-wise; but very few are critical to the business. How do we distinguish? Well, it’s simple: subject experts tell us they’re critical.

If you’re still with me, Dear Reader, personalized alerting, selective dissemination, of intelligence items on a topic can only be effective, therefore, if someone tells us (or the dissemination authority/process) what is important and what is not.

I don’t believe that automated importance classification works in practice – in a business anyway. It might do for spooks, but not the rest of us.

Some years ago, I built a selective dissemination collaboration system based on a patented importance escalation process. I called it grapeVINE. It employed this model of escalation and dynamic audiences for information. It was most effective when seeded with news, marketing reports, or other items. They were automatically classified, using a standard taxonomy or vocabulary, and selectively disseminated based on client interest profiles.

grapeVINE’s special character emerged when a subject expert commented on an item, raising its importance level – saying something like “this is important because the implications are….”. Immediately the audience would increase for this, and only this, discussion thread. More people are interested in important stuff than dross. One of these new recipients might then escalate the discussion further, bringing in more people – likely action oriented players. Then the game is on.

Two-dimensional personalization of business intelligence, based on a combination of subject matter and importance to the business, is an effective driver of dynamic group formation.

Provided the culture of sharing is established in the business (and that’s an important IF), the potential for improvement in decision making is immense. It is the optimal vehicle for combining structured (numeric) and unstructured (text) information into BI systems.
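A toy sketch of the escalation mechanics described above: each person subscribes to a subject with a minimum importance threshold, so raising an item’s importance automatically widens its audience. The profiles and scales are invented:

    profiles = {                 # person -> (subject, minimum importance to be alerted)
        "analyst":  ("gross margin", 1),
        "manager":  ("gross margin", 2),
        "director": ("gross margin", 3),
    }

    def audience(subject, importance):
        """Everyone whose profile matches the subject at this importance level."""
        return [p for p, (s, threshold) in profiles.items()
                if s == subject and importance >= threshold]

    item = {"subject": "gross margin", "importance": 1}
    print(audience(**item))      # ['analyst']

    item["importance"] = 3       # an expert comments: "this is important because..."
    print(audience(**item))      # ['analyst', 'manager', 'director']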

Paraphrasing Crocodile Dundee: THIS IS A GRAPEVINE!

Personalization in BI: Selective Dissemination and Targeted Retrieval of Important Information January 30, 2007

Posted by Cyril Brookes in General, Issues in building BI reporting systems, Stafford Beer, Tacit (soft) information for BI.

Personalization in BI grows in significance with the near universal recognition that passive reporting, designed for the masses by supposed experts, is limited in its utility. Action oriented reporting is preferable; it always has been. However, many business analysts do not recognize that selective dissemination of information, aka personalization, is a pre-requisite if reporting is to stimulate action. Only specific people can or need to take action, and common sense tells us that they must be targeted.

See, for example, the article by Neil Raden.

Two sorts of personalization apply in a BI context:

Push and Pull

Each can be applied to external or internal recipients.

The focus of this blog is on selectively pulling information, predominantly by internal people. This is the principal aim of action oriented BI: directing valuable information (and only valuable information) to executives and professionals who assess a situation and/or take action as a result. This is not to say that other aspects of BI, such as keeping people informed as to the status of the business, should be ignored. But these objectives are far less vital than supporting executive actions.

I leave discussion on the much more fraught selective information push situation to others. Determining what information will be of interest to a customer or supplier and pushing this category of stuff to them can be a valuable marketing tool, or (more likely IMHO) a PR disaster. We always hear of Amazon.com and its success with cognate book promotions, but books are easily categorized in a universally accepted manner; most other items are not so easily classified, and the implications of inappropriate information push can be dysfunctional.

Any discussion on the effectiveness of BI for improving the quality of executive decisions (and what other purpose might it have?) must have regard to the actual decision making process. The theory of this process is well established, notably by Herbert Simon. Many researchers have also considered the relationship between this process and the information required for its operation. In this context I particularly value the work of Stafford Beer and Henry Mintzberg.

Information that enables effective decision making belongs to one of two categories, and both are essential if decisions are to be optimal:

  • It helps the executive find problems and opportunities – situations that need a response. Stafford Beer calls this Attenuation information, and I have discussed this in detail earlier.

  • It helps the executive solve problems he/she has found (or been told about). Beer calls this Amplification information, also discussed in this blog earlier.

But I digress.

Returning to the personalization theme; selective dissemination is vital in the problem finding context.

Obvious candidates include:

  • Alerting the executive to important exceptions, out-of-specification performance, unusual situations, adverse forecasts of key indicators, and unacceptable (or advantageous) trends.

  • Equally, if not more, critical is soft information (opinions, comments, assessments, etc.) that portends problems, or throws doubt on the accuracy of factual information.

Targeted information retrieval is also vital to support problem solving.

  • The solution process that needs supporting includes diagnosis of the severity of the problem (what will happen if nothing is done), identifying possible alternatives and assessing their implications.
  • During a decision making process executives must be able to retrieve important, valuable, information as distinct from the routine stuff. This applies to both factual and soft (tacit) information. In this context, the latter includes ideas about problem implications, suggestions for potential solution alternatives and recollections about what we did last time this happened.

The key word in both these situations is “importance”.

Alerting to, and targeted retrieval of, useful information implies that some assessment must be made of the significance of a data item, either using an automated rule system, or a personal assessment.

Truly this is the stuff of business intelligence. Without importance classification all information is equal, but obviously this is not reality.

Selective dissemination and targeted retrieval, the basis of all personalization, depend therefore on the BI context being able to distinguish information importance as well as its subject, topic, or data class.

Importance, in turn, depends on two characteristics: urgency and value to the business.
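As a toy illustration only, importance might be scored as a weighted blend of those two characteristics; the scales and weighting are assumptions, and as argued above a subject expert’s assessment should override any formula:

    def importance(urgency, business_value, w_urgency=0.6):
        """Combine urgency and business value, each on a 0-10 scale, into one score."""
        return w_urgency * urgency + (1 - w_urgency) * business_value

    print(importance(urgency=9, business_value=4))   # 7.0: disseminate promptly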

I have experimented over 20 years with different retrieval/alerting procedures for corporate BI systems, using both automated and human importance assessment. I’ll detail this experience in the next post.

BI System Design incorporating Wiki and other Web 2.0 Components January 10, 2007

Posted by Cyril Brookes in General, Issues in building BI reporting systems, Tacit (soft) information for BI.

Suddenly collaboration is flavor of the month, or year anyway. Customers are re-designing products, buyers are guiding the choice of other buyers, repair and service people are specifying work procedures, those with spare time collaborate in wiki-everything; and maybe the lunatics are running the asylum? But what of Business Intelligence systems; what is their place in all this?

I’ve been preaching the utility of collaboration as an essential element of BI for 20 years now. Maybe it’s going to happen at last?

But, even with the current enthusiasm, it won’t happen at YouTube speed. Collaborative BI is more complex than just loading some videos or other data into a category for others to retrieve. And, supposedly, corporate people don’t have time to surf the Intranet, let alone the Web, looking for relevant stuff.

Therefore, it will take time; and remember the old timer’s adage: “You can tell the pioneers; they’re the ones with the arrows in their backs!”

Here’s a set of axioms that I believe are relevant if we are to succeed in this collaborative endeavor, all of which raise barriers, some large, some not, it depends on the business environment:

  • Corporate people who come across Web 2.0 style intelligence often don’t know its value, and whom to tell
  • They usually only have part of the story anyway
  • They often lack the background to be able to assess implications
  • Supplying intelligence to Web 2.0 style repositories or applications is time consuming, and may not be at all rewarding to the author, only to others
  • Intelligence can’t be searched for, or be subject to push messaging and alerting, if it’s not categorized
  • Categorization must conform to a corporate standard vocabulary, or it will not facilitate sharing and collaboration
  • Not all BI items indexed by a category are of equal significance or value; some may be critical to the business, others routine news that’s already well known
  • The high value items ought to be separated from the dross and given a wider audience, or personalization will be ineffective; but how to do this?
  • BI is useless unless the recipient can assess its implications, and often this requires additional input of BI, or experience, from other people or sources – the collaboration imperative
  • Corporate collaboration raises infinitely more cultural and behavioral red-flags than Web 2.0 practitioners could dream of; see earlier post.

Nonetheless, I’m sure that we’ll see increasingly effective means for accommodating the issues raised by the above points. I have suggested some design principles in another earlier post, but the rapid evolution of wiki style knowledge creation, with the attendant blog explosion, is opening up new opportunities.

I believe that the issue of bringing new knowledge to the attention of the right people, personalized distribution as the knowledge is created, will remain a substantial BI issue. My ideas on this will be the subject of a later post.