
Memo to Business Analysts: A Compelling Treatise on Why and How Businesses should Automate Decisions December 12, 2007

Posted by Cyril Brookes in Decision Automation, General, Issues in building BI reporting systems.

You may know that fellow blogger James Taylor is the author, with Neil Raden, of a new book on the current hot topic of Automated Decision Making, titled Smart Enough Systems. In it they present a compelling proposition to business intelligence analysts and executives:

Look out for decisions that can be automated in your business;

Automate them and the business will be much better for it.

I suggest that you bring this work to the attention of the more creative managers and professionals in your business.

I wrote about decision automation a while back, in particular how to identify candidate decisions, here and here. You may care to revisit these, Dear Reader.

James and Neil propose that it is practicable to identify decisions that can be automated, and that the subsequent system design path is now both well trodden and amply supplied with technical support. They then give lots of detail on how to do it.

After reading the book, I put a set of questions to James, and here are his responses. I believe you’ll find them interesting.

Q1. On page 1 in the Introduction you say the book comprises two unequal “halves”, the first being general and the second technical. Are you implying that executive readers should read the first “half”, Chapters 1-4, and then move to the implementation proposals in Chapter 9?

A1. I sometimes feel like we had two books, one for executives of any kind and one for technology focused people, which we had to put inside one set of covers! The introduction did try to guide people to read as you suggest, but it’s hard to do that well in a physical book. In general I do think this would be a good approach for a non-technical reader, except that I would also encourage them to read Chapter 8, “Readiness Assessment”, and at least skim the stories in the other chapters.

Q2. Much of the book refers to “automating hidden operational decisions” or “micro-decision automation”. Does the EDM approach described in the book only apply to automated decisions, or is it also relevant to partially automated decisions or even to a decision support role for human decision makers?

A2. I recently came across the work of Herbert Simon again and found his classification of problems very helpful:

  • Structured – well understood, repetitive, lend themselves to well-defined rules and steps
  • Unstructured – difficult, ambiguous, no clear process for deciding
  • Semi-structured – somewhere in between

Clearly EDM works particularly well as an approach to handle structured problems, whether you want to automate them completely or automate a large part of the decision while leaving some room for human intervention and participation in the decision making process. I think EDM also has value for the semi-structured problems, especially in areas like checking for completeness or eligibility. At some level EDM solutions blur into decision support solutions but an EDM solution is always going to deliver actions or potential actions not just information.

Q2(a). Does this reply imply that EDM is principally a decision support process or tool, relevant when a problem situation is identified or predictable in detail? Hence, alternative BI systems, applications or techniques are required to help executives, professionals and automation systems understand current business status and to find problem situations that require a response (to use a Herbert Simon term); presumably a response determined using EDM principles?

A2(a). Absolutely. I don’t like calling EDM decision support, though, because I find people have a mental model of decision support that can confuse them when thinking about EDM. A problem needs to be relatively well defined and understood to be amenable to automation using EDM. While many of these problems come from process-centric analysis, it is very often the case that more ad-hoc analytics are used to find the problems that offer the best payoff, and to see how well an EDM solution is working once it is implemented. In particular, the adaptive control aspect of EDM solutions requires good performance monitoring and analytics tools.

Q3. Similarly, the reference in Chapter 5 to Analytics and Predictive Modeling, and in Chapter 7 to Optimization and Operations Research, could imply that EDM has a role in higher level decision support, especially at tactical level. Is this a correct inference?

A3. I don’t think so. It is more true that some of the techniques and technologies that work for higher level decision support are also useful in the development of EDM solutions. The mindset, though, is quite different: automating decisions, not simply helping someone who has to make the decision. The different solution type also means the techniques are often applied in quite different ways – producing an equation rather than a picture, for example.

Q3(a). Following my earlier theme, and looking ahead to your answer to Question 8: is there a role for EDM in automating the finding and diagnosis of problem situations in the business, perhaps without actually producing the “equation” that will solve it – leaving that part to a human?

A3(a). This is one of the edge conditions for EDM, where the system takes the action of diagnosing something (rather than fixing it), and it is certainly not uncommon. It is often found where delivery of the action cannot easily be automated. Interestingly, it has been found more effective to let the human provide inputs that rely on human skills, such as entering the mood of a customer, and have the system produce an answer, than to have the system provide options or a diagnosis and leave the human to interpret them.

Q4. IT groups worldwide are committing to SOA as a major part of their strategic plan implementation. You refer to SOA many times in the book, and how EDM and Decision Services are complementary to the concept. To what extent is the implementation of EDM and micro-decision automation generally dependent upon the enterprise having implemented SOA principles?

A4. I don’t think it is dependent, but it is certainly true that companies already adopting SOA will find it much easier to adopt EDM. I also think that companies adopting SOA are more ready for the explicit identification of a new class of solution (a decision service) and more open to adopting new technology to develop such services. I would not be at all surprised if the vast majority of EDM projects were done alongside, or following on from, SOA initiatives.

Q5. Some years ago there was much discussion about Organizational Learning, the Learning Organization, Designing Learning Systems, etc. Is your approach to Adaptive Control in Chapter 7 related to this, or does it have a different underlying purpose and concepts?

A5. To some extent. Part of the value of adaptive control is that it means an organization is committed to constantly challenging its “best” approach, to see if it can come up with a better one – either because it knows more or because the environment has changed. In that sense the adaptive control mindset matches that of a learning organization. I also think the use of business rules as a core technology relates to the learning organization, in that it gives those who understand the business a way to participate actively in “coding” how the systems that support the business work.

Q6. The champion/challenger process described in Chapter 7 is depicted as two or three dimensional in the diagrams. Is it more difficult to implement when there are several alternative control variables? Are there project selection and management criteria that will help ensure successful champion/challenger approaches?

A6. It is much easier to implement adaptive control and champion/challenger when there are clear and well defined metrics with which to measure the relative value of different approaches. If the value of an approach is a complex mix of factors, then it will be harder to compare two approaches and pick the “winner”. Without a champion/challenger approach, of course, you are even worse off, as you don’t even know how those alternatives might have performed.

Q7. You have clearly and comprehensively outlined the case for automating micro-decisions in Chapters 1 to 3. This argument ought to be compelling for many executives. However, there are many options for implementing rule based, model based and adaptive control based systems using the technologies in the later chapters. Is it practicable to describe implementation procedures beyond those introduced in Chapters 8 and 9, possibly via the book’s Wiki?

A7. One of the big challenges when writing the book was that many of the component technologies and approaches have value in other contexts besides that of decision management. Rules and analytics can both, for instance, improve a business process management project. There are so many of these that I don’t think even the book’s wiki would be able to handle it. We are engaged in some interesting research around decision patterns and a decision pattern language. I think this is an interesting area – identifying and describing the implementation patterns for decisions where decision management makes sense.

Q8. It appears from your discussion in Chapter 9 that the key to successfully implementing EDM in a “Greenfield” site is the selection of the initial projects. You propose selecting two applications: one rule based, and then a second that is predictive model based. This sounds sensible, but are there alternative project selection methods that might be applied? Other examples could include:

  • Partial automation of more complex, but valuable, decisions; e.g. using rules or models to find problem situations without solving them in Phase 1, with later phases implementing the automation fully?
  • Analyzing the informal “know-how” knowledge base of key professionals to determine if an initial project can be built by capturing and encoding their knowledge?
  • Using industry best practice reports to identify enterprise deficiencies that may be rectified using EDM?

A8. Chapter 9 was hard to write because different customers have succeeded in different ways. The way we outline was the one that seemed like the most likely to work overall. Each individual company may find that a different approach works better for them. Companies reluctant to fully automate decisions, for instance, may well find the first of your examples to be very useful in getting them more comfortable with the idea of automation. Identifying problem areas using best practice and driving to fix those would also be very effective, though it might well be implemented using a first rules project and a first analytic project as we suggest.

In general I don’t think that starting with know-how and working out is a good approach, however. Our experience is that you need to have the decision identified and “grab it by the throat” to successfully adopt EDM. Decision first. Always.

Collaboration and Knowledge History Creation in BI – The Twin Pyramid Model October 2, 2007

Posted by Cyril Brookes in General, Issues in building BI reporting systems, Taxonomies, Tags, Corporate Vocabularies.

Pyramids and BI deficiencies are a popular blog topic. Rising to the challenge of Andy Bailey in his “Where has BI fallen short” paper, I have some comments on the Collaboration and Knowledge/History categories of shortcomings. Other examples include James Taylor’s observation here and Neil Raden’s paper of a few months ago.

First the Collaboration bit. Regular readers will know that this is a big issue with me. I believe most businesses do it badly, for the reasons I’ve already given. But to explain how it needs to be “operationalized” we need to look at pyramids, one regular and one inverted. They’re different from Neil Raden’s but are pyramids nonetheless.

The basic problem of managing knowledge creation, collecting history and making valuable stuff rise to the top of the “action” pyramid stems from abundance.

Herbert Simon got it right when he said “The impact of information is obvious. It consumes the attention of its readers. Therefore, a wealth of information creates a poverty of attention.” The totality of information available, both internally and externally, is overwhelming. It follows that filtering and other controls on information delivery are necessary if benefits from information resources are to be achieved.

Hence the pyramid pair as depicted below. Most documents are of interest to only a few people, perhaps only one person, in a business. They can be said to have an importance at Level 1. But a few documents are of Level 4 import, and are of interest to many people. Obviously, any Collaboration and Knowledge Creation function needs to cause the important items to rise to the top of the pyramid.
The Collaboration Pyramids

Knowing this makes the specification of the application relatively straightforward. It needs a web-crawler or document trawling feature, a categorization capability, a subject expert escalation-of-importance sub-system, and the usual alerting, search and browse features. Simple; just like the picture below!

 

Collaboration Process

If you, Dear Reader, are going to overcome shortcomings in your BI context, this is a great place to start.

Unstructured Information – Tacit Versus Explicit for Profit and BI Best Practice August 9, 2007

Posted by Cyril Brookes in General, Issues in building BI reporting systems, Tacit (soft) information for BI, Unstructured Information.

A picture may be worth a thousand words, a news item also has about a thousand, and a marketing strategic plan may have around five thousand. OK, but a great idea for a new marketing message, an expert’s adverse comment on the marketing plan, or a serendipitous airplane conversation about a competitor’s plans may each be worth a million dollars, for just a few hundred words. What do you want, words or dollars?

I believe there is far too much emphasis on the analysis of documented unstructured information as a BI resource. The basic important data just isn’t there for most businesses. You can search as long as you like; mine it, categorize it, summarize it, but to no avail, the well is dry.

This post follows on from my earlier, definitional, piece on this subject.

Of course, I recognize there is potential import in some written material, for example, recent emails, salesperson call reports, customer complaints and their ilk. But these are like seeds, rather than the fruit off the tree. They are the beginning of a BI story, not the whole enchilada.

At risk of making the discussion too deep, Dear Reader, I think we need to consider the basic concepts before coming to any conclusions about how a corporation should manage its unstructured data, and the tools required.

I find it valuable to characterize unstructured information with a 2 x 3 matrix.

The horizontal axis holds the two basic categories of unstructured information:

Explicit unstructured items are those that are basically unformatted, but have a physical, computable, presence; e.g. documents, pictures, emails, graphs, etc.

Tacit items are basically anything unformatted that is not explicit: they’re still in the minds of professionals and managers, but are nonetheless both real and vital; e.g. mental models, ideas, rumors, phone calls, opinions, verbal commentary, etc.

The vertical axis has the three categories of unstructured information (according to moi!): independent, qualification and reference items.

Independent items stand alone, being self-explanatory in the first instance and not requiring reference to other pieces of information, be they structured, unstructured, explicit or tacit.

Qualification items have an adjectival quality, since they add value to other items (structured or unstructured), but are therefore relatively useless without reference to the appropriate one or more Independent or other Qualification items (note there may be one or more threads to a discussion based on an Independent item).

Reference items are pointers to subject experts who can provide details or opinions, and other sources of information, structured or unstructured, together with quality assessments of the value, reliability and timeliness of those sources. As Samuel Johnson said “Knowledge is of two kinds. We know a subject ourselves, or we know where we can find information on it”.

Here’s a descriptive tabulation.

Independent, stand alone items

  • Explicit: meeting minutes; news items; analyst reports; marketing call reports; legal judgments; proposals; government regulations; suggestion box items; customer complaints; strategic plans; manuals of best practice; emails about new issues or competitive intelligence.
  • Tacit: unrecorded meeting discussions; ideas; undocumented suggestions; potential problems; know-how; competitive intelligence from informal customer/industry contacts; stock market (racehorse) tips; rumors; intuitions; off-the-record talks with government officers.

Qualification, commentary items

  • Explicit: written comments on a report/news/analyst item; documented opinions on a problem or situation; formal assessments of status implications.
  • Tacit: verbal comments on a report/news/analyst item or on emails; verbal opinions on problems; verbal assessments of issues; possible solution options; comments on a rumor.

Reference, source quality items

  • Explicit: lists of subject experts; ratings of experts; document sources and catalogs; written reviews of document sources.
  • Tacit: unrecorded subject expert identities; opinions on expert quality; people who “know how”; informal unrecorded information sources; assessments of document source utility.

Ask yourself, Dear Reader, which of these cells contain high value information, likely to help your corporate executives find problems and make decisions? If they’re only on the explicit side, then you’re in the sights of UIMA and lots of enthusiastic vendors; good luck. If some are on the tacit side, please read on.

I’ve covered several of the relevant aspects of managing tacit information in earlier posts, e.g. here and here. However, there are some additional relevant observations to be made in the tacit versus explicit context.

  • The first, possibly most important, observation is apparently self-defeating to my thesis. All important, currently relevant, items of tacit unstructured information should be made explicit as soon as practicable.
  • It is not possible to identify, collect, store, disseminate, and facilitate collaboration on purely tacit items; it will happen in a “same time” meeting, of course, but wider ramifications demand that the prelude and/or outcome be made explicit.
  • Independent intelligence items, be they initially explicit (e.g. a recent email) or tacit, are very rarely complete as regards background to the issue, its importance to the business, its time criticality, and assessments of potential impact. If you will, the knowledge has not yet been created, only its seed.
  • The information required to complete the knowledge building that starts with an Independent Item is rarely in one location or person’s mind.
  • The knowledge building is based mostly on tacit information.
  • The knowledge building process is most effective if performed via collaboration between the people who have, or know where to find, the necessary Qualification Items of information.
  • Some process for collaboration audience selection is required, one based on issue content, criticality and importance. It shouldn’t be left to pure chance.
  • Desirably, the collaboration process, but certainly the end result, should be made explicit, to avoid resolving the same issue many times over.

In my previous post I offered some questions that might provoke your curiosity, Dear Reader:

  1. What are the most useful sources of unstructured information in our business? Are they Explicit or Tacit?
  2. If Explicit, how do we best marshal the information and report it?
  3. If Tacit, ditto?
  4. Is the information we get from our unstructured sources complete, and ready for promulgation, or do we need to amplify or build on it before it’s useful?

I expect that you will be able to answer 1 and 4 for your business; I’ve outlined the issues as best I can.

I’ll defer offering pointers you might consider for 2 and 3 to the next post, because I believe we still need to revisit the processes and constraints that inhabit the strange corporate world of collaborative knowledge building.

Implementing Decision Automation BI Projects July 13, 2007

Posted by Cyril Brookes in Decision Automation, General, Issues in building BI reporting systems.

The feedback on my three earlier posts on specification guidelines for automated decision capability in BI systems has been both positive and heartening. My objective has been to show how these BI projects for operational business processes may be built relatively simply, and to generate enthusiasm for this among the legions of business analysts. You can indeed try this at home!

This post summarizes the major issues that received favorable comment and then deals briefly with profitability, feasibility and implementation techniques for these systems. It concludes the series of (now) four posts on decision automation for BI that commenced here.

I haven’t attempted to place this subject in its context, or to cite various examples of success or otherwise. Thomas Davenport, Jeanne Harris, James Taylor and Neil Raden have done this comprehensively in their recent articles and books.

You may recall, Dear Reader, the underlying principles for this methodology are:

Selecting the right targets is critical to success:

Doing the right thing is much better than doing things right (thanks, Peter Drucker!). My prescription is to avoid trying to pick winners from the various business processes that may be candidates for BI automated decisions. Rather, we should look to a different set of candidates as the starting point.

Identify the controllable variables – the levers that are adjustable in the business process that can shift performance towards improvement

These are easy to pick. There are relatively few options available in most businesses, variables like changing prices, adjusting credit ratings, buying or selling stock or materials, approving or rejecting a policy proposal, etc. A more complete discussion is in my second post on this subject.

Only consider automated decision making BI systems where controllable variables exist

This is a no-brainer, I guess. It’s only possible to automate when automation is possible. If we can’t control the process when there’s a problem, because nothing is available to be done (e.g. we can’t raise prices if all customers are on fixed contracts), then let’s not start automating.

Segment the design processes into logical sub-projects so the project doesn’t run away uncontrolled

I suggest in the third post that the Herbert Simon style decision process elements are an effective segmentation. This allows focus on (say) finding problems and then on deriving the relationship between adjusting a control variable and the resultant outcomes.

Enough of a recap: here are some basic suggestions for project management.

Implementation of a decision automation project is always tricky. In most cases it is not possible to “parallel run” the new with the old, since only one option can be tried each time in real life, and comparison is not possible longitudinally.

I suggest that an iterative implementation is therefore appropriate. It should incorporate feasibility and profitability analyses as well.

Referring to the more detailed methodology in the third post:

  • Build the status assessment and problem finding sections first, and leave the action part to management.
  • Then design the diagnosis and alternative selection modules and instruct a human manager what to do (always leaving the override option, of course). This is simple as long as there is only one controllable variable available for the business process and only one KPI metric, or a set of related KPIs and metrics, that are out of specification, hence signifying the problem. If there is more than one of these, then it can (almost certainly will) become complex. Certainly it’s achievable, but there’s a good deal of modeling and use case analysis required that is beyond the scope of a blog post.
  • Finally, link the alternative action chosen to the automatic adjustment of the control variable(s) and you’re home free. A minimal sketch of this staged approach follows the list.
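To make the staging concrete, here is a minimal Python sketch of the three steps above. Everything in it (the specification band, the function names, the single-lever selection rule) is invented for illustration; it is one way the phased hand-over from human to system might look, not a prescription.

```python
# Hypothetical sketch of staged decision automation. Phase 1 only finds
# problems; Phase 2 recommends an action for a human to apply or veto;
# Phase 3 adjusts the control variable itself.

def check_status(metrics, spec=(0.9, 1.1)):
    """Return the first KPI outside specification, or None (ratio = actual/target)."""
    low, high = spec
    for name, ratio in metrics.items():
        if not low <= ratio <= high:
            return name
    return None

def select_action(kpi, metrics):
    """Trivial stand-in for the diagnosis and alternative selection modules."""
    return ("raise" if metrics[kpi] < 1.0 else "lower", kpi)

def run_cycle(metrics, phase, notify, adjust):
    problem = check_status(metrics)
    if problem is None:
        return
    if phase == 1:                    # detect only; action left to management
        notify(f"Out of spec: {problem}")
    elif phase == 2:                  # recommend; human applies, can override
        notify(f"Recommended: {select_action(problem, metrics)}")
    else:                             # fully automated adjustment
        adjust(select_action(problem, metrics))

run_cycle({"sales_volume": 0.8}, phase=2, notify=print, adjust=print)
```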

I hope, Dear Reader, you’ve been infected in some small way with the enthusiasm I have for automated decisions in BI applications. In many ways they are the most satisfying aspects of Business Analyst work, since you get to design the system, and then get to see it perform. Working on high level strategic projects is often more intellectually challenging, but you rarely get to have full closure, it’s the executive users who have that pleasure, often long after you’ve left the scene.

Decision Automation in BI: Design Guidelines for Business Analytics and Rules June 18, 2007

Posted by Cyril Brookes in Decision Automation, General, Issues in building BI reporting systems.

Authors routinely ignore the specifics of designing automated decision components for BI systems. I guess this is often because they believe these details are application dependent. However, I believe that there can be, should be, more rigor in the specification of business analytics, rules and predictions that underlie these designs and specifications. Generalizations can only take us so far, sooner or later we have to get down and dirty.

In this, my third recent post that discusses decision automation, I offer some guidelines that can provide the requisite structure; but you, Dear Reader, can judge for yourself, as always.

To recap, hopefully avoiding the need for you to read my recent stuff, it is my hypothesis that the project selection and specification of BI systems with decision automation incorporated should follow five steps as below. Earlier posts have considered the overall issue (here) and Phases 1 through 3 (here).

  1. Identify the controllable business variables in your business environment
  2. Determine the business processes, and relevant decisions, that impact those controllable variables
  3. Identify the BI systems that support the business processes and decision contexts selected in Phase 2
  4. Design the business analytics that are the basis of the decision automation: business rules, predictive analyses, time series analyses, etc. wherever Phase 3 indicates potential value
  5. Evaluate feasibility and profitability of implementing the analytics created in Phase 4

This post covers Phase 4, arguably the most interesting from a technical viewpoint.

It is axiomatic, I believe, that we should now revisit the steps in the decision making process that are to be automated. Each step requires a different style of automation and offers distinct benefits; some are more complex than others.

Drawing on Management 101, with Herbert Simon as our mentor, we know that the universal key steps in the decision making process, be it manual or automated, high level strategic or low level operational, are:

  • Measuring and assessing the business process status: Where are we? Is it good or bad? What are people saying about us?
  • Finding “problems”: i.e. situations (including opportunities) that need a response, out of specification KPIs, adverse trends or predictions, unusual circumstances, people telling us we have a problem!
  • Diagnosing the problem context: i.e. how bad is it, what happens if nothing is done, has this happened before, what happened then, etc.?
  • Determining the alternatives for problem resolution: i.e. what did we do last time, what are people suggesting we do?
  • Assessing the consequences of the outcomes from each alternative: i.e. predictive modelling, computing merit formulae, what happened after previous decisions in this problem area?
  • Judging the best perceived outcome, and hence making the decision: i.e. comparing the merit indicators, accepting or rejecting people’s opinions.

Most of you, Dear Readers, will know all this, but we need to be sure we are all on the same page. Otherwise, confusion reigns, as indeed it does in several of the recent articles on this subject, especially the marketing hype ones.

Our objective with BI system design is to enable improved business process performance. Our primary channel to do this is to collect and report information that supports management decision making. Apart from passively dumping a heap of facts and figures on a screen, we know we can empower improved decision making in two ways:

Create action oriented BI systems by presenting the information in a pre-digested way that highlights good and bad performance and spurs the executive to react appropriately. Dashboards and scorecards are obvious examples of how we can do this. I proposed in earlier posts some general design principles for summarization and drill-down specification. OR

We can actually make decisions automatically as part of the BI system, adjusting the controllable parameters of the business process without reference to a manager.

We’re focusing here on the second option. It’s not that this is a new idea; we’ve been doing it for decades. But the new business analytics, rule management and prediction software now available makes the whole process much easier than when we had to rely on IF…AND…THEN…ELSE statements to make it all work. A reference by James Taylor shows the scope of what’s available.

Let us now consider how we can automate all, some, or just one of Simon’s decision process steps. Clearly, even automating one step effectively may be beneficial. It shouldn’t be essential to “go the whole hog” in the name of decision automation; just do what makes business sense and leave the balance to the manager. Further, we can complete the automation project in stages, progressively removing human interaction; this is Phase 5 from the list above.

Assessing status:

Our BI system will tell us the status; if it doesn’t, then fix it. We must have available, in machine readable format, all the business KPIs and metrics relevant to the business process, including the state of the controllable variables (Phase 1 of the methodology above). The trick is to manipulate that information so we can compute automatically whether the status is good or bad. Remember, we won’t have a manager to decide this for us if we’re automating. Don’t forget that often the status assessing process will include comments or opinions from real people. Just because we’re automating the decision doesn’t mean we can’t accept human inputs.

The most common type of human input that impacts the decision automation is a commentary on the validity, accuracy or timing of a KPI or other important metric. And the most common impact such an input has is: Abort Immediately. You don’t want the automated system making decisions with garbage data.

For this reason, the design should desirably contain some output of status data for human monitoring purposes.
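To make the status step tangible, here is a minimal sketch, with invented names and thresholds, of the behavior described above: KPIs are scored good or bad, any human comment questioning data validity aborts the cycle, and the assessed status is echoed for human monitoring.

```python
# Hypothetical status assessment step. KPI ratios are actual/target;
# validity_flags carries human comments on data quality.

def assess_status(kpis, validity_flags, tolerance=0.1):
    """Return KPI -> 'good'/'bad', or None to abort the decision cycle."""
    suspect = [k for k, flagged in validity_flags.items() if flagged]
    if suspect:
        print("Abort immediately: data validity questioned for", suspect)
        return None                       # never decide on garbage data
    status = {name: ("bad" if abs(ratio - 1.0) > tolerance else "good")
              for name, ratio in kpis.items()}
    print("Status output for human monitoring:", status)
    return status

assess_status({"gross_margin": 0.85, "volume": 1.02},
              {"gross_margin": False, "volume": False})
```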

When we have the metrics and other input data accessible, we can move to consider automating the Problem Finding step.

Finding problems:

Situations that need a response can often be determined automatically by examining the degree of status goodness or badness. Fortunately, there are a limited number of available techniques for this. They are mostly the same methods we use to alert managers to problems in a non-automated environment. Almost all can be automated by applying business rules, statistical procedures and/or predictive models. The only human input likely is when the CEO, or another potentate, says you have a problem; this being self-evident thereafter.

The automatable problem finding techniques I use most often include:

Performance Benchmark Comparison: Compare the important KPIs with benchmarks that make sense from a problem identification viewpoint. Obvious examples include: actual versus budget or plan; the previous corresponding period; best practice; etc. In addition, you can compute all kinds of metrics that relate to performance and compare them across divisions, products, locations, market segments, etc.

Performance Alerting: The next step is to use the above automated comparisons to identify bad, or superior, performance. This normally involves placing relevant metrics on a scale of awful to excellent. It’s a form of sophisticated exception analysis. The need for action response is usually determined automatically by the assessed position of the metrics on the scale.
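As a sketch of these first two techniques together, the snippet below compares each KPI with its benchmark and places the variance on an awful-to-excellent scale; the band boundaries are invented and would in practice be tuned per metric.

```python
# Hypothetical benchmark comparison plus alerting scale.
BANDS = [(-0.20, "awful"), (-0.05, "poor"), (0.05, "acceptable"),
         (0.15, "good"), (float("inf"), "excellent")]

def rate(actual, benchmark):
    variance = (actual - benchmark) / benchmark
    return next(label for limit, label in BANDS if variance <= limit)

def alerts(actuals, benchmarks):
    """Return only the metrics whose rating demands an action response."""
    ratings = {k: rate(actuals[k], benchmarks[k]) for k in actuals}
    return {k: r for k, r in ratings.items() if r in ("awful", "poor")}

print(alerts({"sales": 90, "margin": 33}, {"sales": 100, "margin": 30}))
# -> {'sales': 'poor'}: sales are 10% under budget; margin is fine
```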

Trend Analysis and Alerting: If no problem is found with the basic performance analysis, it is time to bring in the heavy statistical artillery. Trends of performance metrics, either short or long term, are often good indicators of problems that are not immediately apparent. Alerts based on good or adverse trends that trigger a need for a response are easily automated. Current application development software is very sophisticated.

Forecasting and Alerting: Even if current metrics are within acceptable bounds, the future may be problematic, and often it is better corrected earlier rather than later. Applying predictive models and then reassessing the adequacy of the forecast critical performance metrics is often valuable, and also relatively easy to automate.

Alerting to Unusual Situations: Time series analysis will often highlight hidden issues, e.g. with changes in customer, supplier, manufacturing or marketing activity. For example, the credit rating of a customer may be altered if the statistical properties of its payment pattern alter significantly.
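These three statistical problem finders can each be sketched in a few lines. The snippet below uses a least-squares slope for trend alerting, a naive linear projection for forecast alerting, and a z-score test for unusual situations; the thresholds are invented for illustration, and nothing beyond the standard library is assumed.

```python
from statistics import mean, stdev

def slope(series):
    """Least-squares trend of an evenly spaced time series."""
    xs = range(len(series))
    x_bar, y_bar = mean(xs), mean(series)
    return (sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, series))
            / sum((x - x_bar) ** 2 for x in xs))

def trend_alert(series, limit=-0.5):
    return slope(series) < limit       # adverse trend, not yet visible in levels

def forecast_alert(series, periods_ahead, floor):
    projected = series[-1] + slope(series) * periods_ahead
    return projected < floor           # trouble ahead even if OK today

def unusual(series, z=2.0):
    """Has the latest observation broken the series' statistical pattern?"""
    return abs(series[-1] - mean(series[:-1])) > z * stdev(series[:-1])

payments = [30, 31, 29, 30, 45]        # days a customer takes to pay
print(unusual(payments))               # True: the payment pattern has shifted
```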

Diagnosing the context:

The scope for, and necessity of, diagnosis in an automated decision environment is limited.

In a non-automated context this is an important part of a BI or decision support system. It involves assisting the human decision maker to understand how bad the problem is, what will happen if no action is taken, and how rapidly disaster will strike.

Normally the automated decision context is operational and relatively simple. I have found that it is often desirable to validate the problem identification procedures specified earlier. Hence, I look for ways to check that the problem is both real and significant enough to warrant automatic rectification action. This could include notifying a human monitor that action is imminent and giving a veto opportunity.

Determining alternatives:

If you’re following the methodology I outlined earlier you will have identified the controllable variables in the target business process. This would have been done in Phase 1. Some suggestions as to potential controllable variables were presented in an earlier post.

Obviously this is a critical step in the design of an automated decision system. However, provided you have done the initial homework on the control levers available for adjusting the performance of the business process, it is easy. It is simply a case of determining which levers to move, whether they move up or down, and by how much.

It may require some modelling work to answer these questions, but most often (I find) a basic table linking variance of performance metric to control adjustment is adequate. Implementing such a specification using modern rule management systems is trivial.
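As an illustration of that basic table, here is a sketch in which each row maps a band of performance variance to a lever and an adjustment. The bands, levers and magnitudes are all invented; a real rule management system would externalize this table so the business can change it.

```python
# Hypothetical variance-to-adjustment table:
# (variance lower bound, variance upper bound, lever, relative change)
ADJUSTMENT_TABLE = [
    (-1.00, -0.20, "price",        -0.05),  # volume badly down: cut price 5%
    (-0.20, -0.05, "ad_spend",     +0.10),  # mildly down: boost advertising 10%
    (+0.10, +1.00, "credit_limit", +0.10),  # well up: loosen credit a little
]

def adjustment_for(variance):
    """Which lever to move, which direction, and by how much."""
    for low, high, lever, change in ADJUSTMENT_TABLE:
        if low <= variance < high:
            return lever, change
    return None                              # within spec: no action

print(adjustment_for(-0.12))                 # -> ('ad_spend', 0.1)
```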

Evaluating outcomes:

In the automated decision context this is usually a simple or non-existent step. The rules for determining the alternatives usually imply a certain outcome.

Often only one alternative is available, due to the shortage of control variables. If more than one solution option is available, e.g. inadequate sales volume presages either a price decrease or an increased advertising expense, it may require some modelling to determine the best outcome.

Complexity arises when more than one performance metric is out-of-specification. This will usually imply that more than one control variable needs adjustment. There may be interactions between the variables that require arbitration; or we may simply throw in the automation towel and advise a human monitor of the issues.

Decision making:

For most decision automation systems the decision is effectively made with the alternative determination, and judgement is not required. If more than one alternative is identified, then an automated assessment of the evaluation determines the decision. Subjective input is usually not relevant or sought. If subjective issues are relevant, then a human assessor is required.

 

In a further post I will consider the implementation issues and recap on the overall method, since I’ve been formalizing my thinking as these posts have been created. Please advise, Dear Reader, if you have any comments on the process thus far, especially if you find it helpful or otherwise.

Building Automated Decision Making into BI System Design – A Methodology Overview May 18, 2007

Posted by Cyril Brookes in BI Requirements Definition, Decision Automation, General, Issues in building BI reporting systems.

Automated decision making for business is flavor of the month. Most emphasis has been on automating business analytics, for example underwriting in the insurance industry and stock market program trading. But there are ample opportunities for incorporating automation in more conventional BI systems, especially corporate performance management, where there has so far been little discussion.

Tom Davenport’s recent work on business analytics has been widely reported and commented. The consultants and software marketers are circling the wagons.

To highlight opportunities and stimulate discussion among BI analysts this post explores how relevant BI system targets for automation might be identified.

Most BI analysts see their role as designers of systems to support management decision making through effective presentation of information. That is, of course, commendable and important. But is that all there is? That focus doesn’t preclude building automated decision making systems if the context is suitable. It’s just that it isn’t done often. We seem reluctant to try to replace managers; maybe it’s because they are our bread and butter?

There are three generally accepted classes of decisions in business: operational, tactical and strategic. It’s pretty obvious that automatic decision making is almost always associated with operational, and perhaps some tactical, contexts. If it’s strategic, then forget it. Since many BI environments serve a mix of strategic and operational users, the prevailing focus is almost always on information presentation, rather than active replacement of human decision makers.

This discussion reminds me of a 25-year prediction from a long forgotten business journal article of the 1960s: “Boards of Directors will be retained for sentimental reasons; computers will make all the decisions….”. Didn’t happen, and won’t. A similar, but contrary, forecast appeared in the HBR of June 1966: “A manager in the year 1985 or so will sit in his paperless, peopleless office with his computer terminal and make decisions based on information and analyses displayed on a screen…” There still seem to be a lot of executive assistants around!

My intention with this post is to suggest a methodology or process which demonstrates how BI analysts can effectively and efficiently identify opportunities beyond the passive aim of information presentation. Even if the resulting design only partially automates decision making, it is likely to be a better, more effective solution than its passive counterpart, simply because it will be the result of a more creative and challenging design process.

In the current spate of articles there are many examples of apparently successful automated business process systems. While these may whet the appetite of a designer, they are not, in my view, useful guides when the task of synthesising a BI system incorporating automated decision making is being undertaken. When your child is given his/her first bicycle, showing someone cycling down the street isn’t going to be much help in teaching how to ride. Hands-on synthesis is needed. Big pictures may create envy, but don’t instruct much.

I suggest that it will be worthwhile for a BI analyst and executive team to review the corporate BI environment, existing and planned, and assess the potential for including automated decision making in the BI systems supporting each business segment.

Further, such a review should use a project planning method which segments activities into several bite-sized Phases. Here’s a suggested outline, with more detail on each Phase to follow.

Phase 1: Identify the controllable business variables in the target businesses, ignoring specific business processes

Most articles on automated decision making start with the business process and BPM analyses. I think this is the wrong initial focus. To me, the optimal review starting point is to identify the control parameters of typical business processes that are amenable to automatic adjustment. The number of business process control “levers” available to management is finite, quite small in fact, and the number that might be controlled automatically, with profit, is even smaller. Examples include: Automatic pricing adjustment, dynamic production scheduling, staff re-assignment.

A more complete discussion on identifying control variables follows in a later post. It is, I believe, the most important part of project selection and specification. Get this wrong and you will certainly miss out on the best opportunities.

Phase 2: Identify potential business processes, existing or planned, that utilize one or more of these candidate control parameters and may benefit from automation

The same control variables are likely to appear in multiple business processes. For example, automatic price adjustment could impact BI systems supporting Order Entry, Production Scheduling, CRM, Inventory Management, etc.

Phase 3: Identify components of the candidate BI systems that may profitably incorporate automated decision making

Management 101, since Herbert Simon’s day, tells us that there is a defined decision making process, with several component steps between becoming aware of a problem or opportunity, and deciding what action to take. Automating the decision process clearly requires that one or more of these steps should be performed without reference to a human.

It is relatively easy to consider each of these decision process components in turn, to determine the extent to which each can be automated. My later post will give more detail if you are interested, Dear Reader.

Phase 4: Design the business analytics (business rules, predictive analysis, time series analysis) wherever Phase 3 indicates potential utility

This is the fun part. The software tools for business rules management are much improved since I first started playing with IF…AND…THEN…ELSE statements as the basis for automation, as are the forecasting and statistical analysis packages.

I leave it to you to work out the details, as they are always application dependent. But always be aware that rules change, sometimes quickly, so dynamic management, or decision making agility if you will, is important. Enjoy.
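One way to keep that agility, sketched below with an invented rule format, is to hold the rules as data rather than as hard-coded IF…THEN branches, so the business can revise them without rebuilding the system.

```python
# Hypothetical rules-as-data sketch: first matching rule wins, and the
# RULES list can be reloaded at runtime as the business changes its mind.
RULES = [
    {"when": lambda f: f["days_overdue"] > 60, "then": "suspend_credit"},
    {"when": lambda f: f["order_value"] > 1e5, "then": "refer_to_human"},
    {"when": lambda f: True,                   "then": "approve"},   # default
]

def decide(facts):
    return next(rule["then"] for rule in RULES if rule["when"](facts))

print(decide({"days_overdue": 10, "order_value": 2500}))  # -> 'approve'
```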

Also, note that Phase 4 will be an iterative process, with frequent Phase 5 reviews to ensure that business sense prevails, limiting the scope for white elephant projects; even though they can be fun.

Phase 5: Evaluation and feasibility reviews of the costs and benefits of automated decision making components within the BI system

Try not to let the excitement of creating rules and embedding predictive analytics in a BI system carry you away; well only a little bit anyway! To me, this is one of the most interesting and absorbing roles of being a BI analyst and designer; certainly it beats specifying reports.

Building automation into BI is highly recommended, especially if you are looking for a challenge!

DIY BI Design Best Practice April 23, 2007

Posted by Cyril Brookes in BI Requirements Definition, General, Issues in building BI reporting systems.

Backing up my conviction that DIY business intelligence is going mainstream, I’ve put together a set of good practice guidelines that might, with profit, be followed by the responsible BI Rogue. Will these renegades with spreadsheet in hand, data warehouse on tap and a vague specification in mind have regard for guidelines? Only time will tell, but we won’t have to wait long, the Mongolian hordes are at the PerformancePoint gate.

Many of these points are covered already in this blog, but Dear Reader, let’s face it; a man only gets a few good ideas in a lifetime, so one must expect some repetition!

Check #1: Existence

Does another existing report or spreadsheet cover the perceived requirements, fully or partially?

A no-brainer, but it has to be asked.

Check #2: Compliance

Will reporting these data and information complicate the Corporate regulatory situation in respect of SOX and similar? Are there security issues relating to the data to be purloined, massaged and disseminated?

This is probably best ignored by the average DIY BI Rogue, except in a bank or some such place where spooks abide. Worry about it when a result is to hand?

Check #3: Iterations

Irrespective of your confidence in your spreadsheet skills and all other aspects of this BI project, be assured that it will require several iterations of specification, build and test before the result is deemed adequate, or other issues supersede the whole episode.

Plan on starting simple; and increase complexity and report niceties in subsequent iterations.

Check #4: Specifications

This is where it is all at. Do this right and it will be fine; ignore it and a mess will result. Make sure you have a specification for each iteration. A whole treatise could be written here, but if you want detail, Dear Reader, look here.

It is self-evident that you need to know what information is to be provided, the data required to obtain the information, and the transformations needed to convert data to information. Don’t start without at least this. See Check #6 for suggestions on presentation, but they can wait for later iterations – get the data and basic transformations going first.

Check #5: Know your data

Knowing your data implies knowing its metadata, lineage, update schedules, dimensions and planned amendments. My tool to do this is described here.

Just because a cube has the data you want today, doesn’t mean it will be there tomorrow, or that the update schedule is right for your specification. Don’t waste a lot of time on MDX expressions that will only work on Thursdays to Mondays, because that’s when the update cycle is complete.

Check #6: Presenting results to aid assessment

Part of the specification task, but best left to later iterations, is the design of result presentation. I don’t mean graph versus table versus bar chart; this is relatively trivial. What is important is the way the raw information obtained from the data transformations is pre-analyzed to aid the assessment of implications. This is the point where the amateur and the professional (or merely competent) DIY Rogues part company. Chalk and cheese has nothing on this differentiator.

Again there’s a treatise here, but basically the conscientious DIY BI Rogue should be aware that he/she can offer at a minimum:

Goal Variances (exception reporting, if you will);

Benchmark Comparisons (actual versus budget, plan or anything reasonable);

Trend Analysis;

Forecasts (based on time series of the data, if it’s available, of course);

Drilldown (more detail about a context, provided the narrower dimensions are in the data cube).
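For the spreadsheet-minded DIY Rogue, the first two offerings above take only a few lines; here is a sketch using pandas, with invented column names and an invented 10% exception threshold.

```python
import pandas as pd

# Hypothetical actuals and budgets by product.
df = pd.DataFrame({
    "product": ["A", "B", "C"],
    "actual":  [120,  80, 102],
    "budget":  [100, 100, 100],
})
df["variance_pct"] = (df["actual"] - df["budget"]) / df["budget"] * 100
df["exception"] = df["variance_pct"].abs() > 10   # goal variance flag
print(df)
```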

Check #7: Validation

Even DIY Rogues should be aware that the non-numeric data associated with supposedly factual data is important. By this I mean the comments, previous assessments, opinions, suggestions, etc. that relate to this sales or gross margin figure. My more complete and earlier exposition is here.

At a minimum, the subject expert who can offer clarification and amplify context for a number should be identified as part of the reporting. Links to team comments, forecasts, etc. are probably beyond the scope of your average DIY BI project, but keep them in mind for later iterations.

See, it’s not that hard!

Social Bookmarks and Tagging in BI Fail the Just-in-Time Test February 20, 2007

Posted by Cyril Brookes in General, Issues in building BI reporting systems, Tacit (soft) information for BI, Taxonomies, Tags, Corporate Vocabularies.

Tagging and Social Book-marking for BI applications is a hot topic. See, for example, Bill Ives’ comment. But I think there are barriers to its success in the corporate context. It doesn’t lend itself easily to the dynamics that are, or should be, key aspects of BI system design.

Sure, I am completely in agreement that information, particularly soft information, needs to be tagged, or classified, before it can be useful. I’ve talked about this several times in this blog. Social book-marking is better than no tagging at all.

If information isn’t categorized then it cannot be selectively disseminated or easily searched for.

The social book-marking ethos implies that people create their own tags. But, of course, no one else knows (at least knows in a short time frame) that this tag is being applied for this purpose.

Until the tag’s existence and meaning are widely known, no item of, say, competitive intelligence carrying the tag can be subject to targeted personalization to relevant decision makers. More importantly, if the tag describes a concept identical, or nearly identical, to those linked to one or more other tags, then confusion is likely.

It follows that social book-marking can be effective for information retrieval, if the tags are managed, moderated and disseminated. However, this approach is not likely to be valuable for alerting purposes, especially in dynamic business environments, because those being alerted will not know of the tag’s existence, and will be frustrated by multiple tags with the same meaning.

In any case, corporate wide management of social bookmark tags is always going to be a big ask.

Knowledge in a business is often created via group collaboration. The smart corporation enables such new knowledge to be disseminated rapidly to those who should know it, and can take requisite action. There is no time to create new tags that may be redundant anyway, and to disseminate their existence and meaning widely.

Business intelligence has two basic purposes:

1. Helping executives and professionals assess status and find problems

2. Supporting problem solving, usually by less senior staff

For the corporate BI context the alerting and problem finding objectives are usually more valuable than problem solving. Knowing an issue exists will often be absolutely critical; resolving it is usually less difficult and less important. We cannot solve problems we don’t know exist.

As I opined recently, it is the combination of subject matter and assessed importance that is the key to effective alerting, or selective dissemination. And if an executive is to have a personalization profile it must use tags that are pre-specified and whose meaning is understood widely. Social book-marking does not usually imply assessing importance. Often importance can only be determined by people outside the group that creates the information, and the tag.

In the BI context a corporate vocabulary of preferred terms will be more useful than various sets of personally created, and probably redundant, social bookmarks. This is because the standard terms are widely known. Further, they are usually grouped in hierarchies of broader and narrower concepts and this facilitates retrieval and alerting.

 

Executives can seek items of high importance that are classified by a broader term (say, overall gross margin issues), or those about a narrower term (say, product X gross margin) that are of lower importance. In either case, they will not be inundated with large numbers of items.
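A minimal sketch of this two-dimensional matching, with an invented vocabulary and importance scale, might look as follows: terms sit in a broader/narrower hierarchy, and a profile pairs a term with a minimum importance.

```python
# Hypothetical corporate vocabulary: each term points to its broader term.
BROADER = {"product X gross margin": "gross margin",
           "gross margin": "financial performance"}

def covers(profile_term, item_term):
    """True if the item's term is the profile term or narrower than it."""
    while item_term is not None:
        if item_term == profile_term:
            return True
        item_term = BROADER.get(item_term)
    return False

def alert(profile, item):
    term, min_importance = profile
    return covers(term, item["term"]) and item["importance"] >= min_importance

item = {"term": "product X gross margin", "importance": 2}
print(alert(("gross margin", 3), item))            # False: broad term, high bar
print(alert(("product X gross margin", 2), item))  # True: narrow term, lower bar
```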

Of course, inside a project team and other tightly knit groups social bookmarks may be suitable ways to tag documents and other material for retrieval.

However, I don’t believe that the wider corporate environment will benefit to the same extent. It’s a case where more formality and discipline brings better results.

Collaborative BI Implies a Personalized Grapevine – but make it Smart Alerting or it’s all Blah! February 12, 2007

Posted by Cyril Brookes in BI Requirements Definition, General, Issues in building BI reporting systems, Tacit (soft) information for BI.

Effective collaboration depends on the dynamic creation of groups that can exchange and share intelligence. Collaborating people in a group create knowledge; often it is new knowledge that can improve business performance. However, finding the right group participants, and disseminating the knowledge to empower action, both require targeted, selective dissemination of information – that’s personalization. Truly, I heard it on the grapevine!

Some regard personalized alerting in BI as creating a “market of one” for information.

I disagree. As I see it, it is creating a group of relevant people, the “A list” if you will, for the issue at hand. How can this happen, not occasionally with serendipity, but routinely? Groups must be dynamic, different for each issue, expanding and contracting in size as the issue grows in importance, or declines.

Markets of one work for marketing situations, e.g. books with Amazon.com, but I don’t believe it is the paradigm for collaborative BI.

Clearly, the traditional BI report, with information prepared by others and submitted to potential decision makers, is discredited. Today we have lakes and lakes of information available; Herbert Simon got it right in 1971: “Information abundance creates scarcity of attention”. And one can add: knowledge poverty.

Informing decision makers doesn’t cut it anymore. Maybe it never did? We need to change the process, introducing dynamics to the grapevine.

Issues grow in business importance when people in the know determine that they have grown in importance. There’s no other way.

Not all messages, ideas, news items, etc. on a topic are of the same value or criticality to a business. Most are irrelevant to decision makers; they are waffle, padding, dross, blah.

Some of those items will be interesting to the professional; fewer are important, business-wise; but very few are critical to the business. How do we distinguish? Well, it’s simple: subject experts tell us they’re critical.

If you’re still with me, Dear Reader, personalized alerting, selective dissemination, of intelligence items on a topic can only be effective, therefore, if someone tells us (or the dissemination authority/process) what is important and what is not.

I don’t believe that automated importance classification works in practice – in a business anyway. It might do for spooks, but not the rest of us.

Some years ago, I built a selective dissemination collaboration system based on a patented importance escalation process. I called it grapeVINE. It employed this model of escalation and dynamic audiences for information. It was most effective when seeded with news, marketing reports, or other items. They were automatically classified, using a standard taxonomy or vocabulary, and selectively disseminated based on client interest profiles.

grapeVINE’s special character emerged when a subject expert commented on an item, raising its importance level – saying something like “this is important because the implications are….”. Immediately the audience would increase for this, and only this, discussion thread. More people are interested in important stuff than in dross. One of these new recipients might then escalate the discussion further, bringing in more people – likely action oriented players. Then the game is on.

Two dimensional personalization of business intelligence, based on a combination of subject matter and importance to the business, is an effective driver of dynamic group formation.
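As I read the escalation model, it can be sketched roughly as below; the profile structure and importance levels are invented for illustration, and this is emphatically not the grapeVINE code.

```python
# Hypothetical dynamic-audience sketch: an expert comment raises a thread's
# importance, and the audience for that thread (and only that thread) grows.
PROFILES = {                 # reader -> (topic of interest, minimum importance)
    "analyst": ("gross margin", 1),
    "manager": ("gross margin", 2),
    "ceo":     ("gross margin", 3),
}

def audience(thread):
    return [who for who, (topic, floor) in PROFILES.items()
            if topic == thread["topic"] and thread["importance"] >= floor]

thread = {"topic": "gross margin", "importance": 1}
print(audience(thread))                 # ['analyst']
thread["importance"] = 3                # a subject expert escalates the thread
print(audience(thread))                 # ['analyst', 'manager', 'ceo']
```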

Provided the culture of sharing is established in the business (and that’s an important IF), the potential for improvement in decision making is immense. It is the optimal vehicle for combining structured (numeric) and unstructured (text) information into BI systems.

Paraphrasing Crocodile Dundee: THIS IS A GRAPEVINE!