
Cyril’s Last Post January 7, 2008

Posted by Cyril Brookes in General.

Sadly, on Wednesday December 13, Cyril died in a tragic accident at home. He was standing outside his house while a load of timber was being delivered. Somehow that load came loose and fell upon him. He died shortly after. We understand he suffered no pain.

Cyril Brookes was a distinguished professor, engineer, businessman, student, and author.  A lover of travelling, fly-fishing, good wine and spicy laksa.

It is often said that in the end a man will not be judged by what he says, he will be judged by his actions. My father’s actions were carried out with honour, respect and devotion. 

To lose him in such a sudden and shockingly incomprehensible way is devastating. 

Dr CHP Brookes was born in Sydney in 1938. His parents had emigrated from Chicago in 1929. His father was sent to establish the Sunbeam corporation here in Australia.

My father Cyril, or “little Cyril” as he was originally known, was their only child.

In 1949 the family  moved to a colonial property with a beautiful gothic sandstone homestead built in 1887.  This was to be our family home for almost 40 years. Dad never could go back there after we left, such was his affection for it.

Cyril was a student at Riverview from 1950-55. He went on to complete Bachelor’s and Master’s degrees in Electrical Engineering with first class honours from the University of Sydney.

In 1962 Dad was awarded a scholarship to St Edmund Hall, Oxford University, to study for his Doctorate. His thesis, entitled “Adaptive Control Systems”, is legendary amongst some circles. (Mostly circles whose members carry a three-pen set in their top pocket!)

My father was a man born out of a past generation, a generation which admired academic and intellectual achievement far more than we do today. His breadth and depth of knowledge were truly astounding.

From the science of business information, to the art of drinking fine wine, to the correct method of priming the dam pump, he excelled.

Despite being born into an older world, he did not become lost as that world changed so rapidly; he adapted to become successful in the modern corporate world.

In 1964 he joined BHP, and over the next 10 years he became the first senior executive in charge of information technology. By 1972 he was responsible for the entire computing function of the company, with computer installations at six locations in Australia and a staff of over 1,000.

Dad once told me how in the late 60s, as a young man, he was asked by the BHP management to head to the US and oversee the purchase of a new computer system.

This task carried much responsibility, with a budget of one million dollars, an enormous sum in those days.

The story goes that, nearing the end of an extensive journey across the computing hotspots in the US, the local consultant announced there was a big problem: the budget. They had too much left over! Over the next few days, staying at the Plaza Hotel in New York, cases of champagne were ordered, and the budget problem was over. Cyril knew when a party was needed. My family hope many of you will join us this afternoon to celebrate his life.

Perhaps Cyril’s defining moment came in 1974 when, at only 36, he was appointed the foundation Professor of Information Systems at the University of New South Wales. Over the next 20 years he built up one of the world’s largest schools of applied information technology, with over 30 academic staff and 1,000 students. He taught all aspects of the application of computer systems to business and government, with special emphasis on corporate computer strategies.

He published many papers in this field, and more recently outlined his thoughts through his regular internet blog, “Cyril on business intelligence”.

It was this research work which enabled him to move into the corporate world.

Cyril formed Grapevine Technologies Limited in 1987, to develop and market the software product which evolved from his research. The technology was adopted by many large global corporations and the operations of the company were successfully sold to a US corporation in November 2000.

More recently he established EIS Pathfinder with my brother Richard. Together they developed a unique methodology for determining the requirements for business reporting systems.  

A lifetime devoted to the development of information technology and knowledge management. He was truly a teacher and ahead of his time.

Our modern world is obsessed with lists – lists stating who is the most beautiful or richest for the year. In 2003 my dad made one of those lists: The Bulletin declared that he was one of the top 100 smartest Australians, recognised in his field of information technology.

If it’s possible to measure a man’s life in a single word, then I think the word to best describe my father would be devotion – to his wife, his family and his passions. He lived life to the full and sadly died too soon for us. We were lucky to be the recipients of his devotion and our world has been a better place because of his teachings. At the very least, we all know how to fix a pump!

Dad, we love you dearly, rest well and in peace.

The Brookes Family.


Memo to Business Analysts: A Compelling Treatise on Why and How Businesses should Automate Decisions December 12, 2007

Posted by Cyril Brookes in Decision Automation, General, Issues in building BI reporting systems.

You may know that fellow blogger James Taylor is the author, with Neil Raden, of a new book on the current hot topic of Automated Decision Making, titled Smart Enough Systems. In it they present a compelling proposition to business intelligence analysts and executives:

  • Look out for decisions that can be automated in your business;
  • Automate them and the business will be much better for it.

I suggest that you bring this work to the attention of the more creative managers and professionals in your business.

A while back I wrote on decision automation, in particular how to identify candidate decisions, here and here. You may care to revisit these, Dear Reader.

James and Neil propose that it is practicable to identify decisions that can be automated, and that the subsequent system design path is now both well trodden and amply supplied with technical support. They then give lots of detail on how to do it.

After reading the book, I put a set of questions to James, and here are his responses. I believe you’ll find them interesting.

Q1. On page 1 in the Introduction you say the book comprises two unequal “halves”, the first being general and the second technical. Are you implying that executive readers should read the first “half”, Chapters 1-4, and then move to the implementation proposals in Chapter 9?

A1. I sometimes feel like we had two books, one for executives of any kind and one for technology focused people, which we had to put inside one set of covers! The introduction did try and guide people to read as you suggest, but it’s hard in a physical book to do that well. In general I do think this would be a good approach for a non-technical reader except that I would also encourage them to read chapter 8 “Readiness Assessment” and at least skim the stories in the other chapters.

Q2. Much of the book refers to “automating hidden operational decisions” or “micro-decision automation”. Does the EDM approach described in the book only apply to automated decisions, or is it also relevant to partially automated decisions or even to a decision support role for human decision makers?

A2. I recently came across the work of Herbert Simon again and found his classification of problems very helpful:

  • Structured – well understood, repetitive, lend themselves to well-defined rules and steps
  • Unstructured – difficult, ambiguous, no clear process for deciding
  • Semi-structured – somewhere in between

Clearly EDM works particularly well as an approach to handle structured problems, whether you want to automate them completely or automate a large part of the decision while leaving some room for human intervention and participation in the decision making process. I think EDM also has value for the semi-structured problems, especially in areas like checking for completeness or eligibility. At some level EDM solutions blur into decision support solutions but an EDM solution is always going to deliver actions or potential actions not just information.
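As a hedged illustration of that last point – an EDM solution delivers actions, not just information – here is a minimal rule-based decision service for a structured problem. The rules, thresholds and names are invented for the sketch, not taken from the book:

```python
# Hypothetical decision service for a structured micro-decision: given a
# claim, return an ACTION (approve / refer / reject), not just information
# for a human to interpret. All rules and thresholds are illustrative only.

def decide_claim(amount: float, customer_years: int, prior_flags: int) -> str:
    """A rule-based 'decision service' in the EDM sense: the output is an action."""
    if prior_flags > 2:
        return "reject"                      # well-defined rule, fully automated
    if amount < 500 and customer_years >= 2:
        return "approve"                     # structured case: automate completely
    return "refer_to_human"                  # semi-structured residue: human in the loop

# Example: a small, routine claim from an established customer is auto-approved.
action = decide_claim(amount=200, customer_years=5, prior_flags=0)
```

The "refer_to_human" branch is where full automation shades into partial automation of a semi-structured decision.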

Q2(a). Does this reply imply that EDM is principally a decision support process or tool, relevant when a problem situation is identified or predictable in detail? Hence, alternative BI systems, applications or techniques are required to help executives, professionals and automation systems understand current business status and to find problem situations that require a response (to use a Herbert Simon term) – presumably a response determined using EDM principles?

A2(a). Absolutely. I don’t like calling EDM decision support, though, because I find people have a mental model of decision support that can confuse them when thinking about EDM. A problem needs to be relatively well defined and understood to be amenable to automation using EDM. While many of these problems come from process-centric analysis, it is very often the case that more ad-hoc analytics are used to find the problems that offer the best pay off and to see how well an EDM solution is working once it is implemented. In particular the adaptive control aspect of EDM solutions requires good performance monitoring and analytics tools.

Q3. Similarly, the reference in Chapter 5 to Analytics and Predictive Modeling, and in Chapter 7 to Optimization and Operations Research, could imply that EDM has a role in higher level decision support, especially at tactical level. Is this a correct inference?

A3. I don’t think so. I think it is more true that some of the techniques and technologies that work for higher level decision support are also useful in the development of EDM solutions. The mindset though, that of automating decisions not simply helping someone who has to make the decision, is quite different. The different solution type also means the techniques are often applied in quite different ways – producing an equation rather than a picture for example.

Q3(a). Following my earlier theme, and looking ahead to your answer to Question 8: is there a role for EDM in automating the finding and diagnosis of problem situations in the business, perhaps without actually producing the “equation” that will solve it – leaving that part to a human?

A3(a). This is one of the edge conditions for EDM, where the system takes the action of diagnosing something (rather than fixing it) and is certainly not uncommon. It is often found where the delivery of the action cannot easily be automated. Interestingly, it has been found to be more effective to allow the human to provide inputs that rely on human skills, such as entering the mood of a customer, and having the system produce an answer than having the system provide options or diagnosis and then having the human interpret that.

Q4. IT groups worldwide are committing to SOA as a major part of their strategic plan implementation. You refer to SOA many times in the book, and how EDM and Decision Services are complementary to the concept. To what extent is the implementation of EDM and micro-decision automation generally dependent upon the enterprise having implemented SOA principles?

A4. I don’t think it is dependent but it is certainly true that companies already adopting SOA will find it much easier to adopt EDM. I also think that companies adopting SOA are more ready both for the explicit identification of a new class of solution (a decision service) and more open to adopting new technology to develop such services. I would not be at all surprised if the vast majority of EDM projects were done alongside or following on from SOA initiatives.

Q5. Some years ago there was much discussion about Organizational Learning, the Learning Organization, Designing Learning Systems, etc. Is your approach to Adaptive Control in Chapter 7 related to this, or does it have different underlying purpose and concepts?

A5. To some extent. Part of the value of adaptive control is that it means an organization is committed to constantly challenging its “best” approach to see if it can come up with a better one – either because it knows more or because the environment has changed. In that sense the adaptive control mindset matches that of a learning organization. I also think that the use of business rules as a core technology has to do with the learning organization in that it gives a way for those who understand the business to actively participate in “coding” how the systems that support that business work.

Q6. The champion/challenger process described in Chapter 7 is depicted as two or three dimensional in the diagrams. Is it more difficult to implement when there are several alternative control variables? Are there project selection and management criteria that will help ensure champion/challenger successful approaches?

A6. It is much easier to implement adaptive control and champion/challenger when there are clear and well defined metrics with which to measure the relative value of different approaches. If the value of an approach is a complex mix of factors then it will be harder to compare two and pick the “winner”. Without a champion/challenger approach, of course, you are even worse off as you don’t even know how those alternatives might have performed.
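A minimal sketch of the champion/challenger idea with a single clear metric; the split rule, strategy names and payoffs are all invented for illustration:

```python
# Champion/challenger sketch: route a small share of decisions to the
# challenger strategy, record one well-defined outcome metric per case,
# then compare the two approaches on that metric. Numbers are made up.

def route(case_id: int) -> str:
    # Deterministic split for the sketch: 1 case in 10 goes to the challenger.
    # Real systems typically randomize the assignment within segments.
    return "challenger" if case_id % 10 == 0 else "champion"

results = {"champion": [], "challenger": []}
for case_id in range(1000):
    strategy = route(case_id)
    # Stand-in for the single, well-defined outcome metric (e.g. profit per case).
    outcome = 1.1 if strategy == "challenger" else 1.0
    results[strategy].append(outcome)

means = {s: sum(v) / len(v) for s, v in results.items()}
winner = max(means, key=means.get)  # only meaningful with one clear metric
```

If the value of an approach were a complex mix of factors, the final comparison line is exactly where the difficulty James describes would appear.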

Q7. You have clearly and comprehensively outlined the case for automating micro-decisions in Chapters 1 to 3. This argument ought to be compelling for many executives. However, there are many options for implementing rule based, model based and adaptive control based systems using the technologies in later chapters. Is it practicable to describe other implementation procedures introduced in Chapters 8 and 9, possibly via the book’s Wiki?

A7. One of the big challenges when writing the book was that many of the component technologies and approaches have value in other contexts besides that of decision management. Rules and analytics can both, for instance, improve a business process management project. There are so many of these that I don’t think even the book’s wiki would be able to handle it. We are engaged in some interesting research around decision patterns and a decision pattern language. I think this is an interesting area – identifying and describing the implementation patterns for decisions where decision management makes sense.

Q8. It appears from your discussion in Chapter 9 that the key to successfully implementing EDM in a “Greenfield” site is the selection of the initial projects. You propose selecting two applications; one rule based, and then a second that is predictive model based. This sounds sensible, but are there alternative project selection methods that might be applied? Other examples could include:

  • Partial automation of more complex, but valuable, decisions; e.g. using rules or models to find problem situations without solving them in Phase 1, with later phases implementing the automation fully?
  • Analyzing the informal “know-how” knowledge base of key professionals to determine if an initial project can be built by capturing and encoding their knowledge?
  • Use industry best practice reports to identify enterprise deficiencies that may be rectified using EDM?

A8. Chapter 9 was hard to write because different customers have succeeded in different ways. The way we outline was the one that seemed like the most likely to work overall. Each individual company may find that a different approach works better for them. Companies reluctant to fully automate decisions, for instance, may well find the first of your examples to be very useful in getting them more comfortable with the idea of automation. Identifying problem areas using best practice and driving to fix those would also be very effective, though it might well be implemented using a first rules project and a first analytic project as we suggest.

In general I don’t think that starting with know-how and working out is a good approach, however. Our experience is that you need to have the decision identified and “grab it by the throat” to successfully adopt EDM. Decision first. Always.

What Does Web 2.0 Really Mean for BI’s Future? December 3, 2007

Posted by Cyril Brookes in Enterprise 2.0, General, Web 2.0.

Will the My Spacers really act differently from us oldies when it comes to collaboration time? No doubt, like, My Space, Facebook etc. are great, like. Everybody uses them, like. Sadly, most of us could have built them instead of working on far less remunerative activities, like. Ah well, life is naught but missed opportunities, like!

But I digress, like!

BI is about helping businesses perform better, that is, producing better numbers. Successful BI enables executives and professionals to make better decisions. Although touchy-feely, warm-and-cozy approaches can help with collaboration and teamwork, they’re not the end result we’re after – it’s better decisions, made at the right time, in the right place. This implies the business has instrumentation and metrics. And metrics invariably come down to numbers.

Therefore, BI is a numbers play: presenting them, assisting their assessment, finding issues hidden in them, empowering action to resolve those issues, and sometimes automating the responses. There will always be numbers in corporate BI.

Web 2.0’s contribution to corporate BI isn’t related to collection, storing, and disseminating numbers though. It can’t be because it’s social, and social isn’t numeric. Web 2.0 is about making sense of situations, finding people with similar interests, sharing experiences, and (my hobby horse) creating knowledge from tacit resources inside people’s heads.

For BI, the critical element, Dear Reader, is that Web 2.0 enables collaborative commenting and assessing significance of facts – especially numbers that may or may not be important.

Therefore, Web 2.0 will play a vital, but subsidiary, support role in BI; it’s not the main event. That doesn’t mean it’s not important, just that it will be a secondary design, something built to enhance the numeric reporting.

I’ve opined before that Andrew McAfee’s Enterprise 2.0 tools, especially Wikis, are much more appropriate BI facilitators than My Space style stuff; they are really a clever innovation with big BI implications. I’m not sure how a corporate My Space would play out in the longer term. There’s heaps of scope for embarrassment.

To ask the hard question: will the My Spacers more readily use collaborative tools than our generation? I know they’ve had different growing-up environments; Mark Madsen has spelt that out for us.

Certainly, the new generation of My Spacers will be familiar with online collaboration. Will this translate to sharing information and knowledge that could come back to bite them? I think not. Whatever the new generation is, it’s not stupid!

Will messengers with bad tidings stop getting shot? Not likely. Tall poppies lose their heads, now and always will.

Will young real estate agents start sharing prospects and potential for-sale properties with colleagues – with the risk that this knowledge will give the others a benefit? Of course not; that would be irrational.

Will project team ingénues blow the whistle on a bad manager, or will they respect team solidarity? Solidarity will surely remain dominant.

I think that the My Spacers will be just like employees are today; they’ll be most circumspect when it comes to collaboration that could be risky to their careers; such as publicizing a rumor that could be wrong, but if correct would be very important to the business. They won’t take that risk.

However, they will be much more ready to participate in corporate collaborative activity (Wikis if you will) that is:

  • peer level,
  • peer driven, and
  • allows each person’s contribution to be recognized.

Further, the mash-up concept that links hard data to soft data – the numbers with the opinions and assessments – will work well with fresh brains and lower inhibition thresholds.
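A minimal mash-up sketch (the data and field names are invented): attach the soft data to the hard number it qualifies, so a report carries both together.

```python
# Toy mash-up of hard and soft data: join a numeric metrics table with
# free-text commentary keyed on the same dimension. All values invented.

metrics = [
    {"region": "NSW", "sales_m": 1.2},
    {"region": "VIC", "sales_m": 0.9},
]
comments = {
    "VIC": ["Key account lost in May – recovery plan underway."],
}

# Each report row shows the number together with any opinions that qualify it.
report = [
    {**row, "commentary": comments.get(row["region"], [])}
    for row in metrics
]
```

The join key (here, region) is what lets the opinion land next to the number it explains.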

Summarizing, Web 2.0 capability and experience will lead to a more collaborative local workplace, but self-interest will ensure the corporate wide, more strategic, cultural barriers remain. Sidere mens eadem mutato: but here the “stars” will be the My Spacers, not the oldies!

Collaboration and Knowledge History Creation in BI – The Twin Pyramid Model October 2, 2007

Posted by Cyril Brookes in General, Issues in building BI reporting systems, Taxonomies, Tags, Corporate Vocabularies.

Pyramids and BI deficiencies are a popular blog topic. Rising to the challenge of Andy Bailey in his “Where has BI fallen short” paper, I have some comments on the Collaboration and Knowledge/History categories of shortcomings. Other examples include James Taylor’s observation here and Neil Raden’s paper of a few months ago.

First the Collaboration bit. Regular readers will know that this is a big issue with me. I believe most businesses do it badly, for the reasons I’ve already given. But to explain how it needs to be “operationalized” we need to look at pyramids, one regular and one inverted. They’re different from Neil Raden’s but are pyramids nonetheless.

The basic problem of managing knowledge creation, collecting history and making valuable stuff rise to the top of the “action” pyramid stems from abundance.

Herbert Simon got it right when he said “What information consumes is rather obvious: it consumes the attention of its recipients. Hence a wealth of information creates a poverty of attention.” The totality of information available, both internally and externally, is overwhelming. It follows that filtering and other controls on information delivery are necessary if benefits from information resources are to be achieved.

Hence the pyramid pair as depicted below. Most documents are of interest to only a few, perhaps only one person, in a business. They can be said to have an importance at Level 1. But a few documents are of Level 4 import, and are of interest to many people. Obviously, it needs to be the function of any Collaboration and Knowledge Creation function to cause the important items to rise to the top of the pyramid.
The Collaboration Pyramids

Knowing this makes the specification of the application relatively straightforward. It needs a web-crawler, a document trawling feature, categorization capability, a subject-expert and escalation-of-importance sub-system, and the usual alerting, search and browse features. Simple; just like the picture below!




Collaboration Process
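A toy sketch of the escalation-of-importance sub-system described above; the structure and level scheme are my own illustration, not a product feature:

```python
# Escalation-of-importance sketch: every item enters the pyramid at Level 1
# (of interest to one or a few people); each subject-expert endorsement
# raises it one level, and only Level-4 items surface to the wide audience.

MAX_LEVEL = 4

class Item:
    def __init__(self, title: str):
        self.title = title
        self.level = 1                      # Level 1: interest to only a few

    def endorse(self) -> None:
        """A subject expert judges the item important, raising it one level."""
        self.level = min(MAX_LEVEL, self.level + 1)

def top_of_pyramid(items: list) -> list:
    """Items that have risen to the top and should be broadly alerted."""
    return [i.title for i in items if i.level == MAX_LEVEL]

rumor = Item("Competitor recall rumor")
memo = Item("Routine site memo")
for _ in range(3):                          # three experts escalate the rumor
    rumor.endorse()
```

The point of the pyramid is visible in the last lines: the endorsed item rises; the unendorsed one stays at the base and never troubles the wide audience.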

If you, Dear Reader, are going to overcome shortcomings in your BI context, this is a great place to start.

Unstructured Information in BI – Implementation Practicalities with Tacit Data August 30, 2007

Posted by Cyril Brookes in General, Tacit (soft) information for BI, Taxonomies, Tags, Corporate Vocabularies, Unstructured Information.

Designing an unstructured information based BI system must take account of the explicit and tacit distinction. There is consensus for this – in blog-speak that means “me and my mate in the next office agree”! Feedback on my last two posts does, however, unanimously support this contention. The issue remains: so what do we do about it? Here’s what I propose.

Most businesses have a reasonably adequate process for collecting explicit unstructured information – the documents, news, emails, reports, etc. And if yours doesn’t, the corporate portal experience awaits your attention. For heavy hitters the UIMA approach, with its multi-vendor retinue, is available, and willing, for a substantial sum.

I have opined in the last posts here and here that explicit unstructured information is not where BI relevance is at. It can be a start, but the real value lies in the qualification that the executive and professional mind-space can give to seeds of BI, both explicit and tacit. The tacit realm is the goldmine; it is where the current, relevant, actionable, validated business intelligence lies.

How, then Dear Reader, do you capitalize on your tacit resources?

It’s a 9-step process, as I see it:

  1. Encourage contributions from everyone, everywhere, based on credible rumor, opinion, assessment, etc.
  2. Scour the corporate world for knowledge building seeds, explicit and tacit – web crawlers, internal and external portals, news feeds, etc.
  3. Selectively disseminate raw data seeds to subject specialists – formally appointed for preference
  4. Encourage comments on those seeds by the specialists – facts, sources, cross-references, importance, time criticality – with discussion threads escalating in importance as appropriate
  5. Selectively disseminate comments – dynamic audience creation, so that more people, and more senior executives, are aware of more important issues
  6. Encourage issue identification by executives and professionals – implications, assessments, importance value adjustments, criticality adjustments
  7. Selectively disseminate the discussion – dynamic audience modification as business significance becomes clearer, possibly creating closed group discussions if the issue becomes strategic
  8. Propagate decisions made to the appropriate staff
  9. Store knowledge created – with time stamp, sunset clause if appropriate, to help avoid multiple solutions to the same problem
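Steps 5 and 7 – dynamic audience creation – can be sketched as a simple mapping from assessed importance to an ever-wider audience; the role names and level scheme are invented for the sketch:

```python
# Dynamic-audience sketch: as an item's assessed importance rises, the
# audience for its discussion widens automatically, from subject
# specialists out toward senior executives. Roles are illustrative only.

AUDIENCE_BY_IMPORTANCE = {
    1: {"subject_specialists"},
    2: {"subject_specialists", "line_managers"},
    3: {"subject_specialists", "line_managers", "division_executives"},
    4: {"subject_specialists", "line_managers", "division_executives", "board"},
}

def audience_for(importance: int) -> set:
    """Clamp importance to the 1-4 scale and return who should now see the thread."""
    return AUDIENCE_BY_IMPORTANCE[max(1, min(4, importance))]

# As specialists escalate an issue from 2 to 4, the board is pulled in.
early_audience = audience_for(2)
late_audience = audience_for(4)
```

A real implementation would also support step 7’s closed groups once an issue turns strategic; that is deliberately left out of the sketch.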

Obviously this must be an explicit process, where the tacit input is first encouraged, then amplified, assessed, amplified again until either the issue dies, is resolved, or mutates into another issue. But make no mistake; it’s the tacit input that drives the successful implementation.

Essentially we are making explicit that which was tacit; but on a selective basis, right time, right people, right place.

There is a downside, however. Creating a workable tacit unstructured information BI system with the above features is non-trivial. I have done it many times, and it was never easy.

Caveats and Dependencies

Cultural Crevasses

A culture of collaboration is the all-important enabler. If the people-related barriers to sharing in the knowledge creation process are not addressed, the venture will fail. No question about it. I have made an earlier post on the cultural issues and how they can be managed, but, briefly, the most critical barriers are, in my experience:

  • There’s no reward mechanism for contributing intelligence, and it’s a lot of work for no personal benefit
  • You don’t know who to tell, and it’s a lot of effort to find out
  • You don’t know if this BI snippet you have come across is accurate, you don’t want to bother someone else unnecessarily and someone else must know it anyway
  • There’s no important person around to hear what you have to say; so keep this intelligence to yourself until there is the right audience – the more valuable it is, the longer you’ll wait.
  • Tall poppies lose their heads, so keep your head down, and messengers get shot
  • You don’t want to embarrass your boss, or peer group, so keep it quiet

Source Validation

The source of intelligence is most people’s key to determining the apparent accuracy of any tacit input. If you get a stock market tip, you will always check where it came from before acting. It’s the same for a rumor on a competitor’s product recall.

Audience Creation

Dissemination is completely dependent on adequate categorization. If a document, email, news item, etc. is not classified it cannot be circulated to the right audience. And everyone in the business must use the same terms for categorization, or they will miss relevant documents.

Crucial Taxonomies

This implies a standard comprehensive corporate vocabulary or taxonomy. Setting this up is not trivial either.
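A hedged illustration of why a single corporate vocabulary matters (every tag and synonym here is invented): without mapping free tags to canonical taxonomy terms, a reader subscribed to one term misses documents tagged with its synonym.

```python
# Shared-vocabulary sketch: a reader subscribed to "customer-churn" never
# sees a document tagged "attrition" unless both tags resolve to the same
# controlled taxonomy term. All terms here are made up.

SYNONYMS = {"attrition": "customer-churn", "client-loss": "customer-churn"}

def canonical(tag: str) -> str:
    """Map a free tag to its controlled taxonomy term (identity if unknown)."""
    return SYNONYMS.get(tag, tag)

def should_deliver(doc_tags: set, subscriptions: set) -> bool:
    """Deliver when any canonicalized document tag matches a subscription."""
    return bool({canonical(t) for t in doc_tags} & {canonical(s) for s in subscriptions})

# With the synonym map, the "attrition" document reaches the churn audience.
delivered = should_deliver({"attrition"}, {"customer-churn"})
```

Strip the synonym map out and the intersection is empty – the relevant document silently misses its audience, which is exactly the failure mode described above.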

Automatic Categorization is Oversold

It’s not sufficient to classify documents by internal content references. The real, useful keywords for a document that is relevant to BI may not even appear in the text. In spite of the tremendous advances in text analysis, personal categorization by a subject expert still wins the classification stakes, in my opinion. By all means use the automated technique to get the item to a subject expert, but he/she will always be the best determinant of cross-references, importance and time criticality.


I believe that an important principle BI analysts need to fully understand is “the strategic and most valuable information in your business is in the minds of the managers and professionals”, as first enunciated by Henry Mintzberg. Turning this tacit unstructured information into explicit, useful stuff is universally a high priority task. Done well, it creates the difference between learning and non-learning enterprises.

Unstructured Information – Tacit Versus Explicit for Profit and BI Best Practice August 9, 2007

Posted by Cyril Brookes in General, Issues in building BI reporting systems, Tacit (soft) information for BI, Unstructured Information.

A picture may be worth a thousand words; a news item also has about a thousand, and a marketing strategic plan may have around five thousand. OK, but a great idea for a new marketing message, an expert’s adverse comment on the marketing plan, or a serendipitous airplane conversation about a competitor’s plans may each be worth a million dollars, for just a few hundred words. What do you want, words or dollars?

I believe there is far too much emphasis on the analysis of documented unstructured information as a BI resource. The basic important data just isn’t there for most businesses. You can search as long as you like; mine it, categorize it, summarize it, but to no avail, the well is dry.

This post follows on from my earlier, definitional, piece on this subject.

Of course, I recognize there is potential import in some written material, for example, recent emails, salesperson call reports, customer complaints and their ilk. But these are like seeds, rather than the fruit off the tree. They are the beginning of a BI story, not the whole enchilada.

At risk of making the discussion too deep, Dear Reader, I think we need to consider the basic concepts before coming to any conclusions about how a corporation should manage its unstructured data, and the tools required.

I find it valuable to characterize unstructured information with a 2 x 3 matrix.

The horizontal axis has the above two basic categories of unstructured information:

Explicit unstructured items are those that are basically unformatted, but have a physical, computable, presence; e.g. documents, pictures, emails, graphs, etc.

Tacit items are basically anything unformatted that is not explicit; they’re still in the minds of professionals and managers, but are nonetheless both real and vital; e.g. mental models, ideas, rumors, phone calls, opinions, verbal commentary, etc.

The vertical axis has the three categories of unstructured information (according to moi!): independent, qualification and reference items.

Independent items stand alone, being self-explanatory in the first instance, not requiring reference to other pieces of information, be they structured, unstructured, explicit or tacit.

Qualification items have an adjectival quality, since they add value to other items (structured or unstructured), but are therefore relatively useless without reference to the appropriate one or more Independent or other Qualification items (note there may be one or more threads to a discussion based on an Independent item).

Reference items are pointers to subject experts who can provide details or opinions, and other sources of information, structured or unstructured, together with quality assessments of the value, reliability and timeliness of those sources. As Samuel Johnson said “Knowledge is of two kinds. We know a subject ourselves, or we know where we can find information on it”.
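The 2 x 3 matrix lends itself to a small data structure; this representation is mine, purely to make the two axes concrete:

```python
from enum import Enum

# The 2 x 3 matrix as a data structure: every unstructured item takes one
# value from each axis. The representation is illustrative, not prescriptive.

class Form(Enum):
    EXPLICIT = "explicit"    # has a physical, computable presence
    TACIT = "tacit"          # still in someone's head

class Role(Enum):
    INDEPENDENT = "independent"      # stands alone, self-explanatory
    QUALIFICATION = "qualification"  # adds value to another item
    REFERENCE = "reference"          # points to experts or sources

def classify(form: Form, role: Role) -> tuple:
    """Place an item in one of the six cells of the matrix."""
    return (form, role)

# A rumor heard on a plane: tacit and independent.
rumor_cell = classify(Form.TACIT, Role.INDEPENDENT)
# An analyst report: explicit and independent.
report_cell = classify(Form.EXPLICIT, Role.INDEPENDENT)
```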

Here’s a descriptive tabulation.



Independent, stand alone items

Explicit: meeting minutes; news items; analyst reports; marketing call reports; legal judgments; government regulations; suggestion box items; customer complaints; strategic plans; manuals of best practice; emails about new issues or competitive intelligence.

Tacit: unrecorded meeting discussions; suggestions (undocumented); potential problems; competitive intelligence from informal customer/industry contacts; stock market (racehorse) tips; off-the-record talks with government officers.

Qualification, commentary items

Explicit: written comments on a report/news/analyst item; documented opinions on a problem or situation; formal assessments of status implications.

Tacit: verbal comments on a report/news/analyst item; verbal comments on emails; verbal opinions on problems; verbal assessments of issues; possible solution options; comments on a rumor.

Reference, source quality items

Explicit: lists of subject experts; ratings of experts; document sources, catalogs; written reviews of document sources.

Tacit: unrecorded subject expert identities; opinions on expert quality; people who "know-how"; informal unrecorded information source documents; assessments of document source utility.
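For readers who think in code, the 2 x 3 matrix lends itself to a simple data model. The sketch below is entirely my own illustration; the class and field names are assumptions, not part of any vendor architecture.

```python
from dataclasses import dataclass
from enum import Enum

class Form(Enum):          # horizontal axis of the matrix
    EXPLICIT = "explicit"  # has a physical, computable presence
    TACIT = "tacit"        # still in the minds of professionals and managers

class Role(Enum):                    # vertical axis of the matrix
    INDEPENDENT = "independent"      # stands alone, self-explanatory
    QUALIFICATION = "qualification"  # adds value to some other item
    REFERENCE = "reference"          # points to experts or sources

@dataclass
class Item:
    description: str
    form: Form
    role: Role

def matrix_cell(item: Item) -> tuple[Form, Role]:
    """Locate an unstructured item in the 2 x 3 matrix."""
    return (item.form, item.role)

items = [
    Item("Meeting minutes", Form.EXPLICIT, Role.INDEPENDENT),
    Item("Verbal comments on a report", Form.TACIT, Role.QUALIFICATION),
    Item("List of subject experts", Form.EXPLICIT, Role.REFERENCE),
]
```

Nothing deep here; the point is only that every item in the tabulation above has a well-defined cell, whichever side of the explicit/tacit divide it sits on.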

Ask yourself, Dear Reader, which of these cells contains high value information likely to help your corporate executives find problems and make decisions? If they're only on the explicit side, then you're in the sights of UIMA and lots of enthusiastic vendors; good luck. If some are on the tacit side, please read on.

I’ve covered several of the relevant aspects of managing tacit information in earlier posts, e.g. here and here. However, there are some additional relevant observations to be made in the tacit versus explicit context.

  • The first, possibly most important, observation is apparently self-defeating to my thesis. All important, currently relevant, items of tacit unstructured information should be made explicit as soon as practicable.
  • It is not possible to identify, collect, store, disseminate, and facilitate collaboration on purely tacit items; it will happen in a “same time” meeting, of course, but wider ramifications demand that the prelude and/or outcome be made explicit.
  • Independent intelligence items, be they initially explicit (e.g. a recent email) or tacit, are very rarely complete as regards background to the issue, its importance to the business, its time criticality, and assessments of potential impact. If you will, the knowledge has not yet been created, only its seed.
  • The information required to complete the knowledge building that starts with an Independent Item is rarely in one location or person’s mind.
  • The knowledge building is based mostly on tacit information.
  • The knowledge building process is most effective if performed via collaboration between the people who have, or know where to find, the necessary Qualification Items of information.
  • Some process for collaboration audience selection is required, one based on issue content, criticality and importance. It shouldn’t be left to pure chance.
  • Desirably, the collaboration process, but certainly the end result, should be made explicit, to avoid resolving the same issue many times over.
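The audience-selection point above can be made concrete with a small sketch. Everything here (the topic-overlap scoring, the rule that widens the audience as criticality rises, the names) is an assumption of mine, offered only to show that selection need not be left to pure chance.

```python
from dataclasses import dataclass

@dataclass
class Expert:
    name: str
    topics: set[str]  # subjects this person knows, or knows where to find

def select_audience(issue_topics: set[str], criticality: int,
                    experts: list[Expert], base_size: int = 2) -> list[str]:
    """Pick collaborators by topic overlap with the issue; more
    critical issues get a wider circle of participants."""
    scored = [(len(e.topics & issue_topics), e.name) for e in experts]
    scored = [s for s in scored if s[0] > 0]   # drop non-matches
    scored.sort(reverse=True)                  # best overlap first
    size = base_size + criticality
    return [name for _, name in scored[:size]]
```

A real system would weight expertise ratings and availability as well, but even this crude rule beats broadcasting the issue to everyone, or to no one.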

In my previous post I offered some questions that might provoke your curiosity, Dear Reader:

  1. What are the most useful sources of unstructured information in our business? Are they Explicit or Tacit?
  2. If Explicit, how do we best marshal the information and report it?
  3. If Tacit, ditto?
  4. Is the information we get from our unstructured sources complete, and ready for promulgation, or do we need to amplify or build on it before it’s useful?

I expect that you will be able to answer 1 and 4 for your business; I’ve outlined the issues as best I can.

I’ll defer offering pointers you might consider for 2 and 3 to the next post, because I believe we still need to revisit the processes and constraints that inhabit the strange corporate world of collaborative knowledge building.

Unstructured Information in BI – Only for Spooks? What about Business Analysts? July 30, 2007

Posted by Cyril Brookes in General, Tacit (soft) information for BI, Unstructured Information.

There’s a big marketing and consultant push on UIMA, unstructured information management architecture. But I think it largely misses the point for real world corporate BI people, i.e. those who are not spooks or librarians. The critical concept, ignored by many, is that unstructured information is of two kinds, explicit and tacit. Even Wikipedia gets it wrong, ignoring the latter.

I believe BI gains most from tacit intelligence, but that’s not where the product marketing thrust lies. The importance of tacit unstructured information in BI is summed up well by Timo Elliott in a cartoon and by James Taylor’s recent blog post.

The heavy hitters in BI software, as evidenced by the recent takeover activity (e.g. Business Objects and Inxight, for one), are pressing home the apparent advantage to be gained by corporations from analysis of masses of emails, news, documents, etc. See, for example, the description of UIMA.

Well and good. But you can’t make a silk purse out of a sow’s ear, as Jonathan Swift said. And you can’t create relevant action oriented information for executives out of data that has no embedded useful information in it. The ocean of documents, with some exceptions, is a BI desert for most companies. But I mix my metaphors.

Basically, I believe that a corporation’s vast compendium of historical documents has little BI relevance. It may be useful to track or assess a person’s background, or to isolate the cause of a problem. But history rarely contains the up-to-date information that’s relevant to managing a business, assessing current performance and finding problems. The real value lies in exploiting the tacit stuff.

I’ve quoted Henry Mintzberg often before, but it bears repeating, as the message hasn’t yet been fully understood in the mainstream of BI: “The strategic database of an organization is in the minds of its managers, not in the databases of its computers”. This is as true today as it was in 1974. Today, one can add: “Or in the morass of historical documents and emails”.

Of course recent emails and documents often contain important information that can, and should, be part of a BI context; but usually only as the seed for a collaborative knowledge building process. This is the nub of the issue; it’s hard to identify, collate, disseminate and collaborate on tacit unstructured information. Perhaps this is why most authors steer clear of the issue. But we need to address it if we are to be effective. More on this in my next post.

I wrote last year detailing some of my research in the 90s on the subject of “hard” and “soft” information: how valuable it is in many BI contexts, particularly CRM, but also how difficult it is to exploit. In this context hard information refers to the structured, numeric, formatted BI reports. Soft information is the unformatted, unarticulated information in managers’ and professionals’ minds.

An interesting article by Rick Taylor dealing with unstructured information is relevant here. It says, in part:

The key to defining knowledge management is to make sure you are separating “explicit” knowledge from “tacit” knowledge. Explicit knowledge is anything easy to quantify, write down, document or explain. Tacit knowledge is everything else. The knowledge based on ones experiences, and often times, at a subconscious level. It is information that you don’t necessarily know you know until you are reminded of it. If you were asked to write down everything you know, could you do it?

The explicit and tacit labels were used first in this context, I believe, by Nonaka and Takeuchi in The Knowledge-Creating Company.

The BI key questions that arise from this discussion are, I believe:

  1. What are the most useful sources of unstructured information in our business? Explicit or Tacit?
  2. If Explicit, how do we best marshal the information and report it?
  3. If Tacit, ditto?
  4. Is the information we get from our unstructured sources complete, and ready for promulgation, or do we need to amplify or build on it before it’s useful?

I believe that the above analysis outlines the problem of utilizing tacit unstructured information reasonably well. I’ll offer my answers to these issues in the next post.


Implementing Decision Automation BI Projects July 13, 2007

Posted by Cyril Brookes in Decision Automation, General, Issues in building BI reporting systems.
1 comment so far

The feedback on my three earlier posts on specification guidelines for automated decision capability in BI systems has been both positive and heartening. My objective has been to show how these BI projects for operational business processes may be built relatively simply, and to generate enthusiasm for this among the legions of business analysts. You can indeed try this at home!

This post summarizes the major issues that received favorable comment and then deals briefly with profitability, feasibility and implementation techniques for these systems. It concludes the series of (now) four posts on decision automation for BI that commenced here.

I haven’t attempted to place this subject in its context, or to cite various examples of success or otherwise. Thomas Davenport, Jeanne Harris, James Taylor and Neil Raden have done this comprehensively in their recent articles and books.

You may recall, Dear Reader, the underlying principles for this methodology are:

Selecting the right targets is critical to success:

Doing the right thing is much better than doing things right (thanks, Peter Drucker!). My prescription is to avoid trying to pick winners from among the various business processes that may be candidates for BI automated decisions. Rather, we should look to a different set of candidates as the starting point.

Identify the controllable variables – the levers that are adjustable in the business process that can shift performance towards improvement

These are easy to pick. There are relatively few options available in most businesses, variables like changing prices, adjusting credit ratings, buying or selling stock or materials, approving or rejecting a policy proposal, etc. A more complete discussion is in my second post on this subject.

Only consider automated decision making BI systems where controllable variables exist

This is a no-brainer, I guess. It’s only possible to automate when automation is possible. If we can’t control the process when there’s a problem, because nothing is available to be done (e.g. we can’t raise prices if all customers are on fixed contracts), then let’s not start automating.
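A minimal sketch of the guard implied by this rule, assuming we model each controllable variable as a lever with permissible bounds (the names and structure are mine, purely for illustration):

```python
from dataclasses import dataclass

@dataclass
class Lever:
    name: str       # e.g. "price", "credit rating", "stock level"
    lo: float       # lowest permissible setting
    hi: float       # highest permissible setting
    current: float

def can_automate(levers: list[Lever]) -> bool:
    """Only consider an automated decision system where at least one
    lever still has room to move; a lever with lo == hi is frozen,
    like prices when all customers are on fixed contracts."""
    return any(lever.lo < lever.hi for lever in levers)
```

The check is trivial, which is exactly the point: it belongs at the very start of candidate selection, before any modeling effort is spent.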

Segment the design processes into logical sub-projects so the project doesn’t run away uncontrolled

I suggest in the third post that the Herbert Simon style decision process elements are an effective segmentation. This allows focus on (say) finding problems and then on deriving the relationship between adjusting a control variable and the resultant outcomes.

Enough of a recap: here are some basic suggestions for project management.

Implementation of a decision automation project is always tricky. In most cases it is not possible to “parallel run” the new system with the old, since only one option can be tried at a time in real life, and longitudinal comparison is not possible.

I suggest that an iterative implementation is therefore appropriate. It should incorporate feasibility and profitability analyses as well.

Referring to the more detailed methodology in the third post:

  • Build the status assessment and problem finding sections first, and leave the action part to the management.
  • Then design the diagnosis and alternative selection modules and instruct a human manager what to do (always leaving the override option, of course). This is simple as long as there is only one controllable variable available for the business process and only one KPI metric, or a set of related KPIs and metrics, that is out of specification, hence signifying the problem. If there is more than one of these, then it can (almost certainly will) become complex. Certainly it’s achievable, but there’s a good deal of modeling and use case analysis required that is beyond the scope of a blog post.
  • Finally, link the alternative action chosen to the automatic adjustment of the control variable(s) and you’re home free.
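The three stages above might hang together as follows. The KPI thresholds, the fixed-step diagnosis rule and the override flag are all illustrative assumptions of mine, a sketch rather than a prescription:

```python
from dataclasses import dataclass

@dataclass
class KPI:
    name: str
    value: float
    lo: float   # lower bound of the acceptable band
    hi: float   # upper bound of the acceptable band

    def out_of_spec(self) -> bool:
        return not (self.lo <= self.value <= self.hi)

def find_problems(kpis: list[KPI]) -> list[KPI]:
    """Stage 1: status assessment and problem finding; action is
    initially left to management."""
    return [k for k in kpis if k.out_of_spec()]

def diagnose(problem: KPI) -> float:
    """Stage 2: choose an adjustment for the single controllable
    variable; here, a fixed-step nudge toward the in-spec band."""
    step = 0.05 * (problem.hi - problem.lo)
    return step if problem.value < problem.lo else -step

def apply_action(current: float, adjustment: float,
                 override: bool = False) -> float:
    """Stage 3: automatic adjustment of the control variable,
    always leaving the human override option."""
    return current if override else current + adjustment
```

Notice that each stage can be put into production on its own, with a manager closing the loop, before the next stage is automated; that is what makes the iterative rollout practical.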

I hope, Dear Reader, you’ve been infected in some small way with the enthusiasm I have for automated decisions in BI applications. In many ways they are the most satisfying aspects of Business Analyst work, since you get to design the system, and then get to see it perform. Working on high level strategic projects is often more intellectually challenging, but you rarely get to have full closure, it’s the executive users who have that pleasure, often long after you’ve left the scene.