Wednesday, August 30, 2006

Three gallons of abstract thought

A few days ago I found myself saying to a coworker, "we need to make the compliance sell." The company I work for makes software that helps companies and government agencies manage information. With the collapse of Enron and the advent of Sarbanes-Oxley, compliance with information management rules and regulations is a big deal.

We talk about compliance a lot at my company. We consider it a major reason why someone would buy our product. It's in all of our marketing literature. But I suddenly realized that I have no idea what it means.

"Compliance" isn't a thing. It's a state of being. We can't sell compliance because you can't buy compliance. You can't buy happiness either. Try walking into a store and asking for half a pound of Zen.

So when companies promise a smile in every box, or satisfaction, or romance, or "being part of the legend," what they're really talking about is how you'll feel after you've purchased and used their product. They want you to visualize how you'll feel, the end result, the ultimate goal.

For luxury goods or impulse buys, this strategy works. If you make chocolate, selling "satisfaction" makes sense. Customers can see how one leads to the other. They've had chocolate before, so they know what it's all about. If you make motorcycles, spreading "the legend" makes sense, because we've all seen the movie. It's asking customers to connect the dots in a visceral, Pavlovian sort of way. The word "performance" should start sports car buyers salivating.

But what about products that require more serious thought? Does the approach work? At some point, someone will ask what "increasing transparency" or "harnessing synergy" means. How does your product or service do that?

And you'd better have a short and compelling answer for them. Because nothing grates more than a company that sells "peace of mind" presenting you with some of the most complicated and stressful decisions you'll ever make. Or firms that promise exclusivity to you -- and everyone else with a few spare bucks.

Monday, August 28, 2006

Hierarchy vs Social Network

Today I am writing more about this self-tagging vs peer tagging thing, because I think there is a balance to be had between the two models.

IBM has a system for expert location that I imagine many companies have: the giant staff lookup that includes tags designating expertise. The traditional staff directory is not new - that's been around in paper format since the dawn of the industrial age. But IBM being, well, Big Blue, has added an expert tagging and context aspect to their staff directory that exceeds the average staff lookup system. Called IBM Bluepages (aka Fringe Contacts), it lets users view a person's reporting chain (who they report to, their boss's boss, and so on, all the way up to the CEO), their peers, and the person's direct reports. The system also provides dual tagging - self tagging and 'corporate' tagging. Here's a screenshot. (Props to Library Clips, my source for this.)

A good system has both - a Peer Tagging/Reputation system AND a self-tagging system.

The trick to making all this metadata work, though, is to overcome the maxim "People are Lazy". The only thing that makes an expertise system (such as IBM's Fringe Contacts) valuable is currency.

At my last company we had a similar system. It was a home-grown application that ran inside our corporate intranet and was connected to the staffing system. It designated what percentage of your time was devoted to which projects, what the project team was, who reported to whom, and which internal teams supported the work. It also included tags for the technologies on the projects, linked those to the ones you were working with in your job function, and had a separate section for you to 'self tag'. Every three months, or every time you changed projects, you had to go into the system and update your expertise tags (and you got a really annoying nag email every day until you did). The expert tags worked a lot like resume submission - you listed the technology or expertise area, years of experience with it, level of expertise, and the last time you "used" that technology. Every time your staffing assignment changed (even if it was from one task team to another on the same project), your profile was updated. It was both authoritative and augmented by additional self-tags.
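A minimal sketch of how such a profile might be modeled, with both the authoritative project tags and the voluntary self-tags, plus the quarterly review check that drives the nag emails. All names and the 90-day window are assumptions for illustration; the original system's actual schema isn't documented here.

```python
from dataclasses import dataclass, field
from datetime import date, timedelta

@dataclass
class ExpertiseTag:
    skill: str        # e.g. "Perl"
    years: float      # years of experience with the skill
    level: str        # e.g. "novice", "intermediate", "expert"
    last_used: date   # last time the skill was exercised on a project

@dataclass
class StaffProfile:
    name: str
    # Authoritative tags, fed automatically from the staffing system.
    project_tags: list[ExpertiseTag] = field(default_factory=list)
    # Tags the employee volunteers about themselves.
    self_tags: list[ExpertiseTag] = field(default_factory=list)
    last_reviewed: date = field(default_factory=date.today)

    def needs_review(self, today: date, max_age_days: int = 90) -> bool:
        """True once the quarterly review window lapses - time for a nag email."""
        return (today - self.last_reviewed) > timedelta(days=max_age_days)
```

The key design point is the split: `project_tags` stay current because staffing changes update them automatically, while `self_tags` depend on the nag-driven review cycle.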

The context this provided was great - at one point I joined the team of a system that had evolved over the previous five years. My job was to update and enhance legacy code for the application. By going into our staffing system, I learned that someone still with the company, in the LA office, had been the lead developer on the project. I reached out and made a connection - which led to some hidden documentation and a great brain to pick anytime I needed to ask, "Now why would the developers have chosen to do it that way?"

The system was also great for finding hidden expertise in companies. Oftentimes, companies act as though your expertise only began when you walked in the door and became an Acme, Inc. employee. Of course, this is rarely true! Most people have a varied background, and just because you are no longer working in a certain area doesn't mean you lost your expertise in it. Once upon a time I was a Perl guru, and since I was tagged in the expert system as such, I even got the occasional Perl question from random developers in the office who could see that while I wasn't the most current Perl expert, I WAS sitting just down the hall... Which points to one of the more insidious problems in companies today: how to find the expert next door.

Continuing my praise for the self tag - it's what we use most on resumes, in the list of keywords and skills we all write up about ourselves. This of course has its limits, because people lie, abbreviate, and neglect their resumes. That falls under the metadata trap of people lying. I don't mean people are all evil - I've 'lied' on my resume. No, I didn't make up degrees I don't have. I left things off. On purpose! At a certain point in your career, everyone does this - my internship as a receiving clerk in a department store has little relevance to my career as a technology consultant. And I do in fact know a little PL/SQL. But ask me to leverage either of those areas of expertise and I will deny, deny, deny knowing them! Fortunately, this is an acceptable white lie, and an interesting quandary for expertise location systems. Peer tagging is good, but self tagging creates the chance to "opt out".

So now, in my dreams I'm wishing for a mashup of Fringe Contacts and Qunu...

Sunday, August 13, 2006

Proposal 101: ATFQ

I've spent several hours this weekend working on a proposal for work. It's a rush job, as many proposals are. But this one was in bad shape.

I have high standards in this regard. I used to work for a firm that did a lot of government contracting. They had responding to RFPs down to a science. My job was to manage the final assembly of the proposal, including all its forms, attachments, slides, spreadsheets, pictures, and other paraphernalia. This meant that I had to study the RFP closely and make sure that the final response conformed to all of the requirements.

My responsibilities included the mechanical aspects of the proposal: page counts, font sizes, whether the graphics matched the captions, etc. But the editors I worked for were concerned with the technical approach. One in particular used to scrawl "ATFQ" in flaming red pen in the margins next to questions. He would refuse to do any editing on a question that had ATFQ written next to it. One day I asked him what it meant, and why he was so insistent about it.

"It means," he said dramatically, and somewhat more crudely, "Answer the Freaking Question!" It's the best advice I've ever had for responding to RFPs.

Answer the Friggin' Question

The most important thing to remember about responding to an RFP is that your audience is really, really bored. They've probably read three other proposals before yours, and all they really want is an easy way to separate the good proposals from the bad.

You want to make your answer as concise as possible, so that they can tick their mental checkbox that your product or service satisfies the requirement. In round two, you can always refine your statements and expand upon your answer. But for the first look, your job is to make it easy for the reviewer to put your proposal in the stack of "compliant" proposals.

This means that your response to each question in an RFP ought to come in the very first line of your answer. If possible, it should come in the very first word.

For example, if an RFP asks this:

Does your gizmo conform to the Phoom 2.3 specification for advanced wuzzle?

Your answer should not begin:

Since 1872, ACME Corporation has manufactured advanced fizzlers to the highest standards of nazblad quality, in accordance with our mission of serving the needs of the gibbler market better than our competition and for a more reasonable price....

Your answer should begin:

Yes, ACME gizmos are compliant with the Phoom 2.3 specification.

See how much better the second answer is than the first? A lazy reviewer only has to read the first word to know everything he needs to know in a first-pass review. Later, he can read the entire first sentence, first paragraph, or extended essay if he wants more detail. But he doesn't need to do that. You've ATFQ'd in his mind, and everything else is window dressing.

Make it Obvious

I deliberately used nonsense words in my examples above, because frankly that's how most technical RFPs look to the average person. What's a wuzzle? What's the Phoom specification? Not only might you not know what the words mean, but chances are the person who assembled the RFP document doesn't know either. They're just parroting some phrases they heard the other day. It sounded important, so it went into the RFP.

Chances are, your answer is going to be just as difficult for a nontechnical reviewer to evaluate. What's the gibbler market? What's nazblad quality? So the idea is to make those first words or sentences clear enough that a layman can make one of the following determinations:

  1. The respondent answered yes.
  2. The respondent answered no.
  3. The respondent gave a qualified answer.

That's all that's necessary for the first round.

Friday, August 11, 2006

When you don't know what you don't know

Lots of asynchronous dialog this week about my post on Qunu, the 'push-to-talk-to-an-expert' company. The Qunu team wrote some well-thought-out comments back to my critique. And Dean had a really good point to make about my last post on Qunu. When it comes to needing expert advice on a subject, bulletin boards are actually better. Why? Peer review. If you ask a technical question on a bulletin board, both the question and the answer are public. So if someone gives you bad advice, it tends to get corrected. And it should be faster, because in theory the first person who knows the answer to your question responds. However, Dean points out the true value of free-form talk: when you don't know what you don't know. This is where Google, search engines, and many expert locators fail. It's easy enough to find the answer to a question when you know the question. But it's not uncommon as a knowledge worker to need to know something that is well outside your area of expertise. And when you are working outside your area of expertise, it's hard to know where to start, or even how to ask the question. And that's where something like Qunu really comes into play: the chance to engage an expert in an extended dialog.

Tuesday, August 08, 2006

Can you Talk to an Expert?

As part of my job as a government consultant and collaboration guru, I support an XMPP/Jabber pilot program. I'm also busy thinking about the subject of expertise location, since it's a big concern for a 'corporation' as large as the 5-million-plus-strong US Department of Defense. So naturally I have to talk about a company that involves both of those things - Qunu. Qunu provides a folksonomy-based categorization of expertise and an XMPP-based chat (instant message) tool that lets people with questions connect in real time to people ("experts") with answers.

What I find most interesting about this tool is that it enables synchronous ("real time") expertise. The web already does a great job of providing asynchronous (time-delayed) expertise through bulletin boards, articles, corporate websites, and blogs like this one. Even Google Answers, which promises live researchers to answer your questions (for a small fee), only delivers a not-quite-real-time response in 24 hours or less.

This real-time capability is interesting because it better replicates the process a lot of people follow right now - a phone call or a conversation with someone they know and trust who is an expert in the area of concern. The new feature here is the ability to first FIND someone who is an expert, and then initiate that conversation right away.

Cool idea, but it raised more questions than it answered for me. I think they still have a lot of flaws to work through:

  • Self-Tagging Bias - just because you say you are an expert doesn't make you an expert.
  • No Trust - Online trust is the currency that drives person-to-person interactions online. There is no way in Qunu to peer review the experts, or rank the advice you got. Can you imagine eBay without the rating system? Slashdot without karma?
  • Dumb Questions - Have you ever worked a help desk? If so, you realize that 95% of questions are repeats of questions you've answered before. Where's the mechanism for participants to say "You idiot! RTFM!"?
  • Expert Overload - if you really are an expert, you're likely to be a busy person. So I suspect this system self-selects for the unbusy - and perhaps less expert - participants.
  • No Reward - people volunteer their time for a lot of things. But most folks want something in return. For Google Answers, it's cash. For the open source community, it's reputation. Sure, there are people who contribute just for entertainment, or for hubris. But enough to sustain a business model?

Not to rain on Qunu's parade - it's one of the more novel thoughts I've seen come out of the philosophical question of "what could you do if chat were as standard and ubiquitous as email?" But it's still not my silver bullet for locating and leveraging expertise. Darn.

Sunday, August 06, 2006

Wiki Resolved

As part of the continuing series of posts about my company's SDK Wiki, I've decided to lay out the arguments for and against it being publicly readable.

Those of you that followed the link above know which side ultimately got the better of the argument, but I think it's important to discuss the pros and cons of the decision. Our company is likely to see similar debates in the future, and if your employer is anything like mine, you're likely to experience them as well.

The Debate

The arguments for moving the SDK wiki to a login-only setup essentially boiled down to two concerns. First, that we were giving away our intellectual property. Second, that the information posted to the site could result in damage to the company's reputation or to litigation. Let's examine each objection.

The intellectual property argument was that one of our competitors could take the information on our SDK wiki and reverse engineer features of our product. In recent years, our industry niche, Enterprise Content Management, has seen several waves of mergers and consolidation, and the competition is quite fierce.

I suppose the hesitation is understandable, but the SDK Wiki was really more of an extended user manual than anything else. If a competitor wants to reverse engineer our product from a manual, well, good luck. It'll take them years. In the intervening time, I hope we'll have moved on to something better.

And as one senior executive pointed out, our competitors already have copies of our software and manuals. If they were serious about taking our product apart, they would have done so already. The only people we'd be protecting our IP from are our partners and customers -- legitimate users of our product.

What of the second point, that damaging information could be posted to the site? We'd already configured the SDK Wiki so that users have to log in to edit, so presumably no user can post information there that we hadn't approved. As for the content itself, we monitor changes regularly, so it's easy to fix if something goes wrong. So even though litigation or the potential of lost sales would be costly, it's not a very likely scenario.

On the other hand, what of the potential to increase customer loyalty through providing more and better information in a timely fashion? Or improving relations with our partners by making it easier to work with our product? Or helping to inform trade analysts? While these benefits are hard to quantify, they more than compensate for the risks, in my view.

Ah, the skeptics asked, why couldn't we simply provide all those people with a login? Why does the SDK Wiki have to be public to achieve those objectives?

Well, the main reason is that we don't really know who "all those people" are. We sell our product to large organizations, but we might not know who within those organizations uses the product. The same logic applies to our partners. We simply don't know which users need logins in advance.

We could allow them to sign themselves up, but as I mentioned in my previous wiki post, Internet search engines won't be able to index the content. Since many people use a search engine as their primary way to navigate the web, they might never find our site. Even if they do find the site, they won't know what sort of great stuff is posted there until after they sign up.

In the case of our partners, there's an additional incentive. These systems integrators, consultants and resellers often work with a variety of products. They already have a bewildering array of logins and passwords to manage. Most will opt out of signing up for yet another technology website, unless they're certain they need it.

Chances are, they'll look somewhere else for the information first. Perhaps they'll go to another website, one that we don't control, and who knows what they'll see there? At least if they come to our site, they can get the answers straight from the source. So it's potentially less damaging for us to make our content widely available.

The Decision

On balance, the arguments were strongly in favor of leaving the SDK Wiki publicly viewable. So after a month behind closed doors, the wiki has re-emerged with a stylish new look. We added a few disclaimers to highlight pages still under review and established a more formal monitoring process. I'm really happy with the final result.

And as frustrating as the ordeal was, it's made me appreciate two things. One is that this information management thing is hard, even for a company that does it for a living. I have a much better understanding of the work our customers go through to implement our ECM product now. I'll listen much more sympathetically to our customers as a result.

The other thing is that I work with really great people. It's been a challenge to deal with everything that's happening at our company lately. You can sense the frustration in a previous wiki post. But the folks at my company are awesome.

And I'm not just saying that because they let me win this one.

Thursday, August 03, 2006

What is this Expert Location thing?

Maybe this has happened to you before: You are one of those fabled 'knowledge workers'. You realize that you really need to know about a certain subject. We'll call that subject FooBar. You look in your corporate knowledge base (you do have one, right?) and you find a document there with 'FooBar' in the keywords list. Unfortunately, you know nothing about FooBar, so you can't really tell if this is the latest thinking on FooBar, or what this whole FooBar thing is about, what all these FooBar buzzwords really mean. Maybe you just need to talk to the person who wrote this document and learn more. What you need is a FooBar Expert.

But how do you find that expert? There's been a lot of ink spilled on that very question, but I'm not sure anyone has found the holy grail of expert location. I've been thinking about the subject myself (heck, just the other week someone accused me of being an expert on expert location!). And I figure if fingers are getting pointed, I should share my thoughts on the subject.

So what makes an expert? There are a lot of definitions of “expert” out there, but I like this one: “An expert is someone widely recognized as a reliable source of knowledge, technique, or skill whose judgment is accorded authority and status by the public or their peers.” I like that definition because it captures some key ideas – wide recognition, reliability, trust. I think each of those concepts is worthy of its own discussion, so I'll just start at the top - recognition. How DO we find that FooBar expert?

For computers, recognition is done through some form of people tagging. While there is certainly a lot to be said on how humans tag expertise in their head, for now we'll stick to how computers can aid in finding experts. If you think about it, there are a couple of ways for an expert location system to tag experts: Creator (Self) tagging, Expert Tagging (i.e. A librarian), Machine Tagging (i.e. Entity Extraction), Social Tagging (i.e. Folksonomy, group, or "Peer" tagging).

Each type of tagging has its pros and cons:

  • Self tagging isn't a bad approach necessarily, but the typical definition of expert tends to be "a widely recognized and trusted source of knowledge"; so being self-tagged instead of peer-tagged has its flaws (hubris!). On the other hand, no one else tends to know what you know better than yourself.
  • Expert Tagging comes from the library science way of defining things. The benefit is (one hopes) an independent certification of what one knows. Expert tagging can also come implicitly in a corporate environment from organizational structure. If you are the lead or even a member of the FooBar group, you're assumed to be a FooBar expert.
  • Machine Tagging is an emerging concept, but not a new one. Intelligence analysts (aka spies) have been using machine-based tagging to connect people with what they know through all sorts of intelligent (pun intended) algorithms. The basic concept is that you can pull key words (usually called 'entities') from text (newspapers, audio transcripts, overheard phone calls, and so forth) to connect the content of what is being said to the person doing the talking, the person they are talking to, and the person they are talking about. It's a great approach for finding a needle in the expertise haystack, and as a result organizations spend a lot of money on tools that find quality connections. However, it assumes you have a lot of hay (and a lot of dough) to work with.
  • Social Tagging is probably the most common "non-computer-aided" way to tag an expert. Back to my definition - an expert is someone widely recognized. Social tagging the traditional way comes from professional groups. First, you have the groups that certify expertise. This can be anything from the American Bar (lawyers) to the Institute of Electrical and Electronics Engineers (IEEE - Engineers, naturally).
    In academia, you also find a ranking system - citations. One's expertise credentials are boosted by the number of peers who cite your work as a source for theirs. Those who are considered experts in their field are those whose work is most frequently cited (the same is arguably true of blogs).
    Expertise in other fields can also come through informal referral networks - especially in fields where people don't write or obtain certifications for a living. Take general contracting and construction as an example: people maintain personal networks and evaluate expertise based on performance. Plumbers are certified through a variety of trade guilds and professional groups, but that may not be the only measure of expertise. You might be considered an expert plumber if 9 out of 10 general contractors recommend you to fix leaky pipes - and in many fields that personal trust outweighs the formal titles.
    Is there a weakness to social tagging? Of course - first, it requires people to contribute to the classification of other people (people are too lazy to tag themselves; will they really tag others?); second, it assumes that others are a good judge of what it is you know.
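One way to get the best of all four tagging sources is to weight them and rank candidates by their combined score. The weights below are purely illustrative assumptions (the post doesn't prescribe any), with social tagging weighted highest since "widely recognized" is the heart of the definition of an expert.

```python
# Illustrative weights per tagging source - assumptions, not prescriptions.
SOURCE_WEIGHTS = {
    "self": 1.0,      # cheap and current, but prone to hubris
    "expert": 2.0,    # librarian / org-chart certification
    "machine": 1.5,   # entity extraction from documents and messages
    "social": 3.0,    # peer recognition - closest to "widely recognized"
}

def expertise_score(tags: dict[str, set[str]], topic: str) -> float:
    """Sum the weights of every source that tagged this person with `topic`."""
    return sum(
        weight
        for source, weight in SOURCE_WEIGHTS.items()
        if topic in tags.get(source, set())
    )

def find_experts(people: dict[str, dict[str, set[str]]], topic: str) -> list[str]:
    """Rank people by combined score for a topic, dropping non-matches."""
    scored = [(expertise_score(tags, topic), name) for name, tags in people.items()]
    return [name for score, name in sorted(scored, reverse=True) if score > 0]
```

Under this scheme, a person tagged "FooBar" only by themselves ranks below someone whose peers agree, which is exactly the balance between self-tagging and peer tagging argued for above.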

There is a lot more to be said on the subject, but that's a start. After all, it's not like I'm an expert on this.