
Are we Outsourcing Common Sense to the Internet?

It is a very human instinct to trust somebody more the better we know them. And how well we know somebody is often a function of the amount of time we spend with them. That level of trust often determines how much we are willing to believe, or be influenced by, them.

Influence comes from trust.

Interestingly, many people assign human attributes not only to other people but to other things as well, including pets, inanimate objects (“I stubbed my toe on that stupid rock”) and, most importantly, interactive systems (such as computers). In fact, when it comes to interactive systems, we are rapidly approaching the point where a growing number of people assign the same level of trust to an online venue as they might to a real-life friend or family member. This seems especially true of search and information-distribution systems, which are by design “user friendly” (read: more humanistic).

Unfortunately, in a world where we are all “publishers” and sources of information, not all information has the same value or trustworthiness.

As a result, many people are exposing themselves to a veritable web of incorrect or manipulated information. To some extent this surrenders our very human instincts of questioning and common sense to information sources that are inherently unreliable, easily manipulated or simply overwhelming in the sheer volume of information we have to deal with each day. In fact, it seems many are lowering their friendship and trust requirements to match the abilities of the Internet-based systems with which they interact on a daily basis.

Let me illustrate with a few examples:

Example 1: Google v Bing (Nicaragua v Costa Rica)

For over a century, Nicaragua and Costa Rica have argued over a relatively small, yet seemingly important, slice of land along their north-eastern border (a dispute that was supposedly settled diplomatically over a century ago with the aid of the U.S. government). This dispute, now in front of the Organization of American States (OAS), has been renewed, with Nicaragua reclaiming the land and deploying military troops (Costa Rica, by the way, has no military of its own). It has even become part of a controversy surrounding the “rumored” desire of Nicaragua, Venezuela and Iran to build an alternative to the Panama Canal (there’s a conspiracy to bite into!).

Google, with the unenviable (but self-allocated) task of tracking global borders, is at the center of this renewed dispute, having used an admittedly erroneous map that featured an incorrect/outdated border. Nicaragua has been touting the incorrect Google Maps version (which was subsequently/conveniently used as part of their justification to move troops into Costa Rica) while Costa Rica has been touting the correct Microsoft BING version (yes, BING got it right). Interestingly, Google Maps has since corrected the border, a fix that Nicaragua has in turn asked Google not to make.

For many, being right equates to being first, or more well known, on the Internet.

On its surface, this would seem like a trivial non-event, especially given the history of the world and the inability of anybody to accurately map it (a task that is extremely difficult as borders are constantly changing and the world is full of “disputed” territories). But Google has made its share of mistakes, including recently deleting the entire town of Sunrise, Florida (and its 90,000 inhabitants). More importantly, had Nicaragua not taken action against Costa Rica, I doubt that more than a handful of people in the entire world would have noticed the error, nor would anybody else (and I’ll include myself here) have had any reason to question its accuracy.

Example 2: The Wikipedia Incident

I have a good friend who recently recounted an amazingly absurd event involving Wikipedia: the crowd-sourced online encyclopedia that has increasingly become an acceptable source of “it must be true if it is on Wikipedia” information. My friend is an expert in a certain field and happened to discover an error involving a certain event in early American history. Being the helpful guy that he is, he went through the Wikipedia registration process and corrected what was a fairly obvious error. However, the following day he discovered that his correction had been subjected to a classic “undo” and reverted back to the original inaccurate text.

Influence or control doesn’t always equate to experience.

After several “re-corrections” and a series of similar “undo” events, he finally tracked down the person who was rejecting his corrections – a person who apparently had a bit more Wiki-clout than my friend. After he explained that the information on Wikipedia was incorrect, and cited numerous written texts and documents as proof, his corrections were clearly and decisively rejected – the justification essentially being that the original text was corroborated by another Wikipedia entry (which itself was not only incorrect but used the original incorrect entry in a circular reference between the two entries, demonstrating that two Wiki-wrongs do in fact make a Wiki-right). So much for crowd-sourced intelligence when the crowd you are dealing with is a crowd of one.

Example 3: The $200M/Day Obama trip to India

This one simply defies logic. An official in the Maharashtra government (a state in western India) was cited in a story on NDTV.com, which stated: “A top official of the Maharashtra government privy to the arrangements for the high-profile visit has reckoned that a whopping $200 million per day would be spent by various teams coming from the US in connection with Obama’s two-day stay in the city.” This estimate included, by the way, the erroneous claim of renting out the entire Taj Mahal Hotel in Mumbai and covering the costs of a (massively overstated) 3,000-person entourage (including the press, which travels on its own dime, not the U.S. government’s checking account).

This “creative” estimate from the “privy” official, along with a subsequent NDTV.com report that 34 U.S. warships (including an aircraft carrier) were being diverted to support the President’s visit to India, was picked up by the Drudge Report and subsequently became “fact” amongst the anti-Obama press in the U.S. Within a mere hour, of course, the $200M/day for the two days in India had become $2 billion: $200M/day not just for India but for the entire 10-day Asia trip (side note: we only spend $190M/day on our entire war effort in Afghanistan – see this nice rebuke by WSJ.com). Despite a dubious source, and a series of incorrect enhancements courtesy of the instant Internet, the story gained traction – significant traction well beyond what normal common sense would dictate.

Where has our common sense gone?

There was a time when both our individual and collective common sense would have immediately questioned each of the three examples above. But increasingly that is no longer the case. We have replaced the questioning of sources and common sense with a misplaced sense of trust in an often automated (or highly manipulated) information stream.

If we interact with an information system enough, we learn to trust it. This is not always a good thing.

We now live in a world where search engines (knowing that we’re more likely to click on search results at the top of the list) allow the manipulation of search results. We live in a world where not only can you sponsor Tweets on Twitter, but you can now sponsor Twitter Trends (talk about the ultimate ability to take advantage of “what’s hot now”).

Is there anything left that we can trust? Yes. Our common sense. And that is our challenge – not only to make sure that we ourselves question and fact-check what we see, read or hear (in any information/news medium), but to lead: to encourage, influence and empower others to do the same.

If we can do that, then perhaps we have a chance to restore some sanity and faith to this digital world in which we all live.


Note: Map Graphic courtesy of Erika Orban, Wikipedia image courtesy of Wikipedia, Barack Obama image source unknown


The role of Influence in Analyst Relations

I read a great piece recently by my friend Lisa Petrilli on Influence vs Empowerment. She raised some excellent points about the differences between the two and, more importantly, the effectiveness of the two. And in a pure social/commercial market, her points were dead on.

One of the more interesting notes in her post was the result of research by Steve Knox, CEO of Procter & Gamble’s word-of-mouth (WOM) unit Tremor. They found that rather than “influencers” in any given market, there were actually “connectors” – people who linked people and ideas together (perhaps a much better, or more accurate, description of influence!).

But in the world of Analysts and Analyst Relations, all too often the word “influence” is the yardstick by which people, initiatives and, ultimately, corporate value are measured: how much “influence” over a particular market, a group of analysts or a group of [pick any category that involves at least one person who buys anything] does an individual actually have, and how can they improve it?

So with a tip of the hat to Lisa and the work of Steve Knox and the team at Tremor, I think there are some very distinct ways that we can alter/change/improve the way we “connect” people and ideas together, with the ultimate goal of shaping (influencing) an analyst’s decision to recommend a product, or ultimately (and perhaps directly) a consumer’s decision to purchase a product, over another vendor’s product. Ideally, this influence should be subtle enough that the analyst or consumer feels they have made the correct choice themselves, as a result of their own empowerment and selection process.

Here are a few things to think about – questions that are worth answering as we begin to consider just how much influence vs “connective ability” Analysts and Analyst Relations people actually have, and how they can improve it:

  • Can “empowerment” actually be used to shape an Analyst’s opinion of a vendor’s product?
  • Can a vendor’s Analyst Relations team go directly to an end-user, bypassing an Analyst, to both help shape an Analyst’s opinion AND drive direct sales results? And does this violate the separation of AR & Marketing?
  • Is there value in a vendor’s Analyst Relations team working directly with Marketing and Public Relations to help shape the way a vendor’s customers (believers) can become connectors, and increase a vendor’s brand awareness?
  • Can basic outreach techniques (blogging, speaking, etc.) be used by a vendor’s Analyst Relations team to help create a larger group of “connectors” (and thus influence) in a market?

Note: Image courtesy of the band Under the Influence of Giants self-titled album released 2006 on Island Records


Outsourcing Analyst Relations: A viable option?

Last week I participated in an interesting discussion regarding influence and the role of analyst relations (AR) – specifically around the issue of how AR staff could increase their influence through a variety of different mechanisms or channels. But one key point that kept creeping into the conversation was one of limited resources: “we simply don’t have the staff to aggressively pursue everything that we would like to accomplish” (a point echoed by many in smaller or fast-growing firms).

After a bit of digging, two basic issues kept making their way into the discussion: a lack of full-time resources and a lack of “R”-level funding (which is often split between Analyst Relations, Investor Relations, Public Relations and Marketing).

That said, there seemed to be a general consensus that yes, there are “parts” of the AR function, regardless of the size of the firm, that could be outsourced based on the size/type of organization, the goals that need to be accomplished and the availability of “outside” resources (or more importantly, funding) – all with the understanding that there must be an accountable person in-house to properly manage and drive the effort.

OUTSOURCING

Here are three basic examples where outsourcing of AR activities might make sense:

  • The Introduction: Sometimes finding the right analyst, or getting in front of the right analyst, can be a challenge. This can be difficult in situations where a firm is moving into a new market sector (product and/or geographic) and may not be familiar with the most appropriate analysts to reach (think of a US firm trying to move into ASIA/PAC as an example). Using an outside resource (an agency, advisor or perhaps even another industry analyst) to help find the right “connected” or “influential” person can be extremely effective.
  • The Event: Outsourcing clearly makes sense whenever the word “event” is involved. In fact, the bigger or more important the event, the more outsourcing becomes a viable option (especially for a staff-constrained AR team). Much of the event coordination and publicity can (and should) be handled by hired guns (working under your direction, of course), freeing up an AR pro’s time for more 1:1 analyst “relationship building” activities. This is also a great opportunity to involve PR and Marketing (see below).
  • The Startup: For firms that are just entering the market, the ability to recruit – and pay for – a quality AR team may simply be beyond their means (CAPEX vs OPEX in a manner of speaking). In this situation, outsourcing the entire AR function to an outside “professional” team, under the control of a “C”-level or senior “R”-level person, may be the most cost-effective approach (especially if the level of work activity will fluctuate considerably over the first year or two).

Now let’s take a look at “insourcing” as a means to leverage in-house budgets and expertise to your advantage.

INSOURCING

As I mentioned above, AR typically competes with IR, PR and Marketing for budget allocation. Interestingly, all of these functions tend to be a bit cyclical in nature and feed off of each other: it is not uncommon to find periods where one group is more “active” than another (that is not to say that any of these groups have “time off” or have any idle time on their hands). But depending upon the situation, the best outsourced resource for AR may actually be an insourced resource in the form of IR, PR and Marketing. This type of in-house insourcing, or collaboration, is something that most organizations could, and should, benefit from if properly executed (different roles, but working to help each other out by lending their own expertise).

This is not to say that every time AR needs a helping hand it should look to an internal corporate “R” function for support, but rather that part of any company’s “R” strategy should include a dose of cross-function support. This not only helps with resource and budgetary issues, but can also be part of a much larger integrated marketing campaign (IMC) that gets the most solid, reliable results when IR, AR, PR and Marketing are all working in sync with each other (notice I’ve left out sales – that is a separate function for a different discussion). Remember that while all of the “R” functions have very different responsibilities and areas of expertise, coordination of effort is critical to the success of any firm.

THE PARTING THOUGHT

There are clearly times when outsourcing AR/Influence-related tasks can make sense – certainly the number of established PR, Marketing and Investor Relations agencies shows that this model can work extremely well if executed properly. There are also times when, due to the nature or sensitivity of the work, outsourcing may not be a viable option. But if you are a small firm, or branching out into new market sectors, outsourcing certain AR “outreach” functions can definitely work (from both an access/influence and a financial perspective). And if you are a larger, more established firm, a combination of outsourcing and cross-function insourcing should definitely be part of your overall strategy.

One important item to keep in mind in both of these scenarios is “expertise”. Before you outsource anything related to corporate “influence”, make sure that you are selecting the right person (or team) for the job. Going with the lowest-cost option is almost always the wrong approach, while overpaying for “bloated reputation” can often be a waste of time, money and opportunity.

If you are in a position (or think that you might be at some point in the future) where in-house insourcing is a viable option, make sure that there is an established cross-training program and a solid team focus in place as part of the corporate culture before you start to rely on other groups for support. And remember, if you ask for support from one of your other in-house “R” functions, don’t be surprised if you are asked to return the favor – that’s what teamwork is all about.


Buyers vs Influencers: Who really controls the deal?

My first true experience in the world of B2B marketing was in 1990, and my task could not have been simpler: market one of our new products to company “X”. While “X” was not a current customer/partner, they were very well known for purchasing our type of product, which they then integrated and resold as part of their own, much larger, product line.

Diligently, I worked my way into their organization armed with all the right information. I knew exactly what and how much they were buying, what their key price points were, who the buyer would be, who evaluated new products, who the decision maker would be and, most importantly, who controlled the funding.

my B2B had become a “B Not 2 B”

And after 9 months of chasing every lead, every opportunity, meeting at every trade show, and even managing to get their staff to do a side-by-side product comparison, I was left with absolutely nothing. No sale. No opportunity. Nothing.

Nothing, that is, except the realization that I had started my task without one necessary key bit of information: the name of the actual “influencer” who could make such a deal a reality. For lack of this name, my B2B had become a B Not 2 B.

In this case, it turned out that the influencer was the head of operations for a single customer of my target, a customer with such a significant installed base that when my target inquired about their willingness to introduce a new product (mine) into their network, they replied that while they were not totally happy with the existing product mix, they saw no value in adding a new, potentially disruptive component to their network, even if it was smaller and less expensive. It had nothing to do with price or features and everything to do with mitigating risk and not having to retrain their internal staff on a network that was “doing just fine”.

Had I known this to begin with, I would have found a way to “influence” my target’s actual customer directly. It may have worked, or it may not have. But I do know that my chances of success would have increased considerably (and yes, I do think I could have closed the deal).

In the years since, the lesson I learned has driven my business decisions in every single venture I’ve started: if you identify the key influencer in any deal, and get them to buy into your product or concept before your actual pitch to the target buyer, your chances of success increase significantly (and your time to sale can decrease as well).

As a side note, one other lesson I’ve learned is that if you can’t convince the influencer (especially if it is your target’s customer) of your value, it’s sometimes, but not always, better to factor in the opportunity cost and find a better prospect.

Flash forward to today, the era of social marketing and an ever-increasing pace of technology and product development. It’s more important than ever to understand how each individual deal is influenced, and who is involved as the key influencer, which can vary dramatically on a case-by-case basis. It may be a person within the business you are targeting, or it may be a consultant, advisor or industry analyst. Or, like my first experience, it may be a customer of your target (or a collective group of customers) who in turn may be influenced by their own C2C communications or consultants, advisors and industry analysts (industry analysts, btw, are my favorite starting point, since a really good analyst will know who the consultants and advisors are working with, as well as the requirements of both vendors and consumers in any given market).

The real challenge today is identifying the real influencer(s), as the number and type of people who can influence a B2B (or B2C) deal has grown tremendously (as has the speed with which, in the era of social networking, a deal can be influenced one way or another). As a result, an increasing number of purchasing/partnership deals that are not totally internally driven involve more than one outside “influencer” – a trend that mirrors both uncertainty in the consumer space and the fragmentation of the consultant/advisor/analyst space.

To counter this, businesses need an increased level of cross-domain collaboration, involving marketing, public relations, analyst relations and even customer support, to correctly identify and target the influencer in any B2B or B2C marketing or sales strategy.

So although the technologies and markets may have changed, the “influencer” axiom, which has been around since the first bartered exchange in human history, applies today more than ever, and is one of the first questions I always ask myself before beginning any business effort.

And if this question isn’t one of the first questions you ask, it should be.


Twitter’s Privacy Invasion (edited update)

NOTE: This is an edited excerpt from a prior post, highlighted here at the suggestion of a few readers who thought it worthy of its own individual post.

For a while now, Twitter has been testing its own t.co link shortener to shorten/wrap long URLs in private Direct Messages sent between users via their website (transparently to the user, btw – you can read more in their June 8th blog “Links and Twitter: Length Shouldn’t Matter” – a blog that is hosted on Google’s Blogger network and that I doubt most Twitter users even know exists). In a recent email to users, dated August 30, 2010, they explain that the use of t.co will be expanded to all messages, including those sent via 3rd-party applications, and that the length of the shortened URL may vary based on the application/device the receiving user is using. To quote:

“A really long link such as http://www.amazon.com/Delivering-Happiness-Profits-Passion-Purpose/dp/0446563048 might be wrapped as http://t.co/DRo0trj for display on SMS, but it could be displayed to web or application users as amazon.com/Delivering- or as the whole URL or page title.”

As primarily a TweetDeck user, the advantage to me is minimal – TweetDeck already has a function that shows you what a shortened URL expands into. There is also a “post-click, pre-connect” malware check to ensure that you are not connecting to a bad site – again, a feature that I already have in my browser.
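
For the curious, here is a minimal sketch of that kind of link expansion, assuming Python and the third-party requests library; the function name is my own invention, and the t.co link is simply the example from Twitter’s email, which may no longer resolve:

    # A minimal sketch of previewing where a wrapped/shortened link really goes,
    # similar in spirit to TweetDeck's expand-before-you-click feature.
    # Assumes the third-party "requests" library; expand_url() is an invented name.
    import requests

    def expand_url(short_url: str) -> str:
        # Follow the redirect chain without downloading the page body,
        # then report the final destination URL.
        resp = requests.head(short_url, allow_redirects=True, timeout=10)
        return resp.url

    if __name__ == "__main__":
        # Example link from Twitter's own email; it may no longer resolve.
        print(expand_url("http://t.co/DRo0trj"))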

But the way that they internally use the t.co link shortener is what causes me concern. All links, including those in private DMs (as well as those already shortened through services such as bit.ly which will be “wrapped” internally by Twitter in the t.co format), will be tracked on a per user/per click basis, allowing Twitter to create a data repository of what links you click, the type of content you are accessing – from news to product/vendor sites – and potentially who sent them. Their justification is “Twitter will log that click…to provide better and more relevant content to you over time.”

Sorry, Twitter, but I don’t need or want you to provide content to me. I follow people, not you, for content and conversations.  And I’m far from thrilled that you will now start keeping track of the links users click in the name of providing relevant content, which could be interpreted to mean anything from suggested users to follow to targeted advertising to whatever you decide is most profitable (it’s the “whatever” that concerns me, as this is potentially valuable marketing information that could be sold to/exploited by 3rd party groups).

Everybody understands that what you publicly post is public, but there is also an expectation of privacy with respect to Direct Messages that are not part of the public timeline, not searchable and not shared with 3rd-party search engines (a variation on their “protected tweets” theme). The thought of Twitter tracking content in private Direct Messages – which have become an alternative to quick email exchanges for many people – leaves me with a Facebook-like “invasion of privacy” feeling.

Will I stop using Twitter? No, its value still outweighs its disadvantages. But I will start to view it in a different light and will probably be less inclined to click on sponsored or vendor-oriented links.


Twitter: 4 Lessons to Learn about Marketing & Privacy

Last week (along with all other Twitter users) I received Twitter’s “Update: Twitter Apps and You” email. It announced:

“Over the coming weeks, we will be making two important updates that will impact how you interact with Twitter Applications”, namely 1) the anticipated mandatory use of OAuth for 3rd-party application user verification and 2) the expanded use of Twitter’s t.co link shortener as a default standard for Twitter messages.

Most of what they announced was anticipated, but their email, while informative, raised an interesting point about user privacy and was a great example of how not to get out “the message”. Here are four thoughts and lessons that I think Twitter needs to understand, all important to me in judging their progress transitioning from a disruptive startup to a viable long-term business.


1. Twitter doesn’t know how to, or can’t, reach its audience efficiently.

I manage a number of different Twitter accounts and would have expected to receive all of the emailed updates within a relatively short period of time. Sure, they have over a hundred million users, but it took a surprisingly long time for all my accounts to be notified by email (through Sept. 4th) for an announcement that was effective August 31st and partially posted on their blog site on August 30th.

Lesson #1: Announce upcoming updates before, not after, they have occurred.

2. “There are over 250,000 applications built using the Twitter API.”

This statement in the email really got my attention – 250,000 apps is a huge number. But it raises the question: “really?” I’ve searched around and can’t find any verification of the number, or a list of more than a couple thousand apps (twitdom.com lists fewer than 2,000 leading apps, and Twitter’s own “Top Ten Twitter Apps” shows Twitter.com at #1 with 78% user share and UberTwitter at #10 with only 2%, leading one to conclude that there might be just a few “dead” apps lying around out there somewhere). Additionally, I’d be very interested in the selection process used when they listed the following examples (especially if I were a Twitter app developer with competing applications):

“applications like TweetDeck, Seesmic, or EchoFon, websites such as TweetMeme, fflick, or Topsy, or mobile applications such as Twitter for iPhone, Twitter for Blackberry, or Foursquare.”

Lesson #2: If you throw out a really big number, people will want to know more. Don’t keep them guessing.

3. Twitter doesn’t understand how contradictions lead to confusion.

From a pure “information” perspective, the email was a bit confusing with some odd contradictory statements.

Example A: Their opening statement “Over the coming weeks, we will be making two important updates that will impact how you interact with Twitter applications” is a bit confusing given that:

1) Their new OAuth policy had already been put into effect as of August 31st, as they stated in their August 30th blog post: Twitter Applications and OAuth (interestingly hosted by Google’s Blogspot.com site), and

2) the expanded use of their t.co link shortener directly involves their own website as well (interestingly, Twitter counts its own website as an application, something I doubt most users do, especially when you take into account that the list of applications in Item 2 above doesn’t include their website).

Example B: The first sentence of their explanation of OAuth (which is probably now, and forever, a meaningless word to 90% of their user base) states that it allows 3rd-party applications to access your Twitter account “without asking you directly for your password”. Humorously, the next sentence goes on to state that “applications may ask for your password”. Granted, they may ask only once, but they could have phrased it differently, such as (my wording):

“OAuth is an authentication technology that requires you to provide your Twitter password only once in order to authorize a 3rd-party application to access your Twitter account. You will not be required to enter your password again for that application. Further, the 3rd-party application cannot store your Twitter password, providing you with an added layer of security (you can even change your Twitter password if you like without having to provide it again to the application).”

Ironically, their August 30th blog post (listed above) does a much better job at explaining how OAuth will work than their email did – too bad they didn’t link to it in their email, or, better yet, use the same text.
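
To make the “authorize once” idea concrete, here is a minimal sketch of the standard OAuth 1.0a flow as a 3rd-party application might implement it, assuming Python and the requests-oauthlib library; the consumer key and secret are placeholders, not real credentials:

    # A minimal sketch of the OAuth 1.0a "authorize once" flow, using the
    # third-party requests-oauthlib library. The key/secret are placeholders;
    # the endpoints are Twitter's standard OAuth 1.0a URLs.
    from requests_oauthlib import OAuth1Session

    CONSUMER_KEY = "your-app-key"        # issued to the 3rd-party application
    CONSUMER_SECRET = "your-app-secret"

    # Step 1: the application asks Twitter for a temporary request token.
    oauth = OAuth1Session(CONSUMER_KEY, client_secret=CONSUMER_SECRET, callback_uri="oob")
    request_token = oauth.fetch_request_token("https://api.twitter.com/oauth/request_token")

    # Step 2: the user approves the application on Twitter's own site, entering
    # their password there (once); the application itself never sees it.
    print("Authorize here:", oauth.authorization_url("https://api.twitter.com/oauth/authorize"))
    verifier = input("PIN shown by Twitter: ")

    # Step 3: the approval is exchanged for a long-lived access token, which
    # the application stores and reuses instead of the user's password.
    oauth = OAuth1Session(
        CONSUMER_KEY,
        client_secret=CONSUMER_SECRET,
        resource_owner_key=request_token["oauth_token"],
        resource_owner_secret=request_token["oauth_token_secret"],
        verifier=verifier,
    )
    access_token = oauth.fetch_access_token("https://api.twitter.com/oauth/access_token")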

Lesson #3: Consistency of message (especially across multiple sources) is critical to credibility.

4. Twitter tracks the links you click, in public or private messages, in any 3rd-party app.

This is probably the most significant point of the entire email update. For a while now, Twitter has been testing its own t.co link shortener to shorten/wrap long URLs in private Direct Messages sent between users via their website (transparently to the user, btw – you can read more in their June 8th blog “Links and Twitter: Length Shouldn’t Matter”). In their email, they explain that the use of t.co will be expanded to all messages, and that the length of the shortened URL may vary based on the application/device the receiving user is using, for example:

“A really long link such as http://www.amazon.com/Delivering-Happiness-Profits-Passion-Purpose/dp/0446563048 might be wrapped as http://t.co/DRo0trj for display on SMS, but it could be displayed to web or application users as amazon.com/Delivering- or as the whole URL or page title.”

While this might be a nice feature, it is the way they use it that causes me concern: the t.co link shortener also includes a “post-click, pre-connect” malware check to ensure that you are not connecting to a bad site, and the email notes that “Twitter will log that click…to provide better and more relevant content to you over time.”

First off, I don’t need the malware check (a feature that many users already have in their browser or security software). Secondly, that last statement seems to directly imply that Twitter will now start keeping track of the links each individual user clicks, whether they are in public tweets or private Direct Messages and regardless of the app (such as Twitter’s website or any 3rd-party app) – all in the name of providing relevant content, which could be interpreted to mean anything from suggested users to follow to targeted advertising to whatever.
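
For illustration only, here is a toy sketch of how a “wrap, log, then redirect” service works in general, assuming Python and the Flask framework; it is not Twitter’s actual implementation, and the in-memory table simply reuses the Amazon/t.co example from their email:

    # A toy "wrap, log, then redirect" link service, illustrating the general
    # mechanism only; it is not Twitter's implementation. Assumes Flask.
    import time
    from flask import Flask, redirect

    app = Flask(__name__)

    # short code -> original destination (a real service would use a database)
    LINKS = {
        "DRo0trj": "http://www.amazon.com/Delivering-Happiness-Profits-Passion-Purpose/dp/0446563048",
    }
    CLICK_LOG = []  # every click is recorded before the user is sent onward

    @app.route("/<short_code>")
    def follow(short_code):
        target = LINKS[short_code]
        # This is the step that raises the privacy question: the service sees
        # who clicked what, and when, even for links shared in private messages.
        CLICK_LOG.append({"code": short_code, "target": target, "ts": time.time()})
        return redirect(target, code=301)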

Everybody understands that what you publicly post is public, but there is also an expectation of privacy with respect to Direct Messages. The thought of Twitter tracking what links people click (especially in Direct Messages – which have become an alternative to quick email exchanges for many people) leaves me with a Facebook-like “invasion of privacy” feeling, and that is the last issue that Twitter wants to deal with at this point in their business.

Lesson #4: If you use the phrase “log that click” you must explain exactly how that information is used.

So there you have it. Four points that jumped out at me after reading Twitter’s latest update email. From presentation to content, this email is a borderline #fail.


Google Wave: Intentional Failure?

Given the way that Google (who can often seem to do no wrong) has pulled the plug on their Wave product, I expect that there will be a flood of Wave-related blogs, editorials and posts – most with cute word-plays in the title. So I’ll get to the point and keep this one short.

I remember the first time I waved goodbye to Google Wave – about 2 days after getting my “invite” access to Google’s latest and greatest social tool. Why? Despite the fact that I was extremely impressed (from an analytical perspective) by the elegance and sophistication of Wave, there was a practical side of me that looked at Wave and simply had to wonder “who in the world is actually going to use this?” If Wave were a wine, you could describe it as “complex, with a hidden sophistication and an alluring bouquet”. Unfortunately, it would also be undrinkable.

As a result, I never expected Wave to amount to much – the design was too complex, the competition too stiff, the user interface somewhat non-intuitive, the integration with other Google tools almost non-existent and the marketing roll-out almost embarrassing. Rather, I viewed Wave as an experiment – an experiment that reveals quite a bit about Google and their internal mindset.

From a practical perspective, Google offers many products that are extremely impressive, well embraced by the user community and add significantly to the Google aura. Gmail is a great example, as are Google Maps/Earth, Google News, Chrome, Docs, etc. Almost none of these, however, generate significant (if any) revenue for Google. In fact, the majority of their $6.77 billion in revenue for the quarter ended March 31, 2010 came from their core capability: Google Search and advertising-related sub-businesses – not from all the nifty gadgets and tools that give Google its extra edge. That’s an interesting stat for an IT company in an industry where the viability of almost every product is viewed in terms of its profitability. Loss-leaders are not the norm.

But Google is different, and has been from the beginning – take their 20% free-time idea to help foster innovation within the firm, allowing employees to use 20% of their work-week to go wild with whatever crazy (or not so crazy) idea floats through their extremely creative craniums. By anybody’s logic, this is the ultimate R&D experience of a lifetime, and a great example of why Google has no problem launching (or walking away from) “experiments” like Wave (which, by the way, is far from dead, as they have now tossed it into the public domain and will very likely find a way to use some of the core development breakthroughs in both current and future products).

When Google launched Wave, I doubt they had any expectation of significant revenue generation, nor did they necessarily view it as core to their search/advertising/content business model. Rather, I believe it was a calculated experiment – one that clearly pushed the boundaries of real-time collaborative social networking, but was always considered just another cutting edge experiment. So don’t view Wave as a failed product, but rather view it as a successful experiment that yielded Google some very interesting, and valuable, data about user technology adoption, coding and development concepts, and the value/need for product co-operation in certain product areas.

Most importantly, I think that Google clearly recognizes the value of both advertising revenue and content that social sites can bring to the brand (Google Blog is a decent example). Social Media – true “social media” – is one of the few areas that can bring both (well beyond what Google Blog could ever generate). With Wave out of the way, don’t be surprised if they take the same step in advertising and content creation as they did with YouTube and video content: buy their way in. On the other hand, they could have another project in the pipeline to dominate the still nascent social media market, but I wouldn’t count on it.

For an interesting alternative/in-depth perspective, check out this ars technica article by Ryan Paul – a very good read.


Vendor/Analyst Influence: A 3-way Street

In any given industry, there exists a symbiotic relationship between vendors, analysts and customers. Each one is vying for their piece of nirvana: the best value for their dollar spent. In an industry, such as the IT industry, where analysts play a significant role, it is assumed that they are the “influencers” in the market, but in reality, it doesn’t always, and shouldn’t, work that way.

Earlier this year, Steve Loudermilk (@loudyoutloud) and I started the #ARchat group on Twitter to discuss issues involving the Analyst/Influencer Relations industry: essentially an open forum to discuss how vendor-based Analyst Relations (AR) professionals and Industry Analysts interact (I use the term “industry” here to differentiate from financial or other types of analysts). Through the course of the year, we’ve covered many topics that have yielded some very interesting discussions.

Throughout all of these discussions, however, I’ve noticed a common thread involving “influence” and the fact that not everybody views the influencer:influencee relationship in the same manner. The most common misperception is the traditional viewpoint that the Analyst is the market’s influencer (since their role is to be a trusted advisor to their client – the Vendor’s consumer – by providing advice regarding technologies, trends, implementation strategies, etc. that “influence” their clients’ actions). However, that is an incomplete view that leaves out the more complicated relationships with Vendors and Clients/Consumers.

In this view (Perspective A), the Analyst sits atop the influence model as the sole provider of guidance to both Consumers and Vendors. Sorry, but this just isn’t how it works. Perspective B provides a bit more clarity, demonstrating that a good Vendor “educates” an Analyst about their product capabilities, and the Analyst then provides the appropriate advice/guidance to their Client (the Vendor’s Customer), who can then make their own choice based on what is right for their needs (features, budget, availability, scalability, etc.). But even this viewpoint, while better, is still incomplete.


Hopefully, as shown in Perspective B, Vendors and Analysts learn to share information in a two-way manner. But more importantly, there are, in fact, three distinct influencers in any given market: the Vendor, the Analyst AND the Client/Consumer. Perspective C shows a more complete “sphere of influence” and how symbiotic the three different groups are (note here that there is a tangential sphere of influence that exists solely within the Consumer community, a great example being Trade Associations, which tend to have their own collaborative exchanges and discussions about best practices when it comes to Vendor products & implementation strategies).

But the most complete picture of how a market sphere of influence works comes from Perspective D. In this influence model, you can see that the entire market is driven by a series of multi-directional channels of communication, where each of the three players (Vendors, Analysts, Consumers) has its own way of providing influence (in the form of information, requirements, capabilities, etc.) that gets communicated to the other two players in the market.

In this way, the best possible product offerings can be designed and deployed, giving each of the three participants what they need – the best value for their dollar spent. Note too that in Perspective D I have placed the Consumer at the top of the circle, since they ultimately control what is purchased, and their needs and requirements should be what ultimately influences the market.

Unfortunately, this isn’t always how the system works. And, of course, there are a series of other methods that dictate how information and requirements (and thus influence) are distributed through the group. But in the basic world of Vendors, Analysts and Customers, this sphere of influence is definitely a 3-way street with the Customer directing traffic.


APP-etizing Journalism with the iPad

I’ve been waiting for Apple’s iPad for about 10 years, ever since the first real “tablet” PC prototypes began to hit the market, and I’ve been logging some serious time on it since it came out — enough to say that if AT&T retains its newly announced tiered data plan structure, I’ll be in the top 2% that will take advantage of the unlimited plan.

But there are still a few ingredients missing from the media’s recipe for the iPad.

Yes, I’m impressed with the iPad. Great book readers. Perfect for email and social media sites, not to mention web surfing and tons of cool apps (even though many of them are still suffering from Rev 1.0 Crashing Syndrome). And I’m sure I’ll be equally impressed with many of the coming Droid-based dPads and Microsoft-based mPads that I’ll also buy, analyze and try to break.

But what I really like about the iPad is the device’s “concept” – it’s not a “touch-screen PC” or laptop replacement and is clearly not a true “content creation” device, as evidenced by the fact that writing this piece on my 64Gig 3G unit — without an external keypad — is like watching my 2 yr old try to unlock my cell phone (slightly amusing at first, but ultimately annoying when he figures it out and starts deleting emails).

Rather, it’s a new breed of device with a form, fit and function radically different from its bigger brother (the Mac) and its smaller siblings (the iPhone/iTouch/iPod, etc.). While the iPad is not bad for email, taking notes, social media sites, etc., it is predominantly a “content delivery and consumption” device.

With this in mind, I expected the iPad to be a phenomenal tool for getting news/analysis online. But after visiting more than 40 different “media” sites, I realized that:

  • Most “news/analysis” sites have not yet figured out the iPad’s real function or how to present information in this new X by Y format, not to mention the internal inconsistencies that abound (such as sites that routinely mix Flash and non-Flash video on a page-by-page basis, or those that offer different page layouts depending on author or subject matter — a major turn-off).
  • The iPad highlighted differences between “blogs”, “analytic” and “journalistic” sites (Mashable, btw, still comes across as a blog, CNN as more of a newsy site, the WSJ as a clear journalistic site and the NYTimes as a hybrid split personality “not quite sure” site), and
  • Nobody has yet figured out how to appropriately use different media formats to best convey their news/information on the iPad (a great example being a five-page, text-only news story that I read — I don’t remember what the story was about but I do remember it made me feel like I was sitting on a runway tarmac for five hours without a bottle of water).

Clearly there are issues with the iPad – and everyone seems quick to highlight them. But these issues are technical in nature and they will be solved (for example, fixed-size images work great on a laptop, but “tappable” thumbnails that expand are ideal for an iPad device).

Without doubt, a new type of “content creation model” or “content creator” is required to match the capabilities of new interactive, highly-mobile, media rich pad-type delivery devices.

But the most significant theme that kept coming to mind as I cruised from site to site involved the shortcomings of the individuals who were actually producing the online content — the editors and writers themselves! It wasn’t that their content was bad, but that more often than not their “content creation” approach just didn’t match up to the UI (user interface), screen size and “application-oriented” potential of the iPad.

Having spent much of my career working with businesses involved in cutting-edge technologies or thought methodologies, I really appreciate that the iPad allows for an amazing interweaving and mixture of different media content: text, graphics, video, audio, etc., in ways that you just can’t achieve on a typical laptop/PC (or mobile phone). The possibilities for innovation are endless. And that could cause a major discontinuity as technology innovators and news/analysis publications continue to rapidly change the ground rules for content delivery but leave the content creators out of the process! What makes it even worse is the current cost-cutting trend of making journalists and analysts responsible for the entire writing, graphical, editorial and publishing process.

Without doubt, a new type of “content creation model” or “content creator” is required to match the capabilities of new interactive, highly-mobile, media-rich pad-type delivery devices. Look at it this way, you would not use the same content style to write a newspaper article that you would to cover the same story from a TV news anchor desk. Two very different mediums that require two very different approaches.

Technology shouldn’t be driving how writers write, or how content is delivered, it should be the other way around.

Similarly, the iPad opens up enough possibilities that the writing style that works for a traditional web site just isn’t going to cut it for iPad-optimized sites or apps since the shift from laptop/PC-oriented websites to iPad apps is as profound as was the shift from traditional print media to laptop/PC sites.

I saw this same issue in 1997 when I started my own news/analysis firm. Our concept was to produce incredibly rapid analysis of breaking news events that would be delivered exclusively online in a user-configurable/on-the-fly format. This forced us to think in terms of concise, bullet-point, actionable content that was database-driven and flexible in its purpose. It also forced us to seek out and/or educate a different type of analyst from the traditional advisory-based analysts. Traditional analysts were thinking “PDF-based reports” while we were thinking “flexible content”. They emailed fixed documents to their customers while we let our website/database create the right content for each type of user.

There was no “right or wrong” issue here, just different approaches. But the point was clear: a shift to a new content creation or delivery model required a shift to a different type of thought process as well, and the iPad clearly represents a major shift in content delivery possibilities. Unfortunately, many existing journalists, analysts, etc. have been (or soon will be) placed in a situation where their traditional “content creation” skills need to rapidly evolve and adapt if they want their content to have meaning and “high consumer satisfaction” on iPads and similar devices.

So how do we fix this problem, given that we can’t just wait until all the current journalists and analysts retire (a really bad idea, btw)? Here are a few thoughts:

First, content “producers” must be intimately familiar with the different content distribution mechanisms. If your firm delivers content in 10 different formats or on 10 different types of devices, give all 10 devices to every author, journalist or analyst.

Second, websites need to adopt universal “site-wide” guides regarding the look and feel of information as it is presented on different devices. There is no AP Style Guide for an iPad, so write your own and let it evolve as you get more comfortable with both the device and the content creation/delivery process.

Third, as different information presentation formats are developed (and they will be, quickly, as I expect many news/analysis sites to adopt an “app” approach in place of a website approach), content creators will have to start thinking about writing their core content in ways that allow it to be easily re-purposed or distributed in different formats – not just different devices but different media, including text, podcast, video, etc. (a rough sketch of this idea follows these points).

Lastly, while there are some really great thought leaders out there educating and developing a whole new wave of web-smart journalists and analysts, we have got to get the existing group of writers and editors out there proactively involved in the technological process. Technology shouldn’t be driving how writers write, or how content is delivered, it should be the other way around. I know some very smart journalists and analysts who could really take content creation and delivery into some very interesting areas, if only they had the chance.
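
Here is that rough sketch of the third point’s “write once, re-purpose per device” idea, in Python; the story fields and rendering targets are invented for illustration and do not reflect any publication’s actual schema:

    # A minimal sketch of structured, re-purposable content: the core story is
    # data, and each delivery target gets its own renderer. The fields and
    # renderers below are invented for illustration.
    story = {
        "headline": "Example headline",
        "summary": "A one-paragraph summary suited to small screens.",
        "bullets": ["Key point one", "Key point two", "Key point three"],
        "video_url": None,  # optional rich-media asset
    }

    def render_sms(s):
        # Tightest format: headline only, truncated.
        return s["headline"][:140]

    def render_tablet(s):
        # Richer, scannable format: headline, summary, bullets, optional video.
        parts = [s["headline"], s["summary"]] + ["* " + b for b in s["bullets"]]
        if s["video_url"]:
            parts.append("Watch: " + s["video_url"])
        return "\n".join(parts)

    def render_web(s):
        # Traditional long-form web page body.
        return "\n\n".join([s["headline"], s["summary"], "\n".join(s["bullets"])])

    print(render_tablet(story))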

– Author’s Note: This piece was originally prepared for the WeMedia Tabula Rasa event held in Washington, DC. For more information, check it out here: http://wemedia.com/2010/06/14/wethink-tabula-rasa-dc-preview-thoughts/


Does Microsoft bing + Yahoo = MicroWho?

Microsoft and Yahoo finally announce a deal that should have taken place 18 months ago and won’t be complete for another 24 months. Even then, it probably still won’t be a game changing move for any of the players.

Today’s big news is the Microsoft/Yahoo deal that will finally give Microsoft access to a much larger audience for its new bing search engine and give Yahoo some breathing room to focus on what it does best (I’m not sure what that is, and personally, I found the announcement of Kodak’s 1080p, Image-Stabilized HD Pocket Camcorder just as interesting as this announcement).

That said, this deal is still significant for a number of reasons:

  • It isn’t costing Microsoft anything (consider that they were willing to pay $9B for the same type of deal just a year ago),
  • Microsoft is hungry for bing-driven advertising dollars and user information, not to mention a way to improve their struggling image (let’s face it, Microsoft is not exactly a “loved” company outside of their employees), and
  • Yahoo has been going exactly nowhere since adopting a strategy of confusing its audience with a “we can do anything you want, oh, and we also still have a search engine” approach.

But realistically, this deal will take at least 12 months to begin to take shape and at least 24 months for the full bing integration and value proposition to begin to appear (check out CNN’s take for market-share and the potential financial impact). Unfortunately, it is also a deal that will likely not have a significant impact on the dynamic duo’s arch-rival Google any time soon. For while this is a business-oriented deal, the market that they are targeting is driven not by business needs but by emotions, and that is a war that Google is in a much stronger position to win.

Why? Yahoo has adopted what can best be described as the old AOL approach of being the window to the world, with every type of content – and advertising – known to mankind on its front page. Search clearly takes a back seat to everything else available (just take a look at the next generation of Yahoo’s home page for a better example – a layout, btw, that I think is a clear step forward for their type of business). Unfortunately, the number of people who are comfortable with Yahoo’s current image strategy is not that large (granted, it is larger than Microsoft’s, which holds less than 10% of the search engine segment). In fact, many people bypass the Yahoo main/search page to get to the real value of Yahoo – pages like finance.yahoo.com, which is actually a decent site.

Google, on the other hand, has always kept their image clean, search-focused and fun. There isn’t much on the front page to distract from the fact that they are all about providing the best search experience possible (a point not lost on bing‘s marketing team as demonstrated by their zen-like home page at bing.com). Sure, Google offers a slew of apps (some of which should really scare Microsoft), but they treat those as secondary “opt in” value-adds (even though that is where the real value lies in Google’s future). The fact that more people emotionally connect to this approach when it comes to searching the web is shown in Google’s overwhelming control of the search engine market.

So while many people fear Microsoft as Big Brother (even though Google is much closer to Orwell’s vision of 1984, the one smashed in Apple’s famous Macintosh ad), emotions and “comfort” will likely remain significant factors in how the new Microsoft/Yahoo partnership plays out over the next few years – a period during which Google will continue to:

  • expand their search capabilities (both through internal development and potential acquisitions of firms like Topsy and Collecta that are pushing the state of social network and real-time searching),
  • enhance their direct access to social media networks, and
  • improve their inroads into Microsoft’s application domination through increased “open” and “cloud-based” software tools.

What do you think? Is this deal really that significant? Is it too early to call a winner? Or is it likely to be more of a move that will keep Google in check without any significant long-term impact? My guess is that it is the latter.
