Promoting a Personal Website

A personal website is all about alternating between creating content and promoting it. People create content to attract traffic, and the traffic provides feedback and motivation to create new content. The two activities reinforce one another; eliminating either one results in an abandoned website.

There is no point in promoting a website if the results of the promotion cannot be assessed. Gathering data about visitors and recognizing patterns in that data is of the utmost importance. Web-statistics packages are a great help in this task, so it is important to have a web host that provides access to a good web-statistics package.

A good web-statistics package reveals a lot about the visitors. It lists the most popular pages on the website and the links that directed visitors to them. It reports on the visitors directed to the website by search engines and on the popular search phrases. This statistics report generated by AWStats shows the kind of information these packages can provide.

There is no single correct way of interpreting and using the information provided by a web-statistics package, but traffic reports will invariably show trends that can be capitalized on. The traffic reports for this website clearly show some trends, and new content on this website reflects their influence. Even the idea of writing this article came from the observation that many visitors find this website by searching for “personal websites”.

Many people regard search engines as the most important source of visitors. There is truth to this, but focusing on search engine submissions is a waste of time. Most search engines are unimportant; in fact, the only search engine worth focusing on is Google. Google assesses the importance of a webpage by looking at inbound links to that page. If the inbound links to a webpage come from websites highly regarded by Google, and the anchor text of the links contains the keywords used in a search, Google will give the webpage a high rank for that search.

All of this means that search engine traffic to a website is almost completely dependent on inbound links. The maintainer of a website seems to have no control over inbound links but this is not true. The maintainer decides what content to put on a webpage and this choice determines the set of potential candidate websites for inbound links.

Informing the potential candidates of the existence of content on a personal website is very important. Hoping that inbound links will automatically materialize is hoping for too much. Websites such as Slashdot, OSNews, AMDboard, and many others feature stories and articles from other websites. Such websites also provide a story submission interface, and this interface is the key to informing a popular website of content on a personal website.

It is important to understand that submissions to popular sites often get rejected. Link submissions should only be made to sites for whose readers the link carries value. The article “How Intel Wrecked Itanium” on this website was submitted to AMDboard and Slashdot. Slashdot rejected the article but AMDboard featured it on its homepage. The article was very relevant to people who care about AMD but somewhat less relevant to Slashdot readers, hence the different outcomes. It is also helpful to keep in mind that some of the editors in charge of reviewing submissions reject perfectly good content for no good reason.

Finding websites that accept submissions is not a hard task. Web searches for keywords relevant to the webpage to be submitted work well in practice. Once a candidate website has been located, more candidates can be found using Alexa. It is extremely important to choose good anchor text for the submitted link. The article “Search Engines and The Art of Linking” gives some guidelines for choosing the right anchor text.

A submission accepted by a popular website leads to a burst of traffic. This is great in itself, but the inbound link from the popular website also improves the search engine ranking of the linked webpage. The article “Why Personal Websites Matter” was recently featured on Slashdot, which brought over 17,000 visitors to this website. Google considers Slashdot a very important website, and as of now, searching for “personal websites” on Google returns that article as the top-ranked result.

Creating content for a personal website is hard and demanding work. The effort required to effectively promote a personal website is a small fraction of the time needed to create its content. Not taking that time makes no sense at all.

by Usman Latif [Jan 11, 2004]

Installing Kodi on the Amazon Fire TV Stick

In this article you will learn how to quickly install Kodi 17.6 (Krypton) on a FireStick or Fire TV, as well as how to install Kodi 18 (Leia). Krypton 17.6 is the better choice for now: Kodi 18 Leia is not yet bug-free, so go with Leia only if you are not worried about bugs.

Amazon Fire TV/Stick

The Amazon Fire TV/Stick is great at streaming video and is nowadays at the top of many buying lists. It offers an amazing range of services, such as Hulu, Sling, Netflix, and Hotstar, and, most interestingly, it supports Alexa.

On a jailbroken FireStick, you can enjoy all of this content absolutely free.

Step 3: Installing Kodi 17.6 Krypton on the Fire Stick

The third step is to install Kodi on the FireStick. Kodi can be installed in many ways; here we will install it using ES File Explorer, a file manager that gives you access to all the multimedia content on your Amazon Fire TV Stick.

1) Open the ES File Explorer menu. On the left side of the screen, select the Tools option, then click on Download Manager, and then click on New.

2) A small window will appear asking for a path and a name. In the path field, enter the download URL for the Kodi 17.6 APK, and in the name field type Kodi Krypton.

3) After entering the path and name, click on Download Now.

4) When the download is done, open the file and click on Install. The installation process will begin.

5) Using the remote, click the Install option to confirm.

Once installed, go to the home screen, open Settings, go to Applications, click on Manage Installed Applications, select Kodi, and launch it.
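As an aside, readers comfortable with a command line can sideload the same APK over ADB instead (ADB Debugging must be enabled, as mentioned later in this guide). Below is a minimal sketch in Haskell using System.Process; the IP address and APK file name are placeholders, not values from this guide:

  import System.Process (callProcess)

  -- Sideload a locally downloaded Kodi APK onto the Fire TV Stick over the network.
  -- Assumes ADB Debugging is enabled on the stick and that you know its IP address.
  main :: IO ()
  main = do
    callProcess "adb" ["connect", "192.168.0.10:5555"]  -- placeholder IP address
    callProcess "adb" ["install", "kodi-17.6.apk"]      -- placeholder APK file name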

Using Kodi on the FireStick

After installing Kodi on your FireStick, it is time to learn how to use it. With Kodi you have free access to a wide range of content, such as movies and TV shows.

One thing worth mentioning: everything you stream on Kodi is visible to your internet service provider and, potentially, to the government. If you want to keep what you stream private, use a VPN. A VPN hides your identity and location from your ISP and from anyone else monitoring the connection.

One of the best VPNs I am familiar with is ExpressVPN; it provides its users with a secure, high-quality connection. It also comes with a 30-day money-back guarantee, so if you are not satisfied with its performance you can get your money back.
The VPN can be activated in a few steps:

1) Open the app store, search for ExpressVPN, and click on the matching result.

2) Click Download and install it on your device.

3) Open the app, enter your login details (email and password), and click Sign In. If you are a new user, click New User below the sign-in form and fill in the required information.

4) Finally, click the power button to connect; after that, your connection is secured by the VPN. You can also set your location by clicking Choose Location.

Enjoy your VPN-secured connection.

Once your streaming activity is private, you can stream the content you want, though streaming illegal content remains strictly prohibited. To stream movies and TV shows of your choice, you will need to install the appropriate Kodi add-ons.

Installing Kodi 18 Leia on the FireStick

The installation process for Kodi 18 Leia is the same as for Kodi 17.6 Krypton. The only difference is that Kodi 18 Leia is downloaded from a different URL.

1) Open the ES File Explorer menu. On the left side of the screen, select the Tools option, then click on Download Manager, and then click on New.

2) A small window will appear asking for a path and a name. In the path field, enter the download URL for the Kodi 18 Leia APK, and in the name field type Kodi 18.

3) After entering the path and name, click on Download Now. This downloads the Kodi 18 Leia APK to the Fire Stick.

4) When the download is done, open the file and click on Install. The installation process will begin.

5) Using the remote, click the Install option again.

6) This installs Kodi 18 on your FireStick. Click Open to launch it on your Amazon Fire TV Stick.

Installing Kodi on the Fire Stick Using AppStarter

If you were unable to jailbreak the Amazon Fire TV/Stick using the procedures above, don't panic: AppStarter is a tool that lets you install apps that are otherwise blacklisted.

Again, you will need to enable ADB Debugging, turn on Apps from Unknown Sources, and turn off Collect App Usage Data.

After completing these three steps, install ES File Explorer as described above.

You can now install Kodi 17.6 Krypton on the FireStick using AppStarter. The steps are as follows:

STEP 1: Run ES File Explorer and open the menu.

STEP 2: Click Tools, then Download Manager, and then select +.

STEP 3: A new window will appear; enter the path (the AppStarter APK URL) and a name, then click Download Now.

STEP 4: When the download completes, select Open File and then click Install.

STEP 5: Once AppStarter is installed, run it.

STEP 6: To install Kodi on your FireStick, go to Updates and click Install.

If the method above fails, don't worry; there is another way to jailbreak the Amazon Fire TV/Stick.

Installing Kodi on the Fire TV Using FireDL

This process is a little time-consuming. If the method above fails, install Kodi using FireDL.


This method also requires enabling installation from Unknown Sources. Enable it, then follow the steps below:

STEP 1: Search for FireDL on your Fire TV/Stick.

STEP 2: Install FireDL and then open it.

STEP 3: Type the download URL into the field at the top.

STEP 4: After you enter the URL, the download will start.

STEP 5: When the download is done, you will see Install on the screen. Click Install and wait a few minutes; Kodi will then be installed on your FireStick.

Installing Kodi on the FireStick Using the Downloader App

The Downloader app is essentially a replacement for ES File Explorer. Using this method, you can jailbreak the Amazon Fire TV/Stick.

Make sure you have enabled ADB Debugging, turned on Apps from Unknown Sources, and turned off Collect App Usage Data.

STEP 1: Go to the home screen and search for Downloader in the search bar. When Downloader appears in the results, click on it.

STEP 2: Click Download and install the Downloader app on your Fire TV Stick.

STEP 3: After downloading, run the app.

STEP 4: You will be asked for a URL; type the address of the Kodi website, then click Go.

You will be taken to the Kodi website, which you navigate using your Fire TV remote; a red circle on the screen acts as your mouse pointer. Scroll down and click the Android icon.

STEP 5: A new page will open. Scroll down and select ARMV7A (32-bit).

STEP 6: The Downloader app will start downloading the Kodi APK to your Fire TV/Stick. The app is about 87 MB, so wait for the download to finish.

STEP 7: When the download is done, click Install.

STEP 8: Click Install again to confirm.

STEP 9: Kodi is now installed on your Fire TV/Stick.

Enjoy the variety of content Kodi brings to your Amazon Fire TV/Stick using any of the methods above, but stay away from illegal content.

Beyond Google Chrome

Ever since Google Chrome's introduction, every other web browser has been enviously aping its user interface minimalism. Firefox has jumped on the tabs-in-the-title-bar bandwagon, switched to a Chrome-style status bar, and turned the menu bar into a single menu button. Internet Explorer has gone even further and merged the tab bar with the address bar.

Minimal user interfaces are clearly in vogue in the web browser world, but has the user-interface-on-a-diet trend gone too far?

Not at all! By the end of this article, we will work through a web browser user interface (an implementation for Firefox is provided) that utilizes no more screen space than Google Chrome’s Omnibox but packs an order of magnitude more functionality.

[Screenshot: Google Chrome's Omnibox]

Packing in more functionality is easy if one disregards usability, but the new design is an order of magnitude more usable as well. The core of the new design is a user interface widget that allows listing and searching of data from a data source, and seamless switching between data sources. The data sources may be internal, such as bookmarks and browser history, or external, such as the various publicly available search APIs. The combined list/search interface and the uniform treatment of data sources add consistency to the design and enhance usability.
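To make the idea concrete, here is a rough sketch of the data-source abstraction in Haskell (my own sketch, not the add-on's code; the type and field names are invented for illustration):

  -- Every data source, internal or external, is exposed through one interface.
  data Result = Result
    { resultTitle :: String
    , resultUrl   :: String
    }

  data DataSource = DataSource
    { sourceName :: String                -- label on the source's search button
    , runSearch  :: String -> IO [Result] -- search the source for a query
    , listItems  :: IO [Result]           -- plain listing, e.g. recent bookmarks
    }

Bookmarks, history, and a web search API all become values of this one type, which is what makes switching between sources seamless.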

Earlier, I mentioned an implementation. Below is a screenshot from the implementation. It shows the new design:

[Screenshot: A Better Omnibox? The new unified toolbar]

The design makes no modifications to the tab bar so it is not shown. It combines the address bar, the search bar, and the bookmarks bar into one unified toolbar.

Apparently, the new design has no text boxes for the user to type queries. Can you guess where the user types? Below is a more revealing screenshot:

[Screenshot: A search menu showing search results for the query “lego mindstorms”]

Search results are shown in search menus. Search menus are popup menus that contain search boxes. Search menus are hidden out of view until the user clicks a search button. This arrangement lets the user view search results in the browser interface instead of having to view them at a search engine website.

The new design eliminates the need to navigate away from the current webpage in cases where the answer to a query is embedded in the search results.

[Screenshot: A query answered directly in the search results: the definition of “schadenfreude”]

The hidden search boxes may suggest that this user interface is slower to use than Google Chrome's Omnibox, but this is not the case. The user can click a search button (or press a keyboard shortcut) and start typing immediately. This is no slower than clicking a search box to focus it and then typing.

Below is a third screenshot. It is a repeat of the search shown in the earlier screenshot via a different data source.

[Screenshot: The Shop search menu showing Lego Mindstorms prices]

The interesting thing about this screenshot is that we got to it from the state depicted in the earlier screenshot with exactly one additional mouse click. Can you see the magic of the design now? Repeating a search via a different data source is effortless. (The black toolbar shared by Google search pages does something similar, but it is limited to Google websites.)

[Screenshot: The News search menu showing Lego Mindstorms news results]

The really neat thing about the design is that it is data source neutral: it does not prefer one data source over any other. You can have Google, Bing, Blekko, Wolfram Alpha, and whatever else you want as search buttons, and the interface will treat them all equally. A search via any data source requires just one click. There is no default search engine as in the traditional browser user interface; there is no need for one.

Even better, all data sources complement one another. Failed to find something in the Bookmarks? No problem, Google is just a click away. Google failed as well? No need to worry: Bing is there, and so is Blekko. In this design, search engines shore up each other's weaknesses instead of shutting one another out.

The new design gives even niche data sources such as Wolfram Alpha a chance. A user can try them because now she knows that if the search fails a regular search engine is just a click away.

Another magical feature is the ability to turn a text selection into a search. This too takes a single button press. With this feature, a user can look up the definition of a word with just three clicks: a double click to select the word and a search button click for the lookup. The screenshots below show how selection searches work:

[Screenshot: First, select the text you want to search]

[Screenshot: Second, click a search button of your choice to search for the selection]

Want to see the next page of search results? That is easy too:

[Screenshot: Two pages of search results]

Being popup based, the new design encourages a different search workflow than the traditional workflow. Under the traditional search workflow, a user opens a search engine results page (SERP) and visits interesting links one by one. She uses the browser back button to get back to the SERP after every link visit.

The new design does not have a SERP to come back to, so it encourages a workflow where the user opens all interesting links in tabs before closing the search menu. This requires a tab focus change (a cost that can be eliminated by opening new tabs in the foreground) but saves the user waiting time on slow-loading links.

[Screenshot: Opened links shown in red; clicking a red link closes the search menu]

Another interesting feature of the new design is that it is modal. It has two modes, one for listing/searching different data sources (referred to as searching in the rest of this article) and one for browsing the web. Under this design, the browser starts in browse mode. It shifts to search mode when the user presses a search button. It stays in search mode if the user presses another search button while in search mode. Finally, it moves back to browse mode when the user hides all search menus.
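The mode logic just described is small enough to write down. Here is a sketch in Haskell (the type, constructor, and event names are mine, not from the implementation):

  data Mode = Browse | Search deriving (Eq, Show)

  data Event = PressSearchButton | HideAllSearchMenus | OtherInteraction
    deriving (Eq, Show)

  -- Transition function for the two-mode design described above.
  step :: Mode -> Event -> Mode
  step _      PressSearchButton  = Search  -- any search button enters (or stays in) search mode
  step Search HideAllSearchMenus = Browse  -- hiding every search menu returns to browse mode
  step mode   _                  = mode    -- everything else leaves the mode unchanged

The browser starts in Browse; every transition above follows the description in the previous paragraph.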

This design lets the browser reset search boxes to their default state after a user is done searching. In simple words, it lets the browser automatically clear the search boxes. Traditional browser user interfaces do not have this capability as they intermix browsing with searching.

This property allows the web browser to associate a default search with every search button. The browser presents the results of the default search if a search menu is still in its default state when it opens.

For instance, for the Bookmarks button, the default search lists the most recent bookmarks. For the search button, the default search lists the most popular pages (as returned by the Bing API) from the website the user is at.

[Screenshot: The default Bookmarks search]

The default search is not the same as autocompletion. Autocompletion suggests potential completions of the search text, while the default search presents actual search results. For many data sources, the default search eliminates the need for separate listing and searching interfaces. It greatly adds to the usability of the new design by letting one interface perform tasks that traditional user interfaces split across two.

The next screenshot takes us to the address bar:

[Screenshot: The address bar's search menu]

The new design treats the address bar as another search button with an associated search menu. This choice leads to greater consistency in the design but beyond that it solves a very nasty problem with the traditional address bar.

The traditional address bar suffers from a problem best described as peeing in your drinking cup: it uses the same text box for both output and input. It updates the text box's value whenever the current webpage changes, but also lets the user replace that value with something different. Once this is allowed, the user cannot always be sure whether the current value in the address bar is the address of the current website or a value she entered earlier.

[Screenshot: The Firefox address bar showing google.com as yahoo.com]

The new design has none of that confusion. The address of the current website always shows as the label of the address bar, and the user types in the address bar’s search menu.

[Screenshot: The address bar's search menu, with the label google.com but the search text yahoo.com]
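The fix amounts to keeping output and input in separate fields so that neither can ever overwrite the other. A sketch (the type and field names are mine, not the add-on's):

  -- The label is written only by page navigation; the input only by the user.
  data AddressBar = AddressBar
    { shownAddress :: String  -- always the current page's address (output only)
    , typedInput   :: String  -- text typed in the address bar's search menu (input only)
    }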

As an added benefit, the new design allows a default search for the address bar too. The default search for the address bar can list the most frequently typed website addresses, the most visited websites, or something equally useful.

As search menus are so important to the new design, a conscious decision was made to increase their utility by adding the ability to edit items directly in the menus. For instance, some search menus indicate the bookmarked status of an item by a star and the user can click the star to bookmark an item or to edit a bookmark. Other search menus allow the user to tag items.

In the next screenshot, we have shifted to the bookmarks data source. Notice that you can tag bookmarks directly via the tag icons to the right of each bookmark.

[Screenshot: The Bookmarks search menu with several items opened]

A user may also sort and filter search menu items in various ways. For instance, you can set up a Reading List by filtering the bookmarks data source by the unread tag and sorting it by title. This combines with the tagging capability of the search menus to let you take things off the Reading List directly. You can even add more items to the Reading List by turning the unread filter off and tagging additional bookmarks unread. Traditional user interfaces special-case such functionality, but here the experience is smooth, uniform, and seamless.

[Screenshot: Bookmarks filtered by the unread tag]

Still skeptical? Want to try the interface for yourself? If you use Firefox, start by downloading the add-on Categorize. It contains a working implementation of the design outlined here.

Notes on the Implementation:

The implementation only supports the Bing and Amazon APIs. Other search engines either do not provide APIs or their API usage terms are too restrictive. Also, the results yielded by the APIs are much poorer than those provided by the regular search engines. In fact, the Amazon API results are so poor that a search button for it is not included in the default setup. The poor results are not a problem with the user interface but a deliberate choice by the API providers to treat API users as second-class.

The implementation does not follow the design outlined in this article strictly, and many features do not work as outlined. These issues are due to the implementation being a Firefox add-on and can be circumvented.

by Usman Latif [Jul 29, 2012]

Sun Microsystems’ x86 Strategy

Over the years, Sun Microsystems has been extremely apprehensive of Linux and x86-based servers. Sun recognized cheap x86 hardware as a threat a long time ago, but the company didn't know what to do about it. Sun was afraid of entering the x86 market because margins there were not healthy, and such a move would have cannibalized its high-margin UltraSparc business.

After the dotcom bubble burst, Sun made a number of bizarre moves with regard to the x86 market, but these days Sun looks firmly committed to a game plan. Solaris x86, which Sun once planned to discontinue, has become central to the company's plans. Sun is cutting back on UltraSparc development and is readying itself to pursue life as a major x86 server vendor.

According to a report by The Register, Sun will be rolling out a number of in-house engineered Opteron-based servers and storage solutions in 2005. Another report by The Register claims that Sun is planning to sell 414,000 Opteron-based servers in 2007, and that the company is aiming for a double-digit share of the x86 server market.

Some of Sun’s x86 gains will surely come at the expense of its high margin Sparc business so the company has to compensate for that loss. But, Sun can’t expect to gain market share quickly if it doesn’t price its x86 hardware competitively. Sun is faced with two conflicting goals in the x86 market, but the company has figured out a way to extract decent margins while pricing its x86 hardware competitively.

Sun has calculated that if it generates reasonable Opteron server volume, it can get massive discounts from AMD. Sun anticipates that none of the major x86 server vendors will move away from Intel's Xeon processors any time soon. For pragmatic reasons, Dell, HP, and IBM can't afford a duplication of their server lines; IBM and HP are offering a few Opteron-based servers, but they are not seriously marketing them. Consequently, Sun can own the Opteron-based server market, and push AMD for the volume discounts it needs to make money on x86 sales.

If Sun succeeds, Dell and others will be tempted to enter the Opteron market, but by then Sun will have developed massive Opteron server design expertise, big volumes, and a complete range of Opteron server offerings. These first-mover advantages will act as entry barriers and keep Sun's competitors out of the Opteron server market.

Sun also expects that Intel will not initiate a price war with AMD. A price war would only hurt Intel's profitability, as AMD isn't making much from server processors. Moreover, none of Intel's major customers can force Intel into a price war with AMD. Intel's major customers Dell, HP, and IBM can try to pressure Intel, but that is the limit of what they can do.

Dell does get special treatment from Intel, but not in the form of better pricing. If Dell were getting better pricing, IBM and HP would have reported Intel for anti-competitive behavior. Intel simply allocates large quotas of premium desktop processors (processors in short supply) to Dell, and Dell sells these processors at good margins. The only thing exclusive about the arrangement is that only Dell could have benefited from it. Intel would have gladly duplicated the same arrangement with IBM and HP, but IBM didn't have much of a PC business, and HP likely caters to a customer base that isn't too interested in acquiring the latest hardware at a premium.

Intel will only get into a price war with AMD if Sun tries to take more than 25-30 percent of the x86 server market. Of course, Sun won’t hit that kind of market share any time soon, and the company has plenty of room for profitable growth.

Sun has its hardware costs under control, but it also needs to control Solaris development costs. According to an article by Wired, Sun spent $500 million developing Solaris 10. Solaris 9 was released in April 2002, so that $500 million covers roughly two and a half years of work: Sun must be spending roughly $200 million a year developing new functionality for Solaris.

An operating system also requires support infrastructure and maintenance, and these costs can be quite significant. Apart from the usual compilers, assemblers, debuggers, utilities, patches, updates, drivers, code reviews, and so on, Sun also has to support multiple Solaris ports. Sun is already committed to Sparc, Opteron, and Xeon ports, and the company will likely need to support an Itanium port as well. This suggests that Sun's total Solaris development costs are no less than $400 million a year.

This amount of money is no pocket change: Dell spent $464 million on R&D in all of 2003, and Dell is much bigger than Sun in terms of revenue. The development cost is all the more significant because in the x86 market Solaris will be competing against Linux, whose development costs are shared by thousands of volunteers and businesses. Right now Red Hat and Novell are charging good money for Linux, but this doesn't mean they will continue to do so once Solaris becomes a threat. The Linux development model gives Red Hat and Novell a lot of leverage in setting prices, and they will use this leverage if required.

Any attempt to duplicate the Linux development model won't work for Sun. Sun is releasing Solaris under an open source license, but this is mostly a marketing ploy to gain mindshare and win Microsoft's blessing. Sun knows Linux developers are not going to switch to Solaris in the short term, and many Linux features are maintained by commercial companies anyway.

Sun intends to combat the Linux advantage by assuring that the money it puts in Solaris yields a competitive advantage in the form of clear technical superiority over the competition. Sun will also attempt to tightly integrate software and hardware development in order to quickly bring advanced functionality to the market. But, the real key to Sun’s success will be volumes.

If Sun manages to sell millions of servers, Solaris development costs will get dispersed over the large number of units shipped and become irrelevant. Moreover, Sun will be able to make money from add-on sales, and service/maintenance contracts. Also, Solaris will displace Linux as the open source operating system of choice, and this will allow Sun to steal IBM and HP’s Unix customers.

Microsoft will be very supportive of Sun’s attempts to push Solaris as the open source operating system of choice. In the long-term a popular and technically superior Solaris will divert development effort from Linux and hurt Linux vendors. Solaris will be open source but it will still be controlled by Sun and will not pose a threat to Microsoft as Sun’s ambitions are quite limited. Sun wants to make money and is not interested in starting margin squeezing price wars with Intel and Microsoft.

Sun's overall strategy suggests that UltraSparc has no future. Sun has poured large sums into UltraSparc development in the past and is being forced to do so even now, but UltraSparc has been clinically dead for a long time. It lags in benchmarks and likely consumes more than 50 percent of the company's R&D budget. If Sun's Opteron sales take off, the company will become less reliant on UltraSparc revenue, and the incentive to keep wasting money on UltraSparc will diminish.

Sun has placed a very bold bet on x86, and the company will emerge highly profitable and competitive if it manages to execute its game plan effectively. The downside is that if the game plan fails so will Sun. In that case, Sun will get swamped by hardware and software development costs and quickly go out of business.



by Usman Latif  [Dec 20, 2004]

Why Good Ideas Fail, Part II

In recent times the software industry has come under a lot of pressure from open source developers. Open source software (OSS) tends to persevere, continuing to attack a market in spite of failure. This characteristic is precisely what success in the software business requires. Consequently, OSS is the most unsettling competition the software industry faces.

The success of Linux has emboldened open source developers, and people are now openly questioning the viability of the whole software business. Is the commercial software industry really necessary, or will the majority of software development become open source?

One would expect the software industry to innovate in the face of competition from OSS, but this has not happened so far. Microsoft is the most vulnerable company, yet it has only wasted money on research; Microsoft Research is focused on producing papers, not products. Moreover, the company has more than 50 billion dollars sitting idle in the bank, money that should have been used for product development, but Microsoft simply hasn't figured out what it wants to develop.

Software is important to users, and this importance is increasing with time; users care about software because it contributes directly to productivity. In spite of this utility, the average computer user spends more on junk food than on software. If things continue the way they are going, investors will soon be convinced that all the money the software industry was capable of making has been made, and that it is time to move on.

The software industry has to innovate as growth of the lucrative industrialized-world PC market has slowed, and software companies can no longer rely on new PC sales for growth. Innovating in the software industry is not easy, as existing software users tend to play it safe by sticking with products that have demonstrated clear productivity benefits. New and innovative products do sell, but to be successful they require patience and persistence on the part of software companies. This necessarily implies bigger investments, more risk, and a longer wait for returns.

New startups are always poorly funded and cannot be expected to break into mature software markets. However, well-funded companies capable of introducing innovative products are also taking a relaxed attitude. Instead of going the tough route of selling new software products, these companies want to make easy money by releasing endless upgrades.

Software companies believe they can out-innovate OSS products. This is wishful thinking; once an open source software project takes root, it starts improving at a pace similar to that of commercial offerings. Moreover, software upgrades suffer from diminishing returns: users care about the first few updates, but after a while additional functionality stops being of interest to them. Software companies do not acknowledge this behavior because they have managed to sell upgrades in the past. However, past experience is not a good guide in an immature industry. Products that entered the market at the inception of the software industry are only now starting to experience diminishing returns; continued dependence on upgrades will ultimately lead to disaster.

The software industry has to address two problems in order to avert the crisis situation which is developing: the industry needs to justify long term investments in innovative products, and it needs to keep OSS at bay.

If the software industry manages to address the first problem it will automatically address the second problem. OSS has become a threat precisely because the software industry has not been innovating for the last 10 years. Currently, the software industry is making most of its money from a few well understood products that are not too hard to clone. This situation has given open source developers well defined targets that are easy to attain.

The bigger problem facing the software industry is that of justifying long term investments. A sophisticated software product such as an operating system can require a long time to develop and gain acceptance in the marketplace. Such projects are inherently high risk, and can fail even at the implementation stage. Because of the long time to profitability, opportunity costs become a big issue, and the total cost of the project can run into hundreds of millions of dollars.

There does not seem to be an easy way out; OSS is not going to go away and long term investments in new products are inherently risky. Fortunately, other industries have faced the same problems and successfully risen to the challenges; there is no reason the software industry can’t do the same.

The pharmaceutical industry faces problems similar to those of the software industry, but an order of magnitude worse. On average, a new pharmaceutical drug requires $800 million and can take 15 years to develop. Once a drug is on the market, it has a limited window of opportunity to make money: the period during which the drug has patent protection. After the drug's patents expire, generic manufacturers start producing it and the product stops being a big money maker.

In both the software and the pharmaceutical industries, the primary threat to profitability comes from the eventual easy replicability of the ideas behind a product. Non-generic competition can be a threat at times, but does not alter the pricing structure of a market catastrophically.

Pharmaceutical companies understand the threat of generic competition very well. They make highly risky long-term investments in innovative products to counteract the effects of their big money makers going generic; failure to do so typically results in a massive loss of profitability. This is precisely what the software industry has to do to stay ahead of OSS (no advocacy of software patents is implied here).

Generic drugs constitute a majority of the drugs available to patients, but this has not stopped pharmaceutical companies from growing and staying extremely profitable. Pharmaceutical companies make their money by targeting lucrative market segments; thus, they are able to charge fat margins. Trends in the software industry suggest the same thing will eventually happen with software. Most of the software in popular use now will eventually become open source, and software companies will need to target market segments to make money.

In the pharmaceutical industry generic and non-generic drugs are complementary. Generic drugs play an important role, they ensure cheap availability of drugs to patients and force pharmaceutical companies to innovate. OSS seems to be assuming an analogous role in the software industry. Generic drugs are cheap but not free; generic drug manufacturers charge nominal profits on top of the manufacturing costs. Linux OSS distributions are starting to exhibit the same trend.

The profit making window of opportunity for a product varies vastly in the pharmaceutical industry, and depends on patent expirations. Software sales behave somewhat similarly but for entirely different reasons. Software products do not stop yielding fat margins abruptly, but they do stop making money eventually. In the software industry the profit making window of opportunity is a function of the popularity and implementation complexity of a product. This is a consequence of the limited resources available for OSS development.

OSS developers cannot afford to waste development effort on everything that comes along; they have to wait and watch to discover the successful products, and then develop implementations. Depending on the complexity of the product, this whole process can easily take more than 10 years. Also, a good open source implementation does not immediately kill commercial products; software migration is a very slow process and commercial products can continue to make money for quite a long while.

The pharmaceutical industry uses a lot of tricks to maintain consistent profitability. The industry uses diversification to guard against risk, it uses pipelining to guarantee a steady flow of new products, and it uses market research to direct development funds. All of these concepts are totally alien to the software industry.

Most software companies are one-product companies and have nothing in the pipeline apart from upgrades. As OSS developers get better organized, one-product software companies will find it tough to survive. Only companies with diversified product portfolios, big R&D budgets, and large pipelines will manage to deliver consistent results. Companies unable to restructure themselves will become roadkill.

Microsoft is a classic case of a company in dire need of restructuring. All of Microsoft's products are high volume products; consequently, they are high priority targets for OSS developers. The OS functionality is already generic, and MS Office is quickly headed that way. Microsoft has absolutely nothing in the pipeline to make up for the loss of revenue that is imminent. The company has so little confidence in the growth of its software business that it is diversifying out of software and into products like the Xbox.

Microsoft can only avoid the confrontation with OSS developers by segmenting the high volume markets. This is not too hard to do in the case of MS Office. For instance, the $50 spreadsheet is being used for everything from financial analysis and statistical analysis to Monte Carlo simulations; this is akin to selling cheap aspirin as a cure-all. Microsoft can easily create specialized products for the bigger segments, and make good money doing so. People will gladly pay a lot more than $50 for the productivity benefits such specialization will bring. OSS will try to match Microsoft, but if Microsoft continuously pipelines new products, the company can keep growing and stay profitable.

Microsoft's big problem is the OS market. That market can be segmented as well, but the company does not have the expertise to do so. Currently, Microsoft's OS design expertise is no greater than that of OSS developers. Microsoft needs to start four or five OS projects with different design goals immediately, and pipeline more in the future. A good chunk of these projects will fail, but Microsoft will eventually have some successful products, and the expertise to combat open source operating systems.

Microsoft might be a slow learner but it has adapted to changing market conditions in the past, and can be reasonably expected to do the same this time as well. Other big software companies will likely follow suit. This restructuring will be good for innovation in the software industry, and it will also be good for the ‘failed good ideas’ of the past.

Innovation is something the software industry simply cannot do without. The competitive nature of the industry ensures that good ideas cannot permanently die in this industry. Good ideas of the past are always waiting for smart and enterprising entrepreneurs to resurrect them.


by Usman Latif  [Mar 14, 2004]

Why Good Ideas Fail

Sometimes an excellent software product comes to market, gets rave reviews, and then disappears. This happens despite a clear need and desire for the product. The problem is not specific to any particular software domain; BeOS and Lotus Improv were both great products, but they seem to have nothing in common apart from being good ideas that failed.

Lotus Improv introduced a radical spreadsheet metaphor, and although BeOS was not as revolutionary it too brought plenty of good ideas with it. Both products received extensive media coverage and favorable reviews, but made no impact on the software market.

Why does software that everyone seems to like and want die? Surely the ideas behind the products and the implementations cannot be at fault. Ideas from such software have been successfully recycled, in vastly inferior forms, by competing products: Microsoft Excel's pivot tables were inspired by Lotus Improv and have been a big success. BeOS too has yielded many nice ideas; some of these ideas have been copied and others are being copied.

Was poor marketing the reason for the demise of BeOS and Lotus Improv? Marketing is always a factor in the success of a product; however, these products did receive extensive favorable media coverage. Better marketing might have enabled these products to do slightly better, but it is unlikely that marketing alone could have rescued them. So, what exactly did go wrong?

Software users tend to stick with software they are familiar with. This is not irrational behavior; software is complex to learn and use productively, and people prefer not having to relearn computer skills over and over again. Most people are inquisitive about new products but cannot see straight away whether these products carry any productivity benefits. Moreover, the utility of new functionality often becomes apparent only after individuals get accustomed to it.

People do adopt new software but may take a very long time to do so. Rapid adoption of new software occurs only when there are no pre-existing substitutes for it. When the PC software industry was young, many software companies expanded rapidly because their products had no counterparts. As the industry matured, software companies found it much harder to sell new ideas. For instance, Lotus 1-2-3 was the first good spreadsheet available on the PC and became an instant hit, but MS Excel took a long time to displace Lotus 1-2-3, even though it was a superior product. Excel would have taken even longer if Windows 3.0 had not come along; the popularity of Windows 3.0 changed the market in Excel's favor and accelerated its acceptance.

Good ideas fail mostly because they are not allowed sufficient time to succeed. More time allows an idea to mature, and also enables it to benefit from favorable marketplace changes. Additionally, the backers behind the idea get to learn from failure and are able to market the idea better.

The time needed for a new product to establish itself is mainly a function of the effort required of users to switch to the new product and be productive. Image viewers and the like can be instant successes, but more sophisticated applications need many years to gain a foothold. The experience of Linux suggests that a new OS can require more than a decade to gain credibility.

Lotus and Be both blundered by not allowing their products sufficient time on the market. Be was financially constrained and had to call it quits; however, Lotus acted in an incredibly stupid manner by killing the company’s only product with any chance of success.

The practice of discontinuing good ideas prematurely is widespread in the software industry. The story of BeOS and Lotus Improv has been repeated so often that innovation in the PC software industry has stalled. Software companies are convinced that users are indifferent to good ideas, and money spent on developing new ideas is a waste. The OS scene is one big casualty of this attitude of the software industry.

The lack of innovation in the OS scene is certainly not due to a lack of ideas. Good ideas are so abundant that people need not even look for them. An entire book, The Unix-Haters Handbook, is dedicated to pointing out all the things Unix got wrong. Some of the shortcomings mentioned in that book have been addressed, but others simply cannot be addressed within the framework of current-generation operating systems. Of course, no software company has the courage to explore these ideas.

Things are hardly any better in the application software market. Only recently, a new startup, Quantrix, reintroduced the ideas pioneered by Lotus Improv (see the Quantrix tour for an overview of Lotus Improv ideas) to the Windows market. For almost a decade nobody had the courage to touch an insanely cool idea.

Software companies love to blame ideas instead of management for product failures. The industry simply does not understand that ideas need time to succeed, and has a hit-or-miss attitude towards products. This attitude is a consequence of the initial success of the PC software industry. The PC software industry is barely 25 years old, and in its first 10 years many products became instant hits. This initial phase established a culture of impatience. Although the PC software market has changed and products rarely become instant hits nowadays, the old habits persist.

Only Microsoft seems to be immune to this culture of impatience. Unlike Lotus and most other software companies that achieved instant success, Microsoft went through five or six extremely lean years after its founding; this seems to have shaped the management culture at Microsoft. Unfortunately, Microsoft has largely limited itself to copying successful ideas and is not interested in radically different ones.

Apart from commercial software, another plausible source of innovation in the software industry is non-profit software development; however, non-profit development is biased towards cloning commercial software. Non-profit development is about creating the most utility for the most people, and this goal is best served by copying successful ideas. Going for radical innovation is a suboptimal use of the very limited resources available for non-profit development. Linux exists because Linus wisely chose not to be ambitious. Had Linus tried fancy ideas, Linux might not exist today.

The future of software looks bleak; Microsoft and other big software companies are unwilling to back good ideas, and non-profit developers are unable to do so. Can the software innovation stalemate be broken or is software innovation dead for good? Part II of this article will examine the issues the software industry needs to tackle in order to bring innovation back to the software scene.

by Usman Latif [Feb 29, 2004]

Minesweeper Cascade Algorithm

A cascade in Minesweeper (Windows version) occurs when a single click uncovers many adjacent squares. A cascade can expand and uncover large portions of the board. Understanding the cascade algorithm is not only interesting from a programming point of view but can also provide a slight advantage when playing the game.

Inferring the behavior of the cascade algorithm just by playing Minesweeper is fairly difficult. However, it should be obvious that a cascade occurs only when none of the squares adjacent to the clicked square contains a mine. An interesting situation occurs when all the mines adjacent to the clicked square have been flagged as mines. One would expect a cascade to occur in this situation, but it is easy to verify that none does.

Intuitively, it seems that a cascade expands by uncovering adjacent mine-free blocks of 3×3 squares. To verify this conjecture, I disassembled the Minesweeper executable. The functions StepBox, StepXY, and CountBombs turned out to be relevant to the cascade algorithm, which is summarized below:

  1. Initialize a queue
  2. If current square is non-mine uncover it and add to queue, otherwise gameover
  3. Remove a square from queue
  4. Count mines adjacent to it
  5. If adjacent mine count is zero, add any adjacent covered squares to queue and uncover them
  6. Go to step 3 if queue is not empty, otherwise finish

The description above is high level and slightly simplified compared to the actual algorithm. The actual data structure used is a circular queue, implemented with an array, with a capacity of 100. Of course, this assumes there will never be more than 100 elements in the queue; to justify this assumption, Windows Minesweeper enforces a maximum board size.

In the algorithm, step 5 looks fairly complex. The number of adjacent squares differs from place to place on the Minesweeper board: a corner square has only three adjacent squares, a square next to an edge of the board has five, and every other square has eight. It seems Minesweeper needs three cases to deal with these scenarios, and even counting the number of adjacent squares looks like quite a lot of code. So what does Minesweeper do?

The solution is quite clever. The visible Minesweeper board is the center of a larger board. For example, the 9×9 board is the center of an 11×11 board. The extra squares in the 11×11 board form a border around the 9×9 board. This allows every square of the visible board to have 8 adjacent squares, and therefore no special cases result. The cascade algorithm would break down if a non-visible square was put in the queue. This is avoided by initializing the extra squares as non-mine and uncovered. Now, these squares can never be put in the queue as step 5 of the algorithm only puts squares in the queue which are covered.
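To make the algorithm concrete, here is a small Haskell reconstruction of the queue-based cascade (my own sketch, not Minesweeper's actual code; it uses a plain list as the queue rather than the fixed-size circular array):

  import qualified Data.Set as Set

  type Square = (Int, Int)

  -- Minimal board: dimensions plus the set of mine positions (1-based coordinates).
  data Board = Board { boardW :: Int, boardH :: Int, mines :: Set.Set Square }

  neighbors :: Square -> [Square]
  neighbors (x, y) =
    [ (x + dx, y + dy) | dx <- [-1, 0, 1], dy <- [-1, 0, 1], (dx, dy) /= (0, 0) ]

  onBoard :: Board -> Square -> Bool
  onBoard b (x, y) = x >= 1 && x <= boardW b && y >= 1 && y <= boardH b

  adjacentMines :: Board -> Square -> Int
  adjacentMines b sq = length [ n | n <- neighbors sq, n `Set.member` mines b ]

  -- Uncover squares starting from a clicked non-mine square (steps 3-6 above).
  -- Filtering with onBoard plays the role of Minesweeper's pre-uncovered border:
  -- off-board squares can never enter the queue.
  cascade :: Board -> Square -> Set.Set Square
  cascade b start = go [start] (Set.singleton start)
    where
      go [] uncovered = uncovered
      go (sq : queue) uncovered
        | adjacentMines b sq == 0 =
            let fresh = [ n | n <- neighbors sq
                            , onBoard b n
                            , not (n `Set.member` uncovered) ]
            in go (queue ++ fresh) (foldr Set.insert uncovered fresh)
        | otherwise = go queue uncovered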

Another interesting detail is the iterative nature of the cascade algorithm. The queue can clearly be replaced with any container data structure; a stack is an obvious choice. Which raises the question: why didn't the implementors use recursion? Recursion would have simplified the algorithm considerably, as it obviates the need for an explicit queue. The recursive algorithm works as follows, with a sketch after the steps:

  1. If current square is a mine gameover, otherwise uncover square
  2. Count mines adjacent to current square
  3. If adjacent mine count is zero, uncover all adjacent covered squares and make a recursive call for every one of them (steps 2-3)
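In a language with cheap recursion the recursive variant is very short. Here is a Haskell sketch reusing the types and helpers from the previous snippet (again my own code, not the disassembly):

  -- Recursive cascade: uncover the fresh neighbors, then recurse into each.
  -- The accumulated set of uncovered squares doubles as the bookkeeping
  -- that stops the recursion.
  cascadeRec :: Board -> Square -> Set.Set Square
  cascadeRec b start = go start (Set.singleton start)
    where
      go sq uncovered
        | adjacentMines b sq == 0 =
            let fresh = [ n | n <- neighbors sq
                            , onBoard b n
                            , not (n `Set.member` uncovered) ]
            in foldl (flip go) (foldr Set.insert uncovered fresh) fresh
        | otherwise = uncovered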

Understanding the cascade algorithm yields some hints for gameplay as well. A cascade is more likely when the clicked square has fewer adjacent squares, which is true of corners and edges. On the flip side, with fewer adjacent squares the cascade has less room to expand, so the area it uncovers will be smaller.


by Usman Latif [Nov 24, 2003]

Digital Implementation of the Library of Babel

In the well-known story “The Library of Babel”, Jorge Luis Borges envisions a library that contains all possible books of 410 pages. The library contains every book that has ever been written, and also every book that could ever be written. The library is not limited to books that make sense: it stores all books consisting of any combination of a 25-symbol alphabet.

At first thought, a digital implementation of the Library of Babel seems impossible, as the library contains an almost unimaginable amount of content. Surprisingly, this is not the case, and digital implementations of such a library can exist.

In the digital Library of Babel, every book is stored in ASCII. ASCII allows more symbols than the 25 used by the Library of Babel, but it is more convenient to use and avoids the effort of defining a new character set.

A digital Library of Babel needs to store all the books in a data-structure and provide an interface for people to search for a particular book. To locate a particular book in the Library of Babel, searches based on titles, author names, and ISBN numbers cannot be used.

Titles don’t provide any useful information about the books in the library. The library contains gazillions of books with the same title but different contents. Exactly the same argument applies to author names as every author has written every possible book.

To be practical, an ISBN number must identify a book uniquely without contributing to the book's uniqueness. Otherwise, the library would contain many copies of every book, differing only in their ISBN numbers.

Even with this restriction, ISBN numbers are useless in the Library of Babel: the ISBN number identifying a book has, on average, to be as big as the contents of the book. ISBN numbers are practical only when the ratio of books that exist to books that are possible is very small. Suppose the world has only two books, written with the symbols a and b, and having the contents:

  abab, baaa

It is easy to identify the first book with the tag a and the second book with the tag b. The Library of Babel is not so sparsely populated. It contains all possible books, which includes the following list:

  a, b, c, ..., aa, ab, ac, ..., ba, bb, bc, ...

It is possible to identify the books consisting of abab and baaa with a and b respectively, but then the books with contents a and b must be identified with tags longer than the books themselves. On average, an identification tag as big as the book itself is required: over a two-symbol alphabet there are 2^n books of length n but only 2^n - 2 nonempty tags shorter than n symbols, so there are simply not enough short tags to go around.

Without loss of generality, it can be assumed that books in the Library of Babel have no titles, author names, ISBN numbers, or any other identifying tags. As noted above, such tags do nothing to ease searching for books and are only a waste of space.

The only choice left for uniquely identifying the books in the Library of Babel is the text of the books themselves. Specifying only partial text does not identify a book uniquely: the library is too densely populated, and leaving even one bit out of the full text results in non-unique matches. Therefore, the full text of a book is required in order to locate it.

Unlike the original library, the digital version imposes no restrictions on the length of the books it contains. The digital Library of Babel stores the books as raw text, without any formatting. Users are free to format the books as they wish after retrieving them.

The following data structure, given as Haskell code, can be used to implement the library:

  data Tree = Branch (Tree, Tree)
  treeOfBabel = Branch (treeOfBabel, treeOfBabel)

The implementation uses a circular data-structure, treeOfBabel, to store all the data in the library using O(1) storage space. Searching for a book involves traversing the correct path in the tree. At each branch in the tree the first subtree corresponds to a 0 bit and the second subtree corresponds to a 1 bit. Given a search string, the search function navigates the tree using the bits of the search string. It accumulates the bits of the result while navigating the tree and, after processing the search string, returns the accumulated bits as the result of the search. These bits can then be interpreted as ASCII text.
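A minimal sketch of this unoptimized search, using the Tree defined above and modeling bits as a list of Bool (False for 0, True for 1); the name searchTree is mine:

    -- Walk the tree bit by bit, accumulating the bits of the answer.
    searchTree :: Tree -> [Bool] -> [Bool]
    searchTree _ [] = []
    searchTree (Branch (zeroSub, oneSub)) (bit : bits)
      | bit       = True  : searchTree oneSub  bits
      | otherwise = False : searchTree zeroSub bits

Evaluating searchTree treeOfBabel (toBits "some book") returns exactly the bits that were fed in, which is the observation behind the optimization below.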

The following implementation of the search function uses a slight optimization:

    searchBabel = id  -- id is Haskell's identity function (id x = x)

The optimization is based on the fact that the user always provides the full text of the book, so the input can be returned to the user as the answer to his/her query. This optimization does not work for ordinary libraries, as they might or might not contain a given book, but the Library of Babel is assured to have every book.

The Library of Babel exists; there can be no doubt about it. The big question that needs to be answered is whether the Library of Babel is full of information or devoid of it. The perspective of a philosopher suggests that the library is as full as it can be. On the other hand, intuition suggests that the library is empty and carries no useful information. Is our intuition right or wrong?

I will examine this issue in detail, using insights from information theory, in another article.


LAST UPDATED by Usman Latif  [Nov 08, 2003]

Part 3 – Google’s Bid-for-placement Patent Settlement Cover-up

Google always had excellent search engine indexing technology, but Google’s search technology by itself never generated profits for the company. Google’s profitability comes from its search technology combined with text ads and an ad placement mechanism that allows advertisers to bid for the placement of their ads (bid-for-placement mechanism). From a profitability perspective, the bid-for-placement mechanism is as valuable as Google’s indexing technology. In the absence of the bid-for-placement mechanism, ad pricing can at best be inefficient. The bid-for-placement mechanism frees up extensive resources that would otherwise be required to set ad prices, and it allows Google to charge ad sponsors in proportion to the value Google is delivering to the sponsors.

The bid-for-placement mechanism was pioneered by Overture, a paid search specialist company. In July 2001, the US patent office issued Overture a patent covering the mechanism. Patent 6,269,361 also known as the ‘361 patent was bad news for Google: it threatened Google’s core business model. It was imperative for Google to have access to the ‘361 patent, but Google never managed to negotiate a satisfactory licensing agreement with Overture. Consequently, in April 2002 Overture sued Google over patent infringement.

Google greatly miscalculated the threat posed by Overture and was eventually forced to settle at a most inopportune time. At the time, however, Google likely believed Overture was totally dependent on the ‘361 patent and could not afford any risk of having the patent invalidated in court. Google calculated that it would win outright if the courts dismissed Overture’s lawsuit early, and that if things did not go its way, it would still manage to cut a palatable deal. Unfortunately for Google, Overture happened to have a few other options.

Overture started life as a paid listing search engine. The company was known as GoTo.com when it first began operations. Advertisers placed ads with GoTo.com and in response to queries the GoTo.com website produced a listing of ads ranked by the ad sponsors’ bids for the search keywords. The GoTo.com website never made it big, and the company was forced to pitch the idea of bid-for-placement to outside search engines. The company even changed its name to Overture to better reflect its new business model.
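As a toy illustration of the mechanism, here is a bid-ranked listing in Haskell. The types and names are invented for this sketch and do not describe Overture’s or Google’s actual systems:

    import Data.List (sortBy)
    import Data.Ord (comparing, Down(..))

    -- An ad is a sponsor, a keyword, and a bid for that keyword.
    data Ad = Ad { sponsor :: String, keyword :: String, bid :: Double }

    -- List the ads matching a query keyword, highest bid first.
    placement :: String -> [Ad] -> [Ad]
    placement query = sortBy (comparing (Down . bid)) . filter ((== query) . keyword)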

Overture had a bit more success with this new business model. Under it, Overture signed up affiliates and managed their ad sales via its bid-for-placement system. Overture recognized the ad sales of its affiliates as revenue on its books. The portion of the ad sales that went to the affiliates was recorded as traffic acquisition costs.

The paid-search market took off in 2001, and Overture prospered in the rapidly growing market. Overture earned $73 million on revenues of $667 million in 2002, but then a disturbing trend became apparent: Overture’s traffic acquisition costs started growing rapidly. Overture’s traffic acquisition costs grew from 53 percent of revenue to over 62 percent of revenue in just the last three quarters of 2002.

Even after handing over 62 percent to affiliates, a 38 percent cut of ad sales for providing access to the bid-for-placement mechanism is impressive, and reflects the revenue-generating power of the mechanism. The trouble was that the 38 percent figure was just an average, and not every affiliate was paying the same rate to Overture. Overture was growing increasingly dependent on fewer and fewer affiliates for more and more of its revenue. Overture’s smaller customers were in decline as advertisers focused primarily on highly trafficked websites. Overture’s 2002 annual report reveals that Microsoft and Yahoo were responsible for 60 percent of Overture’s revenue that year. This dependency allowed the big affiliates to claim bigger and bigger chunks of ad sales as their share of the pie.

Overture certainly had a very valuable patent in the ‘361 patent, but the patent had not been tested in court. Overture knew if either Yahoo or Microsoft walked, the company could expect major layoffs and a severely weakened bargaining position with the rest of its affiliates. Worse, Overture could lose it all, if the company suffered a setback in court.

Overture’s management realized that the ‘361 patent was much more valuable to a company not so severely dependent on a single source of revenue. Such a company could bargain better and could handle setbacks in court. Yahoo made a lot of sense, as Yahoo handled more web-traffic than anyone else. If the paid-search market grew as anticipated, Yahoo was going to generate most of its revenue from paid-search. Essentially, Yahoo would become what Overture wanted to be, but without the handicap of having to hand over vast chunks of revenue as traffic acquisition costs to affiliates. Consequently, it was reasonable to expect that over the long term Yahoo’s stock would perform as well as or better than Overture’s.

Yahoo too realized the value of Overture. Yahoo had always outsourced its search functionality to outside search engines. With Google becoming all powerful and no new contenders emerging in the search arena, Yahoo was in a tight spot. Yahoo realized that to be competitive in the search engine space, it needed access to Google’s intellectual property. Overture’s claim on the core of Google’s business model was just the bargaining chip Yahoo needed.

In July 2003 Yahoo acquired Overture for $1.63 billion. This was an expensive deal, as Yahoo’s stock was not flying very high at the time. (Yahoo’s stock has more than doubled since then.) Also, Overture did not have anything valuable apart from the ‘361 patent. Yahoo and Microsoft accounted for the bulk of Overture’s revenue, and were it not for the ‘361 patent, Microsoft would certainly have walked.

The Yahoo Overture deal meant that Google could no longer expect to cut a deal whenever things turned against it. Yahoo was now in a position to cut a very tough bargain with Google.

Google and Yahoo settled the ‘361 patent dispute in August 2004. Google disclosed the settlement in an SEC filing just before its IPO. The relevant excerpt from Google’s SEC filing reads:

… Overture will dismiss its patent lawsuit against us and has granted us a fully-paid, perpetual license to the patent that was the subject of the lawsuit and several related patent applications held by Overture. The parties also mutually released any claims against each other concerning the warrant dispute. In connection with the settlement of these two disputes, we issued to Yahoo 2,700,000 shares of Class A common stock.

At the time of the patent settlement disclosure, 2.7 million shares of Google represented roughly 1 percent of the company. Google estimated that the shares were worth somewhere between $260 and $290 million. (The estimate was based on Google’s proposed initial public offering price range of $108 to $135.) Yahoo had spent billions to corner Google so why would the company settle for such a paltry sum?

Actually, Google exaggerated the value of the shares it issued to Yahoo. Only a few days after the disclosure of the patent settlement, Google lowered its proposed initial public offering price range to between $85 and $95. Moreover, Google attempted to muddle up the math even further by jumbling together the numbers of the patent licensing settlement with the settlement of a separate second dispute with Yahoo.

In the second dispute, Yahoo claimed that a warrant it held, in connection with a June 2000 services agreement between the two companies, entitled it to 3.7 million shares of Google. Google disputed that claim and argued that it compensated Yahoo fully on the warrant account by issuing 1.2 million shares in June 2003.

Google’s 2004 annual report has the settlement value pinned down to $229.5 million, and it sheds some light on how much was paid for what. The section about the Yahoo settlement reads:

In the year ended December 31, 2004, the Company [Google] recognized the $201.0 million non-recurring charge related to the settlement of the warrant dispute [with Yahoo] and other items. The non-cash charge associated with these shares was required because the shares were issued after the warrant was converted. The Company realized a related income tax benefit of $82.0 million. The Company also capitalized $28.5 million related to certain intangible assets obtained in this settlement.

In the IPO filing the focus was on the patent dispute, but here the emphasis is clearly on the warrant dispute and ‘other items’. The attempt to fudge is there, but it is obvious the ‘other items’ do not cover the patent settlement. This is because the $201.0 million amount was expensed all at once, whereas patents have a useful life and are expensed over that life. Google recorded the remaining $28.5 million as intangible assets on its books. Patents are recorded as intangibles, so the $28.5 million is all Google paid for the patent licenses.

Why did Yahoo charge only $28.5 million for patents it acquired for $1.63 billion? The $28.5 million seems to cover Yahoo’s legal expenses associated with the patent litigation and likely does not represent any payment for patent licenses. Google must have compensated Yahoo in some other way, and the company is not being forthright about the settlement terms.

Yahoo certainly craved Google’s intellectual property, but the terms of the settlement make no mention of Yahoo licensing any of Google’s patents. Obviously, Google could not expect to get away with hiding an IP licensing agreement with its biggest competitor; therefore, it is reasonable to assume that Yahoo did not get such an agreement. But, Yahoo had an indirect way of achieving access to Google’s intellectual property.

Google’s SEC filings mention a “fully-paid, perpetual license,” but they omit the word non-revocable. Nowhere is there any mention of the patent license being “fully-paid, non-revocable, and perpetual.” It is unreasonable to expect that Google inadvertently omitted the word non-revocable, so Google’s license to the ‘361 patent has to be revocable. (Non-revocable patent licensing deals are nothing exotic.)

Now, there remains the question of which terms, if violated, would cause Google’s patent license to be revoked. Yahoo was in a very strong bargaining position, as its patents covered the core of Google’s business model, so it must have asked for something big. The only obvious arrangement that makes sense is for Yahoo to have conditioned the revocability of Google’s patent license on Google not litigating against Yahoo. Such a condition puts Google on a leash and effectively grants Yahoo the authority to use Google’s IP with complete immunity. Of course, there are other possibilities, but again there is no reason for Yahoo to let Google off the hook. Whatever Google is hiding is more than a little embarrassing.

The disclosure of any embarrassing patent licensing deal could have derailed Google’s IPO, so Google chose to cover it up. But why did Google settle for such poor terms? Why not go for an IPO without settling with Yahoo?

At the time the deal was being negotiated, Google was having a hard time selling its IPO to investors. Worse, a preliminary ruling concerning the Overture lawsuit was due, and Google expected extensive saber rattling from Yahoo if it did not settle. All these factors combined with market conditions could have derailed Google’s IPO or could have caused Google’s shares to tumble after they started trading. Clearly, some people at Google were excessively worried about the value of their options; greed was certainly at work.

Interestingly, Google had no need to go through an IPO. The company was profitable and doing quite well. The pressure to do an IPO was coming from the venture capitalists and employees. The VCs wanted to show impressive gains on their Google investment, and the employees wanted to cash out their stock options. The company could have made a better deal had it stayed private. Under this scenario, there would be no incentive to rush the deal, and Google could have held out for better terms.

Sadly, selfish interests have pushed Google to the brink of disaster. As a public corporation, Google was obligated to inform its shareholders of all significant risks to its business. Google not only failed to disclose the risks, but it intentionally tried to mislead its shareholders and potential investors. Google is certainly asking for an SEC investigation into its business practices and might have to pay fines. Worse, if Google’s stock ever takes a nosedive, Google can expect shareholder lawsuit hell. Shareholders are going to claim that Google intentionally hid potential risks from them, and they are going to be right about that.

Google is a great company but it needs new management. Managers who misrepresent information and manipulate investor expectations with an intent to profit from such actions are certainly not fit to run Google. Google’s shareholders must demand the complete dissolution of the current ineffective board of directors. (Some of the directors might be complicit in the cover-up.) New, responsible directors should be brought in to rid the company of unscrupulous managers and the culture of greed which is threatening to destroy the company.


by Usman Latif  [Apr 29, 2005]


LAST UPDATED: May 07, 2005
