Social Payments: the Future is Unified

Physical credit cards will soon be a thing of the past. Is the rest of the US startup industry ready?

The next real-world cash replacement could be powered by Facebook, Google, Apple, Square, Intuit, Paypal, or some other company hiding in the wings. There are a few obvious names in there, and then a few that will seem left-field to some people. This post isn’t about how those left-field plays could happen. I simply want to explain how the landscape is changing.

There’s a convergence happening right now between social, payments, and e-commerce. Imagine this predictable future:

You buy some coffee at Starbucks. You take out your phone and swipe it at the terminal. Your [insert phone app name here] Bucks (from here forth known as: “Phone Bucks”) are deducted from your account. Your purchase is optionally posted to your Facebook/Twitter stream. You get a highly targeted, Groupon-style notice for a Starbucks coupon redeemable online immediately. You decide to buy it using your Phone Bucks — no signing in, no additional authorizations — by clicking a button.

We’re talking about a future where your online wallet (today, known as Paypal, Facebook Credits, etc.) follows you into the real world and ties directly into your mobile phone. This represents a single unified wallet. And it makes sense. That’s the future. That’s where we are headed now. I’ve been watching this trend happen for the past few years, and it’s exciting to finally see some big players waking up to this reality. Which players are the closest to achieving this? In this order:

1. Facebook – Due to its large install base (virtually all smart phones) and an existing currency platform (Credits), they are best positioned to move into the real world. And they recently made a huge move indicating a desire to do exactly this (creating a subsidiary is the first step in buffering liabilities that come with real-world payments).
2. Square (or Intuit depending on how things play out) – They would solve this from the other direction: they have a stronger real-world presence, and moving into the digital space might be easier than vice-versa.
3. Google – They will approach this from the platform (Android) by opening it (Google Checkout 2.0) up to developers and creating an ecosystem. They also recently stole a key exec from Paypal, so you know they’re serious.

It’s my belief that any startup entering the e-commerce landscape right now needs to make sure they are thinking about this convergence. To get big valuations, I think a startup needs to not only understand these trends, but be the first to market in the new paradigm that will be coming (really soon!). This convergence will create an opportunity for new players to emerge and destroy existing leaders. All mobile startups around commerce, Groupon, Paypal, and even the advertising arm of Google are probably already adjusting to these trends. Is your startup?

Think about it.

Is PHP here to stay?

As a LAMP developer, I am starting to question the long-term viability of PHP. PHP was born during an era when knowing HTML was a valid and valuable resume bullet. Because of this, most of the “advanced” aspects of PHP — those relating to OOP functionality — were introduced only in PHP 4 and 5, and weakly at that. Additionally, new languages have since become popular that expose the weaknesses of PHP. Don’t get me wrong, I am very supportive of PHP. I just believe it’s important that people understand both the strengths and weaknesses of the tools they use. There are two main points I want to cover:

  1. PHP thread support is weak
  2. PHP OOP = Broken

The second point is rather technical, but it closely relates to another strength and weakness of PHP: it is loosely typed. More on that later.

Thread Support is Weak

True threading support in PHP does not exist. The closest thing is the pcntl_fork function, which copies the current process rather than creating a thread. This means asynchronous processing within a single process is not supported. Threading is useful in event-driven architectures (common in JavaScript) and when doing blocking operations such as network calls.

Because the forked process is a clone of the original, it shares all of the original’s resources, including database and file handles. This means the forked process must be aware of whether it is the child, and must be careful not to modify or close those resources. This encourages spaghetti code built around large logic forks (“if I am not a clone, else…”). Because of this, forking is messy and error prone. It gets further complicated when PHP is executed by Apache in a web environment. In fact, the PHP manual advises against forking under a web server:

Process Control should not be enabled within a web server environment and unexpected results may happen if any Process Control functions are used within a web server environment.

Not to mention the function is incredibly C-like in that it is very “raw” (unlike other native PHP functions and classes). This raises the barrier to entry significantly, which ultimately leads most shops to ignore the feature.
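To illustrate that rawness, here is a minimal CLI-only sketch of the fork-and-branch pattern (do_network_call() is a hypothetical stand-in for any blocking task):

```php
<?php
// Both processes resume from this line after the fork, each holding
// copies of every open resource (database handles, file descriptors, etc.).
$pid = pcntl_fork();

if ($pid == -1) {
    die("fork failed\n");
} elseif ($pid == 0) {
    // Child: do the blocking work, then exit WITHOUT closing shared handles.
    do_network_call();
    exit(0);
} else {
    // Parent: must reap the child by hand or risk zombie processes.
    pcntl_waitpid($pid, $status);
}
```

Note the manual if/elseif/else branching on the return value — exactly the “am I a clone?” logic fork described above.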

Why is all of this important? Well, at most companies, one language is selected for all in-house development. This is because cross training and hiring is simplified if everybody speaks the same language. There are a few common tasks that are unnecessarily difficult to do in PHP:

  • Asynchronous work — handing off a task, such as connecting to a remote server, to a child process and waiting for a response
  • Managing thread pools — this sort of work requires significant by-hand management of any processes spawned by the parent via pcntl_fork

The threading issue is a pain point only for processes that need to be parallelized. It is a pain most big shops live with or, alternatively, solve by introducing other languages.

PHP OOP = Broken

Because of the loosely typed nature of PHP, true, well-formed object oriented programming is broken. I know that for many PHP programmers, “Object Oriented” means putting together classes and reusing code as objects. However, that is truly, sincerely, only a portion of the point of OOP. Some of the most powerful aspects of OOP are lost in PHP’s implementation of the concept. Don’t get me wrong: these decisions were probably the right fit for the niche PHP was filling, but I don’t believe most PHP programmers are fully aware of what they are missing.

While the language, thankfully, has interfaces and abstract classes, they are woefully underused. This is due, in part, to the developer community being largely self-taught. This creates a misconception about the nature of OOP, which ultimately leads to the devaluation of its most important feature: interfaces.

I can go into why they are so important in another article, but the point is: without interfaces, true polymorphic code is impossible. Or, rather, extremely susceptible to spaghetti code and fatal errors.

In other languages (Java), code might look like this:
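A minimal sketch of what such Java code might look like (the class names and everything beyond makeSound() are illustrative, reconstructed from the discussion below):

```java
interface Animal {
    void makeSound();
}

class Dog implements Animal {
    public void makeSound() { System.out.println("Woof"); }
    public void woof()      { System.out.println("Woof!"); }
}

class Cat implements Animal {
    public void makeSound() { System.out.println("Meow"); }
    public void meow()      { System.out.println("Meow!"); }
}

class Parrot implements Animal {
    public void makeSound()  { System.out.println("Squawk"); }
    public void moveAround() { System.out.println("Flying"); }
}

class Zoo {
    // The parameter is typed as Animal, so only Animal's API is visible.
    static void handle(Animal parrot) {
        parrot.makeSound();
        // parrot.meow();       // compile error: not defined on Animal
        // parrot.moveAround(); // compile error: not defined on Animal
    }

    public static void main(String[] args) {
        handle(new Dog());
        handle(new Cat());
        handle(new Parrot());
    }
}
```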

The interface in this example defines a uniform way to access a class through a standardized API (thus the name: application programming interface). In a strongly typed language where every variable must have a type, the parrot variable is defined as an implementation of Animal. This enforces, and allows, the method call makeSound(). If Cat has a meow() method and Dog has a woof() method, they cannot be called here without a compiler error. That is because within this function, the parrot variable is defined as an instance of Animal (versus a Dog, Cat, or Parrot). As such, only Animal methods work here.

More importantly, because the compiler does this type checking, any invalid calls, such as the last one, would error and never compile. Even if the Parrot class has a moveAround() method, it cannot be called in the code above. This is an extremely important aspect of OOP: as the definer of the Animal class, I want to make it very specific how Animals should be treated (you can only makeSound()!). If a programmer tries to do something to an Animal that I haven’t defined, they get an error. If they wanted to make that last line work, they would need to use an object typecast:
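Something along these lines (a fragment, assuming the hypothetical Animal/Parrot types from the discussion):

```java
// parrot is declared as Animal; the cast tells the compiler to treat it
// as the concrete Parrot type, explicitly opting out of the Animal interface.
((Parrot) parrot).moveAround();
```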

Or by changing the function definition:
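That is, something like this (again a fragment with hypothetical names):

```java
// Narrowing the parameter from Animal to the concrete Parrot class
// makes moveAround() legal -- at the cost of the Animal abstraction.
static void handle(Parrot parrot) {
    parrot.makeSound();
    parrot.moveAround();
}
```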

But note that in this case, the user had to make an explicit choice to stop using Animal’s interface. Yes, parrot is still an Animal, but it doesn’t have to be. This, in short, helps prevent spaghetti code because it forces developers to think about whether or not they want to deviate from a particular interface. Realistically, if presented with these alternatives, a Java programmer would probably use other abstraction techniques (e.g., dependency injection) to avoid needing this method at all. However, this example was necessary to illustrate how things are done in PHP.

So how would this look in PHP? Why doesn’t the same enforcement exist there? Well, take a look at the following code, which, unlike the Java example, works perfectly fine and raises no red flags.
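Here is a reconstruction of the kind of code in question (class names are illustrative; note the Animal type hints on all three parameters):

```php
<?php
interface Animal {
    public function makeSound();
}

class Dog implements Animal {
    public function makeSound() { echo "Woof\n"; }
}

class Cat implements Animal {
    public function makeSound() { echo "Meow\n"; }
}

class Parrot implements Animal {
    public function makeSound()  { echo "Squawk\n"; }
    public function moveAround() { echo "Flying\n"; }
}

// All three parameters are type-hinted against the Animal interface.
function dailyRoutine(Animal $dog, Animal $cat, Animal $parrot) {
    $dog->makeSound();
    $cat->makeSound();
    $parrot->makeSound();
    $parrot->moveAround(); // not part of Animal, but PHP resolves it at runtime
}

dailyRoutine(new Dog(), new Cat(), new Parrot());
```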

This code works great. We have three arguments, all forced to use the Animal interface. Great. As a casual observer, there is really, truly, nothing wrong with this code. It’s a little strange, but if it’s commonly known that Birds can moveAround(), there is no problem. In fact, in most PHP shops, I will bet money that type hinting is NOT used, which makes the spaghetti about to follow even worse (read on).

Now imagine that in six months we decide to group up this code so that it takes a single array/collection as an argument. This is where things start to look like traditional polymorphic code. I mentioned spaghetti above. Let me show you why:
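A sketch of the refactor (assuming illustrative Animal implementations named Dog, Cat, and Parrot, where only Parrot defines moveAround()):

```php
<?php
// The three named parameters become one collection. The strongest
// type hint available for it is plain "array" -- element types are unchecked.
function dailyRoutine(array $animals) {
    foreach ($animals as $animal) {
        $animal->makeSound();
        $animal->moveAround(); // dies on the first element that isn't a Parrot
    }
}

dailyRoutine(array(new Dog(), new Cat(), new Parrot()));
// PHP Fatal error: Call to undefined method Dog::moveAround()
```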

Wow, look at what we just did. A harmless piece of PHP code from six months ago completely breaks when you refactor it to use a fairly typical design pattern. More importantly, unless I add even MORE code to do type checking, there’s a chance the makeSound() line will actually die with a fatal error if, for example, a string is passed in as an element of the argument array! See the example without Parrots:
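For instance, a sketch of how a stray non-object kills the script even when no Parrot-specific methods are involved (Dog and Cat are the same illustrative Animal implementations):

```php
<?php
function dailyRoutine(array $animals) {
    foreach ($animals as $animal) {
        // No type check: a string element reaches this line and kills the script.
        $animal->makeSound();
    }
}

dailyRoutine(array(new Dog(), new Cat(), "oops, a string"));
// PHP Fatal error: Call to a member function makeSound() on a non-object
```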

PHP is extremely flexible when it comes to hacking out a page, but when it comes to OOP, it’s about as brittle as you get. Refactoring is painful and error prone, and elegant design patterns like the ones you might see in a message-passing language such as Objective-C, Scala, or Erlang don’t work. Yes, by using functions such as method_exists() and is_object(), I can emulate the desired behavior; however, the extra code means more places for bugs and less time spent making the program do what you want it to do. The point is that the OOP constructs in PHP don’t fully work, and as a result, certain very important aspects of OOP don’t translate well to PHP.

Some people may still cling to the notion that “ultimately, you can still do it, it just requires more code!” But I argue that preventing “more code” is the exact reason OOP was invented. By writing more boilerplate error-checking code, we are wasting time. The issue is exacerbated by the fact that the error-checking code isn’t required, unlike, say, throwing exceptions. It isn’t immediately obvious in that last example that you need to check is_object() on the $animal variable. It’s these kinds of oversights that really damage PHP as the code base gets larger.


What I’m realizing is that PHP isn’t meant to scale. Yes, it can take a lot of web traffic, but that’s not what I mean. I’m talking about scaling in the sense of growing team size and code base. The design of the language promotes coding paradigms that ultimately damage the code base, because PHP makes it harder to use good OOP practices on legacy code. To illustrate:

  • PHP became popular because it is easy to hack things out, even when that means doing it the “wrong” way. These problems come back to bite you when the code base grows.
  • PHP can’t support a large development team as effectively because its weak typing allows core OOP principles to be sidestepped (see above)
  • PHP allows invisible future bugs (see above) to be inserted without any immediate cause for alarm
  • As applications get complex and require threading or distributed processes, PHP fails to keep up (so other languages get used)
  • Because PHP does not use dynamic dispatching (message passing), calling a method can cause runtime FATAL ERRORS (unacceptable and very hard to debug!)

All of this makes me rethink the popularity of PHP. There are some new languages, still in their infancy, that pose a threat to PHP’s current dominance. I believe that in the next few years, as today’s systems become “legacy,” today’s newcomers will finally be production ready. At that point, we might see companies adopt the newer languages, which will support more modern programming paradigms. We are seeing this today with Ruby, for example.

Of course, I could be wrong. I once told people that PHP was “C of the web.” It’s possible it’s here to stay forever, despite all of its flaws. And, for the record: I do not believe Python or Ruby will be the language that will overtake PHP, but that’s for another post.

I just want everybody to know that I am a PHP developer, so I speak from experience. We should recognize that technology changes and evolves, and it is important that we constantly update our skills to ensure they don’t become obsolete. I’m just pointing out that perhaps PHP isn’t as timeless as C (or, possibly, Java).

Lastly, I will plug my personal belief that being “religious” about a language because it is “the best” is short-sighted. New languages are born, literally, every week. It’s only a matter of time before a language comes along that does what your language does more elegantly, faster, and with less code.

Only time will tell. 🙂

On the Web 2.0 Bubble

Everybody, listen. There’s a Web 2.0 bubble right now. I know it’s difficult for some people to acknowledge, and many people may even casually agree with me without actually believing the statement in full. But it’s true, and the quicker you realize this, the better it will be for your pocketbooks.

Lately, I’ve been doing stock trading, and have come to learn first-hand about the energy and commodities bubbles that were slamming the market. When that bubble was going crazy, it helped deflate the banking bubble, which was a direct result of the housing bubble. And in many ways, the housing bubble was a result of the dot-com bubble bursting, as people exiting the stock market searched for a new investment. Everybody in the web industry likes to think they are wise to bubbles because they learned their lesson in the dot-com boom. But it is increasingly evident that this is not the case.

An Example Exercise

The problem here is that people are approaching this with the mind set of “this will be somebody else’s problem after I sell it.” It’s important we try to figure out what happens to the eventual owner of the startup.

  1. Take your favorite Internet 2.0 company. Decide how much you think that company is worth. $5 million? $10 million? $50 million? $500 million? The sky is the limit!
  2. Imagine now that you are going to trade your life savings for a minority chunk of the company at that valuation. If the company doubles in value, so do your savings, but if it goes under, your savings are wiped out.
  3. Remember that number you threw up there in step #1? You aren’t allowed to cash out ANY PROFITS until the company is sold to a buyer.
  4. Your startup may not sell until it has reported a yearly net revenue of 10% of your purchase price.

That last part is the key because it effectively stops the hot-potato game and forces you to examine whether the company is truly viable. Some people would call that an unfair restriction, but I will show you why it is the key to understanding why there is a 2.0 bubble.

Defining the Bubble

Let’s take a second to define a bubble:

An investment yields a return, much like a chicken produces eggs, a savings account produces interest, and a farm produces crops. This return is not always immediate, and is not always in the same terms as the input. It is also almost a law that returns are proportional to risk (some investments have negative returns). But ultimately, it is called an investment because it will, usually and eventually, generate more value than what you put in.

Now consider an investment that does not create a return. Such an example would be the web stocks of the dot-com boom. Back then, fundamentals like earnings, operating margins, and profitability were ignored when evaluating a stock. Companies that bled millions of dollars a year saw their stocks rising at record levels. This is because the investment – the stock – was being traded to somebody else for a profit because that next person believed they could trade it for an even higher profit.

A bubble is defined as a trend where merely owning something long enough to sell it is profitable. It is a giant game of hot potato. Everybody is essentially a middle man between the original owner and the eventual owner — adding to the price tag at every step. Eventually, people wise up and no longer want to trade the hot potato, causing the bubble to burst.

And most importantly, let’s define a bursting bubble:

A bubble is defined as bursting when the value of the traded item reverts to its true market value.

Understanding the Exercise

So let’s talk about the exercise again: if you thought the company was worth a paltry $50M, then your assets are stuck inside that stock until the startup can earn $5M in revenue AND be profitable while doing so. Why did I pick such a restriction? Because those are reasonable assumptions when buying any other type of company. Why would another company offer to buy the startup if it failed to produce respectable revenue?

Given this extremely reasonable reality-check requirement, would you want to tie your personal investment to the startup being able to produce a profit? If the startup you chose has revenues and is profitable, then this article doesn’t apply to you. =)

Speculation Should Still be Grounded on Fundamentals

People aren’t investing in what 2.0 companies are worth today; it’s all about tomorrow. I agree that tomorrow’s profits should be factored into today’s valuations, BUT isn’t this reasoning eerily similar to the reasons people gave for buying over-priced houses and profitless dot-com stocks? Both were purchases made while completely disregarding the fact that the *current* valuation of the item was negative.

But since that day of profitability is so far away into the future, you end up playing a giant game of corporate hot potato. Most people would agree that a profit of 10% is far better than a loss of 90%. So as soon as you find a sucker to pay 10% more than you paid, you bail. And of course that guy who bought your stake is thinking the exact same thing — sell this to somebody else for 10% profit before something bad happens. That’s a bubble, my friends.

In Conclusion

In 2001, the bubble was all about going IPO so that the general public could hold the hot potato.

Today, the bubble is all about selling to a big corporate entity that will hold the hot potato.

There is no difference.

If you are currently thinking about entering the 2.0 scene, think carefully about what your end goal is. If it isn’t “to be profitable”, then it’s likely just another bubble startup that will become completely worthless once the bubble pops. And believe me: given our current economy, that bubble is going to pop in the next year or two.

Finally, an extremely interesting six-part speech about bubbles was given in 2006 (it gets good around part 2).


Google’s Real Goal Behind All Their Free APIs

Ever wonder why Google gives away so many web-developer tools — tools that otherwise seem like complete money-and-bandwidth-pissing schemes? Notice how most of them don’t directly show ads.

This is all about obtaining browsing behavior in a long term bid to increase ad efficiency. Nothing else.

  1. It is not about making things more “open”
  2. It is not about making web development easier
  3. It is not about making an online operating system
  4. It is not about competing with Microsoft
  5. It is not about making the Google brand more ubiquitous
  6. It is not about showing ads in new places

If any of these above things happen, they are a (likely planned) side effect. For example, if a particular API makes something easier, that is good because it will encourage other developers to adopt it as well. But as I will explain shortly, the commonly held beliefs about Google doing Good or Google making the web more open are simply not the reason for these initiatives.

If you notice, all of their APIs use JavaScript. This means all of their APIs can note which computer a given request is coming from. This means that, on top of your search preferences, they can eventually begin to correlate your browsing habits based on the sites you visit that use Google APIs.

For example, if my blog were to use a YouTube embed, it would be possible for Google to read a cookie originally placed on your machine by YouTube and correlate it with traffic coming from this site. This means they can uniquely track every YouTube video your computer has ever watched since the last time you cleared your cookies. YouTube is just an example; most of Google’s APIs are far less obvious to the end user. For example, the unified AJAX libraries could be used by a good half of the “2.0” web sites out there without hurting performance (in many cases they would make the sites load faster for the end user). But because everything goes through Google, it’s possible (although I’m not saying they are doing it) for them to track which sites you visit.

If this isn’t extremely valuable information, I don’t know what is. Don’t forget that the AdSense API is, in itself, a means for Google to track every website you’ve ever visited that uses AdSense, and a way for Google to know exactly which types of ads interested you in the past. Once they know what sites you visit, they can surmise what a given site is about, and then determine, for example, what sort of products would interest you.

It’s the classic advertising chicken and egg problem: If I knew what my customers wanted, I could sell it to them, but they won’t tell me.

…And Google found the chicken. For the time being, they haven’t started using this information (at least noticeably), but I am sure they will as market forces move to make competition in that area more necessary.

Say goodbye to privacy. =( Oh wait, I’ve been saying that for quite some time now.

Falling Dollar Kills Off-shoring

For anybody following economic news, the US dollar is in shambles. Initially, I thought this would bring about a depression in the IT sector and possibly a bubble collapse. Don’t get me wrong: I believe there will be a softening in our industry due to things like the crashing housing market (think of all the Adwords purchases that will evaporate). However, another unintended consequence is that off-shoring (the practice of hiring foreign programmers), which was all the rage 2 years ago, is becoming impractical.

As evidence, check out the exchange rate between the Dollar and the Rupee: it’s down 15%, with the vast majority of that drop coming in just the past few months. We can sit here and argue about whether the drop will continue, but we can probably agree that the dollar isn’t going to rise back up anytime soon.

Dollar vs Rupee

This sort of exchange-rate suicide means off-shoring practices are now 15% more expensive across the board compared to 24 months ago. Any company doing this has more and more reason to abandon the practice, especially considering the added cost of managing remote, non-native English-speaking employees. Considering things like term contracts and the threat of a further falling dollar, the practice actually becomes quite dangerous. Just think: you sign a $3M two-year off-shoring agreement only to have the dollar drop 10% the next year — suddenly you owe an extra $300K.

So aside from all the arguments about money flowing from the housing market back into IT in the form of capital investment (rather than through ad purchases), we can also safely assume that in-house IT operations themselves will become more valuable in 2008. 🙂

How the Housing Bubble Pop Will Burst the Internet Bubble too

I follow the tech news as closely as ever; it’s just that I think I’m at a point where all news feels like “old news” by the time I have a free moment to write… 🙂 But I digress.

Recently, there’s been horrible turmoil in the financial markets due to the housing bubble’s collapse. It’s really beyond the scope of this blog to discuss this matter, but I believe the turmoil isn’t over for several more months. That said, I’ve wondered how it will impact the Internet 2.0 bubble that currently exists.

Unlike the previous bubble, this bubble is definitely fueled by results. As in, companies that are producing revenue are the ones that are becoming successful. But the big problem with today’s bubble – and why I refer to it as a bubble – is that the success is circular. If you cut any part of the circle, all other parts are adversely affected. It’s just like how in the housing bubble, everybody was making money because all prices were going up; success of one neighbor helped fuel the success of another.

Did you know that 37% of all online advertising dollars are spent on financial services such as mortgage loans? There are thousands of blogs, web services, and web applications out there that exist solely for advertising dollars. You might say, “That’s nice, but my company doesn’t deal with financial service ads!”

You are right. I would venture a guess that far fewer than 37% of web searches and web sites are financially oriented. However, if those advertising dollars drop, it chills the advertising market as a whole. If financial-services advertising dollars dropped 30%, for example, we could be talking about a double-digit revenue drop (11%) for companies like Google and Yahoo. But it doesn’t stop there.

Once news of a weakening economy starts to sink in, followed by news that Google and Yahoo fell far below sales projections, venture funding begins to cool. Right now, we are seeing the reemergence of dumb investments straight from the bubble 1.0 playbook. These types of investments will be the first to die off as risky investing is highlighted by the other crash. Certain breeds of “go until a fool buys us out” startups will die.

All of this relies on online advertising somehow growing less effective. If online advertising were to become even more effective, a correction would not occur. So I close the circle with the other secondary effect: consumer spending.

As consumers begin to spend less due to a cooling economy, that definitely affects Internet ads. Internet shopping is the epitome of impulse buying: shopping is as simple as two clicks and typing in your credit card number. But these types of purchases are the first to die off when consumers feel a pinch.

Some of the largest sites in the world are still only small companies when compared to equally ranked brick-and-mortar businesses. And believe me, their equivalently ranked real-life counterparts turn a much larger profit. And because the Web 2.0 bubble is circular, any slowing in one sector drags down the rest. That is because the bubble is driven by speculation. There are probably 100 or more funded video startups out there, but most, if not all, are just running losses.

Everybody is speculating that these will become the next big thing, and thus dumping cash on them. Wait, isn’t that exactly the same thing as what happened with all those homes?

In a speculation driven market, a correction is inevitable. It’s exactly how the housing market got to where it is now.

Facebook’s Take on the iPhone: Way More than 1%

Facebook has a polling feature they introduced a few weeks back. They allow people to vote on polls that show up on their home page if they match a certain demographic, as set by the poll creator. I just saw a poll titled:

Are you going to get an iPhone?


The result of the poll: 7% said Yes. This is significantly higher than the 1% Apple is supposedly hoping for. It is especially amazing since the vast majority of college students are probably way too broke to affirm any intention of buying an iPhone.

The 35-49 age group answered Yes at 18%, accounting for half of the people affirming an intention to buy an iPhone. The 18-24 and 25-34 age groups each had about 7% Yes responses. Males dominated the Yes votes 3:1.

Yes votes jump up another 8% across the board if AT&T loses exclusivity on the iPhone (people really hate AT&T). Interestingly, 63% voted No, regardless.

The poll sampled 1,000 users who were self-selected, and is by no means scientific. Still, it’s an interesting bullet point for future assessments of Apple’s success.

iPhone: Which Missing Features Will We Download Later?

Why doesn’t the iPhone have chat, voice dialing, or MMS?

In a time when chatting is as important as the phone feature itself, why would Apple omit it? Given that Apple has iChat, why not have it installed on the iPhone by default? Why would a phone that has no physical dial pad lack voice dialing? And since the iPhone has a camera, why can’t I take a picture and send it via MMS (picture message) when virtually every other camera phone can?

The answer: they ran out of time.

There’s a reason Apple doesn’t talk about their products before shipping, and that is because they can miss their targets and nobody would ever know. From the very beginning, icons have been moving around on that ever-famous iPhone interface. It seems there is now still room for four more icons on the final version coming out tomorrow (June 29, 6pm local time).

So which of these features should we see arrive in the coming months via an iTunes sync update? Not chat or MMS. Apple may save those features for version 2 of the iPhone (which, by my guess, would come out in late 2008). Picture messaging and chat aren’t a big deal: you still have email, which replaces MMS, and chat can be done using text messages or web-based chat programs like Meebo.

But I think voice dialing will come out later in a software patch. There are numerous problems with not having voice dialing on a touch screen device, most of all because dialing while driving becomes impossible — or extremely dangerous. At least with a physical keypad, most people can get to the call without looking down, but on an iPhone, it’s impossible to do without looking. This is one of the biggest usability requirements that has not been met on the iPhone, and because it involves the safety of the public at large, I’m pretty sure it will not be a feature they hold until version 2.

As an addendum, I also expect to see games come out perhaps in January. We probably won’t see songs as ringtones soon since that is a “political” issue (as in, carriers will hate the iPhone if it starts chewing up one of their cash cows). And flash support will likely continue to evade the iPhone since it was omitted for power consumption reasons — that is, until Adobe works something out with Apple, which they will.

Google Buys GreenBorder – Maybe for Google Pack?

Google made a strange acquisition today: GreenBorder, a security application that sandboxes browsers. Basically, it ensures that when you close your browser, you destroy any potentially malicious viruses and trojans along with your session. It is a “sandbox” in that stuff that happens in your browser stays in your browser, keeping the rest of your system safe.

This is weird on multiple levels:

  • It has nothing to do with Google’s core business
  • Computer security is a highly competitive and saturated market, leaving little room for massive unchecked growth
  • Browser security is a tiny niche of computer security and leaves other major Internet-based attack vectors open, especially email clients like Outlook.

One potential use for this application is the ability to keep competing search engines from changing a user’s default home page. This normally might exist to fight off spyware, but one can see the additional competitive edge.

Google’s aim may be to make the Internet experience much safer, but I can’t see this gaining wide adoption unless they give it away. Google Pack currently includes Norton and an anti-spyware scanner, and a browser-centric solution would be a cozy fit: it would keep the computer clean where the scanners may have failed. Thus, in my eyes, this is all about bolstering Google Pack’s software offering.

Google Pack is a free software suite given away by Google: a collection of free tools such as Firefox, Skype, and Picasa.

Google Pack is an important pawn in Google’s strategy. By giving away the software suite, Google is also increasing the market share of:

  • Firefox
  • Google Desktop
  • GTalk
  • Google Toolbar

These are all very important components in keeping Microsoft at bay. So my prediction is that we should see this (or a re-branded version of it) in Google Pack within six to eight months.

Update: TechCrunch has its own theory that this purchase is about enhancing Google Desktop.

Why Microsoft Should Abandon Its Monopoly Ideology and Restructure Itself

Microsoft has a monopoly ideology that emphasizes total proprietary integration above all else. For the past two decades, this was Microsoft’s greatest strength, making it nearly impossible for competitors to gain a foothold. But now, as open source gains steam and the Internet makes interoperability a requirement, I think this ideology will be Microsoft’s biggest hurdle in the next decade. Microsoft has a lot of great products: Office, .NET, Visual Studio, MSSQL, IE, IIS, etc. But all of these products share a common theme of trying to stamp out established open standards.

This puts Microsoft at odds with the open source community and many governments. If Microsoft were new to the fire hydrant market, it would be mass-producing fire hydrants that required Microsoft Certified Wrenches to open. It is no wonder governments are wary of continued reliance on Microsoft’s closed standards.

Microsoft uses closed architectures to lock its customers in, and this will stop working soon. It’s difficult to stop using Outlook when all of your emails are in it and you can’t figure out how to get them out. It’s impossible to stop using IE when all of your web sites require ActiveX. This sounds like a great win-win for Microsoft, but I am saying it will actually hurt them in the coming years as people realize that many of the technologies Microsoft offers are becoming commodities.

Word processors are finally maturing to the point where new features aren’t worth paying for an upgrade. Open source database alternatives such as MySQL are finally catching up to the big boys and are used in many places, including Google. Linux is already one of the most popular web server operating systems. And Firefox, a popular open source browser, has gained major usage share in Europe.

The Proposal

A few years ago, Microsoft was nearly broken up as a result of the infamous antitrust lawsuit. People believed Microsoft would be split into three divisions: Windows, Office, and Everything Else. That didn’t happen, but looking at what could have happened, I think we missed a grand opportunity. So did Microsoft. Microsoft’s blind ambition toward total user control has caused its products to corner themselves into a “Windows-only” box.

Microsoft should restructure itself for more autonomy between its subdivisions, with less reliance on other Microsoft technologies. That is, each component of Microsoft should try to sell its services to as wide a market as possible. The products that stand to gain the most from this are:

  • Visual Studio / Windows Server
  • Office
  • Windows itself

On the flip side, the Microsoft product most threatened by this is IE, but that assumes IE doesn’t make some predictable improvements that should follow if things go as I predict. Allow me to explain.

Visual Studio / Windows Server

What needs to happen is simple: all of Microsoft’s web development frameworks should stop assuming or requiring that the client is running IE. That is, they need to produce 100% standards-compliant HTML and JavaScript, and avoid proprietary hooks such as Silverlight or ActiveX. With that assumption destroyed, you remove one of the top reasons some companies avoid the ASP/C# development environment: their website breaks in some clients’ browsers. The other reason, of course, is the need to run it all on Windows machines.
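To illustrate what “assuming the client is IE” meant in practice: frameworks of this era often emitted different JavaScript per browser, because IE used the proprietary attachEvent API while everyone else used the W3C-standard addEventListener. The sketch below is hypothetical Python, not actual ASP.NET output; the function name and structure are mine, but the two JavaScript idioms it emits are the real historical divergence.

```python
# Hypothetical sketch of per-browser code generation in a server-side
# framework: emit a proprietary IE-only script hook for IE clients,
# and standards-compliant DOM calls for everybody else.

def render_click_handler(browser: str, element_id: str) -> str:
    """Return the JavaScript needed to wire up a click handler."""
    if browser == "IE":
        # Proprietary IE path: attachEvent and the 'on'-prefixed event name.
        return (f"document.getElementById('{element_id}')"
                f".attachEvent('onclick', handler);")
    # Standards path: the W3C DOM addEventListener call.
    return (f"document.getElementById('{element_id}')"
            f".addEventListener('click', handler, false);")


print(render_click_handler("IE", "buyButton"))
print(render_click_handler("Firefox", "buyButton"))
```

Every such branch is code a framework must generate, test, and maintain twice, which is exactly the complexity that goes away once the framework can assume a standards-compliant client.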

While I don’t like the fact that ASP, .NET, and other server-side technologies require Windows, I am willing to set that aside for a moment. Certain operating system hooks make this pairing very convenient for Microsoft. So while it is in their long-term interest to decouple their server-side development environments from Windows, I won’t advocate that here; it’s simply unreasonable to expect for the time being. The time will come, however, when it will be in their best interest to make their server-side technologies work on other systems as well. Because of this coupling, we should see a huge spike in demand for Windows Server as developers become more open to using .NET, and wider acceptance of ASP/.NET also means greater demand for Visual Studio.

Moreover, since the web involves strangers that may not use Windows technologies, lacking support for non-IE browsers is a major weakness in Microsoft’s offering. Fixing that weakness seems like a logical move to me. 


Office

Office needs to support open standards. The reasoning here isn’t a plea from someone hoping for free alternatives; it’s about winning new customers. In some countries, Microsoft has a stranglehold on the market. But where Microsoft’s presence is small, it is increasingly common for people to use cheaper or free alternatives. A lack of support for open standards gives those governments a strong reason not to adopt Microsoft Office: they don’t want to edit someone’s file and break all the formatting, or otherwise render the file into an “MS-only” format.

The biggest market for Microsoft isn’t the current market: it’s the upcoming one. Office is still far, far, far ahead of its competitors in terms of integration, stability, and ease of use. The only component missing is the interoperability with open standards. If it gained this feature, I am confident Microsoft would stop bleeding away potential new customers (i.e., governments).


Windows

Windows is insecure, right? Or is it IE? Or is it Outlook? To the average consumer, all three are insecure. But in reality, it’s the integration of these three elements that created one of the worst computer security nightmares in history. Because each trusted the others absolutely, compromising one part meant the other two were infected as well. This is how people would get a virus by clicking on a link, how an email would install IE spyware, or how viewing a picture would turn a machine into an email-spam bot.

Microsoft has taken recent steps toward isolating these components, but the root problem remains: IE and Outlook are designed to use features of the OS that most applications, short of your virus scanner, do not. To fix this, Microsoft should make its applications play by the same rules as everybody else. That is, its OS shouldn’t give special exceptions to IE or Outlook.

By forcing each component to work completely blind to the others, you increase security. This improves the overall image of each component, especially Windows. It is a long-term investment.


IE

IE doesn’t necessarily lose here, but its importance to Microsoft changes. Microsoft shouldn’t care much about the browser except for its tie-in to Live Search. It is no longer a vehicle for forcibly driving adoption of Microsoft technologies such as ActiveX. In the new age, it’s becoming clear that IE must support the standards; with the changes I suggest here, that becomes a requirement.

Microsoft had a lot to gain by having IE undermine standards, because it helped keep the IE + ASP + Windows Server triad in place. But since that scheme is now falling apart, IE sticks out like a sore thumb.

Supporting standards has benefits on multiple levels:

  1. Developers are itching for the day IE dies. IE is currently hated by developers because it makes web programming take twice as long: once for IE, and once again for all other browsers.
  2. It reduces the complexity of the code generated by Microsoft’s server-side technologies. That code would only grow more complex as new Microsoft-only versions of technologies come out.
  3. IE stops breaking on standards-compliant web sites (yes, this actually happens).

In a standards-compliant world, IE has little to offer over browsers such as Firefox or Opera. But Microsoft stands to lose IE’s entire market share if it continues to ignore standards, since governments may begin standardizing on the established open standards.

IE would become all about promoting Microsoft’s web properties (Live) rather than controlling web standards. 

The Problem

Right now, Microsoft is on cruise control. They’re happy with their position, and they don’t want to sacrifice it.

In the coming years, governments are the most likely to jump off the Microsoft bandwagon. If governments begin valuing open document formats, just wait until they realize their web sites share the same problems as their documents. All of those government web sites that currently support only IE will begin to switch away from it entirely. Once this landslide begins, it’s too late for Microsoft: demand for its web development languages slumps, and the entire virtual market built around its proprietary .NET libraries falls apart.

As Ballmer once chanted, “developers, developers, developers” are the lifeline of Microsoft’s success. When development dries up, Microsoft’s competitors gain an edge, since all of the new applications appear there first. More importantly, as the notion of an “IE-only” web site becomes increasingly ridiculous, tolerance for proprietary code drops. Thus, Microsoft must ensure its proprietary solutions always produce open data formats.

They have to act preemptively here. And this is the biggest hurdle Microsoft must overcome: sacrificing short-term gains for much larger long-term ones.