Come on, Facebook

Facebook just had their annual F8 conference, and to me it was a snooze-fest. Here are examples of things they could have done that would have been ACTUAL game changers:

  • An HTML5-based social application platform (currently missing from their mobile strategy, which leaves embedded FB apps far less relevant on mobile)
  • Embedded contextual ads on existing Facebook widgets, complete with rev-sharing – yep, it would be hated
  • A truly threatening AdSense competitor for off-Facebook publishers – yep, could be hated too
  • A cut in the FB Credits fee to 5% (from 30%), plus support for physical e-commerce – yep, it threatens a working business model
  • A connect-to-pay PayPal clone – yep, it would increase Facebook’s financial liabilities

Yes, these moves might hurt Facebook. Yes, they’re risky. But they’re game changers. They flip the market upside down. They make existing competitors freak out. They push you even further ahead as a leader. And so the absence of any such risk-taking shows how conservative Facebook has become.

Instead, they announced a glorified scrapbook and some cool automated Like buttons.

I’m sorry, but the Timeline is hardly revolutionary. How is this remotely on par with the introduction of apps or the Graph API at previous F8 conferences? For a company approaching an IPO, this was the moment to show the world what Facebook’s potential could be. Instead, they showed off a bunch of visual fluff, and the press is fawning over the changes like the second coming of Jesus.

Not to mention that none of these new integration points even works in their current web-based or native mobile applications! What am I missing here?

I am disappointed.

Did Digg Miss the Boat Again?

So Digg released a new layout the other day, and I feel like another boat was missed. They made a big splash about it, and it was covered in numerous places (TechCrunch and Mashable, for example). The new layout is noticeably faster to load, which is a huge plus. They tout that this version emphasizes a “My news” approach to Digg, personalizing the site based on what you’ve Dugg in the past. In practice, unless you’re a power user, your news ends up spammed with posts from the one or two blogs you frequent.

I feel they are addressing their threats in the wrong order. Their website wasn’t perfect, but it wasn’t their weakness either. Consider:

  1. The Like button is dominating right now. Virtually every blog has it.
  2. Facebook is a HUGE news traffic driver. Way bigger than Digg.
  3. A lack of personalization was never Digg’s problem. Plenty of news sites on the web are popular with no personalization functionality.

First, Digg needs to figure out a way to make article submission “fair” for the little guy (read: the long tail of users). They should have fixed the fundamentally flawed “democracy” in which certain users effectively had 1,000 votes. Personalization is one approach to the problem, but it ultimately doesn’t stop popular individuals from heavily influencing all of their followers’ feeds (and thus accumulating votes). The main complaint was that only power users could effectively get articles to the front page. Perhaps the algorithm should better incorporate Digg-to-viewer ratios, or weight Diggs from non-followers more heavily. The point is, until this is fixed, Digg will never fully engage its non-power users, who make up the vast majority of its user base, because they have no incentive to participate.
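
To make that concrete, here is a minimal sketch, in Python, of how such weighting could work. The weights and field names are mine, invented purely for illustration; this is not Digg’s actual algorithm.

    # Hypothetical story scoring: discount Diggs from the submitter's followers
    # and reward stories that convert viewers into Diggers. All weights invented.
    def story_score(diggs, views):
        """diggs: a list of dicts like {"user": "alice", "follows_submitter": True}."""
        if views == 0:
            return 0.0
        weighted_votes = 0.0
        for digg in diggs:
            # A Digg from a non-follower is a stronger independent signal than one
            # from someone who already sees everything the submitter posts.
            weighted_votes += 0.5 if digg["follows_submitter"] else 1.5
        # Digg-to-view ratio: a story Dugg by 10% of its viewers should beat one
        # Dugg by 0.1% of them, even if the raw counts are similar.
        conversion = len(diggs) / views
        return weighted_votes * (1.0 + conversion)

Under a rule like this, a power user blasting a link at thousands of followers gains less than a story that strangers actively choose to Digg.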

Second, Digg needs to up readership engagement. They should really look at their Digg button and see how it compares against the infamous Like button. It needs to be as brainless as the Like button. Clicking on “Digg This” should instantly submit the article to Digg. No windows; no dialogs. This is how the Like button works, and its pervasiveness shows how simplicity can trump everything else. Of course, doing this might mean changing how article submissions work on Digg — no problem: let power users check a setting where they ARE prompted for a custom title or description. The point is, the process needs to have as few places as possible where a user can change their mind about participating in the Digging process.

The website, I think, was never the problem.

Google Chart API Released

Google just released a Chart API. It lets you generate graphs and charts simply by linking to a dynamically rendered image. The API is amazingly robust and supports all sorts of chart types. It lets you make an image like the one below using a simple URL (see the image URL for an example):
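
Concretely, a chart URL looks something like this; the parameter values below are my own illustration rather than anything pulled from Google’s announcement, but the idea is that chart type, size, data, and labels are all just query parameters:

    # Illustrative only: building a Chart API URL by hand. The parameter values
    # (a 3D pie chart with two slices) are made up for this example.
    chart_url = (
        "http://chart.apis.google.com/chart"
        "?cht=p3"           # chart type: 3D pie
        "&chs=250x100"      # image size in pixels (width x height)
        "&chd=t:60,40"      # data, using the simple text encoding
        "&chl=Hello|World"  # one label per slice
    )
    # Point an <img src="..."> at chart_url and Google returns the rendered chart.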

Why is this better than hosted solutions? For 99% of webmasters out there, Google’s uptime will beat the pants off their own servers, so there’s really little to no question about the availability of this solution. And if availability is ever really an issue, these images could easily be cached by your application after generating them. The biggest draw, of course, is that unlike other hosted solutions, this one doesn’t use a proprietary format (Flash), doesn’t introduce security vulnerabilities (from installing some third-party server-side package), and doesn’t add CPU or memory overhead to your application.

Charts like these don’t use Flash or JavaScript, which means they work in mobile browsers and RSS readers. Since that’s where things are going, this is Google’s way of getting a small but important piece of the web off the proprietary Flash format. This matters all the more given the iPhone’s recent arrival and its lack of Flash support.

“To control and organize the world’s information.” This project is certainly a reflection of their motto.

Rabid Hackers Beating Apple’s iPhone

Apple is set to release version 1.1.2 of their iPhone’s firmware.

The rumor is, it’s already been hacked, and it’s not even out yet.

Apple is finally getting a taste of being popular. This is the same fight Microsoft fights every time they release a new version of Windows. One wonders whether Apple will keep fighting it thinking it can win, or whether the fight is really just for show (for its carrier partners) and Apple doesn’t actually care.

Either way, the hacked iPhone is so much cooler than an unhacked one thanks to all the customizations you can do to it. :)

No G-Phone, but Open Phone OS from Google

Finally, those damn rumors about a Google phone can die. As expected, Google has released a Google-branded mobile phone OS. The operating system is called Android and is based on Linux. Google produced it in cooperation with 30 mobile handset partners, and we should see the first phones using it within the next six months.

This approach made the most sense: producing hardware would alienate Google from all of the mobile handset makers they are trying to woo into setting Google as their default search engine. All those crazy screenshots we saw were probably leaked partner phones undergoing integration; it’s so obvious in hindsight, huh? Google arrived at the personal computing party two decades too late to have a say in operating systems, so now they are positioning a new operating system of their own to unseat Microsoft. And unlike most would-be competitors, Google doesn’t care to become the Microsoft of mobile phones. Instead, their goal is to undercut Microsoft with a free alternative that could be maintained by anybody.

Because Android is open source, it is entirely possible that in a few years someone else will come in and take over the project. Google’s reasoning is simple: on a completely level playing field, they are the preferred search engine.

With the combined functionality of Google Docs, G-Chat, Google 411, Gmail, and Google Maps (a staple in mobile phones these days), Google’s plan is to gain even more insight into its users. Just imagine: now they’ll know what you search for, where you’ve been, who you’ve called, what you write about in documents, who you’ve emailed, and what restaurants you’re interested in, and with the integration of Google’s new OpenSocial, they’ll gain additional insight into your social life.

Google’s theme this week is to bring together splintered markets under a unified, open standard… for them to harness. Data is easiest to mine when it is in a consistent format.

Mozilla Prism – Bringing Web Apps to the Desktop

Mozilla Labs just released a new project called Prism. The most interesting aspect of this announcement is that Mozilla is attempting to formally “upgrade” the web application development process from what it has been since the web was born. While the new technology introduces no significant new features to web development, it strongly encourages web 2.0-style development by completely removing the navigation links and address bar.

Prism is an application that lets users split web applications out of their browser and run them directly on their desktop.

On the surface, it seems like a browser without a location bar. The main point is that it gives web applications the hooks that regular applications have, such as their own entries in the application menu. But even that can be replicated on modern operating systems with shortcut links. Where it is genuinely superior to visiting a regular website in a browser is in two ways:

  • If an application or Firefox crashes, it doesn’t crash the other instances. Each application runs as its own standalone instance with its own link in the task bar.
  • Customizations or extensions per application are now possible. For example, you might see an extension specifically for Facebook or YouTube that works in the application environment without cluttering up your Firefox install.
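
To give a sense of how lightweight this is, a Prism “web app” is essentially a small bundle pointing at a URL plus a few toggles for browser chrome. Something roughly like the following captures the idea, though I’m writing the key names from memory, so treat them as an assumption rather than the shipped format:

    ; Illustrative only: roughly what a Prism web app definition looks like.
    ; Treat the exact key names as an assumption, not documentation.
    [Parameters]
    id=calendar@prism.app
    uri=https://www.google.com/calendar/
    ; hide most of the browser chrome, keep the status bar
    location=no
    navigation=no
    sidebar=no
    status=yes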

Mozilla has greater ambitions for this project, indicating that this is the first in a series of “experiments” in web-desktop integration:

…we’re also working to increase the capabilities of those apps by adding functionality to the Web itself, such as providing support for offline data storage and access to 3D graphics hardware.

They state that they have no intention of doing these things with proprietary technologies like Adobe’s AIR or Microsoft’s Silverlight.

In short, keeping Prism separate from Firefox keeps the browser itself uncluttered, and it lets Mozilla cater specifically to the needs of desktop-style applications. In the long term, whole sites might become applications you open with this tool in order to use their offline features, while Firefox just gets you to the site.

At least, that’s my take on this.

What the $@#% is Sony Doing? PS3 w/o PS2 Support!

Sony just announced a new PS3 model in Japan. It will sit at a relatively reasonable $342, but it has no backwards compatibility with PS2 games, effectively making it the only next-gen console without backwards support. Even the Wii plays GameCube games. Sony fails to amaze me.

There are many great games for the PS2, and many people own one. The ability to keep playing those games on the PS3 is a huge up-sell, and the inability to do so is a huge drawback. Does anybody really want a Wii, a PS2, *and* a PS3 sitting in their living room? But don’t worry: if you buy this new discounted PS3, you also get a copy of Spiderman 3. Wow, that makes up for the lack of backwards compatibility… not.

This new, gimped model won’t be released in the US. It is clearly an attempt to fight Nintendo’s Wii on price to gain more market share in Japan.

These recent price cuts, feature-stripped models, capacity upgrades, more price cuts, and vaporware announcements are really starting to highlight Sony’s inability to figure out what to do. They married themselves to a horrible product life-cycle plan, and now they want out but can’t figure out how to save face. There is no silver bullet for a product that is simply too expensive.

I admire their willingness to keep fighting, but they’re pretty much screwed:

  • Backwards compatibility is a requirement, not a feature, at least if they plan to continue using the "PS" trademark.
  • They can’t get rid of Blu-ray since game developers are already relying on the extra capacity.
  • Hard drive size isn’t adding that much to the price, and the Xbox Elite has them beat anyway.
  • They tried stripping luxury features like wireless controllers; that model hardly sold at all and was eventually discontinued.
  • Great games might save the console, but without lots of users, publishers won’t sign on for exclusives.
  • They’re already bundling games or movies, and it isn’t exactly making sales any stronger.
  • And despite having solid titles like MGS4 coming up, popular games like Smash Bros or Mario Party didn’t save Nintendo, and we shouldn’t expect one or two games to turn an entire console around. Great games have to be a trend, and that’s nearly impossible now that exclusives are bleeding away to Microsoft.

When Sony failed to respond to Microsoft’s price cut, it told me they aren’t ready to make one of their own. Why? Because lots of people trying to decide between the two consoles made up their minds once Microsoft cut its prices. At that moment, had Sony cut prices too, it would have made a compelling case to hold onto your money and buy a PS3 instead, but they didn’t. Sony’s PS3 production costs, therefore, are likely still too high to justify a real price cut, which explains the smoke-and-mirrors cuts of either gutting core features or upgrading the hard drives and discontinuing models. In other words, they want to do a price cut, but it’s still too painful.

What else can Sony do? What can we expect? For one, we should see a true price cut in the next few weeks. Sony has been gutting features and testing new price points, and based on this data, they will introduce a price cut of $50 – $100.

Of course, any kind of cut is painful for Sony and its distributors, since all existing inventory (of which there is a ton) instantly takes a hit on its margin (probably an outright loss). But they have no choice: if they fail to meet their holiday sales projections, it will pretty much doom the console, as publishers will really start to run for the hills.

CS Degrees are Worth More… Again.

I’m not a computer science major, but I’m very familiar with the programmer market: it’s stronger than ever. After the dot-com bust and the huge off-shore-it-to-India scare, there was a widespread decline in interest in the computer science degree, and with it came a general pessimism about the long-term viability of programming as a career. But ultimately, programming is a form of engineering, and it was inevitable that its wages would come back up.

The average starting salary for CS graduates is at its highest level since the dot-com bubble: $53,051. This only makes sense, since a rapid decline in the graduate pool has elevated demand. Of course, this will probably wane again in the coming years as the current bubble deflates (a sudden, overnight crash is less likely this time) and college freshmen start piling into the major again.

Nevertheless, if you like programming, stick with it. It may be a little turbulent, but as an entry-level job it will always pay more than any other non-engineering job (until you enter middle or upper management). Technology is fickle because it is constantly becoming outdated; be prepared for a career full of rapidly changing standards, your paycheck included.

Zune 2 – More of the Same

There is a new Zune coming out. This is not a surprise given how much thunder the new iPods have stolen from Microsoft (huh? What’s a Zune again?)

In short, they are the old Zunes in new, slimmer cases. All of the nonsense about a Zune Phone has been officially smashed. That’s right, I want to be the first to say that all of those bogus rumors people were latching onto about the magical Future-Zune were FALSE. Microsoft has taken Apple’s beating and come back for seconds. They spent nine months playing catch-up with Old iPods™, and Apple backhands them with the iPod touch only a month before the Zune 2.0 comes out.

Ouch.

Microsoft had better start firing their Zune designers, because all they’re good at is copying what was in style… last year.
