Google just introduced a new RSS Feed JavaScript API. At first, I didn’t get why it was so useful, but after reading up on it, I realized its power.
First, it simplifies RSS parsing. This is awesome on its own level just because that can be a pain sometimes.
But the true power of this API is that it overcomes a critical limitation of JavaScript in a safe and manageable way: it lets you read data from multiple domains. For those of you unaware, browsers enforce a same-origin policy: a script can only request data from the domain its page was served from. When you try to pull data from a second domain, the browser barfs up security errors. This restriction is a safety net so that developers don’t accidentally leave a security hole that lets some hacker inject JavaScript code that talks to the hacker’s server.
Google’s new API lets you sidestep the entire issue by routing everything through Google. You can pull feeds from Slashdot, Digg, and your favorite blog, all on one page, all at once, without resorting to otherwise clumsy and unnecessary workarounds (proxies or “middleman” scripts).
As I mentioned, the API makes parsing simple. Check out this example:
```javascript
var feed = new google.feeds.Feed('http://www.digg.com/rss/index.xml');
feed.load(function(result) {
  if (!result.error) {
    for (var i = 0; i < result.feed.entries.length; i++) {
      var entry = result.feed.entries[i];
      alert(entry.title);
      alert(entry.content);
    }
  }
});
```
At first glance, I know it looks like regular JavaScript. But read it line by line and it’s very intuitive:
- The first line grabs the RSS feed from Digg and creates a new feed object, stored in feed.
- The feed is loaded.
- If there is no error…
- Go through each item.
- Get the entry.
- Display the title of the entry.
- Display the content of the entry, etc.
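The callback logic from the steps above can be pulled into a plain function and exercised outside the browser. The `result` shape (`result.error`, `result.feed.entries`, `entry.title`/`entry.content`) follows the example code; the mock object below is made-up sample data, not real API output:

```javascript
// The same loop as the example, as a testable function: collect entry
// titles from a Feed API style result object, skipping on error.
function collectEntryTitles(result) {
  const titles = [];
  if (!result.error) {
    for (let i = 0; i < result.feed.entries.length; i++) {
      titles.push(result.feed.entries[i].title);
    }
  }
  return titles;
}

// Hypothetical sample data shaped like the API's callback argument.
const mockResult = {
  feed: {
    entries: [
      { title: 'Story one', content: '<p>Body one</p>' },
      { title: 'Story two', content: '<p>Body two</p>' }
    ]
  }
};

console.log(collectEntryTitles(mockResult)); // [ 'Story one', 'Story two' ]
```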
While this isn’t going to be a huge step for the advanced developers out there, it will be significant for those of us who were too lazy to work around JavaScript’s domain security model, or never knew how. The added ease of parsing feeds will also be huge for developers who aren’t familiar with parsing XML (note: it is a huge PitA).
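To see why parsing the feed yourself is a pain, here is a naive, fragile title extractor of the kind people hack together without a real parser (`extractTitles` and the sample feed are made up for this sketch). Real feeds need a proper XML parser to handle CDATA sections, entities, namespaces, and encodings, which is exactly the grunt work the Feed API hides:

```javascript
// Naive "parse it yourself" approach: regex out the <title> elements.
// Breaks on CDATA, attributes, entities, namespaces -- hence the pain.
function extractTitles(rssXml) {
  const titles = [];
  const re = /<title>([\s\S]*?)<\/title>/g;
  let match;
  while ((match = re.exec(rssXml)) !== null) {
    titles.push(match[1].trim());
  }
  return titles;
}

// Made-up minimal RSS document for illustration.
const sampleFeed =
  '<rss><channel><title>Example Feed</title>' +
  '<item><title>First post</title></item>' +
  '<item><title>Second post</title></item>' +
  '</channel></rss>';

console.log(extractTitles(sampleFeed)); // [ 'Example Feed', 'First post', 'Second post' ]
```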
Thanks, Google.
A splogfest is highly unlikely, as this is JavaScript only. That means spiders won’t be hitting the content, so it won’t be very useful for splogs trying to get their blogs spidered. If by splog you mean fake content, that problem is largely irrelevant if a site isn’t indexed, and isn’t really made worse by this, since ripping content with server-side code has been possible since the dawn of the Internet. In short, this won’t enable any malicious use that hasn’t been possible for ages.
Awesome summary – thank you. The one downside to this utility will be the splog-fest that it enables. Wonder if Google has a TOS for this, and if they’ll enforce it?