Importance of FeedBurner in Blogging

What is FeedBurner?

 

FeedBurner was officially launched by Dick Costolo, Eric Lunt, Steve Olechowski, and Matt Shobe in 2004. It is a well-known web feed management provider. On June 3, 2007, FeedBurner was acquired by Google Inc. for a rumored price of $100 million. One month later, two of its popular “Pro” services were made free to all users. Services provided to publishers include traffic analysis and an optional advertising system. Though it was initially unclear whether advertising would be well suited to the RSS format, authors now choose to include advertising in two-thirds of FeedBurner’s feeds. Users can find out how many people have subscribed to their feeds and with what service or program they subscribed.
Published feeds are modified in several ways, including automatic links to Digg and del.icio.us, and “splicing” information from multiple feeds. FeedBurner is a typical Web 2.0 service, providing web service application programming interfaces to allow other software to interact with it. As of October 5, 2007, FeedBurner hosted over a million feeds for 584,832 publishers, including 142,534 podcast and videocast feeds.

 

Web Feed:

 

A web feed is a document whose discrete content items include web links to the source of the content. News websites and blogs are common sources of web feeds, but feeds are also used to deliver structured information ranging from weather data to top-ten lists of hit tunes to search results. The two main web feed formats are RSS and Atom. “Publishing a feed” and “syndication” are two of the more common terms used to describe making a feed available for an information source such as a blog. Web feed content, like syndicated print newspaper features or broadcast programs, may be shared and republished by other websites.

More often, though, feeds are subscribed to directly by users with aggregators or feed readers, which combine the contents of multiple web feeds for display on a single screen or series of screens; some modern web browsers incorporate aggregator features. Users typically subscribe to a feed by manually entering its URL or by clicking a link in a web browser. Web feeds are designed to be machine-readable rather than human-readable, which tends to be a source of confusion when people first encounter them. Machine readability also means that web feeds can be used to transfer information from one website to another automatically, without any human intervention.

The history of RSS itself has been contentious. Dave Winer published a modified version of the RSS 0.91 specification on the UserLand website, covering how it was being used in his company’s products, and claimed copyright to the document. A few months later, UserLand filed a U.S. trademark registration for RSS, but failed to respond to a USPTO trademark examiner’s request, and the application was rejected in December 2001.
The RSS-DEV Working Group, a project whose members included Guha and representatives of O’Reilly Media and Moreover, produced RSS 1.0 in December 2000. This new version, which reclaimed the name RDF Site Summary from RSS 0.9, reintroduced support for RDF and added XML namespace support, adopting elements from standard metadata vocabularies such as Dublin Core. Also in December 2000, Winer released RSS 0.92, a minor set of changes aside from the introduction of the enclosure element, which permitted audio files to be carried in RSS feeds and helped spark podcasting. He also released drafts of RSS 0.93 and RSS 0.94 that were subsequently withdrawn.

In September 2002, Winer released a major new version of the format, RSS 2.0, that redubbed its initials Really Simple Syndication. RSS 2.0 removed the type attribute added in the RSS 0.94 draft and added support for namespaces. Because neither Winer nor the RSS-DEV Working Group had Netscape’s involvement, neither could make an official claim on the RSS name or format. This fueled ongoing controversy in the syndication development community as to which entity was the proper publisher of RSS. One product of that contentious debate was the creation of an alternative syndication format, Atom, which began in June 2003. The Atom syndication format, whose creation was motivated in part by a desire for a clean start free of the issues surrounding RSS, has been adopted as RFC 4287.

In July 2003, Winer and UserLand Software assigned the copyright of the RSS 2.0 specification to Harvard’s Berkman Center for Internet & Society, where he had just begun a term as a visiting fellow. At the same time, Winer launched the RSS Advisory Board with Brent Simmons and Jon Udell, a group whose purpose was to maintain and publish the specification and answer questions about the format.
In December 2005, the Microsoft Internet Explorer and Outlook teams announced on their blogs that they were adopting the feed icon first used in the Mozilla Firefox browser, created by Stephen Horlander, a Mozilla designer. A few months later, Opera Software followed suit. This effectively made the orange square with white radio waves the industry standard for RSS and Atom feeds, replacing the large variety of icons and text that had previously been used to identify syndication data. In January 2006, Rogers Cadenhead relaunched the RSS Advisory Board without Dave Winer’s participation, with a stated desire to continue the development of the RSS format and resolve ambiguities. In June 2007, the board revised its version of the specification to confirm that namespaces may extend core elements with namespace attributes, as Microsoft had done in Internet Explorer 7. In the board’s view, a difference of interpretation had left publishers unsure of whether this was permitted or forbidden.
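The RSS 2.0 format discussed above is plain XML, which is why feeds can be processed entirely by machines. As a minimal sketch (the feed content here is invented for illustration, not a real blog’s feed), a program can read a channel and its items with Python’s standard library:

```python
import xml.etree.ElementTree as ET

# A minimal, hypothetical RSS 2.0 document: one channel with two items.
RSS_SAMPLE = """<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0">
  <channel>
    <title>Example Blog</title>
    <link>https://example.com/</link>
    <description>A hypothetical feed for illustration.</description>
    <item>
      <title>First post</title>
      <link>https://example.com/first-post</link>
    </item>
    <item>
      <title>Second post</title>
      <link>https://example.com/second-post</link>
    </item>
  </channel>
</rss>"""

def read_feed(xml_text):
    """Return (channel_title, [(item_title, item_link), ...])."""
    root = ET.fromstring(xml_text)
    channel = root.find("channel")
    items = [(item.findtext("title"), item.findtext("link"))
             for item in channel.findall("item")]
    return channel.findtext("title"), items

title, entries = read_feed(RSS_SAMPLE)
print(title)         # Example Blog
print(len(entries))  # 2
```

An aggregator does essentially this on a schedule for every subscribed feed, merging the resulting items into one reading list.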

 

API (Application Programming Interface):

 

An API is a set of subroutine definitions, communication protocols, and tools for building software. In general terms, it is a set of clearly defined methods of communication among various components. A good API makes it easier to develop a computer program by providing all the building blocks, which are then put together by the programmer. An API is usually related to a software library: the API describes and prescribes the “expected behavior”, while the library is an “actual implementation” of that set of rules.

Web APIs are the defined interfaces through which interactions happen between an enterprise and the applications that use its assets; a service-level agreement typically specifies the functional provider and exposes the service path or URL for the API’s users. An API approach is an architectural approach that revolves around providing a program interface to a set of services for different applications serving different types of consumers. In the context of web development, an API is typically defined as a set of specifications, such as Hypertext Transfer Protocol (HTTP) request messages, along with a definition of the structure of response messages, usually in Extensible Markup Language (XML) or JavaScript Object Notation (JSON) format. An example might be a shipping company API that can be added to an eCommerce-focused website to facilitate ordering shipping services and automatically include current shipping rates, without the site developer having to enter the shipper’s rate table into a web database.

While “web API” has historically been virtually synonymous with web service, the recent trend has been a move away from Simple Object Access Protocol (SOAP) based web services and service-oriented architecture toward more direct representational state transfer (REST) style web resources and resource-oriented architecture. Part of this trend is related to the Semantic Web movement toward Resource Description Framework (RDF), a concept to promote web-based ontology engineering technologies.
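The shipping example above can be sketched concretely. The endpoint, parameter names, and response fields below are all hypothetical (no real shipping provider is implied); the point is the request/response structure itself: a client builds an HTTP request URL, and the server answers with JSON the client can parse.

```python
import json
from urllib.parse import urlencode

# Hypothetical endpoint -- not a real shipping provider's API.
BASE_URL = "https://api.example-shipper.com/v1/rates"

def build_rate_request(origin_zip, dest_zip, weight_kg):
    """Construct the request URL a client would send over HTTP."""
    query = urlencode({"origin": origin_zip, "dest": dest_zip, "weight": weight_kg})
    return f"{BASE_URL}?{query}"

# A JSON body the hypothetical API might return for that request.
SAMPLE_RESPONSE = (
    '{"currency": "USD", "rates": ['
    '{"service": "ground", "price": 7.50}, '
    '{"service": "express", "price": 21.00}]}'
)

def cheapest_rate(response_body):
    """Parse the JSON response and pick the lowest-priced service."""
    data = json.loads(response_body)
    return min(data["rates"], key=lambda r: r["price"])

url = build_rate_request("60601", "94105", 2.5)
print(url)
print(cheapest_rate(SAMPLE_RESPONSE)["service"])  # ground
```

The site developer never touches the shipper’s rate table; the API contract (URL shape in, JSON shape out) is all the two parties need to agree on.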
Web APIs allow the combination of multiple APIs into new applications known as mashups. In the social media space, web APIs have allowed web communities to share content and data between communities and applications. In this way, content created in one place can be dynamically posted and updated in multiple locations on the web. For example, Twitter’s REST API allows developers to access core Twitter data, and the Search API provides methods for developers to interact with Twitter Search and trends data.

API documentation describes what services an API offers and how to use those services, aiming to cover everything a client would need to know for practical purposes. Documentation is crucial for the development and maintenance of applications using the API. API documentation is traditionally found in documentation files but can also be found in social media such as blogs, forums, and Q&A websites. Traditional documentation files are often presented via a documentation system, such as Javadoc or Pydoc, that has a consistent appearance and structure. However, the types of content included in the documentation differ from API to API. In the interest of clarity, API documentation may include a description of the classes and methods in the API as well as “typical usage scenarios, code snippets, design rationales, performance discussions, and contracts”, but implementation details of the API services themselves are usually omitted. Restrictions and limitations on how the API can be used are also covered by the documentation. For instance, documentation for an API function could note that its parameters cannot be null, that the function itself is not thread-safe, or that a decrement-and-cancel protocol averts self-trading. Because API documentation tends to be comprehensive, it is a challenge for writers to keep it updated and for users to read it carefully, which can lead to bugs.
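A small, invented example shows the kinds of restrictions API documentation spells out. The function and its docstring below are illustrative only; note how the documentation states what callers may pass (no None parameters) and what they must handle themselves (thread safety), while saying nothing about internal implementation:

```python
def transfer(ledger, source, target, amount):
    """Move ``amount`` from account ``source`` to account ``target``.

    Parameters:
        ledger: dict mapping account name to balance. Must not be None.
        source, target: account names present in ``ledger``. Must not be None.
        amount: amount to move; must not exceed the source balance.

    Restrictions (the kind documentation must state explicitly):
        * No parameter may be None; a ValueError is raised otherwise.
        * This function is NOT thread-safe; callers must synchronize access.
    """
    if ledger is None or source is None or target is None or amount is None:
        raise ValueError("parameters must not be None")
    if ledger[source] < amount:
        raise ValueError("insufficient funds")
    ledger[source] -= amount
    ledger[target] += amount
    return ledger

accounts = {"alice": 10.0, "bob": 0.0}
transfer(accounts, "alice", "bob", 4.0)
print(accounts)  # {'alice': 6.0, 'bob': 4.0}
```

A tool like Pydoc renders exactly this docstring into browsable pages, which is how the “consistent appearance and structure” mentioned above is achieved.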
API documentation can be enriched with metadata, such as Java annotations. This metadata can be used by the compiler, by tools, and by the run-time environment to implement custom behaviors or custom handling.
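Java annotations are one form of this; to stay in one language, here is a rough Python analogue using a decorator, where the metadata is attached to the function object and read back at run time (the `deprecated` decorator and `fetch_feed_v1` function are invented for illustration):

```python
def deprecated(reason):
    """Attach deprecation metadata to a function, much as an annotation would."""
    def mark(func):
        func.__deprecated__ = reason  # metadata stored on the function object
        return func
    return mark

@deprecated("use fetch_feed_v2 instead")
def fetch_feed_v1(url):
    return f"fetching {url} with the old endpoint"

# A tool (or the run-time environment) can inspect the metadata and react:
if hasattr(fetch_feed_v1, "__deprecated__"):
    print("warning:", fetch_feed_v1.__deprecated__)
```

In Java the equivalent would be a `@Deprecated` annotation read via reflection; the common idea is that the metadata travels with the code rather than living only in separate documentation files.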

Please leave a comment!