
Monthly Archives: September 2008

Released GWT 1.5

Finally, GWT 1.5 is out! Read more about it here. Download it here. Some highlights:

  1. Java 5 language support (generics, enums, and annotations; no more need for the @gwt.typeArgs workaround).
  2. New JavaScript overlay types to seamlessly and efficiently integrate with native JS code.
  3. New animations available on widgets, and theme support.
  4. New API for DOM access.

Good stuff!

Also, the new Google APIs for GWT are almost finalized; here’s the link. These include the Gears, Gadgets, and Search APIs.

 
Posted on September 28, 2008 in GWT

Google Web Toolkit and Client-Server Communications

This article was written by Miguel Mendez.

Communication Infrastructure

Frame

  • Module com.google.gwt.user.User provides the Frame class
  • Pass information to the server by manipulating the Frame’s URL (see the sketch below)
  • Retrieve responses from the Frame’s content, or the server can write script tags to be executed
  • There are history and browser-compatibility issues to consider
  • Load events are not reliable across browsers, specifically in Safari 2
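
A minimal, hypothetical sketch of the hidden-Frame approach described above (the servlet path and parameter name are invented; URL.encodeComponent comes from the com.google.gwt.http.HTTP module). The request data is encoded into the frame’s URL, and the server’s response page is expected to call back into the application:

// Encode the request in the query string of an invisible Frame.
Frame hiddenFrame = new Frame();
hiddenFrame.setSize("0px", "0px"); // keep the frame out of sight
hiddenFrame.setUrl("/frameService?data=" + URL.encodeComponent("some payload"));
RootPanel.get().add(hiddenFrame);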

FormPanel

  • Module com.google.gwt.user.User provides the FormPanel class
  • Provides interoperability with servers that accept traditional HTML form encoding
  • Data is sent asynchronously
  • Any widget that implements HasName and is part of the FormPanel will have its data sent on submit
  • Enables file uploads

FileUpload Example

final FormPanel form = new FormPanel();
form.setAction("/myFormHandler");

// FileUpload requires the POST method and multipart MIME encoding.
form.setEncoding(FormPanel.ENCODING_MULTIPART);
form.setMethod(FormPanel.METHOD_POST);

// Create a FileUpload widget.
FileUpload upload = new FileUpload();
upload.setName("uploadFormElement");
form.setWidget(upload);

// Get the root panel and add the form and a submit button.
RootPanel rootPanel = RootPanel.get();
rootPanel.add(form);
rootPanel.add(new Button("Submit", new ClickListener() {
  public void onClick(Widget sender) {
    form.submit();
  }
}));

RequestBuilder (XHR)

  • Module com.google.gwt.http.HTTP provides RequestBuilder
  • Builder for making HTTP GET and POST requests
  • Asynchronous communications only
  • Restricted by the same origin policy
  • Browsers limit the number of simultaneous connections, so don’t go crazy firing off requests

RequestBuilder Example

public void onModuleLoad() {
  String url = GWT.getModuleBaseURL() + "get";
  RequestBuilder builder = new RequestBuilder(RequestBuilder.GET, url);

  // Create a callback object to handle the result
  RequestCallback requestCallback = new RequestCallback() {
    public void onError(Request request, Throwable exception) {
      // ...
    }

    public void onResponseReceived(Request request, Response response) {
      // ...
    }
  };

  // Send the request; sendRequest throws RequestException if the
  // request cannot be initiated
  try {
    builder.sendRequest("payload", requestCallback);
  } catch (RequestException e) {
    // ...
  }
}

XML Services

XML Encoding/Decoding

  • Module com.google.gwt.xml.XML declares the XML-related classes
  • XMLParser parses a string containing valid XML into a new Document instance
  • Document class can be used to explore and modify the structure of the document
  • Document class will also convert the structure back into a string
  • Manipulation of XML is somewhat laborious

XMLRPC Example

RequestBuilder rb = new RequestBuilder(RequestBuilder.GET, "...");

RequestCallback requestCallback = new RequestCallback() {
  public void onResponseReceived(Request request, Response response) {
    // Parse the XML response into a Document object
    Document result = XMLParser.parse(response.getText());
    // ...
  }

  public void onError(Request request, Throwable exception) {
    // Error handling omitted
  }
};

// Create a Document
Document doc = XMLParser.createDocument();
// Add elements to the document as necessary…

// Send the XML request
rb.sendRequest(doc.toString(), requestCallback);
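
A hedged sketch of the elided document-building step (the element name and text are purely illustrative):

Element item = doc.createElement("item");
item.appendChild(doc.createTextNode("hello"));
doc.appendChild(item);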

JSON Services

JSON Encoding/Decoding

  • Module com.google.gwt.json.JSON declares the JSON-related classes
  • JSONParser converts between strings and JSON objects
  • JSON is a fundamental data encoding that does not support cyclic structures
  • JSONP, JSONRPC are protocols built on top of the JSON encoding
  • Again, the conversion to/from JSON can be somewhat laborious

JSON Service Example

RequestBuilder rb = new RequestBuilder(RequestBuilder.GET, "...");

RequestCallback requestCallback = new RequestCallback() {
  public void onResponseReceived(Request request, Response response) {
    // Parse the JSON response into a JSONValue object
    JSONValue result = JSONParser.parse(response.getText());
    // ...
  }

  public void onError(Request request, Throwable exception) {
    // Error handling omitted
  }
};

rb.sendRequest("{...}", requestCallback);
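
A hedged sketch of inspecting the parsed value (the "count" key is illustrative and matches the overlay example below):

JSONObject obj = result.isObject();
if (obj != null) {
  JSONValue count = obj.get("count"); // null if the key is absent
  // ...
}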

Efficiency Tip: JavaScriptObject Overlays

Overlays result in no runtime overhead, so they are very efficient.

/**
 * Java overlay of a JavaScriptObject whose JSON
 * representation is { count: 5 }.
 */
public class MyJSO extends JavaScriptObject {

  // Overlay types require a protected, zero-argument constructor
  protected MyJSO() { }

  // Convert a JSON-encoded string into a MyJSO instance
  public static native MyJSO fromJSONString(String jsonString) /*-{
    return eval('(' + jsonString + ')');
  }-*/;

  // Returns the count property of this MyJSO
  public native int getCount() /*-{
    return this.count;
  }-*/;
}
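
A short usage sketch, assuming the payload is the JSON shown in the class comment (for example, the body of an HTTP response):

MyJSO myJso = MyJSO.fromJSONString("{ count: 5 }");
int count = myJso.getCount(); // reads the JS property directly, no parsing step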

GWT RPC

GWT RPC Overview

  • Designed to move Java instances between client code (in the browser) and a Java servlet
  • Uses Serializable and IsSerializable marker interfaces
  • Interfaces define the service; a generator creates the necessary marshaling code with built-in versioning and a serialization policy file
  • Supports Java 1.5 language constructs
  • Built on top of RequestBuilder (XHR)
  • As with the rest of GWT, recompile to pick up the latest performance improvements (faster serialization code, etc.)

Declaring GWT RemoteServices

// Implemented by the servlet
@RemoteServiceRelativePath("tasks")
public interface TaskRemoteService extends RemoteService {
  List<Task> getTasks(int startIndex, int maxCount)
      throws TaskServiceException;
}

// Implemented by the generated client proxy; must mirror the synchronous interface
public interface TaskRemoteServiceAsync {
  void getTasks(int startIndex, int maxCount,
      AsyncCallback<List<Task>> callback);
}

// TaskRemoteService servlet
public class TaskRemoteServiceImpl extends RemoteServiceServlet
    implements TaskRemoteService {

  public List<Task> getTasks(int startIndex, int maxCount)
      throws TaskServiceException {
    // Code omitted
  }
}

Invoking GWT RemoteServices

// Get client proxy, annotation causes auto addressing
TaskRemoteServiceAsync service =
    GWT.create(TaskRemoteService.class);

// Create a callback object to handle results
AsyncCallback<List<Task>> asyncCallback =
    new AsyncCallback<List<Task>>() {
      public void onFailure(Throwable caught) {
        // Deal with TaskServiceException...
      }

      public void onSuccess(List<Task> result) {
        for (Task task : result) {
          // Process each task...
        }
      }
    };

// Actually call the service
service.getTasks(0, 10, asyncCallback);

Accessing the Request Object

  • The async method signature is changed to return a Request instance
  • Useful for canceling the HTTP request used by RPC (see the sketch after the interface below)
// Modified async interface
public interface TaskRemoteServiceAsync {
  // Method returns the underlying HTTP Request instance
  Request getTasks(int startIndex, int maxCount,
      AsyncCallback<List<Task>> callback);
}
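
A minimal sketch of using the returned Request, assuming the modified interface above and an existing asyncCallback:

Request pending = service.getTasks(0, 10, asyncCallback);
// Later, e.g. if the user navigates away before the call completes:
if (pending.isPending()) {
  pending.cancel();
}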

Accessing the RequestBuilder Object

  • Change the return type of the async method to RequestBuilder; the proxy returns a fully configured RequestBuilder
  • Provides access to HTTP timeouts and headers
  • The caller must call RequestBuilder.send() (see the sketch below)
  • Wrap the modified async interface to provide your own special manipulation code
// Modified async interface
public interface TaskRemoteServiceAsync {
  // Method returns the underlying HTTP RequestBuilder instance
  RequestBuilder getTasks(int startIndex, int maxCount,
      AsyncCallback<List<Task>> callback);
}
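
A hedged sketch of driving the returned RequestBuilder (the header name and timeout value are illustrative):

RequestBuilder rb = service.getTasks(0, 10, asyncCallback);
rb.setTimeoutMillis(5000);                // access to HTTP timeouts
rb.setHeader("X-Custom-Header", "value"); // and to HTTP headers
try {
  rb.send();                              // the caller is responsible for sending
} catch (RequestException e) {
  // Handle a request that could not be initiated
}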

Raw RPC Serialization

  • For pre-serialization of responses or custom transports
    * Client accesses the generated SerializationStreamFactory
    * Server uses RPC.encodeResponseForSuccess method to encode
  • Streams are not symmetric
public Object clientDeserializer(String encodedPayload)
    throws SerializationException {
  // Create the serialization stream factory
  SerializationStreamFactory serializationFactory =
      GWT.create(TaskRemoteService.class);
  // Create a stream reader
  SerializationStreamReader streamReader =
      serializationFactory.createStreamReader(encodedPayload);
  // Deserialize the instance
  return streamReader.readObject();
}
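
On the server side, a hedged sketch of the RPC.encodeResponseForSuccess step mentioned above (the reflection lookup of getTasks and the method name serverSerializer are illustrative; it uses java.lang.reflect.Method and com.google.gwt.user.server.rpc.RPC):

public String serverSerializer(List<Task> tasks) throws Exception {
  // Look up the service method whose declared return type drives serialization
  Method method =
      TaskRemoteService.class.getMethod("getTasks", int.class, int.class);
  // Encode the payload using GWT's server-side RPC utility class
  return RPC.encodeResponseForSuccess(method, tasks);
}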

Best Practices

  • Use stateless servers; they are easier to manage and scale better
  • Keep conversational state in the client
  • Consider possible failure modes, keep the user’s needs in mind
  • Judiciously control the amount of data sent to the client – consider pagination of data instead of one bulk transfer
 
Posted on September 28, 2008 in GWT

GWT JSNI: Talking to GWT code from JavaScript

Bruce Johnson has written an expansive post on understanding the GWT JavaScript Native Interface (JSNI). It starts out with the piece that some people know about, namely inlining native JavaScript such as this:


JAVA:

// Java method declaration…
native String flipName(String name) /*-{
  // …implemented with JavaScript
  var re = /(\w+)\s(\w+)/;
  return name.replace(re, '$2, $1');
}-*/;

But what about calling back out to Java from within native land?

JAVA:

package org.example.foo;

public class Flipper {
  public native void flipName(String name) /*-{
    var re = /(\w+)\s(\w+)/;
    var s = name.replace(re, '$2, $1');
    this.@org.example.foo.Flipper::onFlip(Ljava/lang/String;)(s);
  }-*/;

  private void onFlip(String flippedName) {
    // do something useful with the flipped name
  }
}

You can also call any JavaScript loaded on the page, whether from a script tag or elsewhere, via:

JAVA:

// A Java method using JSNI
native void sayHelloInJava(String name) /*-{
  $wnd.sayHello(name); // $wnd is a JSNI synonym for 'window'
}-*/;

Finally, what if you wrote a bunch of Java code for GWT and you want JavaScript to call it? Simply publish the code back through the $wnd world:

JAVA:

package org.example.yourcode.format.client;

public class DateFormatterLib implements EntryPoint {

  // Expose the following method into JavaScript.
  private static String formatAsCurrency(double x) {
    return NumberFormat.getCurrencyFormat().format(x);
  }

  // Set up the JS-callable signature as a global JS function.
  private native void publish() /*-{
    $wnd.formatAsCurrency =
        @org.example.yourcode.format.client.DateFormatterLib::formatAsCurrency(D);
  }-*/;

  // Auto-publish the method into JS when the GWT module loads.
  public void onModuleLoad() {
    publish();
  }
}
 
Posted on September 28, 2008 in GWT / JSNI / COMPILER

GWT: JSNI (Javascript Native Interface)

 
Posted on September 28, 2008 in GWT / JSNI / COMPILER

Tutorial – Writing Instant Messenger Application – GWT

 
Posted on September 28, 2008 in GWT / CHAT / COMET

GWT Chat Application – (GWT with xmpp protocol)

emite (XMPP & GWT) – This library implements the XMPP communications protocol using the BOSH technique with GWT. It also handles the XMPP instant messaging protocol and has a modular architecture to support other kinds of communication.

  • Stable, pure Java (no JS), portable library
  • Ready, full-featured and easy-to-use instant messaging implementation
  • Extensible architecture
  • Chat room support (Multi-User Chat)
  • Other XEPs, such as Chat State Notifications
  • Well tested (JUnit tests, coverage support)

Features:

  • Support for common features (chat, chat rooms, presence, roster)
  • Sound and visual notifications when new messages arrive
  • Drag & drop support to start a conversation and for chat room invitations
  • i18n support
  • Focused on being very usable
  • Based on ExtJS and GWT-Ext


emite Chat Demo – click here

emite Chat application download

 
Posted on September 27, 2008 in GWT / CHAT / COMET

Chattr: GWT Sample Chat application

This project is an interesting example of an instant messenger web application built using the Google Web Toolkit (GWT). It is aimed mainly at those who are working with, or want to learn about, GWT RPC (remote procedure calls).

The source for the instant messenger application is available in the Downloads section, and you can get either the ‘complete’ version of the app or a ‘working’ version.

 
Posted on September 26, 2008 in GWT / CHAT / COMET

GWT-Instant Messaging

Instant Messaging History from Wikipedia

Instant messaging applications began to appear in the 1970s on multi-user operating systems such as UNIX, initially to facilitate communication with other users logged in to the same machine, then on the local network, and subsequently across the internet. Some of these used a peer-to-peer protocol (e.g., talk, ntalk and ytalk), while others required peers to connect to a server (see talkers and IRC). Because all of these protocols were based inside a console window, most of those discovering the internet in the mid-1990s and equating it with the web tended not to encounter them.

In the last half of the 1980s and into the early 1990s, the Quantum Link online service for Commodore 64 computers offered user-to-user messages between currently connected customers which they called “On-Line Messages” (or OLM for short). Quantum Link’s better known later incarnation, America Online, offers a similar product under the name “AOL Instant Messages” (AIM). While the Quantum Link service ran on a Commodore 64, using only the Commodore’s PETSCII text-graphics, the screen was visually divided up into sections and OLMs would appear as a yellow bar saying “Message From:” and the name of the sender along with the message across the top of whatever the user was already doing, and presented a list of options for responding.[1] As such, it could be considered a sort of GUI, albeit much more primitive than the later Unix, Windows and Macintosh based GUI IM programs. OLMs were what Q-Link called “Plus Services” meaning they charged an extra per-minute fee on top of the monthly Q-Link access costs.

Modern GUI-based messaging clients began to take off in the late 1990s with ICQ (1996) and AOL Instant Messenger (AIM, 1997). AOL later acquired Mirabilis, the creators of ICQ. A few years later AOL was awarded two patents for instant messaging by the U.S. patent office. Meanwhile, other companies developed their own applications (Yahoo, MSN, Excite, Ubique, IBM), each with its own proprietary protocol and client; users therefore had to run multiple client applications if they wished to use more than one of these networks.

In 2000, an open source application and open standards-based protocol called Jabber was launched. Jabber servers could act as gateways to other IM protocols, reducing the need to run multiple clients. Modern multi-protocol clients such as Gaim, Trillian, Adium and Miranda can use any of the popular IM protocols without the need for a server gateway.

Recently, many instant messaging services have begun to offer video conferencing features, Voice Over IP (VoIP) and web conferencing services. Web conferencing services integrate both video conferencing and instant messaging capabilities. Some newer instant messaging companies are offering desktop sharing, IP radio, and IPTV to the voice and video features.

The term “instant messenger” is a service mark of Time Warner[2] and may not be used in software not affiliated with AOL in the United States. For this reason, the instant messaging client formerly known as GAIM or gAIM is now only to be referred to as Gaim or gaim.

 
Posted on September 26, 2008 in GWT / CHAT / COMET

Going Offline With GWT and Gears

Why?

Web apps are great… if you can connect:

  • Planes
  • Trains
  • Automobiles
  • Users may not want their data in the cloud
  • Improve response time for online apps

How?

GWT – Google Web Toolkit

Google Gears


  • Browser plugin (Firefox, Internet Explorer)
  • http://gears.google.com
  • Features:
    LocalServer (“programmable cache”)
    SQLite Database
    Worker Threads for JavaScript

Take Your App Offline

  1. Manifest file of your app’s resources
  2. Download resources
  3. Create database schema
  4. Go

Manifest File
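
The manifest is a JSON file listing the resources Gears should capture for offline use. A minimal, hypothetical example (the file names and version string are illustrative):

{
  "betaManifestVersion": 1,
  "version": "1.0",
  "entries": [
    { "url": "index.html" },
    { "url": "CompanyApp.js" },
    { "url": "style.css" }
  ]
}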

Load Resources

LocalServer localServer = new LocalServer();
final ManagedResourceStore managedRS =
    localServer.createManagedResourceStore("CompanyApp");
managedRS.setManifestURL("http://company.com/manifest.json");
managedRS.checkForUpdate();

new Timer() {
  public void run() {
    switch (managedRS.getUpdateStatus()) {
      case ManagedResourceStore.UPDATE_OK:
        statusLabel.setText("Ready for offline access");
        break;
      case ManagedResourceStore.UPDATE_CHECKING:
      case ManagedResourceStore.UPDATE_DOWNLOADING:
        schedule(500);
        break;
      case ManagedResourceStore.UPDATE_FAILED:
        statusLabel.setText("Unable to go offline");
        break;
    }
  }
}.schedule(500);

Gears Database

Create Database

  • "create table if not exists person (id integer, first_name text, last_name text)"
  • Datatypes: Integer, Real, Text, Blob
  • Constraints: "primary key", "not null", "unique", etc.

private Database m_database = null;

try {
  m_database = new Database("Test");
  ResultSet rs = m_database.execute("create table…"); // see the statement above
  rs.close();
} catch (Exception e) {
  // Gears not installed
}

Queries

String sql = "select id, first_name, last_name from person";
ResultSet rs = m_database.execute(sql);
ArrayList results = new ArrayList();
while (rs.isValidRow()) {
  PersonBean person = new PersonBean();
  person.setID(rs.getFieldAsInt(0));
  person.setFirstName(rs.getFieldAsString(1));
  person.setLastName(rs.getFieldAsString(2));
  results.add(person);
  rs.next();
}
rs.close();

Insert, Update

String args[] = new String[3];
args[0] = Integer.toString(person.getID());
args[1] = person.getFirstName();
args[2] = person.getLastName();
ResultSet rs = m_database.execute(
    "insert into person (id, first_name, last_name) values (?,?,?)", args);
rs.close();

args = new String[3];
args[0] = person.getFirstName();
args[1] = person.getLastName();
args[2] = Integer.toString(person.getID());
rs = m_database.execute(
    "update person set first_name=?, last_name=? where id=?", args);
rs.close();

Demo Source Code

Syncing Issues

  • Need GUIDs
  • Need timestamps (SQLite has no Date type)
  • Need a strategy (a minimal sketch follows this list):
    Last one wins
    Lock / Check out
    Let user decide
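
A hedged "last one wins" sketch, assuming the person table has been extended with hypothetical guid and updated_at (epoch-millisecond) columns and that PersonBean exposes matching getters:

String[] args = {
    person.getFirstName(),                 // first_name = ?
    person.getLastName(),                  // last_name = ?
    Long.toString(person.getUpdatedAt()),  // updated_at = ?
    person.getGuid(),                      // where guid = ?
    Long.toString(person.getUpdatedAt())   // and only if the stored row is older
};
ResultSet rs = m_database.execute(
    "update person set first_name=?, last_name=?, updated_at=? " +
    "where guid=? and updated_at < ?", args);
rs.close();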

Conclusion

Google Gears allows web applications to run offline
Google Web Toolkit makes it easy to program Gears

 
Posted on September 25, 2008 in Google Gears

Google Gears Tutorial

About:

This tutorial introduces you to Google Gears. I tried the tutorial in the Developer’s Guide and thought it could be simplified, so here is a very simple tutorial for Google Gears.

Day by day, web applications are replacing desktop applications. The power of web applications lies in availability (you are no longer tied to a particular PC), portability (any hardware, any OS) and simplicity (a simple navigation model: links, back, forward). But the problem with these online applications is that, well, they are available only online. Once you are disconnected from the internet, those applications are inaccessible.

Think of a mail client like Outlook or Thunderbird. It needs a network connection to send and receive mail, but even when you are not connected to the network, you can read the locally stored messages. You can compose new mails; they are saved locally, and when you connect to a network they are silently sent. Can we bring this kind of offline functionality to an online application? Yes. Welcome to Google Gears. The offline web application is no longer an oxymoron. It’s a practical reality.

How does this work?

  1. To start with, you need an online page.
  2. The online page should offer the user the option to switch to offline mode.
  3. When the user switches to offline mode, the page downloads the required files and saves them locally.
  4. Even when there is no internet connection, the user can keep working with the local files.
  5. Once the user goes back online, the modified data is synced with the server.

This sounds so simple. But the devil is in the details. Let us explore each step in detail.

First is the online page. Let’s have online.html as the initial page. On clicking the “Go Offline” button, we download this page and store it locally. This is done with the following JavaScript:

localServer = google.gears.factory.create("beta.localserver");
store = localServer.createManagedStore("first_offline_web_app");
store.manifestUrl = "manifest.json";
store.checkForUpdate();

In the first line, we create an instance of LocalServer from the Google Gears factory. When you are in offline mode, LocalServer serves your HTTP requests. The contents are stored either in a ResourceStore or in a ManagedResourceStore. The former is used to store not-so-frequently-updated resources like images. The latter is used to store a group of resources that might change frequently; Google Gears will automatically retrieve and update the local cache. The contents of a ManagedResourceStore are defined by a manifest file, which we set in line 3. The checkForUpdate method of the ManagedResourceStore initiates the update task and returns immediately. The update task, which runs in the background, will:

  1. Get the manifest file from the URL.
  2. Compare the version number with the locally stored version.
  3. If they are different, fetch the contents specified in the manifest.
  4. Request each URL with If-Modified-Since; if the URL reports a newer timestamp than the previous one, the contents are downloaded.
  5. Once all the URLs are fetched and saved, the version number of the local files is updated to the latest one.

A point to note is that Google Gears will go and fetch the latest version if it is DIFFERENT from the local one. So if the local version is “Version 5.9” and the remote version is “Version 4.0”, the local cache will still be updated.

Everything looks fine now. One last thing to do is to install Google Gears :-)

Google Gears is a browser plugin (for Firefox and Internet Explorer). Every URL request goes through this plugin. Whenever there is a LocalServer available for the URL and it is active, the contents are fetched locally; otherwise the normal behavior of fetching from the network resumes.

To detect whether Google Gears has been installed and to access its API, we need to include the gears_init.js file in the HTML.

 
Posted on September 25, 2008 in Google Gears

A Comparison of Push and Pull Techniques for AJAX


Abstract

AJAX applications are designed to have high user interactivity and low user-perceived latency. Real-time dynamic web data such as news headlines, stock tickers, and auction updates need to be propagated to the users as soon as possible. However, AJAX still suffers from the limitations of the Web’s request/response architecture which prevents servers from pushing real-time dynamic web data. Such applications usually use a pull style to obtain the latest updates, where the client actively requests the changes based on a predefined interval. It is possible to overcome this limitation by adopting a push style of interaction where the server broadcasts data when a change occurs on the server side. Both these options have their own trade-offs. This paper explores the fundamental limits of browser-based applications and analyzes push solutions for AJAX technology. It also shows the results of an empirical study comparing push and pull.

1. Introduction

The classical style of the web, called REST (Representational State Transfer) [5], requires all communication between the browser and the server to be initiated by the client, i.e., the end user clicks on a button or link and thereby requests a new page from the server. In this scheme, each interaction between the client and the server is independent of the other interactions. No ‘permanent’ connection is established between the client and the server, and the server maintains no state information about the clients. This scheme helps scalability, but precludes servers from sending asynchronous notifications.
There are, however, many use cases where it is important to update the client-side interface as soon as possible in response to server-side changes. An auction web site where the user needs to be alerted that another bidder has made a higher bid, a stock ticker, a news portal, or a chat-room where new messages are sent immediately to the subscribers are all examples of such use cases. Today, such web applications requiring real-time event notifications are usually implemented using a pull style, where the client component actively requests the state changes using client-side timeouts. An alternative to this is the push-based style, where the clients subscribe to their topic of interest, and the server publishes the changes to the clients asynchronously every time its state changes.

The recent breed of Web 2.0 applications dubbed AJAX (Asynchronous JavaScript and XML) [7] is designed to have high user interactivity and low user-perceived latency [13]. Introducing the push style into AJAX systems [10] can further improve the responsiveness of such applications towards end users. However, implementing such a push solution for web applications is not trivial, mainly due to the limitations of the HTTP protocol. This research explores the fundamental limits of browser-based applications in providing real-time data. We explore how real-time event notification can be added to today’s AJAX technology and compare the pull and push approaches by conducting an empirical study to find out the actual trade-offs of each approach.

This paper is further organized as follows. Section 2 shows current techniques to implement HTTP-based push and discusses the BAYEUX protocol [17], which tries to bring a standard to HTTP push. Section 3 explains our setup for the push-pull experiment. Section 4 presents the results of the empirical study involving push and pull. Section 5 discusses the results of the study. Section 6 summarizes related work in this area. Finally, Section 7 ends this paper with concluding remarks.

2. Web-based Real-time Event Notification

2.1. AJAX

AJAX [7] is an approach to web application development utilizing a combination of established web technologies: standards-based presentation using XHTML and CSS, dynamic display and interaction using the Document Object Model, data interchange and manipulation, asynchronous data retrieval using XMLHttpRequest, and JavaScript binding everything together. XMLHttpRequest is an API implemented by most modern web browser scripting engines to transfer data to and from a web server using HTTP, by establishing an independent communication channel in the background between a web client and server. It is the combination of these technologies that enables us to adopt principal software engineering paradigms, such as component- and event-based development, for web applications. Our earlier work [13] on an architectural style for AJAX, called SPIAR, gives an overview of the new way web applications can be architected using AJAX. Adopting AJAX has become a serious option not only for newly developed applications, but also for migrating [14] existing web sites to increase their responsiveness. The evolution of the web and the advent of Web 2.0, and AJAX in particular, are making the user experience similar to that of a desktop application. Well known examples include Gmail and the new version of Yahoo! Mail.
The REST style makes a server-initiated HTTP request impossible. Every request has to be initiated by a client, precluding servers from sending asynchronous notifications without a request from the client [11]. There are several solutions used in practice that still allow the client to receive (near) real-time updates from the server. In this section we will analyze some of these solutions.

2.2. HTTP Pull

Most AJAX applications check with the server at regular user-definable intervals known as Time to Refresh (TTR). This check occurs blindly, regardless of whether the state of the application has changed. In order to achieve high data accuracy and data freshness, the pulling frequency has to be high. This, in turn, induces high network traffic and possibly unnecessary messages. The application also wastes some time querying for the completion of the event, thereby directly impacting the responsiveness to the user. Ideally, the pulling interval should be equal to the Publish Rate (PR), i.e., the rate at which the state changes. If the frequency is too low, the client can miss some updates. This scheme is frequently used in web systems, since it is robust, simple to implement, allows for offline operation, and scales well to a high number of subscribers [8]. Mechanisms such as Adaptive TTR [3] allow the server to change the TTR, so that the client can pull at different frequencies, depending on the change rate of the data. This dynamic TTR approach in turn provides better results than a static TTR model [18]. However, it will never reach complete data accuracy, and it will create unnecessary traffic.

2.3. HTTP Streaming

HTTP Streaming is a basic and old method that was first introduced on the web in 1995 by Netscape, under the name ‘dynamic document’ [15]. HTTP Streaming comes in two forms, namely Page Streaming and Service Streaming.

Page Streaming

This method simply consists of streaming server data in the response of a long-lived HTTP connection. Most web servers do some processing, send back a response, and immediately exit. But in this pattern, the connection is kept open by running a long loop. The server script uses event registration or some other technique to detect any state changes. As soon as a state change occurs, it streams the new data and flushes it, but does not actually close the connection. Meanwhile, the browser must ensure the user interface reflects the new data, while still waiting for the response from the server to finish.
Service Streaming

Service Streaming relies on the XMLHttpRequest object. This time, it is an XMLHttpRequest connection that is long-lived in the background, instead of the initial page load. This brings some flexibility regarding the length and frequency of connections. The page will be loaded normally (one time), and streaming can be performed with a predefined lifetime for the connection. The server will loop indefinitely just as in page streaming, and the browser has to read the latest response (responseText) to update its content.

2.4. COMET and the BAYEUX Protocol

The application of the Service Streaming scheme under AJAX is now known as Reverse AJAX or COMET [16]. COMET enables the server to send a message to the client when an event occurs, without the client having to explicitly request it.

As a response to the lack of communication standards [13] for AJAX applications, the Cometd group1 released a COMET protocol draft called BAYEUX [17]. The BAYEUX message format is defined in JSON (JavaScript Object Notation) 2 which is a data-interchange format based on a subset of the JavaScript Programming Language. The protocol has recently been implemented and included in a number of web servers including Jetty3 and IBM Websphere. This protocol currently provides a connection type called Long Polling for HTTP push, which is implemented in Jetty’s Cometd library4.
Long Polling (also known as Asynchronous Polling) is a mixture of pure server push and client pull. After a subscription to a channel, the connection between the client and the server is kept open for a defined period of time (by default 45 seconds). If no event occurs on the server side, a timeout occurs and the server asks the client to reconnect asynchronously. If an event occurs, the server sends the data to the client and the client reconnects. This protocol follows the ‘topic-based’ [4] publish-subscribe scheme, which groups events according to their topic (name) and maps individual topics to distinct communication channels.
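
As a rough illustration only (not the paper’s Jetty/Cometd setup), a minimal thread-blocking long-polling servlet might look like the following; the class name, timeout, and delivery logic are hypothetical simplifications:

import java.io.IOException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class LongPollServlet extends HttpServlet {
  private final Object monitor = new Object();
  private String latestEvent; // written by the publisher, read by poll requests

  // Called by the publisher side when new data arrives.
  public void publish(String event) {
    synchronized (monitor) {
      latestEvent = event;
      monitor.notifyAll(); // wake up every waiting poll request
    }
  }

  protected void doGet(HttpServletRequest req, HttpServletResponse resp)
      throws IOException {
    String event;
    synchronized (monitor) {
      try {
        monitor.wait(45000); // hold the request open, BAYEUX-style 45 s timeout
      } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
      }
      event = latestEvent;
    }
    // A real implementation would track per-client delivery; here the latest
    // value (or an empty body, if nothing was ever published) is returned and
    // the client reconnects immediately either way.
    resp.setContentType("text/plain");
    resp.getWriter().write(event == null ? "" : event);
  }
}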

Figure 1. Experimental Environment

Participants subscribe to individual topics, which are identified by keywords. Like many modern topic-based engines, BAYEUX offers a form of hierarchical addressing, which permits programmers to organize topics according to containment relationships. It also allows topic names to contain wildcards, which offers the possibility to subscribe and publish to several topics whose names match a given set of keywords.

BAYEUX defines the following phases in order to establish a COMET connection:

1. Client performs a handshake with the server, receives a client id and list of supported connection types (IFrame, long-polling, etc.).
2. Client sends a connection request with its id and its preferred connection type.
3. Client later subscribes to a channel and receives updates.

In the remainder of this paper, we will use BAYEUX as the protocol for server push, and compare its performance with a pure pull based solution.

3. Experimental Design

In this section we will present our experimental setup.

3.1. Goals and Setup

The goals of our experiment consist of exploring the actual performance trade-offs of a COMET push implementation and comparing it to a pure pull approach on the web, by conducting a controlled empirical study. The experiment has to be repeatable for push and pull, but also for different input variables such as the number of users, the number of published messages, and the publish intervals.

We aim at achieving these goals by:
• creating a push application consisting of the client and the server parts,

• creating the same application for pull,

• creating an application which publishes a variable number of data items at certain intervals,

• mimicking many concurrent web clients operating on each application,

• gathering data and measuring: the mean time it takes for clients to receive a new published message, the load on the server, number of messages sent or retrieved, the effects of changing the data publish rate and number of users,

• analyzing and explaining the measurements found.

To see how the application server reacts to different conditions, we use different combinations of three variables:

• Number of concurrent users (100, 200, 350, 500, and 1000). The variation helps to find the maximum number of users the server can handle simultaneously, and 1000 seemed to be the upper bound for our test. This is because the server was already running at 100% CPU with 1000 users. We also tried 2000 and 5000 users; however, the server was so saturated that it was not able to send any updates anymore.

• Publish interval (5, 10, 15, 20, and 50 seconds): The frequency of the publishing updates is also important. Because of the long polling implementation in BAYEUX (see Section 2), the system should act more like pure pull when the publish interval is small, and more like pure push when it is bigger. We chose the 50-second interval because the client timeout of the BAYEUX protocol is 45 seconds, and we expect this interval to cause many disconnects, hence affecting the performance.

• Push or pull: We also added an option to our test script that allowed us to switch between pull and push. To keep the total number of combinations small, we set the pull interval to 15 seconds.

• Total number of messages: For each combination, we generated a total of 10 publish messages.

3.2. Tools

In order to simulate a high number of clients, we evaluated several open source solutions. Grinder5 seemed to be a good option, providing an internal TCPProxy, allowing us to record events sent by the browser and later replay them. It also provided scripting support, which allowed us to create a script that simulates a browser connecting to the push server, subscribing to a particular stock channel, and receiving push data continuously. In addition, Grinder has a built-in feature that allows us to create multiple threads of a simulating script. Because of the distributed nature of the simulated clients on different nodes, we used Log4J’s SocketServer6 to set up a logging server that listens for incoming log messages. The clients then send the log messages using the SocketAppender. We used TCPDump7 to record the number of TCP (HTTP) packets sent to and from the server. We also created a script that uses the UNIX top utility8 to record the CPU usage of the application server. This was necessary to observe the scalability and performance of each approach.

3.3. Sample Application

In order to respond to publish events and create client-side processing, we developed a Stock Ticker application.

The push version consists of a JSP page which uses Dojo’s Cometd library9 to subscribe to a channel and receive the stock data. We use Rico10 to give color effects to different data values on the web interface. For the server side, we developed a Java Servlet (PushServlet) that pushes the data into the browsers using the Cometd library. The PushServlet manages the client connections, receives data from the back-end, and publishes it to the clients.

The pull version also has one JSP page, but instead of Cometd it uses the normal bind method of Dojo to request data from the server. The pull nature was set using the standard setInterval JavaScript method. On the server, a PullServlet was made, which keeps an internal stock object (the most recent one) and simply handles every incoming request the usual way.

A Service Provider Java application was created, which uses the HTTPClient library11 to publish stock data to the servlets. The number of publish messages, as well as the interval at which the messages are published, are configurable.

Simulating clients

To simulate many concurrent clients we use the TCPProxy to record the actions of the JSP/Dojo client pages for push and pull and create scripts for each in Jython12. Jython is an implementation of the high-level, dynamic, object-oriented language Python, integrated with the Java platform. It allows the usage of Java objects in a Python script and is used by Grinder to simulate web users. In our tests, Jython scripts are actually imitating the JSP/Dojo client pages.

3.4. Testing Environment

We use the Distributed ASCI Supercomputer 3 (DAS3)13 to run various numbers of web clients on different distributed nodes. The DAS3 cluster at Delft University consists of 68 dual-CPU 2.4 GHz AMD Opteron DP 250 compute nodes, each having 4 GB of memory. The cluster is equipped with 1 and 10 Gigabit/s Ethernet, and runs Scientific Linux 4. The application server runs on a Pentium IV, 3 GHz (Hyper-Threading) machine with 1 GB of memory, and Linux Fedora as its operating system. We use Jetty 6.1.2 as our application server, because it is the only open-source Java EE application server that currently implements the COMET BAYEUX protocol. Jetty uses Java’s new IO package (NIO). The NIO package follows the event-driven design, which allows the processing of each task as a finite state machine (FSM). As the number of tasks reaches a certain limit, the excess tasks are absorbed in the server’s event queue. The throughput remains constant and the latency shows a linear increase. The event-driven design is supposed to perform significantly better than the thread-concurrency model [20, 21].
The connectivity between the server and DAS3 nodes is through a 100 Mbps ethernet connection.

3.5. Sequence of events

A routine test run consists of the following steps (See Figure 1):
1. The Service Provider publishes the stock data to the application server via an HTTP POST request, in which the creation date, the stock item id, and the stock data are specified.

2. For push: The application server pushes the data to all the subscribers of that particular stock. For pull: the application server updates the internal stock object, so that when clients send pull requests, they get the latest data.

3. Each simulated client logs the responses (after some calculation) and sends them to the statistics server. Grinder also processes the data from each client and sends the statistics, such as response time, to the statistics server, which runs on a separate machine.

It is worth noting that we use a combination of the 64 DAS3 nodes and Grinder threads to simulate different numbers of users.

3.6. Data Analysis

We created a Data Analyzer that reads the data from Grinder and Logging Server logs and writes all the info into a database using Hibernate14. This way, different views of the data can be obtained easily using queries to the database.

4. Results

In the following subsections, we present the results which we obtained using the combination of variables mentioned in 3.1. Figures 2–5 depict the results. Note that for each number of clients on the x-axis, the five publish intervals in seconds (5, 10, 15, 20, 50) are presented.

Figure 2. Mean publish triptime.

4.1. Publish triptime

We define triptime as follows:

Triptime = | Data Creation Date − Data Receipt Date |

Data Creation Date is the date on the Service Provider (Publisher) the moment it creates a message, and Data Receipt Date is the date on the client the moment it receives the message. Triptime shows how long it takes for a publish message to reach the client and can be used to find out how fast the client gets notified with the latest events. Note that it is very important to synchronize the datetime for both the Service Provider and the clients.

Figure 2 shows the mean publish triptime versus the total number of clients, for both pull and push techniques.

4.2. Server Performance

Since push is stateful, we expect it to have some administration costs on the server side, using resources. In order to compare this with pull, we measured the CPU usage for both approaches. Figure 3 shows the mean server CPU usage as the number of clients grows, for push and pull.

Figure 3. Server application CPU usage.

4.3. Received Publish Messages

To see how pull compares to pure push in message overhead, we published a total of 10 messages and counted the total number of (non-unique) messages received on the client side. Figure 4 shows the mean number of received non-unique publish items versus the total number of clients, for both push and pull. Note that if a pull client makes a request while there is no new data, it will receive the same item multiple times. This way a client might receive more than 10 messages.

4.4. Received Unique Publish Messages

It is also interesting to see how many of the 10 messages we have published reach the clients. This way we can determine if the clients miss any publish items. Figure 5 shows the mean number of received unique publish items versus total number of clients.

5. Discussion

5.1. Data Coherence

We define a piece of data as coherent if the data on the server and the client are synchronized. We check the data coherence of both approaches by measuring the triptime. As we can see in Figure 2, the triptime is at most 1,750 milliseconds with push. With pull, this can go up to 25 seconds. This shows us that pull is not as responsive as push, and if we need high data coherence, we should always choose the push approach. In Figure 2 we also see that with 1000 users and a publish interval of 50 seconds, the triptime increases noticeably. With such a big interval, no response is being sent to the client, and the client is waiting for data, thus occupying a thread. This makes it hard for other clients to reconnect and get new data, which increases the triptime. With an interval of 5 seconds, the triptime is lower. This is because the clients are quickly receiving responses and disconnecting. This makes some threads available, which makes it possible for other clients to connect.

Figure 4. Mean Number of Received Publish Items.

5.2. Server Performance

One of the main issues of all distributed systems, and in particular of web-based applications, is scalability and performance. As depicted in Figure 3, the pull style has a much better performance compared to push, and this is valid even for a small number of users (e.g., 100). With push, when the number of clients is increased to 350, the server is practically saturated, i.e., the CPU is running at almost 100%. This is mainly due to the fact that the push server has to maintain all the state information about the clients and also manage the corresponding threads and connections. A push server based on long polling also needs to generate numerous request/response cycles to keep the connection alive, which impacts the resources. With pull, only the publish interval has a direct measurable effect on the performance. This shows us that if we want to use a push implementation even for a couple of hundred users, some load balancing solution and multiple servers are needed.

5.3. Network Performance

As we mentioned in Section 2.2, in a pure pull system, the pulling frequency has to be high to achieve high data accuracy and data freshness. If the frequency is higher than the data generation interval, the pull client will pull the same data more than once, leading to some overhead.

In Figure 4 we see that with a publish interval of 50 seconds, pull clients receive approximately 35 messages, while we published only 10. In the same figure we see that push clients received approximately a maximum of 10 messages. This means that more than 2/3 of the total number of pull requests were unnecessary. Furthermore, we see that the number of messages received does not depend on the number of clients.

If we look at the push graph in Figure 5, we notice that as the number of users increases, not all clients receive all 10 messages. The number of correctly received messages is quite good with 100 users but, unlike the pure pull approach, it begins to degrade as the number of users increases. This shows that Jetty’s Cometd implementation is not yet stable and scalable enough.

5.4. Data Misses


According to Figure 5, if the publish interval is 20 or 50 seconds (i.e., larger than the pull interval), the client receives all the messages. However, as we discussed in the previous subsection, this generates an unnecessary number of messages. Looking at the figure again, we see that when the pull interval is larger than the publish interval, the clients will miss some updates, regardless of the number of users. So, with the pull approach, we need to know the exact publish interval. However, the publish interval tends to change, which makes a pure pull implementation difficult. With push, when the number of clients is small, the client will receive all the messages. However, if the number of clients increases and the publish interval is large, some data loss starts to occur. This is again due to the high number of idle threads, which affects the server performance.

5.5. Threats to Validity

We use several tools to obtain the data. The shortcomings and the problems of the tools themselves can have an effect on the outcome. In addition, implementation issues in the application server Jetty 6.1.2 might lead to the high CPU usage.

Another threat is the pull interval. We use only one pull interval, namely 15 seconds. Different pull intervals might have an influence on the performance of the server and the data coherence. Clients can also have different environments (i.e., the browser they use, the bandwidth they have, etc.). This can have an influence on the triptime variable. In order to avoid that, we used the same test script in all the simulated clients and allocated the same bandwidth.

Figure 5. Mean Number of Received Unique Publish Items.

Time can also be a threat to validity. To measure the triptime, the difference between the data creation date and the data receipt date is calculated. However, if the times on the server and the clients differ, this might give a false triptime. In order to prevent this problem, we made sure that the time on the server and client machines is synchronized by using the same time server.

We measure the data coherence by taking the triptime. However, the data itself must be ‘correct’, i.e., the received data must be the same data that has been sent by the server. We rely on HTTP in order to achieve this ‘data correctness’. However, additional experiments must include a self-check to ensure this requirement.

6. Related Work

There are a number of papers that discuss server-initiated events, known as push; however, most of them focus on client/server distributed systems and non-HTTP multimedia streaming or multicasting with a single publisher [1, 9, 6, 2, 19]. The only work that focuses on AJAX is the white-paper of Khare [10]. Khare discusses the limits of the pull approach for certain AJAX applications and mentions several use cases where a push application is much better suited. However, the white-paper does not mention possible issues with this push approach such as scalability and performance. Khare and Taylor [11] propose a push approach called ARRESTED. Their asynchronous extension of REST, called A+REST, allows the server to broadcast notifications of its state changes. The authors note that this is a significant implementation challenge across the public Internet.

The research of Acharya et al. [1] focuses on finding a balance between push and pull by investigating techniques that can enhance the performance and scalability of the system. According to the research, if the server is lightly loaded, pull seems to be the best strategy. In this case, all requests get queued and are serviced much faster than the average latency of publishing. The study is not focused on HTTP.

Bhide et al. [3] also try to find a balance between push and pull, and present two dynamic adaptive algorithms: Push and Pull (PaP), and Push or Pull (PoP). According to their results, both algorithms perform better than pure pull or push approaches. Even though they use HTTP as the messaging protocol, they use custom proxies, clients, and servers. They do not address the limitations of browsers, nor do they perform load testing with a high number of users.

Hauswirth and Jazayeri [8] introduce a component and communication model for push systems. They identify components used in most Publish/Subscribe implementations. The paper mentions possible problems with scalability, and emphasizes the necessity of a specialized, distributed, broadcasting infrastructure.

Eugster et al. [4] compare many variants of publish/subscribe schemes. They identify three alternatives: topic-based, content-based, and type-based. The paper also mentions several implementation issues, such as events, transmission media and qualities of service, but again the main focus is not on web-based applications. Martin-Flatin [12] compares push and pull from the perspective of network management. The paper mentions the publish/subscribe paradigm and how it can be used to conserve network bandwidth as well as CPU time on the management station. It suggests the ‘dynamic document’ solution of Netscape [15], but also a ‘position swapping’ approach in which each party can act as both a client and a server. This solution, however, is not applicable to web browsers. Making a browser act like a server is not trivial and it induces security issues.

As far as we know, there has been no empirical study conducted to find out the actual trade-offs of applying pull/push on browser-based or AJAX applications.

7. Conclusion

In this paper we have compared pull and push solutions for achieving web-based real time event notification. The contributions of this paper include the experimental design, a reusable implementation of a sample application in push and pull style as well as a measurement framework, and the experimental results.
Our experiment shows that if we want high data coherence and high network performance, we should choose the push approach. However, push brings some scalability issues: the server application CPU usage is 7 times higher than with pull. According to our results, the server starts to saturate at 350-500 users. For larger numbers of users, load balancing and server clustering techniques are unavoidable.

With the pull approach, achieving total data coherence with high network performance is very difficult. If the pull interval is higher than the publish interval, some data misses will occur. If it is lower, network performance will suffer. Pull performs well only if the pull interval equals the publish interval. However, in order to achieve that, we need to know the exact publish interval beforehand, and the publish interval is rarely static and predictable. This makes pull useful only in situations where the data is published frequently according to some pattern.

These results allow engineers to make rational decisions concerning key parameters such as pull and push intervals, in relation to, e.g., the anticipated number of clients. Furthermore, the experimental design allows them to repeat similar measurements for their own (existing or to be developed) applications.

Our future work includes adopting a hybrid approach that combines pull and push techniques for AJAX applications to gain the benefits of both approaches. We also intend to extend our testing experiments to different versions of Jetty and alternative push server implementations, for example ones that are based on holding a permanent connection (e.g., Lightstreamer15) as opposed to the long polling approach discussed in this paper. Additional experiments with a variety of pull intervals are also desired.

Acknowledgments

Partial support was received from SenterNovem, project Single Page Computer Interaction (SPCI), in collaboration with Backbase.

—————————————

[1] http://www.cometd.com
[2] http://www.json.org
[3] http://www.mortbay.org
[4] http://www.mortbay.org
[5] http://grinder.sourceforge.net
[6] http://logging.apache.org/log4j/docs/
[7] http://www.tcpdump.org/
[8] http://www.unixtop.org/
[9] http://dojotoolkit.org/
[10] http://www.openrico.org/
[11] http://jakarta.apache.org/commons/httpclient/
[12] http://www.jython.org
[13] http://www.cs.vu.nl/das3/overview.shtml
[14] http://www.hibernate.org
[15] http://www.lightstreamer.com
—————————————

References


[1] S. Acharya, M. Franklin, and S. Zdonik. Balancing push and pull for data broadcast. In SIGMOD ’97: Proceedings of the 1997 ACM SIGMOD international conference on Management of data, pages 183–194. ACM Press, 1997.
[2] M. Ammar, K. Almeroth, R. Clark, and Z. Fei. Multicast delivery of web pages or how to make web servers pushy. Workshop on Internet Server Performance, 1998.
[3] M. Bhide, P. Deolasee, A. Katkar, A. Panchbudhe, K. Ramamritham, and P. Shenoy. Adaptive push-pull: Disseminating dynamic web data. IEEE Trans. Comput., 51(6):652–668, 2002.
[4] P. T. Eugster, P. A. Felber, R. Guerraoui, and A.-M. Kermarrec. The many faces of publish/subscribe. ACM Comput. Surv., 35(2):114–131, 2003.
[5] R. T. Fielding and R. N. Taylor. Principled design of the modern web architecture. ACM Trans. Inter. Tech., 2(2):115–150, 2002.
[6] M. Franklin and S. Zdonik. data in your face: push technology in perspective. In SIGMOD ’98: Proceedings of the 1998 ACM SIGMOD international conference on Management of data, pages 516–519. ACM Press, 1998.
[7] J. Garrett. Ajax: A new approach to web applications. Adaptive Path: http://adaptivepath.com/publications/essays/archives/000385.php, 2005.
[8] M. Hauswirth and M. Jazayeri. A component and communication model for push systems. In ESEC/FSE ’99, pages 20–38. Springer-Verlag, 1999.
[9] K. Juvva and R. Rajkumar. A real-time push-pull communications model for distributed real-time and multimedia systems. Technical Report CMU-CS-99-107, School of Computer Science, Carnegie Mellon University, January 1999.
[10] R. Khare. Beyond Ajax: Accelerating web applications with real-time event notification. Knownow.com, white-paper.
[11] R. Khare and R. N. Taylor. Extending the representational state transfer (REST) architectural style for decentralized systems. In ICSE ’04: Proceedings of the 26th International Conference on Software Engineering, pages 428–437. IEEE Computer Society, 2004.
[12] J.-P. Martin-Flatin. Push vs. pull in web-based network management. http://arxiv.org/pdf/cs/9811027, 1999.
[13] A. Mesbah and A. van Deursen. An architectural style for Ajax. In WICSA ’07: Proceedings of the 6th Working IEEE/IFIP Conference on Software Architecture, pages 44– 53. IEEE Computer Society, 2007.
[14] A. Mesbah and A. van Deursen. Migrating multi-page web applications to single-page Ajax interfaces. In CSMR ’07: Proceedings of the 11th European Conference on Software Maintenance and Reengineering, pages 181–190. IEEE Computer Society, 2007.
[15] Netscape. An exploration of dynamic documents. http://
wp.netscape.com/assist/net sites/pushpull.html, 1996.
[16] A. Russell. Comet: Low latency data for the browser. http:
//alex.dojotoolkit.org/?p=545.
[17] A. Russell, G. Wilkins, and D. Davis. Bayeux – a JSON protocol for publish/subscribe event delivery protocol 0.1draft3. http://svn.xantus.org/shortbus/trunk/ bayeux/bayeux.html, 2007.
[18] R. Srinivasan, C. Liang, and K. Ramamritham. Maintaining temporal coherency of virtual data warehouses. In RTSS ’98: Proceedings of the IEEE Real-Time Systems Symposium, page 60. IEEE Computer Society, 1998.
[19] V. Trecordi and G. Verticale. An architecture for effective push/pull web surfing. In 2000 IEEE International Conference on Communications, volume 2, pages 1159–1163, 2000.
[20] M.Welsh, D. Culler, and E. Brewer. Seda: an architecture for well-conditioned, scalable internet services. SIGOPS Oper. Syst. Rev., 35(5):230–243, 2001.
[21] M. Welsh and D. E. Culler. Adaptive overload control for busy internet servers. In USENIX Symposium on Internet Technologies and Systems, 2003.

 
1 Comment

Posted by on September 25, 2008 in Push/Pull techniques

 

Tags:

HTTP Streaming and Internet Explorer

Michael Carter wrote about the trials and tribulations of getting HTTP streaming working in IE. He knew that the htmlfile ActiveX object was the key, but he kept getting errors.

Then he stumbled on the solution:

We happen to be in luck. Changing JavaScript variables, including Array functions, seems okay as far as the gods of htmlfile streaming are concerned. So our solution is to simply append event payloads to an array from within the iframe, have the parent window use a timer loop ('setInterval') to periodically check the array for new messages, and then pass them to the callback. It's not as elegant as I'd like… but it beats all the other techniques I've tried.

Why not just call a function attached to the parent window, you wonder? It turns out htmlfile's iframe doesn't care where the function object lives; instead, it cares which thread is used to execute the code. The htmlfile thread is a capricious beast, and will rebel when employed to do too much DOM work. The effect of setInterval is to move the actual DOM manipulations to a thread that is perfectly safe for that sort of scripting. This fix works for IE 5.01+.
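
A minimal sketch of the array-and-timer approach described in the quote above (my own illustration, not Michael's code; names like pendingEvents and startDispatchLoop are made up):

var pendingEvents = [];  // the streaming iframe appends incoming payloads here

function startDispatchLoop(callback) {
  setInterval(function () {
    while (pendingEvents.length > 0) {
      // any DOM work triggered by the callback now runs on this page's
      // timer thread rather than the fragile htmlfile thread
      callback(pendingEvents.shift());
    }
  }, 50); // 50 ms is an arbitrary check interval
}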

Michael just put up another post that details the solution after learning about the nuances of IE. He ended up using the following code:

JAVASCRIPT:

function connect_htmlfile(url, callback) {
  // no more 'var transferDoc...' -- keep the reference global, presumably
  // so the htmlfile object (and its connection) is not garbage collected
  transferDoc = new ActiveXObject("htmlfile");
  transferDoc.open();
  transferDoc.write(
    "<html><script>" +
    "document.domain='" + document.domain + "';" +
    "</script></html>");
  transferDoc.close();
  var ifrDiv = transferDoc.createElement("div");
  transferDoc.body.appendChild(ifrDiv);
  ifrDiv.innerHTML = "<iframe src='" + url + "'></iframe>";
  transferDoc.callback = callback;
}

And in the iframe:

HTML:

<script>parent.callback(["arbitrary", "data", ["goes", "here"]]);</script>
 
Leave a comment

Posted by on September 25, 2008 in COMET

 

Tags:

Long Polling vs Forever Frame

One of the oft-cited advantages of forever frame over long polling is that it does not suffer from the 3x max latency issue. This occurs when an event arrives at the server the instant after a long poll response has been sent to the client: the event must wait for that response to reach the client and for the subsequent long poll request to come back before a response containing the event can be sent. Thus while the average latency of long polling is very good (roughly one one-way network transit), the theoretical maximum latency is three times that: the outgoing response, the new request, and finally the response carrying the event.
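
As a back-of-the-envelope illustration of that arithmetic (my own sketch, not from the original post; longPollLatency is a made-up name):

// Illustrative arithmetic only: typical vs. worst-case long-poll latency
// for a given one-way network transit time, in milliseconds.
function longPollLatency(oneWayMs) {
  return {
    typical: oneWayMs,       // event arrives while a long poll is held open
    worstCase: 3 * oneWayMs  // response out + new long-poll request in + response with event
  };
}

// e.g. a 50 ms one-way transit gives ~50 ms typical, ~150 ms worst case
// console.log(longPollLatency(50));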

Forever frame is said not to suffer from this issue, as it can send a response back at any time, even the instant after a previous event has been sent. Strictly speaking, that is not always the case, as forever frame implementations also need to terminate responses and issue new requests, at the very least to prevent memory leaks on the client. But for the purposes of this musing, let’s assume that it is true.

Does this theoretical lowering of the maximum latency actually enable any applications that would be impossible with the 3x max latency? For example, could forever frame be used to implement a first person shooter that would be unplayable if long polling occasionally injected that 3x latency (typically just as you charge, lagging, into a room full of enemy guns…)?

Unfortunately, I think not. The problem is that Comet will never be suitable for any application that cannot accept a bit of jitter in its latency. Comet can achieve great average latency, often under 100 ms over the internet, but it will always carry the possibility of an occasional long delay.

The reason is that TCP/IP is, by definition, the transport that any standards-based Comet technique will use, and TCP/IP simply cannot guarantee constant low latency. Like long polling, TCP/IP gives very good average latency, but all it takes is one dropped packet to trigger a TCP timeout and retransmission, which by definition costs at least 3x the network traversal time (the sender must wait at least 2x the network time before deciding that the ACK will never come, and then it must resend). TCP has plenty of tricks and optimizations designed to reduce the latency of missed packets (e.g. fast retransmit, piggybacked ACKs), but they rely on other traffic being sent in order to detect the dropped packet quickly. If a lone event is sent in a single packet, then at least 3x latency will result. One could even argue that the client's need to send a new poll request with long polling provides a convenient data packet on which an ACK can piggyback, and could actually improve latency in some situations.

So any application that cannot tolerate 3x max latency is an application that should not be considered for comet. Comet is ideal for applications that thrive on good average latency, but that can tolerate the odd delay. For such applications, long polling is a good match and the theoretical latency gains of forever frame are probably just that—theoretical.

 
Leave a comment

Posted by on September 25, 2008 in COMET

 

Tags:

The Future of Comet: HTML 5’s Server-Sent Events

http://cometdaily.com/2008/01/10/the-future-of-comet-part-2-html-5’s-server-sent-events/ by Jacob Rus

HTML 5 and Comet

Comet doesn’t have to be a hack. Currently, Comet relies on undocumented loopholes and workarounds, each one with some drawbacks. We can make Comet work effectively in every browser, using streaming transports on subdomains of the same second-level domain, or using script tag long polling across domains. But this leaves Comet developers implementing (and more frustratingly, debugging) several transports across several browsers. Traps are numerous and easy to stumble into.

Recognizing the benefit of a straight-forward, standardized Comet technique, the WHATWG's HTML 5 specification includes sections about server-sent events and a new event-source element, which aim to de-hackify Comet. For now, only Opera has implemented these, and its implementation remains incomplete, but both Mozilla and Apple have committed to HTML 5, and an implementation is at least in the works for Safari.

Every Comet developer should familiarize himself with these specifications, because they provide the best streaming transport for Opera since version 8.5, and can only grow in importance as browsers adopt them. Furthermore, HTML 5 is a work-in-progress, and now is the best time to provide feedback to the WHATWG. Those interested in the future of Comet should comment now, before the specifications have been repeatedly implemented and can no longer be easily modified.

Basics of the server-sent events data format

The server-sent events portion of HTML 5 defines a data format for streaming events to web browsers, and an associated DOM API for accessing those events, by attaching callback functions to particular named event types. The format, which is sent by a Comet server with content type application/x-dom-event-stream, is straight-forward. Each event is made up of key-value pairs. For example:

key 1: this is the value associated with key 1
key 2: this value for key 2 stretches
key 2: over three lines, which are concatenated
key 2: by a browser supporting server-sent events
; these lines, which each begin with a `;`, are
; comments, and are ignored by the browser

key 1: after a pair of newlines, this is a new
key 1: event, with its own set of key-value pairs
; the following key has an empty corresponding value
empty key

...

Each key-value pair is known as a field, and several special fields are defined in the specification. In particular, the Event field names the event as a specific type, and the browser can attach different callback functions to specific named event types. If the Event field is omitted, the event type is assumed to be message. Also, though it is only mentioned in examples, the data field is the natural place for our event payloads.

So an event stream sent by Orbited might look something like:

Event: orbited
data: this is our event payload for the
data: first Orbited event

Event: orbited
data: this is the payload for the second
data: Orbited event

Event: ping
data: \o/

Event: orbited
data: here's the third Orbited event

...

The specification mandates that all files are UTF-8, but considers any of carriage return (\r), line feed (\n), or both (\r\n) an acceptable newline—in the browser, multiple lines of a single field are joined with \n alone.

The event-source element

Server-sent events are received by objects supporting the RemoteEventTarget interface, which merely means that they support two methods, addEventSource and removeEventSource, each of which takes a URI string as input, and adds or removes it, respectively, from the list of event sources for the object.

In addition to any JavaScript objects supporting server-sent events, HTML 5 defines an event-source HTML element, which declaratively indicates the use of a Comet source in a web page. As well as supporting the addEventSource and removeEventSource methods, the event-source element has a src attribute. When the src is changed, the event-source closes its previous connection, and opens a new Comet connection to the new URI.

Cross-domain usage

HTML 5 allows connections across domains through use of the Access-Control HTTP header, as defined in a separate W3C specification (which applies identically to normal XHR usage and to server-sent events). A request is made for a resource as usual, but if that resource on the server (in this case, an event stream from a Comet server) includes the Access-Control HTTP header with values allowing the use of the resource, browsers will treat it as if it came from the same domain as the main document. If the header is not found, or if it denies the requested use, browsers will behave as if the resource does not exist (so that denied requests reveal no information about the resource).

Additionally, HTML 5 defines a “cross-document messaging” mechanism, which allows cooperation between documents (perhaps in iframes, etc.) from different domains, using a postMessage function.

Opera’s implementation

Opera has implemented a subset of these HTML 5 technologies since version 8.5: the event-source element is supported from 8.5 onwards, and recent versions add pure-JavaScript interfaces as well.

To support versions back to 8.5, we must create event-source elements, set their src attribute, and attach them to the document. Then we can add “event listener” callback functions to the event-source for each type of named event in the event stream. In Orbited, we use the following JavaScript to accomplish this:

connect_server_sent_events: function () {
  var es = document.createElement('event-source');
  es.setAttribute('src', this.url);
  document.body.appendChild(es);

  var event_cb = this.event_cb;
  es.addEventListener('orbited', function (event) {
    var data = eval(event.data);
    if (typeof data !== 'undefined') {
      event_cb(data);
    }
  }, false);
},

Where this.event_cb is some callback function, a property of the Orbited object, which will receive the event payload of every orbited event. By default we send payloads in JSON format, so evaling each yields a JavaScript object.
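
A variant that avoids eval is sketched below. This is my own illustration, not Orbited's code; attachJsonListener is a made-up name, and the sketch assumes a JSON parser is available (natively or via the json2.js library).

function attachJsonListener(eventSourceElement, eventCb) {
  eventSourceElement.addEventListener('orbited', function (event) {
    // JSON.parse throws on malformed payloads, unlike eval, which will
    // happily execute whatever arrives
    var data = JSON.parse(event.data);
    if (typeof data !== 'undefined') {
      eventCb(data);
    }
  }, false);
}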

Also, it is quite easy to test browsers for server-sent events support from JavaScript, using code something like:

if ((typeof window.addEventStream) === 'function') {
  // ... browser supports server sent events
} else {
  // ... no support.  fall back on another transport
}

Caveats

Opera’s implementation differs from the specification in a few key ways, however, so Comet application authors must be careful.

  • Opera ignores events without a defined Event field, rather than assuming its value to be message. It is possible to include multiple named event types, attaching separate event listeners for each type.
  • Event payloads must be in the data field. A callback attached to the event source will receive an event object as input, with event.data set to the value of the data field.
  • At present, Opera only supports linefeed (\n) characters as newlines in event streams, and will silently fail if carriage returns are used in newlines (\r\n and \r are not supported newlines).

Legacy support

Even though Opera is the only browser to natively implement server-sent events, it is possible for a Comet server to treat the last several versions of Safari and Firefox as if they did, with a few caveats. We can use a modified XHR streaming technique (as described in part 1) and implement our own parsing for the application/x-dom-event-stream format (at least the version supported by Opera) in JavaScript. To pull this off we must make one concession to Safari: 256 bytes of dummy data at the beginning of our event stream, so that Safari will begin incremental parsing.
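
To give a rough idea of what that client-side parsing involves, here is a minimal sketch (my own, not Orbited's implementation; parseEventStream and dispatch are made-up names). It handles only \n newlines, the Event and data fields, and ; comments:

function parseEventStream(text, dispatch) {
  var events = text.split('\n\n');                 // a blank line ends an event
  for (var i = 0; i < events.length; i++) {
    var lines = events[i].split('\n');
    var type = 'message';                          // the spec's default event type
    var data = [];
    for (var j = 0; j < lines.length; j++) {
      var line = lines[j];
      if (line === '' || line.charAt(0) === ';') { // skip blanks and comments
        continue;
      }
      var colon = line.indexOf(':');
      var key = colon === -1 ? line : line.substring(0, colon);
      var value = colon === -1 ? '' : line.substring(colon + 1).replace(/^ /, '');
      if (key === 'Event') {
        type = value;
      } else if (key === 'data') {
        data.push(value);                          // multi-line values join with '\n'
      }
    }
    if (data.length > 0 || type !== 'message') {
      dispatch(type, data.join('\n'));
    }
  }
}

// dispatch('orbited', payload) would then hand the payload to the usual callback.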

Our Comet server can then be blissfully unaware that these browsers have no real server-sent events support, and we can get away with implementing only two transports on the server side: the iframe transport (supported by most browsers since at least 1999 or 2000 and, using the htmlfile ActiveX object, capable of a flawless user experience back to Internet Explorer 5.01), and server-sent events.

To make them still more compatible, we might be able to build support for event-source into Firefox and Safari using JavaScript alone, creating objects that support the addEventSource and removeEventSource methods and are capable of dispatching named event types to event listeners. Adding such support would require deeper magic than I currently possess as a journeyman JavaScript hacker; if any masters can shed insight, please comment here, or shoot an email to the Orbited mailing list.

In Orbited, we have not yet tried reducing XHR streaming to server-sent events for Firefox and Safari, but it is on the to-do list.

Arbitrary DOM events and controversy

In addition to this straight-forward Comet transport, server-sent events provide more general, and potentially powerful, capabilities, whose complexity is somewhat controversial. Indeed, the whole server-sent events section of the specification currently includes a notice that it may be removed. I expect that it will change at least somewhat from its current form before the specification is finished.

The specification demands that every element supporting the EventTarget interface should also support the RemoteEventTarget interface. A Comet server can thereby send arbitrary DOM events to page elements. This includes events such as mouse clicks, key presses, etc. As specified, event streams can also include a Target field, which can target events to the top level of the document, or to specific element IDs. And browser vendors could, if they desired, add further objects supporting the RemoteEventTarget interface, potentially enabling declarative Comet applications.
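
Going only by the fields named above, a stream fragment targeting a remote event at a particular element might look something like the following. The exact semantics are up to the draft specification, so treat this as a hypothetical illustration (the element ID is invented):

; hypothetical: deliver a remote "click" event to the element with id "save-button"
Event: click
Target: save-button
data: remote click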

So the theory goes. Many of us Comet developers, including Michael Carter and Alex Russell (in a recent IRC discussion), remain unconvinced that there is a benefit in pushing application logic from browser-side JavaScript to the server side. We expect that real-time applications will always have a significant need for client-side logic, so the specification may as well embrace that—and remain as simple as possible. Specification simplicity benefits not only browser vendors who must build conforming and compatible implementations but also Comet developers who must learn its ins and outs.

But I hope server-sent events are not scrapped altogether. Comet would benefit greatly from specification and intentional browser support. The version supported by Opera strikes a reasonable balance, meeting the needs of Comet developers without going beyond those needs.

Conclusion

With improving browser support, creation of more high-quality open-source Comet servers, better developer resources such as Comet Daily, and the examples provided by big-name Comet applications, Comet’s future looks bright. Any standardization efforts by browser vendors that further reduce barriers to entry will only lead to more and better Comet applications.

HTML 5 is coming, with at least Opera, Apple, and Mozilla committed to its adoption, and one way or another it will include improved Comet support. But the Comet-related portions of HTML 5 are still very much unfinished, and in need of feedback from browser vendors and from us, the community of Comet developers. The discussions are transparent and easy to join, and the time for action is now. We should figure out what we need and tell the W3C and the WHATWG about it. They are all ears.

 
1 Comment

Posted by on September 25, 2008 in COMET

 

Tags:

Comet is Always Better Than Polling

Comet techniques are advocated when your application needs low latency events delivered from the server to the browser. Comet can deliver sub-second latency for messages, making possible web applications like chat, games, and real-time monitoring of prices and states.

It is often asserted that for applications that don’t need low latency (e.g. email readers), traditional polling is a better alternative. However, some rudimentary analysis shows this assertion to be wrong, and that Comet techniques can be applied to all required latencies and event rates to provide improved data rates and average latencies.

For this analysis, I have considered polling vs. the long-polling technique where a server may hold a poll request for a period of time while waiting for events/messages to be delivered to the browser. In order to ensure that we are comparing apples with apples and oranges with oranges, I have compared configurations that provide the same maximum latency. For example, for a maximum latency of 10s, the polling technique must issue a poll every 10s and the average latency is half that. For the Comet technique long polling, a long poll needs to be issued 10s after the last long poll completes, but the server may hold the long poll for up to 300s while waiting for an event.

The attached spreadsheet contains the calculations, which I have graphed for 1s, 10s and 100s maximum latency at message rates of 1, 10, 100 and 1000 messages per second for 1000 users. The results show that the Comet long polling technique uses less bandwidth than polling in all situations, and uses significantly less bandwidth when the average period between messages is longer than the maximum latency.

Not only does Comet long-polling provide superior data rates, it also provides superior average latency. The Comet technique allows the average latency to be lower than the maximum latency because once the pause between polls is complete, the long poll is ready to respond immediately to events.

This analysis shows that for the worst case, when the message rate is high, load and latency for Comet long-polling are identical to traditional polling, and for most cases the load and latency are significantly better than traditional polling.

The calculations for these graphs are in this Comet vs Polling spreadsheet. The assumptions are that all messages are equal in size (150 bytes) and that message arrival times are random but uniformly distributed. A maximum long poll timeout of 240s is assumed. The Y axis of the graphs is logarithmic, so large differences appear smaller than they are.
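
For readers without the spreadsheet to hand, here is a rough simplification of the request-rate side of the comparison. This is my own model, not the spreadsheet's exact formulas, and requestsPerSecond is a made-up name; the message rate is taken per client.

// Rough request-rate comparison for plain polling vs. long polling.
// users: number of clients; msgsPerUserPerSec: message rate per client;
// maxLatencySec: acceptable worst-case delay; holdTimeoutSec: long-poll hold limit.
function requestsPerSecond(users, msgsPerUserPerSec, maxLatencySec, holdTimeoutSec) {
  // Plain polling: every client polls once per maxLatencySec, messages or not.
  var polling = users / maxLatencySec;

  // Long polling: each cycle is the pause between polls (== maxLatencySec)
  // plus however long the server holds the request, i.e. until a message
  // arrives or the hold times out.
  var meanHold = msgsPerUserPerSec > 0
      ? Math.min(holdTimeoutSec, 1 / msgsPerUserPerSec)
      : holdTimeoutSec;
  var longPolling = users / (maxLatencySec + meanHold);

  return { polling: polling, longPolling: longPolling };
}

// 1000 users, one message per client every 100 s, 10 s max latency, 240 s hold:
// plain polling needs ~100 req/s, long polling only ~9 req/s
// console.log(requestsPerSecond(1000, 0.01, 10, 240));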

 
1 Comment

Posted by on September 25, 2008 in COMET

 

Tags:

 