
Monthly Archives: October 2008

Diagrams library for Google Web Toolkit (GWT)

gwt-diagrams is a GWT library that provides diagramming capabilities to web applications.

Project homepage: http://code.google.com/p/gwt-diagrams/.

Demo source code download.

The demo uses the gwt-dnd library to add drag-and-drop functionality, but gwt-dnd is optional: gwt-diagrams does not depend on it.

Screenshot

 

Posted by on October 30, 2008 in GWT

 


Open Flash Chart GWT Widget Library

The OFCGWT project provides a simple-to-use chart widget for GWT based on Open Flash Chart 2. The library includes the Flash insertion, update, and manipulation methods needed for the chart widget. It also includes a POJO model of the chart elements and components that assists in generating the JSON chart data required by the OFC 2.x API.
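As a sketch of the POJO-to-JSON idea described above, the snippet below hand-builds a minimal bar chart in the OFC 2 JSON layout. The class and method names here are hypothetical illustrations, not the actual OFCGWT model classes:

```java
public class ChartJsonSketch {

    // Hypothetical helper: turns a title and a series of values into the
    // kind of JSON document OFC 2 consumes. The real OFCGWT POJO model
    // wraps this structure in typed element and component classes.
    static String toJson(String title, int[] values) {
        StringBuilder sb = new StringBuilder();
        sb.append("{\"title\":{\"text\":\"").append(title)
          .append("\"},\"elements\":[{\"type\":\"bar\",\"values\":[");
        for (int i = 0; i < values.length; i++) {
            if (i > 0) sb.append(',');
            sb.append(values[i]);
        }
        sb.append("]}]}");
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(toJson("Monthly visits", new int[] {9, 8, 7}));
        // → {"title":{"text":"Monthly visits"},"elements":[{"type":"bar","values":[9,8,7]}]}
    }
}
```

The benefit of the POJO layer is exactly that application code never concatenates JSON strings like this by hand.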

Source

Demo

Screenshots:

 

Posted by on October 30, 2008 in GWT

 


Google Mashup Editor, built with GWT

At Google Developer Day we launched the Google Mashup Editor – a quick way to build simple applications. The Mashup Editor lets you create mashups without having to do much coding; instead, you use standard HTML and extended tags, which correspond to UI controls that can display and manipulate RSS, Atom and GData feeds.

The product consists of three parts:

  • The Mashup Editor, which is itself an AJAX application.
  • A server-side hosting framework, which provides developer services (e.g., source code management via Google Code project hosting) and mashup services such as Google Base and a data store that can be accessed via feeds.
  • A JavaScript client library that implements the mashup UI controls and data processing functionality. The server-side components leverage Google’s scalable infrastructure and provide access to Google services via the Google data APIs protocol; the client-side components were developed exclusively using the Google Web Toolkit.

Before starting the project, our team already had a lot of experience building complex AJAX applications by hand — and had experienced many of the problems associated with this approach. Here are some of the reasons why we chose to use GWT rather than rolling our own native JavaScript framework this time around:

  1. Tools matter. As a veteran of the long-ago vi versus emacs debates, it’s interesting to see the same enthusiasm go into the Eclipse vs. IntelliJ IDE arguments. Whichever side you’re on (I fought for the latter in both cases, but we have members of both camps on our team), tools can make a huge difference in terms of developer productivity. You used to think twice before refactoring a large component that needed attention; having the tool take care of these kinds of complicated, repetitive (and error-prone) tasks makes life easier and can lead to better quality.
  2. OO is a good idea. I remember figuring out how to make JavaScript objects polymorphic and finally understanding what a closure is. Indeed, my colleague Stephan Meschkat, who works on the Maps API, often reminds me of JavaScript’s inherent power and elegance. However, I like to have keywords like “interface,” “private,” and “final” at my disposal — even better to have my compiler (and my editor) remind me that I’m attempting to call a function with inappropriate arguments. Type safety saves debugging time, and OO abstractions can help to reduce complexity in your code.
  3. Compatibility. Java’s original slogan of “write once, run anywhere” fell victim to the intense competition between browser developers. Although JavaScript, being a smaller core language, has fared somewhat better, the complexities of juggling different DOM implementations over a growing number of browser platforms makes writing cross-platform AJAX components difficult. GWT’s ability to insulate you from much of this complexity probably makes it a no-brainer for this benefit alone.
  4. The client is only half the story. Both the Mashup Editor and the resulting mashups themselves interact with Google services; being able to code both sides of a remote method call in the same language has some obvious benefits. Aside from the relative simplicity afforded by the GWT RPC mechanism, both client and server components can share constant definitions and in some cases, simple functions.
  5. Open systems are less scary. A programming framework is something that introduces abstractions. The benefits include making complex concepts simple and quicker to implement; the downside is that if you want to do something that the framework wasn’t designed for, you’re on your own. It was important for us to be able to get under the hood and tweak the native JavaScript. For example, the Mashup Editor’s template-processing functionality uses a native JavaScript library that we borrowed from the Google Maps API.
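Point 4 above can be made concrete with a small sketch. The class and the length limit below are hypothetical; the point is only that, because GWT compiles Java to JavaScript, one validation routine can run unchanged in the browser and on the server:

```java
// A constants/validation class that, under GWT, would be compiled both to
// bytecode (server side) and to JavaScript (client side).
public class SharedValidation {

    public static final int MAX_TITLE_LENGTH = 80;   // hypothetical limit

    // The same check gives fast feedback in the browser and remains
    // authoritative on the server, with no duplicated logic to drift apart.
    public static boolean isValidTitle(String title) {
        return title != null && !title.isEmpty() && title.length() <= MAX_TITLE_LENGTH;
    }

    public static void main(String[] args) {
        System.out.println(isValidTitle("My mashup"));  // → true
        System.out.println(isValidTitle(""));           // → false
    }
}
```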

Of course, the other huge benefit of open systems (and especially open source projects) is learning from the collective wisdom of everyone who uses the technology. And so we’re looking forward to incorporating the ongoing developments of GWT within the Mashup Editor.

Interested in playing around with the Google Mashup Editor? Head over to its homepage to sign up for the limited beta, and then check out the mashup gallery and developer forum for sample mashups built by the community.


 

Posted by on October 23, 2008 in Mashups

 


Are there side effects to using HttpSession?

For a detailed explanation of the side effects of using HttpSession, see the article Java theory and practice: Are all stateful Web applications broken?

The session state management mechanism provided by the Servlets framework, HttpSession, makes it easy to create stateful applications, but it is also quite easy to misuse. Many Web applications that use HttpSession for mutable data (such as JavaBeans classes) do so with insufficient coordination, exposing themselves to a host of potential concurrency hazards.
While there are many Web frameworks in the Java™ ecosystem, they all are based, directly or indirectly, on the Servlets infrastructure. The Servlets API provides a host of useful features, including state management through the HttpSession and ServletContext mechanisms, which allows the application to maintain state that persists across multiple user requests. However, some subtle (and largely unwritten) rules govern the use of shared state in Web applications, of which many applications unknowingly fall afoul. The result is that many stateful Web applications have subtle and serious flaws…
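The hazard described above is easy to reproduce outside a container. In the sketch below, a hypothetical Cart bean in a plain map stands in for an attribute in HttpSession; two threads play the role of two concurrent requests for the same session (for example, from two browser tabs):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class SessionRaceDemo {

    // Stands in for a mutable JavaBean stored as a session attribute.
    static class Cart {
        private int items;
        void addItemUnsafe() { items++; }              // read-modify-write race
        synchronized void addItemSafe() { items++; }   // safe under concurrent requests
        synchronized int size() { return items; }
    }

    public static void main(String[] args) throws InterruptedException {
        Map<String, Object> session = new ConcurrentHashMap<>(); // stands in for HttpSession
        session.put("cart", new Cart());
        final Cart cart = (Cart) session.get("cart");

        // Two concurrent "requests" mutate the same session-scoped bean.
        Runnable request = () -> { for (int i = 0; i < 100_000; i++) cart.addItemSafe(); };
        Thread t1 = new Thread(request);
        Thread t2 = new Thread(request);
        t1.start(); t2.start();
        t1.join(); t2.join();

        // With addItemUnsafe() instead, this would frequently print
        // less than 200000 because of lost updates.
        System.out.println(cart.size());  // → 200000
    }
}
```

The container guarantees nothing about coordinating such access; without the synchronization inside the bean, updates are silently lost.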

 

Posted by on October 23, 2008 in Web

 


Comet Components – xSocket and xLightweb

xSocket Overview

xSocket is an easy-to-use NIO-based library for building high-performance, scalable network applications. It supports writing client-side as well as server-side applications in an intuitive way. Issues like low-level NIO selector programming, connection pool management, and connection timeout detection are encapsulated by xSocket.

With xSocket you are able to write high performance, scalable client and server components such as SMTP Server, proxies or client and server components which are based on a custom protocol.

xSocket core:

  • Blocking and non-blocking connection support
  • Blocking and non-blocking connection pooling (client-side only)
  • Dynamic callback handler architecture to provide asynchronous communication approaches
  • Configurable threading behaviour (multi-threaded, non-threaded) on callback class and method level
  • Quality of service management by providing dynamic data transfer rate control
  • SSL (which can also be activated in an ad-hoc manner)
  • TCP and UDP transport protocol
  • JMX-based monitoring and management
  • OSGi and Maven support on deployment level
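For a sense of the "low-level NIO selector programming" that xSocket encapsulates, here is a minimal plain-JDK selector loop that accepts one connection and echoes one message. This is a toy illustration of the pattern, not xSocket code:

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;

public class SelectorEcho {
    public static void main(String[] args) throws Exception {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress("127.0.0.1", 0));   // ephemeral port
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT);
        int port = ((InetSocketAddress) server.getLocalAddress()).getPort();

        // A blocking client that sends "ping" and prints the echo.
        Thread client = new Thread(() -> {
            try (SocketChannel ch = SocketChannel.open(new InetSocketAddress("127.0.0.1", port))) {
                ch.write(ByteBuffer.wrap("ping".getBytes("US-ASCII")));
                ByteBuffer buf = ByteBuffer.allocate(4);
                while (buf.hasRemaining()) ch.read(buf);
                System.out.println(new String(buf.array(), "US-ASCII"));
            } catch (IOException e) {
                throw new RuntimeException(e);
            }
        });
        client.start();

        // The selector loop that a library like xSocket runs for you.
        boolean done = false;
        while (!done) {
            selector.select();
            for (SelectionKey key : selector.selectedKeys()) {
                if (key.isAcceptable()) {
                    SocketChannel ch = server.accept();
                    ch.configureBlocking(false);
                    ch.register(selector, SelectionKey.OP_READ);
                } else if (key.isReadable()) {
                    SocketChannel ch = (SocketChannel) key.channel();
                    ByteBuffer buf = ByteBuffer.allocate(16);
                    ch.read(buf);
                    buf.flip();
                    ch.write(buf);   // echo the bytes back
                    key.cancel();
                    done = true;
                }
            }
            selector.selectedKeys().clear();
        }
        client.join();
        server.close();
        selector.close();
    }
}
```

Even this toy must juggle interest sets, non-blocking registration, and selected-key bookkeeping; xSocket's callback-handler architecture hides exactly this kind of code.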
xLightweb Overview

xLightweb (formerly xSocket-http) is an easy-to-use HTTP network library for building high-performance, highly scalable network applications. xLightweb provides a simple and intuitive API for writing client-side as well as server-side HTTP applications. In contrast to the Servlet API, xLightweb is not focused solely on server-side programming: it also supports the client side with high-capacity classes such as an HttpClient. xLightweb’s shared foundation classes, such as HttpRequest and HttpResponse, allow you to write custom artefacts, such as a log filter, that can be used on the client side as well as on the server side.

xLightweb is not limited to blocking/synchronous programming. It supports both blocking/synchronous and non-blocking/asynchronous programming in a very dynamic way, enabling application types such as Comet applications or HTTP proxies. xLightweb implements a highly optimized HTTP parser and makes use of the non-blocking/asynchronous capabilities of the underlying NIO library, xSocket.
 

Posted by on October 23, 2008 in GWT/CHAT/COMET

 


Asynchronous HTTP Architecture

Comet has popularized asynchronous non-blocking HTTP programming, making it practically indistinguishable from reverse Ajax, also known as server push. In this article, Gregor Roth takes a wider view of asynchronous HTTP, explaining its role in developing high-performance HTTP proxies and non-blocking HTTP clients, as well as the long-lived HTTP connections associated with Comet. He also discusses some of the challenges inherent in the current Java Servlet API 2.5 and describes the respective workarounds deployed by two popular servlet containers, Jetty and Tomcat.

While Ajax is a popular solution for dynamically pulling data requests from the server, it does nothing to help us push data to the client. In the case of a Web mail application, for instance, Ajax would enable the client to pull mails from the server, but it would not allow the server to dynamically update the mail client. Comet, also known as server push or reverse Ajax, enhances the Ajax communication pattern by defining an architecture for pushing data from the server to the client. Comet enables us to push an event from the mail server to the WebMail client, which then signals the incoming mail.

Comet itself is based on creating and maintaining long-lived HTTP connections. Handling these connections efficiently requires a new approach to HTTP programming. In this article I introduce asynchronous, non-blocking HTTP programming and explain how it works. While I do present a Comet application at the end of the article, this style of programming is not restricted to Comet applications. Accordingly, this article describes asynchronous, non-blocking HTTP programming in general.

I start with an overview of client-based asynchronous message handling and message streaming, and then begin demonstrating the many uses of asynchronous HTTP on the server side. I explain the role and current limitations of the Java Servlet API 2.5, and demonstrate the use of the xSocket-http library to work around some of these limitations. The article concludes with a look at a dynamic Web application that leverages the two techniques associated with Comet architectures: long polling and streaming. I also show how this application could be implemented on Jetty and Tomcat, respectively.

 

Asynchronous message handling

At the message level, asynchronous message handling means that an HTTP client performs a request without waiting for the server response. In contrast, when performing a synchronous call, the caller thread is suspended until the server response returns or a timeout is exceeded. At the application level, code execution is stopped, waiting for the response before further actions can be taken. Client-side synchronous message handling is very easy to understand, as illustrated by the example in Listing 1.

Listing 1. Client example — synchronous call

HttpClient httpClient = new HttpClient();
// create the request message
HttpRequest req = new HttpRequest("GET", "http://tools.ietf.org/html/rfc2616.html");
// the call blocks until the response returns
HttpResponse resp = httpClient.call(req);
int status = resp.getStatus();
// ...

When performing an asynchronous call it is necessary to define a handler, which will be notified when the response returns. Typically, such a handler is passed as an argument of the call, and the call method returns immediately. The application-level code after the send statement is processed without waiting for a server response. The response is handled by the handler’s callback method: when the response returns, the network library executes the callback method within a network-library-controlled thread. If necessary, the request message has to be correlated with the response message at the application-code level. An asynchronous call is shown in Listing 2.

Listing 2. Client example — asynchronous call

HttpClient httpClient = new HttpClient();
// response handler
IHttpResponseHandler responseHandler = new IHttpResponseHandler() {
   public void onResponse(HttpResponse resp) throws IOException {
      int status = resp.getStatus();
      // ...
   }
   // ...
};
// create the request message
HttpRequest req = new HttpRequest("GET", "http://tools.ietf.org/html/rfc2616.html");
// send the request in an asynchronous way
httpClient.send(req, responseHandler);
// ...

The advantage of this approach is that the caller thread is not suspended while waiting for the response. With a good network library implementation, no outstanding threads are required, so in contrast to the synchronous approach the number of outstanding requests is not restricted by the number of available threads. The synchronous approach requires a dedicated thread for each concurrent request, and each thread consumes a certain amount of memory. This can become a problem if many concurrent calls have to be performed on the client side.

HTTP pipelining

Asynchronous message handling also enables HTTP pipelining, which you can use to send multiple HTTP requests without waiting for the server’s responses to earlier requests. The response messages will be returned by the server in the same order as the requests were sent. Pipelining requires that the underlying HTTP connection be in persistent mode, which is the standard mode with HTTP/1.1. In contrast to non-persistent connections, a persistent HTTP connection stays open after the server has returned a response.

Pipelining can significantly improve application performance when fetching many objects from the same server. The implicit persistent mode eliminates the overhead of establishing a new connection for each new request, by allowing for the reuse of connections. Pipelining also eliminates the need for additional connection instances to perform concurrent requests.
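The mechanics above can be sketched in a few lines: two requests are written back-to-back on one persistent connection, and the client relies on HTTP/1.1's FIFO guarantee to match responses to requests. This is a conceptual illustration with made-up paths, not a working pipelined client:

```java
import java.nio.charset.StandardCharsets;
import java.util.ArrayDeque;
import java.util.Deque;

public class PipeliningSketch {
    public static void main(String[] args) {
        // Two GET requests written back-to-back on one persistent connection,
        // without waiting for the first response.
        String req1 = "GET /a.html HTTP/1.1\r\nHost: example.org\r\n\r\n";
        String req2 = "GET /b.html HTTP/1.1\r\nHost: example.org\r\n\r\n";
        byte[] wire = (req1 + req2).getBytes(StandardCharsets.US_ASCII);
        // 'wire' is what would be written to the socket in one go.

        // The client must remember the send order, because HTTP/1.1 requires
        // the server to return responses in exactly that order (FIFO).
        Deque<String> pending = new ArrayDeque<>();
        pending.add("/a.html");
        pending.add("/b.html");

        // Each arriving response belongs to the oldest outstanding request.
        System.out.println(pending.poll());  // first response is for /a.html
        System.out.println(pending.poll());  // second response is for /b.html
    }
}
```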

Message content streaming

Asynchronous message handling can improve application performance by avoiding waiting threads, but another performance bottleneck arises when reading the message content.

It is not unusual for an HTTP message to contain kilobytes of content data. On the transport level, such large messages will be broken down into several TCP segments. The TCP segment size is limited and depends on the underlying network and link layer. For Ethernet-based networks the maximum TCP segment size is up to 1460 bytes.

Bodyless HTTP messages such as GET requests don’t contain body data. Often the size of such bodyless messages is smaller than 1 kilobyte. Listing 3 shows a simple HTTP request.

Listing 3. HTTP request

GET /html/rfc2616.html HTTP/1.1
Host: tools.ietf.org:80
User-Agent: xSocket-http/2.0-alpha-3

The correlating response of the request shown above contains a message body of 0.5 megabytes. On a personal Internet connection, the response message shown in Listing 4 would be broken into several TCP segments when sent.

Listing 4. HTTP response

HTTP/1.1 200 OK
Content-Length: 509497
Accept-Ranges: bytes
Last-Modified: Tue, 20 Nov 2007 03:10:57 GMT
Date: Sun, 03 Feb 2008 09:46:31 GMT
Content-Type: text/html; charset=US-ASCII
ETag: "d4026-7c639-9d13d240"
Server: Apache/2.2.6 (Debian) DAV/2 SVN/1.4.4 mod_python/3.3.1 Python/2.4.4 mod_ssl/2.2.6 OpenSSL/0.9.8g mod_perl/2.0.3 Perl/v5.8.8

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
  "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html lang="en" xml:lang="en">
<head>
    <meta http-equiv="Content-Type" content="text/html; charset=us-ascii" />
    <meta name="robots" content="index,follow" />
    <meta name="creator" content="rfcmarkup version 1.53" />
    <link rel="icon" href="/images/rfc.png" type="image/png" />
    <link rel="shortcut icon" href="/images/rfc.png" type="image/png" />
    <title>RFC 2616 Hypertext Transfer Protocol -- HTTP/1.1</title>  

[...]

</small></small></span>
</body></html>

Data transfer fragmentation can be hidden at the API level by accessing the body data as a steady and continuous stream. This approach, known as streaming, avoids the need to buffer large chunks of data before processing it. Streaming can also reduce the latency of HTTP calls, especially if both peers support streaming.

Using streaming allows the receiver to start processing the message content before the entire message has been transmitted. Often an HTTP message contains unstructured or semi-structured data, such as HTML pages, video, or music files, which will be processed immediately by the receiving peer. For instance, most browsers start rendering and displaying HTML pages without waiting for the complete page. For this reason most HTTP libraries support a stream-based API to access the message content.

In contrast to the body data, the message header contains well-structured data entries. To access the message header data most HTTP libraries provide dedicated and typed setter and getter methods. In most use cases the header can only be processed after the complete header has been received. The HTTP/1.1 specification doesn’t define the order of the message headers, though it does state that it’s a good practice to send general-header fields first, followed by request-header or response-header fields, and ending with the entity-header fields.

Streaming input data

To process received body data in a streaming manner, the receiving peer has to be notified immediately after the message header has been received. Based on the message header information, the receiver is able to determine the type of the received HTTP message, if body data exists, and which type of content the body contains.

The example code in Listing 5 (below) streams a returned HTML page into a file. The response message data will be processed as soon as it appears. Based on the retrieved body channel the FileChannel‘s transferFrom() implementation calls the body channel’s read() method to transfer the data into the filesystem. This occurs in a blocking manner. If the socket read buffer is empty, the body channel’s read() method will block until more data is received or the end-of-stream is reached. Blocking the read operation suspends the current caller thread, which can lead to inefficiency in system resource usage.

Listing 5. HTTP message example — blocking input streaming

HttpClient httpClient = new HttpClient();

HttpRequest req = new HttpRequest("GET", "http://tools.ietf.org/html/rfc2616.html");

// returns immediately when the complete header (not message!) is received
HttpResponse resp = httpClient.call(req);

if (resp.getStatus() == 200) {
   // create the output file
   File file = new File("rfc2616.html");
   file.createNewFile();
   FileChannel fc = new RandomAccessFile(file, "rw").getChannel();

   // get a blocking message body channel
   ReadableByteChannel inputBodyChannel = resp.getBlockingBody();

   // and transfer the data
   fc.transferFrom(inputBodyChannel, 0, 900000);
   fc.close();
}

// ...

To process the message body in a non-blocking mode, a handler similar to the one seen in the asynchronous message calling example from Listing 2 can be used. In this case, a non-blocking body channel will be retrieved instead of a blocking channel. In contrast to the blocking channel the non-blocking channel’s read() methods return immediately, whether data has been acquired or not. Notification support is required to avoid repeated, unsuccessful reads within a loop.

The BodyToFileStreamer of the example code in Listing 6 implements such a notification callback method. After retrieving the non-blocking body channel, the body handler will be assigned to the channel. The setDataHandler() call returns immediately. Setting the handler ensures that the body channel checks whether data is already available. If data is available, the handler’s onData() method is run.

The callback method is also called each time body data is available. The network library takes a (pooled) worker thread to perform the callback method. This thread is only assigned to the body channel as long as the callback method is executed. For this reason no outstanding threads are required.

Listing 6. HTTP message example — non-blocking input streaming

HttpClient httpClient = new HttpClient();

HttpRequest req = new HttpRequest("GET", "http://tools.ietf.org/html/rfc2616.html");

// returns immediately when the complete header (not message!) is received
HttpResponse resp = httpClient.call(req);

if (resp.getStatus() == 200) {
   // create the output file
   final File file = new File("rfc2616.html");
   file.createNewFile();
   final FileChannel fc = new RandomAccessFile(file, "rw").getChannel();

   // get a non-blocking message body channel
   NonBlockingBodyDataSource nbInputBodyChannel = resp.getNonBlockingBody();

   // data handler
   IBodyDataHandler bodyToFileStreamer = new IBodyDataHandler() {

      public boolean onData(NonBlockingBodyDataSource bodyChannel) {
         try {
            int available = bodyChannel.available();

            // data to transfer?
            if (available > 0) {
               bodyChannel.transferTo(fc, available);

            // end of stream reached?
            } else if (available == -1) {
               fc.close();
            }
         } catch (IOException ioe) {
            file.delete();
         }
         return true;
      }

      // ...
   };

   // set the data handler
   nbInputBodyChannel.setDataHandler(bodyToFileStreamer);
}

// ...

Streaming output data

The streaming approach can also be used when sending message data, which avoids buffering large chunks of data. To do this, the message content is transferred during the method call by using an InputStream or a ReadableByteChannel: after the message header is written, the body data is transferred from the body stream or channel. Listing 7 is an example of how implicit output streaming works. In this case the output streaming is managed by the network library; to perform the HTTP call, the user passes a channel object, which represents the handle of a streamable resource.

Listing 7. Client example — implicit output streaming

HttpClient httpClient = new HttpClient();

File file = new File("rfc2616.html");
FileChannel fc = new RandomAccessFile(file, "r").getChannel();

// the file channel is passed as the source of the request body
HttpRequest req = new HttpRequest("POST", "http://localhost:80/upload/rfc2616.html", "text/html", fc);

// response handler
IHttpResponseHandler responseHandler = new IHttpResponseHandler() {

   public void onResponse(HttpResponse resp) throws IOException {
      int status = resp.getStatus();
      // ...
   }

   // ...
};

// send the request by implicit output streaming (this also works for the call method)
httpClient.send(req, responseHandler);

// ...

In some use cases the output (or body) streaming should be managed by application-level user code. An explicit, user-managed streaming approach requires that the user retrieve an output channel to write the body data. In Listing 8 a message header object, rather than a complete message object, is passed to the send() method. This method call returns immediately with an output body channel object, which the application code uses to write the body data. The message-send procedure is finalized by calling the body channel’s close() method.

Listing 8. Client example — user-managed output streaming

HttpClient httpClient = new HttpClient();

// create an HTTP message header
HttpRequestHeader reqHdr = new HttpRequestHeader("POST", "http://localhost:80/upload/greeting", "text/plain");

// response handler
IHttpResponseHandler responseHandler = new IHttpResponseHandler() {

   public void onResponse(HttpResponse resp) throws IOException {
      int status = resp.getStatus();
      // ...
   }

   // ...
};

// send the message header (instead of the complete message)
WritableByteChannel outputBodyChannel = httpClient.send(reqHdr, responseHandler);

// write the message body data
outputBodyChannel.write(ByteBuffer.wrap(new byte[] { 45, 78, 56 }));
// ...

// close the request
outputBodyChannel.close();

Both approaches, streaming input data and streaming output data, read and write data as soon as it appears. However, streaming doesn’t mean that the data will be read from or written directly to the network. All read and write operations work on internal socket buffers. When a write method is called, the operating system kernel transfers the data to the socket’s send buffer. Returning from the write operation merely says that the data has been copied to this low-level send buffer; it doesn’t say that the peer has received the data.
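This buffering behavior can be demonstrated with plain JDK sockets. In the sketch below, the accepted peer never reads, yet writes keep "succeeding" until the kernel buffers fill up, at which point the non-blocking write starts returning 0:

```java
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;

public class SendBufferDemo {
    public static void main(String[] args) throws Exception {
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress("127.0.0.1", 0));   // ephemeral port
        SocketChannel writer = SocketChannel.open(server.getLocalAddress());
        SocketChannel peer = server.accept();                 // accepted, but never reads

        writer.configureBlocking(false);
        ByteBuffer chunk = ByteBuffer.allocate(64 * 1024);
        long copied = 0;

        // write() keeps returning > 0 although the peer reads nothing: the
        // data only lands in the kernel's send/receive buffers, it has not
        // been received by the peer application.
        while (true) {
            chunk.clear();
            int n = writer.write(chunk);
            if (n == 0) {
                break;   // buffers are full; the non-blocking write accepts no more data
            }
            copied += n;
        }
        System.out.println(copied > 0);   // → true (many KB were "written" unread)

        writer.close();
        peer.close();
        server.close();
    }
}
```

How many bytes are accepted before the write stalls depends on the operating system's socket buffer sizes, not on anything the peer did.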

Restrictions of the Java Servlet API 2.5

All of the examples in the previous sections show different ways of handling messages and content on the client side. As you will see later in Listing 10, it is possible to use the same programming style, as well as the same input and output message object representations, on the server side in a very seamless way. When you develop server-side HTTP-based applications, however, you must give consideration to the Java Servlet API.

The Servlet API defines a standard programming approach for handling HTTP requests on the server side. Unfortunately, the current Servlet API 2.5 supports neither non-blocking data streaming nor asynchronous message handling. When you implement a servlet’s service method such as doPost() or doGet(), the application-specific servlet code will read the request data, perform the implemented business logic, and return the response. To simplify writing servlets, the Servlet API uses a single-threaded programming approach. The servlet developer doesn’t have to deal with threading issues such as starting or joining threads. Thread management is part of the servlet engine’s responsibilities. Upon receiving an HTTP request the servlet engine uses a (pooled) worker thread to call the servlet’s service method.

Message handling

The downside of the Servlet API 2.5 is that it only allows for handling messages in a synchronous way. The HTTP request and response object have to be accessed within the scope of the request-handling thread. This message-handling approach is sufficient for most classic use cases. When you begin working with event-driven architectures such as Comet or middleware components such as HTTP proxies, however, asynchronous message handling becomes a very important feature.

When implementing an HTTP proxy, for instance, a request message has to be forwarded, and the response message has to be returned without wasting a request-handling thread for each open call. When you implement an HTTP proxy based on the current Servlet API, each open call requires one worker thread. The number of concurrent proxied connections is restricted to the number of possible worker threads.

Writing a synchronous HTTP proxy

Listing 9 shows an HTTP proxy based on the Servlet API. The servlet’s doGet() method will be called each time a new GET request is received. After some proxy-related handling the request will be copied and forwarded using HttpClient. Upon receiving the correlating response some proxy-related handling will be performed and the HttpClient response message will be copied to the servlet response message. After leaving the doGet() method the servlet engine finalizes the response message.

Listing 9. Synchronous proxy example (GET request proxy)

public class ProxyServlet extends HttpServlet {

   private final HttpClient httpClient = new HttpClient();
   private String forwardHost;
   private String forwardPort;

   @Override
   public void init(ServletConfig config) throws ServletException {
      forwardHost = config.getInitParameter("forward.host");
      forwardPort = config.getInitParameter("forward.port");
   }

   @Override
   protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws ServletException, IOException {

      // compute the new URL
      String uri = req.getRequestURI();
      if (req.getQueryString() != null) {
         uri += "?" + req.getQueryString();
      }
      uri = req.getScheme() + "://" + forwardHost + ":" + forwardPort + uri;

      // handle proxy issues (hop-by-hop headers, cache, via header, ...)
      // ...

      // create the forward request
      HttpRequest forwardRequest = new HttpRequest(req.getMethod(), uri);

      // copy the request headers
      for (Enumeration<String> en = req.getHeaderNames(); en.hasMoreElements(); ) {
         String headername = en.nextElement();
         for (Enumeration<String> en2 = req.getHeaders(headername); en2.hasMoreElements(); ) {
            forwardRequest.addHeader(headername, en2.nextElement());
         }
      }
      forwardRequest.setHost(forwardHost + ":" + forwardPort);

      // forward the request in a synchronous manner
      HttpResponse response = httpClient.call(forwardRequest);

      // handle proxy issues (hop-by-hop headers, ...)
      // ...

      // copy the response headers
      response.removeHeader("Server");
      for (String headername : response.getHeaderNameSet()) {
         for (String headervalue : response.getHeaderList(headername)) {
            resp.addHeader(headername, headervalue);
         }
      }
      // copy the body (if exists)
      if (response.hasBody()) {
         byte[] body = response.getBlockingBody().readBytes();
         resp.getOutputStream().write(body);
      }
   }
}

Using the asynchronous HttpClient’s send() method instead of the call() method won’t help. The current Servlet API doesn’t support writing requests out of the scope of the servlet request-handling thread. In essence, the Servlet 2.5 API is insufficient for writing an asynchronous message-handling proxy.

Writing an asynchronous HTTP proxy

Writing an asynchronous message-handling proxy requires using an API other than the Servlet 2.5 specification. The HTTP proxy in Listing 10 is based on the same network library (xSocket-http) used in the previous client-side examples. xSocket-http is an extension module of the xSocket network library that supports HTTP programming on the server side, as well as the client side. The network library is independent of the Servlet API and does not implement a servlet container.

Whereas the Servlet API uses an HttpServletResponse object to send a response message, the xSocket-http network library uses an HttpResponseContext object. The xSocket-http network library doesn’t pre-create a response message in an implicit way. Furthermore, in contrast to the Servlet API, neither the request object nor the response-context object is bound to the request-handling thread. Both artifacts can be accessed outside the network’s library-managed threads.

Like the servlet's doGet() method, the ForwardHandler's onRequest() method will be called each time a request is received. After performing some proxy-related processing, the received request message is forwarded using the asynchronous HttpClient's send() method. This method requires a response handler to handle the received response message. As you saw in Listing 2, using the HttpClient's send() method avoids the need for outstanding threads.

The most important aspect of this implementation is that the available threads don’t restrict the number of concurrent proxied connections. The scalability of such an asynchronous proxy is only driven by the message-parsing cost and the capability to maintain the required system resources for an open TCP connection in an effective way. Each open TCP connection requires a certain number of socket buffers, control blocks, and file descriptors at the operating-system level.

Listing 10. Asynchronous proxy example

  class ForwardHandler implements IHttpRequestHandler {
   private final HttpClient httpClient = new HttpClient();
   private String forwardHost;
   private int forwardPort;

   public ForwardHandler(String forwardHost, int forwardPort) {
      this.forwardHost = forwardHost;
      this.forwardPort = forwardPort;
   }

   public void onRequest(HttpRequest req, final IHttpResponseContext respCtx) throws IOException {

      // handle proxy issues (hop-by-hop headers, cache, via header, ...)
      // ...

      // update the target URI (the Host header will be updated automatically)
      req.updateTargetURI(forwardHost, forwardPort);

      // create the response handler (timeout is not handled here)
      IHttpResponseHandler responseHandler = new IHttpResponseHandler() {

         @Execution(Execution.NONTHREADED)   // performance optimization
         public void onResponse(HttpResponse resp) throws IOException {
            // handle proxy issues (hop-by-hop headers, ...)
            // ...

            // return the response
            respCtx.send(resp);
         }

         // ...
      };

      // .. and forward the request
      try {
         httpClient.send(req, responseHandler);
      } catch (ConnectException ce) {
         respCtx.sendError(502);
      }
   }
}

IServer proxy = new HttpServer(8080, new ForwardHandler("localhost", 80));
proxy.run();

Content streaming

The onResponse() and onRequest() methods in Listing 10 will be performed immediately after the message header is received. (The xSocket-http module also supports an InvokeOn annotation to specify whether the callback method should be performed after receiving the message header or after receiving the complete message.)

To reduce the required buffer sizes and to minimize call latency, the message body should be streamed. In Listing 10 this will be done implicitly by the xSocket-http module. The environment detects that an incomplete received message should be forwarded and registers a non-blocking forward handler on the incoming message-body channel.

Lower buffer sizes are required within the proxy because only parts of the message have to be buffered internally when it is transferred. Furthermore, the latency incurred by forwarding the message could be reduced significantly. If the message is forwarded in a non-streaming manner, first the whole message will be received and buffered before being forwarded. This adds the elapsed time between receiving the first and last message byte to the complete call latency.
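The memory argument is easy to see in plain Java, independent of any HTTP library: a streaming forwarder only ever holds one buffer's worth of the body, no matter how large the message is. The sketch below is purely illustrative (the class and method names are my own, not part of xSocket-http):

```java
import java.io.*;

public class StreamingCopy {

    // Forwards a message body chunk by chunk. Memory use is bounded by
    // bufSize regardless of body size, and the first bytes reach the
    // destination before the last bytes have even arrived.
    static long copy(InputStream in, OutputStream out, int bufSize) throws IOException {
        byte[] buf = new byte[bufSize];
        long total = 0;
        int n;
        while ((n = in.read(buf)) > 0) {
            out.write(buf, 0, n);
            total += n;
        }
        return total;
    }

    public static void main(String[] args) throws IOException {
        byte[] body = new byte[100_000];                      // a 100 KB "message body"
        ByteArrayOutputStream sink = new ByteArrayOutputStream();
        long copied = copy(new ByteArrayInputStream(body), sink, 8192);
        System.out.println(copied + " bytes forwarded with an 8 KB buffer");
    }
}
```

A store-and-forward proxy would instead buffer all 100 KB before writing the first byte, which is exactly the extra latency and memory cost described above.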

Upload data streaming in Java Servlet API 2.5

Streaming is also supported by the current Servlet API, but only in a blocking way. When running the UploadServlet in Listing 11 in Tomcat (version 6.0.14, default configuration on Windows), the doPost() method will be called immediately after the request header is received. This allows you to stream the incoming message body. The UploadServlet reads some message-header entries and streams the message body into a file. If not enough data is available, the read() method of the HttpServletRequest's input stream will block. This means the request-handling thread will be suspended until more data is received.

Listing 11. Upload servlet example

class UploadServlet extends HttpServlet {

   protected void doPost(HttpServletRequest req, HttpServletResponse resp) throws ServletException, IOException {

      String requestURI = req.getRequestURI();

      if (requestURI.startsWith("/upload")) {
         String filename = requestURI.substring("/upload".length() + 1, requestURI.length());
         File file = new File("files" + File.separator + filename);
         file.createNewFile();

         FileOutputStream os = new FileOutputStream(file);
         InputStream is = req.getInputStream();

         byte[] transferBytes = new byte[8192];
         int len;
         while ((len = is.read(transferBytes)) > 0) {
             os.write(transferBytes, 0, len);
         }
         os.close();
         is.close();

      } else {
         resp.sendError(404);
      }
   }
}

Non-blocking upload data streaming

Outstanding threads can be avoided by using non-blocking streams. Listing 12 shows an UploadRequestHandler, which reads and transfers the incoming message body in a non-blocking way. Similar to the client-side non-blocking streaming example in Listing 6, a non-blocking body channel will be retrieved and a body-data handler will be set. After this operation the onRequest() method returns immediately, without sending a response message. If body data is received, the body-data handler will be called to transfer the available body data into a file. If the complete body is received, the response message will be sent.

Listing 12. Asynchronous, non-blocking server-side example

class UploadRequestHandler implements IHttpRequestHandler {

   public void onRequest(HttpRequest req, final IHttpResponseContext respCtx) throws IOException {

      String requestURI = req.getRequestURI();

      if (requestURI.startsWith("/upload")) {
         String filename = requestURI.substring("/upload".length() + 1, requestURI.length());
         final File file = new File("files" + File.separator + filename);
         file.createNewFile();

         final FileChannel fc = new RandomAccessFile(file, "rw").getChannel();

         IBodyDataHandler bodyToFileStreamer = new IBodyDataHandler() {

            public boolean onData(NonBlockingBodyDataSource bodyDataSource) {
               try {
                  int available = bodyDataSource.available();

                  if (available > 0) {
                     bodyDataSource.transferTo(fc, available);

                  } else if (available == -1) {
                     fc.close();
                     respCtx.send(200);
                  }
               } catch (IOException ioe) {
                  file.delete();
                  respCtx.sendError(500);
               }
               return true;
            }

            //...
         };

         // set handler to stream the body into a file in a non-blocking manner
         req.getNonBlockingBody().setDataHandler(bodyToFileStreamer);

      } else {
         respCtx.sendError(404);
      }
   }
}

IServer server = new HttpServer(80, new UploadRequestHandler());
server.run();

Comet Architecture

 
5 Comments

Posted by on October 23, 2008 in GWT/CHAT/COMET

 

Tags: , ,

Jaxcent – An Alternative to GWT

Jaxcent is a Java-only AJAX framework and API. Like GWT, it does not require JavaScript programming.

 

Unlike GWT, Jaxcent is a server-side framework. Instead of being compiled into JavaScript, the Java code directly runs on the server, and communicates with the client via a small JavaScript file.

Being a server-side framework provides many advantages. The coding is very straightforward. The development environment is the exact one that developers are used to. Any tools, debuggers, JMX/JConsole, third party libraries, text files, databases, everything is accessible in the normal manner.

There may be some concern that a server-side framework puts more burden on the server compared to GWT. However, a GWT-like approach does require the server to maintain, manage, and deliver multiple JavaScript files, which in real terms can be a significant server load. In contrast, Jaxcent has a single small JavaScript file that will be cached by normal browsers. The actual load on the server is comparable to any pre-AJAX server-side framework, such as servlets, JSP, or ASP.

To see a few Jaxcent samples, please visit the samples page.

Jaxcent is available for download at the download page.

Jaxcent documentation is available at the documentation page for review. It is also included in the download.

An online tutorial for Jaxcent is available at http://www.jaxtut.com/.

For any feedback or questions, please use the form at the contact page.

 
1 Comment

Posted by on October 22, 2008 in Like GWT

 

Tags: , , , , , ,

Metawidget adds supports for Google Web Toolkit (GWT) 1.5

GWT currently suffers from a gap that many people have tried to fill: there is no perfect solution yet for binding data to the UI. But here comes a new challenger: Metawidget (http://www.metawidget.org/).

The main goal of Metawidget is to provide an easy way to build UI components based on analysis of model objects, so developers no longer have to spend so much time binding the UI to POJOs. The two main concepts in Metawidget are:

  • Metawidgets, which are responsible for building the UI widgets in the desired front-end technology (GWT, Swing, Struts…)
  • Inspectors, which collect metadata about the model objects in order to describe them

 

 
Leave a comment

Posted by on October 18, 2008 in GWT

 

Tags: ,

GWT in the Adobe AIR

Despite the release of Google Chrome, the Adobe AIR platform is not dead yet. For instance, GWT in the AIR just released its Milestone 1 version. A quick reminder: the aim of this project is to allow the development of AIR applications, i.e. desktop applications running on top of the Adobe AIR runtime, from GWT code.

GWT in the AIR makes the Adobe AIR API available for GWT using JSNI. This project also provides tools to ease “GWT in Adobe AIR” development, including test tools. The weakness of this project is its lack of documentation, but it seems to be progressing. For the moment, the showcase and the Adobe AIR documentation should be your starting points.

The Adobe AIR API for GWT……..

Makes the Adobe AIR API available for GWT (using JSNI) and provides tools to ease “GWT in Adobe AIR” development:

  • a GWT Linker to compile Java to JavaScript and produce an AIR application (or intermediate package) in a single step
  • an RMI BrowserManager to run JUnit unit tests within the ADL (AIR Debug Launcher)
  • a GWTShell subclass (AIRDebugLauncher) to launch the GWTShell (optionally with the embedded Tomcat) and run the application in the ADL

 

Given the lack of an AIR hosted mode, you’d generally use GWT-in-the-AIR when developing applications targeting both the web and the desktop.

You’ll need GWT 1.5 RC2 or later to use GWT-in-the-AIR.

Download

 
Leave a comment

Posted by on October 18, 2008 in GWT-AIR(ADOBE)

 

Tags: ,

GWT 1.6 – Tomcat or Jetty ?

Bruce Johnson (tech lead of the Google Web Toolkit) is asking the community whether it would rather have Jetty or Tomcat as the hosted mode embedded HTTP server. Unfortunately, I do not know Jetty well enough to have an opinion. But if you know both Jetty and Tomcat, you should not hesitate to participate in this debate.

The GWT team has started putting together a 1.6 roadmap, which we’ll publish as soon as we have it nailed down. Two of the areas we want to work on for 1.6 are some improvements to hosted mode startup time and a friendlier output directory structure (something that looks more .war-like).

As part of this effort, we’ve all but decided to switch the hosted mode embedded HTTP server from Tomcat to Jetty. Would this break you? (And if so, how mad would you be if we did it anyway?)

http://groups.google.com/group/Google-Web-Toolkit/browse_thread/thread/604aec6b7460c133?hl=en&pli=1

 
 

Tags: , ,

ItsNAT – New Java Based Comet Tool

ItsNat is an innovative open source Java AJAX component-based web framework (dual licensed: GNU Affero General Public License v3, with a commercial license for closed source projects). It offers a natural approach to modern web development. Why natural? ItsNat leverages old tools to build new AJAX-based Web 2.0 applications: pure (X)HTML templates and pure Java W3C DOM! ItsNat is server-centric, using a unique approach called TBITS, “The Browser Is The Server”: ItsNat simulates a universal W3C Java browser at the server. The server mimics the behavior of a web browser, containing a W3C DOM Level 2 node tree and receiving W3C DOM Events.

ItsNat provides many more things: web continuations (“continue events”), user-defined events, timers, long-running server tasks, Comet, DOM utilities (to simplify DOM manipulation), resolution of ${} based variables in markup, ElementCSSInlineStyle support in the server, automatic remote page/view control of other users’ sessions, XML generation, support for non-HTML namespaces such as pure SVG with AJAX and SVG embedded in XHTML, JavaScript generation utilities, server-fired events sent to the client simulating user actions (for instance, to test the view from the server), custom pretty URLs, previous/forward document navigation (pull and push referrers) with back/forward button support, degraded modes (AJAX-disabled and JavaScript-disabled modes), etc.

ItsNat also provides a web-based component system. These components are AJAX-based from scratch, inspired by Swing and reusing Swing as far as possible (such as its data and selection models), but it is not a forced Swing clone on the web. Included components: several button types, text-based components, labels, lists, tables, and trees (all of them with content editable “in place”)… In ItsNat, every DOM element or element group can be a component.

Supported desktop browsers: Internet Explorer 6+, Firefox 1+, Safari 3+, Opera 9+, Google Chrome, QtWebKit and QtJambi (Qt 4.4), and Arora (QtWebKit-based).

Supported mobile browsers: Opera Mini 4, Opera Mobile 8.6, NetFront 3.5, Minimo 0.2, IE Mobile of Windows Mobile 6, iPhone/iPod Touch/iPhone SDK, Android (v0.9 Beta r1), S60WebKit (S60 3rd), Iris 1.0.8 and QtWebKit of Qt Embedded for Linux and Windows CE (Qt 4.4).

Links

Homepage

ItsNat Article 1, Article2

Demo

 
Leave a comment

Posted by on October 18, 2008 in COMET

 

Tags: , , ,

JavaFX and GWT

Integrate JavaFX with GWT

JavaFX is the new kid on the Java block. There is a lot of buzz about this new technology, although it is not quite stable yet and some people think Sun is too late to win the RIA (Rich Internet Application) market. Nonetheless, I’m a Java guy, so I just had to have a look at JavaFX. At first I had some problems understanding how things work with JavaFX. Quite a few changes were made to the API over the course of its development, and lots of the examples on the net are no longer up to date. This causes some confusion and does not make things easier…

More info Click here

Screenshot:

Useful links:

GWT Applet Integration

JavaFX

 
Leave a comment

Posted by on October 18, 2008 in GWT

 

Tags: , ,

New GWT Chat Application

Leeloo Chat is a “smart” chat application. People can discuss publicly, register, build a profile, and exchange rich text messages with embedded images. The contact list is “smart” too: when you’re talking with someone, the chat will detect two-way dialogs and move the relevant people to the top of the contact list, allowing quick access to that person’s profile and messaging. You can also add that person to your friends. It’s evolving daily… Try it :-)

URL : http://leeloo.webhop.net/

Developed by : Joel Bourquard

Screenshot :

 
1 Comment

Posted by on October 18, 2008 in GWT/CHAT/COMET

 

Tags: , ,

Developing Portlets with GWT

I found this article in Deligent

How do you write a portlet with the aid of GWT?

Modifying GridSphere Home: if you want, you can also apply the respective changes to GridSphere’s home directory. This can be useful if, for some reason, you want to redeploy GridSphere.

Installing GWT: download the latest version of GWT from here. Unzip the file.

 
2 Comments

Posted by on October 17, 2008 in GWT-PORTLET

 

Tags: , , ,

Why Cache the Web?

The short answer is that caching saves money. It saves time as well, which is sometimes the same thing if you believe that “time is money.” But how does caching save you money?

It does so by providing a more efficient mechanism for distributing information on the Web. Consider an example from our physical world: the distribution of books. Specifically, think about how a book gets from publisher to consumer. Publishers print the books and sell them, in large quantities, to wholesale distributors. The distributors, in turn, sell the books in smaller quantities to bookstores. Consumers visit the stores and purchase individual books. On the Internet, web caches are analogous to the bookstores and wholesale distributors.

The analogy is not perfect, of course. Books cost money; web pages (usually) don’t. Books are physical objects, whereas web pages are just electronic and magnetic signals. It’s difficult to copy a book, but trivial to copy electronic data.

The point is that both caches and bookstores enable efficient distribution of their respective contents. An Internet without caches is like a world without bookstores. Imagine 100,000 residents of India each buying one copy of a comic book directly from the publisher. Now imagine 50,000 Internet users in Australia each downloading the Yahoo! home page every time they access it. It’s much more efficient to transfer the page once, cache it, and then serve future requests directly from the cache.

In order for caching to be effective, the following conditions must be met:

  • Client requests must exhibit locality of reference.
  • The cost of caching must be less than the cost of direct retrieval.

We can intuitively conclude that the first requirement is true. Certain web sites are very popular. Classic examples are the starting pages for Netscape and Microsoft browsers. Others include searching and indexing sites such as Yahoo! and Altavista. Event-based sites, such as those for the Olympics, NASA’s Mars Pathfinder mission, and World Cup Soccer, become extremely popular for days or weeks at a time. Finally, every individual has a few favorite pages that he or she visits on a regular basis.

It’s not always obvious that the second requirement is true. We need to compare the costs of caching to the costs of not caching. Numerous factors enter into the analysis, some of which are easier to measure than others. To calculate the cost of caching, we can add up the costs for hardware, software, and staff time to administer the system. We also need to consider the time users save waiting for pages to load (latency) and the cost of Internet bandwidth.
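A back-of-the-envelope calculation makes the bandwidth side of that comparison concrete. The figures below (request volume, hit ratio, average object size) are invented purely for illustration:

```java
public class CacheSavings {

    // Every cache hit is a request that never crosses the wide-area link,
    // so the bytes saved are simply requests * hitRatio * averageObjectSize.
    static double bytesSaved(long requests, double hitRatio, double avgObjectSizeBytes) {
        return requests * hitRatio * avgObjectSizeBytes;
    }

    public static void main(String[] args) {
        // Hypothetical figures: 1,000,000 requests/day, 40% hit ratio,
        // 13 KB average object size.
        double saved = bytesSaved(1_000_000L, 0.40, 13_000);
        System.out.printf("~%.1f GB/day kept off the upstream link%n", saved / 1e9);
    }
}
```

Weighing that figure against your bandwidth price, hardware, and staff costs is the core of the caching cost/benefit analysis.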

Three primary benefits of caching web content:

  • To make web pages load faster (reduce latency)
  • To reduce wide area bandwidth usage
  • To reduce the load placed on origin servers
Types of Web Caches

Web content can be cached at a number of different locations along the path between a client and an origin server. First, many browsers and other user agents have built-in caches. For simplicity, I’ll call these browser caches. Next, a caching proxy (a.k.a. “proxy cache”) aggregates all of the requests from a group of clients. Lastly, a surrogate can be located in front of an origin server to cache popular responses.

Browser Caches

Browsers and other user agents benefit from having a built-in cache. When you press the Back button on your browser, it reads the previous page from its cache. Nongraphical agents, such as web crawlers, cache objects as temporary files on disk rather than keeping them in memory.

Netscape Navigator lets you control exactly how much memory and disk space to use for caching, and it also allows you to flush the cache. Microsoft Internet Explorer lets you control the size of your local disk cache, but in a less flexible way. Both have controls for how often cached responses should be validated. People generally use 10–100MB of disk space for their browser cache.

A browser cache is limited to just one user, or at least one user agent. Thus, it gets hits only when the user revisits a page. As we’ll see later, browser caches can store “private” responses, but shared caches cannot.

Caching Proxies

Caching proxies, unlike browser caches, service many different users at once. Since many different users visit the same popular web sites, caching proxies usually have higher hit ratios than browser caches. As the number of users increases, so does the hit ratio.

Caching proxies are essential services for many organizations, including ISPs, corporations, and schools. They usually run on dedicated hardware, which may be an appliance or a general-purpose server, such as a Unix or Windows NT system. Many organizations use inexpensive PC hardware that costs less than $1,000 (about Rs. 45,000). At the other end of the spectrum, some organizations pay hundreds of thousands of dollars, or more, for high-performance solutions from one of the many caching vendors.

Caching proxies are normally located near network gateways (i.e., routers) on the organization’s side of its Internet connection. In other words, a cache should be located to maximize the number of clients that can use it, but it should not be on the far side of a slow, congested network link.

Caching Proxy Features

The key feature of a caching proxy is its ability to store responses for later use. This is what saves you time and bandwidth. Caching proxies actually tend to have a wide range of additional features that many organizations find valuable. Most of these are things you can do only with a proxy but which have relatively little to do with caching. For example, if you want to authenticate your users, but don’t care about caching, you might use a caching proxy product anyway.
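The storage at the heart of that feature can be sketched in a few lines of standard Java. A real product also handles expiration, validation, and spilling to disk; this toy version (class and method names are my own) only shows the “store responses for later use” idea, with LinkedHashMap providing least-recently-used eviction:

```java
import java.util.*;

// A minimal LRU store for cached responses, keyed by URL.
public class LruResponseCache {
    private final Map<String, byte[]> map;

    public LruResponseCache(final int capacity) {
        // accessOrder=true keeps entries ordered by last access, so the head
        // of the map is always the least-recently-used entry;
        // removeEldestEntry evicts it once capacity is exceeded.
        this.map = new LinkedHashMap<String, byte[]>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<String, byte[]> eldest) {
                return size() > capacity;
            }
        };
    }

    public void put(String url, byte[] body) { map.put(url, body); }
    public byte[] get(String url)            { return map.get(url); }
    public boolean contains(String url)      { return map.containsKey(url); }

    public static void main(String[] args) {
        LruResponseCache cache = new LruResponseCache(2);
        cache.put("/index.html", new byte[]{1});
        cache.put("/logo.gif",   new byte[]{2});
        cache.get("/index.html");                // touch: /logo.gif is now LRU
        cache.put("/news.html",  new byte[]{3}); // evicts /logo.gif
        System.out.println(cache.contains("/logo.gif")); // false
    }
}
```

The eviction policy is why popular objects stay cached: each hit refreshes an entry's position, so only rarely requested responses age out.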

Authentication
A proxy can require users to authenticate themselves before it serves any requests. This is particularly useful for firewall proxies. When each user has a unique username and password, only authorized individuals can surf the Web from inside your network. Furthermore, it provides a higher quality audit trail in the event of problems.  

Request filtering
Caching proxies are often used to filter requests from users. Corporations usually have policies that prohibit employees from viewing pornography at work. To help enforce the policy, the corporate proxy can be configured to deny requests to known pornographic sites. Request filtering is somewhat controversial. Some people equate it with censorship and correctly point out that filtering schemes are not perfect.  

Response filtering
In addition to filtering requests, proxies can also filter responses. This usually involves checking the contents of an object as it is being downloaded. A filter that checks for software viruses is a good example. Some organizations use proxies to filter out Java and JavaScript code, even when it is embedded in an HTML file. I’ve also heard about software that attempts to prevent access to pornography by searching images for a high percentage of flesh-tone pixels.  

 

 For example, see http://www.heartsoft.com, http://www.eye-t.com, and http://www.thebair.com.

 

Prefetching
Prefetching is the process of retrieving some data before it is actually requested. Disk and memory systems typically use prefetching, also known as “read ahead.” For the Web, prefetching usually involves requesting images and hyperlinked pages referenced in an HTML file.  

Prefetching represents a tradeoff between latency and bandwidth. A caching proxy selects objects to prefetch, assuming that a client will request them. Correct predictions result in a latency reduction; incorrect predictions, however, result in wasted bandwidth. So the interesting question is, how accurate are prefetching predictions? Unfortunately, good measurements are hard to come by. Companies with caching products that use prefetching are secretive about their algorithms.
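Even without vendor data, the tradeoff itself is easy to quantify: if a proxy prefetches n objects and the client ends up requesting only a fraction p of them, the expected waste per page is n × (1 − p) × average object size. A small sketch, with figures invented for illustration:

```java
public class PrefetchTradeoff {

    // Expected bytes wasted per page view: the (1 - accuracy) fraction of
    // prefetched objects is never actually requested by the client.
    static double wastedBytes(int prefetched, double accuracy, double avgSizeBytes) {
        return prefetched * (1.0 - accuracy) * avgSizeBytes;
    }

    public static void main(String[] args) {
        // Prefetch 10 embedded objects of 20 KB each at 60% prediction accuracy:
        System.out.println(wastedBytes(10, 0.60, 20_000) + " bytes wasted per page"); // 80000.0
    }
}
```

Whether that waste is acceptable depends on how expensive your bandwidth is relative to the latency the correct predictions save.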

Translation and transcoding
Translation and transcoding both refer to processes that change the content of something without significantly changing its meaning or appearance. For example, you can imagine an application that translates text pages from English to German as they are downloaded.  

Transcoding usually refers to low-level changes in digital data rather than high-level human languages. Changing an image file format from GIF to JPEG is a good example. Since the JPEG format results in a smaller file than GIF, they can be transferred faster. Applying general-purpose compression is another way to reduce transfer times. A pair of cooperating proxies can compress all transfers between them and uncompress the data before it reaches the clients.
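The compression variant of transcoding is available directly in the Java standard library. The sketch below round-trips a response body through GZIP, the way a pair of cooperating proxies might before putting it on a slow link:

```java
import java.io.*;
import java.util.zip.*;

public class GzipTranscoder {

    // Compress a response body before sending it over the inter-proxy link.
    static byte[] gzip(byte[] data) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (GZIPOutputStream gz = new GZIPOutputStream(bos)) {
            gz.write(data);
        }
        return bos.toByteArray();
    }

    // Restore the original bytes on the receiving side, before the client sees them.
    static byte[] gunzip(byte[] data) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (GZIPInputStream gz = new GZIPInputStream(new ByteArrayInputStream(data))) {
            byte[] buf = new byte[8192];
            int n;
            while ((n = gz.read(buf)) > 0) {
                bos.write(buf, 0, n);
            }
        }
        return bos.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        // HTML is highly repetitive, so it compresses well.
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < 200; i++) sb.append("<li>repetitive markup</li>\n");
        byte[] page = sb.toString().getBytes("UTF-8");

        byte[] compressed = gzip(page);
        System.out.println(page.length + " -> " + compressed.length + " bytes on the wire");
    }
}
```

Because the transformation is lossless, the client is unaware of it; only the inter-proxy transfer time changes.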

Traffic shaping
A significant number of organizations use application layer proxies to control bandwidth utilization. In some sense, this functionality really belongs at the network layer, where it’s possible to control the flow of individual packets. However, the application layer provides extra information that network administrators find useful. For example, the level of service for a particular request can be based on the user’s identification, the agent making the request, or the type of data being requested (e.g., HTML, Postscript, MP3).  

Meshes, Clusters, and Hierarchies

There are a number of situations where it’s beneficial for caching proxies to talk to each other. There are different names for some different configurations. A cluster is a tightly coupled collection of caches, usually designed to appear as a single service. That is, even if there are seven systems in a cluster, to the outside world it looks like just one system. The members of a cluster are normally located together, both physically and topologically.

A loosely coupled collection of caches is called a hierarchy or mesh. If the arrangement is tree-like, with a clear distinction between upper- and lower-layer nodes, it is called a hierarchy. If the topology is flat or ill-defined, it is called a mesh. A hierarchy of caches makes sense because the Internet itself is hierarchical. However, when a mesh or hierarchy spans multiple organizations, a number of issues arise.

List of caching products

Squid
http://www.squid-cache.org
Squid is an open source software package that runs on a wide range of Unix platforms. There has also been some recent success in porting Squid to Windows NT. As with most free software, users receive technical support from a public mailing list. Squid was originally derived from the Harvest project in 1996.  

Netscape Proxy Server
http://home.netscape.com/proxy/v3.5/index.html
The Netscape Proxy Server was the first caching proxy product available. The lead developer, Ari Luotonen, also worked extensively on the CERN HTTP server during the Web’s formative years in 1993 and 1994. Netscape’s Proxy runs on a handful of Unix systems, as well as Windows NT.  

Microsoft Internet Security and Acceleration Server
http://www.microsoft.com/isaserver/
Microsoft currently has two caching proxy products available. The older Proxy Server runs on Windows NT, while the newer ISA product requires Windows 2000.  

Volera
http://www.volera.com
Volera is a recent spin-off of Novell. The product formerly known as Internet Caching System (ICS) is now called Excelerator. Volera does not sell this product directly. Rather, it is bundled on hardware appliances available from a number of OEM partners.  

Network Appliance Netcache
http://www.netapp.com/products/netcache/
Network Appliance was the second company to sell a caching proxy, and the first to sell an appliance. The Netcache products also have roots in the Harvest project.  

Inktomi Traffic Server
http://www.inktomi.com/products/network/traffic/
Inktomi boasts some of the largest customer installations, such as America Online and Exodus. Their Traffic Server product has been available since 1997.  

CacheFlow
http://www.cacheflow.com
Intelligent prefetching and refreshing features distinguish CacheFlow from their competitors.  

InfoLibria
http://www.infolibria.com
InfoLibria’s products are designed for high reliability and fault tolerance.  

Cisco Cache Engine
http://www.cisco.com/go/cache/
The Cisco 500 series Cache Engine is a small, low-profile system designed to work with their Web Cache Control Protocol (WCCP). As your demand for capacity increases, you can easily add more units.  

Lucent imminet WebCache
http://www.lucent.com/serviceprovider/imminet/
Lucent’s products offer carrier-grade reliability and active refresh features.  

iMimic DataReactor
http://www.imimic.com
iMimic is a relative newcomer to this market. However, their DataReactor product is already licensed to a number of OEM partners. iMimic also sells their product directly.  

 
Leave a comment

Posted by on October 16, 2008 in Web

 

Tags:

 