To AJAX or Not to AJAX? That is the question!


When faced with a new web project these days, you will typically hear clients listing AJAX as one of the must-haves in their brand new web application. Pretty cool, as you may be so accustomed to AJAX yourself that you can hardly imagine going back to the page-reload-per-click days. But if you are sensible (or better yet, your clients are), you will think twice before entirely abandoning the normal site browsing model for an AJAX-based one.

Why? I hear you say. For many reasons, including the fact that we live in the early 21st century, where – get ready for this – not all Internet access devices are equipped with state-of-the-art browsers that can consume your AJAX interfaces or whatever Javascript or CSS magic you throw at them. Many mobile phones (millions, to say the least) can hardly parse plain old HTML, and some can do CSS but not Javascript.

"Ok," you tell me, "I will have to do two versions: one that is full of AJAX effects and one boring old HTML-only version." STOP IT, I say; you couldn't be more wrong. Thank God there are more elegant solutions to the problem than just writing another application around the same database. I present to you my humble take on the problem, using the Ruby on Rails framework (you can apply similar thoughts in other frameworks if you like, and many ideas can be copied easily as they only involve Javascript).

First off, the controllers. The controllers are responsible for receiving requests and sending responses. What we need to do is make them intelligent enough to understand different types of requests and respond accordingly. This is done using Rails' magical method "respond_to":

class IssuesController < ApplicationController
  def index
    # ...
    respond_to do |format|
      format.html { } # do something
      format.js   { } # do another thing
      format.json { } # and another thing
      format.xml  { } # ok, enough
    end
  end
  # ...
end

In the above example we see that each format gets a different response. This is great for a start: that way we can implement slightly varying responses for the AJAX and the non-AJAX calls. To make things easier on ourselves, we will implement a very simple flavour of AJAX. Each rhtml view is rendered inside a DIV tag within an rhtml layout. In the non-AJAX model, pages are rendered by rendering both the layout and the inner view. In the AJAX model, only the inner view is rendered and sent back to the browser to replace whatever resides in the content DIV.
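
To make that concrete, here is a minimal sketch of such a layout (the file name and the "content" id are assumptions that simply match the examples below):

<!-- app/views/layouts/application.rhtml -- a hypothetical minimal layout -->
<html>
  <head><title>My App</title></head>
  <body>
    <div id="content">
      <%= yield %> <!-- the inner view is rendered here (use @content_for_layout on older Rails) -->
    </div>
  </body>
</html>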

So, our controllers will work as follows:

class IssuesController < ApplicationController
  def index
    # ...
    respond_to do |format|
      format.html # will render index.rhtml inside the layout
      format.js { render :layout => false }
      # the above line will render index.rhtml but without the layout
    end
  end
  # ...
end

The above lines make our controller ready to respond to normal or AJAX requests (given that AJAX requests use the .js format). In the former case it will return the whole page, but in the latter it will omit rendering the layout and only send the content.
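
A quick aside: for respond_to to see the .js format, the route has to carry it. With RESTful routes this comes for free; a sketch (the resource name is just this article's running example):

# config/routes.rb -- a sketch; map.resources generates format-aware
# routes, so GET /issues.js hits IssuesController#index with format "js"
map.resources :issues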

Ok, but we still need two views, right? I hear you, and fear not: you will barely have to change a thing. Adapting your views to this model is actually trivial. Let's see how this can be done.

Here's a normal view code sample (pardon me, I won't use the link_to helper method, for clarity's sake):

...
<div id="content">
  ...
  <ul>
    <li><a href="url1">Link1</a></li>
    <li><a href="url2">Link2</a></li>
    <li><a href="url3">Link3</a></li>
  </ul>
  ...
  <form action="url4" method="post">
    ...
    <input type="submit">
  </form>
  ...
</div>
...

The above fragment shows a list of links and a form. All should behave in the normal way and reload the page when clicked. Now let's imagine that the user has a Javascript-capable browser. What effect could the following fragment have on their experience?


<!-- Warning, this fragment requires prototype.js -->
<script type="text/javascript">
function ajaxifyLinks(){
  // check if there is AJAX support
  if(!Ajax.getTransport()) return false;
  // loop over all links
  $$('a').each(function(link){
    // attach an event observer to each link's 'click' event
    Event.observe(link, 'click', function(event){
      // call the original url (with .js appended) via AJAX
      new Ajax.Updater('content', link.href + '.js');
      // stop the browser from following the link
      Event.stop(event);
    });
  });
  // loop over all forms
  $$('form').each(function(form){
    // attach an event observer to each form's 'submit' event
    Event.observe(form, 'submit', function(event){
      // send the form contents via AJAX
      new Ajax.Updater('content', form.action + '.js',
        { parameters: Form.serialize(form),
          method: 'post' });
      // stop the browser from submitting the form
      Event.stop(event);
    });
  });
}
// hook everything up once the page has loaded
Event.observe(window, 'load', ajaxifyLinks);
</script>

The above code will transform EVERY link and form in the page to use AJAX, provided the browser supports both Javascript and AJAX. Otherwise, links and forms remain untouched and behave as usual.

Of course this is a minimalistic example. We knowingly avoided touching on any special cases, but in another installment of this article we will get more intimate with the subject and maybe handle more aggressive ... techniques!

Using action+client caching to speed up your Rails application


Too many visitors hitting your website, with loads of dynamic data being delivered to your clients? Of those visitors, do you have more people reading your site's content than people modifying it? Meaning, do you get lots more GET requests than POST, PUT or DELETE ones?

If the above questions are all answered with a YES, then, my friend, you are desperately in need of caching. Caching will help you lessen the load on your servers by doing two main things:
  1. It eliminates lengthy trips to the (slow by nature) database to fetch the dynamic data
  2. It frees precious CPU cycles otherwise spent processing this data and preparing it for presentation.
I have faced this very situation with a project we are planning: we are bound to get many more GETs than any other HTTP verb, and since we are building a RESTful application we will have a one-to-one mapping between our web resources (urls) and our application models. The requirements for our caching mechanism are the following:
  1. It needs to be fast
  2. It needs to be shared across multiple servers
  3. Authentication is required for some actions
  4. Page presentation changes (slightly) based on logged in user
  5. Most pages are shared and only a few are private for each user
We now have to answer the following: which caching technique and which cache store will we use?

The cache store part is easy: memcached seems like the most sensible choice, as it achieves points 1 & 2 and is orthogonal to the other 3 requirements. So it's memcached for now.
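
Wiring that in can be a one-liner; a minimal sketch, assuming a Rails 2.x style configuration (the server addresses are placeholders):

# config/environments/production.rb -- a sketch, assuming Rails 2.x;
# the addresses are placeholders for your memcached boxes
config.cache_store = :mem_cache_store, 'cache1:11211', 'cache2:11211'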

Now, which caching technique? Rails has several caching methods, the most famous of which are Page, Action and Fragment Caching. Gregg Pollack has a great writeup on these here and here. Model caching is also an option, but it can get a bit too complicated, so I'm leaving it out for now; it can be implemented later though (layering your caches is usually a good idea).

Page caching is the fastest, but with it we would lose the ability to authenticate (unless we do so via HTTP authentication, which I would love to use, but sadly that is not the case). This leaves us with action and fragment caching. Since the page contains slightly different presentation based on the logged-in user (like a hello message and maybe a localized datetime string), fragment caching would seem to be the better choice, no? Well, I would love to be able to use action caching after all; this way I can serve whole pages without invoking the renderer at all and really avoid doing lots of string processing in Ruby.
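
What makes action caching workable for requirement 3 is that the before filters still run before a cached copy is served. A minimal sketch (the filter name and the choice of actions are assumptions):

class IssuesController < ApplicationController
  # filters run even when the response comes from the cache, so
  # authentication keeps working (unlike page caching)
  before_filter :authenticate, :except => [:index, :show]
  caches_action :index, :show
  # ...
end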


There is a solution, if you'd just wake up and smell the coffee: we are in Web 2.0, and we should think of Web 2.0 age solutions to Web 2.0 problems. What if we add a little JavaScript to the page that dynamically displays the desired content based on the user's role? And if the content is really small, why not store it in a session cookie? Max Dunn implements a similar solution for his wiki here: the page is served the same for everyone, with DOM manipulation kicking in to do the simple mods for the specific user. Rendering of those is done on the client, so there is no load on the server, and since the mods are really small the client is not hurt either, and it gets the page much faster. It's a win-win situation. Life can't be better!
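
On the Rails side, that could be as little as dropping the per-user bits into a session cookie at login time; a sketch (the :greeting name and the current_user helper are assumptions):

# a sketch -- :greeting and current_user are assumptions; omitting
# :expires makes this a session cookie, so it dies with the browser
cookies[:greeting] = { :value => "Hello #{current_user.login}" }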

No, it can! In a content-driven website, many people check a hot topic frequently, and many reread the same data they have read before. In those cases the server is sending them a cached page, yes, but it is resending the same bits the browser already has in its cache. This is a waste of bandwidth, and your Mongrel will be waiting for the page transfer to finish before it can consume another request.

A better solution is to utilize client caching: tell the browser to use the version in its cache as long as it has not been invalidated. Just send the new data in a cookie and let the page dynamically modify itself to adapt to the logged-in user. Relying on session cookies for the dynamic parts will prevent the browser from displaying stale data between two different sessions, but the page itself will not be fetched over the wire more than once, even for different users on the same computer.

I am using the Action Cache Plugin by Tom Fakes to add client caching capabilities to my Action Caches. Basically, things go in the following manner (a hand-rolled sketch of the same idea follows the list):
  1. A GET request is encountered and is intercepted
  2. Caching headers are checked; if they are missing or the client's copy is stale then proceed,
    else send (304 NOT MODIFIED)
  3. The Action Cache is checked; if the page is not there then proceed,
    else send the cached page (200 OK)
  4. The action is processed and the page content is rendered
  5. The page is added to the cache, with last-modified header information
  6. The response is sent back to the browser (200 OK + all headers)
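
The plugin wires all of this up for you, but steps 2 and 5 are plain HTTP. A hand-rolled sketch of the core idea, reusing this article's Issue example (Time.httpdate comes from Ruby's 'time' stdlib):

# a hand-rolled sketch of steps 2 and 5 -- the plugin does this for you
def show
  @issue = Issue.find(params[:id])
  since = request.env['HTTP_IF_MODIFIED_SINCE']
  if since && Time.httpdate(since) >= @issue.updated_at
    head :not_modified                            # step 2: 304, no body
  else
    response.headers['Last-Modified'] = @issue.updated_at.httpdate
    render                                        # steps 4-6: a full 200
  end
end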
So how do we determine the impact of applying all this to the application?
  1. We need to know the percentage of GET requests (which can be cached) as opposed to POST, PUT and DELETE ones
  2. Of those GET requests, how many are repeated?
  3. Of those repeated GET requests, how many originate from the same client?
Those numbers can tell us whether our caching model works well or not; that should be the topic of the next installment of this article.

Happy caching