Thursday, November 01, 2007

Monitor Your Servers in Facebook

If you have been thinking of an excuse to tell your boss why you need to use Facebook at work, here's your answer.

Monitor your web servers in Facebook! Get a notification when your site goes down and when it comes back up. Show your server status on your profile too so your friends can see if you're a network admin rockstar or not.

  • Notifications via Facebook feed, email and SMS
  • Checks every ten minutes
  • 10 second setup
  • FREE!

Check it out here.

Tuesday, October 16, 2007

New Scaling Out Blog at

I started a new blog focused on Amazon Web Services and cloud computing. It will cover the latest news about the industry and showcase how people are using EC2, S3, etc. Check it out, subscribe, and let me know your thoughts.

Tuesday, October 02, 2007

How to Add OpenID Support to your Java Application

Here is a quick and easy step-by-step guide to adding OpenID support to your application. We are using joid because it's the lightest-weight implementation with the fewest dependencies (two jars).

  1. Download joid from
  2. Copy joid.jar, log4j-*.jar, and tsik.jar to your lib directory (so they end up in WEB-INF/lib).
  3. Add OpenIdFilter to your web.xml (see below for how to add it)
  4. Add OpenId login form (see below for a sample jsp page)

After a user logs in, you can access the username that they're signed in as with:

String loggedInAs = OpenIdFilter.getCurrentUser(session);

Simple huh?

OpenIdFilter for web.xml

The parameter names below follow joid's OpenIdFilter; double check them against the version you download.

<filter>
    <filter-name>OpenIdFilter</filter-name>
    <filter-class>org.verisign.joid.consumer.OpenIdFilter</filter-class>
    <init-param>
        <param-name>saveInCookie</param-name>
        <param-value>true</param-value>
        <description>Optional. Will store the identity url in a cookie under "openid.identity" if set to true.</description>
    </init-param>
    <init-param>
        <param-name>cookieDomain</param-name>
        <param-value>yourdomain.com</param-value>
        <description>Optional. Domain to store cookie based on RFC 2109. Defaults to current context.</description>
    </init-param>
</filter>
<filter-mapping>
    <filter-name>OpenIdFilter</filter-name>
    <url-pattern>/*</url-pattern>
</filter-mapping>

OpenID Login Form

<%@ page import="" %>
<%@ page import="" %>
<%@ page contentType="text/html;charset=UTF-8" language="java" %>
String returnTo = UrlUtils.getBaseUrl(request);

if (request.getParameter("signin") != null) {
try {
String id = request.getParameter("openid_url");
if (!id.startsWith("http:")) {
id = "
http://" + id;
String trustRoot = returnTo;

String s = OpenIdFilter.joid().getAuthUrl(id, returnTo, trustRoot);
} catch (Throwable e) {
An error occurred! Please press back and try again.
<head><title>A Page I Want to Login To</title></head>
This is a sample login page where a user enters their OpenID url to login.

String loggedInAs = OpenIdFilter.getCurrentUser(session);
if(loggedInAs != null){
<p align="center">
<span style=" background- padding:5px;">You are logged in as: <%=OpenIdFilter.getCurrentUser(session)%></span> - <a href="logout.jsp">Logout</a>

<div style='margin: 1em 0 1em 2em; border-left: 2px solid black; padding-left: 1em;'>
<form action="index.jsp" method="post">
<input type="hidden" name="signin" value="true"/>
<b>Login with your OpenID URL:</b> <input type="text" size="30" value=""
<input type="submit" value="Login"/><br/>
<i>For example: <tt></tt></i>

<strong>Don't have an OpenID?</strong> <a href="" target="_blank">Go</a>
<a href="" target="_blank">get</a>
<a href="" target="_blank">one</a>.


Friday, August 10, 2007

Downloading JDK to your Linux Server via SSH

This is a pretty common problem that I'm sure almost every Java developer has had to deal with. Due to Sun's licensing restrictions, you cannot simply use up2date, apt-get or yast to get the latest JDK. Even if you add extra repositories for them to look at, it's usually not the latest JDK. On Windows it's easy, since you generally have a UI via Remote Desktop. Linux is usually accessed via SSH, so all you have is your shell command line, which means you can't surf on over to Sun's Java site, click "Ok, I accept all the terms and conditions", and download it.

So you end up downloading it on your local machine, then uploading it to your server via SCP. But this takes forever because you download 60 MB on your slow local connection, then upload it even slower (assuming your upload speed is slower than your download speed). So an hour later you finally get your JDK on the server.

So is there a better, quicker way to do it?

I'm glad you asked. Yes there most definitely is:

  1. Go to jdk download site:
  2. Click Download button on the distro you want (usually the basic JDK)
  3. Accept the license agreement
  4. Right click the download link (usually the RPM in self extracting bin) and in the right click context menu choose Copy Link Location if using Firefox or Copy Shortcut in Internet Explorer.
  5. Now you have the location of the download so go to your SSH shell where you are logged into your Linux server and use the following command:
    wget PASTE_FILE_LOCATION_HERE -O jdk6-rpm.bin
1. You need the -O flag, otherwise the filename ends up far too long
  6. Now install it:
    1. chmod a+x jdk6-rpm.bin
    2. ./jdk6-rpm.bin
  7. And finally, add JAVA_HOME to your environment variables:
    1. create or edit .bash_profile file in /root and add "export JAVA_HOME=/usr/java/latest"
That's about it. Much much faster.

Friday, June 15, 2007

Third App Migrated to Amazon EC2

A couple of days ago, I moved another application to Amazon Web Services. This is the third application I've set up on their infrastructure and I have to say, I think I'm addicted to the throwaway nature of Amazon Elastic Compute Cloud. I've got the whole long term, persistent storage thing nailed and can do quick recoveries, so I am no longer worried about servers crashing... and boy that's a good feeling.

On our previous dedicated servers, there was always a worry about a server meltdown because there was no quick way to get a new one up and running. It could take a day or more. Now I just launch new servers for fun. ;)

The big lessons learned:

  1. Automate absolutely everything. So when something goes wrong, you run a script or two to get a new instance running. And practice, practice, practice doing emergency recoveries (I've had to recover from crashed instances several times now so I learnt this lesson the hard way).
  2. Scale early. As soon as you notice performance or memory limitations, start adding more instances to spread out the load. Need performance? Put your database on a separate server from your app. Need redundancy? Replicate your database across two servers. Need memory? Fire up some more instances just for caching (memcached, ehcache, jbosscache).
  3. Keep your database as small as possible. Store as much data as possible on Amazon Simple Storage Service (S3), NOT in your database, and just store the S3 key to the data in your database. Consider putting any blob or large text fields in S3. This will make it much easier and faster to manage database backups, plus your database will perform better.
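The store-the-key-not-the-blob idea from lesson 3 can be sketched like this. Note that BlobStore and the in-memory map are stand-ins I made up for illustration; real code would call an actual S3 client library and a real database table.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.UUID;

public class S3KeyPattern {

    // stand-in for S3: real code would use an S3 client library here
    interface BlobStore {
        void put(String key, byte[] data);
        byte[] get(String key);
    }

    static class InMemoryBlobStore implements BlobStore {
        private final Map<String, byte[]> blobs = new HashMap<String, byte[]>();
        public void put(String key, byte[] data) { blobs.put(key, data); }
        public byte[] get(String key) { return blobs.get(key); }
    }

    // the "database row" keeps only the small key, never the blob itself
    static class Document {
        String title;
        String s3Key;
    }

    static Document save(BlobStore store, String title, byte[] body) {
        Document doc = new Document();
        doc.title = title;
        doc.s3Key = "docs/" + UUID.randomUUID(); // only this goes in the database
        store.put(doc.s3Key, body);              // the big payload goes to S3
        return doc;
    }
}
```

Backups now only need to cover the small rows; the heavy bytes live in S3 and are fetched by key on demand.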
Are you an EC2 user? Let me know your experiences.

Monday, June 04, 2007

Amazon EC2 Ephemeral Storage (/mnt) and MySQL

Amazon Elastic Compute Cloud (EC2) includes 160 GB of local storage. This storage is split into two partitions:

  1. ~10 GB for your VM/AMI image instance which includes the OS and software.
  2. ~147 GB for your "ephemeral storage" which is where you should put your data or anything that is going to need a lot of space. This storage is in /mnt.
So in order to let your MySQL database grow, you'll need to put the data files in /mnt and the easiest way to do this is to make a symbolic link from the default MySQL data storage location (usually /var/lib/mysql) to a directory on /mnt. Be sure to do this immediately after installing MySQL so you won't run into any issues.

Here is a step by step how to do it:
  1. mkdir /mnt/mysql
    1. Creates your new data directory
  2. chown mysql:mysql /mnt/mysql
    1. Gives mysql ownership to the directory
  3. /etc/init.d/mysql stop
    1. Stop MySQL before moving data files
  4. mv /var/lib/mysql/* /mnt/mysql
    1. Move your data files
  5. rmdir /var/lib/mysql
    1. Delete the old directory
  6. ln -s /mnt/mysql /var/lib/mysql
    1. Create a symlink to your new directory on /mnt
  7. /etc/init.d/mysql start
    1. Restart MySQL
That's all she wrote. One last very important thing to remember: This storage is NOT reliable. If your instance is terminated or crashes, your data is lost forever! So always be sure to do regular backups to S3.

Friday, May 11, 2007

How To Fix LinkageError when using JAXB with JDK 1.6

If you run across an error like this when trying to use JAXB:

java.lang.LinkageError: JAXB 2.0 API is being loaded from the bootstrap classloader, but this RI
(from jar:file:/somedirectory/jaxb-impl.jar!/com/sun/xml/bind/v2/model/impl/ModelBuilder.class) needs 2.1 API. Use the endorsed directory mechanism to place jaxb-api.jar in the bootstrap classloader. (See

It's actually a very simple fix, but painful enough to warrant a post. Put the jaxb-api.jar that you're trying to use into JDK_HOME/jre/lib/endorsed. If the endorsed directory doesn't exist, make it. This is apparently only a problem with JDK 1.6, not with JDK 1.5.
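If you want to confirm which directory the JVM scans, you can print the java.endorsed.dirs system property. A small sketch; note the endorsed mechanism only exists on older JDKs, and on later releases the property is simply null:

```java
public class EndorsedDirs {
    public static void main(String[] args) {
        // directories scanned by the endorsed-standards override mechanism
        // (JDK 1.6 era; the property was removed in JDK 9+)
        System.out.println(System.getProperty("java.endorsed.dirs"));
    }
}
```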

Friday, April 27, 2007

Liveness Rule #2: Asynchronize Everything for Writes

And by everything, I mean at least as much as you possibly can. Writes are slow for all the same reasons brought up in the caching post, and then some. With writes you can add more potential performance speed bumps to the "database hit list", such as table or row locking (depending on how the database handles it) and disk write speeds.

Caching works wonders on the Read side and because caching allows you to take the Read load off the database, it actually does help a lot to increase Write performance. What else can we do to increase liveness for operations that might take more than a fraction of a second? The answer: Go asynchronous.

Good examples of things that should be asynced:

  • Sending mail notifications. This usually takes a significant amount of time, so you should always async it.
  • Database updates that the user doesn't need to see a response for. For instance, logging, counters (how many times has the user viewed item X), and so on.

How Do I Implement Asynchronous Ops?

If you're using Java, the java.util.concurrent package is your new best friend. It has everything you need to asynchronize things in a simple and elegant way. The classes you will be interested in are Executor, ExecutorService, and Executors (static factory methods). An Executor is essentially just a queue of tasks with a thread pool dedicated to executing those tasks.

Here's a quick getting started guide to asynchronicity heaven:

  1. Create the ExecutorService:
    ExecutorService executor = Executors.newFixedThreadPool(THREAD_POOL_SIZE);
    Keep this available as a global variable; you can use this same Executor throughout your entire application if you want.
  2. Now pull out any code you want to execute asynchronously into small tasks that implement Runnable. For example, an emailer task might look like this:
    public class EmailTask implements Runnable {
        private Email email; // the email we want to send
        public EmailTask(Email email) { this.email = email; }
        public void run() {
            // send the email here
        }
    }
  3. Now when executing the event action, create a new instance of your Task class and pass it to the ExecutorService's execute method:
    EmailTask emailTask = new EmailTask(email);
    executor.execute(emailTask);
  4. Done! That task will execute at some point in the future and you can respond to your user immediately without making him/her wait.
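Putting the steps above together, here is a minimal, self-contained sketch. The email "send" is faked with a counter so it runs anywhere; swap in your real mail code inside run():

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class AsyncDemo {

    // stand-in for a slow operation such as sending an email
    static class EmailTask implements Runnable {
        private final String address;
        private final AtomicInteger sent;
        EmailTask(String address, AtomicInteger sent) {
            this.address = address;
            this.sent = sent;
        }
        public void run() {
            sent.incrementAndGet(); // pretend we sent mail to 'address'
        }
    }

    // submit n tasks to a pool, then wait for the queue to drain
    static int sendBatch(int n) throws InterruptedException {
        ExecutorService executor = Executors.newFixedThreadPool(4);
        AtomicInteger sent = new AtomicInteger();
        for (int i = 0; i < n; i++) {
            // execute() returns immediately; the pool does the work later
            executor.execute(new EmailTask("user" + i + "@example.com", sent));
        }
        executor.shutdown();                            // accept no new tasks
        executor.awaitTermination(5, TimeUnit.SECONDS); // wait for completion
        return sent.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("emails sent: " + sendBatch(10));
    }
}
```

In a real web app you would keep one shared ExecutorService for the application's lifetime rather than shutting it down per batch; the shutdown here is just so the demo exits cleanly.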

Another huge benefit of asynchronizing things is that your application automatically becomes more scalable and doesn't degrade as traffic goes up. Let's say you did the above example synchronously and it took 2 seconds on average to send an email. That may be fine when you have a small number of users (although 2 seconds is way too long to make someone wait), but what happens if you have 1000 users? That's a total of 33.3 minutes (2000 seconds) of waiting time just for 1000 users. And let's say you have a maximum of 200 processing threads for your app server (the Tomcat default). It won't take much to run out of processing threads if your traffic goes up, so now you have people waiting to get the application to respond at all. Pretty soon your app is on its knees, dying a slow painful death. You can avoid all of these problems by taking a few extra seconds to implement it like I've shown above.

Learn it, live it, love it. Once you start getting into the habit of doing things like this, there's no turning back and you can rest easy knowing that you've future proofed your application.

Liveness Rule #1: Cache Everything for Reads

Caching is really the holy grail of liveness. Consider the difference between hitting your database for every request vs. pulling that data directly out of memory. Here is a small comparison of performance-related properties.

Cache Hit:

  1. Near instantaneous access to data
  2. Concurrent contention is virtually nil
  3. Limited amount of memory

Database Hit:

  1. Disk seek speed
  2. Concurrent contention as threads wait for disk access
  3. IO speed
  4. The database query speed
  5. Connection setup/teardown
  6. Marshalling and unmarshalling
  7. Large amount of disk space available

These are all pretty obvious points, but they highlight how much more work is done when you don't cache. All of it is really taxing on the entire system, much more so than pulling data from memory, which also means a cached application is far more scalable.

Our experience:
Traffic was increasing in large amounts and the response times were getting worse and worse. Not to the point where it was the end of the world, but bad enough that we had to do something about it before it got even worse (over a second is bad). Now that we cache almost everything, response times are down to a few milliseconds, right up there with Google search responses.

Caching is actually pretty simple to implement. No matter what solution you use, the interface is always something like java.util.Map where you put, get, and remove elements.
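As a minimal sketch of that Map-like surface (a real solution like memcached or ehcache adds eviction, expiry, and distribution on top of this):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// A bare-bones in-memory cache with the put/get/remove interface
// that virtually every caching library exposes in some form.
public class SimpleCache<K, V> {
    private final Map<K, V> map = new ConcurrentHashMap<K, V>();

    public void put(K key, V value) { map.put(key, value); }
    public V get(K key)             { return map.get(key); }
    public void remove(K key)       { map.remove(key); }
}
```

Usage is the read-through pattern: on each read, try the cache first; on a miss, hit the database, put the result in the cache, and return it. On writes, remove (or overwrite) the stale entry.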

Thursday, April 26, 2007

Liveness In Web Applications

Liveness: A concurrent application's ability to execute in a timely manner is known as its liveness.

In other words, liveness is making your application respond extremely fast to your users' requests. Having had to deal with this over the past few months on rel8r, I thought I'd share my experiences.

  1. Caching
  2. Asynchronizing

Tuesday, January 02, 2007

Renewing SSL Certificates on Tomcat

So I had to renew a couple of SSL certificates that are used by sites running standalone Tomcat; here is what I had to do:

  1. Generate a new CSR request. This is easier than when first starting out since you've already created your keypair from the last time you bought your certificate:
    keytool -certreq -keyalg RSA -alias tomcat -file certreq.csr -keystore .keystore
  2. Now open certreq.csr in a text editor and copy and paste the contents into your SSL issuer's website form to finish the process of getting your new certificate.
  3. Now you should get an email to verify that you are the one who submitted the request, with a link to approve it. This email goes to the address on your domain's whois record to verify that you own the domain.
  4. After you approve the SSL request, you can download your new certificate along with the issuer's intermediate certificate.
  5. You must first install the issuer's intermediate certificate:
    keytool -import -alias intermed -keystore .keystore -trustcacerts -file sf_issuing.crt
  6. Then import your fresh new certificate:
keytool -import -alias tomcat -keystore .keystore -trustcacerts -file <your_new_certificate.crt>
  7. And finally, restart Tomcat and double check by surfing to your page and checking out your certificate by clicking on the padlock icon on your web browser. Make sure the expiry date is correct.
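If you'd rather verify the expiry date right from the shell where the keystore lives, a small Java utility can list what's in it. The keystore path and password in the commented line are yours to fill in; the demo in main reads the JRE's own cacerts trust store, since that ships with every install:

```java
import java.io.File;
import java.io.FileInputStream;
import java.security.KeyStore;
import java.security.cert.X509Certificate;
import java.util.Enumeration;

public class CertCheck {

    // Print each certificate's expiry date; returns how many were found.
    // Pass the keystore password, or null to skip the integrity check.
    static int printExpiries(String path, char[] password) throws Exception {
        KeyStore ks = KeyStore.getInstance(KeyStore.getDefaultType());
        FileInputStream in = new FileInputStream(path);
        try {
            ks.load(in, password);
        } finally {
            in.close();
        }
        int count = 0;
        Enumeration<String> aliases = ks.aliases();
        while (aliases.hasMoreElements()) {
            String alias = aliases.nextElement();
            java.security.cert.Certificate c = ks.getCertificate(alias);
            if (c instanceof X509Certificate) {
                System.out.println(alias + " expires " + ((X509Certificate) c).getNotAfter());
                count++;
            }
        }
        return count;
    }

    public static void main(String[] args) throws Exception {
        // For your Tomcat keystore:
        // printExpiries(".keystore", "your_password".toCharArray());
        String cacerts = System.getProperty("java.home") + File.separator + "lib"
                + File.separator + "security" + File.separator + "cacerts";
        printExpiries(cacerts, null);
    }
}
```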

That's about it. Nice and easy.