Tuesday, July 17, 2007

The n-Tier world .. hardware

Does an n-tier hardware environment actually make things better?  I was asked this question recently and, to be honest, it made me pause and reflect on the promises of the n-tier world versus its reality.


The n-tier push really started taking hold, again, in the '90s, when the Internet was young and foolish and so was Al Gore.  As a result, most people think of an n-tier environment as one that is web browser based.  While this is not always the case, the majority of applications currently being developed are web based, so for the purposes of this discussion we will assume that our n-tier environment consists of a web browser communicating with a web server, which in turn communicates with a database server.


With regard to the web browser, the idea was that with a "light" front end, downloaded each time you requested it, you did not need as much processing power on the desktop (Internet Computer, anyone?) and you could change the application without having to distribute it to hundreds, thousands or even millions of customers.  This has proven to be a valuable and worthwhile benefit of browser based deployments: quicker changes with less client impact.


Separating the main logic onto a web server or cluster of web servers (see previous notes about this) allows the developer to change the application in a limited number of locations.  While this has let developers deploy applications quickly, the problem lies in the fact that the developers build the application as if it were the only thing running on the server, when in reality it is usually one of many applications.  Resource contention (memory, CPU) usually means that going to a cluster of servers becomes a requirement.  It is also a common misconception that adding more servers will make the application run faster.  Adding more servers allows you to support more simultaneous users, but does not necessarily make the application run faster.  As a result, a poorly performing application will perform just as poorly on one machine as on a cluster of 20, although you can annoy more people with the cluster.
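To make that concrete, here is a rough back-of-the-envelope sketch (in Python, with made-up numbers) of what adding web servers actually buys you: more requests served at once, but the same wait for each individual user.

```python
# Rough illustration only: adding servers raises how many concurrent users
# you can serve, but each request still takes as long as the application
# code needs.  All numbers below are invented for the example.

REQUEST_TIME_SEC = 2.0      # how long one request spends in the application
WORKERS_PER_SERVER = 25     # concurrent requests one server can handle

def peak_throughput(servers: int) -> float:
    """Requests per second the cluster can sustain (Little's law)."""
    return servers * WORKERS_PER_SERVER / REQUEST_TIME_SEC

for n in (1, 5, 20):
    print(f"{n:2d} server(s): ~{peak_throughput(n):6.1f} req/s, "
          f"but each user still waits {REQUEST_TIME_SEC} seconds")
```

The point of the arithmetic is simply that the per-request time never changes as the cluster grows; only the number of people you can serve (or annoy) at once does.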


By placing all of the database access on one machine, or a cluster of machines, fewer connections to the database need to be monitored and managed on the database server.  This reduces memory and CPU usage and allows the database to concentrate on serving data to clients.  Unfortunately, this is where one of the biggest problems in the n-tier world lies.  Developers need to optimize their code when accessing the database.  Whether that means reducing locks and deadlocks, shortening transactions or simply designing better applications and databases, the database back end, regardless of whether it is an Intel box, a Unix server or an OS/390 behemoth, can only handle so much work.  Web servers can be clustered, but in order to get more than one database server to service requests against the same data you need a much more sophisticated and much more complicated environment.  Just adding another server won't cut it, as the database server is constrained by the memory and CPU it can use.
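As a small illustration of the kind of optimization I mean, here is a minimal sketch, assuming PostgreSQL with the psycopg2 driver and a hypothetical orders table, of two habits that take load off the database server: sharing a small pool of connections and keeping each transaction short.

```python
# Minimal sketch (assuming PostgreSQL, psycopg2 and a hypothetical "orders"
# table): pool a handful of connections instead of opening one per request,
# and hold locks only for the duration of a short transaction.
from psycopg2.pool import SimpleConnectionPool

# A few pooled connections serve many web-server threads, so the database
# tracks a handful of sessions instead of one per user.
pool = SimpleConnectionPool(minconn=1, maxconn=10,
                            dsn="dbname=app user=app password=secret")

def record_order(customer_id: int, amount: float) -> None:
    conn = pool.getconn()
    try:
        # Do the work and commit immediately: locks are held only for this
        # short transaction, not for the whole page request.
        with conn.cursor() as cur:
            cur.execute(
                "INSERT INTO orders (customer_id, amount) VALUES (%s, %s)",
                (customer_id, amount),
            )
        conn.commit()
    except Exception:
        conn.rollback()
        raise
    finally:
        pool.putconn(conn)   # return the connection for the next request
```

None of this is exotic; it is exactly the sort of discipline that keeps a single database server from becoming the bottleneck in the first place.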


So, has n-tier lived up to its promise?  Sort of.  The web browser side:  yes.  The web/application server side:  mostly.  The database side:  not as much as expected.  The problem is not the technology, but rather the people.  We have created an infrastructure that can do great things.  What we need to do now is teach people how to create great things with that infrastructure.
