Thursday, May 31, 2007

Rule #1 --- Redone

I love it when I’m wrong, because I learn things. As David Woods pointed out yesterday, my example of the minimal effort you should put into a Catch was flawed. My code contained the following:

    Catch ex As Exception
        Throw ex
    End Try

WRONG


The problem with my post is that all of the stack information is lost. If you follow the example that David Woods has on his blog you will see what I mean. The correct code should have been:

    Catch ex As Exception
        Throw
    End Try

Or, if you don’t like typing, this minimalist approach:

    Catch
        Throw
    End Try

In the end, though, the point that needs to be made is that you need to either handle the exception yourself (handle does not mean ignore) or bubble it up to see if someone else wants to handle it. In addition, you need to include in the exception information that will help in debugging the problem.

The Stakeholder Registry has extensive exception information embedded within it. When its code experiences something like a SQL timeout, it displays the stored procedure that was being called and even the parameters that were being passed in. This is tremendously helpful in determining what the problem might be.
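As a sketch of what that might look like, here is a data-access method that wraps the original exception with the context a maintainer will need. The method, stored procedure, and parameter names are all invented for illustration; they are not the actual Stakeholder Registry code.

```vbnet
Imports System.Data.SqlClient

' Hypothetical data-access method.
Public Sub UpdateStakeholder(ByVal stakeholderId As Integer)
    Dim spName As String = "usp_UpdateStakeholder"
    Try
        ' ... execute the stored procedure here ...
    Catch ex As SqlException
        ' Preserve the original exception as the InnerException and
        ' record the procedure name and parameters for debugging.
        Throw New ApplicationException( _
            String.Format("Error calling {0} with StakeholderId={1}", _
                          spName, stakeholderId), ex)
    End Try
End Sub
```

The key point is the second argument to the Exception constructor: the original exception rides along as the InnerException, so no stack information is lost while the debugging context is added.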

Rule #2

I mentioned that Rule #1 was "If you catch an exception, you better record it or re-throw it." Today I will talk about the next rule in the series.


Rule #2: If you throw or catch exceptions, you should use the finally clause to clean up after yourself.


When you throw an exception, or catch an exception, one of the biggest things that happens is that you start to move outside the boundaries of the flow mechanism that you had in place. OK, what this really means is that when you throw or catch you skip a lot of code. If you are using resources that are not simple .NET objects, you need to clean up after yourself. The last part of the Try ... Catch ... Finally block is the Finally clause and this helps you clean up.


The code that you put in the Finally clause is always executed at the end of the Try ... Catch block. So, whether you re-throw the exception, throw a new exception or, regretfully, ignore the exception, the code in the Finally clause will be executed. This gives you the opportunity to clean up after yourself by disposing of those resources and objects that are not simple .NET objects. This includes things like file handles, SQL Server connections, or other resources not necessarily handled by managed .NET code.
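For example, a connection opened in the Try block can be closed in the Finally clause no matter how the block exits. This is only a sketch, and the connection string is assumed to exist elsewhere:

```vbnet
Imports System.Data
Imports System.Data.SqlClient

Dim conn As New SqlConnection(connectionString) ' connectionString assumed
Try
    conn.Open()
    ' ... do the work that might throw ...
Catch ex As SqlException
    ' Record it or re-throw it (Rule #1).
    Throw
Finally
    ' Runs whether we succeeded, re-threw, or (regretfully) ignored
    ' the exception, so the connection is never leaked.
    If conn.State = ConnectionState.Open Then
        conn.Close()
    End If
End Try
```

In .NET 2.0 the Using statement gives you the same guarantee for anything that implements IDisposable, with less typing.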


This does a lot of things, not the least of which is ensure that your application does not run out of those resources!!! So, like I always tell my girls when they leave the kitchen table, "clean up after yourself, because no one else is going to do it for you."

Wednesday, May 30, 2007

Try and Catch

Please note that there is an error with this entry. See Rule #1 --- Redone for details.


Exceptions are powerful tools that give the developer the ability to understand what is going on with their application, particularly when there is a problem. Sadly, many programmers do not use this feature, or implement it so poorly as to provide no meaningful information.

Rule #1: If you catch an exception, you better record it or re-throw it.

What does this mean? Take a look at the following code:

    Catch ex As Exception

    End Try


What this code does is catch the exception, then let it get away. It’s like putting cheese on a mouse trap, but then gluing down the trigger so that the mouse can get away. I mean, seriously, what are you thinking? At the least, at the very least, you should have the following:

    Catch ex As Exception
        Throw ex
    End Try


This at least re-throws the exception so that something above you can make a decision as to what needs to be done. If I had my druthers, however, it would be more like this:

    Catch ex As NotImplementedException
        Throw New Exception("Blow_Up3 does not implement that functionality", ex)
    Catch ex As Exception
        Throw New Exception("Unforeseen error in Blow_Up3 trying to access Don’s bank account", ex)
    End Try


The method that initially gets the exception has so much more information available to it than the calling method does that it would be a shame to ignore that information and make life more complicated for everyone.

Thursday, May 24, 2007

DataSets

DataSets are an amazing construct. They allow you to pass back an entire result set, regardless of how many tables, to the caller and have them work with the data in the fashion that they want. Relationships can be established between tables, queries can be run against the dataset and updates can be made to the table.

You remember the saying "If something is too good to be true, it is"? Guess what? It's true with DataSets as well.

First of all, let me confess up front that I am not now, nor have I ever been, a very big fan of datasets, but probably not for the reasons you're thinking. I trust you will keep this in mind as you read.

DataSets provide a wealth of functionality to the developer, but it comes at a significant cost as well. DataSets hide much of the complexity associated with databases, particularly in the area of updating and populating fields on a web page or a report. While this hiding is, in some respects, quite welcome, it lulls the developer into a false sense of complacency and prevents the developer from truly understanding what is happening. I cannot tell you how many times over the past few years I have seen developers pass around hundreds of megabytes of data in a single dataset because they could. DataSets make it easier to be blissfully unaware of the consequences of doing something because they hide things so well.
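The cheapest fix is usually to ask the database for only the rows and columns the page actually needs, rather than filling a DataSet with "SELECT *". A sketch, with made-up table and column names:

```vbnet
Imports System.Data
Imports System.Data.SqlClient

' Fetch just the columns and rows the page will display,
' instead of the entire Customers table.
Dim da As New SqlDataAdapter( _
    "SELECT TOP 50 CustomerId, Name FROM Customers WHERE Region = @region", _
    conn) ' conn is an open SqlConnection, assumed to exist
da.SelectCommand.Parameters.AddWithValue("@region", "West")

Dim ds As New DataSet()
da.Fill(ds, "Customers")
```

The DataSet is still there for binding and updating; it just holds fifty narrow rows instead of the hundreds of megabytes it could have held.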


When I was younger I learned IBM 360 Assembler, both at the U of A and at NAIT. This low level language made me very conscious of the amount of data I was using and the most effective ways of manipulating it. Even earlier versions of Visual Basic (1.0 through 5.0) were fairly good at making you aware of what you were doing and its impact. As the complexity of the lower levels of programming has been covered up and hidden by successive updates to the languages and frameworks that support them, the ability of developers to understand the impact of what they are doing has diminished. Today we have cases where hundreds of megabytes or even gigabytes of data are routinely moved from process to process because it is so simple to do so. The underlying impact, however, can bring a server to its knees.


While I'm not advocating that everyone learns IBM 360 Assembler, I am advocating that developers fully understand the objects they are using and the impact of using those objects. If you aren't sure of the impact of what you are doing, experiment a little more, read a little more, learn a little more. The more you understand what you are doing with the languages you are using, the more productive you will be.

Wednesday, May 23, 2007

Failure

Being old, excuse me, older than many of you gives me an advantage over you in a number of ways. I will be able to get the senior rate at the movies before you and I will be able to get discounts at hotels before you. What it has also done is given me the opportunity to fail more often than you.

One of the best teachers in the world is failure, as it shows you what went wrong and what not to do. All you need to do now is learn from that failure and try to prevent that same situation from happening again. As someone who has been in this field for 20 years I have experienced a lot of failures, both on my part and those with whom I’ve worked. Each failure has been a learning experience that has allowed me to gain some piece of knowledge such that I am able to either not fail in the same manner or at least recover faster.

Unfortunately, failure is often seen as a bad thing, and from an overall project perspective it most certainly is a bad thing. However, small individual failures are not something that should be frowned upon, but embraced. Scott Berkun, in The Art of Project Management, wrote:

Courageous decision makers will tend to fail visibly more often than those who always make safe and cautious choices.

This applies to everyone who makes decisions, from the project manager down to the developer. If a decision was made that was, at the time, the right decision, celebrate the decision, regardless of whether or not it was a success. If the decision was bad, educate the decision maker so that they can learn from their mistake. (Educate does not mean punish.) By telling people you expect them to be perfect and that you do not expect any problems, you are telling them to play things safe and not try anything new. Mankind didn’t go to the Moon by playing it safe. IBM played it safe with the personal computer and lost. Risks need to be taken at certain points, and we need to teach all of our staff, from developers to project managers, when failure and risk are a good thing.

Writing Enough Code

Test Driven Development talks about writing enough code to pass the test. XP (Extreme Programming) talks about writing just enough code to meet the requirements. In both cases they make a case that you should not code for things that are not required, nor should you code for possibilities that may or may not occur.

This can easily be taken to extremes, however. I worked with a young man who took the idea of “just enough code” and went too far with it. He was writing an application that was designed to accept yearly reports. It was known from the start that the reports needed to be stored by year, searched by year, printed by year, etc. However, in the first release there was no historical data that needed to be kept; everything was in the current year. So, guess what? He omitted years from everything he did, database and code, because “it wasn’t necessary for this release of the application”.

Yes, write just enough code, but also use common sense. If you know for a fact that you are going to need to do something in a future release, don’t ignore that fact just because the current release doesn’t require it. If you know that you need to handle multiple years, design and code for that early on, even if the year is always going to be the same. Writing just enough code and common sense are not mutually exclusive ideas, at least, for most people.

Branching Guidance

Sometimes there just isn’t a shortcut to the right answer. You know what I mean: instead of researching the answer yourself, you lean over, talk to your buddy for 30 seconds and he gives you the answer you need. In many cases this works when you’re trying to solve a silly little problem or you just can’t remember the name of the runner-up in last year’s American Idol.

Other problems, unfortunately, require that you understand the background behind the solution before you can actually understand the solution itself. String theory is like this. So are some aspects of quantum mechanics. Most business problems don’t fall into this level of complexity, although I have seen the odd case where PhDs would be confounded by the sheer complexity of what has been engineered. Not necessarily what was required, just what was engineered.

In some cases there is some simple help, but it does require a bit of reading. I was recently asked for information about when to do branching and exactly how it should be done. In this case, I went to somebody who needs to do this on a frequent basis: Microsoft. Indeed, the information at CodePlex was excellent in terms of its understanding of the problem and the potential solutions. For those of you who think you understand how branching should be done, and for those of you who are at a loss, I recommend this document as an excellent source of information from which you can retrieve the bits and pieces that are of particular interest to you. It is not a light read, as the amount of information it contains is quite voluminous (approximately 28 pages), but it gives you some interesting insight into an arcane subject.

Soccer Referee

Paraphrasing is a lost art in the IT world, but it is an art that really needs to be emphasized more. When I was younger I was helping a friend referee a soccer match. (Football to you foreigners.) He wanted me to be a linesman and he told me, “When the red team kicks the ball out of bounds I want you to point the flag in the direction that the blue team will be moving when they get the ball on the throw-in.” This made perfect nonsense to me, as that seemed much too complicated, so I paraphrased it: “You mean, point the flag at the team that kicked the ball out?” This confused him for a moment as he struggled to reconcile what I had repeated back to him with what he had told me, but he gradually agreed that the impact would be the same.


Sometimes when we write up specifications for an application we are too deep in the details and too aware of the intent, but not fully aware of the impact. We need to step back, take a look at what we have said or written, and see if we can rephrase it to make it simpler, yet still retain the same meaning. I do this quite often when writing these one minute comments. You should see some of the stuff that I write and throw out. (Then again, you have seen the stuff that I’ve gone ahead and sent out.) For instance, I’m currently writing this note because the one on testing just doesn’t make any sense when viewed outside the original context, which most of the readers will not have.


The same thing is true of specifications. Not everyone reading the specification is going to have the same background as you or be operating in the same context. Not everyone is going to be an expert in the business area involved. (Or in the subtleties of being a soccer referee.) What you write in a specification needs to be easy to understand, even for those who are unfamiliar with the business process. If it isn’t easy to understand, then you need to step back, clear your mind, and try again. If it is hard for someone who knows the business to write the specifications, imagine how hard it is for someone not familiar with the application to understand what you have just written.

Tuesday, May 22, 2007

Best Practices

There are a ton of best practices floating out there in the infamous nether of cyberspace. Many of these interesting tidbits have actually come to roost in the minds of architects, programmers, testers, and, yes, even Project Managers. The question is, how do these best practices get communicated out to everyone?

Organizations sometimes do this through the creation of standards and templates for people to follow. These can be advantageous in that they prescribe certain actions that must occur. Standards, however, have some disadvantages in that the time from the creation of the standard to its implementation can be quite long. In other cases the standard provides either too much or too little guidance and subsequently causes more confusion than if the standard had not existed. Enforcement is also a tricky thing to implement, as grandfathering old projects needs to be weighed against the benefit of following the standard.

Some organizations produce lighter weight guidelines for people to follow. A guideline is a watered-down standard in that it has not followed the same rigorous approval process, but it is still considered something that should be followed, when possible. Guidelines, however, because they are not standards and subsequently not enforced, do not always provide the structure that is necessary to take full advantage of the material in question.

At the far end of the scale are those organizations that publish standards in a much more informal manner. The mere act of a certain group publishing something gives that tidbit of information the status of an organizationally approved standard that must be followed and will be enforced. This method presupposes that the group publishing the work is granted sufficient authority to make those decisions on behalf of the organization. Sometimes this authority is granted on a wide scale (all IT standards) or on a very narrow scale (all Visual Basic Programming Standards), depending upon the comfort level that management has with the group.

All of these methods, however, share one key thing: communication. Even on a project by project basis these communication mechanisms can be used to disseminate project standards to the rest of the team. Whether this is done through formal documents (standards), informal guidelines, or by having the Application Architect send out emails or write a blog, any method of communicating standards and guidelines is better than none, so get those best practices out of your head and on to paper (electronic or wood pulp) and let other people benefit from your experience.

Friday, May 18, 2007

High Performance Teams

In another life I was busy researching the idea behind "High Performance Teams" (HPT). These teams are not NASCAR fans, nor are they hooked on amphetamines. Instead, they are a group of individuals who work with each other really well and outperform other similar groups in terms of their quality of work and the speed with which the work gets done. You've seen these teams in hockey as the coach will normally put certain players together and keep them together throughout the season with few changes.

In IT, however, the concept of a high performance team does not always seem to be understood or even implemented in many areas. A team can be as small as two people, or it can be much larger, but there are some key traits that all of these teams share. (OK, here is where I differ from conventional wisdom so if you want you can tune out, even though you may be missing some really cool stuff.)

  • Trust. Perhaps the most important trait is that the members of the team trust each other to make the right decisions or at least a decision that can be lived with by everyone.

  • Communication. Team members communicate with each other effectively. Different people understand things in different ways. Some people like metaphors, others like analogies while others love diagrams. In a HPT the appropriate mechanism is used at the right time to maximize the effectiveness of the communication.

  • Commitment. Each team member knows that every other member of the team is just as committed as they are to producing a high quality product.

  • Continuous Improvement. An HPT is not satisfied with the status quo, they want to do the next job better than they did the last job and the one before that, by continually improving how things are done.


Some organizations are not ready for HPTs as it means setting a group up as being "special". Others are not interested as they believe, rightly or wrongly, that if people just follow the process everyone would be part of an HPT. Some larger projects do implement this concept within the overall project and find that the HPT is extremely productive and crucial to the success of the project.


It may not be your cup of tea, but at least you're aware of the possibilities.

Tuesday, May 15, 2007

Project Managers and Bullies: Are They One and the Same?

I hope my tease yesterday kept you on pins and needles. I have had the good fortune to be on a wide number of project teams, some of which had good project managers, some of which had bad project managers and some of which had me. Throughout all of these projects there is a common theme of trying to lead the team. Different PMs had different ideas about how to lead the team with some methods being more effective than others.

One popular method of leading the team is through the use of “It’s my way or the highway”. This is rarely effective, as it stifles creativity, leads to aggressive behaviour between the team members and the PM, and usually causes people to devalue the importance of the PM. (Sometimes it leads to team members calling the PM an “arrogant SOB from Chicago”, but that is another story.)

Another popular method is consensus. Everyone needs to agree on something before it is done. While this may work in some areas, developing IT solutions requires a vision that needs to be followed. Whether that is the vision of one person or of the organization as a whole, there needs to be someone driving the project forward, and that is the PM.

An effective PM leads through something I like to call “motivation”. You need to be able to get the team engaged and believing that their part, no matter how small or obscure, is part of the overall picture and is important to the success of the project. And you know what, it is!!! Everything that someone is doing on the team is helping to craft the final product and is important, because if it’s not, why are you doing it? The PM needs to be cheerleader, traffic cop and salesman all rolled into one. Being a bully doesn’t cut it as that devastates the motivation piece and does nothing to engage the project team members.

Are Project Managers bullies? No, but that doesn’t mean they can’t be strong willed and opinionated, just that they know when to apply these features to the tasks at hand.

(Any similarities to people living or dead is purely coincidental, unless you are referring to that arrogant SOB from Chicago in which case …)

Project Managers: Who are they?

My apologies to those people who do the work of a Project Manager but don’t necessarily have the title of Project Manager as I may have unintentionally left you out. In some of my commentaries I refer to a Project Manager (notice the capital letters) but I never actually defined what a Project Manager does or explain that this may be more of an informal role that someone on the project team is playing as opposed to a formal title and list of responsibilities.

In my mind, at least today, as I’m writing this, a Project Manager is a person who guides the team closer to reaching the ultimate goal of creating a working, quality application for a client. This may be something which is being done on a piecemeal basis, in which case the Project Manager may be the same person as the developer and designer – a team of one – or it may be something much more complex like a multi-year, multi-phase project where each phase has a Project Manager and the overall implementation has an uber Project Manager called a Programme Manager. In either case the target is the same, although the scope of action may be narrower.

So, why am I bringing this up? I’m reading a book called The Art of Project Management by Scott Berkun and it has a number of nuggets of information that I will probably be sharing with you over the course of the next few weeks. I wanted to make sure that we are all on the same page so that tomorrow, when I talk about “Project Managers and Bullies: Are They One and the Same?”, we all know the type of people I am talking about.

Saturday, May 12, 2007

Refactoring -- refactored

It looks like "refactoring" is indeed a four letter word for some of you, while others seem to have no problem with it. Perhaps what I need to do is define a little more closely what I mean by refactoring or, to be more precise, what I didn't mean.


  1. If you are not actively involved in making a change to a particular method or function, you do not refactor it. "If it ain't broke don't fix it", in this case, is true. When you open up a program your purpose is to implement specific functionality. Modifying code that is related to delivering that functionality is part of refactoring. If you see another method that needs to be refactored, put in some comments so that the next person knows what to do, but unless it is involved in what you are doing, don't refactor it.
  2. If your refactoring is going to end up changing interface contracts with other code, you are no longer refactoring, you are redesigning. At this point put down the mouse and tell the Project Manager. If you need to change the interface contract to deliver some specific functionality (see item #1 above) then there may be more problems and it should be looked at in closer detail.
  3. If you refactor code, you retest code. Now, since you are only refactoring code that you are changing to implement specific functionality (see item #1 above) that shouldn't be a problem. This code should be approached as if it is fresh, newly written code, and if it has been refactored it very well may be, so a complete test is going to be required. (Automated Unit testing anyone?)
  4. If you aren't sure if it is going to make the code better, don't refactor it. If you aren't sure if it is going to make things more maintainable, don't refactor it. If you aren't sure what constitutes better code or more maintainable code, don't refactor it.
  5. If your only purpose is to make it look prettier, don't refactor it.

Refactoring is meant to improve things, but it should also be limited to what you are working on, not unrelated areas of the application. These boundaries are put in place so that people don't spend all of their time refactoring code and none of it adding new functionality.
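As a small example of the kind of in-place, behaviour-preserving change I mean (the method and numbers are invented), here is a before and after. The interface contract is untouched; only the inside gets cleaner:

```vbnet
' Before: a magic number buried in the method you are already changing.
Public Function InvoiceTotal(ByVal subTotal As Decimal) As Decimal
    Return subTotal + (subTotal * 0.07D)
End Function

' After: the rate is named and the calculation extracted. Callers see
' the same signature and get the same result, so this is refactoring,
' not redesign (see item #2 above).
Private Const TaxRate As Decimal = 0.07D

Public Function InvoiceTotal(ByVal subTotal As Decimal) As Decimal
    Return subTotal + CalculateTax(subTotal)
End Function

Private Function CalculateTax(ByVal subTotal As Decimal) As Decimal
    Return subTotal * TaxRate
End Function
```

And, per item #3, the refactored method gets retested as if it were brand new code.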

Friday, May 11, 2007

Really long rows

I'm a curious fellow. Sometimes much to my chagrin. But every so often I come across something which kicks the old brain cells into gear and gets me to writing. Something that I think the world (or in this case the people reading this note) need to hear.

I was looking over some stored procedures the other day and discovered a stored procedure that returned a table with a large number of columns. A disturbingly large number of columns. I lost count at around 240 or 250. What was particularly disturbing was that a relational set of tables was being compressed into a single row.

By way of example, imagine if you will a database that stored your personal information and all of the credit cards you had. For each credit card it recorded the date of the last payment and the amount of the payment. Now imagine compressing that relational information into a single row. You would have a column for VisaPayment and VisaPaymentDate as well as AMEXPayment and AMEXPaymentDate. While in a static world this may not be a problem, what if you get another credit card? The layout of that row would need to change because you now need to add additional columns. Indeed, every time you added a different credit card or, hopefully, paid off a credit card, you would need to change the layout. This is something that is headed for disaster.
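Modelled relationally, the payments become rows instead of columns, and a new card is an INSERT instead of a layout change. A sketch using two related DataTables (all table and column names are invented for this example):

```vbnet
Imports System.Data

Dim ds As New DataSet()

Dim person As DataTable = ds.Tables.Add("Person")
person.Columns.Add("PersonId", GetType(Integer))

Dim card As DataTable = ds.Tables.Add("CreditCard")
card.Columns.Add("PersonId", GetType(Integer))
card.Columns.Add("CardType", GetType(String))      ' "Visa", "AMEX", ...
card.Columns.Add("LastPayment", GetType(Decimal))
card.Columns.Add("LastPaymentDate", GetType(Date))

' One person, any number of cards -- no new columns ever required.
ds.Relations.Add("PersonCards", _
    person.Columns("PersonId"), card.Columns("PersonId"))
```

The same idea applies to the underlying database tables: a child table keyed on PersonId replaces the 250-column row.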

Refactoring. There, I said it. The code needs to be refactored. It may be working right now, but there is a brick wall coming up fast and the brakes are failing, so it either needs to be fixed or we need to take out a lot of insurance. Fast.

(Please note that this was a fictional example, sort of. Look at the code you're writing and make sure that it doesn't have this brick wall built into it.)

Thursday, May 10, 2007

Business Tier or Database

In our current production environment, indeed, in most production environments around the world, there are multiple machines front-ending access to the database. In our particular set of circumstances we have a number of web/application servers sitting in front of a database server. It seems rather obvious, but I will say it anyway: this means that the database server is the "tier" which is the hardest to scale. We can add web and application servers with relative ease, but it is downright difficult to spread a single database over multiple servers without some really funky maneuvering.

So, if this is the case, and I know that most of you understand this, why are people still putting excessive amounts of processing on the database tier?

Yes, you should minimize the amount of data that you are transmitting across the tiers, but this doesn't mean that you need to move your business tier to the database!!! There is a balancing act that developers need to perform in order to properly distribute the workload in their application, and part of that balancing act is understanding where to put different pieces of logic. If you are performing a number of different calculations on the database server just to avoid sending back a single extra column to the client, don't bother, as it may be more efficient to do it in your code than in T-SQL. This is something that you will need to test and figure out.
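For instance, rather than having the stored procedure compute a derived value for every row, it may be cheaper to return the raw columns and do the arithmetic in the business tier. A hypothetical example, with invented table and column names:

```vbnet
Imports System.Data

' orderTable is assumed to be a DataTable returned by the data layer,
' containing the raw UnitPrice and Quantity columns.
For Each row As DataRow In orderTable.Rows
    ' Extended price computed here, on a cheap-to-scale web server,
    ' instead of in T-SQL on the hard-to-scale database server.
    Dim extended As Decimal = _
        CDec(row("UnitPrice")) * CInt(row("Quantity"))
    ' ... bind extended to the page ...
Next
```

Whether this actually wins depends on the data volumes involved, which is exactly the kind of thing you need to test, or ask your DBA about.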

Or, talk to your DBA. They've been through this before and your question is probably something that they've answered dozens of times. If they haven't then I'm sure that they are going to enjoy figuring out the answer.

Wednesday, May 09, 2007

A Common Phone Call

Occasionally we receive calls at the Deployment team that follow this sort of pattern:

"My application isn't working any more. It was fine yesterday, but today it's broken."

"Well, that seems a little strange as there were no changes to the environment last night."

"Oh, I forgot to mention that it's broken in UAT as well."

"When did this happen?"

"Well, this particular functionality has always been broken in UAT. We just thought that it would work when we moved to Production."


I know that some of you are laughing out there, but I also know that some of you are saying "Is he talking about me? Is he talking about ME?!?" If something doesn't work in our UAT environment, there is no guarantee that it is going to work in Production. Indeed, my bet would be that it doesn't work in Production either.

So why doesn't it work in UAT when it worked in Development? Well, unfortunately, I only have about 500 words to write an answer and in this case the answer is more like a 500 page novel. One of the biggest reasons is something we've talked about before: running something as an admin in development but with minimal rights in UAT. Just because you can run your application as an administrator on the box does not mean that you should.

I can't stress this often enough, and based on current evidence I definitely haven't, you need to run your application under the least amount of rights possible. If something doesn't work, fix it. Don't assume that there is something wrong with the environment and that it is magically going to get better when it is migrated to the next environment, as it may even get worse.

Tuesday, May 08, 2007

New Cobol

There is a phenomenon that has arisen in the last few years which, unfortunately, has had a detrimental effect on the IT industry. For the sake of simplicity, I shall call it the rise of New COBOL.

Now, some of you may not know what COBOL is, so a short history lesson is required. COBOL (COmmon Business-Oriented Language) was developed in 1959, primarily for use by the U.S. government. It quickly spread and became the de facto standard upon which billions of lines of code were written. It was also, almost single-handedly, responsible for the boom in the IT business prior to Y2K because of the number of lines of code written in COBOL. While it has been criticized as verbose, it is capable of handling most business problems currently encountered and is a perfectly valid language for developing applications.

New COBOL, however, is something which people should not want. New COBOL is actually the use of an object oriented language in a procedural way. Most newer languages are object oriented in that business logic and data are encapsulated within an object and interaction with that object is done through methods. When the developer discards the object oriented nature of the development language and essentially writes procedural COBOL, but in a new language, we have New COBOL.

This perversion of the intent of an OO language causes many problems, not the least of which is an increase in maintenance costs due to the barrier imposed in understanding the purpose of the underlying code. If the application was written in an OO style, then a maintenance developer could understand it and make changes relatively quickly. If, however, the application was written in a procedural manner, but with an OO language, methods and data do not appear where they "should be", resulting in a higher learning curve for the developer.
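To make the contrast concrete, here is the same logic written both ways. The bank account example is invented, but the pattern should look familiar:

```vbnet
' New COBOL: the data lives in a bare field and any caller can
' reach in and manipulate it, with the "logic" in a free routine.
Module ProceduralStyle
    Public Balance As Decimal

    Public Sub Withdraw(ByRef bal As Decimal, ByVal amount As Decimal)
        bal = bal - amount ' no rules enforced anywhere
    End Sub
End Module

' OO: the balance is encapsulated and the business rules travel
' with the data, so every caller goes through the same method.
Public Class Account
    Private _balance As Decimal

    Public Sub Withdraw(ByVal amount As Decimal)
        If amount > _balance Then
            Throw New InvalidOperationException("Insufficient funds")
        End If
        _balance -= amount
    End Sub
End Class
```

A maintenance developer looking for the withdrawal rules knows exactly where to find them in the second version; in the first, they could be anywhere.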

If you've chosen to develop in an OO language, make use of the facilities of that language. Failure to do so is a disservice to your client, as well as yourself.

I look forward to your comments on this one, as I know for a fact that certain architects disagree with my stance.

Monday, May 07, 2007

Planning for Change

People have come up to me recently and expressed concern that I am losing control of my mental faculties. To emphasize the point, they note that I keep talking about "planning" and yet I am also an advocate of Agile Development. The usual comment I get is "In Agile Development, you don't plan, you just do."

Ah, the misguided conceptions of youth. Contrary to what people believe, Agile Development does believe in planning; its practitioners are just not slaves to the planning process. I once worked for a Programme Manager from Chicago who was quite ... stubborn ... in his belief that if it isn't on the project plan, you don't do it. Needless to say, we butted heads a number of times, with him always coming out on the winning side because he was the Programme Manager.

This caused a few problems with the client and the application, however, as we were never responding quickly enough to requests. Each time something new came up, we had to revise the project plan to take it into account, confer with the client about the project variance it would cause, and sign off on the change request. This process took a number of weeks to complete. We were as agile as a beached whale.

After a while, we figured out how to handle the Programme Manager: insert line items into the plan that planned for change. Indeed, that is what any agile project should do. Yes, you need to have an overall plan for what you are going to do and when you are going to do it, but you also need to plan for change. You need to understand that change is natural in a project, and you should work out with the client, in advance, how change will be dealt with.

Change management is just as important to the development of an application as it is to the ongoing maintenance of an application.

Friday, May 04, 2007

Cutting and Pasting -- Part 2

Last week I went to one extreme and said that cutting and pasting was not necessarily a good thing; I was talking about using previously written applications as templates for your new application. Let's swing the pendulum to the other extreme and talk about why you should never write certain code more than once, and why, if you do, you should slap yourself on the wrist.

Validation routines are things that every application needs. If you accept data from any source, you need to verify that the data is correct. Sometimes the verification is quite simple (A9A 9A9 for postal codes), whereas in other cases verification requires database calls or lengthy computational processes involving logarithms and abstract mathematics. If your application is larger than a single screen or a single process, the odds are that there are some common verification methods that can be shared between those areas.
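The A9A 9A9 check mentioned above is a good example of something that belongs in exactly one shared routine. A hedged sketch in Python (the function name is my own, and it checks only the basic letter-digit shape, not the letters Canada Post actually excludes):

```python
import re

# "A9A 9A9": letter, digit, letter, optional space, digit, letter, digit.
_POSTAL_CODE = re.compile(r"^[A-Za-z]\d[A-Za-z] ?\d[A-Za-z]\d$")

def is_valid_postal_code(code: str) -> bool:
    """Return True if the string matches the basic A9A 9A9 pattern."""
    return bool(_POSTAL_CODE.match(code.strip()))
```

Every screen that accepts an address calls this one function; if the rule ever needs tightening, it changes in one place.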

In a previous life, my specialty was crafting routines for verifying a person's Social Insurance Number in any of a dozen different programming languages. I only wrote it once in each language, however, as I (and other people) used the same code in many different places. It varied by language and platform, but each time I used it, I essentially made a call to a utility library that did the validation for me and sent back the results.

I wrote it once and used it thousands of times.
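For what it's worth, the validity check on a Canadian SIN is a Luhn-style mod-10 checksum, so the "write once, call everywhere" routine can be quite small. This is my own sketch of the idea, not the original library:

```python
def is_valid_sin(sin: str) -> bool:
    """Luhn mod-10 check on a 9-digit Canadian Social Insurance Number."""
    digits = [int(c) for c in sin if c.isdigit()]
    if len(digits) != 9:
        return False
    total = 0
    for i, d in enumerate(digits):
        if i % 2 == 1:          # double every second digit...
            d *= 2
            if d > 9:
                d -= 9          # ...and sum the digits of a two-digit result
        total += d
    return total % 10 == 0      # valid when the checksum divides evenly by 10
```

The caller never needs to know any of this; it just asks the utility library "is this SIN valid?" and gets back a yes or no.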

I did not copy and paste it, but I did reuse it, and I guess that is where the difference lies. Copying and pasting the lines of code that performed the function would reuse those lines, but it would proliferate the number of places where that code could potentially need to change. If you find yourself writing the same lines of code more than once, you may want to consider putting them in a utility library that you can call from anywhere. It may take some extra time to initially set things up, but it saves a lot of time in debugging and maintenance.

Thursday, May 03, 2007

Cutting and Pasting -- Part 1

When I first started developing applications, back in the days of COBOL and green screen monitors, one of the things we did quite often was use other applications as templates. COBOL, in case you didn't know, is one of those older languages responsible for the Y2K rush in the late 20th century. By templates, I mean that we cut and pasted chunks of other applications into our own so that we wouldn't have to type it all over again. Does this sound familiar?

Cutting and pasting is as old as developing applications itself. Even farther back, I remember reusing punch cards from other applications so I wouldn't have to wait in line at the card punch machine for my turn. (I gambled by not putting sequence numbers on my cards, allowing them to be reused at a later date. Boy, I lived on the edge in those days.)

There is a problem with this sort of programming, however, as it has a tendency of hiding many of the complexities that programmers need to understand in order to create better applications.

For instance, what would happen if you copied some code from another application, but the code was badly designed or performed poorly? You would be propagating that "bad code". If people then copied your application, they would be doing future programmers the same disservice by arbitrarily inserting code, which they do not understand and which is fundamentally not the right code to copy, into their applications for future generations to debug.

Now, I'm not saying that you need to write every line of code over and over again, but I am saying you need to understand what you are cutting and pasting. Just because it works, doesn't mean you should necessarily copy it. You need to understand what you are doing. Blindly following someone else's lead without truly understanding is one of the reasons why Sanjaya Malakar got as far as he did.

Wednesday, May 02, 2007

Oh, That's Easy -- Part 2

"Oh, that's easy."

As a project manager I hated those words, particularly if they were coming from my technical team. The more technically adept a person is, the more inaccurate that person tends to be at coming up with estimates. Yes, there are exceptions, but they are few and far between and should not be counted on.

Someone who is familiar with the technology and lives and breathes the code is someone who is going to be horrible at coming up with estimates for other people. Heck, they are even horrible at coming up with estimates for themselves. I have had the privilege of working with some very talented people over the past 25 years (damn, there's that age thing again) and, with rare exceptions, every single one of those people has had problems with estimates.

So, how do you compensate? First of all, you need to ensure that the developer has thought of everything. Most of the time an estimate includes only the raw time to develop the code, but not the time to test it, document it, and ensure that any technological innovations are compatible with the existing environments. You also need to keep track of this person's estimates and use previous estimates to guide your decision about current ones.

No, don't create the estimate for the developer, as you probably don't know the technology. (See yesterday's note.) Understand, however, that the developer is going to be optimistic about his part of the solution, so you need to take his estimate, inflate it, and come up with something that is realistic for the project. And, the most important point of all, keep track of previous estimates and how they matched reality. As the mutual funds state, "Past performance is not an indication of future results", but it's probably darn close.
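One simple way to turn "keep track of previous estimates and how they matched reality" into a number is to derive an inflation factor from a developer's history. The sketch below is purely my own illustration of that idea, not anything prescribed above; the names and the averaging choice are assumptions:

```python
def correction_factor(history):
    """Average ratio of actual time to estimated time across past tasks.

    history: list of (estimated_days, actual_days) pairs for one developer.
    """
    ratios = [actual / estimate for estimate, actual in history]
    return sum(ratios) / len(ratios)

def adjusted_estimate(raw_estimate, history):
    """Inflate a developer's optimistic raw estimate using their track record."""
    return raw_estimate * correction_factor(history)
```

For example, a developer who has historically taken half again as long as estimated would see an 8-day raw estimate adjusted to 12 days. A weighted or recency-biased average might be more defensible in practice; the point is only that the history gets used, not filed away.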

Tuesday, May 01, 2007

Oh, That's Easy -- Part 1

"Oh, that's easy."

As a developer, I hate those words, particularly if they are coming from the Business Analyst, the Team Lead or, worst of all, the Project Manager. To be brutally honest, and when have I not been, most Project Managers are not familiar enough with the technology being used on a project to actually make that call. Most, not all, but most project managers are more in tune with the business aspects of a project than with the technical parts.

A number of years ago, on a large project, I was brought into a meeting with the client and asked whether or not something could be done. My immediate response was "I'm not sure, but I think we can." This was immediately translated into "See, it's easy." Afterwards the project manager came up to me and asked if I could have it done by the next meeting, in two weeks. When I told him I could have the estimate ready in two weeks, he came back with "Oh no, you misunderstood, you're going to have the solution in place in two weeks."

Needless to say, no one was happy. The solution wasn't in place in two weeks, so the users were disappointed. The solution wasn't in place in two weeks, so the Project Manager was angry. And I didn't get the proper support from the Project Manager, so I was quite disillusioned. (OK, I wasn't that disillusioned because I didn't expect anything else, but I didn't fill out the patent application either.)

The lesson I learned, and that I hope I have communicated, is that Project Managers should never give an estimate of the technological complexity of a request without first asking their team. Doing so is a wonderful recipe for disaster, as no one is going to be happy, least of all the client.