Thursday, December 13, 2007

Cancelling a Project

Inertia.  Newton described it as the tendency of a body at rest to stay at rest and a body in motion to stay in motion, unless acted upon by an outside force.  OK, he actually said:

Corpus omne perseverare in statu suo quiescendi vel movendi uniformiter in directum, nisi quatenus a viribus impressis cogitur statum illum mutare.

But seeing as I don't understand Latin very well I thought I would translate for you.  You're welcome.

"Well, that's all very fascinating, but what does this have to do with IT?"  Glad you asked.  You see, a project is much like a body in motion:  it tends to stay in motion.  Regardless of whether or not the project is required anymore or even if the target has completely changed, the project still moves forward.  There are a few skeletons of this sort in my closet, that I almost ashamed to mention.  (Almost, but not quite.)

Have you ever worked on a project that was headed nowhere, and doing it at breakneck speed?  Imagine a project where you've almost finished the design and the technology hasn't even been chosen yet.  Tough to finish your design, isn't it?  Imagine a project where business rules are changing, but the design of the project is two versions of rules behind.  Not going to be that successful, is it?  Imagine a project where the developers are asked to work overtime.  For free.  And are then told they need to work even more overtime because the project is behind.  Imagine a project where the Project Manager gets promoted, and removed from the project, and yet his line managers are punished for the current status of the project.  Imagine a project where the "technical guru" is unable to comprehend basic technology, yet insists that his technology choices are sound.  Now imagine him in charge of the overall project!

All of these are reasons for a project to stop.  All of these are reasons for a re-assessment of the viability of the project.

But the project kept going.

Inertia kept the project going.  Inertia fuelled by pride and a stubborn reluctance to say "I think we need to stop".  There is no shame in stopping a project if it's headed in the wrong direction.  There is no shame in saying "Things have changed since we started, let's stop and re-evaluate before we go too far".  The objective of any project should be the benefit of the organization.  Sometimes it is better for the organization to stop a project and walk away than it is to let the project continue.  Understanding the long term results of proceeding is more important than finishing the project.

(By the way, the project I was referring to was with a previous employer and should not be confused with any current or previous project of Alberta Education, Alberta Advanced Education & Technology or Alberta Learning.)

Monday, December 10, 2007

Let Them Fail

One of the things that we do as parents is let our children fail at things. 

It's hard not to step in and immediately show them the right way, but we let them fail so that they will learn.  When learning to crawl and then walk, we show them what needs to be done, we act like babies ourselves and crawl around on the floor, but we let them figure out for themselves how to move their arms and legs.  When they are older we watch as they try to put a square peg in a round hole.  If we tell them it can't be done, that doesn't sink in as much as having them fail, repeatedly, and for days sometimes, to get that square peg through that hole.

As they get older, they get "smarter" in that they recognize much more quickly when something is wrong and they change their actions.  No longer do they spend 10 minutes trying to get two legs in the same pant leg, as they recognize within moments that things aren't quite right.  After a while, however, their hormones kick in and they become as stubborn as babies with that square peg, because they are "smarter than you" and they "know more" than you did at that age.  Once again, we let them fail, knowing that the phase will pass and that they will learn.  Sometimes painfully, but they will learn what works and what doesn't, and the fact that the tattooed, lip-pierced biker down the street might not have the same interests as them.

Finally they become adults, get an education (sometimes) and get a job (hopefully).  And what happens then?  Many organizations punish people who make mistakes, or make it so intolerable to function that the person quits.  For the most formative years of our lives we are left to our own devices.  We are allowed to fail, knowing that someone is going to help us out, not necessarily by solving the problem for us, but by letting us know that it is safe to fail.  Why does this change when we get a job?

I've made so many mistakes and failed at so many things I can't count them all.  I recognize some of my biggest failures and I know that I have learned from them.  I recognize some of the repeated failures where I have hit my head against the wall over and over again, only to have something finally sink in.  But the one thing that I also notice is that I have been allowed to fail.  Yes, I have had my knuckles rapped for failing, but it was only when I failed to learn that the subject was ever really brought up.  I consider myself quite fortunate to have had supervisors who were willing to let me fail at something, so that I would learn, rather than coddling me and telling me exactly what to do.  I essentially got an on-the-job education from my mistakes.

Looking back on it I realize that the best thing we can do for people is let them fail and then support them as they try again.

Wednesday, December 05, 2007

Best Practices

In the past year I've stated a phrase so many times that I began to wonder what I really meant.  I'm sure you've seen the phrase many times before:  in my writing, in job postings, in management briefs, all over the place.  But what does it mean?  The phrase is:

"... best practices ..."

This is kind of a loaded question.  I could state that a "best practice" is simply anything I think is important to do.  That would definitely boost my ego, but for the most part when I call something a "best practice" it means something more.

For the most part when I call something a best practice it is because it is:

An established set of practices, procedures or methods used to achieve an optimal result.

Now, who defines what an optimal result is in this context?  There are a number of different contributors:

  • Academia.  Some things are called best practices because those in the academic world have studied, debated and decided that the best practice in question meets the criteria.  While those in academia do provide some input into best practices, the problem lies in the fact that the academic world is never as complex and complicated as real life.  So, take their opinion with a good dose of realism.
  • Standards Groups.  These people create certain standards and they should, theoretically, know how to make the most effective use of the standards.  They are usually more accurate with their proclamations, but, once again, ideal situations are difficult to find and some of their best practices do not work in real life.
  • Industry Consensus.  This is something that the majority of people in an industry think is a good thing.  It is usually decided upon through a grass roots movement of people who actually do the work, not strictly academics.  Their opinion is highly valued, as it is more likely to provide meaningful results when used in the correct context.
  • Organization.  Based on the way that an organization does things, there may be some specific practices that need to be followed to improve the end result.  These best practices may fly in the face of other best practices read and used over the years, but because they are specific to an organization they are probably the most relevant.  (Please note that if the organization changes but the best practice doesn't, then this becomes nothing more than a waste of electronic ink.)

All of these opinions, however, must be tempered with the fact that they were created with certain pre-conditions in place or under certain technological constraints.  A best practice is only a best practice if it actually applies to your situation.  Minor variances can sometimes be ignored, but if your naming standard was based on 6 character RPG conventions, it probably isn't applicable to .NET 2.0, so some (un)common sense needs to be used as well.

Thursday, November 29, 2007

Specialization vs. Generalization

Have you ever seen Gordon Ramsay in action?  Not on his Fox Network show, but on his original show, Ramsay's Kitchen Nightmares?  (The Fox Network version is staged and edited to be as confrontational as possible.)  He tries to help a restaurant that is on the verge of closing down and does his best to turn things around in a week.  To be honest, there's not a lot that can be done in a week, so if the basics aren't there then there is going to be trouble.

One of the key things that he stresses, however, is to keep the menu simple and to specialize in something.  In some respects this goes completely against the grain of what students are being taught in school and what our supervisors are busy telling us.  IT people are thought of as interchangeable cogs in "the machine":  encouraged not to specialize, but to be replaceable.  To be honest, IT staff foster this perception, as it makes it easier to sell their services internally within an organization and, if they so wish, to external organizations.

In a previous life, when I worked for a "large, multinational consulting firm", it was not considered a good thing to be different from everyone else.  Because the types of problems faced by a large consulting firm are quite varied, having a large pool of generalists made it easy for the company to assign people to a project.  After all, to a firm full of hammers every problem is just a different type of nail (with apologies to Bernard Baruch).  The more hammers they had, the easier it was to sell services to a client.  If a special hammer was needed there was usually a small group (fewer than 1,500 people in a 65,000 person organization) who had more specialized skills and could be brought in (at a premium, of course) to assist the project.

Is this the right way to do it?  Should most people be generalists with a few specialists who could be parachuted in when needed?

Personally, I believe this depends on the aspirations of the people you are dealing with.  In the consulting firm mentioned previously, the vast (90%+) majority of the people were on the fast track to management:  "up or out".  This meant that their time as a developer/designer was limited and that there was no need/desire to become proficient in one particular area of expertise, unless it was a business area.  For these individuals the generalist appellation is more than sufficient.  Some people, however, like the technology.  They like being able to make the computer do their bidding. This group of people is more likely to follow a more specialist perspective and become "experts" in this area.

Before labeling someone either a generalist or a specialist, find out what their aspirations are.  Then, when assigning new work, take these factors into account when determining who works on which project.  In some cases it may make more sense to assign a less experienced specialist to a project than a more experienced generalist, as the final solution may be more effective at resolving the client's issues.  And isn't that what it's all about?

Friday, November 16, 2007

Some things should not be "added on"

When building an application there are some things that can be added on afterwards:  new functionality, better graphics and friendlier messages.  These are all things that add more value to the application.

There are some things, however, that should not be added on afterwards:

  • Error Handling.  What?  Don't add on error handling afterwards?  No.  It needs to be done at the start.  Now, I know what some of you are saying:  "But, Don, we'll add this in if we have a problem."  Face it, every new application has problems, and if you don't have error handling in at the beginning you are spending needless cycles trying to debug the application and you are causing me to drink Pepto-Bismol as if it were Dr. Pepper.  We recently had to help a project debug their application in the UAT environment, but they had no error handling at all except the default ASP.NET error handling.  Thank goodness for Avicode, as it helped us pinpoint the problem quickly, just far too late in the development cycle.
  • Object Cleanup.  If you create an object, kill the object.  It's simple.  It's so simple that even Project Managers can do it.  By not cleaning up after yourself you raise the potential for memory leaks.  And you know what that means?  Alka-Seltzer, Pepto's cousin.  I can't tell you the number of applications we are forced to recycle once they hit a certain memory limit or run for a certain period of time, because the truth would keep you awake at night.  (Lunesta, another cousin.)  There is a sketch of both habits after this list.
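
To make the two habits concrete, here is a minimal C# sketch of both at once.  The names (OrderProcessor, OrderRepository) are invented for illustration; the point is the structure:  the using block guarantees cleanup even when things go wrong, and the catch block records the failure with enough context to diagnose it later.

    using System;

    public class OrderRepository : IDisposable
    {
        // Stand-in for any class holding a connection or other pooled resource.
        public void Save(int orderId) { /* persist the order */ }
        public void Dispose() { /* release the connection */ }
    }

    public class OrderProcessor
    {
        public void ProcessOrder(int orderId)
        {
            try
            {
                // 'using' guarantees Dispose() runs even if an exception
                // is thrown, so the resource never leaks.
                using (OrderRepository repository = new OrderRepository())
                {
                    repository.Save(orderId);
                }
            }
            catch (Exception ex)
            {
                // Log enough context to diagnose the problem later,
                // then rethrow so callers still see the failure.
                Console.Error.WriteLine("ProcessOrder({0}) failed: {1}", orderId, ex);
                throw;
            }
        }
    }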

The scary thing is that both of these items are considered best practices for writing code in the first place.  I know the excuses:  "... the project manager won't let me do this ..." or "... I don't have enough budget to do this ..." or, the one heard most frequently, "... I don't have the time to do this ...".  Very few project managers tell their staff how to code, so the first excuse is just a cop out.  As for the budget, doing these items does not add significantly to the cost of the application, as it usually makes debugging faster and easier, so the budget excuse is just that, an excuse.  As for the time, if you're short on time, you need to do this even more, as it will help you.

One of the things that many Health Organizations are putting in place is prevention of disease so that there is no need to cure the disease.  Prevention is much more cost effective.  Object Cleanup is prevention, pure and simple.  When someone has symptoms that need to be diagnosed, what does the doctor do?  Perform a seance?  Guess?  Or do they use a tool to help them out?  Ever heard of an MRI or even an X-Ray?  Think of Error Handling as a tool to help you diagnose the disease faster.  It's better than guessing.

Object cleanup prevents problems and error handling helps diagnose problems.  So, I guess this means that I'll be seeing more applications with these items as an integral part of the overall application.  Or do I need to go back to the medicine cabinet?

Wednesday, November 14, 2007

Work Smarter, not Harder

Raise your hand if you have ever had someone tell you to "work smarter, not harder".  Ah, I see the majority of hands in the air.  (Careful about that.  People might think it strange for you to raise your hand in response to a line in an email.  I won't tell anyone though.)  So, how do you work smarter, not harder (i.e. increase productivity)?  Yes, in the following paragraphs I am going to give you an entire book's worth of advice, so pay attention, this is pure gold!

The premise behind "smarter not harder" is that you only spend time on the most important things and leave the "back burner" stuff until there is time to do it.

There, that's it.  That'll be $19.95 CDN please.  Only PayPal at the moment.

Wow, that was most ... unsatisfying.  But, you know what, I think I've saved a lot of you $19.95.  Let's face it, there is no silver bullet for dramatically increasing productivity.  No magic spell is going to make you dramatically more productive.  Nothing you can do right now is going to have a significant impact on your productivity in the next couple of weeks, right when your supervisor wants it most.  You can read books, attend seminars, hire personal development coaches or a myriad of other things, but the truth is that change takes time.  If you type 30 words a minute, you aren't going to suddenly start typing 60 words a minute because your supervisor said you should.  If you can run a six minute mile, you aren't going to get down to a five minute mile just because a book told you that you could.

All of these things, including "smarter not harder" require practice.  A book might tell you what you need to practice.  A seminar might guide you in the right direction for general areas of improvement and a personal development coach might lay out a detailed plan, but the reality is that it all depends on you.  Without the practice, the commitment and the desire to work smarter, it isn't going to happen.  But even if all of these things are in place, it is going to take time.

So, where does this leave all of the people telling others to work "smarter, not harder"?  Well, the odds are that they are in a supervisory position.  The odds are also in my favour that this person is experiencing a time crunch whereby the amount of work has now exceeded the capacity of the staff.  So, in an effort to increase the capacity of the staff they are asked to work smarter.  This may or may not be used in conjunction with greed ("we'll give you a bonus if it's done on time"), fear ("we'll can you if it's not done on time"), heroism ("everyone is depending on you to save their butts") or, as I've seen in one instance, all three approaches.

Essentially, if you are at the point where you are telling people to work "smarter, not harder", you've already lost.  Suck it up, realistically plan the project and change the target date (sometimes), change the scope (sometimes) or add more people (dangerous, as this will also increase the effort required).  If you really want people to work smarter, then help them at the beginning of the project, not when there is a crisis.  Help them plan their work.  Help them organize their InBox.  Help them become better developers before you need them to be better developers.

Post Mortem, but before Death

Justice Gray sent me an interesting note the other day in which he described a "post mortem" debriefing held before the project had even started.  As Justice put it:  "It's six months/a year/etc. from now and the project has failed.  Why did it fail?"

One of the things that people have a hard time doing is learning from their mistakes.  George Santayana said "Those who do not learn from history are doomed to repeat it", and that is very true in the IT industry.  We work on projects that are months long, sometimes years, and during each project we make mistakes, fix them and go on to another mistake.  By the end of the project we have discovered, and fixed, dozens of mistakes.  Then we make the biggest mistake of all and repeat those mistakes on the very next project.  We fail to learn from our mistakes.

By setting up a post mortem before the project even starts, however, you're asking the developers to think about the project, think about what has gone wrong on previous projects and apply those lessons to the new project.  In essence we are learning from our mistakes and applying the fixes to the next project. 

Does this work?  Well, from my personal experience I've had mixed results.  We did this for a couple of smaller projects and we were quite successful at identifying potential problems early on and working on risk mitigation for those items.  And that is what we are talking about:  risk mitigation.  Identifying a potential problem, and what can be done early on to minimize the odds of it happening.  We also tried this same approach on a much bigger project, but the whole process blew up in our faces.  We identified so many potential problems that we spent more time mitigating risk than we did building an application.  And even after all of that advance preparation, we were hit with some nasty problems that derailed the project, because a couple of people were not committed to the process.

Can this work, discussing potential failures before the project starts?  Most definitely, but it is really important that people think about past mistakes and problems and how to mitigate the risk of them occurring.  It also depends on the commitment of the people involved.  If the people aren't committed to the process then the results may not be what you expect, or need.

Wednesday, November 07, 2007

Optimization -- Part 3

Is there help you can get when you hit the performance wall?

You bet your sweet bippy there is.  The most obvious places to look for help are with the people around you.  I find that one of the most effective resources I have is my wife.  Now, my wife is not particularly computer literate.  Seriously.  She is, however, an excellent sounding board.  When I explain something to her I have to get rid of the technical jargon and explain it in simple terms.  Usually, while in the process of trying to simplify the explanation, my mind goes off on a 90 degree tangent and I solve the problem that I am working on.  This doesn't mean that this will work for everyone, but it works for me.

Or, try reaching out to other designers and asking for their opinion.  Give them a high level overview of the problem and what you've been looking at, and let them think for a few minutes.  Don't expect an instant answer; if something has been troubling you for a while, someone else isn't going to solve the puzzle instantly either.  (Although my wife does really well on those little puzzles where you have to take rings off a rope and untangle hopelessly tangled messes of metal.)

You can even talk to the Deployment Team.  Yes, I know, normally you avoid that, but you'd be surprised at how much help we can be.  By deploying and supporting over 80 applications the odds are that we've seen your problem before and that there are at least two, if not seventeen different ways to resolve the problem and make your life much easier.

Sometimes the answer is really obvious, but because you are so deeply involved you "can't see the forest for the trees".  This is where reaching out to others, both technically adept and technically inept, can help.  It forces you to look at the big picture, the forest, and see whether or not there is a different path.

Monday, November 05, 2007

Secret Meeting

I got my hands on a recording of a secret meeting held between project managers last Friday.  Here it is in all its gory glory.

"OK, so it's agreed, we increase all of our estimates by 10% and blame it on the overhead caused by the Deployment Team.  Right?"

"Ah, hang on a moment.  There might be a problem with that."

"What is it <too garbled to understand>?  What don't you understand about pinning the blame on the Deployment Team?  In particular that pain in the butt, Don."

"Well, don't you think that he might notice what's going on if everyone suddenly complains about his team costing the organization a lot of money?  I mean, I don't think he's a complete moron and he might figure something out."

<sound of people spitting out their beer on to the floor>

"Not again?!?!?"  <sounds like the waitress bring a mop to clean things up>

"What makes you think Jessop can figure this out?"

"Well, he has been in projects before, from a project plan creation perspective and he understands that estimates need to be complete.  And this means that they have to take into account all of the costs of the project.  Some of those costs include interacting with the Deployment Team to ensure that the application is deployed properly."

<long silence>

"You're a plant, aren't you?  You're not really a project manager, your a mole!  Get HIM!!!!"

The recording breaks up at this point, as the noise of chairs being tipped over and beer steins breaking on the floor seems to take up much of the remainder of the recording.  There is the faint sound of someone shouting "... no blood inside the bar ...", but that could be just my imagination.

So, Project Managers, how are those estimates coming along?

Friday, November 02, 2007

NaNoWriMo

When I started the National Novel Writing Month contest I wasn't sure what to expect.  After all, I didn't have a story in mind, no plot had immediately come forth pleading that it needed to be written, and most importantly, I didn't have the foggiest idea if I could do it.

Well, I am a ways into it now and I must say that the pep talks they gave on the site were correct:  the story does have a tendency to write itself.  With only the opening sentence to work with, the story slowly started to evolve and grow.  Characters started coming together, pasts started being revealed and the tone of the story started to come out.  (Now if only I could figure out some way of making these notes count towards my word total.)

It was really strange because, in many ways, that's how I've always done my programming as well.  To be honest, I was never one for getting all of the requirements together before building the application.  I would start with the framework and start building the rest of the application in pieces.  Sometimes this caused me no end of trouble because I had built the framework in such a manner as to cause endless rewrites due to a specific requirement.  I learned, however, and I developed better frameworks.  I learned to "steal" code from the best of the applications and re-use it where necessary.  In essence, I started with the barest of bones and built up the application in stages, much like what is happening with the novel.

Some might call it eXtreme Programming, while others might just lump it in with the generic term Agile Development.  All I can say is that it worked for me.  It does not work for everyone.  Indeed, the vast majority of people cannot work this way because of the unknowns and, to be honest, the fear of failure.  I've failed at so many things in my life that failing at writing a computer program, something I love doing, just never crossed my mind.

Managers and team leads need to understand that there is not just one type of developer.  You can't go to the store, pick up a box of Generic Developers (now with Vitamin B12) and have them substitute for your Toasty O Developers.  Supervisors, leaders of people, need to understand that there is a wide range of people and that some people, very few, can be left on their own to build the application.  Indeed, interfering with that development process is sometimes more harmful than letting them run loose.

Does this mean that you have an entire team of highly motivated, highly charged, highly independent developers at your disposal?  No, you don't.  The key is finding those that are, and nurturing their growth.  Studies have been done on programmer productivity, and anecdotal evidence abounds with stories of Software Heroes.  Suffice to say that they are relatively rare, so the odds of finding more than one or two on your staff are slim.  Which in some ways is a relief.

P.S.  For those that want to know, the opening sentence is:

The screaming didn’t start when the lights went out; it only started when the first body hit the floor.

Tuesday, October 30, 2007

Don't Worry About Performance!

There was a statement made a number of years ago in an ISWG (Integration Services Working Group) meeting which, to summarize, said "Don't worry about performance, we'll take care of that".  While that is probably going to be the epitaph of at least one person, I think it is time to set the record straight.

Worry about performance.

OK, now that we've gone 180 degrees it's time to put some parameters around this.

  • Don't worry about things over which you have no control.  The speed of a DCOM call is something you have no control over; neither is the time required to create a connection for a connection pool, the time required to retrieve session state, nor the time required to process an HTTP request.
  • Do worry about things over which you do have control.  While you can't do anything about the speed of a DCOM call, you can control, to an extent, the number of DCOM calls that you make.  Less chattiness is better (there is a sketch of the difference after this list).  While you do not have control over the speed of the resources, you have control over how effectively you use those resources.
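
To illustrate the chattiness point, here is a hedged C# sketch.  The IPersonService interface is invented; imagine each call crossing a DCOM or HTTP boundary, paying the fixed cost of a remote round trip every time.

    // Hypothetical remote interface: every call crosses the wire.
    public interface IPersonService
    {
        string GetName(int personId);
        string GetPhone(int personId);
        PersonDetails GetDetails(int personId);   // the chunky alternative
    }

    public class PersonDetails
    {
        public string Name;
        public string Phone;
    }

    public class Screen
    {
        // Chatty: two round trips, twice the fixed overhead.
        public void LoadChatty(IPersonService service, int id)
        {
            string name = service.GetName(id);
            string phone = service.GetPhone(id);
        }

        // Chunky: one round trip returns everything the screen needs.
        public void LoadChunky(IPersonService service, int id)
        {
            PersonDetails details = service.GetDetails(id);
        }
    }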

The UAT and Production environments into which your application will eventually move have a myriad of interconnected pieces that, together, create the environment in which your application will exist.  While you cannot control those resources, you control how you use them.  <Al Gore>Ineffective use of resources and our unwavering dependency on technologies that produce greenhouse gases is threatening our livelihood and our very planet.</Al Gore>  Ineffective use of resources in any ecosystem is bad, whether that ecosystem is our planet or our clustered production environment.  Infinite resources are not available and never will be, but effective use can be made of existing technology:

  • Up until the late 1990's, the computers on board the Space Shuttle flight deck were built around Intel 8086 chips.
  • The original Xbox had a 733MHz CPU with 64MB of memory, and yet it could (still can) outperform people's desktops.
  • Mission critical systems have been designed with, when compared to modern technology, primitive means.
  • The first spreadsheet I used, VisiCalc, ran on a 1 MHz processor.

All of these examples show that you can make something run well in limited circumstances, but you have to want to.

Identity Theft

I guess I can say that I am now a statistic.  You know, one of those millions of people who have been the victims of identity theft.  Let me tell you the story.

When I got home from work on Monday I noticed that I had supposedly been sending out emails from my eBay account at lunch.  Within 15 minutes of the start of the emails I received an "A26 TKO Notice: Restored Account" from eBay UK stating that:

It appears your account was accessed by an unauthorized third party and used to send unsolicited emails to other community members, including email offers to sell items outside of eBay. It does not appear that your account was used to list or bid on any items.

The first thing I tried to do was log into my account.  Well, either eBay UK or the hacker had changed my password.  I tried to enter my answer to the Secret Question, but that didn't work either, as the information on my account had been changed.  Following the various prompts on the eBay site I ended up sending them an email telling them what had happened and asking what the next step should be.

A couple of hours later I got another email that I had apparently sent out, eight hours after the first round.  Not content to sit by and wait for the email process to work its way through the system, I started scouring the eBay site for a phone number to call.  You know, that was one of the hardest things I had to do!  I followed all the usual routes and ended up with forms to fill out.  I never did get a phone number, so I had to use their "Live Help" facility.  (My reluctance to go with this approach was due in part to a 45 minute wait on the weekend for "live help" from another company, which never even connected me with a human being.)  In the case of eBay, however, the wait was less than two minutes, and they told me my position in the queue (I started at number 5) and the approximate wait time.

The person who was on the other end of the chat could have been anyone, anywhere in the world.  The fact of the matter is, they looked at the information on my account, the notes they had sent to me and knew that I needed to talk to the Account Security division.  Within 30 seconds I was "chatting" with someone else who had the power to help.  Two minutes later things were fixed and that included changing the password on my account to a "stronger" password.

Was it brute force hacking of my account and password?  Not if this article is correct.

This particular episode was rather benign in that all that really happened was that some emails got sent and I had to change my password.  It could have been worse.  Much worse.  Think of that the next time you sign up for a web site.  Or, more importantly, think of that the next time you are building an externally facing application.  What are you doing to safeguard the information that you keep on your clients?  What are you doing to protect their safety?  Can you honestly say that you've done your best?

Monday, October 29, 2007

Creative Juices

One of the best things to do, in order to keep the brain alert and creative, is to read different things.  In the same vein, writing is a good therapeutic use of brain cells.  It keeps the neurons working and allows you to be more creative in your job.

To that end, I would like to introduce National Novel Writing Month.  In essence, you are being challenged to create a novel (50,000 words) in less than a month.  That's about 1,667 words per day.  You are considered a "winner" if you actually succeed at getting in 50,000 words.  They don't have to be perfect, you just need to try.

In this case it is definitely the journey which is important, not the final product.  By pushing yourself to reach this goal you are going to be exercising a variety of different areas of your brain.  You will need to be creative to come up with a plot (and subplots), with characters that you empathize with and the words that tie all of this together.

So, how does this help you in the IT field?  I think you would be amazed at how much it will help.  People talk about "thinking outside the box" in order to get something done.  The problem isn't so much thinking outside of the box, it's understanding where the box is in the first place!!!  As you start writing the novel you will be able to see the box that you have created around your novel and this insight, this new vision that you've gained, can help you see the boxes that surround your problems.  Being able to see something is the first step in being able to avoid it or, in this case, think outside of it.

Are you suddenly going to see everything in a new light?  No, but by constantly stretching and pushing your own mind you will see the limitations (the box) that you have put around yourself.

Thursday, October 25, 2007

The Dark Side of Objects

The Dark Side of Objects?  (Luke, I am your father.) 

Sometimes you need reins on developers and designers.  Not because they aren't doing a good job, but because if you don't you may end up in a quagmire of objects that no one can understand.  Objects are good, but they can be overdone.  Not everything should be an object and not everything lends itself to being objectified.  Sometimes a developer goes too deep when trying to create objects.

When I was learning about objects I had a great mentor who understood the real world boundaries of objects:  when to use them, how to use them and how far to decompose them into additional objects.  Shortly after having "seen the light" with regard to objects I was helping a young man (okay, at my age everyone is young) write an application which was actually the sequel to the data entry application I mentioned in the previous note.  He needed to do some funky calculations, so he created his own numeric objects.  Instead of using the built in Integer types he decided that he would create his own Number object.  This Number object would have a collection of digits.  When any calculation needed to be done he would tell one of the digits the operation to be performed and let that digit tell the other digits what to do.  This gave him a method whereby he could perform any simple numeric operation (+ - / *) on a number with a precision of his own choosing.  He spent weeks perfecting this so that his number scheme could handle integers and floating point numbers of any size.  It was truly a work of art.

And weeks of wasted time.

What he needed to do was multiply two numbers together or add up a series of numbers.  Nothing ever went beyond two decimal places of precision and no amount was greater than one million.  These are all functions built into the darn language and they didn't need to be enhanced or made better.  The developer got carried away with objects and objectified everything, even when it didn't need to be done or, in this case, shouldn't have been done.
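
For contrast, here is roughly what the built-in types already offered, sketched in C# (the amounts are invented).  Two decimal places of precision and values under one million are comfortably within range of the language's own decimal type.

    using System;

    class BuiltInMath
    {
        static void Main()
        {
            // Everything the hand-rolled Number object did:
            // multiply two numbers, add up a series, round to two places.
            decimal price = 19.95m;
            decimal quantity = 3m;
            decimal total = Math.Round(price * quantity, 2);   // 59.85

            decimal[] lineItems = { 10.25m, 4.50m, 3.99m };
            decimal sum = 0m;
            foreach (decimal item in lineItems)
                sum += item;                                   // 18.74

            Console.WriteLine("{0} {1}", total, sum);
        }
    }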

Knowing when to stop using objects is just as important as knowing when to use objects.

Wednesday, October 24, 2007

Long Running Web Services

OK, the world is moving to web services.  I know that.  You know that.  So what more is there to discuss?  PLENTY!! For instance, how long should it take for a web service to complete?

Well, that's kind of a tricky question.  It basically comes down to "what is the web service doing?"  Some things should come back quickly.  Darn quick, in fact.  For instance, if you ask a web service, "What is the name of the person associated with this identifier?" you should be getting a response back in milliseconds.  If you are asking a web service "What course marks did this student get in high school?" you should be getting a response back in milliseconds.  If you are asking a web service "What are the names of all of the people associated with this school district?" you should be getting a response back in milliseconds.

What?  Getting the names of hundreds, potentially thousands of people, in just milliseconds?  Are you nuts?

Read carefully what I wrote "... getting a response back in milliseconds."  A perfectly valid response is: "Thank you for your request.  It is being processed and the results will be made available at a later date."  Web services should not be long running processes.  If they are long running or have the potential to be long running, then you need to implement some sort of callback mechanism to indicate when processing has finished.  This callback mechanism may be an email, a call to another web service with the results, depositing the results in a queue to be picked up later or even a combination of these methods.  Indeed, there are literally dozens of ways to get the response back to the caller.  What is important to understand is that you do not create a web service that has the potential to run for a long period of time.  Ever.  I'm serious about this one.
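
One common shape for that "thank you, it's being processed" response is a ticket.  A bare-bones C# sketch, with all of the names invented:  the service queues the work and returns an identifier in milliseconds, and the caller retrieves the result later (or is called back with it).

    using System;
    using System.Collections.Generic;

    // Returned immediately; the heavy lifting happens elsewhere.
    public class TicketResponse
    {
        public Guid TicketId;
        public string Status;   // e.g. "Accepted"
    }

    public class DistrictService
    {
        private readonly Queue<Guid> workQueue = new Queue<Guid>();

        // The "web method": enqueue and return in milliseconds.
        public TicketResponse RequestAllPeopleForDistrict(int districtId)
        {
            Guid ticket = Guid.NewGuid();
            workQueue.Enqueue(ticket);   // a background worker drains this queue
            return new TicketResponse { TicketId = ticket, Status = "Accepted" };
        }

        // Callers poll (or are called back) with the ticket for the results.
        public string GetResult(Guid ticketId)
        {
            return "Pending";   // or the finished result, once the worker is done
        }
    }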

Other than the fact that you are probably going to hit the HTTP timeout, the COM+ timeout, or cause an excessive number of locks to be held in the database, what other reasons could there be?  Well, look at it from a Denial of Service perspective.  If one call to this web service can generate so much synchronous work, what would 10, 100, or even 1,000 simultaneous calls to this web service do to the underlying server?  Crash it?  Cause it to perform so slowly that everyone gets timed out?  Bring down the back end database?  "But, Don, this is going to be an internal web service only.  What's the problem?"  Depending upon which survey you read, anywhere from 10% to 50% of all attacks on web sites come from insiders.  Imagine a disgruntled employee and what damage he could do to the system with just a little bit of knowledge.

While this topic is ostensibly about web services, we should not create any service (COM+, Web, WCF enabled) that takes a long time to execute.  If you are in the least bit confused about whether something should be synchronous or asynchronous in nature, the odds are it should be asynchronous.  Err on the side of caution.

Friday, October 19, 2007

Hubris

I mentioned a solution architect yesterday and how they needed to be the person with the vision, the person that led the team to the final solution.  Well, there is one character trait that a lot of solution architects have that needs to be understood and managed.

Hubris.

Defined as "excessive pride", many solution architects are unwilling to admit they are wrong and will go to any lengths to avoid admitting that they made a mistake.  Sometimes these mistakes can be trivial and sometimes they can be the smallest of decisions that has the biggest of impacts.  Case in point: on a very large project I was working on the solution architect decided that the default order the client had been using for the past 75 years was not right.  So, in the system that we were using he decided that we should change the order of the day, month,and year in the fields that we displayed on the screen.   Yes, that's right, instead of YYMMDD or MMDDYY, he chose a new way of ordering the parts of the date.

This may seem trivial, but we were using a code generator to build the code and another tool to help create the CICS screens (yes, it was a very old system).  As a result, we had to customize the tools to make this work properly.  OK, it was up to me to make it work properly.  Oh, yay.  Suffice to say that we spent significantly more time on making our dates work than we would have if we had followed not just what the client had previously used, but a format that was in use in North America.

No one could convince him he was wrong.  No one at all.  He was convinced that he was right and the rest of the world was wrong.  I know what you're thinking:  Don, was he really wrong?  Wasn't this sort of decision part of his job?  I would agree with you, but for those of you old enough to remember, the Y2K scare in the IT field was a big thing.  Imagine a solution architect in the early 90's designing a system where you were unable to enter the century!  We (okay, me again) had to devise complex schemes that would accurately take a 2 digit year and add the correct century to it.
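
For the curious, the usual "complex scheme" boils down to a sliding century window:  pick a pivot year and assume anything at or below it is 20xx and anything above it is 19xx.  A simplified C# version of the idea (the pivot of 30 is an arbitrary example; the real value is a business decision):

    using System;

    class DateWindow
    {
        // With a pivot of 30: "29" becomes 2029, "31" becomes 1931.
        public static int AddCentury(int twoDigitYear, int pivot)
        {
            if (twoDigitYear < 0 || twoDigitYear > 99)
                throw new ArgumentOutOfRangeException("twoDigitYear");

            return (twoDigitYear <= pivot ? 2000 : 1900) + twoDigitYear;
        }

        static void Main()
        {
            Console.WriteLine(AddCentury(29, 30));   // 2029
            Console.WriteLine(AddCentury(31, 30));   // 1931
        }
    }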

Solution architects are good, sometimes absolutely necessary, but they are also prone to making very silly mistakes.

Thursday, October 11, 2007

The Good Old Days

Why is it that the good stories start with "When I was younger ..."?

Anyway, when I was younger I was working on a project to replace an aging mainframe based system with a new web based application.  The web based system utilized the existing database and a new database to create a whole new system that greatly increased the functionality and usability of the entire product.  One of the response time requirements we had was not with regard to individual screens, nor with regard to specific business processes, but with the length of transactions in the database.  In order to get as much throughput as possible and minimize the amount of locking, the requirement was that the new system operate in much the same manner as the old system and provide an average database transaction length of no more than 0.4 seconds.

400 milliseconds.

That is not a lot of time no matter how you look at it.  This 400 millisecond figure was the average length over the course of a business day, but did not include any batch or asynchronous processing that occurred.  This helped us out considerably, because we had a lot of short transactions which lowered the average and a smaller number of longer transactions which raised it.

Man, did we suck when we went live.  Over 2000 milliseconds for the first week and this did not include any of the deadlocks or timeouts that occurred. It took months, actually 18 of them, before we had things down to not just 400 milliseconds, but an average of just over 300 milliseconds. New hardware on the mainframe helped, but so did the fact that we worked really hard at lowering that average and we understood that anything that was going to take a long time was immediately turned into an asynchronous process or even part of a batch run that night.

The users understood this change in philosophy.  In order to get good online processing for everyone involved, there was a need to do things asynchronously or in batch.  Processing something in the middle of the day, when everyone is using the system, is not always necessary.  Even if a short turnaround is desired, asynchronous processing can be a valuable alternative.

Funny, but this seems remarkably similar to yesterday's note about web services and performance.  See, everything old is new again.  The problem isn't new, but the technology is.  The solution isn't new either, but the will to implement it might be.

Thursday, September 27, 2007

When not to choose the default

I talked recently about how you should leave the defaults the way they are in many cases because, well, for most circumstances they are probably the best values to use.  Sometimes, however, the default doesn't work that well and you need to understand the reasons why changing the default is a good thing.

Suppose you went to a web site and it asked you to fill in 10 fields.  You then hit Submit and it came back and told you that field number 1 is supposed to be numeric, not alphanumeric.  You make the change and submit again.  It then comes back and tells you that field number 4 is supposed to be less than field number 3.  You keep doing this for a number of changes until you finally say "Forget it" and you leave that site forever.  It's happened to me, so I can honestly speak from experience.

My biggest problem with the process wasn't so much the one error message at a time, but rather the fact that there was a round trip to the server for every error.  I had over half a dozen interactions with the server to fill out a darn form!  By default .NET sets the controls you place on an ASP.NET page to process interactions at the server (runat="server").  If you provide complete error checking for each page, then this may be a suitable method of operating.  However, if you only respond to the user one error message at a time, this is a sure-fire way of getting someone annoyed with you.  And quickly.

To be brutally honest, some error checking should be done on the client side.  If you have a popular application, or even one that is not that popular, there is still a certain amount of overhead involved in getting IIS to receive the request, process the header information, pass the information along to the application pool and then have the application do what it needs to do in order to tell you that the field is only supposed to contain numbers.  Then the whole thread has to go backward, towards the user, in order to give them the message.  It is faster, more efficient, and less costly from an infrastructure point of view if you let the client take care of this sort of data validation.  Your application will still check the data when it arrives from the client (don't ever assume the client is sending you perfect data), but many checks can be performed at the client end, decreasing turnaround time for error processing, distributing the work load and, more importantly, providing a better user experience.
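
Whichever side does the checking, the other half of the fix is to report every problem in one pass instead of one round trip per error.  A sketch of the server-side half in C#, with the field names invented to match the earlier example:

    using System.Collections.Generic;

    class FormValidator
    {
        // Validate the whole form and return ALL of the problems at once,
        // rather than failing on the first one and forcing another round trip.
        public static List<string> Validate(string fieldOne, string fieldThree, string fieldFour)
        {
            List<string> errors = new List<string>();

            int one;
            if (!int.TryParse(fieldOne, out one))
                errors.Add("Field 1 must be numeric.");

            int three, four;
            if (int.TryParse(fieldThree, out three) && int.TryParse(fieldFour, out four)
                && four >= three)
                errors.Add("Field 4 must be less than field 3.");

            return errors;   // an empty list means the form is clean
        }
    }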

Remember, when you're designing your application think in terms of what provides the best user experience.  Think about your experiences, what you've liked or, more importantly, what you've disliked, and go from there.  The default, while usually good, does not have to remain if there is a good reason to change.

Tuesday, September 25, 2007

Validity vs. Reasonableness

While most of our applications do validity checks on data, not all of them do reasonableness checks.  Let me explain the difference.

Data Validation.  Let us suppose you have a number of fields on the screen:  name, address, birth date, phone number, spouse's name, spouse's address, spouse's birth date, spouse's phone number and a marriage date.  Data validation would ensure that if there is a birth date, it is a valid date.  So in this case it would check to ensure that all of the date fields are valid.  It would also check to ensure that the phone number follows any one of a number of different standards, but predominantly the fact that it is numeric in nature.  You can also extend data validation to more complex tasks, such as determining if the postal code is correct.  In general terms, data validation serves to ensure that a single piece of data is valid for its data type.

Data Reasonableness.  OK, now that we've gotten the basics out of the way, there are still a number of checks that we can perform.  If there is a marriage date, then that date must be a certain time period after the birth dates of both parties.  This is not just a simple "if marriageDate > spouseBirthDate then Happiness()".  We need some additional logic to ensure that even if the data is valid, it also makes sense.  Having data make sense is as important as ensuring that it is valid.
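
A small C# sketch of the two layers, with invented names and an invented minimum age:  the first method only asks "is this a date at all?", while the second asks "does this date make sense?"

    using System;

    class MarriageChecks
    {
        // Validity: is the raw input a real date at all?
        public static bool IsValidDate(string input, out DateTime value)
        {
            return DateTime.TryParse(input, out value);
        }

        // Reasonableness: the data is valid, but does it make sense?
        // The minimum age is a business rule; 18 is used for illustration.
        public static bool IsReasonableMarriage(DateTime marriage, DateTime birth, DateTime spouseBirth)
        {
            return marriage >= birth.AddYears(18)
                && marriage >= spouseBirth.AddYears(18);
        }
    }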

While there are many schools of thought on this, most post secondary training lumps both data validation and data reasonableness together under the "validation" banner.  This, unfortunately, has had the effect, in most cases, of pushing data reasonableness checks into the background, or of embedding the checks deep within the business logic of the application.  In most cases these checks can be done at the UI level, really quickly, preventing a lot of background processing that clogs the servers.  The other big problem is that some of these reasonableness checks are missed entirely because "I thought the client was going to do that".

Just remember, there is a lot of data checking that needs to be done and reasonableness is just another item in the list.

Monday, September 24, 2007

Temporary (Expires yyyy/mm/dd)

My wife renewed her driver's license recently, as she turned 29, again, on September 24th.  She went into the Registry near our place, filled out the form, paid her money, got her picture taken and was given a "temporary" driver's license to last her until her real license was sent to her.  If the real license takes too long, her temporary license is going to expire on her, and that could cause no end of grief.

My bus takes a little bit longer to get to work in the morning, as there is a "temporary" detour on its normal route.  The city has dug a hole in the road, the entire width of the road, in order to do some emergency repair on the pipes.  They don't have a lot of time to get this done, as it impacts a lot of traffic, a lot of buses (both ETS and school) and a lot of residents.  The only way to get to certain houses is to wind your way through back alleys.  Oh yeah, this is definitely going to be temporary.

My daughter has a temporary spacer in her mouth.  One of her baby teeth, incorrectly filled by an earlier dentist, developed some severe damage and needed to be pulled.  A temporary spacer was inserted so that her teeth would grow in their proper place until the adult tooth comes in.  The spacer is going to be removed either when the adult tooth comes in, or when the baby tooth to which it is attached falls out.  No choice, it is temporary.

We have servers, both physical and virtual, which were set up for temporary purposes.  We now face the task of upgrading them from NT 4 to something supported.  (OK, I lied, but you get the point, don't you?)  If things are temporary, give us a date and we will set up the system to self-destruct the day after.  If things aren't temporary, for goodness sake, tell us!  The "Oh, it was temporary, but the client said ..." story is getting old and, quite frankly, has been done better by other teams.

Remember temporary means that it goes away.  Pick a date.  Any date.  Please....

Weapons of Mass Destruction

America went to war with Iraq because they wanted to find and destroy the Weapons of Mass Destruction.

In many respects that's what I do by looking at the way applications are installed, operate and behave when encountering errors:  I'm looking for weapons of mass destruction.

My first IT job was with a construction company, and my crowning achievement was an application that accurately allocated work site costs to various job codes in the accounting system on a daily basis.  It was rather tricky:  the company I worked for was multinational, and each country, indeed each province/state, had different holidays, so the application needed to decide when costs should be allocated based on whether or not the previous day had been a holiday in the location of the construction site.  Jobs on which 7x24 construction was occurring had other conditions that needed to be met.  All in all it was a masterpiece of software engineering.

Almost.

You see, I was under a tight timeframe for getting this done, as the VP in charge of construction had told the CEO that it would be in place by July 1st.  I only had 4 more weeks to finish the coding and then implement the application in the main production batch jobs.  Time was tight, so I did what every rookie (and many a seasoned professional) does when faced with something that is difficult to compute:  I hard coded the answers.  I hard coded the holidays for all job sites for the next 18 months inside the application.

It was simple to do and saved me a lot of trouble because, you see, I left the company 2 months later, leaving behind a ticking time bomb in their production systems.  In sixteen months things were going to blow up, all because I took the easy way out instead of doing it properly.

Tick, tick, tick, tick ...

Thursday, September 20, 2007

Groceries and Software

When you buy groceries from a grocery store, one of the things that they do for you is pack your groceries into plastic bags so you can take them home.  (OK, some grocery stores don't do that, but they will charge you a couple of pennies to give you a plastic bag so that you can do it yourself.)  When they pack the plastic bag they follow certain rules, spoken and unspoken.  For instance, you don't put a pound ... err, kilo ... of hamburger in the same bag as fruits and veggies unless one of them is wrapped in an additional plastic bag.  You don't put a bag of potato chips at the bottom of the bag and potatoes on top of them, and eggs are packed flat.

When creating a deployment for DeCo, think of ZIP files as being similar to the plastic bags from the grocery store.  Here are a few simple rules that you can follow to fill those bags:

  • Create a ZIP file for each item type that you are migrating.  For instance, if you are migrating web components and COM+ components, create two ZIP files, one for each.
  • In each ZIP file put all of the pieces necessary to do the work for that type of deployment.  If it is a web ZIP file, include the MSI, any config files and even the documentation.  (There is a packaging sketch after this list.)
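
If you script the packaging step, the rule is two lines of code.  A sketch in modern .NET terms (the folder layout is invented, and System.IO.Compression arrived well after this was written, so treat it as illustrative):

    using System.IO.Compression;

    class PackageDeployment
    {
        static void Main()
        {
            // One ZIP per item type: everything the web deployment needs
            // (MSI, config files, documentation) goes in one bag...
            ZipFile.CreateFromDirectory(@"C:\build\web", @"C:\deploy\MyApp_Web.zip");

            // ...and the COM+ components go in another.
            ZipFile.CreateFromDirectory(@"C:\build\complus", @"C:\deploy\MyApp_ComPlus.zip");
        }
    }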

The rules are really simple, but they make life much easier for everyone.  For instance, by using ZIP files instead of a list of separate files, you reduce the number of times you need to attach a file to the request and you reduce the amount of space needed on the back end to store the files.  We currently have over 9,760 files that we are tracking from over 6,800 deployments, and these files take up over 19,680,000,000 bytes.  While some of those files are compressed, not all of them are.

By using ZIP files you can help us keep a handle on the storage requirements for DeCo as well as making it easier for the Deployment Analyst or DBA to get all of the files they need for the deployment in one simple package.

Friday, September 14, 2007

Schopenhauer's Law of Entropy

So, just what is Schopenhauer's Law of Entropy?  Simply put, it is this:

If you put a spoonful of sewage in a barrel full of wine, you get sewage.

So, what does sewage have to do with programming?  It's not the sewage itself that I'm interested in, but rather the concept behind it.  In IT terms, what Schopenhauer is saying is that no matter how good the overall application, if one part doesn't work the whole application gets tarred with the same brush.

It is unfortunate that a single poorly designed, written or executed page can make someone believe that the entire application is poor.  Their perception of the application is what is important, not reality.  Kind of scary, isn't it, when perceptions are more important than reality?  But this is what happens in our business and it is something that we need to understand and do our best to influence.

So what influences this perception?  Well, consider this:  two web applications side by side on your desktop.  You push a button on the left one and you get the ASP.NET error page:  unfriendly, cryptic and somewhat unnerving.  You push a button on the right one and you get an error message in plain English, explaining that there is a problem and that steps are being taken to resolve the issue.  Which one would you perceive to be better written and more robust?

How about another example?  You push a button on the left application and you get an error message that says "Unexpected Error.  Press the OK button".  You push a button on the right application and you get an error message that says "Our search engine is currently experiencing some difficulties and is offline.  Please try again later."  Which one do you perceive to be better?  Which one do you think your business clients will think is better?

It's not just one thing (an error message or not) that gives you a feeling of confidence when dealing with an application, it is a multitude of little things.  Making things more personalized helps.  Translating from Geek ("Concurrency error") to English ("Someone else has updated the data before you") helps a lot.  Making it look like you spent some effort to foolproof the system helps as well (i.e. don't give every error in your application the same error number).
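
The translation layer doesn't have to be elaborate.  A C# sketch, using an invented save action, of catching the Geek version and showing the English one:

    using System;
    using System.Data;

    class FriendlyErrors
    {
        public static string TrySave(Action save)
        {
            try
            {
                save();
                return "Your changes have been saved.";
            }
            catch (DBConcurrencyException)
            {
                // Geek: "Concurrency error".  English:
                return "Someone else has updated this data since you opened it.  "
                     + "Please reload the record and try again.";
            }
            catch (Exception ex)
            {
                // Log the cryptic details for the support team...
                Console.Error.WriteLine(ex);
                // ...and give the user something calmer than a stack trace.
                return "We are having a problem saving your changes right now.  "
                     + "The problem has been logged and we are looking into it.";
            }
        }
    }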

No matter how good the rest of your application, one bad move can create sewage.

Initialize All Variables at Their Points of Declaration

I was reading a book recently called Code Craft - The Practice of Writing Excellent Code and one of the comments struck a particular chord with me as it brought back memories of an upgrade that went horribly wrong.  At least for me.

There was a brief section called "Initialize All Variables at Their Points of Declaration".  Now, this may seem self-explanatory and quite normal to some people, but others think that it is rather strange.  "Why would I initialize a variable that I may never use?"  The problem is that not everyone follows the same coding practices in real life.  Sometimes the compilers help/hurt us in this regard.  Back when I was predominantly working on the mainframe, we were switching from an older version of COBOL to COBOL II.  Ooh.  COBOL.  I can see your eyes glazing over.  Stay with me, there is method to my madness.

The process of conversion was really quite simple.  Recompile.  It wasn't that hard.  However, we discovered a little bit of a problem.  When we did our testing we were occasionally getting OC7 (data exception) errors when everything should have been working.  Indeed, running the program multiple times against the same data actually generated different results.  After a lot of head scratching we determined that the problem lay in the fact that the old compiler, by default, initialized variables when they were defined.  COBOL II did not do this by default.  When the application was loaded into memory it would occasionally land on memory that had been initialized for some other purpose and the program would work.  Other times, however, it was accessing "garbage" and the program would blow up.  If the original developer had initialized the variables in the first place we never would have had a problem.

So, we made a small change and everything was perfect.

Almost.  Because of how we were doing the upgrade process I had to babysit the recompilation of 1900 COBOL programs in 4 different environments (7600 recompiles altogether).  It took almost 48 hours, I got almost no sleep, and all because someone failed to initialize a couple of variables.
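
The same habit pays off in C#, where an unassigned field doesn't blow up with an OC7; it quietly defaults to zero and gives you wrong answers instead.  A hypothetical sketch:

    class InvoiceCalculator
    {
        // Left unassigned, this field silently defaults to 0m and every
        // invoice comes out tax-free -- no data exception to warn anyone.
        // Initializing at the point of declaration makes the intent explicit:
        private decimal taxRate = 0.05m;  // set deliberately, not by accident

        public decimal Total(decimal subtotal)
        {
            return subtotal * (1 + taxRate);
        }
    }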

Tuesday, September 11, 2007

Antacids for the PM

PMs, can you imagine the look on the faces of those developers when I said that they needed to supply you with things like estimates and time sheets?  I bet some of them were knocked off their feet!

I mean, it's not like you were asking them for anything difficult.  I mean, how hard can it be to create an estimate for a project?  It's not like it's rocket science.  Every developer should be able to do it.  I mean, the National Gun Registry hit its target.  Well, maybe it didn't.  Well, what about the Denver baggage handling system?  Another fiasco?  OK, the FBI's Trilogy Project, now that was ... a disaster?

Everybody can look into the past and come up with a failed estimate, but sometimes the Project Manager does it to themselves.  This may be a shock for some of you, but sometimes Project Managers are under different pressures than you realize.  Back when I was younger I worked for a consulting company.  My job was to come up with the technical work plan and estimates for the projects the local office undertook.  I would then, using historical data and some darn fine guessing, come up with what the effort would be for the technical staff on the project (technical staff meaning project DBA and internal technical support).

One of my proudest, and saddest, estimating moments came on a relatively simple business project that had a number of interesting technical complexities.  My estimate of the technical cost was 179 days.  When combined with the other parts of the project it turned out that this simple application was going to cost the client a lot of money.  In an effort to reduce the impact to the client the scope was shuffled, but this did not reduce the technical effort that needed to be spent, so in a classic PM moment the number of days was arbitrarily reduced to 79.  I am proud/sad to say that this is one of the few times in my life where I was dead on with regard to the effort.

Project Managers, if your team says that something is going to take a certain amount of time, ask them questions, make sure they understand both the problem and their own estimate, but don't arbitrarily change it unless you know for a fact it can be done for less.  The odds are they understand what needs to be done better than you do, and by changing their estimate you are telling them that you don't trust them.  If you still feel the estimate is high, have them sit down with you and walk you through the estimating process they used.  But, if the numbers still add up to something you don't like, take a Tums.

Monday, September 10, 2007

Load Balancing Failures

It shouldn't come as a surprise to anyone when I say that our Production environment is load balanced.  I have mentioned this before and I will be mentioning it again in the future.  But, for those who may have missed my previous tirades, let me explain the impact of load balancing on Session State.

One of the features of ASP.NET is the ability to store information specific to a browser session (aka user) in something called Session State.  Session State is kind of like the junk drawer you have at home where you keep batteries, twist ties, plastic spoons, stud finders and assorted other "stuff".  Session State allows you to store whatever you need in order to keep track of where the user is in the application and what data you need to save on their behalf.  The next time the user accesses the application the session state is automatically loaded and you're ready to rock.

There are a number of places to store session state:  In Process, Out of Process or in SQL Server. 

In Process means that Session State is going to be stored in the Application Pool running the web site.  So, if the application pool recycles, all session state is going to be lost.  A number of projects currently use this method and, if they use Session State, are in danger of losing it, as we use Application Pool recycling to solve a number of application issues.  In addition, if, for some reason, BigIP sends the user to a different web server to service the request, then the Session State is not going to be present, potentially causing a number of application failures.

Out of Process is where the session state is hosted by ASP.NET in a different process, potentially on a different machine.  While somewhat safer than storing it in the same Application Pool, a problem arises if this service needs to be reset, as the Session State is again lost.  Indeed, if the process is hosted on the same server as the web site, moving the request to another part of the load balanced cluster is going to be a problem, as Session State will not be available for the request.  If Session State is stored on a separate machine then the biggest problem is the durability of the data.  Any problem with the service may wipe out all session state for all machines.

Storing Session State in SQL Server is the slowest method, but is by far the safest method for durability and the best method when utilized in a cluster.  Each request for Session State goes out to SQL Server to ensure that the latest and greatest version of Session State for that user is retrieved and used.

In our environment we have asked people to use SQL Server Session State, and yet, by looking through the web.config files of a number of projects I've noticed that they have their Session State set to In Process.  If Session State is actively being used, this is a recipe for disaster.  I urge each project team to take a quick look at their web.config files and change them to use SQL Server instead of In Process.  Even if you don't currently use Session State, you may in the future, and this will prevent you from having a nasty accident.
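
For the record, the change itself is a one-liner in web.config; the sketch below shows it in the comments (the server name is a placeholder, not our actual environment).  The one catch is that anything you put into Session must then be serializable:

    using System;
    using System.Collections.Generic;

    // In web.config, replace <sessionState mode="InProc" /> with something like:
    //   <sessionState mode="SQLServer"
    //                 sqlConnectionString="Data Source=SQLSERVER01;Integrated Security=SSPI" />
    // (The session database itself is set up with the aspnet_regsql.exe tool.)

    [Serializable]  // required once Session State round-trips through SQL Server
    public class ShoppingCart
    {
        public List<string> Items = new List<string>();
    }

    // The page code that uses Session does not change at all:
    //   Session["Cart"] = cart;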

Thursday, September 06, 2007

In Memoriam

There were a number of comments yesterday from people wondering what the heck I was talking about when I told everyone to take a break.

Recently, Wednesday to be precise, we lost a companion that we had known for more than thirteen years.  When my wife and I first moved into our house we found it to be a large, lonely place.  In order to fill up some of the space we went to the SPCA and picked up a pair of cats, a brother and sister pair.  We named him Spike, because of his spiky hair and we named her Willow.

For thirteen years they were our companions.  Through three kids, hundreds of hair balls conveniently coughed up in the middle of the path to the bathroom, and tainted cat food scandals (yes, we found a few cans), Spike and Willow were there.  Recently Willow had been having some trouble with her hips.  Old age seemed to be setting in quite quickly for her and the doctor recommended a special diet for her kidneys and glucosamine for her joints.  She seemed good, for her, for almost two years, but went downhill swiftly about 10 days ago.  A visit to the vet confirmed the worst:  kidney failure, liver disease, and a host of other problems were manifesting themselves at the same time.  When asked how many months she had, the vet told us "one week".

We made the most of the week with Willow and spent a lot of time petting her and keeping her company, much like she had kept us company for thirteen years.  We soon saw, however, that the time had come to let her go.  The whole family had a good cry on Tuesday night.  Wednesday, a friend of the family helped my wife with the final details and we all had another good cry that night.

I enjoy my work.  I enjoy the people I work with, even the ones I yell at a lot.  But I also enjoy other things.  My work does not define who I am, however; it is just what I do for a living.  Sometimes you need to take a break, step back from the work and look at everything around you.  If you've spent so much time working that you can't remember the last time you truly relaxed, take a break.  If you can't remember the last time you hugged someone or something close to you, take a break.

Enjoy life.

Take a Break

Today's message is short and simple. Tomorrow we'll explain why.

No matter who you are. No matter what you do. Sometimes you need to stop working and enjoy life.

Take a break from work. Take a break from education. Take a break from stress. Just take a break.

Really Simple Solution

Really Simple Syndication.  RSS.

The concept behind RSS is really quite simple:  users, on their own timetable, download an XML file that contains headlines and/or stories on a particular topic.  For instance, I subscribe to an RSS feed that is created based on the Blog of J.D. Meier.  He is the Project Manager for "security and performance on the patterns & practices team".  His blog contains a variety of interesting topics, but I never actually have to visit his web site to get the latest from him.  I have an RSS reader (in this case Outlook 2007, but there are hundreds of others) that periodically goes out and checks his RSS feed to see if there are any changes.  If there is new stuff it shows up in Outlook.  I subscribe to a variety of blogs and web sites this way.
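
If you ever want to consume a feed in code instead of in Outlook, .NET 3.5 ships a syndication API that makes it a few lines of work.  A minimal sketch (the feed URL is just an example):

    using System;
    using System.ServiceModel.Syndication;  // System.ServiceModel.Web assembly in .NET 3.5
    using System.Xml;

    class FeedDemo
    {
        static void Main()
        {
            using (XmlReader reader = XmlReader.Create("http://example.com/blog/rss.xml"))
            {
                SyndicationFeed feed = SyndicationFeed.Load(reader);
                foreach (SyndicationItem item in feed.Items)
                    Console.WriteLine("{0:d}  {1}", item.PublishDate, item.Title.Text);
            }
        }
    }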

While RSS feeds are not new, their use in business environments is relatively new.  There are some businesses that have an RSS feed per major application and they post outages, tips, tricks and, most importantly, changes that are or have occurred to the application.  This way their business area is not surprised Tuesday morning when a completely new version of their favorite application shows up.  Larger project teams can create an RSS feed that contains status reports, meeting updates or even updates on when the implementation party is occurring.

RSS feeds can solve a number of issues, but it is not a hammer that can be used on every nail.  It has specific benefits in specific situations.  Like everything else, it is a tool that you have in your toolbox, but it is not necessarily a tool that you need to use.

Friday, August 31, 2007

Apologies

My apologies for the flood of posts today, but I haven't updated the external blog in a long time, whereas the internal email was still being distributed daily. I will try to do better in the future.

Part of the Same Team

"We're all part of the same team, right guys?"


A Project Manager sometimes says this to his team when they've made a decision without consulting him and the decision has some repercussions elsewhere in the project:  money, time, or credibility.


A developer might say this to the management team of the project when the Project Manager or Team Lead has committed to a date that the developer knows is unrealistic, unattainable, or even unjustifiable.


The business area may say this to the project team when the team seems reluctant to embrace the total vision of the project and seems to be cautious, nervous, or even afraid of the impact.


It doesn't matter what the perspective is, nor who says it; when this statement is made there is an almost instant "us vs. them" mental image that pops into everyone's head.  Well, maybe not everyone's.  Some people, some teams, actually work well together.  They understand the impact of their decisions and, if there are far-ranging impacts, they discuss them with the required people in advance of agreeing to them.  They understand that even though a request seems simple, they should talk it over with the rest of the team in case something is actually much harder than originally thought.  They understand that being part of a team is a good thing and that teamwork can overcome many obstacles.


Each of us has the ability to shape our team.  Each of us has the ability to help guide the team.  This isn't about being a Project Manager directing the team, it is about people being part of a team and committing to the common goals.


Five people working on the same project is not a team.  Five people, sharing the same vision and goals and working together, is a team. 



Orientation day

OK, it's probably pretty obvious that I have been pushing education a lot recently.  Well, today my daughter is attending an orientation day at her new Junior High.  When I think back to when I was her age (yes, the world was black and white back then) going to Junior High was a big change.  Instead of staying in one classroom for most of the day I switched from one room to another and even the people I was with changed throughout the day.


I no longer had the advantage of staying with one teacher a little bit longer and picking up on a concept I missed.  I was now responsible for learning it on my own and, in the event I still couldn't get it, only then would I go talk to the teacher.  This was a big change in how my world had operated up until then and it was really scary.  So, I empathize with my daughter.  I know what she is going to be going through and I will do my best to support her.


This orientation day my daughter is attending is going to go a long way towards making her feel comfortable in her new school and comfortable with the process.


Now, fast forward ten years.  She's graduated from school and has her degree/diploma and has come to work for your project.  What do you have in place as orientation material?  What do you have that will help her get over the initial fear of a new experience?  What processes are in place to help her become as productive as possible in as short a time as possible?  If you're like most of us, the answer is probably "not much".  We all know the need is there, but filling that need just never seems to be a high priority.


The next time you've got a few minutes, think of my daughter, and of other people's children, joining your project this year, next year or the year after.  What needs to be in place?  What can you do to help?

Education

Do you ever have a few minutes to kill and you're not sure what to do?  Get certified. 


OK, getting certified in something may take longer than a few minutes, but doing a test is an easy way to tell how close you are to the final goal.  For instance, there is a company called Brainbench that lets you write tests to "certify" yourself in various areas.  While many of these exams do cost money, I prefer looking up the "Free" exams.  Through this route I have taken exams on Shorthand (I passed, but barely), Internet Security, Writing English, Typing, and others.


I've done these exams for a number of reasons, not the least of which is that I want to test myself to see if I actually know a topic.  I've been talking a lot about Education recently, and how it is important to keep yourself informed about a topic.  The Brainbench site has a number of FREE exams right now on topics like .NET Framework 2.0, RDBMS concepts, Programming Concepts and Software Testing.  While I am not advocating this particular site, I am advocating education.


If you are more serious about your education you can try for any one of a number of Microsoft certifications.  There are a lot of sites that help you out with studying for these exams, with Transcender being one of the oldest companies in the business.  Or, for those who prefer studying at their own pace with a solid reference, most of the Microsoft exams have associated books.  (Imagine that, they charge for the exam and they charge for the book for studying.  What a racket!!!!)


It doesn't really matter which route you choose, just go out and learn.

SQL Injection

Security of the data is important to every application.  Ensuring that only properly authenticated users receive access and that only properly authorized users view the data is critical to the success of an application.  Unfortunately, there are many ways to get access to an application and some of them are amazingly simple.  For this note, we're going to talk about "SQL Injection" attacks.


Much like the name implies, a SQL Injection attack is the insertion of SQL code into an existing call in order to compromise security.  Essentially what happens is that the application fails to validate the data coming in and allows people to insert SQL code into an existing SQL call to the database.  For details of how this is done, Steve Friedl of UnixWiz.net has an interesting example.


Is this information hard to come by?  No, it's not.  The link above was actually the top one on the list that Google provided to me.  It gives detailed, step-by-step instructions on how to break into a poorly secured web site, and the information is so easy to follow that even my daughters could try it out at home.  Many organizations have put standards in place to address this issue.  However, standards are only effective if they are followed, and they aren't necessarily going to be followed if the person doing the work doesn't understand the reason why.


Essentially, this comes down to education.  Educate yourself on how to break into your system so that you can prevent others from doing so.  This doesn't mean that you need to be a security specialist, but what it does mean is that you should be conscious of the techniques that people use so that you can stop them from being used against you.  Information is the key.  Let's hope that this key is locking things up instead of opening the lock.
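
To make the "why" behind those standards concrete, here is the classic before-and-after in ADO.NET (the table, column and control names are hypothetical):

    // (uses System.Data.SqlClient)

    // Vulnerable: the user's input becomes part of the SQL statement itself.
    // Typing  x' OR '1'='1  into the name box returns every row in the table.
    //   string bad = "SELECT UserName FROM Users WHERE UserName = '" + txtName.Text + "'";

    // Safe: a parameterized query treats the input strictly as data.
    using (SqlConnection conn = new SqlConnection(connectionString))
    using (SqlCommand cmd = new SqlCommand(
               "SELECT UserName FROM Users WHERE UserName = @name", conn))
    {
        cmd.Parameters.AddWithValue("@name", txtName.Text);
        conn.Open();
        using (SqlDataReader reader = cmd.ExecuteReader())
        {
            while (reader.Read())
                Console.WriteLine(reader["UserName"]);
        }
    }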

Side Benefits

In a recent note we talked about moving historical records out of the main table into a history table or, depending upon the purpose of the historical records, an audit table.  One of the comments that I got back was that this approach had a number of additional benefits:




  1. Easier to write code to retrieve data - no fancy date handling required

  2. Easier to use ad hoc reporting tools - same reason

  3. Better performance due to simplified date handling and smaller table sizes (as only the most current record is kept)

  4. Can control access to current vs historical data easily by restricting access to the various tables

  5. Easier to archive, as you only need to worry about the history table

(Thanks Rob)


It's easy to miss amongst the glitz and glamour of coming up with solutions that everything we do, every decision we make, has multiple ramifications.  What we may do to "simplify" something may cause severe repercussions in other areas, totally negating the positive benefits.  Sometimes we come across a solution that has both positive and negative impacts, but the positive impacts so far outweigh the negative that there doesn't seem to be a reason not to adopt the new approach.

Coming up with alternatives can be quite difficult, which is where "peer review" comes in really handy.  Grab a friend or two, someone who has done some design work before, and show them your design.  Help them understand the problems and the solutions that you've come up with.  Peer reviews are tremendous tools in that they help to validate approaches and ensure that other possibilities have been considered.  (Don't go overboard on documenting your design until after you've had a peer review, however, as the more time you invest in your solution the less likely you are to consider other options.)

Virtualization Technology

I was reading an article recently about virtualization that actually surprised me.  The Collier County School District in Florida is a very big proponent of virtualization technology.  Their technology plan calls for the replacement of traditional desktops with thin clients.  Users would essentially log into a virtualized desktop located at the District's central computing center.  By loading up blade servers with lots of RAM they are trying to get 30 or more desktops per server.


Wow!  Thirty virtual machines per physical host!  We have not been nearly so aggressive, with our biggest servers handling 15 or 16 virtual machines.  Many of our servers are much smaller and we have a correspondingly smaller number of virtual machines.  Right now we have in excess of 190 virtual machines, some of these being used as desktops, while others are used as servers, both in a Development capacity and a Production capacity. 


With the upcoming release of Windows Server 2008, however, we plan to take even more advantage of virtualization technology.  Comments from Microsoft about the software being able to handle 512 virtual machines per physical machine notwithstanding, we don't plan on hitting that number any time soon.  What we do plan on doing is implementing features that will allow virtual machines to consume more CPU on the box on which they are hosted, features that will allow us to move a virtual machine from one server to another with no interruption to service, and features that will allow us to create new virtual machines in minutes, in some cases in an automated fashion to handle heavier workloads.


Virtualization is a proven technology; just talk to any mainframe guy and he can tell you that multiple "operating systems" are run on an IBM mainframe every day.  Great strides are being made in this area every day and when they are ready to use we will be there.

Error Messages

Error messages are vitally important when debugging an application that is having troubles.  One thing I should mention, though, is that the error message and subsequent call for action need to make sense.  For instance, the following error messages, or the actions they suggest, just don't make sense or don't help to debug the problem:



  • Keyboard not found.  Press F1 to continue.  (I last saw this on an IBM PS/2 model 55SX.  I paid $6000 for a machine which I felt like throwing out the window.)

  • An unexpected error has occurred.  (I last saw this on a number of different production applications in our own shop.  This doesn't help.  Honest.  Any shred of additional detail would be appreciated.)

  • This is impossible.  (Last seen in one of our production applications.  You know, if I've seen it in an error message, it's obviously not impossible.  BTW, I saw 20 occurrences of this.)

  • Invalid effective end date.  (Too bad there are about a dozen effective dates used at this point in the application.  No idea what date is being used or what table is being accessed.  Quick, call for a DBA!!!)

Sometimes we try to hold our clients' hands and we use the excuse "Well, we want to make the error message friendly to the user".  Fine, make it friendly, but you can still add more information.  For instance, on the effective date error, if you added which date was incorrect you would not only make it more user-friendly, you might actually allow the user to solve the problem themselves!!!  The "unexpected error has occurred" message is sometimes a catchall, but you can still add valuable information.


No, none of these are perfect solutions, but you need to understand that while you might be covering up the sins of the application to the end user, the support personnel have no data to go on in order to fix the problem.  This prolongs the issue and makes the application actually look worse in the long run.  You might want to consider a two-part error message:  the first part user-friendly, the second part techie.  You could add "Report the error to the appropriate support personnel and give them the following data:  blah blah blah".  Give the user both parts, but tell them to pass on the second part.  They will appreciate it, as will I.
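
A sketch of that two-part pattern (the wording, the reference-number scheme and the LogError helper are illustrations, not a standard):

    try
    {
        RunSearch();  // whatever the button actually does
    }
    catch (Exception ex)
    {
        // Part two, the techie bit, goes to the log in full...
        string reference = Guid.NewGuid().ToString("N").Substring(0, 8);
        LogError(reference, ex);

        // ...while the user sees part one, plus just enough to help support.
        lblMessage.Text = "Our search engine is currently experiencing difficulties.  "
            + "Please try again later.  If the problem persists, contact the help "
            + "desk and quote reference " + reference + ".";
    }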

Single Point of Failure

Single Point of Failure.


There are probably a lot of really nice definitions out there, but I'd like to use my own.  In my world, a single point of failure is:



... a component, hardware or software based, which when it fails will cause the entire system, or an entire subsystem, to become unavailable to the users ...


So, let's give some examples:



  • An application that only runs on a single web server has the web server as a single point of failure.

  • An application which uses only a single database server (non-clustered) has the database server as a single point of failure.

  • An application that relies on the Internet, but has only a single connection, has its ISP connection as a single point of failure.

While we try to cover many of these different aspects when we design applications and infrastructures, sometimes things still don't work.  For instance, in Production we've got clustered web servers, clustered database servers, multiple Ethernet connections, redundant DNS servers, RAID disk storage and dozens of other redundant systems.  Sometimes, though, things just go south really fast and in a really bad way.  Recently we had an air conditioning problem with our server room.  We have redundant units with multiple air conditioners in each unit.  Through a sad set of circumstances we ended up with only 1 of 4 units working.


No matter what anyone does, there is no such thing as a foolproof system.  There will always be some avenue whereby a single point of failure exists.  The target is to identify those areas and work on putting in redundancy, one step at a time.  It is a long process, but nothing worthwhile is ever accomplished quickly.

DataSets vs. DataReaders

I am stepping into heretical territory here, so you will have to pardon my trepidation.  I am going to discuss something over which wars have been fought, reputations destroyed and lives ruined.  Yes, you guessed it, I am going to discuss DataSets vs. DataReaders.


There has been much discussion of this topic behind closed doors and even the occasional directive stating that if you are passing large amounts of data from one tier to another, use a DataSet.  DataSets are indeed convenient mechanisms for transporting around a lot of information that can be stored in a table/row manner.  What happens, though, if you are retrieving a single value?  What if you are going to be retrieving data until a specific event occurs (time or data initiated) and then stop processing?  My contention is that these items may be better suited to a DataReader as opposed to a DataSet.


A DataReader is much lighter weight and is actually the underpinning upon which the DataSet is built.  When you issue the Fill command to a DataSet it uses a DataReader to retrieve all of the data, which it then passes back to you.  If you don't need all of the data, however, you just chewed up a lot of processing cycles, processing memory, and your client's time retrieving data that you are going to throw away.  If you are in a memory constrained situation or a time constrained situation it may be more appropriate to use a DataReader instead, as that will give you more control.  Is it difficult to use?  Heck, no.
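
For instance, here is the single-value case both ways (assuming conn is an open SqlConnection; the table name is hypothetical):

    // Heavyweight: Fill() runs a DataReader under the covers and builds
    // tables, rows and columns... all to extract one number.
    DataSet ds = new DataSet();
    new SqlDataAdapter("SELECT COUNT(*) FROM Orders", conn).Fill(ds);
    int viaDataSet = (int)ds.Tables[0].Rows[0][0];

    // Lightweight: for a single value you don't even need to touch the
    // reader yourself -- ExecuteScalar returns the first column of the
    // first row directly.
    using (SqlCommand cmd = new SqlCommand("SELECT COUNT(*) FROM Orders", conn))
    {
        int viaScalar = (int)cmd.ExecuteScalar();
    }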


So, what is it that I am advocating?  Education.  Learn the differences between a DataSet and a DataReader and when each is the most appropriate alternative.  Understand the weaknesses of each, not just the strengths.  Then, and only then, make an intelligent, informed decision about the right tool to use.

History Tables

So, what do you do if you want to have high quality data (i.e. no fake dates for effective end dates) but don't really like to use columns that can contain nulls?  Well, for rows that contain effective dates, have you ever thought about using a history table?


If the vast majority of accesses to the table involve just the current data and not historical data, then a history table may solve your problems.  A history table contains all of the "old" rows and as such it will have an effective start date and an effective end date.  No need for nulls here, as you know precisely what these dates are.  As for the main table, depending upon the application, it may not even need any effective dates at all!!!!  Need the current address?  Just get it from the Address table.  Need an historical address?  Get it from the Address history table.  Going to be doing this a lot?  Put an index on the date/time fields.  (Sorry about that shameless plug for some other posts of mine.)
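
As a sketch of the access pattern (assuming an open SqlConnection conn; the table and column names are hypothetical):

    // Current address: one table, no date gymnastics at all.
    SqlCommand current = new SqlCommand(
        "SELECT Street, City FROM Address WHERE PersonId = @id", conn);

    // Historical address: the dates live only in the history table, and
    // both columns are always populated -- no nulls and no fake dates.
    SqlCommand historical = new SqlCommand(
        "SELECT Street, City FROM AddressHistory WHERE PersonId = @id " +
        "AND @asOf BETWEEN EffectiveStart AND EffectiveEnd", conn);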


Is this effective date nirvana?  No, not really.  There are some applications that make heavy use of historical data, and for them a history table would only make things more complicated.  In other cases you aren't really keeping track of history at all; the effective start and end dates are being used to audit who made what change on what date.  If what you want is audit information, then create an audit table:  similar in concept to the history table, but designed for auditing.


You see, it's not a sin to take a single table and make it two tables.  Indeed, there are really good reasons why you should.  But, if you aren't sure, talk to your DBA.  They can help you out, if only by asking you questions from a different perspective. That alone is worth the price of a visit.

Null Values

What does a null value in a table actually mean?


Well, technically, a null value means that there is no data for this column.  If the column is to capture a birth date, then a null value would mean that you don't know the birth date.  If the column is about the date of death, then a null value would mean that you don't know the date of death.  It does not mean that the person is alive, just that we don't know the date of their death.


One of the more common problems that developers have is that they make a piece of data, or the absence of the data, mean more than it should.  In the above case, if you need to know if the person is dead, you need an additional field ("Deceased"?) that indicates if the person has shuffled off this mortal coil.  The absence of data in the date of death field cannot, under any circumstances, be construed as indicating that the person is alive.  What if you were told this person was deceased, but you weren't told when?  What do you do?  Put in a fake date of death?


I have a personal pet peeve in this area.  Within the organization(s) we have a number of tables that have effective dates.  There is a start date and an end date.  What many applications have done is put in "2999-12-31 11:59:59 PM" as the effective end date.  (Historical background: prior to more recent releases of Access, this was the maximum date that Access would allow in a date/time field.)  What this means, to me, is that this record will no longer be effective as of that date.  We seem to know this in advance.  Indeed, much of the data that we have seems to expire on this date.  I would not want to be in application support on the day after, when all of the data in the organization suddenly expires.


Is this truly the effective end date?  No, it's not.  The effective end date is actually null.  That makes the data cleaner and more accurate, but it makes the coding a little more complicated for the programmers.
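
The extra complication mostly amounts to remembering that null needs IS NULL rather than an equals sign.  A hypothetical example:

    // With a fake end date, current rows are found by matching the magic value:
    //   ... WHERE RateCode = @code AND EffectiveEnd = '2999-12-31 23:59:59'
    // With a null end date, the query says what it actually means:
    SqlCommand cmd = new SqlCommand(
        "SELECT * FROM Rate WHERE RateCode = @code AND EffectiveEnd IS NULL", conn);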


I have a personal preference in this area, as I'm sure you can tell, but I will leave it up to you, the reader, to examine the pros and cons and make up your own mind.  Or, if you'd like, wait until the next Daily Migration Note, where a potential solution is revealed.