Friday, August 31, 2007

Apologies

My apologies for the flood of posts today, but I haven't updated the external blog in a long time, whereas the internal email was still being distributed daily. I will try to do better in the future.

Part of the Same Team

"We're all part of the same team, right guys?"


A Project Manager sometimes says this to his team when they've made a decision without consulting him and the decision has some repercussions elsewhere in the project:  money, time, or credibility.


A developer might say this to the management team of the project when the Project Manager or Team Lead has committed to a date that the developer knows is unrealistic, unattainable, or even unjustifiable.


The business area may say this to the project team when the team seems reluctant to embrace the total vision of the project and seems to be cautious, nervous, or even afraid of the impact.


It doesn't matter who says it, nor from what perspective: when this statement is made, an almost instant "us vs. them" mental image pops into everyone's head.  Well, maybe not everyone.  Some people, some teams, actually work well together.  They understand the impact of their decisions and, if there are far-ranging impacts, they discuss them with the required people in advance of agreeing to them.  They understand that even though a request seems simple, they should talk it over with the rest of the team in case something is actually much harder than originally thought.  They understand that being part of a team is a good thing and that teamwork can overcome many obstacles.


Each of us has the ability to shape our team.  Each of us has the ability to help guide the team.  This isn't about being a Project Manager directing the team, it is about people being part of a team and committing to the common goals.


Five people working on the same project is not a team.  Five people, sharing the same vision and goals and working together, is a team. 


 

Orientation day

OK, it's probably pretty obvious that I have been pushing education a lot recently.  Well, today my daughter is attending an orientation day at her new Junior High.  When I think back to when I was her age (yes, the world was black and white back then) going to Junior High was a big change.  Instead of staying in one classroom for most of the day I switched from one room to another and even the people I was with changed throughout the day.


I no longer had the advantage of staying with one teacher a little bit longer and picking up on a concept I missed.  I was now responsible for learning it on my own and, in the event I still couldn't get it, only then was I going to talk to the teacher.  This was a big change in how my world operated up until then and it was really scary.  So, I empathize with my daughter.  I know what she is going to be going through and I will do my best to support her. 


This orientation day my daughter is attending is going to go a long way towards making her feel comfortable in her new school and comfortable with the process.


Now, fast forward ten years.  She's graduated from school and has her degree/diploma and has come to work for your project.  What do you have in place as orientation material?  What do you have that will help her get over the initial fear of a new experience?  What processes are in place to help her become as productive as possible in as short a time as possible?  If you're like most of us, the answer is probably "not much".  We all know the need is there, but filling that need just never seems to be a high priority.


The next time you've got a few minutes, think of my daughter, and of other people's children, joining your project this year, next year, or the year after.  What needs to be in place?  What can you do to help?

Education

Do you ever have a few minutes to kill and you're not sure what to do?  Get certified. 


OK, getting certified in something may take longer than a few minutes, but doing a test is an easy way to tell how close you are to the final goal.  For instance, there is a company called Brainbench that lets you write tests to "certify" yourself in various areas.   While many of these exams do cost money, I prefer looking up the "Free" exams.  Through this route I have taken an exam on Shorthand (I passed, but barely), Internet Security, Writing English, Typing, and others.


I've done these exams for a number of reasons, not the least of which is that I want to test myself to see if I actually know a topic.  I've been talking a lot about education recently and how it is important to keep yourself informed about a topic.  The Brainbench site has a number of FREE exams right now on topics like .NET Framework 2.0, RDBMS concepts, Programming Concepts and Software Testing.  While I am not advocating this particular site, I am advocating education. 


If you are more serious about your education you can try for any one of a number of Microsoft certifications.  There are a lot of sites that help you out with studying for these exams, with Transcender being one of the oldest companies in the business.  Or, for those who prefer studying at their own pace with a solid reference, most of the Microsoft exams have associated books.  (Imagine that, they charge for the exam and they charge for the book for studying.  What a racket!!!!) 


It doesn't really matter which route you choose, just go out and learn.

SQL Injection

Security of the data is important to every application.  Ensuring that only properly authenticated users receive access and that only properly authorized users view the data is critical to the success of an application.  Unfortunately, there are many ways to get access to an application and some of them are amazingly simple.  For this note, we're going to talk about "SQL Injection" attacks.


Much like the name implies, a SQL Injection attack is the insertion of SQL code into an existing call in order to compromise security.  Essentially what happens is that the application fails to validate the data coming into the application and allows people to insert SQL code into an existing SQL call to the database.  For details of how this is done, Steve Friedl of UnixWiz.net has an interesting example.


Is this information hard to come by?  No, it's not.  The link above was actually the top one on the list that Google provided to me.  It gives detailed, step-by-step instructions on how to break into a poorly secured web site, and the information is so easy to follow that even my daughters could try it out at home.  Many organizations have put standards in place to address this issue.  However, standards are only effective if they are followed, and they aren't necessarily going to be followed if the person doing the work doesn't understand the reason why.


Essentially, this comes down to education.  Educate yourself on how to break into your system so that you can prevent others from doing so.  This doesn't mean that you need to be a security specialist, but what it does mean is that you should be conscious of the techniques that people use so that you can stop them from being used against you.  Information is the key.  Let's hope that this key is locking things up instead of opening the lock.
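To make the attack concrete, here is a minimal sketch in Python using SQLite (standing in for the real application stack, which the post doesn't specify).  The table, column names, and login helpers are all hypothetical; the point is the difference between concatenating user input into the SQL text and passing it as a parameter.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'secret')")

def login_unsafe(name, password):
    # Vulnerable: user input is concatenated straight into the SQL text.
    sql = "SELECT COUNT(*) FROM users WHERE name = '%s' AND password = '%s'" % (name, password)
    return conn.execute(sql).fetchone()[0] > 0

def login_safe(name, password):
    # Parameterized: the driver treats the input strictly as data, never as SQL.
    sql = "SELECT COUNT(*) FROM users WHERE name = ? AND password = ?"
    return conn.execute(sql, (name, password)).fetchone()[0] > 0

# Classic injection payload: the '--' comments out the password check entirely.
payload = "alice' --"
print(login_unsafe(payload, "anything"))  # True: attacker is in without a password
print(login_safe(payload, "anything"))    # False: the payload is just an odd user name
```

The same principle applies in ADO.NET: use parameterized commands or stored procedures rather than building SQL strings from user input.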

Side Benefits

In a recent note we talked about moving historical records out of the main table into a history table or, depending upon the purpose of the historical records, an audit table.  One of the comments that I got back was that this approach had a number of additional benefits:




  1. Easier to write code to retrieve data - no fancy date handling required

  2. Easier to use ad hoc reporting tools - same reason

  3. Better performance due to simplified date handling and smaller table sizes (as only most current record kept)

  4. Can control access to current vs historical data easily by restricting access to the various tables

  5. Easier to archive, as you only need to worry about the history table

(Thanks Rob)


It's easy to miss amongst the glitz and glamour of coming up with solutions that everything we do, every decision we make, has multiple ramifications.  What we may do to "simplify" something may cause severe repercussions in other areas, totally negating the positive benefits.  Sometimes we come across a solution that has both positive and negative impacts, but the positive impacts so far outweigh the negative that there doesn't seem to be a reason not to adopt the new approach.

Coming up with alternatives can be quite difficult, which is where "peer review" comes in really handy.  Grab a friend or two, someone who has done some design work before, and show them your design.  Help them understand the problems and the solutions that you've come up with.  Peer reviews are tremendous tools in that they help to validate approaches and ensure that other possibilities have been considered.  (Don't go overboard on documenting your design until after you've had a peer review, however, as the more time you invest in your solution the less likely you are to consider other options.)

Virtualization Technology

I was reading an article recently about virtualization that actually surprised me.  The Collier County School District in Florida is a very big proponent of virtualization technology.  Their technology plan calls for the replacement of traditional desktops with thin clients.  Users would essentially log into a virtualized desktop located at the District's central computing center.  By loading up blade servers with lots of RAM they are trying to get 30 or more desktops per server.


Wow!  Thirty virtual machines per physical host!  We have not been nearly so aggressive, with our biggest servers handling 15 or 16 virtual machines.  Many of our servers are much smaller and we have a correspondingly smaller number of virtual machines.  Right now we have in excess of 190 virtual machines, some of these being used as desktops, while others are used as servers, both in a Development capacity and a Production capacity. 


With the upcoming release of Windows Server 2008, however, we plan to take even more advantage of virtualization technology.  Comments from Microsoft about the software being able to handle 512 virtual machines per physical machine notwithstanding, we don't plan on hitting that number any time soon.  What we do plan on doing is implementing features that will allow virtual machines to consume more CPU on the box on which they are hosted, features that will allow us to move a virtual machine from one server to another with no interruption to service, features that will allow us to create new virtual machines in minutes, in some cases in an automated fashion to handle heavier workloads.


Virtualization is a proven technology: just talk to any mainframe guy and he can tell you that multiple "operating systems" run on an IBM mainframe every day.  Great strides are being made in this area every day, and when they are ready to use we will be there.

Error Messages

Error messages are vitally important to being able to debug an application that is having trouble.  One thing I should mention, though, is that the error message and subsequent call for action need to make sense.  For instance, the following error messages, or the actions they suggest, just don't make sense or don't help to debug the problem:



  • Keyboard not found.  Press F1 to continue.  (I last saw this on an IBM PS/2 model 55SX.  I paid $6000 for a machine which I felt like throwing out the window.)

  • An unexpected error has occurred.  (I last saw this on a number of different production applications in our own shop.  This doesn't help.  Honest.  Any shred of additional detail would be appreciated.)

  • This is impossible.  (Last seen in one of our production applications.  You know, if I've seen it in an error message, it's obviously not impossible.  BTW, I saw 20 occurrences of this.)

  • Invalid effective end date.  (Too bad there are about a dozen effective dates used at this point in the application.  No idea what date is being used or what table is being accessed.  Quick, call for a DBA!!!)

Sometimes we try to hold our clients' hands and we use the excuse "Well, we want to make the error message friendly to the user".  Fine, make it friendly, but you can still add more information.  For instance, on the effective date error, if you added what date was incorrect you would not only make it more user friendly, you might actually allow the user to solve the problem themselves!!!  The "unexpected error has occurred" message is sometimes a catchall, but you can still add valuable information. 


No, none of these are perfect solutions, but you need to understand that while you might be covering up the sins of the application to the end user, the support personnel have no data to go on in order to fix the problem.  This prolongs the issue and makes the application actually look worse in the long run.  You might want to consider a two-part error message:  first part user friendly, second part techie.  You could add "Report the error to the appropriate support personnel and give them the following data:  blah blah blah".  Give the user both parts, but tell them to pass on the second part.  They will appreciate it, as will I.
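A two-part error message might be sketched like this (in Python for brevity; the message text, field names, and class name are all hypothetical):

```python
class AppError(Exception):
    """Two-part error: a friendly message for the user, plus detail for support."""
    def __init__(self, user_message, technical_detail):
        self.user_message = user_message
        self.technical_detail = technical_detail
        super().__init__(user_message)

    def display(self):
        # The user sees the friendly part and is asked to pass on the techie part.
        return ("%s\nPlease report this to support and give them the following: %s"
                % (self.user_message, self.technical_detail))

err = AppError(
    "The effective end date entered is not valid.",
    "EffectiveEndDate='2007-02-30' rejected while updating table Address "
    "(rule: must be a real calendar date)",
)
print(err.display())
```

The user can act on the first line themselves, and support gets the table, column, value, and rule without having to reproduce the problem.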

Single Point of Failure

Single Point of Failure.


There are probably a lot of really nice definitions out there, but I'd like to use my own.  In my world, a single point of failure is:



... a component, hardware or software based, which when it fails will cause the entire system, or an entire subsystem, to become unavailable to the users ...


So, let's give some examples:



  •  An application that only runs on a single web server has the web server as a single point of failure.

  • An application which uses only a single database server (non-clustered) has the database server as a single point of failure.

  • An application that relies on the Internet but has only a single connection has its ISP connection as a single point of failure.

While we try to cover many of these different aspects when we design applications and infrastructures, sometimes things still don't work.  For instance, in Production we've got clustered web servers, clustered database servers, multiple Ethernet connections, redundant DNS servers, RAID disk storage and dozens of other redundant systems.  Sometimes, though, things just go south really fast and in a really bad way.  Recently we had an air conditioning problem with our server room.  We have redundant units that have multiple air conditioners in each unit.  Through a sad set of circumstances we ended up with only 1 of 4 units working. 


No matter what anyone does, there is no such thing as a foolproof system.  There will always be some avenue whereby a single point of failure exists.  The target is to identify those areas and work on putting in redundancy, one step at a time.  It is a long process, but nothing worthwhile is ever accomplished quickly.

DataSets vs. DataReaders

I am stepping into heretical territory here, so you will have to pardon my trepidation.  I am going to discuss something over which wars have been fought, reputations destroyed and lives ruined.  Yes, you guessed it, I am going to discuss DataSets vs. DataReaders.


There has been much discussion of this topic behind closed doors and even the occasional directive stating that if you are passing large amounts of data from one tier to another, use a DataSet.  DataSets are indeed convenient mechanisms for transporting around a lot of information that can be stored in a table/row manner.  What happens, though, if you are retrieving a single value?  What if you are going to be retrieving data until a specific event occurs (time or data initiated) and then stop processing?  My contention is that these items may be better suited to a DataReader as opposed to a DataSet.


A DataReader is much lighter weight and is actually the underpinning upon which the DataSet is built.  When you issue the Fill command to a DataSet it uses a DataReader to retrieve all of the data, which it then passes back to you.  If you don't need all of the data, however, you just chewed up a lot of processing cycles, processing memory, and your client's time retrieving data that you are going to throw away.  If you are in a memory-constrained or time-constrained situation it may be more appropriate to use a DataReader instead, as that will give you more control.  Is it difficult to use?  Heck, no.
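The same fill-everything vs. stream-and-stop trade-off exists outside .NET.  As a rough analogy, here is a Python/SQLite sketch (table and threshold are made up): `fetchall()` behaves like a DataSet's Fill, while iterating the cursor behaves like a DataReader.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE readings (id INTEGER, value INTEGER)")
conn.executemany("INSERT INTO readings VALUES (?, ?)",
                 [(i, i * 10) for i in range(100000)])

cur = conn.execute("SELECT id, value FROM readings ORDER BY id")
# DataSet-style: pull everything into memory up front, whether you need it or not.
all_rows = cur.fetchall()

cur = conn.execute("SELECT id, value FROM readings ORDER BY id")
# DataReader-style: stream rows one at a time and stop as soon as the
# "specific event" occurs -- here, the first value over a threshold.
first_big = None
for row_id, value in cur:
    if value > 500:
        first_big = (row_id, value)
        break  # the remaining rows are never materialized

print(len(all_rows), first_big)
```

If all you wanted was `first_big`, the first approach materialized 100,000 rows to use 52 of them.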


So, what is it that I am advocating?  Education.  Learn the differences between a DataSet and a DataReader and when each is the most appropriate alternative.  Understand the weaknesses of each, not just the strengths.  Then, and only then, make an intelligent, informed decision about the right tool to use. 

History Tables

So, what do you do if you want to have high quality data (i.e. no fake dates for effective end dates) but don't really like to use columns that can contain nulls?  Well, for rows that contain effective dates, have you ever thought about using a history table?


If the vast majority of accesses to the table involve just the current data and not historical data, then a history table may solve your problems.  A history table contains all of the "old" rows and as such it will have an effective start date and an effective end date.  No need for nulls here, as you know precisely what these dates are.  As for the main table, depending upon the application, it may not even need any effective dates at all!!!!  Need the current address?  Just get it from the Address table.  Need an historical address?  Get it from the Address history table.  Going to be doing this a lot?  Put an index on the date/time fields.  (Sorry about that shameless plug for some other posts of mine.)


Is this effective date nirvana?  No, not really.  There are some applications that make effective use of historical data, and for them a history table would only make things more complicated.  In other cases, you aren't really keeping track of history; the effective start and end dates are really being used to audit who made what change on what date.  If what you want is audit information, then create an audit table:  similar in concept to the history table, but designed for auditing.


You see, it's not a sin to take a single table and make it two tables.  Indeed, there are really good reasons why you should.  But, if you aren't sure, talk to your DBA.  They can help you out, if only by asking you questions from a different perspective. That alone is worth the price of a visit.

Null Values

What does a null value in a table actually mean?


Well, technically, a null value means that there is no data for this column.  If the column is to capture a birth date, then a null value would mean that you don't know the birth date.  If the column is about the date of death, then a null value would mean that you don't know the date of death.  It does not mean that the person is alive, just that we don't know the date of their death.


One of the more common problems that developers have is that they make a piece of data, or the absence of the data, mean more than it should.  In the above case, if you need to know if the person is dead, you need an additional field ("Deceased"?) that indicates if the person has shuffled off this mortal coil.  The absence of data in the date of death field cannot, under any circumstances, be construed as indicating that the person is alive.  What if you were told this person was deceased, but you weren't told when?  What do you do?  Put in a fake date of death?


I have a personal pet peeve in this area.  Within the organization(s) we have a number of tables that have effective dates.  There is a start date and an end date.  What many applications have done is put in "2999-12-31 11:59:59 PM" as the effective end date.  (Historical background: prior to more recent releases of Access, this was the maximum date that Access would allow in a date/time field.)  What this means, to me, is that this record will no longer be effective as of that date.  We seem to know this in advance.  Indeed, much of the data that we have seems to expire on this date.  I would not want to be in application support on the day after, when all of the data in the organization suddenly expires.


Is this truly the effective end date?  No, it's not.  The effective end date is actually null, but this makes coding for the programmers a little more complicated.  It makes the data cleaner and more accurate, but makes it more difficult to program.
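The "more complicated" coding amounts to one extra clause in the predicate.  A sketch in Python/SQLite (table and values invented): a null end date honestly means "still in effect", and the query handles it explicitly instead of relying on a fake far-future date.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE rate (rate_id INTEGER, amount REAL, eff_start TEXT, eff_end TEXT)")
# The expired row has real dates; the current row's end date is honestly unknown: NULL.
conn.executemany("INSERT INTO rate VALUES (?, ?, ?, ?)", [
    (1, 9.50, "2005-01-01", "2006-12-31"),
    (2, 10.25, "2007-01-01", None),
])

# The slightly more complicated predicate: NULL means "still in effect".
as_of = "2007-08-31"
row = conn.execute(
    """SELECT amount FROM rate
       WHERE eff_start <= ? AND (eff_end IS NULL OR eff_end >= ?)""",
    (as_of, as_of)).fetchone()
print(row[0])
```

One `IS NULL` test per query is the entire cost of keeping the data accurate.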


I have a personal preference in this area, as I'm sure you can tell, but I will leave it up to you, the reader, to examine the pros and cons and make up your own mind.  Or, if you'd like, wait until the next Daily Migration Note, where a potential solution is revealed.

Windows Registry

The Windows Registry has been a miracle of engineering.  It is a miracle that it hasn't collapsed under its own weight.


Originally the Windows Registry was the solution to the .ini file.  Instead of storing information in .ini files located all over the hard drive, this information could be placed in a central registry and accessed from all applications.  So, what's wrong with this?  Well, like a pendulum that swings from one extreme to another, the concept of the Windows Registry was an extreme.  Yes, there are things that should be placed in the Registry.  Information that is common to multiple applications belongs in the Registry.  Information that isn't probably shouldn't be stored there.


Like all pendulums, this one is swinging rapidly back the other way.  Using .NET we have application configuration files and web configuration files that are placed in the same directory as the application, pretty much neglecting the Registry.  Is this a good thing?  Well, in many respects, yes.  It gets people thinking about what should be in the Registry and what shouldn't.


Is it as simple as "Registry=Bad.  Config files=Good"?  No, not really, but it is close to the truth.  Unless you need to put something in a location that multiple applications need to access you should probably use a configuration file.  It is simple.  It is easy to deploy.  It works.
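The per-application config file idea works the same way outside .NET.  A small Python sketch (the section and key names are made up) using an .ini-style file that would sit in the application's own directory rather than in a machine-wide registry:

```python
import configparser

# Settings travel with the application instead of living in a machine-wide registry.
# In a real app this text would come from config.read("myapp.ini") beside the executable.
sample = """
[database]
server = db01
timeout = 30
"""

config = configparser.ConfigParser()
config.read_string(sample)

server = config["database"]["server"]
timeout = config.getint("database", "timeout")
print(server, timeout)
```

Deployment is then just copying the file alongside the application, and per-environment changes are a text edit, not a registry update.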

SQL Server Best Practices

Along the lines of our SQL Server set of posts, I came across a really interesting web site:  SQL Server Best Practices.  It contains all sorts of material on best practices with SQL Server 2005.


"But Don, what if we're not using SQL Server 2005?"  Ouch.  Mainstream support for SQL Server 2000 ends on April 8th of 2008.  If you are not currently using SQL Server 2005 then I think your first order of business is getting on to SQL Server 2005.  Don't worry about trying to take advantage of all of the SQL Server 2005 features, just get off of the older software!!!!


OK, now that you've come back from those links and have some awesome ideas on how to take advantage of SQL Server 2005, talk to your DBA.  Please don't automatically assume that everything you read on these pages will be available to you or your project.  Don't assume that all of the whiz bang features have been turned on.  Don't assume that you are familiar enough with the feature to understand the impact it will have in our organization.  Don't assume that just because you read it somewhere and that it made a lot of sense that it actually makes a lot of sense for your application.


Talk to your DBA.  If they don't know about this particular gem that you found on page 764 of a book printed in Greek and found on a Russian web site with the slogan "Punish Microsoft", give them the information and let them research.  As they are more familiar with the base tool than most people they will be able to evaluate your request with a keen eye towards keeping the systems up and running and reducing the effort to do so. 


Once again, talk to your DBA.  And remember, do this before you've written a lot of code as your information may make some radical changes to your database or how you access it.  Proactive interaction with the DBA, an awesome thing to behold.

Talking to your DBA

OK, so last week we talked about the "evils" of SQL Server and how they could be solved through better design and understanding of SQL Server.


One of the questions that I was asked was "how do I get this better understanding"?  Well, there are multiple methods:



  • Reading.  Read books on SQL Server and on effective database design.  Notice how I've separated the two.  Understanding how SQL Server works is just as important as the proper design of the database.

  • Education.  Take a course on database design.  (See if you can find someone who has taken the course before you, as there are some courses that are pure trash and you need to avoid those.)

  • DBAs.   Talk to your DBA.  They probably know more than you about SQL Server and how it operates, so it is time to take advantage of their knowledge.

While the last point, talking to your DBA, may seem obvious, too many people wait until they are in trouble before they talk to the DBA.  That puts a lot of added pressure on everyone and usually results in a "quick fix" as opposed to the best solution.  Talk to your DBA early on in the process, preferably before any code is written, but if that can't be accomplished, as soon as possible thereafter.  If there are some fundamental changes that need to be made, you want them done early, not when the Director is breathing down your neck saying "Is it done yet?"


 

SQL Server the Root of All Evil

SQL Server is the root of all evil.  SQL Server causes more problems, with more applications than any other part of an enterprise system.  It is inefficient, slow to respond, and is the focal point of more performance issues than anything else.


OK, now that the myths are out of the way, let's get to reality.  SQL Server is indeed at the center of many performance issues, but not because of the SQL Server product itself; rather, because of the usage of the product.  Database design is a key factor in how well SQL Server, or any database server for that matter, can respond to a query.  What columns actually need to be in your table?  Determining whether you need a sequential MBUN, a GUID or a ROWID may seem trivial, but it has tremendous impact on performance.  Database design extends to more than just what columns belong in what table: what indexes should be created and what columns should be used for clustering matter too.  (P.S.  If your clustered index is a GUID, OUCH.  If you don't cluster on your data, OUCH.)


The physical structure, however, is just one part of the overall solution.  Having short, concise and well-constructed stored procedures is very important.  Understanding that TempDB should not be used if a TABLE variable works is also important.  Coming up with project standards, at the beginning of the project, with regard to expected response time is important.  On a previous project we had a threshold for stored procedures set at 1 second, so that if anything took longer than 1 second we looked into the reason why.  Some people significantly lower that number so that anything over 200 milliseconds gets investigated.  If you've set a limit of 1 second and you've told the user that no screen will take more than 4 seconds, then you know that you can call, at most, 4 stored procedures.
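Enforcing such a threshold can be as simple as a timing wrapper around every database call.  A hedged sketch in Python (the threshold, wrapper, and procedure name are all hypothetical stand-ins):

```python
import time

THRESHOLD_SECONDS = 1.0  # project standard: investigate anything slower than this

def timed_call(name, func, *args):
    """Run a database call and flag it if it exceeds the agreed threshold."""
    start = time.perf_counter()
    result = func(*args)
    elapsed = time.perf_counter() - start
    if elapsed > THRESHOLD_SECONDS:
        print("INVESTIGATE: %s took %.3f s" % (name, elapsed))
    return result, elapsed

# Stand-in for a stored procedure call.
result, elapsed = timed_call("usp_GetAddress", lambda: sum(range(1000)))
print(result)
```

With every call instrumented, the "why is this slow?" conversation starts from numbers instead of impressions.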


If you are experiencing "problems" with the database, don't automatically assume that it is the fault of the database.  If the design, construction and implementation of the database have a flaw you may experience problems, but they are not the fault of the hardware, nor of the database engine itself.

Pet Peeves

We all have pet peeves.  You know, something that really irritates you when others may just shrug it off.  I thought I would list some of the pet peeves of the deployment team:



  • Being asked to remove the old version of ABLearning.xxx.yyy when in the Add/Remove program lists it is called Fred.  This causes no end of trouble.  Can we make the names consistent?  If you are installing ABLearning.xxx.yyy, make sure that the uninstall is called the same thing.
  • Seeing people use a new  deployment package when they migrate the same code to a new environment.  DeCo was set up so that the same deployment package could be used to deploy to UAT and Production.  If the package is named properly it is easy to find and easy to re-use.  (Plus it already has the items attached.  Saves time.)
  • Seeing people use the same deployment package over and over and over and over again.  If you are deploying something new, you need a new deployment package.  Re-using an old deployment package seriously confuses people when they look at the history of other deployments that may have been using the same package.
  • People putting everything into one zip file.  It is so much better for everyone if you match the number of zip files with the number of items.  (Oh, the converse is also true, there is no need to add every single file separately, they can be grouped together in a zip file.)
  • Being told that performance is "slow", but no one can actually tell us what "fast" is.
  • Being asked "when is it going to be fixed", when I don't even know what is broken.
  • People disagreeing with me.

OK, that last one is a joke.  Sort of.

Defaults are Good

Defaults are an interesting thing.  They are usually put in place because, the majority of the time, the default makes sense to use.  This applies to a lot of aspects in life.  The turn signal indicator in your car is placed where it is because, for the most part, this is where people expect it and where they can make use of it without taking their hands off the steering wheel.  The default location of Entry/Exit doors to a building or store is set to mimic the roads that we drive on:  enter on the right (just like driving on the right, for you out-of-towners).


When we build applications we also have defaults.  For instance, there is a default name for the application configuration file for a .NET application.  DON'T CHANGE IT.  There is a default name for the file that web sites look for if one is not specified.  DON'T CHANGE IT.  There is a default for buttons when they are pressed.  DON'T CHANGE IT.  There is a default behaviour for text boxes.  DON'T CHANGE IT.


OK, perhaps "DON'T CHANGE IT" is a little harsh, but it grabs your attention better than "Don't change it unless there is a reasonable payback on an alternative including, but not limited to: increased productivity, decreased deployment effort, increased user satisfaction, increased performance, etc."  Seriously, though, unless there is a good reason why you want to make changes to the defaults, let them be.  They are defaults because they work for the majority of people, so why don't they work for you?

Doing Data Fixes the Right Way

I was having a rather spirited debate with an old friend the other day about data fixes.  (Since he was buying lunch I thought it only polite that I listen while he talked.  That and I didn't want him to take away the food if I disagreed with him.)  For the most part we agreed on many of the items:



  • when possible data fixes should be tested in UAT prior to Production

  • they should be scheduled much like any other deployment

  • business areas should approve the data fix prior to the data being committed

We disagreed, however, about what constitutes a data fix.  While we both agreed on the general principle of "a data fix is used to correct data that is in an invalid state in a database", I preferred to add one additional word: "unexpected".  In my definition the data needs to be in "... an invalid and unexpected state ...".  In my friend's world he is used to doing data fixes on a daily basis to correct the data issues that crop up in the inventory system that he is maintaining.  The data fixes are pretty much the same SQL run over and over again, with just the input parameters changing.  When I asked him why he just didn't fix the program, he complained that management didn't want him to spend time fixing the code because "... changes were coming ..."


If you expect data fixes as a regular part of your daily operations, then you have a problem with your application.  Fix it!!!!  I know that in some circumstances it isn't easy to fix.  There may be forces outside of the control of your application that cause the data to be invalid.  However, in those cases there is an easier fix than the submission of a data fix every time the business area needs some data changed.  Create a page, a really simple page, that accepts the parameters from the business user and executes a stored procedure to make the changes. 


What does this do?  Well, it eliminates the middle (wo)man: the DBA who needs to create and/or run the SQL.  It gives the business area more control over what they want to do and when they want to do it.  It can provide a full audit trail of who made the change and when.  And, most importantly, it can be implemented quickly and will pay for itself almost immediately.  The effort around a single data fix, no matter how small, consumes a considerable amount of time.  By letting the users do their own "data fixes" the developer/DBA can work on fixing the real problem, not just the symptoms.
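As a sketch of what such a page's back end could boil down to (Python with SQLite here purely for brevity; the table, column, and parameter names are hypothetical, not from any real system), the key ideas are a parameterized statement and an audit record written in the same transaction:

```python
import sqlite3
from datetime import datetime, timezone

def apply_data_fix(conn, user, order_id, new_status):
    """Apply a pre-approved data correction and record who did it and when.

    Hypothetical example: an inventory/order system where a known upstream
    issue leaves orders in a bad status that the business must correct.
    """
    with conn:  # one transaction: the fix and its audit row succeed or fail together
        cur = conn.execute(
            "UPDATE orders SET status = ? WHERE order_id = ?",
            (new_status, order_id),
        )
        conn.execute(
            "INSERT INTO data_fix_audit (applied_by, applied_at, description) "
            "VALUES (?, ?, ?)",
            (user, datetime.now(timezone.utc).isoformat(),
             f"order {order_id} status -> {new_status}"),
        )
        return cur.rowcount  # how many rows were actually corrected
```

In a real implementation this would sit behind the simple page described above, with the page supplying the parameters and the logged-in user's identity.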


I didn't win the argument, however, as my friend is a consultant and gets paid to do the data fixes, so he actually liked the way it was set up.  I did manage to finish lunch before disagreeing with him, however, so from that perspective I won.


Enforcing the Rules

To what extent should "the rules" be enforced?  There are some who advocate complete obedience to the letter of the law.  Others advocate following the spirit of the law, while another group says "as long as it doesn't hurt anyone, does it really matter?"  In some respects it is quite contextual:  you don't follow the letter of the law if doing so is going to hurt someone, but if no one is around and no one is going to get hurt, do you really have to stop at the stop sign?


The Deployment Team has a number of rules in place with regard to deployments and they were published back in March.  These rules are, we thought, relatively straightforward, but they seem to be either misunderstood by some people or just ignored.  I thought I would re-publish them and ensure that everyone is aware of the rules we follow.  For the most part, we follow the letter of the law with regard to these deployments, mainly because it actually saves us a lot of work and reduces confusion.  If there are any questions about them, please let me know.


Deployment Request Criteria

Documentation attached

Is there appropriate documentation included with the deployment request so that anyone can deploy the application?  For very simple installs the request can be approved and the deployment team notified that more complete documentation is required.  Any deployment that requires the installation or uninstall of an MSI must include documentation.

If there is no documentation the request will be rejected.

Documentation accurate

Is the documentation accurate?  A brief review of the documentation can determine if the document is even pertinent to this migration and if it is accurate.

If the documentation is incorrect the request will be rejected.

Installation files are attached

If DeCo is to be used as an audit tool then the installation files need to be attached to the request and not located on a development server. 

If the required files are not attached the request will be rejected.

Installation files are accurate

If DeCo is to be used as an audit tool then the installation files need to be accurate.  This means that, to the best of everyone's knowledge, the files do what they are supposed to do.

If the required files are not accurate the request will be rejected.

Standards are followed

The following standards will be enforced:

- Version numbers

- Database naming

- No Administrator rights to applications

While there are others that we would like to enforce, these are the main items at the moment.

If the standards listed above are not followed the request will be rejected.  It is expected that the list of standards to be enforced will grow, but project teams will be notified before any new standard is enforced.


There are additional requirements for a Production application

Installation files are from a deployment to UAT

We do not move directly into Production except in the most dire of circumstances.  As a result, all of the files that we are moving into Production need to have gone into the UAT environment first. 

If the files have not previously been deployed to UAT the deployment is rejected, unless the deployment is being made to correct a high-priority production incident.

Business Approvals made by 8:00 AM on the day of migration

If a Production deployment request does not have business contact approval by 8:00 AM on the day of the migration it will be rescheduled for the following business day, with the exception that nothing will be rescheduled for a Friday afternoon.

The development team is free to reschedule the deployment back to the original day, even if it is Friday, but they will be asked to provide a reason why the migration needs to be done immediately and this reason will be forwarded to Rob Schneider and Dawn Quaife. Friday afternoon deployments to Production will need the approval of Rob Schneider or Dawn Quaife.

If a production deployment request is made for the same day the same process as described above (request for a reason and director notification) will be followed.

Deployments to production are not rejected for this item, but the reasons for late approvals or late creation of requests are made available to Directors for further review.
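Stripped to their essentials, the criteria above are a checklist, and a checklist is easy to make mechanical.  A minimal sketch in Python (the dictionary keys are hypothetical illustrations, not actual DeCo fields):

```python
def review_request(req, production=False):
    """Return the reasons a deployment request would be rejected.

    An empty list means the request passes the checklist.  `req` is a
    hypothetical dictionary describing the request; real tooling would
    pull these flags from the deployment system itself.
    """
    reasons = []
    if not req.get("documentation"):
        reasons.append("no documentation attached")
    if not req.get("documentation_accurate", True):
        reasons.append("documentation is incorrect")
    if not req.get("files_attached"):
        reasons.append("installation files not attached")
    if not req.get("standards_followed"):
        reasons.append("standards not followed")
    # Production only: files must have been through UAT unless this is
    # a high-priority incident fix.
    if production and not req.get("deployed_to_uat") \
            and not req.get("high_priority_incident"):
        reasons.append("files have not been deployed to UAT first")
    return reasons
```

The business-approval timing rules are deliberately left out of the sketch: as described above, those lead to rescheduling and director notification rather than outright rejection.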


Documentation

In another life I was the "Master of the Methodology".  OK, what that really meant was that I was the one who knew how to access/install the methodology we were using, and as a result all methodology questions were forwarded to me.  I was the expert, by default.


One of the interesting things embedded within the methodology, however, was the concept that the phases of the project, and even the deliverables themselves, all needed to be defined at the beginning of the project.  While you could use the "standard" project template, the vast majority of project managers customized, to some degree, the phases and deliverables that the project was going to create.


While in some circles this would be akin to shooting yourself in the foot, the methodology we used not only made allowances for this, but actively encouraged the modification of the methodology to suit the needs of the business area, the development team and the maintenance area.  There were some documents that were produced that were of specific use to one group only, but there were also many documents that were useful to all parties. There were a few simple rules that were used in determining whether or not a document needed to be produced:



  • What documents are needed by the business area to document their vision?  While there are many different types of documents that could be produced, a short list of really meaningful documentation, created in conjunction with the business area, is what usually made it into the project proposal.

  • What documents are needed by the development team to create the vision shown in the business documents?  If it was not needed to help build the application it was not included in the list of deliverables.

  • What documents are needed to maintain the application?  After many years on the maintenance side of the equation, I can say that there is a limited subset of documents that are actually useful in maintaining and enhancing an application.

  • What documents are needed by the organization to maintain an appropriate level of oversight on the project?  Status reports, revised project plans, revised timelines, etc., are all required to maintain oversight on a project.

A clear understanding of the word "needed" is required by all individuals.  "Nice to have" is not the same as needed, and there needs to be a common understanding amongst all parties:  business, development and maintenance.


The larger the project, the more documentation that is going to be required, because there are more lines of communication that need to be fully developed and understood.  Conversely, the smaller the project, the fewer the lines of communication and the smaller the amount of documentation required.

Definitions

The English language is an amazing thing.  We have multiple words that mean the same thing and single words that can mean multiple things.  Here are some examples where there can be great confusion over the meaning of a word:


Concurrent users.  To the non-technical user this is the number of users actively using the application.  To the technical person it means the number of people using the application at any one point in time.  The biggest difference is that the more technical definition specifies an exact point in time.
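The distinction is easy to see in code.  A minimal sketch, with made-up sessions expressed as (login, logout) hours:

```python
def concurrent_users(sessions, at):
    """Count the sessions active at the single instant `at`.

    This is the technical definition: an exact point in time.
    `sessions` is a list of (login, logout) pairs.
    """
    return sum(1 for login, logout in sessions if login <= at < logout)

# Three users logged in over the day -- the non-technical count of "users
# actively using the application" -- but at 10:30 only two sessions overlap.
sessions = [(9, 17), (10, 11), (13, 18)]
```

Capacity planning cares about the peak of the second number, not the first, which is why pinning down the definition early matters.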


Application Downtime.  To the business area the application is down if any portion of the application is unavailable to any portion of the target audience.  To others it means that the entire application is unavailable to the entire audience.


Emergency Change.  To the business area this is a situation where there is an immediate need for action or where inaction may cause harm to the organization.  To the more technical this means an all-nighter and having to convince Don that the deployment request is actually an emergency.


Technical Discussion.  To the business area this is a meeting where the technical people want to get together to discuss something without the business area hearing about it.  To the technical people ... well, I guess everyone agrees here.


A common understanding can go a long way towards making things run a lot more smoothly.  For everyone.

Solving the Right Problem

One of the hardest things to do is solve the right problem at the right time.


When investigating a problem you may end up looking at a wide variety of possible solutions.  Some of these solutions are quick fixes while others require a fair amount of effort to implement.  The question is, which one do you propose?


For a crisis, the quick fix is usually the right choice.  Things need to be resolved quickly and the best solution may not be able to solve the problem fast enough.  As a result the quick fix is usually chosen for Production emergencies and rushed through into Production.  Quick fixes are not meant to be permanent solutions, but in many cases they end up being permanent for a variety of reasons.


In less crisis-oriented situations, however, the best solution may actually be the resolution of a deeper, more convoluted problem that is the root cause of the issue.  Unfortunately, resolving the root cause of a problem may be a problem in and of itself.  There may be significant effort and money that needs to be spent in order to resolve the issue properly.  Sometimes the problem is so fundamental to the application that it almost appears that you have to re-write the application to make it work as desired.  If this is the case, is this what you should propose?


As with many things in life, it comes down to a business case:  is the cost of implementing the solution less than the cost of living with the quick fix?  If this were strictly a matter of dollars and cents then the answer would be known right away.  Unfortunately, the cost of living with the problem is not easily quantifiable.  How do you measure consumer lack of confidence in terms of cost?  How do you measure consumer satisfaction in terms of cost?  In many cases only the business area affected can even hope to determine the cost.  It is our job to present the facts as we know them, the costs as we know them, and let the business decide the ultimate cost.
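For the quantifiable half of that business case, the arithmetic itself is simple.  A sketch (all figures hypothetical, and deliberately ignoring the intangibles that only the business can weigh):

```python
def cheaper_option(quick_fix_cost, monthly_carrying_cost,
                   proper_fix_cost, horizon_months):
    """Compare the quick fix, plus the ongoing cost of living with it,
    against the proper fix over a planning horizon.

    Only measurable costs are modeled here; lost confidence and
    satisfaction are for the business area to put a price on.
    """
    quick_total = quick_fix_cost + monthly_carrying_cost * horizon_months
    return "proper fix" if proper_fix_cost < quick_total else "quick fix"
```

Over a long enough horizon the carrying cost of the quick fix usually dominates, which is exactly why "temporary" quick fixes that become permanent are so expensive.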

Dying Software

Recently I talked about how many of the technologies that we use are currently coming to the end of their support lifecycle.  Failure to upgrade the technology can cause us problems, and here is an example.


We currently use a technology called "iSCSI" to give servers additional storage.  The physical drives are on an iSCSI server and the client essentially maps the space that they are given to one or more drive letters on the local machine.  We have an instance where the space we have allocated is divided into two drive letters:  D: & E:.  The problem arises in the fact that after a restart the second drive (E:) does not always re-appear.  If any application on the server is expecting a drive E: there will be a significant problem.


Microsoft was actually able to recreate the problem on their test machines but, due to the fact that Windows 2000 Server, the operating system we are using on the client machine, has gone past its Mainstream Support end date, they will not be investing any time in resolving the problem.  They gave us a number of options, but it was pretty much "Good luck and don't bother calling again".


In this case one of the workarounds should fix the problem, but the fact is, if this had been a more serious "production is down everyone come help" type of issue we would have been in serious trouble.
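One defensive measure (a sketch only, and not the specific workaround Microsoft suggested) is to fail fast: anything that depends on the mapped drives should verify they actually came back before starting.  The drive letters below are from the example above; the function names are made up:

```python
import os

def missing_drives(expected, exists=os.path.isdir):
    """Return the expected drives that did not re-appear after a restart.

    `exists` is injectable so the check can be exercised off-server;
    in production it defaults to a real filesystem check.
    """
    return [drive for drive in expected if not exists(drive)]

# e.g. at service start-up:
#   problems = missing_drives(["D:\\", "E:\\"])
#   if problems: refuse to start and alert, instead of failing mysteriously later
```

This does not make the E: drive re-appear, but it turns a subtle application failure into an obvious, immediate one.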


Old technology met new technology and the result was a car wreck.  In order to maintain an operational environment we need to continually update both our hardware and our software.  Being unable to update our software because of dependencies on old versions can cause us some serious trouble.  In this case it was more of an inconvenience than a crisis, but I think we should consider ourselves lucky that our experience was this pleasant. 


If you want to take a look for yourself at which Microsoft products are supported, the Lifecycle Information page has a lot of information for you.