
RE: [agileDatabases] Re: Script creation: Manual vs. Automated

  • Mark Baekdal
    Message 1 of 34, Jun 9, 2009
      It's not often I write here but I thought I'd give it a go. I can only
      say what I do and for me no single rule fits all - but this is what I'm
      currently implementing in an insurance company in the city of London.

      The databases are all SQL Server - a mix of 2000 and 2005, with
      2008 arriving shortly.

      First, all the database code is held in source control - here they
      use Perforce (which I quite like, incidentally).

      Whenever database scripts change, the database code is used to build
      temporary source databases on the fly, with Cruise Control and DB
      Ghost doing the builds (incidentally, building the temp DB is very
      fast - here the average database builds in around two minutes). You
      may have alternatives to this.
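
      Just to give a flavour of what the build does (this isn't DB
      Ghost's internals, only a sketch of the one-script-per-object idea
      with made-up database and file names), a wrapper in SQLCMD mode
      could look like:

        -- Illustrative only: build a scratch database from the
        -- per-object scripts held in source control.
        :setvar TempDb PolicyDb_Build
        CREATE DATABASE [$(TempDb)];
        GO
        USE [$(TempDb)];
        GO
        :r .\Tables\dbo.Policy.sql
        :r .\Tables\dbo.Claim.sql
        :r .\Procedures\dbo.GetOpenClaims.sql
        GO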

      These temporary source databases are then used as the source for
      comparison, either to update the schema and reference data (static
      data, lookup data) in a target database directly or to produce the
      SQL delta that would do so - once again using DB Ghost, though you
      may have an alternative.
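
      The delta that comes out of the comparison is plain T-SQL you can
      read and check in. For a hypothetical change (made-up table names)
      it might look like:

        -- Schema delta: the target is missing a column.
        ALTER TABLE dbo.Policy ADD RenewalDate datetime NULL;
        GO
        -- Reference data delta: insert a missing lookup row.
        INSERT INTO dbo.PolicyStatus (StatusCode, Description)
        SELECT 'LAPSED', 'Policy lapsed'
        WHERE NOT EXISTS (SELECT * FROM dbo.PolicyStatus
                          WHERE StatusCode = 'LAPSED');
        GO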

      That's the simple summary.

      In this company they conceptually have five environments - Development,
      Staging, User Acceptance Testing/Model Office, Pre-Production and
      Production. These are in order of delivery, so code moves from
      Development to Staging to User Acceptance Testing/Model Office to
      Pre-Production and finally to Production.

      Developers have access to the source code in the Developers code
      line only. There are code lines for all the other environments
      except Production; these can be viewed by anyone but changed only by
      the build/deployment team.

      This separation means the build/deployment team need only integrate
      changes from the development code line into the target code line
      when they want changes propagated. Cruise Control picks up the
      code-change event and fires off DB Ghost, which does the rest.
      Pretty cool stuff, I think.

      Only the Pre-Production code line is a little different, in that
      it's set up so that propagating the changes is run as a manual task.
      This means the build team can integrate multiple changes into the
      code line and choose when to propagate them. When they do, the
      system creates a roll-forward and a roll-back script, which they
      test on the Pre-Production database and then hand on to the
      Production DBA group.
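
      For a single change, the pair of scripts is nothing exotic - for a
      hypothetical new column the pair would be roughly:

        -- RollForward.sql: apply the change.
        ALTER TABLE dbo.Claim ADD HandlerId int NULL;
        GO

        -- RollBack.sql: return the schema to its previous state.
        ALTER TABLE dbo.Claim DROP COLUMN HandlerId;
        GO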

      This means every object has a change lineage that everyone has
      visibility of. It means database dependencies are checked via a
      build at every change propagation. It also means the deployment of
      changes is tested at every stage. Once a change gets to Production,
      the likelihood of it failing is nearly zero, as any error would have
      been caught in an upstream environment and fixed in the source code.

      For each of the groups there are benefits. The developers don't
      have to worry about how changes take place; they just change the
      code and work away in their own environment. For the build team,
      database changes mean code integration. The Production DBA group
      gets scripts which are well tested. And management can get accurate
      estimates of how long database changes will take.

      For me this is damn cool - but you might disagree, your environment
      may not be suited to this type of system, or you may not be using
      SQL Server, in which case DB Ghost is not for you. But conceptually
      what we have here is (IMHO) a very good model for database change
      management.


      regards,
      Mark Baekdal

      Developers taking control of SQL Server code.

      From: agileDatabases@yahoogroups.com
      [mailto:agileDatabases@yahoogroups.com] On Behalf Of Jens Schauder
      Sent: 09 June 2009 21:59
      To: agileDatabases@yahoogroups.com
      Subject: Re: [agileDatabases] Re: Script creation: Manual vs. Automated

      milenvk wrote:
      > And for these two points:
      >
      >> Another issue is how to version control the incremental scripts.
      >> What we've been doing is create sub-folders for every version,
      >> which contain the incremental scripts for that particular
      >> version, e.g. from 0.10.1.1 to 0.10.1.2. I see some issues with
      >> this approach:
      >> 1. It seems to be in direct contrast with most version control
      >> practices, where versions should be expressed as tags, not
      >> subfolders of the branch.
      >
      > The reason for this is that you are not versioning a statically
      > changing entity like source code, text files, etc. You are rather
      > versioning just certain axes of change of a volatile environment,
      > like a running database, complete with data, concurrent users,
      > etc. With incremental scripts you are simply doing what version
      > control systems do internally, i.e. keeping just incremental
      > changes to a versioned entity.
      > Consider the alternative: keeping the entire database schema under
      > source control and recreating the database from scratch each time.
      > That approach has very limited use, as it only works for initial
      > development and cannot be applied to a production database.
      >
      I think there is a conceptual mistake here. The artifact that we
      are going to deploy is not the complete database but the change
      script (either a single one, maintained in version control just
      like any other source file, or a combination of migration scripts).
      Both approaches work fine. I have used the first approach
      successfully even in an environment where we had to support
      different branches of the database.

      regards
      Jens Schauder

    • jenswm84
      Message 34 of 34, Jun 30, 2009
        Hi,

        first of all, this is an interesting thread. I'm doing my
        bachelor's thesis on just such a scenario.

        I'm currently trying to convince the core dev/DBA team that the
        incremental versioning approach has more benefits than drawbacks.

        The main problem in our discussions is, as you said in your first
        post, the execution time of the migration when there are many
        unoptimized changes. If you add a column to the same table in
        three different steps and then remove one of those columns in
        another step, the "diff" way is much faster: the resulting change
        script adds the two surviving columns in a single step instead of
        replaying three or four separate ones.
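
        To make that concrete (table and column names are made up):

          -- Incremental: four statements replayed in order.
          ALTER TABLE dbo.BigTable ADD ColA int NULL;   -- step 1
          ALTER TABLE dbo.BigTable ADD ColB int NULL;   -- step 2
          ALTER TABLE dbo.BigTable ADD ColC int NULL;   -- step 3
          ALTER TABLE dbo.BigTable DROP COLUMN ColB;    -- step 4

          -- Diff: one statement producing the same end state.
          ALTER TABLE dbo.BigTable ADD ColA int NULL, ColC int NULL;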

        So in the world of schemas, the "diff" way seems to be better for
        our BIG tables.

        But a migration is not only about changing some schema
        information. You also want to include complex conversion
        functions, like converting the content of one column in the OLD
        schema into two columns in the NEW one. In the "diff" world you
        have to write custom conversions for each patch: the migration
        from 1.1 to 1.3 is different from the one from 1.2 to 1.3. There
        is no 100% repeatability.
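
        For example, such a hand-written conversion step could look like
        this (hypothetical Customer table, splitting FullName into two
        new columns):

          -- Migration with data conversion, not just schema change.
          ALTER TABLE dbo.Customer ADD
              FirstName nvarchar(50) NULL,
              LastName  nvarchar(50) NULL;
          GO
          -- Split on the first space; single-word names go to FirstName.
          UPDATE dbo.Customer
          SET FirstName = LEFT(FullName,
                               CHARINDEX(' ', FullName + ' ') - 1),
              LastName  = LTRIM(SUBSTRING(FullName,
                               CHARINDEX(' ', FullName + ' '), 4000));
          GO
          ALTER TABLE dbo.Customer DROP COLUMN FullName;
          GO

        A pure schema diff between the two versions would only see "drop
        FullName, add FirstName/LastName" and would lose the data; the
        UPDATE in the middle is exactly the part that has to be written
        by hand for each patch.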

        At the moment I'm thinking about hybrid approaches, but I don't
        think one exists. Are there ways to optimize one of these two
        approaches with the ideas of the other?

        I have an idea for that. At first I thought it would be too
        time-consuming to analyse the "changelog" path for optimizations,
        but now I think it could be feasible. Maybe I could order the
        changes into a graph-like structure, where I can look at the
        changed objects from top to bottom. There I would find the changes
        which are potentially optimizable. It isn't important to eliminate
        every unoptimized change type; it would already be a great benefit
        if it handled the column problem and a few others.
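
        A toy version of what I have in mind, with the changelog modelled
        as rows (all names made up):

          -- Each changelog step as a row; a later DROP cancels an
          -- earlier ADD of the same column.
          CREATE TABLE #ChangeLog (
              Step       int,
              TableName  sysname,
              ColumnName sysname,
              Op         varchar(4)   -- 'ADD' or 'DROP'
          );
          INSERT INTO #ChangeLog VALUES (1, 'BigTable', 'ColA', 'ADD');
          INSERT INTO #ChangeLog VALUES (2, 'BigTable', 'ColB', 'ADD');
          INSERT INTO #ChangeLog VALUES (3, 'BigTable', 'ColC', 'ADD');
          INSERT INTO #ChangeLog VALUES (4, 'BigTable', 'ColB', 'DROP');

          -- Net effect: only ADDs with no later DROP survive, so ColB
          -- never has to touch the big table at all.
          SELECT a.TableName, a.ColumnName
          FROM #ChangeLog a
          WHERE a.Op = 'ADD'
            AND NOT EXISTS (SELECT 1 FROM #ChangeLog d
                            WHERE d.Op = 'DROP'
                              AND d.TableName  = a.TableName
                              AND d.ColumnName = a.ColumnName
                              AND d.Step > a.Step);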

        What do you think?

        Your feedback is appreciated.

        Thank you all for this great thread, with many interesting
        standpoints on these approaches to migration.

        Best regards.

        Jens

        --- In agileDatabases@yahoogroups.com, "tnabil76" <tarekmnabil@...> wrote:
        >
        > Hi,
        >
        > Thank you all for your responses, I didn't really expect that much feedback.
        >
        > I must say I'm overwhelmed by the consensus on the incremental approach, which I had already completely dismissed. I have to clarify that we are currently using the incremental approach and that my research of alternate approaches is driven by what I felt were shortcomings of that approach.
        >
        > The thing is, I believe there are cases which cannot be dealt with in this fashion. For example, if there was a decision to drop a column and then this decision was canceled, there would be a script to drop the column and another one to restore it. If that is done incrementally, then how will one be able to restore the data in the restored column?
        >
        > I believe if I dig deep, I can come up with numerous other similar examples.
        >
        > Also, even if we assume that the incremental approach guarantees the correctness of the "roll forward" scripts, it doesn't do the same for the "roll back" scripts. If your rollback scripts are not perfect, then how can you revert in case one of the changes you made caused problems?
        >
        > Another issue is how to version control the incremental scripts. What we've been doing is create sub-folders for every version which contains the incremental scripts for that particular version, e.g. from 0.10.1.1 to 0.10.1.2. I see some issues with this approach:
        > 1. It seems to be in direct contrast with most version control practices where versions should be expressed as tags, not subfolders of the branch.
        > 2. Another problem is at deployment time, you have to manually come up with a list of those scripts, which is error prone.
        >
        > I feel that this is going to be a long discussion thread, although hopefully, quite a useful one.
        >
        > Thank you again for your help.
        >
        >