
Re: [extremeperl] refactoring databases

  • Matthew Albright
    Feb 17, 2005
      Hmmm, that sounds very similar to something I've done for a while now (at two
      different jobs). I don't recall posting about it, but I (or one of my
      co-workers) might have...

      It's easier to show some code that would work (as opposed to describing it in
      words):

      sub init_schema_change { # Call this once, when the database is created
          $db->create_table(schema_change => [
              'id serial primary key',
              'name varchar not null', # unique label for each schema change
              'ts timestamp not null default now()',
          ]);
      }

      sub schema_change {
          my ($name, $change_sub) = @_;

          # Skip changes this database has already applied
          return if $db->scalar(
              "select count(*) from schema_change where name = ?",
              $name,
          );

          # Apply the change and record it in the same transaction, so a
          # failed change leaves no record and can be retried after a fix
          $db->run_in_transaction(sub {
              $change_sub->($db);
              $db->execute("insert into schema_change (name) values (?)", $name);
          });
      }

      So, this would be called like this, from a file we call update_db.pl:

      schema_change('add foo.bar and baz.foo_id' => sub {
          my ($db) = @_;
          $db->execute('alter table foo add column bar integer not null');
          $db->execute('alter table baz add column foo_id integer');
      });
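
      The $db object these examples assume isn't shown anywhere here; as a rough
      sketch, assuming plain DBI against PostgreSQL (whose transactional DDL is
      what lets a failed ALTER roll back cleanly), the helpers might look like
      this. The scalar method is renamed scalar_query below, since scalar is a
      Perl builtin:

      use DBI;

      my $dbh = DBI->connect('dbi:Pg:dbname=myapp', '', '',
          { RaiseError => 1, AutoCommit => 1 });

      # Return the first column of the first row of a query
      sub scalar_query {
          my ($sql, @bind) = @_;
          my ($value) = $dbh->selectrow_array($sql, undef, @bind);
          return $value;
      }

      # Run a statement with bind values
      sub execute {
          my ($sql, @bind) = @_;
          $dbh->do($sql, undef, @bind);
      }

      # Commit if the sub succeeds, roll back and re-die if it fails
      sub run_in_transaction {
          my ($change) = @_;
          $dbh->begin_work;
          eval { $change->(); $dbh->commit; 1 } or do {
              my $err = $@;
              $dbh->rollback;
              die $err;
          };
      }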

      What schema_change gives you is a table that grows over time, recording
      exactly which schema changes have been applied to the database (and when).
      You also have a growing executable update_db.pl file containing a series of
      schema changes that will each be executed only once, and that can be run at
      upgrade time to make sure the database is up to date. And if any part of a
      schema change fails, the transaction rolls back and the row is not inserted
      into the schema_change table, so you can try again after you fix your code.
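
      Put together, update_db.pl is just a sequence of those calls, oldest first.
      A sketch, with made-up change names:

      #!/usr/bin/perl
      use strict;
      use warnings;

      # Everything older than this point was rolled into the baseline
      # schema dump and has been deleted from this file.

      schema_change('add customer.email' => sub {
          my ($db) = @_;
          $db->execute('alter table customer add column email varchar');
      });

      schema_change('add foo.bar and baz.foo_id' => sub {
          my ($db) = @_;
          $db->execute('alter table foo add column bar integer not null');
          $db->execute('alter table baz add column foo_id integer');
      });

      # New changes get appended here, so any database, however old,
      # replays only the changes it has not yet recorded.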

      So, you can ship out this update_db.pl file to your customers and be confident
      that no matter how old their database schema is, it will be up to date after
      this file is run.

      And like you said, every so often you check all your customers and figure
      out the oldest version of the code still out in the wild that has a chance
      of upgrading in the future. Get a test system, upgrade it to that version,
      dump the database schema, and use that dump as the new starting point,
      deleting everything older than that point from your update_db.pl file.
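
      That roll-up can be scripted; a hypothetical sketch, assuming PostgreSQL
      and pg_dump:

      # Run against a test database upgraded to the oldest version
      # still in the wild.
      system('pg_dump --schema-only myapp > baseline.sql') == 0
          or die "pg_dump failed: $?";

      # baseline.sql becomes the new starting point for fresh installs,
      # and every schema_change call older than that point is deleted
      # from update_db.pl, since the baseline already includes them.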

      If anyone is interested, I can share some additional thoughts on data migration
      landmines to watch out for...

      matt

      --- Perrin Harkins <perrin@...> wrote:

      > Hi Rob,
      >
      > I was talking to some of my co-workers the other day about managing
      > changes to databases on a system that has a version in production
      > already. There was a thread I remember reading, possibly on the
      > mod_perl list, and possibly by you, about a technique for managing
      > schema changes by treating everything after the initial design as an
      > update script, i.e. always using ALTER TABLE and the like. I recall
      > someone complaining about how that would become hard to manage and there
      > was discussion of rolling up all the changes every now and then into a
      > new starting point.
      >
      > Does this sound like something you posted? I can't seem to find it, and
      > I wanted to re-read it. Even if it wasn't you, any advice about
      > managing database changes would be welcome, since I remember you saying
      > you made lots of ongoing schema changes on your projects even after
      > launching them.
      >
      > - Perrin
      >
      >