
RE: [XP] Re: Continuous Integration and the database

  • Olson, Curtis B
    Message 1 of 18, Apr 1, 2005
      Hi Carlton.

      > > The objection comes from: how do we maintain the SQL script used to
      > > create the database automatically? The objection comes from copying
      > > and pasting the items into the central script. That process is too
      > > manual and prone to error.

      > Personally, I think
      > separate DDL files cause confusion and I like the idea of one
      > single script which holds everything. It tends to be easier
      > to invoke and produces fewer "I forgot to update
      > that file" or "I forgot to add that file to the build" events.

      > The goal of the single
      > script was to make it easy for developers to have a way to
      > update their sandbox with the version on the CI server. To
      > me, having multiple sql DDL files makes it hard to update the
      > DB in my local environment.

      We've experienced this too with DDL, and I think the best approach is
      Once And Only Once: don't copy and paste from multiple small files into
      one big file, but just have the master script do nothing more than call
      the individual small files.
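
      For example (the file names here are invented, just to illustrate), the
      master script would contain nothing more than calls to the individual
      files:

          -- create_all.sql: calls each small, per-object DDL file.
          -- In SQL*Plus, @@ runs a script relative to the directory
          -- of the calling script.
          @@customer.sql
          @@orders.sql
          @@active_customers_vw.sql
          @@orders_audit_trg.sql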

      But if you are going to generate one huge DDL script from the database
      rather than have the database generated from the DDL, you have no
      choice. I think this is a smell, but some of my colleagues find it to
      be a more pragmatic way to produce DDL. I see it as a smell because if
      changes to the database are applied by someone working interactively,
      then the script may not get regenerated, and will therefore be out of
      date. Also, it's one huge friggin' file. Furthermore, the temptation
      exists to neglect placing said huge friggin' file under source control -
      I've heard arguments stating that there's no reason to put it under
      source control because the most current version can be regenerated
      straight from the database at any time (at any time other than when the
      system crashes and is unrecoverable, I reply, but the response is that
      with proper backups this cannot happen).

      I like the OAOO thing - having folks create a single small file when
      needed is of sufficiently small pain that most folks will do it, and
      it's not far from their normal flow of working.

      Cheers,
      Curtis
    • banshee858
      Message 2 of 18, Apr 1, 2005
        I have no strong objections about the small files. I agree that the
        big file is a hassle.

        However, do you still need another separate file for creating roles,
        execute permissions and creating constraints? Or are they added to
        the small file that makes the most sense? Also, do the small files do a
        drop of the object before the object is created?

        Carlton
      • Olson, Curtis B
        Message 3 of 18, Apr 1, 2005
          > However, do you still need another seperate file for creating
          > roles, execute permissions and creating constraints? Or are
          > they added to the small file that makes most sense?

          We generally tend to create a file for each object. This matches our
          flow: we need a new object and are about to type the DDL into (in our
          case) Oracle SQL*Plus, but usually it's less error-prone to write it in
          Notepad first and then copy and paste the DDL into SQL*Plus. Hey, might
          as well save that Notepad file while we're at it - something like
          MYOBJECT.SQL. Related information, such as permissions and constraints,
          would probably go into that file right then, because now is when we need
          them.
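
          Roughly, such a per-object file might look like this (the object,
          columns and role are invented, just for illustration):

              -- MYOBJECT.SQL
              CREATE TABLE myobject (
                  id    NUMBER(10)     NOT NULL,
                  name  VARCHAR2(100)  NOT NULL,
                  CONSTRAINT myobject_pk PRIMARY KEY (id)
              );

              -- related permissions go in the same file,
              -- since now is when we need them
              GRANT SELECT, INSERT, UPDATE, DELETE ON myobject TO app_role;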

          We tried keeping a TRIGGERS.SQL file and a VIEWS.SQL file, but honestly
          this approaches the one huge friggin' file approach (instead it's about
          a dozen sort-of-big files). It does, however, make it easy to track
          where specific scripts are. When using lots of small, roughly
          per-object files, one needs to use good file naming.


          > Also, do
          > the small files do a drop of the object before the object is created?

          No, we just have the creation statements. I'm guessing your next
          question is "how do you change the scripts without dropping the
          objects". We change the creation script file, but in SQL*Plus we type
          in ALTER statements. Not ideal, but it's usually still within normal
          flow to work this way.

          Some objects (triggers, views, synonyms) can use a CREATE OR REPLACE
          statement which is nifty, but with tables it's either a CREATE statement
          or an ALTER statement. Wish all objects could use CREATE OR REPLACE.
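
          For what it's worth, the difference looks something like this
          (object and column names invented):

              -- views, triggers, synonyms: the whole script can simply be re-run
              CREATE OR REPLACE VIEW active_customers AS
                  SELECT * FROM customer WHERE status = 'ACTIVE';

              -- tables: the creation script gets edited, but the live object
              -- gets changed with an ALTER typed into SQL*Plus
              ALTER TABLE customer ADD (last_login DATE);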

          Cheers,
          Curtis
        • Brad Appleton
            Message 4 of 18, Apr 3, 2005
            Interesting timing ... just after the "Continuous Integration and the
            Database Thread" settled down, I saw someone on a completely unrelated
            forum mention they were going to start developing an OpenSource tool
            to address the lack of software version-control tool functionality for
            databases and the repeated frustration of having to use DDL/SQL
            scripts to (recreate) the database/schema.

            The post was on CMCrossroads.com at
            <http://www.cmcrossroads.com/ubbthreads/showflat.php?Number=42162>

            They started a related Webpage on the CMWiki at
            <http://www.cmcrossroads.com/cgi-bin/cmwiki/bin/view.cgi/CM/DaversySpecs>
          • eligolovinsky
              Message 5 of 18, Apr 8, 2005
              --- In extremeprogramming@yahoogroups.com, "Brad Appleton"
              > Interesting timing ... just after the "Continuous Integration
              > and the Database Thread" settled down, I saw someone on a
              > completely unrelated forum mention they were going to
              > start developing an OpenSource tool to address the lack of
              > software version-control tool functionality for databases and the
              > repeated frustration of having to use DDL/SQL scripts to
              > (recreate) the database/schema.
              >
              > The post was on CMCrossroads.com at
              > <http://www.cmcrossroads.com/ubbthreads/showflat.php?Number=42162>
              >
              > They started a related Webpage on the CMWiki at
              > <http://www.cmcrossroads.com/cgi-bin/cmwiki/bin/view.cgi/CM/DaversySpecs>

              Thanks Brad for mentioning my efforts here.

              As I've seen in a few discussions and often felt myself, the way we
              usually manage database versions is just not good enough. We've got
              most of the issues with managing and versioning source pinned down,
              but somehow the aspect of managing the versions of databases remains
              mostly untreated. But most business applications need a database,
              sometimes several databases at different locations and of different
              versions and vendors. Martin Fowler's Evolutionary Database Design
              has some very good points on database refactoring, but reading it I
              felt that without the proper tools, his method might be problematic.
              Good DBAs are a scarce and expensive resource, and having a DBA
              close by for instant code review and integration is just not always
              possible.

              The basic idea that can allow us to move further on the path to
              database SCM is to devise a neutral language (probably an XML of
              some sort) that describes database structures. Files written in
              such a language could be held in source control, and creation
              scripts could be generated from those files. Furthermore, by
              comparing two such files, we could generate a script that moves a
              database from one version to another, and by creating several such
              diffs we could move several versions forwards or backwards.

              But that's not all there is to it. Databases are built to hold
              data, and in that they are very different from source code. A piece
              of source code can be replaced, recompiled and deployed, but such a
              technique cannot be applied to databases. Suppose we are
              refactoring two tables in a 1-to-1 relationship and want to move a
              column from TABLE1 to TABLE2. Using current techniques, we would
              write a script that first adds the new column to TABLE2, then runs
              an UPDATE command to copy the data into the new column, and then
              drops the column in TABLE1. What we need is a way to automate that
              process, a way to build upgrade scenarios that combine comparisons
              between db-structure files with custom-written update commands to
              move a database from one version to another.
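
              As a sketch (table and column names are invented, and I'm assuming
              both tables share an ID column), the hand-written version of that
              upgrade script would be something like:

                  -- move a column from TABLE1 to TABLE2 (1-to-1 relationship)
                  ALTER TABLE table2 ADD (moved_col VARCHAR2(50));

                  UPDATE table2 t2
                     SET moved_col = (SELECT t1.moved_col
                                        FROM table1 t1
                                       WHERE t1.id = t2.id);

                  ALTER TABLE table1 DROP COLUMN moved_col;

              The two ALTER statements are what a comparison between
              db-structure files should be able to generate; the UPDATE in the
              middle is the custom-written part.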

              It's an open source project because it has to support many
              databases and many SCM tools, and it will require some community
              effort to make that happen.

              I would really like to hear your comments on my ideas, both in terms
              of requirements of such a tool and in terms of implementation
              recommendations. This tool (called Daversy - Database Versioning
              System) could make all of our lives much easier, but if I do it
              on my own, I'm bound to miss a few critical spots.

              My up-to-date thoughts on this are on a wiki here:
              <http://www.cmcrossroads.com/cgi-bin/cmwiki/bin/view.cgi/CM/DaversySpecs>

              Hope to hear from you.

              Eli Golovinsky.