Navigating to deeply-nested directories (was: Re: [hackers-il] enhancing the 'Unix IDE')

  • Omer Zak
    Message 1 of 15 , Apr 28, 2004
      The problem of navigating through deeply-nested directories is a general
      one, which affects interactive shell work, filename selection dialogs
      (such as Open and Save As) and probably other operations, which escape
      my memory at the moment.

      I suggest augmenting bash's file completion mechanism as follows:
      in addition to the command history, bash would maintain a list of
      recently used file and directory names.

      When tabbing (or using shift-tab), the ordinary file completion
      mechanism would work as usual.
      When using ctrl-tab, bash would invoke an external process which
      (when running under X) displays a dialog suggesting names based upon
      the recently used file and directory names. When the dialog closes,
      the external process returns the selected filename to bash.

      The file selection widgets used by applications should also allow
      activating the external process, given the appropriate key bindings.
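
      A minimal sketch of the ctrl-tab idea, assuming a modern bash (which
      provides "bind -x" and the READLINE_LINE/READLINE_POINT variables) and
      zenity as the external dialog program; the Ctrl-T binding and the
      ~/.recent_files list are stand-ins for illustration only:

      # ~/.recent_files: a hypothetical list of recently used names,
      # one path per line, newest first.
      _pick_recent_file() {
          local choice
          choice=$(zenity --list --column="Recent files" \
                          < "$HOME/.recent_files") || return
          READLINE_LINE="${READLINE_LINE:0:READLINE_POINT}${choice}${READLINE_LINE:READLINE_POINT}"
          READLINE_POINT=$(( READLINE_POINT + ${#choice} ))
      }
      # Ctrl-T stands in here for the suggested ctrl-tab binding.
      bind -x '"\C-t": _pick_recent_file'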


      guy keren wrote:

      > the first thing i encountered then, was the tiring job of switching
      > between directories of large projects, or performing tedious file-name
      > completions through several layers of directories. this has become even
      > more prominent lately, now that i write in java - where the files on disk
      > are layed in a rather deep hierarchical structure.


      --- Omer
      My opinions, as expressed in this E-mail message, are mine alone.
      They do not represent the official policy of any organization with which
      I may be affiliated in any way.
      WARNING TO SPAMMERS: at http://www.zak.co.il/spamwarning.html
    • Tzafrir Cohen
      Message 2 of 15 , Apr 28, 2004
        On Thu, Apr 29, 2004 at 02:03:42AM +0300, guy keren wrote:
        > ---------------------------------------------------------------------
        > # 'fvi' will open vi with a file (found anywhere under the $PROJ_HOME
        > directory)# with the given name. useful for 'recursive' file-name
        > completion.
        > alias fvi 'vi `find ${PROJ_HOME} -name \!:1 -print`'

        One note: if you ever feel that the syntax of aliases is too limiting,
        try using functions instead. E.g:

        function fvi() {
            vi `find ${PROJ_HOME} -name "$1" -print`
        }

        > alias frehash 'set act_java_files="("`find ${PROJ_HOME} -name "*.java"
        > -exec basename {} \;`")"'

        Oops, you use tcsh. No function here :-(

        --
        Tzafrir Cohen +---------------------------+
        http://www.technion.ac.il/~tzafrir/ |vim is a mutt's best friend|
        mailto:tzafrir@... +---------------------------+
      • Muli Ben-Yehuda
        Message 3 of 15 , Apr 28, 2004
          On Thu, Apr 29, 2004 at 02:03:42AM +0300, guy keren wrote:

          > if anyone has ideas for other small tools that could be added to my Unix
          > IDE - feel free to share (preferably with the code that composes these
          > tools ;) ).

          Here are a few zero and one-liners I use all the time:

          # every ooo binary knows to open other types
          function ooo {
          /usr/lib/openoffice/program/soffice.bin "$@"
          }

          # I want xlock even on machines which don't have 'xlock'
          # ask which nicely to find xlock for us
          xlock=`which xlock 2>&1 | grep 'no xlock in'`

          # set things up
          if [ "x${xlock}" != "x" ]; then
          alias xlock='xscreensaver-command --lock';
          fi

          - the watch command

          # the names are deceiving - pipe manually to 'xargs rm'

          # kill annoying *~ files
          alias rmtild="find ./ -name \"*~\" -type f"
          alias cleantree="find ./ -name \"*~\" -o -name \"*.rej\" -o -name \"*.orig\" -type f"
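
          As the comments say, these only list the files; to actually delete,
          pipe them to xargs, roughly like this (note that in cleantree the
          trailing -type f binds only to the last -o branch unless the name
          tests are grouped):

          rmtild | xargs rm
          # safer variant with GNU find/xargs: names grouped, NUL-separated
          find . \( -name '*~' -o -name '*.rej' -o -name '*.orig' \) -type f -print0 | xargs -0 rm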

          - the ketchup script (http://www.selenic.com/ketchup/)

          HTH,
          Muli
          --
          Muli Ben-Yehuda
          http://www.mulix.org | http://mulix.livejournal.com/
        • Omer Musaev
          Message 4 of 15 , Apr 29, 2004
            My $0.02.

            These are from shortly before I switched to Python for scripting:


            -------------------- cut here --------------------

            # -*- sh-mode -*-

            #
            # echoes output
            common::echo() {
                echo "$*"
            }


            #
            # echoes output without a trailing newline
            common::necho() {
                printf "$*"
            }

            #
            #
            # ask: receives a prompt,
            # returns the user's answer
            common::ask( ) {
                local prompt="${1}"
                local ans
                if [ -z "${prompt}" ]
                then
                    prompt="Enter yes or no"
                else
                    prompt="${prompt} [y|n] "
                fi

                ans=
                while [ -z "${ans}" ] ; do
                    common::necho "$prompt "
                    read ans
                    case $ans in
                        y*|Y*)
                            return 0
                            ;;
                        n*|N*)
                            return 1
                            ;;
                        *)
                            ans=
                            common::echo "Invalid input"
                            ;;
                    esac
                done
            }

            #
            #
            # quietly: execute the rest of the command line without output
            common::quetly() {
                eval "$* 1>/dev/null 2>&1"
            }

            #
            #
            # undefined: at least one of the arguments has no value
            common::undefined() {
                local ans=
                local var=
                for var in $*
                do
                    local var_val
                    eval "var_val=\$$var"
                    [ -z "${var_val}" ] && ans=1
                done
                [ -n "${ans}" ] && return 0
                return 1
            }

            #
            # defined:
            # all of the arguments have a value
            common::defined() {
                local ans=
                local var=
                for var in $*
                do
                    local var_val
                    eval "var_val=\$$var"
                    [ -z "${var_val}" ] && ans=1
                done
                [ -z "${ans}" ] && return 0
                return 1
            }

            -------------------- cut here --------------------

            HTH

            --
            o.m.


          • Shlomi Fish
            Message 5 of 15 , Apr 29, 2004
              Well, I also use the UNIX IDE, with a different variation than Guy's (terminals
              with bash, gvim, make, perl, perl -d, gdb, etc.), and like it a lot. Until some
              time ago, I indeed repeated a lot of commands, but eventually came up
              with the concept of "bash themes", meaning that I type "Theme $project" and
              get a bash customization suitable for the project. There are several
              projects that I'm involved in, and each one requires its own customizations.

              It works like this. In my .bashrc I have:

              <<<
              function load_common
              {
                  source "$HOME/.bash_themes/common/$1.bash"
              }

              function Theme
              {
                  theme="$1"
                  shift;
                  filename="$HOME/.bash_themes/themes/$theme/source.bash"
                  test -e "$filename" || { echo "Unknown theme" 1>&2 ; return 1; }
                  source "$filename"
              }

              complete -W "$(cat $HOME/.bash_themes/list-of-themes.txt)" Theme
              >>>

              "load_common" is a utility function used by the theme scripts. "Theme"
              actually loads a theme file. The "complete" function makes sure "Theme" has
              tab-auto-completion based on the themes present. (list-of-themes.txt is
              generated by a small script I created for the purpose).
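
              For example, I can then type "Theme svn" (with tab completion on the
              theme name) to load the Subversion theme shown below. The list file can
              be regenerated with something as simple as this one-liner (a guess at
              the idea; the actual script is not shown here):

              ls "$HOME/.bash_themes/themes" > "$HOME/.bash_themes/list-of-themes.txt"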

              Most of my themes so far were pretty simple: define a "$this" variable
              with the root of the working area, and cd to "$this" automatically. This all
              changed when I started working on an elaborate patch to Subversion a while
              ago. As a result my "svn" theme became this monster:

              <<<
              load_common mymake

              this="/home/shlomi/progs/svn/SVN-From-Repos/Peg-Revision/trunk"
              test_dir="$this/subversion/tests/clients/cmdline/"
              test_file="$test_dir/past_loc_tests.py"
              patches_dir="/home/shlomi/progs/svn/SVN-From-Repos/patches"

              Edit_tests()
              {
                  gvim "$test_file"
              }

              Edit_fs()
              {
                  gvim "$this/subversion/libsvn_fs/tree.c"
              }

              gen_patch()
              {
                  (cd $this ;
                   a=$(cd $patches_dir ;
                       ls peg-revision-tng-rev* |
                           sed 's/^peg-revision-tng-rev\([0-9]\+\).patch/\1/' |
                           sort -n |
                           tail -1
                      ) ;
                   let a++;
                   svn diff > $patches_dir/peg-revision-tng-rev"$a".patch
                  )
              }

              Run_svnserve()
              {
                  (cd $this;
                   subversion/svnserve/svnserve -d -r `pwd`/subversion/tests/clients/cmdline
                  )
              }

              cd $this
              >>>

              I could perhaps add more stuff there, but so far the shell history has
              been sufficient.

              As for gvim customizations, I have this:

              <<<
              " map <F8> :r /home/shlomi/Docs/lecture/html/texts/mycode.html<CR>
              map <F8> :r /home/shlomi/Docs/lecture/Gimp/slides/mydemo.html<CR>
              " map <F4> :r /home/shlomi/Docs/lecture/Gimp/slides/menupath.html<CR>
              map <F4> :r /home/shlomi/Docs/Univ/Homework/SICP/homework/SICP/hw5/mycode.txt<CR><CR>
              " map <F6> :r /home/shlomi/Docs/lecture/Gimp/slides/h3_notbold.html<CR>

              " Set Incremental Search (I-Search)
              set incsearch

              " set guifont=-biznet-courier-medium-r-normal-*-*-140-*-*-m-*-iso8859-2
              " set guifont=-adobe-courier-medium-r-normal-*-*-140-*-*-m-*-iso8859-1

              " map <F3> 0"5y$:!xmms -e '<C-R>5'<CR><CR>
              " map <S-F3> 0"5y$:!xmms '<C-R>5'<CR><CR>

              source ~/conf/Vim/xmms.vim

              " map <F3> 0"5y$ji<CR><ESC>ki<C-R>5<ESC>:s/'/'\\''/ge<CR>0"5y$:!xmms -e
              '<C-R>5'<CR><CR>ddk
              " map <S-F3> 0"5y$ji<CR><ESC>ki<C-R>5<ESC>:s/'/'\\''/ge<CR>0"5y$:!xmms
              '<C-R>5'<CR><CR>ddk

              an 50.740 &Syntax.Convert\ to\ &WML :so
              $VIMRUNTIME/syntax/2html.vim<CR>:%!wml_safe.pl<CR>

              let @s = ":r sect_template.xml\n"

              " Expand the syntax menu automatically
              let do_syntax_sel_menu = 1
              runtime! synmenu.vim
              " aunmenu &Syntax.&Show\ individual\ choices

              let html_use_css = 1

              autocmd BufNewFile,BufRead ~/progs/svn/*.[ch] so ~/conf/Vim/svn-dev.vim
              autocmd BufNewFile,BufRead ~/Download/unpack/graphics/gimp-cvs/*.[ch] so ~/conf/Vim/svn-dev.vim

              so ~/conf/Vim/perl-test-manage.vim

              autocmd BufNewFile,BufRead *.t set filetype=perl
              autocmd BufNewFile,BufRead *.t map <F3> :call Perl_Tests_Count()<CR>
              autocmd BufNewFile,BufRead ~/Download/unpack/graphics/*.pdb set filetype=perl

              set guifont=Bitstream\ Vera\ Sans\ Mono\ 12

              so ~/conf/Vim/hebrew.vim

              autocmd BufNewFile,BufRead ~/Svn/homework/*.tex set encoding=iso8859-8

              " To make sure Python file editing is tabbed according to 2 spaces
              " in the subversion Python files.
              autocmd BufNewFile,BufRead ~/progs/svn/*.py retab 2
              autocmd BufNewFile,BufRead ~/progs/svn/*.py set shiftwidth=2

              >>>

              Best Regards,

              Shlomi Fish



              On Thursday 29 April 2004 02:03, guy keren wrote:
              > after about a decade of programming using the 'Unix IDE' (2-4 terminal
              > windows with shells, vi, etc), i've finally realized i'm not about to
              > transition into using a conventional IDE. thus, i made a decision to make
              > sure my Unix IDE is the best i can have. for that, i decided that whenever
              > i encounter some task that, during development, i already performed
              > hundreads of times in the past, i will automate it in a better manner.

              ---------------------------------------------------------------------
              Shlomi Fish shlomif@...
              Homepage: http://shlomif.il.eu.org/

              Quidquid latine dictum sit, altum viditur.
              [Whatever is said in Latin sounds profound.]
            • Tzafrir Cohen
              Message 6 of 15 , Apr 29, 2004
                Some petty notes:

                Usually it is better to use "$@" instead of "$*" :

                On Thu, Apr 29, 2004 at 11:48:55AM +0300, Omer Musaev wrote:
                >
                > My $0.02
                >
                > Shortly before I had switched to Python for scripting:
                >
                >
                > -------------------- cut here --------------------
                >
                > # -*- sh-mode -*-
                >
                > #
                > # echoes output
                > common::echo() {
                > echo "$*"
                > }

                Here it doesn't really matter
                >
                >
                > #
                > # echoes output without trailing carriage return
                > common::necho() {
                > printf "$*"
                > }

                Here "$*" should indeed be used

                >
                > #
                > #
                > # ask: receives prompt.
                > # returns user answer
                > common::ask( ) {
                > local prompt="${1}"
                > local ans
                > if [ -z "${prompt}" ]
                > then
                > prompt="Enter yes or no"
                > else
                > prompt="${prompt} [y|n] "
                > fi
                >
                > ans=
                > while [ -z "${ans}" ] ; do
                > common::necho "$prompt "
                > read ans
                > case $ans in
                > y*|Y*)
                > return 0
                > ;;
                > n*|N*)
                > return 1
                > ;;
                > *)
                > ans=
                > common::echo "Invalid input"
                > ;;
                > esac
                > done
                > }
                >
                > #
                > #
                > # quetly: execute rest of command line without output
                > common::quetly() {
                > eval "$* 1>/dev/null 2>&1"
                > }

                Why not simply:

                "$@" &>/dev/null

                >
                > #
                > #
                > # undefined: at least one of arguments has no value
                > common::undefined() {
                > local ans=
                > local var=
                > for var in $*

                Here you would theoretically use "$@" (with the quotes), but variable
                names should not contain spaces anyway.

                > do
                > local var_val
                > eval "var_val=\$$var"
                > [ -z "${var_val}" ] && ans=1
                > done
                > [ -n "${ans}" ] && return 0
                > return 1
                > }
                >
                > #
                > # defined
                > # at least one of arguments has a value
                > common::defined() {
                > local ans=
                > local var=
                > for var in $*
                > do
                > local var_val
                > eval "var_val=\$$var"
                > [ -z "${var_val}" ] && ans=1
                > done
                > [ -z "${ans}" ] && return 0
                > return 1
                > }
                >
                > -------------------- cut here --------------------

                --
                Tzafrir Cohen +---------------------------+
                http://www.technion.ac.il/~tzafrir/ |vim is a mutt's best friend|
                mailto:tzafrir@... +---------------------------+
              • Tzafrir Cohen
                Message 7 of 15 , Apr 29, 2004
                  On Thu, Apr 29, 2004 at 02:16:58AM +0300, Omer Zak wrote:
                  > Hello Guy,
                  > Your idea is very right approach to extremely high productivity in
                  > computer use and software development.
                  > I'd like to suggest another approach, of which I thought a lot of time
                  > ago, but didn't get around to implement:
                  >
                  > Augment a shell (such as bash), so that its history will be available to
                  > an external process to analyze. The external process is to be able also
                  > to inject commands (and enjoy some of the shell's services such as tilde
                  > expansion, filename completion, etc.).
                  >
                  > The external process which I have in mind will identify, over a long
                  > time, patterns of repeating commands (also inside applications, if it
                  > knows also to listen to X-Window/KDE/Gnome events).
                  > It will then invoke heuristic methods to automatically construct macros.
                  > The user will then be able to edit, polish, give names and document the
                  > macros which he finds to be most useful.
                  >
                  > Then, whenever the user needs to perform again a frequently-occurring
                  > operation, he'll select the macro from a menu provided by the external
                  > process. A dialog will allow the user to enter any required parameters.

                  That's what a shell history is for.

                  Searching in it using ctrl-r is very useful.

                  BTW: I occasionally automate shell commands in a project using a
                  makefile. This allows me to script a complicated process and allows me
                  to run each step separately. Another advantage of make: good control
                  over built-in variables.

                  Each makefile step should ideally contain only one command. This means
                  that in case a long command succeeds but a subsequent short command
                  fails, you won't have to run that long command again.

                  If the command is a "logical" target, or produces something that is not
                  under the current directory, a "touch $@" at the end of that target is
                  useful.
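
                  A skeleton of that pattern might look like the following (target and
                  command names are invented for illustration; recipe lines must be
                  indented with a tab in a real makefile):

                  fetch.stamp:
                          rsync -az remote:src/ data/   # the long step
                          touch $@                      # record that it succeeded

                  report.txt: fetch.stamp
                          ./process data > $@           # the short step that may fail;
                                                        # re-running make skips the rsync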

                  --
                  Tzafrir Cohen +---------------------------+
                  http://www.technion.ac.il/~tzafrir/ |vim is a mutt's best friend|
                  mailto:tzafrir@... +---------------------------+
                • Gabor Szabo
                  Message 8 of 15 , Apr 29, 2004
                    On Thu, 29 Apr 2004, Omer Musaev wrote:

                    > Shortly before I had switched to Python for scripting:

                    why not show us some of your python code too?

                    Gabor
                  • guy keren
                    Message 9 of 15 , Apr 29, 2004
                      On Thu, 29 Apr 2004, Tzafrir Cohen wrote:

                      > BTW: I occasionally automate shell commands in a project using a
                      > makefile. This allows me to script a complicated process and allows me
                      > to run each step separately. Another atvantage of make: good control
                      > over built-in variables.
                      >
                      > Each makefile step should ideally contain only one command. This means
                      > that in case a long command succeds but a subsequent short command
                      > fails, you won't have to run that long command again.

                      can you give some concrete examples? in particular, examples for tasks
                      that are not too project-specific?

                      --
                      guy

                      "For world domination - press 1,
                      or dial 0, and please hold, for the creator." -- nob o. dy
                    • hackers-il@banoora.net
                      Message 10 of 15 , Apr 30, 2004
                        On Thu, Apr 29, 2004 at 02:03:42AM +0300, guy keren wrote:
                        > # 'fvi' will open vi with a file (found anywhere under the $PROJ_HOME
                        > directory)# with the given name. useful for 'recursive' file-name
                        > completion.
                        > alias fvi 'vi `find ${PROJ_HOME} -name \!:1 -print`'
                        > alias frehash 'set act_java_files="("`find ${PROJ_HOME} -name "*.java"
                        > -exec basename {} \;`")"'
                        > echo "running frehash...wait..."
                        > frehash
                        > complete fvi 'p/1/$act_java_files/'

                        You may want to consider using vim to help you find a file in a given
                        search path:

                        :set path=**
                        :find myfile.java
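
                        (In more recent Vim versions the same setup also gives tab
                        completion on :find; a vimrc sketch, not from the original mail:)

                        set path+=**      " search recursively below the current directory
                        set wildmenu      " show the completion matches in a menu
                        " then:  :find myfi<Tab>   completes to  myfile.java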


                        I find wcd (Wherever Change Directory) very handy as well.

                        kamal
                      • Oleg Goldshmidt
                        Message 11 of 15 , Apr 30, 2004
                          Re: $SUBJ: I've been using the following little bash routines for too many
                          years to count:

                          # search for directories containing pattern:
                          # first call cdsload, then `cds pattern`
                          # (From: Marc Ewing <marc@...>)
                          cds() {
                              if [ $# -ne 1 ]; then
                                  echo "usage: cds pattern"
                                  return
                              fi
                              set "foo" `fgrep $1 $HOME/.dirs`
                              if [ $# -eq 1 ]; then
                                  echo "No matches"
                              elif [ $# -eq 2 ]; then
                                  cd $2
                              else
                                  shift
                                  for x in $@; do
                                      echo $x
                                  done | nl -n ln
                                  echo -n "Number: "
                                  read C
                                  if [ "$C" = "0" -o -z "$C" ]; then
                                      return
                                  fi
                                  eval D="\${$C}"
                                  if [ -n "$D" ]; then
                                      echo $D
                                      cd $D
                                  fi
                              fi
                          }

                          # cdsload is run through crontab every night at 2 am
                          cdsload() { find $HOME -xdev -type d -and -not -name CVS > $HOME/.dirs; }
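
                          (cron cannot call a shell function directly, so the actual
                          crontab entry would repeat the command; a hypothetical line:)

                          0 2 * * * find $HOME -xdev -type d -and -not -name CVS > $HOME/.dirs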

                          dtree () {
                          (cd ${1-.}; pwd)
                          find ${1-.} -type d -print | sort -f | \
                          sed -e "s,^${1-.},," -e "/^$/d" -e \
                          "s,[^/]*/\([^/]*\)$,\`-----\1," -e "s,[^/]*/,| ,g"
                          }

                          ftree () {
                          (cd ${1-.}; pwd)
                          find ${1-.} -path '*/CVS' -prune -o -print | sort -f | \
                          sed -e "s,^${1-.},," -e "/^$/d" -e \
                          "s,[^/]*/\([^/]*\)$,\`-----\1," -e "s,[^/]*/,| ,g"
                          }


                          --
                          Oleg Goldshmidt | pub@...
                        • guy keren
                          Message 12 of 15 , Apr 30, 2004
                            On Fri, 30 Apr 2004 hackers-il@... wrote:

                            > On Thu, Apr 29, 2004 at 02:03:42AM +0300, guy keren wrote:
                            >
                            > > # 'fvi' will open vi with a file (found anywhere under the $PROJ_HOME
                            > > directory)# with the given name. useful for 'recursive' file-name
                            > > completion.
                            > > alias fvi 'vi `find ${PROJ_HOME} -name \!:1 -print`'
                            > > alias frehash 'set act_java_files="("`find ${PROJ_HOME} -name "*.java"
                            > > -exec basename {} \;`")"'
                            > > echo "running frehash...wait..."
                            > > frehash
                            > > complete fvi 'p/1/$act_java_files/'
                            >
                            > You may want to consider using vim to help you find a file in a given
                            > search path:
                            >
                            > set PATH=**
                            > :find myfile.java

                            is there a way to use this feature during file-name completion?
                            the thing is that file names in large projects can get quite long, and
                            file-name completion could be very handy. as far as i see, the 'find'
                            command uses the normal file-name completion, not taking the find path
                            into account (which i consider to be a bug - after all, 'find' searches
                            in this path, so why complete files found under the _current_
                            directory?).

                            also, is there a way to manipulate completion in vim, in the same way
                            that shells allow it? it _looks_ like it can't be done (unless i change
                            gvim's source code, of course). any trick you can think of to manipulate
                            file-name completion to use a list of options i give it?

                            > I find wcd (Wherever Change Directory) very handy as well.

                            now, i was wondering about directory changing. the problem here is that,
                            unlike source files, directory names do tend to collide, and you need to
                            separate them based on their paths. for example, some of my C/C++
                            projects tend to have something like:

                            modules
                                utils
                                    inc
                                    src
                                parse
                                    inc
                                    src
                                net
                                    inc
                                    src
                                ....

                            or another model, such as:

                            modules
                                src
                                    utils
                                    parse
                                    net
                                inc
                                    utils
                                    parse
                                    net
                            which contain ambiguous paths.

                            what i normally do is write a large set of aliases that cd into
                            directories i visit often. in the above case, i will have:

                            utilssrc - cd into src/utils (or utils/src)
                            utilsinc - cd into inc/utils (or utils/inc)

                            and so on. perhaps i should just automate the creation of these aliases -
                            either explicitly (the user will run a command from inside a directory,
                            which will add an alias to an aliases file), or implicitly - based on what
                            omer suggested.
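
                            a rough sketch of the explicit variant, written for bash (the
                            function and file names are invented; a tcsh version would use
                            'alias name "cd /path"' instead):

                            # run "mkcdalias utilssrc" from inside a directory to record
                            # an alias that cd's back into it
                            mkcdalias() {
                                echo "alias $1='cd $PWD'" >> "$HOME/.cd_aliases"
                                . "$HOME/.cd_aliases"
                            }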

                            --
                            guy

                            "For world domination - press 1,
                            or dial 0, and please hold, for the creator." -- nob o. dy
                          • Tzafrir Cohen
                            Message 13 of 15 , May 7, 2004
                              On Fri, Apr 30, 2004 at 03:13:18AM +0300, guy keren wrote:
                              >
                              > On Thu, 29 Apr 2004, Tzafrir Cohen wrote:
                              >
                              > > BTW: I occasionally automate shell commands in a project using a
                              > > makefile. This allows me to script a complicated process and allows me
                              > > to run each step separately. Another atvantage of make: good control
                              > > over built-in variables.
                              > >
                              > > Each makefile step should ideally contain only one command. This means
                              > > that in case a long command succeds but a subsequent short command
                              > > fails, you won't have to run that long command again.
                              >
                              > can you give some concrete examples? in particular, examples for tasks
                              > that are not too project-specific?

                              Here's one example:

                              I had to run some tests on a remote system. Remote as in: slow traffic,
                              and the line may go down unexpectedly. It involved first syncing the data
                              and then running a number of different commands to process them. Some of
                              the processing had to be done remotely; in some cases it was simply
                              easier that way.

                              Originally I had a remote terminal and a local terminal and ran all
                              sorts of commands on both. Then I figured out that I was making too many
                              mistakes this way.

                              The basic scheme of the makefile:

                              Each revision gets its own subdirectory. It has a makefile that has
                              something like:

                              NUM=123
                              include ../tasks.rules

                              I figured that re-defining NUM every time in the command line is
                              error-prone.

                              Some of the data are taken from a mysql database. I could not figure out
                              a simple and reliable way to say "the database has changed", so I ended up
                              making them all depend on "time_stamp". To signal a change in the
                              database I would run "touch time_stamp".

                              Any copy of data to the remote system is done using $(RSYNC), where
                              'RSYNC=rsync -z -e ssh'. Thus I generally don't need to copy data
                              twice:

                              copy_somedata: local_file
                                      $(RSYNC) local_file $(REMOTE_DIR)/whatever
                                      touch $@

                              Any future remote processing that depends on the above data being
                              available remotely simply has to depend on copy_somedata. Thus if I run
                              'make' again in that directory I don't even have the overhead of rsync.
                              Also, suppose that local_file has been re-created but is exactly the
                              same: rsync now saves me most of the network overhead even though the
                              target is re-run.

                              A local command would look like:

                              output: dependencies
                                      $(SOME_COMMAND)

                              Make's error handling is pretty simple: if anything goes wrong, call the
                              operator. Once I have fixed whatever was wrong, I don't want to run
                              unnecessary commands again. If you actually want to re-run a certain
                              stage you can use 'make -W dependency'. 'make -n' is also handy to see
                              what is about to be run before it is run.


                              A remote command sequence is usually run as:

                              do_remote_something: copy_somedata other_task
                                      $(SSH) $(REMOTE_HOST) ' set -e; \
                                          command1 ; \
                                          command2 ; \
                                          for file in $(PATH)/a*b; do \
                                              operation of $$file ; \
                                              another operation on $$file ; \
                                          done ; \
                                      '
                                      touch $@

                              ('$$var': make will turn this into '$var' and the remote shell will
                              expand 'var'. '$(VAR)': make will expand 'VAR'.)

                              Generally if anything goes wrong the ssh command will exit and return an
                              unsuccessful status. Note, however, that if there was something wrong
                              in 'command2' and you need to re-run the target, command1 will have to
                              be re-run. If command1 doesn't take long to run, this might still be faster
                              than running it in a separate ssh connection. In any case you should try
                              to avoid putting any command after a command that takes long to execute.


                              As for 'good control over variables':
                              If I want to override a variable such as REMOTE_HOST from the
                              command-line I simply need to run:

                              make REMOTE_HOST=foo

                              But many parameters tend to be revision-specific. It makes sense to put
                              them in the makefile:

                              NUM=123
                              REMOTE_HOST=foofoo
                              include ../tasks.rules

                              This means I need to allow overriding them in tasks.rules:

                              Instead of:

                              REMOTE_HOST=default_remote_host

                              put:

                              ifndef REMOTE_HOST
                              REMOTE_HOST=default_remote_host
                              endif

                              (Any better alternative to that?)
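
                              For what it's worth, GNU make's conditional assignment operator does
                              roughly the same thing in one line:

                              REMOTE_HOST ?= default_remote_host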

                              Actually you often have a number of different revisions that require
                              different parameters. What I do is have in the makefile:

                              NUM=123
                              include ../special.rules

                              where special.rules is something like:

                              VAR1=val1
                              VAR2=val2
                              include ../tasks.rules

                              --
                              Tzafrir Cohen +---------------------------+
                              http://www.technion.ac.il/~tzafrir/ |vim is a mutt's best friend|
                              mailto:tzafrir@... +---------------------------+