
Re: [hackers-il] enhancing the 'Unix IDE'

  • hackers-il@banoora.net
    Message 1 of 15 , Apr 30, 2004
      On Thu, Apr 29, 2004 at 02:03:42AM +0300, guy keren wrote:
      > # 'fvi' will open vi with a file (found anywhere under the $PROJ_HOME
      > # directory) with the given name. useful for 'recursive' file-name
      > # completion.
      > alias fvi 'vi `find ${PROJ_HOME} -name \!:1 -print`'
      > alias frehash 'set act_java_files="("`find ${PROJ_HOME} -name "*.java"
      > -exec basename {} \;`")"'
      > echo "running frehash...wait..."
      > frehash
      > complete fvi 'p/1/$act_java_files/'

      You may want to consider using vim to help you find a file in a given
      search path:

      :set path=**
      :find myfile.java


      I find wcd (Wherever Change Directory) very handy as well.

      kamal
    • Oleg Goldshmidt
      Message 2 of 15 , Apr 30, 2004
        Re: $SUBJ: I've been using the following little bash routines for too many
        years to count:

        # search for directories containing pattern:
        # first call cdsload, then `cds pattern`
        # (From: Marc Ewing <marc@...>)
        cds() {
            if [ $# -ne 1 ]; then
                echo "usage: cds pattern"
                return
            fi
            set "foo" `fgrep $1 $HOME/.dirs`
            if [ $# -eq 1 ]; then
                echo "No matches"
            elif [ $# -eq 2 ]; then
                cd $2
            else
                shift
                for x in $@; do
                    echo $x
                done | nl -n ln
                echo -n "Number: "
                read C
                if [ "$C" = "0" -o -z "$C" ]; then
                    return
                fi
                eval D="\${$C}"
                if [ -n "$D" ]; then
                    echo $D
                    cd $D
                fi
            fi
        }

        # cdsload is run through crontab every night at 2 am
        cdsload() { find $HOME -xdev -type d -and -not -name CVS > $HOME/.dirs; }
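
        Cron can't call a shell function directly, so the crontab entry
        presumably points at a small standalone script with the same
        one-liner; the ~/bin/cdsload path below is just an example:

        # rebuild ~/.dirs every night at 2 am
        0 2 * * * $HOME/bin/cdsload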

        dtree () {
            (cd ${1-.}; pwd)
            find ${1-.} -type d -print | sort -f | \
                sed -e "s,^${1-.},," -e "/^$/d" -e \
                    "s,[^/]*/\([^/]*\)$,\`-----\1," -e "s,[^/]*/,| ,g"
        }

        ftree () {
            (cd ${1-.}; pwd)
            find ${1-.} -path '*/CVS' -prune -o -print | sort -f | \
                sed -e "s,^${1-.},," -e "/^$/d" -e \
                    "s,[^/]*/\([^/]*\)$,\`-----\1," -e "s,[^/]*/,| ,g"
        }


        --
        Oleg Goldshmidt | pub@...
      • guy keren
        Message 3 of 15 , Apr 30, 2004
          On Fri, 30 Apr 2004 hackers-il@... wrote:

          > On Thu, Apr 29, 2004 at 02:03:42AM +0300, guy keren wrote:
          >
          > > # 'fvi' will open vi with a file (found anywhere under the $PROJ_HOME
          > > # directory) with the given name. useful for 'recursive' file-name
          > > # completion.
          > > alias fvi 'vi `find ${PROJ_HOME} -name \!:1 -print`'
          > > alias frehash 'set act_java_files="("`find ${PROJ_HOME} -name "*.java"
          > > -exec basename {} \;`")"'
          > > echo "running frehash...wait..."
          > > frehash
          > > complete fvi 'p/1/$act_java_files/'
          >
          > You may want to consider using vim to help you find a file in a given
          > search path:
          >
          > :set path=**
          > :find myfile.java

          is there a way to use this feature during file-name completion?
          the thing is that file names in large projects can get quite long, and
          file-name completion could be very handy. as far as i see, the ':find'
          command uses the normal file-name completion, not taking the find path into
          account (which i consider to be a bug - after all, ':find' searches in this
          path, so why complete files found under the _current_ directory?)

          also, is there a way to manipulate completion in vim, in the same way that
          shells allow it? it _looks_ like it can't be done (unless i change gvim's
          source code, of course). any trick you can think of to manipulate file-name
          completion to use a list of options i give it?
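
          maybe a newer vim would allow a user command with custom completion,
          along these lines (the ProjFiles function and the :Fvi command are
          made-up names, and this assumes a vim that supports 'customlist'
          completion for user-defined commands):

          " complete the argument against files found anywhere under $PROJ_HOME
          function! ProjFiles(ArgLead, CmdLine, CursorPos)
              return split(globpath($PROJ_HOME, '**/' . a:ArgLead . '*'), "\n")
          endfunction
          command! -nargs=1 -complete=customlist,ProjFiles Fvi edit <args>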

          > I find wcd (Wherever Change Directory) very handy as well.

          now, i was wondering about directory changing. the problem here is that
          unlike source files, directory names do tend to collide, and you need to
          separate them based on their paths. for example, in my C/C++ projects,
          some projects tend to have something like:

                     modules
                    /   |   \
               utils  parse  net ....
                / \    / \    / \
              inc src inc src inc src

          or another model, such as:

                     modules
                     /     \
                   src     inc
                  / | \   / | \
             utils parse net   utils parse net

          which contain ambiguous paths.

          what i normally do is write a large set of aliases that cd into
          directories i visit often. in the above case, i will have:

          utilssrc - cd into src/utils (or utils/src)
          utilsinc - cd into inc/utils (or utils/inc)

          and so on. perhaps i should just automate the creation of these aliases -
          either explicitly (the user will run a command from inside a directory,
          which will add an alias to an aliases file), or implicitly - based on what
          omer suggested.
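
          the explicit variant could be a small shell function along these lines
          (a rough bash sketch; mkcdalias and ~/.cd_aliases are made-up names):

          # append an alias for the current directory to an aliases file
          mkcdalias() {
              local dir=$PWD
              # build a name like "utilssrc" from the last two path components
              local name=$(basename "$(dirname "$dir")")$(basename "$dir")
              echo "alias $name='cd $dir'" >> ~/.cd_aliases
              . ~/.cd_aliases
          }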

          --
          guy

          "For world domination - press 1,
          or dial 0, and please hold, for the creator." -- nob o. dy
        • Tzafrir Cohen
          Message 4 of 15 , May 7, 2004
            On Fri, Apr 30, 2004 at 03:13:18AM +0300, guy keren wrote:
            >
            > On Thu, 29 Apr 2004, Tzafrir Cohen wrote:
            >
            > > BTW: I occasionally automate shell commands in a project using a
            > > makefile. This allows me to script a complicated process and allows me
            > > to run each step separately. Another advantage of make: good control
            > > over built-in variables.
            > >
            > > Each makefile step should ideally contain only one command. This means
            > > that in case a long command succeeds but a subsequent short command
            > > fails, you won't have to run that long command again.
            >
            > can you give some concrete examples? in particular, examples for tasks
            > that are not too project-specific?

            Here's one example:

            I had to run some tests on a remote system. Remote as in: slow traffic,
            and the line may go down unexpectedly. It involved first syncing the data
            and then running a number of different commands to process them. Some of
            the processing had to be done remotely; in some cases it was simply
            easier that way.

            Originally I had a remote terminal and a local terminal and ran all
            sorts of commands on both. Then I figured out that I was making too many
            mistakes that way.

            The basic scheme of the makefile:

            Each revision gets its own subdirectory. It has a makefile that has
            something like:

            NUM=123
            include ../tasks.rules

            I figured that re-defining NUM every time in the command line is
            error-prone.

            Some of the data are taken from a mysql database. I could not figure out
            a simple and reliable way to say "the database has changed", so I ended up
            making them all depend on "time_stamp". To signal a change in the
            database I would run "touch time_stamp".
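
            A rule that regenerates some local data from the database would then
            look something like this (the query, the database name and the
            somedata.txt file name are made up):

            # hypothetical rule: re-run the query whenever time_stamp is newer
            somedata.txt: time_stamp
                    mysql -B -e 'SELECT * FROM results' mydb > $@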

            Any copy of data to the remote system is done using $(RSYNC), where
            'RSYNC=rsync -z -e ssh' . Thus I generally don't need to copy data
            twice:

            copy_somedata: local_file
                    $(RSYNC) local_file $(REMOTE_DIR)/whatever
                    touch $@

            Any future remote processing that depends on the above data available
            remotely, simply has to depend on copy_somedata . Thus if I run 'make'
            again in that directory I don't even have the overhead of rsync.
            Also, suppose that local_file has been re-created but is exactly the
            same. rsync now saves me the network overhead even though the
            target is re-run.

            A local command would look like:

            output: dependencies
                    $(SOME_COMMAND)

            Make's error handling is pretty simple: if anything goes wrong, call the
            operator. Once I have fixed whatever was wrong, I don't want to run
            unnecessary commands. If you actually want to re-run a certain stage you
            can use 'make -W dependency'. 'make -n' is also handy for seeing what is
            about to be run before it is run.


            A remote command sequence is usually run as:

            do_remote_something: copy_somedata other_task
                    $(SSH) $(REMOTE_HOST) ' set -e; \
                        command1 ; \
                        command2 ; \
                        for file in $(PATH)/a*b; do \
                            operation of $$file ; \
                            another operation on $$file ; \
                        done ; \
                    '
                    touch $@

            ('$$var': make will turn this into '$var' and the remote shell will
            expand 'var'. '$(VAR)': make will expand 'VAR'.)

            Generally if anything goes wrong the ssh command will exit and return an
            unsuccessful status. Note, however, that if there was something wrong
            in 'command2' and you need to re-run the target, command1 will have to
            be re-run. If command1 doesn't take long to run, this may still be faster
            than running it in a separate ssh connection. In any case you should try
            to avoid putting any command after a command that takes long to execute.


            As for 'good control over variables':
            If I want to override a variable such as REMOTE_HOST from the
            command-line I simply need to run:

            make REMOTE_HOST=foo

            But many parameters tend to be revision-specific. It makes sense to put
            them in the makefile:

            NUM=123
            REMOTE_HOST=foofoo
            include ../tasks.rules

            This means I need to allow overriding them in tasks.rules:

            Instead of:

            REMOTE_HOST=default_remote_host

            put:

            ifndef REMOTE_HOST
            REMOTE_HOST=default_remote_host
            endif

            (Any better alternative to that?)
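
            GNU make's conditional assignment operator would be one shorter way
            to get much the same effect:

            # assign only if REMOTE_HOST was not already defined
            REMOTE_HOST ?= default_remote_host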

            Actually you often have a number of different revisions that require
            different parameters. What I do is have in the makefile:

            NUM=123
            include ../special.rules

            where special.rules is something like:

            VAR1=val1
            VAR2=val2
            include ../tasks.rules

            --
            Tzafrir Cohen +---------------------------+
            http://www.technion.ac.il/~tzafrir/ |vim is a mutt's best friend|
            mailto:tzafrir@... +---------------------------+