Re: Heuristic Script Building (was: Re: [hackers-il] enhancing the 'Unix IDE')
On Fri, Apr 30, 2004 at 03:13:18AM +0300, guy keren wrote:
>Here's one example:
> On Thu, 29 Apr 2004, Tzafrir Cohen wrote:
> > BTW: I occasionally automate shell commands in a project using a
> > makefile. This allows me to script a complicated process and allows me
> > to run each step separately. Another advantage of make: good control
> > over built-in variables.
> > Each makefile step should ideally contain only one command. This means
> > that in case a long command succeeds but a subsequent short command
> > fails, you won't have to run that long command again.
> can you give some concrete examples? in particular, examples for tasks
> that are not too project-specific?
I had to run some tests on a remote system. Remote as in: slow traffic,
and the line may go down unexpectedly. It involved first syncing the
data and then running a number of different commands to process them.
Some of the processing had to be done remotely.
Originally I had a remote terminal and a local terminal and ran all
sorts of commands on both. Then I figured out that I was making too many
mistakes this way.
The basic scheme of the makefile: each revision gets its own
subdirectory, with a makefile that sets the revision-specific variables
and includes the shared rules. I figured that re-defining NUM every time
on the command line is too error-prone, so it is set once in the
per-revision makefile.
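A per-revision makefile along these lines can be very small (the value
of NUM here is illustrative, not from the original setup):

```makefile
# Per-revision makefile (sketch): pin the revision-specific
# parameters, then pull in the shared rules.
NUM = 1234
include ../task.rules
```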
Some of the data are taken from a MySQL database. I could not figure out
a simple and reliable way to say "the database has changed", so I ended
up making them all depend on "time_stamp". To signal a change in the
database I would run "touch time_stamp".
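The pattern might look like this (the report target and query file are
illustrative names, not the original ones):

```makefile
# Sketch: database-derived targets depend on a sentinel file.
# After changing the database, run `touch time_stamp` so that
# these targets are rebuilt on the next make.
report.out: time_stamp
	mysql testdb < report.sql > report.out
```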
Any copy of data to the remote system is done using $(RSYNC), where
'RSYNC = rsync -z -e ssh'. Thus I generally don't need to copy data that
has not changed:

	$(RSYNC) local_file $(REMOTE_DIR)/whatever
Any future remote processing that depends on the above data being
available remotely simply has to depend on copy_somedata. Thus if I run
'make' again in that directory I don't even have the overhead of rsync.
Also suppose that local_file has been re-created but is exactly the
same: rsync still saves me the network overhead even though the target
is re-run.
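One way to get that behaviour is a stamp-file rule; the `touch $@` line
is my addition here, giving make a local timestamp to compare against:

```makefile
# Sketch: copy_somedata is an ordinary file used as a time stamp.
# If local_file is unchanged, rsync moves almost no data; if nothing
# is newer than the stamp, make skips the rule entirely.
copy_somedata: local_file
	$(RSYNC) local_file $(REMOTE_DIR)/whatever
	touch $@
```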
A local command would look like:
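For instance (a sketch; the target, prerequisites and command are
illustrative, not from the original setup):

```makefile
# A purely local processing step, invalidated by database changes.
processed_data: raw_data time_stamp
	./process raw_data > processed_data
```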
Make's error handling is pretty simple: if anything goes wrong, call the
operator. Once I have fixed whatever was wrong I don't want to run
unnecessary commands. If you actually want to re-run a certain stage you
can use 'make -W dependency'. 'make -n' is also handy to see what is
about to be run before it is run.
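A small self-contained demonstration of those two flags (the makefile
here is a stand-in, not the one from the project):

```shell
cd "$(mktemp -d)"
printf 'out: in\n\t@echo rebuilding out\n' > Makefile
touch in out    # out is up to date, so a plain make would do nothing
make -n         # reports that 'out' is up to date
make -n -W in   # pretend 'in' is newer: prints the recipe without running it
```

`-W` only affects how make evaluates timestamps; combined with `-n`,
nothing on disk changes.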
A remote command sequence is usually run as:
do_remote_something: copy_somedata other_task
	$(SSH) $(REMOTE_HOST) 'set -e; \
	command1 ; \
	command2 ; \
	for file in $(REMOTE_DIR)/a*b; do \
		operation on $$file ; \
		another operation on $$file ; \
	done '
('$$var': make will turn this into '$var', and the remote shell will
expand 'var'. '$(VAR)': make itself will expand 'VAR'.)
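A tiny illustration of the two expansions (the names are made up):

```makefile
GREETING = hello
demo:
	@name=world; echo "$(GREETING), $$name"
```

Running `make demo` prints `hello, world`: make substitutes
`$(GREETING)` before the shell ever runs, while `$$name` reaches the
shell as `$name`.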
Generally if anything goes wrong, the ssh command will exit and return
an unsuccessful status. Note, however, that if something went wrong in
'command2' and you need to re-run the target, command1 will have to be
re-run as well. If command1 doesn't take long, re-running it may still
be faster than running it in a separate ssh connection. In any case you
should avoid putting any command after a command that takes long to
execute.
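One way to arrange that is to give the long step its own stamp-file
target (a sketch; long_command and short_command are placeholders):

```makefile
# The long step records its success in a stamp file, so a failure
# in the short step does not force the long step to run again.
remote_long: copy_somedata
	$(SSH) $(REMOTE_HOST) 'long_command'
	touch $@

remote_short: remote_long
	$(SSH) $(REMOTE_HOST) 'short_command'
```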
As for 'good control over variables':
If I want to override a variable such as REMOTE_HOST from the
command line, I simply need to run:
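Something like the following; make gives command-line assignments
precedence over assignments inside the makefile (the makefile here is a
minimal stand-in):

```shell
cd "$(mktemp -d)"
printf 'REMOTE_HOST = default.host\nshow:\n\t@echo $(REMOTE_HOST)\n' > Makefile
make show                          # prints: default.host
make show REMOTE_HOST=other.host   # prints: other.host
```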
But many parameters tend to be revision-specific. It makes sense to put
them in the per-revision makefile, which means I need to allow
overriding them in the shared task.rules:
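In task.rules that can be done with conditional assignment: `?=` only
assigns if the variable is not already set, so a makefile that assigns
before the include wins (the defaults shown are made up):

```makefile
# task.rules (sketch): defaults that any including makefile may override.
NUM ?= 0
REMOTE_HOST ?= test.example.com
REMOTE_DIR ?= /tmp/tests
RSYNC ?= rsync -z -e ssh
SSH ?= ssh
```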
(Any better alternative to that?)
Actually you often have a number of different revisions that require
different parameters. What I do is have the makefile include a separate
special.rules, which is something like:
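A sketch of that arrangement (the use of `-include`, which does not fail
when the file is absent, is my guess at the pattern, as are the values):

```makefile
# In the shared rules: pull in optional per-revision overrides first.
-include special.rules
```

where special.rules holds the parameters for one particular revision:

```makefile
# special.rules (sketch)
NUM = 42
REMOTE_HOST = rev42.example.com
```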
Tzafrir Cohen +---------------------------+
http://www.technion.ac.il/~tzafrir/ |vim is a mutt's best friend|