- Dan Hollis <goemon@...> wrote:
> In order for honeypot to be useful it needs to be easy to deploy, like
> milter-greylist. E.g. not this spaghetti mess of other honeypot software.
> Hopefully it's something like a few .c files and a single config in /etc
> And it doesn't run as root...

Yes, DST (Distributed Spam Traps) already does all of this. But there is
no doc written yet, and the config file probably needs some refining.
Check out the code here: ftp://ftp.espci.fr/pub/dst

There are basically two items:
dstd is a daemon used to relay spam reports in an NNTP fashion. It can
simply forward reports to other dstd instances, maintain a Berkeley DB
database of seen reports, and feed a DNSRBL by performing DNS updates.
dstc is the spam trap itself. It eats messages on stdin, parses the
headers looking for the sender IP address, and reports it to a dstd. It
can sign the report using an RSA key, and dstd can verify the signature
before feeding a DNSRBL. dstc is run from /etc/aliases or .forward.
The goal was to have something efficient and highly interoperable, so
that it can be easily deployed. Everything is written in C, and it will
work with any MTA: dstc works with delivery to a program, and
blacklisting works through a DNSRBL, something that all MTAs support.
Dependencies: OpenSSL (for RSA signing/verifying), BIND9 with the DNS
resolver (for DNS updates), Berkeley DB, and a POSIX thread library.
Now the only issue is this scenario: spammers discover a spam trap and
start throwing spam at it through a big ISP mail server. How can we
avoid blacklisting the ISP mail server? I think we need some karma
scheme here: track the ratio of spam reports to total messages from a
server and decide whether you still want to receive mail from it.
There are 10 kinds of people in the world: those who understand binary
and those who don't.
- Emmanuel Dreyfus <manu@...> wrote:
> Dan Hollis <goemon@...> wrote:
> > I wonder if this won't cause scalability problems in the future on very
> > large systems (eg 10,000's of users). Multiply by the unique senders and
> > unique IPs...
> If you have such a big system, then you won't object to throwing a few
> more GB of RAM at the problem :)

I don't mean RAM, but rather that the serial db lookups could end up
taking a lot of CPU.
On-disk database would alleviate both memory pressure and lookups (via