* Allow an alternate ID for Authentication-Results
When running a cluster of servers, it is sometimes necessary to present the
same ID in the Authentication-Results header rather than each host's own
hostname, and changing "me" is not always an option (because that has other
effects). Allow an alternate "ar-me" config file.
* Change Authentication-Results "me" file and expand
Per request, rename the Authentication-Results server ID config file to
"me-auth-results" for clarity.
Also, expand its meaning slightly: a value of "none" disables adding or
modifying Authentication-Results headers. This is useful when qpsmtpd runs
as an internal hop and should not override an edge hop that has already
checked SPF/DKIM/etc.
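A minimal sketch of how a plugin might resolve the ID under that scheme,
assuming the usual plugin config accessor ($self->qp->config); the sub name is
illustrative, not the actual implementation:

    # Sketch only: resolve the Authentication-Results server ID, preferring
    # me-auth-results and falling back to "me"; a value of "none" means the
    # caller should leave Authentication-Results headers alone.
    sub auth_results_id {
        my ($self) = @_;
        my $id = $self->qp->config('me-auth-results') || $self->qp->config('me');
        return if defined $id && lc($id) eq 'none';
        return $id;
    }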
Prevent the following error if we receive an invalid RCPT TO (e.g. <"relaytest%nmap.scanme.org">):
Can't call method "qp" on an undefined value at /usr/share/perl5/vendor_perl/Qpsmtpd.pm line 451.
/usr/bin/qpsmtpd-forkserver[17472]: command 'rcpt' failed unexpectedly
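A hedged sketch of the kind of defensive check involved; it assumes
Qpsmtpd::Address->parse() returns undef for addresses it cannot parse, and
check_rcpt_sketch is a made-up name rather than the code path that was
actually patched:

    use Qpsmtpd::Address;
    use Qpsmtpd::Constants;    # exports DENY, OK, etc.

    # Illustrative guard only (not the actual fix), assuming parse() returns
    # undef for input it cannot handle: check the result before calling
    # methods on it, rather than failing the 'rcpt' command unexpectedly.
    sub check_rcpt_sketch {
        my ($raw) = @_;    # the raw RCPT TO argument
        my $rcpt = Qpsmtpd::Address->parse($raw);
        return (DENY, 'could not parse recipient address') if !defined $rcpt;
        return (OK, $rcpt);
    }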
When using the naughty plugin to defer rejection, we lose the name of the
original plugin that caused the reject. That matters especially when we parse
the logterse plugin output to build graphs. With this addition, that
information is preserved and can be retrieved again (see the sketch below).
It doesn't resolve #199, but it does help there.
I'm not sure initializing Qpsmtpd::transaction as {} is a brilliant idea, but
I don't have a better solution for that yet.
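A rough sketch of the idea, assuming connection notes carry the deferred
rejection and that the plugin API's plugin_name() is available; the helper
and note names are illustrative, not necessarily the exact ones used:

    # Illustrative only: alongside the deferred rejection, also record which
    # plugin asked for it, so tools that parse logterse-style logs can report
    # the originating plugin again.
    sub note_naughty_sketch {
        my ($self, $message) = @_;
        $self->connection->notes(naughty        => $message);
        $self->connection->notes(naughty_plugin => $self->plugin_name);
        return;
    }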
Don't load plugins twice.
I'm not exactly sure when that behavior crept in. It was masked by a check
that looked for an already-registered queue plugin and bailed out of
subsequent register_hook runs. I noticed it in testing because I didn't have
a queue plugin loaded. This removes the duplicate calls to register_hook.
* Add caching of the AUTH methods. You can't add new plugins or register new
hooks without restarting QP, so cache the list and avoid regenerating it on
every connection (see the sketch after this list).
* Other PBP changes (early exits, less indentation, fewer unnecessary
parentheses, etc.)
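A minimal sketch of the caching pattern described above, with hypothetical
names (build_auth_mechanism_list is not a real QP method):

    # Illustrative only: plugins and hooks cannot change without restarting
    # QP, so compute the advertised AUTH mechanism list once and reuse it.
    my $cached_auth_mechanisms;

    sub auth_mechanisms_sketch {
        my ($self) = @_;
        $cached_auth_mechanisms ||= [ $self->build_auth_mechanism_list() ];
        return @$cached_auth_mechanisms;
    }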
Destroy the AnyDBM-tied hash after untying
Google's wisdom suggests that leaving the AnyDBM-tied hash around after
untying it can keep data from being flushed to the DBM file. At any rate, the
regression test added here confirms inconsistent behavior when using multiple
instances, which is fixed by destroying the AnyDBM-tied hash after untying.
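A minimal standalone sketch of the tie/untie/destroy pattern this refers to,
using core AnyDBM_File; this shows the general shape, not the plugin's actual
code, and the file name is made up:

    use strict;
    use warnings;
    use AnyDBM_File;
    use Fcntl qw(O_RDWR O_CREAT);

    # Illustrative only: tie, update, untie, then destroy the hash so nothing
    # keeps holding on to the tied object afterwards.
    my %db;
    tie(%db, 'AnyDBM_File', '/tmp/example.dbm', O_RDWR | O_CREAT, 0640)
        or die "could not tie DBM file: $!";
    $db{'192.0.2.1'} = time();
    untie %db;
    undef %db;    # destroy the AnyDBM-tied hash after untying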