
Parallel import

July 18th, 2016

Importing in parallel from a single source is now possible in manitou-mdx since commit 6a860e, under the following conditions:

  • parallelism is driven from the outside: manitou-mdx instances run concurrently, but don’t fork and manage child workers. Workers don’t share anything. Fortunately GNU parallel can easily handle this part.
  • the custom full text indexing is done once the contents are imported, not during the import. The reason is that it absolutely needs a cache for performance, and such a cache wouldn’t work in the share-nothing implementation mentioned above.

The previous post showed how to create a list of all mail files to import from the Enron sample database.

Now, instead of that single list, let’s create one split into chunks of 25000 messages each, to be fed separately to the parallel workers:


$ find . -type f | split -d -l 25000 - /data/enron/list-

The result is 21 numbered files (list-00 through list-20) of 25000 lines each, except for the last one, list-20, which contains 17401 lines.
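The chunking arithmetic can be sanity-checked without the real maildir. Here is a sketch using a synthetic list of the same size (the /tmp/split-demo path is just for illustration):

```shell
# Reproduce the chunking with a synthetic list of 517401 names,
# matching the size of the Enron mailset.
mkdir -p /tmp/split-demo && cd /tmp/split-demo
seq 1 517401 | sed 's|^|./msg-|' > all-files
split -d -l 25000 all-files list-
ls list-* | wc -l     # 21 chunk files: list-00 .. list-20
wc -l < list-20       # 17401 lines in the last chunk
```

20 full chunks of 25000 lines account for 500000 messages, leaving 17401 for the 21st file.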

The main command is essentially the same as before. As a shell variable:

cmd="mdx/script/manitou-mdx --import-list={} \
--import-basedir=$basedir/maildir \
--conf=$basedir/enron-mdx.conf \
--status=33"

Based on this, a parallel import with 8 workers can be launched through a single command:

ls "$basedir"/list-* | parallel -j 8 $cmd

This invocation will automatically launch manitou-mdx processes and feed each one a different list of mails to import (through the --import-list={} argument). It also ensures that, whenever possible, 8 such processes are running, launching a new one as soon as another terminates.
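For illustration, here is a rough bash sketch of the scheduling policy GNU parallel applies; the subshell with `sleep` merely stands in for one manitou-mdx run over one chunk file:

```shell
#!/bin/bash
# Sketch of `parallel -j 8`: keep at most 8 background jobs
# alive, starting a new one whenever a slot frees up.
for list in $(seq -w 0 20); do
  while [ "$(jobs -rp | wc -l)" -ge 8 ]; do
    wait -n          # block until any one job finishes (bash >= 4.3)
  done
  ( sleep 0.05; echo "chunk $list imported" ) &
done
wait                 # let the remaining jobs drain
```

GNU parallel does a lot more (output serialization, retries, remote execution), but the core loop is this keep-N-jobs-busy pattern.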

This is very effective compared to a serial import. Here are the times spent importing the entire mailset (517401 messages) for various degrees of parallelism, on a small server with a Xeon D-1540 @ 2.00GHz processor (8 cores, 16 threads):

[Figure: parallel-mdx — total import time versus number of parallel workers]


Mass-importing case: the Enron mail database

July 12th, 2016

Importing mail messages en masse works best when fiddling a bit with the configuration, rather than pushing the mail messages into the normal feed.

As an example, we’re going to use the mails from Enron, the energy provider that famously collapsed in 2001, amidst a fraud scandal.
The mail corpus has been made public by the judicial process:
http://www.cs.cmu.edu/~enron/

All attachments have been stripped from it, and a further cleaning pass, done by Nuix, removed potentially sensitive personal information.

The archive format is a 423MB .tar.gz file with an MH-style layout:
– one top-level directory per account.
– inside each account, files and directories with mail folders.

It contains 3500 directories for 151 accounts, and a total of 517401 files, taking 2.6GB on disk once uncompressed.

After unpacking the archive, follow these steps to import the mailset from scratch:

1) Create the list of files


$ cd /data/enron/maildir
$ find . -type f > /data/enron/00-list-all

2) Create a database and a dedicated configuration file for manitou-mdx


# Run this as a user with enough privileges to create
# a database (generally, postgres should do)
$ manitou-mgr --create-database --db-name=enron

Create a specific configuration file with some optimizations for mass import:


$ cat enron-mdx.conf
[common]
db_connect_string = Dbi:Pg:dbname=enron;user=manitou

update_runtime_info = no
update_addresses_last = no
apply_filters = no
index_words = no

preferred_datetime = sender

update_runtime_info is set to no to avoid needlessly updating timestamps in the runtime_info table for every imported message.

Setting update_addresses_last to no likewise avoids some unnecessary writes.

apply_filters is again a micro-optimization that avoids querying for filters on every message. On the other hand, it should be left to yes if you happen to have defined filters and want them applied during this import.

index_words is key to performance: running the full-text indexing after the import, instead of during it, makes the import about 3x faster. Moreover, as a separate process, the full-text indexing can itself be parallelized (more on that below).

preferred_datetime set to sender indicates that the date of a message is given by its header Date field, as opposed to the file creation time.

If we were importing into a pre-existing manitou-mdx instance running in the background, we would stop it at this point: because of caching, several instances of manitou-mdx cannot work on the same database, except in specific circumstances (more on that later).

3) Run the actual import command


$ cd /data/enron/maildir
$ time manitou-mdx --import-list=../00-list-all --conf=../enron-mdx.conf

On a low-end server, it takes about 70 minutes to import the 517401 messages with this configuration and PostgreSQL 9.5.
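As a back-of-envelope figure, that works out to roughly 120 messages per second:

```shell
# ~517k messages imported in ~70 minutes
awk 'BEGIN { printf "%.0f msg/s\n", 517401 / (70 * 60) }'   # prints "123 msg/s"
```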

We can check with psql that all messages came in:

$ psql -d enron -U manitou
psql (9.5.3)
Type "help" for help.

enron=> select count(*) from mail;
 count
--------
 517401
(1 row)

4) Run the full text indexing

As it's a new database with no preexisting index, we don't have to worry about existing partitions. We let manitou-mgr index the messages with 4 jobs in parallel:


$ time manitou-mgr --conf=enron-mdx.conf --reindex-full-text --reindex-jobs=4

Output from time:

real 10m41.855s
user 28m22.744s
sys 1m8.476s

So this part of the process takes about 10 minutes.
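Dividing the CPU time (user) by the wall-clock time (real) from the time output above gives a rough effective-parallelism figure for the 4-job reindex:

```shell
# user 28m22.744s over real 10m41.855s
awk 'BEGIN { printf "%.2f\n", (28*60 + 22.744) / (10*60 + 41.855) }'   # prints "2.65"
```

So the 4 jobs deliver about 2.65x the single-core throughput, suggesting the reindex is partly bound by something other than CPU (I/O or lock contention, presumably).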

Conclusion

With manitou-mgr, we can check the final size of the database and its main tables:

$ manitou-mgr --conf=enron-mdx.conf --print-size
-----------------------------------
addresses : 13.52 MB
attachment_contents : 0.02 MB
attachments : 0.02 MB
body : 684.98 MB
header : 402.45 MB
inverted_word_index : 2664.77 MB
mail : 250.12 MB
mail_addresses : 441.17 MB
mail_tags : 0.01 MB
pg_largeobject : 0.01 MB
raw_mail : 0.01 MB
words : 106.52 MB
-----------------------------------
Total database size : 4633 MB

Future posts will show how it compares to the full mailset (with attachments, 18GB of .pst files), and how to parallelize the main import itself.


Operators in the search bar

July 9th, 2016

Until now, the search bar in the user interface did not support query
terms to search on metadata.
I’m glad to say that commits 2ddddaae and a1cbe72a add support for filtering
by date and message status right from the search bar, introducing
five operators:

  • “date:” must be followed by an ISO 8601 date (format YYYY-MM-DD),
    or by a specific month (format YYYY-MM), or just a year (YYYY).
    It selects the messages from, respectively, that day, month, or year.

  • “before:” has the same format but selects messages dated
    from this day/month/year or an earlier date.

  • “after:” is of course the opposite, selecting messages past
    the date that follows.

  • “is:” must be followed by a status among read, replied, forward, archived, sent.
    Criteria can be combined by using the operator several times, since statuses
    are cumulative, not mutually exclusive.

  • “isnot:” is of course the opposite of “is”. It accepts the same arguments
    and filters out the messages that have the corresponding status bit.
    “is:” and “isnot:” can also be combined, for instance: “is:archived isnot:sent”.
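Putting the operators together, a few illustrative queries (the annotations are mine, following the definitions above):

```text
date:2016-07-12            # messages dated July 12th, 2016
date:2016-07               # any message from July 2016
before:2015                # messages from 2015 or earlier
after:2016-06-30           # messages dated after June 30th, 2016
is:read is:replied         # both read and replied
is:archived isnot:sent     # archived, excluding sent messages
```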

A few more search bar operators are likely to be added to that list, as it’s a pretty handy and fast way to express basic queries.
