NAME

loghack - process and query Apache logs

reskip

Regenerate the skiplist for a given chunk.
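
No usage line is shown for reskip; by analogy with report, which addresses a chunk as a tarball path, an invocation would plausibly look like this (the argument form is an assumption):

  loghack reskip $server/$chunk.tar.gz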

prep

Parse a raw logfile and split it into hourly chunks.

  loghack prep servername/logfile.gz
  loghack confirm *

list

List files in the repository.

  loghack list 2008-01-01 thru 2008-01-31 in *

compile

Assemble reports into daily chunks (in the .compiled/ directory).

  loghack compile 2007-10-01

aggregate

Build aggregate reports.

  loghack aggregate month $start_date

  loghack aggregate week $start_date

tabulate

Tabulate daily reports over a date range.

  loghack tabulate daily 2007-10-01 thru 2007-10-31

report

Crunch the prepared data and generate a report for the given chunk(s).

  loghack report $server/$chunk.tar.gz

unique

Experimental: count/report unique visitors within a chunk.
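
By analogy with report's chunk addressing, a plausible (unverified) invocation:

  loghack unique $server/$chunk.tar.gz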

day_unique

Experimental: count/report unique visitors within a day.
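
Assuming days are addressed by date, as with compile, an invocation might look like this (the argument form is an assumption):

  loghack day_unique 2007-10-01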

month_unique

Experimental: count/report unique visitors within a month.

month_unique2

Experimental: count/report unique visitors within a month (alternate, memory-hungry algorithm).

Create hardlinks with dated names.

import

Run the prep, report, compile, and aggregate actions (useful for automatic daily imports).

  loghack import $file1 $file2 ...

count

Count the records in a given chunk (accounting for the skiplist).
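
Following the chunk-path form used by report (the argument syntax here is an assumption):

  loghack count $server/$chunk.tar.gz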

dump

Dump the records in a given chunk (accounting for the skiplist).
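
Following the chunk-path form used by report (the argument syntax here is an assumption):

  loghack dump $server/$chunk.tar.gz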

date

Print a date for the first line in a raw logfile.

  date=$(loghack date logfile.gz)
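
The captured date can then drive other actions that take a date argument, such as compile (a sketch; assumes logfile.gz is a raw logfile in the current directory):

  date=$(loghack date logfile.gz)
  loghack compile $date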