
NAME

evaluate_alSet-version.pl - Evaluates submitted Alignment Set(s) against an answer Alignment Set

SYNOPSIS

perl evaluate_alSet-version.pl [options] required_arguments

Required arguments:

        -sub FILENAME,'DESCRIPTION'    Submission source-to-target links file (repeat the option for each submission)
        -subf BLINKER|GIZA|NAACL    Submission file(s) format (required if not TALP).
        -ans FILENAME    Answer source-to-target links file
        -ansf BLINKER|GIZA|NAACL    Answer file format (required if not TALP)

Options:

        -sub_range BEGIN-END    Submission Alignment Set range
        -ans_range BEGIN-END    Answer Alignment Set range
        -alignMode as-is|null-align|no-null-align    Alignment mode. Default: no-null-align
        -w    Activates the weighting of the links
        -title TITLE    Title of the experiment series
        -help|?    Prints the help and exits
        -man    Prints the manual and exits

ARGUMENTS

--sub,--submission FILENAME,'DESCRIPTION'

One entry for each submission source-to-target (i.e. links) file name (or directory, in the case of the BLINKER format). A description can optionally be appended after a comma; enclose it in '' if it contains white space.

--subf,--sub_format BLINKER|GIZA|NAACL

Submission Alignment Set format (required if different from default, TALP). The same format is required for all input files.
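As a point of reference, NAACL-style (WPT shared task) link files carry one link per line: sentence pair number, source word position, target word position, and optionally an S(ure)/P(robable) marker and a confidence value, with position 0 conventionally reserved for the NULL token. The lines below are an illustrative sketch, not taken from this distribution's files:

```
0001 1 1 S
0001 2 3 P
0002 1 2
```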

--ans,--answer FILENAME

Answer source-to-target (i.e. links) file name (or directory, in case of BLINKER format)

--ansf,--ans_format BLINKER|GIZA|NAACL

Answer Alignment Set format (required if different from default, TALP)

OPTIONS

--sub_range BEGIN-END

Range of the submission source-to-target file (BEGIN and END are the sentence pair numbers). The same range is required for all input files.

--ans_range BEGIN-END

Range of the answer source-to-target file (BEGIN and END are the sentence pair numbers)

--alignMode as-is|null-align|no-null-align

Take alignment "as-is" or force NULL alignment or NO-NULL alignment (see AlignmentSet.pm documentation). The default here is 'no-null-align' (as opposed to the other scripts, where the default is 'as-is'). Use "as-is" only if you are sure answer and submission files are in the same alignment mode.

-w, --weighted

Weights the links according to the number of links of each word in the sentence pair.

--title TITLE

Gives a title to the table in which the results are compared

--help, -?

Prints a help message and exits.

--man

Prints the manual page and exits.

DESCRIPTION

Evaluates one or more submitted Alignment Sets against an answer Alignment Set, and compares the results in a table.
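The columns reported in the comparison table (Ps/Rs/Fs over the Sure answer links, Pp/Rp/Fp over the Probable ones, and AER) follow the standard word-alignment evaluation measures of the NAACL/WPT shared tasks. A minimal Perl sketch of that computation, assuming each Alignment Set has already been reduced to a list of "srcPos-tgtPos" link strings; the function name and data layout here are illustrative, not the library's API:

```perl
use strict;
use warnings;

# Illustrative sketch: precision/recall/F against the Sure (S) and
# Probable (P) answer link sets, plus the Alignment Error Rate
#   AER = 1 - (|A ^ S| + |A ^ P|) / (|A| + |S|)
# where A is the submission and ^ denotes intersection.
sub eval_links {
    my ($sub, $sure, $prob) = @_;         # array refs of "src-tgt" strings
    my %S = map { $_ => 1 } @$sure;
    my %P = map { $_ => 1 } @$prob;
    my $a_s = grep { $S{$_} } @$sub;      # |A ^ S|
    my $a_p = grep { $P{$_} } @$sub;      # |A ^ P|
    my ($ps, $rs) = (100 * $a_s / @$sub, 100 * $a_s / @$sure);
    my ($pp, $rp) = (100 * $a_p / @$sub, 100 * $a_p / @$prob);
    my $fs = ($ps + $rs) ? 2 * $ps * $rs / ($ps + $rs) : 0;
    my $fp = ($pp + $rp) ? 2 * $pp * $rp / ($pp + $rp) : 0;
    my $aer = 100 * (1 - ($a_s + $a_p) / (@$sub + @$sure));
    return ($ps, $rs, $fs, $pp, $rp, $fp, $aer);
}
```

When the answer set marks every link as Sure (S = P), the sure and probable columns coincide and AER = 100 - Fs.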

EXAMPLES

perl evaluate_alSet-version.pl -sub test-giza.spa2eng.giza,'Spanish to English' -sub test-giza.eng2spa.giza,'English to Spanish' -title 'Alignment Evaluation' -subf GIZA -ans test-answer.spa2eng.naacl -ansf NAACL

Gives the following output:

                 Alignment Evaluation
  -------------------------------------------------------------------
   Experiment            Ps     Rs     Fs     Pp     Rp     Fp    AER
  -------------------------------------------------------------------
   Spanish to English  93.95  67.51  78.57  93.95  67.51  78.57  21.43
   English to Spanish  81.57  74.14  77.68  86.31  65.60  74.54  20.07

AUTHOR

Patrik Lambert <lambert@gps.tsc.upc.edu>. Some code from Rada Mihalcea's wa_eval_align.pl (http://www.cs.unt.edu/rada/wpt/code/) has been integrated into the library functions.

COPYRIGHT AND LICENSE

Copyright 2004-2005 by Patrik Lambert

This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License (version 2 or any later version).
