evaluate_alSet-version.pl - Evaluates submitted Alignment Set(s) against an answer Alignment Set
perl evaluate_alSet-version.pl [options] required_arguments
Required arguments:
-sub FILENAME,'DESCRIPTION'    Submission source-to-target links file (one -sub entry per submission file)
-subf BLINKER|GIZA|NAACL       Submission file(s) format (required if not TALP)
-ans FILENAME                  Answer source-to-target links file
-ansf BLINKER|GIZA|NAACL       Answer file format (required if not TALP)
Options:
-sub_range BEGIN-END                        Submission Alignment Set range
-ans_range BEGIN-END                        Answer Alignment Set range
-alignMode as-is|null-align|no-null-align   Alignment mode. Default: no-null-align
-w                                          Activates the weighting of the links
-title TITLE                                Title of the experiment series
-help|?                                     Prints the help and exits
-man                                        Prints the manual and exits
One entry for each submission source-to-target (i.e. links) file name (or directory, in the case of BLINKER format). Optionally, a description can be appended after a comma; enclose it in single quotes if it contains white space.
Submission Alignment Set format (required if different from default, TALP). The same format is required for all input files.
Answer source-to-target (i.e. links) file name (or directory, in case of BLINKER format)
Answer Alignment Set format (required if different from default, TALP)
Range of the submission source-to-target file (BEGIN and END are the sentence pair numbers). The same range is required for all input files.
Range of the answer source-to-target file (BEGIN and END are the sentence pair numbers)
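For orientation, the NAACL shared-task link format referred to above is plain text with one link per line. The snippet below is an illustrative example (not taken from the distribution), following the usual convention: sentence pair number, source word position, target word position, and an optional S (sure) or P (probable) marker, with positions 1-based and 0 reserved for the NULL word.

```
1 1 1 S
1 2 3 P
2 1 2 S
```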
Take alignment "as-is" or force NULL alignment or NO-NULL alignment (see AlignmentSet.pm documentation). The default here is 'no-null-align' (as opposed to the other scripts, where the default is 'as-is'). Use "as-is" only if you are sure answer and submission files are in the same alignment mode.
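The two forced modes can be pictured as set operations on the link list. The sketch below is my own illustration, not the module's code; names like force_null_align are hypothetical, and it assumes the NAACL convention that token position 0 denotes the NULL word.

```python
NULL = 0  # conventional NULL-word position in NAACL-style link files

def force_null_align(links, src_len, tgt_len):
    """'null-align': add an explicit NULL link for every unlinked word."""
    links = set(links)
    linked_src = {s for s, t in links if t != NULL}
    linked_tgt = {t for s, t in links if s != NULL}
    for s in range(1, src_len + 1):
        if s not in linked_src:
            links.add((s, NULL))
    for t in range(1, tgt_len + 1):
        if t not in linked_tgt:
            links.add((NULL, t))
    return links

def force_no_null_align(links):
    """'no-null-align': drop every link that involves the NULL word."""
    return {(s, t) for s, t in links if s != NULL and t != NULL}
```

Comparing a 'null-align' submission against a 'no-null-align' answer (or vice versa) distorts the scores, which is why the default here forces both sides to 'no-null-align'.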
Weights the links according to the number of links of each word in the sentence pair.
Gives a title to the table in which results are compared.
Prints a help message and exits.
Evaluates one or more submitted Alignment Set(s) against an answer Alignment Set, and compares the results in a table.
perl evaluate_alSet-version.pl -sub test-giza.spa2eng.giza,'Spanish to English' -sub test-giza.eng2spa.giza,'English to Spanish' -title 'Alignment Evaluation' -subf GIZA -ans test-answer.spa2eng.naacl -ansf NAACL
Gives the following output:
Alignment Evaluation
------------------------------------------------------------------
Experiment            Ps     Rs     Fs     Pp     Rp     Fp    AER
Spanish to English   93.95  67.51  78.57  93.95  67.51  78.57  21.43
English to Spanish   81.57  74.14  77.68  86.31  65.60  74.54  20.07
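The column labels follow the standard word-alignment metrics, where the answer set distinguishes sure (S) and probable (P) links: the "s" columns are computed against S, the "p" columns against P, and AER combines both. The sketch below is an illustrative restatement of those standard definitions, not code from this distribution:

```python
def alignment_metrics(A, S, P):
    """Standard alignment metrics for a submission A against sure links S
    and probable links P (S is a subset of P); links are (src, tgt) pairs.
    Returns (precision vs P, recall vs S, alignment error rate)."""
    a_and_s = len(A & S)
    a_and_p = len(A & P)
    precision = a_and_p / len(A)   # cf. the Pp column
    recall = a_and_s / len(S)      # cf. the Rs column
    aer = 1 - (a_and_s + a_and_p) / (len(A) + len(S))
    return precision, recall, aer

A = {(1, 1), (2, 2), (3, 4)}            # submitted links
S = {(1, 1), (2, 2)}                    # sure answer links
P = S | {(3, 4), (3, 5)}                # probable answer links
p, r, aer = alignment_metrics(A, S, P)
# p = 1.0, r = 1.0, aer = 0.0
```

The F columns are the usual harmonic means of the corresponding precision and recall.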
Patrick Lambert <lambert@gps.tsc.upc.edu>. Some code from Rada Mihalcea's wa_eval_align.pl (http://www.cs.unt.edu/rada/wpt/code/) has been integrated into the library functions.
Copyright 2004-2005 by Patrick Lambert
This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License (version 2 or any later version).
To install Lingua::AlignmentSet, copy and paste the appropriate command into your terminal.

With cpanm:

    cpanm Lingua::AlignmentSet

With the CPAN shell:

    perl -MCPAN -e shell
    install Lingua::AlignmentSet

For more information on module installation, please visit the detailed CPAN module installation guide.