
NAME

Encode - character encodings

SYNOPSIS

    use Encode;

    use Encode::TW; # for Taiwan-based Chinese encodings
    use Encode::CN; # for China-based Chinese encodings
    use Encode::JP; # for Japanese encodings
    use Encode::KR; # for Korean encodings

DESCRIPTION

The Encode module provides the interfaces between Perl's strings and the rest of the system. Perl strings are sequences of characters.

The repertoire of characters that Perl can represent is at least that defined by the Unicode Consortium. On most platforms the ordinal values of the characters (as returned by ord(ch)) are the "Unicode codepoint" for the character (the exceptions are those platforms where the legacy encoding is some variant of EBCDIC rather than a superset of ASCII - see perlebcdic).

Traditionally, computer data has been moved around in 8-bit chunks, often called "bytes". These chunks are also known as "octets" in networking standards. Perl is widely used to manipulate data of many types - not only strings of characters representing human or computer languages, but also "binary" data, being the machine's representation of numbers, pixels in an image - or just about anything.

When Perl is processing "binary data" the programmer wants Perl to process "sequences of bytes". This is not a problem for Perl - as a byte has 256 possible values it easily fits in Perl's much larger "logical character".

Due to size concerns, the CJK (Chinese, Japanese & Korean) encodings are not loaded by default; you have to use the corresponding Encode::(TW|CN|JP|KR) module first, as shown in the SYNOPSIS above.

TERMINOLOGY

  • character: a character in the range 0..(2**32-1) (or more). (What Perl's strings are made of.)

  • byte: a character in the range 0..255 (A special case of a Perl character.)

  • octet: 8 bits of data, with ordinal values 0..255 (Term for bytes passed to or from a non-Perl context, e.g. a disk file; see the short example below.)
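A minimal sketch of the character/byte distinction (nothing here is Encode-specific; it only uses core Perl):

    use strict;
    use warnings;

    my $byte = "\xE9";          # a single byte: ordinal 233, fits in 0..255
    my $char = "\x{263A}";      # a single character: ordinal 0x263A, too big for a byte

    printf "ord(byte) = %d, length = %d\n", ord($byte), length($byte);  # 233, 1
    printf "ord(char) = %d, length = %d\n", ord($char), length($char);  # 9786, 1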

The marker [INTERNAL] marks Internal Implementation Details, in general meant only for those who think they know what they are doing, and such details may change in future releases.

ENCODINGS

Characteristics of an Encoding

An encoding has a "repertoire" of characters that it can represent, and for each representable character there is at least one sequence of octets that represents it.

Types of Encodings

Encodings can be divided into the following types:

  • Fixed length 8-bit (or less) encodings.

    Each character is a single octet so may have a repertoire of up to 256 characters. ASCII and iso-8859-* are typical examples.

  • Fixed length 16-bit encodings

    Each character is two octets so may have a repertoire of up to 65 536 characters. Unicode's UCS-2 is an example. Also used for encodings for East Asian languages.

  • Fixed length 32-bit encodings.

    Not really very "encoded" encodings. The Unicode code points are just represented as 4-octet integers. Nonetheless, because different architectures use different representations of integers (so-called "endianness") there are at least two distinct encodings.

  • Multi-byte encodings

    The number of octets needed to represent a character varies. UTF-8 is a particularly complex but regular case of a multi-byte encoding. Several East Asian countries use a multi-byte encoding where 1-octet is used to cover western roman characters and Asian characters get 2-octets. (UTF-16 is strictly a multi-byte encoding taking either 2 or 4 octets to represent a Unicode code point.)

  • "Escape" encodings.

    These encodings embed "escape sequences" into the octet sequence which describe how the following octets are to be interpreted. The iso-2022-* family is typical. Following the escape sequence octets are encoded by an "embedded" encoding (which will be one of the above types) until another escape sequence switches to a different "embedded" encoding.

    These schemes are very flexible and can handle mixed languages but are very complex to process (and have state). No escape encodings are implemented for Perl yet.

Specifying Encodings

Encodings can be specified to the API described below in two ways:

1. By name

Encoding names are strings with characters taken from a restricted repertoire. See "Encoding Names".

2. As an object

Encoding objects are returned by find_encoding($name), as sketched below.
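A minimal sketch of the object form, assuming the iso-8859-1 tables are available (they are built in):

    use Encode qw(find_encoding);

    my $enc = find_encoding("iso-8859-1") or die "encoding not found";
    print $enc->name, "\n";                    # the canonical name

    my $octets = $enc->encode("caf\x{E9}");    # same octets as encode("iso-8859-1", ...)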

Encoding Names

Encoding names are case insensitive. White space in names is ignored. In addition an encoding may have aliases. Each encoding has one "canonical" name. The "canonical" name is chosen from the names of the encoding by picking the first in the following sequence:

  • The MIME name as defined in IETF RFCs.

  • The name in the IANA registry.

  • The name used by the organization that defined it.

Because of all the alias issues, and because in the general case encodings have state, Encode uses an encoding object internally once an operation is in progress.

As of Perl 5.8.0, at least the following encodings are recognized (the => marks aliases):

  ASCII

  US-ASCII => ASCII

The Unicode encodings:

  UTF-8
  UTF-16
  UCS-2

  ISO 10646-1 => UCS-2

The ISO 8859 and KOI:

  ISO 8859-1  ISO 8859-6   ISO 8859-11         KOI8-F
  ISO 8859-2  ISO 8859-7   (12 doesn't exist)  KOI8-R
  ISO 8859-3  ISO 8859-8   ISO 8859-13         KOI8-U
  ISO 8859-4  ISO 8859-9   ISO 8859-14
  ISO 8859-5  ISO 8859-10  ISO 8859-15
                           ISO 8859-16

  Latin1  => 8859-1  Latin6  => 8859-10
  Latin2  => 8859-2  Latin7  => 8859-13
  Latin3  => 8859-3  Latin8  => 8859-14
  Latin4  => 8859-4  Latin9  => 8859-15
  Latin5  => 8859-9  Latin10 => 8859-16

  Cyrillic => 8859-5
  Arabic   => 8859-6
  Greek    => 8859-7
  Hebrew   => 8859-8
  Thai     => 8859-11
  TIS620   => 8859-11

The CJKV: Chinese, Japanese, Korean, Vietnamese:

  ISO 2022     ISO 2022 JP-1  JIS 0201  GB 1988   Big5       EUC-CN
  ISO 2022 CN  ISO 2022 JP-2  JIS 0208  GB 2312   HZ         EUC-JP
  ISO 2022 JP  ISO 2022 KR    JIS 0212  GB 12345  CNS 11643  EUC-JP-0212
  Shift-JIS                             GBK       Big5-HKSCS EUC-KR
  VISCII                                ISO-IR-165

(Due to size concerns, additional Chinese encodings including GB 18030, EUC-TW and BIG5PLUS are distributed separately on CPAN, under the name Encode::HanExtra.)

The PC codepages:

  CP37   CP852  CP861  CP866  CP949   CP1251  CP1256
  CP424  CP855  CP862  CP869  CP950   CP1252  CP1257
  CP737  CP856  CP863  CP874  CP1006  CP1253  CP1258
  CP775  CP857  CP864  CP932  CP1047  CP1254
  CP850  CP860  CP865  CP936  CP1250  CP1255

  WinLatin1     => CP1252
  WinLatin2     => CP1250
  WinCyrillic   => CP1251
  WinGreek      => CP1253
  WinTurkish    => CP1254
  WinHebrew     => CP1255
  WinArabic     => CP1256
  WinBaltic     => CP1257
  WinVietnamese => CP1258

(All the CPNNN... are available also as IBMNNN....)

The Mac codepages:

  MacCentralEuropean   MacJapanese
  MacCroatian          MacRoman
  MacCyrillic          MacRomanian
  MacDingbats          MacSami
  MacGreek             MacThai
  MacIcelandic         MacTurkish
                       MacUkraine

Miscellaneous:

  7bit-greek  IR-197
  7bit-kana   NeXTstep
  7bit-latin1 POSIX-BC
  DingBats    Roman8
  GSM 0338    Symbol

PERL ENCODING API

Generic Encoding Interface

  •         $bytes  = encode(ENCODING, $string[, CHECK])

    Encodes string from Perl's internal form into ENCODING and returns a sequence of octets. For CHECK see "Handling Malformed Data".

    For example to convert (internally UTF-8 encoded) Unicode data to octets:

            $octets = encode("utf8", $unicode);
  •         $string = decode(ENCODING, $bytes[, CHECK])

    Decodes a sequence of octets, assumed to be in ENCODING, into Perl's internal form and returns the resulting string. For CHECK see "Handling Malformed Data".

    For example to convert ISO 8859-1 data to UTF-8:

            $utf8 = decode("latin1", $latin1);
  •         from_to($string, FROM_ENCODING, TO_ENCODING[, CHECK])

    Convert in-place the data between two encodings. How did the data in $string originally get to be in FROM_ENCODING? Either using encode() or through PerlIO: See "Encoding and IO". For CHECK see "Handling Malformed Data".

    For example to convert ISO 8859-1 data to UTF-8:

            from_to($data, "iso-8859-1", "utf-8");

    and to convert it back:

            from_to($data, "utf-8", "iso-8859-1");

    Note that because the conversion happens in place, the data to be converted cannot be a string constant; it must be a scalar variable. (A combined example of all three calls follows this list.)
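A combined sketch of the three calls above (the data is made up for illustration):

    use Encode qw(encode decode from_to);

    my $string = "caf\x{E9}";                     # Perl characters
    my $octets = encode("iso-8859-1", $string);   # 4 octets: 63 61 66 E9
    my $again  = decode("iso-8859-1", $octets);   # back to the original characters

    my $data = encode("iso-8859-1", $string);
    from_to($data, "iso-8859-1", "utf-8");        # $data is now UTF-8 octets (5 of them)
    from_to($data, "utf-8", "iso-8859-1");        # and back to ISO 8859-1 octets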

Handling Malformed Data

If CHECK is not set, undef is returned when malformed data is encountered. If the data is supposed to be UTF-8, an optional lexical warning (category utf8) is given. If CHECK is true but not a code reference, the call dies instead.

It would be desirable to have a way to indicate that the transform should use the encoding's "replacement character" - no such mechanism is defined yet.

It is also planned to allow CHECK to be a code reference.

This is not yet implemented as there are design issues with what its arguments should be and how it returns its results.

Scheme 1

The routine is passed the remaining fragment of the string being processed. It modifies the fragment in place to remove the bytes/characters it can understand and returns a string used to represent them, e.g.:

 sub fixup {
   my $ch = substr($_[0], 0, 1, '');
   return sprintf("\\x{%02X}", ord($ch));
 }

This scheme is close to how underlying C code for Encode works, but gives the fixup routine very little context.

Scheme 2

The routine is passed the original string, an index into it of the problem area, and the output string so far. It appends what it wants to the output string and returns a new index into the original string. For example:

 sub fixup {
   # my ($s, $i, $d) = @_;
   my $ch = substr($_[0], $_[1], 1);
   $_[2] .= sprintf("\\x{%02X}", ord($ch));
   return $_[1] + 1;
 }

This scheme gives maximal control to the fixup routine but is more complicated to code, and may need the internals of Encode to be tweaked to keep the original string intact.

Other Schemes

Hybrids of the above.

Multiple return values rather than in-place modifications.

Index into the string could be pos($str) allowing s/\G...//.

UTF-8 / utf8

The Unicode Consortium defines the UTF-8 standard as a way of encoding the entire Unicode repertoire as sequences of octets. This encoding is expected to become very widespread. Perl can use this form internally to represent strings, so conversions to and from this form are particularly efficient (as octets in memory do not have to change, just the meta-data that tells Perl how to treat them).

  •         $bytes = encode_utf8($string);

    The characters that comprise $string are encoded in Perl's superset of UTF-8 and the resulting octets are returned as a sequence of bytes. All possible characters have a UTF-8 representation, so this function cannot fail.

  •         $string = decode_utf8($bytes [,CHECK]);

    The sequence of octets represented by $bytes is decoded from UTF-8 into a sequence of logical characters. Not all sequences of octets form valid UTF-8 encodings, so it is possible for this call to fail. For CHECK see "Handling Malformed Data". (A short round trip using both functions follows.)
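A short round trip using both functions:

    use Encode qw(encode_utf8 decode_utf8);

    my $string = "caf\x{E9} \x{263A}";      # 6 Perl characters
    my $bytes  = encode_utf8($string);      # 9 octets of UTF-8
    my $again  = decode_utf8($bytes);       # the original 6 characters again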

Other Encodings of Unicode

UTF-16 is similar to UCS-2: 16-bit or 2-byte chunks. UCS-2 can only represent the code points 0..0xFFFF, while UTF-16 has a surrogate pair scheme which allows it to cover the whole Unicode range.

Surrogates are code points set aside to encode the 0x10000..0x10FFFF range of Unicode code points in pairs of 16-bit units. The high surrogates are the range 0xD800..0xDBFF, and the low surrogates are the range 0xDC00..0xDFFF. The surrogate encoding is

        $hi = int( ($uni - 0x10000) / 0x400 ) + 0xD800;
        $lo =      ($uni - 0x10000) % 0x400   + 0xDC00;

and the decoding is

        $uni = 0x10000 + ($hi - 0xD800) * 0x400 + ($lo - 0xDC00);
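For example, checking these formulas against U+1D11E (MUSICAL SYMBOL G CLEF):

        my $uni = 0x1D11E;
        my $hi  = int( ($uni - 0x10000) / 0x400 ) + 0xD800;   # 0xD834
        my $lo  =      ($uni - 0x10000) % 0x400   + 0xDC00;   # 0xDD1E
        printf "%04X %04X\n", $hi, $lo;                       # prints "D834 DD1E"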

Encode implements big-endian UCS-2 aliased to "iso-10646-1" as that happens to be the name used by that representation when used with X11 fonts.

UTF-32 or UCS-4 is 32-bit or 4-byte chunks. Perl's logical characters can be considered as being in this form without encoding. An encoding to transfer strings in this form (e.g. to write them to a file) would need to

     pack('L*', unpack('U*', $string));  # native
  or
     pack('V*', unpack('U*', $string));  # little-endian
  or
     pack('N*', unpack('U*', $string));  # big-endian

depending on the endianness required.

No UTF-32 encodings are implemented yet.
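In the meantime, a minimal sketch of transferring a string as big-endian 4-octet code points via a file (the file name is a placeholder, and $string is assumed to hold the characters to transfer):

     open(my $out, '>', 'string.ucs4') or die $!;
     binmode($out);                                   # raw octets, no layers
     print $out pack('N*', unpack('U*', $string));    # big-endian, as above
     close($out);

     open(my $in, '<', 'string.ucs4') or die $!;
     binmode($in);
     my $copy = do { local $/; pack('U*', unpack('N*', <$in>)) };
     close($in);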

Both UCS-2 and UCS-4 style encodings can have "byte order marks", by placing the code point 0xFEFF as the very first thing in a file; a reader that instead sees the byte-swapped value 0xFFFE knows it has to reverse the byte order.

Listing available encodings

  use Encode qw(encodings);
  @list = encodings();

Returns a list of the canonical names of the available encodings.
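For example, to see which ISO 8859 encodings this particular installation knows about (exactly what is listed depends on what has been installed):

  use Encode qw(encodings);
  print "$_\n" for sort grep { /8859/ } encodings();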

Defining Aliases

  use Encode qw(define_alias);
  define_alias( newName => ENCODING);

Allows newName to be used as an alias for ENCODING. ENCODING may be either the name of an encoding or an encoding object (as above).

Currently newName can be specified in the following ways:

  • As a simple string.

  • As a qr// compiled regular expression, e.g.:

      define_alias( qr/^iso8859-(\d+)$/i => '"iso-8859-$1"' );

    In this case, if ENCODING is not a reference it is eval-ed to allow $1 etc. to be substituted. The example is one way to map names as used in X11 font names to the MIME names for the iso-8859-* family.

  • As a code reference, e.g.:

      define_alias( sub { return /^iso8859-(\d+)$/i ? "iso-8859-$1" : undef } , '');

    In this case $_ will be set to the name that is being looked up and ENCODING is passed to the sub as its first argument. The example is another way to map names as used in X11 font names to the MIME names for the iso-8859-* family (a short sketch follows).
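A short sketch of such an alias in action (the X11-style name used here is just an example):

    use Encode qw(define_alias find_encoding);

    define_alias( qr/^iso8859-(\d+)$/i => '"iso-8859-$1"' );

    my $enc = find_encoding("iso8859-7");   # resolved through the alias
    print $enc->name, "\n";                 # the canonical name (e.g. iso-8859-7)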

Defining Encodings

    use Encode qw(define_encoding);
    define_encoding( $object, 'canonicalName' [,alias...]);

Causes canonicalName to be associated with $object. The object should provide the interface described in "IMPLEMENTATION CLASSES" below. If more than two arguments are provided then additional arguments are taken as aliases for $object as for define_alias.

Encoding and IO

It is very common to want to do encoding transformations when reading or writing files, network connections, pipes etc. If Perl is configured to use the new 'perlio' IO system then Encode provides a "layer" (See perliol) which can transform data as it is read or written.

Here is how the blind poet would modernise the encoding:

    use Encode;
    open(my $iliad,'<:encoding(iso-8859-7)','iliad.greek');
    open(my $utf8,'>:utf8','iliad.utf8');
    my @epic = <$iliad>;
    print $utf8 @epic;
    close($utf8);
    close($iliad);

In addition the new IO system can also be configured to read/write UTF-8 encoded characters (as noted above this is efficient):

    use charnames ':full';   # needed for the \N{...} syntax
    open(my $fh,'>:utf8','anything');
    print $fh "Any \x{0021} string \N{WHITE SMILING FACE}\n";

Either of the above forms of "layer" specifications can be made the default for a lexical scope with the use open ... pragma. See open.

Once a handle is open, its layers can be altered using binmode.

Without any such configuration, or if Perl itself is built using the system's own IO, then write operations assume that the file handle accepts only bytes and will die if a character larger than 255 is written to the handle. When reading, each octet from the handle becomes a byte-in-a-character. Note that this default is the same behaviour as bytes-only languages (including Perl before v5.6) would have, and is sufficient to handle native 8-bit encodings e.g. iso-8859-1, EBCDIC etc. and any legacy mechanisms for handling other encodings and binary data.

In other cases it is the program's responsibility to transform characters into bytes using the API above before doing writes, and to transform the bytes read from a handle into characters before doing "character operations" (e.g. lc, /\W+/, ...).
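A minimal sketch of that discipline, assuming ISO 8859-1 data, a placeholder file name, and $characters holding a Perl string:

    use Encode qw(encode decode);

    open(my $out, '>', 'out.latin1') or die $!;
    print $out encode("iso-8859-1", $characters);     # characters -> octets before the write
    close($out);

    open(my $in, '<', 'out.latin1') or die $!;
    while (my $octets = <$in>) {
        my $line   = decode("iso-8859-1", $octets);   # octets -> characters
        my $folded = lc($line);                       # now character operations are safe
    }
    close($in);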

You can also use PerlIO to convert larger amounts of data you don't want to bring into memory. For example to convert between ISO 8859-1 (Latin 1) and UTF-8 (or UTF-EBCDIC in EBCDIC machines):

    open(F, "<:encoding(iso-8859-1)", "data.txt") or die $!;
    open(G, ">:utf8",                 "data.utf") or die $!;
    while (<F>) { print G }

    # Could also do "print G <F>" but that would pull
    # the whole file into memory just to write it out again.

More examples (the file names here are just placeholders):

    open(my $f, "<:encoding(cp1252)",     "in.txt")   or die $!;
    open(my $g, ">:encoding(iso-8859-2)", "out.txt")  or die $!;
    open(my $h, ">:encoding(latin9)",     "out9.txt") or die $!;   # latin9 is iso-8859-15

See PerlIO for more information.

See also encoding for how to change the default encoding of the data in your script.

Encoding How to ...

To do:

  • IO with mixed content (faking iso-2022-*)

  • MIME's Content-Length:

  • UTF-8 strings in binary data.

  • Perl/Encode wrappers on non-Unicode XS modules.

Messing with Perl's Internals

The following API uses parts of Perl's internals in the current implementation. As such they are efficient, but may change.

  • is_utf8(STRING [, CHECK])

    [INTERNAL] Test whether the UTF-8 flag is turned on in the STRING. If CHECK is true, also checks the data in STRING for being well-formed UTF-8. Returns true if successful, false otherwise.

  • valid_utf8(STRING)

    [INTERNAL] Test whether STRING is in a consistent state. Will return true if the string is held as bytes, or is well-formed UTF-8 and has the UTF-8 flag on. The main reason for this routine is to allow Perl's test suite to check that operations have left strings in a consistent state.

  •         _utf8_on(STRING)

    [INTERNAL] Turn on the UTF-8 flag in STRING. The data in STRING is not checked for being well-formed UTF-8. Do not use unless you know that the STRING is well-formed UTF-8. Returns the previous state of the UTF-8 flag (so please don't treat the return value as indicating success or failure), or undef if STRING is not a string.

  •         _utf8_off(STRING)

    [INTERNAL] Turn off the UTF-8 flag in STRING. Do not use frivolously. Returns the previous state of the UTF-8 flag (so please don't treat the return value as indicating success or failure), or undef if STRING is not a string. (A short example of these flag functions follows this list.)
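A minimal illustration of the flag functions above (remember that they are internal, and that _utf8_on() in particular must only be used on data already known to be well-formed UTF-8):

    use Encode qw(decode);

    my $octets = "caf\xC3\xA9";                     # UTF-8 octets for "café"
    print Encode::is_utf8($octets) ? 1 : 0, "\n";   # 0 - plain bytes, flag off

    my $string = decode("utf-8", $octets);          # a 4-character string
    print Encode::is_utf8($string) ? 1 : 0, "\n";   # 1 - flag on

    Encode::_utf8_off($string);                     # same internal octets, now seen as 5 bytes
    print length($string), "\n";                    # 5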

IMPLEMENTATION CLASSES

As mentioned above encodings are (in the current implementation at least) defined by objects. The mapping of encoding name to object is via the %encodings hash.

The values of the hash can currently be either strings or objects. The string form may go away in the future. The string form occurs when encodings() has scanned @INC for loadable encodings but has not actually loaded the encoding in question. This is because the current "loading" process is all Perl and a bit slow.

Once an encoding is loaded, the value of the hash is the object which implements the encoding. The object should provide the following interface:

->name

Should return the string representing the canonical name of the encoding.

->new_sequence

This is a placeholder for encodings with state. It should return an object which implements this interface, all current implementations return the original object.

->encode($string,$check)

Should return the octet sequence representing $string. If $check is true it should modify $string in place to remove the converted part (i.e. the whole string unless there is an error). If an error occurs it should return the octet sequence for the fragment of string that has been converted, and modify $string in-place to remove the converted part leaving it starting with the problem fragment.

If $check is false then encode should make a "best effort" to convert the string - for example by using a replacement character.

->decode($octets,$check)

Should return the string that $octets represents. If $check is true it should modify $octets in place to remove the converted part (i.e. the whole sequence unless there is an error). If an error occurs it should return the fragment of string that has been converted, and modify $octets in-place to remove the converted part leaving it starting with the problem fragment.

If $check is false then decode should make a "best effort" to convert the string - for example by using Unicode's "\x{FFFD}" as a replacement character.

It should be noted that the check behaviour is different from the outer public API. The logic is that the "unchecked" case is useful when encoding is part of a stream which may be reporting errors (e.g. STDERR). In such cases it is desirable to get everything through somehow without causing additional errors which obscure the original one. Also the encoding is best placed to know what the correct replacement character is, so if that is the desired behaviour then letting low level code do it is the most efficient.

In contrast, if check is true, the scheme above allows the encoding to do as much as it can and tell the layer above how much that was. What is lacking at present is a mechanism to report what went wrong. The most likely interface will be an additional method call to the object, or perhaps (to avoid forcing per-stream objects on otherwise stateless encodings) an additional parameter.

It is also highly desirable that encoding classes inherit from Encode::Encoding as a base class. This allows that class to define additional behaviour for all encoding objects. For example the built-in Unicode, UCS-2, and UTF-8 classes use:

  package Encode::MyEncoding;
  use base qw(Encode::Encoding);

  __PACKAGE__->Define(qw(myCanonical myAlias));

to create an object with bless {Name => ...}, $class and to call define_encoding. They inherit their name method from Encode::Encoding.
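As a hedged sketch only (this class is not part of Encode), a toy pure-Perl encoding built on that base class might look like the following; the rot13 transformation and the name are invented purely for illustration:

  package Encode::ROT13;
  use strict;
  use base qw(Encode::Encoding);

  __PACKAGE__->Define('rot13');          # registers the canonical name, as above

  sub encode {
      my ($obj, $string, $check) = @_;
      (my $octets = $string) =~ tr/A-Za-z/N-ZA-Mn-za-m/;
      $_[1] = '' if $check;              # with $check true, remove the converted part in place
      return $octets;
  }

  # rot13 is its own inverse, so decoding is the same transformation
  *decode = \&encode;

  1;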

Compiled Encodings

Encode.xs provides a class Encode::XS which provides the interface described above. It calls a generic octet-sequence to octet-sequence "engine" that is driven by tables (defined in encengine.c). The same engine is used for both encode and decode. Encode::XS's encode forces Perl's characters to their UTF-8 form and then treats them as just another multibyte encoding. Encode::XS's decode transforms the sequence and then turns on the UTF-8 flag, as that is the form that the tables are defined to produce. For details of the engine see the comments in encengine.c.

The tables are produced by the Perl script compile (the name needs to change so we can eventually install it somewhere). compile can currently read two formats:

*.enc

This is a coined format used by Tcl. It is documented in Encode/EncodeFormat.pod.

*.ucm

This is the semi-standard format used by IBM's ICU package.

compile can write the following forms:

*.ucm

See above - the Encode/*.ucm files provided with the distribution have been created from the original Tcl .enc files using this approach.

*.c

Produces tables as C data structures - this is used to build in encodings into Encode.so/Encode.dll.

*.xs

In theory this allows encodings to be stand-alone loadable Perl extensions. The process has not yet been tested. The plan is to use this approach for large East Asian encodings.

The set of encodings built-in to Encode.so/Encode.dll is determined by Makefile.PL. The current set is as follows:

ascii and iso-8859-*

That is, all the common 8-bit "western" encodings.

IBM-1047 and two other variants of EBCDIC.

These are the same variants that are supported by EBCDIC Perl as "native" encodings. They are included to prove "reversibility" of some constructs in EBCDIC Perl.

symbol and dingbats as used by Tk on X11.

(The reason Encode got started was to support Perl/Tk.)

That set is rather ad hoc and has been driven by the needs of the tests rather than the needs of typical applications. It is likely to be rationalized.

SEE ALSO

perlunicode, perlebcdic, "open" in perlfunc, PerlIO, encoding