Unicode Support in Lazarus

From Free Pascal wiki

Introduction

This page covers Unicode support in Lazarus programs (console or server, no GUI) and applications (GUI with LCL) using features of FPC 3.0+.

Note: The feature and this page are under construction. Please test it with Lazarus trunk and report your findings on the Lazarus mailing list or in the bug tracker.

The old way to support UTF-8 in LCL using FPC versions up to 2.6.4 is explained here: LCL Unicode Support

RTL with default codepage UTF-8

Usually the RTL uses the system codepage for strings (e.g. in FileExists and TStringList.LoadFromFile). On Windows this is a non-Unicode encoding, so you can only use characters from your language group. The LCL, on the other hand, works with UTF-8, which covers the full Unicode range. On Linux and Mac OS X UTF-8 is typically the system codepage, so there the RTL already uses CP_UTF8 by default.

Since FPC 2.7.1 the default system codepage of the RTL can be changed to UTF-8 (CP_UTF8). So Windows users can now use UTF-8 strings in the RTL.

  • For example FileExists and aStringList.LoadFromFile(Filename) now support full Unicode. See here for the complete list of functions that already support full Unicode.

RTL changes

  • AnsiToUTF8, UTF8ToAnsi, SysToUTF8, UTF8ToSys have no effect. They were mainly used for the above RTL functions, which no longer need a conversion. For WinAPI functions see below.
  • Many UTF8Encode and UTF8Decode calls are no longer needed, because the compiler converts automatically when assigning a UnicodeString to a String and vice versa.
  • When accessing the WinAPI you must use the "W" functions or the conversion functions UTF8ToWinCP and WinCPToUTF8. The same is true for libraries that still use Ansi WinAPI functions. For example, in FPC 3.0 and below the Registry unit needs this.
  • "String" and "UTF8String" are different types. If you assign a String to an UTF8String the compiler adds code to check if the encoding is the same. This costs unnecessary time and increases code size. Simply use String instead of UTF8String.

More information about the new FPC Unicode Support: FPC Unicode support

Testing with Lazarus

The new mode is enabled automatically when Lazarus is compiled with FPC 3.0+. It can be disabled by defining -dDisableUTF8RTL, see page Lazarus with FPC3.0 without UTF-8 mode for details.

If you use string literals with the new mode, your sources must have UTF-8 encoding. However -FcUTF8 is not typically needed. This is rather counter-intuitive because the meaning of that flag is to treat string literals as UTF-8. However the new UTF-8 mode switches the encoding at run-time, yet the constants are evaluated at compile-time. See "String Literals" below for more details.

What actually happens in the new mode? These two FPC functions are called in an early initialization section, setting the default String encoding in FPC to UTF-8:

 SetMultiByteConversionCodePage(CP_UTF8);
 SetMultiByteRTLFileSystemCodePage(CP_UTF8);

Also the UTF8...() functions in LazUTF8 (LazUtils) are set as backends for RTL's Ansi...() functions.

For console programs (no LCL) a dependency on LazUtils must be added manually. (LCL applications already have it through the LCL dependency.) Important: for console programs, the LazUTF8 unit must be placed in the uses section of the main program file, near the beginning, just after the critical memory manager and threading units (e.g. cmem, heaptrc, cthreads).
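For example, a console program's uses clause might look like this (a sketch; the exact unit set depends on your program):

```pascal
program ConsoleDemo;
{$mode objfpc}{$H+}
uses
  {$IFDEF UNIX}cthreads,{$ENDIF} // threading support must come first
  LazUTF8,                       // then LazUTF8, before other units
  Classes, SysUtils;             // ordinary units afterwards
begin
  // From here on, RTL string functions work with UTF-8 by default.
  WriteLn(UpperCase('hello'));
end.
```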

Compatibility with Unicode Delphi

For console programs the LazUTF8 unit must be in the uses section of the main program file. Delphi has no such unit.

RTL functions in ASCII area

RTL functions that work in the ASCII range (e.g. UpperCase) are compatible, and they are faster with the UTF-8 RTL. In Delphi all string functions became slower after the switch to UTF-16.

RTL Ansi...() Unicode functions

RTL Ansi...() functions that work with codepages / Unicode (e.g. AnsiUpperCase) are compatible.

Reading individual codepoints

Not compatible, although it is quite easy to write source code that works with both encodings.

Delphi has functions like NextCharIndex, IsHighSurrogate and IsLowSurrogate to deal with UTF-16 surrogate pairs, i.e. codepoints consisting of two UnicodeChars(*) (WideChar, Word, 2 bytes). However, those functions are not used much in example code and tutorials. Most tutorials say that the Copy() function works just as it did in Delphi versions before D2009. It does not: a codepoint can now consist of two UnicodeChars(*), and Copy() may return half of it.

UTF-8 has an advantage here: because multi-byte codepoints are so common, code must handle them correctly from the start.
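A small sketch of the difference between byte-based and codepoint-based copying (UTF8Copy is in the LazUTF8 unit of LazUtils):

```pascal
program CopyDemo;
{$mode objfpc}{$H+}
uses
  LazUTF8;
var
  S: String;
begin
  S := 'ä1';                  // 'ä' occupies 2 bytes in UTF-8
  WriteLn(Copy(S, 1, 1));     // byte-based: returns only the first byte of 'ä'
  WriteLn(UTF8Copy(S, 1, 1)); // codepoint-based: returns the whole 'ä'
end.
```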

See the section Dealing with UTF8 strings and characters in code below for examples of how to use UTF-8 and how to write code that works with both encodings.

(*)

  • "UnicodeString" and "UnicodeChar" names for UTF-16 types was a very unfortunate choice from Borland.
  • A Unicode codepoint is a "real" character definition in Unicode which can be encoded differently and its length depends on the encoding.
  • A Unicode character is either one codepoint or a decomposed character of multiple codepoints. Yes, this is complex ...

Compatibility with LCL in Lazarus 1.x

Many Lazarus LCL applications will continue to work without changes. However, handling Unicode has become simpler, and it makes sense to clean up the code. Code that reads or writes data from/to streams, files or DBs in a non-UTF-8 encoding breaks and must be changed. (See below for examples.)

Explicit conversion functions are only needed when calling Windows Ansi functions. Otherwise FPC takes care of converting encodings automatically. Empty conversion functions are provided to make your old code compile.

  • UTF8Decode, UTF8Encode - Almost all can be removed.
  • UTF8ToSys, SysToUTF8, UTF8ToAnsi, AnsiToUTF8 - Almost all can be removed.

File functions in the RTL now take care of file name encoding. All (?) file-name-related ...UTF8() functions can be replaced with the Delphi-compatible function without the UTF8 suffix. For example, FileExistsUTF8 can be replaced with FileExists.

Most UTF8...() string functions can be replaced with the Delphi compatible Ansi...() functions. The UTF8...() functions in LazUTF8 are registered as callback functions for the Ansi...() functions in SysUtils.
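For example, a cleanup might look like this (a sketch; file name and text are illustrative):

```pascal
program ReplaceDemo;
{$mode objfpc}{$H+}
uses
  SysUtils, LazUTF8;
var
  FileName, SomeText: String;
begin
  FileName := 'readme.txt';
  SomeText := 'grüße';
  // Lazarus 1.x style (still works, but no longer necessary):
  //   if FileExistsUTF8(FileName) then WriteLn(UTF8UpperCase(SomeText));
  // With the UTF-8 RTL the Delphi-compatible names handle full Unicode:
  if FileExists(FileName) then
    WriteLn(AnsiUpperCase(SomeText));
end.
```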

UTF-8 works in non-GUI programs, too. It only requires a dependency on LazUtils and placing the LazUTF8 unit in the uses section of the main program file.

Reading text file with Windows codepage

This is not compatible with former Lazarus code. In practice you must encapsulate the code dealing with the system codepage and convert the data to UTF-8 as early as possible.

You can set the right codepage for a string in advance:

 var
   StrIn, StrOut: String;
 ...
 SetCodePage(RawByteString(StrIn), 1252, false);  // tag as CP1252 (or Windows.GetACP()); False = do not convert the data
 ...

or use RawByteString and do an explicit conversion:

 uses ... , LConvEncoding;
 ...
 var
   StrIn: RawByteString;
   StrOut: String;
 ...
 StrOut := CP1252ToUTF8(StrIn,true);
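Putting the pieces together, a helper that reads a CP1252-encoded text file and returns UTF-8 might look like this (a sketch; the function name is illustrative and error handling is up to your application):

```pascal
uses
  Classes, SysUtils, LConvEncoding;

function LoadCP1252FileAsUTF8(const FileName: String): String;
var
  SL: TStringList;
begin
  SL := TStringList.Create;
  try
    SL.LoadFromFile(FileName);       // reads the raw Windows-codepage bytes
    Result := CP1252ToUTF8(SL.Text); // convert to UTF-8 as early as possible
  finally
    SL.Free;
  end;
end;
```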

ToDo ...

Code that depends very much on Windows codepage

Sometimes program code depends so heavily on the system codepage that using the new UTF-8 mode is not practical. There are two choices then:

  • Continue using Lazarus with FPC 2.6.4. This is a good solution for code that is in maintenance mode. Lazarus can still be compiled with FPC 2.6.4 for some time to come, and the old UTF8...() functions will remain.
  • Use FPC 3.0 without the new UTF-8 mode, by defining DisableUTF8RTL. This causes some nasty problems which are explained here: Lazarus with FPC3.0 without UTF-8 mode.

Helper functions for CodePoints

LazUtils will have special functions for dealing with codepoints. They currently use the old UTF8...() functions from LCL, but they can be made aliases to functions using another encoding in Delphi and in FPC's {$mode DelphiUnicode}.

  • CodePointCopy() - Like UTF8Copy()
  • CodePointLength() - Like UTF8Length()
  • CodePointPos() - Like UTF8Pos()
  • CodePointToWinCP()
  • WinCPToCodePoint()
  • CodePointByteCount() - Like UTF8CharacterLength()

An interesting question is how CodePointCopy, CodePointLength and CodePointPos should be implemented in Delphi, which does not provide such functions for UTF-16. (Or does it?) Practically all Delphi code uses plain Copy, Length and Pos where codepoint-aware functions should be used.
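A hypothetical usage sketch, assuming the planned CodePoint...() names above become available in LazUtils (the exact unit to use is not yet fixed, so only the body is shown):

```pascal
// Hypothetical: assumes the planned CodePoint...() functions exist.
var
  S: String;
begin
  S := 'aä';                       // 'a' = 1 byte, 'ä' = 2 bytes in UTF-8
  WriteLn(Length(S));              // 3  (bytes, with the UTF-8 RTL)
  WriteLn(CodePointLength(S));     // 2  (codepoints)
  WriteLn(CodePointCopy(S, 2, 1)); // 'ä' (the complete second codepoint)
end;
```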

Dealing with UTF8 strings and characters in code

See details in UTF8_strings_and_characters.

String Literals

Sources should be saved in UTF-8 encoding. Lazarus creates such files by default. You can change the encoding of imported files via right click in source editor / File Settings / Encoding.

In most cases {$codepage utf8} / -FcUTF8 is not needed. ToDo: explain more...

  • AnsiString/String literals work with and without {$codepage utf8} / -FcUTF8.
const s: string = 'äй';
  • ShortString literals work only without {$codepage utf8} / -FcUTF8. You can disable -FcUTF8 for a single unit with a modeswitch:
unit unit1;
{$Mode ObjFPC}{$H+}
{$modeswitch systemcodepage} // disable -FcUTF8
interface
const s: string[15] = 'äй';
end.

Alternatively you can use a ShortString with $codepage by writing the bytes as character codes:

unit unit1;
{$Mode ObjFPC}{$H+}
{$codepage utf8}
interface
const s: String[15] = #$C3#$A4; // ä
end.
  • WideString/UnicodeString/UTF8String literals only work with {$codepage utf8} / -FcUTF8.
unit unit1;
{$Mode ObjFPC}{$H+}
{$codepage utf8}
interface
const ws: WideString = 'äй';
end.

FPC codepages

The compiler (FPC) supports specifying the codepage in which the source code has been written via the command line option -Fc (e.g. -Fcutf8) and the equivalent codepage directive (e.g. {$codepage utf8}). In this case, rather than literally copying the bytes that represent the string constants in your program, the compiler interprets all character data according to that codepage. There are two things to watch out for, though:

  • On Unix platforms, a widestring manager must be included by adding the cwstring unit to the uses clause. Without it, the program will not be able to convert all character data correctly at run time.

It is included by default with the new UTF-8 mode although it makes the program dependent on libc and makes cross-compilation harder.

  • The compiler converts all string constants that contain non-ASCII characters to widestring constants. These are automatically converted back to ansistring (either at compile time or at run time), but there is one caveat if you mix characters and ordinal values in a single string constant:

For example:

program project1;
{$codepage utf8}
{$mode objfpc}{$H+}
{$ifdef unix}
uses cwstring;
{$endif}
var
  a,b,c: string;
begin
  a:='ä';
  b:='='#$C3#$A4; // #$C3#$A4 is UTF-8 for ä
  c:='ä='#$C3#$A4; // after the non-ASCII 'ä' the compiler interprets #$C3 as a widechar
  writeln(a,b); // writes ä=ä
  writeln(c);   // writes ä=Ã¤
end.

When compiled and executed, this will write:

ä=ä
ä=Ã¤

The reason is that once the ä is encountered, the rest of the constant assigned to c is parsed as a widestring, as mentioned above. As a result, #$C3 and #$A4 are interpreted as widechar(#$C3) ('Ã') and widechar(#$A4) ('¤'), rather than as ansichars.

Open issues

  • TFormatSettings Char fields, for example ThousandSeparator, DecimalSeparator, DateSeparator, TimeSeparator and ListSeparator: these should be replaced with string to support UTF-8. For example, under Linux with LC_NUMERIC=ru_RU.utf8 the thousand separator is the two-byte no-break space (#160). Workaround: use a space instead of the nbsp.
  • Unit registry, TRegistry: this unit uses Windows Ansi functions, therefore you need to use UTF8ToWinCP and WinCPToUTF8. Formerly it needed UTF8ToSys.
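The separator workaround above can be sketched like this (assuming your program formats numbers with DefaultFormatSettings):

```pascal
program SeparatorDemo;
{$mode objfpc}{$H+}
uses
  SysUtils;
begin
  // ThousandSeparator is a single Char, so it cannot hold the two-byte
  // UTF-8 no-break space that some locales (e.g. ru_RU.utf8) use.
  // Workaround: use a plain space instead.
  DefaultFormatSettings.ThousandSeparator := ' ';
  WriteLn(Format('%.0n', [1234567.0])); // e.g. 1 234 567
end.
```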

Future

The goal of the FPC project is to create a Delphi-compatible UnicodeString (UTF-16) based solution, but it is not ready yet. It may take some time to be ready.

This UTF-8 solution for the LCL in its current form can be considered temporary. In the future, when FPC supports UnicodeString fully in the RTL and FCL, the Lazarus project will provide an LCL solution that uses it. At the same time the goal is to preserve UTF-8 support, although that may require changes to string types or something similar. Nobody knows the details yet. We will tell when we know...

In essence LCL will probably have 2 versions, one for UTF-8 and one for UTF-16.

FAQ

What about Mode DelphiUnicode?

The {$mode delphiunicode} was added in FPC 2.7.1 and is like {$Mode Delphi} with {$ModeSwitch UnicodeStrings}. See the next question about ModeSwitch UnicodeStrings.

What about ModeSwitch UnicodeStrings?

The {$ModeSwitch UnicodeStrings} was added in FPC 2.7.1 and defines "String" as "UnicodeString" (UTF-16), "Char" as "WideChar", "PChar" as "PWideChar", and so forth. This affects only the current unit. Other units, including those used by this unit, have their own "String" definition. Many RTL strings and types (e.g. TStringList) use 8-bit strings, which require conversions from/to UnicodeString; the compiler adds these automatically. The LCL uses UTF-8 strings. It is recommended to use UTF-8 sources and compile with "-FcUTF8".
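A minimal sketch of a unit using this modeswitch (the unit and function names are illustrative):

```pascal
unit UnicodeDemo;
{$mode objfpc}{$H+}
{$ModeSwitch UnicodeStrings} // String = UnicodeString, Char = WideChar here
{$codepage utf8}

interface

function Greet(const Name: String): String; // String is UTF-16 in this unit

implementation

function Greet(const Name: String): String;
begin
  // Assigning the result to an 8-bit string in another unit triggers an
  // automatic conversion added by the compiler.
  Result := 'Hello ' + Name;
end;

end.
```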