UTF8 strings and characters
The beauty of UTF-8
Bytes starting with a 0 bit (0xxxxxxx) are reserved for ASCII-compatible single-byte characters. With multi-byte characters, the number of leading 1-bits in the first byte determines the number of bytes the character occupies. Like this:
- 1 byte: 0xxxxxxx
- 2 bytes: 110xxxxx 10xxxxxx
- 3 bytes: 1110xxxx 10xxxxxx 10xxxxxx
- 4 bytes: 11110xxx 10xxxxxx 10xxxxxx 10xxxxxx
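The table above translates directly into code. A minimal sketch (the LazUtf8 unit already provides this functionality, e.g. as UTF8CodepointSize; this standalone helper is only for illustration):

```pascal
// Returns how many bytes the UTF-8 character starting with byte B occupies.
// Returns 0 for a continuation byte (10xxxxxx), which never starts a character.
function LeadByteToLength(B: Byte): Integer;
begin
  if B < $80 then Result := 1        // 0xxxxxxx: single-byte ASCII
  else if B < $C0 then Result := 0   // 10xxxxxx: continuation byte
  else if B < $E0 then Result := 2   // 110xxxxx: start of 2-byte character
  else if B < $F0 then Result := 3   // 1110xxxx: start of 3-byte character
  else Result := 4;                  // 11110xxx: start of 4-byte character
end;
```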
The design of UTF-8 has some benefits over other encodings:
- It is backwards compatible with ASCII and produces compact data for Western languages. Since markup language tags and other metadata consist mostly of ASCII, UTF-8 also produces compact documents in any language.
- The integrity of multi-byte data can be verified from the number of '1'-bits at the beginning of each byte.
- You can always find the start of a multi-byte character even if you jumped to a random byte position.
- A byte at a certain position in a multi-byte sequence can never be confused with a byte at another position, because lead bytes and continuation bytes occupy disjoint value ranges. This allows using the old fast string functions like Pos() and Copy() in many situations. See the examples below.
- Robust code. Code that deals with codepoints must be done right with UTF-8 because multi-byte characters are common. For UTF-16 there is plenty of sloppy code that assumes characters have a fixed width.
- The most widely used operating systems, including Android, now use UTF-8 natively. It makes sense to use it in applications, too. Windows used to be the dominant platform, but that is no longer the case.
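The self-synchronizing property mentioned above can be sketched in a few lines: starting from an arbitrary byte index, skip backwards over continuation bytes (10xxxxxx) until a start byte is found. This is an illustrative helper, not a LazUtf8 routine:

```pascal
// Given a 1-based byte index into a UTF-8 string, move backwards to the
// first byte of the character that contains that byte.
function FindCharStart(const S: string; ByteIndex: Integer): Integer;
begin
  Result := ByteIndex;
  // Continuation bytes always have the bit pattern 10xxxxxx ($80..$BF),
  // so they can be skipped without decoding anything.
  while (Result > 1) and ((Ord(S[Result]) and $C0) = $80) do
    Dec(Result);
end;
```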
Examples
Simply iterating over characters as if the string were an array of equal-sized elements does not work with UTF-8 encoded strings. This is not specific to UTF-8; UTF-16 has the same issue. UTF-32 is a Unicode encoding in which every character has a fixed size (4 bytes). If you want to iterate over the characters of a UTF-8 string, there are basically two ways:
- iterate over bytes - useful for searching a substring or when looking only at the ASCII characters in the UTF-8 string, for example when parsing XML files.
- iterate over characters - useful for graphical components like SynEdit, for example when you want to know the third printed character on the screen.
Searching a substring
Due to the special nature of UTF-8, you can simply use the normal string functions for searching a substring. Searching for a valid UTF-8 string with Pos() will always return a valid UTF-8 byte position:
uses lazutf8;
...
procedure Where(SearchFor, aText: string);
var
  BytePos: LongInt;
  CharacterPos: LongInt;
begin
  BytePos := Pos(SearchFor, aText);
  if BytePos = 0 then
  begin
    writeln('The substring "', SearchFor, '" was not found.');
    Exit;
  end;
  // UTF8Length counts the codepoints in the BytePos-1 bytes before the match;
  // add 1 to get the 1-based character position.
  CharacterPos := UTF8Length(PChar(aText), BytePos-1) + 1;
  writeln('The substring "', SearchFor, '" is in the text "', aText, '"',
    ' at byte position ', BytePos, ' and at character position ', CharacterPos);
end;
Search and copy
Another example of how Pos(), Copy() and Length() work with UTF-8. This function has no code to deal with UTF-8 encoding, yet it always works correctly with any valid UTF-8 text.
function SplitInHalf(Txt, Separator: string; out Half1, Half2: string): Boolean;
var
  i: Integer;
begin
  i := Pos(Separator, Txt);
  Result := i > 0;
  if Result then
  begin
    Half1 := Copy(Txt, 1, i-1);
    Half2 := Copy(Txt, i+Length(Separator), Length(Txt));
  end;
end;
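A hypothetical call (assuming the function above is in scope and the source file is UTF-8 encoded), showing that the byte-oriented functions split at the right place even when the text contains multi-byte characters:

```pascal
var
  Left, Right: string;
begin
  if SplitInHalf('Grüße=Tschüß', '=', Left, Right) then
    WriteLn(Left, ' | ', Right);  // Grüße | Tschüß
end.
```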
Iterating over string looking for ASCII characters
If you only want to find characters in the ASCII range, you can use the Char type and compare with Txt[i] just like in the old times. Most parsers do that and they keep working.
procedure ParseAscii(Txt: string);
var
  i: Integer;
begin
  for i := 1 to Length(Txt) do
    case Txt[i] of
      '(': PushOpenBracketPos(i);  // handlers defined elsewhere
      ')': HandleBracketText(i);
    end;
end;
Iterating over string looking for Unicode characters
If you want to find all occurrences of a certain character in a string, you can call PosEx() repeatedly.
If you want to test for several different characters inside a loop, you can still use the fast Copy() and Length(). UTF-8 specific functions could be used, but they are not needed.
procedure ParseUnicode(Txt: string);
var
  Ch1, Ch2, Ch3: String;
  i: Integer;
begin
  Ch1 := 'Й'; // Characters to search for.
  Ch2 := 'ﯚ';
  Ch3 := 'Å';
  for i := 1 to Length(Txt) do
  begin
    if Copy(Txt, i, Length(Ch1)) = Ch1 then
      DoCh1(...)
    else if Copy(Txt, i, Length(Ch2)) = Ch2 then
      DoCh2(...)
    else if Copy(Txt, i, Length(Ch3)) = Ch3 then
      DoCh3(...);
  end;
end;
The loop could be optimized by jumping over the already handled parts.
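That optimization can be sketched as follows: a while loop instead of a for loop, so the index can be advanced past a whole matched character. A minimal single-character variant of the loop above, with the (hypothetical) handler call omitted:

```pascal
procedure ParseUnicodeFaster(Txt: string);
var
  Ch1: string;
  i: Integer;
begin
  Ch1 := 'Й';  // Character to search for.
  i := 1;
  while i <= Length(Txt) do
  begin
    if Copy(Txt, i, Length(Ch1)) = Ch1 then
    begin
      // Handle the match here ...
      Inc(i, Length(Ch1));  // jump over the whole matched character
    end
    else
      Inc(i);
  end;
end;
```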
Iterating over string analysing individual codepoints
This code copies each codepoint into a variable of type String which can then be processed further.
uses lazutf8, Dialogs;
...
procedure IterateUTF8(S: String);
var
  CurP, EndP: PChar;
  Len: Integer;
  ACodePoint: String;
begin
  CurP := PChar(S);  // if S='' then PChar(S) returns a pointer to #0
  EndP := CurP + Length(S);
  while CurP < EndP do
  begin
    Len := UTF8CharacterLength(CurP);
    SetLength(ACodePoint, Len);
    Move(CurP^, ACodePoint[1], Len);
    // A single codepoint is copied from the string. Do your thing with it.
    ShowMessageFmt('CodePoint=%s, Len=%d', [ACodePoint, Len]);
    // ...
    Inc(CurP, Len);
  end;
end;
Accessing bytes inside one UTF8 character
UTF-8 encoded characters vary in length, so the best way to access them in order is iteration. For iterating over the characters use this code:
uses lazutf8;
...
procedure DoSomethingWithString(AnUTF8String: string);
var
  p: PChar;
  CharLen: integer;
  FirstByte, SecondByte, ThirdByte, FourthByte: Char;
begin
  p := PChar(AnUTF8String);
  // A while loop avoids reading past the terminating #0 on an empty string.
  while p^ <> #0 do
  begin
    CharLen := UTF8CharacterLength(p);
    // Here you have a pointer to the char and its length.
    // You can access the bytes of the UTF-8 char like this:
    if CharLen >= 1 then FirstByte := p[0];
    if CharLen >= 2 then SecondByte := p[1];
    if CharLen >= 3 then ThirdByte := p[2];
    if CharLen = 4 then FourthByte := p[3];
    Inc(p, CharLen);
  end;
end;
Accessing the Nth UTF8 character
Besides iterating, one might also want random access to UTF-8 characters.
uses lazutf8;
...
var
  AnUTF8String, NthChar: string;
  N: Integer;
begin
  NthChar := UTF8Copy(AnUTF8String, N, 1);
end;
Showing character codepoints with UTF8CharacterToUnicode
The following demonstrates how to show the 32-bit codepoint value of each character in a UTF-8 string:
uses lazutf8;
...
procedure IterateUTF8Characters(const AnUTF8String: string);
var
  p: PChar;
  unicode: Cardinal;
  CharLen: integer;
begin
  p := PChar(AnUTF8String);
  repeat
    unicode := UTF8CharacterToUnicode(p, CharLen);
    writeln('Unicode=', unicode);
    Inc(p, CharLen);
  until (CharLen = 0) or (unicode = 0);
end;
Decomposed characters
Due to the ambiguity of Unicode, compare functions and Pos() may show unexpected behaviour when, for example, one of the strings contains decomposed characters while the other uses the precomposed codes for the same letter. This is not handled automatically by the RTL. It is not specific to any encoding but applies to Unicode in general.
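A concrete illustration (assuming an FPC program with a UTF-8 source file): the precomposed and decomposed byte sequences for the same letter 'ä' compare as unequal with a plain byte-wise comparison:

```pascal
const
  Precomposed = #$C3#$A4;     // 'ä' as the single codepoint U+00E4 (2 bytes)
  Decomposed  = 'a'#$CC#$88;  // 'a' plus U+0308 COMBINING DIAERESIS (3 bytes)
begin
  // Both render as 'ä', but the underlying byte sequences differ,
  // so a byte-wise comparison reports them as different strings.
  WriteLn(Precomposed = Decomposed);  // FALSE
end.
```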
Mac OS X
The file functions of the FileUtil unit also take care of Mac OS X specific behaviour: OS X normalizes filenames. For example, the filename 'ä.txt' can be encoded in Unicode with two different sequences (#$C3#$A4 and 'a'#$CC#$88). Under Linux and BSD you can create filenames with both encodings. OS X automatically converts the precomposed ä to the three-byte decomposed sequence. This means:
if Filename1 = Filename2 then ...                         // not sufficient under OS X
if AnsiCompareFileName(Filename1, Filename2) = 0 then ... // not sufficient under fpc 2.2.2, not even with cwstring
if CompareFilenames(Filename1, Filename2) = 0 then ...    // this always works (unit FileUtil or FileProcs)