FMi on text symbol encodings

Lars Hellström Lars.Hellstrom@math.umu.se
Tue, 2 Mar 1999 13:23:21 -0500


Hilmar Schlegel wrote:
>Even "pure" T1 is not always what is optimal (e.g. the question whether
>to fake ffl only because it is in T1, for printing a dvi a fake is
>necessary, for compiling the source you would not want it to be in the
>font).

Aren't you in trouble either way if you want to print a dvi, since chances
are the metrics won't match? As for the source variant, can't you just
leave out the ligature command from the ligkern table but include a faked
glyph in the slot?
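
To illustrate the second point, here is a minimal sketch in fontinst's .etx conventions (the slot names and the variant encoding itself are hypothetical, not from any distributed file): the ffl slot is kept, so a faked glyph can be installed there at the virtual-font level, but the ligature rule that would form it is left out, so compiling a source never produces it automatically.

```latex
% Hypothetical variant .etx fragment (fontinst syntax assumed):
\setslot{ff}
   \ligature{LIG}{i}{ffi}
%  \ligature{LIG}{l}{ffl}   % rule omitted: ffl never formed from source
\endsetslot

\setslot{ffl}   % slot retained so a fake (e.g. f + fl) can be
\endsetslot     % built into it at the .mtx/virtual-font level
```

The point is that slot occupancy and ligature programs are independent: a glyph can sit in the encoding without any ligkern rule ever selecting it.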

[snip]
>My suggestion would be rather to implement a way to switch
>"codepages" (i.e. a clear way to change encodings) instead to put all
>what doesn't fit in the historical scheme into a disordered "symbol
>collection".

Don't you get that with T1/T2A/T2B/T2C etc.? Or are you criticizing X2? It
doesn't, in any case, seem to have any relevance for the discussion of TS1.
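
For what it's worth, switching between such encodings is already expressible through LaTeX's standard fontenc interface; a minimal sketch (assuming suitable T2A fonts are installed):

```latex
% Minimal sketch: "codepage" switching via the standard encoding
% mechanism of LaTeX2e.
\documentclass{article}
\usepackage[T2A,T1]{fontenc}  % last-listed option (T1) is the default
\begin{document}
Latin text in T1, then
{\fontencoding{T2A}\selectfont text set with the T2A encoding,}
and back to T1 afterwards.
\end{document}
```

Each encoding change reselects metrics and ligature programs consistently, which is exactly the "clear way to change encodings" being asked for.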


>The idea to have only one encoding for zillions of fonts which provide a
>wide spectrum of different character sets (aside from the Adobe Standard
>Roman Character Set) causes usually some headaches. At least depending
>on the available ligatures there are usually differences between
>individual fonts. The key problem is that Latex has no concept with
>respect to encodings: there is some single "standard" encoding and
>that's it. Conceptual problems to change uc/lc codes imply some inherent
>difficulties to change that for the long run.

You could have a look in CTAN at

 macros/latex/contrib/supported/relenc/

It is one approach to making encodings more flexible without
violating the basic invariance of interpretation when the font changes.
(I wrote the package, just in case anyone wonders.)

Lars Hellström