FMi on text symbol encodings

Hilmar Schlegel
Wed, 3 Mar 1999 09:37:29 -0500

Lars Hellström wrote:
> Hilmar Schlegel wrote:
> >Even "pure" T1 is not always what is optimal (e.g. the question whether
> >to fake ffl only because it is in T1, for printing a dvi a fake is
> >necessary, for compiling the source you would not want it to be in the
> >font).
> Aren't you in trouble either way if you want to print a dvi, since chances
> are the metrics won't match? 

Sorry, I was not precise enough (this is a *really* nasty problem
;-). The case is very specific and not speculation: printing a dvi made
with EC fonts and getting Type1 versions into the result (for obvious
reasons). The metrics match, or at least should, more or less. One would
need fakes here if something is unavailable. The ligature example is
actually more relevant to the general discussion about "complete" fonts.

> As for the source variant, can't you just
> leave out the ligature command from the ligkern table but include a faked
> glyph in the slot?

Yes, this was the other discussion: do we need faked "ligatures" or
not? For the source you obviously don't, but for dvi files you do (the
latter case is, however, quite academic, since dvi problems relate
primarily to TS1 and not to ligatures).

It remains that for every TS1 character one would have to decide
whether a fake should be there (and then possibly be used by TeX) or
whether the slot stays empty. For ligatures there is no change in the
code, but for other characters, which are accessed directly, the code
must "know" what is really in the font (only skewchars and hyphenchars
can be switched font by font).
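That per-character decision can at least be made at metric-building time rather than in the macro code. A fontinst sketch (the command name \maybefake and the rule dimensions are assumptions of mine, purely for illustration):

```latex
% Sketch only: decide per character between the real glyph, a
% placeholder fake, and (by not calling this at all) an empty slot.
\setcommand\maybefake#1{%
   \ifisglyph{#1}\then
      % the base font provides it: use the real glyph as-is
   \Else
      \setglyph{#1}
         \glyphrule{333}{40}% crude black-rule placeholder (assumed size)
      \endsetglyph
   \Fi}
```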

Back from theory to the topic: some kind of definition of *what* is in
TS1 would be helpful as a first step toward an emulation. A list with
"names" (descriptive, private, or whatever), character names (to be used
in a Type1 font), and the intended shape would be in order. In the
present state even the purpose of some characters is unclear, and it is
quite difficult to set up encodings even if the fonts have the necessary
material available.

There should be two variants for faking:
- a maximally similar encoding vector for fonts with the standard
character set
- a fontinst etx + mtx (more friendly ;-)
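The first variant might look like a dvips-style .enc file (the file name, vector name, and slot assignments below are illustrative only, not a real TS1 table): slots for which the font has a standard glyph get that glyph's name, everything else stays /.notdef.

```postscript
% ts1-partial.enc (hypothetical): reencoding vector for fonts
% that carry only the standard character set.
/TS1PartialEncoding [
/grave /acute /circumflex /tilde        % first slots: accents (illustrative)
% ... remaining entries: standard names where the font has
% something usable, /.notdef where it does not ...
] def
```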

> [snip]
> >My suggestion would be rather to implement a way to switch
> >"codepages"(i.e. a clear way to change encodings) instead to put all
> >what doesn't fit in the historical scheme into a disordered "symbol
> >collection".
> Don't you get that with T1/T2A/T2B/T2C etc.? Or are you criticizing X2? It
> doesn't, in any case, seem to have any relevance for the discussion of TS1.

Of course TS1 will stay as it is - for practical purposes it is less
usable than it could be, however. My view is that instead of utilizing a
few fonts to switch around, it would be better to have more flexible
encodings (e.g. T1 is completely filled, but switching encodings in
mid-text is a problem). Actually, to be exact, encodings should be
font-specific...

>  macros/latex/contrib/supported/relenc/
> It is one approach on how encodings can be made more flexible without
> violating the basic invariance of interpretation when changing font. (I
> wrote the package, just in case anyone wonders.)

Many thanks for the pointer!

> Lars Hellström