• To: math-font-discuss@cogs.susx.ac.uk
• Subject: Re: About atomic encoding
• From: vieth@convex.rz.uni-duesseldorf.de (Ulrik Vieth)
• Date: Fri, 8 Apr 1994 11:08:40 +0200 (MET)
• Content-Length: 3848
• Organization: Heinrich-Heine-Universitaet Duesseldorf



I tend to think that this might turn out to be the method of
choice for some math fonts as well.

Just remember Justin's encoding proposals, which indicated that
we are going to hit the 256-character limit of the math core font
if we include every character that might be needed there in one
field or another, yet hardly anybody needs all these symbols
at the same time. For example, physicists who need various barred
or slashed letters in addition to \hbar or \hslash can do well
without the Hebrew letters, while number theorists can do well
without those special letters needed in physics.

Using an atomic encoding for real fonts would thus allow producing
customized versions of the core font from the same METAFONT source
using only different driver files without any loss in quality,
while keeping most of the standard letters unchanged. Using an
atomic encoding for virtual fonts, on the other hand, might have
the advantage of saving some disk space, but this would require
another real font for the additional letters, so it wouldn't
save very much, and I'm not sure about the quality.
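To make the first alternative concrete: in the style of the
Computer Modern sources, each variant would get its own small
driver file that sets the shared parameters and then names a
program file inputting a different selection of character
programs. All file and program names in this sketch are purely
hypothetical:

    % physcr10.mf --- hypothetical 10pt driver for a physics
    % variant of the math core font
    if unknown mcorebase: input mcorebase fi;
    font_identifier:="MCOREPHYS"; font_size 10pt#;
    input mcorepar;         % parameters shared by all variants
    generate mcorephys      % program file for this variant

    % mcorephys.mf --- selects the character programs, e.g.:
    %   input mcpunct;   % punctuation, delimiters (required)
    %   input mclett;    % letters, one set of greek (required)
    %   input mcphys;    % barred and slashed letters (optional)

Since every variant inputs the same character programs for the
required part, the shapes there would be pixel-identical across
variants.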

To ensure a reasonable level of compatibility between different
variants of the math core font, a new encoding standard might
specify a required part (e.g. the range 32--191, including the
basic stuff like punctuation, delimiters, numerals, letters and
one set of Greek) and an optional part (including the second
set of Greek and various other symbols), which can be customized
according to different needs in different fields.

In any case, it would be necessary to produce different sets of
TeX macros to access these characters, but most of them would
be the same in all variants and they could be produced from one
single source file using different DOCSTRIP modules.
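As an illustration, the variant-specific symbol declarations
could be kept in one .dtx source and extracted with DOCSTRIP
guards; the file names, module names, and slot numbers below
are only illustrative:

    % In mathcore.dtx (hypothetical source file):
    %<*common>
    \DeclareMathSymbol{\alpha}{\mathalpha}{letters}{"0B}
    %</common>
    %<*physics>
    \DeclareMathSymbol{\hslash}{\mathord}{letters}{"7D}
    %</physics>

    % mathcore.ins --- generates the macro files per field:
    \input docstrip.tex
    \generate{\file{mcore-phys.sty}{\from{mathcore.dtx}{common,physics}}}
    \endbatchfile

Only the declarations guarded by the optional modules would
differ between variants; the common module stays identical.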

To explain why I like this approach, here's a personal opinion
from a physicist's point of view: I'm not completely happy with
the idea of having (a) a math core font with italic letters
and two sets of Greek (italic and upright) and (b) a math
operator font with upright roman letters and no upright Greek.
A macro to typeset particle symbols in upright type could not
be implemented simply as a math alphabet in this framework;
instead it would have to take roman letters from the operator
font and Greek letters from the second set in the core font.
On the other hand, I need various roman letters in upright
type, and if they are taken from the operator font rather than
the core font, there are problems with inter-letter spacing.
(Just consider an upright `i' followed by a `round' letter such
as \omega or \sigma, compared to the case when an upright `i'
is followed by a `straight' letter such as \hbar or `k'.)

An atomic encoding would allow me to replace the second set
of upright Greek in the core font with a special selection of
upright letters needed in physics, while I could use an upright
version of the core font with Greek and roman letters as the
math operator font as well as for particle symbols. This upright
version of the math core font could even leave out many of
the symbols that are not needed in the variants that are used
simply as a `letters' font. The same applies to the various
bold, bold italic, and bold sans serif italic versions of the
core font, which are also used simply as another `letters' font.
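With such an upright variant available, the particle-symbol
macro could then be set up as an ordinary math alphabet. A
minimal NFSS2 sketch, assuming a hypothetical font family
`mcup' and encoding name `OMC', and assuming the Greek letters
are declared with class \mathalpha so that they follow math
alphabet changes:

    % `mcup' and `OMC' are assumptions, not existing names
    \DeclareSymbolFont{operators}{OMC}{mcup}{m}{n}
    \DeclareMathAlphabet{\particle}{OMC}{mcup}{m}{n}
    % usage: $\particle{K}^0 \to \particle{\pi}^+ \particle{\pi}^-$

Because roman and Greek letters would then sit in one font, the
inter-letter spacing problems described above would not arise.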

In short, an atomic encoding would allow me to create an optimal
math core font for physics without sacrificing compatibility
with other versions of the core font too much, and it would
also make it possible to leave out symbols that are not needed
in variants of the core font that are used for various math
alphabets only.

Any opinions? Perhaps this might initiate some discussion about
math fonts here again...

Ulrik Vieth.