
Re: PostScript font installation: my evolving tools...


At 06:22 PM 98/01/05 -0800, you wrote:

>Just what is a skewchar/skewkern anyway, and how does one ensure that
>it is honest?

In math fonts the character advance width in the TFM is not the `true'
advance width of the character, but the place to put a subscript.
The TFM width plus the italic correction is where to put a superscript;
this sum is also taken to be the `true' advance width.
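The convention above can be written out as a tiny sketch (function and
variable names are mine, purely illustrative; values are in AFM units,
1000 per em):

```python
# How TeX reads math-font metrics: the TFM width marks the subscript
# attachment point, and TFM width + italic correction marks both the
# superscript attachment point and the `true' advance width.

def attachment_points(tfm_width, italic_correction):
    """Return (subscript_x, superscript_x) offsets from the glyph origin."""
    subscript_x = tfm_width                        # TFM width = subscript position
    superscript_x = tfm_width + italic_correction  # also the true advance width
    return subscript_x, superscript_x

# A slanted letter with TFM width 550 and italic correction 60:
print(attachment_points(550, 60))   # (550, 610)
```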

Since in a math font the basic metrics are misused in this fashion, the
usual TeX algorithm for centering accents cannot work (it shifts the
accent to the right by one half of the difference between the width of
the base character and the width of the accent character).  So accent
positioning is instead handled by introducing bogus kern pairs with a
mystical `skewchar'.  Those kern pairs are not really kern pairs, but
information on how much to shift an accent on a particular base
character.  Bizarre, what?
(See my 1993 TUG paper on math fonts for more such things.)
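A sketch of the two placement rules described above (this is my reading
of them; the helper names are made up, and widths and kerns are in AFM
units, 1000 per em):

```python
# Text-font rule: center the accent over the base character.
def accent_shift_text(base_width, accent_width):
    # Shift the accent right by half the difference in widths.
    return (base_width - accent_width) / 2

# Math-font rule: the kern pair (base, skewchar) is not a real kern;
# it is the extra horizontal shift for an accent over that base character.
def accent_shift_math(base_width, accent_width, skew_kern):
    return accent_shift_text(base_width, accent_width) + skew_kern

# Base of width 600, accent of width 400, skew kern of 35:
print(accent_shift_math(600, 400, 35))   # 135.0
```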

This by the way is why subscripts on Gamma, Upsilon and Psi in CM
are way too far away from those letters.  In CM, the upright uppercase Greek
letters come from a text font, so you can't adjust subscript, superscript
and accent positions separately (not a problem in European Modern,
MathTime and Lucida New Math, where the upright Greek letters come
from math fonts).

>Now that you mention it, I'm not entirely convinced I'm working out
>italic corrections in the most ideal way, either. So far, I've gone with
>the established method, which seemed to be to use the right sidebearing
>of the font, if it was negative (making it positive, of course). 

You need more.  I find that adding 30 to the above derived value works well
(this is on the 1000 units per em scale).  This is for a text font.  If you
use just max(0, -rsb) you risk collisions - unless all letters in the font
already have generous right sidebearings.
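The rule of thumb above, as a sketch (the padding of 30 is the figure
given above, for text fonts; values in AFM units, 1000 per em):

```python
# Italic correction for a text font: take the negated right sidebearing
# if it is negative, then pad to avoid near-collisions with a following
# upright character.

def italic_correction(right_sidebearing, padding=30):
    return max(0, -right_sidebearing) + padding

# A letter overhanging its advance width (right sidebearing -62):
print(italic_correction(-62))   # 92
# A letter with a positive sidebearing still gets the safety padding:
print(italic_correction(15))    # 30
```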

For a math font, there is no choice but to try lots of possible superscripts
and adjust the italic correction so none collide with the base character.
But this should be done only after trying lots of subscripts and adjusting
the `TFM width' until none of those collide with the character...

>what I feel I ought to be doing is looking also at the right sidebearing
>of the upright font too. This would mean that even after italic correction,
>an italic `f' would have a slight negative right sidebearing. Does this
>seem sensible or crazy?

I am afraid so.  In a text font the correction is there to prevent
collision with the next character when that is upright.  So referring
to the upright version of the trailing italic character doesn't do
anything useful.  And in the case of a math font it tells you where to
put a superscript, so it also doesn't make much sense to refer to the
upright version of the letter (even if there is one, which in CM is
often not the case, since, unlike Lucida New Math, CM has no upright
`math italic').

>I have two ideas for creating math letters. One is to follow a scheme
>like Thierry's, which is to decide we are only prepared to tolerate
>negative sidebearings up to a certain limit (I think it may be wrong
>to eliminate them entirely, `f' in CMMI10 has a right sidebearing of
>-62 AFM units and `j' has a negative sidebearing of -13 AFM units).

My experience is that you can get a first cut at the changes in advance
width and left sidebearing by some algorithmic means, but that you 
cannot avoid doing the hard work of trying a huge number of examples
and fine tuning.  Too many compromises are needed, and trade-offs
have to be considered individually.  An automated method would have
to `play it safe' and have very `loose' metrics.

>The other is to take a pair math and text fonts (e.g. CMMI10 & CMTI10
>or LucidaNewMath-Italic & LucidaBright-Italic), which I shall call the
>source fonts, and attempt to steal their wisdom and apply it another
>font, which I shall call the candidate font.  

This gives a good initial value to start from.
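One possible realization of this (the transfer rule is my guess, not
anything from the post above): express each glyph's italic correction as
a fraction of its advance width in the source math font, then apply the
same fraction to the candidate font's widths.  All names and numbers are
illustrative.

```python
# glyph -> (advance width, italic correction) in the source math font
SOURCE = {"f": (500, 62), "j": (400, 13)}
# advance widths in the candidate font
CANDIDATE_WIDTHS = {"f": 520, "j": 380}

def transfer_italic_corrections(source, candidate_widths):
    """Scale each source italic correction by the ratio of glyph widths."""
    out = {}
    for glyph, (w_src, ic_src) in source.items():
        if glyph in candidate_widths:
            out[glyph] = round(candidate_widths[glyph] * ic_src / w_src)
    return out

print(transfer_italic_corrections(SOURCE, CANDIDATE_WIDTHS))
```

This only provides the initial values; as noted below, a lot of careful
hand-tuning is still required.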

>Thoughts on both of these methods would certainly be welcome,

Well, an automated method that does a one-quarter decent job would
be most useful.  But in my experience a *lot* of careful hand-tuning is
required to make it work well.  And one can get pretty sick of looking
at pages of A_B and A^B for `all reasonable' values of A and B!  Then
try it at different sizes, and add checks for {A^B}^C, A^{B^C} etc.

There are programs (like URW Kernus) that do a reasonable job of
getting a first cut at kerning pairs.  That is the kind of technology needed
here.  Unfortunately such programs do not come with explanations of
the trade secret methods used :-)

Regards, Berthold.

Berthold K.P. Horn
Cambridge, MA		mailto:bkph@ai.mit.edu