
Re: hardcopy and printing latin2 (long)

Expand Messages
  • Mikolaj Machowski
    Message 1 of 16 , Jan 3, 2003
      On Fri, Jan 03, 2003 at 10:18:08AM -0000, Mike Williams wrote:
      > This encoding vector is not quite right - try this one instead:
      > --------------------8<-----------------------
      [snip]
      > The previous version was using Eth and eth instead of Dcroat and
      > dcroat (that is d and D with a bar). If your PS printer does not
      > recognise dcroat and Dcroat then you need to install a more recent
      > version of the font - Eth and eth are Icelandic characters but could
      > do at a pinch (the lower-case d-bar is a curly d so it won't look
      > that good in its context).

      Thanks.

      > > The solution is not perfect because after :hardcopy > vps.ps
      > > I get a proper looking file and it also prints OK, but note the line:
      > > % ISOLatin2Encoding
      > > and the phrase "VIM-Encoding-ISOLatin1 1.0 0" (hardcoded in ex_cmds2.c),
      > > although the file itself is in ISOLatin2.
      > This should not cause any problems. The phrase "VIM-Encoding-
      > ISOLatin1 1.0 0" will only be picked up by sophisticated document
      > processors, usually in large multi-user printing environments. AFAIK
      > the Linux printing systems do not make use of the phrase.

      OK. But when you're storing these files on disk it can lead to mistakes
      in the future.

      > > Wishlist:
      > > new option 'hcencoding' 'hcenc' ('hce') - hardcopy encoding. By default
      > > equal to 'encoding'.
      > After a quick 5 seconds thinking I cannot see a point where you would
      > not want to use the 'encoding' when selecting an encoding vector.
      > Ah, just thought of one - in the case of single byte encodings,
      > someone does not bother to set 'encoding' but does set the
      > character set with cXXXX in 'guifont', but this would only be on
      > Windows. Is there an equivalent on other platforms?

      I was thinking about importing files with various encodings. For example,
      I have a file in cp1250 under Linux (it should be iso-8859-2). I don't
      want to have to translate it into iso2 (to be sure I won't lose any chars
      I would have to translate cp -> utf -> iso2), but I do want to print it.
      Hmm. A special option may be too much. An argument for :hardcopy would be
      enough.
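
      For illustration, the manual round trip I would like to avoid looks
      roughly like this (just a minimal sketch - it assumes 'encoding' is
      utf-8, a Vim built with +iconv, and made-up file names):

      " read the file as cp1250, convert it on write to latin2, then print
      :edit ++enc=cp1250 list.txt
      :setlocal fileencoding=iso-8859-2
      :write /tmp/list-latin2.txt
      :hardcopy > list.ps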

      > > New directory in Vim tree: hardcopy. In this directory would be stored
      > > files with definitions of various encodings and this files would be
      > > included in produced ps files (similar to lang directory?)
      > There needs to be an encoding vector for each platform supporting PS
      > printing. For example a latin2 encoding vector for EBCDIC would be
      > totally different, as is the Windows one since that allows printing
      > of extra characters. As you have already pointed out VIM has 5
      > latin1 encoding vectors, so it will need 5 latin2 encodings too!
      > It would be useful if the list and mapping of available encodings was
      > read in by VIM, rather than have it compiled in. Then additional
      > encodings could be added as and when they are needed. It would need
      > some means of identifying the encoding needed for a specific platform,
      > or a defined naming scheme, something like evmac-latin2.ps or some
      > such.

      Yes. The number of encodings wouldn't be so high. On EBCDIC and VMS
      platforms there is no strong need for encodings other than iso1. As you
      wrote, Windows has a different system for printing and PS is not as
      necessary there as on Linux. I think MacOS X can use the standard iso
      encodings or Unicode. And there it is: all systems are on a steady crawl
      toward Unicode, and before too long the vector files should be system
      independent.

      > It may also be useful that the directory is subject to runtimepath
      > directory searching so that local replacements can be used (such as
      > using Eth/eth instead of Dcroat/dcroat).

      And this is IMO the most important thing in the short term. Playing with
      root each time I want to change the printing encoding is a BadThing(tm).
      I know Bram doesn't want to change Vim behaviour without a mature
      replacement, but a "simple" change from the hardcoded location of ev
      files (in my case /usr/share/vim/vim61) to the "standard" search through
      the 'rtp' dirs would be a very nice thing in Vim 6.2.

      > ps. Heh. I found a vector for koi8 and by changing Courier and family to
      > CourierISOC I got Cyrillic. Vectors for all(?) iso-8859 encodings and koi8
      > are in the a2ps resources.
      > There is a Cyrillic Courier font knocking around on the web somewhere
      > that could be used to enable printing of Cyrillic files on PS
      > printers without their own Cyrillic font.

      Hmm. I just tested. Changing the names of the fonts was unnecessary. I
      got the same results with "naked" Courier. Only the ev file matters here.
      Mdk9 with standard rpms.

      > However, this would
      > require VIM to embed the font definition in the generated PS file,
      > and would be yet more files to have stored in the runtime
      > directories. In this case it may be neater to have encodings, fonts,
      > and procsets sub-directories under a hardcopy or printing directory
      > in the runtime directory.

      IMO this is not necessary. Vim should use system resources whenever
      possible. Including font definitions in the Vim distro makes sense only
      if the definition covers the full Unicode glyph set. That would solve all
      problems, but it is going after an ant with a nuke :)

      Mikolaj
    • Mike Williams
      Message 2 of 16 , Jan 6, 2003
        On 4 Jan 2003 at 0:48, Mikolaj Machowski wrote:

        > OK. But when you're storing these files on disk it can lead in future to
        > some mistakes.

        Potentially yes. If you believe it is a problem then your best bet is
        to change the phrase to something unique like "MMachowski-Encoding-
        ISOLatin2 1.0 0" to prevent possible clashes with any other ISOLatin2
        encoded hardcopy files you generate now or in the future.

        > I was thinking about importing files with various encodings. For example
        > I have file in cp1250 under linux (it should be iso-8859-2). I don't
        > have to translate into iso2 (to be sure I won't lose any chars I have
        > translate cp -> utf -> iso2), but I want to print it. Hmm. Special
        > option may be too much. Argument for :hardcopy would be enough.

        Correct me if I am wrong - you have some files that have a cp1250 (in
        other words a Windows Latin2) encoding that you want to print from
        Linux, right? In this case you cannot translate from cp1250 to
        latin2 since there is not a one to one mapping. I would have thought
        you would set 'encoding' to cp1250 - or is this a Windows only feature?
        If it is then vim will need a 'printencoding' option that overrides
        the current 'encoding' value.
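
        Usage would then be something along these lines ('printencoding' here
        is just the name I am proposing, it is not an existing option):

        " sketch only - print a utf-8 buffer through an 8-bit encoding vector
        :set encoding=utf-8
        :set printencoding=cp1250
        :hardcopy > file.ps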

        This of course means that every VIM installation would need a
        complete set of encodings from every supported platform - but this is
        standard for PS generating applications.

        > Yes. Number of encodings wouldn't be so high. On EBCDIC and VMS
        > platforms there is no strong need for encodings other than iso1.

        You will have to ask the EBCDIC/VMS users ;-)

        > As you wrote Win has different system for printing and ps is not as
        > necessary as for linux. I think MacOS X can use standard iso encodings
        > or Unicode.

        I'll check but I guess there will still be a fair number of OS8/9
        text files lying around with non-iso encodings.

        > And here it is. All systems are on steady crawl toward Unicode and
        > vector files in not very long time should be system independent.

        Don't forget legacy data - most of which will not be translated to
        Unicode. Support for alternate 8-bit encodings will be needed for
        years yet.

        > And this is IMO most important thing in short term. Playing with
        > root each time I want to change printing encoding is a
        > BadThing(tm). I know Bram doesn't want to change Vim behaviour
        > without mature replacement but "simple" change of hardcoding place
        > of ev files (in my case /usr/share/vim/vim61) to "standard" looking
        > through 'rtp' dirs would be very nice thing in Vim6.2.

        I'll have a look into it. There are two issues, hardcoding file
        locations, and providing for alternate encodings.

        > Hmm. I just tested. Changing names of fonts was unnecessary. I have get
        > the same results with "naked" Courier. Only ev file is important here.
        > Mdk9 with standard rpms.

        It depends on the version of the font you have. Some versions will
        include Cyrillic characters, some will not, and that won't change
        anytime soon. For those printers, a downloadable font is a must.

        > IMO this is not necessary. Vim should use system resources
        > whenever this is possible. Including fonts definition into Vim
        > distro have sense only if this definition is full Unicode glyph.

        I am not suggesting including fonts in the distribution, but allowing
        users to add them if they need to, especially to support printing
        characters that are not part of the standard font installed with the
        printer. This is a longer term project ;)

        > This solve all problems but this is going with nuke against an ant
        > :)

        Very topical ;)

        Mike
        --
        Just when you think you've finally hit bottom, someone tosses you a shovel.
      • Mikolaj Machowski
        Message 3 of 16 , Jan 7, 2003
          On Mon, Jan 06, 2003 at 06:21:41PM -0000, Mike Williams wrote:
          > > I was thinking about importing files with various encodings. For example
          > > I have file in cp1250 under linux (it should be iso-8859-2). I don't
          > > have to translate into iso2 (to be sure I won't lose any chars I have
          > > translate cp -> utf -> iso2), but I want to print it. Hmm. Special
          > > option may be too much. Argument for :hardcopy would be enough.
          > Correct me if I am wrong - you have some files that have a cp1250 (in
          > other words a Windows Latin2) encoding that you want to print from
          > Linux, right? In this case you cannot translate from cp1250 to
          > latin2 since there is not a one to one mapping. I would have thought
          > you would set 'encoding' cp1250 - or is this a Windows only feature?

          This is a Windows feature - for historical reasons MS created its own
          set of encodings based on ANSI standards. On the internet and on *nix
          systems mostly(?) the ISO standards are used.
          cp1250 is the Windows equivalent of iso-8859-2 for East European
          countries.

          > If it is then vim will need a 'printencoding' option that overrides
          > the current 'encoding' value.

          After a while I am not sure if this is really necessary. I have to try
          one more thing. Stay tuned ;)

          > This of course means that every VIM installation would need a
          > complete set of encodings from every supported platform - but this is
          > standard for PS generating applications.
          > > Yes. Number of encodings wouldn't be so high. On EBCDIC and VMS
          > > platforms there is no strong need for encodings other than iso1.
          > You will have to ask the EBCDIC/VMS users ;-)

          Heh. First I have to find one ;)

          > > As you wrote Win has different system for printing and ps is not as
          > > necessary as for linux. I think MacOS X can use standard iso encodings
          > > or Unicode.
          > I'll check but I guess there will still be a fair number of OS8/9
          > text files lying around with non-iso encodings.

          That's right. And I know some people who don't want/can't leave their
          OS9 systems for a while.

          > > And here it is. All systems are on steady crawl toward Unicode and
          > > vector files in not very long time should be system independent.
          > Don't forget legacy data - most of which will not be translated to
          > Unicode. Support for alternate 8-bit encodings will be needed for
          > years yet.

          But these files can be translated into Unicode before printing if the
          system supports it.

          > > And this is IMO most important thing in short term. Playing with
          > > root each time I want to change printing encoding is a
          > > BadThing(tm). I know Bram doesn't want to change Vim behaviour
          > > without mature replacement but "simple" change of hardcoding place
          > > of ev files (in my case /usr/share/vim/vim61) to "standard" looking
          > > through 'rtp' dirs would be very nice thing in Vim6.2.
          > I'll have a look into it. There are two issues, hardcoding file
          > locations, and providing for alternate encodings.

          Maybe a small cheat? 'printencoding' would just be the name of the file
          included as the EV, taken from a new directory (e.g. "print") which
          could exist in the common 'rtp' places. The full name of such a file
          could be
          ~/.vim/print/ev_iso-8859-2.ps
          with the option set as
          printencoding=iso-8859-2

          Other values would be as in :help encoding-values
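
          From Vim script the lookup itself would be trivial, something like
          this (just an illustration of the 'rtp' search - the real code would
          of course be C, and the print/ev_* name is only my proposal):

          " look for the encoding vector file along 'runtimepath'
          :let evfile = globpath(&runtimepath, "print/ev_iso-8859-2.ps")
          :echo evfile == "" ? "not found" : "would include: " . evfile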

          > > Hmm. I just tested. Changing names of fonts was unnecessary. I have get
          > > the same results with "naked" Courier. Only ev file is important here.
          > > Mdk9 with standard rpms.
          > It depends on the version of the font you have. Some versions will
          > include Cyrillic characters, some will not, and that wont change
          > anytime soon. For those printers, a downloadable font is a must.

          I underscored the name of the distribution because since 9.0 Mdk is a
          GNU distribution. Thus many different fonts are downloadable for free.

          Mikolaj
        • Mike Williams
          Message 4 of 16 , Jan 8, 2003
            On 7 Jan 2003 at 23:17, Mikolaj Machowski wrote:

            > This is Windows feature - from historical reasons MS created its own set
            > of encodings based on ANSI standards. In internet and on *nix systems
            > are mostly(?) used ISO standards.
            > cp1250 is Windows equivalent of iso-8859-2 for East European countries.

            ... but with a few extra characters that I imagine those Windows
            users would actually like to be printed.

            > After a while I am not sure if this is really necessary. I have to try
            > one more thing. Stay tuned ;)

            Hmm, one scenario I can think of is where you are using a Unicode
            encoding. There are no Unicode fonts in PS (apart from with some CID
            fonts for printing CJK text, but even they are just a subset) so an
            encoding needs to be specified for printing.

            If no print encoding is given then VIM could default to latin1, or
            better still, allow a default to be specified and only fall back to
            latin1 if that isn't set. Do we need two options, or should
            printencoding take two values, a default encoding and the required
            encoding?

            > > > And here it is. All systems are on steady crawl toward Unicode and
            > > > vector files in not very long time should be system independent.
            > > Don't forget legacy data - most of which will not be translated to
            > > Unicode. Support for alternate 8-bit encodings will be needed for
            > > years yet.
            >
            > But this files can be translated into Unicode before printing if system
            > support it.

            However, since there are no Unicode printing fonts they would need to
            be translated back to an 8-bit encoding.

            > > > And this is IMO most important thing in short term. Playing with
            > > > root each time I want to change printing encoding is a
            > > > BadThing(tm). I know Bram doesn't want to change Vim behaviour
            > > > without mature replacement but "simple" change of hardcoding place
            > > > of ev files (in my case /usr/share/vim/vim61) to "standard" looking
            > > > through 'rtp' dirs would be very nice thing in Vim6.2.
            > > I'll have a look into it. There are two issues, hardcoding file
            > > locations, and providing for alternate encodings.
            >
            > Maybe a small cheat? 'printencoding' would be just name of file included
            > as EV in new directory (eg. "print") which file/directory could exist in
            > common 'rtp' places. Full name of such file could be
            > ~/.vim/print/ev_iso-8859-2.ps
            > Option - printencoding=iso-8859-2

            I like your thinking ;-)

            > Other values would be as in :help encoding-values

            Hmm, another problem - latin1 encodings are different for different
            platforms, but I 'spose we could tag that on to the given encoding
            name within VIM. However that does mean you could never print, say,
            a Windows Latin2 encoded file correctly from Linux. Whoops. I'll
            have to think a bit more on this.

            TTFN

            Mike
            --
            If you can't see the bright side, polish the dull side.
          • Mike Williams
            Message 5 of 16 , Jan 8, 2003
            • Bram Moolenaar
              Message 6 of 16 , Jan 8, 2003
                Mike Williams wrote:

                > Hmm, one scenario I can think of is where you are using a Unicode
                > encoding. There are no Unicode fonts in PS (apart from with some CID
                > fonts for printing CJK text, but even they are just a subset) so an
                > encoding needs to be specified for printing.

                It is really disappointing that printing Unicode directly will not work.
                How is this done on MS-Windows then? It has quite a bit of Unicode
                support and does PS printing.

                > If no print encoding is given then VIM could default to latin1, or
                > better still, allow a default to be specified and only if that isn't
                > default to latin1. Do we need two options or have printencoding take
                > two values, a default encoding and the required encoding?

                We could also use 'fileencoding'. Thus when 'printencoding' is empty
                use 'fileencoding' when it's an 8-bit encoding. Otherwise default to
                latin1. This is not really foolproof though.
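
                In Vim script terms the order would be something like this
                (illustration only - the real check belongs in the C printing
                code, and a:printenc stands in for the not-yet-existing
                'printencoding' value):

                function PrintEncChoice(printenc)
                  if a:printenc != ""
                    return a:printenc
                  endif
                  " an 8-bit 'fileencoding' can be used directly
                  if &fileencoding != "" && &fileencoding !~ '^u\(tf\|cs\)'
                    return &fileencoding
                  endif
                  return "latin1"
                endfunction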

                The Japanese vs Chinese problem still exists, thus specifying the
                language or font used would also be required.

                --
                GUEST: He's killed the best man!
                SECOND GUEST: (holding a limp WOMAN) He's killed my auntie.
                FATHER: No, please! This is supposed to be a happy occasion! Let's
                not bicker and argue about who killed who ...
                "Monty Python and the Holy Grail" PYTHON (MONTY) PICTURES LTD

                /// Bram Moolenaar -- Bram@... -- http://www.moolenaar.net \\\
                /// Creator of Vim - Vi IMproved -- http://www.vim.org \\\
                \\\ Project leader for A-A-P -- http://www.a-a-p.org ///
                \\\ Lord Of The Rings helps Uganda - http://iccf-holland.org/lotr.html ///
              • Mike Williams
                Message 7 of 16 , Jan 9, 2003
                  On 8 Jan 2003 at 20:53, Bram Moolenaar wrote:

                  > Mike Williams wrote:
                  >
                  > > Hmm, one scenario I can think of is where you are using a Unicode
                  > > encoding. There are no Unicode fonts in PS (apart from with some CID
                  > > fonts for printing CJK text, but even they are just a subset) so an
                  > > encoding needs to be specified for printing.
                  >
                  > It is really disappointing that printing Unicode directly will not work.
                  > How is this done on MS-Windows then? It has quite a bit of Unicode
                  > support and does PS printing.

                  Always with the questions! ;-)

                  There are two methods I can think of off the top of my head - Windows
                  generates the edge path of each glyph and fills it in. No use is
                  made of PS fonts on the printer. I have seen this done - it results
                  in large amounts of PS data and can result in slow printing (or
                  possibly failure for complex pages on printers with small amounts of
                  memory). The other approach is to use PS fonts, either one Windows
                  generates itself on the fly or one on the printer, and translate the
                  file character data into suitable character indexes for the generated
                  encoding (does not have to be a recognised encoding!) - this is the
                  more normal approach with PS/PDF generation.

                  It all depends on the type of font being used for printing and the
                  number of characters to be printed, and I don't want to get PS techie
                  here (to do with code ranges, type 0, 1 and 42 PS fonts, etc.).
                  Basically Windows will translate character codes when printing. VIM
                  could be made to do this as well, but it would require a lot of code
                  to be able to read font information, for a variety of font
                  technologies.

                  To support printing of Unicode encoded files, VIM will need to
                  translate the characters to a suitable font encoding. For CJK output
                  this may be a no brainer since there are Unicode encoded CID fonts
                  (although there will most likely be issues in terms of number of
                  bytes and endianness).

                  > > If no print encoding is given then VIM could default to latin1, or
                  > > better still, allow a default to be specified and only if that isn't
                  > > default to latin1. Do we need two options or have printencoding take
                  > > two values, a default encoding and the required encoding?
                  >
                  > We could also use 'fileencoding'. Thus when 'printencoding' is empty
                  > use 'fileencoding' when it's an 8-bit encoding. Otherwise default to
                  > latin1. This is not really foolproof though.

                  I thought the characters used for printing (as in what the code picks
                  up) are in 'encoding' encoding, not 'fileencoding' encoding. In
                  which case the characters would need to be first translated back to
                  'fileencoding' prior to printing - right?

                  > The Japanese vs Chinese problem still exists, thus specifying the
                  > language or font used would also be required.

                  Er, what problem is that? (Got my dumb hat on today.)

                  TTFN

                  Mike
                  --
                  I read the docs, but the nurses were more fun.
                • vipin aravind
                  Message 8 of 16 , Jan 9, 2003
                    Mike,

                    > Mike Williams wrote:
                    >
                    > > Hmm, one scenario I can think of is where you are using a Unicode
                    > > encoding. There are no Unicode fonts in PS (apart from with some CID
                    > > fonts for printing CJK text, but even they are just a subset) so an
                    > > encoding needs to be specified for printing.
                    >
                    > It is really disappointing that printing Unicode directly will not
                    > work. How is this done on MS-Windows then? It has quite a bit of
                    > Unicode support and does PS printing.

                    >Always with the questions! ;-)

                    >There are two methods I can think of off the top of my head - Windows
                    >generates the edge path of each glyph and fills it in.

                    In Windows NT, 2k, ... things are neat: the glyph indices along with
                    the relevant font information are downloaded to the printer if it is
                    not a device font. If it is a device font it downloads the glyph
                    indexes and the font name. It can also send bitmaps - look for an
                    option called "send truetype as bitmaps" in your printer's tabs. All
                    the printer driver does then is write the incoming glyphs onto a
                    bitmap and send that down. The above holds for the PCL printer
                    description language.

                    So it's neater on the Windows NT series.

                    Mike, it would be good if you could look at the
                    nt40ddk\src\print\psprint driver. You may be able to clone the text
                    handling code from the PostScript driver into vim for the problems
                    faced here. I don't have time to look into this as I am stuck with
                    another project besides the day job.

                    vipin
                  • vipin aravind
                    Message 9 of 16 , Jan 9, 2003
                      I took a quick look into the DDK code and it won't apply here very
                      easily, because they work in terms of glyphs. So it should be possible
                      to convert a text string to a glyph array; I guess there is a raw way
                      to do this described on the Microsoft site. As far as Windows
                      programming is concerned it is simple - GetGlyphIndices(...) gets the
                      job done. So the problems with PS printing on Windows might be fixable
                      with the tricks from the DDK PS driver, and ported to non-Windows
                      systems later.

                      vipin




                    • Mike Williams
                      Message 10 of 16 , Jan 9, 2003
                        On 9 Jan 2003 at 18:12, vipin aravind wrote:

                        > In windows NT,2k,... things are neat, the glyph indices along with
                        > the relevant font information is downloaded to the printer,if it is
                        > not a device font. If a device font it downloads the glyph indexes
                        > and the font-name. It can also send as bitmaps, look for an option
                        > called "send truetype as bitmaps" in your printer's tabs. All the
                        > printer driver does is that he would write the incoming Glyphs on
                        > to a bitmap and send it down.The above holds for the pcl printer
                        > description language.

                        As you say that is for PCL printers - PCL printers are heavily geared
                        towards Windows printing, especially with PCL6.

                        > So its neater on windows NT series.
                        >
                        > Mike, it would good if you can look at the
                        > nt40ddk\src\print\psprint driver. You may be able to clone the
                        > text handling code from the postscript driver into vim for the
                        > problems faced here. I don't have time to look into this as Iam
                        > stuck up with another project besides the day job.

                        Thanks for the pointer.

                        Mike
                        --
                        Yes-men: Fellows who hang around the man nobody noes.
                      • Mike Williams
                        Message 11 of 16 , Jan 9, 2003
                          Hmm, I only have Win2k ddk and it doesn't seem to have psprint
                          anymore. I guess MS think they have cracked it and no one will want
                          to write a PS driver. And a downside is that GetGlyphIndices is
                          not supported on Win 9x/ME.

                          For now I'll concentrate on supporting additional 8-bit encodings.

                          TTFN

                          On 9 Jan 2003 at 18:22, vipin aravind wrote:

                          > I looked fast into the ddk code and it won't apply here quite easily,
                          > because they play in terms of glyphs.So it should be possible to convert
                          > a text string to the glyph array, I guess there is raw way to do this in the
                          > Microsoft site. As far as windows programming is concerned it was simple,
                          > GetGlyphIndices(...) gets the job done.So might be the problems can be fixed
                          > with ps printing on on windows and with the tricks applied from the ddk ps
                          > Driver and ported to non-windows systems later.
                          >
                          > vipin

                          Mike
                          --
                          For every action, there is an equal and opposite criticism.
                        • Bram Moolenaar
                          Message 12 of 16 , Jan 9, 2003
                            Mike Williams wrote:

                            > There are two methods I can think of off the top of my head - Windows
                            > generates the edge path of each glyph and fills it in. No use is
                            > made of PS fonts on the printer. I have seen this done - it results
                            > in large amounts of PS data and can result in slow printing (or
                            > possibly failure for comples pages on printers with small amounts of
                            > memory).

                            Well, at least it works. Can we steal code for this from another
                            application? Although, this might be so big it's better installed
                            separately, which means you might as well install that other
                            application.

                            > It all depends on the type of font being used for printing and the
                            > number of characters to be printed, and I don't want to get PS techie
                            > here (to do with code ranges, type 0, 1 and 42 PS fonts, etc.).
                            > Basically Windows will translate character codes when printing. VIM
                            > could be made to do this as well, but it would require a lot of code
                            > to be able to read font information, for a variety of font
                            > technologies.

                            Don't think we want to do that.

                            > To support printing of Unicode encoded files, VIM will need to
                            > translate the characters to a suitable font encoding. For CJK output
                            > this may be a no brainer since there are Unicode encoded CID fonts
                            > (although there most likely be issues in terms of number of bytes and
                            > endianess).

                            Let's just restrict ourselves to the simple and small solutions. This
                            might require some cleverness though.

                            Could we use something of CUPS when it is installed? I thought it could
                            handle printing Unicode files (but can it convert to PS?).

                            > > > If no print encoding is given then VIM could default to latin1, or
                            > > > better still, allow a default to be specified and only if that isn't
                            > > > default to latin1. Do we need two options or have printencoding take
                            > > > two values, a default encoding and the required encoding?
                            > >
                            > > We could also use 'fileencoding'. Thus when 'printencoding' is empty
                            > > use 'fileencoding' when it's an 8-bit encoding. Otherwise default to
                            > > latin1. This is not really foolproof though.
                            >
                            > I thought the characters used for printing (as in what the code picks
                            > up) are in 'encoding' encoding, not 'fileencoding' encoding. In
                            > which case the characters would need to be first translated back to
                            > 'fileencoding' prior to printing - right?

                            When 'encoding' is "utf-8" and you can't print Unicode, you can use
                            'fileencoding' to know what subset of Unicode is being used in the file,
                            then you can use iconv() to convert the text and print (Assuming PS
                            printing for that latin encoding is supported). But when 'fileencoding'
                            is empty, you would need to use 'printencoding', the encoding that the
                            user has selected to be used for the printer. You might be able to
                            guess it by checking which Unicode characters the user uses (e.g.,
                            Cyrillic characters).
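
                            For a single line of text that conversion is as simple
                            as this (assuming +iconv and that the chosen encoding
                            really covers the characters used):

                            " convert a line from 'encoding' to the print encoding;
                            " iconv() returns an empty string if it completely fails
                            :let line = getline(".")
                            :let out = iconv(line, &encoding, "iso-8859-2")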

                            > > The Japanese vs Chinese problem still exists, thus specifying the
                            > > language or font used would also be required.
                            >
                            > Er, what problem is that? (Got my dumb hat on today.)

                            Chinese and Japanese use the same Unicode characters, but they need to
                            be printed with a different font. Thus you need to know if it's a
                            Japanese or a Chinese Unicode file. Although there could also be
                            markers in the file that indicate the language, but I don't think more
                            than a few people use that. Mostly the locale can be used to know the
                            language. Although the font has to be set manually anyway, because we
                            don't know which one is supported by the printer.

                            --
                            ARTHUR: Did you say shrubberies?
                            ROGER: Yes. Shrubberies are my trade. I am a shrubber. My name is Roger
                            the Shrubber. I arrange, design, and sell shrubberies.
                            "Monty Python and the Holy Grail" PYTHON (MONTY) PICTURES LTD

                            /// Bram Moolenaar -- Bram@... -- http://www.moolenaar.net \\\
                            /// Creator of Vim - Vi IMproved -- http://www.vim.org \\\
                            \\\ Project leader for A-A-P -- http://www.a-a-p.org ///
                            \\\ Lord Of The Rings helps Uganda - http://iccf-holland.org/lotr.html ///
                          • Mike Williams
                            Message 13 of 16 , Jan 11, 2003
                              On 9 Jan 2003 at 21:29, Bram Moolenaar wrote:

                              > Mike Williams wrote:
                              >
                              > > There are two methods I can think of off the top of my head - Windows
                              > > generates the edge path of each glyph and fills it in. No use is
                              > > made of PS fonts on the printer. I have seen this done - it results
                              > > in large amounts of PS data and can result in slow printing (or
                              > > possibly failure for comples pages on printers with small amounts of
                              > > memory).
                              >
                              > Well, at least it works.

                              Not always. It could easily fail on older PS printers with small
                              amounts of memory, especially with small point sizes (< 8).

                              > Can we steal code for this from another application?

                              You could, but why bother? That is what a PS interpreter does best -
                              convert font programs to filled outlines. It is really only
                              necessary for old (say 7 years or more) PS printers. You would be
                              effectively adding a PS interpreter to VIM (some font programs are
                              written in PS) - hmm, perhaps we should integrate VIM with
                              GhostScript ;-)

                              > Although, this might be so big it's better installed separately,
                              > which means you might as well install that other application.

                              Yup, might as well hook up with enscript or a2ps I guess.

                              > > It all depends on the type of font being used for printing and the
                              > > number of characters to be printed, and I don't want to get PS techie
                              > > here (to do with code ranges, type 0, 1 and 42 PS fonts, etc.).
                              > > Basically Windows will translate character codes when printing. VIM
                              > > could be made to do this as well, but it would require a lot of code
                              > > to be able to read font information, for a variety of font
                              > > technologies.
                              >
                              > Don't think we want to do that.

                              Ok, I think we need some basic principles behind printing from VIM.
                              At the moment this seems to be, the user knows what he is doing - he
                              understands encodings, he knows what encodings his printer fonts
                              support, and VIM will set up an encoding for printing as requested.

                              Have I missed anything? Is this what we want?

                              > Let's just restrict ourselves to the simple and small solutions. This
                              > might require some cleverness though.

                              Ok, I think we are losing some focus here. The original problem was
                              to simplify the addition of new 8-bit encodings (so we don't need a
                              patch/new version of VIM with each one added) which raised a side
                              issue of specifying the encoding to use where it is not explicit.
                              This has gone all the way to how to print Unicode encoded files with
                              arbitrary code ranges.

                              Without knowing the fonts available and their font
                              metrics/information it is impossible to solve this problem. The most
                              you can guarantee for a PS printer is support for a few platform
                              specific and ISO Latin encodings. We can assume a few other
                              encodings to support CJK fonts with Asian printers but that will fail
                              with the LaserJet you pick up from your European/American computer
                              store.

                              If VIM is to go beyond this then it will need to be able to query PS
                              font support (a PPD reader), read font metrics (AFM, TT, and OT
                              readers), and be able to embed fonts (not all printable fonts will be
                              stored on the printer - they are downloaded in jobs as they need
                              them). All of which I doubt you want to do.

                              > Could we use something of CUPS when it is installed? I thought it could
                              > handle printing Unicode files (but can it convert to PS?).

                              I haven't looked too deeply into CUPS - my feeling was that it was
                              some filters to convert files to PS based on their type. It may do
                              some probing to decide if a) the file is text and b) if it is single
                              or multi-byte. Beyond that I cannot say. AFAICT it is not a PS
                              printer driver for applications.

                              > When 'encoding' is "utf-8" and you can't print Unicode, you can use
                              > 'fileencoding' to know what subset of Unicode is being used in the file,
                              > then you can use iconv() to convert the text and print (Assuming PS
                              > printing for that latin encoding is supported). But when 'fileencoding'
                              > is empty, you would need to use 'printencoding', the encoding that the
                              > user has selected to be used for the printer. You might be able to
                              > guess it by checking which Unicode characters the user uses (e.g.,
                              > Cyrillic characters).

                              Two thoughts:

                              1) You would have to scan the whole file to establish all the code-
                              ranges used within it.
                              2) If we knew the set of fonts and encodings available we can switch
                              font depending on the code-range currently being printed if no one
                              font contained them all.

                              TTFN

                              Mike
                              --
                              A fart is the cry for help from a turd in trouble.
                            • Bram Moolenaar
                              Message 14 of 16 , Jan 12, 2003
                                Mike Williams wrote:

                                > > Can we steal code for this from another application?
                                >
                                > You could, but why bother? That is what a PS interpreter does best -
                                > convert font programs to filled outlines. It is really only
                                > necessary for old (say 7 years or more) PS printers. You would be
                                > effectively adding a PS interpreter to VIM (some font programs are
                                > written in PS) - hmm, perhaps we should integrate VIM with
                                > GhostScript ;-)

                                I was talking about stealing code for printing a plain-text Unicode
                                file. There must be other programs that do this... The size of the
                                code could be a problem though. We would only want to include a simple
                                solution, not one that adds Mbytes.

                                > Ok, I think we need some basic principlies behind printing from VIM.
                                > At the moment this seems to be, the user knows what he is doing - he
                                > understands encodings, he knows what encodings his printer fonts
                                > support, and VIM will set up an encoding for printing as requested.
                                >
                                > Have I missed anything? Is this what we want?

                                We want to allow the user to print the text he is editing. At the same
                                time we want Vim to remain a small and portable program. These can be
                                conflicting desires.

                                It seems the main trouble is that Unicode is the generic solution to
                                avoid problems with encodings, but we can't print Unicode.

                                > > Let's just restrict ourselves to the simple and small solutions. This
                                > > might require some cleverness though.
                                >
                                > Ok, I think we are losing some focus here. The original problem was
                                > to simplify the addition of new 8-bit encodings (so we don't need a
                                > patch/new version of VIM with each one added) which raised a side
                                > issue of specifying the encoding to use where it is not explicit.
                                > This has gone all the way to how to print Unicode encoded files with
                                > arbitrary code ranges.

                                Yes, we do want both eventually, thus when adding code for printing we
                                should keep in mind what direction we are going. We don't need to add
                                all the code right away, just make sure it doesn't conflict with what we
                                will get eventually.

                                > Without knowing the fonts available and their font
                                > metrics/information it is impossible to solve this problem. The most
                                > you can guarantee for a PS printer is support for a few platform
                                > specific and ISO Latin encodings. We can assume a few other
                                > encodings to support CJK fonts with Asian printers but that will fail
                                > with the LaserJet you pick up from your European/American computer
                                > store.

                                If it's really impossible to do inside Vim, then we should have some
                                solution in the form of installing another application (e.g., CUPS) that
                                takes care of it. We can document this, and for systems like FreeBSD it can
                                be installed automatically as a dependency.

                                > If VIM is to go beyond this then it will need to be able to query PS
                                > font support (a PPD reader), read font metrics (AFM, TT, and OT
                                > readers), and be able to embed fonts (not all printable fonts will be
                                > stored on the printer - they are downloaded in jobs as they need
                                > them). All of which I doubt you want to do.

                                Indeed, we want to keep Vim simple.

                                > > Could we use something of CUPS when it is installed? I thought it could
                                > > handle printing Unicode files (but can it convert to PS?).
                                >
                                > I haven't looked too deeply into CUPS - my feeling was that it was
                                > some filters to convert files to PS based on their type. It may do
                                > some probing to decide if a) the file is text and b) if it is single
                                > or multi-byte. Beyond that I cannot say. AFAICT it is not a PS
                                > printer driver for applications.

                                CUPS has been growing very fast, because several companies intend to use
                                it (SuSe, Apple). It would be worth checking out what the recent
                                version offers.

                                > > When 'encoding' is "utf-8" and you can't print Unicode, you can use
                                > > 'fileencoding' to know what subset of Unicode is being used in the file,
                                > > then you can use iconv() to convert the text and print (Assuming PS
                                > > printing for that latin encoding is supported). But when 'fileencoding'
                                > > is empty, you would need to use 'printencoding', the encoding that the
                                > > user has selected to be used for the printer. You might be able to
                                > > guess it by checking which Unicode characters the user uses (e.g.,
                                > > Cyrillic characters).
                                >
                                > Two thoughts:
                                >
                                > 1) You would have to scan the whole file to establish all the code-
                                > ranges used within it.

                                Yes, that is not very difficult. So long as we can make a table for
                                each 8-bit character set that we support. This can probably be
                                generated from the tables that the Unicode consortium provides.
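
                                Roughly, the scan itself could look like this (a crude
                                illustration only - it just checks whether anything falls
                                outside the 8-bit range, where the real code would look
                                each character up in those generated tables):

                                " look for any character above the 8-bit range
                                let lnum = 1
                                let needsmore = 0
                                while lnum <= line("$")
                                  if getline(lnum) =~ '[^\x01-\xff]'
                                    let needsmore = 1
                                  endif
                                  let lnum = lnum + 1
                                endwhile
                                echo needsmore ? "needs another vector" : "latin1 is enough"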

                                > 2) If we knew the set of fonts and encodings available we can switch
                                > font depending on the code-range currently being printed if no one
                                > font contained them all.

                                That would be a solution when mixing 8-bit character sets in a Unicode
                                file (e.g., Greek and Hebrew). It won't work for Asian languages
                                though.

                                --
                                BEDEVERE: How do you know so much about swallows?
                                ARTHUR: Well you have to know these things when you're a king, you know.
                                "Monty Python and the Holy Grail" PYTHON (MONTY) PICTURES LTD

                                /// Bram Moolenaar -- Bram@... -- http://www.moolenaar.net \\\
                                /// Creator of Vim - Vi IMproved -- http://www.vim.org \\\
                                \\\ Project leader for A-A-P -- http://www.a-a-p.org ///
                                \\\ Lord Of The Rings helps Uganda - http://iccf-holland.org/lotr.html ///