
Re: [dita-users] DITA 1.2 Spec production data

  • Jarno Elovirta
    Message 1 of 41 , May 1, 2013

      Do you mean that the example below should result in an error condition because there are duplicate key definitions?


      On 1 May 2013 03:15, Mica Semrick <paperdigits@...> wrote:


      Key names are global, and that should fail, no?


      On Tue, Apr 30, 2013 at 2:07 PM, Jarno Elovirta <jelovirt@...> wrote:

      The spec at http://docs.oasis-open.org/dita/v1.2/os/spec/non-normative/interoperability-considerations.html says

      The DITA specification does not require processors to perform filtering, content reference resolution, key space construction, and other processing related to base DITA semantics in any particular order. This means that different conforming DITA processors may produce different results for the same initial data set and filtering conditions.

      I think this allows different results. For example, since the spec doesn't mandate that the key space be constructed before conref processing, reversing that order may result in different output. The current DITA-OT order is

      1. key space construction
      2. conref resolution
      3. keyref resolution

      But the order could be

      1. conref resolution
      2. key space construction
      3. keyref resolution

      For example, if we have

      <map id="map">
        <topicgroup conref="#map/second">
          <keydef keys="foo" href="first.dita"/>
        </topicgroup>
        <topicgroup id="second">
          <keydef keys="foo" href="second.dita"/>
        </topicgroup>
      </map>

      the effective value of key "foo" can be either "first.dita" or "second.dita".
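A minimal sketch of how the two orders diverge on this map (illustrative only, not DITA-OT code; the element structures and function names are simplified assumptions):

```python
# Illustrative sketch: how processing order changes the effective key space.
# Key precedence follows the DITA 1.2 rule: first definition in document
# order wins. Conref resolution replaces a conreffing element's content
# with the target element's content.

from copy import deepcopy

# The example map: a topicgroup that conrefs "#map/second".
example_map = {
    "id": "map",
    "children": [
        {"conref": "#map/second",
         "children": [{"keydef": "foo", "href": "first.dita"}]},
        {"id": "second",
         "children": [{"keydef": "foo", "href": "second.dita"}]},
    ],
}

def build_key_space(map_root):
    """First definition of a key in document order wins."""
    keys = {}
    def walk(node):
        for child in node.get("children", []):
            if "keydef" in child:
                keys.setdefault(child["keydef"], child["href"])
            walk(child)
    walk(map_root)
    return keys

def resolve_conrefs(map_root):
    """Replace the content of a conreffing element with the target's content."""
    root = deepcopy(map_root)
    targets = {c["id"]: c for c in root["children"] if "id" in c}
    for child in root["children"]:
        if "conref" in child:
            target_id = child["conref"].split("/")[-1]
            child["children"] = deepcopy(targets[target_id]["children"])
    return root

# Order A: key space construction before conref resolution.
print(build_key_space(example_map))                   # {'foo': 'first.dita'}
# Order B: conref resolution before key space construction.
print(build_key_space(resolve_conrefs(example_map)))  # {'foo': 'second.dita'}
```

Running both orders on the same map yields different bindings for "foo", which is the divergence described above.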


      On 30 April 2013 21:44, Eliot Kimber <ekimber@...> wrote:

      The spec *cannot* allow the order of conref and keyref processing to result
      in different final results: that would be very very wrong. If you think the
      spec does allow that, please provide the citation. But whether you resolve
      conrefs first or keyrefs first, the result should always be the same: the
      meaning of the source is invariant and deterministic, so you should
      *always* get the same result.

      In particular, you have to resolve all direct conrefs before you can
      determine any key bindings--in any other case you don't have sufficient
      information to know if your effective key definitions are complete (because
      a conref on a key-defining topicref could change the effective attribute
      values for the topicref element).

      What the spec does allow is different orders of filtering and conref
      resolution, but in that case the final result will be the same, although
      in one case you may get messages about bad references and in the other
      you won't.
      In your example, the meaning of the markup is invariant: the key named "foo"
      will have as its link text the value of the element with ID "baz" in topic
      file "bar.dita".

      The order of processing cannot change that.

      If, for example, you resolved keyrefs before conrefs, then as part of the
      keyref resolution you'd have to rewrite the conref so that the resulting
      @conref value still got you to the resource addressed by "bar.dita#bar/baz"
      relative to the document that contains the <keydef> element. Any other
      processing result would be wrong and would be a bug.

      If that is done, then the final result must be the same as if you resolved
      the conrefs and then did keyref resolution on the conref-resolved result.



      On 4/30/13 1:32 PM, "Jarno Elovirta" <jelovirt@...> wrote:

      > No, because the effective key bindings are defined before conrefs or keyrefs
      > are resolved. Only in cases where you have something like
      > <keydef keys="foo">
      >   <topicmeta>
      >     <keywords conref="bar.dita#bar/baz"/>
      >   </topicmeta>
      > </keydef>
      > There are probably other cases where the output of the two approaches
      > will not yield the same results. The DITA spec doesn't mandate the order
      > of conref and keyref processing, so both approaches are "correct".
      > Jarno
      > On 30 April 2013 21:20, Shane Taylor <shanetaylortext@...> wrote:
      >> Wouldn't changing the order of conref/keyref resolution sometimes change
      >> how a keyref gets resolved when a map uses conref?
      >> On Tue, Apr 30, 2013 at 2:10 PM, Jarno Elovirta <jelovirt@...> wrote:
      >>> I don't have the exact numbers at hand right now, but earlier this year I
      >>> noticed that the order of keyref and conref processing makes almost a
      >>> three-fold difference in DITA 1.2 spec processing time. The spec uses *a lot* of
      >>> conrefs
      >>> and keyrefs in the language reference. The default DITA-OT processing order
      >>> is conrefs first, then keyrefs. If you turn this around, i.e. keyrefs first
      >>> and then conrefs, the processing time is reduced drastically. I don't
      >>> know if DITA2Go supports changing the processing order, but take a look
      >>> at how that changes the processing time.
      >>> Jarno
      >>> On 30 April 2013 10:15, Radu Coravu <radu_coravu@...> wrote:
      >>>> Hi Jeremy,
      >>>> It took me 5 minutes to produce from the DITA 1.2 specs WebHelp Mobile
      >>>> (integrated as a DITA OT plugin) using an Oxygen 15.0 beta.
      >>>> But I have a very strong workstation configuration: 16 GB RAM, Intel
      >>>> Core i5 quad-core 3.4 GHz, Win 7 64-bit.
      >>>> Regards,
      >>>> Radu
      >>>> Radu Coravu
      >>>> <oXygen/> XML Editor, Schema Editor and XSLT Editor/Debugger
      >>>> http://www.oxygenxml.com
      >>>> On 4/30/2013 1:21 AM, Jeremy H. Griffith wrote:
      >>>>>> Hi all! We're one of the four vendors who produced
      >>>>>> Help for the DITA 1.2 Specification on Don Day's neat
      >>>>>> site, Mobile DITA:
      >>>>>> http://mobiledita.com
      >>>>>> As you'll see on our page at the site, we try hard
      >>>>>> to have the fastest possible loading time, and the
      >>>>>> shortest possible production time, with our tools.
      >>>>>> However, when we ran the 1.2 spec, it took a *lot*
      >>>>>> longer to run than any other project we've ever done.
      >>>>>> On an older (2.3 GHz, 4 GB) Win 7 64-bit system, it
      >>>>>> took 70 minutes. For comparison, our User's Guide
      >>>>>> takes about 7 minutes; that was formerly the longest
      >>>>>> run we'd ever had. The spec is longer, 1236 pp vs.
      >>>>>> 870 pp in PDF, but we'd still expect a 10-minute
      >>>>>> time, not one seven times longer.
      >>>>>> We'd like to compare this with the experience of
      >>>>>> any others who have produced Help for the 1.2 spec.
      >>>>>> That would include the other three vendors on the
      >>>>>> site, and also whoever produced the HTML Help for
      >>>>>> the spec on the OASIS site (Jarno?). Frankly, we
      >>>>>> are considering whether to have someone spend a
      >>>>>> few weeks profiling and tuning, but if others see
      >>>>>> the same sort of discrepancy, we'd rather spend the
      >>>>>> time on other planned enhancements.
      >>>>>> Can anyone else here comment on their production
      >>>>>> time for the 1.2 spec, from DITA source to finished
      >>>>>> Help system?
      >>>>>> Thanks!
      >>>>>> -- Jeremy H. Griffith <jeremy@...>
      >>>>>> DITA2Go site: http://www.dita2go.com/
      >>>> --

      Eliot Kimber
      Senior Solutions Architect, RSI Content Solutions
      "Bringing Strategy, Content, and Technology Together"
      Main: 512.554.9368
      Book: DITA For Practitioners, from XML Press

    • Ron Wheeler
      Message 41 of 41 , Jun 4, 2013
        My RAM disk setup is working well.
        It has survived a few Windows updates with required reboots over the
        last 2 weeks.
        The disk save and restore worked as advertised and nothing was lost or
        damaged so far.

        I am using the Eclipse editor to edit my DITA docs and the RAM drive has
        made Eclipse quite snappy.

        I would certainly recommend it to anyone who is tired of waiting around
        for document builds or not happy with the time taken to switch between
        topics in editing.

        Looking forward to a more efficient set of final steps in the build
        process, which are now compute-bound and seem to take a long time for
        such a short document.


        On 04/06/2013 3:42 PM, Ben Allums wrote:
        > On 5/24/2013 11:40 AM, Jeremy H. Griffith wrote:
        >> On Fri, 24 May 2013 10:23:01 -0500, Ben Allums <allums@...> wrote:
        >>> WebWorks ePublisher did exhibit a fairly long generation time as well.
        >>> Digging in, I found the issue related to the link map. From your
        >>> previous descriptions, it sounds like both DITA2Go and ePublisher build
        >>> a link map so that generated files can be renamed/relocated as
        >>> necessary. This flexibility is great, but does increase the overhead
        >>> required for correct processing.
        >>> Your original time of 70 minutes matched what I saw with ePublisher
        >>> 2012.3. For ePublisher 2012.4, the times were closer to 20-30 minutes
        >>> (this is from memory though). The big challenge was in the HTML page
        >>> generation and the link processing. Constant look-ups can take their
        >>> toll. Also, it is quite a few files when broken out into segments.
        >> Thanks, Ben! I think you are right. The spec has a
        >> truly remarkable number of links; we added a counter
        >> to find out how many, and it is over 300,000. Yes,
        >> we do have a link map, and lookups for that number
        >> of links can take an amazing amount of time. We'll
        >> probably add a further lookup optimization sometime;
        >> we already use a hash table for it.
        >> For the OmniHelp version, we wound up with 760 files.
        >> But our D2G User's Guide has 1625 files, and it runs
        >> in 7 minutes 18 seconds. The Mif2Go UG, from Frame,
        >> has 2407 files, and it runs in 3 minutes 19 seconds...
        >> both have a few thousand links, nothing like the spec.
        >> So it does seem to be the link count that's the issue.
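A link map of the kind described above can be sketched as a hash table from source locations to final output locations, which keeps each of the 300,000+ lookups O(1) on average. This is a hypothetical structure for illustration; `register`, `resolve`, and the file names are assumptions, not DITA2Go or ePublisher internals:

```python
# Hypothetical link-map sketch: generated files may be renamed or split,
# so every cross-reference must be looked up against the final locations.
# A dict (hash table) gives average O(1) lookup per link.

link_map = {}  # source (file, element id) -> output (file, fragment)

def register(src_file, elem_id, out_file, fragment):
    """Record where an element landed after renaming/splitting."""
    link_map[(src_file, elem_id)] = (out_file, fragment)

def resolve(src_file, elem_id):
    """Rewrite a reference to its final output location."""
    out_file, fragment = link_map[(src_file, elem_id)]
    return f"{out_file}#{fragment}"

register("topics/intro.dita", "overview", "out/page0001.html", "overview")
print(resolve("topics/intro.dita", "overview"))  # out/page0001.html#overview
```

With a structure like this, the per-link cost is dominated by hashing the key, so total time scales roughly linearly with the link count rather than quadratically.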
        > Jeremy,
        > I reviewed our own output and am seeing a similar number of output files
        > (784). The linking behavior was previously causing us trouble due to
        > ePublisher's allowance for page splits on any source paragraph. That
        > translates into tracking 300,000+ links. We have optimized this for our
        > upcoming 2013.1 release.
        > One reason I had in my head that the DITA 1.2 spec was large is that
        > some of our customers use ePublisher to generate 20,000+
        > page/topic DITA maps. This can cause a bit of a slow down. ;-)
        > Luckily, the improvements to link-map processing ease memory overhead
        > and speed things along. RAM disks may be the next step for some customers.
        > Ben Allums
        > allums@...
        > 512-381-8885

        Ron Wheeler
        Artifact Software Inc
        email: rwheeler@...
        skype: ronaldmwheeler
        phone: 866-970-2435, ext 102