
Re: [hercules-os380] variable length record

  • Gerhard Postpischil
    Message 1 of 54, Jul 30, 2013 2:45 PM
      On 7/30/2013 1:25 AM, Jon Perryman wrote:
      > I only need a hammer and duct tape to fix my motorcycle. I use the duct
      > tape when something moves but shouldn't and use the hammer when
      > something won't move when it should.

      Aha. You put the duct tape in your spark plug, and hit it with a hammer
      to regap? <g>

      > As far as the concept of blocks and records goes, only operating
      > systems have it. Disks have sectors, which IBM has chosen to represent
      > a block. I don't believe that there is a skip ## sectors I/O
      > instruction, so you can't actually skip blocks. For IBM FB files, the
      > sectors are consistent, so the block can easily be translated to
      > cylinder, head, track & sector. IBM VB files, on the other hand, don't
      > have a consistent block size (sectors not consistent for each track),
      > so you can't calculate it.

      A Skip Sectors routine isn't necessary, as the calculation is trivial
      (convert address to sectors, add increment, convert back to CCHHR and
      Search Id). For VB files, if random access is required, the obvious
      choice is VSAM. I could still do it with BSAM or EXCP/XDAP using a
      binary search of blocks until the correct block is found. If there is
      enough memory, I can stage several tracks (Karl Barnard at Bell Labs
      wrote a SQUISH program in the sixties or early seventies for DASD
      copying that does that, and was an order of magnitude faster than
      anything IBM
      had). I'm not aware of any C equivalent functionality.
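      The "trivial calculation" above can be sketched in C. The geometry
      constants below (blocks per track, tracks per cylinder) are
      assumptions; the real values depend on device type and block size:

      ```c
      #include <stdio.h>

      /* Hypothetical fixed geometry -- real values depend on the device
         type and the chosen block size. */
      static const unsigned BLKS_PER_TRK = 6;   /* assumed */
      static const unsigned TRKS_PER_CYL = 15;  /* assumed */

      typedef struct { unsigned cc, hh, r; } cchhr_t;

      /* relative block number (0-based) -> CCHHR (record numbers start at 1) */
      cchhr_t to_cchhr(unsigned long rbn)
      {
          cchhr_t a;
          a.r  = (unsigned)(rbn % BLKS_PER_TRK) + 1;
          rbn /= BLKS_PER_TRK;
          a.hh = (unsigned)(rbn % TRKS_PER_CYL);
          a.cc = (unsigned)(rbn / TRKS_PER_CYL);
          return a;
      }

      /* CCHHR -> relative block number, for the reverse conversion */
      unsigned long to_rbn(cchhr_t a)
      {
          return ((unsigned long)a.cc * TRKS_PER_CYL + a.hh) * BLKS_PER_TRK
                 + (a.r - 1);
      }

      int main(void)
      {
          unsigned long rbn = 100;            /* current position */
          cchhr_t here = to_cchhr(rbn + 10);  /* "skip 10 blocks" */
          printf("CC=%u HH=%u R=%u\n", here.cc, here.hh, here.r);
          return 0;                           /* prints CC=1 HH=3 R=3 */
      }
      ```

      The increment-and-convert-back step is all a Skip Sectors routine
      would have to do before issuing the Search Id.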

      > Intel CPUs are considered to be character (or byte) based, but DASD is
      > not. I believe most DASD (IBM, PC, Solaris & ...) transfer data to
      > storage in a very similar manner and at similar transfer rates. None
      > of it is really character based. SATA is currently rated for 150 MB/s,
      > and I think that IBM is somewhere around that, but IBM has implemented
      > techniques to improve the end-to-end throughput (e.g. multiple
      > channels, wider bus, caching algorithms & ???).

      PCs, last time I checked, used CKD architecture to format 512-byte
      blocks, making them behave as FBA. Any file structure is mapped on top
      of that.

      > Implementing record formats similar to IBM's is possible in C
      > programs, and they will have a similar performance level if programmed
      > properly. The problem is that everyone must re-invent the wheel, so it
      > is not always optimal. seek, fseek, fseeko and ??? are used to handle
      > records. For example, to use RECFM=F, I would use a struct to
      > represent the record and fread with the length of the structure. If I
      > wanted to skip 10 records, then I would use fseeko to 10 * (length of
      > structure). RECFM=V would require more use of fseeko and possibly more
      > structures. You could improve performance by having a buffer of the
      > max LRECL, shifting out data for each record, followed by fread for
      > the available space. Fortunately for us, IBM has hidden this so we
      > don't need to worry about it.

      My point isn't that it can't be done in C, but that the underlying
      processing still has character-based overhead. The software has to do
      the positioning, because the hardware doesn't (except on z systems).
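      The RECFM=F technique quoted above (a struct plus fseeko to 10 * the
      structure length) might look roughly like this; the struct layout, the
      field names, and the 80-byte record are invented for illustration, and
      fseeko/off_t assume a POSIX environment:

      ```c
      #include <stdio.h>
      #include <sys/types.h>   /* off_t */

      /* Hypothetical fixed-length (RECFM=F style) record; field names are
         illustrative. LRECL = sizeof(struct rec) = 80 bytes. */
      struct rec {
          char key[8];
          char data[72];
      };

      /* Position to record n (0-based) by byte arithmetic, then read it --
         the "fseeko to 10 * (length of structure)" idea from the text. */
      int read_record(FILE *f, long n, struct rec *out)
      {
          if (fseeko(f, (off_t)n * (off_t)sizeof(struct rec), SEEK_SET) != 0)
              return -1;
          return fread(out, sizeof *out, 1, f) == 1 ? 0 : -1;
      }

      int main(void)
      {
          FILE *f = tmpfile();               /* stand-in for a real data set */
          struct rec r = {0};
          for (int i = 0; i < 20; i++) {     /* write 20 dummy records */
              snprintf(r.key, sizeof r.key, "K%06d", i);
              fwrite(&r, sizeof r, 1, f);
          }

          struct rec got;
          if (read_record(f, 10, &got) == 0) /* skip 10 records, read the 11th */
              printf("key of record 10: %.8s\n", got.key);

          fclose(f);
          return 0;
      }
      ```

      All the positioning is done in software by byte arithmetic, which is
      exactly the point: the C library gives you offsets, not records.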

      > If written correctly in C, the IRS application's performance could be
      > close to running on z/OS but the problem is getting it to run optimally
      > because you must take care of everything (write efficient code for high
      > volumes, handle problems where IBM provides solutions, interact with
      > users and operators as needed). Certainly far easier on z/OS, but not
      > impossible to implement in UNIX or Linux.

      I'm not sure the IRS applications can be done in C, but I'm no expert.
      Basically, there is a main program, run once a week, with all ten files
      in parallel. Initialization consists of loading all programs approved
      for production, each with its own control and output files; when the
      main program reads a record, it passes control to each program in turn,
      as a subroutine, to process or ignore that record (there is a little
      range screening to reduce overhead); on input end, each program is
      called a final time. IRS paid IBM for proprietary system changes; e.g.,
      allowing 100 volumes for a tape file.
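      The driver structure described above (read a record, pass it to each
      approved program in turn, then a final call at end of input) can be
      sketched in C; the names (handler_t, drive, prog_a) and the 80-byte
      fixed record are assumptions, not the IRS design:

      ```c
      #include <stdio.h>

      enum { REC_LEN = 80 };   /* assumed fixed record length */

      /* Each "approved program" is called once per record (final = 0)
         and one final time at end of input (final = 1). */
      typedef void (*handler_t)(int final, const char rec[REC_LEN]);

      static long prog_a_count;   /* state for the sample program */

      static void prog_a(int final, const char rec[REC_LEN])
      {
          (void)rec;              /* a real program would screen key ranges */
          if (final)
              printf("prog_a saw %ld records\n", prog_a_count);
          else
              prog_a_count++;
      }

      /* The main program's loop: pass every record to every program,
         then give each program its final call. */
      void drive(FILE *in, handler_t *handlers, size_t n)
      {
          char rec[REC_LEN] = {0};
          while (fread(rec, REC_LEN, 1, in) == 1)
              for (size_t i = 0; i < n; i++)
                  handlers[i](0, rec);
          for (size_t i = 0; i < n; i++)
              handlers[i](1, rec);
      }

      int main(void)
      {
          handler_t handlers[] = { prog_a };  /* "programs approved for production" */
          drive(stdin, handlers, 1);
          return 0;
      }
      ```

      Each program runs as a subroutine of the driver, so one pass over the
      input serves all of them, which is the point of the design.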

      Gerhard Postpischil
      Bradford, Vermont
    • somitcw
      Message 54 of 54, Aug 2, 2013
        --- In hercules-os380@yahoogroups.com,
        "kerravon86" <kerravon86@...> wrote:
        > --- In hercules-os380@yahoogroups.com,
        > "somitcw" <somitcw@> wrote:
        > > > That explanation would be fine if the value
        > > > we were talking about is 6233. We're not.
        > > > We need an explanation for 6144, not 6233.
        > > Because there are control records between text
        > > records, 6233 is not the best fit for a 3350 track.
        > > Of course, 6144 has the same issue, but 6028 hasn't been
        > > used and tested for decades like 6144 has.
        > If I understand this correctly, the linkage
        > editor is writing blocks of different sizes.
        > Some blocks are short, others are 6144. And
        > when both of those things are taken into
        > consideration, 6144 is reasonable.
        > So here's what I put:
        > If you are storing load modules, use a block size of 6144,
        > which IBM chose as a reasonable value for the disks supported.
        > BFN. Paul.


        IBM came up with 6144 specifically for PDS load
        libraries on 2314 disks. IBM also used 1024 and 3072
        on 2314 disk volumes. 6144 is the maximum text block
        that IBM would write in a 2314 load library on old
        systems that supported 2314 disks. IBM came up with
        12288 for 3330, 18432 for 3350, 32760 for 3375, and
        other sizes for drums.

        The rest of the world had systems with several types
        of disk volumes, so, to be compatible, they used the
        lowest common PDS load library block size, which was 6144.

        Back when 2311s were in mixed shops with 2314 disks,
        people used 1024 and 3072, but MVS 3.8j doesn't
        even support 2311 disk volumes. I believe that MFT and
        MVT do support 2311 disk volumes, and PCP supports