
simulation results for increased TCP initial window

  • Kathleen Nichols
    Message 1 of 7, Sep 4, 1997
      These are the results of simulations exploring the conditions under
      which a larger initial window size (IW) for TCP is a "win", and
      what effects, if any, the larger IW might have on other traffic
      flows using an IW of 1. This set of simulations was inspired by
      discussions at the Munich IETF tcp-impl and tcp-sat meetings. It
      appeared that some of the questions being raised could be addressed
      fairly easily in an ns-2 simulation. It turned out that the
      simulation model was easy to construct, but debugging ns-2's
      tcp-full implementation took a lot more time.

      For ns-2 users: some modifications were made to the base tcp class,
      mainly fixes to the timers (tcp base class modifications by Van,
      tcp-full modifications by both Van and myself). The tcp-full code
      was modified to use an "application" class, and three application
      client-server pairs were written: a simple file transfer (ftp), a
      model of an http1.0-style web connection, and a very rough model of
      an http1.1-style web connection. I'll see about making these
      modified files available through the "contributed code" link on the
      ns-2 web page. (So don't bother me in the short term unless you're
      a Close Personal Friend.)

      The simulated network topology:

                 10Mb,100us                       10Mb,100us
                (all 4 links)                    (all 4 links)

      C    n2_________                         _________ n6    S
      l    n3_________\                       /_________ n7    e
      i                \\    1.5Mb,50ms      //                r
      e    n1 --------------------------------------- n0      v
      n    n4_________//                      \\_________ n8   e
      t    n5________//                        \\________ n9   r
      s                                                        s

           URLs -->                      <--- FTP & Web data

      Each left-hand side node (n2-n5) has four web clients attached to
      it, each of which is served by a different web server attached to
      one of the nodes on the right-hand side (n6-n9). The links to and
      from those nodes are 10 Mbps. The bottleneck link is between n1 and
      n0. Depending on the simulation scenario, one or two ftp clients
      can also be attached to the left-hand side nodes, with ftp servers
      attached to the right-hand side nodes. All links are
      bi-directional, but only acks, syns, fins, and URLs flow from left
      to right.
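
      For anyone reconstructing the setup, the whole topology reduces to
      a handful of link definitions. A minimal sketch (plain Python
      rather than the actual ns-2 OTcl script; the names are mine,
      chosen for illustration):

      # Dumbbell topology: n1--n0 is the 1.5 Mbps, 50 ms bottleneck;
      # n2-n5 (clients) and n6-n9 (servers) attach over 10 Mbps,
      # 100 us edge links.  Rates in bits/sec, delays in seconds.
      BOTTLENECK = ("n1", "n0", 1.5e6, 50e-3)
      EDGE_LINKS = ([(f"n{i}", "n1", 10e6, 100e-6) for i in range(2, 6)]
                    + [(f"n{i}", "n0", 10e6, 100e-6) for i in range(6, 10)])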

      Assumptions made in the simulations were that all ftps transferred
      1 MB files and that all web pages had exactly three embedded URLs.
      The web clients browse quite aggressively, requesting a new page
      after a delay uniformly distributed between 1 and 5 seconds. This
      is not meant to realistically model a single user's web-browsing
      pattern, but to create a reasonably heavy traffic load whose
      individual tcp connections accurately reflect real web traffic.
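
      A rough sketch of that client loop (the page and in-line transfer
      sizes are deliberately omitted since they are not given above;
      fetch_url stands in for the simulator's transfer machinery):

      import random

      rng = random.Random(1)

      def browse(fetch_url):
          """One web client: fetch a page plus its three embedded URLs,
          then wait a uniform 1-5 s think time and repeat."""
          while True:
              fetch_url("primary")           # the primary URL
              for i in range(3):             # exactly three in-lines
                  fetch_url(f"inline-{i}")
              yield rng.uniform(1.0, 5.0)    # delay before the next page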

      The maximum tcp window was set to 11 packets, the maximum packet
      size to 1460 bytes, and buffer sizes were set at 22 packets. (The
      ns-2 TCPs require setting window sizes and buffer sizes in numbers
      of packets. In tcp-full some of the internal parameters have been
      made byte-oriented, but external values must still be set in
      numbers of packets.)
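
      In byte terms, using the 1460-byte packet size (a quick check):

      MSS = 1460                # bytes per packet in these runs
      print(11 * MSS)           # max window:  16060 bytes (~16 KB)
      print(22 * MSS)           # buffer size: 32120 bytes
      print(3 * MSS)            # IW=3 equals  4380 bytes (quoted below)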

      The first set of simulation runs was done with 16 web clients and a
      number of ftp clients ranging from zero to 8. The IW was varied
      from 1 to 4 packets, though the 4-packet case lies beyond what is
      currently recommended. The figures of merit were the median page
      delay seen by the web clients and the median file transfer delay
      seen by the ftp clients. The simulated run time was rather long,
      360 seconds, in order to sample a large number of these metrics.
      (The median values remained stable when the runs were extended to
      twice that time, so this seemed adequate.)

              Median Web Page Delays (secs)  |  Median File Transfer Delays (secs)
      #FTPs   IW=1   IW=2   IW=3   IW=4      |   IW=1   IW=2   IW=3   IW=4
      ---------------------------------------|------------------------------------
        0     0.71   0.58   0.55   0.52      |
        1     0.81   0.68   0.64   0.62      |    9.1    9.3    9.3    9.4
        4     2.17   1.76   1.56   1.46      |   26.3   27.0   27.1   28.1
        6     2.57   2.11   1.87   1.70      |   39.5   38.3   40.1   40.7
        8     2.80   2.37   2.07   2.02      |   52.2   51.7   52.2   52.1

      Percentage improvement in page delays vs. number of ftps
      #FTPs   IW=1   IW=2   IW=3   IW=4
      ---------------------------------
        0       -     18     23     27
        1       -     16     21     23
        4       -     19     28     33
        6       -     18     27     34
        8       -     15     26     28
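
      The improvement figures are simply the page-delay reduction
      relative to the IW=1 column; recomputing them from the table above:

      # Median web page delays (secs) keyed by #FTPs, columns IW=1..4.
      delays = {0: [0.71, 0.58, 0.55, 0.52],
                1: [0.81, 0.68, 0.64, 0.62],
                4: [2.17, 1.76, 1.56, 1.46],
                6: [2.57, 2.11, 1.87, 1.70],
                8: [2.80, 2.37, 2.07, 2.02]}

      for ftps, row in delays.items():
          base = row[0]        # the IW=1 delay
          pct = [round(100 * (base - d) / base) for d in row[1:]]
          print(ftps, pct)     # e.g. 0 -> [18, 23, 27], matching the table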

      Even though the ftps use the same IW as the webs, the effect is not
      significant since only about 50 file transfers complete over the
      run time of the simulation, and when a packet is dropped, the
      restart window size used is one packet. Thus it did not seem
      necessary to compare web clients with larger IWs against ftps with
      smaller IWs. On the other hand, it is interesting to mix webs using
      smaller initial windows with webs using larger ones. This
      experiment doubled the number of web clients to 32. All 32 were
      first simulated using the same initial window size, first IW=1,
      then IW=3. Then the clients were split into two groups of 16 each,
      one of which used IW=1 and the other IW=3.

      Median Page Delays (secs)
      #Webs     IW=1   IW=3
      ---------------------
       32       0.75   0.61
       16/16    0.80   0.60

      The first line shows the same result as the earlier data: clients
      with IW=3 significantly outperform clients with IW=1. The second
      line shows that running a mixture of IW=3 & IW=1 has a tiny
      negative effect on the IW=1 conversations and essentially no
      effect on the IW=3 conversations.

      Since these simulations were all with http1.0-style web traffic, a
      natural question is how the results are affected by a migration to
      http1.1. A rough model of this behavior was simulated by using one
      connection to send all of the information from both the primary URL
      and the three in-lines. The results:

              Med Web Page Delay (secs) |  Med FTP Delays (secs) | % web improvement
      #FTPs   IW=1   IW=3               |   IW=1   IW=3          | from IW=1 to IW=3
      ----------------------------------|------------------------|------------------
        0     0.57   0.45               |                        |        21
        1     0.64   0.52               |    9.2    9.5          |        19
        4     1.80   1.31               |   27.0   27.0          |        27
        8     2.26   1.74               |   53.1   54.6          |        23

      Although these web clients clearly have better delay properties, they
      seem to get about the same percentage delay improvement from going
      to the larger IW.
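
      One way to see why the persistent connection helps while the
      relative IW gain stays about the same: under an idealized slow
      start (cwnd doubling every round trip, no losses or delayed-ack
      effects, in-lines fetched serially), http1.0 pays the slow-start
      ramp once per connection while http1.1 pays it once per page. A
      back-of-the-envelope sketch; the object sizes are made-up
      placeholders, not the simulated values:

      def rtts(segments, iw):
          """Round trips to deliver `segments` with idealized slow start."""
          cwnd, sent, n = iw, 0, 0
          while sent < segments:
              sent += cwnd     # a full window is delivered each round trip
              cwnd *= 2        # slow start: window doubles per round trip
              n += 1
          return n

      objects = [8, 4, 4, 4]   # primary + three in-lines, in segments

      for iw in (1, 3):
          http10 = sum(rtts(seg, iw) for seg in objects)  # ramp per object
          http11 = rtts(sum(objects), iw)                 # one ramp per page
          print(iw, http10, http11)    # -> 1: 13 vs 5;  3: 8 vs 3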

      The indication from these results is that increasing the initial
      window size to 3 packets (or 4380 bytes) doesn't "hurt" and does
      improve perceived performance. These simulations have suggested
      some further analyses of the traffic dynamics of the simulated
      network, and some further variations on the scenarios simulated
      here are possible.

      Using ns for the simulations made it possible to explore some other
      effects. ns-2 has a built-in RED function for buffer management,
      making it a simple matter to rerun the simulations with RED buffer
      management turned on. With no FTPs there are no (or almost no)
      dropped packets, so that case does not differ from the drop-tail
      case.
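
      For context, RED's per-packet decision looks roughly like this (a
      minimal sketch: the count correction and other details of the real
      algorithm are omitted, and these parameters are illustrative, not
      the ones used in the runs):

      import random

      MIN_TH, MAX_TH, MAX_P, W_Q = 5.0, 15.0, 0.02, 0.002
      avg = 0.0                 # EWMA of the instantaneous queue length

      def red_drop(queue_len, rng=random.Random()):
          """Return True if RED drops the arriving packet."""
          global avg
          avg = (1 - W_Q) * avg + W_Q * queue_len
          if avg < MIN_TH:
              return False      # average queue small: always enqueue
          if avg >= MAX_TH:
              return True       # average queue large: always drop
          # in between: drop probability rises linearly toward MAX_P
          p = MAX_P * (avg - MIN_TH) / (MAX_TH - MIN_TH)
          return rng.random() < p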

              Median Web Page Delays (secs)  |  Median File Transfer Delays (secs)
      #FTPs   IW=1   IW=2   IW=3   IW=4      |   IW=1   IW=2   IW=3   IW=4
      ---------------------------------------|------------------------------------
        1     0.82   0.69   0.64   0.62      |    9.1    9.3    9.4    9.4
        4     1.31   1.11   1.03   0.98      |   27.8   29.2   29.5   29.3
        6     1.68   1.54   1.48   1.47      |   42.3   43.1   42.8   43.6
        8     2.02   1.91   1.69   1.61      |   55.1   58.7   59.7   51.3

      Percentage improvement in page delay
      #FTPs   IW=1   IW=2   IW=3   IW=4
      ---------------------------------
        1       -     16     22     24
        4       -     15     21     25
        6       -      8     12     13
        8       -      5     16     20

      There are two interesting aspects to these results. First, for the
      cases where there are enough concurrent FTPs to fill the buffers,
      going from drop tail to RED yields a larger improvement than
      increasing the IW does, another validation of the usefulness of
      RED. The other is that the improvements from larger IWs are smaller
      in the RED scenario. Although deploying RED would have a more
      powerful impact on the delays seen by small transfers like typical
      web pages, increasing the initial window size is still useful.

      Packet drop rates did increase with IW, but the change was not
      significant. For the drop-tail simulations, the drop rates on the
      congested link for all flows ranged from 0.6-1.0% for 4 FTPs,
      1.6-1.9% for 6 FTPs, and 2.4-2.8% for 8 FTPs. For the RED scenarios
      the ranges were 1.8-2.0% for 4 FTPs, 2.9-3.2% for 6 FTPs, and
      4.0-4.2% for 8 FTPs. Since the increased drop rates were
      accompanied by better performance, it's clear that, at these low
      rates, drop rate is not an indicator of user-level performance.

      Kathie
      knichols@...
      (this work benefited from discussions and comments from Van Jacobson)
    • Wu-chang Feng
      Message 2 of 7, Sep 5, 1997
        >> Packet drop rates did increase with IW, but the change was
        >> not significant.

        For small scale experiments like this, loss rates won't be
        significant. What about loss rates when there are a large number of
        TCP flows such as the experiment(s) in...

        Morris, R., "TCP Behavior with Many Flows", ICNP '97

        Having a large IW may exacerbate the large loss rates observed in his
        experiments.

        Wu
      • Kathleen Nichols
        Message 3 of 7, Sep 5, 1997
          >
          > >> Packet drop rates did increase with IW, but the change was
          > >> not significant.
          >
          > For small scale experiments like this, loss rates won't be
          > significant. What about loss rates when there are a large number of
          > TCP flows such as the experiment(s) in...
          >
          > Morris, R., "TCP Behavior with Many Flows", ICNP '97
          >
          > Having a large IW may exacerbate the large loss rates observed in his
          > experiments.
          >
          > Wu

          The 16 web clients can easily cause 48 simultaneously active
          tcp connections along with the FTPs. These values were
          experimentally chosen to cause drop rates in the 1-5% range on
          the "T1 link". Many more configurations could be tested, and I
          would certainly invite interested parties to do so and share
          the results with the list. I would be very much interested in
          other studies, and I assume most other readers of these lists
          would be.

          Kathie
        • Curtis Villamizar
          Message 4 of 7, Sep 5, 1997
            Kathie,

            The objection to IW>1 is that it may have adverse effects on
            already congested networks that are dominated by HTTP
            transfers. You have not done simulations on a congested link
            dominated by HTTP, so your results are not applicable to the
            assertion that IW>1 may be harmful.

            If you limit the rate of requests to one request every 1-5
            seconds after completion, and transfers complete in 0.5 to
            0.7 seconds, then with 16 clients the link utilizations are
            very low. Since each client is idle for 3 seconds on average,
            the clients have roughly a 1/6 duty cycle, and about 16/6
            HTTP transfers can be expected to be active. If you have 1-8
            long-running ftp transfers, then the traffic on the link is
            dominated by the ftps. It is not at all surprising that a
            very small amount of HTTP traffic had a negligible effect on
            the TCPs dominating the traffic, and it is also not
            surprising that the slightly more aggressive HTTPs did a
            little better.
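
            Spelling out that arithmetic (a back-of-the-envelope estimate
            using the numbers above):

            think = (1 + 5) / 2       # mean idle time: 3 s
            xfer  = (0.5 + 0.7) / 2   # rough mean transfer time: 0.6 s
            duty  = xfer / (xfer + think)  # active fraction: 0.6/3.6 ~ 1/6
            print(duty, 16 * duty)    # ~0.17, so ~2.7 HTTP transfers active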

            One thing you did not mention is the size of the HTTP transfers. I
            don't think you mentioned the queue capacity either.

            > Packet drop rates did increase with IW, but the change was not
            > significant. For the drop-tail simulations, the drop rates on the
            > congested link for all flows ranged from 0.6-1.0% for 4 FTPs, 1.6-1.9%
            > for 6 FTPs, and 2.4-2.8% for 8 FTPs. For the RED scenarios the ranges
            > were 1.8-2.0% for 4 FTPs, 2.9-3.2% for 6 FTPs, and 4.0-4.2% for 8
            > FTPs. Since the increased drop rates were accompanied by better
            > performance, it's clear that, at these low rates, drop rate is
            > not an indicator of user-level performance.

            I suspect the drop rate for 0 FTPs was exactly zero and for 1
            FTP was close to zero. These are uncongested. You also didn't
            mention the bottleneck link utilization. If the link
            utilization drops with the increase in loss, then this will
            have an adverse effect on already congested links (anything
            that lowers bottleneck utilization is a problem for already
            congested links).

            Most of the US Internet seems to still be running under 5% loss or
            even under 1% loss. On portions of the Internet, drop rates are
            already 5-15%. I think the US to Europe problems of 25-50% loss are
            now a thing of the past. Portions of the world are living with
            underprovisioned networks and higher loss rates outside the US and
            western Europe.

            It would be interesting to try this with increasing numbers of HTTP
            clients such that the loss rate with no FTP was in the 1% range, in
            the 5% range, and in the 15% range. Then increase IW and see what the
            effect is.

            While I don't advocate running links at 1% loss or more, we
            must consider reality.

            Curtis
          • Alan Cox
              Message 5 of 7, Sep 5, 1997
              > chosen to cause drop rates in the 1-5% range on the "T1 link". Many more
              > configurations could be tested and I would certainly invite interested
              > parties to do so and share the results with the list. I would be very
              > much interested in other studies and I assume most other readers of
              > these lists would be.

              I'm not in a situation with time to do such testing, but
              for Europe you want to be modelling 20-30 parallel
              connections over a 64K line for realistic views of some
              sites under load. Also 4-8 over a 28.8 modem (a typical
              client loading images aggressively).
            • Alan Cox
                Message 6 of 7, Sep 5, 1997
                > already 5-15%. I think the US to Europe problems of 25-50% loss are
                > now a thing of the past. Portions of the world are living with
                > underprovisioned networks and higher loss rates outside the US and
                > western Europe.

                I've been collecting but not keeping 15-minute stats from
                our site to most US backbones across MAE-East. It's
                typically under 4% during US night time, rising to 4-15%
                during the US daytime, with the loss generally at the MAE
                and beyond.

                The curious can poll www.cymru.net/cgi-bin/ping-status
                for a UK view of the US.

                Alan
                • Guy Romano
                  Message 7 of 7, Sep 8, 1997
                  I apologize if this message was sent out twice.

                  A wireless link should also be considered because of
                  its low bandwidth, high latency, and high delay
                  variance. CDPD, with its raw data rate of 19.2 Kbps and
                  effective user throughput on the order of 9.6 Kbps,
                  could be considered as an example.

                  I assume that any IW>2 will begin to have a negative
                  impact on TCP performance on a wireless link.
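
                  To put rough numbers on that (this assumes the
                  1460-byte segments used in the simulations above; a
                  real CDPD link would likely use a smaller MTU):

                  SEG_BITS = 1460 * 8   # one full-size segment, in bits
                  for iw in (1, 2, 3, 4):
                      burst = iw * SEG_BITS / 9600.0  # secs at 9.6 Kbps
                      print(iw, round(burst, 2))
                  # IW=3 dumps a ~3.65 second burst into a 9.6 Kbps link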


                  Guy Romano

                  > > chosen to cause drop rates in the 1-5% range on the "T1 link". Many more
                  > > configurations could be tested and I would certainly invite interested
                  > > parties to do so and share the results with the list. I would be very
                  > > much interested in other studies and I assume most other readers of
                  > > these lists would be.
                  >
                  > I'm not in a situation with time to do such testing, but for Europe you want
                  > to be modelling 20-30 parallel connections over a 64K line for realistic
                  > views of some sites under load. Also 4-8 over a 28.8 modem (a typical client
                  > loading images aggressively)
                  >