5.1 Per-packet load-sharing

For this task, don't you need to change the load-balancing method to per-packet instead of per-destination (the default)? Since the destination in this case does not change (150.1.5.5), it would only hash out to one interface. Am I missing something?
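
For reference, here is a minimal sketch of how per-packet load sharing is normally enabled when CEF is doing the switching; the interface names are only illustrative and are not taken from the task:

    ! Hedged sketch: interface names are assumptions, not from the SG.
    ! With CEF, the load-sharing method is set on the outgoing interfaces.
    interface Serial1/2
     ip load-sharing per-packet
    !
    interface Serial1/3
     ip load-sharing per-packet
    !
    ! Verify which load-sharing method CEF is using toward the destination:
    ! show ip cef 150.1.5.5 internal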

 

I double-checked, and you don't have to configure per-packet load balancing, but why is that?

 

Thanks

 

Tom

 

Comments

  • I just tested this, and the reason it works is that "ip route-cache" is disabled on R3's F0/0 interface. If you put access lists (for GRE packet counting) on R1's and R2's connecting interfaces to R3, you will see that when route caching is disabled on R3's F0/0 interface, per-packet load balancing does occur. However, when you re-enable route caching, load balancing goes back to the default, which is per-destination, and the multicast traffic is not load balanced over the tunnel. (A sketch of this test setup follows this comment.)

     

    Hopefully someone else can shed some light on what is really needed. Disabling the route-cache is a solution, but it was not a part of the SG's solution, only a part of the testing. I initially thought about only forming the tunnel from R3 to R5 over the R1 connection (using different source and destination IPs of course), but then "ip multicast multipath" would have to be used.
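
    A rough sketch of the test setup described in this comment; the ACL number and the interface names are assumptions, not part of the SG solution:

    ! On R1 and R2, count GRE (protocol 47) packets arriving from R3:
    access-list 147 permit gre any any
    access-list 147 permit ip any any
    !
    interface Serial0/1
     ip access-group 147 in
    !
    ! On R3, disable fast/CEF switching on the incoming LAN interface so that
    ! packets are process switched:
    interface FastEthernet0/0
     no ip route-cache
    !
    ! Then watch the GRE hit counts on R1 and R2:
    ! show access-lists 147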

  • IMHO, I believe this is just needed for testing, but I agree that it is weird. Maybe an "ask the proctor" thing?

    Just my 2 cents

  • Hi All

    I just tested the load-balancing part of this task extensively and found a few interesting details.

    First, this may be IOS-version dependent, so here is the image I used: flash:IMAGES/c2600-adventerprisek9-mz.124-10.bin (INE rack rental).

    What I found is that even though "debug ip packet detail 100" shows the traffic being load-balanced between s1/2 and s1/3 (as in the SG), it actually sends every packet over only one link! I verified this both by watching the counters on s1/2 and s1/3 and by configuring access lists on R5's interfaces s0/0/0.501 and s0/0/0.502. So debug output can lie! I never thought of that before.

     

    I then tried configuring every combination of "no ip route-cache", "no ip route-cache cef", and "no ip mroute-cache" on both F0/0 and Tu35 on R3, and it never load-balanced. I also tried "ip load-sharing per-packet" on R3 under both F0/0 and Tu35, but that does not work either (the combinations I tried are sketched at the end of this comment). I guess traffic that needs to be GRE-encapsulated is handled differently.

     

    The only way I managed to make it load balance the GRE traffic was by disabling ip cef globally.

     

    Cheers.
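
    A sketch of the switching combinations described in this comment; these are illustrative only, and the behaviour may be IOS-version dependent:

    ! Attempt 1: force process switching on R3's relevant interfaces.
    interface FastEthernet0/0
     no ip route-cache
     no ip mroute-cache
    interface Tunnel35
     no ip route-cache
     no ip mroute-cache
    !
    ! Attempt 2: per-packet CEF load sharing on the same interfaces
    ! (with route caching / CEF re-enabled first).
    interface FastEthernet0/0
     ip load-sharing per-packet
    interface Tunnel35
     ip load-sharing per-packet
    !
    ! Neither attempt balanced the GRE-encapsulated traffic; the only change
    ! that did was disabling CEF globally:
    no ip cef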

  • I've had no luck with this task either, even after trying some of the solutions proposed here in various combinations. Has anyone had any success?

  • There were several posts concerning load balancing and load sharing.  Following is my conclusion.  Please let me know if I am missing anything.

    Configuring the static mroute on R5 results in T35 passing the RPF check and becoming a valid outgoing interface on R3 (the general form of that static mroute is sketched after the output below). Since this is dense mode, it just means that R3 will flood the multicast traffic out both the T35 and S1/3 interfaces.

    The use of the term "load balanced" is misleading, because this is not load balancing. It is just forwarding (flooding) the multicast packet out two dense-mode outgoing interfaces. The trace on R3 shows the same multicast packet going out both interfaces, since both dense-mode interfaces are in the OIL (Outgoing Interface List). Since dense-mode multicast is based on flooding out the OIL, I don't know of any way to load balance or load share it. This appears to me to be a misunderstanding created by the term "load balanced" in the task.

     

    (139.1.0.6, 225.25.25.25), 00:00:12/00:02:54, flags: T
      Incoming interface: FastEthernet0/0, RPF nbr 0.0.0.0
      Outgoing interface list:
        Tunnel35, Forward/Dense, 00:00:13/00:00:00
        Serial1/3, Prune/Dense, 00:00:11/00:02:48

    Rack1R3#
    *Mar  2 12:03:10.394: IP(0): MAC sa=0017.e02e.91b5 (FastEthernet0/0), IP last-hop=139.1.0.6
    *Mar  2 12:03:10.394: IP(0): IP tos=0x0, len=100, id=527, ttl=254, prot=1
    *Mar  2 12:03:10.394: IP(0): s=139.1.0.6 (FastEthernet0/0) d=225.25.25.25 (Tunnel35) id=527, ttl=254, prot=1, len=100(100), mforward
    *Mar  2 12:03:10.398: IP(0): s=139.1.0.6 (FastEthernet0/0) d=225.25.25.25 (Serial1/3) id=527, ttl=254, prot=1, len=100(100), mforward
    Rack1R3#
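
    A minimal sketch of the kind of static mroute referred to above; the source prefix and the tunnel interface number on R5 are assumptions, not necessarily the SG values:

    ! On R5: this only changes the RPF interface used to validate the source.
    ip mroute 139.1.0.0 255.255.0.0 Tunnel53
    !
    ! R3 then keeps the tunnel in Forward state and, being dense mode, floods a
    ! copy of each packet out every interface left in the OIL, which is
    ! replication, not load sharing.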

     

     

  • JoeM:

    Here's my shot at the difference in the terms:

    Load-Sharing is more of a round-robin, taking turns: your turn, my turn, his turn, OK... your turn again.

    Load-Balancing actually looks at the load on each of the links (and adapts). Think PfR/OER (Performance Routing).

  • Thanks for the definitions.   So is the solution in the SG for this task load balancing?

     

    "Configure the network so that when the feed is sent from VLAN 367 to
    VLAN 2 it uses the HDLC link between R2 and R3, but when the feed is
    sent from VLAN 367 to VLAN 5 it is load balanced between R1 and R2."
