The leading question:
“Is it possible (and if so, how) to redistribute or originate a default route based on time of day?”
The short answer is “Sure, why not?” The longer answer has to do with how exactly we warp the forces of the universe to make that happen.
Well, start with what we know. We know we can do time-ranges in access-lists, right? But can we do them in standard access-lists (the kind we see used for redistribution all the time)?
Rack1R1(config-if)#exit
Rack1R1(config)#access-list 1 permit 172.16.0.0 0.15.255.255 ?
  log  Log matches against this entry
  <cr>
Rack1R1(config)#
Nope. There’s a bummer. So we will need to use EXTENDED ACLs in order to make this work. So now we are reaching the point of “Yes, it can be done, but it will make my head hurt” as the answer.
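To give a feel for the shape of the solution, here is a sketch of an extended ACL with a time-range attached, applied through a distribute-list. The time-range name, protocol, and AS number are hypothetical; recall that in an IGP distribute-list an extended ACL matches on (update source, network), so “any host 0.0.0.0” matches the default route learned from any neighbor:

time-range BUSINESS_HOURS
 periodic weekdays 8:00 to 17:00
!
! Extended ACLs (unlike standard ones) accept the time-range keyword
access-list 100 permit ip any host 0.0.0.0 time-range BUSINESS_HOURS
!
router eigrp 100
 distribute-list 100 in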
First, as a little review, check out a blog post we did last year with some information on that sort of thing, used in conjunction with a distribute-list in different routing protocols.
A voice lab rack usually utilizes a dedicated piece of hardware to simulate a PSTN switch. Commonly, you can find a Cisco router in this role, with a number of E1/T1 cards configured to emulate the ISDN network side. It suits the function perfectly, switching ISDN connections between the endpoints. Additionally, it is often required to have an “independent” PSTN phone connected to the PSTN switch, in order to represent “outside” dialing patterns – such as 911, 999, 411, and 1-800/900 numbers. The most obvious way to do this is to enable CallManager Express on the PSTN router, and register either a hardware IP Phone or an IP soft-phone (such as IP Blue or CIPC) with the CME system.
However, there is another way to accomplish the same goal using IOS functionality alone. It relies on an IP-to-IP gateway feature, the “RTP loopback” session target. It is intended for VoIP call testing, but it can easily be used to loop incoming PSTN calls back to themselves. Let’s say we want the PSTN router to respond to incoming calls to the emergency number 911. Here is what the configuration would look like:
PSTN:
voice service voip
 allow-connections h323 to h323
!
interface Loopback0
 ip address 22.214.171.124 255.255.255.255
!
dial-peer voice 911 voip
 destination-pattern 911
 session target ipv4:126.96.36.199
 incoming called-number 999
 tech-prefix 1#
!
dial-peer voice 1911 voip
 destination-pattern 1#911
 session target loopback:rtp
 incoming called-number 1#911
The trick is that only IP-to-IP calls can be looped back. Because of that, we first need to redirect the incoming PSTN call to the router itself, in order to establish an incoming VoIP call leg.
While this approach permits VoIP call testing, it lacks one important feature available with a “real” PSTN phone: placing calls from the PSTN phone to the in-rack phones. However, you can always use the “csim start” command on the PSTN router to overcome this obstacle. Have fun!
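For example, a hypothetical in-rack extension 1001 could be dialed from the PSTN router like this (keep in mind “csim start” is a hidden, unsupported EXEC command, so treat it as a lab-only tool):

Rack1PSTN#csim start 1001

The router originates the call through its own dial-peers, so the call follows whatever outbound dial-peer matches the dialed digits.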
Quite a few people don’t pay attention to the difference in how packets are handled on interfaces configured for NAT inside versus NAT outside. Here is an example that demonstrates how NAT “domains” interact with routing. Consider three routers connected in the following manner:
For this scenario we have no routing configured. Let’s use static NAT to provide connectivity between R1 and R2. R2 would see R1 as a host on its local connected segment with the IP address 188.8.131.52, and R1 would see R2 as a host on its local segment with the IP address 184.108.40.206. This goal can be achieved with the following configuration:
R3:
interface Serial 1/0.301 point-to-point
 ip address 220.127.116.11 255.255.255.0
 ip nat inside
 no ip route-cache
!
interface Serial 1/0.302 multipoint
 ip address 18.104.22.168 255.255.255.0
 frame-relay map ip 22.214.171.124 302
 ip nat outside
 no ip route-cache
!
! Static NAT: translations are effectively bi-directional
!
ip nat inside source static 126.96.36.199 188.8.131.52
ip nat outside source static 184.108.40.206 220.127.116.11

R2:
!
! Add a Frame-Relay mapping for the new IP (representing R1)
! so that R2 knows how to reach the address over the multipoint FR interface
!
interface Serial 1/0.203 multipoint
 ip address 18.104.22.168 255.255.255.0
 frame-relay map ip 22.214.171.124 203
 frame-relay map ip 126.96.36.199 203
Let’s see how this works. Note that we disabled route caching on both interfaces so that packets are process-switched via the CPU, which lets the debugs intercept them.
Rack1R3#debug ip nat detailed
IP NAT detailed debugging is on
Rack1R3#debug ip packet detail
IP packet debugging is on (detailed)

Rack1R2#ping 188.8.131.52

Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 184.108.40.206, timeout is 2 seconds:
.....
Success rate is 0 percent (0/5)
Hmm… it fails. Look at the debugging output on R3:
Rack1R3#
!
! Packet on NAT outside (o - for outside) hits the interface
!
NAT*: o: icmp (220.127.116.11, 16) -> (18.104.22.168, 16)
!
! Source and destination of the packet are rewritten according to the NAT rules
!
NAT*: s=22.214.171.124->126.96.36.199, d=188.8.131.52
NAT*: s=184.108.40.206, d=220.127.116.11->18.104.22.168
!
! The packet is routed after translation (with the new source and destination IPs).
! Note that the routing decision and the actual forwarding take place only after
! the translation rules were triggered by the NAT tables
!
IP: tableid=0, s=22.214.171.124 (Serial1/0.302), d=126.96.36.199 (Serial1/0.301), routed via RIB
IP: s=188.8.131.52 (Serial1/0.302), d=184.108.40.206 (Serial1/0.301), g=220.127.116.11, len 100, forward
    ICMP type=8, code=0
!
! The response packet from R1 comes in - to destination 18.104.22.168 - routed via RIB
! (out the same interface). But no NAT rules were triggered, since the destination
! interface is the same as the input interface!
!
IP: tableid=0, s=22.214.171.124 (Serial1/0.301), d=126.96.36.199 (Serial1/0.301), routed via RIB
IP: s=188.8.131.52 (Serial1/0.301), d=184.108.40.206 (Serial1/0.301), len 100, rcvd 3
    ICMP type=0, code=0
OK, hold here for a second. Recall that for inside NAT, routing is tried first, and only then is the packet translated according to the NAT rules; this is how the NAT order of operations works on the inside. So now it’s clear: IOS first tries to route the packet to 220.127.116.11, which points out the same interface the packet came in on, and therefore the inside->outside translation never occurs! To fix this, let’s add a static route on R3:
R3:
ip route 18.104.22.168 255.255.255.255 22.214.171.124
Rack1R2#ping 126.96.36.199

Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 188.8.131.52, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 12/33/52 ms

Rack1R3#
!
! Outside: translate & route
!
NAT*: o: icmp (184.108.40.206, 17) -> (220.127.116.11, 17)
NAT*: s=18.104.22.168->22.214.171.124, d=126.96.36.199
NAT*: s=188.8.131.52, d=184.108.40.206->220.127.116.11
!
! Routing decision and forwarding
!
IP: tableid=0, s=18.104.22.168 (Serial1/0.302), d=22.214.171.124 (Serial1/0.301), routed via RIB
IP: s=126.96.36.199 (Serial1/0.302), d=188.8.131.52 (Serial1/0.301), g=184.108.40.206, len 100, forward
    ICMP type=8, code=0
!
! Inside: routing decision - the packet is routed using our fixup static route
!
IP: tableid=0, s=220.127.116.11 (Serial1/0.301), d=18.104.22.168 (Serial1/0.302), routed via RIB
!
! NAT rule (i - for inside) is triggered by the packet
!
NAT: i: icmp (22.214.171.124, 17) -> (126.96.36.199, 17)
!
! Source and destination addresses rewritten in the "opposite" direction
!
NAT: s=188.8.131.52->184.108.40.206, d=220.127.116.11
NAT: s=18.104.22.168, d=22.214.171.124->126.96.36.199
!
! The packet is sent to R2 (with the new source and destination) - forwarding takes place
!
IP: s=188.8.131.52 (Serial1/0.301), d=184.108.40.206 (Serial1/0.302), g=220.127.116.11, len 100, forward
    ICMP type=0, code=0
Nice. So now we know the difference for sure: packets arriving on the NAT outside interface are first translated and then routed, while on the inside interface the routing decision kicks in first, and only then are the translation rules applied, followed by forwarding. Before we finish, recall the new 12.3T feature called NAT Virtual Interface. With this feature we can configure any interface as “NAT enabled” and get rid of the “inside” and “outside” domains. All NAT traffic passes through a new virtual interface called the NVI, in a symmetric manner. Let’s reconfigure our task using this new concept.
R3:
interface Serial 1/0.301 point-to-point
 no ip nat inside
 ip nat enable
!
interface Serial 1/0.302 multipoint
 no ip nat outside
 ip nat enable
!
! Remove the old rules
!
no ip nat inside source static 18.104.22.168 22.214.171.124
no ip nat outside source static 126.96.36.199 188.8.131.52
!
! Add "domainless" rules
!
ip nat source static 184.108.40.206 220.127.116.11
ip nat source static 18.104.22.168 22.214.171.124
!
no ip route 126.96.36.199 255.255.255.255 188.8.131.52
Rack1R2#ping 184.108.40.206

Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 220.127.116.11, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 12/40/60 ms

Rack1R3#
!
! Routing decision is taken: the packet is classified for NAT, since the destination
! is in the NAT table. Note that no actual forwarding occurs yet, just the routing
! decision to send the packet
!
IP: tableid=0, s=18.104.22.168 (Serial1/0.302), d=22.214.171.124 (Serial1/0.302), routed via RIB
!
! Packet translated according to the NAT rules (note "i" for inside NAT)
!
NAT: i: icmp (126.96.36.199, 19) -> (188.8.131.52, 19)
NAT: s=184.108.40.206->220.127.116.11, d=18.104.22.168
NAT: s=22.214.171.124, d=126.96.36.199->188.8.131.52
!
! Another routing decision, for the translated packet - now actual forwarding occurs
!
IP: tableid=0, s=184.108.40.206 (Serial1/0.302), d=220.127.116.11 (Serial1/0.301), routed via RIB
IP: s=18.104.22.168 (Serial1/0.302), d=22.214.171.124 (Serial1/0.301), g=126.96.36.199, len 100, forward
    ICMP type=8, code=0
!
! The response comes in; first routing decision - NAT table entry matched
!
IP: tableid=0, s=188.8.131.52 (Serial1/0.301), d=184.108.40.206 (Serial1/0.301), routed via RIB
!
! Packet translated ("i" - inside NAT)
!
NAT: i: icmp (220.127.116.11, 19) -> (18.104.22.168, 19)
NAT: s=22.214.171.124->126.96.36.199, d=188.8.131.52
NAT: s=184.108.40.206, d=220.127.116.11->18.104.22.168
!
! Another routing decision for the post-translated packet, followed by forwarding
!
IP: tableid=0, s=22.214.171.124 (Serial1/0.301), d=126.96.36.199 (Serial1/0.302), routed via RIB
IP: s=188.8.131.52 (Serial1/0.301), d=184.108.40.206 (Serial1/0.302), g=220.127.116.11, len 100, forward
    ICMP type=0, code=0
So what’s the difference with NVI? First, we see that NAT now behaves symmetrically. Next, we see that the NAT translation table is consulted as part of the “routing decision” that sends the packet to the virtual interface. The packet is translated there, and then another routing decision takes place, followed by forwarding. So the difference from the old model is that the routing decision is now taken twice: before and after translation. This removes the need for the static routes required by “legacy” NAT, since a lookup is performed after translation.
To summarize: domain-based NAT uses different orders of operations for the inside and outside domains. NVI-based NAT is symmetrical and performs the routing lookup twice: first to send the packet to the NVI, and second to route the packet using the post-translated addresses.
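One practical note: once interfaces are converted to “ip nat enable”, the NVI translations no longer show up under the classic command; they have their own NVI-specific counterparts, e.g.:

Rack1R3#show ip nat nvi translations
Rack1R3#show ip nat nvi statistics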
Sometimes people need to conditionally advertise routes into the BGP table based on the time of day. Say we want to advertise IGP prefix 18.104.22.168/24 with community 1:100 during the daytime and with community 1:200 the rest of the time. Back in the day, the procedure was easy: you created a time-based ACL and used it in a route-map to set the communities:
time-range DAY
 periodic daily 9:00 to 18:00
!
access-list 101 permit ip any any time-range DAY
!
route-map SET_COMMUNITY permit 10
 match ip address 101
 set community 1:100
!
route-map SET_COMMUNITY permit 20
 set community 1:200
This construct worked fine with the 12.2T and 12.3 IOS releases up to 12.3(17)T. Since 12.3(17)T, however, BGP scanner behavior has changed significantly. Prior to that version, redistribution into the BGP table was based on the BGP scanner periodically polling the IGP routes every scan interval (one minute by default). With the new IOS code, redistribution is purely event-driven: a route is added to or deleted from the BGP table based on an event signaled by the IGP (e.g. an IGP route withdrawal, a next-hop change, etc.). This change in BGP scanner behavior was not clearly documented, unlike the related BGP support for the next-hop address tracking feature. Obviously, a change in a time-range is not treated as an IGP event, hence the filter no longer works.
Still, there are a number of workarounds. Here is one of them: we use a time-based ACL to filter or permit ICMP packets, and advertise routes based on that virtual “reachability” information.
First, we create time-range and time-based access-list:
time-range DAY
 periodic daily 9:00 to 18:00
!
access-list 101 permit ip any any time-range DAY
Next we create a special loopback interface, which is used to send ICMP echo packets to “ourselves”, and attach the ACL to the interface to filter the incoming (looped-back) packets:
interface Loopback0
 ip address 22.214.171.124 255.255.255.255
 ip access-group 101 in
We create a new IP SLA monitor to send ICMP echo packets over the loopback interface. If the time-based ACL permits the pings, the monitor state will be “reachable”.
ip sla monitor 1
 type echo protocol ipIcmpEcho 126.96.36.199
 timeout 100
 frequency 1
!
! Don't forget to actually start the probe
!
ip sla monitor schedule 1 life forever start-time now
Next we track our “pinger” state. The first tracker is up when the loopback is “opened” by the packet filter; the second one is active when the time-based ACL filters the packets:
track 1 rtr 1 reachability
!
! Inverse logic
!
track 2 list boolean and
 object 1 not
Then we create two static routes, bound to the trackers just mentioned. That is, the static route with tag 100 is only active when the loopback is “open”, i.e. the time-based ACL permits packets. The other static route is active only when the time-range is inactive (the second tracker then reports the destination as “reachable”):
ip route 188.8.131.52 255.255.255.0 Loopback0 184.108.40.206 tag 100 track 1
ip route 220.127.116.11 255.255.255.0 Loopback0 18.104.22.168 tag 200 track 2
Now we redistribute static routes into BGP, based on tag values, and also set communities based on the tags:
router bgp 1
 redistribute static route-map STATIC_TO_BGP
!
route-map STATIC_TO_BGP permit 10
 match tag 100
 set community 1:100
!
route-map STATIC_TO_BGP permit 20
 match tag 200
 set community 1:200
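As a quick sanity check while the time-range flips over, you can watch the trackers and the resulting BGP table on the redistributing router (the hostname here is just a placeholder): only one of the two tagged routes should be installed at a time, and the BGP table should carry the matching community.

Rack1R1#show track brief
Rack1R1#show ip bgp community 1:100
Rack1R1#show ip bgp community 1:200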
This is also a fun example of how you can tie multiple IOS features together at the same time.