Monday, 21 April 2014

Weird 2960 ARP issue

I've been doing a load of migrations lately which involve moving circuits from one head-end PE to another, and I've consistently run into a problem with 2960 switches running the LAN Base image. Basically, following the moves I find that although everything attached to the switch is still reachable, I lose management connectivity to the on-site switch at the far end of the circuit. Strangely, it is always possible to ping or SSH to the switch from the connected interface, but not from the management station.

The issue seems to be down to some really weird behaviour in the ARP table of the 2960. LAN Base is a layer 2-only image, so the box can only have one SVI active and relies on a default gateway to reach any other networks - nothing new there. The weird part is that when the 2960 wants to send traffic to a remote network, for example when responding to a ping from a management station, it creates an ARP entry for the remote IP with the gateway's MAC.
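For context, the management side of one of these remote switches is about as simple as it gets - something along these lines (the mask here is a guess on my part; the addresses and VLAN match the ARP output below):

interface Vlan120
 ip address 10.123.145.194 255.255.255.240
!
ip default-gateway 10.123.145.193

With only that single SVI and default gateway, anything off-subnet should simply be forwarded to the gateway's MAC - yet the ARP table ends up looking like this: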

Remote-Sw-01#show ip arp
Protocol  Address          Age (min)  Hardware Addr   Type   Interface
Internet  192.168.50.50         167   00c0.321a.be00  ARPA   Vlan120
Internet  10.123.145.193          0   189c.5dfe.be1f  ARPA   Vlan120
Internet  10.123.145.194          -   3037.ade1.a4b4  ARPA   Vlan120

This seems to happen irrespective of whether proxy ARP is enabled on the upstream interface, and in any case the switch should not be ARPing for anything outside its own subnet, so I seriously doubt that the entry is being built by any genuine ARP transaction. Looks like a bodge to me :)

Once these spurious non-adjacent ARP entries are in place they do not seem to get overwritten by, for example, receiving traffic from a given IP with a different MAC. Fortunately, legitimate entries for the local subnet do get overwritten, which leaves the door slightly ajar.

I can't see any way to stop the annoying behaviour, so the obvious workaround is to SSH in from the connected interface (check your ACLs!) and blow any entries still referring to the old gateway MAC out of the ARP table.

Remote-Sw-01#clear ip arp 192.168.50.50
Remote-Sw-01#show ip arp
Protocol  Address          Age (min)  Hardware Addr   Type   Interface
Internet  192.168.50.50           0   189c.5dfe.be1f  ARPA   Vlan120
Internet  10.123.145.193          1   189c.5dfe.be1f  ARPA   Vlan120
Internet  10.123.145.194          -   3037.ade1.a4b4  ARPA   Vlan120

At that point the correct gateway MAC will be learned and connectivity should instantly be restored. Another alternative is to SSH in from a second management station that hasn't connected recently, so the switch holds no stale ARP entry for it. Of course, if you have four hours to spare you could just wait for the stale entry to age out - four hours being the default IOS ARP timeout.
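If waiting doesn't appeal, it should also be possible to shorten the ARP timeout on the SVI ahead of a migration. I haven't tried this in anger on a 2960, so treat it as a sketch - the 300 seconds is just an example value:

Remote-Sw-01#show interfaces vlan 120 | include Timeout
  ARP type: ARPA, ARP Timeout 04:00:00
Remote-Sw-01#configure terminal
Remote-Sw-01(config)#interface vlan 120
Remote-Sw-01(config-if)#arp timeout 300
Remote-Sw-01(config-if)#end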

Wednesday, 9 April 2014

Weird Problem Running Password Recovery on a PIX 501

Today I dug out an old PIX 501 from the store room to do some testing (don't ask). As expected, it already had a config, including some unknown enable password, so I was forced to perform a password recovery on it. I've done a million of these on routers and switches but probably only once or twice on a PIX, so I wound up on Cisco's "how to password recover a PIX" page giving myself a quick refresher.

The password recovery process on a PIX is version dependent, requiring the right recovery image for the installed PIX software. Fortunately for me the console was not password protected, so I could use "show ver" to see what was running on the box:

VPN-TEST> show ver

Cisco PIX Firewall Version 6.3(5)
Cisco PIX Device Manager Version 3.0(4)

Compiled on Thu 04-Aug-05 21:40 by morlee

<snip>


"Great", I thought, and downloaded the 6.3 recovery image. The process itself is pretty straightforward and explained on the Cisco instruction page so I won't go over it in detail. After breaking the boot sequence and firing up the TFTP I was greeted with this:

monitor> tftp
tftp 8529-np63.bin@10.10.10.1.....................................................................................................................................................................................
Received 92160 bytes

Cisco Secure PIX Firewall password tool (3.0) #0: Thu Jul 17 08:01:09 PDT 2003
Flash=E28F640J3 @ 0x3000000
BIOS Flash=E28F640J3 @ 0xD8000

Do you wish to erase the passwords? [yn]


Of course I pressed "y", only to be told:

No passwords or aaa commands were found.

Rebooting....


How rude! Following that I returned to trying the default cisco / pix / blank passwords, in case I'd fat-fingered them earlier, but nothing worked. There *was* a password there, dammit!

After a fair bit of searching I realised that this was not a common problem. There were only a couple of forum posts quoting the "no passwords or aaa commands were found" message and none of them had a solution.

Out of desperation, as much as anything, I tried the PIX 7/8 recovery image:

monitor> tftp
tftp 8529-np70.bin@10.10.10.1.............................................................................................................................................................................................................................................................
Received 129024 bytes

Cisco PIX Security Appliance password tool (3.0) #0: Thu Jun  9 21:45:44 PDT 2005
This utility is not supported on this platform

Rebooting....


Huff. OK, last try. Let's go with the next version down - 6.2 - and see if that works:

monitor> tftp
tftp 8529-np62.bin@10.10.10.1.................................................................................................................................................
Received 73728 bytes

Cisco Secure PIX Firewall password tool (3.0) #0: Wed Mar 27 11:02:16 PST 2002
Flash=E28F640J3 @ 0x3000000
BIOS Flash=E28F640J3 @ 0xD8000

Do you wish to erase the passwords? [yn] 


Well, at least it ran this time. Naturally I typed "y":

The following lines will be removed from the configuration:
        enable password XJEP6/bAhsOZPahK encrypted
        passwd 2KFQnbNIdI.2KYOU encrypted

Do you want to remove the commands listed above from the configuration? [yn] 


Ah, the good old default "cisco" passwd entry (who can forget the "KYOU" on the end?) along with the troublesome unknown enable password. I've mangled the enable hash to avoid leaking anything genuine. After pressing "y" I got the following promising message:

Passwords and aaa commands have been erased.

Rebooting....


This time it actually worked, restoring the enable password to blank!

Out of curiosity I thought I'd check whether the config file was last saved under PIX 6.2 (a long shot, admittedly):

LAB-501# show run
: Saved
:
PIX Version 6.3(5)
<snip>


Er, nope. I can only assume that this little runt of a firewall had previously run 6.2 code and had later been upgraded. I vaguely remember upgrading PIXes in the past and being warned about scary, irreversible changes being made to the flash filesystem - perhaps the filesystem layout differs slightly between 6.2 and 6.3, but the upgrade doesn't bother to rewrite the flash when moving between minor releases? Either way, the 6.3 recovery image evidently didn't understand what it found and the 6.2 one did.

So there you have it. I suppose in theory you could just start with the newest recovery image and work backwards until one runs successfully. I've grabbed every recovery image on the page while they're still available - I don't expect Cisco to take them down (they are over a decade old now and still up) but you never know.

There you go. Now there is an answer for the one other person in the world who may ever have the same problem trying to revive a completely defunct model of firewall. Long live the PIX!

Wednesday, 2 April 2014

Bending the MPLS Security Model - part 2 (Foundations)

The Foundations of MPLS Tomfoolery

In the previous post I briefly reviewed the constructs of MPLS services and how traffic is segregated. The key takeaways from that post are that:
  1. Label switch routers generally only interpret / act on the outer label 
  2. Basically all the security is provided / enforced through the control plane which restricts reachability by selectively advertising service labels
  3. Routing for VRFs is different from normal IP due to the use of route distinguishers and route targets
Let's expand on how these points work in the attacker's favour.

All the intelligence in an MPLS network sits around the edge. It was designed this way so that the devices in the middle could dumbly (quickly) pass frames on, swapping one label for another without needing any knowledge of what the traffic is or how it should route. The core (P) nodes do not need to know anything about VRFs, IPv6 or even BGP - the edge PEs do all the protocol work, keep track of how that translates into labels and just hand labelled traffic into the core to be switched across to another node which understands.

Essentially the only security feature of the data plane is to drop frames with unknown labels. While you wouldn't expect to see it in the steady state, during many different types of legitimate convergence event frames can arrive with invalid labels attached. Generally this will only happen for milliseconds at a time but with high speed traffic flows or momentary loops that can mean a lot of frames. Mostly for this reason, I suppose, I've never seen a platform that logs or traps in the event of receiving an invalid label. This is helpful as it allows us to guess at labels without creating ridiculous amounts of noise.

Another less-thought-about aspect of (frame mode) MPLS is that the label space is platform-wide - in other words it doesn't matter which interface receives a packet, only what the top label is. So there's no real sanity checking, and certainly no RPF check (labels only indicate destination, not source) - frames can arrive from any angle and be treated the same.

All in all we have a system where all the decisions are made by the ingress PE. When a "customer" packet needs to be routed by the PE, it does a lookup to decide which service label is required (to tell the egress PE which VPN or pseudowire the traffic belongs to) and which transport label should be applied (to make the core carry the traffic to the correct egress PE). Since nothing gets checked along the way, any device can send traffic to any service on any PE if the right labels are applied.
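On an IOS PE you can see both halves of that label stack with a couple of show commands. The VRF name, prefix and loopback below are made up purely for illustration, and the exact syntax shifts a little between releases:

PE-01#show ip bgp vpnv4 vrf CUST-A labels
PE-01#show mpls forwarding-table 10.0.0.2 32
PE-01#show ip cef vrf CUST-A 172.16.1.0 detail

The first lists the service (VPN) label advertised for each customer prefix, the second shows the transport label used to reach the egress PE's loopback, and the CEF entry shows the complete stack that actually gets imposed on the customer packet.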

Label IDs are 20 bits wide, which would be a fair old area for an attacker to 'spray and pray'. Luckily for the attacker, though, the dynamic label assignment algorithms of various platforms are pretty predictable (in fact, they're usually sequential). The attacker is also helped by the fact that packets with invalid labels are silently dropped, leaving little in the way of evidence that anyone has been "poking around". 

The default dynamic assignment label ranges for a few common platforms are:

Cisco (IOS / IOS-XE): 16 upwards
Cisco IOS-XR: 16000 upwards
Juniper Junos: 100000 upwards
Alcatel 7750: 131071 downwards

Most kit will fall into one of these categories, so even if you don't know what kit is in use you still have a good chance of hitting the right labels.
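If you can get at any device from the same vendor, the allocation policy is easy enough to sanity-check yourself. On IOS, for instance, it looks something like this (output from a lab box, so treat the numbers as illustrative):

Lab-PE#show mpls label range
Downstream Generic label region: Min/Max label: 16/100000

A quick look at "show mpls forwarding-table" on the same box will show the local labels climbing sequentially from the bottom of that range.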

A few relatively straightforward attacks spring to mind, each of which will be covered in separate blog posts. The scenarios are:

  1. How to inject packets into a layer 2 EoMPLS pipe
  2. How to trombone / MITM layer 3 VPN traffic
  3. How to go after the PEs themselves
Stay tuned!