Monday Morning notes – 03.02.2020

Greetings programs!

It’s been quite a ride for the last month or so, with so much going on. Last Monday morning I passed the new Cisco DEVASC exam (200-901) on its first day of testing, making me a member of the DevNet 500 club.

DEVNET 500 w00t!

Preparing for the exam really did a lot for sharpening up some relatively new skills I’ve acquired that haven’t had a lot of reinforcement. That’s the real value of IT certifications – having that performance-based focal point. When your learning is self-directed there is a real tendency to sort of wander from topic to topic and lose focus, and learning tracks are a great tool to combat that.

That said, I think the most important takeaway is that we’re rapidly transitioning to an API-driven world, and it’s important to get comfortable working with structured data and logical abstractions in general, regardless of job role.

Ok, about Terraform. What I’m still working through is the approach I want to take: I want to bring a unique angle rather than join a chorus of people all saying and doing the same thing. If there are specific things you would like to see, let me know in the comments.

Thanks for reading and I’ll see you around.

Monday morning note 02.17.2020

Greetings Programs!

Just a quick check-in for this week.

Life has gotten a bit hectic recently, and will likely remain so for another couple of weeks. Small things are beginning to take conscious effort and focus which is a reliable indicator that I need to trim the sails a little. What this means for this space is the first Terraform post is delayed for a week, possibly two.

On a brighter note, the weather was wonderful for this morning’s walk, my dogs were well behaved, and coffee is always there for us and wants us to be happy.

Until next time, may you care for yourself with ease.

-s

Case study: Combating MAC address spoofing in access networks

Greetings programs!

Last week I went down an interesting rabbit hole of MAC address spoofing. I found that while the problem was well defined and easily researched, there were no simple, prescriptive recipes for a solution, so I am sharing the one we arrived at in the hopes it could be useful to others.

I would like to acknowledge the contributions of Marvin Rhoads for technical vetting and proofreading, and Brad Johnson for climbing down into the rabbit hole with me and lending his considerable expertise.

Unmasking the imposter

Introduction

Media Access Control (MAC) addresses are commonly used to identify endpoints for access control and authorization on access layer networks that have yet to implement 802.1x (dot1x) device authentication. The problem with this approach is that MAC address spoofing is trivial to carry out. However, with a defense-in-depth approach using basic tools and techniques, the risk and impact can be largely mitigated.

To explore the issues, we are going to evaluate the case of an organization that had recently implemented a network access control solution. A network penetration tester easily bypassed its access controls by cloning a MAC address from an IP phone to a Linux laptop.

Background

The organization had recently implemented Cisco Systems Identity Services Engine (ISE) and had hired a pen-testing firm to evaluate its efficacy in preventing unwanted access to the network. In general, Network Access Control (NAC) implementations take a phased approach to control risk and get immediate value from the tool, and this was the case here.

At the time the penetration test was performed, some of the network was using 802.1x authentication with digital certificates, and some of the network was using MAC Authentication Bypass (MAB) combined with device profiling to determine the correct level of authorization for a connecting device.

Additionally, the authorization policies weren’t fully implemented, so authorization was effectively a simple ‘yes/no’ result in which full access was granted based on a device profile match.

When the pentester did her work, she grabbed the MAC address off the back of an IP phone in a common area, applied it to her Linux laptop, and used the network cable from the IP phone to connect to the network. ISE recognized her computer as the IP phone, and she was granted unrestricted access to the network.

The information security team had the impression that ISE was able to handle a basic access layer attack like MAC address spoofing, and they wanted some answers regarding how this happened and what could be done to mitigate it until they were able to roll out dot1x authentication.

Analysis

Before getting into the details, we will set the stage by briefly reviewing how the ISE profiler works. Then we will:

  1. Dive into what happens when Windows and Ubuntu Linux devices connect to the network with the same MAC address as a test IP phone
  2. Review the ISE Anomalous Endpoint Detection (AED) feature and explain why it is ineffective in this case

ISE profiler primer

The ISE profiler has 11 modules that ingest information from a variety of sources to build a database of endpoints and endpoint attributes. The primary key for this data structure is the MAC address of the endpoint. The endpoint database can be viewed, and individual endpoints inspected, through the Context Visibility > Endpoints menu in the ISE user interface.

Endpoint Database and Endpoint Attributes

Figure – endpoint database

By clicking on the MAC address of the endpoint, we can view detailed information from the database.

Figure – Summary information about the endpoint

In figure 2, we can conclude that MAC Authentication Bypass (MAB) is being employed because the username is the same value as the MAC address. Ultimately this means we are not doing authentication. However, we can use the profiling information to authorize a specific level of access, which we will review later.

In the following image, we can see some attributes that ISE learned, as well as a value called the Total Certainty Factor (TCF).

Figure TCF and attributes learned from Device Sensor

So how is this information employed? ISE evaluates the attributes against a set of profiling policies, and the policy with the highest TCF is assigned as the endpoint policy for that device. We then use that endpoint policy to decide how much access (if any) to authorize.


ISE profiling policies

Profiler policies assign point values to matching attributes, and the highest Total Certainty Factor (TCF) score wins. The policies are arranged in a tree-like structure from coarse-grained to fine-grained, and the minimum score at each level of the tree has to be met before the child nodes will be evaluated.

Figure – Profiler policy for a 7965 IP phone

Logical policies

Logical policies are used to group like devices together so that a collective policy decision can be made for them. It is the functional equivalent of putting users into groups when granting access to files and folders on a computer.

Figure – Logical profile

Policy Set Authorization (AuthZ) rule

Finally, we use the logical profile in an authorization rule, which then directs the network device to apply the authorization we have defined.

Figure – Authorization rule for IP Phones

So how does MAC address spoofing bypass this system? Now that we have set the stage, we can start to talk about that.

Effect of MAC address spoofing on the profiler

Now we will see what happens when we connect first a Windows and then a Linux computer using the MAC address of our previously profiled phone.

Windows laptop

For our first test, we will connect a Windows 10 computer to the switch with the same MAC address as the test phone, and we will see what changes.

Figure – Windows 10 laptop

Reviewing figure 7, there are four items highlighted:

  1. Authorization profile
  2. Total Certainty Factor
  3. Dhcp-class-identifier
  4. Host-name

The reason we have an authorization result of DenyIP is that I had configured Anomalous Endpoint Detection. The dhcp-class-identifier change set the Anomalous Endpoint flag on the endpoint to true, and I used that flag as a condition in a rule that returns the DenyIP authorization result. We will dive into the details in the AED section.

There are two main takeaways here:

  1. ISE was able to respond to the MAC address spoofing attempt and flagged the endpoint.
  2. The attributes from when the phone was profiled are still present, even though we can be reasonably sure the laptop did not send them.

It is the second point that’s important to understand. The absence of a value being sent (that was sent prior) does not equal a change as far as the profiler is concerned. The TCF changed by -30 because the dhcp-class-identifier value is scored in two locations for 10 and 20 points, respectively.

Figure – how attribute matches are scored

It is essential to grasp that except for DHCP, when endpoint attributes accumulate, they remain until there is a change. Usually, this is not too much of a problem….

Once a device has had the AED flag set, the only way to clear the condition is to delete the endpoint. I am going to unplug the laptop and delete the endpoint, and next we will try the Ubuntu laptop.

Linux laptop

The endpoint was deleted, the phone plugged back in, and our endpoint has been recreated with the phone accurately identified. Now let us plug in the Linux laptop and take a look.

Figure – Linux laptop

The only informational DHCP attribute Ubuntu sends in its DHCP discovery request is host-name. The result is that the laptop received the IP_phones Authorization Profile. Why didn’t AED fire and block the endpoint?

Because we are not getting anything actionable, there is nothing to trigger ISE. If the residual attributes left over from when the phone was plugged in were not persistent, the change would trigger a reprofile and we would be able to do something. The reasons the attributes are cached are understandable, but it presents a difficulty here.

The main takeaways here are:

  1. In default DHCP configuration, the Ubuntu laptop doesn’t give up any useful information.
  2. We just got p0wned.

Anomalous Endpoint Detection (AED)

We saw that AED worked in the case of a Windows machine but not in the case of the Ubuntu Linux machine. Let us take a closer look at AED and see how it works.

According to the documentation (Cisco, 2018), AED can trigger on a change in one of three things:

  1. NAS-Port-Type (i.e. wired or wireless)
  2. DHCP-class-identifier
  3. Endpoint policy change

If any of these conditions occurs while AED is enabled, the endpoint will be flagged. It should be noted that not receiving the class-id when it was received previously will not trigger AED, and this is why we got p0wned by Linux.

While we are here, let us walk through how to configure AED.

  1. Enable it
  2. Create an Authorization exception rule that matches on the AnomalousBehaviour endpoint flag
  3. Apply an authorization profile that allows the device to receive an address (optional, I’ll explain)

Enabling AED

In the profiler configuration settings, check off the AED boxes.

Figure – Enable AED

Define a DACL that only allows DHCP/Bootp
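As a reference point, here is a minimal sketch of what the contents of such a DACL could look like (illustrative only, not the exact ACL from the original deployment). It permits the endpoint’s DHCP client traffic and drops everything else, so a flagged device can still obtain an address (preserving visibility, as discussed below) but can reach nothing else:

permit udp any eq bootpc any eq bootps
deny ip any any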

Reference the DACL in an authorization profile:

In the policy set, create an exception rule that conditions on the AnomalousBehaviour endpoint attribute. In this case, I also conditioned on IP phones because I only want the scope to cover the specific assets of interest. Assign the AuthZ profile.

Why the access-accept, and why give out an IP address? The simple answer is visibility. If we give the device an IP address, ISE can ingest the DHCP attributes and we will be able to tell from the ISE UI that a different device has been plugged in even though the endpoint was flagged by AED. If we were to prevent all access, we might not receive any new information until the endpoint was deleted and recreated.

Proposed Solution

We have now discussed the limitations of AED and why ISE could not act on the Linux laptop: it simply was not receiving any actionable information. Even if we rolled out 802.1x, there might still be devices that require MAB, so we cannot sidestep the issue.

There is a straightforward solution: implement first-hop security and require the client to send a class-identifier in its DHCP requests. If the class-identifier does not match, or none is sent, the device will not obtain an IP address.

There are three tools we will employ to accomplish this:

  1. DHCP Snooping (which you have already enabled on your network, right?)
  2. IP source guard (depends on DHCP snooping, prevents user-assigned static addresses)
  3. DHCP policy on the DHCP server

DHCP Snooping

DHCP snooping listens to DHCP traffic and builds a table of IP address, MAC address, and interface bindings. Configuring DHCP snooping is very straightforward; however, consult the documentation and make sure you understand how to scope it to specific VLANs and how to trust the uplinks leading towards the DHCP server. (Cisco Security Configuration Guide, n.d.)

Sample configuration:

ip dhcp snooping vlan 10,100
no ip dhcp snooping information option
ip dhcp snooping
!
!-----Trust the northbound link towards the DHCP server-----
interface GigabitEthernet1/0/48
 ip dhcp snooping trust
!

Binding example:
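In case the screenshot does not come through, the binding table can be checked from the switch CLI with two standard commands. The first confirms the snooping configuration and trusted interfaces; the second displays the learned IP/MAC/VLAN/interface bindings:

show ip dhcp snooping
show ip dhcp snooping binding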

IP Source Guard

IP source guard checks the DHCP snooping binding table, as well as the static binding table, for a matching entry when IP packets are received on the switchport. If there is no match, the traffic is dropped. This prevents an attacker from statically assigning an IP address.

IP source guard is a one-line interface command:

interface GigabitEthernet1/0/36
 ip verify source
!

Verification output:
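If the screenshot is not visible, the enforcement state can be verified with the following command, which lists the source bindings being enforced on each configured interface:

show ip verify source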

DHCP Policy

To close the loop, we will create a DHCP policy that only offers addresses to devices that send the correct class-id in the DHCP Discover or Request packets (Microsoft Docs, 2016). Creating the policy consists of the following steps:

  1. Create a vendor Class
  2. Create a policy that references the vendor class

Creating the vendor class

In the DHCP management tool, right-click on the IP address family icon and select Define Vendor Classes. When defining the class, the beginning or the end of the string can be used for wildcard matches, but the characters must match exactly. An easy way to get the string right is to copy it from the endpoint attribute in ISE.

Creating the policy that references the vendor class

  1. In the policies folder, right-click and select new policy. This will launch the new policy wizard.
  2. On the configure conditions window, click add to add a condition.
  3. Select vendor class, operator, the vendor class defined earlier, then wildcard if desired. Select add, and click ok.
  4. Use all of the addresses in the scope. NOTE: once done, the policy will reserve 100% of the addresses even if disabled. The policy must be deleted to release the reservation.
  5. Click through the rest of the wizard and select finish.

Conclusion

Strong authentication using 802.1x will protect against identity spoofing, but it is a significant undertaking for existing networks, and it takes time to roll out. Additionally, there may be devices that cannot perform 802.1x authentication, but they require network access. Therefore, we have to deploy a defense-in-depth strategy to secure the access network when device authentication is not possible.

Recommendations

  • Use 802.1x with certificate authentication whenever possible.
  • If you have a NAC such as ISE, employ least privilege authorization to mitigate impact.
  • Deploy First Hop Security (FHS), primarily DHCP snooping and IP Source Guard.
  • DHCP servers are a good control point to perform policy enforcement. Take advantage.
  • Take operations security practices seriously and follow the cycle.

Bibliography

Cisco. (2018, May). Configure Anomalous Endpoint Detection and Enforcement on ISE 2.2. Retrieved from Cisco.com: https://www.cisco.com/c/en/us/support/docs/security/identity-services-engine-22/200973-configure-anomalous-endpoint-detection-a.html

Cisco Security Configuration Guide. (n.d.). Security Configuration Guide, Cisco IOS XE Fuji 16.9.x, Configuring DHCP. Retrieved from Cisco.com: https://www.cisco.com/c/en/us/td/docs/switches/lan/catalyst9300/software/release/16-9/configuration_guide/sec/b_169_sec_9300_cg/configuring_dhcp.html?bookSearch=true&arrowback=true

Microsoft Docs. (2016, August 31). Scenario: Secure a subnet to a specific set of clients. Retrieved from https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2012-r2-and-2012/dn425039(v%3Dws.11)

Monday Morning Brief: Terraform series coming

Greetings programs!

Just a quick post this morning to mention that I’m working on a thing about deploying infrastructure with Terraform. I should tentatively have the first installment out next Monday.

For the not-yet-informed: Terraform is an infrastructure deployment tool that takes a text-file description of a distributed application infrastructure, compares it to what exists, and changes reality to match the description of what you want. It does this very well, with a short learning curve and excellent ease of use. If you’re doing production deploys into AWS or Azure, or even Cisco ACI, this is a tool you should get to know.

Brilliant sunrise this morning

If you want to understand how Terraform does its thing from an architecture perspective, here’s a link to a 30-minute talk that I found enlightening. TL;DR: Graph theory is everywhere!

That’s it for this week. May you care for yourself with ease, and may the odds ever be in your favor.

-s

PKIFNE #12: Diffie Hellman Groups

Greetings programs!

Today’s post is an excerpt from my CCIE Security notes, and it includes a useful table of Diffie-Hellman groups, which you can use as a job aid in IPsec VPN designs. I hope you find it useful.

About Diffie Hellman

The key bit of magic that makes IKE (Internet Key Exchange) possible is Diffie-Hellman. Diffie-Hellman allows anonymous entities to calculate a shared secret that can’t be discovered by a third party listening to the exchange. What’s amazing about it is that the peers are able to do this using two different secret values that they keep private and never exchange. DH is one of the earliest examples of public-key cryptography.

Key points

  • Diffie-Hellman does not provide authentication. It’s used to assist in creating a secure channel for authentication
  • Diffie-Hellman does not provide encryption. It provides the keying material for encryption.
  • Diffie-Hellman is used for control plane functions only.

Operations

At a high level it works like this:

If side A and side B use the same generator and modulus, the values they each compute in the final steps of the exchange will be the same. This shared key is used as an input to the negotiated encryption algorithm.
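For readers who want the math, here is a generic sketch of the exchange in standard Diffie-Hellman notation (textbook DH, not a reproduction of the original figure). Side A keeps a private value a, side B keeps a private value b, and both share the public generator g and prime modulus p:

A = g^{a} \bmod p, \qquad B = g^{b} \bmod p
s = B^{a} \bmod p = A^{b} \bmod p = g^{ab} \bmod p

Only A and B cross the wire; an eavesdropper who sees them cannot practically recover a, b, or the shared secret s.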

In the case of Diffie-Hellman, the generator and prime (g, p) are predefined values (defined in a number of different RFCs), which are referenced as Diffie-Hellman groups. The larger the generator and prime are, the more difficult the exchange is to break. Because computational power has increased substantially since the first DH groups were defined, the old groups are no longer safe to use.

The following table lists the Diffie-Hellman groups:

Diffie Hellman Groups

*NGE refers to Cisco Next Generation Encryption, the vendor’s set of recommended cipher suites.

*NSA Suite B refers to the United States National Security Agency’s published list of interoperable modern cryptographic standards.
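As a practical tie-in for the job-aid angle, the group number ultimately lands in your IKE policy or proposal. Here is a minimal IKEv2 proposal sketch for IOS-XE; the proposal name and algorithm choices are purely illustrative, not a recommendation pulled from the table:

crypto ikev2 proposal DH-EXAMPLE
 encryption aes-cbc-256
 integrity sha256
 group 19

Group 19 (a 256-bit elliptic-curve group) is one of the newer groups generally preferred over the older, smaller MODP groups.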

Quick hits: Taking control of emotions

Thinking about thinking

Greetings programs!

I recently took a course on critical thinking, and I would like to share a couple of ideas about working with emotions and using them as a diagnostic tool.

Thinking about thinking

Think → Feel → Want

If we look at how the human mind works, we can simplify the order of operations down to three steps:

  • Thinking
  • Feeling
  • Wanting
Credit: ‘Critical Thinking’ By Richard Paul and Linda Elder

The implications of this are rather profound. If our emotions are driven by our thoughts, it follows that we can change our emotional experience of the world by changing our thinking.

It further follows that if we’re having emotions that don’t seem particularly helpful, such as a generally negative outlook, then the key is to examine the thinking that led to those feelings. Which brings us to the second idea.

Check your assumptions

Assumptions represent our implicit beliefs about the world, and they are usually subconscious in nature. We use assumptions to rapidly interpret situations and make decisions. This is pretty helpful when we need to react immediately, such as taking evasive action to avoid a car accident. Assumptions are less helpful when they are used to make judgments based on a perspective or situation that’s no longer relevant, such as being an adult instead of a child.

Credit: ‘Critical Thinking’ By Richard Paul and Linda Elder

If we want to evaluate our thinking, and we know that our thinking is driven by subconscious assumptions, we have to find a way to bring those assumptions to the surface so we can take a look at them. How can we do that? We can ask questions!

Questions we might ask:

  • What is the goal of this thinking? What problem am I trying to solve?
  • What questions am I trying to answer?
  • What facts do I have?
  • What perspective am I looking at this from?

Putting it together

Once we’ve brought these assumptions to the surface, we can decide if they’re relevant and make sense. If they are, we now have an explicit reason and supporting argument to underpin our thinking.

If our assumptions don’t hold up, and we have no good supporting facts or reasons behind our thinking, then we can go back and look at the purpose of the thinking and see if it makes sense to reframe things in a way that can be supported by facts or reasons.

  • Maybe I’m just tired or hungry.
  • Maybe it’s something I can’t control and I have to let go.
  • Maybe it’s an unhealthy situation I need to extract myself from.
  • Maybe I need to look at this from a different perspective.

Wrap up

Welp, that’s it for today. I hope this was interesting and/or useful, and I’ll see you around.

-s

Children of the Magenta Line

Greetings programs!

I believe it’s a safe assumption that the current crop of budding network engineers are being indoctrinated with the notion that if they don’t 100% automate all the things, their jobs and careers will be at risk. My friend Daniel Dibb deserves ample credit for repeatedly asking the question “What happens when we subsequently end up with a generation of automation engineers who don’t understand networking?”.

Yesterday morning I was drinking coffee and scrolling through my Twitter feed when I came across a thread where Marko Milivojević posted a link to a YouTube video of American Airlines Captain Warren Van Der Burgh delivering a talk in 1997 called Children of the Magenta Line. The context is lessons learned from a rash of airline crashes caused by flight crews becoming too dependent on automation.

If you are an infrastructure engineer, or you manage infrastructure engineers, this video is worth 25 minutes of your time.

There are two quotes that stand out from this video:

  • We are Pilots and Captains, not Automation Managers
  • You’ve got to pick the appropriate level of automation for the task at hand

While googling the name of the video, I came across this article, which had a quote from William Langewiesche that also illustrates a fundamental problem with over-reliance on automation:

“We appear to be locked into a cycle in which automation begets the erosion of skills or the lack of skills in the first place and this then begets more automation.”

Let’s learn from the Airline industry and not repeat their mistakes.

Flock of birds wheeling around a statue of the barefoot mailman

Have a good one, and I’ll see you around.

-s

Quick hit: playing with Terraform & IOS-XE Day 0 automation

Greetings Programs!

Over the last week or so I’ve been working on an infrastructure-as-code Azure hybrid cloud deployment, based on the Azure hub-and-spoke reference topology.

The gateways connecting Azure to the on-prem datacenter are, in my case, Cisco Cloud Services Router 1000v (CSR1kv) instances, using day 0 automation to inject a configuration into the Azure router.

Here’s a link to the Azure deployment guide for the CSR1kv, which has some really interesting tidbits. I’ll dive into those in a future post.

To actually execute the infrastructure build, I’m using a tool called Terraform.

Terraform does an incredible job of abstracting away all the details you normally have to worry about when provisioning cloud provider infrastructure. It’s a joy to work with. The documentation and examples are easy to follow, and the error messages tell you exactly what you did wrong and where to look.

Terraform Description file example

If that wasn’t cool enough, the Terraform binary is actually built into the Azure Cloud Shell. The net of that is, once you have your deployment files built, you can work with them using just Cloud Shell, which is super convenient.

Azure Cloud Shell – Checking the Terraform Version

That’s it for today. Lots of neat stuff in this lab that I’m dying to talk about; I can’t wait to share.

Best regards,

-s

Professional Development #1: Skillsets – T, Pi, & Stacking

Greetings Programs!

Today we’re going to talk about managing skillsets over time to give oneself the best chance of long term career success and contentment.

Pi shaped skillset: Sun and lighthouse

Introduction

In a perfect world we would all be geniuses with photographic memories who could maintain the maximum attained level of skill for anything we trained for. Sadly, this is not how it works for 99.99 percent of the population. It takes time and effort to develop skill, and when we stop doing that thing, skill deteriorates and memory fades.

What’s more, we now live in the age of digital transformation, a time of rapid change and upheaval. There is a good chance that the job you are doing today will not exist in 20 years.

When we combine the perishability of knowledge and skills with a rapidly evolving workplace, it’s self-evident that it would be wise to implement some sort of strategy. I have come across several, which I will cover in the following paragraphs. Let’s start with the one everyone knows: the T-shaped skillset.

T Shaped Skillset

T-shape skillset example

The T is the classic skillset mix that’s been promoted for a very long time. It’s based on the notion of developing deep expertise in one discipline, with a set of supporting skills around it. This was a very successful industrial age strategy and what a typical university education is designed to produce.

The fundamental problem with the T is that it’s very risky in the current day and age. To use an old analogy, if you’re an expert in making buggy whips and people start buying cars, you’re going to have to reskill, and that takes time. You’re going to have a lot of downtime, not making much money, while the reskilling occurs. Been there, done that; not the best strategy for today.

The obvious solution is to have a plan B. Which brings us to the next strategy.

Pi Shaped Skillset

Pi shaped skillset

With a Pi-shaped skillset, you cultivate a secondary skillset alongside the primary skillset you use in your day job. Second-skilling is a way to ramp up on emerging or disruptive technology without the income and career impact of reskilling. It also has the benefit of some cross-pollination, where deeply learned skills can be used across knowledge domains.

There is a third common strategy, typically seen with the self-employed, business owners, and entrepreneurs, which we will cover next.

Skill Stacking

Skill stacking

Skill stackers take a number of skills and combine them to good effect. You don’t have to be particularly great at any of them, but you know how to combine them to produce results. Skill stackers tend to be independent consultants, business owners, inventors, and entrepreneurs.

Skill stacking leverages the concept of the Pareto Principle, which states that approximately 80% of results come from 20% of effort.

In his 2008 book Outliers, Malcolm Gladwell popularized the ten-thousand-hour rule, which roughly translates to the idea that it takes about 10,000 hours of practice to master something.

If we combine the Pareto Principle with the 10,000-hour rule, we arrive at the idea that by investing approximately 2,000 hours of effort in learning and practicing something, we get some real efficiency out of our time, which enables us to have a broader and deeper skillset than we might otherwise be capable of.

Broadening and Creativity

One thing I have observed is that broadening your horizons and learning a little about a lot of things, particularly in the arts, can be beneficial on a number of levels. For example, Nobel Prize-winning scientists are 2.85 times more likely to engage in arts and crafts than their counterparts.

Closing thoughts

Anya really wants some fish. If only she had opposable digits and a fishing pole

We live in a world where to stay relevant professionally, we have to be agile; always preparing ourselves for the next opportunity. It’s exciting and frightening at the same time. Having a strategy to stay in position to take advantage of new opportunities is really important. I hope you enjoyed the read, and I’ll see you around.

-s

PKI for Network Engineers #11: Cisco IOS CA basic configuration

Greetings programs! Today we’re going to spin up IOS Certificate authority.

IOS CA is a valid workaround for the Self Signed Certificate Issue documented in Cisco Field Notice 70489: PKI Self-Signed Certificate Expiration in Cisco IOS and Cisco IOS XE Software. This article should give you enough information to spin up a CA that’s reasonably safe and easy to operate.

Anya thinks PKI is awesome.

Introduction

In PKIFNE part 10 (link), I introduced Cisco IOS Certification Authority, reviewing its use cases, deployment options, and enrollment challenges. In this installment we’re going to dip our toes in the water and put together a basic working configuration utilizing Simple Certificate Enrollment Protocol (SCEP). This design is suitable for production deployment in a small to medium sized network.

Notes:

There are some interactive steps in turning up a CA and enrolling spokes, and there is a one-line configuration difference between a spoke and a hub. For these reasons, the configurations are broken up into several snippets. After the snippets I’ll show you approximately what the output should look like, and we’ll follow that up with some verification commands for testing and troubleshooting your deployment.

For more detailed information on IOS CA, here is a link: IOS-XE public key infrastructure configuration guide.

Topology

In this example, we’re going to do a spoke and hub network, with a CA sitting behind the hub. For simplicity’s sake, I’ll forgo some of the things I would normally include in this kind of design (such as DMVPN with FVRF) so we can focus on the PKI part. Just know that this design is intended to work with a spoke and hub vpn topology.

Demonstration Topology

Quick verification that our topology is functional and we have the routes:

Verification from r5

Root CA

Overview

In this simple design we have a single root issuing CA. Let’s go ahead and pop in a somewhat minimal working config. We’re going to:

  • Generate a 2048-bit RSA keypair called CA
  • Create a folder to hold our PKI database
  • Create a CA server called CA
  • Set the database level to names
  • Set lifetimes on our CA and client certificates (feel free to alter as needed)
  • Set the issuing CA information
  • Automatically grant certificates
  • Enable the IOS HTTP server
  • Disable all HTTP modules except for the SCEP server
  • Put an access control list on the HTTP server

CA configuration snippet 1

!------Basic IOS CA-----------!
crypto key generate rsa modulus 2048 label CA
!
!**** leave blank line after pki*****
do mkdir pki
pki

!
!
ip http server
!
!*****Whitelist for SCEP clients*****
access-list 99 permit 10.1.1.1
access-list 99 permit 10.1.1.2
access-list 99 permit 10.1.1.3
access-list 99 permit 10.1.1.4
access-list 99 permit 10.1.1.5 
!
ip http access-class 99
!
crypto pki server CA
 database url flash:/pki/
 database level names
 lifetime ca-certificate 7000
 lifetime certificate 3500
 issuer-name cn=r1.densemode.com,O=Densemode,OU=IT
! 
!***Grant auto should be combined with http acl to restrict access***
grant auto
!
!*****you will need to interactively enter a passphrase****
!*****After no shut command is issued*****
no shut

This is approximately what you should expect to see when turning up the CA:

CA configuration Snippet 2

!------Disable unused HTTP session modules------!
ip http secure-active-session-modules none
ip http session-module-list RA SCEP
ip http active-session-modules RA

Client configuration

Overview

The basic workflow is pretty straightforward.

  • Configure a trust point
  • Authenticate the CA
  • Enroll the device with the CA

These are the features we’re going to configure on the client:

  • generate a 2048 bit RSA keypair
  • define a trustpoint to hold our configuration
  • set the enrollment url to the loopback of the CA
  • set our source interface to our loopback
  • set our subject-name field
  • set our FQDN field
  • attach our RSA keypair
  • pull down the CA certificate
  • enroll the device
  • optionally disable revocation checking (explained later)

Client configuration snippet 1

!------Router enrollment config------!
!*****This goes on the client routers in your topology*******!
!*****Be sure to change the subject-name and fqdn fields*****!
!*****To match your devices*******
!*****Ansible+jinja2 templates would be a great way******
!*****To template this configuration**********
!
crypto key generate rsa modulus 2048 label CA
!
crypto pki trustpoint CA
 enrollment url http://10.1.1.1
 source interface lo0
 subject-name cn=r2.densemode.com,O=Densemode,OU=IT
 fqdn r2.densemode.com
 rsakeypair CA
!
!******Accept the CA certificate*****
crypto pki authenticate CA

You’ll get a prompt asking you to verify that you trust the fingerprint of the CA certificate:

Client configuration snippet 2

!******Follow the onscreen prompts****
crypto pki enroll CA

There will be a short series of prompts prior to the enrollment request being sent to the CA. If everything is in order, you will get a success notification a few seconds after the request is sent out.

Client configuration snippet 3

!***** Use this on remote spokes to solve CA chicken and egg *****
!***** Reachability problem *****
!***** It's not a security risk because the hub router *****
!***** Will perform the validity checks *******
!
crypto pki trustpoint CA
revocation-check none

Verification of CA

Verification commands:

  • show crypto pki server
  • show crypto pki certificates verbose
  • show ip http server session-module
  • show ip http server status
  • show access-list
Our Certification Authority is up and running

Using the ‘show crypto pki server’ command, we can see that our CA is up and running, along with operational and configuration information.

CA Certificate

The ‘show crypto pki certificates verbose’ command allows us to inspect the CA server certificate. Notice the certificate usage is signature; this certificate will be used to digitally sign all certificates issued by the CA.

http server module status

‘show ip http server session-module’ can be used to verify we’re only running the minimum services needed for the CA to act as a SCEP server to issue certificates in-band over the network.

show ip http server status

This truncated output of ‘show ip http server status’ allows us to verify the access list that controls which devices are allowed to communicate with the SCEP server. In this case it’s access list 99.

http access list example

As we can see here, our access list allows only the loopback interfaces of our routers to request certificates from the CA. For a production network this is vital if you’re going to configure the CA to automatically grant certificates.

Exploring the contents of the CA database

contents of the pki database

In our sample configuration, we placed the CA database under a folder called PKI. As you can see from the output, there is a:

  • Serial number file
  • Certificate Revocation List (CRL)
  • A file for each certificate issued with the serial # as the filename.

Since we set the database level to names, the file named for each serial number contains the hostname and expiration date of the issued certificate. If you needed to revoke a certificate for a device, this is how you would verify you’re revoking the correct certificate. It’s also why we set the database level to names. 🙂

Client Verification

  • show crypto pki trustpoint status
  • show crypto pki certificates verbose
  • show run | section crypto pki trustpoint
Verifying trustpoint status

The command ‘show crypto pki trustpoint status’ allows us to verify that the trustpoint is properly configured and that we have a certificate issued from the CA. We can also inspect the fingerprints of the CA certificate and the router certificate.

Viewing a router certificate in verbose mode

‘show crypto pki certificates verbose’ allows us to inspect our router certificate in detail. This is an easy way to check the validity period of your certificate and verify that you used a suitably strong keypair in your certificate request.

Trustpoint configuration

‘show run | s crypto pki trustpoint’ enables us to check a couple of important details the other commands can’t give us. The important ones are:

  • source interface
  • revocation-check

Source Interface

Because we’re setting an Access Control List (ACL) on the web server of our issuing CA, it’s important to source the packets from the IP address that’s in that access list. If we don’t explicitly set the interface, the router will use the routing table to decide which IP address to use.

Revocation Check

In many common designs, the router will not have reachability to the CA until its VPN tunnels come up. However, if certificate authentication is being used to form the tunnel, by default the router will attempt to use SCEP to request the Certificate Revocation List (CRL), and the revocation check will fail. We have a chicken-and-egg problem.

Certificate Auth VPN and revocation checks

The solution is straightforward. Since all spokes must transit the hub router to reach the CA, we can have the hub router perform revocation checks and disable them on the spokes. This way, you can revoke a certificate for a spoke and it will be effective: the hub will no longer accept connections from it, preventing the revoked device from coming up on the network.

When we’re inspecting a spoke, we want to see ‘revocation-check none’ in the configuration output.
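For completeness, here is a sketch of the corresponding hub-side trustpoint setting, assuming the same trustpoint name used throughout this post; the hub keeps revocation checking enabled so revoked spoke certificates are rejected when tunnels are built:

crypto pki trustpoint CA
 revocation-check crl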

Revoking a certificate

One of the useful features of IOS CA, and of PKI in general, is that we can revoke a certificate at will. For VPN applications this is much better than pre-shared keys. In the following example, let’s imagine that R5 is being decommissioned.

Checking the serial # on the router

Using ‘show crypto pki certificates’ on router 5, we can see that the serial number of the certificate is 7. Let’s verify this on the CA.

Verifying the serial # on the CA

Using the more command, we can view the contents of certificate serial #7 in the CA database.

Let’s go ahead and revoke the certificate

Revoking a certificate

We revoked the certificate using ‘crypto pki server <ca name> revoke <serial #>’. This will cause the CRL file to be updated, so enrolled devices that perform revocation checking will stop accepting the certificate.
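For reference, with the CA name and serial number from this example, the command would look like the following (depending on the IOS release, the serial number may need to be entered in hex, e.g. 0x7):

crypto pki server CA revoke 7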

Wrap up

There you have it: a basic, utilitarian IOS CA configuration. There are a lot of details left out, such as auto-renewal and whether or not to make the CA’s private key exportable for DR purposes, but this is a good 80% solution in my opinion.

Hope you found this useful and I’ll see you around.

-s