Professional Development #1: Skillsets – T, Pi, & Stacking

Greetings Programs!

Today we’re going to talk about managing skillsets over time to give oneself the best chance of long-term career success and contentment.

Pi shaped skillset: Sun and lighthouse

Introduction

In a perfect world we would all be geniuses with photographic memories, able to maintain the maximum attained level of skill in anything we trained for. Sadly, this is not how it works for 99.99 percent of the population. It takes time and effort to develop a skill, and when we stop practicing it, the skill deteriorates and the memory fades.

What’s more, we now live in the age of digital transformation, a time of rapid change and upheaval. There is a good chance that the job you are doing today will not exist in 20 years.

When we combine the perishability of knowledge and skills with a rapidly evolving workplace, it’s self-evident that it would be wise to implement some sort of strategy. I have come across several, which I will cover in the following paragraphs. Let’s start with the one everyone knows: the T-shaped skillset.

T Shaped Skillset

T-shape skillset example

The T is the classic skillset mix that’s been promoted for a very long time. It’s based on the notion of developing deep expertise in one discipline, with a set of supporting skills around it. This was a very successful industrial-age strategy, and it’s what a typical university education is designed to produce.

The fundamental problem with the T is that it’s very risky in the current day and age. To use an old analogy: if you’re an expert in making buggy whips and people start buying cars, you’re going to have to reskill, and that takes time. You’re going to have a lot of downtime, not making much money, while the reskilling occurs. Been there, done that; not the best strategy for today.

The obvious solution is to have a plan B. Which brings us to the next strategy.

Pi Shaped Skillset

Pi shaped skillset

With a Pi-shaped skillset, you cultivate a secondary skillset alongside the primary skillset you use in your day job. Second-skilling is a way to ramp up on emerging or disruptive technology without the income and career impact of reskilling. It also has the benefit of some cross-pollination, where deeply learned skills can be applied across knowledge domains.

There is a third common strategy, typically seen with the self-employed and with business owners/entrepreneurs, which we will cover next.

Skill Stacking

Skill stacking

Skill stackers take a number of skills and combine them to good effect. You don’t have to be particularly great at any of them, but you know how to combine them to produce results. Skill stackers tend to be independent consultants, business owners, inventors, and entrepreneurs.

Skill stacking leverages the concept of the Pareto Principle, which states that approximately 80% of results come from 20% of effort.

In his 2008 book Outliers, Malcolm Gladwell popularized the ten-thousand-hour rule, which roughly translates to the idea that it takes about 10,000 hours of practice to master something.

If we combine the Pareto Principle with the 10,000-hour rule, we arrive at the idea that investing approximately 2,000 hours (20% of 10,000) in learning and practicing something captures a disproportionate share of the benefit. That efficiency enables us to build a broader and deeper skillset than we might otherwise be capable of.

Broadening and Creativity

One thing I have observed is that broadening your horizons and learning a little about a lot of things, particularly in the arts, can be beneficial on a number of levels. For example, Nobel Prize-winning scientists are 2.85 times more likely to engage in arts and crafts than their counterparts.

Closing thoughts

Anya really wants some fish. If only she had opposable digits and a fishing pole

We live in a world where, to stay relevant professionally, we have to be agile, always preparing ourselves for the next opportunity. It’s exciting and frightening at the same time. Having a strategy to stay in position to take advantage of new opportunities is really important. I hope you enjoyed the read, and I’ll see you around.

-s

PKI for Network Engineers #11: Cisco IOS CA basic configuration

Greetings programs! Today we’re going to spin up the IOS Certificate Authority (CA).

IOS CA is a valid workaround for the self-signed certificate issue documented in Cisco Field Notice 70489: PKI Self-Signed Certificate Expiration in Cisco IOS and Cisco IOS XE Software. This article should give you enough information to spin up a CA that’s reasonably safe and easy to operate.

Anya thinks PKI is awesome.

Introduction

In PKIFNE part 10 (link), I introduced the Cisco IOS Certification Authority, reviewing its use cases, deployment options, and enrollment challenges. In this installment we’re going to dip our toes in the water and put together a basic working configuration utilizing Simple Certificate Enrollment Protocol (SCEP). This design is suitable for production deployment in a small to medium-sized network.

Notes:

There are some interactive steps in turning up a CA and enrolling spokes. Also, there is a one-line configuration difference between a spoke and a hub. For these reasons, the configurations are broken up into several snippets. After the snippets I’ll show you approximately what the output should look like, and we’ll follow that up with some verification commands for testing and troubleshooting your deployment.

For more detailed information on IOS CA, here is a link: IOS-XE public key infrastructure configuration guide.

Topology

In this example, we’re going to build a spoke and hub network, with a CA sitting behind the hub. For simplicity’s sake, I’ll forgo some of the things I would normally include in this kind of design (such as DMVPN with FVRF) so we can focus on the PKI part. Just know that this design is intended to work with a spoke and hub VPN topology.

Demonstration Topology

Quick verification that our topology is functional and we have the routes:

Verification from r5

Root CA

Overview

In this simple design we have a single root issuing CA. Let’s go ahead and pop in a somewhat minimal working config. We’re going to:

  • Generate a 2048-bit RSA keypair called CA
  • Create a folder to hold our PKI database
  • Create a CA server called CA
  • Set the database level to names
  • Set lifetimes on our CA and client certificates (feel free to alter as needed)
  • Set the issuing CA information
  • Automatically grant certificates
  • Enable the IOS HTTP server
  • Disable all HTTP modules except for the SCEP server
  • Put an access control list on the HTTP server

CA configuration snippet 1

!------Basic IOS CA-----------!
crypto key generate rsa modulus 2048 label CA
!
!*****mkdir prompts for a directory name; the "pki" line*****
!*****and the blank line after it answer the prompt when pasting*****
do mkdir pki
pki

!
!
ip http server
!
!*****Whitelist for SCEP clients*****
access-list 99 permit 10.1.1.1
access-list 99 permit 10.1.1.2
access-list 99 permit 10.1.1.3
access-list 99 permit 10.1.1.4
access-list 99 permit 10.1.1.5
!
ip http access-class 99
!
crypto pki server CA
database url flash:/pki/
database level names
lifetime ca-certificate 7000
lifetime certificate 3500
issuer-name cn=r1.densemode.com,O=Densemode,OU=IT
!
!***Grant auto should be combined with the http acl to restrict access***
grant auto
!
!*****You will need to interactively enter a passphrase*****
!*****after the no shut command is issued*****
no shut

This is approximately what you should expect to see when turning up the CA:

CA configuration Snippet 2

!------Disable unused http session modules-----------!
!*****Define a session-module list named RA containing only SCEP,*****
!*****make it the active list, and disable all secure (HTTPS) modules*****
ip http secure-active-session-modules none
ip http session-module-list RA SCEP
ip http active-session-modules RA

Client configuration

Overview

The basic workflow is pretty straightforward.

  • Configure a trust point
  • Authenticate the CA
  • Enroll the device with the CA

These are the features we’re going to configure on the client:

  • Generate a 2048-bit RSA keypair
  • Define a trustpoint to hold our configuration
  • Set the enrollment URL to the loopback of the CA
  • Set our source interface to our loopback
  • Set our subject-name field
  • Set our FQDN field
  • Attach our RSA keypair
  • Pull down the CA certificate
  • Enroll the device
  • Optionally disable revocation checking (explained later)

Client configuration snippet 1

!------Router enrollment config-----------!
!*****This goes on the client routers in your topology*******!
!*****Be sure to change the subject-name and fqdn fields*****!
!*****to match your devices*******
!*****Ansible+Jinja2 templates would be a great way******
!*****to template this configuration**********
!
crypto key generate rsa modulus 2048 label CA
!
crypto pki trustpoint CA
enrollment url http://10.1.1.1
source interface lo0
subject-name cn=r2.densemode.com,O=Densemode,OU=IT
fqdn r2.densemode.com
rsakeypair CA
!
!******Accept the CA certificate*****
crypto pki authenticate CA

You’ll get a prompt asking you to verify that you trust the fingerprint of the CA certificate:

Client configuration snippet 2

!******Follow the onscreen prompts****
crypto pki enroll CA

There will be a short series of prompts before the enrollment request is sent to the CA. If everything is in order, you’ll get a success notification a few seconds after the request is sent out.

Client configuration snippet 3

!***** Use this on remote spokes to solve CA chicken and egg *****
!***** Reachability problem *****
!***** It's not a security risk because the hub router *****
!***** Will perform the validity checks *******
!
crypto pki trustpoint CA
revocation-check none

Verification of CA

Verification commands:

  • show crypto pki server
  • show crypto pki certificates verbose
  • show ip http server session-module
  • show ip http server status
  • show access-list
Our Certification Authority is up and running

Using the ‘show crypto pki server‘ command, we can see that our CA is up and running, along with operational and configuration information.

CA Certificate

The ‘show crypto pki certificates verbose‘ command allows us to inspect the CA server certificate. Notice the certificate usage is signature: this certificate will be used to digitally sign all certificates issued by the CA.

http server module status

‘show ip http server session-module‘ can be used to verify that we’re only running the minimum services needed for the CA to act as a SCEP server, issuing certificates in-band over the network.

show ip http server status

This truncated output of ‘show ip http server status’ allows us to verify which access list restricts the devices that are allowed to communicate with the SCEP server. In this case it’s access list 99.

http access list example

As we can see here, our access list allows only the loopback interfaces of our routers to request certificates from the CA. For a production network this is vital if you’re going to configure the CA to automatically grant certificates.

Exploring the contents of the CA database

contents of the pki database

In our sample configuration, we placed the CA database in a folder called pki. As you can see from the output, the database contains:

  • A serial number file
  • A Certificate Revocation List (CRL)
  • A file for each certificate issued, with the serial # as the filename

Since we set the database level to names, the file for each serial number contains the hostname and expiration date of the issued certificate. If you needed to revoke a certificate for a device, this is how you would verify that you’re revoking the correct certificate. It’s also why we set the database level to names. 🙂

Client Verification

  • show crypto pki trustpoint status
  • show crypto pki certificates verbose
  • show run | section crypto pki trustpoint
Verifying trustpoint status

The command ‘show crypto pki trustpoint status’ allows us to verify that the trustpoint is properly configured and that we have a certificate issued from the CA. We can also inspect the fingerprints of the CA certificate and the router certificate.

Viewing a router certificate in verbose mode

‘show crypto pki certificates verbose’ allows us to inspect our router certificate in detail. This is an easy way to check the validity period of your certificate and to verify that you used a suitably strong keypair in your certificate request.

Trustpoint configuration

‘show run | s crypto pki trustpoint‘ enables us to check a couple of important details the other commands can’t give us. The important ones are:

  • source interface
  • revocation-check

Source Interface

Because we’re setting an Access Control List (ACL) on the web server of our issuing CA, it’s important to source the packets from the IP address that’s in that access list. If we don’t explicitly set the source interface, the router will use the routing table to decide which IP address to use.

Revocation Check

In many common designs, the router will not have reachability to the CA until its VPN tunnels come up. However, if certificate authentication is being used to form the tunnel, by default the router will attempt to use SCEP to request the Certificate Revocation List (CRL), and the revocation check will fail. We have a chicken-and-egg problem.

Certificate Auth VPN and revocation checks

The solution is straightforward. Since all spokes must transit the hub router to reach the CA, we can have the hub router perform revocation checks and disable them on the spokes. This way, you can revoke a certificate for a spoke and the revocation will be effective: the hub will no longer accept connections from that spoke, preventing the revoked device from coming up on the network.

When we’re inspecting a spoke, we want to see ‘revocation-check none’ in the configuration output.

Revoking a certificate

One of the useful features of IOS CA, and PKI in general, is that we can revoke a certificate at will. For VPN applications this is much better than pre-shared keys. In the following example, let’s imagine that R5 is being decommissioned.

Checking the serial # on the router

Using ‘show crypto pki certificates’ on router 5, we can see that the serial number of the certificate is 7. Let’s verify this on the CA.

Verifying the serial # on the CA

Using the ‘more’ command, we can view the contents of certificate serial #7 in the CA database.

Let’s go ahead and revoke the certificate

Revoking a certificate

We revoked the certificate using ‘crypto pki server <ca name> revoke <serial #>’. This causes the CRL file to be updated, so enrolled devices that perform revocation checking will stop accepting the certificate.
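Plugging our values into that template (server name CA, serial number 7 from the earlier output), the privileged exec command should look like this:

crypto pki server CA revoke 7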

Wrap up

There you have it: a basic, utilitarian IOS CA configuration. There are a lot of details left out, such as auto-renewal and whether or not to make the private key of the CA exportable for DR purposes, but this is a good 80% solution in my opinion.

Hope you found this useful and I’ll see you around.

-s

The DevOps Chronicles part 2: Containers and Kubernetes overview.

Greetings programs.

In this post we’re going to do a brief overview of containers and Kubernetes. My aim here is to set the stage for further exploration and to give a quick back-of-the-napkin understanding of why they’re insanely great.

A little breezy, but warm this morning.

A brief history of distributed systems


Image taken from the Kubernetes documentation
( https://kubernetes.io/docs/concepts/overview/what-is-kubernetes/ )

Traditional Deployment

From the earliest days of computers with multitasking operating systems, applications have been hosted directly on top of the host operating system. The process of deploying, testing, and updating applications was slow and cumbersome, to say the least. The result: mounting technical debt that eventually consumes more and more resources just to keep legacy applications operating, without any ability to update or augment them.

Virtualized Deployment

Virtualization enabled multiple copies of a virtual computer to run on top of a physical computer. This dramatically improved the efficiency of resource utilization and the speed at which new workloads could be instantiated. It also made it convenient to run a single application or service per machine.

Because of the ease with which new workloads could be deployed, in this era we started to see horizontal scaling of applications as opposed to vertical scaling. This means running multiple copies of a thing and placing the copies behind a load balancer, as opposed to making the thing bigger and heavier. This is an example of how introducing a new layer of abstraction results in innovation, which is a hallmark of the history of computers.

However, some problems emerged with the explosion of virtualized distributed systems. Horizontal scaling techniques added additional layers of complexity, which made deployments slower and more error prone. Operational issues also emerged due to the need to monitor, patch, and secure these growing fleets of virtual machines. It was two steps forward, one step backward.

Container deployment

Containers are similar in concept to virtual machines; however, the isolation occurs at the operating system level rather than the machine level. There is a single host operating system, and each application runs in isolation.

Containers are substantially smaller and start almost instantly. This allows containers to be stored in a central repository, where they can be pulled down and executed on demand. This new abstraction resulted in the innovation of immutable infrastructure, which solves the day-2 operations problem of caring for and feeding running workloads. However, it doesn’t solve the complexity of deploying the horizontal scaling infrastructure. This is where Kubernetes comes in.

Kubernetes: Container orchestrator

Kubernetes is a collection of components that:

  • Deploys tightly coupled containers (pods) across the nodes
  • Scales the number of pods up and down as needed
  • Provides load balancing (ClusterIP, Ingress controller) to the pods
  • Defines who the pods can communicate with
  • Gives pods a way to discover each other
  • Monitors the health of pods and services
  • Automatically terminates and replaces unhealthy pods
  • Gracefully rolls out new versions of a pod
  • Gracefully rolls back failed deployments

Kubernetes takes an entire distributed application deployment and turns it into an abstraction that is defined in YAML files that can be stored in a version control repository. A single K8s cluster can scale to hundreds of thousands of Pods on thousands of nodes. This abstraction has resulted in an explosion of new design patterns that is changing how software is written and architected, and has led to the rise of Microservices. We’ll cover Microservices in another post.
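To give a feel for what that YAML abstraction looks like, here is a minimal sketch of a Deployment manifest (the names, image, and replica count here are hypothetical, purely for illustration):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3                  # Kubernetes keeps three copies of the pod running
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25    # pulled from a central repository on demand
          ports:
            - containerPort: 80

Commit a file like this to version control and apply it to the cluster, and Kubernetes continuously works to make reality match it: scaling, replacing unhealthy pods, and rolling out new versions.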

K8s single node distributed app example. Taken from Stephen Grider’s Excellent course Docker and Kubernetes: the complete guide

Wrap up

I hope this was helpful and gave you a useful concept of what Kubernetes is and why it’s such a game changer. Would you like to see more diagrams? Let me know your thoughts.

Best regards,

-s

Anya says hi.

Blowing the dust off, redux

Greetings programs.

I took this photo just a few minutes ago on my Monday morning beach walk. I hope you find it relaxing.

Holy cow, it’s been 8 months since my last dispatch. I’m still studying and learning cool stuff, but I have not done a good job of stopping to document my learnings; I’m going to make another attempt to rectify that.

It’s been an eventful year. In addition to learning DevOps tools and techniques and studying cloud-native software design, I’ve also changed jobs. As of August I’ve been working at Cisco Systems as a pre-sales Architect. It’s a dream job and I absolutely love working with my customers. I learn as much from them as they learn from me.

So yeah, the new job thing: several months of drinking from the firehose, struggling to take everything in and find my stride.

I didn’t feel like that was enough new stuff, so I enrolled in WGU’s Cybersecurity Bachelor’s program starting in January. I don’t **need** a degree, but I don’t think it would hurt to become a more well-rounded technologist. I’ve historically used certifications for learning paths, which is fine, but they tend to be narrowly defined in scope, which can lead to polarized thinking.

So here’s the plan for the blog: I’m committing to posting something every Monday morning, even if it’s just a “hello, I’m drinking coffee and checking my twitter feed, how about you?”. I have a sizable collection of study notes, so there’s no shortage of meaningful topics. Hopefully I’ll dip into that more than the small talk. :)

Have a great week, talk to you next Monday morning.

-s

PKI for Network Engineers Ep 10: Cisco IOS CA introduction

Greetings programs!

In the next few PKI for Network Engineers posts, I’m going to cover Cisco IOS CA. If you’re studying for the CCIE Security lab, or you’re operating a DMVPN or FlexVPN network and you’d like to use digital certificates for authentication, then this series could be very useful for you.

Introduction

IOS-CA is the Certification Authority built into Cisco IOS. While it’s not a full-featured enterprise PKI, for the purpose of issuing certificates to routers and firewalls to authenticate VPN connections it’s a fine solution. It’s very easy to configure and supports a variety of deployment options.

Key points

  • Comes with Cisco IOS
  • Supports enrollment over SCEP (Simple Certificate Enrollment Protocol)
  • RSA based certificates only
  • Easy to configure
  • Network team maintains control over the CA
  • Solution can scale from tiny to very large networks

Deployment Options

IOS-CA has the flexibility to support a wide variety of designs and requirements. The main factors to consider are the size of the network, what kind of transport is involved, how the network is dispersed geographically, and the security needs of the organization. Let’s briefly touch on a few common scenarios to explore the options.

Single issuing Root CA

For a smaller network with a single datacenter or an active/passive datacenter design, a single issuing root may make the most sense. It’s the most basic configuration and it’s easy to administer.

This solution would be appropriate when the sole purpose of the CA is to issue certificates for authenticating VPN tunnels in a smaller network of approximately a hundred routers or fewer. Depending on the precautions taken and the amount of instrumentation on the network, the blast radius of a compromise of this CA would be relatively small, and you could spin up a new CA and enroll the routers to it fairly quickly.

The primary consideration for this option is CA placement. A good choice would be a virtual machine on a protected network with access control lists limiting who can attempt to enroll. The CA is relatively safe from being probed and scanned, and it’s easily backed up.


Fig 1. Single issuing root

Offline Root plus Issuing Subordinate CAs

For a network that has multiple datacenters and/or spans multiple continents, it would make more sense to create a root CA, then place a subordinate issuing CA in each datacenter that contains VPN hubs. By default, IOS trusts a subordinate CA, meaning the root CA’s certificate and CRL need not be made available to the endpoints to prevent chaining failures.

As is the case with any online/issuing CA, steps should be taken to use access controls to limit access to HTTP/SCEP.

Fig 2. Offline Root
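On IOS, a subordinate CA is configured much like the root, with the addition of ‘mode sub-cs’ and a trustpoint of the same name that enrolls with the root. A minimal sketch (address hypothetical, details trimmed):

!*****Subordinate issuing CA sketch*****
crypto pki trustpoint SUBCA
enrollment url http://<root-ca-address>
!
crypto pki server SUBCA
database url flash:/pki/
mode sub-cs
no shut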

Issuing Root with Registration Authority

Fig 3. Single root with Registration authority

In this design, there is still a single issuing root; however, the enrollment requests are handled by an RA (Registration Authority). An RA acts as a proxy between the CA and the endpoint. This allows the CA to have strict access controls yet still be able to process enrollment requests and CRL downloads.
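An IOS RA is itself a flavor of the certificate server, enabled with ‘mode ra’ and a matching trustpoint pointed at the issuing CA. A minimal sketch (names and address hypothetical):

!*****Registration Authority sketch*****
crypto pki trustpoint RA
enrollment url http://<issuing-ca-address>
subject-name cn=ra.densemode.com,O=Densemode,OU=IT
!
crypto pki server RA
mode ra
no shut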


Offline Root & Issuing Subordinate CAs w/RA

The final variation is a multi-level PKI that uses the RA to process enrollment and CRL downloads. This design provides the best combination of security, scaling, and flexibility, but it is also the most complex.

Bootstrapping remote routers

Consider a situation where you’re turning up a remote router and you need to bring up your transport tunnel to the datacenter. The Certification Authority lives in the datacenter on a limited-access network behind a firewall. In order for the remote router to enroll in-band using SCEP, it would need a VPN connection. But our VPN uses digital certificates.

There are three options for solving this:

Sideband tunnel w/pre-shared key

This method involves setting up a separate VPN tunnel that uses a pre-shared key in order to provide connectivity for enrollment. This could be a temporary tunnel that’s removed when enrollment is complete, or it could be shut down and left in place for use at a later time for other management tasks.

A sideband tunnel for performing management tasks that may bring down the production tunnel(s) is useful, making this a good option. Its main drawback is the amount of configuration work required on the remote router. It also depends on some expertise on the part of the installer, a shortcoming shared with the manual enrollment method.

Registration Authority

A Registration Authority is a proxy that relays requests between the Certification Authority and the device requesting enrollment. In this method an RA is enabled on the untrusted network for long enough to process the enrollment request. Once the remote router has been enrolled, the transport tunnels will come up and bootstrapping is complete.

Using a proxy allows in-band enrollment with a minimum amount of configuration on the remote router, making it less burdensome for the on-site field technician. The tradeoff is that we’re shifting some of that work to the head end. Because the hub-site staff is likely to possess more expertise, this is an attractive trade-off.

Manual/Terminal enrollment

In this method, the endpoint produces an enrollment request on the terminal, formatted as a base64 PEM (Privacy Enhanced Mail) blob. The text is copied and pasted into the terminal of the CA. The CA processes the request and outputs the certificate as a PEM file, which is then pasted into the terminal of the client router.

While this does have the advantage of not requiring network connectivity between the CA and the enrolling router, it does have a couple of drawbacks. Besides being labor intensive and not straightforward for a field technician to work with, endpoints enrolled with the terminal method cannot use the auto-rollover feature, which allows routers to renew certificates automatically prior to expiration. The author regards this as an option of last resort.
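For reference, the trustpoint side of this method is just a different enrollment setting (a sketch, assuming a trustpoint named CA as elsewhere in this series):

!*****Terminal (copy/paste) enrollment sketch*****
crypto pki trustpoint CA
enrollment terminal pem

With this in place, ‘crypto pki authenticate CA’ expects the CA certificate to be pasted into the terminal, and ‘crypto pki enroll CA’ prints the request to the screen instead of sending it over the network.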

CRL download on spokes problem

The issue here is that the spoke router needs to download the certificate revocation list (CRL), but the CA that holds the CRL is reachable only over a VPN tunnel, and that tunnel cannot come up because the spoke can’t talk to the CA to download an unexpired copy of the CRL in order to validate the certificate of the head-end router.

This is actually a pretty easy problem to solve: disable CRL checking on the spoke routers, but leave it enabled on the hub routers. This way the administrator can revoke a router’s certificate, and that router will not be allowed to join the network, because the hub router will see that its certificate has been revoked.
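On each spoke, that’s a one-line trustpoint setting (assuming a trustpoint named CA):

crypto pki trustpoint CA
revocation-check none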

Wrap-up

Ok, so there are the basics. In the next installment, we’ll step through a minimum working configuration.


The DevOps Chronicles part 1: Why I’m studying python for 5 hours a week.

So, I’m committed to the study of coding in python for 250 hours a year. And I’m about 12 weeks in.

I have no ambitions of becoming a professional developer. I like being an infrastructure guy. So why in the heck am I making this commitment as part of my professional development?

Here’s a fundamental problem infrastructure designers & engineers of all stripes are faced with today:

The architecture, complexity, and feature velocity of modern applications overwhelm traditional infrastructure service models. The longer we wait to face this head on, the larger the technical debt will grow; worse, we become a liability to the businesses that depend on us.

Allow me to briefly put some context around this. I’m going to use some loose terminology and keep things simple and high level so we can quickly set the stage. Then I’ll explain where python fits in.

Application Components

Almost all software applications have three major components:

  1. data store
  2. data processing
  3. user (or application) interface

Overall, the differences in complexity come down to component placement and whether or not each function supports multiple instances for scaling and availability.

Application Architectures

A stage 1, or basic, client/server application places all components on a single host. Users access the application using some sort of lightweight client or terminal that provides input/output and display operations. If you buy something at Home Depot and glance at the point-of-sale terminal behind the checkout person, you’re probably looking at this kind of app.

A stage 2 client/server application has a database back end, with the processing and user interface running as a monolithic binary on the user’s computer. This is the bread and butter of custom small business applications. A classic example would be the program you see the receptionist using at a doctor’s or dentist’s office when they’re scheduling your next appointment. Surprisingly, this design pattern is still commonly used to develop new applications even today, because it’s simple and easy to build.

A stage 3 client/server application splits the three major components up so that they can run on different compute instances. Each layer can scale independently of the others. In older designs, the user interface is a binary that runs on the end user’s computer; newer designs use web-based user interfaces. These tend to be applications with scaling requirements in the hundreds to tens of thousands of concurrent sessions, and they tend to be constrained by the database design: a typical SQL database is single-master and often the choke point of the app. There are many examples in manufacturing and ERP systems of 3-layer designs where the UI and data processing are scale-out, backed by a monster SQL Server failover cluster.

All of these design patterns are based on the presumption of fixed, always-available compute resources underpinning them. They’re also generally vendor-centric technology stacks with long life cycles. They all depend on redundancy in the underlying compute, storage, and network components for resiliency, with the 3-tier design being the most robust, as two of the layers can tolerate failures and maintain service.

Traditional infrastructure service models are built around supporting these types of application designs, and that works just fine.

Then came cloud computing, and the game changed.

Stage 4: Cloud native applications.

Cloud native applications operate as an elastic confederation of resilient services. Let’s unpack that.

Elastic means service instances spin up or down dynamically based upon demand. When an instance has no work to do, it’s destroyed. When additional capacity is required, new instances are dynamically instantiated to accommodate demand.

Resilient means individual service instances can fail and the application will continue running: the failed service instance is simply destroyed and a healthy replacement is instantiated.

Confederation means a collection of services designed to be callable via discoverable APIs. If an application needs a given function that an existing service can offer, the developer can simply have the application consume it. Components of an application can therefore live anywhere, as long as they’re reachable and discoverable over the network.

Because of this modular design, it’s easy to quickly iterate on software, adding functions and features to make it more valuable to its users.

Great! But as infrastructure people, how do we support something like this? The fact is, this is where the traditional vendor-centric shrink-wrap tool approach breaks down.

Infrastructure delivery and support

Here are the main problems with shrink-wrap monolithic tools in the context of cloud-native application design patterns:

  1. Tool sprawl (ever more tools and interfaces to get tasks done)
  2. Feature velocity quickly renders tools out of date
  3. Graphical tools surface a small subset of a product’s capabilities
  4. Difficulty automating/orchestrating/managing across technology domains

Figure 1 shows what the simple task of instantiating a workload looks like on four different public cloud providers. Project this sprawl across all the componentry required to turn up an application instance, and the scope of the problem starts to come into focus.

Fig 1: Creating a virtual machine on different cloud providers.

So how in the heck do you deal with this? The answer is surprisingly simple: use infrastructure tools that mirror the applications the infrastructure needs to support.

TL;DR: Learn how to use cloud-native tools and toolchains. And this brings us to the title of the blog post.

Python as a baseline competency

Python is the glue that lets you assemble your own collection of services in a way that’s easy for your organization’s developers and support people to consume. The better you are with python, the easier it is to consume new tools and create toolchains. Imagine that you’re at a restaurant, trying to order for yourself and your family. If you can communicate fluently, it’s much easier to get what you want than if you don’t speak the language and are reduced to pointing at pictures and pantomiming.

Interestingly, I’ve found that many of the principles of good network and systems design are directly applicable to writing a good program: encapsulation, separation of concerns, information hiding, discrete building blocks, etc.

Use Case: Creating a Virtual Machine

Let’s take the example of creating a virtual machine.

For creating a VM, we could write a class or classes in python that allow someone to spin up, shut down, and check the status of virtual machines on four different public cloud providers, in a manner that abstracts away the specifics of each provider and presents them to the user as a generic service. Then we, or anyone else on the team, can use it to spin up workloads with a simple call, without having to know the implementation details of each provider. (See the sketch after the list below.)

The power of making tools like this is:

  1. we only have to solve the problem of doing a thing once
  2. we encapsulate it
  3. we make it available to other code
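To make the shape of this concrete, here’s a minimal, purely illustrative sketch. The provider class below is a stand-in stub so the example runs anywhere; a real version would make each cloud provider’s SDK calls behind the same interface:

# Minimal sketch of a provider-agnostic VM abstraction.
# The concrete provider here is an illustrative stub, not a real cloud SDK.
from abc import ABC, abstractmethod


class VirtualMachineProvider(ABC):
    """Generic interface every cloud provider class must implement."""

    @abstractmethod
    def create(self, name: str) -> str:
        """Create a VM and return its provider-specific ID."""

    @abstractmethod
    def destroy(self, vm_id: str) -> None:
        """Shut down and remove the VM."""

    @abstractmethod
    def status(self, vm_id: str) -> str:
        """Return a normalized status string, e.g. 'running'."""


class FakeCloudProvider(VirtualMachineProvider):
    """Stand-in provider so the sketch runs without any cloud credentials."""

    def __init__(self):
        self._vms = {}

    def create(self, name: str) -> str:
        vm_id = f"vm-{len(self._vms) + 1}"
        self._vms[vm_id] = "running"
        return vm_id

    def destroy(self, vm_id: str) -> None:
        self._vms.pop(vm_id, None)

    def status(self, vm_id: str) -> str:
        return self._vms.get(vm_id, "absent")


def spin_up_web_tier(provider: VirtualMachineProvider, count: int) -> list:
    """Caller code is provider-agnostic: swap in an AWS/Azure/GCP/DO class."""
    return [provider.create(f"web-{i}") for i in range(count)]


if __name__ == "__main__":
    cloud = FakeCloudProvider()
    vm_ids = spin_up_web_tier(cloud, 3)
    print([(vm_id, cloud.status(vm_id)) for vm_id in vm_ids])

The caller only ever touches the generic interface, which is exactly the encapsulation payoff described above.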

In the next installment of my DevOps journey, I’ll take a stab at writing the python Virtual Machine class described above, and we’ll see how well it works.

Best regards,

-s

Blowing the dust off

Greetings!

Hard to believe it’s been a year since the last update. Time really flies doesn’t it?

I’ve maintained a study schedule; however, it’s been oriented around topics that don’t make sense for a technical blog. That’s going to continue for a time, but I’m also bringing relevant topics back into the mix. This means more regular updates here (yay!).

In addition to straight technical content, I’ll be adding some context and high-level overview type things. This is my way of processing and participating in the ongoing community dialog about learning, skill stacks, and general professional development.

There are a lot of exciting things going on and not enough hours in the day. I feel like a kid in a candy store when I get up every morning. Let’s get to stuffing our pockets shall we?

Best regards,

-s

CCIE Security – Why and How

Greetings programs!

I’ve been quite surprised by the response to the news that I passed the lab. I suspect a lot of it is due to my friend Katherine, who has a sizeable following on account of her excellent blog. If you haven’t checked it out, you should!

Anyhow, from all my new friends I received a lot of questions. This post is intended to answer them in one place, supply some additional context, and share some of the key resources I used in my preparation.

The big question, Why?

For a large undertaking like this, you need to have a pretty strong reason, or reasons. It’s a sizeable commitment, and when things get really hard, you’re likely to give up if there’s not a good reason for doing it.

In my case there were several reasons.  Listed in no particular order:

  • Increase my knowledge level. I like to have a deep understanding of what I’m working with.
  • Increase my income earning capability
  • Prove to myself that passing Route/Switch wasn’t a fluke; that I could put my mind to a goal of similar scope and do it again in an area where I have less background.
  • I enjoy the topics.  These toys are a **lot** of fun to play with.

Having seen mine, hopefully it will help you with defining yours.

The commitment required

Let’s take a realistic assessment of the level of effort required. Understand this is going to be specific to the individual, based upon:

  • The existing expertise brought to the starting line
  • How much of the blueprint covers work performed in your day job
  • Ability to purchase quality training materials
  • Quality of the structures used to facilitate the learning process
  • Consistent application of effort (i.e. not engaging in distractions during study time)
  • Capacity to absorb and process the information

In my case, I was already familiar with much of the VPN technology, and I knew my way around ASA firewalls and Firepower, all of which I implement and support in my day job. Additionally, I had recently completed CCIE Route/Switch, so I knew how to structure information for later retrieval and how to establish a sustainable study routine. However, I was a complete newbie to ISE and Cisco wireless networking.

With all that, it took me approximately 19 months and two attempts, studying an average of 20 hours a week, for a total effort of somewhere around 1600-1700 hours.  I reckon the range for the vast majority of candidates would fall somewhere between 1200 and 2500 hours, presuming they average 20 hours a week of quality study time without significant gaps.

Support and buy in from your family

Consider this: you’re already employed in a full-time job, and you have adult obligations to maintain. When you start studying 20+ hours a week, your wife and your kids (if you have them) are going to feel your absence, and it’s not going to be easy for them. There need to be relevant reasons for them as well, because they are going to sacrifice alongside you. It’s a good idea to talk to them and ensure they have a crystal-clear understanding of what the next 18 to 24 months are going to be like, and get their buy-in. This is not worth losing your family over!

I strongly recommend taking one day off a week to spend that quality time with your family. That may not be possible in your final push, but that’s just the end phase. This is a marathon, not a sprint.

Establishing milestones

One thing that derails a lot of candidates is the lack of a clear path to the end goal. You need a sense of urgency and momentum. It’s easy to get lost in the technical deep dives (especially with the larger topics like VPN and ISE), but it’s important to keep moving.

It’s ok to make adjustments as you go along, and there will be setbacks. The idea is to have something that helps you stay on track, so that at some point you’ve got your ass planted in a seat at a lab location and you’re making it happen.

Example high level outline

Months 1-3:  Pass one through the blueprint

  • Focus on basics
  • 70% theory 30% lab

Months 4-7:  Pass two through the blueprint

  • Go deeper into the details and advanced use cases; read RFCs and advanced docs.
  • 50% theory 50% lab

Months 8-9:  Prepare for written

  • Heavy on theory and facts
  • Study written only topics
  • 80% theory 20% lab

End of month 9: take written exam

Months 10-12: Pass three through the blueprint

  • Repeat deep dive, with more emphasis on hands on
  • focus mostly on VPN and ISE
  • 30% theory 70% lab

Months 13-14: multi-topic labs

  • No more deep dives
  • Combine multiple technologies in each labbing session
  • get used to shifting gears
  • Wean off the CLI. Build all config in Notepad
  • 20% theory 80% lab

Schedule lab for 90 days out

Months 15-17: final push for lab

  • Lab every topic on blueprint during the week
  • Lab all topics in one sitting 1x/week
  • all config in notepad
  • speed and accuracy paramount
  • Stop using debugs
  • Time all exercises
    • include verification in time
  • Get friend to break your labs for troubleshooting practice
  • 10% review, 90% lab

End of month 17: First attempt

The battle of memory

Setting up your notebook and files

In the beginning, you’re not going to have much to look at in terms of accumulated knowledge. That will gradually change. It’s a really good idea to set up a system at the start; it’s a whole lot easier before you’ve accumulated a bunch of stuff.

My preference these days is OneNote, but other tools will of course work fine. Heck, even an old 3-ring binder can do the job if that’s what you’re into.

Converting the blueprint into something more practical

When you’re setting up your notebook, the first intuition might be to mirror the blueprint, i.e. a tab called 1.0 Perimeter Security and Intrusion Protection. Here’s my suggestion: organize by product and technology, with additional tabs for unprocessed notes, running and completed tasks, general info, classes taken, etc. Here’s a screenshot of my tab list for example:

OneNote tab list example

Should you start a blog?

There’s a very effective learning technique called the Feynman Technique. It consists of 5 steps.

  1. Write down the name of a concept.
  2. Write out an explanation of the concept, as you understand it, in plain English.
  3. Review what you’ve written and study the gaps you’ve discovered in your knowledge. Update your explanation.
  4. Review your explanation a final time and look for ways to simplify and improve clarity.
  5. Share this explanation with someone.

It really does work.

Blogging is a perfect vehicle for this sort of thing. But here’s the rub: blogging is a skill that takes effort to develop. In my own experience, my early posts took hours to write, and I just about gave up due to the amount of time it took. But as I kept doing it, I got more efficient, and ultimately I found it gave a good return on the time invested.

It’s not a requirement of course. It’s just a tool, and there are lots of other useful tools to choose from.

Written prep and fact memorization Tools

The written covers things you’ll see in the lab, but it also stresses blueprint items that aren’t testable in a hands-on environment, such as knowledge about theory, standards and operations, and emerging technologies.

You’ll need to have a lot of facts at your recall for the written, so you’re going to have to do a lot of memorization.  That means good old spaced repetition.

The two products that I’ve used to aid memorization are Anki and SuperMemo. I have a personal preference for SuperMemo, but it’s a quirky program with a learning curve. Anki is much easier to pick up and start using. Either of these tools is a good aid for the memorization grind that’s part and parcel of passing a CCIE written exam.

Gathering learning resources

This will be an ever-expanding list as you get deeper into your studies, but before you start, it makes sense to have some material lined up to take you on that first tour through the blueprint. As there is not a one-stop guide for the security track, this first step can take a little bit of time. You should have your study material for the first six to nine months lined up before you dive in, to ensure you make productive use of your time. The resource list near the end of this post will hopefully make a good starting point.

Labbing equipment

You will want to invest in some equipment for this track. If you’re serious about doing this, it’s the best way. Yes, you can do quite a bit in GNS3 and EVE-NG, but you’re going to want a lot of seat time with the switch and the firewalls, and you’re going to want large topologies that include resource-hungry appliances. A good strategy is to pool your resources with a couple of other people and host your gear in a cheap colo.

This is the lab gear that I recommend:

  • Server: ESXi host with 128gb ram, 12 cores, 1tb disk
  • Switch: Catalyst 3650 or 3850 is best, a 3750x is a good budget option
  • AP:  3602i or similar
  • IP Phone: Cisco 7965G
  • Firewall: 2xASA 5512-X with clustering enabled.
  • Optional gear for COLO:
    • Terminal Server (I Prefer OpenGear IM series: not cheapest but quite secure)
    • Edge firewall:  ASA 5506x

Equipment thoughts and notes

There are lots of good deals on older 1U rack-mount servers on eBay; that’s easy information to research. If you can afford more than 128gb of ram, go for it. You’ll use all of it once you start labbing larger topologies.

When shopping for switches on eBay, avoid switches with the LAN Base image, as IP Base is the minimum version required to use most features on the blueprint. Also, it never hurts to check the TrustSec compatibility matrix for the specific part you’re looking at buying.

Yes, you want to drop the coin on a pair of ASAs. Just do it. You don’t have to spring for the X series, but you do need multi-context and clustering, and you need to be able to run version 9.2 of the ASA software; those are your minimums.

My labbing partner and I started out with a cheap terminal server, and the darn thing kept getting DoSed, so we dropped 400 bucks on an older Opengear terminal server, and that thing is bulletproof. Worth the extra money for out-of-band access to your gear.

Most colos, even budget ones, have remote hands to power-cycle things for you if they get locked up. But if you’re not comfortable with that, hooking up a PDU like an old APC AP7931 to your terminal controller will work. I use one of these for my home rack and they’re great.

About software and platform versions

You want to match the versions on the blueprint as much as you can. Where that’s not realistic or is a hassle, I strongly urge you to periodically check your syntax and configs against the lab version if possible.

Personal experience: in my environment I labbed on the newer version of the CSR1kv image that has the 1mb/sec throughput limitation in demo mode. That’s great, but some of my configuration speed optimizations blew up on lab day. That repeated in several areas, including differences between the 3750x in my lab and the 3850 in the actual lab.

Bottom line: it’s a lot of money and stress to go sit the lab, and when the clock is ticking and you’re running into unexpected platform and software issues, the little bit of money you saved on that screaming eBay deal for an older part will be long forgotten.

About licensing the appliances in your lab.

Most of the stuff is not a problem; you can run it on demo licenses. The exceptions are NGIPS, WSA, and ESA, which are a pain. If you don’t work for a partner or Cisco, your local Cisco SE should be able to get you 3-month licenses for those appliances to assist with lab prep.

Establishing the daily routine

This is a critical item. It’s just like a fitness routine: consistency is everything. You’re not going to be able to skip studying during the week and binge on weekends expecting quality results. It just doesn’t work that way.

In my view, the bare minimum to do this in 18 months is 20 hours a week. That’s 20 focused hours, without distractions.

There are going to be days when you *really* don’t want to study. Embrace the suck my friend, and just do it anyway.

Suggested Schedule

  • 3-4 hours a day Monday through Friday
  • 8 hours Saturday
  • Sunday off

If you stick to that week in and week out, you will make good steady progress.

One thing that really helped me is a tool called Tomato Timer. The idea is that for 25 minutes you’re hyper-focused, then you take 5 minutes off, then it’s 25 minutes on again. It really helps with procrastination and the urge to check your twitter feed and all that.

Responding when life gets in the way

FACT:  Things are going to happen that are out of your control.  You’re probably going to get pulled away from your studies for a week or two here and there.

Try to at least read a little bit and stay mentally engaged on some level every day until the clouds part and you’re able to get back on the grind. The main thing is not to let a short break turn into a long one; it happens very easily, and you’ll be upset with yourself as you retrace your steps to get back to where you were in your progress.

Resource List:

Here’s a list of learning resources that I found to be particularly helpful.  (Not including obvious things like product documentation).

Books

Videos

Instructor-led Training

Workbooks

DCloud Labs

  • Security Everywhere
  • IPSEC VPN troubleshooting
  • Web Security Appliance Lab v1

Cisco Live Sessions

Cisco ISE community

Closing thoughts

I hope this answered some of your questions and gave you a sense of a path to the CCIE Security lab. I really enjoyed reliving all the memories that creating this post triggered. It’s been quite an adventure. 🙂

Best regards, and may you care for yourself with ease,

 

-s

CCIE Security Lab 1, Steve 1 – I passed!

Greetings Programs!

I sat my second attempt at the CCIE security lab on Monday, March 5th, 2018. This time I got the pass. It was not without a bit of drama.

pass

TL;DR: It’s a very tough exam, harder than you think, harder than you remember. Fight with everything you have until the proctor tells you to stop.

The longer version.

Had the usual anxiety thing in the days leading up to lab day. Did what I could to minimize it. Got ok sleep the night before. On Lab day I wasn’t nearly as nervous as I was in January.

Tshoot: I came out swinging and quickly solved most of the tickets. Then I had an issue with ISE that left me dead in the water for half an hour. I left one ticket unsolved.

Diag was like last time: pretty easy, a chance to take a breather and get ready for the main event.

Config started out well; I was flying through the first few tasks. Then I had some annoying issues related to code and platform differences between my personal lab and the actual exam; I don’t recall that happening on my last attempt.

Then the trouble began.

ISE was acting up, so I attempted to restart it. The application shut down dirty with database errors and would not start. While all this was going on, I was trying to work on other stuff and keep moving. Eventually the proctor got ISE running somehow, but the database corruption was an issue: ISE wasn’t adding endpoints to the database, which made several tasks impossible to complete. The lowest point was when he was sending people home and I knew I didn’t have the points. When he said I could continue, I put my head down and kept grinding.

I was just about out on my feet, but I kept battling. When I was told I’d have to stop, I looked at my scorecard to add up the points, and all the tasks that could be completed were. The scorecard said I had the points, with a small cushion. Would the tasks verify, though?

To my amazement I got the email about 3 hours later with the good news.

Much respect to the proctor who probably stayed at work late in order to let me make up the time lost by the ISE debacle. Those folks have a very tough job putting up with us.

Final thought: the CCIE lab is **always** so much harder than you remember it. You can’t simulate what it’s really like. Even after just a 7-week gap I was going “what the hell!?!”.

Now I get to make things up to my wife and try to lose the 10 lbs of body fat I put on in the final push.

Warmest regards,

-s