The DevOps Chronicles part 1: Why I’m studying Python for 5 hours a week.

So, I’ve committed to studying Python for 250 hours a year. And I’m about 12 weeks in.

I have no ambitions of becoming a professional developer. I like being an infrastructure guy. So why in the heck am I making this commitment as part of my professional development?

Here’s a fundamental problem that infrastructure designers and engineers of all stripes face today:

The architecture, complexity, and feature velocity of modern applications overwhelm traditional infrastructure service models. The longer we wait to face this head-on, the larger the technical debt grows; worse, we become a liability to the businesses that depend on us.

Allow me to briefly put some context around this. I’m going to use some loose terminology and keep things simple and high level so we can quickly set the stage. Then I’ll explain where Python fits in.


Application Components

Almost all software applications have three major components:

  1. data store
  2. data processing
  3. user (or application) interface

Overall, the differences in complexity come down to component placement and whether or not each function supports multiple instances for scaling and availability.

Application Architectures

A stage 1 or basic client/server application places all components on a single host. Users access the application using some sort of lightweight client or terminal that provides input/output and display operations. If you buy something at Home Depot and glance at the point-of-sale terminal behind the checkout person, you’re probably looking at this kind of app.

A stage 2 client/server application will have a database back end, with the processing and user interface running as a monolithic binary on the user’s computer. This is the bread and butter of custom small business applications. A classic example would be the program you see the receptionist using at a doctor’s or dentist’s office when they’re scheduling your next appointment. Surprisingly, this design pattern is still commonly used to develop new applications even today, because it’s simple and easy to build.

A stage 3 client/server application splits the three major components up so that they can run on different compute instances. Each layer can scale independently of the others. In older designs, the user interface is a binary that runs on the end user’s computer; newer designs use web-based user interfaces. These tend to be applications with scaling requirements in the hundreds to tens of thousands of concurrent sessions, and they tend to be constrained by the database design: a typical SQL database is single-master and often the choke point of the app. There are many examples of three-layer designs in manufacturing and ERP systems, where the UI and data-processing layers scale out, backed by a monster SQL Server failover cluster.

All of these design patterns are based on the presumption of fixed, always-available compute resources underpinning them. They’re also generally vendor-centric technology stacks with long life cycles. They all depend on redundancy in the underlying compute, storage, and network components for resiliency, with the three-tier design being the most robust, as two of the layers can tolerate failures and maintain service.

Traditional infrastructure service models are built around supporting these kinds of application designs, and they work just fine.

Then came cloud computing, and the game changed.

Stage 4: Cloud native applications

Cloud native applications operate as an elastic confederation of resilient services. Let’s unpack that.

Elastic means service instances spin up or down dynamically based on demand. When an instance has no work to do, it’s destroyed. When additional capacity is required, new instances are dynamically instantiated to accommodate the demand.

Resilient means individual service instances can fail and the application will continue running: the failed service instance is simply destroyed and a healthy replacement is instantiated.

Confederation means the application is a collection of services, each callable via a discoverable API. If an application needs a function that an existing service already offers, the developer can simply consume that service. Components of an application can therefore live anywhere, as long as they’re reachable and discoverable over the network.
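To make “elastic” and “resilient” concrete, here’s a toy Python sketch (every name in it is hypothetical, and the health check is faked): a control loop that sizes a fleet of service instances to demand and replaces unhealthy instances rather than repairing them.

    import random  # stand-in for real health checks and demand metrics

    class Instance:
        """Stand-in for a real service instance (container, VM, etc.)."""
        def healthy(self) -> bool:
            return random.random() > 0.1   # pretend ~10% of health checks fail

    def desired_count(queue_depth, per_instance=100):
        # Elastic: size the fleet to demand; zero work means zero instances.
        return -(-queue_depth // per_instance)   # ceiling division

    def reconcile(fleet, queue_depth):
        # Resilient: failed instances aren't repaired, they're replaced.
        fleet = [i for i in fleet if i.healthy()]
        target = desired_count(queue_depth)
        while len(fleet) < target:
            fleet.append(Instance())   # instantiate new capacity on demand
        return fleet[:target]          # destroy surplus instances

    fleet = []
    for depth in (250, 900, 0):        # simulated demand over time
        fleet = reconcile(fleet, depth)
        print(f"queue depth {depth} -> {len(fleet)} instances")

Real orchestrators (Kubernetes being the obvious example) run exactly this kind of reconciliation loop, just with real metrics and real instances.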

Because of this modular design, it’s easy to iterate quickly, adding functions and features that make the application more valuable to its users.

Great! But as infrastructure people, how do we support something like this? The fact is, this is where the traditional vendor-centric, shrink-wrapped tool approach breaks down.

Infrastructure delivery and support

Here are the main problems with shrink-wrapped, monolithic tools in the context of cloud native application design patterns:

  1. Tool sprawl (ever more tools and interfaces to get tasks done)
  2. Feature velocity quickly renders tools out of date
  3. Graphical tools surface a small subset of a product’s capabilities
  4. Difficult to automate/orchestrate/manage across technology domains

Figure 1 shows what the simple task of instantiating a workload looks like on four different public cloud providers. Project this sprawl across all the componentry required to turn up an application instance, and the scope of the problem starts to come into focus.

Fig 1: Creating a virtual machine on different cloud providers.
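The same divergence shows up in the providers’ Python SDKs. As a rough illustration (the image and project IDs below are placeholders, and real calls need configured credentials), compare launching an instance with AWS’s boto3 against Google’s google-cloud-compute client. Each provider has its own client object, auth model, and parameter shapes:

    # AWS: boto3's resource API takes flat keyword arguments.
    import boto3

    ec2 = boto3.resource("ec2")
    ec2.create_instances(
        ImageId="ami-12345678",        # placeholder image ID
        InstanceType="t2.micro",
        MinCount=1,
        MaxCount=1,
    )

    # GCP: the google-cloud-compute client wants a fully built Instance object.
    from google.cloud import compute_v1

    gce = compute_v1.InstancesClient()
    gce.insert(
        project="my-project",          # placeholder project
        zone="us-central1-a",
        instance_resource=compute_v1.Instance(
            name="demo-vm",
            machine_type="zones/us-central1-a/machineTypes/e2-micro",
            disks=[compute_v1.AttachedDisk(
                boot=True,
                auto_delete=True,
                initialize_params=compute_v1.AttachedDiskInitializeParams(
                    source_image="projects/debian-cloud/global/images/family/debian-12"
                ),
            )],
            network_interfaces=[compute_v1.NetworkInterface(network="global/networks/default")],
        ),
    )

Two providers, two completely different mental models, and we haven’t touched the other two providers in Figure 1 yet.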

So how in the heck do you deal with this? The answer is surprisingly simple: infrastructure tools that mirror the applications the infrastructure needs to support.

TL;DR: Learn how to use cloud native tools and toolchains. And that brings us to the title of this blog post.

Python as a baseline competency

Python is the glue that lets you assemble your own collection of services in a way that’s easy for your organization’s developers and support people to consume. The better you are with Python, the easier it is to consume new tools and create toolchains. Imagine you’re at a restaurant trying to order for yourself and your family: if you can communicate fluently, it’s much easier to get what you want than if you don’t speak the language and are reduced to pointing at pictures and pantomiming.

Interestingly, I’ve found that many of the principles of good network and systems design are directly applicable to writing a good program: encapsulation, separation of concerns, information hiding, discrete building blocks, and so on.
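For instance, information hiding looks much the same in code as it does in a well-segmented network: expose a small, stable interface and keep the messy internals private. A minimal, entirely hypothetical example:

    class ProviderSession:
        """Public interface: connect() and request(). Everything else is internal."""

        def __init__(self, endpoint: str):
            self._endpoint = endpoint   # leading underscore: internal by convention
            self._token = None

        def connect(self) -> None:
            # The auth dance is hidden in here; callers never see it.
            self._token = self._authenticate()

        def request(self, path: str) -> str:
            if self._token is None:
                self.connect()          # callers never manage tokens themselves
            return f"GET {self._endpoint}/{path} (token={self._token})"

        def _authenticate(self) -> str:
            return "fake-token"         # stand-in for a real auth exchange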

Use Case: Creating a Virtual Machine

Let’s take the example of creating a virtual machine.

For creating a VM, we could write a class or classes in Python that let someone spin up, shut down, and check the status of virtual machines on four different public cloud providers, abstracting away the specifics of each provider and presenting them to the user as a generic service. Then we, or anyone else on the team, can spin up workloads with a simple call, without having to know the implementation details of each provider.
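As a rough preview (the real class comes in the next installment, so every name here is provisional), the shape might be an abstract base class with one subclass per provider:

    from abc import ABC, abstractmethod

    class VirtualMachine(ABC):
        """Generic VM service: callers see spin_up/shut_down/status,
        never the provider-specific plumbing."""

        def __init__(self, name: str, size: str = "small"):
            self.name = name
            self.size = size

        @abstractmethod
        def spin_up(self) -> None: ...

        @abstractmethod
        def shut_down(self) -> None: ...

        @abstractmethod
        def status(self) -> str: ...

    class AwsVM(VirtualMachine):
        """AWS-specific implementation; the boto3 calls would live here."""

        def spin_up(self) -> None:
            print(f"launching {self.name} on AWS ({self.size})")  # stub

        def shut_down(self) -> None:
            print(f"terminating {self.name} on AWS")              # stub

        def status(self) -> str:
            return "running"                                       # stub

    # Subclasses for the other providers (AzureVM, GceVM, ...) follow the same pattern.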

The power of making tools like this is:

  1. we only have to solve the problem of doing a thing once
  2. we encapsulate it
  3. we make it available to other code (see the usage example below)
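
Point 3 is the payoff. Given the sketch above, a teammate could spin up a workload without knowing anything about the provider underneath:

    vm = AwsVM(name="build-agent-01", size="medium")
    vm.spin_up()                 # provider plumbing hidden behind the call
    print(vm.status())
    vm.shut_down()
    # Swapping in a different subclass targets a different cloud,
    # with no other changes to the calling code.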

In the next installment of my DevOps journey, I’ll take a stab at writing the Python virtual machine class described above, and we’ll see how well it works.

Best regards,

-s
