There are lots of buzzwords changing the landscape of how IT works and how a traditional server administrator does their job. I consider myself the more traditional, on-premises systems administrator. I started in IT around 2007 working on Active Directory, Exchange, workstation image deployment, and so on. It was lots of manual work: installing servers by CD, manually installing updates. I’ve gotten more into some cloud technology with Office 365, Exchange Online, Skype for Business Online, and Microsoft Teams; however, these are SaaS applications, and in my experience they don’t involve much work at scale since the back-end servers are managed for you. I’m working on updating my cloud skill set and making myself more relevant in the job market, and that introduces a lot of new technologies I am not familiar with.

After looking through several job postings for cloud and devops engineers, I’ve compiled a list of the technologies frequently mentioned and will go through a brief explanation of each. This is not a declaration of who or what is better, just my observations of what I’ve seen in job descriptions.

Cloud Service Providers

Cloud service providers are organizations that provide compute power hosted in their data centers. The typical types of services include:

Infrastructure as a Service (IaaS) – infrastructure components like servers, storage, and network are provisioned in the provider’s data center

Platform as a Service (PaaS) – commonly used with software development, where the operating system and servers are abstracted away and code can be deployed without worrying about these underlying components

Software as a Service (SaaS) – the cloud service provider runs the software and hardware necessary for the application; common examples include email, meetings, and chat from suites like Microsoft’s Office 365 or Google’s G Suite.

There are what I consider three major players in the cloud provider space:

Amazon Web Services (AWS)

Google Cloud Platform (GCP)

Microsoft Azure

Again, based on job descriptions, the primary cloud service skill being sought is AWS. This is not a surprise, as I would consider AWS the provider that first popularized cloud computing. I see fewer job listings requesting Azure and Google Cloud, but I would not consider these second tier. Each has its own set of features that differentiate it, and the choice will ultimately come down to where you or your team already has skills. Companies built on more traditional enterprise systems might favor Azure due to their investment in Microsoft, while start-ups and developers lean towards AWS or GCP.

If you listen to the Cloudskills.fm podcast by Mike Pfeiffer, he and other guests have emphasized that if you are learning the cloud, just pick one of the providers and learn the basics: compute, storage, network, and security/identity. These concepts are common across any cloud platform, and a foundation in these skills will translate across the different services.
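To give a concrete feel for what “compute” looks like through a provider’s SDK, here is a minimal sketch using AWS’s boto3 library for Python. It assumes your AWS credentials are already configured, and the AMI ID is only a placeholder:

```python
# Minimal sketch: launch a single small virtual machine (EC2 instance) with boto3.
# Assumes AWS credentials are already configured; the AMI ID is a placeholder.
import boto3

ec2 = boto3.resource("ec2", region_name="us-east-1")

instances = ec2.create_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI ID
    InstanceType="t2.micro",
    MinCount=1,
    MaxCount=1,
)
print(f"Launched instance {instances[0].id}")
```

The specifics differ between providers, but the idea is the same: compute, storage, and network resources become something you create and manage through code rather than a wizard.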

If you are looking to get started, you can work towards a certification in one of these technologies. As I’ve said before, certifications are a nice thing to have, but what I use them for is a structured way to learn a new technology. Here are some of the certifications to consider when getting started:

Amazon Web Services

AWS Certified Cloud Practitioner

AWS Certified Solutions Architect – Associate

AWS Certified Developer – Associate

AWS Certified SysOps Administrator – Associate

Google Cloud Platform

Associate Cloud Engineer

Professional Cloud Architect

Professional Cloud Developer

Microsoft Azure

Microsoft Certified Azure Fundamentals

Microsoft Certified Azure Administrator Associate

Microsoft Certified Azure Developer Associate

Microsoft Certified Azure Solutions Architect Expert

Scripting & Programming Languages

With cloud and automation, systems administrators need to learn how to code. Whether it’s writing scripts to provision servers or writing code to define infrastructure resources, the days of clicking through install wizards are “old school”. I understand how this can be a cause for concern; programming isn’t for everyone, and it doesn’t always click or make sense. I’m pretty lucky in that I enjoy programming and it makes sense to me. I’m a big fan of automating solutions where I can.

That being said, the most common language I see in job postings is Python, which is not a big surprise. Python is meant to be easily readable and follows strict rules for indentation and use of white space. Python is considered an easy language for beginners to learn as well as a powerful one for systems management in the hands of experienced programmers. It is versatile as it can be used for websites, app development, and data science due to the many libraries available for it.
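To show what that readability looks like in practice, here is a small, hypothetical example (the volume paths are placeholders) of the kind of everyday admin task Python handles well:

```python
# Hypothetical admin task: check free disk space and flag volumes running low.
import shutil

# Placeholder paths; on Windows these might be drive letters like "C:\\".
volumes = ["/", "/var", "/home"]

for volume in volumes:
    usage = shutil.disk_usage(volume)
    percent_free = usage.free / usage.total * 100
    status = "LOW" if percent_free < 10 else "OK"
    print(f"{volume}: {percent_free:.1f}% free [{status}]")
```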

While not seen in job descriptions as often, I would add PowerShell to this list as well. While traditionally meant for Windows servers and applications, PowerShell is now cross-platform and can be used on Linux. If you are working at all with Windows Server and automation, I would consider PowerShell a must-have skill.

Speaking of platform-specific languages, another skill I see often is Bash shell scripting. Bash is the primary interface on Linux systems. I am a beginner to Linux myself, but I would say it is comparable to DOS and PowerShell on Windows systems. Being able to write scripts in the system’s “native” language is a definite plus.

Outside of these, I occasionally see requests for Java or Ruby. Much like learning a cloud platform, I wouldn’t fret so much over which language to choose. I would focus on programming concepts, data types, and how to write scripts that scale. Once you learn this, you can take these skills to specific languages.

Configuration Management/Infrastructure as Code

So at this point we’ve looked at cloud platforms and some languages to know. Next is the ability to deploy your systems at scale. There are two categories here that I’m going to combine: configuration management and Infrastructure as Code (IaC).

Configuration management and IaC describe the process of defining how the infrastructure should be configured, then applying that configuration to systems automatically. This can happen in a few ways: an agent on the system can periodically check a master server to pull information on how it should be configured, or the configuration can be pushed to the system. The idea is that these configurations are in a declarative format, meaning you write out how the system should be defined but not the logic to get it there. For example, you might say a server needs to have a web server installed with ports 80 and 443 open on the firewall. In declarative code, you don’t have to write out how to install the web server or open the ports; you simply define how the resource should be configured, and it is the system’s responsibility to match that configuration.
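None of the tools below use Python as their configuration language, but here is a toy sketch in Python of that declarative idea: the desired state is plain data, and a generic apply step is responsible for reconciling the system to match it. The helpers here only print what a real tool (Chef, Puppet, Ansible, DSC) would actually do.

```python
# Toy illustration of declarative configuration: desired state is data,
# and a generic "apply" step reconciles the system to match it.
desired_state = {
    "packages": ["nginx"],
    "firewall_ports": [80, 443],
}

def package_installed(name):
    # A real tool would query the package manager; this stub pretends nothing is installed.
    return False

def apply_state(state):
    for package in state["packages"]:
        if not package_installed(package):
            print(f"Installing package: {package}")
    for port in state["firewall_ports"]:
        print(f"Ensuring firewall allows port {port}")

apply_state(desired_state)
```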

The most popular options I come across are Chef, Puppet, and Ansible. These operate in various languages and in push/pull modes. Again, I’m going to give a shout-out to Desired State Configuration, or DSC, from Microsoft. I don’t often see this listed in job descriptions, but it can definitely be used alongside other technologies. DSC is also not just for Windows; configurations are compiled into Managed Object Format (MOF) files, which are cross-platform and can also be applied to Linux servers.

Orchestration

Now that you have your infrastructure defined in code, how do you automate the deployments? This is where an orchestration engine comes into play. Orchestration software takes those automation tasks from earlier and works to put them together. It’s responsible for coordinating multiple processes, executing a workflow that may have dependencies, and ultimately delivering the final solution.

For orchestration, I’ve primarily seen job descriptions looking for experience with Terraform and CloudFormation, the latter being AWS’s proprietary solution while Terraform is open source and created by HashiCorp.

Continuous Integration/Continuous Delivery (CI/CD)

Let’s talk about the CI first. Continuous integration is merging changes to the master branch of code as often as possible. Developers work on branches, either fixing bugs or developing new features, and commit these to the master branch. These commits are verified by creating a build and running automated tests against the code to ensure the application is still functional. By performing these smaller commits and running automated quality checks more frequently, continuous integration allows developers to iterate more quickly. I don’t know this from experience, but I’ve read that older practices would wait weeks or months before creating a build, which led to conflicts that required lots of work to resolve. So CI is the practice of frequently committing and merging to the main branch of an application, backed by automated testing.
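For illustration, the automated tests in that build step are often small checks like this hypothetical one, which a CI server would run against every commit (for example with pytest):

```python
# A hypothetical function and the kind of small automated test a CI build
# would run against every commit (e.g., with pytest).
def calculate_discount(price, percent):
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

def test_calculate_discount():
    assert calculate_discount(100.0, 20) == 80.0
    assert calculate_discount(59.99, 0) == 59.99
```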

Next, continuous delivery is an extension of CI where, in addition to automated testing, the process of pushing your code into production is automated. This allows deploying an updated application to production at any time. If branches or features are checked in and merged more frequently and automated tests confirm that the application is functional, then you should be able to release the main branch into production at any time.

One more concept, a step past continuous delivery, is continuous deployment. Continuous delivery requires a manual step to trigger the code push to production, while continuous deployment automates this final step. This puts full confidence in your pipeline and testing and removes any human intervention. A developer could merge their branch, have automated tests verify it, and see it deployed into production without any manual steps.

When it comes to job descriptions, Jenkins is the most prevalent CI/CD technology that I come across. It is an open-source automation tool. I’m not sure exactly what makes it so popular, but given how frequently I see it mentioned, I would target it first if I were starting out with CI/CD concepts. For cloud-provider-specific solutions, AWS has CodePipeline and Microsoft has Azure DevOps, which evolved from Team Foundation Server.

Containers

Containers are another big topic I keep reading about and seeing in job descriptions. Containers are a way to package up application code and its dependencies into a single object. Containers share the operating system installed on a server but run in isolation from one another. This allows deploying your application across multiple servers consistently and reliably, regardless of the environment, as long as the underlying OS is the same. Containers decouple the app from the operating system and run in such a way that the application thinks it is running on its own server. What it doesn’t know is that there are other apps running on the same server, isolated from each other.

When talking about containers, you’re likely to hear two other terms: Docker and Kubernetes, so let’s unpack these.

Docker is a company. Containers were originally developed for use on Linux systems; Docker came along and worked to standardize them and make them easier to deploy and use.

Kubernetes is an open-source container orchestration tool for deploying and managing containerized applications. It can manage all kinds of containers, including Docker containers. Docker Swarm is Docker’s own container orchestration software.

AWS’s container technology is called Amazon Elastic Container Service (ECS). AWS also has a managed Kubernetes offering called Amazon Elastic Kubernetes Service (EKS). Microsoft has its own version named Azure Kubernetes Service (AKS).

So to sum it up: containers are an app packaging technology, Docker worked to standardize it and make it easier to use, and you can orchestrate container images using Docker Swarm or Kubernetes. Cloud service providers also have their own tools to deploy and manage container images.
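Keeping with Python for consistency, here is a sketch using the Docker SDK for Python (the docker package) that runs a web server in an isolated container. It assumes Docker is installed and the daemon is running locally:

```python
# Sketch: run an nginx web server in an isolated container using the
# Docker SDK for Python. Assumes the "docker" package is installed and
# the Docker daemon is running locally.
import docker

client = docker.from_env()

# Pull the image (the packaged app plus its dependencies) and start a container,
# mapping container port 80 to port 8080 on the host.
container = client.containers.run(
    "nginx:latest",
    detach=True,
    ports={"80/tcp": 8080},
    name="demo-web",
)
print(f"Started container {container.short_id}")
```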

DevOps

DevOps is also a popular topic right now, and you’re likely to come across quite a few job descriptions for these roles. DevOps is not a specific tool or a piece of software; it’s a mindset and a way of doing business. It is meant to automate the process of software development so it can be built, tested, and released into production quickly and reliably. In the past, developers and systems operators/administrators worked in silos. The typical story I’ve heard is that developers would build code and toss it over to ops. Ops would try to get it to work on the servers, but since the development, test, and production systems were not all the same, there would be discrepancies and issues getting it to work.

Following some of the practices above (configuration management, Infrastructure as Code, automation, orchestration, CI/CD) can transform how infrastructure is deployed and configured and how quickly software can be released. By being able to test, build, and deploy software quickly and reliably, you can push out features faster. The primary point I’ve heard about DevOps is that it is not a set of tools but a change in how you do business and in the culture of how IT resources are deployed (though tools help support the model). For an introduction to what DevOps looks like in an organization, it’s generally recommended to read The Phoenix Project and The DevOps Handbook.

Odds and Ends

Finally, here are some other skills I frequently see that didn’t necessarily fit into any of the categories above.

Linux

Linux is not new, but it might be for someone who has traditionally worked with Windows Server in the enterprise, like myself. I don’t have a particular bias towards Windows or against Linux; it’s just how my career worked out. I’ve worked in more traditional enterprise environments that were built on Active Directory, Exchange, SharePoint, and other Microsoft technologies, so that’s what I learned. I knew over the years that having both Windows and Linux administration skills would go a long way towards making me more valuable in the job market; however, I never learned Linux. I see more and more job postings looking for both Windows and Linux experience. If I were new to IT, I would make it a point to learn both.

Git

Git is an open-source version control system used in software development. It is typically used at the command line for tracking changes, pushing code to repositories, and merging branches into the master branch. While version control isn’t new to developers, it is probably a new concept for infrastructure or systems administrators. In keeping with the automation and Infrastructure as Code concepts, I view learning Git to manage your scripts and configuration files as a must-have skill for admins as well. Some popular services built on Git include GitHub, Bitbucket, and GitLab, but you can use Git on your local system without a remote repository to learn the basics.

Serverless

The last topic I wanted to touch on is serverless computing. I know what you’re thinking: how can we have compute power without servers? It’s not what you think. With serverless computing, you only worry about writing and deploying code, without needing to build the underlying server and operating system dependencies. The cloud provider manages all of this for you, including dynamically expanding the resources in the event of increased usage. Paying based on consumption is also a characteristic of serverless computing, meaning you only pay for when the code is executed. Some examples of serverless computing from different cloud providers include AWS Lambda, Google Cloud Functions, and Microsoft Azure Functions.
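As a taste of what that looks like, an AWS Lambda function written in Python can be as small as the following; you deploy just the handler, and AWS runs it whenever the function is invoked (the event contents shown here are only an assumption for illustration):

```python
# A minimal AWS Lambda handler in Python. There is no server to build or patch;
# you deploy just this code and AWS runs it whenever the function is invoked.
# The event contents depend on the trigger (API Gateway, S3, etc.).
def lambda_handler(event, context):
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": f"Hello, {name}!",
    }
```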

Conclusion

Whether you are new to IT or a seasoned veteran, hopefully you found something in this post useful. As I am working on updating my skill set to work in the new cloud world, these are the topics I come across most often. By writing them down and learning more about them, I hope this helps you understand them as well.

Most importantly, as a beginner, these are things and concepts I am learning, so did I get something wrong? Or is there something else you feel someone should learn for pivoting to the cloud? If so, leave a comment below or find me on Twitter or LinkedIn to discuss further.
