
Prayer-Based Parsing

Recently I rebuilt my home CentOS server, which I use to run some home media services and to keep up on my journey to learn Linux. Everything was going well: I moved a few services into Docker containers, and everything not in a container was installed through package managers.

After a few days I noticed the media service Plex would occasionally stop. Reviewing the systemd logs showed it was being killed, presumably to allow a yum update to succeed. Oh, “that’s no problem,” I thought, “I’ll just edit the systemd unit file to add the appropriate restart-always instructions.” Well, I did that, and a few days later when my wife and I went to watch a movie, the Plex media service wasn’t running again.
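
A minimal sketch of that change, assuming the unit is named plexmediaserver.service, is a systemd drop-in like this:

    # Drop-in override (unit name and path are assumptions),
    # created with: sudo systemctl edit plexmediaserver.service
    [Service]
    Restart=always
    RestartSec=5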

I spent a few minutes thinking about how I wanted to proceed and decided I would look at solving the recurring problem with PowerShell (Core) on Linux. I got the repo configured and PowerShell installed via yum, then started on my task to keep the Plex service running.

Once PowerShell was installed, I launched pwsh from the terminal and tried Get-Service. I was greeted with this error.
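
It’s the standard command-not-found message, roughly (prompt path assumed):

    PS /home/user> Get-Service
    Get-Service : The term 'Get-Service' is not recognized as the name of a cmdlet,
    function, script file, or operable program. Check the spelling of the name, or
    if a path was included, verify that the path is correct and try again.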

Well, it looks like PowerShell Core does not support systemd or have the ‘*-Service’ cmdlets on Linux. Not to be deterred, I decided I could attempt to parse text to get the job done like a “true Linux admin”: a little of what Jeffrey Snover referred to as “Prayer-Based Parsing” in the Monad Manifesto…

Five minutes later I had a script that got the job done and could handle restarting Plex if it was ever stopped in the future.

Let’s walk through the code.
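
Here’s a sketch of that script; the $plexstatus and $startplex names come from the walkthrough below, while the unit name (plexmediaserver), the split index, and the log path are assumptions. The “# line N” comments map to the line numbers referenced in the walkthrough.

    $plexstatus = [string](systemctl | grep plexmediaserver)          # line 1: list units, keep only the plex line
    if ($plexstatus.Length -eq 0) { $startplex = $true }              # line 2: no output means the service is stopped/failed
    try { $plexstate = ($plexstatus.Trim() -split '\s+')[3] }         # line 3: prayer-based parsing of the state column
    catch { $startplex = $true }                                      # line 4: if the parse blows up, plan to start the service
    if ($plexstate -ne 'running') { $startplex = $true }              # line 5: happy-path check for the word 'running'
    if ($startplex) {                                                 # line 6: start plex and log a timestamped entry
        systemctl start plexmediaserver
        Add-Content -Path ./plexcheck.log -Value "$(Get-Date) - Plex was not running; start attempted"
    }
    else {
        Write-Output 'Plex is running'
    }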

Line 1 gets the service output, then filters (greps) the results and returns only the service I care about, which is Plex. If I had multiple services that shared a name, I could be more precise and ask for the individual service’s status directly, but for now there is no need. I checked the output of systemctl with Plex running and with Plex stopped, and noticed there is no output when the service is stopped. When the service is running, a line of text for the unit is returned.
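
Roughly, assuming the unit is named plexmediaserver.service, that line looks like:

    plexmediaserver.service    loaded active running   Plex Media Server for Linux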

Further inspection of the stopped/failed service state shows the string variable will not be null, but it also won’t contain any string data. With this in mind, line 2 checks whether the $plexstatus string variable is 0 characters long. When it is, it sets the boolean variable $startplex, which is used later to determine whether the script will try to start the service.

To handle the expected condition of the Plex service running, we can check the output from systemctl and make sure it indicates the service is running. This is where the prayer-based part comes in. Since there are no service objects or state properties available, my approach falls back to string manipulation. Line 3’s success is very dependent on the text structure returned from the systemctl call. It also indexes into the result of a string split, which will error if the split does not succeed and there is no array element at that index. This is all very sloppy and error prone, so it is placed in a try/catch block to make sure the script still has a path forward if the prayer-based parsing fails. The catch block sets the boolean to try to start the Plex service instead of throwing an indexing error.

Line 5 is a happy-path check to make sure the output parsed above contains the word ‘running’. If the value is anything but ‘running’, the boolean to start the Plex service will be set.

Starting on line 6, the script starts the Plex service if any of the failure conditions were met, then saves some timestamped log data for future reference. A completely unnecessary else condition with an even more unnecessary Write-Output command brings it home.

Wrapping it all up, the script is set to run as a cron job every few hours.
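
A crontab entry along those lines, assuming pwsh lives in /usr/bin and the script is saved as plexcheck.ps1, might be:

    # run the watchdog script every four hours
    0 */4 * * * /usr/bin/pwsh -File /home/user/scripts/plexcheck.ps1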

I’m sure there are much better ways to do this with standard *nix command-line tools, but by sticking with what I already know, I was able to come up with a solution to my problem in a short period of time using PowerShell.

Let’s try running some AWS PowerShell functions on Linux

Today I am going to attempt to take some PowerShell functions I wrote on Windows and run them on Linux. This should all be possible now that Microsoft Loves Linux! With the new .NET (Core) going open source and cross-platform, combined with AWS’s Tools for PowerShell Core, I should be able to run the exact same functions across Windows and Linux.

For this exercise I will be using an Ubuntu virtual machine on Hyper-V, but this could easily be done on CentOS or various other Linux distros. Microsoft recently added support for installing PowerShell through popular distros’ default package managers, so we will take that approach to get up and running.

Enough intro, let’s get to it! I am going to use Microsoft’s provided steps in a bash terminal window to register the Microsoft repo and get the latest PowerShell 6 alpha installed and running.

 

Installing PowerShell
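
Roughly, Microsoft’s published steps for the Ubuntu 16.04 build of that alpha were the following (adjust the Ubuntu version in the repo URL to match your distro):

    # Import the Microsoft repository GPG key
    curl https://packages.microsoft.com/keys/microsoft.asc | sudo apt-key add -
    # Register the Microsoft Ubuntu repository
    curl https://packages.microsoft.com/config/ubuntu/16.04/prod.list | sudo tee /etc/apt/sources.list.d/microsoft.list
    # Install PowerShell and launch it (the binary was 'powershell' in the alpha, later renamed 'pwsh')
    sudo apt-get update
    sudo apt-get install -y powershell
    powershell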

After running those commands, PowerShell is installed and the system leaves us at the PowerShell command prompt.

To verify everything is working, I can use $PSVersionTable to output the PowerShell version info to the host.

Okay, everything is looking good so far.

Loading AWS Tools for PowerShell Core

Next up is to get AWS Tools for PowerShell Core loaded. This can be done with the new PowerShell package management cmdlets, specifically Install-Module.
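
The module is published in the PowerShell Gallery as AWSPowerShell.NetCore, so the attempt is a one-liner:

    Install-Module -Name AWSPowerShell.NetCore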

Oh no, a red error appeared! Quick, email this error to our system administrator to figure out what went wrong! Haha, just kidding. Let’s read it.

The error says administrator rights are required to install modules. The suggestions are to change the scope via a parameter or to use elevated rights. Well, Run as Administrator sure won’t work on Linux, so I will do the equivalent: exit out of PowerShell and then sudo powershell back into the PowerShell host.

After retrying Install-Module from the now-elevated PowerShell host, the command completes without error.

I want to check the available modules with the Get-Module command and verify the AWSPowerShell.NetCore module is listed now that it’s installed.
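
Something like this lists everything available:

    Get-Module -ListAvailable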

Everything checks out and the AWS module is listed right at the top.

Loading my AWS functions from GitHub

I don’t plan on doing any editing of my functions or commits from this system, so I can skip configuring Git and just install it right from the package manager. The neat thing about using Git is that all the nuances that come from working on files between *nix and Windows, like different carriage returns, should be handled behind the scenes by Git.

Once Git is installed, I can clone the PowerShellScripts repository from my GitHub.
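
Roughly (substitute your own GitHub account in the clone URL; the repository name comes from the post):

    sudo apt-get install -y git
    git clone https://github.com/<your-github-user>/PowerShellScripts.git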

A quick ls and cd is used to make sure the AWSFunctions folder came down with the repository.

Creating AWS Read Only Access Keys

Since this is just a proof-of-concept exercise, I am going to run a function I built to check the status of a running EC2 instance by looking up its Name tag. The only access I need for this in AWS IAM is the ability to describe my instances, so I will create a new IAM user with an attached EC2 read-only policy.

The IAM console has really become simple to use with recent updates, but let’s cover everything step by step.

First I’ll log into my AWS account and navigate to the IAM console. From there I want to choose Users and then use the Add User button.

I will call the user blogpostec2readonly and check the box for programmatic access, which will generate our access keys.

On the next screen I will choose Attach existing policies directly. The filter box directly below can be used to search for “ec2readonly”, and an AWS managed policy for EC2 read-only access will appear. This managed policy is a prewritten JSON IAM policy maintained by Amazon that helps administrators quickly grant permissions without needing to deep-dive into IAM. Perfect for the use case at hand. I’ll check the box for this policy and click Next.

The next screen is a review screen and a final Create User button.

After the new IAM user is created the access key and secret key are provided for download. Be careful with these, as AWS access keys are all that is needed to access an AWS account. I will copy the provided access keys into the gedit text editor so I can use them in the next step.

 

Configuring AWS PowerShell Module Credentials

All the prep work is nearly complete, and the next step is to configure the default region, access key, and secret key to be used with the AWS PowerShell module cmdlets. To do this we will import the AWSPowerShell.NetCore module and run the Set-AWSCredentials and Initialize-AWSDefaults cmdlets.
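
A sketch of those three steps, assuming a default profile and the us-east-1 region (the keys shown are AWS’s documentation examples; use the ones generated above):

    Import-Module AWSPowerShell.NetCore
    Set-AWSCredentials -AccessKey AKIAIOSFODNN7EXAMPLE -SecretKey wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY -StoreAs default
    Initialize-AWSDefaults -ProfileName default -Region us-east-1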

 

 

Running my custom functions

I need to load my functions into memory, so let’s use Get-ChildItem to list the function files and dot-source each one. (% in PowerShell is a shorthand alias for ForEach-Object.)
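
A sketch of that one-liner, assuming the scripts sit in the AWSFunctions folder mentioned earlier:

    # list the function files and dot-source each one into the current session
    Get-ChildItem ./AWSFunctions/*.ps1 | % { . $_.FullName }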

To verify my custom functions are loaded and ready to execute, we can try to tab-complete them. The function I am running in this exercise is Test-RunningEC2InstanceByServerName, so I will type Test-Run and press Tab.

Success! Tab completion filled out the name of the function for me. Let’s see if it works…

The instance hosting this here blog is called PACKETLOST02, so I will send that server name into the function as a parameter, expecting it to return that the instance is running.
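
The call looks something like this (the parameter name is an assumption; positional use would also work):

    Test-RunningEC2InstanceByServerName -ServerName 'PACKETLOST02'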

The function ran and returned that the instance is running.

Summary

How neat was this? I took some PowerShell functions I wrote on the Windows platform, committed them to my GitHub repo, and then got them to run on Linux. When I initially wrote these functions, it was to help automate my day-to-day administration of Amazon Web Services. I wrote them on the Windows platform with only the Windows platform in mind. Thanks to the great work of the developers at Microsoft and Amazon Web Services, these functions are now cross-platform.

I hope this post provides a quick glance into how useful and flexible PowerShell can be, as well as how promising the future of .NET Core and the .NET Standard libraries is for cloud computing. Cheers!

Backup your EC2 Amazon Linux WordPress Blog to S3

So I finally decided to run my own Linux server and utilize the AWS free tier for a year.

It was a great learning experience, and I wanted to share the most difficult part of the process: backing up my new blog to S3. Automatically, of course.

I had just finished configuring my server how I wanted. I followed these great guides I found on the net to get me up and running.

After I wrote a few posts and configured some plugins on this here blog, it was time to figure out how to automate Linux, something I had never done before.

Step 1) Generate a script to take backups of my site.

This wasn’t easy and took a few hours of my time, over an hour of which was finally traced to having created my .sh file on a Windows system (using Notepad++). Apparently the carriage return character on Windows and Linux is different, and something in this file made all my backup files get generated with ‘?’ in the file name. When I tried to download the files created by the backup script in WinSCP, I was greeted with invalid file name syntax errors. It wasn’t until I ran the bash script with sudo that a prompt appeared upon file deletion, showing me that ‘\r’ was in the file name and not a question mark.

Once I FINALLY tracked down the root cause of my file creation issues, I was off to the races. Thankfully, during all this I got the hang of nano (after admitting temporary defeat learning vim) and was able to easily create a new shell script file from the SSH window and get my script working. Below is the code I ended up with, mostly based off this LifeHacker article.

Actually starting Step 1:

So here is what you need to do to configure automatic WordPress backups to S3. My approach is to back up weekly, keep one month of backups on the server, and keep 90 days of backups in S3.

I started off by making backups and backups/files directories in my ec2-user home directory. This folder will hold my scripts and backup files going forward. The home directory is where you land by default after you SSH into an Amazon Linux instance as ec2-user.
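
Those steps look like this (backups.sh is the file name used below):

    mkdir -p ~/backups/files
    cd ~/backups
    nano backups.sh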

With nano open, copy and paste the code below, then press Control+X to save the file.
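
Here is a sketch of a weekly backup script along those lines; the database name, credentials, and the /var/www/html path are assumptions to adjust for your install:

    #!/bin/bash
    # Weekly WordPress backup: dump the database and archive the web root,
    # then prune anything in the local backup folder older than 30 days.
    TIMESTAMP=$(date +%Y-%m-%d)
    BACKUP_DIR=/home/ec2-user/backups/files

    # Compressed database dump (database name and credentials are assumptions)
    mysqldump -u backupuser -p'yourpassword' wordpressdb | gzip > "$BACKUP_DIR/db-backup-$TIMESTAMP.sql.gz"

    # Compressed copy of the Apache/WordPress files (path assumed for Amazon Linux)
    tar -czf "$BACKUP_DIR/files-backup-$TIMESTAMP.tar.gz" /var/www/html

    # Keep roughly one month of backups on the server
    find "$BACKUP_DIR" -type f -mtime +30 -delete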

Once the backups.sh file is created, we need to give it execute privileges.
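
That’s a one-liner:

    chmod +x backups.sh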

Now we can run it with bash to make sure it works, or move right on to scheduling it to run automatically as a cron job.

Checking it with bash:
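
From the backups directory:

    bash backups.sh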

 

Step 2) Configuring the script to run automatically

Scheduling it with cron:

First things first: scheduling cron jobs is done with crontab. Crontab’s default editor was vim, which is very confusing to a Linux novice such as myself, so let’s change the default crontab editor to nano…
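
One way to do that for the current session:

    export EDITOR=nano   # make nano the editor crontab launches
    crontab -e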

And now let’s configure our backup shell script to run Sunday mornings at 12:05 AM EST (0505 UTC).
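
Assuming the script path from Step 1 and a server clock set to UTC, the crontab line is:

    # minute 5, hour 5 UTC, every Sunday
    5 5 * * 0 /home/ec2-user/backups/backups.sh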

A great guide is found here.

Don’t forget to press Control+X to have nano save the edited crontab file. It appears Amazon Linux automatically handles the elevation needed for crontab changes, because I configured everything without sudo.

Now our site is backing up automatically, so let’s offload these backups to S3.

Step 3) Syncing the automatic weekly backup files to S3

H/T to this helpful blog post for guidance.

Create an S3 bucket. Then create an IAM user, assign it to a group, and give the group the following policy to restrict it to accessing only the new bucket. Replace the bucket name as needed.
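
A sketch of such a policy, with yourbucketname as the placeholder to replace:

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Action": ["s3:ListBucket"],
          "Resource": ["arn:aws:s3:::yourbucketname"]
        },
        {
          "Effect": "Allow",
          "Action": ["s3:PutObject", "s3:GetObject", "s3:DeleteObject"],
          "Resource": ["arn:aws:s3:::yourbucketname/*"]
        }
      ]
    }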

Or you can just use your root IAM credentials, whatever floats your boat.

Next up, install s3cmd onto your Amazon Linux instance. While s3cmd is very useful, it’s third-party developed and not an actual Amazon command-line feature, so we have to download it from another repository. We can install s3cmd onto an Amazon Linux instance with the following command.
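
On Amazon Linux, one way is to pull it from the EPEL repository that ships with the instance (disabled by default):

    sudo yum --enablerepo=epel install s3cmd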

You will have to accept some certificate prompts during the install.

Once s3cmd is installed, we can configure it with our IAM credentials. Don’t worry, with the properly restricted IAM credential setup it will fail the configuration check at the end; that’s expected.
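
Configuration is interactive; it walks through the access key, secret key, and encryption options:

    s3cmd --configure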

Create a shell script to sync our backup files to S3. Make sure we are still in the backups directory and use nano to create the script.

Paste the following code into nano and press Control-X to save. Don’t forget to change the bucket name.
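
Here is a sketch; the bucket name, script file name, log path, and the --delete-removed choice are assumptions to adjust:

    #!/bin/bash
    # Sync the weekly backup files up to S3 and log the run.
    # --delete-removed mirrors local cleanup to the bucket; with versioning
    # enabled on the bucket, removed objects are kept as previous versions.
    s3cmd sync --delete-removed /home/ec2-user/backups/files/ s3://yourbucketname/backups/ >> /home/ec2-user/backups/s3sync.log 2>&1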

Make the script executable, the same as before.

Now you can use some of the steps above to execute the script manually to make sure it works, or schedule it to run a few minutes after the backup script via cron.

You can check the logfile with tail for more information.

Wrapping it up

At this point you should have compressed WordPress database backups and compressed Apache files being created weekly, then synchronized to S3 shortly after. What if we want to keep the files in S3 longer than the files on the server?

All we need to do is enable versioning on the bucket, then apply a lifecycle policy to permanently delete previous versions after 60 days. Combined with the month of backups kept on the server before they roll off, that gives us roughly 90-day backup retention.

Anyway, I hope this helps. I tried to link to all blogs that helped me get up and running.