The other day I needed to launch some one-off Fargate / ECS container tasks with PowerShell. The documentation covered most of what I needed, but I could not find any examples of how to override the environment variables sent to the container task.
I only needed to change one environment variable, so creating a whole new task definition seemed like overkill. After some trial and error, the code below got the job done.
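Something along these lines should do it. The cluster, task definition, container name, subnet, and security group values are placeholders, and the flattened parameter names (-Overrides_ContainerOverride, -AwsvpcConfiguration_*) are my reading of the AWSPowerShell New-ECSTask cmdlet, so confirm them with Get-Help New-ECSTask before relying on this.
PowerShell
# Build a single environment variable override for one container
$envOverride = New-Object Amazon.ECS.Model.KeyValuePair
$envOverride.Name  = 'MY_SETTING'      # hypothetical variable name
$envOverride.Value = 'one-off-value'   # hypothetical value

$containerOverride = New-Object Amazon.ECS.Model.ContainerOverride
$containerOverride.Name = 'my-container'   # must match the container name in the task definition
$containerOverride.Environment = New-Object 'System.Collections.Generic.List[Amazon.ECS.Model.KeyValuePair]'
$containerOverride.Environment.Add($envOverride)

# Run a one-off Fargate task using the override
New-ECSTask -Cluster 'my-cluster' `
            -TaskDefinition 'my-task-definition' `
            -LaunchType FARGATE `
            -Count 1 `
            -Overrides_ContainerOverride $containerOverride `
            -AwsvpcConfiguration_Subnet 'subnet-00000000' `
            -AwsvpcConfiguration_SecurityGroup 'sg-00000000' `
            -AwsvpcConfiguration_AssignPublicIp ENABLED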
If this helped you out please let me know in the comments. Feedback will motivate me to share solutions on this site more often.
Recently I rebuilt my home CentOS server, which I use to run some home media services and to continue my journey of learning Linux. Everything was going well: I moved a few services into Docker containers, and everything not in a container was installed through package managers.
After a few days I noticed the plex media service would occasionally stop. Reviewing the systemd logs showed it was being killed, presumably to allow a yum update to succeed. Oh, "that's no problem" I thought, "I'll just edit the systemd unit file to add the appropriate restart-always instructions". Well, I did that, and a few days later when my wife and I went to watch a movie the plex media service wasn't running again.
I spent a few minutes thinking about how I wanted to proceed and decided I would look at solving this recurring problem with PowerShell (Core) on Linux. I got the repo configured and PowerShell installed via yum, then started on my task to keep the plex service running.
Once PowerShell was installed, I launched pwsh from the terminal and tried Get-Service. I was greeted with this error.
Well, it looks like PowerShell Core does not support systemd or have the '*-Service' cmdlets on Linux. Not to be deterred, I decided I could attempt to parse text to get the job done like a "true Linux admin". A little of what Jeffrey Snover referred to as "Prayer-Based Parsing" in the Monad Manifesto…
Five minutes later I had a script that got the job done and could handle restarting plex if it was ever stopped in the future.
Let’s walk through the code.
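Here is a minimal sketch of the approach, written so the line numbers referenced in the walkthrough below line up with it. The log path and the exact split index are assumptions on my part; adjust them for your system.
PowerShell
$plexstatus = [string](systemctl --type=service | grep plex)
if ($plexstatus.Length -eq 0){$startplex = $true}
try{$plexsvcstate = ($plexstatus.Trim() -split '\s+')[3]}
catch{$startplex = $true}
if ($plexsvcstate -ne 'running'){$startplex = $true}
if ($startplex){systemctl start plexmediaserver; "$(Get-Date -Format u) restarted the plex service" | Out-File -FilePath /tmp/plexcheck.log -Append}
else{Write-Output "$(Get-Date -Format u) plex is running"}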
Line 1 gets the services output, then filters/greps the results and returns only the service I care about, which is plex. If I had multiple services that shared a name, I could be more precise and ask for the individual service status directly, but for now there is no need. I checked the output of systemctl with plex running and with plex stopped and noticed there is no output when the service is stopped. When the service is running the following text is returned:
systemctl output
Shell
plexmediaserver.service loaded active running Plex Media Server
Further inspection of the stopped/failed service state shows the string variable will not be null, but it also won't contain any string data. With this in mind, line 2 checks whether the $plexstatus string is 0 characters long. When it is, the boolean variable $startplex is set. This boolean is later used to determine whether the script will try to start the service.
To handle the expected condition of the plex service running, we can check the output from systemctl and make sure it indicates the service is running. This is where the prayer-based part comes in. Since there are no service objects with state properties available, my approach falls back to string manipulation. Line 3's success is very dependent on the text structure returned from the systemctl call. It also indexes into the result of a string split, which will error if the split does not succeed and there are no resulting array elements that align with the index. This is all very sloppy and error prone, so it is placed in a try/catch block to make sure the script still has a path forward if the prayer-based parsing errors out. The catch block sets the boolean to try to start the plex service instead of throwing an indexing error.
Line 5 is a happy-path check to make sure the output parsed above contains the word 'running'. If the value is anything but 'running', the boolean to start the plex service is set.
Starting on line 6, the script starts the plex service if any of the failure conditions were met and then saves some timestamped log data for future reference. A completely unnecessary else condition with an even more unnecessary Write-Output command brings it home.
Wrapping it all up, the script is set to run as a cron job every few hours.
I'm sure there are much better ways to do this with standard *nix command line tools, but by sticking with what I already know I was able to come up with a solution to my problem in a short period of time using PowerShell.
It's been a while since I've posted here. I needed to set default encryption on a bucket in an account that was not being managed by CloudFormation and was not making use of KMS. Here is the PowerShell that worked, since Google came up empty for me!
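Something along these lines did the trick. The bucket name is a placeholder, and the flattened parameter name is my recollection of the AWSPowerShell Set-S3BucketEncryption cmdlet, so confirm it with Get-Help Set-S3BucketEncryption.
PowerShell
# Enable SSE-S3 (AES256) default encryption on an existing bucket
$encryptionRule = @{ ServerSideEncryptionByDefault = @{ ServerSideEncryptionAlgorithm = 'AES256' } }
Set-S3BucketEncryption -BucketName 'my-bucket-name' -ServerSideEncryptionConfiguration_ServerSideEncryptionRule $encryptionRule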
I often see folks looking to get started with source control for their PowerShell scripts. There are a variety of free options out there but the choices get a bit limited when you need private repositories where code remains confidential. If you are like me a lot of your older code might be job specific, contain credentials, or perhaps you never even intend to share your code. None of those reasons are blockers for getting started using source control.
Here is a step by step walk through of using AWS CodeCommit and Git for Windows to make your first private Git Repo.
AWS CodeCommit
AWS CodeCommit is a service which provides fully managed, highly scalable source control, and it's currently free indefinitely for personal use if you stay under 50 GB stored, 5 user accounts, and 10,000 requests per month. Code is stored in AWS and automatically encrypted at rest. Think of it as private Git repositories as a service.
There are a few methods which can be used to access CodeCommit. This guide will focus on HTTPS access with Git credentials instead of using IAM credentials directly. Using HTTPS leaves local Git able to interface with other Git repositories seamlessly.
Creating Your AWS CodeCommit Git Credentials
Assuming you already have an AWS account, navigate to the console and sign in. You can use that link to sign up for a new account and take advantage of AWS’ free tier if you are a new customer.
The first step after logging in is to create a new IAM user which will access the CodeCommit service. CodeCommit cannot be accessed over HTTPS from the root account. Head on over to the IAM console.
Within IAM create a new user. I’ll call this user codecommituser and check the box for Programmatic access.
On the permissions page choose "Attach existing policies directly" and search for codecommit. I will choose AWSCodeCommitFullAccess, but another option would be the AWSCodeCommitPowerUser policy, which restricts repository deletions.
At the end of the add user wizard IAM credentials are presented for this user. These credentials are not needed so go ahead and click close.
Back in the IAM console under the security credentials tab of the new user disable the IAM keys using the make inactive link.
Further down on the security credentials page locate the HTTPS Git credentials section and use the Generate button. Credentials will be presented in a popup dialog. Save these credentials for later use.
Creating Your First CodeCommit Repository
Back in the AWS services console use the drop down in the top right of the screen to select the desired AWS region to use with CodeCommit. Then search for CodeCommit from the service search bar.
Landing on the CodeCommit service page brings up a Getting Started button; clicking the button starts the create repository process. I'll go ahead and create a repository called CodeCommitRepo in this example.
Copy the link that is provided after choosing HTTPS from the Clone URL drop down. Save this link alongside the previously saved Git HTTPS credentials.
Cloning a Repo And Making the First Commit
That was a lot of boring prep, but now you should have everything you need to start using CodeCommit. If you don't already have Git installed you will need it. Assuming you are on Windows, download and install Git for Windows. The defaults of the installer are fine, but make sure to double check the option that updates the path environment variable so Git is available from the command line.
With Git installed, I create a new folder to keep the local clone of the new repository. Using PowerShell in this example, I create a directory and cd (or Set-Location) into it. From the new directory I then use "git clone" followed by the repo URL saved earlier, as shown below.
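For example (the local folder is arbitrary, and the clone URL assumes the repository was created in us-east-1; substitute the URL you copied earlier):
PowerShell
mkdir C:\Repos
Set-Location C:\Repos
git clone https://git-codecommit.us-east-1.amazonaws.com/v1/repos/CodeCommitRepo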
When Git tries to clone the directory it will prompt for the HTTPS Git credentials.
After entering the credentials Git clones the repo into a new sub-directory. If this is your first time using git set the email and name to attach to commits made to this repo. This can be done with git config within the sub-directory.
PowerShell
cd CodeCommitRepo
git config user.email 'chris@example.com'
git config user.name chris
For this walk-through I will create a new script file with PowerShell but I could also copy existing scripts into the directory at this step. Creating a README.md here would also be useful.
Once the files I want stored in CodeCommit are in the cloned repo folder I am ready to start using source control! I use “git add .” to stage all the files in the directory for the next commit. Then “git commit” is used with the -m switch to add a message describing the changes. Finally “git push origin master” sends the commit up to AWS CodeCommit.
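Those three commands look like this (the commit message is just an example):
PowerShell
git add .
git commit -m "Add first PowerShell scripts"
git push origin master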
Back in the CodeCommit console, the newly committed files will appear.
I can now view the contents of synchronized scripts from the console.
Using CodeCommit with Visual Studio Code
Maybe the command line isn't how you want to work with Git. It's not for everyone. Visual Studio Code has great PowerShell and Git integration, and it works smoothly with CodeCommit.
When working with PowerShell scripts in Visual Studio Code, it will detect when it is working in a directory that is associated with a Git repo. Here is an example of what happens if I update my Hello World script file. I added some exclamation marks to the script and then saved the changes. The Source Control icon on the left hand side of Code lights up, indicating it sees changed files. I can then use the + sign next to the file to stage my changes (the git add equivalent).
Then, with files staged, I can add a message and use the check mark to commit (the git commit -m equivalent). At the bottom of the window is where I can synchronize changes up to AWS CodeCommit (the git push equivalent).
A popup dialog appears for confirmation.
Now back in CodeCommit I can see the commit history and am able to track changes to my scripts.
Summary
In this walk through I created a new IAM user to use with AWS CodeCommit. The new IAM user is left with no access to AWS other than the CodeCommit service through HTTPS authentication. Using HTTPS authentication with CodeCommit enables encrypted file transmission, and AWS handles encrypting the storage. Using this solution for source control, I gain off-site backups and versioning of all my scripts. Best of all, it costs me nothing provided I stay under the CodeCommit free tier service limits.
Here is a quick gist which returns, to the PowerShell host, the SQL server nodes responding to the requests.
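A minimal sketch of the approach, using System.Data.SqlClient; the server and database names are placeholders:
PowerShell
$server   = 'SQLNODE01'      # append '\InstanceName' here to target a named instance
$database = 'MyAppDatabase'  # a non-system database in the availability group

foreach ($connString in
    "Server=$server;Database=$database;Integrated Security=SSPI;",
    "Server=$server;Database=$database;Integrated Security=SSPI;ApplicationIntent=ReadOnly;")
{
    $conn = New-Object System.Data.SqlClient.SqlConnection $connString
    $conn.Open()
    $cmd = $conn.CreateCommand()
    $cmd.CommandText = 'select @@servername'
    $cmd.ExecuteScalar()    # write the responding node name to the host
    $conn.Close()
}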
As written, this needs to be run from PowerShell as a user with privileged SQL access to run "select @@servername". Make sure to target a non-system database inside an availability group with read-only routing configured. The script can target a named instance by appending the instance name to the $server variable at the top.
The first value returned is the server node responding to regular requests without application intent specified. The second value returned is the server node responding to read-only application intent requests.
One of the things that tripped me up early on while learning PowerShell was working with objects. Like most sysadmins I approached learning PowerShell from a scripting mindset. I wanted to run a script and have the script complete a routine task. I thought about PowerShell as a purely procedural language and I mostly ignored objects.
A great characteristic of PowerShell is just how easy it is to get started. You can get tons of tasks done in PowerShell without a solid grip on objects. But to make progress in the language and reach what most consider intermediate level knowledge, you need a solid understanding of objects. How does someone without a programming or developer background get there? There are so many kinds of objects; it's not as if there is some set number of times you pass objects over the pipeline before you have them all mastered. No, the best way to get comfortable working with objects is learning how to examine them.
Below are the methods I use the most when working with unfamiliar objects.
Method Number 1: IntelliSense in the PowerShell ISE.
This is my go-to method of inspecting a simple object. It is often all that is needed to discover the properties of the object I am after.
To demonstrate this, I will use a practical example of getting the full path to file objects returned from filtering the results of Get-ChildItem. I can save the search results to a variable, $a.
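For example (the path and filter here are just placeholders):
PowerShell
$a = Get-ChildItem -Path C:\Scripts -Recurse -Filter *.ps1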
With the variable saved I can call the variable with a trailing period and that will start IntelliSense exploration of the object.
This gets me right to the “FullName” property that contains the complete path.
Method Number 2: Get-Member
If using IntelliSense does not get me what I am after, I am probably looking at a more complex object. Maybe it's an object that contains other objects. The best way I've found to further inspect an object beyond IntelliSense is to pipe it to Get-Member or its alias "gm".
Again I will use a practical example from one of my posts a few weeks back about working with Amazon EC2 security groups. My goal in this example was to create security groups and their inbound/ingress rules. I know from experience that I want to understand Amazon’s objects when working with the AWS Tools for Windows PowerShell before using the cmdlets. So I set out to look at what kind of objects Amazon uses for its EC2 security groups. My first step was to look up a security group with the Get-EC2SecurityGroup cmdlet and save the returning object to a variable.
PowerShell
$sg = Get-EC2SecurityGroup -GroupId sg-66ad9e19
With the object stored in the variable I tried IntelliSense and saw IpPermission properties that looked promising. I have a hunch that this property will reveal how security groups handle their network traffic permissions.
After choosing the property and entering it into the console, I can see it does contain what I am after, but it's not so straightforward. I see some "{}" in fields where I expect data. Ports 80 and 22 match up with my ingress rules on the security group, but there are no details on the source security group of the ingress rule.
This is a sign of a more complex object, and it's time to use Get-Member.
PowerShell
$sg.IpPermission | gm
Whoa, this object is quite complex. I can see that the IpPermission property I am inspecting is an object type unto itself: an Amazon.EC2.Model.IpPermission, which is listed at the top of the Get-Member output. This IpPermission has its own set of properties. I think of these as "sub-properties" of the parent security group object. Looking at these "sub-properties", we see they are lists of other object types. It's only going to get more complex from here!
Next I backtrack a step and pipe $sg to Get-Member and see that it is an Amazon.EC2.Model.SecurityGroup.
PowerShell
$sg | gm
With the type of the object gained from using Get-Member, I can use a search engine to find my way to Amazon’s documentation of this object’s class. That’s another great source of information about this object. From this point further inspection is a matter of preference. I can keep using Get-Member on all the properties under IpPermission to learn about the object or I can look to Amazon’s documentation about the IpPermission class. Both of these options are valid but I prefer to keep using PowerShell. Continuing down this path of discovering sub properties and piping them to Get-Member might take a while so to save time I can move on to my final and new favorite method of object exploration.
Method Number 3: Show-Object
Show-Object is a great add-on to PowerShell. It's available from the PowerShell Gallery as part of Lee Holmes' PowerShellCookbook module. It's like Get-Member on steroids.
PowerShell
$sg | Show-Object
When you pipe an object to Show-Object, it displays a tree view of the object in a GUI window, much like Get-Help's -ShowWindow switch does for help topics. You can use the popup window to click through all the properties of the object and discover more details about the object's inheritance and structure. As you drill into the tree view, the bottom pane of the window updates with familiar Get-Member results for each property.
A few clicks later, inside the IpPermission property, I see information about UserIdGroupPair and I've found my source security group allowed for ingress traffic. This is "sg-fda89b92" in the image above. It is in a form I did not initially expect. With all the information gained from these discovery methods, it was only a matter of time before I had a great understanding of this previously unknown object type.
I wanted to look at connecting two disparate systems for a recent project. The goal was to be able to enter information into one system and have information processed by another system. The systems have no direct authentication trusts between them but they are both running on Amazon Web Services EC2 platform. This was a perfect use for the decoupling nature of the Amazon Simple Queue Service and I wanted to come up with a proof of concept, which is outlined below.
Before getting into any details, I want to make clear that this is not a best-practice use of SQS. For most uses of SQS there is a need to keep track of the messages being processed in some kind of permanent state, such as a database. With a persistent data store containing the processed messages, the queue workers can process messages reliably even when a message is delivered more than once. That being said, let's go over this proof of concept.
Assuming AWS keys with correct permissions are configured and the AWSPowerShell module is loaded, the command below will create a new SQS queue with PowerShell. The command returns the created queue URL, which is stored in the variable $NewSQSQueueUrl for future use.
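Something like this (the queue name is just an example):
PowerShell
$NewSQSQueueUrl = New-SQSQueue -QueueName 'POCQueue'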
A quick peek at the SQS console to ensure the queue was created.
This next bit creates an array of strings which will serve as some example information to share between the systems. For this proof of concept I am sending example PowerShell parameters into the SQS queue.
PowerShell
$exampleparams = @("All The Things","Another Example Parameter")
I have written the POC functions, which are also uploaded to my GitHub PowerShell repo, and dot sourced them. These functions put the information (the example parameters) into the new SQS queue as message attributes of the newly created SQS messages.
After running these functions the message ids are returned to the PowerShell host indicating the messages have been inserted into the SQS queue successfully.
Below is the function that was dot sourced that did the uploading. You could customize this to fit your use case with some help from the AWS Send-SQSMessage cmdlet documentation.
Send-SQSMessage -QueueUrl $SQSQueueUrl -MessageAttributes $messageAttributes -MessageBody "Request generated by $($env:Username) at $(Get-Date -Format u)"
}
}
catch{
Write-Error $_
}
}
}
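For reference, the $messageAttributes value passed above is a hashtable of Amazon.SQS.Model.MessageAttributeValue objects keyed by attribute name. A sketch of how one might be built (the attribute name 'Parameter' is hypothetical):
PowerShell
$attributeValue = New-Object Amazon.SQS.Model.MessageAttributeValue
$attributeValue.DataType    = 'String'
$attributeValue.StringValue = $exampleparams[0]          # one of the example parameter strings
$messageAttributes = @{ 'Parameter' = $attributeValue }  # attribute name is illustrative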
With messages being put into the queue, I need a function to pull down the messages and process them on the queue worker system (aka the SQS message receiver). My goal is to take different actions on the queue worker system based on the message attributes of the SQS messages pulled out of the queue. That function looks something like this.
Start-SQSQueueProccessing
PowerShell
function Start-SQSQueueProccessing
{
    [CmdletBinding()]
    [Alias()]
    [OutputType([int])]
    Param
    (
        [Parameter(Mandatory=$true,
                   ValueFromPipelineByPropertyName=$true,
                   Position=0)]
        [String]$SQSQueueUrl
    )
    Process
    {
        try
        {
            Write-Verbose "$(Get-Date -Format u) Polling for $SQSQueueUrl messages"
            # Reconstructed section: pull messages with their attributes, act on each one,
            # then delete it from the queue (see the considerations described below).
            $messages = Receive-SQSMessage -QueueUrl $SQSQueueUrl -MessageAttributeName All
            if ($messages)
            {
                foreach ($message in $messages)
                {
                    Write-Verbose "$(Get-Date -Format u) Processing message $($message.MessageId)"
                    # Take action here based on the message attributes, then emit some output
                    Write-Output $message.MessageAttributes
                    # Remove the processed message so it is not delivered again
                    Remove-SQSMessage -QueueUrl $SQSQueueUrl -ReceiptHandle $message.ReceiptHandle -Force
                }
            }else{Write-Verbose "$(Get-Date -Format u) ... No messages were pulled from the queue...nothing to do right now"}
        }
        catch{
            Write-Error $_
        }
    }
}
This function isn’t actually doing anything interesting with the messages other than generating some output to the PowerShell streams but this is a proof of concept after all :).
Considerations when using SQS
As SQS is designed to decouple distributed systems, SQS does not assume every message pulled from the queue has been processed successfully. Messages that are pulled from the queue are hidden from the queue until the message visibility timeout period has passed. It is up to the queue workers to delete the messages from the queue after the message has been processed. This is why at the end of the function above, messages are deleted from the queue with the Remove-SQSMessage cmdlet.
After working with SQS a bit, I noticed that the behavior surrounding the delivery of messages sitting in the queue is a little unintuitive. For example, say there are 8 messages in a queue and I request up to 10 messages with Receive-SQSMessage. A logical assumption would be that all 8 messages are returned, but that is rarely the case. After working with a few queues it becomes apparent that SQS often returns only a subset of the available messages on any given receive call. Additionally, without using FIFO (First-In First-Out) queues, the messages will often be delivered out of order.
Another bit of a gotcha I ran into at first was that, by default, Receive-SQSMessage will not return any message attributes from SQS. The resulting Amazon.SQS.Model.Message objects had blank MessageAttributeValues until I specified the "-MessageAttributeName All" parameter.
Hopefully the above considerations shed some light on the way the function is written. I wrote it so that it could be run repeatedly from a parent polling script and so that it could handle one or more message objects being returned from each poll of SQS.
Back to the functions
Finally, we get to the polling portion of the script, which could run on a scheduled interval via Task Scheduler. This function first checks a queue for the existence of messages using the Get-SQSQueueAttribute cmdlet. If messages are found in the queue, it invokes the Start-SQSQueueProccessing function referenced above to handle them. I make use of PowerShell transcription to keep a log for now. If this ever moves out of proof of concept, logging could be improved quite a bit to make it cleaner.
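A rough sketch of that polling wrapper follows. The paths, queue URL, and region are placeholders, and the ApproximateNumberOfMessages property name should be verified against your Get-SQSQueueAttribute output.
PowerShell
Start-Transcript -Path "C:\Logs\SQSPolling_$(Get-Date -Format yyyyMMdd).log" -Append

. C:\Scripts\SQSPOCFunctions.ps1   # dot source the POC functions (path is illustrative)

$SQSQueueUrl = 'https://sqs.us-east-1.amazonaws.com/111111111111/POCQueue'
$queueAttributes = Get-SQSQueueAttribute -QueueUrl $SQSQueueUrl -AttributeName ApproximateNumberOfMessages

if ([int]$queueAttributes.ApproximateNumberOfMessages -gt 0)
{
    Start-SQSQueueProccessing -SQSQueueUrl $SQSQueueUrl -Verbose
}
else
{
    Write-Output "$(Get-Date -Format u) No messages found in the queue"
}

Stop-Transcript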
What does it look like when run, you may be wondering?
The PowerShell transcript output captures the same information. As you may have noticed, the body of the generated SQS messages contains information on who created the SQS message and when it was created which could help for audit trails.
I stumbled upon an interesting bug today. I tested some PowerShell scripts running from task scheduler on Windows Server 2012 R2 and everything executed fine. However, when I configured the scheduled task for daily execution and checked the log output the next day, the script had failed. None of the service account’s PowerShell profiles loaded and the functions I needed were unavailable. This was odd because I tested this entire process the day before, the only difference I could think of was that I was logged in interactively with that service account when I tested the scheduled task the day before.
Armed with this hunch, I went to the search engines and eventually found Microsoft hotfix 3133689.
So if you ever need to load PowerShell profiles for scheduled tasks, you are going to need that hotfix. Obviously there are much cleaner ways to do what I was trying to do. Additionally having profile dependencies in your scripts or allowing interactive sessions for your service accounts are both far from best practice.
Here is some example code which may help you automate security group creation with PowerShell. I wanted to take a look at automating some security group creation tasks today and there wasn’t too much help available via search engines. Maybe this post will help that out a bit.
The minimum IAM permissions needed to accomplish this task are:
IAM Policy for this script
JSON
{
"Version":"2012-10-17",
"Statement":[
{
"Sid":"Stmt1392679134000",
"Effect":"Allow",
"Action":[
"ec2:Describe*",
"ec2:AuthorizeSecurityGroupIngress",
"ec2:CreateSecurityGroup",
"ec2:CreateTags"
],
"Resource":[
"*"
]
}
]
}
This snippet of PowerShell (a rough sketch follows the list) will:
Look up the only VPC in your account, provided your regional defaults are set via Initialize-AWSDefaults or inherited from the EC2 instance you are running this on. This is helpful because some of the PowerShell cmdlets only play nice with the default VPC, which many people tend to delete.
Create a new security group for a load balancer
Allow HTTP and HTTPS traffic ingress into the load balancer security group
Create a new security group for a web server
Allow HTTP from the load balancer to the web server security group
Allow SSH from a security group that is looked up by the name “My Bastion Host Security Group” to the web server
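Here is a rough sketch of that sequence. Group names and descriptions are placeholders, newer versions of the module may prefer Ipv4Ranges over IpRanges, and parameter names are worth confirming with Get-Help Grant-EC2SecurityGroupIngress.
PowerShell
# Look up the only VPC in the account (assumes exactly one VPC)
$vpc = Get-EC2Vpc

# Security group for the load balancer, with HTTP/HTTPS open to the world
$elbSg = New-EC2SecurityGroup -GroupName 'My ELB Security Group' -Description 'Load balancer SG' -VpcId $vpc.VpcId
$http = New-Object Amazon.EC2.Model.IpPermission
$http.IpProtocol = 'tcp'; $http.FromPort = 80; $http.ToPort = 80
$http.IpRanges.Add('0.0.0.0/0')
$https = New-Object Amazon.EC2.Model.IpPermission
$https.IpProtocol = 'tcp'; $https.FromPort = 443; $https.ToPort = 443
$https.IpRanges.Add('0.0.0.0/0')
Grant-EC2SecurityGroupIngress -GroupId $elbSg -IpPermission $http, $https

# Security group for the web server
$webSg = New-EC2SecurityGroup -GroupName 'My Web Server Security Group' -Description 'Web server SG' -VpcId $vpc.VpcId

# HTTP ingress from the load balancer security group
$fromElb = New-Object Amazon.EC2.Model.IpPermission
$fromElb.IpProtocol = 'tcp'; $fromElb.FromPort = 80; $fromElb.ToPort = 80
$fromElb.UserIdGroupPairs.Add((New-Object Amazon.EC2.Model.UserIdGroupPair -Property @{ GroupId = $elbSg }))

# SSH ingress from the bastion host security group, looked up by name
$bastionSg = Get-EC2SecurityGroup -Filter @{ Name = 'group-name'; Values = 'My Bastion Host Security Group' }
$fromBastion = New-Object Amazon.EC2.Model.IpPermission
$fromBastion.IpProtocol = 'tcp'; $fromBastion.FromPort = 22; $fromBastion.ToPort = 22
$fromBastion.UserIdGroupPairs.Add((New-Object Amazon.EC2.Model.UserIdGroupPair -Property @{ GroupId = $bastionSg.GroupId }))

Grant-EC2SecurityGroupIngress -GroupId $webSg -IpPermission $fromElb, $fromBastion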
Today I am going to attempt to take some PowerShell functions I wrote on Windows and run them on Linux. This should all be possible now that Microsoft Loves Linux! With the new .NET (Core) going open source and cross platform, combined with AWS's Tools for PowerShell Core, I should be able to run the exact same functions across Windows and Linux.
For this exercise I will be using an Ubuntu virtual machine on Hyper-V, but this could easily be done on CentOS or various other Linux distros. Microsoft recently added support for installing PowerShell through popular distros' default package managers, so we will take that approach to get up and running.
Enough intro, let's get to it! I am going to use Microsoft's provided steps in a bash terminal window to register the Microsoft repo and get the latest PowerShell 6 alpha installed and running.
After running those commands, PowerShell is installed and the system leaves us at the PowerShell command prompt.
To verify everything is working I can use $PSVersionTable to output our PowerShell info to the host.
Okay, everything is looking good so far.
Loading AWS Tools for PowerShell Core
Next up is to get AWS Tools for PowerShell Core loaded. This can be done with the new PowerShell package management cmdlets, specifically Install-Module.
PowerShell
Install-Module -Name AWSPowerShell.NetCore
Oh no, a red error appeared! Quick, email this error to our System Administrator to figure out what went wrong! Haha, just kidding. Let's read it.
The error says administrator rights are required to install modules. The suggestions are to change the scope via parameter or to use elevated rights. Well, Run as Administrator sure won't work on Linux, so I will do the equivalent: exit out of PowerShell, then sudo powershell back into the PowerShell host.
After a retry of the Install-Module command from the now elevated PowerShell host, the Install-Module command completes without error.
I want to check the available modules with the Get-Module command and verify the AWSPowerShell.NetCore module is listed now that it's installed.
PowerShell
Get-Module -ListAvailable
Everything checks out and the AWS module is listed right at the top.
Loading my AWS functions from GitHub
I don’t plan on doing any editing of my functions or commits from this system, so I can skip configuring Git and just install it right from the package manager. The neat thing about using Git is that all the nuances that come from working on files between *nix and Windows, like different carriage returns, should be handled behind the scenes by Git.
Shell
sudo apt-get install git
Once Git is installed I can clone the PowerShellScripts repository from my GitHub.
A quick ls and cd is used to make sure the AWSFunctions folder came down with the repository.
Creating AWS Read Only Access Keys
Since this is just a proof of concept exercise, I am going to run a function I built to check the status of a running EC2 Instance by looking up its Name tag. The only access I need for this in AWS IAM is the ability to describe my instances so we can create a new IAM User with an attached EC2 Read only policy.
The IAM console has become really simple to use with recent updates, but let's cover everything step by step.
First I’ll log into my AWS account and navigate to the IAM console. From there I want to choose Users and then use the Add User button.
I will call the user blogpostec2readonly and check the box for programmatic access, which will generate our access keys.
On the next screen I will choose Attach existing policies directly. The filter box directly below can be used to search for "ec2readonly", and an AWS managed policy for EC2 read-only access will appear. This managed policy is a prewritten JSON IAM policy maintained by Amazon that helps administrators quickly grant permissions without needing to deep dive into IAM policy documents. Perfect for the use case at hand. I'll check the box for this policy and click next.
The next screen is a review screen and a final Create User button.
After the new IAM user is created the access key and secret key are provided for download. Be careful with these, as AWS access keys are all that is needed to access an AWS account. I will copy the provided access keys into the gedit text editor so I can use them in the next step.
Configuring AWS PowerShell Module Credentials
All the prep work is nearly complete; the next steps are to configure the default region, access key, and secret key to be used with the AWS PowerShell module cmdlets. To do this we will import the AWSPowerShell.NetCore module and run the Set-AWSCredentials and Initialize-AWSDefaults cmdlets.
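For example (the region and profile name are arbitrary choices; substitute the keys downloaded earlier for the placeholders):
PowerShell
Import-Module AWSPowerShell.NetCore
Set-AWSCredentials -AccessKey 'YOUR-ACCESS-KEY' -SecretKey 'YOUR-SECRET-KEY' -StoreAs default
Initialize-AWSDefaults -ProfileName default -Region us-east-1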
I need to load my functions into memory, so let's use Get-ChildItem to list the function files and dot source each one. (% in PowerShell is a shorthand alias for ForEach-Object.)
PowerShell
Get-ChildItem AWSFunctions | % { . $_.FullName }
To verify my custom functions are loaded and ready to execute, we can try to tab complete them. The function I am running in this exercise is Test-RunningEC2InstanceByServerName, so I will type Test-Run and press tab.
Success! Tab completion filled out the name of the function for me. Let's see if it works…
The Instance hosting this here blog is called PACKETLOST02 so I will send that server name in as a parameter into the function and I am expecting it to return that the instance is running.
The function ran and returned that the instance is running.
Summary
How neat was this? I took some PowerShell functions I wrote on the Windows platform, committed them to my GitHub repo, and then got them to run on Linux. When I initially wrote these functions, it was to help automate my day-to-day administration of Amazon Web Services. I wrote them on the Windows platform with only the Windows platform in mind. Thanks to the great work of the developers at Microsoft and Amazon Web Services, these functions are now cross platform.
I hope this post provides a quick glance into how useful and flexible PowerShell can be, as well as how promising the future of .NET Core and the .NET Standard libraries is for cloud computing. Cheers!