Alert on login in the production environment

Almost one year ago, Roadmap was ISO 27001 certified. This was a great feat for a startup that was only half a year old at the time. We weren’t entirely issue-free of course; we had a couple of minor findings to fix, but only a couple. In general, we were in control. One of these minor points was that we needed to get an alert when someone logged on to a server in the production environment. Why? Because people shouldn’t. There is no need to be on a production server for writing or deploying software, because we use Octopus Deploy for deployments. The only developer we’d expect to find logging on to servers is the one on call – he or she is in the ‘ops’ role of DevOps. And of course we’d expect Jeffrey, our DBA.

Recently, someone asked me just how we created such an alert, so I decided to share how we did it.

Our first attempt to get the alert was to leverage the Windows event with ID 4624 together with LogicMonitor (https://www.logicmonitor.com). It is a great tool for monitoring your servers; you should check it out. My thought was that I could use LogicMonitor to check the event log and warn whenever a user logs on. But the log was full of these events: Windows didn’t just log the logon events from me and my colleagues, but also those from the AD accounts that we use to run services and tools. We needed to filter the logins to spot the strange ones. Unfortunately, despite LogicMonitor’s powerful filtering capabilities, I was unable to filter the events down to the level I needed.
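For reference, this is roughly the filter we were trying to express, sketched here in PowerShell against the Security log. This is only a sketch; the service-account names are made-up examples.

```powershell
# Sketch: today's 4624 (logon) events, keeping only interactive (2)
# and remote-interactive (10) logons and dropping our service accounts.
# The account names below are hypothetical.
$serviceAccounts = @('svc_octopus', 'svc_logicmonitor')

Get-WinEvent -FilterHashtable @{ LogName = 'Security'; Id = 4624; StartTime = (Get-Date).Date } |
    ForEach-Object {
        # The interesting fields live in the event's XML payload.
        $xml = [xml]$_.ToXml()
        $data = @{}
        foreach ($d in $xml.Event.EventData.Data) { $data[$d.Name] = $d.'#text' }
        [pscustomobject]@{
            Time      = $_.TimeCreated
            User      = $data['TargetUserName']
            LogonType = $data['LogonType']
        }
    } |
    Where-Object { $_.LogonType -in @('2', '10') -and $serviceAccounts -notcontains $_.User }
```

Logon type 2 is an interactive console logon and type 10 a remote desktop session – exactly the ones a human would produce.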

After that we tried a different approach and had more luck: Group Policy logon scripts. Details on how to run a PowerShell script when someone logs on or off can be found here: https://4sysops.com/archives/configuring-logon-powershell-scripts-with-group-policy/. After that we just had to come up with a script to send an email. Luckily, this is really easy in PowerShell.

$user = [Environment]::UserName
$machine = [Environment]::MachineName
$now = [DateTime]::Now.ToString() 

Send-MailMessage -SmtpServer "<mailserveraddress>" `
		 -To "monitoring@getroadmap.com" `
		 -From "alert@getroadmap.com" `
		 -Subject "AD RD Session Start" `
		 -Body "$user has logged into $machine at $now"

But this wasn’t all we needed, as we found out during the internal audit for ISO 27001 half a year later. We needed to know why somebody was logging on. My assumption was that we would just change the script to add a logon reason. But alas, if only it were that simple.

To add the reason, the first thing you need to do is ask the user. This seems simple enough from PowerShell (the Microsoft.VisualBasic assembly has to be loaded first):

Add-Type -AssemblyName Microsoft.VisualBasic
$reason = [Microsoft.VisualBasic.Interaction]::InputBox($msg, $title, "", -1, -1)

There are a few major problems with this solution. First, you can dismiss the box (there is a cancel button and a close button). That was easy to fix with a while loop around it that checks whether the reason has been filled in. But even then there is still a chance of someone killing the script, and we would never be informed of the logon. So we decided to send a second mail with the reason. We also got rid of the Visual Basic input box in favour of XAML, as you can see in the final script below.

Then there is the problem that the window needs to be opened in the foreground, so it is visible right away. This required us to do some DLL imports. Here’s the final script that sends the second mail.

function ShowWindow {
$Xaml = @'
<Window xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        x:Name="Window"
        Title="Name" Height="137" Width="444" MinHeight="137" MinWidth="100"
        FocusManager.FocusedElement="{Binding ElementName=TextBox}"
        ResizeMode="CanResizeWithGrip">
    <DockPanel Margin="8">
        <StackPanel DockPanel.Dock="Bottom" 
                    Orientation="Horizontal" HorizontalAlignment="Right">
            <Button x:Name="OKButton" Width="60" IsDefault="True" 
                    Margin="12,12,0,0" TabIndex="1" >_OK</Button>
            
        </StackPanel>
        <StackPanel >
            <Label x:Name="Label" Margin="-5,0,0,0" TabIndex="3">Label:</Label>
            <TextBox x:Name="TextBox" TabIndex="0" />
        </StackPanel>
    </DockPanel>
</Window>
'@

    if ([System.Threading.Thread]::CurrentThread.ApartmentState -ne 'STA')
    {
        throw "Script can only be run if PowerShell is started with -STA switch."
    }

    Add-Type -Assembly PresentationCore,PresentationFramework

    $xmlReader = [System.Xml.XmlReader]::Create([System.IO.StringReader] $Xaml)
    $form = [System.Windows.Markup.XamlReader]::Load($xmlReader)
    $xmlReader.Close()

    $window = $form.FindName("Window")
    $window.Title = "Logon reason"

    $label = $form.FindName("Label")
    $label.Content = "Why are you logging on?"

    $textbox = $form.FindName("TextBox")

    $okButton = $form.FindName("OKButton")
    $okButton.add_Click({$window.DialogResult = $true})

    if ($form.ShowDialog())
    {
        if ([string]::IsNullOrEmpty($textbox.Text)){
            return ""
        }
        return $textbox.Text
    }
    else{
        return ""
    }
}
     
$activateWindow = {
        # All the user32 functions we need, imported once as a single type.
        $signatures = @'
[DllImport("user32.dll")] public static extern bool SetForegroundWindow(IntPtr hWnd);
[DllImport("user32.dll")] public static extern bool BringWindowToTop(IntPtr hWnd);
[DllImport("user32.dll")] public static extern IntPtr FindWindow(String sClassName, String sAppName);
[DllImport("user32.dll")] public static extern bool ShowWindow(IntPtr hWnd, int nCmdShow);
'@
        Add-Type -MemberDefinition $signatures -Name NativeMethods -Namespace Win32

        $null = [Reflection.Assembly]::LoadWithPartialName("Microsoft.VisualBasic")
        $isWindowFound = $false
        while (-not $isWindowFound) {
            try {
                # AppActivate throws while the window doesn't exist yet,
                # which keeps us in the retry loop.
                [Microsoft.VisualBasic.Interaction]::AppActivate($args[0])
                $ptr = [Win32.NativeMethods]::FindWindow($null, $args[0])
                [Win32.NativeMethods]::BringWindowToTop($ptr)
                [Win32.NativeMethods]::SetForegroundWindow($ptr)
                [Win32.NativeMethods]::ShowWindow($ptr, 1) # 1 = SW_SHOWNORMAL
                $isWindowFound = $true
            }
            catch {
                Start-Sleep -Milliseconds 100
            }
        }
    }

$user = [Environment]::UserName
$machine = [Environment]::MachineName
$now = [DateTime]::Now.ToString() 
$reason = ""

while($reason -eq "") {
    $job = Start-Job $activateWindow -ArgumentList "Logon reason"
    $reason = ShowWindow
    Remove-Job $job -Force
}
Send-MailMessage -SmtpServer "<mailserveraddress>" `
		 -To "monitoring@getroadmap.com" `
		 -From "alert@getroadmap.com" `
		 -Subject "AD RD Session Reason" `
		 -Body "$user has logged into $machine at $now because: $reason"

But we still weren’t there. The Group Policy logon script does not run in the foreground, so the window would never pop up the right way. We had to pull another trick.

We changed the logon script to launch the PowerShell script shown above in a separate process:

Start-Process powershell -ArgumentList ((Split-Path $MyInvocation.InvocationName) + "\SendMailOnLogonWithReason.ps1")

Now we are greeted with a popup window every time we access a server in the production environment. We have to fill it in, or it will keep bugging us until we do. We receive no less than three mails per session: one at logon time, one with the reason, and one with the logoff time.

It does the job, but we can already see this changing into a little SQL script, or maybe a publish of an Azure Service Bus event, rather than a collection of emails.

More sweetness…

I already blogged about ‘OctopusDeploy & TestComplete sweetness!’ and now I’d like to add a little more…

First, since we bought a TestComplete Enterprise license, we were able to run TestExecute on multiple machines and thus add more servers that can execute tests.

Second, I could see problems coming our way if we were to keep our test scripts outside of source control or outside the branch. We would run the risk of changing the system in such a way that the tests would no longer be correct. I spent some time today on mitigating that problem. Here’s what I did:

  • I wrapped the TestComplete project inside a class library
  • added a deploy.ps1 that does the magic of starting TestExecute
  • set all files to ‘Content & Do not copy’
  • created a package with _PublishedApplications and Octopack
  • set up a server in the environment in the role ‘TestComplete’
  • added a step to the octopus project to deploy the TestComplete package to that server.

Done!

Seriously, this setup is making me happier every day!

BTW: still bothered by the SMS that is sent with every test run…

OctopusDeploy &TestComplete sweetness!

Seriously, this is just sw33t! After a deploy, our system is tested with no less than 2 tests!


Okay, it’s actually a lot sweeter than it sounds. The first test checks whether our website returns the correct response when we do not specify an id. This is just a basic test.

The sweetness is in the second test. This test performs an XML post to the API and then checks whether the id returned by the API is retrievable on our website. And apparently it is! This one test covers the API, the Windows service, and the website – which is everything there is to our system.

Now on to check whether the SMS that we send arrives…

Running TestComplete scripts with Octopus Deploy

It was becoming more and more clear that running a deployment without verifying that it had gone as planned was risky, and yesterday we had a deployment go bad… Luckily it was on the test environment and not on production.

So my conclusion was that we have to do some automated system or UI testing apart from the unit and scenario tests we run in the build. We had just bought a license for TestComplete (http://smartbear.com/products/qa-tools/automated-testing-tools) to automate our tests, so it would be sweet if we could run them right after a deployment.

I took a relatively simple test to implement in the deployment process first: a simple text check on a webpage. I had to find out how to start TestComplete from the command line and specify a test. Refer to the documentation for this; it describes the steps well. The step from command line to PowerShell is a breeze after that.

There is, however, a problem: TestComplete doesn’t just return success or failure; it generates log files. So we are stuck with extracting the results from the log file. I run TestComplete in silent mode and export the log to an .mht file, which is a multipart MIME file. The script unpacks this file, finds the root.xml part amongst all the others (decoding the base64 format), and gets the status code from the XML. You can look in this script to see the details.
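To give an idea of what that looks like, here is a sketch rather than our exact script; the TestComplete path, the project suite name, and the place where the status lives inside root.xml are assumptions you would have to adjust for your own setup:

```powershell
# Run a TestComplete project suite silently and export the log as .mht.
# Paths and names below are illustrative, not our real ones.
$tc  = "C:\Program Files (x86)\SmartBear\TestComplete 10\Bin\TestComplete.exe"
$log = "C:\Temp\LastRun.mht"
& $tc "C:\Tests\MySuite.pjs" /run /exit /SilentMode "/ExportLog:$log"

# The .mht is a multipart MIME document whose parts are base64 encoded.
# Locate the part named root.xml, decode it, and read its status.
$raw = [IO.File]::ReadAllText($log)
if ($raw -match '(?s)Content-Location:\s*\S*root\.xml.*?\r?\n\r?\n(.+?)\r?\n--') {
    $bytes = [Convert]::FromBase64String(($Matches[1] -replace '\s'))
    [xml]$root = [Text.Encoding]::UTF8.GetString($bytes)
    # Where exactly the status lives in root.xml is an assumption;
    # inspect a real log file to find the right node/attribute.
    $status = $root.DocumentElement.GetAttribute("status")
    if ($status -ne "0") { throw "TestComplete run failed (status $status)" }
}
```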

After this worked locally, I installed a Tentacle on the TestComplete server and tried to kick off the PowerShell script with Octopus. It started, but the test failed because the script could not navigate to the website. This is because the Tentacle was running as Local System, an account that is bound to the machine itself. I used an AD user as the service account for the Tentacle instead, and it worked like a charm.

Happy deploying!

(PS: There might be easier ways to get the results from TestComplete or to parse a multipart MIME document. If you know of one, please tell me, because this is not the most elegant way…)

Word 2013 not uploading images for blog posts

Apparently, Word 2013 is no longer uploading images to my blog. I will have to find out why… Until I have a solution, you’ll have to do without…

Sorry….

Octopus deploy at Sound Of Data

Two months ago I blogged about Application Lifecycle Management (http://bloggingabout.net/blogs/dries/archive/2013/03/21/sound-of-data.aspx). I would have given regular updates on progress if it weren’t for an important project for a customer that got in the way. Or did it?

We decided to use the project as a showcase of what can be achieved when Application Lifecycle Management is firmly in place. Here’s what we did.

Development & branching

To begin with, we looked at the iterative development process and the branching strategy. We decided to keep the Main branch stable whatever it took, so we developed every user story in a separate branch and merged these into a sprint branch. The main branch would remain untouched until the sprint branch was considered stable.

Build & packaging

We also set up a fully automated build; three of them, to be exact. One for continuous integration – with a failing-build notification so that the entire team would know when the build broke and could assist in fixing it – one that creates packages to be released from the sprint branch, and one that creates packages from the main branch. The packages were created as described in my blog posts (here and here).

The ultimate goal was that these packages would be installable on all environments (DevTest, Test, Acceptance and Production), but the sprint branch packages – the possibly unstable ones – should only be installable on DevTest. To achieve this we created two NuGet repositories (file shares) that the builds would publish to: the sprint branch packages went to our team repository and the main branch packages to the release repository.

Deployment

We decided on Octopus Deploy as our automated deployment tool and used the following setup to get a smooth deployment cycle. We created two environments, two project groups and two projects: one for the team and one for the real stuff. The team project (group) can only release to the team DevTest environment and releases the sprint packages. The release packages that come from the main branch can be published to Test, Acceptance and Production; the AppSettings that have different values between these environments are changed by Octopus using variables. Octopus does a really great job at this.

Result

So what can we do now? We can deploy the product every time a build finishes. We do not do this automatically, because our tester, not surprisingly, does not enjoy having the DevTest environment change during her tests. She’s basically in control: when she’s ready for a new test, she presses the button on the Octopus portal, a new version is deployed, and she can start the next test round. (In practice we’re still pressing the button, but that’s an adoption issue…) When she gives the green light for the sprint branch, we merge it into the main branch. We run the build, and if it succeeds we can deploy to Test, where our tester does her thing again. We deploy to Acceptance if all is still well. The same holds true for deployment to Production.

I overheard a colleague say: “Deployment has gone from being really stressful to being really boring; I am starting to annoy my colleagues with stupid jokes when releasing software…” Well, at least it is over very quickly. :)

From here, we now want to go and automate the deployment of all our products. We don’t just want to write software, we want our software to be used.

Happy coding! And deploying, I guess…

Octopus Deploy with PublishedApplications

Normally, when you install OctoPack using NuGet, the contents of the OutDir (an MSBuild variable) will be put in the NuGet package it creates for you. But when running in TFS Build this will give you trouble, as mentioned in http://help.octopusdeploy.com/discussions/problems/505-all-binaries-from-tfs-build-in-nuget-package

A solution was mentioned there: use the PublishedApplications NuGet package to build each project to its own directory, and I blogged as much yesterday… But this is just a half-baked solution; yes, each project is built to its own directory, but OctoPack still takes the output of the TFS binaries folder for the packages. I found a way around this and will describe it here.

I had to edit the source for OctoPack. I changed how the DLL determines whether a project is a web project. Normally it does this by looking for a ‘web.config’ file; now you can set the TreadEveryProjectAsApplication attribute of the CreateOctoPackPackage task to ‘true’, which will make OctoPack always use the contents of the OutDir as input for the package. (It will ignore the content files in the project directory.)

I also removed the line that excluded the files in the _PublishedWebsites folder, because I explicitly need these files.

I added this PropertyGroup to the octopack.targets file:

<PropertyGroup>
  <OctoPackDirectoryToPack Condition="'$(ExeProjectOutputDir)' != ''">$(ExeProjectOutputDir)</OctoPackDirectoryToPack>
  <OctoPackDirectoryToPack Condition="'$(WebProjectOutputDir)' != ''">$(WebProjectOutputDir)</OctoPackDirectoryToPack>
</PropertyGroup>


It sets the OctoPackDirectoryToPack variable to either ExeProjectOutputDir or WebProjectOutputDir. I then use that variable as input for the OutDir attribute of the CreateOctoPackPackage task.
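To illustrate, the task invocation then ends up looking something like this hypothetical fragment (the surrounding attributes depend on your own octopack.targets):

```xml
<CreateOctoPackPackage
    OutDir="$(OctoPackDirectoryToPack)"
    TreadEveryProjectAsApplication="true" />
```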

Download it here: http://bloggingabout.net/media/p/578418/download.aspx or check out the code at https://github.com/dmarckmann/OctoPack

Happy Coding!

PS. I later also created the GetVersionFromAssemblyFileVersion property (a bool) in case you want to get the version from the AssemblyFileVersion of the PrimaryOutputAssembly like we do. Download from GitHub and build locally…

PublishedApplications sweetness for TFS Build

I just learned that there is a NuGet package that allows you to build your non-web projects to a ‘_PublishedApplications’ directory, just as your web projects are built to a ‘_PublishedWebsites’ directory.

Check it out: http://www.nuget.org/packages/PublishedApplications/

From there on it’s an easy ride to get your build to produce Octopus deploy packages. You can read about it here: http://help.octopusdeploy.com/discussions/problems/505-all-binaries-from-tfs-build-in-nuget-package

If you want to know all about setting up TFS build for Octopus deploy, check this great walkthrough: http://octopusdeploy.com/blog/automated-deployment-with-tfspreview-octopack-myget

Happy Coding!

From ‘A-ha’ to ‘Ka-Ching’ with Sound Of Data

This post will be posted here and on the site of Sound of Data as well (http://soundofdata.nl/en/nieuws)

As of February 25th I started as a Senior Developer at Sound of Data. For those who do not know me, I’ll briefly introduce myself.

I am 37 years old and I live in Goedereede-Havenhoofd. (That’s here). Writing code has always been a hobby and 14 years ago I managed to turn my hobby into work and I’ve been hobbying ever since.

After 7.5 years working for TellUs, a leader in online (sales) lead generation, it was time for a change. I was lucky to be contacted by Sound of Data because of my affinity with CQRS and Event Sourcing.

Their entire platform has been built on this architectural pattern, and they could do with an extra senior developer. I soon learned that their implementation of CQRS & ES is okay, but not yet fully complete. I hope to be able to lend a hand in completing it. Then we can enjoy all the benefits of this pattern.

This isn’t my first priority though. I saw that SOD has some issues when it comes to deployment, so I made it my mission to get some Application Lifecycle Management in place and take the first steps towards Continuous Delivery. The idea of this practice is to make the time between ‘A-ha’ (the idea) and ‘Ka-ching’ (the release to market) as short as possible by automating and standardizing releases. This will help us bring our customers closer to their customers, and bring us one step closer to world domination in that area.


Happy coding!

Quick install of tools using Chocolatey

I got my new laptop today… decided to spend an hour or so to get an easy install working. Using Chocolatey (http://chocolatey.org) that should be easy.

It is, but it is not straightforward. You can’t just create a simple batch file like this:

cinst notepadplusplus

cinst fiddler

The command window will exit after installing Notepad++. A quick search revealed what I should have realized up front: Chocolatey uses NuGet, and therefore we can use a local packages.config file to get and install all packages. So now my script looks like this:

::Ensure we have elevated permissions
@reg add HKLM\Software\Microsoft\DevDiv\b3d680166a14e50a8c8e2ed060d8d90 /v Elevated /t REG_DWORD /d 1 /f > nul 2>&1
@if /i "%errorlevel%"=="1" echo Error: elevation required. &exit /b 740
@reg delete HKLM\Software\Microsoft\DevDiv\b3d680166a14e50a8c8e2ed060d8d90 /va /f > nul 2>&1

::Install Chocolatey
@powershell -NoProfile -ExecutionPolicy unrestricted -Command "iex ((new-object net.webclient).DownloadString('http://chocolatey.org/install.ps1'))" && SET PATH=%PATH%;%systemdrive%\chocolatey\bin

::Start installing packages
cinst packages.config

And here is the contents of my packages.config:

<?xml version="1.0" encoding="utf-8"?>
<packages>
    <package id="VirtualCloneDrive" />
    <package id="notepadplusplus" />
    <package id="FoxitReader" />
    <package id="imgburn" />
    <package id="7zip" />
    <package id="ilspy" />
    <package id="tortoisegit" />
    <package id="tortoisesvn" />
    <package id="tortoisehg" />
    <package id="expresso" />
    <package id="virtualbox" />
    <package id="KeePass" />
    <package id="Paint.NET" />
    <package id="rabbitmq" />
    <package id="steam" />
    <package id="vlc" />
    <package id="fiddler" />
    <package id="baretail" />
    <package id="linqpad4" />
    <package id="tweetdeck" />
    <package id="teamviewer" />
    <package id="Teamspeak3" />
    <package id="skype" />
    <package id="SkyDrive" />
    <package id="ransack" />
</packages>

So I’m quickly set up to do some happy coding!