Thursday, October 17, 2019

Football Pontoon Azure Architecture

As you may have seen from my recent posts on LinkedIn, I have been creating a new website called FootballPontoon.  It is a game you can play with your friends or work colleagues where each player picks a unique team from the Football League, pays a weekly ante and the first team to get to 21 wins the cash.  At that point, you all go back to zero and the next round starts.

It is a good game to play and can even attract people who are not into football, as it is pretty much set and forget.  It was a game my wife's work played, and I was always interested in her team's performance.  After hearing that it was managed manually (yep, someone has to manually enter the scores for each team into a spreadsheet every week!), I thought that it could be done better.

So I created something pretty good which used Google Sheets to auto download results, calculate scores and then used Google Scripts to send out weekly updates and check for winners.

Look at that formula!

This worked really well and meant I had pretty much zero work to do each week.

During my recent journey to pass my AZ-300 and AZ-301 exams, I did wonder whether, if I had a proper application to develop, it would be easier to learn lots of the Azure technologies and how they connect together.  Following walkthroughs and tutorials is fine, but sometimes it can feel as though you are blindly following instructions without challenging why certain things are done.

Now, don't get me wrong, Microsoft's documentation is an absolute treasure trove and in the last few years has seriously ramped up in quality and quantity.

My background is primarily in Operations and End User Compute, so software development is very new to me.  I decided to re-engineer what I had in Google and do it within Azure.  And this is what I ended up with.

Current Scores

Previous Round

The architecture is as follows.

Azure Automation
So I am using Azure Automation runbooks for some activities.  I could have used Azure Functions, but I am more comfortable with PowerShell.  I have three runbooks: one that downloads the latest score information and updates my SQL database, one that checks each night whether a team has won and, if so, creates the new round, and lastly a runbook which posts a tweet of the latest scores via If This Then That (IFTTT).  It also uses a service called ScreenShot Machine.  This takes a picture of the current scores table on the website and adds it as an image to the tweet.
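The tweeting runbook boils down to two web calls.  As a rough sketch only: the event name, API keys and site URL below are all placeholders, and the ScreenShot Machine parameter names are assumptions on my part.

```powershell
# Hypothetical sketch of the tweet runbook - keys, event name and URL are placeholders
$siteUrl = 'https://example-pontoon-site'   # placeholder for the scores page
$shotUrl = "https://api.screenshotmachine.com/?key=$smKey&url=$siteUrl"   # parameter names assumed

# IFTTT Webhooks accepts up to three values; the applet maps them into the tweet
$body = @{ value1 = 'Latest scores'; value2 = $shotUrl } | ConvertTo-Json
Invoke-RestMethod -Method Post -ContentType 'application/json' `
    -Uri "https://maker.ifttt.com/trigger/pontoon_scores/with/key/$iftttKey" -Body $body
```

The nice thing about this split is that the runbook never touches the Twitter API directly; IFTTT handles authentication and posting.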

The runbook which checks the latest scores is triggered by a Logic App.  The reason for this is that a standard Azure Automation schedule can run at most once an hour.  During the periods where there are lots of games (Saturday 3pm-5pm) I wanted the website to be updated much more frequently.  Logic Apps give you this flexibility, so I have it initiated every 5 minutes during the busy period and every 8 hours otherwise.

I have a basic SQL database with 5 tables in total (rounds, currentscores, previousrounds, teams and matches).  I won't go into too much detail about the relationships, but I will say that the DB design was the most important step in creating this.  I spoke to a friend of mine who is a SQL expert (Daniel O'Reilly) and he told me to spend some time to map it all out up front.  This certainly helped out a lot further down the line.
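To give a flavour of what those tables might look like: the column names below are my guesses, not the actual schema, and the server and database names are placeholders.

```powershell
# Hypothetical sketch of part of the schema - names are illustrative only
$schema = @"
CREATE TABLE teams  (TeamId INT PRIMARY KEY, TeamName NVARCHAR(50));
CREATE TABLE rounds (RoundId INT PRIMARY KEY, StartDate DATE, EndDate DATE NULL);
CREATE TABLE currentscores (
    TeamId  INT REFERENCES teams(TeamId),
    RoundId INT REFERENCES rounds(RoundId),
    Score   INT
);
"@
Invoke-Sqlcmd -ServerInstance 'myserver.database.windows.net' `
    -Database 'FootballPontoon' -Query $schema
```

Getting the foreign keys right between rounds and scores up front is exactly the kind of thing that saves pain later.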

I created a basic Azure WebApp in C# to display the information from the SQL database.  I am using DevOps as source control and have configured Continuous Integration to automatically build a new website on newly pushed code.  The following PluralSight course was really useful on getting me up to speed.

ASP Fundamentals

I am using Cloudflare for DNS and HTTPS for the new site.  In Azure, you can add custom domain names and SSL/TLS for WebApps, but only at certain tiers.  I am using my VS subscription credits for this service and didn't fancy spending £50 a month just to support a custom domain name with TLS.  Cloudflare gives you this capability on their free tier and it is something I use for this blog.

Other bits
I used Azure Bastion quite a lot to connect to a developer VM with Visual Studio and SSMS.  This was really useful as it meant I could get access to my tools from whatever machine and connect to the VM from within a browser (no fancy port opening needed!)  Azure Bastion costs about 7p an hour regardless of whether you are using it, and it is not possible to turn it off.  For this reason I would delete it and use Azure RM templates to recreate it whenever I need it.  This would save a lot of money on my VS subscription.
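The delete-and-redeploy pattern could look something like this sketch, assuming you have exported the Bastion host to a template beforehand (resource group and resource names below are placeholders):

```powershell
# Tear down the Bastion host when you are done for the day
Remove-AzResource -ResourceGroupName 'rg-lab' `
    -ResourceType 'Microsoft.Network/bastionHosts' `
    -ResourceName 'bastion-lab' -Force

# ...and redeploy it from the exported template when needed again
New-AzResourceGroupDeployment -ResourceGroupName 'rg-lab' -TemplateFile .\bastion.json
```

Deployment takes a few minutes, but at roughly 7p an hour the saving adds up quickly on a credit-limited subscription.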

Next Steps
I would like to investigate the possibility of people being able to create their own leagues.  This does pose a number of challenges.  I haven't had to do anything on authentication at all, and data privacy would be a big concern.  I don't want to hold user information until I am more confident with C#.

I may look at Microsoft Flow as an alternative for IFTTT going forward.

I want to update the site to .NET Core 3.0 at some point, but at this moment in time, it is not supported on Azure App Service.

Let me know what you think and if you have any ideas.  This process has been really useful to learn new things and will help me going forward when using new features.

Thursday, September 12, 2019

Citrix Future of Work Tour 2019 (London)

I visited etc. Venues in Bishopsgate for the Citrix Future of Work Tour 2019 on September 11 2019.  There were three main takeaways: consumer-like experiences, analytics, and SD-WAN.

Packed house!

Consumer-like experiences
During the keynote, they mentioned how consumer applications like Facebook are very easy to use: data is surfaced to you in a simple way and interactions are straightforward.  Employee tools, however, are usually quite cumbersome and difficult to use.

This product came out of the acquisition of Sapho late last year.

This provides the employee with a feed built up from connections to various SaaS applications.  This feed also allows some basic interactions e.g. booking leave in WorkDay or approving expenses in SAP.  It is built on their cloud offering and includes AI and ML to increase productivity e.g. if you as a manager always approve expenses under £50, it would start to automate this exercise.  A challenge was raised about employees “gaming” the system, but Citrix’s response was that the same AI/ML would be able to identify staff who were always putting in £49.99 requests and force the manager to review.

This isn’t very “new” news as it was talked about at Synergy, but the demos were pretty good.  They launched WorkDay through a Citrix secure web app to book some leave and it took 3-4 minutes end to end.  Using Intelligent Workspace it took about 20 seconds.

There are a bunch of pre-built integrations out there for popular SaaS apps and a Microapp builder which is a low/no code solution for building your own integrations.

Integration of these app notifications into Teams or Slack is supported too.

Again, Citrix are betting big on their cloud offerings.  By using Citrix Cloud with analytics, they are able to make more intelligent decisions around security for users, such as checking risky sign-ins or other unusual patterns.  They can also take feeds from other products such as Microsoft Azure AD to provide better context and make better decisions, and they can output their data to third-party SIEM solutions like Splunk or Azure Sentinel.

The analytics piece covers performance too.  By leveraging Citrix Cloud, you can get a much better breakdown of the user experience.  It looks like the Director you may have on-premises but includes much more fine-grained information about the user’s session.  With their ML, they can see trends too.

It felt like Citrix were trying to flog SD-WANs to anyone that would listen.  They did a session on optimising Office365 and provided some stats on how using SD-WAN could increase MS Word launch speed by 55%.  Sounds great…how do they do that?  Well, in essence they are using SD-WAN to breakout directly to the internet/Office365 from branch offices rather than going through your datacentre.

They also talked about Citrix Intelligent Traffic Management, again using Citrix’s Cloud to make better decisions on network routing and performance.  They are collecting 15 million data points every day which can help them route traffic in the most expeditious way.  We didn’t get a demo of this, but they provided the following links which might be worth looking at.

Bonus: Secret Demo Room
There was a secret demo room which showed three products that Citrix engineers had created in their 20% free time.  These may not see the light of day and they were keen to ask for feedback from customers.

I won't ruin the surprise if you are due to go to a future event, but I recommend registering to see them if you can. If you can't get there and want to know more, ping me on LinkedIn! 

Other notes
There was a session on provisioning Windows 10 in Azure which I caught the end of.  Citrix are suggesting their USP over WVD (and more generally) is that you can use Workspace to access any of these services from one single entry point.  This makes sense, but it also requires you to buy into Citrix lock, stock and barrel.  Something I am sure their Sales team will be happy to discuss with you!  Other than that, the provisioning plane in Citrix Cloud looked similar to Workspot.

The key takeaway was that Citrix really want you to become a Citrix Cloud customer.  This obviously provides them with a better licencing model which is sustainable, as most tech companies are going down the subscription route, and as many of the offerings they are bringing out have some ML or AI baked in, it is difficult or impossible to backport this to on-prem.  Other than that, Citrix want you to buy a bunch of SD-WANs…to connect to theirs and other cloud services.

It was a well put together day and if you have the opportunity to attend in a different region, I recommend it.

Saturday, February 23, 2019

Dynamics 365 USD Performance Testing (Part 3)

This is the 3rd part of a series of blog posts which cover a set of performance testing scripts I have used to test Dynamics 365 CE with Unified Service Desk on XenApp.

This post focuses on the controller script which is used to launch user sessions.  Below is a version of the script I used.  It has some comments which cover what each section does.
One of the fundamental issues I had was how to programmatically launch Citrix sessions from PowerShell.  Fortunately LoginVSI comes with an executable called SFConnect.exe.  This is a really simple application which makes it easy to launch Citrix sessions.

If you are not lucky enough to have access to SFConnect.exe, then you could use something like this.

In the next part I will record a video of this in action in my lab and in the final blog I will look at areas where this could potentially be improved.

I would love to get your feedback on the blog so far, so feel free to make some comments.

To see part 2, click here

Friday, February 01, 2019

Dynamics 365 USD Performance Testing (Part 1)

As discussed in my introduction, this set of blog posts is to show what can be done to performance test Unified Service Desk and Dynamics 365 CE.

These tests are designed to identify the client-side impact of using USD.  It can be a very CPU-intensive program and, depending on your configuration in CE, some of the pages can take a while to load and/or use up a lot of client-side resource.

Below is a screenshot of the hardware requirements for USD

Microsoft provide some hardware recommendations, but when running this in a Server Based Computing environment such as Citrix XenApp, it is unlikely you are going to give every user 2 dedicated CPUs without some pooling.

The other thing to bear in mind is that USD uses embedded iexplore.exe processes for its frames.  We all know that IE sucks, specifically with JavaScript and DOM Storage performance, which Dynamics uses a lot of.

Microsoft know this and if they could burn IE to the ground I am sure they would, but in the meantime their Dev teams have been hard at work and have a public preview of leveraging Edge instead of IE.

Back to my script.  It was designed to see what the sweet spot ratio of user:compute is.

This post is more related to the workload and tackles the issues you may face when automating test steps.  I am assuming you have some PowerShell knowledge, but if you have any questions, feel free to comment.

-How to move the mouse cursor
So how do you move the cursor around the screen?  In automated testing it is usually better to use keyboard strokes, but some windows and applications do not support them or are unreliable.  If you do use mouse clicks, it is imperative that tests are run at the same resolution.

First of all you need to ensure that the WinForms assembly is added to your script.  Just place the following at the top of your script.

Add-Type -AssemblyName System.Windows.Forms

After that, simply use the following line to move the mouse cursor to the correct place based on X,Y co-ordinates.

[Windows.Forms.Cursor]::Position = "100,422"

You could use something like this to find out the position of the mouse cursor when you are configuring your script.
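The original snippet was linked rather than included, but the idea is simple: poll the cursor position while you hover over the screen elements you want to click.  A minimal sketch:

```powershell
Add-Type -AssemblyName System.Windows.Forms

# Print the current cursor position twice a second while you line up your targets
while ($true) {
    $p = [System.Windows.Forms.Cursor]::Position
    Write-Host "$($p.X),$($p.Y)"
    Start-Sleep -Milliseconds 500
}
```

Run this in a separate console, note down the coordinates as you hover, then Ctrl+C out of it.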

-How to click the mouse button
OK, so I have the cursor in the correct place, but how do I imitate a click?  Well, I won't pretend to have come up with this myself, but I found this which did the trick nicely.

You can create a simple function and just call it at certain points in your script. Just put the code from the link above at the top of your script.
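The linked code is along these lines: a P/Invoke wrapper around the Win32 mouse_event API, sending a button-down followed by a button-up.  This is my sketch of the technique rather than the exact code from the link.

```powershell
# Expose the Win32 mouse_event API to PowerShell
Add-Type -MemberDefinition @'
[DllImport("user32.dll")]
public static extern void mouse_event(uint dwFlags, uint dx, uint dy, uint dwData, uint dwExtraInfo);
'@ -Name Mouse -Namespace Win32

function Click-LeftMouse {
    # 0x02 = left button down, 0x04 = left button up
    [Win32.Mouse]::mouse_event(0x02, 0, 0, 0, 0)
    [Win32.Mouse]::mouse_event(0x04, 0, 0, 0, 0)
}
```

Move the cursor with [Windows.Forms.Cursor]::Position first, then call Click-LeftMouse.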

-How to type text
There is some guidance for using or creating keyboard shortcuts in USD but I found it required config changes within the CE environment which I wasn't keen to tinker with.

If you do have keyboard shortcuts or do need to actually type something, you can use SendKeys by creating a COM object
$wshell = New-Object -ComObject WScript.Shell

And use this every time you need to type something

You can find a list of special keys here.
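The typing call itself is just the SendKeys method on that COM object.  For example (the text being typed is illustrative):

```powershell
$wshell = New-Object -ComObject WScript.Shell

$wshell.SendKeys('user.name@contoso.com')   # plain text is typed as-is
$wshell.SendKeys('{TAB}')                   # special keys go in braces
$wshell.SendKeys('~')                       # ~ is shorthand for Enter
```

Make sure the target window has focus before sending keystrokes, otherwise they end up somewhere unexpected.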

-How to click a window button..when you don't know where it will appear
This one drove me crazy! 

Imagine a window pops up with a button you need to click OK on.  Now imagine that every time this window pops up, it is in a different location on the screen.  A testing nightmare.

I used a couple of techniques to resolve this issue.  First of all, I needed to know when the window appeared.  

The process UnifiedServiceDesk.exe was running, but when the call would appear, the window title would be blank.  I had a loop that would do nothing until $idle was equal to 0.
$idle = Get-Process | where {$_.MainWindowTitle -eq "Unified Service Desk for Microsoft Dynamics 365"}

After this I would get the coordinates using the Get-Window PowerShell script, which can be found here.  I would then subtract a certain number of pixels to determine where the button I wanted to click was!
$accept = get-process unifiedservicedesk | where {$_.CPU} |Get-Window | select bottomright -expandproperty bottomright

$answerx = $accept.x - 50

$answery = $accept.y - 50

[Windows.Forms.Cursor]::position = "$answerx,$answery"

-How to get application focus on a Window
This was relatively straightforward.  Again, I am against reinventing the wheel, so I found this link which works really well.

You can create a simple function and just call it at certain points in your script. Just put the code from the link above at the top of your script.

-How to present feedback of the script progress
This was an interesting one.  I could easily have used Write-Host or Write-Progress, but this information would have been hidden behind USD.  Not great when you want to monitor the testing or troubleshoot what is occurring.

The answer is to create little GUI using WinForms.  As you have already loaded the assembly for mouse clicks, you just need to declare the form and labels.  You could get fancy and create this in Visual Studio, but I just created it in ISE and tested it locally.

I created three labels.  One which says which task is being completed, one which says how many entries of a certain event are in the USD log file and one which says how many are required before moving onto the next step.
Add-Type -AssemblyName System.Windows.Forms
# General Form options
$form = New-Object Windows.Forms.Form
$form.Location = New-Object System.Drawing.Point(10,800)
$form.Size = New-Object System.Drawing.Size(250,250)
$form.Text = "Script Feedback"
$form.StartPosition = "Manual"
$form.MinimizeBox = $False
$form.MaximizeBox = $False
$form.AutoSize = $True
$form.AutoSizeMode = "GrowAndShrink"
$form.FormBorderStyle = "None"

$labelTask = New-Object Windows.Forms.Label
$labelTask.Location = New-Object Drawing.Point 0,0
$labelTask.Size = New-Object Drawing.Size 100,25
$labelTask.Text = "Script Feedback"

$labelTotalNeeded = New-Object Windows.Forms.Label
$labelTotalNeeded.Location = New-Object Drawing.Point 0,50
$labelTotalNeeded.Size = New-Object Drawing.Size 100,25
$labelTotalNeeded.Text = "Total Needed"

$labelTotalLog = New-Object Windows.Forms.Label
$labelTotalLog.Location = New-Object Drawing.Point 150,50
$labelTotalLog.Size = New-Object Drawing.Size 100,25
$labelTotalLog.Text = "Total in Log"

# Add the controls to the Form
$form.Controls.Add($labelTask)
$form.Controls.Add($labelTotalNeeded)
$form.Controls.Add($labelTotalLog)
$form.Topmost = $True

# Display the dialog
$form.Show() | Out-Null

This can be whatever you want.  The key part here is to ensure it appears above all other windows.  This is achieved with $form.Topmost = $true.  You should ensure that when you update the label text, you refresh the form.

$labelTask.text = "waiting for call"

The result is a little form in the bottom right hand corner of your screen which provides feedback on your script's progress.  This will sit on top of all other windows too.

-How to answer an offered Skype Call
This one is amusing.  Aspect Unified Agent Desktop establishes a voice path with the agent which essentially ensures that they are engaged and call delivery is quick.  When logging into UAD, it will dial the user in Skype.  As you may know, Skype presents a toast pop up in the bottom right hand corner of the screen.  It is really difficult to programmatically interact with this toast pop up or have auto answer.

My answer was to simply wait 5 seconds after UAD had logged in, move the mouse to the bottom right hand corner and click the left button.

-How to ensure a proper audio/mic device is remoted into Citrix session
You cannot answer a Skype call unless you have a proper Microphone device on your computer.  This can be a problem if you are using XenApp through RDP or something similar.

I used Virtual Audio Cable to mimic the presence of a mic.  This can be used programmatically, but I was only interested in bypassing the Skype check.

-How to check whether a step has finished successfully
This was the most difficult issue.  In my first version of the script, I simply used Start-Sleep only.  This worked most of the time, but in my scenario I was trying to see the impact of running more users on my XenApp server.  This meant that the more congestion there was, the longer the sleep times needed to be.

I next looked at monitoring the CPU usage of USD, looping round to see when the usage went down to 0% for a sustained period of time.  This fixed the above issue, but it introduced a little CPU overhead.
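That CPU-polling approach could be sketched like this; the counter instance name and the idle thresholds below are assumptions, not the original code.

```powershell
# Wait until the USD process has been near-idle for 5 consecutive seconds
$idle = 0
do {
    $cpu = (Get-Counter '\Process(UnifiedServiceDesk)\% Processor Time' `
        -ErrorAction SilentlyContinue).CounterSamples[0].CookedValue
    if ($cpu -lt 1) { $idle++ } else { $idle = 0 }
    Start-Sleep -Seconds 1
} until ($idle -ge 5)
```

The Get-Counter call itself is part of the overhead mentioned above, which is why the log-based approach below won out.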

I wanted something cleverer and more robust.

Finally I looked to the USD log files that sit in %appdata%\Microsoft\Microsoft Dynamics® 365 Unified Service Desk\ (e.g. UnifiedServiceDesk-2019-01-19.log).

I would interrogate the log file after each step to see if the pages had finished loading and then move onto the next step.

I created a function called Check-Status

Function Check-Status {
    param($task, $vari, $repeat)
    $checker = 0
    $vari = $vari + $repeat
    Do {
        $log = Select-String -Path "$logdir\UnifiedServiceDesk-*.log" -Pattern $task
        $count = $log.Count
        Write-Host "waiting for task to complete"
        $checker++
        Start-Sleep -Milliseconds 500
        $global:labelTotalNeeded.Text = "Total Needed $vari"
        $global:labelTotalLog.Text = "Total in Log $count"
        if ($checker -gt 200) { $count = $vari + 1 }
    } Until ($count -gt $vari)
}

I know that when USD loads for the first time, it loads a dashboard. This loads 7 CE pages before it has finished.  So it will call the function and the function will loop round once and repeat 6 more times.  It will provide feedback on the GUI form I created.  

If something goes wrong and 100 seconds pass by (200 x 500ms), the loop exits.  On error, I called a separate function that would email me and exit the script.
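That failure path might look something like this hypothetical function; the SMTP server and addresses are placeholders.

```powershell
# Hypothetical email-and-bail-out function - server and addresses are placeholders
Function Send-FailureMail {
    param($task)
    Send-MailMessage -SmtpServer 'smtp.contoso.com' `
        -From 'usdtests@contoso.com' -To 'me@contoso.com' `
        -Subject "USD test failed on $env:COMPUTERNAME" `
        -Body "Timed out waiting for log entry: $task"
    exit 1
}
```

Exiting with a non-zero code also lets the controller script spot that a session died rather than completed.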

Check-Status -task "Name=PageLoadComplete Action= App= Data= Condition= ConditionResult=Success Result=" -vari $PGLOADCOMPLETEcount -repeat 6

This worked really well for most things.  There were a couple of steps which didn't have a definitive log entry item or they were not reliable.

For these I created a more generic function called Check-LogIdle.  This would check the total amount of lines in the log file.  If this number remained constant for a total of 10 iterations (of a second or so each) it would move onto the next step.

Function Check-LogIdle {
    $beforecount = 0
    $aftercount = 0
    $idlecount = 0
    Do {
        Start-Sleep 1
        $beforecount = $aftercount
        $log = Get-ChildItem "$logdir\UnifiedServiceDesk-*.log" | Sort-Object LastWriteTime | Select-Object -Last 1 | Get-Content
        $aftercount = $log.Count

        if ($beforecount -eq $aftercount) {
            $idlecount += 1
            Write-Host $idlecount
        }
        Else {
            $idlecount = 0
            Write-Host $idlecount
        }
    } Until ($idlecount -eq 10)
}


-How to measure time to complete tasks
Using the Check-Status function, but placing it inside a Measure-Command block, allows me to measure the time taken to complete a task.  Afterwards I would collect some other information like the username and the time, and then append it to a CSV file.

$time = Measure-Command {
   Check-Status -task "Name=PageLoadComplete Action= App= Data= Condition= ConditionResult=Success Result=" -vari $PGLOADCOMPLETEcount -repeat 6
}

$TimeAdd = New-Object PSObject -Property @{action="ContactLoad";Duration=$time.TotalSeconds;username=$env:username;Time=(Get-Date)}
$TimeAdd | Export-Csv c:\temp\timing.csv -Append -NoTypeInformation

The resultant CSV file looks like the below. 

Hopefully the above shows some good techniques for testing USD.  The key here is profiling the specific tests you wish to complete and gathering information such as mouse clicks and keyboard strokes.  There is no one-size-fits-all approach here unfortunately.

I would love to build a script which you could use to automatically profile a specific scenario, but that is one for another day!

The next blog post in this series will have a sanitised version of the script I use and hopefully a video of it in action.  The last post in the series will cover the controller script which launches these workloads from a central location.

Please leave comments on any of the above.  If there is a better way of completing certain tasks, or other features that could be useful.  Let me know!

Update: here is part 2

Tuesday, January 29, 2019

Dynamics 365 USD Performance Testing (Introduction)

Recently I had to perform some performance testing for a Citrix XenApp environment as part of an upcoming upgrade to our Dynamics 365 Customer Engagement application.  We wanted to ensure that this CE upgrade didn't affect the user density on our Citrix servers.

This workload is for our Customer Service Advisors who use Aspect Unified Agent Desktop (UAD) to receive calls integrated into Microsoft Unified Service Desk (USD) to present caller information and a method of completing workflows through Dynamics 365 CE.

We would usually complete this task with LoginVSI.  It is a tool we have available to us and have used in the past.  We even have a basic custom workload for the above scenario.

We were having a problem with a lapsing licence for the product, and I also wanted a bit more control over the workload and some of its outputs.

So the next best option was creating a set of PowerShell Scripts to complete these tasks.

At a high level, I had a single launcher Virtual Machine which would run a controller PowerShell Script.  This would set up the XenApp servers to collect data and launch the automated sessions against this XenApp server.

Then I had a separate workload PowerShell script that would execute upon login, prepare the session and complete a repeatable set of actions.  It would also check when the test was finished and gracefully logoff.

The next couple of blog posts will cover how this was achieved in a bit more detail.  They will also discuss specific challenges, whether they were workload or controller based, and the fixes or workarounds that I used to overcome them.

-How to start and stop perfmon data collector sets
-How to launch Citrix sessions from command line
-How to auto start workload script
-How to end the workload gracefully
-How to initiate SIP calls from command line without having to RDP to another server

-How to move the mouse cursor
-How to click the mouse button
-How to type text
-How to click a window button..when you don't know where it will appear
-How to get application focus on a Window
-How to present feedback of the script progress
-How to answer an offered Skype Call
-How to ensure a proper audio/mic device is remoted into ICA session
-How to check whether a step has finished successfully
-How to time how long tasks took to complete

Some of the above challenges are relatively easy to overcome with a bit of Googling (others not so much), but the aim of this is to show you what is possible without having third party testing software like VSI or Selenium etc.

I am also hoping for extra suggestions from those more learned than me in other areas.  My background is Citrix and Infrastructure and if changes can be made in CE or USD configuration to make these scripts work better, that would be awesome.

Here is the next blog in the series

Wednesday, January 02, 2019

Email report of costly activities in Azure

Azure seemingly has endless possibilities and options.  Sometimes it is difficult to see the wood for the trees.  The capabilities for techies are mouthwatering, but the cost control (or lack thereof) is a constant headache for managers or those who hold the purse strings.

It is very easy to accumulate spend if resources are not being managed in the correct way e.g. keeping VMs on when they are not required, having resources set to the incorrect scale or resources being created for testing and never being deleted.

There are some quite in-depth solutions to cost control in Azure, but at an organisation which isn't cloud native, you need to learn to walk before you can run.  Quota management for Enterprise Agreements can help, but this will just report on issues after the problem has occurred.

Whilst transitioning support for some Azure subscriptions to an internal IT team, an IT manager asked me whether there was a way to simply see the resources created, updated or started in a time period.  He could use this to check where fluctuations in costs may start from.

This seemed a simple ask, but there wasn't a simple answer.  You can use Activity Logs and filters to only see Administrative tasks in the portal, but there are just so many of them, it is difficult to produce something useful.  You cannot even use the proper Operation name, only the localized string value.

Only 50....

Here is a list of all of the operations which can show up in Azure

Just a snippet

To go through this list and pick every item which might incur cost was way too much work.  Instead, I decided to look for activities which have the following keywords:

Start (as in starting a VM)
Update (changing a VM or PaaS Scale)
Create (as in creation of new VM)
Deallocate (turning off a VM)
Change (similar to update, some resources use Change and other Update)
Write (again similar to Change and Update)

I decided to go down the PowerShell route and use Get-AzureRmLog.

I created a PowerShell Script which would produce this in a CSV and email it to my colleague on a nightly basis for events in the last 24 hours.  There is the potential that this would create some false positives, but having it in CSV means they can filter what they want.
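The core of the script is just filtering the last 24 hours of activity log entries against that keyword list.  A minimal sketch of the idea, assuming the older AzureRM module (the Az module equivalent is Get-AzLog); the output path is a placeholder:

```powershell
# Keyword pattern for operations that are likely to incur or change cost
$pattern = 'Start|Update|Create|Deallocate|Change|Write'

Get-AzureRmLog -StartTime (Get-Date).AddHours(-24) |
    Where-Object { $_.OperationName.LocalizedValue -match $pattern } |
    Select-Object EventTimestamp, Caller, ResourceGroupName,
        @{ n = 'Operation'; e = { $_.OperationName.LocalizedValue } } |
    Export-Csv C:\temp\AzureActivity.csv -NoTypeInformation
```

Matching on the localized operation name is crude and will catch some false positives, but as noted above, the CSV makes those easy to filter out.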

To take it a step further and remove the dependence on having a scheduled task running on my PC every night, I decided to look at an Azure PowerShell runbook.

This was also my first time running any PowerShell scripts from within Azure.  This requires creating an Automation Account and then creating a PowerShell runbook.  I had some problems getting my cmdlets to work, so I had to ensure they were all up to date.

Below is the final script. If you want to re-purpose it, you will need an SMTP server.  You will need to ensure the SMTP credentials are stored in your Automation Account.  This allows you to keep your credentials private by not having them in plain text.  Of course, plain text will work, but don't do that....seriously!