
Evaluation Environments

Almost everyone is familiar with the concept of a QA environment, but I want to bring up the need for what I'm going to call Evaluation environments. (If there's already an industry term for this that I'm not familiar with, let me know at https://twitter.com/jefftrotman.)

I'm going to define an Evaluation environment as a place where the design of a new feature is evaluated. (QA is where the implementation, or execution, of that design is verified.)

With QA, the expectation is that the feature will continue downstream, eventually ending up in production. We may have to correct some errors first, but the design has already been accepted and is expected to flow into production at some point.

Evaluation is a place to test a new feature’s design. Things often make sense in theory but once you see how they actually work, it may be obvious that the design is flawed and needs to be re-worked. In some cases the feature will just be abandoned.

I’m going to talk more about “environments” in a future post, but the short version might be “some kind of web server where the code can be deployed so that someone who’s not a developer can interact with the new feature”. An Evaluation environment should be considered “disposable”.

Cloud

Alternative build agents for Azure Pipelines

Non-technical overview

Azure DevOps is a Microsoft service developed prior to their acquisition of GitHub. There's a lot of overlap between the two services, and even though it seems inevitable that GitHub is the future, a lot of organizations are still using Azure DevOps.

Azure Pipelines is a part of Azure DevOps where you can build and test code projects. (When programmers use the word “build”, they mean to convert code written by a human programmer into something that can run on a computer.) “CI/CD” has recently become a buzzword in the software development community. It stands for Continuous Integration/Continuous Deployment. Azure Pipelines is the part of Azure DevOps that provides CI/CD services.

The "building software" part of CI/CD requires build agents. A build agent is a computer used to build software, but it's a different computer from the one a human programmer uses to write code. Azure Pipelines maintains a pool of VMs that can serve as your build agent. When a build process is triggered, one of these is temporarily allocated to your process. It does your build and then goes back to the pool. (CI/CD is all about automating processes. All of this happens "auto-magically" as directed by a script.)
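To make the "directed by a script" part concrete, here's a minimal sketch of what such a pipeline script (azure-pipelines.yml) might look like; the branch name and build commands are placeholders for whatever your project actually needs:

trigger:
  - main                                  # run whenever code is pushed to the main branch

pool:
  vmImage: ubuntu-latest                  # ask for a Microsoft-hosted build agent

steps:
  - script: npm install && npm run build  # placeholder build commands for your project
    displayName: Build the project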

The VMs provided by Microsoft will meet most needs, but not all. I'm working on a project for a client where the amount of RAM provided in the Microsoft-hosted agents wasn't enough, and the build process was failing because it ran out of memory.

When the Microsoft-hosted agents won't meet your needs, you can plug your own "Self-hosted agents" into the process. These can be physical or virtual machines that you set up, configure, and maintain, or you can take advantage of Azure virtual machine scale set agents. This lets you specify a "size" of Azure VM (the processor, amount of RAM, amount of storage, etc.) and tell Azure DevOps to use that for your build agents.

This option is more expensive and requires more administration on your part, but if you have needs beyond what the Microsoft-hosted agents can handle – it’s nice to have this option. The rest of this post will recap some of the technical issues I ran into setting this up recently.

Technical details

This is a good starting point for background on Azure Pipelines Agents. If you follow the link in the Microsoft-hosted agents section, you can find out about the specific hardware allocated to these VMs. As of this writing, the amount of RAM is 7GB. This client has a particularly large Angular project that required more RAM than that.

I didn’t want to actually “self-host” in a physical sense, so I went with the Azure virtual machine scale set option. Here is the page with the specific instructions. There’s a lot of information here, but basically it consisted of three steps:

  1. In the Azure portal, create the scale set.
  2. In the Azure DevOps portal, set up an agent pool pointing to what you created in #1.
  3. Change your pipelines to use the new agent pool.

Create the Scale Set in Azure

The instructions walk you through using the Azure Cloud Shell to create the scale set. The az vmss create statement is the one to focus on, and the vm-sku setting is probably the most important part of it: this is what specifies the "size" of the VM. Start here to determine which size will work best for your needs. (Don't agonize over this too long at the beginning. It's pretty simple to change this setting on your scale set.) Make sure to pay attention to the usage price of your chosen size. (Here's the pricing table for Linux VMs.)
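For reference, the command the instructions walk you through looks roughly like this. The resource group, names, and size below are placeholders; check the current Microsoft instructions for the exact options they recommend:

az vmss create \
  --resource-group build-agents-rg \
  --name build-agents-vmss \
  --image Ubuntu2204 \
  --vm-sku Standard_D4s_v4 \
  --instance-count 2 \
  --authentication-type SSH \
  --generate-ssh-keys \
  --disable-overprovision \
  --upgrade-policy-mode manual \
  --single-placement-group false \
  --platform-fault-domain-count 1 \
  --load-balancer ""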

The instance-count setting only specifies the number of instances initially created. Once you "hand control of the set over to Azure DevOps" (in step 2), Azure DevOps will override this instance count.

Once you've successfully run the az vmss create statement, you're done with step 1. Azure DevOps will handle everything else (including installing required software on the VMs).

Create the Agent Pool in Azure DevOps

The instructions in the article above are pretty good. Here are some notes of things I ran into:

  • You will need to be an Agent Pool Administrator to create the new Agent Pool. If you are missing this permission, you will get to the end of the form and see an error when you submit saying you need "Manage permission". If that happens, have the owner of the organization use the Security button on the Agent Pools page (top right, next to the blue Add Pool button) to grant the permission to you.
  • The Maximum number of virtual machines in the scale set setting overrides the instance-count setting from step 1.

Change your Pipelines to use the new Agent Pool

Here is the relevant part of the Azure Pipelines YAML schema. Basically, if you are using a Microsoft-hosted agent pool, you only specify the vmImage value.

If you want to use a self-hosted agent pool, you need to specify the name. This is the name you provided in step 2 and the name that shows up in the list of Agent Pools in Azure DevOps.

In the case shown above, your YAML would contain this:

pool: Self Hosted Scale Set pool
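Side by side, the two forms look roughly like this (the image name and pool name here are just examples):

# Microsoft-hosted agent pool
pool:
  vmImage: ubuntu-latest

# Self-hosted scale set agent pool (the name you gave it in step 2)
pool:
  name: Self Hosted Scale Set pool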

Final Considerations

The three main settings on the Azure DevOps Agent Pool will probably involve some trial and error, depending on your specific needs:

  • Maximum number of virtual machines in the scale set
  • Number of agents to keep on standby
  • Delay in minutes before deleting the excess idle agents

If you set the Number of agents to keep on standby to 0, you won't be paying for instances to sit idle, but your Pipeline runs will sit in the Queued state for a little while to give Azure time to "spin up" an instance for you. (In my case, this was about 5 minutes.) You can monitor the number (and status) of instances in the Azure portal by finding the Virtual machine scale set and clicking on Instances in the left-side menu. (Hint: if you can't find it, look in the Resource Group you specified in the az vmss create statement.)


Dip your foot into Continuous Deployment (CD)

DevOps is a big deal in software development right now. Continuous Deployment (CD) is a big part of this trend. (Not to be confused with Continuous Integration, or CI.) You can find more technical definitions of Continuous Deployment, but it's really about automating the process of getting new code onto your production servers after it has been built, tested, and approved. The goal is to make this process as automated as possible, because when people have to manually copy files and change configuration settings, mistakes happen, no matter how many checklists you have.

If you are an ASP.NET developer (Framework or Core) and you're using Azure App Services to host your application, your App Service can create your Azure DevOps pipeline for you. What's a pipeline? Here's a good introductory video:

Azure DevOps is the re-branded Visual Studio Team Services (the hosted descendant of Team Foundation Server), and I think it's really good. (It's free to set up an account for 5 users, so if you're already using Azure, why wouldn't you try it?)

In the video, he demonstrates setting this up in the DevOps console. If you start from your Azure App Service console and go to the Deployment Center, it will create a starter pipeline and release for you. (Azure DevOps isn't the only service it works with. I've used the integration with GitHub, and more are listed.)

It just takes a couple of minutes, and you should have a working pipeline. If you've been publishing from Visual Studio on your development computer, you'll want to delete the publishing profiles to remove the temptation of pushing directly. Now that you have the pipeline, always deploy using it.
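For a rough idea of what the generated pipeline amounts to, here's a simplified sketch of a build-and-deploy pipeline for an App Service. The service connection and app names are placeholders, and the pipeline the Deployment Center creates for you will differ in the details:

trigger:
  - main

pool:
  vmImage: windows-latest

steps:
  - task: DotNetCoreCLI@2                        # build and package the application
    inputs:
      command: publish
      arguments: --configuration Release --output $(Build.ArtifactStagingDirectory)

  - task: AzureWebApp@1                          # push the package to the App Service
    inputs:
      azureSubscription: my-service-connection   # placeholder service connection name
      appName: my-app-service                    # placeholder App Service name
      package: $(Build.ArtifactStagingDirectory)/**/*.zip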

Authentication vs. Authorization

These aren’t just “10 dollar words” that both start with the letter “A”. If you have an application, you need to understand the difference between these two things.

Let me try to explain it this way. Not too far from where I live is a semi-famous (infamous?) bar called the Flora-Bama, so named because it sits on the state line between Florida and Alabama. For the purposes of this example, imagine that the drinking age in Florida is 18 (it's not) and that the drinking age in Alabama is 21 (it is). (The Flora-Bama is pretty big, and I think there are bars on both sides of the state line. Let's assume that's true.) In this scenario, if you're 19, you can buy a beer from a bar on the Florida side, but not from one on the Alabama side.

Imagine the doorman out front is checking IDs and putting wristbands on everyone – yellow if you’re 18-20 and green if you’re 21+. That’s authentication. He checked your ID and identified you but didn’t make any decisions about what you could and couldn’t do.

If a yellow-banded 20-year-old walks up to an Alabama-side bar, the bartender there will refuse to serve the patron based on the color of the wristband. This is authorization. The bartender doesn't check the ID again, but just makes an authorization decision based on the previous authentication.

You can (and in many cases probably should) outsource your application’s authentication function. When you see sites let you “Login with Google” or “Login with Facebook”, that’s what’s happening. They are letting Google or Facebook handle the authentication, but the actual site still has to decide what the user is permitted to do.

The simplest ramification of “outsourcing authentication” is – one less password for your site’s users to remember. More importantly, if you aren’t storing passwords – they can’t be stolen if your data is breached. Of course, you’ll need to decide if all of your users are likely to have a Google or Facebook account (or whatever service you choose to trust to authenticate).

WordPress plugin for Google Analytics

I've used WordPress for years. Every time I've gone looking for a plugin to include the tracking code for Google Analytics, what I found was overkill, so I wrote my own.

It adds one field to the General Settings screen to let you enter your Google Analytics Tracking ID. That’s all there is to it.

You can download it from https://wordpress.org/plugins/technicality-google-analytics/ .

Adventures in phone number porting

We were recently helping a client port some numbers from a traditional telecom carrier (Spectrum) to a VOIP carrier. The scheduled time for the transition came and the PBX was configured to receive calls from the new VOIP trunk. We tested calling the phone numbers and they showed up in the PBX as expected, so all seemed OK.

A few days later, the client (a medical imaging center) called and said, "a referring physician is trying to fax something to us and they're saying that our number is out of service." I called the number and got fax tones, so I assumed the doctor's office had just made a mistake.

But – this kept happening. After days of troubleshooting (the client was successfully receiving faxes from other senders all during this time), I was finally able to reproduce the problem calling from one particular phone line. It finally occurred to me that the line I was calling from was a Spectrum line, and we had just ported the number away from Spectrum. We checked with the location that was having trouble faxing our client and, sure enough, they had Spectrum phone lines as well.

Apparently, when Spectrum ported the number out, it worked for the rest of the world, but some routing configuration inside the Spectrum voice network was never updated. If you originated a call from within that network, it still treated the call as "internal" (to the Spectrum network), but there was no longer an active line there. Hence the "this number is out of service" recording.

It took a couple of weeks of working with different people at Spectrum to get this corrected, but they finally did. This definitely falls in the Murphy's Law category ("whatever can go wrong, will go wrong"). I didn't even know this was a thing that could go wrong.

Posted in IT

AWS Load Balancers and HTTPS

I was helping a client with his web server that is hosted in AWS (Amazon Web Services) EC2. He had gotten a certificate to enable HTTPS but it wasn’t working.

AWS offers free certificates through Certificate Manager, but you can't install them directly on an EC2 web server. In this case, he had set up a load balancer in front of the web server, and the Certificate Manager certificate was installed there. This means that when the end user browses to the website, the browser is really talking to the load balancer, and the load balancer is talking to the web server and passing information back and forth.

I made some assumptions about how he had set up the load balancer forwarding, so it took me a while to get my arms around what was going on. I was configuring the Apache web server to do redirects in the .htaccess file. He wanted to force browsers to use HTTPS and wanted to make "www" his "authoritative URL", meaning that if someone typed "domain.com" into their browser, it would redirect them to "www.domain.com". (This is a good idea for SEO. Google doesn't assume that domain.com and www.domain.com are the same website.)

http://domain.com was redirecting perfectly to https://www.domain.com, but http://www.domain.com was not redirecting to https://www.domain.com. I finally realized that the load balancer was forwarding both incoming HTTP and HTTPS traffic to the web server over HTTPS, while communicating back to the browser on whatever protocol the request came in on. I changed the load balancer to communicate with the web server using HTTP, and then the redirects flowed properly back to the browser.

It's easier to configure the load balancer to communicate with the web server over plain HTTP and let the load balancer handle the encryption between itself and the browser.
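For what it's worth, with the load balancer talking to Apache over plain HTTP, the .htaccess redirect rules can key off the X-Forwarded-Proto header that the load balancer adds to each request. A rough sketch (the domain is a placeholder):

RewriteEngine On

# Redirect to HTTPS when the original browser request came in over plain HTTP
RewriteCond %{HTTP:X-Forwarded-Proto} !https
RewriteRule ^ https://www.example.com%{REQUEST_URI} [L,R=301]

# Redirect the bare domain to the www host name
RewriteCond %{HTTP_HOST} ^example\.com$ [NC]
RewriteRule ^ https://www.example.com%{REQUEST_URI} [L,R=301]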

Posted in IT

Hosted PBXs require fine-tuned firewalls

I've been working with a client who has moved to a cloud-based VOIP PBX. In general, I'm a fan of this (and just about anything "cloud"), but there are a lot of firewall settings that need to be just right to make it work well.

This is particularly true if you only have a single server at the hosting data center. Multiple phones at your physical location talking to a server on the other side of your Internet connection is tricky.

A better configuration involves a secure VPN connection between your physical location and your hosting provider. If their offering is a single server, they may not be set up to do this. Take a look at setting up a small network inside Amazon Web Services (AWS), Microsoft Azure, or Google Cloud. You should be able to set up a VPN between your physical location and any of these. Once that's done, from your PBX and phones' point of view, they are communicating on the same network, which is much more straightforward.

Explaining NAT (Network Address Translation) is beyond the scope of this article, but that’s the complicating difference that the VPN eliminates.

Posted in IT