App Registrations vs. Enterprise Applications

In Microsoft Entra ID (formerly Azure AD), there are two menu options that look very similar: App Registrations and Enterprise Applications. When I first started working with Azure, I wondered what Enterprise Applications were, and the documentation wasn't satisfactory.

Software development tutorials frequently had me add new App Registrations, so I had a pretty good idea what they were for – but Enterprise Applications were a mystery.

When you add a new App Registration, you are prompted to decide if you are publishing a Single tenant or Multitenant application. This is the key to understanding Enterprise Applications.

An App Registration is something the publisher of the application uses. An Enterprise Application represents the consumer's view of that application. If you specify Single tenant, an Enterprise Application will be created in parallel with the App Registration once you (or any user from your Entra ID tenant) log in to the application.

If you create a new App Registration called MyApp, log in to that application, and then go to the Enterprise Applications section, you will see that an Enterprise Application has been created for MyApp. The Enterprise Application is used by administrators to control user access to the application. For instance, an administrator can use the Enterprise Application to restrict access so that only certain users or groups can use the application. In short, App Registrations are used by software developers and Enterprise Applications are used by administrators.

The Multitenant case may make this clearer. If you publish your application as Multitenant, when someone from another Entra ID tenant logs in, an Enterprise Application will be created in their Entra ID tenant, but not an App Registration. The publisher's tenant is the only one where an App Registration exists, but (in a Multitenant scenario) there can be many Enterprise Applications (one per consuming tenant), where administrators in different tenants can control their own users' access to the application.

Each of these Enterprise Applications (one in each consuming tenant) gets a unique Object ID, but the Application ID on the Enterprise Application points back to the Application (client) ID of the referenced App Registration.
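
If you want to see this relationship for yourself, Microsoft Graph exposes Enterprise Applications as servicePrincipals, and you can query them by the App Registration's Application (client) ID. Here is a minimal sketch in C# using plain HttpClient; the access token and appId values are placeholders, and you would need a token with permission to read service principals (for example, Application.Read.All):

    using System;
    using System.Net.Http;
    using System.Net.Http.Headers;
    using System.Threading.Tasks;

    class EnterpriseAppLookup
    {
        static async Task Main()
        {
            // Placeholders: supply a real Graph access token and the Application (client) ID
            // copied from the App Registration's Overview blade.
            string accessToken = "<token with Application.Read.All or similar>";
            string appId = "<application-client-id>";

            using var http = new HttpClient();
            http.DefaultRequestHeaders.Authorization =
                new AuthenticationHeaderValue("Bearer", accessToken);

            // Enterprise Applications are service principals in Microsoft Graph;
            // filtering on appId ties them back to the publisher's App Registration.
            string url =
                $"https://graph.microsoft.com/v1.0/servicePrincipals?$filter=appId eq '{appId}'";

            string json = await http.GetStringAsync(url);
            Console.WriteLine(json); // each match has its own "id" (Object ID) but the same "appId"
        }
    }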

Evaluation Environments

Almost everyone is familiar with the concept of a QA environment, but I'm going to bring up the need for what I'm going to call Evaluation environments. (If there's already an industry term for this that I'm not familiar with, let me know at https://twitter.com/jefftrotman.)

I’m going to define an Evaluation environment as where the design of a new feature is evaluated. (QA is where the implementation or execution of that design is verified.)

With QA, the expectation is that the feature will continue downstream, eventually ending up in production. We may have to correct some errors first, but the design has already been accepted and is expected to flow into production at some point.

Evaluation is a place to test a new feature’s design. Things often make sense in theory but once you see how they actually work, it may be obvious that the design is flawed and needs to be re-worked. In some cases the feature will just be abandoned.

I’m going to talk more about “environments” in a future post, but the short version might be “some kind of web server where the code can be deployed so that someone who’s not a developer can interact with the new feature”. An Evaluation environment should be considered “disposable”.


Alternative build agents for Azure Pipelines

Non-technical overview

Azure DevOps is a Microsoft service developed prior to their acquisition of GitHub. There's a lot of overlap between the two services, and, even though it seems inevitable that GitHub is the future, a lot of organizations are still using Azure DevOps.

Azure Pipelines is the part of Azure DevOps where you can build and test code projects. (When programmers use the word "build", they mean converting code written by a human programmer into something that can run on a computer.) "CI/CD" has recently become a buzzword in the software development community. It stands for Continuous Integration/Continuous Deployment, and Azure Pipelines is where Azure DevOps provides those CI/CD services.

The "building software" part of CI/CD requires build agents. A build agent is a computer used to build software, but it's a different computer from the one a human programmer uses to write code. Azure Pipelines maintains a pool of VMs that can serve as your build agent. When a build process is triggered, one of these is temporarily allocated to your process. It does your build and then returns to the pool. (CI/CD is all about automating processes. All of this happens "auto-magically" as directed by a script.)

The VMs provided by Microsoft will meet most needs, but not all. I'm working on a project for a client where the amount of RAM provided in the Microsoft-hosted agents wasn't enough, and the build process was failing because it ran out of memory.

When the Microsoft-hosted agents won't meet your needs, you can plug your own "self-hosted agents" into the process. These can be physical or virtual machines that you set up, configure, and maintain, or you can take advantage of Azure virtual machine scale set agents. The scale set option lets you specify a "size" of Azure VM (the processor, amount of RAM, amount of storage, etc.) and tell Azure DevOps to use that for your build agents.

This option is more expensive and requires more administration on your part, but it's nice to have when your needs go beyond what the Microsoft-hosted agents can handle. The rest of this post will recap some of the technical issues I ran into setting this up recently.

Technical details

This is a good starting point for background on Azure Pipelines Agents. If you follow the link in the Microsoft-hosted agents section, you can find out about the specific hardware allocated to these VMs. As of this writing, the amount of RAM is 7GB. This client has a particularly large Angular project that required more RAM than that.

I didn’t want to actually “self-host” in a physical sense, so I went with the Azure virtual machine scale set option. Here is the page with the specific instructions. There’s a lot of information here, but basically it consisted of three steps:

  1. In the Azure portal, create the scale set.
  2. In the Azure DevOps portal, set up an agent pool pointing to what you created in #1.
  3. Change your pipelines to use the new agent pool.

Create the Scale Set in Azure

The instructions walk you through using the Azure Cloud Shell to create the scale set. The az vmss create statement is the one to focus on. The vm-sku setting is probably the most important one. This is what specifies the “size” of the VM. Start here to determine which size will work best for your needs. (Don’t agonize over this too long at the beginning. It’s pretty simple to change this setting on your scale set.) Make sure to pay attention to the usage price of your chosen size. (Here’s the pricing table for Linux VMs.)

The instance-count setting only specifies the number of instances initially created. Once you hand control of the scale set over to Azure DevOps (in step 2), it will override this instance count.

Once you've successfully run the az vmss create statement, you're done with step 1. Azure DevOps will handle everything else (including installing the required software on the VMs).

Create the Agent Pool in Azure DevOps

The instructions in the article above are pretty good. Here are some notes on things I ran into:

  • You will need to be an Agent Pool Administrator to create the new Agent Pool. If you are missing this permission, you will get to the end of the form and then get an error when you submit saying you need "Manage" permission. If you get this, have the owner of the organization use the Security button on the Agent Pools page (top right, next to the blue Add pool button) to grant the permission to you.
  • The Maximum number of virtual machines in the scale set setting overrides the instance-count setting from step 1.

Change your Pipelines to use the new Agent Pool

Here is the relevant part of the Azure Pipelines YAML schema. Basically, if you are using a Microsoft-hosted agent pool, you only specify the vmImage value.

If you want to use a self-hosted agent pool, you need to specify the name. This is the name you provided in step 2 and the name that shows up in the list of Agent Pools in Azure DevOps.

For the pool created in step 2 (named "Self Hosted Scale Set pool" in my case), your YAML would contain this:

pool:
  name: Self Hosted Scale Set pool

Final Considerations

The 3 main settings on the Azure DevOps Agent Pool will probably involve some trial and error depending on your specific needs.

  • Maximum number of virtual machines in the scale set
  • Number of agents to keep on standby
  • Delay in minutes before deleting the excess idle agents

If you set the Number of agents to keep on standby to 0, you won’t be paying for instances to sit idle, but your Pipeline runs will sit in the Queued state for a little while to give Azure time to “spin up” an image for you. (In my case, this was about 5 minutes.) You can monitor the number (and status) of instances in the Azure portal by finding the Virtual machine scale set and clicking on Instances on the left-side menu. (Hint – if you can’t find it, look at the Resource Group you specified in the az vmss create statement.)


Dip your foot into Continuous Deployment (CD)

DevOps is a big deal in software development right now. Continuous Deployment (CD) is a big part of this trend. (Not to be confused with Continuous Integration, or CI.) You can find more technical definitions of Continuous Deployment, but it's really about automating the process of getting new code onto your production servers after it has been built, tested, and approved. The goal is to make this process as automated as possible, because when people have to manually copy files and change configuration settings, mistakes happen, no matter how many checklists you have.

If you are an ASP.NET developer (Framework or Core) and you're using Azure App Service to host your application, your App Service can build your Azure DevOps pipeline for you. What's a pipeline? Here's a good introductory video:

Azure DevOps is the re-branded Visual Studio Team Services (formerly Team Foundation Service), and I think it's really good. (It's free to set up an account for up to 5 users, so, if you're already using Azure, why wouldn't you try this?)

In the video, he demonstrates setting this up in the DevOps console. If you start from your Azure App Service console and go to the Deployment Center, it will create a starter pipeline and release for you. (Azure DevOps isn't the only source it works with. I've used the GitHub integration, and more are listed.)

It just takes a couple of minutes and you should have a working pipeline. If you’ve been publishing from your development computer’s Visual Studio, you’ll want to delete the publishing profiles to remove the temptation of pushing directly. Now that you have the pipeline, always deploy using that.

WordPress plugin for Google Analytics

I've used WordPress for years. Every time I've gone looking for a plugin to include the tracking code for Google Analytics, what I found was overkill, so I wrote my own.

It adds one field to the General Settings screen to let you enter your Google Analytics Tracking ID. That’s all there is to it.

You can download it from https://wordpress.org/plugins/technicality-google-analytics/ .

.Net/Web development challenges with Time Zones (Part 2)

This situation is completely obvious to me in hindsight, but I have to confess – I never thought about this until a couple of weeks ago.

As I talked about in Part 1, cloud computing increases the likelihood that servers are in different time zones than users. It seems like the simple solution is to store date/time values in UTC (what we used to call Greenwich Mean Time, or GMT).

If the server's time zone is set to UTC, this works pretty well. If you send date/time data to the browser in JSON, JavaScript running in the user's browser will adjust it to the local time zone for you. (If you're sending the date/time value as HTML text, you have to do this yourself.)

The other day I was debugging an application from my development computer, where the time zone is not set to UTC. (I live in the Central time zone.) Time values that I had saved to the database in UTC were not being displayed correctly in the browser.

It finally dawned on me that even though I had used C#'s DateTime.UtcNow to generate the time originally, the datetime value stored in the database had no concept of which time zone it was in, so when my computer read the value from the database, it assumed the value was in my computer's time zone (Central) rather than UTC.

In C#, the solution to this problem involves the DateTime.Kind property. I'll confess, I had never heard of it or used it. By using the DateTime.SpecifyKind method, you can tell the system whether a DateTime is in the local time zone or in UTC. Once I started using this, the time values in the browser began displaying correctly.
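
Here's a minimal sketch of what that looks like in practice; the data reader and the "CreatedUtc" column name are hypothetical, but the DateTime.SpecifyKind call is the key piece:

    using System;
    using System.Data;

    static class DateTimeReadExample
    {
        // Values read back from the database come with DateTimeKind.Unspecified,
        // so the runtime has no idea they were stored as UTC.
        public static DateTime ReadCreatedUtc(IDataRecord reader)
        {
            DateTime raw = reader.GetDateTime(reader.GetOrdinal("CreatedUtc"));

            // Mark the value as UTC without shifting it.
            return DateTime.SpecifyKind(raw, DateTimeKind.Utc);
        }
    }

Once the Kind is set to Utc, JSON serializers like System.Text.Json or Json.NET will append the "Z" suffix, which is what lets the JavaScript in the browser shift the value to the user's local time zone correctly.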

I've been programming a long time and this issue had never occurred to me. Hopefully it has occurred to you, but I'm betting at least a few people who read this are in the same boat as me. Whether or not you're using C#, this concept (what time zone a DateTime value is stated in) needs to be addressed in any application where servers and browsers are in different time zones.

.Net/Web development challenges with Time Zones (Part 1)

One of the learning-curve issues when moving from a client-server environment, where all the application users were in the same building, to developing for the web is dealing with different time zones.

Dates have times whether you want them or not

The first time I remember being bitten by the time zone issue was several years ago, and it's probably not what you'd expect.

My application had an Order table in a SQL Server database with an OrderDate field (DateTime data type). I wasn’t interested in the time part, so I was setting the value by using DateTime.Today, which gives today’s date with the time part set to 00:00:00 (midnight).

My test user called me to say that his OrderDate values were showing up off by 1 day. When retrieving the order, the web server was returning the order information in JSON. I didn’t realize at the time that browsers will automatically “time-zone-shift” JSON dates to the browser’s computer’s time zone. So, if the order date was “1/1/2019 00:00:00” and the test user’s time zone was 6 hours behind UTC – the browser (not realizing that the time part was not significant) translated that value to “12/31/2018 18:00:00”. My UI was formatting the date to only show the date part, so the OrderDate field value was showing as “12/31/2018” when it should have been “1/1/2019”.

This turned out to be a much trickier problem than I initially thought. This was when I realized the need for a Date data type (since there wouldn’t be any concept of time zone shifting with a Date-only type). I know SQL Server has a Date type, but C# still doesn’t.

At the time, I was in a hurry, so I cheated a little bit. I started stamping the time part as noon (12:00:00) instead of letting it default to midnight (00:00:00). I didn't care about the time part and it's never displayed, and this gives me 12 hours of leeway in either direction before the date part changes. Apparently some islands in the Pacific do have +13 and +14 time zones, but I was pretty sure that particular application wasn't going to be used there.
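
Here's roughly what that cheat looks like (the Order class and the serialization call are just for illustration):

    using System;
    using System.Text.Json;

    class Order
    {
        public DateTime OrderDate { get; set; }
    }

    class Program
    {
        static void Main()
        {
            // DateTime.Today is midnight, so a westward shift in the browser
            // can roll the displayed date back to the previous day.
            var risky = new Order { OrderDate = DateTime.Today };

            // The cheat: stamp the (unused) time part as noon so the date part
            // survives a shift of up to 12 hours in either direction.
            var safer = new Order { OrderDate = DateTime.Today.AddHours(12) };

            Console.WriteLine(JsonSerializer.Serialize(risky));
            Console.WriteLine(JsonSerializer.Serialize(safer));
        }
    }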

That application has since been retired but I never found a more elegant solution to this problem. If anyone knows of one, please let me know.

Consider CAPTCHA

A few days ago, one of my clients called to say that their credit card processor had suspended their account because their website was being used to submit fraudulent charges and that a CAPTCHA mechanism needed to be added before they would reactivate the account.

[Image: CAPTCHA example. By Scooooly, own work, CC BY-SA 4.0, https://commons.wikimedia.org/w/index.php?curid=47265558]

We've all been required to complete a CAPTCHA process when doing something online. I knew the general idea was to make sure that the website was being used by a human and not a "bot", but, I'll confess, I really hadn't given them much thought before this.

Google's reCAPTCHA documentation begins by saying "reCAPTCHA protects you against spam and other types of automated abuse". This was an eye-opening example of automated abuse. I'm still not exactly sure what was being attempted, but my best guess is that hackers were using the site to test combinations of digits to come up with valid credit card numbers, or something along those lines. Because this form was set up for donations and no merchandise was going to be shipped as a result of valid credit card charges, I hadn't thought about the possibility of it being abused.

[Image: Example of Google reCAPTCHA v2]

Luckily, using Google's reCAPTCHA tools, it only took an hour or so to add the functionality to the site. I used v2 (pictured above) because that's the one I was most familiar with, but I was very interested to learn that Google has now released reCAPTCHA v3, which doesn't require any user interaction.
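
For anyone curious what the server side of v2 involves, verification is a single POST to Google's siteverify endpoint with your secret key and the token the widget posts back from the browser. Here's a minimal sketch in C# (the secret key and token are placeholders, and real code should also check things like the hostname in the response):

    using System.Collections.Generic;
    using System.Net.Http;
    using System.Text.Json;
    using System.Threading.Tasks;

    static class RecaptchaVerifier
    {
        private static readonly HttpClient Http = new HttpClient();

        // secretKey: the server-side key from the reCAPTCHA admin console.
        // token: the g-recaptcha-response value posted by the browser.
        public static async Task<bool> VerifyAsync(string secretKey, string token)
        {
            var form = new FormUrlEncodedContent(new Dictionary<string, string>
            {
                ["secret"] = secretKey,
                ["response"] = token
            });

            var response = await Http.PostAsync(
                "https://www.google.com/recaptcha/api/siteverify", form);
            var json = await response.Content.ReadAsStringAsync();

            // The response is JSON with a boolean "success" field.
            using var doc = JsonDocument.Parse(json);
            return doc.RootElement.GetProperty("success").GetBoolean();
        }
    }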

Now that I've seen an example of non-obvious (at least to me) abuse and how easy it was to add CAPTCHA functionality, I'll be reviewing other sites to see if there are other places where it would make sense to use this. I'm encouraging you to do this too.

(BTW – I never knew that CAPTCHA was an acronym for “completely automated public Turing test to tell computers and humans apart”.)