In Microsoft Entra ID (formerly Azure AD), there are two menu options that look very similar – App Registrations and Enterprise Applications. When I first started working with Azure – I wondered what Enterprise Applications were and the documentation wasn’t satisfactory.
Software development tutorials frequently had me add new App Registrations, so I had a pretty good idea what they were for – but Enterprise Applications were a mystery.
When you add a new App Registration, you are prompted to decide if you are publishing a Single tenant or Multitenant application. This is the key to understanding Enterprise Applications.
An App Registration is something the publisher of the application uses. An Enterprise Application represents the consumer’s view of that application. If you specify Single tenant, an Enterprise Application will be created in parallel with the App Registration once you (or any user from your Entra ID tenant) log in to the application.
If you create a new App Registration called MyApp, log in to that application, and then go to the Enterprise Applications section, you will see that an Enterprise Application has been created for MyApp. The Enterprise Application is used by administrators to control user access to the application. For instance, an administrator can use the Enterprise Application to set up a mapping so that only certain users or groups can access the application. In short, App Registrations are used by software developers and Enterprise Applications are used by administrators.
The Multitenant case may make this clearer. If you publish your application as Multitenant, when someone from another Entra ID tenant logs in, an Enterprise Application will be created in their Entra ID tenant, but not an App Registration. The publisher’s tenant is the only one where an App Registration exists, but (in a Multitenant scenario) there can be many Enterprise Applications (one per consuming tenant), where administrators in the different tenants control their own users’ access to the application.
Each of these Enterprise Applications (in different consuming tenants) gets a unique Object ID, but the Application ID in each Enterprise Application points back to the Application (client) ID of the referenced App Registration.
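If it helps, here’s a minimal sketch (in Python, with made-up class names – this is not an Azure API) of the ID relationships described above: one App Registration in the publisher’s tenant, and one Enterprise Application per consuming tenant, all sharing the same Application (client) ID but each with its own Object ID.

```python
import uuid

class AppRegistration:
    """The publisher-side object (hypothetical model, not an Azure type)."""
    def __init__(self, name):
        self.name = name
        self.object_id = str(uuid.uuid4())       # unique to the registration
        self.application_id = str(uuid.uuid4())  # the "Application (client) ID"

class EnterpriseApplication:
    """The consumer-side object, one per consuming tenant."""
    def __init__(self, registration, tenant):
        self.tenant = tenant
        self.object_id = str(uuid.uuid4())                 # unique per tenant
        self.application_id = registration.application_id  # points back to the registration

my_app = AppRegistration("MyApp")
consumers = [EnterpriseApplication(my_app, t) for t in ("Contoso", "Fabrikam")]

# Every Enterprise Application has its own Object ID...
assert len({c.object_id for c in consumers}) == 2
# ...but they all point back to the same Application (client) ID.
assert all(c.application_id == my_app.application_id for c in consumers)
```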
When you have a “user account”, that really means that you are listed in an authentication directory. A directory is basically a list of usernames and passwords. (For simplicity, ignore SSO, MFA and “password-less” for sake of this discussion.)
B2C (Business to Consumer) can be thought of as a “two-party” relationship. When you sign up for a Facebook account, you and Facebook are the only parties involved in that relationship. Other examples of B2C accounts are personal Microsoft Accounts and personal Google accounts – these can be tricky because Microsoft and Google also offer B2B accounts.
B2B (Business to Business) can be thought of as a “three-party” relationship. If your company uses Microsoft 365, your user account involves Microsoft (who hosts the directory), your employer (who administers the directory), and you. Incidentally, this account has to do with Microsoft but isn’t a “Microsoft Account”, because that’s the proper name for their B2C offering.
In a B2C system, you (the user) choose to sign up for the account and only you can delete your account. In the case of your job providing you a Microsoft 365 account (B2B), an administrator issues the account to you and, if you leave – they can (and should) delete your account.
Note that there are plenty of times that B2C authentication is used in a business environment. For instance, if you use GitHub to store source code for a work application, you grant permissions (which is authorization, as opposed to authentication) to someone’s GitHub account, which they created themselves and which wasn’t issued to them by your company – so, B2C.
Some larger companies have “approved vendor lists”. Instead of being able to purchase from any vendor, you are limited to approved vendors. This is kind of like B2B authentication. In a B2B authentication system, you can only grant permissions to users listed in your corporate directory. Note that – just like being listed on the approved vendor list doesn’t mean that anyone has bought anything from you, being part of the B2B directory doesn’t grant you permission to access anything – it’s a necessary first step.
I find myself trying to explain this a lot, so I thought I’d put it “in writing” for future reference. This is not intended as a technically complete answer but a starting point to understanding what these terms refer to.
I usually start this by saying “Microsoft is bad at naming things”. The .NET Framework (.NET is a pretty bad name to begin with. Even DotNet would have been better. Who comes up with a name that begins with a period?) has been around since 2002. It evolved through many versions but around .NET 4.6 – Microsoft came up with a replacement framework that they decided to name “.NET Core”. It’s much easier to understand this with a visual, so take a look at this timeline in this Tweet – https://twitter.com/buhakmeh/status/1250127850457440256.
The .NET Framework versions are in green. Then came the .NET Core versions in purple. Understanding the version sequence is confusing. For instance, .NET Core 3.1 is newer than .NET Framework 4.8.
After .NET Core 3.1, Microsoft decided to eliminate “Framework” and “Core” and just call it .NET (starting with version 5). Since this timeline was produced, .NET 6 and .NET 7 have been released and .NET 8 is on the way. To be clear, .NET 5 and above are the later versions of .NET Core with the name changed. There won’t be any .NET Framework versions delivered after 4.8.
.NET Framework is the older technology
.NET Core is the newer technology. Starting with version 5, they dropped the “Core” but you can think of .NET 5 as .NET Core 5, and .NET 6 as .NET Core 6, etc.
Hopefully this will save someone else some time. I started seeing this error message from Azure B2C when trying to authenticate a user. The error message wasn’t very helpful, but the cause was pretty simple. I had allowed a Client Secret to expire in the B2C App Registration. As soon as I generated a new one and started using that – authentication started working again.
I frequently need to test OpenID Connect (OIDC) login scenarios with different Identity Providers (IdPs). To avoid generating a test application from scratch every time, I wrote this simple ASP.NET Core MVC web application (.NET 6) with the necessary middleware to test an OIDC login.
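Regardless of the IdP, the moving parts are the same two URLs. Here’s a sketch (in Python, with a placeholder issuer and client – not any real IdP) of what the OIDC middleware builds for you: the discovery URL where the IdP publishes its metadata, and the authorization request for the authorization code flow.

```python
from urllib.parse import urlencode

# Placeholder values - substitute your IdP's issuer and your registered client.
issuer = "https://login.example.com"

# 1. Discovery: every compliant IdP publishes its metadata at this well-known path.
discovery_url = f"{issuer}/.well-known/openid-configuration"

# 2. Authorization request (authorization code flow), built from values the
#    middleware would normally read out of the discovery document.
params = {
    "client_id": "my-test-client",
    "response_type": "code",
    "scope": "openid profile",
    "redirect_uri": "https://localhost:5001/signin-oidc",
    "state": "abc123",
}
authorize_url = f"{issuer}/authorize?{urlencode(params)}"

print(discovery_url)
print(authorize_url)
```

When a login fails, comparing these two URLs (as the middleware actually sent them) against what the IdP expects is usually the fastest way to spot the problem.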
Almost everyone is familiar with the concept of a QA environment, but I’m going to bring up the need for what I’m going to call Evaluation environments. (If there’s already an industry term for this that I’m not familiar with – let me know at https://twitter.com/jefftrotman.)
I’m going to define an Evaluation environment as where the design of a new feature is evaluated. (QA is where the implementation or execution of that design is verified.)
With QA, the expectation is for this feature to continue downstream eventually ending up in production. We may have to correct some errors first, but the design has already been accepted and is expected to flow into production at some point.
Evaluation is a place to test a new feature’s design. Things often make sense in theory but once you see how they actually work, it may be obvious that the design is flawed and needs to be re-worked. In some cases the feature will just be abandoned.
I’m going to talk more about “environments” in a future post, but the short version might be “some kind of web server where the code can be deployed so that someone who’s not a developer can interact with the new feature”. An Evaluation environment should be considered “disposable”.
Azure DevOps is a Microsoft service developed prior to their acquisition of GitHub. There’s a lot of overlap between the two services, and – even though it seems inevitable that GitHub is the future, there are a lot of organizations still using Azure DevOps.
Azure Pipelines is a part of Azure DevOps where you can build and test code projects. (When programmers use the word “build”, they mean to convert code written by a human programmer into something that can run on a computer.) “CI/CD” has recently become a buzzword in the software development community. It stands for Continuous Integration/Continuous Deployment. Azure Pipelines is the part of Azure DevOps that provides CI/CD services.
The “building software” part of CI/CD requires build agents. A build agent is a computer used to build software, but it’s a different computer than the one a human programmer uses to write code. Azure Pipelines maintains a pool of VMs that can serve as your build agent. When a build process is triggered, one of these is temporarily allocated to your process. It does your build and then reverts back to the pool. (CI/CD is all about automating processes. All of this happens “auto-magically” as directed by a script.)
The VMs provided by Microsoft will meet most needs, but not all. I’m working on a project for a client where the amount of RAM provided in the Microsoft-hosted agents wasn’t enough and the build process was failing because it ran out of memory.
When the Microsoft-hosted agents won’t meet your needs, you can plug your own “self-hosted agents” into the process. These can be physical or virtual machines that you set up, configure, and maintain, or you can take advantage of Azure virtual machine scale set agents. This lets you specify a “size” of Azure VM (allowing you to choose the processor, amount of RAM, amount of storage, etc.) and tell Azure DevOps to use that for your build agent.
This option is more expensive and requires more administration on your part, but if you have needs beyond what the Microsoft-hosted agents can handle – it’s nice to have this option. The rest of this post will recap some of the technical issues I ran into setting this up recently.
I didn’t want to actually “self-host” in a physical sense, so I went with the Azure virtual machine scale set option. Here is the page with the specific instructions. There’s a lot of information here, but basically it consisted of three steps:
1. In the Azure portal, create the scale set.
2. In the Azure DevOps portal, set up an agent pool pointing to what you created in #1.
3. Change your pipelines to use the new agent pool.
Create the Scale Set in Azure
The instructions walk you through using the Azure Cloud Shell to create the scale set. The az vmss create statement is the one to focus on. The vm-sku setting is probably the most important one. This is what specifies the “size” of the VM. Start here to determine which size will work best for your needs. (Don’t agonize over this too long at the beginning. It’s pretty simple to change this setting on your scale set.) Make sure to pay attention to the usage price of your chosen size. (Here’s the pricing table for Linux VMs.)
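For reference, the statement I ran looked roughly like this. The resource names are placeholders, and the image and sku values are just examples – check the current instructions for the exact flags Azure DevOps expects before running it against your own subscription:

```shell
az vmss create \
  --name buildAgentScaleSet \
  --resource-group buildAgentRG \
  --image Ubuntu2204 \
  --vm-sku Standard_D4s_v4 \
  --storage-sku StandardSSD_LRS \
  --instance-count 2 \
  --disable-overprovision \
  --upgrade-policy-mode manual \
  --single-placement-group false \
  --platform-fault-domain-count 1 \
  --load-balancer '""'
```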
The instance-count setting only specifies the number of instances initially created. Once you “hand control of the set over to Azure DevOps” (in step 2), it will override this instance count.
Once you’ve successfully run the az vmss create statement, you’re done with step 1. Azure DevOps will handle everything else (including installing required software on the VMs).
Create the Agent Pool in Azure DevOps
The instructions in the article above are pretty good. Here are some notes of things I ran into:
You will need to be an Agent Pool Administrator to create the new Agent Pool. If you are missing this, you will get to the end of the form and get an error when you submit saying you need “Manage permission”. If you get this, have the owner of the organization use the Security button on the Agent Pools page (top right – next to blue Add Pool button) to grant this permission to you.
The Maximum number of virtual machines in the scale set setting overrides the instance-count setting from step 1.
To use the self-hosted agent pool from a pipeline, you reference it by name – the name you provided in step 2, which shows up in the list of Agent Pools in Azure DevOps.
In the case shown above, your YAML would contain this:

```yaml
pool: Self Hosted Scale Set pool
```
The 3 main settings on the Azure DevOps Agent Pool will probably involve some trial and error depending on your specific needs.
* Maximum number of virtual machines in the scale set
* Number of agents to keep on standby
* Delay in minutes before deleting the excess idle agents
If you set the Number of agents to keep on standby to 0, you won’t be paying for instances to sit idle, but your Pipeline runs will sit in the Queued state for a little while to give Azure time to “spin up” an image for you. (In my case, this was about 5 minutes.) You can monitor the number (and status) of instances in the Azure portal by finding the Virtual machine scale set and clicking on Instances on the left-side menu. (Hint – if you can’t find it, look at the Resource Group you specified in the az vmss create statement.)
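The standby setting is really a cost-versus-latency tradeoff. A quick back-of-the-envelope calculation, using a made-up hourly rate (check the pricing table for your actual vm-sku):

```python
# Hypothetical numbers - substitute the real rate for your chosen VM size.
hourly_rate = 0.10       # USD per hour for one instance (placeholder)
standby_agents = 1
hours_per_month = 730    # roughly 24 * 365 / 12

standby_cost = hourly_rate * standby_agents * hours_per_month
print(f"Keeping {standby_agents} agent on standby: ~${standby_cost:.2f}/month")
# With 0 standby agents the idle cost drops to zero, but each queued run
# waits for Azure to spin up an instance (about 5 minutes in my case).
```

Whether a few minutes of queue time per run is worth the monthly standby cost depends entirely on how often your pipelines fire.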
DevOps is a big deal in software development right now. Continuous Deployment (CD) is a big part of this trend. (Not to be confused with Continuous Integration, or CI.) You can find more technical definitions of Continuous Deployment, but it’s really about automating the process of getting new code onto your production servers after it has been built, tested, and approved. The goal is to make this process as automated as possible, because when people have to manually copy files and change configuration settings, mistakes happen, no matter how many checklists you have.
If you are an ASP.NET developer (Framework or Core) and you’re using Azure App Services to host your application, your App Service can build your Azure DevOps pipeline for you. What’s a pipeline? Here’s a good introductory video:
Azure DevOps is the re-branded Visual Studio Team Services (which grew out of Team Foundation Server) and I think it’s really good. (It’s free to set up an account for 5 users, so if you’re already using Azure, why wouldn’t you try it?)
In the video, he demonstrates setting this up in the DevOps console. If you start from your Azure App Service console and go to the Deployment Center – it will create a starter pipeline and release for you. (Azure DevOps isn’t the only one it works with. I’ve used the integration to GitHub and more are listed.)
It just takes a couple of minutes and you should have a working pipeline. If you’ve been publishing from your development computer’s Visual Studio, you’ll want to delete the publishing profiles to remove the temptation of pushing directly. Now that you have the pipeline, always deploy using that.
These aren’t just “10 dollar words” that both start with the letter “A”. If you have an application, you need to understand the difference between these two things.
Let me try to explain this way. Not too far from where I live is a semi-famous (infamous?) bar called the Flora-Bama, so named because it’s on the state line between Florida and Alabama. For purposes of this, imagine that the drinking age in Florida is 18 (it’s not) and that the drinking age in Alabama is 21 (it is). (The Flora-Bama is pretty big and I think there are bars on both sides of the state line. Let’s assume that’s true.) In this scenario, if you’re 19 – you can buy a beer from the bar on the Florida side, but not from one on the Alabama side.
Imagine the doorman out front is checking IDs and putting wristbands on everyone – yellow if you’re 18-20 and green if you’re 21+. That’s authentication. He checked your ID and identified you but didn’t make any decisions about what you could and couldn’t do.
If a yellow-banded 20-year-old walks up to an Alabama-side bar, the bartender will refuse to serve the patron based on the color of the wristband. This is authorization. The bartender doesn’t check the ID again, but just makes an authorization decision based on the previous authentication.
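For what it’s worth, the analogy maps directly onto code. A toy sketch (all names invented, and using the pretend drinking ages from the story) of the two separate decisions:

```python
def check_id(age):
    """Authentication (the doorman): verify who you are and issue a wristband.
    Assumes everyone entering is at least 18."""
    return "green" if age >= 21 else "yellow"

def can_buy_beer(wristband, side):
    """Authorization (the bartender): decide what you may do,
    based only on the earlier authentication."""
    if side == "Florida":                 # pretend drinking age: 18
        return wristband in ("yellow", "green")
    return wristband == "green"           # Alabama: 21

band = check_id(19)                       # authenticated once, at the door
print(can_buy_beer(band, "Florida"))      # served on the Florida side
print(can_buy_beer(band, "Alabama"))      # refused on the Alabama side
```

Notice that `can_buy_beer` never sees the ID – only the wristband. That separation is exactly what lets you swap out who does the authentication without touching your authorization rules.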
You can (and in many cases probably should) outsource your application’s authentication function. When you see sites let you “Login with Google” or “Login with Facebook”, that’s what’s happening. They are letting Google or Facebook handle the authentication, but the actual site still has to decide what the user is permitted to do.
The simplest ramification of “outsourcing authentication” is – one less password for your site’s users to remember. More importantly, if you aren’t storing passwords – they can’t be stolen if your data is breached. Of course, you’ll need to decide if all of your users are likely to have a Google or Facebook account (or whatever service you choose to trust to authenticate).