Friday, December 22, 2017

Monetize your APIs with Azure API Management

In a world of Microservices and APIs, there might come a time when you realize you have a service that others might want to use.

Azure API Management was made for this purpose, to centralize the management of all your APIs.

It allows easy tracking of usage thanks to its subscription-key approach, with a unique key per user, so you can easily charge for the use of your APIs.
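As a sketch of what a consumer call looks like (the gateway URL and key below are made-up placeholders), the subscription key travels in the Ocp-Apim-Subscription-Key header (or, alternatively, in the subscription-key query parameter):

```python
def apim_request_parts(base_url, operation_path, subscription_key):
    """Build the URL and headers for a call to an API fronted by APIM.

    APIM matches the subscription key from the Ocp-Apim-Subscription-Key
    header (it also accepts a subscription-key query parameter).
    """
    url = base_url.rstrip("/") + "/" + operation_path.lstrip("/")
    headers = {"Ocp-Apim-Subscription-Key": subscription_key}
    return url, headers

# Hypothetical gateway URL and key, for illustration only.
url, headers = apim_request_parts(
    "https://my-apim.azure-api.net/echo", "resource",
    "0123456789abcdef0123456789abcdef")
```

Every call carrying a valid key is attributed to that subscription, which is what makes the per-user usage tracking (and billing) possible.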

But one caveat is that the user base in API Management is a "standalone" list, not necessarily aligned with Azure Active Directory (where you will probably already have all your users).

So if you want to align these two worlds, as of now you need to be creative, bend some concepts, and implement a few tricks, as explained here.

Let's start with the basics, that is, how Azure Active Directory works behind the scenes.

Here you can see the basic entities of Azure Active Directory for an Application: the App Registration and the Enterprise Applications; from these two entry points in the Azure portal you can manage almost everything about your App authentication.

Now, if you want to have some Customer-specific division in your Azure Active Directory, you need to be a bit creative, and misuse some entity to achieve your needs (I was hoping that AAD B2B or B2C would help achieve this, but apparently that is not the case so far).

As you can see in the above picture, you define CustomerA and CustomerB in AAD as Security Groups; then you can add to these groups all the users from each Company, which you previously invited to your AAD as described here.

At this point you edit your Application Manifest in the AAD Registration and add two new AppRoles: CustomerA and CustomerB.
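The AppRoles go into the "appRoles" array of the Application Manifest; a sketch of what the two entries could look like (the GUIDs and descriptions here are placeholders that you generate yourself):

```json
"appRoles": [
  {
    "allowedMemberTypes": [ "User" ],
    "description": "Users of CustomerA",
    "displayName": "CustomerA",
    "id": "11111111-1111-1111-1111-111111111111",
    "isEnabled": true,
    "value": "CustomerA"
  },
  {
    "allowedMemberTypes": [ "User" ],
    "description": "Users of CustomerB",
    "displayName": "CustomerB",
    "id": "22222222-2222-2222-2222-222222222222",
    "isEnabled": true,
    "value": "CustomerB"
  }
]
```

The "value" field is what will later show up in the roles claim of the tokens issued for your users.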

Then you open the Enterprise Applications menu of the AAD section in the Azure portal, and assign those new AppRoles to your users individually (behind the scenes this happens through an AppRoleAssignment).

Now, since you use API Management, you need an App Registration in AAD also for its Developer portal (this is where your API consumers will log in, in order to learn about and test your APIs).
Once you have done that, you can switch to the API Management Publisher portal and import your API, as described here.


You will also need to create an Authorization Server in the Publisher Portal, so that it can use the OAuth2 Permissions in AAD, as shown in the picture above.


Now, to make things a bit more complex, you need your API users to register themselves on the API Management Developer portal (as of now, "invite" for some reason does not work, so you have to rely on them to do it...).

Once they have registered, and you have your users in API Management, you can create the Customer Groups as you did in AAD, choosing "Add groups from AAD" in the Product section of the API Management Publisher portal, under the Visibility tab.

Obviously you need to have a Product for each Customer, with the same names as the Groups you previously created. That allows for an easy match and assignment without confusion, as you can see in the picture here below.


Once you have performed all these steps, you are able to use Role-Based Authorization as usual at the Operation (action or method) level in your APIs, as shown here below.

Limit access to this Operation only to CustomerA users:

Limit access to this Operation only to CustomerB users:


Allow access to this Operation to both CustomerA and CustomerB users:
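In an ASP.NET backend this is usually done with attributes like [Authorize(Roles = "CustomerA")] on the action or controller; the check behind them boils down to inspecting the roles claim of the token. A minimal, language-neutral sketch of that logic (the claim values are the AppRole "value" fields defined earlier):

```python
def is_authorized(token_claims, allowed_roles):
    """True when the token carries at least one of the allowed AppRoles.

    AAD emits the assigned AppRole 'value' fields in the "roles" claim
    of the issued token.
    """
    return bool(set(token_claims.get("roles", [])) & set(allowed_roles))

claims = {"roles": ["CustomerA"]}
print(is_authorized(claims, {"CustomerA"}))               # CustomerA only -> True
print(is_authorized(claims, {"CustomerB"}))               # CustomerB only -> False
print(is_authorized(claims, {"CustomerA", "CustomerB"}))  # either -> True
```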



And thanks to the API Management Analytics section in the Publisher portal, you can see all the details about the usage of your APIs, per user as well as per Customer.

Now, the only work left to make all this really useful and easy is, of course, to automate it!

Luckily API Management provides a REST API that you can use, more or less like the Azure Graph API.
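For reference, that REST API authenticates with a SharedAccessSignature token built from the identifier and key found in the APIM Management API security settings. A sketch of how such a token can be generated (based on the documented HMAC-SHA512 scheme; verify the details against the current docs, and the identifier/key here are placeholders):

```python
import base64
import hashlib
import hmac
from datetime import datetime, timedelta

def apim_sas_token(identifier, key, validity_hours=1):
    """Build a SharedAccessSignature for the APIM management REST API.

    The signature is an HMAC-SHA512 over the identifier and expiry
    (joined by a newline), computed with the access key and then
    Base64-encoded.
    """
    expiry = (datetime.utcnow() + timedelta(hours=validity_hours)
              ).strftime("%Y-%m-%dT%H:%M:%S.0000000Z")
    to_sign = f"{identifier}\n{expiry}".encode("utf-8")
    signature = base64.b64encode(
        hmac.new(key.encode("utf-8"), to_sign, hashlib.sha512).digest()
    ).decode("ascii")
    return f"SharedAccessSignature uid={identifier}&ex={expiry}&sn={signature}"
```

The resulting token goes into the Authorization header of each call to the management endpoint.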

But I will tell you about it in a future post.
;)


For now, Happy Holidays!

Tuesday, December 19, 2017

Run your API behind Azure API Management

In order to run your API behind Azure API Management there are a few important steps to take.

First open the API Management Publisher Portal; this can be done from the Azure Portal by selecting API Management services and your service instance.

If you have not created your instance yet, refer to this article.

Import API

If you want to import your API (rather than creating everything manually), it is essential that you have a correct Swagger definition. I wrote a post about it, so if not sure, have a look here.
You can import definitions in the Swagger, WADL, and WSDL formats.

Assign a Product

Once you have imported the API into API Management, you need to assign it to a Product, so users can subscribe to it and obtain their personal Subscription Key, which is required to access your API.
You can either use the existing Starter or Unlimited, or create a new one.

Setup Authorization Server

Now you can create the Authorization Server that you will use in the Developer Portal of API Management. Once created, you can assign it to your API.



Azure Active Directory Developer Portal App Registration

You will need to take care of the Azure Active Directory App Registration as well (also covered in the above link). In particular, make sure that you add the Authorization Code Grant URL generated by the Authorization Server to the ReplyURLs list of the AAD App Registration for the API Management Developer Portal. You also need to add the Application ID and the created secret Key of the API Management Developer Portal to the Client Credentials fields, as shown here.


While there, under the Permissions menu blade, select your Backend API and all the appropriate permissions for the Developer Portal.

Setup SSL Certificate

If you use Mutual Client Authentication with an SSL Certificate, you will need to upload your certificate to API Management and assign it to your API in the Security section. Once you have done that, you can verify in the API Management Policies section that you have an Inbound policy to provide your certificate's Thumbprint to your API.
Then you might want to add your Certificate Validation IAuthorizationFilter to your API, where you can check anything you want from the SSL Certificate (sample code here).
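For illustration, such an inbound policy could forward the thumbprint in a custom header (the header name X-Client-Cert-Thumbprint here is an assumption; use whatever your backend filter actually checks):

```xml
<inbound>
    <base />
    <set-header name="X-Client-Cert-Thumbprint" exists-action="override">
        <value>@(context.Request.Certificate == null ? "" : context.Request.Certificate.Thumbprint)</value>
    </set-header>
</inbound>
```

The policy expression reads the client certificate from the incoming request context and falls back to an empty value when no certificate was presented.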


Setup CORS

You might want to add another policy in API Management to allow CORS (Cross-Origin Resource Sharing), depending on your API clients. You can simply select the CORS policy from the list.
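A sketch of that CORS policy (the origin below is a placeholder; prefer listing your real client origins over a wildcard):

```xml
<cors allow-credentials="false">
    <allowed-origins>
        <origin>https://my-client-app.example.com</origin>
    </allowed-origins>
    <allowed-methods>
        <method>GET</method>
        <method>POST</method>
    </allowed-methods>
    <allowed-headers>
        <header>*</header>
    </allowed-headers>
</cors>
```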


Test in Developer Portal

If everything is done properly, you should now be able to open the API Management Developer Portal and test your API, automatically providing the correct Subscription Key and obtaining the OAuth2 Token from the created Authorization Server.



Tuesday, December 5, 2017

Devil is in the (Owin/Katana Redirect) Details

Some time ago I wrote about a bug that took a month to be solved, involving a 401 - Unauthorized Access to an Azure AppService.

After lots of troubleshooting, that issue got a solution from Microsoft support with a little code snippet that handles the AAD redirection at run-time, rather than relying on the config file value.

Turns out that this code snippet caused another issue, namely an Owin/Katana bug.

This time the same AppService (which runs just fine on Azure) can no longer run locally under IIS, as it generates an infinite redirect loop between the Azure login page and the AppService.

After another month of troubleshooting with Microsoft support, bouncing from one team to another (AppService, IIS, AAD, you name it), they were finally able to reproduce the core issue, which eventually got acknowledged as an official bug (I will update this post with a link to it as soon as I receive it from Microsoft support):

Symptom:
The MVC project stopped working in IIS 10, and was entering into an infinite loop.

Cause:
There is a problem with OWIN and IIS when specifying either a CallbackPath URL or a Redirect URI. After the authentication happens, OWIN receives the response and drops the cookie; because of that the user is set as "Not Authenticated", as there is no cookie to trace the session that just happened. At that point the infinite loop starts. There is a problem with Microsoft.Owin.Host.SystemWeb that is being considered a bug and investigated. It seems that when mixing usage like HttpContext.GetOwinContext().Response.Cookies and HttpContext.Response.Cookies, in some cases OWIN cookies are "lost".


Resolution:
For now, as a work-around: if you do not specify any callbackPath or Redirect URI in the request, and let Azure decide where to send the response, it will work just fine for your project in IIS.



So, long story short: if you have multiple Domains in your Azure AppService, and you implement the code snippet shown before, you have to remove the "RedirectUri" and "CallbackPath" from the default "OpenIdConnectAuthenticationOptions" created for you by the Active Directory client NuGet package; then you can run your AppService locally under IIS as usual.

In the picture below you can see the red lines over the code to be removed, and the blue line around the code snippet to fix the issue reported here.



Tuesday, November 21, 2017

Fix your Swagger Definition!



So you created your shiny .NET REST API, and added the Swashbuckle.Core NuGet package, which generated the SwaggerConfig.cs file in the App_Start folder of your API project.

All good, now you can browse your API definition and even test your API operations!

So you think you have an OpenAPI Specification file now... well, hold your horses!

Turns out that the .NET implementation with SwaggerConfig.cs is not so strict on enforcing the Specification, and will produce a JSON that contains errors when checked for compliance.

To make sure that your Swagger definition is indeed OpenAPI Specification compliant, you can download your generated Swagger JSON and upload it to either SwaggerHub or Swagger Editor.

If any error shows up there after parsing your JSON into YAML, you first have some work to do to fix those.

You might see errors such as "Semantic error at paths ... Equivalent paths are not allowed.", "Schema error at paths ... should NOT have additional properties", "Schema error at paths ... should be equal to one of the allowed values: ...", and so on.

You can easily troubleshoot those, and find all the answers you need in order to fix them (Stack Overflow, as usual, will help you a lot).
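For instance, the "Equivalent paths are not allowed" error means two paths differ only in the name of a path parameter. A quick sketch to spot those in a generated Swagger JSON before uploading it anywhere:

```python
import json
import re

def find_equivalent_paths(swagger_json):
    """Group paths that the spec considers equivalent: paths that are
    identical once {parameter} names are ignored."""
    normalized = {}
    for path in json.loads(swagger_json).get("paths", {}):
        key = re.sub(r"\{[^}]*\}", "{}", path)
        normalized.setdefault(key, []).append(path)
    return [group for group in normalized.values() if len(group) > 1]

doc = '{"paths": {"/items/{id}": {}, "/items/{itemId}": {}, "/users": {}}}'
print(find_equivalent_paths(doc))  # [['/items/{id}', '/items/{itemId}']]
```

Each returned group is a set of paths that Swashbuckle happily emitted but a strict validator will reject.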




Tuesday, November 7, 2017

7 Things I Learned That Made Me a Better Programmer

I bumped into a nice article, which covers things that are well known to me and my colleagues nowadays.

However it is always good to share it, as it is too easy to get caught in the code details and forget the "big picture".

Consider it like a basic programmer wisdom checklist:

https://blog.toggl.com/how-to-be-better-programmer/

Wednesday, October 25, 2017

Automating Azure Active Directory: Provision Users and Apps

Some time ago I wrote about User App Provisioning in Azure, which can be achieved manually through the Azure Portal.

But if you already have an application that you use to manage your users and permissions, and you want to deploy that application to Azure, you might want to automate things a bit more.

At a high level, this is the Graph API flow:
  1. Find the User in AAD
  2. Invite the User to AAD
  3. Find the App to Assign in AAD
  4. Find existing App Assignment for the User
  5. Assign the App to the User
This is the flow diagram (a bit more detailed):

The Management App (green color) is the main application where you already manage users and permissions, which did not require AAD integration so far.
However, once the application is deployed to Azure, AAD integration becomes essential.
This is the place where you would want to integrate this POC application.

The POC application is represented by the App Provision App (yellow color), and it manages the Graph API flow. It executes HTTP Requests to the Graph REST APIs (blue color), and it parses and displays the returned JSON data.

Considering that this process should be automated behind the scenes of your existing Management App (unlike in this POC, where it is a standalone MVC web app), when things go wrong and errors are returned instead of JSON data, an email is sent to an Admin address (so that a manual action can be performed in the Azure portal and the error fixed).

As of now there are two different versions of the Graph API: the Azure AD Graph API and the Microsoft Graph API.

Microsoft recommends using the Microsoft Graph API; however, it is still very raw and unstable (beta), with many features not available yet.

So in this POC I implemented the Azure Graph API for almost all calls, and just used the Microsoft Graph API for the AAD Invitation (which is not available in the Azure one).
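The five-step flow above can be sketched as the following sequence of REST calls (endpoint shapes are simplified; the tenant, redirect URL, and IDs are placeholders, and authentication headers are omitted):

```python
AAD_GRAPH = "https://graph.windows.net"    # Azure AD Graph (api-version=1.6)
MS_GRAPH = "https://graph.microsoft.com"   # Microsoft Graph (used for the invitation)

def provision_calls(tenant, user_upn, app_sp_object_id, app_role_id):
    """Return the provisioning flow as a list of (method, url, body) tuples."""
    return [
        # 1. Find the User in AAD
        ("GET", f"{AAD_GRAPH}/{tenant}/users"
                f"?api-version=1.6&$filter=userPrincipalName eq '{user_upn}'", None),
        # 2. Invite the User to AAD (Microsoft Graph only)
        ("POST", f"{MS_GRAPH}/v1.0/invitations",
         {"invitedUserEmailAddress": user_upn,
          "inviteRedirectUrl": "https://myapp.example.com"}),
        # 3. Find the App (service principal) to Assign in AAD
        ("GET", f"{AAD_GRAPH}/{tenant}/servicePrincipals?api-version=1.6", None),
        # 4. Find existing App Assignments for the User
        ("GET", f"{AAD_GRAPH}/{tenant}/users/{user_upn}/appRoleAssignments"
                "?api-version=1.6", None),
        # 5. Assign the App to the User
        ("POST", f"{AAD_GRAPH}/{tenant}/users/{user_upn}/appRoleAssignments"
                 "?api-version=1.6",
         {"principalId": user_upn, "resourceId": app_sp_object_id,
          "id": app_role_id}),
    ]
```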

Here are the screenshots of the App Provision App:






And that's it.

Feel free to ask questions (or code) in the comments below!

Friday, October 13, 2017

Scoring bad at Pentest... thanks to Azure :)



As part of security compliance, we had our application (deployed in Azure) scanned with a Pentest by an external company.

I just received the Scan Reports, and I was surprised to see issues that I was sure we had fixed (such as OWASP XSS (Cross-Site Scripting), just to name one).

Well, after a quick analysis of the reports, it turns out that most of those issues belong to Azure resources!

Since our APIs sit behind Azure API Management, its Developer Portal was scanned as well, and it resulted in a few issues (between Low and Medium, nothing critical).

The AAD login page has a few of those issues as well, and because of the automatic redirect they seem to be caused by our app during the scan.

Obviously those resources are out of our control, and we can't do anything to fix those issues, maybe Microsoft will.

At the same time it is nice to find out that the hard work to secure our app paid off, and only a few minor issues were found that actually belong to it.

This is the list of issues belonging to Azure resources:


  • Medium (Medium) – Application Error Disclosure
  • Low (Medium) – Web Browser XSS Protection Not Enabled
  • Low (Medium) – Incomplete or No Cache-control and Pragma HTTP Header Set
  • Low (Medium) – X-Content-Type-Options Header Missing
  • Low (Medium) – Cookie Without Secure Flag
  • Low (Medium) – Cross-Domain JavaScript Source File Inclusion
  • Low (Medium) – Password Autocomplete in Browser

Tuesday, September 12, 2017

RabbitMQ on Kubernetes Container Cluster in Azure

Introduction

This post is quite technical (and long, and detailed), so sit down, enjoy your coffee, and let’s get started!


Containers are becoming the way forward in the DevOps and IT worlds, as they greatly simplify deployments of applications and IT infrastructure.
RabbitMQ is “the most widely deployed open source message broker”, and easy to use within a Docker Container Image.
Kubernetes is considered the De-facto Standard for Container Orchestration.

To follow this tutorial you can use the built-in Azure Cloud Shell, or download and install the Azure CLI and use PowerShell locally. Make sure you have Azure PowerShell installed.
You will also need Kubectl, so make sure you install that too (I suggest Choco as the easiest way).
Here I am using PowerShell.


Resource Group

First you have to login to Azure through PowerShell:
az login

You will receive a message such as:
To sign in, use a web browser to open the page https://aka.ms/devicelogin and enter the code CF5G5AJQZ to authenticate.

Follow the above instructions, and PowerShell will be logged onto Azure, returning the available Azure Subscriptions details.
Copy the ID of the Subscription you want to use, and use it in the next command:
az account set --subscription "[My-Azure-Subscription-ID]"

Now you can create the Resource Group used for the Kubernetes Cluster:
az group create --name "[My-ResourceGroup]" --location "westeurope"


Service Principal

Now create a Service Principal to be used for the Kubernetes Cluster:
az ad sp create-for-rbac --role="Contributor" --scopes="/subscriptions/[My-Azure-Subscription-ID]/resourceGroups/[My-ResourceGroup]"

Copy the appId, password and tenantId values returned, and test the login for your newly created Service Principal:
az login --service-principal -u "[My-App-ID]" -p "[My-Password]" --tenant "[My-Tenant-ID]"

You should receive the details of the current Subscription, with user type “servicePrincipal”, so now you can test its permissions by executing the following command:
az vm list-sizes --location westus
If this command returns a long list of VM Sizes, you’re good to go. If not, talk to your Azure Subscription Owner.


Now login again as your main user as you did before, and again set your Subscription:
az login
az account set --subscription "[My-Azure-Subscription-ID]"
You also need to create an SSH key; follow this tutorial:



Kubernetes Cluster

You can create a Kubernetes Cluster locally using MiniKube.

On the Azure Portal you can create the Kubernetes Cluster either manually, or using this helpful ARM template:
First download the ARM Parameters file from:
Make sure you fill this file with your details, something like:
{
  "$schema": "http://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "dnsNamePrefix": {
      "value": "[My-Unique-Kubernetes-Cluster-DNS]"
    },
    "agentCount": {
      "value": 1
    },
    "masterCount": {
      "value": 1
    },
    "adminUsername": {
      "value": "[My-ServicePrincipal-Username]"
    },
    "sshRSAPublicKey": {
      "value": "[ssh-rsa My-RSA-PublicKey]"
    },
    "servicePrincipalClientId": {
      "value": "[My-ServicePrincipal-ID]"
    },
    "servicePrincipalClientSecret": {
      "value": "[My-ServicePrincipal-Password]"
    },
    "orchestratorType": {
      "value": "Kubernetes"
    }
  }
}
Now you can run the following command to create the Kubernetes Cluster:
az group deployment create -g "[My-ResourceGroup]" --template-uri "https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/101-acs-kubernetes/azuredeploy.json" --parameters "[My-Local-Path]\azuredeploy.parameters.json"

After about 15 minutes you should get a response, which hopefully will display ”Finished” and “Succeeded”, along with all the configuration of your newly created Kubernetes Cluster.

So if you now open the Azure Portal, and browse to your Resource Group, you should see something like this:

This is your newly created Kubernetes Cluster on Azure!
However, you are not done just yet.

You still need to install RabbitMQ on your Cluster, as well as create another Azure Load Balancer and two more Public IPs to expose RabbitMQ publicly.


RabbitMQ

First let’s make sure that you are connected to the right Cluster (in case you have created more than one, this command is essential).
az acs kubernetes get-credentials --resource-group="[My-ResourceGroup]" --name="[My-ContainerServiceName]"

So if you now run the following command to get info about your Cluster:
kubectl cluster-info

You should see the following output:
Kubernetes master is running at https://[My-Unique-Kubernetes-Cluster-DNS].westeurope.cloudapp.azure.com
Heapster is running at https://[My-Unique-Kubernetes-Cluster-DNS].westeurope.cloudapp.azure.com/api/v1/namespaces/kube-system/services/heapster/proxy
KubeDNS is running at https://[My-Unique-Kubernetes-Cluster-DNS].westeurope.cloudapp.azure.com/api/v1/namespaces/kube-system/services/kube-dns/proxy
kubernetes-dashboard is running at https://[My-Unique-Kubernetes-Cluster-DNS].westeurope.cloudapp.azure.com/api/v1/namespaces/kube-system/services/kubernetes-dashboard/proxy
tiller-deploy is running at https://[My-Unique-Kubernetes-Cluster-DNS].westeurope.cloudapp.azure.com/api/v1/namespaces/kube-system/services/tiller-deploy/proxy

Based on this tutorial, now create a YAML configuration file (call it: rabbitmq.yaml), for your RabbitMQ.
You can use the following:

apiVersion: v1
kind: Service
metadata:
  # Expose the management HTTP port on each node
  name: rabbitmq-management
  labels:
    app: rabbitmq
spec:
  ports:
  - port: 15672
    name: http
  selector:
    app: rabbitmq
  sessionAffinity: ClientIP
  type: LoadBalancer
---
apiVersion: v1
kind: Service
metadata:
  # The required headless service for StatefulSets
  name: rabbitmq
  labels:
    app: rabbitmq
spec:
  ports:
  - port: 5672
    name: amqp
  - port: 4369
    name: epmd
  - port: 25672
    name: rabbitmq-dist
  clusterIP: None
  selector:
    app: rabbitmq
---
apiVersion: v1
kind: Service
metadata:
  # The required headless service for StatefulSets
  name: rabbitmq-cluster
  labels:
    app: rabbitmq
spec:
  ports:
  - port: 5672
    name: amqp
  - port: 4369
    name: epmd
  - port: 25672
    name: rabbitmq-dist
  type: LoadBalancer
  selector:
    app: rabbitmq
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: rabbitmq
spec:
  serviceName: "rabbitmq"
  replicas: 4
  template:
    metadata:
      labels:
        app: rabbitmq
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: rabbitmq
        image: rabbitmq:3.6.6-management-alpine
        lifecycle:
          postStart:
            exec:
              command:
              - /bin/sh
              - -c
              - >
                if [ -z "$(grep rabbitmq /etc/resolv.conf)" ]; then
                  sed "s/^search \([^ ]\+\)/search rabbitmq.\1 \1/" /etc/resolv.conf > /etc/resolv.conf.new;
                  cat /etc/resolv.conf.new > /etc/resolv.conf;
                  rm /etc/resolv.conf.new;
                fi;
                until rabbitmqctl node_health_check; do sleep 1; done;
                if [[ "$HOSTNAME" != "rabbitmq-0" && -z "$(rabbitmqctl cluster_status | grep rabbitmq-0)" ]]; then
                  rabbitmqctl stop_app;
                  rabbitmqctl join_cluster rabbit@rabbitmq-0;
                  rabbitmqctl start_app;
                fi;
                rabbitmqctl set_policy ha-all "." '{"ha-mode":"exactly","ha-params":3,"ha-sync-mode":"automatic"}'
        env:
        - name: RABBITMQ_ERLANG_COOKIE
          valueFrom:
            secretKeyRef:
              name: rabbitmq-config
              key: erlang-cookie
        ports:
        - containerPort: 5672
          name: amqp
        - containerPort: 25672
          name: rabbitmq-dist
        volumeMounts:
        - name: rabbitmq
          mountPath: /var/lib/rabbitmq
  volumeClaimTemplates:
  - metadata:
      name: rabbitmq
      annotations:
        volume.alpha.kubernetes.io/storage-class: default
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 1Gi # make this bigger in production


So now that you have created the rabbitmq.yaml locally, let’s create a generic secret for the Erlang Cookie by running the command (use a better secret though..):
kubectl create secret generic rabbitmq-config --from-literal=erlang-cookie=c-is-for-cookie-thats-good-enough-for-me

And finally create the RabbitMQ Kubernetes Services and Pods:
kubectl create -f "[My-Local-Path]\rabbitmq.yaml"

Verify all Parts

Run the Kubernetes Dashboard locally, by running the following command (the port is optional; the default is 8001):
kubectl proxy --port=8080

And if you open your browser to the URL http://127.0.0.1:8080/ui you should see the Kubernetes Dashboard, like this:

Here you will see all details about the Kubernetes Cluster, Pods, Services, etc.
And if you click on the rabbitmq-management IP Hyperlink displayed (use guest as both username and password), you will access the RabbitMQ Management dashboard, showing four nodes:


You can see in the Pods section of the Kubernetes Dashboard the corresponding four Pods:


And to complete the picture, if you look at your Azure Resource Group again: 



You will notice three new resources added: a Load Balancer for RabbitMQ, and two new Public IP addresses, to expose RabbitMQ and its Management Dashboard.

This is it for this long post, enjoy your Containers!

Monday, September 4, 2017

Azure App Service Deployment Slots








Just a quick one today: Deployment Slots are one way to manage your Azure App Service deployments, giving you the option to do a "hot swap" of a live production application with little to no downtime.

At the same time they allow you to easily manage your app versioning, by making sure that you always have a "Last Known Good" version of your app, a few clicks away from being rolled back on production when something goes wrong in your release.

You can set up VSTS to configure your Staging Slot before deploying to it, as mentioned in a previous post of mine.

Here is a quick overview of the common Deployment Slots usage:


Monday, August 21, 2017

Devil is in the (RedirectUri) detail

When using Azure Active Directory (AAD) as Identity Provider for your Azure App Services, you will set up App Registrations to tell AAD how to handle your app authentication.

One important bit of this is the ReplyURL (RedirectUri) that you need to specify for AAD to redirect the user back to your app after valid authentication.

The usual flow is:
  1. User requests your app URL (ie: https://myappservice.azurewebsites.net)
  2. User is redirected to the AAD Login page (https://login.microsoftonline.com/.../oauth2/authorize)
  3. User inserts valid credentials
  4. User is redirected back to your defined RedirectUri as a logged on User (https://myappservice.azurewebsites.net)



For this to happen, you need to specify these AppSettings in the web.config file:

<add key="ida:PostLogoutRedirectUri" value="https://myappservice.azurewebsites.net" />
<add key="ida:RedirectUri" value="https://myappservice.azurewebsites.net" />

And in the Startup.Auth.cs file (here I am using a .NET MVC Web App and OpenID Connect Authentication):

app.UseOpenIdConnectAuthentication(
    new OpenIdConnectAuthenticationOptions
    {
        RedirectUri = ConfigurationManager.AppSettings["ida:RedirectUri"],
        PostLogoutRedirectUri = ConfigurationManager.AppSettings["ida:PostLogoutRedirectUri"],
        // ...remaining options...
    });
Now, let’s assume that, as a security requirement in your organization, your App Service must reside behind an F5 LoadBalancer, that all traffic must go through it, and that Mutual Client Authentication must also be in place between the F5 and your App Service.

For this scenario to occur, you need to set up a few parts in the Azure App Service and AAD (the F5 setup and DNS entries are out of scope for this post):

  • Enable Client Certificates in the Resource Explorer of the App Service: "clientCertEnabled": true (this will make sure that the App Service expects an SSL Certificate for each request).
  • Define a Custom Domain on the App Service as specified in the SSL Client Certificate, ie: https://myappdomain.corporateurl.com
  • Upload the valid SSL Client Certificate to the App Service, and create an SSL binding to the Custom Domain
  • Define a Custom Domain on the App Service as the F5 public endpoint for this web app, ie: https://myappsf5domain.corporateurl.com
  • Add the new App Service Custom Domain URLs to the AAD App Registration as ReplyURLs: https://myappdomain.corporateurl.com and https://myappsf5domain.corporateurl.com
  • Implement custom Certificate Validation code inheriting from FilterAttribute and System.Web.Mvc.IAuthorizationFilter (ie: public class ClientCertificateValidatorFilter : FilterAttribute, IAuthorizationFilter)
  • Add the corresponding attribute to the desired Controller classes (or Actions) (ie: [ClientCertificateValidatorFilter])
  • Since the security requirement is that all traffic must go through the F5, specify the F5 URL as RedirectUri and PostLogoutRedirectUri in the web.config file:

<add key="ida:PostLogoutRedirectUri" value="https://myappsf5domain.corporateurl.com" />
<add key="ida:RedirectUri" value="https://myappsf5domain.corporateurl.com" />

So now the “Happy Path” flow is:
  1. User requests your app URL to the F5 (https://myappsf5domain.corporateurl.com) 
  2. Since the F5 provides the SSL Certificate in the HTTP Header, no popup shows in the browser
  3. User is redirected to the AAD Login page (https://login.microsoftonline.com/.../oauth2/authorize)
  4. User inserts valid credentials
  5. User is redirected back to your app as a logged on User, going again through the F5 (https://myappsf5domain.corporateurl.com)
  6. The Certificate validation code kicks in and validates the right SSL Certificate provided by the F5 (keep in mind that the Authorize filters will execute only AFTER authentication, hence User login)


So far so good, right?

Now, let’s test some less happy flow.

We know that all traffic must go through the F5, so let’s try to call the Azure URL (https://myappservice.azurewebsites.net) directly.
  1. User requests the Azure URL (https://myappservice.azurewebsites.net)
  2. User is prompted for a SSL Certificate
  3. User cannot provide a SSL Certificate, so hits Cancel on the certificate popup
  4. User receives a 403 – Forbidden response (correctly)

Again, so far so good.

But what happens if the User provides the wrong SSL Certificate, instead of canceling the popup?
  1. User requests the Azure URL (https://myappservice.azurewebsites.net)
  2. User is prompted for a SSL Certificate
  3. User provides any (wrong) SSL Certificate
  4. User is redirected to the AAD Login page (https://login.microsoftonline.com/.../oauth2/authorize)
  5. User inserts valid credentials
  6. User is redirected back to your app as a logged on User (incorrectly)
  7. The Certificate validation code kicks in and validates the right SSL Certificate provided by the F5

A few things will go wrong in this scenario: 
  • Since the defined RedirectUri is the F5 URL, the User is now redirected to this endpoint (https://myappsf5domain.corporateurl.com); however the request was coming from the Azure URL (https://myappservice.azurewebsites.net), so it will result in an AuthenticationFailed error, and the User will land on the Error page of your app, showing the message: “IDX10311: RequireNonce is 'true' (default) but validationContext.Nonce is null. A nonce cannot be validated. If you don't need to check the nonce, set OpenIdConnectProtocolValidator.RequireNonce to 'false'.”
  • Even though the User provided a wrong SSL Certificate, since he’s now going through the F5 after authenticating, the right SSL Certificate is provided and the validation succeeds.
Let’s leave aside the fact that Client Certificates can only be validated AFTER User authentication (login), which IMHO is a big security design flaw... but anyway.

So, after several hours spent on the phone with Microsoft Support (thanks to the Premier level of support, this was possible in the first place), the solution to this scenario has been identified in a small code snippet to be added to the OpenIdAuthenticationOptions in Startup.Auth.cs.

Within the Notifications = new OpenIdConnectAuthenticationNotifications section, let’s add the following code:
RedirectToIdentityProvider = async n =>
{
    n.ProtocolMessage.RedirectUri = "https://" + n.OwinContext.Request.Uri.Host + "/";
    n.ProtocolMessage.PostLogoutRedirectUri = "https://" + n.OwinContext.Request.Uri.Host + "/";
},


What this code does is simply force the RedirectUri to whatever URL the request originally came from, no matter what has been defined in the AppSettings (either in the web.config file, or in some Application Setting in one of the many Deployment Slots that your app might have... and good luck there).


So now the flow in this scenario becomes: 
  1. User requests the Azure URL (https://myappservice.azurewebsites.net)
  2. User is prompted for a SSL Certificate
  3. User provides any (wrong) SSL Certificate
  4. User is redirected to the AAD Login page (https://login.microsoftonline.com/.../oauth2/authorize)
  5. User inserts valid credentials
  6. User is redirected back to your app on the specific requested URL (https://myappservice.azurewebsites.net)
  7. The Certificate validation code kicks in and validates the wrong SSL Certificate provided by the User, and returns a 403 – Forbidden response
  8. User receives a 403 – Forbidden response (correctly)


Phew, that was easy, wasn’t it?
(:

Now there’s only one last bit remaining: remember we said that all traffic must go through the F5 load balancer?

So what happens now if the User can somehow provide a valid SSL Certificate, and also figures out the Azure URL and calls it directly?

Surprise surprise... he will log onto the web app, bypassing the F5 altogether!

So for this you will need to implement IP Whitelisting, allowing ONLY the incoming F5 traffic to reach the App Service.
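At the time of writing, this can be configured in the web.config of the App Service through the IIS ipSecurity section; a sketch, assuming a hypothetical F5 egress address (replace it with your own):

```xml
<system.webServer>
  <security>
    <!-- Deny everything that is not explicitly whitelisted -->
    <ipSecurity allowUnlisted="false">
      <!-- Hypothetical F5 egress IP: replace with your own -->
      <add allowed="true" ipAddress="203.0.113.10" subnetMask="255.255.255.255" />
    </ipSecurity>
  </security>
</system.webServer>
```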

Here is a sort of cheat-sheet of the parts needed in this post:

Estimated reading time: ~ 8 minutes. Key Takeaways Fine-tuning enhances the performance of pre-trained AI models for specific tasks. Both Te...