MVC Razor Views and automated Azure deployments

The other day, I decided that I’d publish a work-in-progress website to an Azure Website.  This was one of the free websites included in the package that Azure subscribers can use.  The website was a plain old vanilla ASP.NET MVC site.  Nothing fancy, just some models, some controllers, some infrastructure code and, of course, some views.

I was deploying this to Azure via a direct connection to a private Git repository within my BitBucket account.  This way, a simple “git commit” and “git push” would have, in a matter of seconds, my latest changes available to see on a real “on-the-internet” hosted site, giving me a far better idea of how the site would look and feel than simply running it on localhost.

An issue I almost instantly came up against was that, whenever I’d make a tiny change to just a view file – i.e. no code changes, just tweaks to the HTML markup in a .cshtml Razor view file – and commit and push that change, the automated deployment process would kick in and the site would be successfully deployed to Azure; however, the deployed Razor view would not reflect my change.

What on earth was going on?   I could run the site locally and see the layout change just fine; however, the version deployed to Azure was the older one, prior to the change.   My first workaround was to manually FTP the changed .cshtml files from my local machine to the relevant place in the Azure website, which did work, but was an inelegant solution.  I needed to find out why the changed Razor views were not being reflected in the Azure deployments.

After some research, I found the answer.

Turns out that some of my view files (the .cshtml files themselves) had the “Build Action” property set to “None”.  In order for your view to be correctly deployed when using automated Azure deployments, the Build Action property must be set to “Content”.
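
For reference, the Build Action shows up in the project file (the .csproj) as the name of the element that includes the view.  A view that deploys correctly is included as Content, whereas my problem views were included as None – something along these lines (the file paths here are just examples):

<ItemGroup>
  <!-- This view will be deployed: -->
  <Content Include="Views\Home\Index.cshtml" />
  <!-- This view will NOT be deployed: -->
  <None Include="Views\Home\About.cshtml" />
</ItemGroup>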

Now, quite how the Build Action property had come to be set that way, I have no idea.  Some of the first views I’d added to the project had the Build Action set correctly; however, some of the newer views I’d added were set incorrectly.  I had not explicitly changed any of these properties on any files, so how some came to be set to Content and others to None was a mystery.  And then I had an idea…

I had been creating Views in a couple of different ways.  It turns out that when I created the View files using the Visual Studio context menu options, the files were created with the correct default Build Action of Content.  However, when I created the View files using ReSharper’s QuickFix commands – hitting Alt + Enter on the line return View(); to show the ReSharper menu that lets you create a new Razor view with or without a layout – the View file was created with a default Build Action of None.

Bizarrely, I could not reproduce this behaviour in a brand new ASP.NET MVC project (i.e. there, the ReSharper QuickFix command correctly created a View with a default Build Action of Content!).

I can only assume that this is some strange intermittent issue with ReSharper, although it’s quite probably caused by some specific difference between the projects that I’ve tested this on; thus far, I have no idea what that difference may be…   I’ll keep looking and, if I find the culprit, I’ll post an update here.

Until then, I’ll remain vigilant and always remember to double-check the Build Action property and ensure it’s set to Content for all newly created Razor Views.

OWIN-Hosted Web API in an MVC Project – Mixing Token-based auth with FormsAuth

One tricky little issue that I recently came across in a new codebase was having to extend an API written using ASP.NET Web API 2.2, which was entirely contained within an ASP.NET MVC project.  The Web API was configured to use OWIN, the abstraction layer that helps to remove dependencies upon the underlying IIS host, whilst the MVC project was configured to use System.Web and communicate with IIS directly.

The intention was to use token-based HTTP Basic authentication with the Web API controllers and actions, whilst using ASP.NET Membership (Forms Authentication) with the MVC controllers and actions.  This is fairly easy to hook up initially, and all authentication within the Web API controllers was implemented via a customized AuthorizationFilterAttribute:

[AttributeUsage(AttributeTargets.Class | AttributeTargets.Method, AllowMultiple = false)]
public class TokenAuthorize : AuthorizationFilterAttribute
{
    bool _active = true;

    public TokenAuthorize() { }

    /// <summary>
    /// Overloaded constructor to allow explicit disabling of this filter's behavior.
    /// Pass false to disable (same as no filter but declarative)
    /// </summary>
    /// <param name="active"></param>
    public TokenAuthorize(bool active)
    {
        _active = active;
    }

    /// <summary>
    /// Override of the Web API filter's OnAuthorization method to handle the Basic Auth check.
    /// </summary>
    /// <param name="actionContext"></param>
    public override void OnAuthorization(System.Web.Http.Controllers.HttpActionContext actionContext)
    {
        // Quit out here if the filter has been invoked with active being set to false.
        if (!_active) return;

        var authHeader = actionContext.Request.Headers.Authorization;
        if (authHeader == null || !IsTokenValid(authHeader.Parameter))
        {
            // No authorization header has been supplied, therefore we are definitely not authorized
            // so return a 401 unauthorized result.
            actionContext.Response = actionContext.ControllerContext.Request.CreateErrorResponse(HttpStatusCode.Unauthorized, Constants.APIToken.MissingOrInvalidTokenErrorMessage);
        }
    }

    private bool IsTokenValid(string parameter)
    {
        // Perform basic token checking against a value
        // stored in a database table.
        return true;
    }
}
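
The real token check is elided in the snippet above.  Purely as an illustration – the connection string name and the ApiTokens table below are hypothetical, not from the actual codebase – such a check might look something like this:

private bool IsTokenValid(string parameter)
{
    // Hypothetical example only: look the supplied token up in a database table.
    // (Requires System.Configuration and System.Data.SqlClient.)
    if (string.IsNullOrWhiteSpace(parameter)) return false;

    var connectionString = ConfigurationManager.ConnectionStrings["DefaultConnection"].ConnectionString;
    using (var connection = new SqlConnection(connectionString))
    using (var command = new SqlCommand("SELECT COUNT(*) FROM ApiTokens WHERE Token = @token", connection))
    {
        command.Parameters.AddWithValue("@token", parameter);
        connection.Open();
        return (int)command.ExecuteScalar() > 0;
    }
}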

This is hooked up onto a Web API controller quite easily with an attribute, applied at either the class or action method level:

[RoutePrefix("api/content")]
[TokenAuthorize]
public class ContentController : ApiController
{
    [Route("v1/{contentId}")]
    public IHttpActionResult GetContent_v1(int contentId)
    {
        var content = GetIContentFromContentId(contentId);
        return Ok(content);
    }
}
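
A client then needs to supply the token in the Authorization header of each request.  As a rough sketch (the host, port and token value are placeholders, and redirects are deliberately not followed so that the raw status code is visible):

// Inside an async method; requires System.Net.Http and System.Net.Http.Headers.
using (var handler = new HttpClientHandler { AllowAutoRedirect = false })
using (var client = new HttpClient(handler))
{
    // The TokenAuthorize filter above simply reads the Authorization header's parameter as the token.
    client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Basic", "my-api-token");
    var response = await client.GetAsync("http://localhost:12345/api/content/v1/1");
    Console.WriteLine((int)response.StatusCode);   // 200 if the token is valid; 401 if it's missing or invalid.
}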

Now, the problem with this becomes apparent when a client hits an API endpoint without the relevant authentication header in their HTTP request.  Debugging through the code above shows the OnAuthorization method being correctly called and the Response being correctly set to an HTTP status code of 401 (Unauthorized); however, watching the request and response via a web debugging tool such as Fiddler shows that we’re actually getting back a 302 response – the HTTP status code for a redirect.  The client then follows this redirect with another request/response cycle, this time getting back a 200 (OK) status with a payload of our MVC login page HTML.  What’s going on?

Well, despite correctly setting our response as a 401 Unauthorized, because we’re running the Web API controllers from within an MVC project which has Forms Authentication enabled, our response is being captured higher up the pipeline by ASP.NET, where Forms Authentication is applied.  Forms Authentication traps any 401 Unauthorized response and changes it into a 302 redirect that sends the user/client back to the login page.  This works well for MVC web pages, where an unauthenticated user attempting to navigate directly to a URL that requires authentication is redirected to a login page, allowed to log in, and then redirected back to the originally requested resource.  Unfortunately, this doesn’t work so well for a Web API endpoint, where we actually want the correct 401 Unauthorized response to be sent back to the client without any redirection.
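
(For context, this is the standard Forms Authentication configuration that such an MVC project typically carries in its web.config – the loginUrl shown here is the usual template default rather than anything specific to this project:)

<system.web>
  <authentication mode="Forms">
    <forms loginUrl="~/Account/Login" timeout="2880" />
  </authentication>
</system.web>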

Phil Haack wrote a blog post about this very issue, and the Update section at the top of that post shows that the ASP.NET team implemented a fix to prevent it.  It’s the SuppressFormsAuthenticationRedirect property on the HttpResponse object!

So, all is good, yes?   We simply set this property to true before returning our 401 Unauthorized response and we’re done, right?

[AttributeUsage(AttributeTargets.Class | AttributeTargets.Method, AllowMultiple = false)]
public class TokenAuthorize : AuthorizationFilterAttribute
{
    // snip...
    public override void OnAuthorization(System.Web.Http.Controllers.HttpActionContext actionContext)
    {
        var authHeader = actionContext.Request.Headers.Authorization;
        if (authHeader == null || !IsTokenValid(authHeader.Parameter))
        {
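            // Attempt to suppress the Forms Authentication redirect before returning our 401 response.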
            HttpResponse.SuppressFormsAuthenticationRedirect = true;
            actionContext.Response = actionContext.ControllerContext.Request.CreateErrorResponse(HttpStatusCode.Unauthorized, Constants.APIToken.MissingOrInvalidTokenErrorMessage);
        }
    }
}

Well, no.

You see, the SuppressFormsAuthenticationRedirect property hangs off the HttpResponse object.  The HttpResponse object is part of that System.Web assembly and is intimately tied into the underlying ASP.NET / IIS pipeline.  Our Web API controllers are all “hosted” on top of OWIN, which very specifically divorces all of our code from the underlying server that hosts the Web API.  The actionContext.Response above isn’t an HttpResponse object; it’s an HttpResponseMessage object.  The HttpResponseMessage object is used by OWIN precisely because it’s divorced from the underlying HttpContext (which is inherently tied into the underlying hosting platform – IIS) and, as such, it doesn’t contain, nor does it have access to, an HttpResponse object or the required SuppressFormsAuthenticationRedirect property that we so desperately need!

There are a number of attempted “workarounds” that you could try in order to get access to the HttpContext object from within your OWIN-compliant Web API controller code, such as this one from Hongmei at Microsoft:

HttpContext context;
Request.Properties.TryGetValue<HttpContext>("MS_HttpContext", out context);

Apart from this not working for me, it seems quite nasty and “hacky” at best, relying upon a hard-coded string that references a request “property” that just might contain the good old HttpContext.  There is also some very interesting and useful information in a Stack Overflow post that gets closer to the problem, although the suggestion to configure the IAppBuilder to use Cookie Authentication and then perform your own logic in the OnApplyRedirect handler will only work in specific situations – namely when you’re using the newer ASP.NET Identity, which itself, like OWIN, was designed to be disconnected from the underlying System.Web / IIS host.  Unfortunately, in my case, the MVC pages were still using the older ASP.NET Membership system, rather than the newer ASP.NET Identity.
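
For reference, that Stack Overflow suggestion amounts to something along these lines – a sketch only, using the Microsoft.Owin.Security.Cookies middleware (plus ASP.NET Identity’s DefaultAuthenticationTypes) with an illustrative “/api” path check – and, as noted, it only really applies if you’re using ASP.NET Identity:

app.UseCookieAuthentication(new CookieAuthenticationOptions
{
    AuthenticationType = DefaultAuthenticationTypes.ApplicationCookie,
    LoginPath = new PathString("/Account/Login"),
    Provider = new CookieAuthenticationProvider
    {
        // Only apply the login page redirect to non-API requests; API requests keep their 401.
        OnApplyRedirect = ctx =>
        {
            if (!ctx.Request.Uri.LocalPath.StartsWith("/api", StringComparison.OrdinalIgnoreCase))
            {
                ctx.Response.Redirect(ctx.RedirectUri);
            }
        }
    }
});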

So, how do we get around this?

Well, the answer lies within the setup and configuration of OWIN itself.  OWIN allows you to configure and plug in specific “middleware” within the OWIN pipeline, allowing all requests and responses flowing through the pipeline to be inspected and modified by the middleware components.  It was this middleware that was being configured in the Stack Overflow suggestion of using app.UseCookieAuthentication.  In our case, however, we simply want to inject some arbitrary code into the OWIN pipeline to be executed on every OWIN request/response cycle.

Since all of our code to set up OWIN for the Web API is running within an MVC project, we do have access to the System.Web assembly’s objects.  Therefore, the fix becomes the simple case of ensuring that our OWIN configuration contains a call to a piece of middleware – a simple delegate – that merely sets the required SuppressFormsAuthenticationRedirect property to true for every OWIN request/response:

// Configure WebAPI / OWIN to suppress the Forms Authentication redirect when we send a 401 Unauthorized response
// back from a web API.  As we're hosting our Web API inside an MVC project with Forms Auth enabled, without this,
// the 401 Response would be captured by the Forms Auth processing and changed into a 302 redirect with a payload
// for the Login Page.  This code implements some OWIN middleware that explicitly prevents that from happening.
app.Use((context, next) =>
{
    HttpContext.Current.Response.SuppressFormsAuthenticationRedirect = true;
    return next.Invoke();
});

And that’s it!

Because this code is executed from the Startup class that is bootstrapped when the application starts, we can reference the HttpContext object and ensure that OWIN executes our middleware on every request.  The middleware runs in the context of the wider application and thus has access to the HttpContext object of the MVC project’s hosting environment, which allows us to set the all-important SuppressFormsAuthenticationRedirect property!

Here’s the complete Startup.cs class for reference:

[assembly: OwinStartup("Startup", typeof(Startup))]
namespace SampleProject
{
    public class Startup
    {
        public void Configuration(IAppBuilder app)
        {
            ConfigureWebAPI(app);
        }
        
        private void ConfigureWebAPI(IAppBuilder app)
        {
            var config = new HttpConfiguration();

            // Snip of other configuration.

            
            // Configure WebAPI / OWIN to suppress the Forms Authentication redirect when we send a 401 Unauthorized response
            // back from a web API.  As we're hosting our Web API inside an MVC project with Forms Auth enabled, without this,
            // the 401 Response would be captured by the Forms Auth processing and changed into a 302 redirect with a payload
            // for the Login Page.  This code implements some OWIN middleware that explicitly prevents that from happening.
            app.Use((context, next) =>
            {
                HttpContext.Current.Response.SuppressFormsAuthenticationRedirect = true;
                return next.Invoke();
            });

            app.UseWebApi(config);
        }
    }
}

SSH with PuTTY, Pageant and PLink from the Windows Command Line

I’ve recently started using Git for my revision control needs, switching from Mercurial, which I’d previously used for a number of years.  I had mostly used Mercurial from a GUI, namely TortoiseHg, only occasionally dropping to the command line for ad-hoc Mercurial commands.

In switching to Git, I initially moved to an alternative GUI tool, namely SourceTree; however, I very quickly decided that, this time around, I wanted to use the command line as my main interface to the revision control tool.  This was a bold move, as Git’s syntax was something that had always put me off and made me heavily favour Mercurial, with its somewhat nicer command line syntax and its general habit of “playing better” with Windows.

So, I dived straight in and tried to get my GitHub account all set up on a new PC, accessing Git via the brilliant ConEmu terminal and using SSH for all authentication with GitHub itself.  As this is Windows, the SSH functionality was provided by PuTTY, and specifically by the PLink and Pageant utilities within the PuTTY suite.

I already had an SSH Key generated and registered with GitHub, and the private key was loaded into Pageant, which was running in the background on Windows.  The first little stumbling block was to get the command line git tool to realise it had to use the PuTTY tools in order to retrieve the SSH Key that was to be used for authentication.

This required adding an environment variable called GIT_SSH that points to the path of the PuTTY plink.exe program.  Adding this tells Git that it must use PLink, which acts as a kind of “gateway” between the program that needs the SSH authentication and the other program – in this case PuTTY’s Pageant – that is providing the SSH Key.  This is a required step, and is not the default when using Git on Windows, as Git is really far more aligned to the Unix/Linux way of doing things, where SSH is most frequently provided by OpenSSH.
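
If you’d rather set the variable from a command prompt than via the System Properties dialog, something like the following will do it (adjust the path to wherever your copy of plink.exe actually lives):

setx GIT_SSH "C:\Program Files (x86)\PuTTY\plink.exe"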

After having set up this environment variable, I could see that Git was attempting to use the plink.exe program to retrieve the SSH key loaded into Pageant in order to authenticate with GitHub; however, there was a problem.  Although I definitely had the correct SSH Key registered with GitHub, and I definitely had the correct SSH Key loaded in Pageant (and Pageant was definitely running in the background!), I was continually getting the following error:

[Image: plink error output – the server’s host key is not cached in the registry]

The clue to what’s wrong is there in the error text – the server that we’re trying to connect to, in this case github.com, does not have its RSA key “installed” on our local PC.  I say “installed” as the PuTTY tools cache remote servers’ RSA keys in the Windows registry.  (If you’re using OpenSSH, either on Windows or, more likely, on Unix/Linux, they get cached in a completely different place.)

Although the error indicates the problem, unfortunately it gives no indication of how to correct it.

The answer lies with the plink.exe program.  We have to issue a special one-off plink command to have it connect to the remote server, retrieve that server’s RSA key and cache (or “install”) the key in the registry.  This allows subsequent usage of PLink as a “gateway” (i.e. when called from the git command line tool) to authenticate the server machine first, before it even attempts to authenticate with our own SSH key.

The plink command is simply:

plink.exe -v -agent git@github.com

or

plink.exe -v -agent git@bitbucket.org

(the git@github.com or git@bitbucket.org parts of the command specify the SSH user name – always git – and the remote host to authenticate with, for the GitHub and BitBucket servers respectively).

The -v switch simply means verbose output and can be safely omitted.  The real magic is in the -agent switch, which instructs plink to use Pageant for the key:

[Image: plink verbose output, prompting whether to store the server’s host key in the cache]

Now we get the opportunity to actually “store” (i.e. cache or install) the key.  If we say yes, this adds the key to our Windows Registry:

[Image: the server’s host key now cached in the Windows registry]

Once we’ve completed this step, we can return to our command window and attempt our usage of git against our remote repository on either GitHub or BitBucket once more.  This time, we should have much more success:

[Image: successful git authentication with the remote repository via plink and Pageant]

And now everything works as it should!