LESS + CoffeeScript for ASP.NET = LessCoffee

As documented in recent posts, I’ve been tinkering with getting the LESS and CoffeeScript compilers running on Windows Script Host. I’ve now got round to wrapping these up as ASP.NET HTTP handlers so you can easily use them in ASP.NET-based websites. You simply reference the *.less and *.coffee files and they get served up as CSS and JavaScript directly. For example:

<link href="content/style.less" rel="stylesheet">
<script src="content/site.coffee"></script>

No need to install add-ins into Visual Studio or add build steps to your project. The main downside is that it won’t run on non-Windows platforms under Mono (although I’m tempted to adapt it to use Mozilla’s SpiderMonkey JavaScript Shell).

If you’re running Visual Studio 2010 then simply use the LessCoffee NuGet package.

PM> Install-Package LessCoffee

If you’re using Visual Studio 2008 you’ll need to follow these manual steps:

  • Copy LessCoffee.dll to your web application’s /bin directory
  • Add the following entries to your web.config file:
    <system.web>
        <httpHandlers>
            <add path="*.coffee" type="DotSmart.CoffeeScriptHandler, LessCoffee" verb="*" validate="false"/>
            <add path="*.less" type="DotSmart.LessCssHandler, LessCoffee" verb="*" validate="false"/>
        </httpHandlers>
    </system.web>

    <!-- IIS 7 -->
    <system.webServer>
        <validation validateIntegratedModeConfiguration="false"/>
        <handlers>
            <add path="*.coffee" type="DotSmart.CoffeeScriptHandler, LessCoffee" verb="*" name="DotSmart.CoffeeScriptHandler"/>
            <add path="*.less" type="DotSmart.LessCssHandler, LessCoffee" verb="*" name="DotSmart.LessCssHandler"/>
        </handlers>
    </system.webServer>

If you’re using Windows Server 2003/IIS 6 then you’ll need to map the *.less and *.coffee file extensions to aspnet_isapi.dll.

The source is on GitHub, obv: https://github.com/duncansmart/LessCoffee

The simplest way to compile CoffeeScript on Windows

tl;dr: You can compile CoffeeScript on Windows with zero third-party dependencies.

A while back I did a post on running the LESS.js compiler on Windows using the venerable and ubiquitous Windows Script Host (WSH: providing JavaScript console scripting since Windows 98… when John Resig was still in 8th grade). At the time I tried something similar to generate JavaScript from the wonderful CoffeeScript language, but I couldn’t get it working due to what I assumed were shortcomings in WSH’s JScript engine. There are plenty of other options out there for compiling CoffeeScript, but they all incur various third-party dependencies, as detailed in this StackOverflow question.

But on a whim the other day I revisited it and thankfully now it does work on plain old WSH without any coaxing (not sure what changed, or what I was doing wrong last time). I took the full browser-based coffee-script.js and wrapped it with a simple *.wsf and batch file to handle command-line options.

Download

It’s on github, natch: https://github.com/duncansmart/coffeescript-windows

Usage

To use it, invoke coffee.cmd like so:

coffee input.coffee output.js

You can also pipe to and from it via stdin/stdout if you’re so inclined. Errors are written to stderr.

In the test directory there’s a version of the standard CoffeeScript tests which can be kicked off using test.cmd. Note that the test only attempts to compile the standard set of *.coffee test files; it doesn’t execute them.

Hope it helps; comments appreciated!

DDDSW Hecklegate

I went to the free DDDSW developer conference on Saturday in Bristol which was excellent. Kudos to all the organisers and speakers and sponsors who made it happen.

One of the sessions I attended stood out, though, because the speaker, although apparently experienced, had a pretty tough time, especially with some of the comments submitted to the audience-feedback web app attendees were using at the conference. Actually, I found myself agreeing with many of the sentiments in those comments (as did the person I sat next to) and felt the session didn’t go well, even though it contained some great content. Here’s my take on it.

Starting a session by saying how tired you are and how you haven’t slept for days is effectively saying: “sorry, this might be a bit shit”. You may feel justified in saying this because you are delivering the session for no fee and indeed may have incurred substantial expense in traveling to the conference. Nobody cares. It rubs your audience up the wrong way because they also may have incurred considerable expense in getting there too. In fact it’s their free time you’re saying you may be about to waste. They may start to feel their time would have been better spent in another session. Also, consider that your slot at the conference may have been at the expense of someone else, maybe a newbie who would have loved their first opportunity in the spotlight.

Doing too many “hands up if you…” audience questions can get tedious quickly. Indeed, don’t continually ask people to put their hands up if you’re going to say they’re wrong. It might be OK once, but more than that and people are going to feel uncomfortable and antagonised.

If someone walks out, ignore it. Making a point of it makes you look petty. Just maybe they actually had valid reasons for leaving, or indeed, maybe they weren’t enjoying the session. Just let it go.

Finally, there’s a distinction between being “passionate and opinionated” and coming across as a blowhard.

First steps with IronJS 0.2

With the release of IronJS 0.2, the code below is the result of a 30-minute play I had this morning. It shows how easy it is to embed a fully .NET JavaScript runtime in your application simply by referencing IronJS.dll.

It’s changed quite a bit from prior versions, and I think you’ll see it has become much easier to host since Dan Newcombe’s experiments last year.

//reference IronJS.dll
using System;
using System.IO;

class IronJsDoodles
{
    static void Simple()
    {
        var context = new IronJS.Hosting.CSharp.Context();
        object result = context.Execute("1 + 2;");

        Console.WriteLine("{0} ({1})", result, result.GetType());
        // "3 (System.Double)"
    }

    static void InteractingWithGlobal()
    {
        var context = new IronJS.Hosting.CSharp.Context();

        context.SetGlobal("a", 1d);
        context.SetGlobal("b", 2d);
        context.Execute("foo = a + b;");

        double foo = context.GetGlobalAs<double>("foo");

        Console.WriteLine(foo);
        // "3"
    }

    static void AddingHostFunctions()
    {
        var context = new IronJS.Hosting.CSharp.Context();

        // Effectively the same as context.CreatePrintFunction() 🙂
        var print = IronJS.Native.Utils.createHostFunction<Action<string>>(context.Environment,
            delegate(string str)
            {
                Console.WriteLine(str);
            });
        context.SetGlobal("print", print);

        context.Execute("print('Hello IronJS!')");
    }
}

Hope it helps you get started.

SOLVED: MSDeploy error “(400) Bad Request”

While working from home I was trying to use MSDeploy (aka Web Deploy, or the Publish Web command in Visual Studio 2010) to update an internal site. Whilst this would work perfectly when I was physically in the office, when working from home via the VPN it would fail with the following error:

Remote agent (URL http://myserver.example.com/MSDEPLOYAGENTSERVICE) could not be contacted.  Make sure the remote agent service is installed and started on the target computer.
An unsupported response was received. The response header 'MSDeploy.Response' was '' but 'v1' was expected.
The remote server returned an error: (400) Bad Request.

To see what was going on I started Fiddler and tried the publish again. (One crucial thing I had to do for Fiddler to capture traffic when connected via the VPN was to fully qualify the machine name: instead of http://myserver, use http://myserver.your-corp.net as the service URL; otherwise it didn’t capture the traffic.)

This is what the exchange looked like:

POST http://myserver.example.com/MSDEPLOYAGENTSERVICE HTTP/1.1
MSDeploy.VersionMin: 7.1.600.0
MSDeploy.VersionMax: 7.1.1042.1
MSDeploy.RequestUICulture: en-US
MSDeploy.RequestCulture: en-GB
Version: 8.0.0.0
MSDeploy.Method: Sync
MSDeploy.RequestId: fde03509-b23e-4759-9353-e8dbf19a2293
Content-Type: application/msdeploy
MSDeploy.ProviderOptions: H4sIAAAAAAAEAO29B2AcSZYlJi9tynt...
MSDeploy.BaseOptions: H4sIAAAAAAAEAO29B2AcSZYlJi9tynt/SvV...
MSDeploy.SyncOptions: H4sIAAAAAAAEAO29B2AcSZYlJi9tynt/SvV...
Host: myserver.example.com
Transfer-Encoding: chunked
Expect: 100-continue
...

And in response:

HTTP/1.1 400 Bad Request (The HTTP request includes a non-supported header. Contact your ISA Server administrator.)
Via: 1.1 IBISA2
Connection: Keep-Alive
Proxy-Connection: Keep-Alive
Pragma: no-cache
Cache-Control: no-cache
Content-Type: text/html
...

There in the clear was the crucial error information that MSDeploy was failing to relay: those whacky MSDeploy HTTP Headers were being blocked by our ISA Server. (Note to the developers of MSDeploy: showing this information in debug or verbose modes would be very useful!)

After specifying an access rule on the ISA server to not filter proxied requests to the server in question based on HTTP headers, it all started working again.

Generating better default DisplayNames from Models in ASP.NET MVC using ModelMetadataProvider

The scaffolding in MVC makes it easy to knock up CRUD applications without too much difficulty. Typically you end up with something like this – note the raw PascalCase field names, taken straight from the model, in the labels generated by Html.LabelFor:

[screenshot: camel-case labels before]

So now the obvious thing is to add DisplayName attribute metadata to your model, for example:

[DisplayName("Assigned To")]
public int AssignedToId { get; set; }

[DisplayName("Customer Name")]
public string CustomerName { get; set; }

Meh, donkey work. In most cases it just needs better default labels in the absence of explicit DisplayName annotations. Nine times out of ten field names such as AssignedToId and CustomerName would be simply expanded to Assigned To and Customer Name. (Of course, they don’t do this out of the box because these simple rules wouldn’t hold true for all languages.)

In MVC you can hook into the metadata discovery/generation process by implementing a ModelMetadataProvider, the default of which is the DataAnnotationsModelMetadataProvider. So all I did was inherit from the latter, override the GetMetadataForProperty method and, if no DisplayName has been specified on the model, create one based on the model’s camel-cased property name.

class MyFabulousModelMetadataProvider : DataAnnotationsModelMetadataProvider
{
   // Uppercase followed by lowercase, but not at an existing word boundary (e.g. the start)
   Regex _camelCaseRegex = new Regex(@"\B\p{Lu}\p{Ll}", RegexOptions.Compiled);

   // Creates a nice DisplayName from the model’s property name if one hasn't been specified
   protected override ModelMetadata GetMetadataForProperty(
      Func<object> modelAccessor,
      Type containerType,
      PropertyDescriptor propertyDescriptor)
   {
      ModelMetadata metadata = base.GetMetadataForProperty(modelAccessor, containerType, propertyDescriptor);

      if (metadata.DisplayName == null)
         metadata.DisplayName = displayNameFromCamelCase(metadata.GetDisplayName());

      return metadata;
   }

   string displayNameFromCamelCase(string name)
   {
      name = _camelCaseRegex.Replace(name, " $0");
      if (name.EndsWith(" Id"))
          name = name.Substring(0, name.Length - 3);
      return name;
   }
}
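If you want to sanity-check the naming rules outside of .NET, the same transformation is easy to sketch in plain JavaScript – a rough equivalent for experimentation, not the code the provider actually runs:

```javascript
// Rough JavaScript equivalent of the provider's naming rules, handy
// for trying the regex out in a console outside of .NET.
function displayNameFromCamelCase(name) {
  // Insert a space before an uppercase letter that is followed by a
  // lowercase letter but isn't at a word boundary (\B), e.g. the start.
  var result = name.replace(/\B[A-Z](?=[a-z])/g, " $&");
  // Trim a trailing " Id", as the C# version does.
  if (/ Id$/.test(result)) {
    result = result.slice(0, -3);
  }
  return result;
}

console.log(displayNameFromCamelCase("AssignedToId")); // "Assigned To"
console.log(displayNameFromCamelCase("CustomerName")); // "Customer Name"
```

(The JavaScript regex uses `[A-Z]`/`[a-z]` rather than `\p{Lu}`/`\p{Ll}`, so unlike the .NET version it only handles ASCII letters.)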

Hook the provider up in Application_Start by assigning an instance of it to ModelMetadataProviders.Current.

It’s pretty effective I think:

[screenshot: camel-case labels after]

Anything that isn’t quite right can be overridden simply by explicitly adding DisplayNames to the model.

Is SQL Server Profiler showing Connection Pooling not working?

TL;DR: No – it’s just SQL Profiler not telling you the entire truth.

In evaluating Entity Framework 4.1 (aka EF Code-First or the “Magic Unicorn” edition) I’ve been keeping an eye on what SQL it’s actually executing against SQL Server (how’s that for a leaky abstraction?). Here I saw something slightly worrying: it appeared that the client was logging in, executing a query and then logging out, as seemingly indicated by the Audit Login and Audit Logout events:

[screenshot]

The same thing happens without EF using basic SqlCommand queries. To tell you the truth I’d noticed this a while ago but hadn’t got round to investigating.

Rather than assume that connection pooling was broken on my machine, I had a hunch that SQL Profiler was somewhat misrepresenting what was really going on.

Indeed, Markus Erickson on StackOverflow mentions the EventSubClass column, which you can add to SQL Profiler’s output to see whether those Audit Login/Logout events are actually connections being pulled from the pool or fresh connections.

Here’s how you show the EventSubClass column in SQL Profiler (I’m running SQL 2005 on this machine; I can only assume it’s similar on 2008):

  • Go to the Trace Properties window and switch to the Events Selection tab.
  • Click on the Show all columns checkbox
  • Scroll to the right and locate the EventSubClass column and check both checkboxes:

  • Then go to Organize Columns and move the EventSubClass column up so that it’s next to EventClass:

[screenshot]

Now you can re-run your trace and hopefully be reassured that connection pooling is, after all, functioning correctly!

[screenshot]

Hope that helps!

Running IISExpress without a console window

I created a little Windows Script file that you can put in the root of a site; when double-clicked it runs IIS Express without its usual accompanying console window.

IIS Express seems to require a parent process, so you have to keep the calling process alive whilst it’s running. The lowest-tech way I could think of doing this is to use the Run method of Windows Script Host’s WScript.Shell object, which lets you spawn a process in a hidden window and wait for it to exit.

You can view and download the code here: https://gist.github.com/864322

Just place the IISExpress.js in the root of your website and double-click it; it should then launch your browser at the root of the site. You can adjust the port and CLR version by tweaking the variables at the top of the script.

Hope it helps!

Executing Cygwin Bash scripts on Windows

I was reading Jeremy Rothman-Shore’s post regarding kicking off a Cygwin script from a Windows batch file which addresses two things:

  1. Invoking Cygwin’s Bash, passing it a shell script to execute
  2. Resolving the script’s directory in Bash so that it can access other resources in its directory.

Here I want to improve a bit on the first point and come up with:

  • A general purpose “shim” that will execute a shell script on Windows in Cygwin
  • Will pass through any command-line arguments
  • Doesn’t trip up on any spaces in the path

The idea is that if I have a shell script called foo.sh, I create a Windows equivalent called foo.cmd alongside it that can be called directly on Windows without the caller worrying about Cygwin etc (apart from having it installed):

[screenshot]

(Or foo.bat – I prefer the *.cmd extension because this isn’t MS-DOS we’re running here).

The CMD script has to:

  1. Find its fully-qualified current location and locate its *.sh counterpart
  2. Translate this Windows-style path into a Unix style path
  3. Pass this to Cygwin’s bash along with any other arguments

Firstly, finding a batch file’s location is similar to how it’s done in Unix: it’s the first argument to the script. So in our case we use %0, and we can extract parts of the path like so: %~dp0 will extract the drive and path from argument 0 (i.e. the directory). See the for command for more information on this funky %~ syntax.

Secondly, the translation from a c:\windows\style\path to a /unix/style/path is done by Cygwin’s cygpath command. We do this in a slightly roundabout way via the ever-versatile for command.
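For illustration only, the basic drive-letter translation that cygpath performs can be sketched like this in JavaScript (a simplified sketch of my own, not cygpath’s actual algorithm – the real tool also understands mount points, relative paths and UNC shares):

```javascript
// Simplified sketch of cygpath-style Windows-to-Unix path translation.
// The real cygpath also consults Cygwin's mount table.
function toCygwinPath(winPath) {
  // Split off a leading drive letter, e.g. "C:\cygwin\bin"
  var m = /^([A-Za-z]):[\\\/]?(.*)$/.exec(winPath);
  if (!m) {
    // No drive letter: just flip the slashes
    return winPath.replace(/\\/g, "/");
  }
  var drive = m[1].toLowerCase();
  var rest = m[2].replace(/\\/g, "/");
  return "/cygdrive/" + drive + (rest ? "/" + rest : "");
}

console.log(toCygwinPath("C:\\cygwin\\bin")); // "/cygdrive/c/cygwin/bin"
```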

Thirdly, arguments passed to a batch file can either be accessed individually using %1, %2, %3 etc., or all in one go using %*, which is what we use here.

In addition, so that we don’t litter the Cygwin environment with temporary variables that we’ve created, we use a little setlocal/endlocal trick.

Here it is:

@echo off
setlocal

if not exist "%~dpn0.sh" echo Script "%~dpn0.sh" not found & exit 2

set _CYGBIN=C:\cygwin\bin
if not exist "%_CYGBIN%" echo Couldn't find Cygwin at "%_CYGBIN%" & exit 3

:: Resolve ___.sh to /cygdrive based *nix path and store in %_CYGSCRIPT%
for /f "delims=" %%A in ('%_CYGBIN%\cygpath.exe "%~dpn0.sh"') do set _CYGSCRIPT=%%A

:: Throw away temporary env vars and invoke script, passing any args that were passed to us
endlocal & %_CYGBIN%\bash --login "%_CYGSCRIPT%" %*

Note that you just name this the same as your shell script, with a .cmd or .bat file extension, and then execute it as normal.

For example for the following shell script called foo.sh:

#!/bin/sh
echo
echo "Hello from bash script $0"
echo "Working dir is $(pwd)"
echo
echo "arg 1 = $1"
echo "arg 2 = $2"
echo

Here’s me calling it from Windows:

[screenshot]

Hope that helps!

SOLVED: Windows Identity Foundation – “The system cannot find the file specified”

I’ve been working on a proof of concept for using claims-based authorisation with Windows Identity Foundation (WIF) against an Active Directory Federation Services (ADFS) 2.0 security token service (STS).

I seemed to have everything in place but came up against the following error in a yellow screen of death:

System.Security.Cryptography.CryptographicException: The system cannot find the file specified.

[screenshot]

Looking at the stack trace, it seems the Data Protection API (DPAPI, which in .NET is exposed as System.Security.Cryptography.ProtectedData) is being used to encrypt data. A common use of DPAPI is to do encryption without having to worry about key management: you leave it to Windows to decide where the keys are stored. Those keys are typically buried in your user profile/registry somewhere – so it seemed odd that DPAPI was being used here at all: the DPAPI keys would need to be part of the App Pool user account’s profile/registry.

Anyway, to cut a long story short I wasn’t Reading The Fine error Message fully. The interesting/useful bit was scrolled horizontally off-screen:

[CryptographicException: The system cannot find the file specified.]
System.Security.Cryptography.ProtectedData.Protect(Byte[] userData, Byte[] optionalEntropy, DataProtectionScope scope) +681

Microsoft.IdentityModel.Web.ProtectedDataCookieTransform.Encode(Byte[] value) +121
[InvalidOperationException: ID1074: A CryptographicException occurred when attempting to encrypt the cookie using the ProtectedData API (see inner exception for details). If you are using IIS 7.5, this could be due to the loadUserProfile setting on the Application Pool being set to false. ]
Microsoft.IdentityModel.Web.ProtectedDataCookieTransform.Encode(Byte[] value) +1280740
Microsoft.IdentityModel.Tokens.SessionSecurityTokenHandler.ApplyTransforms(Byte[] cookie, Boolean outbound) +74

Sure enough, WIF was using DPAPI to encrypt a token, but DPAPI was complaining it couldn’t get to the keys because there was no user profile for the App Pool identity, which in this case Environment.UserDomainName/UserName told me was “IIS APPPOOL\DefaultAppPool” – and there was no such user profile directory under C:\Users.

So, sure enough, in the advanced settings for the App Pool in IIS, Load User Profile was false. Setting it to true creates and loads the user profile (a “DefaultAppPool” profile directory appeared in C:\Users), and the application worked:

[screenshot]

Typically WIF tutorials use ASP.NET Web Sites, which are tested under Cassini. Cassini runs under the current user’s identity, so a user profile, with its DPAPI keys, is already loaded – which is why you don’t come up against this issue if you follow the standard run-throughs/demos. But run as a Web Application under “real” IIS and you may hit this problem.

It’s also debatable whether the use of DPAPI here is at all sensible. In a web farm environment the DPAPI keys for the App Pool identities across servers will be different, so if you don’t have sticky sessions enabled on your load balancer you run the risk of such federated logins not working 100% of the time. This issue is mentioned by Matias Woloski in the Geneva forums.

Another case of bad defaults all round.