D·ASYNC on Azure Functions

This article demonstrates the capabilities of the D·ASYNC technology (preview version), explains how it can be used, and guides you through all the steps needed to try it out yourself. The final experience might differ slightly in the future.

Ready?

Let’s push this code into the version control system – your services and workflows are live! (It’s just a slightly more sophisticated version of the hello world application.)

// Simply returns "Hello".
public interface IEnglishDictionary
{
    Task<string> GreetingWord();
}

// Simply returns "Hello, {name}!".
public interface IGreetingService
{
    Task<string> Greet(string name);
}

public class EnglishDictionary : IEnglishDictionary
{
    public Task<string> GreetingWord() =>
        Task.FromResult("Hello");
}

public class GreetingService : IGreetingService
{
    private readonly IEnglishDictionary _dictionary;

    // Do it properly with dependency injection.
    public GreetingService(IEnglishDictionary dictionary)
        => _dictionary = dictionary;

    public async Task<string> Greet(string name)
    {
        var greetingWord = await _dictionary.GreetingWord();
        return $"{greetingWord}, {name}!";
    }
}

public static class Startup
{
    // And some code to configure the IoC container.
    // This example uses Autofac.
    public static IContainer CreateIocContainer()
    {
        var builder = new ContainerBuilder();
        builder
            .RegisterType<EnglishDictionary>()
            .As<IEnglishDictionary>()
            .LocalService();
        builder
            .RegisterType<GreetingService>()
            .As<IGreetingService>()
            .LocalService();
        return builder.Build();
    }
}

What services? What workflows?

Microservices (if you will) and distributed workflows expressed in the C# code above. The IEnglishDictionary and IGreetingService interfaces define the contracts for two services, where EnglishDictionary and GreetingService are their corresponding implementations wired up by the IoC container in Startup.CreateIocContainer(). Nothing spectacular or unusual here. However, according to the D·ASYNC syntax mapping, a workflow is defined by a set of async functions – the Greet and GreetingWord methods. Let’s look at them in detail:

/* 1. Entry point for an external caller. */
public async Task<string> Greet(string name)
{
  /* 2. State transition #1. Schedule execution of another routine in the workflow, one which belongs to another service. */
  var task = _dictionary.GreetingWord();
  /* 3. Save the state of the current Greet routine and subscribe to the completion of the GreetingWord routine. The 'await' keyword serves as a delimiter between state transitions of the generated state machine. */
  await task;
  /* 4. Upon completion, GreetingWord schedules the continuation of this routine. The state is restored on any available node in the system, and execution resumes from the exact point with all input arguments and local variables available. */
  /* 5. State transition #2. */
  var greetingWord = task.Result;
  return $"{greetingWord}, {name}!";
  /* 6. The current finite state machine reaches its terminal state and schedules the continuation of the caller with the result (if any caller is subscribed). */
}

How is that code “live”?

There are several ingredients needed to make it work – let’s look at the overall picture first and then deep dive into details.

It shows a standard configuration: a project in Visual Studio Team Services (VSTS) using Git for version control with a Continuous Integration (CI) pipeline, and an Azure Functions host backed by an Azure Storage account (queues and tables), configured to perform Continuous Delivery (CD) via VSTS whenever you push your code to the ‘master’ branch. The only non-standard component here is the D·ASYNC NuGet packages referenced by the C# project itself. The gateway part will be explained later below.

Initial setup

Don’t worry if you are not familiar with VSTS and Azure Functions – they are very easy to set up, and folks from Microsoft and others have described what they are and how to use them in numerous posts. Here I’ll just list a set of general steps as guidance without additional details – all you need is a standard configuration, nothing special.

In VSTS Online:

  1. Create a new team project or use an existing one
  2. Create a new GIT repository in the team project

In Azure portal:

  1. Create a new Storage account or decide which existing one to use
  2. Create a new Function App associated with the Storage account
  3. In the Function App, go to the ‘Platform Features’ tab and configure Code Deployment to run automatically from VSTS whenever code is pushed to the master branch of the repo

On your PC:

  1. Clone the code repository from VSTS (using Visual Studio or your favorite tool)
  2. Make sure that you have the ‘Azure Functions and Web Jobs Tools’ extension installed in Visual Studio (top level menu > Tools > Extensions and Updates…)

Creating the C# project

With the initial setup done, let’s create the C# project in Visual Studio that is going to host our services.

The Azure Functions project must be created empty – we are not going to write any extra code. Select the .NET Framework project type, because the .NET Core version was not fully supported by Azure at the moment of writing this article.

After the project is created, add the Dasync.AzureFunctions.TechnologyPreview NuGet package.

Then just create a single .cs file in the project (e.g. ‘Program.cs’) and paste in the code below along with the code from the beginning of the article.

using System;
using System.Threading.Tasks;
using Autofac;
using Dasync.Ioc.Autofac;

namespace DasyncDemo
{
    // Paste the code from the beginning of the article here.
}

The project is ready now.

If you wish to use an alternative IoC container, the only other option available in this tech preview is Ninject – just replace the code in the Startup class with this snippet:

using Ninject;
using Dasync.Ioc.Ninject;

public static IKernel CreateIocContainer()
{
    var kernel = new StandardKernel();
    kernel
        .Bind<IEnglishDictionary>()
        .To<EnglishDictionary>()
        .AsService();
    kernel
        .Bind<IGreetingService>()
        .To<GreetingService>()
        .AsService();
    return kernel;
}

Deploying the project

At the moment of writing this article, VSTS has a known problem of failing to pull NuGet packages from nuget.org. That can be fixed by adding a nuget.config file to the root directory of your project with the following content:

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <packageRestore>
    <add key="enabled" value="True" />
    <add key="automatic" value="True" />
  </packageRestore>
  <packageSources>
    <add key="nuget.org" value="https://api.nuget.org/v3/" />
  </packageSources>
  <activePackageSource>
    <add key="All" value="(Aggregate source)" />
  </activePackageSource>
</configuration>

To deploy the project, all you have to do is push the code to the ‘master’ branch.

When you work on a team, you usually go through a code review process and then accept a pull request from a separate branch, but the result is the same – the code ends up in the ‘master’ branch and VSTS triggers its CI/CD pipeline.

Now if you go back to the Azure portal and navigate to your Function App, you should see that the deployment is successful. It might take a couple of minutes though.

What exactly is deployed?

To answer that question, let’s build the project in Visual Studio. In the output window you will notice that two services have been found and two corresponding Azure functions have been generated:

The D·ASYNC NuGet package adds a custom build step which uses the IoC container to determine which types are defined as services – via the LocalService extension method for Autofac (or AsService for Ninject). If you remove that extension method invocation from the startup code, an Azure function won’t be generated. You may also notice the ‘HTTP gateway’ function, but we will get to it later.

Now, after the project has been successfully deployed to the cloud, the Function App will show all generated functions available to use.

Those service functions listen on a queue in the Azure Storage account, so any asynchronous invocation of a method on a service (or of its continuation) puts a message on a queue. If a service receives too many requests, they simply start piling up on the queue, and at some point the Azure Function App decides to scale out.

How to invoke service methods?

This part can get tricky, because you should not put messages on a queue by hand – the message format can depend on the platform implementation. That is why a third, HTTP-based function is generated – the gateway. It allows you to invoke a service from outside of the system and to connect the system to other services that don’t use D·ASYNC.

In this tech preview the gateway function has the anonymous access level. It’s a known security flaw and is subject to change in the future.

To test the functions we will use Postman to send HTTP requests. Let’s start by invoking GreetingWord on EnglishDictionary. To do so, we need to send an HTTP POST request to the URL of the gateway function with a couple of query parameters – the service name and the routine (method) name.

When you hit the Send button, you get an HTTP 202 Accepted response with a Location header telling you where to get the result once the operation is complete.

Then, sending an HTTP GET request to that relative URL from the Location header gives us the result of the operation.
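
Putting the flow together, here is a minimal client-side sketch using HttpClient. The gateway route, the query parameter names (‘service’ and ‘routine’), the empty JSON body, and the assumption that polling returns 202 until completion are illustrative guesses based on the description above, not a documented API:

using System;
using System.Net;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

public static class GatewayClientDemo
{
    public static async Task RunAsync()
    {
        var client = new HttpClient
        {
            BaseAddress = new Uri("https://your-function-app.azurewebsites.net")
        };

        // Kick off the routine. The gateway replies with '202 Accepted'
        // and a Location header pointing to the future result.
        // (Route and query parameter names are assumptions.)
        var response = await client.PostAsync(
            "api/gateway?service=EnglishDictionary&routine=GreetingWord",
            new StringContent("{}", Encoding.UTF8, "application/json"));

        var resultLocation = response.Headers.Location;

        // Poll the Location URL until the routine completes.
        HttpResponseMessage result;
        do
        {
            await Task.Delay(TimeSpan.FromSeconds(1));
            result = await client.GetAsync(resultLocation);
        }
        while (result.StatusCode == HttpStatusCode.Accepted);

        Console.WriteLine(await result.Content.ReadAsStringAsync()); // "Hello"
    }
}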

It might take several seconds to get the result, and the reason is that the gateway function simply puts a message on a Storage queue, where it takes some time for the Function App to pick it up. When a queue is empty, Function Apps are optimized to exponentially increase the polling interval. However, if you have a lot of messages on the queue, they can be picked up almost instantaneously, depending on how long a single function invocation takes and how many instances of the Function App the Azure infrastructure is running.

The queue polling interval can be changed in the host configuration file. In your project, update the host.json file with the following content (the interval is in milliseconds) and re-deploy the Function App.

{
  "queues": {
    "maxPollingInterval": 100
  }
}

At this point you might be very skeptical and think that this is definitely overkill for a hello world application, where a simple request-response approach would have been many times more efficient. But don’t judge too fast – the idea here is to have a resilient distributed workflow, where the slow response is a trade-off of the Function App serverless platform itself, not of the D·ASYNC technology.

We can repeat the exercise and invoke the Greet routine on the GreetingService, but this time we need to supply the method’s input parameters in JSON format, as shown below.
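
Judging by the Greet(string name) signature and the “Hello, World!” response we get next, the request body presumably looks like this (the exact payload format the gateway expects is an assumption):

{
    "name": "World"
}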

And when you poll for the result using the relative URL from the Location response header, you’ll get the expected “Hello, World!” response body.

What happens behind the scenes?

This time we will use Azure Storage Explorer to reveal additional details on how our services work together as a workflow. During the initial request to execute Greet on GreetingService, the gateway puts a message on the queue.

The message is in a D·ASYNC-specific CloudEvents format which conveys all the data needed to run a routine on a service. When the message is picked up by the Azure function, the D·ASYNC runtime engine invokes the method on the service. The method then invokes GreetingWord on EnglishDictionary and awaits its completion. At this point the D·ASYNC runtime engine saves the state of the method (which is compiled into a state machine). The ‘routines’ table in the table storage holds the latest saved state of all routine invocations. As shown below, the Greet routine has a Status of 3, which corresponds to an internal enumeration item meaning ‘routine is awaiting’.

Let’s quickly look at some fields of the generated state machine for the Greet method (C# code reconstructed from IL metadata):

[CompilerGenerated]
private struct <Greet>d__2 : IAsyncStateMachine
{
    // the input parameter
    public string name;
    // current state ID
    public int <>1__state;
    // used to await the GreetingWord
    private TaskAwaiter<string> <>u__1;
}

That’s the state of a method that gets serialized and stored in the ‘State’ column of the ‘routines’ table.

At this point the Azure function finishes its execution, and no resources are held to synchronously wait for the completion of EnglishDictionary’s GreetingWord.

When the invocation of GreetingWord is requested by Greet, another message is put on the queue. This time the D·ASYNC runtime engine knows exactly what the continuation of the routine being invoked is, and puts that information inside the message:

This information makes it possible to resume the caller when the GreetingWord routine is complete, which is what implements the event-driven architecture.

Then GreetingWord completes its execution and resumes the Greet routine, which performs its last state transition and completes as well:

Now the record for the Greet routine in the ‘routines’ table has a ‘Result’ column which contains the serialized data of the Task return value:

If GreetingWord threw an exception, we would see a non-null ‘Exception’ property on that ‘Result’. That means any exception raised at the application layer gets propagated to the caller, so regular try-catch blocks work just fine even in such a distributed workflow.
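
For instance, the Greet method could be written with ordinary error handling around the cross-service call. This is a sketch – the original sample has no error handling, and the fallback behavior here is purely illustrative:

public async Task<string> Greet(string name)
{
    try
    {
        // A cross-service call: if GreetingWord faults, the serialized
        // exception is re-thrown here when this routine resumes.
        var greetingWord = await _dictionary.GreetingWord();
        return $"{greetingWord}, {name}!";
    }
    catch (Exception)
    {
        // Hypothetical fallback, not part of the original sample.
        return $"Hi, {name}!";
    }
}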

Since the Greet method was invoked via the HTTP gateway, the routine does not have any continuation (no subscriber) – that’s where the event-driven design stops and the caller has to poll for the result instead.

Another handy feature of keeping routine states in an Azure table is traceability. Using the routines’ unique numerical IDs, service names, and method names, you can at any point in time compose a call stack of routine invocations (like a regular stack trace you see in a debugger), as sketched below:
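
For our example, such a reconstructed call stack would conceptually look like this (the IDs and layout are illustrative, not the actual table contents):

EnglishDictionary.GreetingWord   (routine #2)
called by GreetingService.Greet  (routine #1)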

Can Function Apps inter-communicate?

Microservices that form a workflow are worth nothing if they can’t communicate with each other. The sample above shows how to deploy two services side-by-side in a single Function App – a ‘pod’ if you will. In a real production environment you can have multiple teams owning their own services with public APIs and deployment targets. How does that change the approach with D·ASYNC? By just one line of code.

Well, to be fair, you first need to stand up and prepare another Function App in Azure as a prerequisite. Then we can split the code of ‘Program.cs’ into two projects which can be deployed independently:

The projects are shown in the same solution for clarity, but they should reside in separate code repositories. Besides, you can split the projects even further by separating the services themselves from their hosting (the Function App project), where only the hosting project needs references to the D·ASYNC NuGet packages.

Because GreetingService depends on IEnglishDictionary, we need to change how that dependency is injected into the service. The promised single-line change is shown on line 24 of the image above – it defines IEnglishDictionary as an external service using another extension method, AsExternalService. It tells the D·ASYNC runtime to resolve such a dependency using service discovery. In this tech preview version the platform implements a very rudimentary service discovery mechanism using table storage on the Azure Storage account.
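
For illustration, with the Ninject flavor shown earlier the binding change might look roughly like this. Only the AsExternalService method name is confirmed by this article; the exact shape of the binding it attaches to is an assumption:

// Before the split: the dictionary is hosted in the same Function App.
kernel
    .Bind<IEnglishDictionary>()
    .To<EnglishDictionary>()
    .AsService();

// After the split: the dictionary is resolved via service discovery.
// (Assumed call shape - only the method name comes from the article.)
kernel
    .Bind<IEnglishDictionary>()
    .AsExternalService();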

Ideally you should have a separate assembly that defines the contract for your service, but to simplify this example we just define an exact copy of the IEnglishDictionary interface inside the GreetingService assembly, as shown below. That code works since there is no contract verification.
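
For reference, this is the copy that lives in the GreetingService project – identical to the contract from the beginning of the article:

// An exact copy of the contract, duplicated here only to keep the
// example simple; ideally it would come from a shared contract assembly.
public interface IEnglishDictionary
{
    Task<string> GreetingWord();
}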

After splitting services into two separate independent Function App projects you can run them in exactly the same manner as previously described – no extra code changes needed.

Why Azure Functions?

Azure Functions in combination with Azure Storage queues and tables might not be the best choice for low latency and high performance, yet it’s an auto-scalable serverless platform that costs nothing when not in use (on the Consumption Plan) and requires nearly zero maintenance and deployment effort. You can think of it as a ‘starter kit’: throw something into the cloud really quickly, see how it goes, and then switch to a better-suited platform later if needed.

All in all

Let’s step back a little and look at the very beginning of this article. The very first paragraph describes how a simple code push translates into deployed services and distributed workflows, and the rest of the article is merely an explanation of the technology. The hello world example serves as the simplest metaphor for much more complex real-world applications. D·ASYNC is not perfect (especially at its preview stage) and has its own downsides, but it will keep growing to help save development effort in applicable scenarios.