D·ASYNC on Azure Functions

This article demonstrates the capabilities of the D·ASYNC technology (preview version), explains how it can be used, and guides through all the steps needed to try it out yourself. The final experience might slightly differ in the future.

Ready?

Let’s push this code into the version control system – your services and workflows are live! (It’s just a slightly more sophisticated version of the hello world application.)

// Simply returns "Hello".
public interface IEnglishDictionary
{
    Task<string> GreetingWord();
}

// Simply returns "Hello, {name}!".
public interface IGreetingService
{
    Task<string> Greet(string name);
}

public class EnglishDictionary : IEnglishDictionary
{
    public Task<string> GreetingWord() =>
        Task.FromResult("Hello");
}

public class GreetingService : IGreetingService
{
    private IEnglishDictionary _dictionary;

    // Do it properly with dependency injection.
    public GreetingService(IEnglishDictionary dictionary)
        => _dictionary = dictionary;

    public async Task<string> Greet(string name)
    {
        var greetingWord = await _dictionary.GreetingWord();
        return $"{greetingWord}, {name}!";
    }
}

public static class Startup
{
    // And some code to configure the IoC container.
    // This example uses Autofac.
    public static IContainer CreateIocContainer()
    {
        var builder = new ContainerBuilder();
        builder
            .RegisterType<EnglishDictionary>()
            .As<IEnglishDictionary>()
            .LocalService();
        builder
            .RegisterType<GreetingService>()
            .As<IGreetingService>()
            .LocalService();
        return builder.Build();
    }
}

What services? What workflows?

Microservices (if you will) and distributed workflows expressed in the C# code above. The IEnglishDictionary and IGreetingService interfaces define the contracts for two services, where EnglishDictionary and GreetingService are their corresponding implementations wired up by the IoC container in Startup.CreateIocContainer(). Nothing spectacular or unusual here. However, according to the D·ASYNC syntax mapping, a workflow is defined by a set of async functions – the Greet and GreetingWord methods. Let’s look at them in detail:

/* 1. Entry point for an external caller. */
public async Task<string> Greet(string name)
{
  /* 2. State transition #1. Schedule execution of another routine in a workflow, which belongs to another service. */
  var task = _dictionary.GreetingWord();
  /* 3. Save the state of current Greet routine, and subscribe to the completion of GreetingWord routine. The 'await' keyword serves as a delimiter between state transitions of the generated state machine. */
  await task;
  /* 4. GreetingWord schedules continuation of this routine upon completion. It restores the state on any available node in the system, and keeps executing from the exact point with all input arguments and local variables available. */
  /* 5. State transition #2. */
  var greetingWord = task.Result;
  return $"{greetingWord}, {name}!";
  /* 6. The current finite state machine reaches its terminal state and schedules continuation of the caller with the result (if any is subscribed). */
}

How is that code “live”?

There are several ingredients needed to make it work – let’s look at the overall picture first and then deep dive into details.

It shows a standard configuration: a project in Visual Studio Team Services (VSTS) using Git for version control with a Continuous Integration (CI) pipeline, and an Azure Functions host backed by an Azure Storage account (queues and tables), configured to perform Continuous Delivery (CD) via VSTS whenever you push your code to the ‘master’ branch. The only non-standard component here is the D·ASYNC NuGet packages referenced by the C# project itself. The gateway part will be explained below.

Initial setup

Don’t worry if you are not familiar with VSTS and Azure Functions – it’s very easy to set up, and folks from Microsoft and others have described what it is and how to use it in numerous posts. Here I’ll just list the general steps as guidance without additional details – all you need is a standard configuration, nothing special.

In VSTS Online:

  1. Create a new team project or use an existing one
  2. Create a new Git repository in the team project

In Azure portal:

  1. Create a new Storage account or decide which existing one to use
  2. Create a new Function App associated with the Storage account
  3. In the Function App, go to the ‘Platform Features’ tab and configure Code Deployment to be done automatically from VSTS whenever code is pushed to the master branch of the repo

On your PC:

  1. Clone the code repository from VSTS (using Visual Studio or your favorite tool)
  2. Make sure that you have the ‘Azure Functions and Web Job Tools’ extension installed in Visual Studio (top level menu > Tools > Extensions and Updates…)

Creating the C# project

Having the initial setup done, let’s create the C# project with Visual Studio that is going to host our services.

The Azure Functions project should be created empty – we are not going to write any extra code. Select the .NET Framework project type, because the .NET Core version is not fully supported by Azure at the time of writing this article.

After the project is created, add the Dasync.AzureFunctions.TechnologyPreview NuGet package.

Then just create a single .cs file in the project (e.g. ‘Program.cs’) and paste the code below along with the code from the beginning of the article.

using System;
using System.Threading.Tasks;
using Autofac;
using Dasync.Ioc.Autofac;

namespace DasyncDemo
{
// Paste the code from the beginning of the article here.
}

The project is ready now.

If you wish to use an alternative IoC container, the only other option available in this tech preview is Ninject – just replace the code in the Startup class with this snippet:

using Ninject;
using Dasync.Ioc.Ninject;

public static IKernel CreateIocContainer()
{
    var kernel = new StandardKernel();
    kernel
        .Bind<IEnglishDictionary>()
        .To<EnglishDictionary>()
        .AsService();
    kernel
        .Bind<IGreetingService>()
        .To<GreetingService>()
        .AsService();
    return kernel;
}

Deploying the project

At the moment of writing this article, VSTS has a known problem of failing to pull NuGet packages from nuget.org. That can be fixed by adding a nuget.config file to the root directory of your project with the following content:

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <packageRestore>
    <add key="enabled" value="True" />
    <add key="automatic" value="True" />
  </packageRestore>
  <packageSources>
    <add key="nuget.org" value="https://api.nuget.org/v3/index.json" />
  </packageSources>
  <activePackageSource>
    <add key="All" value="(Aggregate source)" />
  </activePackageSource>
</configuration>

To deploy the project, all you have to do is push the code to the ‘master’ branch.

When you work on a team, you usually go through a code review process and then merge a pull request from a separate branch, but the result is the same – the code ends up in the ‘master’ branch and VSTS triggers its CI/CD pipeline.

Now if you go back to the Azure portal and navigate to your Function App, you should see that the deployment is successful. It might take a couple of minutes though.

What exactly is deployed?

To answer that question, let’s build the project in Visual Studio. In the output window you can see that two services have been found and two corresponding Azure functions have been generated:

The D·ASYNC NuGet package adds a custom build step which uses the IoC container to determine which types are defined as services – via the LocalService extension method for Autofac (or AsService for Ninject). If you remove that extension method invocation from the startup code, an Azure function won’t be generated. You can also notice the ‘HTTP gateway’ function, but we will get to it later.

Now, after the project has been successfully deployed to the cloud, the Function App will show all generated functions available to use.

Those service functions listen on a queue in the Azure Storage account, so any asynchronous invocation of a method on a service (or its continuation) puts a message on a queue. If a service receives too many requests, they just start piling up on the queue, and the Azure Function App at some point decides to scale out.
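For illustration only, a hand-written equivalent of such a generated function might look roughly like the sketch below. The function name, queue name, and the DasyncRuntime entry point are all hypothetical – the actual generated code is an implementation detail of the preview.

```csharp
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;

public static class GreetingServiceFunction
{
    // Hypothetical sketch: a queue-triggered function that hands the
    // message over to the D·ASYNC runtime, which deserializes it,
    // restores the routine state if needed, and executes the next
    // state transition.
    [FunctionName("GreetingService")]
    public static Task Run(
        [QueueTrigger("greetingservice-queue")] string message)
    {
        return DasyncRuntime.ProcessMessageAsync(message); // hypothetical API
    }
}
```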

How to invoke service methods?

This part can get tricky, because you should not put any message on a queue by hand – the format of a message can depend on the platform implementation. This is why a third, HTTP-based function is generated – the gateway. It allows you to invoke a service from outside the system and to connect the system to other services that don’t use D·ASYNC.

In this tech preview the gateway function has the anonymous access level. It’s a known security flaw and is subject to change in the future.

To test the functions we will be using Postman to send HTTP requests. Let’s start with invoking GreetingWord on EnglishDictionary. To do so, we need to send an HTTP POST request to the URL of the gateway function with a couple of query parameters – the service name and the routine (method) name.

When you hit the Send button, you get an HTTP 202 Accepted response with a Location header telling us where to get the result when the operation is complete.

Then, using that relative URL from the Location header, sending an HTTP GET request gives us the result of the operation.
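Put together, the exchange looks roughly like this. The host name, query parameter names, and routine ID are made up for illustration – use the values your gateway function actually exposes:

```http
POST /api/gateway?service=EnglishDictionary&routine=GreetingWord HTTP/1.1
Host: my-dasync-demo.azurewebsites.net

HTTP/1.1 202 Accepted
Location: /api/gateway?service=EnglishDictionary&routine=GreetingWord&id=1

GET /api/gateway?service=EnglishDictionary&routine=GreetingWord&id=1 HTTP/1.1
Host: my-dasync-demo.azurewebsites.net

HTTP/1.1 200 OK
Content-Type: application/json

"Hello"
```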

It might take several seconds to get the result, and the reason is that the gateway function simply puts a message on a Storage queue, which takes some time for the Function App to pick up. When a queue is empty, Function Apps are optimized to exponentially increase the polling interval. However, if you have a lot of messages on the queue, they can be picked up almost instantaneously, depending on how much time a single function invocation takes and how many instances of the Function App the Azure infrastructure is running.

The queue polling interval can be changed in the host configuration file. In your project, update the host.json file with the following content and re-deploy the Function App.

{
  "queues": {
    "maxPollingInterval": 100
  }
}

At this point you might be very skeptical and think that this is definitely overkill for a hello world application, where a simple request-response approach would have been many times more efficient. But don’t judge too fast – the idea here is to have a resilient distributed workflow, where the slow response is a trade-off of the serverless Function App platform itself, not of the D·ASYNC technology.

We can repeat the exercise and invoke the Greet routine on GreetingService, but this time we need to supply the method’s input parameters in JSON format.

And when you poll for the result using the relative URL from the Location response header, you’ll get the expected “Hello, World!” response body.

What happens behind the scene?

This time we will use Azure Storage Explorer to reveal additional details on how our services work together as a workflow. During the initial request to execute Greet on GreetingService, the gateway puts a message on the queue.

The message is in a D·ASYNC-specific CloudEvents format which conveys all the data needed to run a routine on a service. When the message is picked up by the Azure function, the D·ASYNC runtime engine invokes the method on the service. Then the method invokes GreetingWord on EnglishDictionary and awaits its completion. At this point the D·ASYNC runtime engine saves the state of the method (which is compiled into a state machine). The ‘routines’ table in the table storage holds the latest saved state of all routine invocations. As shown below, the Greet routine has a Status of 3, which corresponds to an internal enumeration item saying ‘routine is awaiting’.
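The exact schema is an implementation detail of the preview, but conceptually the message is a CloudEvents-style envelope carrying the routine invocation data, roughly along these lines (all field names are illustrative):

```json
{
  "specversion": "0.1",
  "type": "dasync.invoke.routine",
  "source": "/services/GreetingService",
  "id": "3f2c9a1e",
  "data": {
    "service": "GreetingService",
    "routine": "Greet",
    "parameters": { "name": "World" }
  }
}
```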

Let’s quickly look at some fields of the generated state machine for the Greet method (re-constructed C# code from IL metadata):

[CompilerGenerated]
private struct <Greet>d__2 : IAsyncStateMachine
{
    // the input parameter
    public string name;
    // current state ID
    public int <>1__state;
    // used to await the GreetingWord
    private TaskAwaiter<string> <>u__1;
}

That’s the state of a method that gets serialized and stored in the ‘State’ column of the ‘routines’ table.
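Conceptually, that serialized state is just the fields of the struct above – something along these lines (an illustrative JSON rendering, not the actual preview wire format):

```json
{
  "name": "World",
  "state": 0,
  "awaiter": { "service": "EnglishDictionary", "routine": "GreetingWord" }
}
```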

At this point the Azure function finishes its execution, and no resources are allocated to synchronously wait on the completion of EnglishDictionary’s GreetingWord.

When the invocation of GreetingWord is requested by Greet, another message is put on the queue. This time the D·ASYNC runtime engine knows exactly what the continuation of the routine being invoked is, and puts that information inside the message:

This information allows the runtime to resume the caller when the GreetingWord routine completes, which implements the event-driven architecture.

Then GreetingWord completes its execution and resumes the Greet routine, which performs its last state transition and completes as well:

Now the record for the Greet routine in the ‘routines’ table has a ‘Result’ column which contains the serialized data of the returned Task:

If GreetingWord threw an exception, we would see a non-null ‘Exception’ property on that ‘Result’. That means any exception raised at the application layer gets propagated to the caller, so regular try-catch blocks in such a distributed workflow work just fine.
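For example, the caller could handle a remote failure with an ordinary try-catch block. This is a sketch – the exception type caught here is whatever the callee actually throws, with InvalidOperationException being an arbitrary choice for illustration:

```csharp
public async Task<string> Greet(string name)
{
    try
    {
        var greetingWord = await _dictionary.GreetingWord();
        return $"{greetingWord}, {name}!";
    }
    catch (InvalidOperationException)
    {
        // The exception was raised inside the EnglishDictionary service,
        // serialized into the 'Result' column, and re-thrown here on resume.
        return $"Hi, {name}!";
    }
}
```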

Since the Greet method was invoked via the HTTP gateway, the routine does not have any continuation (no subscriber) – that’s where the event-driven design stops and the caller has to poll for the result instead.

Another handy feature of keeping routine states in an Azure table is traceability. Using routine unique numerical IDs, service names, and method names, at any point in time you can compose a call stack of routine invocations (like a regular stack trace you can see in a debugger):

Can Function Apps inter-communicate?

Microservices that form a workflow are worth nothing if they can’t communicate with each other. The sample above shows how to deploy two services side-by-side in a single Function App – a ‘pod’ if you will. In a real production environment you can have multiple teams owning their own services with public APIs and deployment targets. How does this change the approach with D·ASYNC? By just one line of code.

Well, to be fair, you need to stand up and prepare another Function App in Azure first as a prerequisite. Then we can split the code of ‘Program.cs’ into two projects which can be deployed independently:

The projects are shown in the same solution for clarity, but they should reside in separate code repositories. Besides, you can further split the projects by separating the services from their hosting (the Function App project), where only the hosting project needs references to the D·ASYNC NuGet packages.

Because GreetingService depends on IEnglishDictionary, we need to change the way that dependency is injected into the service. The promised single-line change is shown on line 24 of the image above – it defines IEnglishDictionary as an external service using another extension method, AsExternalService. It tells the D·ASYNC runtime to resolve such a dependency using service discovery. In this tech preview the platform implements a very rudimentary mechanism of service discovery using table storage in the Azure Storage account.
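Since the image is not reproduced here, the registration change for the Autofac version might look roughly like this. The exact call shape of AsExternalService is a guess based on the method name mentioned above and may differ in the actual preview package:

```csharp
public static IContainer CreateIocContainer()
{
    var builder = new ContainerBuilder();
    builder
        .RegisterType<GreetingService>()
        .As<IGreetingService>()
        .LocalService();
    // The single-line change: instead of registering a local implementation,
    // declare IEnglishDictionary as an external service resolved via
    // service discovery. (Hypothetical call shape.)
    builder.AsExternalService<IEnglishDictionary>();
    return builder.Build();
}
```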

Ideally you should have a separate assembly that defines the contract for your service, but to simplify this example we just defined an exact copy of the IEnglishDictionary interface inside the GreetingService assembly. That code works since there is no contract verification.

After splitting services into two separate independent Function App projects you can run them in exactly the same manner as previously described – no extra code changes needed.

Why Azure Functions?

Azure Functions in combination with Azure Storage queues and tables might not be the best choice for low latency and high performance, yet it’s an auto-scalable serverless platform which costs nothing when not in use (on the Consumption Plan) and requires nearly zero maintenance and deployment effort. You can think of it as a ‘starter kit’: throw something into the cloud really quickly, see how it goes, and then switch to a better platform later if needed.

All in all

Let’s step back a little and look at the very beginning of this article. The very first paragraph describes how a simple code push translates into deploying services and distributed workflows, and the rest of the article is merely an explanation of the technology. The hello world example serves as the simplest metaphor for much more complex real-world applications. D·ASYNC is not perfect (especially at its preview stage) and has its own downsides, but it will keep growing to help save development effort in applicable scenarios.

10 benefits of D·ASYNC

Assuming that you are already familiar with the idea behind D·ASYNC and its basic syntax mapping, here are the top benefits the technology can offer over industry-standard approaches to developing distributed services.

1. Language level of abstraction

As D·ASYNC leverages the syntax of the C# language itself, there is no need for an application to reference any specific assembly with an API definition. As a developer, you just write code, abide by certain rules, and your application becomes a cloud-native citizen. The level of abstraction integrated into the programming language hides the implementation details and complexity of a concrete distributed platform, which allows you to focus on the business logic.

2. Natural code flow

If you have written a distributed workflow, you have probably seen how it becomes more and more complex over time in terms of writing and maintaining the code: a lot of small functions and/or deployments, losing track of what invokes what, and difficulty understanding the overall intent and responsibilities of many moving parts. The core concept of D·ASYNC mitigates that problem by utilizing the finite state machines that are automatically generated by the compiler. Having regular async functions calling other async functions (which comprise a workflow) makes it natural to read, understand, and maintain the code and its purpose.

3. Minimal learning curve

Have you had that moment when you want to try out a new technology, but you also understand that you have to commit some time to learning how it works and how to use it? D·ASYNC is no exception – it has its own rules – but every C# developer knows how to write code in C#, and what can be easier than using a tool you already know? This time we are not talking about learning a new API, but mostly about the actual C# syntax.

4. Free choice of platform

Without being bound to any specific API of a distributed platform, D·ASYNC gives you the choice to select one that matches your needs, or even to create a new one or adapt your home-grown platform. That means it does not matter if you run the application on Windows or Linux, target .NET Framework or .NET Core, deploy to Azure or AWS – the application code remains the same. That brings the opportunity to switch platforms in the future to scale up, improve performance, cut costs, or for any other reason, without re-writing the code. With the microservice architecture in mind, connected services can run on different platforms, where the contract between them is merely an interface with methods.

With a future design it will also be possible to re-route concrete methods of a service to run on a different platform than the service itself. For example, an abstract image-processing service might have two responsibilities – managing images and the actual processing. The managing part does not need high-end VMs and big scale, but the processing part does. You can have only one class that represents the service, but its methods can run on different deployments or platforms to optimize cost. That removes the need for two separate services (managing and processing), a necessity imposed by the limitations of current technologies.


5. Simplified testing

A good question would be: “What happens if you run the application code without D·ASYNC?” Since D·ASYNC is just a pluggable middleware, the answer is: “It will just run as a normal .NET application inside a single process, producing identical behavior.” That being said, testing services and their integration is no different from regular unit testing, where dependencies on other services can be injected as mocks or actual implementations into the test target. Essentially, you just test classes and their methods. There is no need to spin up services locally or be aware of their communication mechanism – that’s the concern of the distributed platform itself, which must have its own set of tests.
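For example, the whole workflow from the beginning of the article can be exercised as a plain unit test with no D·ASYNC infrastructure at all (xUnit is assumed here as the test framework):

```csharp
using System.Threading.Tasks;
using Xunit;

public class GreetingServiceTests
{
    [Fact]
    public async Task Greet_ReturnsGreetingWithName()
    {
        // The dependency is just an interface: inject the real
        // implementation (or a mock) directly - no queues, no cloud.
        var service = new GreetingService(new EnglishDictionary());

        var result = await service.Greet("World");

        Assert.Equal("Hello, World!", result);
    }
}
```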

6. Event-Driven Design

Most of today’s microservices use the request-response design, but not all work can be done within a couple of minutes – that’s where message-oriented architecture comes into play. There are pros and cons to each method, which are out of the scope of this article, but you definitely won’t be able to build a distributed system solely with the request-response design. In C#, async/await naturally falls into the category of event-driven design – an async function subscribes to the completion of another function with the await keyword. It’s a special case of the publisher-subscriber model where, in 99% of cases, you have exactly one publisher and exactly one subscriber, which removes the problem of understanding the workflow – in contrast to an event hub with an arbitrary number of publishers and subscribers. D·ASYNC does not dictate which communication style to use and supports both modes: request-response and publisher-subscriber. The communication mechanism may depend on the concrete platform; it’s hidden from the developer’s perspective and is subject to optimization.

The internal design of the D·ASYNC middleware has a concept of connectors, which are platform-dependent. You can have gateways which connect services in a less efficient but more standard way, which also allows connecting to other services not based on the D·ASYNC technology.

In both cases, whether it’s request-response or publisher-subscriber, D·ASYNC promotes the use of CancellationToken at the user level to abort time-sensitive or abandoned operations instead of relying on platform-level timeouts.
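A sketch of what that might look like in a service contract – the interface, method, and timeout value below are illustrative, not part of the preview API:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

public interface IImageProcessor
{
    // The token travels with the routine invocation, so a long-running
    // or abandoned operation can be canceled at the application level.
    Task<byte[]> ProcessImage(byte[] image, CancellationToken cancellation);
}

public class ImageWorkflow
{
    private readonly IImageProcessor _processor;

    public ImageWorkflow(IImageProcessor processor) => _processor = processor;

    public async Task<byte[]> Process(byte[] image)
    {
        // Abort the distributed operation if it takes longer than 5 minutes,
        // instead of relying on a platform-level timeout.
        using (var cts = new CancellationTokenSource(TimeSpan.FromMinutes(5)))
        {
            return await _processor.ProcessImage(image, cts.Token);
        }
    }
}
```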

7. Easier Lift-and-Shift

Whether a C# application is not cloud-ready or is a cloud monolith, D·ASYNC facilitates moving to the cloud or splitting a monolith into microservices with less code refactoring – a cloud service just needs a contract in the form of an interface with a set of async methods following certain rules. Such a code adaptation process can be called “lift-and-shift”, and it works exactly as if the application had been architected for the cloud in the first place.

8. Cuts development time

With the benefits described above, the D·ASYNC technology can significantly reduce the time needed for writing distributed services and workflows, including testing, maintenance, learning, and comprehension. For example, based on a very basic sample workflow with 10 steps, writing a few async/await functions takes about 5x less time and 2x less code than writing 10 separate functions conveying a shared context, and, most importantly, following a natural code flow helps any other developer understand what your code does much faster. Internet resources are full of great ideas and examples on how to wire up different technologies to create a distributed system or a workflow, but again, it takes time to learn, experiment, and support. Instead of maintaining your own version of such a system, it tends to be more beneficial to let other developers build and maintain that complexity, and D·ASYNC is designed to separate those concerns from the actual use of a distributed system.

The time and code size savings can vary per person and complexity of a problem. The numbers provided above are solely based on personal experiments and don’t represent any statistical data.

Debugging the code becomes more natural as well. You can potentially attach a debugger to multiple processes that host various services and/or instances of a distributed system, and use regular means to step through the code, even though it does not run in a single process.

9. Business growth

Let’s not forget the economic aspect that any technology brings to the table, which might be a crucial factor for small businesses and startups, where research activities, domain experts, and time are usually not affordable. A cloud application does not consist purely of a definition of services and workflows, but if a business finds a way to cut corners at least on that, it can give an additional boost in a competitive market. D·ASYNC puts less pressure on architects and developers during the initial phase of choosing the right stack of technologies and, like any abstraction layer, leaves options open to swap the platform in the future when a product reaches its critical mass of users. Even if down the road you decide to opt out of D·ASYNC in favor of another technology, you are still left with a perfectly valid software application that can run on a single machine without any code refactoring.

10. Brings dream closer to reality

How nice would it be to just commit simple code to a source code control system (SCCS) and have it automatically deployed to the cloud, running in a distributed, auto-scaled manner? Well, we do have that today. But not exactly. Take, for example, the combination of ASP.NET, Azure App Service, and Visual Studio Team Services’ (VSTS) integrated Continuous Integration / Continuous Delivery (CI/CD) – you push the code of an ASP.NET application to the SCCS via VSTS, and the CD pipeline compiles it and deploys it to Azure App Service. In that example the application still depends on a concrete framework (ASP.NET in this case), and it still has to use an alternative mechanism to run distributed workflows. D·ASYNC tries to close that gap and unite technologies by making the application code universal regardless of where it’s deployed. D·ASYNC brings us closer to a reality where you simply write C# classes with async methods and push them to the SCCS. Period. The rest is a matter of infrastructure optimization, which might involve machine learning, for instance, to dynamically re-scale and re-organize various pieces of an application with the help of a service mesh and without any manual intervention.

D·ASYNC on Azure Functions demonstrates the vision in action.

Other considerations

Despite the advantages described above, D·ASYNC should not be viewed as an ultimate replacement for existing technologies; rather, it’s designed to extend them, serving as an additional layer of abstraction between application code and a distributed platform. The complexity of a distributed system is just shifted into a concrete platform implementation, where the interface to any such platform from a developer’s standpoint is always the same, which helps keep the two concerns separate. Every problem might have its own unique solution, and D·ASYNC might not offer specific features or meet certain requirements to tackle all of them in the best way possible. However, if it can reduce complexity and save a lot of effort in 80% of cases, that can be good enough to balance out the remaining 20% of sophisticated work.

 



D·ASYNC syntax mapping

The idea behind the D·ASYNC technology is to use C# syntax, OOP paradigms, and design patterns as an abstraction layer for describing a distributed application. Here are some basic mappings:

  1. The basics. And resiliency.
/* This is your 'service' or 'workflow'. */
public class BaristaSimulationWorkflow
{
  /* This is a 'routine' of a workflow. */
  public virtual async Task Run()
  {
    /* This will call a sub-routine and save the state of the current one. */
    var order = await TakeOrder();
    
    /* If the process terminates abruptly here, after a restart the routine continues at the exact point without re-executing previous steps. Any async method is compiled into a state machine, so it's possible to save and restore its state and context. */
    
    var cup = await MakeCoffee(order);
    
    /* Essentially this is an Actor Model of a scalable distributed system. A routine maps to an actor, because an async method compiles into a state machine (which has its state), and a routine can call sub-routines - same as an actor can invoke other actors, where async-await is the perfect candidate for a Message-Oriented design. */
        
    await Serve(cup);
  }
  
  /* This is a 'sub-routine' of a workflow. */
  protected virtual async Task<Order> TakeOrder();
  
  protected virtual async Task<Cup> MakeCoffee(Order order);
  
  protected virtual async Task Serve(Cup cup);
}
  2. Inter-service communication, dependency injection, and transactionality.
/* Declaration of the interface of another service that might be deployed in a different environment. */
public interface IPaymentTerminal
{
  Task Pay(Order order, CreditCard card);
}

public class BaristaWorker
{
  private IPaymentTerminal _paymentTerminal;

  /* Another service/workflow can be consumed by injecting as a dependency. All calls to that service will be routed to that particular deployment using its communication mechanism. All replies will be routed back to this service. This is where Dependency Injection meets Service Discovery and Service Mesh. */
  public BaristaWorker(IPaymentTerminal paymentTerminal)
  {
    _paymentTerminal = paymentTerminal;
  }
  
  protected virtual async Task<Order> TakeOrder()
  {
    Order order = ...;
    CreditCard card = ...;
    /* Simple call to another service may ensure transactionality between two. That complexity is hidden to help you focus on the business logic. */
    await _paymentTerminal.Pay(order, card);
    /* And again, state is saved here for resiliency. */
  }
}
  3. Scalability: Factory pattern and resource provisioning.
public interface IBaristaWorker : IDisposable
{
  Task PerformDuties();
}

public interface IBaristaWorkerFactory
{
  Task<IBaristaWorker> Create();
}

public class CoffeeShopManager
{
  private IBaristaWorkerFactory _factory;
  
  public CoffeeShopManager(IBaristaWorkerFactory factory)
  {
    _factory = factory;
  }
  
  public virtual async Task OnCustomerLineTooLong()
  {
    /* Create an instance of a workflow, where 'under the hood' it can provision necessary cloud resources first. That is hidden behind the factory abstraction, which allows you to focus on the business logic and put the infrastructure aside. */
    using (var baristaWorker = await _factory.Create())
    {
      // This can be routed to a different cloud resource
      // or deployment, which enables dynamic scalability.
      await baristaWorker.PerformDuties();
      /* Calling IDisposable.Dispose() will de-provision allocated resources. */
    }
  }
}
  4. Scalability: Parallel execution.
public class CoffeeMachine
{
  public virtual async Task PourCoffeeAndMilk(Cup cup)
  {
    /* You can execute multiple routines in parallel to 'horizontally scale out' the application. */
    Task coffeeTask = PourCoffee(cup);
    Task milkTask = PourMilk(cup);
    
    /* Then just await all of them, as you would normally do with TPL. */
    await Task.WhenAll(coffeeTask, milkTask);
    
    /* And that will be translated into such series of steps:
    1. Save state of current routine;
    2. Schedule PourCoffee
    3. Schedule PourMilk
    4. PourCoffee signals 'WhenAll' on completion
    5. PourMilk signals 'WhenAll' on completion
    6. 'WhenAll' resumes current routine from saved state. */
  }
}
  5. Statefulness and instances.
/* This service has no private fields - it is stateless. */
public class CoffeeMachine
{
}

/* This service has one or more private fields - it is stateful. */
public class BaristaWorker
{
  private string _fullName;
}

/* Even though this service has a private field, it is stateless, because the field represents an injected dependency - something that can be re-constructed and does not need to be persisted in storage. */
public class BaristaWorker
{
  private IPaymentTerminal _paymentTerminal;

  public BaristaWorker(IPaymentTerminal paymentTerminal)
  {
    _paymentTerminal = paymentTerminal;
  }
}
 
/* Most likely this factory service is a singleton; however, it creates instances of a service, which can be, for example, a multiton. */
public interface IBaristaWorkerFactory
{
  Task<IBaristaWorker> Summon(string fullName);
}
  1. Integration with other TPL functions.
public class BaristaWorker
{
  protected virtual async Task<Order> TakeOrder()
  {
    var order = new Order();
    
    order.DrinkName = Console.ReadLine();
    
    /* Normally, 'Yield' instructs the runtime to re-schedule the continuation of an async method, giving other work items on the thread pool an opportunity to execute. Similarly, the D·ASYNC Execution Engine will save the state of the routine and schedule its continuation, possibly on a different node. */
   
    await Task.Yield();
    
    /* I.e. if the process terminates abruptly here (after the Yield), the method will be re-tried from this exact point without calling Console.ReadLine again. */
    
    order.PersonName = Console.ReadLine();
    
    /* No need to call 'Yield' here, because this is the end of the routine, whose result will be committed upon completion of the last step. */
    
    return order;
  }

  public async Task ServeCustomers()
  {
    while (!TimeToGoHome)
    {
      if (!AnyNewCustomer)
      {
        /* The delay is translated by the D·ASYNC Execution Engine into saving the state of the routine and resuming it after the given amount of time. This can be useful when you poll for something, for example, but don't want the method to be volatile (lose its execution context) and/or to hold compute and memory resources. */
        await Task.Delay(20_000);
      }
      else
      {
        ...
      }
    }
  }
}

 

As you can see from the examples above, the code does not specify whether your application runs in a single process or is distributed across multiple nodes. That gives you the flexibility to choose which cloud/distributed platform your application runs on without being locked into any particular technology, and without having to re-write the code if you want to switch. In addition, it helps to adapt existing monolithic applications to a microservice ecosystem faster – a simplified “lift-and-shift” that works exactly as if the application had been architected for the cloud in the first place.

At the moment of writing this article, a few concepts are not fully fleshed out and may change in the future. More concepts are yet to come.

 



What is D·ASYNC?

D·ASYNC (also D-ASYNC or DASYNC, where D stands for Distributed) is an ambitious framework for writing cloud-native distributed applications in the C# language, using just its syntax and the paradigms of Object-Oriented Programming, with the help of the built-in support for the Task Parallel Library (the async and await keywords).

Motivation

Having worked on several .NET projects that involve distributed systems, I’ve noticed that we did the same thing over and over again – built a resilient framework that runs a distributed workflow. On one hand this is good, because such a framework is tailored to the project’s needs and can be easily extended; on the other hand it is not, because (as with any internal tool) it often lacks documentation, there is no community around it, only a small group of people know how it works, and things get even uglier when it grows too complex over time or the original authors quit.

If there were a framework that any software developer could easily pick up, it would save a lot of effort for all of us, and it would be much easier for the community to talk in terms of the same technology. There are technologies that help to build distributed workflows, but all of them miss one part – something almost integrated into the C# language and the .NET framework, so that anyone would know how to use it right away. So… why not use the syntax of C# itself!

Core Concept

A decade ago we had to build our own libraries to run tasks in parallel on a thread pool; then the Task Parallel Library (TPL) evolved and quickly became part of the .NET framework, and today we have the very convenient async-await syntactic sugar that, behind the scenes, decomposes your code into finite state machines.
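To make that decomposition concrete, here is a minimal sketch (with hypothetical names, not part of any framework) of the two logical states the compiler carves out of a trivial async method:

```csharp
using System;
using System.Threading.Tasks;

public static class AsyncSugarDemo
{
    // The compiler turns this method into a finite state machine
    // with one suspension point at the await.
    public static async Task<int> ComputeAsync()
    {
        int a = 1;           // state 0: everything before the first await
        await Task.Yield();  // suspension point between the two states
        return a + 1;        // state 1: the continuation
    }
}
```

The local variable `a` survives the suspension because the compiler hoists it into a field of the generated state machine struct – exactly the kind of state a distributed runtime would want to persist.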

If you take a look at a distributed workflow, in general it follows exactly the same idea as an async method: you decompose the workflow into a state machine, where each state transition should ideally be an idempotent action, then execute state transitions on available nodes, where affinity to a particular process or virtual/physical machine is not guaranteed (just as the TPL does not guarantee execution on a particular thread). The only differences are that when a state transition in a distributed workflow fails, you might want to re-try it (an aspect that should be built into the framework), and that to be able to run a workflow action in any process, you must be able to save and restore the state of the state machine.
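As an illustration of that idea (this is a hypothetical sketch, not a D·ASYNC API), the persisted state of such a state machine is plain data – the next step plus captured locals – and a retry loop around an idempotent transition is straightforward:

```csharp
using System;
using System.Collections.Generic;

// The saved state of a workflow: any node can load it, run one
// idempotent transition, and save it back.
public class WorkflowState
{
    public int Step;  // which transition runs next
    public Dictionary<string, object> Locals = new Dictionary<string, object>();
}

public static class TransitionRunner
{
    // Re-tries a failed transition, as a resilient framework should.
    public static void Run(WorkflowState state,
                           Action<WorkflowState> transition,
                           int maxAttempts = 3)
    {
        for (int attempt = 1; ; attempt++)
        {
            try
            {
                transition(state); // idempotent, so retrying is safe
                state.Step++;      // 'commit' the transition
                return;
            }
            catch when (attempt < maxAttempts)
            {
                // swallow the transient failure and retry
            }
        }
    }
}
```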

Building state machines by hand can be very hard, tedious, and error-prone, and it’s not easy to follow the flow across a large number of small functions that represent state transitions – but it’s very easy with the async modifier:
    async Task FiniteStateMachine1()
    {
      // state transition 1
      await FiniteStateMachine2();
      // state transition 2
    }

If you look at how the C# compiler generates state machines, you can notice that the await keyword serves as a delimiter between state transitions. Hence, if you can capture and restore the state (including input arguments and local variables) of such functions and control the execution of the underlying state machine (suspend before await, resume after await), then you can build a framework that runs your async functions as a distributed workflow. This is the key to the concept of the D·ASYNC technology, although there is much more beyond that.
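To make the “await as a delimiter” point concrete, here is a deliberately simplified, hand-written analogue of what the compiler generates for FiniteStateMachine1 above; the _state field is exactly what an engine would need to capture and restore (the real generated code is more involved):

```csharp
using System.Threading.Tasks;

public class HandRolledMachine
{
    private int _state;          // captured/restored by a hypothetical engine
    public int TransitionsRun;   // for illustration only

    public async Task MoveNext()
    {
        if (_state == 0)
        {
            TransitionsRun++;            // state transition 1
            _state = 1;                  // state could be persisted here
            await FiniteStateMachine2(); // the suspension point
        }
        if (_state == 1)
        {
            TransitionsRun++;            // state transition 2
            _state = 2;                  // done
        }
    }

    private Task FiniteStateMachine2() => Task.CompletedTask;
}
```

If _state (and any hoisted locals) were serialized before the await and deserialized afterwards, the second transition could just as well run in a different process.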

Abstraction Layer

The D·ASYNC engine acts as a PaaS middleware between a .NET application and a distributed platform, where the C# language is the contract: classes represent services, and their async methods comprise a resilient distributed workflow. The engine hooks into the .NET runtime of your application to control the execution of the compiler-generated finite state machines, then breaks the normal execution flow and delegates state transition intents, with all contextual data, to a distributed platform. It’s up to the platform to decide how to serialize the data and how to distribute the load between multiple nodes. Regardless of how that is done, the D·ASYNC engine reconstructs all necessary state machines and services from a state transition intent, then resumes the execution of the application from the saved state.
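Purely as an illustration (this is not a public D·ASYNC type), the contextual data such a state transition intent would have to carry for any node to reconstruct and resume a routine could look roughly like this:

```csharp
using System.Collections.Generic;

// Hypothetical shape of a state transition intent: enough data to
// locate the service and method, and to restore the state machine.
public class StateTransitionIntent
{
    public string ServiceName;                // maps to a service class
    public string MethodName;                 // maps to an async method
    public string RoutineId;                  // identifies the routine instance
    public int ContinuationState;             // which await to resume after
    public Dictionary<string, object> Locals; // serialized arguments and locals
}
```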

Normally, you would use a platform API directly in the application code, but D·ASYNC provides a very high level of abstraction and draws a strict boundary between the two. For example, exceptions thrown in user code always stay in user code, and exceptions thrown in engine/platform code always stay there and never propagate to the user level.

In Short

This technology combines a general-purpose programming language with the features of a workflow definition language to facilitate authoring of cloud-native applications.

