Get started with FRENDS, the best .NET Hybrid Integration platform.

On this page you can read about the FRENDS architecture and functionality in the Concepts section to get a better understanding of the features of FRENDS and how they work.

You can also access the developer Reference documentation for detailed descriptions on how to utilize each feature in development and operations.

Quick Start

If you are looking to get started quickly with FRENDS, you should start by looking into:

What is FRENDS?

FRENDS is a hybrid integration platform with a focus on flexibility and providing a clean DevOps experience to experienced and newer integration developers alike.

The main focus of a hybrid integration platform is to bring together various systems and services located both in the cloud and on-premise data centers. With FRENDS you can model the required integration flows between these systems using a BPMN based visual GUI.

frends process

FRENDS accesses both cloud and on-premise systems through a distributed agent architecture, where multiple agents in multiple environments communicate with one another through a centralized hub.

FRENDS Cloud Infra

Combining these two approaches results in a platform where you can simply install one agent on-premise and one in the cloud, dictate in the visual BPMN diagram which parts of the integration flow should be executed on which agent, and FRENDS will take care of the rest.



Triggers are registered by the Agent and can only activate when the hosting server observes the triggering event.

Triggers are an integral part of any FRENDS Process, as they are the way a process can be started dynamically based on an event the Agent is able to receive. The trigger also acts as the starting point for the process and the first step in the Process diagram.


These events can include:

  1. A file is created in a designated folder
  2. A web service call is received by the Agent
  3. A message appears in a queue the Agent is subscribed to
  4. A schedule is activated in its time window
  5. A manual message is sent by the user through the FRENDS UI

This means that if you are creating a REST API, you should configure an HTTP Trigger or an API Trigger as the start event for that integration process. Likewise, in the case of a batch job you should most likely use a Schedule or a File Trigger.

Using Triggers

Triggers are used when developing an integration process, and they are the first element on the process editor canvas. You can then configure the trigger to match the integration scenario being developed.

Note that you can have as many triggers as you like in any process, and you can combine different trigger types. This means that you can create an integration process that runs whenever a file is created in a folder AND at least every 6 hours.

Triggers also use the #hashtag notation to offer relevant information about the event that initiated the trigger; for example, the File Trigger offers the name and metadata of the file it was triggered by. You can then use this information to build logic in the process itself.

Different Trigger Types

Currently FRENDS supports six different trigger types:



When developing processes or integration flows in FRENDS, you will need to deploy the newly created integration flows to an Environment in order for them to be executed. A common use case for this is the traditional path of:

  1. Developing an integration process in the Development environment
  2. Deploying it for testing to the Test environment
  3. Running test scenarios for the integration flow
  4. After the tests have passed, deploying it to the Production environment

FRENDS enforces this best practice with the deployment and Environment architecture.

Deploying a Process

The deployment of a process is done from the FRENDS UI Process View by selecting the desired processes and choosing the "Deploy" action from the Actions dropdown menu. You will also be asked to choose which version of the process you wish to deploy and into which environment. The same method is used to roll back previous deployments by simply selecting an older version.

When the deployment is initialized, FRENDS automatically sends a notification to the selected Environment, which causes all the Agents in that environment to download the selected version of that process and take it into use.

This way the whole deployment process is automated with a single click of a button.

Note that you can only deploy integration flows from the Development environment to other environments.
Note that triggers will also be activated by default when deploying processes to an Environment. You can prevent this by unchecking the option in the deploy dialog during deployment.

Monitoring Rules

When you have created an integration Process, you might want to activate a monitoring rule for it to keep track of specific data or to alert you when there is a problem.

For this purpose FRENDS offers monitoring rules, which inspect the execution of a set of processes as a whole and focus on the data being processed instead of the technical success of the process.

This means that instead of trying to figure out inside a single process whether it has succeeded in, for example, delivering orders to their destination, you can gather up all the orders from a set period of time and see if enough have been delivered.

A good example would be a monitoring rule that gathers up the amount fields of all orders across all processes and checks that at least $10,000 has been successfully processed every 24 hours.

Monitoring rules also provide an analytical view into the data they are monitoring; in the example below you are able to see the number of cities where the relative humidity was less than 50%.


Using Monitoring Rules

In order to take advantage of monitoring rules, you need to use Promoted Variables in the integration processes you want to monitor.

You are then able to use these Promoted Variables to set up monitoring rules on that set of data.



Processes are configured using the FRENDS UI, stored in the database and executed as compiled code in FRENDS Agents.

A FRENDS process is the common name for all integration flows inside FRENDS. A process is a combination of visual configuration using the BPMN process builder canvas and Task configuration inside that canvas.

frends process

Process Types

There are two kinds of processes available in FRENDS:

  • Regular Processes
  • Sub-Processes

A regular process is used to create the integration flow functionality and acts as living visual documentation of what that integration flow does. A sub-process can be used to wrap smaller parts of processes into reusable microservice-like components shared across other processes.

This enables a process hierarchy where a FRENDS process executes a sub-process, which in turn executes another sub-process, and so on. This can be used to create an orchestration layer and to isolate, for example, access to a specific system inside a single sub-process.

Main Process
  Sub Process
  Sub Process
    Sub Process
      Sub Process
  Sub Process
Main Process

Process Functionality

A process always contains a starting point, some functionality, and an ending point. The flow of execution is dictated by arrows connecting the different elements.

Starting Point




End Point


These three parts combined create a ready-made integration process which executes the desired integration flow.


Creating Integration Processes

Creating integration processes is the most important functionality in FRENDS. You do this by following the process functionality logic above and building the desired integration flow using FRENDS Tasks, passing information from one task to another.

You can also use assisting functionality to create more elegant integration processes by utilizing decisions, loops, scopes, parallel executions, scripts and more.

Create your processes as clearly as possible, as the process diagram will act as living documentation for operations as well as future developers on what the process is doing.

Process Elements

For a full reference list of all the available process elements, see the Process Elements Reference.

Process Instance


The process instances are stored in the FRENDS database and viewed through the UI.

A process instance is the single execution of an integration process created in FRENDS. The process instance is used for monitoring and auditing purposes since the process instance stores all the information relating to the execution of that specific process in that specific instance.

As an example when a process is being built the view looks like this...


... and when it's finished the process instance shows the data and the execution path the process took during that execution:


Finding Process Instances

When you have built your integration process and need to find a specific process instance tied to it, you can use the Process page in the UI to search and filter process executions to find the instance you are looking for.

A good example would be searching for specific data in the process execution, such as the name of the city being processed:



The dashboard is a part of the FRENDS UI which gives users a widget-based, configurable splash page when first loading FRENDS. This splash page enables users to configure different kinds of statistical views into the day-to-day operation of FRENDS, giving a sense of the current state of integrations at a single glance.


The dashboard data is stored in the FRENDS database, and the configuration of the widgets is stored in the user's browser.


The dashboard contains multiple different widgets which can be added, removed, resized and repositioned based on the user's preference. Each widget configuration is saved locally in your browser's storage, which means that the widgets are unique to each browser and each user.

Process Count Widget



The process count widget can be used to show the number of either failed or successful process executions in a chosen environment over a chosen period of time. For example, a user could configure a process count widget to show the number of failed processes in the Production Environment over the last 7 days.

Process Graph Widget


The process graph widget can be used to further elaborate on the number of failed or successful processes by drawing them in an area chart, giving a visual representation of the number of executions over a period of time.

Like the Process Count Widget, the Process Graph Widget can be configured to only display specific environments.

Error List Widget


The error list widget is used to display and group any problems that might have occurred in FRENDS and can be used to quickly navigate to the problematic integration process or environment.

The error list widget is able to display:

  • Agent related errors, such as connection problems
  • Process execution errors
  • Other possible errors in the FRENDS UI or maintenance tasks

Environment Variables


Environment variables are configured through the FRENDS UI and stored securely in the database.

Environment variables are optional static configuration information attached to a specific Environment. They are most commonly used to store integration-process-related information such as the passwords and usernames of the systems being connected to.

You can create environment variables in different categories to help organize similar variables together.

  • Key-Value-Pairs, for storing simple information such as connection strings
  • Hierarchical Groups, for storing all the information relating to a specific object such as ERP password, username and server location
  • Lists, for storing repetitive information such as the IP addresses of your client servers

There are no limitations on what you can store in an environment variable.

Storing Information

The main advantage of using environment variables is that after configuring them you can simply refer to an environment variable in your integration Process to access the configured value. If you need to change that value, you can update it on the fly on the environment variables page.

Environment Specific Information

The other benefit of using environment variables is that you can configure them to be environment specific. This means that you can use a different password, or even a different server, for an integration process in the test environment than you do in the production environment. This allows for seamless development, testing and production lifecycle because the configuration of each environment is tied to an appropriate environment variable.

Environment Variable Use Cases

  1. You only need to keep track of these variables in a single place
  2. If a variable changes you only need to update it once
  3. You can securely store sensitive information such as passwords as environment variables
  4. You can have different variables for different environments



FRENDS tasks are configured in the user interface, stored in the database and executed as a part of a process on the FRENDS Agent.

FRENDS Tasks are the building blocks with which you build FRENDS Processes. They are meant to be reusable, microservice-like components which can be utilized for connector-like actions through parametrization.

For example, one FRENDS Task could read files from a directory and another could write to a database; by connecting these two tasks together, you can create an integration process, consisting of two tasks, which reads files and writes their contents to a database.


Configuring FRENDS Tasks

Before you can use FRENDS Tasks to build an integration process, you need to configure them; the required configuration depends on the task being used.

As an example, configuring a task to read files would require you to give the file name and directory location, while a task to write to a database would require you to specify the SQL query used for the write operation.

All the configuration is done using the FRENDS Parameter Editor.


FRENDS supports generating processes from OpenAPI (Swagger) 2.0 specifications. Processes within an API specification can be managed and deployed as a unit. Once a valid OpenAPI specification has been imported into FRENDS, you can easily tweak it; the API management is straightforward and does not require deep insight into the workings of Swagger.


FRENDS can generate Processes for OpenAPI operations that take in the parameters defined for the operation in the specification, as well as generate samples of the expected responses. A process bound to an OpenAPI operation has a generated API Trigger.

For more information on OpenAPI specifications, see the official documentation. FRENDS supports OpenAPI 2.0.

API Discovery

Active processes that are part of an API Specification can be found, explored and tested from the Agent API Discovery page. By navigating to https://<agent url>:<port>/api/docs/, a list of active specifications will be shown. By navigating to a Specification, each active operation can be explored and tested.


API Keys

Access to APIs can be managed efficiently with API Keys. API keys are generated per environment and use Rulesets to give access to HTTP endpoints according to their path and request method. This makes it possible to quickly give an API key access to a full API Specification by allowing access to the API Specification's base path.



In FRENDS, environments are logical containers which connect the created integration processes to the actual executing agents; they are a result of the distributed Agent architecture.

Environments always contain a number of FRENDS Agents and are used to isolate as well as group agents together. This creates a system where FRENDS can operate on separate "sub-installations" to fulfill the needs of different use cases during an integration process's lifecycle.

A usual scenario of this involves three logical environments in FRENDS:

  • Development environment
  • Testing environment
  • Production environment

These environments are then used to perform specific tasks on an integration flow depending on the environment in question, for example running test scenarios on an integration process in the Testing environment.

Environment Operations

As the different environments are isolated from one another, you can do the following actions within each environment:


A FRENDS Agent is the part of FRENDS which actually executes the integration flows, or processes. Each FRENDS Agent is an independent actor and does not rely on any other component to function.


The Agents are connected to the FRENDS UI and database through a Microsoft Service Bus queue, through which they receive updates on the integration flows they are hosting and report back execution statistics.

Each Agent is also always assigned to a single logical Environment.

Agent Updates

Each Agent is able to dynamically update the integration flows it is hosting by receiving an update notification through the Service Bus queue. This means that once you click "Deploy" in the UI, the Agent automatically retrieves the desired version of an integration flow or process.

High Availability (HA)

When multiple Agents are hosted in the same Environment, the Agents within that Environment form an Agent farm. This requires that the Agents have access to a common SQL database, which is configured on the Environment page. Having Agents in a farm configuration activates the HA functionality, which causes the Agents to share load with one another and to take over the execution responsibilities of a failed Agent.

Note that Agents still need a load balancer to be installed in front of them to split HTTP traffic in on-premise installations.

API Trigger

API Triggers are specialized HTTP Triggers bound to a Swagger operation. API Triggers can only be created through API Management. API Triggers largely share their configuration with HTTP Triggers.

API trigger


HTTP Method

The HTTP method is locked to that provided in the Swagger operation, and can not be changed. Valid values are GET, POST, PUT, DELETE, HEAD, OPTIONS and PATCH.


The URL path is locked to that provided in the Swagger operation and cannot be changed. Path parameters are allowed. If the path parameters are of type integer or boolean, then the path will be restricted to containing only those types.

This enables having endpoints like /api/pet/{id} and /api/pet/getStatus active at the same time with no collision, if the {id} parameter is of type integer. However, having /api/pet/{name} and /api/pet/getStatus active at the same time would not be possible if the {name} parameter were of type string.

Allowed protocols

API triggers can be configured to accept requests with HTTP, HTTPS or both. If a request is made with a protocol that is not allowed, the reply will be Forbidden (403).


API triggers can use four different kinds of authentication:

  • None - No authentication at all
  • Basic - Authenticate with HTTP basic authentication
  • Certificate - Use a client certificate to authenticate
  • Api key - Authenticate with an API key

We strongly recommend using authentication only over HTTPS.

Basic authentication authenticates the user either against the Active Directory or the local users. Which one is used depends on the FRENDS Agent service user. If the agent uses a local user account, users are authenticated against the local machine users. If the agent uses an AD user account, users are authenticated against the AD users. The user name and password need to be encoded with UTF-8 before being converted to Base64 for the basic authentication header.
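As a client-side illustration, the header described above could be built like this. This is a minimal C# sketch; the username, password and agent URL are placeholders, not values from FRENDS:

```csharp
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;

class BasicAuthClient
{
    static async System.Threading.Tasks.Task Main()
    {
        // Hypothetical credentials: encode as UTF-8 first, then Base64,
        // as required for the basic authentication header.
        byte[] credentials = Encoding.UTF8.GetBytes("integration-user:s3cret");
        string token = Convert.ToBase64String(credentials);

        using var client = new HttpClient();
        client.DefaultRequestHeaders.Authorization =
            new AuthenticationHeaderValue("Basic", token);

        // Placeholder endpoint; use HTTPS, as recommended above.
        var response = await client.GetAsync("https://agent.example.com/api/pet/1");
        Console.WriteLine(response.StatusCode);
    }
}
```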

Certificate authentication requires that the client certificate is valid for the FRENDS Agent user on the agent machine. Also the issuer for the certificate needs to be found in the agent user's Client Authentication Issuers certificate store.

Api key authentication uses an API key together with Rulesets to determine if the client has access to a URL. For more information, see API keys.

Cross-origin Resource Sharing

If there is a need to allow a certain page to trigger a process, it is possible with cross-origin resource sharing (CORS). Check the "Allow requests from these origins" checkbox, and define the allowed origins in the textbox. The * character allows calls from all origins.

Note: if the call does not come from the default port, it must be included in the origin. The origin making the call must also support CORS.


A read-only display of the swagger operation bound to the trigger.

Trigger Reference List

The API Trigger exposes the following references describing the incoming request:

  • Client IP: the IP of the client as a string
  • Cookies: the cookies associated with the request as a Dictionary<string,string>
  • HTTP method: the HTTP method type (e.g. GET, POST)
  • Request URI: the URI of the request
  • Username: the username associated with the caller. Only set if authentication is used. The following values are passed for the different types of authentication:
    Api Key: the name of the API key
    Basic authentication: the provided username
    Certificate: the certificate's SubjectName.Name field
  • Body: contains whatever is passed in the request body. If the body contains a JSON object, the properties will be accessible with dot notation. E.g., if the JSON string { "house": { "windows": 4 } } is passed in the body, it is possible to access the "windows" property.
  • Path parameters: automatic casting will be attempted if the parameters have been defined in the Swagger spec. Path parameters are mandatory and thus always populated. If the path /user/{id} has been configured, and the parameter id is of type int, the reference can be used directly for integer comparisons (for example, in a Decision expression a comparison such as > 3 would be usable).
  • Query parameters: automatic casting will be attempted if the parameters have been defined in the Swagger spec. If a parameter has a default value and the request does not contain the parameter, the default value will be passed to the process. Query parameters defined in the Swagger spec are always populated in the trigger, even if no value is provided.
  • Header parameters: automatic casting will be attempted if the parameters have been defined in the Swagger spec. If a parameter has a default value and the request does not contain the parameter, the default value will be passed to the process. Header parameters defined in the Swagger spec are always populated in the trigger, even if no value is provided.

You can try to access an optional reference from any of the references; if it is found the value will be returned, and if not the value will be set to null.

Automatic casting

Swagger parameters usually contain a type definition. Parameters of type integer, number or boolean will be cast to their corresponding .NET type (Int, Long, Float, Double or Boolean). For array type parameters, the array will be split using the separator defined in the Swagger parameter, and the array contents will in turn be cast according to their types. For example, an array parameter with a csv separator and content type integer, called with the content "1,2,3,4,5", will be accessible as a JArray containing integer values.

Intermediate Response


A Process can return a response to the caller before the Process is finished. This functionality is enabled by adding an Intermediate return element to the Process. When this element is executed, the caller will receive an HTTP response from the Process. This can, for example, be used when calling a long-running Process where the caller should be notified that the long-running task has started.

HTTP Response Formatting

The API Trigger returns the result of the executed Process as the HTTP response. The response varies according to the following conditions: when the Process' result is a string, the string is set as the body of the response. If it is an object, it will be returned either as JSON or XML depending on the request's Accept header, defaulting to JSON. For example, Accept: application/xml would produce an XML response, while Accept: application/json would produce a JSON response.

If the result is an object with the properties HttpStatusCode and Content, the result will be mapped to a response as follows:

HTTP Response
  • HttpStatusCode (int): Response status code
  • Content (string): The body of the response
  • ContentEncoding (string): The encoding for the body, e.g. utf-8
  • ContentType (string): ContentType header value, e.g. application/xml or application/json
  • HttpHeaders (KeyValuePair[]): Response headers
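Assuming the Process result is built as a C# expression, an object matching this mapping might be sketched as follows (the status code, body and header values are illustrative only):

```csharp
// Result object whose properties FRENDS maps onto the HTTP response.
new
{
    HttpStatusCode = 200,
    Content = "{ \"status\": \"ok\" }",
    ContentEncoding = "utf-8",
    ContentType = "application/json",
    HttpHeaders = new[]
    {
        new System.Collections.Generic.KeyValuePair<string, string>(
            "Cache-Control", "no-cache")
    }
}
```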

Http response

The process elements Return, Intermediate return and Throw all have the option to generate a pre-defined HTTP response. See HTTP Response results.

Schedule Trigger

If you need to start a process on a specific schedule, you can use a Schedule Trigger to define a schedule which will then start that process at the scheduled times. Schedules can be configured to run at specific intervals within set time and date ranges, or to execute once at given dates and times.


Any process can contain multiple Schedule Triggers if you need different or overlapping schedules. To add multiple schedule triggers, simply add a Start Element to the Process canvas and connect it to the first step of the process.

Configuring Schedule Triggers

To configure a schedule trigger you should fill out the following required fields:

  • Start Time
  • End Time
  • Time Zone
  • Is Repeated Every
  • Run Only One Scheduled Instance At a Time


Advanced Configuration

After configuring the basic Schedule Trigger properties you can access the Advanced Options to further configure your trigger with specific date based logic such as:

  • Execute on specific week days
  • Execute on specific days of month
  • Execute on specific months
  • Execute on first or last days of the month
  • Execute on weekends
  • Execute on weekdays
  • Execute every N days
  • Execute every N weeks
  • Exclude pre-chosen days from the schedule

You can also combine the above to create complex schedules.


API Management

FRENDS supports importing and editing Swagger Specifications. Swagger Specifications can be used to generate Processes that serve as API endpoints. Processes that belong to a Specification can be managed and deployed as a unit.


The API Management page can be found under "API".

Importing a Swagger definition

FRENDS supports importing Swagger 2.0 Specifications in JSON format. YAML markup is not supported. If your Swagger Specification is in YAML, use for example the online Swagger Editor to download a converted version. This tool can also be used for creating Swagger Specifications that can be imported into FRENDS.

It's possible to modify an imported Swagger Specification using external tools. If you want to update an imported Swagger Specification, just import the updated specification and FRENDS will automatically create a new version of the Specification. Note that the one thing that cannot change between imports is the base path of the Specification - if the base paths differ, a new API will be created rather than a new version of the old API.

API Deployment and Version Control

API versions exist in two different states. The version seen in the Development environment is always the current Development version of an API. In all other environments, published versions are shown.

Development versions

A development version of an API is a version where the linked processes do not have locked-down versions. That means the user can update any process that is part of the API without taking any additional actions. The development version can also have its Swagger specification modified. When an API is ready for deployment, a Published version is created.

Published versions

A published version contains everything a development version does, but it no longer allows any changes. It locks down the process versions in use, and the Swagger Specification can no longer be changed. A published version can be deployed as a unit, and it can also be used to roll back the Development version to a previous point.


When deploying the user can choose to deploy a previously Published version, or create a new Published version from the current Development version. The Deployment dialog allows the user to see which processes will be deployed as well as the Swagger Specification. If a Published version is no longer valid, for example due to a used process being deleted, then it can no longer be Deployed.


Editing Swagger

FRENDS supports editing imported Swagger Specifications. The base path of a Swagger Specification cannot be changed once it has been imported.

Note that editing a Swagger Specification will override the current Swagger Specification; it will not create a rollback point by default. If you want a rollback point before editing, press Deployment and choose "Save and deploy". This will create a new version of the Specification and allow you to roll back at a later stage. It is not mandatory to go through with the Deploy step.

Creating API processes

Once a Swagger Specification has been imported, FRENDS can create Processes matching the API operations defined in the specification.

A process generated from an API Specification will contain an API Trigger. The Trigger will give the process access to all expected variables passed to the endpoint, and will even cast them to the correct type.

A generated process will also come with pre-generated responses. For example, if an endpoint is defined to return a Pet object on success and an Error object otherwise, then the process will contain both of these responses upon creation, complete with expected parameters (as long as the Swagger Specification contains all the required information, of course). Whatever happens in between the trigger and the responses is up to the user.

Note that some settings that are defined in the Swagger specification are set on a process level. Supported schemas as well as Authentication will be set by the API Trigger, and might differ from what has been defined in the Specification.

Unlinking a process

A process that has been created from an API Operation is linked to the API Specification that created it. That means the process will be deployed when the API is deployed, and the API Deployment will make sure the right version of that process is deployed.


If you wish to unlink a process from an API, for example to create a new process for that API Operation, simply click "Unlink Process". An unlinked process can easily be re-linked to an API.

On Swagger Operation changed

FRENDS detects when a Swagger Operation has changed for a Process with an API Trigger. This can happen when importing a new version of a Specification or when editing the Swagger Specification - for example, an operation can gain an extra parameter, or there can be a schema definition change.


FRENDS offers the functionality to update the Process' API Trigger to match the new Swagger Specification. Note that this only updates the trigger - if the expected responses have changed, it is up to the user to modify those.

In case an operation is removed entirely, the process gets unlinked from the Specification. API processes that are unlinked from an API are still visible in the view.


HTTP Response types

Responses are defined in a Swagger operation by HTTP status codes. For codes beginning with 2 or 3 (success / redirection), a Return element will be generated. For others, a Throw element will be generated.

Note that the behaviour of these two elements is different.

A Throw element will end the process in an error state, and if used in a Scope or a loop, it'll end the process execution without continuing. Throw will also cause Error handlers to trigger.

A Return will end the process in a success state, and execution will continue if it is used in a Scope or loop.

If you need to send out an error response but do not want the behaviour that comes with a Throw element, just add a Return element with the same settings as the Throw.

Deleting API Specifications

Deleting from a non-Development environment only removes the deployed processes and the deployed API Specification - they will still exist in the Development environment and can be re-deployed from there. Deleting from the Development environment removes the API as well as the linked processes. It's only possible to delete the API from the Development environment if it's not deployed in other environments.

Swagger features not supported

  • Form parameters are not supported.
  • File parameters are not supported.
  • Only parameters defined on an operation level will be available for auto-complete in the Process editor.

Custom Tasks

FRENDS fully supports creating your own task packages. To do this you must create a .NET library which is then wrapped in a NuGet package file and uploaded into FRENDS through the Tasks page.


Creating a FRENDS Task Package

Tasks must be in NuGet's nupkg format, and the assembly name and package Id must be identical, e.g. Frends.TaskLibrary.dll and Frends.TaskLibrary. By default, each public static method with a return value (void methods are not accepted) inside the assembly will be added as a Task. The methods cannot be overloaded, e.g. you cannot have both Frends.TaskLibrary.CreateFile(string filePath) and Frends.TaskLibrary.CreateFile(string filePath, bool overwrite).

The task parameters may use the DefaultValueAttribute to provide a default value which is shown in the editor. Remember that parameters are expressions in the editor, so the default values need to be provided as such, e.g. "true" for a boolean value, or "\"C:\Temp\"" for a string containing a file path.

Also, if a parameter should not be logged, the PasswordPropertyTextAttribute should be added; the value of the parameter will then be replaced with << Secret >> during logging. The parameters may have a more complex hierarchical structure, but we recommend using at most two levels of hierarchy.
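As a sketch of how these attributes fit together, a parameter class for a hypothetical task could look like the following (the class and property names here are illustrative, not part of FRENDS):

```csharp
using System.ComponentModel;

namespace Frends.TaskLibrary
{
    // Illustrative parameter class; names are hypothetical.
    public class WriteFileParameters
    {
        /// <summary>
        /// Target path. The default value is an editor expression,
        /// so the string literal is quoted inside the attribute value.
        /// </summary>
        [DefaultValue("\"C:\\Temp\\output.txt\"")]
        public string FilePath { get; set; }

        /// <summary>
        /// Overwrite an existing file; boolean expressions are given as-is.
        /// </summary>
        [DefaultValue("true")]
        public bool Overwrite { get; set; }

        /// <summary>
        /// Shown and logged as &lt;&lt; Secret &gt;&gt;. Note the explicit
        /// true argument: the attribute's parameterless constructor
        /// defaults to false.
        /// </summary>
        [PasswordPropertyText(true)]
        public string Password { get; set; }
    }
}
```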

For Example:

using System.ComponentModel;

namespace Frends.TaskLibrary
{
    /// <summary>
    /// File action type (nothing/delete/rename/move)
    /// </summary>
    public enum ActionType
    {
        /// <summary>
        /// Nothing is done to the file
        /// </summary>
        Nothing,

        /// <summary>
        /// File will be deleted
        /// </summary>
        Delete,

        /// <summary>
        /// File will be renamed
        /// </summary>
        Rename,

        /// <summary>
        /// File will be moved
        /// </summary>
        Move
    }

    /// <summary>
    /// File class
    /// </summary>
    public class File
    {
        /// <summary>
        /// File path
        /// </summary>
        public string Path { get; set; }

        /// <summary>
        /// Maximum size of the file
        /// </summary>
        public int MaxSize { get; set; }

        /// <summary>
        /// Password for unlocking the file
        /// </summary>
        public string Password { get; set; }
    }

    /// <summary>
    /// FileAction class defines what will be done to the file
    /// </summary>
    public class FileAction
    {
        /// <summary>
        /// Action to be done with the file
        /// </summary>
        public ActionType Action { get; set; }

        /// <summary>
        /// If ActionType is Move or Rename then To is the path to be used
        /// </summary>
        public string To { get; set; }
    }

    public static class Files
    {
        /// <summary>
        /// DoFileAction task does the desired action to file
        /// </summary>
        /// <param name="file">File to handle</param>
        /// <param name="action">Action to perform</param>
        /// <returns>Returns information if task was successful</returns>
        public static string DoFileAction(File file, FileAction action)
        {
            // TODO: change logic
            return $"Input values. Path: '{file.Path}', Max size: '{file.MaxSize}', Action: '{action.Action}', To: '{action.To}'";
        }
    }
}

All arguments specified for the method will be used as Task Parameters. If the argument is of class type, it will be initialized as a structure.

By adding a FrendsTaskMetadata.json file to the root of the NuGet package, unwanted static methods can be skipped by listing only the methods which are wanted as Tasks. For example the following json structure would only cause the DoFileAction to be considered as a Task:

{
    "Tasks": [
        {
            "TaskMethod": "Frends.TaskLibrary.Files.DoFileAction"
        }
    ]
}

XML Documentation

Custom Tasks can also be commented/documented in the code by using XML documentation comments. These comments will show up in the process task editor automatically if the documentation XML file is included inside the Task NuGet (if the NuGet Id is Frends.TaskLibrary, a file named Frends.TaskLibrary.xml will be looked for).

The file can be generated automatically, for example by enabling Build > Output > XML documentation file in Visual Studio. When the comments are queried, the Task Parameter definition is checked first; if it is not found, the type definition is checked.

<?xml version="1.0"?>
<doc>
    <assembly>
        <name>Frends.TaskLibrary</name>
    </assembly>
    <members>
        <member name="T:Frends.TaskLibrary.ActionType">
            <summary>File action type (nothing/delete/rename/move)</summary>
        </member>
        <member name="F:Frends.TaskLibrary.ActionType.Nothing">
            <summary>Nothing is done to the file</summary>
        </member>
        <member name="F:Frends.TaskLibrary.ActionType.Delete">
            <summary>File will be deleted</summary>
        </member>
        <member name="F:Frends.TaskLibrary.ActionType.Rename">
            <summary>File will be renamed</summary>
        </member>
        <member name="F:Frends.TaskLibrary.ActionType.Move">
            <summary>File will be moved</summary>
        </member>
        <member name="T:Frends.TaskLibrary.File">
            <summary>File class</summary>
        </member>
        <member name="P:Frends.TaskLibrary.File.Path">
            <summary>File path</summary>
        </member>
        <member name="P:Frends.TaskLibrary.File.MaxSize">
            <summary>Maximum size of the file</summary>
        </member>
        <member name="T:Frends.TaskLibrary.FileAction">
            <summary>FileAction class defines what will be done to the file</summary>
        </member>
        <member name="P:Frends.TaskLibrary.FileAction.Action">
            <summary>Action to be done with the file</summary>
        </member>
        <member name="P:Frends.TaskLibrary.FileAction.To">
            <summary>If ActionType is Move or Rename then To is the path to be used</summary>
        </member>
        <member name="M:Frends.TaskLibrary.Files.DoFileAction(Frends.TaskLibrary.File,Frends.TaskLibrary.FileAction)">
            <summary>DoFileAction task does the desired action to file</summary>
            <param name="file">File to handle</param>
            <param name="action">Action to perform</param>
            <returns>Returns information if task was successful</returns>
        </member>
    </members>
</doc>

Service Bus Trigger

Service Bus triggers are similar to Queue Triggers, in that they allow you to trigger Processes on messages received from a message queue, in this case an Azure Service Bus or Service Bus for Windows Server queue or subscription.


NOTE: The service bus trigger cannot accept message sessions, so it cannot listen to queues or subscriptions requiring sessions. It can, however, send replies to session queues, as described below.

Configuring Service Bus Triggers

The Service Bus trigger needs the following settings in order to work:

  • Queue: Name of the queue or subscription to listen to.
  • Connection string: The full service bus connection string.
  • Max concurrent connections: Limit on how many messages will be processed at a time. Essentially limits the number of Processes running at the same time.
  • Consume message immediately: If set, the message will be consumed from the queue immediately on receive. If not set, the listener will use the PeekLock receive mode, and acknowledge the message only if it was processed successfully. This means that if the process fails with an exception, the message will return to the queue, and will be processed again. In this case, the trigger will retry processing the message until the max delivery count on the queue or subscription is reached.
  • Reply: If set, the Process response will be sent to a reply queue, usually defined by the `ReplyTo` Property in the request message. See Reply messages below for more.
  • Reply errors: If set and the Process fails with an exception, the exception message will be serialized and sent to the reply queue. See Reply messages below for more.
  • Default reply queue: Needed if the 'Reply' option is set. The default queue or topic where the reply message will be sent if the request did not specify it with the `ReplyTo` property. See Reply messages below for more.

Trigger data for the Process

The trigger will pass the message content, serialized as a string, to the Process; it can be accessed via the trigger reference.

The trigger will also set a dictionary from the message properties. Any custom properties will be included in the dictionary by name and value. The built-in message properties are also accessible; they are prefixed with the "BrokerProperties." prefix. The following table summarizes the available properties.

["BrokerProperties.CorrelationId"]  Correlation ID
["BrokerProperties.SessionId"]  Session ID
["BrokerProperties.DeliveryCount"]  Delivery count, i.e. how many times the message has been received from the queue
["BrokerProperties.LockedUntilUtc"]  Message lock timeout if not consuming message immediately
["BrokerProperties.LockToken"]  Message lock token if not consuming message immediately
["BrokerProperties.MessageId"]  Message ID
["BrokerProperties.Label"]  Label given to the message
["BrokerProperties.ReplyTo"]  Queue name where to send replies. See Reply messages below for more.
["BrokerProperties.ReplyToSessionId"]  Session ID to set in the reply so the caller can identify it. See Reply messages below for more.
["BrokerProperties.ContentType"]  Body content type

Reply messages

Sometimes you need to get a reply back to the sender of the request, e.g. when the caller needs to wait for the triggered Process to finish, or needs the results. In this case, you can turn on replies on the Service Bus trigger. This will then return the result of the process in a message that is put to the given reply queue.

The request-reply process usually goes as follows:

  • The caller will decide on a session ID and queue for receiving the reply. It will set these to the ReplyToSessionId and ReplyTo properties in the request message, and send the message to the queue listened to by the trigger. The caller will then start listening on the reply queue, accepting only the message session with the given session ID. This means the caller will only get the response that was meant for it, even from a shared queue.
  • The trigger will receive the request and start a new Process instance, passing the message body and properties as trigger properties to the Process.
  • Once the Process has finished, if the 'Reply' option is set, the trigger will create the response message. The response message will have the serialized result in the message body, with the SessionId set to the given ReplyToSessionId value from the request and CorrelationId set to the CorrelationId value from the request. The response is then sent to the queue or topic given in the ReplyTo property, or if the request did not define one, in the default queue for replies, configured in the trigger.
  • The caller will receive the reply message in the session.

If the Process fails and 'Reply Errors' was selected, the exception that caused the failure will be written to the reply message. The message will also have the SessionId and CorrelationId set if required.
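The caller side of this request-reply pattern could be sketched with the classic .NET Service Bus client (WindowsAzure.ServiceBus). This is an illustrative assumption, not FRENDS code; the queue names, connection string and message body below are hypothetical:

```csharp
using System;
using Microsoft.ServiceBus.Messaging;

class RequestReplyCaller
{
    static void Main()
    {
        var conn = "Endpoint=sb://...";            // placeholder connection string
        var sessionId = Guid.NewGuid().ToString(); // caller-chosen session ID

        // Send the request, telling the trigger where and how to reply.
        var requestClient = QueueClient.CreateFromConnectionString(conn, "trigger-queue");
        var request = new BrokeredMessage("request body")
        {
            ReplyTo = "reply-queue",
            ReplyToSessionId = sessionId,
            CorrelationId = Guid.NewGuid().ToString()
        };
        requestClient.Send(request);

        // Wait for the reply in our private message session,
        // so we only get the response meant for this caller.
        var replyClient = QueueClient.CreateFromConnectionString(conn, "reply-queue");
        var session = replyClient.AcceptMessageSession(sessionId);
        var reply = session.Receive(TimeSpan.FromSeconds(60));
        Console.WriteLine(reply?.GetBody<string>());
    }
}
```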

Process Elements



The Start element is used to mark the starting point of the process. A Start element contains a trigger configuration. Multiple Start elements can exist in the root level of the process, but they all have to lead to the same element.


Start elements also exist within scopes. A Start element within a scope does not contain a trigger configuration; it's only used to mark the starting point of the scope.


A Return marks the end of an execution path and defines the return value. It ends execution for either the scope it's placed in or for the process itself.


Intermediate return

Intermediate return works in a similar manner to Return, with one big difference: an Intermediate return does not end execution, instead it allows the process to continue executing. An Intermediate return only works when the process is triggered by an HTTP Trigger. It allows giving a result back to the caller before a time-consuming process begins. Intermediate returns are drawn as an alternative execution path and can only be attached to a Task, Call Subprocess or Code element. While it's possible to have multiple Intermediate returns in a Process, the intermediate result will only be returned to the caller for the first Intermediate return encountered.

intermediate return usage Example usage of Intermediate return.


Throw is used to throw an exception. An uncaught exception will cause the Process execution to end in an error state.



Catch is used to handle an exception. A Catch can be attached to a Task, Call Subprocess or a Scope. The outgoing connection from a Catch will point to an Error Handler element.


The exception that is caught can be accessed within the error handler by defining a variable name in the Catch element, and then using a #var. reference. An element can only have one Catch element attached.

Error Handler

error handler

An error handler is a Task, Code, Call Subprocess or Scope element that is used to handle an exception. An error handler always has an incoming connection from a Catch, and it must always continue to the same element(s) as the element the Catch is attached to.

If an exception occurs, the execution of the throwing element stops and the error handler kicks in. The return type of the error handler should be the same as the throwing element's, since the return of the error handler will be used in the same way as the return of the throwing element.

subprocess error handler example An Error handler can end the execution of the whole process by placing a Throw shape as the end element within a Scope.

A catch attached to a Scope element will catch all exceptions within the Scope. Note that the execution of the whole Scope will stop even if the exception is thrown on the very first element within the scope. It is possible to define an error handler for the entire process by encapsulating everything but the Start element(s) and the final return within a Scope.

Conditional Gateways

Conditional gateways are used for conditional execution paths.

Exclusive Decision

exclusive gateway empty

An Exclusive Decision element is used to choose in between two exclusive execution paths. The Exclusive Decision element contains a conditional expression that returns a Boolean value and will be evaluated at run time. If the expression evaluates to true, then the conditional branch is taken, otherwise the default branch is taken.

It's possible to join the two branches of an Exclusive Decision element. It's also possible for each branch to end in its own Return element.

Default branch

The default branch, taken when the expression evaluates to false, is marked with a diagonal line.

Conditional branch

The conditional branch is taken only when the expression evaluates to true.

Multiple Exclusive Decisions can be stacked to provide more than two exclusive execution paths.

Empty condition branches can be useful for conditional compensation flows.

Inclusive Decision


An Inclusive Decision is used when there are multiple execution paths that can be taken. The Inclusive Decision does not contain an expression; instead every outgoing conditional branch contains its own expression that has to evaluate to true in order for the path to be taken.

All branches of an Inclusive Decision element must join at the same element. It's not possible to return within an Inclusive Decision branch.

The return value of an Inclusive Decision is a dictionary containing the name of branches taken, and the last return value of the branch.

The order in which the Inclusive Decision branches are executed cannot be guaranteed. If one branch depends on the work of another branch, that work should be done prior to the Inclusive Decision.

The Inclusive Decision element has the option of a Default branch, just like the Exclusive Decision element. The default branch does not contain an expression; it is always executed. There can only be one Default branch per Inclusive Decision element.

The blue line shows which condition branches would be taken in this process. Each branch is executed before the "Continue" Task is executed.


Activities are the elements doing most of the work in FRENDS Processes.



A Task is a reusable component which can be modified by parametrization. Tasks are designed as simple actions that can be chained together to create more complex operations.

FRENDS provides a range of Task types out of the box. It's also possible to create custom Tasks.

The parameters and result type of a Task are decided by the Task implementation.



Some tasks might not always succeed on the first try - for example a task trying to write into a database might have a temporary connection problem. Task elements have the option of automatic retries in case of an exception.

A task marked for retries is visually different.

To enable automatic retries for a Task, toggle "Retry on failure" and set the maximum number of retries.

task retry settings

Call Subprocess


Call Subprocess is used to call an external Subprocess. A Subprocess is a special kind of Process that can be executed from other Processes. The parameters given to Call Subprocess correspond to the Manual Trigger parameters defined in the Subprocess. The return type of a Call Subprocess is dynamic and is defined by the Subprocess.

Call Subprocess can have Error Handlers attached.

call subprocess error handlers



The Code element allows you to create Process variables and execute C# code directly in a Process. The Code element has two modes - one which declares a variable and assigns a value, and one that executes an expression.

If you choose to declare a variable and enter a variable name, the variable can be accessed with a #var. reference.

A Code variable declared in the root of a Process is accessible from child scopes, and modifications to it in the child scopes will be visible from the root. A Code variable declared in a child scope will not be accessible in the root.

If a Code element declares a variable, then the return value of the element will be the value of the variable.

A Code element that does not declare a variable will only return a String value indicating that it has been executed.



A scope is an isolated part of a Process. The return values of elements within a scope are not accessible from outside the scope.

A Scope has no special properties other than being able to release the resources used within once the execution of the Scope is complete. Some use cases for a Scope:

  • As an error handler. A Scope can contain any other element, and it's therefore excellent for more complex error handling.
  • Control when result-sets are released
  • A Scope can have an Error handler, so any exception happening within the Scope will be caught by the Scope Error handler.
  • The return value of a Scope is that of the executed Return element.



A While element is a scope that executes over and over again until a set criterion is met. While elements are especially useful in combination with Code elements, since they allow complex retries, loop checks and, in some cases, recursive behavior.

A While element contains an Expression parameter as well as a Max iterations parameter. The While element will keep on executing for as long as the Expression is evaluated to true, and the max iteration count has not been reached.

The return value of a While scope is the same as the last executed Return element.

Foreach

The return value of a Foreach scope is a list of the return values from each iteration. The return values are ordered in the same way as the provided list.

Annotation elements

Annotation elements are only for documentation purposes and do not interact with the functionality of the Process itself.

Data Store reference


The Data store reference is used to represent a data store of any kind, for example a database.

Data Object reference


The Data object reference is used to represent a data object of any kind, for example a variable declared within the Process.

Text annotation

Text annotations can be added to almost every element in a Process. They can contain, for example, a description of what an element does.

Parameter Editor

When building FRENDS Processes and Sub-Processes you will need to configure Tasks to tell them exactly what they should do. An example of this kind of configuration is the SQL query an SQL Task should execute.

The configuration of these tasks is done using the parameter editor which appears on the right side of the Process Editor when selecting a task from the canvas:


Configuring Element Basic Properties

When configuring tasks using the parameter editor you should start with the basic properties located at the top of the parameter editor:


These basic properties include:


Many elements have a name input. Elements with return values generally require a name. The element name is used when referencing the result of a previous element and therefore the element name must be unique within a Process.

Condition branches are a special case when it comes to unique naming - the names of Condition Branches only have to be unique within the Exclusive or Inclusive Decision they're attached to.

Elements that do not have a name input can still be named by double clicking the element in the editor, but the name will only be used for display purposes.


For elements of type Start, Task and Call Subprocess a type selection must be made. Clicking the type selector drop-down will show a list of available types. After selecting a type, the parameters associated with it will be displayed.

The Task return type and package can be seen by hovering over a selected type.


Each element also has an optional description field where you can enter freeform information or documentation about the operation the task is performing. This is a good place to store, for example, the contact information of a specific system owner in case there is a problem executing the task.

Promote result

Elements of type Return, Intermediate Return, Task, Call Subprocess and Code have the option to promote their result. To activate the option, simply toggle "Promote result as" and enter the variable name to be used.

Activate promoted result by toggling on Promote result as and entering a variable name

Retry on failure

Tasks have the option of toggling automatic retries in case of an exception. A Task can be automatically retried up to 10 times. The retries are done using an exponential back-off wait time: the wait before retry number n is 500 ms * 2 ^ (n - 1).

To enable automatic retries for a Task, toggle "Retry on failure" and set the maximum number of retries.

Retry attempt  Wait time
1              0.5 seconds
2              1 second
3              2 seconds
4              4 seconds
5              8 seconds
6              16 seconds
7              32 seconds
8              64 seconds
9              128 seconds
10             256 seconds

Time in between task retries depending on the retry attempt
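The retry schedule above can be reproduced with a short C# sketch (illustrative only, not FRENDS code): each wait is 500 ms doubled for every further attempt.

```csharp
using System;

class BackoffDemo
{
    static void Main()
    {
        // Wait before the nth retry: 500 ms * 2^(n - 1).
        for (int retry = 1; retry <= 10; retry++)
        {
            var wait = TimeSpan.FromMilliseconds(500 * Math.Pow(2, retry - 1));
            Console.WriteLine($"Retry {retry}: {wait.TotalSeconds} s");
        }
    }
}
```

Running this prints 0.5 s for the first retry up to 256 s for the tenth, matching the table.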

Configuring Element Specific Properties

While the properties above are common for all FRENDS tasks and elements, each element also includes element-specific configuration which changes depending on the element type you are using. An example of this could be configuring the location of a file that you want to read.

FRENDS also provides multiple different ways to enter element specific properties, which depend on the element used.

Entering Element Specific Properties

For each property you will be provided with a field for the property input and a label describing what the input should be. The description label can also be hovered with the mouse to reveal additional information on how to correctly configure the property.


Parameter Input Modes

When entering the input for a parameter you will be given the option to specify what kind of data you are giving as an input using the input type selector.

change input mode

Text Input Mode

When using the text input mode you can enter freeform text as your input. This input can be modified using the standardized {{ handlebar }} notation of FRENDS. For example, you could give a file name with the current day in the format:

file_{{DateTime.Now.ToString("yyyyMMdd")}}.xml

Which would result in an input of, for example, file_20170401.xml.

XML Input Mode

The XML input mode allows you to enter valid XML as the input instead of freeform text. The advantage of this is that it provides on-the-fly validation of the given XML and allows for easier editing of the formatted data. The XML input mode can also be modified using the standardized {{ handlebar }} notation. For example, you could inject the current date into a structured XML with the following input:

    <note>
      <to>Tove</to>
      <from>Jani</from>
      <heading>Reminder</heading>
      <body>Don't forget me this weekend!</body>
      <date>{{DateTime.Now.ToString()}}</date>
    </note>

Which would result in an XML input of:

    <note>
      <to>Tove</to>
      <from>Jani</from>
      <heading>Reminder</heading>
      <body>Don't forget me this weekend!</body>
      <date>2017-04-01T12:00:00.000Z</date>
    </note>

JSON Input Mode

The JSON input mode works exactly the same way as the XML input mode, in that you can enter structured JSON data which can then be modified by injecting dynamic data using the {{ handlebar }} notation. For example:

{
  "note": {
    "to": "Tove",
    "from": "Jani",
    "heading": "Reminder",
    "body": "Don't forget me this weekend!",
    "date": "{{DateTime.Now.ToString()}}"
  }
}

Would result in a JSON input:

{
  "note": {
    "to": "Tove",
    "from": "Jani",
    "heading": "Reminder",
    "body": "Don't forget me this weekend!",
    "date": "2017-04-01T12:00:00.000Z"
  }
}

SQL Input Mode

As with the JSON and XML input modes, the SQL input mode allows you to enter structured SQL as input, which can then be modified using the {{ handlebar }} notation.
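For example, a query could inject a computed date with a handlebar expression (the table and column names here are made up for illustration):

```sql
SELECT Id, CustomerName
FROM Orders
WHERE OrderDate >= '{{DateTime.Now.AddDays(-7).ToString("yyyy-MM-dd")}}'
```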

Expression Editor Input Mode

The expression editor input mode gives you full control over the input you are giving to a specific task. This means you can enter C# code in the expression editor to convert other incoming dynamic data to a format supported by the task. The {{ handlebar }} notation does not work with the expression editor; instead you can access all of the process related variables directly in the editor, without the handlebars.

Adding Results of Previous Tasks as Input

When building integration flows it's often necessary to pass data between two FRENDS elements and tasks, for example to first retrieve data from a database and then send that data to a web service.

This can be done using the #hashtag notation, which provides all the available references to your current input field. This means you can, for example, pass the result of a previous task as input to a different task:

bpmn result reference

These results from previous tasks and other variables can be freely combined to create a desired result. For example you could create a JSON document which combines data from two previous tasks with the input:

{
  "note": {
    "to": "Tove",
    "from": "Jani",
    "heading": "{{#result[GetHeading]}}",
    "body": "{{#result[GetBody]}}",
    "date": "{{DateTime.Now.ToString()}}"
  }
}

Using other References as input

Besides the {{ handlebar }} notation and the results of previous tasks you can also access various other references relating to the process with the #hashtag notation. These include:

  • #process - Which contains dynamic information about the execution of the process
  • #trigger - Which contains dynamic information about the trigger of the process. This can be used, for example, to access the REST request properties which started the process.
  • #env - Which contains access to the environment variables for accessing statically managed variables
  • #var - Which contains all the other variables of the process such as assigned temporary variables and errors

For the full list of available properties you should see the appropriate references for Processes, Triggers, Environment Variables and Process Variables.

HTTP Response results

Elements of types Return, Intermediate Return and Throw have the option to give an HTTP Response result. This result is used by HTTP and API triggers to control the HTTP response returned to the caller. While it's possible to use HTTP Responses in processes without HTTP or API triggers, they have no special effect there.


The HTTP Response contains an HTTP status code that will be used for the result message sent to the client, a content type, encoding and HTTP headers. The HTTP Content field expects a String type input; complex objects will not work.

Note that when using the HTTP Response return type, the HTTP request handler will skip all content negotiation: the response will have the given content type and encoding, even if the request had an Accept header with a specific request, e.g. for application/xml.

HTTP Trigger

HTTP Triggers enable triggering processes through unencrypted and TLS encrypted HTTP requests. The HTTP endpoint is hosted by the FRENDS Agent, using the operating system's HttpListener interfaces. The Agent can be configured to listen for requests on multiple ports. Each hosted HTTP Trigger will have its own path for triggering just the specific process.



HTTP Method

HTTP Method determines which methods the trigger URL can be called with. Allowed values are GET, POST, PUT, DELETE, HEAD, OPTIONS, PATCH and ANY. ANY allows any method to go through, while the others allow only the defined method.


All paths configured for an environment need to be unique in combination with the method; overlapping paths will cause errors. The paths may contain variables as route parameters (inside the path: runmyprocess/{variable}) or as query parameters (at the end of the path: runmyprocess?id=1).

For example, if you have

  • Agent running on host
  • Agent configured to use port 9998
  • HTTP Trigger configured as runmyProcess/{myvariable}

This will register a trigger that listens on an address of the form http://host:9998/runmyProcess/{myvariable}

If you call the trigger with the following URL:

http://host:9998/runmyProcess/anyValueForMyVariable?id=1&name=foo

the following references and their values will be available in the process:

myvariable = anyValueForMyVariable
id = 1
name = "foo"

Allowed Protocols

HTTP triggers can be configured to accept requests with HTTP, HTTPS or both. If a request is made with a protocol that is not allowed, the reply will be Forbidden (403).


HTTP triggers can use four different kinds of authentication:

  • None - No authentication at all
  • Basic - Authenticate with HTTP basic authentication
  • Certificate - Use a client certificate to authenticate
  • Api key - Authenticate with an API key

We strongly recommend only using authentication over HTTPS.

Basic authentication authenticates the user either against the Active Directory or the local users. Which one is used depends on the FRENDS Agent service user. If the agent uses a local user account, users are authenticated against the local machine users. If the agent uses an AD user account, users are authenticated against the AD users. The user name and password need to be encoded with UTF-8 before being converted to Base64 for the basic authentication header.
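As a sketch, the header value described above could be built in C# like this (the credentials are placeholders):

```csharp
using System;
using System.Net.Http.Headers;
using System.Text;

class BasicAuthHeader
{
    static void Main()
    {
        // UTF-8 encode "username:password", then Base64 encode the bytes.
        var raw = Encoding.UTF8.GetBytes("username:password");
        var header = new AuthenticationHeaderValue("Basic", Convert.ToBase64String(raw));
        Console.WriteLine(header); // Basic dXNlcm5hbWU6cGFzc3dvcmQ=
    }
}
```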

Certificate authentication requires that the client certificate is valid for the FRENDS Agent user on the agent machine. Also the issuer for the certificate needs to be found in the agent user's Client Authentication Issuers certificate store.

Api key authentication uses an API key together with Rulesets to determine if the client has access to a URL. For more information, see API keys.

Cross-origin Resource Sharing

If there is need to allow a certain page to trigger a process, it is possible to do with cross-origin resource sharing (CORS). Check the "Allow requests from these origins" checkbox, and define the allowed origins in the textbox. The * character allows calls from all origins.

Note: if the call does not come from the default port, it must be included in the origin. The origin making the call must also support CORS.

Trigger Reference List


Dictionary<string, string> of parameters passed in the URL, both route and query parameters. (e.g. anotherVariable...)

DEPRECATED - Use pathParameters or queryParameters to access the path and query parameters.

Dictionary<string, string> of passed HTTP query parameters

Dictionary<string, string> of passed path parameters

Dictionary<string, string> of passed HTTP request headers (e.g. Host, Accept..).

HTTP request body as a string

HTTP method type (e.g. GET, POST..).

Request URI of the request as a string

The IP address of the client as a string

Values associated with the request as a Dictionary<string,string>

The username associated with the caller. Only set if authentication is used. The following values are passed for the different types of authentication:
Api Key: The name of the API key
Basic authentication: The provided username
Certificate: The certificate's SubjectName.Name field

You can try to access an optional reference from any of the references; if it is found, the value will be returned, and if not, the value will be set to null.

Intermediate Response


A Process can return a response to the caller before the Process is finished. This functionality is enabled by adding an Intermediate return element to the Process. When this element is executed, the caller will receive an HTTP response from the Process. This can for example be used when calling a long-running Process and the caller should be notified that the long-running task has started.

HTTP Response Formatting

The HTTP Trigger returns the result of the executed Process as the HTTP response. The response varies according to the following conditions. When the Process result is a string, the string is set as the body of the response. If the result is an object, it will be returned either as JSON or XML depending on the request's Accept header, or as JSON by default. For example, Accept: application/xml would produce an XML response, while Accept: application/json would produce a JSON response.

If the result is an object with the properties HttpStatusCode and Content, the result will be mapped to a response as follows:

HTTP Response

  • HttpStatusCode (int) – Response status code
  • Content (string) – The body of the response
  • ContentEncoding (string) – The encoding for the body, e.g. utf-8
  • ContentType (string) – ContentType header value, e.g. application/xml or application/json
  • HttpHeaders (KeyValuePair[]) – Response headers
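As a sketch of this mapping (illustrative Python, not the Agent's implementation; the dictionary keys mirror the property names above):

```python
def to_http_response(result):
    """Map a Process result object carrying HttpStatusCode/Content
    properties to a simplified HTTP response structure. Hypothetical
    helper for illustration only."""
    response = {
        "status": result.get("HttpStatusCode", 200),
        "body": result.get("Content", ""),
        "headers": dict(result.get("HttpHeaders", [])),
    }
    if "ContentType" in result:
        response["headers"]["Content-Type"] = result["ContentType"]
    return response
```

A result of `{"HttpStatusCode": 404, "Content": "not found", "ContentType": "application/json"}` would thus yield a 404 response with a JSON content type.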

HTTP response

The process elements Return, Intermediate return and Throw all have the option to generate a pre-defined HTTP response. See HTTP Response results.

Manual Trigger

A process can have a manual trigger to manually pass parameters from the user to start the process.


Unlike other trigger types, a manual trigger can be configured with a dynamic number of parameters. When defining manual parameters, you need to define each of the parameters by using the "Add parameter" button.

A Manual Parameter consists of:

  • Key - Required
  • Default value - Optional
  • Description - Optional
  • Secret-flag - Indicates that this parameter will not be logged

These manual parameters can be accessed in the process using the same #hashtag and {{ handlebar }} notation as any other trigger variables.

File Trigger

File watch triggers are triggered when a file matching the File filter is saved to the Directory path to watch.


The trigger watches for new files added to the watched directories, e.g. a newly created file will cause the trigger to launch a process, but if that file is left in the directory and modified, that will not cause a new execution.

Note that if a file is deleted (for example after being processed by the process), it may take the agents in the environment up to 10 seconds to notice that the file was deleted before accepting new files with the same name.

File watch trigger can define:

  • Name – Descriptive name for the trigger
  • Directory path to watch – Directory path from where the files will be fetched.
  • File filter – File filter to use (e.g. '*.xml').
  • Include sub directories – If enabled, fetches all the matching files also from subdirectories.
  • Batch trigger events – Batches the possible trigger events so there will only be one process instance for all modified files. If not set, a new process instance will be created for each file. The trigger waits for one second or until 100 files have been modified.
  • Username – If the username field is used the File Trigger will not use the Frends Agent account credentials to poll for files but a different account. Expected input is domain\username
  • Password – The password for the user above

string[] containing all the names of files

string[] containing all the full paths to the files

Queue Trigger

Queue triggers enable triggering Processes on messages received from an AMQP 1.0 queue. The queue trigger consumes the message from the queue whenever there is a new message available in the queue. The contents of the consumed message are then available in the process for further processing.


Configuring Queue Triggers

The queue trigger offers the following configuration properties to connect to a specified queue.

  • Queue – The name of the AMQP queue to listen to
  • Bus Uri – The URI for the AMQP bus, e.g. amqps://owner:<SharedSecretValue>@<service_bus_namespace>
  • Reply – Should the successful Process result be sent to the queue specified by the 'Reply To' option
  • Reply Errors – Should a failing Process result be sent to the queue specified by the 'Reply To' option
  • Reply To – The queue where the replies should be sent

Trigger Reference List

  • Body – The body of the message; see Receiving messages below for body handling
  • ApplicationProperties – The custom headers of the message
  • Properties – The AMQP message properties

Receiving messages

The Queue trigger receives and accepts (completes) messages from the queue as they arrive, with a limit of 10 concurrent messages being processed per Queue trigger per Agent. If configured to do so, the trigger will send a reply message to the 'Reply To' queue when the process finishes.

Note: The AMQP body may contain different types of data. Most of the time this is provided as-is to the process, the exception being when the body is a byte array and the property 'ContentType' has the 'Charset' field set, e.g. 'text/plain; charset=UTF-8'. In this case the binary data is converted to a string with the encoding matching the charset.

Reply messages

If the Process failed and 'Reply Errors' was selected, the exception that caused the failure will be written to the reply message. The message will have a new Guid as the MessageId and the same CorrelationId as the original trigger message.

When replying a success to a queue, the result is written as the body of the message. Complex structures (objects) are serialized as JSON by default. In this case the Correlation Id of the triggering message is copied to the reply message.

It is possible to define the message structure directly in the result. This is done when the result contains an object which has at least either of the properties 'Body' or 'ApplicationProperties'. In this case the result object is mapped directly as the reply message with the following structure:

Body: object - the body of the reply message
ApplicationProperties: Dictionary<string, object> - the custom headers for the message
Properties - the AMQP message properties
MessageId: string
AbsoluteExpiryTime: DateTime
ContentEncoding: string
ContentType: string
CorrelationId: string
CreationTime: DateTime
GroupId: string
GroupSequence: uint
ReplyToGroupId: string
ReplyTo: string
Subject: string
UserId: byte[]
To: string
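For instance, a Process result shaped like the following (shown as JSON purely for illustration; the property values are hypothetical) would be mapped directly as the reply message:

```json
{
  "Body": { "status": "done" },
  "ApplicationProperties": { "source": "order-process" },
  "Properties": {
    "ContentType": "application/json",
    "Subject": "order-reply"
  }
}
```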

API Keys

API keys are used to authenticate a caller triggering an HTTP or API Trigger that is using API Key authentication.

An API key is valid only for a specific Environment. API key access rights are determined by the Rulesets applied to it.


Rulesets are used to group access rules used for API keys. Rulesets are shared across all Environments. This makes it possible to have a partner or a system have exactly the same access rights for multiple Environments, by having API keys for each environment share the same Rulesets. An API key can have multiple Rulesets active at once.


In the example above, System X has access to the Development, Testing and Production Environments (using different keys). However, exactly the same access rules are applied, since they all share the same Ruleset. This means that if everything works as expected in the Test environment, then we can be sure that a key with the same Rulesets will work in Production.


A Ruleset consists of simple Rules which give the user access to a URL path called with a specific method. Path parameters are not supported.

Rules are enforced by the Agent receiving the HTTP(S) call to an HTTP or API trigger. The agent is aware of all API keys for the Environment it resides in, as well as which Rulesets are applied. For each Rule that is applied to the API key, the path of the call as well as the method used is inspected. If the path starts with the same path as in the Rule and the method matches, then the call goes through.

For example, suppose a call is made to an agent with a URL whose path is /api/myApi/v2/getStatus. The part of the call that determines if the call gets access is /api/myApi/v2/getStatus; the rest (host, port and query string) is ignored. Let's say this call is made with GET.

A rule with the path /api/ and the ANY method will allow this call to go through, since the path starts with the same prefix and any method is allowed.

However, a rule configured to match /api/myApi/v1 will not let the call through, since the start of the paths does not match fully.

Note that the comparison between the rule path and the call path is case insensitive.
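The prefix-and-method check described above can be sketched as follows (hypothetical helper, not the Agent's code):

```python
def rule_allows(rule_path, rule_method, call_path, call_method):
    """Return True if the call is allowed by the rule: the call path must
    start with the rule path (case-insensitively) and the method must
    match, or the rule method must be ANY."""
    method_ok = rule_method == "ANY" or rule_method == call_method
    path_ok = call_path.lower().startswith(rule_path.lower())
    return method_ok and path_ok
```

With this sketch, a rule (`/api/`, ANY) allows a GET call to /api/myApi/v2/getStatus, while a rule for /api/myApi/v1 does not.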


API Keys and Rulesets are managed in the Administration->API Keys page.


Rulesets contain a collection of Rules. Each Rule has a path and a method. By clicking on the path, it's possible to see which API specifications are covered by the rule (or, if a full API specification base path is covered, which operations in the specification are covered). Note that operations containing path parameters are not shown in this list, since they might or might not be covered by the rule.

A Ruleset has a list of API keys that are using it. New keys can easily be added or old keys removed from there. Whenever a Ruleset is changed, updates are sent to the Agents.

API Keys

API keys are created per Environment, and cannot be moved or copied to other Environments. Once the Environment has been set at creation, it cannot be changed. Once the key has been saved, a key value will be generated for use. It's possible to add or remove the Rulesets affecting the API key on the API Key page.

Using API Keys from the Agent API Discovery page

Once the API keys and Rulesets have been set up, and there's a process that's using API key authentication, the Agent API Discovery page will allow you to enter an API key. The API key is added as a header to the request (X-ApiKey).


Passing an API key from a client

An API Key can be passed in the header in two ways:

Header name Value
Authorization ApiKey <value>
X-ApiKey <value>
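A minimal sketch of building such headers on the client side (the function name is illustrative):

```python
def api_key_headers(key, use_authorization_header=False):
    """Return request headers carrying a FRENDS API key in either of the
    two supported forms shown in the table above."""
    if use_authorization_header:
        return {"Authorization": "ApiKey " + key}
    return {"X-ApiKey": key}
```

Either header form is accepted by the Agent, so the choice can follow whatever is easier for the calling client to set.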

Process Error Handler

You can define a process-level error handler that can report any Exceptions thrown by the Process. When an Exception is thrown, if a Subprocess is configured as a Process error handler, it will be called. Note that you cannot continue the execution in the main Process after a Process Error Handler has been called.

A Process Error Handler can be configured in the process settings side panel. To pass the actual exception that occurred to the error handling Subprocess, the variable #var.error must be used.

Process Error Handler Configuration

Any return value from the Process error handler will be ignored. If you want to catch the error and return e.g. a custom error message to the caller, you need to wrap your Process in a Scope with a custom error handler.

Process Log Settings

You can configure what information is logged for each execution of a Process, and how long the data is retained. You define these Process log settings per Environment. You can also override the Environment-level settings per process, if needed.

The Environment-level Process log setting defaults are managed from the Process list view for the environment. Clicking the "Log settings" button at the top of the page opens the Process Log settings dialog for the selected Environment.


Log level

The Log level determines how much information is logged for each executed Process step. The following log levels are available, from the least to the greatest amount of logged data:

Only errors

As the name suggests, only errors will be logged with this Log level setting. No step or subprocess execution data will be logged, which will speed up the process execution and log message processing as a whole.

If an exception happens within the Process, then the parameters used for that Task or Subprocess will be logged along with the exception. The result of steps which are set to promote result are also logged, as always.

Note that if you have promoted results in subprocesses, or handle any exceptions without rethrowing them, the subprocess instances themselves are not logged under the Only Errors Log level, whereas the steps are. This may lead to redundant logging of data you cannot actually view. The data will eventually be cleared, but for maximum performance, you should not promote results of subprocesses under the Only Errors log level.


Default

The Default Log level logs results for each step executed in a graph, with the exception of Foreach elements. Parameters for Tasks and Subprocesses are not logged by default, nor are the variable references used in expressions for Condition branches or While loops. In case parameters or results are very big (over 10 000 characters), the logged value will be truncated.


Everything

Sometimes you need to know everything that happens within a process - this is especially useful when developing a new Process. With the Everything Log level, every parameter and each result is logged. For conditional expressions, referenced variable values are also logged. The Everything Log level will log the full values, and not truncate large result or parameter sets as the Default Log level would.

Log process parameters and return value

For some Processes, you may be mostly interested in the execution performance and latency. This is especially true for API Processes that are called often. Setting the Log level for such processes to Only errors will speed up the execution and reduce the amount of redundant log data.

However, you could still be interested in logging the complete request and response data for the Process, e.g. for internal auditing purposes. For this, you can just turn on Log process parameters and return value. When set, all Process input parameters passed from the trigger, as well as any values returned (including intermediate return values) are logged. The data is then visible in the Process instance list.

Process instance history retention period

To prevent the log database size from growing without control, the FRENDS Log Service deletes old Process instances from the database periodically. By default, any Process instances older than 60 days will be deleted, but you can set the retention period for specific Environments or individual Processes as needed. See Database Maintenance for more.

Process-level settings

If one or more Processes deployed to an Environment have different log requirements than the rest of the Processes, you can override the Environment-level settings for individual processes. Clicking the "Log settings" menu item from the Process action menu, opens the Process-specific log settings dialog. There you can choose to override the Environment-level settings by checking the "Use process-specific settings" option.


If checked, the settings will override any Environment-level settings. If you later want to revert back to the Environment-level settings, just uncheck the override option.


Since logging large amount of data will affect performance, it's recommended to set especially production Environments to log as little as possible, e.g.:

  • Log level to Only Errors
  • Log process parameters and return value off
  • Process instance history retention period to the shortest period you think you will need

You can then override the log settings (e.g. set a longer log retention period) for the processes that have more stringent requirements.

Database Maintenance

FRENDS uses SQL Server for storing the configuration and log data. The databases need to be periodically maintained. Databases are created and migrated to the newest version with the Frends.DatabaseInitializer tool, which is automatically executed by the deployment scripts. To get a full list of parameters, execute it with the '--help' parameter. By default the databases are created with the simple backup recovery model.

To prevent the database size from growing without control, FRENDS Log Service deletes old Process Instances from the database periodically. By default, any instances older than 60 days will be removed, but you can change the settings for a specific Environment or Process. See Process Log Settings for more. The purge is done by executing the stored procedure 'PurgeProcessHistory'. The purge procedure has a 30 minute timeout, if it cannot finish or an error occurs, the execution is retried after 30 minutes.

After purging old Process Instances successfully, the Log Service will reorganize indexes that have reached at least 30% fragmentation; each index reorganize has a 30 minute timeout.

By default, the Process instance purge and index reorganization will be run on Log Service startup, and is rescheduled to run every 24 hours after finishing successfully. The maintenance actions will run for 30 minutes max; if the actions time out, or there is some other error, they will be retried 5 times by default.

You can configure the maintenance actions with the following optional settings in deploymentSettings.json. These settings should be put directly under the root settings node:

  • maintenanceTimeWindowStart: string with a format of "[hour]:[minute]:[second]", e.g. 00:30:00 for half past midnight
  • maintenanceRetryCount: number
  • disableDatabaseMaintenance: boolean, set to true if you have set up your own scheduled cleanup and maintenance procedures.
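For example, the root of deploymentSettings.json could contain (values illustrative):

```json
{
  "maintenanceTimeWindowStart": "00:30:00",
  "maintenanceRetryCount": 5,
  "disableDatabaseMaintenance": false
}
```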

Backups for each database on on-premise installations are handled with three SQL Agent Jobs:

  • defaultbackup_[databaseName] - Creates a full database backup to the SQL Server backup directory, executed every Sunday at 00:30
  • defaultdifferentialbackup_[databaseName] - Creates a differential backup to the SQL Server backup directory, executed every hour
  • defaultcleanbackups_[databaseName] - Cleans up backups older than two months from the SQL Server backup directory, executed every Sunday at 01:30


Users are created automatically on their first login. Users can also be created manually and the desired roles can be assigned before the users login for the first time.

user usermanagement

  • 'User is locked' - When enabled, prevents the user from logging in.
  • 'Inherit roles from Active Directory' - Setting is only visible if Windows authentication is used. Overrides the role assignment and uses Active Directory security groups for the user.
  • 'Roles' - Roles for the user

A user may be in multiple different roles. If the user is in no roles and 'Inherit' is not enabled the user will not be able to do anything.

Note that if a user is in many roles, the rules from the roles will be combined, and any Deny rules will take precedence over Allow rules. E.g. if the user is part of an "Administrators" role allowed to access everything, as well as a "Users" role with access to all views except user management, then the user will not have access to the user management page, even if he or she is in the "Administrators" role.

Access Management - Configuration

The FRENDS UI requires users to log in with OpenId Connect (Office 365 or Azure AD) or local domain user account (on a local installation). It also allows you to restrict access to views, processes or environments for specific authenticated users or groups.

By default, every authenticated user has access to all functionality except user management. To restrict access to specific views and actions, you can define custom rules which can be defined in the User Management view that can be found under Administration. Only users with Administrator role can manage user access.

Windows Authentication

IIS Configuration: Windows Authentication enabled and Anonymous Authentication disabled

When Windows Authentication is enabled, the users will be logged in using their Windows domain accounts. By default, they will be considered to be in any roles matching the names of the domain groups they are part of in AD. This can be turned off for a user by unchecking the 'Inherit roles from Active directory' option, if you wish to manage the role membership in FRENDS explicitly.

NOTE: You will still have to create and manage the FRENDS roles separately, they will not be automatically generated - except for the built-in roles.

For example, say you have a Windows domain user 'DOM\fooUser' that is part of domain groups 'Users', 'BusinessUsers' and 'LocXUsers'. By default, the user will be in the built-in 'Users' FRENDS Role, and uses the rules for that. If you then create a new 'BusinessUsers' Role in FRENDS, the user will be part of that group also.


By default the user who installs FRENDS will be given the role of Administrator.

The users who automatically get the Administrator role can be configured by modifying the WebUI web.config file.

Example of application key containing the administration configuration:

<add key="LocalAdministratorsJson" value='["DOMAIN\\User","DOMAIN\\Example"]' />
The users in the list above are only given the role of Administrator when the user is created. So if the user existed before they were added to the administrators list they will not get the administrator role.

OpenId Connect

IIS Configuration: Windows Authentication disabled and Anonymous Authentication enabled

Currently the only supported OpenId Connect provider is Azure AD (Office 365).

Register Azure AD Application

You can use the following instructions to register a new Azure AD Application. The Application should be a Web Application and the Sign-On URL should be the link to FRENDS, for example

Configure Frends

For FRENDS to be able to use the AD Application the following information is needed from the registered Application

  • Application ID: e.g. 50549e93-99dd-4690-9948-3c8ec076ddfb
  • Tenant: e.g

FRENDS is configured to use the OpenId Connect provider by modifying the WebUI web.config file.

The key is called "OwinAuthenticationProvidersJson" and the value should be a JSON array of objects (providers). The configuration JSON object should have the following fields:

  • displayName: Shown as the name of the provider on the sign-in page
  • type: Type of authentication, "OpenIdConnectAuthentication" is currently the only supported type
  • clientId: The Application ID from Azure portal
  • defaultRole: The role new users who log in to the FRENDS application are assigned. The following roles are pre-created: Users (Default from 4.3), Editor, Viewer, Administrator
  • tenant: The Azure AD tenant name
  • instance: For Azure AD this is always "{0}"
  • administrators: The users that will be given the Administrator role.


<add key="OwinAuthenticationProvidersJson" value='[{
  "displayName": "Provider login",
  "type": "OpenIdConnectAuthentication",
  "clientId": "50549e93-99dd-4690-9948-3c8ec076ddfb",
  "tenant": "",
  "administrators": ["",""]
}]' />
The users in the administrators list are only given the role of Administrator when the user is first created. So if the user existed before they were added to the administrators list they will not get the administrator role.


A role has a collection of rules that are used to restrict or allow users to access views, processes or environments.


There are multiple different types of Rules:

  • AllowAction - rule describes the activities that the user in the role can do
  • DenyAction - rule describes the activities that the user in the role explicitly cannot do
  • AllowTag - rule allows the users in the role to only see processes with the tags
  • DenyTag - rule explicitly hides the processes with the tags.
  • AllowEnvironment - rule allows the users in the role to only see the environment given.
  • DenyEnvironment - rule explicitly hides the environment given for the users in the role.

There can be multiple roles, and each role can have multiple allow or deny rules. There is no hierarchy between the roles. If a user belongs to multiple roles that have different rules defined, the rules from each role are combined.

Limit access to Views and actions - Activity

The activity-based configuration is based on a two-part configuration scheme where individual activities are defined by the controller and action names. A Controller essentially represents a menu item in the UI, and an Action is a piece of functionality available for the user to perform. The following activities are available for configuration.


The following wildcards are supported for activities:

  • *.* - match all activities
  • *.{action} - match all actions with given name in every controller
  • {controller}.* - match all actions for given controller

Order of the activities being authorized

  • Explicitly allowed activity (e.g. Process.Start)
  • Explicitly denied activity (e.g. Process.Deploy)
  • Wildcard allowed activity (e.g. Process.*)
  • Wildcard denied activity (e.g. *.Edit)
  • Full allow wildcards (*.*)
  • Full deny wildcards (*.*)

This means that if an activity has been configured with an explicit allow rule, it cannot be overridden by any later value in the list.
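The precedence order above can be sketched as follows (illustrative only; the rule sets and activity names are hypothetical):

```python
def is_allowed(activity, allow_rules, deny_rules):
    """Authorize an activity such as 'Process.Start' against sets of
    allow and deny rules, checking rules in the documented order:
    explicit rules first, then single wildcards, then full wildcards."""
    controller, action = activity.split(".")
    wildcards = {f"{controller}.*", f"*.{action}"}
    if activity in allow_rules:
        return True            # 1. explicitly allowed activity
    if activity in deny_rules:
        return False           # 2. explicitly denied activity
    if wildcards & allow_rules:
        return True            # 3. wildcard allowed activity
    if wildcards & deny_rules:
        return False           # 4. wildcard denied activity
    if "*.*" in allow_rules:
        return True            # 5. full allow wildcard
    return False               # 6. full deny wildcard / default deny
```

For example, with allow rules {"Process.*"} and deny rules {"*.Edit"}, the activity Process.Edit is allowed, because the wildcard allow is checked before the wildcard deny.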

When creating a new role, you should almost always add the "Common.View" rule, as it is required e.g. for seeing the navigation menu as well as other common views.

Example operator-example

An operator that can view everything and edit process executions (Process Instances). The users of this role can acknowledge errors and start new process executions.

Default roles

  • Users - Legacy role from older FRENDS versions. This allows access to everything except user management.
  • Editor - Allows every Edit Action.
  • Administrator - Allows every Action
  • Viewer - Allows every View Action

Limiting access to only specific Processes - Tag

You can limit the processes a role can see and access by using tags and the AllowTag and DenyTag rules. The rules work the same way as the view rules (allow and deny). The view rules still take precedence, though: if you cannot e.g. edit processes, you cannot edit them even if the tag would allow you to.

  • If no Tag rules are active for a user, the user can see all processes.
  • Wildcards are not supported.
  • AllowTag rule limits the users in the role to only see and access processes with the defined tag.
  • DenyTag rule allows the users in the role to access and view all processes except those that are denied.

You cannot use both Allow- and DenyTag rules at the same time, as they would conflict.

Limiting access to only specific Environments - Environment

You can limit the Environments users in a role can see and access using the AllowEnvironment or DenyEnvironment rules.

  • If no environment rules are active, the user can see all Environments.
  • Wildcards are not supported.
  • AllowEnvironment rule limits the users in the role to only see and access the defined Environments.
  • DenyEnvironment rule allows the user in the role to see and access all Environments except those that are denied.

Example test-env-example

The role allows users to do everything except administrative actions and access Environments: Default, Test and Staging

NOTE: Everyone can always see the "Default" environment
You cannot use both Allow- and DenyEnvironment rules at the same time, as they would conflict.

4.2 Release notes

  • You can create widgets to monitor successful Processes, failing Processes, errors, and Process executions
  • It is possible to promote results of a Task or an entire Process
  • These promoted values can be seen in the Process instance view and used to filter them
  • They can also be used in silence monitoring rules
  • Process instances moved from their own view to the Process list view as a sublist
  • Clicking the arrow in front of the Process name or anywhere on the background of the Process, opens the Process instance list below the Process name
  • The user can choose what information is shown in the list
  • The instances can be filtered with dates and information in, for example, promoted values
  • Clicking on the Process name opens the Process editor
  • Silence Monitoring Rules
  • The rule will compare the count, distinct values, or minimum, maximum or sums of promoted values, and send an alert if the rule is not met
  • The UI indicates whether Agent Process configuration is out of date
  • The UI will inform the user if updating or activating a Process is not complete in the Agents in the environment
  • Hide passwords from showing in the UI by using 'secret' environment variable type
  • The user can write a description for Tasks in the Process editor
  • The user can check which Task packages have a newer version available and can choose what Tasks are updated
  • Tasks grouped according to NuGet package in Task view
  • Parallel foreach loops allowed
  • Triggers made more reliable and usable
  • Parameter change to or from array type fixed
  • Cobalt editor now saves parameter changes after updating the Tasks

Breaking changes

If you use 'secret' environment variables, the process must be compiled with 4.2 or later. For example, changing an existing password environment variable to 'secret' may cause runtime errors if the field is used by processes compiled in older versions.

4.2 Service Release 1 - 28th June 2016

  • Maintenance release, with fixes for:
  • Process listing performance: list will be shown even if counts take long to fetch
  • Unmanaged DLLs in task packages no longer cause problems with deploying processes

4.2 Service Release 2 - 14th September 2016

  • Maintenance release with fixes mostly to the memory usage of the agent and performance of the web UI:
  • Old, unused process versions are now periodically unloaded from agent memory, so agents running for months do not use up too much memory
  • You can further reduce agent’s memory usage by installing shared library DLLs to the global assembly cache of the machine
  • Process instance counts are now stored in the database to speed up the process list load time
  • Process instance list load times are also reduced by changing to a simpler pager that does not need to calculate the total number of instances

4.2 Service Release 3 - 5th October 2016

  • Performance fix release. The fixes include:
  • Process list is now paged. This greatly speeds up rendering of the list if there are > 100 processes.
  • Drastically reduced web server memory usage
  • Building and deploying new versions of processes is faster on on-premise installations with a lot of deployed process versions

Also, as a small fix, you can again query for a specific process execution graph by the execution GUID, in order to e.g. generate links to the process in error emails. You get the execution GUID in the process via the #process.executionId reference, and the link would be in the format:

https://<website>/ProcessInstance/Instance/<execution guid>
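As a sketch (the host name here is hypothetical, and the exact expression syntax may vary by FRENDS version), an error email body could build such a link with a C# string interpolation around the #process.executionId reference:

```csharp
// Hypothetical expression for an error email body;
// "myfrends.example.com" stands in for your actual FRENDS web site address.
$"Process failed, see details at: https://myfrends.example.com/ProcessInstance/Instance/{#process.executionId}"
```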

As a small breaking change, for performance reasons, audit logging of all actions to the database is now disabled by default. If you need it, you can turn it back on by setting the EnableAuditLogging option in web.config to “true”.
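A minimal sketch of the setting, assuming (as is typical for web.config options) that it lives under appSettings; verify the exact location against your installation's web.config:

```xml
<configuration>
  <appSettings>
    <!-- Re-enable audit logging of all actions to the database
         (disabled by default since 4.2 Service Release 3) -->
    <add key="EnableAuditLogging" value="true" />
  </appSettings>
</configuration>
```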

4.2 Service Release 4 - 27th October 2016

  • Maintenance release with fixes for:
  • Showing executed decision branches correctly
  • Allowing import of tasks referencing NuGet packages that also have references to netstandard packages
  • Improved performance for instance count query

4.2 Service Release 5 - 29th November 2016

  • This version fixes a problem with automatic retries in the internal service message processing: in the case of some transient errors, message processing would not be retried, which could cause configuration or log messages not to be processed at all. This could then lead to e.g. process versions not getting deployed correctly or processes seeming to never finish.
  • Other changes and fixes in the release include:
  • Array parameters of a task (e.g. Cobalt’s message processing steps) are no longer cleared when you update the task version
  • SQL query performance tweaks
  • The authorization.config file can now be used for defining authorization rules in on-premise installations for easier editing

4.4 Release notes

FRENDS 4.4 has many new features, mainly focused on making it easier to create and manage Processes implementing HTTP APIs, as well as users and their access.

API Management

You can now easily create and manage Processes that implement an operation from an OpenAPI 2.0 (Swagger) specification. If you have a ready-made OpenAPI specification, implementing it in FRENDS is as simple as importing the specification and then creating a new process for each operation. The Process designer has auto-complete support for the request parameters, and template responses based on the operation specification are also automatically generated.

The Processes use the same FRENDS version control scheme as other Processes, so you can easily continue developing Processes in Development while the current stable version has been deployed to Production. You can also deploy all Processes implementing a specific API version together right from the API Management page, so you can deploy the complete API implementation in one go when needed.

As a developer using the exposed APIs, you can easily see the available specifications and operations on the API discovery page. The page is hosted in the public HTTP endpoint that also hosts the actual API operations. It shows you the operation documentation and allows you to test the operations as well, provided you have the necessary API key or can otherwise authenticate to the agent.

For more details, please see the API Management section.

API Key Authentication

You can now create and manage API keys for authorizing access in HTTP and API triggers. You can do this right in the FRENDS UI, and the changes will be automatically propagated to the agents. Compared to the previously available authentication methods (basic or certificate authentication), which required custom deployment steps to create users or deploy certificates, API key management is much less work.

API keys are Environment-specific, so there is no danger of someone gaining access to your Production environments with just a developer key. Furthermore, you can also easily limit which paths the API key grants access to, making it possible to grant rights to just e.g. specific API operations.

User management UI and OpenID Connect Support

You can now easily manage users as well as their roles and access rules in the User management UI. There you can easily see which roles a user is part of and what they can view and access.

You can now set up FRENDS UI to use an existing user directory supporting OpenID Connect. For instance, if your Active Directory is federated to Azure AD or Office365, you can easily use your existing user accounts and passwords to access the FRENDS UI.

If you are updating from an existing installation with customized authorization rules, you will need to do the customizations manually after upgrade, as the syntax and rule storage format have changed a bit. Unfortunately, there is no migration for existing authorization rules to the new ones.

Updated Process Log Settings

You can now define Process log settings for all processes in an Environment. This allows you to e.g. shorten the default data retention period for all processes in Test. You can still override the default for individual Processes if needed.

The "Only Errors" log level has been tweaked to really only log errors, i.e. the results of any steps that have failed. This improves performance and reduces the amount of redundant data logged. This is especially important for API processes with low latency requirements; it is recommended to set the "Only errors" level as the default for any Production environments.

There is now also the option to log full Process parameters and return values. This is meant especially for API processes with high request rates: if you set the log level to "Only errors" for performance, you may want to log the incoming request and outgoing response in full, e.g. for auditing or error diagnosis purposes.

In order to use the new log level settings, you will need to create and deploy a new version of the Processes (or Subprocesses).
If you are using monitoring rules, please note that the logged values may also have changed a bit, e.g. for throw shapes. After upgrading, please make sure your rules still behave as expected.

Upgrade notes

  • Process triggers are now activated by default when deploying a Process to an Environment. This change was done especially to make deploying API processes easier. You can still choose to not activate the Process triggers during deployment; see Deploying a Process for more.

4.4.1 - 4th August 2017

This is a maintenance release, fixing the following issues:

  • In 4.4, Processes are set active by default when deploying them. As this may not be desired in some situations, now the deploy dialog has an option to choose not to activate the process triggers on deploy.
  • Request-reply messaging using the service bus trigger did not work correctly, because the trigger did not set the SessionId correctly on the reply messages. Now it does.
  • Some UI issues are fixed, especially some crashes in the new API management views if the environments had multiple agents in them.

4.4.2 - 29th August 2017

This release mainly fixes some performance and process deployment issues:

  • In 4.4, Processes are set active by default, but this may not always be desirable. Now you can choose whether to activate a Process when creating or importing it. Also copied Processes are not activated by default.
  • The environment variables page could take over 10 seconds to view if you had lots of environment variables. Now the view is paged, and the search has been improved so you can find variables by subkeys and values as well.
  • The periodic Process instance cleanup job could slow down if you had millions of instances in the database. Now the cleanup job works much better even with large data amounts.
  • Using expressions to set enum values in task parameters now works.

4.3 Release notes - 25th January 2017

  • New Process editor that is based on the bpmn-js BPMN rendering library. The new editor has much better performance and supports highly-requested things like zooming, moving many elements at once, or copy and paste.
  • New Process elements:
  • Subprocesses allow you to create small, reusable processes that can be used in other processes.
  • The expression shape allows you to execute short C# expressions as well as initialize and assign variables
  • While loop allows you to go through a list of unknown length or e.g. retry some steps.
  • Improved parameter editors, with XML, JSON and SQL highlighting

The new Process editor will be shown by default for Processes created with the old editor as well, migrating the Process model to match that of the new editor. However, the Processes will not be migrated automatically: you will need to save the Processes in the new editor to start using the new format. You can still use the old editor as well; you can access it from the link at the top of the new editor.

In 4.3, the Process instance data table schema has been tweaked for better performance. When upgrading, these new Process instance tables will be recreated as empty, renaming the old tables. This means any instance history before the upgrade will not be shown in the UI. The instance history data is still available in the database, if needed.

4.3 Service Release 1 (4.3.393) - 23rd February 2017

The main improvements in this release are:

  • Process error handler: You can now set a subprocess as the error handler for the entire process, allowing you to easily set up e.g. common error reporting. Please see the documentation for details.
  • Import/export BPMN: You can import BPMN from an XML file into the process editor, allowing you to design the process first with a separate BPMN editing tool and then continue working on it in FRENDS. You can also export the process graph as BPMN or as an SVG image.
  • Improved internal process logging performance: The log messages are now processed in batches, which speeds up process execution and reduces load on the log database. Also the process and event log history delete performance should be improved.

There are also many bug fixes, e.g. fixing some process parameter editor crashes due to invalid parameters, and HTTP trigger allowing requests with invalid charset values.

NOTE: This service release also updated the NuGet libraries to newer ones that only support three-part version numbers (major.minor.patch). If you have been using custom task packages that are versioned by only changing the fourth part of the version number, the references may not get resolved correctly during the build process. This can cause process build failures, especially if task parameters have been changed between the versions. Essentially, if you have two versions of a task package imported that differ only in the fourth version number part, the build process will use the older one, even if you explicitly reference the newer task version. The workaround is to create a new version of the task package with a version that updates e.g. the third version number part, i.e. 1.0.1.

4.3 Service Release 2 (4.3.408) - 9th March 2017

This release fixes some issues with the new editor as well as logging:

  • In 4.3 SR 1, the logged results and parameters of process steps executing at the same time could get mixed up. This was due to an issue in batching the database insert commands, which could lead to result and parameter data sometimes being written to the wrong rows in the database. The actual process execution was unaffected, but the execution graph could show wrong parameter and result values for task and loop executions.
  • Links for viewing possible process error handler executions are now shown correctly
  • Variable and result references as well as annotation connections are now correctly validated, also in inclusive branch condition expressions

4.3 Service Release 3 (4.3.422) - 22nd March 2017

Maintenance release, which mainly fixes usability issues like:

  • Schedule triggers sometimes not being validated correctly
  • Run once action not being shown for users with execution rights, and
  • Reference autocomplete adding an extra square bracket to an expression

4.3 Service Release 4 (4.3.432) - 4th April 2017

Minor maintenance release, fixing mainly user interface issues like:

  • Process list shows erroneous warnings for missing environment variables
  • Task import allows you to import a task package with four-part version number, potentially causing problems during compilation
  • Trigger status update fails if there are newly created processes

4.3 Service Release 5 (4.3.443) - 2nd May 2017

Minor bug fix release, mostly for user interface issues like:

  • Array object default values were not initialized on task update, causing Cobalt updates to fail
  • “Show subprocess” button is sometimes disabled for failed subprocesses that actually ran
  • Log service crash if cache warmup query takes too long

4.3 Service Release 6 (4.3.451) - 15th May 2017

Minor bug fix release, fixing issues like:

  • Basic authentication on HTTP triggers fails for concurrent users
  • Empty error messages if process migration to new editor version failed
  • Open process instance list polls backend on every new process execution, causing unnecessary load on the database

4.3 Service Release 7 (4.3.458) - 24th May 2017

This release mainly fixes an issue with subprocess log message processing that caused the processing to queue up and instances not to show in the UI.

4.3 Service Release 8 (4.3.477) - 15th June 2017

This release fixes some rare but nasty issues:

  • The agent could crash when running a process with hundreds of variable references
  • Executing tasks calling async methods while capturing the context (i.e. not using ConfigureAwait(false)) could hang the process execution
  • Duplicate process versions could be created due to too aggressive caching
  • Deploying processes on a cloud installation could take a long time (minutes) if there are already hundreds of process version packages stored in the package repository
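The ConfigureAwait issue above is the standard .NET context-capture deadlock: awaiting without ConfigureAwait(false) captures the current synchronization context, and if the host then blocks waiting on the task, the continuation can never run. As an illustrative sketch (not FRENDS agent code; HttpClient and the method names here are just an example), a task method that avoids capturing the context looks like:

```csharp
using System.Net.Http;
using System.Threading.Tasks;

public static class ExampleTask
{
    // Library-style async code should not capture the caller's context;
    // ConfigureAwait(false) lets the continuation run on any thread pool
    // thread, so a host that synchronously blocks on the returned Task
    // (e.g. via Task.Result) cannot deadlock on the captured context.
    public static async Task<string> FetchAsync(HttpClient client, string url)
    {
        var response = await client.GetAsync(url).ConfigureAwait(false);
        return await response.Content.ReadAsStringAsync().ConfigureAwait(false);
    }
}
```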