Introduction

Get started with FRENDS, the best .NET Hybrid Integration platform.

In the Concepts section on this page, you can read about the FRENDS architecture and functionality to get a better understanding of the features of FRENDS and how they work.

You can also access the developer Reference documentation for detailed descriptions of how to use each feature in development and operations.

Quick Start

If you just want to get started with FRENDS quickly, start by looking into the sections below.

What is FRENDS?

FRENDS is a hybrid integration platform with a focus on flexibility and providing a clean DevOps experience to experienced and newer integration developers alike.

The main focus of a hybrid integration platform is to bring together various systems and services located both in the cloud and in on-premise data centers. With FRENDS you can model the required integration flows between these systems using a BPMN-based visual GUI.

frends process

FRENDS reaches both cloud and on-premise systems through a distributed agent architecture, where multiple agents in multiple environments communicate with one another through a centralized hub.

FRENDS Cloud Infra

Combining these two approaches results in a platform where you can simply install one agent on-premise and one in the cloud, dictate in the visual BPMN diagram which parts of the integration flow should be executed on which agent, and FRENDS will take care of the rest.

Monitoring

When you have created an integration Process, you may want to activate a Monitoring rule for it to keep track of specific data or to alert you when there is a problem in the integration process.

For this purpose, FRENDS offers monitoring rules, which inspect the execution of a set of processes as a whole and focus on the data being processed instead of on the technical success of the process.

This means that instead of trying to figure out, inside a single process, whether the process has been successful in, for example, delivering orders to their destination, you can gather up all the orders from a set period of time and see if enough have been delivered.

A good example would be to configure a monitoring rule that gathers up the amount fields of all orders across all processes and checks that at least $10,000 has been successfully processed every 24 hours.

Monitoring rules also provide an analytical view into the data they are monitoring: in the example below, you can see the number of cities where the relative humidity was less than 50%.

ui-monitoring-rule

Using Monitoring Rules

To take advantage of monitoring rules, you need to use Promoted Variables in the integration processes you want to monitor.

You can then use these Promoted Variables to set up monitoring rules on that set of data. See the section on Monitoring rules for more details.

APIs

FRENDS supports generating processes from OpenAPI (Swagger) 2.0 specifications. Processes within an API specification can be managed and deployed as a unit. Once a valid OpenAPI specification has been imported into FRENDS, you can easily create processes from its operations.

api-swagger

FRENDS can generate Processes for OpenAPI operations that take in the parameters defined in the operation in the specification, as well as generate samples of the expected responses. A process bound to an OpenAPI operation has a generated API trigger.

For more information on OpenAPI specifications, see the official documentation. FRENDS supports OpenAPI 2.0.

API Discovery

Active processes that are part of an API Specification can be found, explored and tested from the Agent API Discovery page. Navigating to https://<agent url>:<port>/api/docs/ shows a list of active specifications. By navigating to a Specification, each active operation can be explored and tested.

ui-agent-api-discovery

API Keys

Access to APIs can be managed efficiently with API Keys. API keys are generated per Environment. API keys use Rulesets to grant access to HTTP endpoints according to their path and the request method. This makes it possible to quickly give an API key access to a full API Specification by allowing access to the API Specification's base path. You can also set the number of requests a single key can be used for.

Agent groups

Agent groups are used to group Agents that need to share their configuration, e.g. for a high-availability or API gateway installation. All Agents in the group have the same process versions deployed, and share the same settings. Agent groups can contain one or more Agents and API gateways.

As the different Agent groups are isolated from one another, you can do the following actions within each group:

  • Deploy a Process
  • Activate a Process
  • Deactivate a Process
  • View the execution statistics of each process within that Agent group
  • View the real time execution of each process within that Agent group

Triggers

arch-agent-highlight

Triggers are registered by the Agent and can only activate if the triggering event is registered by the hosting server.

Triggers are an integral part of any FRENDS Process, as they are the way a process can be started dynamically based on an event the Agent is able to receive. The trigger also acts as the starting point of the process and the first step in the Process diagram.

ui-triggers-4.4

These events can include:

  1. A file is created in a designated folder
  2. A web service call is received by the Agent
  3. A message appears in a queue the Agent is subscribed to
  4. A schedule is activated in its time window
  5. A manual message is sent by the user through the FRENDS UI

This means that if you are creating a REST API, you should configure an HTTP Trigger or an API Trigger as the start event for that integration process. Likewise, for a batch job you should most likely use a Schedule or a File Trigger.

Using Triggers

Triggers are used when developing an integration process, and they are the first element on the process editor canvas. You can then configure the trigger to match the integration scenario being developed.

Note that you can have as many triggers as you would like in any process, and you can combine different trigger types. This means that you can create an integration process that is run whenever a file is created in a folder AND at least every 6 hours.

Triggers also use the #hashtag notation to offer relevant information about the event that initiated the trigger; for example, the file trigger offers the name and metadata of the file it was triggered by. You can then use this information to build logic in the process itself.

Different Trigger Types

Currently FRENDS supports six different trigger types:

Environment Variables

arch-env-highlight

Environment variables are configured through the FRENDS UI and stored securely in the database.

Environment variables are optional static configuration values attached to a specific Environment. They are most commonly used to store integration-process-related information, such as passwords and usernames for the systems being connected to.

You can create environment variables in different categories to help organize similar variables together.

  • Key-Value-Pairs, for storing simple information such as connection strings
  • Hierarchical Groups, for storing all the information relating to a specific object such as ERP password, username and server location
  • Lists, for storing repetitive information such as the IP addresses of your client servers

There are no limitations on what you can store in an Environment variable.
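
As an illustration, referencing a configured Environment variable in a task parameter uses the #env. notation. This is a hedged sketch: the Hierarchical Group "ErpSettings" and its keys are hypothetical names, not values shipped with FRENDS:

    // Hypothetical Hierarchical Group "ErpSettings" with keys Username, Password and ServerUrl.
    // Each reference resolves to the value configured for the current Environment.
    #env.ErpSettings.Username
    #env.ErpSettings.Password
    #env.ErpSettings.ServerUrl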

Storing Information

The main advantage of using Environment variables is that after configuring them, you can simply refer to an Environment variable in your integration Process to access the configured value. If you need to change that value, you can update it on the fly on the Environment variables page.

Environment Specific Information

The other benefit of using Environment variables is that you can configure them to be Environment specific. This means that you can use a different password, or even a different server, for an integration process in the test environment than in the production environment. This allows for a seamless development, testing and production lifecycle, because the configuration of each environment is tied to an appropriate environment variable.

Environment Variable Use Cases

  1. You only need to keep track of these variables in a single place
  2. If a variable changes you only need to update it once
  3. You can securely store sensitive information such as passwords as environment variables, using the secret variable type
  4. You can have different variables for different environments

API Gateways

API gateways are limited Agents that only act as simple load-balancing proxies in front of the actual execution Agents. They only expose API and HTTP trigger endpoints and forward valid requests to the execution Agents. The API gateway Agents will authenticate and validate the requests before forwarding them upstream. The gateways will also throttle excessive requests. The idea is that you can install API gateway Agents on public-facing servers without exposing your actual execution Agents, which have connections to internal systems, to public network traffic.

Please note that you do not need an API gateway to be able to use API processes. The execution Agents also directly expose all API process endpoints, and do authentication, throttling, etc. Therefore, if you already have a load-balancing proxy server set up for your Agents, you should not need to deploy an API gateway.

API gateways are always configured as part of an Agent group, and by default, they will expose the same API and HTTP triggers as the execution Agents. You can choose to set an API or HTTP trigger as private, which means the gateways will not expose it; it will only be accessible from the internal execution Agents. This can be done by adding the 'private' tag to the API spec or by checking the private flag for the HTTP trigger.

If the Agent group has more than one executing Agent, the API gateway will do simple load balancing between them in a round-robin fashion. The gateway polls each Agent every second and removes or adds the Agents to the routing pool accordingly. A forwarded request that fails due to an unexpected error, such as a network disconnect or timeout, will also cause the Agent to be removed from the routing pool. The Agent will be returned to the pool once the gateway can successfully poll it again. The traffic to upstream Agents will be routed to their configured external URLs.
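
As a rough sketch of the routing behavior described above (illustrative C#, not the actual gateway implementation), a round-robin pool of health-checked Agents could look like this:

    // Minimal round-robin pool of healthy Agents; a poller would call SetHealthy
    // every second, and failed forwards would also mark an Agent unhealthy.
    using System;
    using System.Collections.Generic;

    class AgentPool
    {
        private readonly List<Uri> _healthy = new List<Uri>();
        private int _next = -1;

        public void SetHealthy(Uri agent, bool healthy)
        {
            lock (_healthy)
            {
                if (healthy && !_healthy.Contains(agent)) _healthy.Add(agent);
                else if (!healthy) _healthy.Remove(agent);
            }
        }

        // Pick the next Agent's configured external URL in round-robin order.
        public Uri NextAgent()
        {
            lock (_healthy)
            {
                if (_healthy.Count == 0) throw new InvalidOperationException("No healthy Agents");
                _next = (_next + 1) % _healthy.Count;
                return _healthy[_next];
            }
        }
    }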

API gateways are installed with the same installer as any other Agent; the configuration tells the Agent to work in proxying gateway mode. Because API gateways do not need to synchronize with other Agents, they can be installed without any external dependencies (apart from the service bus connection), using individual LocalDB instances.

Process Instance

arch-ui-db-highlight

The process instances are stored in the FRENDS database and viewed through the UI.

A process instance is a single execution of an integration process created in FRENDS. Process instances are used for monitoring and auditing purposes, since a process instance stores all the information relating to that specific execution of that specific process.

As an example when a process is being built the view looks like this...

ui-process-creation

... and when it's finished the process instance shows the data and the execution path the process took during that execution:

ui-process-execution

Finding Process Instances

When you have built your integration process and need to find a specific process instance tied to that process, you can use the Process page in the UI to search and filter your process executions to find the instance you are looking for.

A good example would be searching for specific data in the process execution, such as the name of the city being processed:

ui-process-instance-list

Deployment

arch-deployment-highlight

When developing processes or integration flows in FRENDS, you need to deploy the newly created integration flows to an Agent group for them to be executed. A common use case is the traditional path of:

  1. Developing an integration process in the Development Agent group in the Development Environment
  2. Deploying it for testing to a separate Test Agent group (in the Test Environment)
  3. Running test scenarios for the integration flow
  4. After the tests have passed, deploying it to an Agent group in the Production Environment

FRENDS enforces this best practice with the deployment and Environment architecture.

Deploying a Process

The deployment of a Process is done from the FRENDS UI Process View by selecting the Processes to be deployed and selecting the "Deploy Processes to Agent group" action found in the Actions dropdown menu. You will also be required to choose which version of a Process you wish to deploy and into which Agent group. This same method is also used to roll back previous deployments, by simply selecting an older version.

When deploying a Process, FRENDS automatically sends a notification to the Agents in the target Agent group, which causes them to download and take into use the selected version of that Process. This way the whole deployment process is automated with a single click of a button.

Note that triggers will also be activated by default when deploying processes to another Agent group. You can choose not to activate the Processes by unchecking the option in the deploy dialog.

Processes

arch-ui-db-agent-highlight

Processes are configured using the FRENDS UI, stored in the database and executed as compiled code in FRENDS Agents.

A FRENDS process is the common name for all integration flows inside FRENDS. A process is a combination of visual configuration using the BPMN process builder canvas and Task configuration inside that canvas.

frends process

Process Types

There are two kinds of processes available in FRENDS:

  • Regular Processes
  • Subprocesses

A regular process is used to create the integration flow functionality and acts as living visual documentation of what that integration flow does. A subprocess can be used to wrap smaller parts of processes into reusable microservices shared across other processes.

This enables a process hierarchy where a FRENDS process executes a subprocess, which executes another subprocess, and so on. This can be used to create an orchestration layer, or to isolate, for example, access to a specific system inside a subprocess.

Main Process
  Sub Process
  Sub Process
    Sub Process
      Sub Process
  Sub Process
Main Process

Remote subprocess

A Subprocess can be executed on any other Agent group in the same Environment as the parent process. This functionality can for example be used to call a Subprocess on an on-premise Agent from a cloud Agent in a secure and simple way.

When designing a subprocess call, you define which Agent group the Subprocess should be executed in for each Environment. See the reference for implementation details.

See the release notes for additional information.

Process Functionality

A Process always contains a starting point, some functionality, and an ending point. The flow of the execution is dictated by arrows connecting the different elements.

Starting Point

ui-process-start-point

Functionality

ui-process-function

End Point

ui-process-end-point

These three parts combined create a ready-made integration process which executes the desired integration flow.

ui-process-complete

Creating Integration Processes

Creating integration Processes is the most important functionality in FRENDS. You create Processes based on the process functionality logic above, using FRENDS Tasks and passing information from one task to another. For control flow, you can use decisions, loops, scopes, parallel executions and more.

Create your processes as clearly as possible, as the process diagram acts as living documentation for operations, as well as for future developers, on what the process is doing.

Process Elements

For a full reference list of all the available process elements, see the Process Elements Reference.

Environments

arch-env-highlight

In FRENDS, Environments are the logical containers for isolating Agents used in different roles during the Process lifecycle. In a common scenario you have three logical environments:

  • Development environment
  • Testing environment
  • Production environment

Each Environment then has a set of Agent groups with the actual Agents executing the Processes.

Environments share basic settings like

Environments are also used for security settings and limiting access to different parts of the system:

  • You can define, e.g., user access rules that deny developers access to the production environments.
  • Remote subprocesses can only be invoked on other Agent groups within the same Environment.
  • API keys are also Environment-specific.

Agents

A FRENDS Agent is the actual execution engine, the part which executes the integration flows or processes. Each FRENDS Agent works independently and does not rely on any other component to function: e.g. the FRENDS UI can be offline for maintenance, but the Agents will still execute processes and respond to requests.

arch-agent-highlight

The Agents are connected to the FRENDS UI and database through a Service Bus connection: Azure Service Bus for cloud or hybrid installations, Service Bus for Windows Server for on-premise installations. The Agents receive configuration updates etc. through the Service Bus queues, and also use them to report back execution logs and statistics.

Each Agent is also always assigned to a single Agent group.

Agent Updates

Agents update the deployed integration flows automatically after receiving an update notification through the Service Bus queue. This means that once you click "Deploy" in the UI, the Agent will automatically download and take into use the desired version of a Process.

High Availability (HA)

If you have multiple Agents in an Agent group, they essentially form a farm configuration, where they share their configuration and state. This also requires that the Agents have a shared SQL database. Having Agents in a farm configuration activates the HA functionality by allowing the Agents to share load with one another and to take over the execution responsibilities of a failed Agent.

Note that in on-premise installations, Agents still need a load balancer or an API gateway installed in front of them to split HTTP traffic.

Certificates

HTTPS triggers require the Agent to have a certificate available in the local machine certificate store. We recommend providing a thumbprint for a valid certificate that is located in the local machine store. If no certificate thumbprint is provided in the configuration, the Agent installer will request one from the AD domain. If the Agent installer cannot get a certificate from the domain controller, it will create a self-signed one. The expiration of the certificate provided by the domain controller depends on the domain settings; the self-signed certificate will expire after 20 years.

The Agent will continue using the same generated certificate until the Agent is fully uninstalled; uninstalling removes the generated certificate from the local machine certificate store. This means you can regenerate the certificate by uninstalling and then re-installing the Agent, which will recreate the certificate and set it up for the Agent.
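
For reference, the thumbprint lookup implied above can be done with the standard .NET certificate store APIs; a minimal sketch, where the thumbprint value is a placeholder:

    // Find a certificate by thumbprint in the LocalMachine\My store,
    // as an Agent configured with a thumbprint would need to.
    using System;
    using System.Security.Cryptography.X509Certificates;

    var store = new X509Store(StoreName.My, StoreLocation.LocalMachine);
    store.Open(OpenFlags.ReadOnly);
    var matches = store.Certificates.Find(
        X509FindType.FindByThumbprint,
        "0123456789ABCDEF0123456789ABCDEF01234567", // example thumbprint
        validOnly: true);
    store.Close();
    if (matches.Count == 0)
        Console.WriteLine("No valid certificate found for the given thumbprint.");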

Dashboard

The dashboard is the part of the FRENDS UI which gives users a widget-based, configurable splash page when first loading FRENDS. This splash page lets users configure different kinds of statistical views into the day-to-day operation of FRENDS, to get a sense of the current state of integrations at a single glance.

arch-ui-highlight

The dashboard data is stored in the FRENDS database, and the configuration of the widgets is stored in the user's browser.

Widgets

The dashboard contains multiple widgets which can be added, removed, resized and repositioned based on the user's preference. Each widget configuration is saved locally in your browser's storage, which means that the widgets are unique to each browser and each user.

Process Count Widget

ui-success-widget

ui-failed-widget

The Process Count Widget can be used to show either the number of failed or successful Process executions in the chosen Environment over a chosen period of time. For example, a user could configure a Process Count Widget to show the number of failed Processes in the "Production" Environment over the last 7 days.

Process Graph Widget

ui-process-exe-widget

The Process Graph Widget can be used to get a visual representation of the number of failed or successful Process executions over a period of time. Like with the Process Count Widget the Process Graph Widget can be configured to only display a specific Environment.

Error List Widget

ui-errors-widget

The error list widget is used to display and group any problems that might have occurred in FRENDS and can be used to quickly navigate to the problematic integration process or environment.

The error list widget is able to display:

  • Agent related errors, such as connection problems
  • Process execution errors
  • Other possible errors in the FRENDS UI or maintenance tasks

Tasks

arch-ui-db-agent-highlight

FRENDS tasks are configured in the user interface, stored in the database and executed as a part of a process on the FRENDS Agent.

FRENDS Tasks are the building blocks with which you build FRENDS Processes. They are meant to be reusable, microservice-like components which can be used for connector-like actions through parametrization.

For example, one FRENDS Task could read in files from a directory and another could write data to a database. By connecting these two Tasks together, you create a two-task integration process which reads files and writes their contents to a database.

ui-task

Configuring FRENDS Tasks

Before you can use FRENDS Tasks to build an integration process, you need to configure them. The configuration depends on the specific Task you are using.

For example, configuring a Task that reads files requires you to give the file name and directory location, while a Task that writes to a database requires you to specify the SQL query used for the write operation.

All the configuration is done using the FRENDS Parameter Editor.

API Keys

API keys are used to authenticate a caller triggering an HTTP or API Trigger that is using API Key authentication.

An API key is valid only for a specific Environment. An API key's access rights are determined by the Rulesets applied to it.

Rulesets

Rulesets are used to group access rules for API keys. Rulesets are shared across all Environments. This makes it possible to give a partner or a system exactly the same access rights in multiple Environments, by having the API keys for each environment share the same Rulesets. An API key can have multiple Rulesets active at once.

api-key-rulesets

In the example above, System X has access to the Development, Testing and Production Environments (using different keys). However, exactly the same access rules are applied, since the keys all share the same Ruleset. This means that if everything works as expected in the Test environment, we can be sure that a key with the same Rulesets will work in Production.

Rules

A Ruleset consists of simple Rules, each of which gives access to a URL path called with a specific method. Path parameters are not supported.

Rules are enforced by the Agent receiving the HTTP(S) call to an HTTP or API trigger. The Agent is aware of all API keys for the Environment it resides in, as well as which Rulesets are applied. For each Rule applied to the API key, the path of the call as well as the method used is inspected. If the path starts with the same path as in the Rule and the method matches, the call goes through.

Example
An agent is running on https://agent.org:9999. A call is made to https://agent.org:9999/api/myApi/v2/getStatus?paging=4

The part of the call that determines whether the call gets access is /api/myApi/v2/getStatus; the rest is ignored. Let's say this call is made with GET.

A rule with the path /api/ and the ANY method will allow this call to go through, since the call path starts with the rule path and any method is allowed.

However, a rule configured to match /api/myApi/v1 will not allow the call through, since the start of the paths does not match fully.

Note that the comparison between the rule path and the call path is case-insensitive.
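
The matching described above amounts to a case-insensitive prefix check on the path plus a method check; a minimal C# sketch (illustrative only, not the Agent's actual code):

    using System;

    // A call is allowed if the Rule's path is a case-insensitive prefix of the
    // call path, and the Rule's method is ANY or matches the request method.
    static bool RuleAllows(string rulePath, string ruleMethod, string callPath, string callMethod)
    {
        bool pathOk = callPath.StartsWith(rulePath, StringComparison.OrdinalIgnoreCase);
        bool methodOk = ruleMethod == "ANY"
            || string.Equals(ruleMethod, callMethod, StringComparison.OrdinalIgnoreCase);
        return pathOk && methodOk;
    }

    // The example above:
    // RuleAllows("/api/", "ANY", "/api/myApi/v2/getStatus", "GET")         -> true
    // RuleAllows("/api/myApi/v1", "ANY", "/api/myApi/v2/getStatus", "GET") -> false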

Configuration

API Keys and Rulesets are managed in the Administration->API Keys page.

Rulesets

ui-api-ruleset Rulesets contain a collection of Rules. Each Rule has a path and a method. By clicking on the path, it's possible to see which API specifications are covered by the rule (or, if a full API specification base path is covered, which operations in the specification are covered). Note that operations containing path parameters are not shown in this list, since they might or might not be covered by the rule.

A Ruleset has a list of API keys that are using it. New keys can easily be added, or old keys removed, from there. Whenever a Ruleset is changed, updates are sent to the Agents.

API Keys

ui-new-api-key API keys are created per Environment and cannot be moved or copied to other Environments. Once the Environment has been set at creation, it is not possible to change it. Once the key has been saved, a key value will be generated for use. It's possible to add or remove the Rulesets that should affect the API Key on the API Key page.

Throttling

Agents and API gateways can do simple throttling on requests based on the API keys. If you set the "Request limit" for an API key, the Agent (or API gateway) will check that the key is not used more than the given number of times within the time period. The first request with the API key starts a new time period. If there are more requests within the time period than the given limit, the extra calls will receive an HTTP 429 response. After the period has ended, a new time period starts again with the first call using that API key.

Please note that the limits are currently checked per Agent (or API gateway) instance. This means that in a load-balanced environment, the effective limits will scale together with the number of Agents. E.g. if the API key request limit is set to 100 per hour and you have two Agents in an Agent group, the effective limit will be about 200 requests per hour. It could also be a little less, depending on how evenly the load balancer shares the load: once one of the Agents hits the request limit, it will start returning HTTP 429 responses, even if the other Agent might not yet have hit its limit.
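
Conceptually, the per-key limit behaves like a fixed time window that opens with the first request; a rough C# sketch of such a counter (illustrative only, the real check runs inside the Agent or gateway):

    using System;
    using System.Collections.Generic;

    // Fixed-window throttle: the first call with a key starts a time period;
    // calls beyond the limit within the period should get an HTTP 429 response.
    class ApiKeyThrottle
    {
        private readonly int _limit;
        private readonly TimeSpan _period;
        private readonly Dictionary<string, (DateTime Start, int Count)> _windows =
            new Dictionary<string, (DateTime, int)>();

        public ApiKeyThrottle(int limit, TimeSpan period) { _limit = limit; _period = period; }

        public bool Allow(string apiKey)
        {
            var now = DateTime.UtcNow;
            if (!_windows.TryGetValue(apiKey, out var w) || now - w.Start >= _period)
                w = (now, 0); // first call starts a new time period
            w.Count++;
            _windows[apiKey] = w;
            return w.Count <= _limit; // false -> respond with 429
        }
    }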

Using API Keys from the Agent API Discovery page

Once the API keys and Rulesets have been set up, and there is a process using API key authentication, the Agent API Discovery page will allow you to enter an API key. The API key is added as a header to the request (X-ApiKey).

swagger-ui-api-key

Passing an API key from a client

An API Key can be passed in the request headers either within the Authorization header using the ApiKey type, or in the X-ApiKey header.

For example:

Authorization: ApiKey 12345678-1234-1234-1234-1234567890ab

Or:

X-ApiKey: 12345678-1234-1234-1234-1234567890ab
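
For example, a C# client could pass the key with either header (the key value and URL below are placeholders):

    using System;
    using System.Net.Http;
    using System.Net.Http.Headers;

    var client = new HttpClient();

    // Option 1: Authorization header with the ApiKey type
    client.DefaultRequestHeaders.Authorization =
        new AuthenticationHeaderValue("ApiKey", "12345678-1234-1234-1234-1234567890ab");

    // Option 2: the X-ApiKey header (use one or the other)
    // client.DefaultRequestHeaders.Add("X-ApiKey", "12345678-1234-1234-1234-1234567890ab");

    var response = await client.GetAsync("https://agent.example.org:9999/api/myApi/v2/getStatus");
    Console.WriteLine(response.StatusCode);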

Process Elements

Events

Start

The Start element is used to mark the starting point of the process. A Start element contains a trigger configuration. Multiple Start elements can exist at the root level of the process, but they all have to lead to the same element.

StartEvent

Start elements also exist within scopes. A Start element within a scope does not contain a trigger configuration; it is only used to mark the starting point of the scope.

Return

A Return marks the end of an execution path and defines the return value. It ends the execution of either the scope it's placed in, or the process itself.

Return

Intermediate return

An Intermediate return works in a similar manner to a Return, with one big difference: it does not end execution, but instead allows the process to continue executing. An Intermediate return only works when the process is triggered by an HTTP Trigger. It allows giving a result back to the caller before a time-consuming part of the process begins. Intermediate returns are drawn as an alternative execution path and can only be attached to a Task, Call Subprocess or Code element. While it's possible to have multiple Intermediate returns in a Process, the intermediate result will only be returned to the caller for the first Intermediate return encountered.

intermediate return usage Example usage of Intermediate return.

Throw

Throw is used to throw an exception. An uncaught exception will cause the Process execution to end in an error state.

Throw

Catch

Catch is used to handle an exception. A Catch can be attached to a Task, Call Subprocess or a Scope. The outgoing connection from a Catch points to an Error Handler element.

Catch

The exception that is caught can be accessed within the error handler by defining a variable name in the Catch element, and then using a #var. reference. An element can only have one Catch element attached.

Error Handler

error handler

An error handler is a Task, Code, Call Subprocess or Scope element that is used to handle an exception. An error handler always has an incoming connection from a Catch, and it must always continue to the same element(s) as the element the Catch is attached to.

If an exception occurs, the execution of the throwing element stops and the error handler kicks in. The return type of the error handler should be the same as the throwing element's, since the return of the error handler is used in the same way as the return of the throwing element.

subprocess error handler example An Error handler can end the execution of the whole process by placing a Throw shape as the end element within a Scope.

A catch attached to a Scope element will catch all exceptions within the Scope. Note that the execution of the whole Scope will stop even if the exception is thrown on the very first element within the scope. It is possible to define an error handler for the entire process by encapsulating everything but the Start element(s) and the final return within a Scope.

Conditional Gateways

Conditional gateways are used for conditional execution paths.

Exclusive Decision

exclusive gateway empty

An Exclusive Decision element is used to choose between two exclusive execution paths. The Exclusive Decision element contains a conditional expression that returns a Boolean value and is evaluated at run time. If the expression evaluates to true, the conditional branch is taken; otherwise the default branch is taken.

It's possible to join the two branches of an Exclusive Decision element. It's also possible for each branch to end in its own Return element.

Default branch

The default branch, taken when the expression evaluates to false, is marked with a diagonal line.

Conditional branch

The conditional branch is taken only when the expression evaluates to true.

Multiple Exclusive Decisions can be stacked to provide more than two exclusive execution paths.

Empty condition branches can be useful for conditional compensation flows.

Inclusive Decision

InclusiveGateway

An Inclusive Decision is used when there are multiple execution paths that can be taken. The Inclusive Decision does not contain an expression; instead, every outgoing conditional branch contains its own expression that has to evaluate to true in order for the path to be taken.

All branches of an Inclusive Decision element must join at the same element. It's not possible to return within an Inclusive Decision branch.

The return value of an Inclusive Decision is a dictionary containing the names of the branches taken, and the last return value of each branch.

The order in which the Inclusive Decision branches are executed cannot be guaranteed. If one branch depends on the work of another branch, that work should be done prior to the Inclusive Decision.

The Inclusive Decision element has the option of a Default branch, just like the Exclusive Decision element. The default branch does not contain an expression, it is always executed. There can only be one Default branch per Inclusive Decision element.

The blue line shows which condition branches would be taken in this process. Each branch is executed before the "Continue" Task is executed.

Activities

Activities are the elements doing most of the work in FRENDS Processes.

Task

Task

A Task is a reusable component which can be modified by parametrization. Tasks are designed as simple actions that can be chained together to create more complex operations.

FRENDS provides a range of Task types out of the box. It's also possible to create custom Tasks.

The parameters and result type of a Task are decided by the Task implementation.

Retries

task-retry

Some tasks might not always succeed on the first try - for example a task trying to write into a database might have a temporary connection problem. Task elements have the option of automatic retries in case of an exception.

A task marked for retries is visually different.

To enable automatic retries for a Task, toggle "Retry on failure" and set the maximum number of retries.

task retry settings

Call Subprocess

CallActivity

Call Subprocess is used to call an external Subprocess. A Subprocess is a special kind of Process that can be executed from other Processes. The parameters given to Call Subprocess correspond to the Manual Trigger parameters defined in the Subprocess. The return type of a Call Subprocess is dynamic and is defined by the Subprocess.

A subprocess call can be configured to be a remote call by enabling "Remote call" under "Show advanced settings".

To configure a remote subprocess call, the destination Agent group needs to be defined for each Environment.

Example remote subprocess call configuration

remote subprocess example

If the process is run in:

  • Development environment the subprocess will be executed in the Development agent group.
  • TestCount environment the subprocess will not be executed and the process execution will fail.
  • Test environment the subprocess will be executed in the TestOnPremise agent group.
  • Production environment the subprocess will be executed in the ProductionOnPremise agent group.

Call Subprocess can have Error Handlers attached.

call subprocess error handlers

Code

Expression

The Code element allows you to create Process variables and execute C# code directly in a Process. The Code element has two modes - one which declares a variable and assigns a value, and one that executes an expression.

If you choose to declare a variable and enter a variable name, the variable can be accessed with a #var. reference.

A Code variable declared at the root of a Process is accessible from child scopes, and modifications to it in the child scopes will be visible from the root. A Code variable declared in a child scope will not be accessible in the root.

If a Code element declares a variable, then the return value of the element will be the value of the variable.

A Code element that does not declare a variable will only return a String value indicating that it has been executed.

The Process has the following libraries referenced, which can be utilized: mscorlib, System.Diagnostics, System.Dynamic, Microsoft.CSharp.RuntimeBinder, Newtonsoft.Json, System.Xml, System.Xml.Linq, System.Data and System.Linq.
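
As an illustration, a Code element in variable-declaration mode could compute a value with one of the referenced libraries. This is a hedged sketch: the variable name "orderCount" and the "orders" JSON property are example values only:

    // Expression for a Code variable named "orderCount": parses the HTTP trigger
    // body with the referenced Newtonsoft.Json library and counts the items in a
    // hypothetical "orders" array. Later elements can read it as #var.orderCount.
    Newtonsoft.Json.Linq.JObject.Parse(#trigger.data.httpBody)["orders"].Count()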

Scopes

Scope

A Scope is an isolated part of a Process. The return values of elements within a Scope are not accessible from outside the Scope.

A Scope has no special properties other than releasing the resources used within it once the execution of the Scope is complete. Some use cases for a Scope:

  • As an error handler. A Scope can contain any other element, and it's therefore excellent for more complex error handling.
  • Control when result-sets are released
  • A Scope can have an Error handler, so any exception happening within the Scope will be caught by the Scope Error handler.
  • The return value of a Scope is that of the executed Return element.

While

While

A While element is a scope that executes over and over again until a set criterion is met. While elements are especially useful in combination with Code elements, since they allow complex retries, loop checks and, in some cases, recursive behaviors.

A While element contains an Expression parameter as well as a Max iterations parameter. The While element will keep executing for as long as the Expression evaluates to true and the max iteration count has not been reached.

The return value of a While scope is the same as that of the last executed Return element.

Foreach

A Foreach element is a scope that is executed once for each item in a provided list. The return value of a Foreach scope is a list of the return values from each iteration. The return values are ordered in the same way as the provided list.

Annotation elements

Annotation elements are only for documentation purposes and do not interact with the functionality of the Process itself.

Data Store reference

DataStore

The Data store reference is used to represent a data store of any kind, for example a database.

Data Object reference

DataObject

The Data object reference is used to represent a data object of any kind, for example a variable declared within the Process.

Text annotation

Text annotations can be added to almost every element in a Process. A text annotation can contain, for example, a description of what an element does.

Agent status endpoint

If you have at least one HTTP trigger deployed, an Agent also hosts a special endpoint at /frendsstatusinfo on the HTTP (and HTTPS) port. This endpoint simply returns HTTP status code 200 (OK) if the Agent is running and the HTTP routes are loaded. It is used, e.g., by API gateways for monitoring whether the upstream execution Agents are running. It can of course also be used by external load balancers configured for the system.

If the Agent is paused, the endpoint will return 503 (Service Unavailable). If you have API gateways set up, you can use this to turn off traffic to the Agents behind a gateway in a controlled way: even if the gateway is paused, it will still route traffic to the upstream Agents, but return 503 to anyone checking its status.
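
An external health check against this endpoint only needs to inspect the status code; for example in C# (the host and port are placeholders for your Agent's address):

    using System;
    using System.Net.Http;

    // Poll the Agent status endpoint: 200 = running, 503 = paused.
    var client = new HttpClient();
    var response = await client.GetAsync("https://myfrendsagent.example.org:9998/frendsstatusinfo");
    bool agentAvailable = response.IsSuccessStatusCode; // false on 503 (Service Unavailable)
    Console.WriteLine(agentAvailable ? "Agent running" : "Agent paused or unavailable");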

Process Error Handler

You can define a process-level error handler that can report any exceptions thrown by the Process. When an exception is thrown, if a Subprocess is configured as the Process error handler, it will be called. Note that you cannot continue the execution in the main Process after a Process Error Handler has been called.

A Process Error Handler can be configured in the process settings side panel. To pass the actual exception that occurred to the error-handling Subprocess, the variable #var.error must be used.

Process Error Handler Configuration

Any return value from the Process error handler will be ignored. If you want to catch the error and return, e.g., a custom error message to the caller, you need to wrap your Process in a Scope with a custom error handler.

Schedule Trigger

If you need to start a process within a specific schedule you can use a Schedule Trigger to define a schedule which will then start that process within the scheduled times. Schedules can be configured to start in specific intervals within set time and date ranges or to execute once at given dates and times.

ui-schedule-trigger

Please note that Processes are scheduled with only best-effort guarantees: a process will be started if the schedule is open when the scheduling database is polled. Because the poll delay is one second by default, the actual execution of a process may start 1-2 seconds later than the scheduled time. Therefore, you should not create too short time windows or expect that processes will start at exactly the given time.

Once a process has been launched, it will be allowed to start and execute. This means that if you launch a process with a large number of tasks, the execution of the process instance may take a long time, continuing even after the time window has ended. If this can be a problem, the process executions should be scheduled more evenly in time.

When a scheduled Process is created, imported or activated, it will execute when the time window next opens.

Advanced settings

Any process can contain multiple Schedule Triggers if you need different or overlapping schedules. To add multiple Schedule Triggers, simply add a Start element to the Process canvas and connect it to the first step of the process. However, please note that each schedule is evaluated separately, so you may get more than one execution of a Process triggered at the same time.

You can limit a process to run only one scheduled instance at a time by setting the Run only one scheduled instance at a time option for the trigger. For instance, if a process is scheduled to execute every 10 minutes, but the process instance takes 13 minutes to complete, a new process instance will be scheduled only after all the task instances of the previous process instance have finished, i.e. about 14 minutes from the start of the previous instance. However, there are some things to note about this feature:

  • The setting is trigger-specific: if a process has multiple schedule triggers that overlap, two or more instances of the process may be executed at the same time.
  • If you start a process manually e.g. with the "Run Once" action, the new instance will be executed, even if the setting was turned on and a previous instance was already running. This is because the setting is only checked for scheduled process instances.

You can also set a schedule to be open only on specific days of the week or month. For monthly schedules, you can also define it to be open, e.g., on the first or last day (or weekday, or e.g. Monday) of a month, based on day ranks. (Note that if you choose multiple days, they will all be counted, and the weekday/weekend day options will override any specific days.) Also, if you use the explicit days option to execute a process, e.g., on the 30th of each month, the day check is exact, so the schedule will not be open in February, which has no 30th.

For more complex scenarios, you can also give specific dates (e.g. bank holidays) when the schedule should or should not execute. To do this, just add the dates to the Chosen dates list and choose whether the schedule should be open "Only on these dates" or "Never on these dates". You can also import the dates from a .ics file. Please note that the excluded dates are evaluated separately from the other date limits, and after any season limits are checked. This means that if you, e.g., have a schedule that runs on the first of every month, but exclude bank holidays, the Process will not run in January at all, as the 1st of January would be a bank holiday. Also, if you have defined a season end time, e.g. 2017-12-31, the Process will not run after it, even if you had explicitly chosen a date after the season end, e.g. 2018-01-06.

Daylight saving time effects

The process schedules are checked according to the given time zone, which also takes possible daylight saving time (DST) adjustments into account. This means that if you schedule a process to run every day at 12:00, it will execute at that time, whether DST is in effect or not.

However, because the scheduler uses the adjusted time, the time periods when DST is starting or ending may cause processes to be scheduled a bit differently than normally: In spring, when DST is starting, the clocks are turned forward, and one hour gets skipped. The opposite happens in autumn, when the clocks are turned back, and there is one additional hour. The effects of these adjustments for different schedules are shown in the tables below.

When DST is starting, one hour (in the EU, the hour from 3 to 4 AM) will be skipped, i.e. the clock will jump from 2:59:59 to 4:00:00. For processes this means that schedules cannot be open during this period. The schedule start and end times are adjusted as follows:

  • 2:30 - 3:30 --> 2:30 - 2:59:59 - When the end time is invalid, the nearest valid time (right before 3:00:00) is used
  • 3:30 - 5:00 --> 4:00 - 5:00 - When the start time is invalid, the nearest valid time (4:00) is used
  • 3:30 - 3:45 --> (none) - The schedule will be skipped because it cannot be adjusted to a valid time. You should not use these kinds of schedules.
  • 3:30 - no end time --> 3:30 - 2:59:59 and 4:00 - 3:30 - A schedule with no end time is implicitly open for 24 hours. If the start time is invalid, the start and end times are adjusted so they are valid, and the execution will continue uninterrupted over the DST change.
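
The skipped hour can be detected with the standard .NET time zone APIs; a small illustrative sketch (the time zone id and dates are examples, not FRENDS defaults):

    using System;

    // 2017-03-26 03:30 local time falls in the hour skipped when EU DST starts.
    var tz = TimeZoneInfo.FindSystemTimeZoneById("FLE Standard Time"); // e.g. Helsinki
    var scheduled = new DateTime(2017, 3, 26, 3, 30, 0);

    if (tz.IsInvalidTime(scheduled))
    {
        // Snap forward past the skipped hour, as described for start times above.
        scheduled = scheduled.AddHours(1);
    }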

When DST is ending, one hour will be added to the day, i.e. the clock will be turned back from 4:00 to 3:00. Therefore, the local time will be 3:00 twice on that day. For processes that start or stop during this ambiguous hour, the ambiguity is resolved by always choosing the first possible occurrence.

DST changes also affect repeating schedules. Because the repeats are calculated from the schedule's local start time, the repeats will start to adhere to the adjusted local time as soon as a new schedule is opened.

If the schedule start time or end time is adjusted due to the DST change, a warning will be logged to the event log every time the schedule is checked, i.e. every minute. If the schedule would be skipped, i.e. the schedule starts and stops during the invalid period (3:00-3:59:59), an error is logged.

API Trigger

API Triggers are specialized HTTP Triggers bound to a Swagger operation. API Triggers can only be created through API Management. API Triggers largely share their configuration with HTTP Triggers.

API trigger

Parameters

HTTP Method

The HTTP method is locked to the one provided in the Swagger operation and cannot be changed. Valid values are GET, POST, PUT, DELETE, HEAD, OPTIONS and PATCH.

URL

The URL path is locked to the one provided in the Swagger operation and cannot be changed. Path parameters are allowed. If the path parameters are of type integer or boolean, then the path will be restricted to containing only those types.

This enables having endpoints like /api/pet/{id} and /api/pet/getStatus active at the same time with no collision, if the {id} parameter is of type integer. However, having /api/pet/{name} and /api/pet/getStatus at the same time would not be possible if the {name} parameter were of type string.

Allowed protocols

API triggers can be configured to accept requests with HTTP, HTTPS or both. If a request is made with a protocol that is not allowed, the reply will be Forbidden (403).

Authentication

API triggers can use five different kinds of authentication:

  • None - No authentication at all
  • Basic - Authenticate with HTTP basic authentication
  • Certificate - Use a client certificate to authenticate
  • Api key - Authenticate with an API key
  • OAuth2 - Authenticate using OAuth 2.0 bearer tokens

We strongly recommend using authentication only over HTTPS.

Basic authentication authenticates the user either against the Active Directory or the local users. Which one is used depends on the FRENDS Agent service user. If the agent uses a local user account, users are authenticated against the local machine users. If the agent uses an AD user account, users are authenticated against the AD users. The user name and password need to be encoded with UTF-8 before being converted to Base64 for the basic authentication header.
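
For example, building the header in C# with the UTF-8 encoding mentioned above (the credentials are placeholders):

    using System;
    using System.Net.Http;
    using System.Net.Http.Headers;
    using System.Text;

    // Encode "username:password" as UTF-8 bytes, then Base64, for the Basic header.
    var credentials = Convert.ToBase64String(Encoding.UTF8.GetBytes("myUser:myPassword"));
    var client = new HttpClient();
    client.DefaultRequestHeaders.Authorization =
        new AuthenticationHeaderValue("Basic", credentials);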

Certificate authentication requires that the client certificate is valid for the FRENDS Agent user on the agent machine. Also the issuer for the certificate needs to be found in the agent user's Client Authentication Issuers certificate store.

Api key authentication uses an API key together with Rulesets to determine if the client has access to a URL. For more information, see API keys.

OAuth2 uses OAuth bearer tokens from registered OAuth applications to gain access to the API. You need to set an API Access Policy to allow access.

Cross-origin Resource Sharing

If there is a need to allow a certain page to trigger a process, it is possible to do so with cross-origin resource sharing (CORS). Check the "Allow requests from these origins" checkbox and define the allowed origins in the textbox. The * character allows calls from all origins.

Note: if the call does not come from the default port, the port must be included in the origin. The origin making the call must also support CORS.

Swagger

A read-only display of the swagger operation bound to the trigger.

Trigger Reference List

Reference Description
#trigger.data.httpBody The body of the HTTP request in string format
#trigger.data.httpClientIp IP of the client as a string
#trigger.data.httpCookies Cookies associated with the request as a Dictionary<string,string>
#trigger.data.httpMethod HTTP method type (e.g. GET, POST..)
#trigger.data.httpRequestUri Request URI (e.g.  https://myfrendsagent.example.org:9998/api/MyApi/execute?mode=1).
#trigger.data.username The username associated with the caller. Only set if authentication is used. The following values are passed for the different types of authentication:
Api Key: The name of the api key
Basic authentication: The provided username
Certificate: The certificate's SubjectName.Name field
#trigger.data.body Will contain whatever is passed in the request body. If the body contains a JSON object, the properties will be accessible with dot notation. E.g., if the JSON string { "house": { "windows": 4}} is passed in the body, it is possible to access the "windows" property with #trigger.data.body.house.windows
#trigger.data.path Contains path parameters. Automatic casting will be attempted if the parameters have been defined in the swagger spec. Path parameters are mandatory and thus always populated.

If the path /user/{id} has been configured, and the parameter id is of type int, then the reference #trigger.data.path.id can be used directly for integer comparisons (for example, the Decision expression #trigger.data.path.id > 3 would be usable)
#trigger.data.query Contains query parameters. Automatic casting will be attempted if the parameters have been defined in the swagger spec. If the parameter has a default value and the request does not contain the parameter, the default value will be passed to the process.

Query parameters defined in the swagger spec are always populated in the trigger, even if no value is provided.
#trigger.data.header Contains header parameters. Automatic casting will be attempted if the parameters have been defined in the swagger spec. If the parameter has a default value and the request does not contain the parameter, the default value will be passed to the process.

Header parameters defined in the swagger spec are always populated in the trigger, even if no value is provided.

You can try to access an optional reference from any of the references (e.g. #trigger.data.httpHeader.foo); if it is found, the value will be returned, and if not, the value will be null.

Automatic casting

Swagger parameters usually contain a type definition. Parameters of type integer, number or boolean will be cast to their corresponding .NET type (Int, Long, Float, Double or Boolean). For array-type parameters, the array will use the separator defined in the Swagger parameter, and the array contents will in turn be cast according to their types. For example, an array parameter with a csv separator and content type integer, called with the content "1,2,3,4,5", will be accessible as a JArray containing integer values.

Intermediate Response

IntermediateReturn

A Process can return a response to the caller before the Process is finished. This functionality is enabled by adding an Intermediate return element to the Process. When this element is executed, the caller will receive an HTTP response from the Process. This can, for example, be used when calling a long-running Process where the caller should be notified that the long-running task has started.

HTTP Response Formatting

The API Trigger returns the result of the executed Process as the HTTP response. The response varies according to the following conditions. When the Process result is a string, the string is set as the body of the response. If it is an object, it will be returned as either JSON or XML, depending on the request's ACCEPT header, or as JSON by default. For example, ACCEPT: application/xml would produce an XML response, while ACCEPT: application/json would produce a JSON response.

If the result is an object with the properties HttpStatusCode and Content, the result will be mapped to a response as follows:

Property Type HTTP Response
HttpStatusCode int Response status code
Content string The body of the response
ContentEncoding string The encoding for the body, e.g. utf-8
ContentType string ContentType header value, e.g. application/xml or application/json
HttpHeaders KeyValuePair[] Response headers
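
For instance, a Return element's expression could build such an object; a hedged sketch using the property names from the table above (the body and header values are examples only):

    // Expression for a Return element: the trigger maps this to an HTTP response
    // with status 200, a JSON body and one custom response header.
    new {
        HttpStatusCode = 200,
        Content = "{ \"status\": \"ok\" }",
        ContentEncoding = "utf-8",
        ContentType = "application/json",
        HttpHeaders = new[] {
            new System.Collections.Generic.KeyValuePair<string, string>("X-Example-Header", "true")
        }
    }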

Http response

The process elements Return, Intermediate return and Throw all have the option to generate a pre-defined Http response. See Http Response results.

HTTP Trigger

HTTP Triggers allow you to trigger Processes by HTTP or HTTPS requests. The HTTP endpoint is hosted by the FRENDS Agent, using the operating system's HttpListener interfaces. The Agent can be configured to listen for requests on multiple ports. Each hosted HTTP Trigger will have its own path for triggering just the specific process.

ui-http-trigger-4.4

Parameters

HTTP Method

HTTP Method determines which methods the trigger URL can be called with. Allowed values are GET, POST, PUT, DELETE, HEAD, OPTIONS, PATCH and ANY. ANY allows any method to go through, while the others allow only the defined method.

Url

All paths configured for an Agent group need to be unique in combination with the method; overlapping paths will cause errors. The paths may contain variables as route parameters (inside the path: runmyprocess/{variable}) or as query parameters (at the end of the path: runmyprocess?id=1).

For example, if you have

  • Agent running on host myfrendsagent.example.org
  • Agent configured to use port 9998
  • HTTP Trigger configured as runmyProcess/{myvariable}

This will register a trigger that listens on the address https://myfrendsagent.example.org:9998/runmyprocess/{myvariable}

If you call the trigger with the following URL:

https://myfrendsagent.example.org:9998/runmyprocess/anyValueForMyVariable?anothervariable=1&yetanother=foo

the following references and their values will be available in the process:

#trigger.data.pathParameters.myvariable = anyValueForMyVariable
#trigger.data.queryParameters.anothervariable = 1
#trigger.data.queryParameters.yetanother = "foo"

Allowed Protocols

HTTP triggers can be configured to accept requests with HTTP, HTTPS or both. If a request is made with a protocol that is not allowed, the reply will be Forbidden (403).

Authentication

HTTP triggers can use four different kinds of authentication:

  • None - No authentication at all
  • Basic - Authenticate with HTTP basic authentication
  • Certificate - Use a client certificate to authenticate
  • Api key - Authenticate with an API key

We strongly recommend using authentication only over HTTPS.

Basic authentication authenticates the user either against the Active Directory or the local users. Which one is used depends on the FRENDS Agent service user. If the agent uses a local user account, users are authenticated against the local machine users. If the agent uses an AD user account, users are authenticated against the AD users. The user name and password need to be encoded with UTF-8 before being converted to Base64 for the basic authentication header.

Certificate authentication requires that the client certificate is valid for the FRENDS Agent user on the agent machine. Also the issuer for the certificate needs to be found in the agent user's Client Authentication Issuers certificate store.

Api key authentication uses an API key together with Rulesets to determine if the client has access to a URL. For more information, see API keys.

Cross-origin Resource Sharing

If you need to allow a certain page to trigger a Process, you can do so with cross-origin resource sharing (CORS). Check the "Allow requests from these origins" setting, and define the allowed origins in the textbox. The * character allows calls from all origins.

Note: if the call does not come from the default port, it must be included in the origin. The origin making the call must also support CORS.

Public / private HTTP triggers

You can choose to mark an HTTP trigger public by checking the "Public - will be accessible on API Gateways" setting. As the option says, this means the Trigger endpoint will be published on API gateways. Private triggers can only be accessed from the actual execution Agents. This way you can e.g. limit some APIs to be used only from your internal network.

Trigger Reference List

Reference Description
#trigger.data.httpParameters Dictionary<string, string> of parameters passed in the URL, both route and query parameters. DEPRECATED - use pathParameters or queryParameters to access the path and query parameters instead.
#trigger.data.queryParameters Dictionary<string, string> of passed HTTP query parameters
#trigger.data.pathParameters Dictionary<string, string> of passed path parameters
#trigger.data.httpHeaders Dictionary<string, string> of passed HTTP request headers (e.g. Host, Accept)
#trigger.data.httpBody HTTP request body as a string
#trigger.data.httpMethod HTTP method (e.g. GET, POST)
#trigger.data.httpRequestUri Request URI (e.g. https://myfrendsagent.example.org:9998/runmyprocess/anyValueForMyVariable?anothervariable=1)
#trigger.data.httpClientIp IP of the client as a string
#trigger.data.cookies Cookies associated with the request as a Dictionary<string,string>
#trigger.data.username The username associated with the caller. Only set if authentication is used. The following values are passed for the different types of authentication: Api key: the name of the API key; Basic authentication: the provided username; Certificate: the certificate's SubjectName.Name field

You can try to access an optional reference from any of the references (e.g. #trigger.data.httpHeaders.foo); if it is found, the value will be returned, and if not, the value will be null.

Intermediate Response

IntermediateReturn

A Process can return a response to the caller before the Process is finished. This functionality is enabled by adding an Intermediate Return element to the Process. When this element is executed, the caller will receive an HTTP response from the Process. This can for example be used when calling a long-running Process and the caller should be notified that the long-running task has started.

HTTP Response Formatting

The HTTP Trigger returns the result of the executed Process as the HTTP response. The response varies according to the following conditions. When the result of the Process is a string, the string is set as the body of the response. If the result is an object, it will be returned either as JSON or XML depending on the request's ACCEPT header, or as JSON by default. For example ACCEPT: application/xml would produce an XML response, while ACCEPT: application/json would produce a JSON response.

If the result is an object with the properties HttpStatusCode and Content, the result will be mapped to a response as follows:

Property Type HTTP Response
HttpStatusCode int Response status code
Content string The body of the response
ContentEncoding string The encoding for the body, e.g. utf-8
ContentType string ContentType header value, e.g. application/xml or application/json
HttpHeaders KeyValuePair[] Response headers

Http response

The process elements Return, Intermediate return and Throw all have the option to generate a pre-defined Http response. See Http Response results.

Custom Tasks

FRENDS fully supports creating your own task packages. To do this, you create a .NET class library, wrap it in a NuGet package file (see https://docs.microsoft.com/en-us/nuget/quickstart/create-and-publish-a-package) and upload it into FRENDS through the Tasks page.

ui-import-nuget

Creating a FRENDS Task Package

To create a FRENDS task you first need to create a .NET class library, preferably targeting .NET Standard 2.0.

FRENDS supports .NET Standard 2.0 class libraries starting from version 5.0. Previous FRENDS versions supported only libraries targeting .NET Framework up to 4.5.2, so if you need to be able to run on older FRENDS versions, you need to target .NET Framework 4.5.2.

When creating the class library, please note that the task should be implemented as a public static method with a return value. Non-static methods or methods with no return value (void) cannot be used as tasks. The methods cannot be overloaded, e.g. you cannot have both Frends.TaskLibrary.CreateFile(string filePath) and Frends.TaskLibrary.CreateFile(string filePath, bool overwrite).

Task libraries are distributed as NuGet packages (.nupkg). When creating a custom task library, please note that the assembly name and package Id must be identical, e.g. Frends.TaskLibrary.dll and Frends.TaskLibrary.1.0.0.0.nupkg.
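
A minimal .nuspec along these lines (a sketch; the version and metadata values are placeholders) illustrates the naming requirement:

<?xml version="1.0"?>
<package>
  <metadata>
    <!-- The package id must match the assembly name, here Frends.TaskLibrary.dll -->
    <id>Frends.TaskLibrary</id>
    <version>1.0.0</version>
    <authors>Example Author</authors>
    <description>Custom FRENDS task library.</description>
  </metadata>
</package>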

Task parameters

All parameters specified for the method will be used as Task Parameters. If the parameter is of class type, it will be initialized as a structure.

For example:

using System.ComponentModel;

namespace Frends.TaskLibrary 
{ 
    /// <summary>
    /// File action type (nothing/delete/rename/move)
    /// </summary>
    public enum ActionType 
    {
        /// <summary>
        /// Nothing is done to the file
        /// </summary>
        Nothing,

        /// <summary>
        /// File will be deleted
        /// </summary>
        Delete,
        /// <summary>
        /// File will be renamed
        /// </summary>
        Rename,

        /// <summary>
        /// File will be moved
        /// </summary>
        Move
    }

    /// <summary>
    /// File class
    /// </summary>
    public class File
    {
        /// <summary>
        /// File path
        /// </summary>
        [DefaultValue("\"C:\\Temp\\myFile.json\"")]
        public string Path { get; set; }

        /// <summary>
        /// Maximum size of the file
        /// </summary>
        [DefaultValue("0")]
        public int MaxSize { get; set; }


        /// <summary>
        /// Password for unlocking the file
        /// </summary>
        [PasswordPropertyText]
        public string Password { get; set; }
    }

    /// <summary>
    /// FileAction class defines what will be done to the file
    /// </summary>
    public class FileAction
    {
        /// <summary>
        /// Action to be done with the file
        /// </summary>
        public ActionType Action { get; set; }

        /// <summary>
        /// If ActionType is Move or Rename then To is the path to be used
        /// </summary>
        [DefaultValue("\"\"")]
        public string To { get; set; }
    }

    public static class Files 
    {
        /// <summary>
        /// DoFileAction task does the desired action to file
        /// </summary>
        /// <param name="file">File to handle</param>
        /// <param name="action">Action to perform</param>
        /// <returns>Returns information if task was successful</returns>
        public static string DoFileAction(File file, FileAction action)
        {
            // TODO: change logic
            return $"Input values. Path: '{file.Path}', Max size: '{file.MaxSize}', Action: '{action.Action}', To: '{action.To}'";
        }
    }
}

In case of a complex or large parameter structure you can use custom attributes from the System.ComponentModel and System.ComponentModel.DataAnnotations namespaces to specify how the parameters are shown in the UI. For .NET Standard 2.0 task libraries, the attributes are available from the System.ComponentModel.Annotations NuGet package.

The custom attributes from System.ComponentModel.DataAnnotations are supported from FRENDS version 4.5.6.

Default value: DefaultValueAttribute

The task parameters may use the DefaultValueAttribute to provide a default value which is shown in the editor. Remember that the parameters are expressions in the editor, so the default values need to be provided as such, e.g. "true" for a boolean value, or "\"C:\\Temp\\myFile.json\"" for a string containing a file path.

Sensitive information not to be logged: PasswordPropertyTextAttribute

Also, if a parameter should not be logged, the PasswordPropertyTextAttribute should be added. The value of the parameter will be replaced with << Secret >> in the log. The parameters may have a more complex hierarchical structure, but we recommend using at most two levels of hierarchy.

Optional inputs: UIHintAttribute

[UIHint(nameof(Property), "", conditions: object[])]

Show or hide editor inputs based on the value of other inputs.

Example:

public bool Rename { get; set; }
[UIHint(nameof(Rename),"", true)]
public string NewFileName { get; set; }

The NewFileName field will only be visible if the Rename property has the value true.

public FileEnum FileOptions { get; set; }
[UIHint(nameof(FileOptions),"", FileEnum.Rename, FileEnum.CreateNew)]
public string NewFileName { get; set; }

The NewFileName field will only be visible if the FileOptions choice is either Rename or CreateNew.

Default editor type: DisplayFormatAttribute

[DisplayFormat(DataFormatString = "")]

Sets the default editor input type. The parameter input editor will try to use this format when e.g. filling out new task parameters.

Possible values for DataFormatString are:

  • Json
  • Text
  • Xml
  • Sql
  • Expression

Example:

[DisplayFormat(DataFormatString = "Sql")]
public string Query { get; set; }

Tabbed parameter panels: PropertyTabAttribute

[PropertyTab]

Group parameters as tabs

Example:

public static bool Delete([PropertyTab] string fileName, [PropertyTab] OptionsClass options)

Using Frends.Tasks.Attributes

Using Frends.Tasks.Attributes to customize parameter display has been deprecated in FRENDS 4.5.6 / 4.6.

In older FRENDS versions, you can also use the custom attributes from Frends.Tasks.Attributes to customize the Task parameter editor.

Customize task discovery in FrendsTaskMetadata.json

By adding a FrendsTaskMetadata.json file to the root of the NuGet package, unwanted static methods can be skipped by listing only the methods which are wanted as Tasks. For example the following json structure would only cause the DoFileAction to be considered as a Task:

{
    "Tasks": [
        {
            "TaskMethod": "Frends.TaskLibrary.FileActions.DoFileAction"
        }
    ]
}

XML Documentation

Custom Tasks can also be commented/documented in the code by using XML Documentation Comments. These comments will show up in the process task editor automatically if the documentation XML file is included inside the Task NuGet (if the NuGet id is Frends.TaskLibrary, then a file named Frends.TaskLibrary.xml will be looked for).

The generation of this file can be enabled in Visual Studio, for example, under Build > Output > XML documentation file. When the comments are queried, the Task parameter definition is checked first, and if it is not found, the type definition will be checked.
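
For SDK-style projects, the same can be achieved with a project file property (a sketch; GenerateDocumentationFile is a standard MSBuild property):

<PropertyGroup>
  <!-- Emit Frends.TaskLibrary.xml next to the assembly at build time -->
  <GenerateDocumentationFile>true</GenerateDocumentationFile>
</PropertyGroup>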

Service Bus Trigger

Service Bus triggers are similar to Queue Triggers, in that they allow you to trigger Processes on messages received from a message queue, in this case an Azure Service Bus or Service Bus for Windows Server queue or subscription.

DOC-ServiceBusTriggerSettings

NOTE: The service bus trigger cannot accept message sessions, so it cannot listen to queues or subscriptions requiring sessions. It can, however, send replies to session queues, as described below.

Configuring Service Bus Triggers

The Service Bus trigger needs the following settings in order to work:

Name Description
Queue Name of the queue or subscription to listen to
Connection string The full Service Bus connection string
Max concurrent connections Limit on how many messages will be processed at a time. Essentially will limit the number of Processes running at the same time.
Consume message immediately If set, the message will be consumed from the queue immediately on receive. If not set, the listener will use the PeekLock receive mode, and acknowledge the message only if it was processed successfully. The lock will be refreshed according to the lock duration defined for the queue, however if the connection string does not have management access to the Service Bus, the lock duration cannot be read and the lock is assumed to have a duration of one minute. If the lock is shorter than a minute, the lock might get released before the Process has finished executing. If the process fails with an exception, the message will return to the queue, and will be processed again. In this case, the trigger will retry processing the message until the max delivery count on the queue or subscription is reached.
Reply If set, the Process response will be sent to a reply queue, usually defined by the `ReplyTo` Property in the request message. See Reply messages below for more.
Reply errors If set and the Process fails with an exception, the exception message will be serialized and sent to the reply queue. See Reply messages below for more.
Default reply queue Needed if the 'Reply' option is set. The default queue or topic where the reply message will be sent if the request did not specify it with the `ReplyTo` property in the request. See Reply messages below for more.

Trigger data for the Process

The trigger will pass the message content serialized as a string to the Process. It can be accessed via the #trigger.data.body reference.

The trigger will also set the #trigger.data.properties dictionary from the message properties. Any custom properties will be included in the list by name and value. The built-in message properties are also accessible; they have the "BrokerProperties." prefix. The following table summarizes the available properties.

Property reference Description
#trigger.data.properties["BrokerProperties.CorrelationId"] Correlation ID
#trigger.data.properties["BrokerProperties.SessionId"] Session ID
#trigger.data.properties["BrokerProperties.DeliveryCount"] Delivery count, i.e. how many times the message has been received from the queue
#trigger.data.properties["BrokerProperties.LockedUntilUtc"] Message lock timeout if not consuming message immediately
#trigger.data.properties["BrokerProperties.LockToken"] Message lock token if not consuming message immediately
#trigger.data.properties["BrokerProperties.MessageId"] Message ID
#trigger.data.properties["BrokerProperties.Label"] Label given to the message
#trigger.data.properties["BrokerProperties.ReplyTo"] Queue name where to send replies to. See Reply messages below for more.
#trigger.data.properties["BrokerProperties.ReplyToSessionId"] Session ID to set in the reply so the caller can identify it. See Reply messages below for more.
#trigger.data.properties["BrokerProperties.ContentType"] Body content type

Reply messages

Sometimes you need to get a reply back to the sender of the request, e.g. when the caller needs to wait for the triggered Process to finish, or needs the results. In this case, you can turn on replies on the Service Bus trigger. This will then return the result of the process in a message that is put to the given reply queue.

The request-reply process usually goes as follows:

  • The caller will decide on a session ID and queue for receiving the reply. It will set these to the ReplyToSessionId and ReplyTo properties in the request message, and send the message to the queue listened to by the trigger. The caller will then start listening on the reply queue, accepting only the message session with the given session ID. This means the caller will only get the response that was meant for it, even from a shared queue.
  • The trigger will receive the request and start a new Process instance, passing the message body and properties as trigger properties to the Process.
  • Once the Process has finished, if the 'Reply' option is set, the trigger will create the response message. The response message will have the serialized result in the message body, with the SessionId set to the given ReplyToSessionId value from the request and CorrelationId set to the CorrelationId value from the request. The response is then sent to the queue or topic given in the ReplyTo property, or if the request did not define one, in the default queue for replies, configured in the trigger.
  • The caller will receive the reply message in the session.

If the Process fails and 'Reply Errors' was selected, the exception that caused the failure will be written to the reply message. The message will also have the SessionId and CorrelationId set if required.
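
As a sketch of the caller's side of this request-reply pattern using the Azure.Messaging.ServiceBus SDK (queue names and message contents are placeholders; the reply queue must be session-enabled):

using System;
using System.Threading.Tasks;
using Azure.Messaging.ServiceBus;

public static class RequestReplyCaller
{
    public static async Task<string> CallProcessAsync(string connectionString)
    {
        await using var client = new ServiceBusClient(connectionString);

        // 1. Decide on a session ID and a reply queue, and set them on the request
        var sessionId = Guid.NewGuid().ToString();
        var request = new ServiceBusMessage("<order>...</order>")
        {
            ReplyTo = "replies",            // queue the trigger should reply to
            ReplyToSessionId = sessionId,   // session ID identifying our reply
            CorrelationId = Guid.NewGuid().ToString()
        };

        // 2. Send the request to the queue the Service Bus trigger listens on
        var sender = client.CreateSender("requests");
        await sender.SendMessageAsync(request);

        // 3. Wait for the reply in our session on the session-enabled reply queue
        ServiceBusSessionReceiver receiver =
            await client.AcceptSessionAsync("replies", sessionId);
        ServiceBusReceivedMessage reply =
            await receiver.ReceiveMessageAsync(TimeSpan.FromMinutes(2));

        return reply?.Body.ToString();
    }
}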

Task testing

Task test execution enables the developer to rapidly test a Task without having to create a new version of a process for each change made to the parameters used for calling the Task. The Task test view works in the same way as a regular Task editor.

To access the Task test functionality, you can click on the "Test" button on the right side of the process editor. It is also possible to copy the parameters from an already configured Task to the test from the Task parameter editor by clicking on 'Show advanced settings' and then 'Create new test'.

All the parameters and results for the test execution will be shown in the same Test editor under the "Result" tab. Only the previous test execution result will be shown and old test executions cannot be recovered.

To be able to execute the Task, an Agent needs to be installed in the development Environment.

TaskTest

Manual Trigger

A process can have a manual trigger to manually pass parameters from the user to start the process.

ui-manual-trigger

Unlike other trigger types, a manual trigger can be configured with a dynamic number of parameters. When defining manual parameters you need to define each of the parameters by using the "Add parameter" button.

A Manual Parameter consists of:

  • Key - Required
  • Default value - Optional
  • Description - Optional
  • Secret-flag - Indicates that this parameter will not be logged

These manual parameters can be accessed in the process using the same #hashtag and {{ handlebar }} notation as any other trigger variables.

Monitoring rules

You can define monitoring rules to check that your processes are executing as expected. You create rules to check for given Promoted variable values in the database during a time interval, e.g. if the count of "OrderId" variable values ever goes under 100 within an hour, or if there are any instances of a "PROC-e234-ERROR" variable promoted.

You define rules per environment. So, in order to create a new monitoring rule, go to the Monitoring rules view, and choose Create new. The editor will open, which will allow you to define the rule in more detail.

Examples

Alert if number of orders is less than expected

The following configuration shows a common rule of sending an alert if not as many orders have been processed as you would have expected:

DOC MonitoringRules example LessThan100OrdersPerHour

Things to note:

  • You can define the rule to be active only on selected days and between selected times. Make sure to set the timezone correctly in this case.

Alert if step takes longer than expected

This is a rule for alerting if something has happened, in this case a process step taking longer than expected:

DOC MonitoringRules example MoreThanZeroErrors

Things to note:

  • The filter on variable values means that only those instances with a value greater than expected will be counted. NOTE: when using greater or less than operators in the filter, make sure the actual promoted value is numeric!
  • As any instance not filtered away is an error, we want to send an alert if there are 1 or more instances.
  • The time interval is set to 5 minutes, so any errors will be sent without too much delay. In this case, e.g. an error reported at 12:34 would get alerted at 12:50 by default (the rule waits for one full time interval to close, i.e. until 12:40, plus the 10 minute processing delay; see below).

Rule processing

The rules are processed by the message processing service periodically, once per minute. The processor will generate the value series for the individual rule, which is then used to check if the rule is met or an alert should be sent.

The rules can have a max time interval of 24 hours. The time intervals always start from the start of the hour or day.

NOTE: The daily interval also means that you need to align your time intervals accordingly. If you have e.g. a time interval of 10 hours, it will only be evaluated every day at 10:00 and 20:00, meaning any values for the time period from 20:01 to 0:00 will not be monitored. Therefore, make sure your time intervals fit nicely into full hours / days, e.g. use time intervals like 5 minutes, 10 minutes or 1 hour.

The monitoring rule series are generated and checked once a full rule interval (e.g. 1 hour) has passed, with an additional 10 minute delay by default. This delay is there to make sure all process log messages and promoted variables for the time interval have been written to the data store, so the generated series are valid and no false alerts are sent. You can change the default delay from the messaging service config file, by changing the MonitoringRuleAlertProcessingDelay value.

Rules are also only evaluated starting from the time the rule was originally created. Therefore, right after creating a rule, you may need to wait a while before you get any alerts shown.

Delivering alerts with email

Monitoring rule alerts can be delivered to multiple email addresses via SMTP. The receiving email addresses are configured per environment. Email recipients are added as a comma separated string in the monitoring rule view.

If an alert for a rule has already been sent there will by default be a one hour waiting period before new alerts are sent for the same rule. This is to prevent excessive spamming of email inboxes.

You can configure the SMTP server and other settings during deployment in the deploymentSettings.json file. The settings are passed from there to the messaging service config file.

The alert emails are sent with the alert@frendsapp.com sender address. You can configure this from the messaging service config file by setting the appSetting AlertEmailSenderAddress.

API Management

FRENDS supports importing and editing Swagger Specifications. Swagger Specifications can be used to generate Processes that serve as API endpoints. Processes that belong to a Specification can be managed and deployed as a unit.

api-swagger

The API Management page can be found under "API".

Importing a Swagger definition

FRENDS supports importing Swagger 2.0 Specifications in JSON format. YAML markup is not supported. If your Swagger Specification is in YAML, use for example the online Swagger Editor to download a converted version. This tool can also be used for creating Swagger Specifications that can be imported into FRENDS.

It's possible to modify an imported Swagger Specification using external tools. If you want to update an imported Swagger Specification, just import the updated specification and FRENDS will automatically create a new version of the Specification. Note that the one thing that cannot change between imports is the base path of the Specification - if the base paths differ, a new API will be created rather than a new version of the old API.

API Deployment and Version Control

API versions exist in two different states. The version that is seen in the Development environment is always the current Development version of an API. In all other environments, published versions are shown.

Development versions

A development version of an API is a version where the linked processes do not have locked-down versions. That means that the user can update any process that is a part of the API without taking any additional actions. The development version can also have its Swagger specification modified. When an API is ready for deployment, a Published version will be created.

Published versions

A published version contains everything a development version does, but it no longer allows any changes. It locks down the process versions which are in use, and the Swagger Specification can no longer be changed. A published version can be deployed as a unit, and it can also be used to roll back the Development version to a previous point.

Deploying

When deploying, the user can choose to deploy a previously Published version, or create a new Published version from the current Development version. The Deployment dialog allows the user to see which processes will be deployed as well as the Swagger Specification. If a Published version is no longer valid, for example due to a used process being deleted, then it can no longer be Deployed.

ui-api-deployment

Editing Swagger

FRENDS supports editing imported Swagger Specifications. The base path of a Swagger Specification cannot be changed once it has been imported.

Note that editing a Swagger Specification will override the current Swagger Specification; it will not create a rollback point by default. If you want a rollback point before editing, press Deployment and choose "Save and deploy". This will create a new version of the Specification and allow you to roll back at a later stage. It's not mandatory to go through with the Deploy step.

Creating API processes

Once a Swagger Specification has been imported, FRENDS can create Processes matching the API operations defined in the specification.

A process generated from an API Specification will contain an API Trigger. The Trigger will give the process access to all expected variables passed to the endpoint, and will even cast them to the correct type.

A generated process will also come with pre-generated responses. For example, if an endpoint is defined to return a Pet object on success and an Error object otherwise, then the process will contain both of these responses upon creation, complete with expected parameters (as long as the Swagger Specification contains all the required information, of course). Whatever happens in between the trigger and the responses is up to the user.

Note that some settings that are defined in the Swagger specification are set on a process level. Supported schemas as well as Authentication will be set by the API Trigger, and might differ from what has been defined in the Specification.

Unlinking a process

A process that has been created from an API Operation is linked to the API Specification that created it. That means the process will be deployed when the API is deployed, and the API Deployment will make sure the right version of that process is deployed.

ui-api-unlinked-process

If you wish to unlink a process from an API, for example to create a new process for that API Operation, simply click "Unlink Process". An unlinked process can easily be re-linked to an API.

On Swagger Operation changed

FRENDS detects when a Swagger Operation has changed for a Process with an API Trigger. This can happen when importing a new version of a Specification or when editing the Swagger Specification - for example, an operation can gain an extra parameter, or there's a schema definition change.

ui-api-operation-changed

FRENDS offers the functionality to update the Process' API trigger to match the new Swagger Specification. Note that this only updates the trigger - if the expected responses have changed, then it's up to the user to modify those.

In case an operation is removed entirely, the process gets unlinked from the Specification. API processes that are unlinked from an API are still visible in the view.

ui-api-operation-not-existing

HTTP Response types

Responses are defined in a Swagger operation by HTTP status codes. For codes beginning with 2 or 3 (success / redirection), a Return element will be generated. For others, a Throw element will be generated.

Note that the behaviour of these two elements is different.

A Throw element will end the process in an error state, and if used in a Scope or a loop, it'll end the process execution without continuing. Throw will also cause Error handlers to trigger.

A Return will end the process in a success state, and will continue executing if used in a Scope or loop.

If you need to send out an error response but do not want the behaviour that comes with a Throw element, just add a Return element with the same settings as the Throw.

Deleting API Specifications

Deleting from a non-Development environment only removes the deployed processes and the deployed API Specification - they will still exist in the Development environment and can be re-deployed from there. Deleting from the Development environment removes the API as well as the linked processes. It's only possible to delete the API from the Development environment if it's not deployed in other environments.

Swagger features not supported

  • Form parameters are not supported.
  • File parameters are not supported.
  • Only parameters defined on an operation level will be available for auto-complete in the Process editor.

Process Log Settings

You can configure what information is logged for each execution of a Process, and how long the data is retained. You define these Process log settings per Environment. You can also override the Environment-level settings per process, if needed.

The Environment-level Process log setting defaults are managed from the Process list view for the Environment. Clicking on the "Log settings" button at the top of the page will open the Process Log settings dialog for the selected Environment.

DOC-EnvLogSettings

Log level

The Log level determines how much information is logged for each executed Process step. The following log levels are available, from the least to the largest amount of logged data:

Only errors

As the name suggests, only errors will be logged with this Log level setting. No step or subprocess execution data will be logged, which will speed up the process execution and log message processing as a whole.

If an exception happens within the Process, then the parameters used for that Task or Subprocess will be logged along with the exception. The result of steps which are set to promote result are also logged, as always.

Note that if you have promoted results in subprocesses, or handle any exceptions without rethrowing them, the subprocess instances themselves are not logged under the Only errors Log level, whereas the steps are. This may lead to redundant logging of data you cannot actually view. The data will eventually be cleared, but for maximum performance, you should not promote results of subprocesses under the Only errors log level.

Default

The Default Log level logs results for each step executed in a graph, with the exception of Foreach elements. Parameters for Tasks and Subprocesses are not logged by default, nor are the variable references used in expressions for Condition branches or While loops. In case parameters or results are very big (over 10,000 characters), the logged value will be truncated.

Everything

Sometimes you need to know everything that happens within a process - this is especially useful when developing a new Process. With the Everything Log level, every parameter and each result is logged. For conditional expressions, referenced variable values are also logged. Log level Everything will log the full values, and not truncate large result or parameter sets, as the Default Log level would do.

Log process parameters and return value

For some Processes, you may be mostly interested in the execution performance and latency. This is especially true for API Processes that are called often. Setting the Log level for such processes to Only errors will speed up the execution and reduce the amount of redundant log data.

However, you could still be interested in logging the complete request and response data for the Process, e.g. for internal auditing purposes. For this, you can just turn on Log process parameters and return value. When set, all Process input parameters passed from the trigger, as well as any values returned (including intermediate return values), are logged. The data is then visible in the Process instance list.

Process instance history retention period

To prevent the log database size from growing without control, the FRENDS Log Service deletes old Process Instances from the database periodically. By default, any Process instances older than 60 days will be deleted, but you can set the retention period for specific Environments or individual Processes as needed. See Database Maintenance for more.

Process-level settings

If one or more Processes deployed to an Environment have different log requirements than the rest of the Processes, you can override the Environment-level settings for individual processes. Clicking the "Log settings" menu item from the Process action menu opens the Process-specific log settings dialog. There you can choose to override the Environment-level settings by checking the "Use process-specific settings" option.

DOC-ProcessLevelLogSettings

If checked, the settings will override any Environment-level settings. If you later want to revert back to the Environment-level settings, just uncheck the override option.

Recommendations

Since logging large amounts of data will affect performance, it's recommended to set especially production Environments to log as little as possible, e.g.:

  • Log level to Only Errors
  • Log process parameters and return value off
  • Process instance history retention period to the shortest period you think you will need

You can then override the log settings (e.g. set a longer log retention period) for the processes that have more stringent requirements.

File Trigger

File watch triggers are triggered when a file matching the File filter is saved to the Directory path to watch.

ui-file-trigger

The trigger watches for new files added to the watched directories, e.g. a newly created file will cause the trigger to launch a process, but if that file is left in the directory and modified, that will not cause a new execution. The file watch also checks the files in the directory every 10 seconds.

The trigger keeps track of files it has already processed by their file names. This means that it will not notice if you e.g. overwrite the file, or quickly delete a file and then create a new one with the same name. It may take up to the max poll delay of 10 seconds to notice a file has been removed.

NOTE: It is recommended to always clear the files from the watched directories after processing, because keeping track of them takes up resources. If there are thousands of files in the folder, the processing may slow down. The trigger will process at most 1000 files at a time.

File watch trigger can define:

  • Name – Descriptive name for the trigger
  • Directory path to watch – Directory path from where the files will be fetched.
  • File filter – File filter to use (e.g. '*.xml').
  • Include sub directories – If enabled, fetches all the matching files also from subdirectories.
  • Batch trigger events – Batches the possible trigger events so there will only be one process instance for all modified files. If not set, a new process instance will be created for each file. For non-batched triggers, the process generation is limited to 10 processes every 10 seconds. NOTE: Non-batched triggers are being deprecated for performance reasons, so it is recommended to always turn on batching.
  • Username – If the username field is used, the File Trigger will not use the FRENDS Agent account credentials to poll for files but a different account. Expected input is domain\username
  • Password – The password for the user above

Once the process is triggered by a file, the file paths are available to the process via the #trigger.data.filePaths reference, as a list of strings. For more details, there is also the #trigger.data.files reference, which returns a list of objects with the following properties:

  • FileChangeType - Always "Created"
  • FullPath - Full path to the file
  • FileName - Name of the file

Process Element Logging

You can configure a task (or any other element) to skip all result and parameter logging to make sure no sensitive information will ever be logged.

Even when an error occurs, nothing will be logged if this setting is set to true.

This setting can be found on the Process Editor -> Element Parameter Editor -> Show advanced settings -> Skip logging result and parameters

SkipLoggingResultAndParametersForElement

Parameter Editor

When building FRENDS Processes and Subprocesses you will need to configure Tasks to tell them exactly what they should do. An example of this kind of configuration is the SQL query an SQL Task should execute.

The configuration of these tasks is done using the parameter editor which appears on the right side of the Process Editor when selecting a task from the canvas:

ui-param-editor

Configuring Element Basic Properties

When configuring tasks using the parameter editor you should start with the basic properties located at the top of the parameter editor:

ui-param-close-up

These basic properties include:

Name

Many elements have a name input. Elements with return values generally require a name. The element name is used when referencing the result of a previous element and therefore the element name must be unique within a Process.

Condition branches are a special case when it comes to unique naming - the names of Condition Branches only have to be unique for the Exclusive or Inclusive Decision they're attached to.

Elements that do not have a name input can still be named by double clicking on the element in the editor, but the name will only be used for display purposes.

Type

For elements of type Start, Task and Call Subprocess, a type selection must be made. Clicking on the type selector drop-down will show a list of available types. After selecting a type, the parameters associated with it will be displayed.

The Task return type and package can be seen by hovering over a selected type.

Description

Each element also has an optional description field where you can enter freeform information or documentation about the operation the task is performing. This is a good place to store, for example, contact information of a specific system holder if there is a problem executing the task.

Promote result

Elements of type Return, Intermediate Return, Task, Call Subprocess and Code have the option to promote their result. To activate the option, simply toggle "Promote result as" and enter the variable name to be used.

Activate promoted result by toggling on Promote result as and entering a variable name

Retry on failure

Tasks have the option of toggling automatic retries in case of an exception. A Task can be automatically retried up to 10 times. The retries are done using an exponential back-off wait time. The formula for the exponential back-off wait time is 500 ms * 2 ^ (retry count - 1).

To enable automatic retries for a Task, toggle "Retry on failure" and set the maximum number of retries.

Retry attempt Wait time
1  0.5 seconds
2  1 second
3  2 seconds
4  4 seconds
5  8 seconds
6  16 seconds
7  32 seconds
8  64 seconds
9  128 seconds
10 256 seconds

Time in between task retries depending on the retry attempt
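
As a quick sketch of the same formula in C# (the helper name is illustrative):

using System;

public static class RetryBackoff
{
    // Wait time before retry attempt n (1-based): 500 ms * 2^(n - 1)
    // RetryDelay(1) == 0.5 s, RetryDelay(5) == 8 s, RetryDelay(10) == 256 s
    public static TimeSpan RetryDelay(int retryAttempt) =>
        TimeSpan.FromMilliseconds(500 * Math.Pow(2, retryAttempt - 1));
}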

Configuring Element Specific Properties

While the properties above are common for all FRENDS tasks and elements, each element also includes element-specific configuration which changes depending on the element type you are using. An example of this could be configuring the location of a file that you want to read.

FRENDS also provides multiple different ways to enter element specific properties, which depend on the element used.

Entering Element Specific Properties

For each property you will be provided with a field for the property input and a label describing what the input should be. This description label can also be hovered with the mouse to reveal additional information on how to correctly configure said property.

ui-input-description

Parameter Input Modes

When entering the input for a parameter you will be given the option to specify what kind of data you are giving as an input using the input type selector.

change input mode

Text Input Mode

When using the text input mode you can enter freeform text as your given input. This input can be modified using the standardized {{ handlebar }} notation of FRENDS. For example one could give a file name with the current day in the format:

file_{{DateTime.Now.ToString("yyyyMMdd")}}.xml

Which would result in an input of for example file_20170401.xml.

XML Input Mode

The XML input mode allows you to enter valid XML as the input instead of freeform text. The advantage of this is that it provides on-the-fly validation of the given XML and allows for easier editing of the formatted data. The XML input mode can also be modified using the standardized {{ handlebar }} notation. For example you could inject the current date into a structured XML with the following input:

<note>
    <to>Tove</to>
    <from>Jani</from>
    <heading>Reminder</heading>
    <body>Don't forget me this weekend!</body>
    <date>{{DateTime.Now.ToString()}}</date>
</note>

Which would result in an XML input of:

<note>
    <to>Tove</to>
    <from>Jani</from>
    <heading>Reminder</heading>
    <body>Don't forget me this weekend!</body>
    <date>2017-04-01T12:00:00.000Z</date>
</note>

JSON Input Mode

The JSON input mode works exactly the same as the XML input mode, in that you can enter structured JSON data which can then be modified by injecting dynamic data using the {{ handlebar }} notation. For example:

{
  "note": {
    "to": "Tove",
    "from": "Jani",
    "heading": "Reminder",
    "body": "Don't forget me this weekend!",
    "date": "{{DateTime.Now.ToString()}}"
  }
}

Would result in a JSON input:

{
  "note": {
    "to": "Tove",
    "from": "Jani",
    "heading": "Reminder",
    "body": "Don't forget me this weekend!",
    "date": "2017-04-01T12:00:00.000Z"
  }
}

SQL Input Mode

As with the JSON and XML input modes the SQL input mode allows you to enter structured SQL as an input which can then be modified using the {{ handlebar }} notation.

Expression Editor Input Mode

The expression editor input mode gives you full control over the input you are giving to a specific task. This means that you can enter C# code in the expression editor to convert other incoming dynamic data to a format that is supported by the task. The {{ handlebar }} notation does not work with the expression editor, but you can instead access all of the process related variables straight in the editor without the handlebars.

Adding Results of Previous Tasks as Input

When building integration flows it's often necessary to pass data between two FRENDS elements or tasks, for example to first retrieve data from a database and then send that data to a web service.

This can be done using the #hashtag notation which provides all the available references to your current input field. This means that you can for example pass the result of a previous task as input to a different task:

bpmn result reference

These results from previous tasks and other variables can be freely combined to create a desired result. For example you could create a JSON document which combines data from two previous tasks with the input:

{
  "note": {
    "to": "Tove",
    "from": "Jani",
    "heading": "{{#result[GetHeading]}}",
    "body": "{{#result[GetBody]}}",
    "date": "{{DateTime.Now.ToString()}}"
  }
}

Using other References as input

Besides the {{ handlebar }} notation and the results of previous tasks you can also access various other references relating to the process with the #hashtag notation. These include:

  • #process - Dynamic information about the execution of the Process
  • #trigger - Dynamic information and parameters for the Trigger that started the Process. This can be used, for example, to access the REST request properties which started the process. For details on the available references, please see the individual Trigger reference, e.g. on HTTP Trigger
  • #env - Access to the Environment variable values for the current Environment
  • #var - All available variables in the current Process scope, e.g. those initialized by the Code element

Passwords and other sensitive parameters

The parameter editor will by default mask the inputs of any parameters named "Password" or marked with the PasswordPropertyTextAttribute by the Task library developer. However, the data is still stored in the field in plain text; it will just not be shown in Text input mode. Changing to e.g. Expression input mode will show the actual value. This is because you need to use Environment variables to store sensitive data, and you need to be able to write variable references (e.g. #env.my_password) to the parameter fields.

Therefore, in order to pass sensitive information in parameters so it will never be shown in the UI, you need to:

  • Store the sensitive data in secret Environment variables, as they are never exposed on the UI.
  • Use the environment variable reference as the parameter
  • Make sure the Task parameter has the hidden icon next to the label (as depicted e.g. above). This means the value will not be logged.

Note that if the parameter field does not have the hidden icon, the value may get logged if someone turns up the logging level for the process. To make sure the value is not logged, you can turn off all parameter and result logging for the individual Task from the "Advanced settings", by setting "Skip logging result and parameters" on. This feature is available starting from version 4.6.

HTTP Response results

Elements of types Return, Intermediate Return and Throw have the option to return a HTTP Response result. This return type is used by HTTP and API triggers to build the actual HTTP response returned to the caller.

Other triggers, e.g. queue triggers do not have any special handling for the HTTP return type; if it is used, they will just return the given result structure as JSON.

return-http-response

The HTTP Response allows you to define the HTTP status code, the content type, encoding and http headers. Note that the "Http content" field expects Text (or JSON/XML etc.) as the input type. Object references will not get serialized, so any custom Expressions need to return a string.

Starting from version 4.5.4, you can also return binary HTTP responses. Selecting "HTTP Binary response" as the return type lets you give an expression that returns a byte array to the "Http bytes" field.
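
For instance, an expression along these lines (a sketch; the file path is a placeholder) would return the bytes of a file as the binary response body:

// Expression for the "Http bytes" field: return the response body as a byte array
System.IO.File.ReadAllBytes(@"C:\Temp\report.pdf")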

Note that when using the HTTP Response or HTTP Binary Response return types, the HTTP request handler will skip all content negotiation: the response will have the content type and encoding given, even if the request had an ACCEPT header with a specific request, e.g. for application/xml.

API Access Policies

When using OAuth for API triggers, you need to assign an access policy to the API as well to grant access. If there are no API access policies set, no calls with OAuth bearer tokens will be granted access.

Managing access policies

As an administrator, you can create new access policies and edit existing ones in the Administration > API Access Policies view.

doc api access policies

You need to give the policy a descriptive name and at least one rule. Rules are based on the claims of the OAuth token. A rule can match a claim by its type and possibly also by its value. The type of the claim is the name of the field in the token. If you only want to check that a claim exists (e.g. isAdmin: true), then you only need to give the claim name and leave the value empty. But if you want to check that the claim has a specific value, e.g. "role": ["admin", "user"], you can give the value to match as well. Please note that all matches are exact, i.e. case-sensitive, and wildcards are not supported.

Allow rules define the rules based on which a token will grant access. If you have multiple allow rules, they all will need to match for the token to grant access and the call to be allowed. Deny rules on the other hand will block access if any of the rules match. Deny rules can be used e.g. for maintaining token blacklists.

You can also set the policy to apply only to a specific issuer. E.g. you can give more stringent rules for tokens from public token providers, like Google, and allow all tokens from your internal Active Directory.
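
As an illustration (a hypothetical token payload), an allow rule with claim type role and value admin would match the following token, while a deny rule with claim type isBlocked would not block it, since that claim is absent:

{
  "iss": "https://login.microsoftonline.com/contoso.onmicrosoft.com",
  "role": ["admin", "user"],
  "isAdmin": true
}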

Queue Trigger

Queue triggers enable triggering Processes on messages received from an AMQP 1.0 queue. The queue trigger consumes the message from the queue whenever there is a new message available in the queue. The contents of the consumed message are then available in the process for further processing.

ui-queue-trigger

Configuring Queue Triggers

The queue trigger offers the following configuration properties to connect to a specified queue.

Option Description
Queue The name of the AMQP queue to listen to
Bus Uri The URI for the AMQP bus, e.g. amqps://owner:<SharedSecretValue>@<service_bus_namespace>.servicebus.windows.net:5671
Reply Should the result of a succeeding process be sent to the queue specified by the 'Reply To' option
Reply Errors Should the result of a failing process be sent to the queue specified by the 'Reply To' option
Reply To The queue where the replies should be sent

Trigger Reference List

Property Description
#trigger.data.body The body of the message, see body handling below
#trigger.data.applicationProperties The custom headers of the message
#trigger.data.properties The AMQP message properties; for details see http://docs.oasis-open.org/amqp/core/v1.0/os/amqp-core-messaging-v1.0-os.html#type-properties

Receiving messages

The Queue trigger receives and accepts (completes) messages from the queue as they arrive, with a limit of 10 concurrent messages being processed per Queue trigger per Agent. If configured to do so, the trigger will send a reply message to the 'Reply To' queue when the process finishes.

Note: The AMQP body may contain different types of data. Most of the time this is provided as-is to the process, the exception being when the body is a byte array and the 'ContentType' property has the 'Charset' field set, e.g. 'text/plain; charset=UTF-8'. In this case the binary data is converted to a string with the encoding matching the charset.

Reply messages

If the Process failed and 'Reply Errors' was selected, the exception that caused the failure will be written to the reply message. The message will have a new Guid as the MessageId and the same CorrelationId as the original trigger message.

When replying a success to a queue, the result is written as the body of the message. Complex structures (objects) are serialized as JSON by default. In this case the Correlation Id of the triggering message is copied to the reply message.

It is possible to define the message structure directly in the result. This is done when the result contains an object which has at least one of the properties 'Body' or 'ApplicationProperties'. In this case the result object is mapped directly to the reply message with the following structure:

Body: object - the body of the reply message
ApplicationProperties: Dictionary<string, object> - the custom headers for the message
Properties - the AMQP message properties:
  MessageId: string
  AbsoluteExpiryTime: DateTime
  ContentEncoding: string
  ContentType: string
  CorrelationId: string
  CreationTime: DateTime
  GroupId: string
  GroupSequence: uint
  ReplyToGroupId: string
  ReplyTo: string
  Subject: string
  UserId: byte[]
  To: string
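
For example, a Return element expression along these lines (a sketch; the body and header values are placeholders) would be mapped directly to the reply message:

new
{
    Body = "{ \"status\": \"ok\" }",
    // Custom headers for the reply message
    ApplicationProperties = new System.Collections.Generic.Dictionary<string, object>
    {
        { "x-processed-by", "frends-process" }
    },
    Properties = new
    {
        ContentType = "application/json"
    }
}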

Users

Users are created automatically on their first login. Users can also be created manually and the desired roles can be assigned before the users login for the first time.

user usermanagement

  • 'User is locked' - Enabling this prevents the user from logging in.
  • 'Inherit roles from Active Directory' - Setting is only visible if Windows authentication is used. Overrides the role assignment and uses Active Directory security groups for the user.
  • 'Roles' - Roles for the user

A user may be in multiple different roles. If the user is in no roles and 'Inherit' is not enabled, the user will not be able to do anything.

Note that if a user is in many roles, the rules from the roles will be combined, and any Deny rules will take precedence over Allow rules. E.g. if the user is part of an "Administrators" role allowed to access everything, as well as a "Users" role with access to all views except user management, then the user will not have access to the user management page, even though he or she is in the "Administrators" role.

Database Maintenance

FRENDS uses SQL Server for storing the configuration and log data. The databases need to be periodically maintained. Databases are created and migrated to the newest version with the Frends.DatabaseInitializer tool, which is automatically executed by the deployment scripts. To get a full list of parameters, execute it with the '--help' parameter. By default the databases are created with the simple backup recovery model.

To prevent the database size from growing uncontrollably, the FRENDS Log Service deletes old Process Instances from the database periodically. By default, any instances older than 60 days will be removed, but you can change the settings for a specific Environment or Process. See Process Log Settings for more. The purge is done by executing the stored procedure 'PurgeProcessHistory'. The purge procedure has a 30 minute timeout; if it cannot finish or an error occurs, the execution is retried after 30 minutes.

After purging old Process Instances successfully, the Log Service will reorganize indexes that have reached at least 30% fragmentation. Each index reorganization has a 30 minute timeout.

By default, the Process instance purge and index reorganization will be run on Log Service startup, and is rescheduled to run every 24 hours after finishing successfully. The maintenance actions will run for 30 minutes max; if the actions time out, or there is some other error, they will be retried 5 times by default.

You can configure the maintenance actions with the following optional settings in deploymentSettings.json. These settings should be put directly under the root settings node:

  • maintenanceTimeWindowStart - string with a format of "[hour]:[minute]:[second]", e.g. 00:30:00 for half past midnight
  • maintenanceRetryCount - number
  • disableDatabaseMaintenance - boolean, set to true if you have set up your own scheduled cleanup and maintenance procedures.
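
For example, the relevant part of deploymentSettings.json could look like the following (a sketch showing only the maintenance-related keys, with placeholder values):

{
  "maintenanceTimeWindowStart": "00:30:00",
  "maintenanceRetryCount": 5,
  "disableDatabaseMaintenance": false
}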

Backups

For on-premise installations, the backups are done with three SQL Agent Jobs by default:

  • default_backup_[databaseName] - Creates a full database backup to the SQL Server backup directory, executed every Sunday at 00:30
  • default_differential_backup_[databaseName] - Creates a differential backup to the SQL Server backup directory, executed every hour
  • default_clean_backups_[databaseName] - Cleans up backups older than two months from the SQL Server backup directory, executed every Sunday at 01:30

For Azure deployments, the configuration and log store databases use Azure SQL Automated backups. By default, the data can be restored to a specific point in time within the last five weeks. In addition, monthly backups of the configuration databases are stored for a year.

Access Management - Configuration

The FRENDS UI requires users to log in with OpenId Connect (Office 365 or Azure AD) or a local domain user account (on a local installation). It also allows you to restrict access to views, processes or environments for specific authenticated users or groups.

By default, every authenticated user has access to all functionality except user management. To restrict access to specific views and actions, you can define custom rules in the User Management view, found under Administration. Only users with the Administrator role can manage user access.

Windows Authentication

IIS Configuration: Windows Authentication enabled and Anonymous Authentication disabled

When Windows Authentication is enabled, the users will be logged in using their Windows domain accounts. By default, they will be considered to be in any roles matching the names of the domain groups they are part of in AD. This can be turned off for a user by unchecking the 'Inherit roles from Active directory' option, if you wish to manage the role membership in FRENDS explicitly.

NOTE: You will still have to create and manage the FRENDS roles separately; they will not be generated automatically, except for the built-in roles.

For example, say you have a Windows domain user 'DOM\fooUser' that belongs to the domain groups 'Users', 'BusinessUsers' and 'LocXUsers'. By default, the user will be in the built-in 'Users' FRENDS Role and use its rules. If you then create a new 'BusinessUsers' Role in FRENDS, the user will also be part of that Role.

Administrators

By default, the user who installs FRENDS is given the Administrator role.

The users who automatically get the Administrator role can be configured by modifying the WebUI web.config file.

Example of application key containing the administration configuration:

<add key="LocalAdministratorsJson" value='["DOMAIN\\User","DOMAIN\\Example]' />
The users in the list above are given the Administrator role only when their user account is created. So if a user existed before being added to the administrators list, they will not get the Administrator role.

OpenId Connect

IIS Configuration: Windows Authentication disabled and Anonymous Authentication enabled

Currently the only supported OpenId Connect provider is Azure AD (Office 365).

Register Azure AD Application

You can use the following instructions to register a new Azure AD Application. The Application should be a Web Application and the Sign-On URL should be the link to FRENDS, for example https://demo.frendsapp.com

Configure Frends

For FRENDS to be able to use the AD Application, the following information is needed from the registered Application:

  • Application ID: e.g. 50549e93-99dd-4690-9948-3c8ec076ddfb
  • Tenant: e.g. companyname.onmicrosoft.com

FRENDS is configured to use the OpenId Connect provider by modifying the WebUI web.config file.

The key is called "OwinAuthenticationProvidersJson" and the value should be a JSON array of provider objects. Each provider object should have the following fields:

  • displayName: Shown as the name of the provider on the sign-in page
  • type: The type of authentication; "OpenIdConnectAuthentication" is currently the only supported type
  • clientId: The Application ID from Azure portal
  • defaultRole: The role assigned to new users who log in to the FRENDS application. The following roles are pre-created: Users (the default from 4.3), Editor, Viewer, Administrator
  • tenant: The Azure AD tenant name
  • instance: For Azure AD this is always "https://login.microsoftonline.com/{0}"
  • administrators: The users that will be given the Administrator role.

Example:

<add key="OwinAuthenticationProvidersJson" value='[{
  "displayName": "Provider login",
  "type": "OpenIdConnectAuthentication",
  "clientId": "50549e93-99dd-4690-9948-3c8ec076ddfb",
  "defaultRole":"Users",
  "tenant": "company.onmicrosoft.com",
  "instance":"https://login.microsoftonline.com/{0}",
  "administrators": ["test@example.com","example@example.com"]
}]' />
The users in the administrators list are given the Administrator role only when their user account is first created. So if a user existed before being added to the administrators list, they will not get the Administrator role.

Agent installation variations

The Agents and Agent groups can be configured in multiple different ways for different kinds of needs.

Multiple agents in an agent group with a shared SQL database

If the agent group has a connection string set, the agents will use the shared SQL database, and all triggers are available in High Availability mode.

This is the most common mode of installing multiple Agents in one Agent group and should be used in most cases.
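
The connection string itself is a standard SQL Server connection string. A minimal sketch, where the server and database names are hypothetical:

Server=sqlserver.example.com;Database=FrendsAgentGroup;Integrated Security=True

All Agents in the group would then point to this same shared database.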

Multiple agents without a shared SQL

If no connection string is set, all the agents will use a local database for storing configuration information. The first agent will be a so-called "Primary agent", which is able to execute all triggers. The other agents will be able to run HTTP, API and Service Bus triggers. In this mode, Schedule and File triggers will not run in High Availability mode.

This mode of installation is useful if you do not have a SQL server but still want to have redundancy and load balancing for HTTP processes.

Agent without bundled LocalDB

If you already have a SQL Server installed, you do not need to download the bundled version of the agent that includes LocalDB. With this lighter version you may install multiple agents on the same machine, which can be useful when you, for example, want to run test and development agents on the same machine.

Troubleshooting

Agent out of sync

Sometimes Agents can get out of sync, especially if there have been database glitches. This can show up as problems like the latest process versions not being deployed to the Agents or Environment variables not being found.

The Agents should recover from these situations eventually, but you can also manually force them to resynchronize. To do this, go to the Environments view, select the affected Agent group, and click the settings cog button at the top right of the page. There you will find the "Synchronize agents" action. Choosing it resends all the latest process versions, environment variables, API keys etc. to all the Agents in the group and takes them into use.

Roles

A role has a collection of rules that are used to restrict or allow users to access views, processes or environments.

role-usermanagement

There are multiple different types of Rules:

  • AllowAction - describes the activities that users in the role can perform
  • DenyAction - describes the activities that users in the role explicitly cannot perform
  • AllowTag - allows users in the role to see only the processes with the given tags
  • DenyTag - explicitly hides the processes with the given tags
  • AllowEnvironment - allows users in the role to see only the given Environments
  • DenyEnvironment - explicitly hides the given Environments from users in the role

There can be multiple roles, and each role can have multiple allow or deny rules. There is no hierarchy between the roles. If a user belongs to multiple roles that have different rules defined, the rules from each role are combined.

Limit access to Views and actions - Activity

The activity-based configuration uses a two-part scheme where individual activities are identified by controller and action names. A Controller essentially represents a menu item in the UI, and an action is a piece of functionality available for the user to perform. The following activities are available for configuration.

rules-usermanagement

The following wildcards are supported for activities:

  • *.* - match all activities
  • *.{action} - match all actions with given name in every controller
  • {controller}.* - match all actions for given controller

Activities are authorized in the following order of precedence:

  • Explicitly allowed activity (e.g. Process.Start)
  • Explicitly denied activity (e.g. Process.Deploy)
  • Wildcard allowed activity (e.g. Process.*)
  • Wildcard denied activity (e.g. *.Edit)
  • Full allow wildcards (*.*)
  • Full deny wildcards (*.*)

This means that if an activity has been explicitly allowed, the decision cannot be overridden by any lower-precedence rule. For example, if a role explicitly allows Process.Start, even a full deny wildcard (*.*) cannot revoke it.

When creating a new role, you should almost always add the "Common.View" rule, as it is required e.g. for seeing the navigation menu as well as other common views.

Example

operator-example

An operator that can view everything and edit process executions (Process Instances). The users in this role can acknowledge errors and start new process executions.

Default roles

  • Users - Legacy role from older FRENDS versions. Allows access to everything except user management.
  • Editor - Allows every Edit action.
  • Administrator - Allows every action.
  • Viewer - Allows every View action.

Limiting access to only specific Processes - Tag

You can limit the processes a role can see and access by using tags with the AllowTag and DenyTag rules. The rules work the same way as the view rules (allow and deny). The view rules still take precedence, though: if you cannot e.g. edit processes, you cannot edit them even if a tag would allow you to.

  • If no Tag rules are active for a user, the user can see all processes.
  • Wildcards are not supported.
  • An AllowTag rule limits users in the role to seeing and accessing only the processes with the defined tag.
  • A DenyTag rule lets users in the role access and view all processes except those that are denied.

You cannot use both Allow- and DenyTag rules at the same time, as they would conflict.

Limiting access to only specific Environments - Environment

You can limit the Environments users in a role can see and access using the AllowEnvironment or DenyEnvironment rules.

  • If no environment rules are active, the user can see all Environments.
  • Wildcards are not supported.
  • An AllowEnvironment rule limits users in the role to seeing and accessing only the defined Environments.
  • A DenyEnvironment rule lets users in the role see and access all Environments except those that are denied.

Example

test-env-example

The role allows users to do everything except administrative actions, and to access the Environments Default, Test and Staging.

NOTE: Everyone can always see the "Default" environment.
You cannot use both Allow- and DenyEnvironment rules at the same time, as they would conflict.

OAuth Settings

In order to use OAuth2 bearer token authentication for API triggers, you need to provide the details of the OAuth applications that are to be allowed to access APIs. You configure the OAuth application settings from the Settings view.

OAuth application settings

docs oauth settings new

For each OAuth application you need to give at least:

  • Name - the unique descriptor of the OAuth application
  • Issuer - the URL for the OAuth token issuer. This value should be exactly the same as is given in the token.
  • Audience - the intended audience in the issued token, usually the client or resource ID registered on the OAuth provider. This value also needs to be exactly the same as in the issued tokens.

You can also configure some additional settings for the apps:

  • Name claim type - the claim from the token that contains the name of the user, if given. This value will be used for logging purposes, to show who called the API
  • Role claim type - the claim from the token that contains the role name of the user, if available. If set, this value can be used in processes e.g. by calls to ClaimsPrincipal.IsInRole()
  • Scope claim type - the claim from the token that contains the scopes from the token.
  • Signing certificate thumbprints string - the thumbprints of the signing certificates, already deployed on the agent machines, to use for validating tokens. If left empty, the agent will try to fetch the signing certificates automatically from the issuer's OpenID .well-known/openid-configuration endpoint. Note that if the issuer is down for some reason, this automatic fetch may fail, and token validation with it, so you may want to handle the certificate deployment manually and give the thumbprints here.
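
To see how these settings relate to an actual token, below is the decoded payload of a hypothetical bearer token. This is only an illustrative sketch: the claim names (iss, aud, name, roles, scp) and all values are assumptions and depend entirely on your OAuth provider and application registration.

{
  "iss": "https://login.example.com/company-tenant/",
  "aud": "50549e93-99dd-4690-9948-3c8ec076ddfb",
  "name": "test@example.com",
  "roles": ["ApiUsers"],
  "scp": "api.read"
}

With this token, the configured Issuer would have to match the iss value character for character, and the Audience the aud value. If you additionally set the Name, Role and Scope claim types to name, roles and scp, the API call could be logged as coming from test@example.com, and a process could check ClaimsPrincipal.IsInRole("ApiUsers").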

4.4 Release notes

FRENDS 4.4 has many new features, mainly focused on making it easier to create and manage Processes implementing HTTP APIs, as well as users and their access.

API Management

You can now easily create and manage Processes that implement an operation from an OpenAPI 2.0 (Swagger) specification. If you have a ready-made OpenAPI specification, implementing it in FRENDS is as simple as importing the specification and then creating a new process for each operation. The Process designer has auto-complete support for the request parameters, and template responses based on the operation specification are also automatically generated.

The Processes use the same FRENDS version control scheme as other Processes, so you can easily continue developing Processes in Development while the current stable version has been deployed to Production. You can also deploy all Processes implementing a specific API version together right from the API Management page, so you can deploy the complete API implementation in one go when needed.

As a developer using the exposed APIs, you can easily see the available specifications and operations in the API discovery page. The page is hosted in the public HTTP endpoint also hosting the actual API operations. It shows you the operation documentation and allows you to test the operations as well, provided you have the necessary API key or otherwise can authenticate to the agent.

For more details, please see the API Management section.

API Key Authentication

You can now create and manage API keys for authorizing access in HTTP and API triggers. You can do this right in the FRENDS UI, and the changes will be automatically propagated to the agents. Compared to the previously available authentication methods (basic or certificate authentication), which required custom deployment steps to create users or deploy certificates, API key management is much less work.

API keys are Environment-specific, so there is no danger of someone gaining access to your Production environments with just a developer key. Furthermore, you can also easily limit which paths the API key grants access to, making it possible to grant rights to just e.g. specific API operations.

User management UI and OpenID Connect Support

You can now easily manage users as well as their roles and access rules in the User management UI. There you can easily see which roles a user is part of and what they can view and access.

You can now set up FRENDS UI to use an existing user directory supporting OpenID Connect. For instance, if your Active Directory is federated to Azure AD or Office365, you can easily use your existing user accounts and passwords to access the FRENDS UI.

If you are updating from an existing installation with customized authorization rules, you will need to do the customizations manually after upgrade, as the syntax and rule storage format have changed a bit. Unfortunately, there is no migration for existing authorization rules to the new ones.

Updated Process Log Settings

You can now define Process log settings for all processes in an Environment. This allows you to e.g. shorten the default data retention period for all processes in Test. You can still override the default for individual Processes if needed.

The "Only Errors" log level has been tweaked to really only log errors, i.e. the results of any steps that have failed. This improves performance and reduces the amount of redundant data logged. This is especially important for API processes with low latency requirements; it is recommended to set the "Only errors" level as the default for any Production environments.

There is now also the option to log full Process parameters and return values. This is meant especially for API processes with high request rates: if you set the log level to "Only errors" for performance, you may want to log the incoming request and outgoing response in full, e.g. for auditing or error diagnosis purposes.

In order to use the new log level settings, you will need to create and deploy a new version of the Processes (or Subprocesses).
If you are using monitoring rules, please note that the logged values may also have changed a bit, e.g. for throw shapes. After upgrading, please make sure your rules still behave as expected.

Upgrade notes

  • Process triggers are now activated by default when deploying a Process to an Environment. This change was done especially to make deploying API processes easier. You can still choose to not activate the Process triggers during deployment; see Deploying a Process for more.

4.4.1 - 4th August 2017

This is a maintenance release, fixing the following issues:

  • In 4.4, Processes are set active by default when deploying them. As this may not be desired in some situations, now the deploy dialog has an option to choose not to activate the process triggers on deploy.
  • Request-reply messaging using the service bus trigger did not work correctly, because the trigger did not set the SessionId correctly on the reply messages. Now it does.
  • Some UI issues were fixed, especially some crashes in the new API management views when the environments had multiple agents in them.

4.4.2 - 29th August 2017

This release mainly fixes some performance and process deployment issues:

  • In 4.4, Processes are set active by default, but this may not always be desirable. Now you can choose whether to activate a Process when creating or importing it. Also copied Processes are not activated by default.
  • The environment variables page could take over 10 seconds to view if you had lots of environment variables. Now the view is paged, and the search has been improved so you can find variables by subkeys and values as well.
  • The periodic Process instance cleanup job could slow down if you had millions of instances in the database. Now the cleanup job works much better even with large data amounts.
  • Using expressions to set enum values in task parameters now works.

4.4.3 - 20th September 2017

This release fixes some performance and API process schema issues:

  • Process count updates could freeze the UI on the process list if there were hundreds of processes.
  • API processes did not correctly include referenced definitions in their operation schemas.
  • API Swagger editor did not validate schemas with referenced definitions correctly.

4.4.4 - 4th October 2017

This release fixes issues with the monitoring rules expressions and UI:

  • Monitoring rules always reported errors for rules with “less than” conditions if there was no data
  • The monitoring rule editor fields were redesigned a bit for clarity
  • You now can define the sender address for the monitoring rule alert emails
  • API trigger now always has the raw request body as string accessible via #trigger.data.httpBody, the same way as with HttpTrigger

4.5 Release Notes

FRENDS 4.5 has new features especially targeted at hybrid integration scenarios, where you have on-premise and cloud agents that need to communicate with each other. FRENDS 4.5 also allows you to expose APIs from separate API gateway agents that are easy to install and manage.

Easy communication between cloud and on-premise agents

Some integration situations require you to combine both on-premise and cloud environments: e.g. you need to expose an API from the cloud, but the API operation needs to update something in your on-premise systems. You wouldn't want to set up a VPN connection, specific authentication etc. from the cloud environment to the on-premise servers just for this update action. In FRENDS 4.5, you can easily implement a solution for this, because you can now execute subprocesses on other agents, e.g. call a subprocess running on an on-premise agent from a cloud one.

In order to manage processes running on different sets of agents, the agent environment model has been changed slightly: Agents now belong to Agent groups, and you can have more than one Agent group per Environment. This way you can have a logical "Production" Environment with separate Agent groups based on the actual deployment infrastructure, e.g. "prodDMZ", "prodBackend". You deploy and execute Processes on specific Agent groups, and Processes can then call Subprocesses from other Agent groups within the same Environment. This means e.g. that an API process running in "prodDMZ" can call a Subprocess running in "prodBackend". All communication happens via the secure FRENDS message bus, so there is no need to open ports or set up VPNs.

API Gateways

In FRENDS 4.5, Agents can also run as API gateways. In this reduced gateway mode, the Agents only expose the HTTP and API triggers, authenticate the requests and then pass the calls on to the internal agents actually executing the processes. A gateway Agent can also do simple load balancing, and will authorize and throttle requests the same way as your actual execution agents. This way your actual execution Agents will not be burdened with too many requests.

You can also mark your API processes as private, which means they will not be exposed from a gateway, but only from the internal executing Agent. This allows you to keep your internal APIs only available from within your network.

Usability fixes and tweaks

There are also many minor features making the product easier to use:

  • Process list rendering performance is now a lot better, allowing you to view hundreds of process rows at a time, if you so wish.
  • You can copy the actual JSON result structure returned from a task step, so you can more easily verify it or store it separately.
  • You can set specific dates in a schedule trigger to run a Process only on the given days.

Upgrade notes

  • During upgrade, the log database tables are migrated to include the AGENT_GROUP_ID column. If you have a lot of process instances in the log tables, this may take a while.

4.5.1 - 2nd November 2017

The first maintenance release fixes a compilation problem as well as many UI issues:

  • Compiling processes with Linq expressions failed with "You must add a reference to assembly 'System.Runtime'"
  • Copying while or foreach scopes to a process editor in another tab failed
  • Cannot change log settings for older version of process
  • Calls to Trace.WriteLine in custom tasks logged as errors to Event log

4.5.2 - 30th November 2017

The second maintenance release fixes performance problems as well as many UI issues:

  • Running hundreds of file triggers on an agent could exhaust SQL connection pool, also blocking other connections
  • Couldn’t call remote subprocesses with large parameters (>250KB)
  • Searches on process instance parameters or results would partly run in web server memory, causing the entire server to become unresponsive for large queries. Now the queries are done entirely in the backend database.
  • There was no confirmation for deleting a process

4.5.3 - 21st December 2017

This release changes the default setting which controls HTTP trigger availability on the API gateway. Previously, HTTP triggers were public by default, the same way as API triggers, i.e. available on the API gateway. However, in many cases you actually did not want to expose the existing HTTP triggers (e.g. for internal APIs) on the gateway, which may be running on the DMZ.

Therefore, the HTTP triggers are now private by default, and you need to change them explicitly to "Public" to expose them on the API gateway. All HTTP triggers will still be accessible on the non-gateway agents. API triggers will still be public and exposed on the API gateways by default, as before.

The release also fixes issues with intermediate results as well as some UI problems:

  • Intermediate results did not actually return to the caller if the next step was a synchronous call, such as a subprocess call.
  • Intermediate results could not return custom HTTP results.
  • API management list did not correctly show process links if the operation path case had changed.
  • You can now also filter the process list to show only those processes with executions or errors since the given number of days, as you could in version 4.4 and before.

4.5.4 - 30th January 2018

As a highly requested improvement, this release brings support for returning binary results from HTTP triggers. For more details, please see HTTP Response results above.

This release also has a lot of UI and performance fixes, such as:

  • Old process instance cleanup procedure timed out if there were millions of rows already in the database. With an improved index, the performance should be much better
  • Loading the process list was slow if you had hundreds of tags or tasks. The queries are now better optimized to return only the data needed for the list
  • Scheduler UI did not correctly update monthly type selections, leading to invalid schedules
  • Monitoring rule data series got generated with different offsets during daylight savings changes, causing false alerts. Now the rules will ignore any DST changes, and use only the base offset for a timezone

4.5.5 - 14th February 2018

This release fixes UI issues, and improves the initial load performance of the process list:

  • The navigation bar didn’t always show the available actions for the first requests after signing in. Now the cookies are set correctly.
  • Process list now fetches the execution counts only for the last 7 days by default, to reduce loaded data amounts and improve performance.
  • Service bus trigger now allows you to fetch messages in a batch. Turning batching on will greatly improve performance when processing lots of messages.
  • Return and throw shapes now allow you to set an expression as the return value.

4.5.6 - 9th March 2018

Custom task package users and developers should note that 4.5.6 now also supports System.ComponentModel.DataAnnotations attributes for specifying how the custom task parameter editor should look, e.g. having tab panels or optional input fields. Previously, you were supposed to use the attributes from the Frends.Task.Attributes package, but due to versioning issues, using two different versions of that package could cause problems during task import, as well as task executions not being logged correctly. Therefore, the Frends.Task.Attributes package has now been deprecated, and all new custom task versions should use the attributes from System.ComponentModel.DataAnnotations. Existing tasks will still work as before, but all new task versions should use the new attributes. Please see the documentation for more.

The FRENDS platform tasks (Frends.Web, Frends.Json, etc) as well as Frends.Cobalt have also been updated to use the new attributes. Please note that you should only update to the latest versions once you have updated to 4.5.6, otherwise the task parameter editor may not look as you’d expect.

The release also has other fixes, e.g.:

  • Deleting an API spec from an agent group not named the same as the environment now works
  • The API management view will now show a warning if an agent group has a deployed API process without a matching API spec deployed (ACC-6628)
  • Importing an API spec no longer requires access to http://json-schema.org
  • Enum parameter values are now migrated correctly to the new editor format from existing 4.2 processes
  • The file share paths for large message store etc. are now validated on service startup

4.5.7 - 12th April 2018

This release improves subprocess thread usage and execution performance, especially when there is a large spike in number of executions running at the same time. Please note that in order to take advantage of the subprocess call performance improvements, you need to recompile the processes that call subprocesses.

There are also many fixes to the UI and other issues like:

  • Trigger editors could be slow to open if you had a lot of environment variables. Now the trigger parameter inputs use the same, more efficient editors as task parameter inputs.
  • Processes that were automatically updated when upgrading a task package could not execute due to metadata not set correctly.
  • Using different versions of the Newtonsoft.Json package in task packages works again.

4.5.8 - 22nd May 2018

This release fixes two major issues:

  • Subprocess call parameters were not always released correctly, causing memory usage to keep growing. Please note that to apply this fix completely, you will need to recompile and deploy new versions of the processes calling subprocesses.
  • When running file watch triggers on two or more agents, the trigger could start a process twice for the same file, if the file was still being written to when the first agent noticed it. If the second agent then noticed it with a slightly different size, it could also start a process.

5.0 Release Notes

OAuth token support for API endpoints

FRENDS now supports OAuth bearer token authentication for API processes. This means that you can register the identity management (IDM) solution you are already using for managing and authorizing users, e.g. Azure AD, with FRENDS. After this, developers only need to register their applications with the IDM and get access tokens from there. These tokens can then be used in FRENDS API calls as bearer authentication headers, allowing easy access to them without managing the clients or keys in FRENDS.

You manage which APIs a token can be used to call with API access policies. The policies are applied per API and environment, and allow you to easily define rules such as allowing only users in specific roles in your IDM to call an API.

FRENDS Identity Server

In some situations, you may not have a ready IDM solution or the client management features of the IDM may be difficult to use. For these cases, FRENDS also comes with a separate FRENDS Identity Server component. It is essentially a separate web site that you can use for managing users, OAuth client registrations, API scopes etc. You can also integrate it with your existing OpenID provider to reuse the same users.

New environment variable editor

The environment variable editor has now been rewritten to be more performant and easier to use, especially if you have hundreds of variables.

docs environment variables

You can now see the values for a specific variable across all environments at a glance. The new editor now also shows if a variable is actually being used, and can show links to the processes in different environments using the variable.

Runs on .NET framework 4.7.1

FRENDS now targets the latest .NET framework version 4.7.1. This update means that FRENDS now supports many important features, like TLS 1.2 or .NET Standard 2.0 class libraries.

Support for .NET Standard 2.0 tasks

FRENDS now supports Tasks from .NET Standard 2.0 class libraries. This means you can use the latest .NET features, and the Task libraries will be reusable also with .NET Core and run on non-Windows hosts as well.

This change has no effect on processes: they will be compiled and executed on the full .NET Framework 4.7.1, and old Task libraries targeting .NET 4.5.2 can be used alongside Task libraries targeting .NET standard 2.0.

The change is mostly for future-proofing: FRENDS will also support .NET Core in the near future. Then you will be able to choose to run an Agent either on the full .NET framework or .NET Core. The .NET Core Agent could be used especially for more advanced deployment scenarios, e.g. deploying Agents in Docker containers.

However, the .NET Core Agent will only be able to use Task libraries targeting .NET Standard 2.0. Therefore Task libraries should start targeting .NET Standard 2.0, if possible. Then they can run on both the .NET Framework Agent as well as the .NET Core version.

Still, Task libraries targeting the full .NET Framework will be supported: they can run on the full .NET Framework Agent also in the future.

Other improvements

There are also many other improvements for making developing and managing FRENDS easier, for instance:

  • You can test individual tasks separately right in the process editor to see how they would work with given parameters
  • You can now download a smaller, pre-configured agent installer package without the bundled LocalDB for advanced deployment scenarios
  • You can configure a task (or any other shape) to skip all result and parameter logging to make sure no sensitive information will ever be logged
  • Added two validations for interacting with Subprocesses:
  • You cannot delete a Subprocess that is being used by another Process.
  • You cannot deploy a Process before all Subprocesses used by that Process are deployed to the target Environment.

Upgrade notes

  • As FRENDS now requires .NET Framework 4.7.1, you may need to install that separately before upgrade.
  • During the upgrade from 4.x, all environment variable data will be migrated to the new format used by the new editor. If you have a lot of environment variables, upgrading the data may take several minutes.

5.0.1 - 26th June 2018

Fixes for UI issues, especially a fix for the validation error when trying to save an API key.

5.0.2 - 22nd August 2018

UI fixes and improvements, such as word wrapping and search in input editors. Now you can also add an API trigger to an existing Process, and filter the dashboard widgets by Process tags.

5.0.3 - 29th August 2018

UI fixes; in particular, Subprocess creation works again. You can now also rebuild a Process right from the Process list, e.g. after a FRENDS update with a bugfix to the internal libraries, when you want to switch the Processes to use the latest versions of the libraries.

4.3 Release notes - 25th January 2017

  • New Process editor that is based on the bpmn-js BPMN rendering library. The new editor has much better performance and supports highly-requested things like zooming, moving many elements at once, or copy and paste.
  • New Process elements:
  • Subprocesses allow you to create small, reusable processes that can be used in other processes.
  • The expression shape allows you to execute short C# expressions as well as initialize and assign variables
  • While loop allows you to go through a list of unknown length or e.g. retry some steps.
  • Improved parameter editors, with XML, JSON and SQL highlighting

The new Process editor will be shown by default for Processes created with the old editor as well, migrating the Process model to match that of the new editor. However, the processes will not be automatically migrated: you will need to save the Processes in the new editor to start using the new format. You can still use the old editor as well; you can access it from the link at the top of the new editor.

In 4.3, the Process instance data table schema has been tweaked for better performance. When upgrading, these new Process instance tables will be recreated as empty, renaming the old tables. This means any instance history before the upgrade will not be shown in the UI. The instance history data is still available in the database, if needed.

4.3 Service Release 1 (4.3.393) - 23rd February 2017

The main improvements in this release are:

  • Process error handler: You can now set a subprocess as the error handler to the entire process, allowing you to easily e.g. set up common error reporting. Please see the documentation for details.
  • Import/export BPMN: You can import BPMN from an XML file to the process editor, allowing you to design the process first with a separate BPMN editing tool and then continue working on it in FRENDS. You can also export the process graph as BPMN or as an SVG image.
  • Improved internal process logging performance: The log messages are now processed in batches, which speeds up process execution and reduces load on the log database. Also the process and event log history delete performance should be improved.

There are also many bug fixes, e.g. fixing some process parameter editor crashes due to invalid parameters, and HTTP trigger allowing requests with invalid charset values.

NOTE: This service release also updated the NuGet libraries to newer ones that only support three-part version numbers (major.minor.patch). If you have been using custom task packages that are versioned by only changing the fourth part of the version number, the references may not get resolved correctly during the build process. This can cause process build failures, especially if task parameters have been changed between the versions. Essentially, if you have two versions, 1.0.0.1 and 1.0.0.2, of a task package imported, the build process will use the older one, even if you explicitly reference the newer task version. The workaround is to create a new version of the task package with a version that updates e.g. the third version number part, i.e. instead of 1.0.0.2, you use 1.0.1.

4.3 Service Release 2 (4.3.408) - 9th March 2017

This release fixes some issues with the new editor as well as logging:

  • In 4.3 SR 1, the logged results and parameters of process steps executing at the same time may get mixed up. This was due to an issue in batching the database insert commands, leading result and parameter data to sometimes be written to wrong rows in the database. The actual process execution is unaffected, but the execution graph could show wrong parameter and result values for task and loop executions.
  • Links for viewing possible process error handler executions are now shown correctly
  • Variable and result references as well as annotation connections are now correctly validated, also in inclusive branch condition expressions

4.3 Service Release 3 (4.3.422) - 22nd March 2017

Maintenance release, which mainly fixes usability issues like:

  • Schedule triggers sometimes not being validated correctly
  • Run once action not being shown for users with execution rights, and
  • Reference autocomplete adding an extra square bracket to an expression

4.3 Service Release 4 (4.3.432) - 4th April 2017

Minor maintenance release, fixing mainly user interface issues like:

  • Process list shows erroneous warnings for missing environment variables
  • Task import allows you to import a task package with four-part version number, potentially causing problems during compilation
  • Trigger status update fails if there are newly created processes

4.3 Service Release 5 (4.3.443) - 2nd May 2017

Minor bug fix release, mostly for user interface issues like:

  • Array object default values were not initialized on task update, causing Cobalt updates to fail
  • “Show subprocess” button is sometimes disabled for failed subprocesses that actually ran
  • Log service crash if cache warmup query takes too long

4.3 Service Release 6 (4.3.451) - 15th May 2017

Minor bug fix release, fixing issues like:

  • Basic authentication on HTTP triggers fail for concurrent users
  • Empty error messages if process migration to new editor version failed
  • Open process instance list polls backend on every new process execution, causing unnecessary load on the database

4.3 Service Release 7 (4.3.458) - 24th May 2017

This release mainly fixes an issue with subprocess log message processing that caused the processing to queue up and instances not to show up in the UI.

4.3 Service Release 8 (4.3.477) - 15th June 2017

This release fixes some rare but nasty issues:

  • The agent could crash when running a process with hundreds of variable references
  • Executing tasks calling async methods while capturing the context (i.e. not using ConfigureAwait(false)) could hang the process execution
  • Duplicate process versions could be created due to too aggressive caching
  • Deploying processes on a cloud installation could take a long time (minutes) if there already were hundreds of process version packages stored in the package repository

4.2 Release notes

  • You can create widgets to monitor successful Processes, failing Processes, errors, and Process executions
  • It is possible to promote results of a Task or an entire Process
  • These promoted values can be seen in the Process instance view and used to filter them
  • They can also be used in silence monitoring rules
  • Process instances moved from their own view to the Process list view as a sublist
  • Clicking the arrow in front of the Process name or anywhere on the background of the Process opens the Process instance list below the Process name
  • The user can choose what information is shown in the list
  • The instances can be filtered with dates and information in, for example, promoted values
  • Clicking on the Process name opens the Process editor
  • Silence Monitoring Rules
  • The rule will compare the count, distinct values, or minimum, maximum or sums of promoted values, and send an alert if the rule is not met
  • The UI indicates whether Agent Process configuration is out of date
  • The UI will inform the user if updating or activating a Process is not complete in the Agents in the environment
  • Hide passwords from showing in the UI by using 'secret' environment variable type
  • The user can write a description for Tasks in the Process editor
  • The user can check which Task packages have a newer version available and can choose what Tasks are updated
  • Tasks grouped according to NuGet package in Task view
  • Parallel foreach loops allowed
  • Triggers made more reliable and usable
  • Parameter change to or from array type fixed
  • Cobalt editor now saves parameter changes after updating the Tasks

Breaking changes

If you use 'secret' environment variables, the process must be compiled with 4.2 or later. For example, changing an existing password environment variable to 'secret' may cause runtime errors if the field is used by processes compiled in older versions.

4.2 Service Release 1 - 28th June 2016

  • Maintenance release, with fixes for:
  • Process listing performance: list will be shown even if counts take long to fetch
  • Unmanaged DLLs in task packages no longer cause problems with deploying processes

4.2 Service Release 2 - 14th September 2016

  • Maintenance release with fixes mostly to the memory usage of the agent and performance of the web UI:
  • Old, unused process versions are now periodically unloaded from agent memory, so agents running for months do not use up too much memory
  • You can further reduce agent’s memory usage by installing shared library DLLs to the global assembly cache of the machine
  • Process instance counts are now stored in the database to speed up the process list load time
  • Process instance list load times are also reduced by changing to a simpler pager that does not need to calculate the total number of instances

4.2 Service Release 3 - 5th October 2016

  • Performance fix release. The fixes include:
  • Process list is now paged. This greatly speeds up rendering of the list if there are > 100 processes.
  • Drastically reduced web server memory usage
  • Building and deploying new versions of processes is faster on on-premise installations with a lot of deployed process versions

Also as a small fix, you can again query for a specific process execution graph by the execution GUID, in order to e.g. generate links to the process in error emails. You get the execution GUID in a process via the #process.executionId reference, and the link is in the format:

https://<website>/ProcessInstance/Instance/<execution guid>

As a small breaking change, for performance reasons, audit logging of all actions to the database is now disabled by default. If you need it, you can turn it back on by setting the EnableAuditLogging option in web.config to "true".
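
Assuming the same appSettings key format as the other web.config examples in this documentation, the setting would look something like:

<add key="EnableAuditLogging" value="true" />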

4.2 Service Release 4 - 27th October 2016

  • Maintenance release with fixes for:
  • Showing executed decision branches correctly
  • Allowing import of tasks referencing NuGet packages that also have references to netstandard packages
  • Improved performance for instance count query

4.2 Service Release 5 - 29th November 2016

  • This version fixes a problem with automatic retries in the internal service message processing: in case of some transient errors, the message processing would not get retried, which could cause configuration or log messages not being processed at all. This could then lead to e.g. process versions not getting deployed correctly or processes seeming to never finish.
  • Other changes and fixes in the release include:
  • Array parameters of a task (e.g. Cobalt’s message processing steps) are no longer cleared when you update the task version
  • SQL query performance tweaks
  • The authorization.config file can now be used for defining authorization rules in on-premise installations for easier editing