Locked Content Approval Errors

Guide to setting up secure Content Approval Workflows – 1 of 2 – Johnny Mullaney

Guide to setting up secure Content Approval Workflows – 2 of 2 – Johnny Mullaney

Related to my previous series on setting up Secure Content Approval Workflows, I had a discussion with someone who had followed the steps but whose ERP Integration API was sporadically throwing the following error when updating content:

System.ComponentModel.DataAnnotations.ValidationException: Content is locked by ‘Epi Admin’ with lock identifier ‘contentapproval’

What Causes the Error?

This error is thrown when a version of content is in the middle of an approval workflow and the EPiServer Content Repository attempts to update it. The error is saying that EPiServer will not allow this content to be updated while it is going through an approval workflow.

This makes sense: if content is only partially approved, we don’t really want it being updated.

However, in the scenario the developer described to me, and which the client had agreed, this content should be updated regardless of the approval sequence.

Content Lock Evaluator

EPiServer’s IContentLockEvaluator determines whether content is locked for editing under given circumstances. The default implementation is the internal class below.

  internal class ContentApprovalLockEvaluator : IContentLockEvaluator
  {
    public static string Identifier = "contentapproval";

    private readonly IContentVersionRepository _contentVersionRepository;
    private readonly IApprovalRepository _approvalRepository;

    public ContentApprovalLockEvaluator(
      IContentVersionRepository contentVersionRepository,
      IApprovalRepository approvalRepository)
    {
      _contentVersionRepository = contentVersionRepository;
      _approvalRepository = approvalRepository;
    }

    public ContentLock IsLocked(ContentReference contentLink)
    {
      var contentVersion = _contentVersionRepository.Load(contentLink);
      if (contentVersion == null)
        return null;

      if (contentVersion.Status != VersionStatus.AwaitingApproval)
        return null;

      var result = _approvalRepository.GetAsync(contentLink).Result;
      if (result == null)
        return null;

      return result.Status != ApprovalStatus.InReview
        ? null
        : new ContentLock(contentLink, result.StartedBy, Identifier, result.Started);
    }
  }

Content in an approval workflow will return a lock status and block any updates.


The solution for this use case was to inject a custom implementation of the Content Lock Evaluator that allows content in an approval workflow to be updated. Our implementation simply returns null, meaning the content is never considered locked.

   public class CustomContentLockEvaluator : IContentLockEvaluator
   {
     public ContentLock IsLocked(ContentReference contentLink)
     {
       return null;
     }
   }

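For completeness, here is one way such a custom evaluator could be wired in. This is a sketch only: it assumes EPiServer’s attribute-based ServiceConfiguration registration, and exact namespaces, plus whether this registration takes precedence over the default evaluator, should be verified against your EPiServer version.

```csharp
using EPiServer.Core;
using EPiServer.ServiceLocation;

// Sketch: register the custom evaluator with EPiServer's IoC container so it
// is resolved instead of the default ContentApprovalLockEvaluator.
// Namespaces may vary by EPiServer version.
[ServiceConfiguration(typeof(IContentLockEvaluator))]
public class CustomContentLockEvaluator : IContentLockEvaluator
{
    public ContentLock IsLocked(ContentReference contentLink)
    {
        // Never report a lock - content in an approval workflow stays editable
        return null;
    }
}
```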

Proceed with caution. This was the solution in a very specific integration where the risks of removing the content lock logic were considered and understood by all.

Depending on your situation, consider maintaining as much of the logic in the default EPiServer implementation as possible.

Guide to setting up secure Content Approval Workflows – 2 of 2

This post follows on from my previous post on setting up secure content approval workflows, where we looked at the necessary code updates. In this post we will complete the configuration of our content approval workflow to meet the requirements detailed in the original post.

User Groups

First set up the necessary groups in the Optimizely CMS Administrative interface. These groups will later be used to add users to the Content Approval Workflow.

Create the following groups to match those mapped to our virtual roles:

  • ContentReviewers
  • ContentPublishers

Then create three further groups which we will use to configure our language-specific content approval workflows:

  • EnglishReviewers
  • FrenchReviewers
  • SwedishReviewers

Catalog Access Rights

Next assign catalog access rights appropriately to our user groups.

Navigate to the Catalog in Commerce and click the “Manage” button beside “Visible to Everyone”.

Grant reviewers access to change catalog content only.


All Content Reviewers will be added to the ContentReviewers group. This group gives them the appropriate catalog permissions.

We will then add each Content Reviewer to the appropriate language reviewer group. The language reviewer group will be used to configure the language-specific Approval Sequence workflow. So, for example, the French reviewer will be added to both ContentReviewers and FrenchReviewers.

Content Approval Workflow Configuration

Finally we can use the groups created to configure the language specific content approval workflow required. The example below has a Content Reviewer group assigned to each language.

Members of these groups will be notified when a version of the content is assigned to them for review. They will then be able to either:

  1. Approve
  2. Decline -> Edit previous version -> Approve

Importantly, a content reviewer will not have access to publish content or to override an approval sequence. Any content that is not directly assigned to them via an approval workflow sequence will be read-only.

On approval, members of the Content Publishers group will be notified. They can then do a final review across all languages before marking the content as Ready for Publish.

Guide to setting up secure Content Approval Workflows – 1 of 2

Optimizely Content Approvals are a mature and highly configurable feature. However, every project is different; in designing an optimal workflow for our customers, it is important to plan accordingly to ensure a clean user experience while adhering to security principles when dealing with access rights.

The principle of least privilege (PoLP) is an information security concept in which a user is given the minimum levels of access – or permissions – needed to perform their job functions.

This is a key principle that we will take forward in designing our workflow.

Planning Content Approval Workflows

The key to planning an Approval workflow is defining the types of user roles who will be involved in a sequence.

For each user role you define, consider the principle of least privilege when granting permissions to your Optimizely system. We only want to give each role the access that is absolutely necessary for the functioning of the approval workflow.

Consider the following for each user role you are planning.

  • Should members of this user role have access to CMS or Commerce content or both?
  • Will the user role be responsible for approving or publishing content or both?
  • Can users in a role override the Approval sequence to publish content that has not gone through its full workflow?

Working Example

In the rest of this series we’ll work through setting up an optimal workflow to meet a requirement.

The Requirement

  • The Approval Workflow is to manage Commerce Content only
  • Products are added programmatically through an API integration and should enter the approval sequence automatically
  • Content is to be approved only by designated language-specific approvers (English, Spanish, French); Spanish approvers can only review Spanish content
  • The approvers have the ability to edit content during the review process
  • Content in all languages is published by a user with publishing permissions

Our User Roles

Given this requirement we can define two distinct roles.

Content Reviewers

  • Edits and approves content assigned through a workflow
  • Cannot publish content

Content Publishers

  • Publishes content in any language once it is assigned in the workflow after approval by a Content Reviewer
  • Does not approve content
  • Can, however, override an approval sequence for a product to force publishing even if the content has not yet been approved by a Content Reviewer

Code Base Updates

Virtual Roles

If you’re not familiar – this page will explain Optimizely virtual roles: Virtual roles | Optimizely Developer Community (episerver.com)

The “CatalogManagers” virtual role grants access for the Catalog system in Commerce only.

We will define two roles for our system which both map to the CatalogManagers virtual role:

ContentReviewers – can review content that has been assigned to them

ContentPublishers – full permission to publish content, with the ability to override approval sequences and force publish if required

In the web.config, map these roles to “CatalogManagers” as follows:

      <add name="CatalogManagers" type="EPiServer.Security.MappedRole, EPiServer.Framework" roles="ContentReviewers, ContentPublishers" mode="Any" />


Avoid adding these roles to the “CommerceAdmins” virtual role. That should be kept for WebAdmins and Administrators only.

Content Repository Save Actions

The wrong Content Repository save action can cause the approval sequence to be bypassed.

Review your code base to make sure that content created programmatically that should go through an approval sequence uses the “Request Approval” save action:

  _contentRepository.Save(writableContent, SaveAction.RequestApproval, AccessLevel.NoAccess);
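For context, the programmatic creation flow might look like the sketch below. The `ProductImporter` class, `ProductContent` type and parent link are illustrative, not from the original post.

```csharp
using EPiServer;
using EPiServer.Commerce.Catalog.ContentTypes;
using EPiServer.Core;
using EPiServer.DataAccess;
using EPiServer.Security;

// Sketch: create catalog content so it enters the approval sequence
// instead of being published directly. Names here are illustrative.
public class ProductImporter
{
    private readonly IContentRepository _contentRepository;

    public ProductImporter(IContentRepository contentRepository)
    {
        _contentRepository = contentRepository;
    }

    public ContentReference Import(ContentReference parentNodeLink, string name)
    {
        // Create a writable instance under the parent catalog node
        var product = _contentRepository.GetDefault<ProductContent>(parentNodeLink);
        product.Name = name;

        // RequestApproval starts the approval workflow rather than publishing;
        // AccessLevel.NoAccess skips the access-rights check for the integration user
        return _contentRepository.Save(product, SaveAction.RequestApproval, AccessLevel.NoAccess);
    }
}
```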

Next Post

In the next post we will configure Optimizely Access Rights, User Groups, Roles and finally the Approval Sequence to meet our requirement while adhering to the principles outlined at the beginning of this post.

Is Click Tracking working in your EPiServer Find implementation?

What is Click Tracking in EPiServer Find?

When a customer performs a search and subsequently clicks a search result, your website should be informing Find of this event.

Why is Click Tracking important?

There are two fundamental reasons:

  • Quality of search results returned from Find
  • Using statistics for continuous search optimisation

Find Relevancy Scores

The EPiServer Find algorithm will assign a relevancy score to each search result it deems to match the words in the search query. Search results are then by default ordered by the relevancy score.

Some of the factors taken into account in the EPi Find algorithm are as follows:

  • Boost Weighting: the weighting you assign to fields in your search query. For example, your query may stipulate that matches in the “Title” field should be twice as relevant as the same matches in the “Description” field
  • Term Frequency: the number of occurrences of the search term words within a result
  • Inverse Document Frequency: a measure of how frequently words in a search term occur across the entire set of results. Words in the search query which occur in many potential results have a lower impact on the relevancy score than words that are rare across the result set
  • Number of keyword matches: search results which match all of the keywords in a search term will rank higher than those that match a subset

However, a major part of how the EPi Find algorithm assigns relevancy is the intelligence the platform gathers on click-throughs. Search results that customers frequently click for a given search term will be assigned higher relevancy scores.

Find needs to know what results people are clicking.


Your marketing team should be using the Find Search Statistics interface to continually optimise results using Best Bets, Related Queries, Synonyms and Autocomplete.

The cornerstone of this process is solid reporting.

If Click Tracking is not working, all search results will have a click-through rate of 0%, denying your marketing team very valuable information.

Does Click Tracking not work out of the box?

If you’re using EPiServer Find Unified Search, it does – just enable tracking on the search query. As long as you are injecting the EPi client resources in your root template (which you probably are!), the injected JS handles all the rest.

If you’re not using Unified Search, it doesn’t – read on…
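As a sketch (assuming the standard Find Unified Search API and the Statistics extensions), enabling tracking is a single call in the query chain:

```csharp
using EPiServer.Find;
using EPiServer.Find.Framework;
using EPiServer.Find.Statistics;
using EPiServer.Find.UnifiedSearch;

public class SiteSearchService
{
    // Sketch: Unified Search with tracking enabled. The .Track() call records
    // the query in Find's statistics; the injected client-side resources then
    // report result clicks back to Find.
    public UnifiedSearchResults SearchWithTracking(string query)
    {
        return SearchClient.Instance
            .UnifiedSearchFor(query)
            .Track()
            .GetResult();
    }
}
```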

How do I implement custom click tracking?

This excellent post from Henrik Fransas explains how to add custom click tracking:


The only thing I will add is that sometimes we need to use the Find GetResult method to query the index. GetResult supports returning extended data on the content type, which we can set up in an initialisation module as follows, with the MyCustomExtendedProperty() method being a simple static extension of the BaseProduct class.

public class SearchIndexInitialization : IInitializableModule
{
  public void Initialize(InitializationEngine context)
  {
    // Register the extension method as an indexed field for BaseProduct
    // (the receiver was elided in the original; ForInstancesOf is the
    // standard Find convention for this)
    SearchClient.Instance.Conventions
      .ForInstancesOf<BaseProduct>()
      .IncludeField(x => x.MyCustomExtendedProperty());
  }

  public void Uninitialize(InitializationEngine context)
  {
    // Add uninitialization logic
  }
}

When using GetResult, the following method will add the HitId and HitType properties to your search results view model. I hope it comes in useful for someone!

public List<T> AddStatisticsAndRelevancyToSearchResults<T>(SearchResults<T> searchResult) where T : ISearchResultViewModel
{
  var searchResultsWithRelevancyScore = searchResult.Hits.Select(x =>
  {
    try
    {
      var result = _contentLoader.Get<IContent>(x.Document.ContentLink);
      x.Document.HitId = SearchClient.Instance.Conventions.IdConvention.GetId(result);
      x.Document.HitType = SearchClient.Instance.Conventions.TypeNameConvention.GetTypeName(result.GetType());
    }
    catch (Exception e)
    {
      _logger.Error("Error thrown retrieving hit count from Find", e);
    }

    x.Document.RelevancyScore = x.Score;
    return x.Document;
  }).ToList();

  return searchResultsWithRelevancyScore;
}

Introduction to extending EPiServer Visitor Intelligence for Omnichannel Marketing

With EPiServer Visitor Intelligence you can create micro-targeted omnichannel segments of customers based on their interactions with your brand in store and online. EPiServer’s product suite then gives you the tools to target these segments with super-personalised web experiences (CMS), recommendations (EPi Recommendations) and automated marketing campaigns (Campaign).

Collecting your Big Data

The EPiServer Profile Store is the big data repository we should be using to collect anything that may be useful for tracking our customers’ interactions with our brand. Behind the scenes it is an Azure Data Explorer cluster hosted in Azure. This detail will become important later in this series of blog posts when we write queries against the data we collect from our channels.

Early in your implementation you should plan for the data you need to send to the profile store as customers interact with your site. It’s best to send anything you think you could potentially use, even at a future date, to get insights on the interaction of customers with your brand.

The two core concepts to understand when implementing tracking are:

  • Customer Profiles
  • Tracking Events

Customer Profiles

A customer profile is simply data we collect about a specific customer.

Events that you track about a customer are later consolidated back into their profile behind the scenes.

Out of the box the profile data is very basic, with just the customer’s email address. However, you can easily extend this by sending extra profile data via the Profile Store API.

You should consider integrating the following pages with the profile API: registration, profile pages, address management, etc.

Tracking Events

A tracking event can be any data which describes a specific action a customer takes when interacting with your brand. It can be anything from a page view, to creating an order or using a loyalty card in store. Any data you gather on any system which relates to a customer interacting with your brand can be tracked as an event in the profile store.

EPiServer Tracking API

EPiServer provides us with the Tracking API to enable us to very efficiently integrate Profile Store tracking events with our EPi websites. It is a NuGet package you install from the EPiServer feed.

The Tracking API enables us to quickly build payloads with the Tracking Data Factory and then send to the profile store.

Tracking Data Factory

The TrackingDataFactory is a class you can use to easily build event-based tracking payloads as customers interact with your website, in scenarios such as:

  • Homepage View
  • Product Page View
  • Category Page View
  • Add to Cart or Wishlist
  • Search

Adding this tracking is very easy but sometimes we will want to customise the tracking payloads with our extended data. There are a couple of options for this using the Tracking API.

  1. Extending the payload per event
    You can extend the payload using the tracking data “setCustomAttribute” extension. This is well explained under the “Customizing Tracking Data” section of the Tracking and Recommendations EPi World developer guide.

    On an E-Commerce implementation, you will find yourself extending tracking especially for products. For example, if you are selling books, how useful would it be to extend the tracking data to include the author? The website could then serve recommendations to customers based on the authors they are viewing, or segment customers we perceive to be interested in certain authors so we can notify them of the next big launch.

  2. Extending all payloads with specific data points
    When there are properties we want to track for all events, use the Tracking Data Interceptor. The Tracking Data Interceptor guide on EPi World explains it with a nice example of adding the current market and currency to tracking events.
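As a sketch of option 1, extending a single payload might look like this. The `BookProductTracker` class, `BookProduct` type and its `Author` property are illustrative, and the exact factory and service signatures should be checked against your Tracking API version; `SetCustomAttribute` is the extension referred to above.

```csharp
using System.Threading.Tasks;
using System.Web;
using EPiServer.Tracking.Commerce.Data;
using EPiServer.Tracking.Core;

// Sketch: extend a product-view tracking payload with a custom attribute.
// BookProduct/Author and the surrounding class are illustrative names.
public class BookProductTracker
{
    private readonly TrackingDataFactory _trackingDataFactory;
    private readonly ITrackingService _trackingService;

    public BookProductTracker(TrackingDataFactory trackingDataFactory, ITrackingService trackingService)
    {
        _trackingDataFactory = trackingDataFactory;
        _trackingService = trackingService;
    }

    public async Task TrackBookView(BookProduct bookProduct, HttpContextBase httpContext)
    {
        var trackingData = _trackingDataFactory.CreateProductTrackingData(bookProduct.Code, httpContext);

        // Adds "author" to the payload sent to the Profile Store
        trackingData.SetCustomAttribute("author", bookProduct.Author);

        await _trackingService.Track(trackingData, httpContext);
    }
}
```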

Custom Events

If the Tracking Data Factory doesn’t support the event you want to track, you can build your own object and use the Tracking API library to send the event. There is a good example on EPi World of using the Tracking API to send a form submission to the Profile Store.

Next Post

Now that we’ve covered the first step of implementing tracking on your EPiServer site, in the next post we’ll look at techniques to extend tracking to other systems to enable omnichannel tracking with the Profile Store.

This will set the platform for us to finally use EPiServer Visitor Intelligence to create and target segments of customers who are interacting with your brand both in store and online.

Building Mobile Apps at speed with EPiServer Commerce & Flagship

In this post I’ll give an overview of the key technologies and building blocks involved in delivering a full featured E-Commerce mobile app across Android and iOS that is fully integrated with your existing EPiServer Commerce website.

Native Apps

Research has shown that conversion rates are 3 times higher for customers on Native mobile apps compared to responsive web sites. Native apps can provide a speedier, cleaner user experience while taking advantage of mobile features such as push notifications to drive engagement.

The issue has been that native app development is often prohibitively expensive. Done in its purest form, it requires separate iOS and Android developers working in totally different technology stacks to deliver apps that are fundamentally the same. Maintenance costs are then effectively doubled as you roll features out across both platforms.

React Native

React Native is one of a number of cross-platform mobile development frameworks and frankly, without getting into comparisons with other options, it’s the one I have been by far the most impressed with.

Most EPiServer or web developers will know of ReactJS as a framework for building applications using JavaScript. React Native, however, is an entire platform that enables us to build native, cross-platform mobile apps in which we use React for constructing the UI layers. ReactJS syntax and patterns are at the heart of React Native, so there is a minimal learning curve for those already familiar with the framework.

While React uses the virtual DOM to render code in the browser, React Native uses native APIs to render components on mobile. The Native Components and Native Modules which come with React Native are optimised for performance.

For these components to work, React Native has to plug different technologies together – JavaScript and the Native ecosystem it is targeting. This communication happens using a “bridge” which is central to the React Native architecture. This provides the mechanism for two way asynchronous communication between the JavaScript that we write and the Native platform through JSON serialised messages. At a very basic level React Native could be described as a JavaScript to Xcode or Java translator.


React Native and EPiServer Commerce with Flagship

Flagship is an open source accelerator kit developed by Branding Brand for mobile applications built on React Native. The open source code lives in Branding Brand’s GitHub repository at: https://github.com/brandingbrand/flagship

This code base provides a white-label pre-built starter site and a set of reusable commerce components which you can use to kick-start your React Native application development.

Flagship’s JavaScript is implemented in TypeScript, which introduces standards to JavaScript that EPiServer developers will find familiar once they get their head around the syntax, including strongly typed variables and object-oriented design principles.

Being a React Native application, styling is handled within the components via close to standard CSS style sheets.

Flagship already comes with connectors for commerce platforms such as Shopify and Demandware, with an EPiServer connector coming soon through integration with EPiServer’s Service API and Content Delivery API. However, there is already the opportunity to integrate with standard Flagship components by normalising your existing responses to the Flagship JSON standard.

Dynamic Category Page Search Faceting with Epi Find

The goal is to create the most dynamic E-Commerce Category pages possible with Epi Commerce & Find.

First, let’s clarify the terminology. By category page I am referring to the EPi Commerce “NodeContent” type used to structure the product catalog. Customers will be able to navigate the site using this category structure, seeing the most relevant products at all points.

My project type is a Dynamics AX ERP to EPiServer Commerce integration using Avensia Storefront to manage the catalog synchronisation. More information on this type of integration is available here: Delivering Unified Commerce Solutions With Episerver Avensia Storefront And Dynamics For Finance & Operations

Using Avensia Storefront we can replicate the AX product catalog in the EPiServer Commerce catalog. With this approach, if our client publishes a new category and/or products in AX, within minutes that setup will be replicated in our EPiServer catalog.

The category pages of an E-Commerce site are absolutely critical to get right, as an optimal setup will provide the customer with an easy method of navigating the site while providing the functionality and information they need to find the product(s) they want. From an SEO perspective it’s also extremely important that crawlers can easily traverse the hierarchy and gain context.

Because categories are added dynamically, we have one category page implementation, but the search faceting and product view should dynamically optimise based on the descendant product types of the current category and their attributes.

Take for example this simplistic catalog structure:


All Product Types inherit from a “BaseProduct” which contains common properties across the catalog.

Our Category Page inherits from Base Node

My requirement is:

  • Category A – advanced facet filtering options for Product Type 1
  • Category B – advanced facet filtering options for Product Type 2
  • Category C – facets common to Base Product

Technical Set Up

The Category Controller inherits from ContentController<NodeContent>, meaning that it is hit on all page loads of node content. We are going to use a search service which is injected into the Category Controller and manages the communication with Find. The following sequence diagram will give you a feel for the flow.

Category - High Level View

Dynamically getting Child Content Types

My Search Service interface defines the following methods:

public interface ISearchService
{
  // Note: the facet list's type parameter was elided in the original;
  // FacetGroupOption stands in for your facet model type
  SearchResultsModel SearchFromCategory(NodeContent currentContent, string query, string sort = "", int page = 1, int? pageSize = null, List<FacetGroupOption> facetGroups = null);

  SearchResultsModel Search<T>(NodeContent currentContent, string query, string sort, int page = 1, int? pageSize = null, List<FacetGroupOption> facetGroups = null, bool trackSearch = false) where T : BaseProduct;
}

In the SearchFromCategory method we need to find the descendant product types.

This query took a bit of figuring out and I have to thank the excellent community on EPi World for pointing me in the right direction:


The solution I arrived at is the following method, which uses Find to return all product types that are descendants of the current node. To accomplish this, “_Type” is included in the request, as that indexed property contains the entire type string.

public List<Type> GetProductTypesForCurrentCategory(NodeContent currentContent)
{
  // use Find to execute a search with the content type set as a facet;
  // the filter expression was elided in the original - restricting to
  // descendants of the current category is the intent
  var searchQuery = _findClient.Search<BaseProduct>()
    .Filter(x => x.Ancestors().Match(currentContent.ContentLink.ToReferenceWithoutVersion().ToString()))
    .Take(0)
    .TermsFacetFor(x => x.CommonType(), request => request.Field = "_Type")
    .GetResult();

  // extract the content type names from the facet result
  var terms = searchQuery.TermsFacetFor(x => x.CommonType()).Terms;
  var productTypes = new List<Type>();

  // add the returned types to the list
  foreach (var typeNamespace in terms)
  {
    var type = Type.GetType(typeNamespace.Term);
    if (type != null)
    {
      productTypes.Add(type);
    }
  }

  return productTypes;
}

SearchFromCategory can then be implemented so that it customises the search to a specific product type where possible, otherwise falling back to the BaseProduct type:

// Called from the category page
public SearchResultsModel SearchFromCategory(NodeContent currentContent, string query, string sort = "", int page = 1, int? pageSize = null, List<FacetGroupOption> facetGroups = null)
{
  var productTypes = GetProductTypesForCurrentCategory(currentContent);

  if (productTypes?.Count == 1)
  {
    // get the single descendant product type
    var productType = productTypes.First();

    // find the generic Search method and invoke it with that type
    var searchMethod = typeof(SearchService).GetMethod("Search", new[] { typeof(NodeContent), typeof(string), typeof(string), typeof(int), typeof(int?), typeof(List<FacetGroupOption>), typeof(bool) });

    if (searchMethod != null)
    {
      var genericSearch = searchMethod.MakeGenericMethod(productType);
      return genericSearch.Invoke(this, new object[] { currentContent, query, sort, page, pageSize, facetGroups, false }) as SearchResultsModel;
    }
  }

  // search on the base type
  return Search<BaseProduct>(currentContent, query, sort, page, pageSize, facetGroups);
}

The generic search service in this solution is then built to customise facets and search results based on the product type. That’s a large topic, so I’m happy to do another blog post on it!

Also note that this solution uses reflection, which might not always be the optimal implementation from a performance perspective. You could just as easily implement a factory method with a switch statement to map types to Search<T>() method invocations.
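To illustrate the reflection mechanics in isolation, here is a self-contained sketch with stand-in types (not the actual search service):

```csharp
using System;

// Stand-in types showing how MakeGenericMethod dispatches to a generic
// method when the type argument is only known at runtime.
public class BaseProduct { }
public class ProductType1 : BaseProduct { }

public class Dispatcher
{
    // The generic method we want to invoke with a runtime-chosen type
    public string Search<T>() where T : BaseProduct
    {
        return "Searched for " + typeof(T).Name;
    }

    // Closes Search<T> over the runtime type and invokes it
    public string SearchFor(Type productType)
    {
        var method = typeof(Dispatcher).GetMethod(nameof(Search));
        var generic = method.MakeGenericMethod(productType);
        return (string)generic.Invoke(this, null);
    }
}
```

Calling `new Dispatcher().SearchFor(typeof(ProductType1))` returns "Searched for ProductType1", exactly as if `Search<ProductType1>()` had been written at compile time.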

Getting the Content Reference from EPiServer Content Approval Events

Although this post is quite trivial, EPiServer Content Approvals are a large step forward from the old functionality, and as the feature is relatively recent there are not a lot of posts out there on customising the engine. Most importantly, I would have found this post useful had I found it last week!

My requirement is simple: customers can upload images via an interface on the website’s product details page. On successful upload, these images are saved as media assets in a Draft state. These assets contain properties associating them with both the product and the customer who uploaded them.

The image will go through a content approval workflow before eventually being either published on the website or rejected, in which case we need to send an email to the customer to notify them of the rejection. Comments are mandatory when declining an image in the Content Approval workflow, so this comment will be included in the email.

There are Approval Engine events that we can easily hook into, and they are well documented on EPiServer World:


Our image content type has properties linking it to the customer and product, so we need to extract the media content reference from the event to do just about whatever we want. Getting that content reference should be the trivial bit… and it is… once you know what to do!

As detailed in the link above, each Approval Engine event provides us with an ApprovalEventArgs object providing us with the context.

Within this object we have the ApprovalId and ApprovalReference properties which we can use to query the approval repository to get us more details:

private void OnRejected(ApprovalEventArgs e)
{
  var approvalRepo = ServiceLocator.Current.GetInstance<IApprovalRepository>();

  IEnumerable<int> approvalIds = new List<int> { e.ApprovalID };

  var approvalItems = approvalRepo.GetItemsAsync(approvalIds);
}

The next bit is the bit that got me initially…


The approval items result contains an “Approval” object. However, this Approval object does not expose the content reference of the item which the event was triggered for. To retrieve the ContentReference you need to cast the Approval to a “ContentApproval”, which derives from Approval.

var contentApproval = approvalItems.Result.FirstOrDefault() as ContentApproval;

We can now use our contentApproval object to do just about whatever we want. See the code sample below:

using System.Collections.Generic;
using System.Linq;
using EPiServer;
using EPiServer.Approvals;
using EPiServer.Approvals.ContentApprovals;
using EPiServer.Core;
using EPiServer.Framework;
using EPiServer.Framework.Initialization;
using EPiServer.ServiceLocation;
using EPiServer.WebApplication.Core.ContentTypes.Media;

namespace EPiServer.WebApplication.Infrastructure
{
    [InitializableModule]
    [ModuleDependency(typeof(EPiServer.Web.InitializationModule))]
    public class ApprovalEngineEvents : IInitializableModule
    {
        private IApprovalEngineEvents _approvalEngineEvents;

        public void Initialize(InitializationEngine context)
        {
            _approvalEngineEvents = context.Locate.Advanced.GetInstance<IApprovalEngineEvents>();
            _approvalEngineEvents.Rejected += OnRejected;
        }

        private void OnRejected(ApprovalEventArgs e)
        {
            var contentLoader = ServiceLocator.Current.GetInstance<IContentLoader>();
            var approvalRepo = ServiceLocator.Current.GetInstance<IApprovalRepository>();

            IEnumerable<int> approvalIds = new List<int> { e.ApprovalID };
            var approvalItems = approvalRepo.GetItemsAsync(approvalIds);

            var contentApproval = approvalItems.Result.FirstOrDefault() as ContentApproval;
            if (contentApproval != null)
            {
                var rejectedContent = contentLoader.Get<IContent>(contentApproval.ContentLink);
                if (rejectedContent != null)
                {
                    string comment = e.Comment;
                    // get customer id from page data
                    // get product from page data
                    // send email to customer
                }
            }
        }

        public void Uninitialize(InitializationEngine context) => _approvalEngineEvents.Rejected -= OnRejected;
    }
}



Using Azure Queues in EPiServer

EPiServer’s DXC cloud-based Azure platform as a service provides the availability and scalability necessary to auto-scale enterprise-level, high-transaction applications. DXC has bundled within its setup an Azure Storage Account used for blob storage and a Service Bus used for managing events such as invalidation of cache between instances. Once your application is deployed to DXC, it just works and lets the partner concentrate on delivering a robust, quality build.

An Azure Storage Account comes with several types of storage, but do note that while these are technically within the DXC storage account, they are not yet exposed for integration purposes.

However, that shouldn’t mean we can’t use an Azure Storage Account to build our applications. Even if these services are not exposed within DXC, you can set up a Storage Account under another Azure subscription, as the cost even under a large volume of integrations is tiny: https://azure.microsoft.com/en-us/pricing/details/storage/queues/

Azure Queues

Azure Queue Storage stores messages in the cloud to be exchanged between components of systems. Typically a “producing” system creates a message which is then processed by a “consumer”.

Azure Queues work well in an EPiServer application to decouple the logic for integrating with third-party systems from the EPiServer code base. Once solution design is complete, we can treat the EPiServer application which pushes the message as a separate code base from the delivery application. This approach has many benefits:


If there are issues with an integration, developers no longer have to debug the web application to get to the bottom of it. They can simply check the messages entering the queue and verify the data, and the effort can then focus immediately on either the message-pushing code or the delivery application components.

Managing both applications in separate deployment pipelines also means we don’t have to test the web application code base when updating an integration, and vice versa.


Azure Queues are hugely scalable, coping with up to 20,000 messages per second! All our EPiServer application has to do is push the message.

Error Tracking

When the processing of a message fails, an Azure Queue trigger will by default retry 5 times. However, both the retry attempts and the time between retries can be configured. When a message fails past the set threshold it enters a “poison” queue, where it can be viewed and have additional error handling implemented. I’ll touch on this further down the article.


In addition to the reasons above, having separate applications in separate CI deployment pipelines gives us the foundation to write independent sets of robust unit tests. Done well, we will find mistakes in code long before they are deployed to a live environment.

Time To Market

After the initial solution design is complete, you can have separate, independent development teams working on the web and integration applications, increasing velocity and hopefully getting us live earlier! Hiring managers also love it, as new hires do not necessarily need EPiServer skills to make an impact on the project.

Let’s see some Code!

Putting a message on a queue

In this example we’ll put a very simple Contact on a queue, which will later be synced to an Email Marketing Platform.

First create your storage account on your subscription as documented by Microsoft: https://code.visualstudio.com/tutorials/static-website/create-storage

If you like, you can download Azure Storage Explorer to inspect your Storage account:


The first step in our EPiServer application is then to install the following NuGet package: https://www.nuget.org/packages/WindowsAzure.Storage/

On events such as a new user registering on your site, you can then simply push the relevant data to a queue, and from that point on the web application has no responsibility for the delivery of the message to the third-party platform. Pushing a message to a queue is very simple, as illustrated in this sample where I push a JSON-serialised contact model.
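A sketch of what that push might look like using the WindowsAzure.Storage package – the Contact model, queue name and connection string handling here are assumptions for illustration:

```csharp
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Queue;
using Newtonsoft.Json;

public class Contact
{
    public string Email { get; set; }
    public string FirstName { get; set; }
}

public class ContactQueueService
{
    // Queue names must be lowercase
    private const string QueueName = "emailmarketingplatformqueue";

    public void Push(Contact contact, string connectionString)
    {
        // Connect to the storage account and get a reference to the queue
        var storageAccount = CloudStorageAccount.Parse(connectionString);
        var queueClient = storageAccount.CreateCloudQueueClient();
        var queue = queueClient.GetQueueReference(QueueName);
        queue.CreateIfNotExists();

        // Serialise the contact and add it as a message
        var message = new CloudQueueMessage(JsonConvert.SerializeObject(contact));
        queue.AddMessage(message);
    }
}
```

From here the web application’s job is done; everything downstream is the consumer’s responsibility.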


Introducing Azure Functions

Now that we have a message saved in an Azure Queue, we need to process it and send it to our third-party application.

Azure Functions is an event-driven, serverless computing service that extends the existing Azure platform, allowing us to implement code triggered by events occurring in Azure. Functions can also be extended to third-party services and on-premises systems, but in this post we’re focused on Azure.

Using Visual Studio we can simply add a new Azure Functions project to our solution.


On the setup screen we can choose a Queue Trigger template, setting the “Connection string” and “Queue name” to the same values as in the previous step.


This will scaffold a very simple function:
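The scaffold looks broadly like this – a sketch, with the queue name and connection setting assumed from the earlier step:

```csharp
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Host;

public static class EmailMarketingQueueFunction
{
    // Runs every time a message lands on the queue
    [FunctionName("EmailMarketingQueueFunction")]
    public static void Run(
        [QueueTrigger("emailmarketingplatformqueue", Connection = "AzureWebJobsStorage")] string myQueueItem,
        TraceWriter log)
    {
        log.Info($"C# Queue trigger function processed: {myQueueItem}");
    }
}
```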


Every time a message is inserted into our “EmailMarketingPlatformQueue” queue, the function will be executed, and from here we can implement the integration with our third-party system.

Azure Functions Pro Tips

This library gives you the Dependency Injection capabilities you will need to take a test-driven approach to your development:


What if it fails? Logging and retries

By default, a queue trigger will be retried 5 times before the message is inserted into a “poison” queue.

You can configure both the number of retry attempts and the time between retries in the host.json file, which is added to your solution during the Visual Studio setup.
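For a v1 Functions project the relevant section of host.json is the queues block; `maxDequeueCount` controls the number of attempts and `visibilityTimeout` the delay between them (in Functions v2 the same settings live under `extensions.queues`). The values below are a sketch:

```json
{
  "queues": {
    "maxDequeueCount": 5,
    "visibilityTimeout": "00:00:30"
  }
}
```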


Alternatively you can set the retry policy programmatically as you push messages to a queue.

If you want to do extra processing when a message enters the poison queue, you can simply write a new Azure Function to be triggered on that event.
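Failed messages land on a queue named after the original with a “-poison” suffix, so the extra processing is just another queue-triggered function. A sketch, reusing the names assumed earlier:

```csharp
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Host;

public static class EmailMarketingPoisonFunction
{
    // Triggered when a message has exhausted its retries and has been
    // moved to the automatically created "-poison" queue
    [FunctionName("EmailMarketingPoisonFunction")]
    public static void Run(
        [QueueTrigger("emailmarketingplatformqueue-poison", Connection = "AzureWebJobsStorage")] string poisonItem,
        TraceWriter log)
    {
        log.Error($"Message could not be processed: {poisonItem}");
        // e.g. raise an alert or persist the message for manual review
    }
}
```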

Suggestions for DXC

I would love to see this approach supported out of the box in DXC, and these are the two things I would like to see happen:

  • Support for Deployment of Functions to DXC
  • Future versions of the EPiServer.Azure package to provide methods to push data to other Azure Storage services

I don’t know if this is on the roadmap for DXC, and if it isn’t, I trust there are good reasons behind that. However, in the meantime, running your own Storage Account can repay the small cost over time.

Building an EPiServer Campaign Connector for Transactional Emails

This post was originally written as a workaround for the EPiServer Campaign Connector’s limitation of being tied to one client. However, as @David kindly points out in his comment below, this has been rectified in version 5.0.0 of the Episerver Connect for Marketing Automation package.

However, the information below may still be helpful for non-EPiServer implementations of Campaign.


EPiServer released a Campaign Connector package (https://world.episerver.com/add-ons/connect-for-marketing-automation/connect-for-campaign/) which abstracts authentication away from Campaign’s APIs while providing an EPiServer Forms connector as well as support for a range of marketing and transactional email functionality.

Once installed, authentication is managed by simply inputting your Campaign client credentials in the Connect for Marketing Automation tool.


Campaign Transactional Connector

It was important to retain the marketing features of EPiServer’s Campaign Connector, so we’ll continue to use that connector for the marketing client and create a custom transactional email Campaign Connector used solely for sending transactional emails on the site.

Campaign has two APIs, a SOAP API and an HTTP API. The SOAP API is more powerful, containing all the marketing endpoints, but the HTTP API contains the endpoint we need for transactional emails.

The connector we will create manages authentication using the HTTP endpoints.

Setting up our Infrastructure

To make sure we were building our solution in a maintainable way, I wanted to define the constants we will need later in the post in a dedicated class.
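A sketch of what such a constants class might contain – the names, keys and endpoint shown here are assumptions for illustration, not Campaign’s documented values:

```csharp
public static class CampaignConstants
{
    // Assumed base address of Campaign's HTTP API
    public const string HttpApiBaseUrl = "https://api.campaign.episerver.net/http/";

    // Operation used for transactional sends
    public const string SendTransactionMailOperation = "sendtransactionmail";

    // Response text indicating the mail was accepted
    public const string EnqueuedResponse = "enqueued";

    // appSettings keys for the client configuration (assumed names)
    public const string ClientIdSettingKey = "Campaign.ClientId";
    public const string AuthorisationCodeSettingKey = "Campaign.AuthorisationCode";
    public const string MasterRecipientListIdSettingKey = "Campaign.MasterRecipientListId";
}
```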


As solid error handling is important to me, I created the following class to map the text received in the response to a description taken from the API documentation.
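A minimal sketch of that mapping – the entries below are illustrative placeholders, and the real response texts and descriptions should be taken from Campaign’s API documentation:

```csharp
using System.Collections.Generic;

public static class CampaignResponseDescriptions
{
    private static readonly IDictionary<string, string> Descriptions =
        new Dictionary<string, string>
        {
            // Placeholder entry – extend with the documented response texts
            { "enqueued", "The mail was accepted and queued for delivery." }
        };

    public static string Describe(string responseText)
    {
        string description;
        return Descriptions.TryGetValue(responseText, out description)
            ? description
            : "Unknown response: " + responseText;
    }
}
```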



The HTTP Gateway layer manages authentication and requests sent to the HTTP API endpoint. It is a re-usable class that can connect to other endpoints exposed by Campaign via HTTP.
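A minimal sketch of such a gateway, assuming a base-URL-plus-authorisation-code-plus-operation URL scheme – the exact format comes from Campaign’s HTTP API documentation:

```csharp
using System.Collections.Generic;
using System.Linq;
using System.Net;
using System.Net.Http;
using System.Threading.Tasks;

public class CampaignHttpGateway
{
    private static readonly HttpClient Client = new HttpClient();
    private readonly string _baseUrl;           // assumed Campaign HTTP API root
    private readonly string _authorisationCode; // kept server-side only

    public CampaignHttpGateway(string baseUrl, string authorisationCode)
    {
        _baseUrl = baseUrl.TrimEnd('/');
        _authorisationCode = authorisationCode;
    }

    // Calls an HTTP API operation, e.g. "sendtransactionmail", with query parameters
    public async Task<string> SendAsync(string operation, IDictionary<string, string> parameters)
    {
        var query = string.Join("&", parameters.Select(p =>
            $"{WebUtility.UrlEncode(p.Key)}={WebUtility.UrlEncode(p.Value)}"));

        // The authorisation code forms part of the URL, which is why this
        // request must only ever be made server-side
        var url = $"{_baseUrl}/{_authorisationCode}/{operation}?{query}";

        var response = await Client.GetAsync(url);
        return await response.Content.ReadAsStringAsync();
    }
}
```

Note that no exceptions are caught here; errors deliberately bubble up to the layer above.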


Notice that the authorisation code is included in the request URL. That’s why requests to the HTTP API must always be managed on the server side. This point is very important for obvious security reasons!

If the response text is not equal to “enqueued”, we throw our custom exception. Also note there is no catching of potential exceptions at this layer of the application, as I want errors to be handled further up the chain.

Transactional Email Sender

The transactional email sender class lives a layer up from the HTTP Gateway class. It will be responsible for consuming the HTTP Gateway and using it to perform transactional email sends.

The first step was to create a class that Campaign API responses can be mapped to, giving the next outer service layer an easy-to-interpret interface.
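A sketch of such a model (the property names are assumptions):

```csharp
public class TransactionalEmailResult
{
    public bool Success { get; set; }        // true when the response text was "enqueued"
    public string ResponseText { get; set; } // raw text returned by the HTTP API
    public string Description { get; set; }  // friendly description mapped from the response
}
```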


The implementation of the transactional email sender simply gathers the properties necessary to send transactional emails and uses the HTTP gateway to manage the send.

Exceptions are caught, logged and handled at this layer.
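A sketch of that sender layer – the gateway interface, operation name and parameter names below are assumptions for illustration:

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;

// Assumed abstraction over the HTTP Gateway described earlier in the post
public interface ICampaignHttpGateway
{
    Task<string> SendAsync(string operation, IDictionary<string, string> parameters);
}

public class TransactionalEmailSender
{
    private readonly ICampaignHttpGateway _gateway;

    public TransactionalEmailSender(ICampaignHttpGateway gateway)
    {
        _gateway = gateway;
    }

    public async Task<bool> SendAsync(string recipientEmail, long mailingId)
    {
        try
        {
            // Placeholder parameter names – take the real ones from the API docs
            var parameters = new Dictionary<string, string>
            {
                { "bmRecipientId", recipientEmail },
                { "bmMailingId", mailingId.ToString() }
            };

            var responseText = await _gateway.SendAsync("sendtransactionmail", parameters);

            // Campaign returns "enqueued" when the mail has been accepted
            return responseText.Trim().Equals("enqueued", StringComparison.OrdinalIgnoreCase);
        }
        catch (Exception ex)
        {
            // Exceptions from the gateway are caught, logged and handled here
            // (replace with your logging framework of choice)
            Console.Error.WriteLine(ex);
            return false;
        }
    }
}
```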



To configure the connector, you simply need to retrieve your Client ID, Authorisation Code and Master Recipient List ID from your Campaign client and populate your app settings.