Introduction to extending EPiServer Visitor Intelligence for Omnichannel Marketing

With EPiServer Visitor Intelligence you can create micro-targeted omnichannel segments of customers based on their interactions with your brand, both in store and online. EPiServer's product suite then gives you the tools to target these segments with personalised web experiences (CMS), recommendations (EPiServer Recommendations) and automated marketing campaigns (Campaign).

Collecting your Big Data

The EPiServer Profile Store is the big data repository we should use to collect anything that may be useful for tracking our customers' interactions with our brand. Behind the scenes it is an Azure Data Explorer cluster hosted in Azure. This detail will become important later in this series of blog posts, when we write queries against the data we collect from our channels.

Early in your implementation you should plan the data you need to send to the Profile Store as customers interact with your site. It's best to send anything you think you could potentially use, even at a future date, to gain insights into how customers interact with your brand.

The two core concepts to understand when implementing tracking are:

• Customer Profiles
• Tracking Events

Customer Profiles

A customer profile is simply data we collect about a specific customer.

Events that you track about a customer are later consolidated back into their profile behind the scenes.

Out of the box the profile data is very basic, holding little more than the customer's email address. However, you can easily extend this by sending extra profile data via the Profile Store API.

You should consider integrating the following pages with the Profile Store API: registration, profile pages, address management and so on. A rough sketch of such a call follows below.
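As an illustration, pushing extended profile data could look like the sketch below. The endpoint path, authorisation header scheme and payload shape are assumptions, not the definitive Profile Store contract; check the Profile Store API reference for your environment's base URL and subscription key format.

using System.Net.Http;
using System.Text;
using System.Threading.Tasks;
using Newtonsoft.Json;

// Minimal sketch only - endpoint, auth header and payload shape are assumptions.
public class ProfileStoreClient
{
    private static readonly HttpClient Client = new HttpClient();

    public async Task UpdateProfileAsync(string email, string firstName, string city)
    {
        // hypothetical base URL and subscription key - take both from configuration
        var request = new HttpRequestMessage(HttpMethod.Post,
            "https://yourtenant.profilestore.episerver.net/api/v1.0/profiles");
        request.Headers.TryAddWithoutValidation("Authorization", "epi-single your-subscription-key");

        var payload = new
        {
            Email = email,
            Info = new { FirstName = firstName, City = city } // extended profile data
        };

        request.Content = new StringContent(JsonConvert.SerializeObject(payload), Encoding.UTF8, "application/json");

        var response = await Client.SendAsync(request);
        response.EnsureSuccessStatusCode();
    }
}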

Tracking Events

A tracking event can be any data which describes a specific action a customer takes when interacting with your brand. It can be anything from a page view, to creating an order or using a loyalty card in store. Any data you gather on any system which relates to a customer interacting with your brand can be tracked as an event in the profile store.

EPiServer Tracking API

EPiServer provides the Tracking API to enable us to efficiently integrate Profile Store tracking events with our EPiServer websites. It is a NuGet package you install from the EPiServer feed.

The Tracking API enables us to quickly build payloads with the Tracking Data Factory and then send them to the Profile Store.

Tracking Data Factory

The TrackingDataFactory is a class you can use to easily build event-based tracking payloads as customers interact with your website, in scenarios such as the following (a quick sketch follows this list):

  • Homepage View
  • Product Page View
  • Category Page View
  • Add to Cart or Wishlist
  • Search
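As a rough illustration, tracking a product page view could look something like this. The factory and service names below follow the Commerce tracking packages, but treat the exact namespaces and signatures as assumptions to verify against the package version you install.

using System.Threading.Tasks;
using System.Web.Mvc;

// Sketch only - verify TrackingDataFactory/ITrackingService signatures against
// the version of the Tracking API packages you have installed.
public class ProductController : Controller
{
    private readonly TrackingDataFactory _trackingDataFactory;
    private readonly ITrackingService _trackingService;

    public ProductController(TrackingDataFactory trackingDataFactory, ITrackingService trackingService)
    {
        _trackingDataFactory = trackingDataFactory;
        _trackingService = trackingService;
    }

    public async Task<ActionResult> Index(string productCode)
    {
        // build the payload for a product page view and send it to the Profile Store
        var trackingData = _trackingDataFactory.CreateProductTrackingData(productCode, HttpContext);
        await _trackingService.Track(trackingData, HttpContext);

        return View();
    }
}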

Adding this tracking is very easy, but sometimes we will want to customise the tracking payloads with our extended data. There are a couple of options for this using the Tracking API.

  1. Extending the payload per event
    You can extend the payload using the tracking data "SetCustomAttribute" extension. This is well explained under the "Customizing Tracking Data" section of the Tracking and Recommendations developer guide on EPiServer World.

    On an e-commerce implementation you will find yourself extending tracking, especially for products. For example, if you are selling books, how useful would it be to extend the tracking data to include the author? The website could then serve recommendations to customers based on the authors they are viewing, or segment customers we perceive to be interested in certain authors so we can notify them of the next big launch.

  2. Extending all payloads with specific data points
    When there are properties we want to track for all events, use the Tracking Data Interceptor. The Tracking Data Interceptor guide on EPiServer World explains it with a nice example of adding the current market and currency to tracking events. A sketch of both options follows below.
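To make the two options concrete, here is a minimal sketch of each. SetCustomAttribute is the extension described in the EPiServer World guide; the interceptor members shown are assumptions based on that guide, so verify them against your package version.

// Option 1 (sketch): extend a single event's payload with a custom attribute.
// "book" is an illustrative model - the author value ends up in the payload.
var trackingData = _trackingDataFactory.CreateProductTrackingData(productCode, HttpContext);
trackingData.SetCustomAttribute("Author", book.Author);

// Option 2 (sketch): an interceptor that enriches every payload before it is sent.
public class MarketTrackingDataInterceptor : ITrackingDataInterceptor
{
    public int SortOrder => 100;

    public void Intercept<TPayload>(TrackingData<TPayload> trackingData)
    {
        trackingData.SetCustomAttribute("Market", "UK"); // illustrative value
    }
}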

Custom Events

If the Tracking Data Factory doesn't support the event you want to track, you can build your own payload object and use the Tracking API library to send the event. There is a good example on EPiServer World of using the Tracking API to send a form submission to the Profile Store.

Next Post

Now that we've covered the first step of implementing tracking on your EPiServer site, in the next post we'll look at techniques for extending tracking to other systems, enabling omnichannel tracking with the Profile Store.

This will set the platform for us to finally use EPiServer Visitor Intelligence to create and target segments of customers who interact with your brand both in store and online.

Building Mobile Apps at speed with EPiServer Commerce & Flagship

In this post I'll give an overview of the key technologies and building blocks involved in delivering a full-featured e-commerce mobile app across Android and iOS that is fully integrated with your existing EPiServer Commerce website.

Native Apps

Research has shown that conversion rates are 3 times higher for customers on Native mobile apps compared to responsive web sites. Native apps can provide a speedier, cleaner user experience while taking advantage of mobile features such as push notifications to drive engagement.

The issue has been that native app development is often prohibitively expensive. Done in its purest form, it requires separate iOS and Android developers working in totally different technology stacks to deliver apps that are fundamentally the same. Maintenance costs are then effectively doubled as you roll features out across both platforms.

React Native

React Native is one of a number of cross-platform mobile development frameworks and, frankly, without getting into comparisons with other options, it's the one I have been by far the most impressed with.

Most EPiServer or web developers will know of ReactJS as a framework for building applications using JavaScript. React Native, however, is an entire platform that lets us build native, cross-platform mobile apps in which we use React to construct the UI layers. ReactJS syntax and patterns are at the heart of React Native, so there is a minimal learning curve for those already familiar with the framework.

While React uses the Virtual DOM to render code in the browser, React Native uses native APIs to render components on mobile. The Native Components and Native Modules which come with React Native are optimised for performance.

For these components to work, React Native has to plug two different technologies together: the JavaScript we write and the native ecosystem it is targeting. This communication happens over a "bridge", which is central to the React Native architecture and provides two-way asynchronous communication between our JavaScript and the native platform through JSON-serialised messages. At a very basic level, React Native could be described as a JavaScript to Xcode or Java translator.

[Figure: the React Native bridge architecture]

React Native and EPiServer Commerce with Flagship

Flagship is an open source accelerator kit developed by Branding Brand for mobile applications built on React Native. The code lives in Branding Brand's GitHub repository at: https://github.com/brandingbrand/flagship

This code base provides a white-label, pre-built starter site and a set of reusable commerce components which you can use to kick-start your React Native application development.

Flagship's JavaScript is implemented in TypeScript which, once you get your head around the syntax, introduces features EPiServer developers will be more familiar with, including strongly typed variables and object-oriented design principles.

Being a React Native application, styling is handled within the components via near-standard CSS style sheets.

Flagship already comes with connectors for commerce platforms such as Shopify and Demandware, with an EPiServer connector coming soon through integration with EPiServer's Service API and Content Delivery API. However, there is already the opportunity to integrate with standard Flagship components by normalising your existing responses to the Flagship JSON shape, as sketched below.
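For example, a thin mapping layer on the EPiServer side could reshape catalogue content into whatever JSON the Flagship components expect. The FlagshipProduct type below is entirely hypothetical; the real contract is defined by the Flagship components you use.

using EPiServer.Commerce.Catalog.ContentTypes;

// Hypothetical normalisation layer - the property names mirror an assumed
// Flagship JSON shape and are illustrative only.
public class FlagshipProduct
{
    public string Id { get; set; }
    public string Title { get; set; }
    public string Price { get; set; }
    public string ImageUrl { get; set; }
}

public static class FlagshipMapper
{
    public static FlagshipProduct ToFlagshipProduct(ProductContent product, string price, string imageUrl)
    {
        return new FlagshipProduct
        {
            Id = product.Code,
            Title = product.DisplayName,
            Price = price,
            ImageUrl = imageUrl
        };
    }
}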

Dynamic Category Page Search Faceting with Epi Find

The goal is to create the most dynamic E-Commerce Category pages possible with Epi Commerce & Find.

First, let's clarify the terminology. By category page I am referring to the EPiServer Commerce "NodeContent" type used to structure the product catalog. Customers will be able to navigate the site using this category structure, seeing the most relevant products at every point.

My project type is a Dynamics AX ERP to EPiServer Commerce integration using Avensia Storefront to manage the catalog synchronisation. More information on this type of integration is available here: Delivering Unified Commerce Solutions With Episerver Avensia Storefront And Dynamics For Finance & Operations

Using Avensia Storefront we can replicate the AX product catalog in the EPiServer Commerce catalog. With this approach, if our client publishes a new category and/or products in AX, within minutes that set-up will be replicated in our EPiServer catalog.

The category pages of an e-commerce site are absolutely critical to get right, as an optimal set-up provides the customer with an easy method of navigating the site while providing the functionality and information they need to find the product(s) they want. From an SEO perspective it's also extremely important that crawlers can easily traverse the hierarchy and gain context.

Because categories are added dynamically, we have one category page implementation, but the search faceting and product view should dynamically optimise based on the descendant product types of the current category and their attributes.

Take for example this simplistic catalog structure:

[Figure: example catalog structure]

All Product Types inherit from a “BaseProduct” which contains common properties across the catalog.

Our category page inherits from a base node type, "BaseNode".

My requirement is:

  • Category A – advanced facet filtering options for Product Type 1
  • Category B – advanced facet filtering options for Product Type 2
  • Category C – facets common to Base Product

Technical Set Up

The Category Controller inherits from ContentController<NodeContent> meaning that it gets hit on all page loads of Node Content. We are going to use a search service which is injected into the Category Controller and manages the communication with Find. The following sequence diagram will give you a feel for the flow.

[Figure: category search flow - high-level sequence diagram]

Dynamically getting Child Content Types

My Search Service interface defines the following methods:

public interface ISearchService
{
  // FacetGroupOption is the facet view model used in this solution
  SearchResultsModel SearchFromCategory(NodeContent currentContent, string query, string sort = "", int page = 1, int? pageSize = null, List<FacetGroupOption> facetGroups = null);
  SearchResultsModel Search<T>(NodeContent currentContent, string query, string sort, int page = 1, int? pageSize = null, List<FacetGroupOption> facetGroups = null, bool trackSearch = false) where T : BaseProduct;
}

In the SearchFromCategory method we need to find the descendant product types.

This query took a bit of figuring out, and I have to thank the excellent community on EPiServer World for pointing me in the right direction:

https://world.episerver.com/forum/developer-forum/EPiServer-Search/Thread-Container/2019/1/using-termfacets-to-return-a-list-of-product-namespaces-under-a-node/#200943

The solution I arrived at is the following method, which uses Find to return all product types that are descendants of the current node. To accomplish this, "_Type" is specified as the facet field, as that indexed property contains the entire type string.

public List<Type> GetProductTypesForCurrentCategory(NodeContent currentContent)
{
  // use Find to execute a search with the content type set as a terms facet
  var searchQuery = _findClient.Search<BaseProduct>()
    .Filter(x =>
     x.Ancestors().Match(currentContent.ContentLink.ToReferenceWithoutVersion().ToString()))
      .Take(0).TermsFacetFor(x => x.CommonType(), request => request.Field = "_Type")
      .StaticallyCacheFor(TimeSpan.FromMinutes(SearchConstants.ProductTypesCacheInMinutes))
      .GetContentResult();

  // extract the content type names from the facet result
  var terms = searchQuery.TermsFacetFor(x => x.CommonType()).Terms;
  var productTypes = new List<Type>();

  // add the returned types to the list
  foreach (var typeNamespace in terms)
  {
    var type = Type.GetType(typeNamespace.Term);
    productTypes.Add(type);
  }

  return productTypes;
}

SearchFromCategory can then be implemented so that it customises the search for a specific product type where possible, otherwise falling back to the BaseProduct type:

// Called from the category page
public SearchResultsModel SearchFromCategory(NodeContent currentContent, string query, string sort = "", int page = 1, int? pageSize = null, List<FacetGroupOption> facetGroups = null)
{
  var productTypes = GetProductTypesForCurrentCategory(currentContent);

  if (productTypes?.Count == 1)
  {
    // get the single product type under this category
    var productType = productTypes.First();

    // instantiate and invoke the generic Search<T> via reflection
    var searchMethod = typeof(SearchService).GetMethod("Search", new[] { typeof(NodeContent), typeof(string), typeof(string), typeof(int), typeof(int?), typeof(List<FacetGroupOption>), typeof(bool) });

    if (searchMethod != null)
    {
      var genericSearch = searchMethod.MakeGenericMethod(productType);
      return genericSearch.Invoke(this, new object[] { currentContent, query, sort, page, pageSize, facetGroups, false }) as SearchResultsModel;
    }
  }

  // fall back to searching on the base type
  return Search<BaseProduct>(currentContent, query, sort, page, pageSize, facetGroups);
}

The generic search service in this solution is then built to customise facets and search results based on the product type. That's a large topic in itself, so I'm happy to cover it in another blog post!

Also note that this solution uses reflection, which might not always be optimal from a performance perspective. You could just as easily implement a factory method with a switch statement to map types to Search<T>() method invocations, as sketched below.
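For illustration, the reflection-free alternative could look something like this; ProductType1 and ProductType2 are hypothetical concrete types from your catalog model, and FacetGroupOption is the facet model used above:

// Sketch of the factory-method alternative to reflection
private SearchResultsModel SearchForType(Type productType, NodeContent currentContent, string query, string sort, int page, int? pageSize, List<FacetGroupOption> facetGroups)
{
    switch (productType.Name)
    {
        case nameof(ProductType1):
            return Search<ProductType1>(currentContent, query, sort, page, pageSize, facetGroups);
        case nameof(ProductType2):
            return Search<ProductType2>(currentContent, query, sort, page, pageSize, facetGroups);
        default:
            return Search<BaseProduct>(currentContent, query, sort, page, pageSize, facetGroups);
    }
}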

Getting the Content Reference from EPiServer Content Approval Events

Although this post is quite trivial, EPiServer Content Approvals are a large step forward from the old functionality, and as the feature is relatively recent there aren't many posts out there on customising the engine. Most importantly, I would have found this post useful had I found it last week!

My requirement is simple: customers can upload images via an interface on the website's product details page. On successful upload, these images are saved as media assets in a draft state. These assets contain properties associating them with both the product and the customer who uploaded them.

Each image will go through a content approval workflow before eventually being either published on the website or rejected, in which case we need to send an email notifying the customer of the rejection. Comments are mandatory when declining an image in the Content Approval workflow, so this comment will be included in the email.

There are Approval Engine events that we can easily hook into, and they are well documented on EPiServer World:

https://world.episerver.com/documentation/developer-guides/CMS/Content/content-approvals/working-with-content-approvals/

Our image content type has properties linking it to the customer and product, so once we extract the media content reference from the event we can do just about whatever we want. Getting that content reference should be the trivial bit… and it is… once you know what to do!

As detailed in the link above, each Approval Engine event provides an ApprovalEventArgs object giving us the context.

Within this object we have the ApprovalId and ApprovalReference properties which we can use to query the approval repository to get us more details:


private void OnRejected(ApprovalEventArgs e)
{
    var approvalRepo = ServiceLocator.Current.GetInstance<IApprovalRepository>();

    IEnumerable<int> approvalIds = new List<int> { e.ApprovalID };

    var approvalItems = approvalRepo.GetItemsAsync(approvalIds);

The next bit is the bit that got me initially…

The approval items result contains an "Approval" object. However, this Approval object does not contain the content reference of the item the event was triggered for. To retrieve the ContentReference you need to cast the Approval to a "ContentApproval", which is derived from Approval.


var contentApproval = (approvalItems.Result.FirstOrDefault() as ContentApproval);

We can now use our contentApproval object to do just about whatever we want. See the code sample below:


using System.Collections.Generic;
using System.Linq;
using EPiServer;
using EPiServer.Approvals;
using EPiServer.Approvals.ContentApprovals;
using EPiServer.Core;
using EPiServer.Framework;
using EPiServer.Framework.Initialization;
using EPiServer.ServiceLocation;
using EPiServer.WebApplication.Core.ContentTypes.Media;

namespace EPiServer.WebApplication.Infrastructure
{
    [InitializableModule]
    public class ApprovalEngineEvents : IInitializableModule
    {
        private IApprovalEngineEvents _approvalEngineEvents;

        public void Initialize(InitializationEngine context)
        {
            _approvalEngineEvents = context.Locate.Advanced.GetInstance<IApprovalEngineEvents>();
            _approvalEngineEvents.Rejected += OnRejected;
        }

        private void OnRejected(ApprovalEventArgs e)
        {
            var contentLoader = ServiceLocator.Current.GetInstance<IContentLoader>();
            var approvalRepo = ServiceLocator.Current.GetInstance<IApprovalRepository>();

            IEnumerable<int> approvalIds = new List<int> { e.ApprovalID };
            var approvalItems = approvalRepo.GetItemsAsync(approvalIds);

            // cast to ContentApproval to get at the content reference
            var contentApproval = approvalItems.Result.FirstOrDefault() as ContentApproval;

            if (contentApproval != null)
            {
                // load the rejected media asset (load as your custom image type
                // to read the customer and product properties)
                var rejectedContent = contentLoader.Get<IContent>(contentApproval.ContentLink);

                if (rejectedContent != null)
                {
                    string comment = e.Comment;

                    // get customer id from the asset's properties
                    // get product from the asset's properties
                    // send email to customer
                }
            }
        }

        public void Uninitialize(InitializationEngine context) => _approvalEngineEvents.Rejected -= OnRejected;
    }
}

Using Azure Queues in EPiServer

EPiServer's DXC cloud-based Azure platform-as-a-service provides the availability and scalability necessary to auto-scale enterprise-level, high-transaction applications. Bundled within the DXC set-up is an Azure Storage Account used for blob storage, and a Service Bus used for managing events such as cache invalidation between instances. Once your application is deployed to DXC it just works, letting the partner concentrate on delivering a robust, quality build.

An Azure Storage Account comes with Blob, Queue, Table and File storage, but do note that while these services are technically within the DXC storage account, they are not yet exposed for integration purposes.

However, that shouldn't mean we can't use an Azure Storage Account to build our applications. Even if these services are not exposed within DXC, you can set up a Storage Account under another Azure subscription, as the cost even under a large volume of integrations is tiny: https://azure.microsoft.com/en-us/pricing/details/storage/queues/

Azure Queues

Azure Queue Storage is for storing messages in the cloud to be exchanged between components of systems. Typically a "producing" system will create a message which is then processed by the "consumer".

Azure Queues work well in an EPiServer application to decouple the logic around integrating with third-party systems from our EPiServer code base. Once solution design is complete, we can treat the EPiServer application which pushes the message as a separate code base from the delivery application. This approach has loads of benefits:

Maintainability

If there are issues with an integration, developers no longer have to debug the web application to get to the bottom of it. They can simply check the messages entering the queue, verify the data in the queue, and then focus the effort on either the message-pushing or delivery application components.

Managing both applications in separate deployment pipelines also means we don't have to test the web application code base when updating an integration, and vice versa.

Scalability

Azure Queues are insanely scalable, coping with up to 20,000 messages per second! All our EPiServer application has to do is push the message.

Error Tracking

When the delivery of a message fails, Azure Queues will by default retry five times. However, both the number of retry attempts and the time between retries can be configured. When a message fails past the set thresholds, it enters a "poison" queue where it can be viewed and have additional error handling implemented. I'll touch on this further down the article.

Quality

In addition to each of the reasons above, having separate applications in separate CI deployment pipelines gives us the foundation to write independent sets of robust unit tests. Done well, we will find mistakes in code long before they get deployed to a live environment.

Time To Market

After initial solution design is complete, you can have separate, independent development teams working on the web and integration applications, increasing velocity and hopefully getting us live earlier! Hiring managers also love it, as new hires do not necessarily need EPiServer skills to make an impact on a project.

Let’s see some Code!

Putting a message on a queue

In this example we'll put a very simple contact on a queue, which will later be synced to an email marketing platform.

First, create your storage account on your subscription as documented by Microsoft: https://code.visualstudio.com/tutorials/static-website/create-storage

If you like, you can download Azure Storage Explorer to inspect your Storage account:

https://docs.microsoft.com/en-us/azure/vs-azure-tools-storage-manage-with-storage-explorer?tabs=windows

The first step in our EPiServer application is then to install the following nuget package: https://www.nuget.org/packages/WindowsAzure.Storage/

On events such as a new user registering on your site, you can then simply push the relevant data to a queue, and from that point on the web application has no responsibility for the delivery of the package to the third-party platform. Pushing a message to a queue is very simple, as illustrated in this sample where I push a JSON-serialised contact model.

[Figure: code sample pushing a contact message to an Azure Queue]
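As the original screenshot is not reproduced here, the following is a minimal sketch of the push using the WindowsAzure.Storage SDK. The queue name and contact shape are illustrative.

using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Queue;
using Newtonsoft.Json;

public class ContactQueueService
{
    public void PushContact(string email, string firstName)
    {
        // connection string comes from configuration; the queue name is illustrative
        var storageAccount = CloudStorageAccount.Parse("your-storage-connection-string");
        var queueClient = storageAccount.CreateCloudQueueClient();
        var queue = queueClient.GetQueueReference("emailmarketingplatformqueue");
        queue.CreateIfNotExists();

        // serialise the contact model and push it onto the queue
        var message = JsonConvert.SerializeObject(new { Email = email, FirstName = firstName });
        queue.AddMessage(new CloudQueueMessage(message));
    }
}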

Introducing Azure Functions

Now that we have a message saved in an Azure Queue, we need to process it and send to our third party application.

Azure Functions is an event-driven, serverless computing service that extends the existing Azure platform, allowing us to implement code triggered by events occurring in Azure. Functions can also be extended to third-party services and on-premises systems, but in this post we're focused on Azure.

Using Visual Studio we can simply add a new Azure Functions project to our solution.

[Figure: adding an Azure Functions project in Visual Studio]

On the set-up screen we can choose the Queue Trigger template, setting the "ConnectionString" and "Queue name" to the same values as in the previous step.

[Figure: the queue trigger template set-up screen]

This will scaffold a very simple function:

[Figure: the scaffolded queue trigger function]
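The scaffolded function looks roughly like this sketch (the exact signature depends on the Functions runtime version; this assumes the v2 runtime with ILogger):

using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class EmailMarketingQueueFunction
{
    [FunctionName("EmailMarketingQueueFunction")]
    public static void Run(
        [QueueTrigger("emailmarketingplatformqueue", Connection = "ConnectionString")] string queueItem,
        ILogger log)
    {
        log.LogInformation($"Processing queue message: {queueItem}");
        // deserialise the contact and call the third-party platform here
    }
}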

Every time a message is inserted on our "EmailMarketingPlatformQueue" queue, the function will be executed, and from here we can implement the integration with our third-party system.

Azure Functions Pro Tips

This library gives you the dependency injection capabilities you will need to take a test-driven approach to your development:

https://github.com/BorisWilhelms/azure-function-dependency-injection

What if it fails? Retries and the poison queue

By default, a queue trigger will be tried 5 times before the message is inserted into a "poison" queue.

You can configure both the number of retry attempts and the time between retries in the host.json file which is added to your solution in the VS set up.

https://docs.microsoft.com/en-us/azure/azure-functions/functions-bindings-storage-queue#host-json

Alternatively you can set the retry policy programmatically as you push messages to a queue.

If you want to do extra processing when a message enters the poison queue, you can simply write a new Azure Function triggered by that queue, as sketched below.
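Poison messages land in a queue named after the original with a "-poison" suffix, so a handler can be another queue-triggered function, sketched here with the same assumptions as above:

[FunctionName("EmailMarketingPoisonQueueFunction")]
public static void Run(
    [QueueTrigger("emailmarketingplatformqueue-poison", Connection = "ConnectionString")] string queueItem,
    ILogger log)
{
    // alert, persist or otherwise handle the permanently failed message here
    log.LogError($"Message failed all processing attempts: {queueItem}");
}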

Suggestions for DXC

I would love to see this approach supported out of the box in DXC, and these are the two things I would like to see happen:

  • Support for Deployment of Functions to DXC
  • Future versions of the EPiServer.Azure package to provide methods to push data to other Azure Storage services

I don't know if this is on the roadmap for DXC, and if it isn't, I trust there are good reasons behind that. However, in the meantime, running your own Storage Account can repay its small cost over time.

Building an EPiServer Campaign Connector for Transactional Emails

This post was originally written as a workaround to the EPiServer Campaign Connector's limitation of being tied to one client. However, as @David kindly points out in his comment below, this has been rectified in version 5.0.0 of the Episerver Connect for Marketing Automation package.

However, the information below may still be helpful for non-EPiServer implementations of Campaign.

EPiServer released a Campaign Connector package (https://world.episerver.com/add-ons/connect-for-marketing-automation/connect-for-campaign/) which abstracts authentication against Campaign's APIs away from us, while providing an EPiServer Forms connector as well as support for a range of marketing and transactional email functionality.

Once installed, authentication is managed by simply inputting your Campaign client credentials in the Connect for Marketing Automation tool.

[Figure: the Connect for Marketing Automation authentication screen]

Campaign Transactional Connector

It was important to retain the marketing features of EPiServer's Campaign Connector, so we'll continue to use that connector for the marketing client, and create a custom transactional email Campaign connector which is used solely for sending transactional emails on the site.

Campaign has two APIs: a SOAP API and an HTTP API. The SOAP API is more powerful, containing all the marketing endpoints, but the HTTP API contains the endpoint we need for transactional emails.

The connector we will create manages authentication using the HTTP endpoints.

Setting up our Infrastructure

To make sure we build our solution in a maintainable way, I defined the constants we will need later in the post in a dedicated class.

[Figure: the constants class]

As solid error handling is important to me, I created a class to map the text received in the response to a description taken from the API documentation.

[Figure: the custom exception and response text mapping]

Gateway

The HTTP gateway layer manages authentication and requests sent to the HTTP API endpoint. It is a reusable class that can connect to other endpoints exposed by Campaign via HTTP; a sketch follows below.

[Figure: the HTTP gateway class]
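As the original screenshot is not reproduced here, this is a rough sketch of what such a gateway can look like. The URL format and the CampaignConstants values are assumptions; consult the Campaign HTTP API documentation for the real request format.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Net.Http;
using System.Threading.Tasks;

// Sketch of a reusable HTTP gateway - the URL structure is an assumption.
public class CampaignHttpGateway
{
    private static readonly HttpClient Client = new HttpClient();

    public async Task<string> SendAsync(string operation, IDictionary<string, string> parameters)
    {
        // the authorisation code is part of the URL, which is why these requests
        // must always be made server side
        var query = string.Join("&", parameters.Select(p =>
            $"{Uri.EscapeDataString(p.Key)}={Uri.EscapeDataString(p.Value)}"));

        var url = $"{CampaignConstants.BaseUrl}/{CampaignConstants.AuthorisationCode}/{operation}?{query}";

        var response = await Client.GetAsync(url);

        // return the raw response text; the caller decides whether it indicates success
        return await response.Content.ReadAsStringAsync();
    }
}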

Notice that the authorisation code is included in the request URL. That's why requests to the HTTP API always need to be managed on the server side. This point is very important for obvious security reasons!

If the response text is not equal to "enqueued", we throw our custom exception. Also note there is no catching of potential exceptions at this layer of the application, as I want errors to be handled further up the chain.

Transactional Email Sender

The transactional email sender class lives a layer up from the HTTP Gateway class. It will be responsible for consuming the HTTP Gateway and using it to perform transactional email sends.

The first step was to create a class which Campaign API responses can be mapped to, giving the next outer service layer an easy-to-interpret interface.

[Figure: the Campaign response model]

The implementation of the transactional email sender simply gets the properties which are necessary to send transactional emails and uses the HTTP gateway to manage the send.

Exceptions are caught, logged and handled at this layer.

[Figure: the transactional email sender implementation]

Configuration

To configure the connector you simply need to retrieve your client ID, authorisation code and master recipient list ID from your Campaign client and populate your appSettings, for example as sketched below.

[Figure: the configuration settings]
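The appSettings keys below are illustrative names of my own, not a contract defined by Campaign:

<appSettings>
  <add key="Campaign.ClientId" value="your-client-id" />
  <add key="Campaign.AuthorisationCode" value="your-authorisation-code" />
  <add key="Campaign.MasterRecipientListId" value="your-recipient-list-id" />
</appSettings>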


Unified Commerce Solutions with EPiServer, Avensia Storefront and Dynamics for Finance & Operations

Episerver Commerce and Microsoft Dynamics 365 for Finance and Operations (formerly branded Dynamics AX) are two of the global leaders in their respective Commerce and ERP fields. In this article, we’ll explore how we can integrate these platforms to empower businesses with full unified commerce solutions.

We’ll start with the fundamental reasons we would tackle such an integration as we explore the concept of Unified Commerce…

Unified Commerce

Most retailers use detached and sometimes completely disconnected applications to manage different aspects of their business, from the point of sale (POS) in retail stores to their eCommerce shop. This results in duplicated data and processes across channels, often introducing inconsistencies in pricing, promotions, inventory management, loyalty, CRM and just about every other aspect of running a successful business.

Some use complex integrations to manage the synchronisation of data across these applications, but that can be less than efficient once you take into account expensive builds and the cost of maintaining these integrations over time. It's also a less than scalable approach, which can introduce new challenges as businesses mature.

The fact is that retailers are accumulating unprecedented amounts of data throughout their various online and offline channels. In order to streamline internal processes and provide customers with a seamless experience across channels, retailers are abandoning disconnected systems and moving to a unified commerce environment, where online and offline experiences come together as retailers use in-store data to personalise experiences across the web and vice versa.

The result is that customer experiences across brands improve. There are savings in the time and money being eaten up by integrations, and the risk of errors is reduced, while companies get the power and scalability of managing their data in one centralised place. In essence, unified commerce transformations have the potential to streamline processes, reduce costs and increase channel revenues. To deliver unified commerce experiences, we need to integrate with the Enterprise Resource Planning (ERP) systems that retailers run their businesses on.

Explaining ERPs

To understand what an ERP is, you need to take a step back and think about all the processes involved in running a business. For an e-commerce shop alone there is stock/inventory management, pricing, loyalty, promotions, and order and delivery management, but retailers also need software to manage human resources, accounting, staff, and the list goes on. PIM (Product Information Management) is sometimes managed from within the ERP too.

The purpose of an ERP system is to provide one central source of truth for data across a shared database, so various departments like IT, accounting, sales and point-of-sale staff can rely on the same up-to-date information.

Microsoft’s ERP sits within its D365 offerings.

Introducing Microsoft Dynamics 365 for Finance & Operations

The first release, formerly known as Microsoft Dynamics AX, was in 2006, following on from an acquisition in 2004. A number of upgrades were released through the following decade. Then, over 2016 and 2017, a major revision of the platform was released with a completely new UI delivered through the web browser. This was branded Microsoft Dynamics 365 for Finance and Operations, with an additional version more heavily customised for retail businesses branded Microsoft Dynamics 365 for Retail.

Dynamics 365 for Finance and Operations consists of a collection of connected subsystems such as Catalog Management, Pricing, Promotions etc. It also includes an integrated PIM (Product Information Management) system.

Dedicated channels will typically be set up for offline retail and online stores. An abstraction of data relevant for the channel will be configured to be published via the “Real Time Service” to relevant channel databases.

[Figure: D365 channel publishing overview]

At this point, it is important that we understand the Retail Server component from D365 for Retail at a high level.

Retail Server

Retail Server provides a set of services exposed via an API, with encapsulated business logic for e-commerce and point of sale (POS) clients. It is through these services that we can build a unified commerce solution with Episerver.

[Figure: Retail Server OData architecture]

While the OData Web API layer exposes the services, the Commerce Runtime (CRT) is responsible for executing the channel's business logic as per the configuration of that instance of D365 for Retail.

Data created by customers in the retail online channel is stored in the Dynamics channel database via Dynamics Retail Server. Real-Time Service then imports it back into the headquarters database.

Delivering Unified Commerce with Episerver

To make this integration work there are two fundamental things we need to accomplish:

  • Synchronize the Product Catalog between D365 for Retail and Episerver so that up-to-date product data is displayed on the E-Commerce site
  • Integrate the shopping experience with D365 for Retail

[Figure: integration architecture in detail]

Product Catalog Synchronization

The Avensia Storefront Connector is a licensed integration engine that connects D365 for Retail with Episerver, making features configured in D365 available in Episerver. Avensia also provides a Storefront starter site which can be set up to provide an e-commerce shop connected to a channel in the D365 for Finance & Operations demo environment.

Storefront comes with two core scheduled jobs. The staging job manages fetching category and product information from the retail channel and staging it in a database. The second job manages importing this data into the Episerver Commerce product catalogue, maintaining any hierarchical category, product and variant information.

Shopping Experience

Retail Server is called directly from the Episerver Commerce site pages to manage the cart and checkout process ensuring that all calculations are driven by the data and configuration set up in the channel database.

When a customer navigates to a product page, we query Retail Server directly for real-time pricing and stock calculations.

As customers add products to their shopping cart, price and discounts are calculated with the cart being managed by Dynamics business logic. Dynamics is then used for all calculations as the customer progresses through the checkout flow setting delivery methods, payment methods and creating the order.

Data on order confirmation pages and in emails would subsequently be pulled from the Retail Server order history service.

Conclusion

EPiServer Commerce with Avensia Storefront provides the perfect platform for a fully integrated unified commerce solution on D365.

The power of EPiServer and additional products such as Perform, Campaign, Social and Insight can give brand managers the tools to maximise impact, significantly increasing revenues when used properly.

Choosing between Umbraco and EPiServer CMS

EPiServer and Umbraco are excellent .Net CMS platforms.

EPiServer is an enterprise-level, fully licensed product, while Umbraco is an open source option driven by a passionate community of developers.

If you are choosing between these two options, there are a number of factors that you should consider and I want to touch on these without delving too deep into any particular topic.

Both EPiServer and Umbraco come with MVC versions which provide full control of rendered markup. They also support standard CMS features such as publishing pipelines, versioning, personalisation, reusable content blocks and forms. The extent to which these features are polished and customisable varies but the fact is that they are supported by both platforms.

So what are the major factors to consider when choosing between these platforms?

Budget

This is obvious so let’s get it out of the way first.

EPiServer is a very polished, enterprise-level CMS with a price tag to match. If your client does not have a budget to match the investment required, it is better to look at open source options, and this is where you will invariably turn to Umbraco.

Admin UI

Think about the user personas who will be editing and approving content. How technology-savvy are they, and how intuitive will they expect the editing and administrative interface to be?

The UI of EPiServer provides the content editor with inline page editing. Would such an intuitive feature be viewed as a big win for your target audience?

What are the functional requirements for administration and how much customisation is required? Is there a requirement for advanced content approval workflows? Will the administrator need a strong level of control over the execution of scheduled tasks for example?

The UI of Umbraco is very functional and can be customised through custom page properties. However, it is not as customizable or polished as the EPiServer UI.

Personalisation

Personalisation provides the ability to personalise the content being served to particular users. It is a very powerful marketing tool when used to its potential.

CMS administrators can define sets of attributes that a user must fulfil to belong to a particular group of visitors and thus be served personalised content. The possibilities for defining these criteria are endless. They could relate to the profile of the user, through attributes such as age or location, or be defined by external factors such as the day of the week, time of day or, if we get more advanced, the weather.

There is a Personalisation Groups package for Umbraco which comes with a number of criteria you can use.

However, EPiServer is more powerful in the personalisation space, and for me this is where it starts to stand out from open source competitors. Personalisation comes out of the box, and there is an impressive array of included criteria which allow you to create visitor groups for anything from the number of site visits to the referrer or search keywords.

The interface for defining visitor groups, and the ability to edit a page's configuration in-page, provide a very intuitive experience for editors.

EPiServer also has a simple programming interface so developers can quickly create custom criteria.

EPiServer is the clear winner on this feature. The question is whether an advanced level of personalisation is a pivotal requirement, or whether it may become one in the future.

Commerce

Is e-Commerce either a requirement now or will it be in the future? If so, what scale of commerce do you need to plan for?

Umbraco has a number of e-Commerce packages, both open source and licensed, with Merchello being an open source option I have experience with. While there is an initial learning curve from a developer perspective, it does a solid job of providing single-market e-Commerce capabilities. It also provides a nice interface for catalogue, customer and sales management. If you need a CMS application with basic commerce, I do recommend checking out the Umbraco and Merchello combination.

EPiServer Commerce is an enterprise commerce add-on and a whole different beast. By enterprise commerce, I mean that it has the power to cope with multi-market solutions where each market is customisable in terms of products, pricing, warehouse integration, ERP integrations, sales, payment providers, tax and all the complexities this level of solution brings to the table.

In essence, these options are competing at totally different ends of the commerce market, and it should be quite obvious which end of the market the solution you are scoping veers towards as you delve into your business requirements.

Systems integration

Neither Umbraco nor EPiServer comes with integrated analytics or CRM. This, in my opinion, is a good thing, because I want the flexibility to use the best solutions out there for my requirement rather than a half-baked implementation that comes bundled with a CMS.

Both products have connectors, sometimes free and sometimes licensed, to connect to your platform of choice. Although it is possible to develop your own integrations on both platforms, using a suitable connector can drastically reduce the cost of development.

If custom implementations are required against an internal system such as an ERP, the EPiServer Service API is developer friendly, providing the tools to do this efficiently. The Umbraco API is not as powerful, so there will be more legwork for the development team, which can impact project costs.

Localisation

Both CMSs have single-site, multi-lingual localisation capabilities.

EPiServer has very strong localisation support which does not add significantly to development cost. Its UI is very clean, ensuring it can be easily managed by an administrator. On certain projects this can be a very real cost saver.

If localisation is not a core requirement, Umbraco may well be sufficient. It requires more developer work and does not have the intuitive UI of EPiServer, but it may be enough.

Deployments

Think about the roadmap of the application after initial build. How often do you expect there to be maintenance and future development phases carried out?

Largely the deployment process will be quite standard but they do differ on CMS templates. The object which defines a CMS template is called a “document type” in Umbraco and a “content type” in EPiServer but they are the same concept.

Umbraco requires an administrator to manually set up and configure "document types" in the admin UI each time new or updated templates are deployed. This gets inefficient as you deploy to stage, UAT and live environments. More importantly, it adds the inevitability of human error.

EPiServer, on the other hand, allows the developer to set up "content types" programmatically, as in the sketch below. As code is deployed, new or updated templates are automatically configured.
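For example, a content type in EPiServer is just an attributed class; the GUID below is an illustrative placeholder:

using System.ComponentModel.DataAnnotations;
using EPiServer.Core;
using EPiServer.DataAnnotations;

// Deployed with the code - no manual set-up in the admin UI required
[ContentType(DisplayName = "Article Page",
    GUID = "9f25e0bc-6b8c-4e71-9a5e-2b5d7f1c0a11", // placeholder GUID
    Description = "A standard article page")]
public class ArticlePage : PageData
{
    [CultureSpecific]
    [Display(Name = "Heading", Order = 10)]
    public virtual string Heading { get; set; }
}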

Limiting configuration and user error associated with deployment could be a minor win or it could be a game changer. This really depends on the complexity of the project at hand.

Developer Support

Although Umbraco is a free CMS option, you need to sign up for a subscription if you want developer support from the Umbraco team. Before you buy a subscription, consider that there is a large and very active community of developers who will often be able to help resolve issues.

As you would expect with a licensed product, EPiServer provides developer support through its service desk, and in my experience turnaround times are reasonably fast.

So which is better?

Both are excellent platforms.

The question is which is better for your client and the project at hand.

Strive to choose the right tool for the right job by assessing the information above and you will be on the right path.

Picking the correct Agile Process

In software development there are two popular and much-debated categories of process workflow: Waterfall and Agile.

Waterfall software development requires thorough planning, with rigid requirements fixed before development starts and planning done in its entirety up front. Agile software development, by contrast, is executed in short iterations.

Even on a Waterfall project with a fixed scope and cost, a strong management team will move the operational strategy in the direction of agile development, borrowing processes and practices to improve team collaboration and efficiency.

Waterfall


Software professionals will cringe when they hear the word “Waterfall”.

The truth is that Waterfall does have its place and the predictability is suited to certain projects.

The problems with a waterfall process arise when a project passes that threshold where it is no longer fully predictable in terms of requirements, workflows and technical challenges.

As a project management discipline, Waterfall also makes a lot of sense. In this model a significant amount of time is invested up front in thoroughly defining requirements, so that every possible caveat is covered and we know when it is going to be delivered.
The team will then plan how they are going to approach the problem and how long it will take. The project plan contains time for testing and bug fixing before deployment, with maintenance carried out under warranty or retainer.

This is perfect in fields where requirements don’t change after scoping and implementation time is predictable to an extent with the construction industry being a good example.

The beauty and challenge of software projects is that we are doing something unique, and such endeavours are inherently unpredictable.

The reasons I believe Waterfall isn’t always the correct choice for software are:

1. Estimations are done up front and do not take the status and learnings gained throughout the implementation into account.

2. Waterfall assumes requirements are thoroughly fleshed out on all levels. Particularly on large, complex scopes of work, this is an unrealistic misconception.

3. A fixed scope does not respond to evolving customer needs over time. The finished product can be what a client thought they wanted rather than the solution they really need.

4. Little or no feedback loop to harness learnings and gain efficiencies during implementation.

Agile


There are several subsets of Agile, each tying back to the principle of iterative work cycles.

Another defining characteristic is flexibility on all sides, so that scope evolves to meet the needs of the customer throughout implementation. This is best achieved by releasing a minimum viable product early and assessing its effectiveness against KPIs.

The term agile development does not correlate with a particular way of doing things. It is an umbrella term which covers a number of different processes and practices. Two of the most widely adopted processes are Kanban and Scrum.

Tell me about Kanban?

Kanban, interestingly, has its roots in car manufacturing, originally being the brainchild of Toyota in the 1940s. Toyota took their inspiration from an unlikely source: the supermarket! Supermarket clerks restocked grocery items based on their store's inventory levels, not on their vendors' supply.

Toyota used this simple yet profound realisation to drive the adoption of a revolutionary "just-in-time" process that matched inventory with demand throughout the entire production lifecycle, in turn achieving higher levels of efficiency and throughput.

In software, the idea is that a Kanban board has various columns representing task statuses, e.g. in progress, test, ready to deploy. If you are trying to visualise this, think of Trello boards. With each column containing tasks, the entire team gets a visual representation of the current project status.

Kanban as a software development process emphasises limiting the number of tasks in any one column, prioritising a smooth, lean flow right through to deployment. The number of tasks in progress at any given time should correlate with the capacity of the team.

If a bottleneck is identified with too many features sitting under any one status, the entire team will work together to identify the cause of the impasse and keep features moving through.

What about Scrum?

Scrum teams develop projects or products in an iterative, incremental model. The name is borrowed from rugby union, where it refers to a team of players, interlocked together, pushing in a common direction against the other team.

Development in Scrum is executed in cycles called sprints, which are typically two-week iterations.
A sprint planning meeting happens at the start of each sprint, where work is prioritised, estimated and committed to depending on capacity. After sprint planning, there are no changes to the scope of what has been committed to during that iteration.

Each morning the Scrum team will gather to give each other a status update and discuss next steps working towards the common goal of delivering all functionality committed to.

At the end of the sprint, the team demonstrates what has been delivered to all stakeholders before entering a retrospective, where an open, honest conversation takes place about learnings and what actions are necessary to improve the velocity of the next sprint.

The most important takeaway here is that scope is frozen during a sprint. While Scrum does embrace change, any changes need to go into the next sprint.

Kanban or Scrum?

I have seen both Kanban & Scrum used effectively in the right situations. It’s not a case of one methodology being better than the other, it’s about using the right methodology in the right place.

In both Kanban and Scrum, a prioritised and unified product backlog is essential to the success of the project. Product management will prioritise that backlog having a clear vision of what gets worked on first and the development team will decide how.

Another consideration is the deployment infrastructure. Kanban relies on frequent releases of software so a mature continuous deployment infrastructure is essential.

Deployments in Scrum are, on the other hand, less frequent, so manual deployments, while not ideal, are not as much of an obstacle here. Many Scrum teams choose to batch the product of their sprints into quarterly release plans, although others release monthly or, in extreme cases, per sprint.

Last but definitely not least, the single biggest factor to consider when choosing a methodology is the type of project or product at hand and what your goals are. If your goal is to frequently add new features to an existing product, Kanban is the way to go. Spotify is the most famous exponent of Kanban done right: a solid product with frequent releases adding new features.

Scrum on the other hand is very well suited to moving new scopes of work forward with product management being very closely integrated with the development team. Velocity in scrum projects is tracked against sprint planning estimations which gives the product management team improved project plan visibility.

Keep it simple

Whatever methodology you choose for your project, the most important thing to realise is that every organisation is different and what works for some will not necessarily work for all.

Remember that the defining characteristic of agile is flexibility.

Try something, see what works, see what doesn’t work, get feedback from the entire team and evolve. The key is to keep it simple. Do not over-complicate and you will reap the benefits.

8 Web Security Standards for every .Net MVC web application

Security is a beast! As a bare minimum, we need to ensure our applications meet industry best practices when released into the wild.

The vast majority of attacks seek to exploit common vulnerabilities so by implementing basic standards we can dramatically reduce risk.

Reviewing the eight practices below is a great start when assessing the security of your coding practices.

1. Always use an ORM

SQL injection, the injection of untrusted SQL into the database via input to the application, is still one of the most common vulnerabilities on the web today. The fact that this most basic type of attack is still prevalent points to the volume of poorly designed and implemented applications on the web.

Protecting yourself against SQL injection is easy and can be done in numerous ways, simply by ensuring that data inputted to your application is never directly injected at database level.

Parameterised stored procedures, with input validated against a whitelist, are an acceptable approach as long as the developer does not concatenate strings within the stored procedure.

However, the best way to protect your application is to use an ORM every time you query or write to the database. There are numerous options for .Net ORMs, with Entity Framework being the most popular choice, but it really is a personal preference; see the sketch below.
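As a quick illustration, with Entity Framework the LINQ query below is translated into parameterised SQL, so user input never becomes part of the SQL text (the context and entity are hypothetical):

using System.Linq;

public class UserRepository
{
    // "ShopContext" and "User" are hypothetical EF types for illustration
    public User FindUser(string username)
    {
        using (var context = new ShopContext())
        {
            // EF sends "username" as a SQL parameter, never concatenated into the query
            return context.Users.FirstOrDefault(u => u.Username == username);
        }
    }

    // By contrast, never build SQL like this - it is exactly what enables injection:
    // var sql = "SELECT * FROM Users WHERE Username = '" + username + "'";
}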

2. Using Get and Post attributes correctly

Get data is appended to the query string and publicly visible, while Post data is passed to the server as part of the request body.

Action methods should be decorated with the appropriate Get or Post attribute.

Get should be used when a call is made to a controller function that retrieves data to load a page.


[HttpGet]
public ActionResult Index(string username)
{
    // ... etc
}

Any controller function that accepts and processes data should have the Post attribute.

[HttpPost]
public ActionResult Delete(string username)
{
    // ... etc
}

3. Anti-Forgery Tokens

When the server is processing a request, it should validate the source of the request to confirm that the user is not the subject of a Cross-Site Request Forgery (CSRF) attack.

MVC provides anti-forgery token helpers that give you a means to detect and block CSRF attacks, by putting a hidden field in the form and then checking that the correct value was submitted on the server side.

To use this, first place an Html.AntiForgeryToken() helper in the form:


@using (Html.BeginForm())
{
    @Html.AntiForgeryToken()

    <!-- Rest of code here -->
}

This will output a token in the HTML and also give the visitor a corresponding cookie.

Next, we need to validate the incoming post at the target action method.

To do this we just need to add the ValidateAntiForgeryToken attribute to the action method.


[ValidateAntiForgeryToken]
public ViewResult SubmitPasswordChangeUpdate()
{
    // add code here
}

As a standard, all form posts should be validated with anti-forgery tokens; otherwise you have opened a vulnerability in your application.

4. SSL

SSL ensures that personal information sent in the request body is encrypted in transit.

Ensure your site has a valid SSL certificate, then force any pages that either retrieve or send personal information from http to https.

Use the .Net RequireHttps action filter by applying the attribute where required, as follows:

[RequireHttps]
public ActionResult SignIn()
{
    return View();
}

Or apply the filter globally to force all pages to https:

protected void Application_Start()
{
    GlobalFilters.Filters.Add(new RequireHttpsAttribute());

    // rest of code here
}

5. Disable Caching on Secure Pages

Any page that is forced onto HTTPS and contains sensitive data should disable caching. Add "Cache-Control: no-cache, no-store, must-revalidate" to the response headers.

This can be achieved in .Net by setting specific response values. Do this by creating a custom action filter attribute, as demonstrated here:

public class NoCacheAttribute : ActionFilterAttribute
{
    public override void OnResultExecuting(ResultExecutingContext filterContext)
    {
        if (filterContext == null)
            throw new ArgumentNullException(nameof(filterContext));

        // set the response headers that prevent caching of the page
        var cache = GetCache(filterContext);
        cache.SetExpires(DateTime.UtcNow.AddDays(-1));
        cache.SetValidUntilExpires(false);
        cache.SetRevalidation(HttpCacheRevalidation.AllCaches);
        cache.SetCacheability(HttpCacheability.NoCache);
        cache.SetNoStore();
        base.OnResultExecuting(filterContext);
    }

    protected virtual HttpCachePolicyBase GetCache(ResultExecutingContext filterContext)
    {
        return filterContext.HttpContext.Response.Cache;
    }
}

6. Password Storage

Ensure you have a strong password policy implemented. As an absolute minimum, your policy should insist on at least eight characters comprising lowercase, uppercase and numeric characters, ideally with special characters too.

You can do that by setting membership provider attributes in the web.config, for example configuring the provider to use a regular expression to validate passwords:

<membership>
  <providers>
    <add name="..." passwordStrengthRegularExpression="^(?=.*\d)(?=.*[a-z])(?=.*[A-Z]).{8,10}$" ... />
  </providers>
</membership>

Passwords stored in the database should be hashed using a salt. Better yet, move to a more robust hashing algorithm than .Net's standard SHA1; a sketch follows below.
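For instance, a salted PBKDF2 hash can be produced with the framework's Rfc2898DeriveBytes class; the iteration count and sizes here are illustrative choices:

using System;
using System.Security.Cryptography;

public static class PasswordHasher
{
    public static string HashPassword(string password)
    {
        // generate a random 16-byte salt and derive a 32-byte key over 10,000 iterations
        using (var pbkdf2 = new Rfc2898DeriveBytes(password, 16, 10000))
        {
            byte[] salt = pbkdf2.Salt;
            byte[] hash = pbkdf2.GetBytes(32);

            // store salt and hash together so the password can be verified later
            return Convert.ToBase64String(salt) + ":" + Convert.ToBase64String(hash);
        }
    }
}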

7. Error Handling and Custom Error Pages

It is imperative that we never give away information to a potential threat about the internal workings of our system.

If a visitor can generate a raw error message, such as the default ASP.Net stack trace page, it is technically poor and not a good user experience.

All unhandled exceptions should be redirected to a custom error page that contains only a friendly, generic message (i.e. a message that does not reveal any information about the internals or the type of error that occurred).

This can also be set in the web.config, as below:

<customErrors mode="RemoteOnly" defaultRedirect="Error">
  <error statusCode="404" redirect="Error" />
</customErrors>

8. Secure Cookies

These days we live a large amount of our lives online, and how do websites know who we are? Through cookies, of course!

If there is an XSS vulnerability on the site, a malicious user could inject some JavaScript that modifies cookies on page load or, worse still, sends the cookies to a remote server! If somebody has your browser cookies, they essentially have your identity for that website.

An HTTP-only cookie tells the browser that the cookie should only be accessed by the server. Any attempt to access the cookie from a client script will be forbidden.

You can add the following line to your web.config, which will set the HttpOnly flag on cookies for your application:

<httpCookies httpOnlyCookies="true" />

HttpOnly cookies don't make you immune from XSS attacks, but they do raise the bar.