There are plenty of good blog and forum posts about different ways to implement block-level donut-style output caching in Optimizely.
The approach below is the one I prefer. It keeps things simple while ensuring the cache is unique per visitor group and language. Using this method, the cache is also invalidated when a new version of a block is published.
Add the Output Cache attribute to the Block Controller index method.
VaryByParam: Setting this to “currentBlock” will cache based on the cache key returned by the ToString() method of the BlockData model. We add our logic there to make sure the cache is refreshed appropriately.
VaryByCustom: Used to extend output caching with custom requirements. You must handle logic for the custom string by overriding the GetVaryByCustomString method in the Global.asax. We use this to cache language specific versions of our block.
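Putting the two together, the attribute on a block controller’s Index action might look like this (the block type and cache duration here are illustrative, not from the original post):

```csharp
public class TeaserBlockController : BlockController<TeaserBlock>
{
    // Cache for an hour; vary by the block's ToString() cache key and by language.
    [OutputCache(Duration = 3600, VaryByParam = "currentBlock", VaryByCustom = "language")]
    public override ActionResult Index(TeaserBlock currentBlock)
    {
        return PartialView(currentBlock);
    }
}
```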
Override the Block model ToString() method to return unique cache keys for published versions of the block.
public override string ToString()
{
    if (this is IContent content && this is IChangeTrackable changeTrackable)
        return $"{content.ContentGuid}_{changeTrackable.Saved.Ticks}"; // unique key per published version
    return base.ToString();
}
Override the GetVaryByCustomString method in the Global.asax for the custom string passed into the OutputCache declaration. In our case this makes sure that language versions of the block are uniquely cached.
public override string GetVaryByCustomString(HttpContext context, string custom)
{
    if (custom == "language")
        return ContentLanguage.PreferredCulture.Name; // cache per language version
    return base.GetVaryByCustomString(context, custom);
}
Use the codebases discussed in this series to get started quickly.
Start with a simple experiment.
One experiment leads to many more
We picked a simple Experiment – Sort Order on Category pages. From that experiment came data on customer behaviour.
From that data came questions. A big finding was that customers seem to have quite different expectations across markets.
Let’s run more targeted experiments to find out more on this…
We should also run experiments on all key functionality as we continue to optimise conversions.
Strategic Process Changes
Well-designed experiments are a low-cost way of validating assumptions about how your customers interact with the current site, or how they might interact with new features under consideration.
Run simple experiments with MVP implementations to prove value before committing to full delivery of an expensive feature. Failed experiments are valuable too. This ensures budget is used effectively.
Also experiment to optimise existing user journeys and functionality to hit the sweet spot across all segments of users. Then keep measuring: it is guaranteed that sweet spots will move over time.
In this post we will get into the fun stuff and implement a simple experiment.
For one of our E-Commerce clients, there are differing theories on what Category Listing Page sorting algorithm will result in customers seeing products they are more likely to buy.
The default sorting for listing pages on this website was by Newest products. The theory is that customers who interact with this brand prefer to see the latest products.
A hypothesis is that more sales would convert if products were displayed in Best Seller order. Best Seller is a numeric value containing the accumulated purchases of a product’s variants over two years. It will favour older, well-known products.
Category listing pages perform well so any change to a Newest algorithm as default needs to be evidence driven. A perfect first Experiment!
Optimizely Experiment Configuration
With the Optimizely Commerce integrations discussed in the previous posts in place, we can proceed to configure our first experiment as follows:
Attributes and Segmentation
You can target particular segments of your audience with an experiment through setting “Audience Attributes”. These attributes are synced to Optimizely in the User Context of each Optimizely tracking request.
We decided to roll this experiment out across all markets and devices so set our Audience to the default “Everyone”.
Even though we are not using Audience Attributes to target a subset of our audience for the experiment, they are still very important. Optimizely Experiment Reporting allows you to filter by these attributes to see how experiment results might differ across segments of the audience. It’s very powerful and I’ll talk about this more in the fourth and final post in this series.
We decided to target 50% of Everyone, as this gives us a large enough sample size to accurately measure the results of the experiment. This means that 50% of all visitors will be included in the experiment. The other 50% will simply not be included, so the site will function on the current defaults.
These metrics are what allow you to define what success is for your experiment. Quite a few come out of the box with this implementation, so it’s worth checking out. The metrics for our experiment are:
Overall Revenue – The most important metric. This tells us what variation of the experiment is generating the most revenue.
Order Value – An interesting metric which will tell us if there is a difference in average order value across variations.
Product – In which variation of the experiment is a customer more likely to click through to view a product details page.
Here we define what variations of the experiment we want customers included in the Experiment audience to get. In our example we are running:
Best Sellers – 50%
Newest – 50%
You could easily add more supported sort orders to the experiment.
In the following commit I added an Experimentation Service class that can be used to easily call Optimizely to decide on Experiment Results. The Decide method integrates with the User Context so you don’t need to repeat this code from your Web Application layer.
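The commit itself isn’t reproduced here, but a minimal sketch of such a service, built on the Optimizely C# SDK’s CreateUserContext/Decide API, might look like the following (the user-id and attribute helpers are hypothetical names, not from the original code):

```csharp
public class ExperimentationService : IExperimentationService
{
    private readonly Optimizely _optimizely; // Optimizely Full Stack client instance

    public ExperimentationService(Optimizely optimizely) => _optimizely = optimizely;

    public OptimizelyDecision Decide(HttpContextBase httpContext, string flagKey)
    {
        // GetUserId/GetUserAttributes are assumed helpers that resolve a stable
        // visitor id (e.g. from a cookie) and the synced audience attributes.
        var userContext = _optimizely.CreateUserContext(
            GetUserId(httpContext), GetUserAttributes(httpContext));
        return userContext.Decide(flagKey);
    }
}
```

Centralising this means callers in the web application never touch the user context directly.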
In this snippet from the Category Listing Page controller, if no sortBy parameter is included as a query string parameter, Optimizely will decide if the customer session should be included in the Experiment.
If Optimizely decides to Enable the experiment, you can then extract the variation from the OptimizelyExperiment object.
With our Experimentation code layer in place, adding the experiment was as easy as these few lines of code.
public async Task<ActionResult> Index(ProductCategoryListingPage currentPage, string sortBy = null)
{
    if (sortBy == null)
    {
        // integrate with optimizely experiment
        var decision = _experimentationService.Decide(HttpContext, "plp_sort_order_experiment_experiment");

        // retrieve Experiment Variation
        sortBy = decision.Variables.GetValue<string>("default_sort_order");
    }

    // proceed to get results from Find
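From there, the sortBy value can drive the Find query’s sort order. A hedged sketch, where the variation key "best_sellers" and the field names BestSellerScore, Created and CategoryId are assumptions:

```csharp
var query = SearchClient.Instance.Search<BaseProduct>()
    .Filter(x => x.CategoryId.Match(currentPage.CategoryId));

// Map the experiment variation to a Find sort order.
query = sortBy == "best_sellers"
    ? query.OrderByDescending(x => x.BestSellerScore) // favours established products
    : query.OrderByDescending(x => x.Created);        // the current "Newest" default

var results = query.GetResult();
```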
Next and Final Post
We have now set up and configured our first experiment.
In the next and final post we will talk about Optimizely Reports, the insights gained from this particular experiment and what it means for the roadmap for this Commerce application going forward.
There are some interesting findings. Stay tuned! 🙂
To improve maintainability in your web application project, you can create a page view tracking attribute like the one below. The code could easily be extended to CMS page types:
public class ExperimentPageViewTrackingActionFilter : ActionFilterAttribute
{
    public override void OnResultExecuting(ResultExecutingContext filterContext)
    {
        var viewResult = filterContext.Result as ViewResult;
        if (viewResult == null)
            return;

        var experimentTrackingService = ServiceLocator.Current.GetInstance<ITrackingService>();
        var currentLanguage = ServiceLocator.Current.GetInstance<ICurrentLanguage>();
        var httpContext = new HttpContextWrapper(HttpContext.Current);

        if (viewResult.Model is ProductListingPageViewModel listingPageViewModel)
        {
            experimentTrackingService.TrackProductListingEvent(httpContext, listingPageViewModel.CurrentPage.Name, currentLanguage.GetCurrentLanguage());
        }

        // check for Commerce Page types
        if (viewResult.Model is ProductDetailPageViewModel)
        {
            // track product detail page views here
        }
    }
}
Tracking User Interactions
User interactions like adding to cart and creating an order can easily be tracked using the following code snippets.
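With the SDK’s user context in hand, tracking such an interaction is a one-liner. A minimal sketch; the event keys must match events defined in your Optimizely project, and "add_to_cart"/"order_created" here are assumptions:

```csharp
// Track an add-to-cart interaction against the current user context
userContext.TrackEvent("add_to_cart");

// Track order creation, attaching revenue (in cents) as an event tag
userContext.TrackEvent("order_created", new EventTags { { "revenue", 2500 } });
```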
Bot Filtering excludes tracking events triggered by bots from your reports. Although the capability to filter these requests is not included in the free Rollouts plan, it’s a good idea to extend your integration so it can easily be turned on in future.
Optimizely’s documentation specifies that you should pass the reserved $opt_user_agent attribute in the Track, Activate, Is Feature Enabled, and Get Enabled Features functions.
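Following that guidance, the user agent can be included in the attributes passed when the user context is created, so bot filtering can simply be switched on later. A sketch (userId stands in for however you resolve the visitor id):

```csharp
var attributes = new UserAttributes
{
    // Reserved attribute Optimizely's bot filtering looks for
    { "$opt_user_agent", HttpContext.Current.Request.UserAgent }
};
var userContext = _optimizely.CreateUserContext(userId, attributes);
```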
I’ve been excited to recently get the opportunity to work with the Optimizely Experimentation platform for the first time. My goal has been to analyse the platform technically and demonstrate to clients how experimentation is a game changer in proving what generates results.
Rollouts was the plan I started the journey on. At that stage we wanted to demonstrate the potential through running some simple experiments. The results would speak for themselves and open the door to move to a Full Stack plan.
Rollouts allows you to run an experiment on the free plan, though understandably you don’t get everything. However, what’s available for free is more than enough to start demonstrating results.
Be aware of the following free plan limitations:
The Rollouts limit is 1 active experiment at a time in each environment including Production. This is fair on a free plan!
Bot Filtering is not available in the free plan. This could skew the reporting metrics somewhat.
Experiments REST API
The Experiments REST API endpoint does not work in Rollouts, which may limit some clever integrations. However, if you’re going to start pushing the boundaries of a platform integration, you’re probably going to be investing in a paid plan!
His Foundation Experiments branch on GitHub can easily be added to your solution. It provides some really neat integrations out of the box such as with Optimizely Projects and Visitor Groups. His blog series will bring you through all this.
Related to my previous series on setting up Secure Content Approval Workflows, I had a discussion with someone who followed the steps but whose ERP Integration API was sporadically throwing the following error when updating content:
System.ComponentModel.DataAnnotations.ValidationException: Content is locked by ‘Epi Admin’ with lock identifier ‘contentapproval’
What Causes the Error?
This error is thrown when a version of content is in the middle of an approval workflow and the EPiServer Content Repository attempts to update the content. The error is saying that EPiServer will not allow this content to get updated while it is going through an approval workflow.
This makes perfect sense as if content is partially approved, we don’t really want to be updating it.
However the scenario explained to me by the developer and agreed with the client, was that this content should be updated regardless of the approval sequence.
Content Lock Evaluator
EPiServer’s IContentLockEvaluator determines if content is locked for editing under given circumstances. The default implementation, an internal class, is shown below (as decompiled).
internal class ContentApprovalLockEvaluator : IContentLockEvaluator
{
    public static string Identifier = "contentapproval";

    private readonly IContentVersionRepository _contentVersionRepository;
    private readonly IApprovalRepository _approvalRepository;

    public ContentApprovalLockEvaluator(IContentVersionRepository contentVersionRepository, IApprovalRepository approvalRepository)
    {
        _contentVersionRepository = contentVersionRepository;
        _approvalRepository = approvalRepository;
    }

    public ContentLock IsLocked(ContentReference contentLink)
    {
        var contentVersion = _contentVersionRepository.Load(contentLink);
        if (contentVersion == null)
            return null;

        if (contentVersion.Status != VersionStatus.AwaitingApproval)
            return null;

        var result = _approvalRepository.GetAsync(contentLink).Result;
        if (result == null)
            return null;

        return result.Status != ApprovalStatus.InReview
            ? null
            : new ContentLock(contentLink, result.StartedBy, Identifier, result.Started);
    }
}
Content in an approval workflow will return a lock status and block any updates.
The solution for this use case was to inject our custom implementation of the Content Lock Evaluator to allow content in an approval workflow to be updated. Our implementation would simply return null meaning the content is not locked.
public class CustomContentLockEvaluator : IContentLockEvaluator
{
    // Never report content as locked, so updates are allowed mid-workflow
    public ContentLock IsLocked(ContentReference contentLink) => null;
}
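Injecting the custom implementation so it replaces the default evaluator can be done in a configurable module; a sketch (the module name is assumed):

```csharp
[InitializableModule]
public class ContentLockModule : IConfigurableModule
{
    public void ConfigureContainer(ServiceConfigurationContext context)
    {
        // Override the default ContentApprovalLockEvaluator registration
        context.Services.AddSingleton<IContentLockEvaluator, CustomContentLockEvaluator>();
    }

    public void Initialize(InitializationEngine context) { }
    public void Uninitialize(InitializationEngine context) { }
}
```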
Proceed with caution. This was the solution in a very specific integration where the risks of removing the Content Lock logic were considered and understood by all.
Depending on your situation, consider maintaining as much of the logic in the default EPiServer implementation as possible.
First set up the necessary groups in the Optimizely CMS Administrative interface. These groups will later be used to add users to the Content Approval Workflow.
Create the following groups to match our virtual roles.
Then create three further roles which we will use to configure our language specific content approval workflows:
Catalog Access Rights
Next assign catalog access rights appropriately to our user groups.
Navigate to the Catalog in Commerce and click the “Manage” button beside “Visible to Everyone”.
Grant only reviewers access to change catalog content.
All Content Reviewers will be added to the ContentReviewers group. This group gives them the appropriate catalog permissions.
We will then add a Content Reviewer to the appropriate language reviewer group. The language reviewer group will be used to configure the language specific Approval Sequence workflow. So for example the French reviewer will be added to both ContentReviewers and FrenchReviewers.
Content Approval Workflow Configuration
Finally we can use the groups created to configure the language specific content approval workflow required. The example below has a Content Reviewer group assigned to each language.
Members of these groups will be notified when a version of the content is assigned to them for review. They will then be able to either approve the version directly, or:
Decline -> Edit previous version -> Approve
Importantly a content reviewer will not have access to publish content or override an approval sequence. Any content that is not directly assigned via an approval workflow sequence will be read only.
On approval, members of the Content Publishers group will be notified. They can then do a final review across all languages before marking as Ready for Publish.
Optimizely Content Approvals are a mature and highly configurable feature. However, every project is different, and in designing an optimal workflow for our customers it is important to plan accordingly: ensure a clean user experience while adhering to security principles when dealing with access rights.
The principle of least privilege (PoLP) refers to an information security concept in which a user is given the minimum levels of access – or permissions – needed to perform his/her job functions.
This is a key principle that we will take forward in designing our workflow.
Planning Content Approval Workflows
The key to planning an Approval workflow is defining the types of user roles who will be involved in a sequence.
For each user role you define, consider the “Principle of least privilege” in granting permissions to your Optimizely system. We only want to give each role the access that is absolutely necessary for the approval workflow to function.
Consider the following for each user role you are planning.
Should members of this user role have access to CMS or Commerce content or both?
Will the user role be responsible for approving or publishing content or both?
Can users in a role override the approval sequence to publish content that has not gone through its full workflow?
In the rest of this series we’ll work through setting up an optimal workflow to meet a requirement.
The Approval Workflow is to manage Commerce Content only
Products are added programmatically through an API integration and should enter the approval sequence automatically
Content to be approved only by designated language specific approvers (English, Spanish, French). Spanish approvers can only review Spanish content.
The approvers have the ability to edit content during the review process
Content in all languages is published by a user with publishing permissions
Our User Roles
Given this requirement we can define 2 distinct roles.
Content Approver
Edits and approves content assigned through a workflow
Cannot publish content
Content Publisher
Publishes content in any language once assigned in the workflow after approval by a Content Approver
Does not approve content
Can, however, override an approval sequence for a product to force publishing even if it has not yet been approved by a Content Approver
In the next post we will proceed to configure Optimizely Access Rights, User Groups, Roles and finally the Approval Sequence to meet our requirement while adhering to the principles outlined at the beginning of this post.
When a customer performs a search and subsequently clicks a search result, your website should be informing Find of this event.
Why is Click Tracking important?
There are two fundamental reasons:
Quality of search results returned from Find
Using statistics for continuous search optimisation
Find Relevancy Scores
The EPiServer Find algorithm will assign a relevancy score to each search result it deems to match the words in the search query. Search results are then by default ordered by the relevancy score.
Some of the factors taken into account in the EPi Find algorithm are as follows:
Boost Weighting: the weighting you assign to fields in your search query. For example, your query may stipulate that matches in the “Title” field should be twice as relevant as the same matches in the “Description” field
Term Frequency: number of occurrences of search term words within a result
Inverse Frequency: a measurement of how frequently words in a search term occur across the entire set of results. Words in the search query which occur in many potential results have a lower impact on the relevancy score than words that are rare across the result set.
Number of keyword matches: search results which match all of the keywords in a search term will rank higher than those that match a subset
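As a rough illustration of how these factors interact, here is classic TF-IDF-style scoring in miniature; this is not Find’s actual proprietary formula, just the standard shape of such calculations:

```csharp
// Illustrative only: boost * term frequency * inverse document frequency.
// Rare terms (low docsWithTerm) and boosted fields push the score up.
double Score(double boost, int termFrequency, int docsWithTerm, int totalDocs) =>
    boost * termFrequency * Math.Log((double)totalDocs / (1 + docsWithTerm));
```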
However, a major part of how the EPiServer Find algorithm assigns relevancy is the intelligence the platform gathers on click-throughs. Search results that customers frequently click for a search term will be assigned higher relevancy scores.
Find needs to know what results people are clicking.
Your marketing team should be using the Find Search Statistics interface to continually optimise results using Best Bets, Related Queries, Synonyms and Autocomplete.
The cornerstone of this process is solid reporting.
If Click Tracking is not working, all search results will show a click-through rate of 0%, depriving your marketing team of very valuable information.
Does Click Tracking not work out of the box?
If you’re using EPiServer Find Unified Search – yes. Just enable tracking on the search query. As long as you are injecting the EPiServer client resources in your root template (which you probably are!), the injected JS handles the rest.
If you’re not using Unified Search – No, you should read on…
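For reference, enabling tracking on a Unified Search query is a single call to the Track() extension from EPiServer.Find.Statistics (the query text here is just an example):

```csharp
var results = SearchClient.Instance.UnifiedSearchFor("running shoes")
    .Track() // reports the query to Find's statistics so click-throughs can be matched
    .GetResult();
```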
How do I implement custom click tracking?
This post from Henrik Fransas gives an excellent explanation of adding custom click tracking:
The only thing I will add is that sometimes we need to use the Find GetResult method to query the index. GetResult supports returning extended data on the content type, which we can set up in an initialisation module as follows, with the MyCustomExtendedProperty() method being a simple static extension of the BaseProduct class.
public class SearchIndexInitialization : IInitializableModule
{
    public void Initialize(InitializationEngine context)
    {
        SearchClient.Instance.Conventions.ForInstancesOf<BaseProduct>()
            .IncludeField(x => x.MyCustomExtendedProperty());
    }

    public void Uninitialize(InitializationEngine context) { /* Add uninitialization logic */ }
}
Using GetResult the following method will add the HitId and HitType properties to your search results view model. I hope it comes in useful for someone!
public List<T> AddStatisticsAndRelevancyToSearchResults<T>(SearchResults<T> searchResult) where T : ISearchResultViewModel
{
    var searchResultsWithRelevancyScore = searchResult.Hits.Select(x =>
    {
        try
        {
            var result = _contentLoader.Get<IContent>(x.Document.ContentLink);
            x.Document.HitId = SearchClient.Instance.Conventions.IdConvention.GetId(result);
            x.Document.HitType = SearchClient.Instance.Conventions.TypeNameConvention.GetTypeName(result.GetType());
        }
        catch (Exception e) { _logger.Error("Error thrown retrieving hit count from Find", e); }
        x.Document.RelevancyScore = x.Score;
        return x.Document;
    }).ToList();
    return searchResultsWithRelevancyScore;
}