Manually Importing Products from Commerce Cloud to Optimizely Data Platform (ODP)

ODP has a turnkey integration app called the Commerce Cloud Connector, which handles the synchronisation of Contact, Order and Product data between your Optimizely Commerce Cloud instance and ODP.

However, in Europe the Commerce Cloud Connector is not due to be released until the end of June 2022.

In Europe, due to GDPR compliance, you may never be able to turn on the connector for customer Contact or Order data. I'll cover that in a separate blog post, but you will need to make sure that the relevant cookie policies have been accepted by the customer.

While waiting for the connector's release in Europe, a manual product data export and import to ODP is a low-cost interim solution. The key is to make sure that the schema you choose when saving the products in ODP will be maintained when you turn on the connector's product import.

Key Product Data Schema Import Considerations

ODP currently supports a single currency, which the UI displays as USD. Support for additional currencies is on the product roadmap, but for the moment you will have to choose whatever you determine to be the default currency of your website for the product export.

ODP is flexible with regard to your product data structure. However, if you intend to use the connector, it's best to follow a Product -> Variant model matching your Commerce catalog so that the consistency of your data is maintained.

Exporting Product data

Assuming your default market is GB, the following script will export your products from the catalog priced in GBP (£).

SELECT CN.[Name] AS category,
       CE.[Name] AS [name],
       CE.Code AS sku,
       MIN(PD.UnitPrice) AS price,
       CASE
              WHEN CEChild.Code IS NOT NULL THEN CEChild.Code
              ELSE CE.Code
       END AS parent_product_id
FROM [dbo].[CatalogEntry] CE
       INNER JOIN NodeEntryRelation NR ON NR.CatalogEntryId = CE.CatalogEntryId
       INNER JOIN CatalogNode CN ON CN.CatalogNodeId = NR.CatalogNodeId
       LEFT OUTER JOIN [dbo].[CatalogEntryRelation] ER ON ER.ChildEntryId = CE.CatalogEntryId
       LEFT OUTER JOIN [dbo].[CatalogEntry] CEParent ON ER.ParentEntryId = CEParent.CatalogEntryId
       LEFT OUTER JOIN [dbo].[CatalogEntry] CEChild ON ER.ParentEntryId = CEChild.CatalogEntryId
       LEFT OUTER JOIN [PriceDetail] PD ON PD.CatalogEntryCode = CE.Code
WHERE (PD.MarketId = 'GB' OR PD.MarketId IS NULL)
GROUP BY CE.CatalogEntryId, NR.CatalogNodeId, CN.[Name], CE.ClassTypeId, CE.Code, CEParent.Code, CE.MetaClassId, CEParent.MetaClassId, CE.Name, CEParent.Name, CEParent.CatalogEntryId, CEChild.Code, CEChild.MetaClassId
ORDER BY CE.CatalogEntryId

The output of the script can be copied to a CSV file for import to ODP.

Row 1 of the above export is the product, with its variants linked to it by parent_product_id.

Price in this example is the GBP price for the GB market. You can update the WHERE clause to isolate the prices for your default market of choice. Support for multi-market and multi-currency pricing is on the roadmap for ODP, so I look forward to hearing more on that in the coming months.

Importing to ODP

Copy the output of this script to Excel to convert to a CSV file before doing the following:

  • Do a Find/Replace to convert any “NULL” values in the price column to empty
  • Update price values to 2 decimal places

Then name the file odp_products.csv and drop it into the ODP Integration CSV Upload interface.
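For reference, the upload file will have one row per catalog entry, with the columns produced by the export script. The values below are purely hypothetical examples:

```csv
category,name,sku,price,parent_product_id
Shirts,Classic Oxford Shirt,SHIRT-001,29.99,SHIRT-001
Shirts,Classic Oxford Shirt Small,SHIRT-001_S,29.99,SHIRT-001
Shirts,Classic Oxford Shirt Medium,SHIRT-001_M,29.99,SHIRT-001
```

Note that the product row and its variant rows share the same parent_product_id, which is how ODP links the variants to the product.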


While you’re waiting for the ODP Commerce Cloud connector to be released in Europe, this simple Export/Import process will help you migrate product data into ODP in a way that will be consistent with the connector schema.

Experimenting with your Keyword Search Algorithm – 2 of 2

In post 1 in the series we set up a CMS manageable Search Algorithm Settings Page and plugged that into our Search & Navigation query.

We’ll now build on that to create an Experiment that will help us determine the optimal algorithm configuration.

Extend the Search Algorithm Settings Page

Firstly we will extend our Algorithm Settings Page with the following properties

        [CultureSpecific]
        [Display(Name = "Is Default", Description = "Indicates that this content represents the default Search Algorithm configuration", GroupName = TabNames.Experimentation, Order = 200)]
        public virtual bool IsDefault { get; set; }

        [CultureSpecific]
        [Display(Name = "Experiment Key", Description = "Matches the variation key of the Optimizely Experiment", GroupName = TabNames.Experimentation, Order = 210)]
        public virtual string ExperimentKey { get; set; }

When we set up our experiment, these culture-specific properties will allow us to set a default configuration and experiment with algorithms across language sites.

Setting up the Experiment

In my previous Optimizely Experiments series, I talked about setting up and running an Experiment, which covers the basics of the Optimizely Full Stack product. Rather than revisit that here, I'm going to assume the foundations and concepts explained in that series are understood.

Experiment Event Metrics

Log into your Optimizely Full Stack account to begin configuration.

Define a new metric for this Experiment called "Keyword Search Click Thru". This event will be registered when a customer clicks a search result from a keyword search. Higher numbers of these events will indicate that the algorithm is returning relevant results to our customers.

If you are using Optimizely Search and Navigation typed search, you will most likely have Click Tracking implemented in your solution. Read more on that here:

Assuming you are using Optimizely Search and Navigation you could push the event to the Experiments API at the same point that you register the search click tracking.
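As a sketch of that idea: the click-tracking endpoint can push the metric event immediately after registering the Search & Navigation click. The `ITrackingService` abstraction follows this series' Experiments project, but the `TrackEvent` method name and the `keyword_search_click_thru` event key are assumptions for illustration:

```csharp
// Sketch only: pushes the Experiment metric at the same point that the
// Search & Navigation click statistic is registered.
// ITrackingService.TrackEvent and the event key are hypothetical names.
public class SearchClickController : Controller
{
    private readonly ITrackingService _experimentTrackingService;

    public SearchClickController(ITrackingService experimentTrackingService)
    {
        _experimentTrackingService = experimentTrackingService;
    }

    public ActionResult TrackClick(string hitId, string query)
    {
        // 1. Register the Search & Navigation click tracking as normal...

        // 2. ...then push the same interaction to Optimizely as a metric event
        _experimentTrackingService.TrackEvent(
            new HttpContextWrapper(System.Web.HttpContext.Current),
            "keyword_search_click_thru");

        return new EmptyResult();
    }
}
```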

Experiment Set Up

Let's now publish variations of our Search Algorithm Settings page in the CMS, setting the Experiment Key to a unique value for each instance of the page.

Then log into Optimizely Full Stack and configure your Experiment Variations noting that we will use the “experiment_key” variable to map to the settings page instance.

Finally, we can configure our experiment to AB test instances of our Algorithm Settings page.

Executing the Experiment

To retrieve the Experiment Key in code using the Experiment Service detailed in my previous blog series, you simply need a few lines of code:

    var decision = _experimentationService.Decide(HttpContext, "keyword_search_algorithm_experiment");
    if (decision != null && decision.Enabled)
    {
        return _experimentationService.GetFeatureVariableString(HttpContext, decision.FlagKey, "experiment_key");
    }

Now that you have the key, you can use this to load the appropriate Algorithm Settings page and AB test the effectiveness of your search configuration.
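Resolving the settings page from the key might look like the sketch below, assuming an injected `IContentLoader` and the `SearchAlgorithmSettingsPage` content type from post 1. Scanning descendants on every request is shown for brevity only; in practice you would cache the lookup:

```csharp
// Sketch: load the Algorithm Settings page matching the experiment key,
// falling back to the instance flagged as the default configuration.
public SearchAlgorithmSettingsPage GetSettings(string experimentKey)
{
    var settingsPages = _contentLoader
        .GetDescendents(ContentReference.StartPage)
        .Select(reference => _contentLoader.Get<IContent>(reference))
        .OfType<SearchAlgorithmSettingsPage>()
        .ToList();

    // prefer the variation mapped to the experiment key...
    return settingsPages.FirstOrDefault(p => p.ExperimentKey == experimentKey)
        // ...otherwise fall back to the default algorithm configuration
        ?? settingsPages.FirstOrDefault(p => p.IsDefault);
}
```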


Search algorithms are tricky, and the optimal configuration for each Commerce application depends on too many moving parts to predict: catalog size, data, user intent, market-specific preferences and so on.

Using Experimentation to measure your algorithm's success, tweak, learn and measure again is the best possible way to move towards the sweet spot and grow your conversions.

Experimenting with your Keyword Search Algorithm – 1 of 2

This series will discuss an approach to experimenting with your Keyword Search Algorithm using the Optimizely Search & Navigation, Commerce Cloud and Experimentation products.

Search & Navigation

At a basic level the following happens on a keyword search:

  • Customer attempts a keyword search on your Commerce Cloud website
  • Commerce Cloud code builds a Search & Navigation query and posts to API
  • Search & Navigation executes the query on its Elasticsearch index, taking into account factors such as statistics gathered through Click Tracking, to assign a relevancy score to the products determined to be the best search results

Assuming click tracking is already implemented, the variable we can tweak to optimise search performance is the search query sent to Search & Navigation.

Search Query Optimisation

To CMS manage the query, we create an Algorithm Settings Page that can integrate with the search query sent to the API.

Algorithm Settings Page

The following is an example of a Search Algorithm Settings page PoC from my Github:

I’ve divided the content type into three tabs representing important aspects of the query.

1 Property Weightings

Property weightings specify the perceived value we assign to a keyword match in various content properties.

For example, we may specify in the query that a keyword match in a Product Title should be twice as relevant as a match in the Description. This will be a key factor when Search & Navigation assigns a relevancy score to search results for that request.

2 Boosting

Your query can instruct Search & Navigation to further boost the relevancy scores of search results that meet pre-defined conditions. For example, a query may include a relevancy boost for products that are marked in the CMS as "Popular".

3 Query Optimisation

I am categorising query optimisations as anything else that may improve the quality of search results such as Synonyms, Fuzzy Search, And/Or operators etc.

There is a very useful Labs Github repository which contains a number of extensions that can be integrated with your Search & Navigation solution to improve search performance.

Integrate Settings Page with Search Query

We can integrate our Search Algorithm Settings Page into the query that we build in code.

var query = _findClient.Search<GenericProduct>()
                .For(keywords) // the customer's search term
                .InField(x => x.DisplayName, searchAlgorithmSettings.DisplayNameWeighting)
                .InField(x => x.Brand, searchAlgorithmSettings.BrandWeighting)
                .InField(x => x.Description, searchAlgorithmSettings.DescriptionWeighting);

            if (searchAlgorithmSettings.EnableSynonymsImproved)
                query = query.UsingSynonymsImproved();
            else
                query = query.UsingSynonyms();

            if (searchAlgorithmSettings.EnableDisplayNameRelevanceImproved)
                query = query.UsingRelevanceImproved(x => x.DisplayName);

            if (searchAlgorithmSettings.EnableDisplayNameFuzzyMatch)
                query = query.FuzzyMatch(x => x.DisplayName);

            if (searchAlgorithmSettings.EnableDisplayNameWildcardMatch)
                query = query.WildcardMatch(x => x.DisplayName);

            if (searchAlgorithmSettings.PopularProductBoostingValue > 0)
                query = query.BoostMatching(x => x.PopularProduct.Match(true), searchAlgorithmSettings.PopularProductBoostingValue);

Next Post

In the next post, we’ll create an Optimizely Experiment to AB test the success of search query variations.

Foundations of “Good SEO” with Optimizely

Maximising traffic acquisition through good SEO practices is a key strategic goal of any commercial website.

Recently I’ve been working with some brands on SEO optimisation from a technical perspective and have spent some time trying to map out what exactly “Good SEO” means.

So what is Good SEO?

That’s a tough question!

In my opinion Good SEO goes far beyond any particular discipline. It means great health and performance across a number of overlapping broad areas from technical to the quality of content and beyond. To execute effectively in each area entails strong process, execution and ongoing health checks.

The broad areas I look at SEO through are as follows:

  1. Technical Best Practices
  2. Crawlability / Indexation optimisations so crawlers can efficiently discover the entire site
  3. On Page HTML optimisations so crawlers can understand the intent and content of your page and display relevant information in Search Engine Results Pages
  4. International domain strategy linking content across languages and markets
  5. Off Page brand health contributing to a brands authority

I attempted to map out the SEO considerations in these areas in the diagram below. For diagramming simplicity, each point is allocated to a particular area, but I acknowledge that many of the points overlap multiple areas. Please feel free to leave a comment below if you see something I have omitted.

SEO with Optimizely

Optimizely CMS and Commerce Cloud provide the framework to enable really strong SEO practices.

By practices I mean both high quality performant technical implementations coupled with the tooling to enable good content planning & management processes across your organisation.

There are great community add-ons that are extendable to provide really strong SEO foundations.

Site Health Checks

Websites we work with often have tens of thousands of pages or more. A stray piece of code released to production or a content management mistake can cause real SEO issues. These issues will often be invisible… until they are discovered. So the challenge is discovering issues before they cause a commercial impact.

I've worked with and recommend SEMRush as a tool. You can automate SEO Site Audits to execute regularly and email metric-driven reports detailing areas like HTML markup errors, international SEO errors, crawlability, site performance and a number of other metrics. It also plugs into Google Search Console and Google Analytics so you can view current performance within the reports.

Regularly tracking these metrics means you can identify issues early as you improve your site health scores over time.

Next Post

In this post I’m simply sharing details about something I have been working on in the hope that someone will find it useful.

This is a really big topic, so I'm happy to extend it with posts on more specific areas of good technical practice. If that would be useful, let me know 🙂

Optimizely Block Output Caching

There are lots of good quality Blog and Forum posts about differing ways to implement Block level Donut style Output caching in Optimizely.

The below is the approach I prefer to use. It keeps things simple while making sure that the cache is unique per visitor group and language. Using this method, the cache will also be invalidated when a new version of a block is published.

Block controller

Add the Output Cache attribute to the Block Controller index method.

    [OutputCache(VaryByParam = "currentBlock", VaryByCustom = "language", Duration = 600)]
    public ActionResult Index(ProductListBlock currentBlock)

VaryByParam: Setting this to "currentBlock" will cache based on the ToString() method of the BlockData model, which returns the cache key. We add our logic there to make sure the cache is refreshed appropriately.

VaryByCustom: Used to extend output caching with custom requirements. You must handle logic for the custom string by overriding the GetVaryByCustomString method in the Global.asax. We use this to cache language specific versions of our block.

Block Model

Override the Block model ToString() method to return unique cache keys for published versions of the block.

    public override string ToString()
    {
        var content = this as IContent;
        var changeTrackable = this as IChangeTrackable;

        if (content != null && changeTrackable != null)
            return $"{content.ContentGuid}-{changeTrackable.Saved.Ticks}-{EPiServer.Editor.PageEditing.PageIsInEditMode}";

        return base.ToString();
    }

Override the GetVaryByCustomString method in the Global.asax for the custom string passed into the OutputCache declaration. In our case this makes sure that language versions of the block are uniquely cached.

    public override string GetVaryByCustomString(HttpContext context, string custom)
    {
        if (custom == "language")
            return context.Request.Cookies["Language"]?.Value;

        return base.GetVaryByCustomString(context, custom);
    }

Personalising Headless Content across Channels

Personalising Headless CMS content across channels can be achieved with Optimizely Visitor Groups when a property in the http request can identify the source.

For example a mobile app request sent to a Headless CMS could include the following Http Header key value: “mobile-app-request”:true

Request Header Visitor Group Criterion

A visitor group criterion is required that can match a request when it sees a header with a specified value.

We have pushed the following GitHub repo containing source code for a Request Header Visitor Group Criterion, packaged so that it can be added to a NuGet feed.
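The shape of such a criterion, assuming the standard EPiServer Personalization API, is sketched below. The model and class names here are illustrative; refer to the linked repo for the actual implementation:

```csharp
// Sketch of a header-matching visitor group criterion.
// Class and property names are hypothetical examples.
public class RequestHeaderModel : CriterionModelBase
{
    public string HeaderName { get; set; }
    public string HeaderValue { get; set; }

    public override ICriterionModel Copy()
    {
        return base.ShallowCopy();
    }
}

[VisitorGroupCriterion(Category = "Technical", DisplayName = "Request Header")]
public class RequestHeaderCriterion : CriterionBase<RequestHeaderModel>
{
    public override bool IsMatch(IPrincipal principal, HttpContextBase httpContext)
    {
        // Match when the request carries the configured header value,
        // e.g. "mobile-app-request": "true" sent by a mobile app
        return httpContext.Request.Headers[Model.HeaderName] == Model.HeaderValue;
    }
}
```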

Once installed, CMS Editors have the ability to personalise content for channels.

My Experience with Optimizely Fullstack (via Rollouts) – 4 of 4

In this final post of the series, we will investigate the result of a real world experiment and discuss what the results meant to the roadmap of this project.

So far we have talked about working with Optimizely Fullstack from a development perspective and integrating with a Commerce application. Then we configured and coded our first experiment in a few lines of code.

A Real World Experiment

Just like the Experiment we have talked through setting up in this series, we ran an experiment for an E-Commerce website to determine the optimal Sort Order for category pages.

This website's category pages defaulted to displaying Newest products first. The theory was that customers interact with this website to purchase new-release products.

However there was a hypothesis that:

If customers view products on category pages with Best Selling products first, the following success criteria will be proven:

  • Customers will click to view more products
  • Average Order Value will increase
  • Revenue will increase

Real World Results

With Event Tracking enabled as discussed in the previous posts in the series, Optimizely provided us with the following results on each of our success criteria.

Overall Results

We rolled the experiment out to just 10% of total sessions. Even with a low percentage we began to see interesting trends.

At this early point in the Experiment, Customers who were served Best Sellers by default:

  • Were equally likely to click through to view a product details page for either New or Best Selling products; no notable difference
  • Had an Average Order Value 10% behind New Products
  • Were 19% behind on generating revenue

Of course you need many more sessions, and to run the experiment over a longer period of time, for the results to achieve significance, but these early findings were interesting.

It showed the original hypothesis for the Experiment was likely wrong. The Experiment was failing – and that in itself was a game changer.

What else did we find out?

Using Optimizely Reports, you can refine your Results by any Audience Attribute.

Refining the reports per language and/or market showed that some markets had a huge preference for New products. It didn't seem to matter for other markets, where the difference was insignificant.

Another game changer: we need to refine and improve our algorithms to best serve products to users in different markets.

And how do we do that?

  1. We are going to create our hypothesis for what customers prefer in each market based on analytics and experimentation results
  2. We are going to run experiments for each market as we work through a program of experimentation, analysis and continued optimisation.

And this is just for the Category page sort order!


Starting is Low Cost & Easy!

Sign up for a free Rollouts account.

Use the code bases discussed in this series to get started really quickly.

Start with a simple experiment.

1 Experiment leads to many more

We picked a simple Experiment – Sort Order on Category pages. From that experiment came data on customer behaviour.

From that data came questions. A big finding was that customers seem to have quite different expectations across markets.

Let’s run more targeted experiments to find out more on this…

We should also run experiments on all key functionality as we continue to optimise conversions.

Strategic Process Changes

Well-designed Experiments are a low-cost way of validating assumptions about how your customers interact with the current site, or may interact with new features being considered.

Run simple Experiments with MVP implementations to prove value before proceeding with full delivery of an expensive feature. Failed experiments are valuable. This ensures budget is used effectively.

Also Experiment to optimise existing user journeys and functionality to hit the sweet spot across all segments of users. And then continue to measure. It is guaranteed that sweet spots will move over time.

My Experience with Optimizely Fullstack (via Rollouts) – 3 of 4

During this series of posts we have talked about working with Optimizely Fullstack from a development perspective and integrating with a Commerce application.

In this post we will get into the fun stuff and implement a simple experiment.

The Experiment

For one of our E-Commerce clients, there are differing theories on which Category Listing Page sorting algorithm will result in customers seeing products they are more likely to buy.

The default sorting for listing pages on this website was by Newest products. The theory is that customers who interact with this brand prefer to see the latest products.

A hypothesis is that more sales would convert if products were displayed in order of Best Seller. Best Seller is a numeric value containing the accumulated purchases of a product's variants over two years. It will favour older and well-known products.

Category listing pages perform well, so any change to Newest as the default algorithm needs to be evidence driven. A perfect first Experiment!

Optimizely Experiment Configuration

With the Optimizely Commerce integrations discussed in the previous posts in place, we can proceed to configure our first experiment as follows:


Attributes and Segmentation

You can target particular segments of your audience with an experiment through setting “Audience Attributes”. These attributes are synced to Optimizely in the User Context of each Optimizely tracking request.

A number of audience attributes were already integrated in David's initial Experiments project, which I extended; they can be viewed in the User Retriever's GetUserAttributes method.

We decided to roll this experiment out across all markets and devices so set our Audience to the default “Everyone”.

Even though we are not using Audience Attributes to target a subset of our audience for the experiment, they are still very important. Optimizely Experiment Reporting allows you to filter by these attributes to see how experiment results might differ across segments of the audience. It’s very powerful and I’ll talk about this more in the fourth and final post in this series.

Audience Percentage

We decided to target 50% of Everyone, as this will give us a large enough sample size to accurately measure the results of the experiment. This means that 50% of all visitors will be included in the experiment. The other 50% will simply not be included, so the site will function on the current defaults.


Metrics

Metrics are events which are integrated in the Events Tracking Service covered in the previous post.

These metrics allow you to define what success means for your experiment. Quite a few come out of the box with this implementation, so it's worth checking out. The metrics for our experiment are:

  • Overall Revenue – The most important metric. This tells us which variation of the experiment is generating the most revenue.
  • Order Value – An interesting metric which will tell us if there is a difference in average order value across variations.
  • Product – In which variation of the experiment a customer is more likely to click through to view a product details page.


Variations

Here we define which variations of the experiment customers included in the Experiment audience will get. In our example we are running:

Best Sellers – 50%

Newest – 50%

You could easily add more supported sort orders to the experiment.

Experiment Code

In the following commit I added an Experimentation Service class that can be used to easily call Optimizely for Experiment decisions. The Decide method integrates with the User Context so you don't need to repeat this code in your Web Application layer.

In this snippet from the Category Listing Page controller, if no sortBy parameter is included as a query string parameter, Optimizely will decide if the customer session should be included in the Experiment.

If Optimizely decides to Enable the experiment, you can then extract the variation from the OptimizelyExperiment object.

With our Experimentation code layer in place, adding the experiment was as easy as these few lines of code.

public async Task<ActionResult> Index(ProductCategoryListingPage currentPage, string sortBy = null)
{
    if (sortBy == null)
    {
        // integrate with optimizely experiment
        var decision = _experimentationService.Decide(HttpContext, "plp_sort_order_experiment_experiment");
        if (decision.Enabled)
        {
            // retrieve Experiment Variation
            sortBy = decision.Variables.GetValue<string>("default_sort_order");
        }
    }

    // proceed to get results from Find
}

Next and Final Post

We now have set up and configured our first experiment.

In the next and final post we will talk about Optimizely Reports, the insights gained from this particular experiment and what it means for the roadmap for this Commerce application going forward.

There are some interesting findings. Stay tuned! 🙂

My Experience with Optimizely Fullstack (via Rollouts) – 2 of 4

In post 1 of 4 we talked about getting started with Optimizely Experimentations on Commerce.

I have taken David’s original experiments project, pushed a version to my GitHub repository and added some updates that may be helpful for some others working with Commerce applications.

In this post I’ll discuss some of the specifics around this.

The Github repository is available:

johnnymullaney/Optimizely-Experiments (

Experiment Tracking Service

The foundation experiments code base does a really nice job of minimising library dependencies by using EPiServer Tracking to intercept the payload and send tracking events to Optimizely.

My situation was a little different: Visitor Intelligence is obsolete since the Zaius CDP acquisition.

I decided to add the EPiServer.Commerce.Core library as a dependency to the project. This allowed me to write a Tracking Service class that could be easily called from the main application.

Added Tracking Service to handle events called directly from the web … · johnnymullaney/Optimizely-Experiments@fd19b94 (

Page View Tracking

To improve maintainability in your web application project, you can create a page view tracking attribute like the one below. The code could easily be extended to cover CMS page types:

    public class ExperimentPageViewTrackingActionFilter : ActionFilterAttribute
    {
        public override void OnResultExecuting(ResultExecutingContext filterContext)
        {
            var viewResult = filterContext.Result as ViewResult;
            if (viewResult == null)
                return;

            var experimentTrackingService = ServiceLocator.Current.GetInstance<ITrackingService>();
            var currentLanguage = ServiceLocator.Current.GetInstance<ICurrentLanguage>();
            var httpContext = new HttpContextWrapper(HttpContext.Current);

            // check for Commerce page types
            if (viewResult.Model is ProductListingPageViewModel listingPageViewModel)
                experimentTrackingService.TrackProductListingEvent(httpContext, listingPageViewModel.CurrentPage.Name, currentLanguage.GetCurrentLanguage());

            if (viewResult.Model is ProductDetailPageViewModel)
                experimentTrackingService.TrackProductPageView(httpContext, currentLanguage.GetCurrentLanguage());
        }
    }

Tracking User Interactions

User interactions like Adding to Cart and Creating an Order can easily be tracked using the following code snippets:

_experimentTrackingService.TrackOrderEvent(HttpContext, cart, _currentLanguage);
_experimentTrackingService.TrackBasketEvent(HttpContext, lineItem, _currentCurrency, _currentLanguage);

Bot Filtering

Bot Filtering will exclude tracking events triggered by bots from your reports. Although the capability to filter these requests is not included in the free Rollouts plan, it's good to extend your integration so it can easily be turned on in future.

Optimizely’s documentation specifies that you should pass the reserved $opt_user_agent attribute in the Track, Activate, Is Feature Enabled, and Get Enabled Features functions.

How to filter out bots – Optimizely Full Stack

To enable bot filtering capabilities, I made the following update to the User Retriever class, which generates the user context sent on each tracking request:

Added user agent reserved attribute https://docs.developers.optimizel… · johnnymullaney/Optimizely-Experiments@b2e5df7 (
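In essence, the change adds the reserved $opt_user_agent attribute when building the user attributes, so Optimizely can identify bot traffic. This sketch assumes a GetUserAttributes-style method as in the Experiments project:

```csharp
// Sketch: include the reserved bot-filtering attribute in the user context.
// The surrounding method shape is an assumption based on the User Retriever.
public Dictionary<string, object> GetUserAttributes(HttpContextBase httpContext)
{
    var attributes = new Dictionary<string, object>
    {
        // ...existing audience attributes (market, language, device, etc.)...
    };

    // reserved attribute consumed by Optimizely's bot filtering
    attributes["$opt_user_agent"] = httpContext.Request.UserAgent;

    return attributes;
}
```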

Next Post

In the next post in the series, we’ll use the Experiments project to execute a simple experiment and discuss the impact of that experiment in a real world example.

My Experience with Optimizely Fullstack (via Rollouts) – 1 of 4

I’ve been excited to recently get the opportunity to work with the Optimizely Experimentation platform for the first time. My goal has been to analyse the platform technically and demonstrate to clients how experimentation is a game changer in proving what generates results.

Optimizely Rollouts

Rollouts was the plan I started the journey on. At that stage we wanted to demonstrate the potential by running some simple experiments. The results would speak for themselves and open the door to moving to a Full Stack plan.

Rollouts allows you to run an experiment on the free plan, but understandably you don't get everything. However, what's available for free is more than enough to start demonstrating results.

Be aware of the following free plan limitations:

Experiment Limits

The Rollouts limit is 1 active experiment at a time in each environment, including Production. That's fair on a free plan!

Bot Filtering

Bot Filtering is not available in the free plan. This could skew the reporting metrics somewhat.

Experiments Rest API

The Experiments REST API endpoint does not work in Rollouts, which may limit some clever integrations. However, if you're going to start pushing the boundaries of a platform integration, you're probably going to be investing in a paid plan!

Let’s Get Started

Sign Up for a free Rollouts account here.

After that you need to extend your Commerce code base to integrate with Optimizely Experiments through the C# SDK.

This series of posts by David Knipe is a great place to start if you are integrating with the Optimizely CMS or Commerce platforms.

Integrating Optimizely Full Stack with Episerver |

His Foundation Experiments branch on GitHub can easily be added to your solution. It provides some really neat integrations out of the box such as with Optimizely Projects and Visitor Groups. His blog series will bring you through all this.

episerver/foundation-experiments: Foundation Experiments offers a starting point for integrating Optimizely Full Stack into an Episerver project (

Some Minor Code Base Updates

My project was an Optimizely Commerce application without Visitor Intelligence so I made the following updates to make the Optimizely Tracking and Decision integration seamless.

Added Commerce Core Library

I extended the code to integrate directly with EPiServer Commerce core library. This simplified the integration with my Commerce code base so I could pass classes like IOrderGroup for tracking.

User Retriever

Extended IUserRetriever to include a method that returns the User Id and Attributes together in one object.

I also added an extra reserved attribute to the user context which would enable bot filtering in reporting tools on a paid version.

Tracking Service

Added a Tracking Service class that can be called directly from the main Web Application to track various events.

Experimentation Service

Added an Experimentation Service class that can be called directly from the main Web Application to get experiment decisions and variables.

Next Post

In the next post I will link to my GitHub repository where I have pushed these changes and will talk through specifics of integrating into your Optimizely Commerce application.