
The new Inversion of Control (IoC) support introduced with EPiServer 7 is, in my opinion, one of the best new features of EPiServer CMS. Anywhere in your application you can simply call ServiceLocator.Current.GetInstance<ISomeInterface>() and it will return the concrete implementation of that interface that you have configured.

This is of course nothing new, StructureMap has been around for years and there are dozens of other excellent frameworks for IoC.

However, in the EPiServer world this is still fairly new, and I’ve seen several EPiServer projects where the concept of IoC is still largely misunderstood. This is not a post about IoC as such, but if you’re interested, here’s an introduction to StructureMap; it’s a pretty old post but still interesting.

I was recently working on a project where we created a couple of concrete implementations for cache handling: a CacheManager and a NullCacheManager. The CacheManager used HttpRuntime.Cache behind the scenes, while the NullCacheManager, as you would expect, cached nothing at all. Both implemented the ICacheManager interface, shown highly simplified below.

public interface ICacheManager {
    T GetCachedItem<T>(string key, Func<T> uncachedMethod);
    void SetCachedItem<T>(string key, T item);
    void RemoveCachedItem(string key);
}
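To make the example concrete, here’s roughly what the two implementations could look like. The post only described them in prose, so the method bodies below are my assumptions:

```csharp
using System;
using System.Web;

// Caches via HttpRuntime.Cache, as described above (a sketch, not the real class).
public class CacheManager : ICacheManager
{
    public T GetCachedItem<T>(string key, Func<T> uncachedMethod)
    {
        var cached = HttpRuntime.Cache[key];
        if (cached is T)
        {
            return (T)cached;
        }

        // Cache miss: fetch the item the expensive way and cache it for next time.
        var item = uncachedMethod();
        SetCachedItem(key, item);
        return item;
    }

    public void SetCachedItem<T>(string key, T item)
    {
        if (item != null)
        {
            HttpRuntime.Cache.Insert(key, item);
        }
    }

    public void RemoveCachedItem(string key)
    {
        HttpRuntime.Cache.Remove(key);
    }
}

// Caches nothing at all: every call goes straight to the uncached method.
public class NullCacheManager : ICacheManager
{
    public T GetCachedItem<T>(string key, Func<T> uncachedMethod)
    {
        return uncachedMethod();
    }

    public void SetCachedItem<T>(string key, T item)
    {
    }

    public void RemoveCachedItem(string key)
    {
    }
}
```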

This was all good and we could now easily decouple our design using code looking something like this.

public void SomeMethod() {
 var cacheManager = ServiceLocator.Current.GetInstance<ICacheManager>();
 var cachedItem = cacheManager.GetCachedItem<SomeObject>("key", SomeMethodThatFetchesSomeObjectUncached);
}

This is great and gives us a very loosely coupled design. We could easily create a new concrete implementation of ICacheManager, such as a SqlCacheManager or a FileSystemCacheManager, and switch between the implementations in an IConfigurableModule, something like this:

public class ConfigurableModule : IConfigurableModule
{
    public void ConfigureContainer(ServiceConfigurationContext context)
    {
        var container = context.Container;
        container.Configure(c => c.For<ICacheManager>().Use<CacheManager>());
    }
}

We could also have different concrete implementations for different environments by using profiles, like this:

public class ConfigurableModule : IConfigurableModule
{
    public void ConfigureContainer(ServiceConfigurationContext context)
    {
        var container = context.Container;
        container.Configure(c =>
        {
            c.Profile("debug", ctx => ctx.For<ICacheManager>().Use<NullCacheManager>());
            c.Profile("release", ctx => ctx.For<ICacheManager>().Use<CacheManager>());
        });
    }
}

But wouldn’t it be pretty cool if we could switch the implementation at runtime? Turns out it’s not too difficult to achieve. Let’s create an admin plugin where an administrator can choose which implementation of ICacheManager the site should currently use.

We start out by creating a simple plugin:

[EPiServer.PlugIn.GuiPlugIn(Area = EPiServer.PlugIn.PlugInArea.AdminConfigMenu, Url = "/modules/samplesite/ChangeCacheManager/Index", DisplayName = "Cache management")]
[Authorize(Roles = "Administrators")]
public class ChangeCacheManagerController : Controller
 {
	public ActionResult Index() { return View(); }
 }

Remember that we need to add this nonsense to the episerver.shell part of our web.config as well for our module to be picked up:

<episerver.shell>
  <publicModules rootPath="~/modules/" autoDiscovery="Minimal">
    <add name="samplesite">
      <assemblies>
        <add assembly="WhateverYouNameYourAssembly" />
      </assemblies>
    </add>
  </publicModules>
</episerver.shell>

Also, please note that I’ve added an authorization attribute to the controller to make sure no one but an administrator stumbles upon it.

Add an empty index.cshtml view for now and type something awesome in it. Compile and run and make sure your plugin appears in the admin config menu.

Alright, next let’s create a model for our view.

public class CacheManagerViewModel {
	public string SelectedManager { get; set; }
	public List<SelectListItem> ConfiguredManagers { get; set; }

	public CacheManagerViewModel() {
		ConfiguredManagers = new List<SelectListItem>();
	}

}

Simple; all our plugin needs is a list of the available cache managers and a string representing the currently selected one. OK, let’s flesh out our view next.

@model Afa.web.AdminPlugins.Models.CacheManagerViewModel
 
<!DOCTYPE html>
<html>
    <head>
        <link rel="stylesheet" type="text/css" href="/EPiServer/Shell/7.11.1.0/ClientResources/epi/themes/legacy/ShellCore.css">
        <link rel="stylesheet" type="text/css" href="/EPiServer/Shell/7.11.1.0/ClientResources/epi/themes/legacy/ShellCoreLightTheme.css">
        <link href="../../../App_Themes/Default/Styles/ToolButton.css" type="text/css" rel="stylesheet">
        <title>Cache management</title>
    </head>
    <body>
        <div class="epi-contentContainer epi-padding">
            <h1>Select active cache manager</h1>
           
            <p>Select which CacheManager implementation to use for this site.</p>
            
 
            @using (Html.BeginForm())
            {
                <div class="epi-padding">
                   
                    <p>Currently active cache manager: <strong>@Model.SelectedManager</strong></p>
 
                    <div class="epi-size25">
                       
                        <div>
                            <label>Select cache manager: </label>
                            @Html.DropDownListFor(m => m.SelectedManager, Model.ConfiguredManagers, new { @class = "episize240" })
                        </div>
 
                       
                    </div>
                </div>
 
                    <div class="epi-buttonContainer">
                    <span class="epi-cmsButton">
                        <input class="epi-cmsButton-text epi-cmsButton-tools epi-cmsButton-Save" type="submit" value="Save" />
                    </span>
                </div>
 
            }
        </div>
    </body>
</html>

Now we have a model and a view, all that’s left is to add some code to our controller:

public ActionResult Index()
{
    return View(GetViewModel());
}

[HttpPost]
public ActionResult Index(CacheManagerViewModel model)
{
    var container = ServiceLocator.Current.GetInstance<IContainer>();

    var selectedManager = Type.GetType(model.SelectedManager);

    container.Model.EjectAndRemoveTypes(t => t == selectedManager);
    var instance = (ICacheManager)container.GetInstance(selectedManager);
    container.Configure(x => x.For<ICacheManager>().Use(instance));
    return View(GetViewModel());
}

private CacheManagerViewModel GetViewModel()
{
    var model = new CacheManagerViewModel();
    var currentActiveManager = ServiceLocator.Current.GetInstance<ICacheManager>();

    model.SelectedManager = currentActiveManager.GetType().Name;

    foreach (var manager in ServiceLocator.Current.GetAllInstances<ICacheManager>())
    {
        model.ConfiguredManagers.Add(new SelectListItem()
        {
            Text = manager.GetType().Name,
            Value = manager.GetType().AssemblyQualifiedName,
            Selected = currentActiveManager.GetType() == manager.GetType()
        });
    }

    return model;
}

There are a few interesting points in the code above. Let’s first consider the GetViewModel() method. We simply get the currently active concrete implementation and then loop through all instances of ICacheManager that are currently registered in our IoC container. This brings us to an important point: all implementations of ICacheManager must have been registered in our container, or they won’t be returned by the GetAllInstances method. This can be done quite simply by using AddAllTypesOf<ICacheManager> when we configure our container. Something like this:

public void ConfigureContainer(ServiceConfigurationContext context) {
	context.Container.Configure(c => c.Scan(s => s.AddAllTypesOf<ICacheManager>()));
}

Next, let’s have a look at the post method. This method looks kinda peculiar; what’s all this ejecting and removing stuff? It turns out that we can’t just change which configured cache manager to use with For.Use, because that would add another instance of the same type rather than replace the existing one. That’s why we first need to eject the model of the selected type and then re-configure it.

And that should be it! We now have a working admin plugin that lets us change fundamental inner workings of our application at runtime. We could substitute ICacheManager for IContentRepository and radically change how we fetch our content, or substitute it for IContentRenderer and add functionality such as detailed logging to the rendering of our content… There’s no end to the possibilities!

But, as we’ve learned from modern classics such as the Bible and The Amazing Spider-Man:

[Image: spider]


Have you ever tried running an EPiServer MVC website with the latest version of the ASP.NET MVC Framework and latest versions of all the nuget packages? Well, it won’t work very well…

[Image: jsonnetfail (NuGet dependency resolution error)]

Unfortunately, EPiServer’s EPiServer.Framework nuget package only supports Newtonsoft.Json version >= 5.0.8 and < 6.0 (which is the same thing as saying it only supports 5.0.8, as there are no other releases in that span).

This means that we have to downgrade our Newtonsoft.Json package to be able to install the EPiServer.Framework nuget package. Fortunately this is quite simple, and WebGrease (another package, used by the Microsoft ASP.NET Web Optimization package) only requires Newtonsoft.Json >= 5.0.4, so this shouldn’t be a problem.

Go to your Package Manager Console and type the command:

Update-Package Newtonsoft.Json -Version 5.0.8

This will downgrade your Json.Net installation to version 5.0.8. Now we can carry on installing the EPiServer.Framework nuget package without issue.

However, you might still run into this error:

Could not load file or assembly 'Newtonsoft.Json' or one of its dependencies. The located assembly's manifest definition does not match the assembly reference.

Turns out that the assembly binding redirect in web.config is still pointing at the 6.0.x version of Newtonsoft.Json. To fix this, don’t just change the version number to 5.0.8 and hope for the best: the Json.NET packages below 6.0 actually have an assembly version of 4.5.0.0, so redirecting to that version should work. However, simply removing the assembly redirect is an even simpler and more direct solution. It serves no purpose anyway, since we know the version we have installed is the one we want to use, so we simply put the binding redirect out of its misery.
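The redirect in question looks something like this in web.config (the exact version range may differ in your project); this is the element to remove:

```xml
<dependentAssembly>
  <assemblyIdentity name="Newtonsoft.Json" publicKeyToken="30ad4fe6b2a6aeed" culture="neutral" />
  <bindingRedirect oldVersion="0.0.0.0-6.0.0.0" newVersion="6.0.0.0" />
</dependentAssembly>
```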

 

[Image: boromir]


In a fairly recent blog post, Per Bjurström of EPiServer wrote about a new database version for EPiServer CMS where they actually include the database schema changes in the nuget update package.

This is an awesome step in the right direction, where everything needed to upgrade an EPiServer site is included in the nuget package, but… how do we integrate this with our continuous integration process?

Turns out it’s not so difficult at all. Looking at this blog post by Paul Stovell, the man behind Octopus Deploy, we can get some inspiration as to how this could be achieved.

Let’s start by adding a console application project to our solution and adding the DbUp nuget package to it. This project will contain all of our database schema changes; in most cases probably only the SQL scripts from EPiServer, but we could of course have other databases that also need their schema updated from time to time.

We then update our EPiServer nuget packages. If we try to run our site now we’ll get the old familiar yellow screen of death saying: “The database has not been updated to the version 7007.0, current database version is 7006.0.”

[Image: dberror (database version error screen)]

This is easily remedied by running the “update-epidatabase” command from the Nuget Package Manager Console, however this will only fix our development database. The database will still be out of sync when we deploy this release to any other environment. We need to extract the SQL script from the nuget package by using the “export-epiupdates” command:

PM> Export-EPiUpdates
An Export package is created C:\EPiServer\AlloyDemo\wwwroot\EPiUpdatePackage
Exporting epiupdates into EPiUpdatePackage\EPiServer.CMS.Core.7.8.2\epiupdates

This will create a folder in our project root where we can find the sql scripts being run: EPiUpdatePackage\EPiServer.CMS.Core.7.8.2\epiupdates\sql

[Image: episql (exported SQL scripts folder)]

 

We’ll grab the 7.8.0.sql file from the folder, include it in our console application, and make sure to set the build action to “Embedded Resource” to make the SQL script part of the generated .exe file.

[Image: episql2 (build action set to Embedded Resource)]

Next we’ll add our episerver database connection string to our App.config.

[Image: connectionstring (connection string in App.config)]

It doesn’t really matter what you call the connectionstring here and you could of course have several if you have multiple databases that need to be updated.
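As a sketch (server and database names are placeholders), the App.config entry might look like this; the name just has to match whatever your code reads, which in the snippet below is “DatabaseConnection”:

```xml
<configuration>
  <connectionStrings>
    <!-- Placeholder values: point this at your EPiServer database -->
    <add name="DatabaseConnection"
         connectionString="Data Source=.;Initial Catalog=EPiServerDB;Integrated Security=True"
         providerName="System.Data.SqlClient" />
  </connectionStrings>
</configuration>
```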

Then we write some code in the Main method of our console app to trigger the update via DbUp. This code is taken directly from Paul Stovell’s blog post and works just perfectly.

DbUp will automatically wrap all the calls in a transaction that will only be committed if all of the calls are successful. This is really nice as it means that we won’t have to worry about leaving our database in a half-upgraded state of some kind.

static int Main(string[] args)
{
    // Grab a reference to our connection string
    var connectionString = ConfigurationManager.ConnectionStrings["DatabaseConnection"].ConnectionString;

    // DeployChanges is a fluent builder for upgrading databases.
    // There are lots of options other than executing scripts embedded in the assembly,
    // including from a specified file location or manually created scripts.
    var upgrader = DeployChanges.To
        .SqlDatabase(connectionString)
        .WithScriptsEmbeddedInAssembly(Assembly.GetExecutingAssembly())
        .LogToConsole()
        .Build();

    var result = upgrader.PerformUpgrade();

    // If the result is unsuccessful we'll change the fore color to red and display the error
    if (!result.Successful)
    {
        Console.ForegroundColor = ConsoleColor.Red;
        Console.WriteLine(result.Error);
        Console.ResetColor();
        return -1;
    }

    Console.ForegroundColor = ConsoleColor.Green;
    Console.WriteLine("Success!");
    Console.ResetColor();
    return 0;
}

DbUp is also clever enough to add a simple database table storing every script that has already been executed so we do not need to worry about unnecessarily running scripts multiple times.

We could run this application right now, as is, and it would upgrade our database for us, but that’s not our goal. We want Octopus Deploy (or whichever deployment service you use) to run this app and upgrade our databases automatically. That’s why we also add a simple PowerShell script that Octopus Deploy will run as part of the deployment process. We put this in a Deploy.ps1 file and include it in our console project, making sure to set Copy to Output Directory to “Copy if newer”. This file will be automatically picked up by Octopus and run when the project has been deployed.

[Image: deployps (Deploy.ps1 in the project)]

The Deploy.ps1 file looks like this (again courtesy of Paul Stovell):

& .\OctoSample.Database.exe | Write-Host

As I mentioned in a previous blog post, I currently use TeamCity as my build server of choice and let TeamCity run OctoPack to package my projects into nuget packages that Octopus Deploy then grabs and deploys. To let TeamCity pack our console application we need to install the OctoPack nuget package in it. Then, when we check in our changeset, TeamCity will pick it up and run OctoPack automatically, creating a nuget package for every project with OctoPack installed.

The last step to get the whole process to work is configuring Octopus Deploy. Simply add a step to the deployment process called “Update database” or something similar where you fetch the nuget package “EpiDbUp”, deploy it and execute the resulting .exe file to update the database.

[Image: octopusdeploy (Octopus Deploy process step)]

The complete process now looks something like this:

  • Include the .sql file we want to run in our EPiDbUp project (making sure to embed it into the .exe).
  • Check it in to source control, which triggers a build on our TeamCity build server, which in turn triggers the creation of a release in Octopus Deploy.
  • Choose which environment we wish to deploy the current release to.
  • The deployment process runs in two steps, first updating the database through our console project, then deploying our web application.

And that’s it! The next time a database schema change is included in an EPiServer update we’ll simply add the script to our database project and all our environments will be automatically updated on next deploy.


Lately I’ve been working on streamlining a client’s deployment process for several different EPiServer sites. The CI server of choice is TeamCity, and we decided to go with Octopus Deploy for automating deployment. The result is a pretty awesome continuous delivery process where we can push code from any check-in directly to production, should we like to.

Just one of the many features of Octopus is that it can easily run all your config transforms for you. And not only web.config: any config! This means we can easily apply the pesky episerver.config and episerver.framework.config transforms that we unavoidably need when we deploy any EPiServer site.

We ran into just one minor speed bump during the deployment process. It’s spelled license.config. Normally you just throw up the license.config for your different environments and leave it there, but it turns out that Octopus creates a fresh directory for every deploy, which means that if license.config is not part of our deployment process it won’t get deployed to the new directory and we’ll face something like this beauty:

[Image: licenseerror (missing license error screen)]

Ah, the joy!

Turns out it’s really easy to fix this, though. license.config is just another configuration file, really; let’s just transform it! However, every developer has their own during development, so how do we do that?

We first need to add license.config to source control. Just add an empty one like this in a /licenses/ folder.

<Signature xmlns="http://www.w3.org/2000/09/xmldsig#">
  <!-- This license file is just here for the transforms. It's nothing to worry about;
       just put your own license.config in the site root as usual
       (but do not check it in to source control). -->
</Signature>

Add the appropriate config transform files e.g. license.staging.config, license.release.config etc. and make sure to run an insert on the appropriate sections:

<SignedInfo xdt:Transform="Insert" />
<SignatureValue xdt:Transform="Insert" />
<KeyInfo xdt:Transform="Insert" />
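Put together, a complete transform file (license.staging.config, say) might look something like this; the element contents are placeholders for the values from that environment’s real license:

```xml
<?xml version="1.0" encoding="utf-8"?>
<Signature xmlns="http://www.w3.org/2000/09/xmldsig#"
           xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
  <!-- Replace the "..." placeholders with the values from the environment's license -->
  <SignedInfo xdt:Transform="Insert">...</SignedInfo>
  <SignatureValue xdt:Transform="Insert">...</SignatureValue>
  <KeyInfo xdt:Transform="Insert">...</KeyInfo>
</Signature>
```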

And then update your episerver.framework.config transforms to make sure they point to the new location of the license.config file.

<?xml version="1.0" encoding="utf-8" ?>
<episerver.framework xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
  <licensing xdt:Transform="Insert" licenseFilePath="licenses/license.config" />
</episerver.framework>

Note that this is for EPiServer 7. For EPi 6 you would need to transform the episerver.config instead (check out this blog post for more info); for EPi 5 you would just sit around and look sad that you haven’t upgraded yet.

That’s nice! We now have a continuous delivery process. Just one more thing… When we deploy through Octopus it does not automatically remove our config transformation files, which leaves a lot of junk web.environment.config files lying around. Unfortunately there’s no built-in feature in Octopus that handles this (although an issue has been raised), but it’s easily handled with a simple PowerShell script.

We add a PostDeploy.ps1 file to our solution (making sure to set the Build Action to “Content” on the file). This file will be automatically picked up by Octopus and the script will run after deployment has completed.

Here’s the script:

get-childitem .\ -include *debug.config, *release.config, *staging.config -recurse | foreach { remove-item $_.fullname -force }

And that’s it. Continuous delivery made easy with TeamCity and Octopus for EPiServer websites.


I had a strange experience when upgrading a client’s site from EPiServer 7 to EPiServer 7.5. Suddenly the site performed noticeably worse, with loading times measured in seconds. At one point it was so bad the site was hardly usable.

Naturally, we initially blamed the new code we had deployed and tried to investigate where we had gone wrong performance-wise, but after some further investigation it became clear that the performance drain was not coming from any new code but rather from multiple calls to the EPiServer Dynamic Data Store (DDS). This was very strange, as we had made no changes to the code handling the DDS calls and would therefore have assumed that performance would be similar to before the upgrade. The performance was so appallingly bad that we had to temporarily disable some of the site’s functionality. We hurriedly rewrote parts of the application to use Entity Framework instead, and most of the performance issues went away.

While this might have been an edge case (thousands upon thousands of objects in the DDS, with many nested objects and relations), and the DDS implementation could certainly have been rewritten to perform better, it still raises an interesting question.

Just how badly does the DDS perform in comparison to EF 6.1?

I decided to create a few simple test scenarios and try to make an even comparison.

First I created a few POCO objects to store in the respective ORM:

[EPiServerDataStore(AutomaticallyCreateStore = true, AutomaticallyRemapStore = true)]
public class DdsLike : IDynamicData
{
    public string User { get; set; }
    public int PageId { get; set; }
    public Identity Id { get; set; }
}

[EPiServerDataStore(AutomaticallyCreateStore = true, AutomaticallyRemapStore = true)]
public class DdsComment : IDynamicData
{
    public string User { get; set; }
    public int PageId { get; set; }
    public string Text { get; set; }
    public string Heading { get; set; }
    public Identity Id { get; set; }
}

public class EfComment
{
    public int Id { get; set; }
    public string User { get; set; }
    public int PageId { get; set; }
    public string Text { get; set; }
    public string Heading { get; set; }
}

public class EfLike
{
    public int Id { get; set; }
    public string User { get; set; }
    public int PageId { get; set; }
}

Likes and comments: the bread and butter of the web!
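The EF side of the test assumes a code-first DbContext along these lines (not shown in the post, so treat this as a sketch):

```csharp
using System.Data.Entity;

// A minimal code-first context; EF creates the Likes and Comments tables
// from these sets on first use.
public class AlloyContext : DbContext
{
    public DbSet<EfLike> Likes { get; set; }
    public DbSet<EfComment> Comments { get; set; }
}
```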

I then created 20 000 objects (10 000 of each entity) in both Entity Framework (using EF code first) and the DDS:

public void InitializeData()
{
    var itemCount = 10000;

    using (var alloyContext = new AlloyContext())
    {
        for (int i = 0; i < itemCount; i++)
        {
            alloyContext.Likes.Add(new EfLike()
            {
                PageId = i,
                User = "bla"
            });
            alloyContext.Comments.Add(new EfComment()
            {
                PageId = i,
                User = "bla",
                Heading = "The raven",
                Text = "Once upon a midnight dreary, while I pondered weak and weary"
            });
        }

        alloyContext.SaveChanges();
    }

    var likeStore = DynamicDataStoreFactory.Instance.GetStore(typeof(DdsLike));
    var commentsStore = DynamicDataStoreFactory.Instance.GetStore(typeof(DdsComment));

    for (int i = 0; i < itemCount; i++)
    {
        likeStore.Save(new DdsLike()
        {
            PageId = i,
            User = "bla"
        });
        commentsStore.Save(new DdsComment()
        {
            PageId = i,
            User = "bla",
            Heading = "The raven",
            Text = "Once upon a midnight dreary, while I pondered weak and weary"
        });
    }
}
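Before the results, here’s roughly how each scenario could be timed. The actual harness wasn’t included in the post, so this is an assumed sketch:

```csharp
using System;
using System.Diagnostics;

// Runs a scenario a number of times and reports the average elapsed
// time in milliseconds (a sketch of the measurement approach).
public static class Benchmark
{
    public static double AverageMilliseconds(Action scenario, int iterations = 5000)
    {
        // Warm up once so one-time costs (connection setup, query compilation)
        // don't skew the average.
        scenario();

        var watch = Stopwatch.StartNew();
        for (var i = 0; i < iterations; i++)
        {
            scenario();
        }
        watch.Stop();

        return watch.Elapsed.TotalMilliseconds / iterations;
    }
}
```

Each cell in the tables below would then be the output of one such call, once with the DDS query and once with the equivalent EF query.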

I then started measuring different calls. The procedure was to set up a scenario and measure it 5000 times with EF and DDS respectively, taking the average time across all calls. Here’s the breakdown of the results:

| Procedure | DDS | EF |
| --- | --- | --- |
| Load 20 000 items, return count | 19 ms | 12 ms |
| Load 20 000 items, filter on pageId > 5000, order by pageId, return count | 88 ms | 10 ms |
| Load 20 000 items, filter on pageId > 5000 && < 15000, order by pageId, return count | 104 ms | 11 ms |
| Load 20 000 items, filter on pageId > 5000 && < 15000, order by pageId, ToList, count | 840 ms | 11 ms |
| Load 20 000 items, get by id 5555, order by pageId, ToList, count | 40 ms | 8 ms |
| Load 20 000 items, order by pageId, take 20, ToList, count | 313 ms | 33 ms |
| Load 20 000 items, order by pageId, skip 80, take 20, ToList, count | 420 ms | 42 ms |
| Load 20 000 items, order by pageId, skip 5000, take 20, ToList, count | 390 ms | 190 ms |

Returning a count of 20 000 items is of course no problem for either mapper, but it’s interesting to note how quickly the performance of the DDS deteriorates as soon as we add some ordering and filtering. Especially noteworthy is the case where we load our 20 000 items, filter on pageId > 5000 and < 15 000, order by pageId and then call ToList(), effectively forcing the execution of the query against the database: 840 ms vs 11 ms!
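For reference, that 840 ms vs 11 ms scenario looks roughly like this in each ORM (a sketch; the variable names are mine):

```csharp
// DDS: LINQ over the dynamic data store's big-table storage.
var commentStore = DynamicDataStoreFactory.Instance.GetStore(typeof(DdsComment));
var ddsComments = commentStore.Items<DdsComment>()
    .Where(c => c.PageId > 5000 && c.PageId < 15000)
    .OrderBy(c => c.PageId)
    .ToList();   // forces execution against the database

// EF: the same query against an ordinary table.
using (var context = new AlloyContext())
{
    var efComments = context.Comments
        .Where(c => c.PageId > 5000 && c.PageId < 15000)
        .OrderBy(c => c.PageId)
        .ToList();
}
```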

To make things a bit more interesting I reran the test but added some related objects in both databases. I added two nested classes to the Comments class for the DDS and a simple reference to two other classes in EF.

public class DdsSomeNestedClass
{
    public int MyProperty { get; set; }
    public string MyString { get; set; }
    public Guid Id { get; set; }
    public int PageId { get; set; }
    public string SomethingElse { get; set; }
    public Guid GuidId { get; set; }
}

public class DdsSomeOtherClass
{
    public string MyProperty { get; set; }
    public int SomeId { get; set; }
    public int PageId { get; set; }
    public string Something { get; set; }
    public DateTime Date { get; set; }
    public DateTime ChangedDate { get; set; }
    public Guid GuidMaster { get; set; }
}

I ran all the tests again and performance went down quite dramatically for the DDS while EF kept going at about the same pace:

| Procedure | DDS | EF |
| --- | --- | --- |
| Load 20 000 items, return count | 11 ms | 12 ms |
| Load 20 000 items, filter on pageId > 5000, order by pageId, return count | 78 ms | 8 ms |
| Load 20 000 items, filter on pageId > 5000 && < 15000, order by pageId, return count | 78 ms | 13 ms |
| Load 20 000 items, filter on pageId > 5000 && < 15000, order by pageId, ToList, count | 1346 ms | 212 ms |
| Load 20 000 items, get by id 5555, order by pageId, ToList, count | 42 ms | 6 ms |
| Load 20 000 items, order by pageId, take 20, ToList, count | 407 ms | 33 ms |
| Load 20 000 items, order by pageId, skip 80, take 20, ToList, count | 420 ms | 36 ms |
| Load 20 000 items, order by pageId, skip 5000, take 20, ToList, count | 500 ms | 40 ms |

There’s absolutely nothing scientific about these tests, but it’s still interesting to note how poorly the DDS performs in comparison on certain calls, especially when filtering and sorting are involved.

To conclude: performance-wise, using EF (or some other ORM that does not rely on a big-table design) seems always preferable. If you’re going to use the DDS, make sure to cache your calls and keep them as simple as possible for best performance.