Tuesday, 29 May 2018


This post is about implementing a WebJob in Azure which restarts a service at a predefined interval. Restarting a service on a schedule is a clear sign that something is wrong with the implementation, but we may rely on it as an emergency measure until we tackle the underlying problem. Here we will set that implication aside and focus on how to get it done. For this we need an application in Azure AD which has the required permission to restart the service. The components required in this process are listed below. Even though scripts are available to create the application for you, it is good to go via the portal to make sure we understand each step.

  1. AD app
  2. PowerShell script
  3. Web Job service

AD app

It is under the identity of this application that we are going to restart the service, so the application must have the required permission to do so. To create this app, go to the
Azure AD blade -> App registration -> New application registration

Once you have created the application with the desired name and a sign-on URL (a dummy URL also works), you have to create a credential for it. For this, go to the settings of the application and select Keys. There are two ways to add a key: using a public/private key pair, or using a key description and password pair. Here we will use the key/password pair. Create a key by entering a description and the desired expiry, then save it; Azure will generate a password for you. Save it somewhere safe, as you will not be able to see it again. Your application is now ready; next we must assign a role to it.

How to assign a role to AD app

To assign a role to the application in the desired resource group, go to that resource group, select Access control (IAM) and click Add (+). A blade opens where you can select the desired role; in this case we will select the Owner role. In the "Select" text box, type the name of the application you created in AD. It will list your application; select it and click Save. You have now assigned a role to the application in your resource group, and with that it is allowed to restart your service. Next we need a script which executes the restart logic.
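If you prefer scripting over the portal, the same role assignment can be done with the AzureRM PowerShell module. This is a rough sketch, and the application id and resource group name are placeholders for your own values:

```powershell
# Assign the Owner role to the AD application's service principal
# on the resource group which contains the service.
New-AzureRmRoleAssignment -ServicePrincipalName "your application id" `
    -RoleDefinitionName "Owner" `
    -ResourceGroupName "Name of your resource group"
```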

PowerShell script

Below is the script containing the restart logic for your WebJob.

$ProgressPreference = "SilentlyContinue"
$appId = "your application id"
$tenantId = "the tenant id"
$password = "password"
$subscriptionId = "subscription id"

# Sign in as the service principal using the application id and key.
$secpasswd = ConvertTo-SecureString $password -AsPlainText -Force
$mycreds = New-Object System.Management.Automation.PSCredential ($appId, $secpasswd)
Add-AzureRmAccount -ServicePrincipal -Tenant $tenantId -Credential $mycreds
Select-AzureRmSubscription -SubscriptionId $subscriptionId

# Restart the service by stopping and then starting the web app.
Stop-AzureRmWebApp -Name 'name of your service' -ResourceGroupName 'Name of your resource group'
Start-AzureRmWebApp -Name 'name of your service' -ResourceGroupName 'Name of your resource group'

Description
$appId - This is the application id of the application which you created in the AD.
$tenantId - Go to the Azure AD blade, select App registrations, then Endpoints; the tenant id is the GUID which appears in each of the URLs listed there.
$password - This is the secret key which you got at the time of creating a key for your application; the one which you saved.
$subscriptionId - In Azure, go to the service that you want to restart and you can find the subscription id on its overview.

Now we need a WebJob which can execute this script at a periodic interval without any intervention.


To create a WebJob, go to the resource group and select the service which needs the restart logic. Select WebJobs from the settings blade of your service, click Add (+) and create a new WebJob. Give it a proper name, upload the PowerShell script which you created earlier, and select Triggered as the type of the job. In the CRON expression field, provide the desired trigger pattern. In our example we will run this job every minute, so the pattern is 0 * * * * *. Click OK and your job will restart the service on that schedule.

A restart every minute is of course too much; provide whatever pattern you desire. You can build and verify a CRON expression which Azure accepts at https://cronexpressiondescriptor.azurewebsites.net/
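For reference, Azure WebJobs use a six-field CRON format ({second} {minute} {hour} {day} {month} {day-of-week}). A few illustrative patterns (the schedules shown are examples, not recommendations):

```
0 * * * * *      run once every minute (at second 0)
0 */30 * * * *   run every 30 minutes
0 0 * * * *      run at the top of every hour
0 0 3 * * *      run once a day at 03:00
```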

By default, Azure unloads the service from the instance where it runs and recycles the instance if the service is idle for some period. For a triggered WebJob to run reliably, we should disable this behaviour by stating that we want the service to be available even when there is no traffic. We do this by enabling the "Always on" option: go to the service, select Application settings, set "Always on" to On and save the setting.

Thursday, 10 August 2017


         Logging is one of the most important factors in any application. You require it to analyse what, when and how things went wrong; it serves as the journal of the application. But when you have multiple instances of the same application running in parallel, the logging mechanism no longer serves its purpose properly, as every instance writes its entries into the same file and the log gets cluttered. In this post we will discuss how to handle this situation, where multiple instances run at the same time and log4net writes all the logs into the same file.

Possible solutions

  1. Let log4net write each log entry with the process id.
  2. Have log4net write the log into a different file suffixed with the process id or something similar.
  3. Log to a central location with the process id or something similar.
  4. Log everything into the same file but append an identifier to each log entry.

     Here we are going to look at a solution which is a combination of 1 and 2. We will have log4net write to a different file only when another instance is already running; otherwise it uses the same log file. From a management perspective it is nice to have only one file, because then log rotation can keep the file under a manageable size. When another instance is running, it logs to a file with a timestamp appended to the file name. Let us now see how this is achieved using log4net.

Drawback of this solution:

     When you have an application which has a high probability of running multiple instances, you are going to end up with many log files which will not be rotated or handled by the logging mechanism. So choose this solution only if the situation where multiple instances run in parallel is rare.

To achieve this we will be using two concepts.

  1. Pattern Converter in log4net
  2. Inter-process synchronization technique.

Pattern Converter in log4net

      log4net has a pattern converter which you can use in the configuration; here we will use it to control the file name. I will not discuss the complete log4net configuration, and the code below is just a snippet from it.

<appender name="FileAppender" type="log4net.Appender.RollingFileAppender">
    <file type="log4net.Util.PatternString">
        <converter>
            <name value="fileName" />
            <type value="SampleApp.FilePatternConverter,SampleApp" />
        </converter>
        <conversionPattern value="${APPDATA}\LogFolder\%fileName" />
    </file>
</appender>

Here we have defined a converter for the file name. The conversion pattern is the format for the file name: ${APPDATA} expands to the system-defined application data folder and has nothing to do with the converter, while %fileName is an identifier which is routed to the converter registered under the name fileName. That converter is declared with the name fileName and the value SampleApp.FilePatternConverter,SampleApp, where SampleApp.FilePatternConverter is the namespace and class name of our converter class and SampleApp is the assembly in which it can be found. Now let us see what the converter class looks like.

namespace SampleApp
{
    using System.IO;

    public class FilePatternConverter : log4net.Util.PatternConverter
    {
        protected override void Convert(TextWriter writer, object state)
        {
            // Resolve the actual log file name and write it into the pattern output.
            var logFileName = LogFileNameConverter.GetLogFileName("logfile.log");
            writer.Write(logFileName);
        }
    }
}

Whatever the text writer emits then replaces the pattern in the configuration; to be precise, it replaces %fileName in the conversionPattern with the resolved log file name. The core of the logic is in another class; let's have a look at it. It is as shown below.

public class LogFileNameConverter
{
    private const string FileName = "application.log";
    private static readonly Mutex ApplicationSyncLock;
    private static readonly bool LockAcquired;
    private static string dateToAppend;

    static LogFileNameConverter()
    {
        try
        {
            // Register an application exit event so we can release the mutex lock.
            Application.Current.Exit += CurrentExit;
            bool created;
            ApplicationSyncLock = new Mutex(false, "ApplicationMutexName", out created);
            LockAcquired = ApplicationSyncLock.WaitOne(2000);
        }
        catch (AbandonedMutexException)
        {
            // The previous owner was terminated without releasing the mutex;
            // release the abandoned mutex and acquire the lock ourselves.
            ApplicationSyncLock.ReleaseMutex();
            LockAcquired = ApplicationSyncLock.WaitOne(2000);
        }
        catch (Exception)
        {
            LockAcquired = false;
        }
    }

    public static string GetLogFileName(string logFileName)
    {
        if (string.IsNullOrEmpty(logFileName))
            logFileName = FileName;

        // The first instance holds the lock and uses the configured name as-is.
        if (LockAcquired)
            return logFileName;

        var extension = Path.GetExtension(logFileName);
        var fileName = Path.GetFileNameWithoutExtension(logFileName);
        if (string.IsNullOrEmpty(dateToAppend))
        {
            // Strip characters which are not valid in a file name.
            var rgx = new Regex("[^a-zA-Z0-9]");
            dateToAppend = rgx.Replace(DateTime.Now.ToString(CultureInfo.InvariantCulture), "-");
        }

        fileName = (fileName + "-instance-at-" + dateToAppend + extension).Replace(' ', '-');
        return fileName;
    }

    private static void CurrentExit(object sender, ExitEventArgs e)
    {
        // Release the inter-process lock when the application exits.
        if (LockAcquired)
            ApplicationSyncLock.ReleaseMutex();
    }
}

Here we use a named Mutex as the inter-process synchronization mechanism. The first instance acquires the mutex and keeps it locked until the application exits. A new instance will not be able to acquire the lock, so it moves ahead and creates a new log file name with the date and time appended. We chose a Mutex because it lets us handle the abandoned-mutex case, which arises when the application holding the lock is terminated from Task Manager; when a new instance is launched, it can detect and recover the abandoned mutex.

This is all you need: when you run multiple instances, each additional instance starts writing to a different file. If you run the instances one after another, all logs go into one file, and log rotation makes sure you have only one file of the specified size.

If you would like to pass the log file name from the configuration itself, that is also possible. The converter element can carry a property, and this property is available inside the converter class. The configuration looks as shown below.

<appender name="FileAppender" type="log4net.Appender.RollingFileAppender">
    <file type="log4net.Util.PatternString">
        <converter>
            <name value="fileName" />
            <type value="SampleApp.FilePatternConverter,SampleApp" />
            <property>
                <key value="FileName" />
                <value value="app.log" />
            </property>
        </converter>
        <conversionPattern value="${APPDATA}\LogFolder\%fileName" />
    </file>
</appender>

Here we have defined a property with key "FileName" and value "app.log". This is available in the converter class as shown below.

protected override void Convert(TextWriter writer, object state)
{
    // Read the file name passed in via the <property> element in the configuration.
    string fileName = this.Properties["FileName"] as string;
    var logFileName = LogFileNameConverter.GetLogFileName(fileName);
    writer.Write(logFileName);
}

4. Log everything into the same file but append an identifier to each log entry

Now that we have seen this solution, let us see how "log everything into the same file but append an identifier to each log entry" can be achieved. For simplicity I am using the exact same converter. We can achieve this by modifying the log layout with the configuration below.

<layout type="log4net.Layout.PatternLayout">
    <converter>
        <name value="fileName" />
        <type value="SampleApp.FilePatternConverter,SampleApp" />
        <property>
            <key value="FileName" />
            <value value="app.log" />
        </property>
    </converter>
    <param name="ConversionPattern" value="%fileName %d [%t] %-5p [%C %F %L] - %m%n" />
</layout>

Monday, 8 May 2017


   A recent encounter on Windows Server, where the need for a folder alias was unavoidable, led to the discovery that unlike Linux, a folder alias is hard to achieve from the Windows command line on versions below Windows 10, especially when the target directory path is dynamic. This led to a search for an alternative. The scenario involved editbin.exe and its /LARGEADDRESSAWARE option, where we discovered that the command accepts only a limited path length for the input file; anything longer gets trimmed, and this was leading to an error.

The alternative was to cd into the folder, execute the command and get out of the folder again. The setlocal command helps us achieve this by scoping the change of working directory: any directory change made after setlocal is reverted when the matching endlocal runs (or the script ends). The snippet below shows how we can achieve this; I was using it in a Visual Studio post-build event.

setlocal
cd /d "$(TargetDir)"
IF EXIST C:\masm32\bin\editbin.exe (
  C:\masm32\bin\editbin.exe /LARGEADDRESSAWARE "$(TargetFileName)"
)
endlocal

Wednesday, 14 December 2016


   I started on this issue when my WCF service timed out while I was trying a normal socket connection. The same setup was running fine on Windows 8. We had a keep-alive WCF call which tells the server that the client is still active, and for this we need port sharing to be enabled on the machine. I was under the impression that port sharing was not working and that this caused the failure; from the log I concluded that the keep-alive was failing, which made me doubt TCP port sharing. But I could also see that when I installed Visual Studio, the application worked, which made me doubt the .NET Framework. Finally I detected the slow network. This was the real culprit.

Windows 10 comes with real-time protection which scans each packet received over TCP. This slows down the network. One fix is to disable real-time protection.

In the Windows settings you can find Windows Defender, where you can switch off real-time protection.

But the feature is not there for a bad purpose, so let's not disable it. Windows Defender supports an exclusion list, and we can use it to exempt known applications from real-time scanning. This way we can bring the application back to normal.
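Exclusions can also be added from an elevated PowerShell prompt using the built-in Defender cmdlets. A rough sketch, where the process and path names are placeholders for your own application:

```powershell
# Exempt a known application (and its install folder) from real-time scanning.
Add-MpPreference -ExclusionProcess "MyWcfService.exe"
Add-MpPreference -ExclusionPath "C:\Services\MyWcfService"
```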

Friday, 22 July 2016


  It is important that we test an application in all network conditions. This is because we may want to put demand on a system or device and measure its response, to determine the system's behaviour under both normal and anticipated peak load conditions. This makes sure that the application is robust enough to handle all network conditions before we roll it out to users. It would be a painful task to do this kind of testing without software which hooks into the network adapter and controls the packet flow; without such software we would have to pull the network cable manually every time, and that approach won't help us test spiky network conditions. To help with this testing we have a tool called Network Emulator for Windows Toolkit (NEWT). In this article we will see how to configure and use it to test an application under limited network bandwidth.

Network Emulator Toolkit

NEWT is a software-based solution that can emulate the behaviour of both wired and wireless networks over a reliable physical link, such as Ethernet. It is available in 32-bit and 64-bit versions, and you can download it from https://blog.mrpol.nl/2010/01/14/network-emulator-toolkit. It is a pretty handy tool which can mimic many behaviours of a network: it can add latency to packets, raise random errors, drop selected packets, and much more. I will show some of its handy features here; the rest of the functionality is self-explanatory once you start using the tool.

1. Configure Network Emulator Client for limited bandwidth.

 This setup is relatively simple. The network emulator has an option to select the network type; we can choose the dial-up network so that it passes packets up and down as if we were on a dial-up connection.
Once installed, open the Network Emulator Client in administrator mode. It opens as shown in the figure below; here you can see the drop-down to select the network type. Select the dial-up option, and the default configuration for dial-up loads automatically. For this example that configuration is all we need. Now press the start button, and the network emulator starts controlling the packets.

Configure Dial-up

After configuring Dial-up network

2. Configure Network Emulator Client to mimic spiky network which can also drop some packets.

 To configure the advanced scenarios, let us look at two important configuration objects in the network emulator: Link and Filter. A Link defines the network behaviour when sending and receiving packets; it can be used to configure packet loss, network errors, packet latency, bandwidth and a spiky network (connect/disconnect). A Filter specifies the sender and receiver to which the link configuration is applied; with it you can control whether the link configuration applies to all traffic or only to traffic from a particular IP. These configuration objects are shown in the image below.

Configuration options
To configure this scenario, first select the network type; let us use the dial-up network again, selected from the drop-down at the top just like in the first scenario. This opens a channel page where the link and filter are configured. Right-click on the link object and you see a pop-up window with options for UpStream and DownStream; both have the same configuration. UpStream defines the network behaviour for packets sent from the local machine, and DownStream the other way around. When you click one of these, a window opens where you can configure the loss, error, latency, bandwidth, and connect/disconnect behaviour of the network, as shown in the image below. The default option is "No"; select the behaviour you want.

A filter is set by default. If you want to control it, right-click on it and select delete, then from the menu in the top bar select "Configuration" and then "New Filter". The default option is all networks, which means the link configuration is applied to all network cards. There is also an option for selecting the sender and receiver, which you can use if you want to control traffic from a particular source. Once configured, press the start button and the tool will control the behaviour of the network.

Thursday, 4 February 2016


    In this article I will explain in detail how to set up SQLite with Entity Framework. Setting them up should be straightforward, but when I tried, it turned out that the versions available at the time of writing were not. So I thought of sharing this in the form of an article. We will see how to set up a Database First entity model for SQLite with just a simple table. Unlike other sample applications, we will also move the SQLite and Entity Framework parts into a separate dll; this is to address another scenario which we will see later in this article.

For this article we will be using

  1. Visual Studio 2012
  2. SQLite 3
  3. EntityFramework 6
  4. .Net framework 4.5

Building the application.

    In this example we will have a single Visual Studio solution with two projects targeting .NET Framework 4.5. To start, create an application named StockWatch; this will be our main project, and it can be any project type — in my case I created a WPF project. Now create a class library project named DataService, also targeting 4.5. This is the project where we will handle everything related to SQLite and Entity Framework. The main project remains a thin shell which references the DataService project and calls into it. This split is not necessary to get everything up and running, but it demonstrates one scenario with SQLite which we will find towards the end of this article.

   SQLite is a third-party database and there is no built-in driver (ADO.NET provider) in the .NET Framework to connect to it. The provider is supplied by the SQLite project and can be found as a NuGet package: System.Data.SQLite is an ADO.NET provider for SQLite, and the NuGet package carries the same name. When we install this package it also installs Entity Framework, so it comes as a complete package for working with Entity Framework and SQLite. Let us install it first into our DataService project.

    The prerequisite for this article is an SQLite 3 database with a simple table having two fields (id and name); we will then create an Entity Framework model from this existing database. Entity Framework currently supports the Model First approach, but it is expected to be dropped going forward and will not be in Entity Framework 7. Moreover, the SQLite provider for Entity Framework 6 (System.Data.SQLite.EF6) does not support creating tables (the feature is not supported to date), so we cannot implement Model First with Entity Framework when the database is SQLite. Here we will only address the Database First approach.
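The prerequisite table can be created with a few lines of SQL in any SQLite shell. A minimal sketch, where the table name Stock and the two column names are assumptions chosen to match the model used later in this article:

```sql
CREATE TABLE Stock (
    id   INTEGER PRIMARY KEY AUTOINCREMENT,
    name TEXT NOT NULL
);
```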

    We will now create a model from this existing database. Add a new "ADO.NET Entity Data Model" item to the DataService project, name it "StockModel" and proceed. In the "Choose Model Contents" dialog which pops up, select "EF Designer from database" and proceed. In the "Choose Your Data Connection" dialog, click the "New Connection" button, which opens the "Connection Properties" window. In the "Data source" section, click the "Change" button, which opens the Change Data Source dialog shown below.

    Here we are missing the SQLite data source in the dropdown. If you can see an entry for SQLite, you already have everything ready on your machine and can skip this step. If you are like me, you need to install the SQLite designer component for Visual Studio to have it listed here. To get this component, go to the SQLite site (https://system.data.sqlite.org/index.html/doc/trunk/www/downloads.wiki) and download the package appropriate for your version of Visual Studio; in my case Visual Studio 2012, as shown in the image below. The version of this package should also match the version of the SQLite NuGet package (see Issue 3 at the bottom of this article).


Install it after downloading. While installing, don't forget to tick the options to install the SQLite component to the GAC and to install the designer component for Visual Studio. Once installed, please restart your machine. Then come back to the step where we were about to change the data source; you can now see the SQLite component in the list. Select it and proceed.

   Next a dialog asks you to select your database; browse to the database you created. When you click OK, the wizard creates a model from your database.

Issue 1

You may get some error at this point like “Unable to determine the provider name for provider factory of type 'System.Data.SQLite.SQLiteFactory'. Make sure that the ADO.NET provider is installed or registered in the application config.”

This is a bug in the SQLite provider. Install the SQLite NuGet package into the project after changing the framework version to 4.0, and change it back afterwards to whatever .NET Framework version you need. This is a one-time activity; later, adding this package from a project which targets 4.5 will also work.

Once you have fixed all the issues and created the model, you get the model diagram as shown above. Now we will try to add a stock into the database using Entity Framework. For this we create a class in the DataService project as shown below.

public class StockService
{
    private StockTradeEntities conn;

    public StockService()
    {
        conn = new StockTradeEntities();
    }

    public void AddStock(string stockCode)
    {
        Stock st = new Stock();
        st.name = stockCode;
        // Add the entity to the context and persist it
        // (Stocks is the entity set generated from the table).
        conn.Stocks.Add(st);
        conn.SaveChanges();
    }
}

Now reference this project from the main application and call AddStock(); this should add the stock item to the database. But when you run the application and call AddStock, you get the error below.

Issue 2

The Entity Framework provider type 'System.Data.Entity.SqlServer.SqlProviderServices, EntityFramework.SqlServer' registered in the application config file for the ADO.NET provider with invariant name 'System.Data.SqlClient' could not be loaded. Make sure that the assembly-qualified name is used and that the assembly is available to the running application.

  All of our SQLite settings are added to the app.config of the main application, and the SQLite dlls are expected to be present in the output folder alongside that app.config. We could fix this by adding the NuGet package reference to the main application as well, but that breaks the conventions of a well-structured application: in an enterprise application we expect functionality to live in separate modules for better maintenance, and referencing one module's components from the main application doesn't make sense. What we can do instead is change the output directory of the DataService project to point to the same output directory as the main application. That way all the dlls referenced by the SQLite module are available in the same folder as the application config file, and the issue is fixed.

Issue 3

Cannot find the SQLite Data source in visual studio data source list even after installing SQLite component for Visualstudio.

  If you are getting this issue, make sure that you have added the SQLite NuGet package to the project in which you are trying to create the model. This step is necessary for the data source to be listed in Visual Studio: adding the NuGet package adds certain entries to the app.config, and these entries are what Visual Studio needs in order to list the component among the data sources.

Now this is all about setting up EntityFramework and SQLite. Now you can build more functionality on top of this. :)

Tuesday, 20 October 2015


    Starting with IIS 7, IIS can host applications which run in separate app-domains. Running applications in separate app-domains is required when applications must work in isolation, i.e. to prevent applications in one application pool from affecting applications in another application pool on the server. An application can have several virtual directories, and each one is served by the same app-domain as the application to which it belongs. In most cases it is fine to run an application in the default app-domain created by IIS. But in special scenarios — for example, if you want your application served by a different version of the runtime — you can create an app-domain specifying that runtime version and use it to serve the application; the same applies if you want your applications to work in complete isolation. In this post we will see how to host multiple WCF services under one website, with each service running in isolation in a separate app-domain.

Hosting WCF service using separate app-domain.

  An application is an object important to the server at run-time. Every website object in IIS must have an application, and every application must have at least one virtual directory, where each virtual directory points to a physical directory path. When you create a website object in IIS, you are creating a default application, and an application can have several virtual directories. To learn how to host a WCF service in the default application in IIS, refer to the article "How to host a WCF service in IIS 8". Let us now examine how we have created a website object with a default application.

  In IIS, in the Connections section, right-click on Sites and select "Add Website"; this opens the website object (default application) creation wizard shown in the image above. "Site name" is the name of the site (application) you want. "Physical path" should point to the root directory of the website; for demo purposes we will point it to an empty folder, but it could also be the root folder of one WCF service where you have the .svc file. In the "Application pool" section we select the application pool, the so-called app-domain; at the moment we only have the default one, so we leave it as is. Click OK and the website object is ready. What we have done here is create a website object with a default application and a virtual directory pointing to the directory specified in "Physical path", and the default application now runs in the app-domain specified in the "Application pool" section. Next we will create another app-domain.

Creating Application Pool(App-domain)

In IIS, in the Connections section, expand the server node and you can see the "Application Pools" node. Right-click it and select "Add Application Pool"; this opens a window as shown below. Give your new app-domain a name (any unique name), select the version of the runtime you would like the application to run with, and click OK. Your new app-domain is ready to use.

Adding new application object

  Now right-click the website object which we just created and select Add Application; this opens a wizard to create an application object under the website object. In this wizard, Alias is the name we give the application; it also becomes part of the application's URL, so give it a proper name. The Physical path is the root folder of your WCF service, where you have the .svc file. In the Application pool section, select the app-domain you created just before.

Now when you browse the .svc file from the application folder, IIS uses the app-domain you assigned to the application to serve the request. In this way you can create more app-domains and more WCF services, which will then work in isolation.
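The same steps can be scripted with IIS's appcmd utility from an elevated command prompt. A rough sketch, where the site, pool and path names are placeholders for your own setup:

```
REM Create a new application pool (app-domain) running the v4.0 runtime.
%windir%\system32\inetsrv\appcmd add apppool /name:"ServicePool" /managedRuntimeVersion:"v4.0"

REM Create an application under the website and assign it to the new pool.
%windir%\system32\inetsrv\appcmd add app /site.name:"MyWebsite" /path:/Service1 /physicalPath:"C:\Services\Service1"
%windir%\system32\inetsrv\appcmd set app "MyWebsite/Service1" /applicationPool:"ServicePool"
```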


    In this post we will see how we can identify whether the application has been maximized or restored to its normal state. We will use the WPF equivalent of the Resize event, which is the SizeChanged event: we subscribe to it and check the window state to tell whether the window has been maximized or restored to normal.
Application.Current.MainWindow.SizeChanged += WindowSizeChanged;

private void WindowSizeChanged(object sender, SizeChangedEventArgs e)
{
    HandleWindowState();
}

private void HandleWindowState()
{
    WindowState windowState = Application.Current.MainWindow.WindowState;
    if (windowState == WindowState.Maximized)
    {
        // The window has been maximized.
    }
    else if (windowState == WindowState.Normal)
    {
        // The window has been restored to its normal state.
    }
}
And let's not forget to clean up by unsubscribing from the SizeChanged event:
Application.Current.MainWindow.SizeChanged -= WindowSizeChanged;