Setting up new .Net projects

Intro

I have been a developer for somewhere around 30 years now. In my time I have created a few thousand projects of varying sizes, and with every new iteration I try to improve slightly on the previous. I also see a lot of code from others, and most of what I see doesn't seem to care much about project setup. So I thought I would share how I set up new .Net projects.

This is not a guide to proper architecture or camel casing, and my preferences may vary from yours. You may even find some expert who did a lot of thinking and wrote a book on the subject that disagrees with me. That is fine; you should combine what you learn and find what suits you. The goal here is to have a clean, consistent setup. I will try to give the reasoning for why I do it the way I do, starting with some background.

So is this important? Well, failing to name a project properly will usually not break your code. But other people may come into the project later, and an intuitive, easy setup will make their life easier. And making the right decision on platform and the initial split of code can save you some refactoring down the road.

Background

At the time of writing we are half-way through 2018: .Net 4.7.2 is out, and .Net Core has matured into the clearly preferred choice for many projects. .Net Core WPF and WinForms support on Windows will be released soon, and .Net Standard 2.0 is widely supported.

.Net Standard

.Net Standard is a standardization of .Net frameworks. It is not actual interfaces, but it may help to think of it as a set of common interfaces every .Net implementation must offer: a minimum and uniform set of features all .Net implementations must provide.

This means that if you write a project for .Net Standard 2.0 it will work on all platforms that support .Net Standard 2.0. For example .Net on Windows, Mono, Unity, Xamarin (Android and iOS) and UWP. See here for a detailed list.

A cool feature is that we get binary-compatible files. Authors of NuGet packages only need to keep their package within .Net Standard and compile once to support all these platforms. For example, Entity Framework Core was released for .Net Core, but it also runs on .Net 4.6.1. Want NLog for your Unity 3D project? No problem.

So if you are writing a class library, why not just make it .Net Standard? Well, you could end up needing to pull in dependencies that are not .Net Standard. But that is not a problem: you can change the target from .Net Standard to .Net or .Net Core at any time.

Start with .Net Standard on any new library project.

.Net Core

It's cross-platform, fast(!) and generally better in many ways. It is community driven, and a chance to start fresh and drop a lot of old .Net legacy. If you are starting a new project then most likely this is what you want to use. For example, one of the latest cool features allows you to compile a stand-alone executable which contains the .Net Core framework and your own code in a single .exe, so the computer running it doesn't need .Net installed.

With .Net Core a new csproj format was introduced that gives some advantages over the old one. First of all you get an "Edit csproj" menu item when you right-click the project, so you can edit it directly. This is nice since it allows you to easily compare and copy-paste, for example references, from other projects. It also supports implicit inclusion of files ("*.cs"), automatically loads referenced projects (you load business.csproj, which automatically loads database.csproj), has better support for NuGet references and automatically resolves their dependencies. Generally this translates to less stuff for you to think about. But note that it currently doesn't support WPF, Windows Forms, ASP.Net Web Forms/ASP.Net 4, etc…
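As a rough illustration (project and package names here are just placeholders, and versions will vary), a new-style csproj for a web project is not much more than this:

<Project Sdk="Microsoft.NET.Sdk.Web">

  <PropertyGroup>
    <!-- Changing target later is a one-line edit, e.g. net472 instead of netcoreapp2.1 -->
    <TargetFramework>netcoreapp2.1</TargetFramework>
  </PropertyGroup>

  <ItemGroup>
    <!-- NuGet references; their own dependencies are resolved automatically -->
    <PackageReference Include="Microsoft.AspNetCore.App" />
    <!-- Referenced projects are loaded automatically when the solution opens -->
    <ProjectReference Include="..\Customer.Solution.Business\Customer.Solution.Business.csproj" />
  </ItemGroup>

</Project>

Note that *.cs files are included implicitly, so there is no long list of file entries to maintain.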

If you need .Net 4.x due to references, interop or whatever, then simply change the target: open the csproj file and change from netcoreapp to net. See here for a list of supported targets.

The cool thing about the .Net Core framework packages on NuGet is that they are also compiled for .Net Standard, so yes, you can use new .Net Core features in .Net 4.7.

Start with .Net Core on any new ASP.Net or Console project. 

Create new solution

This always starts with creating a new project. I name the solution only "Customer.Solution", and I name the project "Customer.Solution.Project". To me this is pretty obvious, but I would guess nearly all of the projects I have seen are named either "Website" (or "WebSite"), or something cryptic like "DynaBo" or "DFN" that only those in the inner circle know the meaning of. If the customer name is long I usually shorten it; three to eight letters is fine.

Giving the project this full name also sets the namespace to the same name, and puts it in a folder named the same. This makes it consistent and very easy to understand what belongs where, especially if I end up pulling in a library from another solution. There are several ways to go to achieve this, but I prefer this because of the simplicity of it.

Notice that the location is set to “C:\Source\<solution>”, and solution name is set to “src” (yes, contrary to what I just said about solution names).

Click OK and a wizard will ask you what project type you want. I often start with WebApi or MVC.

The first thing to do now is to rename the solution from "src" to "Tedd.TestProject": right-click the solution and click Rename.

Once this is done I am following a standard folder layout in line with giants such as ASP.Net, .Net Core, jQuery, Node.js and many, many more. By default Visual Studio will simply create a folder for the solution and a subfolder for each project. This is fine for many projects, but with source control and the need to add more folders and files it is nice that the projects are not in the root folder. I don't always do this, though.

Since our root folder now only contains “src” there is plenty of room for other folders such as “docs”, “tools”, “thirdparty”, etc…
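The end result is a repository root along these lines (everything except src is of course optional, and the project names are just the ones used in this example):

C:\Source\Tedd.TestProject\
    docs\
    src\
        Tedd.TestProject.sln
        Tedd.TestProject.WebUi\
    thirdparty\
    tools\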

ThirdParty-folder

Sometimes I need to add references to assemblies that are not available on NuGet or we need to add some source code from GitHub.

Some developers reference a file on their own disk, such as "C:\Program Files\SomeVendor\assemblies\SomeFile.dll", and say "you must install the vendor package to run this project". That is almost always wrong; that .dll file, with its dependencies (or even the whole folder), can be put into the ThirdParty-folder.

Others put the file directly into the bin-folder (yes, the transient compile output folder) and wonder why the solution is not working after I go over and clean up the SVN or Git ignores.

Often libraries are shared by multiple projects in a solution, so I use a place outside the solution itself to put third party files. That is also a good place to keep a copy of the license for these files.

Git or SVN ignore

When using source code repositories there are a lot of "junk" files I don't want committed/pushed/checked in. For example, being asked to check in all binary files every single time can be a hassle, and checking them in does nothing more than waste space and lead to merge conflicts. They are recreated every time I compile, so there is no point in saving them. The same goes for user settings files, the Visual Studio temp folder, the ReSharper-folder, etc…

If I am using Git I simply google github visual studio ignore, grab https://github.com/github/gitignore/blob/master/VisualStudio.gitignore and save it as .gitignore inside the src-folder. If I add source files in the ThirdParty-folder I may want to copy it to those source folders too.

If I am using SVN I'll need to manually add ignores for a few files and folders. For each project you want, at minimum, to ignore the bin and obj-folders, and any .suo file. Look at the .gitignore for the rest.
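A rough way to do that from the command line (the pattern list here is just the minimum, extend it from the .gitignore) is to set the svn:ignore property recursively from the working copy root:

svn propset svn:ignore "bin
obj
*.suo
*.user
.vs" . --recursive

Note that this stamps the same list on every existing folder, so it is a blunt tool; folders added later will need the property set as well.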

Experiment: Run ASP.Net Core on >=.Net 4.6.1

I did this so you know it can be done if you ever need to. Before changing anything, run your project and see that it works on .Net Core.

This is just to demonstrate that it is possible. Unless you have a very good reason to run ASP.Net Core on .Net you should not do this.

As I mentioned earlier, .Net Standard is good. ASP.Net Core is written to be highly modular, where much of the key functionality is downloaded over NuGet. And most of these components target .Net Standard 2.0. This means that they run just fine on .Net 4.6.1 and up. We will use .Net 4.7.2, which is the latest at the time of writing.

Right click the project (Tedd.TestProject.WebUi) and select “Edit Tedd.TestProject.WebUi.csproj”.

As expected the target framework is “netcoreapp2.1”, which translates to .Net Core 2.1.

Replace "netcoreapp2.1" with "net472" (from this list). (Note that .Net Standard can't be used for executables such as web or console applications.)

The package "Microsoft.AspNetCore.App" (which recently replaced "Microsoft.AspNetCore.All" to help lower third party dependencies) is a combined package containing all the packages needed to do ASP.Net development on .Net Core. It does not target .Net Standard because it contains a few packages that are .Net Core specific, mainly the .Net Core runtime and libuv. Libuv is a native library used by the Kestrel web server; it will soon be replaced by a managed sockets library. We can download "Microsoft.AspNetCore.App" and add the packages manually, removing the two that don't work. The config we end up with then is:
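I won't reproduce the exact package list here, but the shape of the resulting csproj is roughly this (an illustrative subset; the real list is whatever Microsoft.AspNetCore.App pulls in, minus the .Net Core-only pieces):

<Project Sdk="Microsoft.NET.Sdk.Web">

  <PropertyGroup>
    <TargetFramework>net472</TargetFramework>
  </PropertyGroup>

  <ItemGroup>
    <!-- Individual ASP.Net Core packages instead of the Microsoft.AspNetCore.App meta package -->
    <PackageReference Include="Microsoft.AspNetCore" Version="2.1.0" />
    <PackageReference Include="Microsoft.AspNetCore.Mvc" Version="2.1.0" />
    <PackageReference Include="Microsoft.AspNetCore.StaticFiles" Version="2.1.0" />
    <!-- ...and so on for the remaining dependencies -->
  </ItemGroup>

</Project>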

I had to "Upgrade" (force download) all components to make them work, since the already downloaded versions are for .Net Core. Something like "Update-Package -Reinstall -ProjectName Tedd.TestProject.WebUi" from the Package Manager Console should work. And with this config we are running on .Net 4.7.2. Fantastic. That was fun. Now reset it back to .Net Core by copying the initial config back.

Adding libraries

I usually don't put all my code in one project, unless it's a small project with a clear and limited scope. I allow myself to be a bit pragmatic in my choices, so I don't go for a full architecture for everything. But when I do, there are some guidelines I follow. I will get back to them in a later blog post, so for now just a few samples.

Database

First out in our example is the database project.

Again I name it with its full namespace. This time I use a .Net Standard Class Library. As I demonstrated in the experiment above, you can easily change this to .Net 4.6.1 or above, or .Net Core. The difference is that a library doesn't use "Microsoft.AspNetCore.App", so you don't have to create all those references. Since .Net Standard can only reference .Net Standard, changing it doesn't even require re-downloading any packages.

Notice after adding the .Net Standard library that it is using the new csproj format, so it can be edited to use a new target. And if we add Entity Framework Core through NuGet we see that it also uses .Net Standard 2.0.
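For reference, the whole csproj for such a library is roughly this (the EF Core version is just an example):

<Project Sdk="Microsoft.NET.Sdk">

  <PropertyGroup>
    <TargetFramework>netstandard2.0</TargetFramework>
  </PropertyGroup>

  <ItemGroup>
    <PackageReference Include="Microsoft.EntityFrameworkCore" Version="2.1.1" />
  </ItemGroup>

</Project>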

Models

Sometimes models must live in their own project to make dependencies work. Other times they can be split between the Database and main projects. This depends on the project. Either way I almost always put models into a project or folder named Models, possibly with subfolders if the project is big enough. Separation of data and logic doesn't seem to be very clear to many, and I see a lot of projects that mix this. I say almost always, because there are reasons to do it differently in some cases. But for a vanilla web project I think clarity and least surprise are important.

Enums

I do the same for enums, where I put them in a separate folder. The reason is that I usually don't navigate much to enums, and when I do it's through F12/ctrl-click or similar. The reason for defining an enum is usually to share it between multiple parts of the code, so it doesn't necessarily make sense to put them along with the code files.

Other things…

  • One class per file, even enums.
  • Namespace always matches folder structure.
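For example, a model class in the database project would end up like this (Customer is just an illustration):

// File: src\Tedd.TestProject.Database\Models\Customer.cs
namespace Tedd.TestProject.Database.Models
{
    public class Customer
    {
        public int Id { get; set; }
        public string Name { get; set; }
    }
}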

Summary

  • Naming convention such as Customer.Solution.Project should be followed. The project's name is the same as the root namespace, which is the same as the folder name. Clear and consistent.
  • A standard folder structure makes it easier to navigate the project.
  • .Net Core projects are preferred, and can run under .Net 4.6.1 and up.
  • .Net Standard libraries are preferred, and can easily be changed into another target if required.

sizeof() vs Marshal.SizeOf()

To get the size of a data type in .Net you can use sizeof() or Marshal.SizeOf().

I’ll briefly explain the difference between the two.

sizeof()

sizeof()  (MSDN) can only be used on data types that have a known size at compile-time. It is compiled as a constant. If you attempt to use it on invalid data types you get a compiler error CS0233.

This means it has zero overhead during execution.

Example

Compiles to IL code:
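The original listing is not reproduced here, but a minimal sketch (using int as the example) shows the idea, with the IL it roughly compiles to as comments:

public static class SizeOfExample
{
    public static int IntSize()
    {
        // Evaluated at compile time; no call is emitted.
        return sizeof(int);
    }
}

// The method body compiles to IL along the lines of:
//   ldc.i4.4   // the constant 4 is baked into the IL
//   ret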

Marshal.SizeOf()

System.Runtime.InteropServices.Marshal.SizeOf() (MSDN), however, works on struct data types at runtime by measuring their size. This means both the overhead of calling the method, as well as extra work for boxing and measuring.

Example

Compiles to IL code:
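Again a minimal sketch (MyPoint is a placeholder struct), with the approximate IL as comments:

using System.Runtime.InteropServices;

public struct MyPoint
{
    public int X;
    public int Y;
}

public static class MarshalSizeOfExample
{
    public static int PointSize()
    {
        // Resolved at runtime; the call remains in the compiled code.
        return Marshal.SizeOf(typeof(MyPoint));
    }
}

// The method body compiles to IL along the lines of:
//   ldtoken    MyPoint
//   call       System.Type System.Type::GetTypeFromHandle(System.RuntimeTypeHandle)
//   call       int32 System.Runtime.InteropServices.Marshal::SizeOf(class System.Type)
//   ret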

Peeking inside

If we look at the source code (referencesource):

From this we can see that the struct will be measured by an external API call. If a Type is provided, extra checks are made. If an object is provided as the parameter, boxing from struct to object occurs.

Caveats

In two cases Marshal.SizeOf<T>()  will differ from sizeof(T) .

This has to do with how .Net marshals data types to unmanaged code.

The managed data type Char (2 bytes) is identified as the Windows type SBYTE (1 byte), and the managed data type Boolean (1 byte) is identified as the Windows type BOOL (4 bytes).
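A quick way to see both cases for yourself (the values in the comments are what I would expect on the desktop framework):

using System;
using System.Runtime.InteropServices;

public static class SizeCaveats
{
    public static void Main()
    {
        Console.WriteLine(sizeof(char));            // 2 - managed size
        Console.WriteLine(Marshal.SizeOf<char>());  // 1 - marshaled as a 1-byte character
        Console.WriteLine(sizeof(bool));            // 1 - managed size
        Console.WriteLine(Marshal.SizeOf<bool>());  // 4 - marshaled as Windows BOOL
    }
}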

Cost of method wrapper

Introduction

What happens if a method is just a wrapper for another method? Is the extra jump optimized away by the compiler? Does it take much time? I thought I'd look into this and measure a bit. With the different compilers, JITs and runtimes I thought it would be fun to see what happens.

I'll use an == operator implementation calling IEquatable<T>.Equals(T other) for testing. A good practice when creating structs is to implement Object.Equals, GetHashCode(), IEquatable<T>, op_Equality (the == operator) and op_Inequality (the != operator). (Read more on Microsoft docs.) Since Object.Equals(object), Equals(T other), op_Equality and op_Inequality all more or less implement the same logic, I figured one could just call the other. So what's the cost?

Note that this is not about optimization. The cost we are talking about here is negligible compared to the rest of your code, so this is purely for fun.

And this is not an attempt to measure the cost of an additional JMP, which is well documented and even varies depending on scenarios.

Test setup

I use public variables and use Count for something after the run, since I thought I had some issue with RyuJit being too smart.

OpEqualsDirect

OpEqualsIndirect
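The original listings are not reproduced here, but the two variants are roughly as follows (MyStruct is a placeholder; "direct" repeats the comparison in the operator, "indirect" just forwards to Equals):

using System;

public struct MyStruct : IEquatable<MyStruct>
{
    public int A, B, C;

    public bool Equals(MyStruct other)
        => A == other.A && B == other.B && C == other.C;

    public override bool Equals(object obj)
        => obj is MyStruct other && Equals(other);

    public override int GetHashCode()
        => A ^ (B << 8) ^ (C << 16);

    // Direct variant: the operator implements the comparison itself.
    // public static bool operator ==(MyStruct left, MyStruct right)
    //     => left.A == right.A && left.B == right.B && left.C == right.C;

    // Indirect variant: the operator is just a wrapper around Equals.
    public static bool operator ==(MyStruct left, MyStruct right)
        => left.Equals(right);

    public static bool operator !=(MyStruct left, MyStruct right)
        => !left.Equals(right);
}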

Decompiled

OpEqualsDirect

This one is pretty much as we would expect.

Bytecode hex

IL

C#

OpEqualsIndirect

My first question was whether the extra jump would be optimized away. I can’t see that from decoding the method directly, but we see for reference that it loads the argument and calls Equals on the struct instance. Pretty much as expected.

Bytecode hex

IL

C#

Callee

So how about the callee? Is it optimized away?

No, it is calling op_Equality  which in turn is calling IEquatable<T>.Equals(T other) .

Benchmark

All of this is before the Jit. So let's see how it performs with some test runs.

For the test I'm doing a lightweight operation where I am comparing three ints, and it will fail on the third.

Method | Job | Jit | Runtime | Mean | Error | StdDev | Scaled | ScaledSD | Allocated
‘Direct op_equals’ | LegacyJit-Mono | LegacyJit | Mono x64 | 13.4742 ns | 1.0661 ns | 0.0602 ns | 1.00 | 0.00 | N/A
‘Indirect op_equals’ | LegacyJit-Mono | LegacyJit | Mono x64 | 15.5428 ns | 6.9294 ns | 0.3915 ns | 1.15 | 0.02 | N/A
‘Direct op_equals’ | Llvm-Mono | Llvm | Mono x64 | 13.4156 ns | 5.5125 ns | 0.3115 ns | 1.00 | 0.00 | N/A
‘Indirect op_equals’ | Llvm-Mono | Llvm | Mono x64 | 15.9306 ns | 8.7020 ns | 0.4917 ns | 1.19 | 0.04 | N/A
‘Direct op_equals’ | RyuJit-Clr | RyuJit | Clr | 0.9740 ns | 0.8871 ns | 0.0501 ns | 1.00 | 0.00 | 0 B
‘Indirect op_equals’ | RyuJit-Clr | RyuJit | Clr | 1.1444 ns | 1.1916 ns | 0.0673 ns | 1.18 | 0.07 | 0 B
‘Direct op_equals’ | RyuJit-Mono | RyuJit | Mono x64 | 14.6879 ns | 4.9166 ns | 0.2778 ns | 1.00 | 0.00 | N/A
‘Indirect op_equals’ | RyuJit-Mono | RyuJit | Mono x64 | 15.8684 ns | 4.6367 ns | 0.2620 ns | 1.08 | 0.02 | N/A


Result

For the most part we can see a penalty of 8% to 19% in our simple test scenario. None of the compilers/JITs optimize away the jump. However, we can see that RyuJit on the Clr is doing some black (register?) magic here. It still has the relative overhead of 18%, but it is much faster than the other runtimes.

Speeding up Unity's Vector in lists/dictionaries

Introduction

With this post I am digging into some performance improvements for Unity 3D's Vector2, Vector3, Vector4 and Quaternion. The short version is that they really need IEquatable<T> and could benefit from a better GetHashCode(). I'm demonstrating this with an example of how the lack of them severely decreased performance in my project.

Adding IEquatable<T> has no side-effects; it is actually best practice and documented. More info can be read at https://docs.microsoft.com/en-us/dotnet/api/system.iequatable-1?view=netframework-4.7.2 Quote from the docs:

For a value type, you should always implement IEquatable<T> and override Equals(Object) for better performance. Equals(Object) boxes value types and relies on reflection to compare two values for equality. Both your implementation of Equals(T) and your override of Equals(Object) should return consistent results.

Voxel engine

Christmas, spare time, and Unity got support for 32-bit mesh indexes, so I thought I would look into writing a voxel rendering engine and optimizing it a bit.

The engine is basically a Minecraft-style block based rendering engine. This is pretty basic stuff. You store volumetric information about the whole world in a format that can be divided up and rendered part by part. A common approach is to use chunks of 16x16x16 (4096) blocks, each consisting of a number which references a mesh model (often a cube) and a texture. Even air is represented as a number.

The trick with voxel rendering is always what you do not render. In a flat world with completely filled ground you would only render the flat surface of the ground, not the air, and not the walls in-between each block underground. So two sides of a block facing each other should not render. In the picture above I have randomized the data (air / solid block) to challenge the engine a bit. My test setup keeps randomizing the data at a very high rate.

The format I chose was the simplest I could think of. Each block has a set of triangles, and each triangle has a direction. This allows me to determine which sides not to render when blocks are adjacent to each other (covering each other's sides). Rendering the blocks is a simple for-loop iterating the chunk's volumetric data.

In the initial setup I would generate all meshes (2 per surface) for each side (6 sides) for each cube (32,768 cubes) regardless. This meant uploading 393,216 meshes to the graphics card, though only the triangles and normals actually needed were uploaded, which could be anywhere from 0 to 32,768×6×2. I chose a chunk size of 32x32x32 (32,768 blocks), which put it at around 1 chunk every few seconds.

Optimizing: 2-4 chunks/sec

The initial optimization was to only add the meshes actually needed, which worked fine and speed was much better. To optimize it further I used a Dictionary<Vector3, *> to keep track of existing positions. This allowed me to not add the same position twice, and hence in corner situations share mesh positions between triangles. It's not a huge saving in terms of vertices, but I was interested in seeing the result.

The result was not encouraging: 2-4 chunks per second. Which was interesting. I have tested the speed of Dictionary<TKey, TValue> before and found it to perform very well. Dictionary operates on an int that it gets from GetHashCode(), and uses object comparison to sort out collisions. Using Vector3 as the key would add a GetHashCode and 3 float compares. This adds a few more CPU instructions to the mix, but it's not a major change.

Profiling

So what is happening here? Why is it slow? Luckily these are questions profilers like to answer. And it just so happens that I write my projects in a shared setup with a unit test runner outside of Unity, so I can run a proper profiler on it.

From this we see that Dictionary is spending 50% of total rendering time. That seems a bit odd. I am using Vector3 as key in the dictionary, so next step is to decompile UnityEngine.dll to have a look at Vector3.

Decompiling Unity's Vector3

Vector3 contains a lot of functions and data. I have extracted the bits relevant to Dictionary. GetHashCode()  is used to get an int which is used for indexing. Since the hashcode can cause collisions (two different objects share same hashcode) Dictionary will compare the objects to see that they are in fact different, then keep a list of them.

Immediately we can see that Vector3 does have GetHashCode, that is nice.

Where is IEquatable<T>?

Vector3 is lacking IEquatable<Vector3> .

Non-generic collections such as ArrayList that operate on Object require Object.Equals() to be overridden for structs, or they will fail on .Remove and .Contains. But generic collections such as Dictionary<TKey, TValue>, HashSet<T> and List<T> require both GetHashCode() (except for List<T>) and IEquatable<T>. Lacking this implementation, Dictionary is forced to use Object.Equals, which as we see takes an "object" parameter. Passing a struct (value type) as a parameter to an object type causes boxing, which in turn produces work for the garbage collector.

This means that any List<T>.Contains(), List<T>.IndexOf(), List<T>.Remove(), Dictionary<TKey, TValue> and HashSet<T> operations would generate objects that the GC has to clean up. In the case of a list, one object for every Equals-operation, meaning in some cases one object for every item in the list for every operation. That is pretty bad.

Tip! You can implement an IEqualityComparer<T> and pass it to your strongly typed collections, such as Dictionary<TKey, TValue>(IEqualityComparer), to work around this problem.
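A sketch of that workaround for Unity's Vector3 (without touching Vector3 itself) could look like this:

using System.Collections.Generic;
using UnityEngine;

// Comparer that avoids the boxing caused by Vector3 lacking IEquatable<Vector3>.
public class Vector3EqualityComparer : IEqualityComparer<Vector3>
{
    public bool Equals(Vector3 a, Vector3 b)
        => a.x == b.x && a.y == b.y && a.z == b.z;

    public int GetHashCode(Vector3 v)
        => v.GetHashCode();
}

// Usage:
// var positions = new Dictionary<Vector3, int>(new Vector3EqualityComparer());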

Testing with IEquatable<T>: 100 chunks/sec

So I might have a clue as to why things are slower than expected. I implement a simple FasterVector3 which I just initialize from a Vector3. I get a slight overhead from copying the data into a new object, but still, let's see…
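FasterVector3 is essentially just this (a sketch, not the full implementation from the benchmark source; the hash is the same style as before, only IEquatable<T> is added at this point):

using System;
using UnityEngine;

public struct FasterVector3 : IEquatable<FasterVector3>
{
    public float X, Y, Z;

    public FasterVector3(Vector3 v)
    {
        X = v.x;
        Y = v.y;
        Z = v.z;
    }

    // Strongly typed Equals: no boxing when used as a Dictionary/HashSet key.
    public bool Equals(FasterVector3 other)
        => X == other.X && Y == other.Y && Z == other.Z;

    public override bool Equals(object obj)
        => obj is FasterVector3 other && Equals(other);

    public override int GetHashCode()
        => X.GetHashCode() ^ (Y.GetHashCode() << 2) ^ (Z.GetHashCode() >> 2);
}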

So that took us from a variable 2-4 chunks per second up to a fairly stable ~100 chunks per second. It was variable before because GC was working “randomly”, while now we have zero GC. I will get back to proof of that a bit later.


Profiling looks slightly better, but not quite what I expected. We have solved the GC problem, which gave us a fairly good speed increase. But Dictionary time only decreased by around 10 percentage points. The next suspect on my list is GetHashCode().

GetHashCode()

The purpose of GetHashCode() is to provide an Int32 hash of the object. Although this is not a unique representation of the object, it gives lookup collections such as Dictionary<TKey, TValue> and HashSet<T> something to index on. If two different objects return the same hashcode we have what's called a collision, forcing Dictionary to revert to a flat-list mode of operation for the colliding objects. Immediately I can see that I don't like that >>2 bitshift, which causes loss of precision. But these things are a bit tricky, and a more extensive test setup is required.

It should be noted that the GetHashCode() call on a float returns the 32-bit raw value of the float, which is a very good representation of the value (0 collisions on a single float). Running a quick test of Unity's implementation shows that as long as you keep the values low (within +/- 4 million of 0) there are no collisions on Z<<2, but once you get past that, similar numbers start colliding. Meaning edge cases are a problem: the further from 0 you get, the bigger the problem.

Collision on 10000000: Hashcode 314975648
Collision on 10000001: Hashcode 314975648
Collision on 10000002: Hashcode 314975648
Collision on 10000003: Hashcode 314975648
Collision on 10000004: Hashcode 314975649
Collision on 10000005: Hashcode 314975649
Collision on 10000006: Hashcode 314975649
Collision on 10000007: Hashcode 314975649
Collision on 10000008: Hashcode 314975650
Collision on 10000009: Hashcode 314975650
Collision on 10000010: Hashcode 314975650
Collision on 10000011: Hashcode 314975650
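The quick test itself is nothing fancy; something along these lines (it assumes you can reference UnityEngine.dll outside of Unity, like in my test setup), where you can swap in whichever GetHashCode() you want to inspect:

using System;
using System.Collections.Generic;
using UnityEngine;

public static class HashCollisionTest
{
    public static void Run(int from, int to)
    {
        var seen = new Dictionary<int, Vector3>();
        for (int i = from; i <= to; i++)
        {
            var v = new Vector3(0, 0, i);
            int hash = v.GetHashCode();
            if (seen.ContainsKey(hash))
                Console.WriteLine($"Collision on {i}: Hashcode {hash}");
            else
                seen.Add(hash, v);
        }
    }
}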

In my use case a single chunk would never be that large, so I would never encounter these limits. But collisions don't show the whole picture of the internal mechanics of indexing, so we need proper tests to experiment further.

How about precomputing GetHashCode()?

To experiment I added caching of the hashcode to my FasterVector3. When the object is created the hashcode is computed, and after that it is served from memory. That seems to improve my use case a bit, but at the cost of memory: my Vector3 now went from 3×4 (12) to 4×4 (16) bytes, taking up 33% more memory, memory bandwidth and CPU cache. I also short-circuited Equals(Vector3) to check the hashcode first, saving me a few cycles.
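The change is small; a sketch of the cached variant (named differently here just to keep the two sketches apart; the fields have to stay immutable for the cached hash to be safe):

using System;

public struct CachedHashVector3 : IEquatable<CachedHashVector3>
{
    public readonly float X, Y, Z;
    private readonly int _hashCode;

    public CachedHashVector3(float x, float y, float z)
    {
        X = x;
        Y = y;
        Z = z;
        // Computed once here, then served from memory.
        _hashCode = x.GetHashCode() ^ (y.GetHashCode() << 2) ^ (z.GetHashCode() >> 2);
    }

    public override int GetHashCode() => _hashCode;

    public bool Equals(CachedHashVector3 other)
        // Short-circuit on the cached hash before comparing the floats.
        => _hashCode == other._hashCode && X == other.X && Y == other.Y && Z == other.Z;

    public override bool Equals(object obj)
        => obj is CachedHashVector3 other && Equals(other);
}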

Running benchmarks

So far I have used Unity to tell me how many chunks per second I can render. Initially, at 2-4 chunks per second, this was an easy approach, but as the speed increased it became difficult to get a good number because other factors started affecting the result, for example how many chunks were on the screen in total, the render queue mechanism, etc. But the tests showed that further investigation is warranted.

So I set up a test project with BenchmarkDotNet and isolated the components in question. I had so far looked at Vector3, but from decompiling Vector2, Vector4 and Quaternion I can see the same most likely applies.

The tests are fairly basic. I set up a radius of 50, meaning 100x100x100 positions around 0, and add the vectors to a dictionary.

Since I wanted to test the actual performance with some changes to GetHashCode, I also run the tests on an object similar to Unity's Vector3 where the only change I've made is to the hashcode, as well as calling GetHashCode directly on these.
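The shape of the benchmark is roughly this (a sketch using BenchmarkDotNet; the key type is swapped between Unity's Vector3 and the variants above):

using System.Collections.Generic;
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;
using UnityEngine;

[MemoryDiagnoser]
public class Vector3DictionaryBenchmark
{
    private const int Radius = 50;

    [Benchmark]
    public int AddToDictionary()
    {
        // 100x100x100 positions around 0, added to a dictionary keyed on the vector.
        var dict = new Dictionary<Vector3, int>();
        for (int x = -Radius; x < Radius; x++)
            for (int y = -Radius; y < Radius; y++)
                for (int z = -Radius; z < Radius; z++)
                    dict[new Vector3(x, y, z)] = x;
        return dict.Count;
    }
}

public static class Program
{
    public static void Main() => BenchmarkRunner.Run<Vector3DictionaryBenchmark>();
}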

The new and improved Vector3 looks like this.

With the helper object:

The results: 50x-100x improvement in speed

From this we can see that "GetHashCode() using Vector3.Equals(object)" causes allocation, and hence some work for the garbage collector. Using IEquatable<T> consistently avoids this. Note that all Tedd.Vector3.Equals(object) entries are only there for reference, to see the differences.

For Vector4 and Quaternion the result is even better, at almost 150x increase in speed.

Though my implementation of GetHashCode() was slower than Unity's in isolation, tests with only the GetHashCode improvement show it is faster when used in a Dictionary.

While I was at it I noticed that ToString could benefit from a small change too. Normally we don't print Vector3.ToString() that often, but what we may not realize is that if it is included in a log line it usually gets executed even if logging is disabled. And besides, why not make it faster? Though this one only gave us a 1.6x speed increase.

For List<T>, where for 33% of the adds the last item is immediately removed after being added, we see the same problem with garbage collection as well as some increase in speed. Lists are then ~2-4.5x faster.

Conclusion

For my specific use case where I am optimizing meshes before rendering I saw a dramatic performance increase. Not only did the code overall execute 25 times faster, but it was a lot smoother with less garbage collection. These are fixes that would be quick for Unity to test and include in a future release.

I set up isolated tests and ran benchmarks on both Mono and .Net with LegacyJit, RyuJit and Llvm. Though I have not published the results of all 16 tests with ~15 subtests the result is consistent on all of them.

In the isolated use of Dictionary<TKey, TValue>  with IEquatable<T>  we see a performance increase of 2.3 to 4.3x depending on what Jit and platform it runs on. This test has very little searching/removing, so I will re-visit it with better test cases.

In the isolated use of List<T> with IEquatable<T>  and new GetHashCode()  we see a performance increase of 50x to 100x depending on what Jit and platform it runs on.

My implementation of GetHashCode() was slower than Unity's implementation by itself, but was faster in actual use.

Benchmark code (with results)

Remember not to execute code from random strangers on the internet. I provide the code with results for reference.

ZIP-source with results

(PS! I made TeddVector3 x,y,z readonly … Should not be. Also missing overload for == and !=.)

Simple app.config appSettings deserializer (with tree structure)

There are many ways to read configuration files in .Net. Using ConfigurationManager you can easily read, for example, the appSettings section. But usually I want the settings exposed as typed properties in code. The solution often suggested is to create separate sections with a varying degree of manual work, field mapping and XML readers.

Many of the solutions also suggest that you read and convert the type from string on every access, which gives you an overhead if you access config variables a lot. So I figured I'd throw together something simple that works: a base class that supports a tree structure without all the hassle of creating sections, defining types and names, etc… It reads and converts the variables once at startup to avoid conversion overhead on every access of the properties.

In short, Config.SmtpClient.Host corresponds to the appSettings value "SmtpClient.Host". You can add as many levels as you like.

app.config with appSettings section

The config variable names correspond to properties in the classes. When there is a hierarchy of classes, the names in the path are separated by a dot.
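For example (the keys and values here are just illustrations):

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <appSettings>
    <add key="SiteName" value="My site" />
    <add key="SmtpClient.Host" value="smtp.example.com" />
    <add key="SmtpClient.Port" value="25" />
  </appSettings>
</configuration>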

Creating the config object

Config = new ConfigModel();

The implementation

In the implementation of the config model we can create a hierarchy of models. As long as they inherit DictionaryConfigReader and override the mandatory base class constructor, they will automatically read the config variables corresponding to their name.
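A sketch of such a hierarchy, matching the appSettings keys above (names are illustrative, not the original code):

public class ConfigModel : DictionaryConfigReader
{
    // Root of the tree: no key prefix.
    public ConfigModel() : base("") { }

    public string SiteName { get; set; }
    public SmtpClientConfig SmtpClient { get; set; }
}

public class SmtpClientConfig : DictionaryConfigReader
{
    // The mandatory base constructor receives the key prefix ("SmtpClient.").
    public SmtpClientConfig(string prefix) : base(prefix) { }

    public string Host { get; set; }
    public int Port { get; set; }
}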

Baseclass
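The base class is where the work happens. The original implementation is not reproduced here, but a minimal sketch of the idea (reflection over the properties, one-time conversion via TypeDescriptor, recursion into child sections; requires a reference to System.Configuration) could look like this:

using System;
using System.ComponentModel;
using System.Configuration;
using System.Reflection;

public abstract class DictionaryConfigReader
{
    protected DictionaryConfigReader(string prefix)
    {
        foreach (PropertyInfo prop in GetType().GetProperties(BindingFlags.Public | BindingFlags.Instance))
        {
            if (!prop.CanWrite)
                continue;

            // Child sections inherit DictionaryConfigReader and get an extended prefix.
            if (typeof(DictionaryConfigReader).IsAssignableFrom(prop.PropertyType))
            {
                var child = (DictionaryConfigReader)Activator.CreateInstance(
                    prop.PropertyType, prefix + prop.Name + ".");
                prop.SetValue(this, child);
                continue;
            }

            // Leaf values: read "<prefix><PropertyName>" from appSettings and convert it once.
            string raw = ConfigurationManager.AppSettings[prefix + prop.Name];
            if (raw == null)
                continue;

            object value = TypeDescriptor.GetConverter(prop.PropertyType)
                                         .ConvertFromInvariantString(raw);
            prop.SetValue(this, value);
        }
    }
}

The root model passes an empty prefix and each child model extends it with its own property name, which is what produces the dotted key names in appSettings.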