Span in .NET, C# and other languages

By now most .NET developers have probably heard about the new Span types that will be added in the upcoming .NET and C# versions (C# 7.2 to be more precise).
I won’t go into details about what it is and why it’s one of the few new features that are not just syntactic sugar – the post by Stephen Toub explains it very well.
To summarize, it allows the developer to work with “ranges”, or “slices”, defined over array-like data types (arrays, strings, but also unmanaged memory buffers), without copying the data from that range, without allocating new memory on the heap, and with access as fast as using an array.
Even if it’s, strictly speaking, a framework feature (Span, ReadOnlySpan etc.), it’s made possible by several new language-level features.
Why was this necessary in C#? Until now, doing something similar involved working with pointers and the unsafe keyword, or manually passing around a reference plus start/end indices.
Where is this feature really necessary? Mostly in highly optimized code that works with very large arrays.

What I wanted to write about is something else: for a developer, in order to understand such new features more easily – in any language or framework – it pays to learn about other languages, and to take a look at what others are doing.

Let’s look at a simple example in C#:

var arr = new byte[10];
for (byte i = 0; i < arr.Length; i++) arr[i] = i;

Console.WriteLine("\nOriginal array:"); // 0 1 2 3 4 5 6 7 8 9
for (int i = 0; i < arr.Length; i++) Console.Write($"{arr[i]} ");

var slice = new Span<byte>(arr, 5, 2);
slice[0] = 42;
slice[1] = 43;
Console.WriteLine("\nSlice:"); // 42 43
for (int i = 0; i < slice.Length; i++) Console.Write($"{slice[i]} ");

Console.WriteLine("\nOriginal array:"); // 0 1 2 3 4 42 43 7 8 9
for (int i = 0; i < arr.Length; i++) Console.Write($"{arr[i]} ");

and a similar piece of code in Go, where it’s called a slice:

var arr [10]int
for i := 0; i < 10; i++ {
  arr[i] = i
}
fmt.Printf("\nOriginal array: %v", arr) // 0 1 2 3 4 5 6 7 8 9

var slice = arr[5:7]
slice[0] = 42
slice[1] = 43
fmt.Printf("\nSlice: %v", slice) // 42 43

fmt.Printf("\nOriginal array: %v", arr) // 0 1 2 3 4 42 43 7 8 9

In both languages, a span (C#) or a slice (Go) represents a similar concept:
Span: ‘Span is a value type containing a ref and a length’, ‘[spans] represent contiguous regions of arbitrary memory’
Slice: ‘A slice is a descriptor for a contiguous segment of an underlying array and provides access to a numbered sequence of elements from that array’

Are there differences? Certainly – .NET spans can also be defined over strings (ReadOnlySpan) or over a block of memory allocated on the stack (stackalloc). Also, in .NET a Span itself can live only on the stack, even though it can point to arrays allocated on the heap; to store a span-like handle on the heap there is Memory<T>.
Obviously, a Go slice is a built-in language feature, while a .NET Span is a framework feature (one that requires support from the language compiler).

Are there other languages that have the concept of slices? Of course: Fortran, Algol, D, Perl, Python, Ruby etc.
The main difference between them: does a slice point back to the original array, or is it a copy?

As an example, in Ruby slicing creates a copy:

Interactive ruby ready.
> array = [:peanut, :butter, :and, :jelly]
=> [:peanut, :butter, :and, :jelly]
> slice = array[2,2]
=> [:and, :jelly]
> slice[0] = :lettuce
=> :lettuce
> slice
=> [:lettuce, :jelly]
> array
=> [:peanut, :butter, :and, :jelly]

while in the D language, like in C# or Go, a slice points to the original elements:

void main()
{
    import std.stdio : writefln;

    int[] arr = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9];
    writefln("Original array: %s\n", arr); // 0 1 2 3 4 5 6 7 8 9
    int[] slice = arr[5..7];
    slice[0] = 42;
    slice[1] = 43;
    writefln("Slice: %s\n", slice); // 42 43
    writefln("Original array: %s\n", arr); // 0, 1, 2, 3, 4, 42, 43, 7, 8, 9
}
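Python, one of the languages listed above, is interesting here because it has both behaviors: slicing a list creates a copy (like in Ruby), while a memoryview over a bytearray behaves like a span – writes go through to the underlying buffer:

```python
arr = bytearray(range(10))

# memoryview: a non-copying "span" over the buffer
view = memoryview(arr)[5:7]
view[0] = 42
view[1] = 43
print(list(arr))  # [0, 1, 2, 3, 4, 42, 43, 7, 8, 9] - the original was modified

# plain list slicing: a copy, like in Ruby
lst = list(range(10))
sl = lst[5:7]
sl[0] = 99
print(lst)  # [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] - the original is untouched
```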

The conclusion: somebody who makes the effort to learn a bit of the fundamentals, to look beyond their own backyard, will find it much easier to grasp new concepts, or even to switch to a different platform.

An excellent (but somewhat dry) book that took this approach was (in Romanian) ‘Fundamentele limbajelor de programare’ by Bazil Pârv and Alexandru Vancea. Back then (1992) it was more of an academic book, not something you could use to learn practical programming, but the fundamentals were well illustrated.

Posted in .NET, C#, Uncategorized

Detecting code smells with NDepend

I recently had the opportunity to play with NDepend again. In my experience, in many companies the developers, even when they find NDepend useful, don’t push management hard enough to buy it, for various reasons: ‘we have free/built-in similar tools’, ‘it’s overkill for what I need’ and many others.

The truth is that the name itself might be misleading – NDepend is much, much more than ‘visualize the code dependencies’. It has evolved into one of the very few tools in the .NET space that take code analysis, metrics and code quality analysis to a professional level.

Many people talk about the SOLID principles and about detecting code smells, but think they can follow or detect them only ‘manually’, while writing code or doing code reviews. Yes, this is possible with enough experience and care, but we live in a less than ideal world, where we have to work on other developers’ code, pressed by deadlines or other constraints.

Here is an analysis result produced by NDepend on a real-world project, where we had been using FxCop, StyleCop and ReSharper daily for static code analysis (run automatically at build time), all with a carefully selected set of rules, and where periodic code reviews were done after each task was implemented:


NDepend – Queries and Rules Explorer

Of course, since these are the default NDepend rules, some warnings might not be appropriate for our project and some rules might be too strict.

Looking closer, we have at least two methods that are too complex – this is the ‘conditional complexity’ smell. One method has a cyclomatic complexity of 13 – not that much as a number, but when looking at the code, the method has an if, containing a foreach, containing an if with 4 conditions, and inside that another foreach with 2 ifs and a for. Hard to test, hard to maintain and also difficult to understand for somebody not familiar with the code.

NDepend not only measures the complexity, but lets me control exactly how the rule is implemented and explains what the rule means:


Another example – NDepend shows a ‘Types with too many methods’ warning – we have one class with 22 methods (the ‘large class’ smell). This tells me that maybe that class is doing too much, maybe it violates the single responsibility principle, and it might be possible to split it into several smaller classes. Again, I can easily configure the rule if needed:


One more example for today – NDepend warns me about one assembly with poor cohesion. Indeed, the entire library is contained in a single assembly of 89 types, some of which are too tightly coupled to each other. High internal coupling might be a good sign – the assembly has a single responsibility – but on the other hand any change in one class might have ripple effects on other classes, so the cost of a change can be high.

Again it’s possible to see how this figure was computed:


Let’s see which classes are the main offenders: NDepend / Metrics / Most Coupled / Type:


This might signal some ‘god’ classes that are doing too much.
With NDepend we can visualize the dependencies even better:


That big green line shows me that we have a class that uses most of the other classes, and that it might have too many responsibilities.

Some developers might argue that they don’t have time to analyze all of this daily, or that some analysis rules can be subjective – indeed, but they can be used as a starting point for detecting problems in the code.

Also, it is not the absolute metric values or the number of warnings that matter, but the trends – has the code quality degraded in recent weeks? NDepend also makes it possible to visualize these trends over time, but about this maybe in a future episode 🙂

These are just a few reasons why, for a complex project, it’s worth investing in a tool like NDepend – the built-in features in Visual Studio are limited, and that’s a good thing: Microsoft, even if it has the capacity to produce such a tool, should leave room for third-party vendors dedicated to this.
A good sign in this respect is when the authors/vendor invest time in writing about and explaining the concepts they care about.

Posted in .NET, Code Quality, design

When the length of a database field can be misleading

The length of a database field, at least on MS SQL Server, is not what many people think it is.

Let’s say that I have a 10-character field, nvarchar(10):
table structure

Surely we can insert a row with 5 chars in this field:
Insert 5 chars
and the result is the expected one:
Successful run

Let’s try to insert a 6th character – it should work fine, right?
Adding 6 chars

Well, not really 🙂

Run failure

What happened?

By default, SQL Server uses the UCS-2 encoding for nvarchar columns.
UCS-2 represents each character on 16 bits (2 bytes) – 65,536 characters should be enough for everybody, right? 🙂
Well, not exactly 🙂 Since 2001, many more characters have been added to the Unicode standard, reaching a total of 120,737 today (2015, Unicode 8.0). These clearly can’t all be represented on only 2 bytes, so 3 or 4 are needed.

In our case, A, B, C, D… are not the letters of the Latin alphabet, but… ‘MATHEMATICAL BOLD CAPITAL A, B, C…’:

In UTF-16, each of these characters is represented on 4 bytes; ‘A’ (U+1D400), for example, becomes the surrogate pair 0xD835 0xDC00 (hex).
MS SQL Server will happily accept such a string, but by default will count each character as 2. The same happens in the .NET Framework, which will return a Length of 12 for the 6-character string above:
Get the string length

Length in .NET
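The same arithmetic can be cross-checked from Python, where len() counts Unicode code points rather than UTF-16 code units; re-encoding to UTF-16 reveals the 12 code units that .NET’s String.Length counts. The string below is built from U+1D400..U+1D405 (‘MATHEMATICAL BOLD CAPITAL A’..‘F’):

```python
s = "\U0001D400\U0001D401\U0001D402\U0001D403\U0001D404\U0001D405"  # 6 bold capitals

print(len(s))                           # 6  - Python counts code points
print(len(s.encode("utf-16-le")) // 2)  # 12 - UTF-16 code units, what .NET's Length counts
```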

Posted in .NET, SQL Server

Debugging a performance issue in production

One of the projects I’m working on has a component with a very simple task: it reads a record from a database table and, based on it, sends a message to Microsoft Windows Service Bus. Then the next record is read, and so on, until no more rows are found in the table.

Somebody noticed in the production log files that the application runs very slowly – one message is sent every 5 or 6 seconds. Even if Service Bus is not the brightest piece of software, it has no reason to be that slow.
Time to check what’s going on.

Since the app is in production, we are not allowed to just attach a debugger to it. And when running it locally for just a few minutes, the issue does not reproduce. What can be done?

[ to protect the innocent, all examples below are ‘anonymized’ ]

I asked the support team to take a memory dump from the running application, using the Task Manager:
Task Manager Memory Dump
– obviously, my process was not Chrome 🙂

I tried to analyze the .DMP file with Visual Studio 2013 Ultimate – no luck – I got a ‘memory analysis could not be completed due to insufficient memory’ error, because the DMP file is over 780 MB, which seems too much for my 11 GB of free memory. 🙂

Let’s try the big guns – WinDbg (which nowadays can be downloaded as part of the Windows SDK).

After opening the 32-bit version of WinDbg (the application was compiled with AnyCPU and ‘Prefer 32 bit’) we have to set-it-up for .NET (CLR) applications:
– File / Symbol File Path: SRV*;c:\symbols*;
– File / Open Crash Dump
– type .loadby sos clr in the WinDbg command prompt, in order to load the SOS debugging extension; the ‘clr’ parameter is there because we are on .NET >= 4.0

Next, let’s take a look at the heap:

0:000:x86> !dumpheap -stat
MT Count TotalSize Class Name
73309888 1 12 System.ServiceModel.Diagnostics.Utility
69725164 1 12 System.Collections.Generic.ObjectEqualityComparer`1[[System.Linq.Expressions.LabelTarget, System.Core]]
. . .
. . .
02cd0690   421577     16863080 System.Data.Entity.Core.EntityKey
05373330   421575     18549300 System.Data.Entity.Core.Objects.Internal.EntityWrapperWithoutRelationships`1[[SampleDomain.NotificationChange, Sample.Domain]]
0537e2ac   421575     23608200 System.Data.Entity.Core.Objects.EntityEntry
667ac484  2105915     25270980 System.Int32
667a8e58  1257983     30191592 System.Guid
667a6f1c  2544457     30533484 System.Boolean
667aacc4   440021     32087956 System.String
009fe608   421575     67452000 Sample.Domain.NotificationChange
6675ab9c    26544     90426208 System.Object[]
05553478   421581    166945692 System.Data.Entity.Core.Objects.StateManagerValue[]
Total 12892240 objects

After a huge list of entries, we finally see something interesting – over 400 thousand instances of our class, NotificationChange.
This roughly matches the number of rows from the table processed so far.

This might hint at where the problem is, but to be sure, we have to dig deeper:

!dumpheap -mt 009fe608
. . .
. . .
24254ef0 009fe608      160     
24257608 009fe608      160     
2425a95c 009fe608      160     
2425e64c 009fe608      160     
2426187c 009fe608      160     
242643ec 009fe608      160     
24268054 009fe608      160     

      MT    Count    TotalSize Class Name
009fe608   421575     67452000 Sample.Domain.NotificationChange
Total 421575 objects

Yes, it will list the addresses of all 421,575 objects – I found no way to get the address of just one of them. 🙂

Now we have to find out why the GC has not released these objects yet:

!gcroot 242643ec
    00000000001611b4 (strong handle)
    -> 0000000000d0378c System.Object[]
    -> 0000000000ca3524 Sample.EF.MyStorage
    -> 0000000000cde438 System.Data.Entity.Internal.LazyInternalContext
    -> 0000000000ce3c08 System.Collections.Generic.Dictionary`2[[System.Type, mscorlib],[System.Data.Entity.Internal.Linq.IInternalSetAdapter, EntityFramework]]
    -> 0000000000d01314 System.Collections.Generic.Dictionary`2+Entry[[System.Type, mscorlib],[System.Data.Entity.Internal.Linq.IInternalSetAdapter, EntityFramework]][]
    -> 0000000000d00bc4 System.Data.Entity.DbSet`1[[Sample.Domain.NotificationChange, Sample.Domain]]
    -> 0000000000d00ba0 System.Data.Entity.Internal.Linq.InternalSet`1[[Sample.Domain.NotificationChange, Sample.Domain]]
    -> 000000000123497c System.Data.Entity.Core.Objects.ObjectQuery`1[[Sample.Domain.NotificationChange, Sample.Domain]]
    -> 00000000012349b8 System.Data.Entity.Core.Objects.EntitySqlQueryState
    -> 000000000121613c System.Data.Entity.Core.Objects.ObjectContext
    -> 0000000001328b08 System.Data.Entity.Core.Objects.ObjectStateManager
    -> 0000000001530a78 System.Collections.Generic.Dictionary`2[[System.Data.Entity.Core.EntityKey, EntityFramework],[System.Data.Entity.Core.Objects.EntityEntry, EntityFramework]]
    -> 0000000025221000 System.Collections.Generic.Dictionary`2+Entry[[System.Data.Entity.Core.EntityKey, EntityFramework],[System.Data.Entity.Core.Objects.EntityEntry, EntityFramework]][]
    -> 0000000024264578 System.Data.Entity.Core.Objects.EntityEntry
    -> 00000000242644fc System.Data.Entity.Core.Objects.Internal.EntityWrapperWithoutRelationships`1[[Sample.Domain.NotificationChange, Sample.Domain]]
    -> 00000000242643ec Sample.Domain.NotificationChange

Found 1 unique roots (run '!GCRoot -all' to see all roots).

This points to the culprit – the Entity Framework DbContext, which indirectly holds a reference to every object loaded so far from the database. 🙂

Looking closer at the source code, it’s doing something like this (in pseudocode):

  1. Start the application
  2. Create a DbContext
  3. While there are rows in the table:
    1. Using the above DbContext, read the next row from the database and load it into a NotificationChange object
    2. Send a message to Service Bus
    3. Mark the row in the database as processed

What could go wrong?

Well, nothing for a few rows, except that Entity Framework can’t read my thoughts and won’t guess that, after loading and updating a row, I won’t need it anymore.
It keeps every loaded entity in its first-level cache (the identity map) and loops through all 400,000 tracked objects each time a new row is loaded from the database (to check whether it was perhaps already loaded 🙂 ).

The fix was simple – re-create the DbContext inside the loop – in our case there is no reason for the unit of work (the DbContext) to span more than one row.
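The essence of the fix can be sketched in a few lines of Python – the FakeContext class below is a hypothetical stand-in for the DbContext and its identity map, not real EF code:

```python
# Hypothetical sketch - FakeContext stands in for EF's DbContext and its identity map.
class FakeContext:
    def __init__(self):
        self.tracked = []            # entities kept alive by the context

    def load(self, row):
        self.tracked.append(row)     # every loaded entity stays tracked
        return row

rows = list(range(100))

# Before the fix: one context for the whole loop - the tracked set keeps growing.
ctx = FakeContext()
for r in rows:
    ctx.load(r)
print(len(ctx.tracked))  # 100

# After the fix: a fresh context per row - at most one tracked entity at a time.
for r in rows:
    ctx = FakeContext()
    ctx.load(r)
    assert len(ctx.tracked) == 1
```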

After doing that – miracle – the average processing time per message decreased from 5-6 seconds to 0.02 seconds.

Posted in .NET, Entity Framework

NameOf and Obfuscators

Some time ago I was wondering how the new ‘nameof’ operator from C# 6.0 behaves when using… obfuscators.

Let’s write some code to verify this. I included a few other methods to get the member name (VS2015 RC was used):

using System;
using System.Runtime.CompilerServices;

namespace TestNameOf
{
    class Program
    {
        static void Main(string[] args)
        {
            var o = new Foo();
            o.Bar();
        }
    }

    internal class Foo
    {
        public void Bar()
        {
            Console.WriteLine("nameof(Bar): " + nameof(Bar));
            Console.WriteLine("Action name: " + GetName(this.Bar));
            ShowCallerName();
        }

        private void ShowCallerName(
            [CallerMemberName] string callerName = null)
        {
            Console.WriteLine("CallerMemberName attribute: " + callerName);
        }

        public static string GetName(Action action)
        {
            return action.Method.Name;
        }
    }
}
The result when the code is not obfuscated is the expected one:

nameof – not obfuscated

When the code is obfuscated (using Eazfuscator.Net) the result is:

nameof – obfuscated

Unsurprisingly, it works as expected: the name from the original source code is preserved, even when the code is obfuscated. That’s because nameof is evaluated at compile time, and most (maybe all) obfuscators run immediately after the compile step.

Are there cases when this might not be the desired behavior? Maybe, but only if we try really hard, like when we combine nameof with reflection:

var m = typeof(Foo).GetMember(nameof(Bar))[0];

we will get an Exception:

nameof and reflection

The decision to return the source code name instead of the metadata name was taken only in the late phases of the C# design.

And let’s not forget that, in general, typeof(Class).Name != nameof(Class):

typeof vs nameof

Posted in .NET, C#

Patterns and frameworks

Many people, when they first start to study design patterns (usually at university), dive into the ‘Gang of Four’ reference book and, if they have the energy to read it all, end up thinking something like: ‘well, very cool and interesting, I understood some of them, and maybe if I’m lucky I’ll encounter projects interesting enough to actually use some of them’… 🙂
It’s a normal reaction: unless you have a lot of experience on many real-world projects, you might never have deliberately used, or realized that you used, many of those patterns.

And here is a point that many people miss: design patterns, when they are really understood, can help somebody not only improve their own code, but also understand how and why the code in many frameworks and libraries is designed the way it is.
We don’t have to look any further than what we use every day – the .NET Framework. Here are some examples, in no particular order; I won’t explain each pattern, nor how it’s used in each case:

Decorator: I/O streams: Stream (the common ‘interface’), FileStream (concrete/component class), StreamReader, BufferedStream, CryptoStream (decorators)
and of course the Decorator class from WPF
Iterator: IEnumerator (the generic iterator interface), IEnumerable (the Aggregate from the GoF book), List (or any other collection), the yield keyword
Observer: EventHandler delegate (abstract observer), any class exposing an event handler, like Button (concrete subject)
or IObserver/IObservable use in Reactive Extensions.
Abstract factory and bridge patterns: ASP.NET WebForms or ADO.NET providers (DbProviderFactory) – introduced in .NET 2.0
Factory: WebRequest.Create() method
Template method: many places, like ASP.NET WebForms Control class protected methods: OnLoad, OnInit, OnDataBinding etc..
Command: in WPF: ICommand, ICommandSource, RoutedCommand, or Action class in Java Swing or Delphi VCL
Facade: ApplicationUserManager from ASP.NET Identity framework
Flyweight: string interning, WPF dependency properties
Adapter: each time we use COM components from .NET or DataAdapter used in ADO.NET/DataSet world
Strategy: IComparer interface used in many sorting and searching methods in the framework
Composite: the CompositeControl or Component base class and all its derived classes, used in WinForms, ADO.NET etc.
Proxy: obviously, the proxy classes used in WCF or .NET Remoting clients
Interpreter: System.Linq.Expressions.Expression and its derived classes (also an example of the Composite pattern)
Memento: .NET serializable classes
Visitor : System.Linq.Expressions.ExpressionVisitor

These are just some random examples and maybe there are many more.
What’s the point of knowing this? When learning a new framework, if you identify a pattern, it’s much easier to answer the question: ‘why the heck did they do it like this?’ 🙂

Many more patterns can be found in Fowler’s book (‘Patterns of Enterprise Application Architecture’), but maybe I’ll talk about those in a future episode…

Posted in .NET

On closures and captured variables

A few days ago, on the project I’m working on, I stumbled upon an interesting bug – an example of why it pays off to learn the ‘deeper’ areas of the C# language (or of any other language).
Image copyright: Pavel Shlykov (Shutterstock)

Greatly simplified (and with the class names changed to protect the innocent 🙂 ), we had a structure of orders and order lines/items – something pretty straightforward:

// ...
public class Order
{
    public Order()
    {
        Items = new List<OrderItem>();
    }

    public string Number { get; set; }
    // ... other fields

    public IList<OrderItem> Items { get; set; }
}

// ...
public class OrderItem
{
    public int ItemId { get; set; }
    public string ProductName { get; set; }
    public decimal Price { get; set; }
    // ... other fields
}

An order contains several lines.

For one reason or another, let’s say that we want to deep-copy this structure into another class, OrderLineDto, which flattens it:

public class OrderLineDto
{
    public string OrderNumber { get; set; } // the parent order number

    // OrderItem attributes:
    public int ItemId { get; set; }
    public string ProductName { get; set; }
    public decimal Price { get; set; }
    // ... other fields
}

Because OrderItem has several hundred properties (don’t ask me why 🙂 ), I’m using AutoMapper to simplify the mapping job.
We added a helper class that is supposed to keep the code nice and tidy:

public class OrderMapper
{
    private readonly Order _order;

    public OrderMapper(Order order)
    {
        _order = order;

        AutoMapper.Mapper.CreateMap<OrderItem, OrderLineDto>()
            .ForMember(orderLineDto => orderLineDto.OrderNumber,
                config => config.MapFrom(sourceOrderItem => _order.Number));
    }

    public OrderLineDto GetLineDto(OrderItem orderItem)
    {
        var dtoLine
            = AutoMapper.Mapper.Map<OrderItem, OrderLineDto>(orderItem);
        return dtoLine;
    }
}
The constructor gets the current Order instance and defines the mapping, and the GetLineDto method does the actual mapping from an OrderItem to a new OrderLineDto. Pretty simple…
Only for OrderLineDto.OrderNumber do we have to tell AutoMapper to take the value from the ‘parent’ _order.Number.

Let’s test it in a console application:

static void Main()
{
    var order1 = new Order { Number = "O1" };
    var orderItem1 = new OrderItem
    {
        ItemId = 1,
        ProductName = "Book 1",
        Price = 100
    };

    var order2 = new Order { Number = "O2" };
    var orderItem2 = new OrderItem
    {
        ItemId = 2,
        ProductName = "Book 2",
        Price = 200
    };

    var orderMapper1 = new OrderMapper(order1);
    OrderLineDto dto1 = orderMapper1.GetLineDto(orderItem1);

    Console.WriteLine("\n\rItem 1 order number: {0}  == DTO 1 order number: {1}",
                        order1.Number, dto1.OrderNumber);

    Console.WriteLine("Item 1 prod. name: {0}  == DTO 1 prod name: {1}",
                        orderItem1.ProductName, dto1.ProductName);

    var orderMapper2 = new OrderMapper(order2);
    OrderLineDto dto2 = orderMapper2.GetLineDto(orderItem2);

    Console.WriteLine("\n\rItem 2 order number: {0}  == DTO 2 order number: {1}",
                        order2.Number, dto2.OrderNumber);

    Console.WriteLine("Item 2 prod. name: {0}  == DTO 2 prod name: {1}",
                        orderItem2.ProductName, dto2.ProductName);
}

I create two Order objects, each with one OrderItem, and map each OrderItem to an OrderLineDto object.
Finally, I compare the original and DTO properties to make sure they were copied properly.

However, the result is not the expected one:

Item 1 order number: O1  == DTO 1 order number: O1
Item 1 prod. name: Book 1  == DTO 1 prod name: Book 1

Item 2 order number: O2  == DTO 2 order number: O1
Item 2 prod. name: Book 2  == DTO 2 prod name: Book 2

Obviously, the 2nd DTO object does not have the right order number (‘O2’), but the first one, ‘O1’.
Is AutoMapper broken? 🙂

No – the culprit is the way I’m defining the custom mapping for OrderNumber:

public class OrderMapper
{
    private readonly Order _order;

    public OrderMapper(Order order)
    {
        _order = order;

        AutoMapper.Mapper.CreateMap<OrderItem, OrderLineDto>()
            .ForMember(orderLineDto => orderLineDto.OrderNumber,
                config => config.MapFrom(sourceOrderItem => _order.Number));
    }
}

sourceOrderItem => _order.Number
is a lambda expression; because the instance field _order is referenced inside it, a closure is created (the lambda actually captures this, the OrderMapper instance, and reads _order through it).
As with any closure, the captured reference will be used each time the lambda expression is evaluated.

Nothing unexpected so far – the question is: which instance of _order?
The one from the moment the OrderMapper constructor is called and the lambda expression is created, right?
That was the intention, at least.
However, AutoMapper has the good habit of caching the mappings in a static field, for performance reasons.
So even if we try to redefine the mapping for a certain type, the first mapping is kept.
In our case, the mapping will always use the lambda expression created during the first call to Mapper.CreateMap, when the first OrderMapper was instantiated – so the first _order instance is the one captured by the closure and always used when OrderNumber is mapped.
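This kind of ‘which instance did I capture?’ surprise is not specific to C#. The classic Python variant captures a loop variable instead of a field, with a similar effect – every closure sees the last value:

```python
# Each lambda closes over the variable i itself, not its value at definition time.
callbacks = [lambda: i for i in range(3)]
print([f() for f in callbacks])  # [2, 2, 2] - all share the final value of i

# Capturing the current value via a default argument restores the intended behavior.
callbacks = [lambda i=i: i for i in range(3)]
print([f() for f in callbacks])  # [0, 1, 2]
```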

How to fix this? Quite easily: copy OrderNumber directly in code and don’t use AutoMapper for such a simple task:

public class OrderMapper
{
    private readonly Order _order;

    public OrderMapper(Order order)
    {
        _order = order;

        AutoMapper.Mapper.CreateMap<OrderItem, OrderLineDto>();
        //.ForMember(orderLineDto => orderLineDto.OrderNumber,
        //    config => config.MapFrom(sourceOrderItem => _order.Number));
    }

    public OrderLineDto GetLineDto(OrderItem orderItem)
    {
        var dtoLine = AutoMapper.Mapper.Map<OrderItem, OrderLineDto>(orderItem);
        dtoLine.OrderNumber = _order.Number;
        return dtoLine;
    }
}
To make sure that such a ‘bug’ is not introduced again by mistake, we can move the call to AutoMapper.Mapper.CreateMap into a static constructor, which won’t be able to access instance fields.


Posted in .NET, C#

On assumptions and formats

In .NET (and in any other framework, for that matter), it’s better to never assume anything, but to check twice.
Let’s take an example – what do you think: will the following unit test always pass?

public void ShortDateLength()
{
    DateTime d = new DateTime(2015, 01, 18);
    string dateString = d.ToString("yyyyMMdd");

    Assert.AreEqual(8, dateString.Length);
}

… well, usually it will, but once in a blue moon it will fail 🙂 .
All it takes is a user changing the regional settings of the computer, or adding the following 3 lines of code:

public void ShortDateLength()
{
    CultureInfo c = new CultureInfo("he-IL", false);
    c.DateTimeFormat.Calendar = new HebrewCalendar();
    Thread.CurrentThread.CurrentCulture = c;

    DateTime d = new DateTime(2015, 01, 18);
    string dateString = d.ToString("yyyyMMdd");

    Assert.AreEqual(8, dateString.Length);
}

Yes, in some cultures and calendars, dates are not represented with Arabic numerals, and years might not fit in 4 characters.
In the above case, dateString will have the following ‘unexpected’ value:

– a solid 10 characters in length.
When does this matter? When dateString is going to be displayed in the UI, probably not – there I want it formatted in the format chosen by the end user.
However, if the DateTime value is going to be serialized to a text file or sent to a web service, I want to make sure I will be able to decode it later.
In such cases it’s better to replace the ToString call above with:
string dateString = d.ToString("yyyyMMdd", CultureInfo.InvariantCulture);

Somewhat related – which of the following tests do you think will pass?

const string digits1 = "5678";
Assert.IsTrue(Regex.IsMatch(digits1, @"^\d+$"));

const string digits2 = "୮౪୩";
Assert.IsTrue(Regex.IsMatch(digits2, @"^\d+$"));

Unexpectedly for some, both will pass 🙂
\d in .NET matches any digit, and is Unicode-aware.
୮, ౪ and ୩ are… digits in some cultures.

Again, why is this relevant? Because many developers use \d as a quick way to validate input that is supposed to contain only 0,1,2,…,9 – well, ‘digit’ can mean more than that.
Some will say that in their application there is no risk that an end user might enter something like ୮౪୩ – true, unless the input comes from a misbehaving client application calling a web service, which just happens to send by accident the following sequence of bytes (hex values):
E0 B1 AA
E0 AD A9
– in UTF-8 these are.. digits.
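By the way, the same pitfall exists in Python, where \d in the re module is also Unicode-aware by default; passing re.ASCII restricts it to 0-9:

```python
import re

s = "\u0b6e\u0c6a\u0b69"  # ORIYA DIGIT EIGHT, TELUGU DIGIT FOUR, ORIYA DIGIT THREE

print(bool(re.fullmatch(r"\d+", s)))            # True  - \d matches any Unicode decimal digit
print(bool(re.fullmatch(r"\d+", s, re.ASCII)))  # False - re.ASCII restricts \d to [0-9]
```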

Posted in .NET, C#

How developers start to ignore code smells

Many people wonder how some developers can blissfully ignore best practices when writing code, or aren’t too bothered when they see a code smell in their project.
There are many explanations, but an old one is the code they see when working with Microsoft frameworks and samples (and not only Microsoft’s).

Even if Microsoft has made great improvements in this direction in recent years (clean code, best practices etc.), when some developers see code like the one below, in one of the most recent Microsoft frameworks, what conclusion will they draw? It’s from MS, so it must be right, no? 🙂

IdentityConfig.cs – part of the latest ASP.NET Identity 2.0 project template – 6 classes in one file:

The UserManager class – part of ASP.NET itself, a new class added by Microsoft last year – the screenshot is truncated; it could be twice this size – I’m too tired to count how many public members are in there:
UserManager class

Posted in .NET, IT

Service Bus for Windows Server: How to define authorization rules at topic level

This is just a ‘reminder’ post for myself (and maybe for others) encountering the same issue.
For Service Bus 1.0 for Windows Server (not Azure), at least on a server not joined to a domain, when using local (workgroup) Windows users for authentication, in order to define the topic authorization rules, AllowRule.ClaimValue must be the username (without machinename\ in front).
The IssuerName must be the namespace name, and the ClaimType must be “nameidentifier”.

An example:

const string userClaim = "nameidentifier";
string userName = "TestUser"; // actual name of the local Windows user
string issuerNamespace = "TestNamespace"; // maybe dynamically obtained using namespaceManager.Address.PathAndQuery.Replace("/", "")
List<AccessRights> accessRights = new List<AccessRights> { AccessRights.Listen };

var topicDescription = namespaceManager.GetTopic("MyTopic");
topicDescription.Authorization.Add(
    new AllowRule(issuerNamespace, userClaim, userName, accessRights));
namespaceManager.UpdateTopic(topicDescription);

Obviously, the local Windows user must exist on both the Service Bus server machine and the client computer, with the same name and password, and the client application must run under this user.
This type of authentication – Windows integrated, using ‘workgroup’ users not joined to a domain – is not quite supported by Microsoft, which assumes that all computers are joined to a Windows domain, but it has worked so far.

The MSDN documentation on this issue is not helpful at all – just auto-generated stuff, with examples taken from Azure Service Bus and usually not updated for Service Bus for Windows Server.

Posted in .NET