Sunday, September 26, 2010

Unit Testing aspx and ascx pages with Visual Studio 2008

If you say, "We cannot write unit test cases for aspx pages or user controls,"

think again. And again.

And after reading this post you will say, "YES, we can do it."

Introduction
With Visual Studio 2008 it is now possible to unit test your ASP.Net applications. This is achieved using three method attributes, HostType, UrlToTest and AspNetDevelopmentServerHost, along with the TestContext and PrivateObject classes. Using the combination of these attributes and classes it is possible to run tests for an ASP.Net page in the development server included with Visual Studio, or in IIS.

Example 1:
This is a very simple web page: it consists of a button that, when clicked, displays some text in a label.
1.      Create a new Web application project in Visual Studio 2008.
2.      Add a test project to the solution with a test class called DefaultUnitTests.cs. Your solution should now contain the web application project and the test project.
3.      We'll start by defining the test, so open DefaultUnitTests.cs. The first thing we need to do is add a using directive for Microsoft.VisualStudio.TestTools.UnitTesting.Web; this namespace contains all the attributes relating to testing web pages. You will also need to add using directives for System.Web.UI and System.Web.UI.WebControls, as you will be using web controls in the test method.
4.      We are now able to add the test method. As this test will be executed against a web page, there are a number of attributes that must be added to the method so that the test is executed in the correct environment. The first of these is the HostType attribute, which allows you to specify the type of host that the test will run in. As we want to run our tests in the ASP.Net engine, we specify the string "ASP.NET".
5.      The next attribute that needs to be specified is UrlToTest, which gives the URL the test should request. When using the ASP.Net development server, the port it runs on is randomly assigned, so the URL is fixed up to use the correct port number when the test runs. However, if you run your tests in IIS then no fix-up is needed.
6.      The final attribute that needs to be applied is AspNetDevelopmentServerHost. This is used to tell the ASP.Net development server where the web site is located on disk; if you plan to test only with IIS, you can omit this attribute.
7.     Now that all the attributes have been applied to the method, we can define the method body. This is where we will use the TestContext and PrivateObject classes to get a handle to the page and invoke its methods.
8.      The TestContext contains the RequestedPage property, which gives access to the Page object that was created to serve the URL requested by the test. Once you have a handle on the Page object, you can use the page's FindControl method to access the various controls on the page.
9.      Next, create an instance of a PrivateObject, passing the page as a parameter to the PrivateObject's constructor. Once the PrivateObject has been created, we can use its Invoke method to call any method of the page object. In this case we will be calling the event handler of the button's click event, thus simulating the user clicking the button. The code snippet below shows the full test method.

        [TestMethod]
        [HostType("ASP.NET")]
        [UrlToTest("http://localhost:1687/WebSite2/Default.aspx")]
        [AspNetDevelopmentServerHost(@"C:\Documents and Settings\DIL\My Documents\Visual Studio 2008\WebSites\WebSite2", "/WebSite2")]
        public void TestMethod1()
        {
            Page page = testContextInstance.RequestedPage;
            Button button = page.FindControl("Button1") as Button;
            Label label = page.FindControl("Label1") as Label;
            PrivateObject po = new PrivateObject(page);
            po.Invoke("Button1_Click", button, EventArgs.Empty);
            Assert.AreEqual("Did you hear that?We can test aspx", label.Text);
        }

10.    Run the test; it will fail because the label still contains an empty string.

Therefore we need to add the following to the button's click event handler:

protected void Button1_Click(object sender, EventArgs e)
{
    Label1.Text = "Did you hear that?We can test aspx";
}

11.    Build and run the test again; it should now pass.




      This is a very simplistic example of unit testing an ASP.Net page, but that is deliberate so that you can see the nuts and bolts of the technology. It should give you an idea of the attributes that need to be applied to a method that tests ASP.Net pages, and you should also now understand how to obtain a handle to the Page object and access the controls and event handlers contained within the page.


Example 2: In this example I will use the same principles as described in Example 1, but I will now use the UrlToTest attribute to execute a different URL for each of the two test methods. The page will look for a parameter in the query string and display it in a label.



     As the process of creating the solution is identical to that of Example 1, I will outline only the additions to this solution.
  •    We now have two test methods, one of which will pass and the other will fail. This is to illustrate that it is possible to invoke each test against a different URL: in this case one with the parameter included in the URL and the other without it. The code snippet below shows one of the test methods with the UrlToTest attribute applied.
        [TestMethod]
        [HostType("ASP.NET")]
        [UrlToTest("http://localhost:1687/WebSite2/Default.aspx?demoParam=Hello")]
        [AspNetDevelopmentServerHost(@"C:\Documents and Settings\DIL\My Documents\Visual Studio 2008\WebSites\WebSite2", "/WebSite2")]
        public void TestMethod1()
        {
            Page page = _testContextInstance.RequestedPage;

            // Need to manually invoke the Page_Load method.
            PrivateObject po = new PrivateObject(page);
            po.Invoke("Page_Load", page, EventArgs.Empty);

            Label label = page.FindControl("Label1") as Label;

            Assert.AreEqual("Hello", label.Text);
        }
  •     I have also had to use a PrivateObject to invoke the Page_Load method. When unit testing a page, the methods normally called as part of the page's life cycle are not invoked, so you must use a PrivateObject to invoke them manually if their execution is required.

Example 3:
In this example I will show a slightly more realistic implementation of an ASP.Net page. The page uses a data access component to retrieve a string and display the value in a label on the page. In this example I will again touch on the use of a PrivateObject and mock objects.






Again, as the process of creating the solution is identical to that of Examples 1 and 2, I will skip the step-by-step explanation and explain only the more interesting parts of the code.
The class diagram below shows the basic structure of the storage components used in this example.
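
In code terms, the structure in that diagram corresponds roughly to the following minimal sketch. Only the RealDataAccess and MockDataAccess class names and the _dataAccess field come from the post; the IDataAccess interface name, its GetData method and the code-behind shown here are assumptions for illustration.

using System;

public interface IDataAccess                     // assumed interface name
{
    string GetData();                            // assumed member
}

public class RealDataAccess : IDataAccess
{
    public string GetData()
    {
        // In a real application this would talk to a database;
        // the post deliberately leaves it unimplemented.
        throw new NotImplementedException();
    }
}

public class MockDataAccess : IDataAccess
{
    public string GetData()
    {
        // Returns the fixed value the test expects.
        return "Did you hear that?We can test aspx";
    }
}

// Assumed shape of the page's code behind: it holds a private
// _dataAccess field that the unit test later replaces with the mock.
public partial class _Default : System.Web.UI.Page
{
    private IDataAccess _dataAccess = new RealDataAccess();

    protected void Button1_Click(object sender, EventArgs e)
    {
        Label1.Text = _dataAccess.GetData();
    }
}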

  •        Implementing the RealDataAccess component is unnecessary for this example, so I will not provide an implementation for it. In a real application the RealDataAccess class could involve connecting to a database, which would add an extra layer of complexity and be a distraction from the purpose of this example.
  •     As always, the first thing we'll define is the test method. The code snippet below shows this test. As this method will be testing an ASP.Net web page, I need to add the attributes TestMethod (marks this as a test method), HostType (indicates that this test should run in ASP.NET), UrlToTest and AspNetDevelopmentServerHost (needed as I'm running my tests in the ASP.Net development server; if you want to use IIS you can omit this attribute).
        [TestMethod]
        [HostType("ASP.NET")]
        [UrlToTest("http://localhost:1687/WebSite2/Default.aspx")]
        [AspNetDevelopmentServerHost(@"C:\Documents and Settings\DIL\My Documents\Visual Studio 2008\WebSites\WebSite2", "/WebSite2")]
        public void TestMethod1()
        {
            Page page = _testContextInstance.RequestedPage;
            Button button = page.FindControl("Button1") as Button;
            Label label = page.FindControl("Label1") as Label;
            PrivateObject po = new PrivateObject(page, new PrivateType(page.GetType().BaseType));
            po.SetField("_dataAccess", new MockDataAccess());
            po.Invoke("Button1_Click", button, EventArgs.Empty);
            Assert.AreEqual("Did you hear that?We can test aspx", label.Text);
        }

  •       The actual test method should be straightforward, as it reuses techniques from the previous examples. First the TestContext is used to get a handle on the page object; from this, handles to the label and button controls that have been placed on the page can be obtained.
  •     We then use a PrivateObject to make the page use the MockDataAccess object. The code snippet below shows how the PrivateObject is created for the page. A PrivateType has to be passed to the constructor of the PrivateObject; here I pass page.GetType().BaseType to the PrivateType constructor. This is necessary because the type of the page is generated dynamically, and in this scenario I want to manipulate a private field declared in the code behind of the page. I therefore need to ensure the PrivateObject is constructed with the type of the code behind, otherwise I would not be able to manipulate the private field.
PrivateObject po = new PrivateObject(page, new PrivateType(page.GetType().BaseType));
  •     At this point everything has been set up and configured to run in a controlled environment; because the page uses the mock data access component, we know exactly what to expect.
  •     To carry out the actual test we need to click the button. In a unit testing scenario we don't want to have to click the button manually each time, as one advantage of unit tests is that they can run unattended as part of a nightly build process. In order to simulate a user clicking the button we can simply use a PrivateObject to invoke the click handler of the button.

  •     Once the button has been clicked, it is simply a matter of checking that the label's text has been updated with the correct text.
 
Conclusion
Hopefully from these simple examples you will have gained an appreciation of the features now available with Visual Studio 2008 that allow you to conduct unit testing on your ASP.Net applications.

Tuesday, September 21, 2010

Garbage collector - The unsung Hero

Many developers would like to just shrug this off and say, "Should we really worry about GC and what it does behind the scenes?" Yes, actually, you should not have to worry about GC if you write your code properly. GC has a very good algorithm to ensure that your application is not impacted. But many times the way you have written your code and assigned/cleaned memory resources in your code affects the GC algorithm a lot. Sometimes this impact leads to bad performance of GC and hence bad performance of your application.

So let's first understand what different tasks are performed by the garbage collector to allocate and clean up memory in an application.

Let's say we have 3 classes, in which class 'A' uses class 'B' and class 'B' uses class 'C'.




When the application first starts, a predefined amount of memory is allocated to it. When the application creates these 3 objects they are allocated on the managed heap, each with a memory address. You can see in the below figure how the memory looks before the objects are created and how it looks after they are created. If an object D were created, it would be allocated from the address where Object C ends.


Internally, GC maintains an object graph to know which objects are reachable. All objects belong to the main application root object. The root object also keeps track of which object is allocated at which memory address. If an object uses other objects, it also holds the memory address of the objects it uses. For example, in our case Object A uses Object B, so Object A stores the memory address of Object B.



Now let's say Object 'A' is removed from memory. Object 'A's memory is then assigned to Object 'B', and Object 'B's memory is assigned to Object 'C'. So the memory allocation internally looks something like the figure shown below.


As the address pointers are updated, GC also needs to ensure that its internal graph is updated with the new memory addresses. So the object graph becomes something like the one shown below. Now that's a fair bit of work for GC: it needs to ensure that the removed object is taken out of the graph and that the new addresses of the existing objects are updated throughout the object tree.


Besides its own custom objects, an application also has .NET runtime objects, which form part of the same graph, and the addresses of those objects need to be updated as well. The number of .NET runtime objects is very high; for instance, even a simple console-based hello world application creates objects numbering approximately in the thousands. Updating pointers for each of these objects is a huge task.

Generation algorithm - Today, yesterday and day before yesterday
 

GC uses the concept of generations to improve performance. The concept of generations is based on the way human psychology handles tasks. Below are some points about how tasks are handled by humans; the garbage collector algorithm works along the same lines:-

. If you decide on a task today, there is a high probability that you will complete it.
. If a task is pending from yesterday, it has probably gained a lower priority and can be delayed further.
. If a task has been pending since the day before yesterday, there is a high probability that it will stay pending forever.

GC thinks along the same lines and makes the below assumptions:-

. If an object is new, its lifetime is likely to be short.
. If an object is old, it is likely to have a long lifetime.

With that said, GC supports three generations (Generation 0, Generation 1 and Generation 2).


Generation 0 holds all the newly created objects. When the application creates objects they first fall into the Generation 0 bucket. At some point Generation 0 fills up, so GC needs to run to free memory resources. GC then builds the graph and eliminates any objects which are no longer used by the application. If GC is not able to eliminate an object from Generation 0, it promotes it to Generation 1. If in the following iterations it is not able to remove the object from Generation 1, the object is promoted to Generation 2. The maximum generation supported by the .NET runtime is 2.
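
As a small illustration (this snippet is not from the original post), you can watch an object being promoted by asking the GC which generation it currently lives in:

using System;

class GenerationDemo
{
    static void Main()
    {
        object obj = new object();
        Console.WriteLine(GC.GetGeneration(obj)); // 0: new objects start in generation 0

        GC.Collect(0);                            // force a generation 0 collection
        Console.WriteLine(GC.GetGeneration(obj)); // 1: the object survived, so it was promoted

        GC.Collect();                             // full collection
        Console.WriteLine(GC.GetGeneration(obj)); // 2: promoted again; 2 is the maximum

        Console.WriteLine(GC.MaxGeneration);      // 2
    }
}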

Below is a sample display of how generation objects are shown when you run CLR Profiler. In case you are new to CLR Profiler, you can catch up on the basics in this blog (the very next post will be on CLR Profiler).
 




Ok, so how do generations help in optimization?
 

As the objects are now grouped into generations, GC can choose which generation of objects it wants to clean. If you remember, in the previous section we talked about the assumptions made by GC regarding object ages: GC assumes that all new objects have a shorter lifetime. In other words, GC will mainly go through Generation 0 objects rather than going through all objects in all generations.

If cleaning up Generation 0 does not free enough memory, GC then moves on to cleaning Generation 1, and so on. This algorithm improves GC performance to a huge extent.
 

Conclusion about generations
 

. A huge number of objects in Gen 1 and Gen 2 means memory utilization is not optimized.
. The larger the Gen 1 and Gen 2 regions, the worse the GC algorithm will perform.
 

Using finalize/destructor leads to more objects in Gen 1 and Gen 2
 

The C# compiler translates (renames) the destructor into Finalize. If you look at the IL code using ILDASM you can see that the destructor is renamed to Finalize. So let's try to understand why implementing a destructor leads to more objects in the Gen 1 and Gen 2 regions. Here's how the process actually works:-

. When new objects are created they are placed in Gen 0.
. When Gen 0 fills up, GC runs and tries to clear memory.
. If an object does not have a destructor, GC simply cleans it up when it is no longer used.
. If an object has a Finalize method, GC moves it to the finalization queue.
. Unreachable objects with pending finalizers are moved to the 'Freachable' queue so their finalizers can run; unreachable objects without pending finalizers have their memory reclaimed straight away.
. GC work is finished for this iteration.
. The next time GC runs, it goes back to the Freachable queue; objects whose finalizers have run and which are no longer reachable finally have their memory reclaimed.

In other words, objects which have a destructor can stay in memory for longer.

Let's try to see the same thing practically. Below is a simple class which has a destructor.
 

class clsMyClass
{
    public clsMyClass()
    {
    }

    ~clsMyClass()
    {
    }
}

We will create 100 * 10000 objects and monitor them using CLR Profiler.
 

for (int i = 0; i < 100 * 10000; i++)
{
    clsMyClass obj = new clsMyClass();
}

If you look at the CLR Profiler "memory by address" report, you will see a lot of objects in Gen 1.

Now let's remove the destructor and do the same exercise.
 

class clsMyClass
{
    public clsMyClass()
    {
    }
}

You can see that Gen 0 has increased considerably while the Gen 1 and Gen 2 counts are much lower.
If we do a one-to-one comparison, it looks something like the figure shown below.

Get rid of the destructor by using Dispose
 

We can get rid of the destructor by putting our clean-up code in the Dispose method. For that we need to implement the 'IDisposable' interface, write our clean-up code in its Dispose method, and call SuppressFinalize as shown in the code snippet below. 'SuppressFinalize' tells the GC not to call the Finalize method, so the extra GC pass needed for finalizable objects does not happen.
 

class clsMyClass : IDisposable
{
    public clsMyClass()
    {
    }

    ~clsMyClass()
    {
    }

    public void Dispose()
    {
        GC.SuppressFinalize(this);
    }
}

The client now needs to ensure that it calls the Dispose method, as shown below.
 

for (int i = 0; i < 100; i++)
{
    clsMyClass obj = new clsMyClass();
    obj.Dispose();
}
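
As a side note (not something the original post mentions), the C# using statement is another way to guarantee that Dispose is called, even if an exception is thrown inside the block:

for (int i = 0; i < 100; i++)
{
    using (clsMyClass obj = new clsMyClass())
    {
        // Work with obj here; Dispose is called automatically
        // when the using block ends.
    }
}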

Below is a comparison of how the Gen 0 and Gen 1 distribution looks with the destructor and with Dispose. You can see there is a marked improvement in Gen 0 allocation, which signifies good memory allocation.

What if developers forget to call Dispose?
 

It's not a perfect world. We cannot ensure that the Dispose method is always called by the client. That's where we can use the Finalize/Dispose pattern, as explained in the coming section.

There is a detailed implementation of this pattern at http://msdn.microsoft.com/en-us/library/b1yfkh5e(VS.71).aspx.

Below is how the implementation of finalize / dispose pattern looks like.
 

class clsMyClass : IDisposable
{
    public clsMyClass()
    {
    }

    ~clsMyClass()
    {
        // In case the client forgets to call Dispose,
        // the destructor will invoke Dispose for the unmanaged clean-up.
        Dispose(false);
    }

    protected virtual void Dispose(bool disposing)
    {
        if (disposing)
        {
            // Free managed objects.
        }
        // Free unmanaged objects.
    }

    public void Dispose()
    {
        Dispose(true);
        // Ensure that the destructor is not called.
        GC.SuppressFinalize(this);
    }
}

Explanation of the code:-

. We have defined a method called Dispose which takes a Boolean flag. This flag indicates whether the method was called from Dispose or from the destructor. If it was called from the 'Dispose' method, we can free both managed and unmanaged resources.
. If the method was called from the destructor, we free only the unmanaged resources.
. In the public Dispose method we suppress finalization and call Dispose with true.
. In the destructor we call the Dispose function with false. In other words, we let the GC take care of the managed resources and use the destructor call only to clean up the unmanaged resources.
In other words, if the client does not call the Dispose function, the destructor will take care of cleaning up the unmanaged resources.
 

Conclusion
 

. Do not have empty destructors in your classes.
. In case you need clean-up, use the Finalize/Dispose pattern and call the 'SuppressFinalize' method.
. If a class exposes a Dispose method, ensure that you call it from your client code.
. An application should have more objects allocated in Gen 0 than in Gen 1 and Gen 2. More objects in Gen 1 and Gen 2 is a sign that the GC is not running optimally.
 

Tuesday, September 14, 2010

Software Measurements: Code Metrics Values

Here is another helpful tool provided by Visual Studio 2010 to its users...

Code metrics is a set of software measures that provide developers better insight into the code they are developing. By taking advantage of code metrics of Visual Studio 2010, developers can understand which types and/or methods should be reworked or more thoroughly tested. Development teams can identify potential risks, understand the current state of a project, and track progress during software development.

The following list shows the code metrics results that Visual Studio calculates:
  • Maintainability Index – Calculates an index value between 0 and 100 that represents the relative ease of maintaining the code. A high value means better maintainability.
  • Cyclomatic Complexity – Measures the structural complexity of the code. It is created by calculating the number of different code paths in the flow of the program. A program that has complex control flow will require more tests to achieve good code coverage and will be less maintainable.
  • Depth of Inheritance – Indicates the number of class definitions that extend to the root of the class hierarchy. The deeper the hierarchy, the more difficult it might be to understand where particular methods and fields are defined and/or redefined.
  • Class Coupling – Measures the coupling to unique classes through parameters, local variables, return types, method calls, generic or template instantiations, base classes, interface implementations, fields defined on external types, and attribute decoration. Good software design dictates that types and methods should have high cohesion and low coupling. High coupling indicates a design that is difficult to reuse and maintain because of its many interdependencies on other types.
  • Lines of Code – Indicates the approximate number of lines in the code. The count is based on the IL code and is therefore not the exact number of lines in the source code file. A very high count might indicate that a type or method is trying to do too much work and should be split up. It might also indicate that the type or method might be hard to maintain.
          Today we will discuss complexity metrics, one of the most important parts of project and development metrics, so that before diving into the Code Metrics feature of Visual Studio 2010 we understand why it is such an important part of the development life cycle.

                                                  Complexity metrics
The following metrics measure the complexity of executable code within methods. This includes both the internal complexity of a single method and the complexity of the data flow in and out of a method.
High complexity may result in poor understandability and more errors. Complex methods also need more time to develop and test. Therefore, excessive complexity should be avoided, and overly complex methods should be simplified by rewriting or by splitting them into several methods.
Complexity is often positively correlated with code size. A big program or function is likely to be complex as well. The two are not equal, however: a method with relatively few lines of code might be far more complex than a long one. We recommend the combined use of LOC and complexity metrics to detect complex code.

CC Cyclomatic complexity

Cyclomatic complexity is probably the most widely used complexity metric in software engineering. Defined by Thomas McCabe, it's easy to understand, easy to calculate and it gives useful results. It's a measure of the structural complexity of a procedure.

How to calculate cyclomatic complexity?
CC = Number of decisions + 1

Thus, cyclomatic complexity equals the number of decisions plus one. What are decisions? Decisions are caused by conditional statements: If..Then..Else, Select Case, For..Next, Do..Loop, While..Wend/End While, Catch and When.

The cyclomatic complexity of a procedure with no decisions equals 1.

There is no maximum value since a method can have any number of decisions.
Cyclomatic complexity, also known as V(G) or the graph theoretic number, is calculated by simply counting the number of decision statements. A multiway decision, the Select Case statement, is counted as several decisions. This version of the metric does not count Boolean operators such as And and Or, even if they add internal complexity to the decision statements.
Construct | Effect on CC | Reasoning
If..Then | +1 | An If statement is a single decision.
ElseIf..Then | +1 | ElseIf adds a new decision.
Else | 0 | Else does not cause a new decision. The decision is at the If.
Select Case | +1 for each Case | Each Case branch adds one decision in CC.
Case Else | 0 | Case Else does not cause a new decision. The decisions were made at the other Cases.
For [Each]..Next | +1 | There is a decision at the start of the loop.
Do..Loop | +1 | There is a decision at Do While|Until or alternatively at Loop While|Until.
Unconditional Do..Loop | 0 | There is no decision in an unconditional Do..Loop without While or Until. *
While..Wend / While..End While | +1 | There is a decision at the While statement.
Catch | +1 | Each Catch branch adds a new conditional path of execution. Even though a Catch can be either conditional (catches specific exceptions) or unconditional (catches all exceptions), we treat all of them the same way. *
Catch..When | +2 | The When condition adds a second decision. *
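
As a quick illustration in C# (the table above uses VB-style keywords; this example method is made up), the method below contains three decisions, a for loop, an if and an else if, so its cyclomatic complexity is 1 + 3 = 4:

// Hypothetical example: CC = 1 (base) + for (+1) + if (+1) + else if (+1) = 4
public static string Grade(int[] scores)
{
    int total = 0;
    for (int i = 0; i < scores.Length; i++)   // +1: loop decision
    {
        total += scores[i];
    }

    if (total >= 80)                          // +1: If
    {
        return "Distinction";
    }
    else if (total >= 40)                     // +1: ElseIf
    {
        return "Pass";
    }
    else                                      // +0: Else adds no new decision
    {
        return "Fail";
    }
}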

Variations to cyclomatic complexity

Cyclomatic complexity comes in a few variations as to what exactly counts as a decision.

CC2 Cyclomatic complexity with Booleans ("extended cyclomatic complexity")

CC2 = CC + Boolean operators
CC2 extends cyclomatic complexity by including Boolean operators in the decision count. Whenever a Boolean operator (And, Or, Xor, Eqv, AndAlso, OrElse) is found within a conditional statement, CC2 increases by one. The conditionals considered are: If, ElseIf, Select, Case, Do, Loop, While, When. The reasoning behind CC2 is that a Boolean operator increases the internal complexity of the branch. You could just as well split the conditional statement into several sub-conditions while keeping the complexity at the same level.
Alternative names: CC2 is sometimes called ECC extended cyclomatic complexity or strict cyclomatic complexity.
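
For instance (a made-up C# fragment), the following single if contains two Boolean operators, so CC sees one decision (CC = 2) while CC2 also counts the operators (CC2 = 4):

// CC  = 1 (base) + 1 (if)                         = 2
// CC2 = 1 (base) + 1 (if) + 2 (Boolean operators) = 4
public static bool CanAccess(int age, bool isAdmin)
{
    if ((age >= 18 && age < 65) || isAdmin)   // && and || each add +1 to CC2
    {
        return true;
    }
    return false;
}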

CC3 Cyclomatic complexity without Cases ("modified cyclomatic complexity")

CC3 = CC where each Select block counts as one
CC3 equals the regular CC metric, but each Select Case block is counted as one branch, not as multiple branches. In this variation, a Select Case is treated as if it were a single big decision. This leads to considerably lower complexity values for procedures with large Select Case statements. In many cases, Select Case blocks are simple enough to consider as one decision, which justifies the use of CC3.
Alternative name: CC3 is sometimes called modified cyclomatic complexity.

Summary of cyclomatic complexity metrics

Metric | Name | Boolean operators | Select Case | Alternative name
CC | Cyclomatic complexity | Not counted | +1 for each Case branch | Regular cyclomatic complexity
CC2 | Cyclomatic complexity with Booleans | +1 for each Boolean | +1 for each Case branch | Extended or strict cyclomatic complexity
CC3 | Cyclomatic complexity without Cases | Not counted | +1 for an entire Select Case | Modified cyclomatic complexity
CC, CC2 or CC3 — which one to use? This is your decision. Pick the one that suits your use best. CC is the original version and is probably the most widely used. CC3 provides the lowest values, CC comes next. CC2 is the highest variant, the most pessimistic one, one might say. All of them are heavily correlated, so you can achieve good results with any of them.

Values of cyclomatic complexity

A high cyclomatic complexity denotes a complex method that's hard to understand, test and maintain. There's a relationship between cyclomatic complexity and the "risk" in a method .
CC | Type of procedure | Risk
1-4 | A simple procedure | Low
5-10 | A well structured and stable procedure | Low
11-20 | A more complex procedure | Moderate
21-50 | A complex procedure, alarming | High
>50 | An error-prone, extremely troublesome, untestable procedure | Very high
The original, usual limit for a maximum acceptable value for cyclomatic complexity is 10. Other values, such as 15 or 20, have also been suggested. Regardless of the exact limit, if cyclomatic complexity exceeds 20, you should consider it alarming. Methods with a high cyclomatic complexity should be simplified or split into several smaller methods.

Cyclomatic complexity equals the minimum number of test cases you must execute to cover every possible execution path through your methods. This is important information for testing. Carefully test methods with the highest cyclomatic complexity values.

Bad fix probability

There is a frequently quoted table of "bad fix probability" values by cyclomatic complexity. This is the probability of an error accidentally inserted into a program while trying to fix a previous error.
CC | Bad fix probability
1-10 | 5%
20-30 | 20%
>50 | 40%
approaching 100 | 60%
As the complexity reaches high values, changes in the program are likely to produce new errors.

Cyclomatic complexity and Select Case

The use of multi-branch statements (Select Case) often leads to high cyclomatic complexity values. This is a potential source of confusion. Should a long multiway selection be split into several procedures?


Although a procedure consisting of a single multiway decision may require many tests, each test should be easy to construct and execute. Each decision branch can be understood and maintained in isolation, so the method is likely to be reliable and maintainable. Therefore, it is reasonable to exempt methods consisting of a single multiway decision statement from a complexity limit. Note that if the branches of the decision statement contain complexity themselves, the rationale and thus the exemption does not automatically apply. However, if all the branches have very low complexity code in them, it may well apply.

Resolution: For each method, either limit cyclomatic complexity to 10 (or another sensible limit) or provide a written explanation of why the limit was exceeded.

DECDENS Decision Density

Cyclomatic complexity is usually higher in longer methods. How many decisions are there, actually, compared to the number of lines of code? This is where you need decision density (also called cyclomatic density).
DECDENS = CC / LLOC
This metric shows the average cyclomatic density of the code lines within the procedures of your project. Single-line procedure declarations aren't counted since cyclomatic complexity isn't defined for them. The denominator is the logical lines of code metric.
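
For example, a procedure with a cyclomatic complexity of 5 spread over 25 logical lines of code has a decision density of 5 / 25 = 0.2, i.e. roughly one decision for every five logical lines.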

TCC Total Cyclomatic Complexity

The total cyclomatic complexity for a project or a class is calculated as follows.
TCC = Sum(CC) - Count(CC) + 1
In other words, CC is summed over all methods. Count(CC) equals the number of methods; it is deducted because the complexity of each method is at least 1. This way, TCC equals the number of decision statements + 1, regardless of how many methods those decisions are distributed across.
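
For example, a class with three methods whose cyclomatic complexities are 4, 1 and 6 has TCC = (4 + 1 + 6) - 3 + 1 = 9, which is exactly the 8 decision statements in the class plus one.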

Depth of nesting metrics

The following few metrics measure nesting levels. It is assumed that the deeper the nesting, the more complex the code.

DCOND Depth of Conditional Nesting

Depth of conditional nesting, or nested conditionals, is related to cyclomatic complexity. Whereas cyclomatic complexity deals with the absolute number of branches, nested conditionals counts how deeply nested these branches are.
The recommended maximum for DCOND is 5. More nesting levels make the code difficult to understand and can lead to errors in program logic. If you have too many levels, consider splitting the method. You may also find a way to rewrite the logic with a Select Case statement, or an easier-to-read If..Then..ElseIf..Else structure.
Although it might seem to give a lower DCOND, it's not recommended to join multiple conditions into a single, big condition involving lots of And, Or and Not logic.
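
As a made-up C# illustration, the first method below has a conditional nesting depth of 3; rewriting it with guard clauses (one common technique, not mentioned in the text above) expresses the same logic with a depth of 1:

// Depth of conditional nesting = 3
public static string Describe(bool hasOrder, int itemCount, bool isPaid)
{
    if (hasOrder)                                // level 1
    {
        if (itemCount > 0)                       // level 2
        {
            if (isPaid)                          // level 3
            {
                return "Paid order with items";
            }
        }
    }
    return "Nothing to report";
}

// The same logic rewritten with guard clauses: nesting depth = 1
public static string DescribeFlat(bool hasOrder, int itemCount, bool isPaid)
{
    if (!hasOrder) return "Nothing to report";
    if (itemCount == 0) return "Nothing to report";
    if (!isPaid) return "Nothing to report";
    return "Paid order with items";
}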

DLOOP Depth of Looping

Depth of looping equals the maximum level of loop nesting in a procedure. Aim for a maximum of 2 nested loops in a procedure.

Happy reading...

In the coming posts we will discuss each code metric in detail and see how to use it with some examples...

Monday, September 6, 2010

Custom rules for Code Analysis feature in 2010

Initially we were using FxCop for code analysis, i.e., coding standards, best practices, etc.

Thanks to MS!! Now Team Foundation Server's check-in policies can be an excellent solution for this kind of scenario. One of the policies TFS ships with out of the box is the "Code Analysis check-in policy", which runs your Code Analysis rules every time someone attempts to check in code. If the rules fail, the check-in does not succeed. (For those of you who haven't heard of Code Analysis, it is basically FxCop integrated into Visual Studio, from version 2005 onwards.)

So one of the things we had to do was enforce our own rules that are not already covered by FxCop/Code Analysis. It turns out that this is a relatively easy task with FxCop 1.36, which provides a new API called Introspection to perform the code analysis.

One important detail is that FxCop works against compiled Intermediate Language. A huge advantage of this is that the rules are not dependent on the language of your code, as long as it targets the .NET Framework.
Enough of the introduction, let's get to the coding and create a simple rule: private variables should start with "_" and the first letter should be lowercase, as in "private string _studentID".

The steps involved to implement such a rule are:

1) Create a new C# class library project and add references to Microsoft.FxCop.Sdk.dll and Microsoft.Cci.dll. Both come with FxCop.
You can find these references at (\Program Files\Microsoft Visual Studio 9.0\Team Tools\Static Analysis Tools\FxCop).

2) Add an XML file, which will contain the description of your rules; the name of the file is not important. This information will be used by FxCop/Code Analysis to show your rule in the UI. It will have a structure like the following:
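
The screenshot of the XML is not reproduced here. As a rough sketch (the rule name, category, check id and wording are made up, and the element names follow the FxCop 1.36 rule metadata format as far as I recall it, so verify them against your FxCop version), it looks something like this:

<Rules FriendlyName="My Custom Rules">
  <Rule TypeName="PrivateFieldNamingRule" Category="MyCustomRules.Naming" CheckId="CR0001">
    <Name>Private fields should start with an underscore</Name>
    <Description>Private fields should be named with a leading '_' followed by a lowercase letter.</Description>
    <Resolution>Rename private field '{0}' so that it starts with '_' followed by a lowercase letter.</Resolution>
    <MessageLevel Certainty="90">Warning</MessageLevel>
    <FixCategories>NonBreaking</FixCategories>
    <Url />
    <Owner />
    <Email />
  </Rule>
</Rules>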


Most of the information here is pretty self-explanatory, with the exception of the Resolution field, which I will cover later.

3) Add a class to your project, and make it inherit from BaseIntrospectionRule. You should add a parameterless constructor to your class that calls a base constructor and provides three parameters:
a) Name: the name of the rule; it must match the TypeName specified in the rule metadata XML.
b) Xml config: the name of the metadata XML resource. It is the name of the assembly plus the name of the XML file created in step 2, but without the extension.
c) Xml assembly: the assembly containing the metadata XML resource.
You then override one of the Check methods of the base class. When the analysis is run, this method gets called by the analysis engine every time a type is found (such as a class, delegate, enum, struct or interface). If you are interested in checking other elements, such as methods, there are other overloads of the Check method that you can override.

When a rule violation is found, we must add a Problem to the Problems collection inherited from the base rule class. We must supply a Resolution object to the Problem constructor. This object just builds the description that will be shown to the developer about how to fix the rule violation. Remember that in the config XML we supplied the description and placed a positional placeholder such as the ones used in String.Format?

Well, in the Resolution constructor we supply the parameters that fill those placeholders; in our rule we pass in the private variable's name.

Multiple resolutions can be defined; they are assigned a Name attribute in the XML and can then be accessed from code with the this.GetResolutionByName() method.
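
Putting the pieces together, a minimal sketch of such a rule might look like the following. The class, rule and resource names are hypothetical and must match your own XML, and the exact SDK member names (Field.IsPrivate, GetResolution, and so on) should be verified against your copy of the FxCop 1.36 assemblies:

using System;
using Microsoft.FxCop.Sdk;

public class PrivateFieldNamingRule : BaseIntrospectionRule
{
    public PrivateFieldNamingRule()
        : base("PrivateFieldNamingRule",                 // must match TypeName in the XML
               "MyCustomRules.CustomRules",              // assembly name + XML file name, without extension
               typeof(PrivateFieldNamingRule).Assembly)  // assembly containing the XML resource
    {
    }

    // Called by the analysis engine for every type that is found.
    public override ProblemCollection Check(TypeNode type)
    {
        foreach (Member member in type.Members)
        {
            Field field = member as Field;
            if (field == null || !field.IsPrivate)
            {
                continue;
            }

            string name = field.Name.Name;
            bool startsWithUnderscore = name.StartsWith("_", StringComparison.Ordinal);
            bool secondCharIsLower = name.Length > 1 && char.IsLower(name[1]);

            if (!startsWithUnderscore || !secondCharIsLower)
            {
                // GetResolution fills the {0} placeholder defined in the XML.
                Problems.Add(new Problem(GetResolution(name), field));
            }
        }
        return Problems;
    }
}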

4) The last step is to build the assembly and copy it to C:\Program Files\Microsoft Visual Studio 9.0\Team Tools\Static Analysis Tools\FxCop\Rules. In case you are using the standalone FxCop exe, copy the dll to C:\Program Files\Microsoft FxCop 1.36\Rules.

And that's it. Open the tool (Visual Studio Code Analysis or FxCop), and you will see your newly defined rule.


Enjoy!!!

For any suggestions, write to k.manojt1@gmail.com.