Wednesday, November 28, 2012

Linq's syntax symmetry with Postgres

Can't help but notice that Linq's multiple-condition join has an affinity with Postgres's flexible row-comparison syntax:


from c in Customers
join o in Orders on new {c.CompanyID, c.CustomerID} equals new {o.CompanyID, o.CustomerID}


Postgresql:


from Customers as c 
join Orders as o on (c.CompanyID,c.CustomerID) = (o.CompanyID,o.CustomerID)
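The row-comparison idea is easy to emulate in other languages as well. Here's a small Python sketch (data and names made up for illustration) of a composite-key join done via tuple equality, mirroring the (a, b) = (c, d) comparison above:

```python
customers = [
    {"CompanyID": 1, "CustomerID": 10, "Name": "Acme"},
    {"CompanyID": 2, "CustomerID": 20, "Name": "Zenith"},
]
orders = [
    {"CompanyID": 1, "CustomerID": 10, "OrderID": 100},
    {"CompanyID": 2, "CustomerID": 20, "OrderID": 200},
    {"CompanyID": 2, "CustomerID": 99, "OrderID": 300},  # no matching customer
]

# Composite-key join: compare (CompanyID, CustomerID) tuples,
# just like Postgres row comparison.
joined = [
    (c["Name"], o["OrderID"])
    for c in customers
    for o in orders
    if (c["CompanyID"], c["CustomerID"]) == (o["CompanyID"], o["CustomerID"])
]
print(joined)  # [('Acme', 100), ('Zenith', 200)]
```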

Sunday, November 25, 2012

Bye bye attributes-based APIs! Hello fluent APIs!

Cool kids are not doing this style of programming anymore:


public class Flight
{
    [DisplayWidth(200)]
    [FinderController(UseFinder=typeof(CountryFinder), SelectedID="SelectedCountryCode")]
    public string CountryCode { get; set; }
    
    [DisplayWidth(200)]
    [Color(Color.Blue)]
    [FinderController(UseFinder=typeof(CityFinder), SelectedID="SelectedCityID", CascadingField="CountryCode")]
    public int CityID { get; set; }
    
    
    [DisplayWidth(100)]
    [Spinner(From=1,To=30)]
    public int StayDuration { get; set; }
}



Not that there's anything fundamentally wrong with that. However, with the confluence of the language's newer features and many programmers' desire to keep their code free from the clutches of tyrant frameworks, many are realizing that their code can be geared toward plain classes: classes that are not encumbered with infrastructure concerns, classes that best mirror the problem domain, classes that are just plain. Hence we now want the class above to be like this:



public class Flight
{
    public string CountryCode { get; set; }
    public int CityID { get; set; }
    public int StayDuration { get; set; }
}

For example, if you used that class with Entity Framework yet want to re-use it with another ORM (say, NHibernate), you wouldn't want any EF infrastructure concerns carried over from EF to NHibernate. You want your classes in POCO form so you can re-use them in any way you deem fit.


Given that we now strive to write POCO classes, how can we deliver the input functionality around our domain classes without tainting them? One way is to use XML, but as any discerning programmer knows, XML-based APIs are going the way of the dodo.


With the confluence of C# 3's Expression and fluent programming, we can come up with an elegant way to map the domain classes to input. We can avoid XML and attributes-based API altogether.


But before we get to that, there's something inelegant about attribute-based APIs that needs to be pointed out: they promote social-based programming. It's good to be social; with code, however, it's a different matter altogether. With social-based programming, every time you need to know something, you often have to ask someone for their expertise. With an attribute-based API, you'll often find yourself asking a colleague whether there's already an API that does this or that. Since attributes are not discoverable, you can't quickly tell which attribute to use for a given functionality, so you often have to ask around. This is not good for an industry such as ours, which is prone to the bus factor: whom do you ask when the experts are not around? Some might suggest you read the manual, but as we all know, most programmers don't read manuals.


Another bad thing about .NET attributes is that they are designed to be simple, i.e. you cannot use generics on them. Some just content themselves with fashioning a stringly-typed API on top of them, e.g.


[FinderController(UseFinder=typeof(CityFinder), SelectedID="SelectedCityID", CascadingField="CountryCode")]
public int CityID { get; set; }



With a stringly-typed API, it's not readily apparent where you should get your CascadingField from. Is it something inside the CityFinder class? Or is it in the Flight class?


However, with C# 3's Expression + fluent programming, you can make your API more discoverable, you'll be able to discover things by yourself fairly automatically.





So here's how we can make our input API more discoverable using C# 3's Expression and fluent programming.



public class Flight
{
    public string CountryCode { get; set; }
    public int CityID { get; set; }
    public int StayDuration { get; set; }
}


public class FlightInput : InputMap<Flight>
{
    public FlightInput ()
    {
        Input(x => x.CountryCode) // Use C# 3's Expression
            .DisplayWidth(200)
            .Color(KnownColor.Blue)
            .UseFinder<CountryFinder>().SelectedID(x => x.SelectedCountryCode);

        Input(x => x.CityID)
            .DisplayWidth(200)
            .Color(KnownColor.Blue)
            .UseFinder<CityFinder>().SelectedID(x => x.SelectedCityID).CascadingField(x => x.CountryCode);

        Input(x => x.StayDuration)
            .DisplayWidth(100).Color(KnownColor.Green)
            .UseSpinner(1, 30); 
    }
}




With a fluent API, you can check where the CountryCode in CascadingField comes from (F12, or Ctrl+B if using ReSharper). And while writing the CascadingField, the IDE can show you where it is getting its value from:





That's it! That's how elegant a fluent API is. It keeps your domain classes in their pristine form (termed POCO, if you may), making them very amenable to re-use in many parts of your system.


One more thing, your code is refactoring-friendly with Expression and fluent-based API.
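The chaining mechanics behind such an API are simple. Below is a minimal, hypothetical sketch in Python (not C#, and no relation to any real mapping library; all names are made up) showing the core trick: each configuration method records a setting and returns the builder itself so calls can be chained:

```python
class Input:
    """Collects settings for one field via method chaining."""
    def __init__(self, field):
        self.settings = {"field": field}

    def display_width(self, w):
        self.settings["display_width"] = w
        return self  # returning self is what enables the fluent chain

    def color(self, c):
        self.settings["color"] = c
        return self

flight_map = {}

def map_input(field):
    """Registers a field in the map and hands back its fluent builder."""
    inp = Input(field)
    flight_map[field] = inp
    return inp

map_input("StayDuration").display_width(100).color("Green")
print(flight_map["StayDuration"].settings)
```

C#'s Expression adds the compile-time safety on top of this pattern: the field is referenced as a lambda rather than a string, so renames are caught by the compiler.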


Sample Fluent API barebone code: http://www.ienablemuch.com/2012/12/sample-fluent-api-barebone-code.html


Happy Coding! ツ

ASP.NET MVC returned type discipline

A few times I've seen this in code...


public ActionResult GetSoap()
{
    return Json(new Soap { Name = "Tide", Feature = "With Calamansi" } );
}



...instead of the following one in which you can easily infer the code's intent:

public JsonResult GetSoap()
{
    return Json(new Soap { Name = "Tide", Feature = "With Calamansi" } );
}


That's deliberately leaking an abstraction. A devil's advocate would suppose you are just showing off your deep knowledge of ASP.NET MVC's object hierarchy: you know that JsonResult derives from ActionResult, so you can just make every MVC action's return type ActionResult. Well, guess what, everything derives from object anyway, so you could make everything return object and it would still compile and run:


public object GetSoap()
{
    return Json(new Soap { Name = "Tide", Feature = "With Calamansi" } );
}


The problem with that approach (and the ActionResult code above, for that matter) is that you need to cast the returned result to its actual type (JsonResult) when you do TDD on that method, hence warranting an explicit cast.


[Test]
public void SomeTest()
{
    // unnecessary casting
    JsonResult r = (JsonResult) GetSoap(); 
    
    Soap s = (Soap)r.Data;    
}


Whereas if you declare the MVC method to return its actual type, i.e. JsonResult, you don't need the extra cast when unit testing it:


[Test]
public void SomeTest()
{
    JsonResult r = GetSoap(); 
    
    Soap s = (Soap)r.Data;    
}


Giving the controller action a specific return type makes the code's intent clearer.

Happy Coding! ツ


Saturday, November 24, 2012

We are at the mercy of language designers

using System;

namespace Craft
{

    public delegate string Greeter(string s);
    public class HeyAttribute : Attribute
    {
        public Greeter GreetAction { get; set; }
    }


  
    // A strongly-typed mechanism on attributes is no joy in C#; it's not
    // possible. C# language designers deliberately reduced the attributes' "complexity":
    // http://stackoverflow.com/questions/294216/why-does-c-sharp-forbid-generic-attribute-types

    [Hey(GreetAction = Mate.Yo)] // This will not compile, despite the simplicity
    public class Test
    {
    }



    public class Mate
    {
        public static string Yo(string s)
        {
            return "Hello " + s;
        }
    }


    class Program
    {
        static void Main(string[] args)
        {
        }
    }
}



   

Monday, November 12, 2012

Banker's Rounding


.NET uses banker's rounding by default, not common rounding. To override it, pass MidpointRounding.AwayFromZero (a.k.a. common rounding) to Math.Round.

That is, a trailing .5 won't automatically round the number up; rounding up is balanced between odd and even numbers.

When you use Math.Round on a number ending in .5, even numbers are rounded down and odd numbers are rounded up.

To illustrate banker's rounding (.NET's default; if you want to be explicit about it, pass MidpointRounding.ToEven to Math.Round):


static void TestBanker()
{
    Console.WriteLine(Math.Round(0.5)); // 0
    Console.WriteLine(Math.Round(1.5)); // 2
    Console.WriteLine(Math.Round(2.5)); // 2
    Console.WriteLine(Math.Round(3.5)); // 4
    Console.WriteLine(Math.Round(4.5)); // 4
    Console.WriteLine(Math.Round(5.5)); // 6
    Console.WriteLine(Math.Round(6.5)); // 6
    Console.WriteLine(Math.Round(7.5)); // 8
    Console.WriteLine(Math.Round(8.5)); // 8
    Console.WriteLine(Math.Round(9.5)); // 10
}


To illustrate common rounding (the .NET parameter is MidpointRounding.AwayFromZero):


static void TestCommon()
{
    Console.WriteLine(Math.Round(0.5, MidpointRounding.AwayFromZero)); // 1
    Console.WriteLine(Math.Round(1.5, MidpointRounding.AwayFromZero)); // 2
    Console.WriteLine(Math.Round(2.5, MidpointRounding.AwayFromZero)); // 3
    Console.WriteLine(Math.Round(3.5, MidpointRounding.AwayFromZero)); // 4
    Console.WriteLine(Math.Round(4.5, MidpointRounding.AwayFromZero)); // 5
    Console.WriteLine(Math.Round(5.5, MidpointRounding.AwayFromZero)); // 6
    Console.WriteLine(Math.Round(6.5, MidpointRounding.AwayFromZero)); // 7
    Console.WriteLine(Math.Round(7.5, MidpointRounding.AwayFromZero)); // 8
    Console.WriteLine(Math.Round(8.5, MidpointRounding.AwayFromZero)); // 9
    Console.WriteLine(Math.Round(9.5, MidpointRounding.AwayFromZero)); // 10
}
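As an aside, Python 3's built-in round uses banker's rounding too, while its decimal module can be told to round away from zero explicitly; a quick sketch for comparison:

```python
from decimal import Decimal, ROUND_HALF_UP

# Python 3's round() is half-to-even (banker's rounding), like .NET's default.
bankers = [round(n + 0.5) for n in range(10)]

# decimal's ROUND_HALF_UP is "common rounding" (away from zero for positives).
common = [int(Decimal(f"{n}.5").quantize(Decimal("1"), rounding=ROUND_HALF_UP))
          for n in range(10)]

print(bankers)  # [0, 2, 2, 4, 4, 6, 6, 8, 8, 10]
print(common)   # [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
```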



Sunday, November 11, 2012

Page splitting with clustered composite primary key. Performance and database size

Here are the timings obtained for a unique incrementing column + clustered composite primary key, versus a clustered incrementing primary key + unique composite.

"Non-increasing" means the composite columns received random/semi-random numbers; "increasing" means they received monotonically increasing values. Units are in seconds.

                                                        non-increasing  increasing 
                                        
unique incrementing + clustered composite primary key   23              19      

clustered incrementing primary key + unique composite   20              19





And here are the database sizes, units are in megabytes:

                                                        non-increasing  increasing 
                                        
unique incrementing + clustered composite primary key   56              46

clustered incrementing primary key + unique composite   46              46




We can infer that the increased database size and slower performance stem from page splitting. A clustered primary key (whether one column or more) arranges records in primary-key order, so if the values inserted into the clustered key are random/non-increasing, rows get physically rearranged, which slows inserts. Physically rearranging rows causes page splits, which in turn inflate the database size.
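The effect can be mimicked outside the database. Here's a toy Python model (an analogy only, not SQL Server's actual page mechanics) where keeping keys in sorted order forces shifts when keys arrive in random order, but costs nothing when they arrive in increasing order:

```python
import bisect
import random

def insert_cost(keys):
    """Toy clustered index: keep keys sorted; count the entries that must
    shift right to make room at each insertion point."""
    ordered, shifts = [], 0
    for k in keys:
        pos = bisect.bisect_left(ordered, k)
        shifts += len(ordered) - pos  # entries displaced by this insert
        ordered.insert(pos, k)
    return shifts

random.seed(0)
n = 2000
print(insert_cost(range(n)))                    # increasing keys: 0 shifts
print(insert_cost(random.sample(range(n), n)))  # random keys: many shifts
```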



Following are the scripts that test the impact of different database designs to your app's performance and its database size.



Unique incrementing + clustered composite primary key. Non-increasing values on the clustered composite primary key, hence page splitting happens. Slow, and a larger database: 23 seconds and 56 MB.

use master;
go

-- drop database testC;
-- go

create database testC;
go


use testC;


create table tbl
(
 i int not null identity(1,1) unique,
 a int not null,
 b int not null,
 x varchar(2000) not null,  
 constraint pk_tbl primary key clustered(a,b) 
);


set nocount on;

declare @i int = 0;
while @i < 8 begin
 
 declare @j int = 0;
 while @j < 2000 begin
  
  insert into tbl(a,b, x) values(@j,@i, replicate('x',2000));

  set @j = @j + 1;
 end;

 set @i = @i + 1;
end;

set nocount off;

-- 23 seconds. 56 MB



Clustered incrementing primary key + unique composite. Non-increasing values on the unique composite, yet increasing values on the clustered primary key, hence no page splitting happens. Fast, and a small database: 20 seconds and 46 MB.

use master;
go

-- drop database testD;
-- go

create database testD;
go


use testD;

create table tbl
(
 i int identity(1,1) primary key clustered,
 a int not null,
 b int not null,
 x varchar(2000) not null,
 constraint uk_tbl unique(a,b) 
);


set nocount on;
declare @i int = 0;
while @i < 8 begin
 
 declare @j int = 0;
 while @j < 2000 begin 
  
  insert into tbl(a,b, x) values(@j,@i, replicate('x',2000));

  set @j = @j + 1;
 end;
  
 set @i = @i + 1;
end;
set nocount off;

-- 20 seconds. 46 MB


Unique incrementing + clustered composite primary key. Increasing values on the clustered composite primary key, hence no page splitting happens. Fast, and a small database: 19 seconds and 46 MB.


use master
go

-- drop database testE;
-- go

create database testE;

go

use testE;

create table tbl
(
 i int not null identity(1,1) unique,
 a int not null,
 b int not null,
 x varchar(2000) not null,
 constraint pk_tbl primary key clustered(a,b) 
);


set nocount on;

declare @i int = 0;
while @i < 8 begin

 declare @j int = 0;
 while @j < 2000 begin
  
  insert into tbl(a,b, x) values(@i,@j, replicate('x',2000));

  set @j = @j + 1;
 end;

 set @i = @i + 1;
end;

set nocount off;

-- 19 seconds. 46 MB



Clustered incrementing primary key + unique composite. Increasing values on the clustered primary key, hence no page splitting happens. Fast, and a small database: 19 seconds and 46 MB.


use master
go

-- drop database testF;
-- go

create database testF;

go

use testF;

create table tbl
(
 i int identity(1,1) primary key clustered,
 a int not null,
 b int not null,
 x varchar(2000) not null,
 constraint uk_tbl unique(a,b) 
);


set nocount on;

declare @i int = 0;
while @i < 8 begin

 declare @j int = 0;
 while @j < 2000 begin
  
  insert into tbl(a,b, x) values(@i,@j, replicate('x',2000));
  set @j = @j + 1;

 end;

 set @i = @i + 1;
end;
  
set nocount off;

-- 19 seconds. 46 MB 

SSRS: Column groups visibility

Related to: http://www.ienablemuch.com/2012/11/ssrs-grouping-dynamically.html

Given the type of report below, the e-mail information belongs to the Person (the grouped entity), not to the Sales section (the detail entity). Perhaps you've seen the following scenario already and want to set the e-mail promotion's visibility conditionally:



There's a limitation in SSRS: even in the 2012 version, you cannot programmatically set the visibility of columns that are in the grouped section:





That option is available in the details section only:


What we can do is move those grouped columns to the details section and then adjust things accordingly, merging similar vertical values in particular. There's no mechanism to move columns in SSRS, though; just delete the columns from the grouped section one by one:

Then, when you delete the last column from the group section, you will be prompted with this confirmation:




Select the second option:



This is what will be left of your report when you delete all the columns from the grouped section:

Then re-insert the deleted columns: right-click a column, then select Insert Column > Left:


Then select the fields that need to be re-added:



The Column Visibility option for columns on the grouping entity (Person) is now available, which means we can hide columns programmatically instead of deleting them permanently.



You can also override the sorting of your grouped entity (Person): right-click SalesPersonID in Row Groups, then select Group Properties...




By default, if you don't override the sorting, the rows will be sorted by SalesPersonID:



This is how your report should look:

Note that the entity (Person) you are grouping on is repeated. To prevent the grouped entity from repeating, highlight all the cells from the SalesPersonID to EmailPromotion columns, then in Properties (press F4 if the Properties toolbox is not visible in your IDE), select HideDuplicates, then select SalesPersonID:


This is how your report should look now:

Now, to give the illusion of vertical cell merging similar to columns in a group section, highlight the columns from SalesPersonID to EmailPromotion:

From the Properties toolbox, select the BorderStyle's Top, then select <Expression...>, then put this expression:  =IIf(Previous(Fields!SalesPersonID.Value) = Fields!SalesPersonID.Value, "None", "Solid"):

Then on their BorderStyle's Bottom property, select the <Expression...> too:

Then enter this: =IIf(COUNT(Fields!SalesPersonID.Value,"SalesPersonID") = RowNumber("SalesPersonID"),"Solid", "None")

That expression basically detects whether we need to put a border on the cell: we only need one on the last row of the group (the grouping entity is Person, facilitated by the SalesPersonID column).
This is now the final output, as intended:






This is the stored proc used in the report:

CREATE procedure [dbo].[GetTheSalesOfSalesPerson]
as
begin
 select 
  h.SalesPersonID, p.LastName, p.MiddleName, p.FirstName, p.EmailPromotion
  , h.SalesOrderNumber
  , h.OrderDate
  , s.Name as CustomerName
 from Person.Person p
 join Sales.SalesOrderHeader h on h.SalesPersonID = p.BusinessEntityID
 join Sales.Customer c on c.CustomerID = h.CustomerID
 join Sales.Store s on s.BusinessEntityID = c.StoreID
 where h.OrderDate >= convert(date, '2008-01-01')
 order by h.SalesPersonID;
end;

Download the database from: http://msftdbprodsamples.codeplex.com/releases/view/55330


Happy Computing! ツ

Sunday, November 4, 2012

SSRS grouping dynamically

Initial steps: http://www.ienablemuch.com/2012/11/ssrs-nested-grouping.html

This how-to guide shows how to group dynamically by a user-selected column, in 50 steps.

If you want to group on a column that is in a group section, meaning you need to hide that user-selected column (the Column Visibility option is not available when a column is in a group section, even in SSRS 2012), follow this guide: http://www.ienablemuch.com/2012/11/ssrs-hiding-a-column-from-group.html

[Steps 1–23 consist of screenshots in the original post.]

24. We have to format the grouped column if it is a date

25. To do that, we set its Text Box Properties

26. Go to Number tab, then choose Custom

27. Then put the above expression

28. Then conditionally hide the column if the grouping is on that column

[Steps 29–33 consist of screenshots in the original post.]

34. Hide the Order Date conditionally too (if the grouping is on Order Date)


[Steps 36–37 consist of screenshots in the original post.]

38. Good

39. Good, the Order Date displays accordingly. But it is not quite right yet: even when we group on Order Date, the dates are displayed in Customer name order. The latest Order Date in the AdventureWorks database is June 2008

40. We must set the order based on the grouping column

41. The default was CustomerName

42. Click the fx expression button. The above will appear

43. Then replace the expression with the simple expression above. As we cannot make the sort direction conditional (ascending vs descending), we will just put the descending Order Date in a separate sorting option

44. Your sort by should appear as Expr

45. Then add another column

46. The expression that should appear in the second sort expression

[Steps 47–48 consist of screenshots in the original post.]

49. Change the Then By's sorting to descending

50. The sorting is correct now


Happy Coding! ツ