Friday, December 31, 2010

Lazy-loading fibonacci

Let's start the next year by becoming a lazy person, in a Dilbert kind of way: fewer lines of code, lean code, less to debug.



For a start, here's your friendly neighborhood yield return in action (Fibonacci using IEnumerable):

using System;
using System.Collections.Generic;

using System.Linq;

namespace Fibonacci
{
    class MainClass
    {
        public static void Main (string[] args)
        {
            Console.WriteLine("Sans list. Lazy load stuff:");
            int i = 0;
            
            
            foreach(int n in Fibonacci().Take(10))
            {
                ++i;
                Console.WriteLine("Loading {0} {1}", i, n);             
            }
            
            
            Console.WriteLine("\nPick the 20th fibonacci:");
            Console.WriteLine("\n20th fibonacci: {0}", Fibonacci().Skip(20 - 1).Take(1).Single());
            
            
            Console.WriteLine("\nEagerly load everything in list:");
            i = 0;      
            foreach(int n in Fibonacci().Take(10).ToList())
            {
                ++i;
                Console.Write("\nEager loading {0} {1}", i, n);
            }
                    
        }
        

                        
        static IEnumerable<int> Fibonacci()
        {
            int a = 0,  b = 1;
                        
            for(;;)
            {
                Console.Write("Lazy");         
                yield return a;
             
                int n = a;
                a += b;
                b = n;          
            }                           
        }
    }//class
        
}


Contrast that with the old way, where a lot of boilerplate code is needed (which may be favorable to a PHB):

using System;

namespace Fib2
{
    using System.Linq;
    using FibLib;

    class MainClass
    {
        public static void Main(string[] args)
        {
            Console.WriteLine("Sans list. Lazy load stuff:");
            int i = 0;

            var f = new Fibonacci();

            foreach (int n in f.Take(10))
            {
                ++i;
                Console.WriteLine("Loading {0} {1}", i, n);
            }


            Console.WriteLine("\nPick the 20th fibonacci:");
            Console.WriteLine("\n20th fibonacci: {0}", f.Skip(20 - 1).Take(1).Single());


            Console.WriteLine("\nEagerly load everything in list:");
            i = 0;
            foreach (int n in f.Take(10).ToList())
            {
                ++i;
                Console.Write("\nEager loading {0} {1}", i, n);
            }

        }
    }


}

namespace FibLib
{    
    class Fibonacci : System.Collections.Generic.IEnumerable<int>
    {
        System.Collections.Generic.IEnumerator<int> _f = new FibonacciIterator();
        // implement IEnumerable<int>...
        System.Collections.Generic.IEnumerator<int> System.Collections.Generic.IEnumerable<int>.GetEnumerator()
        {
            _f.Reset();
            return _f;
        }


        System.Collections.IEnumerator System.Collections.IEnumerable.GetEnumerator()
        {
            throw new Exception("This doesn't get executed. What's the use?");
            return _f;
        }


        // ...implement IEnumerable



        class FibonacciIterator : System.Collections.Generic.IEnumerator<int>
        {
            int a = 0, b = 1;

            int System.Collections.Generic.IEnumerator<int>.Current
            {
                get
                {
                    return b;
                }
            }

            object System.Collections.IEnumerator.Current
            {
                get
                {
                    throw new Exception("This doesn't get executed. What's the use?");
                    return b;
                }
            }

            void System.IDisposable.Dispose() { }


            bool System.Collections.IEnumerator.MoveNext()
            {
                Console.Write("Lazy");
                int n = a;
                a += b;
                b = n;
                return true;
            }

            void System.Collections.IEnumerator.Reset()
            {
                a = 0;
                b = 1;
            }

        }

    }//class Fibonacci
}


Code A has 55 lines of code, while code B has 108 lines

yield return saves the day! :-)

Output of both approaches:

Sans list. Lazy load stuff:
LazyLoading 1 0
LazyLoading 2 1
LazyLoading 3 1
LazyLoading 4 2
LazyLoading 5 3
LazyLoading 6 5
LazyLoading 7 8
LazyLoading 8 13
LazyLoading 9 21
LazyLoading 10 34

Pick the 20th fibonacci:
LazyLazyLazyLazyLazyLazyLazyLazyLazyLazyLazyLazyLazyLazyLazyLazyLazyLazyLazyLazy
20th fibonacci: 4181

Eagerly load everything in list:
LazyLazyLazyLazyLazyLazyLazyLazyLazyLazy
Eager loading 1 0
Eager loading 2 1
Eager loading 3 1
Eager loading 4 2
Eager loading 5 3
Eager loading 6 5
Eager loading 7 8
Eager loading 8 13
Eager loading 9 21
Eager loading 10 34

This is the SQL and eager-loading approach: http://www.ienablemuch.com/2010/09/fibonacci-using-sql.html

Wednesday, December 29, 2010

Fluent NHibernate AddFromNamespaceOf. The overlooked method of Fluent NHibernate

Unless I'm missing something, writing an AddFromNamespaceOf method is a simple undertaking:

public static class Helper
{

    public static FluentMappingsContainer AddFromNamespaceOf<T>(
           this FluentMappingsContainer fmc)
    {
        Type t = typeof(T);
        string ns = t.Namespace;
        
    
        var y = from x in t.Assembly.GetTypes()
                where 
                    x.FullName == ns + "." + x.Name
                    && x.BaseType.IsGenericType                        
                    && x.BaseType.GetGenericTypeDefinition().IsDerivedFrom(typeof(ClassMap<>))
                select x;
                
        foreach(Type z in y)
            fmc.Add(z);                 
    
        
        return fmc;
    }
    
    private static bool IsDerivedFrom(this Type givenType, Type genericType)
    {
        if (givenType == genericType)
            return true;
        
        if (givenType.BaseType.IsGenericType)
            return givenType.BaseType
                .GetGenericTypeDefinition().IsDerivedFrom(genericType);
        
        return false;
    }
}  

Useful when doing unit tests or other kinds of testing and you don't want to be bothered with creating a separate project/assembly for a group of entities. Example: http://code.google.com/p/fluent-nhibernate-lowercase-system/source/browse/#svn
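
For instance, here's a minimal usage sketch (ProductMap is a hypothetical mapping class; any type living in the target namespace works as the type argument, since the filter only compares against typeof(T).Namespace; the usual FluentNHibernate.Cfg and Cfg.Db usings are assumed):

ISessionFactory sessionFactory =
    Fluently.Configure()
        .Database(MsSqlConfiguration.MsSql2008
            .ConnectionString(@"Data Source=.\SQLExpress;Initial Catalog=test3;Trusted_Connection=true"))
        .Mappings(m =>
            // picks up every ClassMap<> declared in ProductMap's namespace
            m.FluentMappings.AddFromNamespaceOf<ProductMap>())
        .BuildConfiguration()
        .BuildSessionFactory();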

Tuesday, December 28, 2010

C# equivalent of Python pass statement





While coding an extension method today, I encountered the empty-statement pattern again.

I wanted to write Assert.Success() on the successful condition, but alas, there's no Assert.Success(); we only have Assert.Fail() and Assert.Ignore().

So I was compelled to write this:

public static class Helper
 {
  public static void AssertContainsMessage(this string message, string messagePortion)
  {
   if (message.Contains(messagePortion))
    ;
   else
    Assert.AreEqual(message, messagePortion);
  }
 }

However, the compiler balks at the notion that my semicolon is possibly a mistaken empty statement.


Python has a pass statement...
if i != 0:
        pass
else:
        print 'please check your parameters'

..., and I wish C# allowed this explicit expression of intent for an empty statement:

if (i != 0)
    null;
else
    Console.WriteLine("please check your parameters");

Think of it as null == empty, right? A null statement == an empty statement. But that is not palatable to most C-based programmers.

So we have to get by with a block containing no statements; this works:

if (i != 0)
    {}
else
    Console.WriteLine("please check your parameters");

Hmmm.. :-) There's no more compiler warning. Typing curly braces requires a more conscious effort, an intent, hence the compiler's rationale for not raising a warning; a semicolon, on the other hand, is second nature to all C-based programmers. Perhaps the best analogy is double-clicking: most people double-click when they want to click something, regardless of whether it is needed; it's not a conscious effort, most users just double-click. Likewise, most C-based programmers just terminate everything with a semicolon.
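
Applied back to the earlier helper, the warning-free version is a small sketch like this (same behavior, just an explicit empty block in place of the lone semicolon):

public static class Helper
{
    public static void AssertContainsMessage(this string message, string messagePortion)
    {
        if (message.Contains(messagePortion))
        {} // intentionally empty: the assertion passes
        else
            Assert.AreEqual(message, messagePortion);
    }
}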

At one time or another, you have mistakenly written a property with a semicolon...

public class Person
{
    public int PersonId { get; set; };
}

..., only to quickly backspace that obnoxious semicolon after the closing bracket. How unforgiving the compiler is!

I digress; just think what could happen if the compiler did not balk on an empty statement:

if (ans == 'Y');
       IZ_SHREDDIN_UR_FILE_NAO();
    

Advice for Morts: heed compiler warnings :-)

Monday, December 27, 2010

Troubleshooting NUnit Testing of Linq on Mono 4.0

At the time of this writing (Mono 2.8.1), you might receive an error (TestFixtureSetup failed in MyClass) when you run NUnit tests of Linq on Mono targeting .NET Framework 4.0. This can be avoided by going back to framework version 3.5. If you don't intend to run NUnit tests, Linq runs fine on Mono targeting framework version 4.0.

The Video: http://www.youtube.com/watch?v=8hhxGVtaosY

Sigh.. Mono should just synchronize its version numbering with the .NET Framework version.

Thursday, December 23, 2010

Date functions

Michael-Buens-MacBook:~ Michael$ cal 2010
                             2010

      January               February               March
Su Mo Tu We Th Fr Sa  Su Mo Tu We Th Fr Sa  Su Mo Tu We Th Fr Sa
                1  2      1  2  3  4  5  6      1  2  3  4  5  6
 3  4  5  6  7  8  9   7  8  9 10 11 12 13   7  8  9 10 11 12 13
10 11 12 13 14 15 16  14 15 16 17 18 19 20  14 15 16 17 18 19 20
17 18 19 20 21 22 23  21 22 23 24 25 26 27  21 22 23 24 25 26 27
24 25 26 27 28 29 30  28                    28 29 30 31
31                                          
       April                  May                   June
Su Mo Tu We Th Fr Sa  Su Mo Tu We Th Fr Sa  Su Mo Tu We Th Fr Sa
             1  2  3                     1         1  2  3  4  5
 4  5  6  7  8  9 10   2  3  4  5  6  7  8   6  7  8  9 10 11 12
11 12 13 14 15 16 17   9 10 11 12 13 14 15  13 14 15 16 17 18 19
18 19 20 21 22 23 24  16 17 18 19 20 21 22  20 21 22 23 24 25 26
25 26 27 28 29 30     23 24 25 26 27 28 29  27 28 29 30
                      30 31                 
        July                 August              September
Su Mo Tu We Th Fr Sa  Su Mo Tu We Th Fr Sa  Su Mo Tu We Th Fr Sa
             1  2  3   1  2  3  4  5  6  7            1  2  3  4
 4  5  6  7  8  9 10   8  9 10 11 12 13 14   5  6  7  8  9 10 11
11 12 13 14 15 16 17  15 16 17 18 19 20 21  12 13 14 15 16 17 18
18 19 20 21 22 23 24  22 23 24 25 26 27 28  19 20 21 22 23 24 25
25 26 27 28 29 30 31  29 30 31              26 27 28 29 30
                                            
      October               November              December
Su Mo Tu We Th Fr Sa  Su Mo Tu We Th Fr Sa  Su Mo Tu We Th Fr Sa
                1  2      1  2  3  4  5  6            1  2  3  4
 3  4  5  6  7  8  9   7  8  9 10 11 12 13   5  6  7  8  9 10 11
10 11 12 13 14 15 16  14 15 16 17 18 19 20  12 13 14 15 16 17 18
17 18 19 20 21 22 23  21 22 23 24 25 26 27  19 20 21 22 23 24 25
24 25 26 27 28 29 30  28 29 30              26 27 28 29 30 31
31                                          


Months' first Mondays...
#
    base_date    | month_first_monday 
-----------------+--------------------
 Jan 01 2010 Fri | Jan 04 2010 Mon
 Feb 01 2010 Mon | Feb 01 2010 Mon
 Mar 01 2010 Mon | Mar 01 2010 Mon
 Apr 01 2010 Thu | Apr 05 2010 Mon
 May 01 2010 Sat | May 03 2010 Mon
 Jun 01 2010 Tue | Jun 07 2010 Mon
 Jul 01 2010 Thu | Jul 05 2010 Mon
 Aug 01 2010 Sun | Aug 02 2010 Mon
 Sep 01 2010 Wed | Sep 06 2010 Mon
 Oct 01 2010 Fri | Oct 04 2010 Mon
 Nov 01 2010 Mon | Nov 01 2010 Mon
 Dec 01 2010 Wed | Dec 06 2010 Mon
(12 rows)

...query (month_beginning_dow):

select to_char(z.d, 'Mon dd yyyy Dy') as base_date, 

    to_char( month_beginning_dow(extract(year from z.d)::int, 
        extract(month from z.d)::int, 1), 'Mon dd yyyy Dy') 
            as month_first_monday
from
(
     select 'Jan 1 2010'::date + (interval '1' month * x.n) as d
     from generate_series(0,11) as x(n)
) as z

Months' last Sundays...
#
    base_date    | month_last_date | month_last_sunday 
-----------------+-----------------+-------------------
 Jan 01 2010 Fri | Jan 31 2010 Sun | Jan 31 2010 Sun
 Feb 01 2010 Mon | Feb 28 2010 Sun | Feb 28 2010 Sun
 Mar 01 2010 Mon | Mar 31 2010 Wed | Mar 28 2010 Sun
 Apr 01 2010 Thu | Apr 30 2010 Fri | Apr 25 2010 Sun
 May 01 2010 Sat | May 31 2010 Mon | May 30 2010 Sun
 Jun 01 2010 Tue | Jun 30 2010 Wed | Jun 27 2010 Sun
 Jul 01 2010 Thu | Jul 31 2010 Sat | Jul 25 2010 Sun
 Aug 01 2010 Sun | Aug 31 2010 Tue | Aug 29 2010 Sun
 Sep 01 2010 Wed | Sep 30 2010 Thu | Sep 26 2010 Sun
 Oct 01 2010 Fri | Oct 31 2010 Sun | Oct 31 2010 Sun
 Nov 01 2010 Mon | Nov 30 2010 Tue | Nov 28 2010 Sun
 Dec 01 2010 Wed | Dec 31 2010 Fri | Dec 26 2010 Sun
(12 rows)

...query:
select to_char(z.d, 'Mon dd yyyy Dy') as base_date,
to_char(month_last_date(extract(year from z.d)::int, 
    extract(month from z.d)::int), 'Mon dd yyyy Dy') 
        as month_last_date,
to_char( month_ending_dow(extract(year from z.d)::int, 
    extract(month from z.d)::int, 0), 'Mon dd yyyy Dy') 
        as month_last_sunday       
from
(
     select 'Jan 1 2010'::date + (interval '1' month * x.n) as d
     from generate_series(0,11) as x(n)
) as z

Date library:
CREATE FUNCTION month_last_date(_year integer, _month integer)
  RETURNS date AS
$$
 select (($1 || '-' || $2 || '-1')::date + interval '1' month)::date - 1;
$$
  LANGUAGE sql immutable;



-- this function and the next one have a nice symmetry to each other
CREATE FUNCTION date_next_dow(_base_date date, _dow integer)
  RETURNS date AS
$$
 select
  case when extract(dow from $1) <= $2 then
   $1 + ($2 - (extract(dow from $1)::int))
  else
   $1 + ($2 - (extract(dow from $1)::int)) + 7
  end 
$$
  LANGUAGE sql immutable;


CREATE FUNCTION date_previous_dow(_base_date date, _dow integer)
  RETURNS date AS
$$
 select
  case when $2 <= extract(dow from $1) then
   $1 + ($2 - (extract(dow from $1)::int))
  else
   $1 + ($2 - (extract(dow from $1)::int)) - 7
  end;
$$
  LANGUAGE sql immutable;  



CREATE FUNCTION month_beginning_dow(_year integer, _month integer, _dow integer)
  RETURNS date AS
$$
 select date_next_dow( ($1 || '-' || $2 || '-1')::date, $3);
  
$$
  LANGUAGE sql immutable;


CREATE FUNCTION month_ending_dow(_year integer, _month integer, _dow integer)
  RETURNS date AS
$$
 select date_previous_dow( month_last_date( $1, $2 ), $3);
$$
  LANGUAGE sql immutable;
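
A couple of quick sanity checks (the expected values are taken from the first-Monday and last-Sunday tables above, for December 2010):

select month_beginning_dow(2010, 12, 1) as first_monday,  -- Dec 06 2010
       month_ending_dow(2010, 12, 0)    as last_sunday;   -- Dec 26 2010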




Other usage, next Friday...

#
    base_date    |   next_friday   
-----------------+-----------------
 Nov 05 2010 Fri | Nov 05 2010 Fri
 Nov 06 2010 Sat | Nov 12 2010 Fri
 Nov 07 2010 Sun | Nov 12 2010 Fri
 Nov 08 2010 Mon | Nov 12 2010 Fri
 Nov 09 2010 Tue | Nov 12 2010 Fri
 Nov 10 2010 Wed | Nov 12 2010 Fri
 Nov 11 2010 Thu | Nov 12 2010 Fri
 Nov 12 2010 Fri | Nov 12 2010 Fri
 Nov 13 2010 Sat | Nov 19 2010 Fri
 Nov 14 2010 Sun | Nov 19 2010 Fri
 Nov 15 2010 Mon | Nov 19 2010 Fri
 Nov 16 2010 Tue | Nov 19 2010 Fri
 Nov 17 2010 Wed | Nov 19 2010 Fri
 Nov 18 2010 Thu | Nov 19 2010 Fri
 Nov 19 2010 Fri | Nov 19 2010 Fri
 Nov 20 2010 Sat | Nov 26 2010 Fri
 Nov 21 2010 Sun | Nov 26 2010 Fri
 Nov 22 2010 Mon | Nov 26 2010 Fri
(18 rows)

..., query:

select to_char(z.d, 'Mon dd yyyy Dy') as base_date, 
       to_char( date_next_dow(z.d, 5), 'Mon dd yyyy Dy') as next_friday
from
(
     select 'Nov 5 2010'::date + x.n as d
     from generate_series(0,17) as x(n)
) as z

Previous Saturday...
#
    base_date    |  previous_saturday  
-----------------+-------------------
 Nov 05 2010 Fri | Oct 30 2010 Sat
 Nov 06 2010 Sat | Nov 06 2010 Sat
 Nov 07 2010 Sun | Nov 06 2010 Sat
 Nov 08 2010 Mon | Nov 06 2010 Sat
 Nov 09 2010 Tue | Nov 06 2010 Sat
 Nov 10 2010 Wed | Nov 06 2010 Sat
 Nov 11 2010 Thu | Nov 06 2010 Sat
 Nov 12 2010 Fri | Nov 06 2010 Sat
 Nov 13 2010 Sat | Nov 13 2010 Sat
 Nov 14 2010 Sun | Nov 13 2010 Sat
 Nov 15 2010 Mon | Nov 13 2010 Sat
 Nov 16 2010 Tue | Nov 13 2010 Sat
 Nov 17 2010 Wed | Nov 13 2010 Sat
 Nov 18 2010 Thu | Nov 13 2010 Sat
 Nov 19 2010 Fri | Nov 13 2010 Sat
 Nov 20 2010 Sat | Nov 20 2010 Sat
 Nov 21 2010 Sun | Nov 20 2010 Sat
 Nov 22 2010 Mon | Nov 20 2010 Sat
(18 rows)

...query:

select to_char(z.d, 'Mon dd yyyy Dy') as base_date, 
       to_char( date_previous_dow(z.d, 6), 'Mon dd yyyy Dy') 
           as previous_saturday
from
(
     select 'Nov 5 2010'::date + x.n as d
     from generate_series(0,17) as x(n)
) as z


Happy new year folks!

Related to: Finding previous day of the week

Tuesday, December 21, 2010

Custom Sort Order on Postgresql

Given the countries table below, how do we place the US on top of the list?

Here's the countries table:
create table countries
(
country text not null unique
);

create index on countries(country);


insert into countries values
('Afghanistan'),
('Albania'),
('Algeria'),
('American Samoa'),
('Andorra'),
('Angola'),
('Anguilla'),
('Antarctica'),
('Antigua and Barbuda'),
('Argentina'),
('Armenia'),
('Aruba'),
('Australia'),
('Austria'),
('Azerbaijan'),
('Bahamas'),
('Bahrain'),
('Bangladesh'),
('Barbados'),
('Belarus'),
('Belgium'),
('Belize'),
('Benin'),
('Bermuda'),
('Bhutan'),
('Bolivia'),
('Bosnia and Herzegovina'),
('Botswana'),
('Bouvet Island'),
('Brazil'),
('British Indian Ocean Territory'),
('Brunei Darussalam'),
('Bulgaria'),
('Burkina Faso'),
('Burundi'),
('Cambodia'),
('Cameroon'),
('Canada'),
('Cape Verde'),
('Cayman Islands'),
('Central African Republic'),
('Chad'),
('Chile'),
('China'),
('Christmas Island'),
('Cocos (Keeling) Islands'),
('Colombia'),
('Comoros'),
('Congo'),
('Congo, The Democratic Republic of The'),
('Cook Islands'),
('Costa Rica'),
('Cote D''ivoire'),
('Croatia'),
('Cuba'),
('Cyprus'),
('Czech Republic'),
('Denmark'),
('Djibouti'),
('Dominica'),
('Dominican Republic'),
('Easter Island'),
('Ecuador'),
('Egypt'),
('El Salvador'),
('Equatorial Guinea'),
('Eritrea'),
('Estonia'),
('Ethiopia'),
('Falkland Islands (Malvinas)'),
('Faroe Islands'),
('Fiji'),
('Finland'),
('France'),
('French Guiana'),
('French Polynesia'),
('French Southern Territories'),
('Gabon'),
('Gambia'),
('Georgia'),
('Germany'),
('Ghana'),
('Gibraltar'),
('Greece'),
('Greenland'),
('Grenada'),
('Guadeloupe'),
('Guam'),
('Guatemala'),
('Guinea'),
('Guinea-bissau'),
('Guyana'),
('Haiti'),
('Heard Island and Mcdonald Islands'),
('Honduras'),
('Hong Kong'),
('Hungary'),
('Iceland'),
('India'),
('Indonesia'),
('Iran'),
('Iraq'),
('Ireland'),
('Israel'),
('Italy'),
('Jamaica'),
('Japan'),
('Jordan'),
('Kazakhstan'),
('Kenya'),
('Kiribati'),
('Korea, North'),
('Kosovo'),
('Kuwait'),
('Kyrgyzstan'),
('Laos'),
('Latvia'),
('Lebanon'),
('Lesotho'),
('Liberia'),
('Libyan Arab Jamahiriya'),
('Liechtenstein'),
('Lithuania'),
('Luxembourg'),
('Macau'),
('Macedonia'),
('Madagascar'),
('Malawi'),
('Malaysia'),
('Maldives'),
('Mali'),
('Malta'),
('Marshall Islands'),
('Martinique'),
('Mauritania'),
('Mauritius'),
('Mayotte'),
('Mexico'),
('Micronesia, Federated States of'),
('Moldova, Republic of'),
('Monaco'),
('Mongolia'),
('Montenegro'),
('Montserrat'),
('Morocco'),
('Mozambique'),
('Myanmar'),
('Namibia'),
('Nauru'),
('Nepal'),
('Netherlands'),
('Netherlands Antilles'),
('New Caledonia'),
('New Zealand'),
('Nicaragua'),
('Niger'),
('Nigeria'),
('Niue'),
('Norfolk Island'),
('Northern Mariana Islands'),
('Norway'),
('Oman'),
('Pakistan'),
('Palau'),
('Palestinian Territory'),
('Panama'),
('Papua New Guinea'),
('Paraguay'),
('Peru'),
('Philippines'),
('Pitcairn'),
('Poland'),
('Portugal'),
('Puerto Rico'),
('Qatar'),
('Reunion'),
('Romania'),
('Russia'),
('Rwanda'),
('Saint Helena'),
('Saint Kitts and Nevis'),
('Saint Lucia'),
('Saint Pierre and Miquelon'),
('Saint Vincent and The Grenadines'),
('Samoa'),
('San Marino'),
('Sao Tome and Principe'),
('Saudi Arabia'),
('Senegal'),
('Serbia and Montenegro'),
('Seychelles'),
('Sierra Leone'),
('Singapore'),
('Slovakia'),
('Slovenia'),
('Solomon Islands'),
('Somalia'),
('South Africa'),
('South Georgia and The South Sandwich Islands'),
('Spain'),
('Sri Lanka'),
('Sudan'),
('Suriname'),
('Svalbard and Jan Mayen'),
('Swaziland'),
('Sweden'),
('Switzerland'),
('Syria'),
('Taiwan'),
('Tajikistan'),
('Tanzania, United Republic of'),
('Thailand'),
('Timor-leste'),
('Togo'),
('Tokelau'),
('Tonga'),
('Trinidad and Tobago'),
('Tunisia'),
('Turkey'),
('Turkmenistan'),
('Turks and Caicos Islands'),
('Tuvalu'),
('Uganda'),
('Ukraine'),
('United Arab Emirates'),
('United Kingdom'),
('United States'),
('United States Minor Outlying Islands'),
('Uruguay'),
('Uzbekistan'),
('Vanuatu'),
('Vatican City'),
('Venezuela'),
('Vietnam'),
('Virgin Islands, British'),
('Virgin Islands, U.S.'),
('Wallis and Futuna'),
('Western Sahara'),
('Yemen'),
('Zambia'),
('Zimbabwe');


In Postgres, false sorts first, true sorts last

select n from (values(true),(false)) as x(n) order by n

n 
---
 f
 t
(2 rows)

So here's how to sort non-US last (which essentially sorts US first):

select country from countries order by country <> 'United States', country


If you want to use it on another RDBMS, use CASE WHEN yourConditionHere and sort on an integer:

select country from countries 
order by 
 case when country = 'United States' then 1 else 2 end
 ,country


If you want to sort two countries first, use CASE yourFieldNameHere:

select country from countries 
order by 
 case country 
 when 'United States' then 1 
 when 'Philippines' then 2
 else 3 
 end
 
 ,country
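
On Postgres itself, the same two-country ordering can also be written with the boolean trick instead of CASE; a small sketch:

select country from countries 
order by 
 country <> 'United States'   -- false sorts first, so United States comes first
 ,country <> 'Philippines'    -- then Philippines
 ,country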




Keyword search: Postgres sort order custom

Monday, December 20, 2010

ANY clause is superior to IN clause. At least on Postgresql :-)












Why is the ANY clause superior to the IN clause? Postgresql's ANY accepts an array, which in turn can be manipulated further.

To illustrate the point, you cannot do this with an IN expression:

select * from y 
where lower(email) IN 
    lower( ('ISteve.Jobs@Apple.com','Linus.Torvalds@Linux.com') );

Nor this:

select * from y 
where lower(email) 
    IN ( lower ('ISteve.Jobs@Apple.com','Linus.Torvalds@Linux.com') );



But with the ANY clause, you can manipulate the array before passing values to it, a trick you cannot do with an IN clause:

create function lower(t text[]) returns text[]
as
$$
select lower($1::text)::text[]
$$ language sql;
 
 
select * from y 
where lower(email) = 
    ANY( lower(ARRAY['ISteve.Jobs@Apple.com'
                     ,'Linus.Torvalds@Linux.com']) );

Sunday, December 19, 2010

Postgresql case-insensitive design and searching choice

First, what not to do: don't use ilike...

create table y
(
id serial not null,
email text not null unique
);

insert into y(email) 
values('iSteve.jobs@apple.com') ,('linus.Torvalds@linUX.com');
insert into y(email) 
select n from generate_series(1,1000) as i(n);
create index ix_y on y(email);

explain select * from y 
where email ilike 
    ANY(ARRAY['ISteve.Jobs@Apple.com','Linus.Torvalds@Linux.com']);

..., as that would result in a sequential scan:

memdb=# explain select * from y where email ilike ANY(ARRAY['ISteve.Jobs@Apple.com','Linus.Torvalds@Linux.com']);
                                       QUERY PLAN                                       
----------------------------------------------------------------------------------------
 Seq Scan on y  (cost=0.00..17.52 rows=1 width=7)
   Filter: (email ~~* ANY ('{ISteve.Jobs@Apple.com,Linus.Torvalds@Linux.com}'::text[]))
(2 rows)


Either you create an index on a lower expression...

create function lower(t text[]) returns text[]
as
$$
select lower($1::text)::text[]
$$ language sql;

create unique index ix_y_2 on y(lower(email));

explain select * from y 
where lower(email) = 
    ANY(lower(ARRAY['ISteve.Jobs@Apple.com','Linus.Torvalds@Linux.com']));

...which properly uses index:
memdb=# explain select * from y where lower(email) = ANY(lower(ARRAY['ISteve.Jobs@Apple.com','Linus.Torvalds@Linux.com']));
                                                           QUERY PLAN                                                           
--------------------------------------------------------------------------------------------------------------------------------
 Bitmap Heap Scan on y  (cost=22.60..27.98 rows=10 width=7)
   Recheck Cond: (lower(email) = ANY ((lower(('{ISteve.Jobs@Apple.com,Linus.Torvalds@Linux.com}'::text[])::text))::text[]))
   ->  Bitmap Index Scan on ix_y_2  (cost=0.00..22.60 rows=10 width=0)
         Index Cond: (lower(email) = ANY ((lower(('{ISteve.Jobs@Apple.com,Linus.Torvalds@Linux.com}'::text[])::text))::text[]))
(4 rows)


Or you use citext data type...

create table x
(
id serial not null,
email citext not null unique
);

insert into x(email) 
values('iSteve.jobs@apple.com'),('linus.Torvalds@linUX.com');
insert into x(email) 
select n from generate_series(1,1000) as i(n);
create index ix_x on x(email);

explain select * from x 
where email = 
ANY(ARRAY['ISteve.Jobs@Apple.com','Linus.Torvalds@Linux.com']::citext[]);

...which properly uses the index even if you don't create an index on an expression (e.g. create index zzz on yyy(lower(field))):

memdb=# explain select * from x where email = ANY(ARRAY['ISteve.Jobs@Apple.com','Linus.Torvalds@Linux.com']::citext[]);
                                            QUERY PLAN                                            
--------------------------------------------------------------------------------------------------
 Bitmap Heap Scan on x  (cost=8.52..12.75 rows=2 width=7)
   Recheck Cond: (email = ANY ('{ISteve.Jobs@Apple.com,Linus.Torvalds@Linux.com}'::citext[]))
   ->  Bitmap Index Scan on ix_x  (cost=0.00..8.52 rows=2 width=0)
         Index Cond: (email = ANY ('{ISteve.Jobs@Apple.com,Linus.Torvalds@Linux.com}'::citext[]))
(4 rows)

Happy Indexing!

Finding previous day of the week




This question on Stack Overflow...

In PostgreSQL 8.4, given a date, if that date is not a Friday, I would like to find the date of the previous Friday. Can someone tell me if there is an inbuilt function or give the logic behind getting my own function.

...has an accepted answer of:

SELECT 
    CASE 
-- 1. if Friday, return date
    WHEN EXTRACT(DOW FROM my_date) = 5 
    THEN my_date
-- 2. if Saturday, subtract 1
    WHEN EXTRACT(DOW FROM my_date) = 6 
    THEN my_date - INTERVAL '1 day'
-- 3. all other days of the week, subtract `DOW + 2` from my_date
    -- should be ELSE for future-proofing ;-) MB
    ELSE -- WHEN EXTRACT(DOW FROM my_date) < 5 THEN
        my_date - ((EXTRACT(DOW FROM my_date) + 2)::TEXT||'days')::INTERVAL
    END AS tgif
FROM 
    my_table
WHERE 
    my_date IS NOT NULL

That query is hard to modify if the requirement changes. What's wrong with it? For one, there is special handling of a corner case, i.e. 6, for Saturday. Second, there's a magic number in that query: the number 2. And there is complication when the requirement changes: if you change the parameter to Thursday, what needs to change? The obvious changes are the 6 and 5, which become 5 and 4 respectively; the not-so-obvious one is the number 2. Where is that number derived from? Should we increment it, or decrement it? Though in the SOer's defense, one cannot introduce a variable outside of a function in Postgresql.

So, to make the intent clearer, here's his routine wrapped in a function:

create function recent_day(_dow int, out src text, out dst text) returns setof record
as
$$

SELECT 
 to_char(your_date, 'Mon dd yyyy dy'),

 to_char(
     CASE 
 -- 1. if Friday, return date
     WHEN EXTRACT(DOW FROM your_date) = $1
     THEN your_date
 -- 2. if Saturday, subtract 1
     WHEN EXTRACT(DOW FROM your_date) =  $1 + 1 
     THEN your_date - INTERVAL '1 day'
 -- 3. all other days of the week, subtract `DOW + 2` from your_date
     ELSE -- WHEN EXTRACT(DOW FROM your_date) < 4 THEN 
  your_date - ((EXTRACT(DOW FROM your_date) + (7 - $1))::TEXT||'days')::INTERVAL
     END,
  'Mon dd yyyy dy')
      AS tgif

from
(
     select 'Nov 5 2010'::date + x.n as your_date
     from generate_series(0,17) as x(n)
) as x
$$ language 'sql';


Here's the output of that function:
memdb=# select * from recent_day(5);
       src       |       dst       
-----------------+-----------------
 Nov 05 2010 fri | Nov 05 2010 fri
 Nov 06 2010 sat | Nov 05 2010 fri
 Nov 07 2010 sun | Nov 05 2010 fri
 Nov 08 2010 mon | Nov 05 2010 fri
 Nov 09 2010 tue | Nov 05 2010 fri
 Nov 10 2010 wed | Nov 05 2010 fri
 Nov 11 2010 thu | Nov 05 2010 fri
 Nov 12 2010 fri | Nov 12 2010 fri
 Nov 13 2010 sat | Nov 12 2010 fri
 Nov 14 2010 sun | Nov 12 2010 fri
 Nov 15 2010 mon | Nov 12 2010 fri
 Nov 16 2010 tue | Nov 12 2010 fri
 Nov 17 2010 wed | Nov 12 2010 fri
 Nov 18 2010 thu | Nov 12 2010 fri
 Nov 19 2010 fri | Nov 19 2010 fri
 Nov 20 2010 sat | Nov 19 2010 fri
 Nov 21 2010 sun | Nov 19 2010 fri
 Nov 22 2010 mon | Nov 19 2010 fri
(18 rows)

So it's correct :-)

Now let's try Thursday :-)

memdb=# select * from recent_day(4);
       src       |       dst       
-----------------+-----------------
 Nov 05 2010 fri | Nov 04 2010 thu
 Nov 06 2010 sat | Oct 28 2010 thu
 Nov 07 2010 sun | Nov 04 2010 thu
 Nov 08 2010 mon | Nov 04 2010 thu
 Nov 09 2010 tue | Nov 04 2010 thu
 Nov 10 2010 wed | Nov 04 2010 thu
 Nov 11 2010 thu | Nov 11 2010 thu
 Nov 12 2010 fri | Nov 11 2010 thu
 Nov 13 2010 sat | Nov 04 2010 thu
 Nov 14 2010 sun | Nov 11 2010 thu
 Nov 15 2010 mon | Nov 11 2010 thu
 Nov 16 2010 tue | Nov 11 2010 thu
 Nov 17 2010 wed | Nov 11 2010 thu
 Nov 18 2010 thu | Nov 18 2010 thu
 Nov 19 2010 fri | Nov 18 2010 thu
 Nov 20 2010 sat | Nov 11 2010 thu
 Nov 21 2010 sun | Nov 18 2010 thu
 Nov 22 2010 mon | Nov 18 2010 thu
(18 rows)

Uh oh, something's wrong. Look at Nov 20's recent Thursday: it should be November 18, yet his query gives November 11. Lesson of the day: when you handle corner cases specially, sometimes the code handles only the corner cases well ;-)

This is my solution (though at the time it was unfortunately not the accepted answer):

create or replace function recent_day_mb(_dow int, out src text, out dst text) returns setof record
as
$$
select
 to_char(ds, 'Mon dd yyyy dy'), 

 to_char(

  ds - (extract(dow from ds) - $1)::int +

   case when extract(dow from ds) - $1 < 0 then -7 
   else 0 end

  ,'Mon dd yyyy dy') as recent_friday



from
(
     select 'Nov 5 2010'::date + x.n as ds
     from generate_series(0,17) as x(n)
) as x
$$ language 'sql';


Let's try Friday

memdb=# select * from recent_day_mb(5);
       src       |       dst       
-----------------+-----------------
 Nov 05 2010 fri | Nov 05 2010 fri
 Nov 06 2010 sat | Nov 05 2010 fri
 Nov 07 2010 sun | Nov 05 2010 fri
 Nov 08 2010 mon | Nov 05 2010 fri
 Nov 09 2010 tue | Nov 05 2010 fri
 Nov 10 2010 wed | Nov 05 2010 fri
 Nov 11 2010 thu | Nov 05 2010 fri
 Nov 12 2010 fri | Nov 12 2010 fri
 Nov 13 2010 sat | Nov 12 2010 fri
 Nov 14 2010 sun | Nov 12 2010 fri
 Nov 15 2010 mon | Nov 12 2010 fri
 Nov 16 2010 tue | Nov 12 2010 fri
 Nov 17 2010 wed | Nov 12 2010 fri
 Nov 18 2010 thu | Nov 12 2010 fri
 Nov 19 2010 fri | Nov 19 2010 fri
 Nov 20 2010 sat | Nov 19 2010 fri
 Nov 21 2010 sun | Nov 19 2010 fri
 Nov 22 2010 mon | Nov 19 2010 fri
(18 rows)

It's correct

Now let's try Thursday:

memdb=# select * from recent_day_mb(4);
       src       |       dst       
-----------------+-----------------
 Nov 05 2010 fri | Nov 04 2010 thu
 Nov 06 2010 sat | Nov 04 2010 thu
 Nov 07 2010 sun | Nov 04 2010 thu
 Nov 08 2010 mon | Nov 04 2010 thu
 Nov 09 2010 tue | Nov 04 2010 thu
 Nov 10 2010 wed | Nov 04 2010 thu
 Nov 11 2010 thu | Nov 11 2010 thu
 Nov 12 2010 fri | Nov 11 2010 thu
 Nov 13 2010 sat | Nov 11 2010 thu
 Nov 14 2010 sun | Nov 11 2010 thu
 Nov 15 2010 mon | Nov 11 2010 thu
 Nov 16 2010 tue | Nov 11 2010 thu
 Nov 17 2010 wed | Nov 11 2010 thu
 Nov 18 2010 thu | Nov 18 2010 thu
 Nov 19 2010 fri | Nov 18 2010 thu
 Nov 20 2010 sat | Nov 18 2010 thu
 Nov 21 2010 sun | Nov 18 2010 thu
 Nov 22 2010 mon | Nov 18 2010 thu
(18 rows)

See Nov 20: its recent Thursday is now November 18, not November 11. Works well :-)

Happy querying!


[EDIT]


My most succinct solution:
create or replace function previous_date_of_day(the_date date, dow int) returns date
as
$$
select 
    case when extract(dow from $1) < $2 then
        $1 - ( extract(dow from $1) + (7 - $2) )::int 
    else
        $1 - ( extract(dow from $1) - $2)::int
    end;
$$ language 'sql';


select to_char(z.ds, 'Mon dd yyyy dy') as source, 
       to_char( previous_date_of_day(z.ds, 5), 'Mon dd yyyy dy') as dest
from
(
     select 'Nov 5 2010'::date + x.n as ds
     from generate_series(0,17) as x(n)
) as z
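
And a couple of spot checks covering both branches of the CASE (expected results per the recent_day_mb output above):

select previous_date_of_day('Nov 20 2010'::date, 4) as recent_thursday,  -- Nov 18 2010 (Saturday, dow 6 >= 4)
       previous_date_of_day('Nov 17 2010'::date, 5) as recent_friday;    -- Nov 12 2010 (Wednesday, dow 3 < 5)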


[EDIT]

The asker (Rabin) has since changed the accepted answer to mine. Thanks, Rabin, for acknowledging the code's correctness! :-)

[EDIT 2010-12-04]

Here are the other related functions; the previous_date_of_day function in this article is simplified to date_previous_dow. Check here: http://www.ienablemuch.com/2010/12/date-functions.html

Saturday, December 18, 2010

Intelligent brownfield mapping system on Fluent NHibernate




The code below tackles the flaw found by Stefan Steinegger when mapping multiple references. It's working robustly now; compared to vanilla FNH, the FNH.BF mapper won't let silent errors creep in, e.g. it won't let you silently misconfigure ambiguous references: it fails fast when there's ambiguity.

The problem being solved by the brownfield mapping system: http://www.ienablemuch.com/2010/12/brownfield-system-problem-on-fluent.html


using System;

using System.Collections;
using System.Collections.Generic;
using System.Linq;
using System.Reflection;
using System.Text.RegularExpressions;

using NHibernate;
using NHibernate.Dialect;

using FluentNHibernate.Cfg;
using FluentNHibernate.Cfg.Db;
using FluentNHibernate.Conventions;
using FluentNHibernate.Conventions.Instances;
using FluentNHibernate.Conventions.Helpers;
using FluentNHibernate.Mapping;
using FluentNHibernate.Mapping.Providers;


namespace FluentNHibernate.BrownfieldSystem
{
    public class ClassMapExt<T> : ClassMap<T>
    {
        public IList<IManyToOneMappingProvider> ExtReferences { get { return this.references; } }
        public IList<ICollectionMappingProvider> ExtCollections { get { return this.collections; } }
    }

    public static class BrownfieldSystemHelper
    {


        public static T AddBrownfieldConventions<T>(this SetupConventionFinder<T> fluentMappingsContainer, string referenceSuffix, params IConvention[] otherConventions)
        {
            return fluentMappingsContainer.AddBrownfieldConventions(referenceSuffix, false, otherConventions);
        }

        public static T AddBrownfieldConventions<T>(this SetupConventionFinder<T> fluentMappingsContainer, string referenceSuffix, bool toLowercase, params IConvention[] otherConventions)
        {

            IList<IConvention> brown =
                new IConvention[]
                {
                    Table.Is(x => x.EntityType.Name.ToLowercaseNamingConvention(toLowercase))
                    ,ConventionBuilder.Property.Always(x => x.Column(x.Name.ToLowercaseNamingConvention(toLowercase)))
                    ,ConventionBuilder.Id.Always( x => x.Column(x.Name.ToLowercaseNamingConvention(toLowercase)) )        
                    ,ConventionBuilder.HasMany.Always(x => x.Key.Column( x.NormalizeReference().ToLowercaseNamingConvention(toLowercase) + referenceSuffix )  )
                
                    // Instead of this...
                    // ,ForeignKey.EndsWith(referenceSuffix)                                
                    // ... we do this, so we have direct control on Reference name's casing:
                    ,ConventionBuilder.Reference.Always(x => x.Column( x.Name.ToLowercaseNamingConvention(toLowercase) + referenceSuffix ) )
                
                };

            fluentMappingsContainer.Add(brown.ToArray());

            return fluentMappingsContainer.Add(otherConventions);
        }


        public static string ToLowercaseNamingConvention(this string s)
        {
            return s.ToLowercaseNamingConvention(true);
        }

        public static string ToLowercaseNamingConvention(this string s, bool toLowercase)
        {
            if (toLowercase)
            {
                var r = new Regex(@"
                (?<=[A-Z])(?=[A-Z][a-z]) |
                 (?<=[^A-Z])(?=[A-Z]) |
                 (?<=[A-Za-z])(?=[^A-Za-z])", RegexOptions.IgnorePatternWhitespace);

                return r.Replace(s, "_").ToLower();
            }
            else
                return s;
        }


        public static string NormalizeReference(this IOneToManyCollectionInstance x)
        {
            

            
            string cannotDeduceReferenceFromValue = "Ambiguous references found. Do explicit column mapping on both end of the objects";
            string cannotDeduceCollectionFromEntity = "Ambiguous collection found. Do explicit column mapping on both end of the objects";

            string parentKeyfield = "";

            bool needExplicitness = false;
            // Find ambiguous in parent
            {
                // string defaultKeyName = x.EntityType.Name + "_id"; // gleaned from FNH's source code


                var parentType = x.EntityType; // e.g. Person

                // Find ClassMapExt of the parentType(e.g. Person)
                var parent = (from r in x.ChildType.Assembly.GetTypes()
                              where r.BaseType.IsGenericType
                                    && r.BaseType.GetGenericTypeDefinition() == typeof(ClassMapExt<>)
                                    && r.BaseType.GetGenericArguments()[0] == parentType
                              select r).Single(); // there is only one class mapping for any objects.

                var parentInstance = Activator.CreateInstance(parent);
                var parentCollectionsOfChildType = 
                            from cr in ((IList<ICollectionMappingProvider>)parent.InvokeMember("ExtCollections", BindingFlags.GetProperty, null, parentInstance, null))
                            where cr.GetCollectionMapping().ChildType == x.ChildType
                            select cr;


                if (parentCollectionsOfChildType.Count() == 1)
                    parentKeyfield = parentCollectionsOfChildType.Single().GetCollectionMapping().Key.Columns.Single().Name;
                else
                {
                    // example: Contacts.  must match one parentCollectionsOfChildType only
                    parentKeyfield = parentCollectionsOfChildType.Where(y => y.GetCollectionMapping().Member.Name == x.Member.Name)
                                    .Single().GetCollectionMapping().Key.Columns.Single().Name;
                }

 
                bool hasAmbigousCollection =
                        parentCollectionsOfChildType.Count() > 1
                        &&
                        parentCollectionsOfChildType.Any(z => !z.GetCollectionMapping().Key.Columns.HasUserDefined());




                if (hasAmbigousCollection)
                    throw new Exception(cannotDeduceCollectionFromEntity);

                needExplicitness = parentCollectionsOfChildType.Any(z => z.GetCollectionMapping().Key.Columns.HasUserDefined());
            }

            

            // Find ambiguous in children
            {

                // Find ClassMapExt of the x.ChildType(e.g. Contact)
                var child = (from r in x.ChildType.Assembly.GetTypes()
                             where r.BaseType.IsGenericType
                                   && r.BaseType.GetGenericTypeDefinition() == typeof(ClassMapExt<>)
                                   && r.BaseType.GetGenericArguments()[0] == x.ChildType  // Contact
                             select r).Single();



                

                var childInstance = Activator.CreateInstance(child); // ContactMapExt                                        

                

                /*
                 * 
                 * References(x => x.Owner)
                 * the Owner's property type is: Person                
                 * can be obtained from:
                 *      cr.GetManyToOneMapping().Member.PropertyType 
                 * 
                 * x.EntityType is: Person
                 * 
                 * */

                var childReferences =
                                    from cr in ((IList<IManyToOneMappingProvider>)child.InvokeMember("ExtReferences", BindingFlags.GetProperty, null, childInstance, null))
                                    where cr.GetManyToOneMapping().Member.PropertyType == x.EntityType
                                    select cr;
                                      

             

                /*
                 if you do in Classmap: References(x => x.Owner).Column("Apple")
                  
                        y.GetManyToOneMapping().Columns.Single().Name == "Apple"
                 
                 if you do in Classmap: References(x => x.Owner)
                  
                        y.GetManyToOneMapping().Columns.Single().Name == "Owner_id"
                 
                 in both cases:
                    
                        y.GetManyToOneMapping().Name == "Owner"
                 */


                //// return string.Join( "$", childReferences.Select(zz => "@" + zz.GetManyToOneMapping().Name + " " + zz.GetManyToOneMapping().Columns.Single().Name + "!" ).ToList().ToArray() );



                if (needExplicitness)
                {
                    // all not defined
                    if (childReferences.All(y => !y.GetManyToOneMapping().Columns.HasUserDefined()))
                    {
                        throw new Exception(
                            string.Format("Explicitness needed on both ends. {0}'s {1} has no corresponding explicit Reference on {2}",
                            x.EntityType.Name, x.Member.Name, x.ChildType.Name));
                    }// all not defined
                    else
                    {
                        var isParentKeyExistingInChildObject = childReferences.Any(z => z.GetManyToOneMapping().Columns.Single().Name == parentKeyfield);

                        if (!isParentKeyExistingInChildObject)
                        {
                            if (childReferences.Count() == 1)
                            {
                                string userDefinedKey = childReferences.Single().GetManyToOneMapping().Columns.Single().Name;
                                throw new Exception(
                                        string.Format(
                                            "Child object {0} doesn't match its key name to parent object {1}'s {2}. Child Key: {3} Parent Key: {4}",
                                            x.ChildType.Name, x.EntityType.Name, x.Member.Name, userDefinedKey, parentKeyfield)
                                            );
                            }
                            else
                            {
                                throw new Exception(
                                        string.Format(
                                            "Child object {0} doesn't match any key to parent object {1}'s {2}. Parent Key: {3}",
                                             x.ChildType.Name, x.EntityType.Name, x.Member.Name, parentKeyfield));
                            }
                        }//if
                        else
                        {
                            return parentKeyfield;
                        }
                    }//if at least one defined
                }// if needExplicitness
                else
                {
                    bool hasUserDefined = childReferences.Count() == 1 && childReferences.Any(y => y.GetManyToOneMapping().Columns.HasUserDefined());

                    if (hasUserDefined)
                    {
                        throw new Exception(
                                string.Format("Child object {0} has explicit Reference while the parent object {1} has none. Do explicit column mapping on both ends",
                                            x.ChildType.Name, x.EntityType.Name));
                    }
                }

                bool hasAmbiguousReference =
                    ( childReferences.Count() > 1 && childReferences.Any(y => !y.GetManyToOneMapping().Columns.HasUserDefined()) )
                    
                    
                    ||
                    
                    
                    ( !needExplicitness && childReferences.Any(y => y.GetManyToOneMapping().Columns.HasUserDefined()) );


                if (hasAmbiguousReference)
                    throw new Exception(cannotDeduceReferenceFromValue);


                return childReferences.Single().GetManyToOneMapping().Name;
            }


            return "";

        }//Normalize


    }// class BrownfieldSystemHelper
}// namespace FluentNHibernate.BrownfieldSystem

Thursday, December 16, 2010

Postgresql dec to hex

test=# select to_hex(12648430) as favorite_drink, to_hex(3405697037) as person_type;
 favorite_drink | person_type
----------------+-------------
 c0ffee         | cafed00d
(1 row)

Brownfield system problem on Fluent NHibernate. And solution

If you have these tables:

create table Person
(
Id int identity(1,1) not null primary key,
PersonName varchar(255) not null,
BirthDate Date not null
);

create table Contact
(
OwnerId int not null references Person(Id),
Id int identity(1,1) not null primary key,
Number varchar(50) not null,
Type varchar(20) not null
);

Here's the mapping:
public class Person
    {
        public virtual int Id { get; set; }
        
        public virtual string PersonName { get; set; }
        public virtual DateTime Birthdate { get; set; }        
                    
        public virtual IList<Contact> Contacts { get; set; }
    }
    

    public class Contact
    {
        public virtual Person Owner { get; set; }
                        
        public virtual int Id { get; set; }        
        public virtual string Number { get; set; }        
        public virtual string Type { get; set; }


        
        
    }
    
    public class PersonMap : ClassMap <Person>
    {
        public PersonMap()
        {        
    
            Id(x => x.Id);
            Map(x => x.PersonName).Not.Nullable();
            Map(x => x.Birthdate).Not.Nullable();
            HasMany(x => x.Contacts).Inverse(); 
        }
    }


  
    public class ContactMap : ClassMap <Contact>
    {
        public ContactMap()
        {
            References(x => x.Owner);
            Id(x => x.Id);
            Map(x => x.Number).Not.Nullable();
            Map(x => x.Type).Not.Nullable();
        }
        
    }   

This code...

ISessionFactory sf = 
        Fluently.Configure()
            .Database(MsSqlConfiguration.MsSql2008.ConnectionString(@"Data Source=.\SQLExpress;Initial Catalog=test3;Trusted_Connection=true"))
            .Mappings(m =>
                m.FluentMappings.AddFromAssemblyOf<MainClass>()
                .Conventions.Add(ForeignKey.EndsWith("Id"))                        
                )
            .BuildConfiguration()
            .BuildSessionFactory();

    var px = from p in sf.OpenSession().Query<Person>()
                select p;
         


    foreach (var p in px)
    {
        Console.WriteLine("{0} {1}", p.PersonName, p.Birthdate);

        foreach (var c in p.Contacts)
        {
            Console.WriteLine("{0}", c.Number);
        }
    }

...will produce an incorrect query on foreach(var c in p.Contacts) (note the extraneous PersonId):

SELECT 
contacts0_.Person_id as Person5_1_, 
contacts0_.Id as Id1_, 
contacts0_.Id as Id2_0_, 
contacts0_.Number as Number2_0_, 
contacts0_.Type as Type2_0_, 
contacts0_.OwnerId as OwnerId2_0_ 
FROM [Contact] contacts0_ 
WHERE contacts0_.PersonId=?

Even if you add this:

  
.Conventions.Add(
    ConventionBuilder.Reference
      .Always(x => x.Column(x.Name+ "Id" )))
  


The only way to rectify this problem is to set up the joining key (e.g. OwnerId) on both ends. That is, we must set it up in both the HasMany.Always and Reference.Always conventions.
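
Concretely, for this particular Person/Contact pair, the fix amounts to making both conventions agree on OwnerId. A minimal hard-coded sketch (not general, since it pins every HasMany key to OwnerId; the full solution later derives the name instead):

.Conventions.Add(
    // key column used when Person loads its Contacts collection
    ConventionBuilder.HasMany.Always(x => x.Key.Column("OwnerId")),
    // key column used by Contact's References(x => x.Owner)
    ConventionBuilder.Reference.Always(x => x.Column(x.Name + "Id")))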

But first, we must design a nice API around this problem; this is what I came up with:
.Conventions.AddBrownfieldConventions(referenceSuffix: "Id");


And that works! :-) When you run the Linq code above, it will now work. Here's the code:

using System;

using System.Collections.Generic;
using System.Text.RegularExpressions;

using NHibernate;
using NHibernate.Dialect;

using FluentNHibernate.Cfg;
using FluentNHibernate.Cfg.Db;
using FluentNHibernate.Conventions;
using FluentNHibernate.Conventions.Helpers;
using FluentNHibernate.Mapping;
using System.Reflection;
using System.Collections;
using FluentNHibernate.Mapping.Providers;


using System.Linq;
using FluentNHibernate.BrownfieldSystem;
using FluentNHibernate.Conventions.Instances;

namespace FluentNHibernate.BrownfieldSystem
{
    public class ClassMapExt<T> : ClassMap<T>
    {
        public IList<IManyToOneMappingProvider> ExtReferences { get { return this.references; } }
    }

    public static class BrownfieldSystemHelper
    {


        public static T AddBrownfieldConventions<T>(this SetupConventionFinder<T> fluentMappingsContainer, string referenceSuffix, params IConvention[] otherConventions)
        {
            return fluentMappingsContainer.AddBrownfieldConventions(referenceSuffix, false, otherConventions);
        }

        public static T AddBrownfieldConventions<T>(this SetupConventionFinder<T> fluentMappingsContainer, string referenceSuffix, bool toLowercase, params IConvention[] otherConventions)
        {

            IList<IConvention> brown =
                new IConvention[]
                {
                    Table.Is(x => x.EntityType.Name.ToLowercaseNamingConvention(toLowercase))
                    ,ConventionBuilder.Property.Always(x => x.Column(x.Name.ToLowercaseNamingConvention(toLowercase)))
                    ,ConventionBuilder.Id.Always( x => x.Column(x.Name.ToLowercaseNamingConvention(toLowercase)) )        
                    ,ConventionBuilder.HasMany.Always(x => x.Key.Column( x.NormalizeReference().ToLowercaseNamingConvention(toLowercase) + referenceSuffix )  )
                
                    // Instead of this...
                    // ,ForeignKey.EndsWith(referenceSuffix)                                
                    // ... we do this, so we have direct control on Reference name's casing:
                    ,ConventionBuilder.Reference.Always(x => x.Column( x.Name.ToLowercaseNamingConvention(toLowercase) + referenceSuffix ) )
                
                };

            foreach (IConvention c in brown)
                fluentMappingsContainer.Add(c);

            return fluentMappingsContainer.Add(otherConventions);
        }


        public static string ToLowercaseNamingConvention(this string s)
        {
            return s.ToLowercaseNamingConvention(false);
        }

        public static string ToLowercaseNamingConvention(this string s, bool toLowercase)
        {
            if (toLowercase)
            {
                var r = new Regex(@"
                (?<=[A-Z])(?=[A-Z][a-z]) |
                 (?<=[^A-Z])(?=[A-Z]) |
                 (?<=[A-Za-z])(?=[^A-Za-z])", RegexOptions.IgnorePatternWhitespace);

                return r.Replace(s, "_").ToLower();
            }
            else
                return s;
        }


        public static string NormalizeReference(this IOneToManyCollectionInstance x)
        {
            foreach (Type t in x.ChildType.Assembly.GetTypes())
            {
                if (t.BaseType.IsGenericType)
                    if (t.BaseType.GetGenericTypeDefinition() == typeof(ClassMapExt<>))
                        if (t.BaseType.GetGenericArguments()[0] == x.ChildType) // e.g.Contact
                        {

                            var pz = Activator.CreateInstance(t); // ContactMapExt                                        
                            var extRef = (IList<IManyToOneMappingProvider>)t.InvokeMember("ExtReferences", BindingFlags.GetProperty, null, pz, null);

                            if (extRef.Count > 1) 
                                throw new Exception("Ambiguous collection found. Do explicit column mapping on both entity and value table");
                            foreach (IManyToOneMappingProvider imt1 in extRef)
                                return imt1.GetManyToOneMapping().Member.Name;
                        }
            }

            return "";

        }//Normalize


    }// class BrownfieldSystemHelper
}// namespace FluentNHibernate.BrownfieldSystem

What you need to do in your program is change all brownfield tables' ClassMap to ClassMapExt. ClassMapExt exposes this.references (which by James Gregory's design is protected on ClassMap; references holds the data structure for the many-to-one mappings), so we can use it in ConventionBuilder.HasMany.Always.

To verify that our code is indeed working, note that this Fluently configuration (sans the brownfield conventions) does not output the correct DDL...
string sx = string.Join(";" + Environment.NewLine,
  Fluently.Configure()
  .Database(MsSqlConfiguration.MsSql2008)
  .Mappings(m => 
   m.FluentMappings
   .AddFromAssemblyOf<MainClass>()
   .Conventions.Add(ForeignKey.EndsWith("Id"))
   )
  .BuildConfiguration()
  .GenerateSchemaCreationScript(new MsSql2008Dialect())
  );


Console.WriteLine("Copy this: \n\n{0}", sx);


..., note the extraneous PersonId :

create table [Contact] 
(
Id INT IDENTITY NOT NULL, 
Number NVARCHAR(255) not null, 
Type NVARCHAR(255) not null, 
OwnerId INT null, 
PersonId INT null, 
primary key (Id)
);

create table [Person] 
(
Id INT IDENTITY NOT NULL, 
PersonName NVARCHAR(255) not null, 
Birthdate DATETIME not null, 
primary key (Id)
);

alter table [Contact] add constraint FK4FF8F4B2F8EC9C3E 
foreign key (OwnerId) references [Person];

alter table [Contact] add constraint FK4FF8F4B2141D8C21 
foreign key (PersonId) references [Person]


While this one, which adds the brownfield conventions...
string sx = string.Join(";" + Environment.NewLine,
        Fluently.Configure()
        .Database(MsSqlConfiguration.MsSql2008)
        .Mappings(m => 
            m.FluentMappings
            .AddFromAssemblyOf<MainClass>()
            .Conventions.AddBrownfieldConventions(referenceSuffix: "Id")                        
            )
        .BuildConfiguration()
        .GenerateSchemaCreationScript(new MsSql2008Dialect()));


Console.WriteLine("Copy this: \n\n{0}", sx);

...yields correct DDL (note that there's no more extraneous PersonId):

create table Contact 
(
Id INT IDENTITY NOT NULL, 
Number NVARCHAR(255) not null, 
Type NVARCHAR(255) not null, 
OwnerId INT null, 
primary key (Id)
);

create table Person 
(
Id INT IDENTITY NOT NULL, 
PersonName NVARCHAR(255) not null, 
Birthdate DATETIME not null, 
primary key (Id)
);

alter table Contact add constraint FK4FF8F4B2F8EC9C3E 
foreign key (OwnerId) references Person
 

And the correct query (note that it now uses OwnerId, with no more extraneous Person_id):

SELECT 
contacts0_.OwnerId as OwnerId1_, 
contacts0_.Id as Id1_, 
contacts0_.Id as Id3_0_, 
contacts0_.Number as Number3_0_, 
contacts0_.Type as Type3_0_, 
contacts0_.OwnerId as OwnerId3_0_ 
FROM Contact contacts0_ 
WHERE contacts0_.OwnerId=?

Happy Fluenting! :-)

EDIT: 2010-12-18

Program update on flaw found by Stefan Steinegger on multiple references: http://www.ienablemuch.com/2010/12/intelligent-brownfield-mapping-system.html

HTTP Error 500.19 - Internal Server Error

If you receive this error when configuring an application in IIS:


Error Summary
HTTP Error 500.19 - Internal Server Error
The requested page cannot be accessed because the related configuration data for the page is invalid.


Add IIS_IUSRS and IUSR to your application directory's permissions. Right-click the application in IIS, select Edit Permissions..., click the Security tab, then click the Edit... button; under Enter the object names to select, paste this: IIS_IUSRS; IUSR. Click OK, click OK, then refresh the erring page.


Here's the video: http://www.youtube.com/watch?v=v5VMGYlGfag

Postgresql Speeding Up Coalesce

Using this:

create table x as 
select * from generate_series(1,1000) as x(n);

insert into x values(null);

If we perform a query such as this one:

select * from x order by coalesce(n,0)

We will receive a Seq Scan query plan:
QUERY PLAN                         
-----------------------------------------------------------
 Sort  (cost=64.90..67.40 rows=1001 width=4)
   Sort Key: (COALESCE(n, 0))
   ->  Seq Scan on x  (cost=0.00..15.01 rows=1001 width=4)
(3 rows)

To optimize that coalesce, we create an index on the coalesce expression:
create index ix_x on x(coalesce(n,0));

And that will produce an Index Scan query plan:
QUERY PLAN                            
------------------------------------------------------------------
 Index Scan using ix_x on x  (cost=0.00..55.27 rows=1001 width=4)
(1 row)

But the problem with using a sentinel approach on that kind of coalesce is that if the sentinel changes, the database won't use the index you've created, so this...

select * from x order by coalesce(n,-1);

...produces a Seq Scan query plan again:

QUERY PLAN                         
-----------------------------------------------------------
 Sort  (cost=64.90..67.40 rows=1001 width=4)
   Sort Key: (COALESCE(n, (-1)))
   ->  Seq Scan on x  (cost=0.00..15.01 rows=1001 width=4)
(3 rows)

In that simple example, since Postgresql has first-class support for sorting nulls first, we could just index that expression directly. Let's first check the execution plan of the following query without the index...

select * from x order by n nulls first;


...that produces:


QUERY PLAN                         
-----------------------------------------------------------
 Sort  (cost=64.90..67.40 rows=1001 width=4)
   Sort Key: n
   ->  Seq Scan on x  (cost=0.00..15.01 rows=1001 width=4)
(3 rows)

Now let's create an index on x's n...

create index ix_x_y on x(n nulls first);

...then the last query (select * from x order by n nulls first;) will produce:
QUERY PLAN                             
--------------------------------------------------------------------
 Index Scan using ix_x_y on x  (cost=0.00..43.27 rows=1001 width=4)
(1 row)

That's neater than using COALESCE for sorting nulls.

Related: http://www.ienablemuch.com/2010/04/how-to-speed-up-order-by.html


Monday, December 13, 2010

NHibernate 3 + Fluent + Linq = One Stop Shop application development


Do you see yourself years from now, still doing repetitive coding, still trying to remember each and every attribute and format of not-so-compelling XML configuration files? Trying to remember each and every Attribute that needs to be attached to your class and its properties? Writing countless simple stored procedures? Neither do I.

We are lucky in our time that the confluence of many programming technologies enables us to write verifiable code, code that can be verified just by reading the, er.. code: no more XML, no more magic strings, no need to incessantly switch between environments/contexts, code that is refactoring-friendly. These technologies also spare us a lot of mental context-switching.

With the introduction of Linq in NHibernate 3 and the fluent, lambda-based mapping technique (courtesy of Fluent NHibernate), the future is here now. With these technologies, one-stop-shop programming with C# holds truer today than ever before; it's all in the source, Luke.

Another one-stop-shop facility of FNH is its ConventionBuilder (a topic we will cover in the next post). If you have repetitive mappings that follow some patterns/rules and it's a bummer to adhere to them manually on each and every mapping, you can place them in one centralized location, in ConventionBuilders; no more scattered mappings, no more manual typing. And what do Convention-based mappings have that Attribute-based ones (e.g. Linq to SQL) don't? The latter are not programmable and are scattered all over your code.


Following is the simplest code. We omit the repository pattern, yet the code maintains its 'real world'-ness; in particular, we have a prevention mechanism in place against accidentally building a SessionFactory twice. Here's the simplest implementation of NHibernate + Fluent NHibernate + Linq:

using System;
using System.Linq;

using NHibernate;
using NHibernate.Cfg;
using NHibernate.Linq;

/* 
* a bit amused :-) James Gregory has a good program organization,
* he doesn't stash the Fluent NHibernate core functionalities to FluentNHibernate namespace
*/
// using FluentNHibernate; 
using FluentNHibernate.Cfg;
using FluentNHibernate.Cfg.Db;
using FluentNHibernate.Mapping;
using FluentNHibernate.Conventions.Helpers;



namespace SimpleFnhLinq
{
    
    class MainClass
    {
        public static void Main (string[] args)
        {
            var sess = SessionFactoryBuilder.GetSessionFactory().OpenSession();
                        
            
            Action listProduct =  delegate
            {
                foreach(var p in sess.Query<Product>())
                {
                    Console.WriteLine("{0}, {1}, {2}, {3}", p.ProductID, p.Name, p.Category, p.Price);
                }
                    
            };
            

            
            
            Console.WriteLine("\nINSERT: ");
            var nh = new Product { Name = "NHibernate", Description = "Awesome", Category = "ORM", Price = 1234567 };
            sess.Save(nh); // buffer it in memory
            sess.Flush(); // actual database act
            listProduct();
            
            
            Console.WriteLine("\nUPDATE: ");
            nh.Name = "NHibernate+Linq";
            sess.Save(nh);
            sess.Flush();             
            listProduct();
            
            
                    
            Console.WriteLine("\nINSERT: ");
            sess.Save(new Product { Name = "Fluent NHibernate", Description = "Bee's Knees", Category = "Friend", Price = 8901234 } );
            sess.Flush();
            listProduct();
            
            
            
            // NHibernate 3 has Linq support
            var q = from p in sess.Query<Product>()
                    where p.Category == "ORM"
                    select p;
            
            
            Console.WriteLine("\n\nList of ORM: ");
            foreach(var p in q)
            {
                Console.WriteLine("{0}, {1}, {2}, {3}", p.ProductID, p.Name, p.Category, p.Price);
            }

            

            Console.WriteLine("\n\nRecords count: {0}", sess.Query<Product>().Count());
            Console.ReadLine();
        }
    }//MainClass


    #region POCO...

    public class Product 
    { 
        public virtual int ProductID { get; set; }
        public virtual string Name { get; set; }
        public virtual string Description { get; set; }
        public virtual decimal Price { get; set; }
        public virtual string Category { get; set; }
    }

    #endregion ...POCO


    #region Mappings...


    public class ProductMap : ClassMap<Product>
    {
        public ProductMap()
        {        
            Id(x => x.ProductID).GeneratedBy.HiLo("1");
            Map(x => x.Name);
            Map(x => x.Description);
            Map(x => x.Category);
            Map(x => x.Price);            
        }
    }
    
    #endregion ...Mappings



    public static class SessionFactoryBuilder
    {
        static ISessionFactory _sf = null;
        public static ISessionFactory GetSessionFactory()
        {
            // Building a SessionFactory is costly and should be done only once; keeping the backing field static prevents creating multiple factories


            if (_sf != null) return _sf;

            var configurer = new IPersistenceConfigurer[]
                {
                    PostgreSQLConfiguration.Standard
                        .ConnectionString("Server=localhost;Database=store;User ID=postgres;Password=opensesame;"),
                    MsSqlConfiguration.MsSql2008
                        .ConnectionString(@"Data Source=.\SQLExpress;Initial Catalog=store;Trusted_Connection=true")
                };

            var dialects = new NHibernate.Dialect.Dialect[]
            {
                new NHibernate.Dialect.PostgreSQLDialect(),
                new NHibernate.Dialect.MsSql2008Dialect()                
            };

            int n = 1; // Sql Server

            _sf = Fluently.Configure()
                .Database(configurer[n])
                .Mappings
                    (m =>
                        m.FluentMappings.AddFromAssemblyOf<MainClass>()
                        .Conventions.Add(ConventionBuilder.Class.Always(x => x.Table(x.TableName.ToLower())))  // Postgresql Compatibility
                    )
                /*.ExposeConfiguration
                    (x => 
                        { 
                        // If you don't want to create the script yourself, copy the string from here and paste it into your database admin tool
                            string ddl = string.Join("; ", x.GenerateSchemaCreationScript(dialects[n]));
                        Console.WriteLine( "{0}", ddl );
                        Console.ReadLine();
                        }
                    )*/
                .BuildSessionFactory();

            return _sf;
        }
    }//SessionFactoryBuilder
    
                
}//namespace
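A small aside on SessionFactoryBuilder: the null check above prevents accidentally building the factory twice, but it isn't thread-safe. If the factory might be requested from multiple threads, a Lazy<ISessionFactory> (available since .NET 4) gives the same build-once behavior; here's a minimal sketch, assuming the same Fluently configuration is moved into the build method:

public static class SessionFactoryBuilder
{
    // the default LazyThreadSafetyMode (ExecutionAndPublication) guarantees
    // BuildSessionFactory runs only once, even under concurrent first access
    static readonly Lazy<ISessionFactory> _sf = new Lazy<ISessionFactory>(BuildSessionFactory);

    public static ISessionFactory GetSessionFactory()
    {
        return _sf.Value; // built on first access, cached afterwards
    }

    static ISessionFactory BuildSessionFactory()
    {
        return Fluently.Configure()
            .Database(MsSqlConfiguration.MsSql2008
                .ConnectionString(@"Data Source=.\SQLExpress;Initial Catalog=store;Trusted_Connection=true"))
            .Mappings(m => m.FluentMappings.AddFromAssemblyOf<MainClass>())
            .BuildSessionFactory();
    }
}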

Here's the DDL for the sample above, compatible with both Postgresql and Sql Server:
CREATE TABLE Product
(
  ProductId int NOT NULL primary key,
  Name varchar(100) NOT NULL,
  Description varchar(500) NOT NULL,
  Category varchar(50) NOT NULL,
  Price numeric NOT NULL
);


CREATE TABLE hibernate_unique_key
(
  next_hi integer
);

insert into hibernate_unique_key values(1);

The components needed; add these to your project's References:

Saturday, December 11, 2010

Postgresql said, Sql Server(2008) said. Non-Unique Nulls, Unique Nulls

Postgres said:

create table x
(
name varchar(50) null unique
);

insert into x values('John');
insert into x values('Paul');
insert into x values(null); -- will allow this
insert into x values(null); -- will also allow this. Postgres allows duplicate nulls; it only indexes non-nulls. Compliant with the ANSI SQL specification

How to emulate that in Sql Server 2008?

create table x
(
name varchar(50) null -- don't put unique here
)


create unique index uk_x on x(name) where name is not null -- this index covers non-nulls only

insert into x values('John')
insert into x values('Paul')
insert into x values(null) -- will allow this
insert into x values(null) -- will also allow this


Sql Server 2008 said:

create table x
(
name varchar(50) null unique 
)


insert into x values('John')
insert into x values('Paul')
insert into x values(null)
insert into x values(null) -- this won't be allowed; Sql Server's unique column covers all values (non-nulls and nulls)


How to emulate that in Postgres?

create table x
(
name varchar(50) null unique -- keep this
);

-- Then we resolve all nulls to a single value, e.g. the number 1,
-- essentially preventing duplicate nulls
create unique index uk_x on x (cast(1 as int)) where name is null; 


insert into x values('John');
insert into x values('Paul');
insert into x values(null);
insert into x values(null); -- this won't be allowed



Related to: http://stackoverflow.com/questions/10468657/postgres-index-for-join-table/10468686#10468686

Exceptions to Rule #1 and Rule #2. One-to-one relationship on both ends of two tables

According to Rule-1-Rule-2, the following query falls under rule #2 (the first table's primary key appears in the second table's join condition). This means the second table is many-to-one to the first table; there are many countries to a person:

SELECT Country.CountryName, Leader.PersonName AS Leader
FROM Person Leader
JOIN Country ON Country.CurrentLeaderID = Leader.PersonID


How does this apply in the real world?

Let's take Hong Kong as an example. Imagine that the records of Hongkongers are not yet updated to China's id, so one cannot delete HK from the Country table; HK still has an entry in the Country table, so what one can do is just update HK's leader to China's leader. So there we have a case of two (or more) countries belonging to the same leader.

But what if we really want to enforce these conditions: (1) one leader to one country only, (2) one country to one leader only. How can we do that?

First, though at first glance we could put a LeadingCountryID on the Person table to satisfy the second condition, this would introduce a problem: with that solution a country could be governed by two leaders. The Person table would also be filled with nulls for those who are not leaders, which is a waste of I/O; and on Sql Server 2005 and below, a unique nullable column allows at most one row with a null in it (there's a work-around on Sql Server 2008 though: create a unique index with the nulls filtered out). Another solution is to put a unique constraint on Country's CurrentLeaderID and reverse the query:

SELECT Country.CountryName, Person.PersonName 
FROM Country
JOIN Person Leader ON Leader.PersonID = Country.CurrentLeaderID

Hmm.. it seems we are able to achieve our goal of stating the intent in code. It beautifully falls under rule #1.
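For completeness, here's roughly what that unique constraint looks like (just a sketch; this post doesn't show the Country DDL, so the table/column names are taken from the queries above):

ALTER TABLE Country
  ADD CONSTRAINT UQ_Country_CurrentLeaderID UNIQUE (CurrentLeaderID);
-- note: on Sql Server a plain unique constraint allows only one NULL CurrentLeaderID;
-- if many leaderless countries must be allowed, use a filtered unique index (WHERE CurrentLeaderID IS NOT NULL) instead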

However, that would have problems with entities that have circular references. Take an example: a Person is born in a Country, and a Country has a leader (a Person); see this article: http://www.ienablemuch.com/2010/12/do-not-make-circular-references-on.html

If you can be sure that your entities won't have circular references in the future, you can stop here.



.
.
.
.
.
.
.


Otherwise, the only neat solution is to remove the CurrentLeaderID from the Country table and make an association table:

CREATE TABLE CountryCurrentLeader
(
  dummy serial not null primary key,
  CountryID int NOT NULL unique references Country(CountryID),
  CurrentLeaderID int NOT NULL unique references Person(PersonID)
);


So does our design automatically fall under rule #1 if we use this query?

SELECT CountryCurrentLeader.CountryID, Person.PersonName 
FROM Person
JOIN CountryCurrentLeader 
ON CountryCurrentLeader.CurrentLeaderID = Person.PersonID



Apparently it doesn't: the first table's primary key (though not selected in the SELECT clause) does appear in the second table's join condition, so that's rule #2, which means the second table is many-to-one to the first table.



So how can we at least make it appear in the query that the new design (association table) conforms to the right rule (one-to-one on both ends), rule #1?

To achieve rule #1, we simply rearrange the query into this form...

SELECT CountryCurrentLeader.CountryID, Person.PersonName 
FROM CountryCurrentLeader
JOIN Person ON Person.PersonID = CountryCurrentLeader.CurrentLeaderID 


...so by using an association table, there won't be a primary key from the first table appearing in the second table's join condition. Conforms to rule #1, FTW!


Let's extend our query further to see if our two rules still hold water on complex queries:

SELECT CountryName.CountryName AS Country, Person.PersonName As LeaderName, LeaderBirthPlace.CountryName AS LeaderBirthPlace 
FROM CountryCurrentLeader -- 1
JOIN Person ON Person.PersonID = CountryCurrentLeader.CurrentLeaderID -- 2
JOIN Country CountryName ON CountryName.CountryID = CountryCurrentLeader.CountryID -- 3
JOIN Country LeaderBirthPlace ON LeaderBirthPlace.CountryID = Person.BirthCountryID -- 4

Let's dissect our query:

J1) Line 1 to Line 2: Rule #1 one-to-one. OK
J2) Line 1 to Line 3: Rule #1 one-to-one. OK
J3) Line 2 to Line 4: Rule #1 one-to-one. OK

So all is well: J1 and J2 safely conform to Rule #1 as long as we use an association table with a dummy primary key. J3 conventionally falls under Rule #1.


There's no foolproof way to deduce the intent of a query from just two rules. But we can at least follow the simplest existing guideline (it's the simplest; one has to remember only two rules) by re-arranging the query and/or revisiting the database design.

Rule #1 and Rule #2

A simple guide to deciphering a query's row relationships.

How can one know whether B is one-to-one to A, or many-to-one to A?

SELECT *
FROM A
JOIN B ON B.FieldNameHere = A.FieldNameHere

Two simple rules:

  1. If the first table's primary key doesn't appear in the second table's join condition, the second table is one-to-one to the first table.
  2. If the first table's primary key appears in the second table's join condition, the second table is many-to-one to the first table.



How does one know which field is the primary key?
They are easy to remember; there are only two (or three) variations of them, namely: Id, EntityId, Entity_Id; other variations are not encouraged.


Rule #1: If the first table's primary key doesn't appear in the second table's join condition, the second table is one-to-one to the first table. The second table will always return at most one row per row of the first table. There's only one birth country to a person. Example:

SELECT * 
FROM Person
JOIN Country ON Country.CountryId = Person.BirthCountryId

Rule #2: If the first table's primary key appears in the second table's join condition, the second table is many-to-one to the first table. The second table will return zero or more rows per row of the first table. Example:

SELECT * 
FROM Person
JOIN Bid ON Bid.PersonId = Person.PersonId


Would this technique still work on the reverse query? Yeah it would; check this:

SELECT * 
FROM Bid
JOIN Person ON Person.PersonId = Bid.PersonId 

The first table's primary key doesn't appear in the second table's join condition, so the second table is one-to-one to the first table. One given bid always belongs to one person only.


How about this?

SELECT * 
FROM Country
JOIN Person ON Person.BirthCountryId = Country.CountryId

The first table's primary key appears in the second table's join condition, so this falls under rule #2: the second table is many-to-one to the first table. There could be zero or more persons born in that country.

So yes, the technique still works on the reverse query as well.

Apparently there are some exceptions to these two rules; check this: http://www.ienablemuch.com/2010/12/exceptions-to-rule-1-and-rule-2-one-to.html

Do not make circular references on tables

Don't paint yourself into a corner. Do not make circular references between tables.

Example:

create table country
(
country_code varchar(2) not null primary key,
country text not null 
);

create table person
(
person_code varchar(6) not null primary key,
person_name text not null,
birth_country_code
 varchar(2) null 
 references country(country_code)

);

alter table country
add column current_leader_code
 varchar(6) null 
 references person(person_code);

insert into country(country_code, country) values('PH','Philippines');
insert into person(person_code, person_name, birth_country_code) values('MB','Michael Buen', 'PH');
update country set current_leader_code = 'MB' where country_code = 'PH';



Even if we can get away with making the two tables reference each other by strictly adhering to the sequence of the scripts above and making the country's current_leader_code nullable, we are painting ourselves into a corner by doing so.

When it's time for us to transfer these scripts to production machines and we forgot to save them, we will be left with this script (extracted from the RDBMS admin tool):

create table country
(
country_code varchar(2) not null primary key,
country text not null,
current_leader_code 
 varchar(6) null
 references person(person_code)
);

create table person
(
person_code varchar(6) not null primary key,
person_name text not null,
birth_country_code
 varchar(2) null 
 references country(country_code)

);

That fails big time! How can that script execute? The country table references person before person is even created.

So to design things properly for the case described above, we must introduce an association table:

create table country
(
country_code varchar(2) not null primary key,
country text not null
);

create table person
(
person_code varchar(6) not null primary key,
person_name text not null,
birth_country_code
 varchar(2) null 
 references country(country_code)

);



-- association table
create table country_current_leader
(
dummy serial not null primary key,
country_code varchar(2) not null unique references country(country_code),
current_leader_code varchar(6) not null unique references person(person_code)
);


 
insert into country(country_code, country) values('PH','Philippines');
insert into country(country_code, country) values('CN','China');
insert into person(person_code, person_name, birth_country_code) values('MB','Michael Buen','PH');
insert into person(person_code, person_name, birth_country_code) values('HJ','Hu Jintao','CN');
insert into country_current_leader(country_code, current_leader_code) values('PH','MB');
insert into country_current_leader(country_code, current_leader_code) values('CN','HJ');
-- insert into country_current_leader(country_code, current_leader_code) values('CN','MB'); -- this will fail, only one leader per country, and only one country per leader
-- insert into country_current_leader(country_code, current_leader_code) values('PH','HJ'); -- this will fail too, only one leader per country, and only one country per leader


Plural table names are one trick pony




English plurals are not ORM-friendly.

Is that a proof by blatant assertion? It is if you can assert otherwise ;-)

Case in point: Chinese has two forms of plurals, one for persons and another for non-persons; you use the "men" suffix for persons and the "henduo" prefix for non-persons. The Filipino plural form for everything (person or non-person) is facilitated by the "mga" prefix.


Chinese-way:

SELECT * FROM EmployeeMen

SELECT * FROM ChildMen

SELECT * FROM HenduoFruit

SELECT * FROM HenduoMouse

Filipino-way:

SELECT * FROM MgaEmployee

SELECT * FROM MgaChild

SELECT * FROM MgaFruit

SELECT * FROM MgaMouse


English-way:

SELECT * FROM Employees

SELECT * FROM Children

SELECT * FROM Fruits

SELECT * FROM Mice

See? Mice and Children are not directly mappable by ORMs. And any decent ORM, without an AI of sorts, will not be able to infer that Mice's singular form is Mouse. In the Filipino and Chinese languages, pluralizers are consistent, more amenable to ORMs.


Let's say you are in a radical programming environment and the management forbids you from using mnemonic aliases...

SELECT E.Name AS EmployeeName, C.Name as ChildName
FROM Employees AS E
JOIN Children AS C ON C.ParentId = E.ID

...you will find the AS keyword to be a glorified singularizer for table names:

SELECT Employee.Name AS EmployeeName, Child.Name AS ChildName 
FROM Employees AS Employee
JOIN Children AS Child ON Child.ParentId = Employee.ID

And let's say you are in a very, very extremely radical programming environment where they won't allow you to use aliases at all...

SELECT Employees.Name AS EmployeeName, Children.Name AS ChildName -- ugh, those look odd
FROM Employees 
JOIN Children ON Children.ParentId = Employees.ID

..., your queries will look odd.

BTW, we programmers are good at switching mental models; we don't really read this...

box.setX( box.getX() + 1);

...as "set the X based on its old X plus 1".

We read that as:

box.X = box.X + 1;
box.X += 1;
box.X++;


And the real best programmers are the ones who try to infer the intent of code despite the lack of comments or a proper method name; they don't go through the motions, they infer intent, and they read the code above as: move the box to the right.

The best programmers are the ones with mental models that let them abstract a concept very well, with or without first-class constructs in the syntax.

So the best programmers would really read this...

SELECT * FROM Computer 

SELECT * FROM Toy

...as:

SELECT * FROM Computer(s)

SELECT * FROM Toy(s)

And since I'm a Filipino programmer, I can say that prefixing a word to an entity to make it plural easily gets old...

SELECT * FROM MgaEmployee

SELECT * FROM MgaChild

SELECT * FROM MgaFruit

SELECT * FROM MgaMouse

...and is almost akin to the incessant prefixing of tbl to table names:

SELECT * FROM tblEmployee

SELECT * FROM tblChild

SELECT * FROM tblFruit

SELECT * FROM tblMouse


Naming tables in plural form is one of those design patterns that is like a novelty; aside from wearing off easily, it is just a one-trick pony. Plural names only work well if you query one table only; reading SELECT Lastname, Firstname, Birthdate FROM Employees gives one a warm-and-fuzzy feeling. But the norm is that we join tables, and aliasing table names becomes warranted, especially for plural advocates. To the plural advocates: wanna bet you would give your tables aliases rather than going through the motions when joining tables? i.e. this feels icky to you despite your plural-form fervor for naming tables...

SELECT Persons.Name AS PersonName, Countries.CountryCode 
FROM Persons JOIN Countries ON Countries.CountryID = Persons.BirthCountryID

... Why does that feel icky to you? Hint: because to any self-respecting programmer, a selected column with a table qualifier looks odd if the table qualifier is in plural form.

With plural-form table names, you can't seamlessly plug-and-play them into other components (think ORM), and you can't (or won't) leave them alias-less when querying two or more tables (looks funneh):


SELECT Children.Name AS ChildName, Children.Age, Fathers.Name AS FatherName 
FROM Children 
JOIN Fathers ON Fathers.ID = Children.FatherID

So next time you encounter advocates of singular names, don't argue with them, they are mostly right.

And if you don't want to flunk the MCSD 70-300 exam, use singular names.


In terms of ORM-friendly plural naming conventions, here's the ranking:

0. Singular (hey, you said ORM-friendly plural naming conventions! how did this manage to sneak in here?)

1. Filipino plurals

2. Chinese plurals (words are immutable, you just prefix "henduo" or suffix "men")

3. English plurals (that's why this article exists ;-) ). Aside from the suffix "s", you have irregular and mutable plural forms (child -> children, mouse -> mice, goose -> geese, moose -> moose (not mutable, I just want to expose the inconsistencies of the language ;-) )).


So keep in mind, folks: plural table names are a one-trick pony; the only time they give you a warm and fuzzy feeling is when you are getting data from one table only.


SELECT Name, FavoriteLanguage, FavoritePizzaBrand FROM Programmers

When joining tables with plural names, pleasantness-wise, all bets are off.

English plural 101:
http://en.wikipedia.org/wiki/English_plural


[UPDATE March 11 2011]

My desire to name a DbSet on Entity Framework was not honored; EF tried to look for a People table for my public DbSet<Person> Persons, so I need to override things in OnModelCreating :-)
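For reference, here's roughly what that override can look like in EF Code First (just a sketch; the Person entity and StoreContext names here are made up, and either of the two lines in OnModelCreating is enough on its own):

using System.Data.Entity;
using System.Data.Entity.ModelConfiguration.Conventions;

public class Person
{
    public int PersonID { get; set; }
    public string PersonName { get; set; }
}

public class StoreContext : DbContext
{
    public DbSet<Person> Persons { get; set; }

    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        // map this one entity to a singular table name...
        modelBuilder.Entity<Person>().ToTable("Person");

        // ...or drop the pluralizing convention for every entity
        modelBuilder.Conventions.Remove<PluralizingTableNameConvention>();
    }
}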