Wednesday, July 31, 2013

Why so fond of UNION?



I'm wearing my sargchitect hat in our current sprint. Aside from improving the speed of the existing queries by making them sargable and devoid of unintended cartesian-yielding joins, the optimization team needs to make sure the queries are properly refactored, easy to maintain, and easy on the eyes, so the developers can have a pleasant time reading and understanding them.


I saw some UNION queries that can be made simpler using OR. I don't think it's OK to have a UNION of the same table; unions like that tend to violate DRY and run to many lines, hence they are harder to debug and harder to optimize.


I'm thinking, why the inordinate fondness for UNION if the query can be made simpler with OR? Last I checked, Microsoft hasn't disabled the OR functionality. So let's rejoice in the fact that Microsoft doesn't have any plans to disable OR for the foreseeable future.
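A sketch of the kind of rewrite I mean (table and column names are hypothetical). Assuming neither branch produces duplicate rows on its own, the OR form returns the same set in a single pass over the table:

-- Before: a self-UNION that repeats the whole query body
SELECT OrderID, TotalDue FROM dbo.Orders WHERE Status = 'Shipped'
UNION
SELECT OrderID, TotalDue FROM dbo.Orders WHERE Status = 'Delivered';

-- After: DRY, fewer lines, easier to debug and optimize
SELECT OrderID, TotalDue
FROM dbo.Orders
WHERE Status = 'Shipped' OR Status = 'Delivered';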




Happy Sarging! Scratch that, Happy SARGing! ツ

Thursday, July 25, 2013

Tuple Design Pattern for SQL Server

If your RDBMS doesn't support tuples (just fancy RDBMS-speak for records), instead of this:
SELECT a,b FROM aTable
WHERE 
    (aTable.a,aTable.b) IN
    
    (SELECT anotherTable.a,anotherTable.b 
    FROM anotherTable
    WHERE anotherTable.IsActive = 1);


You have to do this:
SELECT a,b FROM aTable
WHERE     
    EXISTS
    (
        SELECT *
        FROM anotherTable
        WHERE anotherTable.a = aTable.a AND anotherTable.b = aTable.b
            AND anotherTable.IsActive = 1
    );


I prefer to write it this way though, so as to make the intent clearer:
SELECT a,b FROM aTable
WHERE 
-- (aTable.a,aTable.b) IN -- leave this commented; it makes the intent clearer
    EXISTS
    (
        SELECT anotherTable.a,anotherTable.b -- don't remove this either; it's perfectly fine self-documenting code, i.e., a tuple presence check
        FROM anotherTable
        WHERE anotherTable.a = aTable.a AND anotherTable.b = aTable.b
        
            AND anotherTable.IsActive = 1 -- keep the blank line above, to emphasize that the conditions above it match the tuple we are looking for
    );


Design patterns are bug reports against your programming language — Peter Norvig



Happy Computing! ツ

Debunking the myth that JOIN is faster than IN

We see now that contrary to the popular opinion, IN / EXISTS queries are not less efficient than a JOIN query in SQL Server. In fact, JOIN queries are less efficient on non-indexed tables, since Semi Join methods allow aggregation and matching against a single hash table, while a JOIN needs to do these two operations in two steps. -- http://explainextended.com/2009/06/16/in-vs-join-vs-exists/




Converted this JOIN (with DISTINCT) … (from 1 minute and 24 seconds):

INNER JOIN dbo.fn_ssrs_ConstrainID(@LanguageCultureCode,@NeutralLanguageCode,@UserID,@PersonID,@Application,@Module,'PEOPLE','Review') cons
    ON rpp.PersonID = cons.id


…to an IN expression (optimized, down to 43 seconds):

and rpp.PersonID in
    (select cons.id
    from dbo.fn_ssrs_ConstrainID(@LanguageCultureCode,@NeutralLanguageCode,@UserID,@PersonID,@Application,@Module,'PEOPLE','Review') cons)


I advised the two colleagues to further optimize the query's existence filters by converting all the JOIN+DISTINCT combos to IN. They were able to optimize the original query from 1 minute and 24 seconds down to 1 second.
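To make the rewrite concrete, here is the general shape of that refactoring on hypothetical tables; the JOIN multiplies the rows first and then DISTINCT collapses them back, while the IN form is a semi-join that matches each row at most once:

-- JOIN + DISTINCT combo
SELECT DISTINCT p.PersonID, p.PersonName
FROM dbo.Person p
INNER JOIN dbo.Review r ON r.PersonID = p.PersonID;

-- Semi-join via IN: no row multiplication, no DISTINCT needed
SELECT p.PersonID, p.PersonName
FROM dbo.Person p
WHERE p.PersonID IN (SELECT r.PersonID FROM dbo.Review r);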



Using the same technique as above, another colleague and I were able to optimize a stored proc that is very crucial to the web app's performance, as that stored proc determines the visibility of some buttons on the web page. The original stored proc took 5 seconds; on one page there are two buttons that use the stored proc, which made the web page appear only after 10 seconds or so. We were able to tame it down to sub-second response time just by converting some of the JOINs to INs.





Happy Computing! ツ

Wednesday, July 24, 2013

SQL Server GREATEST function optimization

PostgreSQL has a GREATEST function; SQL Server doesn't have a built-in equivalent of it, so we have to create one ourselves. This was the first iteration:

CREATE FUNCTION dbo.fn_GetTheLatestDate
(
    @DateTime1 SMALLDATETIME = NULL,
    @DateTime2 SMALLDATETIME = NULL,
    @DateTime3 SMALLDATETIME = NULL,
    @DateTime4 SMALLDATETIME = NULL,
    @DateTime5 SMALLDATETIME = NULL,
    @DateTime6 SMALLDATETIME = NULL,
    @DateTime7 SMALLDATETIME = NULL,
    @DateTime8 SMALLDATETIME = NULL
)
RETURNS SMALLDATETIME
AS
BEGIN
    DECLARE @temp TABLE (dt SMALLDATETIME)
     
    IF @DateTime1 IS NOT NULL
    BEGIN
        INSERT INTO @temp ( dt )
        SELECT @DateTime1
    END
     
    IF @DateTime2 IS NOT NULL
    BEGIN
        INSERT INTO @temp ( dt )
        SELECT @DateTime2
    END
    IF @DateTime3 IS NOT NULL
    BEGIN
        INSERT INTO @temp ( dt )
        SELECT @DateTime3
    END
     
    IF @DateTime4 IS NOT NULL
    BEGIN
        INSERT INTO @temp ( dt )
        SELECT @DateTime4
    END
     
    IF @DateTime5 IS NOT NULL
    BEGIN
        INSERT INTO @temp ( dt )
        SELECT @DateTime5
    END
     
    IF @DateTime6 IS NOT NULL
    BEGIN
        INSERT INTO @temp ( dt )
        SELECT @DateTime6
    END
     
    IF @DateTime7 IS NOT NULL
    BEGIN
        INSERT INTO @temp ( dt )
        SELECT @DateTime7
    END
     
    IF @DateTime8 IS NOT NULL
    BEGIN
        INSERT INTO @temp ( dt )
        SELECT @DateTime8
    END
     
         
    -- Return the result of the function
    RETURN (SELECT MAX(dt) FROM @temp)
END

The query that uses that function took 4 seconds to complete, and the function accounted for 13% of that query's execution cost.



Sans table variable:

CREATE FUNCTION dbo.fn_GetTheLatestDate
(
    @DateTime1 SMALLDATETIME = NULL,
    @DateTime2 SMALLDATETIME = NULL,
    @DateTime3 SMALLDATETIME = NULL,
    @DateTime4 SMALLDATETIME = NULL,
    @DateTime5 SMALLDATETIME = NULL,
    @DateTime6 SMALLDATETIME = NULL,
    @DateTime7 SMALLDATETIME = NULL,
    @DateTime8 SMALLDATETIME = NULL
)
RETURNS SMALLDATETIME
AS
BEGIN
    
    
    return
    (
        
        select max(v.d)
        from
        (
            select @DateTime1 as d
            where @DateTime1 is not null
            
            union all
            select @DateTime2
            where @DateTime2 is not null
 
            union all
            select @DateTime3
            where @DateTime3 is not null
 
            union all
            select @DateTime4
            where @DateTime4 is not null
 
            union all
            select @DateTime5
            where @DateTime5 is not null
 
            union all
            select @DateTime6
            where @DateTime6 is not null
            
            union all
            select @DateTime7
            where @DateTime7 is not null
 
            union all
            select @DateTime8
            where @DateTime8 is not null
        ) as v
    )        
    
 
END

The query that uses that function took 0 seconds to complete, and the function accounted for 0% of that query's execution cost.
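Both versions are called the same way; a quick sanity check with made-up dates:

SELECT dbo.fn_GetTheLatestDate
       ('2013-07-01', '2013-07-24', '2013-06-15',
        NULL, NULL, NULL, NULL, NULL) AS LatestDate;
-- LatestDate is 2013-07-24, the greatest of the non-NULL arguments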

Tuesday, July 23, 2013

Debunking the myth that CTE is slow

There are some folks who believe that CTEs are slow. I don't know where they got that impression. We might as well say that our SELECT statement is slow because our code is using, er.. a SELECT statement. And there are some folks who believe a CTE's underpinning is a cursor. Really, no joking. I'm starting to lose faith in humanity ^_^ lol


For those who believe a CTE is made out of a cursor, here's how to partner a cursor with a CTE:

http://stackoverflow.com/questions/14601407/using-a-cursor-with-a-cte


So if one can define a cursor over a CTE, what is a cursor on a CTE? A cursor on a cursor? Head-scratching! :D A lot of engineering effort goes into CTEs, and all some folks can say about them is that they are cursors; folks like that are dangerous to our industry. It might be their excuse for not learning CTEs and windowing functions, which by the way can be used separately, but are commonly seen together as they came out at the same time in most RDBMSes.


As with derived tables, CTEs allow you to create complex queries that would otherwise require the use of temporary tables and cursors, which can be inefficient. CTEs remove the overhead of creating and dropping those temporary tables, and of the INSERT and UPDATE statements required to populate them. This can make for faster execution, though you should always compare all available options when performance is critical -- http://www.blackwasp.co.uk/SQLCTEs.aspx


What we do with a CTE is what determines its performance. A CTE is nothing but a programming facility that makes our queries more manageable; it helps when we are simplifying very complex and slow queries, and provided you take the right approach, it aids the developer in making the query performant.


If you have tried programming in C or C++, you might be familiar with macros. Yes, a CTE is just like that, a macro. It is expanded to its full form when you use it. It will not magically make your query faster nor slower.



CTE is just an inline view:

with product_count_query as
(
    select product_id, product_count = count(*)
    from tbl
    group by product_id
)

select p.*, the_count = isnull(c.product_count, 0)
from product p 
left join product_count_query c -- derive table from CTE
on p.product_id = c.product_id;

CTE is just a shorthand for an explicit view:

create view product_count_query as

    select product_id, product_count = count(*)
    from tbl
    group by product_id;
go


select p.*, the_count = isnull(c.product_count, 0)
from product p 
left join product_count_query c -- derive table from VIEW
on p.product_id = c.product_id;


A view, and a CTE for that matter, don't have their own execution plans; both are ultimately just expanded and bolted onto the main query:

select p.*, the_count = isnull(c.product_count, 0)
from product p
left join
(
    select product_id, product_count = count(*)
    from tbl
    group by product_id
) as c -- derive table from an inline view, er.. a CTE? ;-)
 on p.product_id = c.product_id


All of them, same banana: exactly the same execution plan, same speed.


If you believe a CTE is just a cursor, you might as well believe a view is just a cursor too. A darned silly belief.


Time to stop spewing the nonsense that a CTE is just a cursor; all three queries above have exactly the same execution plan and the same speed. I don't know what magic sauce could give the table-deriving query a performance boost over its CTE cousin; there's none.


If you believe there's something inherently slow about CTEs, and thus prefer to write your query with the table-deriving approach because you believe it is faster, you are merely doing Cargo Cult Programming. You are depriving yourself of the abstraction, readability, and manageability that CTEs bring to the table, all because of a cult-like belief that CTEs are slow.


It's what you do with a CTE that makes it slow or fast. If you use the JOIN+DISTINCT combo on a CTE, or on any form of query abstraction for that matter, expect bad performance and rewrites.


A CTE is just one of the vast array of SQL tools at your disposal that can make your daily craft easier. You don't need to use it exclusively; you can mix-and-match it with other SQL tools too.


So what makes a query fast?


Start with the right assumption: it's just a macro:
@crokusek they(CTE) don't get materialized at all AFAIK - they are just a table expression -- http://dba.stackexchange.com/questions/13112/whats-the-difference-between-a-cte-and-a-temp-table#comment69061_13117



This is what gives CTEs a bad name: wrong assumptions:

One reason CTE's can be employable for performance reasons is because their (possibly forced) materialization could be more easily parallelized whereby temp tables are generally computed serially before their use. -- http://dba.stackexchange.com/questions/13112/whats-the-difference-between-a-cte-and-a-temp-table#comment69060_13117


Until MATERIALIZED is supported...
with 
T MATERIALIZED AS
(
    select ROW_NUMBER() over(partition by CustomerCode order by OrderID) as rn, * from test
)
,R(CustomerCode, Rn, OrderId, Qty, RunningTotal) as
(
    select CustomerCode, Rn, OrderID, Qty, Qty
    from t 
    where rn = 1
     
    union all
     
    select t.CustomerCode, t.Rn, t.OrderId, t.Qty, p.RunningTotal + t.Qty
    from t t
    join r p on p.CustomerCode = t.CustomerCode and t.rn = p.rn + 1
     
)
select R.CustomerCode, R.OrderId, R.Qty, R.RunningTotal from r
order by R.CustomerCode, R.OrderId 
option(maxrecursion 0);

...we should mix-and-match CTEs with temporary tables to give our queries a performance boost. That is, there are times we need to materialize the result of a query. For example, when we want to feed a row-numbered set to a recursive query, we should first materialize the row-numbering query's results into an actual table (a temporary table or table variable will do). This is fast:

select ROW_NUMBER() over(partition by CustomerCode order by OrderID) as rn, * into #xxx 
from test;
 
with 
T AS
(
    select * from #xxx
)
,R(CustomerCode, Rn, OrderId, Qty, RunningTotal) as
(
    select CustomerCode, Rn, OrderID, Qty, Qty
    from t 
    where rn = 1
     
    union all
     
    select t.CustomerCode, t.Rn, t.OrderId, t.Qty, p.RunningTotal + t.Qty
    from t t
    join r p on p.CustomerCode = t.CustomerCode and t.rn = p.rn + 1
     
)
select R.CustomerCode, R.OrderId, R.Qty, R.RunningTotal from r
order by R.CustomerCode, R.OrderId 
option(maxrecursion 0);
 
drop table #xxx;

Happy Computing! ツ

Tuesday, July 16, 2013

Don't use the common denominator, take advantage of your RDBMS's special capabilities

I wanted to justify the use of this cross-platform construct:

with p as
(
 select BusinessEntityID, FirstName, MiddleName, LastName, PersonType
 from Person.Person 
)
select p.BusinessEntityId, 'FirstName' as FieldLabel, FirstName
from p
union all
select p.BusinessEntityId, 'MiddleName' as FieldLabel, MiddleName
from p
union all
select p.BusinessEntityId, 'LastName' as FieldLabel, LastName
from p
union all
select p.BusinessEntityId, 'PersonType' as FieldLabel, PersonType
from p
order by BusinessEntityID, FieldLabel;


However, RDBMS-specific constructs are well-optimized:
with p as 
(
select BusinessEntityID, 
    FirstName = convert(varchar, FirstName), 
    LastName = convert(varchar, LastName), 
    MiddleName = ISNULL(convert(varchar,MiddleName), ''), 
    PersonType = convert(varchar, PersonType)
from Person.Person 
)
select unpvt.BusinessEntityID, unpvt.FieldLabel, unpvt.FieldValue
from p
unpivot (FieldValue FOR FieldLabel in (FirstName, MiddleName, LastName, PersonType)) as unpvt
order by unpvt.BusinessEntityID, unpvt.FieldLabel;



These are the two queries' logical reads:
(79888 row(s) affected)
Table 'Person'. Scan count 12, logical reads 5143, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

(79888 row(s) affected)
Table 'Person'. Scan count 3, logical reads 4186, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

Saturday, July 13, 2013

Streaming Table-Valued function via CLR function

We can't use dynamic SQL in a Table-Valued Function, as we cannot call stored procedures inside a function. The solution is to create the Table-Valued Function via a CLR function.
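To see the limitation, here is a hypothetical repro (function and column names invented); SQL Server rejects the EXEC of a string inside a function body with an "invalid use of a side-effecting operator" kind of error:

CREATE FUNCTION dbo.fn_TryDynamicSql(@columnName sysname)
RETURNS @result TABLE (Value nvarchar(4000))
AS
BEGIN
    -- Not allowed: dynamic SQL (EXEC of a string) inside a function
    INSERT INTO @result
    EXEC('SELECT ' + @columnName + ' FROM Person.Person');
    RETURN;
END;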

Without further ado, this is the complete code:

using System.Collections;

using System.Data.SqlTypes;
using System.Data.SqlClient;

using Microsoft.SqlServer.Server;

namespace FirstClrFunctionProject
{
    public partial class TheUserDefinedFunctions
    {
        private class Person
        {
            public SqlInt32 PersonId;
            public SqlString PersonName;

            public Person(SqlInt32 personId, SqlString personName)
            {
                PersonId = personId;
                PersonName = personName;
            }
        }


        [SqlFunction(
           DataAccess = DataAccessKind.Read,
           FillRowMethodName = "FindBandMembers_FillRow",
           TableDefinition = "PersonId int, PersonName nvarchar(4000)")]
        public static IEnumerable FindBandMembers(SqlString s)
        {
            bool tryTheAwesome = true;

            string field = s.Value;

            if (tryTheAwesome)
            {
                string connectionString; // "context connection=true" can't work with streaming, i.e., the context connection doesn't work if we are using: yield return

                // Must use this instead:
                connectionString = "data source=localhost;initial catalog=AdventureWorks2012;integrated security=SSPI;enlist=false";

                using (var connection = new SqlConnection(connectionString))
                {
                    connection.Open();

                    using (var personsFromDb = new SqlCommand("select BusinessEntityId, " +  field + " from Person.Person", connection))
                    using (var personsReader = personsFromDb.ExecuteReader())
                    {
                        while (personsReader.Read())
                        {
                            yield return new Person(personId: personsReader.GetInt32(0), personName: personsReader.GetString(1));
                        }
                    }//using                    
                }//using
            }//if
            else
            {
                // Fallback demo data when not streaming from the database
                yield return new Person(personId: 1, personName: "John");
                yield return new Person(personId: 2, personName: "Paul");
                yield return new Person(personId: 3, personName: "George");
                yield return new Person(personId: 4, personName: "Ringo");
                yield return new Person(personId: 5, personName: "Francisco");
                yield return new Person(personId: 6, personName: "Nino");
                yield return new Person(personId: 7, personName: "Marc");
                yield return new Person(personId: 8, personName: "Michael");
            }

        }

        public static void FindBandMembers_FillRow(object personObj, out SqlInt32 personId, out SqlString personName)
        {
            var p = (Person)personObj;

            personId = p.PersonId;
            personName = p.PersonName;
        }
    }
}

Sign the assembly; I just named the strong-name key the same as the assembly name. No need to put a password on the strong-name key:




For the SQL:
use master;

go


sp_configure 'show advanced options', 1;
GO
RECONFIGURE;
GO
sp_configure 'clr enabled', 1;
GO
RECONFIGURE;
GO


IF EXISTS (SELECT * from sys.asymmetric_keys where name = 'MyDllKey') begin
    drop LOGIN MyDllLogin;
    drop ASYMMETRIC KEY MyDllKey;
end;

go



IF NOT EXISTS (SELECT * from sys.asymmetric_keys where name = 'MyDllKey') begin
    CREATE ASYMMETRIC KEY MyDllKey FROM EXECUTABLE FILE = 'c:\x\FirstClrFunctionProject.dll';
    -- http://stackoverflow.com/questions/7503603/cannot-find-the-asymmetric-key-because-it-does-not-exist-or-you-do-not-have-p
    CREATE LOGIN MyDllLogin FROM ASYMMETRIC KEY MyDllKey;
    GRANT EXTERNAL ACCESS ASSEMBLY TO MyDllLogin;
end;

go



use AdventureWorks2012;


IF EXISTS (SELECT name FROM sysobjects WHERE name = 'FindBandMembers')
   DROP FUNCTION FindBandMembers;
go

IF EXISTS (SELECT name FROM sys.assemblies WHERE name = 'FirstClrFunctionProject')
   DROP ASSEMBLY FirstClrFunctionProject;
go





CREATE ASSEMBLY FirstClrFunctionProject FROM 'c:\x\FirstClrFunctionProject.dll'
WITH PERMISSION_SET = EXTERNAL_ACCESS;
GO




-- Thanks Stackoverflow! 
-- http://stackoverflow.com/questions/7823488/sql-server-could-not-find-type-in-the-assembly

CREATE FUNCTION FindBandMembers(@hmm nvarchar(4000)) 
RETURNS TABLE (
   PersonId int,
   PersonName nvarchar(4000)
)
AS EXTERNAL NAME FirstClrFunctionProject.[FirstClrFunctionProject.TheUserDefinedFunctions].[FindBandMembers];
go

SELECT * FROM dbo.FindBandMembers('FirstName') order by personId;
SELECT * FROM dbo.FindBandMembers('LastName') order by personId;
go


-- http://stackoverflow.com/questions/6901811/sql-clr-streaming-table-valued-function-results


If you'll notice, FirstClrFunctionProject is repeated in this expression:
AS EXTERNAL NAME FirstClrFunctionProject.[FirstClrFunctionProject.TheUserDefinedFunctions].[FindBandMembers];

If you do it this way...
AS EXTERNAL NAME FirstClrFunctionProject.TheUserDefinedFunctions.[FindBandMembers];

...you would get an error, as SQL Server will look for the TheUserDefinedFunctions class without a namespace; however, TheUserDefinedFunctions is enclosed in a namespace, hence you will get the following error when you run the SQL with the expression above:

Could not find Type 'TheUserDefinedFunctions' in assembly 'FirstClrFunctionProject'.


This...
AS EXTERNAL NAME FirstClrFunctionProject.[FirstClrFunctionProject.TheUserDefinedFunctions].[FindBandMembers];

...is interpreted as:
AS EXTERNAL NAME AssemblyName.[NamespaceName.Classname].[FunctionName];


If you get this kind of error when you run the SQL:
The certificate, asymmetric key, or private key file is not valid or does not exist; or you do not have permissions for it.

Chances are the certificate doesn't exist, or the asymmetric key is pointing to a path that SQL Server can't access. The latter happened to me; I tried pointing the asymmetric key directly at the build output path, e.g.

CREATE ASYMMETRIC KEY MyDllKey FROM EXECUTABLE FILE = 'C:\Users\Michael\Documents\GitHub\FirstClrFunctionProject\FirstClrFunctionSolution\FirstClrFunctionProject\bin\Debug\FirstClrFunctionProject.dll';

If that's the case, try copying the FirstClrFunctionProject.dll and the FirstClrFunctionProject.snk file to another directory.


Another source of error: if you are using .NET Framework 4.0 or higher and your SQL Server is version 2008, recompile your solution against .NET Framework 3.5.


If you get an error like the following (which seems to have started happening just today), remove this Windows update: KB2840628

Msg 6260, Level 16, State 1, Line 2
An error occurred while getting new row from user defined Table Valued Function : 
System.TypeInitializationException: The type initializer for 'System.Data.SqlClient.SqlConnection' threw an exception. ---> System.TypeInitializationException: The type initializer for 'System.Data.SqlClient.SqlConnectionFactory' threw an exception. ---> System.TypeInitializationException: The type initializer for 'System.Data.SqlClient.SqlPerformanceCounters' threw an exception. ---> System.MethodAccessException: Attempt by method 'System.Configuration.TypeUtil.CreateInstanceRestricted(System.Type, System.Type)' to access method 'System.Diagnostics.SwitchElementsCollection..ctor()' failed. ---> System.Security.SecurityException: Request failed.
System.Security.SecurityException: 
   at System.Security.CodeAccessSecurityEngine.ThrowSecurityException(RuntimeAssembly asm, PermissionSet granted, PermissionSet refused, RuntimeMethodHandleInternal rmh, SecurityAction action, Object demand, IPermission permThatFailed)
   at System.Security.CodeAccessSecurityEngine.CheckSetHelper(PermissionSet grants, PermissionSet refused, PermissionSet demands, RuntimeMethodHandleInternal rmh, Object assemblyOrString, SecurityAction action, Boolean throwException)
   at System.Security.PermissionListSet.CheckSetDemandWithModification(PermissionSet pset, PermissionSet& alteredDemandSet, RuntimeMethodHandleInternal rmh)
   at System.Security.PermissionListSet.CheckSetDemand(PermissionSet pset, RuntimeMethodHandleInternal rmh)
   at System.Security.PermissionListSet.DemandFlagsOrGrantSet(Int32 flags, PermissionSet grantSet)
   at System.Security.CodeAccessSecurityEngine.ReflectionTargetDemandHelper(Int32 permission, PermissionSet targetGrant)
System.MethodAccessException: 
   at System.RuntimeTypeHandle.CreateInstance(RuntimeType type, Boolean publicOnly, Boolean noCheck, 
 ...
System.TypeInitializationException: 
   at System.Data.SqlClient.SqlConnection..ctor()
   at System.Data.SqlClient.SqlConnection..ctor(String connectionString)
  ...


If you think using dynamic SQL in a table-valued function ought to be this hard, try a different perspective: http://www.anicehumble.com/2013/07/seamless-dynamic-sql-using-postgresql.html


Download the full code and SQL from: https://github.com/MichaelBuen/FirstClrFunctionProject

Happy Computing! ツ

Monday, July 1, 2013

What's rawer than Razor's raw?

@{
    ViewBag.Title = "Index";
}

<h2>Index</h2>

@{
    System.Threading.Thread.CurrentThread.CurrentCulture = new System.Globalization.CultureInfo("fr-FR");
    decimal d = 1234567.89M;
}


@d

<hr />

@Html.Raw(d)



The output is:

1234567,89 
1234567,89

I wanted to return these though:
1234567,89
1234567.89


Yeah I know, raw != English-centric; the world doesn't revolve around English culture. However, the programming language's raw form (or culture, if you may) is English-centric, i.e., we write one million two hundred thirty-four thousand five hundred sixty-seven pesos and eighty-nine centavos as 1234567.89, not 1234567,89.

I want to use 1234567.89 even when I'm in another culture. What's the use of that whimsical need? It's easier to bridge server-side values to JavaScript when the programming language's format is preserved.

This view...

@{
    System.Threading.Thread.CurrentThread.CurrentCulture = new System.Globalization.CultureInfo("fr-FR");
    decimal d = 1234567.8912M;
}

<script>    
    var x = @d;
    alert(x);    
</script>

...is rendered to HTML as the following, which is invalid syntax, hence the alert will not pop up:

<script>    
    var x = 1234567,89;
    alert(x);    
</script>


Any suggestions? I'll post this on Stack Overflow and will keep you posted when an answer comes up.


UPDATE: Feb 19, 2014


Some answers come up only when there's a damning need for a solution to a problem. A colleague had a problem with his Razor view, similar to the one above: he's passing a model (with numeric values) from controller to view, and in turn his view passes the model's numeric values to a JavaScript control. This is his Razor view:


<script>
var c = chart.setup({ LowestScore : @Model.LowestScore, HighestScore : @Model.HighestScore });
</script>


However this is the HTML output of Razor view:
<script>
var c = chart.setup({ LowestScore : 12345,67 , HighestScore : 56789,12 });
</script>


Spot the error? It's not valid JavaScript; the cents get separated from the values. The problem stems from Razor being locale-aware, so if you set the language to French, the fraction gets emitted with a comma separator. To fix the problem, we just serialize the values right there in the view. Why in the view and not in the controller? If we do it in the controller, we are compelled to introduce another property on the model just for the JSON string of the values we want to pass, or worse yet, we have to marshal it through ViewBag or ViewData. With that in mind, this is the best way to tackle the problem, serializing the values in the view:

<script>
    @{
        string j = Newtonsoft.Json.JsonConvert.SerializeObject(
            new 
            { 
                LowestScore = Model.LowestScore,
                HighestScore = Model.HighestScore,                 
            });
    }

    var jsonObject = @Html.Raw(j);
    var c = chart.setup(jsonObject);
</script>



HTML generated:
<script>
    var jsonObject = {"LowestScore":12345.67,"HighestScore":56789.12};
    var c = chart.setup(jsonObject);
</script>



I hope I'm wrong with my approach above; if there's a shorter way to do it in Razor, i.e., if its localization can be turned off programmatically, I'll be more than glad to know.



Happy Coding! ツ