Friday, October 7, 2011

SQLServer Denali : OFFSET & FETCH NEXT Keywords

 

Hi All,

 

I found these two new & very useful keywords in SQLServer Denali

OFFSET & FETCH NEXT

 

SELECT *
FROM [Order Details]
ORDER BY [Order ID]
OFFSET 20 ROWS
FETCH NEXT 7 ROWS ONLY

 

 

The above script returns rows 21 to 27 from the [Order Details] table. The OFFSET clause skips the first 20 rows, and FETCH NEXT then returns the following 7 rows. Please note, this T-SQL will throw an error if the query has no ORDER BY clause.

 

Reference: Pinal Dave's blog

http://blog.sqlauthority.com/2011/10/07/sql-server-server-side-paging-in-sql-server-ce-compact-edition/

 

Namaste!

Anugrah

Friday, September 23, 2011

Create a .NET Publisher Policy

Vendors of assemblies can state that previously compiled applications/clients should use a newer version of an assembly by including a publisher policy file with the upgraded assembly. The publisher policy file specifies assembly redirection and code base settings, and uses the same format as an application configuration file. The publisher policy file is compiled into an assembly and placed in the global assembly cache.
There are three steps involved in creating a publisher policy:
  1. Create a publisher policy file.
  2. Create a publisher policy assembly.
  3. Add the publisher policy assembly to the global assembly cache.
 
1. Creating the Publisher Policy XML file
The schema for publisher policy is described in Redirecting Assembly Versions. The following example shows a publisher policy file that redirects one version of myAssembly to another.
<configuration>
   <runtime>
      <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
       <dependentAssembly>
         <assemblyIdentity name="myAssembly"
                           publicKeyToken="32ab4ba45e0a69a1"
                           culture="en-us" />
         <!-- Redirecting to version 2.0.0.0 of the assembly. -->
         <bindingRedirect oldVersion="1.0.0.0"
                          newVersion="2.0.0.0"/>
       </dependentAssembly>
      </assemblyBinding>
   </runtime>
</configuration>
 
 
2. Creating the Publisher Policy Assembly
Use the Assembly Linker (Al.exe) to create the publisher policy assembly.
Type the following command at the command prompt: al /link:publisherPolicyFile /out:publisherPolicyAssemblyFile /keyfile:keyPairFile /platform:processorArchitecture
In this command:
  • The publisherPolicyFile argument is the name of the publisher policy file.
  • The publisherPolicyAssemblyFile argument is the name of the publisher policy assembly that results from this command. The assembly file name must follow the format:
policy.majorNumber.minorNumber.mainAssemblyName.dll
  • The keyPairFile argument is the file containing the key pair. You must sign the assembly and its publisher policy assembly with the same key pair (.snk file).
  • The processorArchitecture argument identifies the platform targeted by a processor-specific assembly. It can be amd64, ia64, msil, or x86.
The following command creates a publisher policy assembly called policy.1.0.myAssembly from a publisher policy file called pub.config, assigns a strong name to the assembly using the key pair in the sgKey.snk file, and specifies that the assembly targets the x86 processor architecture.
al /link:pub.config /out:policy.1.0.myAssembly.dll /keyfile:sgKey.snk /platform:x86
The publisher policy assembly must match the processor architecture of the assembly that it applies to. Thus, if your assembly has a ProcessorArchitecture value of MSIL, the publisher policy assembly for that assembly must be created with /platform:anycpu. You must provide a separate publisher policy assembly for each processor-specific assembly.
A consequence of this rule is that in order to change the processor architecture for an assembly, you must change the major or minor component of the version number, so that you can supply a new publisher policy assembly with the correct processor architecture. The old publisher policy assembly cannot service your assembly once your assembly has a different processor architecture.
Another consequence is that the version 2.0 linker cannot be used to create a publisher policy assembly for an assembly compiled using earlier versions of the .NET Framework, because it always specifies processor architecture.
 
3. Adding the Publisher Policy Assembly to the Global Assembly Cache
  • Drag the policy DLL into the GAC, or install it from the console using the gacutil tool.
  • Note that the publisher policy assembly cannot be added to the global assembly cache unless the original publisher policy file is located in the same directory as the assembly.
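
For example, using the file names from step 2 above, the policy assembly can be installed from a Visual Studio command prompt like this (keep pub.config next to the DLL when you run it):

gacutil /i policy.1.0.myAssembly.dll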
 
Namaste!
Anugrah Atreya
 

Thursday, September 22, 2011

ASP.NET : MVC3 : Introduction

Hi All,
 
If you are an ASP.NET developer and have not yet had a chance to catch up with MVC3, introduced with .NET 4.0,
here is a must-watch presentation on MVC3 (using the new Razor view engine)
 
 
Namaste!
Anugrah Atreya
 

Friday, September 16, 2011

SQLServer : XML Shredding : Open XML : Part 1

In the previous post we saw how to get XML out of SQL Server.

In this post we will see the reverse, i.e., how to put XML back into SQL Server.

 

Over to Pinal Dave's original post below

 

Namaste !

  Anugrah Atreya

 

 

From: Journey to SQLAuthority [mailto:no-reply@wordpress.com]
Sent: Saturday, September 03, 2011 12:21 PM
To: Anugrah Atreya
Subject: Daily digest for September 3, 2011

SQL SERVER - Tips from the SQL Joes 2 Pros Development Series - Preparing XML in Memory - Day 34 of 35

Consuming XML

Anytime you turn XML into another data format (e.g., into a SQL Server table), that process is called shredding XML. Before you can shred your XML, you must first prepare it. Preparing XML for SQL means storing the XML in a memory space where a query can retrieve and make use of the data. SQL Server 2008 provides native XML support to make streaming and shredding XML more robust.

http://www.pinaldave.com/bimg/j2p/j2p_33_2.png

Think of XML as a big meal for SQL. While we essentially added just one line of code to our Select query to turn tabular data into an XML document, going the other direction is more complex. Behind the scenes, SQL Server uses additional memory when consuming XML. Our preparation will include a step to parse and store the XML document in memory. And the same way a big feast requires additional cleanup, we must include a step to remove the XML document from the memory cache once it is no longer needed.

Now that we understand the essential rules and components of well-formed XML documents, we will turn our attention toward consuming XML data. In the next posts we will see an advanced method (nodes) for shredding XML. Right now we want you to be familiar with the terms used when going back and forth from SQL to XML.

Preparing XML in Memory

If you want to take XML data and create a result set in SQL Server, you must first store the XML in memory. The process of preparing XML in SQL includes storing the XML in memory and processing the XML so that all the data and metadata is ready and available for you to query.

Recall that element levels in your XML document appear in the same order that tables appear in your SELECT list and are named according to any aliases you may have chosen (e.g., cust, Order, OrderDetail).

A related query helps to illustrate our root element (cust) having a top-level element (Order) and a child-level element (OrderDetail). There is a 1:Many (One-to-Many) relationship between the root node and the lower elements.

Copy your XML result into a brand new query window and enclose it in single quotes (see the sketch below). Let's briefly digress for another comparison which may be helpful for our next step. Preparing to have your XML document shredded by SQL Server is a bit like the steps you take when having your tailor create a custom garment for you. One of my students recently underwent this process. Her favorite tailor, Kim, had the design for a poncho she liked. The tailor sent my student to the fabric store to select the particular fabric and trim she wanted for her poncho. When she brought the fabric and trim to Kim's shop, Kim took the fabric, wrote up an order slip, and then gave my student a claim ticket and said her poncho would be done in a week. The garment is stored at the tailor's, and when you want to use it you must show your claim ticket.

This tailor runs a small neighborhood shop but is always very busy due to the high quality of her work. While Kim could eventually have located the order without the benefit of the claim ticket, my conscientious student made very sure to bring her ticket when she returned the following week. She submitted her claim ticket and in exchange she was handed her lovely new hand-made garment.

Much the same way, when you send an XML document to be tailored into memory, SQL Server gives you a claim number (called a handle) which you need later when referring to that document.  We will send our XML document into memory and in exchange we will get back the handle in the form of an integer. To send our document into memory, we first need to declare an XML variable. In order for this variable to contain our XML document, we will set it equal to our XML.

The @Doc variable's data type is XML. Next we will declare the variable @hDoc, which we know will be an integer because it is the variable which will act as our document handle (i.e., our "claim ticket"). We will also use sp_XML_PrepareDocument, a system-supplied stored procedure which reads our XML document (@Doc), parses it, and makes it available for SQL's use.

When we send our XML to SQL Server's internal cache, we will receive a number which functions as our "claim ticket." Run all of the code together, including a SELECT statement to display the document handle (i.e., our "claim ticket" which SQL Server provides in exchange for the XML document). Run all of the code together and make note of the handle number, which is 1.
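
A minimal sketch of these steps (the sample document here is a shortened stand-in for the XML built in the previous post; only the ProductID and Quantity values come from the article, the other attribute values are made up):

DECLARE @Doc XML, @hDoc INT

SET @Doc =
'<cust CustomerID="1">
  <Order OrderID="5">
    <OrderDetail ProductID="72" Quantity="4" />
  </Order>
</cust>'

-- Parse the document and store it in SQL Server's internal cache.
EXEC sp_xml_preparedocument @hDoc OUTPUT, @Doc

-- Display the document handle (our "claim ticket"), e.g. 1.
SELECT @hDoc AS DocumentHandle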

Be sure to run this code only once, otherwise you will create multiple handles and instances of your XML document. Avoid accidentally creating unneeded instances of your document and handle.

Using the OpenXML Function

We just sent our prepared XML into SQL Server's internal cache so that we may pull out the data we want. The OpenXML function provides a rowset view of your XML data. It works with the in-memory copy of the XML document you've stored and provides a view of the data, no longer formatted as XML but with all its parts separated into a large grid. This allows you to query just the data that you need.

We know the key to accessing the stored XML document is the document handle (like a claim ticket). The first argument needed by the OpenXML function is this value expressed as an integer. The second argument is the rowpattern hint for the data we wish to see.

After declaring an integer variable and setting it equal to 1 (i.e., the value of our document handle from the earlier step), we can use a SELECT statement to query the result set of the OpenXML function. The variable @iNum is the first parameter. The second parameter, '/cust/Order/OrderDetail', specifies that we wish to see data for the OrderDetail element level.
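
Continuing with the handle from the earlier sketch:

DECLARE @iNum INT
SET @iNum = 1   -- the document handle returned by sp_xml_preparedocument

-- Rowset view of the OrderDetail level. Without a WITH clause,
-- OPENXML returns its internal edge-table format.
SELECT *
FROM OPENXML(@iNum, '/cust/Order/OrderDetail')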

Rowpattern

Since XML can have root tags, top level tags, and many levels of child tags, rowpatterns are needed to figure out which level represents your row data. A rowpattern is an XPath pattern telling your query where to look for the data that you want to see in your result.

In our current example, the rowpattern hint ('/cust/Order/OrderDetail') narrows our query to the attributes found at the OrderDetail element level. While the surrounding edge-table data isn't immediately interpretable, we can see the text for the ProductID attribute shows a 72, and the text for the Quantity attribute shows a 4.

Shredding One Level

Adding the WITH clause to our existing query allows us to pick just the values we wish to see. Our query specifies that we are still interested in data from the OrderDetail element level (see the sketch below). Our WITH clause lists the field names we want (ProductID, Quantity) from this element level and specifies that these values should be expressed as integer data. In other words, ProductID and Quantity are both integers. The WITH clause allows us to shred data at a single element level.
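
A sketch of the full shredding query, including the cleanup step mentioned earlier (sp_xml_removedocument releases the cached document once we no longer need it):

DECLARE @iNum INT
SET @iNum = 1

-- Shred just the two attributes we care about, typed as integers.
SELECT *
FROM OPENXML(@iNum, '/cust/Order/OrderDetail')
WITH (ProductID INT, Quantity INT)

-- Release the document from SQL Server's memory cache.
EXEC sp_xml_removedocument @iNum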

Note: If you want to set up the sample JProCo database on your system you can watch this video. For this post you will want to run the SQLInteropChapter1.0Setup.sql script from Volume 5.

Question 34

The sp_XML_PrepareDocument stored procedure requires a parameter that is an XML data type. What is the output parameter for?

1.     The handle as an INT

2.     The handle as an XML

3.     The handle as a Varchar


 


Tuesday, August 30, 2011

SQL Server 2008 : FOR XML RAW

It is amazing to see how we can get the output of SQL queries in XML format.

See the complete article from Pinal Dave in the subscription email below.

 

Namaste!

  Anugrah Atreya

 

From: Journey to SQLAuthority [mailto:no-reply@wordpress.com]
Sent: Tuesday, August 30, 2011 12:16 PM
To: Anugrah Atreya
Subject: Daily digest for August 30, 2011

 

SQL SERVER - Tips from the SQL Joes 2 Pros Development Series - What is XML? - Day 30 of 35

Let's look at another example from the Employee table.  If you ran the reset script for this chapter, you should see 14 JProCo employees showing in your Employee table.

Next we will add FOR XML RAW to view the result from the Employee table as an XML output using the raw mode. We have changed our Employee table result to output as XML RAW. Notice that every row of our XML RAW output is labeled "row" by default.
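
The query behind that output would look something like this (the Employee table and the FirstName, LastName, and LocationID columns are the ones discussed below; any other columns in the real table are omitted here):

SELECT FirstName, LastName, LocationID
FROM Employee
FOR XML RAW
-- Each employee row comes back as: <row FirstName="..." LastName="..." LocationID="..." />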


We next will add a root to our output. We will add the keyword ROOT to our existing code (see the example below) and then look at our revised output. We are adding the keyword ROOT in order to see a root node in our XML output. We now see the root node (a.k.a., the root element). Not only is our output more readable and organized, but this is considered "well-formed XML".
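
Adding the ROOT directive:

SELECT FirstName, LastName, LocationID
FROM Employee
FOR XML RAW, ROOT
-- The rows are now wrapped in a single <root> ... </root> element.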


Now let's put the data into elements by adding the ELEMENTS option (see the example below). Each employee now has three sub-elements under the top element, which is "row".
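
With the ELEMENTS option added:

SELECT FirstName, LastName, LocationID
FROM Employee
FOR XML RAW, ROOT, ELEMENTS
-- The columns now appear as child elements of each <row> instead of attributes.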


Each row has three child elements (FirstName, LastName, and LocationID). The exception is John Marshbank, who has only two. If we query the Employee table, we quickly see the reason: John Marshbank is the only employee with a NULL LocationID, so his record has just two sub-elements beneath the top element, "row".


Our mystery is solved: John Marshbank has just two data sub-elements because his LocationID is NULL. Suppose the program which needs to consume our result requires three data sub-elements, or suppose company policy specifies that each employee record must contain three data sub-elements. John Marshbank's record doesn't meet the criteria and would thus be in violation of the policy.

XSINIL

For fields in SQL Server which include a null value for some records but are populated with regular values in the other records, the tags for the null records will simply be missing. Often this is all right, as missing tags are presumed to be null. The XSINIL option allows you to force an XML tag to be present for every field in the query, even if the underlying data has NULL field values. Our next example will show us how to make a LocationID tag appear for John Marshbank.

If you require all tags to be present (even if they have no data), then you can specify the XSINIL option for your XML stream. The XSINIL option will force tags to be present for all fields of all records, including those which contain null values. Let's rerun our prior code and add the XSINIL option.
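
With XSINIL added to the ELEMENTS option:

SELECT FirstName, LastName, LocationID
FROM Employee
FOR XML RAW, ROOT, ELEMENTS XSINIL
-- A NULL LocationID now produces <LocationID xsi:nil="true" /> instead of a missing tag.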


We now see a third sub-element for John Marshbank. The LocationID tag is no longer missing. It is present and shows the value xsi:nil="true" in place of a LocationID. Our objective has been met:  John Marshbank's record now includes three data elements thanks to XSINIL.

Note: If you want to set up the sample JProCo database on your system you can watch this video. For this post you will want to run the SQLInteropChapter1.0Setup.sql script from Volume 5.


 

Monday, August 29, 2011

SQLServer : TABLE Valued Functions

SELECT * FROM GetCategoryProducts('Medium-stay')

Here GetCategoryProducts is not a view or a table but a function, one which can take input arguments as well.

 

See the complete post in the subscription email from http://blog.sqlauthority.com/ below.

 

Namaste!

  Anugrah Atreya

SQL SERVER - Tips from the SQL Joes 2 Pros Development Series - Table-Valued Functions - Day 26 of 35

Note: If you want to set up the sample JProCo database on your system you can watch this video. For this post you will want to run the SQLProgrammingChapter5.1Setup.sql script from Volume 4.

Table-Valued Functions

Scalar-valued functions return a single value. Table-valued functions return tabular result sets ("tabular" meaning like a table). Table-valued functions look a lot like views because they both show us a tabular result set based on an underlying query. Table-valued functions can be based on one or more base tables.

Creating and Implementing Table-Valued Functions

The body of a table-valued function will essentially contain a query.  Let's begin with a query containing four fields and all of the records from the CurrentProducts table.

This query will become the heart of a new table-valued function, GetAllProducts.  By placing the query within the set of parentheses after the keyword RETURN, we have the body of the function.  The RETURNS TABLE keyword specifies that the table-valued function GetAllProducts must return the result in the form of a table.

CREATE FUNCTION GetAllProducts()
RETURNS TABLE
AS
RETURN
(SELECT ProductID, ProductName, RetailPrice, Category
FROM CurrentProducts)
GO
Just how do you query a table-valued function?  The syntax is somewhat similar to how you would run a SELECT statement against a table or a view. All functions need to be called by using a set of parentheses with all required parameters inside them. If the function has no parameters (which is currently the case with GetAllProducts), then you will simply include an empty set of parentheses.
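
For example, calling the parameterless function:

SELECT * FROM GetAllProducts()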

To view all of the table-valued functions contained in the JProCo database from within the Object Explorer tree, traverse this path:

OE > Databases > JProCo > Programmability > Functions > Table-valued Functions

Views versus Parameterized Table-Valued Functions

Views and table-valued functions are both useful ways to see the result set for a pre-defined query. There is no way to pass a variable into a view and change the way it runs. Views are hard-coded and their criteria do not change. A table-valued function can display different results by passing values into its parameter(s) at runtime. Let's begin by selecting all 'No-Stay' records from the CurrentProducts table. We want to turn this query into a function and allow that function to pick the category.

We're going to enclose our query in parentheses, indent it, and then add some code to create a function. We will create the GetCategoryProducts function, which takes a @Category parameter. The query within the table-valued function will filter on the value passed in when the function is called (see the sketch below).
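
The resulting function would look something like this (the original post's figure isn't reproduced here, so the parameter's exact type is an assumption; VARCHAR(25) is used for illustration):

CREATE FUNCTION GetCategoryProducts(@Category VARCHAR(25))   -- parameter type assumed
RETURNS TABLE
AS
RETURN
(SELECT ProductID, ProductName, RetailPrice, Category
FROM CurrentProducts
WHERE Category = @Category)
GO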

Change the parameter value to 'Medium-stay' and run this query. The GetCategoryProducts function now returns 164 records.

SELECT * FROM GetCategoryProducts('Medium-stay')

Whenever you call a function, you must remember to use a set of parentheses and include the parameter(s) which the function expects. Let's demonstrate the error message which results from forgetting to include the needed parameter within the parentheses. SQL Server's error message tells us that our code doesn't match the function's parameter signature.  In other words, it reminds us that we need to specify a category.

SELECT * FROM GetCategoryProducts()

Msg 313, Level 16, State 3, Line 1
An insufficient number of arguments were supplied for the procedure or function GetCategoryProducts.

Note: If you want to set up the sample JProCo database on your system you can watch this video. For this post you will want to run the SQLProgrammingChapter8.1Setup.sql script from Volume 4.

Question 26

You need to create two functions that will each return a scalar result of the number of hours each user has logged for: 1) the current day, and 2) month to date.  You will pass in the user ID as a parameter value. What two things must you do?

1.     Create a function that returns a list of values representing the login times for a given user.

2.     Create a function that returns a list of values representing the people who have logged more hours than the current user has logged.

3.     Create a function that returns a numeric value representing the number of hours that a user has logged for the current day.

4.     Create a function that returns a numeric value representing the number of hours that a user has logged for the current month.


Reference:  Pinal Dave (http://blog.SQLAuthority.com)


 


Friday, August 5, 2011

FW: CQRS

We think we know everything about Design Patterns, but it's a continuously evolving world…

 

Found this interesting ppt on CQRS

http://www.slideshare.net/dennisdoomen/cqrs-and-event-sourcing-an-alternative-architecture-for-ddd?from=ss_embed

 

Feed: Martin Fowler
Posted on: Thursday, July 14, 2011 7:03 PM
Author: Martin Fowler
Subject: Bliki: CQRS

 

CQRS stands for Command Query Responsibility Segregation. It's a pattern that I first heard described by Greg Young. At its heart is a simple notion that you can use a different model to update information than the model you use to read information. This simple notion leads to some profound consequences for the design of information systems.

The mainstream approach people use for interacting with an information system is to treat it as a CRUD datastore. By this I mean that we have a mental model of some record structure where we can create new records, read records, update existing records, and delete records when we're done with them. In the simplest case, our interactions are all about storing and retrieving these records.

As our needs become more sophisticated we steadily move away from that model. We may want to look at the information in a different way to the record store, perhaps collapsing multiple records into one, or forming virtual records by combining information from different places. On the update side we may find validation rules that only allow certain combinations of data to be stored, or may even infer data to be stored that's different from the data we provide.

As this occurs we begin to see multiple representations of information. When users interact with the information they use various presentations of this information, each of which is a different representation. Developers typically build their own conceptual model which they use to manipulate the core elements of the model. If you're using a Domain Model, then this is usually the conceptual representation of the domain. You typically also make the persistent storage as close to the conceptual model as you can.

This structure of multiple layers of representation can get quite complicated, but when people do this they still resolve it down to a single conceptual representation which acts as a conceptual integration point between all the presentations.

The change that CQRS introduces is to split that conceptual model into separate models for update and display, which it refers to as Command and Query respectively following the vocabulary of CommandQuerySeparation. The rationale is that for many problems, particularly in more complicated domains, having the same conceptual model for commands and queries leads to a more complex model that does neither well.

By separate models we most commonly mean different object models, probably running in different logical processes, perhaps on separate hardware. A web example would see a user looking at a web page that's rendered using the query model. If they initiate a change, that change is routed to the separate command model for processing, and the resulting change is communicated to the query model to render the updated state.

There's room for considerable variation here. The in-memory models may share the same database, in which case the database acts as the communication between the two models. However they may also use separate databases, effectively making the query-side's database a real-time ReportingDatabase. In this case there needs to be some communication mechanism between the two models or their databases.

The two models might not be separate object models; it could be that the same objects have different interfaces for their command side and their query side, rather like views in relational databases. But usually when I hear of CQRS, they are clearly separate models.

CQRS naturally fits with some other architectural patterns.

  • As we move away from a single representation that we interact with via CRUD, we can easily move to a task-based UI.
  • Interacting with the command-model naturally falls into commands or events, which meshes well with Event Sourcing.
  • Having separate models raises questions about how hard to keep those models consistent, which raises the likelihood of using eventual consistency.
  • For many domains, much of the logic is needed when you're updating, so it may make sense to use EagerReadDerivation to simplify your query-side models.
  • CQRS is suited to complex domains, the kind that also benefit from Domain-Driven Design.

When to use it

Like any pattern, CQRS is useful in some places, but not in others. Many systems do fit a CRUD mental model, and so should be done in that style. CQRS is a significant mental leap for all concerned, so shouldn't be tackled unless the benefit is worth the jump.

In particular CQRS should only be used on specific portions of a system (a Bounded Context in DDD lingo) and not the system as a whole. In this way of thinking, each Bounded Context needs its own decisions on how it should be modeled.

So far I see benefits in two directions. The first is handling complexity: a complex domain may be easier to tackle by using CQRS. I do have to hedge this: usually there's enough overlap between the command and query sides that sharing a model is easier. Each domain has different characteristics.

The other main benefit is in handling high-performance applications. CQRS allows you to separate the load from reads and writes, allowing you to scale each independently. If your application sees a big disparity between reads and writes, this is very handy. Even without that, you can apply different optimization strategies to the two sides. An example of this is using different database access techniques for read and update.
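
As a loose, hypothetical illustration in SQL Server terms (the table and view names are invented; Fowler's article contains no code), writes can target normalized tables while reads are served from a denormalized, pre-joined view that can be optimized separately:

-- Write side: normalized tables the command model updates.
CREATE TABLE Orders     (OrderID INT PRIMARY KEY, CustomerID INT NOT NULL)
CREATE TABLE OrderLines (OrderID INT NOT NULL, ProductID INT NOT NULL, Quantity INT NOT NULL)
GO

-- Read side: a pre-aggregated view the query model selects from;
-- it could also be indexed, or maintained in a separate reporting database.
CREATE VIEW OrderSummary AS
SELECT o.OrderID, o.CustomerID, SUM(l.Quantity) AS TotalUnits
FROM Orders o
JOIN OrderLines l ON l.OrderID = o.OrderID
GROUP BY o.OrderID, o.CustomerID
GO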

If your domain isn't suited to CQRS, but you have demanding queries that add complexity or performance problems, remember that you can still use a ReportingDatabase. CQRS uses a separate model for all queries. With a reporting database you still use your main system for most queries, but offload the more demanding ones to the reporting database.

It's also true that we haven't seen enough uses of CQRS in the field yet to be confident that we understand its pros and cons. So while CQRS is a pattern I'd certainly want in my toolbox, I wouldn't keep it at the top.



Wednesday, August 3, 2011

Installing a Generic Text Printer

You can use the instructions below to install the Windows Generic / Text Only printer driver.

1.     Click Start | Control Panel | Printers and Faxes. If you do not see Control Panel after you click Start, click Start | Settings | Control Panel | Printers and Faxes.

2.     On the Printers and Faxes window, double-click Add a Printer.

3.     On the Add Printer Wizard window, click Next.

4.     On the Local or Network window, check Local printer attached to this computer and uncheck Automatically detect and install my Plug and Play printer. Click Next to continue.

5.     On the Select a Printer Port window, select FILE: (Print to File) from the Use the following port: drop-down list. Click Next to continue.

6.     On the Install Printer Software window, click Generic from the Manufacturer pane. Click Generic / Text Only from the Printers pane. Click Next to continue.

7.     On the Name Your Printer window, use the Generic / Text Only printer name in the Printer name: field. In the Do you want to use this printer as the default printer field, click No. Click Next to continue.

8.     On the Printer Sharing window, click Do not share this printer. Click Next to continue.

9.     On the Print Test Page window, click No. You do not want to print a test page. Click Next to continue.

10.  Click Finish to finish adding your printer.

11.  If Windows XP displays the Print to File window, click Cancel.

Namaste!

Anugrah A