Microsoft Siri Competitor from 2003 – Microsoft is light-years and decades ahead of their competitors.

This is an amazing article.

Yes, I fully agree: I think that Microsoft is light-years and decades ahead of their competitors.

What Microsoft offers in Access Data Projects is the same thing they offered in 1999, thirteen years ago, and I've yet to see anything like it from ANY other Microsoft development platform.

ACCESS JUST WORKS.  It's the best tool for most jobs. Right-click SORT and right-click FILTER are built in. I've never seen a single end-user application perform as well as what I can build with Microsoft Access, and I can build it three times faster than web developers can.

And it performs better.  Hundreds of thousands of records with unlimited drilldown.

Microsoft Access Data Projects kick ass!

"Microsoft has been beating the speech and text recognition drums for decades, only to see Siri, a quirky, accent-fussy smart agent (there’s a term you probably haven’t seen in a while) steal any and all thunder that Microsoft had even thought of years before, but not in a mobile context," Miller wrote, referring to Siri on the Apple iPhone 4S. Here, then-Microsoft Vice President of Automotive Technology Dick Brass uses Microsoft Voice Command on a Windows Mobile-based Pocket PC on Nov. 3, 2003. Photo: Getty Images / Microsoft



SqlClr rocks

Extending SQL Server Reporting Services with SQL CLR Table-Valued Functions


Ryan Ackley, Microsoft Corporation

March 2007

Applies to: Microsoft SQL Server 2005 Reporting Services

Summary: This article describes how to take advantage of SQL CLR table-valued functions to combine different types of data sources to create rich and exciting SQL Server Reporting Services reports. (13 printed pages)

A Word document version of this article (SSRSandTableValuedFunctions.docx) and the sample code associated with it (MSDNReportingServices_TVF.exe) are available for download.


Contents

  • Introduction
  • Extending Reporting Services
  • Using Table-Valued Functions
  • Using Table-Valued Functions as an Alternative to Data-Processing Extensions
  • Using Table-Valued Functions with SQL Server Reporting Services
  • Web Services and Table-Valued Functions
  • Conclusion

Introduction

A new feature of Microsoft SQL Server 2005 is its integration with the Microsoft .NET Framework common language runtime (CLR). This allows .NET Framework classes and functions to be incorporated into Transact-SQL statements and queries.

There are several different CLR integration mechanisms available. These include:

  • CLR user-defined functions (including table-valued functions).
  • CLR user-defined types.
  • CLR stored procedures.
  • CLR triggers.

This white paper shows how CLR table-valued functions can be used to bring report data from sources beyond the database into robust Reporting Services reports.

Extending Reporting Services

There are several ways to extend Reporting Services and integrate CLR functionality using the .NET Framework, including the following:

  • Delivery extensions—Deliver reports in response to an event
  • Rendering extensions—Display a report in a format other than one that is supported by Reporting Services
  • Security extensions—Provide your own authentication and authorization mechanism for viewing and managing reports
  • Data-processing extensions—Can be developed to process data from data sources that are not supported by Reporting Services
  • Custom report items—Are customized server controls that can be embedded in reports to provide additional functionality beyond the built-in controls

This paper discusses how to implement table-valued functions to process data as an alternative to using data-processing extensions. For more information about extending Reporting Services, see Reporting Services Extensions in SQL Server 2005 Books Online.

Using Table-Valued Functions

Table-valued functions are used to programmatically create a table at run time. The tables they create can then be used in Transact-SQL query statements like any other database table. When table-valued functions were introduced in SQL Server 2000, they could only be created using Transact-SQL. The following is an example of a table-valued function implemented in Transact-SQL.

CREATE FUNCTION EmployeeNames()
RETURNS @employeeNames TABLE (id int, name nvarchar(20))
AS
BEGIN
   INSERT @employeeNames VALUES(1, 'Ryan');
   INSERT @employeeNames VALUES(2, 'John');
   INSERT @employeeNames VALUES(3, 'Bob');
   RETURN;
END

The function can then be referenced from a select statement as if it were a table:

SELECT name from EmployeeNames() where id = 1

The query returns the following:

name
--------------------
Ryan

While this is useful, it is limited by the Transact-SQL language, which was designed for use with relational data. If you try to leave that domain, Transact-SQL becomes somewhat inflexible. In SQL Server 2005, you can now use your favorite .NET Framework language to create table-valued functions, opening up some amazing possibilities. Programmers can now abstract any information they want into a relational database table.

For example, the following code is a SQL Server 2005 table-valued function implemented in Microsoft Visual C# that returns a table created from the system event logs.

using System;
using System.Data.Sql;
using Microsoft.SqlServer.Server;
using System.Collections;
using System.Data.SqlTypes;
using System.Diagnostics;

public class TabularEventLog
{
    [SqlFunction(TableDefinition="logTime datetime,Message " +
        "nvarchar(4000),Category nvarchar(4000),InstanceId bigint",
        Name="ReadEventLog", FillRowMethodName = "FillRow")]
    public static IEnumerable InitMethod(String logname)
    {
        return new EventLog(logname, Environment.MachineName).Entries;
    }

    public static void FillRow(Object obj, out SqlDateTime timeWritten,
        out SqlChars message, out SqlChars category,
        out long instanceId)
    {
        EventLogEntry eventLogEntry = (EventLogEntry)obj;
        timeWritten = new SqlDateTime(eventLogEntry.TimeWritten);
        message = new SqlChars(eventLogEntry.Message);
        category = new SqlChars(eventLogEntry.Category);
        instanceId = eventLogEntry.InstanceId;
    }
}
The table-valued function is implemented as two static methods on the TabularEventLog class. The first method (InitMethod) is given the SqlFunction attribute to designate it as the entry point for the table-valued function. This method must return an IEnumerable or IEnumerator object, which contains the data that will be used to fill the return table. When executing the function, SQL Server iterates over the objects in the enumeration and uses each one to fill a row of data. It does this by passing the object to the second method in the class, FillRow, which converts the object into a row in the return table. This method is specified in the FillRowMethodName parameter of the SqlFunction attribute.

Additional metadata is defined in the arguments of the SqlFunction attribute. In the preceding example, the column names and types are defined, as well as the name of the return table in this attribute.

After deploying this function to an instance of SQL Server, you can run the following query to see the last 10 items in the application log.

SELECT TOP 10 T.logTime, T.Message, T.InstanceId
FROM dbo.ReadEventLog(N'Application') as T

The results are shown in Figure 1.


Figure 1. Results of the query

For more in-depth information, see CLR Table-Valued Functions in SQL Server 2005 Books Online.
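
Because SQL Server treats the function's output as an ordinary table, it composes with the rest of Transact-SQL. As a quick sketch (assuming dbo.ReadEventLog has been deployed as above), you could aggregate the last week of application-log entries by category:

SELECT T.Category, COUNT(*) AS Entries
FROM dbo.ReadEventLog(N'Application') AS T
WHERE T.logTime > DATEADD(day, -7, GETDATE())
GROUP BY T.Category
ORDER BY Entries DESC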

Using Table-Valued Functions as an Alternative to Data-Processing Extensions

Reporting Services data-processing extensions allow a data source to be modeled by implementing a set of ADO.NET interfaces. This is conceptually similar to how table-valued functions can be used with Reporting Services. Table-valued functions have important advantages over data-processing extensions.


First, table-valued functions can be much easier to implement than data-processing extensions. Only two methods have to be created to implement a table-valued function. You have to implement a number of interfaces for a data-processing extension. Also, the deployment model is more straightforward. Microsoft Visual Studio 2005 can automatically deploy a .NET Framework table-valued function to SQL Server, in which it becomes immediately available for use from Reporting Services. To deploy a data-processing extension, you must copy the assembly to the client and the report server, and edit XML configuration files in both places.

Another important advantage of a table-valued function is that it can be part of a join in the database in which it is attached. This means that relational data in SQL Server can be mixed and filtered with the custom data defined in the function before it is placed into the report. This is impossible with a data-processing extension, because Reporting Services does not support join queries between data sources.
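
To make the join advantage concrete, here is a minimal sketch; dbo.KnownEvents is a hypothetical lookup table of instance IDs an administrator cares about, and the filtering happens in the database before any rows reach the report:

SELECT E.logTime, E.Message, K.Severity
FROM dbo.ReadEventLog(N'Application') AS E
INNER JOIN dbo.KnownEvents AS K
   ON E.InstanceId = K.InstanceId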


To be fair, data-processing extensions are more powerful and flexible than table-valued functions in some respects. A table-valued function can model only a single database table, whereas a data-processing extension can model the equivalent of an entire database. Also, a data-processing extension functions as a fully custom data source, so it can have its own query language and connection syntax. SQL isn't always an ideal query language for every kind of data; for example, Reporting Services includes a data-processing extension for XML data that uses a query language similar to XPath. Data-processing extensions are useful when a developer wants full control over the data access code path.

Using Table-Valued Functions with SQL Server Reporting Services

There are three things that you must do before you can use table-valued functions with Reporting Services. First, SQL Server must be configured to allow CLR integration. Next, the table-valued function must be developed in Visual Studio. Finally, the function has to be deployed to an instance of SQL Server.

For SQL Server to allow CLR integration, a flag has to be set either by using the SQL Server Surface Area Configuration tool or by running a query.

To configure SQL Server to allow CLR integration

  1. Click the Start button, point to All Programs, point to Microsoft SQL Server 2005, point to Configuration Tools, and then click Surface Area Configuration.
  2. In the SQL Server 2005 Surface Area Configuration tool, click Surface Area Configuration for Features.
  3. Select your server instance, expand the Database Engine options, and then click CLR Integration.
  4. Select Enable CLR integration.

Alternatively, you can run the following query in SQL Server (this query requires ALTER SETTINGS permission).

USE master
GO
sp_configure 'show advanced options', 1;
GO
RECONFIGURE;
GO
sp_configure 'clr enabled', 1;
GO
RECONFIGURE;
GO
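
To confirm the change took effect, you can check sys.configurations; value_in_use should be 1 for the clr enabled option:

SELECT name, value_in_use
FROM sys.configurations
WHERE name = 'clr enabled';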

To develop a table-valued function

To develop a table-valued function, create a new SQL Server project in Visual Studio. To create a SQL Server project, open the New Project dialog box, expand Visual C#, and then select Database. You should be prompted for database connection information. For more information, see How to: Create a SQL Server Project in SQL Server 2005 Books Online. After you set up your database connection, you can write a table-valued function. Create an empty .cs file in your project with the name EventLog.cs, then copy the example function from the previous section and paste it into that file.

To deploy a table-valued function

To deploy, you must register the assembly that contains the function, and then the function itself, with a SQL Server instance. This is done using Transact-SQL commands. The following script registers the tvfEventLog assembly and the ReadEventLog function (the assembly path below is a placeholder; point it at your compiled DLL):

CREATE ASSEMBLY tvfEventLog
FROM 'C:\assemblies\tvfEventLog.dll'   -- placeholder path
WITH PERMISSION_SET = UNSAFE           -- reading the event log needs an elevated permission set; see Deployment Permissions
GO

CREATE FUNCTION ReadEventLog(@logname nvarchar(100))
   RETURNS TABLE (logTime datetime,Message nvarchar(4000),
      Category nvarchar(4000),InstanceId bigint)
   AS EXTERNAL NAME tvfEventLog.TabularEventLog.InitMethod
GO

Alternatively, you can deploy your assembly directly from Visual Studio by right-clicking the project in Solution Explorer and selecting the Deploy option. Visual Studio uses the SqlFunction attribute to determine the function signature and other necessary metadata automatically.

Deployment Permissions

SQL Server uses permission sets to run code securely in its hosted environment. When you create a database project in Visual Studio, the default permission set is SAFE. This is the only permission set that allows you to deploy directly from Visual Studio with no other configuration required. To give your assembly a permission set other than SAFE, you must give the assembly a strong name and perform other configuration steps before deploying it to the database.
There are three possible permission sets available when registering an assembly with SQL Server: SAFE, EXTERNAL ACCESS, and UNSAFE.
  • SAFE allows only internal computation and local data access from code in the assembly.
  • EXTERNAL ACCESS allows access to external system resources such as files, network resources, and the registry.
  • UNSAFE allows code in the assembly to run unrestricted.
To deploy your assembly with a permission set other than SAFE, you must follow some additional steps. First, an asymmetric key must be created from the assembly you want to register with SQL Server. Next, use that key to create a login. Finally, the appropriate permission set must be granted to this login. The following Transact-SQL statements apply these steps to grant the UNSAFE permission set to the assembly created from the tvfEventLog example in the previous section (the key and login names, and the path, are placeholders):

USE master
GO
CREATE ASYMMETRIC KEY tvfEventLogKey
   FROM EXECUTABLE FILE = 'C:\assemblies\tvfEventLog.dll'   -- placeholder path
GO
CREATE LOGIN tvfEventLogLogin FROM ASYMMETRIC KEY tvfEventLogKey
GO
GRANT UNSAFE ASSEMBLY TO tvfEventLogLogin
GO

This has to be done only once, not every time you deploy the assembly. For more information about using the different permission sets and registering assemblies with SQL Server, see CLR Integration Code Access Security and Creating an Assembly, both in SQL Server 2005 Books Online.

Event Log Report

After you deploy the table-valued function, a virtual table of the system event log entries for the computer is added to the database. Because SQL Server treats the function just like a table, it can be used seamlessly in Reporting Services.

After you deploy the assembly, use Visual Studio to create a new Reporting Services project. (If you aren’t familiar with creating a report using Reporting Services Report Designer, see the Reporting Services Tutorials in SQL Server 2005 Books Online.) Create a report with a SQL Server data source that connects to the same database in which the function was installed. Next, create a dataset that uses the following query:

SELECT TOP 10 T.logTime, T.Message, T.InstanceId
FROM dbo.ReadEventLog(N'Security') as T
WHERE T.Category = N'Logon/Logoff'

After the dataset is defined, add a table data region to the report layout and add the fields from the dataset to the table detail row. Finally, run the report. It will show the last 10 security events in the Logon/Logoff category for the local computer. Figure 2 shows an example of the report.

Figure 2. Example of the report

This simple example can be expanded to include other computer management and monitoring tasks. For example, a table-valued function can be created to parse Microsoft Internet Information Services (IIS) logs. Reporting Services can then be used to create a Web traffic monitoring application.
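
For example, suppose a CLR table-valued function named dbo.ReadIISLog parsed an IIS log file into rows (a hypothetical sketch; the function name, column names, and log path are illustrative only). A Reporting Services dataset could then aggregate traffic directly:

SELECT T.csUriStem, COUNT(*) AS Hits
FROM dbo.ReadIISLog(N'C:\inetpub\logs\ex070301.log') AS T
GROUP BY T.csUriStem
ORDER BY Hits DESC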

Web Services and Table-Valued Functions

One interesting feature of table-valued functions is the ability to pull data from Web services. This can be used to create unconventional reports. I will demonstrate how to use the Microsoft MapPoint Web Service in a table-valued function and join it with data in the AdventureWorks database, to plot spatial data onto a map and render the map in a report.

Using the MapPoint Web Service

It is necessary to obtain a free developer account from Microsoft before you start to develop against the MapPoint Web Service. You can obtain one by visiting the MapPoint Web Service customer service site. A good place to get information before you start to develop against the Web service is Getting Started with the MapPoint Web Service SDK.

You will need to add a Web Reference to your project using Visual Studio that points to the .wsdl file on the MapPoint staging server. For more information about configuring a Web Reference for the MapPoint Web Service, see Accessing the MapPoint Web Service SOAP API.

The MapPoint Web Service provides four services, each of which has its own SOAP endpoint:

  • The Common Service provides functionality that can be used by the other services. This is used to retrieve metadata and for utility functions.
  • The Find Service can be used to search for locations, find the latitude and longitude of an address (“geocoding”), and find points of interest near a location.
  • The Routing Service routes driving directions from one location to another.
  • The Render Service can be used to create a map image using location and routing information.

MapPoint Web Service Table-Valued Function

Ultimately, I want my table-valued function to use the MapPoint Web Service to perform the following tasks:

  1. Use the Find Service to find the latitude and longitude of an AdventureWorks bicycle shop.
  2. Use the Find Service to find the five closest automatic teller machines (ATMs) to this latitude and longitude.
  3. Use the Routing Service to find the route from the store location to the ATM.
  4. Use the Render Service to render this route on a map.

First, I must define a table-valued function called GetProximity. The following Transact-SQL code shows the signature of my table-valued function:

CREATE FUNCTION GetProximity(@city nvarchar(200), @state nvarchar(2),
   @count int, @entityTypeName nvarchar(200))
RETURNS TABLE (HitName nvarchar(200), HitAddress nvarchar(200),
   MapImage varbinary(max))

GetProximity takes a city name and a two-letter state code for the initial location, along with the number of entities to return and an entity-type name for which to search. It searches for the n closest entities, where n is specified by the count parameter and the entity type is specified by the entityTypeName parameter. It returns a table containing columns for the name, the address, and a map (a binary image) with directions to each entity.

The C# method signatures look like the following:

public static IEnumerable InitMap(string city, string state, int count,
   string entityTypeName)
public static void FillRow(Object obj, out SqlChars name, out SqlChars
   address, out SqlBinary map)

Note that the nvarchar Transact-SQL data type maps to the SqlChars .NET Framework data type, and the varbinary Transact-SQL data type maps to the SqlBinary .NET Framework data type. For a complete list of the mappings between data types, see the documentation for the System.Data.SqlTypes namespace.

In the InitMap method, I convert the city and state into a latitude and longitude. Next, I find all entities close to this coordinate. Finally, I find driving directions between the initial location and the found entity. The return value is an array of Route objects that encapsulate the driving directions.

public static IEnumerable InitMap(string city, string state, int count, string entityTypeName)
{
   FindServiceSoap find = new FindServiceSoap();
   find.PreAuthenticate = true;
   find.Credentials = new NetworkCredential(username, passwd);

   // Geocode the initial city and state
   FindAddressSpecification findSpec = new FindAddressSpecification();
   Address findAddr = new Address();
   findAddr.CountryRegion = "US";
   findAddr.Subdivision = state;
   findAddr.PrimaryCity = city;
   findSpec.InputAddress = findAddr;
   findSpec.DataSourceName = "MapPoint.NA";
   findSpec.Options = new FindOptions();
   findSpec.Options.ThresholdScore = 0.45;
   FindResults results = find.FindAddress(findSpec);

   if (results.NumberFound > 0)
   {
      // If the city and state exist, get the latitude and longitude
      Location startLocation = results.Results[0].FoundLocation;
      LatLong startPoint = startLocation.LatLong;

      // Find the nearby entities
      FindNearbySpecification findNearby = new FindNearbySpecification();
      FindFilter filter = new FindFilter();
      filter.EntityTypeName = entityTypeName;
      findNearby.Filter = filter;

      FindOptions options = new FindOptions();
      options.Range = new FindRange();
      // Set the count limit
      options.Range.Count = count;
      findNearby.Options = options;
      findNearby.DataSourceName = "NavTech.NA";
      findNearby.LatLong = startPoint;
      findNearby.Distance = 10.0;
      results = find.FindNearby(findNearby);

      Route[] routes = new Route[results.Results.Length];
      RouteServiceSoap routeService = new RouteServiceSoap();

      routeService.PreAuthenticate = true;
      routeService.Credentials = new NetworkCredential(username, passwd);

      RouteSpecification spec = new RouteSpecification();
      spec.DataSourceName = "MapPoint.NA";

      // Create the route to each entity
      spec.Segments = new SegmentSpecification[2];
      spec.Segments[0] = new SegmentSpecification();
      spec.Segments[0].Waypoint = new Waypoint();
      spec.Segments[0].Waypoint.Location = startLocation;
      spec.Segments[0].Waypoint.Name = "start";
      for (int x = 0; x < results.Results.Length; x++)
      {
         spec.Segments[1] = new SegmentSpecification();
         spec.Segments[1].Waypoint = new Waypoint();
         spec.Segments[1].Waypoint.Location = results.Results[x].FoundLocation;
         spec.Segments[1].Waypoint.Name = "end";
         routes[x] = routeService.CalculateRoute(spec);
      }
      return routes;
   }
   return null;
}

In the FillRow method, I use the Render service to convert each Route object into a map image. I then populate a row using this image and the location data of the entity.

public static void FillRow(Object obj, out SqlChars name, out SqlChars
   address, out SqlBinary map)
{
   Route route = (Route)obj;

   // Build the address string
   Address endAddress =
      route.Specification.Segments[1].Waypoint.Location.Address;
   string entityAddress = endAddress.AddressLine;
   string entityCity = endAddress.PrimaryCity;
   string entityState = endAddress.Subdivision;
   string entityName =
      route.Specification.Segments[1].Waypoint.Location.Entity.DisplayName;

   // Assign the values of two of the columns
   name = new SqlChars(entityName);
   address = new SqlChars(entityAddress + ' ' + entityCity + ' ' +
      entityState);

   // Get the view of the route
   ViewByHeightWidth view = route.Itinerary.View.ByHeightWidth;
   RenderServiceSoap renderService = new RenderServiceSoap();

   renderService.PreAuthenticate = true;
   renderService.Credentials = new NetworkCredential(username, passwd);

   // Render the map with the route
   MapSpecification mapSpec = new MapSpecification();
   mapSpec.DataSourceName = "MapPoint.NA";
   mapSpec.Views = new MapView[] { view };
   mapSpec.Route = route;

   // Assign the map image to the map column
   MapImage[] image = renderService.GetMap(mapSpec);
   map = new SqlBinary(image[0].MimeData.Bits);
}

Deploying and Debugging GetProximity

Deploying a table-valued function that uses Web services is more involved than in the previous example. The steps are as follows:

  1. Configure the project containing the GetProximity function to pre-generate the XML serialization assembly. When the .NET Framework makes Web service calls, it dynamically generates an assembly to handle the serialization and de-serialization of the SOAP XML. This presents a problem because the SQL Server CLR host does not allow assemblies to be dynamically loaded at run time. Therefore, the XML serialization assembly for the Web service calls must be generated at compile time and registered with SQL Server. To pre-generate this assembly from Visual Studio, from the Project menu, click Properties and select Build. Set Generate serialization assembly to On. The XML serialization DLL will be built with your project and added to the bin directory. It is given the name [ProjectName].XmlSerializers.dll.
  2. Add the System.Security.AllowPartiallyTrustedCallers attribute to the assembly. This can be done by adding the following line to AssemblyInfo.cs in the project:
    [assembly: System.Security.AllowPartiallyTrustedCallers]

    This allows the XML serialization assembly to talk to the main assembly containing the GetProximity function.

  3. Register the XML serialization DLL created in step 1 with SQL Server. The SAFE permission set will be sufficient.
  4. Create an asymmetric key for the DLL that contains the GetProximity table-valued function.
  5. Create a login for the asymmetric key and grant it the EXTERNAL ACCESS permission set.
  6. Register the assembly containing GetProximity with the EXTERNAL ACCESS permission set.
  7. Register the table-valued function, GetProximity.

Because there is a relatively long and complex dependency chain, I abandoned Visual Studio's deployment mechanism in favor of a Transact-SQL script, run as a post-build step, that performs deployment steps 3 through 7. It is included with the sample project; a sketch of its shape follows.
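
A minimal sketch of such a script, assuming the project builds MapPointTVF.dll (the assembly, key, login, class, and path names here are all placeholders; the statements are ordered so that dependencies are registered before the assemblies that need them):

-- Steps 4 and 5: create an asymmetric key from the main DLL,
-- then a login granted EXTERNAL ACCESS
USE master
GO
CREATE ASYMMETRIC KEY MapPointKey
   FROM EXECUTABLE FILE = 'C:\assemblies\MapPointTVF.dll'
GO
CREATE LOGIN MapPointLogin FROM ASYMMETRIC KEY MapPointKey
GO
GRANT EXTERNAL ACCESS ASSEMBLY TO MapPointLogin
GO

-- Step 6: register the main assembly with EXTERNAL ACCESS
USE AdventureWorks
GO
CREATE ASSEMBLY MapPointTVF
FROM 'C:\assemblies\MapPointTVF.dll'
WITH PERMISSION_SET = EXTERNAL_ACCESS
GO

-- Step 3: register the pre-generated XML serialization assembly;
-- SAFE is sufficient (registered after the main assembly it depends on)
CREATE ASSEMBLY [MapPointTVF.XmlSerializers]
FROM 'C:\assemblies\MapPointTVF.XmlSerializers.dll'
WITH PERMISSION_SET = SAFE
GO

-- Step 7: register the table-valued function itself
CREATE FUNCTION GetProximity(@city nvarchar(200), @state nvarchar(2),
   @count int, @entityTypeName nvarchar(200))
RETURNS TABLE (HitName nvarchar(200), HitAddress nvarchar(200),
   MapImage varbinary(max))
AS EXTERNAL NAME MapPointTVF.MapPointFunctions.InitMap
GO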

Debugging table-valued functions is very straightforward. Database projects have a Test Scripts directory. Scripts can be added to this directory and run directly from Visual Studio. After the function has been successfully deployed, you can create a Transact-SQL query that calls the function and step through the C# code for the function without leaving Visual Studio.

To test GetProximity, create a test script called “Test.sql” in the Test Scripts directory and add the following query to the file:

SELECT * FROM GetProximity('Redmond', 'WA', 5, 'SIC3578')

Notice the function arguments. I am centering my proximity query on the city of Redmond in the State of Washington so I used “Redmond” for the @city argument and “WA” for the @state argument. I provided the number 5 for the @count value, which is the number of entities I want returned. I also provided the value “SIC3578” for the @entityTypeName argument, which is the entity name for ATMs in the MapPoint data source I am using. For more information on MapPoint data sources and entity types, see MapPoint Data Sources.

To run the query in Visual Studio, right-click the Test.sql file in Solution Explorer and select Debug Script. You should get something similar to the following results in the Visual Studio Output window:

HitName             HitAddress                          MapImage
Woodgrove Bank      8502 160th Ave NE Redmond WA        <BINARY>
Woodgrove Bank      16025 NE 85th St Redmond WA         <BINARY>
Woodgrove Bank      16150 NE 85th St Redmond WA         <BINARY>
Woodgrove Bank      8867 161st Ave NE Redmond WA        <BINARY>
Woodgrove Bank      15600 Redmond Way Redmond WA        <BINARY>
No rows affected.
(5 row(s) returned)

To debug the GetProximity function, set a breakpoint in the C# code for the function and run the script again. The execution will halt at the specified point, and you can debug it as you would any other managed process.

Creating a Report Using the MapPoint Web Service

The AdventureWorks sample database that comes with SQL Server 2005 represents a fictitious bicycle and bicycle accessories manufacturer that sells to retail stores throughout the United States. For this example, Adventure Works Cycles has decided to stop accepting credit cards or checks. From now on, they would like all invoices to be paid by cash only. As a service to their customers during this transition, they will be creating a report that shows an address and a map to the five closest ATMs to their customers’ store locations. This is not a realistic scenario, but it serves the purpose of demonstrating how to join a traditional data source (SQL database) with a nontraditional data source (MapPoint Web Service) by using table-valued functions.

The first step in creating our report is to create a new Report Server project in Visual Studio and specify a data source. The data source for my report is the SQL Server 2005 AdventureWorks sample database, with the MapPoint table-valued function I created earlier installed in it. There is one dataset for the report. It contains fields for the store name, store city, store state, ATM name, ATM address, and ATM directions map.

For each store, we want to call GetProximity and get the five closest ATMs. SQL Server 2005 introduces the APPLY operator to handle this. It is a little different from a join because we join on the function arguments instead of the function results: the table-valued function is called once for each row returned by the left side of the APPLY, and a union of the function results can then be joined to the rest of the query. Following is the Transact-SQL query for the report dataset.

SELECT TOP(40) Sales.Store.Name, Person.Address.City,
   Person.StateProvince.StateProvinceCode, GetProximity_1.HitName,
   GetProximity_1.HitAddress, GetProximity_1.MapImage
   FROM Sales.CustomerAddress
      INNER JOIN Person.Address
         ON Sales.CustomerAddress.AddressID = Person.Address.AddressID
      INNER JOIN Sales.Store
         ON Sales.CustomerAddress.CustomerID = Sales.Store.CustomerID
      INNER JOIN Sales.StoreContact
         ON Sales.Store.CustomerID = Sales.StoreContact.CustomerID
      INNER JOIN Person.StateProvince
         ON Person.Address.StateProvinceID =
            Person.StateProvince.StateProvinceID
CROSS APPLY dbo.GetProximity(Person.Address.City,
   Person.StateProvince.StateProvinceCode, 5, 'SIC3578') AS GetProximity_1
Notice the use of CROSS APPLY to link the GetProximity function arguments with the other query data represented by Person.Address.City and Person.StateProvince.StateProvinceCode.

Note   To use the APPLY clause in a dataset query, you must use the generic query designer. The GUI-based query designer cannot display it graphically and will throw an exception.
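
To isolate the per-row semantics of CROSS APPLY from the larger query above, here is a minimal sketch; dbo.Cities is a hypothetical two-column table of cities and state codes:

-- GetProximity runs once per row produced by the left side
SELECT C.City, P.HitName, P.HitAddress
FROM dbo.Cities AS C
CROSS APPLY dbo.GetProximity(C.City, C.StateCode, 5, 'SIC3578') AS P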

My report design uses two nested lists. The inner list contains a text box for the ATM name and address and an image for the map. The image in my report is set to AutoSize, so that it will grow depending on the size of the image coming from the Web service. The outer list contains text boxes for the store name and location. The outer list is grouped by store name. An image of my report in Layout mode is shown in Figure 3.

Figure 3. Image of report in Layout mode

Figure 4 shows the rendered report with the maps to the ATM locations.


Figure 4. Rendered report, with maps to ATM locations


Conclusion

This paper has shown how table-valued functions in SQL Server can be used to extend the data access functionality of SQL Server Reporting Services. Table-valued functions give programmers and report designers the flexibility to enable reporting scenarios in which data is not stored directly in database tables.


About the author

Ryan Ackley is a Software Development Engineer with the Microsoft SQL Server Business Intelligence group. His main area is the data- and report-processing engine of SQL Server Reporting Services.

Cairo / WinFS was supposed to look like this.

FileTable is probably my favorite feature in SQL Server.

This will make it SO easy to manage my MP3s!!!

SQL Server 2012 FileTable – Part 1

January 3, 2012 | Posted by admin in Geniiius, SQL Server, SQL Server 2012

In a series of blog posts we will have a look at the new SQL Server 2012 table type called FileTable. This first post is a simple getting started: how to create a FileTable, how to dump files into its folder, and how to do some simple file manipulation.

A SQL Server FileTable is a special table where you can store directories and files. That's not special, I know; the special thing is that you can access these files and directories from Windows applications as if they were stored in the file system.

The data stored in the FileTable is exposed to Windows through a Windows share, and via this share it is possible to gain non-transactional access to the data.

If you create or change a file through the Windows share, the command is intercepted by a SQL Server component and the changes are applied to the corresponding data in the FileTable.

Read more about FileTables in BOL

Let's create a demo database with a FILESTREAM filegroup. I'll set NON_TRANSACTED_ACCESS to FULL, which gives me full non-transactional access to the share; this option can be changed later if you want. Here is the code:

CREATE DATABASE MyFileTableTest
ON PRIMARY
   ( NAME = N'MyFileTableTest', FILENAME = N'C:\FileTable\MyFileTableTest.mdf' ),
FILEGROUP FilestreamFG CONTAINS FILESTREAM
   ( NAME = MyFileStreamData, FILENAME = N'C:\FileTable\Data' )
LOG ON
   ( NAME = N'MyFileTableTest_Log', FILENAME = N'C:\FileTable\MyFileTableTest_log.ldf' )
WITH FILESTREAM
   ( NON_TRANSACTED_ACCESS = FULL, DIRECTORY_NAME = N'FileTable' )
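
To double-check the database-level settings, you can query sys.database_filestream_options; non_transacted_access_desc should come back as FULL:

SELECT DB_NAME(database_id) AS db_name,
       non_transacted_access_desc,
       directory_name
FROM sys.database_filestream_options
WHERE database_id = DB_ID('MyFileTableTest');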


That's the database done; now let's create the magic table, which is simply done with the "AS FileTable" keyword. When creating a FileTable there are only two options you can change; everything else is out of the box. Let's have a look at how I created the table:

USE MyFileTableTest
GO

CREATE TABLE MyDocumentStore AS FileTable
WITH
(
   FileTable_Directory = 'MyDocumentStore',
   FileTable_Collate_Filename = database_default
);
GO
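
If you don't want to hunt for the share in Explorer, the FileTableRootPath() function returns the UNC path that backs the table (shown here for the table we just created):

-- Returns something like \\MACHINE\INSTANCE\FileTable\MyDocumentStore
SELECT FileTableRootPath(N'dbo.MyDocumentStore');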


If we take a look at the GUI after creating the table, we can see that the table is created under a new folder called FileTables.


That is not the only new thing: if we right-click the table we just created, we will see an option called "Explore FileTable Directory". Click that, and a window opens with the content of your FileTable. If you drop a file into this folder it will be accessible from SSMS with a simple SELECT from MyDocumentStore.

Here is how the folder looks:


And if you query the table, you get back one row per file or directory. A query like the sketch below shows just a few of the columns of the table.
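
Here is one such sketch; stream_id, name, file_type, cached_file_size, and is_directory are among the fixed columns every FileTable exposes:

SELECT stream_id, name, file_type, cached_file_size, is_directory
FROM MyDocumentStore;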


Now you can do file manipulation from SSMS; something like renaming a file is just an UPDATE statement. Easy peasy.

UPDATE MyDocumentStore
SET name = 'Geniiius.txt'
WHERE stream_id = '05A9F338-CF32-E111-BB43-080027F9EB26'
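
Deleting works the same way; removing a row from the FileTable removes the corresponding file from the share (a sketch, reusing the file renamed above):

DELETE FROM MyDocumentStore
WHERE name = 'Geniiius.txt';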



This is the end of the first blog post in this series. Stay tuned; new posts will be online very soon.

ConvertToBase – great T-SQL UDF

I've had to convert from base 10 to base 2, and from base 2 back to base 10, hundreds of times in my life.

Finally I was googling for a better answer, and I found this 🙂


CREATE FUNCTION dbo.ConvertToBase
(
    @value AS BIGINT,
    @base AS INT
)
RETURNS VARCHAR(MAX)
AS
BEGIN
    -- some variables
    DECLARE @characters CHAR(36),
            @result VARCHAR(MAX);

    -- the encoding string and the default result
    SELECT @characters = '0123456789abcdefghijklmnopqrstuvwxyz',
           @result = '';

    -- make sure it's something we can encode. you can't have
    -- base 1, but if we extended the length of our @characters
    -- string, we could have greater than base 36
    IF @value < 0 OR @base < 2 OR @base > 36 RETURN NULL;

    -- until the value is completely converted, get the modulus
    -- of the value and prepend it to the result string, then
    -- divide the value by the base and truncate the remainder
    WHILE @value > 0
        SELECT @result = SUBSTRING(@characters, @value % @base + 1, 1) + @result,
               @value = @value / @base;

    -- return our result
    RETURN @result;
END
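
A quick sanity check, assuming the function is created as dbo.ConvertToBase as above:

-- 255 is 11111111 in base 2 and ff in base 16
SELECT dbo.ConvertToBase(255, 2) AS base2,
       dbo.ConvertToBase(255, 16) AS base16;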


Is ADP Deprecated?

I hear a lot of MVPs claim that ADP is deprecated. It's silly that there's a religious war between SQL Server architects and developers who continue to be strangled by Jet.

So I thought it was time to explain... straight from the horse's mouth:

Recommendations for Access projects in Access 2010

We continue to support creating and working with .adp files in Access 2010. However, we recommend that developers use the SQL Server design tools provided in SQL Server Management Studio to design database objects such as tables, views and stored procedures, and then use the Access interface to design the forms, reports, macros and modules that connect to those objects. The tools provided by SQL Server provide the most powerful interface for creating SQL Server objects.


support (verb, used with object)


to bear or hold up (a load, mass, structure, part, etc.); serve as a foundation for.

to sustain or withstand (weight, pressure, strain, etc.) without giving way; serve as a prop for.

to undergo or endure, especially with patience or submission; tolerate.

to sustain (a person, the mind, spirits, courage, etc.) under trial or affliction: They supported him throughout his ordeal.

to maintain (a person, family, establishment, institution, etc.) by supplying with things necessary to existence; provide for: to support a family.

SQL / CLR is the best thing since sliced bread

Comparing Simple Efficiencies: T-SQL UDF vs SQLCLR UDF for Splitting Strings

Donabel Santos (twitter (@sqlbelle) | blog) – February 21, 2011

Recap of T-SQL vs SQLCLR (pseudo) Debate

There have been numerous posts about the benefits of using T-SQL vs SQLCLR, and vice versa. And we all know the mantra: T-SQL for data access, SQLCLR for computationally intensive tasks.

SQL Server is a relational database and works best with set-based operations and direct data access. If you need to do straightforward INSERTs, UPDATEs, DELETEs, and SELECTs, stick with T-SQL, unless you want to do lots of overtime trying to figure out why your data access suddenly became slow.

So when do we use SQLCLR instead of T-SQL?

There are already lots of discussions, even books, about SQLCLR advantages. I will refer you to them (check out the references section), but I will provide a very brief list of scenarios when you might want to consider SQLCLR:

  • Interaction outside SQL Server: if you need to work with the OS, files, the registry, etc.
  • Validation: if you need to validate phone numbers, email addresses, postal codes, or any other patterns
  • Complex computations: if you need running aggregates, complex math equations (what is the square root of x to the nth power divided by 2 * pi?), or maybe financial analytics
  • Custom data types: if you need to create your own business-specific data types. We also have to remember that XML and GEOGRAPHY/GEOMETRY are great additions to SQL Server, and these are technically CLR data types.

The debate between T-SQL vs SQLCLR sometimes can be taken out of context. I call it a pseudo debate, because sometimes it’s made to seem that SQLCLR is meant to replace T-SQL, when it’s not. SQLCLR is *not* evil. It’s just another tool to help you do your job. It *can* become evil though, if you misuse it.

While there are some overlaps in scenarios where you can use both, these two should really be complementary. Where T-SQL is slow or lacking, SQLCLR should at least be considered and tested. Again, consider the right tool for the right job.

SQLCLR Hesitation

In cases where some projects would clearly benefit from SQLCLR, project teams still hesitate to use it. Some of the common reasons I've heard that cause SQL Server DBAs and developers to hesitate are:

  • It is unfamiliar ground. Let's face it: not every SQL Server professional knows how to program in .NET. It's becoming increasingly popular, but we still have traditional database professionals.
  • It compromises security. Well, yes and no. If you code it properly and deploy it properly, it should be as secure as your most secure database objects. However, if you take the easy route, such as registering everything as UNSAFE just to get deployment working, you give up much of that protection.

Examining Simple Efficiencies

Let's take a deep dive into a specific scenario and analyze simple performance differences between a T-SQL user-defined function (UDF) and a SQLCLR UDF. I emphasize simple differences for now, because sometimes that's all it takes to convince someone :)


This is one scenario where, technically, either the T-SQL way or the SQLCLR way will work.

You have an incoming string that contains a character-delimited set of fields. Let's assume it's a pipe-delimited set of strings for employees, like this:

EMP1~Doe~John~F.~Supervisor|EMP2~Smith~Mary~~Manager|EMP3~Cargan~Jeff~~|EMP4~Ricks~Minnie~~Accountant|EMP8~Lowe~Bill~D.~Accountant|EMP10~Crowe~Laura~G.~Accountant|EMP5~Hall~Suzy~~Accountant

This is a legacy system, and you have no control over how the incoming data is formatted. The legacy system will also need to call a stored procedure, which in turn will parse the strings and output a table.

Of course you can argue there are other options to parse the file, such as pushing this functionality to an intermediate app, but for now let's focus on the two feasible options to parse the strings and output a table using SQL Server:

  • Transact-SQL UDF
  • SQLCLR UDF

Using T-SQL UDF – dbo.fn_Split

What we can do is create a Table Valued UDF that parses the incoming string based on a delimiter, and output a table.

Here's one way we can do it; this one I've adapted from an RDACorp post. There are also a few other variations; have a look at this SQLTeam Forum link.

IF OBJECT_ID('fn_Split') IS NOT NULL
   DROP FUNCTION dbo.fn_Split
GO
CREATE FUNCTION dbo.fn_Split
(
   @items NVARCHAR(max),
   @mainDelimiter CHAR(1) = '|'
)
RETURNS @itemTable TABLE
(
   [item] VARCHAR(4000)
)
AS
BEGIN
   DECLARE
      @tempItemList NVARCHAR(max),
      @i int,
      @item NVARCHAR(max)

   SET @tempItemList = @items

   -- strip all single quotes
   SET @tempItemList = REPLACE(@tempItemList, '''', '')

   -- get index where our delimiter was found
   SET @i = CHARINDEX(@mainDelimiter, @tempItemList)

   -- loop while all the characters in the list have not been traversed yet
   WHILE (LEN(@tempItemList) > 0)
   BEGIN
      IF @i = 0
         -- if there are no delimiters, then this is the only item in our list
         SET @item = LTRIM(RTRIM(@tempItemList))
      ELSE
         -- get the first word (from the left) less the delimiter character
         SET @item = LTRIM(RTRIM(LEFT(@tempItemList, @i - 1)))

      INSERT INTO @itemTable([item])
      VALUES(LTRIM(RTRIM(@item)))

      IF @i = 0
         SET @tempItemList = ''
      ELSE
         -- remove the word we just added to the table
         SET @tempItemList = LTRIM(RTRIM(RIGHT(@tempItemList, LEN(@tempItemList) - @i)))

      -- lather, rinse, repeat
      SET @i = CHARINDEX(@mainDelimiter, @tempItemList)
   END
   RETURN;
END
GO
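
A quick smoke test of the function (the second argument takes its '|' default):

SELECT item
FROM dbo.fn_Split('EMP1~Doe~John~F.~Supervisor|EMP2~Smith~Mary~~Manager', DEFAULT);
-- returns two rows, one per employee record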

Using SQLCLR UDF – dbo.fn_CLRSplit

For the SQLCLR UDF version, again, there are a few variations. Normally we would use the String.Split method, or even Regex, to split strings based on character delimiters (please see the links below for additional samples).

In my case I will be borrowing Adam Machanic’s fast and scalable string splitting UDF. If you’re interested in his code, it’s here: Faster, More Scalable SQLCLR String Splitting.
In this article, Adam also discusses differences between different ways of parsing and splitting strings in the SQLCLR world – differences between the String.Split, Regex, and his way which traverses characters one by one. He does a great job explaining why and where one breaks down, and why his particular version is more scalable.


The tests for this experiment are really simple: take a string that contains n items, pass it to both the T-SQL and SQLCLR UDFs, measure CPU time and elapsed time, and compare. Ours is a simple comparison of a single metric, time, which is sometimes the most effective metric when making the case for SQLCLR to business users, and sometimes even to DBAs.

-- start with a string of 10 items
DECLARE @str10 VARCHAR(MAX) = 'EMP1~Doe~John~F.~Supervisor|EMP2~Smith~Mary~~Manager|EMP3~Cargan~Jeff~~|EMP4~Ricks~Minnie~~Accountant|EMP8~Lowe~Bill~D.~Accountant|EMP10~Crowe~Laura~G.~Accountant|EMP5~Hall~Suzy~~Accountant|EMP5~Hall~Suzy~~Accountant|EMP5~Hall~Suzy~~Accountant|EMP5~Hall~Suzy~~Accountant|'

-- this is how I am generating the massive strings to be split
DECLARE @str100 VARCHAR(MAX) = REPLICATE(@str10, 10)
DECLARE @str1000 VARCHAR(MAX) = REPLICATE(@str10, 100)
DECLARE @str10000 VARCHAR(MAX) = REPLICATE(@str10, 1000)
DECLARE @str100000 VARCHAR(MAX) = REPLICATE(@str10, 10000)

SET STATISTICS TIME ON
SET STATISTICS IO ON

-- this I have repeated for each of the strings above
-- lather, rinse, repeat
SELECT * FROM dbo.fn_Split(@str100000, DEFAULT)
SELECT * FROM dbo.fn_CLRSplit(@str100000, DEFAULT)

SET STATISTICS TIME OFF
SET STATISTICS IO OFF
GO


They say pictures paint a thousand words, so I’ll let these graphs do the initial talking.

There is clearly a performance difference between the T-SQL way and the SQLCLR way. It's not so noticeable on a small scale, and perhaps some of you would rather put up with it than turn on SQLCLR. On a bigger scale, however, the gap widens dramatically. My T-SQL UDF noticeably slowed down at the 1,000-item string and died at the 100,000-item string (or maybe I was just impatient and cancelled the query altogether).

Right now we're just considering simple efficiencies: CPU and execution times. We haven't even considered I/O, memory clerks, etc. And clearly, SQLCLR offers the more efficient way to complete the task at hand.

Oracle's Downward Spiral

I’m SO glad I chose to base my career on SQL Server.  It’s a LOT more affordable than Oracle.

I ran into this quote, had to share:

In Oracle’s core competency, databases, Microsoft’s SQL Server scored significantly higher satisfaction ratings among survey respondents. More than 80 percent of participants gave Microsoft SQL Server “excellent” or “very good” ratings, compared to the 43 percent that gave the Oracle DB an “excellent” or “very good” rating.