Sunday, December 14, 2014

Mimicking PerformancePoint Filter Connection Formulas using the Query String (URL) Filter

In this post, I'm going to show you an easy trick using SharePoint's Query String (URL) Filter to create complex and dynamic dashboard filter connections for PerformancePoint Scorecards and Reports.

If you’ve built any PerformancePoint dashboards, you’ve undoubtedly created and connected PerformancePoint Filters to scorecards and reports (Analytic Grids, Analytic Charts, Reporting Services Reports, etc.) to create powerful and interactive dashboards.

A simple (and maybe not so powerful) example, which I will build upon, would be creating a Customer Country filter using the AdventureWorks SSAS multi-dimensional database and connecting it to an Analytic Grid report summarizing data based on the Customer dimension's Customer Geography hierarchy, as in the figure below:

PerformancePoint Filter and Analytic Grid
Figure 1: A Not So Powerful Example of Using a PerformancePoint Filter
Slightly less common is utilizing a PerformancePoint Filter’s Connection Formula to filter a report in a more sophisticated manner using MDX functions (see Extend PerformancePoint dashboards by using MDX queries for a thorough explanation of Connection Formulas). For example, you could add the following simple MDX function as a Connection Formula to modify the filter connection created above:
<<UniqueName>>.Children
This formula would apply the Children MDX function to the value of the filter and when applied to the Customer Geography hierarchy, would summarize the measures by State-Province (i.e., the child level of Country).

Analytic Grid results using PerformancePoint Filter with a Connection Formula
Figure 2: A More Powerful Example Using a PerformancePoint Filter with a Connection Formula

The Query String (URL) Filter


Another filter that can be useful for PerformancePoint Dashboards is SharePoint's Query String (URL) Filter. This filter allows you to access a query string parameter and pass its value to a connected PerformancePoint Scorecard or Report. I use Query String (URL) Filters for a couple of different scenarios: one is when I want to link dashboards based on an attribute hierarchy by having a hyperlink on a dashboard (e.g., in an embedded SSRS report) which navigates to, for example, a dashboard focusing on a particular parent or child geographic area. The other scenario is when I have a parameter with hundreds or thousands of options (e.g., counties) and instead of populating a PerformancePoint Filter, I use a custom search box control to set a query string argument which is used to filter the dashboard. There are certainly drawbacks to this filter, like having to reload a page or having an attribute member value exposed in a URL, but it is also very powerful and it can open up new possibilities in your dashboard design.

Using this filter is fairly straightforward: you add the filter to your dashboard and specify the query string parameter name.

Query String (URL) Web Part Editor
Figure 3: Query String (URL) Filter Web Part Editor
By passing an attribute member value (e.g., [Customer].[Customer Geography].[Country].&[United States]) in the “area” query string parameter, we can accomplish the same thing that the PerformancePoint Filter did without the Connection Formula. The URL would look something like the following, where "area" is the name of the query string parameter (make sure to URL encode the parameter value, otherwise it may not work):
http://servername/site/subsite/Dashboards/AdventureWorks/Page1.aspx?area=%5BCustomer%5D.%5BCustomer+Geography%5D.%5BCountry%5D.%26%5BUnited+States%5D
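If you are building these links in code (e.g., constructing hyperlinks in an SSRS report or a web page), the encoding is easy to script. Here is a minimal PowerShell sketch that produces an encoded link equivalent to the one above; the page URL is the placeholder from the example, and UrlEncode handles the brackets, ampersand and spaces:

Add-Type -AssemblyName System.Web

# Placeholder dashboard page and the attribute member to pass (from the example above)
$pageUrl   = "http://servername/site/subsite/Dashboards/AdventureWorks/Page1.aspx"
$areaValue = "[Customer].[Customer Geography].[Country].&[United States]"

# UrlEncode escapes the brackets and ampersand and converts spaces to '+'
$encoded = [System.Web.HttpUtility]::UrlEncode($areaValue)

# Build the final link (the same approach works when the value ends in an MDX function such as .Children)
"{0}?area={1}" -f $pageUrl, $encoded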
Using this type of filter opens up the possibility of dynamically controlling what is being summarized in a report via the URL. The problem with this approach, however, occurs when you want to create a more sophisticated filter using something similar to the PerformancePoint Filter's Connection Formula. There is no interface to enter a Connection Formula for a Query String (URL) Filter, so it can't be done. At least that's what I thought until recently when I started playing with the query string parameter value and discovered that I could apply MDX functions directly in the URL.
For example, we can use the same Children MDX function used above directly in the URL, passing [Customer].[Customer Geography].[Country].&[United States].Children as the query string value. The URL encoded query string would look like the following:
http://servername/site/subsite/Dashboards/AdventureWorks/Page1.aspx?area=%5BCustomer%5D.%5BCustomer+Geography%5D.%5BCountry%5D.%26%5BUnited+States%5D.Children
This allows us to produce the exact same results as when we used the PerformancePoint Filter with the Children MDX function in the Connection Formula, as shown below:
Producing the Same Results Using a PerformancePoint Filter and a Query String (URL) Filter.
Figure 4: Producing the Same Results Using a PerformancePoint Filter (left) and a Query String (URL) Filter (right)
With this knowledge, I began to try slightly more complicated MDX formulas. For example, I could pass TopCount({Descendants([Customer].[Customer Geography].[Country].&[United States], 2)}, 10, [Measures].[Internet Order Count]) to get the top 10 U.S. cities based on Internet Order Count. The URL encoded string would look like:
http://servername/site/subsite/Dashboards/AdventureWorks/Page1.aspx?area=TopCount%28%7BDescendants%28%5BCustomer%5D.%5BCustomer+Geography%5D.%5BCountry%5D.%26%5BUnited+States%5D%2C+2%29%7D%2C+10%2C+%5BMeasures%5D.%5BInternet+Order+Count%5D%29
This would produce something similar to the following:

PerformancePoint Analytic Grid using a Query String (URL) Filter with a more complicated MDX function
Figure 5: Top 10 U.S. Cities Based on Internet Order Count (right side)

By this point you can see that there is great potential in using a Query String (URL) Filter with MDX functions. Using this trick, you can create very dynamic dashboards that are controlled through query string parameters instead of having to hard code a Connection Formula into a PerformancePoint Filter. Just remember, as any time you pass query string parameters in a URL, make sure you understand how a savvy user might modify them to gain access to information that you did not intend to expose.

Sunday, November 23, 2014

SSRS: Filling in Cells of Reports Exported to Excel (Part 2)

In my last post, SSRS: Filling in Cells of Reports Exported to Excel (Part 1), I showed how to fill in the cells of a block formatted report that has been exported to Excel. In this post we'll do the same thing for a stepped formatted report.

A stepped format displays row groups in staggered columns as in the example below:



Multi-column stepped format SSRS report example
Multi-Column Stepped Format
When this report is exported to Excel, it will look very similar to the report layout using merged cells to accomplish the formatting. To fill in the empty cells, follow the steps below:

NOTE: In order for the selection of blank cells in the steps below to work, you may need to enter "=Nothing" as the expression in the row group text boxes highlighted in red below:


Highlighted text boxes in SSRS report indicating where to place "=Nothing" formula.
Stepped Format Report Design
  • Click the Enable Editing button (if necessary)
  • Select the cells for the entire worksheet
  • From the Home tab, click Merge & Center -> Unmerge Cells
  • You may want to widen the columns that contain the row groups (i.e., row labels), so that you can see the entire contents of the cells
NOTE: Here is where you may want to re-evaluate your report design. If Excel has had to create merged column cells in your table to accommodate lining up header images, report titles, etc., you may want to try to avoid that. By doing things like conditionally formatting header images to not export to Excel, stretching title text boxes to be the same width as the tablix, etc., the report will export in a more Excel friendly format.
  • Select just the range of cells that represent the row groups (i.e., the row labels) of the tablix
  • From the Home tab, click Find & Select -> Go to Special… -> Blanks
At this point just the blank cells that represent the row groups should be selected as in the image below:

Highlighting blank cells in SSRS report exported to Excel
With Blank Cells Selected
  • Holding down the Ctrl key, select a highlighted blank cell
  • Click in the formula box and type =IF(LEN(A2), A2, 1/0) where A2 is the cell to the left of your selected cell (in my case B2). This formula states that if the cell to the left of the selected cell is populated with text, use that value; otherwise fill the cell with 1/0, which will produce a divide-by-zero (#DIV/0!) error
  • Holding down the Ctrl key, hit the Enter key (this will copy the formula to all selected cells)
At this point you will have some cells that have been filled in with the correct information and others that show a #DIV/0! error.

Populating blank cells with formula in SSRS report exported to Excel
With Cells Filled In
  • With the cells still selected, click Find & Select -> Go to Special… -> Formulas -> Errors (i.e., uncheck the Numbers, Text and Logicals checkboxes)
Now only the #DIV/0! error cells should be selected, as in the image below:


Selecting error cells in SSRS report exported to Excel
With Error Cells Selected
  • Holding down the Ctrl key, select a highlighted blank cell
  • Click in the formula box and type the equals sign (i.e., =) over the existing formula 
  • Click on the cell above the selected cell (e.g., if your selected cell is B5, click on B4)
  • Holding down the Ctrl key, hit the Enter key (this will copy the formula to all selected cells)
All of the cells should now be filled in with the appropriate data, as shown below:


Populating error cells with formula in SSRS report exported to Excel
With All Cells Filled In
  • Again, select just the range of cells that represent the row groups (i.e., the row labels)
  • From the Home tab, click Copy
  • Click Paste -> Values to replace the formulas with the actual values
You should now have a usable spreadsheet that you can sort, copy and paste rows into other worksheets, etc. Although it may have seemed like many steps to accomplish this task, once you've gone through it a couple of times, it will become a fairly simple Excel trick that you can add to your arsenal.
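As an aside, if you clean up these exports often, the same unmerge-and-fill logic can be scripted. The following is a rough PowerShell sketch using the Excel COM object; the workbook path and the row group column range (B2:C...) are assumptions you would adjust for your own report. It mirrors the manual steps above: unmerge everything, then fill each blank row group cell from the cell to its left if that cell has text, otherwise from the cell above.

$excel = New-Object -ComObject Excel.Application
$excel.Visible = $false

$workbook  = $excel.Workbooks.Open("C:\Exports\SteppedReport.xlsx")   # assumed path
$worksheet = $workbook.Worksheets.Item(1)

# Unmerge every cell (equivalent to Merge & Center -> Unmerge Cells)
$worksheet.UsedRange.UnMerge()

# Assumed: the row groups occupy columns B and C and the data starts on row 2
$lastRow = $worksheet.UsedRange.Rows.Count
$rowGroupRange = $worksheet.Range("B2:C$lastRow")

foreach ($cell in $rowGroupRange.Cells) {
    if ([string]::IsNullOrEmpty($cell.Text)) {
        $left  = $worksheet.Cells.Item($cell.Row, $cell.Column - 1)
        $above = $worksheet.Cells.Item($cell.Row - 1, $cell.Column)
        # Prefer the cell to the left when it has text, otherwise take the value from above
        if (-not [string]::IsNullOrEmpty($left.Text)) {
            $cell.Value2 = $left.Value2
        }
        else {
            $cell.Value2 = $above.Value2
        }
    }
}

$workbook.Save()
$workbook.Close()
$excel.Quit()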

Sunday, November 9, 2014

SSRS: Filling in Cells of Reports Exported to Excel (Part 1)

Designing reports that have multiple row groups is a relatively common requirement in Reporting Services. This often means implementing either a block or multi-column stepped formatted report to display those groups (i.e., this post does not apply to a stepped format where all row groups appear in a single column using padding or spaces to offset the groups). The problem with these formats occurs once the report has been exported to Excel. To maintain the look of the report, Excel merges the cells of those row groups; however, to work with the data in Excel (e.g. sort, copy & paste selections into other worksheets, etc.) you’ll need to eventually unmerge those cells. Unfortunately, Excel doesn’t automatically fill in those unmerged cells with data where it exists. In part 1 of this 2-part post, I show how this can be accomplished for a block formatted report that has been exported to Excel.

A block format displays row groups in columns side by side as in the example below:



Block Format

When this report is exported to Excel, it will look very similar, using merged cells to accomplish the formatting. To fill in the cells, do the following after exporting the report to Excel:

  • Click the Enable Editing button (if necessary)
  • Select the cells for the entire worksheet
  • From the Home tab, click Merge & Center -> Unmerge Cells

NOTE: Here is where you may want to re-evaluate your report design. If Excel has had to create merged column cells in your table to accommodate lining up header images, report titles, etc., you may want to try to prevent that. By doing things like conditionally formatting header images to not export to Excel, stretching title textboxes to be the same width as the tablix, etc., the report will export in a more Excel friendly format.
  • Select just the range of cells that represent the row groups (i.e., the row labels) of the tablix
  • From the Home tab, click Find & Select -> Go to Special… -> Blanks
At this point just the blank cells that represent the row groups should be selected as in the image below:

Blank Cells Selected
  • Holding down the Ctrl key, select a highlighted blank cell
  • Click in the formula box and type the equals sign (i.e., =)
  • Click on the cell above the selected cell (e.g., if your selected cell is C3, click on C2)
  • Holding down the Ctrl key, hit the Enter key (this will copy the formula to all selected cells)
With Cells Filled In
  • Again, select just the range of cells that represent the row groups (i.e., the row labels)
  • From the Home tab, click Copy
  • Click Paste -> Values to replace the formulas with the actual values
You should now have a usable spreadsheet that you can sort or copy and paste rows into other worksheets, etc.

In part 2 of this post, SSRS: Filling in Cells of Reports Exported to Excel (Part 2), I'll show how to do the same thing for a stepped formatted report, which is slightly more complicated.

Saturday, October 18, 2014

MDX: Top N by Group with All Others

When summarizing data using MDX, it’s a common requirement to find the top N by group (i.e., the top N tuples of one set for each member of another set). For example, you may need to find the top N brands by state or the top N products by calendar year based on sales count or amount. In MDX, this is accomplished using a combination of the Generate and TopCount functions. As an example, the following returns the top 5 products for each region based on sales amount:


WITH SET [Top N Products] AS
    Generate(
        [Sales Territory].[Sales Territory Region].Children
        , TopCount(
            [Sales Territory].[Sales Territory Region].CurrentMember
                * [Product].[Product Categories].[Product].MEMBERS
            , 5
            , ( [Measures].[Internet Sales Amount] )
        )
    )
SELECT { [Measures].[Internet Sales Amount] } ON COLUMNS,
    NON EMPTY { [Top N Products] } ON ROWS
FROM [Adventure Works]


This commonly available query uses the Generate function to iterate over the region members (the children of the All member) and, for each one, evaluate the TopCount of the cross product of the current region and all products. This produces the following results:

Results of a Top N by Group MDX Query
Top N by Group Results

But this only provides part of the picture. Typically, I would want to include all other products within each region as well so that I could provide context to the top N items. For example, in a report (which could also be accomplished using MDX... but let's keep the MDX straightforward for this post) I could calculate each product's percent of sales within each region or each region's percent of sales versus all sales. In order to produce the all other products row for each region, we need to create a new calculated member (i.e., [All Other Products]) using the Aggregate and Except functions.

WITH SET [Top N Products] AS
    Generate(
        [Sales Territory].[Sales Territory Region].Children
        , TopCount(
            [Sales Territory].[Sales Territory Region].CurrentMember
                * [Product].[Product Categories].[Product].MEMBERS
            , 5
            , ( [Measures].[Internet Sales Amount] )
        )
    )
MEMBER [Product].[Product Categories].[All Other Products] AS
    Aggregate(
        Except(
            NonEmpty(
                { [Sales Territory].[Sales Territory Region].CurrentMember
                    * [Product].[Product Categories].[Product].MEMBERS }
            )
            , [Top N Products]
        )
    )
SELECT { [Measures].[Internet Sales Amount] } ON COLUMNS,
    NON EMPTY {
        [Top N Products]
        , (
            [Sales Territory].[Sales Territory Region].[Sales Territory Region]
            , [All Other Products]
        )
    } ON ROWS
FROM [Adventure Works]

The Aggregate function takes a set (e.g., the cross product of region and product) and aggregates a measure (e.g., sales amount) within the current context. However, we must also use the Except function to exclude the top N products for each region from the aggregate.

While the NonEmpty function within the Except doesn't make a difference in this example, it can provide a significant performance gain depending on the cross product that is being evaluated.

This query results in the following (as a note, I did use an Order function so that All Other Products would sort within each group, but I left it out of the MDX to keep the example clear):

Results of a Top N by Group with All Others MDX Query
Top N by Group with All Others Results

Although the MDX is a little more complicated, the results are much more useful, providing a basis for creating context for your top N values.

Wednesday, September 17, 2014

Migrating a Visual Studio 2003 ASP.NET Project to Visual Studio 2013 and IIS Express

Recently I had to migrate a Visual Studio 2003 ASP.NET project to Visual Studio 2013. I know, so many questions… why hadn’t this project ever been migrated in the past, how did a project survive so long without getting migrated, how is this guy still employed, etc. All great questions. The simple answer is that I had some old projects on an old XP machine that gave up the ghost. I therefore had to migrate the projects to another machine where I had Visual Studio 2013 (using IIS Express) installed.

Like any good developer, I did not research what issues I might face in making such a leap forward. I just went ahead and tried to open the old project in Visual Studio 2013, hoping that the Microsoft engineers who designed Visual Studio would have had the forethought to imagine that someone would try something so ridiculous. However, after 20 seconds of Visual Studio contemplating what to do with my request, it responded with an error messagebox stating the following:


The operation could not be completed. The system cannot find the path 
specified.

Not very helpful. Fortunately, after clicking OK on the above message, the Migration Report (updatelog.htm) opened automatically and provided me with the following error that was a little more useful:

MyProjectName.csproj: Could not find the server 'http://localhost/MyProjectName/MyProjectName.csproj' on the local machine. Creating a virtual directory is only supported on the local IIS server.


Kind of made sense. I was moving from a development environment that used IIS 6 to one that used IIS Express. A Google search of that error yielded many results, but none that were specific to migrating a project and pertinent to my situation. So instead I rooted around my project's files to figure out where it was picking up the virtual directory information and quickly found the project's .webinfo file (i.e., MyProjectName.csproj.webinfo). The .webinfo file (something I never really paid attention to before) is a developer-specific file that keeps track of the project's virtual root location. Its contents look like this:

<VisualStudioUNCWeb>
    <Web URLPath = "http://localhost/MyProjectName/MyProjectName.csproj" />
</VisualStudioUNCWeb>

So I threw caution to the wind and experimented. I renamed the .webinfo file (e.g. @ MyProjectName.csproj.webinfo) and tried to open the .csproj file again. This time Visual Studio 2013 quickly responded with the following informative messagebox:

You have completed the first step in converting your Visual Studio .NET 2003 
web project. To complete the conversion, please select your project in the 
Solution Explorer and choose the 'Convert to Web Application' context menu 
item.

The Migration Report that followed additionally gave a few warnings regarding making “non-functional changes without impacting project behavior”. But everything seemed to work fine when I attempted to run it and, as it turned out, there was no need to convert the project to a web application.

However, when running the project, the pages were running from the root of localhost rather than in a virtual directory like I was used to seeing in the VS2003 development environment. As an optional step, this simply required editing the Project URL (e.g. adding MyProjectName to the URL) from the project's properties Web tab (i.e. right-click the project, select Properties and then the Web tab) and clicking the Create Virtual Directory button.

Next stop for my migrated project: refactoring EVERYTHING.

Tuesday, August 12, 2014

Downloading and Extracting US Census Bureau TIGER/Line Shapefiles

The United States Census Bureau (census.gov) site is a great resource for free spatial data such as state, county, and census tract boundary files. These files can be used as is or as building blocks for creating custom sales territories. They can be loaded into SQL Server for use in spatial queries or for displaying boundaries on a map in an SSRS report. They can also be used to build map tiles for use with the Bing Maps API. This post, however, is focused on downloading and extracting these files.

These TIGER (Topologically Integrated Geographic Encoding and Referencing)/Line Shapefiles are available in many vintages, including definitions from the 1990, 2000 and 2010 censuses, with annual updates through 2013 (2014 should be available soon). The Census Bureau site provides several ways to access and download these files, but the easiest way is to use their FTP site (ftp://ftp2.census.gov/).

You can use an FTP client or, as shown here, you can simply use Windows Explorer.

Start by opening Windows Explorer and pasting ftp://ftp2.census.gov/ directly into the address bar. 

Navigate to the geo/tiger/ folder. From this folder you will see the different vintages of TIGER files available. To illustrate the next step, let's say we're interested in the 2013 Census Tract boundaries.

Navigate to the TIGER2013/TRACT folder. In the right-hand pane (i.e., the file list), you'll see a list of compressed (.zip) files with names based on the State FIPS Code, a two-digit numeric code assigned to each state, the District of Columbia and the outlying areas of the U.S. (Puerto Rico, Guam, etc.).

Select the ZIP files you're interested in. I'm only interested in the U.S. states and DC, so I'm selecting files tl_2013_01.zip (AL) through tl_2013_56.zip (WY).

Right-click one of the selected files and select Copy from the popup menu.

Navigate to a folder on your local network, right-click the folder and select Paste. This may take a few minutes. I'll wait.
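If you would rather script the download than copy the files through Windows Explorer, a short PowerShell sketch like the following will pull them down (the target folder is an assumption, and since not every FIPS code between 01 and 56 is assigned to a state, missing files are simply skipped):

# Minimal sketch: download the 2013 census tract shapefiles for FIPS codes 01-56
$baseUrl = "ftp://ftp2.census.gov/geo/tiger/TIGER2013/TRACT"
$target  = "C:\Data\USCBCensusTracts2013"   # assumed local folder

if (-not (Test-Path $target)) { New-Item -ItemType Directory -Path $target | Out-Null }

$client = New-Object System.Net.WebClient
foreach ($fips in 1..56) {
    $fileName = "tl_2013_{0:D2}.zip" -f $fips
    try {
        $client.DownloadFile("$baseUrl/$fileName", (Join-Path $target $fileName))
        Write-Host "Downloaded $fileName"
    }
    catch {
        Write-Host "Skipped $fileName (not available)"
    }
}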

Now that you have the files on your local network, we need to extract (i.e., unzip) them. Unfortunately, Windows Explorer falls short when you need to unzip multiple files. If you already have WinZip or WinRAR installed, you could use either of those to unzip multiple files. I don't, so I'm going to use the following PowerShell script that I found on Steve Schofield's blog:

function UnZipMe($zipfilename,$destination) 
{ 
    $shellApplication = new-object -com shell.application 
    $zipPackage = $shellApplication.NameSpace($zipfilename) 
    $destinationFolder = $shellApplication.NameSpace($destination) 

    # CopyHere vOptions flags:
    #   4  - Do not display a progress dialog box.
    #   16 - Respond with "Yes to All" for any dialog box that is displayed.

    $destinationFolder.CopyHere($zipPackage.Items(),20) 
} 

# replace the following with the folder that contains your zipped files
$a = gci -Path C:\Data\USCBCensusTracts2013 -Filter *.zip 

foreach($file in $a) 
{ 
    Write-Host "Processing - $file" 
    UnZipMe -zipfilename $file.FullName -destination $file.DirectoryName 
}

Once this script finishes running, you will have all of your shapefiles ready to work with.

Wednesday, July 9, 2014

Getting the Last Run Report Parameter Value for an SSRS Report

I recently had a request by a report user to change the default for a couple of parameters in an SSRS report. Because default values were set on the report, it would automatically run when opened and the user would have to change the parameters to his preference and run the report again… every time. This seemed like a reasonable request except for the fact that the report was used by many people who did not necessarily want the default values changed. The alternative of not setting default values and forcing all report users to select them prior to running the report was an option, but would negatively impact those report users who were already satisfied with the existing defaults.

As the users of this report were rarely changing these particular parameters (other than to change the default value), I thought that if I could get the last value that they specified when running the report, it would generally be the value they’d prefer to use as the default.

Having created a few reports summarizing SSRS usage in the past, I knew that the parameters were logged; however, rather than trying to write something from scratch, I turned to Google for an answer and came upon this blog post by Jegan, SSRS - Method to retrieve last run report parameters, which laid most of the groundwork for me.

In the post, he lays out a simple query to access all of the entries in the ExecutionLogStorage table for a single report. He then goes on to get a parameter value from the Parameters field for the most recent entry, relying on the existence of a second parameter to parse the string. I take a different tack below, using the parameter name/value pair delimiter (i.e., the ampersand) and the equals sign to isolate the parameter name (that's why I added the ampersand to the beginning of the parameter list) and value.

The result below handles parameter name/value pairs at the beginning or the end (why I add the ampersand to the end of the parameters list) of the string as well as anywhere in between. It also handles the case where the parameter name doesn't exist in the string (IIF) or if no records exist (UNION ALL) for the report name and user name combination. There are many ways to write the query below and get the correct results... I tried to write this for performance and the clarity of the SUBSTRING function.

-- Replace 'ParName', 'DefaultParValue', 'UserName' and 'ReportName.rdl' with your own values
SELECT TOP 1 ParValue
  FROM (
    SELECT els.TimeEnd 
      -- If the parameter name isn't in the string, fall back to the default value
      , IIF(CHARINDEX('&' + 'ParName' + '=', ParsString) = 0, 'DefaultParValue',
        SUBSTRING(ParsString 
          , StartIndex
          , CHARINDEX('&', ParsString, StartIndex) - StartIndex)) AS ParValue
      -- Wrap the parameter list with '&' so pairs at the start or end parse like any other
      FROM (SELECT ReportID, TimeEnd 
            , '&' + CONVERT(VARCHAR(MAX), Parameters) + '&' AS ParsString 
            , CHARINDEX('&' + 'ParName' + '=', '&' + CONVERT(VARCHAR(MAX), Parameters) + '&') 
              + LEN('&' + 'ParName' + '=') AS StartIndex 
          FROM ExecutionLogStorage
          WHERE UserName='UserName' -- e.g. DOMAIN\Joe_Smith
          ) AS els 
        INNER JOIN [Catalog] AS c ON c.ItemID = els.ReportID
      WHERE c.Name = 'ReportName.rdl'
    -- Fallback row returned when no executions exist for the report/user combination
    UNION ALL
    SELECT CAST('2000-01-01' AS DateTime), 'DefaultParValue' 
  ) i
  ORDER BY TimeEnd DESC

The above could be re-written to get the values of several parameters at the same time for use within a single SSRS dataset; however, not much is gained, as Reporting Services executes the dataset query for each parameter it is attached to. Therefore, it may make sense to parameterize the arguments (i.e., the user name, report name, parameter name, and default parameter value) and turn the query into a stored procedure. This may also prevent you from having to add a data source pointing to your Reporting Services database.

A few things to note when using this code to access parameter values for use within an SSRS report...

First, if you have Boolean parameters that you wish to use this with, you’ll have to convert them to Text. Also, it should be clear that this does not work with multi-value parameters (I'll save that for a future post).

Second, as Jegan notes in his blog post, "the parameter value passed to the report when previewing in visual studio won't be logged in ReportServer database as the report execution is not from ReportServer". This means you will need to deploy the report to see that the default changes on subsequent report launches.

Finally, in my environment, passing Globals!ReportName as a parameter value in Visual Studio 2010 returns “ReportName” (i.e., the report name without the extension), whereas SQL Server 2012 Reporting Services (SharePoint integrated mode) returns “ReportName.rdl”. Just something to be aware of, as you need to use the report name with the extension.

Oh, just one more thing. Many would advise against this approach since it hits the Reporting Services database, which could affect performance, and Microsoft could change the database structure at any time, breaking your query. Use at your own risk.

Sunday, June 15, 2014

Creating Map Tiles (Part 3) - Generating Tiles from a TileMill Project Using MBUtil and PowerShell

This post is a continuation of Creating Map Tiles (Part 2) - Creating a Sales Territory Boundary Set Using TileMill, where I prepared my transparent boundary set in TileMill. In this post I use PowerShell to call TileMill and MBUtil to generate the map tiles.

As I mentioned in Part 1, for much of the process of creating map tiles I relied on an excellent series of posts by Pedro Sousa on his blog, Pedro's Tech Mumblings. In Part 2 of his series, he walks you through how to export your TileMill project to an MBTiles SQLite file and then use MBUtil (a command-line MapBox tool) to export the map tiles. Below, I've taken the same approach but created a PowerShell script to automate the process.

In order to perform this process manually or using the PowerShell script below, you need to install Python with SQLite and MBUtil. Pedro walks you through this; however, I performed the following steps outlined in the accepted answer to Is there any way to use mbutil on windows? posted on StackExchange.
  • Download and install ActiveState ActivePython
  • Install MBUtil by downloading and unzipping to C:\mbutil-0.2.0.
  • From a command line, navigate to C:\mbutil-0.2.0 and run "setup.py install" to install MBUtil
At this point the pre-requisites for the following PowerShell script exist. 

function CreateTiles() {
    param(
        [Parameter(mandatory=$true)] [string]$project,
        [Parameter(mandatory=$true)] [string]$inputPath,
        [Parameter(mandatory=$true)] [string]$outputPath
    )
    # delete .mbtiles, .export and .export-failed files
    remove-item  "$($inputPath)$($project).*"
    
    # delete directory
    $pngPath = $outputPath + $project
    if (test-path $pngPath){
       write-host "Deleting $($pngPath)... this may take awhile." 
       #remove-item $pngPath -recurse  # very slow
       $fso = New-Object -ComObject scripting.filesystemobject
       $fso.DeleteFolder($pngPath, $true)
    }
            
    # Create mbtiles file. Requires TileMill.
    # changed the following so that the executable's path doesn't 
    # contain spaces (i.e. using the call (&) operator is optional)
    cd "C:\Program Files (x86)\TileMill-v0.10.1\tilemill\"
    $mbtilesFile = "$($inputPath)$($project).mbtiles"
    & ".\node.exe" ".\index.js" "export" $project $mbtilesFile "--format=mbtiles"
    
    # Create PNGs from mbtile file. Requires:
    # Python with SQL Lite (install ActivePython);
    # MB Util (download mbutil zip from https://github.com/mapbox/mbutil/tree/v0.2.0).
    python "C:\mbutil-0.2.0\mb-util" --scheme=xyz $mbtilesFile $pngPath
}

CreateTiles "State_And_County_Boundaries" "C:\SHPs\" "C:\MapTiles\"

The first step in the script removes the existing MBTiles files, if they exist, which would block the creation of the new file. It also deletes the existing path of previously created tiles. This is necessary because I don't want any old tiles mixed with the new ones (remember, not all tiles are generated within the bounds set in the TileMill project because my map has a transparent background). Depending on the number of tiles generated in a previous run, this can take a while.

The second step is what you can do manually from TileMill by clicking the Export button in the main toolbar and selecting MBTiles. It calls TileMill's index.js file and is documented here on the MapBox site.

The final step calls MBUtil to produce the map tiles in a set of folders based on the XYZ tiling scheme.

The files now just have to be copied to a web server where they can be accessed by the Bing Maps API. The following, modified from a function taken from another of Pedro's blog posts (taken in turn from a blog post by Alastair Aitchison), shows how the tiles can be referenced from the Bing Maps API using JavaScript:

var tileSource = new MM.TileSource({ 
    uriConstructor:  function getTilePath(tile) {
        // only created custom tiles for zoom 0-12
        if (tile.levelOfDetail <= 12) {
            var x = tile.x;
            var z = tile.levelOfDetail;
            var yMax = 1 << z;
            var y = yMax - tile.y - 1;

            return "Images/Tiles/BaseMap4/" + z + "/" + x + "/" + y + ".png";
        }
    }
});

var tileLayer = new MM.TileLayer({ mercator: tileSource, opacity: 1 });

// Push the tile layer to the map
map.entities.push(tileLayer);

One of the modifications that I made to the above prevents the call to retrieve tiles that were not generated for certain zoom levels. This will prevent the server from having to generate a multitude of 404 error response codes as you zoom below the level for which tiles were generated. Another modification that I made to my own code (not included above) was to add a version argument to the end of the URL which is retrieved from a web service call to the database when the page initially loads. I use this argument to force the fetching of new tiles when the tiles have been replaced on the server.


The Tile Layers Used with the Bing Maps API

Using the PowerShell script above, I have been able to generate three different, fairly static boundary sets, consisting of 3 to 5 layers each (resulting in around 100,000 tiles for each set). Along with the script from Part 1, Creating Map Tiles (Part 1) - Exporting MapInfo TAB Files to ESRI SHP Files Using PowerShell and MapInfo, I have been able to automate the creation of an additional two boundary map tile layers that have to be generated monthly, due to changes in the boundary definitions.


Friday, May 9, 2014

Creating Map Tiles (Part 2) - Creating a Sales Territory Boundary Set Using TileMill

This post is a continuation of Creating Map Tiles (Part 1) - Exporting MapInfo TAB Files to ESRI SHP Files Using PowerShell and MapInfo, where I prepared my MapInfo boundary files for use in TileMill. In this post I focus on creating a boundary map in TileMill that can eventually produce tiles which will be overlaid onto a Bing Maps tile layer using the Bing Maps API. To keep it very simple, my map will only consist of state and county boundaries.

The first step in creating a transparent boundary set is to specify "png (24-bit)" as the Image Format and to uncheck the "Default data" checkbox when creating the new project (like MapInfo, in TileMill you have a project, or workspace, which will contain your layers). 

When you open the project, you will notice that TileMill automatically created a stylesheet (style.mss) with a default Map style consisting of a solid blue background (unlike MapInfo, which uses the Layer Control, TileMill uses a CSS-style stylesheet to control the appearance of the layers). The second step in creating a transparent boundary set is to remove or comment out the Map selector (i.e., using comment blocks /* */).

The next step is to open the Layers List using the bottom button in the Editing tools (located at the bottom left of the map). Next, add a layer by clicking the Add Layers button. Add the bottom, or most detailed, layer first (counties, in my example). Browse for the ESRI Shape file and click "Save & Style" when done (entering a Carto ID and Class is optional). The layer will be added to the map with a default style.

The third step in creating a transparent boundary set is to comment out or remove the polygon-opacity and polygon-fill attributes of polygon layers. You can also change the line-color and line-width to your preference. At this point, you can add more polygon layers (I'm adding states) and perform the same edits to the attributes for each selector. My result looks like:



Right now the map is pretty cluttered with all of the county boundary lines. This brings up the other requirement for my transparent boundary set, which was to have the more detailed layers only become visible as you zoom in. To accomplish that, we can apply a zoom selector, which specifies at what zoom levels the layer is visible. For my simple state and county set, the stylesheet now looks like the following:


#countyregion {
  [zoom >= 6]
    {
      line-color:#00ffff;
      line-width:1;
    }
}

#stateregion {
  line-color:#0000ff;
  line-width:1;
}

Because this map will eventually overlay a Bing Maps road or satellite layer, I also want to make the boundaries stand out a little more. This can be done using additional styles and is basically taken straight from the TileMill manual. The "outline" name used for the style simply identifies its purpose and is not required. It is important to put this style before the existing one, as it will be rendered first.

#countyregion::outline {
  [zoom >= 6]
    {
      line-color:#00ffff;
      line-width:7;
      line-opacity:0.1;
      line-join: round;
      line-cap: round;
    }
}

#countyregion {
  [zoom >= 6]
    {
      line-color:#00ffff;
      line-width:1;
    }
}

#stateregion::outline {
  line-color:#0000ff;
  line-width:7;
  line-opacity:0.1;
  line-join: round;
  line-cap: round;
}

#stateregion {
  line-color:#0000ff;
  line-width:1;
}

Additionally, the line-join and line-cap attributes were used to smooth out the ends of the line segments. The result looks like this (note that I'm zoomed in to 6 to show the county boundaries):



The final step in setting up the TileMill project is to create the settings that will be used when generating the tiles. You do this from the Project Settings panel, accessed using the wrench button in the main toolbar (i.e. the upper right of the application). By defining the Bounds (-179.7253,18.1928,-66.3464,71.4848), Center (-97.6355,40.6181,10) and the Zoom (0-12), you can significantly reduce the number of tiles that will be generated. Additionally, because of the transparent background, tiles will not be generated if there is no visible boundary. This results in a much smaller number of tiles than if there were a visible background to the map.

At this point I have created a simple transparent boundary set with a detailed (i.e. county) layer that becomes visible as one zooms in. The next blog post, Creating Map Tiles (Part 3) - Generating Tiles from a TileMill Project Using MBUtil and PowerShell, will present a PowerShell script that will generate the layers based on this TileMill project.


Wednesday, April 30, 2014

Creating Map Tiles (Part 1) - Exporting MapInfo TAB Files to ESRI SHP Files Using PowerShell and MapInfo

I recently had the requirement to display sales territory boundaries within a Bing Maps web application that I had written. The boundaries were actually a set of custom sales territories (e.g. region, district, markets) using counties as a building block. After some research, I determined the best way to incorporate these boundaries into my Bing Maps application was to generate a set of map tiles which would progressively show the boundaries (e.g. regions then districts then markets and finally counties) as you zoomed in.

For much of the process of creating map tiles, I relied on an excellent series of posts by Pedro Sousa on his blog, Pedro's Tech Mumblings. Starting with Part 2 of his series, he introduces you to TileMill and shows how it can be used to design a map.

One of the first steps after installing TileMill is to create a new project and add map layers to it, and one of the options for the map layers is to use ESRI Shape (.shp) files. I had a bit of a head start in the process, as I had been using MapInfo and MapBasic for many years to create MapInfo TAB files for a desktop application that I had written back in the late 1990s. Because of my familiarity with MapInfo, I knew that it included a MapBasic tool called the Universal Translator, a data conversion tool built on Safe Software's FME (Feature Manipulation Engine) technology. This could be used to translate my MapInfo TAB files into ESRI Shape files. However, I had several different boundary sets, consisting of several different boundaries (a dozen total), that I needed to create, and using an interactive tool would be tedious. I also had two boundaries that needed to be generated monthly, so automating the process was a must.



Because I could use MapBasic to write something to call the Universal Translator within MapInfo, I knew I could automate the process. However, I got lucky when I unexpectedly stumbled upon the Universal Translator User Guide, which included information on how the MapInfo Universal Translator could be called from the command line. Armed with this information, I was able to quickly put together the following PowerShell script to convert all of my files.


function ExportTabToShp() {
    param(
        [Parameter(mandatory=$true)] [string]$TAB_PATH,
        [Parameter(mandatory=$true)] [string]$outputPath
    )

    $baseName = [System.IO.Path]::GetFileNameWithoutExtension($TAB_PATH)
    
    # Application name of the MapInfo Universal Translator
    $CMD = "C:\Program Files (x86)\MapInfo\Professional\UT\Imut.exe"
    
    # Command that generates a semantic control file
    $IMUT_COMMAND = "CFGenerate"
    
    # Input format is MapInfo .TAB
    $INPUT_FORMAT = "MAPINFO"
    
    # Output format is the ESRI Shapefile format
    $OUTPUT_FORMAT = "SHAPE"
    
    # Path and filename of the semantic control file
    $FME_PATH = $outputPath + $baseName + ".fme"
    
    #Path and filename of the logfile to be generated
    $LOG_FILENAME = $outputPath + "mut.log"

    # Generate a mapping file:
    write-host "Note the RemoteException that is thrown is simply a message stating that the 'Mapping File Generation' was successful:" -ForegroundColor Green
    & $CMD $IMUT_COMMAND $INPUT_FORMAT $OUTPUT_FORMAT $TAB_PATH $FME_PATH LOG_STANDARDOUT "YES" LOG_FILENAME $LOG_FILENAME LOG_APPEND "YES"

    # Run the mapping file:
    write-host "Note the RemoteException that is thrown is simply a message stating that the 'Translation' was successful:" -ForegroundColor Green
    & $CMD $FME_PATH --HOMOGENOUS_GEOMETRY "YES" --_SHAPE_FILE_TYPE "2d" --SourceDataset $TAB_PATH --DestDataset $outputPath --_EXTENSION "TAB" --_BASENAME $baseName --_FULLBASENAME $baseName
}

ExportTabToShp "C:\TABs\county.tab" "C:\SHPs\"

The first call in the ExportTabToShp function generates an FME file, which is a "mapping file" that controls the translation. The second call runs the translation using the generated FME file. Refer to the User Guide link above for more information required to expand this function to support other translations.

Note the use of the Call operator (i.e. &) in the above PowerShell script. This allows one to call an executable formatted as a string and is useful when there is a space in the path to the executable.

In the next part of this series, Creating Map Tiles (Part 2) - Creating a Sales Territory Boundary Set Using TileMill, I'll walk through a couple of the steps that I used in TileMill to create the sales territory boundary sets.