Connecting SSRS Report to Bing Maps

Hi there,

Today I read an interesting article talking about drill-through actions used to connect and map your report data to external sites such as Bing or Google Maps.

Here is the link:

Categories: Geographic Mapping

Contain DW/BI Scope Creep and Avoid Scope Theft

Nice article at the Kimball Group site, highlighting the need to keep your Business Intelligence program on track – don't lose sight of your initial scope!

Categories: Personal Experience

List All SSAS Roles and Users Membership

This is a known issue with SSAS: there is no easy way to list all SSAS roles and their user memberships in a simple, tabular format. Some posts mention using PowerShell to complete this task. There is also a Roles Report available in BIDS Helper, but it doesn't have an easy, understandable format.

However, with the help of a great project at CodePlex called the Analysis Services Stored Procedure Project (ASSP), we can install a set of stored procedures written in C# on our SSAS server and then run this simple command in an MDX query editor:

Command: call assp.discoverxmlmetadatafull("\Database\Roles\Role|Name\Members\Member")

The result:



Fantastic C# Project to extend SSAS Daily Management and Metadata Discovery

ASSP – Analysis Services Stored Procedure Project

The Analysis Services Stored Procedure project is a set of sample stored procedures for Analysis Services 2005, 2008, 2008 R2, and 2012. These samples have been written in C# and sample MDX queries are included that demonstrate the use of the procedures against the Adventure Works DW sample database. It was developed by a group of community volunteers with 2 main aims in mind:

  1. To provide a set of useful extensions to Analysis Services 2005 and higher.
  2. To provide a variety of example source code for people looking to write their own stored procedures.

Link to ASSP Project:

Install it in your SSAS Server:

Dynamic OLAP Cube Reports In Excel

Pingback from:

Let’s open up our original Excel file. You should have a report like the one below. It is already a pretty useful report, but requirements like the ones below can be difficult to achieve with a Pivot Table.


Let’s say for example, the Financial Controller (we’ll call him Ken) gives you the following requirements.

1. Make the report so that I can pick a Period and the report automatically shows me P&L data from the 6 months prior to the selected period.

2. I want to be able to choose a Department and only show data for it.

3. I don’t like the P&L as it stands at the moment, as it doesn’t give me enough detail, but the rest of the report is fine. I want the line for Operating Expenses itemised at a more detailed level, e.g. Labour, Travel, Marketing…

So, apart from thinking “cheers Ken”, you think the answer could be simple: get the BI team to amend the “Summary P&L” to provide more detail for Operating Expenses. The BI team says they can’t do that, as it’s already used in 300 reports that can’t be changed. So you ask them to create a new set including the extra information and call it “Ken’s P&L”. The BI team says yes, they can do that, but they are busy with Phase 3 of the BI roll-out, so how does 3 months’ time sound? OK, you get the picture: Ken wants this next week, but the BI team can’t deliver for 3 months.

You also can’t add the detail to your rows. From your knowledge of the cube, you know that the detail exists in Level4 of your Accounts dimension (see left), but the rows already contain an OLAP “named set” based on the Accounts dimension and therefore can’t have another set of members based upon that dimension. Try it and see – you’ll get an error like this:


Convert To Formulas

The first thing we can do is convert the whole report to OLAP formulas. This makes the whole report formula-driven (using CUBEMEMBER and CUBEVALUE functions), and the Pivot Table no longer exists.

Select OLAP Tools>Convert To Formulas from the Pivot Table Options menu.


NOTE: DO NOT check the box that says “Convert Report Filters” – we need the filters to drive Ken’s request to select a Financial Period and Department. (This is where you could code this using neat Excel tricks and VBA, but why bother when you can drive it using the existing filters?) The result will be as per the screenshot below. Note how you can still select Departments & Fiscal Years.


To see all the new formulas at a glance, a neat Excel keyboard shortcut is CTRL+`. This toggles between viewing FORMULAS and their RESULTS in an Excel sheet (see below).


Note how the row and column headings have changed to CUBEMEMBER functions and the Values to CUBEVALUE functions.

Note also that the CUBEVALUE functions’ Member_Expression parameters refer to Excel cells ($B$2, $A$54, etc.). This is really useful to note: they refer to member expressions generated by the CUBEMEMBER functions in the rows and columns, AND to member expressions generated by the old Pivot Table report-filter cells (in B1 and B2 of the worksheet). This gives us our first clue as to how we can make the report fully dynamic. Press CTRL+` to return to the values view.
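Conceptually, that indirection can be sketched in plain Python. This is an analogy, not Excel: the cube data, member names, and "cell" variables below are all hypothetical. Each cell holds a member expression produced by a CUBEMEMBER-style function, and a CUBEVALUE-style lookup keys off the combined tuple of member expressions.

```python
# Hypothetical cube data keyed by (row member, column member, filter member).
cube = {
    ("[Account].[Net Sales]", "[Date].[FY2008]", "[Dept].[All]"): 12500.0,
}

def cube_member(expression):
    """Like CUBEMEMBER: the cell's value IS the member expression."""
    return expression

def cube_value(cube, *member_cells):
    """Like CUBEVALUE: combine the member expressions from the referenced
    cells into one coordinate and look up the measure value."""
    return cube.get(tuple(member_cells), 0.0)

a54 = cube_member("[Account].[Net Sales]")  # row-heading cell
c1 = cube_member("[Date].[FY2008]")         # column-heading cell
b2 = cube_member("[Dept].[All]")            # old report-filter cell

print(cube_value(cube, a54, c1, b2))  # -> 12500.0
```

Changing what the "filter cell" (b2 here) holds changes every lookup that references it, which is exactly why the converted report can be driven by the surviving report-filter cells.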


SSAS: Explaining the State Transitions Viewer for Sequence Clustering

Pingback from: SSAS:

Original article (before edits) by Allan Mitchell, SQL Server MVP and principal consultant at Konesans Limited in the UK. You can find Allan’s website .
NOTE: Alternatively, you can download a Word file of this whitepaper at TechNet Gallery. This is useful for several reasons: (1) to easily print it, (2) to see the original version, (3) to customize it for personal use, and (4) to share an offline copy.


In SQL Server 2005 and SQL Server 2008, Microsoft has added some fantastic visualizations around data-mining algorithms. These visual aids allow us to see exactly what a particular algorithm is predicting or describing—making a difficult subject easier to understand.


In this article, I look at the State Transitions viewer for the Microsoft Sequence Clustering algorithm and explain exactly what you are seeing and why. Although it is not necessary, this will be easier to understand if you have a statistics background or previous experience with data mining. You can find an introduction to the Microsoft Sequence Clustering Algorithm here:

The image below is an example screen from the State Transitions viewer.

The screenshot is taken from the Sequence Clustering model that is deployed as part of the sample Analysis Services Project for SQL Server 2008. You can download the Analysis Services Project here:


The screenshot presents a very simple, clean view of a transition. When you look at your models, you will undoubtedly see a lot more nodes, lines, and numbers. However, take heart that everything we learn here is equally applicable in larger sequences. I just wanted a clean, uncomplicated view of a transition.


The Viewer in General

The slider on the left side allows us to gradually filter weaker or stronger links between items in a sequence in or out. The links are determined by the transition probability. If the slider is at the top, all the links are displayed, no matter how slight the probability. If the slider is at the bottom, only the strongest links are displayed.
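The slider's behavior amounts to a simple probability threshold over the links. A minimal sketch, using made-up link data shaped like the simple transition discussed in this article:

```python
# Each link: (from_state, to_state, transition_probability).
# "<end>" stands in for the end-of-sequence marker.
links = [
    ("ML Mountain Tire", "Mountain Tire Tube", 0.32),
    ("ML Mountain Tire", "<end>", 0.68),
    ("Mountain Tire Tube", "<end>", 1.00),
]

def visible_links(links, threshold):
    """Slider at the top -> threshold 0.0 (every link shown);
    slider at the bottom -> high threshold (strongest links only)."""
    return [link for link in links if link[2] >= threshold]

print(len(visible_links(links, 0.0)))  # all 3 links shown
print(len(visible_links(links, 0.5)))  # only the 2 strongest links
```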

Across the top there are several tabs and two dropdown combo boxes. Clicking each tab shows another viewer for our Sequence Clustering model. The dropdown on the left allows us to choose our mining model, and the dropdown on the right allows us to choose the viewer. This is important; you may find that you use the viewer dropdown quite frequently.

Of the two available viewers, this article concentrates on the Microsoft Sequence Cluster Viewer. The Microsoft Sequence Cluster Viewer is the more graphical of the two, and I find it easier to understand. The second viewer, the Microsoft Generic Content Tree Viewer, is not graphical, but it contains more information, which is extremely useful when you want to dig deeper into the algorithm.


The Viewer in Detail

Sequence Start and End

Let’s dive straight in and try to work out what the model viewer is telling us. In this article, I have chosen to look at Cluster 13 for this particular sequence clustering model. One of the first things I noticed and often get asked about is the “triangles with balls.” I know this is not a very technical name, but in the absence of anything else, I am using the term here.

Here is the ball on the flat edge of the triangle:

This indicates that ML Mountain Tire is the first state in the sequence.

Here is the ball on the point of the triangle:

This indicates that the ML Mountain Tire state is the last in the sequence and that nothing comes after it.

Description of the numbers in the viewer

Looking at the original screenshot, we see that for this particular sequence we would start with the ML Mountain Tire state 61 percent of the time and the Mountain Tire Tube state 39 percent of the time. Where ML Mountain Tire is the first item in the sequence, we can expect it to be followed by the Mountain Tire Tube state in 32 percent of cases, and to be the end of the sequence in 68 percent of cases. When Mountain Tire Tube is the starting state, we can expect it never to be followed by anything: we know this because of the value 1.00 (see Fig 1) and the end-of-sequence ball on the point of a triangle. Each sequence has a unique color marking its states (including start and end states), links, and probabilities.
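It can help to see how percentages like these fall out of raw sequence data. The toy sequences below are entirely made up, chosen only so the counts roughly reproduce the 61/39 start-state split described above; the real numbers come from the model's training cases.

```python
from collections import Counter

# 100 hypothetical shopping sequences.
sequences = (
    [["ML Mountain Tire"]] * 42                          # tire, then nothing
    + [["ML Mountain Tire", "Mountain Tire Tube"]] * 19  # tire followed by tube
    + [["Mountain Tire Tube"]] * 39                      # tube, then nothing
)

# Start-state probabilities: which state opens each sequence?
starts = Counter(seq[0] for seq in sequences)
total = len(sequences)
for state, count in starts.items():
    print(f"P(start = {state}) = {count / total:.2f}")

# Transition probabilities out of "ML Mountain Tire".
outcomes = Counter(
    seq[1] if len(seq) > 1 else "<end>"
    for seq in sequences
    if seq[0] == "ML Mountain Tire"
)
tire_total = sum(outcomes.values())
for nxt, count in outcomes.items():
    print(f"P({nxt} | ML Mountain Tire) = {count / tire_total:.2f}")
```

With this data, ML Mountain Tire starts 61 of the 100 sequences (0.61), and of those, 19 continue to Mountain Tire Tube while 42 stop, giving the kind of follows/ends split the viewer annotates on each link.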

Explanation of the source of the numbers

Now let’s query the metadata around the model using Data Mining Extensions (DMX). Although a detailed look at DMX is out of scope for this article, I will explain my queries as we go along.

For this cluster, the first thing I want to know is the probability of each state being first in the sequence. Earlier we saw in the viewer that the probabilities are 61 and 39 percent respectively. The query below shows how we can retrieve the same information with a DMX query.

Here is the query:

SELECT FLATTENED NODE_UNIQUE_NAME,
    (SELECT ATTRIBUTE_VALUE AS [Product 1],
            [Support] AS [Sequence Support],
            [Probability] AS [Sequence Probability]
     FROM NODE_DISTRIBUTION
     WHERE [Support] > 0) AS t
FROM [Sequence Clustering].CONTENT
WHERE NODE_TYPE = 13
  AND [PARENT_UNIQUE_NAME] = 13

Query 1

The outer part of Query 1 selects from the [Sequence Clustering] model and asks for NODE_TYPE 13. This node type holds the first states of possible sequences. Remember that a cluster can have multiple possible starting points for sequences/runs of states. Query 1 also filters on PARENT_UNIQUE_NAME = 13, which means we want to look at Cluster 13.

What might be slightly confusing are the FLATTENED keyword and the nested table in Query 1. The following code reads from a nested table that is returned as part of the [Model].CONTENT request.

(SELECT ATTRIBUTE_VALUE AS [Product 1],
        [Support] AS [Sequence Support],
        [Probability] AS [Sequence Probability]
 FROM NODE_DISTRIBUTION
 WHERE [Support] > 0) AS t

Query 2

Here I am asking for the value of the first sequence state, the probability of that state, and the number of cases that support it, where there is at least some support.
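The FLATTENED keyword simply expands each outer row's nested table into one flat row per nested row, repeating the outer columns. A rough sketch in plain Python (not DMX), using the values that appear in Table 1 below:

```python
# One outer row from [Model].CONTENT, carrying a nested table "t".
outer_rows = [
    {"NODE_UNIQUE_NAME": "884722",
     "t": [
         {"Product 1": "ML Mountain Tire", "Sequence Probability": 0.613636},
         {"Product 1": "Mountain Tire Tube", "Sequence Probability": 0.386364},
     ]},
]

# FLATTENED: pair the outer columns with every nested row.
flattened = [
    {"NODE_UNIQUE_NAME": row["NODE_UNIQUE_NAME"], **nested}
    for row in outer_rows
    for nested in row["t"]
]

for r in flattened:
    print(r)
```

One outer row with a two-row nested table thus becomes two flat rows, both labelled with the same NODE_UNIQUE_NAME, which is exactly the shape of the result table that follows.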

Here are the results:

NODE_UNIQUE_NAME  t.Product 1         t.Sequence Support  t.Sequence Probability
884722            ML Mountain Tire    168.4872            0.613636
884722            Mountain Tire Tube  106.0845            0.386364

Table 1

We can see that the numbers returned correlate nicely with what the viewer shows us.

Now let’s move a little further and see what the probabilities are of the next states in the sequence when ML Mountain Tire is the first state.  For this I am going to use a slight variation of Query 1:

SELECT FLATTENED NODE_UNIQUE_NAME,
    (SELECT ATTRIBUTE_VALUE AS [Product 1],
            [Support] AS [Sequence Support],
            [Probability] AS [Sequence Probability]
     FROM NODE_DISTRIBUTION) AS t
FROM [Sequence Clustering].CONTENT
WHERE NODE_TYPE = 13
  AND [PARENT_UNIQUE_NAME] = 13

Query 3

The only difference between Query 3 and Query 1 is that I have not restricted the nested table to show only items with [Support] > 0.

Here are the results of Query 3:

Table 2

Query 3 shows us what the first states are for Cluster 13. As we can see, there are only two possibilities. We are concentrating on ML Mountain Tire here. To find out what states follow ML Mountain Tire, we need to count down the rows in Table 2 until we reach ML Mountain Tire. Counting starts at 0, and row 0 is always the Missing state. If we count, we get to 14. We now need to look at the transition states.
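That counting rule is just 0-based list indexing with the Missing state pinned at position 0. A small sketch (the placeholder states are hypothetical; only the two product states and their positions, 14 and 17, come from this article):

```python
# Row 0 is always the Missing state; rows 1-13 are other states.
states = ["Missing"] + [f"<other state {i}>" for i in range(1, 14)]
states.append("ML Mountain Tire")    # lands at index 14
states += ["<other state>", "<other state>"]
states.append("Mountain Tire Tube")  # lands at index 17

print(states.index("ML Mountain Tire"))    # -> 14
print(states.index("Mountain Tire Tube"))  # -> 17
```

The index is the "sequence state" number used in the NODE_DESCRIPTION of the transition-row queries that follow.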

The following query will tell us where to find the node that holds the second states for sequence state 14.  We also restrict the PARENT_UNIQUE_NAME to the NODE_UNIQUE_NAME we retrieved by looking at first states of sequences in Query 1.

SELECT NODE_UNIQUE_NAME
FROM [Sequence Clustering].CONTENT
WHERE NODE_DESCRIPTION = 'Transition row for sequence state 14'
  AND [PARENT_UNIQUE_NAME] = '884722'

Query 4

We now need to take the result of Query 4, 884737, and use it to get the second state items.

SELECT FLATTENED
    (SELECT ATTRIBUTE_VALUE AS Product2,
            [Support] AS [P2 Support],
            [Probability] AS [P2 Probability]
     FROM NODE_DISTRIBUTION) AS t
FROM [Sequence Clustering].CONTENT
WHERE NODE_UNIQUE_NAME = '884737'

Query 5

As you can see, Query 5 is pretty much the same query as before, but NODE_UNIQUE_NAME has changed to the node returned in the previous query. Here are the results of Query 5:

Table 3

The probability column shows us that there is a 31 percent probability that Mountain Tire Tube will follow ML Mountain Tire and a 68 percent chance that nothing will follow.  This correlates nicely with what we see in the viewer.

Now perform the same query for the Mountain Tire Tube state. If we go back to the results in Table 2, we see that Mountain Tire Tube is transition state row 17. To find the second states, this changes our query to the following:

SELECT NODE_UNIQUE_NAME
FROM [Sequence Clustering].CONTENT
WHERE NODE_DESCRIPTION = 'Transition row for sequence state 17'
  AND [PARENT_UNIQUE_NAME] = '884722'

Query 6

As before, we take the results of Query 6, 884740, and use it to find the second states:

SELECT FLATTENED
    (SELECT ATTRIBUTE_VALUE AS Product2,
            [Support] AS [P2 Support],
            [Probability] AS [P2 Probability]
     FROM NODE_DISTRIBUTION) AS t
FROM [Sequence Clustering].CONTENT
WHERE NODE_UNIQUE_NAME = '884740'

Query 7

The results below again correlate very nicely with what we see in the viewer. Here 0.99999999 is effectively a probability of 1 (100%). It is what statisticians call “almost surely”: we will never be able to run this sequence enough times to say that it is always true (surely), but we are very sure it will happen.

Table 4

Whenever Mountain Tire Tube is a state in a sequence for this cluster, we always end on that state.
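The "almost surely" point can be made numerically: a per-occurrence probability of 0.99999999 is practically 1, yet over enough repetitions the one-in-a-hundred-million exception eventually becomes likely. A quick back-of-the-envelope check:

```python
p = 0.99999999  # the viewer's "almost surely" transition probability

# Probability of seeing at least one exception in n independent runs.
for n in (1, 1_000_000, 1_000_000_000):
    print(n, 1 - p ** n)
```

At a single run the chance of an exception is a negligible 1e-8; after a million runs it is still only about 1 percent; but after a billion runs an exception is all but guaranteed, which is exactly why statisticians refuse to call such an event certain.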



Microsoft is making great strides in enabling more and more people to use data mining without feeling that it is too complex. The viewers provided with SQL Server 2005 and 2008 are excellent ways to visualize what the model has learned about your data. This article has shown how to interpret one of those viewers. Thank you for reading, and please send us your feedback to let us know how useful this was or if you have any other thoughts.

About the original author (before wiki edits): Allan Mitchell is a SQL Server MVP based in the UK. He specializes in the Microsoft SQL Server BI stack, with a passion for Data Mining and SQL Server Integration Services. You can find Allan’s website at .

RsScripter – Nice tool to Help you Migrate Reporting Services

PingBack from:

If you are using SSRS, chances are high that you will come across a scenario where you want to deploy reports developed on a development machine to the production server. There are various ways to do this, and one of them is the RS Scripter tool. Here are the steps:

1. Creating Report Deployment Script

  1. Download the Reporting Services Scripter –
  2. Update the servers.xml; change the reportservice value if your ReportServer name is different.

    <server label="SQL2000" reportservice="http://localhost/ReportServer/ReportService.asmx" />

    <server label="SQL2005" reportservice="http://localhost/ReportServer/ReportService2005.asmx" />

  3. Run the RSScripter.exe.
  4. Click Options and select the options shown in the images below (for more information about the scripting options, check the readme file).



  5. Change the SQL 2005 RS EXE Location in the Global tab as per the location of SQL Server, and click Apply.
  6. Select SQL 2005 from the Report Server dropdown and click Get Catalog; all the reports, data sources, schedules, roles, etc. present in the report server will be displayed.


  7. Select all the reports, data sources, schedules, and roles you want to transfer.


Change the output directory path (it should not be machine- or user-specific, e.g. the Desktop), for example C:\AClickReports, and click Script. The scripts for loading the specified items will be generated in the specified folder.

2. Deploying the Reports

  1. Ensure that IIS and the .NET Framework 2.0 are installed on the report server.
  2. While installing SQL Server 2005, select the Reporting Services option.
  3. Follow the report server configuration steps and make sure the Report Server Configuration Manager shows all green ticks.
  4. Follow these steps to publish reports –
    1. Extract the AClickReports folder to the report server's C: drive.
    2. Open the "RS Scripter Load All Items.cmd" file located in the AClickReports folder for editing.
    3. Change the SCRIPTLOCATION variable value to the location where the AClickReports folder was copied (it should be the same as the output directory path selected earlier):

        SET SCRIPTLOCATION=C:\AClickReports\


    4. Change the RS variable value to the location where Microsoft SQL Server is installed (if it is different from the default C: drive):

      SET RS="C:\Program Files\Microsoft SQL Server\90\Tools\Bin\RS.EXE"

    5. Run the RS Scripter Load All Items.cmd batch file; it will publish the reports to the report server.
  5. Follow these steps to verify that the reports are deployed correctly:
    1. Go to http://localhost/Reports/Pages/Folder.aspx.
    2. Open Data Sources; you should see the data sources selected in step 1.7.
    3. Open each data source, update its details as per the configuration of the report server (if they are different), and apply.
    4. Go back to the Home page, click the report folder selected in step 1.7, select one of the reports, click the Edit button, and then click the Data Sources link; make sure the data source is configured properly.
    5. Repeat steps 2.5.2 to 2.5.4 for the other data sources and report folders.