“Multi-threading” the SQL Server Analysis Services Formula Engine II – a parallel query msmdpump proxy

Following a previous post (“Multi-threading” the Sql Server Analysis Services Formula Engine – I #ssas #mdx), we recently returned to the issue of multi-threading and the SSAS formula engine. See that post, or this post by James Serra, for reference.

Honestly, it’s kind of amazing that something like the SSAS engine can run incredibly well using a single-threaded model for each query. It’s damn smart, and as long as it stays smart you usually don’t need brute force. :)

Until… you find yourself with a very (very) complex cube, a rather complex scorecard model also built entirely in SSAS/MDX, and… PerformancePoint with its WITH clauses (which, as Chris Webb has pointed out before, disable the FE cache).

Sample query generated by PerformancePoint (a scorecard model, KPIs on rows), running in 50-60 seconds.



Returning a complex, FE-intensive scorecard with several different KPIs (from completely different measure groups), with values/metrics like value, YTD value, prior year, end-of-year projection, target, YTD target, score and so on… well, it requires a not so usual amount of computation from SSAS. Worse, due to the, let’s call it “reasonable”, :) amount of MDX scopes involved, it triggered some internal thresholds and stopped being smart… maybe switching to cell-by-cell mode? Query the 3 KPIs individually and none exceeds 2-3 seconds; get the 3 together in the same query –> 30-40 seconds…

After exhausting all the tuning we could possibly remember, we got the query down from 1m20s to 40-50 seconds. Still annoying: why all that CPU power if we are left waiting for a damn 50 seconds?

Another thing to note was that every row was a different KPI, completely isolated from all the other rows returned: really a very good candidate for partitioning and multi-threading. But SSAS doesn’t do that (a query hint would be great, SSAS team ;) ).


(And this is where I have to give the usual disclaimer: please do this at home, never at work! It’s not supported… aside from tolerating those 50 seconds, what follows is the worst possible thing you could do…)

We had previously built some SSAS HTTP msmdpump proxies (msmdpump allows you to query an SSAS server over HTTP), adding some “extra” features we needed…

So why not try intercepting that query in a “fake” msmdpump (proxy) built in ASP.NET, partitioning it by the member rows requested, running a bunch of parallel MDX queries against the SSAS database, getting the results, joining the cellsets together, and returning it all as an SSAS XMLA pump reply…? (Kinda… nuts… yes.)

And well, far, far, far away from being a reusable module for these scenarios, we built it for this specific one only, changed the PerformancePoint connections to point to the HTTP endpoint (only for these KPIs/scorecards), and gave it a go:

From here, with the query running at 50 seconds and almost no CPU activity:


To here, with the query now running in under 14 seconds (the multi-threading is very noticeable; obviously you should have spare CPU available, so not for an already heavily loaded server):


Short story: how does it work?

1 - A web app gets the XMLA HTTP requests targeted at the ISAPI SSAS msmdpump and inspects them for a very specific pattern. Everything else is passed through to the regular msmdpump (debug/testing) or blocked (on the production server, we only need to answer a very specific pattern from PerformancePoint).

2 - It splits the member list requested on the MDX rows axis and generates an identical MDX query for each group of N members.
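The original proxy was built in ASP.NET; as a rough illustration of this step (in Python, and with hypothetical names like `MDX_TEMPLATE` and `build_queries` that are not from the original code), the partitioning amounts to something like:

```python
# Illustrative sketch only; the template and function names are hypothetical.
MDX_TEMPLATE = (
    "select non empty {{ [Measures].[Value] }} on 0, "
    "non empty {{ {members} }} on 1 "
    "from [SomeCube]"
)

def partition_members(members, n):
    """Split the member list requested on rows into groups of at most N."""
    return [members[i:i + n] for i in range(0, len(members), n)]

def build_queries(members, n):
    """Generate an identical MDX query for each group of N members."""
    return [MDX_TEMPLATE.format(members=",".join(group))
            for group in partition_members(members, n)]
```

Each generated query differs only in the member set placed on rows, so every partial result covers a disjoint slice of the original rows axis.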


3 - Using the .NET parallel APIs, it then sends X simultaneous queries/threads to the local SSAS database:





4 - And the tricky part: getting the resulting cellset XML for each query and aggregating it into a single XMLA cellset, which is then returned to the original HTTP request made by PerformancePoint (you can also fire a Management Studio MDX query and pass it through the proxy).
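A simplified sketch of that aggregation step, assuming a lot: the real code works on the raw XMLA `<Axis>`/`<CellData>` elements, while here each partial result is reduced to a pair of row tuples plus cells keyed by (row, column) index:

```python
# Simplified, hypothetical sketch of the cellset merge (not the original code).
def merge_cellsets(partials, n_columns):
    """Concatenate the row axes of the partial results, re-basing each
    cell's ordinal as CellOrdinal = row_index * n_columns + column_index
    over the merged row set."""
    merged_rows, merged_cells = [], {}
    for row_tuples, cells in partials:
        row_offset = len(merged_rows)
        merged_rows.extend(row_tuples)
        for (row, col), value in cells.items():
            # XMLA cellsets are sparse: empty cells are simply absent.
            merged_cells[(row_offset + row) * n_columns + col] = value
    return merged_rows, merged_cells
```

The partials must be appended in the same order as the member groups in the original query, so the merged rows axis matches what PerformancePoint asked for.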

Closing Notes

Honestly, I don’t even know if it’s a good snippet to share, but we confirmed that it can work in very, very specific scenarios, as a last resort when you can’t influence the queries being made (we tried that too… it didn’t work out so well…). If you risk being fired for not tuning that query the CEO runs every day, you may want to test something like this… otherwise, forget what you’ve read. :)

But if this post happens to generate some requests where it could be put to good use, I’ll consider sharing it privately, or even on our CodePlex samples site.

Please be warned that in most cases you won’t need anything like this; the SSAS FE should work just fine because it’s very smart (and we are not! :) ). Only for very complex cubes and specific scenarios can something as risky as this possibly be useful.

PS: this also opens up a rather interesting possibility: spreading the inner requests across secondary servers in a scale-out farm… imagine that working automatically for some queries… will we get this in SQL 2016? ;)

As for the code, it goes something like this (lots of complexity removed, just the core steps):
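Sketched here in Python rather than the original ASP.NET, and with `execute_mdx` as a stand-in for the real ADOMD/XMLA round trip to the local SSAS instance, the core steps look roughly like this:

```python
from concurrent.futures import ThreadPoolExecutor

def handle_request(row_members, execute_mdx, group_size=3, max_workers=4):
    """Steps 2-4 of the proxy, sketched: split the requested row members
    into groups, run one query per group in parallel, and stitch the
    partial results back together in the original request order."""
    groups = [row_members[i:i + group_size]
              for i in range(0, len(row_members), group_size)]
    # The real proxy uses the .NET parallel APIs; ThreadPoolExecutor is
    # the closest Python equivalent. map() preserves group order.
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        partials = pool.map(execute_mdx, groups)
    merged = []
    for partial in partials:
        merged.extend(partial)
    return merged
```

Step 1 (pattern matching on the incoming XMLA request, with passthrough or blocking for everything else) is omitted here; in the real thing it wraps this whole function.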







Take care,



“Multi-threading” the SQL Server Analysis Services Formula Engine – I #ssas #mdx

The SSAS single threaded Formula Engine (FE)

It’s a known (and unfortunate…) fact that the Analysis Services (multidimensional) formula engine (FE) is single-threaded.

Now, for anyone not familiar with SSAS, this should come across as a very unusual limiting factor. Honestly, in a world dominated by the cloud, map/reduce, and MPP/shared-nothing platforms, (IMO) the only plausible reason we still have a single-threaded SSAS FE is that we are really looking at an amazing product, from a group of amazing minds, that unfortunately hasn’t had any major upgrade since its very first 2005 release (aside from the 2008 block computation improvements). And yes, the reason for that reason would need another kind of post…

Well, not really single threaded…

A relevant note, nevertheless: when we say the SSAS FE is single-threaded, we’re not talking about a single FE thread per server, where one thread would handle all session queries. That would obviously be unsustainable. What seems to happen, as far as I can guess, is that the FE is single-threaded per query. When the work is heavily FE-bound, a single thread will handle the FE load; other simultaneous queries will get their own FE thread from the thread pool. (Note: the total number of available worker threads is not unlimited, of course.)

A very simple query will show this behavior (using a WITH clause to inhibit the FE cache, a tip from Chris I use a lot, and a median calculation to make it very CPU/FE-bound):

with
   member [Measures].[Test1] as
   'median([Customer].[Customer].[Customer],[Measures].[Internet Sales Amount])'
   member [Measures].[Test2] as
   'median([Customer].[Customer].[Customer],[Measures].[Internet Tax Amount])'
   member [Measures].[Test3] as
   'median([Customer].[Customer].[Customer],[Measures].[Internet Order Quantity])'
select
non empty {[Measures].[Test1],[Measures].[Test2],[Measures].[Test3]} on 0,
non empty [Date].[Calendar].[Month]*[Product].[Category].[Category] on 1
from [Adventure Works]

Running on my laptop (SQL 2012 SSAS), this query averages 45-55 seconds, with a very constant 20%-25% CPU usage (4 listed cores):






In a multi-core server environment this would be even more noticeable, with a very low total CPU% usage during that query’s FE execution.

The real issue (for end users at least) is that adding more cores/CPUs will not bring any significant performance improvement for queries like this (faster CPUs should help a bit, though). A single thread will handle all the load, and all the CPU power of recent hardware will be completely wasted. You will see several FE-heavy queries taking many seconds to complete, almost without any kind of CPU/IO pressure. “Uncomfortable” to watch, to say the least.

“Partitioning” MDX queries & multi threading

If a “final round” can be expected for SSAS multidimensional, I really hope this could somehow be fixed (or made configurable, at least), so that multiple threads could handle the FE load when appropriate.

(Relevant update, see the comments below, thanks Chris! Don’t miss this post from Chris Webb, Parallelism and MDX Queries, an amazing piece of work using an SSAS stored procedure/CLR to multi-thread some specific query patterns; maybe it can be fine-tuned for a wider range of scenarios.)

Partitioning the workload by unrelated measures makes for a very strong candidate (related measures will share intermediate calculations and cache, and probably won’t benefit as much from multi-threading).

As an example, let’s split that previous query into three, one for each measure requested (note that I used 3 unrelated base measures).

(query 1, the others are similar)
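Reconstructing it from the original query above, the first of the three would be:

```mdx
with
   member [Measures].[Test1] as
   'median([Customer].[Customer].[Customer],[Measures].[Internet Sales Amount])'
select
non empty {[Measures].[Test1]} on 0,
non empty [Date].[Calendar].[Month]*[Product].[Category].[Category] on 1
from [Adventure Works]
```

Queries 2 and 3 are identical except for the base measure ([Internet Tax Amount] and [Internet Order Quantity]).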


Executing them in parallel results in something like: query 1: 24 secs, query 2: 23 secs, query 3: 23 secs. But as they execute simultaneously, we only have to wait for the longest one to get all the data we need: ~24 seconds. Less than 50% of the original single-query duration.

CPU usage now spikes predictably to ~75% for the ~25 seconds the queries take.



Possible scenarios for query partitioning

So, what do we take from here? Well, it depends a lot on each specific query workload… but it can come in handy in some scenarios:

  • you have a system where CPU capacity is clearly underused most of the time
  • you control the queries sent to SSAS (through a web frontend, for example), or you are using a frontend that can issue multiple parallel requests (tip: PerformancePoint)
  • queries are easily “partitionable”, e.g. by measure
  • you have already done everything possible regarding the usual SSAS performance tuning best practices (at least everything -you think- is possible)

In these scenarios, you can try to partition the queries, execute them in parallel, and aggregate the results. There’s a chance your users will benefit from better response times.

(And of course, partitioning queries like this is a poor solution for badly written MDX that tends to be very FE-intensive. For that, my recipe is always to refer back to Chris, Marco, Mosha, Piasevoli, Greg, Kasper, Richard, and many others, never forgetting our amazing BI team at DevScope :) )

Next post: “Multi-threading” the SQL Server Analysis Services Formula Engine II – PerformancePoint

Stay tuned!


Excel 2007/2010 Pivot Tables: Getting detailed/granular table reports from OLAP… in seconds!

(Update: SmartPivot for Excel version 2.3, a much improved version, has just been published. Please visit the SmartPivot Product Page for the latest version, with Excel BI features including fast tabular reports, instant search, quick connect, a stunning PivotViewer visualization for Excel tables & many others.)


SmartPivot Latest Version for Excel 2007, 2010, 2013 & PowerPivot


Well, I’ve been delaying & postponing this post, the main reason being wanting some assurance that it could have some actual value :). Really, I’m not sure, but anyway, here it goes!


A few weeks ago (actually, I think I can now say months… :) ) I was called in to help a customer solve a set of somewhat advanced scenarios on Analysis Services. One of the issues, actually the big one, was that end users were having a really hard time getting detailed reports (near drill-through, maximum-granularity “detail” queries) working with Excel 2007 pivot tables.


My first thought was, honestly :) , “sure… OLAP & Analysis Services are mostly aggregate-oriented, not the best option for creating large fact/row-detail reports”. We could say, if being a little “naive”, that there are other good options, like using Reporting Services, etc. IMO that really doesn’t cut it anymore. My reasons:


  • End users really don’t care whether they want aggregates or detail rows; they want a single BI repository for their information needs. And they’re right! :)
  • Solutions like building custom reports have the problem that they don’t allow free ad-hoc report authoring & filtering
  • From a “single version of the truth” perspective (whatever that means :) ), using several tools & repositories to show almost the same information is hardly the best way to go (IMO)


So, perceiving that end users had good reasons, I went a little further to see why, when getting back *only* around 10,000 records with 8/9 fields in a pivot table, Excel 2007 mostly… ahh… “crashed”, or took several minutes to bring the “tiny” 10,000-row pivot table back.


And the results were somewhat surprising. In this specific scenario, it really has nothing to do with OLAP/Analysis Services, but with the MDX query Excel 2007 generates, which, even with hidden grand totals (to mimic a table result), requests aggregated values from the OLAP server (All members).


What’s the problem with this? Well, for a result of 784 real result rows (what the end user sees), using 13 columns, OLAP processes approx. 125,000 total cells and returns approx. 10,000 total rows. But if the result rows are now 6,207, we get 1,211,896 total cells from OLAP, and 86,564 total OLAP rows. Why is that?
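A quick back-of-the-envelope check of those figures puts the overhead in perspective:

```python
# Rows the user sees vs rows actually returned by OLAP, from the figures above.
shown_rows = [784, 6207]
olap_rows = [10000, 86564]

ratios = [olap / shown for shown, olap in zip(shown_rows, olap_rows)]
# Roughly 13-14x more rows come back from OLAP than the report shows.
print([round(r, 1) for r in ratios])  # [12.8, 13.9]
```

The overhead ratio holds (and even grows slightly) as the report gets bigger, which is why the larger report hurts so much more.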


The problem is that Excel still requests All-member aggregations even when it is not showing those values (that’s a “table” report layout, remember?). In this case, the yellow rows are returned from Analysis Services but never shown.


The query goes something like this (note the drilldown-member style of query):
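I can’t reproduce the exact query here, but the general shape Excel emits is along these lines (illustrative only, with hypothetical cube/hierarchy names from the Contoso sample); note the All member, which drags in the aggregate rows the table layout never shows:

```mdx
select {[Measures].[Sales Amount]} on 0,
       non empty Hierarchize(
         DrilldownMember({[Store].[Store].[All Stores]},
                         {[Store].[Store].[All Stores]})) on 1
from [Contoso]
```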




This is unfortunate, because a simple custom MDX query using standard cross joins to get the same “table”-style results actually returns in just a few seconds! Something like:
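An illustrative shape of that crossjoin version (again with hypothetical Contoso names): it asks only for the leaf members of each hierarchy, so no All-member subtotals come back:

```mdx
select {[Measures].[Sales Amount]} on 0,
       non empty
         [Store].[Store].[Store].Members *
         [Product].[Product].[Product].Members on 1
from [Contoso]
```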



(Note: a few weeks later, Richard Lees posted about exactly the same problem, with great insights & tips: using properties or custom “reporting” hierarchies. Please see here. That’s a great post. It came after my issue; it sure would have helped me at the time! :) )


So, although testing this in Excel is difficult due to performance, we’re maybe talking about something like this (relation between real rows and OLAP rows, with 13 columns):




So, can we do something to ease things up, while still maintaining end-user authoring & filtering? :) Well, yes we can! We really can’t tailor the MDX query generated by Excel… but we can change the MDX query of an external data table, and we can get the filters from the pivot and use them in our external table query! :)


The results are really impressive; actually, I was deeply surprised. Warm-cache queries for 6,207 rows run in under a second, and 73,094 rows in under 15 seconds. We were actually able to return more than 1 million rows from Analysis Services, using this technique, in a few minutes.


(Note: when using the native pivot table for the 73,094-row scenario, I couldn’t get the query to end without first crashing my Excel. If you use Prism HUD (great tool, btw), the memory & CPU footprint of these native pivot table queries is *very* noticeable…

In fact, be careful when exploring multi-column Excel pivot tables with Analysis Services, particularly if both are on the same box… I got the impression that my disk almost “fried” due to excessive memory usage…)

So, here’s the sample I’ve worked on; I would really like to get some feedback. :) You can filter the table report like a regular pivot table. It checks the results so that the grand totals match, and it shows the refresh time in seconds. It also does the trick of changing the column labels to the friendly ones.



Some results (note excel result rows vs olap rows):




Some notes:


  • When opening the workbook you must enable macros; that’s how the custom MDX query is changed when adding/removing filters in the sample
  • I use a named instance for SQL 2008, so if you have a default instance you’ll have to change it on the connections tab
  • The sample is based on the Contoso Retail OLAP sample
  • If you need to add more columns to the sample, just edit the query on the config sheet, as long as you leave the {filters} token in place
  • It doesn’t support multi-select filters (hey, it’s just a sample! Actually my customer did an awesome job with this start, and the final workable Excel was great)
  • There’s a strange result with employee jae,pak which doesn’t show up in the native pivot table (feel free to send feedback on this; my time is limited :) )






So, that’s it. I think there’s some potential here and I would like to share it & receive feedback. I talked with several people about the possibility that we were missing some Excel option to fix this, and that question is still open :) ; if you know of one, please share it.


Richard Lees mentions that this should be fixed in 2010, but I’m still getting the same behavior with the sample; again, maybe we missed something. I would also like to share some final notes:


  • End users really shouldn’t have to worry about using Analysis Services only for aggregated values. What they want is a single repository for their information needs, especially if the data scope is the same! I actually think there should be a really, ahem :), Unified Model that covers both types of scenario.
  • As it seems we can actually use OLAP for at least some more granular queries… why doesn’t Excel offer a real table-style, no-totals, end-user-authored pivot table? It doesn’t seem difficult to me…






Update Aug-2010:

Update Sep-2011: (some notes from Chris Webb on the issue, now regarding Denali tabular & DAX scenarios; surprisingly… still a problem!)

Sample Download:


DevDays DAT302, Tuning & understanding MDX Query Performance in SQL Server 2008 Analysis Services





A little bit late, I know, but now in English. Thanks for the “heads up”, Pedro.


Anyway, here it is: all the materials I used in the DevDays session. Notes of interest:

  • A report/dashboard that shows the MDX trace log, properly organized by query
  • It’s possible to filter, using sub-report links
  • Uses a custom report in Management Studio
  • I think it should run fine in SQL 2005 Management Studio (needs SP2 for custom reports)
  • Sample code; use as-is, it can be buggy! :)

All feedback is welcome!






MDX Studio

Blog Mosha Pasumansky

BIDS Helper

A Solution for Collecting Analysis Services Performance Data for Performance Analysis

Microsoft SQL Server Community Samples: Analysis Services

SQL Server 2008 White Paper: Analysis Services Performance Guide

SQL Server Best Practices Article: Identifying and Resolving MDX Query Performance Bottlenecks in SQL Server 2005 Analysis Services

Microsoft SQL Server 2008 Analysis Services Unleashed (Paperback)