SmartCharts, SmartPivot & Gantt Chart for #Excel – Latest features & releases @DevScope #Apps

We updated some of our apps and add-ins last week. Here's a brief post introducing some of the new features:

SmartPivot – now with Measure Group/Table Filter in search pane, also Table Report & Filter Tools improvements

A much-requested feature is now available (honestly, it should have been in the initial Search release…): the SmartSearch pane now allows measure group/table filtering, much like the regular PivotTable field list in Excel.

Download the new SmartPivot release here.


We also fixed some issues with Table Report & Filter Tools (both Saved Filters & Filter by List).



SmartCharts – now with a real-time Share feature & free download during SharePoint Conference 2014

You can now share your charts session by opening it in a browser or on a tablet/mobile device (disclaimer: some Safari/iOS issues still to fix). You can then collaborate in real time with anyone connected to that SmartCharts session.

Also, you can download the SmartCharts Task Pane for free from the Office Store during #SPC2014. :) Feedback will be appreciated!

Download SmartCharts Task Pane in Office Store



Gantt Chart & SmartCharts for SharePoint

SmartCharts for SharePoint will be published soon ;) stay tuned.


Gantt Chart for Excel, on the other hand, is already available in the Office Store, and it's free! :)

Download Gantt Chart for Excel

Gantt Chart for Excel


Take care,




“Multi-threading” the SQL Server Analysis Services Formula Engine II – a parallel query msmdpump proxy

Following a previous post (“Multi-threading” the SQL Server Analysis Services Formula Engine – I #ssas #mdx), we recently returned to the issue of multi-threading and the SSAS formula engine. See that post, or this post by James Serra, for reference.

Honestly, it's kind of amazing that something like the SSAS engine can run incredibly well using a single-threaded model for each query. It's damn smart, and as long as it stays smart you usually don't need brute force. :)

Until… you find yourself with a very (very) complex cube, together with a rather complex scorecard model also built completely in SSAS/MDX, and… PerformancePoint with its WITH clauses (which, as Chris Webb has pointed out before, disable the FE cache).

Sample query generated by PerformancePoint (a scorecard model, KPIs on rows), running in 50-60 secs:



Returning a complex, FE-intensive scorecard with several different KPIs (from completely different measure groups), with values/metrics like value, YTD value, prior year, end-of-year projection, target, YTD target, score and so on… well, requires a not-so-usual amount of computation from SSAS. Worse, due to the, let's call it “reasonable” :) amount of MDX scopes involved, it triggered some internal thresholds and stopped being smart… maybe switching to cell-by-cell mode? Query 3 KPIs individually and none exceeds 2-3 secs; get the 3 together in the same query –> 30-40 secs…

After exhausting all the tuning we could possibly remember, we got the query down from 1m20s to under 40-50 secs. Still annoying: why all that CPU power if we are waiting 50 damn seconds?

Another thing to note was that every row was a different KPI, completely isolated from all the other rows returned: really a very good candidate for partitioning and multi-threading. But SSAS doesn't do that (a query hint would be great, SSAS team ;) ).


(And this is where I have to give the usual disclaimer: please do this at home, never at work! It's not supported… aside from tolerating those 50 secs, what follows is the worst possible thing you could do…)

We had previously built some SSAS HTTP msmdpump proxies (msmdpump allows you to query an SSAS server over HTTP), adding some “extra” features we needed…

So why not try intercepting that query in a “fake” msmdpump (proxy), partitioning it by the member rows requested, running a bunch of parallel MDX queries against the SSAS DB, getting the results, joining the cell sets together, and returning them as an SSAS XMLA pump reply…? (Kinda… nuts… yes.)

And well, far, far, far away from being a reusable module for these scenarios (we built it for this specific one only), we changed the connections in PerformancePoint to point to the HTTP endpoint (only for these KPIs/scorecards) and gave it a go:

From here: query running in 50 secs, almost no CPU activity:


To here: query now running in under 14 secs (the multi-threading is very noticeable; obviously you should have spare CPU available, so not for an already heavily loaded server):


Short story: how does it work?

1 – A web app gets the XMLA HTTP requests targeted at the ISAPI SSAS msmdpump and inspects them for a very specific pattern. Everything else is passed through to the regular msmdpump (debug/testing) or blocked (on the production server we only need to answer a very specific pattern from PerformancePoint).

2 – It splits the member list requested on the MDX rows axis, generating an identical MDX query for each group of N members.


3 – Using the .NET parallel APIs, it then sends X simultaneous queries/threads to the local SSAS DB:





4 – And the tricky part: getting the result cell set XML for each query and aggregating it into a single XMLA cell set that is then returned to the original HTTP request made by PerformancePoint (you can also fire a Management Studio MDX query and pass it through the proxy).

Closing Notes

Honestly, I don't even know if it's a good snippet to share, but we confirmed that it can work in very, very specific scenarios, as a last resort when you can't influence the queries being made (we tried that too… it didn't work out so well…). If you risk being fired for not tuning that query the CEO runs every day, you may want to test something like this… otherwise, forget what you've read. :)

But if this post happens to generate some requests where it can be put to good use, I'll consider sharing it privately or even on our CodePlex samples site.

Please be warned that in most cases you won't need anything like this; the SSAS FE should work just fine because it's very smart (and we are not! :) ). Only for very complex cubes and specific scenarios can something as risky as this possibly be useful.

PS – this also opens a rather interesting possibility: spreading the inner requests across secondary servers in a scale-out farm… imagine that working automatically for some queries… will we get this in SQL 2016? ;)

As for the code, it goes something like this (lots of complexity removed, just the core steps):
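We can't reproduce the full proxy here, but the same split/parallel/merge idea can be sketched in a few lines of Python (the real implementation is a .NET web app speaking XMLA over HTTP; `run_mdx`, the query template and the member list below are hypothetical placeholders, not the actual proxy API):

```python
from concurrent.futures import ThreadPoolExecutor

def chunk(items, size):
    # step 2: split the member list requested on rows into groups of N members
    return [items[i:i + size] for i in range(0, len(items), size)]

def parallel_query(run_mdx, mdx_template, members, group_size=3, max_threads=8):
    """Steps 2-4 of the proxy: generate one identical MDX query per member
    group, run them simultaneously against the SSAS DB, then join the
    partial results back together in the original member order.
    `run_mdx` is whatever executes a single MDX query and returns its rows."""
    queries = [mdx_template.format(on_rows=", ".join(g))
               for g in chunk(members, group_size)]
    # step 3: fire the queries in parallel threads
    with ThreadPoolExecutor(max_workers=max_threads) as pool:
        partial_results = list(pool.map(run_mdx, queries))
    # step 4: merge the partial results, preserving the original order
    merged = []
    for rows in partial_results:
        merged.extend(rows)
    return merged
```

The order-preserving merge is what lets the reply look like a single cell set to PerformancePoint; the hard part in the real proxy is doing that merge at the XMLA/XML level rather than on plain row lists.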







Take care,


SmartPivot new build – now with Saved/Quick Pivot filters and Filter by List improvements

Fresh new build for SmartPivot users, download here as usual. This version features:

  • Improvements to the Filter by List feature (interestingly, one of the features most mentioned by users)
  • And (finally!) a new Saved/Quick Filters feature. :)

New Quick/Saved Filters feature

This feature allows you to save filter selections on a pivot for later use on other pivots targeting identical hierarchies. The saved/quick filters are persisted locally in the user profile configuration, so you can always reach them very quickly.
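Conceptually, persisting named filter selections per hierarchy in the user profile amounts to something like the following sketch (this is not SmartPivot's actual storage format or API, just a hypothetical Python illustration of the pattern):

```python
import json
import os

# hypothetical store location in the user profile
STORE = os.path.join(os.path.expanduser("~"), "smartpivot_filters.json")

def load_store():
    # read the saved-filters store, or start empty on first use
    if os.path.exists(STORE):
        with open(STORE) as f:
            return json.load(f)
    return {}

def save_filter(name, hierarchy, members):
    # persist the current selection under a friendly name, keyed by hierarchy
    store = load_store()
    store.setdefault(hierarchy, {})[name] = members
    with open(STORE, "w") as f:
        json.dump(store, f)

def apply_filter(name, hierarchy):
    # return the saved selection so it can be applied to any other pivot
    # targeting the same hierarchy (empty if nothing was saved)
    return load_store().get(hierarchy, {}).get(name, [])
```

Keying the saved filters by hierarchy is what makes them reusable across pivots: any pivot over the same cube exposes the same hierarchy unique names.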


Ex: filter your pivot as usual, or through SmartPivot fast search




Now, pick the Quick/Saved Filters option to save your filter:





Select your Pivot field and Save Current Filters option:


Now just create any other connection to the same cube, and use the quick/saved filters to apply the exact same selection as before.

(As with most features in SmartPivot, you can fast-search saved filters.)


Double-click, and your previous selection is now filtering the new pivot:


Improvements in Filter By List

The Filter by List feature now works as a task pane (expect most SmartPivot dialogs to be migrated to task panes, as they allow for better usability).

Improvements in this release:

  • Available as an Excel Task Pane
  • Pick values from range
  • Filter by member names or member keys


Other improvements

Fixed an issue with table reports and orphaned calculated measures (undefined measure groups)

Wrapping up

Download SmartPivot through the product page here.


Take care,


New samples at @devscope #msbi codeplex – II – #PowerShell Dynamic Outliers with #SSAS mining models

Finally…! We updated our CodePlex project with an improved sample for this. :) BTW, see the first post in this thread for other PowerShell MSBI data-driven/dynamic samples & utilities.

II – #PowerShell Dynamic Outliers Monitoring with #SSAS mining models

The idea for this sample goes right back to the initial releases of the Excel Data Mining Add-in. Now that's a cool Excel add-in; in fact it's way more valuable today, and it's a little sad that data mining is even a little bit forgotten in the MSBI stack :( (yes, it is…). The issue with SSAS Data Mining, and particularly this add-in, I think, is that it really was ahead of its time! Because now, in our ever-changing data-driven world, it's the time for such agile & self-service data mining/data intelligence scenarios.

Anyway, that add-in showed that it was possible to do 100% dynamic outlier detection based only on the given data/schema. Models would be generated at runtime for each dataset.

And so, finally, we had the time to take this concept and turn to PowerShell to make a lot of other data-driven scenarios scriptable & able to run completely automated (e.g. scheduled/intelligent alerts/trends/anomalies monitoring).

So, what if we could use this to monitor the several data streams we have available in our solutions? We already have & use several very agile & data-driven KPIs/reports/datasets… but those still usually involve fixed alert thresholds. What if we had something looking at these data streams every day that could tell us, “hey, here's something unusual, are you aware?” (i.e. something that would do what I usually do, so that I can go on doing several other things… like blogging and reading :) ).

And that's precisely what the sample script does:

  • Iterate over all the views in a predefined schema (love this pattern for DBA/BI monitoring)
  • Execute them & get the datasets; we used SQL Server, but you can use any DataTable
  • Pass the DataTables to the data mining module (it will, like the Excel Data Mining Add-in, dynamically create a temporary model for each table, do the outlier stuff plus some nasty hacks of our own, and get an outlier probability value for each row)
  • Then filter (in this scenario) for today's outliers, using a convention-styled approach: choosing the first date column available (BTW, a better approach would be to save the last execution time for each view and use that as the date filter)
  • If there are outliers, send the report by mail (using the cmdlets I talked about in my last post)
  • If not, keep quiet!

And anytime I want to monitor another dataset, I just create another view in that schema and the script will adapt accordingly… hence the data-driven mantra applies :) (a data+services mindset to explore in a future post…).

Like this:

A schema (mon) where I can put the data streams I want to monitor for outliers/anomalies.


As a sample I use a dataset of SQL Analysis Services processing logs/elapsed times, by object (measure group, partition, dimension,…).

I ensure that I have a date column to allow for proper date/time filtering after the outlier detection (1-month-old outliers aren't that interesting anyway).



Now imagine that I force something strange into that data stream, changing the start date so as to increase the processing time for that SSAS object (creating an anomaly)…


And I run the provided outlier sample script…. :)


And… amazingly (remember that I did not configure any kind of warning thresholds…), my two “simulated” outliers are now highlighted in my mailbox:


How cool is that? :) The script doesn't have to know the data in any way; anything goes, and that's the beauty of it.

(Defending myself :) from possible comments by the MSBI team regarding this screenshot… I can only say that I promise to reassess my mobile stack after the “new” release of the MSBI stack for mobile… :) )

Be advised, though, that the script is just a sample; there are a -few- known “issues” at the moment:

  • Do not expect perfect results; we didn't spend much time on SSAS mining model tuning, and we are limited to the accuracy we can expect from Analysis Services mining models (it would be great to get some data mining experts' help/feedback!)
  • Some column types can cause the mining model/structure not to be created
  • We had to use a hack… and introduce an outlier probability of 2 when we were getting an outlier probability of 0… (cof, I know, we'll have to dig deeper into the reason for this… surely we messed up somewhere)
  • Still missing is highlighting the column where the outlier is most probable; that's possible (the Data Mining Add-in does this), but we haven't had time for that yet, sorry
  • Several other issues will appear, I'm sure… (I did say it's a sample, right? :) )

That’s it for today, browse/get the sample at !

Note that there’s a bundle download available with Excel Data Insights AddIn, Mail DataTable Reports & this outliers sample.

The “tiny” main script:

# for each view found
$viewName = $reportViewsSchema + "." + $_.Name

$data = Invoke-OLEDBCommand -connectionString $connStr -sql "select * from $viewName"

# convention: use the first datetime column found for the date filtering
$firstDateCol = ($data[0].Table.Columns | ? { $_.DataType -eq [datetime] } | select -First 1).ColumnName

$dmTable = Set-Outliers -dataTable $data[0].Table -connectionString $ssasconn

write-host "Evaluating OutlierProbability threshold..."
# dynamic threshold: the 6th highest outlier probability in the dataset
$x = $dmTable | sort-object OutlierProbability -descending | select -index 5 | select OutlierProbability

write-host "Highlighting..."
$dmTable = $dmTable |
    ? { $_.$firstDateCol -ge [System.DateTime]::Now.Date } |
    select *, @{name="SysRowCssClass"; expression={ if ($_.OutlierProbability -ge $x.OutlierProbability) { "highlight" } else { "" } }} |
    sort-object $firstDateCol -descending

$alertCount = ($dmTable | ? { $_.SysRowCssClass -like "highlight" }).Count
write-host "Today outliers: $alertCount"

# send the mail report only if there are outliers today
if ($alertCount -gt 0) {
    $html = Out-Email -subject "Outliers in $viewName : $alertCount (today)" -inputObject $dmTable -to $mailto -from $mailfrom -smtpServer $mailsmtp
}

PS – and don't tell my boss that PowerShell & data mining are doing our work now! ;) We'll figure out something else to do eventually…

Take care,



#SmartPivot new beta release with #SSAS and #PowerPivot instant text search, now supports #Excel2013

We’ve just finished publishing a new beta release for SmartPivot, our addin for Excel OLAP cubes (and now also PowerPivot/Excel 2013 Data Models).

You can download it here. Give it a try; just remember it's a beta release, so use it for testing purposes only! :) It would be great to get feedback on the new features, particularly the “fast full text search” over cubes/PowerPivot.

Now, onto the new stuff!

A new instant Search *beta* feature, for both OLAP cubes and PowerPivot Models

This is probably the main new feature of this release. We hope it can take Excel OLAP/PowerPivot pivot table usage to a whole new usability level. But tell us what you think.

We are still working on some issues with the feature, but we hope it's stable enough in the beta, allowing us to “crowdsource” the final adjustments. :)

It's rather self-explanatory: just select a pivot and click Search…


On the first run for a cube/PowerPivot model, wait a few moments for SmartPivot to read/cache all the dimension data (at the moment it will do this automatically only for PowerPivot models or local SSAS connections).

And you'll be able to search both data (members) and metadata (measures/hierarchies) with instant search results:
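Under the hood, instant search over cached dimension data boils down to an in-memory index covering both members and metadata. Here's a rough Python sketch of the matching step (not SmartPivot's actual implementation, just the general idea):

```python
def build_index(members, metadata):
    # cache dimension members and model metadata (measures/hierarchies)
    # as (kind, caption) entries, lowercased once for fast matching
    entries = [("member", m) for m in members] + [("metadata", m) for m in metadata]
    return [(kind, caption, caption.lower()) for kind, caption in entries]

def search(index, term):
    # instant search: every member/metadata entry containing the term
    term = term.lower()
    return [(kind, caption) for kind, caption, low in index if term in low]
```

The expensive part is the one-time read/cache of the dimension members (which is why the first run takes a few moments); after that, each keystroke is just a scan of the in-memory list.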



This makes exploring the data much easier for end users, as they usually know the data very well (but not always the cube concepts of measures, dimensions, attributes and others).  They can start their own model discovery.


Can we hope that in the near future a feature like this will be pervasive in every frontend and handled internally by the SSAS/PowerPivot engines? ;) (A good note to post on the SSAS team's request-for-feedback survey.) Until that's available, we hope this SmartPivot feature can help a little bit. :)


And it now supports Excel 2007/2010/2013 and PowerPivot


Excel 2013


Excel 2010


PowerPivot 2013 (my kindle book stats model)


PowerPivot 2010 (the “Understanding the US Debt” book's Excel sample companion)



Dev Tools

Not finished yet, but you can already explore the SSAS schema rowsets available when working with cube/PowerPivot models; useful for troubleshooting and advanced model discovery.




Other small improvements in this release

  • Cell Value from Table Reports – you can now start a table report from a cell value (see a previous post here), like you would with the Detail By option, but it allows you to build a table report for that specific context.
  • Duplicate Pivot option – just a handy tiny feature to save a few clicks
  • Auto-update notification

Now, we just need your feedback! Download it and tell us what you think.



Tabular Queries in Excel 2013, Good news, Bad news, DAX Queries and the missing DAX visual query builder

(Update: SmartPivot for Excel version 2.3 – a much improved version – has just been published; please visit the SmartPivot product page for the latest Excel BI features, including instant search, quick connect, a stunning pivot viewer visualization for Excel tables & many others.)


SmartPivot Latest Version for Excel 2007, 2010, 2013 & PowerPivot

The never-ending thread… :) Anyway, I just finished some tests with Excel 2013 and, like the title says, there's good news and bad news (that is, IMO).

So, bad news first, so that I can finish this post in a good mood!

Bad News – “Tabular” pivot tables on OLAP cubes: the scalability/performance problem is still there

Not that I was expecting improvements here, but it's sad to note that for now I get the same very bad experience with both 2010 & 2013.

Just loading an OLAP pivot table with 11 columns and a tiny rowset of 244 rows, I had to wait more than a minute for the data (in the process I also crashed SQL Management Studio and Excel due to the abnormal number of cells returned)…


Fortunately, I was glad to see that SmartPivot tabular report works perfectly on Excel 2013 Preview.


Same 244 rows in a native table, 11 columns, ~1.6 secs… No dumps.


Anyway, it would be great if native Excel cube pivot tables could be fixed to enable this scenario. Let's see in the RTM…

So, now for the good news…

Good News – Edit DAX query in Excel tables shows potential, but we'll need a friendly query editor soon!

If there's one thing I always hoped the tabular model could fix (and as the name “tabular” suggests…), it was real tabular queries, with real Excel tables. A single semantic model (read: *real* semantic, not BISM…) to rule them all (both pivot and table queries).

Yes, there’s more to data science than pivot tables and low grain aggregated values.

For a lot of (good) reasons it's very common for users to need that style of data layout. Why the hell can't users use the same platform/solution to get a simple list of customers by country, or products by category? (Yes, there are no metrics involved.)

And although I don’t feel the problem is properly solved in the Excel 2013 preview, there’s a new table menu option that immediately caught my attention: Edit DAX.


That allows you to edit the underlying DAX model query being used, switching from a table-style query to a full DAX query.

So, for example, I can create a table from one of my data model's existing tables,


and then change the table query to a DAX query, crafting the query to return an additional calculated column (e.g. a score based on average rating):
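For instance (with hypothetical table/column names; the exact query depends on your model), a hand-crafted query of the kind the Edit DAX dialog accepts could look like:

```
EVALUATE
ADDCOLUMNS (
    Books,
    "Score", Books[AverageRating] * 20
)
```

ADDCOLUMNS returns the original table rows plus the extra calculated column, which then shows up as a regular column in the Excel table.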


I can only guess (hope!) that sometime soon the old field list/pivot layout that today supports most pivot table/pivot chart data analysis will finally be replaced by a full-grown OLAP/tabular-friendly query editor that can target lots of data layouts (pivot/table style). Like the field list in a pivot table, there should be a field list for a data-model-based table query. That would be pretty amazing.

Because, and I'll have to disagree with some MSBI most valuable guys :), DAX (like MDX) is extremely difficult to craft manually beyond the basic stuff. For pivot-based layouts we can live (barely…) with the drag & drop field list. For tabular queries (particularly DAX queries), users will need a new (very easy & very visual) query editor tool that explores the full potential of tabular backends.

That reminds me: although I can't say that DAX is a lot more complex than MDX (I'm tempted to… :) ), or the other way around, one thing is for sure. Excel OLAP users never needed to know MDX to work with OLAP, and I hope they'll be able to get the full power of tabular without knowing DAX too!

BTW, regarding DAX queries, be sure to check Paul te Braak's promising DAX Studio and his latest posts on DAX:

DAX Querying Part I

DAX II – Extending the use of Evaluate

DAX Studio @codeplex


Final note – Slicers also work with DAX query tables, but they use visual filtering in the Excel table.

It was rather intriguing to note that I could use a table slicer (a new feature of Excel 2013) even when using a custom DAX query. In fact, the newly added DAX column was immediately available to use as a slicer. But the reason for that becomes pretty obvious: the slicer uses the Excel table column filters to filter the visible rows only; it doesn't change the underlying table query. E.g.:


So, that’s it for now. Take care,



SQL Server “Denali” CTP 3 is Out – Best Posts & my top picks (Business Intelligence Features)

Yes, the next version of SQL Server, “Denali” CTP3, is out, so finally… we can go public, start making some noise, and share thoughts. :)

No, you won’t find another deep review of “denali” here. :) No time for that, there’s so much to explore. But I’ve been reviewing several posts, so I leave you with my top posts & notes so far.

(I’ll try to keep this updated… as in… “try” :) )

Download & Documentation

Download SQL Server “Denali” CTP 3 here.

Technet > What’s New in SQL Server “Denali” CTP3

Release Notes

Product Documentation for Microsoft SQL Server Code-Named “Denali” CTP3

Books Online for SQL Server “Denali”

Online Documentation  – Tabular Model Solutions (SSAS)

Online Documentation – Tabular Modeling (Adventure Works Tutorial)

Online Documentation- DAX Table Query Syntax Reference

Online Documentation- Introduction to the CSDL Extensions for Tabular Models

Online Documentation- CSDL Concepts

Online Documentation-Connect to a Tabular Model Database (SSAS)

Online Documentation-Data Quality Services

Great Crescent Tutorial

Crescent design experience

SQL Samples for Denali CTP3

Download Details – Microsoft Download Center – Microsoft® SQL Server® code name ‘Denali’ CTP 3 Feature Pack

Installation & Walkthroughs

Dan English’s BI Blog >SQL Server ‘Denali’ CTP3 Install Experience

MSDN > Deployment Checklist: Reporting Services, Project Crescent, and PowerPivot for SharePoint

MSDN > SQL Server BI Feature Installation with SharePoint

MSDN > Install PowerPivot for SharePoint 

MSDN > Installing Reporting Services SharePoint Mode Report Server for Project Crescent and Data Alerting

Best resources & community posts so far


Microsoft® SQL Server® code name ‘Denali’, Community Technology Preview 3 (CTP 3) Product Guide


SSIS Team Blog > SSIS – What’s New in SQL Server Denali

SQL Server Code Name “Denali” Breakthrough Insight

New “BISM”, Tabular Models & DAX

Andrew Fryer > Analysis Services cubed in SQL Server Denali ctp3

Analysis Services and PowerPivot Team Blog > Welcome to Tabular Projects

Analysis Services and PowerPivot Team Blog > Creating a tabular Model with AMO

Chris Webb’s BI Blog > DAX Queries, Part 1

Chris Webb’s BI Blog > DAX Queries, Part 2

Cathy Dumas’s Blog > Tabular Designer Architecture

WesleyB’s Blog > SQL Server Denali CTP3–Lots of interesting resources

SQLBI – Marco Russo > First steps with #ssas #Tabular in #Denali #CTP3

Data Visualization > Tabular Model, Columnstore, new BIDS released!

Analysis Services and PowerPivot Team Blog > The Diagram is here…

PowerPivot Denali: Parent child using DAX

Project “Crescent”

YouTube > NEW Demo for SQL Server Project “Crescent”

Getting Started with Project “Crescent” and PowerPivot for Excel in SQL Server Code-Named “Denali” Community Technology Preview 3 (CTP3)


SQL Server Reporting Services Team Blog  > SQL Server codename “Denali” CTP3, including Project “Crescent” is now publically available

Technet> Project Crescent Overview

Technet>Data Visualizations in Project Crescent

Small multiples - one bubble chart per month

Technet > Crescent FAQ, Troubleshooting, Tips, and Trick

Technet > Tutorial Create Charts, Tiles, and Other Visualizations in Project Crescent

Reporting Alerts *NEW!!!*

Reporting Alerting in SQL Server Denali ctp3 – Insufficient data from Andrew Fryer – Site Home – TechNet Blogs


Data Quality Services *NEW!!!*

Data Quality Services (DQS) > Introducing Data Quality Services

Data Quality Services (DQS) > How to add Reference Data Services in Data Quality Services (DQS)

Interesting findings

Hope that's not for RTM… Does this mean no Excel on tabular models in DirectQuery mode??? Will Excel support DAX directly? 2010? 2012?

“Connecting to DirectQuery Models

After you have switched the model to DirectQuery mode, traditional OLAP clients cannot connect to the model. For example, if you attempt to create an MDX query against a DirectQuery model, you will get an error indicating that the cube cannot be found, or has not been processed. However, you can use DAX formulas and XMLA queries. For more information about how you can perform ad hoc queries against tabular models, see Tabular Model Data Access.”

Promising… finally! Note that it's driven by data feeds… so we've got great potential here.

Open Alert Designer from SharePoint library

Self Service Alerting:

Alerting is a new capability that we are adding to Reporting Services. As soon as the report server is upgraded to Microsoft SQL Server Code Name “Denali”, it will enable any user that can access reports previously built with SSRS 2005, SSRS 2008 or SSRS 2008 R2 whether in Report Builder or BIDS, to setup alert rules and be alerted when report data changes occur that match a set of rules.  No changes are required to the existing reports. You’ll need to have SSRS integrated with SharePoint in order to take advantage of this new capability (the reports must also use stored credentials data sources, similar to subscriptions).  Management of alerts is also enabled through SharePoint through a common interface.

CTP 3 Books online – Data Alerts (SSRS)

Workflow in Reporting Services alerting

Data Alert Designer

Areas within the Alert Designer user interface

“RDLC Designer and Report Viewer Control: Have been upgraded with Reporting Services “DENALI” code base (Including support for DOCX, XLSX Export, Map functionality, RDL 2010 schema support…). Starting with the next release of Visual Studio, RDLC and Report Viewer will ship with the same level of functionality of the most current release of SQL Server.”

I want to believe that for RTM that’s still hope for a *real* semantic model… that feeds crescent & other tools like excel, and supports both tabular models, as olap models… Come on guys…

“You always start Crescent from a tabular model in a SharePoint Server 2010 document library or in a PowerPivot Gallery. The model can be:

A PowerPivot file (XLSX) in a PowerPivot Gallery in SharePoint Server 2010. For more information, see Use PowerPivot Gallery.
A shared data source (RSDS) with a Microsoft Business Intelligence Semantic Model data source type, based either on a PowerPivot file or on a tabular model on an Analysis Services server. For more information, see Tabular Model Connection Type (SSRS).
A BISM connection file (BISM) based on a tabular model on an Analysis Services server. BISM connection files can be either in a standard SharePoint Server 2010 document library or a PowerPivot gallery. For more information, see PowerPivot BISM Connection (.bism).”


Closing “rants” :)


– Ahhh… Excel team: restricting the new models to pivot-table-style reporting… hmmm… you're fixing this, right? ;) It seems almost unbelievable that MS keeps missing the wide usage of *their own* Excel as a BI tool, and isn't (yet) building a proper query editor that covers *real* business intelligence semantic models… with much better query editing & query results (both pivots & tables)…

– And yes, I've got a feeling, and for now I'm agreeing with Andrei Pandre (and sorry, Andrew, I have to disagree): I can't easily spot a proper BISM (semantic model) (cof, yes, apart from the .bism extension…) that unifies & shields client APIs from tabular or typical OLAP models. I see two models, with some clients (Excel) being forced to see a tabular model as an OLAP model, and others seeing only a tabular model (Crescent)… kind of “messy”… I hope it will get better! The old Report Models seem much more semantic to me…

– And sorry, guys… it's a little too early to go into a “frenzy” of deep-dive DAX learning. :) No need to rush… honestly, we should be focusing more on helping MS get a great new version out. If they get it right, DAX will be a huge success… but only then…

Kind regards,