TrustRadius – A great new Technology Crowdsourcing concept from Vinay Bhagat

Earlier in the year, I wrote an article about Technology Crowdsourcing.  There is a new player in this space named TrustRadius.


The talented Vinay Bhagat and his team have created a unique platform which compiles software reviews sourced from product experts.  Key differentiators in this new platform include:

  1. TrustRadius focuses on sourcing in-depth insights from primary product users and experts.
  2. TrustRadius reviews are structured, and reviewers will soon be able to extend and amend them over time.  Structure allows the reviews to be curated into artifacts such as product comparisons.

TrustRadius launched in May, was funded by the Mayfield Fund in July, and is scaling rapidly.  So far, the site has 23k monthly visitors and is growing at 30% per month.

Review of Microsoft Business Intelligence at TrustRadius

For interested higher education folks, I’ve posted a few reviews there that you may find interesting:

Benefits of Social Collaboration = Productivity

How do you build a sense of “team” within an organization?  It seems like such a simple concept, but politically and organizationally it can be a very difficult thing to accomplish.  Many start-ups have an immediate sense of team.  Employees are engaged and always extremely busy.  Over time, workers begin working remotely and the culture weakens.  Then, many start-ups get acquired.  The acquiring companies often “forget” about the old company culture and suddenly begin transformation efforts to bring the start-up into the corporate fold.  The irony in many of these situations is that the corporate culture is weaker than that of the start-up.  But they just wanted the customers anyway, right?

Even if you are not a start-up, how does your organization build, and more importantly, maintain a sense of team?  Does social media play a part?

How do you take a team that looks like this:

Image via InformationWeek

To look more like this?

I have worked for several organizations and have seen corporate social media tools work fantastically.  I have a preference for a tool called Yammer (now owned by Microsoft).  However, there is a proliferation of good social media tools that may perform a similar function.  A relatively new Charleston, SC-based company, Sparc, has a product called Sparcet.  Unlike Yammer, Sparcet focuses almost solely on recognition and employee engagement.  Intuit, the company known for QuickBooks, has a neat product, Quickbase, for spinning up DIY custom apps that promote productivity as well.

Here is a quick look at these tools:

Image via CrunchBase

Regardless of the tool that you use, evaluate to see if this makes sense for your organization.  It may even lead to the formation of a Social Media Team.

Recently, Georgetown University’s CIO, Lisa Davis, announced a key development from University Information Services on our Facebook page: the ability to check the next GUTS (Georgetown University Transportation Shuttle) bus via our mobile application.  To mark the occasion, we visited students waiting in line for the Georgetown shuttle across D.C.  The mobile app shows each shuttle en route, so riders know where it is and how long they will wait for the next one.  This was a big success and helped us celebrate the accomplishment internally.  In addition to the fun photos, it engaged the community and brought more attention to the hard work of our department.  More attention = more adoption.  More adoption = better success.

So, don’t take my word for it.  Here are a few benefits that others have stated about the use of social media as a team collaboration tool.

Many of these are taken from Yammer’s website under “Customer Success”:

  • Strengthened employee collaboration. From executives in headquarters to stylists on the floor, personnel use Yammer to share experiences, questions, and answers.
  • Nearly real-time process improvements. Westfield has used insight gleaned from its Yammer feed to improve everything from computer training to gift-card programs.
  • Savvy leveraging of mobile devices. Employees can access the network when they’re away from their desks, posting real-time updates on happenings at shopping centers.
  • More satisfied customers. As more employees and retailers tap into Westfield’s network, they’re influencing events and policies to improve the shopping experience.
  • Better information sharing. Staff use Yammer as their forum for sharing business content, including articles from publications such as The Harvard Business Review and Forbes.
  • Comprehensive collaboration. The network unites response agents in the field with claims processors at headquarters, who can tap in on any device.
  • A more effective social intranet. Yammer integrates with Nationwide’s key applications, including SharePoint.
  • Improved productivity. Better information sharing and the crowdsourcing of ideas means faster responses to business and customer demands.
  • A stronger corporate culture. The network helped transform a widespread employee base into a more tightly knit workforce focused on customer satisfaction.


Warning Signs That You Need Better Business Intelligence

Many people ignore the warning signs.  My memory immediately flashes to warning labels on cigarette packaging or the famous 80’s commercial by Partnership for a Drug-free America (This is your brain…this is your brain on drugs).

Unfortunately, warning signs for outdated business intelligence systems may not be that clear…and are much more easily ignored.  Let’s start with a few basic questions:

  1. Do you often go to the ‘data guy’ to get reports or data from your ERP systems?
  2. Do you have to contact multiple people to get the data that you require to run a report?
  3. Is there a central portal where you visit for reporting needs?
  4. Are you able to answer the majority of your business questions without involving the IT department?
  5. Are you stuck with canned reports, or do you have the flexibility to run in-memory BI with tools such as Microsoft PowerPivot, Qlikview, or Tableau?
  6. Are you able to write your own reports, and more importantly, understand the data that is available?
  7. Do you have dashboards set up to govern key business areas?
  8. Are you able to share those dashboards in real-time to further collaborate?
  9. Does your business intelligence system allow you to look across functional areas to compare and contrast?
  10. Is there a process in place to request that additional data be added to the system?

Now that you’ve had a chance to think about the questions above, let’s look at what I would consider a positive and negative response to each question.

  1. Do you often go to the ‘data guy’ to get reports or data from your ERP systems?  Positive: No.  Negative: Yes.
  2. Do you have to contact multiple people to get the data that you require to run a report?  Positive: No.  Negative: Yes.
  3. Is there a central portal where you visit for reporting needs?  Positive: Yes.  Negative: No.
  4. Are you able to answer the majority of your business questions without involving the IT department?  Positive: Yes.  Negative: No.
  5. Are you stuck with canned reports, or do you have the flexibility to run in-memory BI with tools such as Microsoft PowerPivot, Qlikview, or Tableau?  Positive: in-memory flexibility.  Negative: canned reports only.
  6. Are you able to write your own reports, and more importantly, understand the data that is available?  Positive: Yes.  Negative: No.
  7. Do you have dashboards set up to govern key business areas?  Positive: Yes.  Negative: No.
  8. Are you able to share those dashboards in real-time to further collaborate?  Positive: Yes.  Negative: No.
  9. Does your business intelligence system allow you to look across functional areas to compare and contrast?  Positive: Yes.  Negative: No.
  10. Is there a process in place to request that additional data be added to the system?  Positive: Yes.  Negative: No.

Use these questions as a quick litmus test.  If you answered negatively to 4 or more of the questions above, you should revisit your business intelligence strategy.  If you answered positively to 6 or more of the questions above, you are likely headed in the right direction.  That being said, it is never a bad idea to reassess and evaluate strategies to improve your business intelligence environment.  Considering the constant change and fantastic software tools that are out there – you can always find something that will add to the value of your BI strategy.  Caution:  many of these tools are expensive, so evaluate carefully and find the right fit for your budget and your organization.
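For readers who like to automate such checklists, the litmus test can be sketched in a few lines of Python.  This is a hypothetical sketch; the positive answer per question follows the explanations below (“No” for questions 1 and 2, “Yes” for the rest):

```python
# Hypothetical sketch of the BI litmus test above.
# Positive answers inferred from the explanations: "No" for Q1-Q2, "Yes" for Q3-Q10.
POSITIVE_ANSWERS = {1: "No", 2: "No", 3: "Yes", 4: "Yes", 5: "Yes",
                    6: "Yes", 7: "Yes", 8: "Yes", 9: "Yes", 10: "Yes"}

def assess_bi_strategy(answers):
    """answers: dict mapping question number -> 'Yes' or 'No'."""
    positives = sum(1 for q, a in answers.items() if POSITIVE_ANSWERS[q] == a)
    negatives = len(answers) - positives
    if negatives >= 4:
        return "Revisit your business intelligence strategy."
    if positives >= 6:
        return "Likely headed in the right direction."
    return "Mixed results: reassess selectively."

# Example: self-service reporting is in place, but dashboards are not shared.
answers = {1: "No", 2: "No", 3: "Yes", 4: "Yes", 5: "Yes",
           6: "Yes", 7: "Yes", 8: "No", 9: "No", 10: "Yes"}
print(assess_bi_strategy(answers))  # Likely headed in the right direction.
```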

If you would like more explanation as to the responses to my questions above, I have provided them below:

Do you often go to the ‘data guy’ to get reports or data from your ERP systems?

I have worked in many organizations and found that only one specific person, or a small group of people, has access to organization-wide data.  If you find yourself constantly going to one person to get the data that you require…or to get ‘that report’ written, then the business intelligence system is failing.  A proper business intelligence system should allow you, as an end user, to get the types of data that you require.  It may impose appropriate security on the data, but it should give you access to the data that is required to do your job.

Do you have to contact multiple people to get the data that you require to run a report?

If you find yourself contacting one person in Finance to get the finance data – and one person in Human Resources to get the HR data – and one person in Sales to get the sales data, then the business intelligence system is failing.  An appropriate BI solution should bring this data together and make it accessible cross-functionally.

Is there a central portal where you visit for reporting needs?

Given the cross-functional nature of any business intelligence solution, it is important to have a BI portal in place.  This portal may address key information such as:

  • Report glossary
  • Data definitions
  • Training
  • Change requests
  • Project activity
  • Maintenance schedules

Without such a portal, this information is difficult to cobble together and it leads to confusion as end-users are visiting multiple locations to find this information.  This portal serves as a one-stop shop for business intelligence, reporting, and analytics needs.

Are you able to answer the majority of your business questions without involving the IT department?

This question is two-fold.  The first part is to ensure that the business intelligence system has the data required to fulfill your request.  The second part is to ensure that, if the system does have all of the required data, it is accessible to you.  Security is important, but it can also be over-engineered.  I recently stepped into a BI project where each report joined against a security table with over 3.5 million rows.  This killed performance and made the experience so frustrating for end-users that they started to spawn their own shadow systems.  Not good.  Make sure that you have the appropriate security in place for your data, but remember that security for BI projects will likely not be the same as the transactional security that you have set up in your ERP systems.
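To make the contrast concrete, here is a hypothetical sketch of the coarser-grained approach: resolve a user’s access once, at a level such as department, instead of joining every report query against an enormous row-level security table.  The names and mappings are made up for illustration:

```python
# Hypothetical sketch: coarse-grained BI security resolved per user,
# instead of joining every query against a multi-million-row security table.
USER_DEPARTMENTS = {
    "alice": {"Finance", "HR"},
    "bob": {"Sales"},
}

def visible_rows(user, rows):
    """Filter report rows down to the departments the user may see."""
    allowed = USER_DEPARTMENTS.get(user, set())
    return [row for row in rows if row["department"] in allowed]

report = [
    {"department": "Finance", "amount": 1200},
    {"department": "Sales", "amount": 800},
    {"department": "HR", "amount": 450},
]
print(visible_rows("bob", report))  # only the Sales row
```

The point of the design is that the security check is cheap and coarse; the heavy lifting stays out of the per-report query path.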

Are you stuck with canned reports, or do you have the flexibility to run in-memory BI with tools such as Microsoft PowerPivot, Qlikview, or Tableau?

Business users will always find a report that isn’t written…or a permutation of an existing report that is required to fulfill a specific need.  For the canned or standard reports that you make available, try to make them as flexible as possible with report parameters.  These filters allow you to further refine the report for various users.  Once the user community becomes used to the environment, they will quickly outgrow canned reports.  This is where in-memory BI plays a big part.  Allow power-users the flexibility of further analyzing data using in-memory tools.  I’ve seen this done very well with Microsoft PowerPivot, Qlikview, and Tableau among others.  This collaborative analysis may lead to future canned reports or dashboards that will provide benefit to the larger community.  And, if not, it provides the flexibility to perform quick cross-functional analysis without the effort of creating a full report.  In the case of Qlikview, users can collaborate and share dashboard data in real-time.  This is very neat.
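The kind of quick cross-functional slicing that in-memory tools enable can be sketched in a few lines.  This stand-in uses plain Python rather than any of the tools named above, with made-up departments and figures:

```python
# Hypothetical sketch of ad-hoc, in-memory analysis: joining HR headcount
# with finance spend by department, without writing a formal report.
headcount = {"Finance": 12, "Sales": 30, "HR": 5}
spend = {"Finance": 240000, "Sales": 900000, "HR": 95000}

# Cross-functional join on the departments present in both data sets.
spend_per_head = {
    dept: spend[dept] / headcount[dept]
    for dept in headcount.keys() & spend.keys()
}
for dept, value in sorted(spend_per_head.items()):
    print(f"{dept}: {value:,.0f} per head")
```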

Are you able to write your own reports, and more importantly, understand the data that is available?

If your business intelligence environment is set up correctly, power-users should have the ability to write custom reports and save them to either a public or personal directory.  If the BI implementation has been done well, power-users will also understand the metadata that fuels the reports.  In a recent implementation, I’ve seen this done well with IBM Cognos and its Framework Manager.

Do you have dashboards set up to govern key business areas?

Dashboards are always the eye-candy of a BI implementation.  With the proliferation of good dashboard tools, this is an integral part of a modern BI solution and is a key aid in helping to drive key business decisions.

Are you able to share those dashboards in real-time to further collaborate?

Not only are dashboards great for helping the C-level suite make key business decisions, but they are also becoming a fantastic way to collaborate and share information.  As mentioned above, some of the more advanced dashboard tools allow you to share dashboard information in real-time.  This could be as simple as selecting a few parameters and sending over a copy of a dashboard that you’ve modified.  In some of the nicer tools, it could mean sharing a session and collaborating on the dashboard in real-time.

Does your business intelligence system allow you to look across functional areas to compare and contrast?

This may seem like a silly question, but it is an important one.  Does your business intelligence system encompass all of your key ERP systems?  If not, you need to reevaluate and ensure that all key business systems are brought into the BI solution.  Once the data from the key business systems is standardized and brought into the BI solution, you can start to join these data together for insightful cross-functional analysis.

Is there a process in place to request that additional data be added to the system?

Through exploration, collaboration, and report writing, you may identify data that has not been loaded into the BI solution.  This could be as simple as a data field or as complex as another transactional system.  Either way, you should ensure that your BI team has a process in place to queue up work items.  The BI solution needs to evolve as business needs are identified.  Personally, I have gained a lot of benefit from using Intuit’s Quickbase software to manage this process through technology.  Quickbase has a nice blend of tools that facilitate collaboration and allow end-users to submit requests without a lot of effort.

As you evaluate your BI solution, also ensure that your ETLs are loading data at the appropriate interval.  I’ve seen many implementations where the data is so dated that it becomes useless to end-users and hurts adoption of the platform.  Try to run ETL processes as frequently as possible.  In many implementations that I’ve completed, we’ve set the data loads to run nightly.  This isn’t always possible given the volume, but fresh data always allows for better adoption of the platform.
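A simple freshness check can guard against the stale-data problem.  The following is a sketch with assumed table names and an assumed nightly (24-hour) freshness threshold:

```python
from datetime import datetime, timedelta

# Hypothetical sketch: flag warehouse tables whose last successful ETL load
# is older than an agreed freshness threshold (here, nightly = 24 hours).
FRESHNESS_THRESHOLD = timedelta(hours=24)

def stale_tables(last_loaded, now=None):
    """last_loaded: dict mapping table name -> datetime of last ETL completion."""
    now = now or datetime.now()
    return sorted(t for t, ts in last_loaded.items()
                  if now - ts > FRESHNESS_THRESHOLD)

loads = {
    "fact_enrollment": datetime(2013, 5, 1, 2, 0),
    "dim_student": datetime(2013, 4, 25, 2, 0),   # nearly a week old
}
print(stale_tables(loads, now=datetime(2013, 5, 1, 9, 0)))  # ['dim_student']
```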

Higher Education Data Warehouse Conference (HEDW) @ Cornell University

I just returned from an excellent conference (HEDW), which was administered by the IT staff at Cornell University.  Kudos to the 2013 Conference Chair, Jeff Christen, and the staff of Cornell University for hosting this year!  Below is a little more information about the conference:

The Higher Education Data Warehousing Forum (HEDW) is a network of higher education colleagues dedicated to promoting the sharing of knowledge and best practices regarding knowledge management in colleges and universities, including building data warehouses, developing institutional reporting strategies, and providing decision support.

The Forum meets once a year and sponsors a series of listservs to promote communication among technical developers and administrators of data access and reporting systems, data custodians, institutional researchers, and consumers of data representing a variety of internal university audiences.

There are more than 2000 active members in HEDW, representing professionals from 700+ institutions found in 38 different countries and 48 different states.

This conference has proven to be helpful to Georgetown University over the last 5 years.  It is a great opportunity to network with peers and share best practices around the latest technology tools.  And, sorry vendors, you are kept at bay.  This is important, as the focus of the conference is less on technology sales and more on relationships and sharing successes.

Cornell University Outside of Statler Conference Center

Personally, this was my first year in attendance.  I gained a lot of industry insight, but it was also helpful to find peer organizations that are using the same technology tools.  We are about to embark upon an Oracle Peoplesoft finance to Workday conversion.  It was helpful to connect with others that are going through similar projects.  And for me specifically, it was helpful to learn how folks are starting to extract data from Workday for business intelligence purposes.

Higher Education Data Warehouse Conference

2013 HEDW Attendee List

My key take-aways from the conference were:

  • Business intelligence is happening with MANY tools.  We saw A LOT of technology.  Industry leaders in the higher education space still seem to be Oracle and Microsoft.  Oracle seemed to be embedded in more universities; however, many are starting projects on the Microsoft stack, particularly with the Blackboard Analytics team standardizing on the Microsoft platform.  IBM Cognos still seemed to be the market leader for operational reporting; however, Microsoft’s SSRS is gaining momentum.  From an OLAP and dashboard perspective, it was a mixed bag.  Some were using IBM BI Dashboards, while others used tools such as OBIEE Dashboards, Microsoft SharePoint’s Dashboard Designer, and an emerging product, Pyramid Analytics.  Microsoft’s PowerPivot was also widely demonstrated, and users like it!  PowerView was mentioned, but no one seemed to have it up and running…yet.  Tableau was also a very popular choice and highly recommended.  Several people mentioned how responsive both Microsoft and Tableau had been to their needs pre-sale.
  • Business intelligence requires a SIGNIFICANT amount of governance to be successful.  We saw presentation after presentation about governance structures that should have been set up from the start, or projects that had to be restarted in order to be governed in the appropriate way.  This includes changing business processes and ensuring that common data definitions are put in place across university silos.  A stove-piped approach does not work when you are trying to analyze data cross-functionally.
  • Standardizing on one tool is difficult.  We spoke to many universities that had multiple tools in play.  This is due to the difficulty of change management and training.  It is worth making the investment for change management in order to standardize on the appropriate tool set.
  • Technology is expensive.  There is no one size fits all.  Depending on the licensing agreements that are in place at your university – there may be a clear technology choice.  Oracle is expensive, but it may already be in use to support critical ERP systems.  We also heard many universities discuss their use of Microsoft due to educational and statewide discounts available.
  • Predictive Analytics are still future state.  We had brief discussions about statistical tools like SAS and IBM’s SPSS; however, these tools were not the focus of many discussions.  It seems that most universities are trying to figure out simple ODS and EDW projects. Predictive analytics and sophisticated statistical tools are in use – but seem to be taking a back seat while IT departments get the more fundamental data models in place.  Most had an extreme amount of interest in these types of predictive analytics, but felt, “we just aren’t there yet.”  GIS data also came up in a small number of presentations, but also has interest.  In fact, one presentation displayed a dashboard with student enrollment by county.  People like to see data overlaid on a map.  I can see more universities taking advantage of this soon.
  • Business intelligence technologists are in high demand and hard to find.  It was apparent throughout the conference that many universities are challenged to find the right technology talent.  Many are in need of employees that possess business intelligence and reporting skills.
  • Hadoop remains on the shelf.  John Rome from Arizona State gave an excellent presentation about Hadoop and its functional use.  He clarified how Hadoop got its name: its creator, Doug Cutting, named the project after his son’s stuffed yellow elephant!  John also presented a few experiments that ASU has been running to evaluate the value that Hadoop may bring the university.  In ASU’s experiments, they used Amazon’s EC2 service to quickly spin up supporting servers and configure the services necessary to support Hadoop.  The presentation was entertaining, but it was almost the only mention of Hadoop during the entire conference.  Hadoop may have more use in research functions, but it does not yet seem widely adopted in core university business intelligence efforts.  I wonder if this will change by next year?
Doug Cutting with Son’s Stuffed Elephant


A Compilation of My Favorite DW Resources

Recently, I received an email on a listserv from a colleague at HEDW.  The Higher Education Data Warehousing Forum (HEDW) is a network of higher education colleagues dedicated to promoting the sharing of knowledge and best practices regarding knowledge management in colleges and universities, including building data warehouses, developing institutional reporting strategies, and providing decision support.

In the email that I referenced above, my colleague sent a link to an IBM Redbooks publication titled, “Dimensional Modeling: In a Business Intelligence Environment.”  This is a good read for someone that wants the basics of data warehousing.  It also may be a good refresher for others.  Here’s a short description of the book:

In this IBM Redbooks publication we describe and demonstrate dimensional data modeling techniques and technology, specifically focused on business intelligence and data warehousing. It is to help the reader understand how to design, maintain, and use a dimensional model for data warehousing that can provide the data access and performance required for business intelligence.

Business intelligence is comprised of a data warehousing infrastructure, and a query, analysis, and reporting environment. Here we focus on the data warehousing infrastructure. But only a specific element of it, the data model – which we consider the base building block of the data warehouse. Or, more precisely, the topic of data modeling and its impact on the business and business applications. The objective is not to provide a treatise on dimensional modeling techniques, but to focus at a more practical level.

There is technical content for designing and maintaining such an environment, but also business content.

Dimensional Modeling: In a Business Intelligence Environment

Reading through a few responses on the listserv compelled me to produce a list of some of my favorite BI books.  I’ll publish a part II to this post in the future, but here is an initial list that I would recommend to any BI professional.  It is also worth signing up for the Kimball Group’s Design Tips.  They are tremendously useful.


10 Steps to Data Quality Delight!

Data quality is an aspect of business intelligence (BI) projects that always seems to be deprioritized.  It is easy to be drawn to the beautiful visualizations and drill-through reports that are key selling features of a BI project.  This article, however, is about the value of cleansing your data so that these tools work seamlessly with the data model that you establish.  Everyone knows the IT saying, “Garbage in, garbage out.”  That holds entirely true for BI projects.  If the incoming data is dirty, it will be very difficult to efficiently process the data and make it available to a reporting platform.  This isn’t an easy problem to solve, either.  When working across multiple functional areas, you may have different sets of users entering data into the system in DIFFERENT ways.  In that instance, you may not have a data quality issue but a business process issue.

As I have worked through my BI projects, here are 10 steps that I have followed to work with teams to create a data-centric culture and to improve data integrity.   I hope that these are of use to you…and please feel free to share any additional best practice in the comments of this blog!  We can all learn from one another.

Data Quality Workflow

  • Step #1:  Build data profiling and inspection into the design of your project
    Don’t wait until you are about to go-live to start looking at the quality of your data.  From the very beginning of your project, you should start to profile the data that you are loading into your BI platform.  Depending on your technology stack, there are multiple tools that will aid you in data profiling and inspection.  You might consider tools such as Informatica Analyst, or Microsoft SSIS Data Profiler.  A quick Google search will provide many alternatives such as Talend.  Regardless of the tool, make sure that you incorporate this activity into your BI project as soon as possible.  You’ll want to do a fair amount of inspection on each system that you intend to load into your BI platform.

    Informatica Analyst

    Microsoft SSIS Data Profiler

    Talend Data Quality Tool
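The kind of profiling pass that these tools automate boils down to simple per-column statistics.  Here is a hypothetical, tool-agnostic sketch with made-up column names, just to show what "inspect early" means in practice:

```python
# Hypothetical sketch of a basic data-profiling pass, standing in for tools
# like Informatica Analyst or SSIS Data Profiler. Column names are made up.
def profile(rows, columns):
    """Report null counts and distinct-value counts per column."""
    stats = {}
    for col in columns:
        values = [r.get(col) for r in rows]
        non_null = [v for v in values if v not in (None, "")]
        stats[col] = {
            "nulls": len(values) - len(non_null),
            "distinct": len(set(non_null)),
        }
    return stats

source_rows = [
    {"emp_id": 1, "dept": "Finance", "hire_date": "2010-03-01"},
    {"emp_id": 2, "dept": "", "hire_date": "2011-07-15"},
    {"emp_id": 3, "dept": "Finance", "hire_date": None},
]
print(profile(source_rows, ["dept", "hire_date"]))
```

Running a pass like this against every source system before go-live surfaces the blanks and inconsistencies that would otherwise break reports later.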

  • Step #2:  Don’t be afraid to discuss these data quality issues at an executive level (PRIOR TO THE EXECUTION PHASE OF THE PROJECT)
    Awareness is always a key factor for your executive team.  Executives and executive sponsors need to know about the data quality issues as soon as possible.  Why?  You will need their support not only to address the data quality issues, but sometimes these issues stem from poor business process and training.  Their support will be critical to address either issue.

  • Step #3:  Assign ownership and establish accountability
    Assign ownership for data as soon as possible.  This will help you not only resolve known data quality issues; these key data stewards may also identify additional issues, as they may be more familiar with their data than you are.  In most cases, they have inherited this bad data too and will likely want to partner with you to fix it.  However, consider that this also places a burden on them from a bandwidth perspective.  Unless dedicated to your project, they will also have a day job.  Keep this in mind during your planning and see if you can augment and support these data cleansing efforts with your team.
  • Step #4:  Define rules for the data
    One of the biggest challenges that I continue to see is when data stewards do not want to cleanse their data, they want the ETL scripts to handle the 1,001 permutations of how the data should be interpreted.  While the ETLs can handle some of this logic, the business owners need to ensure that the data is being entered into the transactional system via a single set of business processes and that it is being done consistently and completely.  Usually, the transactional systems can have business rules defined and field requirements put in place that can help to enforce these processes.  In some cases, the transaction systems are sophisticated enough to handle workflow too.  Utilize these features to your advantage and do not over-engineer the ETL processes.  Not only will this be time consuming to initially develop, but it will be a management nightmare moving forward.
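Step #4 can be made concrete with a small validation sketch: declare each field’s rule once and check records at entry, rather than teaching the ETL 1,001 interpretations downstream.  The fields and rules below are hypothetical:

```python
# Hypothetical sketch of Step #4: declare field rules once and validate
# records at entry, instead of over-engineering the ETL to compensate.
RULES = {
    "emp_id": lambda v: isinstance(v, int) and v > 0,
    "dept": lambda v: v in {"Finance", "HR", "Sales"},
    "hire_date": lambda v: isinstance(v, str) and len(v) == 10,  # e.g. "2013-05-01"
}

def validate(record):
    """Return the list of fields that violate their rule."""
    return [field for field, rule in RULES.items()
            if not rule(record.get(field))]

# A misspelled department fails its rule at entry time.
print(validate({"emp_id": 7, "dept": "Finanse", "hire_date": "2013-05-01"}))
```

The same rules can usually be expressed as field requirements or business rules inside the transactional system itself, which is where the paragraph above argues they belong.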
  • Step #5:  Modify business process as needed
    If you are working cross-functionally, you may run into the need to revise business processes to support consistent data entry into the transactional systems.  Recently, I was working on a project across 6 HR departments.  The net of their hiring process was the same, but they had 6 different processes and unfortunately, they were utilizing the same transactional system.  We had to get their executives together and do some business process alignment work before we could proceed.  Once the business process is unified, you then have to consider the historical data.  Does it need to be cleansed or transformed?  In our case it did.  Don’t underestimate this effort!
  • Step #6:  Prioritize and make trade-offs.  Data will rarely be perfect.
    Once you have revised business process and defined data cleansing activities, you will need to prioritize them.  Rarely are you in a position where data is perfect or resources are unlimited.  If you have done your design work correctly, you will have a catalog of the most critical reports and key pieces of data.  Focus on these areas first and then expand.  Don’t try to boil the ocean.  Keep your data cleansing activities as condensed as possible and make an honest effort to try to support the business units as much as possible.  In my experience, the BI developers can generally augment the full time staff to get data cleansing and data corrections done more efficiently.  However, make sure that the business unit maintains responsibility and accountability.  You don’t want the data to become IT’s problem.  It is a shared problem and one that you will have to work very hard to maintain moving forward.
  • Step #7:  Test and make qualitative data updates
    As you prioritize and move through your data cleansing checklist, ensure that you have prioritized the efforts that will reap the largest reward.  You might be able to prioritize a few smaller wins at first to show the value of the cleansing activities.  You should then align your efforts with the primary requirements of your project.  You may be able to defer some of the data cleansing to later stages of the project, or handle it in a more gradual way.
  • Step #8:  Set up alerts and notifications for future discrepancies
    After data cleansing has occurred and you feel that you have the data in a good state, your job is not over!  Data quality is an ongoing activity.  You will almost always run into future data quality issues, and governance needs to be set up to address these.  Exception reports should be set up and made available “on-demand” to support data cleansing.  Also, one of my favorite tools is data-driven subscriptions, or report bursts.  Microsoft uses the “data-driven subscription” terminology; IBM Cognos uses the term “report burst.”  Once you have defined the types of data integrity issues that are likely to occur (missing data, incomplete data, inaccurate data, etc.), you can set up data-driven subscriptions, or report bursts, that prompt the data stewards when these issues occur.  Depending on the tool that you are using, you may be able to send the user an exception report with the data issue(s) listed.  In other systems, you may simply alert the user to a particular data issue, and they must then take action.  These subscriptions should augment the exception reports that are available “on-demand” in your reporting portal.  Of course, at the end of the day, you still have the issue of accountability.  We’ll take a look at that in the next step.

    Microsoft SSRS Data-Driven Subscription

    IBM Cognos Report Burst
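
    The idea behind a data-driven subscription can be sketched in a few lines of Python.  This is a conceptual illustration only, not the SSRS or Cognos feature itself; the record layout, required fields, and steward assignments are assumptions made up for the example.

```python
# Conceptual sketch of a data-driven subscription: scan records for
# common integrity issues (here, missing or empty values) and bundle
# the exceptions into one alert per data steward.

REQUIRED_FIELDS = ["last_name", "business_unit", "job_title"]

def find_exceptions(records):
    """Return (record_id, field) pairs for missing or empty values."""
    exceptions = []
    for rec in records:
        for field in REQUIRED_FIELDS:
            if not rec.get(field):
                exceptions.append((rec["id"], field))
    return exceptions

def group_by_steward(exceptions, steward_for_field):
    """Bundle exceptions into one alert per responsible data steward."""
    alerts = {}
    for rec_id, field in exceptions:
        steward = steward_for_field[field]
        alerts.setdefault(steward, []).append((rec_id, field))
    return alerts

records = [
    {"id": 1, "last_name": "Johnson", "business_unit": "Finance", "job_title": "Director"},
    {"id": 2, "last_name": "Stevens", "business_unit": "", "job_title": "Recruiter"},
    {"id": 3, "last_name": "Evans", "business_unit": "Advancement", "job_title": None},
]
stewards = {"last_name": "hr", "business_unit": "finance", "job_title": "hr"}

alerts = group_by_steward(find_exceptions(records), stewards)
# Each steward now receives one bundled alert instead of having to
# remember to run the on-demand exception report.
```

    The point of pushing alerts this way is that the steward no longer has to remember to pull the exception report; the data quality issue comes to them.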

  • Step #9:  Consider a workflow to keep data stewards accountable
    So, what now?  The user now has an inbox full of exception reports, or a portal full of alerts, and they still haven’t run the manual, on-demand exception report.  Data integrity issues are causing reporting problems as the data starts to slip in quality.  You have a few options here.  In previous projects, I have set up a bit of workflow around the data-driven subscriptions.  The first port of call is the data steward.  They are alerted to an issue with the data, and a standard SLA allows them an adequate amount of time to address it.  After that SLA period expires, the data issue is escalated to their line manager.  This can also be set up as a data-driven subscription.  If both steps fail (i.e., both the data steward and the line manager ignore the data issue), then it is time to re-engage with your executive committee.  Make the data issues visible and help the executives understand the impact of inaccurate data.  Paint a picture for the executives of why the data is important.  To further illustrate your point, if you have an executive dashboard that uses this data, it may be worthwhile to point out how the data integrity issue could impact that dashboard.  Not many executives want to be making decisions on inaccurate data.
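
    The escalation workflow described above can be sketched as a simple function.  The SLA windows and the escalation targets are illustrative assumptions; in practice these would be agreed with your governance committee and implemented via the subscription tooling.

```python
# Sketch of the escalation workflow: each open data issue goes to its
# steward first; once the steward SLA lapses it escalates to the line
# manager, and past the manager SLA it is flagged for the executive
# committee.  SLA lengths here are made up for the example.
from datetime import datetime, timedelta

STEWARD_SLA = timedelta(days=3)
MANAGER_SLA = timedelta(days=7)

def escalation_level(opened_at, now):
    """Return who should be notified about an issue opened at opened_at."""
    age = now - opened_at
    if age <= STEWARD_SLA:
        return "data steward"
    if age <= MANAGER_SLA:
        return "line manager"
    return "executive committee"

now = datetime(2013, 6, 15)
assert escalation_level(datetime(2013, 6, 14), now) == "data steward"
assert escalation_level(datetime(2013, 6, 10), now) == "line manager"
assert escalation_level(datetime(2013, 6, 1), now) == "executive committee"
```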
  • Step #10:  Wash, rinse, and repeat
    By the time you have gotten to this point, it will likely be time to fold another transactional system into your BI platform.  Remember this process and use it again!

Benefit of Column-store Indexes

With the release of Microsoft SQL Server 2012, Microsoft has embraced the concept of column-store indexes with its VertiPaq technology.  This is similar to the approach taken by Vertica.  As a very simplistic example, below is a standard data set:

LastName | FirstName | BusinessUnit | JobTitle
Johnson | Ben | Finance | Director
Stevens | George | HR | Recruiter
Evans | Bryce | Advancement | Major Gift Officer

In a common row-store, this might be stored on the disk as follows:

  • Johnson, Ben, Finance, Director
  • Stevens, George, HR, Recruiter
  • Evans, Bryce, Advancement, Major Gift Officer

Column-store indexes store each column’s data together.  In a column-store, you may find the information stored on the disk in this format:

  • Johnson, Stevens, Evans
  • Ben, George, Bryce
  • Finance, HR, Advancement
  • Director, Recruiter, Major Gift Officer

The important thing to notice is that the columns are stored individually.  If your query is just doing SELECT LastName, FirstName, then you don’t need to read the BusinessUnit or JobTitle columns.  Effectively, you read less off the disk.  This is fantastic for data warehouses, where query performance is paramount.  In their white paper, Microsoft uses the illustration below to better explain how columnstore indexes work:
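
The two layouts just described can be modeled in a few lines of Python.  This is purely a toy illustration of the storage idea, not how SQL Server lays out pages on disk.

```python
# Tiny model of row-store vs column-store layouts.  A query that
# needs only LastName and FirstName touches two of the four column
# "pages" in the column-store, instead of every field of every row.

rows = [
    ("Johnson", "Ben", "Finance", "Director"),
    ("Stevens", "George", "HR", "Recruiter"),
    ("Evans", "Bryce", "Advancement", "Major Gift Officer"),
]

# Column-store: transpose so each column's values sit together.
columns = dict(zip(["LastName", "FirstName", "BusinessUnit", "JobTitle"],
                   zip(*rows)))

# SELECT LastName, FirstName: read just two columns...
names = list(zip(columns["LastName"], columns["FirstName"]))
# ...instead of scanning all four fields of every row:
names_rowstore = [(r[0], r[1]) for r in rows]
assert names == names_rowstore
```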

Microsoft’s Columnstore Index Illustration

The benefits of using a non-clustered columnstore index are:

  • Only the columns needed to solve a query are fetched from disk (this is often fewer than 15% of the columns in a typical fact table)
  • It is easier to compress the data due to the redundancy of data within a column
  • Buffer hit rates are improved because data is highly compressed, and frequently accessed parts of commonly used columns remain in memory, while infrequently used parts are paged out.

Microsoft recommends using columnstore indexes for fact tables in data warehouses, for large dimension tables (say, with more than 10 million records), and for any large tables designated as read-only.  Note that in SQL Server 2012, a table becomes read-only once a columnstore index is added to it; some may see this as a disadvantage.  To learn more about Microsoft’s columnstore indexes, read their article titled “Columnstore Indexes Described.”

So let’s take a quick look at how to do this in SQL Server Management Studio 2012.  If you need to add a new non-clustered columnstore index, you can do so as seen below:

Creating a new non-clustered columnstore index in Microsoft SQL Server 2012

To ensure that the non-clustered columnstore index is achieving the right result for you, test it.  Under the Query menu, enable “Display Estimated Execution Plan.”

Displaying the Query Execution Plan in SQL Server 2012

Analyzing the Query Execution Plan in SQL Server 2012

You should now be in a position to compare your old and new queries to determine which is optimal from a performance perspective.  You’ll likely see a significant reduction in query time with the non-clustered columnstore index.

What are your thoughts?  Are you using columnstore indexes in your environment?

Gartner releases 2013 Business Intelligence & Analytics Magic Quadrant

Last month, Gartner released the 2013 version of their Business Intelligence & Analytics Platform Magic Quadrant.  I always look forward to the release of Gartner’s magic quadrants as they are tremendously helpful in understanding the landscape of specific technology tools.

Gartner Magic Quadrant for Business Intelligence & Analytics - Comparison of 2012 to 2013

This year, I was pleased to observe the following:

  • Microsoft has improved its overall ability to execute.  Overall, it seems that Microsoft is moving in the right direction with its SQL Server 2012 product.  I’m excited about the enhancements to Master Data Services, and I like where they are headed with PowerPivot and Power View.  A full list of new features can be found at Microsoft’s website.  I’m a big Microsoft fan and I’m excited about Office 2013 and the impact that it will have on BI.
  • IBM has maintained, and slightly increased, its market position.  IBM continues to expand upon the features of their key acquisitions (Cognos, SPSS).  They have done a nice job of migrating customers from the old Cognos 8 platform to IBM Cognos 10.x.  This has increased customer satisfaction.  I also really like their Analytic Answers offering.  In my opinion, BI will continue to become more service oriented – so a big applause for IBM’s analytics as a service offering.
  • Tableau has moved into the top right square.  Tableau deserves to be here and I’m excited to see this movement.  Tableau’s customer support and product quality have been consistently high.  They have also set a benchmark in terms of how straightforward it is to move to their platform and upgrade to the latest version release.
  • There is plenty of competition at the bottom of the market.  Niche players like Jaspersoft and Pentaho are at each other’s heels.  Competition is healthy!

The only thing that surprised me is that I didn’t see Pyramid Analytics on this list.  Microsoft acquired ProClarity back in 2006, and extended support for ProClarity ends in 2017.  Given that Microsoft has not migrated all of the ProClarity features to PerformancePoint, I am speaking to many users who are jumping ship from ProClarity and moving toward Pyramid Analytics.  Pyramid Analytics has done a nice job of aiding customers moving from ProClarity.  Keep an eye out.  We might see them on the list next year.

If you are interested in reading the full 2013 report, you may preview the online version here.