
Table of Contents

Section I: Data Warehousing and Business Intelligence

Chapter 1: Why Business Intelligence?

How Intelligent Is Your Organization?

Try It

Chapter 2: Dimensional Modeling

Key Dimensional Modeling Elements

How Does Dimensional Modeling Work?

Try It

Chapter 3: Fact Table Modeling

Try It

Section II: SQL Server Integration Services

Chapter 4: Understanding SSIS

Business Intelligence Development Studio (BIDS)

Solution Explorer

SSIS Designer

Variables

SSIS Architecture

Try It

Chapter 5: Using the Control Flow

Control Flow Containers

Control Flow Tasks

Precedence Constraints

Connection Manager

Control Flow Designer

Try It

Chapter 6: Using the Data Flow

Data Flow Sources (Extracting Data)

Data Flow Transformations

Data Flow Paths

Data Flow Designer

Try It

Chapter 7: Solving Common SSIS Scenarios

Try It

Chapter 8: Loading Dimensions

Using the Slowly Changing Dimension Task

Try It

Chapter 9: Loading a Fact Table

Try It

Chapter 10: Deploying SSIS Packages

Deploying a Single Package Using SSMS

Deploying a Single Package Using BIDS

Creating a Deployment Utility Using BIDS

Try It

Section III: SQL Server Analysis Services

Chapter 11: Understanding SSAS

SSAS Architecture

Cubes

MDX

BIDS for SSAS

Try It

Chapter 12: Configuring a Data Source and Data Source View

Creating a Data Source

Creating a Data Source View

Try It

Chapter 13: Using the Cube Wizard

Try It

Chapter 14: Editing Your Dimension

Dimension Editor

Attribute Relationships

Key Columns

Dimension and Attribute Types

Try It

Chapter 15: Editing Your Cube

Cube Editor Tour

Browsing the Cube

Try It

Chapter 16: Adding New Dimensions and Measure Groups

Adding a Measure Group and Measures

Adding a Dimension

Try It

Chapter 17: Using MDX

Anatomy of Basic MDX

Navigation Functions

Try It

Chapter 18: Creating Calculations

Calculation Basics

Color-Coding Measures

Named Sets

More Advanced Calculations

Try It

Chapter 19: Data Mining

Introduction to Data Mining

Data-Mining Process

Creating a Mining Model

Exploring the Model

Evaluating the Model

Querying the Model

Try It

Chapter 20: Administering the SSAS Instance

Securing the Database

Partitioning the Data

Aggregations

Usage-Based Optimization

Processing the Cube

Deploying Change

Try It

Section IV: SQL Server Reporting Services

Chapter 21: Understanding SSRS

Building Your First Report

Try It

Chapter 22: Using Report Wizard

Try It

Chapter 23: Building a Matrix Report

Try It

Chapter 24: Parameterizing Your Reports

Creating Parameters

Default Parameter Values

Parameter Available Values

Multi-Value Parameters

Altering Properties with Parameters

Try It

Chapter 25: Building Reports on Your Cube

Try It

Chapter 26: Using Maps in Your Report

Try It

Chapter 27: Building a Dashboard

Tables and Filters

Drill-Downs

Try It

Chapter 28: Deploying and Administering SSRS

Stored Credentials

Subscriptions

Shared Schedules

Security

Datasets and Caching

Report Builder 3.0

Report Parts

Viewing Reports

Try It

Chapter 29: New Reporting Services Visualizations — Sparklines, Data Bars, and Indicators

Adding Sparklines

Using Data Bars

Configuring Indicators

Try It

Chapter 30: Using Report Builder

Opening Report Builder

Working with the Table, Matrix, or Chart Wizard

Report Parts

Shared Datasets

Try It

Section V: Excel and PowerPivot

Chapter 31: Reporting against a Cube with Excel

Try It

Chapter 32: Loading Data into a PowerPivot Workbook

What Is PowerPivot?

Try It

Chapter 33: Creating a PowerPivot Report

Components of a PowerPivot Report

Building the Report

Adding Bells and Whistles

Try It

Chapter 34: Data Mining in Excel

Getting Ready for Excel Data Mining

Exploring the Data Mining Add-Ins

Analyzing Key Influencers in Excel

Try It

Section VI: SharePoint

Chapter 35: Understanding SharePoint for Business Intelligence

Try It

Chapter 36: Deploying and Using Reporting Services in SharePoint 2010

Try It

Chapter 37: Building PerformancePoint Dashboards in SharePoint 2010

Try It

Chapter 38: Deploying and Using Excel Services

Try It

Chapter 39: Deploying and Using PowerPivot in SharePoint

Try It

Chapter 40: Managing SharePoint Business Intelligence

Try It

Appendix: What’s on the DVD?

System Requirements

Using the DVD

What’s on the DVD

Troubleshooting

Customer Care

Introduction

End-User License Agreement

Section I: Data Warehousing and Business Intelligence

Chapter 1

Why Business Intelligence?

Congratulations on your choice to explore how Business Intelligence can improve your organization’s view into its operations and uncover hidden areas of profitability and analysis. The biggest challenges most organizations face with their data are probably mirrored in yours, including:

How Intelligent Is Your Organization?

Business Intelligence (BI) is a term that encompasses the process of getting your data out of disparate systems and into a unified model, so you can use the tools in the Microsoft BI stack to analyze, report on, and mine the data. Once you organize your company’s data properly, you can begin to find information that will help you build actionable reports and make decisions based on how the data from across your organization lines up. For instance, you can answer questions like, “How do delays in my manufacturing or distribution affect my sales and customer confidence?” Answers like these come from aligning logistics data with sales and marketing data, which, without a Business Intelligence solution, would require you to spend time exporting data from several systems and combining it into some form that you could consume with Excel or another reporting tool.

Business Intelligence systems take this repetitive activity out of your life. BI automates the extract, transform, and load (ETL) process and puts the data in a dimensional model (you’ll create one in the next two lessons) that sets you up to use cutting-edge techniques and everyday tools like Microsoft Excel to analyze, report on, and deliver results from your data.

Getting Intelligence from Data

How do you get information from data? First, you need to understand the difference. As you learned earlier, data can come from many different places, but information requires context and provides the basis for action and decision-making. Identifying your data, transforming it, and applying the tools and techniques you learn from this book will enable you to distill actionable information from the mass of data your organization stores. There are several ways to transform your data into actionable information, and each has its pros and cons.

Typical solutions for reporting include a few different architectures:

You have likely seen some form of all of these problems in your organization. These are the opposite of what you want to accomplish with a great BI infrastructure.

BI to the Rescue

A well-thought-out BI strategy will mitigate the problems inherent to each of the previously listed approaches. A good BI approach should provide the targeted departmental reporting that is required by those end users while adjusting the data so it can be consumed by executives through a consolidated set of reports, ad hoc analysis using Excel, or a SharePoint dashboard. Business Intelligence provides a combination of automated reporting, dashboard capabilities, and ad hoc capabilities that will propel your organization forward.

BI provides a single source of truth that can make meetings and discussions immediately more productive. How many times have you gotten a spreadsheet via e-mail before a meeting and shown up to find that everyone had his or her own version of the spreadsheet with different numbers? Business Intelligence standardizes organizational calculations, while still giving you the flexibility to add your own and enhance the company standard. These capabilities allow everyone to speak the same language when it comes to company metrics and to the way the data should be measured across the enterprise or department.

Integrating Business Intelligence with your organization’s current reporting strategy will improve the quality of the data as well as the accuracy of the analysis and the speed at which you can perform it. Using a combination of a data warehouse and BI analytics from Analysis Services and Excel, you can also perform in-depth data mining against your data. This enables you to utilize forecasting, data-cluster analysis, fraud detection, and other great approaches to analyze and forecast actions. Data mining is incredibly useful for things like analyzing sales trends, detecting credit fraud, and filling in empty values based on historical analysis. This powerful capability is delivered right through Excel, using Analysis Services for the back-end modeling and mining engine.

BI = Business Investment

A focused Business Intelligence plan can streamline the costs of reporting and business analytics. The Microsoft BI stack does a great job of providing you with the entire tool set for success within SQL Server Enterprise Edition. We provide more details on that shortly, but the most important bit of information you should take away right now is that the cost of managing multiple products and versions of reporting solutions to meet departmental needs is always higher than the cost of a cohesive strategy that employs one effective licensing policy from a single vendor. When departments cannot agree on a data strategy, you need to bring them together for the good of the organization. In the authors’ experience, this single, cohesive approach to reporting is often a gateway to a successful BI implementation. Recognizing the 360-degree value of that approach and seeing how it applies in your organization are the two most important first steps.

Microsoft’s Business Intelligence Stack

Microsoft’s Business Intelligence stack comes with SQL Server and is greatly enhanced with the addition of SharePoint Server.

SQL Server Enterprise includes industry-leading software components to build, implement, and maintain your BI infrastructure. The major components of the Microsoft BI stack that are included with SQL Server are the following:

These programs work together in a tightly integrated fashion to deliver solutions like those you’ll build in this book. See Figure 1-1 for more details.

Figure 1-1


In Figure 1-1 you see the layers of Microsoft’s Business Intelligence stack. SharePoint is at the top as the most end user–facing program for reporting, dashboard, and analytic capabilities. On the next level down you see the more common end user tools, and continuing down you can see the development tools, the core components, and some of the multitude of potential data sources you can consume with the products we discuss in this book.

BI Users and Technology

Different levels of users have different sorts of questions, and they will use the BI technologies we are discussing in different ways. To see this at a glance, review Table 1-1. The table breaks down which users will rely on which tools for their reporting and data analysis, and makes clear that Microsoft’s Business Intelligence stack addresses the needs of users at every level.

Table 1-1

End Users: 1. Excel, 2. Reporting Services, 3. SharePoint Dashboards
Power Users: 1. Excel, 2. Report Builder, 3. Reporting Services, 4. SharePoint Dashboards
Executives/Clients: 1. Excel, 2. SharePoint Dashboards

Try It

Your Try It for this lesson is a bit different from most others in the book. Throughout the book you will be challenged with hands-on tasks to enhance your understanding. For this lesson, your Try It is to keep what you learned here in mind as you learn the technologies that apply it. For instance, ask yourself these questions as you go through the rest of the book.

If you can keep these things in mind as you’re learning and developing, you will succeed in achieving the goals of a Business Intelligence implementation as you work with the data in your organization.


As this chapter is just an introductory overview, it does not have an accompanying video.

Chapter 2

Dimensional Modeling

Dimensional modeling is the process you use to convert your existing OLTP data model to a model that is more business-centric and easier for Business Intelligence tools to work with. Tools like SSIS, Analysis Services, and the others you’ll learn about in this book are geared specifically toward variations of this type of model. In this lesson you will learn what makes a dimensional model different and then have the opportunity to convert a simple model yourself.

As seen in Figure 2-1, the OLTP model is highly normalized, which supports the quick insertion and retrieval of individual transactions. The goal in designing a data warehouse or star schema is to denormalize the model in order to simplify it and to provide wider, more straightforward tables for joining and data-retrieval speed. This denormalization allows you to “model” the database in a business-focused way that users can understand, and dramatically increases the performance of the analytical queries you will be running.

Why do you need this denormalization in order to report on your data? The biggest reason is to consolidate the redundancy between tables, which puts the database into a star schema layout: a central fact table surrounded by a layer of dimension tables, as shown in Figure 2-2.

As you can see in Figure 2-2, we have abstracted out tables such as DimProduct, DimCustomer, DimPromotion, and DimDate and put the additive and aggregative data, like sales amounts, costs, and so on into a single fact table, FactInternetSales (more on fact tables in Lesson 3; for now focus on the dimensions). This abstraction allows you to implement a number of important elements that will provide great design patterns for dealing with the challenges discussed later in this chapter.

Figure 2-1


Moving from the OLTP model to a dimensional model is important for a number of reasons, not the least of which is performance, but within the dimensional model we can handle many situations with the data that are very difficult, if not impossible, to handle in a more typical OLTP third-normal-form model. Some of these situations are:

Figure 2-2


Key Dimensional Modeling Elements

The key elements that make up the dimensional model system are as follows:

How Does Dimensional Modeling Work?

Before you try some dimensional modeling for yourself, we want to show you an example. For our example, we use the AdventureWorks2008R2 sample databases from Microsoft available at www.codeplex.com. We create a simple star schema from the reseller sales information in the OLTP version of the database. The tables we use from OLTP will be as follows:

The table we will create will be called DimCustomer.

First, take notice of the differences and key elements in Figures 2-3 and 2-4. Figure 2-3 shows the OLTP tables, and Figure 2-4 shows the new dimension table. We’ll walk you through the numbered items in Figure 2-4 to show you what key design elements we employed to make this a successful transition from normalized dimension data to a set of dimension tables.

1. A new CustomerKey column provides the surrogate key. We created this column to serve as the primary key for the new dimension table. A best practice is to add an “SK” suffix to the column name, so it would read CustomerSK or CustomerKeySK.

2. We have modified the primary key column that is coming over from the source OLTP system to act as the alternate key. All this means is that if we need to bring in data from several systems whose primary keys have overlapped or are in different formats, we can do it with a combination of our alternate (or business) key and our surrogate key CustomerKey.

3. Much of the demographic data and store sales data was also tapped to get columns like DateFirstPurchase and CommuteDistance so you can find out more about your customers. Some of these columns could be calculated in the ETL portion of your processing by comparing information like a work and home address, for example.
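The surrogate-key pattern described in these steps can be sketched in T-SQL like this. The column list is illustrative rather than the exact AdventureWorks schema; the key points are the warehouse-generated surrogate key and the alternate (business) key carried over from the source system.

```sql
-- DimCustomer sketch: the warehouse generates CustomerSK,
-- while CustomerAlternateKey preserves the source system's business key.
CREATE TABLE dbo.DimCustomer (
    CustomerSK           INT IDENTITY(1,1) NOT NULL PRIMARY KEY,
    CustomerAlternateKey NVARCHAR(25) NOT NULL, -- business key from the OLTP source
    FirstName            NVARCHAR(50) NULL,
    LastName             NVARCHAR(50) NULL,
    DateFirstPurchase    DATE NULL,             -- derived demographic columns
    CommuteDistance      NVARCHAR(15) NULL
);
```

Because the surrogate key is generated by the warehouse, rows from a second source system with overlapping business keys can still be loaded side by side; the alternate key simply records where each row came from.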

Figure 2-3


Figure 2-4


Once the dimension tables are in place, you can easily see why this is a better model for working with large analytical queries and analysis. For instance, now if you refer to multiple customers in a single order, you need only one customer dimension with a fact table row that has multiple key relationships to the customer table. This is much better than having a bill-to customer table and a ship-to customer table to handle subsidiaries or other issues.

Multiple dates are also very common in most fact tables; for instance, an inventory fact table may have a product’s arrival date, ship date, expiration date, and return date. Rather than maintaining a separate date table for each of these columns, you can link each one directly to a single DimDate table through its numeric surrogate key. Remember, these surrogate keys are the new key system that links all the tables in the warehouse.
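For example, one DimDate table can serve every date column in a fact table through separate surrogate-key joins. The table and column names below are hypothetical, chosen only to illustrate the pattern:

```sql
-- A single DimDate joined once per date role in the fact table.
SELECT f.ProductKey,
       arrive.FullDate AS ArrivalDate,
       ship.FullDate   AS ShipDate
FROM   dbo.FactInventory AS f
       JOIN dbo.DimDate AS arrive ON f.ArrivalDateKey = arrive.DateKey
       JOIN dbo.DimDate AS ship   ON f.ShipDateKey   = ship.DateKey;
```

Each alias plays a different "role" of the same date dimension, which is exactly how Analysis Services treats role-playing dimensions later in the book.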

You can see the StartDate and EndDate columns and how they control the historical loading. (The mechanics of historical loading are discussed in the SSIS lessons in Section II of this book.) These columns allow you to expire a row when a historical change is required. For instance, when a product line gets a new account manager, you would expire the current product line row and insert into the dimension table a new row with an EndDate of null that links to the new account manager. This way, both your historical reporting and your current reporting remain accurate. Otherwise, historical reporting could mistakenly tie sales to the wrong manager.
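The expire-and-insert step can be sketched in T-SQL as follows. This is a simplified, hypothetical illustration (the table, columns, and variables are made up for the example); the SSIS lessons in Section II show how to automate this work in a package.

```sql
-- Step 1: expire the current row for this product line.
UPDATE dbo.DimProductLine
SET    EndDate = GETDATE()
WHERE  ProductLineAlternateKey = @ProductLineId
  AND  EndDate IS NULL;          -- NULL EndDate marks the current row

-- Step 2: insert a new current row pointing at the new account manager.
INSERT INTO dbo.DimProductLine
       (ProductLineAlternateKey, AccountManager, StartDate, EndDate)
VALUES (@ProductLineId, @NewManager, GETDATE(), NULL);
```

Old fact rows keep their links to the expired row, so historical reports still credit the previous manager, while new fact rows link to the new current row.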

There are three main types of slowly changing dimensions:

It is common to have columns from each type in the same table; for instance, if you need to track history on last names for employees, but not on their addresses, you may have a Type II LastName column and a Type I Address column. This is perfectly acceptable and common.

This design has also been proven to improve performance significantly, since the main goal of a data warehouse or BI system is to extract data as quickly as possible. The denormalized model lends itself to quick retrieval of data from the tables, whether you are populating a cube, running a report, or loading data into Excel. You’ll do all of these things in later lessons!

Here are some general design tips for working with your dimension tables:

Try It

In this Try It you’re going to take what you’ve just read about and apply it to create your own product dimension table with columns from a typical source OLTP system.

Lesson Requirements

The columns you put in your table are up to you, but your dimension will need to track history. Also, the dimension table will be getting data from other sources, so it will need to be able to handle that. You will create your table in SQL Server Management Studio.

Hints

Step-by-Step

1. The first thing you should do is identify some columns you might want in your table. Table 2-1 has a number of standard product dimension columns that you can pick from.

Table 2-1


2. Now, in order to make these into a proper dimension table, you need to review your requirements. Your first requirement was to make sure you can track history, so you need to make sure you have a StartDate and EndDate column so you can expire rows as they become updated.

3. Your next requirement was to make sure the dimension table could handle data from multiple systems either now or in the future, which means you need to apply the best practice you learned about surrogate keys. This will add a ProductKeySK and a ProductAlternateKey column to the table as well.

The finished product should look something like Figure 2-5.

Figure 2-5


This table will work with multiple systems with its surrogate key structure and will perform well if the rest of the warehouse follows similar best practices for design.
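Expressed in T-SQL, the finished table might look like the sketch below. The descriptive columns are illustrative picks from Table 2-1; the required pieces are the surrogate key, the alternate key, and the StartDate/EndDate pair for history tracking.

```sql
-- DimProduct sketch: surrogate key + alternate key for multi-system loads,
-- StartDate/EndDate for Type II history tracking.
CREATE TABLE dbo.DimProduct (
    ProductKeySK        INT IDENTITY(1,1) NOT NULL PRIMARY KEY,
    ProductAlternateKey NVARCHAR(25) NOT NULL, -- business key from the source system
    ProductName         NVARCHAR(50) NULL,
    Color               NVARCHAR(15) NULL,
    ListPrice           MONEY NULL,
    StartDate           DATETIME NOT NULL,     -- when this row became current
    EndDate             DATETIME NULL          -- NULL = current row
);
```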

Congratulations, you have just designed your first dimension table. Keep these concepts in mind and refer back to them as they become relevant in the lessons in the rest of the book. Great job!


Please select Lesson 2 on the DVD with the print book, or watch online at www.wrox.com/go/vid24bi to view the video that accompanies this lesson.

Chapter 3

Fact Table Modeling

A fact table is modeled to be the center of the star schema in a data warehouse. It consists of two primary types of data:

You need fact tables because they allow you to link the denormalized versions of the dimension tables and provide a largely, if not completely, numeric table for Analysis Services to consume and aggregate. In other words, the fact table is the part of the model that holds the dollars or count type of data that you would want to see rolled up by year, grouped by category, or so forth. The fact table holds “just the facts” and the keys to relate the needed dimension tables. Since many OLAP tools, like Analysis Services, look for a star schema model and are optimized to work with it, the fact table is a critical piece of the puzzle.

The process of designing your fact table will take several steps:

1. Decide on the data you want to analyze.

Will this be sales data, inventory data, or financial data? Each type comes with its own design specifics. For instance, you may have to load different amounts of data based on the type of analysis you’re doing.

2. Once you’ve identified your data type, pick the level of granularity that you’re seeking.

When you consider the question of granularity, or the “grain,” of the fact table, you are deciding what each row represents: the lowest level of analysis you want to perform. For example, each row may represent a line item on a receipt, a total amount for a receipt, or the status of a particular product in inventory for a particular day.

3. Decide how you will load the fact table (more on this in Lesson 9).

Transactions are loaded at intervals to show what is in the OLTP system. An inventory or snapshot fact table will load all the rows of inventory or snapshot-style data for the day, always allowing the user to see the current status and information based on the date the information was loaded.

Fact tables are often designed to be index light, meaning that indexes should be placed only to support reporting and cube processing that is happening directly on that table. It is a good idea to remember that your fact tables will often be much larger in row count and data volume than your dimensions. This means you can apply several strategies to manage the tables and improve your performance and scalability.
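A sketch of a sales fact table following these guidelines might look like the DDL below. The names are illustrative; the pattern to notice is that the table holds only surrogate keys to the dimensions plus additive measures, with no wide descriptive columns and no indexes beyond what reporting and cube processing require.

```sql
-- Index-light fact table: dimension surrogate keys plus additive measures.
CREATE TABLE dbo.FactInternetSales (
    ProductKey    INT NOT NULL,   -- references DimProduct surrogate key
    CustomerKey   INT NOT NULL,   -- references DimCustomer surrogate key
    OrderDateKey  INT NOT NULL,   -- references DimDate surrogate key
    OrderQuantity INT NOT NULL,   -- additive measure
    SalesAmount   MONEY NOT NULL, -- additive measure
    TotalCost     MONEY NOT NULL  -- additive measure
);
```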

Now that you have some insight into the fact table design process, it’s time to try building one yourself.

Try It

In this Try It you will build a fact table based on the dimensions from Lesson 2.

Lesson Requirements

This lesson requires you to create a fact table to use in your sample model for this section. Create your tables with either the SQL Server Management Studio table designer or T-SQL, whichever you’re more comfortable with. We are going to use SQL Server Management Studio. To complete the lesson you need to build the table and include any valuable cost, sales, and count fact columns, along with the key columns for the important dimensions. Next are a couple of hints to get you started.

Hints

Step-by-Step

1. Make sure you identify the dimensions that are important to your analysis and then include references to those surrogate key columns in your fact table.

2. Add columns for each numeric or additive value that you’re concerned about rolling up or aggregating.

3. Check out the example in Figure 3-1. Yours should look similar.

Figure 3-1


Please select Lesson 3 on the DVD with the print book, or watch online at www.wrox.com/go/vid24bi to view the video that accompanies this lesson.

Section II: SQL Server Integration Services