"No one is harder on a talented person than the person themselves" - Linda Wilkinson ; "Trust your guts and don't follow the herd" ; "Validate direction not destination" ;

August 27, 2011

Tech-Ed Talks - Managing SQL Server for Reluctant DBA

This week's SQL learning comes from the session Managing SQL Server for the Reluctant DBA.

Good Session. Notes listed below 
  • SQL manages data in 8KB Chunks called pages 
How Data Change occurs
  • Transaction log records the change in the .ldf (log) file
  • Database Engine identifies pages affected by change
  • Load affected pages in server memory (RAM)
  • Changes are made and written to disk 
Recovery - on startup, committed transactions in the log are rolled forward and uncommitted transactions are rolled back
SQL Server Backups - Full & diff backups do not truncate the transaction log; only transaction log backups do. Taking log backups every 15 minutes caps data loss at the last 15 minutes of transactions.
Recovery Model - Full (everything is logged; transaction log backups possible), Bulk-Logged (bulk loads are minimally logged - quicker recovery, at the risk of losing bulk-load data), Simple (log truncated at checkpoint; no log backups)
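The recovery-model and backup notes above can be sketched in T-SQL (database and file names here are hypothetical; in practice the log backup would run on a schedule via a job):

```sql
-- Switch recovery model (FULL logs everything; SIMPLE truncates the log at checkpoint)
ALTER DATABASE SalesDB SET RECOVERY FULL;

-- Weekly full backup
BACKUP DATABASE SalesDB TO DISK = 'D:\Backup\SalesDB_Full.bak';

-- Nightly differential backup (changes since the last full)
BACKUP DATABASE SalesDB TO DISK = 'D:\Backup\SalesDB_Diff.bak' WITH DIFFERENTIAL;

-- Log backup every 15 minutes: truncates the log and caps data loss at ~15 min
BACKUP LOG SalesDB TO DISK = 'D:\Backup\SalesDB_Log.trn';
```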

 Indexes
  • Indexes - Make it faster to read data.  
  • Clustered Index - Governs the physical order in which data is stored
  • Non-Clustered Index – a pointer to the data. Covering Index - the INCLUDE clause in SQL Server adds extra columns so a query can be answered from the index alone
  • Index Pros - Faster Retrieval
  • Index Cons - Space required for storing the index; slows down INSERT/UPDATE/DELETE operations because indexes must be updated on every write
Fragmentation - Occurs when pages become full and data must be inserted in the middle. Set an appropriate fill factor based on the INSERT/UPDATE/DELETE pattern

Reorganizing and Rebuilding Indexes - Reorganize has a lower performance impact and can be an online operation. Rebuild recreates the index, and you must wait until the rebuild is complete before the index can be used.
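As a sketch (table and index names are hypothetical), fragmentation can be checked with the sys.dm_db_index_physical_stats DMV; a common rule of thumb is to reorganize at lower fragmentation levels and rebuild at higher ones:

```sql
-- Check fragmentation for all indexes in the current database
SELECT OBJECT_NAME(ps.object_id) AS TableName,
       i.name AS IndexName,
       ps.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') ps
JOIN sys.indexes i
  ON i.object_id = ps.object_id AND i.index_id = ps.index_id;

-- Lower impact, can run online
ALTER INDEX IX_Orders_CustomerId ON dbo.Orders REORGANIZE;

-- Full rebuild; also applies a fresh fill factor
ALTER INDEX IX_Orders_CustomerId ON dbo.Orders REBUILD WITH (FILLFACTOR = 80);
```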
DB Best Practices 
  • Statistics Auto Create / Update (Auto Create Statistics – Set it TRUE)
  • Set file growth in terms of MB, not %; provide enough file size up front
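A minimal sketch of the growth recommendation (file names and sizes are illustrative only): pre-size the files generously and grow in fixed MB increments rather than by percent:

```sql
-- Fixed-MB growth avoids ever-larger percent-based growth events
ALTER DATABASE SalesDB
MODIFY FILE (NAME = SalesDB_Data, SIZE = 10240MB, FILEGROWTH = 512MB);

ALTER DATABASE SalesDB
MODIFY FILE (NAME = SalesDB_Log, SIZE = 2048MB, FILEGROWTH = 256MB);
```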
Security Model
  • Login – Windows authentication (domain / AD users) or SQL Server authentication
  • A login has to be mapped to a database user, then to predefined or user-created database roles
  • Every instance is a separate SQL Server; each instance can be configured with different security
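The login-to-user mapping described above can be sketched as follows (login, user, and database names are hypothetical):

```sql
-- Instance-level login (Windows authentication)
CREATE LOGIN [CORP\Ramani] FROM WINDOWS;

-- Map the login to a database user
USE SalesDB;
CREATE USER Ramani FOR LOGIN [CORP\Ramani];

-- Grant access through a predefined database role
EXEC sp_addrolemember 'db_datareader', 'Ramani';
```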
HA Options 
  • Log Shipping – ships transaction log backups to another server and re-runs the transactions there. Manual failover process
  • Database Mirroring – automated; keeps a complete copy on a secondary server. A software solution; mirroring is at the database level
  • Clustering – hardware redundancy; clustering is at the instance level
Good Refresher!!!!


More Reads

Common Tasks of SQL Server DBA
Index Fragmentation–“If it isn’t broken, don’t fix it”
Buying a Vehicle vs. Buying a Database Platform
Common SQL Server Myths by Paul Randal now in PDF form
Multiple Instances or a Single Instance of SQL Server - Part 1: Counting Instances
Fundamentals: Improving Insert and Update Performance by Dropping Unused Indexes

Happy Learning!!!!

August 13, 2011

Tech-Ed Talks - SQL Server Upgrade Best Practices & High Availability

SQL Server Upgrade Best Practices & High Availability has always been an area of learning (improvement) for me, since I had not worked on clustering and mirroring scenarios.

I found the below Tech-Ed talks extremely useful from a learning perspective

Microsoft SQL Server Upgrade, 2000 - 2005 - 2008: Notes and Best Practices from the Field

Summary of Learnings (Notes)

The upgrade path for SQL Server 7.0 is 7.0 -> 2000 -> 2005 and then 2008

Phases for Migration
  • Planning
  • Pre Upgrade Tasks
  • Upgrade
  • Decide to commit upgrade
  • Post Upgrade
Slipstream
  • A single installation that includes the service packs
In Place Upgrade Pros
  • Faster
  • Application connects to same instance after upgrade
In Place Upgrade Cons
  • Complex Rollback strategy
  • All databases on the instance are upgraded together
Side by Side Upgrade
  • New Installation
  • Detach, copy, and attach databases
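The detach-copy-attach step can be sketched as below (paths and database names are hypothetical; the .mdf / .ldf files are copied between servers in between):

```sql
-- On the old instance: detach the database
EXEC sp_detach_db 'SalesDB';

-- (copy SalesDB.mdf / SalesDB_log.ldf to the new server)

-- On the new instance: attach the copied files
CREATE DATABASE SalesDB
ON (FILENAME = 'D:\Data\SalesDB.mdf'),
   (FILENAME = 'D:\Data\SalesDB_log.ldf')
FOR ATTACH;
```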
Things to consider for Side by Side Upgrade
  • Data Sync
  • Decide whether to move to 64-bit / higher-end hardware
  • Data validation for copied objects
Tools
  • Upgrade Advisor
  • Assessment and Planning Kit
Clustering Scenario
  • Install pre-requisites on the passive node (B)
  • Manually fail over so the passive node (B) becomes active
  • Install pre-requisites on the now-passive node (A)
  • Start the upgrade on the passive node (A)
  • Then upgrade the active node (B)
  • SQL Server automatic failover takes care of the switchover
This is very good information for getting started with migrations involving clustering

Database Mirroring
  • A software solution consisting of a principal and a mirror server. All active transactions are mirrored to the mirror server
Upgrades involving Database Mirroring, Replication and Failover Clustering are also covered, as is migration of Analysis Services.

A bunch of additional best practices are provided in the tech talk.

SQL Server High Availability: Overview, Considerations, and Solution Guidance

Summary of Notes
  • Classifying applications based on HA requirements
  • Types of failures (local / site level)
  • Regional & geographic disaster recovery
  • Choosing an approach based on SLA and cost
  • Planned downtime (service packs, hotfixes, security fixes, rebuilding indexes)
  • Unplanned downtime (hardware, software and power failures)
HA Notes
  • Database backup and restore
  • Log Shipping
  • Database Mirroring (Fastest Failover). Synchronous/ Asynchronous Data movement
  • In a cluster failover the instance has to start on the other node; in mirroring the mirror instance is already started and running
  • Clustering is at the instance level, mirroring is at the DB level
  • Mirroring / log shipping apply data at the physical (page / log record) level
  • Replication - logical; the secondary can have read access
  • Replication is at the table level, and is flexible enough to replicate a subset of the information
For detailed information please check the session. It is a very good session to begin the HA topic with.

More Reads


SQL Server 2005 High-Availability Options (Distilled)
Automating Database Migration to Microsoft SQL Server
MVC,MVP and MVVM: A Comparison of Architectural Patterns
Microsoft Silverlight,WCF RIA Services and Your Business Objects

Happy Learning!!!

FlexMonkey - Flex Test Automation

Learning for this post started with a question on Stack Overflow. Initially I had a few challenges figuring out the tools; later I found a lot of very good material on Flex automation. Kindly check the bookmarks section.
Many thanks to my colleague Sampath for helping me set up / build the project in Flash Builder.
Step 1. Install Flex Monkey 5.0
Step 2. Install Adobe Flash Builder latest version. This is around 500MB (Installer).
Step 3. Download Flex Sample Project  provided in Flexmonkey download page. Unzip the project folder
Step 4. Open FlashBuilder and Create Sample Project
Step 5. Import the downloaded MonkeyContacts Project
Step 6 - Run the Project as Web Application
Step 7 - Launch FlexMonkey Application. Provide Project properties
Step 8 - You will see the status Connected in FlexMonkey once the application is launched. An interesting thing I observed about Flex automation is that you need to compile the application with the automation library. If you check the project under Libraries you will see automation_monkey4.x.swc; compiling it with the project provides the ability to detect the objects in the Flex app. When you develop a Flex app you need to compile it with the library applicable to your version of Flex.
Step 9 - Click the Start Recording option in FlexMonkey as per the snapshot below. After you record the steps, click the red button shown below to stop recording
Step 10 - I did a sample recording of insert and delete operations in the application. You will see the recorded steps listed below
Step 11 - Play back the recording once to verify the test case
Step 12 - After replaying the test case once, drag all the steps as listed below to create a test case out of them
Step 13 - You will see a new test case created based on the steps recorded
Step 14 - Play back the created test by clicking the play button
So, we were able to successfully set up and create an automated test case for Flex apps. The video below was very handy for me in getting the steps clear while writing this post.
The author has bookmarked a series of posts on creating data-driven test cases and setting up Continuous Integration for running test suites.
These posts would be helpful to learn further on Flex Test Automation
It took me a couple of weeks to do this proof of concept.
Happy Learning!!!

August 10, 2011

My StackOverflow QA Ranking

My rankings improved considerably this month. My rank is #3 for this month. First time ever :)



Happy for it!!
Happy Learning!!

August 06, 2011

SQL server 2008 R2 - SSRS Maps Feature Explored

This post is based on my question in the MSDN forum. Many thanks to Lola for providing clarity and detailed steps in the answer.

For getting started with SSRS please refer to previous posts

Objective of current exercise
  • Represent Area as a Map
  • Display related data (numbers) in the map
  • Show indications in color based on certain data
For this purpose below SQL Queries would be used
Step 1 - Create a blank report
Step 2 - Create two data sets based on below queries
-- Dataset1 in SSRS
 
DECLARE @table1 TABLE( id CHAR(10), shape geometry, Value money );
INSERT INTO @table1 VALUES
 ('ZoneA','POLYGON((0 0, 0 3, -4 3, -4 0, 0 0))',2000),
 ('ZoneB','POLYGON((0 0, 4 0, 4 3, 0 3, 0 0))',12000),
 ('ZoneC','POLYGON((0 0, 4 0, 4 -3, 0 -3, 0 0))',8000),
 ('ZoneD','POLYGON((0 0, 0 -3, -4 -3, -4 0, 0 0))',10000);

SELECT id, shape,Value
FROM @table1  

Query for Second Data Set

-- DataSet2 in SSRS

DECLARE @table2 TABLE( id CHAR(10), shape geometry, counts int);
INSERT INTO @table2 VALUES
 ('PointA','POINT(-4 2)', 100),
 ('PointB','POINT(-4 -1)', 250);

SELECT id,shape, counts
 FROM @table2

Step 3 - After creating Data Set you will find them under report data

 

Step 4 - Now Drag and Drop Map from Tool box. Choose Spatial query as option and select DataSet1


Step 5 - Select shape polygon to display in the report. Next Step select Analytical map




Step 6 - Select Display Label


Step 7 - Now Click on New Layer Wizard as per below snapshot


Step 8 - Select Dataset for the new layer



Step 9 - Select Label to be displayed


Step 10 - Now you would see both points and polygon layer. Next step is to display data (visualise based on data). Select polygon color rule


Step 11 - Under General -> Select option visualize data by using custom colors


Step 12 - Under Distribution select custom option and provide data range for the same

Step 13 - Similarly, for the points layer provide a custom color and data range as per the below snapshot






Step 14 - Finally, the report is based on data: colored by data range, visualized using custom colors. This was very good learning for me. Thanks Lola for providing detailed steps in the MSDN answer to my question.


Happy Learning!!!

August 05, 2011

SSIS - For Each Loop Container, Object and Variables

This blog post is based on my learning from my colleague on using the For Each Loop container. I also had issues converting object variables to string / int variables; we will look into that as well.

Scenario - Do Some task for every entry in a table using For Each Loop Container. Example also covers assigning objects to variables

For reading object variables I referred to the post SSIS - Reading object variable in Script task

Getting Started

Step 1 - Create a table with a few columns

CREATE TABLE [dbo].[TableB] (
  [Id] [int] NOT NULL,
  [Stage] [int] NULL,
  [Comments] [varchar] (500) NOT NULL
)

Step 2 - Insert records into the table
/****** Object: [dbo].[TableB] (6 rows scripted) ******/
INSERT INTO [dbo].[TableB] ([Id], [Stage], [Comments])
VALUES (1, 1, 'A Comments');
INSERT INTO [dbo].[TableB] ([Id], [Stage], [Comments])
VALUES (1, 2, 'A Second Comments');
INSERT INTO [dbo].[TableB] ([Id], [Stage], [Comments])
VALUES (1, 5, 'A 3rd Comments');
INSERT INTO [dbo].[TableB] ([Id], [Stage], [Comments])
VALUES (2, 1, 'B 1st Comments');
INSERT INTO [dbo].[TableB] ([Id], [Stage], [Comments])
VALUES (2, 9, 'B 2nd Comments');
INSERT INTO [dbo].[TableB] ([Id], [Stage], [Comments])
VALUES (2, 10, 'B 3rd Comments');
Step 3 - For Each Loop Container
The overall package structure looks as below: fetch entries from the table, loop through them and display the results.



Step 3.1 - SSIS Package Variables



Step 3.2 - SQL Task Properties


Step 3.3 - SQL Task Return Data Set


Step 3.4 - For Each Loop Properties



Step 3.5 - Iterate and Print Entries in Object

Script Task Inside For Loop Container



C# code in the script task

     // Read the current row's values from the package variables
     int id, stage;
     string comment;
     MessageBox.Show("Inside For Each Loop Container");
     comment = Dts.Variables["User::Comment"].Value.ToString();
     id = Convert.ToInt32(Dts.Variables["User::Id"].Value.ToString());
     stage = Convert.ToInt32(Dts.Variables["User::Stage"].Value.ToString());
     MessageBox.Show("Id is " + id.ToString() + " Stage is " + stage.ToString() + " Comment is " + comment);


Step 4 - Iterate Object List Script Task - the only read-write variable passed in this step is User::ObjectList

Script to iterate the elements:

Add name space -
using System.Data.OleDb;

Paste below piece of code (Source - Reading object variable in Script task)

            int id, stage;
            string comment;
            OleDbDataAdapter oleDA = new OleDbDataAdapter();
            DataTable dt = new DataTable();
            MessageBox.Show("Inside Object List");
            // Fill a DataTable from the SSIS object variable (an ADO recordset)
            oleDA.Fill(dt, Dts.Variables["User::ObjectList"].Value);
            foreach (DataRow row in dt.Rows)
            {
                id = Convert.ToInt32(row["Id"].ToString());
                stage = Convert.ToInt32(row["Stage"].ToString());
                comment = row["Comments"].ToString();
                MessageBox.Show("Id is " + id.ToString() + " Stage is " + stage.ToString() + " Comment is " + comment);
            }


Happy Learning!!!

August 02, 2011

SSAS Tutorial - Part I

I referred to the MSDN SSAS tutorial. It is very good, though there are not many snapshots for the steps. The ten lessons provided are a very good start for learning SSAS. I have covered the first three lessons to start with, and am summarising a few steps and notes based on my learning.


Lesson 1 - Defining Data Source and View
  • Creating data sources and data source views, which we have covered in earlier examples
  • Modifying the fact and dimension tables from their default names is the new learning in this lesson

Lesson 2 - Defining and Deploying Cube


Defining Dimension Steps
Creating Date Dimension and Specifying Attributes

Step 1 - Select the Date Dimension


Step 2 - Change Attribute type as shown below


Dimension Wizard
  • Existing Table Date
  • Date Key
  • Full Date Alternate Key - Attribute - Month - Regular - Date - Calendar - Date
  • English Month Name rename to Month
  • Calendar Quarter Attribute to Quarter
  • Calendar Year Attribute to Year
  • Calendar Semester Attribute to Half Year
Step 3

Defining a Cube


Step 1 - Adding Measure Tables



Step 2 - Selecting Measures




Adding Attribute to Dimensions

Step 1 - Under Dimension Structure drag and drop attributes as shown in below snapshot. Adding Product Attributes



Step 2 - Adding Customer and Geography dimensions


Lesson 3 - Modifying Measures 

Step 1 - Adding more measures in the Cube Structure. Attributes from the fact table are added as measures; the name changes in Properties are marked in the below snapshot




Summary - This chapter focuses on
  • Adding Named Calculation
  • Creating Hierarchy
Step - Creating Hierarchy



Step - Adding Named Calculation





Step - Mapping Customer Attribute to Customer Name


Browsing Cube



Display folder names created in the previous steps now show in the Cube Browser's left-side display area

The HIERARCHY created is also listed in the below snapshot



The next list of posts will cover KPIs and Calculations...



Happy BI Learning !!!