
A DBA's best friend is his tempdb
Published Thursday, August 18, 2011 5:22 PM

There is a saying amongst welfare agencies that one can tell how well a family is functioning by looking at their dog. If the dog is neurotic, neglected or maltreated, one fears for the welfare of the children. Likewise, you can tell a lot about the skills of a team of DBAs and developers by looking at the tempdbs on their servers.

The tempdb database is available to all users of a SQL Server instance to house temporary objects such as cursors and tables, and it is where SQL Server creates various internal objects for sorting and spooling operations, as well as index modifications. It can get really busy in there, especially if there are unruly processes. The wise DBA will look after tempdb, giving it plenty of space, making sure it is never mistreated. In short, a happy, moist-nosed tempdb is the mark of a nurturing DBA.

By default, tempdb will be installed, with the other system databases, on the C:\ drive of the SQL Server machine. This is far from ideal in almost all cases. Tempdb requires a lot of space, pre-allocated so files aren't constantly growing. You need more than one tempdb data file; one data file per CPU core is a common recommendation. These files need to be located on drives with the highest write performance possible; a RAID 10 array is a good choice. Also, tempdb storage is one area where Solid State Disks are becoming a popular storage choice in preference to conventional magnetic drives, again due to their vastly higher write performance.
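The pre-allocation advice above translates into a few lines of T-SQL. This is only a sketch: the paths, sizes and growth increments here are illustrative assumptions, and the right values depend entirely on your workload and core count.

```sql
-- Pre-size the default tempdb files so they are not constantly autogrowing
-- (sizes and paths are illustrative, not recommendations)
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, SIZE = 4GB, FILEGROWTH = 512MB);
ALTER DATABASE tempdb MODIFY FILE (NAME = templog, SIZE = 1GB, FILEGROWTH = 256MB);

-- Add further, equally sized data files, e.g. working toward one per CPU core
ALTER DATABASE tempdb ADD FILE (NAME = tempdev2, FILENAME = 'T:\tempdb2.ndf', SIZE = 4GB, FILEGROWTH = 512MB);
ALTER DATABASE tempdb ADD FILE (NAME = tempdev3, FILENAME = 'T:\tempdb3.ndf', SIZE = 4GB, FILEGROWTH = 512MB);
```

Keeping the data files identically sized matters, because SQL Server's proportional-fill algorithm only spreads allocations evenly across files of equal size.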

Of course, not all DBAs can afford the luxury of RAID 10 arrays and expensive SSDs. In such cases, special training and vigilance are required. The developers, who exercise the dog, must be encouraged away from complex, unwieldy routines. Structured activity is best: breaking down routines into a series of well-defined steps, and storing intermediate results in a set of explicit temporary tables. In this way, tempdb usage patterns become much easier to predict. Also, these tables will hopefully be cached, and the bigger ones can be indexed, both of which will help reduce contention. The DBA must diligently monitor the health of tempdb, detecting the SPIDs of wild processes, investigating and killing them if necessary (the SPIDs, not the developers).
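Detecting those wild processes can start with the tempdb space-usage DMVs, available from SQL Server 2005 onwards; a sketch:

```sql
-- Sessions ranked by the tempdb pages they have allocated,
-- for both user objects (temp tables) and internal objects (sorts, spools)
SELECT session_id,
       user_objects_alloc_page_count,
       internal_objects_alloc_page_count
FROM   tempdb.sys.dm_db_session_space_usage
ORDER  BY user_objects_alloc_page_count
        + internal_objects_alloc_page_count DESC;

-- and, if a runaway session really must go (the SPID, not the developer):
-- KILL 53;
```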

Even given all this, one can't help feeling that Microsoft could do more to prevent a lot of tempdb agony. After all, it feels like an area that's going to get worse rather than better, especially as use of Snapshot isolation becomes more prevalent. Aaron Bertrand has filed several tempdb-related Connect items urging Microsoft to offer better advice during the installation process, but to little avail. Judging by the number of forum questions relating to the often-painful process of moving tempdb and reallocating disk space, more action is needed.
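For the record, relocating tempdb itself is only two statements plus a service restart, which is perhaps why the process catches people out; a sketch (the target path is illustrative, and the logical file names assume the defaults):

```sql
-- Point the tempdb files at a new drive; the change takes effect at the next service restart
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, FILENAME = 'T:\TempDB\tempdb.mdf');
ALTER DATABASE tempdb MODIFY FILE (NAME = templog, FILENAME = 'T:\TempDB\templog.ldf');
```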

Some early adopters have their eyes on the new Contained Databases feature in Denali, hopeful that the concept of user databases independent of "plumbing features" like logins and roles may be a move toward selective workspaces for user processes.

In the meantime, how healthy is the dog in your server? If you've got tales of exemplary nurturing or scandalous abuse, we'd love to hear them. The best story will, as always, win a prize.

Cheers,

Tony.

by Tony Davis




Making HTML tables easier on the eye - CSS Structural Pseudo-classes
11 August 2011 by Phil Factor

We asked Phil why his PowerShell tabular reports looked so nice. 'CSS structural pseudo-classes' he muttered mystically. Later on, without any further warning, he popped up with this article that explains, for anyone who has missed them, how to go about doing intricate formatting of an HTML file, the contents of which you cannot alter.

It is a common predicament: you have an HTML fragment with a table, list or dictionary in it, generated from some data, and you have to render it in such a way that the data is easy to read, and the information presented in such a way as to prevent misunderstandings. However, you can't alter the HTML source in order to add CSS classes to individual elements. We'll take a couple of practical examples to show how you can solve this sort of problem.

The flurry of browser upgrades has had the result that a lot of useful CSS2 and CSS3 has recently been implemented. As well as helping the creation of the more esoteric mobile applications and games, it is the extended ways of addressing individual DOM elements that are making the basic layout of text and tables a lot easier. This article aims to illustrate practical ways of doing this.

With CSS3, you can more easily apply styles to elements of the DOM based on their location within the DOM. This solves many problems with the basic layout of text. You may be faced with the requirement that the first paragraph in a DIV section be formatted differently; you may want to tuck a bulleted list in under the paragraph that precedes it; you may want code to have minimum paragraph spacing when it is preceded by another code paragraph. In the old days, you would have needed to do some work by hand in order to tidy up text to make it conform to even the most rudimentary typesetting conventions.
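The tidying just described becomes a one-liner each with the adjacent-sibling selector; a sketch (the p.code class name is a hypothetical illustration, not something defined in this article):

```css
/* tuck a bulleted list in under the paragraph that precedes it */
p + ul { margin-top: 0; }

/* minimal spacing between consecutive 'code' paragraphs (p.code is an assumed class) */
p.code + p.code { margin-top: 2pt; }
```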

CSS previously gave you some addressing methods that could be used. You could apply styles based on the ID of the DOM element, or apply them to all the children of a particular element, or element class, or to all the children of a particular tag type (e.g. A, DIV, P). CSS2 and CSS3 give you a lot more besides. You can apply a style to the Nth child, or to every other child, whether they are even or odd in the sequence (2, 4, 6, 8, 10 or 1, 3, 5, 7, 9...). This is just the start. You can match the first element and no others, or the first few. You can specify this in inverse order, from the last element. If you want to disregard the parentage of the elements that you wish to attach a style to, then you can do so, and just specify elements by their sequence irrespective of their place in the hierarchy. At last, all the current browsers support this. Even Internet Explorer 9 will do it! The full list of these CSS addressing methods is contained in Simple-Talk's 'XPath, CSS, DOM, Selenium: Rosetta Stone and Cookbook' wallchart here, together with the equivalent XPath and JavaScript syntax.

To illustrate how this simplifies perfectly conventional formatting, we'll take a very simple example. You have a list, and you'd like to enhance it so that the background colour on alternating rows is different.

What do you mean, they look the same? We're talking about CSS3, so you will need an up-to-date version of any of the browsers. Even Internet Explorer should show the stripes in the right-hand list, unless IE has gone quietly into 'compatibility mode' (look for the grey border around the browser window). See here for a fix. Hopefully, with an up-to-date browser, you'll see how we're using the CSS3 syntax for applying different styles to alternating elements.

Both the CSS and the HTML are very simple. The CSS has something that looks a lot like a CSS2 pseudo-class and, in fact, extends the idea. It is called a structural pseudo-class.

<style type="text/css">
  ol.stripedlist { list-style: none; }
  ol.stripedlist li:nth-child(2n-1) { background-color: #fafad2; }
</style>


And the HTML has no formatting beyond a couple of DIVs to position the two lists side by side.

<div style="float:left; width:300px">
  <ol>
    <li>Solomon Grundy,</li>
    <li>Born on a Monday,</li>
    <li>Christened on Tuesday,</li>
    <li>Married on Wednesday,</li>
    <li>Took ill on Thursday,</li>
    <li>Grew worse on Friday,</li>
    <li>Died on Saturday,</li>
    <li>Buried on Sunday.</li>
    <li>This is the end of Solomon Grundy.</li>
  </ol>
</div>
<div style="width:300px">
  <ol class="stripedlist">
    <li>Solomon Grundy,</li>
    <li>Born on a Monday,</li>
    <li>Christened on Tuesday,</li>
    <li>Married on Wednesday,</li>
    <li>Took ill on Thursday,</li>
    <li>Grew worse on Friday,</li>
    <li>Died on Saturday,</li>
    <li>Buried on Sunday.</li>
    <li>This is the end of Solomon Grundy.</li>
  </ol>
</div>

For a simple example like this, it is easier, and more backward-compatible, to simply assign a class to alternating elements to achieve the effect. However, you often can't do this, and where you are receiving an XHTML fragment from a source that knows only about the data, rather than your requirement to make it easy on the eye, this approach spares you the messy task of altering the XHTML.
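For completeness, the class-based, backward-compatible alternative mentioned here would look something like this (a sketch; the class name 'stripe' is an illustrative assumption):

```css
ol.stripedlist li.stripe { background-color: #fafad2; }
```

with class="stripe" added by hand to every other li element in the markup, which is exactly the chore the structural pseudo-class saves you.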

When content is generated from a database, it is very quick to supply it to the application as an XHTML fragment, using the XML extensions to SQL syntax. The actual data structure you use depends on the structure of the data. Most commonly, it will be a table, but I've used lists, dictionary lists, and nested DIVs as well as tables. Here is a quick example of a table generated from AdventureWorks.

DECLARE @query NVARCHAR(MAX)
SET @query = '<table>
  <caption>AdventureWorks Employees</caption>
  <tr><th>Employee Name</th><th>Phone</th><th>Email</th></tr>'
  + REPLACE(CAST((SELECT TOP 20 --purely for demonstration purposes
         td = COALESCE(Title + ' ', '') + COALESCE(firstname + ' ', '')
            + COALESCE(Middlename + ' ', '') + Lastname, '',
         td = Phone, '',
         td = EmailAddress
    FROM person.contact
    FOR XML PATH('tr'), TYPE) AS NVARCHAR(MAX)), '</tr><tr>', '</tr>
  <tr>')
  + '</table>'
SELECT @query

Tables are much easier to read if one can apply subtle shades to the backgrounds or borders to delineate columns and rows. You can even do rather nice 'warp and weft' effects to make it easier for the eye to follow down a column or along a row. Additionally, any Excel user will know how important the column and row headers are to help people understand data in tables. If you generate an XHTML table from SQL Server, you don't get an option to apply different classes to different columns or rows. Why should you need to, when the way that a table is displayed is none of the database's business?

We can leave the HTML structure as it is, without assigning any classes. We just create a style sheet. Here is an example. I'm using bitmaps instead of flat background colors just because I prefer the slightly livelier effect.

/* do the basic style for the entire table */
table {
  border-collapse: collapse;
  border: 1px solid #3399FF;
  font: 10pt Verdana, Geneva, Arial, Helvetica, sans-serif;
  color: black;
}
/* attach the styles to the caption of the table */
table caption { font-weight: bold; background-image: url(fieldBack.bmp); }
/* give every cell the same style of border */
table td, table th, table caption { border: 1px solid #eaeaea; }
/* make the first column (not header) blue */
table td:nth-child(1) { color: #00016c; }
/* apply styles to the third column only */
table td:nth-child(3) { font-variant: small-caps; }
/* apply styles to the odd headers */
table th:nth-child(odd) { background-image: url(headingback.bmp); }
/* apply styles to the even headers */
table tr th:nth-child(even) { background-image: url(headingCrossingback.bmp); }
/* apply styles to the even rows */
table tr:nth-child(even) { background-image: url(fieldBack.bmp); }
/* apply styles to the odd columns of even rows */
table tr:nth-child(even) td:nth-child(odd) { background-image: url(fieldBackAlt.bmp); }
/* apply styles to the odd rows */
table tr:nth-child(odd) { background-image: url(fieldBack.bmp); }
/* apply styles to the even columns of odd rows */
table tr:nth-child(odd) td:nth-child(even) { background-image: url(fieldBackAlt.bmp); }

And the HTML source of the table (truncated) looks like this:

<table>
 <caption>AdventureWorks Customers</caption>
 <tr><th>Employee Name</th><th>Phone</th><th>Email</th></tr>
 <tr><td>Mr. Gustavo Achong</td><td>398-555-0132</td><td>[email protected]</td></tr>
 <tr><td>Ms. Catherine R. Abel</td><td>747-555-0171</td><td>[email protected]</td></tr>
 <tr><td>Ms. Kim Abercrombie</td><td>334-555-0137</td><td>[email protected]</td></tr>
 <tr><td>Sr. Humberto Acevedo</td><td>599-555-0127</td><td>[email protected]</td></tr>
 <!-- and so on ..... -->
</table>

...and it should look a bit fancier:

(Rendered output: the styled AdventureWorks Customers table, twenty rows of Employee Name, Phone and Email.)

I've done all this with just the nth-child CSS selector. However, there are others that can be used to select particular elements of 'naked' structures. The Rosetta Stone and Cribsheet gives a much fuller list of selectors, pseudo-classes and pseudo-elements, particularly of attribute addressing (which doesn't help us, as our tables haven't got attributes). Before you use them, check on the Quirks Mode site for compatibility with current browsers, and check on the W3.org site for the details and plenty of examples of their use.

CSS2 and CSS3 structural pseudo-class and pseudo-element selectors (selector; what it does; an example; what the example does):

* selector: selects all elements.
Example: * { font: 9pt Arial, Helvetica, sans-serif; } (makes all elements have the specified font)

> selector: selects direct children of an element.
Example: div.listing > p { margin: 0; font: 11pt "Courier New", Courier, monospace; } (assigns the style to all paragraphs directly within div.listing)

+ selector: selects the following sibling of an element.
Example: h2 + p { margin-bottom: 10pt } (gives the first paragraph after an H2 heading a bottom margin of 10pt)

[attr] selector: selects an element with a certain attribute or attribute value.
Example: p[align=right] { float: right; border: 1px solid silver; text-align: left; padding: 15px; width: 300px; } (floats any text with the attribute align="right" to the right in a box 300px wide)

:before and :after: insert content before or after an element.
Example: p.quote:before { content: open-quote; } p.quote:after { content: close-quote; } (puts a quote before and after any paragraph whose class is 'quote')

:first-child and :last-child: select the first and last children of an element.
Example: table td:first-child { font-variant: small-caps; } table td:last-child { font-variant: small-caps; } (makes the first and last columns of the table render in small caps)

:first-line and :first-letter: select the first line or the first letter of an element.
Example: p.start:first-letter { line-height: 100%; float: left; font-size: 280%; } p.start:first-line { font-variant: small-caps; } (makes a large first letter, floated to the left, and puts the first line in small caps)

~ selector: selects the general next sibling(s) of an element.
Example: h3 ~ p { margin-left: 40em } (gives every paragraph following an H3 a margin of 40em)

:first-of-type: the first sibling element of its type.
Example: td:first-of-type { font-weight: bold; } (makes the first column in a table bold)

:last-child: the last sibling.
Example: td:last-child { font-variant: small-caps; } (makes the last column's text small caps)

:last-of-type: the last sibling element of its type.
Example: td:last-of-type { font-variant: small-caps; } (makes the last column's text small caps)

:only-of-type: the only child of its type.
Example: td:only-of-type { font-weight: normal; font-variant: normal; } (makes the font normal if there is only one column)

:contains('text'): now withdrawn for some reason!

:empty: empty elements (without content).
Example: td:empty { background: silver; } (gives a cell a silver background if it contains no text)

:nth-child(an+b): selects elements according to a formula specifying that the element has an+b-1 siblings before it in the document tree.
Examples: tr:nth-child(odd) { background-color: gray; } (makes the odd rows of a table have a gray background); p:nth-child(4n+1) { color: navy; } p:nth-child(4n+2) { color: green; } p:nth-child(4n+3) { color: maroon; } p:nth-child(4n+4) { color: purple; } (these four alternate paragraphs between four colours)

:nth-last-child(an+b): selects elements according to a formula specifying that the element has an+b-1 siblings after it in the document tree.
Example: tr:nth-last-child(-n+2) { background-color: gray; } (the last two rows in the table get a gray background)

:nth-of-type(an+b): selects elements according to a formula specifying that the element has an+b-1 siblings before it with the same name in the document tree.
Example: img:nth-of-type(2n+1) { float: right; } img:nth-of-type(2n) { float: left; } (floats alternating images in the same document level left and right)

:nth-last-of-type(an+b): selects elements according to a formula specifying that the element has an+b-1 siblings after it with the same name in the document tree.
Example: tr:nth-of-type(n+2):nth-last-of-type(n+2) { background-color: silver; } (gives every row except the first and the last a silver background)

Let's take a different example. Here is a PowerShell script. It is getting information about the SQL Server-related services that are running on a list of servers that I've supplied, just to check that they are running OK. Whatever. The point is that I've once more got stuck with a table result that looks pretty difficult to format nicely.

# get the SQL Server service details from the list of servers and format them into an HTML table
Get-WmiObject -ComputerName 'PhilFtest.factorFactory.com','ltPhilF' `
  -property '__Server,Caption, Description, Name, Status, Started, StartMode' `
  -class Win32_Service `
  -filter "(NOT Name Like 'MSSQLServerADHelper%') AND (Name Like 'MSSQL%' OR Name Like 'SQLServer%')" `
| ConvertTo-HTML -Property '__Server','Caption','Description','Name','Status','Started','StartMode' -fragment

You'll get an output somewhat like this (I've kept it short for demo purposes):

<table>
<colgroup><col/><col/><col/><col/><col/><col/><col/></colgroup>
<tr><th>__Server</th><th>Caption</th><th>Description</th><th>Name</th><th>Status</th><th>Started</th><th>StartMode</th></tr>
<tr><td>PHILFTEST</td><td>MSSQL$SQL2000</td><td></td><td>MSSQL$SQL2000</td><td>OK</td><td>True</td><td>Auto</td></tr>
<tr><td>PHILFTEST</td><td>SQL Server (SQL2005)</td><td>Provides storage, processing and controlled access of data and rapid transaction processing.</td><td>MSSQL$SQL2005</td><td>OK</td><td>True</td><td>Auto</td></tr>
<tr><td>PHILFTEST</td><td>SQL Server (SQL2008)</td><td>Provides storage, processing and controlled access of data, and rapid transaction processing.</td><td>MSSQL$SQL2008</td><td>OK</td><td>True</td><td>Auto</td></tr>
<tr><td>LTPHILF</td><td>SQL Full-text Filter Daemon Launcher (MSSQLSERVER)</td><td>Service to launch full-text filter daemon process which will perform document filtering and word breaking for SQL Server full-text search. Disabling this service will make full-text search features of SQL Server unavailable.</td><td>MSSQLFDLauncher</td><td>OK</td><td>False</td><td>Manual</td></tr>
<tr><td>LTPHILF</td><td>SQL Server (MSSQLSERVER)</td><td>Provides storage, processing and controlled access of data, and rapid transaction processing.</td><td>MSSQLSERVER</td><td>OK</td><td>False</td><td>Manual</td></tr>
<tr><td>LTPHILF</td><td>SQL Server Agent (MSSQLSERVER)</td><td>Executes jobs, monitors SQL Server, fires alerts, and allows automation of some administrative tasks.</td><td>SQLSERVERAGENT</td><td>OK</td><td>False</td><td>Manual</td></tr>
</table>

...but we can create something rather easier on the eye by defining some styles.


/* do the basic style for the entire table */
table {
  border-collapse: collapse;
  border: 2px solid #853a07;
  color: #452812;
  font: 10pt "Times New Roman", Times, serif;
}
/* give some sensible defaults just in case it is an old browser */
table td { border: 1px solid #c24704; vertical-align: top; background: #fdf5f2; }
table th { border: 1px solid #fef7ef; padding: 8pt 2pt 5pt 2pt; color: white; font-weight: normal; vertical-align: top; background: #562507; }
/* Now emphasise the first column right border */
table td:first-of-type { font-weight: bold; font-variant: small-caps; border-right: 2px solid #c24704; }
/* and do a warp and weft effect */
table tr:nth-child(even) td:nth-child(odd) { background: #ffedd9; }
table tr:nth-child(even) td:nth-child(even) { background: #fcf5ef; }
table tr:nth-child(odd) td:nth-child(odd) { background: #ffe0bd; }
table tr:nth-child(odd) td:nth-child(even) { background: #f9e4d4; }
table th:nth-child(even) { background: #703009; }

That will give this, once the style has been applied to the table:

(Rendered output: the same services table for PHILFTEST and LTPHILF, now styled with the warp-and-weft effect.)

Putting it all together

Curiously, after years of happily using Internet Explorer as a simple way of displaying data as part of a scripting process, I initially fell foul of using Internet Explorer 9 to display HTML5 and CSS2/CSS3. Here is the script that I currently use to automatically display this, and any other grid of data, in PowerShell. The principle is the same for every other scripting language.

#firstly, let's get the in-line stylesheet in place, and the rest of the header for the HTML document.
# as we want to render just the table we take out margins. (body {margin: 0 0 0 0;})
$head = @'
<title>SQL Server Services</title>
<style type="text/css">
  /* do the basic style for the entire table */
  body { margin: 0 0 0 0; }
  table { border-collapse: collapse; border: 2px solid #853a07; font-size: 10pt; color: #452812; font-family: "Times New Roman", Times, serif; }
  /* give some sensible defaults just in case it is an old browser */
  table td { border: 1px solid #c24704; vertical-align: top; background-color: #fdf5f2; }
  table th { border: 1px solid #fef7ef; padding: 8pt 2pt 5pt 2pt; color: white; font-weight: normal; vertical-align: top; background-color: #562507; }
  table td:first-of-type { font-weight: bold; font-variant: small-caps; border-right: 2px solid #c24704; }
  table tr:nth-child(even) td:nth-child(odd) { background-color: #ffedd9; }
  table tr:nth-child(even) td:nth-child(even) { background-color: #fcf5ef; }
  table tr:nth-child(odd) td:nth-child(odd) { background-color: #ffe0bd; }
  table tr:nth-child(odd) td:nth-child(even) { background-color: #f9e4d4; }
  table th:nth-child(even) { background-color: #703009; }
</style></head>
'@
# get the SQL Server service details from the list of servers and format them into an HTML document
$content = Get-WmiObject -ComputerName 'PhilFactor.com','ltPhilF' `
  -property '__Server,Caption, Description, Name, Status, Started, StartMode' `
  -class Win32_Service `
  -filter "(NOT Name Like 'MSSQLServerADHelper%') AND (Name Like 'MSSQL%' OR Name Like 'SQLServer%')" `
| ConvertTo-HTML -Property '__Server','Caption','Description','Name','Status','Started','StartMode' -head $head
$exploder = new-object -comobject "InternetExplorer.Application"
$exploder.visible = $true
$exploder.width = 640          # width of the table.
$exploder.menubar = $false     # kick out the menu bar
$exploder.toolbar = $false     # and the toolbar
$exploder.statusbar = $false   # who wants a status bar
$exploder.resizable = $true    # but you may want this
$exploder.AddressBar = $false
$exploder.navigate2('about:offlineinformation') # because it forces the creation of a document which defaults to
                                                # IE9 standards mode (about:blank is in 'compatibility mode' by default!)
While ($exploder.Busy) {} # wait for the document to load
$exploder.document.IHTMLDocument2_write($content) # and put in our content instead

If you can't see CSS2-3/HTML5 views of a page.

If you can't see all those CSS2-3/HTML5 effects in a page in IE8 or IE9, it probably means you're in 'compatibility mode'. If your browser window has a thin gray border, shucks, you're in compatibility mode.

The developers of Internet Explorer were faced with a change of direction towards HTML/CSS standards-compliance after years of attempting an ultimately futile policy of creating their own de-facto browser standard. In order to create a browser that could still render websites that were written to the Internet Explorer standard, with IE8 and IE9 they have had to introduce a notion of 'compatibility mode' which is extraordinarily elaborate and even involves the maintenance of a register of the 'compatibility' of various websites. The intricacies of this mechanism would wear out the patience of even the keenest reader, so I'll say little more other than that one can force the browser to stop fleeing standards mode like a frightened rabbit by ...

selecting, from the menu, Tools -> Compatibility View Settings, and unchecking 'display intranet sites in compatibility view' (and, if necessary, 'display all websites in compatibility view').


From the menu, Tools -> Internet Options. Click on the Advanced tab, look for the Browsing section, and in there find 'automatically recover from page layout errors with Compatibility view'. Untick it. If all else fails, hit F12 in order to get into the 'developer pane'. Here, you will see the last two menu items, 'Browser Mode' and 'Document Mode'. Make sure that these are set to IE9.

I know of no way in scripting to force IE9 to read a dynamic page in standards mode, other than by using the HTML5 doctype (<!DOCTYPE HTML>) in the page you write to the instance. In the script above, I've used guile rather than science.
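In other words, if you build the whole string yourself before writing it to the instance, prepending the doctype is the route just described; a sketch (the $head and $table variables are placeholders for your own stylesheet and ConvertTo-HTML fragment, and $exploder is the IE COM object from the script above):

```powershell
# Assemble a complete HTML5 document, doctype first, then write it to the IE instance
$html = '<!DOCTYPE HTML><html><head>' + $head + '</head><body>' + $table + '</body></html>'
$exploder.document.IHTMLDocument2_write($html)
```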

Conclusions

The use of CSS structural pseudo-classes and pseudo-elements holds out the hope that we can do what we want with the rendering of HTML structures without having to add a lot of classes to the elements, instead just doing it by specifying their position in the DOM. I have to admit that I miss the suddenly-withdrawn :contains pseudo-class, which for the first time allowed us to specify an element by its content. We hope that wiser counsels prevail. For those of us who are having to render such XHTML fragments as tables and lists, generated from PowerShell and SQL, these extensions to CSS are a real boon.

© Simple-Talk.com


Where do all the cool DBAs hang out?
Published Monday, August 15, 2011 3:00 AM

Online, of course, is the answer. We are a pretty solitary breed of employee. Some companies don't have a DBA, many will have one and a half (i.e. the half is someone else from IT that covers for the DBA's leave etc.), and a few may have a team of DBAs.

DBA to DBA communication 1.0

Having access to the internet a few years ago meant that a DBA with no direct contact with other DBAs could get online and email or post on forums to get advice, discuss best practice, or generally mix with other people doing the same work as themselves.

DBA to DBA communication 1.1

Twitter came along and reduced the latency in the conversations from a few minutes or hours to almost instant and, along with the hashtag feature, has brought a lot of DBAs closer together, getting problems solved and decisions made. Discussion takes place in almost real time, and it's pretty cool to get into a few of the conversations that take place if you happen to be at a point in your day where a ping-pong of messages doesn't get in the way.

DBA to DBA communication 2.0

Google have just raised the bar, far beyond Twitter's reach. With Google+ you can share content (your own or something you found online) and see what other people are sharing. It's similar to Facebook, I guess, but there are fewer 2nd cousins and people from school there. There is also a feature called Google Hangout; this is a brilliant way of interacting with other people. You get to share your webcam and audio with everyone else in the Hangout, so you can have a conversation. Grant Fritchey (blog|twitter) has written about how enthralled he is with this ability here - http://www.scarydba.com/2011/08/11/google-hangouts/. There are some restrictions that Grant mentions, and that I support, such as the need for more than 10 people as a maximum attendee limit. If the conversation is to flow, then it wouldn't be appropriate to have many more, but if you wanted to stage something like a training session, where the output is mainly from one or two attendees with others asking questions, then 30+ could easily work. This is going to affect the need for LiveMeeting* accounts, for sure. I would certainly recommend getting a Google account, if only for the ability to use this free feature. It may well constitute a big part of the time you spend interacting with other DBAs very soon.

In the next article I'm going to pick up on a couple of points in Grant's post and some other things that have come tomind about this.

* - and other online meeting type solutions.

by fatherjack
Filed Under: Blogging, keeping up, community



Disk Space Monitoring and Early Warning with PowerShell
15 August 2011
by Sean Duffy

Sean Duffy recently had an unwelcome encounter with Exchange Server Back Pressure, which cut off his message flow due to a lack of space on the server. To make sure it didn't happen again, he found a way to automatically monitor all his servers from afar, with a little PowerShell magic.

Introduction

Monitoring server resources is an important job of any Systems Administrator. Lose track of what resources are being allocated where, and how much of these resources are currently free, and you could find your carefully maintained, delicately balanced servers coming to a grinding halt.

Well, not in all cases, but in the case that you have an OS and a database or database log file disk, you really don't want it running out of disk space.

I recently had a bad spate of disk space issues on an Exchange 2010 SP1 server, causing Back Pressure levels to increase, which resulted in mail delivery from the Internet ceasing to function. Needless to say, the company was not thrilled, but this particular client's budget was tight, so the server was running on minimal resources (naturally, mainly in terms of disk space). To get around that, I needed some kind of early warning system that would alert the client, myself, or both, whenever the space on certain drives was getting low. This is where PowerShell and a disk space monitoring script came in.

In this write-up, I will use Exchange 2010 (SP1) Back Pressure as an example of where you could use a script like this, besides it being used as part of a daily reporting system. The primary goal will be to write a PowerShell script which will send us a regular email report on server disk space resources. By doing this, we can aim to keep a closer eye on specific disk space figures, and therefore have a better idea of when we may be approaching any thresholds.

This PowerShell script is not going to be specifically for monitoring your Exchange servers - it is quite flexible and can be used in just about any Windows environment to monitor your servers. You'll basically feed it a list of servers to watch over, and it will report back on these for you, meaning you could also use it as a more general "daily server disk space report" if you wish.

Disk Space Monitoring and Exchange 2010 SP1 Back Pressure

Back Pressure is when Exchange 2010 attempts to prevent "service unavailability" (when crucial resources are under pressure, such as Memory or Hard Disk space). This resource monitoring system is a service that exists on Microsoft Exchange Server 2010 Hub Transport and Edge Transport servers. Here is a list of items monitored by Back Pressure, taken from the official Technet article.

- Free space on the hard disk that stores the message queue database.
- Free space on the hard disk that stores the message queue database transaction logs.
- The number of uncommitted message queue database transactions that exist in memory.
- The memory that's used by the EdgeTransport.exe process.
- The memory that's used by all other processes.

A back pressure problem is fairly easy to find in your Hub/Edge Transport server event logs once you have seen one before.

There are three levels of resource utilisation that the back pressure feature watches for. This list is also taken from a Technet article, and gives a brief explanation of what happens under each level:

- Normal - The resource isn't overused. The server accepts new connections and messages.
- Medium - The resource is slightly overused. Back pressure is applied to the server in a limited manner. Mail from senders in the authoritative domain can flow. However, depending on the specific resource under pressure, the server uses tarpitting to delay server response or rejects incoming MAIL FROM commands from other sources.
- High - The resource is severely overused. Full back pressure is applied. All message flow stops, and the server rejects all new incoming MAIL FROM commands.
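To make the three-level scheme concrete, here is a small illustrative sketch in Python (nothing Exchange-specific; the medium/high cut-offs are invented placeholders, since Exchange calculates its real thresholds internally):

```python
def back_pressure_level(percent_used, medium=72, high=75):
    """Classify a resource utilisation figure into the three back
    pressure levels. The medium/high cut-offs are placeholders for
    illustration, not Exchange's real, internally derived values."""
    if percent_used >= high:
        return "High"    # full back pressure: all message flow stops
    if percent_used >= medium:
        return "Medium"  # limited back pressure: tarpitting / partial rejection
    return "Normal"      # resource isn't overused: accept everything

for used in (50, 73, 90):
    print(used, "->", back_pressure_level(used))
```

The point of the sketch is simply that each level is a band of utilisation, and that crossing a band boundary changes the server's behaviour, which is why warning before a boundary is crossed is so valuable.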

Now, while it was an unwelcome encounter with Exchange Server Back Pressure that led me to pursue this script solution, I decided to focus on creating a script that would be usable in any Windows Server environment. While I'm keen to create a deeper integration with Exchange at a later date, I felt that a broader approach would be more useful for now!

Monitoring and Reporting on Disk Space using PowerShell

Previously I have only ever really delved into scripting with VBScript and batch files. I had taken a brief look at PowerShell before, but never really


had the time to learn more about it. I figured this would be a nice little project to start using PowerShell on, as I do tend to learn quicker with practical projects. Given that I'm a PowerShell beginner, I'm sure there's plenty of room for improvement, so if you do notice any areas that could be improved, please chime in with a comment at the end of this article!

So, enter PowerShell. My first impressions are that it is fairly easy to learn, and that you seem to be able to accomplish a lot with a comparatively small amount of script. Most Windows 7 installations and all Windows Server 2008 R2 installations (bar Core edition) come with PowerShell version 2.0 installed by default, and this is what the script below is written to work in. PowerShell 2.0 is included in SP2 for Windows Server 2003, so if you are running an older OS, you should be fine here too.

If you have never used PowerShell on your system before, chances are that your PowerShell "Execution Policy" is set to restrict execution of scripts on your machine, and you'll have trouble running this script. To allow your scripts to execute, you need to set your Execution Policy to RemoteSigned. Here is the procedure to, first of all, check what yours is set to, and then, if necessary, set it to RemoteSigned.

1. Run PowerShell as Administrator on your PC/Server.
2. Enter and run the Get-ExecutionPolicy cmdlet - this will output the current setting. If it is not already RemoteSigned or Unrestricted, then use the following cmdlet to set it to allow your scripts to run:

   Set-ExecutionPolicy RemoteSigned

3. You should now be asked to confirm whether you are sure. Press Y to confirm, and then press Enter.

Figure 1 - Setting your execution policy

Now that your environment is ready to run some cmdlets and scripts, let's take a look at some of the code we will be using. I have added comments to each of the sections, explaining what is happening at each stage of the script. The end result will email you a neat, tabled report with all the disk space information you need about any disk drives that are below a specified free disk space threshold percentage.

A basic rundown of the script’s processes goes like this:

1. Iterate through a list of servers you specify in a text file, checking disk space.
2. Check each free disk space percentage figure against a pre-defined percent threshold figure.
3. If the disk in question is below this threshold, then add the details to the report; if not, skip past it.
4. Assemble an e-mail and send it off to the specified recipient(s) if any of the drives were below the free disk space threshold.
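Steps 2 and 3 - the filtering logic - can be sketched in a few lines of plain Python with hypothetical drive figures (the real script gathers the equivalent data via WMI):

```python
threshold = 10  # percent free, below which a drive makes the report

# (server, drive, size_gb, free_gb) - invented sample data for illustration
drives = [
    ("SRV01", "C:", 100.0, 4.5),
    ("SRV01", "D:", 500.0, 120.0),
    ("SRV02", "E:", 250.0, 12.0),
]

report = []
for server, name, size_gb, free_gb in drives:
    percent_free = free_gb / size_gb * 100
    # only drives below the threshold are added to the report
    if percent_free < threshold:
        report.append((server, name, round(percent_free, 2)))

print(report)
```

Here SRV01 D: (24% free) is skipped, while SRV01 C: (4.5%) and SRV02 E: (4.8%) qualify; the PowerShell script applies the same comparison inside its Where-Object clause.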

The nice thing about this script is that it will only report on disks that need your attention (you set the threshold yourself in the script). Therefore, you will only be bothered with an email if one or more disks in your servers are really getting low on disk space.

Don't forget to set up your from and to email addresses, as well as your SMTP (mail) server address.

The Script

##########################################################
# Disk space monitoring and reporting script
##########################################################

$users = "[email protected]" # List of users to email your report to (separate by comma)
$fromemail = "[email protected]"
$server = "yourmailserver.yourdomain.com" # enter your own SMTP server DNS name / IP address here
$list = $args[0] # This accepts the argument you add to your scheduled task for the list of servers, i.e. list.txt
$computers = get-content $list # grab the names of the servers/computers to check from the list.txt file

# Set free disk space threshold below in percent (default at 10%)
[decimal]$thresholdspace = 10

# Assemble together all of the free disk space data from the list of servers and only include it
# if the percentage free is below the threshold we set above.
$tableFragment = Get-WMIObject -ComputerName $computers Win32_LogicalDisk `
| select __SERVER, DriveType, VolumeName, Name, @{n='Size (Gb)';e={"{0:n2}" -f ($_.size/1gb)}}, @{n='FreeSpace (Gb)';e={"{0:n2}" -f ($_.freespace/1gb)}}, @{n='PercentFree';e={"{0:n2}" -f ($_.freespace/$_.size*100)}} `
| Where-Object {$_.DriveType -eq 3 -and [decimal]$_.PercentFree -lt [decimal]$thresholdspace} `
| ConvertTo-HTML -fragment

# Assemble the HTML for the body of the email report.
$HTMLmessage = @"
<font color=""black"" face=""Arial, Verdana"" size=""3"">
<u><b>Disk Space Storage Report</b></u>
<br>This report was generated because the drive(s) listed below have less than $thresholdspace % free space. Drives above this threshold will not be listed.<br>
<style type=""text/css"">
body{font: .8em ""Lucida Grande"", Tahoma, Arial, Helvetica, sans-serif;}
ol{margin:0;padding: 0 1.5em;}
table{color:#FFF;background:#C00;border-collapse:collapse;width:647px;border:5px solid #900;}
thead{}
thead th{padding:1em 1em .5em;border-bottom:1px dotted #FFF;font-size:120%;text-align:left;}
thead tr{}
td{padding:.5em 1em;}
tfoot{}
tfoot td{padding-bottom:1.5em;}
tfoot tr{}
#middle{background-color:#900;}
</style>
<body BGCOLOR=""white"">
$tableFragment
</body>
"@

# Set up a regex search and match to look for any <td> tags in our body. These would only be
# present if the script above found disks below the threshold of free space. We use this regex
# matching method to determine whether or not we should send the email and report.
$regexsubject = $HTMLmessage
$regex = [regex] '(?im)<td>'

# If there was any row at all, send the email
if ($regex.IsMatch($regexsubject)) {
    send-mailmessage -from $fromemail -to $users -subject "Disk Space Monitoring Report" -BodyAsHTML -body $HTMLmessage -priority High -smtpServer $server
}

# End of Script
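Note how the send-or-skip decision at the end of the script hinges on a simple regex test: a <td> tag only exists in the HTML fragment if at least one drive made it past the threshold filter. The same idea, translated to Python purely for illustration:

```python
import re

def should_send(html_body):
    # A <td> tag only appears if the report contains at least one data
    # row, i.e. at least one drive was below the free-space threshold.
    return re.search(r"(?im)<td>", html_body) is not None

# Header-only table: no data rows, so no email would be sent
print(should_send("<table><tr><th>Name</th></tr></table>"))           # False
# One data row: a drive was low on space, so send the email
print(should_send("<table><tr><td>SRV01 C: 4.5%</td></tr></table>"))  # True
```

It is a pleasingly low-tech way to make the report "quiet by default": rather than tracking a separate counter of matching drives, the script just inspects its own output.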

Finally, to run this script you will need a text file, created in your script folder, that it uses as an argument. Call this list.txt and place it in your script folder. In the list.txt file, list each of the servers on your domain that need to be checked in the script, one server name per line. To run the script, create a batch file called start.bat, and use the following as its content:

powershell.exe -command "& 'C:\My Scripts\diskspace.ps1' 'C:\My Scripts\list.txt' "

Naturally, remember to modify the paths in the batch file to match your environment (and note that the above batch file allows you to have blank spaces in your script's path, too). Once this is all ready, you can use Windows Task Scheduler to schedule the batch file to run at a certain time or interval.

Download the full script and associated files here: Script Download

Conclusion


Whether you are looking for a script to actively monitor your server and guard against Exchange disk space back pressure, or you are just after a daily disk space report for your servers, this should do the job. It even goes one step further by trying to be non-obtrusive, emailing you only when disks are getting low on space. It doesn't have to end here, though. Why not improve on the above script? You could use many other PowerShell cmdlets to provide additional information or checks. Some ideas that come to mind are:

- Find certain processes running in memory and check to see if they are using more than a certain amount or percentage of RAM - if so, report on those too.
- Find the total amount of memory in each server or computer, work out the amount of free memory, then see if this amount is below a certain percentage and report on that.
- If working out the percentage of free disk space or memory for Exchange Back Pressure, make some additional checks and calculations based on the Technet article's explanations of low, medium and high resource pressure percentages, and report on these different scenarios (or just before each scenario is reached, to provide ample warning).

Indeed, my fellow Simple-Talk author Laerte Junior has written extensively about using PowerShell to automate your morning checklist, and is now starting to employ WMI in his scripts. Powerful stuff, making these kinds of reporting tasks elegant and versatile.

Sometimes system administrators don't have the time to jump on to every server console and check on their health and resources. Or you may be lucky enough to have a central monitoring console that is fed with all kinds of information about each of your servers. If it is the former, then by monitoring your server resources more closely via a script such as the one we've just seen, you should be able to proactively sort out any issues before they arise. This could save you precious time which would have otherwise been used to fire-fight the issue at hand.

© Simple-Talk.com


Save hours - get a command prompt
Published Monday, August 15, 2011 12:01 AM

Recently somebody showed me a little trick to get a command prompt in any directory. Simply hold down SHIFT whilst right-clicking on the folder, and the menu option containing "Open command window here" appears as if by magic.

I know this is simple but it's not something I knew about, or had forgotten about. There seems to be a complete list of short-cuts on MSDN.

http://support.microsoft.com/kb/126449

Hopefully this can save you as much time as it does for me.

by Richard Mitchell



From Wake-Up call by Larry Gonick

Preventing Problems in SQL Server
15 August 2011
by Grant Fritchey

It is never a good idea to let your users be the ones to tell you of database server outages. It is far better to be able to spot potential problems by being alerted for the most relevant conditions on your servers, at the best thresholds. This will take time and patience, but the reward will be an alerting system which allows you to deal more effectively with issues before they involve system down-time.

No one enjoys server outages. Well, most people don't. I actually get a bit of a kick out of server outages; the adrenaline, the do-or-die, pull-out-all-the-stops troubleshooting, the weeping and gnashing of teeth for the person who caused the outage (as long as it wasn't me)… all that can actually be kind of fun. But, the fact of the matter is, businesses really don't like server outages. It usually means lost revenue, and you may as well translate that directly to your paycheck.

How quickly you respond to outages makes a big difference to how long they last. The speed of the response is directly correlated to the accuracy of your monitoring system. But really, is that all monitoring can do for you? Send a message to let you know that the vacation you had planned this year needs to be cancelled because the server is offline, causing the company to hemorrhage money, and it's all getting taken out of your bonus? That almost makes it seem like you don't need monitoring on the server at all. After all, you'll get a call from the business when they realize the system is offline, and that can be your alert.

But there’s a lot more you can do with alerts than simply respond to system outages—you can get proactive.

Proactive Monitoring

Monitoring to report the failure of a piece of hardware, software, or a process is an important part of your monitoring solution, but it's only a part. Another important aspect is to monitor for events and statuses that don't represent an emergency, but instead represent an impending emergency. Which is better: to get an alert that your log drive is running out of space, or to get an alert that your log drive has already run out of space? Despite my daredevil enjoyment of the chaos generated by an outage, I know that my job is to prevent problems, so I'd rather get the warning before the catastrophe has occurred.

The good news is, it's not that hard to set up monitoring to see when your systems are running out of space, or any number of other alerts for that matter. You can do this through the alerts offered in the SQL Agent, or you can set up alerts through Policy Based Management, or you can use a third-party tool. All these mechanisms will enable you to find out if there are long running queries, or excessive blocking, or if a drive is running out of space. The trick is making sure that you're responding to the alerts. Sounds easy, but it frequently isn't… because of extraneous 'noise'.

Noise

For the sake of our discussion, assume for a moment that you have an alert that will let you know when any hard drive you have is over 80% full. With that in hand, you no longer have to worry about the drive running out of space, because you're going to be alerted long before that occurs and you'll be able to do something about it, right? Maybe.

Enabling the alert, you see that multiple servers have drives with less than 20% free space, so you get started resolving these issues, chortling to yourself that you won't be responding to outages at 3 am.

But there's a snag… one of the servers has less than 20% free space, but it's on a drive that is 1 TB in size, meaning it actually has nearly 200 GB of free space. Looking at a history of the data on that drive, you note that there has never been an increase of more than 20 GB over a six month period. The chance of running through 200 GB in less than a day is vanishingly remote. Now what? Each time your alert polls, you're going to see this server that you don't care about. What's more, you find several other servers that are the same. Most people's initial response to this very common situation is to ignore the one, two, or three drives that they know are not an issue.
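One common way to silence exactly this kind of noise is to alert only when a drive is low by both a relative and an absolute measure. A minimal Python sketch, with invented thresholds:

```python
def needs_alert(size_gb, free_gb, min_percent=20, min_free_gb=50):
    """Alert only when a drive is low by *both* measures: under
    min_percent free AND under min_free_gb of absolute headroom.
    The 1 TB drive with ~190 GB free fails the absolute test, so it
    stays quiet even though it is below 20% free."""
    percent_free = free_gb / size_gb * 100
    return percent_free < min_percent and free_gb < min_free_gb

print(needs_alert(1000, 190))  # big drive: 19% free but 190 GB headroom
print(needs_alert(100, 15))    # small drive: 15% free and only 15 GB left
```

The thresholds (20%, 50 GB) are placeholders; the point is that combining criteria removes a whole class of false positives without creating per-server exceptions.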

There is a concept that started out in electrical engineering called the signal-to-noise ratio. Simply put, a pure transmission of electricity represents the signal. Any degradation of the pure transmission, from resistance, outside interference, impurity of the transmission medium etc, is referred to as noise. You determine the quality of your transmission by dividing your signal by the amount of noise you have. This concept can be applied to general communication, and can absolutely be applied to monitoring and alerting: think about the actionable alerts that you want to do something about as signal, and the alerts that you don't care about for one reason or another as noise.

You want to achieve a very high signal-to-noise ratio with your alerts. It's very important that you get the right information at the right time, so that you can take appropriate action; as soon as you begin to introduce alerts that you don't care about, such as the large drive that won't run out of space any time soon, you're introducing noise into your system. The more drives that don't match your criteria, the more noise you're introducing.


And so far I've only been talking about a disk usage alert. You have to multiply this by the hundreds of possible alerts that you could set up on a system. Not all alerts would be applicable to that system. Not all alert thresholds would be the same for that system. As you start to see these non-applicable alerts and alerts that are set to the wrong threshold, you're dealing with noise in the system.

Adjusting Alerts

If you're building your own monitoring system, one alert at a time, you'll be able to make adjustments to each alert as you go. This will keep your signal-to-noise ratio nice and high. If, however, you've purchased a third party monitoring system, it will usually have a large number of alerts built into the system. When you turn your alerting system on for the first time, frequently your screen fills up with all sorts of dire warnings about the state of your servers. Usually, many, or even most, of these alerts are accurate, actionable information - good signal. You respond to the alerts and fix or avert the problems, making your systems more stable and avoiding disaster. But there's also usually a lot of noise. Alerts are firing because they are inappropriate to your systems, or because the thresholds are too low or too high. While these tools are supposed to make your life easier, no one said that ease would be free (not counting the cost of the software, of course). The mistake made by most people is to simply clear the offending alert (or to turn the alerting system off altogether because it's "broken"!).

From Derek on Alert by Larry Gonick

Making it go away is not fixing it. A good alerting system will make the cleared alert come back up. After all, the idea of alerts is to poke you in the eye, to alert you. Instead, you need to take the time and trouble to make the adjustments to the system so that it's useful for you. The developers who built the monitoring system just didn't know how your system was configured. So if you operate regularly with a large number of disks at greater than 80%, you need to adjust the threshold on that free space alert so that you reduce the noise of false or unnecessary alerts, and increase the signal, the accurate, timely alerts.

This process can take a little time. I would recommend setting aside a few hours to adjust the alerts immediately after you bring a monitoring system online, and then again every couple of weeks. Within a few of these sessions, you'll probably have everything well in hand, so you can start the next process - turning alerts on and off.

As I said before, most monitoring systems only enable a subset of the provided alerts. You need to determine if the ones they enabled are appropriate to your system. Further, you need to decide if the alerts that are disabled are needed on your system, and what the appropriate thresholds are. But remember, the goal is to get your system to provide as much good information, pure signal, as possible, so only turn on the alerts you really need.

Finally, you're going to find certain systems, drives, databases, whatever, just don't match the criteria for alert thresholds you need for the rest of your system. You will need to isolate these exceptions so that they are not generating noise. Most third-party tools provide a means to do this, either by disabling the alert for certain systems or by adjusting the threshold for certain systems. The key here is that exceptions should be exceptional. If you find that you're creating an exception for every database on every system, then maybe you need to go back and adjust the alert threshold itself. You don't want to try to maintain and document too many exceptions. Oh, didn't I mention that? Yeah, if you create exceptions, you should document them, so when an outage occurs you're not scratching your head wondering why you didn't get an alert, because you'll have a record that you disabled the alert. This documentation process is another reason why your exceptions should be exceptional.
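In a homegrown system, such exceptions can be kept as data with the documentation built in, rather than as scattered ad-hoc tweaks. A Python sketch, with invented server names and reasons:

```python
DEFAULT_THRESHOLD = 20  # percent free, applied to servers with no override

# Exceptions should be exceptional - and documented, so that during an
# outage you can see *why* a server was excluded or tuned differently.
OVERRIDES = {
    "BIGDATA01": {"threshold": 5, "reason": "1 TB archive volume, grows ~3 GB/month"},
    "LEGACY07": {"threshold": None, "reason": "decommission scheduled, alerting disabled"},
}

def threshold_for(server):
    override = OVERRIDES.get(server)
    if override is None:
        return DEFAULT_THRESHOLD
    return override["threshold"]  # None means "alerting disabled"

print(threshold_for("SQLPROD01"))  # default applies
print(threshold_for("BIGDATA01"))  # documented exception
print(threshold_for("LEGACY07"))   # documented disablement
```

Because every override carries a reason, the record the article asks for exists automatically: the exception list is the documentation.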

Conclusion


The goal when you set up your monitoring system alerts is to generate a pure signal. Alerts let you know when a system is offline, but more importantly, when a system is about to go offline. You need to take on the responsibility and the labor to adjust your alerts in order to increase the purity of that signal, and you will need to create some exceptions to your alerting.

Going through these tasks should provide you with a far more useful alert system, and avoid the situation where you ignore everything or even turn it all off to avoid the noise. Your reward will be a well adjusted, proactive alerting system which will result in an improvement in system up-time.

© Simple-Talk.com


SQL In The City I and II
Published Monday, August 15, 2011 1:04 AM

The first ever SQL In The City event has come and gone. Well over 300 people showed up at the Royal Society of Medicine (and what an excellent place to hold the meeting) to get a full day of free training on various SQL Server topics. They also had the chance to network and talk to the product managers and developers at Red Gate Software. Talk about an opportunity to make your favorite tools better.

Anyway, I had the privilege of presenting three times during the day. First, I did a full hour on the importance of establishing knowledge of how your servers, databases, and TSQL behave as the foundation for any future performance tuning efforts. "Establish a baseline today" is the one thing I hope everyone took away from that presentation. Immediately following, I had a short session on TSQL Coding Horrors, which was a lot of fun to put together and present. At the last session of the day I was able to be the straight man for Buck Woody's (blog|twitter) presentation on SQL Azure. All three sessions were very well attended and the audience seemed receptive. I learned a lot about the differences between presenting in the US and in the UK. First up, UK audiences mostly laugh at the same jokes, great news because I was frankly worried about that. Second, they don't move much. Seriously. Next time you're presenting in front of a UK audience, ask them to raise their hands, then don't wait too long before continuing your presentation or you might be there for a while. Finally, they absolutely love follow-up questions. When I was done with my first two sessions, a line formed, a long line. Something you'd never see in the US. The apparent lack of response during the presentations was more than made up for by the total attention that had clearly been paid, as shown by the post-session behavior.

It was a great, amazing and humbling experience. Another note for US speakers venturing into the UK: you can't just throw away a line. I tossed off something about how silly it was to use NOLOCK and had to explain it eight (8) separate times (and yes, I counted). People here are polite and attentive, but don't confuse that for a lack of passion. It was wonderful getting the opportunity to meet and talk to everyone.

Not everything goes off without a hitch. The schedule was set up so that there was only five minutes between rooms. This made the shift from place to place just slightly frantic, and ensured a few people were always arriving late, since there was inadequate time to pick up a coffee or tea between sessions, let alone take care of any biological imperatives.

Except for that though, I didn't see a single thing to improve on. The venue was amazing. The AV equipment was excellent. The people attending were great. We had Red Gate branded IPA! A huge round of applause is owed to the people at Red Gate for setting this all up. I'd like to single out Annabel Bradford for all her work. Well done Annabel!

And now, we're taking the show to Los Angeles. There are a few different speakers, but a lot of it's going to be the same. It was a great event in London, and now you'll get the chance to participate here in the US. And did I mention we've got Denny Cherry (blog|twitter)? Well, we do. Now you want to go, right? Well, check out the other speakers. Now you really, really want to go. Only one thing to do about that: you'd best register. I'd do it now if I were you. We filled the last one and there was a waiting list. You want to make sure you get in. This should be something special, I'm telling you.

by Grant Fritchey
Filed Under: Events



Migrating from OCS 2007 R2 to Lync - Part 3
16 August 2011
by Johan Veldhuis

So far, Johan has walked us through the whole process of preparing our new Lync Server 2010 environment, step-by-step. Now it's time to expand that environment, migrate the rest of our users and legacy resources across, and start getting ready to decommission OCS 2007 R2.

In the second part of this series, we discussed how to merge the legacy OCS settings with the new Lync Server configuration. After that, we connected the Lync Server environment to the outside world using the OCS 2007 R2 Edge Server, and we did some test migrations and validated the functionality of our new deployment.

To continue with the migration process, we will first need to prepare the groundwork before our OCS 2007 R2 Edge Server can be completely replaced; let's have a look at how to do this now.

Add the Edge Server

As already mentioned, we are still using the OCS 2007 R2 Edge Server to connect to the outside world, and we naturally want to switch to a Lync server. So, to add the Lync Edge Server to our Lync environment, we will first need to add it to our topology, which can be done using the Topology Builder we've used in previous articles. Once you have the tool open, select the Download Topology from existing deployment option and choose a location to store the topology file. Next, select the Edge Pool node and pick New Edge Pool from the Actions menu. A wizard will be launched to guide you through the process.

In the second stage of the process, you will be asked what type of Edge Server you wish to deploy. Select the Single Computer Pool and specify the internal FQDN of the Edge Server.

With the Multiple Computer Pool option, you have the ability to install multiple Edge Servers and place them in one pool, which will require a Hardware Load Balancing solution in front of the Edges; but keep in mind that the expanded configuration is not supported during the migration phase.

After that, we will need to specify the features for the Edge pool, where the following options can be configured:

- Use a single FQDN and IP Address
- Enable federation
- Enable NAT Translation

It is recommended to configure these features to be the same as the OCS 2007 R2 Edge Server (with one caveat, which I'll come to in a moment), as you might lose some functionality if you forget to enable a feature (unsurprisingly). Although not recommended, you may also enable the Use a single FQDN and IP Address option, which gives you the opportunity to use a single IP and FQDN for all three Edge Services. However, this will require some changes to the default port settings of the Edge Services (by default, all services are configured to use 443).

Now, about that caveat: do not enable the federation option for your new Lync Server, because this will generate a warning during the publishing of the topology. This is due to the fact that only one federation route can be used in a deployment, and this continues to take place over the OCS Edge in our co-existence scenario.


Figure 1 - Configure external FQDNs

The configuration of the FQDNs and port numbers is handled in the next step. As you can see from Figure 1, you can populate the FQDNs and port numbers for each of the Lync Edge services at this stage. However, if we select the Use a single FQDN and IP Address option, the external FQDN fields for both Web Conferencing and Audio/Video will be grayed out, and the wizard will assign unique ports to the services.

In the next step, you must provide the name or FQDN of the Lync Edge Server. Click Add and specify the internal IP address and FQDN which will be assigned to the Edge Server; all traffic between the Front End Server and the external world will be sent via this IP address. When the internal IP address has been specified, the wizard will then ask for the external IP addresses.

As one of the final steps, you will be asked to specify the NAT address (if you previously enabled the NAT option); this address will be used to hide all external IP addresses. In case you're wondering, one of the benefits of NAT is that you prevent the "real" IP address of the server from being exposed to the internet, and the NAT can be used to add a sort of "routing" functionality. When NAT is used together with Lync Server, the following happens:

ChangeDST - this process changes the destination IP address on packets destined for the network that is using NAT. This NAT method is also known as transparency, port forwarding, destination NAT mode, or half-NAT mode.
ChangeSRC - this process changes the source IP address on packets leaving the network that is using NAT. This NAT method is also known as proxy, secure NAT, stateful NAT, source NAT or full-NAT mode.
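The two rewrite modes above can be sketched with a toy model. This is plain Python used purely as illustration, not anything Lync-specific, and the IP addresses are made-up documentation addresses:

```python
from dataclasses import dataclass

@dataclass
class Packet:
    src: str
    dst: str

# ChangeDST ("half-NAT"): rewrite the destination of inbound packets
# so they reach the server's real internal address.
def change_dst(packet: Packet, public_ip: str, internal_ip: str) -> Packet:
    if packet.dst == public_ip:
        return Packet(src=packet.src, dst=internal_ip)
    return packet

# ChangeSRC ("full-NAT"): rewrite the source of outbound packets so
# only the public NAT address is ever exposed to the internet.
def change_src(packet: Packet, internal_ip: str, public_ip: str) -> Packet:
    if packet.src == internal_ip:
        return Packet(src=public_ip, dst=packet.dst)
    return packet

inbound = change_dst(Packet("203.0.113.9", "198.51.100.1"), "198.51.100.1", "10.0.1.5")
outbound = change_src(Packet("10.0.1.5", "203.0.113.9"), "10.0.1.5", "198.51.100.1")
```

In both directions the external peer only ever sees the public NAT address, which is the "hiding" behaviour described above.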

The diagram below illustrates an example network where NAT is being used:

Figure 2 - Example of how NAT works with Lync Server


One final remark should be made about NAT: when using a Hardware Load Balancer and an Edge Pool, NAT should not be used, as it is not supported in this context.

Because all traffic will need to be sent to a Front End Server, we will need to specify the Next Hop, which will be our Lync Front End Server, but could just as easily be a Lync Director. In this case we select our Front End Server and continue with the next step.

Figure 3 - Associate Edge Server with Front End pools

During the next step, we will associate the Edge Server with our single Standard Edition Front End pool; just place a checkmark and press the Finish button to close the wizard.

Once all these steps have been completed, a new object is added to the Edge pools node, and our next step is to publish our new topology. Select the root node, called Lync Server 2010, and select the Publish Topology option from the Action menu.

Before starting with the installation of the Edge Server, confirm that a DNS host (A) record already exists. This needs to be done because a DNS record will not be created automatically, which also means that we will need to export the Lync Server configuration, and then manually import it during the installation of the Edge Server. To export the Lync configuration, we will need to use the Lync Server Management Shell:

Export-CsConfiguration -FileName c:\install\config.zip

The cmdlet above will export the current configuration to a file called config.zip, which is placed in the install directory.

Copy this file to the Edge Server, where it can be imported during the installation. Finally, before starting the setup, make sure the following things are configured correctly:

Primary DNS suffix - incorrect configuration will cause traffic to be blocked because of an incorrect name;
Hosts file - as a best practice, Microsoft recommends that you use a hosts file to provide name resolution when no DNS server is available in the DMZ to host the internal zone;
Network configuration - make sure all NICs are configured correctly. As a best practice, don't configure a gateway on the internal NIC, but use a static route, and configure DNS servers only on your external NICs;
Certificate chain - install the certificate chain of the internal CA;
And make sure .NET 3.5.1 is installed.

Installing the Lync Edge Server

Once all these requirements are met, you can start the setup; install the Visual C++ 2008 runtime and provide the installation location. When the GUI launches, select the option Install or Update Lync Server System, which will bring you to the page we need.

As already explained in Part I, each Lync server contains a replica of the CMS; to install this local replica, start the Lync setup and then select the Install Local Configuration Store option.


Figure 4 - Import config file

The setup will prompt you for the config file which we just exported; select the file using the Browse button, and then press Next to start the installation of the Local Configuration Store. Once this installation has finished, it's time for the next step: Setup or Remove Lync Server Components.

During this step, the necessary components for the Edge Server will be installed, and upon completion, we can continue with the certificate part of the installation.

For the Edge Server, this is a bit different than for the Front End Server. Because the Edge Server is connected to the internet on the external network interface, we typically require certificates issued by publicly trusted third-party Certificate Authorities such as DigiCert or VeriSign. Here's a quick overview of which certificates are needed:

Subject Name          Subject Alternate Name (SAN)    CA
ocs-edg.corp.local    -                               internal
sip.corp.local        webconf.corp.local              external
av.corp.local         -                               internal

Table 1: Edge Certificate overview

As you can see, we only need one external certificate for the Edge Server. The A/V Edge can have an internal certificate if you have an internal CA; otherwise you should assign a separate external certificate to the A/V Edge.

Because this process is almost the same for both certificates, we will only describe it once, for the external interfaces, as this is the most complicated case. First expand the External Edge certificate entry and uncheck the A/V Edge external option:

Figure 5 - Uncheck A/V Edge external

Press the Request button to start the wizard, which will guide you through the process of creating a Certificate Signing Request (CSR). Select the option to Prepare the request now, but send it later, and then press Next. Provide a location to store the CSR file, which ultimately needs to be sent to the CA. The next step is optional, and is only needed when you want to use a different certificate template (by default, the WebServer certificate template is used).

In this optional step, you will be able to specify the Friendly Name, which can be used to easily recognize the certificate. Additionally, you can mark the certificate as exportable, which will give you the opportunity to export the certificate, including the private key. This option is only required when you want to install the same certificate on multiple servers or in a load balancing environment.

Provide the company and geographical information, and then we will arrive at the Subject Name/Subject Alternate Names page. This page already contains the correct FQDN (if properly configured using the Topology Builder), so press Next, which will bring you to the page where you can configure the SIP Domains. In our case, this is only corp.local, so we can place a checkmark and press Next. The last stage of this process will give you the option to add an additional SAN entry, which is not necessary in most cases. Continue with the wizard, have a look at the summary, and then save the CSR.


Repeat this process for the other certificates; the only difference is the choice of CA to submit the CSR to (the one just created will be submitted to an external CA).

Once all certificates are requested and have been received, it's time to install the certificates, which can be done by pressing the Import Certificate button. Select the certificate using the Browse button, and uncheck the Certificate file contains certificate's private key option. Repeat this process for all certificates.

Now that all certificates have been imported, it’s time to assign them to the services; press the Assign button and follow the steps of the wizard.

Figure 6 - The certificates wizard

The wizard will give an overview of the installed certificates. To get started, select the correct certificate and press Next to continue. Review the summary, which describes the certificate, and wait till the certificate has been assigned. Keep in mind to highlight the correct options when assigning the certificate to the external Edge services: Access and Web Conferencing as one pair (Figure 5), and A/V separately. When finished, you will see that the status of all certificates has been changed to assigned.

Now everything is configured correctly, and we can start the services by selecting the Start Services option.

Figure 7 - Start services

Once all services are started, you should see a screen like the one above (Figure 7). You might want to check that all services have really started, using the Services MMC.


Migrating all other users

Because we want to reuse a number of external FQDNs, we will need to first migrate all other users from OCS 2007 R2 to Lync Server. After the users have been migrated, we can make a configuration change so that the new Edge Server will be used.

To migrate the users, we can use the same steps as described in Part II of this series, but to speed up the process a little bit we will be migrating all users at once, rather than just test cases. This can be done using the same methods as described in that earlier article, so let's first have a look at the Lync Server 2010 Control Panel.

Open the Lync Server Control Panel and select the Users menu option. Create the legacy user filter again to search for all OCS 2007 R2 users, and then select Action followed by Move all users to pool…

Figure 8 - Move users

Next, select the OCS 2007 R2 Front End Server as the Source registrar pool and the Lync Server 2010 Front End Server as the Destination registrar pool. Finally, click the Move button to migrate all users from OCS 2007 R2 to Lync Server 2010. Once this process is completed, you shouldn't see any legacy users when performing the same search.

However, there is another, quicker method using the Lync Server Management Shell:

Get-CsUser -OnOfficeCommunicationServer | Move-CsLegacyUser -Target lync-fe.corp.local

This cmdlet will first perform a search for users which are hosted on OCS, and then move those users to Lync Server.

Figure 9 - Migrate users using the Lync Management Shell

As you can see in Figure 9, you are prompted for confirmation before the actual move happens; in this case we can answer with A to migrate all users.

You might think that, since all of our users have been migrated, our OCS 2007 R2 Edge Server can be removed? Wrong, I'm afraid. Before decommissioning the legacy server, we need to reconfigure our environment (which will be discussed in Part 4 of this series).

So for now we will continue to use our OCS 2007 R2 Edge Server and first migrate the response groups and dial-in features.

Migrating Response Groups

Before you can migrate the response groups from OCS 2007 R2 to Lync Server, you must first install the SQLCLI.MSI from the Feature Pack for SQL 2005 - December 2008. Keep in mind that after the update has been installed, you will need to reboot the server. Once the update for the SQL Client is installed, open the Lync Server Management Shell on the Lync Front End Server. To migrate the Response Group configuration, we will need to use the Move-CsRgsConfiguration cmdlet, which will migrate all current Response Groups to Microsoft Lync Server 2010:

Move-CsRgsConfiguration -Source ocs-fe.corp.local -Destination lync-fe.corp.local

By default, nothing is displayed; if you would like to see what is really happening, append the -v parameter to the cmdlet to enable verbose logging.


Figure 10 - Output verbose logging when migrating response group

To confirm that the migration completed successfully, open the Lync Server Control Panel and check if the response groups are listed on the Response Groups page. If you prefer scripting, another method is to use the following cmdlets via the Lync Management Shell:

Get-CsRgsAgentGroup - to display all agent groups
Get-CsRgsQueue - to display all queues
Get-CsRgsWorkflow - to display all workflows

If you have informal agents assigned to groups which used the tabs.xml file in OCS 2007 R2, remove this file and point the users to the new response groups website. The URL of this website is, in this example: https://lync-fe.corp.local/RgsClients/Tab.aspx. As soon as the client software has been migrated, the user can easily go to this page via the Tools menu.

Migrating Dial-In Access

Next up, it's time to migrate the dial-in access features from OCS to Lync Server. The first step is to identify the currently configured dial-in access numbers. To do this, you will need to start the Admin console for Office Communications Server 2007 R2, get the properties of the forest, and then select Conferencing Attendant Properties, which will open the following window:

Figure 11 - Conferencing attendant properties

In this case only one access number is configured; select that number and click on the Edit button.


Figure 12 - Edit conferencing attendant number

Write down the value of the SIP URI, as this is needed to migrate the access number. Repeat this step for each access number which is associated with the pool which you would like to migrate, and once you have completed this, we can continue with the next step: the actual migration.

To migrate the access number from the OCS 2007 R2 pool to the Lync Server 2010 pool, we will need to use the Lync Server Management Shell and the Move-CsApplicationEndpoint cmdlet. The access number can be migrated using the following parameters:

Move-CsApplicationEndpoint -Identity sip:Microsoft.Rtc.Applications.Caa-F9200A4F-1527-4672-9979-D5E70D452012@corp.local -Target lync-fe.corp.local

There are two parameters which you need to specify: Identity, which is the SIP URI from the access number, and Target, which is the FQDN of theLync Server 2010 pool.

Figure 13- Migrating the access number

To confirm that the number has been migrated to the new pool, check the Lync Server Control Panel or the Lync Server Management Shell. To use the Lync Server Control Panel:

Start the Lync Server 2010 Control Panel;
Select the Conferencing option;
Go to the Dial-in Access Number tab;
Check if the access number is listed.


Alternatively, if you’d like to use the Lync Server Management Shell:

Run Get-CsApplicationEndPoint -Identity SIP-URI

Repeat these steps for each access number. As a final check, you can dial in to the access number(s) and check if it works.

Summary

In this third article discussing the migration from OCS 2007 R2 to Lync Server 2010, we've seen how to expand our Lync Server environment and migrate the remaining legacy resources. This included introducing a Lync Server 2010 Edge Server to our new Lync deployment, and migrating the remaining users, as well as the response groups and the dial-in access numbers.

Keep in mind: you can't switch off the old environment yet, as it is still being used.

In the next article we will reconfigure the federation route used by our Lync environment, and then we will continue with removing the OCS 2007 R2 pool. As a bonus, we will have a look at some nice additional features which can be added to our Lync Server 2010 deployment, and also some gotchas you need to know about when migrating.

© Simple-Talk.com


DevOps: Nostrums or Knowledge?
Published Tuesday, August 16, 2011 1:25 AM

There are good reasons for the management of the release of applications. Businesses see it as a safety-net to ensure the success of software deployment. This is a process that requires a different mind-set and set of disciplines to development, and is best handled by small specialist teams that are responsible for getting software delivered to its users in an enterprise. It is meticulous work, because users, and the businesses that employ them, judge software primarily by its resilience: they care a great deal if a release goes wrong and errors get into production systems.

Whereas the development cycle has speeded up greatly in the past decade, under the weight of new development techniques and pressures from the business for new functionality, the same is not so true of release. Developers are often shocked and puzzled by the innate conservatism of the release management process, but the application delivery process has to be meticulous enough to prevent errors getting into production systems. It has to deal with a wider range of platforms, including the Cloud, virtual servers, and mobile devices, and a more complex configuration.

This has led to a problem that has afflicted IT departments that have adopted the ideal of the rapid development cycle and Continuous Integration: how does one release applications to the users more quickly, in line with the increased speed of development? Whereas one might think that it would require the same effort to manage a small number of changes in a large number of releases as the other way around, it doesn't look that way to those of us who are tasked with release. A release is a release, no matter how numerous or complex the changes.

DevOps has never been presented as a nostrum. It doesn't involve group rituals such as scrums and 'post-its' on the wall. (Rapid Development used to involve 'shirtsleeves', large felt-tips and A2 sheets fastened on the wall.) It is more about Dev and Ops working in collaboration instead of seeing the opposite camp as adversaries, defining the most effective workflows, and getting the processes reviewed and refined. All this means much greater coordination.

At this point, automation of at least part of the release process becomes possible. There are prime candidates for automation, such as 'hot-fixes', data-center deployments, and configuration management. However, without the multi-department collaboration that is essential for the rapid delivery of applications, automation carries the danger of merely allowing mistakes to be made faster. The automation process has to be scripted, maintained and controlled by the Ops and QA staff themselves, rather than being created only by developers. Whilst it is an essential component in the DevOps initiative, it is secondary in importance to the cultural and organizational changes that often have to take place before continuous deployment or continuous delivery can become a reality.

by Andrew Clarke


Mimicking Magnetic Tape in SQL
17 August 2011
by Joe Celko

The sequential nature of early data storage devices such as punched cards and magnetic tape once forced programmers to devise algorithms that made the best of sequential access. These ways of doing data-processing have become so entrenched that they are still used in modern relational database systems. There is now a better way, as Joe explains.

I keep telling people that they are writing magnetic tape and punch card programs in SQL. They reply that they do not know what a punch card is, and have never seen a magnetic tape drive. Therefore, they believe that their SQL is just fine.

Let's 'Wikipedia' and 'Google up' pictures of punch cards and tape drives, so the kids can be grateful for what they have today. The physical media is not what is important; the consequences of the physical media are.

When you sit in a live theater, you cannot do a close-up, pan shot, zoom, dissolve or other effects that are common in movies today. Early silent films parked the camera in one position and mimicked a theater experience. This is a general systems principle: a new technology will first mimic the previous technology before it finds its own voice.

Look at a deck of punch cards or a reel of tape. Their records are in a sequential file structure, necessitated by the physical media. Random access in a deck of punch cards is impossible; random access in tape is impractical. Whenever you see a table with an IDENTITY property, the programmer is mimicking that sequential physical ordering and not doing RDBMS modeling.

This means that sorted order is fundamental in sequential files. It also means that we process things one record at a time; we have innate concepts of first, last, current, prior and next records. The world is "left to right" and there are no higher-level abstractions. Fields are all fixed-length strings that are read by an application program to get their meaning - i.e. no data types, defaults or constraints in the data itself.

Another innate property of punch cards and tapes is that you can concatenate contiguous fields to create a new field. COBOL, the classic language for this file structure, has hierarchical sub-fields when you define your records in the DATA DIVISION of a program. I know most Microsoft programmers do not know or even read COBOL, so let me give a simple example of a US mailing address.

01 ADDRESS.
   05 ADDRESS-LINE-1  PIC X(40).
   05 ADDRESS-LINE-2.
      10 CITY    PIC X(17).
      10 STATE   PIC XX.
      10 FILLER  PIC X.
      10 ZIP1    PIC 9(5).
      10 FILLER  PIC X VALUE IS "-".
      10 ZIP2    PIC 9(4).

We have a field called "Address" at the highest level. It is a string in contiguous storage. We can access two sub-fields, named "address-line-1" and "address-line-2"; the first one is a 40-character alphanumeric string. The second sub-field is made of sub-sub-fields that can also be accessed by name; FILLER is a special token that means the field is ignored or replaced by a constant.
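The positional sub-field access that COBOL gives you by name can be mimicked in any language by slicing contiguous storage. Here is a sketch in Python (offsets computed from the PIC clauses above; the sample address values are invented):

```python
# A contiguous fixed-width record, as a punch-card/COBOL program sees it:
# ADDRESS-LINE-1 (40), then CITY (17), STATE (2), FILLER (1),
# ZIP1 (5), a literal "-", and ZIP2 (4) - 70 characters total.
record = ("742 EVERGREEN TERRACE".ljust(40)
          + "SPRINGFIELD".ljust(17) + "IL" + " "
          + "62704" + "-" + "0001")

def field(rec: str, start: int, width: int) -> str:
    """Access a sub-field purely by its position in contiguous storage."""
    return rec[start:start + width]

address_line_2 = field(record, 40, 30)   # the whole group item at once
city  = field(record, 40, 17).rstrip()
state = field(record, 57, 2)
zip_code = field(record, 60, 5) + "-" + field(record, 66, 4)
```

Note that the "group item" and its sub-fields are just overlapping slices of the same bytes; there are no data types or constraints in the data itself, exactly as described above.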

When you see (CAST (terminal_nbr AS CHAR(5)) + CAST (transaction_seq AS CHAR(8))) AS sale_id, you know that they are still doing COBOL in SQL.


There was an excellent example of this mindset on the MS SQL Server forum recently. It was a table-valued function which returned what is called a delta report; that is, a comparison of the change in a set of variables from one report period to another, usually annual. In English, these reports answer questions like "What is the change in sales for this year as compared to last year?"

The poster was having performance problems. This was no surprise. I am not going to post the full example; I do not need everything to make my points, and a reduced model will serve. Here is my slimmed-down Sales table.

CREATE TABLE Sales
(store_id INTEGER NOT NULL,
 sales_date DATE NOT NULL,
 sales_amt DECIMAL(8,2) NOT NULL,
 PRIMARY KEY (store_id, sales_date));

INSERT INTO Sales
VALUES (1, '2011-01-23', 4.50),
       (1, '2011-02-23', 14.50),
       (1, '2011-03-23', 24.50),
       (1, '2010-01-23', 3.50),
       (1, '2010-02-23', 13.50),
       (1, '2010-03-23', 23.50),
       (2, '2011-01-23', 4.75),
       (2, '2011-02-23', 14.75),
       (2, '2011-03-23', 24.75),
       (2, '2010-01-23', 3.75),
       (2, '2010-03-23', 23.75);

What we want to do is look at the sales in 2011 and 2010, and see how much things changed. The real table had a lot of other values, and was comparing customers from this year against last year.

The first thing an old tape file programmer would do is draw a flowchart. Again, I am not sure that younger programmers have ever seen a flowchart, but here is one.

The circle with a tab sticking out of it is a reel of tape. The triangle is a sequential merge operation. The rectangles are programs. The rectangle with a wavy bottom is a printout. The arrows show the flow of control and/or data. Got it?

Here is how it works:

1. Mount the Master Sales tape on a tape drive. Assume it is sorted by (store_id, sales_date) rather than (sales_date, store_id); this is important, very important, and we will talk about it shortly.

2. We have a program that reads the master and extracts the 2011 data to a scratch tape, mounted on a second tape drive. We then rewind the master tape and signal the operator that we are ready for the second scratch tape.

3. The operator then mounts a second scratch tape and tells the second process which tape drive that tape is on.

4. The operator tells the merge process where the first tape, second tape and final scratch (merge) tapes are mounted.

5. The third scratch tape is read by a process that sums each pair of 2011-2010 values based on the dates.

6. A final process does a grand total by store_id and makes a printout.

Here is an anorexic re-write of the poster's SQL function. Again, the original code was much more complicated, was implemented as a table-valued function, had other design flaws and so forth. The parameters were a date range pair in the original code.

CREATE PROCEDURE Delta_Report
(@in_report_start_date DATE, @in_report_end_date DATE)
AS
WITH This_Year (sales_date, store_id, sales_amt_tot)
AS
(SELECT S.sales_date, S.store_id, SUM(S.sales_amt) AS sales_amt_tot
   FROM Sales AS S
  WHERE S.sales_date BETWEEN @in_report_start_date
                         AND @in_report_end_date
  GROUP BY S.sales_date, S.store_id),


Last_Year (sales_date, store_id, sales_amt_tot)
AS
(SELECT S.sales_date, S.store_id, SUM(S.sales_amt) AS sales_amt_tot
   FROM Sales AS S
  WHERE S.sales_date BETWEEN DATEADD(DAY, -364, @in_report_start_date)
                         AND DATEADD(DAY, -364, @in_report_end_date)
  GROUP BY S.sales_date, S.store_id),

Merge_Sales (sales_date, store_id, ty_sales_amt, ly_sales_amt, sales_delta)
AS
(SELECT CASE WHEN This_Year.sales_date IS NULL
             THEN Last_Year.sales_date
             ELSE This_Year.sales_date END AS sales_date,
        CASE WHEN This_Year.store_id IS NULL
             THEN Last_Year.store_id
             ELSE This_Year.store_id END AS store_id,
        This_Year.sales_amt_tot AS ty_sales_amt,
        Last_Year.sales_amt_tot AS ly_sales_amt,
        (COALESCE(This_Year.sales_amt_tot, 0.00)
         - COALESCE(Last_Year.sales_amt_tot, 0.00)) AS sales_delta
   FROM This_Year
        FULL OUTER JOIN
        Last_Year
        ON This_Year.sales_date = DATEADD(DAY, 364, Last_Year.sales_date)
       AND This_Year.store_id = Last_Year.store_id)

SELECT store_id, SUM(sales_delta) AS sales_delta_tot
  FROM Merge_Sales
 GROUP BY store_id;

The CTE "This_Year" is the first scratch tape. The CTE "Last_Year" is the second scratch tape. It is a direct translation of the flowchart from the 1950s into SQL. The tape merge process is translated directly into the FULL OUTER JOIN and it becomes a CTE. The main SELECT then returns the final process from the flowchart.

Yes, the CASE expressions should be COALESCE() expressions, but that is how it was done in the original, because the programmer is still stuck in an IF-THEN logic mindset. This lets him write SQL that looks as much like his procedural language as possible.

Do you see the magnetic tape mindset? Now let's fix it. SQL is a data language, not a computational language, not a processing language. The first thing is that reports are done for fixed, known periods in production work. It is suicide to allow date ranges as parameters, because users will get "creative" and you cannot be sure they were creative in the same way. That is for ad hoc queries.

Getting back to the issue of sort order as a factor in the logic: because the data in the Master Sales is sorted by (store_id, sales_date), we had to make two passes through the tape to get both years.

This assumes we have only two tape drives. If we had 3 or more tape drives, then we could have split out the data for 2011 and 2010 in one pass over the Master tape. In fact, you need three tape drives to do any real work. There was a horror story in a recent issue of COMPUTERWORLD's "Shark Tank" column. The shop the writer worked in had only two old slow tape drives. They asked the boss for three newer tape drives. Instead the boss bought them two new tape drives that were twice as fast - 2 times 2 = 4 times faster, right?

So, the first thing we do is create a table with our reporting periods; I am just doing the (2011-2010) delta.

CREATE TABLE Report_Periods
(report_period_name CHAR(10) NOT NULL,
 report_start_date DATE NOT NULL,
 report_end_date DATE NOT NULL,
 report_delta SMALLINT DEFAULT 1 NOT NULL
   CHECK (report_delta IN (1, -1)));

INSERT INTO Report_Periods
VALUES ('2011-00-00', '2011-01-01', '2011-12-31', +1),
       ('2011-00-00', '2010-01-01', '2010-12-31', -1); -- prior year

Sneaky trick here. The two rows in the table model the two years in the 2011 report period. I happen to like the MySQL convention of using "yyyy-00-00" for a whole year; it makes sorting easier and it is language independent. Notice the current year has a report_delta of plus one and the prior year a report_delta of minus one. Instead of conditional logic, I use data in a data language.

Thinking in aggregates instead of sequences, we see that


SUM(this_year_sales) - SUM(last_year_sales) = SUM(this_year_sales - last_year_sales)
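This identity is easy to check outside SQL. The rows below mirror store 1 of the sample Sales data, carrying a signed delta exactly like the report_delta column (Python used purely as illustration):

```python
# Each row carries a +1/-1 delta, just like the report_delta column:
# (store_id, year, amount, delta)
rows = [
    (1, 2011, 4.50, +1), (1, 2011, 14.50, +1), (1, 2011, 24.50, +1),
    (1, 2010, 3.50, -1), (1, 2010, 13.50, -1), (1, 2010, 23.50, -1),
]

# "Tape mindset": aggregate each year separately, then subtract.
this_year = sum(amt for _, yr, amt, _ in rows if yr == 2011)
last_year = sum(amt for _, yr, amt, _ in rows if yr == 2010)
two_pass_delta = this_year - last_year

# "Set mindset": a single pass, summing the signed amounts.
one_pass_delta = sum(delta * amt for _, _, amt, delta in rows)

assert abs(two_pass_delta - one_pass_delta) < 1e-9  # both give 3.00
```

The signed column turns two scans and a subtraction into one aggregate over one set, which is exactly what the single-SELECT procedure below does.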

Now, here is the replacement procedure, with a SQL mindset.

CREATE PROCEDURE Delta_Report (@in_report_period_name CHAR(10))
AS
SELECT @in_report_period_name AS report_period,
       S.sales_date, S.store_id,
       SUM(R.report_delta * S.sales_amt) AS sales_amt_tot
  FROM Sales AS S, Report_Periods AS R
 WHERE R.report_period_name = @in_report_period_name
   AND S.sales_date BETWEEN R.report_start_date AND R.report_end_date
 GROUP BY S.sales_date, S.store_id;

That is the whole thing! One SELECT, no CTEs and no danger of a bad report range. See why I spend so much time beating people up about how tables are not files, rows are not records and columns are not fields? It really matters.

© Simple-Talk.com


Objects or instances
Published Wednesday, August 17, 2011 11:06 AM

Why we renamed some features in ANTS Memory Profiler 7

When you are designing a complex product, it is important to ensure that terminology is both consistently used and unambiguous. This helps to avoid confusion amongst users and so contributes to that mass of small things that combine to make the difference between software which is a pleasure to use and software which is a pain.

Red Gate's ANTS Memory Profiler is a typical example of a complex product and, for some time, there has been a simmering debate amongst its developers as to whether it is more appropriate to refer to 'objects' or 'instances' of classes being held in memory. Of course, technical correctness - although important in memory profiling - is not the only variable at play. We also need to ensure that our terminology is familiar to users, being especially careful not to use terms differently from the way that they are used by Microsoft and in textbooks on .NET memory usage.

Within linguistics, it is generally accepted that language abhors synonymy. It is exceedingly rare (if, indeed, it is ever possible) for two words to share exactly the same meaning and usage. It was on these grounds that I decided to spend a couple of hours investigating how we (Red Gate), Microsoft, our competitors, and our customers use these two words.

The conclusion of this research was that:

'instance' is a relatively rarely-used term in memory management but, when it is used, it refers to specific instances of a specified class.

'object' is much more common, but has a less specific meaning than 'instance'. The particular class involved is often not described.

We therefore made a small number of changes to the product, which you might, or might not, have noticed. For example, the Object Retention Graph in ANTS Memory Profiler 6 becomes the Instance Retention Graph in ANTS Memory Profiler 7:

ANTS Memory Profiler 6

ANTS Memory Profiler 7

This is because the Instance Retention Graph shows the retention for a particular instance of a previously-selected class.

On the Filter panel, however, you will notice that we continue to use the word 'Object' in the various filter names:

This is because these filters act on the Class List, displaying objects which can be of any class.

Do these names matter? There would certainly be a case for suggesting that changing the name of a feature is potentially confusing for some users, or conversely that a simple edit like this is not a big deal.

Memory profiling is a complicated issue. Whilst we go to great efforts to make ANTS Memory Profiler as useable as possible, at the end of the day you need to have a minimal amount of domain knowledge in order to use the tool effectively. By being a lot more accurate in our use of language, and by creating more educational material over the coming months, we therefore hope that we can help you to use the product in a more efficient manner, reaching results faster.

by Dom Smith
Filed Under: technical communications


XML Configuration files in SQL Server Integration Services
18 August 2011
by Robert Sheldon

Package configuration files are a great way of providing the values of SSIS package properties so that packages can be used in a far more versatile way. They make the deployment of SSIS packages easier and can provide parameters that are based on the server configuration, or which change for each runtime. They're easy to understand, especially when explained by Rob Sheldon.

When you develop a SQL Server Integration Services (SSIS) package, you can add package configurations in order to provide property values to the package at runtime. A package configuration is a defined property/value pair that can be modified without updating the package itself.

Package configurations are useful when you want to deploy packages to multiple servers, when you move your packages from a development to a production environment, or in any situation in which you want to provide property values to a package at runtime.

SSIS provides several methods for storing package configurations. One of the most flexible of those methods is the XML configuration file. The file lets you store one or more package configurations that can be used by one or more packages. The easiest way to create an XML configuration file is to use the Package Configuration wizard after you’ve set up your package. The wizard walks you through the steps necessary to create the file and lets you choose which property values you want to include in that file.

In this article, I walk you through the steps necessary to create an XML configuration file. To demonstrate these steps, I first used the following Transact-SQL code to create the People table in the AdventureWorks2008R2 database:

USE AdventureWorks2008R2
GO

IF OBJECT_ID('dbo.People') IS NOT NULL
DROP TABLE dbo.People
GO

SELECT TOP 1 *
INTO dbo.People
FROM Person.Person

Notice that I inserted only one row. I simply wanted to create the table and use the simplest method for doing so. The table will be truncated when you run the package, so it doesn’t matter how many rows you insert into the table.

After I created the People table, I created an SSIS project in SQL Server Business Intelligence Development Studio (BIDS) and renamed the default package LoadPersonData.

You can download the SSIS package from the speech bubble at the top of the article.

I then added two OLE DB connection managers, which each point to the AdventureWorks2008R2 database on the same instance of SQL Server. (Normally, they would point to two different instances, but for testing purposes, this is fine.) The first connection manager is named Server A. The second one is named Server B.

After I added the connection managers, I defined a string variable named ConnectMngr and set its default value to “Server A.” The variable will be used in the control flow to indicate which connection manager to use. I then added an Execute SQL task and two Data Flow tasks to the control flow, as shown in Figure 1.


Figure 1: The control flow in the LoadPersonData SSIS package

The Execute SQL task truncates the People table. The precedence constraints that connect to the Data Flow tasks are each configured to evaluate to an expression. For example, Figure 2 shows how I configured the precedence constraint that connects to the Server A Data Flow task. Notice that the expression specifies that the ConnectMngr variable must equal “Server A” in order to evaluate to True.

Figure 2: Configuring the precedence constraint to evaluate to an expression

I configured the second precedence constraint just like the first one, except that the expression specifies “Server B” as the variable value.
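For reference, constraint expressions like these are written in the SSIS expression language. Using the variable from this example, the two expressions would look roughly as follows (a sketch of the syntax, not copied from the figures):

```
@[User::ConnectMngr] == "Server A"
@[User::ConnectMngr] == "Server B"
```

The first expression gates the Server A Data Flow task and the second the Server B task; only the branch whose expression evaluates to True will run.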

Next I configured each Data Flow task with an OLE DB source and an OLE DB destination. The sources and destinations use their respective connection managers. For example, the Server A data flow uses the Server A connection manager. Figure 3 shows the Server A data flow components. Each data flow retrieves data from the Person table in the AdventureWorks2008R2 database and inserts that data in the People table.


Figure 3: Configuring the data flow for the Server A connection manager

That’s all there is to setting up your SSIS package. Although this is a very simple package, it’s all we need to demonstrate how to implement XML configuration files. (Actually, we don’t even need that much.) If you don’t want to create this package, and instead want to use a package you’ve already created, you should have no trouble applying the steps in the rest of the article to your situation.

Setting Up Your XML Configuration File

After you’ve set up your package, the first step in setting up the XML configuration file is to enable package configurations. To do so, click the Package Configurations option on the SSIS menu. This launches the Package Configuration Organizer, shown in Figure 4.

Figure 4: The Package Configuration Organizer in SSIS

To enable package configurations on your package, select the Enable package configurations checkbox. You can then add your package configurations to the package. To do so, click Add to launch the Package Configuration wizard. When the wizard appears, click Next to skip the Welcome screen. The Select Configuration Type screen will appear, as shown in Figure 5.


Figure 5: The Select Configuration Type screen in the Package Configuration wizard

From the Configuration type drop-down list, select XML configuration file. You can then choose to specify your configuration settings directly or specify a Windows environment variable that stores the path and file name of the configuration file. For this example, I selected the Specify configuration settings directly option and specified the following path and file name: C:\Projects\SsisConfigFiles\LoadPersonData.dtsConfig. The main thing to notice is that the file should use the extension dtsConfig.

NOTE: If you specify an XML file that already exists, you’ll be prompted whether to use that file or whether to overwrite the file’s existing settings and use the package’s current settings. If you use the file’s settings, you’ll skip the next screen; otherwise, the wizard will proceed as if the file had not existed. Also, if you choose to use an environment variable to store the path and file name, the wizard will not create a configuration file and will again skip the next screen. Even if you use an environment variable, you might want to create the file first and then select the environment variable option afterwards.

The next screen in the wizard is Select Properties to Export. As the name implies, this is where you select the properties for which you want package configurations. In this case, I selected the Value property for the ConnectMngr variable and the ServerName property for each of the two connection managers, as shown in Figure 6.


Figure 6: Selecting properties in the Package Configuration wizard

Because I chose three properties, three package configurations will be created in the XML file. You can choose as many properties as you want to add to your file.

On the next screen of the Package Configuration wizard, you provide a name for the configuration and review the settings (shown in Figure 7).

Figure 7: The Completing the Wizard screen in the Package Configuration wizard

If you’re satisfied with the settings, click Finish. The wizard will automatically generate the XML configuration file and add the properties that you’ve specified. The file will also be listed in the Package Configuration Organizer, as shown in Figure 8.

Figure 8: The XML package configuration as it’s listed in the Package Configuration Organizer

NOTE: When you add an XML configuration file, no values are displayed in the Target Object and Target Property columns of the Package Configuration Organizer. This is because XML configuration files support multiple package configurations.

You should also verify whether the XML package configuration file has been created in the specified location. For this example, I added the file to the C:\Projects\SsisConfigFiles\ folder. The file is automatically saved with the dtsConfig extension. If you open the file in a text editor or browser, you should see the XML necessary for a configuration file. Figure 9 shows the LoadPersonData.dtsConfig file as it appears in Internet Explorer.


Figure 9: The XML in the LoadPersonData.dtsConfig file

As Figure 9 shows, the XML configuration file includes the <DTSConfigurationHeading> element. The element contains the attributes and their values that define when, who, and how the file was generated. The file also includes one <Configuration> element for each package configuration. Each <Configuration> element includes the attributes and their values necessary to determine which property is being referenced. Within each <Configuration> element is a nested <ConfiguredValue> element, which provides the property’s actual value.

Notice that the property values are the same as those of the package itself. When you first set up an XML configuration file, the current package value is used for each property. You can, of course, change those values, as I demonstrate later in the article.
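Abridged, a generated dtsConfig file follows the general shape below. This is a sketch only: the attribute values shown (and the "..." placeholders) are illustrative, not the exact output in Figure 9.

```xml
<?xml version="1.0"?>
<DTSConfiguration>
  <DTSConfigurationHeading>
    <DTSConfigurationFileInfo GeneratedBy="..." GeneratedFromPackageName="LoadPersonData"
                              GeneratedFromPackageID="..." GeneratedDate="..."/>
  </DTSConfigurationHeading>
  <!-- One Configuration element per exported property; here, the
       Value property of the ConnectMngr variable. -->
  <Configuration ConfiguredType="Property" ValueType="String"
                 Path="\Package.Variables[User::ConnectMngr].Properties[Value]">
    <ConfiguredValue>Server A</ConfiguredValue>
  </Configuration>
</DTSConfiguration>
```

In this example the file would contain two further <Configuration> elements, one for the ServerName property of each connection manager.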

Running Your SSIS Package

After you’ve created your XML configuration file, you’re ready to run your package. You run the package as you would any other SSIS package.However, because package configurations have been enabled, the package will check for any settings that have been predefined.

For the example I’ve been demonstrating here, the package will run as if nothing has changed because, as stated above, the XML configuration file contains the same values as the properties initially defined on the package. That means the ConnectMngr variable will still have a value of “Server A,” and the connection managers will still point to the same SQL Server computer. Figure 10 shows the package after it ran without modifying the XML configuration file.

Figure 10: Running the LoadPersonData package with the default settings

As you would expect, the Server A data flow ran, but not the Server B data flow. However, the advantage to using XML configuration files is that you can modify property settings without modifying the package itself. When the package runs, it checks the configuration file. If the file exists, it uses the values from the listed properties. That means if I change the property values in the file, the package will use those new values when it runs.

For instance, if I change the value of the ConnectMngr variable from “Server A” to “Server B,” the package will use the new value. As a result, the precedence constraint that connects to the Server A Data Flow task will evaluate to False, the precedence constraint that connects to the Server B Data Flow task will evaluate to True, and the Server B data flow will run. Figure 11 shows what happens if I change the variable’s value in the XML configuration file to “Server B.”


Figure 11: Running the Server B Data Flow task in the LoadPersonData SSIS package

As you would expect, the Server B Data Flow task ran, but not the Server A Data Flow task. If I had changed the values of the ServerName properties for the connection managers, my source and destination servers would also have been different.

Clearly, XML configuration files offer a great deal of flexibility for supplying property values to your packages. They are particularly handy when deploying your packages to different environments. Server and instance names can be easily changed, as can any other value. If you hard-code the path and file name of the XML configuration file into the package, as I’ve done in this example, then you must modify the package if that file location or name changes. You can get around this by using a Windows environment variable, but that’s not always a practical solution. In addition, you can override the configuration path and file names by using the /CONFIGURATION option with the DTExec utility.

Whatever approach you take, you’ll find XML configuration files to be a useful tool that can help streamline your development and deployment efforts. They’re easy to set up and maintain, and well worth the time it takes to learn how to use them and how to implement them in your solutions.

© Simple-Talk.com


Introducing: SQL Tab Magic
Published Thursday, August 11, 2011 10:53 AM

Yesterday I wrote about Down Tools Week and trying to build a working product in 5 days. I also released the first version of the tool to a group of people in our early access program, and they have spent the last 24 hours trying it out, reporting bugs, and giving me lots of feedback. I've spent the last 6 hours frantically fixing some of the bugs, getting ready for a public release, and trying to remember to breathe.

So, with a big fat not-even-beta-yet label slapped on, here is SQL Tab Magic:

Tabs are automatically restored when you reopen SSMS

Reopen tabs that you have closed manually

Search open tabs and jump directly to the one you want

Download SQL Tab Magic from the Red Gate website.

by theo.spears


The Top 5 WPF and Silverlight Gotchas
10 August 2011
by Chris Farrell

As WPF and Silverlight sit on the .NET framework, they’re subject to the rules of the Garbage Collector. That means there are a few unique ways in which WPF will cause your application to leak memory, and Chris Farrell points out the most prominent culprits.

As I’m sure you know, WPF and Silverlight both use XAML as their interface markup language. The idea is that you can define your user interface (UI) and bind it to data without ever writing a line of code (healthy skepticism advised). Whether or not you buy into that vision, the UI possibilities can be stunning, and it seems Microsoft has created a technology that combines both power and flexibility. However, with that power comes responsibility.

It’s with an eye on that responsibility that I write this article, in which I want to talk about some of the problems that you can introduce into your application without even realizing it.

Background
.NET uses a garbage collector to reclaim and reuse the space left behind when objects are no longer needed. To do this, it builds a list of all objects that are still ultimately referenced from an application root, such as the stack, other objects on the heap, CPU registers and statics, to name just a few. Everything else (i.e. objects which have no such references) is assumed to be garbage, and the .NET framework rearranges memory allocation to reuse the gaps these objects filled.

A leak (or, if you’re being picky, leak-like behavior) occurs when a section of code fails to release references to objects it has finished working with. The smaller the leak, the greater the number of iterations that must occur before it becomes a noticeable problem. The larger the leak, the more obvious the problem.

A really obvious example of this problem is adding an object to a static collection and then forgetting about it. Other common ones involve event handling, which we will discuss later. The simple fact is that if you leave a reference to an object behind, and that reference traces back to an application root, then you have a leak.

There are lots of great articles about .NET memory management, and one of the first things you can do to avoid leaks in general is to really understand memory management.
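The root-reference principle isn't specific to .NET. As a language-neutral illustration (Python's garbage collector works differently in detail, but the reachability idea is the same), this sketch shows an object kept alive by a forgotten "subscription" reference, and freed once that reference is removed:

```python
import gc
import weakref

class Order:
    """Stand-in for any object we expect to be collected."""
    pass

handlers = []                  # plays the role of an event's subscriber list

order = Order()
handlers.append(order)         # a subscription-style strong reference
probe = weakref.ref(order)     # lets us observe whether the object is alive

del order                      # our own reference is gone...
gc.collect()
assert probe() is not None     # ...but the lingering reference keeps it alive

handlers.clear()               # "unsubscribe": remove the last reference
gc.collect()
assert probe() is None         # now the object can be collected
```

The same pattern, with an event's invocation list in place of the `handlers` list, is behind most of the leaks discussed below.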

Heavyweight User Interfaces in XAML
Silverlight and WPF applications are stateful, and allow us to hold state in the form of complex data structures as well as rich UI elements such as images and media. All of this “stuff” adds to the size of the views we create and, ultimately, the size of a memory leak when things go wrong. If you have a memory leak that involves a complex UI then it can quickly become a major problem, especially if users are constantly opening and closing windows as part of standard flows.

Just as an example of how these problems can scale, this is exactly the situation I found with a large financial application written in WPF, employing all the usual accounting/finance type windows, often containing many hundreds of rows of data. As you would expect, the developers had taken full advantage of data binding and the Entity Framework. It looked great and all seemed well until, during system testing, they discovered that the application would get slower over time and ultimately crash the machine. Eventually they actually had to reboot to get over it. Naturally I suspected a memory leak, but nothing could prepare me for the extent of the issues I actually found (but that’s a different story).

Some of the issues we’ll cover are specific to WPF/XAML and Silverlight, and others are general leaks you will get in any application. I thought it would be useful to go through the main technology-specific leaks you can easily create; thankfully, the good news is that they are easy to fix and avoid in the future.

WPF and Silverlight leaks
While I‘ve tried to come up with a list of the most likely leaks, the trouble is that, depending on platform and framework versions, there are many potential leak mistakes you can make. Regardless, you’ve got to start somewhere, and these points will always serve you well. You’re likely to quickly see a pattern emerging in the underlying nature of the problems and solutions I highlight, but I do recommend you read to the end, because I almost guarantee that you’ll encounter one or more of these situations sooner or later.

Unregistered events (WPF + Silverlight, All versions)
Let’s start with the classic leak, common to all .NET applications - the event leak. While this is a common source of leaks for all .NET applications, it’s not a bug in .NET, but rather a common oversight by developers.

Specifically, if you create an event handler to handle events occurring in some object, and you don’t clear the link when you have finished, an unwanted strong reference will be left behind.


The Issue

My contrived example below deliberately isn’t specific to WPF/Silverlight, but I include it because it’s a very common memory leak which all .NET applications are vulnerable to. In it I am subscribing to an OnPlaced event on my Order class. Imagine this code executes on a button click. Basically, it sets up an order for a currency exchange to take place when certain price conditions are met:

Order newOrder = new Order("EURUSD", DealType.Buy, Price,
                           PriceTolerance, TakeProfit, StopLoss);
newOrder.OnPlaced += OrderPlaced;
m_Currency.OnPriceUpdate += newOrder.OnTick;
m_PendingDeals.Add(newOrder);
Listing 1

When the price is right, an Order completes and calls the OnPlaced event, which is handled by the OrderPlaced method:

void OrderPlaced(Order placedOrder)
{
    m_PendingDeals.Remove(placedOrder);
}
Listing 2

In the event handler, you can see that we are already eliminating a really common source of leaks; namely, references from collections (in this case the m_PendingDeals collection).

However, the Order object is still referenced via the event subscription made when we set up the order. That reference will keep the Order object alive even though we have removed it from the collection. It’s so easy to make this mistake.

The Solution

The OrderPlaced method is just one line away from avoiding a memory leak!

void OrderPlaced(Order placedOrder)
{
    m_PendingDeals.Remove(placedOrder);
    m_Currency.OnPriceUpdate -= placedOrder.OnTick;
}
Listing 3

The last line unsubscribes the event and removes the strong reference. If this is news to you, then drop everything and look at all your event handling code. Chances are you have a leak.

Databinding (WPF + Silverlight, All versions)
You read that right; data binding, the thing you rely on, can cause memory leaks. Strictly speaking it’s actually the way you use it that causes the leak and, once you know about it, it’s easy to either avoid or code around this issue.

The Issue

If you have a child object that data binds to a property of its parent, a memory leak can occur. An example of this is shown in Listing 4, below.

<Grid Name="mainGrid">
    <TextBlock Name="txtMainText"
               Text="{Binding ElementName=mainGrid, Path=Children.Count}" />
</Grid>
Listing 4: DataBinding Leak Example

In this example, the condition will only occur if the bound property is a PropertyDescriptor property, as Children.Count is. This is because, in order to detect when a PropertyDescriptor property changes, the framework has to subscribe to the ValueChanged event, which in turn sets up a strong reference chain.

If the binding is marked as OneTime, the bound property is a DependencyProperty, or the object implements INotifyPropertyChanged, then the issue won’t occur. In the case of OneTime binding this is because, as the name suggests, it doesn’t need to detect property changes, as the binding occurs from data source to consumer just once.

Solution


There are a number of work-arounds for this problem.

1. Add a DependencyProperty [to the page/window] which simply returns the value of the required PropertyDescriptor property. Binding to this property instead will solve the problem.

2. Make the binding OneTime

Text="{Binding Path=Salary, Mode=OneTime}"
Listing 5

3. Add the following line of code on exit from the page:

BindingOperations.ClearBinding(txtMainText, TextBlock.TextProperty);
Listing 6

(This simply clears the binding and removes the reference.)

Static events (WPF + Silverlight, All versions)
Subscribing to an event on a static object will set up a strong reference to any objects handling that event. Statics are a classic source of root references, and are responsible for a high proportion of leaks in code.

Statics, once referenced, remain for the duration of the app domain execution, and therefore so do all their references. Strong references preventing garbage collection are just memory leaks by another name.

To show you what I mean, the code below subscribes the calling class to the event source, EventToLeak, on the static object MyStaticClass:

MyStaticClass.EventToLeak += new EventHandler(AnEvent);
Listing 7

The handling event, AnEvent, will be called when the EventToLeak event fires:

protected override void AnEvent(EventArgs e)
{
    // Do Something
}
Listing 8

If you don’t subsequently unsubscribe the event, then it will leak because MyStaticClass continues to hold a strong reference to the calling class.

The Solution

To unsubscribe, simply add the code line:

MyStaticClass.EventToLeak -= this.AnEvent;

This releases the strong reference from MyStaticClass. It’s a simple solution, but then it’s a simple problem – human error and oversight.

Command Binding
Command binding is a really useful feature in WPF; it allows you to separate common application commands and their invocation (such as Cut, Paste, etc.) from where they are handled. You can write your classes to handle specific commands, or not, and even indicate if those commands can be executed. As useful as these bindings are, you do have to be careful about how you use them.

The Issue

In the following example I am setting up some code within a child window to handle when Cut is executed within the parent mainWindow. I first create a CommandBinding, and then simply add it to the parent window’s CommandBindings collection.

CommandBinding cutCmdBinding = new CommandBinding(ApplicationCommands.Cut,
                                                  OnMyCutHandler, OnCanICut);
mainWindow.main.CommandBindings.Add(cutCmdBinding);

...

void OnMyCutHandler(object target, ExecutedRoutedEventArgs e)
{
    MessageBox.Show("You attempted to CUT");
}

void OnCanICut(object sender, CanExecuteRoutedEventArgs e)
{
    e.CanExecute = true;
}
Listing 9

You may be able to see what the problem is just from reading the code above because, at the moment, it leaks. It’s because we are leaving a strong reference in the mainWindow.main.CommandBindings object, pointing to the child. As a result, even when the child closes, it will still remain in memory due to the held reference.

This is obviously a contrived example to illustrate the point, but you can easily set this scenario up without even realizing it.

The Solution

Again, the solution couldn’t be easier and, not surprisingly, involves removing the command binding reference:

mainWindow.main.CommandBindings.Remove(cutCmdBinding);
Listing 10

Once this reference is removed, the leak will go away.

DispatcherTimer Leak
Improper use of the DispatcherTimer will cause a memory leak. There’s not much more background to this, so let’s just jump right in.

The problem

The code below creates a new DispatcherTimer within a user control. A textbox is updated with the contents of the count variable, which is updated every second by the DispatcherTimer. To make it easier to see the leak, I have also added a byte array called myMemory, which just makes the leak much bigger and easier to see.

public byte[] myMemory = new byte[50 * 1024 * 1024];

System.Windows.Threading.DispatcherTimer _timer =
    new System.Windows.Threading.DispatcherTimer();
int count = 0;

private void MyLabel_Loaded(object sender, RoutedEventArgs e)
{
    _timer.Interval = TimeSpan.FromMilliseconds(1000);
    _timer.Tick += new EventHandler(delegate(object s, EventArgs ev)
    {
        count++;
        textBox1.Text = count.ToString();
    });
    _timer.Start();
}
Listing 11


On my main window, I am adding an instance of the UserControl to a StackPanel (after removing its children first) on a button click. This will leak memory for every button click and, as mentioned a moment ago, in this example the main leak you will see is the byte array. Tracing it backwards (using ANTS profiler in this case, though any profiling tool will do) shows the UserControl as the source of the leak.

This probably feels familiar, as the problem is once again a reference being held, this time by the Dispatcher, which holds a collection of live DispatcherTimers. The strong reference from the collection keeps each UserControl alive, and therefore leaks memory.

The Solution

The solution is really simple but easy to forget and, you guessed it, you’ve got to stop the timer and set it to null. Here’s the code to do that:

_timer.Stop();
_timer = null;
Listing 12

TextBox Undo Leak
The last leak I want to draw your attention to is not really a leak; it is intended behavior, but it’s important to know it’s there.

The Problem

The problem is to do with the TextBox control and UNDO. TextBoxes have built-in undo functionality, enabling a user to undo their changes to the contents of a text box. To achieve that, WPF maintains a stack of recent changes, and when you use a memory profiler, you can clearly see a build-up of data on this undo stack.

This isn’t a major problem unless your app is updating large strings to text boxes over many iterations. The main reason to note this behavior is that it can often show up on memory profile traces, and there is often no point being distracted by it.

The Solution

You can limit the behavior of the undo stack by either switching it off:

textBox1.IsUndoEnabled = false;

Listing 13

Alternatively, you can reduce its impact by setting the UndoLimit property:

textBox1.UndoLimit = 100;

Listing 14

This limits the number of actions that can be undone, in this case to 100. By default the setting is -1, which limits the number of actions only by the amount of memory available. Setting the value to zero also switches undo off.

Conclusion

None of this is rocket science, and it’s all based on the same principle: “leave a reference behind and potentially you have a leak”. Obviously that depends on whether the left reference is ultimately connected to a root reference.
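That principle isn't specific to WPF. As a minimal, non-WPF sketch of it (the Publisher and Subscriber names here are invented for illustration): a delegate stored in a static event is reachable from a GC root, so any object the delegate references can never be collected until the handler is removed.

```csharp
using System;

static class Publisher
{
    // A static event is reachable from a root for the life of the AppDomain,
    // so every subscriber its delegate references is kept alive.
    public static event EventHandler SomethingHappened;
}

class Subscriber
{
    public byte[] Payload = new byte[1024 * 1024]; // stand-in for expensive state
    public void OnSomething(object sender, EventArgs e) { }
}

class Program
{
    static void Main()
    {
        var subscriber = new Subscriber();
        EventHandler handler = subscriber.OnSomething;
        Publisher.SomethingHappened += handler;

        var weak = new WeakReference(subscriber);
        subscriber = null;
        GC.Collect();
        GC.WaitForPendingFinalizers();
        GC.Collect();

        // True: the static event's delegate still roots the subscriber.
        Console.WriteLine("Alive while subscribed: " + weak.IsAlive);

        // Removing the handler breaks the chain back to a root,
        // after which the subscriber becomes eligible for collection.
        Publisher.SomethingHappened -= handler;
        handler = null;
        GC.Collect();
        Console.WriteLine("Alive after unsubscribe: " + weak.IsAlive);
    }
}
```

The WeakReference lets us observe liveness without itself keeping the object alive, which is exactly what a memory profiler does when it traces a leaked instance back to its rooting reference.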

While nothing I have covered is strictly speaking a bug, all of the points are definitely gotchas that you can easily be caught by without realizing it. I should know, because I see them all again and again in the projects that I work on.

Ultimately, the two things I recommend you do to avoid memory leaks in the future are:

1. Learn all you can about .NET memory management and how your code impacts it.

2. Get used to routinely using a memory profiler and interpreting its results to trace issues such as the many potential flavors of left-behind strong references.

© Simple-Talk.com


Gadgeteer
Published Friday, August 12, 2011 2:16 AM

Microsoft Research, from Cambridge, is about to release what has the potential to become one of the most fun ways of programming in .NET you could think of. It is called 'Gadgeteer' (well, officially Microsoft .NET Gadgeteer) and is based on the Open Source .NET Micro Framework.

The intent is to allow embedded and handheld electronic devices to be iteratively designed, built and programmed in a matter of hours. Of course, you have to buy the hardware from third parties. First out of the blocks is GHI Electronics who, from the end of September, will offer their Fez Spider Starter Kit. It comprises the .NET Micro Framework mainboard, plus a range of twenty to thirty gadgets, including such things as cameras, SD card readers, WiFi, Ethernet modules, touch-screen LCDs, switches, potentiometers, joysticks, and power supplies.

The mainboard, a 72MHz ARM system-on-a-chip with 14 expansion sockets, is the centerpiece (and costs $120). All the modules plug in to the mainboard via ribbon cables, and you can create complex gadgets without doing any soldering. Then, you use all your .NET skills, programming the logic in C#, using the .NET Micro Framework in Visual Studio. The framework supplies pretty advanced IntelliSense that prompts you with all possible options at any point in the programming process, and so allows you to get started without having to stick your nose into manuals.

Evidently, the Gadgeteer project evolved as a result of the frustration felt by the Sensors and Devices Group, led by Steve Hodges at Microsoft Research Cambridge, at the slow pace of prototyping electronic devices such as their SenseCam. The idea came to them to produce object-oriented hardware to match Microsoft's existing .NET Micro Framework. The results of this combination of technologies have been startling.

Although the aims of the .NET Gadgeteer include some serious design-work for electronic devices such as bar-code readers or monitoring systems, I can see this combination of lightweight framework and standard hardware providing a great deal of educational amusement for .NET programmers. Already, a miniature working games arcade console has been made, with the source code available, but there is a huge potential for recreational computing. At the moment, the range of modules isn't really enough to consider even the simplest robotics, but hopefully soon a hardware manufacturer will come out with a suitable kit. In the meantime, there are some interesting designs out there, such as a rig for doing single-frame animation work, that are sure to keep me amused!

by Laila


What Counts For A DBA: Blindness
Published Friday, August 12, 2011 9:17 AM

Anyone who remembers the rock opera Tommy might have guessed that this blog will describe my forthcoming rock opera about a coder with hysterical blindness who becomes a Relational Wizard. Watching him, I jealously start singing…

“Ever since I was a newbie
I wrote code like a storm
My databases rendered
In the fifth normal form
But I ain’t seen anything like him
On any IT team
That deft DBA can
Code T-SQL up a storm.”

That guess would be wrong. Rather, this blog is going to be about avoiding the urge to judge a book by its cover, by being ‘blind’ to all but what is important. It started back at SQL Saturday #50 in East Iowa. They had a ‘Women in Technology’ panel in the main lunch room, and some of what I heard sort of bothered me. I heard it again at SQL Saturday #45 in Louisville, and finally, sitting at a table with Erin Stellato and Jes Borland in Columbus, the topic of women in technology was discussed rather vigorously, and I promised to write about it (I really have to learn when to shut up).

Now, if you are particularly sensitive, you probably heard that women in technology bother me. Well, they do, but no more than men in technology. In fact, I am annoyed by all sexes equally. The point of this blog is that I don't like irrelevant labels placed on any groups of people. If we were blind to all irrelevant factors when dealing with other technology people, life would be much simpler. In my mind, attaching a label to a group (like "Women in Technology") can unintentionally marginalize all members of that group. Do they really belong in a group that requires special pleading or positive discrimination? Are Kim T, Kalen D, and Jessica M (to name just a few!) top in the field for any reason other than skill and hard work? Of course not. They are smarter and certainly work way harder than me (and most of you who are reading this blog).

Throughout history, attempts to compensate for prejudice have resulted in a perception of forced equality, leading to jealousy, distrust and, worse yet, quotas, meaning good people are left out and less qualified people aren't. This then reinforces stereotypes, making matters worse. The fact is that I just want to have code that works, and to deal with fewer stupid questions than I would get from someone who is less talented than the right person for the job.

The second, not exactly politically correct, term I have bandied about is "judgmental". Is it okay to be judgmental? Of course. As DBAs and programmers, we ought to do a lot of code reviews. Judging the work of others is necessary. The problems happen if one allows one’s judgment of a person’s professional competence to be skewed by the shape or color of their body. You don't judge a book by its cover but by the quality of the words (or pictures!) inside. In my perfect blog world, people choose employees based on the following two criteria:

1. Does this person have the skills and experience needed?

2. Will this person reasonably support the purpose of the organization?

The first one is obvious. If you need a programmer, and the person can't work a calculator, much less comprehend the concept of binary numbers, the fit is clearly going to be bad. The second is a lot more complex and controversial. I work for a non-profit organization, and we certainly can discriminate based on the bedrock beliefs of the organization. But should Pepsi hire you if you are an avid Coke drinker who owns the website www.PepsiTastesAwful.com (not a real website), which professes that Pepsi is made from rusty sewer pipe drippings? (To be fair, I like them both, but one had to be the patsy.) Of course not. However, most times I have seen the concept of the "wrong fit for the job" applied, it has just been that the person seemed "different," which has usually meant they were quirky (and aren't the best programmers a bit quirky, at least?). The obvious problem is that it is just far too easy to mask sexism, racism, or really any sort of -ism with the concept of fit.


I have perhaps veered a bit off the topic of Women in Technology groups, so let's steer back there. I don't want to make it seem as if I thought for a moment that ‘Women in Technology’ groups are evil and bent on world domination. The times I have attended their luncheons, the focus was on how we make it more socially acceptable to get younger girls into more technology-oriented career paths. Excellent. But don't forget about the young boys. Technology has long been considered unacceptable by the cool cliques, and while that is changing slowly, it isn't changing fast enough. We continue to have a dearth of qualified people out there writing code and designing databases.

My solution would be to elevate technology in the classroom to one of the fundamentals, to the level of reading, writing, and arithmetic. It might not be popular with many students, but fundamental education is not designed to be popular; it is there to give you foundational knowledge to build upon. The most helpful class I took in high school was one my father forced me to take, but it turned out to be one of the most influential in my future: typing. Twenty years ago, typing was not a common class that boys took (nor twenty-six years ago, when I actually took it!); but my father, who was a mechanic at the time, wanted to prepare me for the future that he saw coming. All day at work, and even as I sit here typing this blog, I use my typing skills. That kind of parental "encouragement" to build fundamental skills for the future is probably the most necessary step, and one that cannot be legislated but can start with you and your child/niece/nephew/neighbor. It is still a problem in this day and age that parents ingrain in their children an attitude that technology work is for people who can't do "real" work. Showing them how great your job is and being an encourager will certainly help. In the end, a person’s career choice should be the intersection of what they have the skills to do and what they like to do.

The goal should be that we work to get every young person of all types involved with technology early, and not just for playing Angry Birds and texting random body parts to their classmates. Then we won't just be helping women into technology, but men too. Let’s face it, if you are a highly qualified technologist, you should be excited about the concept of getting more qualified people in technology regardless of who they are. Of course, if you are not qualified, well... I can see how you might be opposed.

by drsql