Archive for the ‘Sql Server Database’ Category

Tempdb growing unexpectedly with recovery model Simple

Short-term fix: Restarting SQL Server will re-create tempdb from scratch, and it will return to its usual allocated size. In and of itself, this is only a very short-term fix; the application and/or T-SQL code that caused tempdb to grow once will likely cause it to grow again. To shrink tempdb, you can consider using DBCC SHRINKDATABASE, DBCC SHRINKFILE (for the data or the log file), or ALTER DATABASE. See KB #256650, KB #272318 and KB #307487 for more information. If you can’t shrink the log, it might be due to an uncommitted transaction. See if you have any long-running transactions with the following command:
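The command this paragraph refers to did not survive; given the description of the output that follows (and KB #307487), it is almost certainly DBCC OPENTRAN, which reports the oldest active transaction in a database:

```sql
-- Report the oldest active transaction in tempdb
DBCC OPENTRAN ('tempdb')
```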


Check the oldest transaction (if any is returned) and note the SPID (there will be a line starting with ‘SPID (Server Process ID) : <number>’). Use that <number> in the following:
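The query that belongs here is also missing; since the goal is to see the last command executed by that session, a likely candidate is DBCC INPUTBUFFER:

```sql
-- Show the last statement sent by the session; replace <number> with the SPID found above
DBCC INPUTBUFFER (<number>)
```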


This will tell you at least a portion of the last SQL command executed by this SPID, and will help you determine if you want to end this process with:
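The command itself is missing from the post; ending a session by SPID is done with KILL:

```sql
-- Terminate the session; replace <number> with the SPID identified above
KILL <number>
```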

Categories: TempDb Problems

Troubleshoot SQL Server Performance Bottlenecks


Monitor Recent CPU Utilization History Trends:

DECLARE @ts_now bigint
SELECT @ts_now = cpu_ticks / (cpu_ticks / ms_ticks) FROM sys.dm_os_sys_info
SELECT TOP 2000 record_id,
DATEADD (ms, -1 * (@ts_now - [timestamp]), GETDATE()) AS EventTime,
CASE WHEN system_cpu_utilization_post_sp2 IS NOT NULL THEN system_cpu_utilization_post_sp2 ELSE system_cpu_utilization_pre_sp2 END AS system_cpu_utilization,
CASE WHEN sql_cpu_utilization_post_sp2 IS NOT NULL THEN sql_cpu_utilization_post_sp2 ELSE sql_cpu_utilization_pre_sp2 END AS sql_cpu_utilization
FROM (
    SELECT
        record.value('(Record/@id)[1]', 'int') AS record_id,
        [timestamp],
        100 - record.value('(Record/SchedulerMonitorEvent/SystemHealth/SystemIdle)[1]', 'int') AS system_cpu_utilization_post_sp2,
        record.value('(Record/SchedulerMonitorEvent/SystemHealth/ProcessUtilization)[1]', 'int') AS sql_cpu_utilization_post_sp2,
        -- the "Schedluer" spelling below is intentional: pre-SP2 builds misspelled the node name
        100 - record.value('(Record/SchedluerMonitorEvent/SystemHealth/SystemIdle)[1]', 'int') AS system_cpu_utilization_pre_sp2,
        record.value('(Record/SchedluerMonitorEvent/SystemHealth/ProcessUtilization)[1]', 'int') AS sql_cpu_utilization_pre_sp2
    FROM (
        SELECT timestamp, CONVERT (xml, record) AS record
        FROM sys.dm_os_ring_buffers
        WHERE ring_buffer_type = 'RING_BUFFER_SCHEDULER_MONITOR'
        AND record LIKE '%<SystemHealth>%'
    ) AS t
) AS t
ORDER BY record_id DESC

Monitor Expensive Queries:

SELECT
    LEFT(p.cacheobjtype + ' (' + p.objtype + ')', 35) AS cacheobjtype,
    p.size_in_bytes / 1024 AS size_in_kb,
    stat.total_worker_time / 1000 AS tot_cpu_ms,
    stat.total_elapsed_time / 1000 AS tot_duration_ms,
    LEFT(CASE WHEN pa.value = 32767 THEN 'ResourceDb'
              ELSE ISNULL (DB_NAME (CONVERT (sysname, pa.value)), CONVERT (sysname, pa.value))
         END, 40) AS dbname,
    CASE WHEN sql.objectid IS NULL THEN NULL
         ELSE REPLACE (REPLACE (sql.[text], CHAR(13), ' '), CHAR(10), '')
    END AS procname,
    REPLACE(REPLACE(SUBSTRING (sql.[text], stat.statement_start_offset/2 + 1,
        CASE WHEN stat.statement_end_offset = -1 THEN LEN (CONVERT (nvarchar(max), sql.[text]))
             ELSE stat.statement_end_offset / 2 - stat.statement_start_offset / 2 + 1 END),
        CHAR(13), ' '), CHAR(10), ' ') AS stmt_text,
    qplan.query_plan
FROM sys.dm_exec_cached_plans p
OUTER APPLY sys.dm_exec_plan_attributes (p.plan_handle) pa
INNER JOIN sys.dm_exec_query_stats stat ON p.plan_handle = stat.plan_handle
OUTER APPLY sys.dm_exec_sql_text (p.plan_handle) AS sql
OUTER APPLY sys.dm_exec_query_plan (p.plan_handle) AS qplan
WHERE pa.attribute = 'dbid'
ORDER BY tot_cpu_ms DESC

SELECT * FROM sys.dm_exec_requests ORDER BY cpu_time DESC

SELECT highest_cpu_queries.plan_handle, highest_cpu_queries.total_worker_time, q.[text]
FROM (SELECT TOP 50
          qs.plan_handle, qs.total_worker_time
      FROM sys.dm_exec_query_stats qs
      ORDER BY qs.total_worker_time DESC) AS highest_cpu_queries
CROSS APPLY sys.dm_exec_sql_text (plan_handle) AS q
ORDER BY highest_cpu_queries.total_worker_time DESC



SQL Server: Controlling the growth of msdb database caused by Database Mail


This is a common issue for those of us who use Database Mail to send bulk mail on a regular basis. We have a database server which is used specifically for sending weekly newsletters to all the subscribed users. The number of subscriptions increased over time, and so did the number of newsletters. One day the disk drive holding the system databases started alerting on low disk space. On investigation, I found that the msdb database had grown to 7 GB in size!

As we know, msdb maintains various types of history information: backup history, job history, sent mail and attachment history, etc. The question is how to find the table(s) holding the most data. I found the following query on the Net, which helped me list the tables in descending order of space consumed on disk.

(As the above issue had already been fixed, I cannot show you the 7 GB msdb. For this demo I am running the query on a server with an msdb that has grown to 1.5 GB.)

use msdb
go
sp_helpdb msdb

msdb database report
--Query 1 - Checking the size of tables in a database
select
	object_name(i.object_id) as ObjectName,
	i.[name] as IndexName,
	sum(a.total_pages) as TotalPages,
	sum(a.used_pages) as UsedPages,
	sum(a.data_pages) as DataPages,
	(sum(a.total_pages) * 8) / 1024 as TotalSpaceMB,
	(sum(a.used_pages) * 8) / 1024 as UsedSpaceMB,
	(sum(a.data_pages) * 8) / 1024 as DataSpaceMB
from
	sys.indexes i
	inner join sys.partitions p
		on i.object_id = p.object_id and i.index_id = p.index_id
	inner join sys.allocation_units a
		on p.partition_id = a.container_id
group by
	i.object_id, i.[name]
order by
	sum(a.total_pages) desc,
	object_name(i.object_id)
Checking the size of tables in a database

There are also other scripts to get the audit job done (finding the number of rows, the number of columns, and the byte size of each table in the current database – in other words, finding the biggest table in the database):

USE DatabaseName
CREATE TABLE #temp (
	table_name sysname,
	row_count INT,
	reserved_size VARCHAR(50),
	data_size VARCHAR(50),
	index_size VARCHAR(50),
	unused_size VARCHAR(50))
SET NOCOUNT ON
INSERT #temp
EXEC sp_msforeachtable 'sp_spaceused ''?'''
SELECT a.table_name,
	a.row_count,
	COUNT(*) AS col_count,
	a.data_size
FROM #temp a
INNER JOIN information_schema.columns b
	ON a.table_name collate database_default
	= b.table_name collate database_default
GROUP BY a.table_name, a.row_count, a.data_size
ORDER BY CAST(REPLACE(a.data_size, ' KB', '') AS integer) DESC
DROP TABLE #temp



We can see that the sysmail_mailitems table is occupying most of the space on the disk, around 1.3 GB spanning 171812 pages. This table stores the database mail messages that are sent, unsent, retrying, and failed. Microsoft recommends maintaining this table by periodically deleting messages according to the organization’s document retention policy. The sysmail_delete_mailitems_sp procedure in the msdb database permanently deletes e-mail messages from this table. The syntax of the procedure is shown below:

sysmail_delete_mailitems_sp [ [ @sent_before = ] 'sent_before' ]
    [ , [ @sent_status = ] 'sent_status' ]

The @sent_before option deletes e-mails sent before the specified date, and the @sent_status option deletes e-mails with a particular status, i.e. sent, unsent, retrying, or failed.
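As a quick sketch of the @sent_status option, deleting only the messages that failed to send would look like this:

```sql
-- Permanently delete only the failed messages from msdb
EXECUTE msdb.dbo.sysmail_delete_mailitems_sp @sent_status = 'failed';
```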

As deleting a huge number of records in one shot takes a good amount of time, we can purge the data in small batches. The query below identifies how far back the data in the table goes, which is especially useful when the data has piled up over a long period of time.
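Purging "in small batches" can be sketched as a loop that advances the @sent_before cutoff one month at a time, so each call deletes a month's worth of mail rather than everything at once (the date range here is illustrative, not from the original post):

```sql
-- Hypothetical sketch: purge one month per call instead of in one shot
DECLARE @cutoff datetime;
SET @cutoff = '2011-02-01';
WHILE @cutoff <= '2011-03-01'
BEGIN
    EXECUTE msdb.dbo.sysmail_delete_mailitems_sp @sent_before = @cutoff;
    SET @cutoff = DATEADD(mm, 1, @cutoff);
END
```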

--Query 2 - Checking the records by year, month
select
	datepart(yy, sent_date) [Year],
	datepart(mm, sent_date) [Month],
	count(*) 'NoOfRows'
from
	msdb.dbo.sysmail_mailitems
group by
	datepart(yy, sent_date),
	datepart(mm, sent_date)
order by
	[Year], [Month]

Checking the records by year and month

The table has data for two months. Let’s keep the data for March and delete the February data. The following query will do the job:

--Query 3 - Purging the data before 1st March 2011
DECLARE @DelDate nvarchar(20) ;
SET @DelDate = '03/01/2011'
EXECUTE msdb.dbo.sysmail_delete_mailitems_sp @sent_before = @DelDate;

Purging the data

Now run Query 1 and check the output. The total table space, which was 1372 MB, has come down to 623 MB, and Total Pages has dropped from 171812 to 79836.

Checking the size of tables in a database

Let’s check the database size using:

sp_helpdb msdb

msdb database reporting

No luck???!!! In fact the log size has increased. Don’t worry: now we need to truncate the log and shrink the database so that the free space is released to the operating system. Run the following code and recheck the database size.

--Query 4 - Truncate the log and shrink the database 

backup log msdb with truncate_only

dbcc shrinkdatabase ('msdb')

sp_helpdb msdb

msdb database report

It worked!

SQL Server free tools

sp_BLITZ – SQL Server Takeover Script

MS Best Practices Analyzer (2005 & 2008 versions)

SQL Everywhere
SQL Server IDE

Atlantis SQL Everywhere is a free SQL Server IDE (Integrated Development Environment) which supports all versions of Microsoft SQL Server from 2000 to 2008 and comes with the most fully-featured and accurate SQL Intellisense® engine available – use it to turbocharge your productivity.


Query Plan Analyser Free
SQL Sentry Plan Explorer v1.3

SQL Search – Redgate

Search your database quickly
SQL Server developers and DBAs use SQL Search to:

Find fragments of SQL text within stored procedures, functions, views and more
Quickly navigate to objects wherever they happen to be on a server
Find all references to an object


Basics of a query hint

Sometimes temp tables can be your best friend


Here is a quick example of when using temporary tables will give you a big performance gain. I was recently given this query to improve, as it was taking over 15 hours to run:

select *
from (
	select *
	from GL_Star_FactsDetails GL
	LEFT JOIN DimAccount ACCT on GL.Account = ACCT.AccountRapidDecision
	where GL.LedgerTypeCode = 'AA'
) FD
INNER JOIN vwDimBusinessUnit BU ON FD.BusinessUnitCode = BU.BusinessUnitCode
INNER JOIN DimCurrencyExchangeRates EX ON BU.CurrencyCode = EX.CurrencyCode AND FD.DateKey = EX.DateKey

GL_Star_FactsDetails has 16 million records, DimAccount has 5,000 records, DimCurrencyExchangeRates has 70,000 records, and vwDimBusinessUnit is a complex view (it has a UNION ALL and multiple joins) that returns 10,000 records.

These are not big tables, so why is this query taking so long? A simple look at the query execution plan tells the story: it has 80 operations! It is simply a matter of the query optimizer having too many tables and too many joins to come up with an efficient query plan. So, use temp tables to simplify things for the query optimizer:

--substitute a temp table for the view
select *
into #TempDimBusinessUnit
from vwDimBusinessUnit

--substitute a temp table for the sub-query
select *
into #TempFactDetails
from GL_Star_FactsDetails GL
LEFT JOIN DimAccount ACCT on GL.Account = ACCT.AccountRapidDecision
where GL.LedgerTypeCode = 'AA'

--and use the two temp tables in the main query
select *
into #temp
from #TempFactDetails FD
INNER JOIN #TempDimBusinessUnit BU ON FD.BusinessUnitCode = BU.BusinessUnitCode
INNER JOIN DimCurrencyExchangeRates EX ON BU.CurrencyCode = EX.CurrencyCode AND FD.DateKey = EX.DateKey

select * from #temp

This rewrite resulted in a much simpler query plan, with the query taking only 12 minutes. While there are some negatives to using temp tables, in this case the benefits make it very worthwhile.

Management Data Warehouse/ Performance Dashboards


In SQL Server 2008, Microsoft introduced the “Management Data Warehouse” (MDW). This blog post should give you a good idea of what to expect from this new feature. Note: All tests and screenshots for this post were done in SQL Server 2008 February CTP (CTP6).

Before starting MDW configuration, please make sure that SQL Server Agent is running. You can configure the Management Data Warehouse in SQL Server Management Studio by expanding the Management folder, right-clicking the Data Collection folder, and choosing “Configure Management Data Warehouse”. This starts a wizard that asks for the MDW database connection information (server, database and authentication). As the MDW database puts some overhead on the server, in a production environment you will probably want to host this database on a separate server. After the wizard completes, you should see green arrows on the Data Collection items:


MDW in SQL Server Management Studio

SQL Server 2008 CTP6 shipped with 3 data collection sets. These sets run at a set frequency and collect data into the MDW database. To give you an idea: the “Disk Usage” collection set runs every 5 seconds, the “Query Statistics” collection set runs every 10 seconds and the “Server Activity” collection set runs every 60 seconds. Of course, you can easily change the frequency of each collection set in its properties.
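Changing a frequency does not require the Properties dialog; a hedged sketch using the msdb collector procedures (the item name below is an assumption, so list the real names first):

```sql
-- List collection items and their current collection frequencies (in seconds)
SELECT collection_item_id, name, frequency
FROM msdb.dbo.syscollector_collection_items;

-- Then update one of them; the item name here is hypothetical
EXEC msdb.dbo.sp_syscollector_update_collection_item
    @name = N'Query Statistics - Query Activity',
    @frequency = 30;
```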

So what are the results of these collections?

Disk usage

To see the disk usage report in SQL Server Management Studio, select the “Disk Usage” collection set, then right-click and choose “Reports” -> “Disk Usage Summary”. Here is the report from my virtual PC:

DMW Disk usage report

This report shows database and log size growth trends, and if you select a database, you can see even more information about it:

MDW Disk summary report - DB detail

Query Statistics

Query Statistics allows you to see reports about query executions. You can choose the time interval for your report and then see the top queries by CPU, Duration, Total I/O, Physical Reads or Logical Writes. Here is an example of the top queries by duration:

Query statistics - report1

And here is an example of the report you get when you select a specific query:

Query statistics - report2

Server Activity

The Server Activity report lets you choose the time interval you are interested in and then shows a nice activity report for that interval. This report gives you information about SQL Server and/or system CPU usage, memory usage, disk I/O, network usage, SQL Server waits and other activity:

MDW Server Activity report

You can actually click on a graph and get more information about what happened on the server at that time. I clicked on a spike in the “Disk I/O Usage” graph and got the following report:

MDW - Server activity report 2

Behind the scene

It does not look like SQL Server 2008 February CTP6 has an interface for creating new data collection sets. But BOL has a list of stored procedures that allow you to do just that. There is very little description of how to use these stored procedures, but I found that the best way to understand how to create a new data collection set is by scripting an existing one. When you right-click a collection set, you have the option to “Script data collection set as” -> “Create”.
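To give an idea of what those BOL procedures look like in use, here is a hedged sketch of creating a minimal custom collection set with one "Generic T-SQL Query" item. The set name, item name, schedule name, query, output table, and the shape of the parameters XML are all assumptions patterned on the BOL examples, not taken from the CTP6 scripts:

```sql
DECLARE @set_id int, @set_uid uniqueidentifier, @item_id int, @type_uid uniqueidentifier;

-- Create the collection set (cached collection mode = 1)
EXEC msdb.dbo.sp_syscollector_create_collection_set
    @name = N'My Custom Collection Set',              -- hypothetical name
    @collection_mode = 1,
    @days_until_expiration = 14,
    @schedule_name = N'CollectorSchedule_Every_6h',   -- one of the built-in schedules
    @collection_set_id = @set_id OUTPUT,
    @collection_set_uid = @set_uid OUTPUT;

-- Look up the collector type from the table mentioned below
SELECT @type_uid = collector_type_uid
FROM msdb.dbo.syscollector_collector_types
WHERE name = N'Generic T-SQL Query Collector Type';

-- Add a T-SQL query item; the query and output table are placeholders
EXEC msdb.dbo.sp_syscollector_create_collection_item
    @collection_set_id = @set_id,
    @collector_type_uid = @type_uid,
    @name = N'My wait stats item',
    @frequency = 300,
    @parameters = N'<ns:TSQLQueryCollector xmlns:ns="DataCollectorType">
        <Query><Value>SELECT * FROM sys.dm_os_wait_stats</Value>
        <OutputTable>my_wait_stats</OutputTable></Query>
      </ns:TSQLQueryCollector>',
    @collection_item_id = @item_id OUTPUT;
```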

Here are the create scripts for each data collection set:

I analyzed these scripts and found that there is a table, msdb.dbo.syscollector_collector_types, that contains the list of available collector types. February CTP6 shipped with 4 collector types:

  • Generic T-SQL Query Collector Type
  • Generic SQL Trace Collector Type
  • Query Activity Collector Type
  • Performance Counters Collector Type

Each of these collectors is associated with 2 SSIS packages: a collection package and an upload package. I found these packages in C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\Install. For example, this folder contains the packages TSQLQueryCollect.dtsx and TSQLQueryUpload.dtsx. I opened and tried to analyze these packages, but at this point that was just too much for me.

Why did I start looking behind the scenes? Because I wanted to make sure that it is possible to create new collection sets and add new collector types. And now I am very sure that we can. At this point I strongly believe that what we see in CTP6 is just the beginning, and in the not so distant future we will see Microsoft or third parties releasing new collectors and collection sets. Microsoft has finally done a good job of setting up a framework for SQL Server monitoring software.

Analysis Services and MDW

SQL Server 2008 February CTP does not have any collectors or collection sets that gather information about Analysis Services. But it is just a matter of time before Microsoft or a third party creates them; there is definitely demand for Analysis Services monitoring software, and MDW clearly allows adding new collectors.

In the near future I am planning to test:

  • Can I use the “Generic T-SQL Query Collector Type” to execute an OpenQuery or linked server query against the Analysis Services 2008 DMVs and get the same SSAS performance data that way?
  • Can I use the “Generic SQL Trace Collector Type” to trace Analysis Services events?
  • Can I use the “Performance Counters Collector Type” to get Analysis Services-related performance data? At this point I believe I am most likely to succeed with this item.
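The first test above could be sketched like this, assuming a linked server to the SSAS instance (called SSAS_LINK here purely as a placeholder) and one of the SSAS 2008 DMVs, such as $SYSTEM.DISCOVER_CONNECTIONS:

```sql
-- Hypothetical: pull SSAS DMV data over a linked server with OPENQUERY
SELECT *
FROM OPENQUERY(SSAS_LINK, 'SELECT * FROM $SYSTEM.DISCOVER_CONNECTIONS');
```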

I will post my findings in my blog.

Note: For this post I used Snagit to take screenshots, and it was so much easier than my old way: capturing the screen (ALT+Prt Scr), cutting out part of the screen in Microsoft Paint, and then resizing. I am very happy that I got this software.