Hi guys!
Today we will discuss a hot topic: how to handle 100% CPU utilization. DBAs face this situation often in their day-to-day work. In a production environment it usually has to be resolved quickly, because it hampers business activity in terms of transactions and money.
The methods we are about to discuss help only if SQL Server is the culprit. You first need to confirm that SQL Server is the process consuming the CPU; only then should you apply these methods, otherwise they will not help you.
I shall cover two methods. They are not the same and are not used at the same time; their aim is the same, to bring CPU usage down, but they apply to different scenarios.
Before we begin, I want to address a myth: it is usually assumed that long-running queries are the problem, but that is not always true. Yes, they can cause trouble, but small concurrent queries with a high total worker time or a high execution count can also be the problem. For example, a query that completes in under a second is not a problem on its own, but the same query executed 100,000 times concurrently (at the same time) can cause an issue.
In my experience, SELECT queries are most often the culprit that drives SQL Server to consume 100% of the CPU. You can use Task Manager or Resource Monitor to check the CPU usage.
Method 1:
This is the traditional approach, used by most DBAs. Whenever this situation arises, first check for the most expensive processes running on the server. Execute the procedure sp_who2 repeatedly and watch which SPID's CPU time keeps climbing, then identify what is running in that session with DBCC INPUTBUFFER(<spid>). If it is a SELECT query you can kill it, but you should not kill transactions or queries that INSERT, UPDATE, or DELETE data.
sp_who2                     -- run repeatedly; watch for a SPID whose CPUTime keeps growing
DBCC INPUTBUFFER(<spid>)    -- see the last statement submitted on that session
KILL <spid>                 -- terminate the session (SELECTs only, never open write transactions)
Note: look only at SPIDs greater than 50, because SPIDs of 50 and below are used by SQL Server for its internal processes.
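If you prefer a single query instead of watching sp_who2 by hand, a rough sketch like the one below does the same job; it is not part of the original workflow, and the column list is only a suggestion.
-- Sketch: currently executing user requests, ordered by CPU time consumed.
-- The session_id > 50 filter mirrors the note above.
SELECT r.session_id,
       r.cpu_time,      -- CPU time in milliseconds
       r.status,
       r.command,
       t.text AS query_text
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.session_id > 50
ORDER BY r.cpu_time DESC;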
Method 2:
This is a more granular approach that optimizes the query itself. Here you need to use a few DMVs. SQL Server increments execution_count every time the same query runs; if the definition of the query changes, a new plan is created. So you need to find the queries with the highest execution_count and the highest total_worker_time. When you find such a record, copy its plan_handle and pass it to another DMV, sys.dm_exec_query_plan, to get the query plan.
Select * from sys.dm_exec_query_stats order by execution_count desc    -- most frequently executed queries
Select * from sys.dm_exec_query_stats order by total_worker_time desc  -- queries with the highest total CPU time
Select * from sys.dm_exec_query_plan(<plan_handle>)                    -- plan for the plan_handle found above
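If you want to skip copying the plan_handle by hand, the two steps can be combined in one query. This is only a sketch; the TOP (10) and the column list are my illustration, not from the original post.
-- Sketch: top CPU-consuming cached queries with their text and plan in one go.
SELECT TOP (10)
       qs.execution_count,
       qs.total_worker_time,                              -- total CPU time (microseconds) across all executions
       qs.total_worker_time / qs.execution_count AS avg_worker_time,
       st.text AS query_text,
       qp.query_plan                                      -- XML plan; open it in SSMS for the graphical view
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) AS qp
ORDER BY qs.total_worker_time DESC;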
So from the query above we get the execution plan; from there we can open the XML view of the plan and find the parameters of the query. Once you have the offending queries, you can apply the SARGable rules, which means writing predicates so that the optimizer can use indexes to satisfy them.
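As a quick illustration of a SARGable rewrite (the table and column names here are made up for the example):
-- Non-SARGable: the function wrapped around the column prevents an index seek on OrderDate.
SELECT OrderID FROM dbo.Orders
WHERE YEAR(OrderDate) = 2023;

-- SARGable rewrite: the column is left untouched, so an index on OrderDate can be used.
SELECT OrderID FROM dbo.Orders
WHERE OrderDate >= '20230101' AND OrderDate < '20240101';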
This is how you can trace costly queries and handle 100% CPU utilization.