Splitting Data Query
DECLARE @Str VARCHAR(100) = 'Hi how are you doing? I am good.'
DECLARE @Dev VARCHAR(1) = ' '
SELECT Split.a.value('.', 'VARCHAR(100)') AS E
FROM (
    SELECT CAST('<M> ' + REPLACE(REPLACE(@Str, @Dev, @Dev + '</M><M>'), '.', '') + ' </M>' AS XML) AS yo
) a
CROSS APPLY yo.nodes('/M') AS Split(a)
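On SQL Server 2016 and later, the built-in STRING_SPLIT function gives the same kind of result without the XML trick; a minimal sketch, assuming the same sample sentence:
DECLARE @Str VARCHAR(100) = 'Hi how are you doing? I am good.'
-- STRING_SPLIT (SQL Server 2016+) returns one row per delimited value
SELECT value AS E
FROM STRING_SPLIT(REPLACE(@Str, '.', ''), ' ')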
Encrypt And Decrypt Data Using Certificate In SQL Server
CREATE DATABASE Dbavimal
GO
USE Dbavimal
GO
--Create Master Key
CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'DBAVimal!';
GO
--Create Certificate
CREATE CERTIFICATE [EncryptionCert] WITH SUBJECT = 'DBAEncryption'
GO
--Create Symmetric Key
CREATE SYMMETRIC KEY SymmetricDBAVimalKey
WITH ALGORITHM = AES_192
ENCRYPTION BY CERTIFICATE [EncryptionCert]
GO
--Open the Symmetric Key for use in this session
OPEN SYMMETRIC KEY SymmetricDBAVimalKey DECRYPTION BY CERTIFICATE [EncryptionCert]
GO
-----------------------------
CREATE FUNCTION [dbo].[fn_Encrypt] (@Data VARCHAR(MAX))
RETURNS VARBINARY(256)
AS
BEGIN
    DECLARE @Result VARBINARY(256)
    SET @Result = EncryptByKey(Key_GUID('SymmetricDBAVimalKey'), @Data)
    RETURN @Result
END
GO
-------------------------------
--Note: the input must be VARBINARY (the value EncryptByKey produced)
CREATE FUNCTION [dbo].[fn_Decrypt] (@Data VARBINARY(256))
RETURNS VARCHAR(MAX)
AS
BEGIN
    DECLARE @Result VARCHAR(MAX)
    SET @Result = DecryptByKey(@Data)
    RETURN @Result
END
GO
-----------------------------------
--Test the result (the same way you can store your data in tables)
DECLARE @Str VARCHAR(500) = 'Hello'
PRINT '====Original Data============'
PRINT @Str
PRINT '============================='
DECLARE @EncryptedData VARBINARY(256)
DECLARE @DecryptedData VARCHAR(256)
SET @EncryptedData = [dbo].[fn_Encrypt](@Str)
PRINT '====Encrypted Data==========='
PRINT CONVERT(VARCHAR(MAX), @EncryptedData, 1) --style 1 displays the binary as hex
PRINT '============================='
SET @DecryptedData = [dbo].[fn_Decrypt](@EncryptedData)
PRINT '====Data After Decryption===='
PRINT @DecryptedData
PRINT '============================='
USE master
GO
DROP DATABASE Dbavimal
SQL Server T-Sql Performance Tuning Tips : Sargable
In a WHERE clause, the various operators used directly affect
how fast a query is run. This is because some operators lend
themselves to speed over other operators. Of course, you may not have any choice
of which operator you use in your WHERE clauses, but sometimes you do.
Here are the key operators used in the WHERE clause, ordered by
their performance. Those operators at the top will produce results faster than
those listed at the bottom.
- =
- >, >=, <, <=
- LIKE
- <>
The type of operand used in the comparison also affects performance. Here are the key operands, again ordered from fastest to slowest:
- A single literal used by itself on one side of an operator
- A single column name used by itself on one side of an operator; a single parameter used by itself on one side of an operator
- A multi-operand expression on one side of an operator
- A single exact number on one side of an operator
- Other numeric values (other than exact), date and time
- Character data, NULLs
The simpler the operand, and the more exact the number, the better the overall performance.
Try to avoid WHERE clauses that are non-sargable. The
term "sargable" (which is in effect a made-up word) comes from the
pseudo-acronym "SARG", which stands for "Search ARGument,"
which refers to a WHERE clause that compares a column to a constant value. If a
WHERE clause is sargable, this means that it can take advantage of an index
(assuming one is available) to speed completion of the query. If a WHERE clause
is non-sargable, this means that the WHERE clause (or at least part of it)
cannot take advantage of an index, instead performing a table/index scan, which
may cause the query's performance to suffer.
Non-sargable search arguments in the WHERE clause, such as
"IS NULL", "<>", "!=", "!>",
"!<", "NOT", "NOT EXISTS", "NOT IN",
"NOT LIKE", and "LIKE '%500'" generally
prevents (but not always) the query optimizer from using an index to perform a
search. In addition, expressions that include a function on a column,
expressions that have the same column on both sides of the operator, or
comparisons against a column (not a constant), are not sargable.
But not every WHERE clause that has a non-sargable expression in
it is doomed to a table/index scan. If the WHERE clause includes both sargable
and non-sargable clauses, then at least the sargable clauses can use an index
(if one exists) to help access the data quickly.
In many cases, if there is a covering
index on the table, which includes all of the columns in the SELECT,
JOIN, and WHERE clauses in a query, then the covering index can be used instead
of a table/index scan to return a query's data, even if it has a non-sargable
WHERE clause. But keep in mind that covering indexes have their own drawbacks,
such as producing very wide indexes that increase disk I/O when they are read.
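For illustration, here is how a covering index might be declared for the members query used later in this article (the index name and column choice are assumptions, not a recommendation for your schema):
-- Key column serves the WHERE clause; INCLUDE carries the SELECT-list
-- columns in the leaf level only, keeping the index keys narrow
CREATE NONCLUSTERED INDEX IX_members_dateofbirth
ON members (dateofbirth)
INCLUDE (member_number, first_name, last_name)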
In some cases, it may be possible to rewrite a non-sargable
WHERE clause into one that is sargable. For example, the clause:
WHERE SUBSTRING(firstname, 1, 1) = 'm'
can be rewritten like this:
WHERE firstname LIKE 'm%'
Both of these WHERE clauses produce the same result, but the first one is
non-sargable (it uses a function on the column) and will run slowly, while the
second one is sargable and will run much faster.
WHERE clauses that perform some function on a column are
non-sargable. On the other hand, if you can rewrite the WHERE clause so that
the column and function are separate, then the query can use an available
index, greatly boosting performance. For example:
Function Acts Directly on Column, and Index Cannot Be Used:
SELECT member_number, first_name, last_name
FROM members
WHERE DATEDIFF(yy, dateofbirth, GETDATE()) > 21
Function Has Been Separated From Column, and an Index Can Be Used:
SELECT member_number, first_name, last_name
FROM members
WHERE dateofbirth < DATEADD(yy, -21, GETDATE())
Each of the above queries produces the same results, but the
second query will use an index because the function is not performed directly
on the column, as it is in the first example. The moral of this story is to try
to rewrite WHERE clauses that have functions so that the function does not act
directly on the column.
WHERE clauses that use NOT are not sargable, but can often be
rewritten to remove the NOT from the WHERE clause, for example:
WHERE NOT column_name > 5
to
WHERE column_name <= 5
Each of the above clauses produces the same results, but the second one is sargable.
If you don't know if a particular WHERE clause is sargable or
non-sargable, check out the query's execution plan in Query Analyzer. Doing
this, you can very quickly see if the query will be using index lookups or
table/index scans to return your results.
With some careful analysis, and some clever thought, many
non-sargable queries can be written so that they are sargable. Your goal for
best performance (assuming it is possible) is to get the left side of a search
condition to be a single column name, and the right side an easy to look up
value.
If you currently have a query that uses NOT IN, which offers poor performance
because the SQL Server optimizer has to use a nested table scan to perform this
activity, try one of the following options instead, all of which offer better
performance:
- Use EXISTS or NOT EXISTS
- Use IN
- Perform a LEFT OUTER JOIN and check for a NULL condition
When you have a choice of using the IN or the BETWEEN clauses in
your Transact-SQL, you will generally want to use the BETWEEN clause, as it is
much more efficient. For example:
SELECT customer_number, customer_name
FROM customer
WHERE customer_number IN (1000, 1001, 1002, 1003, 1004)
is much less efficient than this:
SELECT customer_number, customer_name
FROM customer
WHERE customer_number BETWEEN 1000 AND 1004
Assuming there is a useful index on customer_number, the Query
Optimizer can locate a range of numbers much faster (using BETWEEN) than it can
find a series of numbers using the IN clause (which is really just another form
of the OR clause).
If you have a WHERE clause that includes expressions connected
by two or more AND operators, SQL Server will evaluate them from left to right
in the order they are written. This assumes that no
parentheses have been used to change the order of execution. Because of this,
you may want to consider one of the following when using AND:
- Locate the least likely true AND expression first. This way, if the AND expression is false, the clause will end immediately, saving time.
- If both parts of an AND expression are equally likely to be false, put the least complex AND expression first. This way, if it is false, less work will have to be done to evaluate the expression.
You may want to consider using Query Analyzer to look at the
execution plans of your queries to see which is best for your situation.
If you want to boost the performance of a query that includes
an AND operator in the WHERE clause, consider the following:
- Of the search criteria in the WHERE clause, at least one of them should be based on a highly selective column that has an index.
- If at least one of the search criteria in the WHERE clause is not highly selective, consider adding indexes to all of the columns referenced in the WHERE clause.
- If none of the columns in the WHERE clause are selective enough to use an index on their own, consider creating a covering index for this query.
The Query Optimizer will perform a table scan or a clustered
index scan on a table if the WHERE clause in the query contains an OR
operator and if any of the referenced columns in the OR clause are not indexed
(or do not have a useful index). Because of this, if you use many queries
with OR clauses, you will want to ensure that each referenced column in the
WHERE clause has a useful index.
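Where indexing every column is not practical, an OR can sometimes be rewritten as a UNION of two sargable queries, each able to use its own index. A hypothetical sketch, reusing the customer table from the earlier example:
SELECT customer_number, customer_name
FROM customer
WHERE customer_number = 1001 -- can use an index on customer_number
UNION
SELECT customer_number, customer_name
FROM customer
WHERE customer_name = 'Acme Corp' -- can use an index on customer_name
-- UNION (not UNION ALL) removes rows that satisfy both predicates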
Whenever SQL Server has to perform a sorting operation,
additional resources have to be used to perform this task.
Sorting often occurs when any of the following Transact-SQL statements are
executed:
- ORDER BY
- GROUP BY
- SELECT DISTINCT
- UNION
- CREATE INDEX (generally not as critical, as it happens much less often)
In many cases, these commands cannot be avoided. On the other hand, there are a
few ways that sorting overhead can be reduced. These include:
- Keep the number of rows to be sorted to a minimum. Do this by only returning those rows that absolutely need to be sorted.
- Keep the number of columns to be sorted to a minimum. In other words, don't sort more columns than required.
- Keep the width (physical size) of the columns to be sorted to a minimum.
- Sort columns with numeric datatypes instead of character datatypes.
When using any of the above Transact-SQL commands, try to keep
the above performance-boosting suggestions in mind.
If your SELECT statement contains a HAVING clause, write
your query so that the WHERE clause does most of the work (removing undesired
rows) rather than having the HAVING clause do the work of removing undesired rows.
Using the WHERE clause appropriately can eliminate unnecessary rows before they
get to the GROUP BY and HAVING clause, saving some unnecessary work, and
boosting performance.
For example, in a SELECT statement with WHERE, GROUP BY, and
HAVING clauses, here's what happens. First, the WHERE clause is used to select
the appropriate rows that need to be grouped. Next, the GROUP BY clause divides
the rows into sets of grouped rows, and then aggregates their values. And last,
the HAVING clause then eliminates undesired aggregated groups. If the WHERE
clause is used to eliminate as many of the undesired rows as possible, this
means the GROUP BY and the HAVING clauses will have less work to do, boosting
the overall performance of the query.
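A sketch of the difference, using a hypothetical orders table:
-- Less efficient: every customer is grouped, then most groups are thrown away
SELECT customer_number, SUM(order_total) AS total
FROM orders
GROUP BY customer_number
HAVING customer_number BETWEEN 1000 AND 1004
-- More efficient: unwanted rows are removed before the GROUP BY runs
SELECT customer_number, SUM(order_total) AS total
FROM orders
WHERE customer_number BETWEEN 1000 AND 1004
GROUP BY customer_number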
If your application performs many wildcard (LIKE %) text
searches on CHAR or VARCHAR columns, consider using SQL Server's
full-text search option. The Search Service can significantly speed up wildcard
searches of text stored in a database.
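A sketch of the difference, assuming a hypothetical documents table on which a full-text catalog and full-text index have already been created:
-- A leading-wildcard LIKE cannot use a regular index:
SELECT doc_id FROM documents WHERE doc_text LIKE '%server%'
-- CONTAINS hands the search to the full-text engine instead:
SELECT doc_id FROM documents WHERE CONTAINS(doc_text, 'server')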
The GROUP BY clause can be sped up if you follow these suggestions:
- Keep the number of rows returned by the query as small as possible.
- Keep the number of groupings as few as possible.
- Don't group redundant columns.
- If there is a JOIN in the same SELECT statement that has a GROUP BY, try to rewrite the query to use a subquery instead of a JOIN. If this is possible, performance will be faster. If you have to use a JOIN, try to make the GROUP BY column from the same table as the column or columns on which the set function is used.
- Consider adding an ORDER BY clause to the SELECT statement that orders by the same column as the GROUP BY. This may cause the GROUP BY to perform faster. Test this to see if it is true in your particular situation.
Instead of using temporary tables, consider using a derived
table. A derived table is the result of using a SELECT statement in
the FROM clause of an existing SELECT statement. By using derived tables
instead of temporary tables, you can reduce I/O and boost your application's
performance.
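A minimal sketch, reusing the hdr_tbl and dtl_tbl tables from the next tip (the temp-table version is shown only for contrast):
-- Temp-table approach: two statements plus tempdb I/O
SELECT hdr_key, COUNT(*) AS dtl_count
INTO #counts
FROM dtl_tbl
GROUP BY hdr_key
SELECT a.hdr_key, c.dtl_count
FROM hdr_tbl a
INNER JOIN #counts c ON a.hdr_key = c.hdr_key
DROP TABLE #counts
-- Derived-table approach: one statement, no temporary table
SELECT a.hdr_key, c.dtl_count
FROM hdr_tbl a
INNER JOIN (SELECT hdr_key, COUNT(*) AS dtl_count
            FROM dtl_tbl
            GROUP BY hdr_key) c ON a.hdr_key = c.hdr_key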
It is a fairly common request to write a Transact-SQL query that compares
a parent table and a child table and finds out whether there are any parent
records that don't have a match in the child table. Generally, there are three
ways this can be done:
Using a NOT EXISTS
SELECT a.hdr_key
FROM hdr_tbl a
WHERE NOT EXISTS (SELECT * FROM dtl_tbl b WHERE a.hdr_key = b.hdr_key)
Using a LEFT JOIN
SELECT a.hdr_key
FROM hdr_tbl a
LEFT JOIN dtl_tbl b ON a.hdr_key = b.hdr_key
WHERE b.hdr_key IS NULL
Using a NOT IN
SELECT hdr_key
FROM hdr_tbl
WHERE hdr_key NOT IN (SELECT hdr_key FROM dtl_tbl)
In each case, the above query will return identical results (assuming
dtl_tbl.hdr_key is non-nullable; if the subquery can return NULLs, the NOT IN
version returns no rows at all). But which of these three variations of the same
query produces the best performance? Assuming everything else is equal, the best
performing version through the worst performing version will be from top to
bottom, as displayed above. In other words, the NOT EXISTS variation of this
query is generally the most efficient.
I say generally, because the indexes found on the tables, along with
the number of rows in each table, can influence the results. If you are not
sure which variation to try yourself, you can try them all and see which
produces the best results in your particular circumstances.
If you need to verify the existence of a record in a table, don't
use SELECT COUNT(*) in your Transact-SQL code to identify it, which is very
inefficient and wastes server resources. Instead, use the Transact-SQL IF EXISTS
to determine whether the record in question exists, which is much more efficient.
For example:
Here's how you might use COUNT(*):
IF (SELECT COUNT(*) FROM table_name WHERE column_name = 'xxx') > 0
Here's a faster way, using IF EXISTS:
IF EXISTS (SELECT * FROM table_name WHERE column_name = 'xxx')
The reason IF EXISTS is faster than COUNT(*) is that the
query can end immediately once the test is proven true, while COUNT(*) must
go through every matching record, whether there is only one or thousands, before
the condition can be evaluated.
Let's say that you often need to INSERT the same value into a
column. For example, perhaps you have to perform 100,000 INSERTs a day
into a particular table, and that 90% of the time the data INSERTed into one of
the columns of the table is the same value.
If this is the case, you can reduce network traffic (along with
some SQL Server overhead) by creating this particular column with a default
value of the most common value. This way, when you INSERT your data, and the
data is the default value, you don't INSERT any data into this column, instead
allowing the default value to automatically be filled in for you. But when the
value needs to be different, you will of course INSERT that value into the
column.
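A minimal sketch (the table and values are hypothetical):
CREATE TABLE orders_log (
    order_id INT IDENTITY(1,1) PRIMARY KEY,
    status VARCHAR(20) NOT NULL DEFAULT 'COMPLETED', -- the value used 90% of the time
    created_on DATETIME NOT NULL DEFAULT GETDATE()
)
-- Common case: omit the column and let the default fill it in
INSERT INTO orders_log DEFAULT VALUES
-- Exception case: supply the value explicitly
INSERT INTO orders_log (status) VALUES ('FAILED')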
If you have created a complex transaction that includes several
parts, one part of which has a higher probability of rolling back the
transaction than the others, better performance will be provided if you locate
the most likely to fail part of the transaction at the front of the greater
transaction. This way, if this more-likely-to-fail part has to roll back
because of a failure, no resources have been wasted on the other
less-likely-to-fail parts of the transaction.
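A sketch of the idea, reusing the hdr_tbl and dtl_tbl tables from earlier (which statement is "most likely to fail" is an assumption for illustration):
BEGIN TRY
    BEGIN TRANSACTION
    -- The statement most likely to fail (say, a constraint violation) runs first,
    -- so a rollback here wastes none of the work below it
    INSERT INTO dtl_tbl (hdr_key) VALUES (42)
    -- Cheaper, unlikely-to-fail work follows
    UPDATE hdr_tbl SET hdr_key = hdr_key WHERE hdr_key = 42
    COMMIT TRANSACTION
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0 ROLLBACK TRANSACTION
END CATCH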
Guidelines for Using Joins
If you perform
regular joins between two or more tables in your queries, performance
will be optimized if each of the joined columns has its own index. This
includes adding indexes to the columns in each table used to join the tables.
If you have to
regularly join four or more tables to get the recordset you need,
consider denormalizing the tables so that the number of joined tables is
reduced. Often, by adding one or two columns from one table to another, joins
can be reduced.
Don't use CROSS
JOINS, unless this is the only way to accomplish your goal. What some
inexperienced developers do is to join two tables using a CROSS JOIN, then they
use either the DISTINCT or the GROUP BY clauses to "clean up" the
mess they have created. This, as you might imagine, can be a huge waste of SQL
Server resources.
For maximum
performance when joining two or more tables, the indexes on the columns to
be joined should have the same data type. This also means that you
shouldn't mix non-Unicode and Unicode datatypes when using SQL Server 7.0 or
later (e.g. varchar and nvarchar). If SQL Server has to implicitly convert the
data types to perform the join, this not only slows the joining process, but it
also could mean that SQL Server may not use available indexes, performing a
table scan instead.
Use joins in
preference to sub or nested queries for improved performance.
If you have a query that contains two subselects containing an aggregate function
(SUM, COUNT, etc.) in the SELECT part, the query may perform sluggishly. To
rectify the problem, you can replace the subselects with a series of JOINs.
So as a rule, use JOINs in lieu of subselects when the subselect contains aggregate functions.
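For example, a sketch reusing hdr_tbl and dtl_tbl (the amount column is hypothetical):
-- Subselect version: two correlated aggregate subqueries per outer row
SELECT a.hdr_key,
       (SELECT COUNT(*) FROM dtl_tbl b WHERE b.hdr_key = a.hdr_key) AS dtl_count,
       (SELECT SUM(b.amount) FROM dtl_tbl b WHERE b.hdr_key = a.hdr_key) AS dtl_total
FROM hdr_tbl a
-- JOIN version: a single pass over the detail table
SELECT a.hdr_key, COUNT(b.hdr_key) AS dtl_count, SUM(b.amount) AS dtl_total
FROM hdr_tbl a
LEFT JOIN dtl_tbl b ON b.hdr_key = a.hdr_key
GROUP BY a.hdr_key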
Best Practices & Monitoring Points For SQL Database & Server
Best Practices :
1. Application and Database should be on different servers
2. A proper backup plan should be implemented, based on data size, RPO, and RTO (a minimal T-SQL sketch appears after this list)
Plan:
i. Full backup on Sunday
ii. Differential backup daily
iii. Log backups hourly or every fifteen minutes
iv. A system database backup plan should also be there
v. Restore drills
vi. Backup reports
3. Proper Indexing
i. Based on data type, data size, and fill factor
ii. Implement indexes suggested by the SQL Server engine as having high impact, and monitor them
iii. Remove indexes that are not in use
4. Remove all the objects which were never used
5. Update statistics every week or more frequently (depending upon data insertions, updates, and deletions)
6. Email notification
i. Define mailer profile
ii. Create operators
7. Define bottleneck thresholds for CPU utilization (performance)
8. Create logins and grant only the roles needed, not all
9. Proper naming convention for SQL objects, e.g. procedures with the prefix usp
10. For heavy data searching use Full text search
11. Proper history clean up
12. Always logoff after completion of task on prod environment
a. SET NOCOUNT ON should be in every proc
b. Proper try and catch should be there
c. Use SARGABLE rules
-> A DR (Disaster Recovery) plan should be ready
-> Apply a high availability solution (log shipping, replication, or clustering) depending upon the use of the data
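A minimal T-SQL sketch of the backup plan from point 2 (paths and names are illustrative; the actual scheduling belongs in SQL Server Agent jobs):
-- Sunday: full backup
BACKUP DATABASE Dbavimal TO DISK = 'D:\Backup\Dbavimal_full.bak' WITH INIT
-- Daily: differential backup
BACKUP DATABASE Dbavimal TO DISK = 'D:\Backup\Dbavimal_diff.bak' WITH DIFFERENTIAL
-- Hourly / every fifteen minutes: log backup
BACKUP LOG Dbavimal TO DISK = 'D:\Backup\Dbavimal_log.trn'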
--------------------------------------------------------
Monitoring :
A. Backups: backups should be validated and monitored; define the retention period
B. Monitor failed jobs and job status
C. Monitor services
D. Monitor CPU Utilization & system performance
E. Monitor database performance
F. Monitor disk spaces
G. Monitor tables (based on iteration prepare plan for partitioning and archival)
H. Monitor security
I. Monitor users and delete orphaned users
J. Use SSRS for monitoring: create SSRS reports (less costly) or use a third-party tool (more costly)
K. Restricted Access
L. Long running query and job alert
M. Agent Job history configuration and maintenance
N. Monitor database integrity
Important Links for Sql Server scripts
- Sql Server Missing Index Script
- Sql Server Unused Index Script
- Find Unused Indexes of Current Database
- Identify Numbers of Non Clustered Index on Tables for Entire Database
Find a column in SQL database tables
SELECT s.NAME AS ColumnName
    ,sh.NAME + '.' + o.NAME AS ObjectName
    ,o.type_desc AS ObjectType
    ,CASE
        WHEN t.NAME IN ('char', 'varchar')
            THEN t.NAME + '(' + CASE WHEN s.max_length < 0 THEN 'MAX' ELSE CONVERT(VARCHAR(10), s.max_length) END + ')'
        WHEN t.NAME IN ('nvarchar', 'nchar')
            THEN t.NAME + '(' + CASE WHEN s.max_length < 0 THEN 'MAX' ELSE CONVERT(VARCHAR(10), s.max_length / 2) END + ')'
        WHEN t.NAME IN ('numeric')
            THEN t.NAME + '(' + CONVERT(VARCHAR(10), s.precision) + ',' + CONVERT(VARCHAR(10), s.scale) + ')'
        ELSE t.NAME
        END AS DataType
    ,CASE WHEN s.is_nullable = 1 THEN 'NULL' ELSE 'NOT NULL' END AS Nullable
    ,CASE WHEN ic.column_id IS NULL THEN '' ELSE ' identity(' + ISNULL(CONVERT(VARCHAR(10), ic.seed_value), '') + ',' + ISNULL(CONVERT(VARCHAR(10), ic.increment_value), '') + ')=' + ISNULL(CONVERT(VARCHAR(10), ic.last_value), 'null') END
        + CASE WHEN sc.column_id IS NULL THEN '' ELSE ' computed(' + ISNULL(sc.DEFINITION, '') + ')' END
        + CASE WHEN cc.object_id IS NULL THEN '' ELSE ' check(' + ISNULL(cc.DEFINITION, '') + ')' END AS MiscInfo
    ,CASE
        WHEN t.NAME IN ('char', 'varchar')
            THEN CASE WHEN s.max_length < 0 THEN 0 ELSE CONVERT(VARCHAR(10), s.max_length) END
        WHEN t.NAME IN ('nvarchar', 'nchar')
            THEN CASE WHEN s.max_length < 0 THEN 0 ELSE CONVERT(VARCHAR(10), s.max_length / 2) END
        END AS val
INTO #temp
FROM sys.columns s
INNER JOIN sys.types t ON s.system_type_id = t.user_type_id AND t.is_user_defined = 0
INNER JOIN sys.objects o ON s.object_id = o.object_id
INNER JOIN sys.schemas sh ON o.schema_id = sh.schema_id
LEFT JOIN sys.identity_columns ic ON s.object_id = ic.object_id AND s.column_id = ic.column_id
LEFT JOIN sys.computed_columns sc ON s.object_id = sc.object_id AND s.column_id = sc.column_id
LEFT JOIN sys.check_constraints cc ON s.object_id = cc.parent_object_id AND s.column_id = cc.parent_column_id
WHERE --t.name in ('nvarchar','nchar','char','varchar')
    s.NAME LIKE 'attendance%' --<--Write your column name here
ORDER BY sh.NAME + '.' + o.NAME
    ,s.column_id

SELECT * FROM #temp --WHERE val > 500

DROP TABLE #temp
DBA Scenario 1: How to handle 100% CPU Utilization
Hi guys!
Today we shall discuss the hot topic of how to handle 100% CPU utilization. DBAs
face this kind of situation often in their daily life. Sometimes it is a must to
solve this situation in a production environment, as it hampers the business
activity in terms of transactions and money.
The methods we are about to discuss help only if SQL Server is the culprit. You
need to first identify whether SQL Server is the process consuming all the CPU;
only then should you apply these methods, otherwise they will not help you.
I shall cover two methods; the methods are not the same and can't be used at the
same time. Their aim is the same, to lower the CPU usage, but they are applied
in different scenarios.
Before we begin, I want to discuss a myth: mostly it is assumed that long-running
queries are the problem, but that is not true all the time. Yes, they may cause
problems, but small concurrent queries having maximum worker time or maximum
execution count can also be a problem. For example, a query that executes in
under a second can't be a problem on its own, but the same query executed
100,000 times concurrently (at the same time) can cause an issue.
As per my experience, mostly SELECT queries are the culprit and create such a
situation, so that SQL Server starts consuming 100% of the CPU. You can use Task
Manager or Resource Monitor to find the CPU usage.
Method 1:
It is a traditional approach, mostly used by DBAs. Whenever this kind of
situation arises, you need to first check for the intense processes running on
the server. For this, repeatedly execute the procedure sp_who2 and monitor which
SPID is increasing gradually; then identify what is going on in that session
using DBCC INPUTBUFFER(<spid>). If it is a SELECT query you can kill it, but you
should not kill transactions or queries that INSERT, UPDATE, or DELETE on SQL
tables.
sp_who2
dbcc inputbuffer(<spid>)
kill <spid>
Note: You need to look for SPIDs greater than 50, because SPIDs of 50 and below
are used by SQL Server for its internal working.
Method 2:
It is a more granular approach to optimize the query. In this approach you need
to use a few DMVs. SQL Server increments the execution_count for the same query;
if the definition of the query changes, a new plan is created. So you need to
find the queries having the maximum execution_count and the maximum
total_worker_time. When you find the record you will get the plan handle, which
you need to copy and paste into another DMV, sys.dm_exec_query_plan.
Select * from sys.dm_exec_query_stats
order by execution_count desc
Select * from sys.dm_exec_query_stats
order by total_worker_time desc
Select * from sys.dm_exec_query_plan(<plan_handle>)
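One way to combine these DMVs into a single diagnostic query (a sketch; adjust TOP and the ORDER BY to your situation):
-- Costliest statements with their text and XML plans in one result set
SELECT TOP (20)
       qs.execution_count,
       qs.total_worker_time,
       st.text AS query_text,
       qp.query_plan
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) st
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) qp
ORDER BY qs.total_worker_time DESC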
From the above query we will get the execution plan, and from there we can view
the XML of the plan and find the parameters of the query. After getting the
queries you can apply the sargable rules; these rules are used to optimize
queries. This is how you can trace costly queries and handle 100% CPU
utilization.
SSRS - Download all Report files Or Single (RDL) from Report Server
Hi friends! Today my manager asked me to download all reports from the Report Server and check them in to TFS.
There were plenty of reports, almost 1000, so it was difficult to download them one by one. I figured this kind of problem comes to most DBAs, so I decided to write this blog. I found a similar script on Google and modified it.
Prerequisites & Notes:
1. xp_cmdshell should be enabled. (Part 1)
2. The SQL Engine account should have proper write permission on the output folder where you will get all your reports.
3. Please read the comments before execution and follow them. (Part 2)
4. Change the Report Server database name in the query.
Part 1 : Enable xp_cmdshell
EXEC sp_configure 'show advanced options', 1
GO
RECONFIGURE
GO
EXEC sp_configure 'xp_cmdshell', 1
GO
RECONFIGURE
GO
EXEC sp_configure 'show advanced options', 0
GO
RECONFIGURE
GO
Part 2 : Script Implementation
/*
If you want all reports in output folder whether they were published in different folders
in SSRS Report Server then put Null else define folder name
eg: DECLARE @ReportFolderPath AS VARCHAR(500) ='/Folder1/Folder2/'
*/
DECLARE @ReportFolderPath AS VARCHAR(500) = NULL

/*
If you want some specific report then put its name else put Null
eg: DECLARE @ReportName AS VARCHAR(500) = 'Bill'
*/
DECLARE @ReportName AS VARCHAR(500) = NULL

/*
This is the output folder where reports will be generated.
Make sure the SQL Server Engine account has proper permission on this
*/
DECLARE @OutputFolderLocation AS VARCHAR(500) = 'C:\ReportFolder\'

--To check folder path
SET @OutputFolderLocation = REPLACE(@OutputFolderLocation, '\', '/')

/*
This is for bcp access to SQL Server.
If you want to access through a SQL login, use
eg: DECLARE @BCPComponents VARCHAR(1000) = '-S"MachineName\SQL2012" -U"SA" -P"Pa$$word"'
or, if you want a Windows account having permission on SQL, then
eg: DECLARE @BCPComponents VARCHAR(1000) = '-S"MachineName\SQL2012"'
else null
*/
DECLARE @BCPComponents VARCHAR(1000) = NULL

DECLARE @SqlQuery AS NVARCHAR(MAX)

IF LTRIM(RTRIM(ISNULL(@OutputFolderLocation, ''))) = ''
BEGIN
    SELECT 'Access denied!!!'
END
ELSE
BEGIN
    SET @SqlQuery = STUFF((SELECT ';EXEC master..xp_cmdshell ''bcp " '
            + ' SELECT '
            + ' CONVERT(VARCHAR(MAX), '
            + ' CASE '
            + ' WHEN LEFT(C.Content,3) = 0xEFBBBF THEN STUFF(C.Content,1,3,'''''''') '
            + ' ELSE C.Content '
            + ' END) '
            + ' FROM '
            + ' [ReportServer$SQL2012].[dbo].[Catalog] CL '
            + ' CROSS APPLY (SELECT CONVERT(VARBINARY(MAX),CL.Content) Content) C '
            + ' WHERE '
            + ' CL.ItemID = ''''' + CONVERT(VARCHAR(MAX), CL.ItemID) + ''''' " queryout "'
            + @OutputFolderLocation + '' + CL.Name + '.rdl" '
            + '-T -c -x ' + COALESCE(@BCPComponents, '') + ''''
        FROM [ReportServer$SQL2012].[dbo].[Catalog] CL
        WHERE CL.[Type] = 2 --Report
            AND '/' + CL.[Path] + '/' LIKE COALESCE('%/%' + @ReportFolderPath + '%/%', '/' + CL.[Path] + '/')
            AND CL.Name LIKE COALESCE('%' + @ReportName + '%', CL.Name)
        FOR XML PATH('')), 1, 1, '')

    SELECT @SqlQuery

    --Execute the dynamic query
    EXEC SP_EXECUTESQL @SqlQuery
END
Reference : VinayPugalia