Tuesday, August 2, 2016

Finding SQL Server performance bottlenecks

The primary sources of information are Dynamic Management Views (DMVs) and Functions.


1. Connection and session data.

SELECT
    database_id,    -- SQL Server 2012 and after only
    session_id,
    status,
    login_time,
    cpu_time,
    memory_usage,
    reads,
    writes,
    logical_reads,
    host_name,
    program_name,
    host_process_id,
    client_interface_name,
    login_name as database_login_name,
    last_request_start_time
FROM sys.dm_exec_sessions
WHERE is_user_process = 1
ORDER BY cpu_time DESC;



cpu_time - milliseconds
memory_usage - number of 8 KB pages used by this session
reads, writes - number of 8 KB pages read from / written to physical disk
logical_reads - number of 8 KB pages read from the buffer cache
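Since these counters are reported in 8 KB pages, a quick conversion to megabytes helps when reading the numbers. A minimal sketch (the page count below is invented for illustration):

```python
def pages_to_mb(pages):
    """Convert a count of 8 KB pages to megabytes (128 pages = 1 MB)."""
    return pages * 8 / 1024

# e.g. a session showing memory_usage = 2560 pages holds 20 MB
print(pages_to_mb(2560))  # -> 20.0
```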

TIP: to see your own program's name in the "program_name" column, add "Application Name"
to the connection string, for example:

     Data Source=myServer;
     Initial Catalog=myDB;
     User Id=myUsername;
     Password=myPassword;
     Application Name=myApp;
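The same ADO.NET-style connection string can be assembled programmatically; a small sketch (all server, database, and credential values are placeholders, not real endpoints):

```python
# Assemble a connection string that sets Application Name, so this client
# shows up under program_name in sys.dm_exec_sessions.
# All parameter values below are placeholders.
def build_connection_string(server, database, user, password, app_name):
    parts = {
        "Data Source": server,
        "Initial Catalog": database,
        "User Id": user,
        "Password": password,
        "Application Name": app_name,  # surfaces as program_name in the DMV
    }
    return ";".join(f"{k}={v}" for k, v in parts.items())

print(build_connection_string("myServer", "myDB", "myUsername", "myPassword", "myApp"))
```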



2. What is currently executing on the SQL Server

SELECT
        [DatabaseName] = db_name(rq.database_id),
        s.session_id, 
        rq.status,
        [SqlStatement] = SUBSTRING (qt.text,rq.statement_start_offset/2 + 1,
            (CASE WHEN rq.statement_end_offset = -1 THEN LEN(CONVERT(NVARCHAR(MAX),
            qt.text)) * 2 ELSE rq.statement_end_offset END - rq.statement_start_offset)/2),        
        [ClientHost] = s.host_name,
        [ClientProgram] = s.program_name, 
        [ClientProcessId] = s.host_process_id, 
        [SqlLoginUser] = s.login_name,
        [DurationInSeconds] = datediff(s,rq.start_time,getdate()),
        rq.start_time,
        rq.cpu_time,
        rq.logical_reads,
        rq.writes,
        [ParentStatement] = qt.text,
        p.query_plan,
        rq.wait_type,
        [BlockingSessionId] = bs.session_id,
        [BlockingHostname] = bs.host_name,
        [BlockingProgram] = bs.program_name,
        [BlockingClientProcessId] = bs.host_process_id,
        [BlockingSql] = SUBSTRING (bt.text, brq.statement_start_offset/2 + 1,
            (CASE WHEN brq.statement_end_offset = -1 THEN LEN(CONVERT(NVARCHAR(MAX),
            bt.text)) * 2 ELSE brq.statement_end_offset END - brq.statement_start_offset)/2)
    FROM sys.dm_exec_sessions s
    INNER JOIN sys.dm_exec_requests rq
        ON s.session_id = rq.session_id
    CROSS APPLY sys.dm_exec_sql_text(rq.sql_handle) as qt
    OUTER APPLY sys.dm_exec_query_plan(rq.plan_handle) p
    LEFT OUTER JOIN sys.dm_exec_sessions bs
        ON rq.blocking_session_id = bs.session_id
    LEFT OUTER JOIN sys.dm_exec_requests brq
        ON rq.blocking_session_id = brq.session_id
    OUTER APPLY sys.dm_exec_sql_text(brq.sql_handle) as bt
    WHERE s.is_user_process =1
        AND s.session_id <> @@spid
-- AND rq.database_id = DB_ID()  -- Comment out to look at all databases
    ORDER BY rq.start_time ASC;

cpu_time - milliseconds
ParentStatement - text of the stored procedure if one is running, otherwise the same as SqlStatement
SqlStatement - the currently executing SQL statement (can be a part of an SP)
rq.start_time - when the request started; long-running requests are listed first
rq.cpu_time - high values indicate a problem, same as in the previous chapter
rq.logical_reads - same as above
rq.writes - same as above
p.query_plan - XML execution plan; click it in SSMS to view the plan
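The blocking columns form a chain: each waiting request points at its blocker via blocking_session_id, and the blocker may itself be blocked. A sketch of walking such a chain on the client side (the session IDs below are invented):

```python
# Walk a blocking chain from rows like those the query above returns.
# `blocking` maps session_id -> blocking_session_id (None/0 = not blocked).
def blocking_chain(session_id, blocking):
    chain = [session_id]
    seen = {session_id}
    while True:
        blocker = blocking.get(chain[-1])
        if not blocker or blocker in seen:  # head blocker reached, or a cycle
            return chain
        chain.append(blocker)
        seen.add(blocker)

# invented example: 72 waits on 65, 65 waits on 51, 51 is the head blocker
blocking = {72: 65, 65: 51, 51: None}
print(blocking_chain(72, blocking))  # -> [72, 65, 51]
```

The last session in the returned list is the one worth investigating (or killing) first.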


3. Statement execution statistics.

Finding the worst performing statements:

SQL Server keeps these statistics in memory for cached query plans, typically covering the last several hours of activity; they are lost when a plan is evicted from the cache or the server restarts.

SELECT TOP 20    
        DatabaseName = DB_NAME(CONVERT(int, epa.value)), 
        [Execution count] = qs.execution_count,
        [CpuPerExecution] = total_worker_time / qs.execution_count ,
        [TotalCPU] = total_worker_time,
        [IOPerExecution] = (total_logical_reads + total_logical_writes) / qs.execution_count ,
        [TotalIO] = (total_logical_reads + total_logical_writes) ,
        [AverageElapsedTime] = total_elapsed_time / qs.execution_count,
        [AverageTimeBlocked] = (total_elapsed_time - total_worker_time) / qs.execution_count,
     [AverageRowsReturned] = total_rows / qs.execution_count,    
     [Query Text] = SUBSTRING(qt.text,qs.statement_start_offset/2 +1, 
            (CASE WHEN qs.statement_end_offset = -1 
                THEN LEN(CONVERT(nvarchar(max), qt.text)) * 2 
                ELSE qs.statement_end_offset end - qs.statement_start_offset)
            /2),
        [Parent Query] = qt.text,
        [Execution Plan] = p.query_plan,
     [Creation Time] = qs.creation_time,
     [Last Execution Time] = qs.last_execution_time   
    FROM sys.dm_exec_query_stats qs
    CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) as qt
    OUTER APPLY sys.dm_exec_query_plan(qs.plan_handle) p
    OUTER APPLY sys.dm_exec_plan_attributes(plan_handle) AS epa
    WHERE epa.attribute = 'dbid'
        AND epa.value = db_id()
    ORDER BY [AverageElapsedTime] DESC; --Other column aliases can be used

4. Missing indexes



SELECT     
    TableName = d.statement,
    d.equality_columns, 
    d.inequality_columns,
    d.included_columns, 
    s.user_scans,
    s.user_seeks,
    s.avg_total_user_cost,
    s.avg_user_impact,
    AverageCostSavings = ROUND(s.avg_total_user_cost * (s.avg_user_impact/100.0), 3),
    TotalCostSavings = ROUND(s.avg_total_user_cost * (s.avg_user_impact/100.0) * (s.user_seeks + s.user_scans),3)
FROM sys.dm_db_missing_index_groups g
INNER JOIN sys.dm_db_missing_index_group_stats s
    ON s.group_handle = g.index_group_handle
INNER JOIN sys.dm_db_missing_index_details d
    ON d.index_handle = g.index_handle
WHERE d.database_id = db_id()
ORDER BY TableName, TotalCostSavings DESC;

TableName = d.statement - table on which SQL Server recommends creating an index
d.equality_columns - comma-separated list of columns that contribute to equality predicates of the form: table.column = constant_value
d.inequality_columns - comma-separated list of columns that contribute to inequality predicates, for example, predicates of the form: table.column > constant_value (any comparison operator other than "=" expresses inequality)
d.included_columns - comma-separated list of columns needed as covering columns for the query
s.user_scans - number of scans the missing index could have satisfied
s.user_seeks - number of seeks the missing index could have satisfied
s.avg_total_user_cost - average cost of the user queries that could be reduced by the index
s.avg_user_impact - average percentage benefit user queries could experience
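The column lists above can be combined into a candidate CREATE INDEX statement (key columns = equality columns followed by inequality columns, covering columns in INCLUDE). A sketch of that string assembly; the naming convention and the sample table/columns are my own, not something SQL Server emits:

```python
def suggest_create_index(table, equality, inequality, included):
    """Build a candidate CREATE INDEX statement from
    sys.dm_db_missing_index_details columns; any list may be None/empty."""
    key_cols = [c.strip() for c in (equality or "").split(",") if c.strip()]
    key_cols += [c.strip() for c in (inequality or "").split(",") if c.strip()]
    name = "IX_" + "_".join(c.strip("[]") for c in key_cols)
    stmt = f"CREATE INDEX {name} ON {table} ({', '.join(key_cols)})"
    if included:
        stmt += f" INCLUDE ({included})"
    return stmt

# hypothetical table and columns, for illustration only
print(suggest_create_index("[dbo].[Orders]", "[CustomerId]", "[OrderDate]", "[Total]"))
# -> CREATE INDEX IX_CustomerId_OrderDate ON [dbo].[Orders] ([CustomerId], [OrderDate]) INCLUDE ([Total])
```

Treat the output only as a starting point: always review overlap with existing indexes before creating anything.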


5. Index usage.


SELECT
    [DatabaseName] = DB_Name(db_id()),
    [TableName] = OBJECT_NAME(i.object_id),
    [IndexName] = i.name, 
    [IndexType] = i.type_desc,
    [TotalUsage] = IsNull(user_seeks, 0) + IsNull(user_scans, 0) + IsNull(user_lookups, 0),
    [UserSeeks] = IsNull(user_seeks, 0),
    [UserScans] = IsNull(user_scans, 0), 
    [UserLookups] = IsNull(user_lookups, 0),
    [UserUpdates] = IsNull(user_updates, 0)
FROM sys.indexes i 
INNER JOIN sys.objects o
    ON i.object_id = o.object_id
LEFT OUTER JOIN sys.dm_db_index_usage_stats s
    ON s.object_id = i.object_id
    AND s.index_id = i.index_id
WHERE 
    (OBJECTPROPERTY(i.object_id, 'IsMsShipped') = 0)
ORDER BY [TableName], [IndexName];

Look for:
a) NONCLUSTERED indexes with a high number of "UserUpdates" but few or no seeks/scans/lookups - such an index costs writes without helping reads and is a candidate for removal.
b) a high "UserSeeks" value for a NONCLUSTERED index - this means the index is properly built and actually used.


6. Memory usage per database.

DECLARE @total_buffer INT;

SELECT @total_buffer = cntr_value
FROM sys.dm_os_performance_counters
WHERE RTRIM([object_name]) LIKE '%Buffer Manager'
    AND counter_name = 'Total Pages';  -- NOTE: removed in SQL Server 2012 and later; use 'Database Pages' there

;WITH src AS
(
    SELECT
        database_id,
        db_buffer_pages = COUNT_BIG(*)
    FROM sys.dm_os_buffer_descriptors
    --WHERE database_id BETWEEN 5 AND 32766
    GROUP BY database_id
)
SELECT
    [db_name] = CASE [database_id]
        WHEN 32767 THEN 'Resource DB'
        ELSE DB_NAME([database_id]) END,
    db_buffer_pages,
    db_buffer_MB = db_buffer_pages / 128,
    db_buffer_percent = CONVERT(DECIMAL(6,3), db_buffer_pages * 100.0 / @total_buffer)
FROM src
ORDER BY db_buffer_MB DESC;

--then drill down into memory used by objects in database of your choice

USE DB_name;

WITH src AS
(
    SELECT
        [Object] = o.name,
        [Type] = o.type_desc,
        [Index] = COALESCE(i.name, ''),
        [Index_Type] = i.type_desc,
        p.[object_id],
        p.index_id,
        au.allocation_unit_id
    FROM sys.partitions AS p
    INNER JOIN sys.allocation_units AS au
        ON p.hobt_id = au.container_id
    INNER JOIN sys.objects AS o
        ON p.[object_id] = o.[object_id]
    INNER JOIN sys.indexes AS i
        ON o.[object_id] = i.[object_id]
        AND p.index_id = i.index_id
    WHERE au.[type] IN (1,2,3)
        AND o.is_ms_shipped = 0
)
SELECT
    src.[Object],
    src.[Type],
    src.[Index],
    src.Index_Type,
    buffer_pages = COUNT_BIG(b.page_id),
    buffer_mb = COUNT_BIG(b.page_id) / 128
FROM src
INNER JOIN sys.dm_os_buffer_descriptors AS b
    ON src.allocation_unit_id = b.allocation_unit_id
WHERE b.database_id = DB_ID()
GROUP BY src.[Object], src.[Type], src.[Index], src.Index_Type
ORDER BY buffer_pages DESC;


7. Memory usage combined with index usage statistics.

WITH src AS
(
    SELECT
        [Object] = o.name,
        [Type] = o.type_desc,
        [Index] = COALESCE(i.name, ''),
        [Index_Type] = i.type_desc,
        p.[object_id],
        p.index_id,
        au.allocation_unit_id
    FROM sys.partitions AS p
    INNER JOIN sys.allocation_units AS au
        ON p.hobt_id = au.container_id
    INNER JOIN sys.objects AS o
        ON p.[object_id] = o.[object_id]
    INNER JOIN sys.indexes AS i
        ON o.[object_id] = i.[object_id]
        AND p.index_id = i.index_id
    WHERE au.[type] IN (1,2,3)
        AND o.is_ms_shipped = 0
)
SELECT
    src.[Object],
    src.[Type],
    src.[Index],
    src.Index_Type,
    i.type_desc,
    buffer_pages = COUNT_BIG(b.page_id),
    buffer_mb = COUNT_BIG(b.page_id) / 128,
    src.object_id,
    [TotalUsage] = IsNull(user_seeks, 0) + IsNull(user_scans, 0) + IsNull(user_lookups, 0),
    [UserSeeks] = IsNull(user_seeks, 0),
    [UserScans] = IsNull(user_scans, 0),
    [UserLookups] = IsNull(user_lookups, 0),
    [UserUpdates] = IsNull(user_updates, 0)
FROM src
INNER JOIN sys.dm_os_buffer_descriptors AS b
    ON src.allocation_unit_id = b.allocation_unit_id
LEFT OUTER JOIN sys.indexes i
    ON src.object_id = i.object_id
    AND src.Index_Type = i.type_desc
    AND src.[Index] = i.name
LEFT OUTER JOIN sys.dm_db_index_usage_stats s
    ON s.object_id = i.object_id
    AND s.index_id = i.index_id
WHERE b.database_id = DB_ID()
    --AND src.[Index] = 'icts_transaction_idx5'
GROUP BY
    src.[Object], src.[Type], src.[Index], src.Index_Type, i.type_desc, src.object_id,
    IsNull(user_seeks, 0) + IsNull(user_scans, 0) + IsNull(user_lookups, 0),
    IsNull(user_seeks, 0), IsNull(user_scans, 0), IsNull(user_lookups, 0), IsNull(user_updates, 0)
ORDER BY buffer_pages DESC;





8. Statistics staleness - shows statistics objects where more than 5% of rows have been modified since the last update.

SET NOCOUNT ON;

DECLARE @modPercentThreshold decimal(8,4) = 5.0;

SELECT
    [rowmodcounter].[modPercent],
    names.dbName + '.' + names.schemaName + '.' + names.tableName AS tableName,
    names.statsName,
    [sp].[last_updated] AS lastUpdated,
    [sp].[modification_counter] AS numberOfModifications,
    [sp].[rows] AS tableRowCount,
    [sp].[rows_sampled] AS rowsSampled,
    [s].[auto_created] AS isAnAutoCreatedStats,
    sampleRate = (1.0 * sp.rows_sampled / sp.rows) * 100,
    'UPDATE STATISTICS ' + names.schemaName + '.' + names.tableName
        + '(' + names.statsName + ')' AS sqlStmt
FROM [sys].[stats] s
CROSS APPLY [sys].[dm_db_stats_properties]([s].[object_id], [s].[stats_id]) sp
INNER JOIN [sys].[tables] t
    ON [s].[object_id] = [t].[object_id]
CROSS APPLY (SELECT (1.0 * [sp].[modification_counter] / NULLIF([sp].[rows], 0)) * 100) AS rowmodcounter(modPercent)
CROSS APPLY (SELECT DB_NAME() AS dbName,
                    SCHEMA_NAME(t.schema_id) AS schemaName,
                    t.[name] AS tableName,
                    s.[name] AS statsName) AS names
WHERE OBJECTPROPERTY(s.[object_id], 'IsMSShipped') = 0
    AND [rowmodcounter].[modPercent] > @modPercentThreshold
ORDER BY [rowmodcounter].[modPercent] DESC;
GO
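The core formula in the query above is modPercent = modification_counter / rows * 100, with a NULLIF guard against empty tables. A sketch of the same decision logic (the row counts below are invented):

```python
def stats_mod_percent(modification_counter, rows):
    """Same formula as the T-SQL above: percentage of rows modified
    since the statistics object was last updated."""
    if not rows:
        return None  # mirrors NULLIF(rows, 0) in the query
    return 100.0 * modification_counter / rows

def needs_update(modification_counter, rows, threshold=5.0):
    """True when the statistics cross the staleness threshold."""
    pct = stats_mod_percent(modification_counter, rows)
    return pct is not None and pct > threshold

print(needs_update(600, 10000))  # 6% of rows modified -> True
print(needs_update(300, 10000))  # 3% of rows modified -> False
```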



Thursday, April 21, 2016

How to use Log Parser

1. Download.
2. Querying IIS:

a) checking what fields can be queried:

"C:\Program Files (x86)\Log Parser 2.2\logparser.exe" -h -i:w3c file.LOG -nskiplines:3

definitions of fields (IIS7)

b) counting requests per client (source) address:

C:\inetpub\logs\LogFiles\W3SVC1>"C:\Program Files (x86)\Log Parser 2.2\logparser.exe" "SELECT c-ip,count(c-ip) from u_ex160421.log group by c-ip"  -i:w3c -rtp:-1

c) counting requests per client (source) address in April 2016:

C:\inetpub\logs\LogFiles\W3SVC1>"C:\Program Files (x86)\Log Parser 2.2\logparser.exe" "SELECT c-ip,count(c-ip) from u_ex1604*.log group by c-ip"  -i:w3c -rtp:-1

d) counting requests per client (source) address in April 2016, grouped by date:

C:\inetpub\logs\LogFiles\W3SVC3>"C:\Program Files (x86)\Log Parser 2.2\logparser.exe" "SELECT date,count(c-ip) from u_ex1604*.log group by date"  -i:w3c -rtp:-1


3. Resources:
 a) examples
 b) extension of logparser

4. Querying exchange logs:

a) checking what fields can be queried:

D:\Microsoft\Exchange Server\V14\TransportRoles\Logs\MessageTracking>"C:\Program Files (x86)\Log Parser 2.2\logparser.exe" -h -i:CSV file.LOG -nskiplines:4

b) counting hits by connector-id:

"C:\Program Files (x86)\Log Parser 2.2\logparser.exe" "SELECT connector-id,Count(*) as Hits from MSGTRK20160102-5.LOG  GROUP BY connector-id ORDER BY Hits DESC" -i:CSV -nSkipLines:4 -rtp:-1


Wednesday, April 20, 2016

Using Git




1. Global config:

git config --global help.autocorrect 1
git config --global core.editor notepad
git config --global core.autocrlf true  # valid on Windows OS
git config core.autocrlf true  # per-repository, use to keep end-of-line formatting
git config --global user.name "Billy Everyteen"
git config --global user.email "Someone@gmail.com"
git config --global --list #verify config

#Windows:
git config --global core.autocrlf true
#Linux
git config --global core.autocrlf input


2. Basics

git init #Creating a new local repository

echo "Hello World" > run.cmd #add content to file
git status 

git add run.cmd  # or:
git add -u # stages changes to already tracked files (will not add new files)
git add -A # stages all changes, including new files
git status 

git commit -m "First Commit"


git log # history of commits

git diff HEAD~1..HEAD 
git diff HEAD~1.. # reviewing differences between the latest commit and the previous one (the number controls which version to compare)

git checkout file.txt # restores the previous version of file.txt
git reset --hard # discards all local changes, bringing the working tree back to HEAD

git reset --soft HEAD~1 # undoes the last commit but keeps its changes staged
git reset --hard HEAD~1 # deletes the last commit together with its changes
use .gitignore to ignore files

git clone https://github.com/jquery/jquery.git #clones project
cd jquery
git log
git log --oneline # single line per commit
git log --oneline --graph # single line per commit + merges
git shortlog # log grouped by author, alphabetical
git shortlog -sne # number of commits per author, with email

git show HEAD # shows the last commit and its changes
git show HEAD~2 # shows the commit two before HEAD

git branch # lists local branches
git branch -r # lists remote branches
git tag # shows tags (versions of code)

git remote add origin https://github.com/someone/project.git # registers the remote to upload to
git push -u origin master #uploads content

NOTE: use proxy server if needed:

set HTTP_PROXY=http://IP:port
set HTTPS_PROXY=https://IP:port