First @SLODUG Meeting in 2017

We had our first SQL Server User Group SLODUG meeting of the year. The event took place at Microsoft Slovenija on 9 February 2017, with around 15 people showing up. Along with the 15 people we had 8 pizzas and some 20 beers 🙂

Scheduled were two topics:

17:15 – 18:00 Let’s use Microsoft R Server 9 for entering Kaggle competition (Tomaž Kaštrun)
18:10 – 19:30 Forecasting with MS BI Suite (Dejan Sarka)

with two beautiful presenters:

[Photo: slodug20170209]

Not to mention outstanding statistics about presenters:

Average gender: Male
Maximum eye color: Yes
Beer moving average: coffee

And a screenshot from the SLODUG blog:

[Screenshot of the SLODUG blog announcement]

Keep the community spirit up!

R and SQL Server articles

In the past couple of months, I have prepared several articles on R and SQL Server that have been published on SQL Server Central.

The idea was to have a series of articles covering everything from an introduction to R, through the basics of R Server, to some practical cases of using R with SQL Server.

1) Using Microsoft R in Enterprise Environments

The article covers the concepts of Microsoft R Server, where and how to start with Microsoft R in an enterprise environment, and answers the most common concerns people might have when introducing the R language into a corporation.


Link to article: http://www.sqlservercentral.com/articles/R+Language/140422/

 

2) Introduction to Microsoft R Services in SQL Server 2016

The integration and architecture of Microsoft R Services is the main focus of this article. It outlines the different flavors of R (Open, Client, Server, Services, Hadoop, etc.), how to deal with installation, and gives a basic overview and explanation of the extended stored procedure SP_EXECUTE_EXTERNAL_SCRIPT.


Link to article: http://www.sqlservercentral.com/articles/Microsoft/145393/

 

3) Installing R packages in SQL Server R Services

Expand the functionality of R by adding new packages. The article covers several ways to install and add packages to your R Services environment.


Link to article: http://www.sqlservercentral.com/articles/R+Package/145571/
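
To give a taste of what the article covers, one common approach is to install a package directly into the instance library from an R session running with administrative permissions. A minimal sketch (the library path below is an assumption and differs per instance name and SQL Server version):

# adjust the path to your own instance library (example path for a SQL Server 2016 default instance)
lib.SQL <- "C:/Program Files/Microsoft SQL Server/MSSQL13.MSSQLSERVER/R_SERVICES/library"

# install a CRAN package into the instance library, so that
# sp_execute_external_script can later load it with library(forecast)
install.packages("forecast", lib = lib.SQL)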

 

4) Using SQL Server and R Services for analyzing Sales data

Providing use cases for analyzing sales data was the focus of this article, with the goal of showing readers how to operationalize and bring R code into use in any enterprise environment, small or big.


Link to article: http://www.sqlservercentral.com/articles/R+Services/145649/

 

5) Using Power BI and SSRS for visualizing SQL Server and R data

Visualizing the data for any use case is an important aspect of understanding data insights. The article covers Power BI and SSRS visualizations and how to embed R code in both tools.


Link to article: http://www.sqlservercentral.com/articles/R+Language/151358/

6) Using SQL Server and R Services for analyzing DBA Tasks

Broadening the use of Microsoft R for DBA tasks was the main goal of this article. Using a simulation of disk usage, it shows an R example of how to switch from monitoring disk space usage to predicting it. Clustering executed queries to narrow down performance issues and visualizing Query Store information with a heatmap are also introduced.


Link to article: http://www.sqlservercentral.com/articles/R+Language/151405/

 

More articles will follow, so stick around.

Happy R-SQLing!

 

 

RevoScaleR package for Microsoft R

The RevoScaleR package for the R language is a package for scalable, distributed and parallel computation, available with Microsoft R Server (and in-database R Services). It solves many of the limitations that the R language faces when run on a client machine. The RevoScaleR package addresses several of these issues:

  • memory-bound data access model -> the dataset can be bigger than the available RAM
  • lack of parallel computation -> offers distributed and parallel computation
  • data movement -> no need to move data, thanks to the ability to set the compute context (see the sketch after this list)
  • duplication costs -> with the compute context set and different R flavors (Open, Client or Server), data resides in one place, making maintenance cheaper; no duplication across locations is needed
  • governance and provenance -> RevoScaleR offers oversight of both through settings and additional services in R Server
  • hybrid topologies and agile development -> on-premises + cloud + client combinations allow hybrid environments for faster time to production
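
As an illustration of the compute context mentioned in the list above, here is a minimal sketch (the server and database names are placeholders, not from the original post); with the context set to the server, rx* computations run where the data lives:

# a sketch: push computations to SQL Server instead of pulling data to the client
sqlConnString <- "Driver=SQL Server;Server=MYSERVER;Database=WideWorldImportersDW;Trusted_Connection=True"
sqlCompute    <- RxInSqlServer(connectionString = sqlConnString)

rxSetComputeContext(sqlCompute)   # rx* functions now execute inside SQL Server
# ... run rxSummary(), rxLinMod(), ... here ...
rxSetComputeContext("local")      # switch back to local execution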

 

Before continuing, make sure you have the RevoScaleR package installed in your R environment. To check which computational functions are available within this package, let us first run the following:

RevoInfo <-packageVersion("RevoScaleR")
RevoInfo

to see the version of the RevoScaleR package. In this case it is:

[1] '9.0.1'

Now we will run a command to get the list of all functions:

revoScaleR_objects <- ls("package:RevoScaleR")
revoScaleR_objects

Here is the list:

[Screenshot: list of RevoScaleR functions in RStudio]

All RevoScaleR functions have the prefix rx or Rx, which makes it much easier to distinguish them from functions available in other, similar packages – for example rxKmeans and kmeans.

find("rxKmeans")
find("kmeans")

The results show the name of the package where each function is defined:

> find("rxKmeans")
[1] "package:RevoScaleR"
> find("kmeans")
[1] "package:stats"

The output of the revoScaleR_objects call shows some 200 computational functions, but I will focus on only a couple of them.

The RevoScaleR package and its computational functions were designed for parallel computation with no memory limitation, mainly because the package introduces its own file format, called XDF (eXternal Data Frame). XDF was designed for fast processing of smaller chunks of data, and it gains its efficiency by loading only the chunks that are needed into RAM, one at a time, when reading and writing XDF data. This removes the RAM size limitation, and computations also run much faster because the algorithms are written in C++ rather than in an interpreted language. The data scientist still makes a single R call, but R uses its distributed computing component to determine how many cores, sockets and threads are available, and then launches a smaller portion of the load into each thread, analyzing the data a bit at a time. With XDF, data is retrieved many times, but since the file is 5-10 times smaller (as I have already shown in previous blog posts when comparing it to *.txt or *.csv files) and the data is written and stored the same way it was extracted from memory, computations are faster: no parsing of data chunks is required, and the storage layout maximizes data retrieval speed.

Preparing and importing your data into the XDF format is an important part of achieving faster computation times. Download some sample data from the Revolution Analytics blog; I will be taking some AirOnTime data, a CSV file, from here.

The following functions will help you import a file from CSV into XDF format:

rxTextToXdf() – for importing data to .xdf format from a delimited text file or CSV.

rxDataStepXdf() – for transforming and subsetting variables and/or rows for data exploration and analysis.

With the following code:

setwd("C:/Users/Documents/33")
rxTextToXdf(inFile = "airOT201201.csv", outFile = "airOT201201.xdf",  
            stringsAsFactors = T, rowsPerRead = 200000)

I have now converted the CSV file into an XDF file in roughly 13 seconds.

[Screenshot: RStudio console output of the conversion]

and the files look like this:

[Screenshot: folder listing with airOT201201.csv and airOT201201.xdf]

The original 105 MB CSV becomes a 15 MB XDF file – 7 times smaller. For further information on data handling, a very nice blog post is available here.
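
rxDataStepXdf() was mentioned above but not shown; in recent RevoScaleR versions the same job is done by rxDataStep(). A small sketch, assuming the airOT201201.xdf file created above (the output file name is mine):

# keep only a few variables and only the rows with a positive departure delay
rxDataStep(inData = "airOT201201.xdf",
           outFile = "airOT201201_delayed.xdf",
           varsToKeep = c("DAY_OF_WEEK", "DEP_DELAY_NEW", "DISTANCE_GROUP"),
           rowSelection = DEP_DELAY_NEW > 0,
           overwrite = TRUE)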
Quick information on the data set can be obtained using:

rxGetInfo("airOT201201.xdf", getVarInfo = TRUE, numRows = 20)

[Screenshot: rxGetInfo output in RStudio]

but we can also use the following functions to explore and wrangle the data:

rxSummary(), rxCube(), rxCrossTabs() – summary statistics for columns, plus correlations or cross-tabulations between columns
rxHistogram() – plot a histogram of a column (variable)
rxLinePlot() – plot a line chart or scatterplot from an XDF file or from rxCube output
Running summary statistics for the column DAY_OF_WEEK:

rxSummary(~DAY_OF_WEEK, data="airOT201201.xdf")
#or for the whole dataset
rxSummary(~., data="airOT201201.xdf")

we see the execution time and the results of this statistic:
Rows Read: 200000, Total Rows Processed: 200000, Total Chunk Time: 0.007 seconds
Rows Read: 200000, Total Rows Processed: 400000, Total Chunk Time: 0.002 seconds
Rows Read: 86133, Total Rows Processed: 486133, Total Chunk Time: 0.002 seconds 
Computation time: 0.018 seconds.
Call:
rxSummary(formula = ~DAY_OF_WEEK, data = "airOT201201.xdf")

Summary Statistics Results for: ~DAY_OF_WEEK
Data: "airOT201201.xdf" (RxXdfData Data Source)
File name: airOT201201.xdf
Number of valid observations: 486133 
 
 Name        Mean     StdDev   Min Max ValidObs MissingObs
 DAY_OF_WEEK 3.852806 2.064557 1   7   486133   0
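
rxCrossTabs() was listed above but not demonstrated; a quick cross-tabulation sketch (F() converts a numeric column to a factor on the fly) could look like this:

# cross-tabulate day of week against distance group directly on the XDF file
rxCrossTabs(~ F(DAY_OF_WEEK) : F(DISTANCE_GROUP), data = "airOT201201.xdf")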
And now run rxHistogram for the selected column:
#histogram
rxHistogram(~DAY_OF_WEEK, data="airOT201201.xdf")

Rows Read: 200000, Total Rows Processed: 200000, Total Chunk Time: 0.007 seconds
Rows Read: 200000, Total Rows Processed: 400000, Total Chunk Time: 0.004 seconds
Rows Read: 86133, Total Rows Processed: 486133, Total Chunk Time: Less than .001 seconds 
Computation time: 0.019 seconds.
to get the results for the histogram:

[Histogram of DAY_OF_WEEK]

Some of the following prediction algorithms are available (and many more besides):

rxLinMod() – linear regression model for XDF files
rxLogit() – logistic regression model for XDF files
rxDTree() – classification and regression trees for XDF files
rxNaiveBayes() – Naive Bayes classifier for XDF files
rxGlm() – generalized linear models for XDF files
rxPredict() – computes predictions and residuals
Let’s create a slightly larger regression decision tree on our sample data, modeling departure delay by day of the week, actual elapsed time and distance group:

Air_DTree <- rxDTree(DEP_DELAY_NEW ~ DAY_OF_WEEK + ACTUAL_ELAPSED_TIME +
                     DISTANCE_GROUP, maxDepth = 3, minBucket = 30000,
                     data = "airOT201201.xdf")

Visualizing the tree data:

plotcp(rxAddInheritance(Air_DTree))
plot(rxAddInheritance(Air_DTree))
text(rxAddInheritance(Air_DTree))

[Plot: regression tree plotted via rxAddInheritance]

or you can use the RevoTreeView package, which is even smarter:

library(RevoTreeView)
plot(createTreeView(Air_DTree))

we can visualize the tree:

[Interactive decision tree rendered by RevoTreeView]

Of course, pruning and checking for over-fitting must also be done.
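
And since rxPredict() was listed above but not demonstrated, a minimal scoring sketch on the same data could look like this (the output file name is an assumption):

# score the XDF file with the fitted tree and write predictions to a new XDF file
rxPredict(Air_DTree,
          data = "airOT201201.xdf",
          outData = "airOT201201_pred.xdf",
          overwrite = TRUE)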

When comparing – for example – rxDTree to its open-source counterpart, performance is much better in favor of RevoScaleR. So if you have the ability to use the RevoScaleR package for computations on larger datasets, or when memory on your client machine might be an issue, use this package. It sure will make your life easier.

 

Happy R-SQLing.

Animated barplot and google map with R

It might happen that you will need an animated graph of some kind. For plotting demographic data and its changes through the years, plotting on Google Maps and then merging and converting the JPG files into an animated GIF will surely give a nice visualization effect.

Here is a sample of changes over a period of three years on a dataset about my home town – the graph can tell a little bit more than the numbers alone 🙂

[Animated GIF: demographic changes plotted on a Google map (ggmap_anim)]

So besides visualizations based on geographical data, plots from any other R package can be animated in a similar way.

Playing and goofing around, I created a sample dataset:

# dataset
d <- data.frame(val=c(2,3,4,3,4,3,4,3,2,1), 
                year=c(2015,2016,2017,2015,2016,2015,2015,2016,2017,2015)
                )

Once that is created, a function will do the trick:

setwd("C:/DataTK")
library(ggplot2)
library(dplyr)

plot.bar.2 <- function(df_plot, xvar, yvar, fill)
{
  # column names (xvar, yvar) are passed as strings, e.g. "year", "val"
  require(ggplot2)
  require(dplyr)

  # number of steps - based on unique values on the X-axis
  x_unique  <- sort(unique(df_plot[[xvar]]))   # xvar = "year"
  nof_steps <- length(x_unique)

  for (i in 1:nof_steps)
  {
    x <- x_unique[i]

    # rows up to the current step keep their original value ...
    d1 <- df_plot %>%
          arrange(.data[[xvar]]) %>%
          filter(.data[[xvar]] <= x) %>%
          mutate(new_val = .data[[yvar]]) %>%
          select(all_of(xvar), new_val)

    # ... rows after the current step are set to zero
    d2 <- df_plot %>%
          arrange(.data[[xvar]]) %>%
          filter(.data[[xvar]] > x) %>%
          mutate(new_val = 0) %>%
          select(all_of(xvar), new_val)

    dfinal <- data.frame(union_all(d1, d2))
    colnames(dfinal)[1] <- "x"
    colnames(dfinal)[2] <- "y"

    # inside a loop the plot has to be print()-ed to reach the png device
    name <- paste('barplot00', i, '.png', sep = "")
    png(name)
    print(ggplot(data = dfinal, aes(x = x, y = y, fill = x)) +
            geom_bar(stat = "identity") +
            guides(fill = FALSE))
    dev.off()
    rm(d1, d2, x, dfinal)
  }

  # stitch the frames into an animated gif (ImageMagick must be installed)
  system("magick -delay 150 -loop 0 *.png GeomBar_plot.gif")
  file.remove(list.files(pattern = ".png"))
  rm(x_unique, nof_steps)
}

plot.bar.2(d, "year", "val", "year")

 

A slightly more “interactive”, animated graph is created:

[Animated GIF: barplot]

In comparison to the static graph:

[Static bar plot: barplot.png]

In addition to the R code, the ImageMagick program needs to be installed on your machine as well. The speed, quality and many other parameters can be set when creating the animated GIF.
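
As an alternative to shelling out, the same GIF can be stitched together from within R using the magick package (a sketch, assuming the magick package is installed and the PNG frames have not been removed yet):

library(magick)

# read the PNG frames, join them and animate at roughly one frame per second
frames <- image_read(list.files(pattern = "barplot00.*png$"))
gif    <- image_animate(image_join(frames), fps = 1)
image_write(gif, "GeomBar_plot.gif")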

An animated GIF can also be included in your SSRS report, your SharePoint site or any other site – like my blog 🙂 – and it will stay animated. In Power BI, unfortunately, importing an animated GIF as a picture will not work.

As always, code is available at GitHub.

Happy R-coding!

Using R sp_execute_external_script with JSON

JSON became part of SQL Server in the same version as R. Both were highly anticipated by the community.

SQL Server has very powerful clauses for converting data to and from JSON when storing it into / reading it from the engine (FOR JSON, JSON_VALUE, etc.). And since JSON is gaining popularity for data exchange, I was curious to give it a try in combination with R.

I will simply convert a system table into a JSON array using the FOR JSON clause:

SELECT top 10 object_id  FROM sys.objects FOR JSON AUTO;

and it gives back the result:

[{"object_id":3},{"object_id":5},{"object_id":6},{"object_id":7},{"object_id":8},
{"object_id":9},{"object_id":17},{"object_id":18},{"object_id":19},{"object_id":20}]

And an sp_execute_external_script query without JSON would look like this:

EXECUTE sp_execute_external_script    
       @language = N'R'    
      ,@script=N'OutputDataSet <- InputDataSet'
      ,@input_data_1 = N'SELECT top 10 object_id  FROM sys.objects'
WITH RESULT SETS ((nr INT));

Now, let’s suppose we want to use the JSON result directly in T-SQL through sp_execute_external_script. Yes, imagine getting results from an API and wanting to push them immediately into R for analysis. A very straightforward R package for this is jsonlite (rjson is also available). The query would be as follows:

EXECUTE sp_execute_external_script    
       @language = N'R'    
      ,@script=N'library(jsonlite)
                OutputDataSet <- data.frame(fromJSON(InputDataSet))'
      ,@input_data_1 = N'SELECT top 10 object_id  FROM sys.objects FOR JSON AUTO'
WITH RESULT SETS ((nr INT));

Nope!

Msg 39004, Level 16, State 20, Line 15
A 'R' script error occurred during execution of 'sp_execute_external_script' 
with HRESULT 0x80004004.
Msg 39019, Level 16, State 1, Line 15
An external script error occurred: 
Error: Argument 'txt' must be a JSON string, URL or file.

So the argument ‘txt’ must be a JSON string, URL or file. Khm… a very “useful” error message. The problem is that data coming from T-SQL is stored and presented to the R environment (Launchpad) as a data.frame, because the data type passed to R is an array of objects. In R it would look something like this:

[Screenshot: the input data.frame as seen in RStudio]

Running this in a native R environment, we at least get an idea of where and how to tackle the problem. We need to convert the data.frame to a character string using toJSON and as.character, so that the final T-SQL query would look like this:

EXECUTE sp_execute_external_script    
       @language = N'R'    
      ,@script=N'
                library(jsonlite)
                js <- InputDataSet
                js2 <- as.character(toJSON(js))
                OutputDataSet <- data.frame(fromJSON(js2))'
      ,@input_data_1 = N'SELECT top 10 object_id  FROM sys.objects FOR JSON AUTO'
WITH RESULT SETS ((nr INT));

Now we get the correct results (just as if we had not used JSON):

[Screenshot: query results in SQL Server Management Studio]

So R is ready for JSON and JSON is also ready for R.

Happy R+JSON+SQLing!

Clustering executed SQL Server queries using R

When query execution performance analysis is to be done, there are many ways to find which queries might cause unwanted load or stalls on the server.

To encourage the DBA community to start taking advantage of the R language and the world of data science, I have created a demo to show how statistics on numerous queries can be stored for later analysis. The demo uses an unsupervised (or undirected) method for grouping similar query statistics (and hence queries), making it easier to track down where and which queries might be potential system stoppers.

Before we run some queries to generate execution statistics, we clean up objects from previous runs, clear the plan cache and prepare to store statistics from sys.dm_exec_query_stats.

-- drop statistics table
DROP TABLE IF EXISTS  query_stats_LOG_2;
DROP PROCEDURE IF EXISTS AbtQry;
DROP PROCEDURE IF EXISTS SalQry;
DROP PROCEDURE IF EXISTS PrsQry;
DROP PROCEDURE IF EXISTS OrdQry;
DROP PROCEDURE IF EXISTS PurQry;
GO

-- Clean all the stuff only for the selected db
DECLARE @dbid INTEGER
SELECT @dbid = [dbid] 
FROM master..sysdatabases 
WHERE name = 'WideWorldImportersDW'
DBCC FLUSHPROCINDB (@dbid);
GO

-- for better sample, check that you have Query store turned off
ALTER DATABASE WideWorldImportersDW SET  QUERY_STORE = OFF;
GO

Next, we create a few stored procedures that will serve as our (fake) workload:

USE WideWorldImportersDW;
GO

-- CREATE Procedures 

CREATE PROCEDURE AbtQry
(@AMNT AS INTEGER) 
AS

-- an arbitrary query
SELECT 
  cu.[Customer Key] AS CustomerKey
  ,cu.Customer
  ,ci.[City Key] AS CityKey
  ,ci.City
  ,ci.[State Province] AS StateProvince
  ,ci.[Sales Territory] AS SalesTeritory
  ,d.Date
  ,d.[Calendar Month Label] AS CalendarMonth
  ,s.[Stock Item Key] AS StockItemKey
  ,s.[Stock Item] AS Product
  ,s.Color
  ,e.[Employee Key] AS EmployeeKey
  ,e.Employee
  ,f.Quantity
  ,f.[Total Excluding Tax] AS TotalAmount
  ,f.Profit
 
FROM Fact.Sale AS f
  INNER JOIN Dimension.Customer AS cu
    ON f.[Customer Key] = cu.[Customer Key]
  INNER JOIN Dimension.City AS ci
    ON f.[City Key] = ci.[City Key]
  INNER JOIN Dimension.[Stock Item] AS s
    ON f.[Stock Item Key] = s.[Stock Item Key]
  INNER JOIN Dimension.Employee AS e
    ON f.[Salesperson Key] = e.[Employee Key]
  INNER JOIN Dimension.Date AS d
    ON f.[Delivery Date Key] = d.Date
WHERE
    f.[Total Excluding Tax] BETWEEN 10 AND @AMNT;
GO

CREATE PROCEDURE SalQry
(@Q1 AS INTEGER
,@Q2 AS INTEGER)

AS
-- FactSales Query
SELECT * FROM Fact.Sale
WHERE
    Quantity BETWEEN @Q1 AND @Q2;
GO

CREATE PROCEDURE PrsQry
(@CID AS INTEGER )
AS

-- Person Query
SELECT * 
    FROM [Dimension].[Customer]
    WHERE [Buying Group] <> 'Tailspin Toys' 
    /* OR [WWI Customer ID] > 500 */
    AND [WWI Customer ID] BETWEEN 400 AND  @CID
ORDER BY [Customer],[Bill To Customer];
GO


CREATE PROCEDURE OrdQry
(@CK AS INTEGER)
AS

-- FactSales Query
SELECT 
    * 
    FROM [Fact].[Order] AS o
    INNER JOIN [Fact].[Purchase] AS p 
    ON o.[Order Key] = p.[WWI Purchase Order ID]
WHERE
    o.[Customer Key] = @CK;
GO

CREATE PROCEDURE PurQry
(@Date AS SMALLDATETIME)
AS

-- FactPurchase Query
SELECT *
    FROM [Fact].[Purchase]
        WHERE
        [Date Key] = @Date;
    --[Date KEy] = '2015/01/01'
GO

Now we run the procedures a couple of times:

DECLARE @ra DECIMAL(10,2)
SET @ra = RAND()
SELECT CAST(@ra*10 AS INT)

IF @ra  < 0.3333
    BEGIN
       -- SELECT 'RAND < 0.333', @ra
       DECLARE @AMNT_i1 INT = 100*CAST(@ra*10 AS INT)
       EXECUTE AbtQry @AMNT = @AMNT_i1
       EXECUTE PurQry @DAte = '2015/10/01'
       EXECUTE PrsQry @CID = 480
       EXECUTE OrdQry @CK = 0
       DECLARE @Q1_i1 INT = 1*CAST(@ra*10 AS INT)
       DECLARE @Q2_i1 INT = 20*CAST(@ra*10 AS INT)
       EXECUTE SalQry @Q1 = @Q1_i1, @Q2 = @Q2_i1

    END
ELSE 
    IF @ra  > 0.3333 AND @ra < 0.6667
    BEGIN
        -- SELECT 'RAND > 0.333 | < 0.6667', @ra
        DECLARE @AMNT_i2 INT = 500*CAST(@ra*10 AS INT)
        EXECUTE AbtQry @AMNT = @AMNT_i2
        EXECUTE PurQry @DAte = '2016/04/29'
        EXECUTE PrsQry @CID = 500
        EXECUTE OrdQry @CK = 207
        DECLARE @Q1_i2 INT = 2*CAST(@ra*10 AS INT)
        DECLARE @Q2_i2 INT = 10*CAST(@ra*10 AS INT)
        EXECUTE SalQry @Q1 = @Q1_i2, @Q2 = @Q2_i2

    END
ELSE
    BEGIN
        -- SELECT 'RAND > 0.6667', @ra
        DECLARE @AMNT_i3 INT = 800*CAST(@ra*10 AS INT)
        EXECUTE AbtQry @AMNT = @AMNT_i3
        EXECUTE PurQry @DAte = '2015/08/13'
        EXECUTE PrsQry @CID = 520
        EXECUTE OrdQry @CK = 5
        DECLARE @Q2_i3 INT = 60*CAST(@ra*10 AS INT)
        EXECUTE SalQry @Q1 = 25, @Q2 = @Q2_i3
    END
GO 10

With this workload executed, we run the logging T-SQL (query source):

SELECT
    (total_logical_reads + total_logical_writes) AS total_logical_io
    ,(total_logical_reads / execution_count) AS avg_logical_reads
    ,(total_logical_writes / execution_count) AS avg_logical_writes
    ,(total_physical_reads / execution_count) AS avg_phys_reads
    ,substring(st.text, (qs.statement_start_offset / 2) + 1,
      ((CASE qs.statement_end_offset WHEN -1 THEN datalength(st.text)
        ELSE qs.statement_end_offset END - qs.statement_start_offset) / 2) + 1)
AS statement_text
    ,*
INTO query_stats_LOG_2
FROM
        sys.dm_exec_query_stats AS qs
    CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY total_logical_io DESC

Now that we have gathered some test data, we can proceed to do clustering analysis.

Since I don’t know how many clusters there might be, and I can imagine a DBA would be pretty much clueless as well, I will explore the number of clusters using hierarchical clustering. Here is the R code:

library(RODBC)
myconn <-odbcDriverConnect("driver={SQL Server};Server=SICN-KASTRUN;
database=WideWorldImportersDW;trusted_connection=true")

query.data <- sqlQuery(myconn, "
                     SELECT 
                                 [total_logical_io]
                      ,[avg_logical_reads]
                      ,[avg_phys_reads]
                      ,execution_count
                      ,[total_physical_reads]
                      ,[total_elapsed_time]
                      ,total_dop
                      ,[text]
                      ,CASE WHEN LEFT([text],70) LIKE '%AbtQry%' THEN 'AbtQry'
                       WHEN LEFT([text],70) LIKE '%OrdQry%' THEN 'OrdQry'
                       WHEN LEFT([text],70) LIKE '%PrsQry%' THEN 'PrsQry'
                       WHEN LEFT([text],70) LIKE '%SalQry%' THEN 'SalQry'
                       WHEN LEFT([text],70) LIKE '%PurQry%' THEN 'PurQry'
                       WHEN LEFT([text],70) LIKE '%@BatchID%' THEN 'System'
                                  ELSE 'Others' END AS label_graph 
                      FROM query_stats_LOG_2")

close(myconn) 
library(cluster)

#qd <- query.data[,c(1,2,3,5,6)]
qd <- query.data[,c(1,2,6)]

## hierarchical clustering
qd.use <- query.data[,c(1,2,6)]
medians <- apply(qd.use,2,median)
#mads <- apply(qd.use,2,mad)
qd.use <- scale(qd.use,center=medians) #,scale=mads)

#calculate distances
query.dist <- dist(qd.use)

# hierarchical clustering
query.hclust <- hclust(query.dist)

# plotting solution
op <- par(bg = "lightblue")
plot(query.hclust,labels=query.data$label_graph,
main='Query Hierarchical Clustering', ylab = 'Distance', 
xlab = ' ', hang = -1, sub = "" )
# in addition to circle queries within cluster
rect.hclust(query.hclust, k=3, border="DarkRed")        

And it produces the following plot:

[Dendrogram: Query Hierarchical Clustering, with three clusters outlined]

The graph itself is self-explanatory: based on the gathered statistics and the queries executed against the system, you receive groups of queries where your DBA can easily and quickly track down what might be causing issues. I added labels to the queries to make the graph neater, but that is up to you.
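
If you prefer numbers to dendrograms, the cluster membership of each logged query can also be listed with a small addition to the code above (a sketch):

# cluster membership per logged query, cross-tabulated with the query labels
query.clusters <- cutree(query.hclust, k = 3)
table(query.clusters, query.data$label_graph)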

I have also changed the type to “triangle” to get the following plot:

[Dendrogram drawn with triangle-type branches]

Both show the same information.

So the R code says there are three clusters, and I used medians to center the data. In addition, I have also tested the result with Partitioning Around Medoids (a partitioning approach, as opposed to hierarchical clustering), and both techniques yield clean clusters.

[Screenshot: PAM clustering results in RStudio]
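
For the curious, a minimal Partitioning Around Medoids sketch on the same scaled data could look like this (k = 3 is taken over from the hierarchical result):

library(cluster)

# partition the scaled query statistics into 3 clusters around medoids
query.pam <- pam(qd.use, k = 3)

# cluster assignment per query and a silhouette plot to judge cluster quality
table(query.pam$clustering, query.data$label_graph)
plot(silhouette(query.pam))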

Also, the data sample is relatively small, but you are very welcome to test this idea in your own environment. Just go easy with the FREEPROCCACHE and FLUSHPROCINDB commands!

This blog post was meant as a teaser, to gather opinions from readers. A couple of additional approaches will be part of two articles that I am currently working on.

As always, the code is available on GitHub.

Happy SQLinRg!

Quick R Christmas tree

Dear readers,

Wishing you all the best in 2017!

Here is a little R code:

library(ggplot2)

dataset <- c(9,8,7,6,5,4,4.5,4,3,2,1,0.8,0.2)

df <- data.frame(group = rep(c("l","d"), 
                             each=length(dataset)),
                 x = 1:length(dataset),
                 y = c(dataset, dataset*-1))

ggplot() +
  geom_bar(data = df, aes(x=x, y=y),stat = "identity", fill = '#00A650',width=.8) +
  coord_flip() + ggtitle("Happy New Year 2017") + theme_void()

And the result….

[Plot: green barplot shaped like a Christmas tree]

 

See you in 2017!