Passing two SQL queries to sp_execute_external_script

Recently, I got a question on one of my previous blog posts: is it possible to pass two queries in the same run as arguments to the external procedure sp_execute_external_script?

Some of the arguments of the procedure sp_execute_external_script are enumerated. This is the case for the input dataset, and as the name of the argument @input_data_1 suggests, one can easily (and quite reasonably) assume that there is also an @input_data_2 argument, and so on. Unfortunately, this is not the case. The external procedure can accept only one T-SQL dataset, passed through this parameter.

There are many reasons for that; one is the cost of sending several datasets to the external process and back. As a side effect, this forces the user to rethink and pre-prepare the dataset (meaning, do all the data munging beforehand) prior to sending it into the external procedure.

But there are workarounds for passing an additional query (or queries) to sp_execute_external_script. I am not advocating this, and I strongly disagree with such usage, but here it is.

First, I will create two small datasets using T-SQL:

USE SQLR;
GO

DROP TABLE IF EXISTS dataset;
GO

CREATE TABLE dataset
(ID INT IDENTITY(1,1) NOT NULL
,v1 INT
,v2 INT
CONSTRAINT pk_dataset PRIMARY KEY (id)
)

SET NOCOUNT ON;
GO

INSERT INTO dataset(v1,v2)
SELECT TOP 1
 (SELECT TOP 1 number FROM master..spt_values WHERE type IN ('EOB') ORDER BY NEWID()) AS V1
,(SELECT TOP 1 number FROM master..spt_values WHERE type IN ('EOD') ORDER BY NEWID()) AS v2
FROM master..spt_values
GO 50

This dataset will be used directly in the @input_data_1 argument. The next one will be accessed through R code:

CREATE TABLE external_dataset
(ID INT IDENTITY(1,1) NOT NULL
,v1 INT
CONSTRAINT pk_external_dataset PRIMARY KEY (id)
)

SET NOCOUNT ON;
GO

INSERT INTO external_dataset(v1)
SELECT TOP 1
 (SELECT TOP 1 number FROM master..spt_values WHERE type IN ('EOB') ORDER BY NEWID()) AS V1
FROM master..spt_values
GO 50

Normally, one would use a single dataset, like this:

EXEC sp_execute_external_script
     @language = N'R'
    ,@script = N'OutputDataSet <- data.frame(MySet);'
    ,@input_data_1 = N'SELECT TOP 5 v1, v2 FROM dataset;'
    ,@input_data_1_name = N'MySet'
WITH RESULT SETS
((
 Val1 INT
 ,Val2 INT
))

But by “injecting” an ODBC connection into the R code, we can allow the external procedure to reach back to your SQL Server and fetch an additional dataset.

This can be done as follows:

EXECUTE AS USER = 'RR'; 
GO

DECLARE @Rscript NVARCHAR(MAX)
SET @Rscript = '
   library(RODBC)
   myconn <-odbcDriverConnect("driver={SQL Server};
         Server=SICN-KASTRUN;database=SQLR;uid=RR;pwd=Read!2$16")
  External_source <- sqlQuery(myconn, "SELECT v1 AS v3 
                    FROM external_dataset")
  close(myconn) 
  Myset <- data.frame(MySet)
   #Merge both datasets
   mergeDataSet <- data.frame(cbind(Myset, External_source));'

EXEC sp_execute_external_script
    @language = N'R'
   ,@script = @Rscript
   ,@input_data_1 = N'SELECT v1, v2 FROM dataset;'
   ,@input_data_1_name = N'MySet'
   ,@output_data_1_name = N'mergeDataSet'
WITH RESULT SETS
((
    Val1 INT
   ,Val2 INT
   ,Val3 INT
))

REVERT;
GO

And the result is the two datasets merged together, three columns in total:

(Screenshot: the merged result set with three columns)

which correspond to two datasets:

-- Check the results!
SELECT * FROM dataset
SELECT * FROM external_dataset

There are, as already mentioned, several arguments against this approach, and I would not recommend it. Some are:

  • validating and keeping R code in one place
  • performance issues
  • additional costs of data transferring
  • using ODBC connectors
  • installing additional R packages (in my case RODBC package)
  • keeping different datasets in one place
  • security issues
  • additional login/user settings
  • firewall inbound/outbound rules setting

This, of course, can also be achieved with the *.XDF file format, if the files are stored locally or on the server.
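If the second dataset is already stored as an *.XDF file, the R script can read it with RevoScaleR instead of opening an ODBC connection back to the server. A minimal sketch of the R part only – the file path and the column contents are my assumptions:

library(RevoScaleR)

# Hypothetical path to an XDF file holding the second dataset
xdf_file <- RxXdfData("C:\\Temp\\external_dataset.xdf")

# Read the XDF into a data frame and bind it to the dataset passed via @input_data_1
External_source <- rxXdfToDataFrame(xdf_file)
mergeDataSet <- data.frame(cbind(MySet, External_source))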

As always, code is available at Github.

Happy R-SQLing! 🙂


Graph Database with T-SQL

SQL Server 2017 will bring users a graph database. A graph database is a type of database that uses graph structures – edges (or relationships) and nodes (or vertices) with features (also called properties) – to store data. Relationships between nodes go through edges, and the features hold the values that represent the strength or direction of an edge. Many such links can be retrieved easily with a single operation (in comparison to T-SQL, where such queries can quickly become complex).

The links themselves are also stored in the database, and when querying the graph you join over them to collect the related data. Hierarchical structures and complex social structures in particular can be stored and queried much more easily than in a “relational” database.

Network databases and graph databases are similar, but a network database might struggle when dealing with longer chains of edges.

When to use a graph/network database should be obvious: when you have hierarchical data, or when your data is complex with many many-to-many relations that would make a relational database hard to query.

(Figure: a small labeled undirected network)

 

To put the graph database to the test, I took a bunch of emails from a particular SQL Server MVP distribution list (content will not be shown and all names are anonymized). From my Gmail account, I downloaded some 90 MiB of emails in mbox file format. With some Python scripting, only the FROM and SUBJECT fields were extracted:

import csv
import mailbox
from email.header import decode_header

# infile is the downloaded mbox file, outfile the target CSV (both paths are placeholders)
with open(outfile, 'w', newline='') as csv_file:
    writer = csv.writer(csv_file)
    writer.writerow(['from', 'subject'])
    for index, message in enumerate(mailbox.mbox(infile)):
        row = [
            message['from'].strip('>').split('<')[-1],  # keep only the e-mail address
            decode_header(message['subject'])[0][0], "|"
        ]
        writer.writerow(row)

Up to this point, I have extracted a CSV file with only the from and subject fields, looking like this:

(Screenshot: the extracted from/subject fields in the CSV)

After importing the CSV file to SQL Server database, I have started to prepare my graph database structures:

CREATE TABLE MVP_DataPlatform
(
 MVP_ID INT IDENTITY(1,1) NOT NULL
 ,MVP_Name VARCHAR(100) NULL
 ,MVP_Email VARCHAR(100) NOT NULL
) AS NODE
GO

CREATE TABLE MVP_Topics
(
 Topic_ID INT IDENTITY(1,1) NOT NULL
 ,Title NVARCHAR(250) NOT NULL
) AS NODE
GO

CREATE TABLE Initiated AS EDGE;
CREATE TABLE Replied AS EDGE;

And prepare the data with:

with mails 
as
(
SELECT 
 froms 
 ,REPLACE(SUBSTRING(froms, 1, CASE CHARINDEX(';', froms) 
  WHEN 0 THEN LEN(froms) ELSE CHARINDEX(';',froms)-1 END),',','') AS from_MVP
 ,REPLACE(SUBSTRING(froms, CHARINDEX(';',froms) + 1, 1000),',|','') AS title

FROM [dbo].[clean_mail5_2] 
)
SELECT 
 from_MVP
 ,SUBSTRING(title,2, LEN(title)) as title
 ,CASE WHEN (
 SUBSTRING(title,1, 10) LIKE '%RE%' 
 OR SUBSTRING(title,1, 10) LIKE '%AW%'
 OR SUBSTRING(title,1, 10) LIKE '%FWD%'
 OR SUBSTRING(title,1, 10) LIKE '%OT%'
 ) THEN 0 ELSE 1 END MVP_start
INTO MVP_Mails
FROM mails
 WHERE
 from_MVP like '%@%'

Populating the nodes (vertices) should be an easy task:

INSERT INTO MVP_DataPlatform (MVP_Name, MVP_Email)
SELECT 
 SUBSTRING(from_MVP, 1, CHARINDEX('@',from_MVP)-1) AS MVP_NAME
 ,from_MVP AS MVP_Email
FROM MVP_Mails
GROUP BY 
 SUBSTRING(from_MVP, 1, CHARINDEX('@',from_MVP)-1)
 ,from_MVP;
GO

INSERT INTO MVP_Topics (Title)
SELECT 
 Title
FROM MVP_Mails
GROUP BY 
 title;
GO

But populating the edges (relationships) depends on how you want to define your graph.

I have created two edge tables: Replied and Initiated. For Replied, the logic is the following:

  1. AB -> send a mail to distribution list =  AB -> AB
  2. CD -> replies to AB  = CD -> AB
  3. EF -> replies to CD (and AB) = EF -> CD
  4. and so on.

For Initiated, the logic is slightly different:

  1. AB -> send a mail to distribution list =  AB -> AB
  2. CD -> replies to AB  = CD -> AB
  3. EF -> replies to CD (and AB) = EF -> AB
  4. and so on…

So the difference is how I want to store the information at step 3, when EF replies. This rule is totally arbitrary and depends on your business question, hierarchy, or many-to-many rules. I could even decide to store the information at step 3 normalized, as ((EF -> CD) AND (EF -> AB)).

To store this information in the edge table, the following transformation is needed:

CREATE OR ALTER PROCEDURE INSERT_EDGE
AS
DECLARE @id1 INT;
DECLARE @id2 INT;
DECLARE @i INT;
DECLARE @hid INT;
DECLARE @START INT;

select 
 mail.from_MVP as mail_ID
,pl.MVP_id AS pl_ID
,mail.title
,mail.MVP_Start
,row_number() OVER (partition by title ORDER BY title, MVP_Start DESC) as rn_title
,row_number() Over (order by (select 1)) as rn_global
into #temp
from MVP_Mails as mail
JOIN MVP_DataPlatform AS Pl
ON mail.from_mvp = pl.mvp_email

SELECT @i = MAX(rn_global) FROM #TEMP

SET @hid=1;
WHILE @hid <= @i
BEGIN 
 SELECT @start = MVP_start FROM #temp WHERE rn_global = @hid
 PRINT @start
 SELECT @id1 = pl_id FROM #temp WHERE rn_global = @hid
 PRINT @id1
 IF @start = 1
 SELECT @id2 = pl_id FROM #temp WHERE rn_global = @hid+1
 IF @start = 0
 SET @id2 = @id1
 PRINT @id2

INSERT INTO replied VALUES ((SELECT $NODE_ID FROM MVP_DataPlatform WHERE MVP_ID = @id1), (SELECT $NODE_ID FROM MVP_DataPlatform WHERE MVP_ID = @id2));
 SET @hid = @hid + 1;
END

Executing this procedure:

EXECUTE INSERT_EDGE;

You should be getting a table with relations:

(Screenshot: the populated Replied edge table)

And now you can run queries against the graph – including ones over complex relations – and get the results relatively simply:

SELECT 
 MVP_1.MVP_Name 
 ,MVP_2.MVP_Name
 ,T.Title
 ,T.Topic_ID
FROM 
 MVP_DataPlatform AS MVP_1
 ,MVP_DataPlatform AS MVP_2
 ,Replied
 ,MVP_Topics AS T
WHERE 
 MATCH(MVP_1-(Replied)->MVP_2)
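For a quick visual check of the Replied edges, the same data can also be pulled into R and plotted with the igraph package. This is only a sketch from my side: the connection string and database name are placeholders, and resolving the $from_id/$to_id pseudo-columns back to MVP names is my own addition.

library(RODBC)
library(igraph)

# Placeholder connection; adjust server, database and credentials
myconn <- odbcDriverConnect("driver={SQL Server};Server=SICN-KASTRUN;database=MVPGraph;trusted_connection=true")

# Resolve edge pseudo-columns to readable MVP names
edges <- sqlQuery(myconn, "
   SELECT m1.MVP_Name AS from_name
         ,m2.MVP_Name AS to_name
   FROM Replied AS r
   JOIN MVP_DataPlatform AS m1 ON m1.$node_id = r.$from_id
   JOIN MVP_DataPlatform AS m2 ON m2.$node_id = r.$to_id")
close(myconn)

# Build and plot a directed who-replied-to-whom graph
g <- graph_from_data_frame(edges, directed = TRUE)
plot(g, vertex.size = 5, edge.arrow.size = 0.3)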

 

Sure, there are many other ways to do the same steps, and many other ways your data could be represented in a graph.

Happy T-SQL Querying! 🙂

 

Installing R packages with rxInstallPackages in Microsoft R Server

With the MicrosoftML package comes – in my opinion – a long-anticipated function for installing R packages for SQL Server and Microsoft R Server. And I am super happy.

Last year, in one of my previous blog posts, I showed how to install an R package from SSMS using sp_execute_external_script. Now, with the MicrosoftML package (part of Microsoft R Server 9.x and above), a new function is available that makes it easy to install packages – and a little bit more.

Code is relatively simple and straightforward:

USE SQLR;
GO

EXECUTE sp_execute_external_script
 @language = N'R'
 ,@script = N'

 packagesToInstall <- c("caret","tree","party")
 library(MicrosoftML)
 SqlServerCC <- RxInSqlServer(connectionString = "Driver=SQL Server;
Server=SICN-KASTRUN\\SQLSERVER2017C2;Database=SQLR;
Trusted_Connection=True;")
 rxInstallPackages(pkgs = packagesToInstall, owner = '''',
                   scope = "shared", computeContext = SqlServerCC);';
GO

This seems way too easy to be true, but it is. Make sure to do a couple of things prior to running this code:

  1. set the compute environment to where your packages are installed
  2. set up the correct permissions and access
  3. also check the TCP/IP protocol settings

In the rxInstallPackages function, use the computeContext parameter to point either to “Local” or to your “SqlServer” environment. You can also set the scope to shared or private (the difference: a package installed with shared scope can be used by different users across different databases, whereas private is per user). You can also specify the owner if you are running this command outside of the db_owner role.
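To verify afterwards which packages are actually present in a given compute context, RevoScaleR also ships companion package-management functions. A short sketch using rxInstalledPackages, assuming the SqlServerCC compute context created above (the exact shape of the returned value may differ between versions):

library(RevoScaleR)

# List packages visible in the SQL Server compute context created above
pkgs <- rxInstalledPackages(computeContext = SqlServerCC)

# Check that the freshly installed packages are among them
c("caret", "tree", "party") %in% pkgs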

Happy SQLR-ing!

Performance differences between RevoScaleR, ColumnStore Table and In-Memory OLTP Table

Running *.XDF files with RevoScaleR computational functions versus having the dataset available in a columnstore table or an In-Memory OLTP table will be the focus of this blog post.

For this test, I will use the airlines dataset, available here. I deliberately picked a 200 MB sample (of the 13 GB dataset) in order to properly test the differences and find what the best approach should be.

After unzipping the file, I will use the following T-SQL query to import the file into SQL Server.

With this example, you can import an XDF file directly into a SQL Server table (note that I first transform the CSV file into XDF and then import the XDF file into the SQL table):

-- must have a write permissions on folder: C:/Program Files/Microsoft SQL Server/130/R_SERVER/library/RevoScaleR/SampleData
DECLARE @RScript nvarchar(max)
SET @RScript = N'library(RevoScaleR)
                rxOptions(sampleDataDir = "C:/Program Files/Microsoft SQL Server/130/R_SERVER/library/RevoScaleR/SampleData")
                inFile <- file.path(rxGetOption("sampleDataDir"), "airsample.csv")
                of <-  rxDataStep(inData = inFile, outFile = "C:/Program Files/Microsoft SQL Server/130/R_SERVER/library/RevoScaleR/SampleData/airline20170428_2.xdf", 
                             transformVars = c("ArrDelay", "CRSDepTime","DayOfWeek")
                            ,transforms = list(ArrDelay = as.integer(ArrDelay), CRSDepTime = as.numeric(CRSDepTime), DayOfWeek = as.character(DayOfWeek))
                            ,overwrite = TRUE
                            ,maxRowsByCols = 10000000
                            ,rowsPerRead = 200000)
                OutputDataSet <- rxXdfToDataFrame(of)'

DECLARE @SQLScript nvarchar(max)
SET @SQLScript = N'SELECT 1 AS N'

EXECUTE sp_execute_external_script
     @language = N'R'
    ,@script = @RScript
    ,@input_data_1 = @SQLScript
WITH RESULT SETS ((ArrDelay INT
                    ,CRSDepTime DECIMAL(6,4)
                    ,DofWeek NVARCHAR(20)))
GO

 

So the whole process consists of creating a table, converting the above sp_execute_external_script call into a procedure, and importing the results from the external procedure into the table.

--Complete process
CREATE TABLE AirFlights_small 
(id INT IDENTITY(1,1)
,ArrDelay INT
,CRSDepTime DECIMAL(6,4)
,DofWeek NVARCHAR(20) 
);
GO

CREATE Procedure ImportXDFtoSQLTable
AS
DECLARE @RScript nvarchar(max)
SET @RScript = N'library(RevoScaleR)
                rxOptions(sampleDataDir = "C:/Program Files/Microsoft SQL Server/130/R_SERVER/library/RevoScaleR/SampleData")
                inFile <- file.path(rxGetOption("sampleDataDir"), "airsample.csv")
                of <-  rxDataStep(inData = inFile, outFile = "airline20170428_2.xdf", 
                transformVars = c("ArrDelay", "CRSDepTime","DayOfWeek")
            ,transforms = list(ArrDelay = as.integer(ArrDelay), CRSDepTime = as.numeric(CRSDepTime), DayOfWeek = as.character(DayOfWeek))
            ,overwrite = TRUE
            ,maxRowsByCols = 10000000)
             OutputDataSet <- data.frame(rxReadXdf(file=of, varsToKeep=c("ArrDelay", "CRSDepTime","DayOfWeek")))'
DECLARE @SQLScript nvarchar(max)
SET @SQLScript = N'SELECT 1 AS N'
EXECUTE sp_execute_external_script
     @language = N'R'
    ,@script = @RScript
    ,@input_data_1 = @SQLScript
WITH RESULT SETS ((ArrDelay INT,CRSDepTime DECIMAL(6,4),DofWeek NVARCHAR(20)));
GO

INSERT INTO AirFlights_small
EXECUTE ImportXDFtoSQLTable;
GO

 

There you go. The data are in a T-SQL table. Now we can start with the comparisons. I will be measuring the time to get the average arrival delay per day of the week.

(Screenshot: RStudio session)

RevoScaleR

Using the RevoScaleR package, I will use the rxCrossTabs function with the help of the transforms argument to convert the day of the week into a factor:

#importing data
outFile2 <- rxDataStep(inData = inFile, outFile = "C:/Program Files/Microsoft SQL Server/130/R_SERVER/library/RevoScaleR/SampleData/airline20170428_2.xdf", 
            transformVars = c("ArrDelay", "CRSDepTime","DayOfWeek")
           ,transforms = list(ArrDelay = as.integer(ArrDelay), CRSDepTime = as.numeric(CRSDepTime), DayOfWeek = as.character(DayOfWeek))
           ,overwrite = TRUE
           ,maxRowsByCols = 10000000)

of2 <- data.frame(rxReadXdf(file=outFile2, varsToKeep=c("ArrDelay", "CRSDepTime","DayOfWeek")))

summary(rxCrossTabs(ArrDelay~DayOfWeek
                    ,data = of2  #outFile2
                    ,transforms = transforms
                    ,blocksPerRead=300000), output="means")

Now get those times:

# Getting times
system.time({ 
  summary(rxCrossTabs(ArrDelay~DayOfWeek
                      ,data = of2
                      ,transforms = transforms
                      ,blocksPerRead=300000), output="means")
  })

This gives an elapsed time of about 7.9 seconds and a computation time of about 3.8 seconds.

Rows Read: 8400013, Total Rows Processed: 8400013, Total Chunk Time: 3.825 seconds 
Computation time: 3.839 seconds.
   user  system elapsed 
   2.89    0.37    7.89 

 

T-SQL query without any specifics

To have a baseline, let’s run the following query:

SET STATISTICS TIME ON;
SELECT 
[DofWeek]
,AVG(ArrDelay) AS [means]
FROM
    AirFlights_small
GROUP BY 
    [DofWeek]
SET STATISTICS TIME OFF;

And check these time statistics

 SQL Server Execution Times:
CPU time = 6124 ms,  elapsed time = 2019 ms.
Warning: Null value is eliminated by an aggregate or other SET operation.

Obviously the CPU/computation time is higher, although the elapsed time is shorter.

ColumnStore Table

Let’s create a nonclustered column store index.

CREATE TABLE AirFlights_CS
(id INT IDENTITY(1,1)
,ArrDelay INT
,CRSDepTime DECIMAL(6,4)
,DofWeek NVARCHAR(20) 
);
GO
INSERT INTO AirFlights_CS(ArrDelay, CRSDepTime, DofWeek)
SELECT ArrDelay, CRSDepTime, DofWeek FROM AirFlights_small 

CREATE NONCLUSTERED COLUMNSTORE INDEX NCCI_AirFlight
ON AirFlights_CS
(id, ArrDelay, CRSDepTime, DofWeek);
GO

With the execution of the same query

SET STATISTICS TIME ON;
SELECT 
[DofWeek]
,AVG(ArrDelay) AS [means]
FROM
  AirFlights_CS
GROUP BY 
    [DofWeek]
SET STATISTICS TIME OFF;

The following time statistics are in

 SQL Server Execution Times:
CPU time = 202 ms,  elapsed time = 109 ms.
Warning: Null value is eliminated by an aggregate or other SET operation.

 

In-Memory OLTP

To get a memory-optimized table, we need to add a memory-optimized filegroup and create a table with MEMORY_OPTIMIZED turned on:

CREATE TABLE dbo.AirFlight_M   
(  
  id INT NOT NULL PRIMARY KEY NONCLUSTERED
 ,ArrDelay INT
 ,CRSDepTime DECIMAL(6,4) 
 ,DofWeek NVARCHAR(20)
) WITH (MEMORY_OPTIMIZED=ON, DURABILITY = SCHEMA_AND_DATA);
GO

And insert the data

INSERT INTO AirFlight_M
SELECT * FROM AirFlights_small

Running the simple query

SET STATISTICS TIME ON;
SELECT 
[DofWeek]
,AVG(ArrDelay) AS [means]
FROM
    AirFlight_M
GROUP BY 
    [DofWeek]
SET STATISTICS TIME OFF;

results are:

 SQL Server Execution Times:
CPU time = 6186 ms,  elapsed time = 1627 ms.
Warning: Null value is eliminated by an aggregate or other SET operation.

These results were somewhat expected, mostly because the columnstore table is the only one with an index, and its reads (also visible in the execution plans) are optimized in comparison to the others. The degree of parallelism and clustered/non-clustered indexes could be pushed further, but the idea was to keep the tests similar to the one in the RevoScaleR and R environment. With R, we cannot put any index on the XDF file.
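For completeness, the RevoScaleR functions can also read straight from a SQL Server table instead of an XDF file by pointing an RxSqlServerData source at, for example, the columnstore table. A minimal sketch, with the connection string as a placeholder:

library(RevoScaleR)

# Placeholder connection string; point it at the database holding AirFlights_CS
sqlConnString <- "Driver=SQL Server;Server=SICN-KASTRUN;Database=SQLR;Trusted_Connection=True"

# Data source over the columnstore table; treat strings (DofWeek) as factors for the model
airCS <- RxSqlServerData(connectionString = sqlConnString, table = "AirFlights_CS",
                         stringsAsFactors = TRUE)

system.time({
  LMResults <- rxLinMod(ArrDelay ~ DofWeek, data = airCS)
})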

In R we run:

system.time({ 
LMResults <- rxLinMod(ArrDelay ~ DayOfWeek, data = outFile2, transforms = transforms)
LMResults$coefficients
})

And in SSMS we run:

SET STATISTICS TIME ON;
-- 1. T-SQL
DECLARE @RScript nvarchar(max)
SET @RScript = N'library(RevoScaleR)
                LMResults <- rxLinMod(ArrDelay ~ DofWeek, data = InputDataSet)
                OutputDataSet <- data.frame(LMResults$coefficients)'
DECLARE @SQLScript nvarchar(max)
SET @SQLScript = N'SELECT ArrDelay, DofWeek FROM [dbo].[AirFlights_small]'
EXECUTE sp_execute_external_script
     @language = N'R'
    ,@script = @RScript
    ,@input_data_1 = @SQLScript
WITH RESULT SETS ((
            --DofWeek NVARCHAR(20)
        --    ,
            Coefficient DECIMAL(10,5)
            ));
GO
SET STATISTICS TIME OFF;


SET STATISTICS TIME ON;
-- 2. ColumnStore
DECLARE @RScript nvarchar(max)
SET @RScript = N'library(RevoScaleR)
                LMResults <- rxLinMod(ArrDelay ~ DofWeek, data = InputDataSet)
                OutputDataSet <- data.frame(LMResults$coefficients)'
DECLARE @SQLScript nvarchar(max)
SET @SQLScript = N'SELECT ArrDelay, DofWeek FROM [dbo].[AirFlights_CS]'
EXECUTE sp_execute_external_script
     @language = N'R'
    ,@script = @RScript
    ,@input_data_1 = @SQLScript
WITH RESULT SETS ((
            --DofWeek NVARCHAR(20)
        --    ,
            Coefficient DECIMAL(10,5)
            ));
GO
SET STATISTICS TIME OFF;


SET STATISTICS TIME ON;
-- 3. Memory optimized
DECLARE @RScript nvarchar(max)
SET @RScript = N'library(RevoScaleR)
                LMResults <- rxLinMod(ArrDelay ~ DofWeek, data = InputDataSet)
                OutputDataSet <- data.frame(LMResults$coefficients)'
DECLARE @SQLScript nvarchar(max)
SET @SQLScript = N'SELECT ArrDelay, DofWeek FROM [dbo].[AirFlight_M]'
EXECUTE sp_execute_external_script
     @language = N'R'
    ,@script = @RScript
    ,@input_data_1 = @SQLScript
WITH RESULT SETS ((
            --DofWeek NVARCHAR(20)
        --    ,
            Coefficient DECIMAL(10,5)
            ));
GO
SET STATISTICS TIME OFF;

 

Conclusion

Gathering statistics on CPU time and elapsed time when running a simple linear regression, this is the comparison:

df_LR_comparison <- data.frame (
  method = c("T-SQL", "ColumnStore", "Memory Optimized", "RevoScaleR")
  ,CPUtime = c(3000,1625,2156,7689)
  ,ElapsedTime = c(14323,10851,10600,7760)
  )
library(ggplot2)

ggplot(df_LR_comparison, aes(method, fill=method)) + 
  geom_bar(aes(y=ElapsedTime), stat="identity") +
  geom_line(aes(y=CPUtime, group=1), color="white", size=3) +
  scale_colour_manual(" ", values=c("d1" = "blue", "d2" = "red"))+
  #scale_fill_manual("",values="red")+
  theme(legend.position="none")

It shows that the elapsed time for the R environment with RevoScaleR (reading data from XDF) is the fastest, whereas a simple T-SQL run through sp_execute_external_script using RevoScaleR gives the slowest response.

(Plot: elapsed time (bars) and CPU time (line) per method)

In terms of CPU time (white line), Columnstore with RevoScaleR call through external procedure outperforms all others.

Final conclusion: when running statistical analysis (using RevoScaleR or any other R library), use columnstore and index-optimized tables/views to get the best CPU and elapsed times. It is also important to remember that any aggregations and calculations that can be done within SQL Server are better performed there.

 

As always, code is available at GitHub.

 

Happy coding! 🙂

Is it possible to use RevoScaleR package in Power BI?

I was invited to deliver a session for the Belgian user group on SQL Server and R integration. After the session – which we did online using web-based Citrix – I got an interesting question: “Is it possible to use the RevoScaleR computational functions within Power BI?”. My first answer was a sceptical yes, but I said that I hadn't used it in this manner yet and that there might be some limitations.

The idea of having the scalable environment and the parallel computational package with all the predictive analytical functions in Power BI is absolutely great. But something tells me that it will not be that straightforward.

So let’s start by taking a large (500 MB) txt file and creating an XDF file:

library(RevoScaleR)
file.name <- "YearPredictionMSD.txt"
rxOptions(sampleDataDir = "C:\\Files")
sampleDataDir

The file is available online at this address as a zip file.

Getting data with R script

Open Power BI, choose Get Data -> R Script, and copy/paste the following slightly changed code:

library(RevoScaleR)
file.name <- "YearPredictionMSD.txt";
rxOptions(sampleDataDir = "C:\\Files");
sampleDataDir

After copy-pasting and clicking OK,

(Screenshot: R Script data source in Power BI Desktop)

You will have to wait for the data to be read into memory and the data model to be created. After monitoring the memory consumption and waiting patiently, you will notice that this particular dataset (500 MB txt, or 160 MB as XDF) consumes a minimum of 3 GB of RAM, and you will end up with a preview:

(Screenshot: data preview in Power BI)

By now, you will also have noticed that after saving this Power BI document, it takes up to roughly 700 MB of disk space, and every data visualization consumes additional RAM and time. After you close the Power BI document, you will notice a lot of RAM being released.

Using R Script in the visuals

In a new Power BI document, I will create a new dataset by using Enter data, with three “dummy” variables.

(Screenshot: entering the dummy variables)

With these three variables, I will try to inject the data returned from the XDF format and have it represented in Power BI.

After selecting a new visual and choosing the R visual, I inserted the following code:

library(RevoScaleR)
file.name <- "YearPredictionMSD.txt";
rxOptions(sampleDataDir = "C:\\Files");
sampleDataDir

And this time, the result is fascinating. R plots the histogram in a split second, which simply means it takes advantage of the XDF file and injects it into Power BI.

(Screenshot: histogram rendered in the R visual)
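For reference, a histogram over an XDF file can be drawn with RevoScaleR's rxHistogram. A minimal sketch of what such an R-visual script could look like – the XDF path and the column name (V1, the year in the MSD dataset) are my assumptions:

library(RevoScaleR)

# Assumed location of the XDF created from YearPredictionMSD.txt
xdf <- RxXdfData("C:\\Files\\YearPredictionMSD.xdf")

# V1 is assumed to be the year column
rxHistogram(~ V1, data = xdf)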

This is still an external file or dataset that Power BI does not have a clue about. Meaning: no slicers are available for dynamically changing the user selection.

Let’s try to insert the data into those three dummy variables, where the third one will be a factor that I have to pre-prepare. Since in this case the factor is Year, it is relatively easy to do:

library(RevoScaleR)
library(gridExtra)
library(dplyr)
Year % filter(year == c("2000","2001","2002")))
grid.table(df_f %>% filter(year == Year))

Once I have this inserted into a new R visual, I just need to add a dummy slicer.

(Screenshot: R visual with the year slicer)

Now I can easily change the years for my cross-tabulation (using the rxCrossTabs function). Since the calculation is performed in the background on the whole dataset, with the dplyr package used only to omit or filter the results, it is also possible to use rxDataStep:

rxDataStep(inData=outputFile, outFile="C:\\Files\\YearPredictMSD_Year.xdf", 
             overwrite=TRUE, transforms=list(LateYears = V1 > 1999))
rxCrossTabs(V2~F(LateYears), data = "C:\\Files\\YearPredictMSD_Year.xdf")

In this way, you create a new XDF file with the transformation through Power BI. Bear in mind that this step might take some extra seconds to create a new variable or to make a subset, if you need one. Again, this is up to you to decide, based on the file size.

Using SQL Server procedure with R Script

This approach is not that uncommon, because stored procedures with T-SQL and R code have proven to be a useful and powerful way of using the SQL Server and R integration within SSRS. Changing the computational context is surely another possible workaround.

Creating Stored procedure:

CREATE PROCEDURE [dbo].[SP_YearMSD_CrossTab]
AS
BEGIN
    DECLARE @RScript nvarchar(max)
        SET @RScript = N'
                library(RevoScaleR)
                sampleDataDir

Or, by copying the T-SQL code into a SQL Server data source, the result is the same.

(Screenshot: stored procedure used as a SQL Server data source)

In both cases, you should get a cross-tabulated representation of the XDF dataset within Power BI. And now you can really use all the advantages of Power BI visuals and slicers, as well as any additional R predictions.

(Screenshot: cross-tabulation with slicers in Power BI)

There is a slight minus to this approach (if not to all of them): you need to generate many stored procedures or queries like this. rxCube will also help you to some extent, but repetitive work cannot be avoided.

Using HDInsight or Hadoop?

Using XDF data files stored in HDInsight or in Hadoop would generally mean using the same dataset and steps as for the SQL Server procedure – you would just need to change the computational context prior to executing the T-SQL script:

# HD Insight - Spark - Azure
HDInsight mySshUsername = USNM,mySshHostname = HSTNM,
mySshSwitches= SWTCH) 
rxSetComputeContext("HDInsight")
## Hadoop
Hadoop mySshUsername = USNM,mySshHostname = HSTNM,
mySshSwitches= SWTCH)
rxSetComputeContext("Hadoop")

Verdict

I have explored a couple of ways to use the Power BI visuals and environment with RevoScaleR XDF (eXternal Data Frame) data files. I have to admit I was surprised that there is a relatively easy way to do it, but from a data scientist's perspective there is still some additional load and work before you can start with the actual data analysis. The last two approaches (R script in visuals and SQL Server procedures) are by far the fastest and also take advantage of the parallel and distributed computations that the RevoScaleR package brings.

I would very strongly advise Microsoft and the Power BI development team to add an XDF plug-in to Power BI. The plug-in would work with a metadata presentation of the data; each time computations are needed, the metadata would push the code against R Server and have the results returned. This would, for sure, be a great way to bring the Big Data concept to Power BI Desktop.

As always, code and samples are available at GitHub.

Happy coding!

Using R sp_execute_external_script with JSON

JSON became part of SQL Server in the same version as R. Both were highly anticipated and awaited by the community.

SQL Server has very powerful constructs for converting to and from JSON when storing into / reading from the engine (FOR JSON, JSON_VALUE, etc.). And since JSON is gaining popularity for data exchange, I was curious to give it a try in combination with R.

I will simply convert a system table into a JSON array using the FOR JSON clause.

SELECT top 10 object_id  FROM sys.objects FOR JSON AUTO;

and it gives back the result:

[{"object_id":3},{"object_id":5},{"object_id":6},{"object_id":7},{"object_id":8},
{"object_id":9},{"object_id":17},{"object_id":18},{"object_id":19},{"object_id":20}]

An sp_execute_external_script query without JSON would look like this:

EXECUTE sp_execute_external_script    
       @language = N'R'    
      ,@script=N'OutputDataSet <- InputDataSet'
      ,@input_data_1 = N'SELECT top 10 object_id  FROM sys.objects'
WITH RESULT SETS ((nr INT));

Now, let’s suppose we want to use the JSON result directly in sp_execute_external_script. Yes, imagine getting results from an API and wanting to push them immediately into R for analysis. A very straightforward package in R is jsonlite (rjson is also available). The query would be as follows:

EXECUTE sp_execute_external_script    
       @language = N'R'    
      ,@script=N'library(jsonlite)
                OutputDataSet <- data.frame(fromJSON(InputDataSet))'
      ,@input_data_1 = N'SELECT top 10 object_id  FROM sys.objects FOR JSON AUTO'
WITH RESULT SETS ((nr INT));

Nope!

Msg 39004, Level 16, State 20, Line 15
A 'R' script error occurred during execution of 'sp_execute_external_script' 
with HRESULT 0x80004004.
Msg 39019, Level 16, State 1, Line 15
An external script error occurred: 
Error: Argument 'txt' must be a JSON string, URL or file.

So the argument ‘txt’ must be a JSON string, URL or file. Khm… a very “useful” error message. The problem is that the data from T-SQL is presented to the R environment (Launchpad) as a data.frame, because the data type passed to R is an array of objects. It would look something like this:

(Screenshot: the incoming data.frame in RStudio)

Running this query in the native (R) environment, we at least get an idea of where and how to tackle the problem. We need to convert the data.frame to a character string using toJSON and as.character, so that the final T-SQL query looks like this:

EXECUTE sp_execute_external_script    
       @language = N'R'    
      ,@script=N'
                library(jsonlite)
                js <- InputDataSet
                js2 <- as.character(toJSON(js))
                OutputDataSet <- data.frame(fromJSON(js2))'
      ,@input_data_1 = N'SELECT top 10 object_id  FROM sys.objects FOR JSON AUTO'
WITH RESULT SETS ((nr INT));

Now we get the correct results (as if we had not used JSON at all):

(Screenshot: query results in SSMS)

So R is ready for JSON and JSON is also ready for R.
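The opposite direction works as well: the R script can serialize its result with toJSON and hand it back to SQL Server as a single character value. A small sketch of the R part only – the data frame here is made up for illustration, and the surrounding sp_execute_external_script call is assumed:

library(jsonlite)

# Hypothetical result produced inside the R script
res <- data.frame(object_id = c(3, 5, 6))

# Return one row with one character column holding the JSON document
OutputDataSet <- data.frame(js = as.character(toJSON(res)), stringsAsFactors = FALSE)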

Happy R+JSON+SQLing!

Clustering executed SQL Server queries using R

When query execution performance analysis is to be done, there are many ways to find which queries might cause unwanted load or stall the server.

To encourage the DBA community to start taking advantage of the R language and the world of data science, I have created a demo that shows how statistics on numerous queries can be stored for later analysis. The demo uses an unsupervised (or undirected) method for grouping similar query statistics (or queries), to make it easier to track down where and which queries might be potential system stoppers.

Before we run some queries to generate running statistics, we clean the cache and prepare a table for storing statistics from sys.dm_exec_query_stats.

-- drop statistics table
DROP TABLE IF EXISTS  query_stats_LOG_2;
DROP PROCEDURE IF EXISTS AbtQry;
DROP PROCEDURE IF EXISTS SalQry;
DROP PROCEDURE IF EXISTS PrsQry;
DROP PROCEDURE IF EXISTS OrdQry;
DROP PROCEDURE IF EXISTS PurQry;
GO

-- Clean all the stuff only for the selected db
DECLARE @dbid INTEGER
SELECT @dbid = [dbid] 
FROM master..sysdatabases 
WHERE name = 'WideWorldImportersDW'
DBCC FLUSHPROCINDB (@dbid);
GO

-- for better sample, check that you have Query store turned off
ALTER DATABASE WideWorldImportersDW SET  QUERY_STORE = OFF;
GO

We generate some fake data:

USE WideWorldImportersDW;
GO

-- CREATE Procedures 

CREATE PROCEDURE AbtQry
(@AMNT AS INTEGER) 
AS

-- an arbitrary query
SELECT 
  cu.[Customer Key] AS CustomerKey
  ,cu.Customer
  ,ci.[City Key] AS CityKey
  ,ci.City
  ,ci.[State Province] AS StateProvince
  ,ci.[Sales Territory] AS SalesTeritory
  ,d.Date
  ,d.[Calendar Month Label] AS CalendarMonth
  ,s.[Stock Item Key] AS StockItemKey
  ,s.[Stock Item] AS Product
  ,s.Color
  ,e.[Employee Key] AS EmployeeKey
  ,e.Employee
  ,f.Quantity
  ,f.[Total Excluding Tax] AS TotalAmount
  ,f.Profit
 
FROM Fact.Sale AS f
  INNER JOIN Dimension.Customer AS cu
    ON f.[Customer Key] = cu.[Customer Key]
  INNER JOIN Dimension.City AS ci
    ON f.[City Key] = ci.[City Key]
  INNER JOIN Dimension.[Stock Item] AS s
    ON f.[Stock Item Key] = s.[Stock Item Key]
  INNER JOIN Dimension.Employee AS e
    ON f.[Salesperson Key] = e.[Employee Key]
  INNER JOIN Dimension.Date AS d
    ON f.[Delivery Date Key] = d.Date
WHERE
    f.[Total Excluding Tax] BETWEEN 10 AND @AMNT;
GO

CREATE PROCEDURE SalQry
(@Q1 AS INTEGER
,@Q2 AS INTEGER)

AS
-- FactSales Query
SELECT * FROM Fact.Sale
WHERE
    Quantity BETWEEN @Q1 AND @Q2;
GO

CREATE PROCEDURE PrsQry
(@CID AS INTEGER )
AS

-- Person Query
SELECT * 
    FROM [Dimension].[Customer]
    WHERE [Buying Group] <> 'Tailspin Toys' 
    /* OR [WWI Customer ID] > 500 */
    AND [WWI Customer ID] BETWEEN 400 AND  @CID
ORDER BY [Customer],[Bill To Customer];
GO


CREATE PROCEDURE OrdQry
(@CK AS INTEGER)
AS

-- FactSales Query
SELECT 
    * 
    FROM [Fact].[Order] AS o
    INNER JOIN [Fact].[Purchase] AS p 
    ON o.[Order Key] = p.[WWI Purchase Order ID]
WHERE
    o.[Customer Key] = @CK;
GO

CREATE PROCEDURE PurQry
(@Date AS SMALLDATETIME)
AS

-- FactPurchase Query
SELECT *
    FROM [Fact].[Purchase]
        WHERE
        [Date Key] = @Date;
    --[Date KEy] = '2015/01/01'
GO

Now we run the procedures a couple of times:

DECLARE @ra DECIMAL(10,2)
SET @ra = RAND()
SELECT CAST(@ra*10 AS INT)

IF @ra  < 0.3333
    BEGIN
       -- SELECT 'RAND < 0.333', @ra
       DECLARE @AMNT_i1 INT = 100*CAST(@ra*10 AS INT)
       EXECUTE AbtQry @AMNT = @AMNT_i1
       EXECUTE PurQry @DAte = '2015/10/01'
       EXECUTE PrsQry @CID = 480
       EXECUTE OrdQry @CK = 0
       DECLARE @Q1_i1 INT = 1*CAST(@ra*10 AS INT)
       DECLARE @Q2_i1 INT = 20*CAST(@ra*10 AS INT)
       EXECUTE SalQry @Q1 = @Q1_i1, @Q2 = @Q2_i1

    END
ELSE 
    IF @ra  > 0.3333 AND @ra < 0.6667
    BEGIN
        -- SELECT 'RAND > 0.333 | < 0.6667', @ra
        DECLARE @AMNT_i2 INT = 500*CAST(@ra*10 AS INT)
        EXECUTE AbtQry @AMNT = @AMNT_i2
        EXECUTE PurQry @DAte = '2016/04/29'
        EXECUTE PrsQry @CID = 500
        EXECUTE OrdQry @CK = 207
        DECLARE @Q1_i2 INT = 2*CAST(@ra*10 AS INT)
        DECLARE @Q2_i2 INT = 10*CAST(@ra*10 AS INT)
        EXECUTE SalQry @Q1 = @Q1_i2, @Q2 = @Q2_i2

    END
ELSE
    BEGIN
        -- SELECT 'RAND > 0.6667', @ra
        DECLARE @AMNT_i3 INT = 800*CAST(@ra*10 AS INT)
        EXECUTE AbtQry @AMNT = @AMNT_i3
        EXECUTE PurQry @DAte = '2015/08/13'
        EXECUTE PrsQry @CID = 520
        EXECUTE OrdQry @CK = 5
        DECLARE @Q2_i3 INT = 60*CAST(@ra*10 AS INT)
        EXECUTE SalQry @Q1 = 25, @Q2 = @Q2_i3
    END
GO 10

And with this data, we proceed to run the logging T-SQL (query source).

SELECT
    (total_logical_reads + total_logical_writes) AS total_logical_io
    ,(total_logical_reads / execution_count) AS avg_logical_reads
    ,(total_logical_writes / execution_count) AS avg_logical_writes
    ,(total_physical_reads / execution_count) AS avg_phys_reads
    ,SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,  
      ((CASE qs.statement_end_offset WHEN -1 THEN DATALENGTH(st.text) 
        ELSE qs.statement_end_offset END - qs.statement_start_offset) / 2) + 1) 
AS statement_text
    ,*
INTO query_stats_LOG_2
FROM
        sys.dm_exec_query_stats AS qs
    CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY total_logical_io DESC

Now that we have gathered some test data, we can proceed to do clustering analysis.

Since I don’t know how many clusters there could be – and I can imagine a DBA would also be pretty much clueless – I will explore the number of clusters (a quick sanity check on the cluster count follows further below). The following R code pulls the statistics and runs a hierarchical clustering:

library(RODBC)
myconn <-odbcDriverConnect("driver={SQL Server};Server=SICN-KASTRUN;
database=WideWorldImportersDW;trusted_connection=true")

query.data <- sqlQuery(myconn, "
                     SELECT 
                                 [total_logical_io]
                      ,[avg_logical_reads]
                      ,[avg_phys_reads]
                      ,execution_count
                      ,[total_physical_reads]
                      ,[total_elapsed_time]
                      ,total_dop
                      ,[text]
                      ,CASE WHEN LEFT([text],70) LIKE '%AbtQry%' THEN 'AbtQry'
                       WHEN LEFT([text],70) LIKE '%OrdQry%' THEN 'OrdQry'
                       WHEN LEFT([text],70) LIKE '%PrsQry%' THEN 'PrsQry'
                       WHEN LEFT([text],70) LIKE '%SalQry%' THEN 'SalQry'
                       WHEN LEFT([text],70) LIKE '%PurQry%' THEN 'PurQry'
                       WHEN LEFT([text],70) LIKE '%@BatchID%' THEN 'System'
                                  ELSE 'Others' END AS label_graph 
                      FROM query_stats_LOG_2")

close(myconn) 
library(cluster)

#qd <- query.data[,c(1,2,3,5,6)]
qd <- query.data[,c(1,2,6)]

## hierarchical clustering
qd.use <- query.data[,c(1,2,6)]
medians <- apply(qd.use,2,median)
#mads <- apply(qd.use,2,mad)
qd.use <- scale(qd.use,center=medians) #,scale=mads)

#calculate distances
query.dist <- dist(qd.use)

# hierarchical clustering
query.hclust <- hclust(query.dist)

# plotting solution
op <- par(bg = "lightblue")
plot(query.hclust,labels=query.data$label_graph,
main='Query Hierarchical Clustering', ylab = 'Distance', 
xlab = ' ', hang = -1, sub = "" )
# in addition to circle queries within cluster
rect.hclust(query.hclust, k=3, border="DarkRed")        

And produces the following plot:

(Plot: query hierarchical clustering dendrogram)

The graph itself is self-explanatory: based on the gathered statistics and the queries executed against the system, you get groups of queries where your DBA can easily and quickly track down what might be causing issues. I added some labels to the queries for the graph to look neater, but that is up to you.

I have also changed the type to “triangle” to get the following plot:

(Plot: the same dendrogram drawn with the “triangle” type)

And both show the same information.
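Since the number of clusters was a guess, one quick sanity check is the classic elbow plot over k, reusing the scaled qd.use matrix from the script above – a sketch only, with the range of k chosen arbitrarily:

# Total within-cluster sum of squares for k = 1..8
wss <- sapply(1:8, function(k) kmeans(qd.use, centers = k, nstart = 10)$tot.withinss)
plot(1:8, wss, type = "b", xlab = "Number of clusters k", ylab = "Total within-cluster SS")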

So the R code suggests that there are three clusters, and I used medians to center the data. In addition, I have also tested the result with Partitioning Around Medoids (as an alternative to hierarchical clustering; a short sketch follows below) and both techniques yield clean clusters.

(Screenshot: RStudio output)
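A minimal sketch of the Partitioning Around Medoids check mentioned above, assuming three clusters and reusing qd.use (the cluster package is already loaded in the script):

# Partitioning Around Medoids with k = 3 on the same scaled data
query.pam <- pam(qd.use, k = 3)

# Cluster sizes and average silhouette width as a rough quality measure
table(query.pam$clustering)
query.pam$silinfo$avg.width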

Also, the data sample is relatively small, but you are very welcome to test this idea in your environment. Just go easy with the FREEPROCCACHE and FLUSHPROCINDB commands!

This blog post was meant as a teaser, to gather opinions from the readers. A couple of additional approaches will be part of two articles that I am currently working on.

As always, the code is available on GitHub.

Happy SQLinRg!