Native scoring in SQL Server 2017 using R

Native scoring is an often overlooked feature in SQL Server 2017 (available only under Windows and only on-prem) that provides scoring and prediction against pre-built, stored machine learning models in near real-time.


Icon made by Smashicons from www.flaticon.com, licensed under CC BY 3.0

 

I will not go into the definition of real-time – it depends on your line of business – but we can certainly say that scoring 10,000 rows in a second from a mediocre client computer (similar to mine) comes close.

Native scoring in SQL Server 2017 comes with a couple of limitations, but also with a lot of benefits. The limitations are:

  • currently supported only on SQL Server 2017 and the Windows platform
  • the trained model must not exceed 100 MiB in size
  • native scoring with the PREDICT function supports only the following algorithms from the RevoScaleR library:
    • rxLinMod (linear models, e.g. linear regression)
    • rxLogit (logistic regression)
    • rxBTrees (parallel external memory algorithm for stochastic gradient boosted decision trees)
    • rxDTree (external memory algorithm for classification and regression trees)
    • rxDForest (external memory algorithm for classification and regression decision forests)

Benefits of using PREDICT function for native scoring are:

  • no configuration of the R or ML environment is needed (assuming the trained models are already stored in the database),
  • the code is cleaner and more readable, and no additional R code is needed when scoring,
  • no R engine is called at run time, which brings a tremendous reduction of CPU and I/O costs, and there are no external calls,
  • the client or server running native scoring with the PREDICT function does not need an R engine installed, because it uses C++ libraries from Microsoft that read the serialized model stored in a table, deserialize it and generate predictions, all without R

Overall, if you are looking for faster predictions in your enterprise and would love faster code and solution deployment, especially integration with other applications or building an API in your ecosystem, native scoring with the PREDICT function will surely be an advantage to you. Although not all predictions/scores are supported, the majority can be done using regression or decision tree models (it is estimated that these two families of algorithms, together with derivatives of regression models and ensemble methods, are used in 85% of predictive analytics).

To put the PREDICT function to the test, I have deliberately taken a moderately large dataset, available with the RevoScaleR package in R – AirlineDemoSmall.csv. Using a simple BULK INSERT, we get the data into the database:

BULK INSERT ArrDelay
 FROM 'C:\Program Files\Microsoft SQL Server\140\R_SERVER\library\RevoScaleR\SampleData\AirlineDemoSmall.csv'
 WITH 
 ( FIELDTERMINATOR =',', ROWTERMINATOR = '0x0a', FIRSTROW = 2, CODEPAGE = 'RAW');

Once data is in the database, I will split the data into training and test sub-sets.

SELECT TOP 20000 *
INTO ArrDelay_Train
FROM ArrDelay ORDER BY NEWID()
-- (20000 rows affected)

SELECT *
INTO ArrDelay_Test
FROM ArrDelay AS AR
WHERE NOT EXISTS (SELECT * FROM ArrDelay_Train as ATR
                     WHERE
                       ATR.arrDelay = AR.arrDelay
                   AND ATR.[DayOfWeek] = AR.[DayOfWeek]
                   AND ATR.CRSDepTime = AR.CRSDepTime
                 )
-- (473567 rows affected)
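The same random split can be sketched outside T-SQL as well. This is only an illustrative Python version: random sampling stands in for TOP n … ORDER BY NEWID(), and a set-based filter stands in for the NOT EXISTS anti-join (like the anti-join above, it assumes the rows are distinct):

```python
import random

def split_train_test(rows, n_train, seed=42):
    """Randomly pick n_train rows for training (like TOP n ... ORDER BY NEWID())
    and return the remaining rows as the test set (like the NOT EXISTS anti-join)."""
    rng = random.Random(seed)
    train = rng.sample(rows, n_train)
    train_set = set(train)
    test = [r for r in rows if r not in train_set]
    return train, test

# tiny example with (ArrDelay, CRSDepTime, DayOfWeek) tuples
rows = [(d, 9.0 + d, d % 7 + 1) for d in range(10)]
train, test = split_train_test(rows, 3)
print(len(train), len(test))  # 3 7
```

Note that the anti-join version in T-SQL silently drops rows that happen to have duplicate column values in the training set; the sketch above has the same property.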

The dataset itself is relatively simple:

ArrDelay CRSDepTime DayOfWeek
1        9.383332   3
4        18.983334  4
0        13.883333  4
65       21.499998  7
-3       6.416667   1

Creating models

So we will create essentially the same model twice using the rxLinMod function with the same formula, but one with the additional parameter for real-time scoring set to TRUE.

-- regular model creation
DECLARE @model VARBINARY(MAX);
EXECUTE sp_execute_external_script
 @language = N'R'
 ,@script = N'
 arrDelay.LM <- rxLinMod(ArrDelay ~ DayOfWeek + CRSDepTime, 
                         data = InputDataSet)
 model <- rxSerializeModel(arrDelay.LM)'
 ,@input_data_1 = N'SELECT * FROM ArrDelay_Train'
 ,@params = N'@model varbinary(max) OUTPUT'
 ,@model = @model OUTPUT
 INSERT [dbo].arrModels([model_name], [native_model])
 VALUES('arrDelay.LM.V1', @model) ;

-- Model for Native scoring
DECLARE @model VARBINARY(MAX);

EXECUTE sp_execute_external_script
 @language = N'R'
 ,@script = N'
 arrDelay.LM <- rxLinMod(ArrDelay ~ DayOfWeek + CRSDepTime, 
                             data = InputDataSet)
 model <- rxSerializeModel(arrDelay.LM, realtimeScoringOnly = TRUE)'
 ,@input_data_1 = N'SELECT * FROM ArrDelay_Train'
 ,@params = N'@model varbinary(max) OUTPUT'
 ,@model = @model OUTPUT
 INSERT [dbo].arrModels([model_name], [native_model])
 VALUES('arrDelay.LM.NativeScoring.V1', @model) ;

Both models share the same training set and are stored in a table for future scoring. Upon first inspection, we can see there is a difference in model size:


Scoring Models

Both models took relatively the same amount of time to train and to store in the table. Both can also be created on R Machine Learning Server and stored in the same way (with or without the realtimeScoringOnly argument). The difference in model size hints at how real-time scoring is achieved: keep your model as small as possible. Both models will give you the exact same prediction scores, just that native scoring will be much faster. Note also, if you are planning to do any text analysis with real-time scoring, keep in mind the 100 MiB limitation, as text prediction models often exceed it.
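The store-serialized-model-then-score pattern itself is easy to illustrate outside SQL Server. The following is only a conceptual Python sketch, where pickle and sqlite3 stand in for rxSerializeModel and the arrModels table (PREDICT itself uses Microsoft's native C++ libraries, not pickle, and the table/model names here are illustrative):

```python
import pickle
import sqlite3

def fit_linear(xs, ys):
    """Train a trivial one-feature linear 'model' (least squares slope/intercept)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return {"slope": slope, "intercept": my - slope * mx}

model = fit_linear([1, 2, 3, 4], [2, 4, 6, 8])

# Store the serialized model bytes in a table, like the arrModels table above.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE models (model_name TEXT, native_model BLOB)")
conn.execute("INSERT INTO models VALUES (?, ?)", ("demo.LM.V1", pickle.dumps(model)))

# Scoring: fetch the bytes, deserialize, predict - no training environment needed.
blob = conn.execute("SELECT native_model FROM models WHERE model_name = 'demo.LM.V1'").fetchone()[0]
m = pickle.loads(blob)
print(m["slope"] * 10 + m["intercept"])  # 20.0
```

The point of the sketch is the separation: once the model bytes are in a table, the scoring side only needs a deserializer and the prediction arithmetic, which is exactly what makes a lightweight native scorer possible.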

To compare the execution of the scoring models, I will score the test set in the “traditional way”, using the external procedure sp_execute_external_script, and then with the PREDICT function.

------------------------------------
-- Using sp_execute_external_script
------------------------------------
DECLARE @model VARBINARY(MAX) = (SELECT native_model FROM arrModels 
WHERE model_name = 'arrDelay.LM.V1')

EXEC sp_execute_external_script
 @language = N'R'
 ,@script = N'
 modelLM <- rxUnserializeModel(model)
 OutputDataSet <- rxPredict( model=modelLM,
                 data = ArrDelay_Test,
                 type = "link",
                 predVarNames = "ArrDelay_Pred",
                 extraVarsToWrite = c("ArrDelay","CRSDepTime","DayOfWeek")
 )'
 ,@input_data_1 = N'SELECT * FROM dbo.ArrDelay_Test'
 ,@input_data_1_name = N'ArrDelay_Test'
 ,@params = N'@model VARBINARY(MAX)'
 ,@model = @model
WITH RESULT SETS
((
 ArrDelay_Pred FLOAT
,ArrDelay INT 
,CRSDepTime NUMERIC(16,5)
,[DayOfWeek] INT
))
-- (473567 rows affected)
-- Duration 00:00:08

---------------------------
-- Using Real Time Scoring
---------------------------
DECLARE @model varbinary(max) = ( SELECT native_model FROM arrModels 
WHERE model_name = 'arrDelay.LM.NativeScoring.V1');

SELECT 
 NewData.*
 ,p.*
 FROM PREDICT(MODEL = @model, DATA = dbo.ArrDelay_Test as newData)
 WITH(ArrDelay_Pred FLOAT) as p;
GO
-- (473567 rows affected)
-- Duration 00:00:04

The two examples differ, but the PREDICT function is much more readable and neater. Time performance is also on the PREDICT function's side, as it returns the predictions much faster.

In addition, I have mentioned that the PREDICT function does not need the R engine or the Launchpad service to be running in the environment where the code is executed. To put this to the test, I will simply stop the SQL Server Launchpad service:


When executing the first set of predictions using sp_execute_external_script, SQL Server (or Machine Learning Server) will notify you that the service is not running:


whereas the PREDICT function will work flawlessly.

Verdict

For sure, faster predictions are something very welcome in the gaming, transport, utility and metal industries, in finance, and in any other field where real-time predictions against OLTP systems are appreciated. With light-weight models and good algorithm support, I would certainly give it an additional thought, especially if you see potential in faster, near real-time predictions.

As always, complete code and data sample are available at Github. Happy coding! 🙂


Using R in SQL Server Reporting Services (SSRS)

SQL Server Reporting Services (SSRS) is an outstanding tool for creating, deploying and managing paginated, mobile and KPI reports, as well as Power BI reports. The tool provides a simple way to share data and get insights in your corporate environment.


(Photo source: Microsoft docs)

Using the power of the R language to enrich your data, your statistical analysis or your visualizations is a simple way to get more out of your reports.

The best practice for embedding R code into an SSRS report is to create a stored procedure and output its results to the report. To demonstrate this, we will create two reports: one that takes two input parameters, and a second one that demonstrates an R visualization.

First, we will create a designated database and generate some sample data:

CREATE DATABASE R_SSRS;
GO
USE R_SSRS;
GO

CREATE TABLE R_data (
 v1 INT ,v2 INT,v_low INT,v_high INT,letter CHAR(1));
GO

CREATE OR ALTER PROCEDURE generate_data
AS
BEGIN
  INSERT INTO R_data(v1,v2,v_low,v_high,letter)
  SELECT TOP 10
    CAST(v1.number*RAND() AS INT) AS v1
   ,CAST(v2.number*RAND() AS INT) AS v2
   ,v1.low AS v_low
   ,v1.high AS v_high
   ,SUBSTRING(CONVERT(varchar(40), NEWID()),0,2) AS letter
  FROM master..spt_values AS v1
  CROSS JOIN master..spt_values AS v2
  WHERE  v1.[type] = 'P' AND v2.[type] = 'P'
 ORDER BY NEWID() ASC;
END
GO
EXEC generate_data;
GO 10

For the first report, we will create a dynamic report where the user can select two parameters. Based on the selected parameters, the resulting dataset is passed to R, where additional R code generates summary statistics.

The idea is to have the values “v1” and “letter” parametrized:

SELECT * FROM R_data
WHERE 
 v1 > 400 -- replace this with parameter; e.g.: @v1
AND letter IN ('1','2','3') -- replace this with parameter; e.g.: @letters

So in this case, we will create a stored procedure that takes the parameters from T-SQL and calls the external procedure to execute the R code:

CREATE OR ALTER PROCEDURE sp_R1(
   @v1 INT
  ,@lett VARCHAR(20)
) AS
BEGIN
 DECLARE @myQuery NVARCHAR(1000)

  CREATE TABLE #t (let CHAR(1))
  INSERT INTO #t
  SELECT value
  FROM STRING_SPLIT(@lett,',')
 
 SET @myQuery = N'
    SELECT * FROM R_data
    WHERE 
    v1 > '+CAST(@v1 AS VARCHAR(10))+'
    AND letter IN (SELECT * FROM #t)'

EXEC sp_execute_external_script
 @language = N'R'
 ,@script = N'
   df <- InputDataSet 
   OutputDataSet <- data.frame(summary(df))'
  ,@input_data_1 = @myQuery
WITH RESULT SETS
((v1 NVARCHAR(100)
 ,v2 NVARCHAR(100)
 ,freq NVARCHAR(100)))
END;
GO

By testing, we can see that passing both parameters (one single-valued and one multi-valued) returns the result of the R code:

EXEC sp_R1 @v1 = 200, @lett = '1,2,3'
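As a side note, the single-plus-multi-valued filtering done above with string concatenation and a temp table can also be sketched with query placeholders, which avoids building SQL from user input. A hypothetical Python/sqlite3 sketch (table and column names mirror R_data):

```python
import sqlite3

def query_filtered(conn, v1_min, letters):
    """Filter by a single-valued and a multi-valued parameter.
    One placeholder per letter replaces the string concatenation
    used in the T-SQL stored procedure."""
    marks = ",".join("?" for _ in letters)
    sql = f"SELECT v1, letter FROM R_data WHERE v1 > ? AND letter IN ({marks})"
    return conn.execute(sql, [v1_min, *letters]).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE R_data (v1 INTEGER, letter TEXT)")
conn.executemany("INSERT INTO R_data VALUES (?, ?)",
                 [(500, "1"), (300, "2"), (700, "3"), (900, "9")])
rows = query_filtered(conn, 400, "1,2,3".split(","))
print(rows)  # [(500, '1'), (700, '3')]
```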

Both parameters will be used in the report for filtering the data. In SQL Server 2016 Report Builder, we create a blank report, add a data source and create a new dataset using the stored procedure:


Once the dataset is added and the stored procedure tested, Report Builder will automatically create the report parameters:


In addition, we need to set up the parameters: we add another dataset to provide a controlled set of available values, and assign this pool of values to each parameter by selecting the dataset the values will be taken from.


For the multi-valued parameter, we also need to specify that the user can select more than one value at a time. To let Report Builder know that the “lett” parameter will be passed back as a string of multiple values, we must select “Allow multiple values”.


Once this is completed, both parameters are shown as drop-down selection boxes, and the user gets the results back based on the selection made.


In the end, the report should look very similar to this, where the table is the result generated by the R code, and the rest is T-SQL results and the beautiful Reporting Services.

For the second report, we will create a new stored procedure that generates a boxplot for the values retrieved from the user selection.

The procedure remains roughly the same; only the R code changes in order to return the graph (the complete procedure is in the code):

EXEC sp_execute_external_script
 @language = N'R'
 ,@script = N'
  df <- InputDataSet 
  image_file <- tempfile()
  jpeg(filename = image_file, width = 400, height = 400)
  boxplot(df$v1~df$letter)
  dev.off()
  OutputDataSet <- data.frame(data=readBin(file(image_file, "rb"), what=raw(), n=1e6))'

 

In addition, on our new report we need to show the returned value of this procedure (or simply the result) as a graph.

On the report canvas, drag in an image and change its settings as follows:


Users will now be able to select the values, and based on their selection, the R graph generated inside the SSRS report changes accordingly.

The next figure shows just this: how simple it is for the user to change the selection (add or remove values), and the boxplot is redrawn with the updated values.


Both demos should help you leverage R scripts and R visualizations in your corporate reporting service and bring some additional user-interface dynamics into it.

As always, the complete code and reports are available at Github. The code also includes my randomly generated dataset, if you want to reproduce the results in this blogpost.

Happy RCoding 🙂

SQL Saturday statistics – Web Scraping with R and SQL Server

I wanted to check a simple query: how many times has a particular topic been presented, and by how many different presenters?

Sounds interesting, and tackling it should not be hard; just note that the final numbers may vary, since some text analysis is involved.

First of all, some web scraping to get the information from the SQLSaturday web page. With the R/Python integration in SQL Server, reading the information from the website is a fairly straightforward task:

EXEC sp_execute_external_script
 @language = N'R'
 ,@script = N'
 library(rvest)
 library(XML)
 library(dplyr)

#URL to schedule
 url_schedule <- ''http://www.sqlsaturday.com/687/Sessions/Schedule.aspx''

#Read HTML
 webpage <- read_html(url_schedule)

# Event schedule
 schedule_info <- html_nodes(webpage, ''.session-schedule-cell-info'') # OK

# Extracting HTML content
 ht <- html_text(schedule_info)

df <- data.frame(data=ht)

#create empty DF
 df_res <- data.frame(title=c(), speaker=c())

for (i in 1:nrow(df)){
 #print(df[i])
 if (i %% 2 != 0) #odd flow
 print(paste0("title is: ", df$data[i]))
 if (i %% 2 == 0) #even flow
 print(paste0("speaker is: ", df$data[i]))
 df_res <- rbind(df_res, data.frame(title=df$data[i], speaker=df$data[i+1]))
 }
df_res_new = df_res[seq(1, nrow(df_res), 2), ]
OutputDataSet <- df_res_new'

Python offers the BeautifulSoup library, which does pretty much the same (or an even better) job as the rvest and XML packages combined. Nevertheless, once we have the data from a test page (in this case I am reading the Slovenian SQLSaturday 2017 schedule, simply because it is awesome), we can “walk through” the whole web page and generate all the needed information.
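The odd/even pairing that the R loop performs – the scraped cells alternate title, speaker, title, speaker – can be sketched more compactly. A hypothetical Python version of the same idea (the cell texts below are illustrative):

```python
def pair_sessions(cells):
    """The scraped cells alternate title, speaker, title, speaker, ...
    Zip the even and odd positions into (title, speaker) pairs."""
    return list(zip(cells[0::2], cells[1::2]))

ht = ["Query Store internals", "Speaker A", "R in SQL Server", "Speaker B"]
print(pair_sessions(ht))
# [('Query Store internals', 'Speaker A'), ('R in SQL Server', 'Speaker B')]
```

This replaces both the rbind loop and the subsequent seq()-based deduplication in the R script.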

The SQLSaturday website has every event enumerated, making it very easy to parametrize the web scraping process:


So we will scrape through the last 100 events by simply incrementing the event number; the input parameter will be parsed as:

http://www.sqlsaturday.com/600/Sessions/Schedule.aspx

http://www.sqlsaturday.com/601/Sessions/Schedule.aspx

http://www.sqlsaturday.com/602/Sessions/Schedule.aspx

and so on, regardless of whether the page still functions or not. Results are returned to the SQL Server database.
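Building the list of per-event URLs is trivial; a small illustrative Python sketch of the parametrization (an actual fetch of each URL would be wrapped in error handling, since some event numbers may not resolve to a live page):

```python
def schedule_urls(start_id, end_id):
    """Build the schedule URL for each SQLSaturday event number."""
    base = "http://www.sqlsaturday.com/{}/Sessions/Schedule.aspx"
    return [base.format(event_id) for event_id in range(start_id, end_id + 1)]

urls = schedule_urls(600, 602)
print(urls[0])  # http://www.sqlsaturday.com/600/Sessions/Schedule.aspx
```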

Creating a stored procedure will do the job:

USE SqlSaturday;
GO

CREATE OR ALTER PROCEDURE GetSessions
 @eventID SMALLINT
AS

DECLARE @URL VARCHAR(500)
SET @URL = 'http://www.sqlsaturday.com/' +CAST(@eventID AS NVARCHAR(5)) + '/Sessions/Schedule.aspx'

PRINT @URL

DECLARE @TEMP TABLE
(
 SqlSatTitle NVARCHAR(500)
 ,SQLSatSpeaker NVARCHAR(200)
)

DECLARE @RCODE NVARCHAR(MAX)
SET @RCODE = N' 
 library(rvest)
 library(XML)
 library(dplyr)
 library(httr)
 library(curl)
 library(selectr)
 
 #URL to schedule
 url_schedule <- "'
 
DECLARE @RCODE2 NVARCHAR(MAX) 
SET @RCODE2 = N'"
 #Read HTML
 webpage <- html_session(url_schedule) %>%
 read_html()

# Event schedule
 schedule_info <- html_nodes(webpage, ''.session-schedule-cell-info'') # OK

# Extracting HTML content
 ht <- html_text(schedule_info)

df <- data.frame(data=ht)

#create empty DF
 df_res <- data.frame(title=c(), speaker=c())

for (i in 1:nrow(df)){
 #print(df[i])
 if (i %% 2 != 0) #odd flow
 print(paste0("title is: ", df$data[i]))
 if (i %% 2 == 0) #even flow
 print(paste0("speaker is: ", df$data[i]))
 df_res <- rbind(df_res, data.frame(title=df$data[i], speaker=df$data[i+1]))
 }

df_res_new = df_res[seq(1, nrow(df_res), 2), ]
 OutputDataSet <- df_res_new ';

DECLARE @FINAL_RCODE NVARCHAR(MAX)
SET @FINAL_RCODE = CONCAT(@RCODE, @URL, @RCODE2)

INSERT INTO @Temp
EXEC sp_execute_external_script
 @language = N'R'
 ,@script = @FINAL_RCODE


INSERT INTO SQLSatSessions (sqlSat,SqlSatTitle,SQLSatSpeaker)
SELECT 
 @EventID AS sqlsat
 ,SqlSatTitle
 ,SqlSatSpeaker
FROM @Temp

 

Before you run this, a little environment setup:

USE [master];
GO

CREATE DATABASE SQLSaturday;
GO

USE SQLSaturday;
GO

CREATE TABLE SQLSatSessions
(
 id SMALLINT IDENTITY(1,1) NOT NULL
,SqlSat SMALLINT NOT NULL
,SqlSatTitle NVARCHAR(500) NOT NULL
,SQLSatSpeaker NVARCHAR(200) NOT NULL
)

 

There you go! Now you can run a stored procedure for a particular event (in this case SQL Saturday Slovenia 2017):

EXECUTE GetSessions @eventID = 687

or you can run this procedure against multiple SQLSaturday events and web scrape data from SQLSaturday.com website instantly.

For Slovenian SQLSaturday, I get the following sessions and speakers list:


Please note that if you are running this code behind a firewall or proxy, some additional proxy or firewall configuration might be needed!

So, coming back to the original question – how many times has Query Store been presented at SQL Saturdays (from SQLSat600 through SQLSat690)? Here is the frequency table:


Or presented with a pandas graph:


Query Store is more popular than any of the R, Python or Azure ML topics, but PowerShell is gaining popularity like crazy. Good work, PowerShell people! 🙂

UPDATE #1: More statistics: in general, a PowerShell session is presented at every second SQLSaturday and Query Store at every third, whereas there are at least 2 Azure-related topics at every SQLSat event (relevant for events SqlSat600 through SqlSat690).

As always, code is available at Github.

 

Robust IRIS Dataset?

This blog post was born out of pure curiosity about the robustness of the IRIS dataset. Biological datasets do not need to be big, in comparison to datasets of customers, consumption, stocks and anything else that might be volatile.

I remember one occasion back at university when we were measuring the length of frog legs and other froggy features. After just a couple of measurements, further predictions were steady. Also, any kind of sampling (random and simple random sampling, cluster/stratified sampling, sampling with replacement and many other creative ways of sampling) proved to be rigid and robust, and would converge quickly to a good result.

Therefore, I decided to put the IRIS dataset to the test using a simple classification method: first calculating the simple Euclidean distance, followed by finding the nearest neighbour and, based on that, checking the predicted species membership against the labels.

The accuracy of the prediction was tested by mapping the original species to the predicted ones. And the test was: how small can a training dataset be in order to still get a good result?

After some Python and R code, the results were in.

I have tested the following train:test sample-size pairs:

  • 80% – 20%
  • 60% – 40%
  • 50% – 50%
  • 30% – 70%
  • 10% – 90%

Note that the IRIS dataset has 150 observations, evenly distributed among three species. The following Python code loops through the calculation of the Euclidean distance:

# run the classification script 3000 times
for x in range(3000):
    exec(open("./classification.py").read(), globals())

At the end I have generated the file:


With these results, simple R code to generate the scatter plot was used:

library(ggplot2)
setwd("C:\\Predictions\\")
df_pred <- data.frame(read.table("results_split.txt", sep=";"))
p <- ggplot(df_pred, aes(df_pred$V3, df_pred$V1)) 
p <- p + geom_point(aes(df_pred$V3))
p <- p + labs(x="Accuracy (%) of predictions", y="Size of training subset")
p

Which resulted as:


The graph clearly shows that a training set of only 10% (15 of the 150 observations) can generate very accurate predictions roughly every 1.35 runs.

Other pairs, taking 30% or 50% of the data for the training set, will almost always give close to 100% accuracy.

Snippet of the Python code to generate the Euclidean distance:

import math

def eucl_dist(set1, set2, length):
    distance = 0
    for x in range(length):
        distance += pow(set1[x] - set2[x], 2)
    return math.sqrt(distance)

and neighbours:

import operator

def find_neighbors(train, test, nof_class):
    distances = []
    length_dist = len(test) - 1
    for x in range(len(train)):
        dist = eucl_dist(test, train[x], length_dist)
        distances.append((train[x], dist))
    distances.sort(key=operator.itemgetter(1))
    neighbour = []
    for x in range(nof_class):
        neighbour.append(distances[x][0])
    return neighbour
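For orientation, the whole experiment condenses to a few lines. This is only a self-contained sketch of the nearest-neighbour idea on toy two-cluster data (the real classification.py loads IRIS and splits it, which is omitted here; a majority vote over the neighbours is added, which the snippets above leave to the caller):

```python
import math
from collections import Counter

def predict_species(train, labels, sample, k=3):
    """Classify one sample by majority vote among its k nearest
    training points (plain Euclidean distance, as above)."""
    order = sorted(range(len(train)), key=lambda i: math.dist(train[i], sample))
    votes = Counter(labels[i] for i in order[:k])
    return votes.most_common(1)[0][0]

# toy stand-in for the IRIS measurements: two well-separated clusters
train = [(1.0, 1.0), (1.2, 0.9), (0.9, 1.1), (5.0, 5.0), (5.2, 4.9), (4.8, 5.1)]
labels = ["setosa", "setosa", "setosa", "virginica", "virginica", "virginica"]

test = [(1.1, 1.0), (5.1, 5.0)]
preds = [predict_species(train, labels, s) for s in test]
accuracy = sum(p == t for p, t in zip(preds, ["setosa", "virginica"])) / len(test)
print(preds, accuracy)
```

Accuracy is then simply the share of predicted labels that match the originals, which is exactly the mapping of original to predicted species described earlier.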

 

In conclusion, the IRIS dataset is – due to the nature of the measurements and observations – robust and rigid; one can get very good accuracy results on a small training set. Everything beyond 30% for training the model is, for this particular case, just additional overhead.

Happy R & Python coding! 🙂

Passing two SQL queries to sp_execute_external_script

Recently, I got a question on one of my previous blog posts: is there a possibility to pass two queries in the same run as arguments to the external procedure sp_execute_external_script?

Some of the arguments of the procedure sp_execute_external_script are enumerated. This is valid for the input dataset, and as the argument name @input_data_1 suggests, one can easily (and this is a valid assumption) think there could also be an @input_data_2 argument, and so on. Unfortunately, this is not true. The external procedure can hold only one T-SQL dataset, passed through this parameter.

There are many reasons for that; one is the cost of sending several datasets to the external process and back. This, by design, forces the user to rethink and pre-prepare the dataset (meaning, do all the data munging beforehand) prior to sending it into the external procedure.

But there are workarounds for passing an additional query (or queries) to sp_execute_external_script. I am not advocating this, and I strongly disagree with such usage, but here it is.

First, I will create two small datasets using T-SQL:

USE SQLR;
GO

DROP TABLE IF EXISTS dataset;
GO

CREATE TABLE dataset
(ID INT IDENTITY(1,1) NOT NULL
,v1 INT
,v2 INT
CONSTRAINT pk_dataset PRIMARY KEY (id)
)

SET NOCOUNT ON;
GO

INSERT INTO dataset(v1,v2)
SELECT TOP 1
 (SELECT TOP 1 number FROM master..spt_values WHERE type IN ('EOB') ORDER BY NEWID()) AS V1
,(SELECT TOP 1 number FROM master..spt_values WHERE type IN ('EOD') ORDER BY NEWID()) AS v2
FROM master..spt_values
GO 50

This dataset will be fed directly into the @input_data_1 argument. The next one will be used through the R code:

CREATE TABLE external_dataset
(ID INT IDENTITY(1,1) NOT NULL
,v1 INT
CONSTRAINT pk_external_dataset PRIMARY KEY (id)
)

SET NOCOUNT ON;
GO

INSERT INTO external_dataset(v1)
SELECT TOP 1
 (SELECT TOP 1 number FROM master..spt_values WHERE type IN ('EOB') ORDER BY NEWID()) AS V1
FROM master..spt_values
GO 50

Normally, one would use a single dataset like this:

EXEC sp_execute_external_script
     @language = N'R'
    ,@script = N'OutputDataSet <- data.frame(MySet);'
    ,@input_data_1 = N'SELECT TOP 5 v1, v2 FROM dataset;'
    ,@input_data_1_name = N'MySet'
WITH RESULT SETS
((
 Val1 INT
 ,Val2 INT
))

But by “injecting” ODBC into the R code, we can allow the external procedure to connect back to your SQL Server and fetch an additional dataset.

This can be done as follows:

EXECUTE AS USER = 'RR'; 
GO

DECLARE @Rscript NVARCHAR(MAX)
SET @Rscript = '
   library(RODBC)
   myconn <-odbcDriverConnect("driver={SQL Server};
         Server=SICN-KASTRUN;database=SQLR;uid=RR;pwd=Read!2$16")
  External_source <- sqlQuery(myconn, "SELECT v1 AS v3 
                    FROM external_dataset")
  close(myconn) 
  Myset <- data.frame(MySet)
   #Merge both datasets
   mergeDataSet <- data.frame(cbind(Myset, External_source));'

EXEC sp_execute_external_script
    @language = N'R'
   ,@script = @Rscript
   ,@input_data_1 = N'SELECT v1, v2 FROM dataset;'
   ,@input_data_1_name = N'MySet'
   ,@output_data_1_name = N'mergeDataSet'
WITH RESULT SETS
((
    Val1 INT
   ,Val2 INT
   ,Val3 INT
))

REVERT;
GO

And the result will be the two datasets merged, three columns in total:


which corresponds to the two source datasets:

-- Check the results!
SELECT * FROM dataset
SELECT * FROM external_dataset

There are, as already mentioned, several arguments against this approach, and I would not recommend it. Some are:

  • validating and keeping R code in one place
  • performance issues
  • additional costs of data transferring
  • using ODBC connectors
  • installing additional R packages (in my case RODBC package)
  • keeping different datasets in one place
  • security issues
  • additional login/user settings
  • firewall inbound/outbound rules setting

This, of course, can also be achieved with *.XDF file formats, if they are stored as files locally or on the server.

As always, code is available at Github.

Happy R-SQLing! 🙂

Installing R packages with rxInstallPackages in Microsoft R Server

The MicrosoftML package brings – in my opinion – a long-anticipated function for installing R packages for SQL Server and Microsoft R Server. And I am super happy.

Last year, in one of my previous blog posts, I showed how to install an R package from SSMS using sp_execute_external_script. Now, with the MicrosoftML package (part of Microsoft R Server 9.x and above), a new function is available that enables you to easily install packages, and a little bit more.

Code is relatively simple and straightforward:

USE SQLR;
GO

EXECUTE sp_execute_external_script
 @language = N'R'
 ,@script = N'

 packagesToInstall <- c("caret","tree","party")
 library(MicrosoftML)
 SqlServerCC <- RxInSqlServer(connectionString = "Driver=SQL Server;
 Server=SICN-KASTRUN\\SQLSERVER2017C2;Database=SQLR;
 Trusted_Connection=True;")
 rxInstallPackages(pkgs = packagesToInstall, owner = '''',
 scope = "shared", computeContext = SqlServerCC);';
GO

This is almost too easy to be true, but it is. Make sure to do a couple of things prior to running this code:

  1. set the compute environment to where your packages will be installed,
  2. set up the correct permissions and access,
  3. check the TCP/IP protocols.

In the rxInstallPackages function, use the computeContext parameter to set either your “Local” or your “SqlServer” environment. You can also set the scope to shared or private (the difference: a package installed with shared scope can be used by different users across different databases, whereas a private package is per user). You can also specify an owner if you are running this command outside the db_owner role.

Happy SQLR-ing!

Saving input and output with sp_execute_external_script using temporal table and file table (part #2)

In my previous blog post, Saving input and output with sp_execute_external_script, I explored the possibilities of capturing the R code that the external procedure internally sends to the Launchpad.exe program. A blog comment by reader Bob gave me an additional push to write a second part on the same topic. Thank you, Bob, for sharing this with us.

Bob explained how they are doing it, and you can read all about it here. To recap: the R code (as well as any additional packages) is stored on the local file system; it gets copied, zipped, transferred and inserted into a SQL table. When sp_execute_external_script is called, everything gets unzipped and executed.

I will not comment on the solution Bob provided, since I don’t know how their infrastructure, roles and security are set up. At this point, I am grateful for his comment. What I will say is that there is no straightforward or out-of-the-box solution. Furthermore, if your R code requires additional packages, storing the packages together with your R code is not a bad idea, regardless of the traffic or disk overhead. And versioning the R code is something that is definitely needed.
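A rough Python sketch of that zip-and-store workflow, using the stdlib zipfile and sqlite3 modules (the code_store table and all names here are hypothetical, not Bob's actual implementation):

```python
import io
import sqlite3
import zipfile

def store_zipped_script(conn, name, script_text):
    """Zip an R script in memory and insert the zipped bytes into a table."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
        zf.writestr(name + ".R", script_text)
    conn.execute("INSERT INTO code_store VALUES (?, ?)", (name, buf.getvalue()))

def load_zipped_script(conn, name):
    """Fetch the blob back and unzip it, ready to be executed."""
    blob = conn.execute("SELECT zipped FROM code_store WHERE name = ?", (name,)).fetchone()[0]
    with zipfile.ZipFile(io.BytesIO(blob)) as zf:
        return zf.read(name + ".R").decode()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE code_store (name TEXT, zipped BLOB)")
store_zipped_script(conn, "score_v1", "OutputDataSet <- InputDataSet")
print(load_zipped_script(conn, "score_v1"))  # OutputDataSet <- InputDataSet
```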

To continue from the previous post: getting or capturing the R code once it reaches the Launchpad is tricky, so storing the R code in a database table or on the file system seems a better idea.

Starting with original sample R code:

EXEC sys.sp_execute_external_script
     @language = N'R'
    ,@script = N'
       d <- InputDataSet 
       c <- data.frame(Num_V1 = c(1,2,3))
       c
       OutputDataSet <- c'
    ,@input_data_1 = N'SELECT 1 AS Nmbrs_From_R'

WITH RESULT SETS ((Numbers_From_R INT));

We can create a SQL table to make the R code persistent and always available:

CREATE TABLE R_code 
(id INT
,R NVARCHAR(MAX))
INSERT INTO R_code
SELECT 1, '
       d <- InputDataSet 
       c <- data.frame(Num_V1 = c(1,2,3))
       c
       OutputDataSet <- c'
-- (1 row(s) affected)

DECLARE @r_code NVARCHAR(MAX)
SELECT @r_code = R FROM R_code WHERE id = 1

EXEC sys.sp_execute_external_script
     @language = N'R'
    ,@script = @r_code
    ,@input_data_1 = N'SELECT 1 AS Nmbrs_From_R'
WITH RESULT SETS ((Numbers_From_R INT));

This yields the same results. To have the R code and T-SQL code side by side, I would suggest storing the T-SQL in the table as well.

DROP TABLE IF EXISTS R_code
CREATE TABLE R_code 
(id INT
,R NVARCHAR(MAX)
,SQLC NVARCHAR(MAX))

INSERT INTO R_code
SELECT 1, '
       d <- InputDataSet 
       c <- data.frame(Num_V1 = c(1,2,3))
       c
       OutputDataSet <- c','SELECT 1 AS Nmbrs_From_R'
-- (1 row(s) affected)

DECLARE @r_code NVARCHAR(MAX)
DECLARE @sql_code NVARCHAR(MAX)
SELECT @r_code = R FROM R_code WHERE id = 1
SELECT @sql_code = SQLC FROM R_code WHERE id = 1

EXEC sys.sp_execute_external_script
     @language = N'R'
    ,@script = @r_code
    ,@input_data_1 = @sql_code
WITH RESULT SETS ((Numbers_From_R INT));

Now, the best thing to do is to add some logging and versioning to the R_code table. The easiest way to achieve this is by using a temporal table.

TEMPORAL TABLE

Rewrite the original table:

CREATE TABLE R_code 
(
 id INT IDENTITY(1,1)
,CombinationID INT NOT NULL CONSTRAINT PK_ComboID PRIMARY KEY
,R NVARCHAR(MAX)
,SQLC NVARCHAR(MAX)
,Valid_From DATETIME2 GENERATED ALWAYS AS ROW START NOT NULL
,Valid_To DATETIME2 GENERATED ALWAYS AS ROW END NOT NULL
,PERIOD FOR SYSTEM_TIME (Valid_From, Valid_To)
)
WITH (SYSTEM_VERSIONING = ON);

Please note that the table will be represented slightly differently in SSMS (see the clock in the icon).

temp_table

Besides the actual (temporal) table, the system automatically creates a history table where all the changes are kept.
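If you are curious which history table was generated (the name is system-generated unless you specify HISTORY_TABLE explicitly in the WITH clause), a quick lookup against the catalog views shows the pairing:

```sql
-- Find the automatically created history table for the temporal table R_code
SELECT
     t.name AS TemporalTable
    ,h.name AS HistoryTable
FROM sys.tables AS t
JOIN sys.tables AS h
    ON t.history_table_id = h.object_id
WHERE t.name = 'R_code';
```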

Once this is done, I can store T-SQL and R-code.

INSERT INTO R_code (CombinationID, R, SQLC)
SELECT 1,'
       d <- InputDataSet 
       c <- data.frame(Num_V1 = c(1,2,3))
       c
       OutputDataSet <- c','SELECT 1 AS Nmbrs_From_R'
-- (1 row(s) affected)

Now I can run the query the same way as before:

DECLARE @r_code NVARCHAR(MAX)
DECLARE @sql_code NVARCHAR(MAX)
SELECT @r_code = R FROM R_code WHERE CombinationID = 1
SELECT @sql_code = SQLC FROM R_code WHERE CombinationID = 1

EXEC sys.sp_execute_external_script
     @language = N'R'
    ,@script = @r_code
    ,@input_data_1 = @sql_code
WITH RESULT SETS ((Numbers_From_R INT));
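To avoid repeating the variable declarations every time, the lookup-and-execute pattern above could be wrapped in a small stored procedure. This is only a sketch; the procedure name Run_R_Combination is hypothetical and not part of the original post:

```sql
-- Wrapper: look up the stored R and T-SQL code pair and execute it
CREATE OR ALTER PROCEDURE dbo.Run_R_Combination
    @CombinationID INT
AS
BEGIN
    DECLARE @r_code NVARCHAR(MAX), @sql_code NVARCHAR(MAX);

    SELECT
         @r_code   = R
        ,@sql_code = SQLC
    FROM R_code
    WHERE CombinationID = @CombinationID;

    EXEC sys.sp_execute_external_script
         @language = N'R'
        ,@script = @r_code
        ,@input_data_1 = @sql_code;
END;
GO

-- Usage:
EXEC dbo.Run_R_Combination @CombinationID = 1;
```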

Suppose a change is committed to either the R code or the SQL code; I will simulate this change with an UPDATE query:

-- Change the R code for CombinationID = 1; the temporal table keeps the old version
UPDATE R_code
SET R = '
       d <- InputDataSet 
       c <- data.frame(Num_V1 = c(1,2,3,4))
       c
       d
       OutputDataSet <- c'
,SQLC = 'SELECT 1 AS Nmbrs_From_R'
WHERE
    CombinationID = 1
-- (1 row(s) affected)

The best part is that the system maintains the versioning automatically, and I, as an end user, don't need to worry about changing the original T-SQL code that executes the R script. So once again I can execute the same query:

DECLARE @r_code NVARCHAR(MAX)
DECLARE @sql_code NVARCHAR(MAX)
SELECT @r_code = R FROM R_code WHERE CombinationID = 1
SELECT @sql_code = SQLC FROM R_code WHERE CombinationID = 1

EXEC sys.sp_execute_external_script
     @language = N'R'
    ,@script = @r_code
    ,@input_data_1 = @sql_code
WITH RESULT SETS ((Numbers_From_R INT));

And now I get the results from the updated R script and T-SQL code.

result_temporal
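Should I ever need an older version of the code, the temporal table lets me query across history with the FOR SYSTEM_TIME clause, for example:

```sql
-- Retrieve all versions (current and historical) of the code for CombinationID = 1
SELECT
     CombinationID
    ,R
    ,SQLC
    ,Valid_From
    ,Valid_To
FROM R_code FOR SYSTEM_TIME ALL
WHERE CombinationID = 1
ORDER BY Valid_From;
```

FOR SYSTEM_TIME AS OF '&lt;datetime&gt;' would similarly return exactly the version that was valid at a given point in time.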

FILE TABLE

Another way (among many) is to use a File Table. In one of my previous blog posts, I covered how to create and configure a File Table. In this scenario, we will do the following: create a file containing the R code, save it with the *.R extension, and upload it to the directory where the File Table expects the file-stream data to land.

Quickly check the configuration:

--- Check configurations
SELECT 
  DB_NAME(database_id) AS DbName
 ,non_transacted_access
 ,non_transacted_access_desc
 ,directory_name  
 ,*
FROM  sys.database_filestream_options
WHERE 
    DB_NAME(database_id) = db_name() --'FileTableRChart'

Then check where my R file resides:

SELECT 
     FT.Name AS [File Name]
    ,IIF(FT.is_directory=1,'Directory','Files') AS [File Category]
    ,FT.file_type AS [File Type]
    ,(FT.cached_file_size)/1024.0 AS [File Size (KB)]
    ,FT.creation_time AS [File Created Time]
    ,FT.file_stream.GetFileNamespacePath(1,0) AS [File Path]
    ,ISNULL(PT.file_stream.GetFileNamespacePath(1,0),'Root Directory') AS [Parent Path]
FROM 
    [dbo].[ChartsR] AS FT
LEFT JOIN [dbo].[ChartsR] AS PT
ON FT.path_locator.GetAncestor(1) = PT.path_locator
WHERE
    FT.File_type = 'R'

And you can see that I have created an R file with the name R_Combination1.R.

2017-04-17 19_46_14-Document1 - Word

We can access this file using OPENROWSET.

SELECT * FROM 
OPENROWSET(BULK N'\\****\RCharts\DocumentTable\R_Combination1.R',
   SINGLE_CLOB) AS R_Code

You can also do this using master.dbo.xp_cmdshell, whichever suits you better.

Once the code is read from the R file, you can simply continue and execute the external procedure.

-- Physical Location of FileTable
DECLARE @r_code NVARCHAR(MAX)
SELECT @r_code = BulkColumn FROM 
OPENROWSET(BULK N'C:\DataTK\00\R_Combination1.R', SINGLE_CLOB) AS R_Code

EXEC sys.sp_execute_external_script
     @language = N'R'
    ,@script = @r_code
    ,@input_data_1 = N'SELECT 1 AS Nmbrs_From_R'
WITH RESULT SETS ((Numbers_From_R INT));

And the result is the same. In this case, you might also consider using BCP, Bulk Copy, or xp_cmdshell; again, choose what suits your ecosystem best.
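One practical detail: OPENROWSET only accepts a literal path, so if you want to resolve the file location from the File Table itself rather than hard-coding it, you need a bit of dynamic SQL. A sketch, assuming the File Table dbo.ChartsR from above:

```sql
-- Resolve the file's UNC path from the File Table, then read it dynamically
DECLARE @path   NVARCHAR(4000)
       ,@stmt   NVARCHAR(MAX)
       ,@r_code NVARCHAR(MAX);

SELECT @path = file_stream.GetFileNamespacePath(1, 0)
FROM dbo.ChartsR
WHERE name = 'R_Combination1.R';

SET @stmt = N'SELECT @code = BulkColumn
              FROM OPENROWSET(BULK N''' + @path + N''', SINGLE_CLOB) AS R_Code;';

EXEC sys.sp_executesql
     @stmt
    ,N'@code NVARCHAR(MAX) OUTPUT'
    ,@code = @r_code OUTPUT;

EXEC sys.sp_execute_external_script
     @language = N'R'
    ,@script = @r_code
    ,@input_data_1 = N'SELECT 1 AS Nmbrs_From_R'
WITH RESULT SETS ((Numbers_From_R INT));
```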

With a File Table, versioning requires a bit of a workaround. In addition, you should implement a naming convention to store the files correctly, or add extra logic for storing and keeping track of the changes.

So in my case, I am keeping versioning at the file-name level, which can be done using T-SQL or by renaming the file on the file system and later creating hierarchies of files.
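With file-name-level versioning, picking the current version becomes a simple query against the File Table. A sketch, assuming a hypothetical naming convention such as R_Combination1_v1.R, R_Combination1_v2.R (the convention itself is an assumption, not from the original post):

```sql
-- Pick the most recently created R file for a given combination,
-- assuming versions share a common file-name prefix
SELECT TOP (1)
     name AS [File Name]
    ,file_stream.GetFileNamespacePath(1, 0) AS [File Path]
    ,creation_time AS [File Created Time]
FROM dbo.ChartsR
WHERE name LIKE 'R_Combination1%'
  AND file_type = 'R'
ORDER BY creation_time DESC;
```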

 

As always, Code is available at GitHub.

 

Happy coding!