FileTable and storing graphs from Microsoft R Server

FileTable has been around for quite some time now, and it is useful for storing files, documents, pictures and binary files in a designated SQL Server table – a FileTable. The best part of FileTable is that one can access the files from Windows or any other application as if they were stored on the file system (because they are), without making any other changes on the client.

And this feature is absolutely handy for storing outputs from Microsoft R Server. In this blog post I will focus mainly on persistently storing charts from statistical analysis.

First we need to ensure that FILESTREAM is enabled. Open SQL Server Configuration Manager and navigate to your running SQL Server instance. Right-click, select FILESTREAM and enable FILESTREAM for T-SQL access and I/O access. In addition, allow remote clients access to FILESTREAM data as well.


The next step is to enable the configuration in Management Studio:

EXEC sp_configure 'filestream_access_level' , 2;
GO
RECONFIGURE;
GO
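To verify that the setting took effect, one can query the server properties (just a quick sanity check; both are standard SERVERPROPERTY arguments):

SELECT
     SERVERPROPERTY('FilestreamConfiguredLevel') AS ConfiguredLevel
    ,SERVERPROPERTY('FilestreamEffectiveLevel') AS EffectiveLevel;

An effective level of 2 means T-SQL and local I/O access are enabled; 3 adds remote client access.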

For this purpose I have decided to create a dedicated database for storing charts created in R, and this database will have FileTable enabled.

USE master;
GO

CREATE DATABASE FileTableRChart 
ON PRIMARY  (NAME = N'FileTableRChart', FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER\MSSQL\DATA\FileTableRChart.mdf' , SIZE = 8192KB , FILEGROWTH = 65536KB ),
FILEGROUP FileStreamGroup1 CONTAINS FILESTREAM( NAME = ChartsFG, FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER\MSSQL\DATA\RCharts')
LOG ON (NAME = N'FileTableRChart_log', FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER\MSSQL\DATA\FileTableRChart_log.ldf' , SIZE = 8192KB , FILEGROWTH = 65536KB )
GO

ALTER DATABASE FileTableRChart
    SET FILESTREAM ( NON_TRANSACTED_ACCESS = FULL, DIRECTORY_NAME = N'RCharts' )
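To confirm that non-transacted access is in place for the database, the FILESTREAM options can be checked in the catalog view sys.database_filestream_options (again, just a sanity check):

SELECT
     DB_NAME(database_id) AS [Database]
    ,non_transacted_access_desc
    ,directory_name
FROM sys.database_filestream_options
WHERE database_id = DB_ID('FileTableRChart');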

So the folder RCharts will be available as BLOB storage to my FileTableRChart SQL Server database. Next, I add a FileTable to hold all the needed information on my charts.

USE FileTableRChart;
GO

CREATE TABLE ChartsR AS FILETABLE
WITH (
 FileTable_Directory = 'DocumentTable'
,FileTable_Collate_Filename = database_default  
);
GO
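Once the FileTable exists, the UNC root path under which its files will appear can be retrieved with the built-in FileTableRootPath function (run in the FileTableRChart database; the share name in the result depends on your instance configuration). This is the path the R code below will write to:

SELECT FileTableRootPath(N'dbo.ChartsR') AS FileTableRootPath;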

With the BLOB storage set up, we can now focus on the R code within T-SQL. The following R code will be used to generate histograms with a normal curve for a quick data overview (note, this is just a sample):

library(ggplot2)

x <- data.frame(val = c(1,2,3,6,3,2,3,4,5,6,7,7,6,6,6,5,5,4,8))
y <- data.frame(val = c(1,2,5,8,5,4,2,4,5,6,3,2,3,5,5,6,7,7,8))
x$class <- 'XX'
y$class <- 'YY'
d <- rbind(x,y)

# histogram of densities with a normal curve overlaid
gghist <- ggplot(d, aes(x=val)) + geom_histogram(binwidth=2, 
                  aes(y=..density.., fill=..count..))
gghist <- gghist + stat_function(fun=dnorm, args=list(mean=mean(d$val), 
                   sd=sd(d$val)), colour="red")
gghist <- gghist + ggtitle("Histogram of val with normal curve")  + 
                   xlab("Variable Val") + ylab("Density of Val")
plot(gghist)

This returns the diagram below, which will be further parametrized when inserted into the T-SQL code.

(Figure: histogram of val with normal curve)

Besides parametrization, I will add a loop to go through all the input variables and generate a diagram for each given variable/column of the SQL Server query passed through the sp_execute_external_script stored procedure.

Final code:

DECLARE @SQLStat NVARCHAR(4000)
SET @SQLStat = 'SELECT
                     fs.[Sale Key] AS SalesID
                    ,c.[City] AS City
                    ,c.[State Province] AS StateProvince
                    ,c.[Sales Territory] AS SalesTerritory
                    ,fs.[Customer Key] AS CustomerKey
                    ,fs.[Stock Item Key] AS StockItem
                    ,fs.[Quantity] AS Quantity
                    ,fs.[Total Including Tax] AS Total
                    ,fs.[Profit] AS Profit

                    FROM [Fact].[Sale] AS  fs
                    JOIN dimension.city AS c
                    ON c.[City Key] = fs.[City Key]
                    WHERE
                        fs.[customer key] <> 0'

DECLARE @RStat NVARCHAR(4000)
SET @RStat = 'library(ggplot2)
              library(stringr)
              cust_data <- Sales
              n <- ncol(cust_data)
              for (i in 1:n)
                {
                  # histograms with a normal curve only make sense for numeric columns
                  if (!is.numeric(cust_data[[i]])) next
                  path    <- ''\\\\SICN-KASTRUN\\mssqlserver\\RCharts\\DocumentTable\\Plot_''
                  colid   <- data.frame(val = cust_data[[i]])
                  colname <- names(cust_data)[i]
                  gghist  <- ggplot(colid, aes(x = val)) +
                             geom_histogram(binwidth = 2, aes(y = ..density.., fill = ..count..))
                  gghist  <- gghist + stat_function(fun = dnorm,
                             args = list(mean = mean(colid$val), sd = sd(colid$val)), colour = "red")
                  gghist  <- gghist + ggtitle("Histogram of val with normal curve") +
                             xlab("Variable Val") + ylab("Density of Val")
                  path    <- paste(path, colname, ''.jpg'')
                  path    <- str_replace_all(path, " ", "")
                  # pass the plot explicitly so ggsave does not depend on the last
                  # displayed plot; alternatively open a jpeg(file = path) device,
                  # print(gghist) and then dev.off()
                  ggsave(filename = path, plot = gghist, width = 4, height = 4)
                }';

EXECUTE sp_execute_external_script
     @language = N'R'
    ,@script = @RStat
    ,@input_data_1 = @SQLStat
    ,@input_data_1_name = N'Sales'

I am using the ggsave function, but the jpeg() graphics device is also an option – a matter of taste. The variable path should point to your local FileTable directory. Note that @input_data_1_name = N'Sales' is what exposes the result of the T-SQL query to the R script as the data frame Sales.

Now I have my graphs and charts stored persistently in the FileTable and can retrieve information on the files with a simple query:

SELECT FT.Name
,IIF(FT.is_directory=1,'Directory','Files') [File Category]
,FT.file_type [File Type]
,(FT.cached_file_size)/1024.0 [File Size (KB)]
,FT.creation_time [Created Time]
,FT.file_stream.GetFileNamespacePath(1,0) [File Path]
,ISNULL(PT.file_stream.GetFileNamespacePath(1,0),'Root Directory') [Parent Path]
FROM [dbo].[ChartsR] FT
LEFT JOIN [dbo].[ChartsR] PT
ON FT.path_locator.GetAncestor(1) = PT.path_locator

(Screenshot: query results listing the stored charts in the FileTable)
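Since every chart is just a row in the FileTable, an individual file can also be read back as binary content with plain T-SQL (a sketch; the file name Plot_Quantity.jpg is hypothetical and depends on which charts were generated):

SELECT name
      ,file_stream
FROM [dbo].[ChartsR]
WHERE name = 'Plot_Quantity.jpg';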

Browsing through the charts is now much easier for multiple purposes.


There might be some security issues to iron out. I have used mklink to create a logical link pointing to the FileTable directory (something along the lines of mklink /D C:\RCharts \\SICN-KASTRUN\mssqlserver\RCharts\DocumentTable).

(Screenshot: mklink in an administrator command prompt)

You might also want to use the Local Group Policy Editor to grant the MSSQLLaunchpad account write permissions to the FileTable directory.

(Screenshot: Local Group Policy Editor)

Code is available at GitHub.

Happy R-SQLing!

 

#SQLSatDenmark 2016 wrap up

SQLSatDenmark 2016 took place in Lyngby at the Microsoft Denmark headquarters. Apart from the fact that Lyngby is an absolutely cute town, the Microsoft HQ is nice as well.

(Photo: at Microsoft Denmark)

(Photo: Lyngby)

On the evening before the precon day, a dinner at the MASH restaurant was arranged for all the precon speakers – Tim Chapman, Andre Kamman, Kevin Kline and myself – together with hosts Regis Baccaro and Kenneth M. Nielsen.


After delicious steaks, pints of beer and interesting conversations, the precon day started.

My precon room was full with 30 attendees, lots of material, and demos from the field on R and SQL Server integration to finish with. Feedback was great, obviously. The problems I had were that I had prepared too much material – which was all handed over (all the code) anyway, so people could continue learning back at home – and that I was focusing too much on statistics. I finished at 16:30 and was available in the Microsoft HQ until 17:30 for any other questions.

The next day, SQLSaturday Denmark started early and in total 280 attendees showed up. It was an easy-going and well-organized event: great sponsors, nice swag, a raffle and all the good stuff one can find at such an event – a juice bar, good food, and closing the event with a hot dog stand and SQL Beer. Yes, the traditional Danish SQLSaturday beer 🙂


I delivered a session on machine learning algorithms in Microsoft Azure, explaining which algorithm to use with which dataset and what kind of statistical problem can be solved with each. Great feedback from the crowd and very interesting questions – more or less very statistical and data-mining questions. And I truly loved it.

Thanks to all the sponsors, organizers, attendees and the SQL family. It was great to see you.

 

Size of XDF files using RevoScaleR package

It came to my attention that the size of an XDF (external data frame) file can change drastically based on the compute context and environment. While testing the output of a dataset I was working on in SQL Server Management Studio, I was simultaneously testing the R code in RTVS or RStudio, and I noticed the file growing.

The following stored procedure will run a simple test:

CREATE PROCEDURE rxImport_Test ( @rowsPerRead INT )
AS
BEGIN
    DECLARE @RStat NVARCHAR(4000)
    SET @RStat = 'library(RevoScaleR)
                  #rxSetComputeContext("RxLocalSeq")
                  ptm <- proc.time()
                  inFile <- file.path(rxGetOption("sampleDataDir"), "AirlineDemoSmall.csv")
                  filename <- "AirlineDemoSmall_'+CAST(@rowsPerRead AS VARCHAR(100))+'_TSQL_NC.xdf"
                  rxTextToXdf(inFile = inFile, outFile = filename,  stringsAsFactors = T, rowsPerRead = '+CAST(@rowsPerRead AS VARCHAR(100))+', overwrite=TRUE)
                  outFile <- file.path(rxGetOption("sampleDataDir"), filename)
                  rxImport(inData = inFile, outFile = outFile, overwrite=TRUE)
                  d <- proc.time() - ptm
                  filesize <- data.frame(file.size(filename))    
                  time     <- data.frame(d[3])
                  RowsPerRead <- data.frame('+CAST(@rowsPerRead AS VARCHAR(100))+')
                  filename_xdf <- data.frame(filename)
                  ran <- data.frame(Sys.time())
                  OutputDataSet <- cbind(as.character(filesize), time, RowsPerRead, filename_xdf, ran)';
    EXECUTE sp_execute_external_script
          @language = N'R'
         ,@script = @RStat
    WITH RESULT SETS (( 
                         Filesize NVARCHAR(100)
                        ,Time_df NVARCHAR(100)
                        ,RowsPerRead NVARCHAR(100)
                        ,filename_xdf NVARCHAR(100)
                        ,DateExecute NVARCHAR(100)
                        ))
END

But let’s first understand and test the compute context and the path to the data.

-- Make sure your path location is pointing to the RevoScaleR library folder!
EXECUTE sp_execute_external_script
      @language = N'R'
     ,@script = N'library(RevoScaleR) 
                OutputDataSet <- data.frame(rxGetOption("sampleDataDir"))'
WITH RESULT SETS (( 
                    path_folder NVARCHAR(1000)
                    ))

-- check for ComputeContext
DECLARE @RStat NVARCHAR(4000)
SET @RStat = 'library(RevoScaleR)
             cc <- rxGetOption("computeContext")
             OutputDataSet <- data.frame(cc@description)';
EXECUTE sp_execute_external_script
      @language = N'R'
     ,@script = @RStat
WITH RESULT SETS ((compute_context NVARCHAR(100)))

On my computer, this looks like the following:

(Screenshot: sample data directory and compute context results)

Now we will run the rxImport_Test procedure with different chunk sizes (this is what I will test) and observe the execution times.
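The rxImport_results table used below to collect the output is not defined in this post; a minimal sketch matching the procedure's result set could be:

CREATE TABLE dbo.rxImport_results
(
     Filesize     NVARCHAR(100)
    ,Time_df      NVARCHAR(100)
    ,RowsPerRead  NVARCHAR(100)
    ,filename_xdf NVARCHAR(100)
    ,DateExecute  NVARCHAR(100)
);
GO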

INSERT INTO rxImport_results
EXEC rxImport_Test @rowsPerRead = 2;
GO

INSERT INTO rxImport_results
EXEC rxImport_Test @rowsPerRead = 20;
GO

INSERT INTO rxImport_results
EXEC rxImport_Test @rowsPerRead = 200;
GO

INSERT INTO rxImport_results
EXEC rxImport_Test @rowsPerRead = 2000;
GO

Running the procedure with different chunk sizes yields interesting results:

(Screenshot: execution times and file sizes for the different rowsPerRead values)
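To compare the runs side by side, the collected results can also be queried directly (assuming the rxImport_results sketch above; Time_df is stored as text, hence the cast):

SELECT RowsPerRead
      ,Filesize
      ,Time_df AS ExecutionTime
      ,filename_xdf
FROM dbo.rxImport_results
ORDER BY CAST(Time_df AS FLOAT);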

Now, let’s see the summary information on these files / datasets.

(Screenshot: file size and number of blocks for each of the generated XDF files)

Considering that the original file holds 600,000 rows, the number of blocks for each of the files changes with the chunk size, and with it the size of the file grows.

Retrieving information on the number of blocks

DECLARE @RStat NVARCHAR(4000)
SET @RStat = 'library(RevoScaleR)    
              info <- rxGetInfoXdf(data="AirlineDemoSmall_20000000_TSQL_NC.xdf", getVarInfo = TRUE)    
              OutputDataSet <- data.frame(info$numBlocks)';

EXECUTE sp_execute_external_script
      @language = N'R'
     ,@script = @RStat
WITH RESULT SETS (( 
                    nof_blocks NVARCHAR(100)))

one can see the change between the files and where the optimal block size lies. In my test, the number of blocks would be 3 to 30 at most to get maximum performance when creating the XDF file. With 600,000 rows in the source file, this means from 20,000 up to 200,000 rows per block would yield the best performance results (600,000 / 30 = 20,000 and 600,000 / 3 = 200,000). Beyond that I haven't found a golden rule for the block size, so take caution, especially when dealing with larger files.
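As a quick sanity check on that arithmetic – dividing the 600,000 source rows by the chunk size gives the expected number of blocks:

SELECT rowsPerRead
      ,CEILING(600000.0 / rowsPerRead) AS expected_blocks
FROM (VALUES (2000), (20000), (200000)) AS t(rowsPerRead);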

I ran the test a couple of times to check the consistency of the results, and they hold water. As for the file size: what is shown here is the internal representation of the *.xdf file; the size of the file as an external structure should not differ much as the block size changes, but performance does!

Code is available at GitHub.

Happy R-SQLing!

Sharing thoughts on satRdays R Conference, Budapest 2016 #satRdays


The first satRday, held in Budapest on September 3, 2016, is complete. This one-day, community-driven regional event – very affordable, good for networking and for catching the latest from the R community – is over. And it was a blast! A great time, a nice atmosphere, lots of interesting people, and where there is good energy, there is a will to learn new things. And that's what we did!


The satRday event took place at the MTA TTK building in Budapest on September 3rd, 2016. When the morning workshops were over, the event took off with a keynote by Gabor Csardi, who shared his experiences with R and CRAN and the internals of his packages, followed by the speaker sessions, lightning talks and, at the end, the visualization competition.

The speakers presenting were:

(Photo: the speakers)

With an absolutely outstanding schedule:

(Screenshot: the satRday @ Budapest 2016 schedule)

The sessions were outstanding and the people were great, with talks covering practical, technical, visualization and package-related topics, and I was thrilled to be part of it.


(Photo: selfie with Bence and Gergely)

Closing the day with pizza and visualization talks.


Since it was an R event, some statistics: 192 registrations -> 170 showed up (an 11% drop-off rate), with speakers from 19 countries. Circa 180 lunches served, lots of coffees (I had 4) 🙂 and tea drunk, and the highest density of R package authors in Budapest on September 3, 2016.

 (some photos are courtesy of Gergely)

And some Twitter statistics as well:

490 tweets with hashtag #satRdays (from 01SEP2016 – 04SEP2016)

Top 10 most active Twitter handles (in descending order): @tomaz_tsql, @romain_francois, @BenceArato, @SteffLocke, @daroczig, @torokagoston, @thinkR_fr, @InvernessRug, @Emaasit, @matlabulous – and many, many others…

The most retweeted tweet was by Kate Ross-Smith (@LaSystemistaria).


The most adorable tweets (I think we can all agree) were the ones of Romain proposing to his long-time girlfriend Cecile. The tweet scored the highest number of favorites (112 at the time of writing this post)!


Word associations with the official hashtag #satRdays were:

$satrdays
      lunch       kboda     putting     romance        buns         pic
       0.49        0.43        0.43        0.43        0.37        0.37
  andypryke       where        back       break       enjoy   firstever
       0.35        0.35        0.32        0.27        0.27        0.25
         gt       proud     sponsor       super   tomorrows          we
       0.25        0.25        0.25        0.25        0.25        0.24
satrdaysorg    proposal
       0.22        0.20

Love correlates with satRdays very well these days! 🙂 Not to mention food! Overall, 55% of the tweets had a positive and 45% a neutral sentiment.

To conclude: just a great event! If you missed it, well… don't do it again! 🙂

Thanks again to the organizers (thank you Gergely, Steff and the crew) and to all the speakers, volunteers, sponsors and the R Consortium.

And just a couple of words on my presentation, Microsoft R Server with SQL Server 2016: it was very well received. I showed what a great job Microsoft did and how well the RevoScaleR package performs, and I got a couple of interesting questions and people coming up to me afterwards to learn more.