Little useless-useful R functions – Looping through variable names and generating plots

Facets in ggplot2 are great for showing multiple plots on a single canvas. While this covers many scenarios, there might be a case where you want to save every combination of x and y variables as a separate plot file. A useless scenario, and yet again somehow useful.

Combination of Species and Petal.Width as boxplot

Given an x-variable (in this case Species), we would like to get as many as four plots, each time with a different y-variable (e.g. Petal.Width). So the combinations would be:

  • Species x Petal.Width
  • Species x Petal.Length
  • Species x Sepal.Width
  • Species x Sepal.Length

Let's create a helper function that takes an input string and converts it into a variable for the boxplot:

# Helper function (ggplot2 for the plot, rlang for sym())
library(ggplot2)
library(rlang)

Iris_plot <- function(df = iris, y) {
  ggplot(df, aes(x = Species, y = !!sym(y))) +
    geom_boxplot(notch = TRUE) +
    theme_classic(base_size = 10)
}
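A quick one-off call confirms the helper works before we loop:

# Single check: boxplot of Petal.Width by Species
Iris_plot(df = iris, y = "Petal.Width")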

Once we have the helper function defined, loop through the variables of the dataset:

# Main loop through the columns of the dataset
variableR <- setdiff(names(iris), "Species")  # the four numeric y-variables

for (varR in variableR) {
  name <- paste0(varR, "_x_Species")
  png(paste0(name, ".png"))
  print(Iris_plot(df = iris, y = varR))
  dev.off()
}

At the end, you will have the files in your working directory (check the path with getwd()), each holding one combination as a graph.

As always, the code is available on GitHub in the same Useless_R_function repository.

Happy R-coding!


Using SQL for R data.frames with sqldf

There are many R packages for querying SQL databases. Recently, I was looking into the sqldf package (see its CRAN documentation).

There are so many great advantages (simply running SQL statements; creating, loading and deleting data in data.frames; connectivity to many databases; support for SQL functions, data types and much more), but the one that was really a major win is the interaction between data frames and the SQL language.

There are also many great packages for manipulating, wrangling and engineering data frames: tidyverse, dplyr, data.table, purrr, tibble, magrittr and many more. A curated list of relevant packages for data scientists can be found here.

But subsets of a data.frame can also be taken with SQL syntax, which is especially handy for everyone with a SQL background. This blogpost shows the simplicity of using this package and compares it with base R and dplyr.

Let’s create a data.frame with some sample data. I will use the iris dataset.

iris <- iris

And load both packages:

library(dplyr)
library(sqldf)

So let’s say we want to get a particular column from the dataset, filtered on its values. In base R:

iris[iris$Sepal.Width >= 3.0,]$Sepal.Width

using dplyr:

iris %>%
  select(Sepal.Width) %>%
  filter(Sepal.Width>=3.0)

and using sqldf:

sqldf("select [Sepal.Width] from iris
      where
        [Sepal.Width]  >= 3.0")
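The same flavour comparison holds for aggregations. Here is a quick sketch (standard sqldf/dplyr usage, not from the original repository) computing the average sepal width per species both ways:

# sqldf version: SQLite's avg() aggregate with GROUP BY
sqldf("select Species, avg([Sepal.Width]) as avg_width
      from iris
      group by Species")

# dplyr equivalent
iris %>%
  group_by(Species) %>%
  summarise(avg_width = mean(Sepal.Width))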

All in all, it comes down to your flavour of choice; whichever is most convenient is up to you.

As always, the code is available on GitHub in the same Useless_R_function repository.


Little useless-useful R functions – Letter frequency in a vector of numbers

So here is a useless fact: there is no letter A in the numbers – written as words – from 1 to 100. And of course, we wanted to put this to the test and check if it holds water.

And sure enough, here is the result:

Two sets of functions were created in this case (it could all be stuffed into a single one and super-simplified, but it is all about uselessness).

Getting the words for a number:

# Function: return the English word(s) for a number between 1 and 100
word_a_number <- function(numb) {

  basLet <- c('one','two','three','four','five','six','seven','eight','nine','ten',
              'eleven','twelve','thirteen','fourteen','fifteen','sixteen','seventeen','eighteen','nineteen',
              'twenty','thirty','forty','fifty','sixty','seventy','eighty','ninety','one hundred')
  basNum <- c(1:20, 30, 40, 50, 60, 70, 80, 90, 100)
  df <- data.frame(num = basNum, let = as.character(basLet))

  if (numb <= 20) {
    # 1 to 20 are looked up directly
    im <- df[which(df$num == numb), ]$let
    print(paste(im, collapse = NULL))
  } else if (numb %% 10 == 0) {
    # round tens (and 100) are looked up directly as well
    e <- df[which(df$num == numb), ]$let
    print(paste0(e, collapse = NULL))
  } else {
    # everything else is "<tens>-<ones>", e.g. "forty-two"
    sec <- numb %% 10
    fir <- as.integer(numb / 10) * 10
    f_im <- df[which(df$num == fir), ]$let
    s_im <- df[which(df$num == sec), ]$let
    res <- paste0(f_im, "-", s_im, collapse = NULL)
    print(res)
  }
}

This function converts a number within the given boundaries into words. Since the first ten words and the words from eleven to twenty differ significantly, I put them all in a data frame. The next step handles the increments of ten; after that could come hundred, thousand, etc.
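For example, calling the function for a number above twenty:

word_a_number(42)
# [1] "forty-two"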

Once this function is established and returns the word for a given number, a little helper counter does the rest of the job:

# Function to count letter frequency across a vector of words
freqLet <- function(x) {
  word <- tolower(unlist(strsplit(x, "")))  # split every word into single letters
  word_table <- table(word)
  return(word_table)
}

getFreq <- function(vect) {
  df <- data.frame(word = as.character(), stringsAsFactors = FALSE)
  for (i in 1:length(vect)) {
    df[i, 1] <- as.character(word_a_number(i))
    a <- freqLet(df$word)   # recount over all words collected so far
  }
  return(a)
}

And once this is loaded into the environment, we need to run the complete set:

######### Let's check the complete set of numbers

# Automate the function, get a vector of first 100 numbers
vect <- c(1:100)

#Is there A in first 100 words?
getFreq(vect)

And this returns a table of letter frequencies.
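To make the useless fact explicit, a one-liner (reusing getFreq() and vect from above) checks for the letter directly:

# Is the letter "a" among the counted letters?
"a" %in% names(getFreq(vect))
# expected: FALSE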

There are some gotcha moments 🙂

It is useless (and that’s the catch) to write down the words of all the numbers. If you only create a dictionary of the non-repetitive words (like the ones from 11 to 19 or from 0 to 10), you can check just these subsets for the presence of the letter A 🙂

Another one: if you – by some chance – decide to write the words as "hundred and ten" (110), you will get a high frequency of the letter A, unless you decide to write it as "hundred-ten".

And the last gotcha moment: the letter A would be present only in the word AND, used as a helper word for easier pronunciation.

And the really last gotcha – the next natural occurrence of the letter A is in the word thousand (1000) 🙂 so the number words stay A-free all the way up to 999, as long as you skip the "and".

As always, the code is available on GitHub in the same Useless_R_function repository.

Happy R-coding!


Little useless-useful R functions – Using L-Systems for useless writing

Writing useless strings has been a focus of this series, and L-systems (Lindenmayer systems) are no different. An L-system is a set or string of characters that is rewritten on each iteration, following a set of rules. Probably the most famous examples are the fractal tree, the Sierpinski triangle and the Koch curve.

It can be represented as an iterative (recursive) set of symbols that follows a set of rules.

These recursions follow strict rules in the case of the fractal tree or the Koch curve, but our useless function follows sheer randomness.

To kick off this function, let’s load the Turtle:

library(TurtleGraphics)

This adorable animal will do the tedious, useless, slow and tangled walk.

We start with a common function:

# Common recursive drawing function (a Koch-curve-style bump; j is carried along but unused)
turtlebump <- function(i, j) {
  if (i==0) {
    turtle_forward(10)
  } else {
    turtlebump(i-1, j)
    turtle_left(60)
    turtlebump(i-1, j)
    turtle_right(60)
    turtle_right(60)
    turtlebump(i-1, j)
    turtle_left(60)
  }
}
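Before we add randomness, this helper can be run deterministically on its own. A minimal sketch, where the canvas size, the initial turn and the recursion depth are my arbitrary choices:

# Deterministic run: a depth-3 Koch-style bump, no randomness involved
turtle_init(600, 500, mode = "clip")
turtle_hide()
turtle_do({
  turtle_right(90)   # face east so the curve unfolds across the canvas
  turtlebump(3, 0)   # the second argument is unused by the helper
})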

We will have the random part generated by a function:


random_turtle <- function() {

  f <- ""

  # returns one random turn command as text, e.g. "turtle_left(42)\n"
  single_com <- function() {
    list_com <- c("turtle_left(", "turtle_right(")
    angle <- sample(1:120, 1, TRUE)
    com <- sample(list_com, 1, TRUE)
    return(paste0(com, angle, ")\n"))
  }

  comm1 <- "set.seed(2908)
    turtle_init(600, 500, 'clip')
    turtle_hide()
    i <- 8
    j <- 500
    turtle_do({"

  # glue ten random turns together
  for (i in 1:10) {
    sc <- single_com()
    f <- paste(f, sc, collapse = NULL)
    comm2 <- f
  }

  comm3 <- "
    turtlebump(i,j)
    })"

  # build the full script as text and evaluate it
  fin <- paste0(comm1, comm2, comm3)
  eval(parse(text = fin))
}

And to fire the function, just send the poor turtle on its tangled walk:

#run random function
random_turtle()

Thank you Turtle.

p.s.: No animals were harmed during function coding!

As always, the code is available on GitHub in the same Useless_R_function repository.

Happy R-coding!


Little useless-useful R functions – Use pipe %>% in ggplot2

Using the pipe %>% to chain commands is very intuitive when building up a sequence of data-preparation steps. The visualization library ggplot2, however, uses the "+" (plus) sign to do all its chaining. What if we were to replace it with the pipe sign?

So imagine your typical ggplot command:

library(ggplot2)

#sample DataSet
iris <- iris
ggplot(iris, aes(Sepal.Length, Sepal.Width, colour = Species)) + geom_point()

Which produces a typical graph:

And for the sake of useless functionality, what if the ggplot command used a pipe? The code would look like:

ggplot(iris, aes(Sepal.Length, Sepal.Width, colour = Species)) 
              %>% geom_point()
              %>% theme_bw()

Pretty nifty and still very useless.

With the help of the “ToPipe” function, this can be achieved:

ToPipe <- function(ee) {
  this_fn <- rlang::call_name(ee)        # name of the outermost call, e.g. "%>%"
  updated_args <- rlang::call_args(ee)   # its arguments (left- and right-hand side)
  
  if (identical(this_fn, "%>%") || length(updated_args) == 0) {
    # rebuild the same call with "+" instead of "%>%" and evaluate it
    fn_2 <- rlang::call2("+", !!!updated_args)
    eval(fn_2)
  } else {
   eval(ee)
  }
}


### pipe version
fun <- quote(ggplot(iris, aes(Sepal.Length, Sepal.Width, colour = Species)) 
              %>% geom_point())
ToPipe(fun)
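Note that ToPipe only rewrites the outermost call, so it handles a single %>%. A recursive variant – a rough sketch, not part of the original function – rewrites every pipe in the call tree, so the longer chain from the teaser above works too:

# Recursive sketch: replace every %>% in the call tree with "+"
ToPipeRec <- function(ee) {
  if (is.call(ee) && identical(rlang::call_name(ee), "%>%")) {
    args <- lapply(rlang::call_args(ee), ToPipeRec)  # rewrite nested pipes first
    return(rlang::call2("+", !!!args))
  }
  ee
}

fun2 <- quote(ggplot(iris, aes(Sepal.Length, Sepal.Width, colour = Species)) 
              %>% geom_point()
              %>% theme_bw())
eval(ToPipeRec(fun2))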

As always, the code is available on GitHub in the same Useless_R_function repository.

Happy R-coding!


Little useless-useful R functions – Useless R poem for Valentine

A gimmick rather than a poem: useless R code for your Valentine.

The code for this heart-shaped useless poem:

library(tidyverse)

ValentinePoem <- function() {
  # data_frame() and gather() are superseded (by tibble() and pivot_longer()),
  # but both still work when tidyverse is loaded
  df <- data_frame(sq = seq(-30, 0, 0.005),
                   x1 = sin(sq) * sin(sq),
                   x2 = x1 * -1,
                   y  = sqrt(cos(sq)) * cos(200 * sq) + sqrt(abs(sq)) - 0.7 * (4 - sq^2)^0.01) %>%
    gather(heart, x, x1, x2)

  p <- ggplot(df, aes(x, y)) +
    geom_polygon(fill = "Red") +
    theme_void() +
    geom_text(size = 6, aes(x = 0, y = 0, label = "Errors are red, \n
      Reserved words are blue, \n
      I am here writing this useless\n
      R heart function for you!"), col = "black") +
    theme(legend.position = "none")
  return(p)
}

And once you have the function in your environment, just run:

# Run function
ValentinePoem()

And for the poets, here is the R version of the poem:

Errors are red,
Reserved words are blue,
I am here writing this useless
R heart function for you!

Happy R-coding and happy Valentine’s day!

As always, the code is available on GitHub in the same Useless_R_function repository.


Little useless-useful R functions – R Version

Printing out the R version is absolutely no fun, so the following useless-useful R function will make it a little bit more interesting.

The result should be a printout of "Hello R" together with the version.

Yes, I am rocking an older version of R, but this was to test the behaviour across different R versions and operating systems (it works on Windows and macOS).

The function is fairly simple, as it uses the unserialize() function from base R. In addition, I am adding a little bit of dplyr to chain the decoding steps with the pipe.

library(dplyr) # provides the %>% pipe used below

HelloRversion <- function(text=TRUE){

  if (text==TRUE){ 
    # Get some text
    text_R <- "580a000000030003060300030500000000055554462d38000000100000000100040009000001284820202048204545454545204c20202020204c2020202020204f4f4f2020202020202052525252202020202121200a4820202048204520202020204c20202020204c20202020204f2020204f20202020202052202020522020202121200a4848484848204545454545204c20202020204c20202020204f2020204f20202020202052525252202020202121200a4820202048204520202020204c20202020204c20202020204f2020204f202020202020522020205220202020200a4820202048204545454545204c4c4c4c4c204c4c4c4c4c20204f4f4f20202020202020522020205220202021210a2d2d2d2d2d2d2d2d2d2d2d2d2d2d2d2d2d2d2d2d2d2d2d2d2d2d2d2d2d2d2d2d2d2d2d2d2d2d2d2d2d2d2d2d2d0a202020202020202020207c207665723a20"
  } else {
    text_R <- "580a000000030003060300030500000000055554462d380000001000000001000400090000021d202020202020202020202020202020202020202020202020202020202020202020205f5f0a202020202020202020202020202020202020202020202020202020202020205f2e2d7e2020290a20202020202020202020202020202020202020205f2e2e2d2d7e7e7e7e2c272020202c2d2f20202020205f0a20202020202020202020202020202020202e2d272e202e202e202e272020202c2d272c27202020202c2720290a2020202020202020202020202020202c272e202e202e205f2020202c2d2d7e2c2d275f5f2e2e2d2720202c270a202020202020202020202020202c272e202e202e202028402927202d2d2d7e7e7e7e2020202020202c270a2020202020202020202020202f2e202e202e202e20277e7e202020202020202020202020202c2d270a20202020202020202020202f2e202e202e202e202e202020202020202020202020202c2d270a202020202020202020203b202e202e202e202e20202d202e20202020202020202c270a2020202020202020203a202e202e202e202e202020202020205f20202020202f0a20202020202020202e202e202e202e202e20202020202020202020602d2e3a0a202020202020202e202e202e202e2f20202d202e20202020202020202020290a2020202020202e20202e202e207c20205f5f5f5f5f2e2e2d2d2d2e2e5f2f200a7e2d2d2d7e7e7e7e2d2d2d2d7e7e7e7e202020202020202020202020207e7e0a2020202020207c207665723a20"
  }  
  
  # Get R version
  vR<- trimws(gsub("\\(.*?\\)", "", sub("R version ","",R.version$version.string)))
  
    
  unserialized_vR <- text_R %>% 
      {substring(., seq(1, nchar(.), 2), seq(2, nchar(.), 2))} %>% 
          paste0("0x", .) %>% 
           as.integer %>% 
            as.raw %>% 
             unserialize()
  
  unserialized_vR <- paste0(unserialized_vR,vR,' |')
  
  
  cat("\014")   
  #cat("\f") if running on Windows OS
  cat(unserialized_vR)
}

The serialized text is essentially converted ASCII art, and it is output to the console along with the R engine version.
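If you want to produce such a blob from your own ASCII art, the same trick works in reverse. A small sketch (the art string here is just a placeholder):

# Turn any string into the same kind of hex blob
my_art <- "HELLO R !!\n| ver: "   # placeholder ASCII art
hex_blob <- paste0(as.character(serialize(my_art, connection = NULL)), collapse = "")

# Round trip: decode it the same way HelloRversion() does
bytes <- as.raw(strtoi(substring(hex_blob,
                                 seq(1, nchar(hex_blob), 2),
                                 seq(2, nchar(hex_blob), 2)), base = 16L))
identical(unserialize(bytes), my_art)  # TRUE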

To run the function, simply call:

HelloRversion()

But if you want to get creative, call the function with the argument set to FALSE:

HelloRversion(FALSE)

And get a little dolphin as an ASCII creature.

<<< Not giving you the picture, go check the code for yourself! >>>

Making versions fun – the code is available on GitHub in the same Useless_R_function repository.

Happy R coding and stay safe!


New features in Power BI for Data Analysts – Small multiples, Anomaly Detection and Zoom on visuals

Great new features have been bundled and are now available in Power BI. With the December 2020 update, all of the features described in this blog post are available.

Once you have downloaded the latest update (as of writing this blog, the December 2020 update), open Power BI Desktop, head to Options and Settings -> Options and go to Preview features.

Make sure to have Anomaly detection and Small multiples checked, then restart Power BI Desktop.

Small multiples

Small multiples is a layout of small charts over a grouping variable, aligned side by side and sharing a common scale, scaled so that all the values (by grouping or categorical variable) fit on multiple smaller graphs. An analyst should immediately be able to see and tell the differences across the grouping variable (e.g.: city, color, type, …) given the visualized data.

In Python, we know this as a trellis plot or FacetGrid (seaborn), or simply as subplots (Matplotlib).

In R, this is usually referred to as facets (ggplot2).
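For comparison, small multiples take a single line of faceting in R – a minimal ggplot2 sketch with the iris dataset:

library(ggplot2)
# One small chart per value of the grouping variable
ggplot(iris, aes(Sepal.Length, Sepal.Width)) +
  geom_point() +
  facet_wrap(~ Species)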

These small multiples are customizable: you can set the number of graphs per column and row, and they can be line charts, bar charts or area charts. I am positive that more visuals will be supported in the following updates (the feature is still in preview as of writing this post).

Anomaly detection

Anomaly detection is the process of detecting and finding values that deviate from the normal (base) line; it can be described as outlier detection. Anomaly detection currently supports finding anomalies in time-series data and can also provide an explanation to support root-cause analysis. This is part of adding more out-of-the-box analytics to Power BI (like key influencers, the decomposition tree and many additional visuals based on R or Python available in the marketplace).

You can also fine-tune the sensitivity and get an explanation of the anomalies by adding a variable for the anomaly to be explained by.

Visual Zoom Slider

The visual zoom slider was introduced in the November 2020 Power BI update and gives you the option to zoom in on a dataset without having to use or change the filters. The zoom slider examines a smaller portion of the chart and gives you a detailed look at the data.

This is a step in the right direction for analysts, and a great response to the capabilities that R or Python visualisation libraries, like Plotly, bring. Plotly gives you the capability to zoom in on a smaller range of data, to examine or understand a denser part of the graph.

Here is an R example using Plotly:

########################
### Creating random data
########################
set.seed(2908)

df <- data.frame(time = seq(1,1000, by=1),
                 value = sample(1:34, size=1000, replace=TRUE),
                 valueA = sample(1:100, size=1000, replace=TRUE),
                 valueB = sample(1:100, size=1000, replace=TRUE),
                 city = sample(LETTERS[1:4], size=1000, replace=TRUE), 
                 dist = runif(1000),
                 Date = sample(seq(as.Date('2016/01/01'), as.Date('2021/01/01'), by="day"), 1000))

########################
### ggplot scatter plot
### Plotly
#######################

library(plotly)
library(ggplot2)

p <- ggplot(df, aes(valueA, valueB)) + geom_point()
fig <- ggplotly(p)
fig

And this will generate the same-looking chart as in Power BI, with a command bar in the upper right corner, giving the data engineer the ability to zoom in (pan in / pan out) on a smaller range of data.
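Plotly can even add an explicit range slider, which is even closer in spirit to Power BI's zoom slider. A short sketch, reusing the df created above:

# Add plotly's range slider on the x-axis
df_sorted <- df[order(df$Date), ]   # order by date so the line reads left to right
fig2 <- plot_ly(df_sorted, x = ~Date, y = ~value,
                type = "scatter", mode = "lines") %>%
  rangeslider()
fig2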

Great new features are available for you in Power BI, making data engineering and data science even easier.

As always, the sample code and PBIX file are available on GitHub.


My predictions for 2021 – Data and analytics

The year 2020 had a tremendous impact on our lives and drove many changes. Since last year was a year of radical changes (which we were or were not prepared for, but had to accept), these will certainly influence what the year 2021 brings us.

I have made a short, curated list of predictions for where data and analytics might head in 2021. For better clarity, I have grouped the relevant areas, mostly covering:
– Data Engineering
– Data Analytics
– Machine Learning
– Cloud Technology
– Languages and Roles
– Data Governance

Data Engineering will continue to grow and will see an additional boom in 2021. Data consolidation will keep this role expanding, and the success of any ML project will depend on it ever more heavily. A new wave of ETL tools will emerge, making data transition, transformation and data availability easier, faster and more reliable. Depending on the infrastructure, these might become even bigger players for data pipelining, data tool chains and ETL: dbt, Panoply, Airflow, Matillion, Dataform and Alteryx. All are vendor agnostic, some are also great for connecting different tools, platforms and operating systems, and some are also great tools for data analytics. Exclusivity will be won by developing fast drivers, APIs and connections between different data silos.

Following the expansion of data engineering teams, tasks and operations, people will become more mindful about data strategy, a term that will be used more and more. It broadly describes a strong data management vision: prioritising and aligning data and data analytics activities with key organisational priorities, keeping goals such as concepts and standards, collaboration, reuse, improved accuracy, and access and sharing in mind. This will be driven – especially in Europe – throughout many organisations, due to data growth and the need to align data teams with organisational goals.

Data analytics was reshaped to some extent in 2020 due to the changing workplace, customer experience and faster digitalisation of daily life. Graph analytics will gain further traction due to the pandemic, cybersecurity and the need for tracking activities. Real-time dashboards and data visualisation will play a further role in feeding consumers correct and non-biased information, and storytelling will gain further popularity due to the changes in the daily life of every individual. All of this will contribute to understanding the basics of what is going on, making basic business decisions and understanding the underlying concepts of why changes have happened. Many aspects of data analytics will play a key role in the dramatic changes and impact of the pandemic and related events. We can therefore also expect more logs being generated and kept for longer periods of time, opening up many new opportunities.

Machine Learning (AI) will continue to rise in mid-size to large organisations, and will continue to decline in small organisations. Data scientists will continue to hunger for meaningful training datasets. They will feed their ML algorithms to understand predictions and changes over time, delivering the results to cloud-based services or SaaS applications. More compute power will also create more pressure on data scientists to capture and ingest every single change. Encapsulated environments will further drive the expansion of data science. Platforms such as Databricks will grow in popularity and usability, and will help the DataOps ecosystem in large enterprises, making data more actionable for data science.

CI/CD and MLOps will continue to bloom and should gain even more traction in 2021. The year 2020 was the explosion year, offering many tools to data scientists; with so many startups and offerings, there might be some consolidation, and only a few (frontrunner) vendors will remain. More focus will be put on developing solutions that require more and more effort due to rapid data changes, pushing the build/deploy frequency of prediction models higher. This will also make testing more difficult and version control more complex.

Natural language processing will see even further growth in 2021, mostly due to the digitalisation of many daily processes and the storing of many conversations. The health industry (like other industries) will also gain hugely from NLP.

Machine learning will get further commoditised, with many cloud services and platforms offering ML out of the box. On the other hand, the need for white-box models (in comparison to black-box ML algorithms) will be addressed in many of the platforms, from interpretability and explainability to fairness and much more.

Cloud technologies will have several players advocating new standards. Snowflake will become a top-3 player in the field of data warehousing, bringing new concepts of the data warehouse to the cloud. Decoupling compute from storage, making it cross-platform and cross-language, and ingesting any type of data anywhere will bring the cloud closer and into the everyday use of big organizations. The cloud will be used even more in 2021 due to the changes in the workplace and how we work, so additional services for making work easier and collaboration and exchange of work better will attract a lot of funding from investors, and many smaller start-ups will flourish.

Live recordings of work in bigger companies will drive the appetite in this direction, with the help of cloud storage and services. Fog computing (next to edge computing) will be the buzzword of the year among companies that deal with IoT or organisations adopting IoT.

Everything as code will revolutionise the "as code" concept in 2021, making it a bigger part of DevOps teams.

Languages and roles will also change in 2021. New data roles such as cloud data prep, analytics engineer, data trustee and data-lake engineer, and mash-up roles such as DataOps engineer, will appear more and more in large organisations. Data teams will start aligning their methodologies with core software development, for better data understanding and better data services to other data-orientated teams.

DataOps practices will become part of the data team and data engineers, and in 2022 or later of almost every team, because fast-growing business needs will be tailoring new business use cases, and cloud technologies will be pushing data literacy further. In 2021, having knowledge of Python, R, Scala, Julia, PowerShell, Spark or machine learning will no longer be an advantage, but rather a prerequisite for any data-orientated position.

Many of the roles that emerged in 2019 and 2020 will stabilise further and continue to grow.

R and Python, alongside Scala and Julia, will remain and hold an even stronger position in data science. But the necessity of a general comprehension of SQL, JavaScript, Bash/PowerShell, Java and C++ will become even bigger.

Spark will be the key technology for 2021 when we talk about data science and infrastructure, alongside Presto and others. Investing in Spark in 2021 will pay off.

Data governance will become a much bigger focus in 2021 than it has been in the past 10 years. With the surge of data teams, DataOps and data officers, catalogues, definitions and business rules will be the cornerstones of data trust. Having trustworthy data will speed up many of the later data ingestion, preparation and data analysis processes, and thus make data much more agile and operationalised to business needs. Governance will be a key component between smart data cleaning and better ETL/data-chaining/data-processing operations, helping to build a stronger data management vision and strong business cases on top.

Feel free to comment, post your views, agree, disagree, and debate 🙂 I know we are bad at giving such predictions, but it is always nice to share a vision and have a counter-argument as an incentive for further thinking.

As always, Stay Healthy and happy coding!


Little useless-useful R functions – Countdown number puzzle

The famous Countdown game is loved among mathematicians and people who adore numbers, so why not find a way to check for solutions?

So the game is (was) known as a TV show where the host would give a random 3-digit number and the contestants would draw 6 random numbers from a stack of numbers. Given the time limit, the winner was the one who created a formula matching the result, or coming closest to it.

Many ways, tips, tricks and optimisations have already been considered; maybe the most famous is Reverse Polish notation, where operators follow their operands, which is a great fit for this game.

For useless functionality, I have decided to use the permuteGeneral function from RcppAlgos; similar functionality could be achieved with the combn function.

library(RcppAlgos)

countDown_puzzle <- function(six, res_num) {
  oper <- c("+", "-", "/", "*")
  d2 <- permuteGeneral(six)          # all permutations of the six numbers
  for (i in 1:nrow(d2)) {
    for (o in 1:1000) {              # 1000 random operator draws per permutation
      r <- paste0(as.integer(d2[i, 1]), " ", sample(oper, 1),
                  " (", as.integer(d2[i, 2]), " ", sample(oper, 1),
                  " ((", as.integer(d2[i, 3]), " ", sample(oper, 1),
                  as.integer(d2[i, 4]), ") ", sample(oper, 1),
                  as.integer(d2[i, 5]), ")", sample(oper, 1),
                  as.integer(d2[i, 6]), ") ")
      res <- eval(parse(text = r))
      if (res == res_num) {
        print(paste0("Solution: ", r, " with result of: ", res,
                     " for given vector: ", paste(six, collapse = " ")))
      }
    }
  }
}

And simply run the function:

countDown_puzzle(c(100,9,4,1,7,25), 463)

If you also want to generate the numbers and a random target:

set.seed(2908)
number_pool <- c(1:11, 25, 50, 75, 100, 200)
six <- sample(number_pool, 6, replace=FALSE)
res_num <- sample(100:999,1)

# run function
countDown_puzzle(six, res_num)

There are many immediate optimisations to this game, such as:
– there is no need to multiply or divide by 1
– skip branches where an intermediate result becomes a non-integer
– skip branches where an intermediate result becomes a negative number
– skip redundant calculations such as x + y - y = x, and respectively for subtraction
– and many more.

None of these were taken into consideration, but they could be a great start for optimisation.
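As an illustration, the first optimisation could be a small guard (the helper name skip_trivial is purely illustrative) that discards candidate formulas multiplying or dividing by 1 before they are ever evaluated:

# Sketch: skip formulas that multiply or divide by 1
skip_trivial <- function(r) {
  grepl("([*/])\\s*1\\b", r, perl = TRUE) || grepl("\\b1\\s*([*/])", r, perl = TRUE)
}

skip_trivial("100 * (9 / 1)")   # TRUE  -> no need to evaluate
skip_trivial("100 + (9 - 4)")   # FALSE -> worth evaluating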

As always, the code is available on GitHub.

Happy R-coding!
