SparkR
DataFrame
GroupedData
PipelineModel-class
abs
acos
add_months
agg
alias
approxCountDistinct
arrange
ascii
asin
atan
atan2
avg
base64
between
bin
bitwiseNOT
cache
cacheTable
cancelJobGroup
cast
cbrt
ceil
clearCache
clearJobGroup
collect
column
columns
concat
concat_ws
conv
cos
cosh
count
countDistinct
crc32
createDataFrame
createExternalTable
date_add
date_format
date_sub
datediff
dayofmonth
dayofyear
describe
dim
distinct
dropTempTable
dtypes
except
exp
explain
explode
expm1
expr
factorial
filter
first
floor
format_number
format_string
from_unixtime
from_utc_timestamp
glm
greatest
groupBy
hashCode
head
hex
hour
hypot
ifelse
infer_type
initcap
insertInto
instr
intersect
isLocal
isNaN
join
jsonFile
last
last_day
least
length
levenshtein
limit
lit
locate
log
log10
log1p
log2
lower
lpad
ltrim
match
max
md5
mean
merge
min
minute
month
months_between
nafunctions
nanvl
ncol
negate
next_day
nrow
otherwise
parquetFile
persist
pmod
predict
print.jobj
print.structField
print.structType
printSchema
quarter
rand
randn
rbind
read.df
regexp_extract
regexp_replace
registerTempTable
repartition
reverse
rint
round
rpad
rtrim
sample
saveAsParquetFile
saveAsTable
schema
second
select
selectExpr
setJobGroup
sha1
sha2
shiftLeft
shiftRight
shiftRightUnsigned
show
showDF
signum
sin
sinh
size
soundex
sparkR.init
sparkR.stop
sparkRHive.init
sparkRSQL.init
sql
sqrt
statfunctions
structField
structType
subset
substr
substring_index
sum
sumDistinct
summary
table
tableNames
tables
take
tan
tanh
toDegrees
toRadians
to_date
to_utc_timestamp
translate
trim
unbase64
uncacheTable
unhex
unionAll
unique
unix_timestamp
unpersist-methods
upper
weekofyear
when
withColumn
withColumnRenamed
write.df
year
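The topics above index the SparkR API. As a minimal sketch of how a few of these functions fit together (assuming a local Spark 1.x installation with SparkR on the library path; the master URL, app name, and the built-in faithful dataset are placeholders):

  library(SparkR)

  # Start a SparkContext and a SQLContext (sparkR.init, sparkRSQL.init)
  sc <- sparkR.init(master = "local[*]", appName = "SparkR-index-example")
  sqlContext <- sparkRSQL.init(sc)

  # Convert a local R data.frame into a distributed DataFrame (createDataFrame)
  df <- createDataFrame(sqlContext, faithful)
  printSchema(df)

  # Column selection and row filtering (select, filter, collect)
  head(select(df, "eruptions"))
  collect(filter(df, df$waiting < 50))

  # Run SQL against a registered temporary table (registerTempTable, sql)
  registerTempTable(df, "faithful")
  head(sql(sqlContext, "SELECT * FROM faithful WHERE waiting < 50"))

  # Shut the context down (sparkR.stop)
  sparkR.stop()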
Generated with knitr 1.10.5