Author: 康凯森
Date: 2017-07-10
Category: OLAP
Apache Kylin is an OLAP engine that speeds up queries through Cube precomputation. A Cube is a multi-dimensional dataset that contains precomputed measures for all dimension combinations; for example, a cube with three dimensions has 2^3 = 8 possible dimension combinations (cuboids). Before v2.0, Kylin used MapReduce to build Cubes. To get better performance, Kylin 2.0 introduced Spark Cubing. For the principle behind Spark Cubing, please refer to the official blog: By-layer Spark Cubing.
In this blog post, I mainly talk about the following points: how to make Spark Cubing work with a Kerberos-enabled HBase cluster, how Kylin manages its metadata, the Spark configurations I use for cubing, Spark Cubing performance, and the improvements I made for dictionary loading.
The current Spark Cubing (beta) version does not support HBase clusters that use Kerberos, because Spark Cubing needs to fetch metadata from HBase. There are two ways to solve this problem: one is to make Spark able to connect to HBase with Kerberos; the other is to avoid connecting to HBase from Spark at all.
If we only want to run Spark Cubing in YARN client mode, we just need to add three lines of code before new SparkConf() in SparkCubingByLayer. This works because in client mode the driver runs on the submitting machine, which already holds a Kerberos ticket, so it can obtain an HBase delegation token that is then shipped to the executors:
// Build an HBase connection using Kylin's current HBase configuration.
Configuration configuration = HBaseConnection.getCurrentHBaseConfiguration();
HConnection connection = HConnectionManager.createConnection(configuration);
// Obtain an HBase delegation token for the current user and add it to the
// user's credentials, so executors can access HBase without a Kerberos ticket.
TokenUtil.obtainAndCacheToken(connection, UserProvider.instantiate(configuration).create(UserGroupInformation.getCurrentUser()));
As for how to make Spark connect to HBase with Kerberos in YARN cluster mode, please refer to SPARK-6918, SPARK-12279, and HBASE-17040. That approach may work, but it is not elegant, so I tried the second solution.
The core idea is to upload the job-related metadata to HDFS and manage it with HDFSResourceStore.
Before introducing how I use HDFSResourceStore instead of HBaseResourceStore in Spark Cubing, let's look at Kylin's metadata format and how Kylin manages its metadata.
In Kylin, each concrete piece of metadata for a table, cube, model, or project is a JSON file, and the whole metadata set is organized as a directory tree; for example, the project "learn_kylin" is stored as /project/learn_kylin.json. The picture below shows the root directory of Kylin metadata, with the content of the project directory expanded; "learn_kylin" and "kylin_test" are both project names.
Kylin manages metadata through ResourceStore, an abstract class that defines the CRUD interface over metadata. ResourceStore has three implementations: FileResourceStore, HBaseResourceStore, and HDFSResourceStore.
Currently, only HBaseResourceStore can be used in a production environment. FileResourceStore is mainly used for testing. HDFSResourceStore still has some concurrency issues on writes, but it can be used safely for reads. Kylin uses the "kylin.metadata.url" configuration to decide which kind of ResourceStore to use.
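For illustration, here is roughly how "kylin.metadata.url" maps to the three implementations, in the same style as Kylin's configuration file. The HDFS form is my assumption and its exact syntax may differ across Kylin versions:
# HBaseResourceStore: metadata lives in the HBase table "kylin_metadata"
kylin.metadata.url=kylin_metadata@hbase
# FileResourceStore: a plain local directory path, mainly for testing
kylin.metadata.url=/tmp/kylin_metadata
# HDFSResourceStore: an HDFS-backed store (hypothetical form, check your Kylin version)
kylin.metadata.url=kylin_metadata@hdfs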
Now, let's see how I use HDFSResourceStore instead of HBaseResourceStore in Spark Cubing.
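The snippet below is only a minimal sketch of the core idea under my own assumptions, not the actual patch. It copies the job-related metadata from the current store into an HDFS-backed one with Kylin's ResourceTool, then resolves an HDFSResourceStore from the same URL inside the Spark job; hdfsMetaUrl is a placeholder whose exact format is an assumption.
// Driver side: dump the metadata needed by the job from the current store
// (e.g. HBase) into an HDFS-backed store.
KylinConfig srcConfig = KylinConfig.getInstanceFromEnv();
KylinConfig dstConfig = KylinConfig.createInstanceFromUri(hdfsMetaUrl); // placeholder URL
ResourceTool.copy(srcConfig, dstConfig);
// Job side: build the config from the same HDFS URL, so that ResourceStore
// resolves to HDFSResourceStore instead of HBaseResourceStore.
KylinConfig jobConfig = KylinConfig.createInstanceFromUri(hdfsMetaUrl);
ResourceStore store = ResourceStore.getStore(jobConfig);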
Of course, we need to delete the HDFS metadata directory when the job finishes.
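For example, a cleanup sketch with the standard Hadoop FileSystem API from org.apache.hadoop.fs (the directory layout here is illustrative, not Kylin's actual one):
// Delete the temporary HDFS metadata directory once the job is done.
FileSystem fs = FileSystem.get(hadoopConf);
Path metaDir = new Path("/kylin/spark_metadata/" + jobId); // hypothetical path
fs.delete(metaDir, true); // true = delete recursively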
These are the Spark configurations I used for Spark Cubing. The goal is to minimize the number of Spark settings users have to tune themselves.
# run in yarn-cluster mode
kylin.engine.spark-conf.spark.master=yarn
kylin.engine.spark-conf.spark.submit.deployMode=cluster
# enable dynamic allocation so that users do not have to set the number of
# executors explicitly (this requires the external shuffle service below)
kylin.engine.spark-conf.spark.dynamicAllocation.enabled=true
kylin.engine.spark-conf.spark.dynamicAllocation.minExecutors=10
kylin.engine.spark-conf.spark.dynamicAllocation.maxExecutors=1024
kylin.engine.spark-conf.spark.dynamicAllocation.executorIdleTimeout=300
kylin.engine.spark-conf.spark.shuffle.service.enabled=true
kylin.engine.spark-conf.spark.shuffle.service.port=7337
# memory settings
kylin.engine.spark-conf.spark.driver.memory=4G
# enlarge executor.memory when the cube dictionary is huge,
# because Kylin needs to load the dictionary inside the executors
kylin.engine.spark-conf.spark.executor.memory=4G
kylin.engine.spark-conf.spark.executor.cores=1
# enlarge the network timeout
kylin.engine.spark-conf.spark.network.timeout=600
kylin.engine.spark-conf.spark.yarn.queue=root.hadoop.test
kylin.engine.spark.rdd-partition-cut-mb=100
For source data ranging from millions to hundreds of millions of rows, my test results are basically consistent with those in the By-layer Spark Cubing blog, and the performance improvement is remarkable. In addition, I specifically tested cubes with billions of source rows or with huge dictionaries.
Test Cube1 has 2.7 billion source rows, 9 dimensions, and one precise count-distinct measure with a cardinality of 70 million (which means the dictionary also has 70 million entries).
Test Cube2 has 2.4 billion source rows, 13 dimensions, and 38 measures (including 9 precise count-distinct measures).
The test results are shown in the picture below; the unit of time is minutes.
In short, Spark Cubing is much faster than MR Cubing in most scenarios.
In my opinion, the advantages of Spark Cubing are:
On the contrary, the drawbacks of Spark Cubing are:
In my opinion, except for the huge-dictionary scenario, we could use Spark Cubing instead of MR Cubing everywhere, especially in the following scenarios:
As we all know, a big difference between MR and Spark is that an MR task runs in its own process while a Spark task runs as a thread inside the executor process. Therefore, in MR Cubing the cube dictionary is loaded only once per task process, but in Spark Cubing the dictionary may be loaded many times in one executor, which causes frequent GC.
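One natural mitigation, sketched below purely under my own assumptions (this is not necessarily what the actual improvements do), is a JVM-level cache so that all task threads in one executor share a single loaded dictionary per resource path:
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Hedged sketch: a process-wide cache shared by all Spark task threads in an
// executor JVM, so each dictionary is deserialized at most once per executor.
public final class ExecutorSideCache {
    private static final ConcurrentHashMap<String, Object> CACHE = new ConcurrentHashMap<>();

    private ExecutorSideCache() {}

    @SuppressWarnings("unchecked")
    public static <V> V getOrLoad(String key, Function<String, V> loader) {
        // computeIfAbsent is atomic: concurrent tasks asking for the same key
        // trigger exactly one load in this JVM.
        return (V) CACHE.computeIfAbsent(key, loader);
    }
}
// Usage inside a task (loadDict is a hypothetical loader, not Kylin's API):
// Dictionary dict = ExecutorSideCache.getOrLoad(dictPath, path -> loadDict(path));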
Based on this, I made the following two improvements:
Spark Cubing is a great feature in Kylin 2.0; thanks to the Kylin community. I will promote Spark Cubing in our company, and I believe Spark Cubing will become even more robust and efficient.