Can clickhouse-backup use HDFS as remote_storage? Recommended approach? #1342
We are running clickhouse-backup inside a Kubernetes pod and would like to store backup archives on HDFS. Does clickhouse-backup support HDFS as a remote_storage backend (i.e. set remote_storage: hdfs or similar)? Thanks
Replies: 1 comment
You have the following options.

Option 1: an HDFS disk as backup storage + the BACKUP SQL command.

Create /etc/clickhouse-server/config.d/hdfs_backup_disk.xml:

<clickhouse>
<!-- optional params for authorization -->
<backups_hdfs>
<hadoop_kerberos_keytab>/tmp/keytab/clickhouse.keytab</hadoop_kerberos_keytab>
<hadoop_kerberos_principal>clickuser@TEST.CLICKHOUSE.TECH</hadoop_kerberos_principal>
<hadoop_security_authentication>kerberos</hadoop_security_authentication>
</backups_hdfs>
<storage_configuration>
<disks>
<backups_hdfs>
<type>hdfs</type>
<endpoint>hdfs://hdfs1:9000/clickhouse/</endpoint>
<skip_access_check>true</skip_access_check>
</backups_hdfs>
</disks>
</storage_configuration>
<backups>
<allowed_disk>backups_hdfs</allowed_disk>
</backups>
</clickhouse>

then run:

BACKUP ALL TO Disk('backups_hdfs', 'backup_name')

Option 2: set remote_storage: custom in the clickhouse-backup config.
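A minimal sketch of what the remote_storage: custom route could look like, with clickhouse-backup shelling out to wrapper scripts around the hdfs CLI. The script paths and the {{ .backupName }} template variable are assumptions for illustration; the exact command interface should be taken from the project's custom-storage examples.

```yaml
# Hypothetical clickhouse-backup config sketch for remote_storage: custom.
# Script paths and template variables are assumptions, not the exact API.
general:
  remote_storage: custom
custom:
  # Each entry is a shell command that clickhouse-backup invokes; the
  # scripts would wrap hdfs dfs -put / -get / -ls / -rm respectively.
  upload_command: /scripts/hdfs_upload.sh {{ .backupName }}
  download_command: /scripts/hdfs_download.sh {{ .backupName }}
  list_command: /scripts/hdfs_list.sh
  delete_command: /scripts/hdfs_delete.sh {{ .backupName }}
```

This keeps clickhouse-backup in charge of creating and tracking local backups while delegating the actual HDFS transfer to commands you control.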