Enable Preview Features
Preview features are LiveData Migrator features that are still under development and subject to improvement. They are disabled by default and must be enabled in the properties file:
/etc/wandisco/livedata-migrator/application.properties
To enable a preview feature, find the corresponding property (prefixed with preview.feature.) and change its value from OFF to ON. For example:
preview.feature.backup-restore=ON
LiveData Migrator must then be restarted for the changes to take effect:
service livedata-migrator restart
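As a concrete sketch, the property change can be scripted. The snippet below operates on a sample copy in /tmp; on a real installation, the file is /etc/wandisco/livedata-migrator/application.properties and you would follow the edit with the service restart shown above.

```shell
# Hedged sketch: toggle a preview feature flag in the properties file.
# A sample copy in /tmp is used here; on a real installation, point
# this at /etc/wandisco/livedata-migrator/application.properties and
# restart the service afterwards.
props=/tmp/application.properties
printf 'preview.feature.backup-restore=OFF\n' > "$props"

# Flip the flag from OFF to ON in place.
sed -i 's/^\(preview\.feature\.backup-restore\)=OFF$/\1=ON/' "$props"
cat "$props"
```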
Current preview features
You may check which preview features are currently active with the following curl command:
curl localhost:18080/preview
The command will return information similar to the following:
{
  "preview/<feature>" : "OFF",
  "preview/<feature>" : "OFF"
}
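If you only want the features that are switched on, the response can be filtered with standard shell tools. A sketch using a hard-coded sample response (the feature names here are placeholders, not a real feature list):

```shell
# Hedged sketch: list only the enabled preview features.
# A sample response is hard-coded; on a live instance you would pipe
# `curl -s localhost:18080/preview` instead. Feature names are placeholders.
response='{ "preview/backup-restore" : "ON", "preview/other-feature" : "OFF" }'

# Break the JSON object into one entry per line, then keep the ON entries.
printf '%s' "$response" | tr ',{}' '\n\n\n' | grep '"ON"'
```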
The following preview features are currently available in LiveData Migrator.
Backup and Restore
This feature allows you to save LiveData Migrator's current state (including migrations, filesystems, path mappings, and configuration) and restore it later. View the full list of properties this feature backs up and restores here.
Enable this feature with the following property:
preview.feature.backup-restore=ON
Preview status
These features do not require enablement in the properties file. They are immediately available for use.
Configure success files
This feature is available for Hadoop Distributed File System (HDFS) source filesystems. Use success files to determine when a specific path has migrated successfully and the data within is ready for an application or job to process on the target side.
Success files are migrated last within their containing directory, meaning they can be used to ascertain that the directory they are contained within has finished migration.
Important: Read details of a known issue before using the success files feature.
Configure success files in the UI
- Click on an HDFS source filesystem of choice in the LiveData Migrator dashboard.
- Under Success File, supply a filename or glob pattern that matches any success files you want to add (for example: /**_SUCCESS).
- Click Save.
Configure success files through the CLI
Add success files in the CLI by supplying a filename or glob pattern to the --success-file parameter of the filesystem add hdfs or filesystem update hdfs command:
filesystem add hdfs --file-system-id mysource --source --success-file /mypath/myfile.txt
filesystem update hdfs --file-system-id mysource --success-file /**_SUCCESS
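To illustrate how a glob such as /**_SUCCESS behaves, shell pattern matching gives a rough analogue: in shell case patterns, as used below, * also crosses / separators, so the pattern matches a _SUCCESS file at any depth. LiveData Migrator's own matcher may differ in edge cases, and the paths below are made up for illustration.

```shell
# Hedged illustration: approximate matching behaviour of the glob
# /**_SUCCESS against example paths. Shell `case` patterns let `*`
# cross `/` separators, so the pattern matches a _SUCCESS file at
# any depth. The paths are hypothetical.
for path in /data/job1/_SUCCESS /data/job1/part-00000 /warehouse/db/table/_SUCCESS; do
  case "$path" in
    /**_SUCCESS) echo "match:    $path" ;;
    *)           echo "no match: $path" ;;
  esac
done
```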
Databricks metadata agent
LiveData Migrator supports metadata migration to Databricks Delta Lake.
Creating a Databricks metadata agent in the UI
To configure Databricks Delta Lake as a metadata agent, select Databricks in the Agent Type dropdown menu when connecting metastores with the UI.
Creating a Databricks metadata agent through the CLI
Use the hive agent add databricks command to set up a Databricks agent in the CLI.
hive agent add databricks --name databricksAgent --jdbc-server-hostname mydbcluster.cloud.databricks.com --jdbc-port 443 --jdbc-http-path sql/protocolv1/o/8445611123456789/0234-125567-testy978 --access-token daexamplefg123456789t6f0b57dfdtoken4 --file-system-id mys3bucket --default-fs-override dbfs: --fs-mount-point /mnt/mybucket --convert-to-delta --host myRemoteHost.example.com --port 5552
See the command reference page for more information on how to configure Delta Lake and set up a metadata agent.