Version: 1.16.0

Enable Preview Features

Preview features are LiveData Migrator features that are still under development and subject to further improvement. They are disabled by default and must be enabled in the properties file.

To enable a preview feature, find the corresponding property with the prefix preview.feature. and change its value from OFF to ON. For example:

Example for enabling Backup & Restore
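The exact property key is not reproduced here. Assuming the Backup & Restore feature uses a key such as preview.feature.backup (an illustrative name; check your properties file for the actual entry), the change would look like:

```properties
# Illustrative key name - locate the real entry under the preview.feature. prefix
preview.feature.backup=ON
```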

LiveData Migrator must then be restarted for the changes to take effect:

service livedata-migrator restart

Current preview features

You may check which preview features are currently active with the following curl command:

curl localhost:18080/preview

The command will return information similar to the following:

{
  "preview/<feature>" : "OFF",
  "preview/<feature>" : "OFF"
}
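If you want to act on this output programmatically, a small helper like the following could extract the enabled feature names. This is a sketch: the feature names in the sample response are made up, and the helper function is not part of LiveData Migrator itself.

```python
import json

def enabled_features(raw_json: str) -> list[str]:
    """Return names of preview features reported as ON by the /preview endpoint.

    Keys in the response take the form "preview/<feature>".
    """
    status = json.loads(raw_json)
    return sorted(key.split("/", 1)[1]
                  for key, value in status.items() if value == "ON")

# Canned response in the shape shown above (no live server required):
sample = '{"preview/backup" : "ON", "preview/other" : "OFF"}'
print(enabled_features(sample))  # ['backup']
```

In practice you would feed it the body returned by curl localhost:18080/preview.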

The following preview features are currently available in LiveData Migrator.

Backup and Restore

This feature allows you to save LiveData Migrator's current state - including migrations, filesystems, path mappings and configuration - and restore it later. View the full list of properties this feature backs up and restores here.

Enable this feature with the following property:


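The property itself is not reproduced here. Following the preview.feature. prefix convention described above, the entry would resemble the following (the key name backup is an assumption; check your properties file for the actual entry):

```properties
preview.feature.backup=ON
```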
Preview status

These features do not need to be enabled in the properties file and are available for use immediately.

Configure success files

This feature is available for Hadoop Distributed File System (HDFS) source filesystems. Use success files to determine when a specific path has migrated successfully and the data within is ready for an application or job to process on the target side.

Success files are migrated last within their containing directory, so the presence of a success file on the target confirms that the directory's other contents have finished migrating.
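To illustrate how a glob pattern such as /**_SUCCESS selects success files, the sketch below approximates glob matching with Python's fnmatch. This is only an illustration: LiveData Migrator's real matcher may treat * and ** differently, and the paths shown are made up.

```python
from fnmatch import fnmatch

def matches_success_pattern(path: str, pattern: str = "/**_SUCCESS") -> bool:
    # Rough approximation: fnmatch's "*" also matches "/", so collapsing
    # "**" to "*" makes "/**_SUCCESS" match a _SUCCESS file at any depth
    # under the root.
    return fnmatch(path, pattern.replace("**", "*"))

print(matches_success_pattern("/landing/job1/_SUCCESS"))    # True
print(matches_success_pattern("/landing/job1/part-00000"))  # False
```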


Read details of a known issue before using the success files feature.

Configure success files in the UI

  1. Click on an HDFS source filesystem of choice in the LiveData Migrator dashboard.
  2. Under Success File, supply a filename or glob pattern that matches any success files you want to add (for example: /**_SUCCESS).
  3. Click Save.

Configure success files through the CLI

Add success files in the CLI by supplying a filename or glob pattern to the --success-file parameter of the filesystem add hdfs or filesystem update hdfs command:

filesystem add hdfs --file-system-id mysource --source --success-file /mypath/myfile.txt
filesystem update hdfs --file-system-id mysource --success-file /**_SUCCESS

Databricks metadata agent

LiveData Migrator supports metadata migration to Databricks Delta Lake.

Creating a Databricks metadata agent in the UI

To configure Databricks Delta Lake as a metadata agent, select Databricks in the Agent Type dropdown menu when connecting metastores with the UI.

Creating a Databricks metadata agent through the CLI

Use the hive agent add databricks command to set up a Databricks agent in the CLI.

Example for remote Databricks agent
hive agent add databricks --name databricksAgent --jdbc-server-hostname  --jdbc-port 443 --jdbc-http-path sql/protocolv1/o/8445611123456789/0234-125567-testy978 --access-token daexamplefg123456789t6f0b57dfdtoken4 --file-system-id mys3bucket --default-fs-override dbfs: --fs-mount-point /mnt/mybucket --convert-to-delta --host --port 5552

See the command reference page for more information on how to configure Delta Lake and set up a metadata agent.