Version: 2.4.3 (latest)

Configure Data Migrator

This page describes the configuration properties of Data Migrator. Properties are defined in the following files:

/etc/wandisco/livedata-migrator/application.properties

/etc/wandisco/livedata-migrator/logback-spring.xml

/etc/wandisco/livedata-migrator/vars.env

/etc/wandisco/ui/vars.env

After adding new properties or changing existing values, always restart the Data Migrator service.

For information about UI logging, see Logging.

Application properties

General configuration

These configuration properties are used to adjust general items of operation.

spring.jackson.serialization.INDENT_OUTPUT
  Whether to apply indentation to JSON output from command results.
  Default value: true
  Allowed values: true, false

springdoc.swagger-ui.path
  The path at which clients can access the Swagger documentation for the Data Migrator REST API.
  Default value: /ldm-api.html
  Allowed values: Any valid file path

pull.threads
  Threads allocated for concurrent file transfers.
  Default value: 150
  Allowed values: An integer value between 1 and 10000

engine.threads
  The number of migrations that can run concurrently.
  Default value: 1000
  Allowed values: An integer value between 1 and 10000
  Note: Increasing this value may have an undesired impact on performance. Contact Support for more information.

persisted.store
  Reserved for future use.
  Default value: true

server.port
  The TCP port used to listen for clients interacting with the REST API.
  Default value: 18080
  Allowed values: An integer value between 1024 and 65535
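As a sketch, an /etc/wandisco/livedata-migrator/application.properties fragment overriding some of these defaults might look like the following (the values are illustrative, not recommendations):

```properties
# Increase transfer concurrency (allowed range 1-10000)
pull.threads=300
# Serve the REST API on a non-default port (allowed range 1024-65535)
server.port=18081
# Disable pretty-printed JSON in command results
spring.jackson.serialization.INDENT_OUTPUT=false
```

Remember to restart the Data Migrator service after saving changes.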

Data Migrator logging

Configure how Data Migrator logs requests made against the REST API.

logging.level.org.zalando.logbook
  The logging level to apply to HTTP activity against the REST API of Data Migrator. This must be set to TRACE to record any log information.
  Default value: TRACE
  Allowed values: TRACE, NONE

logbook.format.style
  The logging style applied to HTTP activity records.
  Default value: http
  Allowed values: http, curl

logbook.write.max-body-size
  The maximum number of bytes from an HTTP request or response body to include in a log entry.
  Default value: 1024
  Allowed values: Any integer between 1 and 2147483647

logbook.exclude
  A comma-separated list of patterns that match URIs for which HTTP activity should not be logged.
  Default value: /v3/api-docs/*,/swagger-ui/,/stats/,/diagnostics/,/notifications/*,/openapi.yaml
  Allowed values: Any valid comma-separated list of patterns

logbook.obfuscate.parameters
  A comma-separated list of HTTP parameters that should not be recorded in log entries, for example: access_token,password
  Default value: (none)
  Allowed values: Any valid comma-separated list of HTTP parameter names

logbook.obfuscate.headers
  A comma-separated list of HTTP headers that should not be recorded in log entries, for example: authorization,x-auth-password,x-auth-token,X-Secret
  Default value: (none)
  Allowed values: Any valid comma-separated list of HTTP headers

obfuscate.json.properties
  A comma-separated list of JSON request properties, by name, that should not be recorded in log entries, for example: foo,bar
  Default value: ${hdfs.fs.type.masked.properties},${adls2.fs.type.masked.properties},${s3a.fs.type.masked.properties},${gcs.fs.type.masked.properties}
  Allowed values: Any valid comma-separated list of property names
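For example, a hedged application.properties fragment that switches HTTP activity records to curl style and masks credentials (the parameter and header names shown are illustrative):

```properties
# Record HTTP activity as replayable curl commands
logbook.format.style=curl
# Never record these request parameters or headers in log entries
logbook.obfuscate.parameters=access_token,password
logbook.obfuscate.headers=authorization,x-auth-token
```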

Set how many days to keep the Data Migrator log file

  1. Open /etc/wandisco/livedata-migrator/logback-spring.xml.
  2. Find the appender block for the livedata-migrator.log file, which contains <file>${LOGS}/livedata-migrator.log</file>.
  3. Update the value of the <maxHistory>90</maxHistory> property to your preferred number of days.
  4. Save the change.
  5. Restart Data Migrator.
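The relevant part of logback-spring.xml looks roughly like the following sketch; the exact appender layout and file name pattern in your version may differ, so treat this as illustrative:

```xml
<!-- Sketch of the rolling appender (layout and pattern are illustrative) -->
<appender name="RollingFile" class="ch.qos.logback.core.rolling.RollingFileAppender">
    <file>${LOGS}/livedata-migrator.log</file>
    <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
        <fileNamePattern>${LOGS}/livedata-migrator.%d.log.gz</fileNamePattern>
        <!-- Keep 30 days of rolled logs instead of the default 90 -->
        <maxHistory>30</maxHistory>
    </rollingPolicy>
</appender>
```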

Enable Data Migrator debugging mode

  1. Open /etc/wandisco/livedata-migrator/logback-spring.xml.

  2. Update the INFO level section:

    <!-- LOG everything at INFO level -->
    <root level="info">
        <appender-ref ref="RollingFile"/>
        <!-- <appender-ref ref="Console" /> -->
    </root>
    </configuration>

    Change <root level="info"> to <root level="debug">.

  3. Save the change.

  4. Restart Data Migrator.

info

After you've completed your investigation, revert the change. Debug mode generates very large log files that can exhaust your available storage.

Change Data Migrator log location

The default path for Data Migrator logs, heap dumps, and garbage collection (GC) logs is /var/log/wandisco/livedata-migrator. You can change the default path by editing the /opt/wandisco/livedata-migrator/start.sh script. Before making changes, ensure the new path exists and is writable by Data Migrator.

  1. Open /opt/wandisco/livedata-migrator/start.sh.
  2. Update the values of -Dlog.dir, -XX:HeapDumpPath, and -Xloggc to change the default paths for Data Migrator logs, heap dumps, and GC logs.
  3. Save the change.
  4. Restart Data Migrator.
info

Changes to the default log location won't persist after upgrade. After upgrading to a new version, you will have to reapply these changes.

Thread dump

Add the following properties to the /etc/wandisco/livedata-migrator/application.properties file to adjust the default thread dump period and the number of dumps to retain. If these properties aren't set, the default values apply, which keeps 24 hours of thread dumps available.

threaddump.period
  Thread dump period in seconds.
  Default value: 3600
  Allowed values: Integer

threaddump.number.files
  The number of thread dumps to retain.
  Default value: 24
  Allowed values: Integer
tip

To retain 7 days of thread dumps, add both properties and set threaddump.number.files to 168. Ensure your Data Migrator server has sufficient disk space available for the additional thread dumps. Contact Support with any questions or concerns before applying these properties.
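As a sketch, the 7-day retention described above would look like this in application.properties (168 = 7 days x 24 hourly dumps):

```properties
# Take one thread dump per hour (3600 seconds)
threaddump.period=3600
# Retain 168 dumps: 7 days of hourly dumps
threaddump.number.files=168
```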

State

Data Migrator uses an internally managed database, called the Prevayler, to record state during operation.

prevayler.databaseLocation
  The directory in which Data Migrator will write files to manage state.
  Default value: ${install.dir}/db
  Allowed values: The full path to a directory in which database files will be managed. It must be writable by the user running Data Migrator (typically hdfs).

prevayler.persistent
  Whether Data Migrator will persist state to disk in files.
  Default value: true
  Allowed values: true, false

prevayler.force
  Whether the database performs a sync operation on each write activity to ensure content is written to persistent storage.
  Default value: true
  Allowed values: true, false

prevayler.bufferedJournal
  Whether buffered journal I/O is used for the database.
  Default value: true
  Allowed values: true, false

prevayler.mirrored
  Whether actions tracked in-memory by the database are mirrored to disk on every modification. The alternative is to flush to disk periodically and on shutdown.
  Default value: true
  Allowed values: true, false
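For example, a hedged application.properties fragment that relocates the Prevayler state directory (the path is illustrative; it must exist and be writable by the user running Data Migrator):

```properties
# Store Prevayler state on a dedicated volume
prevayler.databaseLocation=/data/wandisco/ldm-db
# Keep persisting and syncing state on every write (the defaults)
prevayler.persistent=true
prevayler.force=true
```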

Security

Secure access to the Data Migrator REST API through configuration. Choose between no security and HTTP basic authentication. To set up, see Configure basic auth.

security.type
  The method of securing access to the REST API.
  Default value: off
  Allowed values: off, basic

security.basic.user
  Required when security.type=basic. The username that a REST client must provide to gain access to a secured REST API, for example: admin
  If basic authentication is enabled or will be enabled on the Hive Migrator REST API, use the same username for Data Migrator and Hive Migrator.
  Default value: admin
  Allowed values: Any string that defines a username (no whitespace)

security.basic.password
  Required when security.type=basic. A bcrypt-encrypted representation of the password that must be provided using HTTP basic authentication to access the REST API, for example:
  {bcrypt}$2a$10$mQXFoGAdLryWcZLjSP31Q.5cSgtoCPO3ernnsK4F6/gva8lyn1qgu
  The {bcrypt} prefix must be included before the encrypted password string, as shown in the example above.
  Default value: {bcrypt}exampleEncryptedValue
  Allowed values: A valid bcrypt-encrypted string
info

If you enable basic authentication on Data Migrator, ensure you update the UI with the credentials to maintain functionality.

Connect to Data Migrator with basic authentication

When basic authentication is enabled, enter the username and password when prompted to connect to Data Migrator with the CLI:

Example
connect livemigrator localhost: trying to connect...
Username: admin
Password: ********
connected

The username and password are also required when accessing the Data Migrator REST API directly. For more information, see Configure basic auth.
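The Basic scheme the REST API expects is a base64-encoded username:password pair in the Authorization header. A minimal Python sketch of a direct REST call (the endpoint path, port, and credentials are illustrative):

```python
import base64
import urllib.request


def basic_auth_header(username: str, password: str) -> str:
    """Build the value of an HTTP Basic Authorization header."""
    token = base64.b64encode(f"{username}:{password}".encode("utf-8")).decode("ascii")
    return f"Basic {token}"


# Example request against a local Data Migrator REST API (not sent here;
# the /stats path and port 18080 are illustrative)
req = urllib.request.Request(
    "http://localhost:18080/stats",
    headers={"Authorization": basic_auth_header("admin", "secret")},
)
print(basic_auth_header("admin", "secret"))
```

Tools such as curl build the same header automatically when given a username and password.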

Transport Layer Security

To enable Transport Layer Security (TLS) on the Data Migrator REST API (HTTPS), modify the following server.ssl.* properties.

To update the UI with TLS connection details, remove the product instance and then add it again with the updated details.

info

Data Migrator comes preconfigured with an instance running on localhost. To remove the preconfigured instance, see the Remove Data Migrators steps.

note

If HTTPS is enabled on the REST API, plain HTTP requests from the CLI to the REST API will fail.

Example error
Bad Request
This combination of host and port requires TLS.
server.ssl.key-store
  Path or classpath to the Java KeyStore.
  Default value: (none)
  Allowed values: A file system path or classpath (for example: /path/to/keystore.p12, classpath:keystore.p12)

server.ssl.key-store-password
  The Java KeyStore password.
  Default value: (none)
  Allowed values: Any text string

server.ssl.key-store-type
  The Java KeyStore type.
  Default value: (none)
  Allowed values: KeyStore types

server.ssl.key-alias
  The alias for the server certificate entry.
  Default value: (none)
  Allowed values: Any text string

server.ssl.ciphers
  The cipher suites to enable. The default list strengthens security by deactivating old and deprecated SSL ciphers, and was tested against SSL Labs.
  Default value: (none, but the list below is provided)
  TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_DHE_RSA_WITH_AES_128_GCM_SHA256,TLS_DHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384,TLS_DHE_RSA_WITH_AES_128_CBC_SHA256,TLS_DHE_RSA_WITH_AES_128_CBC_SHA,TLS_DHE_RSA_WITH_AES_256_CBC_SHA256,TLS_DHE_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_CBC_SHA256,TLS_RSA_WITH_AES_256_CBC_SHA256,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_DHE_RSA_WITH_CAMELLIA_256_CBC_SHA,TLS_RSA_WITH_CAMELLIA_256_CBC_SHA,TLS_DHE_RSA_WITH_CAMELLIA_128_CBC_SHA,TLS_RSA_WITH_CAMELLIA_128_CBC_SHA
tip

The example command below will generate a server certificate and place it inside a new Java KeyStore named keystore.p12:

keytool -genkey -alias livedata-migrator -storetype PKCS12 -keyalg RSA -keysize 2048 -keystore keystore.p12 -validity 365

See the keytool documentation for further information on the parameters used.
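Putting these properties together, a hedged application.properties fragment enabling HTTPS with a keytool-generated PKCS12 keystore might look like this (the path, password, and alias are illustrative):

```properties
# Enable TLS on the REST API using a PKCS12 keystore
server.ssl.key-store=/etc/wandisco/livedata-migrator/keystore.p12
server.ssl.key-store-password=changeit
server.ssl.key-store-type=PKCS12
server.ssl.key-alias=livedata-migrator
```

After restarting, clients must use https:// URLs; plain HTTP requests will fail as shown in the example error above.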

Filesystem defaults

Each filesystem supported by Data Migrator can apply properties defined using the --properties or --properties-files parameters of the various filesystem add commands. Use these configuration items to set default properties that apply to each type of filesystem at creation time.

adls2.fs.type.default.properties
  A comma-separated list of default properties to apply to ADLS Gen 2 filesystem resources on creation.
  Default value: fs.scheme,fs.account.name,fs.container.name,fs.auth.type,fs.oauth2.client.id,fs.insecure
  Allowed values: Any comma-separated list of valid ADLS Gen 2 configuration properties

hdfs.fs.type.default.properties
  A comma-separated list of default properties to apply to HDFS filesystem resources on creation.
  Default value: fs.defaultFS
  Allowed values: Any comma-separated list of valid HDFS configuration properties

s3a.fs.type.default.properties
  A comma-separated list of default properties to apply to S3A filesystem resources on creation.
  Default value: fs.defaultFS
  Allowed values: Any comma-separated list of valid S3A configuration properties

gcs.fs.type.default.properties
  A comma-separated list of default properties to apply to GCS resources on creation.
  Default value: bucket.name
  Allowed values: Any comma-separated list of valid GCS configuration properties

local.fs.type.default.properties
  A comma-separated list of default properties to apply to local filesystem resources on creation.
  Default value: fs.root
  Allowed values: Any comma-separated list of valid configuration properties

Hadoop Distributed File System inotify

Data Migrator polls the Hadoop cluster for NameNode events using the Hadoop Distributed File System (HDFS) inotify system. Add and configure these properties to change the default poll periods.

hdfs.inotify.poll.period
  The length of time in milliseconds between each event listener poll.
  Default value: 10
  Allowed values: An integer value

hdfs.inotify.sleep.period
  The length of time in milliseconds to delay the event listener poll after 10 consecutive retry failures.
  Default value: 10
  Allowed values: An integer value

HDFS marker storage

Data Migrator uses marker files to manage the migration of files on paths. By default, these are stored in the HDFS user's home directory if possible. If this is not possible, they will be stored in the root directory of the migration on the source filesystem. To configure another directory to store marker files in, alter the following property:

hdfs.fs.marker.dir
  The directory in which marker files are stored.
  Default value: (none)
  Allowed values: The full path to a directory in which marker files will be stored. It must be writable by the user running Data Migrator (typically hdfs).

Proxy Auto-Config (PAC) file support

Data Migrator allows the use of a PAC file so that traffic can be routed through HTTP proxies (examples of PAC files).

lm.proxy.pac
  Path to the PAC file on the local filesystem.
  Default value: (none)
  Allowed values: A path that includes the file URI prefix (for example: file:///tmp/proxy.pac)
info

PAC files for Data Migrator must contain an explicit clause that returns "DIRECT" for "localhost".

Example of a PAC file
function FindProxyForURL(url, host) {
    if (dnsDomainIs(host, "localhost"))
        return "DIRECT";
}

Notification properties

Adjust notification properties in the application.properties file:

/etc/wandisco/livedata-migrator/application.properties
notifications.pending.region.warn.percent
  The percentage threshold of events Data Migrator has fallen behind that triggers a warning. A migration that exceeds this threshold triggers the "Data migration is falling behind" system event notification.
  Default value: 90

notifications.pending.region.clear.percent
  The percentage threshold of events that clears the "Data migration is falling behind" system event notification. A migration that exceeds the notifications.pending.region.warn.percent threshold and then falls below this value automatically clears its notification.
  Default value: 80

fs.health.check.initialDelay
fs.health.check.interval
  Filesystem health checks: the frequency, in milliseconds, at which the system checks for an unhealthy filesystem.
  Default value: 60000
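A sketch of these notification settings in application.properties (the values are illustrative, not recommendations):

```properties
# Warn when a migration falls 95% behind; clear the notification below 75%
notifications.pending.region.warn.percent=95
notifications.pending.region.clear.percent=75
# Check filesystem health every 30 seconds (values in milliseconds)
fs.health.check.initialDelay=30000
fs.health.check.interval=30000
```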

Verifications

The following application properties are optional. They aren't included in the application properties file by default. If you want to use them, add them to the file manually.

verify.runner.threads
  Limits the number of threads used for verifications. Each verification runs on its own thread; verifications that exceed this number are queued until threads become available.
  Default value: 10

verify.lister.threads
  Controls how many threads are available for making calls during verifications. These calls list the contents of directories on the source and target filesystems and are parallelized in separate threads. A good value is twice verify.runner.threads, so that each runner thread always has two lister threads available.
  Default value: 20
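For example, a hedged application.properties fragment that scales up verification concurrency while keeping the recommended 2:1 lister-to-runner ratio (the values are illustrative):

```properties
# Run up to 20 verifications concurrently
verify.runner.threads=20
# Keep two lister threads available per runner thread
verify.lister.threads=40
```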

gRPC maximum message size

dataagent.grpc.diagnostics.max.message.size.kb
  The maximum message size in kilobytes that Data Migrator can send or receive for diagnostic gRPC messages. Don't set the value to less than the default.
  Default value: 4096

dataagent.grpc.transfer.max.message.size.kb
  The maximum message size in kilobytes that Data Migrator can send or receive for gRPC data transfer messages. Don't set the value to less than the default.
  Default value: 20

SSL implementation

The SSL implementation is specified with the hcfs.ssl.channel.mode property. Supported values are: default_jsse, default_jsse_with_gcm, default, and openssl.

default_jsse uses the Java Secure Socket Extension package (JSSE). However, when running on Java 8, the Galois/Counter Mode (GCM) cipher is removed from the list of enabled ciphers. This is due to performance issues with GCM in Java 8.

default_jsse_with_gcm uses the JSSE with the default list of cipher suites.

default attempts to use OpenSSL rather than the JSSE for SSL encryption. If OpenSSL libraries cannot be loaded, it falls back to the default_jsse behavior.

openssl attempts to use OpenSSL, but fails if OpenSSL libraries cannot be loaded.

hcfs.ssl.channel.mode
  Specifies the SSL implementation used to encrypt connections.
  Default value: default
  Allowed values: default_jsse, default_jsse_with_gcm, default, openssl
info

As of Data Migrator 2.1.1, hcfs.ssl.channel.mode replaces fs.s3a.ssl.channel.mode and fs.azure.ssl.channel.mode, which are no longer valid.

Talkback

Change the following properties to alter how the talkback API outputs data:

talkback.apidata.enabled
  Enables an hourly process to log Data Migrator configuration from the output of REST endpoints.
  Default value: true

talkback.apidata.diagnosticsCount
  Specifies the maximum number of diagnostic records that the talkback process logs from the REST endpoint.
  Default value: 500

talkback.apidata.backupsCount
  Specifies the maximum number of backups that the talkback process gets from the REST endpoint.
  Default value: 50

Data transfer manager memory allocation

Add the following property with an integer value between 10 and 75 to limit the percentage of memory used while transferring data.

transfer.manager.heapBasedPercentage
  Limits memory usage when transferring data to the supplied percentage.
  Default value: 50

Additional Java arguments

Data Migrator

/etc/wandisco/livedata-migrator/vars.env

JVM_MAX_MEM
  Specifies the maximum size of the heap that can be used by the Java virtual machine (JVM). Use k, m, or g (case insensitive) for KiB, MiB, or GiB.

JVM_MIN_MEM
  Specifies the minimum size of the heap that can be used by the JVM. Use k, m, or g (case insensitive) for KiB, MiB, or GiB.

JVM_GC
  Specifies the type of garbage collector.

JVM_G1HeapRegionSize
  Specifies the heap region size. Minimum 1 MB, maximum 32 MB; values are always a power of 2.

JVM_GC_LOG_NAME
  Specifies the GC log name.

JVM_GC_LOG_SIZE
  Specifies the maximum GC log size.

JVM_GC_NUM_LOGS
  Specifies the maximum number of GC logs.

HTTP_MAX_CONNECTIONS
  Specifies the maximum number of keep-alive connections that are maintained. The possible range is 0 to 32768.

LDM_EXTRA_JVM_ARGS
  Adds extra Java arguments (for example, LDM_EXTRA_JVM_ARGS="-Djava.security.properties=\"/etc/wandisco/jvm/jvm.override.security\"").
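As a sketch, an /etc/wandisco/livedata-migrator/vars.env using these variables might look like the following (the heap sizes are illustrative; tune them to your host):

```shell
# Fix the JVM heap at 16 GiB to avoid heap resizing
JVM_MAX_MEM=16g
JVM_MIN_MEM=16g
# Pass extra JVM arguments (the override file path is illustrative)
LDM_EXTRA_JVM_ARGS="-Djava.security.properties=\"/etc/wandisco/jvm/jvm.override.security\""
```

Restart the Data Migrator service after changing these values.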

UI

/etc/wandisco/ui/vars.env

JVM_MAX_MEM
  Specifies the maximum size of the heap that can be used by the Java virtual machine (JVM). Use k, m, or g (case insensitive) for KiB, MiB, or GiB.

JVM_MIN_MEM
  Specifies the minimum size of the heap that can be used by the JVM. Use k, m, or g (case insensitive) for KiB, MiB, or GiB.

LDUI_EXTRA_JVM_ARGS
  Adds extra Java arguments (for example, LDUI_EXTRA_JVM_ARGS="-Djava.security.properties=\"/etc/wandisco/jvm/jvm.override.security\"").

Directory structure

The following directories are used for the Data Migrator core package:

/var/log/wandisco/livedata-migrator
  Logs

/etc/wandisco/livedata-migrator
  Configuration files

/opt/wandisco/livedata-migrator
  Java archive files

/opt/wandisco/livedata-migrator/db
  Data Migrator runtime state

/var/run/livedata-migrator
  Runtime files