Release Notes

WD Fusion 2.4.1 Build 266 - June 18 2015

This release provides enhancements to the installation process, including detailed validation, and adds support for Stacks.

The 2.3 release is the first step towards making WD Fusion available to download freely for evaluation. As part of this work, additional development has been completed on license handling and validation.

The WD Fusion Impala client supports Impala reads with WD Fusion on CDH. This covers most Impala use cases, in which Hive writes to HDFS and to the Hive metastore that Impala also uses, while Impala is used for fast reads.

Available Packages

WD Fusion currently supports the following versions of Hadoop:

  • CDH 5.2.0
  • CDH 5.3.0
  • CDH 5.4.0
  • HDP 2.1.0
  • HDP 2.2.0
  • PHD 3.0.0

System Requirements

Before you install this software you must ensure that you meet the necessary system, software and hardware requirements for running WD Fusion. See a full checklist in our online user guide: docs.wandisco.com/bigdata/wdfusion/

Supported Hadoop Packages:

View our list of supported versions of Hadoop:
- docs.wandisco.com/bigdata/wdfusion/install.html#supported

Certified Platforms / DBMS & DConE Support:

  • HDP 2.1.2 - 2.2.x
  • PHD 3.0
  • CDH 5.2 - 5.4
  • EMC Isilon 7.2
  • MapR M5, M7
  • DConE 1.3
  • MySQL (Hive MetaStore)

Client Applications Supported:

  • Hive
  • SparkSQL
  • Impala (some current limitations; contact WANdisco's solutions team)
  • HBase
  • Sqoop
  • Flume
  • Kafka
  • Storm

Installation

You can find detailed instructions on how to get up and running in our online user guide:
- docs.wandisco.com/bigdata/wdfusion/install.html#procedure

Upgrades from an earlier version:

It's essential that you remove previously installed versions of WD Fusion before you complete a new installation:
- docs.wandisco.com/bigdata/wdfusion/install.html#cleanup

Previous Release Notes:

You can view the release notes for previous releases in the release archive:
- docs.wandisco.com/bigdata/wdfusion/archive.html


Known issues

  • Spark file input can't be used for streaming directly from replicated folders, and checkpointing may work incorrectly. (FUS-611)

  • Hive from CDH 5.3 does not work with WD Fusion, as a result of HIVE-9991. The issue will be addressed once the fix for Hive is released.

  • Hive from CDH 5.4 will also fail with WD Fusion, resulting in the following message:
    
    FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. MetaException(message:java.lang.RuntimeException: java.lang.ClassNotFoundException: Class com.wandisco.fs.client.FusionFs not found) 
    
    (FUS-455)

  • Do not locate WD Fusion on the same server as other HDFS processes, especially DataNodes. HDFS's default block placement policy dictates that if a client is co-located on a DataNode, that co-located DataNode will receive 1 block of whatever file is being put into HDFS from that client. This means that if the WD Fusion Server (through which all transfers pass) is co-located on a DataNode, then all incoming transfers will place 1 block onto that DataNode. In a transfer-heavy cluster, that DataNode is likely to consume a lot of disk space, potentially forcing the WD Fusion Server to shut down in order to keep the Prevaylers from becoming corrupted. (FUS-453)
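    To illustrate the disk-usage concern above: if each file put through a co-located WD Fusion Server leaves one block on the local DataNode, local consumption grows linearly with the number of transferred files. A minimal back-of-envelope sketch (the block size and file counts below are illustrative assumptions, not measured values):

    ```python
    BLOCK_SIZE_MB = 128  # a common HDFS default; the actual value is cluster-specific

    def colocated_usage_mb(num_files: int, block_size_mb: int = BLOCK_SIZE_MB) -> int:
        """Rough estimate of extra disk consumed on a DataNode co-located with
        the WD Fusion Server: under HDFS's default block placement policy,
        one block per transferred file lands on the local node."""
        return num_files * block_size_mb

    # e.g. 50,000 transferred files at a 128 MB block size:
    print(colocated_usage_mb(50_000))  # 6400000 MB (~6.4 TB) on a single node
    ```

    Even at modest transfer rates this adds up quickly, which is why keeping the WD Fusion Server off DataNode hosts avoids the skew entirely.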

  • Impala does not work with WD Fusion out of the box. Impala is not able to read data from non-HDFS file systems, which is how Impala sees Fusion. With Fusion 2.3, a Fusion Impala client is available that enables Fusion to support Impala reads without modifications to CDH, the Hive Metastore, or the customer's Impala applications. This satisfies the vast majority of customer use cases, since Impala is typically used for fast read performance: most customers use Hive to write data to HDFS and to modify the Hive metastore, which Impala also uses.
    If the customer is using Impala for both reads and writes, a workaround is available to the field upon request. (FUS-476)

New Features

  • The installer now includes extensive system validation, providing immediate warnings and feedback if the system on which you are installing WD Fusion doesn't meet the hardware or software prerequisites. (FUI-490)

  • The settings screen now provides links to the client package for the packaging type that corresponds with your deployment type: HDP Stacks for Ambari or Parcels for Cloudera. (FUI-449, FUI-448)

  • The installer now includes more information about the product license that you upload, along with a new chart on the dashboard for the data throughput, to make it easier to manage any data limits associated with your product license. (FUS-505)

  • WD Fusion UI now sets the default binding for ui.hostname to 0.0.0.0. (FUI-522)

  • The inter-Hadoop Communication (IHC) server component is now better integrated with the WD Fusion UI. (FUI-118)

  • Added the originating client name to file transfer reports. (FUI-427)

Fixed Issues

  • Fixed a rendering error in the UI when a consistency check was run across 3 or more zones. (FUI-173)

  • When editing an existing replicated folder, the UI now ensures that the current membership is pre-selected in the available membership drop-down menu. (FUI-274)