
IBM Lift CLI

Migrate your data quickly, easily and securely via a command-line interface

Watch Lift CLI Demo

Features


Fast

High-speed data movement

Lift uses IBM Aspera under the covers to move your data to the cloud at blazing fast speeds.

Lift uses Aspera's patented transport technology that leverages existing WAN infrastructure and commodity hardware to achieve speeds that are hundreds of times faster than FTP and HTTP.


Resilient

Quickly recover from common problems

Automatically recovers from common problems you may hit during the migration. For example, if your file upload is interrupted mid-transfer, Lift will resume where you last left off. File uploads are stable and robust, even over the most bandwidth-constrained networks.


Secure

Encryption for data in motion

Nobody wants to end up on the front page of the news. Any data moved over the wire to the IBM Cloud is completely secure via a 256-bit encrypted connection.


Free

No strings attached

We want you to try our cloud data services. Cost shouldn't be an issue.


Flexible

Control each migration step

Every data migration is split into three steps: extract from source, transport over the wire, and load into target. Our CLI gives you the flexibility to perform these three steps separately so that your data migration works around your schedule, not the other way around.


Built for the cloud

Always up-to-date

You'll install the Lift CLI only once on your on-premises machine. Under the covers, the CLI works with the Lift Core Services running in the IBM Cloud to help get your data to your Watson Data Platform persistent store. Like any other cloud app, Lift never requires an update. New features are instantly available to you without you having to lift a finger.


OUR LATEST UPDATE

What’s new in Lift

Support for new data source: Microsoft SQL Server database

Migrate your data from a Microsoft SQL Server database to IBM Cloud. The Lift CLI adds the ability to migrate data from Microsoft SQL Server to IBM Db2 Warehouse on Cloud and Db2 on Cloud. The migration path is the same as for other Lift-supported data sources. For more information specific to Microsoft SQL Server, visit our FAQ section. If you have questions about migration or need assistance with Lift, go to the Community Help section to reach us through Stack Overflow or open a ticket in the IBM Cloud Service Portal.

See All Recent Changes | Read Prior Blogs

How It Works

[Diagram: On-premises, the Lift CLI extracts the source database to a CSV file (extract), puts the file over the wire (put) to the landing zone for Db2 Warehouse on Cloud in the IBM Cloud, and then loads it into Db2 Warehouse on Cloud (load).]

Use Cases

Want to migrate from IBM PureData System for Analytics to IBM Db2 Warehouse on Cloud?

It's a two-step process: convert your schema and migrate your data.

To convert your schema, start by downloading the IBM Database Conversion Workbench. The workbench walks you through the process of converting your source database DDL so that it is compatible with the target. The workbench also produces a report that tells you where action is required. Once your schema is in place, you'll use the Lift CLI to migrate your data.

Get Data Conversion Workbench

You need to keep feeding your warehouse with new data constantly, and the Lift CLI is here to help.

Start by generating a set of CSV files that represent your incremental changes, per database table. Use the Lift CLI to scoop up those delimited files, push them over the wire, and import the files into IBM Db2 Warehouse on Cloud. Throw these steps in a script, set up a cron job, and you've got an ongoing incremental update of your data warehouse.
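The cron-driven flow above can be sketched as a small wrapper script. The table list, export directory, credentials, and host name below are illustrative placeholders, and the `run` helper prints each command instead of executing it when the `lift` executable is not on the PATH:

```shell
#!/bin/sh
# Sketch of an ongoing incremental feed into Db2 Warehouse on Cloud.
# EXPORT_DIR, TABLES, the schema, and all credentials are placeholders.
EXPORT_DIR=/var/exports
TABLES="ORDERS LINEITEM CUSTOMER"
TARGET_USER=user1
TARGET_PASSWORD=pass1
TARGET_HOST=mydb.example.com

# Execute the command if the Lift CLI is installed; otherwise print it.
run() {
  if command -v lift >/dev/null 2>&1; then
    "$@"
  else
    echo "[dry-run] $*"
  fi
}

for table in $TABLES; do
  csv="$EXPORT_DIR/${table}_delta.csv"   # written by your export job
  # Push the delimited file over the wire to the landing zone.
  run lift put --file "$csv" \
    --target-user "$TARGET_USER" --target-password "$TARGET_PASSWORD" \
    --target-host "$TARGET_HOST"
  # Import the staged file into the warehouse table.
  run lift load --filename "${table}_delta.csv" \
    --target-schema MYSCHEMA --target-table "$table" --file-origin user \
    --target-user "$TARGET_USER" --target-password "$TARGET_PASSWORD" \
    --target-host "$TARGET_HOST"
done
```

A crontab entry such as `0 2 * * * /usr/local/bin/feed_warehouse.sh` would then run the feed nightly.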

You can use the Lift CLI to migrate data from multiple different databases or data sources into a single IBM Db2 Warehouse on Cloud MPP cluster. Lift provides you with the flexibility to take tables from multiple data sources and import them under a single schema in IBM Db2 Warehouse on Cloud so that you can decommission your existing database cluster.

Don't slam your transactional data store with reporting queries.

Your customers don't care that you need to run analytics on their buying behavior. They just want a snappy user experience.

Spin up a cloud data warehouse, such as IBM Db2 Warehouse on Cloud, to run analytics on data from your transactional data store. Keep your reports and dashboards up to date by sending small amounts of data from the source, and always have an up-to-date view of your business.

Source                               IBM Db2 Warehouse on Cloud   IBM Db2 on Cloud
IBM Db2                              Yes                          Yes
IBM Db2 Warehouse                    Yes                          No
IBM Integrated Analytics System      Yes                          No
IBM PureData System for Analytics    Yes                          No
Oracle Database                      Yes                          Yes
Microsoft SQL Server                 Yes                          Yes
CSV file format                      Yes                          Yes
SUPPORTED SOURCES

IBM Db2

IBM Db2 Warehouse

IBM PureData System for Analytics

IBM Integrated Analytics System

Oracle Database

Microsoft SQL Server

Upload and load any data represented in a CSV file format

SUPPORTED TARGETS

IBM Db2 on Cloud

IBM Db2 Warehouse on Cloud

Download Lift CLI

SELECT PACKAGE FOR DOWNLOAD
Linux

- 125 MB -

Mac

- 95 MB -

Windows

- 165 MB -


REQUIREMENTS
Minimum local system requirements
  • CPU - 1 Core
  • Memory - 300 MB
  • Disk - 250 MB for the Lift CLI. Additional disk space is recommended for data extraction. Refer to the FAQs for more guidance on disk space requirements for data extractions.
Supported database versions
Sources
  • IBM Db2 9.7 and later
  • IBM Db2 Warehouse External Table (ET) enabled versions
  • PureData® System for Analytics Version 6.0.3 and later
  • IBM Integrated Analytics System
  • Oracle Database 11g Release 2, 12c Release 1, 12c Release 2, with latest patch levels
  • Microsoft SQL Server 2014 and later
Targets
  • IBM Db2 on Cloud
  • IBM Db2 Warehouse on Cloud (Entry and Enterprise Edition) on IBM Cloud and Amazon Web Services (AWS)
Supported operating systems
  • macOS 10.11 and later
  • Red Hat Enterprise Linux (RHEL) Server 6 and later
  • SUSE Linux Enterprise Server (SLES) 11 and later
  • Ubuntu 15.10 and later
  • Windows 7 and later
  • Windows Server 2012 and later (all editions)

Convert your schema with IBM Database Conversion Workbench

The IBM Database Conversion Workbench helps you migrate your source schema to IBM Db2 Warehouse on Cloud. It will examine your source DDL and automatically convert it to make the DDL compatible with your target engine. If the Database Conversion Workbench can't convert something automatically, you'll get a report detailing the steps you'll need to take to complete the conversion.

Linux

- 260 MB -

Mac

- 233 MB -

Windows

- 295 MB -

Docs

Migrate data to IBM Db2 Warehouse on Cloud or IBM Db2 on Cloud in 5 minutes

Create an instance

As a prerequisite, you need an instance of Db2 on Cloud or Db2 Warehouse on Cloud. If you have an instance, you're all set! If not, create an instance of one of the following IBM Cloud managed services:

  • Db2 on Cloud
  • Db2 Warehouse on Cloud

Get and install the Lift CLI

    1

    Download the version of the Lift CLI for your operating system.

    2

    Copy and paste the token shown on the download page when you are prompted during installation.

    3

    Unzip the package to a <zip-extract-directory> directory on your hard drive.

    4

    To install the Lift CLI, open a terminal window (macOS or Linux) or command prompt (Windows), and navigate to the <zip-extract-directory> directory. Install the Lift CLI by running the following command:

    % <zip-extract-directory>/install <lift-home>

    For example:

    On Linux: $ sudo <zip-extract-directory>/install /opt/lift-cli

    On macOS: % sudo <zip-extract-directory>/install /opt/lift-cli

    On Windows: > <zip-extract-directory>\install.bat C:\lift-cli

    The lift command executable is located in <lift-home>/bin. After the installation completes, add <lift-home>/bin to your PATH environment variable. For the rest of this tutorial, we assume that <lift-home>/bin is in your PATH and that the lift command is accessible from your terminal.
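    For example, assuming the /opt/lift-cli install directory used above (a placeholder; substitute your own <lift-home>), a bash user would update PATH like this:

```shell
# Make the lift command available in the current session.
export PATH="/opt/lift-cli/bin:$PATH"

# Persist the change for future sessions (bash; other shells use a
# different profile file).
echo 'export PATH="/opt/lift-cli/bin:$PATH"' >> ~/.bashrc

# Verify that the shell can now resolve the lift command.
command -v lift || echo "lift not found; check your install directory"
```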

    Tip: The lift command can be run by using a properties file that contains settings for command options, such as login credentials, and you can reference that file from within the command by using the --properties-file or -pf option. The following is an example of a lift extract command that uses a properties file:

    % lift extract --properties-file <path-to-properties-file>

    The following example shows the format within a properties file that contains login credentials:

    source-user=user1
    source-password=pass1
    target-user=user1
    target-password=pass1
    max-load-errors=500
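    Putting the tip together: create the properties file once, lock down its permissions, and reference it from any lift command. The file name lift.properties and the credential values are illustrative, and the final line only prints the command so the sketch runs without a Lift installation:

```shell
# Write a properties file holding credentials and defaults.
cat > lift.properties <<'EOF'
source-user=user1
source-password=pass1
target-user=user1
target-password=pass1
max-load-errors=500
EOF

# The file contains passwords, so restrict it to the current user.
chmod 600 lift.properties

# Reference the file instead of typing credentials on every command.
echo lift extract --properties-file lift.properties \
  --source-schema MYSCHEMA --source-table MYTABLE
```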

    CONTINUE WITH A CHOSEN MIGRATION PATH:

    Click the data source link for the next steps:

    Target: IBM Db2 on Cloud
    Data sources: IBM Db2, Oracle Database, Microsoft SQL Server, CSV file format

    Target: IBM Db2 Warehouse on Cloud
    Data sources: IBM Db2, IBM Db2 Warehouse, IBM Integrated Analytics System, IBM PureData System for Analytics, Oracle Database, Microsoft SQL Server, CSV file format

    Prepare for migration

    5

    The following sample data set can be used to complete the tutorial, or you can use your own data file and DDL. Download the Boston Property Assessment FY2016 (45.6 MB) sample data set (courtesy of Analyze Boston). This package contains a schema file (boston_property_assessment_fy2016.schema.sql) and a data file (BOSTON_PROPERTY_ASSESSMENT_FY2016.csv).

    6

    Log in to your Db2 Warehouse on Cloud or Db2 on Cloud console.

    7

    To create a table, complete the following steps:

    a

    Copy the contents of boston_property_assessment_fy2016.schema.sql into the DDL box under the Run SQL tab.

    b

    Specify a schema by concatenating the schema name with the table name separated by a period. For example, <SCHEMA_NAME>.BOSTON_PROPERTY_ASSESSMENT_FY2016. If a schema is not specified, the table is created in your default schema. The default schema name is your user name in uppercase.

    c

    Click Run All. The result is a table called BOSTON_PROPERTY_ASSESSMENT_FY2016 in the specified or default schema.

    Move your data

    Move the data file to the Db2 Warehouse on Cloud or Db2 on Cloud landing zone. This landing zone stages your CSV file before it's ingested into the database. You need your database credentials. You can get these credentials from the console by clicking Connect in the side navigation bar.

    8

    Move the data file to the landing zone of the target database by running the following lift put command:

    % lift put --file <path-to-csv-file>/BOSTON_PROPERTY_ASSESSMENT_FY2016.csv --target-user <target-user-name> --target-password <target-password> --target-host <target-database-host-name>

    9

    Load the data set CSV file into the target database by running the following lift load command:

    % lift load --filename BOSTON_PROPERTY_ASSESSMENT_FY2016.csv --target-schema <your-schema-name> --target-table BOSTON_PROPERTY_ASSESSMENT_FY2016 --header-row --remove --file-origin user --target-user <target-user-name> --target-password <target-password> --target-host <target-database-host-name>

    The `--header-row` option specifies that the first row of the data set contains the column headings. The `--file-origin user` option specifies that this CSV file is user-generated and was not extracted by using the lift extract command.

    10

    You're done. You can now run SQL queries on the sample data set from the database console.

    Prepare for migration

    5

    Create the schema and table structures on your Db2 Warehouse on Cloud database target. You have several options to do this, but the most effective way is to download, install, and use the IBM Database Conversion Workbench (DCW). This tool can help you to convert your existing Netezza schema to one that's compatible with the Db2 Warehouse on Cloud database engine. After completing the conversion, the DCW produces a report that identifies which parts of your source database DDL were automatically converted and which parts require manual intervention. For more information, see the included step-by-step DCW guide.

    Move your data

    After your table structures are in place on the target database, you can start moving your PureData System for Analytics (Netezza) tables. Start by extracting a table to a CSV file. Move that file over the network, stage it in the landing zone on Db2 Warehouse on Cloud, and then load it into the database.

    6

    Extract the table to a CSV file by running the following lift extract command:

    % lift extract --source-schema <source-schema-name> --source-table <source-table-name> --source-database ADMIN --source-host <source-database-host-name> --source-user <source-user-name> --source-password <source-password> --source-database-port <source-database-port> --file <path-to-csv-file>

    7

    Move your CSV file and stage it in the landing zone of the target database by running the following lift put command:

    % lift put --file <path-to-csv-file> --target-user <target-user-name> --target-password <target-password> --target-host <target-database-host-name>

    8

    Load your CSV file into the target engine database by running the following lift load command:

    % lift load --filename <csv-file-name> --target-schema <target-schema-name> --target-table <target-table-name> --file-origin extract-pda --target-user <target-user-name> --target-password <target-password> --target-host <target-database-host-name>

    The `--file-origin extract-pda` option specifies that the CSV file was extracted by using the lift extract command.

    9

    You're done. You can now run SQL queries on your data from the database console.

    Prepare for migration

    Create the table structures on your Db2 Warehouse on Cloud or Db2 on Cloud database target.

    5

    To migrate table structures, use the lift ddl command instead of the Database Conversion Workbench (DCW) tool. Choose one of the following methods:

    1

    Migrate table structures by using Lift without any intervention. This method uses a single command that extracts the DDL from the source database and applies it to the target database. (Recommended)

    % lift ddl --migrate --source-schema <source-schema-name> --source-object <source-object-name> --source-database <source-database-name> --source-user <source-user-name> --source-password <source-password> --source-host <source-database-host-name> --source-database-port <source-database-port> --source-database-type <source-database-type> --target-user <target-user-name> --target-password <target-password> --target-host <target-database-host-name>

    2

    Extract the table DDL by using the Lift CLI, optionally modify the generated DDL, and apply the DDL to the target database. This method involves two command options:

    1. Generate the DDL.

    2. Execute the generated DDL.

    Use this method if you want to customize the DDL extracted from the source database.

    % lift ddl --generate --source-schema <source-schema-name> --source-object <source-object-name> --source-database <source-database-name> --source-user <source-user-name> --source-password <source-password> --source-host <source-database-host-name> --source-database-port <source-database-port> --source-database-type <source-database-type> [options]

    % lift ddl --execute --file <path-to-ddl-file> --target-user <target-user-name> --target-password <target-password> --target-host <target-database-host-name>

    Run the lift ddl --help command for more available command options.

    Move your data

    After your table structures are in place on the target database, you can start moving your tables. Start by extracting a table to a CSV file. Move that file over the network, stage it in the landing zone of the target database, and then load it into the database.

    6

    Extract the table to a CSV file by running the following lift extract command:

    % lift extract --source-schema <source-schema-name> --source-table <source-table-name> --source-database <source-database-name> --source-host <source-host-name> --source-user <source-user-name> --source-password <source-password> --source-database-port <source-database-port> --source-database-type <ias/db2/db2w> --file <path-to-csv-file>

    The `ias`, `db2`, and `db2w` settings for the `--source-database-type` command option specify the source database type.

    7

    Move your CSV file and stage it in the landing zone of the target database by running the following lift put command:

    % lift put --file <path-to-csv-file> --target-user <target-user-name> --target-password <target-password> --target-host <target-database-host-name>

    8

    Load your CSV file into the target engine database by running the following lift load command:

    % lift load --filename <csv-file-name> --target-schema <target-schema-name> --target-table <target-table-name> --file-origin <extract-ias/extract-db2/extract-db2w> --target-user <target-user-name> --target-password <target-password> --target-host <target-database-host-name>

    The `extract-ias`, `extract-db2`, and `extract-db2w` settings for the `--file-origin` command option specify that the CSV file was extracted from the corresponding database by using the lift extract command.

    9

    You're done. You can now run SQL queries on your data from the database console.

    Configuration considerations

    Lift uses the following modes to extract your data:

    - Native

    - JDBC

    The native mode of the lift extract command leverages Oracle's native tools for extraction and is recommended for the best performance. If the Oracle client tools cannot be installed or configured properly, the Lift CLI automatically switches to JDBC mode, which is relatively slower. For the best performance, install and configure the Oracle client tools (Basic and Tools packages), which provide the native extract tooling, on the same machine as the Lift CLI, and add the client path to the system environment variables so that the Lift CLI can find and use it.

    The native mode extract method uses the server encoding irrespective of the encoding on the client machine. If the table name or column names contain multi-byte character set (MBCS) characters, the encoding of the client must match the encoding of the server so that the table and column names can be specified properly in the Lift CLI commands.
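    For example, on Linux the client location can be exposed to the Lift CLI through environment variables. The directory below is an assumed Oracle Instant Client unpack location, not something Lift prescribes; substitute your own path:

```shell
# /opt/oracle/instantclient_12_2 is an assumed unpack location for the
# Oracle Instant Client Basic and Tools packages; adjust as needed.
export ORACLE_HOME=/opt/oracle/instantclient_12_2

# Let the Lift CLI find the native Oracle tools and shared libraries.
export PATH="$ORACLE_HOME:$PATH"
export LD_LIBRARY_PATH="$ORACLE_HOME${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
```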

    Prepare for migration

    Recommended path for migration:
    Migrate table structures to the target database -> Move data using Lift CLI -> Migrate indexes and other constraints (optional)

    5

    Create the schema and table structures on your IBM Cloud target database (Db2 Warehouse on Cloud or Db2 on Cloud). You have several options to do this, but the most effective way is to download, install, and use the IBM Database Conversion Workbench (DCW). This tool can help you to convert your existing Oracle schema to one that's compatible with the Db2 Warehouse on Cloud or Db2 on Cloud database engine. After completing the conversion, the DCW produces a report that identifies which parts of your source DDL were automatically converted and which parts require manual intervention. For more information, see the included step-by-step DCW guide.

    Move your data

    After your table structures are in place, you can start moving your Oracle tables. Start by extracting a table to a CSV file. Move that file over the network, stage it in the landing zone on Db2 Warehouse on Cloud or on Db2 on Cloud, and then load it into the database.

    6

    Extract the table to a CSV file by running the following lift extract command:

    % lift extract --source-schema <oracle-schema-name> --source-table <oracle-table-name> --source-database <oracle-database-name> --source-host <oracle-host-name> --source-user <oracle-user-name> --source-password <oracle-password> --source-database-port <oracle-database-port> --source-database-type <oracle> --file <path-to-csv-file>

    7

    Move your CSV file and stage it in the landing zone of the target database by running the following lift put command:

    % lift put --file <path-to-csv-file> --target-user <target-user-name> --target-password <target-password> --target-host <target-database-host-name>

    8

    Load your CSV file into the target engine database by running the following lift load command:

    % lift load --filename <csv-file-name> --target-schema <target-schema-name> --target-table <target-table-name> --file-origin extract-oracle --target-user <target-user-name> --target-password <target-password> --target-host <target-database-host-name>

    The `--file-origin extract-oracle` option specifies that the CSV file was extracted by using the lift extract command.

    9

    You're done. You can now run SQL queries on your data from the database console.

    Post data movement

    10

    Optionally, you can move indexes and other constraints from the source database to the target database after completing the data movement.

    Prepare for migration

    Recommended path for migration:
    Migrate table structures to the target database -> Move data using Lift CLI -> Migrate indexes and other constraints (optional)

    5

    Create the schema and table structures on your IBM Cloud target database (Db2 Warehouse on Cloud or Db2 on Cloud).

    Move your data

    After your table structures are in place, you can start moving your Microsoft SQL Server tables. Start by extracting a table to a CSV file. Move that file over the network, stage it in the landing zone on Db2 Warehouse on Cloud or on Db2 on Cloud, and then load it into the database.

    6

    Extract the table to a CSV file by running the following lift extract command:

    % lift extract --source-schema <sqlserver-schema-name> --source-table <sqlserver-table-name> --source-database <sqlserver-database-name> --source-host <sqlserver-host-name> --source-user <sqlserver-user-name> --source-password <sqlserver-password> --source-database-port <sqlserver-database-port> --source-database-type <sqlserver> --file <path-to-csv-file>

    7

    Move your CSV file and stage it in the landing zone of the target database by running the following lift put command:

    % lift put --file <path-to-csv-file> --target-user <target-user-name> --target-password <target-password> --target-host <target-database-host-name>

    8

    Load your CSV file into the target engine database by running the following lift load command:

    % lift load --filename <csv-file-name> --target-schema <target-schema-name> --target-table <target-table-name> --file-origin extract-sqlserver --target-user <target-user-name> --target-password <target-password> --target-host <target-database-host-name>

    The `--file-origin extract-sqlserver` option specifies that the CSV file was extracted by using the lift extract command.

    9

    You're done. You can now run SQL queries on your data from the database console.

    Post data movement

    10

    Optionally, you can move indexes and other constraints from the source database to the target database after completing the data movement.

Copy and paste the following token during installation:

Go to Getting Started tutorial
Optional: Verify the download image

Token: