Lift uses IBM Aspera under the covers to move your data to the cloud at blazing fast speeds.
Lift uses Aspera's patented transport technology that leverages existing WAN infrastructure and commodity hardware to achieve speeds that are hundreds of times faster than FTP and HTTP.
Lift automatically recovers from common problems you might hit during a migration. For example, if your file upload is interrupted mid-transfer, Lift resumes where you left off. File uploads are stable and robust, even over the most bandwidth-constrained networks.
Nobody wants to end up on the front page of the news. Any data moved over the wire to the IBM Cloud is completely secure via a 256-bit encrypted connection.
We want you to try our cloud data services. Cost shouldn't be an issue.
Every data migration is split into three steps: extract from source, transport over the wire, and load into target. Our CLI gives you the flexibility to perform these three steps separately so that your data migration works around your schedule, not the other way around.
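As a rough sketch, the three steps map onto three CLI commands. The snippet below only composes the commands for a single table so you can see the shape of the sequence; the table name SALES is an assumption for illustration, and real invocations also need the source and target credential flags shown later in this tutorial.

```shell
# Hypothetical three-step migration for one table (SALES is a placeholder name).
TABLE=SALES
STEP1="lift extract --source-table $TABLE --file $TABLE.csv"   # 1. extract from source
STEP2="lift put --file $TABLE.csv"                             # 2. transport over the wire
STEP3="lift load --filename $TABLE.csv --target-table $TABLE"  # 3. load into target
printf '%s\n' "$STEP1" "$STEP2" "$STEP3"
```

Because the steps are separate commands, you can extract during business hours, transport overnight, and load during a maintenance window.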
You'll install the Lift CLI only once on your on-premises machine. Under the covers, the CLI works with the Lift Core Services running in the IBM Cloud to help get your data to your Watson Data Platform persistent store. Like any other cloud app, Lift never requires an update. New features are instantly available to you without you having to lift a finger.
When we announced the general availability of the Lift CLI late last year, our engineering team was hard at work making Lift the preferred tool for moving your data to the IBM Cloud. Lift continues in that direction, bringing a number of new features in our latest updates. As always, use the blue feedback strip to let us know how we're doing. Or, if you get stuck with Lift, go to the Community Help section to reach us via Stack Overflow.
Your fully-managed data warehouse in the cloud.
The 'load' command ingests the data from the landing zone into the Db2 Warehouse on Cloud engine.
for Db2 Warehouse on Cloud
The 'ls' command lists the files in the cloud landing zone.
The 'df' command shows the available disk space in the cloud landing zone.
The 'rm' command removes a file from the cloud landing zone.
The 'put' command moves the file from your local disk to the Db2 Warehouse on Cloud landing zone: a space for staging data before ingest.
The flat file containing the data from a single table in your source database.
The 'extract' command allows you to extract data, as flat files, from your database to your local machine.
The source of data you want to move to the IBM Cloud.
Want to migrate from IBM PureData System for Analytics to IBM Db2 Warehouse on Cloud?
It's a two-step process: convert your schema and migrate your data.
To convert your schema, start by downloading the IBM Database Conversion Workbench. The workbench will walk you through the process of converting your source database DDL so that it is compatible with the target. The workbench will also produce a report that tells you where action is required on your part. Once your schema is in place, you'll use the Lift CLI to migrate your data.
You need to keep feeding your warehouse with new data constantly, and the Lift CLI is here to help.
Start by generating a set of CSV files that represent your incremental changes, per database table. Use the Lift CLI to scoop up those delimited files, push them over the wire, and import the files into IBM Db2 Warehouse on Cloud. Throw these steps in a script, set up a cron job, and you've got an ongoing incremental update of your data warehouse.
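The steps above can be sketched as a small script plus a cron entry. Everything here is an assumption for illustration: the script path, the deltas/ directory, the ORDERS table, and the use of a -pf properties file for credentials; substitute your own exporter and connection details.

```shell
# Minimal sketch of an incremental-update script (all names are placeholders).
cat > migrate-deltas.sh <<'EOF'
#!/bin/sh
# 1. Your own export job produces one CSV of changed rows per table, e.g. deltas/ORDERS.csv.
# 2. Stage each file in the landing zone, then load it (credentials come from a properties file):
#    lift put  --file deltas/ORDERS.csv -pf lift.properties
#    lift load --filename ORDERS.csv --target-schema MYSCHEMA \
#              --target-table ORDERS --file-origin user -pf lift.properties
EOF
chmod +x migrate-deltas.sh
# Schedule it nightly at 02:00 with cron:
#   0 2 * * * /path/to/migrate-deltas.sh
```

The script stays idempotent as long as your exporter emits only rows changed since the last run.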
You can use the Lift CLI to migrate data from multiple databases or data sources into a single IBM Db2 Warehouse on Cloud MPP cluster. Lift gives you the flexibility to take tables from multiple data sources and import them under a single schema in IBM Db2 Warehouse on Cloud so that you can decommission your existing database clusters.
Don't slam your transactional data store with reporting queries.
Your customers don't care that you need to run analytics on their buying behavior. They just want a snappy user experience.
Spin up a cloud data warehouse, such as IBM Db2 Warehouse on Cloud, to run analytics on data from your transactional data store. Keep your reports and dashboards up to date by sending small amounts of data from the source, and always have an up-to-date view of your business.
If you don't already have an IBMid, sign up for an IBM Cloud account, which creates one for you.
Log in to IBM Cloud with your IBMid before you download the IBM Lift CLI.
It looks like you're a member of multiple IBM Cloud accounts. Select the appropriate account for your Lift CLI activity:
The IBM Database Conversion Workbench helps you migrate your source schema to IBM Db2 Warehouse on Cloud. It will examine your source DDL and automatically convert it to make the DDL compatible with your target engine. If the Database Conversion Workbench can't convert something automatically, you'll get a report detailing the steps you'll need to take to complete the conversion.
As a prerequisite, you'll need your very own instance of IBM Db2 Warehouse on Cloud. If you've purchased one of the enterprise plans, you're all set! If not, you can get either an IBM Db2 Warehouse on Cloud enterprise cluster or an entry instance.
Download the version of the Lift CLI for your operating system.
Unzip the package to a <zip-extract-directory> directory on your hard drive.
To install the Lift CLI, open a terminal window (macOS or Linux) or command prompt (Windows), and navigate to the <zip-extract-directory> directory. Then, run the install.
% <zip-extract-directory>/install <lift-home>
On Linux: $ sudo <zip-extract-directory>/install /opt/lift-cli
On macOS: % sudo <zip-extract-directory>/install /opt/lift-cli
On Windows: > <zip-extract-directory>\install.bat C:\lift-cli
The lift executable lives in <lift-home>/bin. Once the install completes, add <lift-home>/bin to your PATH environment variable. For the rest of this tutorial, we'll assume that <lift-home>/bin is in your PATH and that `lift` is accessible from your terminal.
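For example, assuming you installed to /opt/lift-cli as in the Linux example above (the install directory is your choice), the PATH update looks like this:

```shell
# Add the Lift bin directory to PATH for the current session
# (/opt/lift-cli is the assumed install directory from the example above).
export PATH="$PATH:/opt/lift-cli/bin"
# Append the same export line to ~/.bashrc or ~/.zshrc to make it permanent.
```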
The following data set has been provided to complete the tutorial. You are free to continue with your own data file and DDL. Download the Boston Property Assessment FY2016 (45.6MB) sample data set (courtesy of Analyze Boston). This package contains a schema (boston_property_assessment_fy2016.schema.sql) and a data file (BOSTON_PROPERTY_ASSESSMENT_FY2016.csv).
Log in to your IBM Db2 Warehouse on Cloud console.
To create a table, complete the following steps:
Copy the contents of boston_property_assessment_fy2016.schema.sql into the DDL box under the Run SQL tab.
Specify a schema by concatenating the schema name with the table name separated by a period. For example, <SCHEMA_NAME>.BOSTON_PROPERTY_ASSESSMENT_FY2016. If a schema is not specified, the table is created in your default schema. The default schema name is your user name in uppercase.
Click Run All. The result is a table called BOSTON_PROPERTY_ASSESSMENT_FY2016 in the specified or default schema.
First, move the data file over to the IBM Db2 Warehouse on Cloud landing zone. You'll use this landing zone to stage your CSV file before it's ingested into IBM Db2 Warehouse on Cloud. You'll need your IBM Db2 Warehouse on Cloud for Analytics credentials. You can get these credentials from your IBM Db2 Warehouse on Cloud console by clicking Connect in the side navigation bar.
% lift put --file <path-to-csv-file>/BOSTON_PROPERTY_ASSESSMENT_FY2016.csv --target-user <database-user> --target-password <database-password> --target-host <database-hostname>
Alternatively, you can put these options, such as target-user and target-password, into a properties file and reference that file from the command by using the -pf option.
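A properties file keeps credentials out of your shell history. The sketch below assumes the entries are key=value pairs named after the long option flags; the file name lift.properties and the placeholder values are illustrative, not prescribed.

```shell
# Sketch of a Lift properties file (key=value entries assumed to mirror the long flags).
cat > lift.properties <<'EOF'
target-user=<database-user>
target-password=<database-password>
target-host=<database-hostname>
EOF
# The put command then shortens to (not run here):
#   lift put --file <path-to-csv-file>/BOSTON_PROPERTY_ASSESSMENT_FY2016.csv -pf lift.properties
```

Remember to restrict the file's permissions (for example, chmod 600 lift.properties) since it holds a password.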
Once the file is copied to the landing zone, load the data set into the IBM Db2 Warehouse on Cloud engine.
% lift load --filename BOSTON_PROPERTY_ASSESSMENT_FY2016.csv --target-schema <your-schema-name> --target-table BOSTON_PROPERTY_ASSESSMENT_FY2016 --header-row --remove --file-origin user --target-user <database-user> --target-password <database-password> --target-host <database-hostname>
Here, the --header-row option is used to let the loader know that the first row of the data set contains the column headings. The first row will, therefore, be ignored. Also, the --file-origin user option is used to denote that this CSV file is user-generated, and was not extracted using `lift extract`.
And you're done. You can now go back to the IBM Db2 Warehouse on Cloud console and run SQL queries against the sample data set.
First, you'll need to create the schema and table structure on your IBM Db2 Warehouse on Cloud target. You have several options for this, but the most effective way is to download and use the IBM Database Conversion Workbench. This tool will help you convert your existing Netezza schema to one that's compatible with the IBM Db2 Warehouse on Cloud engine. Once the conversion is complete, the Database Conversion Workbench will produce a report to let you know which parts of your source DDL were automatically converted, and which parts require manual intervention. Check out the included step-by-step guide for more information.
Once your table structure is in place, we can start moving your Netezza tables over to IBM Db2 Warehouse on Cloud. We'll start by extracting a table to a CSV file. Then, move that file over the wire, stage it in the landing zone on IBM Db2 Warehouse on Cloud, and then load it into the engine.
First, extract the table to a CSV file.
% lift extract --source-schema <schema> --source-table <table> --source-database ADMIN --source-host <netezza-hostname> --source-user <netezza-user> --source-password <netezza-password> --source-database-port <netezza-port> --file <path-to-csv-file>
Alternatively, you can put these options, such as source-user and source-password, into a properties file and reference that file from the command by using the -pf option.
Next, we'll transport the CSV file over to the IBM Db2 Warehouse on Cloud landing zone. For this, we'll use the `put` command.
% lift put --file <path-to-csv-file> --target-user <database-user> --target-password <database-password> --target-host <database-hostname>
And finally, we'll load your CSV file into the IBM Db2 Warehouse on Cloud engine.
% lift load --filename <csv-file> --target-schema <schema-name> --target-table <table-name> --file-origin extract-pda --target-user <database-user> --target-password <database-password> --target-host <database-hostname>
Here, the --file-origin extract-pda option is used to signify that the CSV file that's being loaded was extracted with Lift using the extract command.
And you're done. You can now go back to the IBM Db2 Warehouse on Cloud console and run SQL queries against the data set.
To verify the signed downloaded image of IBM Lift CLI, complete the following steps:
Check that you have a recent version of the jarsigner Java program installed. It is included in a Java SDK/JDK, but not in a JRE.
Run the jarsigner command that corresponds to your operating system and compare your output with the example output:
On Linux: jarsigner -verify -verbose:certs lift-cli-linux.zip
On macOS: jarsigner -verify -verbose:certs lift-cli-macos.zip
On Windows: jarsigner -verify -verbose:certs lift-cli-windows.zip
The following is an example of the jarsigner command output:

s = signature was verified
m = entry is listed in manifest
k = at least one certificate was found in keystore
i = at least one certificate was found in identity scope

- Signed by "CN=IBM Canada Limited, O=IBM Canada Limited, L=Markham, ST=Ontario, C=CA"
Digest algorithm: SHA-256
Signature algorithm: SHA256withRSA, 2048-bit key
Timestamped by "CN=Symantec SHA256 TimeStamping Signer - G2, OU=Symantec Trust Network, O=Symantec Corporation, C=US" on Thu Oct 26 19:55:38 UTC 2017
Timestamp digest algorithm: SHA-256
Timestamp signature algorithm: SHA256withRSA, 2048-bit key

jar verified.
The timestamp indicates when the install image was signed.
Verify the public key fingerprint of the certificate. The openssl program is required. It is available on Linux and macOS by default, and on Windows 10 through the command line program (cmd) or PowerShell in bash mode.
a) Unzip the downloaded image file into a directory of your choice. The following two sub-directories are created:
b) Navigate to the parent directory of the unzipped image file:
c) Run the following openssl command to read the certificate contained in the signed image and write it to LIFTCLI.cer:
openssl pkcs7 -in META-INF/SLINGSHO.RSA -print_certs -inform DER -out LIFTCLI.cer
d) Run the following openssl command to show the fingerprint of the public key contained in the certificate:
openssl x509 -in LIFTCLI.cer -noout -sha256 -fingerprint
The resulting fingerprint output must match the following public key fingerprint of the certificate: