Thursday 21 June 2012

Loading existing RDBMS data into SQLFire

A common question I get is how to load data into SQLFire from an existing RDBMS. If you have a simple schema, you can use Apache DDLUtils to do this, as shown below. In this example we take the classic DEPT/EMP schema from MySQL and load it into SQLFire.
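
For reference, here is a minimal sketch of that source schema on the MySQL side (the column types are illustrative; the foreign key name matches the EMP_FK key you will see DDLUtils migrate later):

CREATE TABLE dept
(
  deptno INT NOT NULL PRIMARY KEY,
  dname  VARCHAR(14),
  loc    VARCHAR(13)
);

CREATE TABLE emp
(
  empno    INT NOT NULL PRIMARY KEY,
  ename    VARCHAR(10),
  job      VARCHAR(9),
  mgr      INT,
  hiredate DATE,
  sal      DECIMAL(7,2),
  comm     DECIMAL(7,2),
  deptno   INT,
  CONSTRAINT emp_fk FOREIGN KEY (deptno) REFERENCES dept (deptno)
);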

1. Create a build.xml as follows. This example is taken from the $SQLFIRE_HOME/lib/ddlutils/example directory and has been edited to use a MySQL database as the source schema and a SQLFire database as the target.
  
<?xml version="1.0"?>

<project name="test" default="writeDDLToXML" basedir=".">

<path id="runtime-classpath">
  <fileset dir="./lib">
    <include name="**/*.jar"/>
    <include name="**/*.zip"/>
  </fileset>
</path>

 <!-- This will output all the objects in the database 'test' and write the schema definition
   to the file 'db-schema1.xml'. It will also extract the data and write it to 'data.xml'
  -->
<target name="writeDDLToXML" description="Dumps the database structure">
  <taskdef name="databaseToDdl"
           classname="org.apache.ddlutils.task.DatabaseToDdlTask">
    <classpath refid="runtime-classpath"/>

  </taskdef>
  <databaseToDdl verbosity="debug">
  <!--
    <database url="jdbc:sqlfire://localhost:1527"
              driverClassName="com.vmware.sqlfire.jdbc.ClientDriver"
              username="app"
              password="app"/>
  -->

    <database url="jdbc:mysql://localhost/scottdb"
              driverClassName="com.mysql.jdbc.Driver"
              username="scott"
              password="tiger"/>

    <writeSchemaToFile outputFile="db-schema1.xml" failonerror="false" />

    <!-- Comment out the line below if the source DB is too big -->
    <writeDataToFile outputFile="data.xml"/>
  </databaseToDdl>

</target>


 <!-- This will create the tables, etc in SQLFire.
      If your table names or column names contain embedded spaces, set
      usedelimitedsqlidentifiers to true.
  -->
<target name="createDBFromXML" description="Create the DB tables ..">
  <taskdef classname="org.apache.ddlutils.task.DdlToDatabaseTask"
          name="ddlToDatabase"
          classpathref="runtime-classpath"/>

   <ddlToDatabase usedelimitedsqlidentifiers="false">
     <database driverclassname="com.vmware.sqlfire.jdbc.ClientDriver"
             url="jdbc:sqlfire://localhost:1527"
             username="app"
             password="app"/>
   <fileset dir=".">
     <include name="db-schema1.xml"/>
   </fileset>

   <!--   <createdatabase failonerror="false"/>  -->

   <writeschematodatabase alterdatabase="true"
                          failonerror="false"/>
   </ddlToDatabase>
</target>


 <!-- Extract DDL for a schema and write this out as a 'SQL' script file (db-schema1.sql).
      EDIT THIS FILE TO CHANGE THE TABLES TO BE CUSTOM PARTITIONED, REPLICATED, PERSISTENT, ETC
      Then, execute the SQL script using 'IJ' or your favorite tool (like SQuirrel)
 -->
<target name="writeDDLToSQL" description="Dumps the database structure">
  <taskdef name="databaseToDdl"
           classname="org.apache.ddlutils.task.DatabaseToDdlTask">
    <classpath refid="runtime-classpath"/>

  </taskdef>
  <databaseToDdl verbosity="debug">
    <database url="jdbc:sqlfire://localhost:1527"
              driverClassName="com.vmware.sqlfire.jdbc.ClientDriver"
              username="app"
              password="app"/>

    <writeSchemaSqlToFile outputFile="db-schema1.sql" dodrops="true" failonerror="false" alterdatabase="false"/>
  </databaseToDdl>
</target>


 <!-- Imports data rows into a database.
      If your table names or column names contain embedded spaces, set
      usedelimitedsqlidentifiers to true.
  -->
<target name="ImportDataToDB" description="Import the data ..">
  <taskdef classname="org.apache.ddlutils.task.DdlToDatabaseTask"
          name="ddlToDatabase"
          classpathref="runtime-classpath"/>

   <ddlToDatabase usedelimitedsqlidentifiers="false">
    <database url="jdbc:sqlfire://localhost:1527"
              driverClassName="com.vmware.sqlfire.jdbc.ClientDriver"
              username="app"
              password="app"/>
   <fileset dir=".">
     <include name="db-schema1.xml"/>
   </fileset>

   <writedatatodatabase datafile="data.xml"
                        usebatchmode="true"
                        batchsize="1000"/>
 </ddlToDatabase>
</target>

</project>

2. Ensure you have Apache Ant in your PATH
3. Ensure you have placed the JDBC drivers for SQLFire and MySQL where Ant will pick them up. In this example the build uses a "lib" subdirectory and includes all JARs in that directory on the runtime classpath. The "lib" directory also contains the Apache DDLUtils JAR files.

mysql-connector-java-5.1.17-bin.jar
sqlfireclient.jar

Here is a full listing of the JAR files in the "lib" directory I was using.

[Thu Jun 21 14:29:01 papicella@:~/sqlfire/vFabric_SQLFire_1021/pasdemos/ddlutils-demo/ddlutils/lib ] $ d
total 38848
-rw-r--r--   1 papicella  staff    787920  4 Jul  2011 mysql-connector-java-5.1.17-bin.jar
-rw-r--r--   1 papicella  staff    474464 20 Jun 14:32 wstx-asl-3.0.2.jar
-rw-r--r--   1 papicella  staff     26514 20 Jun 14:32 stax-api-1.0.1.jar
-rw-r--r--   1 papicella  staff    352668 20 Jun 14:32 log4j-1.2.8.jar
-rw-r--r--   1 papicella  staff    486522 20 Jun 14:32 dom4j-1.4.jar
-rw-r--r--   1 papicella  staff     42492 20 Jun 14:32 commons-pool-1.2.jar
-rw-r--r--   1 papicella  staff     38015 20 Jun 14:32 commons-logging-1.0.4.jar
-rw-r--r--   1 papicella  staff    207723 20 Jun 14:32 commons-lang-2.1.jar
-rw-r--r--   1 papicella  staff    139966 20 Jun 14:32 commons-digester-1.7.jar
-rw-r--r--   1 papicella  staff    107631 20 Jun 14:32 commons-dbcp-1.2.1.jar
-rw-r--r--   1 papicella  staff    559366 20 Jun 14:32 commons-collections-3.1.jar
-rw-r--r--   1 papicella  staff     46725 20 Jun 14:32 commons-codec-1.3.jar
-rw-r--r--   1 papicella  staff    188671 20 Jun 14:32 commons-beanutils-1.7.0.jar
-rw-r--r--   1 papicella  staff    447398 20 Jun 14:34 DdlUtils-1.0.jar
-rw-r--r--   1 papicella  staff    822779 20 Jun 14:37 sqlfireclient.jar
-rw-r--r--   1 papicella  staff  15125561 20 Jun 14:37 sqlfire.jar
drwxr-xr-x  18 papicella  staff       612 21 Jun 14:08 ./
drwxr-xr-x   7 papicella  staff       238 21 Jun 14:08 ../


** At this point you can begin the process as shown below **

4. Create the schema and data files in XML using the Ant task as follows (a sketch of the generated files follows the run below)

[Thu Jun 21 08:36:10 papicella@:~/sqlfire/vFabric_SQLFire_1021/pasdemos/ddlutils-demo/ddlutils ] $ ant writeDDLToXML
Buildfile: /Users/papicella/vmware/software/sqlfire/vFabric_SQLFire_1021/pasdemos/ddlutils-demo/ddlutils/build.xml

writeDDLToXML:
[databaseToDdl] Borrowed connection org.apache.commons.dbcp.PoolableConnection@bdee400 from data source
[databaseToDdl] Returning connection org.apache.commons.dbcp.PoolableConnection@bdee400 to data source.
[databaseToDdl] Remaining connections: None
[databaseToDdl] Written schema to /Users/papicella/vmware/software/sqlfire/vFabric_SQLFire_1021/pasdemos/ddlutils-demo/ddlutils/db-schema1.xml
[databaseToDdl] Borrowed connection org.apache.commons.dbcp.PoolableConnection@bdee400 from data source
[databaseToDdl] Returning connection org.apache.commons.dbcp.PoolableConnection@bdee400 to data source.
[databaseToDdl] Remaining connections: None
[databaseToDdl] Borrowed connection org.apache.commons.dbcp.PoolableConnection@bdee400 from data source
[databaseToDdl] Returning connection org.apache.commons.dbcp.PoolableConnection@bdee400 to data source.
[databaseToDdl] Remaining connections: None
[databaseToDdl] Written data XML to file/Users/papicella/vmware/software/sqlfire/vFabric_SQLFire_1021/pasdemos/ddlutils-demo/ddlutils/data.xml

BUILD SUCCESSFUL
Total time: 1 second
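
For reference, db-schema1.xml is written in the DdlUtils schema format, and data.xml holds one XML element per row with the table name as the element name. Abbreviated sketches of what they look like for this example (attribute details such as types and sizes will vary):

<database name="scottdb">
  <table name="DEPT">
    <column name="DEPTNO" type="INTEGER" primaryKey="true" required="true"/>
    <column name="DNAME" type="VARCHAR" size="14"/>
    <column name="LOC" type="VARCHAR" size="13"/>
  </table>
  <!-- EMP table definition follows -->
</database>

<data>
  <DEPT DEPTNO="10" DNAME="ACCOUNTING" LOC="NEW YORK"/>
  <!-- remaining DEPT and EMP rows follow -->
</data>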

5. If you're happy to use only replicated tables, you can import the schema definition into
SQLFire as follows. Every table created in this scenario is replicated by default (a sketch of the equivalent DDL follows the run below).

[Thu Jun 21 08:36:37 papicella@:~/sqlfire/vFabric_SQLFire_1021/pasdemos/ddlutils-demo/ddlutils ] $ ant createDBFromXML
Buildfile: /Users/papicella/vmware/software/sqlfire/vFabric_SQLFire_1021/pasdemos/ddlutils-demo/ddlutils/build.xml

createDBFromXML:
[ddlToDatabase] Read schema file /Users/papicella/vmware/software/sqlfire/vFabric_SQLFire_1021/pasdemos/ddlutils-demo/ddlutils/db-schema1.xml
[ddlToDatabase] Table DEPT needs to be added
[ddlToDatabase] Table EMP needs to be added
[ddlToDatabase] Foreign key Foreign key [name=EMP_FK; foreign table=DEPT; 1 references] needs to be added to table EMP
[ddlToDatabase] Executed 3 SQL command(s) with 0 error(s)
[ddlToDatabase] Written schema to database

BUILD SUCCESSFUL
Total time: 0 seconds
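
For illustration, each table created this way is equivalent to a plain CREATE TABLE with the REPLICATE clause, which is SQLFire's default when no partitioning clause is given. A minimal sketch for DEPT:

CREATE TABLE dept
(
  deptno INT NOT NULL PRIMARY KEY,
  dname  VARCHAR(14),
  loc    VARCHAR(13)
) REPLICATE;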

6. This next step connects to SQLFire and writes the schema created in step 5 out as a SQL script file. This is useful because you can then edit the SQL to add the SQLFire clauses for partitioned tables, disk stores and so on; without persistent disk stores the data will disappear once the system goes down. An example edit is sketched after the run below.

[Thu Jun 21 08:54:01 papicella@:~/sqlfire/vFabric_SQLFire_1021/pasdemos/ddlutils-demo/ddlutils ] $ ant writeDDLToSQL
Buildfile: /Users/papicella/vmware/software/sqlfire/vFabric_SQLFire_1021/pasdemos/ddlutils-demo/ddlutils/build.xml

writeDDLToSQL:
[databaseToDdl] Borrowed connection org.apache.commons.dbcp.PoolableConnection@2d1e233 from data source
[databaseToDdl] Returning connection org.apache.commons.dbcp.PoolableConnection@2d1e233 to data source.
[databaseToDdl] Remaining connections: None
[databaseToDdl] Written schema SQL to /Users/papicella/vmware/software/sqlfire/vFabric_SQLFire_1021/pasdemos/ddlutils-demo/ddlutils/db-schema1.sql

BUILD SUCCESSFUL
Total time: 1 second
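
At this point you can edit db-schema1.sql before executing it. For example, you could make EMP partitioned by department and persisted to a disk store. A sketch of the kind of clauses you can add (the disk store name 'demostore' and the column definitions are illustrative; check the SQLFire DDL reference for your version):

CREATE DISKSTORE demostore;

CREATE TABLE emp
(
  empno    INT NOT NULL PRIMARY KEY,
  ename    VARCHAR(10),
  job      VARCHAR(9),
  mgr      INT,
  hiredate DATE,
  sal      DECIMAL(7,2),
  comm     DECIMAL(7,2),
  deptno   INT,
  CONSTRAINT emp_fk FOREIGN KEY (deptno) REFERENCES dept (deptno)
)
PARTITION BY COLUMN (deptno)
REDUNDANCY 1
PERSISTENT 'demostore';

You would then run the edited script from sqlf or ij, for example: run 'db-schema1.sql';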

7. Finally, with the schema in place, we can import the data exported in step 4 as follows.

[Thu Jun 21 08:54:43 papicella@:~/sqlfire/vFabric_SQLFire_1021/pasdemos/ddlutils-demo/ddlutils ] $ ant ImportDataToDB
Buildfile: /Users/papicella/vmware/software/sqlfire/vFabric_SQLFire_1021/pasdemos/ddlutils-demo/ddlutils/build.xml

ImportDataToDB:
[ddlToDatabase] Read schema file /Users/papicella/vmware/software/sqlfire/vFabric_SQLFire_1021/pasdemos/ddlutils-demo/ddlutils/db-schema1.xml
[ddlToDatabase] Written data from file /Users/papicella/vmware/software/sqlfire/vFabric_SQLFire_1021/pasdemos/ddlutils-demo/ddlutils/data.xml to database

BUILD SUCCESSFUL
Total time: 0 seconds

8. The last thing to do is verify that our schema and data are now in SQLFire, as shown below.
  
[Thu Jun 21 08:55:32 papicella@:~/sqlfire/vFabric_SQLFire_1021/pasdemos/ddlutils-demo ] $ sqlf
sqlf version 10.4
sqlf> connect client 'localhost:1527';
sqlf> show tables in app;
TABLE_SCHEM         |TABLE_NAME                    |REMARKS             
------------------------------------------------------------------------
APP                 |DEPT                          |                    
APP                 |EMP                           |                    

2 rows selected
sqlf> select * from dept;
DEPTNO     |DNAME         |LOC          
----------------------------------------
20         |RESEARCH      |DALLAS       
10         |ACCOUNTING    |NEW YORK     
30         |SALES         |CHICAGO      
40         |OPERATIONS    |BOSTON       

4 rows selected 

For more information on DDLUtils with SQLFire see the link below.

http://pubs.vmware.com/vfabric5/index.jsp?topic=/com.vmware.vfabric.sqlfire.1.0/data_management/ddlutils-procedure.html

Friday 8 June 2012

Modelling SQLFire with a Locator VM in vFabric Application Director

I recently set up a SQLFire distributed system by modelling a blueprint in vFabric Application Director. In this simple setup I created a service for a locator as well as one for a SQLFire member. The screen shots below show what this looked like.

New Catalog Items

1. Create the locator service as follows

1.1. Define it as follows


1.2. Set properties as follows


1.3. Add INSTALL script as shown below
  
#!/bin/sh
# Import global conf
. $global_conf

set -e

#To support ubuntu java bins
export PATH=$PATH:/usr/java/jre-vmware/bin

mkdir -p $install_path

# Run the SQLFire installer jar, extracting the product into $install_path
java -Dsqlfire.installer.directory="$install_path" -jar `basename "$SQLFire102_Installer"`

# Move into the extracted SQLFire product directory
cd $install_path/*SQLFire*

# Start the locator in its own working directory
mkdir locator
./bin/sqlf locator start -peer-discovery-address=$locator_ip -peer-discovery-port=$locator_port -client-bind-address=$locator_ip -client-port=$client_port -dir=locator
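
As a quick sanity check after the INSTALL phase, you could verify the locator came up (a sketch, assuming the sqlf status subcommand in your SQLFire version and the working directory above):

./bin/sqlf locator status -dir=locator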

2. Create a SQLFire server service

2.1. Define it as follows




2.2. Set properties as follows


2.3. Add INSTALL script as shown below
  
#!/bin/sh
# Import global conf
. $global_conf

set -e

#To support ubuntu java bins
export PATH=$PATH:/usr/java/jre-vmware/bin

mkdir -p $install_path

# Run the SQLFire installer jar, extracting the product into $install_path
java -Dsqlfire.installer.directory="$install_path" -jar `basename "$SQLFire102_Installer"`

# Move into the extracted SQLFire product directory
cd $install_path/*SQLFire*

# Start a server member in its own working directory, pointing it at the locator
mkdir server1
./bin/sqlf server start -dir=server1 -client-port=$client_port -client-bind-address=$server_ip -locators=$locator_ip[$locator_port]
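
Again, a quick sanity check you could add after the server starts (same assumption about the sqlf status subcommand):

./bin/sqlf server status -dir=server1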

Blueprint using the Catalog Items above

The catalog items are simply added to the templates as needed. The image below shows one locator VM and a cluster of SQLFire server VMs, which lets us create as many SQLFire member nodes as needed.


The screen shot below shows how we determine the locator IP address and the SQLFire server node IP address at runtime, since these need to be injected for us before the lifecycle INSTALL phase starts.