
Wednesday 19 December 2012

Spring Data JPA Repository with vFabric SQLFire

Spring Data JPA is part of the umbrella Spring Data project and makes it easy to implement JPA-based repositories. In the example below we use Hibernate 4.1 as the JPA implementation along with a Spring Data JPA repository to get up and running as quickly as possible. Finally, we use the SQLFire Hibernate dialect, given we are connecting to a SQLFire distributed database system. More information on the SQLFire Hibernate dialect can be found here.

The project I have created in STS is as follows.


Let's walk through how this project is defined. Like most of my other blog entries, it is based on the classic DEPT/EMP RDBMS tables. Before we start, the Maven-based STS project has the following pom.xml defined. This ensures you can build the code without compilation errors, as all required libraries are declared as Maven dependencies. Note that the two SQLFire jars are referenced with system scope, so adjust the systemPath entries to match your own SQLFire install location.

pom.xml
  
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>
  <groupId>spring-hibernate-jpa-sqlfire</groupId>
  <artifactId>spring-hibernate-jpa-sqlfire</artifactId>
  <version>0.0.1-SNAPSHOT</version>
  <name>spring-hibernate-jpa-sqlfire</name>
   <properties>
       <spring.version>3.1.2.RELEASE</spring.version>
   </properties>
  <dependencies>
   <dependency>
    <groupId>org.hibernate</groupId>
    <artifactId>hibernate-entitymanager</artifactId>
    <version>4.1.6.Final</version>
   </dependency>
   <dependency>
    <groupId>org.hibernate.javax.persistence</groupId>
    <artifactId>hibernate-jpa-2.0-api</artifactId>
    <version>1.0.0.Final</version>
   </dependency>   
   <dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-core</artifactId>
    <version>${spring.version}</version>
   </dependency>
        <dependency>
            <groupId>org.springframework</groupId>
            <artifactId>spring-aop</artifactId>
            <version>${spring.version}</version>
        </dependency>
   <dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-tx</artifactId>
    <version>${spring.version}</version>
   </dependency>
   <dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-orm</artifactId>
    <version>${spring.version}</version>
   </dependency>
   <dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-jdbc</artifactId>
    <version>${spring.version}</version>
   </dependency>
   <dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-context</artifactId>
    <version>${spring.version}</version>
   </dependency>
   <dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-test</artifactId>
    <version>${spring.version}</version>
   </dependency>
   <dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-jpa</artifactId>
    <version>2.0.8</version>
   </dependency>
   <dependency>
    <groupId>junit</groupId>
    <artifactId>junit</artifactId>
    <version>4.8.2</version>
   </dependency>
     <dependency>
     <groupId>org.springframework.data</groupId>
     <artifactId>spring-data-jpa</artifactId>
     <version>1.2.0.RELEASE</version>
    </dependency>
    <dependency>
      <groupId>com.vmware</groupId>
      <artifactId>sqlf-client</artifactId>
      <version>1.0.3</version>
      <scope>system</scope>
      <systemPath>/Users/papicella/sqlfire/vFabric_SQLFire_103/lib/sqlfireclient.jar</systemPath>
    </dependency>
    <dependency>
      <groupId>com.vmware</groupId>
      <artifactId>sqlf-dialect</artifactId>
      <version>1.0.3</version>
      <scope>system</scope>
      <systemPath>/Users/papicella/sqlfire/vFabric_SQLFire_103/lib/sqlfHibernateDialect.jar</systemPath>
    </dependency>
  </dependencies>   
    
</project>
1. Create a persistence.xml file in META-INF as shown below.

persistence.xml
  
<?xml version="1.0" encoding="UTF-8"?>
<persistence xmlns="http://java.sun.com/xml/ns/persistence"
             xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
             xsi:schemaLocation="http://java.sun.com/xml/ns/persistence http://java.sun.com/xml/ns/persistence/persistence_1_0.xsd"
             version="1.0">

 <persistence-unit name="application" transaction-type="RESOURCE_LOCAL">
     <mapping-file>META-INF/persistence-query.xml</mapping-file>
        <class>pas.au.spring.hibernate.sqlfire.model.Dept</class>
        <class>pas.au.spring.hibernate.sqlfire.model.Emp</class>
  <exclude-unlisted-classes>true</exclude-unlisted-classes>
 </persistence-unit>

</persistence> 

2. Optionally create named queries in their own XML mapping file. I prefer this to annotations, as it lets you tweak the queries without altering the code.

persistence-query.xml
  
<?xml version="1.0" encoding="UTF-8" ?>
 
<entity-mappings xmlns="http://java.sun.com/xml/ns/persistence/orm"
        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:schemaLocation="http://java.sun.com/xml/ns/persistence/orm http://java.sun.com/xml/ns/persistence/orm_1_0.xsd"
        version="1.0">

    <named-query name="Dept.findByDeptnoNamedQuery">
        <query>
            from Dept
            where deptno = ?1
        </query>
    </named-query>
        
</entity-mappings>

3. Create the Dept and Emp domain model classes as shown below. These are referenced in persistence.xml above.

Dept.java
  
package pas.au.spring.hibernate.sqlfire.model;

import java.util.ArrayList;
import java.util.List;

import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.FetchType;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;
import javax.persistence.OneToMany;

@Entity
public class Dept 
{
 @Id
 @GeneratedValue (strategy=GenerationType.IDENTITY)
 @Column(name="deptno")
 private Long deptno;
 
 @Column(length = 20, unique = true)
 private String dname;
 
 private String loc;

    @OneToMany(mappedBy = "dept", fetch = FetchType.EAGER)
    // initialized so addEmp()/removeEmp() work on a newly created Dept
    private List<Emp> empList = new ArrayList<Emp>();

 public Dept()
 { 
 }
 
 public Dept(Long deptno, String dname, String loc) 
 {
  super();
  this.deptno = deptno;
  this.dname = dname;
  this.loc = loc;
 }

 public Long getDeptno() {
  return deptno;
 }

 public void setDeptno(Long deptno) {
  this.deptno = deptno;
 }

 public String getDname() {
  return dname;
 }

 public void setDname(String dname) {
  this.dname = dname;
 }

 public String getLoc() {
  return loc;
 }

 public void setLoc(String loc) {
  this.loc = loc;
 }

 
    public List<Emp> getEmpList() {
  return empList;
 }

 public void setEmpList(List<Emp> empList) {
  this.empList = empList;
 }

 public Emp addEmp(Emp emp) {
        getEmpList().add(emp);
        emp.setDept(this);
        return emp;
    }

    public Emp removeEmp(Emp emp) {
        getEmpList().remove(emp);
        emp.setDept(null);
        return emp;
    }

 @Override
 public String toString() {
  return "Dept [deptno=" + deptno + ", dname=" + dname + ", loc=" + loc
    + "]";
 }
 
} 
Emp.java
  
package pas.au.spring.hibernate.sqlfire.model;

import java.sql.Timestamp;

import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;
import javax.persistence.JoinColumn;
import javax.persistence.ManyToOne;

@Entity
public class Emp 
{
 @Id
 @GeneratedValue (strategy=GenerationType.IDENTITY)
 @Column(name="empno")
 private Long empno;
 
 @Column(length = 20)
 private String ename;

 private Timestamp hiredate;
    private String job;
    private Long mgr;
    private Double sal;
    @ManyToOne
    @JoinColumn(name = "DEPTNO")
    private Dept dept;
    
    public Emp()
    {
    }

 public Emp(Long empno, String ename, String job, Long mgr, Double sal, Timestamp hiredate, Dept dept) {
  super();
  this.empno = empno;
  this.ename = ename;
  this.job = job;
  this.mgr = mgr;
  this.sal = sal;
  this.hiredate = hiredate;
  this.dept = dept;
 }

 public Long getEmpno() {
  return empno;
 }

 public void setEmpno(Long empno) {
  this.empno = empno;
 }

 public String getEname() {
  return ename;
 }

 public void setEname(String ename) {
  this.ename = ename;
 }

 public String getJob() {
  return job;
 }

 public void setJob(String job) {
  this.job = job;
 }

 public Long getMgr() {
  return mgr;
 }

 public void setMgr(Long mgr) {
  this.mgr = mgr;
 }

 public Double getSal() {
  return sal;
 }

 public void setSal(Double sal) {
  this.sal = sal;
 }

 public Dept getDept() {
  return dept;
 }

 public void setDept(Dept dept) {
  this.dept = dept;
 }

 public Timestamp getHiredate() {
  return hiredate;
 }

 public void setHiredate(Timestamp hiredate) {
  this.hiredate = hiredate;
 }

 @Override
 public String toString() {
  return "Emp [empno=" + empno + ", ename=" + ename + ", hiredate="
    + hiredate + ", job=" + job + ", mgr=" + mgr + ", sal=" + sal
    + ", dept=" + dept + "]";
 }
 
} 
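A note on the addEmp/removeEmp helpers in Dept: with a bidirectional @OneToMany, Emp.dept is the owning side (it maps the DEPTNO foreign key), so both sides of the association must be kept in sync in memory. The pattern can be sketched with plain stand-in classes (illustrative only, not the JPA entities above):

```java
import java.util.ArrayList;
import java.util.List;

// Stand-ins showing why Dept.addEmp()/removeEmp() set BOTH sides of the
// bidirectional association. The Emp side drives the foreign key column,
// the Dept side is just the in-memory collection.
public class BidirectionalSyncDemo {

    static class Dept {
        private final List<Emp> empList = new ArrayList<Emp>();
        List<Emp> getEmpList() { return empList; }
        Emp addEmp(Emp emp) {
            getEmpList().add(emp); // inverse side: in-memory collection
            emp.setDept(this);     // owning side: drives the FK column
            return emp;
        }
        Emp removeEmp(Emp emp) {
            getEmpList().remove(emp);
            emp.setDept(null);
            return emp;
        }
    }

    static class Emp {
        private Dept dept;
        Dept getDept() { return dept; }
        void setDept(Dept dept) { this.dept = dept; }
    }

    public static void main(String[] args) {
        Dept dept = new Dept();
        Emp emp = dept.addEmp(new Emp());
        System.out.println(dept.getEmpList().size()); // 1
        System.out.println(emp.getDept() == dept);    // true
        dept.removeEmp(emp);
        System.out.println(emp.getDept() == null);    // true
    }
}
```

Forgetting the emp.setDept(this) call would leave the foreign key null when the Emp is persisted, even though the Emp appears in the in-memory list.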
4. Create a Spring XML file in the META-INF/spring directory named "applicationContext-persistence.xml".

applicationContext-persistence.xml
  
<?xml version="1.0" encoding="UTF-8"?>

<beans xmlns="http://www.springframework.org/schema/beans"
 xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
 xmlns:tx="http://www.springframework.org/schema/tx"
 xmlns:repository="http://www.springframework.org/schema/data/repository"
 xmlns:jpa="http://www.springframework.org/schema/data/jpa"
 xsi:schemaLocation="http://www.springframework.org/schema/data/repository http://www.springframework.org/schema/data/repository/spring-repository-1.4.xsd
  http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd
  http://www.springframework.org/schema/data/jpa http://www.springframework.org/schema/data/jpa/spring-jpa-1.2.xsd
  http://www.springframework.org/schema/tx http://www.springframework.org/schema/tx/spring-tx.xsd">
 
    <bean id="entityManagerFactory" class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean">
        <property name="jpaDialect">
            <bean class="org.springframework.orm.jpa.vendor.HibernateJpaDialect" />
        </property>
        <property name="jpaVendorAdapter">
            <bean class="org.springframework.orm.jpa.vendor.HibernateJpaVendorAdapter" />
        </property>
        <property name="persistenceUnitName" value="application" />
        <property name="persistenceUnitManager">
            <bean class="org.springframework.orm.jpa.persistenceunit.DefaultPersistenceUnitManager" />
        </property>
        <property name="jpaProperties">
            <props>
                <prop key="hibernate.dialect">com.vmware.sqlfire.hibernate.SQLFireDialect</prop>
                <prop key="hibernate.show_sql">true</prop>
    <prop key="hibernate.format_sql">true</prop>
                <prop key="hibernate.connection.driver_class">com.vmware.sqlfire.jdbc.ClientDriver</prop>
                <prop key="hibernate.connection.url">jdbc:sqlfire://localhost:1527</prop>
                <prop key="hibernate.connection.username">APP</prop>
                <prop key="hibernate.connection.password">APP</prop>
            </props>
        </property>
        <property name="packagesToScan" value="pas.au.spring.hibernate.sqlfire.repository" />
    </bean>
 
    <bean id="transactionManager" class="org.springframework.orm.jpa.JpaTransactionManager">
        <property name="entityManagerFactory" ref="entityManagerFactory" />
    </bean>
 
    <tx:annotation-driven/>

 
 <!-- CRUD JPA Repositories Here loaded by default  -->
 <jpa:repositories base-package="pas.au.spring.hibernate.sqlfire.repository" />
 
</beans>

5. Create a JPA CRUD repository as shown below. It will automatically be picked up by the Spring Data JPA repository component scanner, as configured in "applicationContext-persistence.xml" above.

JPADeptRepository.java
  
package pas.au.spring.hibernate.sqlfire.repository;

import java.util.List;

import org.springframework.data.jpa.repository.Query;
import org.springframework.data.repository.CrudRepository;

import pas.au.spring.hibernate.sqlfire.model.Dept;

/*
 * Define a repository with all CRUD methods written for you
 */
public interface JPADeptRepository extends CrudRepository<Dept, Long>
{
 /*
  * Method which will invoke the JPQL required for a find by the attribute name
  */
 public Dept findByDeptno (Long deptno);
 
 /* 
  * Shows how to access a named query using a method name to identify it.
  * The method name must match the defined query name.
  */
 public Dept findByDeptnoNamedQuery (Long deptno);

 /*
  * Method that allows us to supply a JPQL query to execute for the method
  */
 @Query("SELECT d FROM Dept d WHERE d.dname LIKE '%'")
    public List<Dept> findDeptsWithCustomJPQLQuery();

 /*
  * Method that allows us to supply a JPQL query to execute for the method with a parameter
  */
 @Query("SELECT d FROM Dept d WHERE d.loc = ?1")
    public Dept findDeptWithCustomJPQLQueryWithParam(String loc);
 
 @Query("SELECT d FROM Dept d WHERE d.deptno = ?1")
 public Dept findEmpsInDeptno (Long deptno);
 
 @Query(value = "SELECT * FROM DEPT where deptno = ?1", nativeQuery = true)
 public Dept findDeptByNativeQuery(Long deptno);
 
 @Query(value = "SELECT d, e FROM Dept d, Emp e where e.dept = d and d.deptno = ?1")
 public List<Object[]> findWithJoin(Long deptno);

} 
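For methods like findByDeptno, Spring Data derives the JPQL from the method name itself: the findBy prefix is stripped and the remainder is matched against an entity attribute. The convention can be sketched in plain Java (a toy illustration only; Spring Data's real parser also handles And/Or, Like, ordering and much more, and the class name here is mine):

```java
// Toy illustration of the Spring Data finder-method naming convention.
// This is NOT Spring Data's actual parser, just a sketch of how a
// derived finder name maps to a JPQL equality query.
public class DerivedQuerySketch {

    static String deriveJpql(String entity, String methodName) {
        String prefix = "findBy";
        if (!methodName.startsWith(prefix)) {
            throw new IllegalArgumentException("not a derived finder: " + methodName);
        }
        // strip the prefix and lower-case the first letter -> attribute name
        String property = methodName.substring(prefix.length());
        property = Character.toLowerCase(property.charAt(0)) + property.substring(1);
        return "select e from " + entity + " e where e." + property + " = ?1";
    }

    public static void main(String[] args) {
        System.out.println(deriveJpql("Dept", "findByDeptno"));
        // -> select e from Dept e where e.deptno = ?1
    }
}
```

Calling deriveJpql("Dept", "findByDeptno") yields select e from Dept e where e.deptno = ?1, which is effectively the query the repository executes for that method.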

6. Finally, create a test class to verify the repository. It's assumed you have SQLFire running and that the DEPT/EMP tables already exist. You can use this previous blog entry to get that set up.

http://theblasfrompas.blogspot.com.au/2011/12/sqlfire-in-few-minutes.html

JPAReposDeptTest.java
  
package pas.au.spring.hibernate.sqlfire.test.repository;

import static org.junit.Assert.*;
import java.util.List;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;

import pas.au.spring.hibernate.sqlfire.repository.JPADeptRepository;
import pas.au.spring.hibernate.sqlfire.model.Dept;
import pas.au.spring.hibernate.sqlfire.model.Emp;

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration("classpath:META-INF/spring/applicationContext-persistence.xml")
public class JPAReposDeptTest 
{
 @Autowired
 JPADeptRepository deptRespository;
 
 @Test
 public void invokeFindAll() 
 {
  List<Dept> deps = (List<Dept>) deptRespository.findAll();
  
  System.out.println("deps size = " + deps.size());
  assertNotNull(deps);
 }

 @Test
 public void invokeFindOne() 
 {
  Dept dept = deptRespository.findOne(new Long(20));
  
  System.out.println(dept);
  assertNotNull(dept);
  assertEquals(new Long(20), dept.getDeptno());
 }
 
 @Test
 public void invokeFindByDeptno() 
 {
  Dept dept = deptRespository.findByDeptno(new Long(10));
  
  System.out.println(dept);
  assertNotNull(dept);
  assertEquals(new Long(10), dept.getDeptno());
 }

 @Test
 public void invokeNamedQuery() 
 {
  Dept dept = deptRespository.findByDeptnoNamedQuery(new Long(30));
  
  System.out.println(dept);
  assertNotNull(dept);
  assertEquals(new Long(30), dept.getDeptno());
 }

 @Test
 public void invokefindDeptsWithCustomJPQLQuery() 
 {
  List<Dept> deps = (List<Dept>) deptRespository.findDeptsWithCustomJPQLQuery();
  
  System.out.println("deps size = " + deps.size());
  assertNotNull(deps);
 }

 @Test
 public void invokefindDeptsWithCustomJPQLQueryWithParam() 
 {
  Dept dept = deptRespository.findDeptWithCustomJPQLQueryWithParam("CHICAGO");
  
  System.out.println(dept);
  assertNotNull(dept);
  assertEquals(new Long(30), dept.getDeptno());
 }

 @Test
 public void invokefindEmpInDeptno() 
 {
  Dept dept = deptRespository.findEmpsInDeptno(new Long(20));
  
  System.out.println(dept);
  assertNotNull(dept);
  
  List<Emp> emps = dept.getEmpList();
  for (Emp e: emps)
  {
   System.out.println(e);
  }
  
  assertEquals(5, emps.size());
 }
 
 @Test
 public void invokeFindDeptByNativeQuery()
 {
  Dept dept = deptRespository.findDeptByNativeQuery(new Long(40));
  
  System.out.println(dept);
  assertNotNull(dept);
  assertEquals(new Long(40), dept.getDeptno());  
 }
 
 @Test
 public void invokeFindWithJoin()
 {
  List<Object[]> results = deptRespository.findWithJoin(new Long(10));
  
  System.out.println("Size of results = " + results.size());
  assertEquals(3, results.size());
  
  for (Object[] result: results)
  {
   System.out.println("Dept: " + result[0] + ", Emp: " + result[1]);
  }
  
 }
}

7. Run the test class above.

Verify output as shown below.



Dec 19, 2012 9:11:41 PM org.hibernate.engine.transaction.internal.TransactionFactoryInitiator initiateService
INFO: HHH000268: Transaction strategy: org.hibernate.engine.transaction.internal.jdbc.JdbcTransactionFactory
Dec 19, 2012 9:11:41 PM org.hibernate.hql.internal.ast.ASTQueryTranslatorFactory
INFO: HHH000397: Using ASTQueryTranslatorFactory
Hibernate:
    select
        dept0_.deptno as deptno0_,
        dept0_.dname as dname0_,
        dept0_.loc as loc0_
    from
        Dept dept0_
Hibernate:
    select
        emplist0_.DEPTNO as DEPTNO0_1_,
        emplist0_.empno as empno1_,
        emplist0_.empno as empno1_0_,
        emplist0_.DEPTNO as DEPTNO1_0_,
        emplist0_.ename as ename1_0_,
        emplist0_.hiredate as hiredate1_0_,
        emplist0_.job as job1_0_,
        emplist0_.mgr as mgr1_0_,
        emplist0_.sal as sal1_0_
    from
        Emp emplist0_
    where
        emplist0_.DEPTNO=?

.......
Size of results = 3
Dept: Dept [deptno=10, dname=ACCOUNTING, loc=NEW YORK], Emp: Emp [empno=7934, ename=MILLER, hiredate=1982-01-23 00:00:00.0, job=CLERK, mgr=7782, sal=1300.0, dept=Dept [deptno=10, dname=ACCOUNTING, loc=NEW YORK]]
Dept: Dept [deptno=10, dname=ACCOUNTING, loc=NEW YORK], Emp: Emp [empno=7782, ename=CLARK, hiredate=1981-06-09 00:00:00.0, job=MANAGER, mgr=7839, sal=2450.0, dept=Dept [deptno=10, dname=ACCOUNTING, loc=NEW YORK]]
Dept: Dept [deptno=10, dname=ACCOUNTING, loc=NEW YORK], Emp: Emp [empno=7839, ename=KING, hiredate=1981-11-17 00:00:00.0, job=PRESIDENT, mgr=null, sal=5000.0, dept=Dept [deptno=10, dname=ACCOUNTING, loc=NEW YORK]]


More information on Spring Data JPA and the SQLFire Hibernate dialect can be found at the links below.

Spring Data JPA
http://www.springsource.org/spring-data/jpa

vFabric SQLFire Hibernate Dialect
http://communities.vmware.com/docs/DOC-20294


Friday 7 December 2012

GemFire 7.0 - Gemcached client adapter for memcached

As part of GemFire 7.0, Gemcached is a vFabric GemFire adapter that allows memcached clients to communicate with a GemFire server cluster as if the servers were memcached servers. Memcached is an open-source caching solution that uses a distributed, in-memory hash map to store key-value pairs of string or object data.

In the example below we are using the following:

GemFire 7.0
spymemcached Java API

1. Start a GemFire 7.0 cluster as shown below. We are omitting the properties files required here, but this shows what is needed to ensure memcached clients can connect to GemFire. A simple Unix script follows.

gfsh <<!
start locator --name=locator1 --properties-file=gemfire.properties --bind-address=localhost --port=10334 --dir=locator;
start server --name=server1 --memcached-port=11211 --memcached-protocol=BINARY --properties-file=gemfire.properties --locators=localhost[10334] --dir=server1
start server --name=server2 --memcached-port=11212 --memcached-protocol=BINARY --properties-file=gemfire.properties --locators=localhost[10334] --dir=server2
list members;
list regions;
exit;
!


2. Verify the memcached ports are established and listening.

[Fri Dec 07 13:10:48 papicella@:~/gf7/gemcache ] $ ./memcache-status.sh
tcp4       0      0  10.117.85.71.11212     *.*                    LISTEN    
tcp4       0      0  10.117.85.71.11211     *.*                    LISTEN    
udp4       0      0  *.62112                *.*   
                           

3. Create a Java client as follows, which connects to the Gemcached servers in our cluster.
  
package vmware.au.gemfire.demos.gemcached;

import net.spy.memcached.AddrUtil;
import net.spy.memcached.BinaryConnectionFactory;
import net.spy.memcached.MemcachedClient;

public class GemcacheClient 
{

 private MemcachedClient c = null;
 
 public GemcacheClient() throws Exception
 {
  // one address per server started above (--memcached-port 11211 and 11212)
  c = new MemcachedClient(new BinaryConnectionFactory(),
          AddrUtil.getAddresses("10.117.85.71:11211 10.117.85.71:11212"));  
 }
 
 public void run()
 {

  for (int i = 1; i <= 20; i++)
  {
   c.set(String.valueOf(i), 0, "Customer" + i);
  }
  
  System.out.println("added 20 entries \n");
  
  String value =  (String) c.get("5");
  
  System.out.println("Value with key [5] = " + value);
  
  c.shutdown();  
 }
 
 /**
  * @param args
  * @throws Exception 
  */
 public static void main(String[] args) throws Exception 
 {
  GemcacheClient test = new GemcacheClient();
  test.run();
  System.out.println("all done..");
 }

}

4. Run it and verify output as shown below. Ensure you alter the connection addresses to match your environment.

2012-12-07 13:25:05.577 INFO net.spy.memcached.MemcachedConnection:  Added {QA sa=/10.117.85.71:11212, #Rops=0, #Wops=0, #iq=0, topRop=null, topWop=null, toWrite=0, interested=0} to connect queue
2012-12-07 13:25:05.578 INFO net.spy.memcached.MemcachedConnection:  Added {QA sa=/10.117.85.71:11212, #Rops=0, #Wops=0, #iq=0, topRop=null, topWop=null, toWrite=0, interested=0} to connect queue
2012-12-07 13:25:05.582 INFO net.spy.memcached.MemcachedConnection:  Connection state changed for sun.nio.ch.SelectionKeyImpl@52cc95d
2012-12-07 13:25:05.584 INFO net.spy.memcached.MemcachedConnection:  Connection state changed for sun.nio.ch.SelectionKeyImpl@4393722c
added 20 entries

Value with key [5] = Customer5
2012-12-07 13:25:05.942 INFO net.spy.memcached.MemcachedConnection:  Shut down memcached client
all done..


5. Now in gfsh, verify we have a region called "gemcached" and that it now contains data.

  
[Fri Dec 07 13:40:27 papicella@:~/gf7/gemcache ] $ gfsh
WARNING: JDK/lib/tools.jar is required for some GFSH commands. Please set JAVA_HOME to point to the JDK directory. Currently using JRE.
    _________________________     __
   / _____/ ______/ ______/ /____/ /
  / /  __/ /___  /_____  / _____  / 
 / /__/ / ____/  _____/ / /    / /  
/______/_/      /______/_/    /_/    v7.0

Monitor and Manage GemFire
gfsh>connect --locator=localhost[10334];
Connecting to Locator at [host=localhost, port=10334] ..
Connecting to Manager at [host=Pas-Apicellas-MacBook-Pro.local, port=1099] ..
Successfully connected to: [host=Pas-Apicellas-MacBook-Pro.local, port=1099]

gfsh>list regions;
List of regions
---------------
gemcached
test

gfsh>query --query="select count(*) from /gemcached";

Result     : true
startCount : 0
endCount   : 20
Rows       : 1

Result
------
20

NEXT_STEP_NAME : END 

More information can be found in the documentation below.

http://pubs.vmware.com/vfabricNoSuite/index.jsp?topic=/com.vmware.vfabric.gemfire.7.0/tools_modules/gemcached/deploying_gemcached.html

Thursday 29 November 2012

GemFire 7.0 Parallel WAN and Simplified WAN Configuration

With the introduction of GemFire 7.0, WAN replication setup has been simplified. In fact, the simplest setup is as shown below.

1. A cache server started with a cache.xml as follows, which defines a single gateway sender and a gateway receiver
  
<?xml version="1.0"?>
<!DOCTYPE cache PUBLIC
    "-//GemStone Systems, Inc.//GemFire Declarative Caching 7.0//EN"
    "http://www.gemstone.com/dtd/cache7_0.dtd">

<cache>
    <gateway-sender id="sender1" parallel="true" remote-distributed-system-id="2" dispatcher-threads="2" order-policy="partition"/>
    <gateway-receiver start-port="1530" end-port="1551"/>
    <cache-server port="40001"/>
    <region name="wantest">
        <region-attributes refid="PARTITION" gateway-sender-ids="sender1"/>
    </region>
</cache>

2. Some gfsh commands to verify the gateway senders/receivers
  
Cluster-1 gfsh>list gateways
Gateways


GatewaySender

GatewaySender Id |                       Member                       | Remote Cluster Id |   Type   | Status  | Queued Events | Receiver Location
---------------- | -------------------------------------------------- | ----------------- | -------- | ------- | ------------- | -----------------
sender1          | Pas-Apicellas-MacBook-Pro(server1:22361)<v1>:10766 | 2                 | Parallel | Running | 0             | null
sender1          | Pas-Apicellas-MacBook-Pro(server2:22362)<v2>:25822 | 2                 | Parallel | Running | 0             | null


GatewayReceiver

                      Member                       | Port | Sender Count | Sender's Connected
-------------------------------------------------- | ---- | ------------ | ------------------------------------------------------
Pas-Apicellas-MacBook-Pro(server1:22361)<v1>:10766 | 1541 | 2            | ["Pas-Apicellas-MacBook-Pro(server1:22385)<v1>:55223"]
Pas-Apicellas-MacBook-Pro(server2:22362)<v2>:25822 | 1537 | 2            | ["Pas-Apicellas-MacBook-Pro(server2:22386)<v2>:11991"]

Cluster-1 gfsh>status gateway-sender --id=sender1

                      Member                       |   Type   | Status
-------------------------------------------------- | -------- | -------
Pas-Apicellas-MacBook-Pro(server1:22361)<v1>:10766 | Parallel | Running
Pas-Apicellas-MacBook-Pro(server2:22362)<v2>:25822 | Parallel | Running 

More information on this can be found at the following link:

http://pubs.vmware.com/vfabricNoSuite/index.jsp?topic=/com.vmware.vfabric.gemfire.7.0/topologies_and_comm/multi_site_configuration/setting_up_a_multisite_system.html

Thursday 22 November 2012

Spring Data repository for GemFire 7

The example below shows the configuration and code required to use a Spring Data repository with GemFire 7. It is assumed we have a GemFire distributed system up and running with some department and employee data. For this example we connect as a client with PROXY regions, so all requests go to the server side for processing.

client.xml - client cache file for access to the remote servers
  
<!DOCTYPE client-cache PUBLIC 
"-//GemStone Systems, Inc.//GemFire Declarative Caching 7//EN" 
"http://www.gemstone.com/dtd/cache7_0.dtd">
<client-cache>	
	<!-- No cache storage in the client region because of the PROXY client region shortcut setting. -->

    <region name="departments">
      <region-attributes refid="PROXY" pool-name="gfPool"/>
    </region>   
		
    <region name="employees">
		<region-attributes refid="PROXY" pool-name="gfPool" />
    </region>
</client-cache>

application-context.xml - Spring application context file using Spring GemFire. It is here that we connect to GemFire through a locator to load-balance between the servers.
  
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
	xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
	xmlns:gfe="http://www.springframework.org/schema/gemfire"
	xmlns:gfe-data="http://www.springframework.org/schema/data/gemfire"
	xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.1.xsd
		http://www.springframework.org/schema/gemfire http://www.springframework.org/schema/gemfire/spring-gemfire-1.2.xsd
		http://www.springframework.org/schema/data/gemfire http://www.springframework.org/schema/data/gemfire/spring-data-gemfire-1.2.xsd">
		
	<gfe:client-cache id="client-cache" cache-xml-location="classpath:client.xml" pool-name="gfPool"/>
	
	<gfe:pool id="gfPool" max-connections="10">
		<gfe:locator host="localhost" port="10334"/>
	</gfe:pool>
	
	<gfe:lookup-region id="departments" name="departments" cache-ref="client-cache"/>
   	<gfe:lookup-region id="employees" name="employees" cache-ref="client-cache"/>

	<gfe-data:repositories base-package="vmware.au.se.gf7.deptemp.repos"  />
   
</beans>

Department.java - domain object with the Spring Data GemFire @Region annotation referencing the "departments" region in GemFire
  
package vmware.au.se.gf7.deptemp.beans;

import java.io.Serializable;
import java.util.Properties;

import org.springframework.data.annotation.Id;
import org.springframework.data.gemfire.mapping.Region;

import com.gemstone.gemfire.cache.Declarable;

@Region("departments")
public class Department implements Declarable, Serializable
{

	private static final long serialVersionUID = -9097335119586059309L;

	@Id
	private int deptno;
	private String name;
	
	public void init(Properties props) 
	{
		this.deptno = Integer.parseInt(props.getProperty("deptno"));
		this.name = props.getProperty("name");
	}

	public int getDeptno() {
		return deptno;
	}

	public void setDeptno(int deptno) {
		this.deptno = deptno;
	}

	public String getName() {
		return name;
	}

	public void setName(String name) {
		this.name = name;
	}

	@Override
	public String toString() {
		return "Department [deptno=" + deptno + ", name=" + name + "]";
	}
  
}

Employee.java - domain object with the Spring Data GemFire @Region annotation referencing the "employees" region in GemFire
  
package vmware.au.se.gf7.deptemp.beans;

import java.io.Serializable;
import java.util.Properties;

import org.springframework.data.annotation.Id;
import org.springframework.data.gemfire.mapping.Region;

import com.gemstone.gemfire.cache.Declarable;

@Region("employees")
public class Employee implements Declarable, Serializable
{

	private static final long serialVersionUID = -8229531542107983344L;
	
	@Id
	private int empno;
	private String name;
	private String job;
	private int deptno;

	public void init(Properties props) 
	{
		this.empno = Integer.parseInt(props.getProperty("empno"));
		this.name = props.getProperty("name");
		this.job = props.getProperty("job");
		this.deptno = Integer.parseInt(props.getProperty("deptno"));
	}

	public int getEmpno() {
		return empno;
	}

	public void setEmpno(int empno) {
		this.empno = empno;
	}

	public String getName() {
		return name;
	}

	public void setName(String name) {
		this.name = name;
	}

	public String getJob() {
		return job;
	}

	public void setJob(String job) {
		this.job = job;
	}

	public int getDeptno() {
		return deptno;
	}

	public void setDeptno(int deptno) {
		this.deptno = deptno;
	}

	@Override
	public String toString() {
		return "Employee [empno=" + empno + ", name=" + name + ", job=" + job
				+ ", deptno=" + deptno + "]";
	}

} 

At this point we create two repositories as shown below.

DeptRepository.java
  
package vmware.au.se.gf7.deptemp.repos;

import java.util.Collection;

import org.springframework.data.gemfire.repository.Query;
import org.springframework.data.repository.CrudRepository;
import vmware.au.se.gf7.deptemp.beans.Department;

public interface DeptRepository extends CrudRepository<Department, String> 
{
	Department findByName(String name);
	
	@Query("SELECT * FROM /departments")
	Collection<Department> myFindAll();
} 

EmpRepository.java
  
package vmware.au.se.gf7.deptemp.repos;

import java.util.Collection;

import org.springframework.data.gemfire.repository.Query;
import org.springframework.data.repository.CrudRepository;

import vmware.au.se.gf7.deptemp.beans.Employee;

public interface EmpRepository extends CrudRepository<Employee, String>
{
	@Query("SELECT * FROM /employees where deptno = $1")
	Collection<Employee> empsInDeptno(int deptno);
	
	@Query("SELECT * FROM /employees")
	Collection<Employee> myFindAll();
}

Finally we have a Test client as follows.
  
package spring.gemfire.repository.test;

import java.util.Collection;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.context.support.ClassPathXmlApplicationContext;
import vmware.au.se.gf7.deptemp.beans.Department;
import vmware.au.se.gf7.deptemp.beans.Employee;
import vmware.au.se.gf7.deptemp.repos.DeptRepository;
import vmware.au.se.gf7.deptemp.repos.EmpRepository;

public class Test 
{

	private ConfigurableApplicationContext ctx = null;
	
	public Test()
	{
		ctx = new ClassPathXmlApplicationContext("application-context.xml");		
	}
	
	public void run()
	{
		
		DeptRepository deptRepos = ctx.getBean(DeptRepository.class);
        
		// get quick size
		System.out.println("** Size of dept repository **");
		System.out.println("Size = " + deptRepos.count());
		
		// call findOne crud method by key
		System.out.println("** calling  deptRepos.findOne(\"20\") **");
		Department dept = deptRepos.findOne("20");
		System.out.println(dept);

		// call findOne crud method for an attribute
		System.out.println("** calling  deptRepos.findByName(\"ACCOUNTING\") **");
		Department dept2 = deptRepos.findByName("ACCOUNTING");
		System.out.println(dept2);
		
		// call my own findAll
		Collection<Department> deps = deptRepos.myFindAll();
		
		System.out.println("\n** All Departments using -> deptRepos.myFindAll()");
		System.out.println("Defined as : @Query(\"SELECT * FROM /departments\") ");
		System.out.println("Collection<Department> myFindAll(); ** ");
		
		for (Department d: deps)
		{
			System.out.println(d.toString());
		}
		
		EmpRepository empRepos = ctx.getBean(EmpRepository.class);

		// get quick size
		System.out.println("** Size of emp repository **");
		System.out.println("Size = " + empRepos.count());
		
		Collection<Employee> emps = empRepos.empsInDeptno(40);
		System.out.println("\n ** All Employees in dept 40 using -> Collection<Employee> empsInDeptno(int deptno) **");
		for (Employee e: emps)
		{
			System.out.println(e.toString());
		}
		
	}
	
	/**
	 * @param args
	 */
	public static void main(String[] args) 
	{
		Test t = new Test();
		t.run();
		System.out.println("all done..");
		
	}

}


Output is as follows

[info 2012/11/22 16:39:03.626 EST tid=0xf] AutoConnectionSource discovered new locators [192-168-1-3.tpgi.com.au/192.168.1.3:10334]

[config 2012/11/22 16:39:03.628 EST tid=0x10] Updating membership port.  Port changed from 0 to 59,207.
** Size of dept repository **
Size = 4
** calling  deptRepos.findOne("20") **
Department [deptno=20, name=RESEARCH]
** calling  deptRepos.findByName("ACCOUNTING") **
Department [deptno=10, name=ACCOUNTING]

** All Departments using -> deptRepos.myFindAll()
Defined as : @Query("SELECT * FROM /departments")
Collection myFindAll(); **
Department [deptno=30, name=SALES]
Department [deptno=40, name=OPERATIONS]
Department [deptno=10, name=ACCOUNTING]
Department [deptno=20, name=RESEARCH]
** Size of emp repository **
Size = 13

 ** All Employees in dept 40 using -> Collection empsInDeptno(int deptno) **
Employee [empno=7380, name=BLACK, job=CLERK, deptno=40]
Employee [empno=7381, name=BROWN, job=SALESMAN, deptno=40]
Employee [empno=7373, name=SIENA, job=CLERK, deptno=40]
all done..


For more information see the link below.

http://static.springsource.org/spring-gemfire/docs/current/reference/html/gemfire-repositories.html

Tuesday 20 November 2012

Management of GemFire 7 using gfsh command line interface

The GemFire command line interface or 'gfsh' (pronounced "gee - fish") provides a single interface that allows you to launch, manage and monitor vFabric GemFire processes, data and applications.

The GemFire 7.0 version of gfsh supports all of the commands that existed in the previous version of gfsh (released with GemFire 6.6) and also now includes functionality that was previously found in the gemfire and cacheserver scripts.
With gfsh, you can:

  • Start and stop GemFire processes, such as locators and cache servers
  • Start and stop Gateway Senders and Gateway Receiver processes
  • Deploy applications
  • Create and destroy regions
  • Execute functions
  • Manage disk stores
  • Import and export data
  • Monitor GemFire processes
  • Launch GemFire monitoring tools 
Here is a quick example of how it works.

1. Set up your environment to use GemFire 7.0 as shown below

export GEMFIRE=/Users/papicella/gemfire/vFabric_GemFire_70_b38623

export GF_JAVA=$JAVA_HOME/bin/java

export PATH=$PATH:$JAVA_HOME/bin:$GEMFIRE/bin

export CLASSPATH=$GEMFIRE/lib/gemfire.jar:$GEMFIRE/lib/antlr.jar:\
$GEMFIRE/lib/gfsh-dependencies.jar:$GEMFIRE/lib/gfSecurityImpl.jar:\
$GEMFIRE/lib/jackson-core-asl-1.9.9.jar:\
$GEMFIRE/lib/commons-logging.jar:\
$GEMFIRE/lib/tomcat-embed-core.jar:\
$GEMFIRE/lib/tomcat-embed-logging-juli.jar:\
$GEMFIRE/lib/tomcat-embed-jasper.jar:\
$CLASSPATH

echo ""
echo "CLASSPATH as follows..."
echo ""

echo $CLASSPATH


2. Start gfsh as shown below.
  
[Tue Nov 20 11:02:51 papicella@:~/vmware/ant-demos/gemfire/70/Demo ] $ gfsh
WARNING: JDK/lib/tools.jar is required for some GFSH commands. Please set JAVA_HOME to point to the JDK directory. Currently using JRE.
    _________________________     __
   / _____/ ______/ ______/ /____/ /
  / /  __/ /___  /_____  / _____  / 
 / /__/ / ____/  _____/ / /    / /  
/______/_/      /______/_/    /_/    v7.0

Monitor and Manage GemFire
gfsh>

3. Start a locator as shown below

start locator --name=locator1 --properties-file=gemfire.properties --bind-address=localhost --dir=/Users/papicella/vmware/ant-demos/gemfire/70/Demo/locator
  
gfsh>start locator --name=locator1 --properties-file=gemfire.properties --bind-address=localhost --dir=/Users/papicella/vmware/ant-demos/gemfire/70/Demo/locator
Starting a Locator in /Users/papicella/vmware/ant-demos/gemfire/70/Demo/locator on localhost[10334] as locator1...
.....................................
Locator in /Users/papicella/vmware/ant-demos/gemfire/70/Demo/locator on 192-168-1-3.tpgi.com.au[10334] as locator1 is currently online.
Process ID: 4686
Uptime: 19 seconds
GemFire Version: 7.0
Java Version: 1.6.0_35
Log File: /Users/papicella/vmware/ant-demos/gemfire/70/Demo/locator/locator1.log
JVM Arguments: -DgemfirePropertyFile=gemfire.properties -Dgemfire.launcher.registerSignalHandlers=true -Dsun.rmi.dgc.server.gcInterval=9223372036854775806
Class-Path: /Users/papicella/gemfire/vFabric_GemFire_70_b38623/lib/gemfire.jar:/Users/papicella/gemfire/vFabric_GemFire_70_b38623/lib/antlr.jar:/Users/papicella/gemfire/vFabric_GemFire_70_b38623/lib/gfsh-dependencies.jar:/Users/papicella/gemfire/vFabric_GemFire_70_b38623/lib/tomcat-embed-core.jar:/Users/papicella/gemfire/vFabric_GemFire_70_b38623/lib/tomcat-embed-logging-juli.jar:/Users/papicella/gemfire/vFabric_GemFire_70_b38623/lib/tomcat-embed-jasper.jar:/Users/papicella/gemfire/vFabric_GemFire_70_b38623/lib/ecj-3.7.2.jar:/Users/papicella/gemfire/vFabric_GemFire_70_b38623/lib/spring-asm-3.1.1.RELEASE.jar:/Users/papicella/gemfire/vFabric_GemFire_70_b38623/lib/gemfire.jar:/Users/papicella/gemfire/vFabric_GemFire_70_b38623/lib/antlr.jar:/Users/papicella/gemfire/vFabric_GemFire_70_b38623/lib/gfsh-dependencies.jar:/Users/papicella/gemfire/vFabric_GemFire_70_b38623/lib/gfSecurityImpl.jar:/Users/papicella/gemfire/vFabric_GemFire_70_b38623/lib/jackson-core-asl-1.9.9.jar:/Users/papicella/gemfire/vFabric_GemFire_70_b38623/lib/commons-logging.jar:/Users/papicella/gemfire/vFabric_GemFire_70_b38623/lib/tomcat-embed-core.jar:/Users/papicella/gemfire/vFabric_GemFire_70_b38623/lib/tomcat-embed-logging-juli.jar:/Users/papicella/gemfire/vFabric_GemFire_70_b38623/lib/tomcat-embed-jasper.jar::

Successfully connected to: [host=192-168-1-3.tpgi.com.au, port=1099]

4. Start 2 peers as shown below

start server --name=peer1 --cache-xml-file=Pulse.xml --properties-file=gemfire.properties --locators=localhost[10334] --dir=/Users/papicella/vmware/ant-demos/gemfire/70/Demo/peer1

start server --name=peer2 --cache-xml-file=Pulse.xml --properties-file=gemfire.properties --locators=localhost[10334] --dir=/Users/papicella/vmware/ant-demos/gemfire/70/Demo/peer2


  
gfsh>start server --name=peer1 --cache-xml-file=Pulse.xml --properties-file=gemfire.properties --locators=localhost[10334] --dir=/Users/papicella/vmware/ant-demos/gemfire/70/Demo/peer1
Starting a Cache server in /Users/papicella/vmware/ant-demos/gemfire/70/Demo/peer1 on 192-168-1-3.tpgi.com.au[40404] as peer1...
.....
Server in /Users/papicella/vmware/ant-demos/gemfire/70/Demo/peer1 on 192-168-1-3.tpgi.com.au[40001] as peer1 is currently online.
Process ID: 4687
Uptime: 2 seconds
GemFire Version: 7.0
Java Version: 1.6.0_35
Log File: /Users/papicella/vmware/ant-demos/gemfire/70/Demo/peer1/peer1.log
JVM Arguments: -Dgemfire.default.locators=192.168.1.3[10334] -DgemfirePropertyFile=gemfire.properties -Dgemfire.cache-xml-file=Pulse.xml -Dgemfire.locators=localhost[10334] -Dgemfire.launcher.registerSignalHandlers=true -Dsun.rmi.dgc.server.gcInterval=9223372036854775806
Class-Path: /Users/papicella/gemfire/vFabric_GemFire_70_b38623/lib/gemfire.jar:/Users/papicella/gemfire/vFabric_GemFire_70_b38623/lib/antlr.jar:/Users/papicella/gemfire/vFabric_GemFire_70_b38623/lib/gfsh-dependencies.jar:/Users/papicella/gemfire/vFabric_GemFire_70_b38623/lib/tomcat-embed-core.jar:/Users/papicella/gemfire/vFabric_GemFire_70_b38623/lib/tomcat-embed-logging-juli.jar:/Users/papicella/gemfire/vFabric_GemFire_70_b38623/lib/tomcat-embed-jasper.jar:/Users/papicella/gemfire/vFabric_GemFire_70_b38623/lib/ecj-3.7.2.jar:/Users/papicella/gemfire/vFabric_GemFire_70_b38623/lib/spring-asm-3.1.1.RELEASE.jar:/Users/papicella/gemfire/vFabric_GemFire_70_b38623/lib/gemfire.jar:/Users/papicella/gemfire/vFabric_GemFire_70_b38623/lib/antlr.jar:/Users/papicella/gemfire/vFabric_GemFire_70_b38623/lib/gfsh-dependencies.jar:/Users/papicella/gemfire/vFabric_GemFire_70_b38623/lib/gfSecurityImpl.jar:/Users/papicella/gemfire/vFabric_GemFire_70_b38623/lib/jackson-core-asl-1.9.9.jar:/Users/papicella/gemfire/vFabric_GemFire_70_b38623/lib/commons-logging.jar:/Users/papicella/gemfire/vFabric_GemFire_70_b38623/lib/tomcat-embed-core.jar:/Users/papicella/gemfire/vFabric_GemFire_70_b38623/lib/tomcat-embed-logging-juli.jar:/Users/papicella/gemfire/vFabric_GemFire_70_b38623/lib/tomcat-embed-jasper.jar::

gfsh>start server --name=peer2 --cache-xml-file=Pulse.xml --properties-file=gemfire.properties --locators=localhost[10334] --dir=/Users/papicella/vmware/ant-demos/gemfire/70/Demo/peer2
Starting a Cache server in /Users/papicella/vmware/ant-demos/gemfire/70/Demo/peer2 on 192-168-1-3.tpgi.com.au[40404] as peer2...
.....
Server in /Users/papicella/vmware/ant-demos/gemfire/70/Demo/peer2 on 192-168-1-3.tpgi.com.au[40002] as peer2 is currently online.
Process ID: 4688
Uptime: 2 seconds
GemFire Version: 7.0
Java Version: 1.6.0_35
Log File: /Users/papicella/vmware/ant-demos/gemfire/70/Demo/peer2/peer2.log
JVM Arguments: -Dgemfire.default.locators=192.168.1.3[10334] -DgemfirePropertyFile=gemfire.properties -Dgemfire.cache-xml-file=Pulse.xml -Dgemfire.locators=localhost[10334] -Dgemfire.launcher.registerSignalHandlers=true -Dsun.rmi.dgc.server.gcInterval=9223372036854775806
Class-Path: /Users/papicella/gemfire/vFabric_GemFire_70_b38623/lib/gemfire.jar:/Users/papicella/gemfire/vFabric_GemFire_70_b38623/lib/antlr.jar:/Users/papicella/gemfire/vFabric_GemFire_70_b38623/lib/gfsh-dependencies.jar:/Users/papicella/gemfire/vFabric_GemFire_70_b38623/lib/tomcat-embed-core.jar:/Users/papicella/gemfire/vFabric_GemFire_70_b38623/lib/tomcat-embed-logging-juli.jar:/Users/papicella/gemfire/vFabric_GemFire_70_b38623/lib/tomcat-embed-jasper.jar:/Users/papicella/gemfire/vFabric_GemFire_70_b38623/lib/ecj-3.7.2.jar:/Users/papicella/gemfire/vFabric_GemFire_70_b38623/lib/spring-asm-3.1.1.RELEASE.jar:/Users/papicella/gemfire/vFabric_GemFire_70_b38623/lib/gemfire.jar:/Users/papicella/gemfire/vFabric_GemFire_70_b38623/lib/antlr.jar:/Users/papicella/gemfire/vFabric_GemFire_70_b38623/lib/gfsh-dependencies.jar:/Users/papicella/gemfire/vFabric_GemFire_70_b38623/lib/gfSecurityImpl.jar:/Users/papicella/gemfire/vFabric_GemFire_70_b38623/lib/jackson-core-asl-1.9.9.jar:/Users/papicella/gemfire/vFabric_GemFire_70_b38623/lib/commons-logging.jar:/Users/papicella/gemfire/vFabric_GemFire_70_b38623/lib/tomcat-embed-core.jar:/Users/papicella/gemfire/vFabric_GemFire_70_b38623/lib/tomcat-embed-logging-juli.jar:/Users/papicella/gemfire/vFabric_GemFire_70_b38623/lib/tomcat-embed-jasper.jar::

5. Start Pulse to monitor the cluster from an HTTP browser

start pulse
  
gfsh>start pulse;
Running GemFire Pulse Pulse URL : http://192.168.1.3:8083/pulse/

6. An HTTP browser should be launched, from which you can connect to the embedded Pulse server and monitor the cluster. The username / password for this is admin/admin. Once logged in it should look as follows


7. Now let's stop the system as shown below.

  
gfsh>stop server --name=peer2;
Stopping Cache Server running in /Users/papicella/vmware/ant-demos/gemfire/70/Demo/peer2 on 192-168-1-3.tpgi.com.au[40002] as peer2...
Process ID: 4688
Log File: /Users/papicella/vmware/ant-demos/gemfire/70/Demo/peer2/peer2.log
....
gfsh>stop server --name=peer1;
Stopping Cache Server running in /Users/papicella/vmware/ant-demos/gemfire/70/Demo/peer1 on 192-168-1-3.tpgi.com.au[40001] as peer1...
Process ID: 4687
Log File: /Users/papicella/vmware/ant-demos/gemfire/70/Demo/peer1/peer1.log
....
gfsh>stop locator --name=locator1;
Stopping Locator running in /Users/papicella/vmware/ant-demos/gemfire/70/Demo/locator on 192-168-1-3.tpgi.com.au[10334] as locator1...
Process ID: 4686
Log File: /Users/papicella/vmware/ant-demos/gemfire/70/Demo/locator/locator1.log
...
No longer connected to 192-168-1-3.tpgi.com.au[1099].

For more information about the GemFire 7.0 gfsh command line options / commands, just use the tab key in the shell. One example that gives an overview of metrics is as follows

  
gfsh>show metrics;

Cluster-wide Metrics

 Type   |        Metric         | Value
------- | --------------------- | -----
cluster | totalHeapSize         | 246
cache   | totalRegionEntryCount | 0
        | totalRegionCount      | 2
        | totalMissCount        | 0
        | totalHitCount         | 22454
        | diskReadsRate         | 0
        | diskWritesRate        | 0
        | flushTimeAvgLatency   | 0
        | totalBackupInProgress | 0
query   | activeCQCount         | 0
        | queryRequestRate      | 0

More information on gfsh can be found in the GemFire 7.0 documentation link below.

http://pubs.vmware.com/vfabricNoSuite/index.jsp?topic=/com.vmware.vfabric.gemfire.7.0/tools_modules/gfsh/chapter_overview.html

Friday 12 October 2012

Installing BCI agent with APM 1.1 with tc Server 2.7.x

If you try to install the Java BCI agent into tc Server 2.7 as follows, you will get an error as shown below.

vfabric@tcserver2:~$ java -jar java-agent.jar --ip 192.168.110.126 --path /home/vfabric/vfabric-tc-server-standard-2.7.1.RELEASE/booking/ -n
Failed to detect the server type located at /home/vfabric/vfabric-tc-server-standard-2.7.1.RELEASE/booking/

In order to install the BCI agent on tc Server 2.7 you need to install it manually, without the "-n" option, as shown below.

Ensure you select "ssl" for the transport protocol, and select option 11 for tc Server 2.6 given 2.7 doesn't appear in the list at this stage.

vfabric@tcserver2:~$ java -jar java-agent.jar --ip 192.168.110.126 --path /home/vfabric/vfabric-tc-server-standard-2.7.1.RELEASE/booking/
Please enter Insight user name [Default:agent]: agent
Please enter Insight password [Default:insight]: insight
1: ssl
2: nio
3: nio+ssl
Please select a transport protocol [Default:ssl]: 1
Please enter the full path for the keystore file [Default:/home/vfabric/vfabric-tc-server-standard-2.7.1.RELEASE/booking/agent.ks]:
Please enter the full path for the truststore file [Default:/home/vfabric/vfabric-tc-server-standard-2.7.1.RELEASE/booking/agent.ts]:
Please enter the keystore password [Default:0ac06ca35033444fb49e8862519ee61d]:
1: Tomcat 6
2: Tomcat 6 (Executable)
3: Tomcat 7
4: Tomcat 7 (Executable)
5: JBoss 5.0.1.GA
6: JBoss 5.1.0.GA
7: JBoss 6.0.0.Final
8: JBoss 6.1.0.Final
9: tc Server 2.1
10: tc Server 2.5
11: tc Server 2.6
Please select server: 11
===============================================================
Installing Insight agent on tc Server 2.6
===============================================================
Insight agent installed successfully on tc Server 2.6

Thursday 27 September 2012

Backup command for MAC

Using rsync to back up a directory onto a USB drive, copying files which have changed and deleting files which have been removed.

Note for myself:

rsync -avrzeh --progress --delete /Users/papicella/vmware/ /Volumes/Wizzy\ Wig\ Mac\ Drive/Users/papicella/vmware/

Friday 14 September 2012

SQLFire - Generic ExecuteQuery Using Procedure Parameters with DYNAMIC RESULT SETS

In this previous post I showed how to use procedure parameters with DYNAMIC RESULT SETS.

http://theblasfrompas.blogspot.com.au/2012/09/sqlfire-using-procedure-parameters-with.html

Now I extend that and add a Java class method to my stored procedures that takes an SQL query with bind variables, plus its parameters, and executes arbitrary SQL. It hasn't been tested that well and has little error checking, but it generally works.

Add a method as shown below to handle a generic query with bind variables
public static void executeQuery(String sql, String params, ResultSet[] resultSet1) throws SQLException 
  {
    Connection conn = null;
    PreparedStatement pstmt = null;

    try
    {
      conn = DriverManager.getConnection("jdbc:default:connection");
      pstmt = conn.prepareStatement(sql);
      // assume that parameters are correctly set; usually for most types
      // one can just do a setString()
      String[] args = params.split(",");
      int index = 1;
      for (String arg : args) 
      {
        pstmt.setString(index++, arg.trim());
      }
      
      resultSet1[0] = pstmt.executeQuery();

    }
    catch (SQLException se)
    {
      throw new RuntimeException("Error in stored procedure executeQuery - ", se);
    }   
  }
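
The comma-split binding above is simple enough to test on its own. Here is a standalone sketch of just that parsing step (a hypothetical helper, not part of the procedure class) — note that it cannot handle a parameter value that itself contains a comma:

```java
import java.util.ArrayList;
import java.util.List;

public class ParamSplitDemo {

	// Mirrors the logic executeQuery uses to turn the single
	// comma-separated params string into individual bind values
	static List<String> parseParams(String params) {
		List<String> values = new ArrayList<String>();
		for (String arg : params.split(",")) {
			values.add(arg.trim());
		}
		return values;
	}

	public static void main(String[] args) {
		// as passed to 'select * from emp where (deptno = ? or deptno = ?)'
		System.out.println(parseParams("10, 30"));    // [10, 30]
		// a LIKE pattern survives the split untouched
		System.out.println(parseParams("10,%MILL%")); // [10, %MILL%]
	}
}
```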

Create the SQL for the stored procedure as shown below.
CREATE PROCEDURE executeQuery (IN query VARCHAR(200), IN params VARCHAR(200))
LANGUAGE JAVA 
PARAMETER STYLE JAVA 
READS SQL DATA 
DYNAMIC RESULT SETS 1 
EXTERNAL NAME 'pas.au.vmware.sqlfire.procs.DemoProc.executeQuery';

Test it from sqlf as shown below.
sqlf> run 'test.sql';
sqlf> maximumdisplaywidth 120;
sqlf> call executeQuery('select * from emp where deptno = ?', '10');
EMPNO      |ENAME     |JOB      |MGR        |HIREDATE  |SAL        |COMM       |DEPTNO     
-------------------------------------------------------------------------------------------
7934       |MILLER    |CLERK    |7782       |1982-01-23|1300       |NULL       |10         
7782       |CLARK     |MANAGER  |7839       |1981-06-09|2450       |NULL       |10         
7839       |KING      |PRESIDENT|NULL       |1981-11-17|5000       |NULL       |10         

3 rows selected
sqlf> call executeQuery('select * from emp where (deptno = ? or deptno = ?)', '10,30');
EMPNO      |ENAME     |JOB      |MGR        |HIREDATE  |SAL        |COMM       |DEPTNO     
-------------------------------------------------------------------------------------------
7934       |MILLER    |CLERK    |7782       |1982-01-23|1300       |NULL       |10         
7782       |CLARK     |MANAGER  |7839       |1981-06-09|2450       |NULL       |10         
7839       |KING      |PRESIDENT|NULL       |1981-11-17|5000       |NULL       |10         
7521       |WARD      |SALESMAN |7698       |1981-02-22|1250       |500        |30         
7499       |ALLEN     |SALESMAN |7698       |1981-02-21|1600       |300        |30         
7900       |JAMES     |CLERK    |7698       |1981-12-03|950        |NULL       |30         
7844       |TURNER    |SALESMAN |7698       |1981-09-08|1500       |0          |30         
7654       |MARTIN    |SALESMAN |7698       |1981-09-28|1250       |1400       |30         
7698       |BLAKE     |MANAGER  |7839       |1981-05-01|2850       |NULL       |30         

9 rows selected
sqlf> call executeQuery('select * from emp where deptno = ? and ename like ?', '10,%MILL%');
EMPNO      |ENAME     |JOB      |MGR        |HIREDATE  |SAL        |COMM       |DEPTNO     
-------------------------------------------------------------------------------------------
7934       |MILLER    |CLERK    |7782       |1982-01-23|1300       |NULL       |10         

1 row selected

If you wanted to test this from a Java client you could do it as follows
package clientaccess;

import java.sql.*;
import java.sql.ResultSetMetaData;

public class ExecuteQuery
{
  public ExecuteQuery()
  {
  }


  public static Connection getConnection()
    throws SQLException
  {

    String thinConn = "jdbc:sqlfire://localhost:1527/";
    Connection conn = DriverManager.getConnection(thinConn);
    conn.setAutoCommit(false);
    return conn;
  }

  public void run(String query, String params)
    throws SQLException
  {
    String sql = "call executeQuery(?, ?)";

    Connection conn = null;
    PreparedStatement pstmt = null;
    ResultSet rset = null;
    ResultSetMetaData meta;
    
    try
    {
      conn = getConnection();
      pstmt = conn.prepareCall(sql);
      pstmt.setString(1, query);
      pstmt.setString(2, params);
      pstmt.execute();

      rset = pstmt.getResultSet();

      meta = rset.getMetaData();
      int cols = meta.getColumnCount();

      System.out.println("\n-> Query results for : " + query);
      
      while(rset.next())
      {   
          for (int i=1; i <= cols; i++) 
          {
              System.out.print(rset.getString(i)+"\t");
          }
        
          System.out.print("\n");
      }
      
    }
    finally
    {
      if (rset != null)
      {
        rset.close();
      }

      if (pstmt != null)
      {
        pstmt.close();
      }

      if (conn != null)
      {
        conn.close();
      }
    }

  }

  public static void main(String[] args)
    throws SQLException
  {
    ExecuteQuery test = new ExecuteQuery();
    System.out.println("Started at " + new java.util.Date());
    test.run("select * from emp where deptno = ?", "20");
    test.run("select * from emp where (deptno = ? or deptno = ?)", "20,10");
    test.run("select * from emp where deptno = ? and ename like ?", "10,%MILL%");
    System.out.println("\nFinished at " + new java.util.Date());
  }
}

Output

Started at Fri Sep 14 22:35:40 EST 2012

-> Query results for : select * from emp where deptno = ?
7902    FORD    ANALYST    7566    1981-12-03    3000    null    20   
7876    ADAMS    CLERK    7788    1983-01-12    1100    null    20   
7788    SCOTT    ANALYST    7566    1982-12-09    3000    null    20   
7369    SMITH    CLERK    7902    1980-12-17    800    null    20   
7566    JONES    MANAGER    7839    1981-04-02    2975    null    20   

-> Query results for : select * from emp where (deptno = ? or deptno = ?)
7934    MILLER    CLERK    7782    1982-01-23    1300    null    10   
7782    CLARK    MANAGER    7839    1981-06-09    2450    null    10   
7839    KING    PRESIDENT    null    1981-11-17    5000    null    10   
7902    FORD    ANALYST    7566    1981-12-03    3000    null    20   
7876    ADAMS    CLERK    7788    1983-01-12    1100    null    20   
7788    SCOTT    ANALYST    7566    1982-12-09    3000    null    20   
7369    SMITH    CLERK    7902    1980-12-17    800    null    20   
7566    JONES    MANAGER    7839    1981-04-02    2975    null    20   

-> Query results for : select * from emp where deptno = ? and ename like ?
7934    MILLER    CLERK    7782    1982-01-23    1300    null    10   

Finished at Fri Sep 14 22:35:41 EST 2012

SQLFire - Using Procedure Parameters with DYNAMIC RESULT SETS

In the example below we use DYNAMIC RESULT SETS with stored procedures. When you configure a procedure using the CREATE PROCEDURE statement, SQLFire assembles the method parameters and passes them to the procedure implementation class using Java reflection.
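
To make the reflection point concrete, here is a toy illustration of reflective dispatch (my own sketch, not SQLFire's actual implementation): look up a public static method by name and invoke it with the assembled parameters.

```java
import java.lang.reflect.Method;

public class ReflectiveDispatchDemo {

	// stand-in for a procedure implementation class/method
	public static String greet(String name) {
		return "Hello " + name;
	}

	// resolve a public static method by class and method name, then
	// invoke it with the supplied arguments
	static Object dispatch(String className, String methodName,
			Class<?>[] paramTypes, Object[] args) {
		try {
			Class<?> impl = Class.forName(className);
			Method m = impl.getMethod(methodName, paramTypes);
			return m.invoke(null, args); // null receiver: static method
		} catch (Exception e) {
			throw new RuntimeException("Error dispatching " + methodName, e);
		}
	}

	public static void main(String[] args) {
		Object result = dispatch(ReflectiveDispatchDemo.class.getName(), "greet",
				new Class<?>[] { String.class }, new Object[] { "SQLFire" });
		System.out.println(result); // Hello SQLFire
	}
}
```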

The demo below uses the classic DEPT/EMP schema data. If you want to follow along, set up SQLFire with that data as shown in this previous blog entry.

http://theblasfrompas.blogspot.com/2011/12/sqlfire-in-few-minutes.html

1. Create a Java Stored Procedure class with a single method as follows.
package pas.au.vmware.sqlfire.procs;

import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Connection;
import java.sql.SQLException;
import java.sql.PreparedStatement;

public class DemoProc
{
  public static void queryEmpsinDepartments(String deptno, ResultSet[] resultSet1) throws SQLException
  {
    String sql = "select empno, ename, deptno from emp where deptno = ?";
    
    Connection conn = null;
    PreparedStatement pstmt = null;
    
    try
    {
      conn = DriverManager.getConnection("jdbc:default:connection");
      pstmt = conn.prepareStatement(sql);
      pstmt.setInt(1, Integer.parseInt(deptno));
      resultSet1[0] = pstmt.executeQuery();
      
    }
    catch (SQLException se)
    {
      throw new RuntimeException("Error in stored procedure queryEmpsinDepartments - ", se);
    }
  }
}
2. Add the JAR file to the classpath for your SQLFire distributed system prior to starting the system. Here we just use the CLASSPATH variable but you could also load the JAR file into the system itself.

export CUR_DIR=`pwd`
export CLASSPATH=$CUR_DIR/lib/procs.jar

3. Create a stored procedure as follows.
CREATE PROCEDURE queryEmpsinDepartments (IN deptno VARCHAR(2))
LANGUAGE JAVA 
PARAMETER STYLE JAVA 
READS SQL DATA 
DYNAMIC RESULT SETS 1 
EXTERNAL NAME 'pas.au.vmware.sqlfire.procs.DemoProc.queryEmpsinDepartments';
4. Invoke from a client as shown below.
package clientaccess;

import java.sql.*;

public class TestQueryEmpsinDepartments
{
  public TestQueryEmpsinDepartments()
  {
  }


  public static Connection getConnection() throws SQLException
  {

    String thinConn = "jdbc:sqlfire://localhost:1527/";
    Connection conn = DriverManager.getConnection(thinConn);
    conn.setAutoCommit(false);
    return conn;
  }
  
  public void run () throws SQLException
  {
    String sql = "call queryEmpsinDepartments(?)";
    
    Connection conn = null;
    PreparedStatement pstmt = null;
    ResultSet rset = null;
    
    try
    {
      conn = getConnection();
      pstmt = conn.prepareCall(sql);
      pstmt.setString(1, "10");
      pstmt.execute();
      
      rset = pstmt.getResultSet();
      
      System.out.println("** Employees in department with id 10 **");
      
      while (rset.next())
      {
        System.out.println(String.format("Emp[%s, %s, %s]", rset.getInt(1), rset.getString(2), rset.getInt(3)));
      }
    }
    finally
    {
      if (rset != null)
      {
        rset.close();
      }
      
      if (pstmt != null)
      {
        pstmt.close();
      }
      
      if (conn != null)
      {
        conn.close();
      }
    }
    
  }
  
  public static void main(String[] args) throws SQLException
  {
    TestQueryEmpsinDepartments test = new TestQueryEmpsinDepartments();
    System.out.println("Started at " + new java.util.Date());
    test.run();
    System.out.println("Finished at " + new java.util.Date());
  }
} 

Output

Started at Fri Sep 14 10:59:20 EST 2012
** Employees in department with id 10 **
Emp[7934, MILLER, 10]
Emp[7782, CLARK, 10]
Emp[7839, KING, 10]
Finished at Fri Sep 14 10:59:20 EST 2012

For more information see the link below.

http://pubs.vmware.com/vfabric5/index.jsp?topic=/com.vmware.vfabric.sqlfire.1.0/developers_guide/topics/server-side/dap-impl-parameters.html

Monday 6 August 2012

vFabric Data Director 2.0 - Oracle

We just released the vFabric Data Director 2.0 product with support for Oracle. In fact it supports Oracle 10g as well as 11g and will even set up OEM / dbconsole for you. Here are some screen shots of what the UI gives you with a previously created Oracle 11g instance.

Here is a base VM used to create database templates from. The base VM must contain an operating system, a database engine and the Data Director agent before it can be converted into a database template.


We then convert the base VM into a database template and assign it to a resource pool from which we can provision Oracle databases.



Now we create our Oracle database in a previously created database group, with it managed within Data Director.


Finally we connect to our Oracle database by obtaining the connection details from the UI itself and then using Oracle SQL Developer to make a connection.







Here is what we see in vSphere and how the database is added to our Resource Pool.



More information on vFabric Data Director and its support for Oracle 10g/11g can be found at the link below.

http://www.vmware.com/products/application-platform/vfabric-data-director/overview.html

Friday 6 July 2012

Displaying Total Memory Footprint in SQLFire

The example below shows how you can view table / index memory footprint within your SQLFire distributed system.

1. Start the servers, ensuring you add sqlfire.jar as a Java agent as shown below.

Locator:

sqlf locator start -J-javaagent:/Users/papicella/sqlfire/SQLFire102_35992/lib/sqlfire.jar -peer-discovery-address=localhost -peer-discovery-port=41111 -client-bind-address=localhost -client-port=1527

SQLFire Nodes:

sqlf server start -J-Xms2024m -J-Xmx2024m -J-javaagent:/Users/papicella/sqlfire/SQLFire1021/lib/sqlfire.jar -server-groups=MYGROUP -locators=localhost[41111] -client-bind-address=localhost -client-port=1528 -dir=server1 &
sqlf server start -J-Xms2024m -J-Xmx2024m -J-javaagent:/Users/papicella/sqlfire/SQLFire1021/lib/sqlfire.jar -server-groups=MYGROUP -locators=localhost[41111] -client-bind-address=localhost -client-port=1529 -dir=server2 &

2. Query the system as shown below, adding the sizerHints=withMemoryFootPrint hint to your query.

SQL:

select sqlentity, memory, substr(id, 1, 50) "Id" FROM sys.memoryAnalytics -- SQLFIRE-PROPERTIES sizerHints=withMemoryFootPrint;

  
sqlf> select sqlentity, memory, substr(id, 1, 50) "Id" FROM sys.memoryAnalytics -- SQLFIRE-PROPERTIES sizerHints=withMemoryFootPrint;
SQLENTITY                                                       |MEMORY                                                                                                                          |Id                                                
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
APP.EMP (Entry Size, Value Size, Row Count)                     |208000000,50997494,2000000                                                                                                      |10.117.85.62(9090)<v1>:43540/60442                
APP.EMP (sqlfire,gemfire,others)                                |64426568,61261504,52987760 (178675832 = 170.4 mb)                                                                               |10.117.85.62(9090)<v1>:43540/60442                
APP.EMP (Entry Size, Value Size, Row Count)                     |208000000,50997494,2000000                                                                                                      |10.117.85.62(9091)<v0>:41971/60439                
APP.EMP (sqlfire,gemfire,others)                                |133766368,129748968,307527368 (571042704 = 544.59 mb)                                                                           |10.117.85.62(9091)<v0>:41971/60439

For more information see the link below.

http://pubs.vmware.com/vfabric5/index.jsp?topic=/com.vmware.vfabric.sqlfire.1.0/getting_started/book_intro.html

Thursday 5 July 2012

Difference between a GemFire Client and a SQLFire client

Often I get asked how a client accessing data in a GemFire region differs from one accessing a SQLFire table. Both are elastic data fabric caches, but SQLFire uses a compliant JDBC/ADO.NET interface, whereas GemFire is a key-value store with a proprietary interface.

The best way to demonstrate this is to show what the client code looks like when accessing a GemFire region versus a SQLFire table through a JDBC-compliant driver. The two code samples below use a Dept object and a DEPT row with the same attributes/columns.

GemFire Code
  
package pas.au.vmware.se.dept;

import java.util.Collection;
import java.util.Iterator;
import java.util.logging.Level;
import java.util.logging.Logger;

import com.gemstone.gemfire.cache.client.ClientCache;
import com.gemstone.gemfire.cache.client.ClientCacheFactory;
import com.gemstone.gemfire.cache.query.Query;
import com.gemstone.gemfire.cache.query.QueryService;
import com.gemstone.gemfire.cache.query.SelectResults;

public class ViewDeps
{
    private ClientCache cache = null;
    private Logger logger = Logger.getLogger(this.getClass().getSimpleName());

    public ViewDeps()
    {
        ClientCacheFactory ccf = new ClientCacheFactory();
        ccf.set("cache-xml-file", "config/client.xml");
        cache = ccf.create();
    }

    public void run() throws Exception
    {
        QueryService queryService = cache.getQueryService();
        Query query = queryService.newQuery("SELECT * FROM /dept");
        logger.log(Level.INFO, "\nExecuting query:\n\t" + query.getQueryString());

        Object result = query.execute();

        Collection<?> collection = ((SelectResults<?>) result).asList();
        Iterator<?> iter = collection.iterator();

        while (iter.hasNext())
        {
            Dept dept = (Dept) iter.next();
            System.out.println(dept);
        }

        cache.close();
    }

    public static void main(String[] args) throws Exception
    {
        ViewDeps test = new ViewDeps();
        test.run();
    }
}
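The constructor above points the ClientCacheFactory at config/client.xml, which is not shown in the post. As a rough sketch only (the locator host/port and pool name are assumptions, not taken from the original project), a client cache configuration declaring a PROXY region for /dept might look like:

```xml
<?xml version="1.0"?>
<!DOCTYPE client-cache PUBLIC
  "-//GemStone Systems, Inc.//GemFire Declarative Caching 6.6//EN"
  "http://www.gemstone.com/dtd/cache6_6.dtd">
<client-cache>
  <!-- Pool of connections to the cluster; host/port are illustrative -->
  <pool name="client" subscription-enabled="true">
    <locator host="localhost" port="10334"/>
  </pool>
  <!-- PROXY region: no local storage, all operations go to the servers -->
  <region name="dept">
    <region-attributes refid="PROXY" pool-name="client"/>
  </region>
</client-cache>
```

The "Auto serializer generating type" messages in the output that follows indicate the PDX ReflectionBasedAutoSerializer was also registered for the Dept class, which can be done in this file or in code.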

Output

INFO:
Executing query:
    SELECT * FROM /dept

[info 2012/07/05 21:02:03.460 EST
tid=0x1] Adding: PdxType[

      id=1, name=pas.au.vmware.se.dept.Dept, fields=[
          deptno:int:0
          name:String:1]]

[info 2012/07/05 21:02:03.473 EST
tid=0x1] Auto serializer generating type for class pas.au.vmware.se.dept.Dept for fields:

      deptno: private int pas.au.vmware.se.dept.Dept.deptno
      name: private java.lang.String pas.au.vmware.se.dept.Dept.name
 
Dept [deptno=20, name=RESEARCH]
Dept [deptno=30, name=SALES]
Dept [deptno=10, name=ACCOUNTING]
Dept [deptno=40, name=OPERATION]

SQLFire Code
  
package pas.au.vmware.sqlfire;

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.logging.Level;
import java.util.logging.Logger;

public class TestThinClient
{
    private Logger logger = Logger.getLogger(this.getClass().getSimpleName());
    private String url = "jdbc:sqlfire://localhost:1527/";

    public void run() throws SQLException
    {
        Connection conn = null;
        Statement stmt = null;
        ResultSet rset = null;

        logger.log(Level.INFO, String.format("Connecting to SQLFire with url %s", url));

        try
        {
            conn = DriverManager.getConnection(url);
            logger.log(Level.INFO, conn.toString());
            stmt = conn.createStatement();
            rset = stmt.executeQuery("select * from dept order by 1");
            while (rset.next())
            {
                System.out.println(
                        String.format("Dept[%s, %s, %s]",
                                rset.getInt(1), rset.getString(2), rset.getString(3)));
            }
        }
        catch (SQLException se)
        {
            logger.log(Level.SEVERE, se.getMessage());
        }
        finally
        {
            // Close resources in reverse order of creation
            if (rset != null)
            {
                rset.close();
            }

            if (stmt != null)
            {
                stmt.close();
            }

            if (conn != null)
            {
                conn.close();
            }
        }
    }

    public static void main(String[] args)
    {
        TestThinClient test = new TestThinClient();
        try
        {
            test.run();
        }
        catch (SQLException e)
        {
            e.printStackTrace();
        }
    }
}

Output

Jul 5, 2012 9:03:17 PM pas.au.vmware.sqlfire.TestThinClient run
INFO: Connecting to SQLFire with url jdbc:sqlfire://localhost:1527/
Jul 5, 2012 9:03:17 PM pas.au.vmware.sqlfire.TestThinClient run
INFO: com.vmware.sqlfire.internal.client.net.NetConnection40@2acdb06e
Dept[10, ACCOUNTING, NEW YORK]
Dept[20, RESEARCH, DALLAS]
Dept[30, SALES, CHICAGO]
Dept[40, OPERATIONS, BRISBANE]

Tuesday 3 July 2012

Importing data from Oracle 11g into SQLFire

Note for myself:

A build.xml extract that ensures we only query data from a specific schema, in this case the HR schema. DDLUtils has issues with Oracle unless you clearly specify which schema to use.

  
<target name="writeDDLToXML" description="Dumps the database structure">
  <taskdef name="databaseToDdl"
           classname="org.apache.ddlutils.task.DatabaseToDdlTask">
    <classpath refid="runtime-classpath"/>

  </taskdef>
  <databaseToDdl verbosity="debug" schemapattern="HR">
    <database url="jdbc:oracle:thin:@172.16.101.70:1521/linux11gr2"
              driverClassName="oracle.jdbc.OracleDriver"
              username="hr"
              password="hr" />

    <writeSchemaToFile outputFile="db-schema1.xml" failonerror="false" />

    <!-- Comment line below if the source DB is too big -->
    <writeDataToFile outputFile="data.xml" determineschema="false"/>
  </databaseToDdl>

</target>

Thursday 21 June 2012

Loading existing RDBMS data into SQLFire

A common question I get is how to load data into SQLFire from an existing RDBMS. If you have a simple schema you wish to load into SQLFire, you can use Apache DDLUtils as shown below. In this example we take the classic DEPT/EMP schema from MySQL and place it into SQLFire.

1. Create a build.xml as follows. This example is taken from the $SQLFIRE_HOME/lib/ddlutils/example directory. It has been edited to point to a MySQL database for the source schema and a SQLFire database for the target schema.
  
<?xml version="1.0"?>

<project name="test" default="writeDDLToXML" basedir=".">

<path id="runtime-classpath">
  <fileset dir="./lib">
    <include name="**/*.jar"/>
    <include name="**/*.zip"/>
  </fileset>
</path>

 <!-- This will output all the object in the database 'test' and output the schema definition
   to a file 'db-schema1.xml'. This will also extract the data and output this to 'data.xml'
  -->
<target name="writeDDLToXML" description="Dumps the database structure">
  <taskdef name="databaseToDdl"
           classname="org.apache.ddlutils.task.DatabaseToDdlTask">
    <classpath refid="runtime-classpath"/>

  </taskdef>
  <databaseToDdl verbosity="debug">
  <!--
    <database url="jdbc:sqlfire://localhost:1527"
              driverClassName="com.vmware.sqlfire.jdbc.ClientDriver"
              username="app"
              password="app"/>
  -->

    <database url="jdbc:mysql://localhost/scottdb"
              driverClassName="com.mysql.jdbc.Driver"
              username="scott"
              password="tiger"/>

    <writeSchemaToFile outputFile="db-schema1.xml" failonerror="false" />

    <!-- Comment line below if the source DB is too big -->
    <writeDataToFile outputFile="data.xml"/>
  </databaseToDdl>

</target>


 <!-- This will create the tables, etc in SQLFire.
      If your table names or column names contain embedded spaces, set
      usedelimitedsqlidentifiers to true.
  -->
<target name="createDBFromXML" description="Create the DB tables ..">
  <taskdef classname="org.apache.ddlutils.task.DdlToDatabaseTask"
          name="ddlToDatabase"
          classpathref="runtime-classpath"/>

   <ddlToDatabase usedelimitedsqlidentifiers="false">
     <database driverclassname="com.vmware.sqlfire.jdbc.ClientDriver"
             url="jdbc:sqlfire://localhost:1527"
             username="app"
             password="app"/>
   <fileset dir=".">
     <include name="db-schema1.xml"/>
   </fileset>

   <!--   <createdatabase failonerror="false"/>  -->

   <writeschematodatabase alterdatabase="true"
                          failonerror="false"/>
   </ddlToDatabase>
</target>


 <!-- Extract DDL for a schema and write this out as a 'SQL' script file (db-schema1.sql).
      EDIT THIS FILE TO CHANGE THE TABLES TO BE CUSTOM PARTITIONED, REPLICATED, PERSISTENT, ETC
      Then, execute the SQL script using 'IJ' or your favorite tool (like SQuirrel)
 -->
<target name="writeDDLToSQL" description="Dumps the database structure">
  <taskdef name="databaseToDdl"
           classname="org.apache.ddlutils.task.DatabaseToDdlTask">
    <classpath refid="runtime-classpath"/>

  </taskdef>
  <databaseToDdl verbosity="debug">
    <database url="jdbc:sqlfire://localhost:1527"
              driverClassName="com.vmware.sqlfire.jdbc.ClientDriver"
              username="app"
              password="app"/>

    <writeSchemaSqlToFile outputFile="db-schema1.sql" dodrops="true" failonerror="false" alterdatabase="false"/>
  </databaseToDdl>
</target>


 <!-- Imports data rows into a database.
      If your table names or column names contain embedded spaces, set
      usedelimitedsqlidentifiers to true.
  -->
<target name="ImportDataToDB" description="Import the data ..">
  <taskdef classname="org.apache.ddlutils.task.DdlToDatabaseTask"
          name="ddlToDatabase"
          classpathref="runtime-classpath"/>

   <ddlToDatabase usedelimitedsqlidentifiers="false">
    <database url="jdbc:sqlfire://localhost:1527"
              driverClassName="com.vmware.sqlfire.jdbc.ClientDriver"
              username="app"
              password="app"/>
   <fileset dir=".">
     <include name="db-schema1.xml"/>
   </fileset>

   <writedatatodatabase datafile="data.xml"
                        usebatchmode="true"
                        batchsize="1000"/>
 </ddlToDatabase>
</target>

</project>

2. Ensure you have Apache Ant in your path.
3. Ensure the JDBC drivers for SQLFire and MySQL are placed where Ant will pick them up. In this example Ant includes all JARs from a "lib" subdirectory at runtime; that directory also contains the Apache DDLUtils jar files.

mysql-connector-java-5.1.17-bin.jar
sqlfireclient.jar

Here is a full listing of the JAR files in the "lib" directory I was using.

[Thu Jun 21 14:29:01 papicella@:~/sqlfire/vFabric_SQLFire_1021/pasdemos/ddlutils-demo/ddlutils/lib ] $ d
total 38848
-rw-r--r--   1 papicella  staff    787920  4 Jul  2011 mysql-connector-java-5.1.17-bin.jar
-rw-r--r--   1 papicella  staff    474464 20 Jun 14:32 wstx-asl-3.0.2.jar
-rw-r--r--   1 papicella  staff     26514 20 Jun 14:32 stax-api-1.0.1.jar
-rw-r--r--   1 papicella  staff    352668 20 Jun 14:32 log4j-1.2.8.jar
-rw-r--r--   1 papicella  staff    486522 20 Jun 14:32 dom4j-1.4.jar
-rw-r--r--   1 papicella  staff     42492 20 Jun 14:32 commons-pool-1.2.jar
-rw-r--r--   1 papicella  staff     38015 20 Jun 14:32 commons-logging-1.0.4.jar
-rw-r--r--   1 papicella  staff    207723 20 Jun 14:32 commons-lang-2.1.jar
-rw-r--r--   1 papicella  staff    139966 20 Jun 14:32 commons-digester-1.7.jar
-rw-r--r--   1 papicella  staff    107631 20 Jun 14:32 commons-dbcp-1.2.1.jar
-rw-r--r--   1 papicella  staff    559366 20 Jun 14:32 commons-collections-3.1.jar
-rw-r--r--   1 papicella  staff     46725 20 Jun 14:32 commons-codec-1.3.jar
-rw-r--r--   1 papicella  staff    188671 20 Jun 14:32 commons-beanutils-1.7.0.jar
-rw-r--r--   1 papicella  staff    447398 20 Jun 14:34 DdlUtils-1.0.jar
-rw-r--r--   1 papicella  staff    822779 20 Jun 14:37 sqlfireclient.jar
-rw-r--r--   1 papicella  staff  15125561 20 Jun 14:37 sqlfire.jar
drwxr-xr-x  18 papicella  staff       612 21 Jun 14:08 ./
drwxr-xr-x   7 papicella  staff       238 21 Jun 14:08 ../


** At this point you can begin the process as shown below **

4. Create schema / data file in XML using the ant task as follows

[Thu Jun 21 08:36:10 papicella@:~/sqlfire/vFabric_SQLFire_1021/pasdemos/ddlutils-demo/ddlutils ] $ ant writeDDLToXML
Buildfile: /Users/papicella/vmware/software/sqlfire/vFabric_SQLFire_1021/pasdemos/ddlutils-demo/ddlutils/build.xml

writeDDLToXML:
[databaseToDdl] Borrowed connection org.apache.commons.dbcp.PoolableConnection@bdee400 from data source
[databaseToDdl] Returning connection org.apache.commons.dbcp.PoolableConnection@bdee400 to data source.
[databaseToDdl] Remaining connections: None
[databaseToDdl] Written schema to /Users/papicella/vmware/software/sqlfire/vFabric_SQLFire_1021/pasdemos/ddlutils-demo/ddlutils/db-schema1.xml
[databaseToDdl] Borrowed connection org.apache.commons.dbcp.PoolableConnection@bdee400 from data source
[databaseToDdl] Returning connection org.apache.commons.dbcp.PoolableConnection@bdee400 to data source.
[databaseToDdl] Remaining connections: None
[databaseToDdl] Borrowed connection org.apache.commons.dbcp.PoolableConnection@bdee400 from data source
[databaseToDdl] Returning connection org.apache.commons.dbcp.PoolableConnection@bdee400 to data source.
[databaseToDdl] Remaining connections: None
[databaseToDdl] Written data XML to file/Users/papicella/vmware/software/sqlfire/vFabric_SQLFire_1021/pasdemos/ddlutils-demo/ddlutils/data.xml

BUILD SUCCESSFUL
Total time: 1 second

5. If you're happy to use only replicated tables, you can import the schema definition into
SQLFire as follows. In this scenario every table created is replicated by default.

[Thu Jun 21 08:36:37 papicella@:~/sqlfire/vFabric_SQLFire_1021/pasdemos/ddlutils-demo/ddlutils ] $ ant createDBFromXML
Buildfile: /Users/papicella/vmware/software/sqlfire/vFabric_SQLFire_1021/pasdemos/ddlutils-demo/ddlutils/build.xml

createDBFromXML:
[ddlToDatabase] Read schema file /Users/papicella/vmware/software/sqlfire/vFabric_SQLFire_1021/pasdemos/ddlutils-demo/ddlutils/db-schema1.xml
[ddlToDatabase] Table DEPT needs to be added
[ddlToDatabase] Table EMP needs to be added
[ddlToDatabase] Foreign key Foreign key [name=EMP_FK; foreign table=DEPT; 1 references] needs to be added to table EMP
[ddlToDatabase] Executed 3 SQL command(s) with 0 error(s)
[ddlToDatabase] Written schema to database

BUILD SUCCESSFUL
Total time: 0 seconds

6. This next step will take the XML created in step #4 and write the schema out as a SQL script. This is useful because you can now edit the SQLFire SQL to use partitioned tables, disk stores, etc. Without persistent disk stores the data will disappear once the system goes down.

[Thu Jun 21 08:54:01 papicella@:~/sqlfire/vFabric_SQLFire_1021/pasdemos/ddlutils-demo/ddlutils ] $ ant writeDDLToSQL
Buildfile: /Users/papicella/vmware/software/sqlfire/vFabric_SQLFire_1021/pasdemos/ddlutils-demo/ddlutils/build.xml

writeDDLToSQL:
[databaseToDdl] Borrowed connection org.apache.commons.dbcp.PoolableConnection@2d1e233 from data source
[databaseToDdl] Returning connection org.apache.commons.dbcp.PoolableConnection@2d1e233 to data source.
[databaseToDdl] Remaining connections: None
[databaseToDdl] Written schema SQL to /Users/papicella/vmware/software/sqlfire/vFabric_SQLFire_1021/pasdemos/ddlutils-demo/ddlutils/db-schema1.sql

BUILD SUCCESSFUL
Total time: 1 second
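As an example of the edits you might then make to db-schema1.sql, the plain CREATE TABLE statements can be given SQLFire-specific clauses for replication, partitioning, and persistence. The sketch below is illustrative only: the disk store name, the EMP column list, and the partitioning choices are assumptions, not taken from the generated file.

```sql
-- Create a named disk store so table data survives a restart
CREATE DISKSTORE demostore;

-- DEPT is small and read-mostly: replicate it to every data host
CREATE TABLE dept
(
  deptno INT NOT NULL PRIMARY KEY,
  dname  VARCHAR(14),
  loc    VARCHAR(13)
) REPLICATE PERSISTENT 'demostore';

-- EMP is larger: partition it on the DEPT foreign key, keep one
-- redundant copy of each bucket, and persist it to the same store
CREATE TABLE emp
(
  empno  INT NOT NULL PRIMARY KEY,
  ename  VARCHAR(10),
  deptno INT,
  CONSTRAINT emp_fk FOREIGN KEY (deptno) REFERENCES dept (deptno)
) PARTITION BY COLUMN (deptno)
  REDUNDANCY 1
  PERSISTENT 'demostore';
```

The edited script can then be run with sqlf (or ij) before importing the data in the next step.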

7. Finally, with the schema created, we can simply import the data created in step #4 as follows.

[Thu Jun 21 08:54:43 papicella@:~/sqlfire/vFabric_SQLFire_1021/pasdemos/ddlutils-demo/ddlutils ] $ ant ImportDataToDB
Buildfile: /Users/papicella/vmware/software/sqlfire/vFabric_SQLFire_1021/pasdemos/ddlutils-demo/ddlutils/build.xml

ImportDataToDB:
[ddlToDatabase] Read schema file /Users/papicella/vmware/software/sqlfire/vFabric_SQLFire_1021/pasdemos/ddlutils-demo/ddlutils/db-schema1.xml
[ddlToDatabase] Written data from file /Users/papicella/vmware/software/sqlfire/vFabric_SQLFire_1021/pasdemos/ddlutils-demo/ddlutils/data.xml to database

BUILD SUCCESSFUL
Total time: 0 seconds

8. The last thing left to do is verify we have our schema and data within SQLFire, as shown below.
  
[Thu Jun 21 08:55:32 papicella@:~/sqlfire/vFabric_SQLFire_1021/pasdemos/ddlutils-demo ] $ sqlf
sqlf version 10.4
sqlf> connect client 'localhost:1527';
sqlf> show tables in app;
TABLE_SCHEM         |TABLE_NAME                    |REMARKS             
------------------------------------------------------------------------
APP                 |DEPT                          |                    
APP                 |EMP                           |                    

2 rows selected
sqlf> select * from dept;
DEPTNO     |DNAME         |LOC          
----------------------------------------
20         |RESEARCH      |DALLAS       
10         |ACCOUNTING    |NEW YORK     
30         |SALES         |CHICAGO      
40         |OPERATIONS    |BOSTON       

4 rows selected 

For more information on DDLUtils with SQLFire see the link below.

http://pubs.vmware.com/vfabric5/index.jsp?topic=/com.vmware.vfabric.sqlfire.1.0/data_management/ddlutils-procedure.html