Monday, 22 May 2017

Using HTTPie with Spring Boot Rest Repositories

I recently got introduced to HTTPie as a command-line alternative to curl for testing RESTful API endpoints created using @RestController annotated classes. For more information, see the HTTPie documentation.

Before we test this out, let's create a very basic Spring Boot application with classes/interfaces to verify HTTPie. The following assumes you already have a Spring Boot application created, with Maven dependencies as follows to enable JPA, Rest Repositories, H2 and web support.

Note: We are using Spring Boot 1.5.3 here

<parent>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-starter-parent</artifactId>
  <version>1.5.3.RELEASE</version>
  <relativePath/> <!-- lookup parent from repository -->
 </parent>

 <properties>
  <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
  <project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding>
  <java.version>1.8</java.version>
 </properties>

 <dependencies>
  <dependency>
   <groupId>org.springframework.boot</groupId>
   <artifactId>spring-boot-starter-data-rest</artifactId>
  </dependency>
  <dependency>
   <groupId>org.springframework.boot</groupId>
   <artifactId>spring-boot-starter-data-jpa</artifactId>
  </dependency>
  <dependency>
   <groupId>org.hibernate</groupId>
   <artifactId>hibernate-entitymanager</artifactId>
  </dependency>
  <dependency>
   <groupId>com.h2database</groupId>
   <artifactId>h2</artifactId>
   <scope>runtime</scope>
  </dependency>
 </dependencies>
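
For completeness, the application entry point is the standard Spring Boot main class; a minimal sketch (the class name and package are taken from the startup log shown in step 2):

package pivotal.io.boot.httpie.demo;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@SpringBootApplication
public class HttpieSpringbootApplication
{
    public static void main(String[] args)
    {
        SpringApplication.run(HttpieSpringbootApplication.class, args);
    }
}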

1. Create classes/interfaces as follows

Employee.java
package pivotal.io.boot.httpie.demo;

import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;

@Entity
public class Employee
{
    @Id
    @GeneratedValue (strategy = GenerationType.AUTO)
    private Long id;

    private String firstName;
    private String lastName;
    private String job;

    public Employee()
    {
    }

    public Employee(String firstName, String lastName, String job) {
        this.firstName = firstName;
        this.lastName = lastName;
        this.job = job;
    }

    public Long getId() {
        return id;
    }

    public void setId(Long id) {
        this.id = id;
    }

    public String getFirstName() {
        return firstName;
    }

    public void setFirstName(String firstName) {
        this.firstName = firstName;
    }

    public String getLastName() {
        return lastName;
    }

    public void setLastName(String lastName) {
        this.lastName = lastName;
    }

    public String getJob() {
        return job;
    }

    public void setJob(String job) {
        this.job = job;
    }

    @Override
    public String toString() {
        return "Employee{" +
                "id=" + id +
                ", firstName='" + firstName + '\'' +
                ", lastName='" + lastName + '\'' +
                ", job='" + job + '\'' +
                '}';
    }
}

EmployeeRepository.java
package pivotal.io.boot.httpie.demo;

import org.springframework.data.jpa.repository.JpaRepository;

public interface EmployeeRepository extends JpaRepository<Employee, Long> {
}  
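
JpaRepository already supplies the findAll, findOne, save and delete methods used by the Rest Controller below. If you later need custom lookups, Spring Data JPA can derive queries from method names; a hypothetical extension of the same interface (the finder method is illustrative and not part of this demo):

package pivotal.io.boot.httpie.demo;

import java.util.List;

import org.springframework.data.jpa.repository.JpaRepository;

public interface EmployeeRepository extends JpaRepository<Employee, Long> {
    // Hypothetical derived query: Spring Data JPA generates the
    // implementation from the method name at runtime
    List<Employee> findByLastName(String lastName);
}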

EmployeeRest.java
package pivotal.io.boot.httpie.demo;

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.*;

import java.util.List;

@RestController
@RequestMapping ("/api/employee")
public class EmployeeRest
{
    private static Log logger = LogFactory.getLog(EmployeeRest.class);

    @Autowired
    private EmployeeRepository employeeRepository;

    @GetMapping("/emps")
    public List<Employee> allEmployees()
    {
        return employeeRepository.findAll();
    }

    @GetMapping("/emp/{employeeId}")
    public Employee findEmployee (@PathVariable Long employeeId)
    {
        Employee emp = employeeRepository.findOne(employeeId);

        return emp;
    }

    @PostMapping("/emps")
    public Employee createEmployee(@RequestBody Employee employee)
    {
        return employeeRepository.save(employee);
    }

    @DeleteMapping("/emps/{employeeId}")
    public void deleteEmployee(@PathVariable Long employeeId)
    {
        Employee emp = employeeRepository.findOne(employeeId);
        employeeRepository.delete(emp);
        logger.info("Employee with id " + employeeId + " deleted...");
    }

}
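
The employee rows shown in the output in step 3 were presumably seeded at startup; a minimal sketch of one way to do that with a CommandLineRunner bean (this class is my illustration, not code from the original project):

package pivotal.io.boot.httpie.demo;

import org.springframework.boot.CommandLineRunner;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class EmployeeDataLoader
{
    // Seed the in-memory H2 database with the records shown in the output below
    @Bean
    public CommandLineRunner loadData(EmployeeRepository repository)
    {
        return args -> {
            repository.save(new Employee("pas", "Apicella", "CEO"));
            repository.save(new Employee("lucia", "Apicella", "CIO"));
            repository.save(new Employee("lucas", "Apicella", "MANAGER"));
            repository.save(new Employee("siena", "Apicella", "CLERK"));
        };
    }
}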

2. Run the Spring Boot application, which will listen on localhost:8080


  .   ____          _            __ _ _
 /\\ / ___'_ __ _ _(_)_ __  __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
 \\/  ___)| |_)| | | | | || (_| |  ) ) ) )
  '  |____| .__|_| |_|_| |_\__, | / / / /
 =========|_|==============|___/=/_/_/_/
 :: Spring Boot ::        (v1.5.3.RELEASE)

2017-05-22 13:39:22.910  INFO 8875 --- [           main] p.i.b.h.d.HttpieSpringbootApplication    : Starting HttpieSpringbootApplication on pas-macbook with PID 8875 (/Users/pasapicella/pivotal/DemoProjects/spring-starter/pivotal/httpie-springboot/target/classes started by pasapicella in /Users/pasapicella/pivotal/DemoProjects/spring-starter/pivotal/httpie-springboot)

...

2017-05-22 13:39:25.948  INFO 8875 --- [           main] s.b.c.e.t.TomcatEmbeddedServletContainer : Tomcat started on port(s): 8080 (http)
2017-05-22 13:39:25.952  INFO 8875 --- [           main] p.i.b.h.d.HttpieSpringbootApplication    : Started HttpieSpringbootApplication in 3.282 seconds (JVM running for 3.676)

3. Now we can test the endpoints with HTTPie. Here are some examples with output

** All Employees **

pasapicella@pas-macbook:~$ http http://localhost:8080/api/employee/emps
HTTP/1.1 200
Content-Type: application/json;charset=UTF-8
Date: Mon, 22 May 2017 01:26:43 GMT
Transfer-Encoding: chunked

[
    {
        "firstName": "pas",
        "id": 1,
        "job": "CEO",
        "lastName": "Apicella"
    },
    {
        "firstName": "lucia",
        "id": 2,
        "job": "CIO",
        "lastName": "Apicella"
    },
    {
        "firstName": "lucas",
        "id": 3,
        "job": "MANAGER",
        "lastName": "Apicella"
    },
    {
        "firstName": "siena",
        "id": 4,
        "job": "CLERK",
        "lastName": "Apicella"
    }
]

** Find Employee by {employeeId} **

pasapicella@pas-macbook:~$ http http://localhost:8080/api/employee/emp/1
HTTP/1.1 200
Content-Type: application/json;charset=UTF-8
Date: Mon, 22 May 2017 01:31:32 GMT
Transfer-Encoding: chunked

{
    "firstName": "pas",
    "id": 1,
    "job": "CEO",
    "lastName": "Apicella"
}

** POST new employee **

pasapicella@pas-macbook:~$ http POST http://localhost:8080/api/employee/emps firstName=john lastName=black job=CLERK
HTTP/1.1 200
Content-Type: application/json;charset=UTF-8
Date: Mon, 22 May 2017 02:32:34 GMT
Transfer-Encoding: chunked

{
    "firstName": "john",
    "id": 5,
    "job": "CLERK",
    "lastName": "black"
}

** POST with updated employee object **

pasapicella@pas-macbook:~$ http POST http://localhost:8080/api/employee/emps id:=5 firstName=john lastName=black job=CLEANER
HTTP/1.1 200
Content-Type: application/json;charset=UTF-8
Date: Mon, 22 May 2017 02:36:06 GMT
Transfer-Encoding: chunked

{
    "firstName": "john",
    "id": 5,
    "job": "CLEANER",
    "lastName": "black"
}

** Delete employee with {employeeId} 5 **

pasapicella@pas-macbook:~$ http DELETE http://localhost:8080/api/employee/emps/5
HTTP/1.1 200
Content-Length: 0
Date: Mon, 22 May 2017 02:36:56 GMT

Tuesday, 2 May 2017

Binding a Spring Cloud Task to a Pivotal Cloud Foundry Database Service

I previously blogged about how to create and deploy a Spring Cloud Task to Pivotal Cloud Foundry (PCF) as shown below.

http://theblasfrompas.blogspot.com.au/2017/03/run-spring-cloud-task-from-pivotal.html

Taking that same example, I have used Spring Cloud Connectors to persist the log output to a database table, to avoid looking through log files to view the output. A few things have to change to make this happen, as detailed below.

1. We need to change the manifest.yml to include a MySQL service instance as shown below

applications:
- name: springcloudtask-date
  memory: 750M
  instances: 1
  no-route: true
  health-check-type: none
  path: ./target/springcloudtasktodaysdate-0.0.1-SNAPSHOT.jar
  services:
    - pmysql-test
  env:
    JAVA_OPTS: -Djava.security.egd=file:///dev/urandom

2. Alter the project dependencies to include the Spring Data JPA libraries needed to persist the log output to a table. Spring Cloud Connectors will automatically pick up the bound MySQL instance and connect for us when we push the application to PCF

https://github.com/papicella/SpringCloudTaskTodaysDate

<dependencies>
  <dependency>
   <groupId>org.springframework.cloud</groupId>
   <artifactId>spring-cloud-starter-task</artifactId>
  </dependency>
  <dependency>
   <groupId>org.springframework.boot</groupId>
   <artifactId>spring-boot-starter-data-jpa</artifactId>
  </dependency>
  <dependency>
   <groupId>org.hibernate</groupId>
   <artifactId>hibernate-entitymanager</artifactId>
  </dependency>
  <dependency>
   <groupId>com.h2database</groupId>
   <artifactId>h2</artifactId>
  </dependency>
  <dependency>
   <groupId>mysql</groupId>
   <artifactId>mysql-connector-java</artifactId>
   <scope>runtime</scope>
  </dependency>
 </dependencies>

3. An entity class, a Spring Data JPA repository interface and a JPA TaskConfigurer have been created for persisting the log output, as shown in the code below.

TaskRunOutput.java
package pas.au.pivotal.pa.sct.demo;

import javax.persistence.*;

@Entity
@Table (name = "TASKRUNOUTPUT")
public class TaskRunOutput
{
    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    private Long id;

    private String output;

    public TaskRunOutput()
    {
    }

    public TaskRunOutput(String output) {
        this.output = output;
    }

    public Long getId() {
        return id;
    }

    public void setId(Long id) {
        this.id = id;
    }

    public String getOutput() {
        return output;
    }

    public void setOutput(String output) {
        this.output = output;
    }

    @Override
    public String toString() {
        return "TaskRunOutput{" +
                "id=" + id +
                ", output='" + output + '\'' +
                '}';
    }
}

TaskRunRepository.java
package pas.au.pivotal.pa.sct.demo;

import org.springframework.data.jpa.repository.JpaRepository;

public interface TaskRunRepository extends JpaRepository<TaskRunOutput, Long>
{
}

JpaTaskConfigurer.java
package pas.au.pivotal.pa.sct.demo.configuration;

import java.text.SimpleDateFormat;
import java.util.Date;

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import pas.au.pivotal.pa.sct.demo.TaskRunOutput;
import pas.au.pivotal.pa.sct.demo.TaskRunRepository;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.cloud.task.configuration.DefaultTaskConfigurer;
import org.springframework.cloud.task.listener.annotation.BeforeTask;
import org.springframework.cloud.task.repository.TaskExecution;
import org.springframework.orm.jpa.JpaTransactionManager;
import org.springframework.stereotype.Component;
import org.springframework.transaction.PlatformTransactionManager;

@Component
public class JpaTaskConfigurer extends DefaultTaskConfigurer {
 private static final Log logger = LogFactory.getLog(JpaTaskConfigurer.class);

 @Autowired
 private PlatformTransactionManager transactionManager;

 @Autowired
 private TaskRunRepository taskRunRepository;

 @Override
 public PlatformTransactionManager getTransactionManager() {
  if(this.transactionManager == null) {
   this.transactionManager = new JpaTransactionManager();
  }

  return this.transactionManager;
 }

 @BeforeTask
 public void init(TaskExecution taskExecution)
 {
  String execDate = new SimpleDateFormat().format(new Date());
  taskRunRepository.save(new TaskRunOutput("Executed at " + execDate));
  logger.info("Executed at : " + execDate);
 }
}
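
For completeness, a Spring Cloud Task application also needs task support switched on via @EnableTask. The real main class lives in the GitHub repository above; a minimal sketch of what it might look like (class and bean names are illustrative):

package pas.au.pivotal.pa.sct.demo;

import java.util.Date;

import org.springframework.boot.CommandLineRunner;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.task.configuration.EnableTask;
import org.springframework.context.annotation.Bean;

@SpringBootApplication
@EnableTask
public class SpringCloudTaskTodaysDateApplication
{
    public static void main(String[] args)
    {
        SpringApplication.run(SpringCloudTaskTodaysDateApplication.class, args);
    }

    // The task itself simply prints today's date and exits;
    // the @BeforeTask hook above persists the run output first
    @Bean
    public CommandLineRunner run()
    {
        return args -> System.out.println("Today's date is: " + new Date());
    }
}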

4. Now, as per the previous blog, execute the task and verify it completes without error. The screenshot below shows this in the "Tasks" tab

Note: You would need to push the application to Pivotal Cloud Foundry before you can execute it, which is shown in the original blog entry


5. Now, if you follow the blog entry below, you can deploy a web-based interface for the Pivotal MySQL instance to view the table and its output

http://theblasfrompas.blogspot.com.au/2017/04/accessing-pivotal-mysql-service.html

With Pivotal MySQL*Web installed the output can be viewed as shown below.



Thursday, 27 April 2017

Accessing a Pivotal MySQL service instance within Pivotal Cloud Foundry

Recently at a hackathon we used the Pivotal MySQL service rather than a ClearDB MySQL service. As a result we could not connect to our instance from a third-party tool, as the service instance is locked down. There are various ways to access the MySQL service; to me, the best two options are as follows.

1. Cloud Foundry CLI MySQL Plugin

cf-mysql-plugin makes it easy to connect the mysql command-line client to any MySQL-compatible database used by Cloud Foundry apps. Use it to:

  • inspect databases for debugging purposes
  • manually adjust schema or contents in development environments
  • dump and restore databases

Install it as explained in the link below:

  https://github.com/andreasf/cf-mysql-plugin

** Using It ** 

1. First, ensure you are logged into a Pivotal Cloud Foundry instance; you can verify that as follows

pasapicella@pas-macbook:~$ cf target -o ben.farrelly-org -s hackathon
API endpoint:   https://api.run.pivotal.io
API version:    2.78.0
User:           papicella@pivotal.io
Org:            ben.farrelly-org
Space:          hackathon

2. Verify you have a MySQL instance provisioned

pasapicella@pas-macbook:~$ cf services
Getting services in org ben.farrelly-org / space hackathon as papicella@pivotal.io...
OK

name        service   plan    bound apps                                                     last operation
nab-mysql   p-mysql   100mb   nabhackathon-beacon, nabhackathon-merchant, pivotal-mysqlweb   create succeeded

3. Log in as shown below

pasapicella@pas-macbook:~$ cf mysql nab-mysql

...

Copyright (c) 2000, 2016, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> show databases;
+-----------------------------------------+
| Database                                |
+-----------------------------------------+
| cf_53318c9c_caec_49be_9e33_075fade26183 |
| information_schema                      |
+-----------------------------------------+
2 rows in set (0.30 sec)

mysql> use cf_53318c9c_caec_49be_9e33_075fade26183;
Database changed

mysql> show tables;
+---------------------------------------------------+
| Tables_in_cf_53318c9c_caec_49be_9e33_075fade26183 |
+---------------------------------------------------+
| beacon                                            |
| beacon_product                                    |
| customer                                          |
| customer_registration                             |
| merchant                                          |
| payment                                           |
| payment_product                                   |
| product                                           |
+---------------------------------------------------+
8 rows in set (0.29 sec)

2. Pivotal MySQL*Web

Pivotal MySQL*Web is a browser-based SQL tool, rendered using a Bootstrap UI, for MySQL PCF service instances. It allows you to run SQL commands and view schema objects from a browser-based interface. Its features include:

  • Multiple Command SQL worksheet for DDL and DML
  • Run Explain Plan across SQL Statements
  • View/Run DDL command against Tables/Views/Indexes/Constraints
  • Command History
  • Auto Bind to Pivotal MySQL Services bound to the Application within Pivotal Cloud Foundry 
  • Manage JDBC Connections
  • Load SQL File into SQL Worksheet from Local File System
  • SQL Worksheet with syntax highlighting support
  • HTTP GET request to auto login without a login form
  • Export SQL query results in JSON or CSV formats
  • Generate DDL for schema objects


It runs within Pivotal Cloud Foundry as an application instance, and it auto-binds to the MySQL service for you if you bind the service as part of "cf push", using a manifest.yml which looks as follows:

---
applications:
- name: pivotal-mysqlweb
  memory: 512M
  instances: 1
  host: pivotal-mysqlweb-${random-word}
  path: ./target/PivotalMySQLWeb-0.0.1-SNAPSHOT.jar
  services:
    - pas-mysql

Install it as explained in the link below:

  https://github.com/pivotal-cf/PivotalMySQLWeb


Wednesday, 26 April 2017

Cross-origin resource sharing (CORS) from Spring Boot Rest Controllers

I was involved in a hackathon recently, and after I created a few Spring Boot APIs for the UI team to consume, they ran into errors around cross-origin resource sharing (CORS). For security reasons, browsers prohibit AJAX calls to resources residing outside the current origin.

I have seen this before, and Spring Boot has support that lets you control which resources can be accessed from outside the current origin. It's as simple as the annotation "@CrossOrigin", as shown below. In this example, every endpoint on this Rest Controller supports resource calls from outside the current origin.

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.http.HttpStatus;
import org.springframework.http.MediaType;
import org.springframework.web.bind.annotation.*;

import java.util.List;

@CrossOrigin
@RestController
@RequestMapping(value = "/beacon")
public class BeaconRest
{
    private static Log logger = LogFactory.getLog(BeaconRest.class);

    @Autowired
    private BeaconRepository beaconRepository;

    @RequestMapping(value = "/all",
            method = RequestMethod.GET,
            produces = MediaType.APPLICATION_JSON_VALUE)
    public List<Beacon> allBeacons()
    {
        logger.info("Invoking /beacon/all RESTful method");
        return beaconRepository.findAll();
    }
}

Of course it's much more flexible than that, with the ability to set various options, and you can read more about it here:

https://docs.spring.io/spring/docs/4.2.x/spring-framework-reference/html/cors.html
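
For instance, @CrossOrigin can also be applied per handler method, with attributes such as origins and maxAge; a short sketch against the same Rest Controller (the method name, path and values are illustrative):

    // Hypothetical variant: only this endpoint allows cross-origin calls,
    // restricted to a single origin, with preflight responses cached for an hour
    @CrossOrigin(origins = "http://localhost:9000", maxAge = 3600)
    @RequestMapping(value = "/all/restricted",
            method = RequestMethod.GET,
            produces = MediaType.APPLICATION_JSON_VALUE)
    public List<Beacon> allBeaconsRestricted()
    {
        return beaconRepository.findAll();
    }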

Thursday, 13 April 2017

Spring Boot Application for Pivotal Cloud Cache Service

I previously blogged about the Pivotal Cloud Cache service in Pivotal Cloud Foundry as follows

http://theblasfrompas.blogspot.com.au/2017/04/getting-started-with-pivotal-cloud.html

During that post I promised it would be followed by a Spring Boot application using the PCC service, to show what the code would look like. That demo exists at the GitHub URL below.

https://github.com/papicella/SpringBootPCCDemo

The GitHub URL above shows how you can clone, package and then push this application to PCF against your own PCC service instance, using the "Spring Cloud GemFire Connector".



More Information

Pivotal Cloud Cache Docs
http://docs.pivotal.io/p-cloud-cache/index.html



Monday, 10 April 2017

Getting Started with Pivotal Cloud Cache on Pivotal Cloud Foundry

Recently we announced the new cache service Pivotal Cloud Cache (PCC) for Pivotal Cloud Foundry (PCF). In short, Pivotal Cloud Cache is an opinionated, distributed, highly available, high-speed key/value caching service. PCC can easily be scaled horizontally for capacity and performance.

In this post we will show how you would provision a service, log in to the Pulse UI dashboard, connect using GFSH, and so on. I won't create a Spring Boot application to use the service at this stage, but that will follow in a post soon enough.

Steps

1. First you will need the PCC service installed; if it has been installed, the tile will look like this


2. Now let's view the current plans we have in place as shown below

pasapicella@pas-macbook:~$ cf marketplace -s p-cloudcache
Getting service plan information for service p-cloudcache as papicella@pivotal.io...
OK

service plan   description          free or paid
extra-small    Plan 1 Description   free
extra-large    Plan 5 Description   free

3. Now let's create a service as shown below

pasapicella@pas-macbook:~$ cf create-service p-cloudcache extra-small pas-pcc
Creating service instance pas-pcc in org pivot-papicella / space development as papicella@pivotal.io...
OK

Create in progress. Use 'cf services' or 'cf service pas-pcc' to check operation status.

4. At this point it will asynchronously create the GemFire cluster, which is essentially what PCC is. For more information on GemFire, see the docs link here.

You can check the progress in one of two ways.

1. Using Pivotal Apps manager as shown below


2. Using a command as follows

pasapicella@pas-macbook:~$ cf service pas-pcc

Service instance: pas-pcc
Service: p-cloudcache
Bound apps:
Tags:
Plan: extra-small
Description: Pivotal CloudCache offers the ability to deploy a GemFire cluster as a service in Pivotal Cloud Foundry.
Documentation url: http://docs.pivotal.io/gemfire/index.html
Dashboard: http://gemfire-yyyyy.run.pez.pivotal.io/pulse

Last Operation
Status: create in progress
Message: Instance provisioning in progress
Started: 2017-04-10T01:34:58Z
Updated: 2017-04-10T01:36:59Z

5. Once complete it will look as follows


6. Now in order to log into both GFSH and Pulse we are going to need to create a service key for the service we just created, which we do as shown below.

pasapicella@pas-macbook:~/pivotal/PCF/services/PCC$ cf create-service-key pas-pcc pas-pcc-key
Creating service key pas-pcc-key for service instance pas-pcc as papicella@pivotal.io...
OK

7. Retrieve service keys as shown below

pasapicella@pas-macbook:~$ cf service-key pas-pcc pas-pcc-key
Getting key pas-pcc-key for service instance pas-pcc as papicella@pivotal.io...

{
 "locators": [
  "0.0.0.0[55221]",
  "0.0.0.0[55221]",
  "0.0.0.0[55221]"
 ],
 "urls": {
  "gfsh": "http://gemfire-yyyy.run.pez.pivotal.io/gemfire/v1",
  "pulse": "http://gemfire-yyyy.run.pez.pivotal.io/pulse"
 },
 "users": [
  {
   "password": "password",
   "username": "developer"
  },
  {
   "password": "password",
   "username": "operator"
  }
 ]
}

8. Now let's log into Pulse. The URL is available as part of the output above

Login Page


Pulse Dashboard: you can see from the dashboard page how many locators and cache server members we have as part of this default cluster



9. Now let's log into GFSH. Once again the URL is as per the output above

- First we will need to download Pivotal GemFire so we have the GFSH client. Download the ZIP at the link below and extract it to your file system

  https://network.pivotal.io/products/pivotal-gemfire

- Invoke as follows using the path to the extracted ZIP file

$GEMFIRE_HOME/bin/gfsh

pasapicella@pas-macbook:~/pivotal/software/gemfire/pivotal-gemfire-9.0.3/bin$ ./gfsh
    _________________________     __
   / _____/ ______/ ______/ /____/ /
  / /  __/ /___  /_____  / _____  /
 / /__/ / ____/  _____/ / /    / /
/______/_/      /______/_/    /_/    9.0.3

Monitor and Manage Pivotal GemFire
gfsh>connect --use-http --url=http://gemfire-yyyy.run.pez.pivotal.io/gemfire/v1 --user=operator --password=password
Successfully connected to: GemFire Manager HTTP service @ http://gemfire-yyyy.run.pez.pivotal.io/gemfire/v1

gfsh>

10. Now let's create a region which we will use to store some cache data

$ create region --name=demoregion --type=PARTITION_HEAP_LRU --redundant-copies=1
gfsh>create region --name=demoregion --type=PARTITION_HEAP_LRU --redundant-copies=1
              Member                | Status
----------------------------------- | ---------------------------------------------------------------------
cacheserver-PCF-PEZ-Heritage-RP04-1 | Region "/demoregion" created on "cacheserver-PCF-PEZ-Heritage-RP04-1"
cacheserver-PCF-PEZ-Heritage-RP04-0 | Region "/demoregion" created on "cacheserver-PCF-PEZ-Heritage-RP04-0"
cacheserver-PCF-PEZ-Heritage-RP04-2 | Region "/demoregion" created on "cacheserver-PCF-PEZ-Heritage-RP04-2"
cacheserver-PCF-PEZ-Heritage-RP04-3 | Region "/demoregion" created on "cacheserver-PCF-PEZ-Heritage-RP04-3" 

Note: The region types you can create are described in the Pivotal GemFire docs, but basically in the example above we create a partitioned region in which primary and backup data is distributed among the cache servers. As you can see, we asked for a single redundant copy of each region entry to be placed on a separate cache server for redundancy

http://gemfire.docs.pivotal.io/geode/developing/region_options/region_types.html#region_types

11. If we return to the Pulse Dashboard UI we will see from the "Data Browser" tab we have a region


12. Now let's just add some data: a few entries which are simple String key/value pairs
gfsh>put --region=/demoregion --key=1 --value="value 1"
Result      : true
Key Class   : java.lang.String
Key         : 1
Value Class : java.lang.String
Old Value   : <NULL>


gfsh>put --region=/demoregion --key=2 --value="value 2"
Result      : true
Key Class   : java.lang.String
Key         : 2
Value Class : java.lang.String
Old Value   : <NULL>


gfsh>put --region=/demoregion --key=3 --value="value 3"
Result      : true
Key Class   : java.lang.String
Key         : 3
Value Class : java.lang.String
Old Value   : <NULL>

13. Finally, let's query the data we have in the cache
gfsh>query --query="select * from /demoregion"

Result     : true
startCount : 0
endCount   : 20
Rows       : 3

Result
-------
value 3
value 1
value 2

NEXT_STEP_NAME : END

14. We can return to Pulse and invoke the same query from the "Data Browser" tab as shown below.



Of course, storing data in a cache isn't useful unless we actually have an application on PCF that can use the cache, but that will come in a separate post. Basically, we will bind to this service, connect as a GemFire client using the locators we are given as part of the service key, and then extract the cache data we have just created above by invoking a query.
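
As a rough preview, and assuming the plain GemFire 9 Java client API rather than the Spring Cloud GemFire Connector the follow-up post will use, the client side might look like this sketch (the locator host/port are placeholders for the values in the service key, and security configuration is omitted):

package demo.pcc; // hypothetical package

import org.apache.geode.cache.Region;
import org.apache.geode.cache.client.ClientCache;
import org.apache.geode.cache.client.ClientCacheFactory;
import org.apache.geode.cache.client.ClientRegionShortcut;

public class DemoRegionClient
{
    public static void main(String[] args)
    {
        // Locator host and port come from the "locators" entry of the service key
        ClientCache cache = new ClientCacheFactory()
                .addPoolLocator("locator-host", 55221)
                .create();

        // PROXY means no local state; every operation goes to the cluster
        Region<String, String> region = cache
                .<String, String>createClientRegionFactory(ClientRegionShortcut.PROXY)
                .create("demoregion");

        System.out.println(region.get("1")); // "value 1" from the earlier put

        cache.close();
    }
}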

More Information

Download PCC for PCF
https://network.pivotal.io/products/cloud-cache

Data Sheet for PCC
https://content.pivotal.io/datasheets/pivotal-cloud-cache

Tuesday, 4 April 2017

Pivotal Cloud Foundry Cloud Service Brokers for AWS, Azure and GCP

Pivotal Cloud Foundry (PCF) has cloud service brokers for the major public clouds, including AWS, Azure and GCP. You can download and install those service brokers on premise or off premise, giving you the capability to use cloud services where it makes sense for your cloud-native applications.

https://network.pivotal.io/

The three cloud service brokers are as follows:





In the example below we have a PCF install running on vSphere and it has the AWS service broker tile installed as shown by the Ops Manager UI


Once installed, this PCF instance can then provision AWS services, and you can do that in one of two ways.

1. Using Apps Manager UI as shown below


2. Use the CF CLI tool, invoking "cf marketplace" to list the services and then "cf create-service" to actually create an instance of the service.



Once provisioned within a space in PCF, you can then bind and use the service from applications as you normally would, reading the VCAP_SERVICES environment variable, and essentially access AWS services from your on-premise installation of PCF in the example above.
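
As a simple illustration of that last point, an application can read the raw JSON straight from the environment; a minimal sketch (the class name is mine, and in practice Spring Cloud Connectors or similar libraries parse this for you):

public class VcapServicesPeek
{
    public static void main(String[] args)
    {
        // PCF injects a JSON document describing every bound service instance
        String vcapServices = System.getenv("VCAP_SERVICES");

        if (vcapServices == null) {
            System.out.println("Not running on Cloud Foundry (VCAP_SERVICES not set)");
        } else {
            System.out.println(vcapServices); // includes the AWS service credentials
        }
    }
}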

More Information

GCP service broker:
https://network.pivotal.io/products/gcp-service-broker

AWS service broker:
https://network.pivotal.io/products/pcf-service-broker-for-aws

Azure service broker:
https://network.pivotal.io/products/microsoft-azure-service-broker


Manually running a BOSH errand for Pivotal Cloud Foundry on GCP

Pivotal Ops Manager has various errands it runs for different deployments within a PCF instance. These errands can be switched off manually when installing new tiles or upgrading the platform; in fact, in PCF 1.10 the errands themselves will only run if they need to, making upgrades a lot faster.

Below I am going to show you how you would manually run an errand if you needed to on a PCF instance running on GCP. These instructions also work for PCF running on AWS, Azure or even vSphere, so they're not specific to PCF on GCP.

1. First, log in to your Ops Manager VM itself

pasapicella@pas-macbook:~/pivotal/GCP/install/10/opsmanager$ ./ssh-opsman.sh
Welcome to Ubuntu 14.04.5 LTS (GNU/Linux 4.4.0-66-generic x86_64)

 * Documentation:  https://help.ubuntu.com/

  System information as of Mon Apr  3 23:38:57 UTC 2017

  System load:  0.0                Processes:           141
  Usage of /:   14.7% of 78.71GB   Users logged in:     0
  Memory usage: 68%                IP address for eth0: 0.0.0.0
  Swap usage:   0%

  Graph this data and manage this system at:
    https://landscape.canonical.com/

  Get cloud support with Ubuntu Advantage Cloud Guest:
    http://www.ubuntu.com/business/services/cloud

5 packages can be updated.
0 updates are security updates.

Your Hardware Enablement Stack (HWE) is supported until April 2019.

*** System restart required ***
Last login: Mon Apr  3 23:38:59 2017 from 110.175.56.52
ubuntu@om-pcf-110:~$

2. Target the BOSH director, which would look like this

ubuntu@om-pcf-110:~$ bosh --ca-cert /var/tempest/workspaces/default/root_ca_certificate target 10.0.0.10
Target set to 'p-bosh'

Note: You may be asked to log in if you have not yet logged in to the BOSH director; you can determine the login details from the Ops Manager UI as follows

- Log into Ops Manager UI
- Click on the tile for the "Ops Manager Director", which would be specific to your IaaS provider; in the example below that is GCP


- Click on the credentials tab


3. Target the correct deployment. In the example below I am targeting the Elastic Runtime deployment.

ubuntu@om-pcf-110:~$ bosh deployment /var/tempest/workspaces/default/deployments/cf-c099637fab39369d6ba0.yml
Deployment set to '/var/tempest/workspaces/default/deployments/cf-c099637fab39369d6ba0.yml'

Note: You can list out the deployment names using "bosh deployments"

4. List out the errands as shown below using "bosh errands"

ubuntu@om-pcf-110:~$ bosh errands
RSA 1024 bit CA certificates are loaded due to old openssl compatibility

+-----------------------------+
| Name                        |
+-----------------------------+
| smoke-tests                 |
| push-apps-manager           |
| notifications               |
| notifications-ui            |
| push-pivotal-account        |
| autoscaling                 |
| autoscaling-register-broker |
| nfsbrokerpush               |
| bootstrap                   |
| mysql-rejoin-unsafe         |
+-----------------------------+

5. Now in this example we are going to run the errand "push-apps-manager", which we do as shown below

$ bosh run errand push-apps-manager

** Output **

ubuntu@om-pcf-110:~$ bosh run errand push-apps-manager
Acting as user 'director' on deployment 'cf-c099637fab39369d6ba0' on 'p-bosh'
RSA 1024 bit CA certificates are loaded due to old openssl compatibility

Director task 621
  Started preparing deployment > Preparing deployment

  Started preparing package compilation > Finding packages to compile. Done (00:00:01)

     Done preparing deployment > Preparing deployment (00:00:05)

  Started creating missing vms > push-apps-manager/32218933-7511-4c0d-b512-731ca69c4254 (0)

...

+ '[' '!' -z 'Invitations deploy log: ' ']'
+ printf '** Invitations deploy log:  \n'
+ printf '*************************************************************************************************\n'
+ cat /var/vcap/packages/invitations/invitations.log

Errand 'push-apps-manager' completed successfully (exit code 0)
ubuntu@om-pcf-110:~$


Wednesday, 22 March 2017

Visual Studio Code editor support for Cloud Foundry Manifest files

An early BETA version of Cloud Foundry (CF) manifest file support is available in Visual Studio Code. To see a video on this support, which shows how to install the extension, use code completion and a bit more, follow the link below.

  https://www.youtube.com/watch?v=Ao6Mx6Q0XKE

With this extension for manifest files, it becomes a pleasure to write and modify those CF manifest files. You get content-assist, validations, and hover help - even for dynamic content like buildpacks and services (it integrates with the CF CLI for that).

Some screenshots of this follow.






Tuesday, 21 March 2017

dotnet publish - ASP.NET Core app deployed to Pivotal Cloud Foundry

I previously showed how to push an ASP.NET Core application to Pivotal Cloud Foundry by just using the source code files themselves. It turns out this creates a rather large droplet and hence slows down the deployment. So here we are going to take the same demo and use "dotnet publish" to make this a lot faster. The previous post below is the base for this blog entry.

ASP.NET Core app deployed to Pivotal Cloud Foundry
http://theblasfrompas.blogspot.com.au/2017/03/aspnet-core-app-deployed-to-pivotal.html

First we need to make some changes to our project

1. Open "dotnet-core-mvc.csproj" and add "RuntimeIdentifiers" inside the "PropertyGroup" tag

<PropertyGroup>
    <TargetFramework>netcoreapp1.0</TargetFramework>
    <RuntimeIdentifiers>osx.10.10-x64;osx.10.11-x64;ubuntu.14.04-x64;ubuntu.15.04-x64;debian.8-x64</RuntimeIdentifiers>
</PropertyGroup>



2. Perform a "dotnet restore" as shown below, either from a terminal window/prompt or from Visual Studio Code itself; this step is vital and is required

pasapicella@pas-macbook:~/pivotal/software/dotnet/dotnet-core-mvc$ dotnet restore
....

3. Now let's publish this as Release and ensure we target the correct runtime. For Cloud Foundry (CF) that will be "ubuntu.14.04-x64", and the framework version is 1.0 as we created the application using 1.0; we could have used 1.1 here if we wanted to.

pasapicella@pas-macbook:~/pivotal/software/dotnet/dotnet-core-mvc$ dotnet publish --output ./publish --configuration Release --runtime ubuntu.14.04-x64  --framework netcoreapp1.0
Microsoft (R) Build Engine version 15.1.548.43366
Copyright (C) Microsoft Corporation. All rights reserved.

  dotnet-core-mvc -> /Users/pasapicella/pivotal/software/dotnet/dotnet-core-mvc/bin/Release/netcoreapp1.0/ubuntu.14.04-x64/dotnet-core-mvc.dll

4. Finally, cd into the "publish" folder and verify it contains the required DLLs as well as project files and JSON files: everything ready to run your application.

pasapicella@pas-macbook:~/pivotal/software/dotnet/dotnet-core-mvc/publish$ ls -lartF
total 116848
-rwxr--r--    1 pasapicella  staff    25992 Jun 11  2016 Microsoft.Win32.Primitives.dll*

..

-rwxr--r--    1 pasapicella  staff      168 Mar 16 22:33 appsettings.Development.json*
drwxr-xr-x    7 pasapicella  staff      238 Mar 21 08:01 wwwroot/
-rwxr--r--    1 pasapicella  staff     1332 Mar 21 08:01 dotnet-core-mvc.pdb*
-rwxr--r--    1 pasapicella  staff     8704 Mar 21 08:01 dotnet-core-mvc.dll*
drwxr-xr-x    6 pasapicella  staff      204 Mar 21 08:01 Views/
drwxr-xr-x   16 pasapicella  staff      544 Mar 21 08:01 ../
-rwxr--r--    1 pasapicella  staff      362 Mar 21 08:01 web.config*
drwxr-xr-x   79 pasapicella  staff     2686 Mar 21 08:01 refs/
-rwxr--r--    1 pasapicella  staff       92 Mar 21 08:01 dotnet-core-mvc.runtimeconfig.json*
-rwxr--r--    1 pasapicella  staff   297972 Mar 21 08:01 dotnet-core-mvc.deps.json*
drwxr-xr-x  212 pasapicella  staff     7208 Mar 21 08:01 ./

5. Now this time let's "cf push" using the files in the "publish" folder

pasapicella@pas-macbook:~/pivotal/software/dotnet/dotnet-core-mvc/publish$ cf push pas-dotnetcore-mvc-demo -b https://github.com/cloudfoundry/dotnet-core-buildpack -m 512m
Creating app pas-dotnetcore-mvc-demo in org apples-pivotal-org / space development as papicella@pivotal.io...
OK

Using route pas-dotnetcore-mvc-demo.cfapps.io
Binding pas-dotnetcore-mvc-demo.cfapps.io to pas-dotnetcore-mvc-demo...
OK

Uploading pas-dotnetcore-mvc-demo...
Uploading app files from: /Users/pasapicella/pivotal/software/dotnet/dotnet-core-mvc/publish
Uploading 14.8M, 280 files
Done uploading
OK

Starting app pas-dotnetcore-mvc-demo in org apples-pivotal-org / space development as papicella@pivotal.io...
Creating container
Successfully created container
Downloading app package...
Downloaded app package (23.7M)
-----> Buildpack version 1.0.13
ASP.NET Core buildpack version: 1.0.13
ASP.NET Core buildpack starting compile
-----> Restoring files from buildpack cache
       OK
-----> Restoring NuGet packages cache
-----> Extracting libunwind
       libunwind version: 1.2
       OK
       https://buildpacks.cloudfoundry.org/dependencies/manual-binaries/dotnet/libunwind-1.2-linux-x64-f56347d4.tgz
       OK
-----> Saving to buildpack cache
       Copied 38 files from /tmp/app/libunwind to /tmp/cache
       OK
-----> Cleaning staging area
       OK
ASP.NET Core buildpack is done creating the droplet
Exit status 0
Uploading droplet, build artifacts cache...
Uploading build artifacts cache...
Uploading droplet...
Uploaded build artifacts cache (995K)
Uploaded droplet (23.8M)
Uploading complete
Destroying container
Successfully destroyed container

1 of 1 instances running

App started


OK

App pas-dotnetcore-mvc-demo was started using this command `cd . && ./dotnet-core-mvc --server.urls http://0.0.0.0:${PORT}`

Showing health and status for app pas-dotnetcore-mvc-demo in org apples-pivotal-org / space development as papicella@pivotal.io...
OK

requested state: started
instances: 1/1
usage: 512M x 1 instances
urls: pas-dotnetcore-mvc-demo.cfapps.io
last uploaded: Mon Mar 20 21:05:08 UTC 2017
stack: cflinuxfs2
buildpack: https://github.com/cloudfoundry/dotnet-core-buildpack

     state     since                    cpu    memory          disk          details
#0   running   2017-03-21 08:06:05 AM   0.0%   39.2M of 512M   66.9M of 1G