How can I deploy Cloud Foundry on Bright OpenStack 7.3?

This guide is based on the Cloud Foundry documentation at http://docs.cloudfoundry.org/deploying/openstack/index.html, and walks through the installation process on a Bright OpenStack 7.3 deployment.

Unless specified otherwise, the commands mentioned in this article must be run on the head node.

Prerequisites

Bright OpenStack is already deployed.

We assume BOSH is not already installed, so we will first install it on top of Bright OpenStack.

OpenStack credentials are already configured and the command-line utilities can access the OpenStack API. Bright configures this by default: for the root user, ~/.openstackrc stores the credentials of the admin user and the bright tenant. When deploying Cloud Foundry for another user and tenant, these values need to be changed accordingly.

There are other prerequisites that the OpenStack deployment must conform to; they are described at http://docs.cloudfoundry.org/deploying/openstack/validate_openstack.html. A typical installation of Bright OpenStack meets most of these requirements out of the box.
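As a quick check that the credentials work before proceeding, the rc file can be sourced and a token requested (this assumes the default root credentials mentioned above):

# source /root/.openstackrc
# openstack token issue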

1 – Configure the OpenStack environment

1.1 – Verify that Nova and Cinder use the same default availability zones

BOSH, a tool which will be used later in this article, creates a virtual machine in the default availability zone and then tries to create a Cinder volume in the same zone in which it created the virtual machine. This will not work if that availability zone is not known to Cinder.

To find out the default availability zone, run the following command:
# cmsh -c "openstack; settings; compute; get defaultavailabilityzone"

To list the availability zones available for creating Cinder volumes, run the following command:
# cinder availability-zone-list

If the zones differ, it will be necessary to change Cinder’s availability zone. The following is an example of how to change Cinder’s availability zone to “default”:
# cmsh -c "configurationoverlay; use openstackcontrollers; customizations; add INI /etc/cinder/cinder.conf; entries; add [DEFAULT]storage_availability_zone=default; commit"

The controller nodes need to be rebooted after this change.
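Once the controller nodes are back up, the override can optionally be verified on one of them (node001 here is a placeholder for one of your controller nodes) and the Cinder zone list re-checked:

# ssh node001 grep storage_availability_zone /etc/cinder/cinder.conf
# cinder availability-zone-list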

1.2 – Configure security groups for Cloud Foundry

Run the following commands on the head node:
# openstack security group create cf
# openstack security group rule create --proto udp --dst-port 68:68 cf
# openstack security group rule create --proto udp --dst-port 3457:3457 cf
# openstack security group rule create --proto icmp cf
# openstack security group rule create --proto tcp --dst-port 22:22 cf
# openstack security group rule create --proto tcp --dst-port 80:80 cf
# openstack security group rule create --proto tcp --dst-port 443:443 cf
# openstack security group rule create --proto tcp --dst-port 4443:4443 cf
# openstack security group rule create --proto tcp --dst-port 1:65535 --src-group $(openstack security group show cf | grep id | head -n 1 | cut -d"|" -f3) cf

This is the minimum required configuration according to http://docs.cloudfoundry.org/deploying/openstack/security_group.html. A production environment may require some modifications.
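As a side note, the grep/cut pipeline used above to extract the security group ID can, on recent versions of the openstack client, be replaced with the built-in value formatter, for example:

# openstack security group show cf -f value -c id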

1.3 – Verify the quota of the tenant

The last time we tested this procedure (May 2017), with the configuration templates included with Cloud Foundry, a total of 21 instances had to be deployed: 1 for the BOSH director, 6 for compilation, and 14 for Cloud Foundry.

This can be a problem if the current tenant’s limit on how many instances can be created is lower than that.

To verify how many instances can be deployed in the bright tenant, run the following command on the head node:
# openstack quota show bright

If the values shown by this command are lower than what is needed, they can be increased. For example, to increase the quotas for instances and cores to 50, the following command can be used:
# openstack quota set --instances 50 --cores 50 bright

2 – Install BOSH

In order to deploy Cloud Foundry, we will first deploy BOSH (https://bosh.cloudfoundry.org/), a tool used to deploy applications.

The steps in this section are based on the procedure described at https://bosh.io/docs/init-openstack.html.

2.1 – Install RVM and Ruby 2.3.0

Ruby is required to run the BOSH client (which will be installed later). Here is a brief description of how to install Ruby in an RVM (https://rvm.io/) environment.

Run the following commands on the head node:
# gpg --keyserver hkp://keys.gnupg.net --recv-keys 409B6B1796C275462A1703113804BB82D39DC0E3
# curl -sSL https://get.rvm.io | bash -s stable --ruby=2.3.0
# source /usr/local/rvm/scripts/rvm
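To confirm that the expected Ruby version is now active, run:

# ruby -v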

2.2 – Create a manifest to deploy BOSH

Create a directory which will be used for placing the BOSH manifests.
# mkdir /cm/shared/manifests

Create the manifest for deploying BOSH in /cm/shared/manifests/microbosh.yml, with the following content:
name: bosh

releases:
- name: bosh
  url: https://bosh.io/d/github.com/cloudfoundry/bosh?v=259
  sha1: 94cd3ade8549fbc6a02fcca349794ac40045c22c
- name: bosh-openstack-cpi
  url: https://bosh.io/d/github.com/cloudfoundry-incubator/bosh-openstack-cpi-release?v=27
  sha1: 85e6244978f775c888bbd303b874a2c158eb43c4

resource_pools:
- name: vms
  network: private
  stemcell:
    url: https://bosh.io/d/stemcells/bosh-openstack-kvm-ubuntu-trusty-go_agent?v=3263.10
    sha1: abbf1686c486570393c75d865772bf1b3e8dfbfd
  cloud_properties:
    instance_type: m1.xlarge

disk_pools:
- name: disks
  disk_size: 20_000

networks:
- name: private
  type: manual
  subnets:
  - range: PRIVATE-CIDR # <--- Replace with a private subnet CIDR
    gateway: PRIVATE-GATEWAY-IP # <--- Replace with a private subnet's gateway
    dns: [DNS-IP] # <--- Replace with your DNS
    cloud_properties: {net_id: NETWORK-UUID} # <--- Replace with the private network UUID
- name: public
  type: vip

jobs:
- name: bosh
  instances: 1

  templates:
  - {name: nats, release: bosh}
  - {name: postgres, release: bosh}
  - {name: blobstore, release: bosh}
  - {name: director, release: bosh}
  - {name: health_monitor, release: bosh}
  - {name: registry, release: bosh}
  - {name: openstack_cpi, release: bosh-openstack-cpi}

  resource_pool: vms
  persistent_disk_pool: disks

  networks:
  - name: private
    static_ips: [PRIVATE-IP] # <--- Replace with a private IP
    default: [dns, gateway]
  - name: public
    static_ips: [FLOATING-IP] # <--- Replace with a floating IP

  properties:
    nats:
      address: 127.0.0.1
      user: nats
      password: nats-password

    postgres: &db
      listen_address: 127.0.0.1
      host: 127.0.0.1
      user: postgres
      password: postgres-password
      database: bosh
      adapter: postgres

    registry:
      address: PRIVATE-IP # <--- Replace with a private IP
      host: PRIVATE-IP # <--- Replace with a private IP
      db: *db
      http: {user: admin, password: admin, port: 25777}
      username: admin
      password: admin
      port: 25777

    blobstore:
      address: PRIVATE-IP # <--- Replace with a private IP
      port: 25250
      provider: dav
      director: {user: director, password: director-password}
      agent: {user: agent, password: agent-password}

    director:
      address: 127.0.0.1
      name: my-bosh
      db: *db
      cpi_job: openstack_cpi
      max_threads: 3
      user_management:
        provider: local
        local:
          users:
          - {name: admin, password: admin}
          - {name: hm, password: hm-password}

    hm:
      director_account: {user: hm, password: hm-password}
      resurrector_enabled: true

    openstack: &openstack
      auth_url: IDENTITY-API-ENDPOINT # <--- Replace with OpenStack Identity API endpoint
      project: OPENSTACK-PROJECT # <--- Replace with OpenStack project name
      domain: OPENSTACK-DOMAIN # <--- Replace with OpenStack domain name
      username: OPENSTACK-USERNAME # <--- Replace with OpenStack username
      api_key: OPENSTACK-PASSWORD # <--- Replace with OpenStack password
      default_key_name: bosh
      default_security_groups: [bosh]

    agent: {mbus: "nats://nats:nats-password@PRIVATE-IP:4222"} # <--- Replace with a private IP

    ntp: &ntp [0.pool.ntp.org, 1.pool.ntp.org]

cloud_provider:
  template: {name: openstack_cpi, release: bosh-openstack-cpi}

  ssh_tunnel:
    host: FLOATING-IP # <--- Replace with a floating IP
    port: 22
    user: vcap
    private_key: ./bosh # Path relative to this manifest file

  mbus: "https://mbus:mbus-password@FLOATING-IP:6868" # <--- Replace with a floating IP

  properties:
    openstack: *openstack
    agent: {mbus: "https://mbus:mbus-password@0.0.0.0:6868"}
    blobstore: {provider: local, path: /var/vcap/micro_bosh/data/cache}
    ntp: *ntp
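
Because the manifest is whitespace-sensitive YAML, it can save a failed deployment later to syntax-check it now. A minimal check using the Ruby interpreter installed in step 2.1 (this validates only the YAML syntax, not the values):

# ruby -ryaml -e 'YAML.load_file("/cm/shared/manifests/microbosh.yml")' && echo syntax OK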

2.3 – Create an OpenStack keypair for BOSH

Create an SSH keypair and place it in the /cm/shared/manifests directory:

# ssh-keygen -f /cm/shared/manifests/bosh

Add the keypair to OpenStack:

# openstack keypair create --public-key /cm/shared/manifests/bosh.pub bosh
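Optionally verify that the keypair was registered:

# openstack keypair list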

2.4 – Configure a security group for BOSH

Run the following commands on the head node:
# openstack security group create bosh
# openstack security group rule create --proto tcp --dst-port 22:22 bosh
# openstack security group rule create --proto tcp --dst-port 6868:6868 bosh
# openstack security group rule create --proto tcp --dst-port 25555:25555 bosh
# openstack security group rule create --proto tcp --dst-port 1:65535 --src-group $(openstack security group show bosh | grep id | head -n 1 | cut -d"|" -f3) bosh

The preceding rules are the ones mentioned in the BOSH documentation. To allow communication between the BOSH director and the Cloud Foundry instances, the following rules also have to be added:
# openstack security group rule create --proto tcp --dst-port 1:65535 --src-group $(openstack security group show cf | grep id | head -n 1 | cut -d"|" -f3) bosh

# openstack security group rule create --proto tcp --dst-port 1:65535 --src-group $(openstack security group show bosh | grep id | head -n 1 | cut -d"|" -f3) cf

2.5 – Allocate a floating IP

Allocate a floating IP in the external network by running this command:
# openstack ip floating create bright-external-flat-externalnet

Take note of the generated IP, as it will be used in the following steps.

2.6 – Create an internal network to be used by Cloud Foundry

If there isn’t already an internal network defined for use by Cloud Foundry, create one by following these steps:
# openstack network create cloudfoundry-net
# neutron subnet-create --enable-dhcp --gateway 192.168.0.254 --name cloudfoundry-subnet cloudfoundry-net 192.168.0.0/24

2.7 – Verify that the external network can be reached from the internal network

If it is not already defined, a router connecting the internal Cloud Foundry network and the external network must be created (this is what allows OpenStack to assign the floating IP to a VM inside the Cloud Foundry internal network):
# openstack router create cloudfoundry-router
# neutron router-gateway-set cloudfoundry-router bright-external-flat-externalnet
# neutron router-interface-add cloudfoundry-router cloudfoundry-subnet
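To confirm that the router has both the external gateway and the internal interface attached, the following commands can be used:

# neutron router-show cloudfoundry-router
# neutron router-port-list cloudfoundry-router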

When configuring this, it is also very important to verify that the OpenStack public endpoints can be reached from inside the internal network. The public endpoints’ URLs are the ones returned by running this command on the head node:

# openstack endpoint list | grep public
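One way to test reachability (a sketch; the CirrOS image and m1.small flavor are placeholders for whatever exists in your deployment) is to boot a small test instance on the internal network and curl one of those endpoint URLs from its console:

# NET_ID=$(openstack network list | grep cloudfoundry-net | cut -d"|" -f2 | tr -d " ")
# openstack server create --image CirrOS --flavor m1.small --nic net-id=$NET_ID endpoint-test

From the instance console in Horizon, a curl against the public Identity API endpoint should return a response rather than time out.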

2.8 – Edit the BOSH manifest

Edit /cm/shared/manifests/microbosh.yml and replace the following values:

NETWORK-UUID -> replace with the UUID of the network defined in 2.6. To get the UUIDs of the networks, run this command: openstack network list

PRIVATE-IP -> replace with an unused IP of the network defined in 2.6.

PRIVATE-CIDR -> replace with the CIDR of the network defined in 2.6. For example: 192.168.0.0/24

PRIVATE-GATEWAY-IP -> replace with the gateway of the network defined in 2.6. This should be the same as the internal network IP of the router defined in 2.7. For example: 192.168.0.254

DNS-IP -> replace with the IP of a DNS server.

FLOATING-IP -> replace with the floating IP allocated in step 2.5.

OPENSTACK-PASSWORD -> replace with the value of OS_PASSWORD in /root/.openstackrc.

IDENTITY-API-ENDPOINT -> replace with the URL of the public Identity API endpoint. The URL can be found by running the “openstack endpoint list” command.

OPENSTACK-PROJECT -> replace with the value of OS_TENANT_NAME in /root/.openstackrc.

OPENSTACK-USERNAME -> replace with the value of OS_USERNAME in /root/.openstackrc.

OPENSTACK-DOMAIN -> replace with the OpenStack domain name of the user. For example: ‘default’.

You must also verify that the flavor specified in “instance_type” can be deployed on your OpenStack hypervisors.
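The available flavors and the properties of the one referenced by instance_type can be checked as follows:

# openstack flavor list
# openstack flavor show m1.xlarge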

2.9 – Install bosh-init

Run the following commands on the head node:
# wget https://s3.amazonaws.com/bosh-init-artifacts/bosh-init-0.0.102-linux-amd64
# chmod +x bosh-init-0.0.102-linux-amd64
# mkdir -p /root/bin
# mv bosh-init-0.0.102-linux-amd64 /root/bin/bosh-init

Verify it’s installed correctly by running this command:
# bosh-init -v

2.10 – Deploy BOSH

Run the following command on the head node:
# bosh-init deploy /cm/shared/manifests/microbosh.yml

2.11 – Install the BOSH command line client

Run the following command on the head node:
# gem install bosh_cli --no-ri --no-rdoc

Test by logging in to the BOSH director:
# bosh target FLOATING_IP

Use the username and password specified in the “director” section of the manifest.

3 – Deploy Cloud Foundry

We will now use BOSH to deploy Cloud Foundry.

3.1 – Install Git

If it is not already installed, run the following command:
# yum install git

3.2 – Install Go

An installation of Go will be required for some of the next steps. The following commands can be run to install and configure it:
# yum install golang
# export GOPATH=/usr/share/gocode
# export PATH=$PATH:/usr/share/gocode/bin
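These exports only affect the current shell. To make them persist across sessions, they can be appended to root’s shell profile, for example:

# echo 'export GOPATH=/usr/share/gocode' >> /root/.bashrc
# echo 'export PATH=$PATH:/usr/share/gocode/bin' >> /root/.bashrc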

3.3 – Retrieve the BOSH director UUID

Log into the BOSH director
# bosh target FLOATING_IP

After logging in, run the following command:
# bosh status --uuid

The value returned here will be required in step 3.8.

3.4 – Allocate a floating IP

Allocate a floating IP in the external network by running this command:
# openstack ip floating create bright-external-flat-externalnet

Take note of the generated IP, as it will be used in the following steps.

3.5 – Clone the cf-release project

Run the following commands on the head node:
# git clone https://github.com/cloudfoundry/cf-release.git
# cd cf-release
# ./scripts/update

3.6 – Create a stub manifest for Cloud Foundry

Create a stub manifest for deploying Cloud Foundry in /cm/shared/manifests/cf-stub.yml, using cf-release/spec/fixtures/openstack/cf-stub.yml as a template. The following sections describe how the stub has to be filled in.

3.7 – Generate certificates and keys

In some sections of the manifest stub it will be necessary to provide SSL certificates and keys. The scripts under cf-release/scripts can be used to generate them in case they don’t exist already.

For example, to generate the certificates for the consul section, the following command can be used:
# cf-release/scripts/generate-consul-certs

The certificates should be generated under /root/consul-certs.

3.8 – Edit the stub manifest

Edit the manifest according to the instructions in http://docs.cloudfoundry.org/deploying/openstack/cf-stub.html#editing, keeping in mind the following:

DIRECTOR_UUID has to be replaced with the UUID obtained in step 3.3.

ENVIRONMENT has to be replaced with an arbitrary value, for example openstack-prod.

In floating_static_ips, the floating IP allocated in step 3.4 must be set.

In the networks section, the net_id value must be the network ID of the network defined in 2.6; this can be obtained with the following command:
# openstack network show cloudfoundry-net

In the networks section, the range and gateway values in subnets must correspond to the subnet of the internal network defined in 2.6; this can be obtained with the following command:
# openstack subnet show cloudfoundry-subnet

In the networks section, in static it is required to specify a range of at least 26 IP addresses inside the subnet range.

In the networks section, in reserved it is necessary to specify IP addresses inside the range that are already in use and cannot be assigned to instances. Typically those addresses are the DHCP server of the internal network, the BOSH director, and the router. To verify which IP addresses are already in use in the internal network, the following command can be used:

# neutron port-list | grep $(openstack subnet list | grep cloudfoundry-subnet | cut -d" " -f2)

In security_groups for the cf1 network, the security group defined in step 1.2 must be specified.

Where certificates and keys are required, the scripts mentioned in 3.7 can be used to generate them.

Editing instructions table, from that URL. Each deployment manifest stub snippet below is followed by its editing instructions.

director_uuid: DIRECTOR_UUID

Replace DIRECTOR_UUID with the BOSH Director UUID. Run the BOSH CLI command bosh status --uuid to view the BOSH Director UUID.

meta:
  environment: ENVIRONMENT

Replace ENVIRONMENT with an arbitrary name describing your environment, for example openstack-prod.

  floating_static_ips:
  - 198.51.100.1

Replace 198.51.100.1 with an existing static IP address for your OpenStack floating network. This is assigned to the ha_proxy job to receive incoming traffic.

networks:
- name: floating
  type: vip
  cloud_properties:
    net_id: NET_ID
    security_groups: []
- name: cf1
  type: manual
  subnets:
  - range: 10.10.0.0/24
    gateway: 10.10.0.1
    reserved:
    - 10.10.0.2 - 10.10.0.100
    - 10.10.0.200 - 10.10.0.254
    dns:
    - 8.8.8.8
    static:
    - 10.10.0.125 - 10.10.0.175
    cloud_properties:
      net_id: NET_ID
      security_groups: ["cf"]

Update the values for range, reserved, static, and gateway to reflect the available networks and IP addresses in your OpenStack network. Replace NET_ID with the network ID of your OpenStack network. This also assumes that you have a security group cf suitable for your Cloud Foundry VMs. Change this to the name of your security group if necessary.

properties:
  system_domain: SYSTEM_DOMAIN
  system_domain_organization: SYSTEM_DOMAIN_ORGANIZATION
  app_domains:
  - APP_DOMAIN

Replace SYSTEM_DOMAIN and APP_DOMAIN with the full domain you want associated with applications pushed to your Cloud Foundry installation, for example cloud-09.cf-app.com. You must have already acquired these domains and configured their DNS records so that these domains resolve to your load balancer. Choose a name for the SYSTEM_DOMAIN_ORGANIZATION. This organization will be created and configured to own the SYSTEM_DOMAIN.

  cc:
    staging_upload_user: STAGING_UPLOAD_USER
    staging_upload_password: STAGING_UPLOAD_PASSWORD
    bulk_api_password: BULK_API_PASSWORD
    db_encryption_key: CCDB_ENCRYPTION_KEY

The Cloud Controller API endpoint requires basic authentication. Replace STAGING_UPLOAD_USER and STAGING_UPLOAD_PASSWORD with a username and password of your choosing. Replace BULK_API_PASSWORD with a password of your choosing. Health Manager uses this password to access the Cloud Controller bulk API. Replace CCDB_ENCRYPTION_KEY with a secure key that you generate to encrypt sensitive values in the Cloud Controller database. You can use any random string. For example, run the following command from a command line to generate a 32-character random string: LC_ALL=C tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 ; echo

  blobstore:
    admin_users:
    - username: blobstore-username
      password: blobstore-password
    secure_link:
      secret: blobstore-secret
    tls:
      port: 443
      cert: BLOBSTORE_TLS_CERT
      private_key: BLOBSTORE_PRIVATE_KEY
      ca_cert: BLOBSTORE_CA_CERT

Replace blobstore-username and blobstore-password with a username and password of your choosing. Replace blobstore-secret with a secure secret of your choosing. Replace BLOBSTORE_TLS_CERT, BLOBSTORE_PRIVATE_KEY, and BLOBSTORE_CA_CERT with the blobstore TLS certificate, private key, and CA certificate.

  consul:
    encrypt_keys:
    - CONSUL_ENCRYPT_KEY
    ca_cert: CONSUL_CA_CERT
    server_cert: CONSUL_SERVER_CERT
    server_key: CONSUL_SERVER_KEY
    agent_cert: CONSUL_AGENT_CERT
    agent_key: CONSUL_AGENT_KEY

See the Security Configuration for Consul topic.

  loggregator_endpoint:
    shared_secret: LOGGREGATOR_ENDPOINT_SHARED_SECRET

Replace LOGGREGATOR_ENDPOINT_SHARED_SECRET with a secure string that you generate.

  nats:
    user: NATS_USER
    password: NATS_PASSWORD

Replace NATS_USER and NATS_PASSWORD with a username and secure password of your choosing. Cloud Foundry components use these credentials to communicate with each other over the NATS message bus.

  router:
    status:
      user: ROUTER_USER
      password: ROUTER_PASSWORD

Replace ROUTER_USER and ROUTER_PASSWORD with a username and secure password of your choosing.

  uaa:
    admin:
      client_secret: ADMIN_SECRET
    cc:
      client_secret: CC_CLIENT_SECRET
    clients:
      cc-service-dashboards:
        secret: CC_SERVICE_DASHBOARDS_SECRET
      cc_routing:
        secret: CC_ROUTING_SECRET
      cloud_controller_username_lookup:
        secret: CLOUD_CONTROLLER_USERNAME_LOOKUP_SECRET
      doppler:
        secret: DOPPLER_SECRET
      gorouter:
        secret: GOROUTER_SECRET
      tcp_emitter:
        secret: TCP-EMITTER-SECRET
      tcp_router:
        secret: TCP-ROUTER-SECRET
      login:
        secret: LOGIN_CLIENT_SECRET
      notifications:
        secret: NOTIFICATIONS_CLIENT_SECRET

Replace the values for all secret keys with secure secrets that you generate.

    jwt:
      verification_key: JWT_VERIFICATION_KEY
      -----BEGIN PUBLIC KEY-----
      PUBLIC_KEY
      -----END PUBLIC KEY-----
      signing_key: JWT_SIGNING_KEY
      -----BEGIN RSA PRIVATE KEY-----
      RSA_PRIVATE_KEY
      -----END RSA PRIVATE KEY-----

Generate a PEM-encoded RSA key pair, and replace JWT_SIGNING_KEY with the private key and JWT_VERIFICATION_KEY with the corresponding public key. You can generate a key pair by running the following command:

openssl genrsa -des3 -out jwt-key.pem 2048 && openssl rsa -in jwt-key.pem -pubout > key.pub

This command creates the key.pub file, which contains your public key, and the jwt-key.pem file, which contains your private key. If you were prompted for a passphrase, you must strip it from the private key with the following command:

openssl rsa -in jwt-key.pem -out jwt-key.pem

Copy in the full keys, including the BEGIN and END delimiter lines.

    scim:
      users:
      - name: admin
        password: ADMIN_PASSWORD

Generate a secure password and replace ADMIN_PASSWORD with that value to set the password for the Admin user of your Cloud Foundry installation.

  ccdb:
    roles:
    - name: ccadmin
      password: CCDB_PASSWORD
  uaadb:
    roles:
    - name: uaaadmin
      password: UAADB_PASSWORD
  databases:
    roles:
    - name: ccadmin
      password: CCDB_PASSWORD
    - name: uaaadmin
      password: UAADB_PASSWORD

Replace CCDB_PASSWORD and UAADB_PASSWORD with secure passwords of your choosing.

  hm9000:
    ca_cert: HM9000_CA_CERT
    server_cert: HM9000_SERVER_CERT
    server_key: HM9000_SERVER_KEY
    agent_cert: HM9000_AGENT_CERT
    agent_key: HM9000_AGENT_KEY

Generate SSL certificates for HM9000 and replace these values. You can use the scripts/generate-hm9000-certs script in the cf-release repository to generate self-signed certificates.

jobs:
- name: ha_proxy_z1
  networks:
  - name: cf1
    default:
    - dns
    - gateway
  properties:
    ha_proxy:
      ssl_pem: |
        -----BEGIN RSA PRIVATE KEY-----
        RSA_PRIVATE_KEY
        -----END RSA PRIVATE KEY-----
        -----BEGIN CERTIFICATE-----
        SSL_CERTIFICATE_SIGNED_BY_PRIVATE_KEY
        -----END CERTIFICATE-----

Replace RSA_PRIVATE_KEY and SSL_CERTIFICATE_SIGNED_BY_PRIVATE_KEY with the PEM-encoded private key and certificate associated with the system domain and apps domains that you configured to terminate at the floating IP address associated with the ha_proxy job.

Note: You can configure blacklists of IP address ranges to prevent future apps deployed to your Cloud Foundry installation from attempting to drain syslogs to internal Cloud Foundry components. See the Log Drain Blacklist Configuration topic for more information.

3.9 – Install Spiff

To generate the Cloud Foundry manifest from the stub, the Spiff tool will be used. It can be installed by following this procedure:
# wget https://github.com/cloudfoundry-incubator/spiff/releases/download/v1.0.8/spiff_linux_amd64.zip
# unzip spiff_linux_amd64.zip
# mv spiff /root/bin
# export PATH=$PATH:/root/bin

3.10 – Generate the Cloud Foundry manifest

Generate the manifest by running the following command:
# cf-release/scripts/generate_deployment_manifest openstack /cm/shared/manifests/cf-stub.yml > /cm/shared/manifests/cf-deployment.yml

3.11 – Deploy the Cloud Foundry manifest

Set the generated manifest as the current deployment.
# bosh deployment /cm/shared/manifests/cf-deployment.yml

Download an OpenStack stemcell from https://bosh.io/stemcells, for example:
# wget https://s3.amazonaws.com/bosh-core-stemcells/openstack/bosh-stemcell-3363.14-openstack-kvm-ubuntu-trusty-go_agent.tgz

Upload the stemcell to the BOSH director.
# bosh upload stemcell bosh-stemcell-3363.14-openstack-kvm-ubuntu-trusty-go_agent.tgz

Create the release.
# cd /root/cf-release
# bosh create release

Upload the release.
# bosh upload release

Deploy Cloud Foundry by running the following command:
# bosh deploy
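Once the deployment finishes, a quick way to confirm that all Cloud Foundry jobs came up is to list the VMs known to the director and check that they are reported as running:

# bosh vms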

Updated on October 26, 2020
