How to recover juju from a lost ~/.juju (OpenStack Provider)

If you have accidentally lost the ~/.juju directory or the
host where your juju client runs, the following procedure can help you recover access to your environment.

Note that this document covers only the OpenStack provider, but it might apply to others as well.
We assume that the units that compose your juju deployment are still alive, especially your bootstrap node.

First, you need a new machine; set up a new SSH key and install juju-core.

$ ssh-keygen -t rsa && sudo apt-get install juju-core 

Then find out which nova-compute node your machine is running on:

$ nova hypervisor-servers juju-$USERNAME-machine-0
+--------------------------------------+-------------------+---------------+--------------------------+
| ID                                   | Name              | Hypervisor ID | Hypervisor Hostname      |
+--------------------------------------+-------------------+---------------+--------------------------+
| 078f97cf-19e0-440d-aa5c-1234a75a57d3 | instance-000005a2 | 1             | juju-$USERNAME-machine-0 |
+--------------------------------------+-------------------+---------------+--------------------------+

Then filter the hypervisor list by the hypervisor ID:

$ nova hypervisor-list | grep 1
+----+---------------------+
| ID | Hypervisor hostname |
+----+---------------------+
| 1  | nova-compute-01     |
+----+---------------------+
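Alternatively, with admin credentials, nova show can answer both lookups in one step (the OS-EXT-SRV-ATTR fields require admin rights and their names may vary slightly between OpenStack releases):

$ nova show juju-$USERNAME-machine-0 | grep -E 'instance_name|hypervisor_hostname'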

Then access your nova-compute-01 machine and check whether the directory for instance-000005a2 exists:

$ sudo ls -al /var/lib/nova/instances/078f97cf-19e0-440d-aa5c-1234a75a57d3

Then stop that instance using virsh:

$ sudo virsh shutdown instance-000005a2
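
You can confirm that the instance is actually down before touching its disk:

$ sudo virsh list --all | grep instance-000005a2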

Then, assuming you are running with local storage in qcow2 format, mount the unit as a network block device.

$ sudo modprobe nbd
$ sudo qemu-nbd -c /dev/nbd0 /var/lib/nova/instances/078f97cf-19e0-440d-aa5c-1234a75a57d3/disk
$ sudo mkdir -p /mnt/instance
$ sudo mount /dev/nbd0p1 /mnt/instance
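
If the mount fails, the guest may use a different partition layout; you can inspect the partitions on the NBD device first:

$ sudo fdisk -l /dev/nbd0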

Then grant access for the SSH keypair you created earlier (assuming you have already copied the public key to the VM hypervisor):

$ cat ~/id_rsa.pub | sudo tee -a /mnt/instance/home/ubuntu/.ssh/authorized_keys
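
If the append fails because the .ssh directory did not exist on the guest, create it with sane ownership and permissions and retry. The uid/gid 1000 below is an assumption (the default ubuntu user on cloud images):

$ sudo mkdir -p /mnt/instance/home/ubuntu/.ssh
$ sudo chown -R 1000:1000 /mnt/instance/home/ubuntu/.ssh
$ sudo chmod 700 /mnt/instance/home/ubuntu/.ssh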

Then unmount the filesystem, disconnect the nbd device, and start the instance again.

$ sudo umount -l /mnt/instance
$ sudo qemu-nbd -d /dev/nbd0
$ sudo virsh start instance-000005a2

At this point, you can access your instance via SSH using your key. Connect to your bootstrap
node and run the next steps there.
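
For example (BOOTSTRAP-NODE-IP is a placeholder for the address of your bootstrap node):

$ ssh -i ~/.ssh/id_rsa ubuntu@BOOTSTRAP-NODE-IP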

Then access juju's MongoDB and query a few details about the environment:

$  sudo su
$  mongo --ssl -u admin -p $(grep oldpassword /var/lib/juju/agents/machine-0/agent.conf | awk -e '{print $2}') localhost:37017/admin

Once on the database instance, run the following to get your environment UUID:

juju:PRIMARY> db = db.getSiblingDB('juju')  
juju:PRIMARY> db.environments.find().pretty()

{ "_id" : "cc503d03-6933-47a7-8a16-4d1094a6593e"

Next, get all the settings information that will be used later to create a new jenv file:

juju:PRIMARY> db.settings.find({'_id': "e"}).pretty()  
{
"_id" : "e",
"access-key" : "",
"admin-secret" : "",
"agent-version" : "1.20.14",
"api-port" : 17070,
"apt-http-proxy" : "http://squid.internal:3128",
"apt-https-proxy" : "http://squid.internal:3128",
"auth-mode" : "userpass",
"auth-url" : "http://x.x.x.x/v2.0",
"authorized-keys" : "...",
"bootstrap-addresses-delay" : 10,
"bootstrap-retry-delay" : 5,
"bootstrap-timeout" : 600,
"ca-cert" : "-----BEGIN CERTIFICATE-----ZNUqHLxIuzsl
OVO/pj/GIfrQxXEoG6AGLBrQh6SlJkbbcJLtFno=  
-----END CERTIFICATE-----
",
"ca-private-key" : "",
"charm-store-auth" : "",
"control-bucket" : "1a560626-981a-11e4-a725-d3f43a24220d",
"default-series" : "",
"development" : false,
"disable-network-management" : false,
"firewall-mode" : "instance",
"image-metadata-url" : "http://10.x.x.x/swift/v1/simplestreams/data/",
"image-stream" : "",
"logging-config" : "<root>=WARNING;unit=DEBUG",
"lxc-clone-aufs" : false,
"name" : "username",
"network" : "",
"password" : "password",
"proxy-ssh" : true,
"region" : "region",
"rsyslog-ca-cert" : "-----BEGIN CERTIFICATE-----
+ByOa+sVdAql
FA7pG8XJxaZPlbWj1ZCE2LjIgV+6N9bTXPb7RArmn3OWaKw=  
-----END CERTIFICATE-----
",
"secret-key" : "",
"ssl-hostname-verification" : true,
"state-port" : 37017,
"syslog-port" : 6514,
"tenant-name" : "username",
"test-mode" : false,
        [...]
"type" : "openstack",
"use-default-secgroup" : true,
"use-floating-ip" : false,
"username" : "username"
}

The only important settings from the list above are control-bucket, ca-cert, name, and password. Then
grab the following file from the bootstrap node; it contains the public/private CA certificate.

$ ls -lt /var/lib/juju/server.pem
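
You can copy it to your new client machine over the SSH access you just restored; for example (this relies on the ubuntu user on juju machines normally having passwordless sudo):

$ ssh ubuntu@BOOTSTRAP-NODE-IP 'sudo cat /var/lib/juju/server.pem' > server.pem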

On the new juju client machine, edit ~/.juju/environments.yaml and add a new openstack environment:

    recovery:
        type: openstack
        control-bucket: CONTROL-BUCKET-UUID
        admin-secret: password
        auth-url: http://x.x.x.x:5000/v2.0
        region: region
        username: USERNAME-FROM-MONGODB
        password: PASSWORD_FROM_MONGODB
        tenant-name: username
        use-default-secgroup: true
        image-stream: "released"
        apt-http-proxy: http://squid.internal:3128
        apt-https-proxy: http://squid.internal:3128
        tools-metadata-url: https://streams.canonical.com/juju/tools/
        image-metadata-url: http://x.x.x.x/swift/v1/simplestreams/data/

Replace control-bucket, admin-secret, username, and password with the values from the MongoDB
database.

Then create a new file called ~/.juju/environments/recovery.jenv, with the following contents:

user: admin  
password: test  
environ-uuid: ENVIRONMENT_UUID  
state-servers:  
- ip-address-of-your-bootstrap-node:17070

server-hostnames:  
- ip-address-of-your-bootstrap-node:17070

ca-cert: CA_CERT

bootstrap-config:  
  access-key: ""
  admin-secret: TO_BE_DEFINED
  agent-metadata-url: https://streams.canonical.com/juju/tools/
  api-port: 17070
  apt-http-proxy: http://squid.internal:3128
  apt-https-proxy: http://squid.internal:3128
  auth-mode: userpass
  auth-url: http://x.x.x.x:5000/v2.0
  authorized-keys: 
  bootstrap-addresses-delay: 10
  bootstrap-retry-delay: 5
  bootstrap-timeout: 600
  ca-cert: SECOND_CA_CERT
  ca-private-key: SECOND_CA_CERT_PRIVATE
  charm-store-auth: ""
  control-bucket: CONTROL_BUCKET
  default-series: ""
  development: false
  disable-network-management: false
  firewall-mode: instance
  image-metadata-url: http://x.x.x.x:80/swift/v1/simplestreams/data/
  image-stream: released
  logging-config: <root>=WARNING;unit=DEBUG
  lxc-clone-aufs: false
  name: USERNAME
  network: ""
  password: PASSWORD
  prefer-ipv6: false
  proxy-ssh: true
  region: region
  secret-key: ""
  set-numa-control-policy: false
  ssl-hostname-verification: true
  state-port: 37017
  syslog-port: 6514
  tenant-name: USERNAME
  test-mode: false
  tools-metadata-url: https://streams.canonical.com/juju/tools/
  type: openstack
  use-default-secgroup: true
  use-floating-ip: false
  username: USERNAME
  uuid: ENVIRONMENT_UUID

Replace environ-uuid and uuid with the environment UUID, then the first ca-cert entry
with the ca-cert retrieved in the second step. Then replace the next ca-cert and ca-private-key
with the contents of the file /var/lib/juju/server.pem that you got from the bootstrap node.
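
server.pem contains both the certificate and the private key in a single file; one way to split them for the jenv is with openssl (a sketch, assuming a standard combined PEM layout):

$ openssl x509 -in server.pem    # prints the certificate part
$ openssl rsa -in server.pem     # prints the private key part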

Once you are done, you need to replace your admin password with a new one. Use the following
Go script to generate the new hash and salt:

package main

import (
    "fmt"
    "os"

    "github.com/juju/utils"
)

func main() {
    // Generate a random salt and hash the password given as the
    // first command-line argument.
    salt, err := utils.RandomSalt()
    if err != nil {
        fmt.Println(err)
        os.Exit(1)
    }

    fmt.Printf("admin-password:%s - salt:%s\n", utils.UserPasswordHash(os.Args[1], salt), salt)
}

Compile and run with:

$ go build password.go
$ ./password newpassword
admin-password: 98asdnaskd - salt: 9asdasd93asd  

Edit your ~/.juju/environments/recovery.jenv and replace the fields admin-secret and password
with the value that you passed to the previous script (in plain text).
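
For example, if you ran ./password newpassword, the relevant lines of recovery.jenv would end up looking like this (a sketch; note these take the plain-text password, not the hash):

password: newpassword

bootstrap-config:
  admin-secret: newpassword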

Once you have saved your changes, jump back into the MongoDB database and update the salt and password hash:

$  sudo su
$  mongo --ssl -u admin -p $(grep oldpassword /var/lib/juju/agents/machine-0/agent.conf | awk -e '{print $2}') localhost:37017/admin

Once on the database instance, update the admin user with the new password hash and salt printed by the script.

juju:PRIMARY> db = db.getSiblingDB('juju')  
juju:PRIMARY> db.users.update({'_id': "admin"}, { $set: { "passwordhash": "HASH-FROM-SCRIPT", "passwordsalt": "SALT-FROM-SCRIPT" } })
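
You can verify that the update took effect before disconnecting:

juju:PRIMARY> db.users.find({'_id': "admin"}).pretty()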

At this point, you should be able to switch to your new environment and run a juju status.

$ juju switch recovery
$ juju status

Accessing via SSH

For accessing via juju ssh, you first need to copy your new RSA public key into ~/.ssh/authorized_keys on every
juju machine.
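
Assuming you can still reach the other machines with an existing key (for example from the bootstrap node; otherwise repeat the qemu-nbd procedure above), a loop like the following can help. The machine addresses are placeholders:

$ for host in MACHINE-1-IP MACHINE-2-IP; do cat ~/.ssh/id_rsa.pub | ssh ubuntu@$host 'cat >> ~/.ssh/authorized_keys'; done

After doing this, you also need to manually modify the database to use your new SSH key: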

juju:PRIMARY> db.settings.update({'_id': "e"}, { $set: { 'authorized-keys': "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCupSv5dH6LO3qIu4EfBP8OhHO4RwLmgz9twWCgh5Boh/sdasdasdad/9lVottd2ACwVMCsAPxwBJmc/58EIVguuqQlLs9AT0KZu1tYqgqsAhQxOspQTXjbNKgrJNVsOxzi1i34HAoyICGkv3/j2IwRWLBY73e4lk+7U3kea/5vhmoehYDXmkeDpSPrw3btM2QiBIo6eibe6q8fas2/hZcS9R4ykG/6/iM1 ubuntu@your-new-host" }})

Once this is done, you can access your environment via SSH:

$ juju ssh 0

At this point the recovery is completed.


Jorge Niedbalski

Dev and Ops, and might be the opposite.

