
Doc fixes

- Redo of README
- Added Photon Logo
- Moved docs from root to docs/ folder
- Moved build instructions from readme.md to docs/build-photon.md

Fabio Rapposelli authored on 2015/04/23 11:14:49
Showing 14 changed files
... ...
@@ -1,96 +1,17 @@
1
-# Welcome to the VMware Photon Linux Release!
1
+![Photon](http://storage.googleapis.com/project-photon/vmw-logo-photon.svg "VMware Photon")
2 2
 
3
-## Introduction
3
+[ ![Download](https://api.bintray.com/packages/vmware/photon/iso/images/download.svg) ](https://bintray.com/vmware/photon/iso/_latestVersion)
4
+VMware Photon: Minimal Linux Container Host
5
+===========================================
4 6
 
5
-Photon is a small RPM-based Linux distribution that is optimized for running containers. This repository is intended for developers wishing to modify Photon and build their own customized ISO images. For those interested in an ISO image that is ready to use, please download from the following location: 
7
+Photon is a technology preview of a minimal Linux container host. It is designed to have a small footprint and boot extremely quickly on VMware platforms. Photon is intended to invite collaboration around running containerized applications in a virtualized environment.
6 8
 
7
-https://dl.bintray.com/vmware/photon/iso/1.0TP1/x86_64/
9
+- Optimized for vSphere - Validated on VMware product and provider platforms.
10
+- Container support - Supports Docker, rkt, and the Pivotal Garden container specifications.
11
+- Efficient lifecycle management - Contains a new, open-source, yum-compatible package manager that helps keep the system as small as possible while preserving robust yum package management capabilities.
8 12
 
9
-## Folder Layout
10
-```
11
-photon/
12
-├── Makefile
13
-├── README
14
-├── SPECS # RPM SPEC files
15
-├── cloud-init.md
16
-├── gce.md
17
-├── installer # Installer used at runtime
18
-└── support
19
-```
13
+This repository is intended for developers wishing to modify Photon and build their own customized ISO images.
20 14
 
21
-## How to build the ISO?
15
+Official ISOs are available for download at [Bintray](https://bintray.com/vmware/photon/iso/view).
22 16
 
23
-Assuming you checked out the workspace under `$HOME/workspaces/photon`.
24
-```
25
-cd $HOME/workspaces/photon
26
-sudo make iso
27
-```
28
-Deliverable will be created at `$HOME/workspaces/photon/stage/photon.iso`
29
-
30
-## How to use cached toolchain and RPMS?
31
-```
32
-mkdir $HOME/photon-cache
33
-sudo make iso PHOTON_CACHE_PATH=$HOME/photon-cache
34
-```
35
-Directory format of `PHOTON_CACHE_PATH` is as follows.
36
-```
37
-photon-cache/
38
-├──tools-build.tar.gz
39
-├──RPMS/x86-64/*.rpm
40
-└──RPMX/noarch/*.rpm
41
-```
42
-## How to use cached sources?
43
-```
44
-mkdir $HOME/photon-sources
45
-sudo make iso PHOTON_SOURCES_PATH=$HOME/photon-sources
46
-```
47
-Directory format of `PHOTON_SOURCES_PATH` is as follows.
48
-```
49
-photon-sources/
50
-├──src1.tar.gz
51
-├──src2.tar.gz
52
-└──...
53
-```
54
-## How to build the toolchain?
55
-
56
-1. Check toolchain pre-requisites
57
-```
58
-$HOME/workspaces/photon/support/toolchain/version-check.sh
59
-```
60
-2. Make toolchain
61
-```
62
-$HOME/workspaces/photon
63
-sudo make toolchain
64
-```
65
-
66
-Pre-requisites :
67
-
68
- * Build O/S : Ubuntu 14.04 (or later) 64 bit
69
- * Packages: bison, gawk, g++, createrepo, python-aptdaemon, genisoimage, texinfo, python-requests
70
-```
71
-sudo apt-get -y install bison gawk g++ createrepo python-aptdaemon genisoimage texinfo python-requests
72
-```
73
-
74
-### Settings:
75
-
76
-Make sure `/bin/sh` is a symbolic link pointing to `/bin/bash`
77
-
78
-If `/bin/sh` is pointing `/bin/dash`, execute the following:
79
-```
80
-rm -f /bin/sh
81
-ln -s /bin/bash /bin/sh
82
-```
83
-
84
-## Where are the build logs?
85
-```
86
-$HOME/workspaces/photon/stage/LOGS
87
-```
88
-
89
-## Complete build environment using Vagrant
90
-A `Vagrantfile` is available to ensure a quick standup of a development/build environment for Photon, this Vagrantfile uses a box called `photon-build-machine` box that is created through a [Packer](http://packer.io) template available under `support/packer-templates`, see the [README.md](https://github.com/vmware/photon/blob/master/support/packer-templates/README.md) for more information on how to build `photon-build-machine`.
91
-
92
-## Photon Vagrant box
93
-As with the build-machine a Packer template is available under `support/packer-templates` to build a Photon based Vagrant box running Docker, see the [README.md](https://github.com/vmware/photon/blob/master/support/packer-templates/README.md) for more information on how to build.
94
-
95
-## Automated build environment and Vagrant boxes
96
-Convenience make targets also exist to build both the `photon-build-machine` and the `photon` Packer templates as well as building a fresh ISO using the `photon-build-machine`. See the [README.md](https://github.com/vmware/photon/blob/master/support/packer-templates/README.md) for more details.
17
+An official Vagrant box is available on Hashicorp Atlas; to get started, run `vagrant init vmware/photon`.
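+A minimal sketch of bringing it up, assuming a supported VMware Vagrant provider (such as vmware_fusion) is installed:
+```
+vagrant init vmware/photon
+vagrant up --provider=vmware_fusion
+vagrant ssh
+```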
97 18
deleted file mode 100644
... ...
@@ -1,89 +0,0 @@
1
-Overview
2
-=================
3
-`cloud-init` is the defacto multi-distribution package that handles early initialization of a cloud instance.
4
-
5
-In-depth documentation for cloud-init is available here: https://cloudinit.readthedocs.org/en/latest/
6
-
7
-Supported installations
8
-=================
9
-<dl>
10
-<dt><code>Photon Container OS (Minimal)</code></dt>
11
-<dt><code>Photon Full OS (All)</code></dt>
12
-</dl>
13
-
14
-Supported capabilities
15
-=================
16
-`Photon` supports `cloud-init` starting with the following capabilities
17
-<dl>
18
-<dt><code>run commands</code></dt>
19
-<dd>execute a list of commands with output to console.</dd>
20
-<dt><code>configure ssh keys</code></dt>
21
-<dd>add entry to ~/.ssh/authorized_keys for the configured user</dd>
22
-<dt><code>install package</code></dt>
23
-<dd>install additional packages on first boot</dd>
24
-<dt><code>configure networking</code></dt>
25
-<dd>update /etc/hosts, hostname etc</dd>
26
-<dt><code>write files</code></dt>
27
-<dd>write arbitrary file(s) to disk</dd>
28
-<dt><code>add yum repo</code></dt>
29
-<dd>add a yum repository to /etc/yum.repos.d</dd>
30
-<dt><code>create groups and users</code></dt>
31
-<dd>add groups and users to the system. set user/group properties</dd>
32
-<dt><code>run yum upgrade</code></dt>
33
-<dd>upgrade all packages</dd>
34
-<dt><code>reboot</code></dt>
35
-<dd>reboot or power off when done with cloud-init</dd>
36
-</dl>
37
-
38
-Getting Started
39
-=================
40
-photon cloud config has `ec2 datasource` turned on by default so an `ec2` configuration is accepted. 
41
-However, for testing, the following methods provide ways to do `cloud-init` with `photon` standalone.
42
-
43
-Using a seed iso
44
-This will be using the `nocloud` datasource. In order to init this way, an iso file needs to be created 
45
-with a meta-data and an user-data file as shown below
46
-<code><pre>
47
-$ { echo instance-id: iid-local01; echo local-hostname: cloudimg; } > meta-data
48
-$ printf "#cloud-config\nhostname: testhost\n" > user-data
49
-$ genisoimage  -output seed.iso -volid cidata -joliet -rock user-data meta-data
50
-</pre>
51
-</code>
52
-
53
-Attach the above generated seed.iso to your machine and reboot for the init to take effect. 
54
-In this case, the hostname is set to `testhost`
55
-
56
-Using a seed disk file
57
-To init using local disk files, do the following
58
-<code><pre>
59
-mkdir /var/lib/cloud/seed/nocloud
60
-cd /var/lib/cloud/seed/nocloud
61
-$ { echo instance-id: iid-local01; echo local-hostname: cloudimg; } > meta-data
62
-$ printf "#cloud-config\nhostname: testhost\n" > user-data
63
-</pre></code>
64
-Reboot the machine and the hostname will be set to `testhost`
65
-
66
-Frequencies
67
-cloud-init modules have pre-determined frequencies. Based on the frequency setting, multiple runs will yield different results. 
68
-For the scripts to run always, remove instances folder before reboot.
69
-<code><pre>
70
-rm -rf /var/lib/cloud/instances
71
-</code></pre>
72
-
73
-Module frequency info
74
-Name  |  Frequency
75
-disable_ec2_metadata  | Always
76
-users_groups  | Instance
77
-write_files  | Instance
78
-update_hostname  | Always
79
-final_message  | Always
80
-resolv_conf  | Instance
81
-growpart  | Always
82
-update_etc_hosts  | Always
83
-power_state_change  | Instance
84
-phone_home  | Instance
85 1
deleted file mode 100644
... ...
@@ -1,19 +0,0 @@
1
-# Photon on Docker
2
-
3
-To create a Docker container image we need a Dockerfile that describes the base image and packages to be installed on the image. A Dockerfile lets you define and then create an image that can then be used to create container instances.
4
-A Photon Dockerfile is located at following location:
5
-
6
-```$HOME/workspace/photon/support/dockerfiles/photon```
7
-
8
-## Build new Photon Images
9
-To build new images you should have built all Photon RPMS using ```make all``` or ```make iso```. Also, the docker service should be running in the background.
10
-
11
-The ```./make-docker-image.sh``` command takes the path of the local repo and the type of image (i.e. minimal, micro or full) you want to create.
12
-
13
-```cd $HOME/workspace/photon/support/dockerfiles/photon```
14
-
15
-```./make-docker-image.sh $HOME/workspace minimal```
16
-
17
-## Running the Photon Container
18
-
19
-```docker run -it photon:minimal```
20 1
new file mode 100644
... ...
@@ -0,0 +1,88 @@
0
+## Folder Layout
1
+```
2
+photon/
3
+├── Makefile
4
+├── README
5
+├── SPECS # RPM SPEC files
6
+├── cloud-init.md
7
+├── gce.md
8
+├── installer # Installer used at runtime
9
+└── support
10
+```
11
+
12
+## How to build the ISO?
13
+
14
+Assuming you checked out the workspace under `$HOME/workspaces/photon`:
15
+```
16
+cd $HOME/workspaces/photon
17
+sudo make iso
18
+```
19
+Deliverable will be created at `$HOME/workspaces/photon/stage/photon.iso`
20
+
21
+## How to use cached toolchain and RPMS?
22
+```
23
+mkdir $HOME/photon-cache
24
+sudo make iso PHOTON_CACHE_PATH=$HOME/photon-cache
25
+```
26
+Directory format of `PHOTON_CACHE_PATH` is as follows.
27
+```
28
+photon-cache/
29
+├──tools-build.tar.gz
30
+├──RPMS/x86_64/*.rpm
31
+└──RPMS/noarch/*.rpm
32
+```
33
+## How to use cached sources?
34
+```
35
+mkdir $HOME/photon-sources
36
+sudo make iso PHOTON_SOURCES_PATH=$HOME/photon-sources
37
+```
38
+Directory format of `PHOTON_SOURCES_PATH` is as follows.
39
+```
40
+photon-sources/
41
+├──src1.tar.gz
42
+├──src2.tar.gz
43
+└──...
44
+```
45
+## How to build the toolchain?
46
+
47
+1. Check toolchain pre-requisites
48
+```
49
+$HOME/workspaces/photon/support/toolchain/version-check.sh
50
+```
51
+2. Make toolchain
52
+```
53
+cd $HOME/workspaces/photon
54
+sudo make toolchain
55
+```
56
+
57
+Pre-requisites:
58
+
59
+ * Build OS: Ubuntu 14.04 (or later), 64-bit
60
+ * Packages: bison, gawk, g++, createrepo, python-aptdaemon, genisoimage, texinfo, python-requests
61
+```
62
+sudo apt-get -y install bison gawk g++ createrepo python-aptdaemon genisoimage texinfo python-requests
63
+```
64
+
65
+### Settings:
66
+
67
+Make sure `/bin/sh` is a symbolic link pointing to `/bin/bash`
68
+
69
+If `/bin/sh` is pointing to `/bin/dash`, execute the following:
70
+```
71
+rm -f /bin/sh
72
+ln -s /bin/bash /bin/sh
73
+```
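+A quick way to verify the link afterwards (a generic check, nothing Photon-specific):
+```
+readlink -f /bin/sh   # should print /bin/bash
+```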
74
+
75
+## Where are the build logs?
76
+```
77
+$HOME/workspaces/photon/stage/LOGS
78
+```
79
+
80
+## Complete build environment using Vagrant
81
+A `Vagrantfile` is available for quickly standing up a development/build environment for Photon. It uses a box called `photon-build-machine`, created from a [Packer](http://packer.io) template available under `support/packer-templates`; see the [README.md](https://github.com/vmware/photon/blob/master/support/packer-templates/README.md) for more information on how to build `photon-build-machine`.
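+For example (a sketch, assuming the `Vagrantfile` sits at the repository root and the `photon-build-machine` box has already been built and added to Vagrant):
+```
+vagrant up     # bring up the build VM
+vagrant ssh    # get a shell in it
+```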
82
+
83
+## Photon Vagrant box
84
+As with the build machine, a Packer template is available under `support/packer-templates` to build a Photon-based Vagrant box running Docker; see the [README.md](https://github.com/vmware/photon/blob/master/support/packer-templates/README.md) for more information on how to build it.
85
+
86
+## Automated build environment and Vagrant boxes
87
+Convenience make targets also exist to build both the `photon-build-machine` and the `photon` Packer templates, as well as to build a fresh ISO using the `photon-build-machine`. See the [README.md](https://github.com/vmware/photon/blob/master/support/packer-templates/README.md) for more details.
0 88
new file mode 100644
... ...
@@ -0,0 +1,89 @@
0
+Overview
1
+=================
2
+`cloud-init` is the de facto multi-distribution package that handles early initialization of a cloud instance.
3
+
4
+In-depth documentation for cloud-init is available here: https://cloudinit.readthedocs.org/en/latest/
5
+
6
+Supported installations
7
+=================
8
+<dl>
9
+<dt><code>Photon Container OS (Minimal)</code></dt>
10
+<dt><code>Photon Full OS (All)</code></dt>
11
+</dl>
12
+
13
+Supported capabilities
14
+=================
15
+`Photon` supports `cloud-init`, starting with the following capabilities (a sample user-data file follows the list):
16
+<dl>
17
+<dt><code>run commands</code></dt>
18
+<dd>execute a list of commands with output to console.</dd>
19
+<dt><code>configure ssh keys</code></dt>
20
+<dd>add entry to ~/.ssh/authorized_keys for the configured user</dd>
21
+<dt><code>install package</code></dt>
22
+<dd>install additional packages on first boot</dd>
23
+<dt><code>configure networking</code></dt>
24
+<dd>update /etc/hosts, hostname etc</dd>
25
+<dt><code>write files</code></dt>
26
+<dd>write arbitrary file(s) to disk</dd>
27
+<dt><code>add yum repo</code></dt>
28
+<dd>add a yum repository to /etc/yum.repos.d</dd>
29
+<dt><code>create groups and users</code></dt>
30
+<dd>add groups and users to the system. set user/group properties</dd>
31
+<dt><code>run yum upgrade</code></dt>
32
+<dd>upgrade all packages</dd>
33
+<dt><code>reboot</code></dt>
34
+<dd>reboot or power off when done with cloud-init</dd>
35
+</dl>
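+As an illustration, a single user-data file can exercise several of these capabilities at once. A minimal sketch (the package name and file contents are placeholders; the file is consumed as described in the next section):
+<code><pre>
+$ cat > user-data << 'EOF'
+#cloud-config
+hostname: testhost
+packages:
+  - vim
+runcmd:
+  - echo "configured by cloud-init" > /etc/motd
+EOF
+</pre></code>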
36
+
37
+Getting Started
38
+=================
39
+Photon's cloud config has the `ec2` datasource turned on by default, so an `ec2` configuration is accepted.
40
+However, for testing, the following methods provide ways to use `cloud-init` with Photon standalone.
41
+
42
+Using a seed iso
43
+----------------
44
+This uses the `nocloud` datasource. To init this way, an ISO file needs to be created
45
+with a meta-data and a user-data file, as shown below:
46
+<code><pre>
47
+$ { echo instance-id: iid-local01; echo local-hostname: cloudimg; } > meta-data
48
+$ printf "#cloud-config\nhostname: testhost\n" > user-data
49
+$ genisoimage  -output seed.iso -volid cidata -joliet -rock user-data meta-data
50
+</pre>
51
+</code>
52
+
53
+Attach the generated seed.iso to your machine and reboot for the init to take effect.
54
+In this case, the hostname is set to `testhost`
55
+
56
+Using a seed disk file
57
+----------------
58
+To init using local disk files, do the following
59
+<code><pre>
60
+$ mkdir -p /var/lib/cloud/seed/nocloud
61
+$ cd /var/lib/cloud/seed/nocloud
62
+$ { echo instance-id: iid-local01; echo local-hostname: cloudimg; } > meta-data
63
+$ printf "#cloud-config\nhostname: testhost\n" > user-data
64
+</pre></code>
65
+Reboot the machine and the hostname will be set to `testhost`
66
+
67
+Frequencies
68
+-----------
69
+cloud-init modules have pre-determined frequencies. Based on the frequency setting, multiple runs will yield different results. 
70
+For the modules to run on every boot, remove the instances folder before rebooting:
71
+<code><pre>
72
+rm -rf /var/lib/cloud/instances
73
+</pre></code>
74
+
75
+Module frequency info
76
+---------------------
77
+Name  |  Frequency
78
+------|-------------
79
+disable_ec2_metadata  | Always
80
+users_groups  | Instance
81
+write_files  | Instance
82
+update_hostname  | Always
83
+final_message  | Always
84
+resolv_conf  | Instance
85
+growpart  | Always
86
+update_etc_hosts  | Always
87
+power_state_change  | Instance
88
+phone_home  | Instance
0 89
new file mode 100644
... ...
@@ -0,0 +1,19 @@
0
+# Photon on Docker
1
+
2
+To create a Docker container image, we need a Dockerfile that describes the base image and the packages to be installed. A Dockerfile lets you define and then build an image that can be used to create container instances.
3
+A Photon Dockerfile is located at the following location:
4
+
5
+```$HOME/workspace/photon/support/dockerfiles/photon```
6
+
7
+## Build new Photon Images
8
+To build new images, you should first have built all Photon RPMs using ```make all``` or ```make iso```. Also, the docker service should be running in the background.
9
+
10
+The ```./make-docker-image.sh``` command takes the path of the local repo and the type of image (minimal, micro, or full) you want to create.
11
+
12
+```cd $HOME/workspace/photon/support/dockerfiles/photon```
13
+
14
+```./make-docker-image.sh $HOME/workspace minimal```
15
+
16
+## Running the Photon Container
17
+
18
+```docker run -it photon:minimal```
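+The result behaves like any other local Docker image. A quick sketch of working with it non-interactively (the container name is arbitrary):
+```
+docker images photon                        # confirm the image is present
+docker run -d --name photon-test photon:minimal sleep 3600
+docker exec -it photon-test /bin/bash       # get a shell inside the running container
+docker rm -f photon-test                    # clean up
+```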
0 19
new file mode 100644
... ...
@@ -0,0 +1,59 @@
0
+# FAQ
1
+
2
+#### Why can't I SSH in as root?
3
+
4
+By default, Photon does not permit root login over SSH. To allow yourself to log in as root over
5
+SSH, set <code>PermitRootLogin yes</code> in /etc/ssh/sshd_config and restart the sshd daemon.
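+A minimal sketch of the change, assuming the systemd unit is named sshd and the directive has no leading whitespace:
+```
+sed -i 's/^#*PermitRootLogin.*/PermitRootLogin yes/' /etc/ssh/sshd_config
+systemctl restart sshd
+```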
6
+
7
+#### Why is netstat not working?
8
+
9
+netstat is deprecated; use ss or ip (part of iproute2) instead.
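+A few roughly equivalent iproute2 invocations (a sketch, not an exhaustive mapping):
+```
+ss -tuln          # listening TCP/UDP sockets (netstat -tuln)
+ip addr show      # interface addresses
+ip route show     # routing table (netstat -r)
+```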
10
+
11
+## How do I install new packages?
12
+#### Why is the yum command not working in a Minimal installation of Photon?
13
+
14
+To install packages from the CD-ROM, mount it using the following command:
15
+
16
+```
17
+mount /dev/cdrom /media/cdrom
18
+```
19
+
20
+Then you can use ```tdnf``` to install new packages:
21
+
22
+```
23
+tdnf install vim
24
+```
25
+
26
+#### How do I build a new RPM package?
27
+
28
+This assumes you have the Ubuntu development environment set up and have pulled the latest code into /workspace.
29
+Let's assume your package name is foo with version 1.0.
30
+
31
+```
32
+cp foo-1.0.tar.gz /workspace/photon/SOURCES
33
+cp foo.spec /workspace/photon/SPECS/foo/
34
+cd /workspace/photon/support/package-builder
35
+sudo python ./build_package.py -i foo
36
+```
37
+
38
+#### I just booted into a freshly installed Photon; why is ```docker ps``` not working?
39
+
40
+Make sure the docker daemon is running; by design, it is not started at boot time.
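+A minimal sketch, assuming the systemd unit is named docker:
+```
+systemctl start docker      # start the daemon now
+systemctl enable docker     # optionally, start it on every boot
+```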
41
+
42
+#### What is the difference between the Micro/Minimal/Full installations of Photon?
43
+Micro is the smallest version of Photon, under 220MB (as of 03/30) to be used as base for customization.
44
+
45
+Minimal is Micro plus Docker and Cloud-init packages.
46
+
47
+Full contains all the packages shipped with the ISO.
48
+
49
+#### What packages are included in Micro/Minimal?
50
+See [package_list.json](installer/package_list.json)
51
+
52
+#### Why is vi/vim not working in a Minimal installation of Photon?
53
+
54
+We have `nano` installed by default for file editing in Minimal. Use `tdnf` to install `vim`.
55
+
56
+#### How do I transfer/share files between Photon and my host machine?
57
+
58
+We are working on supporting some standard options. Currently we recommend using [sshfs](https://wiki.archlinux.org/index.php/sshfs) for file sharing between hosts and Photon.
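+A minimal sketch, assuming sshfs is installed on the host and the Photon VM is reachable at <photon-ip> (a placeholder):
+```
+mkdir -p ~/photon-share
+sshfs root@<photon-ip>:/root ~/photon-share
+# ... work with the files, then unmount:
+fusermount -u ~/photon-share      # on Mac OS X, use: umount ~/photon-share
+```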
0 59
new file mode 100644
... ...
@@ -0,0 +1,146 @@
0
+# Photon on GCE
1
+## Google Compute Engine (GCE) Image background
2
+GCE is a service that lets users run virtual machines on Google's infrastructure. Users can customize the virtual machine as much as they want, and can even install a custom OS image apart from the publicly provided [images](https://cloud.google.com/compute/docs/operating-systems/). For any OS to be usable on GCE, it must meet Google's infrastructure requirements.
3
+The following Google-provided tools are used to make VM instances behave properly.
4
+
5
+ *   __[Google startup scripts](https://cloud.google.com/compute/docs/startupscript)__: Users can provide a startup script to configure their instances at startup.
6
+ *   __[Google Daemon](https://cloud.google.com/compute/docs/metadata)__: Google Daemon creates new accounts and configures ssh to accept public keys using the metadata server.
7
+ *   __[Google Cloud SDK](https://cloud.google.com/sdk/)__: Command line tools to manage your images, instances and other objects on GCE.
8
+
9
+The following is the list (extracted from [this link](https://cloud.google.com/compute/docs/tutorials/building-images)) of items that must be done to make Photon work on GCE.
10
+
11
+ *   Install Google Compute Engine Image Packages
12
+ *   Install Google Cloud SDK
13
+ *   Change GPT partition table to MBR 
14
+ *   Update Grub config for new MBR and serial console output
15
+ *   Update ssh configuration
16
+ *   Delete ssh host keys
17
+ *   Set the time zone to UTC
18
+ *   Use the Google NTP server
19
+ *   Delete the hostname file.
20
+ *   Add Google hosts to /etc/hosts
21
+ *   Set MTU to 1460. SSH will not work without it.
22
+ *   Create /etc/ssh/sshd_not_to_be_run with just the contents “GOOGLE\n”.
23
+
24
+## Creating Photon image for GCE
25
+##### 1. Prepare Photon Disk
26
+###### Install Photon Minimal on Fusion/Workstation and install some required packages.
27
+      mount /dev/cdrom /media/cdrom
28
+      tdnf install yum
29
+      tdnf install python2-libs
30
+      yum install ntp sudo wget tar which gptfdisk sed findutils grep gzip --nogpgcheck -y
31
+
32
+###### The Photon installer creates a GPT partition table by default, but GCE only accepts an MBR (msdos) partition table. We need to convert GPT to MBR and update GRUB. The following commands do that.
33
+  
34
+      # Change partition table to MBR from GPT
35
+      sgdisk -m 1:2 /dev/sda
36
+      grub-install /dev/sda
37
+      
38
+      # Enable serial console on grub for GCE.
39
+      cat << EOF >> /etc/default/grub
40
+      GRUB_CMDLINE_LINUX="console=ttyS0,38400n8"
41
+      GRUB_TERMINAL=serial
42
+      GRUB_SERIAL_COMMAND="serial --speed=38400 --unit=0 --word=8 --parity=no --stop=1"
43
+      EOF
44
+      
45
+      # Create new grub.cfg based on the settings in /etc/default/grub
46
+      grub-mkconfig -o /boot/grub/grub.cfg
47
+      
48
+##### 2. Install Google Cloud SDK and GCE Packages
49
+      yum install google-daemon google-startup-scripts
50
+      cp /usr/lib/systemd/system/google* /lib/systemd/system/
51
+      cd /lib/systemd/system/multi-user.target.wants/
52
+      
53
+      # Create links in multi-user.target to auto-start these scripts and services.
54
+      for i in ../google*; do  ln -s $i `basename $i`; done
55
+      
56
+      cd /tmp/; wget https://dl.google.com/dl/cloudsdk/release/google-cloud-sdk.tar.gz --no-check-certificate
57
+      tar -xf google-cloud-sdk.tar.gz
58
+      cd google-cloud-sdk
59
+      ./install.sh
60
+##### 3. Update /etc/hosts file with GCE values
61
+      echo "169.254.169.254 metadata.google.internal metadata" >> /etc/hosts
62
+##### 4. Remove all servers from ntp.conf and add Google's ntp server.
63
+      sed -i -e "/server/d" /etc/ntp.conf
64
+      cat /etc/ntp.conf
65
+      echo "server 169.254.169.254" >> /etc/ntp.conf
66
+      # Create ntpd.service to auto-start the ntp server.
67
+      cat << EOF >> /lib/systemd/system/ntpd.service
68
+      [Unit]
69
+      Description=Network Time Service
70
+      After=network.target nss-lookup.target
71
+
72
+      [Service]
73
+      Type=forking
74
+      PrivateTmp=true
75
+      ExecStart=/usr/sbin/ntpd -g -u ntp:ntp
76
+      Restart=always
77
+      
78
+      [Install]
79
+      WantedBy=multi-user.target
80
+      EOF
81
+      
82
+      # Add link in multi-user.target.wants to auto start this service.
83
+      cd /lib/systemd/system/multi-user.target.wants/
84
+      ln -s ../ntpd.service ntpd.service
85
+      
86
+##### 5. Set UTC timezone
87
+      ln -sf /usr/share/zoneinfo/UTC /etc/localtime
88
+
89
+##### 6. Update /etc/resolv.conf
90
+      echo "nameserver 8.8.8.8" >> /etc/resolv.conf
91
+
92
+##### 7. Remove ssh host keys and add script to regenerate them at boot time.
93
+      rm /etc/ssh/ssh_host_*
94
+      # Depending on the installation, you may need to purge the following keys
95
+      rm /etc/ssh/ssh_host_rsa_key*
96
+      rm /etc/ssh/ssh_host_dsa_key*
97
+      rm /etc/ssh/ssh_host_ecdsa_key*
98
+
99
+      sed -i -e "/exit 0/d" /etc/rc.local
100
+      echo "[ -f /etc/ssh/ssh_host_key ] && echo 'Keys found.' || ssh-keygen -A" >> /etc/rc.local
101
+      echo "exit 0" >> /etc/rc.local
102
+      printf "GOOGLE\n" > /etc/ssh/sshd_not_to_be_run
103
+      
104
+      # Edit sshd_config and ssh_config as per instructions on [this link](https://cloud.google.com/compute/docs/tutorials/building-images ).
105
+      
106
+##### 8. Change MTU to 1460 for network interface.
107
+      # Create a systemd startup service that changes the MTU and exits.
108
+      cat << EOF >> /lib/systemd/system/eth0.service
109
+      [Unit]
110
+      Description=Network interface initialization
111
+      After=local-fs.target network-online.target network.target
112
+      Wants=local-fs.target network-online.target network.target
113
+
114
+      [Service]
115
+      ExecStart=/bin/ifconfig eth0 mtu 1460 up
116
+      Type=oneshot
117
+
118
+      [Install]
119
+      WantedBy=multi-user.target
120
+      EOF
121
+      # Make this service auto-start at boot.
122
+      cd /lib/systemd/system/multi-user.target.wants/
123
+      ln -s ../eth0.service eth0.service
124
+
125
+##### 9. Pack and Upload to GCE.
126
+###### Shut down the Photon VM and copy its disk to the /tmp folder.
127
+      # You will need to install Google Cloud SDK on host machine to upload the image and play with GCE.
128
+      cp Virtual\ Machines.localized/photon.vmwarevm/Virtual\ Disk.vmdk /tmp/disk.vmdk
129
+      cd /tmp
130
+      # GCE needs disk to be named as disk.raw with raw format.
131
+      qemu-img convert -f vmdk -O raw disk.vmdk disk.raw
132
+      
133
+      # ONLY GNU tar will create an acceptable tar.gz file for GCE. The Mac's default tar is BSD tar, which will not work.
134
+      # On Mac OS X, ensure that you have gtar (GNU tar) installed. Example: gtar -Szcf photon.tar.gz disk.raw
135
+
136
+      gtar -Szcf photon.tar.gz disk.raw 
137
+      
138
+      # Upload
139
+      gsutil cp photon.tar.gz gs://photon-bucket
140
+      
141
+      # Create image
142
+      gcloud compute --project "<project name>" images create "photon-beta-vYYYYMMDD" --description "Photon Beta" --source-uri https://storage.googleapis.com/photon-bucket/photon032315.tar.gz
143
+      
144
+      # Create instance on GCE of photon image
145
+      gcloud compute --project "photon" instances create "photon" --zone "us-central1-f" --machine-type "n1-standard-1" --network "default" --maintenance-policy "MIGRATE" --scopes "https://www.googleapis.com/auth/devstorage.read_only" "https://www.googleapis.com/auth/logging.write" --image "https://www.googleapis.com/compute/v1/projects/photon/global/images/photon" --boot-disk-type "pd-standard" --boot-disk-device-name "photon"
0 146
new file mode 100644
... ...
@@ -0,0 +1,77 @@
0
+Running Rocket containers on Photon
1
+===================================
2
+
3
+Rocket is a new container runtime, created by [CoreOS](http://coreos.com) and designed for composability, security, and speed. 
4
+
5
+rkt (pronounced _"rock-it"_) is a CLI for running app containers, and an implementation of the [App Container Spec](https://github.com/coreos/rkt/blob/master/Documentation/app-container.md).
6
+
7
+rkt is available as an optional package in Photon; to install it:
8
+
9
+```
10
+mount /dev/cdrom /media/cdrom
11
+
12
+tdnf install rocket
13
+```
14
+
15
+### Running an App Container Image (ACI)
16
+
17
+rkt uses content addressable storage (CAS) for storing an ACI on disk. In this example, the image is downloaded and added to the CAS.
18
+
19
+Since rkt verifies signatures by default, you will need to first [trust](https://github.com/coreos/rkt/blob/master/Documentation/signing-and-verification-guide.md#establishing-trust) the [CoreOS public key](https://coreos.com/dist/pubkeys/aci-pubkeys.gpg) used to sign the image:
20
+
21
+```
22
+$ sudo rkt trust --prefix coreos.com/etcd
23
+Prefix: "coreos.com/etcd"
24
+Key: "https://coreos.com/dist/pubkeys/aci-pubkeys.gpg"
25
+GPG key fingerprint is: 8B86 DE38 890D DB72 9186  7B02 5210 BD88 8818 2190
26
+        CoreOS ACI Builder <release@coreos.com>
27
+Are you sure you want to trust this key (yes/no)? yes
28
+Trusting "https://coreos.com/dist/pubkeys/aci-pubkeys.gpg" for prefix "coreos.com/etcd".
29
+Added key for prefix "coreos.com/etcd" at "/etc/rkt/trustedkeys/prefix.d/coreos.com/etcd/8b86de38890ddb7291867b025210bd8888182190"
30
+```
31
+
32
+Now that we've trusted the CoreOS public key, we can bring up a simple etcd instance using the ACI format:
33
+
34
+```
35
+$ privateIp=$(ip -4 -o addr show eth0 | cut -d' ' -f7 | cut -d'/' -f1)
36
+$ sudo rkt run coreos.com/etcd:v2.0.4 -- -name vmware-cna \
37
+> -listen-client-urls http://0.0.0.0:2379 \
38
+> -advertise-client-urls http://${privateIp}:2379 \
39
+> -listen-peer-urls http://0.0.0.0:2380 \
40
+> -initial-advertise-peer-urls http://${privateIp}:2380 \
41
+> -initial-cluster vmware-cna=http://${privateIp}:2380 \
42
+> -initial-cluster-state new
43
+
44
+rkt: searching for app image coreos.com/etcd:v2.0.4
45
+rkt: fetching image from https://github.com/coreos/etcd/releases/download/v2.0.4/etcd-v2.0.4-linux-amd64.aci
46
+Downloading signature from https://github.com/coreos/etcd/releases/download/v2.0.4/etcd-v2.0.4-linux-amd64.aci.asc
47
+Downloading ACI: [========================================     ] 3.38 MB/3.76 MB
48
+rkt: signature verified:
49
+  CoreOS ACI Builder <release@coreos.com>
50
+Timezone UTC does not exist in container, not updating container timezone.
51
+2015/04/02 13:18:39 no data-dir provided, using default data-dir ./vmware-cna.etcd
52
+2015/04/02 13:18:39 etcd: listening for peers on http://0.0.0.0:2380
53
+2015/04/02 13:18:39 etcd: listening for client requests on http://0.0.0.0:2379
54
+2015/04/02 13:18:39 etcdserver: name = vmware-cna
55
+2015/04/02 13:18:39 etcdserver: data dir = vmware-cna.etcd
56
+2015/04/02 13:18:39 etcdserver: member dir = vmware-cna.etcd/member
57
+2015/04/02 13:18:39 etcdserver: heartbeat = 100ms
58
+2015/04/02 13:18:39 etcdserver: election = 1000ms
59
+2015/04/02 13:18:39 etcdserver: snapshot count = 10000
60
+2015/04/02 13:18:39 etcdserver: advertise client URLs = http://192.168.35.246:2379
61
+2015/04/02 13:18:39 etcdserver: initial advertise peer URLs = http://192.168.35.246:2380
62
+2015/04/02 13:18:39 etcdserver: initial cluster = vmware-cna=http://192.168.35.246:2380
63
+2015/04/02 13:18:39 etcdserver: start member 8f79fa9a50a1689 in cluster 75c533bd1f49730b
64
+2015/04/02 13:18:39 raft: 8f79fa9a50a1689 became follower at term 0
65
+2015/04/02 13:18:39 raft: newRaft 8f79fa9a50a1689 [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
66
+2015/04/02 13:18:39 raft: 8f79fa9a50a1689 became follower at term 1
67
+2015/04/02 13:18:39 etcdserver: added local member 8f79fa9a50a1689 [http://192.168.35.246:2380] to cluster 75c533bd1f49730b
68
+2015/04/02 13:18:41 raft: 8f79fa9a50a1689 is starting a new election at term 1
69
+2015/04/02 13:18:41 raft: 8f79fa9a50a1689 became candidate at term 2
70
+2015/04/02 13:18:41 raft: 8f79fa9a50a1689 received vote from 8f79fa9a50a1689 at term 2
71
+2015/04/02 13:18:41 raft: 8f79fa9a50a1689 became leader at term 2
72
+2015/04/02 13:18:41 raft.node: 8f79fa9a50a1689 elected leader 8f79fa9a50a1689 at term 2
73
+2015/04/02 13:18:41 etcdserver: published {Name:vmware-cna ClientURLs:[http://192.168.35.246:2379]} to cluster 75c533bd1f49730b
74
+```
75
+
76
+At any time you can press ^] three times to kill the container.
0 77
\ No newline at end of file
1 78
new file mode 100644
... ...
@@ -0,0 +1,28 @@
0
+# Default package manager in Photon - tdnf (tyum)
1
+
2
+## Introduction
3
+tdnf (tyum) is tiny dnf (tiny yum): it implements dnf commands in C without Python dependencies.
4
+dnf is the upcoming major version of yum. tdnf (tyum) is included in Photon Micro, Minimal, and Full.
5
+tdnf (tyum) reads yum repositories and works just like yum. If you need yum itself, it's as easy as ```tdnf install yum```.
6
+
7
+## How to configure a repository
8
+Photon comes pre-configured with the ```photon-iso``` repository, which is defined in ```/etc/yum.repos.d```.
9
+If you get an access error when working with this repository, it is usually because you don't have the
10
+Photon ISO mounted. If you have the ISO, mount it and run makecache to read the metadata:
11
+
12
+```
13
+mount /dev/cdrom /media/cdrom
14
+tdnf makecache
15
+```
16
+
17
+## How to install a package?
18
+```tdnf install pkgname```
19
+
20
+## How to remove a package
21
+```tdnf erase pkgname```
22
+
23
+## How to list enabled repositories
24
+```tdnf repolist```
25
+
26
+## Other commands
27
+tdnf implements all dnf commands as listed here: http://dnf.readthedocs.org/en/latest/
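+For example, a few other dnf-style commands (a sketch; depending on the tdnf build, not every dnf command may be available yet):
+```
+tdnf list installed     # list installed packages
+tdnf info pkgname       # show details for a package
+tdnf update             # update all installed packages
+```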
0 28
deleted file mode 100644
... ...
@@ -1,59 +0,0 @@
1
-#FAQ
2
-
3
-#### Why can't I SSH in as root?
4
-
5
-By default Photon does not permit root login to ssh. To make yourself login as root using
6
-SSH set <code>PermitRootLogin yes</code> in /etc/ssh/sshd_config, and restart the sshd deamon.
7
-
8
-#### Why is netstat not working?
9
-
10
-netstat is deprecated, ss or ip (part of iproute2) should be used instead.
11
-
12
-## How do I install new packages?
13
-#### Why is the yum command not working in a Minimal installation of Photon?
14
-
15
-To install packages from cdrom, mount cdrom using following command
16
-
17
-```
18
-mount /dev/cdrom /media/cdrom
19
-```
20
-
21
-Then you can use ```tdnf``` to install new pacakges
22
-
23
-```
24
-tdnf install vim
25
-```
26
-
27
-#### How do I build a new RPM package?
28
-
29
-Assuming you have the Ubuntu development environment setup and got the latest code pull into /workspace.
30
-Lets assume your package name is foo with version 1.0.
31
-
32
-```
33
-cp foo-1.0.tar.gz /workspace/photon/SOURCES
34
-cp foo.spec /workspace/photon/SPECS/foo/
35
-cd /workspace/photon/support/package-builder
36
-sudo python ./build_package.py -i foo
37
-```
38
-
39
-#### I just booted into a freshly installed Photon, why is ```docker ps``` not working?
40
-
41
-Make sure the docker daemon is running, which by design is not started at boot time.
42
-
43
-#### What is the difference between the Micro/Minimal/Full installations of Photon?
44
-Micro is the smallest version of Photon, under 220MB (as of 03/30) to be used as base for customization.
45
-
46
-Minimal is Micro plus Docker and Cloud-init packages.
47
-
48
-Full contains all the packages shipped with ISO.
49
-
50
-#### What packages are included in Micro/Minimal?
51
-See [package_list.json](installer/package_list.json)
52
-
53
-#### Why is vi/vim is not working in a Minimal installation of Photon?
54
-
55
-We have `nano` installed by default for file editing in Minimal. Use `tdnf` to install `vim`.
56
-
57
-#### How do I transfer/share files between Photon and my host machine?
58
-
59
-We are working on supporting some standard options. Currently we recommend using [sshfs](https://wiki.archlinux.org/index.php/sshfs) for file sharing between hosts and Photon.
60 1
deleted file mode 100644
... ...
@@ -1,146 +0,0 @@
1
-#Photon on GCE
2
-## Google Compute Engine (GCE) Image background
3
-GCE is a service that lets user run virtual machines on Google's infrastructure. User can customize the virtual machine as much as they want, even can install their custom OS imsage apart from the publicly provided [images](https://cloud.google.com/compute/docs/operating-systems/). For any OS to be useable on GCE, it must match the Google's infrastructure needs. 
4
-Following are Google provided tools used for VM instances to behave properly.
5
-
6
- *   __[Google startup scripts](https://cloud.google.com/compute/docs/startupscript)__: User can provide some startup script to configure their instances at startup.
7
- *   __[Google Daemon](https://cloud.google.com/compute/docs/metadata)__: Google Daemon creates new accounts and configures ssh to accept public keys using the metadata server.
8
- *   __[Google Cloud SDK](https://cloud.google.com/sdk/)__: Command line tools to manage your images, instances and other objects on GCE.
9
-
10
-Following is the list (extracted from [this link](https://cloud.google.com/compute/docs/tutorials/building-images )) of items must be done to make Photon work on GCE.
11
-
12
- *   Install Google Compute Engine Image Packages
13
- *   Install Google Cloud SDK
14
- *   Change GPT partition table to MBR 
15
- *   Update Grub config for new MBR and serial console output
16
- *   Update ssh configuration
17
- *   Delete ssh host keys
18
- *   Set the time zone to UTC
19
- *   Use the Google NTP server
20
- *   Delete the hostname file.
21
- *   Add Google hosts /etc/hosts
22
- *   Set MTU to 1460. SSH will not work without it.
23
- *   Create /etc/ssh/sshd_not_to_be_run with just the contents “GOOGLE\n”.
24
-
25
-## Creating Photon image for GCE
26
-##### 1. Prepare Photon Disk
27
-###### Install Photon Minimal on Fusion/Workstation and install some required packages.
28
-      mount /dev/cdrom /media/cdrom
29
-      tdnf install yum
30
-      tdnf install python2-libs
31
-      yum install ntp sudo wget tar which gptfdisk sed findutils grep gzip --nogpgcheck -y
32
-
33
-###### Photon installer installs GPT partition table by default but GCE only accepts MBR(msdos) type partition table. We need to convert GPT to MBR and update the grub. Following are commands to do that.
34
-  
35
-      # Change partition table to MBR from GPT
36
-      sgdisk -m 1:2 /dev/sda
37
-      grub-install /dev/sda
38
-      
39
-      # Enable serial console on grub for GCE.
40
-      cat << EOF >> /etc/default/grub
41
-      GRUB_CMDLINE_LINUX="console=ttyS0,38400n8"
42
-      GRUB_TERMINAL=serial
43
-      GRUB_SERIAL_COMMAND="serial --speed=38400 --unit=0 --word=8 --parity=no --stop=1"
44
-      EOF
45
-      
46
-      # Create new grub.cfg based on the settings in /etc/default/grub
47
-      grub-mkconfig -o /boot/grub/grub.cfg
48
-      
49
-##### 2. Install Google Cloud SDK and GCE Packages
50
-      yum install google-daemon google-startup-scripts
51
-      cp /usr/lib/systemd/system/google* /lib/systemd/system/
52
-      cd /lib/systemd/system/multi-user.target.wants/
53
-      
54
-      # Create links in multi-user.target to auto-start these scripts and services.
55
-      for i in ../google*; do  ln -s $i `basename $i`; done
56
-      
57
-      cd /tmp/; wget https://dl.google.com/dl/cloudsdk/release/google-cloud-sdk.tar.gz --no-check-certificate
58
-      tar -xf google-cloud-sdk.tar.gz
59
-      cd google-cloud-sdk
60
-      ./install.sh
61
-##### 3. Update /etc/hosts file with GCE values
62
-      echo "169.254.169.254 metadata.google.internal metadata" >> /etc/hosts
63
-##### 4. Remove all servers from ntp.conf and add Google's ntp server.
64
-      sed -i -e "/server/d" /etc/ntp.conf
65
-      cat /etc/ntp.conf
66
-      echo "server 169.254.169.254" >> /etc/ntp.conf
67
-      # Create ntpd.service to auto starting ntp server.
68
-      cat << EOF >> /lib/systemd/system/ntpd.service
69
-      [Unit]
70
-      Description=Network Time Service
71
-      After=network.target nss-lookup.target
72
-
73
-      [Service]
74
-      Type=forking
75
-      PrivateTmp=true
76
-      ExecStart=/usr/sbin/ntpd -g -u ntp:ntp
77
-      Restart=always
78
-      
79
-      [Install]
80
-      WantedBy=multi-user.target
81
-      EOF
82
-      
83
-      # Add link in multi-user.target.wants to auto start this service.
84
-      cd /lib/systemd/system/multi-user.target.wants/
85
-      ln -s ../ntpd.service ntpd.service
86
-      
87
-##### 5. Set UTC timezone
88
-      ln -sf /usr/share/zoneinfo/UTC /etc/localtime
89
-
90
-##### 6. Update /etc/resolv.conf
91
-      echo "nameserver 8.8.8.8" >> /etc/resolv.conf
92
-
93
-##### 7. Remove ssh host keys and add script to regenerate them at boot time.
94
-      rm /etc/ssh/ssh_host_*
95
-      # Depending on the installation, you may need to purge the following keys
96
-      rm /etc/ssh/ssh_host_rsa_key*
97
-      rm /etc/ssh/ssh_host_dsa_key*
98
-      rm /etc/ssh/ssh_host_ecdsa_key*
99
-
100
-      sed -i -e "/exit 0/d" /etc/rc.local
101
-      echo "[ -f /etc/ssh/ssh_host_key ] && echo 'Keys found.' || ssh-keygen -A" >> /etc/rc.local
102
-      echo "exit 0" >> /etc/rc.local
103
-      printf "GOOGLE\n" > /etc/ssh/sshd_not_to_be_run
104
-      
105
-      # Edit sshd_config and ssh_config as per instructions on [this link](https://cloud.google.com/compute/docs/tutorials/building-images ).
106
-      
107
-##### 8. Change MTU to 1460 for network interface.
108
-      # Create a startup service in systemd that will change MTU and exits
109
-      cat << EOF >> /lib/systemd/system/eth0.service
110
-      [Unit]
111
-      Description=Network interface initialization
112
-      After=local-fs.target network-online.target network.target
113
-      Wants=local-fs.target network-online.target network.target
114
-
115
-      [Service]
116
-      ExecStart=/bin/ifconfig eth0 mtu 1460 up
117
-      Type=oneshot
118
-
119
-      [Install]
120
-      WantedBy=multi-user.target
121
-      EOF
122
-      # Make this service auto-start at boot.
123
-      cd /lib/systemd/system/multi-user.target.wants/
124
-      ln -s ../eth0.service eth0.service
125
-
126
-##### 9. Pack and Upload to GCE.
127
-###### Shutdown the Photon VM and copy its disk to tmp folder.       
128
-      # You will need to install Google Cloud SDK on host machine to upload the image and play with GCE.
129
-      cp Virtual\ Machines.localized/photon.vmwarevm/Virtual\ Disk.vmdk /tmp/disk.vmdk
130
-      cd /tmp
131
-      # GCE needs disk to be named as disk.raw with raw format.
132
-      qemu-img convert -f vmdk -O raw disk.vmdk disk.raw
133
-      
134
-      # ONLY GNU tar will work to create acceptable tar.gz file for GCE. MAC's default tar is BSDTar which will not work. 
135
-      # On Mac OS X ensure that you have gtar "GNU Tar" installed. exmaple: gtar -Szcf photon.tar.gz disk.raw 
136
-
137
-      gtar -Szcf photon.tar.gz disk.raw 
138
-      
139
-      # Upload
140
-      gsutil cp photon.tar.gz gs://photon-bucket
141
-      
142
-      # Create image
143
-      gcloud compute --project "<project name>" images create "photon-beta-vYYYYMMDD" --description "Photon Beta" --source-uri https://storage.googleapis.com/photon-bucket/photon032315.tar.gz
144
-      
145
-      # Create instance on GCE of photon image
146
-      gcloud compute --project "photon" instances create "photon" --zone "us-central1-f" --machine-type "n1-standard-1" --network "default" --maintenance-policy "MIGRATE" --scopes "https://www.googleapis.com/auth/devstorage.read_only" "https://www.googleapis.com/auth/logging.write" --image "https://www.googleapis.com/compute/v1/projects/photon/global/images/photon" --boot-disk-type "pd-standard" --boot-disk-device-name "photon"
147 1
deleted file mode 100644
... ...
@@ -1,77 +0,0 @@
1
-Running Rocket containers on Photon
2
-===================================
3
-
4
-Rocket is a new container runtime, created by [CoreOS](http://coreos.com) and designed for composability, security, and speed. 
5
-
6
-rkt (pronounced _"rock-it"_) is a CLI for running app containers, and an implementation of the [App Container Spec](https://github.com/coreos/rkt/blob/master/Documentation/app-container.md).
7
-
8
-rkt is available as an optional package in Photon, to install it:
9
-
10
-```
11
-mount /dev/cdrom /media/cdrom
12
-
13
-tdnf install rocket
14
-```
15
-
16
-### Running an App Container Image (ACI)
17
-
18
-rkt uses content addressable storage (CAS) for storing an ACI on disk. In this example, the image is downloaded and added to the CAS.
19
-
20
-Since rkt verifies signatures by default, you will need to first [trust](https://github.com/coreos/rkt/blob/master/Documentation/signing-and-verification-guide.md#establishing-trust) the [CoreOS public key](https://coreos.com/dist/pubkeys/aci-pubkeys.gpg) used to sign the image:
21
-
22
-```
23
-$ sudo rkt trust --prefix coreos.com/etcd
24
-Prefix: "coreos.com/etcd"
25
-Key: "https://coreos.com/dist/pubkeys/aci-pubkeys.gpg"
26
-GPG key fingerprint is: 8B86 DE38 890D DB72 9186  7B02 5210 BD88 8818 2190
27
-        CoreOS ACI Builder <release@coreos.com>
28
-Are you sure you want to trust this key (yes/no)? yes
29
-Trusting "https://coreos.com/dist/pubkeys/aci-pubkeys.gpg" for prefix "coreos.com/etcd".
30
-Added key for prefix "coreos.com/etcd" at "/etc/rkt/trustedkeys/prefix.d/coreos.com/etcd/8b86de38890ddb7291867b025210bd8888182190"
31
-```
32
-
33
-Now that we've trusted the CoreOS public key, we can bring up a simple etcd instance using the ACI format:
34
-
35
-```
36
-$ privateIp=$(ip -4 -o addr show eth0 | cut -d' ' -f7 | cut -d'/' -f1)
37
-$ sudo rkt run coreos.com/etcd:v2.0.4 -- -name vmware-cna \
38
-> -listen-client-urls http://0.0.0.0:2379 \
39
-> -advertise-client-urls http://${privateIp}:2379 \
40
-> -listen-peer-urls http://0.0.0.0:2380 \
41
-> -initial-advertise-peer-urls http://${privateIp}:2380 \
42
-> -initial-cluster vmware-cna=http://${privateIp}:2380 \
43
-> -initial-cluster-state new
44
-
45
-rkt: searching for app image coreos.com/etcd:v2.0.4
46
-rkt: fetching image from https://github.com/coreos/etcd/releases/download/v2.0.4/etcd-v2.0.4-linux-amd64.aci
47
-Downloading signature from https://github.com/coreos/etcd/releases/download/v2.0.4/etcd-v2.0.4-linux-amd64.aci.asc
48
-Downloading ACI: [========================================     ] 3.38 MB/3.76 MB
49
-rkt: signature verified:
50
-  CoreOS ACI Builder <release@coreos.com>
51
-Timezone UTC does not exist in container, not updating container timezone.
52
-2015/04/02 13:18:39 no data-dir provided, using default data-dir ./vmware-cna.etcd
53
-2015/04/02 13:18:39 etcd: listening for peers on http://0.0.0.0:2380
54
-2015/04/02 13:18:39 etcd: listening for client requests on http://0.0.0.0:2379
55
-2015/04/02 13:18:39 etcdserver: name = vmware-cna
56
-2015/04/02 13:18:39 etcdserver: data dir = vmware-cna.etcd
57
-2015/04/02 13:18:39 etcdserver: member dir = vmware-cna.etcd/member
58
-2015/04/02 13:18:39 etcdserver: heartbeat = 100ms
59
-2015/04/02 13:18:39 etcdserver: election = 1000ms
60
-2015/04/02 13:18:39 etcdserver: snapshot count = 10000
61
-2015/04/02 13:18:39 etcdserver: advertise client URLs = http://192.168.35.246:2379
62
-2015/04/02 13:18:39 etcdserver: initial advertise peer URLs = http://192.168.35.246:2380
63
-2015/04/02 13:18:39 etcdserver: initial cluster = vmware-cna=http://192.168.35.246:2380
64
-2015/04/02 13:18:39 etcdserver: start member 8f79fa9a50a1689 in cluster 75c533bd1f49730b
65
-2015/04/02 13:18:39 raft: 8f79fa9a50a1689 became follower at term 0
66
-2015/04/02 13:18:39 raft: newRaft 8f79fa9a50a1689 [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
67
-2015/04/02 13:18:39 raft: 8f79fa9a50a1689 became follower at term 1
68
-2015/04/02 13:18:39 etcdserver: added local member 8f79fa9a50a1689 [http://192.168.35.246:2380] to cluster 75c533bd1f49730b
69
-2015/04/02 13:18:41 raft: 8f79fa9a50a1689 is starting a new election at term 1
70
-2015/04/02 13:18:41 raft: 8f79fa9a50a1689 became candidate at term 2
71
-2015/04/02 13:18:41 raft: 8f79fa9a50a1689 received vote from 8f79fa9a50a1689 at term 2
72
-2015/04/02 13:18:41 raft: 8f79fa9a50a1689 became leader at term 2
73
-2015/04/02 13:18:41 raft.node: 8f79fa9a50a1689 elected leader 8f79fa9a50a1689 at term 2
74
-2015/04/02 13:18:41 etcdserver: published {Name:vmware-cna ClientURLs:[http://192.168.35.246:2379]} to cluster 75c533bd1f49730b
75
-```
76
-
77
-At any time you can press ^] three times to kill container.
78 1
\ No newline at end of file
79 2
deleted file mode 100644
... ...
@@ -1,28 +0,0 @@
1
-#default package manager in photon - tdnf(tyum)
2
-
3
-##Introduction
4
-tdnf(tyum) is tiny dnf (tiny yum) implementing dnf commands in C without python dependencies. 
5
-dnf is the next upcoming major version of yum. tyum(tdnf) is included in photon micro, photon minimal and photon full. 
6
-tyum(tdnf) will read yum repositories and work just like yum. If you need yum, its just as easy as ```tdnf install yum```
7
-
8
-##How to configure a repository
9
-photon comes pre-configured with ```photon-iso``` repository which is in ```\etc\yum.repos.d```
10
-If you get an access error message when working with this repository, it is usually because you dont have the
11
-photon iso mounted. If you have the photon iso, you can mount and makecache to read metadata.
12
-
13
-```
14
-mount /dev/cdrom /media/cdrom
15
-tdnf makecache
16
-```
17
-
18
-##How to install a package?
19
-```tdnf install pkgname```
20
-
21
-##How to remove a package
22
-```tdnf erase pkgname```
23
-
24
-##How to list enabled repositories
25
-```tdnf repolist```
26
-
27
-##Other commands
28
-tdnf implements all dnf commands as listed here: http://dnf.readthedocs.org/en/latest/