Merge branch 'master' of https://github.com/vmware/photon

archive authored on 2018/10/04 23:30:26
***
An official Vagrant box is available on HashiCorp Atlas. To get started:

	vagrant init vmware/photon

Add the following lines to the Vagrantfile:

	config.vm.provider "virtualbox" do |v|
	  v.customize ['modifyvm', :id, '--acpi', 'off']
	end

Install the vagrant-guests-photon plugin, which provides VMware Photon OS guest support.
It is available at https://github.com/vmware/vagrant-guests-photon.
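If Vagrant is already on your workstation, the plugin can typically be added with Vagrant's plugin manager (the plugin name below is assumed to match its repository name):

	vagrant plugin install vagrant-guests-photon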
12

VirtualBox 4.3 or later is required. If you run into issues, check your VirtualBox version first.
***
Download the Photon OS version that's right for you. Click one of the links below.

**Selecting a Download Format**
----------------

Photon OS is available in the following pre-packaged binary formats.
#### Download Formats ####
| Format | Description |
| --- | --- |
| ISO Image | Contains everything needed to install either the minimal or full installation of Photon OS. The bootable ISO has a manual installer or can be used with PXE/kickstart environments for automated installations. |
| OVA | Pre-installed minimal environment, customized for VMware hypervisor environments. These customizations include a highly sanitized and optimized kernel to give improved boot and runtime performance for containers and Linux applications. Since an OVA is a complete virtual machine definition, we've made available a Photon OS OVA that has virtual hardware version 11; this allows for compatibility with several versions of VMware platforms or for the latest and greatest virtual hardware enhancements. |
| Amazon AMI | Pre-packaged and tested version of Photon OS made ready to deploy in your Amazon EC2 cloud environment. Previously, we'd published documentation on how to create an Amazon-compatible instance, but now we've done the work for you. |
| Google GCE Image | Pre-packaged and tested Google GCE image that is ready to deploy in your Google Compute Engine environment, with all modifications and package requirements for running Photon OS in GCE. |
| Azure VHD | Pre-packaged and tested Azure VHD image that is ready to deploy in your Microsoft Azure cloud, with all modifications and package requirements for running Photon OS in Azure. |

**Downloading Photon OS 2.0 GA**
------------------------------

Photon OS 2.0 GA is available now! Choose the download that's right for you and click one of the links below. Refer to the associated sha1 and md5 checksums.
#### Photon OS 2.0 GA Binaries ####
| Download | Size | sha1 checksum | md5 checksum |
| --- | --- | --- | --- |
| [Full ISO](http://dl.bintray.com/vmware/photon/2.0/GA/iso/photon-2.0-304b817.iso) | 2.3GB | 68ec892a66e659b18917a12738176bd510cde829 | 6ce66c763589cf1ee49f0144ff7182dc |
| [OVA with virtual hardware v11](http://dl.bintray.com/vmware/photon/2.0/GA/ova/photon-custom-hw11-2.0-304b817.ova) | 108MB | b8c183785bbf582bcd1be7cde7c22e5758fb3f16 | 1ce23d43a778fdeb5283ecd18320d9b5 |
| [OVA with virtual hardware v13 (ESX 6.5 and above)](http://dl.bintray.com/vmware/photon/2.0/GA/ova/photon-custom-hw13-2.0-304b817.ova) | 106MB | 44f7b808ca48ea1af819d222561a14482a15e493 | ec490b65615284a0862e9ee4a7a0ac97 |
| [OVA with virtual hardware v11 (Workstation and Fusion)](http://dl.bintray.com/vmware/photon/2.0/GA/ova/photon-custom-lsilogic-hw11-2.0-304b817.ova) | 108MB | 6ed700cbbc5e54ba621e975f28284b27adb71f68 | 586c059bf3373984c761e254bd491f59 |
| [Amazon AMI](http://dl.bintray.com/vmware/photon/2.0/GA/ami/photon-ami-2.0-304b817.tar.gz) | 135MB | 45f4e9bc27f7316fae77c648c8133195d38f96b3 | 486d59eca17ebc948e2f863f2af06eee |
| [Google GCE](http://dl.bintray.com/vmware/photon/2.0/GA/gce/photon-gce-2.0-304b817.tar.gz) | 705MB | b1385dd8464090b96e6b402c32c5d958d43f9fbd | 34953176901f194f02090988e596b1a7 |
| [Azure VHD - gz file](http://dl.bintray.com/vmware/photon/2.0/GA/azure/photon-azure-2.0-304b817.vhd.gz) | 170MB | a77d54351cca43eefcf289a907ec751c32372930 | 86d281f033f3584b11e5721a5cbda2d3 |
| [Azure VHD - gz file - cloud-init provisioning](http://dl.bintray.com/vmware/photon/2.0/GA/azure/photon-azure-2.0-3146fa6.tar.gz) | 172MB | d7709a7b781dad03db55c4999bfa5ef6606efd8b | ee95bffe2c924d9cb2d47a94ecbbea2c |
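After downloading, you can verify a file against the table before installing; for example, for the Full ISO above:
~~~~
sha1sum photon-2.0-304b817.iso
md5sum photon-2.0-304b817.iso
~~~~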

***Photon OS 2.0 AMI IDs (Update: November 7th, 2017)***
-------------------------
| Region | AMI ID |
| --- | --- |
| N. Virginia | ami-47fe4c3d |
| Ohio | ami-29dff04c |
| N. California | ami-065f6166 |
| Oregon | ami-f6ab7f8e |

**Downloading Photon OS 2.0 RC**
------------------------------
Photon OS 2.0 RC is available now! Choose the download that's right for you and click one of the links below. Refer to the associated sha1 and md5 checksums.
#### Photon OS 2.0 RC Binaries ####
| Download | Size | sha1 checksum | md5 checksum |
| --- | --- | --- | --- |
| [Full ISO](https://bintray.com/vmware/photon/download_file?file_path=2.0%2FRC%2Fiso%2Fphoton-2.0-31bb961.iso) | 2.2GB | 5c049d5ff40c8f22ae5e969eabd1ee8cd6b834e7 | 88cc8ecf2a7f6ae5ac8eb15f54e4a821 |
| [OVA with virtual hardware v11](https://bintray.com/vmware/photon/download_file?file_path=2.0%2FRC%2Fova%2Fphoton-custom-hw11-2.0-31bb961.ova) | 108MB | 6467ebb31ff23dfd112c1c574854f5655a462cc2 | b2c7fa9c151b1130342f08c2f513f9e1 |
| [OVA with virtual hardware v13 (ESX 6.5 and above)](https://bintray.com/vmware/photon/download_file?file_path=2.0%2FRC%2Fova%2Fphoton-custom-hw13-2.0-31bb961.ova) | 106MB | 5072ec86bcaa2d6e07f4fe3e6aa99063acbbc3f3 | 9331fc10d4526f389d2b658920727925 |
| [Amazon AMI](https://bintray.com/vmware/photon/download_file?file_path=2.0%2FRC%2Fami%2Fphoton-ami-2.0-31bb961.tar.gz) | 135MB | 2461b81f3d7c2325737c6ae12099e4c7ef6a079c | 67458ee457a0cf68d199ab95fc707107 |
| [Google GCE](https://bintray.com/vmware/photon/download_file?file_path=2.0%2FRC%2Fgce%2Fphoton-gce-2.0-31bb961.tar.gz) | 704MB | c65bcc0cbda061c6305f968646be2d72a4283227 | 2dff057540e37a161520ec86e39b17aa |
| [Azure VHD - gz file](https://bintray.com/vmware/photon/download_file?file_path=2.0%2FRC%2Fazure%2Fphoton-azure-2.0-31bb961.vhd.gz) | 169MB | b43a746fead931ae2bb43e9108cde35913b23715 | 3485c7a31741cca07cc11cbf374ec1a5 |

**Downloading Photon OS 2.0 Beta**
------------------------------
Photon OS 2.0 Beta is here! Choose the download that's right for you and click one of the links below. Refer to the associated sha1 and md5 checksums.
#### Photon OS 2.0 Beta Binaries ####
| Download | Size | sha1 checksum | md5 checksum |
| --- | --- | --- | --- |
| [Full ISO](https://bintray.com/vmware/photon/download_file?file_path=2.0%2FBeta%2Fiso%2Fphoton-2.0-8553d58.iso) | 2.1GB | 7a0e837061805b7aa2649f9ba6652afb2d4591fc | a52c50240726cb3c4219c5c608f9acf3 |
| [OVA with virtual hardware v11](https://bintray.com/vmware/photon/download_file?file_path=2.0%2FBeta%2Fova%2Fphoton-custom-hw11-2.0-8553d58.ova) | 110MB | 30b81b22a7754165ff30cc964b0a4a66b9469805 | fb309ee535cb670fe48677f5bfc74ec0 |
| [Amazon AMI](https://bintray.com/vmware/photon/download_file?file_path=2.0%2FBeta%2Fami%2Fphoton-ami-2.0-8553d58.tar.gz) | 136MB | 320c5b6f6dbf6b000a6036b569b13b11e0e93034 | cc3cff3cf9a9a8d5f404af0d78812ab4 |
| [Google GCE](https://bintray.com/vmware/photon/download_file?file_path=2.0%2FBeta%2Fgce%2Fphoton-gce-2.0-8553d58.tar.gz) | 705MB | c042d46971fa3b642e599b7761c18f4005fc70a7 | 03b873bbd2f0dd1401a334681c59bbf6 |
| [Azure VHD](https://bintray.com/vmware/photon/download_file?file_path=2.0%2FBeta%2Fazure%2Fphoton-azure-2.0-8553d58.vhd) | 17GB | 20cfc506a2425510e68a9d12ea48218676008ffe | 6a531eab9e1f8cba89b1f150d344ecab |

**Downloading Photon OS 1.0**
-------------------------

***Photon OS 1.0 AMI IDs (Update: September 28th, 2017)***
-------------------------
| Region | AMI ID |
| --- | --- |
| N. Virginia | ami-18758762 |
| Ohio | ami-96200df3 |
| N. California | ami-37360657 |
| Oregon | ami-66b74f1e |

***Photon OS 1.0, Revision 2 Binaries (Update: January 19th, 2017)***
-------------------------------------------------------------
We've been busy updating RPMs in our repository for months now, to address both functional and security issues. However, our binaries have remained fixed since their release back in September 2015. To make it faster and easier to get an up-to-date Photon OS system, we've repackaged all of our binaries to include all of these RPM updates. For clarity, we'll call these updated binaries, which are still backed by the 1.0 repos, **1.0, Revision 2**.

Choose the download that's right for you and click one of the links below.
#### Photon OS 1.0, Revision 2 Binaries ####
| Download | Size | sha1 checksum | md5 checksum |
| --- | --- | --- | --- |
| [Full ISO](https://bintray.com/vmware/photon/download_file?file_path=photon-1.0-62c543d.iso) | 2.4GB | c4c6cb94c261b162e7dac60fdffa96ddb5836d66 | 69500c07d25ce9caa9b211a8b6eefd61 |
| [OVA with virtual hardware v10](https://bintray.com/vmware/photon/download_file?file_path=photon-custom-hw10-1.0-62c543d.ova) | 159MB | 6e9087ed25394e1bbc56496ae368b8c77efb21cb | 3e4b1a5f24ab463677e3edebd1ecd218 |
| [OVA with virtual hardware v11](https://bintray.com/vmware/photon/download_file?file_path=photon-custom-hw11-1.0-62c543d.ova) | 159MB | 18c1a6d31545b757d897c61a0c3cc0e54d8aeeba | be9961a232ad5052b746fccbb5a9672d |
| [Amazon AMI](https://bintray.com/vmware/photon/download_file?file_path=photon-ami-1.0-62c543d.tar.gz) | 590MB | 6df9ed7fda83b54c20bc95ca48fa467f09e58548 | 5615a56e5c37f4a9c762f6e3bda7f9d0 |
| [Google GCE](https://bintray.com/vmware/photon/download_file?file_path=photon-gce-1.0-62c543d.tar.gz) | 164MB | 1feb68ec00aaa79847ea7d0b00eada7a1ac3b527 | 5adb7b30803b168e380718db731de5dd |

There are a few other ways to create a Photon OS instance: building the ISO from source cloned from the [GitHub Photon OS repository](https://github.com/vmware/photon) using the [instructions](https://github.com/vmware/photon/blob/master/docs/build-photon.md) found on the GitHub repo, using the [scripted installation](https://github.com/vmware/photon/blob/master/docs/kickstart.md), or [booting Photon OS over a network](https://github.com/vmware/photon/blob/master/docs/PXE-boot.md) with PXE. These options are beyond the scope of this document; if you're interested in these methods, follow the links provided above.

***Photon OS 1.0, Original Binaries***
--------------------------------

If you're looking for the original Photon OS, version 1.0 binaries, they can still be found here:
#### Photon OS 1.0, Original Binaries ####
| Download | Size | sha1 checksum | md5 checksum |
| --- | --- | --- | --- |
| [Full ISO](https://bintray.com/artifact/download/vmware/photon/photon-1.0-13c08b6.iso) | 2.1GB | ebd4ae77f2671ef098cf1e9f16224a4d4163bad1 | 15aea2cf5535057ecb019f3ee3cc9d34 |
| [OVA with virtual hardware v10](https://bintray.com/vmware/photon/download_file?file_path=photon-custom-hw10-1.0-13c08b6.ova) | 292MB | 8669842446b6aac12bd3c8158009305d46b95eac | 3ca7fa49128d1fd16eef1993cdccdd4d |
| [OVA with virtual hardware v11](https://bintray.com/vmware/photon/download_file?file_path=photon-custom-hw11-1.0-13c08b6.ova) | 292MB | 2ee56c5ce355fe6c59888f2f3731fd9d51ff0b4d | 8838498fb8202aac5886518483639073 |
| [Amazon AMI](https://bintray.com/artifact/download/vmware/photon/photon-ami-1.0-13c08b6.tar.gz) | 148.5MB | 91deb839d788ec3c021c6366c192cf5ac601575b | fe657aafdc8189a85430e19ef82fc04a |
| [Google GCE](https://bintray.com/artifact/download/vmware/photon/photon-gce-1.0-13c08b6.tar.gz) | 411.7MB | 397ccc7562f575893c89a899d9beafcde6747d7d | 67a671e032996a26d749b7d57b1b1887 |
***
* [What is Photon OS?](#q-what-is-photon-os)
* [How do I get started with Photon OS?](#q-how-do-i-get-started-with-photon-os)
* [Can I upgrade my existing Photon OS 1.0 VMs?](#q-can-i-upgrade-my-existing-photon-os-10-vms)
* [What kind of support comes with Photon OS?](#q-what-kind-of-support-comes-with-photon-os)
* [How can I contribute to Photon OS?](#q-how-can-i-contribute-to-photon-os)
* [How is Photon OS patched?](#q-how-is-photon-os-patched)
* [How does Photon OS relate to Project Lightwave?](#q-how-does-photon-os-relate-to-project-lightwave)
* [Will VMware continue to support other container host runtime offerings on vSphere?](#q-will-vmware-continue-to-support-other-container-host-runtime-offerings-on-vsphere)
* [How to report a security vulnerability in Photon OS?](#q-how-to-report-a-security-vulnerability-in-photon-os)
* [What are the Docker improvements in Photon OS 2.0?](#q-what-are-the-docker-improvements-in-photon-os-20)
* [Why is VMware creating Photon OS?](#q-why-is-vmware-creating-photon-os)
* [Why is VMware open-sourcing Photon OS?](#q-why-is-vmware-open-sourcing-photon-os)
* [In what way is Photon OS "optimized for VMware?"](#q-in-what-way-is-photon-os-optimized-for-vmware)
* [Why can't I SSH in as root?](#q-why-cant-i-ssh-in-as-root)
* [Why is netstat not working?](#q-why-is-netstat-not-working)
* [Why do all of my cloned Photon OS instances have the same IP address when using DHCP?](#q-why-do-all-of-my-cloned-photon-os-instances-have-the-same-ip-address-when-using-dhcp)
* [How to install new packages?](#how-to-install-new-packages)
* [Why is the yum command not working in a minimal installation?](#q-why-is-the-yum-command-not-working-in-a-minimal-installation)
* [How to install all build essentials?](#q-how-to-install-all-build-essentials)
* [How to build a new package for Photon OS?](#q-how-to-build-a-new-package-for-photon-os)
* [I just booted into a freshly installed Photon OS instance, why isn't "docker ps" working?](#q-i-just-booted-into-a-freshly-installed-photon-os-instance-why-isnt-docker-ps-working)
* [What is the difference between Minimal and Full installation?](#q-what-is-the-difference-between-minimal-and-full-installation)
* [What packages are included in Minimal and Full?](#q-what-packages-are-included-in-minimal-and-full)
* [How do I transfer or share files between Photon and my host machine?](#q-how-do-i-transfer-or-share-files-between-photon-and-my-host-machine)
* [Why is the ISO over 2GB, when I hear that Photon OS is a minimal container runtime?](#q-why-is-the-iso-over-2gb-when-i-hear-that-photon-os-is-a-minimal-container-runtime)

***

# Getting Started

## Q. What is Photon OS?
A. Photon OS™ is an open source Linux container host optimized for cloud-native applications, cloud platforms, and VMware infrastructure. Photon OS provides a secure run-time environment for efficiently running containers. For an overview, see [https://vmware.github.io/photon/](https://vmware.github.io/photon/).

## Q. How do I get started with Photon OS?
A. Start by deciding your target platform. Photon OS 2.0 has been certified in public cloud environments - Microsoft Azure (new), Google Compute Engine (GCE), and Amazon Elastic Compute Cloud (EC2) - as well as on VMware vSphere, VMware Fusion, and VMware Workstation.
Next, download the latest binary distribution for your target platform. The binaries are hosted on [https://bintray.com/vmware/photon/](https://bintray.com/vmware/photon/). For download instructions, see [Downloading Photon OS](https://github.com/vmware/photon/wiki/Downloading-Photon-OS).
Finally, go to the installation instructions for your target platform, which are listed here: [https://github.com/vmware/photon/wiki](https://github.com/vmware/photon/wiki).

## Q. Can I upgrade my existing Photon OS 1.0 VMs?
A. Yes, there is an in-place upgrade path for Photon OS 1.0 implementations. You simply download an upgrade package, run a script, and reboot the VM. Refer to the instructions in [Upgrading to Photon OS 2.0](https://github.com/vmware/photon/wiki/Upgrading-to-Photon-OS-2.0).

## Q. What kind of support comes with Photon OS?
A. Photon OS is supported through community efforts and direct developer engagement in the communities. Potential users of Photon OS should start with the [Photon microsite](http://vmware.com/photon).

Developers who want the source code, including those interested in making contributions, should visit the [Photon OS GitHub repository](https://github.com/vmware/photon).

## Q. How can I contribute to Photon OS?
A. We welcome community participation in the development of Photon OS and look forward to broad ecosystem engagement around the project. Getting your idea into Photon OS is just a [GitHub](https://vmware.github.io/photon) pull request away. When you submit a pull request, you'll be asked to accept the Contributor License Agreement (CLA).

## Q. How is Photon OS patched?
A. Within a major release, updates are delivered as package updates. Security updates are delivered on an as-needed basis. Non-security updates happen quarterly, but may not include every single package update; the focus is on delivering a valid, functional updated stack every quarter.

Photon OS isn't patched as a whole - instead, individual packages are updated (potentially with patches applied to that individual package). For instance, if a package releases a fix for a critical vulnerability, we'll update the package in the Photon OS repository, for critical issues probably within a day or two. At that point, customers get the updated package by running "tdnf update <package>".

## Q. How does Photon OS relate to Project Lightwave?
A. Project Lightwave is an open-sourced project that provides enterprise-grade identity and access management services, and can be used to solve key security, governance, and compliance challenges for a variety of use cases within the enterprise.
Through integration between Photon OS and Project Lightwave, organizations can enforce security and governance on container workloads, for example, by ensuring only authorized containers are run on authorized hosts, by authorized users. For details about Lightwave, see [https://github.com/vmware/lightwave](https://github.com/vmware/lightwave).

## Q. Will VMware continue to support other container host runtime offerings on vSphere?
A. Yes. VMware is committed to delivering an infrastructure for all workloads, to vSphere having the largest guest OS support in the industry, and to supporting customer choice.
Toward those goals, VMware will continue to work with our technology partners to support new guest operating systems and container host runtimes as they come to market. Open-sourcing Photon OS makes the optimizations and enhancements for container host runtimes on the VMware platform available as a reference implementation for other container host runtimes as well.

# Photon OS
## Q. What is Photon OS?
A. Photon OS is an open source, Linux container host runtime optimized for VMware vSphere®. Photon OS is extensible, lightweight, and supports the most common container formats, including Docker, Rocket, and Garden. Photon OS includes a small-footprint, yum-compatible, package-based lifecycle management system, and can support rpm-ostree image-based system versioning. When used with development tools and environments such as VMware Fusion®, VMware Workstation™, HashiCorp (Vagrant and Atlas) and a production runtime environment (vSphere, VMware vCloud® Air™), Photon OS allows seamless migration of container-based apps from development to production.

## Q. How to report a security vulnerability in Photon OS?
A. VMware encourages users who become aware of a security vulnerability in VMware products to contact VMware with details of the vulnerability. VMware has established an email address that should be used for reporting a vulnerability. Please send descriptions of any vulnerabilities found to security@vmware.com. Please include details on the software and hardware configuration of your system so that we can duplicate the issue being reported.

Note: We encourage use of encrypted email. Our public PGP key is found at [kb.vmware.com/kb/1055](http://kb.vmware.com/kb/1055).

VMware hopes that users encountering a new vulnerability will contact us privately, as it is in the best interests of our customers that VMware has an opportunity to investigate and confirm a suspected vulnerability before it becomes public knowledge.

In the case of vulnerabilities found in third-party software components used in VMware products, please also notify VMware as described above.

## Q. What are the Docker improvements in Photon OS 2.0?
A. In Photon OS 2.0, the Docker image size (compressed and uncompressed) was reduced to less than a third of its size in Photon OS 1.0. This gain resulted from:
- using toybox (instead of standard core tools), which brings the Docker image size from 50MB (in 1.0) to 14MB (in 2.0)
- a package split - in Photon OS 2.0, the binary set contains only bash, tdnf, and toybox; all other installed packages are now libraries only.

## Q. Why is VMware creating Photon OS?
A. It's about workloads - VMware has always positioned our vSphere platform as a secure, highly performant platform for enterprise applications. With containers, providing an optimized runtime ensures that customers can embrace these new workload technologies without disrupting existing operations. Over time, Photon OS will extend the capabilities of the software-defined data center, such as security, identity, and resource management, to containerized workloads. Organizations can then leverage a single infrastructure architecture for both traditional and cloud-native apps, and leverage existing investments in tools, skills, and technologies. This converged environment will simplify operation and troubleshooting, and ease the adoption of cloud-native apps.

Photon OS can provide a reference implementation for optimizing containers on VMware platforms across compute, network, storage, and management. For example, Photon OS can deliver performance through kernel tuning to remove redundant caching between the Linux kernel and the vSphere hypervisor, advanced security services through network micro-segmentation delivered by VMware NSX™, and more.

## Q. Why is VMware open-sourcing Photon OS?
A. Open-sourcing Photon OS encourages discussion, innovation, and collaboration with others in the container ecosystem. In particular, we want to make sure the innovations we introduce to Photon to run containers effectively on VMware are also available to any other container runtime OS.
Additionally, VMware is committed to supporting industry and de facto standards, as doing so also supports stronger security, interoperability, and choice for our customers.

## Q. In what way is Photon OS "optimized for VMware?"

A. Photon OS 1.0 introduced extensive optimizations for VMware environments, which are described in detail in the following VMware white paper: [Deploying Cloud-Native Applications with Photon OS](https://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/whitepaper/vmware-deploying-cloud-native-apps-with-photon-os.pdf). Photon OS 2.0 enhances VMware optimization. The kernel message dumper (new in Photon OS 2.0) is a paravirt feature that extends debugging support. In case of a guest panic, the kernel (through the paravirt channel) dumps the entire kernel log buffer (including the panic message) into the VMware log file (vmware.log) for easy, consolidated access. Previously, this information was stored in a huge vmss (VM suspend state) file.

## Q. Why can't I SSH in as root?
A. By default, Photon OS does not permit root login over SSH. To allow root login, set PermitRootLogin yes in /etc/ssh/sshd_config and restart the sshd daemon.
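For example, a minimal sketch (the sed pattern assumes the stock "PermitRootLogin no" line is present in the config):
~~~~
    sed -i 's/^PermitRootLogin no$/PermitRootLogin yes/' /etc/ssh/sshd_config
    systemctl restart sshd
~~~~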

## Q. Why is netstat not working?
A. netstat is deprecated; ss or ip (part of iproute2) should be used instead.
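For example, some standard iproute2/ss equivalents:
~~~~
    ss -tlnp        # listening TCP sockets with owning processes (netstat -tlnp)
    ip addr show    # interface addresses (ifconfig -a)
    ip route show   # routing table (netstat -rn)
~~~~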

## Q. Why do all of my cloned Photon OS instances have the same IP address when using DHCP?
A. Photon OS uses the contents of /etc/machine-id to determine the DUID that is used for DHCP requests. If you're going to use a Photon OS instance as the base system for cloning to create additional Photon OS instances, you should clear the machine-id with:
~~~~
    echo -n > /etc/machine-id
~~~~
With this value cleared, systemd will regenerate the machine-id and, as a result, all DHCP requests will contain a unique DUID.
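If you want to regenerate the machine-id on an already-running clone rather than waiting for the next boot, systemd ships a helper for this (a sketch; the networkd restart assumes Photon's default systemd-networkd networking):
~~~~
    systemd-machine-id-setup            # writes a fresh id to /etc/machine-id
    systemctl restart systemd-networkd  # re-request a DHCP lease with the new DUID
~~~~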

# How to install new packages?
## Q. Why is the yum command not working in a minimal installation?
A. yum has package dependencies that make the system larger than it needs to be. Photon OS includes [tdnf](https://github.com/vmware/tdnf) - 'tiny' dandified yum - to provide package management and yum functionality in a much, much smaller footprint. To install packages from the CD-ROM, first mount it:
~~~~
     mount /dev/cdrom /media/cdrom
~~~~
Then, you can use tdnf to install new packages. For example, to install the vim editor:
~~~~
     tdnf install vim
~~~~
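To confirm the package landed, you can query the RPM database (standard rpm usage):
~~~~
     rpm -q vim    # prints the installed package name and version
~~~~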
## Q. How to install all build essentials?
A. The following command can be used to install all build essentials:
~~~~
curl -L https://git.io/v1boE | xargs -I {} tdnf install -y {}
~~~~
## Q. How to build a new package for Photon OS?
A. Assuming you have an Ubuntu development environment, set it up and pull the latest code into /workspace. Let's assume your package is named foo, with version 1.0.
~~~~
    cp foo-1.0.tar.gz /workspace/photon/SOURCES
    cp foo.spec /workspace/photon/SPECS/foo/
    cd /workspace/photon/support/package-builder
    sudo python ./build_package.py -i foo
~~~~
## Q. I just booted into a freshly installed Photon OS instance, why isn't "docker ps" working?
A. Make sure the docker daemon is running. By design and by default in Photon OS, the docker daemon/engine is not started at boot time. To start the docker daemon for the current session, use the command:
~~~~
    systemctl start docker
~~~~
To start the docker daemon on boot, use the command:
~~~~
    systemctl enable docker
~~~~
## Q. What is the difference between Minimal and Full installation?
A. Minimal is the minimal set of packages for a container runtime, plus cloud-init.
Full contains all the packages shipped with the ISO.

## Q. What packages are included in Minimal and Full?
A. See [packages_minimal.json](https://github.com/vmware/photon/blob/dev/common/data/packages_minimal.json) as an example.
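The list can also be inspected from the command line (the raw URL below is derived mechanically from the repository path above; adjust the branch if it has moved):
~~~~
    curl -sL https://raw.githubusercontent.com/vmware/photon/dev/common/data/packages_minimal.json | head -n 20
~~~~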

## Q. How do I transfer or share files between Photon and my host machine?
A. Use vmhgfs-fuse to transfer files between Photon and your host machine:
1. Enable shared folders in the Workstation or Fusion UI (edit the VM settings, choose Options -> Shared Folders, and enable them).
2. Make sure open-vm-tools is installed (it is installed by default in the Minimal installation and OVA import).
3. Run vmware-hgfsclient to list the shares.

Next, do one of the following:

- Run the following to mount (the mount point must exist):
~~~~
mkdir -p /mnt/hgfs
vmhgfs-fuse .host:/$(vmware-hgfsclient) /mnt/hgfs
~~~~
OR

- Add the following line to /etc/fstab:
~~~~
.host:/ /mnt/hgfs fuse.vmhgfs-fuse <options> 0 0
~~~~
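For example, with a concrete option filled in (allow_other is a standard FUSE option that lets non-root users access the share; treat this line as an illustrative sketch):
~~~~
.host:/ /mnt/hgfs fuse.vmhgfs-fuse allow_other 0 0
~~~~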

## Q. Why is the ISO over 2GB, when I hear that Photon OS is a minimal container runtime?
A. The ISO includes a repository with all Photon OS packages. When you mount the ISO to a machine and boot into the Photon installer, you'll be able to choose the Photon Minimal installation option and the hypervisor-optimized Linux kernel, which will reduce the storage size.
***
![Photon](https://cloud.githubusercontent.com/assets/11306358/9800286/cb4c9eb6-57d1-11e5-916c-6eba8e40fa99.png)
# Welcome to the Photon OS Wiki

This wiki serves as an unofficial supplement to the documentation that is published in the project .md files.

### Photon OS 2.0 GA Available!

Photon OS 2.0 introduces new security and OS management capabilities, along with new and updated packages for cloud-native applications and VMware appliances. To download the distribution images, go to [Downloading Photon OS](https://github.com/vmware/photon/wiki/Downloading-Photon-OS). To learn more, see [What is New in Photon OS 2.0](https://github.com/vmware/photon/wiki/What-is-New-in-Photon-OS-2.0).

# Table of Contents

1. [Frequently Asked Questions](https://github.com/vmware/photon/wiki/Frequently-Asked-Questions)
2. Getting Started Guides
    * [Downloading Photon OS](https://github.com/vmware/photon/wiki/Downloading-Photon-OS)
    * [Running Photon OS on vSphere](https://github.com/vmware/photon/wiki/Running-Photon-OS-on-vSphere)
    * [Running Photon OS on Fusion](https://github.com/vmware/photon/wiki/Running-Project-Photon-on-Fusion)
    * [Running Photon OS on Workstation](https://github.com/vmware/photon/wiki/Running-Photon-OS-on-Workstation)
    * [Running Photon OS on AWS EC2](https://github.com/vmware/photon/wiki/Running-Photon-OS-on-Amazon-Elastic-Cloud-Compute)
    * [Running Photon OS on Microsoft Azure](https://github.com/vmware/photon/wiki/Running-Photon-OS-on-Microsoft-Azure)
    * [Running Photon OS on Google Compute Engine](https://github.com/vmware/photon/wiki/Running-Photon-OS-on-Google-Compute-Engine)
3. Administration Guides
    * [Photon OS Administration Guide](https://github.com/vmware/photon/blob/master/docs/photon-admin-guide.md)
    * [How to use Photon Management Daemon](https://github.com/vmware/photon/blob/master/docs/pmd-cli.md)
4. How-To Guides
    * [Install and Configure a Swarm Cluster with DNS Service on Photon OS](https://github.com/vmware/photon/wiki/Install-and-Configure-a-Swarm-Cluster-with-DNS-Service-on-PhotonOS)
    * [Install and Configure a Production Ready Mesos Cluster on Photon OS](https://github.com/vmware/photon/wiki/Install-and-Configure-a-Production-Ready-Mesos-Cluster-on-Photon-OS)
***
<sub>Posted on January 13, 2016 by [https://il.linkedin.com/in/knesenko '''Kiril Nesenko''']</sub><br />

To install the DCOS CLI:
Install virtualenv. The Python tool virtualenv is used to manage the DCOS CLI's environment.
<source lang="bash" enclose="div">
sudo pip install virtualenv
</source><br />
Tip: On some older Python versions, ignore any 'Insecure Platform' warnings. For more information, see https://virtualenv.pypa.io/en/latest/installation.html.
From the command line, create a new directory named dcos, navigate into it, and download the install script:
<source lang="bash" enclose="div">
$ mkdir dcos
$ cd dcos
$ curl -O https://downloads.mesosphere.io/dcos-cli/install.sh
</source><br />
Run the DCOS CLI install script, where &lt;install_dir&gt; is the installation directory and &lt;mesos-master-host&gt; is the hostname of your master node prefixed with http://:
<source lang="bash" enclose="div">
$ bash install.sh <install_dir> <mesos-master-host>
</source><br />
For example, if the hostname of your Mesos master node is mesos-master.example.com:
<source lang="bash" enclose="div">
$ bash install.sh . http://mesos-master.example.com
</source><br />
Follow the on-screen DCOS CLI instructions and enter the Mesosphere verification code. You can ignore any Python 'Insecure Platform' warnings. Confirm whether you want to add DCOS to your system PATH when prompted:
<source lang="bash" enclose="div">
Modify your bash profile to add DCOS to your PATH? [yes/no]
</source><br />
Since the DCOS CLI assumes a full DCOS cluster and we are using a plain Mesos cluster, reconfigure the Marathon and Mesos master URLs with the following commands:
<source lang="bash" enclose="div">
dcos config set core.mesos_master_url http://<mesos-master-host>:5050
dcos config set marathon.url http://<marathon-host>:8080
</source><br />
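To verify that the endpoints took effect, you can print the configuration back and list Marathon applications (standard DCOS CLI subcommands of that era):
<source lang="bash" enclose="div">
dcos config show
dcos marathon app list
</source><br />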
<br /><br />
Next - [[Install and Configure Mesos DNS on a Mesos Cluster]]
***
<sub>Posted on January 13, 2016 by [https://il.linkedin.com/in/knesenko '''Kiril Nesenko''']</sub><br />
<br />
My previous How-To, [[Install and Configure a Production Ready Mesos Cluster on PhotonOS]], covered setting up the cluster itself. In this How-To I am going to explain how to install and configure Marathon for the Mesos cluster. All the following steps should be done on each Mesos master.
First, download Marathon:
<source lang="bash" enclose="div">
root@pt-mesos-master2 [ ~ ]# mkdir -p  /opt/mesosphere/marathon/ && cd /opt/mesosphere/marathon/
root@pt-mesos-master2 [ /opt/mesosphere/marathon ]#  curl -O http://downloads.mesosphere.com/marathon/v0.13.0/marathon-0.13.0.tgz
root@pt-mesos-master2 [ /opt/mesosphere/marathon ]# tar -xf marathon-0.13.0.tgz
root@pt-mesos-master2 [ /opt/mesosphere/marathon ]# mv marathon-0.13.0 marathon
</source><br />
Create a configuration for Marathon, plus a systemd unit to run it:
<source lang="bash" enclose="div">
root@pt-mesos-master2 [ /opt/mesosphere/marathon ]# ls -l /etc/marathon/conf/
total 8
-rw-r--r-- 1 root root 68 Dec 24 14:33 master
-rw-r--r-- 1 root root 71 Dec 24 14:33 zk
root@pt-mesos-master2 [ /opt/mesosphere/marathon ]# cat /etc/marathon/conf/*
zk://192.168.0.2:2181,192.168.0.1:2181,192.168.0.3:2181/mesos
zk://192.168.0.2:2181,192.168.0.1:2181,192.168.0.3:2181/marathon
root@pt-mesos-master2 [ /opt/mesosphere/marathon ]# cat /etc/systemd/system/marathon.service
[Unit]
Description=Marathon
After=network.target
Wants=network.target

[Service]
Environment="JAVA_HOME=/opt/OpenJDK-1.8.0.51-bin"
ExecStart=/opt/mesosphere/marathon/bin/start \
    --master zk://192.168.0.2:2181,192.168.0.1:2181,192.168.0.3:2181/mesos \
    --zk zk://192.168.0.2:2181,192.168.0.1:2181,192.168.0.3:2181/marathon
Restart=always
RestartSec=20

[Install]
WantedBy=multi-user.target
</source><br />
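The two files under /etc/marathon/conf shown in the listing above can be created like this (a sketch that reuses the ZooKeeper URLs from this setup):
<source lang="bash" enclose="div">
mkdir -p /etc/marathon/conf
echo "zk://192.168.0.2:2181,192.168.0.1:2181,192.168.0.3:2181/mesos" > /etc/marathon/conf/master
echo "zk://192.168.0.2:2181,192.168.0.1:2181,192.168.0.3:2181/marathon" > /etc/marathon/conf/zk
</source><br />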
Finally, we need to check the Marathon startup script, since Photon OS does not use the standard JRE layout. Make sure the script launches Java via JAVA_HOME:
<source lang="bash" enclose="div">
root@pt-mesos-master2 [ /opt/mesosphere/marathon ]# tail -n3 /opt/mesosphere/marathon/bin/start
# Start Marathon
marathon_jar=$(find "$FRAMEWORK_HOME"/target -name 'marathon-assembly-*.jar' | sort | tail -1)
exec "${JAVA_HOME}/bin/java" "${java_args[@]}" -jar "$marathon_jar" "${app_args[@]}"
</source><br />
Now we can start the Marathon service:
<source lang="bash" enclose="div">
root@pt-mesos-master2 [ /opt/mesosphere/marathon ]# systemctl start marathon
root@pt-mesos-master2 [ /opt/mesosphere/marathon ]# ps -ef | grep marathon
root     15821     1 99 17:14 ?        00:00:08 /opt/OpenJDK-1.8.0.51-bin/bin/java -jar /opt/mesosphere/marathon/bin/../target/scala-2.11/marathon-assembly-0.13.0.jar --master zk://192.168.0.2:2181,192.168.0.1:2181,192.168.0.3:2181/mesos --zk zk://192.168.0.2:2181,192.168.0.1:2181,192.168.0.3:2181/marathon
root     15854 14692  0 17:14 pts/0    00:00:00 grep --color=auto marathon
</source><br />
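To have Marathon come back after a reboot, also enable the unit on each master (standard systemd usage):
<source lang="bash" enclose="div">
root@pt-mesos-master2 [ /opt/mesosphere/marathon ]# systemctl enable marathon
</source><br />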
<br /><br />
Next - [[Install and Configure DCOS CLI for Mesos]]
***
<sub>Posted on January 13, 2016 by [https://il.linkedin.com/in/knesenko '''Kiril Nesenko''']</sub><br />
= Overview =<br />
Before you read this How-To, please read: [[Install and Configure a Production-Ready Mesos Cluster on PhotonOS]], [[Install and Configure Marathon for Mesos Cluster on PhotonOS]] and [[Install and Configure DCOS CLI for Mesos]].
After you have fully installed and configured the Mesos cluster, you can execute jobs on it. However, if you want service discovery and load-balancing capabilities, you will need to use Mesos-DNS and HAProxy. In this How-To I will explain how to install and configure Mesos-DNS for your Mesos cluster.
Mesos-DNS supports service discovery in Apache Mesos clusters. It allows applications and services running on Mesos to find each other through the domain name system (DNS), similarly to how services discover each other throughout the Internet. Applications launched by Marathon are assigned names like search.marathon.mesos. Mesos-DNS translates these names to the IP address and port on the machine currently running each application. To connect to an application in the Mesos datacenter, all you need to know is its name. Every time a connection is initiated, the DNS translation will point to the right machine in the datacenter.
[[ http://mesosphere.github.io/mesos-dns/img/architecture.png ]]<br />
= Installation =<br />
I will explain how to configure Mesos-DNS as a Docker container and run it through Marathon. First, create a configuration file for the mesos-dns container:
<source lang="bash" enclose="div">
root@pt-mesos-node1 [ ~ ]# cat /etc/mesos-dns/config.json
{
  "zk": "zk://192.168.0.1:2181,192.168.0.2:2181,192.168.0.3:2181/mesos",
  "masters": ["192.168.0.1:5050", "192.168.0.2:5050", "192.168.0.3:5050"],
  "refreshSeconds": 60,
  "ttl": 60,
  "domain": "mesos",
  "port": 53,
  "resolvers": ["8.8.8.8"],
  "timeout": 5,
  "httpon": true,
  "dnson": true,
  "httpport": 8123,
  "externalon": true,
  "SOAMname": "ns1.mesos",
  "SOARname": "root.ns1.mesos",
  "SOARefresh": 60,
  "SOARetry":   600,
  "SOAExpire":  86400,
  "SOAMinttl": 60
}
</source><br />
'''Create Application Run File'''<br />
The next step is to create a JSON file and run the service from Marathon for HA. It is possible to launch the service via the Marathon API or via the DCOS CLI.
<source lang="bash" enclose="div">
client:~/mesos/jobs$ cat mesos-dns-docker.json
{
    "args": [
        "/mesos-dns",
        "-config=/config.json"
    ],
    "container": {
        "docker": {
            "image": "mesosphere/mesos-dns",
            "network": "HOST"
        },
        "type": "DOCKER",
        "volumes": [
            {
                "containerPath": "/config.json",
                "hostPath": "/etc/mesos-dns/config.json",
                "mode": "RO"
            }
        ]
    },
    "cpus": 0.2,
    "id": "mesos-dns-docker",
    "instances": 3,
    "constraints": [["hostname", "CLUSTER", "pt-mesos-node2.example.com"]]
}
</source>
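With the DCOS CLI configured as in the previous How-To, the application can be submitted like this (mirroring the docker.json submission shown later on this page):
<source lang="bash" enclose="div">
client:~/mesos/jobs$ dcos marathon app add mesos-dns-docker.json
</source><br />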
Now we can see in the Marathon and Mesos UI that we launched the application.
<br /><br />
'''Setup Resolvers and Testing'''<br />
To allow Mesos tasks to use Mesos-DNS as the primary DNS server, you must edit the file ''/etc/resolv.conf'' on every slave and add a new nameserver. For instance, if ''mesos-dns'' runs on the server with IP address ''192.168.0.5'', add ''nameserver 192.168.0.5'' at the beginning of ''/etc/resolv.conf'' on every slave.
<source lang="bash" enclose="div">
root@pt-mesos-node2 [ ~/mesos-dns ]# cat /etc/resolv.conf
# This file is managed by systemd-resolved(8). Do not edit.
#
# Third party programs must not access this file directly, but
# only through the symlink at /etc/resolv.conf. To manage
# resolv.conf(5) in a different way, replace the symlink by a
# static file or a different symlink.
nameserver 192.168.0.5
nameserver 192.168.0.4
nameserver 8.8.8.8
</source><br />
Let's run a simple Docker app and see if we can resolve it in DNS.
<source lang="bash" enclose="div">
client:~/mesos/jobs$ cat docker.json
{
    "id": "docker-hello",
    "container": {
        "docker": {
            "image": "centos"
        },
        "type": "DOCKER",
        "volumes": []
    },
    "cmd": "echo hello; sleep 10000",
    "mem": 16,
    "cpus": 0.1,
    "instances": 10,
    "disk": 0.0,
    "ports": [0]
}
</source>
<source lang="bash" enclose="div">
client:~/mesos/jobs$ dcos marathon app add docker.json
</source><br />
Let's try to resolve it.

<pre>
root@pt-mesos-node2 [ ~/mesos-dns ]# dig _docker-hello._tcp.marathon.mesos SRV
;; Truncated, retrying in TCP mode.
; <<>> DiG 9.10.1-P1 <<>> _docker-hello._tcp.marathon.mesos SRV
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 25958
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 10, AUTHORITY: 0, ADDITIONAL: 10
;; QUESTION SECTION:
;_docker-hello._tcp.marathon.mesos. IN SRV
;; ANSWER SECTION:
_docker-hello._tcp.marathon.mesos. 60 IN SRV 0 0 31998 docker-hello-4bjcf-s2.marathon.slave.mesos.
_docker-hello._tcp.marathon.mesos. 60 IN SRV 0 0 31844 docker-hello-jexm6-s1.marathon.slave.mesos.
_docker-hello._tcp.marathon.mesos. 60 IN SRV 0 0 31111 docker-hello-6ms44-s2.marathon.slave.mesos.
_docker-hello._tcp.marathon.mesos. 60 IN SRV 0 0 31719 docker-hello-muhui-s2.marathon.slave.mesos.
_docker-hello._tcp.marathon.mesos. 60 IN SRV 0 0 31360 docker-hello-jznf4-s1.marathon.slave.mesos.
_docker-hello._tcp.marathon.mesos. 60 IN SRV 0 0 31306 docker-hello-t41ti-s1.marathon.slave.mesos.
_docker-hello._tcp.marathon.mesos. 60 IN SRV 0 0 31124 docker-hello-mq3oz-s1.marathon.slave.mesos.
_docker-hello._tcp.marathon.mesos. 60 IN SRV 0 0 31816 docker-hello-tcep8-s1.marathon.slave.mesos.
_docker-hello._tcp.marathon.mesos. 60 IN SRV 0 0 31604 docker-hello-5uu37-s1.marathon.slave.mesos.
_docker-hello._tcp.marathon.mesos. 60 IN SRV 0 0 31334 docker-hello-jqihw-s1.marathon.slave.mesos.

;; ADDITIONAL SECTION:
docker-hello-muhui-s2.marathon.slave.mesos. 60 IN A 192.168.0.5
docker-hello-4bjcf-s2.marathon.slave.mesos. 60 IN A 192.168.0.5
docker-hello-jexm6-s1.marathon.slave.mesos. 60 IN A 192.168.0.6
docker-hello-jqihw-s1.marathon.slave.mesos. 60 IN A 192.168.0.6
docker-hello-mq3oz-s1.marathon.slave.mesos. 60 IN A 192.168.0.6
docker-hello-tcep8-s1.marathon.slave.mesos. 60 IN A 192.168.0.6
docker-hello-6ms44-s2.marathon.slave.mesos. 60 IN A 192.168.0.5
docker-hello-t41ti-s1.marathon.slave.mesos. 60 IN A 192.168.0.4
docker-hello-jznf4-s1.marathon.slave.mesos. 60 IN A 192.168.0.4
docker-hello-5uu37-s1.marathon.slave.mesos. 60 IN A 192.168.0.4
;; Query time: 0 msec
;; SERVER: 192.168.0.5#53(192.168.0.5)
;; WHEN: Sun Dec 27 14:36:32 UTC 2015
;; MSG SIZE  rcvd: 1066
</pre>

We can see that we can resolve our app!
***
== Overview ==
For this setup I will use 3 Mesos masters and 3 slaves. On each Mesos master I will run a Zookeeper, meaning that we will have 3 Zookeepers as well. The Mesos cluster will be configured with a quorum of 2. For networking, Mesos uses Mesos-DNS. I tried to run Mesos-DNS as a container, but ran into some resolving issues, so in my next How-To I will explain how to configure Mesos-DNS and run it through Marathon. Photon hosts will be used for masters and slaves.<br />
<br />
''' Masters: '''<br />
{| class="wikitable"
! style="text-align: center; font-weight: bold;" | Hostname
! style="font-weight: bold;" | IP Address
|-
| pt-mesos-master1.example.com
| 192.168.0.1
|-
| pt-mesos-master2.example.com
| 192.168.0.2
|-
| pt-mesos-master3.example.com
| 192.168.0.3
|}
''' Agents: '''<br />
{| class="wikitable"
! style="text-align: center; font-weight: bold;" | Hostname
! style="font-weight: bold;" | IP Address
|-
| pt-mesos-node1.example.com
| 192.168.0.4
|-
| pt-mesos-node2.example.com
| 192.168.0.5
|-
| pt-mesos-node3.example.com
| 192.168.0.6
|}
<br />
== Masters Installation and Configuration ==
First we will install Zookeeper. Since there is currently a bug in Photon related to the Zookeeper installation, I will use the tarball. Do the following on each master:
<source lang="bash" enclose="div">
root@pt-mesos-master1 [ ~ ]# mkdir -p /opt/mesosphere && cd /opt/mesosphere && wget http://apache.mivzakim.net/zookeeper/stable/zookeeper-3.4.7.tar.gz
root@pt-mesos-master1 [ /opt/mesosphere ]# tar -xf zookeeper-3.4.7.tar.gz && mv zookeeper-3.4.7 zookeeper
root@pt-mesos-master1 [ ~ ]# cat /opt/mesosphere/zookeeper/conf/zoo.cfg | grep -v '#'
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/var/lib/zookeeper
clientPort=2181
server.1=192.168.0.1:2888:3888
server.2=192.168.0.2:2888:3888
server.3=192.168.0.3:2888:3888
</source><br />
Example of a Zookeeper systemd configuration file:
<source lang="bash" enclose="div">
root@pt-mesos-master1 [ ~ ]# cat /etc/systemd/system/zookeeper.service
[Unit]
Description=Apache ZooKeeper
After=network.target

[Service]
Environment="JAVA_HOME=/opt/OpenJDK-1.8.0.51-bin"
WorkingDirectory=/opt/mesosphere/zookeeper
ExecStart=/bin/bash -c "/opt/mesosphere/zookeeper/bin/zkServer.sh start-foreground"
Restart=on-failure
RestartSec=20
User=root
Group=root

[Install]
WantedBy=multi-user.target
</source><br />
Write the server id to the ''myid'' file in the data directory, so Zookeeper knows the id of your master server. This should be done on each master with its own id:
<source lang="bash" enclose="div">
root@pt-mesos-master1 [ ~ ]# echo 1 > /var/lib/zookeeper/myid
root@pt-mesos-master1 [ ~ ]# cat /var/lib/zookeeper/myid
1
</source><br />
Now let's install the Mesos masters. Do the following on each master:
<source lang="bash" enclose="div">
root@pt-mesos-master1 [ ~ ]# yum -y install mesos
Setting up Install Process
Package mesos-0.23.0-2.ph1tp2.x86_64 already installed and latest version
Nothing to do
root@pt-mesos-master1 [ ~ ]# cat /etc/systemd/system/mesos-master.service
[Unit]
Description=Mesos Master
After=network.target
Wants=network.target

[Service]
ExecStart=/bin/bash -c "/usr/sbin/mesos-master \
    --ip=192.168.0.1 \
    --work_dir=/var/lib/mesos \
    --log_dir=/var/log/mesos \
    --cluster=EXAMPLE \
    --zk=zk://192.168.0.1:2181,192.168.0.2:2181,192.168.0.3:2181/mesos \
    --quorum=2"
KillMode=process
Restart=always
RestartSec=20
LimitNOFILE=16384
CPUAccounting=true
MemoryAccounting=true

[Install]
WantedBy=multi-user.target
</source><br />
Make sure you replace the '''''--ip''''' setting on each master. So far we have 3 masters with the Zookeeper and Mesos packages installed. Let's start the zookeeper and mesos-master services on each master:
<source lang="bash" enclose="div">
root@pt-mesos-master1 [ ~ ]# systemctl start zookeeper
root@pt-mesos-master1 [ ~ ]# systemctl start mesos-master
root@pt-mesos-master1 [ ~ ]# ps -ef | grep mesos
root     11543     1  7 12:09 ?        00:00:01 /opt/OpenJDK-1.8.0.51-bin/bin/java -Dzookeeper.log.dir=. -Dzookeeper.root.logger=INFO,CONSOLE -cp /opt/mesosphere/zookeeper/bin/../build/classes:/opt/mesosphere/zookeeper/bin/../build/lib/*.jar:/opt/mesosphere/zookeeper/bin/../lib/slf4j-log4j12-1.6.1.jar:/opt/mesosphere/zookeeper/bin/../lib/slf4j-api-1.6.1.jar:/opt/mesosphere/zookeeper/bin/../lib/netty-3.7.0.Final.jar:/opt/mesosphere/zookeeper/bin/../lib/log4j-1.2.16.jar:/opt/mesosphere/zookeeper/bin/../lib/jline-0.9.94.jar:/opt/mesosphere/zookeeper/bin/../zookeeper-3.4.7.jar:/opt/mesosphere/zookeeper/bin/../src/java/lib/*.jar:/opt/mesosphere/zookeeper/bin/../conf: -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.local.only=false org.apache.zookeeper.server.quorum.QuorumPeerMain /opt/mesosphere/zookeeper/bin/../conf/zoo.cfg
root     11581     1  0 12:09 ?        00:00:00 /usr/sbin/mesos-master --ip=192.168.0.1 --work_dir=/var/lib/mesos --log_dir=/var/log/mesos --cluster=EXAMPLE --zk=zk://192.168.0.2:2181,192.168.0.1:2181,192.168.0.3:2181/mesos --quorum=2
root     11601  9117  0 12:09 pts/0    00:00:00 grep --color=auto mesos
</source><br />
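To check which master was elected leader, you can query any master's state endpoint (state.json is the endpoint name used by Mesos of this vintage):
<source lang="bash" enclose="div">
root@pt-mesos-master1 [ ~ ]# curl -s http://192.168.0.1:5050/master/state.json | grep -o '"leader":"[^"]*"'
</source><br />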
== Slaves Installation and Configuration ==
The steps for configuring a Mesos slave are very simple and not very different from the master installation. The differences are that we won't install Zookeeper on each slave, and that we will start Mesos in slave mode and tell the daemon to join the Mesos masters. Do the following on each slave:
<source lang="bash" enclose="div">
root@pt-mesos-node1 [ ~ ]# cat /etc/systemd/system/mesos-slave.service
[Unit]
Description=Photon instance running as a Mesos slave
After=network-online.target docker.service

[Service]
Restart=on-failure
RestartSec=10
TimeoutStartSec=0
ExecStartPre=/usr/bin/rm -f /tmp/mesos/meta/slaves/latest
ExecStart=/bin/bash -c "/usr/sbin/mesos-slave \
    --master=zk://192.168.0.1:2181,192.168.0.2:2181,192.168.0.3:2181/mesos \
        --hostname=$(/usr/bin/hostname) \
        --log_dir=/var/log/mesos_slave \
        --containerizers=docker,mesos \
        --docker=$(which docker) \
        --executor_registration_timeout=5mins \
        --ip=192.168.0.4"

[Install]
WantedBy=multi-user.target
</source>
Please make sure to replace the address in the '''''--ip''''' setting on each slave. Start the mesos-slave service on each node.
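For example (standard systemd commands):
<source lang="bash" enclose="div">
root@pt-mesos-node1 [ ~ ]# systemctl daemon-reload
root@pt-mesos-node1 [ ~ ]# systemctl enable mesos-slave
root@pt-mesos-node1 [ ~ ]# systemctl start mesos-slave
</source><br />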
<br />
Now you should have a ready Mesos cluster with 3 masters, 3 Zookeepers, and 3 slaves.
[[https://www.devops-experts.com/wp-content/uploads/2015/12/Screen-Shot-2015-12-24-at-2.22.27-PM.png]]
<br />
If you want to use a private Docker registry, you will need to edit the Docker systemd file. In my example I am using the cse-artifactory.eng.vmware.com registry:
<source lang="bash" enclose="div">
root@pt-mesos-node1 [ ~ ]# cat /lib/systemd/system/docker.service
[Unit]
Description=Docker Daemon
Wants=network-online.target
After=network-online.target

[Service]
EnvironmentFile=-/etc/sysconfig/docker
ExecStart=/bin/docker -d $OPTIONS -s overlay
ExecReload=/bin/kill -HUP $MAINPID
KillMode=process
Restart=always
MountFlags=slave
LimitNOFILE=1048576
LimitNPROC=1048576
LimitCORE=infinity

[Install]
WantedBy=multi-user.target

root@pt-mesos-node1 [ ~ ]# cat /etc/sysconfig/docker
OPTIONS='--insecure-registry cse-artifactory.eng.vmware.com'
root@pt-mesos-node1 [ ~ ]# systemctl daemon-reload && systemctl restart docker
root@pt-mesos-node1 [ ~ ]# ps -ef | grep cse-artifactory
root      5286     1  0 08:39 ?        00:00:00 /bin/docker -d --insecure-registry <your_private_registry> -s overlay
</source><br />
<br /><br />
Next - [[Install and Configure Marathon for Mesos Cluster on PhotonOS]]
***
<sub>Posted on January 13, 2016 by [https://il.linkedin.com/in/tgabay '''Tal Gabay''']</sub>

= Overview =

In this How-To, the steps for installing and configuring a Docker Swarm cluster, alongside DNS and Zookeeper, will be presented.
The cluster will be set up on VMware Photon hosts. <br />
<br />
A prerequisite to using this guide is to be familiar with Docker Swarm - information can be found [https://docs.docker.com/swarm/ here].

== Cluster description ==

The cluster will have 2 Swarm Managers and 3 Swarm Agents:

=== Masters ===

{| class="wikitable"
! style="text-align: center; font-weight: bold;" | Hostname
! style="font-weight: bold;" | IP Address
|-
| pt-swarm-master1.example.com
| 192.168.0.1
|-
| pt-swarm-master2.example.com
| 192.168.0.2
|}

=== Agents ===

{| class="wikitable"
! style="text-align: center; font-weight: bold;" | Hostname
! style="font-weight: bold;" | IP Address
|-
| pt-swarm-agent1.example.com
| 192.168.0.3
|-
| pt-swarm-agent2.example.com
| 192.168.0.4
|-
| pt-swarm-agent3.example.com
| 192.168.0.5
|}<br />

= Docker Swarm Installation and Configuration =

== Setting Up the Managers ==

The following steps should be done on both managers.<br />
Docker Swarm supports multiple methods of service discovery, but in order to use failover, Consul, etcd, or Zookeeper must be used. In this guide, Zookeeper will be used.<br />
Download the latest stable version of Zookeeper and create the ''zoo.cfg'' file under the ''conf'' directory:
<br />
<br />

=== Zookeeper installation ===

<source lang="bash" enclose="div">
root@pt-swarm-master1 [ ~ ]# mkdir -p /opt/swarm && cd /opt/swarm && wget http://apache.mivzakim.net/zookeeper/stable/zookeeper-3.4.6.tar.gz
root@pt-swarm-master1 [ /opt/swarm ]# tar -xf zookeeper-3.4.6.tar.gz && mv zookeeper-3.4.6 zookeeper
root@pt-swarm-master1 [ ~ ]# cat /opt/swarm/zookeeper/conf/zoo.cfg | grep -v '#'
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/var/lib/zookeeper
clientPort=2181
server.1=192.168.0.1:2888:3888
server.2=192.168.0.2:2888:3888
</source><br />
The dataDir should be an empty, existing directory.
From the Zookeeper documentation: every machine that is part of the ZooKeeper ensemble should know about every other machine in the ensemble. You accomplish this with the series of lines of the form server.id=host:port:port. You attribute the server id to each machine by creating a file named myid, one for each server, which resides in that server's data directory, as specified by the configuration file parameter dataDir. The myid file consists of a single line containing only the text of that machine's id. So myid of server 1 would contain the text "1" and nothing else. The id must be unique within the ensemble and should have a value between 1 and 255.
<br />
<br />
Set the Zookeeper ID:
<source lang="bash" enclose="div">
root@pt-swarm-master1 [ ~ ]# echo 1 > /var/lib/zookeeper/myid
</source><br />
Project Photon uses [https://en.wikipedia.org/wiki/Systemd Systemd] for services, so a zookeeper service should be created using a systemd unit file.<br />
<source lang="bash" enclose="div">
root@pt-swarm-master1 [ ~ ]# cat /etc/systemd/system/zookeeper.service
[Unit]
Description=Apache ZooKeeper
After=network.target

[Service]
Environment="JAVA_HOME=/opt/OpenJDK-1.8.0.51-bin"
WorkingDirectory=/opt/swarm/zookeeper
ExecStart=/bin/bash -c "/opt/swarm/zookeeper/bin/zkServer.sh start-foreground"
Restart=on-failure
RestartSec=20
User=root
Group=root

[Install]
WantedBy=multi-user.target
</source><br />
Photon ships with OpenJDK, so installing Java separately is not a prerequisite: the unit points the JAVA_HOME environment variable at the bundled JDK and sets the working directory to the location where Zookeeper was extracted.
Now you need to enable and start the service. Enabling the service will make sure that if the host restarts for some reason, the service will automatically start.<br />
<source lang="bash" enclose="div">
root@pt-swarm-master1 [ ~ ]# systemctl enable zookeeper
root@pt-swarm-master1 [ ~ ]# systemctl start zookeeper
</source><br />
Verify that the service was able to start:<br />
<source lang="bash" enclose="div">
root@pt-swarm-master1 [ ~ ]# systemctl status zookeeper
zookeeper.service - Apache ZooKeeper
   Loaded: loaded (/etc/systemd/system/zookeeper.service; enabled)
   Active: active (running) since Tue 2016-01-12 00:27:45 UTC; 10s ago
 Main PID: 4310 (java)
   CGroup: /system.slice/zookeeper.service
           `-4310 /opt/OpenJDK-1.8.0.51-bin/bin/java -Dzookeeper.log.dir=. -Dzookeeper.root.logger=INFO,CONSOLE -cp /opt/swarm/zookeeper/bin/../build/classes:/opt/swarm/zookeeper/bin/../build/lib/*.jar:/opt/s...
</source><br />
109
+On the Manager you elected to be the Swarm Leader (primary), execute the following (if you do not have a specific leader in mind, choose one of the managers randomly):
110
+<source lang="bash" enclose="div">
111
+root@pt-swarm-master1 [ ~ ]# docker run -d --name=manager1 -p 8888:2375 swarm manage --replication --advertise 192.168.0.1:8888 zk://192.168.0.1,192.168.0.2/swarm
112
+</source>
113
+* '' docker run -d '' - run the container in the background.
114
+* '' --name=manager1 '' - give the container a name instead of an auto-generated one.
115
+* '' -p 8888:2375 '' - publish a container's port(s) to the host. In this case, when you connect to the host on port 8888, you reach the container on port 2375.
116
+* '' swarm '' - the image to use for the container.
117
+* '' manage '' - the command to send to the container once it's up, alongside the rest of the parameters.
118
+* '' --replication '' - tells Swarm that the manager is part of a multi-manager configuration and that this primary manager competes with other manager instances for the primary role. The primary manager has the authority to manage the cluster, replicate logs, and replicate events that are happening inside the cluster.
119
+* '' --advertise 192.168.0.1:8888 '' - specifies the primary manager address. Swarm uses this address to advertise to the cluster when the node is elected as the primary.
120
+* '' zk://192.168.0.1,192.168.0.2/swarm '' - specifies the Zookeepers' location for service discovery. The /swarm path is arbitrary; just make sure that every node that joins the cluster specifies the same path (it is meant to enable support for multiple clusters with the same Zookeepers).<br />
121
+<br />
122
+On the second manager, execute the following:
123
+<source lang="bash" enclose="div">
124
+root@pt-swarm-master2 [ ~ ]# docker run -d --name=manager2 -p 8888:2375 swarm manage --replication --advertise 192.168.0.2:8888 zk://192.168.0.1,192.168.0.2/swarm
125
+</source>
126
+Notice that the only difference is the --advertise flag value. The first manager will not lose leadership following this command.<br />
127
+<br />
128
+Now two managers are alive: one is the primary and the other is the replica. When we look at the docker info output on our primary manager, we see the following:
129
+<source lang="bash" enclose="div">
130
+docker-client:~$ docker -H tcp://192.168.0.1:8888 info
131
+Containers: 0
132
+Images: 0
133
+Role: primary
134
+Strategy: spread
135
+Filters: health, port, dependency, affinity, constraint
136
+Nodes: 0
137
+CPUs: 0
138
+Total Memory: 0 B
139
+Name: 82b8516efb7c
140
+</source>
141
+There are a few things that are worth noticing:
142
+* The info command can be executed from ANY machine that can reach the master. The -H tcp://&lt;ip&gt;:&lt;port&gt; flag specifies that the docker command should be executed against a remote host.
143
+* Containers - this is the result of the docker ps -a command for the cluster we just set up.
144
+* Images - the result of the docker images command.
145
+* Role - as expected, this is the primary manager.
146
+* Strategy - Swarm supports a number of strategies for scheduling containers in the cluster. spread means that a new container will run on the node with the fewest containers.
147
+* Filters - Swarm can choose where to run containers based on different filters supplied on the command line; see the constraint example after this list. More info can be found [https://docs.docker.com/swarm/scheduler/filter/ here].<br />
148
+<br />
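+For example, once agents have joined the cluster (see the next section), the constraint filter can pin a container to a specific node. This is an illustrative sketch using names from this guide's setup:
+<source lang="bash" enclose="div">
+docker-client:~$ docker -H tcp://192.168.0.1:8888 run -d -e constraint:node==pt-swarm-agent1 vmwarecna/nginx
+</source><br />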
149
+When we look at the docker info output on the replica manager, we see the following:
150
+<source lang="bash" enclose="div">
151
+docker-client:~$ docker -H tcp://192.168.0.2:8888 info
152
+Containers: 0
153
+Images: 0
154
+Role: replica
155
+Primary: 192.168.0.1:8888
156
+Strategy: spread
157
+Filters: health, port, dependency, affinity, constraint
158
+Nodes: 0
159
+CPUs: 0
160
+Total Memory: 0 B
161
+Name: ac06f826e507
162
+</source>
163
+Notice that the only differences between the two managers are:
164
+* Role - as expected, this is the replica manager.
165
+* Primary - contains the address of the primary manager.<br />
166
+<br />
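+You can optionally verify failover at this point. This is an illustrative check, not a required step: stop the primary manager's container and, after the Zookeeper-based election completes (it may take a few seconds), the replica is expected to report itself as the primary.
+<source lang="bash" enclose="div">
+root@pt-swarm-master1 [ ~ ]# docker stop manager1
+docker-client:~$ docker -H tcp://192.168.0.2:8888 info | grep Role
+Role: primary
+root@pt-swarm-master1 [ ~ ]# docker start manager1
+</source><br />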
167
+
168
+== Setting Up the Agents ==
169
+
170
+In Swarm, for a node to become part of the cluster, it must "join" the cluster. Do the following for each of the agents.
171
+Edit the '' /usr/lib/systemd/system/docker.service '' file so that each agent will be able to join the cluster:
172
+<source lang="bash" enclose="div">
173
+root@pt-swarm-agent1 [ ~ ]# cat /usr/lib/systemd/system/docker.service
174
+[Unit]
175
+Description=Docker Daemon
176
+Wants=network-online.target
177
+After=network-online.target
178
+ 
179
+[Service]
180
+ExecStart=/bin/docker daemon -H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock --cluster-advertise eno16777984:2375 --cluster-store zk://192.168.0.1,192.168.0.2/swarm
181
+ExecReload=/bin/kill -HUP $MAINPID
182
+KillMode=process
183
+Restart=always
184
+MountFlags=slave
185
+LimitNOFILE=1048576
186
+LimitNPROC=1048576
187
+LimitCORE=infinity
188
+ 
189
+[Install]
190
+WantedBy=multi-user.target
191
+</source>
192
+* '' -H tcp://0.0.0.0:2375 '' - ensures that the Docker remote API on the Swarm agents is available over TCP for the Swarm managers.
193
+* '' -H unix:///var/run/docker.sock '' - the Docker daemon can listen for Docker Remote API requests via three different socket types: unix, tcp, and fd.
194
+** tcp - if you need to access the Docker daemon remotely, you need to enable the tcp socket.
195
+** fd - on systemd-based systems, you can communicate with the daemon via systemd socket activation.
196
+* '' --cluster-advertise <NIC>:2375 '' - advertises the machine on the network by specifying the network interface and the port used by the Swarm managers.
197
+* '' --cluster-store zk://192.168.0.1,192.168.0.2/swarm '' - as defined before, the service discovery back end used here is Zookeeper.
198
+<br />
199
+Enable and start the docker service:
200
+<source lang="bash" enclose="div">
201
+root@pt-swarm-agent1 [ ~ ]# systemctl enable docker
202
+root@pt-swarm-agent1 [ ~ ]# systemctl daemon-reload && systemctl restart docker
203
+root@pt-swarm-agent1 [ ~ ]# systemctl status docker
204
+docker.service - Docker Daemon
205
+   Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled)
206
+   Active: active (running) since Tue 2016-01-12 00:46:18 UTC; 4s ago
207
+ Main PID: 11979 (docker)
208
+   CGroup: /system.slice/docker.service
209
+           `-11979 /bin/docker daemon -H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock --cluster-advertise eno16777984:2375 --cluster-store zk://192.168.0.1,192.168.0.2/swarm
210
+</source><br />
211
+All that remains is to have the agents join the cluster:
212
+<source lang="bash" enclose="div">
213
+root@pt-swarm-agent1 [ ~ ]# docker run -d swarm join --advertise=192.168.0.3:2375 zk://192.168.0.1,192.168.0.2/swarm
214
+</source><br />
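+The remaining agents join the same way; only the advertised address changes. Using the agent addresses from this setup:
+<source lang="bash" enclose="div">
+root@pt-swarm-agent2 [ ~ ]# docker run -d swarm join --advertise=192.168.0.4:2375 zk://192.168.0.1,192.168.0.2/swarm
+root@pt-swarm-agent3 [ ~ ]# docker run -d swarm join --advertise=192.168.0.5:2375 zk://192.168.0.1,192.168.0.2/swarm
+</source><br />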
215
+A look at the output of the docker info command will now show:
216
+<source lang="bash" enclose="div">
217
+docker-client:~$ docker -H tcp://192.168.0.1:8888 info
218
+Containers: 3
219
+Images: 9
220
+Role: primary
221
+Strategy: spread
222
+Filters: health, port, dependency, affinity, constraint
223
+Nodes: 3
224
+ pt-swarm-agent1.example.com: 192.168.0.3:2375
225
+  └ Status: Healthy
226
+  └ Containers: 1
227
+  └ Reserved CPUs: 0 / 1
228
+  └ Reserved Memory: 0 B / 2.055 GiB
229
+  └ Labels: executiondriver=native-0.2, kernelversion=4.1.3-esx, operatingsystem=VMware Photon/Linux, storagedriver=overlay
230
+ pt-swarm-agent2.example.com: 192.168.0.4:2375
231
+  └ Status: Healthy
232
+  └ Containers: 1
233
+  └ Reserved CPUs: 0 / 1
234
+  └ Reserved Memory: 0 B / 2.055 GiB
235
+  └ Labels: executiondriver=native-0.2, kernelversion=4.1.3-esx, operatingsystem=VMware Photon/Linux, storagedriver=overlay
236
+ pt-swarm-agent3.example.com: 192.168.0.5:2375
237
+  └ Status: Healthy
238
+  └ Containers: 1
239
+  └ Reserved CPUs: 0 / 1
240
+  └ Reserved Memory: 0 B / 2.055 GiB
241
+  └ Labels: executiondriver=native-0.2, kernelversion=4.1.3-esx, operatingsystem=VMware Photon/Linux, storagedriver=overlay
242
+CPUs: 3
243
+Total Memory: 6.166 GiB
244
+Name: 82b8516efb7c
245
+</source>
246
+
247
+== Setting Up DNS ==
248
+
249
+Docker does not provide its own DNS, so we use [https://github.com/ahmetalpbalkan/wagl wagl] DNS.
250
+Setting it up is very simple. In this case, one of the masters will also act as the DNS server. Simply execute:
251
+<source lang="bash" enclose="div">
252
+docker-client:~$ docker run -d --restart=always --name=dns -p 53:53/udp --link manager1:swarm ahmet/wagl wagl --swarm tcp://swarm:2375
253
+</source>
254
+* '' --restart=always '' - always restart the container regardless of the exit status. When you specify always, the Docker daemon will try to restart the container continuously. The container will also always start on daemon startup, regardless of its current state.
255
+* '' --link manager1:swarm '' - link the manager1 container (by name) and give it the alias swarm.
256
+That's it, DNS is up and running.
257
+
258
+= Test Your Cluster =
259
+
260
+== Running Nginx ==
261
+
262
+Execute the following commands from any docker client:
263
+<source lang="bash" enclose="div">
264
+docker-client:~$ docker -H tcp://192.168.0.1:8888 run -d -l dns.service=api -l dns.domain=example -p 80:80 vmwarecna/nginx
265
+docker-client:~$ docker -H tcp://192.168.0.1:8888 run -d -l dns.service=api -l dns.domain=example -p 80:80 vmwarecna/nginx
266
+</source>
267
+Note that this is the same command, executed twice. It tells the master to run two identical containers, each of which has two DNS labels.<br />
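+To see which nodes the scheduler placed the containers on, run docker ps against the primary manager; in standalone Swarm, container names are prefixed with the node name:
+<source lang="bash" enclose="div">
+docker-client:~$ docker -H tcp://192.168.0.1:8888 ps
+</source>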
268
+Now, from any container in the cluster that has dnsutils, you can execute the following (for example):
269
+<source lang="bash" enclose="div">
270
+root@13271a2d0fcb:/# dig +short A api.example.swarm
271
+192.168.0.3
272
+192.168.0.4
273
+root@13271a2d0fcb:/# dig +short SRV _api._tcp.example.swarm
274
+1 1 80 192.168.0.3.
275
+1 1 80 192.168.0.4.
276
+</source>
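+Assuming the container also has an HTTP client such as curl installed (hypothetical; the dig test above only requires dnsutils), you can reach the service through the wagl-provided name:
+<source lang="bash" enclose="div">
+root@13271a2d0fcb:/# curl -s -o /dev/null -w "%{http_code}\n" http://api.example.swarm/
+</source>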
0 277
\ No newline at end of file
1 278
new file mode 100644
... ...
@@ -0,0 +1,38 @@
0
+**Installing the Lightwave Client on a Photon Image and Joining the Client to a Domain**
1
+
2
+After you have set up a Lightwave domain controller, you can join Photon clients to that domain. You install the Lightwave client first. After the client is installed, you join the client to the domain.
3
+
4
+**Prerequisites**
5
+
6
+- Prepare a Photon OS client for the Lightwave client installation.
7
+- Verify that the hostname of the client can be resolved.
8
+- Verify that you have 184 MB free for the Lightwave client installation.
9
+
10
+**Procedure**
11
+
12
+1. Log in to your Photon OS client over SSH.
13
+2. Install the Lightwave client by running the following command. 
14
+	
15
+	`# tdnf install lightwave-client -y`
16
+
17
+3. Edit the `iptables` firewall rules configuration file to allow connections on port `2020` as a default setting.
18
+	
19
+	The default Photon OS 2.0 firewall settings block all incoming, outgoing, and forwarded traffic, so you must configure the rules.
20
+
21
+	1. Open the iptables settings file.
22
+	
23
+	`# vi /etc/systemd/scripts/iptables`
24
+
25
+	2. At the end of the file, add a rule that allows TCP connections on port 2020, then save and close the file.
26
+
27
+	`iptables -A INPUT -p tcp -m tcp --dport 2020 -j ACCEPT`
28
+
29
+	3. Run the following command to allow the required connections without restarting the client.
30
+
31
+	`# iptables -A INPUT -p tcp -m tcp --dport 2020 -j ACCEPT`
32
+
33
+4. Join the client to the domain by running the `domainjoin.sh` script and configuring the domain controller FQDN, domain, and the password for the `administrator` user.
34
+
35
+	`# domainjoin.sh --domain-controller <lightwave-server-FQDN> --domain <your-domain> --password '<administrator-user-password>'`
36
+
37
+5. In a browser, go to https://*Lightwave-Server-FQDN* to verify that the client appears under the tenants list for the domain.
0 38
\ No newline at end of file
1 39
new file mode 100644
... ...
@@ -0,0 +1,34 @@
0
+**Installing the Lightwave Server and Configuring It as a Domain Controller on a Photon Image**
1
+
2
+You can configure the Lightwave server as a domain controller on a Photon client. You install the Lightwave server first. After the server is installed, you configure a new domain. 
3
+
4
+**Prerequisites**
5
+
6
+- Prepare a Photon OS client for the Lightwave server installation.
7
+- Verify that the hostname of the client can be resolved.
8
+- Verify that you have 500 MB free for the Lightwave server installation.
9
+
10
+**Procedure**
11
+
12
+1. Log in to your Photon OS client over SSH as an administrator.
13
+2. Install the Lightwave server by running the following command. 
14
+	
15
+	`# tdnf install lightwave -y`
16
+3. Configure the Lightwave server as a domain controller by selecting a domain name and a password for the `administrator` user.
17
+	
18
+	The minimum required password complexity is 8 characters, including one symbol, one uppercase letter, and one lowercase letter. 
19
+	Optionally, if you want to access the domain controller over IP, configure the IP address in the `--ssl-subject-alt-name` parameter.
20
+	`# configure-lightwave-server --domain <your-domain> --password '<administrator-user-password>' --ssl-subject-alt-name <machine-ip-address>`
21
+4. Edit `iptables` rules to allow connections to and from the client.
22
+
23
+	The default Photon OS 2.0 firewall settings block all incoming, outgoing, and forwarded traffic, so you must reconfigure them.
24
+	
25
+	`# iptables -P INPUT ACCEPT`
26
+
27
+	`# iptables -P OUTPUT ACCEPT`
28
+
29
+	`# iptables -P FORWARD ACCEPT`
30
+
31
+5. In a browser, go to https://*lightwave-server-FQDN* to verify that you can log in to the newly created domain controller.
32
+	1. On the Cascade Identity Services page, enter the domain that you configured and click **Take me to Lightwave Admin**.
33
+	2. On the Welcome page, enter administrator@your-domain as the user name and the password that you set during the domain controller configuration, and click **LOGIN**.
0 34
\ No newline at end of file
1 35
new file mode 100644
... ...
@@ -0,0 +1,11 @@
0
+# Installing and Using Lightwave on Photon OS #
1
+
2
+Project Lightwave is an open-source project that provides enterprise-grade identity and access management services, and can be used to solve key security, governance, and compliance challenges for a variety of use cases within the enterprise. Through integration between Photon OS and Project Lightwave, organizations can enforce security and governance on container workloads, for example, by ensuring that only authorized containers are run on authorized hosts, by authorized users. For more details about Lightwave, see the [project Lightwave page on GitHub](https://github.com/vmware/lightwave).
3
+
4
+**Procedure**
5
+
6
+1. [Installing the Lightwave Server and Configuring It as a Domain Controller on a Photon Image](Installing-Lightwave-Server-and-Setting-Up-a-Domain)
7
+2. [Installing the Lightwave Client on a Photon Image and Joining the Client to a Domain](Installing-Lightwave-Client-and-Joining-a-Domain)
8
+3. [Installing the Photon Management Daemon on a Lightwave Client](Installing-the-Photon-Management-Daemon-on-a-Lightwave-Client)
9
+4. [Remotely Upgrade a Single Photon OS Machine With Lightwave Client and Photon Management Daemon Installed](Remotely-Upgrade-a-Photon-OS-Machine-With-Lightwave-Client-and-Photon-Management-Daemon-Installed)
10
+5. [Remotely Upgrade Multiple Photon OS Machines With Lightwave Client and Photon Management Daemon Installed](Remotely-Upgrade-Photon-OS-Machine-With-Lightwave-Client-and-Photon-Management-Daemon-Installed)
0 11
new file mode 100644
... ...
@@ -0,0 +1,35 @@
0
+**Installing the Photon Management Daemon on a Lightwave Client**
1
+
2
+After you have installed and configured a domain on Lightwave, and joined a client to the domain, you can install the Photon Management Daemon on that client so that you can remotely manage it.
3
+
4
+**Prerequisites**
5
+
6
+- Have an installed Lightwave server with a domain controller configured on it.
7
+- Have an installed Lightwave client that is joined to the domain.
8
+- Verify that you have 100 MB free for the daemon installation on the client.
9
+
10
+**Procedure**
11
+
12
+1. Log in over SSH as an administrator to a machine with the Lightwave client installed.
13
+2. Install the Photon Management Daemon.
14
+	
15
+	`# tdnf install pmd -y`
16
+3. Start the Photon Management Daemon.
17
+	 
18
+	`# systemctl start pmd`
19
+4. Verify that the daemon is in an `active` state.
20
+
21
+	`# systemctl status pmd`
22
+5. (Optional) In a new console, use `curl` to verify that the Photon Management Daemon returns information.
23
+
24
+	Use the root credentials for the local client to authenticate against the daemon service.
25
+	`# curl https://<lightwave-client-FQDN>:2081/v1/info -ku root`
26
+
27
+6. (Optional) Create an administrative user for the Photon Management Daemon for your domain and assign it the domain administrator role.
28
+	1. In a browser, go to https://*lightwave-server-FQDN*.
29
+	2. On the Cascade Identity Services page, enter your domain name and click **Take me to Lightwave Admin**.
30
+	3. On the Welcome page, enter administrative credentials for your domain and click **Login**.
31
+	4. Click **Users & Groups** and click **Add** to create a new user.
32
+	5. On the Add New User page, enter a user name, at least one name, and a password, and click **Save**.
33
+	6. Click the **Groups** tab, select the Administrators group, and click **Membership** to add the new user to the group.
34
+	7. On the View Members page, select the user that you created, click **Add Member**, click **Save**, and click **Cancel** to return to the previous page.
0 35
\ No newline at end of file
1 36
new file mode 100644
... ...
@@ -0,0 +1,13 @@
0
+The Photon OS Administration Guide covers the basics of managing packages, controlling services with systemd, setting up networking, initializing Photon OS with cloud-init, running Docker containers, and working with other technologies, such as Kubernetes. The guide also includes a section to get you started using Photon OS quickly and easily. The guide is at the following URL: 
1
+
2
+https://github.com/vmware/photon/blob/master/docs/photon-admin-guide.md
3
+
4
+The Photon OS Troubleshooting Guide describes the fundamentals of troubleshooting problems on Photon OS. This guide covers the basics of troubleshooting systemd, packages, network interfaces, services such as SSH and Sendmail, the file system, and the Linux kernel. The guide includes a quick tour of the tools that you can use for troubleshooting and provides examples along the way. The guide also demonstrates how to access the system's log files. It is at the following URL:
5
+
6
+https://github.com/vmware/photon/blob/master/docs/photon-os-troubleshooting-guide.md 
7
+
8
+Additional documentation appears in the docs directory of the Photon OS GitHub:
9
+
10
+https://github.com/vmware/photon/tree/master/docs
11
+
12
+
0 13
new file mode 100644
... ...
@@ -0,0 +1,52 @@
0
+### 1.1 What is OSTree? How about RPM-OSTree?
1
+
2
+OSTree is a tool to manage bootable, immutable, versioned filesystem trees. Unlike traditional package managers like rpm or dpkg, which know how to install, uninstall, and configure packages, OSTree has no knowledge of the relationships between files. But when you add rpm capabilities on top of OSTree, it becomes RPM-OSTree, a filetree replication system that is also package-aware.  
3
+The idea behind it is to use a client/server architecture to keep your installed Linux machines (physical or VM) in sync with the latest bits, in a predictable and reliable manner. To achieve that, OSTree uses a git-like repository that records changes to any file and replicates them to any subscriber.  
4
+A system administrator or an image-building developer takes a base Linux image, prepares the packages and other configuration on a server box, and executes a command to compose a filetree that the host machines download and then incrementally upgrade from whenever a new change has been committed.
5
+You may read more about OSTree [here](https://wiki.gnome.org/Projects/OSTree).
6
+
7
+### 1.2 Why use RPM-OSTree in Photon?
8
+There are several important benefits:
9
+* Reliable, efficient: The filetree replication is simple, reliable, and efficient. It will only transfer deltas over the network. If you have deployed two almost identical bootable images on the same box (differing by just a few files), it will not take twice the space. The new tree will have a set of hardlinks to the old tree, and only the differing files will have a separate copy stored to disk.
10
+* Atomic: The filetree replication is atomic. At the end of a deployment, you are booting either from one deployment or from the other. There is no "partially deployed bootable image". If anything bad happens during replication or deployment - power loss, network failure - your machine boots from the old image. There is even a tool option to clean up old deployed images (whether deployed successfully or not).
11
+* Manageable: You are provided simple tools to figure out exactly what packages have been installed, to compare files, configuration and package changes between versions.
12
+* Predictable, repeatable: A big headache for a system administrator is maintaining a farm of computers with different packages, files, and configuration installed in different orders, which results in an exponential set of test cases. With RPM-OSTree, you get identical, predictably installed systems. 
13
+
14
+As drawbacks, I would mention:
15
+* Some applications configured by the user on the host may have compatibility issues if they save configuration or download files into read-only directories like /usr.
16
+* People not used to "read-only" file systems will be disappointed that they can no longer use rpm, yum, or tdnf to install whatever they want. Think of this as an "enterprise policy". They may circumvent this by customizing the target directory to a writable directory like /var, or by using rpm to install packages and record them in a new RPM repository kept in a writable place.
17
+* Administrators need to be aware of the directory re-mapping specific to OSTree and plan accordingly.
18
+
19
+### 1.3 Photon with RPM-OSTree installation profiles
20
+Photon takes advantage of RPM-OSTree and offers several installation choices:
21
+* Photon RPM-OSTree server - used to compose customized Photon OS installations and to prepare updates. I will call it 'server' for short.
22
+* Photon RPM-OSTree host connected to a default online server repository via http or https, maintained by the VMware Photon OS team, where future updates will be published. This creates a minimal installation profile, but with the option to self-upgrade. I will call it 'default host' for short.
23
+* Photon RPM-OSTree host connected to a custom server repository. It requires a Photon RPM-OSTree server installed in advance. I will call it 'custom host' for short.
24
+
25
+### 1.4 Terminology
26
+I use the term "OSTree" (capitalized) throughout this document when I refer to the general use of this technology, the format of the repository, or the replication protocol. I use "RPM-OSTree" to emphasize the layer that adds Red Hat Package Manager compatibility on both ends - at the server and at the host. However, since Photon OS is an RPM-based Linux, there are places in the documentation and even in the installer menus where "OSTree" may be used instead of "RPM-OSTree" when the distinction is not obvious or doesn't matter in that context.
27
+When "ostree" and "rpm-ostree" (in lowercase) are encountered, they refer to the usage of the specific Unix commands.  
28
+
29
+Finally, "Photon RPM-OSTree" is the application of the RPM-OSTree system to Photon OS, materialized into two options: Photon Server and Photon Host (or client). "Server" or "Host" may be used with or without the "Photon" and/or "RPM-OSTree" qualifier, but they mean the same thing. 
30
+
31
+### 1.5 Sample code
32
+Code samples used throughout the book are small commands that can be typed at the shell command prompt and do not require downloading additional files. Alternatively, you can connect remotely via ssh, so pasting sample code from outside sources or copying files via scp will work. See the Photon Administration guide to learn [how to enable ssh](https://github.com/vmware/photon/blob/1.0/docs/photon-admin-guide.md#permitting-root-login-with-ssh). 
33
+The samples assume that the following VMs have been installed - see the steps in the next chapters:
34
+* A default host VM named **photon-host-def**.
35
+* Two server VMs named **photon-srv1** and **photon-srv2**.
36
+* Two custom host VMs named **photon-host-cus1** and **photon-host-cus2**, each connected to the corresponding server during install.
37
+
38
+### 1.6 How to read this book
39
+I've tried to structure this book to be used both as a sequential read and as reference documentation.  
40
+If you are just interested in deploying a host system and keeping it up to date, then read chapters 2 and 5.   
41
+If you want to install your own server and experiment with customizing packages for your Photon hosts, then read chapters 6 to 9. There are references to the concepts discussed throughout the book, if you need to understand them better.  
42
+However, if you want to read page by page, information is presented from simple to complex, although, as with any technical book, we occasionally run into the chicken-and-egg problem: forward references to concepts that are only explained later. In other cases, concepts are introduced in great detail that may seem hard to follow at first, but I promise they will make sense in later pages when you get to use them.
43
+
44
+### 1.7 Difference between versions
45
+This book was written when Photon 1.0 was released, so all the information presented applies directly to Photon 1.0 and also to Photon 1.0 Revision 2 (Photon 1.0 Rev2 for short, or Photon 1.0r; some people refer to it as Photon 1.0 Refresh). This release is relevant to OSTree because its ISO includes an updated RPM-OSTree repository containing upgraded packages, as well as a matching updated online repo that plays into the upgrade story. Other than that, differences are minimal.  
46
+
47
+The guide has been updated significantly for Photon OS 2.0. Information on what's different is scattered throughout chapters 2, 6, 7, and 8. [[Chapter 12|Photon-RPM-OSTree:-Install-or-rebase-to-Photon-OS-2.0]] is dedicated to the topic.    
48
+
49
+OSTree technology is evolving too, and rather than pointing out at what package version a feature was introduced or changed, the focus is on the ostree and rpm-ostree package versions included with the Photon OS major releases.
50
+
51
+[[Back to main page|Photon-RPM-OSTree:-a-simple-guide]] | [[Previous page|Photon-RPM-OSTree:-Preface]] | [[ Next page >|Photon-RPM-OSTree:-2-Installing-a-host-against-default-server-repository]]
0 52
\ No newline at end of file
1 53
new file mode 100644
... ...
@@ -0,0 +1,89 @@
0
+In Chapter 3 we talked about the Refspec that contains a **photon:** prefix; that prefix is the name of a remote. When a Photon host is installed, a remote is added, which contains the URL of the OSTree repository that is the origin of the commits we pull from to deploy filetrees - in our case, the Photon RPM-OSTree server we installed the host from. This remote is named **photon**, which may be confusing because it's also the OS name and part of the Refspec (branch) path.
1
+
2
+### 10.1 Listing remotes
3
+A host repo can be configured to switch between multiple remotes to pull from; however, only one remote is "active" at a time. We can list the remotes created so far, which returns the expected result.
4
+```
5
+root@photon-host-def [ ~ ]# ostree remote list
6
+photon
7
+```
8
+We can inquire about the URL for that remote name, which for the default host is the expected Photon OS online OSTree repo.
9
+```
10
+root@photon-host-def [ ~ ]# ostree remote show-url photon
11
+https://dl.bintray.com/vmware/photon/rpm-ostree/1.0
12
+```
13
+But where is this information stored? The repo's config file has it.
14
+```
15
+root@photon-host-def [ ~ ]# cat /ostree/repo/config 
16
+[core]
17
+repo_version=1
18
+mode=bare
19
+
20
+[remote "photon"]
21
+url=https://dl.bintray.com/vmware/photon/rpm-ostree/1.0
22
+gpg-verify=false
23
+```
24
+
25
+If the same command is executed on the custom host we've installed, it reveals the URL of the Photon RPM-OSTree server that the host connected to during setup.
26
+```
27
+root@photon-host-cus [ ~ ]# ostree remote show-url photon
28
+http://10.118.101.168
29
+```
30
+
31
+### 10.2 GPG signature verification
32
+You may wonder about the purpose of ```gpg-verify=false``` in the config file, associated with the specific remote. It instructs host updates to skip signature verification for updates that come from the server; since the trees were composed locally at the server, they are not signed. Without this setting, host updates will fail.  
33
+
34
+There is a whole chapter about signing, importing keys, and so on that I will not get into, but the idea is that signing adds an extra layer of security by validating that everything you download comes from the trusted publisher and has not been altered. That is the case for all Photon OS artifacts downloaded from the official VMware site: all OVAs and packages, whether from the online RPMS repositories or included in the ISO file, are signed by VMware. We've seen a similar setting, ```gpgcheck=1```, in the RPMS repo configuration files, which tdnf uses to decide whether to validate the signature of every package downloaded to be installed; see the illustrative snippet below.
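+For illustration, such a repo configuration looks roughly like this (a sketch; the file name, URL, and key path vary by release):
+```
+# /etc/yum.repos.d/photon.repo (illustrative values)
+[photon]
+name=VMware Photon Linux 1.0 (x86_64)
+baseurl=https://dl.bintray.com/vmware/photon_release_1.0_x86_64
+gpgkey=file:///etc/pki/rpm-gpg/VMWARE-RPM-GPG-KEY
+gpgcheck=1
+enabled=1
+```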
35
+
36
+
37
+### 10.3 Switching repositories
38
+Since the name/URL mapping is stored in the repo's config file, in principle you can re-assign a different URL, connecting the host to a different server, as sketched below. The next upgrade will get the latest commit chain from the new server.  
39
+If we edit photon-host-def's repo config and replace the bintray URL with photon-srv1's IP address, all packages in the original 1.0_minimal version will be preserved, but any package change (addition, removal, upgrade) added after that (in 1.0_minimal.1, 1.0_minimal.2) will be reverted, and all new commits from photon-srv1 (which may have the same versions) will be applied. This is because the two repos are identical copies: they have the same original commit ID as a common ancestor, but they diverge from there.  
40
+This may create confusion, and it's one of the reasons I insisted on creating your own versioning scheme.
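+For example, re-pointing the existing remote is just an edit of the url line in /ostree/repo/config (the address below is a placeholder for your own server):
+```
+[remote "photon"]
+url=http://<photon-srv1-IP>
+gpg-verify=false
+```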
41
+  
42
+If the old and new repo have nothing in common (no common ancestor commit), this will undo even the original commit, so all commits from the new tree will be applied.  
43
+A better solution would be to add a new remote that will identify where the commits come from.
44
+
45
+### 10.4 Adding and removing remotes
46
+
47
+A cleaner way to switch repositories is to add remotes that point to different servers. Let's add another server that we will refer to as **photon2**, along with (optionally) the refspecs for the branches it provides (we will see later that in newer OSTree versions we don't need to know the branch names; they can be [[queried at run-time|Photon-RPM-OSTree:-10-Remotes#105-listing-available-branches]]). The 'minimal' and 'full' branch ref names containing '2.0' suggest this may be a Photon OS 2.0 RPM-OSTree server. 
48
+```
49
+root@photon-host-cus [ ~ ]# ostree remote add --repo=/ostree/repo -v --no-gpg-verify photon2 http://10.118.101.86 photon/2.0/x86_64/minimal photon/2.0/x86_64/full
50
+root@photon-host-cus [ ~ ]# ostree remote list
51
+photon
52
+photon2
53
+root@photon-host-cus [ ~ ]# ostree remote show-url photon2
54
+http://10.118.101.86
55
+```
56
+Where is this information stored? There is an extra config file created for each remote:
57
+```
58
+root@photon-host-cus [ ~ ]# cat /etc/ostree/remotes.d/photon2.conf 
59
+[remote "photon2"]
60
+url=http://10.118.101.86
61
+branches=photon/2.0/x86_64/minimal;photon/2.0/x86_64/full;
62
+gpg-verify=false
63
+```
64
+You may have guessed the effect of the ```--no-gpg-verify``` option.  
65
+Obviously, remotes can also be deleted.
66
+```
67
+root@photon-host-cus [ ~ ]# ostree remote delete photon2
68
+root@photon-host-cus [ ~ ]# ostree remote list
69
+photon
70
+```
71
+
72
+### 10.5 List available branches
73
+If a host has been deployed from a specific branch and would like to switch to a different one, maybe from a different server, how would it know what branches are available? In git, you would run ```git remote show origin``` or ```git branch -a``` (although the last command would not show all remote branches unless you ran ```git fetch``` first).  
74
+
75
+Fortunately, in Photon OS 2.0 and higher, the hosts are able to query the server if summary metadata has been generated, as we've seen in [[8.5|Photon-RPM-OSTree:-8-File-oriented-server-operations#85-creating-summary-metadata]]. This command lists all branches available for remote **photon2**.
76
+
77
+```
78
+root@photon-host-cus [ ~ ]# ostree remote refs photon2 
79
+photon2:photon/2.0/x86_64/base
80
+photon2:photon/2.0/x86_64/full
81
+photon2:photon/2.0/x86_64/minimal
82
+```
83
+
84
+### 10.6 Switching branches (rebasing)
85
+
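+As a minimal sketch (assuming the **photon2** remote and the branches listed above), switching the deployed branch would use rpm-ostree rebase:
+```
+# rebase the host to the 'minimal' branch from the photon2 remote
+root@photon-host-cus [ ~ ]# rpm-ostree rebase photon2:photon/2.0/x86_64/minimal
+```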
86
+
87
+[[Back to main page|Photon-RPM-OSTree:-a-simple-guide]] | [[Previous page|Photon-RPM-OSTree:-9-Package-oriented-server-operations]] | [[Next page >|Photon-RPM-OSTree:-11-Running-container-applications-between-bootable-images]]
88
+  
0 89
new file mode 100644
... ...
@@ -0,0 +1,211 @@
0
+In this chapter, we want to test a docker application and make sure that all the settings and downloads done in one bootable filetree are saved into writable folders and are available in the other image; in other words, after rebooting into the other image, everything is available in exactly the same way.  
1
+We are going to do this twice: first to verify an existing bootable image installed in parallel, and then to create a new one.
2
+
3
+### 11.1 Downloading a docker container appliance
4
+Photon OS comes with the docker package installed and configured, but we expect the docker daemon to be inactive (not started). The configuration file /usr/lib/systemd/system/docker.service is read-only (remember that /usr is mounted read-only). 
5
+```
6
+root@sample-host-def [ ~ ]# systemctl status docker
7
+* docker.service - Docker Daemon
8
+   Loaded: loaded (/usr/lib/systemd/system/docker.service; disabled)
9
+   Active: inactive (dead)
10
+
11
+root@sample-host-def [ ~ ]# cat /usr/lib/systemd/system/docker.service
12
+[Unit]
13
+Description=Docker Daemon
14
+Wants=network-online.target
15
+After=network-online.target
16
+
17
+[Service]
18
+ExecStart=/bin/docker -d -s overlay
19
+ExecReload=/bin/kill -HUP $MAINPID
20
+KillMode=process
21
+Restart=always
22
+MountFlags=slave
23
+LimitNOFILE=1048576