Merge branch 'master' of https://github.com/vmware/photon

archive authored on 2018/10/09 23:30:21
Showing 80 changed files
deleted file mode 100644
@@ -1,16 +0,0 @@
-An official Vagrant box is available on HashiCorp Atlas. To get started:
-
-	vagrant init vmware/photon
-
-Add the following lines to the Vagrantfile:
-
-	config.vm.provider "virtualbox" do |v|
-	  v.customize ['modifyvm', :id, '--acpi', 'off']
-	end
-
-Install the vagrant-guests-photon plugin, which provides VMware Photon OS guest support.
-It is available at https://github.com/vmware/vagrant-guests-photon.
-
-Requires VirtualBox 4.3 or later. If you run into issues, check your VirtualBox version first.
-
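-One way to install the plugin is with Vagrant's built-in plugin manager (assuming the plugin is published under the name used in its repository):
-
-	vagrant plugin install vagrant-guests-photon
-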
deleted file mode 100644
@@ -1,106 +0,0 @@
-Download the Photon OS version that's right for you by clicking one of the links below.
-
-**Selecting a Download Format**
-
-Photon OS is available in the following pre-packaged binary formats.
-#### Download Formats ####
-| Format | Description |
-| --- | --- |
-| ISO Image | Contains everything needed to install either the minimal or the full installation of Photon OS. The bootable ISO has a manual installer and can also be used with PXE/kickstart environments for automated installations. |
-| OVA | Pre-installed minimal environment, customized for VMware hypervisor environments. These customizations include a highly sanitized and optimized kernel for improved boot and runtime performance for containers and Linux applications. Since an OVA is a complete virtual machine definition, we've made available a Photon OS OVA with virtual hardware version 11; this allows compatibility with several versions of VMware platforms, or access to the latest virtual hardware enhancements. |
-| Amazon AMI | Pre-packaged and tested version of Photon OS, ready to deploy in your Amazon EC2 cloud environment. Previously, we'd published documentation on how to create an Amazon-compatible instance, but now we've done the work for you. |
-| Google GCE Image | Pre-packaged and tested Google GCE image, ready to deploy in your Google Compute Engine environment, with all modifications and package requirements for running Photon OS in GCE. |
-| Azure VHD | Pre-packaged and tested Azure VHD image, ready to deploy in your Microsoft Azure cloud, with all modifications and package requirements for running Photon OS in Azure. |
-
-**Downloading Photon OS 2.0 GA**
-
-Photon OS 2.0 GA is available now! Choose the download that's right for you and click one of the links below. Refer to the associated sha1sums and md5sums.
-#### Photon OS 2.0 GA Binaries ####
-| Download | Size | sha1 checksum | md5 checksum |
-| --- | --- | --- | --- |
-| [Full ISO](http://dl.bintray.com/vmware/photon/2.0/GA/iso/photon-2.0-304b817.iso) | 2.3GB | 68ec892a66e659b18917a12738176bd510cde829 | 6ce66c763589cf1ee49f0144ff7182dc |
-| [OVA with virtual hardware v11](http://dl.bintray.com/vmware/photon/2.0/GA/ova/photon-custom-hw11-2.0-304b817.ova) | 108MB | b8c183785bbf582bcd1be7cde7c22e5758fb3f16 | 1ce23d43a778fdeb5283ecd18320d9b5 |
-| [OVA with virtual hardware v13 (ESX 6.5 and above)](http://dl.bintray.com/vmware/photon/2.0/GA/ova/photon-custom-hw13-2.0-304b817.ova) | 106MB | 44f7b808ca48ea1af819d222561a14482a15e493 | ec490b65615284a0862e9ee4a7a0ac97 |
-| [OVA with virtual hardware v11 (Workstation and Fusion)](http://dl.bintray.com/vmware/photon/2.0/GA/ova/photon-custom-lsilogic-hw11-2.0-304b817.ova) | 108MB | 6ed700cbbc5e54ba621e975f28284b27adb71f68 | 586c059bf3373984c761e254bd491f59 |
-| [Amazon AMI](http://dl.bintray.com/vmware/photon/2.0/GA/ami/photon-ami-2.0-304b817.tar.gz) | 135MB | 45f4e9bc27f7316fae77c648c8133195d38f96b3 | 486d59eca17ebc948e2f863f2af06eee |
-| [Google GCE](http://dl.bintray.com/vmware/photon/2.0/GA/gce/photon-gce-2.0-304b817.tar.gz) | 705MB | b1385dd8464090b96e6b402c32c5d958d43f9fbd | 34953176901f194f02090988e596b1a7 |
-| [Azure VHD - gz file](http://dl.bintray.com/vmware/photon/2.0/GA/azure/photon-azure-2.0-304b817.vhd.gz) | 170MB | a77d54351cca43eefcf289a907ec751c32372930 | 86d281f033f3584b11e5721a5cbda2d3 |
-| [Azure VHD - gz file - cloud-init provisioning](http://dl.bintray.com/vmware/photon/2.0/GA/azure/photon-azure-2.0-3146fa6.tar.gz) | 172MB | d7709a7b781dad03db55c4999bfa5ef6606efd8b | ee95bffe2c924d9cb2d47a94ecbbea2c |
-
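-After downloading a binary, you can verify it against the published checksums before using it. For example, for the full ISO:
-
-    sha1sum photon-2.0-304b817.iso
-    md5sum photon-2.0-304b817.iso
-
-The output should match the sha1 and md5 values in the tables above.
-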
-***Photon OS 2.0 AMI ID (Update: November 7th, 2017)***
-
-| Region | AMI ID |
-| --- | --- |
-| N. Virginia | ami-47fe4c3d |
-| Ohio | ami-29dff04c |
-| N. California | ami-065f6166 |
-| Oregon | ami-f6ab7f8e |
-
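-As a sketch, you can launch an instance from one of these AMIs with the AWS CLI (the instance type and key pair below are placeholders; pick the region matching the AMI):
-
-    # us-east-1 corresponds to the N. Virginia AMI above
-    aws ec2 run-instances --region us-east-1 --image-id ami-47fe4c3d \
-        --instance-type t2.micro --key-name my-key-pair
-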
-**Downloading Photon OS 2.0 RC**
-
-Photon OS 2.0 RC is available now! Choose the download that's right for you and click one of the links below. Refer to the associated sha1sums and md5sums.
-#### Photon OS 2.0 RC Binaries ####
-| Download | Size | sha1 checksum | md5 checksum |
-| --- | --- | --- | --- |
-| [Full ISO](https://bintray.com/vmware/photon/download_file?file_path=2.0%2FRC%2Fiso%2Fphoton-2.0-31bb961.iso) | 2.2GB | 5c049d5ff40c8f22ae5e969eabd1ee8cd6b834e7 | 88cc8ecf2a7f6ae5ac8eb15f54e4a821 |
-| [OVA with virtual hardware v11](https://bintray.com/vmware/photon/download_file?file_path=2.0%2FRC%2Fova%2Fphoton-custom-hw11-2.0-31bb961.ova) | 108MB | 6467ebb31ff23dfd112c1c574854f5655a462cc2 | b2c7fa9c151b1130342f08c2f513f9e1 |
-| [OVA with virtual hardware v13 (ESX 6.5 and above)](https://bintray.com/vmware/photon/download_file?file_path=2.0%2FRC%2Fova%2Fphoton-custom-hw13-2.0-31bb961.ova) | 106MB | 5072ec86bcaa2d6e07f4fe3e6aa99063acbbc3f3 | 9331fc10d4526f389d2b658920727925 |
-| [Amazon AMI](https://bintray.com/vmware/photon/download_file?file_path=2.0%2FRC%2Fami%2Fphoton-ami-2.0-31bb961.tar.gz) | 135MB | 2461b81f3d7c2325737c6ae12099e4c7ef6a079c | 67458ee457a0cf68d199ab95fc707107 |
-| [Google GCE](https://bintray.com/vmware/photon/download_file?file_path=2.0%2FRC%2Fgce%2Fphoton-gce-2.0-31bb961.tar.gz) | 704MB | c65bcc0cbda061c6305f968646be2d72a4283227 | 2dff057540e37a161520ec86e39b17aa |
-| [Azure VHD - gz file](https://bintray.com/vmware/photon/download_file?file_path=2.0%2FRC%2Fazure%2Fphoton-azure-2.0-31bb961.vhd.gz) | 169MB | b43a746fead931ae2bb43e9108cde35913b23715 | 3485c7a31741cca07cc11cbf374ec1a5 |
-
-**Downloading Photon OS 2.0 Beta**
-
-Photon OS 2.0 Beta is here! Choose the download that's right for you and click one of the links below. Refer to the associated sha1sums and md5sums.
-#### Photon OS 2.0 Beta Binaries ####
-| Download | Size | sha1 checksum | md5 checksum |
-| --- | --- | --- | --- |
-| [Full ISO](https://bintray.com/vmware/photon/download_file?file_path=2.0%2FBeta%2Fiso%2Fphoton-2.0-8553d58.iso) | 2.1GB | 7a0e837061805b7aa2649f9ba6652afb2d4591fc | a52c50240726cb3c4219c5c608f9acf3 |
-| [OVA with virtual hardware v11](https://bintray.com/vmware/photon/download_file?file_path=2.0%2FBeta%2Fova%2Fphoton-custom-hw11-2.0-8553d58.ova) | 110MB | 30b81b22a7754165ff30cc964b0a4a66b9469805 | fb309ee535cb670fe48677f5bfc74ec0 |
-| [Amazon AMI](https://bintray.com/vmware/photon/download_file?file_path=2.0%2FBeta%2Fami%2Fphoton-ami-2.0-8553d58.tar.gz) | 136MB | 320c5b6f6dbf6b000a6036b569b13b11e0e93034 | cc3cff3cf9a9a8d5f404af0d78812ab4 |
-| [Google GCE](https://bintray.com/vmware/photon/download_file?file_path=2.0%2FBeta%2Fgce%2Fphoton-gce-2.0-8553d58.tar.gz) | 705MB | c042d46971fa3b642e599b7761c18f4005fc70a7 | 03b873bbd2f0dd1401a334681c59bbf6 |
-| [Azure VHD](https://bintray.com/vmware/photon/download_file?file_path=2.0%2FBeta%2Fazure%2Fphoton-azure-2.0-8553d58.vhd) | 17GB | 20cfc506a2425510e68a9d12ea48218676008ffe | 6a531eab9e1f8cba89b1f150d344ecab |
-
-**Downloading Photon OS 1.0**
-
-***Photon OS 1.0 AMI ID (Update: September 28th, 2017)***
-
-| Region | AMI ID |
-| --- | --- |
-| N. Virginia | ami-18758762 |
-| Ohio | ami-96200df3 |
-| N. California | ami-37360657 |
-| Oregon | ami-66b74f1e |
-
-***Photon OS 1.0, Revision 2 Binaries (Update: January 19th, 2017)***
-
-We've been busy updating RPMs in our repository for months now, to address both functional and security issues. However, our binaries have remained fixed since their release back in September 2015. In order to make it faster and easier to get an up-to-date Photon OS system, we've repackaged all of our binaries to include all of these RPM updates. For clarity, we'll call these updated binaries, which are still backed by the 1.0 repos, **1.0, Revision 2**.
-
-Choose the download that's right for you and click one of the links below.
-#### Photon OS 1.0, Revision 2 Binaries ####
-| Download | Size | sha1 checksum | md5 checksum |
-| --- | --- | --- | --- |
-| [Full ISO](https://bintray.com/vmware/photon/download_file?file_path=photon-1.0-62c543d.iso) | 2.4GB | c4c6cb94c261b162e7dac60fdffa96ddb5836d66 | 69500c07d25ce9caa9b211a8b6eefd61 |
-| [OVA with virtual hardware v10](https://bintray.com/vmware/photon/download_file?file_path=photon-custom-hw10-1.0-62c543d.ova) | 159MB | 6e9087ed25394e1bbc56496ae368b8c77efb21cb | 3e4b1a5f24ab463677e3edebd1ecd218 |
-| [OVA with virtual hardware v11](https://bintray.com/vmware/photon/download_file?file_path=photon-custom-hw11-1.0-62c543d.ova) | 159MB | 18c1a6d31545b757d897c61a0c3cc0e54d8aeeba | be9961a232ad5052b746fccbb5a9672d |
-| [Amazon AMI](https://bintray.com/vmware/photon/download_file?file_path=photon-ami-1.0-62c543d.tar.gz) | 590MB | 6df9ed7fda83b54c20bc95ca48fa467f09e58548 | 5615a56e5c37f4a9c762f6e3bda7f9d0 |
-| [Google GCE](https://bintray.com/vmware/photon/download_file?file_path=photon-gce-1.0-62c543d.tar.gz) | 164MB | 1feb68ec00aaa79847ea7d0b00eada7a1ac3b527 | 5adb7b30803b168e380718db731de5dd |
-
-There are a few other ways to create a Photon OS instance: building the ISO from source cloned from the [GitHub Photon OS repository](https://github.com/vmware/photon) using the [instructions](https://github.com/vmware/photon/blob/master/docs/build-photon.md) found in the repo, using the [scripted installation](https://github.com/vmware/photon/blob/master/docs/kickstart.md), or [booting Photon OS over a network](https://github.com/vmware/photon/blob/master/docs/PXE-boot.md) with PXE. These options are beyond the scope of this document. If you're interested in these methods, follow the links provided above.
-
-***Photon OS 1.0, Original Binaries***
-
-If you're looking for the original Photon OS, version 1.0 binaries, they can still be found here:
-#### Photon OS 1.0, Original Binaries ####
-| Download | Size | sha1 checksum | md5 checksum |
-| --- | --- | --- | --- |
-| [Full ISO](https://bintray.com/artifact/download/vmware/photon/photon-1.0-13c08b6.iso) | 2.1GB | ebd4ae77f2671ef098cf1e9f16224a4d4163bad1 | 15aea2cf5535057ecb019f3ee3cc9d34 |
-| [OVA with virtual hardware v10](https://bintray.com/vmware/photon/download_file?file_path=photon-custom-hw10-1.0-13c08b6.ova) | 292MB | 8669842446b6aac12bd3c8158009305d46b95eac | 3ca7fa49128d1fd16eef1993cdccdd4d |
-| [OVA with virtual hardware v11](https://bintray.com/vmware/photon/download_file?file_path=photon-custom-hw11-1.0-13c08b6.ova) | 292MB | 2ee56c5ce355fe6c59888f2f3731fd9d51ff0b4d | 8838498fb8202aac5886518483639073 |
-| [Amazon AMI](https://bintray.com/artifact/download/vmware/photon/photon-ami-1.0-13c08b6.tar.gz) | 148.5MB | 91deb839d788ec3c021c6366c192cf5ac601575b | fe657aafdc8189a85430e19ef82fc04a |
-| [Google GCE](https://bintray.com/artifact/download/vmware/photon/photon-gce-1.0-13c08b6.tar.gz) | 411.7MB | 397ccc7562f575893c89a899d9beafcde6747d7d | 67a671e032996a26d749b7d57b1b1887 |
deleted file mode 100644
@@ -1,169 +0,0 @@
-# Photon OS Frequently Asked Questions
-
-* [What is Photon OS?](#q-what-is-photon-os)
-* [How do I get started with Photon OS?](#q-how-do-i-get-started-with-photon-os)
-* [Can I upgrade my existing Photon OS 1.0 VMs?](#q-can-i-upgrade-my-existing-photon-os-10-vms)
-* [What kind of support comes with Photon OS?](#q-what-kind-of-support-comes-with-photon-os)
-* [How can I contribute to Photon OS?](#q-how-can-i-contribute-to-photon-os)
-* [How is Photon OS patched?](#q-how-is-photon-os-patched)
-* [How does Photon OS relate to Project Lightwave?](#q-how-does-photon-os-relate-to-project-lightwave)
-* [Will VMware continue to support other container host runtime offerings on vSphere?](#q-will-vmware-continue-to-support-other-container-host-runtime-offerings-on-vsphere)
-* [How to report a security vulnerability in Photon OS?](#q-how-to-report-a-security-vulnerability-in-photon-os)
-* [What are the Docker improvements in Photon OS 2.0?](#q-what-are-the-docker-improvements-in-photon-os-20)
-* [Why is VMware creating Photon OS?](#q-why-is-vmware-creating-photon-os)
-* [Why is VMware open-sourcing Photon OS?](#q-why-is-vmware-open-sourcing-photon-os)
-* [In what way is Photon OS "optimized for VMware"?](#q-in-what-way-is-photon-os-optimized-for-vmware)
-* [Why can't I SSH in as root?](#q-why-cant-i-ssh-in-as-root)
-* [Why is netstat not working?](#q-why-is-netstat-not-working)
-* [Why do all of my cloned Photon OS instances have the same IP address when using DHCP?](#q-why-do-all-of-my-cloned-photon-os-instances-have-the-same-ip-address-when-using-dhcp)
-* [How to install new packages?](#how-to-install-new-packages)
-* [Why is the yum command not working in a minimal installation?](#q-why-is-the-yum-command-not-working-in-a-minimal-installation)
-* [How to install all build essentials?](#q-how-to-install-all-build-essentials)
-* [How to build a new package for Photon OS?](#q-how-to-build-a-new-package-for-photon-os)
-* [I just booted into a freshly installed Photon OS instance, why isn't "docker ps" working?](#q-i-just-booted-into-a-freshly-installed-photon-os-instance-why-isnt-docker-ps-working)
-* [What is the difference between Minimal and Full installation?](#q-what-is-the-difference-between-minimal-and-full-installation)
-* [What packages are included in Minimal and Full?](#q-what-packages-are-included-in-minimal-and-full)
-* [How do I transfer or share files between Photon OS and my host machine?](#q-how-do-i-transfer-or-share-files-between-photon-os-and-my-host-machine)
-* [Why is the ISO over 2GB, when I hear that Photon OS is a minimal container runtime?](#q-why-is-the-iso-over-2gb-when-i-hear-that-photon-os-is-a-minimal-container-runtime)
-
-***
-
-# Getting Started
-
-## Q. What is Photon OS?
-A. Photon OS™ is an open source Linux container host optimized for cloud-native applications, cloud platforms, and VMware infrastructure. Photon OS provides a secure run-time environment for efficiently running containers. For an overview, see [https://vmware.github.io/photon/](https://vmware.github.io/photon/).
-
-## Q. How do I get started with Photon OS?
-A. Start by deciding your target platform. Photon OS 2.0 has been certified in public cloud environments - Microsoft Azure (new), Google Compute Engine (GCE), and Amazon Elastic Compute Cloud (EC2) - as well as on VMware vSphere, VMware Fusion, and VMware Workstation.
-Next, download the latest binary distribution for your target platform. The binaries are hosted on [https://bintray.com/vmware/photon/](https://bintray.com/vmware/photon/). For download instructions, see [Downloading Photon OS](Downloading-Photon-OS.md).
-Finally, go to the installation instructions for your target platform, which are listed here: [Quick Start](photon-admin-guide.md#getting-started-with-photon-os-20).
-
-## Q. Can I upgrade my existing Photon OS 1.0 VMs?
-A. Yes, there is an in-place upgrade path for Photon OS 1.0 implementations. You simply download an upgrade package, run a script, and reboot the VM. Refer to the instructions in [Upgrading to Photon OS 2.0](Upgrading-to-Photon-OS-2.0.md).
-
-## Q. What kind of support comes with Photon OS?
-A. Photon OS is supported through community efforts and direct developer engagement in the communities. Potential users of Photon OS should start with the [Photon microsite](http://vmware.com/photon).
-
-Developers who want the source code, including those interested in making contributions, should visit the [Photon OS GitHub repository](https://github.com/vmware/photon).
-
-## Q. How can I contribute to Photon OS?
-A. We welcome community participation in the development of Photon OS and look forward to broad ecosystem engagement around the project. Getting your idea into Photon OS is just a [GitHub](https://vmware.github.io/photon) pull request away. When you submit a pull request, you'll be asked to accept the Contributor License Agreement (CLA).
-
-## Q. How is Photon OS patched?
-A. Within a major release, updates are delivered as package updates. Security updates are delivered on an as-needed basis. Non-security updates happen quarterly, but may not include every single package update. The focus is on delivering a valid, functional updated stack every quarter.
-
-Photon OS isn't "patched" as a whole; instead, individual packages are updated (potentially with patches applied to an individual package). For instance, if a package releases a fix for a critical vulnerability, we'll update the package in the Photon OS repository - for critical issues, probably within a day or two. At that point, customers get the updated package by running "tdnf update <package>".
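-
-For example, to pull in an updated version of a single package (the package name here is illustrative):
-~~~~
-    tdnf update openssl
-~~~~
-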
-## Q. How does Photon OS relate to Project Lightwave?
-A. Project Lightwave is an open-sourced project that provides enterprise-grade identity and access management services, and can be used to solve key security, governance, and compliance challenges for a variety of use cases within the enterprise.
-Through integration between Photon OS and Project Lightwave, organizations can enforce security and governance on container workloads - for example, by ensuring that only authorized containers are run on authorized hosts, by authorized users. For details about Lightwave, see [https://github.com/vmware/lightwave](https://github.com/vmware/lightwave).
-
-## Q. Will VMware continue to support other container host runtime offerings on vSphere?
-A. Yes. VMware is committed to delivering an infrastructure for all workloads, to vSphere having the largest guest OS support in the industry, and to supporting customer choice.
-Toward those goals, VMware will continue to work with our technology partners to support new guest operating systems and container host runtimes as they come to market. Open-sourcing Photon OS makes the optimizations and enhancements for container host runtimes on the VMware platform available as a reference implementation for other container host runtimes as well.
-
-# Photon OS
-## Q. What is Photon OS?
-A. Photon OS is an open source Linux container host runtime optimized for VMware vSphere®. Photon OS is extensible, lightweight, and supports the most common container formats, including Docker, Rocket, and Garden. Photon OS includes a small-footprint, yum-compatible, package-based lifecycle management system, and can support rpm-ostree image-based system versioning. When used with development tools and environments such as VMware Fusion®, VMware Workstation™, and HashiCorp (Vagrant and Atlas), and a production runtime environment (vSphere, VMware vCloud® Air™), Photon OS allows seamless migration of container-based apps from development to production.
-
-## Q. How to report a security vulnerability in Photon OS?
-A. VMware encourages users who become aware of a security vulnerability in VMware products to contact VMware with details of the vulnerability. VMware has established an email address that should be used for reporting a vulnerability. Please send descriptions of any vulnerabilities found to security@vmware.com. Please include details on the software and hardware configuration of your system so that we can duplicate the issue being reported.
-
-Note: We encourage use of encrypted email. Our public PGP key is found at [kb.vmware.com/kb/1055](http://kb.vmware.com/kb/1055).
-
-VMware hopes that users encountering a new vulnerability will contact us privately, as it is in the best interests of our customers that VMware has an opportunity to investigate and confirm a suspected vulnerability before it becomes public knowledge.
-
-In the case of vulnerabilities found in third-party software components used in VMware products, please also notify VMware as described above.
-
-## Q. What are the Docker improvements in Photon OS 2.0?
-A. In Photon OS 2.0, the Docker image size (compressed and uncompressed) was reduced to less than a third of its size in Photon OS 1.0. This gain resulted from:
-- using toybox (instead of the standard core tools), which brings the Docker image size from 50MB (in 1.0) down to 14MB (in 2.0)
-- a package split - in Photon OS 2.0, the binary set contains only bash, tdnf, and toybox; all other installed packages are now libraries only.
-
-## Q. Why is VMware creating Photon OS?
-A. It's about workloads - VMware has always positioned our vSphere platform as a secure, highly performant platform for enterprise applications. With containers, providing an optimized runtime ensures that customers can embrace these new workload technologies without disrupting existing operations. Over time, Photon OS will extend the capabilities of the software-defined data center - such as security, identity, and resource management - to containerized workloads. Organizations can then leverage a single infrastructure architecture for both traditional and cloud-native apps, and leverage existing investments in tools, skills, and technologies. This converged environment will simplify operation and troubleshooting, and ease the adoption of cloud-native apps.
-
-Photon OS can provide a reference implementation for optimizing containers on VMware platforms across compute, network, storage, and management. For example, Photon OS can deliver performance through kernel tuning to remove redundant caching between the Linux kernel and the vSphere hypervisor, advanced security services through network micro-segmentation delivered by VMware NSX™, and more.
-
-## Q. Why is VMware open-sourcing Photon OS?
-A. Open-sourcing Photon OS encourages discussion, innovation, and collaboration with others in the container ecosystem. In particular, we want to make sure the innovations we introduce to Photon OS to run containers effectively on VMware are also available to any other container runtime OS.
-Additionally, VMware is committed to supporting industry and de facto standards, as doing so also supports stronger security, interoperability, and choice for our customers.
-
-## Q. In what way is Photon OS "optimized for VMware"?
-
-A. Photon OS 1.0 introduced extensive optimizations for VMware environments, which are described in detail in the following VMware white paper: [Deploying Cloud-Native Applications with Photon OS](https://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/whitepaper/vmware-deploying-cloud-native-apps-with-photon-os.pdf). Photon OS 2.0 enhances the VMware optimization. The kernel message dumper (new in Photon OS 2.0) is a paravirt feature that extends debugging support. In case of a guest panic, the kernel (through the paravirt channel) dumps the entire kernel log buffer (including the panic message) into the VMware log file (vmware.log) for easy, consolidated access. Previously, this information was stored in a huge vmss (VM suspend state) file.
-
-## Q. Why can't I SSH in as root?
-A. By default, Photon OS does not permit root login over SSH. To allow login as root via SSH, set PermitRootLogin yes in /etc/ssh/sshd_config, and restart the sshd daemon.
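-For example (a minimal sketch; you can also edit the file by hand):
-~~~~
-    sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin yes/' /etc/ssh/sshd_config
-    systemctl restart sshd
-~~~~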
-
-## Q. Why is netstat not working?
-A. netstat is deprecated; use ss or ip (part of iproute2) instead.
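-For example, to list listening TCP sockets (the old netstat -tlnp):
-~~~~
-    ss -tlnp
-~~~~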
-
-## Q. Why do all of my cloned Photon OS instances have the same IP address when using DHCP?
-A. Photon OS uses the contents of /etc/machine-id to determine the DUID that is used for DHCP requests. If you're going to use a Photon OS instance as the base system for cloning to create additional Photon OS instances, you should clear the machine-id with:
-~~~~
-    echo -n > /etc/machine-id
-~~~~
-With this value cleared, systemd regenerates the machine-id at boot and, as a result, all DHCP requests will contain a unique DUID.
-
-# How to install new packages?
-## Q. Why is the yum command not working in a minimal installation?
-A. yum has package dependencies that make the system larger than it needs to be. Photon OS includes [tdnf](https://github.com/vmware/tdnf) - 'tiny' dandified yum - to provide package management and yum functionality in a much smaller footprint. To install packages from the CD-ROM, first mount it:
-~~~~
-     mkdir -p /media/cdrom
-     mount /dev/cdrom /media/cdrom
-~~~~
-Then, you can use tdnf to install new packages. For example, to install the vim editor:
-~~~~
-     tdnf install vim
-~~~~
-## Q. How to install all build essentials?
-A. The following command can be used to install all the build essentials:
-~~~~
-curl -L https://git.io/v1boE | xargs -I {} tdnf install -y {}
-~~~~
-## Q. How to build a new package for Photon OS?
-A. Assuming you have an Ubuntu development environment, set it up and pull the latest code into /workspace. Let's assume your package is named foo, with version 1.0.
-~~~~
-    cp foo-1.0.tar.gz /workspace/photon/SOURCES
-    cp foo.spec /workspace/photon/SPECS/foo/
-    cd /workspace/photon/support/package-builder
-    sudo python ./build_package.py -i foo
-~~~~
-## Q. I just booted into a freshly installed Photon OS instance, why isn't "docker ps" working?
-A. Make sure the docker daemon is running. By design and by default in Photon OS, the docker daemon/engine is not started at boot time. To start the docker daemon for the current session, use the command:
-~~~~
-    systemctl start docker
-~~~~
-To start the docker daemon on boot, use the command:
-~~~~
-    systemctl enable docker
-~~~~
-## Q. What is the difference between Minimal and Full installation?
-A. Minimal is the minimal set of packages for a container runtime, plus cloud-init.
-Full contains all the packages shipped with the ISO.
-
-## Q. What packages are included in Minimal and Full?
-A. See [packages_minimal.json](https://github.com/vmware/photon/blob/dev/common/data/packages_minimal.json) as an example.
-
-## Q. How do I transfer or share files between Photon OS and my host machine?
-A. Use vmhgfs-fuse to transfer files between Photon OS and your host machine:
-1. Enable shared folders in the Workstation or Fusion UI (edit the VM settings and choose Options -> Shared Folders, then enable them).
-2. Make sure open-vm-tools is installed (it is installed by default in the Minimal installation and the OVA import).
-3. Run vmware-hgfsclient to list the shares.
-
-Next, do one of the following:
-
-- Run the following to mount:
-~~~~
-vmhgfs-fuse .host:/$(vmware-hgfsclient) /mnt/hgfs
-~~~~
-OR
-
-- Add the following line to /etc/fstab:
-~~~~
-.host:/ /mnt/hgfs fuse.vmhgfs-fuse <options> 0 0
-~~~~
-
-## Q. Why is the ISO over 2GB, when I hear that Photon OS is a minimal container runtime?
-A. The ISO includes a repository with all Photon OS packages. When you mount the ISO on a machine and boot into the Photon installer, you can choose the Photon Minimal installation option and the hypervisor-optimized Linux kernel, which reduces the installed size.
\ No newline at end of file
deleted file mode 100644
@@ -1,34 +0,0 @@
-<sub>Posted on January 13, 2016 by [https://il.linkedin.com/in/knesenko '''Kiril Nesenko''']</sub><br />
-
-To install the DCOS CLI:
-First, install virtualenv. The Python tool virtualenv is used to manage the DCOS CLI's environment.
-<source lang="bash" enclose="div">
-sudo pip install virtualenv
-</source><br />
-Tip: On some older Python versions, ignore any 'Insecure Platform' warnings. For more information, see https://virtualenv.pypa.io/en/latest/installation.html.
-From the command line, create a new directory named dcos, navigate into it, and download the install script.
-<source lang="bash" enclose="div">
-$ mkdir dcos
-$ cd dcos
-$ curl -O https://downloads.mesosphere.io/dcos-cli/install.sh
-</source><br />
-Run the DCOS CLI install script, passing an install directory and the URL of your Mesos master node (its hostname prefixed with http://):
-<source lang="bash" enclose="div">
-$ bash install.sh <install_dir> <mesos-master-host>
-</source><br />
-For example, if the hostname of your Mesos master node is mesos-master.example.com:
-<source lang="bash" enclose="div">
-$ bash install.sh . http://mesos-master.example.com
-</source><br />
-Follow the on-screen DCOS CLI instructions and enter the Mesosphere verification code. You can ignore any Python 'Insecure Platform' warnings. The installer will ask you to confirm whether you want to add DCOS to your system PATH:
-<source lang="bash" enclose="div">
-Modify your bash profile to add DCOS to your PATH? [yes/no]
-</source><br />
-Since the DCOS CLI is normally used with a DCOS cluster, reconfigure the Mesos master and Marathon URLs for a plain Mesos cluster with the following commands:
-<source lang="bash" enclose="div">
-dcos config set core.mesos_master_url http://<mesos-master-host>:5050
-dcos config set marathon.url http://<marathon-host>:8080
-</source><br />
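-To check that the configuration took effect, you can print the values back (assuming your CLI version supports the show subcommand):
-<source lang="bash" enclose="div">
-dcos config show core.mesos_master_url
-dcos config show marathon.url
-</source><br />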
-<br /><br />
-Next - [[Install and Configure Mesos DNS on a Mesos Cluster]]
\ No newline at end of file
deleted file mode 100644
@@ -1,52 +0,0 @@
-<sub>Posted on January 13, 2016 by [https://il.linkedin.com/in/knesenko '''Kiril Nesenko''']</sub><br />
-<br />
-In my previous How-To, [[Install and Configure a Production Ready Mesos Cluster on PhotonOS]], I showed how to set up the cluster itself. In this How-To I explain how to install and configure Marathon for the Mesos cluster. All the following steps should be done on each Mesos master.
-First, download Marathon:
-<source lang="bash" enclose="div">
-root@pt-mesos-master2 [ ~ ]# mkdir -p /opt/mesosphere/marathon/ && cd /opt/mesosphere/marathon/
-root@pt-mesos-master2 [ /opt/mesosphere/marathon ]# curl -O http://downloads.mesosphere.com/marathon/v0.13.0/marathon-0.13.0.tgz
-root@pt-mesos-master2 [ /opt/mesosphere/marathon ]# tar -xf marathon-0.13.0.tgz
-root@pt-mesos-master2 [ /opt/mesosphere/marathon ]# mv marathon-0.13.0 marathon
-</source><br />
-Create a configuration for Marathon:
-<source lang="bash" enclose="div">
-root@pt-mesos-master2 [ /opt/mesosphere/marathon ]# ls -l /etc/marathon/conf/
-total 8
--rw-r--r-- 1 root root 68 Dec 24 14:33 master
--rw-r--r-- 1 root root 71 Dec 24 14:33 zk
-root@pt-mesos-master2 [ /opt/mesosphere/marathon ]# cat /etc/marathon/conf/*
-zk://192.168.0.2:2181,192.168.0.1:2181,192.168.0.3:2181/mesos
-zk://192.168.0.2:2181,192.168.0.1:2181,192.168.0.3:2181/marathon
-root@pt-mesos-master2 [ /opt/mesosphere/marathon ]# cat /etc/systemd/system/marathon.service
-[Unit]
-Description=Marathon
-After=network.target
-Wants=network.target
-
-[Service]
-Environment="JAVA_HOME=/opt/OpenJDK-1.8.0.51-bin"
-ExecStart=/opt/mesosphere/marathon/bin/start \
-    --master zk://192.168.0.2:2181,192.168.0.1:2181,192.168.0.3:2181/mesos \
-    --zk zk://192.168.0.2:2181,192.168.0.1:2181,192.168.0.3:2181/marathon
-Restart=always
-RestartSec=20
-
-[Install]
-WantedBy=multi-user.target
-</source><br />
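-After creating the unit file, reload systemd and enable the service so Marathon starts on boot:
-<source lang="bash" enclose="div">
-root@pt-mesos-master2 [ /opt/mesosphere/marathon ]# systemctl daemon-reload
-root@pt-mesos-master2 [ /opt/mesosphere/marathon ]# systemctl enable marathon
-</source><br />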
-Finally, we need to adjust the Marathon startup script, since PhotonOS does not use the standard JRE layout. Make sure the script launches Java from JAVA_HOME:
-<source lang="bash" enclose="div">
-root@pt-mesos-master2 [ /opt/mesosphere/marathon ]# tail -n3 /opt/mesosphere/marathon/bin/start
-# Start Marathon
-marathon_jar=$(find "$FRAMEWORK_HOME"/target -name 'marathon-assembly-*.jar' | sort | tail -1)
-exec "${JAVA_HOME}/bin/java" "${java_args[@]}" -jar "$marathon_jar" "${app_args[@]}"
-</source><br />
-Now we can start the Marathon service:
-<source lang="bash" enclose="div">
-root@pt-mesos-master2 [ /opt/mesosphere/marathon ]# systemctl start marathon
-root@pt-mesos-master2 [ /opt/mesosphere/marathon ]# ps -ef | grep marathon
-root     15821     1 99 17:14 ?        00:00:08 /opt/OpenJDK-1.8.0.51-bin/bin/java -jar /opt/mesosphere/marathon/bin/../target/scala-2.11/marathon-assembly-0.13.0.jar --master zk://192.168.0.2:2181,192.168.0.1:2181,192.168.0.3:2181/mesos --zk zk://192.168.0.2:2181,192.168.0.1:2181,192.168.0.3:2181/marathon
-root     15854 14692  0 17:14 pts/0    00:00:00 grep --color=auto marathon
-</source><br />
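-To confirm Marathon is up, you can also query its REST API (it listens on port 8080 by default):
-<source lang="bash" enclose="div">
-root@pt-mesos-master2 [ /opt/mesosphere/marathon ]# curl http://localhost:8080/v2/info
-</source><br />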
-<br /><br />
-Next - [[Install and Configure DCOS CLI for Mesos]]
\ No newline at end of file
deleted file mode 100644
@@ -1,141 +0,0 @@
-<sub>Posted on January 13, 2016 by [https://il.linkedin.com/in/knesenko '''Kiril Nesenko''']</sub><br />
-= Overview =<br />
-Before you read this How-To, please read [[Install and Configure a Production-Ready Mesos Cluster on PhotonOS]], [[Install and Configure Marathon for Mesos Cluster on PhotonOS]], and [[Install and Configure DCOS CLI for Mesos]].
-After you have fully installed and configured the Mesos cluster, you can execute jobs on it. However, if you want service discovery and load balancing capabilities, you will need Mesos-DNS and HAProxy. In this How-To I explain how to install and configure Mesos-DNS for your Mesos cluster.
-Mesos-DNS supports service discovery in Apache Mesos clusters. It allows applications and services running on Mesos to find each other through the domain name system (DNS), similarly to how services discover each other throughout the Internet. Applications launched by Marathon are assigned names like search.marathon.mesos. Mesos-DNS translates these names to the IP address and port on the machine currently running each application. To connect to an application in the Mesos datacenter, all you need to know is its name. Every time a connection is initiated, the DNS translation will point to the right machine in the datacenter.
-[[ http://mesosphere.github.io/mesos-dns/img/architecture.png ]]<br />
-= Installation =<br />
-I will explain how to run Mesos-DNS as a Docker container and manage it through Marathon. First, create a configuration file for the mesos-dns container:
-<source lang="bash" enclose="div">
-root@pt-mesos-node1 [ ~ ]# cat /etc/mesos-dns/config.json
-{
-  "zk": "zk://192.168.0.1:2181,192.168.0.2:2181,192.168.0.3:2181/mesos",
-  "masters": ["192.168.0.1:5050", "192.168.0.2:5050", "192.168.0.3:5050"],
-  "refreshSeconds": 60,
-  "ttl": 60,
-  "domain": "mesos",
-  "port": 53,
-  "resolvers": ["8.8.8.8"],
-  "timeout": 5,
-  "httpon": true,
-  "dnson": true,
-  "httpport": 8123,
-  "externalon": true,
-  "SOAMname": "ns1.mesos",
-  "SOARname": "root.ns1.mesos",
-  "SOARefresh": 60,
-  "SOARetry":   600,
-  "SOAExpire":  86400,
-  "SOAMinttl": 60
-}
-</source><br />
-'''Create Application Run File'''<br />
-The next step is to create a JSON file and run the service from Marathon for HA. You can launch the service via the Marathon REST API or via the DCOS CLI.
-<source lang="bash" enclose="div">
-client:~/mesos/jobs$ cat mesos-dns-docker.json
-{
-    "args": [
-        "/mesos-dns",
-        "-config=/config.json"
-    ],
-    "container": {
-        "docker": {
-            "image": "mesosphere/mesos-dns",
-            "network": "HOST"
-        },
-        "type": "DOCKER",
-        "volumes": [
-            {
-                "containerPath": "/config.json",
-                "hostPath": "/etc/mesos-dns/config.json",
-                "mode": "RO"
-            }
-        ]
-    },
-    "cpus": 0.2,
-    "id": "mesos-dns-docker",
-    "instances": 3,
-    "constraints": [["hostname", "CLUSTER", "pt-mesos-node2.example.com"]]
-}
-</source>
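-For example, launch it with the DCOS CLI (a POST of the same JSON to Marathon's /v2/apps endpoint achieves the same thing):
-<source lang="bash" enclose="div">
-client:~/mesos/jobs$ dcos marathon app add mesos-dns-docker.json
-</source>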
-Now we can see in the Marathon and Mesos UIs that we launched the application.
-<br /><br />
-'''Setup Resolvers and Testing'''<br />
-To allow Mesos tasks to use Mesos-DNS as the primary DNS server, you must edit the file ''/etc/resolv.conf'' on every slave and add a new nameserver. For instance, if ''mesos-dns'' runs on the server with IP address ''192.168.0.5'', add that address as the first nameserver at the beginning of ''/etc/resolv.conf'' on every slave.
-<source lang="bash" enclose="div">
-root@pt-mesos-node2 [ ~/mesos-dns ]# cat /etc/resolv.conf
-# This file is managed by systemd-resolved(8). Do not edit.
-#
-# Third party programs must not access this file directly, but
-# only through the symlink at /etc/resolv.conf. To manage
-# resolv.conf(5) in a different way, replace the symlink by a
-# static file or a different symlink.
-nameserver 192.168.0.5
-nameserver 192.168.0.4
-nameserver 8.8.8.8
-</source><br />
-Let's run a simple Docker app and see if we can resolve it in DNS.
-<source lang="bash" enclose="div">
-client:~/mesos/jobs$ cat docker.json
-{
-    "id": "docker-hello",
-    "container": {
-        "docker": {
-            "image": "centos"
-        },
-        "type": "DOCKER",
-        "volumes": []
-    },
-    "cmd": "echo hello; sleep 10000",
-    "mem": 16,
-    "cpus": 0.1,
-    "instances": 10,
-    "disk": 0.0,
-    "ports": [0]
-}
-</source>
-<source lang="bash" enclose="div">
-client:~/mesos/jobs$ dcos marathon app add docker.json
-</source><br />
-Let's try to resolve it.
-
-<pre>
-root@pt-mesos-node2 [ ~/mesos-dns ]# dig _docker-hello._tcp.marathon.mesos SRV
-;; Truncated, retrying in TCP mode.
-; <<>> DiG 9.10.1-P1 <<>> _docker-hello._tcp.marathon.mesos SRV
-;; global options: +cmd
-;; Got answer:
-;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 25958
-;; flags: qr aa rd ra; QUERY: 1, ANSWER: 10, AUTHORITY: 0, ADDITIONAL: 10
-;; QUESTION SECTION:
-;_docker-hello._tcp.marathon.mesos. IN SRV
-;; ANSWER SECTION:
-_docker-hello._tcp.marathon.mesos. 60 IN SRV 0 0 31998 docker-hello-4bjcf-s2.marathon.slave.mesos.
-_docker-hello._tcp.marathon.mesos. 60 IN SRV 0 0 31844 docker-hello-jexm6-s1.marathon.slave.mesos.
-_docker-hello._tcp.marathon.mesos. 60 IN SRV 0 0 31111 docker-hello-6ms44-s2.marathon.slave.mesos.
-_docker-hello._tcp.marathon.mesos. 60 IN SRV 0 0 31719 docker-hello-muhui-s2.marathon.slave.mesos.
-_docker-hello._tcp.marathon.mesos. 60 IN SRV 0 0 31360 docker-hello-jznf4-s1.marathon.slave.mesos.
-_docker-hello._tcp.marathon.mesos. 60 IN SRV 0 0 31306 docker-hello-t41ti-s1.marathon.slave.mesos.
-_docker-hello._tcp.marathon.mesos. 60 IN SRV 0 0 31124 docker-hello-mq3oz-s1.marathon.slave.mesos.
-_docker-hello._tcp.marathon.mesos. 60 IN SRV 0 0 31816 docker-hello-tcep8-s1.marathon.slave.mesos.
-_docker-hello._tcp.marathon.mesos. 60 IN SRV 0 0 31604 docker-hello-5uu37-s1.marathon.slave.mesos.
-_docker-hello._tcp.marathon.mesos. 60 IN SRV 0 0 31334 docker-hello-jqihw-s1.marathon.slave.mesos.
-
-;; ADDITIONAL SECTION:
-docker-hello-muhui-s2.marathon.slave.mesos. 60 IN A 192.168.0.5
-docker-hello-4bjcf-s2.marathon.slave.mesos. 60 IN A 192.168.0.5
-docker-hello-jexm6-s1.marathon.slave.mesos. 60 IN A 192.168.0.6
-docker-hello-jqihw-s1.marathon.slave.mesos. 60 IN A 192.168.0.6
-docker-hello-mq3oz-s1.marathon.slave.mesos. 60 IN A 192.168.0.6
-docker-hello-tcep8-s1.marathon.slave.mesos. 60 IN A 192.168.0.6
-docker-hello-6ms44-s2.marathon.slave.mesos. 60 IN A 192.168.0.5
-docker-hello-t41ti-s1.marathon.slave.mesos. 60 IN A 192.168.0.4
-docker-hello-jznf4-s1.marathon.slave.mesos. 60 IN A 192.168.0.4
-docker-hello-5uu37-s1.marathon.slave.mesos. 60 IN A 192.168.0.4
-;; Query time: 0 msec
-;; SERVER: 192.168.0.5#53(192.168.0.5)
-;; WHEN: Sun Dec 27 14:36:32 UTC 2015
-;; MSG SIZE  rcvd: 1066
-</pre>
-
-We can see that we can resolve our app!
\ No newline at end of file
deleted file mode 100644
@@ -1,171 +0,0 @@
-== Overview ==
-For this setup I will use 3 Mesos masters and 3 slaves. On each Mesos master I will run a Zookeeper, meaning that we will have 3 Zookeepers as well. The Mesos cluster will be configured with a quorum of 2. For networking, Mesos uses Mesos-DNS. I tried to run Mesos-DNS as a container, but ran into some resolving issues, so in my next How-To I will explain how to configure Mesos-DNS and run it through Marathon. Photon hosts will be used for both masters and slaves.<br />
-<br />
-''' Masters: '''<br />
-{| class="wikitable"
-! style="text-align: center; font-weight: bold;" | Hostname
-! style="font-weight: bold;" | IP Address
-|-
-| pt-mesos-master1.example.com
-| 192.168.0.1
-|-
-| pt-mesos-master2.example.com
-| 192.168.0.2
-|-
-| pt-mesos-master3.example.com
-| 192.168.0.3
-|}
-''' Agents: '''<br />
-{| class="wikitable"
-! style="text-align: center; font-weight: bold;" | Hostname
-! style="font-weight: bold;" | IP Address
-|-
-| pt-mesos-node1.example.com
-| 192.168.0.4
-|-
-| pt-mesos-node2.example.com
-| 192.168.0.5
-|-
-| pt-mesos-node3.example.com
-| 192.168.0.6
-|}
-<br />
-== Masters Installation and Configuration ==
-First of all, we will install Zookeeper. Since there is currently a bug in Photon related to the Zookeeper installation, I will use the tarball. Do the following on each master:
-<source lang="bash" enclose="div">
-root@pt-mesos-master1 [ ~ ]# mkdir -p /opt/mesosphere && cd /opt/mesosphere && wget http://apache.mivzakim.net/zookeeper/stable/zookeeper-3.4.7.tar.gz
-root@pt-mesos-master1 [ /opt/mesosphere ]# tar -xf zookeeper-3.4.7.tar.gz && mv zookeeper-3.4.7 zookeeper
-root@pt-mesos-master1 [ ~ ]# cat /opt/mesosphere/zookeeper/conf/zoo.cfg | grep -v '#'
-tickTime=2000
-initLimit=10
-syncLimit=5
-dataDir=/var/lib/zookeeper
-clientPort=2181
-server.1=192.168.0.1:2888:3888
-server.2=192.168.0.2:2888:3888
-server.3=192.168.0.3:2888:3888
-</source><br />
-Example of a Zookeeper systemd unit file:
-<source lang="bash" enclose="div">
-root@pt-mesos-master1 [ ~ ]# cat /etc/systemd/system/zookeeper.service
-[Unit]
-Description=Apache ZooKeeper
-After=network.target
-
-[Service]
-Environment="JAVA_HOME=/opt/OpenJDK-1.8.0.51-bin"
-WorkingDirectory=/opt/mesosphere/zookeeper
-ExecStart=/bin/bash -c "/opt/mesosphere/zookeeper/bin/zkServer.sh start-foreground"
-Restart=on-failure
-RestartSec=20
-User=root
-Group=root
-
-[Install]
-WantedBy=multi-user.target
-</source><br />
-Add the server id to the myid file so Zookeeper knows the id of your master server. Do this on each master, each with its own id:
-<source lang="bash" enclose="div">
-root@pt-mesos-master1 [ ~ ]# echo 1 > /var/lib/zookeeper/myid
-root@pt-mesos-master1 [ ~ ]# cat /var/lib/zookeeper/myid
-1
-</source><br />
-Now let's install the Mesos masters. Do the following on each master:
-<source lang="bash" enclose="div">
-root@pt-mesos-master1 [ ~ ]# yum -y install mesos
-Setting up Install Process
-Package mesos-0.23.0-2.ph1tp2.x86_64 already installed and latest version
-Nothing to do
-root@pt-mesos-master1 [ ~ ]# cat /etc/systemd/system/mesos-master.service
-[Unit]
-Description=Mesos Master
-After=network.target
-Wants=network.target
-
-[Service]
-ExecStart=/bin/bash -c "/usr/sbin/mesos-master \
-    --ip=192.168.0.1 \
-    --work_dir=/var/lib/mesos \
-    --log_dir=/var/log/mesos \
-    --cluster=EXAMPLE \
-    --zk=zk://192.168.0.1:2181,192.168.0.2:2181,192.168.0.3:2181/mesos \
-    --quorum=2"
-KillMode=process
-Restart=always
-RestartSec=20
-LimitNOFILE=16384
-CPUAccounting=true
-MemoryAccounting=true
-
-[Install]
-WantedBy=multi-user.target
-</source><br />
-Make sure you replace the '''''--ip''''' setting on each master. So far we have 3 masters with the Zookeeper and Mesos packages installed. Let's start the zookeeper and mesos-master services on each master:
-<source lang="bash" enclose="div">
-root@pt-mesos-master1 [ ~ ]# systemctl start zookeeper
-root@pt-mesos-master1 [ ~ ]# systemctl start mesos-master
-root@pt-mesos-master1 [ ~ ]# ps -ef | grep mesos
-root     11543     1  7 12:09 ?        00:00:01 /opt/OpenJDK-1.8.0.51-bin/bin/java -Dzookeeper.log.dir=. -Dzookeeper.root.logger=INFO,CONSOLE -cp /opt/mesosphere/zookeeper/bin/../build/classes:/opt/mesosphere/zookeeper/bin/../build/lib/*.jar:/opt/mesosphere/zookeeper/bin/../lib/slf4j-log4j12-1.6.1.jar:/opt/mesosphere/zookeeper/bin/../lib/slf4j-api-1.6.1.jar:/opt/mesosphere/zookeeper/bin/../lib/netty-3.7.0.Final.jar:/opt/mesosphere/zookeeper/bin/../lib/log4j-1.2.16.jar:/opt/mesosphere/zookeeper/bin/../lib/jline-0.9.94.jar:/opt/mesosphere/zookeeper/bin/../zookeeper-3.4.7.jar:/opt/mesosphere/zookeeper/bin/../src/java/lib/*.jar:/opt/mesosphere/zookeeper/bin/../conf: -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.local.only=false org.apache.zookeeper.server.quorum.QuorumPeerMain /opt/mesosphere/zookeeper/bin/../conf/zoo.cfg
-root     11581     1  0 12:09 ?        00:00:00 /usr/sbin/mesos-master --ip=192.168.0.1 --work_dir=/var/lib/mesos --log_dir=/var/log/mesos --cluster=EXAMPLE --zk=zk://192.168.0.2:2181,192.168.0.1:2181,192.168.0.3:2181/mesos --quorum=2
-root     11601  9117  0 12:09 pts/0    00:00:00 grep --color=auto mesos
-</source><br />
-== Slaves Installation and Configuration ==
-The steps for configuring a Mesos slave are simple and not very different from the master installation. The differences are that we won't install Zookeeper on the slaves, and that we will start Mesos in slave mode and tell the daemon to join the Mesos masters. Do the following on each slave:
-<source lang="bash" enclose="div">
-root@pt-mesos-node1 [ ~ ]# cat /etc/systemd/system/mesos-slave.service
-[Unit]
-Description=Photon instance running as a Mesos slave
-After=network-online.target docker.service
-
-[Service]
-Restart=on-failure
-RestartSec=10
-TimeoutStartSec=0
-ExecStartPre=/usr/bin/rm -f /tmp/mesos/meta/slaves/latest
-ExecStart=/bin/bash -c "/usr/sbin/mesos-slave \
-    --master=zk://192.168.0.1:2181,192.168.0.2:2181,192.168.0.3:2181/mesos \
-    --hostname=$(/usr/bin/hostname) \
-    --log_dir=/var/log/mesos_slave \
-    --containerizers=docker,mesos \
-    --docker=$(which docker) \
-    --executor_registration_timeout=5mins \
-    --ip=192.168.0.4"
-
-[Install]
-WantedBy=multi-user.target
-</source>
-Please make sure to replace the IP address in the '''''--ip''''' setting on each slave. Then start the mesos-slave service on each node.
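-For example (assuming the unit file above is saved as /etc/systemd/system/mesos-slave.service):
-<source lang="bash" enclose="div">
-root@pt-mesos-node1 [ ~ ]# systemctl daemon-reload
-root@pt-mesos-node1 [ ~ ]# systemctl enable mesos-slave
-root@pt-mesos-node1 [ ~ ]# systemctl start mesos-slave
-</source><br />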
-<br />
-Now you should have a ready Mesos cluster with 3 masters, 3 Zookeepers, and 3 slaves.
-[[https://www.devops-experts.com/wp-content/uploads/2015/12/Screen-Shot-2015-12-24-at-2.22.27-PM.png]]
-<br />
-If you want to use a private Docker registry, you will need to edit the Docker systemd file. In my example I am using the cse-artifactory.eng.vmware.com registry:
-<source lang="bash" enclose="div">
-root@pt-mesos-node1 [ ~ ]# cat /lib/systemd/system/docker.service
-[Unit]
-Description=Docker Daemon
-Wants=network-online.target
-After=network-online.target
-
-[Service]
-EnvironmentFile=-/etc/sysconfig/docker
-ExecStart=/bin/docker -d $OPTIONS -s overlay
-ExecReload=/bin/kill -HUP $MAINPID
-KillMode=process
-Restart=always
-MountFlags=slave
-LimitNOFILE=1048576
-LimitNPROC=1048576
-LimitCORE=infinity
-
-[Install]
-WantedBy=multi-user.target
-
-root@pt-mesos-node1 [ ~ ]# cat /etc/sysconfig/docker
-OPTIONS='--insecure-registry cse-artifactory.eng.vmware.com'
-root@pt-mesos-node1 [ ~ ]# systemctl daemon-reload && systemctl restart docker
-root@pt-mesos-node1 [ ~ ]# ps -ef | grep cse-artifactory
-root      5286     1  0 08:39 ?        00:00:00 /bin/docker -d --insecure-registry cse-artifactory.eng.vmware.com -s overlay
-</source><br />
-<br /><br />
-Next - [[Install and Configure Marathon for Mesos Cluster on PhotonOS]]
\ No newline at end of file
deleted file mode 100644
@@ -1,277 +0,0 @@
-<sub>Posted on January 13, 2016 by [https://il.linkedin.com/in/tgabay '''Tal Gabay''']</sub>
-
-= Overview =
-
-In this How-To, the steps for installing and configuring a Docker Swarm cluster, alongside DNS and Zookeeper, are presented.
-The cluster will be set up on VMware Photon hosts. <br />
-<br />
-A prerequisite to using this guide is familiarity with Docker Swarm - information can be found [https://docs.docker.com/swarm/ here].
-
-== Cluster description ==
-
-The cluster will have 2 Swarm Managers and 3 Swarm Agents:
-
-=== Masters ===
-
-{| class="wikitable"
-! style="text-align: center; font-weight: bold;" | Hostname
-! style="font-weight: bold;" | IP Address
-|-
-| pt-swarm-master1.example.com
-| 192.168.0.1
-|-
-| pt-swarm-master2.example.com
-| 192.168.0.2
-|}
-
-=== Agents ===
-
-{| class="wikitable"
-! style="text-align: center; font-weight: bold;" | Hostname
-! style="font-weight: bold;" | IP Address
-|-
-| pt-swarm-agent1.example.com
-| 192.168.0.3
-|-
-| pt-swarm-agent2.example.com
-| 192.168.0.4
-|-
-| pt-swarm-agent3.example.com
-| 192.168.0.5
-|}<br />
-
-= Docker Swarm Installation and Configuration =
-
-== Setting Up the Managers ==
-
-The following steps should be done on both managers.<br />
-Docker Swarm supports multiple methods of service discovery, but in order to use failover, Consul, etcd, or Zookeeper must be used. In this guide, Zookeeper is used.<br />
-Download the latest stable version of Zookeeper and create the '' zoo.cfg '' file under the '' conf '' directory:
-<br />
-<br />
-
-=== Zookeeper installation ===
-
-<source lang="bash" enclose="div">
-root@pt-swarm-master1 [ ~ ]# mkdir -p /opt/swarm && cd /opt/swarm && wget http://apache.mivzakim.net/zookeeper/stable/zookeeper-3.4.6.tar.gz
-root@pt-swarm-master1 [ /opt/swarm ]# tar -xf zookeeper-3.4.6.tar.gz && mv zookeeper-3.4.6 zookeeper
-root@pt-swarm-master1 [ ~ ]# cat /opt/swarm/zookeeper/conf/zoo.cfg | grep -v '#'
-tickTime=2000
-initLimit=10
-syncLimit=5
-dataDir=/var/lib/zookeeper
-clientPort=2181
-server.1=192.168.0.1:2888:3888
-server.2=192.168.0.2:2888:3888
-</source><br />
-The dataDir should be an empty, existing directory.
-From the Zookeeper documentation: every machine that is part of the ZooKeeper ensemble should know about every other machine in the ensemble. You accomplish this with the series of lines of the form server.id=host:port:port. You attribute the server id to each machine by creating a file named myid, one for each server, which resides in that server's data directory, as specified by the configuration file parameter dataDir. The myid file consists of a single line containing only the text of that machine's id. So the myid of server 1 would contain the text "1" and nothing else. The id must be unique within the ensemble and should have a value between 1 and 255.
-<br />
-<br />
-Set the Zookeeper ID:
-<source lang="bash" enclose="div">
-root@pt-swarm-master1 [ ~ ]# echo 1 > /var/lib/zookeeper/myid
-</source><br />
-Project Photon uses [https://en.wikipedia.org/wiki/Systemd Systemd] for services, so a zookeeper service should be created using a systemd unit file.<br />
-<source lang="bash" enclose="div">
-root@pt-swarm-master1 [ ~ ]# cat /etc/systemd/system/zookeeper.service
-[Unit]
-Description=Apache ZooKeeper
-After=network.target
-
-[Service]
-Environment="JAVA_HOME=/opt/OpenJDK-1.8.0.51-bin"
-WorkingDirectory=/opt/swarm/zookeeper
-ExecStart=/bin/bash -c "/opt/swarm/zookeeper/bin/zkServer.sh start-foreground"
-Restart=on-failure
-RestartSec=20
-User=root
-Group=root
-
-[Install]
-WantedBy=multi-user.target
-</source><br />
94
-Zookeeper comes with OpenJDK, so having Java on the Photon host is not a prerequisite. Simply direct the Environment variable to the location where the Zookeeper was extracted.
95
-Now you need to enable and start the service. Enabling the service will make sure that if the host restarts for some reason, the service will automatically start.<br />
96
-<source lang="bash" enclose="div">
97
-root@pt-swarm-master1 [ ~ ]# systemctl enable zookeeper
98
-root@pt-swarm-master1 [ ~ ]# systemctl start zookeeper
99
-</source><br />
100
-Verify that the service was able to start:<br />
101
-<source lang="bash" enclose="div">
102
-root@pt-swarm-master1 [ ~ ]# systemctl status zookeeper
103
-zookeeper.service - Apache ZooKeeper
104
-   Loaded: loaded (/etc/systemd/system/zookeeper.service; enabled)
105
-   Active: active (running) since Tue 2016-01-12 00:27:45 UTC; 10s ago
106
- Main PID: 4310 (java)
107
-   CGroup: /system.slice/zookeeper.service
108
-           `-4310 /opt/OpenJDK-1.8.0.51-bin/bin/java -Dzookeeper.log.dir=. -Dzookeeper.root.logger=INFO,CONSOLE -cp /opt/swarm/zookeeper/bin/../build/classes:/opt/swarm/zookeeper/bin/../build/lib/*.jar:/opt/s...
109
-</source><br />
110
-On the Manager you elected to be the Swarm Leader (primary), execute the following (if you do not have a specific leader in mind, choose one of the managers randomly):
111
-<source lang="bash" enclose="div">
112
-root@pt-swarm-master1 [ ~ ]# docker run -d --name=manager1 -p 8888:2375 swarm manage --replication --advertise 192.168.0.1:8888 zk://192.168.0.1,192.168.0.2/swarm
113
-</source>
114
-* '' docker run -d ''- run the container in the background.
115
-* '' --name=manager1 ''- give the container a name instead of the auto-generated one.
116
-* '' -p 8888:2375 ''- publish a container's port(s) to the host. In this case, when you connect to the host in port 8888, it connects to the container in port 2375.
117
-* swarm - the image to use for the container.
118
-* manage - the command to send to the container once it's up, alongside the rest of the parameters.
119
-* '' --replication '' - tells swarm that the manager is part of a a multi-manager configuration and that this primary manager competes with other manager instances for the primary role. The primary manager has the authority to manage the cluster, replicate logs, and replicate events that are happening inside the cluster.
120
-* '' --advertise 192.168.0.1:8888 ''- specifies the primary manager address. Swarm uses this address to advertise to the cluster when the node is elected as the primary.
121
-* '' zk://192.168.0.1,192.168.0.2/swarm ''- specifies the Zookeepers' location to enable service discovery. The /swarm path is arbitrary, just make sure that every node that joins the cluster specifies that same path (it is meant to enable support for multiple clusters with the same Zookeepers).<br />
122
-<br />
123
-On the second manager, execute the following:
124
-<source lang="bash" enclose="div">
125
-root@pt-swarm-master2 [ ~ ]# docker run -d --name=manager2 -p 8888:2375 swarm manage --replication --advertise 192.168.0.2:8888 zk://192.168.0.1,192.168.0.2/swarm
126
-</source>
127
-Notice that the only difference is the --advertise flag value. The first manager will not lose leadership following this command.<br />
128
-<br />
129
-Now 2 managers are alive, one is the primary and another is the replica. When we now look at the docker info on our primary manager, we can see the following information:
130
-<source lang="bash" enclose="div">
131
-docker-client:~$ docker -H tcp://192.168.0.1:8888 info
132
-Containers: 0
133
-Images: 0
134
-Role: primary
135
-Strategy: spread
136
-Filters: health, port, dependency, affinity, constraint
137
-Nodes: 0
138
-CPUs: 0
139
-Total Memory: 0 B
140
-Name: 82b8516efb7c
141
-</source>
142
-There are a few things that are worth noticing:
143
-* The info command can be executed from ANY machine that can reach the master. The -H tcp://&lt;ip&gt;:&lt;port&gt; flag specifies that the docker command should be executed against a remote host.
144
-* Containers - this is the result of the docker ps -a command for the cluster we just set up.
145
-* Images - the result of the docker images command.
146
-* Role - as expected, this is the primary manager.
147
-* Strategy - Swarm supports a number of strategies for placing containers in the cluster. spread means that a new container will run on the node with the fewest containers on it.
148
-* Filters - Swarm can choose where to run containers based on different filters supplied in the command line. More info can be found [https://docs.docker.com/swarm/scheduler/filter/ here].<br />
149
-<br />
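-For example, once the agents have joined the cluster (they are added below), the constraint filter lets you pin a container to a particular node (a sketch; the node name must match one of your agents):
-<source lang="bash" enclose="div">
-docker-client:~$ docker -H tcp://192.168.0.1:8888 run -d -e constraint:node==pt-swarm-agent1.example.com vmwarecna/nginx
-</source><br />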
150
-When we look at the docker info on our replica manager, we see the following information:
151
-<source lang="bash" enclose="div">
152
-docker-client:~$ docker -H tcp://192.168.0.2:8888 info
153
-Containers: 0
154
-Images: 0
155
-Role: replica
156
-Primary: 192.168.0.1:8888
157
-Strategy: spread
158
-Filters: health, port, dependency, affinity, constraint
159
-Nodes: 0
160
-CPUs: 0
161
-Total Memory: 0 B
162
-Name: ac06f826e507
163
-</source>
164
-Notice that the only differences between the two managers are:
165
-* Role - as expected, this is the replica manager.
166
-* Primary - the address of the primary manager.<br />
167
-<br />
168
-
169
-== Setting Up the Agents ==
170
-
171
-In Swarm, a node must "join" the cluster in order to become part of it. Do the following on each of the agents.
172
-Edit the '' /usr/lib/systemd/system/docker.service '' file so that each agent will be able to join the cluster:
173
-<source lang="bash" enclose="div">
174
-root@pt-swarm-agent1 [ ~ ]# cat /usr/lib/systemd/system/docker.service
175
-[Unit]
176
-Description=Docker Daemon
177
-Wants=network-online.target
178
-After=network-online.target
179
- 
180
-[Service]
181
-ExecStart=/bin/docker daemon -H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock --cluster-advertise eno16777984:2375 --cluster-store zk://192.168.0.1,192.168.0.2/swarm
182
-ExecReload=/bin/kill -HUP $MAINPID
183
-KillMode=process
184
-Restart=always
185
-MountFlags=slave
186
-LimitNOFILE=1048576
187
-LimitNPROC=1048576
188
-LimitCORE=infinity
189
- 
190
-[Install]
191
-WantedBy=multi-user.target
192
-</source>
193
-* '' -H tcp://0.0.0.0:2375 ''- This ensures that the Docker remote API on Swarm Agents is available over TCP for the Swarm Manager.
194
-* '' -H unix:///var/run/docker.sock ''- The Docker daemon can listen for Docker Remote API requests via three different types of socket: unix, tcp, and fd. 
195
-** tcp - If you need to access the Docker daemon remotely, you need to enable the tcp socket.
196
-** fd - On Systemd based systems, you can communicate with the daemon via Systemd socket activation.
197
-* '' --cluster-advertise <NIC>:2375 ''- advertises the machine on the network by specifying the network interface and the port used by the Swarm Managers.
198
-* '' --cluster-store zk://192.168.0.1,192.168.0.2/swarm ''- as we defined before, the service discovery being used here is Zookeeper.
199
-<br />
200
-Enable and start the docker service:
201
-<source lang="bash" enclose="div">
202
-root@pt-swarm-agent1 [ ~ ]# systemctl enable docker
203
-root@pt-swarm-agent1 [ ~ ]# systemctl daemon-reload && systemctl restart docker
204
-root@pt-swarm-agent1 [ ~ ]# systemctl status docker
205
-docker.service - Docker Daemon
206
-   Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled)
207
-   Active: active (running) since Tue 2016-01-12 00:46:18 UTC; 4s ago
208
- Main PID: 11979 (docker)
209
-   CGroup: /system.slice/docker.service
210
-           `-11979 /bin/docker daemon -H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock --cluster-advertise eno16777984:2375 --cluster-store zk://192.168.0.1,192.168.0.2/swarm
211
-</source><br />
212
-All that remains is to have the agents join the cluster:
213
-<source lang="bash" enclose="div">
214
-root@pt-swarm-agent1 [ ~ ]# docker run -d swarm join --advertise=192.168.0.3:2375 zk://192.168.0.1,192.168.0.2/swarm
215
-</source><br />
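-Repeat this on each agent, substituting that agent's own IP address in the --advertise flag. For example, on the second agent:
-<source lang="bash" enclose="div">
-root@pt-swarm-agent2 [ ~ ]# docker run -d swarm join --advertise=192.168.0.4:2375 zk://192.168.0.1,192.168.0.2/swarm
-</source><br />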
216
-A look at the output of the docker info command will now show:
217
-<source lang="bash" enclose="div">
218
-docker-client:~$ docker -H tcp://192.168.0.1:8888 info
219
-Containers: 3
220
-Images: 9
221
-Role: primary
222
-Strategy: spread
223
-Filters: health, port, dependency, affinity, constraint
224
-Nodes: 3
225
- pt-swarm-agent1.example.com: 192.168.0.3:2375
226
-  └ Status: Healthy
227
-  └ Containers: 1
228
-  └ Reserved CPUs: 0 / 1
229
-  └ Reserved Memory: 0 B / 2.055 GiB
230
-  └ Labels: executiondriver=native-0.2, kernelversion=4.1.3-esx, operatingsystem=VMware Photon/Linux, storagedriver=overlay
231
- pt-swarm-agent2.example.com: 192.168.0.4:2375
232
-  └ Status: Healthy
233
-  └ Containers: 1
234
-  └ Reserved CPUs: 0 / 1
235
-  └ Reserved Memory: 0 B / 2.055 GiB
236
-  └ Labels: executiondriver=native-0.2, kernelversion=4.1.3-esx, operatingsystem=VMware Photon/Linux, storagedriver=overlay
237
- pt-swarm-agent3.example.com: 192.168.0.5:2375
238
-  └ Status: Healthy
239
-  └ Containers: 1
240
-  └ Reserved CPUs: 0 / 1
241
-  └ Reserved Memory: 0 B / 2.055 GiB
242
-  └ Labels: executiondriver=native-0.2, kernelversion=4.1.3-esx, operatingsystem=VMware Photon/Linux, storagedriver=overlay
243
-CPUs: 3
244
-Total Memory: 6.166 GiB
245
-Name: 82b8516efb7c
246
-</source>
247
-
248
-== Setting Up DNS ==
249
-
250
-Docker does not provide its own DNS service, so we use [https://github.com/ahmetalpbalkan/wagl wagl], a DNS server for Docker Swarm.
251
-Setting it up is simple. In this case, one of the managers will also act as the DNS server. Simply execute:
252
-<source lang="bash" enclose="div">
253
-docker-client:~$ docker run -d --restart=always --name=dns -p 53:53/udp --link manager1:swarm ahmet/wagl wagl --swarm tcp://swarm:2375
254
-</source>
255
-* '' --restart=always ''- Always restart the container regardless of the exit status. When you specify always, the Docker daemon will try to restart the container continuously. The container will also always start on daemon startup, regardless of the current state of the container.
256
-* '' --link manager1:swarm ''- link the manager1 container (by name) and give it the alias swarm.
257
-That's it, DNS is up and running.
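-If you want to confirm that wagl started cleanly before moving on, inspect the container's logs (the exact output depends on the wagl version):
-<source lang="bash" enclose="div">
-docker-client:~$ docker logs dns
-</source>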
258
-
259
-= Test Your Cluster =
260
-
261
-== Running Nginx ==
262
-
263
-Execute the following commands from any docker client:
264
-<source lang="bash" enclose="div">
265
-docker-client:~$ docker -H tcp://192.168.0.1:8888 run -d -l dns.service=api -l dns.domain=example -p 80:80 vmwarecna/nginx
266
-docker-client:~$ docker -H tcp://192.168.0.1:8888 run -d -l dns.service=api -l dns.domain=example -p 80:80 vmwarecna/nginx
267
-</source>
268
-Note that this is the same command, executed twice. It tells the master to run two identical containers, each of which has two DNS labels.<br />
269
-Now, from any container in the cluster that has dnsutils, you can execute the following (for example):
270
-<source lang="bash" enclose="div">
271
-root@13271a2d0fcb:/# dig +short A api.example.swarm
272
-192.168.0.3
273
-192.168.0.4
274
-root@13271a2d0fcb:/# dig +short SRV _api._tcp.example.swarm
275
-1 1 80 192.168.0.3.
276
-1 1 80 192.168.0.4.
277
-</source>
278 1
\ No newline at end of file
279 2
deleted file mode 100644
... ...
@@ -1,38 +0,0 @@
1
-# Installing the Lightwave Client on a Photon Image and Joining the Client to a Domain
2
-
3
-After you have set up a Lightwave domain controller, you can join Photon clients to that domain. You install the Lightwave client first. After the client is installed, you join the client to the domain.
4
-
5
-## Prerequisites
6
-
7
-- Prepare a Photon OS client for the Lightwave client installation.
8
-- Verify that the hostname of the client can be resolved.
9
-- Verify that you have 184 MB free for the Lightwave client installation.
10
-
11
-## Procedure
12
-
13
-1. Log in to your Photon OS client over SSH.
14
-2. Install the Lightwave client by running the following command. 
15
-	
16
-	`# tdnf install lightwave-client -y`
17
-
18
-3. Edit the `iptables` firewall rules configuration file to allow connections on port `2020` as a default setting.
19
-	
20
-	The default Photon OS 2.0 firewall settings block all incoming, outgoing, and forwarded traffic, so you must configure the rules.
21
-
22
-	1. Open the iptables settings file.
23
-	
24
-	`# vi /etc/systemd/scripts/iptables`
25
-
26
-	2. At the end of the file, add a rule that allows TCP connections on port 2020, then save and close the file.
27
-
28
-	`iptables -A INPUT -p tcp -m tcp --dport 2020 -j ACCEPT`
29
-
30
-	3. Run the following command to allow the required connections without restarting the client.
31
-
32
-	`# iptables -A INPUT -p tcp -m tcp --dport 2020 -j ACCEPT`
33
-
34
-4. Join the client to the domain by running the `domainjoin.sh` script and configuring the domain controller FQDN, domain, and the password for the `administrator` user.
35
-
36
-	`# domainjoin.sh --domain-controller <lightwave-server-FQDN> --domain <your-domain> --password '<administrator-user-password>'`
37
-
38
-5. In a browser, go to https://*Lightwave-Server-FQDN* to verify that the client appears under the tenants list for the domain.
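-
-6. (Optional) As an additional check on the client itself, verify that the Lightwave service manager daemon is running. This is a sketch; it assumes that the `lwsmd` service manager, under which the Lightwave client components run, is installed by the `lightwave-client` package.
-
-	`# systemctl status lwsmd`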
39 1
\ No newline at end of file
40 2
deleted file mode 100644
... ...
@@ -1,34 +0,0 @@
1
-# Installing the Lightwave Server and Configuring It as a Domain Controller on a Photon Image
2
-
3
-You can configure the Lightwave server as a domain controller on a Photon client. You install the Lightwave server first. After the server is installed, you configure a new domain. 
4
-
5
-## Prerequisites
6
-
7
-- Prepare a Photon OS client for the Lightwave server installation.
8
-- Verify that the hostname of the client can be resolved.
9
-- Verify that you have 500 MB free for the Lightwave server installation.
10
-
11
-## Procedure
12
-
13
-1. Log in to your Photon OS client over SSH as an administrator.
14
-2. Install the Lightwave server by running the following command. 
15
-	
16
-	`# tdnf install lightwave -y`
17
-3. Configure the Lightwave server as domain controller by selecting a domain name and password for the `administrator` user.
18
-	
19
-	The password must be at least 8 characters long and include at least one symbol, one uppercase letter, and one lowercase letter. 
20
-	Optionally, if you want to access the domain controller by IP address, pass the IP address in the `--ssl-subject-alt-name` parameter.
21
-	`# configure-lightwave-server --domain <your-domain> --password '<administrator-user-password>' --ssl-subject-alt-name <machine-ip-address>`
22
-4. Edit `iptables` rules to allow connections to and from the client.
23
-
24
-	The default Photon OS 2.0 firewall settings block all incoming, outgoing, and forwarded traffic, so you must reconfigure them.
25
-	
26
-	`# iptables -P INPUT ACCEPT`
27
-
28
-	`# iptables -P OUTPUT ACCEPT`
29
-
30
-	`# iptables -P FORWARD ACCEPT`
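-
-	These policies do not persist across a reboot. To make them permanent, you can append each of the three commands to the firewall script that Photon OS runs at startup (a sketch; this is the same `iptables` script edited in the client procedure):
-
-	`# echo "iptables -P INPUT ACCEPT" >> /etc/systemd/scripts/iptables`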
31
-
32
-5. In a browser, go to https://*lightwave-server-FQDN* to verify that you can log in to the newly created domain controller.
33
-	1. On the Cascade Identity Services page, enter the domain that you configured and click **Take me to Lightwave Admin**.
34
-	2. On the Welcome page, enter administrator@your-domain as user name and the password that you set during the domain controller configuration and click **LOGIN**.
35 1
\ No newline at end of file
36 2
deleted file mode 100644
... ...
@@ -1,11 +0,0 @@
1
-# Installing and Using Lightwave on Photon OS #
2
-
3
-Project Lightwave is an open-source project that provides enterprise-grade identity and access management services, and can be used to solve key security, governance, and compliance challenges for a variety of use cases within the enterprise. Through integration between Photon OS and Project Lightwave, organizations can enforce security and governance on container workloads, for example, by ensuring that only authorized containers are run on authorized hosts, by authorized users. For more details about Lightwave, see the [project Lightwave page on GitHub](https://github.com/vmware/lightwave).
4
-
5
-## Procedure
6
-
7
-1. [Installing the Lightwave Server and Configuring It as a Domain Controller on a Photon Image](Installing-Lightwave-Server-and-Setting-Up-a-Domain.md)
8
-2. [Installing the Lightwave Client on a Photon Image and Joining the Client to a Domain](Installing-Lightwave-Client-and-Joining-a-Domain.md)
9
-3. [Installing the Photon Management Daemon on a Lightwave Client](Installing-the-Photon-Management-Daemon-on-a-Lightwave-Client.md)
10
-4. [Remotely Upgrade a Single Photon OS Machine With Lightwave Client and Photon Management Daemon Installed](Remotely-Upgrade-a-Photon-OS-Machine-With-Lightwave-Client-and-Photon-Management-Daemon-Installed.md)
11
-5. [Remotely Upgrade Multiple Photon OS Machines With Lightwave Client and Photon Management Daemon Installed](Remotely-Upgrade-Photon-OS-Machine-With-Lightwave-Client-and-Photon-Management-Daemon-Installed.md)
12 1
deleted file mode 100644
... ...
@@ -1,35 +0,0 @@
1
-# Installing the Photon Management Daemon on a Lightwave Client 
2
-
3
-After you have installed and configured a domain on Lightwave, and joined a client to the domain, you can install the Photon Management Daemon on that client so that you can remotely manage it.
4
-
5
-## Prerequisites
6
-
7
-- A Lightwave server with a domain controller configured on it.
8
-- A Lightwave client that is joined to the domain.
9
-- Verify that you have 100 MB free for the daemon installation on the client.
10
-
11
-## Procedure
12
-
13
-1. Log in over SSH, as an administrator, to a machine that has the Lightwave client installed.
14
-2. Install the Photon Management Daemon.
15
-	
16
-	`# tdnf install pmd -y`
17
-3. Start the Photon Management Daemon.
18
-	 
19
-	`# systemctl start pmd`
20
-4. Verify that the daemon is in an `active` state.
21
-
22
-	`# systemctl status pmd`
23
-5. (Optional) In a new console, use `curl` to verify that the Photon Management Daemon returns information.
24
-
25
-	Use the root credentials for the local client to authenticate against the daemon service.
26
-	`# curl https://<lightwave-client-FQDN>:2081/v1/info -ku root`
27
-
28
-6. (Optional) Create an administrative user for the Photon Management Daemon for your domain and assign it the domain administrator role.
29
-	1. In a browser, go to https://*lightwave-server-FQDN*.
30
-	2. On the Cascade Identity Services page, enter your domain name and click **Take me to Lightwave Admin**.
31
-	3. On the Welcome page, enter administrative credentials for your domain and click **Login**.
32
-	4. Click **Users & Groups** and click **Add** to create a new user.
33
-	5. On the Add New User page, enter a user name, at least one name, and a password, and click **Save**.
34
-	6. Click the **Groups** tab, select the Administrators group, and click **Membership** to add the new user to the group.
35
-	7. On the View Members page, select the user that you created, click **Add Member**, click **Save**, and click **Cancel** to return to the previous page.
36 1
\ No newline at end of file
37 2
deleted file mode 100644
... ...
@@ -1,15 +0,0 @@
1
-# Photon OS Administration Guide and Other Documentation
2
-
3
-The Photon OS Administration Guide covers the basics of managing packages, controlling services with systemd, setting up networking, initializing Photon OS with cloud-init, running Docker containers, and working with other technologies, such as Kubernetes. The guide also includes a section to get you started using Photon OS quickly and easily. The guide is at the following URL: 
4
-
5
-https://github.com/vmware/photon/blob/master/docs/photon-admin-guide.md
6
-
7
-The Photon OS Troubleshooting Guide describes the fundamentals of troubleshooting problems on Photon OS. This guide covers the basics of troubleshooting systemd, packages, network interfaces, services such as SSH and Sendmail, the file system, and the Linux kernel. The guide includes a quick tour of the tools that you can use for troubleshooting and provides examples along the way. The guide also demonstrates how to access the system's log files. It is at the following URL:
8
-
9
-https://github.com/vmware/photon/blob/master/docs/photon-os-troubleshooting-guide.md 
10
-
11
-Additional documentation appears in the docs directory of the Photon OS GitHub:
12
-
13
-https://github.com/vmware/photon/tree/master/docs
14
-
15
-
16 1
deleted file mode 100644
... ...
@@ -1,52 +0,0 @@
1
-# Introduction
2
-
3
-## 1.1 What is OSTree? How about RPM-OSTree?
4
-
5
-OSTree is a tool to manage bootable, immutable, versioned filesystem trees. Unlike traditional package managers like rpm or dpkg, which know how to install, uninstall, and configure packages, OSTree has no knowledge of packages or of the relationships between files. But when you add rpm capabilities on top of OSTree, it becomes RPM-OSTree: a filetree replication system that is also package-aware.   
6
-The idea behind it is to use a client/server architecture to keep your installed Linux machines (physical or VM) in sync with the latest bits, in a predictable and reliable manner. To achieve that, OSTree uses a git-like repository that records the changes to any file and replicates them to any subscriber.  
7
-A system administrator or an image-builder developer takes a base Linux image, prepares the packages and other configuration on a server box, and executes a command to compose a filetree that the host machines will download and then incrementally upgrade from whenever a new change has been committed.
8
-You may read more about OSTree [here](https://wiki.gnome.org/Projects/OSTree).
9
-
10
-## 1.2 Why use RPM-OSTree in Photon?
11
-There are several important benefits:
12
-* Reliable, efficient: The filetree replication is simple, reliable, and efficient. It transfers only deltas over the network. If you have deployed two almost identical bootable images on the same box (differing by just a few files), it will not take twice the space: the new tree will have a set of hardlinks to the old tree, and only the differing files will have a separate copy stored to disk.
13
-* Atomic: The filetree replication is atomic. At the end of a deployment, you are booting either from one deployment or from the other. There is no "partially deployed bootable image". If anything bad happens during replication or deployment - power loss, network failure - your machine boots from the old image. There is even a tool option to clean up old deployed images, whether they succeeded or not.
14
-* Manageable: You are given simple tools to figure out exactly what packages have been installed, and to compare file, configuration, and package changes between versions.
15
-* Predictable, repeatable: A big headache for a system administrator is maintaining a farm of computers with different packages, files, and configuration installed in different orders, which results in an exponential set of test cases. With RPM-OSTree, you get identical, predictable installed systems. 
16
-
17
-As drawbacks, I would mention:
18
-* Some applications configured by the user on the host may have compatibility issues if they save configuration to, or download into, read-only directories like /usr.
19
-* People not used to "read-only" file systems will be disappointed that they can no longer use rpm, yum, or tdnf to install whatever they want. Think of this as an "enterprise policy". They can work around it by customizing the target directory to a writable one like /var, or by using rpm to install packages and record them in a new RPM repository kept in a writable place.
20
-* Administrators need to be aware of the directory re-mapping specific to OSTree and plan accordingly.
21
-
22
-## 1.3 Photon with RPM-OSTree installation profiles
23
-Photon takes advantage of RPM-OSTree and offers several installation choices:
24
-* Photon RPM-OSTree server - used to compose customized Photon OS installations and to prepare updates. I will call it 'server' for short.
25
-* Photon RPM-OSTree host connected to a default online server repository via http or https, maintained by the VMware Photon OS team, where future updates will be published. This creates a minimal installation profile, but with the option to self-upgrade. I will call it 'default host' for short.
26
-* Photon RPM-OSTree host connected to a custom server repository. It requires a Photon RPM-OSTree server installed in advance. I will call it 'custom host' for short.
27
-
28
-## 1.4 Terminology
29
-I use the term "OSTree" (starting with capitals) throughout this document, when I refer to the general use of this technology, the format of the repository or replication protocol. I use "RPM-OSTree" to emphasize the layer that adds RedHat Package Manager compatibility on both ends - at server and at host. However, since Photon OS is an RPM-based Linux, there are places in the documentation and even in the installer menus where "OSTree" may be used instead of "RPM-OSTree" when the distinction is not obvious or doesn't matter in that context.
30
-When "ostree" and "rpm-ostree" (in small letters) are encountered, they refer to the usage of the specific Unix commands.   
31
-
32
-Finally, "Photon RPM-OSTree" is the application or implementation of RPM-OStree system into Photon OS, materialized into two options: Photon Server and Photon Host (or client). "Server" or "Host" may be used with or without the "Photon" and/or "RPM-OStree" qualifier, but it means the same thing. 
33
-
34
-## 1.5 Sample code
35
-Code samples used throughout this book are small commands that can be typed at a shell prompt and do not require downloading additional files. As an alternative, you can connect via ssh, so that cutting and pasting sample code from outside sources, or copying files via scp, will work. See the Photon Administration guide to learn [how to enable ssh](photon-admin-guide.md#permitting-root-login-with-ssh). 
36
-The samples assume that the following VMs have been installed - see the steps in the next chapters:
37
-* A default host VM named **photon-host-def**.
38
-* Two server VMs named **photon-srv1** and **photon-srv2**.
39
-* Two custom host VMs named **photon-host-cus1** and **photon-host-cus2**, connected each to the corresponding server during install.
40
-
41
-## 1.6 How to read this book
42
-I've tried to structure this book to be used both as a sequential read and as reference documentation.   
43
-If you are just interested in deploying a host system and keeping it up to date, then read chapters 2 and 5.   
44
-If you want to install your own server and experiment with customizing packages for your Photon hosts, then read chapters 6 to 9. There are references to the concepts discussed throughout the book, if you need to understand them better.  
45
-However, if you want to read page by page, information is presented from simple to complex, although, as with any technical book, we occasionally run into the chicken-and-egg problem - forward references to concepts that are only explained later. In other cases, concepts are introduced and presented in great detail that may seem hard to follow at first, but I promise they will make sense in later pages, when you get to use them.
46
-
47
-## 1.7 Difference between versions
48
-This book was written when Photon 1.0 was released, so all the information presented applies directly to Photon 1.0 and also to Photon 1.0 Revision 2 (Photon 1.0 Rev2 or Photon 1.0r for short; some people refer to it as Photon 1.0 Refresh). This release is relevant to OSTree because its ISO includes an updated RPM-OSTree repository containing upgraded packages, as well as a matching updated online repo that plays into the upgrade story. Other than that, the differences are minimal.  
49
-
50
-The guide has been updated significantly for Photon OS 2.0. Information about what's different is scattered through chapters 2, 6, 7, and 8. [Install or rebase to Photon OS 2.0](Photon-RPM-OSTree-Install-or-rebase-to-Photon-OS-2.0.md) is dedicated to the topic.    
51
-
52
-OSTree technology is evolving too; rather than pointing out the package version at which some feature was introduced or changed, the focus is on the ostree and rpm-ostree package versions included with the major Photon OS releases.
53 1
deleted file mode 100644
... ...
@@ -1,89 +0,0 @@
1
-# Remotes
2
-
3
-In Chapter 3 we talked about the Refspec that contains a **photon:** prefix; that prefix is the name of a remote. When a Photon host is installed, a remote is added that contains the URL of the OSTree repository that is the origin of the commits we pull and the filetrees we deploy - in our case, the Photon RPM-OSTree server we installed the host from. This remote is named **photon**, which may be confusing, because it is also the OS name and part of the Refspec (branch) path.
4
-
5
-## 10.1 Listing remotes
6
-A host repo can be configured to switch between multiple remotes to pull from; however, only one remote is "active" at a time. We can list the remotes created so far, which brings back the expected result.
7
-```
8
-root@photon-host-def [ ~ ]# ostree remote list
9
-photon
10
-```
11
-We can query the URL for that remote name, which for the default host is the expected Photon OS online OSTree repo.
12
-```
13
-root@photon-host-def [ ~ ]# ostree remote show-url photon
14
-https://dl.bintray.com/vmware/photon/rpm-ostree/1.0
15
-```
16
-But where is this information stored? The repo's config file has it.
17
-```
18
-root@photon-host-def [ ~ ]# cat /ostree/repo/config 
19
-[core]
20
-repo_version=1
21
-mode=bare
22
-
23
-[remote "photon"]
24
-url=https://dl.bintray.com/vmware/photon/rpm-ostree/1.0
25
-gpg-verify=false
26
-```
27
-
28
-If the same command is executed on the custom host we've installed, it reveals the URL of the Photon RPM-OSTree server that the host was connected to during setup.
29
-```
30
-root@photon-host-cus [ ~ ]# ostree remote show-url photon
31
-http://10.118.101.168
32
-```
33
-
34
-## 10.2 GPG signature verification
35
-You may wonder about the purpose of ```gpg-verify=false``` in the config file, associated with this specific remote. It instructs host updates to skip signature verification for updates coming from the server: trees composed locally at the server are not signed, so without this setting, host updates would fail.  
36
-
37
-There is a whole chapter about signing, importing keys and so on that I will not get into, but the idea is that signing adds an extra layer of security by validating that everything you download comes from the trusted publisher and has not been altered. That is the case for all Photon OS artifacts downloaded from the official VMware site: all OVAs and packages, whether from the online RPMS repositories or included in the ISO file, are signed by VMware. We've seen a similar setting, ```gpgcheck=1```, in the RPMS repo configuration files, which tells tdnf whether to validate the signature of every package downloaded to be installed.
38
-
39
-
40
-## 10.3 Switching repositories
41
-Since the name/URL mapping is stored in the repo's config file, in principle you can assign a different URL, connecting the host to a different server. The next upgrade will then get the latest commit chain from the new server.   
42
-If we edit photon-host-def's repo config and replace the bintray URL with photon-srv1's IP address, all original packages in the original 1.0_minimal version will be preserved, but any package change (addition, removal, upgrade) added after that (in 1.0_minimal.1, 1.0_minimal.2) will be reverted, and all new commits from photon-srv1 (which may carry the same versions) will be applied. This is because the two repos are identical copies: they share the same original commit ID as a common ancestor, but diverge from there.  
43
-This may create confusion, and it's one of the reasons I insisted on creating your own versioning scheme.
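-For example, re-pointing the default host's **photon** remote at the custom server used earlier could look like this (a sketch; substitute your own server's address):
-```
-root@photon-host-def [ ~ ]# sed -i 's|https://dl.bintray.com/vmware/photon/rpm-ostree/1.0|http://10.118.101.168|' /ostree/repo/config
-root@photon-host-def [ ~ ]# ostree remote show-url photon
-http://10.118.101.168
-```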
44
-  
45
-If the old and new repo have nothing in common (no common ancestor commit), this will undo even the original commit, so all commits from the new tree will be applied.  
46
-A better solution would be to add a new remote that will identify where the commits come from.
47
-
48
-## 10.4 Adding and removing remotes
49
-
50
-A cleaner way to switch repositories is to add remotes that point to different servers. Let's add another server that we will refer to as **photon2**, along with (optionally) the refspecs for the branches that it provides (we will see later that with newer OSTree versions we don't need to know the branch names; they can be [queried at run-time](Photon-RPM-OSTree-10-Remotes.md#105-listing-available-branches)). The 'minimal' and 'full' branch ref names containing '2.0' suggest this may be a Photon OS 2.0 RPM-OSTree server. 
51
-```
52
-root@photon-host-cus [ ~ ]# ostree remote add --repo=/ostree/repo -v --no-gpg-verify photon2 http://10.118.101.86 photon/2.0/x86_64/minimal photon/2.0/x86_64/full
53
-root@photon-host-cus [ ~ ]# ostree remote list
54
-photon
55
-photon2
56
-root@photon-host-cus [ ~ ]# ostree remote show-url photon2
57
-http://10.118.101.86
58
-```
59
-Where is this information stored? There is an extra config file created for each remote:
60
-```
61
-root@photon-host-cus [ ~ ]# cat /etc/ostree/remotes.d/photon2.conf 
62
-[remote "photon2"]
63
-url=http://10.118.101.86
64
-branches=photon/2.0/x86_64/minimal;photon/2.0/x86_64/full;
65
-gpg-verify=false
66
-```
67
-You may have guessed the effect of the ```--no-gpg-verify``` option.  
68
-Obviously, remotes can also be deleted.
69
-```
70
-root@photon-host-cus [ ~ ]# ostree remote delete photon2
71
-root@photon-host-cus [ ~ ]# ostree remote list
72
-photon
73
-```
74
-
75
-## 10.5 Listing available branches
76
-If a host has been deployed from a specific branch and would like to switch to a different one, maybe from a different server, how would it know what branches are available? In git, you would run ```git remote show origin``` or ```git branch -a``` (although the latter would not show all remote branches unless you ran ```git fetch``` first).  
77
-
78
-Fortunately, in Photon OS 2.0 and higher, the hosts are able to query the server, if summary metadata has been generated, as we've seen in [8.5](Photon-RPM-OSTree:-8-File-oriented-server-operations.md#85-creating-summary-metadata).  This command lists all branches available for remote **photon2**.
79
-
80
-```
81
-root@photon-host-cus [ ~ ]# ostree remote refs photon2 
82
-photon2:photon/2.0/x86_64/base
83
-photon2:photon/2.0/x86_64/full
84
-photon2:photon/2.0/x86_64/minimal
85
-```
86
-
87
-## 10.6 Switching branches (rebasing)
88
-
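-Knowing a remote's name and its available branch refs, a host can switch (rebase) its deployment to a different branch. A minimal sketch, assuming the host's rpm-ostree version supports rebasing to a remote:branch refspec:
-```
-root@photon-host-cus [ ~ ]# rpm-ostree rebase photon2:photon/2.0/x86_64/full
-```
-After the next reboot, the host boots into the newly deployed branch.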
89
-
90 1
deleted file mode 100644
... ...
@@ -1,211 +0,0 @@
1
-# Running container applications between bootable images
2
-
3
-In this chapter, we want to test a docker application and make sure that all the settings and downloads done in one bootable filetree are saved into writable folders and are available in the other image - in other words, after rebooting into the other image, everything is available in exactly the same way.   
4
-We are going to do this twice: first to verify an existing bootable image installed in parallel, and then to create a new one.
5
-
6
-## 11.1 Downloading a docker container appliance
7
-Photon OS comes with the docker package installed and configured, but we expect the docker daemon to be inactive (not started). The configuration file /usr/lib/systemd/system/docker.service is read-only (remember, /usr is bind-mounted read-only). 
8
-```
9
-root@sample-host-def [ ~ ]# systemctl status docker
10
-* docker.service - Docker Daemon
11
-   Loaded: loaded (/usr/lib/systemd/system/docker.service; disabled)
12
-   Active: inactive (dead)
13
-
14
-root@sample-host-def [ ~ ]# cat /usr/lib/systemd/system/docker.service
15
-[Unit]
16
-Description=Docker Daemon
17
-Wants=network-online.target
18
-After=network-online.target
19
-
20
-[Service]
21
-ExecStart=/bin/docker -d -s overlay
22
-ExecReload=/bin/kill -HUP $MAINPID
23
-KillMode=process
24
-Restart=always
25
-MountFlags=slave
26
-LimitNOFILE=1048576
27
-LimitNPROC=1048576
28
-LimitCORE=infinity
29
-
30
-[Install]
31
-WantedBy=multi-user.target
32
-```
33
-Now let's enable the docker daemon to start at boot time - as with all other systemd-controlled services, this creates a symbolic link in the writable folder /etc/systemd/system/multi-user.target.wants pointing to its systemd configuration. 
34
-```
35
-root@sample-host-def [ ~ ]# systemctl enable docker
36
-Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
37
-
38
-root@sample-host-def [ ~ ]# ls -l /etc/systemd/system/multi-user.target.wants
39
-total 0
40
-lrwxrwxrwx 1 root root 38 Sep  6 08:38 docker.service -> /usr/lib/systemd/system/docker.service
41
-lrwxrwxrwx 1 root root 47 Aug 28 20:21 iptables.service -> ../../../../lib/systemd/system/iptables.service
42
-lrwxrwxrwx 1 root root 47 Aug 28 20:21 remote-fs.target -> ../../../../lib/systemd/system/remote-fs.target
43
-lrwxrwxrwx 1 root root 50 Aug 28 20:21 sshd-keygen.service -> ../../../../lib/systemd/system/sshd-keygen.service
44
-lrwxrwxrwx 1 root root 43 Aug 28 20:21 sshd.service -> ../../../../lib/systemd/system/sshd.service
45
-lrwxrwxrwx 1 root root 55 Aug 28 20:21 systemd-networkd.service -> ../../../../lib/systemd/system/systemd-networkd.service
46
-lrwxrwxrwx 1 root root 55 Aug 28 20:21 systemd-resolved.service -> ../../../../lib/systemd/system/systemd-resolved.service
47
-```
48
-To verify that the symbolic link points to a file in a read-only directory, try to make a change in this file using vim and save it; you'll get the error: E166: Can't open linked file for writing.  
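-The same read-only behavior can be seen from the shell (a quick sketch; any attempted write into /usr fails the same way):
-```
-root@sample-host-def [ ~ ]# touch /usr/lib/systemd/system/docker.service
-touch: cannot touch '/usr/lib/systemd/system/docker.service': Read-only file system
-```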
49
-Finally, let's start the daemon and check again that it is active. 
50
-```
51
-root@sample-host-def [ ~ ]# systemctl start docker
52
-
53
-root@sample-host-def [ ~ ]# systemctl status -l docker
54
-* docker.service - Docker Daemon