<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://wiki.t-hoerup.dk/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Torben</id>
	<title>HoerupWiki - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://wiki.t-hoerup.dk/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Torben"/>
	<link rel="alternate" type="text/html" href="https://wiki.t-hoerup.dk/index.php/Special:Contributions/Torben"/>
	<updated>2026-05-13T10:17:25Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.43.8</generator>
	<entry>
		<id>https://wiki.t-hoerup.dk/index.php?title=Radioamat%C3%B8r&amp;diff=12223</id>
		<title>Radioamatør</title>
		<link rel="alternate" type="text/html" href="https://wiki.t-hoerup.dk/index.php?title=Radioamat%C3%B8r&amp;diff=12223"/>
		<updated>2024-04-26T11:00:28Z</updated>

		<summary type="html">&lt;p&gt;Torben: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
&lt;br /&gt;
* [https://sdfi.dk/digital-infrastruktur/frekvenser/radioamatoerer- &#039;Styrelsen for Dataforsyning og Infrastruktur&#039;s page on amateur radio]&lt;br /&gt;
* [https://frekvensregister.sdfi.dk/Search/Search.aspx Search for other radio amateurs]&lt;br /&gt;
** [http://oz6ks.dk/oz-callsigns/ Simple shortcut to the above]&lt;br /&gt;
* [http://www.d-star4all.dk/dstar4all_repmap_frame.html Repeater map]&lt;br /&gt;
* [http://asr.oz5thy.dk/ Analogue repeater network]&lt;br /&gt;
&lt;br /&gt;
* [http://ham-digital.org/dmr-userreg.php DMR ID registry]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== DMR ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
http://ipsc2-dk.dmrplus.dk/ipsc/#&lt;br /&gt;
https://www.pistar.uk/dmr+_options.php&lt;br /&gt;
&lt;br /&gt;
http://www.k9npx.com/2019/02/hotspot-offset-calibration.html&lt;br /&gt;
&lt;br /&gt;
low  433001900&lt;br /&gt;
high 433005000&lt;br /&gt;
&lt;br /&gt;
diff = high - low = 3100&lt;br /&gt;
half = diff / 2 = 1550&lt;br /&gt;
&lt;br /&gt;
offset = low + half = 433003450&lt;br /&gt;
(i.e. 3450 above 433000000)&lt;br /&gt;
&lt;br /&gt;
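The offset arithmetic above, as a quick sketch (frequency values in Hz, taken from the measurements listed; the calibration picks the midpoint of the usable window):

```python
# Midpoint of the measured hotspot RX window (values in Hz).
low = 433001900   # lowest frequency that still decoded
high = 433005000  # highest frequency that still decoded

diff = high - low       # width of the window: 3100
half = diff // 2        # 1550
offset = low + half     # midpoint: 433003450

print(offset)
```

The shorthand "offset = 3450" is this same midpoint expressed relative to 433000000.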
==Live radio==&lt;br /&gt;
&lt;br /&gt;
* http://87.63.154.250:81/&lt;br /&gt;
==Exam==&lt;br /&gt;
&lt;br /&gt;
* http://operatorlicens.dk/ (D licence)&lt;br /&gt;
* http://b-certifikat.dk/&lt;br /&gt;
* http://a-certifikat.dk/&lt;/div&gt;</summary>
		<author><name>Torben</name></author>
	</entry>
	<entry>
		<id>https://wiki.t-hoerup.dk/index.php?title=Radioamat%C3%B8r&amp;diff=12222</id>
		<title>Radioamatør</title>
		<link rel="alternate" type="text/html" href="https://wiki.t-hoerup.dk/index.php?title=Radioamat%C3%B8r&amp;diff=12222"/>
		<updated>2024-04-26T10:59:07Z</updated>

		<summary type="html">&lt;p&gt;Torben: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
&lt;br /&gt;
* [https://ens.dk/ansvarsomraader/frekvenser/radioamatoerer &#039;Styrelsen for Dataforsyning og Infrastruktur&#039;s page on amateur radio]&lt;br /&gt;
* [https://frekvensregister.sdfi.dk/Search/Search.aspx Search for other radio amateurs]&lt;br /&gt;
** [http://oz6ks.dk/oz-callsigns/ Simple shortcut to the above]&lt;br /&gt;
* [http://www.d-star4all.dk/dstar4all_repmap_frame.html Repeater map]&lt;br /&gt;
* [http://asr.oz5thy.dk/ Analogue repeater network]&lt;br /&gt;
&lt;br /&gt;
* [http://ham-digital.org/dmr-userreg.php DMR ID registry]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== DMR ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
http://ipsc2-dk.dmrplus.dk/ipsc/#&lt;br /&gt;
https://www.pistar.uk/dmr+_options.php&lt;br /&gt;
&lt;br /&gt;
http://www.k9npx.com/2019/02/hotspot-offset-calibration.html&lt;br /&gt;
&lt;br /&gt;
low  433001900&lt;br /&gt;
high 433005000&lt;br /&gt;
&lt;br /&gt;
diff = high - low = 3100&lt;br /&gt;
half = diff / 2 = 1550&lt;br /&gt;
&lt;br /&gt;
offset = low + half = 433003450&lt;br /&gt;
(i.e. 3450 above 433000000)&lt;br /&gt;
&lt;br /&gt;
==Live radio==&lt;br /&gt;
&lt;br /&gt;
* http://87.63.154.250:81/&lt;br /&gt;
==Exam==&lt;br /&gt;
&lt;br /&gt;
* http://operatorlicens.dk/ (D licence)&lt;br /&gt;
* http://b-certifikat.dk/&lt;br /&gt;
* http://a-certifikat.dk/&lt;/div&gt;</summary>
		<author><name>Torben</name></author>
	</entry>
	<entry>
		<id>https://wiki.t-hoerup.dk/index.php?title=Radioamat%C3%B8r&amp;diff=12221</id>
		<title>Radioamatør</title>
		<link rel="alternate" type="text/html" href="https://wiki.t-hoerup.dk/index.php?title=Radioamat%C3%B8r&amp;diff=12221"/>
		<updated>2023-11-02T20:15:27Z</updated>

		<summary type="html">&lt;p&gt;Torben: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
* [https://ens.dk/ansvarsomraader/frekvenser/radioamatoerer Energistyrelsen&#039;s page on amateur radio]&lt;br /&gt;
* [https://frekvensregister.ens.dk/Search/Search.aspx Search for other radio amateurs]&lt;br /&gt;
** [http://oz6ks.dk/oz-callsigns/ Simple shortcut to the above]&lt;br /&gt;
* [http://www.d-star4all.dk/dstar4all_repmap_frame.html Repeater map]&lt;br /&gt;
* [http://asr.oz5thy.dk/ Analogue repeater network]&lt;br /&gt;
&lt;br /&gt;
* [http://ham-digital.org/dmr-userreg.php DMR ID registry]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== DMR ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
http://ipsc2-dk.dmrplus.dk/ipsc/#&lt;br /&gt;
https://www.pistar.uk/dmr+_options.php&lt;br /&gt;
&lt;br /&gt;
http://www.k9npx.com/2019/02/hotspot-offset-calibration.html&lt;br /&gt;
&lt;br /&gt;
low  433001900&lt;br /&gt;
high 433005000&lt;br /&gt;
&lt;br /&gt;
diff = high - low = 3100&lt;br /&gt;
half = diff / 2 = 1550&lt;br /&gt;
&lt;br /&gt;
offset = low + half = 433003450&lt;br /&gt;
(i.e. 3450 above 433000000)&lt;br /&gt;
&lt;br /&gt;
==Live radio==&lt;br /&gt;
&lt;br /&gt;
* http://87.63.154.250:81/&lt;br /&gt;
==Exam==&lt;br /&gt;
&lt;br /&gt;
* http://operatorlicens.dk/ (D licence)&lt;br /&gt;
* http://b-certifikat.dk/&lt;br /&gt;
* http://a-certifikat.dk/&lt;/div&gt;</summary>
		<author><name>Torben</name></author>
	</entry>
	<entry>
		<id>https://wiki.t-hoerup.dk/index.php?title=WSL_-_Windows_Subsystem_for_Linux&amp;diff=12213</id>
		<title>WSL - Windows Subsystem for Linux</title>
		<link rel="alternate" type="text/html" href="https://wiki.t-hoerup.dk/index.php?title=WSL_-_Windows_Subsystem_for_Linux&amp;diff=12213"/>
		<updated>2021-11-11T14:02:20Z</updated>

		<summary type="html">&lt;p&gt;Torben: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=kernel=&lt;br /&gt;
remember to check the release notes occasionally&lt;br /&gt;
https://docs.microsoft.com/en-us/windows/wsl/kernel-release-notes&lt;br /&gt;
&lt;br /&gt;
https://www.catalog.update.microsoft.com/Search.aspx?q=wsl&lt;br /&gt;
&lt;br /&gt;
= systemd =&lt;br /&gt;
Using systemd under WSL&lt;br /&gt;
* https://github.com/shayne/wsl2-hacks&lt;br /&gt;
OR&lt;br /&gt;
* https://github.com/arkane-systems/genie&lt;br /&gt;
&lt;br /&gt;
= iptables = &lt;br /&gt;
wsl2 kernel will have iptables support - but not nftables, so eg debian needs to be configured to use the iptables-legacy variant&lt;br /&gt;
https://wiki.debian.org/nftables&lt;br /&gt;
&lt;br /&gt;
this will affect both sshuttle and podman/docker&lt;br /&gt;
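On Debian-based distros the switch to the legacy backend can be done with update-alternatives; a minimal sketch, run inside the WSL distro (see the Debian wiki page linked above):

```shell
# Point the iptables/ip6tables commands at the legacy (xtables)
# backend, since the WSL2 kernel lacks the nftables modules.
sudo update-alternatives --set iptables /usr/sbin/iptables-legacy
sudo update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy
```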
&lt;br /&gt;
= network =&lt;br /&gt;
you can have services listen on *:&amp;lt;port&amp;gt; inside WSL and access the services from windows&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
sshuttle works under WSL (remember that sshuttle only handles TCP - so forget UDP+ICMP!) but the traffic redirection is done in iptables within wsl - so windows applications can&#039;t use it&lt;br /&gt;
BUT&lt;br /&gt;
&lt;br /&gt;
you can eg install the squid http proxy inside WSL and configure your browser to use localhost:3128 as proxy, thereby forcing your requests into wsl where they can utilize the sshuttle tunnel&lt;br /&gt;
&lt;br /&gt;
if you combine this with a local https://en.wikipedia.org/wiki/Proxy_auto-config file and use file:///c:/path-to-your-pac in your browser  you can specify which hosts to route to squid&lt;br /&gt;
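A minimal PAC-file sketch for that setup (PAC files are JavaScript; the "corp.example" domain is a placeholder for whatever hosts you want routed through the tunnel):

```javascript
// Proxy auto-config: send only selected hosts through the squid
// instance in WSL (localhost:3128); everything else goes direct.
// "corp.example" is a placeholder domain.
function FindProxyForURL(url, host) {
  if (host === "corp.example" || host.endsWith(".corp.example")) {
    return "PROXY localhost:3128";
  }
  return "DIRECT";
}
```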
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= docker =&lt;br /&gt;
Docker.io upstream package works - but you need to get [[#iptables|iptables]] and [[#systemd|systemd]] in order&lt;br /&gt;
&lt;br /&gt;
Docker desktop integrates with WSL, but has recently changed license - so it might be time to look into podman or upstream instead?&lt;/div&gt;</summary>
		<author><name>Torben</name></author>
	</entry>
	<entry>
		<id>https://wiki.t-hoerup.dk/index.php?title=WSL_-_Windows_Subsystem_for_Linux&amp;diff=12212</id>
		<title>WSL - Windows Subsystem for Linux</title>
		<link rel="alternate" type="text/html" href="https://wiki.t-hoerup.dk/index.php?title=WSL_-_Windows_Subsystem_for_Linux&amp;diff=12212"/>
		<updated>2021-10-21T06:37:11Z</updated>

		<summary type="html">&lt;p&gt;Torben: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=kernel=&lt;br /&gt;
remember to check the release notes occasionally&lt;br /&gt;
https://docs.microsoft.com/en-us/windows/wsl/kernel-release-notes&lt;br /&gt;
&lt;br /&gt;
= systemd =&lt;br /&gt;
Using systemd under WSL&lt;br /&gt;
* https://github.com/shayne/wsl2-hacks&lt;br /&gt;
OR&lt;br /&gt;
* https://github.com/arkane-systems/genie&lt;br /&gt;
&lt;br /&gt;
= iptables = &lt;br /&gt;
wsl2 kernel will have iptables support - but not nftables, so eg debian needs to be configured to use the iptables-legacy variant&lt;br /&gt;
https://wiki.debian.org/nftables&lt;br /&gt;
&lt;br /&gt;
this will affect both sshuttle and podman/docker&lt;br /&gt;
&lt;br /&gt;
= network =&lt;br /&gt;
you can have services listen on *:&amp;lt;port&amp;gt; inside WSL and access the services from windows&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
sshuttle works under WSL (remember that sshuttle only handles TCP - so forget UDP+ICMP!) but the traffic redirection is done in iptables within wsl - so windows applications can&#039;t use it&lt;br /&gt;
BUT&lt;br /&gt;
&lt;br /&gt;
you can eg install the squid http proxy inside WSL and configure your browser to use localhost:3128 as proxy, thereby forcing your requests into wsl where they can utilize the sshuttle tunnel&lt;br /&gt;
&lt;br /&gt;
if you combine this with a local https://en.wikipedia.org/wiki/Proxy_auto-config file and use file:///c:/path-to-your-pac in your browser  you can specify which hosts to route to squid&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= docker =&lt;br /&gt;
Docker.io upstream package works - but you need to get [[#iptables|iptables]] and [[#systemd|systemd]] in order&lt;br /&gt;
&lt;br /&gt;
Docker desktop integrates with WSL, but has recently changed license - so it might be time to look into podman or upstream instead?&lt;/div&gt;</summary>
		<author><name>Torben</name></author>
	</entry>
	<entry>
		<id>https://wiki.t-hoerup.dk/index.php?title=WSL_-_Windows_Subsystem_for_Linux&amp;diff=12211</id>
		<title>WSL - Windows Subsystem for Linux</title>
		<link rel="alternate" type="text/html" href="https://wiki.t-hoerup.dk/index.php?title=WSL_-_Windows_Subsystem_for_Linux&amp;diff=12211"/>
		<updated>2021-10-21T06:36:18Z</updated>

		<summary type="html">&lt;p&gt;Torben: /* iptables */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=kernel=&lt;br /&gt;
remember to check the release notes occasionally&lt;br /&gt;
https://docs.microsoft.com/en-us/windows/wsl/kernel-release-notes&lt;br /&gt;
&lt;br /&gt;
= systemd =&lt;br /&gt;
Using systemd under WSL&lt;br /&gt;
* https://github.com/shayne/wsl2-hacks&lt;br /&gt;
OR&lt;br /&gt;
* https://github.com/arkane-systems/genie&lt;br /&gt;
&lt;br /&gt;
= iptables = &lt;br /&gt;
wsl2 kernel will have iptables support - but not nftables, so eg debian needs to be configured to use the iptables-legacy variant&lt;br /&gt;
https://wiki.debian.org/nftables&lt;br /&gt;
&lt;br /&gt;
this will affect both sshuttle and podman/docker&lt;br /&gt;
&lt;br /&gt;
= network =&lt;br /&gt;
you can have services listen on *:&amp;lt;port&amp;gt; inside WSL and access the services from windows&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
sshuttle works under WSL (remember that sshuttle only handles TCP - so forget UDP+ICMP!) but the traffic redirection is done in iptables within wsl - so windows applications can&#039;t use it&lt;br /&gt;
BUT&lt;br /&gt;
&lt;br /&gt;
you can eg install the squid http proxy inside WSL and configure your browser to use localhost:3128 as proxy, thereby forcing your requests into wsl where they can utilize the sshuttle tunnel&lt;br /&gt;
&lt;br /&gt;
if you combine this with a local https://en.wikipedia.org/wiki/Proxy_auto-config file and use file:///c:/path-to-your-pac in your browser  you can specify which hosts to route to squid&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= docker =&lt;br /&gt;
Docker.io upstream package works - but you need to get [[#iptables|iptables]] and [[#systemd|systemd]] in order&lt;br /&gt;
&lt;br /&gt;
Docker desktop integrates with WSL(2) &amp;lt;strike&amp;gt;but the native docker package will not work within wsl2&amp;lt;/strike&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Docker Desktop has recently changed license - so it might be time to look into podman instead?&lt;/div&gt;</summary>
		<author><name>Torben</name></author>
	</entry>
	<entry>
		<id>https://wiki.t-hoerup.dk/index.php?title=WSL_-_Windows_Subsystem_for_Linux&amp;diff=12210</id>
		<title>WSL - Windows Subsystem for Linux</title>
		<link rel="alternate" type="text/html" href="https://wiki.t-hoerup.dk/index.php?title=WSL_-_Windows_Subsystem_for_Linux&amp;diff=12210"/>
		<updated>2021-10-06T09:46:45Z</updated>

		<summary type="html">&lt;p&gt;Torben: /* docker */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=kernel=&lt;br /&gt;
remember to check the release notes occasionally&lt;br /&gt;
https://docs.microsoft.com/en-us/windows/wsl/kernel-release-notes&lt;br /&gt;
&lt;br /&gt;
= systemd =&lt;br /&gt;
Using systemd under WSL&lt;br /&gt;
* https://github.com/shayne/wsl2-hacks&lt;br /&gt;
OR&lt;br /&gt;
* https://github.com/arkane-systems/genie&lt;br /&gt;
&lt;br /&gt;
= iptables = &lt;br /&gt;
wsl2 kernel will have iptables support - but not nftables, so eg debian needs to be configured to use the iptables-legacy variant&lt;br /&gt;
https://wiki.debian.org/nftables&lt;br /&gt;
&lt;br /&gt;
this will affect both sshuttle and podman&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= network =&lt;br /&gt;
you can have services listen on *:&amp;lt;port&amp;gt; inside WSL and access the services from windows&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
sshuttle works under WSL (remember that sshuttle only handles TCP - so forget UDP+ICMP!) but the traffic redirection is done in iptables within wsl - so windows applications can&#039;t use it&lt;br /&gt;
BUT&lt;br /&gt;
&lt;br /&gt;
you can eg install the squid http proxy inside WSL and configure your browser to use localhost:3128 as proxy, thereby forcing your requests into wsl where they can utilize the sshuttle tunnel&lt;br /&gt;
&lt;br /&gt;
if you combine this with a local https://en.wikipedia.org/wiki/Proxy_auto-config file and use file:///c:/path-to-your-pac in your browser  you can specify which hosts to route to squid&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= docker =&lt;br /&gt;
Docker.io upstream package works - but you need to get [[#iptables|iptables]] and [[#systemd|systemd]] in order&lt;br /&gt;
&lt;br /&gt;
Docker desktop integrates with WSL(2) &amp;lt;strike&amp;gt;but the native docker package will not work within wsl2&amp;lt;/strike&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Docker Desktop has recently changed license - so it might be time to look into podman instead?&lt;/div&gt;</summary>
		<author><name>Torben</name></author>
	</entry>
	<entry>
		<id>https://wiki.t-hoerup.dk/index.php?title=WSL_-_Windows_Subsystem_for_Linux&amp;diff=12209</id>
		<title>WSL - Windows Subsystem for Linux</title>
		<link rel="alternate" type="text/html" href="https://wiki.t-hoerup.dk/index.php?title=WSL_-_Windows_Subsystem_for_Linux&amp;diff=12209"/>
		<updated>2021-10-06T09:46:10Z</updated>

		<summary type="html">&lt;p&gt;Torben: /* docker */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=kernel=&lt;br /&gt;
remember to check the release notes occasionally&lt;br /&gt;
https://docs.microsoft.com/en-us/windows/wsl/kernel-release-notes&lt;br /&gt;
&lt;br /&gt;
= systemd =&lt;br /&gt;
Using systemd under WSL&lt;br /&gt;
* https://github.com/shayne/wsl2-hacks&lt;br /&gt;
OR&lt;br /&gt;
* https://github.com/arkane-systems/genie&lt;br /&gt;
&lt;br /&gt;
= iptables = &lt;br /&gt;
wsl2 kernel will have iptables support - but not nftables, so eg debian needs to be configured to use the iptables-legacy variant&lt;br /&gt;
https://wiki.debian.org/nftables&lt;br /&gt;
&lt;br /&gt;
this will affect both sshuttle and podman&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= network =&lt;br /&gt;
you can have services listen on *:&amp;lt;port&amp;gt; inside WSL and access the services from windows&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
sshuttle works under WSL (remember that sshuttle only handles TCP - so forget UDP+ICMP!) but the traffic redirection is done in iptables within wsl - so windows applications can&#039;t use it&lt;br /&gt;
BUT&lt;br /&gt;
&lt;br /&gt;
you can eg install the squid http proxy inside WSL and configure your browser to use localhost:3128 as proxy, thereby forcing your requests into wsl where they can utilize the sshuttle tunnel&lt;br /&gt;
&lt;br /&gt;
if you combine this with a local https://en.wikipedia.org/wiki/Proxy_auto-config file and use file:///c:/path-to-your-pac in your browser  you can specify which hosts to route to squid&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= docker =&lt;br /&gt;
Docker.io upstream package works - but you need to get [[#iptables]] and systemd in order&lt;br /&gt;
&lt;br /&gt;
Docker desktop integrates with WSL(2) &amp;lt;strike&amp;gt;but the native docker package will not work within wsl2&amp;lt;/strike&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Docker Desktop has recently changed license - so it might be time to look into podman instead?&lt;/div&gt;</summary>
		<author><name>Torben</name></author>
	</entry>
	<entry>
		<id>https://wiki.t-hoerup.dk/index.php?title=WSL_-_Windows_Subsystem_for_Linux&amp;diff=12208</id>
		<title>WSL - Windows Subsystem for Linux</title>
		<link rel="alternate" type="text/html" href="https://wiki.t-hoerup.dk/index.php?title=WSL_-_Windows_Subsystem_for_Linux&amp;diff=12208"/>
		<updated>2021-10-06T09:44:42Z</updated>

		<summary type="html">&lt;p&gt;Torben: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=kernel=&lt;br /&gt;
remember to check the release notes occasionally&lt;br /&gt;
https://docs.microsoft.com/en-us/windows/wsl/kernel-release-notes&lt;br /&gt;
&lt;br /&gt;
= systemd =&lt;br /&gt;
Using systemd under WSL&lt;br /&gt;
* https://github.com/shayne/wsl2-hacks&lt;br /&gt;
OR&lt;br /&gt;
* https://github.com/arkane-systems/genie&lt;br /&gt;
&lt;br /&gt;
= iptables = &lt;br /&gt;
wsl2 kernel will have iptables support - but not nftables, so eg debian needs to be configured to use the iptables-legacy variant&lt;br /&gt;
https://wiki.debian.org/nftables&lt;br /&gt;
&lt;br /&gt;
this will affect both sshuttle and podman&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= network =&lt;br /&gt;
you can have services listen on *:&amp;lt;port&amp;gt; inside WSL and access the services from windows&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
sshuttle works under WSL (remember that sshuttle only handles TCP - so forget UDP+ICMP!) but the traffic redirection is done in iptables within wsl - so windows applications can&#039;t use it&lt;br /&gt;
BUT&lt;br /&gt;
&lt;br /&gt;
you can eg install the squid http proxy inside WSL and configure your browser to use localhost:3128 as proxy, thereby forcing your requests into wsl where they can utilize the sshuttle tunnel&lt;br /&gt;
&lt;br /&gt;
if you combine this with a local https://en.wikipedia.org/wiki/Proxy_auto-config file and use file:///c:/path-to-your-pac in your browser  you can specify which hosts to route to squid&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= docker =&lt;br /&gt;
Docker.io upstream package works - but you need to get iptables and systemd in order&lt;br /&gt;
&lt;br /&gt;
Docker desktop integrates with WSL(2) &amp;lt;strike&amp;gt;but the native docker package will not work within wsl2&amp;lt;/strike&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Docker Desktop has recently changed license - so it might be time to look into podman instead?&lt;/div&gt;</summary>
		<author><name>Torben</name></author>
	</entry>
	<entry>
		<id>https://wiki.t-hoerup.dk/index.php?title=WSL_-_Windows_Subsystem_for_Linux&amp;diff=12207</id>
		<title>WSL - Windows Subsystem for Linux</title>
		<link rel="alternate" type="text/html" href="https://wiki.t-hoerup.dk/index.php?title=WSL_-_Windows_Subsystem_for_Linux&amp;diff=12207"/>
		<updated>2021-10-03T17:40:14Z</updated>

		<summary type="html">&lt;p&gt;Torben: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=kernel=&lt;br /&gt;
remember to check the release notes occasionally&lt;br /&gt;
https://docs.microsoft.com/en-us/windows/wsl/kernel-release-notes&lt;br /&gt;
&lt;br /&gt;
= systemd =&lt;br /&gt;
Using systemd under WSL&lt;br /&gt;
* https://github.com/shayne/wsl2-hacks&lt;br /&gt;
OR&lt;br /&gt;
* https://github.com/arkane-systems/genie&lt;br /&gt;
&lt;br /&gt;
= iptables = &lt;br /&gt;
wsl2 kernel will have iptables support - but not nftables, so eg debian needs to be configured to use the iptables-legacy variant&lt;br /&gt;
https://wiki.debian.org/nftables&lt;br /&gt;
&lt;br /&gt;
this will affect both sshuttle and podman&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= network =&lt;br /&gt;
you can have services listen on *:&amp;lt;port&amp;gt; inside WSL and access the services from windows&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
sshuttle works under WSL (remember that sshuttle only handles TCP - so forget UDP+ICMP!) but the traffic redirection is done in iptables within wsl - so windows applications can&#039;t use it&lt;br /&gt;
BUT&lt;br /&gt;
&lt;br /&gt;
you can eg install the squid http proxy inside WSL and configure your browser to use localhost:3128 as proxy, thereby forcing your requests into wsl where they can utilize the sshuttle tunnel&lt;br /&gt;
&lt;br /&gt;
if you combine this with a local https://en.wikipedia.org/wiki/Proxy_auto-config file and use file:///c:/path-to-your-pac in your browser  you can specify which hosts to route to squid&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= docker =&lt;br /&gt;
Docker desktop integrates with WSL(2) but the native docker package will not work within wsl2&lt;br /&gt;
&lt;br /&gt;
Docker Desktop has recently changed license - so it might be time to look into podman instead?&lt;/div&gt;</summary>
		<author><name>Torben</name></author>
	</entry>
	<entry>
		<id>https://wiki.t-hoerup.dk/index.php?title=WSL_-_Windows_Subsystem_for_Linux&amp;diff=12206</id>
		<title>WSL - Windows Subsystem for Linux</title>
		<link rel="alternate" type="text/html" href="https://wiki.t-hoerup.dk/index.php?title=WSL_-_Windows_Subsystem_for_Linux&amp;diff=12206"/>
		<updated>2021-10-03T16:35:34Z</updated>

		<summary type="html">&lt;p&gt;Torben: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
https://docs.microsoft.com/en-us/windows/wsl/kernel-release-notes&lt;br /&gt;
&lt;br /&gt;
= systemd =&lt;br /&gt;
Using systemd under WSL&lt;br /&gt;
* https://github.com/shayne/wsl2-hacks&lt;br /&gt;
OR&lt;br /&gt;
* https://github.com/arkane-systems/genie&lt;br /&gt;
&lt;br /&gt;
= iptables = &lt;br /&gt;
wsl2 kernel will have iptables support - but not nftables, so eg debian needs to be configured to use the iptables-legacy variant&lt;br /&gt;
https://wiki.debian.org/nftables&lt;br /&gt;
&lt;br /&gt;
this will affect both sshuttle and podman&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= network =&lt;br /&gt;
you can have services listen on *:&amp;lt;port&amp;gt; inside WSL and access the services from windows&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
sshuttle works under WSL (remember that sshuttle only handles TCP - so forget UDP+ICMP!) but the traffic redirection is done in iptables within wsl - so windows applications can&#039;t use it&lt;br /&gt;
BUT&lt;br /&gt;
&lt;br /&gt;
you can eg install the squid http proxy inside WSL and configure your browser to use localhost:3128 as proxy, thereby forcing your requests into wsl where they can utilize the sshuttle tunnel&lt;br /&gt;
&lt;br /&gt;
if you combine this with a local https://en.wikipedia.org/wiki/Proxy_auto-config file and use file:///c:/path-to-your-pac in your browser  you can specify which hosts to route to squid&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= docker =&lt;br /&gt;
Docker desktop integrates with WSL(2) but the native docker package will not work within wsl2&lt;br /&gt;
&lt;br /&gt;
Docker Desktop has recently changed license - so it might be time to look into podman instead?&lt;/div&gt;</summary>
		<author><name>Torben</name></author>
	</entry>
	<entry>
		<id>https://wiki.t-hoerup.dk/index.php?title=WSL_-_Windows_Subsystem_for_Linux&amp;diff=12205</id>
		<title>WSL - Windows Subsystem for Linux</title>
		<link rel="alternate" type="text/html" href="https://wiki.t-hoerup.dk/index.php?title=WSL_-_Windows_Subsystem_for_Linux&amp;diff=12205"/>
		<updated>2021-10-03T16:10:49Z</updated>

		<summary type="html">&lt;p&gt;Torben: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
https://docs.microsoft.com/en-us/windows/wsl/kernel-release-notes&lt;br /&gt;
&lt;br /&gt;
= systemd =&lt;br /&gt;
Using systemd under WSL&lt;br /&gt;
* https://github.com/shayne/wsl2-hacks&lt;br /&gt;
OR&lt;br /&gt;
* https://github.com/arkane-systems/genie&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= network =&lt;br /&gt;
you can have services listen on *:&amp;lt;port&amp;gt; inside WSL and access the services from windows&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
sshuttle works under WSL (remember that sshuttle only handles TCP - so forget UDP+ICMP!) but the traffic redirection is done in iptables within wsl - so windows applications can&#039;t use it&lt;br /&gt;
BUT&lt;br /&gt;
&lt;br /&gt;
you can eg install the squid http proxy inside WSL and configure your browser to use localhost:3128 as proxy, thereby forcing your requests into wsl where they can utilize the sshuttle tunnel&lt;br /&gt;
&lt;br /&gt;
if you combine this with a local https://en.wikipedia.org/wiki/Proxy_auto-config file and use file:///c:/path-to-your-pac in your browser  you can specify which hosts to route to squid&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= docker =&lt;br /&gt;
Docker desktop integrates with WSL(2) but the native docker package will not work within wsl2&lt;br /&gt;
&lt;br /&gt;
Docker Desktop has recently changed license - so it might be time to look into podman instead?&lt;/div&gt;</summary>
		<author><name>Torben</name></author>
	</entry>
	<entry>
		<id>https://wiki.t-hoerup.dk/index.php?title=WSL_-_Windows_Subsystem_for_Linux&amp;diff=12204</id>
		<title>WSL - Windows Subsystem for Linux</title>
		<link rel="alternate" type="text/html" href="https://wiki.t-hoerup.dk/index.php?title=WSL_-_Windows_Subsystem_for_Linux&amp;diff=12204"/>
		<updated>2021-10-03T16:08:48Z</updated>

		<summary type="html">&lt;p&gt;Torben: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
https://docs.microsoft.com/en-us/windows/wsl/kernel-release-notes&lt;br /&gt;
&lt;br /&gt;
= systemd =&lt;br /&gt;
Using systemd under WSL&lt;br /&gt;
* https://github.com/shayne/wsl2-hacks&lt;br /&gt;
OR&lt;br /&gt;
* https://github.com/arkane-systems/genie&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= network =&lt;br /&gt;
you can have services listen on *:&amp;lt;port&amp;gt; inside WSL and access the services from windows&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
sshuttle works under WSL (remember that sshuttle only handles TCP - so forget UDP+ICMP!) but the traffic redirection is done in iptables within wsl - so windows applications can&#039;t use it&lt;br /&gt;
BUT&lt;br /&gt;
&lt;br /&gt;
you can eg install the squid http proxy inside WSL and configure your browser to use localhost:3128 as proxy, thereby forcing your requests into wsl where they can utilize the sshuttle tunnel&lt;br /&gt;
&lt;br /&gt;
if you combine this with a local https://en.wikipedia.org/wiki/Proxy_auto-config file and use file:///c:/path-to-your-pac in your browser  you can specify which hosts to route to squid&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= docker =&lt;br /&gt;
Docker desktop integrates with WSL(2) but the native docker package will not work within wsl2&lt;br /&gt;
&lt;br /&gt;
Docker Desktop has recently changed license - so it might be time to look into podman instead?&lt;br /&gt;
(Minikube works fine with podman in wsl)&lt;/div&gt;</summary>
		<author><name>Torben</name></author>
	</entry>
	<entry>
		<id>https://wiki.t-hoerup.dk/index.php?title=WSL_-_Windows_Subsystem_for_Linux&amp;diff=12203</id>
		<title>WSL - Windows Subsystem for Linux</title>
		<link rel="alternate" type="text/html" href="https://wiki.t-hoerup.dk/index.php?title=WSL_-_Windows_Subsystem_for_Linux&amp;diff=12203"/>
		<updated>2021-09-30T08:42:15Z</updated>

		<summary type="html">&lt;p&gt;Torben: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
https://docs.microsoft.com/en-us/windows/wsl/kernel-release-notes&lt;br /&gt;
&lt;br /&gt;
= systemd =&lt;br /&gt;
Using systemd under WSL&lt;br /&gt;
* https://github.com/shayne/wsl2-hacks&lt;br /&gt;
OR&lt;br /&gt;
* https://github.com/arkane-systems/genie&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= network =&lt;br /&gt;
you can have services listen on *:&amp;lt;port&amp;gt; inside WSL and access the services from windows&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
sshuttle works under WSL (remember that sshuttle only handles TCP - so forget UDP+ICMP!) but the traffic redirection is done in iptables within wsl - so windows applications can&#039;t use it&lt;br /&gt;
BUT&lt;br /&gt;
&lt;br /&gt;
you can eg install the squid http proxy inside WSL and configure your browser to use localhost:3128 as proxy, thereby forcing your requests into wsl where they can utilize the sshuttle tunnel&lt;br /&gt;
&lt;br /&gt;
if you combine this with a local https://en.wikipedia.org/wiki/Proxy_auto-config file and use file:///c:/path-to-your-pac in your browser  you can specify which hosts to route to squid&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= docker =&lt;br /&gt;
Docker desktop integrates with WSL(2) but the native docker package will not work within wsl2&lt;br /&gt;
&lt;br /&gt;
Docker Desktop has recently changed license - so it might be time to look into podman instead?&lt;/div&gt;</summary>
		<author><name>Torben</name></author>
	</entry>
	<entry>
		<id>https://wiki.t-hoerup.dk/index.php?title=WSL_-_Windows_Subsystem_for_Linux&amp;diff=12202</id>
		<title>WSL - Windows Subsystem for Linux</title>
		<link rel="alternate" type="text/html" href="https://wiki.t-hoerup.dk/index.php?title=WSL_-_Windows_Subsystem_for_Linux&amp;diff=12202"/>
		<updated>2021-09-29T20:02:00Z</updated>

		<summary type="html">&lt;p&gt;Torben: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
&lt;br /&gt;
Using systemd under WSL&lt;br /&gt;
* https://github.com/shayne/wsl2-hacks&lt;br /&gt;
OR&lt;br /&gt;
* https://github.com/arkane-systems/genie&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= network =&lt;br /&gt;
you can have services listen on *:&amp;lt;port&amp;gt; inside WSL and access the services from windows&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
sshuttle works under WSL (remember that sshuttle only handles TCP - so forget UDP+ICMP!) but the traffic redirection is done in iptables within wsl - so windows applications can&#039;t use it&lt;br /&gt;
BUT&lt;br /&gt;
&lt;br /&gt;
you can eg install the squid http proxy inside WSL and configure your browser to use localhost:3128 as proxy, thereby forcing your requests into wsl where they can utilize the sshuttle tunnel&lt;br /&gt;
&lt;br /&gt;
if you combine this with a local https://en.wikipedia.org/wiki/Proxy_auto-config file and use file:///c:/path-to-your-pac in your browser  you can specify which hosts to route to squid&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= docker =&lt;br /&gt;
Docker Desktop integrates with WSL 2, but the native docker package will not work inside WSL 2&lt;br /&gt;
&lt;br /&gt;
Docker Desktop has recently changed its license, so it might be time to look into podman instead?&lt;/div&gt;</summary>
		<author><name>Torben</name></author>
	</entry>
	<entry>
		<id>https://wiki.t-hoerup.dk/index.php?title=WSL_-_Windows_Subsystem_for_Linux&amp;diff=12201</id>
		<title>WSL - Windows Subsystem for Linux</title>
		<link rel="alternate" type="text/html" href="https://wiki.t-hoerup.dk/index.php?title=WSL_-_Windows_Subsystem_for_Linux&amp;diff=12201"/>
		<updated>2021-09-29T19:53:45Z</updated>

		<summary type="html">&lt;p&gt;Torben: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
&lt;br /&gt;
Using systemd under WSL&lt;br /&gt;
* https://github.com/shayne/wsl2-hacks&lt;br /&gt;
OR&lt;br /&gt;
* https://github.com/arkane-systems/genie&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= network =&lt;br /&gt;
You can have services listen on *:&amp;lt;port&amp;gt; inside WSL and access them from Windows&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
sshuttle works under WSL (remember that sshuttle only handles TCP, so forget about UDP+ICMP!), but the traffic redirection is done in iptables within WSL, so Windows applications can&#039;t use it&lt;br /&gt;
BUT&lt;br /&gt;
&lt;br /&gt;
You can, for example, install the squid HTTP proxy inside WSL and configure your browser to use localhost:3128 as its proxy, thereby forcing your requests into WSL, from where they can utilize the sshuttle tunnel&lt;br /&gt;
&lt;br /&gt;
If you combine this with a local https://en.wikipedia.org/wiki/Proxy_auto-config file and use file:///c:/path-to-your-pac in your browser, you can specify which hosts to route through squid&lt;/div&gt;</summary>
		<author><name>Torben</name></author>
	</entry>
	<entry>
		<id>https://wiki.t-hoerup.dk/index.php?title=WSL_-_Windows_Subsystem_for_Linux&amp;diff=12200</id>
		<title>WSL - Windows Subsystem for Linux</title>
		<link rel="alternate" type="text/html" href="https://wiki.t-hoerup.dk/index.php?title=WSL_-_Windows_Subsystem_for_Linux&amp;diff=12200"/>
		<updated>2021-09-19T16:26:25Z</updated>

		<summary type="html">&lt;p&gt;Torben: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
&lt;br /&gt;
Using systemd under WSL&lt;br /&gt;
* https://github.com/shayne/wsl2-hacks&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= network =&lt;br /&gt;
You can have services listen on *:&amp;lt;port&amp;gt; inside WSL and access them from Windows&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
sshuttle works under WSL (remember that sshuttle only handles TCP, so forget about UDP+ICMP!), but the traffic redirection is done in iptables within WSL, so Windows applications can&#039;t use it&lt;br /&gt;
BUT&lt;br /&gt;
&lt;br /&gt;
You can, for example, install the squid HTTP proxy inside WSL and configure your browser to use localhost:3128 as its proxy, thereby forcing your requests into WSL, from where they can utilize the sshuttle tunnel&lt;br /&gt;
&lt;br /&gt;
If you combine this with a local https://en.wikipedia.org/wiki/Proxy_auto-config file and use file:///c:/path-to-your-pac in your browser, you can specify which hosts to route through squid&lt;/div&gt;</summary>
		<author><name>Torben</name></author>
	</entry>
	<entry>
		<id>https://wiki.t-hoerup.dk/index.php?title=WSL_-_Windows_Subsystem_for_Linux&amp;diff=12199</id>
		<title>WSL - Windows Subsystem for Linux</title>
		<link rel="alternate" type="text/html" href="https://wiki.t-hoerup.dk/index.php?title=WSL_-_Windows_Subsystem_for_Linux&amp;diff=12199"/>
		<updated>2021-08-26T09:48:34Z</updated>

		<summary type="html">&lt;p&gt;Torben: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
&lt;br /&gt;
Using systemd under WSL&lt;br /&gt;
* https://github.com/shayne/wsl2-hacks&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= network =&lt;br /&gt;
You can have services listen on *:&amp;lt;port&amp;gt; inside WSL and access them from Windows&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
sshuttle works under WSL (remember that sshuttle only handles TCP, so forget about UDP+ICMP!), but the traffic redirection is done in iptables within WSL, so Windows applications can&#039;t use it&lt;br /&gt;
BUT&lt;br /&gt;
&lt;br /&gt;
You can, for example, install the squid HTTP proxy inside WSL and configure your browser to use localhost:3128 as its proxy, thereby forcing your requests into WSL, from where they can utilize the sshuttle tunnel&lt;/div&gt;</summary>
		<author><name>Torben</name></author>
	</entry>
	<entry>
		<id>https://wiki.t-hoerup.dk/index.php?title=WSL_-_Windows_Subsystem_for_Linux&amp;diff=12198</id>
		<title>WSL - Windows Subsystem for Linux</title>
		<link rel="alternate" type="text/html" href="https://wiki.t-hoerup.dk/index.php?title=WSL_-_Windows_Subsystem_for_Linux&amp;diff=12198"/>
		<updated>2021-08-26T09:48:20Z</updated>

		<summary type="html">&lt;p&gt;Torben: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
&lt;br /&gt;
Using systemd under WSL&lt;br /&gt;
* https://github.com/shayne/wsl2-hacks&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= network =&lt;br /&gt;
You can have services listen on *:&amp;lt;port&amp;gt; inside WSL and access them from Windows&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
sshuttle works under WSL (remember that sshuttle only handles TCP, so forget about UDP+ICMP!), but the traffic redirection is done in iptables within WSL, so Windows applications can&#039;t use it&lt;br /&gt;
BUT&lt;br /&gt;
You can, for example, install the squid HTTP proxy inside WSL and configure your browser to use localhost:3128 as its proxy, thereby forcing your requests into WSL, from where they can utilize the sshuttle tunnel&lt;/div&gt;</summary>
		<author><name>Torben</name></author>
	</entry>
	<entry>
		<id>https://wiki.t-hoerup.dk/index.php?title=WSL_-_Windows_Subsystem_for_Linux&amp;diff=12197</id>
		<title>WSL - Windows Subsystem for Linux</title>
		<link rel="alternate" type="text/html" href="https://wiki.t-hoerup.dk/index.php?title=WSL_-_Windows_Subsystem_for_Linux&amp;diff=12197"/>
		<updated>2021-08-26T09:46:31Z</updated>

		<summary type="html">&lt;p&gt;Torben: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
&lt;br /&gt;
Using systemd under WSL&lt;br /&gt;
* https://github.com/shayne/wsl2-hacks&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= network =&lt;br /&gt;
You can have services listen on *:&amp;lt;port&amp;gt; inside WSL and access them from Windows&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
sshuttle works under WSL (remember that sshuttle only handles TCP, so forget about UDP+ICMP!), but the traffic routing is done in iptables, so Windows applications can&#039;t use it&lt;br /&gt;
BUT&lt;br /&gt;
You can, for example, install the squid HTTP proxy inside WSL and configure your browser to use localhost:3128 as its proxy, thereby forcing your requests into WSL, from where they can utilize the sshuttle tunnel&lt;/div&gt;</summary>
		<author><name>Torben</name></author>
	</entry>
	<entry>
		<id>https://wiki.t-hoerup.dk/index.php?title=WSL_-_Windows_Subsystem_for_Linux&amp;diff=12196</id>
		<title>WSL - Windows Subsystem for Linux</title>
		<link rel="alternate" type="text/html" href="https://wiki.t-hoerup.dk/index.php?title=WSL_-_Windows_Subsystem_for_Linux&amp;diff=12196"/>
		<updated>2021-08-26T09:46:13Z</updated>

		<summary type="html">&lt;p&gt;Torben: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
&lt;br /&gt;
Using systemd under WSL&lt;br /&gt;
* https://github.com/shayne/wsl2-hacks&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= network =&lt;br /&gt;
You can have services listen on *:&amp;lt;port&amp;gt; inside WSL and access them from Windows&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
sshuttle works under WSL (remember that sshuttle only handles TCP, so forget about UDP+ICMP!), but the traffic routing is done in iptables, so Windows applications can&#039;t use it&lt;br /&gt;
BUT&lt;br /&gt;
You can, for example, install the squid HTTP proxy inside WSL and configure your browser to use localhost:3128 as its proxy, thereby forcing your requests into WSL, from where they can utilize the sshuttle tunnel&lt;/div&gt;</summary>
		<author><name>Torben</name></author>
	</entry>
	<entry>
		<id>https://wiki.t-hoerup.dk/index.php?title=WSL_-_Windows_Subsystem_for_Linux&amp;diff=12195</id>
		<title>WSL - Windows Subsystem for Linux</title>
		<link rel="alternate" type="text/html" href="https://wiki.t-hoerup.dk/index.php?title=WSL_-_Windows_Subsystem_for_Linux&amp;diff=12195"/>
		<updated>2021-08-26T08:26:49Z</updated>

		<summary type="html">&lt;p&gt;Torben: Created page with &amp;quot;  Brug af systemd under WSL * https://github.com/shayne/wsl2-hacks&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
&lt;br /&gt;
Using systemd under WSL&lt;br /&gt;
* https://github.com/shayne/wsl2-hacks&lt;/div&gt;</summary>
		<author><name>Torben</name></author>
	</entry>
	<entry>
		<id>https://wiki.t-hoerup.dk/index.php?title=Main_Page&amp;diff=12194</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://wiki.t-hoerup.dk/index.php?title=Main_Page&amp;diff=12194"/>
		<updated>2021-08-26T08:26:15Z</updated>

		<summary type="html">&lt;p&gt;Torben: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Programming==&lt;br /&gt;
===C++===&lt;br /&gt;
*[[MFC]] and Visual C++&lt;br /&gt;
*[[Cpp]] - C++&lt;br /&gt;
**[[Cpp standard containers]]&lt;br /&gt;
**[[Serial Port Detection]]&lt;br /&gt;
*[[Linux development]] - Linux &amp;amp; C++&lt;br /&gt;
*[[wxWidgets]]&lt;br /&gt;
**[[wxFormbuilder]]&lt;br /&gt;
&lt;br /&gt;
===Java===&lt;br /&gt;
*[[AppServer]]&lt;br /&gt;
*[[Java Tools]]&lt;br /&gt;
*[[Java dev env quick guide]]&lt;br /&gt;
&lt;br /&gt;
===C#===&lt;br /&gt;
*[[Quick Guide to .exe code signing]]&lt;br /&gt;
&lt;br /&gt;
===Miscellaneous===&lt;br /&gt;
*[[CodeBlocks]]&lt;br /&gt;
*[[UML]]&lt;br /&gt;
&lt;br /&gt;
*[[Communication]]&lt;br /&gt;
*[[PIC]]&lt;br /&gt;
&lt;br /&gt;
==Grundfos==&lt;br /&gt;
*[[SCM]] (Software Configuration Management)&lt;br /&gt;
&lt;br /&gt;
==Projects==&lt;br /&gt;
*[[Latency Simulation]]&lt;br /&gt;
*[[Linux Corporate Network]]&lt;br /&gt;
*[[RADIUS]]&lt;br /&gt;
*[[OpenVPN]]&lt;br /&gt;
*[[Slide show Linux]]&lt;br /&gt;
*[[Power Assessment]]&lt;br /&gt;
*[[Todic Stream]]&lt;br /&gt;
*[[CaddiBuntu]]&lt;br /&gt;
*[[NetworkMonitoring]]&lt;br /&gt;
*[[Android]]&lt;br /&gt;
*[[AllJavaServer]]&lt;br /&gt;
*Debian&lt;br /&gt;
**[[Debian]]&lt;br /&gt;
**[[BackPorts]]&lt;br /&gt;
**[[SFTP chroot + rsync]]&lt;br /&gt;
*[[haproxy]]&lt;br /&gt;
** [[pfSense + letsencrypt + haproxy]]&lt;br /&gt;
*[[VPS udbydere]]&lt;br /&gt;
*[[Timelapse]]&lt;br /&gt;
*[[xbmc]]&lt;br /&gt;
* [[HomeLab Virtualisering]]&lt;br /&gt;
** [[xen]]&lt;br /&gt;
** [[HomeLab Server HW]]&lt;br /&gt;
*[[CPU Comparison]]&lt;br /&gt;
*[[HoerupNet]]&lt;br /&gt;
**[[Netværk liste]]&lt;br /&gt;
** [[pfsense hardware crypto]]&lt;br /&gt;
** [[pfsense openconnect]]&lt;br /&gt;
** [[pfsense softether]]&lt;br /&gt;
** [[nextcloud]]&lt;br /&gt;
** [[sshbastion]]&lt;br /&gt;
** Hoerup devops&lt;br /&gt;
***[[Puppet]]&lt;br /&gt;
***[[Icinga2]]&lt;br /&gt;
***[[Bacula]]&lt;br /&gt;
*[[Mobile]]&lt;br /&gt;
*[[OpenStreetMap]]&lt;br /&gt;
*[[MariaDB]]&lt;br /&gt;
*[[Docker]]&lt;br /&gt;
*[[Windows]] -eww&lt;br /&gt;
**[[MDT]]&lt;br /&gt;
** [[WSL - Windows Subsystem for Linux]]&lt;br /&gt;
*[[esxi og vCenter]]&lt;br /&gt;
&lt;br /&gt;
*[[Udvidet linux - webserver]]&lt;br /&gt;
*[[Linux-Padawans]]&lt;br /&gt;
*[[Workshop - MySQL]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
*[[Radioamatør]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
*[[ToDo]] - a list of assorted nerdy projects&lt;br /&gt;
*[[Deprecated]] - the graveyard for old pages&lt;br /&gt;
&lt;br /&gt;
==Other==&lt;br /&gt;
&lt;br /&gt;
[[AndroidApps]]&lt;br /&gt;
&lt;br /&gt;
[[AndroidTablets]]&lt;br /&gt;
&lt;br /&gt;
[[Tidsfordriv]]&lt;br /&gt;
&lt;br /&gt;
[[Lejlighed]]&lt;br /&gt;
&lt;br /&gt;
He is not the [[Skarpeste Kniv]] (the sharpest knife)&lt;br /&gt;
&lt;br /&gt;
[[AlternativMusik]]&lt;br /&gt;
&lt;br /&gt;
[[GrønnegadeMusik]]&lt;br /&gt;
&lt;br /&gt;
[[ScoutCraft]]&lt;br /&gt;
&lt;br /&gt;
[[Hardware]]&lt;br /&gt;
&lt;br /&gt;
[[Bryllup]]&lt;br /&gt;
&lt;br /&gt;
[[Middelalder Skole]]&lt;br /&gt;
&lt;br /&gt;
[[Forbrug]]&lt;br /&gt;
&lt;br /&gt;
[[Sommer]]&lt;br /&gt;
&lt;br /&gt;
[[misc]]&lt;br /&gt;
&lt;br /&gt;
==Help and Sandbox==&lt;br /&gt;
&lt;br /&gt;
An overview of MediaWiki syntax can be found at http://meta.wikimedia.org/wiki/Help:Editing. Alternatively, have a look at my own little [[syntax]] page.&lt;br /&gt;
&lt;br /&gt;
If you are not quite sure about the wiki syntax, or if you want to try out a particular piece of formatting, then please use [[Sandkassen]] instead &lt;br /&gt;
of some of the other pages.&lt;/div&gt;</summary>
		<author><name>Torben</name></author>
	</entry>
	<entry>
		<id>https://wiki.t-hoerup.dk/index.php?title=Misc&amp;diff=12193</id>
		<title>Misc</title>
		<link rel="alternate" type="text/html" href="https://wiki.t-hoerup.dk/index.php?title=Misc&amp;diff=12193"/>
		<updated>2021-02-16T07:24:30Z</updated>

		<summary type="html">&lt;p&gt;Torben: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;gsettings set org.gnome.desktop.input-sources sources &amp;quot;[(&#039;xkb&#039;, &#039;dk&#039;)]&amp;quot;&lt;br /&gt;
&lt;br /&gt;
gsettings get org.gnome.desktop.input-sources sources &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
gsettings set org.gnome.settings-daemon.plugins.power active false&lt;br /&gt;
&lt;br /&gt;
gsettings get org.gnome.settings-daemon.plugins.power active&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
apt-add-repository universe&lt;br /&gt;
&lt;br /&gt;
apt install -y network-manager-openconnect-gnome  freerdp2-x11 curl&lt;br /&gt;
&lt;br /&gt;
xfreerdp /multimonitor /u:user /v:hostname&lt;br /&gt;
&lt;br /&gt;
note: ctrl + alt + enter to escape xfreerdp&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
curl -sS https://download.spotify.com/debian/pubkey_0D811D58.gpg | sudo apt-key add - &lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;deb http://repository.spotify.com stable non-free&amp;quot; | sudo tee /etc/apt/sources.list.d/spotify.list&lt;br /&gt;
&lt;br /&gt;
sudo apt update &amp;amp;&amp;amp; sudo apt install spotify-client&lt;/div&gt;</summary>
		<author><name>Torben</name></author>
	</entry>
	<entry>
		<id>https://wiki.t-hoerup.dk/index.php?title=Misc&amp;diff=12192</id>
		<title>Misc</title>
		<link rel="alternate" type="text/html" href="https://wiki.t-hoerup.dk/index.php?title=Misc&amp;diff=12192"/>
		<updated>2021-02-16T07:24:04Z</updated>

		<summary type="html">&lt;p&gt;Torben: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;gsettings set org.gnome.desktop.input-sources sources &amp;quot;[(&#039;xkb&#039;, &#039;dk&#039;)]&amp;quot;&lt;br /&gt;
&lt;br /&gt;
gsettings get org.gnome.desktop.input-sources sources &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
gsettings set org.gnome.settings-daemon.plugins.power active false&lt;br /&gt;
&lt;br /&gt;
gsettings get org.gnome.settings-daemon.plugins.power active&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
apt-add-repository universe&lt;br /&gt;
&lt;br /&gt;
apt install -y network-manager-openconnect-gnome  freerdp2-x11 curl&lt;br /&gt;
&lt;br /&gt;
xfreerdp /multimonitor /u:user /v:hostname&lt;br /&gt;
&lt;br /&gt;
note: ctrl + alt + enter to escape xfreerdp&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
curl -sS https://download.spotify.com/debian/pubkey_0D811D58.gpg | sudo apt-key add - &lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;deb http://repository.spotify.com stable non-free&amp;quot; | sudo tee /etc/apt/sources.list.d/spotify.list&lt;br /&gt;
&lt;br /&gt;
sudo apt update &amp;amp;&amp;amp; sudo apt install spotify&lt;/div&gt;</summary>
		<author><name>Torben</name></author>
	</entry>
	<entry>
		<id>https://wiki.t-hoerup.dk/index.php?title=Misc&amp;diff=12191</id>
		<title>Misc</title>
		<link rel="alternate" type="text/html" href="https://wiki.t-hoerup.dk/index.php?title=Misc&amp;diff=12191"/>
		<updated>2021-02-16T07:23:29Z</updated>

		<summary type="html">&lt;p&gt;Torben: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;gsettings set org.gnome.desktop.input-sources sources &amp;quot;[(&#039;xkb&#039;, &#039;dk&#039;)]&amp;quot;&lt;br /&gt;
&lt;br /&gt;
gsettings get org.gnome.desktop.input-sources sources &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
gsettings set org.gnome.settings-daemon.plugins.power active false&lt;br /&gt;
&lt;br /&gt;
gsettings get org.gnome.settings-daemon.plugins.power active&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
apt-add-repository universe&lt;br /&gt;
&lt;br /&gt;
apt install -y network-manager-openconnect-gnome  freerdp2-x11 curl&lt;br /&gt;
&lt;br /&gt;
xfreerdp /multimonitor /u:user /v:hostname&lt;br /&gt;
&lt;br /&gt;
note: ctrl + alt + enter to escape xfreerdp&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
curl -sS https://download.spotify.com/debian/pubkey_0D811D58.gpg | sudo apt-key add - &lt;br /&gt;
echo &amp;quot;deb http://repository.spotify.com stable non-free&amp;quot; | sudo tee /etc/apt/sources.list.d/spotify.list&lt;br /&gt;
sudo apt install spotify&lt;/div&gt;</summary>
		<author><name>Torben</name></author>
	</entry>
	<entry>
		<id>https://wiki.t-hoerup.dk/index.php?title=Misc&amp;diff=12190</id>
		<title>Misc</title>
		<link rel="alternate" type="text/html" href="https://wiki.t-hoerup.dk/index.php?title=Misc&amp;diff=12190"/>
		<updated>2021-02-16T07:21:11Z</updated>

		<summary type="html">&lt;p&gt;Torben: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;gsettings set org.gnome.desktop.input-sources sources &amp;quot;[(&#039;xkb&#039;, &#039;dk&#039;)]&amp;quot;&lt;br /&gt;
&lt;br /&gt;
gsettings get org.gnome.desktop.input-sources sources &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
gsettings set org.gnome.settings-daemon.plugins.power active false&lt;br /&gt;
&lt;br /&gt;
gsettings get org.gnome.settings-daemon.plugins.power active&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
apt-add-repository universe&lt;br /&gt;
&lt;br /&gt;
apt install -y network-manager-openconnect-gnome  freerdp2-x11&lt;br /&gt;
&lt;br /&gt;
xfreerdp /multimonitor /u:user /v:hostname&lt;br /&gt;
&lt;br /&gt;
# ctrl + alt + enter to escape xfreerdp&lt;/div&gt;</summary>
		<author><name>Torben</name></author>
	</entry>
	<entry>
		<id>https://wiki.t-hoerup.dk/index.php?title=Misc&amp;diff=12189</id>
		<title>Misc</title>
		<link rel="alternate" type="text/html" href="https://wiki.t-hoerup.dk/index.php?title=Misc&amp;diff=12189"/>
		<updated>2021-02-16T07:18:30Z</updated>

		<summary type="html">&lt;p&gt;Torben: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;gsettings set org.gnome.desktop.input-sources sources &amp;quot;[(&#039;xkb&#039;, &#039;dk&#039;)]&amp;quot;&lt;br /&gt;
&lt;br /&gt;
gsettings get org.gnome.desktop.input-sources sources &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
gsettings set org.gnome.settings-daemon.plugins.power active false&lt;br /&gt;
&lt;br /&gt;
gsettings get org.gnome.settings-daemon.plugins.power active&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
apt-add-repository universe&lt;br /&gt;
&lt;br /&gt;
apt install -y network-manager-openconnect-gnome  freerdp2-x11&lt;br /&gt;
&lt;br /&gt;
xfreerdp /dualmonitor /u:user /v:hostname&lt;/div&gt;</summary>
		<author><name>Torben</name></author>
	</entry>
	<entry>
		<id>https://wiki.t-hoerup.dk/index.php?title=Misc&amp;diff=12188</id>
		<title>Misc</title>
		<link rel="alternate" type="text/html" href="https://wiki.t-hoerup.dk/index.php?title=Misc&amp;diff=12188"/>
		<updated>2021-02-16T07:10:45Z</updated>

		<summary type="html">&lt;p&gt;Torben: Created page with &amp;quot;gsettings set org.gnome.desktop.input-sources sources &amp;quot;[(&amp;#039;xkb&amp;#039;, &amp;#039;dk&amp;#039;)]&amp;quot;  gsettings get org.gnome.desktop.input-sources sources    gsettings set org.gnome.settings-daemon.plugi...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;gsettings set org.gnome.desktop.input-sources sources &amp;quot;[(&#039;xkb&#039;, &#039;dk&#039;)]&amp;quot;&lt;br /&gt;
&lt;br /&gt;
gsettings get org.gnome.desktop.input-sources sources &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
gsettings set org.gnome.settings-daemon.plugins.power active false&lt;br /&gt;
&lt;br /&gt;
gsettings get org.gnome.settings-daemon.plugins.power active&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
apt-add-repository universe&lt;br /&gt;
&lt;br /&gt;
apt install -y network-manager-openconnect-gnome  freerdp2-x11&lt;/div&gt;</summary>
		<author><name>Torben</name></author>
	</entry>
	<entry>
		<id>https://wiki.t-hoerup.dk/index.php?title=Main_Page&amp;diff=12187</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://wiki.t-hoerup.dk/index.php?title=Main_Page&amp;diff=12187"/>
		<updated>2021-02-16T07:06:37Z</updated>

		<summary type="html">&lt;p&gt;Torben: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Programming==&lt;br /&gt;
===C++===&lt;br /&gt;
*[[MFC]] and Visual C++&lt;br /&gt;
*[[Cpp]] - C++&lt;br /&gt;
**[[Cpp standard containers]]&lt;br /&gt;
**[[Serial Port Detection]]&lt;br /&gt;
*[[Linux development]] - Linux &amp;amp; C++&lt;br /&gt;
*[[wxWidgets]]&lt;br /&gt;
**[[wxFormbuilder]]&lt;br /&gt;
&lt;br /&gt;
===Java===&lt;br /&gt;
*[[AppServer]]&lt;br /&gt;
*[[Java Tools]]&lt;br /&gt;
*[[Java dev env quick guide]]&lt;br /&gt;
&lt;br /&gt;
===C#===&lt;br /&gt;
*[[Quick Guide to .exe code signing]]&lt;br /&gt;
&lt;br /&gt;
===Miscellaneous===&lt;br /&gt;
*[[CodeBlocks]]&lt;br /&gt;
*[[UML]]&lt;br /&gt;
&lt;br /&gt;
*[[Communication]]&lt;br /&gt;
*[[PIC]]&lt;br /&gt;
&lt;br /&gt;
==Grundfos==&lt;br /&gt;
*[[SCM]] (Software Configuration Management)&lt;br /&gt;
&lt;br /&gt;
==Projects==&lt;br /&gt;
*[[Latency Simulation]]&lt;br /&gt;
*[[Linux Corporate Network]]&lt;br /&gt;
*[[RADIUS]]&lt;br /&gt;
*[[OpenVPN]]&lt;br /&gt;
*[[Slide show Linux]]&lt;br /&gt;
*[[Power Assessment]]&lt;br /&gt;
*[[Todic Stream]]&lt;br /&gt;
*[[CaddiBuntu]]&lt;br /&gt;
*[[NetworkMonitoring]]&lt;br /&gt;
*[[Android]]&lt;br /&gt;
*[[AllJavaServer]]&lt;br /&gt;
*Debian&lt;br /&gt;
**[[Debian]]&lt;br /&gt;
**[[BackPorts]]&lt;br /&gt;
**[[SFTP chroot + rsync]]&lt;br /&gt;
*[[haproxy]]&lt;br /&gt;
** [[pfSense + letsencrypt + haproxy]]&lt;br /&gt;
*[[VPS udbydere]]&lt;br /&gt;
*[[Timelapse]]&lt;br /&gt;
*[[xbmc]]&lt;br /&gt;
* [[HomeLab Virtualisering]]&lt;br /&gt;
** [[xen]]&lt;br /&gt;
** [[HomeLab Server HW]]&lt;br /&gt;
*[[CPU Comparison]]&lt;br /&gt;
*[[HoerupNet]]&lt;br /&gt;
**[[Netværk liste]]&lt;br /&gt;
** [[pfsense hardware crypto]]&lt;br /&gt;
** [[pfsense openconnect]]&lt;br /&gt;
** [[pfsense softether]]&lt;br /&gt;
** [[nextcloud]]&lt;br /&gt;
** [[sshbastion]]&lt;br /&gt;
** Hoerup devops&lt;br /&gt;
***[[Puppet]]&lt;br /&gt;
***[[Icinga2]]&lt;br /&gt;
***[[Bacula]]&lt;br /&gt;
*[[Mobile]]&lt;br /&gt;
*[[OpenStreetMap]]&lt;br /&gt;
*[[MariaDB]]&lt;br /&gt;
*[[Docker]]&lt;br /&gt;
*[[Windows]] - eww&lt;br /&gt;
**[[MDT]]&lt;br /&gt;
*[[esxi og vCenter]]&lt;br /&gt;
&lt;br /&gt;
*[[Udvidet linux - webserver]]&lt;br /&gt;
*[[Linux-Padawans]]&lt;br /&gt;
*[[Workshop - MySQL]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
*[[Radioamatør]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
*[[ToDo]] - a list of assorted nerdy projects&lt;br /&gt;
*[[Deprecated]] - the graveyard for old pages&lt;br /&gt;
&lt;br /&gt;
==Other==&lt;br /&gt;
&lt;br /&gt;
[[AndroidApps]]&lt;br /&gt;
&lt;br /&gt;
[[AndroidTablets]]&lt;br /&gt;
&lt;br /&gt;
[[Tidsfordriv]]&lt;br /&gt;
&lt;br /&gt;
[[Lejlighed]]&lt;br /&gt;
&lt;br /&gt;
He is not the [[Skarpeste Kniv]] (the sharpest knife)&lt;br /&gt;
&lt;br /&gt;
[[AlternativMusik]]&lt;br /&gt;
&lt;br /&gt;
[[GrønnegadeMusik]]&lt;br /&gt;
&lt;br /&gt;
[[ScoutCraft]]&lt;br /&gt;
&lt;br /&gt;
[[Hardware]]&lt;br /&gt;
&lt;br /&gt;
[[Bryllup]]&lt;br /&gt;
&lt;br /&gt;
[[Middelalder Skole]]&lt;br /&gt;
&lt;br /&gt;
[[Forbrug]]&lt;br /&gt;
&lt;br /&gt;
[[Sommer]]&lt;br /&gt;
&lt;br /&gt;
[[misc]]&lt;br /&gt;
&lt;br /&gt;
==Help and Sandbox==&lt;br /&gt;
&lt;br /&gt;
An overview of MediaWiki syntax can be found at http://meta.wikimedia.org/wiki/Help:Editing. Alternatively, have a look at my own little [[syntax]] page.&lt;br /&gt;
&lt;br /&gt;
If you are not quite sure about the wiki syntax, or if you want to try out a particular piece of formatting, then please use [[Sandkassen]] instead &lt;br /&gt;
of some of the other pages.&lt;/div&gt;</summary>
		<author><name>Torben</name></author>
	</entry>
	<entry>
		<id>https://wiki.t-hoerup.dk/index.php?title=Single_Host_Ceph_Server&amp;diff=12183</id>
		<title>Single Host Ceph Server</title>
		<link rel="alternate" type="text/html" href="https://wiki.t-hoerup.dk/index.php?title=Single_Host_Ceph_Server&amp;diff=12183"/>
		<updated>2020-11-10T21:42:19Z</updated>

		<summary type="html">&lt;p&gt;Torben: /* Delayed mount */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Clean CentOS 8&lt;br /&gt;
&lt;br /&gt;
=Basic Stuff and cephadm=&lt;br /&gt;
 yum install -y python3 podman chrony lvm2 wget &lt;br /&gt;
 wget -O /root/cephadm https://github.com/ceph/ceph/raw/octopus/src/cephadm/cephadm&lt;br /&gt;
 chmod +x /root/cephadm&lt;br /&gt;
&lt;br /&gt;
 mkdir -p /etc/ceph&lt;br /&gt;
&lt;br /&gt;
 ./cephadm add-repo --release octopus&lt;br /&gt;
 ./cephadm install&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Bootstrap the monitor on its own IP=&lt;br /&gt;
 cephadm bootstrap --mon-ip 192.168.2.206   &lt;br /&gt;
&lt;br /&gt;
=Install ceph=&lt;br /&gt;
 cephadm add-repo --release octopus&lt;br /&gt;
 cephadm install ceph-common&lt;br /&gt;
 cephadm install ceph &lt;br /&gt;
&lt;br /&gt;
=Create OSDs with all disks (get the exact command from Hoerup)=&lt;br /&gt;
 ceph orch apply osd --all-available-devices&lt;br /&gt;
 ceph status&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Create a new rule that uses OSD as the failure domain (instead of 3 hosts)=&lt;br /&gt;
 ceph osd crush rule create-replicated repl1 default osd&lt;br /&gt;
 ceph osd pool ls&lt;br /&gt;
 ceph osd pool set device_health_metrics crush_rule repl1&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Block Device=&lt;br /&gt;
==EC stuff, here with 4+1==&lt;br /&gt;
 ceph osd pool create rbdmeta replicated repl1&lt;br /&gt;
 ceph osd erasure-code-profile get default&lt;br /&gt;
 ceph osd erasure-code-profile set ec41 k=4 m=1 crush-failure-domain=osd&lt;br /&gt;
 ceph osd pool create rbddata erasure ec41&lt;br /&gt;
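Back-of-the-envelope for the ec41 profile above (a sketch of the arithmetic, not a ceph command): k=4 data chunks plus m=1 coding chunk means each logical byte occupies (k+m)/k bytes raw, i.e. 125% raw usage, versus 300% for the default 3-way replication.&lt;br /&gt;

```shell
# storage overhead of an EC profile: raw bytes stored per 100 logical bytes
k=4; m=1
overhead=$(( (k + m) * 100 / k ))
echo "ec${k}${m} raw usage: ${overhead}%"
```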
&lt;br /&gt;
==Hint that this pool will be used for block storage==&lt;br /&gt;
 ceph osd pool application enable rbddata rbd&lt;br /&gt;
 ceph osd pool application enable rbdmeta rbd&lt;br /&gt;
&lt;br /&gt;
==Allow EC block overwrites==&lt;br /&gt;
 ceph osd pool set rbddata allow_ec_overwrites true&lt;br /&gt;
&lt;br /&gt;
 rbd create --size 40G --data-pool rbddata rbdmeta/ectestimage1&lt;br /&gt;
 rbd ls rbdmeta&lt;br /&gt;
&lt;br /&gt;
==Map an rbd image in as a block device==&lt;br /&gt;
 rbd map rbdmeta/ectestimage1&lt;br /&gt;
&lt;br /&gt;
==Add to &#039;&#039;&#039;/etc/ceph/rbdmap&#039;&#039;&#039;== &lt;br /&gt;
 rbdmeta/ectestimage1    id=admin,keyring=/etc/ceph/ceph.client.admin.keyring&lt;br /&gt;
&lt;br /&gt;
 systemctl enable rbdmap.service&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Mount the filesystem==&lt;br /&gt;
 mkfs.xfs /dev/rbd0 &lt;br /&gt;
 mkdir /storage&lt;br /&gt;
 mount -t xfs /dev/rbd0 /storage/&lt;br /&gt;
 df -h /storage/&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;&#039;/etc/fstab&#039;&#039;&#039;==&lt;br /&gt;
 /dev/rbd0       /storage/       xfs     defaults,discard,_netdev        0       0&lt;br /&gt;
&lt;br /&gt;
=CephFS - Filesystem=&lt;br /&gt;
 # We call the filesystem myfs&lt;br /&gt;
 &lt;br /&gt;
 # set up the metadata server&lt;br /&gt;
 ceph orch apply mds myfs&lt;br /&gt;
 &lt;br /&gt;
 # create the volume&lt;br /&gt;
 ceph fs volume create myfs&lt;br /&gt;
 &lt;br /&gt;
 # the metadata AND the data pool for the root fs must be replicated, but we set the crush rule to allow all replicas on the same host&lt;br /&gt;
 ceph osd pool set cephfs.myfs.meta crush_rule repl1&lt;br /&gt;
 ceph osd pool set cephfs.myfs.data crush_rule repl1&lt;br /&gt;
 &lt;br /&gt;
 # set intended use on the pools&lt;br /&gt;
 ceph osd pool  application enable  cephfs.myfs.data cephfs&lt;br /&gt;
 ceph osd pool  application enable  cephfs.myfs.meta cephfs&lt;br /&gt;
 &lt;br /&gt;
 # use the admin keyring&lt;br /&gt;
 mount -o name=admin -t ceph 192.168.2.199:/ /mnt/cephfs/&lt;br /&gt;
 &lt;br /&gt;
 ceph osd pool create cephfs-ec erasure ec41&lt;br /&gt;
 ceph osd pool  application enable  cephfs-ec cephfs&lt;br /&gt;
 ceph osd pool set cephfs-ec allow_ec_overwrites true&lt;br /&gt;
  &lt;br /&gt;
&lt;br /&gt;
 ceph fs add_data_pool myfs cephfs-ec&lt;br /&gt;
&lt;br /&gt;
 #create subvolume, utilizing cephfs-ec as backing pool&lt;br /&gt;
 ceph fs subvolume create myfs subfs --pool_layout cephfs-ec&lt;br /&gt;
 &lt;br /&gt;
 #mount subvolume, default gets an annoyingly long path&lt;br /&gt;
 mount -t ceph -o name=admin 192.168.2.199:/volumes/_nogroup/subfs/60918416-6df6-4b4a-a071-ffd527fba26c/ /mnt/cephfs/&lt;br /&gt;
&lt;br /&gt;
 #fstab&lt;br /&gt;
 192.168.2.199:/volumes/_nogroup/subfs/60918416-6df6-4b4a-a071-ffd527fba26c/  /mnt/cephfs ceph  name=admin,_netdev 0 0&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Delayed mount =&lt;br /&gt;
 #cat mnt-cephfs.timer&lt;br /&gt;
 [Unit]&lt;br /&gt;
 Description=delayed mount of cephfs&lt;br /&gt;
 &lt;br /&gt;
 &lt;br /&gt;
 [Timer]&lt;br /&gt;
 OnBootSec=1min&lt;br /&gt;
 Unit=mnt-cephfs.service&lt;br /&gt;
 &lt;br /&gt;
 &lt;br /&gt;
 &lt;br /&gt;
 [Install]&lt;br /&gt;
 WantedBy=timers.target&lt;br /&gt;
&lt;br /&gt;
 # cat mnt-cephfs.service&lt;br /&gt;
 [Unit]&lt;br /&gt;
 Description=Mount CephFS&lt;br /&gt;
 &lt;br /&gt;
 &lt;br /&gt;
 [Service]&lt;br /&gt;
 Type=oneshot&lt;br /&gt;
 ExecStart=mount /mnt/cephfs&lt;br /&gt;
 User=root&lt;br /&gt;
 Group=root&lt;br /&gt;
 &lt;br /&gt;
 #systemctl enable mnt-cephfs.timer&lt;br /&gt;
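An alternative to the timer+service pair above, assuming systemd's fstab generator handles ceph mounts on this setup (an untested sketch, reusing the fstab line from earlier): let systemd defer the mount via an automount unit generated from /etc/fstab, so the first access triggers the mount and no timer unit is needed:&lt;br /&gt;

```
192.168.2.199:/volumes/_nogroup/subfs/60918416-6df6-4b4a-a071-ffd527fba26c/  /mnt/cephfs  ceph  name=admin,_netdev,noauto,x-systemd.automount  0 0
```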
&lt;br /&gt;
=What are we missing?=&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Clean shutdown / reboot?&lt;br /&gt;
&lt;br /&gt;
ceph logs? &lt;br /&gt;
&lt;br /&gt;
Scrubbing?&lt;br /&gt;
&lt;br /&gt;
Monitoring / prometheus?&lt;br /&gt;
&lt;br /&gt;
Failed disk, new disk.&lt;br /&gt;
&lt;br /&gt;
REST API&lt;br /&gt;
&lt;br /&gt;
=Sources n crap=&lt;br /&gt;
https://docs.ceph.com/en/latest/cephadm/install/&lt;br /&gt;
&lt;br /&gt;
https://medium.com/@balderscape/setting-up-a-virtual-single-node-ceph-storage-cluster-d86d6a6c658e&lt;br /&gt;
&lt;br /&gt;
https://linoxide.com/linux-how-to/hwto-configure-single-node-ceph-cluster/&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Zap disk for re-use==&lt;br /&gt;
 ceph-volume lvm zap /dev/sdX&lt;br /&gt;
or&lt;br /&gt;
 dd if=/dev/zero of=/dev/vdc bs=1M count=10&lt;/div&gt;</summary>
		<author><name>Torben</name></author>
	</entry>
	<entry>
		<id>https://wiki.t-hoerup.dk/index.php?title=Single_Host_Ceph_Server&amp;diff=12182</id>
		<title>Single Host Ceph Server</title>
		<link rel="alternate" type="text/html" href="https://wiki.t-hoerup.dk/index.php?title=Single_Host_Ceph_Server&amp;diff=12182"/>
		<updated>2020-11-10T21:41:54Z</updated>

		<summary type="html">&lt;p&gt;Torben: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Clean CentOS 8&lt;br /&gt;
&lt;br /&gt;
=Basic Stuff and cephadm=&lt;br /&gt;
 yum install -y python3 podman chrony lvm2 wget &lt;br /&gt;
 wget -O /root/cephadm https://github.com/ceph/ceph/raw/octopus/src/cephadm/cephadm&lt;br /&gt;
 chmod +x /root/cephadm&lt;br /&gt;
&lt;br /&gt;
 mkdir -p /etc/ceph&lt;br /&gt;
&lt;br /&gt;
 ./cephadm add-repo --release octopus&lt;br /&gt;
 ./cephadm install&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Bootstrap the monitor on its own IP=&lt;br /&gt;
 cephadm bootstrap --mon-ip 192.168.2.206   &lt;br /&gt;
&lt;br /&gt;
=Install ceph=&lt;br /&gt;
 cephadm add-repo --release octopus&lt;br /&gt;
 cephadm install ceph-common&lt;br /&gt;
 cephadm install ceph &lt;br /&gt;
&lt;br /&gt;
=Create OSDs with all disks (get the exact command from Hoerup)=&lt;br /&gt;
 ceph orch apply osd --all-available-devices&lt;br /&gt;
 ceph status&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Create a new rule that uses OSD as the failure domain (instead of 3 hosts)=&lt;br /&gt;
 ceph osd crush rule create-replicated repl1 default osd&lt;br /&gt;
 ceph osd pool ls&lt;br /&gt;
 ceph osd pool set device_health_metrics crush_rule repl1&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Block Device=&lt;br /&gt;
==EC stuff, here with 4+1==&lt;br /&gt;
 ceph osd pool create rbdmeta replicated repl1&lt;br /&gt;
 ceph osd erasure-code-profile get default&lt;br /&gt;
 ceph osd erasure-code-profile set ec41 k=4 m=1 crush-failure-domain=osd&lt;br /&gt;
 ceph osd pool create rbddata erasure ec41&lt;br /&gt;
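As a sanity check on the 4+1 profile above, a small sketch (not from the original page) of the usable-capacity math for a k+m erasure-code profile versus plain replication:

```python
def usable_fraction(k, m):
    # a k+m EC profile stores k data chunks plus m coding chunks,
    # so k/(k+m) of the raw capacity holds actual data
    return k / (k + m)

def replica_fraction(n):
    # n-way replication keeps 1/n of the raw capacity as usable data
    return 1 / n

print(usable_fraction(4, 1))  # 0.8 -- the ec41 profile above
print(replica_fraction(3))    # default 3-way replication: one third usable
```

With only one coding chunk (m=1), ec41 survives a single OSD failure but turns 80% of raw capacity into usable space, versus 33% for 3-way replication.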
&lt;br /&gt;
==Hint that this pool is used for block storage==&lt;br /&gt;
 ceph osd pool application enable rbddata rbd&lt;br /&gt;
 ceph osd pool application enable rbdmeta rbd&lt;br /&gt;
&lt;br /&gt;
==Allow EC block overwrites==&lt;br /&gt;
 ceph osd pool set rbddata allow_ec_overwrites true&lt;br /&gt;
&lt;br /&gt;
 rbd create --size 40G --data-pool rbddata rbdmeta/ectestimage1&lt;br /&gt;
 rbd ls rbdmeta&lt;br /&gt;
&lt;br /&gt;
==Map an rbd image as a block device==&lt;br /&gt;
 rbd map rbdmeta/ectestimage1&lt;br /&gt;
&lt;br /&gt;
==Add to &#039;&#039;&#039;/etc/ceph/rbdmap&#039;&#039;&#039;==&lt;br /&gt;
 rbdmeta/ectestimage1    id=admin,keyring=/etc/ceph/ceph.client.admin.keyring&lt;br /&gt;
&lt;br /&gt;
 systemctl enable rbdmap.service&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Mount the filesystem==&lt;br /&gt;
 mkfs.xfs /dev/rbd0 &lt;br /&gt;
 mkdir /storage&lt;br /&gt;
 mount -t xfs /dev/rbd0 /storage/&lt;br /&gt;
 df -h /storage/&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;&#039;/etc/fstab&#039;&#039;&#039;==&lt;br /&gt;
 /dev/rbd0       /storage/       xfs     defaults,discard,_netdev        0       0&lt;br /&gt;
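The fstab entry above uses the usual six fields; a small illustrative Python split (the entry itself is from this page, the parsing is just a sketch):

```python
# the rbd fstab entry from above, split into its six standard fields
line = "/dev/rbd0  /storage/  xfs  defaults,discard,_netdev  0  0"
device, mountpoint, fstype, options, dump, passno = line.split()

opts = options.split(",")
# "discard" lets XFS pass trims back to the rbd image;
# "_netdev" delays the mount until networking is up
print(device, fstype, opts)
```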
&lt;br /&gt;
=CephFS - Filesystem=&lt;br /&gt;
 # We name the filesystem myfs&lt;br /&gt;
 &lt;br /&gt;
 # set up the metadata server&lt;br /&gt;
 ceph orch apply mds myfs&lt;br /&gt;
 &lt;br /&gt;
 # create the volume&lt;br /&gt;
 ceph fs volume create myfs&lt;br /&gt;
 &lt;br /&gt;
 # the metadata AND data pool for the root fs must be replicated, but we set the crush rule to allow all replicas on the same host&lt;br /&gt;
 ceph osd pool set cephfs.myfs.meta crush_rule repl1&lt;br /&gt;
 ceph osd pool set cephfs.myfs.data crush_rule repl1&lt;br /&gt;
 &lt;br /&gt;
 # set the intended use on the pools&lt;br /&gt;
 ceph osd pool  application enable  cephfs.myfs.data cephfs&lt;br /&gt;
 ceph osd pool  application enable  cephfs.myfs.meta cephfs&lt;br /&gt;
 &lt;br /&gt;
 # use the admin keyring&lt;br /&gt;
 mount -o name=admin -t ceph 192.168.2.199:/ /mnt/cephfs/&lt;br /&gt;
 &lt;br /&gt;
 ceph osd pool create cephfs-ec erasure ec41&lt;br /&gt;
 ceph osd pool  application enable  cephfs-ec cephfs&lt;br /&gt;
 ceph osd pool set cephfs-ec allow_ec_overwrites true&lt;br /&gt;
  &lt;br /&gt;
&lt;br /&gt;
 ceph fs add_data_pool myfs cephfs-ec&lt;br /&gt;
&lt;br /&gt;
 #create subvolume, utilizing cephfs-ec as backing pool&lt;br /&gt;
 ceph fs subvolume create myfs subfs --pool_layout cephfs-ec&lt;br /&gt;
 &lt;br /&gt;
 #mount subvolume, default gets an annoyingly long path&lt;br /&gt;
 mount -t ceph -o name=admin 192.168.2.199:/volumes/_nogroup/subfs/60918416-6df6-4b4a-a071-ffd527fba26c/ /mnt/cephfs/&lt;br /&gt;
&lt;br /&gt;
 #fstab&lt;br /&gt;
 192.168.2.199:/volumes/_nogroup/subfs/60918416-6df6-4b4a-a071-ffd527fba26c/  /mnt/cephfs ceph  name=admin,_netdev 0 0&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Delayed mount =&lt;br /&gt;
 #cat mnt-cephfs.timer&lt;br /&gt;
 [Unit]&lt;br /&gt;
 Description=delayed mount of cephfs&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 [Timer]&lt;br /&gt;
 OnBootSec=1min&lt;br /&gt;
 Unit=mnt-cephfs.service&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 [Install]&lt;br /&gt;
 WantedBy=timers.target&lt;br /&gt;
&lt;br /&gt;
 # cat mnt-cephfs.service&lt;br /&gt;
 [Unit]&lt;br /&gt;
 Description=Mount CephFS&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 [Service]&lt;br /&gt;
 Type=oneshot&lt;br /&gt;
 ExecStart=mount /mnt/cephfs&lt;br /&gt;
 User=root&lt;br /&gt;
 Group=root&lt;br /&gt;
&lt;br /&gt;
 #systemctl enable mnt-cephfs.timer&lt;br /&gt;
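The two unit files above are plain INI, so they can be checked with Python's configparser; a quick illustrative consistency check (not part of the wiki page):

```python
import configparser

# the timer and service units from above, inlined for the check
TIMER = """
[Unit]
Description=Delayed mount of CephFS

[Timer]
OnBootSec=1min
Unit=mnt-cephfs.service

[Install]
WantedBy=timers.target
"""

SERVICE = """
[Unit]
Description=Mount CephFS

[Service]
Type=oneshot
ExecStart=mount /mnt/cephfs
User=root
Group=root
"""

timer = configparser.ConfigParser()
timer.read_string(TIMER)
service = configparser.ConfigParser()
service.read_string(SERVICE)

# the timer must point at the service that performs the actual mount
assert timer["Timer"]["Unit"] == "mnt-cephfs.service"
assert service["Service"]["Type"] == "oneshot"
print("units parse OK")
```

The timer-based indirection exists because a plain `_netdev` fstab entry can still race the Ceph daemons on a single host; delaying the mount by a minute after boot sidesteps that.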
&lt;br /&gt;
&lt;br /&gt;
=What&#039;s still missing?=&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Clean shutdown / reboot?&lt;br /&gt;
&lt;br /&gt;
Ceph logs?&lt;br /&gt;
&lt;br /&gt;
Scrubbing?&lt;br /&gt;
&lt;br /&gt;
Monitoring / Prometheus?&lt;br /&gt;
&lt;br /&gt;
Failed disk, new disk.&lt;br /&gt;
&lt;br /&gt;
REST API&lt;br /&gt;
&lt;br /&gt;
=Sources n crap=&lt;br /&gt;
https://docs.ceph.com/en/latest/cephadm/install/&lt;br /&gt;
&lt;br /&gt;
https://medium.com/@balderscape/setting-up-a-virtual-single-node-ceph-storage-cluster-d86d6a6c658e&lt;br /&gt;
&lt;br /&gt;
https://linoxide.com/linux-how-to/hwto-configure-single-node-ceph-cluster/&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Zap disk for re-use==&lt;br /&gt;
 ceph-volume lvm zap /dev/sdX&lt;br /&gt;
or&lt;br /&gt;
 dd if=/dev/zero of=/dev/vdc bs=1M count=10&lt;/div&gt;</summary>
		<author><name>Torben</name></author>
	</entry>
	<entry>
		<id>https://wiki.t-hoerup.dk/index.php?title=Single_Host_Ceph_Server&amp;diff=12181</id>
		<title>Single Host Ceph Server</title>
		<link rel="alternate" type="text/html" href="https://wiki.t-hoerup.dk/index.php?title=Single_Host_Ceph_Server&amp;diff=12181"/>
		<updated>2020-11-10T19:41:00Z</updated>

		<summary type="html">&lt;p&gt;Torben: /* CephFS - Filesystem */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Clean CentOS 8&lt;br /&gt;
&lt;br /&gt;
=Basic stuff and cephadm=&lt;br /&gt;
 yum install -y python3 podman chrony lvm2 wget&lt;br /&gt;
 wget -O /root/cephadm https://github.com/ceph/ceph/raw/octopus/src/cephadm/cephadm&lt;br /&gt;
 chmod +x /root/cephadm&lt;br /&gt;
&lt;br /&gt;
 mkdir -p /etc/ceph&lt;br /&gt;
&lt;br /&gt;
 ./cephadm add-repo --release octopus&lt;br /&gt;
 ./cephadm install&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Bootstrap monitor on its own IP=&lt;br /&gt;
 cephadm bootstrap --mon-ip 192.168.2.206&lt;br /&gt;
&lt;br /&gt;
=Install ceph=&lt;br /&gt;
 cephadm add-repo --release octopus&lt;br /&gt;
 cephadm install ceph-common&lt;br /&gt;
 cephadm install ceph&lt;br /&gt;
&lt;br /&gt;
=Create OSDs from all disks (get the exact command from Hoerup)=&lt;br /&gt;
 ceph orch apply osd --all-available-devices&lt;br /&gt;
 ceph status&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Create a new rule that uses OSD as the failure domain (instead of 3 hosts)=&lt;br /&gt;
 ceph osd crush rule create-replicated repl1 default osd&lt;br /&gt;
 ceph osd pool ls&lt;br /&gt;
 ceph osd pool set device_health_metrics crush_rule repl1&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Block Device=&lt;br /&gt;
==EC stuff, here with 4+1==&lt;br /&gt;
 ceph osd pool create rbdmeta replicated repl1&lt;br /&gt;
 ceph osd erasure-code-profile get default&lt;br /&gt;
 ceph osd erasure-code-profile set ec41 k=4 m=1 crush-failure-domain=osd&lt;br /&gt;
 ceph osd pool create rbddata erasure ec41&lt;br /&gt;
&lt;br /&gt;
==Hint that this pool is used for block storage==&lt;br /&gt;
 ceph osd pool application enable rbddata rbd&lt;br /&gt;
 ceph osd pool application enable rbdmeta rbd&lt;br /&gt;
&lt;br /&gt;
==Allow EC block overwrites==&lt;br /&gt;
 ceph osd pool set rbddata allow_ec_overwrites true&lt;br /&gt;
&lt;br /&gt;
 rbd create --size 40G --data-pool rbddata rbdmeta/ectestimage1&lt;br /&gt;
 rbd ls rbdmeta&lt;br /&gt;
&lt;br /&gt;
==Map an rbd image as a block device==&lt;br /&gt;
 rbd map rbdmeta/ectestimage1&lt;br /&gt;
&lt;br /&gt;
==Add to &#039;&#039;&#039;/etc/ceph/rbdmap&#039;&#039;&#039;==&lt;br /&gt;
 rbdmeta/ectestimage1    id=admin,keyring=/etc/ceph/ceph.client.admin.keyring&lt;br /&gt;
&lt;br /&gt;
 systemctl enable rbdmap.service&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Mount the filesystem==&lt;br /&gt;
 mkfs.xfs /dev/rbd0&lt;br /&gt;
 mkdir /storage&lt;br /&gt;
 mount -t xfs /dev/rbd0 /storage/&lt;br /&gt;
 df -h /storage/&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;&#039;/etc/fstab&#039;&#039;&#039;==&lt;br /&gt;
 /dev/rbd0       /storage/       xfs     defaults,discard,_netdev        0       0&lt;br /&gt;
&lt;br /&gt;
=CephFS - Filesystem=&lt;br /&gt;
 # We name the filesystem myfs&lt;br /&gt;
 &lt;br /&gt;
 # set up the metadata server&lt;br /&gt;
 ceph orch apply mds myfs&lt;br /&gt;
 &lt;br /&gt;
 # create the volume&lt;br /&gt;
 ceph fs volume create myfs&lt;br /&gt;
 &lt;br /&gt;
 # the metadata AND data pool for the root fs must be replicated, but we set the crush rule to allow all replicas on the same host&lt;br /&gt;
 ceph osd pool set cephfs.myfs.meta crush_rule repl1&lt;br /&gt;
 ceph osd pool set cephfs.myfs.data crush_rule repl1&lt;br /&gt;
 &lt;br /&gt;
 # set the intended use on the pools&lt;br /&gt;
 ceph osd pool  application enable  cephfs.myfs.data cephfs&lt;br /&gt;
 ceph osd pool  application enable  cephfs.myfs.meta cephfs&lt;br /&gt;
 &lt;br /&gt;
 # use the admin keyring&lt;br /&gt;
 mount -o name=admin -t ceph 192.168.2.199:/ /mnt/cephfs/&lt;br /&gt;
 &lt;br /&gt;
 ceph osd pool create cephfs-ec erasure ec41&lt;br /&gt;
 ceph osd pool  application enable  cephfs-ec cephfs&lt;br /&gt;
 ceph osd pool set cephfs-ec allow_ec_overwrites true&lt;br /&gt;
  &lt;br /&gt;
&lt;br /&gt;
 ceph fs add_data_pool myfs cephfs-ec&lt;br /&gt;
&lt;br /&gt;
 #create subvolume, utilizing cephfs-ec as backing pool&lt;br /&gt;
 ceph fs subvolume create myfs subfs --pool_layout cephfs-ec&lt;br /&gt;
 &lt;br /&gt;
 #mount subvolume, default gets an annoyingly long path&lt;br /&gt;
 mount -t ceph -o name=admin 192.168.2.199:/volumes/_nogroup/subfs/60918416-6df6-4b4a-a071-ffd527fba26c/ /mnt/cephfs/&lt;br /&gt;
&lt;br /&gt;
 #fstab&lt;br /&gt;
 192.168.2.199:/volumes/_nogroup/subfs/60918416-6df6-4b4a-a071-ffd527fba26c/  /mnt/cephfs ceph  name=admin,_netdev 0 0&lt;br /&gt;
&lt;br /&gt;
=What&#039;s still missing?=&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Clean shutdown / reboot?&lt;br /&gt;
&lt;br /&gt;
Ceph logs?&lt;br /&gt;
&lt;br /&gt;
Scrubbing?&lt;br /&gt;
&lt;br /&gt;
Monitoring / Prometheus?&lt;br /&gt;
&lt;br /&gt;
Failed disk, new disk.&lt;br /&gt;
&lt;br /&gt;
REST API&lt;br /&gt;
&lt;br /&gt;
=Sources n crap=&lt;br /&gt;
https://docs.ceph.com/en/latest/cephadm/install/&lt;br /&gt;
&lt;br /&gt;
https://medium.com/@balderscape/setting-up-a-virtual-single-node-ceph-storage-cluster-d86d6a6c658e&lt;br /&gt;
&lt;br /&gt;
https://linoxide.com/linux-how-to/hwto-configure-single-node-ceph-cluster/&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Zap disk for re-use==&lt;br /&gt;
 ceph-volume lvm zap /dev/sdX&lt;br /&gt;
or&lt;br /&gt;
 dd if=/dev/zero of=/dev/vdc bs=1M count=10&lt;/div&gt;</summary>
		<author><name>Torben</name></author>
	</entry>
	<entry>
		<id>https://wiki.t-hoerup.dk/index.php?title=Single_Host_Ceph_Server&amp;diff=12180</id>
		<title>Single Host Ceph Server</title>
		<link rel="alternate" type="text/html" href="https://wiki.t-hoerup.dk/index.php?title=Single_Host_Ceph_Server&amp;diff=12180"/>
		<updated>2020-11-10T19:32:03Z</updated>

		<summary type="html">&lt;p&gt;Torben: /* CephFS - Filesystem */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Clean CentOS 8&lt;br /&gt;
&lt;br /&gt;
=Basic stuff and cephadm=&lt;br /&gt;
 yum install -y python3 podman chrony lvm2 wget&lt;br /&gt;
 wget -O /root/cephadm https://github.com/ceph/ceph/raw/octopus/src/cephadm/cephadm&lt;br /&gt;
 chmod +x /root/cephadm&lt;br /&gt;
&lt;br /&gt;
 mkdir -p /etc/ceph&lt;br /&gt;
&lt;br /&gt;
 ./cephadm add-repo --release octopus&lt;br /&gt;
 ./cephadm install&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Bootstrap monitor on its own IP=&lt;br /&gt;
 cephadm bootstrap --mon-ip 192.168.2.206&lt;br /&gt;
&lt;br /&gt;
=Install ceph=&lt;br /&gt;
 cephadm add-repo --release octopus&lt;br /&gt;
 cephadm install ceph-common&lt;br /&gt;
 cephadm install ceph&lt;br /&gt;
&lt;br /&gt;
=Create OSDs from all disks (get the exact command from Hoerup)=&lt;br /&gt;
 ceph orch apply osd --all-available-devices&lt;br /&gt;
 ceph status&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Create a new rule that uses OSD as the failure domain (instead of 3 hosts)=&lt;br /&gt;
 ceph osd crush rule create-replicated repl1 default osd&lt;br /&gt;
 ceph osd pool ls&lt;br /&gt;
 ceph osd pool set device_health_metrics crush_rule repl1&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Block Device=&lt;br /&gt;
==EC stuff, here with 4+1==&lt;br /&gt;
 ceph osd pool create rbdmeta replicated repl1&lt;br /&gt;
 ceph osd erasure-code-profile get default&lt;br /&gt;
 ceph osd erasure-code-profile set ec41 k=4 m=1 crush-failure-domain=osd&lt;br /&gt;
 ceph osd pool create rbddata erasure ec41&lt;br /&gt;
&lt;br /&gt;
==Hint that this pool is used for block storage==&lt;br /&gt;
 ceph osd pool application enable rbddata rbd&lt;br /&gt;
 ceph osd pool application enable rbdmeta rbd&lt;br /&gt;
&lt;br /&gt;
==Allow EC block overwrites==&lt;br /&gt;
 ceph osd pool set rbddata allow_ec_overwrites true&lt;br /&gt;
&lt;br /&gt;
 rbd create --size 40G --data-pool rbddata rbdmeta/ectestimage1&lt;br /&gt;
 rbd ls rbdmeta&lt;br /&gt;
&lt;br /&gt;
==Map an rbd image as a block device==&lt;br /&gt;
 rbd map rbdmeta/ectestimage1&lt;br /&gt;
&lt;br /&gt;
==Add to &#039;&#039;&#039;/etc/ceph/rbdmap&#039;&#039;&#039;==&lt;br /&gt;
 rbdmeta/ectestimage1    id=admin,keyring=/etc/ceph/ceph.client.admin.keyring&lt;br /&gt;
&lt;br /&gt;
 systemctl enable rbdmap.service&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Mount the filesystem==&lt;br /&gt;
 mkfs.xfs /dev/rbd0&lt;br /&gt;
 mkdir /storage&lt;br /&gt;
 mount -t xfs /dev/rbd0 /storage/&lt;br /&gt;
 df -h /storage/&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;&#039;/etc/fstab&#039;&#039;&#039;==&lt;br /&gt;
 /dev/rbd0       /storage/       xfs     defaults,discard,_netdev        0       0&lt;br /&gt;
&lt;br /&gt;
=CephFS - Filesystem=&lt;br /&gt;
 # We name the filesystem myfs&lt;br /&gt;
 &lt;br /&gt;
 # set up the metadata server&lt;br /&gt;
 ceph orch apply mds myfs&lt;br /&gt;
 &lt;br /&gt;
 # create the volume&lt;br /&gt;
 ceph fs volume create myfs&lt;br /&gt;
 &lt;br /&gt;
 # the metadata AND data pool for the root fs must be replicated, but we set the crush rule to allow all replicas on the same host&lt;br /&gt;
 ceph osd pool set cephfs.myfs.meta crush_rule repl1&lt;br /&gt;
 ceph osd pool set cephfs.myfs.data crush_rule repl1&lt;br /&gt;
 &lt;br /&gt;
 # set the intended use on the pools&lt;br /&gt;
 ceph osd pool  application enable  cephfs.myfs.data cephfs&lt;br /&gt;
 ceph osd pool  application enable  cephfs.myfs.meta cephfs&lt;br /&gt;
 &lt;br /&gt;
 # use the admin keyring&lt;br /&gt;
 mount -o name=admin -t ceph 192.168.2.199:/ /mnt/cephfs/&lt;br /&gt;
 &lt;br /&gt;
 ceph osd pool create cephfs-ec erasure ec41&lt;br /&gt;
 ceph osd pool  application enable  cephfs-ec cephfs&lt;br /&gt;
 ceph osd pool set cephfs-ec allow_ec_overwrites true&lt;br /&gt;
  &lt;br /&gt;
&lt;br /&gt;
 ceph fs add_data_pool myfs cephfs-ec&lt;br /&gt;
&lt;br /&gt;
 #create subvolume, utilizing cephfs-ec as backing pool&lt;br /&gt;
 ceph fs subvolume create myfs subfs --pool_layout cephfs-ec&lt;br /&gt;
 &lt;br /&gt;
 #mount subvolume, default gets an annoyingly long path&lt;br /&gt;
 mount -t ceph -o name=admin 192.168.2.199:/volumes/_nogroup/subfs/60918416-6df6-4b4a-a071-ffd527fba26c/ /mnt/cephfs/&lt;br /&gt;
&lt;br /&gt;
=What&#039;s still missing?=&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Clean shutdown / reboot?&lt;br /&gt;
&lt;br /&gt;
Ceph logs?&lt;br /&gt;
&lt;br /&gt;
Scrubbing?&lt;br /&gt;
&lt;br /&gt;
Monitoring / Prometheus?&lt;br /&gt;
&lt;br /&gt;
Failed disk, new disk.&lt;br /&gt;
&lt;br /&gt;
REST API&lt;br /&gt;
&lt;br /&gt;
=Sources n crap=&lt;br /&gt;
https://docs.ceph.com/en/latest/cephadm/install/&lt;br /&gt;
&lt;br /&gt;
https://medium.com/@balderscape/setting-up-a-virtual-single-node-ceph-storage-cluster-d86d6a6c658e&lt;br /&gt;
&lt;br /&gt;
https://linoxide.com/linux-how-to/hwto-configure-single-node-ceph-cluster/&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Zap disk for re-use==&lt;br /&gt;
 ceph-volume lvm zap /dev/sdX&lt;br /&gt;
or&lt;br /&gt;
 dd if=/dev/zero of=/dev/vdc bs=1M count=10&lt;/div&gt;</summary>
		<author><name>Torben</name></author>
	</entry>
	<entry>
		<id>https://wiki.t-hoerup.dk/index.php?title=Single_Host_Ceph_Server&amp;diff=12179</id>
		<title>Single Host Ceph Server</title>
		<link rel="alternate" type="text/html" href="https://wiki.t-hoerup.dk/index.php?title=Single_Host_Ceph_Server&amp;diff=12179"/>
		<updated>2020-11-10T19:24:28Z</updated>

		<summary type="html">&lt;p&gt;Torben: /* CephFS - Filesystem */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Clean CentOS 8&lt;br /&gt;
&lt;br /&gt;
=Basic stuff and cephadm=&lt;br /&gt;
 yum install -y python3 podman chrony lvm2 wget&lt;br /&gt;
 wget -O /root/cephadm https://github.com/ceph/ceph/raw/octopus/src/cephadm/cephadm&lt;br /&gt;
 chmod +x /root/cephadm&lt;br /&gt;
&lt;br /&gt;
 mkdir -p /etc/ceph&lt;br /&gt;
&lt;br /&gt;
 ./cephadm add-repo --release octopus&lt;br /&gt;
 ./cephadm install&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Bootstrap monitor on its own IP=&lt;br /&gt;
 cephadm bootstrap --mon-ip 192.168.2.206&lt;br /&gt;
&lt;br /&gt;
=Install ceph=&lt;br /&gt;
 cephadm add-repo --release octopus&lt;br /&gt;
 cephadm install ceph-common&lt;br /&gt;
 cephadm install ceph&lt;br /&gt;
&lt;br /&gt;
=Create OSDs from all disks (get the exact command from Hoerup)=&lt;br /&gt;
 ceph orch apply osd --all-available-devices&lt;br /&gt;
 ceph status&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Create a new rule that uses OSD as the failure domain (instead of 3 hosts)=&lt;br /&gt;
 ceph osd crush rule create-replicated repl1 default osd&lt;br /&gt;
 ceph osd pool ls&lt;br /&gt;
 ceph osd pool set device_health_metrics crush_rule repl1&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Block Device=&lt;br /&gt;
==EC stuff, here with 4+1==&lt;br /&gt;
 ceph osd pool create rbdmeta replicated repl1&lt;br /&gt;
 ceph osd erasure-code-profile get default&lt;br /&gt;
 ceph osd erasure-code-profile set ec41 k=4 m=1 crush-failure-domain=osd&lt;br /&gt;
 ceph osd pool create rbddata erasure ec41&lt;br /&gt;
&lt;br /&gt;
==Hint that this pool is used for block storage==&lt;br /&gt;
 ceph osd pool application enable rbddata rbd&lt;br /&gt;
 ceph osd pool application enable rbdmeta rbd&lt;br /&gt;
&lt;br /&gt;
==Allow EC block overwrites==&lt;br /&gt;
 ceph osd pool set rbddata allow_ec_overwrites true&lt;br /&gt;
&lt;br /&gt;
 rbd create --size 40G --data-pool rbddata rbdmeta/ectestimage1&lt;br /&gt;
 rbd ls rbdmeta&lt;br /&gt;
&lt;br /&gt;
==Map an rbd image as a block device==&lt;br /&gt;
 rbd map rbdmeta/ectestimage1&lt;br /&gt;
&lt;br /&gt;
==Add to &#039;&#039;&#039;/etc/ceph/rbdmap&#039;&#039;&#039;==&lt;br /&gt;
 rbdmeta/ectestimage1    id=admin,keyring=/etc/ceph/ceph.client.admin.keyring&lt;br /&gt;
&lt;br /&gt;
 systemctl enable rbdmap.service&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Mount the filesystem==&lt;br /&gt;
 mkfs.xfs /dev/rbd0&lt;br /&gt;
 mkdir /storage&lt;br /&gt;
 mount -t xfs /dev/rbd0 /storage/&lt;br /&gt;
 df -h /storage/&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;&#039;/etc/fstab&#039;&#039;&#039;==&lt;br /&gt;
 /dev/rbd0       /storage/       xfs     defaults,discard,_netdev        0       0&lt;br /&gt;
&lt;br /&gt;
=CephFS - Filesystem=&lt;br /&gt;
 # We name the filesystem myfs&lt;br /&gt;
 &lt;br /&gt;
 # set up the metadata server&lt;br /&gt;
 ceph orch apply mds myfs&lt;br /&gt;
 &lt;br /&gt;
 # create the volume&lt;br /&gt;
 ceph fs volume create myfs&lt;br /&gt;
 &lt;br /&gt;
 # the metadata AND data pool for the root fs must be replicated, but we set the crush rule to allow all replicas on the same host&lt;br /&gt;
 ceph osd pool set cephfs.myfs.meta crush_rule repl1&lt;br /&gt;
 ceph osd pool set cephfs.myfs.data crush_rule repl1&lt;br /&gt;
 &lt;br /&gt;
 # set the intended use on the pools&lt;br /&gt;
 ceph osd pool  application enable  cephfs.myfs.data cephfs&lt;br /&gt;
 ceph osd pool  application enable  cephfs.myfs.meta cephfs&lt;br /&gt;
 &lt;br /&gt;
 # use the admin keyring&lt;br /&gt;
 mount -o name=admin -t ceph 192.168.2.199:/ /mnt/cephfs/&lt;br /&gt;
 &lt;br /&gt;
 ceph osd pool create cephfs-ec erasure ec41&lt;br /&gt;
 ceph osd pool  application enable  cephfs-ec cephfs&lt;br /&gt;
 ceph osd pool set cephfs-ec allow_ec_overwrites true&lt;br /&gt;
 &lt;br /&gt;
 ceph fs add_data_pool myfs cephfs-ec&lt;br /&gt;
&lt;br /&gt;
 ceph fs subvolume create myfs subfs --pool_layout cephfs-ec&lt;br /&gt;
&lt;br /&gt;
=What&#039;s still missing?=&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Clean shutdown / reboot?&lt;br /&gt;
&lt;br /&gt;
Ceph logs?&lt;br /&gt;
&lt;br /&gt;
Scrubbing?&lt;br /&gt;
&lt;br /&gt;
Monitoring / Prometheus?&lt;br /&gt;
&lt;br /&gt;
Failed disk, new disk.&lt;br /&gt;
&lt;br /&gt;
REST API&lt;br /&gt;
&lt;br /&gt;
=Sources n crap=&lt;br /&gt;
https://docs.ceph.com/en/latest/cephadm/install/&lt;br /&gt;
&lt;br /&gt;
https://medium.com/@balderscape/setting-up-a-virtual-single-node-ceph-storage-cluster-d86d6a6c658e&lt;br /&gt;
&lt;br /&gt;
https://linoxide.com/linux-how-to/hwto-configure-single-node-ceph-cluster/&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Zap disk for re-use==&lt;br /&gt;
 ceph-volume lvm zap /dev/sdX&lt;br /&gt;
or&lt;br /&gt;
 dd if=/dev/zero of=/dev/vdc bs=1M count=10&lt;/div&gt;</summary>
		<author><name>Torben</name></author>
	</entry>
	<entry>
		<id>https://wiki.t-hoerup.dk/index.php?title=Single_Host_Ceph_Server&amp;diff=12178</id>
		<title>Single Host Ceph Server</title>
		<link rel="alternate" type="text/html" href="https://wiki.t-hoerup.dk/index.php?title=Single_Host_Ceph_Server&amp;diff=12178"/>
		<updated>2020-11-10T18:59:52Z</updated>

		<summary type="html">&lt;p&gt;Torben: /* CephFS - Filesystem */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Clean CentOS 8&lt;br /&gt;
&lt;br /&gt;
=Basic stuff and cephadm=&lt;br /&gt;
 yum install -y python3 podman chrony lvm2 wget&lt;br /&gt;
 wget -O /root/cephadm https://github.com/ceph/ceph/raw/octopus/src/cephadm/cephadm&lt;br /&gt;
 chmod +x /root/cephadm&lt;br /&gt;
&lt;br /&gt;
 mkdir -p /etc/ceph&lt;br /&gt;
&lt;br /&gt;
 ./cephadm add-repo --release octopus&lt;br /&gt;
 ./cephadm install&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Bootstrap monitor on its own IP=&lt;br /&gt;
 cephadm bootstrap --mon-ip 192.168.2.206&lt;br /&gt;
&lt;br /&gt;
=Install ceph=&lt;br /&gt;
 cephadm add-repo --release octopus&lt;br /&gt;
 cephadm install ceph-common&lt;br /&gt;
 cephadm install ceph&lt;br /&gt;
&lt;br /&gt;
=Create OSDs from all disks (get the exact command from Hoerup)=&lt;br /&gt;
 ceph orch apply osd --all-available-devices&lt;br /&gt;
 ceph status&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Create a new rule that uses OSD as the failure domain (instead of 3 hosts)=&lt;br /&gt;
 ceph osd crush rule create-replicated repl1 default osd&lt;br /&gt;
 ceph osd pool ls&lt;br /&gt;
 ceph osd pool set device_health_metrics crush_rule repl1&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Block Device=&lt;br /&gt;
==EC stuff, here with 4+1==&lt;br /&gt;
 ceph osd pool create rbdmeta replicated repl1&lt;br /&gt;
 ceph osd erasure-code-profile get default&lt;br /&gt;
 ceph osd erasure-code-profile set ec41 k=4 m=1 crush-failure-domain=osd&lt;br /&gt;
 ceph osd pool create rbddata erasure ec41&lt;br /&gt;
&lt;br /&gt;
==Hint that this pool is used for block storage==&lt;br /&gt;
 ceph osd pool application enable rbddata rbd&lt;br /&gt;
 ceph osd pool application enable rbdmeta rbd&lt;br /&gt;
&lt;br /&gt;
==Allow EC block overwrites==&lt;br /&gt;
 ceph osd pool set rbddata allow_ec_overwrites true&lt;br /&gt;
&lt;br /&gt;
 rbd create --size 40G --data-pool rbddata rbdmeta/ectestimage1&lt;br /&gt;
 rbd ls rbdmeta&lt;br /&gt;
&lt;br /&gt;
==Map an rbd image as a block device==&lt;br /&gt;
 rbd map rbdmeta/ectestimage1&lt;br /&gt;
&lt;br /&gt;
==Add to &#039;&#039;&#039;/etc/ceph/rbdmap&#039;&#039;&#039;==&lt;br /&gt;
 rbdmeta/ectestimage1    id=admin,keyring=/etc/ceph/ceph.client.admin.keyring&lt;br /&gt;
&lt;br /&gt;
 systemctl enable rbdmap.service&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Mount the filesystem==&lt;br /&gt;
 mkfs.xfs /dev/rbd0&lt;br /&gt;
 mkdir /storage&lt;br /&gt;
 mount -t xfs /dev/rbd0 /storage/&lt;br /&gt;
 df -h /storage/&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;&#039;/etc/fstab&#039;&#039;&#039;==&lt;br /&gt;
 /dev/rbd0       /storage/       xfs     defaults,discard,_netdev        0       0&lt;br /&gt;
&lt;br /&gt;
=CephFS - Filesystem=&lt;br /&gt;
 # We name the filesystem myfs&lt;br /&gt;
 &lt;br /&gt;
 # set up the metadata server&lt;br /&gt;
 ceph orch apply mds myfs&lt;br /&gt;
 &lt;br /&gt;
 # create the volume&lt;br /&gt;
 ceph fs volume create myfs&lt;br /&gt;
 &lt;br /&gt;
 # the metadata AND data pool for the root fs must be replicated, but we set the crush rule to allow all replicas on the same host&lt;br /&gt;
 ceph osd pool set cephfs.myfs.meta crush_rule repl1&lt;br /&gt;
 ceph osd pool set cephfs.myfs.data crush_rule repl1&lt;br /&gt;
 &lt;br /&gt;
 # set the intended use on the pools&lt;br /&gt;
 ceph osd pool  application enable  cephfs.myfs.data cephfs&lt;br /&gt;
 ceph osd pool  application enable  cephfs.myfs.meta cephfs&lt;br /&gt;
 &lt;br /&gt;
 # use the admin keyring&lt;br /&gt;
 mount -o name=admin -t ceph 192.168.2.199:/ /mnt/cephfs/&lt;br /&gt;
 &lt;br /&gt;
 ceph osd pool create cephfs-ec erasure ec41&lt;br /&gt;
 ceph osd pool  application enable  cephfs-ec cephfs&lt;br /&gt;
 ceph osd pool set cephfs-ec allow_ec_overwrites true&lt;br /&gt;
 &lt;br /&gt;
 ceph fs add_data_pool myfs cephfs-ec&lt;br /&gt;
&lt;br /&gt;
=What&#039;s still missing?=&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Clean shutdown / reboot?&lt;br /&gt;
&lt;br /&gt;
Ceph logs?&lt;br /&gt;
&lt;br /&gt;
Scrubbing?&lt;br /&gt;
&lt;br /&gt;
Monitoring / Prometheus?&lt;br /&gt;
&lt;br /&gt;
Failed disk, new disk.&lt;br /&gt;
&lt;br /&gt;
REST API&lt;br /&gt;
&lt;br /&gt;
=Sources n crap=&lt;br /&gt;
https://docs.ceph.com/en/latest/cephadm/install/&lt;br /&gt;
&lt;br /&gt;
https://medium.com/@balderscape/setting-up-a-virtual-single-node-ceph-storage-cluster-d86d6a6c658e&lt;br /&gt;
&lt;br /&gt;
https://linoxide.com/linux-how-to/hwto-configure-single-node-ceph-cluster/&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Zap disk for re-use==&lt;br /&gt;
 ceph-volume lvm zap /dev/sdX&lt;br /&gt;
or&lt;br /&gt;
 dd if=/dev/zero of=/dev/vdc bs=1M count=10&lt;/div&gt;</summary>
		<author><name>Torben</name></author>
	</entry>
	<entry>
		<id>https://wiki.t-hoerup.dk/index.php?title=Single_Host_Ceph_Server&amp;diff=12177</id>
		<title>Single Host Ceph Server</title>
		<link rel="alternate" type="text/html" href="https://wiki.t-hoerup.dk/index.php?title=Single_Host_Ceph_Server&amp;diff=12177"/>
		<updated>2020-11-10T18:56:56Z</updated>

		<summary type="html">&lt;p&gt;Torben: /* CephFS - Filesystem */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Clean CentOS 8&lt;br /&gt;
&lt;br /&gt;
=Basic stuff and cephadm=&lt;br /&gt;
 yum install -y python3 podman chrony lvm2 wget&lt;br /&gt;
 wget -O /root/cephadm https://github.com/ceph/ceph/raw/octopus/src/cephadm/cephadm&lt;br /&gt;
 chmod +x /root/cephadm&lt;br /&gt;
&lt;br /&gt;
 mkdir -p /etc/ceph&lt;br /&gt;
&lt;br /&gt;
 ./cephadm add-repo --release octopus&lt;br /&gt;
 ./cephadm install&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Bootstrap monitor on its own IP=&lt;br /&gt;
 cephadm bootstrap --mon-ip 192.168.2.206&lt;br /&gt;
&lt;br /&gt;
=Install ceph=&lt;br /&gt;
 cephadm add-repo --release octopus&lt;br /&gt;
 cephadm install ceph-common&lt;br /&gt;
 cephadm install ceph&lt;br /&gt;
&lt;br /&gt;
=Create OSDs from all disks (get the exact command from Hoerup)=&lt;br /&gt;
 ceph orch apply osd --all-available-devices&lt;br /&gt;
 ceph status&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Create a new rule that uses OSD as the failure domain (instead of 3 hosts)=&lt;br /&gt;
 ceph osd crush rule create-replicated repl1 default osd&lt;br /&gt;
 ceph osd pool ls&lt;br /&gt;
 ceph osd pool set device_health_metrics crush_rule repl1&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Block Device=&lt;br /&gt;
==EC stuff, here with 4+1==&lt;br /&gt;
 ceph osd pool create rbdmeta replicated repl1&lt;br /&gt;
 ceph osd erasure-code-profile get default&lt;br /&gt;
 ceph osd erasure-code-profile set ec41 k=4 m=1 crush-failure-domain=osd&lt;br /&gt;
 ceph osd pool create rbddata erasure ec41&lt;br /&gt;
&lt;br /&gt;
==Hint that this pool is used for block storage==&lt;br /&gt;
 ceph osd pool application enable rbddata rbd&lt;br /&gt;
 ceph osd pool application enable rbdmeta rbd&lt;br /&gt;
&lt;br /&gt;
==Allow EC block overwrites==&lt;br /&gt;
 ceph osd pool set rbddata allow_ec_overwrites true&lt;br /&gt;
&lt;br /&gt;
 rbd create --size 40G --data-pool rbddata rbdmeta/ectestimage1&lt;br /&gt;
 rbd ls rbdmeta&lt;br /&gt;
&lt;br /&gt;
==Map an rbd image as a block device==&lt;br /&gt;
 rbd map rbdmeta/ectestimage1&lt;br /&gt;
&lt;br /&gt;
==Add to &#039;&#039;&#039;/etc/ceph/rbdmap&#039;&#039;&#039;==&lt;br /&gt;
 rbdmeta/ectestimage1    id=admin,keyring=/etc/ceph/ceph.client.admin.keyring&lt;br /&gt;
&lt;br /&gt;
 systemctl enable rbdmap.service&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Mount the filesystem==&lt;br /&gt;
 mkfs.xfs /dev/rbd0&lt;br /&gt;
 mkdir /storage&lt;br /&gt;
 mount -t xfs /dev/rbd0 /storage/&lt;br /&gt;
 df -h /storage/&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;&#039;/etc/fstab&#039;&#039;&#039;==&lt;br /&gt;
 /dev/rbd0       /storage/       xfs     defaults,discard,_netdev        0       0&lt;br /&gt;
&lt;br /&gt;
=CephFS - Filesystem=&lt;br /&gt;
 # We name the filesystem myfs&lt;br /&gt;
 &lt;br /&gt;
 # set up the metadata server&lt;br /&gt;
 ceph orch apply mds myfs&lt;br /&gt;
 &lt;br /&gt;
 # create the volume&lt;br /&gt;
 ceph fs volume create myfs&lt;br /&gt;
 &lt;br /&gt;
 # the metadata AND data pool for the root fs must be replicated, but we set the crush rule to allow all replicas on the same host&lt;br /&gt;
 ceph osd pool set cephfs.myfs.meta crush_rule repl1&lt;br /&gt;
 ceph osd pool set cephfs.myfs.data crush_rule repl1&lt;br /&gt;
 &lt;br /&gt;
 # set the intended use on the pools&lt;br /&gt;
 ceph osd pool  application enable  cephfs.myfs.data cephfs&lt;br /&gt;
 ceph osd pool  application enable  cephfs.myfs.meta cephfs&lt;br /&gt;
 &lt;br /&gt;
 # use the admin keyring&lt;br /&gt;
 mount -o name=admin -t ceph 192.168.2.199:/ /mnt/cephfs/&lt;br /&gt;
 &lt;br /&gt;
 ceph osd pool create cephfs-ec erasure ec41&lt;br /&gt;
 ceph osd pool  application enable  cephfs-ec cephfs&lt;br /&gt;
 ceph osd pool set cephfs-ec allow_ec_overwrites true&lt;br /&gt;
&lt;br /&gt;
=What&#039;s still missing?=&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Clean shutdown / reboot?&lt;br /&gt;
&lt;br /&gt;
Ceph logs?&lt;br /&gt;
&lt;br /&gt;
Scrubbing?&lt;br /&gt;
&lt;br /&gt;
Monitoring / Prometheus?&lt;br /&gt;
&lt;br /&gt;
Failed disk, new disk.&lt;br /&gt;
&lt;br /&gt;
REST API&lt;br /&gt;
&lt;br /&gt;
=Sources n crap=&lt;br /&gt;
https://docs.ceph.com/en/latest/cephadm/install/&lt;br /&gt;
&lt;br /&gt;
https://medium.com/@balderscape/setting-up-a-virtual-single-node-ceph-storage-cluster-d86d6a6c658e&lt;br /&gt;
&lt;br /&gt;
https://linoxide.com/linux-how-to/hwto-configure-single-node-ceph-cluster/&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Zap disk for re-use==&lt;br /&gt;
 ceph-volume lvm zap /dev/sdX&lt;br /&gt;
or&lt;br /&gt;
 dd if=/dev/zero of=/dev/vdc bs=1M count=10&lt;/div&gt;</summary>
		<author><name>Torben</name></author>
	</entry>
	<entry>
		<id>https://wiki.t-hoerup.dk/index.php?title=Single_Host_Ceph_Server&amp;diff=12176</id>
		<title>Single Host Ceph Server</title>
		<link rel="alternate" type="text/html" href="https://wiki.t-hoerup.dk/index.php?title=Single_Host_Ceph_Server&amp;diff=12176"/>
		<updated>2020-11-10T18:51:24Z</updated>

		<summary type="html">&lt;p&gt;Torben: /* CephFS - Filesystem */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Clean Centos 8&lt;br /&gt;
&lt;br /&gt;
=Basic Stuff og cephadm=&lt;br /&gt;
 yum install -y python3 podman chrony lvm2 wget &lt;br /&gt;
 wget -O /root/cephadm https://github.com/ceph/ceph/raw/octopus/src/cephadm/cephadm&lt;br /&gt;
 chmod +x /root/cephadm&lt;br /&gt;
&lt;br /&gt;
 mkdir -p /etc/ceph&lt;br /&gt;
&lt;br /&gt;
 ./cephadm add-repo --release octopus&lt;br /&gt;
 ./cephadm install&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Boostrap monitor på egen ip=&lt;br /&gt;
 cephadm bootstrap --mon-ip 192.168.2.206   &lt;br /&gt;
&lt;br /&gt;
=Installer ceph=&lt;br /&gt;
 cephadm add-repo --release octopus&lt;br /&gt;
 cephadm install ceph-common&lt;br /&gt;
 cephadm install ceph &lt;br /&gt;
&lt;br /&gt;
=Create OSDs from all disks (get the exact command from Hoerup)=&lt;br /&gt;
 ceph orch apply osd --all-available-devices&lt;br /&gt;
 ceph status&lt;br /&gt;
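If you would rather not hand every available disk to Ceph, cephadm also accepts an OSD service specification instead of --all-available-devices. A minimal sketch (the service_id and device paths are placeholders, not from this setup; apply it with ceph orch apply osd -i osd_spec.yml):

```yaml
service_type: osd
service_id: default_drives
placement:
  host_pattern: '*'      # every managed host
data_devices:
  paths:                 # only consume these devices
    - /dev/vdb
    - /dev/vdc
```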
&lt;br /&gt;
&lt;br /&gt;
=Create a new rule that uses OSD as the failure domain (instead of 3 hosts)=&lt;br /&gt;
 ceph osd crush rule create-replicated repl1 default osd&lt;br /&gt;
 ceph osd pool ls&lt;br /&gt;
 ceph osd pool set device_health_metrics crush_rule repl1&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Block Device=&lt;br /&gt;
==EC stuff, here with 4+1==&lt;br /&gt;
 ceph osd pool create rbdmeta replicated repl1&lt;br /&gt;
 ceph osd erasure-code-profile get default&lt;br /&gt;
 ceph osd erasure-code-profile set ec41 k=4 m=1 crush-failure-domain=osd&lt;br /&gt;
 ceph osd pool create rbddata erasure ec41&lt;br /&gt;
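For context, the trade-off of the 4+1 profile can be computed directly: the data survives one OSD failure, the usable fraction of raw space is k/(k+m), and the raw-space overhead is (k+m)/k. A quick sketch:

```shell
# EC k=4, m=1: usable fraction and raw overhead (compare: size=3 replication is 3.00x)
k=4; m=1
awk -v k="$k" -v m="$m" 'BEGIN {
  printf "usable fraction: %.2f\n", k/(k+m)   # 0.80
  printf "raw overhead:    %.2fx\n", (k+m)/k  # 1.25x
}'
```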
&lt;br /&gt;
==Hint that this pool will be used for block storage==&lt;br /&gt;
 ceph osd pool application enable rbddata rbd&lt;br /&gt;
 ceph osd pool application enable rbdmeta rbd&lt;br /&gt;
&lt;br /&gt;
==Allow EC block overwrites==&lt;br /&gt;
 ceph osd pool set rbddata allow_ec_overwrites true&lt;br /&gt;
&lt;br /&gt;
 rbd create --size 40G --data-pool rbddata rbdmeta/ectestimage1&lt;br /&gt;
 rbd ls rbdmeta&lt;br /&gt;
&lt;br /&gt;
==Map an rbd image as a block device==&lt;br /&gt;
 rbd map rbdmeta/ectestimage1&lt;br /&gt;
&lt;br /&gt;
==Add to &#039;&#039;&#039;/etc/ceph/rbdmap&#039;&#039;&#039;==&lt;br /&gt;
 rbdmeta/ectestimage1    id=admin,keyring=/etc/ceph/ceph.client.admin.keyring&lt;br /&gt;
&lt;br /&gt;
 systemctl enable rbdmap.service&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Mount the filesystem==&lt;br /&gt;
 mkfs.xfs /dev/rbd0 &lt;br /&gt;
 mkdir /storage&lt;br /&gt;
 mount -t xfs /dev/rbd0 /storage/&lt;br /&gt;
 df -h /storage/&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;&#039;/etc/fstab&#039;&#039;&#039;==&lt;br /&gt;
 /dev/rbd0       /storage/       xfs     defaults,discard,_netdev        0       0&lt;br /&gt;
&lt;br /&gt;
=CephFS - Filesystem=&lt;br /&gt;
 # we call the filesystem myfs&lt;br /&gt;
 &lt;br /&gt;
 # set up the metadata server&lt;br /&gt;
 ceph orch apply mds myfs&lt;br /&gt;
 &lt;br /&gt;
 # create the volume&lt;br /&gt;
 ceph fs volume create myfs&lt;br /&gt;
 &lt;br /&gt;
 # both the metadata AND the data pool for the root fs must be replicated, but we set the crush rule to allow all replicas on the same host&lt;br /&gt;
 ceph osd pool set cephfs.myfs.meta crush_rule repl1&lt;br /&gt;
 ceph osd pool set cephfs.myfs.data crush_rule repl1&lt;br /&gt;
 &lt;br /&gt;
 # set the intended use of the pools&lt;br /&gt;
 ceph osd pool application enable cephfs.myfs.data cephfs&lt;br /&gt;
 ceph osd pool application enable cephfs.myfs.meta cephfs&lt;br /&gt;
 &lt;br /&gt;
 # use the admin keyring&lt;br /&gt;
 mount -o name=admin -t ceph 192.168.2.199:/ /mnt/cephfs/&lt;br /&gt;
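To make the CephFS mount survive a reboot, the same mount can be declared in &#039;&#039;&#039;/etc/fstab&#039;&#039;&#039; (a sketch using the mon address above; it assumes the admin keyring is readable from /etc/ceph, as with the manual mount):

```
192.168.2.199:/   /mnt/cephfs   ceph   name=admin,_netdev   0   0
```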
&lt;br /&gt;
=What are we still missing?=&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Clean shutdown / reboot?&lt;br /&gt;
&lt;br /&gt;
ceph logs?&lt;br /&gt;
&lt;br /&gt;
Scrubbing?&lt;br /&gt;
&lt;br /&gt;
Monitoring / Prometheus?&lt;br /&gt;
&lt;br /&gt;
Failed disk, new disk.&lt;br /&gt;
&lt;br /&gt;
REST API&lt;br /&gt;
&lt;br /&gt;
=Sources n crap=&lt;br /&gt;
https://docs.ceph.com/en/latest/cephadm/install/&lt;br /&gt;
&lt;br /&gt;
https://medium.com/@balderscape/setting-up-a-virtual-single-node-ceph-storage-cluster-d86d6a6c658e&lt;br /&gt;
&lt;br /&gt;
https://linoxide.com/linux-how-to/hwto-configure-single-node-ceph-cluster/&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Zap disk for re-use==&lt;br /&gt;
 ceph-volume lvm zap /dev/sdX&lt;br /&gt;
or&lt;br /&gt;
 dd if=/dev/zero of=/dev/vdc bs=1M count=10&lt;/div&gt;</summary>
		<author><name>Torben</name></author>
	</entry>
	<entry>
		<id>https://wiki.t-hoerup.dk/index.php?title=Single_Host_Ceph_Server&amp;diff=12175</id>
		<title>Single Host Ceph Server</title>
		<link rel="alternate" type="text/html" href="https://wiki.t-hoerup.dk/index.php?title=Single_Host_Ceph_Server&amp;diff=12175"/>
		<updated>2020-11-10T18:47:39Z</updated>

		<summary type="html">&lt;p&gt;Torben: /* CephFS - Filesystem */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Clean Centos 8&lt;br /&gt;
&lt;br /&gt;
=Basic Stuff og cephadm=&lt;br /&gt;
 yum install -y python3 podman chrony lvm2 wget &lt;br /&gt;
 wget -O /root/cephadm https://github.com/ceph/ceph/raw/octopus/src/cephadm/cephadm&lt;br /&gt;
 chmod +x /root/cephadm&lt;br /&gt;
&lt;br /&gt;
 mkdir -p /etc/ceph&lt;br /&gt;
&lt;br /&gt;
 ./cephadm add-repo --release octopus&lt;br /&gt;
 ./cephadm install&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Bootstrap the monitor on its own IP=&lt;br /&gt;
 cephadm bootstrap --mon-ip 192.168.2.206   &lt;br /&gt;
&lt;br /&gt;
=Installer ceph=&lt;br /&gt;
 cephadm add-repo --release octopus&lt;br /&gt;
 cephadm install ceph-common&lt;br /&gt;
 cephadm install ceph &lt;br /&gt;
&lt;br /&gt;
=Opret OSD&#039;er med alle diske (få lige specifik kommando fra Hoerup)=&lt;br /&gt;
 ceph orch apply osd --all-available-devices&lt;br /&gt;
 ceph status&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Lav ny regel der bruger failure domain på OSD (istedet for 3 hosts)=&lt;br /&gt;
 ceph osd crush rule create-replicated repl1 default osd&lt;br /&gt;
 ceph osd pool ls&lt;br /&gt;
 ceph osd pool set device_health_metrics crush_rule repl1&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Block Device=&lt;br /&gt;
==EC stuff, her med 4+1==&lt;br /&gt;
 ceph osd pool create rbdmeta replicated repl1&lt;br /&gt;
 ceph osd erasure-code-profile get default&lt;br /&gt;
 ceph osd erasure-code-profile set ec41 k=4 m=1 crush-failure-domain=osd&lt;br /&gt;
 ceph osd pool create rbddata erasure ec41&lt;br /&gt;
&lt;br /&gt;
==Hint at denne pool skal bruges til block storage==&lt;br /&gt;
 ceph osd pool application enable rbddata rbd&lt;br /&gt;
 ceph osd pool application enable rbdmeta rbd&lt;br /&gt;
&lt;br /&gt;
==Tillad EC blok overwrites==&lt;br /&gt;
 ceph osd pool set rbddata allow_ec_overwrites true&lt;br /&gt;
&lt;br /&gt;
 rbd create --size 40G --data-pool rbddata rbdmeta/ectestimage1&lt;br /&gt;
 rbd ls rbdmeta&lt;br /&gt;
&lt;br /&gt;
==Mapper et rbd image ind som blockdevice==&lt;br /&gt;
 rbd map rbdmeta/ectestimage1&lt;br /&gt;
&lt;br /&gt;
==Indskriv i &#039;&#039;&#039;/etc/ceph/rbdmap&#039;&#039;&#039;== &lt;br /&gt;
 rbdmeta/ectestimage1    id=admin,keyring=/etc/ceph/ceph.client.admin.keyring&lt;br /&gt;
&lt;br /&gt;
 systemctl enable rbdmap.service&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Mount filsystem==&lt;br /&gt;
 mkfs.xfs /dev/rbd0 &lt;br /&gt;
 mkdir /storage&lt;br /&gt;
 mount -t xfs /dev/rbd0 /storage/&lt;br /&gt;
 df -h /storage/&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;&#039;/etc/fstab&#039;&#039;&#039;==&lt;br /&gt;
 /dev/rbd0       /storage/       xfs     defaults,discard,_netdev        0       0&lt;br /&gt;
&lt;br /&gt;
=CephFS - Filesystem=&lt;br /&gt;
 # Filsystemet kalder vi myfs&lt;br /&gt;
 &lt;br /&gt;
 #setup metadata server&lt;br /&gt;
 ceph orch apply mds myfs&lt;br /&gt;
 &lt;br /&gt;
 # opret volume&lt;br /&gt;
 ceph fs volume create myfs&lt;br /&gt;
 &lt;br /&gt;
 # metadata OG data pool til rod fs skal være replicated, men vi sætter crushrule for at tillade alle på samme host&lt;br /&gt;
 ceph osd pool set cephfs.myfs.meta crush_rule repl1&lt;br /&gt;
 ceph osd pool set cephfs.myfs.data crush_rule repl1&lt;br /&gt;
 &lt;br /&gt;
 # set intended use på pools&lt;br /&gt;
 ceph osd pool  application enable  cephfs.myfs.data cephfs&lt;br /&gt;
 ceph osd pool  application enable  cephfs.myfs.meta cephfs&lt;br /&gt;
&lt;br /&gt;
=Hvad mangler vi ?=&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Clean shutdown / reboot ?&lt;br /&gt;
&lt;br /&gt;
ceph logs ? &lt;br /&gt;
&lt;br /&gt;
Scrubbing ?&lt;br /&gt;
&lt;br /&gt;
Overvågning / prometheus ?&lt;br /&gt;
&lt;br /&gt;
Defekt disk, ny disk.&lt;br /&gt;
&lt;br /&gt;
Rest API&lt;br /&gt;
&lt;br /&gt;
=Sources n crap=&lt;br /&gt;
https://docs.ceph.com/en/latest/cephadm/install/&lt;br /&gt;
&lt;br /&gt;
https://medium.com/@balderscape/setting-up-a-virtual-single-node-ceph-storage-cluster-d86d6a6c658e&lt;br /&gt;
&lt;br /&gt;
https://linoxide.com/linux-how-to/hwto-configure-single-node-ceph-cluster/&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Zap disk for re-use==&lt;br /&gt;
 ceph-volume lvm zap /dev/sdX&lt;br /&gt;
or&lt;br /&gt;
 dd if=/dev/zero of=/dev/vdc bs=1M count=10&lt;/div&gt;</summary>
		<author><name>Torben</name></author>
	</entry>
	<entry>
		<id>https://wiki.t-hoerup.dk/index.php?title=Single_Host_Ceph_Server&amp;diff=12174</id>
		<title>Single Host Ceph Server</title>
		<link rel="alternate" type="text/html" href="https://wiki.t-hoerup.dk/index.php?title=Single_Host_Ceph_Server&amp;diff=12174"/>
		<updated>2020-11-10T18:44:45Z</updated>

		<summary type="html">&lt;p&gt;Torben: /* CephFS - Filesystem */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Clean Centos 8&lt;br /&gt;
&lt;br /&gt;
=Basic Stuff og cephadm=&lt;br /&gt;
 yum install -y python3 podman chrony lvm2 wget &lt;br /&gt;
 wget -O /root/cephadm https://github.com/ceph/ceph/raw/octopus/src/cephadm/cephadm&lt;br /&gt;
 chmod +x /root/cephadm&lt;br /&gt;
&lt;br /&gt;
 mkdir -p /etc/ceph&lt;br /&gt;
&lt;br /&gt;
 ./cephadm add-repo --release octopus&lt;br /&gt;
 ./cephadm install&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Bootstrap the monitor on its own IP=&lt;br /&gt;
 cephadm bootstrap --mon-ip 192.168.2.206   &lt;br /&gt;
&lt;br /&gt;
=Installer ceph=&lt;br /&gt;
 cephadm add-repo --release octopus&lt;br /&gt;
 cephadm install ceph-common&lt;br /&gt;
 cephadm install ceph &lt;br /&gt;
&lt;br /&gt;
=Opret OSD&#039;er med alle diske (få lige specifik kommando fra Hoerup)=&lt;br /&gt;
 ceph orch apply osd --all-available-devices&lt;br /&gt;
 ceph status&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Lav ny regel der bruger failure domain på OSD (istedet for 3 hosts)=&lt;br /&gt;
 ceph osd crush rule create-replicated repl1 default osd&lt;br /&gt;
 ceph osd pool ls&lt;br /&gt;
 ceph osd pool set device_health_metrics crush_rule repl1&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Block Device=&lt;br /&gt;
==EC stuff, her med 4+1==&lt;br /&gt;
 ceph osd pool create rbdmeta replicated repl1&lt;br /&gt;
 ceph osd erasure-code-profile get default&lt;br /&gt;
 ceph osd erasure-code-profile set ec41 k=4 m=1 crush-failure-domain=osd&lt;br /&gt;
 ceph osd pool create rbddata erasure ec41&lt;br /&gt;
&lt;br /&gt;
==Hint at denne pool skal bruges til block storage==&lt;br /&gt;
 ceph osd pool application enable rbddata rbd&lt;br /&gt;
 ceph osd pool application enable rbdmeta rbd&lt;br /&gt;
&lt;br /&gt;
==Tillad EC blok overwrites==&lt;br /&gt;
 ceph osd pool set rbddata allow_ec_overwrites true&lt;br /&gt;
&lt;br /&gt;
 rbd create --size 40G --data-pool rbddata rbdmeta/ectestimage1&lt;br /&gt;
 rbd ls rbdmeta&lt;br /&gt;
&lt;br /&gt;
==Mapper et rbd image ind som blockdevice==&lt;br /&gt;
 rbd map rbdmeta/ectestimage1&lt;br /&gt;
&lt;br /&gt;
==Indskriv i &#039;&#039;&#039;/etc/ceph/rbdmap&#039;&#039;&#039;== &lt;br /&gt;
 rbdmeta/ectestimage1    id=admin,keyring=/etc/ceph/ceph.client.admin.keyring&lt;br /&gt;
&lt;br /&gt;
 systemctl enable rbdmap.service&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Mount filsystem==&lt;br /&gt;
 mkfs.xfs /dev/rbd0 &lt;br /&gt;
 mkdir /storage&lt;br /&gt;
 mount -t xfs /dev/rbd0 /storage/&lt;br /&gt;
 df -h /storage/&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;&#039;/etc/fstab&#039;&#039;&#039;==&lt;br /&gt;
 /dev/rbd0       /storage/       xfs     defaults,discard,_netdev        0       0&lt;br /&gt;
&lt;br /&gt;
=CephFS - Filesystem=&lt;br /&gt;
 # Filsystemet kalder vi myfs&lt;br /&gt;
 &lt;br /&gt;
 #setup metadata server&lt;br /&gt;
 ceph orch apply mds myfs&lt;br /&gt;
 &lt;br /&gt;
 # opret volume&lt;br /&gt;
 ceph fs volume create myfs&lt;br /&gt;
 &lt;br /&gt;
 # metadata OG data pool til rod fs skal være replicated, men vi sætter crushrule for at tillade alle på samme host&lt;br /&gt;
 ceph osd pool set cephfs.myfs.meta crush_rule repl1&lt;br /&gt;
 ceph osd pool set cephfs.myfs.data crush_rule repl1&lt;br /&gt;
&lt;br /&gt;
=Hvad mangler vi ?=&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Clean shutdown / reboot ?&lt;br /&gt;
&lt;br /&gt;
ceph logs ? &lt;br /&gt;
&lt;br /&gt;
Scrubbing ?&lt;br /&gt;
&lt;br /&gt;
Overvågning / prometheus ?&lt;br /&gt;
&lt;br /&gt;
Defekt disk, ny disk.&lt;br /&gt;
&lt;br /&gt;
Rest API&lt;br /&gt;
&lt;br /&gt;
=Sources n crap=&lt;br /&gt;
https://docs.ceph.com/en/latest/cephadm/install/&lt;br /&gt;
&lt;br /&gt;
https://medium.com/@balderscape/setting-up-a-virtual-single-node-ceph-storage-cluster-d86d6a6c658e&lt;br /&gt;
&lt;br /&gt;
https://linoxide.com/linux-how-to/hwto-configure-single-node-ceph-cluster/&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Zap disk for re-use==&lt;br /&gt;
 ceph-volume lvm zap /dev/sdX&lt;br /&gt;
or&lt;br /&gt;
 dd if=/dev/zero of=/dev/vdc bs=1M count=10&lt;/div&gt;</summary>
		<author><name>Torben</name></author>
	</entry>
	<entry>
		<id>https://wiki.t-hoerup.dk/index.php?title=Single_Host_Ceph_Server&amp;diff=12173</id>
		<title>Single Host Ceph Server</title>
		<link rel="alternate" type="text/html" href="https://wiki.t-hoerup.dk/index.php?title=Single_Host_Ceph_Server&amp;diff=12173"/>
		<updated>2020-11-10T18:43:00Z</updated>

		<summary type="html">&lt;p&gt;Torben: /* CephFS - Filesystem */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Clean Centos 8&lt;br /&gt;
&lt;br /&gt;
=Basic Stuff og cephadm=&lt;br /&gt;
 yum install -y python3 podman chrony lvm2 wget &lt;br /&gt;
 wget -O /root/cephadm https://github.com/ceph/ceph/raw/octopus/src/cephadm/cephadm&lt;br /&gt;
 chmod +x /root/cephadm&lt;br /&gt;
&lt;br /&gt;
 mkdir -p /etc/ceph&lt;br /&gt;
&lt;br /&gt;
 ./cephadm add-repo --release octopus&lt;br /&gt;
 ./cephadm install&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Bootstrap the monitor on its own IP=&lt;br /&gt;
 cephadm bootstrap --mon-ip 192.168.2.206   &lt;br /&gt;
&lt;br /&gt;
=Installer ceph=&lt;br /&gt;
 cephadm add-repo --release octopus&lt;br /&gt;
 cephadm install ceph-common&lt;br /&gt;
 cephadm install ceph &lt;br /&gt;
&lt;br /&gt;
=Opret OSD&#039;er med alle diske (få lige specifik kommando fra Hoerup)=&lt;br /&gt;
 ceph orch apply osd --all-available-devices&lt;br /&gt;
 ceph status&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Lav ny regel der bruger failure domain på OSD (istedet for 3 hosts)=&lt;br /&gt;
 ceph osd crush rule create-replicated repl1 default osd&lt;br /&gt;
 ceph osd pool ls&lt;br /&gt;
 ceph osd pool set device_health_metrics crush_rule repl1&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Block Device=&lt;br /&gt;
==EC stuff, her med 4+1==&lt;br /&gt;
 ceph osd pool create rbdmeta replicated repl1&lt;br /&gt;
 ceph osd erasure-code-profile get default&lt;br /&gt;
 ceph osd erasure-code-profile set ec41 k=4 m=1 crush-failure-domain=osd&lt;br /&gt;
 ceph osd pool create rbddata erasure ec41&lt;br /&gt;
&lt;br /&gt;
==Hint at denne pool skal bruges til block storage==&lt;br /&gt;
 ceph osd pool application enable rbddata rbd&lt;br /&gt;
 ceph osd pool application enable rbdmeta rbd&lt;br /&gt;
&lt;br /&gt;
==Tillad EC blok overwrites==&lt;br /&gt;
 ceph osd pool set rbddata allow_ec_overwrites true&lt;br /&gt;
&lt;br /&gt;
 rbd create --size 40G --data-pool rbddata rbdmeta/ectestimage1&lt;br /&gt;
 rbd ls rbdmeta&lt;br /&gt;
&lt;br /&gt;
==Mapper et rbd image ind som blockdevice==&lt;br /&gt;
 rbd map rbdmeta/ectestimage1&lt;br /&gt;
&lt;br /&gt;
==Indskriv i &#039;&#039;&#039;/etc/ceph/rbdmap&#039;&#039;&#039;== &lt;br /&gt;
 rbdmeta/ectestimage1    id=admin,keyring=/etc/ceph/ceph.client.admin.keyring&lt;br /&gt;
&lt;br /&gt;
 systemctl enable rbdmap.service&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Mount filsystem==&lt;br /&gt;
 mkfs.xfs /dev/rbd0 &lt;br /&gt;
 mkdir /storage&lt;br /&gt;
 mount -t xfs /dev/rbd0 /storage/&lt;br /&gt;
 df -h /storage/&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;&#039;/etc/fstab&#039;&#039;&#039;==&lt;br /&gt;
 /dev/rbd0       /storage/       xfs     defaults,discrad,_netdev        0       0&lt;br /&gt;
&lt;br /&gt;
=CephFS - Filesystem=&lt;br /&gt;
 # Filsystemet kalder vi myfs&lt;br /&gt;
&lt;br /&gt;
 #setup metadata server&lt;br /&gt;
 ceph orch apply mds myfs&lt;br /&gt;
&lt;br /&gt;
 # opret volume&lt;br /&gt;
 ceph fs volume create myfs&lt;br /&gt;
&lt;br /&gt;
 # metadata OG data pool til rod fs skal være replicated, men vi sætter crushrule for at tillade alle på samme host&lt;br /&gt;
 ceph osd pool set cephfs.myfs.meta crush_rule repl1&lt;br /&gt;
 ceph osd pool set cephfs.myfs.data crush_rule repl1&lt;br /&gt;
&lt;br /&gt;
=Hvad mangler vi ?=&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Clean shutdown / reboot ?&lt;br /&gt;
&lt;br /&gt;
ceph logs ? &lt;br /&gt;
&lt;br /&gt;
Scrubbing ?&lt;br /&gt;
&lt;br /&gt;
Overvågning / prometheus ?&lt;br /&gt;
&lt;br /&gt;
Defekt disk, ny disk.&lt;br /&gt;
&lt;br /&gt;
Rest API&lt;br /&gt;
&lt;br /&gt;
=Sources n crap=&lt;br /&gt;
https://docs.ceph.com/en/latest/cephadm/install/&lt;br /&gt;
&lt;br /&gt;
https://medium.com/@balderscape/setting-up-a-virtual-single-node-ceph-storage-cluster-d86d6a6c658e&lt;br /&gt;
&lt;br /&gt;
https://linoxide.com/linux-how-to/hwto-configure-single-node-ceph-cluster/&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Zap disk for re-use==&lt;br /&gt;
 ceph-volume lvm zap /dev/sdX&lt;br /&gt;
or&lt;br /&gt;
 dd if=/dev/zero of=/dev/vdc bs=1M count=10&lt;/div&gt;</summary>
		<author><name>Torben</name></author>
	</entry>
	<entry>
		<id>https://wiki.t-hoerup.dk/index.php?title=Single_Host_Ceph_Server&amp;diff=12171</id>
		<title>Single Host Ceph Server</title>
		<link rel="alternate" type="text/html" href="https://wiki.t-hoerup.dk/index.php?title=Single_Host_Ceph_Server&amp;diff=12171"/>
		<updated>2020-11-03T19:00:02Z</updated>

		<summary type="html">&lt;p&gt;Torben: /* CephFS - Filesystem */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Clean Centos 8&lt;br /&gt;
&lt;br /&gt;
=Basic Stuff og cephadm=&lt;br /&gt;
 yum install -y python3 podman chrony lvm2 wget &lt;br /&gt;
 wget -O /root/cephadm https://github.com/ceph/ceph/raw/octopus/src/cephadm/cephadm&lt;br /&gt;
 chmod +x /root/cephadm&lt;br /&gt;
&lt;br /&gt;
 mkdir -p /etc/ceph&lt;br /&gt;
&lt;br /&gt;
 ./cephadm add-repo --release octopus&lt;br /&gt;
 ./cephadm install&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Bootstrap the monitor on its own IP=&lt;br /&gt;
 cephadm bootstrap --mon-ip 192.168.2.206   &lt;br /&gt;
&lt;br /&gt;
=Installer ceph=&lt;br /&gt;
 cephadm add-repo --release octopus&lt;br /&gt;
 cephadm install ceph-common&lt;br /&gt;
 cephadm install ceph &lt;br /&gt;
&lt;br /&gt;
=Opret OSD&#039;er med alle diske (få lige specifik kommando fra Hoerup)=&lt;br /&gt;
 ceph orch apply osd --all-available-devices&lt;br /&gt;
 ceph status&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Lav ny regel der bruger failure domain på OSD (istedet for 3 hosts)=&lt;br /&gt;
 ceph osd crush rule create-replicated repl1 default osd&lt;br /&gt;
 ceph osd pool ls&lt;br /&gt;
 ceph osd pool set device_health_metrics crush_rule repl1&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Block Device=&lt;br /&gt;
==EC stuff, her med 4+1==&lt;br /&gt;
 ceph osd pool create rbdmeta replicated repl1&lt;br /&gt;
 ceph osd erasure-code-profile get default&lt;br /&gt;
 ceph osd erasure-code-profile set ec41 k=4 m=1 crush-failure-domain=osd&lt;br /&gt;
 ceph osd pool create rbddata erasure ec41&lt;br /&gt;
&lt;br /&gt;
==Hint at denne pool skal bruges til block storage==&lt;br /&gt;
 ceph osd pool application enable rbddata rbd&lt;br /&gt;
 ceph osd pool application enable rbdmeta rbd&lt;br /&gt;
&lt;br /&gt;
==Tillad EC blok overwrites==&lt;br /&gt;
 ceph osd pool set rbddata allow_ec_overwrites true&lt;br /&gt;
&lt;br /&gt;
 rbd create --size 40G --data-pool rbddata rbdmeta/ectestimage1&lt;br /&gt;
 rbd ls rbdmeta&lt;br /&gt;
&lt;br /&gt;
==Mapper et rbd image ind som blockdevice==&lt;br /&gt;
 rbd map rbdmeta/ectestimage1&lt;br /&gt;
&lt;br /&gt;
==Indskriv i &#039;&#039;&#039;/etc/ceph/rbdmap&#039;&#039;&#039;== &lt;br /&gt;
 rbdmeta/ectestimage1    id=admin,keyring=/etc/ceph/ceph.client.admin.keyring&lt;br /&gt;
&lt;br /&gt;
 systemctl enable rbdmap.service&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Mount filsystem==&lt;br /&gt;
 mkfs.xfs /dev/rbd0 &lt;br /&gt;
 mkdir /storage&lt;br /&gt;
 mount -t xfs /dev/rbd0 /storage/&lt;br /&gt;
 df -h /storage/&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;&#039;/etc/fstab&#039;&#039;&#039;==&lt;br /&gt;
 /dev/rbd0       /storage/       xfs     defaults,_netdev        0       0&lt;br /&gt;
&lt;br /&gt;
=CephFS - Filesystem=&lt;br /&gt;
 # Filsystemet kalder vi myfs&lt;br /&gt;
&lt;br /&gt;
 #Inden opsæt opretter vi datapool med EC profil&lt;br /&gt;
 ceph osd pool create cephfs.myfs.data erasure ec41&lt;br /&gt;
&lt;br /&gt;
 #setup metadata server&lt;br /&gt;
 ceph orch apply mds myfs&lt;br /&gt;
&lt;br /&gt;
 # opret volume&lt;br /&gt;
 ceph fs volume create myfs&lt;br /&gt;
&lt;br /&gt;
 # metadata skal være replicated, men vi sætter crushrule &lt;br /&gt;
 ceph osd pool set cephfs.myfs.meta crush_rule repl1&lt;br /&gt;
&lt;br /&gt;
=Hvad mangler vi ?=&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Clean shutdown / reboot ?&lt;br /&gt;
&lt;br /&gt;
ceph logs ? &lt;br /&gt;
&lt;br /&gt;
Scrubbing ?&lt;br /&gt;
&lt;br /&gt;
Overvågning / prometheus ?&lt;br /&gt;
&lt;br /&gt;
Defekt disk, ny disk.&lt;br /&gt;
&lt;br /&gt;
Rest API&lt;br /&gt;
&lt;br /&gt;
=Sources n crap=&lt;br /&gt;
https://docs.ceph.com/en/latest/cephadm/install/&lt;br /&gt;
&lt;br /&gt;
https://medium.com/@balderscape/setting-up-a-virtual-single-node-ceph-storage-cluster-d86d6a6c658e&lt;br /&gt;
&lt;br /&gt;
https://linoxide.com/linux-how-to/hwto-configure-single-node-ceph-cluster/&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Zap disk for re-use==&lt;br /&gt;
 ceph-volume lvm zap /dev/sdX&lt;br /&gt;
or&lt;br /&gt;
 dd if=/dev/zero of=/dev/vdc bs=1M count=10&lt;/div&gt;</summary>
		<author><name>Torben</name></author>
	</entry>
	<entry>
		<id>https://wiki.t-hoerup.dk/index.php?title=Single_Host_Ceph_Server&amp;diff=12170</id>
		<title>Single Host Ceph Server</title>
		<link rel="alternate" type="text/html" href="https://wiki.t-hoerup.dk/index.php?title=Single_Host_Ceph_Server&amp;diff=12170"/>
		<updated>2020-11-03T18:57:02Z</updated>

		<summary type="html">&lt;p&gt;Torben: /* CephFS - Filesystem */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Clean Centos 8&lt;br /&gt;
&lt;br /&gt;
=Basic Stuff og cephadm=&lt;br /&gt;
 yum install -y python3 podman chrony lvm2 wget &lt;br /&gt;
 wget -O /root/cephadm https://github.com/ceph/ceph/raw/octopus/src/cephadm/cephadm&lt;br /&gt;
 chmod +x /root/cephadm&lt;br /&gt;
&lt;br /&gt;
 mkdir -p /etc/ceph&lt;br /&gt;
&lt;br /&gt;
 ./cephadm add-repo --release octopus&lt;br /&gt;
 ./cephadm install&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Bootstrap the monitor on its own IP=&lt;br /&gt;
 cephadm bootstrap --mon-ip 192.168.2.206   &lt;br /&gt;
&lt;br /&gt;
=Installer ceph=&lt;br /&gt;
 cephadm add-repo --release octopus&lt;br /&gt;
 cephadm install ceph-common&lt;br /&gt;
 cephadm install ceph &lt;br /&gt;
&lt;br /&gt;
=Opret OSD&#039;er med alle diske (få lige specifik kommando fra Hoerup)=&lt;br /&gt;
 ceph orch apply osd --all-available-devices&lt;br /&gt;
 ceph status&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Lav ny regel der bruger failure domain på OSD (istedet for 3 hosts)=&lt;br /&gt;
 ceph osd crush rule create-replicated repl1 default osd&lt;br /&gt;
 ceph osd pool ls&lt;br /&gt;
 ceph osd pool set device_health_metrics crush_rule repl1&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Block Device=&lt;br /&gt;
==EC stuff, her med 4+1==&lt;br /&gt;
 ceph osd pool create rbdmeta replicated repl1&lt;br /&gt;
 ceph osd erasure-code-profile get default&lt;br /&gt;
 ceph osd erasure-code-profile set ec41 k=4 m=1 crush-failure-domain=osd&lt;br /&gt;
 ceph osd pool create rbddata erasure ec41&lt;br /&gt;
&lt;br /&gt;
==Hint at denne pool skal bruges til block storage==&lt;br /&gt;
 ceph osd pool application enable rbddata rbd&lt;br /&gt;
 ceph osd pool application enable rbdmeta rbd&lt;br /&gt;
&lt;br /&gt;
==Tillad EC blok overwrites==&lt;br /&gt;
 ceph osd pool set rbddata allow_ec_overwrites true&lt;br /&gt;
&lt;br /&gt;
 rbd create --size 40G --data-pool rbddata rbdmeta/ectestimage1&lt;br /&gt;
 rbd ls rbdmeta&lt;br /&gt;
&lt;br /&gt;
==Mapper et rbd image ind som blockdevice==&lt;br /&gt;
 rbd map rbdmeta/ectestimage1&lt;br /&gt;
&lt;br /&gt;
==Indskriv i &#039;&#039;&#039;/etc/ceph/rbdmap&#039;&#039;&#039;== &lt;br /&gt;
 rbdmeta/ectestimage1    id=admin,keyring=/etc/ceph/ceph.client.admin.keyring&lt;br /&gt;
&lt;br /&gt;
 systemctl enable rbdmap.service&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Mount filsystem==&lt;br /&gt;
 mkfs.xfs /dev/rbd0 &lt;br /&gt;
 mkdir /storage&lt;br /&gt;
 mount -t xfs /dev/rbd0 /storage/&lt;br /&gt;
 df -h /storage/&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;&#039;/etc/fstab&#039;&#039;&#039;==&lt;br /&gt;
 /dev/rbd0       /storage/       xfs     defaults,_netdev        0       0&lt;br /&gt;
&lt;br /&gt;
=CephFS - Filesystem=&lt;br /&gt;
 #setup metadata server&lt;br /&gt;
 ceph orch apply mds cephfs&lt;br /&gt;
&lt;br /&gt;
 # opret volume&lt;br /&gt;
 ceph fs volume create cephfs&lt;br /&gt;
&lt;br /&gt;
 # metadata skal være replicated, men vi sætter crushrule &lt;br /&gt;
 ceph osd pool set cephfs.cephfs.meta crush_rule repl1&lt;br /&gt;
 ceph osd pool set cephfs.cephfs.data crush_rule repl1&lt;br /&gt;
&lt;br /&gt;
=Hvad mangler vi ?=&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Clean shutdown / reboot ?&lt;br /&gt;
&lt;br /&gt;
ceph logs ? &lt;br /&gt;
&lt;br /&gt;
Scrubbing ?&lt;br /&gt;
&lt;br /&gt;
Overvågning / prometheus ?&lt;br /&gt;
&lt;br /&gt;
Defekt disk, ny disk.&lt;br /&gt;
&lt;br /&gt;
Rest API&lt;br /&gt;
&lt;br /&gt;
=Sources n crap=&lt;br /&gt;
https://docs.ceph.com/en/latest/cephadm/install/&lt;br /&gt;
&lt;br /&gt;
https://medium.com/@balderscape/setting-up-a-virtual-single-node-ceph-storage-cluster-d86d6a6c658e&lt;br /&gt;
&lt;br /&gt;
https://linoxide.com/linux-how-to/hwto-configure-single-node-ceph-cluster/&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Zap disk for re-use==&lt;br /&gt;
 ceph-volume lvm zap /dev/sdX&lt;br /&gt;
or&lt;br /&gt;
 dd if=/dev/zero of=/dev/vdc bs=1M count=10&lt;/div&gt;</summary>
		<author><name>Torben</name></author>
	</entry>
	<entry>
		<id>https://wiki.t-hoerup.dk/index.php?title=Single_Host_Ceph_Server&amp;diff=12169</id>
		<title>Single Host Ceph Server</title>
		<link rel="alternate" type="text/html" href="https://wiki.t-hoerup.dk/index.php?title=Single_Host_Ceph_Server&amp;diff=12169"/>
		<updated>2020-11-03T18:48:34Z</updated>

		<summary type="html">&lt;p&gt;Torben: /* CephFS - Filesystem */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Clean Centos 8&lt;br /&gt;
&lt;br /&gt;
=Basic Stuff og cephadm=&lt;br /&gt;
 yum install -y python3 podman chrony lvm2 wget &lt;br /&gt;
 wget -O /root/cephadm https://github.com/ceph/ceph/raw/octopus/src/cephadm/cephadm&lt;br /&gt;
 chmod +x /root/cephadm&lt;br /&gt;
&lt;br /&gt;
 mkdir -p /etc/ceph&lt;br /&gt;
&lt;br /&gt;
 ./cephadm add-repo --release octopus&lt;br /&gt;
 ./cephadm install&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Bootstrap the monitor on its own IP=&lt;br /&gt;
 cephadm bootstrap --mon-ip 192.168.2.206   &lt;br /&gt;
&lt;br /&gt;
=Installer ceph=&lt;br /&gt;
 cephadm add-repo --release octopus&lt;br /&gt;
 cephadm install ceph-common&lt;br /&gt;
 cephadm install ceph &lt;br /&gt;
&lt;br /&gt;
=Opret OSD&#039;er med alle diske (få lige specifik kommando fra Hoerup)=&lt;br /&gt;
 ceph orch apply osd --all-available-devices&lt;br /&gt;
 ceph status&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Lav ny regel der bruger failure domain på OSD (istedet for 3 hosts)=&lt;br /&gt;
 ceph osd crush rule create-replicated repl1 default osd&lt;br /&gt;
 ceph osd pool ls&lt;br /&gt;
 ceph osd pool set device_health_metrics crush_rule repl1&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Block Device=&lt;br /&gt;
==EC stuff, her med 4+1==&lt;br /&gt;
 ceph osd pool create rbdmeta replicated repl1&lt;br /&gt;
 ceph osd erasure-code-profile get default&lt;br /&gt;
 ceph osd erasure-code-profile set ec41 k=4 m=1 crush-failure-domain=osd&lt;br /&gt;
 ceph osd pool create rbddata erasure ec41&lt;br /&gt;
&lt;br /&gt;
==Hint at denne pool skal bruges til block storage==&lt;br /&gt;
 ceph osd pool application enable rbddata rbd&lt;br /&gt;
 ceph osd pool application enable rbdmeta rbd&lt;br /&gt;
&lt;br /&gt;
==Tillad EC blok overwrites==&lt;br /&gt;
 ceph osd pool set rbddata allow_ec_overwrites true&lt;br /&gt;
&lt;br /&gt;
 rbd create --size 40G --data-pool rbddata rbdmeta/ectestimage1&lt;br /&gt;
 rbd ls rbdmeta&lt;br /&gt;
&lt;br /&gt;
==Mapper et rbd image ind som blockdevice==&lt;br /&gt;
 rbd map rbdmeta/ectestimage1&lt;br /&gt;
&lt;br /&gt;
==Indskriv i &#039;&#039;&#039;/etc/ceph/rbdmap&#039;&#039;&#039;== &lt;br /&gt;
 rbdmeta/ectestimage1    id=admin,keyring=/etc/ceph/ceph.client.admin.keyring&lt;br /&gt;
&lt;br /&gt;
 systemctl enable rbdmap.service&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Mount filsystem==&lt;br /&gt;
 mkfs.xfs /dev/rbd0 &lt;br /&gt;
 mkdir /storage&lt;br /&gt;
 mount -t xfs /dev/rbd0 /storage/&lt;br /&gt;
 df -h /storage/&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;&#039;/etc/fstab&#039;&#039;&#039;==&lt;br /&gt;
 /dev/rbd0       /storage/       xfs     defaults,_netdev        0       0&lt;br /&gt;
&lt;br /&gt;
=CephFS - Filesystem=&lt;br /&gt;
 #setup metadata server&lt;br /&gt;
 ceph orch apply mds cephfs&lt;br /&gt;
&lt;br /&gt;
 # opret volume&lt;br /&gt;
 ceph fs volume create cephfs&lt;br /&gt;
&lt;br /&gt;
=Hvad mangler vi ?=&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Clean shutdown / reboot ?&lt;br /&gt;
&lt;br /&gt;
ceph logs ? &lt;br /&gt;
&lt;br /&gt;
Scrubbing ?&lt;br /&gt;
&lt;br /&gt;
Overvågning / prometheus ?&lt;br /&gt;
&lt;br /&gt;
Defekt disk, ny disk.&lt;br /&gt;
&lt;br /&gt;
Rest API&lt;br /&gt;
&lt;br /&gt;
=Sources n crap=&lt;br /&gt;
https://docs.ceph.com/en/latest/cephadm/install/&lt;br /&gt;
&lt;br /&gt;
https://medium.com/@balderscape/setting-up-a-virtual-single-node-ceph-storage-cluster-d86d6a6c658e&lt;br /&gt;
&lt;br /&gt;
https://linoxide.com/linux-how-to/hwto-configure-single-node-ceph-cluster/&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Zap disk for re-use==&lt;br /&gt;
 ceph-volume lvm zap /dev/sdX&lt;br /&gt;
or&lt;br /&gt;
 dd if=/dev/zero of=/dev/vdc bs=1M count=10&lt;/div&gt;</summary>
		<author><name>Torben</name></author>
	</entry>
	<entry>
		<id>https://wiki.t-hoerup.dk/index.php?title=Single_Host_Ceph_Server&amp;diff=12168</id>
		<title>Single Host Ceph Server</title>
		<link rel="alternate" type="text/html" href="https://wiki.t-hoerup.dk/index.php?title=Single_Host_Ceph_Server&amp;diff=12168"/>
		<updated>2020-11-03T18:46:01Z</updated>

		<summary type="html">&lt;p&gt;Torben: /* CephFS - Filesystem */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Clean Centos 8&lt;br /&gt;
&lt;br /&gt;
=Basic Stuff og cephadm=&lt;br /&gt;
 yum install -y python3 podman chrony lvm2 wget &lt;br /&gt;
 wget -O /root/cephadm https://github.com/ceph/ceph/raw/octopus/src/cephadm/cephadm&lt;br /&gt;
 chmod +x /root/cephadm&lt;br /&gt;
&lt;br /&gt;
 mkdir -p /etc/ceph&lt;br /&gt;
&lt;br /&gt;
 ./cephadm add-repo --release octopus&lt;br /&gt;
 ./cephadm install&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Bootstrap the monitor on its own IP=&lt;br /&gt;
 cephadm bootstrap --mon-ip 192.168.2.206   &lt;br /&gt;
&lt;br /&gt;
=Installer ceph=&lt;br /&gt;
 cephadm add-repo --release octopus&lt;br /&gt;
 cephadm install ceph-common&lt;br /&gt;
 cephadm install ceph &lt;br /&gt;
&lt;br /&gt;
=Opret OSD&#039;er med alle diske (få lige specifik kommando fra Hoerup)=&lt;br /&gt;
 ceph orch apply osd --all-available-devices&lt;br /&gt;
 ceph status&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Lav ny regel der bruger failure domain på OSD (istedet for 3 hosts)=&lt;br /&gt;
 ceph osd crush rule create-replicated repl1 default osd&lt;br /&gt;
 ceph osd pool ls&lt;br /&gt;
 ceph osd pool set device_health_metrics crush_rule repl1&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Block Device=&lt;br /&gt;
==EC stuff, her med 4+1==&lt;br /&gt;
 ceph osd pool create rbdmeta replicated repl1&lt;br /&gt;
 ceph osd erasure-code-profile get default&lt;br /&gt;
 ceph osd erasure-code-profile set ec41 k=4 m=1 crush-failure-domain=osd&lt;br /&gt;
 ceph osd pool create rbddata erasure ec41&lt;br /&gt;
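The k/m split decides both overhead and fault tolerance; a quick sketch of the arithmetic behind the 4+1 profile above:&lt;br /&gt;

```shell
# EC profile k=4 m=1: each object is split into 4 data chunks plus 1 coding chunk.
# Usable fraction of raw capacity is k/(k+m); the pool tolerates losing m=1 OSD.
awk 'BEGIN { k = 4; m = 1; printf "usable fraction: %.2f\n", k / (k + m) }'
# prints: usable fraction: 0.80
```

Compare with 3-way replication, where the usable fraction is only 1/3 - the tradeoff is that EC rebuilds and overwrites are more expensive.&lt;br /&gt;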
&lt;br /&gt;
==Hint that these pools will be used for block storage==&lt;br /&gt;
 ceph osd pool application enable rbddata rbd&lt;br /&gt;
 ceph osd pool application enable rbdmeta rbd&lt;br /&gt;
&lt;br /&gt;
==Allow EC block overwrites==&lt;br /&gt;
 ceph osd pool set rbddata allow_ec_overwrites true&lt;br /&gt;
&lt;br /&gt;
 rbd create --size 40G --data-pool rbddata rbdmeta/ectestimage1&lt;br /&gt;
 rbd ls rbdmeta&lt;br /&gt;
&lt;br /&gt;
==Map an rbd image as a block device==&lt;br /&gt;
 rbd map rbdmeta/ectestimage1&lt;br /&gt;
&lt;br /&gt;
==Add to &#039;&#039;&#039;/etc/ceph/rbdmap&#039;&#039;&#039;==&lt;br /&gt;
 rbdmeta/ectestimage1    id=admin,keyring=/etc/ceph/ceph.client.admin.keyring&lt;br /&gt;
&lt;br /&gt;
 systemctl enable rbdmap.service&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Create and mount the filesystem==&lt;br /&gt;
 mkfs.xfs /dev/rbd0 &lt;br /&gt;
 mkdir /storage&lt;br /&gt;
 mount -t xfs /dev/rbd0 /storage/&lt;br /&gt;
 df -h /storage/&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;&#039;/etc/fstab&#039;&#039;&#039;==&lt;br /&gt;
 /dev/rbd0       /storage/       xfs     defaults,_netdev        0       0&lt;br /&gt;
&lt;br /&gt;
=CephFS - Filesystem=&lt;br /&gt;
Set up the metadata server:&lt;br /&gt;
 ceph orch apply mds cephfs&lt;br /&gt;
&lt;br /&gt;
=What is still missing?=&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Clean shutdown / reboot?&lt;br /&gt;
&lt;br /&gt;
Ceph logs?&lt;br /&gt;
&lt;br /&gt;
Scrubbing?&lt;br /&gt;
&lt;br /&gt;
Monitoring / Prometheus?&lt;br /&gt;
&lt;br /&gt;
Replacing a failed disk with a new one.&lt;br /&gt;
&lt;br /&gt;
REST API&lt;br /&gt;
&lt;br /&gt;
=Sources n crap=&lt;br /&gt;
https://docs.ceph.com/en/latest/cephadm/install/&lt;br /&gt;
&lt;br /&gt;
https://medium.com/@balderscape/setting-up-a-virtual-single-node-ceph-storage-cluster-d86d6a6c658e&lt;br /&gt;
&lt;br /&gt;
https://linoxide.com/linux-how-to/hwto-configure-single-node-ceph-cluster/&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Zap disk for re-use==&lt;br /&gt;
 ceph-volume lvm zap /dev/sdX&lt;br /&gt;
or&lt;br /&gt;
 dd if=/dev/zero of=/dev/vdc bs=1M count=10&lt;/div&gt;</summary>
		<author><name>Torben</name></author>
	</entry>
	<entry>
		<id>https://wiki.t-hoerup.dk/index.php?title=Udvidet_linux_-_webserver&amp;diff=12136</id>
		<title>Udvidet linux - webserver</title>
		<link rel="alternate" type="text/html" href="https://wiki.t-hoerup.dk/index.php?title=Udvidet_linux_-_webserver&amp;diff=12136"/>
		<updated>2020-05-23T20:33:50Z</updated>

		<summary type="html">&lt;p&gt;Torben: /* WAF - Ekstra opg */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Extended Linux - webserver=&lt;br /&gt;
Various exercises on Linux - meant as an extension of the more trivial school Linux assignments.&lt;br /&gt;
&lt;br /&gt;
The end result, once all exercises are done, should be a fully Highly-Available web cluster built entirely from open source components.&lt;br /&gt;
&lt;br /&gt;
The exercises may use concepts that are not explained further - investigating those is left as self-study.&lt;br /&gt;
&lt;br /&gt;
== Requirements ==&lt;br /&gt;
* Basic Linux knowledge: you should be able to install Linux, navigate around, edit config files, etc.&lt;br /&gt;
* The ability to deploy 5-10 small Linux VMs with at least 1 CPU/512 MB RAM each - or an equivalent number of Raspberry Pis at hand.&lt;br /&gt;
** The exercises generally use 1 role per node - to keep things simple and separated.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Basic webserver = &lt;br /&gt;
* Find a web application you want to host in your cluster. &lt;br /&gt;
** It must be an application that uses data from a database&lt;br /&gt;
** The choice is otherwise free (a PHP app on Apache with MySQL, a Ruby app on nginx with Redis, Java on Tomcat with PostgreSQL, Node.js with MongoDB, etc.)&lt;br /&gt;
** Consider stepping outside your comfort zone and picking a language/webserver/DB you have not tried before&lt;br /&gt;
* Install 1 webserver node (web01) and 1 database node (db01)&lt;br /&gt;
** install the webserver and database server software&lt;br /&gt;
** deploy your web application on web01 and point it at db01&lt;br /&gt;
** verify the web application works as intended&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;graphviz&amp;gt;&lt;br /&gt;
graph basic{&lt;br /&gt;
node [fontsize=10]&lt;br /&gt;
edge [fontsize=10]&lt;br /&gt;
&lt;br /&gt;
web01 -- db01&lt;br /&gt;
&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/graphviz&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= High Availability WebServer =&lt;br /&gt;
Your site is starting to get traffic. At peak load web01 struggles to keep up - and you are also nervous about what happens if web01 crashes - so you want one more webserver.&lt;br /&gt;
&lt;br /&gt;
* Install another webserver node (web02), or make a clone of web01 (remember to change hostname, IP, etc.)&lt;br /&gt;
** Verify that the application works as expected when you access web02 in the browser&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
You need a load balancer in front of your webservers to distribute the traffic. Such a node is also called a [https://en.wikipedia.org/wiki/Reverse_proxy reverse proxy].&lt;br /&gt;
&lt;br /&gt;
There are many different packages that can be used here. See for example [http://www.haproxy.org/ haproxy] , [https://httpd.apache.org/docs/2.4/mod/mod_proxy_balancer.html apache mod_proxy] , [https://docs.nginx.com/nginx/admin-guide/web-server/reverse-proxy/ nginx]&lt;br /&gt;
(If an Enterpri$e solution were required here, it could for example be Citrix NetScaler)&lt;br /&gt;
&lt;br /&gt;
* Install a node for your load balancer (lb01)&lt;br /&gt;
* install a proxy/load balancer of your choice and configure it to forward traffic to both web01 and web02.&lt;br /&gt;
** For the sake of variety, avoid the same server software as on web01/web02, so that you do not use e.g. nginx for both lb and web&lt;br /&gt;
* you should now be able to reach your webapp through lb01&lt;br /&gt;
* haproxy has a built-in status page, apache mod_proxy can expose status through mod_status, and nginx also has a status page. If you use something else, check whether it can report status in a similar way.&lt;br /&gt;
** Set up the proxy status page on lb01&lt;br /&gt;
* now test that your LB can handle a webserver not responding&lt;br /&gt;
** at each step, follow along on the status page&lt;br /&gt;
** stop the webserver on web01 and verify that you can still use your app through lb01&lt;br /&gt;
** start the webserver on web01 and stop it on web02 - verify that you can still use your app through lb01&lt;br /&gt;
** start the webserver on web02 again&lt;br /&gt;
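As a sketch, a minimal haproxy configuration for this topology could look like the following. The backend name, stats port, and server IPs are placeholders for your own environment:&lt;br /&gt;

```
# /etc/haproxy/haproxy.cfg (fragment) - hypothetical names and addresses
frontend www
    bind *:80
    default_backend webfarm

backend webfarm
    balance roundrobin
    server web01 192.168.2.11:80 check
    server web02 192.168.2.12:80 check

# built-in status page, for following along during the failover tests
listen stats
    bind *:8404
    stats enable
    stats uri /stats
```

The "check" keyword enables the health checks that let haproxy take a dead webserver out of rotation on its own.&lt;br /&gt;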
&lt;br /&gt;
Most reverse proxies support several load balancing methods for deciding which server should handle a given HTTP request, e.g. round-robin, by-source-ip, or least-used. &lt;br /&gt;
* Investigate which methods your reverse proxy supports.&lt;br /&gt;
&amp;lt;i&amp;gt;Be aware that if your application uses logins or sessions in some other way, a user should preferably hit the same server every time. (This does not apply if you have set up clustered sessions in your application.)&amp;lt;/i&amp;gt; &lt;br /&gt;
* Investigate whether you can do sticky sessions, or whether you want to balance by IP&lt;br /&gt;
&lt;br /&gt;
Besides your LB detecting on its own that a node is unavailable, you also want to be able to disable a webserver in the LB during service windows, so that the LB does not try in vain to send traffic before the webserver is ready again.&lt;br /&gt;
* Investigate how to disable and enable a webserver in your LB pool&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{#tag:graphviz|graph network {&lt;br /&gt;
node [fontsize=10]&lt;br /&gt;
edge [fontsize=10]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
lb01 -- web01&lt;br /&gt;
lb01 -- web02&lt;br /&gt;
web01 -- db01&lt;br /&gt;
web02 -- db01&lt;br /&gt;
&lt;br /&gt;
 }|format=&amp;quot;png&amp;quot;}}&lt;br /&gt;
&lt;br /&gt;
=Webserver synchronization=&lt;br /&gt;
&lt;br /&gt;
You now have 2 webservers running, which should preferably run exactly the same version of the web application. So you need to consider how to ensure this stays the case across upgrades, or when you need to deploy yet another server.&lt;br /&gt;
If files can be uploaded through the application, all webservers should preferably share the same view of the upload folder.&lt;br /&gt;
&lt;br /&gt;
PHP files can simply be synchronized; for other apps it may be necessary to look into config mgmt tools such as salt, puppet, or ansible.&lt;br /&gt;
&lt;br /&gt;
Files can be synchronized with e.g. rsync - or you can build replicated filesystems with e.g. drbd/gluster - you can also store them on a separate server and mount via NFS (but how does that affect your HA goal?)&lt;br /&gt;
&lt;br /&gt;
* What will you use to ensure your webservers are identical?&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=High Availability LB=&lt;br /&gt;
Your webservers are redundant and the LB redirects the traffic if one crashes - but what about your LB? You now need one more lb - but how do you ensure HA between the two? The approach here is a shared virtual IP that they negotiate over to decide which one is primary.&lt;br /&gt;
&amp;lt;i&amp;gt;Be aware that a shared IP can cause problems if your router/fw has a long ARP cache lifetime!&amp;lt;/i&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Deploy a new LB node (lb02) and make sure its configuration matches lb01&lt;br /&gt;
* verify that you can reach your website through lb02&lt;br /&gt;
* set up the virtual shared IP on lb01 and lb02 with e.g. heartbeat / pacemaker or keepalived.&lt;br /&gt;
* test that your HA works on the shared IP (shut down lb01 and test, start lb01 and shut down lb02, etc.)&lt;br /&gt;
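With keepalived, the shared virtual IP could be declared roughly like this. The interface name, router id, and the VIP itself are placeholders:&lt;br /&gt;

```
# /etc/keepalived/keepalived.conf on lb01
# lb02 uses the same block with state BACKUP and a lower priority (e.g. 90)
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 100
    advert_int 1
    virtual_ipaddress {
        192.168.2.100/24
    }
}
```

The two nodes exchange VRRP advertisements; when the MASTER stops advertising, the BACKUP takes over the virtual IP.&lt;br /&gt;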
&lt;br /&gt;
{{#tag:graphviz|graph ha_lb {&lt;br /&gt;
node [fontsize=10]&lt;br /&gt;
edge [fontsize=10]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
lb01 -- web01&lt;br /&gt;
lb01 -- web02&lt;br /&gt;
lb02 -- web01&lt;br /&gt;
lb02 -- web02&lt;br /&gt;
web01 -- db01&lt;br /&gt;
web02 -- db01&lt;br /&gt;
&lt;br /&gt;
 }|format=&amp;quot;png&amp;quot;}}&lt;br /&gt;
&lt;br /&gt;
=Application optimization=&lt;br /&gt;
You are getting more traffic and your 2 webservers can no longer keep up. Before throwing more hardware resources at the solution, you should check whether the application is properly tuned.&lt;br /&gt;
&lt;br /&gt;
* If it is a PHP application:&lt;br /&gt;
** Investigate what an opcode cache is&lt;br /&gt;
** Enable opcode caching on your webservers&lt;br /&gt;
* If it is Java:&lt;br /&gt;
** Do your memory settings fit your environment (-Xmx / -Xms)?&lt;br /&gt;
** Does your garbage collector fit the application and the load pattern?&lt;br /&gt;
* If it is a third language - check whether anything should be tuned in the runtime configuration&lt;br /&gt;
&lt;br /&gt;
* You should also check whether your database is tuned correctly&lt;br /&gt;
** mysql: look at e.g. innodb_buffer_pool_size&lt;br /&gt;
** postgres: look at e.g. shared_buffers &lt;br /&gt;
** investigate what fits your type of database&lt;br /&gt;
&lt;br /&gt;
* Most applications can use a caching component to store data, so that they do not have to fetch every element from the database on each page view. Popular cache servers are memcached and redis.&lt;br /&gt;
** investigate whether your application can use such a caching solution. Set up a cache server on your web nodes (or deploy new cache servers), and enable caching in your application&lt;br /&gt;
** benchmark the response time with and without cache, using e.g. [https://httpd.apache.org/docs/2.4/programs/ab.html apache ab]&lt;br /&gt;
&lt;br /&gt;
= HA Database =&lt;br /&gt;
At this point you can scale your webserver tier horizontally - but what about the database? It is still a single point of failure - so if it crashes you are still in trouble.&lt;br /&gt;
&lt;br /&gt;
* Investigate your options for scaling your DB.&lt;br /&gt;
** Is your solution a full replica ... or are you doing data partitioning (sharding) - in which case you now depend on both being up?&lt;br /&gt;
** Is your solution a warm standby, hot standby, or active/active?&lt;br /&gt;
** Are there any rules about quorum (so you need at least 3 DB nodes)?&lt;br /&gt;
** What does split-brain mean?&lt;br /&gt;
** can all nodes accept writes - or do you need a read/write split?&lt;br /&gt;
** Does anything have to be changed manually if a node crashes?&lt;br /&gt;
* See if you can set it up with db02 and possibly db03&lt;br /&gt;
* How do you reconfigure your web application to use the DB cluster?&lt;br /&gt;
** Do you need a database load balancer?&lt;br /&gt;
Database replication can be complicated to set up - so it is great if you can get it working - but okay if you throw in the towel.&lt;br /&gt;
&lt;br /&gt;
{{#tag:graphviz|graph ha_db {&lt;br /&gt;
node [fontsize=10]&lt;br /&gt;
edge [fontsize=10]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
lb01 -- web01&lt;br /&gt;
lb01 -- web02&lt;br /&gt;
lb02 -- web01&lt;br /&gt;
lb02 -- web02&lt;br /&gt;
&lt;br /&gt;
web01 -- db01&lt;br /&gt;
web02 -- db01&lt;br /&gt;
web01 -- db02&lt;br /&gt;
web02 -- db02&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 }|format=&amp;quot;png&amp;quot;}}&lt;br /&gt;
&lt;br /&gt;
= Static content =&lt;br /&gt;
You are now fully HA with redundancy on every element, but your webservers are struggling to keep up. You could add another webserver - but let us assume they serve a large amount of static content (e.g. image files) - and your webserver may not be optimal for that. Some are better at this kind of request: nginx or lighttpd (apache can be used if you disable .htaccess and use a threaded MPM)&lt;br /&gt;
&lt;br /&gt;
* create 1 or 2 nodes for static content (staticweb01 / staticweb02) and install a &amp;quot;lightweight&amp;quot; HTTP server&lt;br /&gt;
* find some static content from your app that can be copied over to staticweb01/02&lt;br /&gt;
* investigate how your LB nodes can route requests to staticweb01/02 based on part of the URL (e.g. a /static folder)&lt;br /&gt;
&lt;br /&gt;
This way of routing traffic based on the request can also be used if you have several different applications behind the same LB - there you would probably route based on the requested hostname instead of a path&lt;br /&gt;
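In haproxy, path-based routing like the /static example above can be expressed along these lines (both backend names are hypothetical):&lt;br /&gt;

```
# frontend fragment: send /static/... to the static nodes,
# everything else to the app servers
acl is_static path_beg /static
use_backend staticfarm if is_static
default_backend webfarm
```

Routing by hostname instead would use an acl on the Host header, e.g. hdr(host), with the same use_backend mechanism.&lt;br /&gt;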
&lt;br /&gt;
{{#tag:graphviz|graph static_content {&lt;br /&gt;
node [fontsize=10]&lt;br /&gt;
edge [fontsize=10]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
lb01 -- web01&lt;br /&gt;
lb01 -- web02&lt;br /&gt;
lb02 -- web01&lt;br /&gt;
lb02 -- web02&lt;br /&gt;
&lt;br /&gt;
lb01 -- staticweb01&lt;br /&gt;
lb01 -- staticweb02&lt;br /&gt;
lb02 -- staticweb01&lt;br /&gt;
lb02 -- staticweb02&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
web01 -- db01&lt;br /&gt;
web02 -- db01&lt;br /&gt;
web01 -- db02&lt;br /&gt;
web02 -- db02&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 }|format=&amp;quot;png&amp;quot;}}&lt;br /&gt;
&lt;br /&gt;
= WebCache = &lt;br /&gt;
If you have a high number of page views of the same content, e.g. your site&#039;s front page, you can offload the webserver by letting a reverse proxy serve a cached version of the page. This saves considerable resources on the web and database servers. I would recommend varnish here, but you can also look at e.g. apache mod_cache together with mod_proxy, or nginx proxy_cache&lt;br /&gt;
&lt;br /&gt;
* Create 2 nodes, webcache01 and webcache02&lt;br /&gt;
* set up a caching server on both&lt;br /&gt;
** the caching server must be able to pull data from web01 and web02&lt;br /&gt;
** reconfigure lb01/02 to fetch data from webcache01/02 instead of web01/02&lt;br /&gt;
** configure your cache so that it may cache e.g. the front page - while everything else must be pulled from the webservers &lt;br /&gt;
&lt;br /&gt;
&amp;lt;i&amp;gt;Now it is no longer only the lb but also your webcache nodes that must account for correct session routing&amp;lt;/i&amp;gt;&lt;br /&gt;
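A sketch of the &amp;quot;cache only the front page&amp;quot; rule in Varnish VCL - the backend address is a placeholder, and a real setup needs more handling of cookies and cache-control headers:&lt;br /&gt;

```
vcl 4.0;

backend web01 { .host = "192.168.2.11"; .port = "80"; }

sub vcl_recv {
    # cache the front page, pass everything else straight to the backend
    if (req.url == "/") {
        return (hash);
    }
    return (pass);
}
```

return (hash) sends the request through the cache lookup, while return (pass) bypasses the cache entirely.&lt;br /&gt;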
&lt;br /&gt;
{{#tag:graphviz|graph network_cache {&lt;br /&gt;
&lt;br /&gt;
node [fontsize=10]&lt;br /&gt;
edge [fontsize=10]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
lb01 -- webcache01&lt;br /&gt;
lb01 -- webcache02&lt;br /&gt;
lb02 -- webcache01&lt;br /&gt;
lb02 -- webcache02&lt;br /&gt;
&lt;br /&gt;
lb01 -- staticweb01&lt;br /&gt;
lb01 -- staticweb02&lt;br /&gt;
lb02 -- staticweb01&lt;br /&gt;
lb02 -- staticweb02&lt;br /&gt;
&lt;br /&gt;
webcache01 -- web01&lt;br /&gt;
webcache01 -- web02&lt;br /&gt;
webcache02 -- web01&lt;br /&gt;
webcache02 -- web02&lt;br /&gt;
&lt;br /&gt;
web01 -- db01&lt;br /&gt;
web02 -- db01&lt;br /&gt;
web01 -- db02&lt;br /&gt;
web02 -- db02&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 }|format=&amp;quot;png&amp;quot;}}&lt;br /&gt;
&lt;br /&gt;
= X-Forwarded-For =&lt;br /&gt;
With these layers of proxy servers, the webservers only see the IP of the proxy server that forwarded the traffic to them. That means the webservers&#039; log files get a wrong picture of who is requesting what. One way to handle this is to have the proxy server inject an HTTP header containing the original requester IP - X-Forwarded-For is commonly used here.&lt;br /&gt;
&lt;br /&gt;
* Investigate how to make your proxy servers pass along X-Forwarded-For &lt;br /&gt;
* Investigate how to make your webserver take the requester IP from X-Forwarded-For&lt;br /&gt;
* verify that on a request it is the client&#039;s IP that is stored in the webserver log file&lt;br /&gt;
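As an example, with nginx as the proxy the header injection looks like this (the upstream name is a placeholder). On an Apache backend, mod_remoteip can then be pointed at the same header:&lt;br /&gt;

```
# nginx reverse proxy fragment: append the client IP to X-Forwarded-For
location / {
    proxy_pass http://webfarm;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Real-IP $remote_addr;
}
```

$proxy_add_x_forwarded_for appends $remote_addr to any X-Forwarded-For header already present, so the full proxy chain is preserved.&lt;br /&gt;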
&lt;br /&gt;
= WAF - Extra exercise =&lt;br /&gt;
If you want to protect your website, you can implement a so-called Web Application Firewall (WAF). A traditional firewall looks at IP and TCP packets - while a WAF inspects HTTP requests.&lt;br /&gt;
&lt;br /&gt;
An open source WAF you can look into is https://www.modsecurity.org/&lt;br /&gt;
&lt;br /&gt;
= Ready for production =&lt;br /&gt;
Your site is fully tuned and fully HA, and you can scale by deploying more nodes - but are you production-ready if you have no overview of whether your components are running or not? And what will you do if your entire datacenter goes up in smoke?&lt;br /&gt;
&lt;br /&gt;
Monitoring&lt;br /&gt;
* Investigate open source monitoring solutions (e.g. icinga or zabbix)&lt;br /&gt;
** try to set up simple monitoring where you at minimum ping check your servers&lt;br /&gt;
&lt;br /&gt;
Backup&lt;br /&gt;
* You can look at e.g. bacula - but you are also welcome to keep it simple.&lt;br /&gt;
** At minimum, back up your database automatically once a day&lt;/div&gt;</summary>
		<author><name>Torben</name></author>
	</entry>
	<entry>
		<id>https://wiki.t-hoerup.dk/index.php?title=CPU_Comparison&amp;diff=12135</id>
		<title>CPU Comparison</title>
		<link rel="alternate" type="text/html" href="https://wiki.t-hoerup.dk/index.php?title=CPU_Comparison&amp;diff=12135"/>
		<updated>2020-05-08T10:00:34Z</updated>

		<summary type="html">&lt;p&gt;Torben: /* Intel(R) Core(TM) i7-10710U CPU @ 1.10GHz */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The tests were run with dnetc: ./dnetc --benchmark RC5-72&lt;br /&gt;
&lt;br /&gt;
See also [[VCPU_Comparison]] for virtual machines, and [[GPU_Comparison]] for graphics cards.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Intel(R) Core(TM) i7-10710U CPU @ 1.10GHz (NUC10i7)=&lt;br /&gt;
 [May 08 09:58:08 UTC] Automatic processor type detection did not&lt;br /&gt;
                      recognize the processor (tag: &amp;quot;10006A60&amp;quot;)&lt;br /&gt;
 [May 08 09:58:08 UTC] RC5-72: using core #4 (YK AVX2).&lt;br /&gt;
 [May 08 09:58:27 UTC] RC5-72: Benchmark for core #4 (YK AVX2)&lt;br /&gt;
                      0.00:00:16.26 [72,093,680 keys/sec]&lt;br /&gt;
&lt;br /&gt;
=Intel Core i5-7600K CPU @ 3.80GHz=&lt;br /&gt;
 [Jan 21 14:12:27 UTC] RC5-72: Benchmark for core #4 (YK AVX2)&lt;br /&gt;
                      0.00:00:16.14 [61,213,840 keys/sec]&lt;br /&gt;
&lt;br /&gt;
Passmark: 9136 https://www.cpubenchmark.net/cpu.php?cpu=Intel+Core+i5-7600K+%40+3.80GHz&amp;amp;id=2919&lt;br /&gt;
&lt;br /&gt;
=Intel Core i3-6100 CPU @ 3.7GHz=&lt;br /&gt;
 [Jan 22 16:21:36 UTC] RC5-72: Benchmark for core #4 (YK AVX2)&lt;br /&gt;
                      0.00:00:16.52 [59,701,011 keys/sec]&lt;br /&gt;
&lt;br /&gt;
Passmark: 5490 https://www.cpubenchmark.net/cpu.php?cpu=Intel+Core+i3-6100+%40+3.70GHz&amp;amp;id=2617&lt;br /&gt;
&lt;br /&gt;
=Intel Core i5-8400 CPU @ 2.80GHz=&lt;br /&gt;
 [Sep 29 06:29:14 UTC] Automatic processor type detection did not&lt;br /&gt;
                      recognize the processor (tag: &amp;quot;100069EA&amp;quot;)&lt;br /&gt;
 [Sep 29 06:29:14 UTC] RC5-72: using core #4 (YK AVX2).&lt;br /&gt;
 [Sep 29 06:29:33 UTC] RC5-72: Benchmark for core #4 (YK AVX2)                  &lt;br /&gt;
                      0.00:00:16.98 [57,705,817 keys/sec]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Intel Core i7-8550U CPU @ 1.80GHz=&lt;br /&gt;
 [Sep 26 07:24:26 UTC] RC5-72: Benchmark for core #4 (YK AVX2)                                                                                    &lt;br /&gt;
                      0.00:00:16.91 [57,487,911 keys/sec]&lt;br /&gt;
&lt;br /&gt;
Passmark: 8322 https://www.cpubenchmark.net/cpu.php?cpu=Intel%2BCore%2Bi7-8550U%2B%40%2B1.80GHz&amp;amp;id=3064&lt;br /&gt;
&lt;br /&gt;
=Intel Core i5-8250U CPU @ 1.60GHz (brix gb-bri5h-8250)=&lt;br /&gt;
 [Sep 29 10:11:31 UTC] Automatic processor type detection did not&lt;br /&gt;
                      recognize the processor (tag: &amp;quot;100068EA&amp;quot;)&lt;br /&gt;
 [Sep 29 10:11:31 UTC] RC5-72: using core #4 (YK AVX2).&lt;br /&gt;
 [Sep 29 10:11:50 UTC] RC5-72: Benchmark for core #4 (YK AVX2)                  &lt;br /&gt;
                      0.00:00:16.99 [54,845,778 keys/sec]&lt;br /&gt;
&lt;br /&gt;
=Intel Core i5-4460 CPU @ 3.20GHz=&lt;br /&gt;
 [Jan 17 21:18:21 UTC] RC5-72: Benchmark for core #4 (YK AVX2)&lt;br /&gt;
                      0.00:00:17.11 [41,443,462 keys/sec]&lt;br /&gt;
&lt;br /&gt;
=Intel Core i7-6600U CPU @ 2.60GHz=&lt;br /&gt;
 [Jan 17 11:03:29 UTC] RC5-72: Benchmark for core #12 (YK/RT AVX2)                                                                                                                                                     &lt;br /&gt;
                      0.00:00:16.94 [35,473,609 keys/sec]&lt;br /&gt;
&lt;br /&gt;
=Intel Xeon CPU E3-1220 v3 @ 3.10GHz=&lt;br /&gt;
 [Jan 17 21:13:38 UTC] RC5-72: Benchmark for core #12 (YK/RT AVX2)&lt;br /&gt;
                      0.00:00:16.27 [26,439,019 keys/sec]&lt;br /&gt;
&lt;br /&gt;
=Intel Core i7-3770 CPU @ 3.40GHz=&lt;br /&gt;
 [Oct 06 18:10:01 UTC] RC5-72: Benchmark for core #3 (GO 2-pipe d)&lt;br /&gt;
                      0.00:00:16.97 [14,635,885 keys/sec]&lt;br /&gt;
https://www.cpubenchmark.net/cpu.php?cpu=Intel+Core+i7-3770+%40+3.40GHz&lt;br /&gt;
&lt;br /&gt;
=Intel Core i5-6260U CPU @ 1.80GHz (NUC6i5)=&lt;br /&gt;
 [Feb 28 10:47:44 UTC] Automatic processor type detection did not&lt;br /&gt;
                      recognize the processor (tag: &amp;quot;100064E3&amp;quot;)&lt;br /&gt;
 [Feb 28 10:47:44 UTC] RC5-72: Running micro-bench to select fastest core...&lt;br /&gt;
 [Feb 28 10:48:07 UTC] RC5-72: using core #3 (GO 2-pipe d).&lt;br /&gt;
 [Feb 28 10:48:25 UTC] RC5-72: Benchmark for core #3 (GO 2-pipe d)                                                                                                                                                                           &lt;br /&gt;
                      0.00:00:16.23 [11,812,584 keys/sec]&lt;br /&gt;
&lt;br /&gt;
http://www.cpubenchmark.net/cpu.php?cpu=Intel+Core+i5-6260U+%40+1.80GHz&lt;br /&gt;
&lt;br /&gt;
=Intel Core i5-3210M @ 2.50GHz=&lt;br /&gt;
 [Nov 28 08:08:50 UTC] RC5-72: Benchmark for core #3 (GO 2-pipe d)&lt;br /&gt;
                      0.00:00:16.98 [11,403,529 keys/sec]&lt;br /&gt;
http://www.cpubenchmark.net/cpu.php?cpu=Intel+Core+i5-3210M+%40+2.50GHz&lt;br /&gt;
&lt;br /&gt;
=Intel Core 2 Duo CPU E7400 @ 2.80GHz=&lt;br /&gt;
 [Nov 28 08:20:06 UTC] RC5-72: Benchmark for core #3 (GO 2-pipe d)&lt;br /&gt;
                      0.00:00:17.04 [11,032,801 keys/sec]&lt;br /&gt;
&lt;br /&gt;
=Intel 2140 @ 2.70GHz=&lt;br /&gt;
 [Oct 10 08:46:27 UTC] RC5-72: Benchmark for core #3 (GO 2-pipe d)&lt;br /&gt;
                      0.00:00:17.62 [10,494,361 keys/sec]&lt;br /&gt;
&lt;br /&gt;
=AMD Turion II Neo N54L 2.2GHz=&lt;br /&gt;
 [Feb 05 21:58:19 UTC] RC5-72: Benchmark for core #11 (GO 2-pipe b)                &lt;br /&gt;
                      0.00:00:17.08 [9,790,931 keys/sec]&lt;br /&gt;
&lt;br /&gt;
=Intel Core i5-3427U 1.8GHz @ ~2.3GHz (NUC)=&lt;br /&gt;
 [Apr 15 19:21:53 UTC] RC5-72: Benchmark for core #11 (GO 2-pipe b)&lt;br /&gt;
                      0.00:00:17.30 [9,787,326 keys/sec]&lt;br /&gt;
&lt;br /&gt;
= Intel Xeon E5410 2.33GHz=&lt;br /&gt;
 [Feb 06 07:29:58 UTC] RC5-72: Benchmark for core #11 (GO 2-pipe b)&lt;br /&gt;
                      0.00:00:16.05 [8,716,320 keys/sec]&lt;br /&gt;
					  &lt;br /&gt;
=AMD Turion II Neo N40L=&lt;br /&gt;
 [Nov 27 19:56:07 UTC] RC5-72: Benchmark for core #6 (GO 2-pipe)                &lt;br /&gt;
                      0.00:00:16.84 [6,305,492 keys/sec]&lt;br /&gt;
					  &lt;br /&gt;
=AMD e-350=&lt;br /&gt;
 [Nov 26 12:29:37 UTC] RC5-72: Benchmark for core #11 (GO 2-pipe b)&lt;br /&gt;
                      0.00:00:16.97 [4,660,745 keys/sec]&lt;br /&gt;
&lt;br /&gt;
=Odroid XU4/HC2 : Samsung Exynos5 Octa ARM =&lt;br /&gt;
 [Jan 12 21:07:35 UTC] Automatic processor type detection found&lt;br /&gt;
                      an ARM Cortex-A15 processor.&lt;br /&gt;
 [Jan 12 21:07:35 UTC] RC5-72: using core #2 (XScale 1-pipe).&lt;br /&gt;
 [Jan 12 21:07:53 UTC] RC5-72: Benchmark for core #2 (XScale 1-pipe)            &lt;br /&gt;
                      0.00:00:16.16 [3,457,678 keys/sec]&lt;br /&gt;
&lt;br /&gt;
=Intel atom 330 +ion=&lt;br /&gt;
 [Nov 26 12:29:55 UTC] RC5-72: Benchmark for core #6 (GO 2-pipe)&lt;br /&gt;
                      0.00:00:16.64 [3,199,913 keys/sec]&lt;br /&gt;
&lt;br /&gt;
=Intel atom 270=&lt;br /&gt;
 [Nov 27 20:01:28 UTC] RC5-72: Benchmark for core #6 (GO 2-pipe)                &lt;br /&gt;
                      0.00:00:17.05 [3,123,175 keys/sec] &lt;br /&gt;
&lt;br /&gt;
= Raspberry Pi 2 B, Broadcom BCM2836 =&lt;br /&gt;
 [Feb 23 21:24:02 UTC] RC5-72: Benchmark for core #2 (XScale 1-pipe)                                                                                                                                                                         &lt;br /&gt;
                      0.00:00:16.07 [1,150,992 keys/sec]&lt;br /&gt;
&lt;br /&gt;
=ARM1176JZF-S 700 MHz (Raspberry PI, Broadcom BCM2835)=&lt;br /&gt;
 [Jan 11 23:24:47 UTC] RC5-72: Benchmark for core #2 (XScale 1-pipe)&lt;br /&gt;
                      0.00:00:17.43 [775,956 keys/sec]&lt;br /&gt;
&lt;br /&gt;
=Infrant Technologies, Inc. (Netgear ReadyNAS duo v.1)=&lt;br /&gt;
 [Jun 19 07:24:07 UTC] RC5-72: Benchmark for core #4 (AnBe 1-pipe)&lt;br /&gt;
                      0.00:00:16.53 [269,082 keys/sec]&lt;br /&gt;
&lt;br /&gt;
=External References=&lt;br /&gt;
http://cgi.distributed.net/speed/ - Distributed.net Client Speed Comparisons&lt;br /&gt;
&lt;br /&gt;
http://www.distributed.net/Download_clients#linux - Download Dnetc client.&lt;/div&gt;</summary>
		<author><name>Torben</name></author>
	</entry>
	<entry>
		<id>https://wiki.t-hoerup.dk/index.php?title=CPU_Comparison&amp;diff=12134</id>
		<title>CPU Comparison</title>
		<link rel="alternate" type="text/html" href="https://wiki.t-hoerup.dk/index.php?title=CPU_Comparison&amp;diff=12134"/>
		<updated>2020-05-08T09:59:59Z</updated>

		<summary type="html">&lt;p&gt;Torben: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The tests were run with dnetc: ./dnetc --benchmark RC5-72&lt;br /&gt;
&lt;br /&gt;
See also [[VCPU_Comparison]] for virtual machines, and [[GPU_Comparison]] for graphics cards.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Intel(R) Core(TM) i7-10710U CPU @ 1.10GHz=&lt;br /&gt;
 [May 08 09:58:08 UTC] Automatic processor type detection did not&lt;br /&gt;
                      recognize the processor (tag: &amp;quot;10006A60&amp;quot;)&lt;br /&gt;
 [May 08 09:58:08 UTC] RC5-72: using core #4 (YK AVX2).&lt;br /&gt;
 [May 08 09:58:27 UTC] RC5-72: Benchmark for core #4 (YK AVX2)&lt;br /&gt;
                      0.00:00:16.26 [72,093,680 keys/sec]&lt;br /&gt;
&lt;br /&gt;
=Intel Core i5-7600K CPU @ 3.80GHz=&lt;br /&gt;
 [Jan 21 14:12:27 UTC] RC5-72: Benchmark for core #4 (YK AVX2)&lt;br /&gt;
                      0.00:00:16.14 [61,213,840 keys/sec]&lt;br /&gt;
&lt;br /&gt;
Passmark: 9136 https://www.cpubenchmark.net/cpu.php?cpu=Intel+Core+i5-7600K+%40+3.80GHz&amp;amp;id=2919&lt;br /&gt;
&lt;br /&gt;
=Intel Core i3-6100 CPU @ 3.7GHz=&lt;br /&gt;
 [Jan 22 16:21:36 UTC] RC5-72: Benchmark for core #4 (YK AVX2)&lt;br /&gt;
                      0.00:00:16.52 [59,701,011 keys/sec]&lt;br /&gt;
&lt;br /&gt;
Passmark: 5490 https://www.cpubenchmark.net/cpu.php?cpu=Intel+Core+i3-6100+%40+3.70GHz&amp;amp;id=2617&lt;br /&gt;
&lt;br /&gt;
=Intel Core i5-8400 CPU @ 2.80GHz=&lt;br /&gt;
 [Sep 29 06:29:14 UTC] Automatic processor type detection did not&lt;br /&gt;
                      recognize the processor (tag: &amp;quot;100069EA&amp;quot;)&lt;br /&gt;
 [Sep 29 06:29:14 UTC] RC5-72: using core #4 (YK AVX2).&lt;br /&gt;
 [Sep 29 06:29:33 UTC] RC5-72: Benchmark for core #4 (YK AVX2)                  &lt;br /&gt;
                      0.00:00:16.98 [57,705,817 keys/sec]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Intel Core i7-8550U CPU @ 1.80GHz=&lt;br /&gt;
 [Sep 26 07:24:26 UTC] RC5-72: Benchmark for core #4 (YK AVX2)                                                                                    &lt;br /&gt;
                      0.00:00:16.91 [57,487,911 keys/sec]&lt;br /&gt;
&lt;br /&gt;
Passmark: 8322 https://www.cpubenchmark.net/cpu.php?cpu=Intel%2BCore%2Bi7-8550U%2B%40%2B1.80GHz&amp;amp;id=3064&lt;br /&gt;
&lt;br /&gt;
=Intel Core i5-8250U CPU @ 1.60GHz (brix gb-bri5h-8250)=&lt;br /&gt;
 [Sep 29 10:11:31 UTC] Automatic processor type detection did not&lt;br /&gt;
                      recognize the processor (tag: &amp;quot;100068EA&amp;quot;)&lt;br /&gt;
 [Sep 29 10:11:31 UTC] RC5-72: using core #4 (YK AVX2).&lt;br /&gt;
 [Sep 29 10:11:50 UTC] RC5-72: Benchmark for core #4 (YK AVX2)                  &lt;br /&gt;
                      0.00:00:16.99 [54,845,778 keys/sec]&lt;br /&gt;
&lt;br /&gt;
=Intel Core i5-4460 CPU @ 3.20GHz=&lt;br /&gt;
 [Jan 17 21:18:21 UTC] RC5-72: Benchmark for core #4 (YK AVX2)&lt;br /&gt;
                      0.00:00:17.11 [41,443,462 keys/sec]&lt;br /&gt;
&lt;br /&gt;
=Intel Core i7-6600U CPU @ 2.60GHz=&lt;br /&gt;
 [Jan 17 11:03:29 UTC] RC5-72: Benchmark for core #12 (YK/RT AVX2)                                                                                                                                                     &lt;br /&gt;
                      0.00:00:16.94 [35,473,609 keys/sec]&lt;br /&gt;
&lt;br /&gt;
=Intel Xeon CPU E3-1220 v3 @ 3.10GHz=&lt;br /&gt;
 [Jan 17 21:13:38 UTC] RC5-72: Benchmark for core #12 (YK/RT AVX2)&lt;br /&gt;
                      0.00:00:16.27 [26,439,019 keys/sec]&lt;br /&gt;
&lt;br /&gt;
=Intel Core i7-3770 CPU @ 3.40GHz=&lt;br /&gt;
 [Oct 06 18:10:01 UTC] RC5-72: Benchmark for core #3 (GO 2-pipe d)&lt;br /&gt;
                      0.00:00:16.97 [14,635,885 keys/sec]&lt;br /&gt;
https://www.cpubenchmark.net/cpu.php?cpu=Intel+Core+i7-3770+%40+3.40GHz&lt;br /&gt;
&lt;br /&gt;
=Intel Core i5-6260U CPU @ 1.80GHz (NUC6i5)=&lt;br /&gt;
 [Feb 28 10:47:44 UTC] Automatic processor type detection did not&lt;br /&gt;
                      recognize the processor (tag: &amp;quot;100064E3&amp;quot;)&lt;br /&gt;
 [Feb 28 10:47:44 UTC] RC5-72: Running micro-bench to select fastest core...&lt;br /&gt;
 [Feb 28 10:48:07 UTC] RC5-72: using core #3 (GO 2-pipe d).&lt;br /&gt;
 [Feb 28 10:48:25 UTC] RC5-72: Benchmark for core #3 (GO 2-pipe d)                                                                                                                                                                           &lt;br /&gt;
                      0.00:00:16.23 [11,812,584 keys/sec]&lt;br /&gt;
&lt;br /&gt;
http://www.cpubenchmark.net/cpu.php?cpu=Intel+Core+i5-6260U+%40+1.80GHz&lt;br /&gt;
&lt;br /&gt;
=Intel Core i5-3210M @ 2.50GHz=&lt;br /&gt;
 [Nov 28 08:08:50 UTC] RC5-72: Benchmark for core #3 (GO 2-pipe d)&lt;br /&gt;
                      0.00:00:16.98 [11,403,529 keys/sec]&lt;br /&gt;
http://www.cpubenchmark.net/cpu.php?cpu=Intel+Core+i5-3210M+%40+2.50GHz&lt;br /&gt;
&lt;br /&gt;
=Intel Core 2 Duo CPU E7400 @ 2.80GHz=&lt;br /&gt;
 [Nov 28 08:20:06 UTC] RC5-72: Benchmark for core #3 (GO 2-pipe d)&lt;br /&gt;
                      0.00:00:17.04 [11,032,801 keys/sec]&lt;br /&gt;
&lt;br /&gt;
=Intel 2140 @ 2.70GHz=&lt;br /&gt;
 [Oct 10 08:46:27 UTC] RC5-72: Benchmark for core #3 (GO 2-pipe d)&lt;br /&gt;
                      0.00:00:17.62 [10,494,361 keys/sec]&lt;br /&gt;
&lt;br /&gt;
=AMD Turion II Neo N54L 2.2GHz=&lt;br /&gt;
 [Feb 05 21:58:19 UTC] RC5-72: Benchmark for core #11 (GO 2-pipe b)                &lt;br /&gt;
                      0.00:00:17.08 [9,790,931 keys/sec]&lt;br /&gt;
&lt;br /&gt;
=Intel Core i5-3427U 1.8GHz @ ~2.3GHz (NUC)=&lt;br /&gt;
 [Apr 15 19:21:53 UTC] RC5-72: Benchmark for core #11 (GO 2-pipe b)&lt;br /&gt;
                      0.00:00:17.30 [9,787,326 keys/sec]&lt;br /&gt;
&lt;br /&gt;
= Intel Xeon E5410 2.33GHz=&lt;br /&gt;
 [Feb 06 07:29:58 UTC] RC5-72: Benchmark for core #11 (GO 2-pipe b)&lt;br /&gt;
                      0.00:00:16.05 [8,716,320 keys/sec]&lt;br /&gt;
					  &lt;br /&gt;
=AMD Turion II Neo N40L=&lt;br /&gt;
 [Nov 27 19:56:07 UTC] RC5-72: Benchmark for core #6 (GO 2-pipe)                &lt;br /&gt;
                      0.00:00:16.84 [6,305,492 keys/sec]&lt;br /&gt;
					  &lt;br /&gt;
=AMD e-350=&lt;br /&gt;
 [Nov 26 12:29:37 UTC] RC5-72: Benchmark for core #11 (GO 2-pipe b)&lt;br /&gt;
                      0.00:00:16.97 [4,660,745 keys/sec]&lt;br /&gt;
&lt;br /&gt;
=Odroid XU4/HC2 : Samsung Exynos5 Octa ARM =&lt;br /&gt;
 [Jan 12 21:07:35 UTC] Automatic processor type detection found&lt;br /&gt;
                      an ARM Cortex-A15 processor.&lt;br /&gt;
 [Jan 12 21:07:35 UTC] RC5-72: using core #2 (XScale 1-pipe).&lt;br /&gt;
 [Jan 12 21:07:53 UTC] RC5-72: Benchmark for core #2 (XScale 1-pipe)            &lt;br /&gt;
                      0.00:00:16.16 [3,457,678 keys/sec]&lt;br /&gt;
&lt;br /&gt;
=Intel Atom 330 + Ion=&lt;br /&gt;
 [Nov 26 12:29:55 UTC] RC5-72: Benchmark for core #6 (GO 2-pipe)&lt;br /&gt;
                      0.00:00:16.64 [3,199,913 keys/sec]&lt;br /&gt;
&lt;br /&gt;
=Intel Atom 270=&lt;br /&gt;
 [Nov 27 20:01:28 UTC] RC5-72: Benchmark for core #6 (GO 2-pipe)                &lt;br /&gt;
                      0.00:00:17.05 [3,123,175 keys/sec] &lt;br /&gt;
&lt;br /&gt;
= Raspberry Pi 2 B, Broadcom BCM2836 =&lt;br /&gt;
 [Feb 23 21:24:02 UTC] RC5-72: Benchmark for core #2 (XScale 1-pipe)                                                                                                                                                                         &lt;br /&gt;
                      0.00:00:16.07 [1,150,992 keys/sec]&lt;br /&gt;
&lt;br /&gt;
=ARM1176JZF-S 700 MHz (Raspberry Pi, Broadcom BCM2835)=&lt;br /&gt;
 [Jan 11 23:24:47 UTC] RC5-72: Benchmark for core #2 (XScale 1-pipe)&lt;br /&gt;
                      0.00:00:17.43 [775,956 keys/sec]&lt;br /&gt;
&lt;br /&gt;
=Infrant Technologies, Inc. (Netgear ReadyNAS duo v.1)=&lt;br /&gt;
 [Jun 19 07:24:07 UTC] RC5-72: Benchmark for core #4 (AnBe 1-pipe)&lt;br /&gt;
                      0.00:00:16.53 [269,082 keys/sec]&lt;br /&gt;
&lt;br /&gt;
=External References=&lt;br /&gt;
http://cgi.distributed.net/speed/ - Distributed.net Client Speed Comparisons&lt;br /&gt;
&lt;br /&gt;
http://www.distributed.net/Download_clients#linux - Download Dnetc client.&lt;/div&gt;</summary>
		<author><name>Torben</name></author>
	</entry>
	<entry>
		<id>https://wiki.t-hoerup.dk/index.php?title=Udvidet_linux_-_webserver&amp;diff=12133</id>
		<title>Udvidet linux - webserver</title>
		<link rel="alternate" type="text/html" href="https://wiki.t-hoerup.dk/index.php?title=Udvidet_linux_-_webserver&amp;diff=12133"/>
		<updated>2019-05-06T13:48:57Z</updated>

		<summary type="html">&lt;p&gt;Torben: /* WebCache */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Extended Linux - webserver=&lt;br /&gt;
Various Linux tasks - to be seen as an extension of the more trivial school Linux exercises.&lt;br /&gt;
&lt;br /&gt;
When all the exercises are done, the end result should be a fully Highly-Available web cluster built exclusively from open source components.&lt;br /&gt;
&lt;br /&gt;
The exercises may use concepts that are not explained any further - investigating them is left as self-study.&lt;br /&gt;
&lt;br /&gt;
== Requirements ==&lt;br /&gt;
* Basic Linux knowledge: you must be able to install Linux, navigate around, edit config files, etc.&lt;br /&gt;
* The ability to deploy 5-10 small Linux VMs with at least 1 CPU/512 MB RAM each - or have a similar number of Raspberry Pis at hand.&lt;br /&gt;
** In the exercises we use one role per node as a rule - to keep things simple and separated.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Basic webserver = &lt;br /&gt;
* Find a web application that you want to host in your cluster.&lt;br /&gt;
** It must be an application that uses data from a database.&lt;br /&gt;
** The choice is free (it could be a PHP app on Apache with MySQL, a Ruby app on nginx with Redis, Java on Tomcat with PostgreSQL, Node.js with MongoDB, etc.)&lt;br /&gt;
** Consider stepping outside your comfort zone and picking a language/webserver/DB you have not tried before.&lt;br /&gt;
* Install 1 webserver node (web01) and 1 database node (db01).&lt;br /&gt;
** Install the webserver and database server software.&lt;br /&gt;
** Deploy your web application on web01 and point it at db01.&lt;br /&gt;
** Verify that the web application works as intended.&lt;br /&gt;
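The node setup above could be sketched like this, assuming Debian/Ubuntu nodes with nginx, PHP and MariaDB as example choices (the hostnames web01/db01 come from the exercise; any other stack works just as well):&lt;br /&gt;

```shell
# On web01 (example: nginx serving a PHP application)
apt-get update
apt-get install -y nginx php-fpm php-mysql

# On db01 (example: MariaDB, reachable from the web node)
apt-get install -y mariadb-server
# Allow connections from web01 instead of localhost only:
sed -i 's/^bind-address.*/bind-address = 0.0.0.0/' /etc/mysql/mariadb.conf.d/50-server.cnf
systemctl restart mariadb
```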
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;graphviz&amp;gt;&lt;br /&gt;
graph basic{&lt;br /&gt;
node [fontsize=10]&lt;br /&gt;
edge [fontsize=10]&lt;br /&gt;
&lt;br /&gt;
web01 -- db01&lt;br /&gt;
&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/graphviz&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= High Availability WebServer =&lt;br /&gt;
You are starting to get traffic on your site. At peak load, web01 struggles to keep up - and besides, you are nervous about what happens if web01 crashes - so you want one more webserver.&lt;br /&gt;
&lt;br /&gt;
* Install another webserver node (web02), or make a clone of web01 (remember to change hostname, IP, etc.)&lt;br /&gt;
** Make sure the application works as expected when you access web02 in the browser.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
You need a load balancer in front of your webservers to distribute the traffic. Such a node is also called a [https://en.wikipedia.org/wiki/Reverse_proxy reverse proxy].&lt;br /&gt;
&lt;br /&gt;
There are many different packages that can be used here. See e.g. [http://www.haproxy.org/ haproxy], [https://httpd.apache.org/docs/2.4/mod/mod_proxy_balancer.html apache mod_proxy], [https://docs.nginx.com/nginx/admin-guide/web-server/reverse-proxy/ nginx].&lt;br /&gt;
(If an Enterpri$e solution were required here, it could for instance be Citrix NetScaler.)&lt;br /&gt;
&lt;br /&gt;
* Install a node for your load balancer (lb01).&lt;br /&gt;
* Install a proxy/load balancer of your choice and configure it to forward traffic to both web01 and web02.&lt;br /&gt;
** For the sake of variation, do not use the same server software as on web01/web02, so that you are not running e.g. nginx as both LB and webserver.&lt;br /&gt;
* You should now be able to reach your web app through lb01.&lt;br /&gt;
* haproxy has a built-in status page, apache mod_proxy can be set up to report status through mod_status, and nginx also has a status page. If you use something else, check whether it can report status in a similar way.&lt;br /&gt;
** Set up the proxy status page on lb01.&lt;br /&gt;
* Now test that your LB can handle a webserver that does not respond:&lt;br /&gt;
** At each step, follow along on the status page.&lt;br /&gt;
** Stop the webserver on web01 and make sure you can still use your app through lb01.&lt;br /&gt;
** Start the webserver on web01 and stop it on web02 - make sure you can still use your app through lb01.&lt;br /&gt;
** Start the webserver on web02 again.&lt;br /&gt;
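If haproxy is the chosen load balancer, a minimal configuration could look like this sketch (the hostnames follow the exercise; the cookie-based stickiness is one illustrative option, and the health checks are what make the failover test above work):&lt;br /&gt;

```shell
# /etc/haproxy/haproxy.cfg (fragment) - a minimal sketch, not a full config
frontend http_in
    bind *:80
    default_backend webfarm

backend webfarm
    balance roundrobin
    # "check" enables periodic health checks, so a dead node is taken
    # out of rotation automatically
    cookie SRV insert indirect nocache   # sticky sessions via a cookie
    server web01 web01:80 check cookie web01
    server web02 web02:80 check cookie web02

# Built-in status page
listen stats
    mode http
    bind *:8404
    stats enable
    stats uri /stats
```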
&lt;br /&gt;
Most reverse proxies support several load balancing methods for deciding which server should handle a given HTTP request, e.g. round-robin, by-source-IP, or least-used.&lt;br /&gt;
* Investigate which methods your reverse proxy supports.&lt;br /&gt;
&amp;lt;i&amp;gt;Be aware that if your application uses logins or sessions in some other way, a user should preferably hit the same server every time. (The exception is if you have set up clustered sessions in your application.)&amp;lt;/i&amp;gt; &lt;br /&gt;
* Investigate whether you can do sticky sessions, or whether you want to balance by IP.&lt;br /&gt;
&lt;br /&gt;
Besides your LB detecting on its own that a node is unavailable, you also want to be able to disable a webserver in the LB during maintenance windows, so that the LB does not try in vain to send traffic before the webserver is ready again.&lt;br /&gt;
* Investigate how to disable and enable a webserver in your LB pool.&lt;br /&gt;
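With haproxy this can be done over its admin socket (a sketch that assumes a stats socket with admin level is configured in the global section, and that the backend/server names match the examples above):&lt;br /&gt;

```shell
# Take web01 out of the pool before a maintenance window ...
echo "disable server webfarm/web01" | socat stdio /run/haproxy.sock

# ... and put it back when the node is ready again
echo "enable server webfarm/web01" | socat stdio /run/haproxy.sock
```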
&lt;br /&gt;
&lt;br /&gt;
{{#tag:graphviz|graph network {&lt;br /&gt;
node [fontsize=10]&lt;br /&gt;
edge [fontsize=10]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
lb01 -- web01&lt;br /&gt;
lb01 -- web02&lt;br /&gt;
web01 -- db01&lt;br /&gt;
web02 -- db01&lt;br /&gt;
&lt;br /&gt;
 }|format=&amp;quot;png&amp;quot;}}&lt;br /&gt;
&lt;br /&gt;
=Webserver synchronization=&lt;br /&gt;
&lt;br /&gt;
You now have 2 webservers running, which should preferably run exactly the same version of the web application. So you have to consider how you ensure this remains the case during upgrades, or when you need to deploy yet another server.&lt;br /&gt;
If files can be uploaded through the application, all webservers should preferably have the same view of the upload directory.&lt;br /&gt;
&lt;br /&gt;
PHP files can simply be synchronized; for other apps it may be necessary to look into config mgmt tools such as Salt, Puppet or Ansible.&lt;br /&gt;
&lt;br /&gt;
Files can be synchronized with e.g. rsync - or you can build replicated file systems with e.g. drbd/gluster - you can also store them on a separate server and mount them via NFS (but how does that affect your HA goal?)&lt;br /&gt;
&lt;br /&gt;
* What will you use to ensure your webservers are identical?&lt;br /&gt;
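An rsync-based sync could look like this sketch (the path and hostname are examples from the exercise; --delete makes web02 an exact mirror of web01, so use it with care):&lt;br /&gt;

```shell
# Push the application directory from web01 to web02 over ssh.
# -a preserves permissions/ownership/timestamps, -z compresses in
# transit, --delete removes files on web02 that no longer exist on web01.
rsync -az --delete /var/www/html/ web02:/var/www/html/
```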
&lt;br /&gt;
&lt;br /&gt;
=High Availability LB=&lt;br /&gt;
Your webservers are redundant and the LB takes care of redirecting the traffic if one crashes - but what about your LB? So now you need one more LB - but how do you ensure HA between the two? The approach to use here is a shared virtual IP that they negotiate over to decide who is primary.&lt;br /&gt;
&amp;lt;i&amp;gt;Be aware that a shared IP can cause problems if your router/fw has a long lifetime on its ARP cache!&amp;lt;/i&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Deploy a new LB node (lb02) and make sure its configuration matches lb01.&lt;br /&gt;
* Verify that you can reach your website through lb02.&lt;br /&gt;
* Set up a virtual shared IP on lb01 and lb02 with e.g. heartbeat/pacemaker or keepalived.&lt;br /&gt;
* Test that your HA works on the shared IP (shut down lb01 and test, start lb01 and shut down lb02, etc. ...)&lt;br /&gt;
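With keepalived, the shared virtual IP could be sketched like this (the VIP, interface name and router id are examples; lb02 gets the same file with state BACKUP and a lower priority):&lt;br /&gt;

```shell
# /etc/keepalived/keepalived.conf on lb01 - VRRP negotiates which node
# currently holds the virtual IP; the survivor takes over on failure.
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 100
    advert_int 1
    virtual_ipaddress {
        192.168.1.100/24
    }
}
```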
&lt;br /&gt;
{{#tag:graphviz|graph ha_lb {&lt;br /&gt;
node [fontsize=10]&lt;br /&gt;
edge [fontsize=10]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
lb01 -- web01&lt;br /&gt;
lb01 -- web02&lt;br /&gt;
lb02 -- web01&lt;br /&gt;
lb02 -- web02&lt;br /&gt;
web01 -- db01&lt;br /&gt;
web02 -- db01&lt;br /&gt;
&lt;br /&gt;
 }|format=&amp;quot;png&amp;quot;}}&lt;br /&gt;
&lt;br /&gt;
=Application tuning=&lt;br /&gt;
You are getting more traffic, and your 2 webservers can no longer keep up. Before throwing more hardware resources at the solution, you should check whether the application is properly tuned.&lt;br /&gt;
&lt;br /&gt;
* If it is a PHP application:&lt;br /&gt;
** Find out what an opcode cache is.&lt;br /&gt;
** Enable opcode caching on your webservers.&lt;br /&gt;
* If it is Java:&lt;br /&gt;
** Do your memory settings fit your environment (-Xmx / -Xms)?&lt;br /&gt;
** Does your garbage collector fit the application and the type of load?&lt;br /&gt;
* If it is a third language - check whether anything needs tuning in the runtime configuration.&lt;br /&gt;
&lt;br /&gt;
* You should also check whether your database is tuned correctly:&lt;br /&gt;
** mysql: look at e.g. innodb_buffer_pool_size&lt;br /&gt;
** postgres: look at e.g. shared_buffers &lt;br /&gt;
** investigate what fits your database type&lt;br /&gt;
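For a PHP + MySQL stack, the two knobs mentioned above could be set like this (file paths and values are illustrative; size the buffer pool to your RAM and dataset):&lt;br /&gt;

```shell
# Example php.ini fragment (path varies by distro/PHP version):
# enable the opcode cache so scripts are compiled once, not per request
opcache.enable=1
opcache.memory_consumption=128

# Example MariaDB/MySQL server config fragment - the InnoDB buffer pool
# is often sized to roughly 50-70% of RAM on a dedicated DB node
innodb_buffer_pool_size = 256M
```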
&lt;br /&gt;
* Most applications can use a caching component to store data, so that they do not have to fetch every single element from the database on each page view. Popular cache servers are memcached and redis.&lt;br /&gt;
** Investigate whether your application can use such a caching solution. Set up a cache server on your web nodes (or deploy new cache servers), and enable caching in your application.&lt;br /&gt;
** Benchmark the response time with and without cache, using e.g. [https://httpd.apache.org/docs/2.4/programs/ab.html apache ab].&lt;br /&gt;
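The benchmark step could be done with apache ab like this (the lb01 hostname and request counts are examples):&lt;br /&gt;

```shell
# 1000 requests, 10 concurrent, against the front page through the LB.
# Run once with the cache disabled and once with it enabled, and compare
# the "Time per request" lines in the output.
ab -n 1000 -c 10 http://lb01/
```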
&lt;br /&gt;
= HA Database=&lt;br /&gt;
At this point you can scale your webserver layer horizontally - but what about the database? It is still a single point of failure - so if it crashes you are still in trouble.&lt;br /&gt;
&lt;br /&gt;
* Investigate what options you have for scaling your DB.&lt;br /&gt;
** Is your solution a full replica ... or are you partitioning the data (sharding) - in which case you now depend on both being up?&lt;br /&gt;
** Is your solution a warm standby, hot standby, or active/active?&lt;br /&gt;
** Are there any rules about quorum (so that you need at least 3 DB nodes)? &lt;br /&gt;
** What does split-brain mean?&lt;br /&gt;
** Can all nodes accept writes - or do you have to do a read/write split?&lt;br /&gt;
** Does anything have to be changed manually if a node crashes?&lt;br /&gt;
* See if you can set it up with db02 and possibly db03.&lt;br /&gt;
* How do you reconfigure your web application to use the DB cluster?&lt;br /&gt;
** Do you need a database load balancer?&lt;br /&gt;
Database replication can be complicated to set up - so it is great if you can get it working - but okay if you throw in the towel.&lt;br /&gt;
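As one concrete and deliberately simple option, classic MySQL/MariaDB asynchronous primary-to-replica replication can be sketched like this - a warm standby, not active/active; hostnames and credentials are examples:&lt;br /&gt;

```shell
# On db01 (primary), the server config needs roughly:
#   server-id = 1
#   log_bin   = mysql-bin
# Create a user the replica may replicate as:
mysql -e "CREATE USER 'repl'@'%' IDENTIFIED BY 'secret'; GRANT REPLICATION SLAVE ON *.* TO 'repl'@'%';"

# On db02 (replica), with server-id = 2, point it at the primary:
mysql -e "CHANGE MASTER TO MASTER_HOST='db01', MASTER_USER='repl', MASTER_PASSWORD='secret'; START SLAVE;"
```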
&lt;br /&gt;
{{#tag:graphviz|graph ha_db {&lt;br /&gt;
node [fontsize=10]&lt;br /&gt;
edge [fontsize=10]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
lb01 -- web01&lt;br /&gt;
lb01 -- web02&lt;br /&gt;
lb02 -- web01&lt;br /&gt;
lb02 -- web02&lt;br /&gt;
&lt;br /&gt;
web01 -- db01&lt;br /&gt;
web02 -- db01&lt;br /&gt;
web01 -- db02&lt;br /&gt;
web02 -- db02&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 }|format=&amp;quot;png&amp;quot;}}&lt;br /&gt;
&lt;br /&gt;
= Static content =&lt;br /&gt;
You are now fully HA with redundancy on every element, but your webservers are having trouble keeping up. You could build one more webserver - but let us assume they serve a large amount of static content (e.g. image files) - and your webserver is not necessarily optimal for that. Some handle that kind of request better: nginx or lighttpd (apache can be used if you disable .htaccess and use a threaded MPM).&lt;br /&gt;
&lt;br /&gt;
* Create 1 or 2 nodes for static content (staticweb01 / staticweb02) and install a &amp;quot;lightweight&amp;quot; http server.&lt;br /&gt;
* Find some static content from your app that can be copied over to staticweb01/02.&lt;br /&gt;
* Investigate how your LB nodes can route requests to staticweb01/02 based on part of the URL (e.g. a /static directory).&lt;br /&gt;
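With haproxy as the example LB, URL-based routing could be a sketch like this (backend and path names follow the exercise's examples):&lt;br /&gt;

```shell
# haproxy frontend fragment: send /static/... to the lightweight
# servers, everything else to the normal webfarm.
frontend http_in
    bind *:80
    acl is_static path_beg /static
    use_backend staticfarm if is_static
    default_backend webfarm

backend staticfarm
    server staticweb01 staticweb01:80 check
    server staticweb02 staticweb02:80 check
```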
&lt;br /&gt;
This way of routing traffic based on the request can also be used if you e.g. have several different applications behind the same LB - there you would perhaps route based on the requested hostname instead of a path.&lt;br /&gt;
&lt;br /&gt;
{{#tag:graphviz|graph static_content {&lt;br /&gt;
node [fontsize=10]&lt;br /&gt;
edge [fontsize=10]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
lb01 -- web01&lt;br /&gt;
lb01 -- web02&lt;br /&gt;
lb02 -- web01&lt;br /&gt;
lb02 -- web02&lt;br /&gt;
&lt;br /&gt;
lb01 -- staticweb01&lt;br /&gt;
lb01 -- staticweb02&lt;br /&gt;
lb02 -- staticweb01&lt;br /&gt;
lb02 -- staticweb02&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
web01 -- db01&lt;br /&gt;
web02 -- db01&lt;br /&gt;
web01 -- db02&lt;br /&gt;
web02 -- db02&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 }|format=&amp;quot;png&amp;quot;}}&lt;br /&gt;
&lt;br /&gt;
= WebCache = &lt;br /&gt;
If you have a high number of page views of the same content, e.g. the front page of your site, you can offload the webservers by letting a reverse proxy serve a cached version of the page. That way you can save quite a few resources on the web and database servers. I would recommend using varnish here, but you can also look at e.g. apache mod_cache together with mod_proxy, or nginx proxy_cache.&lt;br /&gt;
&lt;br /&gt;
* Create 2 nodes, webcache01 and webcache02.&lt;br /&gt;
* Set up a caching server on both of them.&lt;br /&gt;
** The caching server must be able to pull data from web01 and web02.&lt;br /&gt;
** Reconfigure lb01/02 to fetch data from webcache01/02 instead of web01/02.&lt;br /&gt;
** Configure your cache so that it e.g. may cache the front page - but everything else must be fetched from the webserver. &lt;br /&gt;
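With varnish, the "cache only the front page" rule could be sketched in VCL like this (using web01 as a single example backend; a director spreading over web01 and web02 would be the full setup):&lt;br /&gt;

```shell
# /etc/varnish/default.vcl - sketch: cache the front page, pass
# everything else straight through to the backend.
vcl 4.0;

backend web01 {
    .host = "web01";
    .port = "80";
}

sub vcl_recv {
    if (req.url == "/") {
        return (hash);    # front page: eligible for caching
    }
    return (pass);        # everything else is never cached
}
```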
&lt;br /&gt;
&amp;lt;i&amp;gt;Now it is no longer only the lb but also your webcache nodes that must take correct session routing into account.&amp;lt;/i&amp;gt;&lt;br /&gt;
&lt;br /&gt;
{{#tag:graphviz|graph network_cache {&lt;br /&gt;
&lt;br /&gt;
node [fontsize=10]&lt;br /&gt;
edge [fontsize=10]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
lb01 -- webcache01&lt;br /&gt;
lb01 -- webcache02&lt;br /&gt;
lb02 -- webcache01&lt;br /&gt;
lb02 -- webcache02&lt;br /&gt;
&lt;br /&gt;
lb01 -- staticweb01&lt;br /&gt;
lb01 -- staticweb02&lt;br /&gt;
lb02 -- staticweb01&lt;br /&gt;
lb02 -- staticweb02&lt;br /&gt;
&lt;br /&gt;
webcache01 -- web01&lt;br /&gt;
webcache01 -- web02&lt;br /&gt;
webcache02 -- web01&lt;br /&gt;
webcache02 -- web02&lt;br /&gt;
&lt;br /&gt;
web01 -- db01&lt;br /&gt;
web02 -- db01&lt;br /&gt;
web01 -- db02&lt;br /&gt;
web02 -- db02&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 }|format=&amp;quot;png&amp;quot;}}&lt;br /&gt;
&lt;br /&gt;
= X-Forwarded-For =&lt;br /&gt;
With these layers of proxy servers, the webservers will only see the IP of the proxy server that forwarded the traffic to them. That means the webservers' log files get a wrong picture of who is requesting what. One way to handle this is to have the proxy server inject an HTTP header with the original requester IP - X-Forwarded-For is commonly used for this.&lt;br /&gt;
&lt;br /&gt;
* Investigate how to make your proxy servers pass on X-Forwarded-For.&lt;br /&gt;
* Investigate how to make your webserver take the requester IP from X-Forwarded-For.&lt;br /&gt;
* Verify that on a request, it is the client's IP that gets stored in the webserver log file.&lt;br /&gt;
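With haproxy and apache as examples, the two sides could look like this sketch (the "proxied" LogFormat name is illustrative):&lt;br /&gt;

```shell
# haproxy side: add the client IP as an X-Forwarded-For header
# (haproxy.cfg defaults/frontend fragment)
#   option forwardfor

# apache side: log the header instead of the peer address
# (httpd.conf fragment)
#   LogFormat "%{X-Forwarded-For}i %l %u %t \"%r\" %>s %b" proxied
#   CustomLog /var/log/apache2/access.log proxied
```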
&lt;br /&gt;
= WAF - extra exercise =&lt;br /&gt;
If you want to protect your website you can implement a so-called Web Application Firewall (WAF). A traditional firewall looks at IP and TCP packets - whereas a WAF inspects HTTP requests.&lt;br /&gt;
&lt;br /&gt;
The WAF is a newer concept than the other elements in your cluster - so it may be a bit hard to find good information about how to build one - and where it should be placed in the network.&lt;br /&gt;
&lt;br /&gt;
An open source WAF you can take a closer look at is https://www.modsecurity.org/&lt;br /&gt;
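On a Debian-style apache node, getting ModSecurity running could be sketched like this (the package and file paths are the Debian ones; where to place the WAF in the cluster is the part you have to decide yourself):&lt;br /&gt;

```shell
# Install the ModSecurity module for apache and activate its config
apt-get install -y libapache2-mod-security2
cp /etc/modsecurity/modsecurity.conf-recommended /etc/modsecurity/modsecurity.conf

# Switch from detection-only to actually blocking requests
sed -i 's/SecRuleEngine DetectionOnly/SecRuleEngine On/' /etc/modsecurity/modsecurity.conf
systemctl restart apache2
```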
&lt;br /&gt;
= Ready for production =&lt;br /&gt;
Your site is fully tuned and fully HA, and you can scale by deploying more nodes - but are you production-ready if you have no overview of whether your elements are running or not? And what will you do if your whole datacenter goes up in smoke?&lt;br /&gt;
&lt;br /&gt;
Monitoring&lt;br /&gt;
* Investigate open source monitoring solutions (e.g. icinga or zabbix)&lt;br /&gt;
** Try to set up a simple monitoring where you, as a minimum, do ping checks of your servers.&lt;br /&gt;
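As a bare-bones starting point before a real monitoring system, a ping check can be sketched as a shell loop (the host list is an example; replace it with your own node names):&lt;br /&gt;

```shell
#!/bin/sh
# Minimal ping sweep of the cluster - a sketch, not a monitoring system.
HOSTS="lb01 lb02 web01 web02 db01"

for h in $HOSTS; do
    if ping -c 1 -W 2 "$h" >/dev/null; then
        echo "OK   $h"
    else
        echo "DOWN $h"
    fi
done
```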
&lt;br /&gt;
Backup&lt;br /&gt;
* You can look at e.g. bacula - but you are also welcome to keep it simple.&lt;br /&gt;
** As a minimum, make an automatic backup of your database once a day.&lt;/div&gt;</summary>
		<author><name>Torben</name></author>
	</entry>
	<entry>
		<id>https://wiki.t-hoerup.dk/index.php?title=CPU_Comparison&amp;diff=12132</id>
		<title>CPU Comparison</title>
		<link rel="alternate" type="text/html" href="https://wiki.t-hoerup.dk/index.php?title=CPU_Comparison&amp;diff=12132"/>
		<updated>2019-01-12T21:10:27Z</updated>

		<summary type="html">&lt;p&gt;Torben: /* Odroid XU4/HC2 : Samsung Exynos5 Octa ARM */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The tests were run with dnetc: ./dnetc --benchmark RC5-72&lt;br /&gt;
&lt;br /&gt;
See also [[VCPU_Comparison]] for virtual machines, and [[GPU_Comparison]] for graphics cards.&lt;br /&gt;
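Each entry below was produced with the command above; to pull out just the key rate from a run, something like this works (a sketch against the output format shown in the entries below):&lt;br /&gt;

```shell
# Run the RC5-72 benchmark and keep only the measured key rate,
# e.g. "[61,213,840 keys/sec]" on the i5-7600K below.
./dnetc --benchmark RC5-72 | grep -o '\[[0-9,]* keys/sec\]'
```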
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Intel Core i5-7600K CPU @ 3.80GHz=&lt;br /&gt;
 [Jan 21 14:12:27 UTC] RC5-72: Benchmark for core #4 (YK AVX2)&lt;br /&gt;
                      0.00:00:16.14 [61,213,840 keys/sec]&lt;br /&gt;
&lt;br /&gt;
Passmark: 9136 https://www.cpubenchmark.net/cpu.php?cpu=Intel+Core+i5-7600K+%40+3.80GHz&amp;amp;id=2919&lt;br /&gt;
&lt;br /&gt;
=Intel Core i3-6100 CPU @ 3.7GHz=&lt;br /&gt;
 [Jan 22 16:21:36 UTC] RC5-72: Benchmark for core #4 (YK AVX2)&lt;br /&gt;
                      0.00:00:16.52 [59,701,011 keys/sec]&lt;br /&gt;
&lt;br /&gt;
Passmark: 5490 https://www.cpubenchmark.net/cpu.php?cpu=Intel+Core+i3-6100+%40+3.70GHz&amp;amp;id=2617&lt;br /&gt;
&lt;br /&gt;
=Intel Core i5-8400 CPU @ 2.80GHz=&lt;br /&gt;
 [Sep 29 06:29:14 UTC] Automatic processor type detection did not&lt;br /&gt;
                      recognize the processor (tag: &amp;quot;100069EA&amp;quot;)&lt;br /&gt;
 [Sep 29 06:29:14 UTC] RC5-72: using core #4 (YK AVX2).&lt;br /&gt;
 [Sep 29 06:29:33 UTC] RC5-72: Benchmark for core #4 (YK AVX2)                  &lt;br /&gt;
                      0.00:00:16.98 [57,705,817 keys/sec]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Intel Core i7-8550U CPU @ 1.80GHz=&lt;br /&gt;
 [Sep 26 07:24:26 UTC] RC5-72: Benchmark for core #4 (YK AVX2)                                                                                    &lt;br /&gt;
                      0.00:00:16.91 [57,487,911 keys/sec]&lt;br /&gt;
&lt;br /&gt;
Passmark: 8322 https://www.cpubenchmark.net/cpu.php?cpu=Intel%2BCore%2Bi7-8550U%2B%40%2B1.80GHz&amp;amp;id=3064&lt;br /&gt;
&lt;br /&gt;
=Intel Core i5-8250U CPU @ 1.60GHz (brix gb-bri5h-8250)=&lt;br /&gt;
 [Sep 29 10:11:31 UTC] Automatic processor type detection did not&lt;br /&gt;
                      recognize the processor (tag: &amp;quot;100068EA&amp;quot;)&lt;br /&gt;
 [Sep 29 10:11:31 UTC] RC5-72: using core #4 (YK AVX2).&lt;br /&gt;
 [Sep 29 10:11:50 UTC] RC5-72: Benchmark for core #4 (YK AVX2)                  &lt;br /&gt;
                      0.00:00:16.99 [54,845,778 keys/sec]&lt;br /&gt;
&lt;br /&gt;
=Intel Core i5-4460 CPU @ 3.20GHz=&lt;br /&gt;
 [Jan 17 21:18:21 UTC] RC5-72: Benchmark for core #4 (YK AVX2)&lt;br /&gt;
                      0.00:00:17.11 [41,443,462 keys/sec]&lt;br /&gt;
&lt;br /&gt;
=Intel Core i7-6600U CPU @ 2.60GHz=&lt;br /&gt;
 [Jan 17 11:03:29 UTC] RC5-72: Benchmark for core #12 (YK/RT AVX2)                                                                                                                                                     &lt;br /&gt;
                      0.00:00:16.94 [35,473,609 keys/sec]&lt;br /&gt;
&lt;br /&gt;
=Intel Xeon CPU E3-1220 v3 @ 3.10GHz=&lt;br /&gt;
 [Jan 17 21:13:38 UTC] RC5-72: Benchmark for core #12 (YK/RT AVX2)&lt;br /&gt;
                      0.00:00:16.27 [26,439,019 keys/sec]&lt;br /&gt;
&lt;br /&gt;
=Intel Core i7-3770 CPU @ 3.40GHz=&lt;br /&gt;
 [Oct 06 18:10:01 UTC] RC5-72: Benchmark for core #3 (GO 2-pipe d)&lt;br /&gt;
                      0.00:00:16.97 [14,635,885 keys/sec]&lt;br /&gt;
https://www.cpubenchmark.net/cpu.php?cpu=Intel+Core+i7-3770+%40+3.40GHz&lt;br /&gt;
&lt;br /&gt;
=Intel Core i5-6260U CPU @ 1.80GHz (NUC6i5)=&lt;br /&gt;
 [Feb 28 10:47:44 UTC] Automatic processor type detection did not&lt;br /&gt;
                      recognize the processor (tag: &amp;quot;100064E3&amp;quot;)&lt;br /&gt;
 [Feb 28 10:47:44 UTC] RC5-72: Running micro-bench to select fastest core...&lt;br /&gt;
 [Feb 28 10:48:07 UTC] RC5-72: using core #3 (GO 2-pipe d).&lt;br /&gt;
 [Feb 28 10:48:25 UTC] RC5-72: Benchmark for core #3 (GO 2-pipe d)                                                                                                                                                                           &lt;br /&gt;
                      0.00:00:16.23 [11,812,584 keys/sec]&lt;br /&gt;
&lt;br /&gt;
http://www.cpubenchmark.net/cpu.php?cpu=Intel+Core+i5-6260U+%40+1.80GHz&lt;br /&gt;
&lt;br /&gt;
=Intel Core i5-3210M @ 2.50GHz=&lt;br /&gt;
 [Nov 28 08:08:50 UTC] RC5-72: Benchmark for core #3 (GO 2-pipe d)&lt;br /&gt;
                      0.00:00:16.98 [11,403,529 keys/sec]&lt;br /&gt;
http://www.cpubenchmark.net/cpu.php?cpu=Intel+Core+i5-3210M+%40+2.50GHz&lt;br /&gt;
&lt;br /&gt;
=Intel Core 2 Duo CPU E7400 @ 2.80GHz=&lt;br /&gt;
 [Nov 28 08:20:06 UTC] RC5-72: Benchmark for core #3 (GO 2-pipe d)&lt;br /&gt;
                      0.00:00:17.04 [11,032,801 keys/sec]&lt;br /&gt;
&lt;br /&gt;
=Intel 2140 @ 2.70GHz=&lt;br /&gt;
 [Oct 10 08:46:27 UTC] RC5-72: Benchmark for core #3 (GO 2-pipe d)&lt;br /&gt;
                      0.00:00:17.62 [10,494,361 keys/sec]&lt;br /&gt;
&lt;br /&gt;
=AMD Turion II Neo N54L 2.2GHz=&lt;br /&gt;
 [Feb 05 21:58:19 UTC] RC5-72: Benchmark for core #11 (GO 2-pipe b)                &lt;br /&gt;
                      0.00:00:17.08 [9,790,931 keys/sec]&lt;br /&gt;
&lt;br /&gt;
=Intel Core i5-3427U 1.8GHz @ ~2.3GHz (NUC)=&lt;br /&gt;
 [Apr 15 19:21:53 UTC] RC5-72: Benchmark for core #11 (GO 2-pipe b)&lt;br /&gt;
                      0.00:00:17.30 [9,787,326 keys/sec]&lt;br /&gt;
&lt;br /&gt;
= Intel Xeon E5410 2.33GHz=&lt;br /&gt;
 [Feb 06 07:29:58 UTC] RC5-72: Benchmark for core #11 (GO 2-pipe b)&lt;br /&gt;
                      0.00:00:16.05 [8,716,320 keys/sec]&lt;br /&gt;
					  &lt;br /&gt;
=AMD Turion II Neo N40L=&lt;br /&gt;
 [Nov 27 19:56:07 UTC] RC5-72: Benchmark for core #6 (GO 2-pipe)                &lt;br /&gt;
                      0.00:00:16.84 [6,305,492 keys/sec]&lt;br /&gt;
					  &lt;br /&gt;
=AMD e-350=&lt;br /&gt;
 [Nov 26 12:29:37 UTC] RC5-72: Benchmark for core #11 (GO 2-pipe b)&lt;br /&gt;
                      0.00:00:16.97 [4,660,745 keys/sec]&lt;br /&gt;
&lt;br /&gt;
=Odroid XU4/HC2 : Samsung Exynos5 Octa ARM =&lt;br /&gt;
 [Jan 12 21:07:35 UTC] Automatic processor type detection found&lt;br /&gt;
                      an ARM Cortex-A15 processor.&lt;br /&gt;
 [Jan 12 21:07:35 UTC] RC5-72: using core #2 (XScale 1-pipe).&lt;br /&gt;
 [Jan 12 21:07:53 UTC] RC5-72: Benchmark for core #2 (XScale 1-pipe)            &lt;br /&gt;
                      0.00:00:16.16 [3,457,678 keys/sec]&lt;br /&gt;
&lt;br /&gt;
=Intel Atom 330 + Ion=&lt;br /&gt;
 [Nov 26 12:29:55 UTC] RC5-72: Benchmark for core #6 (GO 2-pipe)&lt;br /&gt;
                      0.00:00:16.64 [3,199,913 keys/sec]&lt;br /&gt;
&lt;br /&gt;
=Intel Atom 270=&lt;br /&gt;
 [Nov 27 20:01:28 UTC] RC5-72: Benchmark for core #6 (GO 2-pipe)                &lt;br /&gt;
                      0.00:00:17.05 [3,123,175 keys/sec] &lt;br /&gt;
&lt;br /&gt;
= Raspberry Pi 2 B, Broadcom BCM2836 =&lt;br /&gt;
 [Feb 23 21:24:02 UTC] RC5-72: Benchmark for core #2 (XScale 1-pipe)                                                                                                                                                                         &lt;br /&gt;
                      0.00:00:16.07 [1,150,992 keys/sec]&lt;br /&gt;
&lt;br /&gt;
=ARM1176JZF-S 700 MHz (Raspberry Pi, Broadcom BCM2835)=&lt;br /&gt;
 [Jan 11 23:24:47 UTC] RC5-72: Benchmark for core #2 (XScale 1-pipe)&lt;br /&gt;
                      0.00:00:17.43 [775,956 keys/sec]&lt;br /&gt;
&lt;br /&gt;
=Infrant Technologies, Inc. (Netgear ReadyNAS duo v.1)=&lt;br /&gt;
 [Jun 19 07:24:07 UTC] RC5-72: Benchmark for core #4 (AnBe 1-pipe)&lt;br /&gt;
                      0.00:00:16.53 [269,082 keys/sec]&lt;br /&gt;
&lt;br /&gt;
=External References=&lt;br /&gt;
http://cgi.distributed.net/speed/ - Distributed.net Client Speed Comparisons&lt;br /&gt;
&lt;br /&gt;
http://www.distributed.net/Download_clients#linux - Download Dnetc client.&lt;/div&gt;</summary>
		<author><name>Torben</name></author>
	</entry>
</feed>